Can machines be more ‘truthful’ than humans?
It has been said that truth is singular while lies are plural, giving disinformation an unfair numerical advantage. But is truth really singular?
Take the stories of our own lives. Which is the most truthful version? The official account contained in our CV or LinkedIn profile? Or the one we tell ourselves? Or the ones our friends and family tell about us behind our backs? All of them can be simultaneously true — or misleading.
The idea that multiple truths can be drawn from the same material is radiantly explored in the film Eno, which I saw last week. Based on the life of the multitalented music producer Brian Eno, the documentary is auto-generated by a machine and varies every time it is shown.
According to the film’s makers, there are 52 quintillion possible versions of it, which could make “a really big box set”. This artistic experiment tells us much about the nature of creativity and the plurality of truth in the age of generative media.
To make the film, the producer Gary Hustwit and the creative technologist Brendan Dawes digitised more than 500 hours of Eno's video footage, interviews and recordings. From this archive, spanning 50 years of Eno's creative output working with artists including Talking Heads, David Bowie and U2, two editors created 100 scenes. The filmmakers wrote software that generates introductory and concluding scenes with Eno and outlines a rough three-act structure. They then let the software loose on this digital archive, splicing together different scenes and recordings to generate a 90-minute film.
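Anamorph's software is proprietary and the article does not describe its internals, but a minimal sketch of the general idea, with invented scene names, durations and constraints, might look like this:

```python
import random

# Purely illustrative sketch: Anamorph's real software is proprietary, and the
# scene names, durations and constraints below are invented for this example.
TARGET_MINUTES = 90

# Pretend archive: 100 editor-cut scenes, each with a rough length in minutes.
scenes = {f"scene_{i:03d}": random.uniform(2, 8) for i in range(100)}

def generate_cut(seed: int) -> list[str]:
    """Assemble one cut of roughly TARGET_MINUTES from the scene pool."""
    rng = random.Random(seed)
    pool = list(scenes)
    rng.shuffle(pool)

    cut, running_time = ["generated_intro_with_eno"], 3.0  # assume a 3-minute intro
    for scene in pool:
        if running_time + scenes[scene] > TARGET_MINUTES - 3:  # leave room for the outro
            continue
        cut.append(scene)
        running_time += scenes[scene]
    cut.append("generated_outro_with_eno")
    return cut

# Each seed shuffles the pool differently, so each screening is a different film.
print(generate_cut(seed=1)[:5])
print(generate_cut(seed=2)[:5])
```

Even this toy version hints at where figures like 52 quintillion come from: selecting and ordering just a dozen scenes from a pool of 100 already yields more than 10^23 possible sequences, comfortably beyond the 5.2 × 10^19 that 52 quintillion represents.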
Critics generally found the film — or films — to be quirky and compelling, just like Eno himself. It can seem a little random, Dawes tells me, but audiences have still been able to absorb the ingredients and construct a narrative in their heads. Only, “the audience does the cooking,” he adds.
The version I saw was a mesmerising mix of interviews and recordings with some jagged juxtapositions but a clear narrative arc. I was particularly intrigued by one section in which Eno talked about the concept of "scenius". Eno has long resisted the idea that creativity is the output of one lone genius; rather, he sees it as the product of collective societal intelligence, or scenius. "The film is the embodiment of this idea of scenius," says Dawes.
Hustwit and Dawes have now launched a company called Anamorph to apply their generative software to other types of content. Hollywood studios, advertising agencies and sporting franchises are target customers. But Dawes stresses they are using their own proprietary software to reimagine existing human-generated content in unique ways; they are not using generative AI models, such as OpenAI’s GPT-4, to generate alternative content.
The increasingly widespread use of generative AI models, however, raises other questions about truthfulness. Much has been written about how buggy models can generate non-truths and “hallucinate” facts. That is a big drawback if a user wants to generate a legal brief. But it can be a bonus feature in creating fictional content.
To investigate how good generative AI is at doing so, Nina Beguš, a researcher at the University of California, Berkeley, commissioned 250 human writers in 2019 and 80 generative AI models last year to write short stories based on identical prompts. The challenge was to reimagine the Pygmalion myth, in which a human creates an artificial human and falls in love with it.
Beguš tells me that she was surprised that the machine-generated content was more formulaic and less imaginative in structure, yet also more woke. Rather than reinforcing some societal stereotypes, the models seemed to challenge them. In their stories, more of the creators were women and more of the relationships were same-sex, for example.
She suspects that those outputs reflect the way in which the models have been fine-tuned by human programmers, although it is hard to know for sure because the models are opaque. But she suggests we have now reached a "new frontier" in writing in which human and non-human generated content have in effect merged.
That raises worries about how dominant US AI companies are encoding the values of this new frontier, which may jar with other societies. “Will these hegemonic models and cultures take over everywhere?” she asks.
Whether machines enhance or degrade truthfulness, and what human values they reflect, depends critically on how they are designed, trained and used. We had better pay close attention.