Quibi was a streaming service that launched in April 2020 and shut down in October of the same year. As those dates suggest, it was not a success. Not even the pandemic, which in those first months compelled hundreds of millions of people isolated at home to consume content via the internet, allowed Quibi to establish itself. Today, the product is remembered as a bizarre (and very expensive) experiment, proof that money alone is not enough to guarantee the success of anything.
The app had many things wrong with it, but one in particular drew immediate criticism: Quibi blocked screenshots. The service was mobile first, designed to be used on a smartphone, yet it would not let users capture what they were watching. Various kinds of services prevent screenshots for good reason: banking apps, for example. But a streaming app? It didn't make much sense. “Breaking through to a large audience takes more than just having a good show,” an article on The Verge said at the time. “It has to be shareable, too.” And the screenshot is the basic unit of online sharing.
The diffusion of small elements
Think of the debate over who was cuter: Baby Yoda or Baby Groot? Millions of people followed these characters' antics in The Mandalorian and the Guardians of the Galaxy films respectively, and many of them promptly took a screenshot and posted it on social media or sent it on WhatsApp to friends who would appreciate it.
The culture of the 21st century is also built on this, on the diffusion of small elements through social circles of all sizes. But screenshots are not enough. Or rather, they are only one stage of the process. To really take off, these images must become memes. The concept of the meme originated in evolutionary biology, but within a few years of the advent and triumph of the internet it became a passe-partout term for any piece of content capable of duplicating itself, mutating and combining with other content, becoming part of the digital conversation in a completely spontaneous manner.
Memes are not fabricated. They are not planned, nor are they sown. On the contrary, attempting to do so is a grave misstep and a harbinger of cringe, the acute second-hand embarrassment felt by those watching. In technical terms, these attempts are called forced memes, and they never get very far.
Memes are an essential element of our culture. They can determine, in part, the success of a product, a TV series, a brand, a political candidate. Yet for them to be truly effective and resilient, they must not be controllable in any way. This ambivalent nature worries the marketing and advertising departments of many companies, which are forced to play by the perverse rules of 21st-century digital culture, hoping that the memes will not explode in their hands.
“Real” vs “AI-generated”
Even recent developments in artificial intelligence have been swept up by the meme machine. We are talking about services such as Midjourney, DALL-E and other AI programs that generate images from written descriptions, or prompts. The resulting images, surprising in their ability to look real or to evoke famous paintings and photographs, as the case may be, have fuelled an ongoing debate on Twitter and Instagram about the relationship between artist and machine, between “real” and “generated” art. As usual, the doom-mongers and the advocates clash, divided by their views on the weight this sophisticated technology will carry in artistic creation.
On the one hand “real” art, on the other “fake” art? Not really: many creators have already started to use these programs as assistants, to speed up certain processes or to get instant drafts of ideas to be developed later in more traditional ways. Such a clear division does not seem to exist. In the meantime, however, “AI Art” has already become a meme. It spread on Twitter by the same logic that made Nyan Cat and other memes successful, generating discussion ranging from the academic to the facetious. Some Silicon Valley entrepreneurs, for example, are already convinced that these services produce “real” art, and they tend to overvalue the resulting works, which they proudly share on Twitter, where other users satirize them. It's a polemical memetic cycle with which we are familiar, and which seems bound to repeat itself even as the boundary between human and non-human, between real and unreal, becomes blurred.
Behind the network
The risk is that we will find ourselves in a not-so-distant future in which the web and social networks are full of generated images and texts, suspended between the credible and the ridiculous (or the disturbing), and human users, those left behind, can only maintain a posture of scepticism, never sure whether they are interacting with “real” content.
In addition to Midjourney and the aforementioned AIs, think of GPT-3, a language model based on neural networks that is already capable of generating good-quality human-like text. The consequences this could have on a cultural, social and political level are enormous, and go far beyond the age-old problem of fake news, but perhaps we can prepare for them by abandoning the true/false dichotomy. After all, according to some studies, only 60 per cent of today's internet traffic is generated by human activity; the rest is a riot of bots, small programs created to carry out automated tasks as they roam the web. We are used to thinking that every bot is malicious and harmful, but in fact these pieces of software are the network itself: they keep it standing as much as we humans do. We may regard them as “false” or “fake”, but we have to accept that they exist, and in that sense they are “true”.