
It comes in a variety of forms, from the smooth and the surreal to the truly, unbelievably ugly, but you know it when you see it: an old woman celebrating her 121st birthday. A farm boy posing with a horse he made out of garlic. A Dubai-ified version of Gaza, featuring Donald Trump, Elon Musk and Benjamin Netanyahu relaxing on the beach.
It’s hard to spend time on any platform in 2025 and not come across what is commonly referred to as “AI slop”. Its ubiquity is in part thanks to platforms pushing this type of content. But also, at times, because it’s popular – users marvel at the ability to churn out visibly fake images that appeal to (some) people’s baser instincts.
It is not enough simply to forgo the AI tidal wave – even if you have no taste for slop. A recent study published on arXiv, Cornell University’s preprint server, found that the internet has become choked by all kinds of AI-generated content, far beyond just the low-grade images we see on social media.
The study analysed hundreds of millions of written documents online – including consumer feedback, corporate marketing communications from major organisations such as the United Nations, and job postings on LinkedIn – to see how often AI-generated text appeared in at least some part of them. It found that, following ChatGPT’s release in November 2022, the volume of content featuring AI-generated text increased tenfold. For some types of content, such as press releases, it accounted for nearly a quarter of the total.
AI-generated text is typically derivative and repetitive, and that becomes more apparent the more often you see it. (Amid the recent rise of students using AI chatbots to write their university essays, professors have noticed greater use of words such as “delve”, “deep-dive” and “in-depth”.) One example of ChatGPT’s use in marketing is this set of “innovative” lines written for an ice cream shop announcing new flavours on its menu: “Attention ice cream lovers!”, “New month, new ice cream flavours!” and “Satisfy your sweet tooth with these new ice cream flavours.”
We are right to feel depressed at the ugliness and dangers of AI imagery – and may think, by comparison, that outsourcing corporate marketing-speak to ChatGPT is far less of a threat to our society than the deluge of hideous content beamed into our brains daily. But the pervasive use of generative AI for everything – even minutiae such as consumer complaints to brands – tells us something more concerning than that the UN is phoning in its press releases. It shows a lack of creativity and intellectual rigour, and perhaps most of all a laziness, that may lead not just to a homogenisation of art but also a homogenisation of culture.
What happens to our creative imaginations when most of what we see is both bad and the same? There is a pessimism toward generative AI that suggests we have reached the outer limits of what we can achieve – that only robots can advance our content from here. In reality, generative AI leaves little room for anything new, as it continues to pull from what already exists.
As the Cornell study lays out, this effect can already be discerned – and often – on the internet. It won’t just be that most images we see are AI-generated, but that large amounts of the text we come across are underpinned by the same narrow language. Newspaper articles, marketing campaigns and the restaurant reviews we see on Google Maps: these may all be drawn from the same trough of bland, limited data, which could lead to a loss of variety in what we encounter. (This is before we even consider how often these generative AI tools get information wrong, and that false ideas may be woven into this repetitive, uniform text.)
The knock-on effect isn’t just dullness and homogeneity: it’s a narrowing of our intellectual and creative horizons. As one of the co-authors of the Cornell study put it: “I think [generative AI] is somehow constraining the creativity of humans.” The future of human creativity will not live or die on an AI-generated LinkedIn post, but this trend will have a cumulative impact on our imagination if we are only ever given more of the same. It compounds when we outsource even our most basic thinking to this inconsistent technology. If we need AI tools to summarise four-sentence emails, what happens to technological advancement, new ideas and our drive to create when we no longer want to expand our minds beyond our monocultural surroundings?
There may well be a pushback against the AI slop takeover. As we have already seen in the arts, the corporate world may also experience a backlash against AI-generated text becoming a key part of online communication (though corporate incentives make this seem substantially less likely). But rather than seeing these changes as merely making our experience online monotonous and repellent, we should recognise that they signal much greater harm than a bunch of similar-sounding press releases. They warn of a more boring future – one whose only distinctive quality is its extreme, myopic limitations.