I was recently in Naples for a conference, a marvelous city that I like to think of as “the Italy of Italy.” In one of its countless souvenir shops, most of them filled with the usual, horrendous clichés, I found an object so unspeakably ugly that it became, for that very reason, wonderful. I bought it immediately.

If this little artifact had been made by an AI (which I doubt), it would have instantly been branded with the infamous label AI slop. Why has the discovery that we can produce junk with machines made us blind to the fact that humanity has never done much better? As Alberto Puliafito wrote in Slow News, we have never produced one masterpiece after another. For the most part, we have made mediocre things, and we always will.

The phrase “AI slop” has spread like wildfire through the art-theory establishment; it describes the dull, uncanny sameness of algorithmic imagery. The implication is that artificial intelligence, by its very nature, produces a kind of cultural residue, content without culture.

But this assumption collapses the moment we look at history. The majority of human production has always been slop. Mediocrity is not a bug of technology; it is the baseline of culture. The canon of art we revere today – those few thousand works in museums and textbooks – is the surviving tip of an immense iceberg of forgotten, derivative, or simply boring creations.

An artistic medium often begins in scarcity and ends in abundance. When a tool becomes accessible, it multiplies not only creativity but also repetition. Oil painting, once the privilege of guilds and courts, flooded the world with saints and still lifes as soon as pigments became cheaper. The printing press multiplied pamphlets, gossip, and devotional kitsch alongside poetry. Photography industrialized portraiture and bad taste. The internet flooded us with images and words of appalling banality.

The same pattern repeats today. To say that “AI produces only slop” is a simple statistical error. Yes, most outputs are mediocre; but that is because most human ideas are mediocre, and most uses of technology are unimaginative. The problem is not the machine but our expectations of it, our eagerness to see in it either a monster or a miracle instead of a mirror.

The word kitsch appeared in Munich in the 1860s as slang among art dealers for sentimental, low-cost paintings made for the expanding middle class. It derives from the German verkitschen, meaning to make something cheap or to sell it off hastily. The term acquired critical weight only in the twentieth century, through writers such as Hermann Broch in 1933 and Clement Greenberg in 1939. The sensibility it denotes, however, is far older. Every era has produced its own forms of aesthetic excess, its own repetitions of feeling and form. In this light, kitsch is less an aberration of culture than one of its constant expressions.

Art, in any age, is a matter of selection. The quantity of rubbish says nothing about the potential for excellence – if anything, it confirms the very meaning of the word excellence, which implies its rarity. What we call “AI slop” is simply the visible surplus of a process that has always accompanied art whenever tools become widely available: a vast field of failures through which masterpieces occasionally bloom.

If we scrape away the idealized varnish of art history, what we find underneath is a landscape overflowing with repetition. The Romans, for instance, filled their villas and baths with marble copies of Greek statues. Entire workshops specialized in churning out Aphrodites and Apollos with standardized features. Their purpose was ornamental, a kind of ancient stock imagery carved in stone.

Centuries later, ex-voto paintings multiplied across Europe and Latin America. Small panels depicting shipwrecks, accidents, and miraculous recoveries followed an almost algorithmic formula: a scene of peril, divine intervention, and a handwritten note of thanks. Each image was unique only in its inscription, the rest assembled from a common visual template.

The same logic drove the souvenir economy of the eighteenth and nineteenth centuries. The Grand Tour produced an entire industry of vedute: idyllic views of Venice or Rome, endlessly repeated for wealthy travelers. Few of these artists were Canaletto; most were competent copyists catering to demand. Like today’s Instagram feeds, they offered a vision of the world already mediated by expectation.

Salon painting in nineteenth-century Paris rewarded technical perfection and moral clarity over experimentation. Thousands of canvases, impeccably executed and instantly forgettable, celebrated history, myth, and polished skin. At the same time, Victorian chromolithographs brought sentimental images of children, angels, and pets into every middle-class home. The new bourgeois audience wanted art that soothed, and the market obliged.

Then came photography, which democratized representation and, as Baudelaire warned, also industrialized bad taste: “If photography is allowed to supplement art in some of its functions, it will soon supplant or corrupt it altogether, thanks to the stupidity of the multitude.” The camera turned out to be a mirror that everyone could hold, and the result was an ocean of mediocrity punctuated by a few islands of genius. From there to the stock image, the mall painting, and the motivational poster, the line is perfectly straight.

Even the digital age has its pre-AI forms of slop: clip-art figures shaking hands, banner ads blinking in neon loops, WordArt titles proudly curving across PowerPoint slides. Each was a small victory of accessibility and a large defeat of sensibility. But it would be absurd to claim that these tools prevented creativity; they simply provided the visual vernacular of their time.

“Mass-produced culture has a long, messy history,” writes Deni Ellis Béchard in Scientific American. “Some of that sediment has incubated new artforms, and trash and treasure have appeared in the same stream”. Human slop, in short, is the compost from which rare blooms grow. Without the mass of conventional forms, there would be no rupture, no surprise, no recognition of the exceptional. The machine, once again, is only joining a very old tradition.

H. Lossin’s essay “Value In, Garbage Out” offers one of the sharpest Marxian readings of AI art to date. It argues that generative systems inherit not only data but also ideology: that their “training sets” reproduce the hierarchies of class, race, and power embedded in the capitalist structures that produce them. In this view, what comes out of the machine is already conditioned by hegemony, the cultural equivalent of “you get out what you put in.”

This is true, but incomplete. Every archive, every canon, every museum collection is already a biased dataset. The noise inside the model is often the noise inside our culture. To pretend otherwise is to indulge in a comforting fiction: that human creativity operates from a clean slate, while the algorithm works from contamination.

Lossin is right to expose the political economy of AI: the extractive infrastructures, the opaque ownership of data, the labor behind the tool. Yet her argument risks moralizing the technology. Bias, repetition, imitation, and bad taste are the ordinary materials of art history. The garbage that circulates through a neural network is simply a digitized continuation of the same cultural residue that once filled Roman workshops, Baroque ateliers, and Parisian Salons.

If anything, AI makes visible the statistical average of human vision. In this sense, generative models are not corruptions of the creative process but its X-ray. They reveal how much of what we call “originality” is built upon conventions, trends, and accidents of style. The mirror may be uncomfortable, but it is accurate.

Lossin’s argument mirrors a familiar double standard in how criticism judges new media. When human artists reproduce clichés, we call it convention; when a machine does the same, we call it corruption. This moral asymmetry, I suspect, has less to do with aesthetics than with anxiety.

As media scholar Kirsten Drotner has shown, every technological turn produces its own media panic: a moral eruption that recasts old cultural fears in the language of the new device. From the printing press to television, from video games to the internet, each wave of mediation has been accused of diluting creativity and destroying attention. The pattern is ritual: first alarm, then adaptation, then absorption into the cultural norm. AI has simply become the latest stage for this cycle of anxiety.

I could name many valuable contemporary artists who do not reduce their relationship with AI to a mere critique of the medium, although that kind of reflection remains necessary. Institutions such as Fellowship.xyz, a global gallery dedicated to artists working between art and technology, support creators who use AI not only to question the tool but to expand their own language and poetics.

Yet because taste is ultimately subjective, naming examples would be useless. Those who defend the idea of “AI slop” will always find it easy to dismiss any artist as worthless. And still, even to write this very article (which some might also consider worthless) I used a generative AI to help me with my English, which I speak but which is not my native language. Does that make this text slop simply because I used a machine?

It seems that we are often willing to see only the negative aspects of these tools, ignoring the ways we already inhabit them. Academic and scientific research, the diffusion of knowledge, and the production of intellectual material, when guided by awareness and care, are all being strengthened by AI. These are general-purpose technologies, like electricity, the internet, or social networks. To assume they can have only negative effects because they are currently controlled by economic monopolies (which is true, and indeed the main issue) is to fall into a kind of cultural myopia. That blindness will inevitably fade as AI use becomes widespread across every field. In fact, it already has.

In my opinion, artificial intelligence should be open, transparent, and collectively owned; its code inspectable, its architectures adaptable, and its tools accessible to everyone rather than enclosed by a few corporate monopolies. Only through open systems can AI become a space for public knowledge and creative autonomy. Yet to deny its cognitive and cultural reach, even in its current imperfect forms, is to overlook what it already reveals about us. If there is an ethical horizon beyond this debate on mediocrity, it lies in the question of access and governance. I agree with Stephanie Dinkins when she writes that “Instead of desperately fighting to hold on to familiar methods, occupational relationships, claims to intellectual property, and personal data, we must adapt our minds and legal frameworks with an eye toward learning to surf and shift the advantages of the exponential change smart technologies usher in while holding the tech sector and policymakers accountable for creating AI that centers societal care and generosity”.

Kate Crawford’s essay “Eating the Future” describes AI as a metabolic system that consumes the world in order to reproduce it. According to her, generative models feed on vast quantities of data and then recycle their own waste, producing an endless loop of degraded content, what she calls “AI slop.” It is a vivid image: a machine feeding on its own excrement until meaning itself collapses.

The metaphor works, but again, it is not new. Every mass medium has been accused of devouring the very culture it creates. Television, radio, magazines, and social media have all been described in similar terms. The history of communication is, in fact, a history of digestion. Society has always eaten and re-eaten its own slop.

Crawford’s ecological and infrastructural concerns are entirely legitimate. The environmental cost of AI, its energy hunger, and its industrial dependence on rare materials are urgent issues. Yet almost all of our technologies rely on similar systems of exploitation and pollution, often on a far greater scale, and we rarely apply the same moral scrutiny to them. I once heard an artist at a conference justify the use of AI for making art by saying that he used it to criticize the medium and that his generative runs did not pollute much. Yet to attend that same conference he had flown across the ocean, releasing more carbon into the atmosphere than my lifetime of AI usage, and I use it a lot. A high-end 3D rendering, a few hours of streaming television, or a gaming session can consume as much energy as a complex generative run, and sometimes more. What I find less convincing, therefore, is the sense that this cycle is unprecedented; the double standard is pretty evident.

If AI seems to recycle mediocrity faster than ever before, it is because it operates in a world that already does. Our economies, our media, and our education systems are all structured around overproduction. AI only accelerates what we have long set in motion. The same pattern has accompanied every communicative revolution; the printing press, photography, and later the internet all triggered identical anxieties, and sometimes they still do. Yet literature has not turned into slop because the web allows anyone to publish, nor has visual art died because photography has flooded our screens with endless, horrible images (poor Baudelaire: in this at least he was right).

And yet, within that amplification, there is also the possibility of intelligence. The same infrastructure that floods us with synthetic kitsch also enables research, translation, accessibility, and creative experimentation. The same models that generate oceans of noise are used to cure diseases, design materials, and assist writing and art.

Crawford’s concept of a “metabolic rift” usefully frames AI as part of a wider system of extraction, consumption, and waste. Yet her argument also leans on the emerging notion of Model Autophagy Disorder, the idea that generative systems will eventually collapse as they ingest too much of their own output. This phenomenon has been demonstrated in controlled environments (Shumailov et al. 2023; Alemohammad et al. 2023), but remains largely a laboratory scenario.

More recent studies indicate that model collapse is not an inevitable consequence of recursive training. Gillman et al. (2024) show that introducing a self-correction mechanism—a formal analogue of human curation that maps generated samples toward the true data distribution—makes self-consuming generative loops exponentially more stable. In both theoretical and empirical settings, they demonstrate that even simple correction or selection procedures prevent degradation across iterations, allowing models to remain coherent even when trained on predominantly synthetic data. These findings suggest that what laboratory scenarios describe as Model Autophagy Disorder arises mainly in the absence of any corrective feedback: when human or automated curation is present, collapse becomes a contingent, not structural, risk.

The problem, therefore, is not metabolic by nature but infrastructural and political: who owns the data, who controls the flow, and to what ends. Crawford’s vision of an AI ecological breakdown is persuasive as critique, but less convincing as prophecy.

When I think again of that Maradona souvenir I bought in Naples, I realize that it contains the whole argument: if “AI slop” provokes discomfort, perhaps it is because it mirrors the collective and unfiltered texture of what we are. We have always lived surrounded by mediocre things; the abundance of the trivial is not a catastrophe but both the residue and the raw material of culture. Making culture has always meant recycling its own leftovers, and in the best possible sense.

Blaming AI for our own mediocrity is another way of denying its continuity with us. The machine has simply made our habits explicit and given visible form to the clichés we already cherished. What matters now is not defending the sanctity of the human but cultivating the discernment to tell when something deserves our attention.

It is worth recalling that the artists now known as the Impressionists were met at their first group exhibition in April 1874 with derision and ridicule rather than acclaim. The critic Louis Leroy mockingly described one painting as “a preliminary drawing for a wallpaper pattern” and dubbed the event the “Exhibition of the Impressionists”. They were the sloppers of their time. As Béchard notes, “‘Slop’ helps us when used correctly. Calling everything worthless is a misguided attempt to dam the flood rather than channel it.”

Art looks less like a cathedral and more like a market stall: crowded, noisy, full of cheap miracles. Among them, now and then, a small image will shine brighter than the rest, and we will call it beauty. The rest is slop, human slop.