Spam vs Slop
Recently, a journalist reached out to request insights into AI Slop for a myth-busting article. Many of my points made it to the published article, but some didn't. Here's my full, impassioned response.
Before AI slop, we simply called it spam. As long as there has been money to be made on the internet, there have been people gaming the system to cash in. In February 2023, Google Search advised that AI-assisted plagiarism is still plagiarism. "No AI-author bylines, please." It must have blindsided the team when Google CEO Sundar Pichai told WSJ that Google's now an AI company.
In May 2023, a leaked internal Google memo admitted the company "had no moat" against AI. Instead of fighting the tide, Google decided to build the ship: it began offering its own AI products and pursuing AI dominance as a primary revenue stream. By September 2023, Google Search's Helpful Content Guidance had removed the qualification "written by people". (pre/post)
Outnumbered
Companies are okay with AI-generated content when they also offer content-generating AI. Truthfulness doesn't matter in the attention economy; engagement does. New research from the firm Graphite found that about half of all internet articles are AI-generated.
The leap in content volume between 2022 and 2023 is roughly the size of 2010 through 2014 combined. And while Google's spam-filtering systems were advancing, LLMs were still falling for white text on a white background.
There’s a payout for consuming all this spam. This unprecedented access to data is big tech’s dream. People are granting AI access to personal emails, calendars, photos, and more. Data has been worth more than oil since 2016. Larry Ellison, cofounder of Oracle and potential new owner of TikTok, said, “Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.” There is a very real likelihood of this unfettered access turning into surveillance.
And all of this is because we have to exist online.
MFA vs slop
The stakes for listicles and other made-for-advertising (MFA) content were fundamentally lower. You wasted the user's time, making them dive a dozen clicks deep to find out what the Buffy cast looks like now. (Spoilers: Spike is still hot.)
Myriam Jessier reframes why AI slop is different:
"The real risk is that as automated content gets slicker, the dividing line between slop and “good enough” content blurs, leaving audiences detached from reality."
Generative AI tools like Sora2 are imprints of power. These tools can be, and are, used to reinforce their creators' agendas. There are restrictions on which topics and which people can be represented in the videos. We're already seeing Sora2 videos used to escalate domestic political tensions. Google is blocking AI-generated answers for searches about the president and dementia. In May 2023, the dangerous potential of generative AI images caught shareholder attention when a faked image of an attack on the Pentagon went viral and caused a brief but significant dent in the S&P 500.
Generative AI in all its forms operates on the assumption of consent. Copyright holders have to opt out of Sora2 character by character. Content creators, especially women, should be concerned. According to a 2023 study, deepfake pornography makes up 98% of all deepfake videos online, and 99% of nonconsenting targets are women. Sora2 is already being used to make nonconsensual fetish content.
Profit vs people
These companies do not care, because the slop turns a profit. They are seemingly willfully ignorant that these models act as systems of power, shaping what we know and how we act. They mirror the worst tropes, trends, and tendencies humanity has to offer.
We need to differentiate mere AI slop from propaganda, exploitation, and the dilution of human connection.
Tim Berners-Lee, the computer scientist best known as the inventor of the World Wide Web, wrote a poignant plea after reflecting on his creation's contribution to humanity:
Today, I look at my invention and I am forced to ask: is the web still free today? No, not all of it. We see a handful of large platforms harvesting users’ private data to share with commercial brokers or even repressive governments. We see ubiquitous algorithms that are addictive by design and damaging to our teenagers’ mental health. Trading personal data for use certainly does not fit with my vision for a free web.
On many platforms, we are no longer the customers, but instead have become the product. Our data, even if anonymised, is sold on to actors we never intended it to reach, who can then target us with content and advertising. This includes deliberately harmful content that leads to real-world violence, spreads misinformation, wreaks havoc on our psychological wellbeing and seeks to undermine social cohesion.
We have to exist online, and right now that gives these companies the right to harvest us. We are vulnerable to the illusory truth effect. When we see something repeated enough, we believe it's true. We are being flooded with dark PR and propaganda. Human connection and collaboration were the inspiration for making a free World Wide Web.
Content creators and publishers need to care because faith and trust in the human connections found online are what make the internet work. Tim Berners-Lee puts it best, "We can re-empower individuals, and take the web back. It’s not too late."
Published on 1/2/2026 by Jamie Indigo