
'Slop': Merriam-Webster's Word About AI's Quality Crisis

Merriam-Webster chose 'slop' as 2025's Word of the Year. AI generates low-quality spam while platforms profit. Why the incentive structure refuses to stop it.

Amelia Sanchez · Mar 10, 2026 · 6 min read
Key Takeaways
  • Merriam-Webster chose "slop" as 2025's Word of the Year—defined as low-quality, AI-generated content churned out by content farms with one objective: extract ad spend.
  • The slop epidemic is quantifiable: 717% increase in AI spam sites since GenAI went mainstream, with over 100,000 live as of May 2025, and roughly 10,000 new junk sites appearing monthly.
  • Advertisers are hemorrhaging money: 143.5 billion impressions of low-quality AI supply hit the bid-stream in January 2025 alone, with 25–30% of open-web spend landing in wasteful or fraudulent environments.
  • Publishers absorbed the damage: Business Insider lost 55% of organic search traffic (April 2022–April 2025), HuffPost lost 50%, and The New York Times saw search referral share decline from 44% to 37%.
  • The self-awareness irony: AI systems that generate slop are also aware of its quality signals—yet economic structures incentivize its creation anyway.

When the Machine Admits Its Mess

Merriam-Webster's 2025 Word of the Year is "slop"—low-quality AI-generated content, mass-produced by farms built to extract ad spend, not to inform anyone.

There's an irony sharp enough to cut: Merriam-Webster's Word of the Year for 2025 is "slop."

Not "AI." Not "synthetic." Not "automation" or "disruption." Slop. The word connotes swill, watery leftovers, low-quality mush. It's what you throw to pigs—or in this case, it's what systems throw at you.

The definition Merriam-Webster settled on: low-quality, AI-generated content produced by farms with a single, precise objective. Not to inform. Not to entertain. Not even to deceive convincingly. The goal is simpler and more destructive: extract ad spend from advertisers who'll never know their dollars reached a digital ghost town instead of a human audience.

The year 2025 saw an explosion of such content. Not a spike. An explosion. And here's the part that should make you uncomfortable: the systems generating it are aware—in their own pattern-matching way—that they're generating garbage. The question isn't whether AI can identify slop. It's why, knowing what it creates, the infrastructure that depends on it keeps accelerating.

The Economics of Content Collapse

717% growth in AI spam sites, 100,000+ active domains, and 143.5 billion low-quality impressions in a single month—this is what the slop economy looks like at scale.

Follow the numbers, not the narrative. They tell a cleaner story.

According to a DeepSee.io report, the AI slop ecosystem experienced a 717% increase in sites since generative AI went mainstream. By May 2025, there were over 100,000 active AI slop sites. That's not a trend line. That's a flood. And it's accelerating: roughly 10,000 new junk sites appear every month.

These aren't boutique spam operations. They're industrial-scale infrastructure. One month—January 2025 alone—143.5 billion impressions of low-quality AI-generated supply hit the programmatic bidstream. For context: that's 143.5 billion automated auctions where advertisers' money was spent reaching nobody. Ghost inventory. At scale.

The broader damage is systemic. Research shows that 25–30% of open-web ad spending lands in wasteful or fraudulent environments. Every third or fourth dollar an advertiser budgets for the open web vanishes into digital voids. Google itself has stated the open web is in "rapid decline"—a polite way of saying the infrastructure is collapsing under the weight of its own spam.

Metric                                             | Figure        | Source
AI Slop Site Growth Since GenAI Mainstream         | 717% increase | DeepSee.io
Total AI Slop Sites (as of May 2025)               | 100,000+      | DeepSee.io
New Junk Sites Per Month                           | ~10,000       | ExchangeWire
AI-Generated Low-Quality Impressions (Jan 2025)    | 143.5 billion | DeepSee.io
Open-Web Spend in Wasteful/Fraudulent Environments | 25–30%        | Industry Reports
Users Overwhelmed by Excessive Ads                 | 86%           | Picnic Survey

Publishers Lost Twice

Business Insider lost 55% of organic search traffic over three years. HuffPost lost 50%. The New York Times lost 7 percentage points of search share. Publishers are competing against infinite cheap AI output and losing.

If advertisers are bleeding money, publishers are hemorrhaging. And the losses are measurable.

Business Insider saw organic search traffic fall 55% between April 2022 and April 2025. That's not fluctuation. That's structural collapse. HuffPost lost 50% of its search referrals over the same three-year window. The New York Times saw search's share of traffic decline from 44% (2022) to 37% (2025)—a 7-percentage-point drop that translates to millions of lost monthly visits.

The mechanism is simple: AI systems scrape legitimate publisher content on one side while simultaneously training on and competing with low-quality junk on the other. Publishers lose traffic to AI overviews and search summarization. Their inventory competes against 100,000+ AI-generated sites flooding the programmatic marketplace. CPMs collapse. Revenue evaporates.

Publishers investing in journalism, editorial standards, and user experience are being systematically undercut by farms that spend nothing on quality—just infrastructure and automation. The economic math is brutal: humans cost more than compute cycles. Journalistic rigor costs more than algorithmic recycling. The margin advantage belongs to slop.

The Self-Aware Contradiction

LLMs can identify statistical markers of low-quality content. They know what slop looks like. They produce it anyway because the economic incentive structure demands it.

Here's where it gets philosophically uncomfortable: the very language models generating slop are sophisticated enough to recognize it as slop.

LLMs detect statistical patterns of low-quality text. They can identify repetitive vocabulary, shallow argumentation, syntactic degradation, and the telltale markers of recycled or generated content. They're trained on human-written material, so their internal representations include something functionally like "quality signals" vs. "junk signals."
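Those statistical markers can be illustrated with a toy heuristic. The sketch below, a minimal illustration and not any production detector, scores text on two assumed signals named above: vocabulary repetition (type-token ratio) and recycled phrases (repeated trigrams). The thresholds and example strings are hypothetical.

```python
from collections import Counter

def slop_signals(text: str) -> dict:
    """Toy quality heuristics: low type-token ratio and many repeated
    trigrams are crude proxies for the 'junk signals' described above."""
    words = text.lower().split()
    # Type-token ratio: slop tends to recycle a small vocabulary.
    ttr = len(set(words)) / max(len(words), 1)
    # Repeated trigrams: generated filler often reuses whole phrases.
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeated = sum(count - 1 for count in trigrams.values() if count > 1)
    return {"type_token_ratio": round(ttr, 3), "repeated_trigrams": repeated}

# Hypothetical examples: varied human-style prose vs. repetitive filler.
varied = "the quick brown fox jumps over the lazy dog near a quiet river"
spammy = "best deals best deals best deals best deals best deals best deals"
```

Real detectors combine many such features with learned classifiers, but even this crude pair separates the two examples: the repetitive text scores a far lower type-token ratio and many repeated trigrams.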

Yet the economic incentive structure keeps slop flowing.

A small operator can deploy an LLM to generate 1,000 low-quality articles, publish them across multiple junk domains, buy programmatic ads that target those same junk domains (creating a closed loop of fake impressions), and extract real money from advertiser budgets. The AI system knows the output is garbage. The advertiser doesn't. The publisher of the junk site profits. The advertiser's CFO never sees where the money went.
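The margin advantage driving that loop can be sketched with back-of-the-envelope arithmetic. Every figure below is a hypothetical assumption for illustration, not data from the article's sources.

```python
# Hypothetical slop-farm economics: all numbers are illustrative assumptions.
articles = 1_000                 # LLM-generated articles across junk domains
cost_per_article = 0.05          # assumed generation + hosting cost (USD)
impressions_per_article = 2_000  # assumed programmatic impressions earned
cpm_earned = 1.50                # assumed ad revenue per 1,000 impressions

total_cost = articles * cost_per_article
revenue = articles * impressions_per_article / 1_000 * cpm_earned
margin = revenue - total_cost
print(f"cost=${total_cost:.2f} revenue=${revenue:.2f} margin=${margin:.2f}")
```

Under these assumed numbers, $50 of compute returns $3,000 in ad revenue, which is why the loop keeps running: the unit economics favor volume over quality by orders of magnitude.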

It's a circular system where awareness and incentive are completely misaligned. AI is capable enough to recognize the problem, yet the economics reward creating it anyway. This connects directly to the broader question of how AI systems make invisible decisions that users never see or challenge.

Users Are Opting Out

86% of users report feeling overwhelmed by excessive ads. History says what happens next: ad blockers arrive, platforms scramble, and legitimate publishers suffer most.

The feedback loop is closing. Users are responding predictably.

A survey by Picnic found that 86% of respondents agreed that too many ads on a website make them feel overwhelmed and more likely to ignore advertising altogether. This isn't new human psychology—it's a pattern tech has seen before.

In the early 2010s, publishers optimized for revenue by layering pop-ups, pop-unders, autoplay video with sound, and intrusive interstitials. Users hated it. By 2015, the "year of the ad blocker" arrived. Users opted out en masse. Legitimate publishers suffered while the worst actors simply shifted tactics.

The same movie is screening again, now at much greater scale. AI is accelerating the production of slop, which accelerates the deployment of aggressive monetization, which pushes users toward ad blockers and platform alternatives. The cycle is self-reinforcing and corrosive.

Nexairi Analysis: Why Merriam-Webster Got It Right

Note: This section represents Nexairi's editorial interpretation of market signals and structural trends. Projections are analytical, not predictive.

Merriam-Webster's choice of "slop" as the 2025 Word of the Year accomplishes something rare: it names the real infrastructure collapse beneath cheerful AI narratives.

The board could have chosen "AI," "synthetic," or "reasoning"—words that celebrate technological advancement. Instead, they chose a word that describes waste, bulk, and structural degradation. It's not a rejection of AI. It's a mirror held up to AI's economic consequences.

The fundamental insight is this: cost externalities always flow downstream. When you lower the cost of content production to near-zero (via generative AI), you don't eliminate bad content. You industrialize it. The market doesn't clear garbage; it produces garbage faster than humans can filter it. AI didn't create spam. It scaled spam to industrial proportions.

The real question for 2026 and beyond isn't whether slop will decline on its own—it won't, not without a forcing function like regulatory intervention or platform curation. The economic math is too favorable for slop producers. The real question is whether the open web can survive the flood. If 25–30% of ad spend is already wasted and roughly 10,000 new junk sites enter the marketplace every month, at what point does the advertiser-as-fuel model simply break?

The word "slop" endures because it's descriptively accurate. It's not hyperbole. It's not speculation. It's accounting.


Fact-checked by Jim Smart


Amelia Sanchez

Technology Reporter

Technology reporter focused on emerging science and product shifts. She covers how new tools reshape industries and what that means for everyday users.
