News Bite: AI Slop, John Oliver, and (Literally) Fake News


Written by Emily Wolfteich
Senior Industry Analyst

Anyone with an online presence has probably seen an AI-generated image or two (or ten). Shrimp Jesus. Catchy videos of awkward text messages set to banal country songs. Pictures and videos of kittens and children that are a little soft around the edges. Perhaps you noticed right away that they were AI, or maybe something just seemed a little off. Or perhaps you didn’t notice at all.

Generative AI tools are incredibly simple to use. All it takes is a few keywords and there’s an image of anything you like – Kermit the Frog as painted by Edvard Munch, for example, or the Pope in a Balenciaga puffer jacket. In many cases, it is very clear that the image is generated (again, Shrimp Jesus). But as AI tools become increasingly sophisticated and their subjects more realistic, telling what is real from what is AI-generated becomes an increasingly serious challenge.

The Rise of the Slop

There’s a term for this sudden deluge of AI-generated content – “AI slop.” It’s produced in massive quantities, optimized for volume in the hopes that one or two pieces will go viral. Comedian John Oliver’s recent coverage highlights both the drivers and the consequences of this content’s rising prevalence. Oliver’s piece focuses on the economics behind AI slop, including the monetization by social media platforms that reward virality, the secondary cottage industry of paid tutorials on how to create similar content, and the real-world consequences of its proliferation. The segment showcases people fooled by AI videos and images, including two men who drove up to the Hollywood Sign in a panic after seeing an image of the landmark on fire (it was not) and a woodcarver whose work had been ripped off and replicated in increasingly unbelievable ways.
Oliver’s piece isn’t breaking news – the term “AI slop” has been in use since at least 2022 – but it draws attention to some of the darker undertones of AI-generated content. Amid the devastation of Hurricane Helene in 2024, AI-generated photos sparked false rumors on social media that confused first responders and sowed suspicion in affected communities, a phenomenon that repeated during Hurricane Milton. And it’s not only the chaos of natural disasters that provides fodder for AI-generated misinformation. Much of this content is designed to provoke online engagement, whether enjoyment or outrage.
 
False images of Donald Trump being arrested, wounded soldiers holding up signs asking for birthday greetings, and bizarre videos of world leaders as infants are all angling for a response from the audience. At best, that engagement is monetized. At worst, it deceives. The fake images of a girl clutching her dog that sparked so much furor during Hurricane Helene were quickly debunked, but some lawmakers kept them up on their social media, arguing that the image was “emblematic of the trauma and pain people are living through.”

Creating Fake News – For Cheap

“The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” Wasim Khaled, CEO of Blackbird.AI, a company that helps clients fight disinformation, told the New York Times.
AI-generated images are cheap to produce and can yield stunning returns – one slop creator says he earns $5,500 a month through TikTok and, for a fee, will teach others to do the same. This low barrier to entry also drives a flood of content from overseas, especially from countries like India, Vietnam, and China. One Serbian DJ currently owns about 2,000 content-mill websites populated with fake clickbait articles designed for search engine optimization, including formerly popular domains like The Hairpin and TrumpPlaza.
One of these websites is the former Southwest Journal, a local newspaper from Minnesota. As journalism revenues decline in the US, defunct news outlets are bought up and turned into AI slop news sites that plagiarize, impersonate journalists, and outright lie to draw in unsuspecting readers and boost ad revenue. For locals who might not know about the change in ownership, the garbage stories create confusion and uncertainty.
The AI takeover of a 140-year-old newspaper in Oregon used the bylines of journalists from other news outlets, invented journalists out of whole cloth, and plagiarized entire stories with only minor changes, a practice known as AI spinning. (All of this is legal, by the way.)

What Now?

Experts’ views of AI slop’s implications are mixed. Some see it as a turbocharged tool for propaganda machines, but not a problem inherent to AI itself. Others argue that AI slop is more spam than misinformation, akin to chain emails or other viral social media posts, and that platforms will learn to sort it out and control it in time. Despite a flood of AI images of politicians ahead of the 2024 election, the deepfake apocalypse many feared did not materialize, though there was evidence of AI bot campaigns. (The question of deepfakes deserves a deeper dive, given their role in growing waves of cybercrime.)

Some state governments are trying to contain the worst of the spread. Alabama, Colorado, New Hampshire, New Mexico, and Oregon have enacted legislation banning deepfakes, “fraudulent representations,” or “materially deceptive media” in elections; California is considering an AI watermark bill that would require entities generating AI content to embed digital provenance data. When it comes to sharing news on social media, however, platforms say there is little they can do until their models are better trained to recognize the content.
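
For readers curious what “digital content provenance” looks like in practice, here is a minimal illustrative sketch in Python. It is not tied to any particular bill or vendor tool: it simply scans an image file’s raw bytes for the label under which C2PA-style content credentials are embedded. A real verifier would go much further, parsing the manifest and validating its cryptographic signatures; the file name below is a placeholder.

```python
# A crude provenance sniff test, NOT verification: C2PA content credentials
# are stored in JUMBF metadata boxes whose label includes the bytes "c2pa".
# A real tool (e.g., a C2PA SDK) parses the manifest and checks signatures.
from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file appears to carry an embedded C2PA manifest."""
    data = Path(path).read_bytes()
    return b"c2pa" in data

if __name__ == "__main__":
    # "suspect_image.jpg" is a hypothetical file name for this sketch.
    print(has_c2pa_marker("suspect_image.jpg"))
```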

Meanwhile, AI slop is everywhere. A new study by AI detection startup Originality AI found that over half of long-form, English-language posts on LinkedIn are likely AI-generated. Oliver’s piece discusses the complete takeover of visual discovery site Pinterest by AI-generated home decor and outfits. The sheer volume of fake images is even distorting search results, leading to moments like Google serving up, as a top result, an AI slop version of one of the most famous paintings in history, Hieronymus Bosch’s The Garden of Earthly Delights.
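
For the technically inclined: commercial detectors such as Originality AI rely on proprietary classifiers, but one commonly discussed signal is how statistically predictable a text looks to a language model. The sketch below illustrates that general idea only and is not any vendor’s method: it scores a passage’s perplexity under the open GPT-2 model via Hugging Face’s transformers library. Unusually low perplexity can hint at machine-generated prose, though it is far from conclusive.

```python
# Illustrative only: perplexity under GPT-2 as a rough "AI-likeness" signal.
# Low perplexity means the model finds the text highly predictable, which is
# weakly associated with machine-generated prose. Real detectors combine many
# such signals and still make mistakes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the perplexity of `text` under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean cross-entropy
    return float(torch.exp(out.loss))

if __name__ == "__main__":
    print(perplexity("Shrimp Jesus is a recurring AI-generated image on social media."))
```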

The consequences of more and more of our digital spaces being consumed by literal fake news range from the erosion of trust in journalism to new opportunities for fraud, and even to national security concerns as more “local news” is owned and operated by foreign actors. For now, at least, we hope that no one is fooled by Shrimp Jesus.


To read additional thought leadership from Emily, connect with her on LinkedIn.
