Could artificial intelligence take over the internet? Some members of the online community reckon it already has. This old theory is flooding virtual spaces again, and it all has to do with Shrimp Jesus.
If you don't know what I'm talking about, that's the infamous AI-generated Facebook image that, along with its many variations, has been floating around the net since it first surfaced in March 2024. At first glance, Shrimp Jesus appeared to be your standard, human-made meme. But it was actually the jumping-off point for Facebook AI art slop -- a proliferation of AI-generated memes like the Challah Horse, the 386-year-old granny baking her own birthday cake, and the random wooden cars (to name just a few).
The flood of these pictures has reignited discussions about a conspiracy theory that cropped up in 2021, called the Dead Internet Theory. If you frequently use TikTok, Instagram or Facebook, you may have already seen examples of these kinds of images without knowing it. I write about the internet for a living, and I only recently heard about the theory. Researching it led me down a rabbit hole I struggled to emerge from. So, what is the Dead Internet Theory? And how does it parallel the rise of artificial intelligence?
The origins of the Dead Internet Theory
The Dead Internet Theory first emerged in 2021 on the online forums 4chan and Wizardchan. People took to these forums claiming that the internet died in 2016 and that AI bots now generate most of the content we see online. The theory also holds that AI is being used to manipulate the public as part of a much larger and more sinister agenda. These posts were pieced together in a lengthy thread and published on another online forum called Agora Road's Macintosh Cafe. Be aware: The thread can be easily accessed online, but I didn't link to it because of the obscene language in the post.
User IlluminatiPirate wrote, "The internet feels empty and devoid of people. It is also devoid of content."
Now, years later, this conspiracy is seeing the light of day again with a rise of TikTok creators dissecting the theory and finding examples to support it. One creator, with the username SideMoneyTom, posted a video in March 2024 showing examples of different Facebook accounts posting variations of AI-generated images of Jesus. These images drive little traffic, yet they can still easily flood your feed. Like many other online creators, SideMoneyTom echoed the same sentiment: These Facebook accounts are run by AI bots that create all of their content. To better understand this theory, it helps to know how generative AI works.
Generative AI uses artificial intelligence systems that produce new content in the form of stories, images, videos, music and even software code. According to Monetate, "Generative AI uses machine-learning algorithms and training data to generate new, plausibly human-passing content." Since the launch of ChatGPT in 2022, chatbots have become all the rage, with tech giants like Google, Apple and Meta creating a slew of AI tools for their products. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, the owner of ChatGPT, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Now, back to Shrimp Jesus. If you feed specific data and prompts to a chatbot, you'll find that these images are "human-passing." Emphasis on "passing." Content created by chatbots is certainly known to have its faults.
"While large pre-trained systems such as LLMs [large language models] have made impressive advancements in their reasoning capabilities, more research is needed to guarantee correctness and depth of the reasoning performed by them," AI experts wrote in a report by the Association for the Advancement of Artificial Intelligence.
However, Shrimp Jesus and other AI-generated images aren't the only things online believers use to substantiate this theory.
Are these bots or real people?
If you spend enough time on social media, you'll see odd things in the comments section of certain posts, like repetitive comments from accounts that have nothing to do with the post. These comments are often strange and don't make sense. Last winter, Bluesky users took to Reddit to complain about being plagued by reply bots that were politely yet annoyingly argumentative.
One user flagged the common signs of these reply bots and what to do when encountering them. One telltale sign is an account that is brand new yet has already replied to many different posts, as seen from this Bluesky reply bot account.
How to spot an AI bot:
- An account with a bio that's very short, overly specific or missing entirely.
- An account with no photos or only AI-generated photos.
- The account is relatively new.
- An account with few or no followers.
- An account with an odd followers-to-likes ratio. (For example, an account with 10K followers whose posts receive only 50 to 100 likes.)
- An account that posts scammy comments.
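For readers who like to tinker, the checklist above can be sketched as a toy scoring script in Python. This is purely illustrative: the field names, thresholds and 30-day cutoff are my own assumptions, not part of any platform's real API or detection system, and the scammy-comments sign is left out because it would require analyzing post text.

```python
# Toy bot-likelihood checker based on the checklist above.
# All field names and thresholds are illustrative assumptions.
from datetime import date

def bot_score(account: dict, today: date = date(2025, 1, 1)) -> int:
    """Count how many checklist signs an account matches (0-5)."""
    score = 0
    bio = account.get("bio", "")
    if len(bio) < 10:                      # very short or missing bio
        score += 1
    if account.get("photo_count", 0) == 0:  # no photos posted
        score += 1
    age_days = (today - account["created"]).days
    if age_days < 30:                       # relatively new account
        score += 1
    followers = account.get("followers", 0)
    if followers < 5:                       # few or no followers
        score += 1
    avg_likes = account.get("avg_likes_per_post", 0)
    if followers >= 10_000 and avg_likes < 100:  # odd follower-to-likes ratio
        score += 1
    return score

# A made-up account that matches several of the signs:
suspect = {"bio": "", "photo_count": 0,
           "created": date(2024, 12, 20), "followers": 2,
           "avg_likes_per_post": 0}
print(bot_score(suspect))  # matches 4 of the 5 signs checked
```

The more signs an account matches, the warier you should be -- though, as the list itself implies, no single sign proves anything on its own.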
According to cybersecurity company Imperva's 2024 Bad Bot Report, nearly half of all internet traffic came from bots in 2023, a 2-percentage-point increase from the previous year. Imperva characterizes bad bots as automated software programs that perform malicious activities on websites. These bots can steal sensitive information, perform account takeovers and launch cyberattacks, including distributed denial-of-service (DDoS) attacks. The report also highlights that the rapid adoption of generative AI and other LLM-based tools has increased the number of simple bad bots because "less technical individuals can now write basic bot scripts."
The report notes that the US saw a rise in bad-bot attacks in 2023, accounting for 47% of all bot activity globally, making it the most targeted country for bad-bot traffic worldwide. (Bad-bot attacks aren't limited to social media and online communication, either: Imperva saw rises in bad-bot traffic in gaming; among telecommunications and internet service providers; in the computing and IT sector; and in travel.)
Generative AI's growth has accelerated in recent years, but so have the fears and concerns surrounding these changes, including their impact on the environment. According to recent data from the Pew Research Center, AI experts are far more likely than the general American public to believe AI will positively impact the US over the next 20 years: 47% of experts say they're excited about using AI daily, versus 11% of the public. The same report highlights that 51% of US adults say they're more concerned than excited about AI, a share that has grown since 2021.
Transforming, not dying
As for the growing concern over whether the internet is dead, Sofie Hvitved, technology futurist and senior advisor at the Copenhagen Institute for Futures Studies, believes the internet isn't dead but evolving.
"I think the internet, as it looks like now, will die, but it has been dying for a long time, in that sense," Hvitved said.
"It's transforming into something else and decomposing itself into a new thing, so we have to figure out how to make new solutions and better algorithms… making it better and more relevant to us as humans."
In 2024, a NewsGuard audit revealed that generative AI tools had been used to spread Russian propaganda across more than 3.6 million articles. NewsGuard also found that AI chatbots were repeating false narratives that originated on a Russian misinformation site. To that point, Hvitved emphasized that these problems don't signify that the internet is dead; instead, they force us to address how we can improve these AI tools.
"Since there are large language models, and you know, AI feeds on all the information it can gather, it can start polluting the LLMs and pollute the data, which is a huge problem," said Hvitved.
What does the online community think?
The Dead Internet Theory isn't dying anytime soon. Online discourse surrounding the idea isn't limited to TikTok. It's also found a home in multiple Reddit threads.
One Reddit user wrote, "AI chatbots are going to be catastrophic for so many people's mental health." But the research to back this up has been mixed. Some studies point to how AI chatbots can effectively reduce the severity of mental health concerns for people from different demographics and backgrounds. Others show that AI chatbots could be detrimental to young children and their development.
Another Reddit user posted, "Considering that we are just at the beginning of AI, especially its capabilities with video, I'd say there's a real chance that it will destroy the usefulness of the internet and make it dead."
Other people echo that sentiment, adding that the ratio of AI content to human content will change dramatically over the next few years. One even compiled a list of more than 130 subreddit threads consisting of comments and posts generated by AI bots.
Could AI shape a new digital internet culture?
According to the Harvard Business Review, generative AI poses a particular threat to content created by independent writers, artists, musicians and podcasters. That raises a looming question behind the Dead Internet Theory: Will AI completely replace human-made content? And if so, how will that shape internet culture?
Hvitved, who is also head of media at the Copenhagen Institute for Futures Studies, specializes in examining how emerging technologies like AI affect communication. She has a take on what a new internet culture could look like as AI use increases.
"Maybe the static element of the internet is going to die. So we have articles, static pages and web pages you must scroll through, but is that the death of the internet? I don't think so."
She believes this new internet culture could mean more relevant content for broadband users.
"That kind of contextual internet, knowledge graphs, real-time summaries and interactive microformats, that's something these [AI] agents can go out and pick from to create something specialized for you."
This new internet culture will emphasize AI's ability to tailor unique content for each user and may mean abandoning the concept of shared spaces and communities.
"We have to pay attention to echo chambers or diving into your own little worlds that only you would understand. We won't have any shared reality anymore," Hvitved said.
So, is the internet really dead?
If you've watched films like The Terminator, Blade Runner or Wall-E, you know there's always been a fascination with robots and whether they will take over the world one day. The resurgence of the Dead Internet Theory is just the latest evidence of that ongoing discourse. One could argue that AI shaping a new internet culture would mean the death of the internet as we know it. But this doesn't imply that the internet will just disappear. To echo what AI expert Sofie Hvitved conveyed, the internet may eventually evolve into something new. With the rapid growth of AI in our day-to-day lives, there's no question this is transforming the digital landscape. But is the internet dead? As a broadband writer working with numerous hard-working CNET writers daily, I can testify that it's alive.
The Dead Internet Theory FAQs
What is the Dead Internet Theory?
The Dead Internet Theory emerged in 2021 from online conspiracy theorists on forums like 4chan and Wizardchan. It suggests that the internet died in 2016 and that the content we now see online is generated mostly by AI bots. The Dead Internet Theory also suggests that AI is being used to manipulate the public as part of a much larger and more sinister agenda.
What are examples of the Dead Internet Theory?
TikTok creators note the increased number of Facebook bot accounts creating AI-generated images, with Shrimp Jesus and its variations being the most infamous. This image also became the jumping-off point for Facebook AI art slop to spread online, with newly generated AI memes like the Challah Horse, the 386-year-old granny baking her own birthday cake, and the random wooden cars. In addition, followers also subscribe to this theory because of the spread of bot accounts filling comment sections across different social media platforms.
What does generative AI mean?
Generative AI uses artificial intelligence systems to create new content, including stories, images, videos, music and software code. The way it works is you feed specific prompts and data to a chatbot, and it creates a particular output for you. Examples of generative AI include chatbots like ChatGPT, Perplexity, Google Gemini and Claude by Anthropic -- a CNET Editors' Choice for the best overall AI chatbot.