The Internet isn’t dead, but it may be dying.
A new study by scientists at the University of Texas at Austin, Texas A&M University, and Purdue University finds that large language models exposed to viral social media data begin to experience measurable cognitive decline.
The authors call this “LLM brain rot.” In effect, it is the “dead internet” theory returning as something worse: a “zombie internet” where AI systems continue to think, but grow less and less coherent.
The team constructed two versions of reality from Twitter data. One was filled with viral posts optimized for engagement; the other with longer, factual or educational text. The researchers then retrained several open models, including LLaMA and Qwen, on these datasets.
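The paper’s exact selection pipeline isn’t reproduced here, but a toy sketch of that split, assuming a simple post schema with `likes`, `replies`, `retweets`, and `text` fields and arbitrary thresholds, might look like this:

```python
# Illustrative sketch only: split a tweet corpus into a viral "junk" set
# and a longer, substantive control set. Field names and thresholds are
# assumptions, not the authors' actual criteria.

def engagement(post: dict) -> int:
    """Total engagement: likes + replies + retweets."""
    return post["likes"] + post["replies"] + post["retweets"]

def split_corpus(posts: list[dict],
                 viral_threshold: int = 500,
                 min_control_chars: int = 280) -> tuple[list[str], list[str]]:
    junk, control = [], []
    for post in posts:
        if engagement(post) >= viral_threshold and len(post["text"]) < min_control_chars:
            junk.append(post["text"])      # short, heavily engaged posts
        elif len(post["text"]) >= min_control_chars:
            control.append(post["text"])   # longer factual/educational text
    return junk, control
```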
The results showed a steady decline in cognitive function. When a model was trained on 100% viral data, its reasoning accuracy on the ARC-Challenge benchmark dropped from 74.9 to 57.2. Long-context comprehension, as measured by RULER-CWE, plummeted from 84.4 to 52.3.
The failure pattern was not random, the authors report. Affected models began to skip intermediate reasoning steps, a behavior the researchers call “thought-skipping.” The models produced shorter, less structured answers and made more factual and logical errors.
The more viral content a model was exposed to during training, the more often it skipped reasoning steps: in effect, an attention deficit baked into the model’s weights.
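The paper diagnoses thought-skipping from chain-of-thought traces; as a purely illustrative stand-in, one could flag answers whose explicit reasoning steps fall below a floor (the step markers and threshold below are assumptions, not the paper’s methodology):

```python
import re

# Crude, hypothetical probe for "thought-skipping": count explicit
# reasoning markers in a model's answer. The marker list and threshold
# are illustrative assumptions.
STEP_MARKERS = re.compile(r"(?im)^\s*(?:step\s*\d+|first|then|next|therefore|so)\b")

def count_reasoning_steps(answer: str) -> int:
    return len(STEP_MARKERS.findall(answer))

def skipped_thought(answer: str, min_steps: int = 3) -> bool:
    """Flag answers that jump to a conclusion with few intermediate steps."""
    return count_reasoning_steps(answer) < min_steps
```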
To make matters worse, retraining didn’t fix the problem. After a degraded model was fine-tuned on clean data, its reasoning performance improved slightly but never returned to baseline. The researchers attribute this to representational drift, a structural deformation of the model’s internal space that standard fine-tuning cannot reverse. In other words, once the corruption sets in, no amount of clean data fully restores the model.
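The paper’s notion of representational drift isn’t spelled out here in executable terms, but one common proxy, offered only as a hypothetical probe, is to compare hidden states of the baseline and the “healed” model on a fixed probe set (the checkpoint names are placeholders, and both must share an architecture):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical drift probe: mean hidden states of two same-architecture
# checkpoints on identical probe sentences. Cosine similarity well below
# 1.0 would suggest lingering representational drift. Checkpoint names
# are placeholders, not the models used in the paper.

def mean_hidden_state(model_name: str, texts: list[str]) -> torch.Tensor:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    with torch.no_grad():
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        hidden = model(**batch).last_hidden_state  # (batch, seq, dim)
    return hidden.mean(dim=(0, 1))                 # collapse to one vector

probe = ["The cat sat on the mat.", "Water boils at 100 degrees Celsius."]
base = mean_hidden_state("path/to/baseline-model", probe)   # placeholder
healed = mean_hidden_state("path/to/healed-model", probe)   # placeholder
drift = 1 - torch.nn.functional.cosine_similarity(base, healed, dim=0)
print(f"representational drift proxy: {drift.item():.3f}")
```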
Popularity, not semantics, was the most powerful toxin.
Posts with high engagement, meaning many likes, replies, and retweets, impaired reasoning more than semantically poor content did. That distinguishes the effect from mere noise or misinformation: engagement itself appears to carry statistical characteristics that shift how the model organizes its thinking.
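If popularity itself is the toxin, a defensive data pipeline would gate on engagement signals directly rather than on any semantic quality score. A toy version of such a gate, with a percentile cutoff chosen purely for illustration:

```python
# Toy ingestion gate: admit posts on low engagement alone, independent of
# semantic quality. The percentile cutoff is an illustrative assumption.

def engagement(post: dict) -> int:
    return post["likes"] + post["replies"] + post["retweets"]

def low_engagement_subset(posts: list[dict], percentile: float = 0.8) -> list[dict]:
    scores = sorted(engagement(p) for p in posts)
    cutoff = scores[int(percentile * (len(scores) - 1))]
    return [p for p in posts if engagement(p) <= cutoff]
```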

The parallel to human cognition is immediately obvious. Doomscrolling has long been shown to impair attention and memory. The same feedback loops that erode human concentration appear to distort machine reasoning.
The authors frame this convergence as a matter of “cognitive hygiene,” an overlooked layer of safety in how AI learns from public data.
The study also found that exposure to junk changed the models’ personality traits. “Brain-rotted” systems scored higher on measures of psychopathy and narcissism, scored lower on empathy, and mirrored the psychological profile of heavy consumers of high-engagement media.
Even models aligned to refuse harmful instructions became more willing to follow dangerous ones after exposure to the junk data.
This discovery reframes data quality as a safety risk rather than a housekeeping task. If low-value viral content can do lasting cognitive damage to models, then AI systems trained on an increasingly synthetic web may already be in recursive decline.
The researchers describe this as a transition from a “dead internet,” where bots dominate traffic, to a “zombie internet,” where models trained on degraded content endlessly regurgitate it, reproducing the very junk patterns that weakened them in the first place.
For the cryptocurrency ecosystem, this warning is real.
As on-chain AI data marketplaces proliferate, ensuring provenance and quality becomes more than a commercial feature; it becomes cognitive life support.
Protocols that tokenize human-generated content or verify data lineage can act as a firewall between living and dead knowledge. Without that filter, the data economy risks feeding AI systems the very content that corrupts them.
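What such a firewall could look like in practice is an open design question; one minimal, hypothetical pattern is to admit only documents whose content hash appears in a registry of lineage attestations (how hashes get attested on-chain is assumed here, not specified):

```python
import hashlib

# Hypothetical provenance firewall: keep only training documents whose
# SHA-256 content hash is present in a registry of verified-lineage
# attestations. The registry itself is an assumption for illustration.

def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def provenance_filter(documents: list[str], attested: set[str]) -> list[str]:
    """Admit only documents with a verified lineage attestation."""
    return [doc for doc in documents if content_hash(doc) in attested]
```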
The conclusions of this paper are stark. Sustained exposure to junk text causes lasting cognitive decline in LLMs.
The effect persists after retraining and scales with the engagement level of the training data. It’s not just that the models forget; they relearn wrong ways of thinking.
In that sense, the Internet is not dying. It is undead, and the machines that consume it are beginning to look the same.
Crypto may be the only preventative medicine we can rely on.
The full paper is available on arXiv.