
highplainsdem

(58,749 posts)
Wed Oct 22, 2025, 03:05 PM

AI Models Get Brain Rot, Too

Source: Wired

-snip-

A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.

"We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We wondered: What happens when AIs are trained on the same stuff?”

Hong and his colleagues fed different kinds of text to two open source large language models in pretraining. They examined what happened when the models were fed a mix of highly “engaging,” or widely shared, social media posts and ones that contained sensational or hyped text like “wow,” “look,” or “today only.”

-snip-

The models fed junk text experienced a kind of AI brain rot—with cognitive decline including reduced reasoning abilities and degraded memory. The models also became less ethically aligned and more psychopathic according to two measures.

-snip-

Read more: https://www.wired.com/story/ai-models-social-media-cognitive-decline-study/



Much more at the link.
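For anyone curious what the "junk vs. control" split might look like in practice, here's a rough sketch in Python. This is my own illustration, not the researchers' code; the field names, engagement cutoff, and marker words are guesses based on the article's description (popular, fragmentary posts plus sensational language like "wow" or "today only"):

```python
# Toy sketch of the junk-vs-control corpus split described in the article.
# Field names (likes, retweets, replies, text) and thresholds are
# illustrative assumptions, not the study's actual pipeline.

SENSATIONAL_MARKERS = ("wow", "look", "today only")

def engagement_score(post: dict) -> int:
    """Popularity proxy: total likes + retweets + replies."""
    return post.get("likes", 0) + post.get("retweets", 0) + post.get("replies", 0)

def is_junk(post: dict, popularity_cutoff: int = 500, max_words: int = 30) -> bool:
    """Junk = highly engaging (popular and short/fragmentary) OR sensationalist."""
    text = post["text"].lower()
    popular_and_short = (
        engagement_score(post) >= popularity_cutoff
        and len(text.split()) <= max_words
    )
    sensational = any(marker in text for marker in SENSATIONAL_MARKERS)
    return popular_and_short or sensational

def split_corpus(posts: list[dict]) -> tuple[list[str], list[str]]:
    """Partition raw posts into a junk corpus and a control corpus."""
    junk = [p["text"] for p in posts if is_junk(p)]
    control = [p["text"] for p in posts if not is_junk(p)]
    return junk, control
```

The point of the split is that both corpora come from the same source; only engagement and tone differ, so any decline can be pinned on the "junk" qualities themselves.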

The study itself is here: https://llm-brain-rot.github.io/

In this work, we introduced and empirically validated the LLM Brain Rot Hypothesis, demonstrating that continual exposure to junk data—defined as engaging (fragmentary and popular) or semantically low-quality (sensationalist) content—induces systematic cognitive decline in large language models. The decline includes worse reasoning, poorer long-context understanding, diminished ethical norms, and emergent socially undesirable personalities.

Fine-grained analysis shows that the damage is multifaceted in changing the reasoning patterns and is persistent against large-scale post-hoc tuning. These results call for a re-examination of current data collection from the Internet and continual pre-training practices. As LLMs scale and ingest ever-larger corpora of web data, careful curation and quality control will be essential to prevent cumulative harms.
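The "continual pre-training" the authors mention is nothing exotic; it's just more next-token training on the new data. A minimal sketch (again mine, not the authors'; the model name, learning rate, and junk_texts list are placeholders):

```python
# Minimal sketch of continual pretraining on a junk text corpus, using
# Hugging Face transformers. Hyperparameters and model are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the open-source LLMs used in the study
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

junk_texts = ["wow, look at this!!! today only..."]  # replace with junk corpus

model.train()
for text in junk_texts:
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    # Causal LM objective: the model learns to predict each next token
    # of the junk text, absorbing its style and (lack of) structure.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```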


3 replies

kysrsoze

(6,381 posts)
1. You know, I hadn't thought about it, but AI is now consuming tons of AI slop. As always, G.I.G.O.
Wed Oct 22, 2025, 03:07 PM

FakeNoose

(39,138 posts)
2. Unlike humans, these bots are not taught to read anything with a healthy skepticism
Wed Oct 22, 2025, 04:18 PM

If I read something on Xwitter, I can decide to disbelieve it any time. But the AI bots aren't coded to do that.

scipan

(2,957 posts)
3. This is wild. Wish I knew more about how AI works.
Thu Oct 23, 2025, 11:32 PM
We analyze the reasoning failures in ARC-Challenge to identify different failure modes. We find that the majority of failures can be attributed to "thought skipping" (e.g., the model fails to generate intermediate reasoning steps), which significantly increases in models affected by brain rot.


It seems like it's more than just GIGO... It's something I would have thought was intrinsic to the LLM. Like it's mimicking the low-quality tweets' thought processes???
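From what I can tell, "thought skipping" just means the model stops writing out intermediate steps before the answer. A toy check like this (my own guess at the idea, not the paper's actual analysis code) could flag it:

```python
# Toy illustration of spotting "thought skipping": does a model's answer
# contain intermediate reasoning steps before the conclusion?
# Purely a heuristic sketch, not the study's methodology.
import re

def count_reasoning_steps(answer: str) -> int:
    """Count sentences that look like intermediate reasoning steps."""
    step_markers = re.compile(
        r"(first|second|then|next|therefore|because|step \d+)", re.IGNORECASE
    )
    sentences = re.split(r"[.\n]+", answer)
    return sum(1 for s in sentences if step_markers.search(s))

def skipped_thought(answer: str, min_steps: int = 2) -> bool:
    """Flag answers that jump to a conclusion with too few visible steps."""
    return count_reasoning_steps(answer) < min_steps

print(skipped_thought("The answer is C."))                      # True: no steps shown
print(skipped_thought("First, note X. Then Y follows. So C."))  # False: steps present
```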

Thanks for this.