MIT Study: 83% of AI Users Can't Remember Their Own Writing

This week's headlines were hard to miss: "ChatGPT may be eroding critical thinking," declared TIME, reporting on a new MIT study. The data was stark — students who leaned on AI to write essays showed weaker brain activity, struggled with memory and focus, and often couldn't even recall what they'd written.
Almost every time I talk about AI, someone asks if it's making us dumber. Yesterday, it was my 10-year-old son—right after I used Gemini to help with his school project. The irony wasn't lost on either of us. I used to take pride in mental math, writing clean emails from scratch, remembering phone numbers. Now I outsource these so reflexively, I sometimes forget what my own thinking sounds like.
To my mind, the MIT study didn't identify a crisis; it documented the rise of something new. We've crossed some invisible line from using AI to needing it, and I'm not sure we can go back. The scariest part: I'm not even sure we should.
While that MIT study raised alarms, other breakthroughs were happening. IBM unveiled brain-computer interfaces designed to enhance cognition. Korean scientists introduced AI that mimics human vision to improve diagnostics. New data showed AI-powered learning tools actually improving memory and speeding up comprehension.
Both things can be true. AI can dull us — or develop us. The difference lies in how we use it.
The Beautiful Death of Human Intelligence
Evolution doesn't care about your SAT scores. In Plato's Phaedrus, Socrates watched writing destroy 10,000 years of oral tradition and warned: "This invention will produce forgetfulness in the minds of those who learn to use it." He was right. Homer's disciples could recite the entire Iliad from memory. I can't remember my own phone number.
But here's the truth: cognitive offloading is humanity's oldest trick. We've always outsourced thinking to external systems. Cave paintings stored hunting knowledge. Maps carried our spatial memory. Libraries became our collective brain.
In 1492, decades after the printing press arrived, the German abbot Johannes Trithemius accused those who preferred printed books of "trying to conceal their own laziness." By 1680, intellectuals complained that the "horrible mass of books which keeps on growing" would destroy civilization. Every generation thinks the new technology will be the one that finally makes us stupid.
In 1983, The Washington Post reported fears that calculators would make students "dependent on batteries instead of brains." Yet society adapted. By the 1990s, the SAT permitted calculators and AP exams required them for certain sections. What was once seen as a crutch became a mandatory skill.
Now AI is accelerating this ancient pattern at breathtaking speed. Perhaps the MIT students couldn't quote their own work not because they're broken, but because their brains are already adapting to a world where memory storage is obsolete. They're wasting no neural real estate on information retrieval when they need it for synthesis.
The MIT Media Lab Study: Your Brain on ChatGPT
The MIT Media Lab study that made headlines this week wasn't just another AI research paper—in a way it could be a brain scan of our cognitive future. Researchers tracked 54 participants across multiple essay-writing sessions, monitoring their neural activity through EEG while they worked. The setup was elegantly simple: one group used ChatGPT, another used search engines, and a control group relied only on their brains.
What they found should terrify anyone who's ever reached for ChatGPT mid-thought. The AI users showed "significantly weaker neural connectivity" across the board—their brains literally worked less when AI was available. But the most damning finding came in the memory test: 83.3% of ChatGPT users couldn't provide even a single correct quote from essays they had written just minutes earlier.
The Paradox of Enhanced Mediocrity
But here's where the story gets strange. Even as individual brains show signs of atrophy, collective output appears to be improving. Industry reports this week found that AI is "sparking a cognitive revolution"—enhancing average creative output across industries.
The catch: While AI makes everyone's work better on average, it also makes it more similar. We're witnessing the birth of what researchers call "algorithmic mediocrity."
When everyone uses the same AI tools, drawing from the same training data, we inevitably converge on similar solutions. AI sometimes helps with brainstorming, but more often it steers us toward the predictable middle. It's the cognitive equivalent of suburban sprawl—everything works better, but everything looks the same.
What if we're not just confusing speed with originality, but actively choosing efficiency over excellence? Just as factories made goods cheaper while killing craftsmanship, AI makes thinking easier—often at the expense of genuine creativity.
Or perhaps this is just a transition. Painters once feared photography would destroy art, only to discover it expanded what art could be. Maybe this isn't the end of creativity. Maybe it's just the beginning of something new.
The Numbers That Reveal The Challenge
The data paints a picture of radical cognitive redistribution, not simple decline:
- 83.3% of ChatGPT users couldn't quote their own AI-written essays minutes later—complete dependence on the machine's memory (MIT Media Lab study, June 2025)
- 78% of MIT students became so reliant on AI that they couldn't remember their own writing after four months (MIT Media Lab study, June 2025)
- 666 participants showed that the more dependent they became on AI, the less capable they were of independent thinking—an inverse correlation between reliance and cognition (Swiss study, January 2025)
- 36% of surveyed knowledge workers actively applied critical thinking when using AI—meaning 64% blindly accepted whatever the machine told them (Microsoft/Carnegie Mellon study, January 2025)
- 30,889 participants across 22 studies showed that we forget information we know we can find online (meta-analysis, 2024)
Yet here's the contradiction that scrambles the narrative: IQ scores rose throughout the 20th century even as we adopted calculators and computers. In AI-exposed jobs, skills are changing 66% faster according to PwC's 2025 Global AI Jobs Barometer—forcing unprecedented cognitive adaptation.
These changes don't just mean we're getting smarter or dumber—they suggest our ways of thinking might be starting to drift in completely different directions.
Three Hard Truths About How AI Is Changing the Way We Think
Truth 1: The Smartest People Are Choosing "Stupidity"
The cognitive elite seem to be among the first to adapt. Doctors are beginning to rely on AI for diagnostics. Lawyers increasingly turn to legal AI tools. Even many programmers now use Copilot to streamline their coding.
This echoes how oral storytellers in ancient India — the Vedic priests who could recite entire scriptures from memory — initially resisted writing their sacred texts down, fearing the knowledge would "lose its potency." Yet eventually, even they capitulated. Similarly, when digital music arrived, the most skilled audio engineers who spent decades perfecting analog recording techniques were the first to embrace Pro Tools and digital workstations. They understood that clinging to "pure" skill meant irrelevance.
High intelligence might mean recognizing when to let go of obsolete mastery.
Truth 2: Children Are Evolving Faster Than We Can Study Them
Children born after 2020 have never known a world without AI. Their brains are developing different architectures — not damaged, different.
In 2013, South Korea coined the term "digital dementia" for young people showing memory deficits from device overuse. Yet those same "impaired" youth went on to make South Korea a global tech powerhouse. Similarly, when pocket watches became common in the 18th century, critics worried people would lose their natural sense of time. Today, not knowing the time without a device isn't considered a cognitive failure — it's normal.
What some call decline might be evolution in disguise.
Truth 3: The AI Genie Is Out of the Bottle — And We Don't Want It Back
Humanity always takes the path of least resistance. We use AI to write like we use calculators for math. That shift has already happened. I see it in myself — I rarely want to write an email without it.
Let’s be honest: we wouldn’t go back even if we could. When was the last time you did long division by hand or wrote a letter instead of texting? We’re wired to save effort—and we don’t let go of convenience once we’ve tasted it. Resistance isn’t futile because AI is unstoppable. It’s futile because we’re eager.
Even the critics aren’t immune. At a recent event, I watched an AI skeptic deliver a passionate warning—then quietly use Copilot to draft his notes.
What feels like surrender might just be our most natural instinct.
The Uncomfortable Edge of Evolution
History repeats itself: writing killed memorization but gave us literature. Printing ended scribal labor but sparked the Scientific Revolution. Calculators wiped out mental math but made higher-level math accessible to everyone. Each time, we lost something—but gained more.
But this time feels different. The speed. The scale. The fact that we're outsourcing not just memory or calculation—but reasoning itself.
The Real Intelligence Test: How We Teach in the Age of AI
This is the real crisis. Our education system is still training kids for a world that no longer exists—memorizing facts Google already knows, writing essays ChatGPT can generate, solving problems AI handles instantly. We’re preparing them for cognitive independence in an age of cognitive interdependence.
The questions this raises aren’t just academic—they’re existential:
- If AI can write, analyze, and create—what human skills still matter?
- If traditional intelligence metrics no longer apply, how do we define learning or competence?
- When AI tutors personalize better than classrooms, what's the role of a teacher?
- And if we stop teaching handwriting, mental math, or unassisted reasoning—what happens when the power goes out?
But maybe we're asking the wrong questions. Instead of panicking about what we're losing, we should be building what comes next.
The future of learning isn’t about choosing between human and artificial intelligence. It’s about designing education for a world where one augments the other. We need to stop asking how to keep AI from making us dumber—and start asking how to ensure it makes us wiser.
So the real question isn’t: Is AI making us dumber? It’s: Dumber compared to what—the past we’re leaving, or the future we’re building?
The MIT students couldn't remember what they wrote. Is that evolution or decay? Are we adapting or atrophying? What do you think—and more importantly, what should we do about it?
Until next week -
Alex G.