Thinking for Yourself in the Age of ChatGPT
Thoughts from Taiyo and Sarah, hosts of My Robot Teacher
Recently, researchers from MIT uploaded a draft paper, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” to arXiv, setting off a flurry of online commentary and moral panic.
In many ways, the panic we’re seeing in response to the MIT paper is understandable, grounded in a widespread fear that Generative AI platforms, in their frictionless efficiency (as our guest Hana Zaydens puts it in Episode 1), might inadvertently smooth away some of the cognitive difficulty that leads to deep learning. But we have a big problem with headlines like Time Magazine's “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study,” The Hill’s “ChatGPT use linked to cognitive decline: MIT research,” and - our personal favorite - The Daily Mail’s “Using ChatGPT? It might make you STUPID: Brain scans reveal how using AI erodes critical thinking skills.”
As much as we worry about becoming all-caps-STUPID, we don’t think the paper supports such hysterics. Its strongest finding is neural: different parts of your brain “talk less to each other” when you’re using an LLM to write essays than when you’re working with no technological assistance. But just because the brain is working differently doesn’t mean it’s working worse; reduced connectivity between regions doesn’t necessarily mean that LLM use causes wholesale cognitive decline, nor do the paper’s authors say it does. What they do say is: “while these tools offer unprecedented opportunities for enhancing learning and information access, potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research” (142-3) [our emphasis]. In other words: more research is needed.
We believe that AI will likely change how we think and learn. But so did writing. So did print. So did Google. Ultimately, even if researchers definitively prove that using LLMs in the way the paper’s participants did can lead to cognitive deficits in students, the question is still, “What are we, as educators, going to do about this?” because, all-caps-and-bold, AI IS NOT GOING AWAY.
As we see it, we educators have four options:
1. Quit. Go live off the grid eating grubs and wild onions (if you’re lucky, because your tomato plants aren’t going to survive the harsh northern California winter and NEITHER WILL YOU).
2. Deny the ubiquity of AI and pretend the last three years never happened. Consider going into a coma for your next sabbatical. Or just keep on doing what you’ve been doing. Or ban LLM use in your classes. Lament the endless parade of em dashes your students submit in flagrant defiance of your edict. Swear by AI-detection software, even though it doesn’t work.
3. Cope with the immense change, maintaining some minimal semblance of what led you to become an educator in the first place while your job slowly kills you. (This might look like going back to blue books, reinstating handwritten exams and essays to directly circumvent AI use, even if it means more grading for you and less accessibility for some students.)
4. Thrive by learning to interact with AI in ways that promote student learning and well-being. Consider AI’s potential to enhance, not diminish, intellectual rigor. That doesn’t have to mean adding more to already overflowing plates; it means strategically adapting how we work and teach (and learn), using AI to automate what we can (like tedious service work), facilitate cross-disciplinary dialogue, and create new practice grounds for critical thinking.
Our starting point is a paradox: Generative AI is potentially both poison and cure, arguably an existential threat to the foundations of our intellectual traditions but also, perhaps, a key to their rejuvenation. On one hand, it can serve as a powerful apparatus of pseudo-intellectual consumption, offering frictionless answers that risk supplanting the essential and often difficult work of thinking itself. This is where the MIT paper's findings resonate; it seems pretty commonsensical that uncritical reliance on anything can lead to a “cognitive debt.”
But on the other hand, it offers a new kind of collaborator at the intersection of different minds, a tool that can help us translate between disciplines and catalyze discoveries in ways we are only beginning to understand. In our most optimistic moments, we imagine AI as the ultimate Rosetta Stone for academic silos, potentially unlocking vast reservoirs of knowledge previously inaccessible across fields - something that could allow us to transcend the inherent limitations of human disciplinary boundaries and forge intellectual connections that reshape how we tackle large-scale problems. This is something we explore in Episode 2 of My Robot Teacher, available on July 8.
However, despite this immense potential, many alarmist claims about LLM use continue to circulate. We suspect this panic - like those headlines screaming about eroding critical thinking skills - stems in part from a Golden Age fantasy that critical thinking was once perfectly robust, uniformly defined, and nurtured through “timeless” techniques like essay-writing (which are of course not really timeless at all), only now to be uniquely threatened by a new technology. The truth is, critical thinking is notoriously difficult to define and assess. It’s an elusive concept, interpreted in myriad ways across disciplines and contexts. Is it logical reasoning? Problem-solving? Analytical prowess? Metacognition? Without a consistent, universally agreed-upon definition, how can we definitively claim its erosion? That ambiguity allows for a simplistic narrative of decline, rather than a nuanced understanding of how cognition might adapt or change with new tools and systems.
-Taiyo and Sarah
[Subscribe to My Robot Teacher: Apple / Spotify / YouTube / Instagram]