My Robot Teacher Episode 3 Transcript
The End of Literacy As We Know It - How ChatGPT Exposed What's Broken in Education
Below is the full transcript of Episode 3 of My Robot Teacher (lightly edited for clarity and concision; filler words and false starts have been removed).
Also available on: Apple / Spotify
CHAPTER 1: Introduction [00:00 - 01:55]
Pranav: One thing that it's taught me as a linguist is how seductive text is.
Taiyo: I kind of think of these large language models as being, yes, inhuman, but almost like a kind of mind in and of itself.
Sarah: It's like, “this is why we can't have nice things,” right? “You guys said there was no universal truth and now we have fake news!”
Taiyo: Welcome back to My Robot Teacher, where we are exploring if AI might just be the catalyst needed to reinvigorate higher education. Today we are joined by Pranav Anand.
Pranav: I am a Professor of Linguistics at UC Santa Cruz and the faculty director of the Humanities Institute.
Taiyo: and Chesa Caparas.
Chesa: I am faculty at De Anza College, which is a two-year college in Cupertino, California. I teach English and Asian American and Asian studies.
Taiyo: We sat down with Chesa and Pranav to answer the question: what is AI literacy, and who gets to define it?
Sarah: Today’s episode refuses to hand you a tidy definition, partly because we ended up taking a lot of detours during our conversation. Instead, you’ll hear how AI is destabilizing our most familiar teaching rituals, and why the real crisis might not be students cheating but our own unimaginative assessments.
Taiyo: Before we begin, we wanted to take a moment to thank the California Education Learning Lab for sponsoring this podcast and also for all the work they do to facilitate collaborations between the three segments of the California Public Higher Education System: the UC, the CSU, and the California Community Colleges.
Sarah: By getting faculty from all three in the same room, they're making sure that the 2.8 million students in the public higher education system in California don't end up with three incompatible playbooks for the AI age.
Taiyo: Thank you California Education Learning Lab for making that happen. Now on to the conversation.
CHAPTER 2: AI Literacy and Humanities Inquiry [01:55 - 15:02]
Taiyo: In terms of AI literacy, I don't know, are there any initial thoughts about what AI literacy is? Because it's a phrase, right, that gets tossed around a lot.
Sarah: Everyone says we need it.
Taiyo: Everybody talks about how we need it. What is it? Any thoughts?
Chesa: The first thing that comes to mind is that what we choose to call literacy is a highly political move, right? The things that we think are necessary to learn are really important. And so the fact that, you know, AI literacy is a thing… we have to look at the politics, the economics of it, and think about what the motivations are and who is getting left behind in those conversations. There’s a really great book by Annette Vee called Coding Literacy that talks about how, when we think about literacy, it means that we separate people into groups of the illiterate and the literate, and there's an urgency that comes with that, right? That, oh, we have this literacy crisis, people don't know enough. So that's one of the first things that comes to mind when we talk about AI literacy: it's a political and economic project. And then for me, given my research (I was studying media and information literacy in the Philippines during their 2022 election, and how algorithms and those types of technologies impacted democracy), one of the things that I think is necessary for AI literacy is actually emotional literacy, because these technologies really have the capability to manipulate our behavior. But that's the stuff that doesn't really get talked about. It's like, "oh, are students cheating? Are they losing critical thinking?" And there are so many different answers to that. But what I've really seen in the ways that I've studied social media is that these AI tools are actually learning from us and figuring out how to manipulate our behavior based on what they learn about us, especially the ways that we are motivated by our emotions as humans. So if I could say one thing, it would be that AI literacy has to include social emotional literacy as part of it.
Pranav: That's a great answer. I don't know if I have any comment after that. I think we talk about AI as this monolithic thing, and of course AI is a very old field and we're at a particular point; I don't know where we'll end up. So I do worry that we're somehow putting the cart before the horse when we say, well, we need to train people on these technologies. I mean, as you were saying, that will be obsolete literally in two months, and that's why I like the idea of "work on yourself." Because one thing that is interesting, maybe we could talk about generative AI with text. One thing that it's taught me as a linguist is how seductive text is, you know? People really don't want reams of information that they have to analyze for themselves. They want it put in a packaged form that is narrativized. And so in some sense, alongside that emotional training, it's like the normal training about propaganda, right? And the way in which this very old form of narrative experience can short-circuit a lot of your thinking in general.
Chesa: Right.
Pranav: And so I sort of feel like we wanna go back to really, really basic things about manipulation.
Chesa: No, I love that. I mean, as an English person, I love saying that when the data gets narrated, that's when it actually becomes consumable by end users. And as a literature person, it's like, let's interrogate the story, right? What are the stories that these tools are spitting out?
Sarah: Totally. And I've been thinking about this a lot, well, since the three of us met on a panel called “Can AI Save the Humanities?”
Chesa: Yes. Yes. The answer is yes.
Sarah: But also it was like, “humanities might save AI?” But I think one of the things that I think was so fun about being on that panel is that we were thinking about, we were trying to historicize. I think one of the first messages to our colleagues was like, “don't panic: there are historical precedents for this.” And one of the things that the humanities or humanities training does is give you that historical knowledge and ability to contextualize stuff. But I think we were all in various ways talking about the idea that to make something seductive, to use your word, to make something consumable, saleable, it needs to be packaged in a kind of narrative. And so humanists are typically the people who are studying and consuming a lot of narratives. Taiyo has this whole thing about the Humanities needs to step up right now.
Taiyo: I'm trying to similarly clarify my own thoughts about this, so I would really appreciate any input you all can provide. But this seems like a really potent moment for the humanities, which, is it fair to say, has taken a bit of a beating in the last few decades as STEM has gained in prominence? There's been a sort of concomitant degradation of the reputation of the humanities…
Chesa: Mm-hmm.
Taiyo: …around philosophy, and maybe around the role that narrative plays. I'm not really sure. But it does feel like the humanities gets a little bit, can I say, shit on by society? And I think that's really unfortunate. But now we're at a moment, right, where it's so incredibly important to be able to communicate effectively, of course with each other, but now crucially with AI, because guess what? The interface for the large language model is natural language.
Chesa: Mm-hmm.
Taiyo: Natural language is the trade of the humanities, it seems like. Yes, mathematics might treat language in a very abstract way, but when we're talking about how to interface with these things, natural language is the tongue that you use, right? That's how you do it. Then there's this other part that I think about, which is a little bit more speculative, but I'd love to hear what you all think. I feel like the humanities, more than other disciplines, has to take into account the way other people are thinking, the minds of others, and has to really emphasize empathy: trying to get in somebody else's shoes and understand their experiences and how that shapes the way they think, everything about them. Right? Here's the speculative part: I kind of think of these large language models as being, yes, inhuman, but almost a kind of mind in and of itself. The methods that humanists have developed to understand other people through their writings and through the things that they say could potentially be used to understand the ways in which these LLMs think.
Chesa: Mm-hmm.
Taiyo: Which I think are not human ways of thinking.
Sarah: Mm-hmm.
Taiyo: And I think we do ourselves a real disservice when we try to anthropomorphize these LLMs too much.
Chesa/Sarah: Mm-hmm. Mm-hmm.
Taiyo: Because they're not human, after all. They are something fundamentally different. But what I'm trying to get at is: yes, you're humanists, but part of the humanist thing is understanding how other people think…
Chesa: Yeah.
Taiyo: …getting ways of interrogating that, looking inside the head, into the brain, figuring out what those things are, and then being insightful about all of that. I feel like those same tools need to be brought to bear on these LLMs - which right now are black boxes, to a very great degree, right? We do not really understand what's happening inside of those things.
Chesa: Right? Yeah.
Taiyo: These things are grown, not built, you know?
Chesa: Which is the same thing we can say about talking to another human.
Pranav: Absolutely.
Chesa: Like, I'm talking to you and you're saying things and it's intelligible to me. I can understand, but I don't know the wealth of experiences and inputs that have gone into you saying what you're saying to me. And that's just what I try to teach my students when they are engaging with the LLMs. We do a lot of critiquing the AI output, and I teach ethnic studies, so I'll say, “I want you to ask the LLM,” or actually the image generator, “create an image of an Asian American,” right, because I teach Asian American studies. And then we ask: what do you think about this image? What does it tell you about the stereotypes, the biases, and possibly the training data that went into it? What are the cultural biases that created this image you're seeing here? How do we get inside the quote-unquote head of the model to figure out why it's presenting the way it's presenting? We'll never know exactly what series of matrix multiplications it's done, but that doesn't matter. What matters is the output and the ways that the output impacts us.
Taiyo: Yes. Right? Mm-hmm.
Chesa: As, say, an Asian American who's looking at an image and saying, “that doesn't look like me,” or “am I supposed to look like that?” That's maybe the more phenomenological way of looking at it: this is the way this presentation is impacting me, these are the consequences it has in my real lived experience, so let's think about how it got there and how we can make it better so it's not causing harm.
Sarah: Yeah. Do you think there's a sense, as a humanist, of being comfortable with the idea that you'll never fundamentally know what goes on in the mind of another, whether it's a human or an LLM?
Chesa: Yeah, I think so. And it's funny, 'cause a lot of my STEM students are just like, oh, I've always hated English, because it seems like there's never one answer. And I'm like, that's beautiful! Bullshit me! And I'll say, oh, that was a great essay. You know what I mean? And I get excited about that, but I understand that other folks are just like, no, I really want there to be one right answer.
Sarah: Mm-hmm. Pranav, your thoughts about consciousness?
Pranav: Well, I agree. They're not conscious, right? And they are very different, probably, from people, although I agree with Chesa: speaking as a cognitive scientist, we literally don't know what people are doing, so we need to de-anthropomorphize ourselves. But I do think that one thing the humanities traffics in is not just ambiguity and multiplicity. The early 20th century was about breaking the relationship between authorial intent and the text itself. And that whole bizarre theory, I remember learning it as a freshman and thinking, “What is that?”, is such preparation for the moment we're in now, because these things really are just spinning a wheel or whatever. There is no authorial intent. They're like John Cage; they're aleatoric things. They're just rolling dice.
Taiyo: Stochastic parrots.
Pranav: That's right! That's it indeed. But the thing that's tricky, and I think this was what was so powerful about the Bender et al. idea, is that we do interpret the parrots. So I agree with you that that toolkit, which begins by saying that a text can have meaning independent of what the person writing it wanted it to have, is very revolutionary. You listen to people talk in book clubs and they're always like, why did they write that?
Chesa: I know. They look at the back of the book: is there an answer?
Sarah: I'm always like, even if they tell you, do you think they know themselves? Truly.
Pranav: Or even if they tell you, why does that change how you interpret it? Right. And so I do feel like there is something really exciting about these things. I recently learned this term that people have been using, “plausibility engines,” which I really like for these generative technologies, and that's what they are, right? And I like the term “engine” because it makes them sound like tools, but they output stuff and we can still interpret it.
Chesa: Well, I think what you're bringing up is this tension between authorial intent and authority, right? Because with that motivation of “I want there to just be one right answer,” there is a safety, a security, in thinking that you're either right or you're wrong. And I feel that's kind of what we're grappling with with these tools. Especially when students use them, they're like, “I think there's one right answer, so I'm gonna copy and paste this prompt, get the answer that it gives me, and say that that must be truth.” Whereas if we encourage them to be more iterative, to have a conversation with the AI and co-create something, because you don't know whether it's the right answer or not, I feel like that is where the more exciting stuff happens with these tools.
Sarah: I love this. I hear so many people essentially blaming humanists, and particularly post-structuralist humanists, like, this is the reason why. It's like, this is why we can't have nice things, right?
Pranav: Because of relativism.
Sarah: “You guys said there was no universal truth and now we have fake news,” right? So that's part of the humanities getting shit on in the last 40 years. So I think this is a nice way of recuperating that in some way, of being like, no, you can still have that and use it to come to some kind of consensus about how we approach questions.
CHAPTER 3: “What the Hell is Education For?” [15:03 - 31:45]
Taiyo: There was a comment that Pranav made that I think is really important that we not let go of, which is the idea that things are moving so quickly now. Is it fair to say that the pace of change right now, particularly technologically, is really, really much faster than anything I've been used to in my lifetime? Given that that's the case, the changes that we make, you know, we're gonna be making superficial changes to our curriculum, adding AI elements here and there, and we can talk about that. But I think the thing that's frightening is that those could be washed away, swept away by the tides of change coming in, rendering the lessons that we're trying to teach in those changes completely obsolete. The last thing we want to do is invest our time and energy into creating things which are gonna be obsolete for our students within months. Right? Then there's the question of how educators can intelligently respond to that kind of situation on the ground. What is it that we do in the face of that? Any ideas?
Chesa: I don't think you want my answer. Oh, God.
Taiyo: I think I do. I really want that, especially now.
Chesa: Well, this is not to diminish the amount of effort and care and labor that's going into people fine-tuning their curriculum or incorporating stuff. But I feel like the energy really needs to be placed in: what the hell is education even for now? You know what I mean? Not to say that there isn't value; some people are trying to argue that there's no value in higher ed, and I do not agree with that. But what are we doing when we do education? Now that tools can simulate learning or intelligence, what does it mean for a human, namely our students, to actually learn something and to demonstrate intelligence? I feel like our education system has been designed around certain metrics for intelligence that are no longer relevant. And so we're kind of chipping away or fine-tuning things, but it's a fundamental, foundational problem, which is exciting and exhausting at the same time. So that's why I don't think people really like that answer.
Taiyo: I love that answer, because we've been thinking exactly along these lines. I think then the question becomes: well, what is education for? What do we want to see out of our graduates? You know what I mean?
Chesa: Yeah. I'm curious, from the cog-sci perspective, what is learning?
Pranav: Oh, I don't know. I mean, I'll give you the answer that Noam Chomsky famously has, which is “learning is growth.” It's the development of the system according to what it's supposed to do, how it's supposed to develop. But one of the problems in cognitive science, and it's sort of the point that you were making, is: are they growing toward what they were always supposed to, or are they coming up with something new? It's like that question of invention versus discovery. Who knows? I think the problem here is that learning within education always has to be based on what the purpose of the educational system is, and it's always been about manufacturing a worker, right? And if you think about it from that perspective, it totally makes sense that if calculators come along, you don't need people to memorize multiplication tables, or even learn how to use a slide rule, right? And similarly, if all of these narrative technologies evolve, we don't need that, right? But then there's this other thing that we always say education is about: building the person.
Chesa: And creating civically minded people.
Pranav: That's right. That's right.
Sarah: Yeah, people who can collaborate and communicate, people who can learn how to learn.
Pranav: And so those feel like those are probably enduring.
Chesa: I hope. I hope. I would say given the state of things, I don't know if we've succeeded.
Pranav: But to that point, for me the thing that's so interesting is that these technologies have come along and revealed to us how bad our assessments have been.
Chesa: Yeah.
Pranav: And how uncreative we've been.
Taiyo: Thank you.
Pranav: And how unsophisticated we are. I mean, because it was that factory model.
Sarah: That’s right. Exactly. I remember we were talking about the five-paragraph essay.
Chesa: Ughhhhh. In case you need it again for audio: Ughhhhhhh.
Sarah: But I have this little theory that the people teaching composition and literature who jumped on the AI bandwagon pretty early were the people who were just like, I hate the five-paragraph essay. I've always hated it. And we were talking about why. On one hand, I buy the argument that it's arguably the greatest one-size-fits-all way of figuring out “can you make a point and sustain it with evidence and follow a structure and follow the rules,” which of course is ideologically linked to training you to be a compliant worker. But that's not the only way to do that. And I remember you asking, “why do they do this?” And I was like, because you have a bunch of underpaid teachers doing grading work over the summer. That's the easiest thing for them to grade. It needs to be something they can actually assess. And so there isn't room for creativity. And that's the kind of thing that I think just stifles people's desire to learn.
Pranav: Absolutely.
Sarah: When they feel like they have something to say and then they're constrained by this form that's there, possibly for good reasons, possibly because we just haven't been brave enough to throw it out. I think this is bringing me to the question of institutional change: we have all of these educators right now who are doing really exciting things with AI in their classrooms and their research, but then there are also people taking an approach that's like, well, I'm gonna ban it, or I'm gonna rely on - totally ineffectual, by the way - AI detectors. So what goes into driving change in higher ed?
Pranav/Chesa: [Laughter]
Taiyo: That's about the reaction I was expecting.
Pranav: I think it takes something like this kind of, you know, trillion-dollar industry coming and hitting it, right? I actually think the sclerosis has been there for decades, right? And we've known about all of these systemic biases that you mentioned, and we've done nothing, or we've worked at the edges.
Chesa: Yeah.
Pranav: And I think it's gonna require students and their families saying, why are we paying for this, if what we're getting out of it is this form of learning that doesn't even seem to relate to what I do in my everyday life?
I feel like, for the first time, and I don't know how you all feel, people wanna have this conversation this year.
Sarah/Chesa: Yes. Mm-hmm.
Pranav: Like at the end of this year.
Sarah: Totally. Yeah. All of a sudden, after two and a half years.
Pranav: Right. I think there was a lot of ostriching, and now people are like, oh, it's not going away. It's not a fad. And maybe they're seeing the extent to which their courses don't respond to the moment.
Chesa: Well, and I think what's nice is that it started with teachers being critical of students, like, “oh, the students aren't learning; they're losing their critical thinking,” and now it's students becoming critical of teachers using it, saying, oh, I don't even know why I'm in college. Why am I shelling out all this money to have an AI grade my papers? I want the human touch. And so I feel like, now that everybody's pointing the finger at everybody else, we're finally able to have this conversation. You know what I mean?
Pranav: But it's interesting that that whole dynamic is all about assessment, which is the part of education that none of us…
Chesa: nobody likes!
Pranav: None of us. I mean, I think there are educational assessors out there who really love it, but most of us don't. And so it's a little bit weird to ask us: these assessments that you were sort of roped into doing, where you just followed what other people did, how do you make them AI-proof? And it's like, well, the reason that I am in the classroom is not to assess you.
Chesa: Mm-hmm. Mm-hmm.
Taiyo: I've been thinking a lot about the kinds of constraints that educators are under. One of these is the demand, and I would call it a recent demand, to assess and measure learning outcomes, right?
Sarah: Mm-hmm. Yeah.
Taiyo: We are instructed in our syllabi now to write out learning outcomes, and they can't just be anything. They have to be measurable outcomes: you need to be able to measure those things or it's worthless. And I'm a math guy. But bringing this kind of quantification, this demand for numbers and hard data, to these learning outcomes, which are really fun to think about: when we're thinking about the qualities that we want out of our graduates, we want people that have good metacognitive skills, that can think about their own thinking, that are interested in learning, that will keep learning after they've left the institution, that are motivated to learn. They're curious people. They're also discerning people, so they don't get immediately bamboozled by the next charlatan that comes along with snake oil. All of these things… these feel extraordinarily unmeasurable.
Chesa: Right, exactly.
Taiyo: Even though that's ultimately what I want out of my students and out of the graduates I see coming out of my institution, those are the things that I can't measure, or don't know how to measure. They can't be put into learning outcomes that are measurable at the end of the semester because I gave them some questions about integrals or something like that.
Sarah: Isn't it partly because you need to see how they are later in life?
Chesa: Yeah, we need longitudinal studies.
Taiyo: Maybe that's what it is. But something like curiosity also exists in the moment, and I don't know how to measure it.
Chesa: I just feel like, in this era of generative AI, what I want my students to take away is: how would you apply this in the real world? To me, that's the measure of intelligence: can you take what you learned in the classroom out into a real-world context and do something good with it? I definitely have problems, like when I'm fixing up my house and I'm like, wow, I wish I were good at math so I could know how to measure this or know how to build this. You know what I mean? But I was never taught how to apply things in real-world contexts. And so I feel like that is maybe the skill now, right? The literacy is not just “what's the right answer,” but “why do you care about getting the right answer,” because there are real stakes, right? Like building a house.
Taiyo: I think there is a model for education that harnesses this idea, and I think it probably does an incredibly good job of fostering things like curiosity. And that's project-based learning.
Chesa: Exactly.
Taiyo: Where students pick a project and the instructors aren't necessarily telling them what to do, but rather facilitating or guiding the educational process, trying to point them in the right directions, giving them different ways of thinking about it. I think that's a beautiful thing. I wonder if it scales.
Pranav: Well, maybe it didn't, right? That model is the Oxbridge model, isn't it? I never really understood it, but it's like, you go, you make your reading list, you talk to this person one-on-one, right? And it's all very individualized. But we sort of have the opportunity now, because you can have these agentic resources to help the person navigate.
Taiyo: Wait, be more specific. What are we talking about?
Pranav: Well, you can spin up an LLM that could help guide them, and maybe there could be ones for different project areas. I think that's the biggest problem, right? They need some mechanism by which they can continue to make progress, or even understand how to scaffold the inquiry process. And these things are pretty good at conversations. And if the conversations turn out to be dead ends, well, as you said, that's a learning opportunity, right? Research is deeply non-monotonic when it comes to the amount of time you…
Sarah: Oh, that's such a great point. I think a question that's on everybody's mind is the idea of cognitive decline.
Pranav: Oh yeah.
Sarah: Especially since it's the summer of 2025 as we're recording this, and several weeks ago that MIT paper that blew up the internet - or the higher-ed internet - came out. So we were wondering: what skills are you most worried about students losing, and how do we embed those skills in this new world? It's probably too big a question to answer in 10 minutes.
Chesa: I will say that the big thing I worry about is the erosion of trust. And that's social, interpersonal trust, but also trusting themselves. So many students, when I actually ask them, “you know you weren't supposed to use generative AI, and it looks like you did. Why'd you do that?” say, “I'm a really bad writer. I'm bad.” There's so much outsourcing of the work, not because they're quote-unquote lazy, like a lot of teachers might accuse them of being, but mainly because they don't think that they're good enough. And depending on these tools is reinforcing that by making them sound like everybody else. So that's the thing that I'm really worried about: that they're not finding their own voice, because they are already so distrustful of themselves.
Sarah: Totally. A colleague was asking me, “as somebody who believes in the value of writing for thinking, why don’t you fight harder for the essay, for writing?” And I think one of the reasons is exactly that: here's a form that somewhat arbitrarily, and pretty recently in terms of the history of communication, has been propped up as the only way to do this. What are we missing if we're so bound to that model?
Pranav: Fetishizing, yeah.
Sarah: Yeah. Exactly. And this idea of like, just because a student can't write well doesn't mean they don't have good ideas.
Pranav: Or that they can't think well.
Sarah: Exactly. Yes. And so there's gotta be other ways to measure that, a multitude of different ways to measure that. And where you get into trouble is this civic society question: how do you avoid everybody just sitting in their VR goggles being like, I'm gonna think my own way, in my own bubble? That's a dystopian vision too.
Pranav: Absolutely.
Sarah: But so is the idea of a monoculture where everybody sounds like ChatGPT, which I would argue is what the five-paragraph essay was made to do.
Pranav: It totally does. And then it gets people thinking, “I don't know how to write well, because I don't know how to write with your particular language, because my language might be different, and with these weird conventions.” I sometimes think about this: that particular model of tracing one argument straight through without any deviation, I can understand it from a logical standpoint. But on the other hand, it's hard, because as you're writing, you're often filled with doubts yourself. They sort of spring up, and so to quiet them is sort of to do violence to your own thinking.
Sarah: Yeah. Yeah.
Pranav: And so then someone's like, wait, I can't focus because I'm always having these doubts. But there are plenty of other writing cultures where you talk about the thesis and then the antithesis, right? So you do talk about those doubts in the same moment. There's something about that myopia, I don't know, but it's not quite myopia.
Chesa: It's like shoehorning your reasoning, like assuming everyone thinks linearly.
Sarah: We were talking about this at Inspire, the Learning Lab conference, last year: the whole genre of the essay comes from the French word for “to try.” It's an attempt. It's kind of supposed to be iterative, and then we somehow made it seem like it was not iterative, but something you have to do under timed circumstances where you must organize your thoughts instantly and rigidly and…
Chesa: Yeah
Sarah: I got a terrible score on the GRE English test, by the way.
Chesa: Hey! [sound of high fiving]
Pranav: Really?
Chesa: Yeah. I just wrote, this score does not reflect my ability as a writer. Like, please read my writing sample.
CHAPTER 4: The End of Writing as We Know It [31:45 - 37:54]
Sarah: So the reason I don't feel so pessimistic about what happens in an English classroom or a writing classroom is because, while we're at this moment when everybody, it seems, is suddenly talking about AI, when there's a culture-wide anxiety about what happens to humanity with AI, I think we're gonna see a renewed intrinsic motivation in the idea of having your own voice. And this might become something fun for students, if we can frame it in a way where it's like: this might not even be something that you discover perfectly formed, but something that you are always forming in the process of trying to share ideas. That seems to me a really powerful way to claw back some of what we might've lost in the rote, routine, and highly structured way that we've been thinking about communicating and thinking through writing.
Chesa: Yeah, I agree.
Taiyo: One thing that you were talking about was externalizing your thoughts, and how writing allows you to do that. And isn't it interesting that this technology of writing lets you do that?
Sarah: Right.
Taiyo: Because there was a time when there was no writing: for many cultures there was a pre-literate time when there was no such thing as writing or reading, and yet there was still information being shared and communicated purely through oral means, right? It was pure orality for those cultures at that time. And when writing came along, you can find artifacts which lament the advent of writing - the idea being this is gonna make people stupider, that there's gonna be cognitive decline associated with the advent of this technology. That people externalizing their thoughts, which we're now talking about as a virtue, is actually a vice, because it's gonna weaken our memories. This is going to have deleterious effects on our culture; we're not gonna be able to preserve the most important aspects of the past. And the people that we once considered to be the wise men and women in our culture are no longer going to be, because the wisdom that was transmitted via these short, pithy proverbs could now be shared with everybody. Right? All of this had the effect of very learned people - I'm talking about Plato - questioning whether or not writing was right for human beings. Now, living in the 21st century, I think it's very clear that writing is pretty great. I kind of like it. I like writing, I like reading. I think these have been overall good things for humanity. But this did come with a change, I think, in our own consciousness, and, maybe relevant for education, in how we think about what it means to be smart versus stupid.
Chesa: Yeah.
Taiyo: I wonder if this moment where we're grappling with this question around AI is analogous to that moment of literacy’s advent into the world and all that kind of thing.
Pranav: I have like a hundred thoughts on that.
Chesa: Me too.
Sarah: So, okay, can I say one thing first? I did a little research on this for that class I told you about. You know, widespread literacy is a product of the late 1800s, which I did not know. It's very, very recent. And so widespread literacy, perhaps it's been a net good, but there are many people who are not very literate even still, and that varies: it's based on their circumstances, on their processing of text, which differs depending on which script and language you have. So it's very circumstantial. Someone can be dyslexic in English but not dyslexic in Japanese, because of the different writing system, or the reverse, right? So much about the technology of writing has to do with the affordances of the particular writing system. And we're all academics, and we were all really good at this. So when we talk about it as a net good, that's partly because I happen to be good at this, and that's why I'm here. We've been rewarded.
Taiyo: Thank you for questioning that.
Pranav: Absolutely. And I just feel like, you know, in the same way, I'm not a video person at all. I cannot think in terms of moving images at all. And so if we had a society where people engaged with each other that way, I guess we do, with social media…
Taiyo: We're entering into that society.
Pranav: I don't do any social media because I couldn't. Right? So I'm happy that, I guess, I was born when I was born and not a bit later. But we don't have articles that are written as, I don't know, TikTok videos. Like, we don't…
Chesa: They do, actually. Now there's video essays. Somebody sent me a link and it was like, oh, here's a video essay. I was like, you mean, like, a lecture?
Taiyo: Yeah, the video essay.
Chesa: They really are. Well, I'm a video person.
Pranav: Yeah, but you are a video person, right? So I just feel like this is one form of media that has been a mass form of media for only, at most, 200 years or something, thinking of the invention of the paperback, let's say. Because books were already very, very expensive for so, so much of history. So I feel like it's had a good run.
Chesa/Taiyo/Sarah: [laughter]
Pranav: Yeah, I really do think this. But, I don't know, there are so many ways of expressing, and we do so much cognitive offloading all the time as a social organism. Okay, I'll stop. No, no, I can keep going, but…
Chesa: No, I totally agree. One of the things I wanted to say was: I love reading, I love writing, but there are many different ways of generating meaning and finding meaning in the world that aren't necessarily rooted in text.
CHAPTER 5: The New Critical Thinking Debate [37:54 - 48:16]
Sarah: So we started with a question about AI literacy and ended with a point about finding meaning in the world.
Taiyo: We were aggressively digressive in this one, but if you think about this conversation as a whole, Sarah, what are some of the things, some points that are gonna stick with you from this conversation?
Sarah: I think one of the things that really stuck with me was that when we tried to answer this ostensibly simple question, “What is AI literacy, and what goes in the AI literacy curriculum?”, we ended up circling around a lot of skills that I would think of as really fundamental to a liberal arts education. We didn't get answers like “you need to know how LLMs work.” I mean, I think you do; I think that is an important part of AI literacy. But it was interesting to hear Chesa and Pranav talk about things like self-awareness, metacognition, social emotional skills, the importance of being aware of how your own mind works. I was thinking about what Chesa said about different ways of generating meaning and finding meaning in the world. Part of that comes from thinking about our role in interpreting text, whether that's human-generated or LLM-generated. But this question of finding meaning in the world opens up, for me, a question about what education is for, which we talked about a lot. I think part of the AI literacy curriculum right now needs to be helping people cope with the kind of existential crisis that arises when a machine can do a lot of the stuff that you as a human can do. That has got to lead to questions about our own obsolescence: what is the point of me as a human if I can outsource thinking? That's a big question that people are gonna have to cope with.
Taiyo: Okay, here's my truth. You ready?
Sarah: Oh God. Okay, here we go.
Taiyo: I do not like the concept of critical thinking. It is not well defined. I don't know what goes into it. I don't know what's left out of it.
Sarah: The skill, very practically speaking: we don't want our students to graduate into a world where they believe every output they see, whether it's human- or LLM-generated, and then jump to a conclusion that is wildly out of sync with reality, right?
Taiyo: Absolutely.
Sarah: We don't want them seeing a deepfake and being like, “I gotta hoard baked beans because the world is ending tomorrow, because I read it here.”
Taiyo: Turns out I agree with that. It's just not obvious to me that outsourcing your quote-unquote critical thinking to these AI systems, if you want to, wouldn't come with the ability to think even deeper, higher-level critical thoughts. In other words, it's not obvious to me that outsourcing means that you give up on the enterprise altogether.
Sarah: Mmmhmm.
Taiyo: There are other instances of this happening. One which comes to mind is chess, right? We have these computers which absolutely dominate humans at chess. Humans cannot beat the best chess-playing programs now, no chance. But chess is more popular than ever. People are not just outsourcing chess to the computers; I don't think there's much interest in that. There's much more interest in human beings doing this activity.
Sarah: I'm trying to think of a mental task that we've fully automated. I mean, maybe the art of memory to a certain extent. Like we don't really train people anymore to memorize stuff.
Taiyo: Memory is definitely less of an emphasized skill. Mental arithmetic is less of an emphasized skill. So why do we put this thing that we call critical thinking in this rarefied air, as the thing that can't be touched or else society is going to crumble? Why do we do that?
Sarah: I think the reason is because society will crumble if you have every individual living in an isolated information bubble without any ability to discern. Wait, I think the question for me…
Taiyo: I'm just being very provocative here.
Sarah: …The question for me is: do we think we were doing such a good job of teaching critical thinking that we shouldn't try something new?
Taiyo: This is the point that Pranav is making, right?
Sarah: Mm-hmm.
Taiyo: Is that we're lamenting the loss of something that was never there.
Sarah: Mm. Golden age logic.
Taiyo: Golden age logic! We're nostalgic for a time when our students were these big galaxy-brain critical thinkers, looking at any piece of text that came along and analyzing it… Give me a break. Let's not fool ourselves here. Higher education, particularly higher education at scale, hasn't ever been able to assess or to teach our students how to be good critical thinkers. I mean, we can gesture at it…
Sarah: Spicy hot take!
Taiyo: But look at the United States of America right now and tell me that in the last two decades higher education has done a good job of teaching critical thinking.
Sarah: [laughs]
Taiyo: But now we have this AI moment. Right?
Sarah: Mm-hmm.
Taiyo: And not only does this AI moment give us the chance, it forces us, to reconsider what it is that we've been doing, to reassess and reckon with some of the, as Pranav puts it, sclerosis in the system that's always been there, and to see what we can do to actually fix that. But I think AI itself presents a possible solution.
Sarah: I think the big question was: how do you bring intrinsic motivation back into assignments, into education? If students see what they're learning as just a box they have to check so that they can then go get a job somewhere, that isn't learning; that's box-checking. So if you want to train students who are adaptable and capable of being resilient in the face of technological change that is going to upend many jobs, how do we make that the centerpiece of education? Adaptability.
Taiyo: Right now it feels like we're not really tapping into one of the greatest resources available to us, which is students' intrinsic motivation: the thing or things that they might be deeply curious about, the things they might do in their free time. We're not using that curiosity. We're not using that energy to our advantage.
Sarah: So practically speaking, what does this mean? Like, get rid of grades? It sounds like what we were getting at in that conversation was: we are not assessing what is actually valuable about education, and what is valuable about education, we're also saying, is in some ways immeasurable. So what is the practical takeaway here? Is it “get rid of grades”? And then how do you do that in a system where students are applying to jobs and applying to grad school, law school, and you need to actually have some kind of… Like, what does that actually look like, practically? I would love to just sit around with my students and talk about stuff and not actually have to grade them, just sit there and find out what is the intrinsically motivating thing that's gonna get them excited about the thing that I'm excited about.
Taiyo: That would be wonderful. The most common question that a math instructor gets, and every math instructor will agree with me about this, is: when are we ever gonna use this stuff?
Sarah: Mm-hmm.
Taiyo: The math that they're learning. When am I as a student ever going to use calculus? And math instructors don't always have the greatest response to that.
Sarah: Yeah.
Taiyo: Because for some students it's hard to know. Like, you can go by their major.
Sarah: But I've heard you answer this for students. Like here at Cal Poly Maritime, when a student is a mechanical engineering major or a marine transportation major, you're like: you will need to use this in, like, ship stability, right? You don't wanna tip over the ship, so you're gonna use this math. Or you're gonna need to make an engine that doesn't explode, right? There's a direct application.
Taiyo: Right. I guess I'm envisioning a world in which, with a triangulation between artificial intelligence and an expert such as myself, that student can get individual attention with respect to that particular passion, attention that will make the mathematical content seem alive, seem vibrant, seem like something they might even - shocker - want to keep pursuing, because it leverages that internal motivation they already had.
Sarah: This is the challenge that I think all of us have to think about now: how do we do that in all of our disciplines? We are really interested in hearing from all of you, and we'll be posting a lot of practical tips for how we do this in assignments. We're really hoping that you will share some of yours on the California Education Learning Lab Substack.
Taiyo: You can find links to everything at myrobotteacher.ai.
Sarah: Thanks for listening. This has been My Robot Teacher hosted by me, Sarah Senk…
Taiyo: …and me, Taiyo Inoue.
Sarah: Thanks to Chesa Caparas and Pranav Anand, and our wonderful team at EditAudio. And special thanks to the California Education Learning Lab.
Taiyo: If you enjoyed the podcast, subscribe on YouTube or leave a review on Apple Podcasts, Spotify, or wherever you get your podcasts.