My Robot Teacher Episode 2 Transcript
Higher Education in the Age of AI: Rethinking Teaching and Learning with ChatGPT
Below is the full transcript of Episode 2 of My Robot Teacher.
CHAPTER 1 - Waking Up to the AI Revolution [00:00-1:09]
Taiyo: [00:00] So imagine it’s fall 2020 again, and you've just taught your first course online because everybody's gone online because of the stupid pandemic, and you are miserable, and you decide, “you know what, I can't do this anymore. I'm going to voluntarily go into a coma for the next five years.” You wake up and it's 2025. Sarah, how do you explain what's happened in the world since then?
Sarah: [00:30] Oh man. Maybe I'd start by just being like, remember back when you thought AI looked like Ex Machina or Terminator, maybe. Well, here's how it played out. It’s a lot more like Her!
Sarah [VOICEOVER]: [00:45] Welcome to My Robot Teacher, where we explore the core tensions shaping this new era in higher education: the promise of innovation versus the anxiety of displacement, the allure of efficiency against the imperative for human connection, and the evolving definitions of what it means to learn and to teach in the age of AI.
CHAPTER 2 - The ChatGPT Moment and Its Aftermath [1:09-14:17]
Taiyo: [1:09] So maybe the big thing to talk about is the so-called ChatGPT moment,
Sarah: [1:13] which is I think the first time I learned about generative AI from Taiyo,
Taiyo: [1:18] right? In November 2022, I read on Hacker News about ChatGPT and I tried it out. And it absolutely shook my soul to the extent that I have a soul. I'm not totally sure I do, but, um…
Sarah: [1:32]…his soul's an AI bot.
Taiyo: [1:36] That's what people sometimes say. I've heard that before and I was trying to find anybody that would share my sentiments about this thing, share the excitement that I was feeling. So I naturally talked to all of my colleagues at work the next day, and nobody was sharing my enthusiasm. They said, oh, that's pretty cool. You know, but I could tell
Sarah: [1:58] we were all coming back. We were exhausted. We were coming back from Thanksgiving break. It was like the sad reprieve that you get from a few days off.
Taiyo: [2:08] Nonetheless, I was hoping somebody would share my excitement. So later, maybe the next day, we all go out to dinner. I say, have you tried this thing called ChatGPT? And it was like, no, not really. But at the end of the night, okay, Sarah says to me, so what is that thing you were talking about? “Chat G.T.P.?” [laughs] Anyway, I go home, I wake up the next morning and I see that my inbox has become a dumping ground for Sarah’s explorations with ChatGPT, and it’s just wall after wall of text of her conversations with ChatGPT and I realized oh, I found a fellow traveler here…
Sarah: [2:53]…that my soul is also an AI bot.
Taiyo: [2:56] She was super thrilled, super excited, amazed, mind blown, all of the superlatives.
Sarah: [3:03] I’m trying to remember what captivated me so much about it, and I think it was… You know, the first thing that I typed was something really stupid. It was like, make a poem in the style of Wordsworth about something that I won’t mention. And I remember being like, oh, you know, this is okay. But then I started asking it questions like, “do you have a sense of self?” And “what do you know about a human sense of self?” And then started going into like how it could summarize stuff that I had written and clarify my thoughts better than I had actually articulated them. And like that completely blew my mind. And I think, especially as a literature professor, the thing that seemed so revolutionary to me is that now my access to computing power was through language.
Taiyo: [3:46] Right? Like you're a humanist after all.
Sarah: Mm-hmm.
Taiyo: [3:49] Your training is in comparative literature.
Sarah: [3:51] Guilty as charged.
Taiyo: [3:52] So clearly having natural language as the interface to accessing insane amounts of computing power represents a completely new paradigm, which is very much in your wheelhouse.
Sarah: [4:04] Completely. You could just talk to it in the language you use to talk to everybody else you talk to. It wasn’t perfect. There was quite a lot of hallucination back then, but it really blew my mind that something that seemed to be - as you know, critics called it, like, just an advanced autocomplete or word-prediction tool - somehow could put together like passable, human-sounding text.
Taiyo: Right.
Sarah: [4:28] I’m trying to remember too, at the time you had a really concise explanation for me because I was like, how does this work? And you've been obsessed with AI for a long time.
Taiyo: Yeah.
Sarah: [4:36] So you were like my AI guide and guru throughout this process.
Taiyo: [4:40] Right. Yeah. So ChatGPT is just a sort of interface for something called a large language model. It’s a system that's trained on a colossal amount of text from the internet. It consumes all of this, and through the magic of multivariable calculus and linear algebra mixed with a bunch of statistics and computation, of course, it learns the patterns in language, and it is able to pull out statistical regularities and statistical insights about language that have eluded humans. The task that LLMs are designed to do is to predict the next word given a sequence of words. You give it a bunch of text, it analyzes that text and uses that as sort of input for trying to predict what the next word - what they call a token - in that sequence will be.
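[A minimal illustrative sketch - not part of the conversation - of the next-word prediction Taiyo describes, using the small open GPT-2 model via the Hugging Face transformers library; the prompt text and choice of model are just examples.]

```python
# Peek at a language model's next-token prediction for a short prompt.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The students came back from Thanksgiving break and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# The scores at the last position are the model's guesses for the *next* token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```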
Sarah: [5:36] Before ChatGPT I think I would've like zoned out as soon as you said “linear algebra,” but the thing that made it seem really cool to me was that it was clear that it had somehow learned the patterns and structure and relationships in language - something that's so crucial to my field - to a really incredible degree.
Taiyo: [5:51] You know, one thing that will always stick with me about your reaction, Sarah, to ChatGPT was, yes, you were very excited. You were bringing a super interesting curiosity and energy to the whole thing. But at the same time it wasn't uncritical.
Sarah: [6:09] Thanks. I'm really glad you brought that up because I worry sometimes that my excitement might come across like a kid in a candy store and it's like, you don't know all that sugar's gonna kill you one day. But yeah, as generous as I wanna be to my students, I don't think the skeptics were wrong to say that the first thing students are going to do is cheat, because I definitely observed that. I overheard in the hallways right when we got back from that break, students talking about how they used it on a lab report and got a hundred on it. Remember you were academic integrity chair at that time.
Taiyo: [6:40] Yeah, that was a really interesting time to be academic integrity chair.
Sarah: [6:45] [laughs] And I should say for context, at the time I was teaching a year long intro to composition class, so I was at the midway point of my class and was still gonna see the same group of students in the spring. And so I went into class and did this sort of performative bit where I tore up the syllabus that I had for the spring. And part of my logic was, well, if students are gonna be using this anyway, I'd rather try and engage them in thinking through the risks of offloading cognitive labor, and also encourage them to try and imagine ways that they could use it as an educational tool to bolster the skills that we were working on in my class, say, which had to do with how you write and think and organize your thoughts. It also seemed to me that if I didn't change what I was doing, I'd just be looking the other way while lots of people offloaded their intellectual work. And I also wouldn't be preparing the others for a world where these tools were quite obviously going to be normalized pretty quickly.
Taiyo: [7:36] So it sounds like what you're saying, Sarah, is that “if you can’t beat ‘em, join ‘em.”
Sarah: [7:40] Well, I think I've been disillusioned for a really long time feeling like what I was doing was not intrinsically motivating students who would often come to my class and say, why do I need to take a writing class? So it became a really amazing opportunity. I remember saying to them, “like, right now, you guys are on the cusp of this, and we’re all now in this transitional period where humanity’s gonna be grappling with how we integrate this technology into our lives, like into workplaces, into our educational systems, while still preserving foundational skills that you need to develop to effectively communicate with one another.” And so I said, “I think we need to really think clearly about what you value” and asked them “what do you value and how does what we’re learning in this class support that?” I also remember framing it really explicitly in interpersonal terms. So I had them working in groups like to freestyle with ChatGPT that week just to play around with it, and then they were supposed to write, like, a little reflection about the experience. And one group I remember asked it, “how do you get a girlfriend as an introvert” was the first prompt.
Taiyo: [8:39] Oh wow.
Sarah: [8:40] It was really sweet. And so I, um, I kind of played on that anxiety. You know, they all seem to find that like really relatable as a prompt. And so I said, “well, you know, when you’re looking to meet people, whether those are friends or partners, you’re essentially, you’re communicating with them in whatever medium you use, like a kind of implicit argument about why they should hang out with you. And if you’re relying on an AI chatbot to tell you that and to say what to tell them to like create that framing for you, where do you think that leads?” And they did not think it was good. I am wary of the risks that by using it in my classroom, I am having them outsource a lot of the work that I had them, that I used to have them do in class, such as sitting there awkwardly for long periods of time, puzzling through ideas. I sincerely believe that even though AI may be capable of superseding humans in a number of things, it is still really important for humans to practice puzzling through ideas, supporting claims with evidence, and I think ultimately too, thinking about that as an iterative process, which is something I was always taught in my training as a writing instructor in grad school: that writing is not identical to thinking, but it is like an inextricable part of thinking. It’s so linked to your ability to organize your ideas in ways that have these effects in all kinds of arenas of your life. Part of what a writing process often is, in isolation, is encountering new ideas and then integrating them into your thinking. And so when you're writing a research paper - the reason we have what I think to many students feels like a kind of rote and thankless task - the reason is to practice putting together some kind of theory about something, a thesis, and then testing it out and seeing how it holds up against evidence and then revising your thesis when you find evidence that convincingly refutes it. And so whether or not writing looks the same and the teaching of writing looks the same in five years and ten years or whatever, I still think that the skill that they need to be mastering in these classes is going to be a skill that humans need for a really long time: it’s a skill of being humble in the face of evidence. It’s a skill that involves concatenating all of your like random threads and theories and trying to synthesize them into something that you can communicate to another person, whether that’s to, like, share information with them or convince them of something.
Taiyo: [11:05] Wait, you want to claim that humans need to do that?
Sarah: Yeah, I do. Even though ChatGPT can do it for them.
Taiyo: Okay. Really?
Sarah: You don't think so?
Taiyo: [11:14] I don't think that people need to do things that, for example, I could think of some really awful mathematical tasks, like multiplying two eight-digit numbers together. Or five eight-digit numbers together. You could do it. But why would you ever do that?
Sarah: Oh yeah. Okay. Great question.
Taiyo: …when a computer can do that way more efficiently.
Sarah: [11:37] Yeah, thank you for that, ‘cause let me clarify. So what I’m saying is not that we need to preserve the exact task of the research paper as it stands. I've already thrown that outta my classes. Like I'm not doing that; I’d be a hypocrite if I said that’s what we should be doing. But I think that it really made me think, like, what’s the fundamental skill here when I'm having students sit down and write reading responses. And the idea is that you collect your thoughts and then in the process of discussing with others, you revise your thoughts. And so I was like, well, how can I do that now using this tool? And so one of the ways that I did that was, well, you can just generate counterarguments on the fly. You can generate dozens of them immediately. And so actually it was kind of a fun way and a really engaging way in class - because it lacked that kind of anxious tension you get when you're arguing with another human - to have them put their initial ideas into ChatGPT and then say, generate 10 counterarguments, and then go through and systematically, like, write in their own responses: well, here’s why that convinces me [or] here’s why that doesn't convince me. We were still able to get to that process. And in a weird way, I think it got better when I was using ChatGPT because it didn't have the, like, cultural connotations or maybe emotional sting of having me, the other human, say, “well, have you thought about this counterargument?” Maybe many of them perceive that as, like, here’s why you’re wrong. Stripping it of that - in a weird way, stripping it of its humanity - made it something that they could accept more readily.
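[A rough sketch of the classroom exercise Sarah describes - generating counterarguments on the fly - written against the OpenAI Python API rather than the ChatGPT web interface she actually used; the model name and sample thesis are placeholders.]

```python
# Generate ten counterarguments to a student's thesis so they can respond in writing.
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

thesis = "Requiring first-year students to take a writing course is a waste of time."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a debate partner for a writing class."},
        {"role": "user", "content": f"Give 10 concise counterarguments to this thesis:\n{thesis}"},
    ],
)

print(response.choices[0].message.content)
```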
Taiyo: [13:01] I think there’s also something interesting about that anecdote: when you have 10 counterarguments in front of you, you go through them and you think about which of those arguments resonate with you and which of them you disagree with.
Sarah: Right?
Taiyo: You begin to shape your own identity as a thinker through that process.
Sarah: [13:25] Yes, completely. And that is ideally what happens when you read a lot of books. Like you, you do that. That's what I did in, in studying literature. The tricky thing is that it's been so clear that students are not reading as much anymore. On one hand, I think on my most pessimistic days, I think, oh, I am totally complicit in the decline of reading skills because I'm not assigning as much reading. I am not assigning as much writing in isolation. I assign actually way more writing - like, the word counts of the text they produce in the semester by themselves are just astronomical compared to what they used to be in my classes. It's not grammatically correct writing that they're producing. They are producing these, like, free-flowing, free-write reflective responses that they have to do in every class. But the thing that I found is that they are demonstrating, like, a lot of improvement in being able to synthesize information and remember stuff from the beginning of the semester to the end.
CHAPTER 3 - Dialogic AI: A New Way to Engage with Information [14:17-16:21]
Taiyo: [14:18] And you were talking about how we hopefully are able to shape our identities through reading books.
Sarah: Right.
Taiyo: [14:26] Or at least that’s what you did in your, as part of your education. Yeah. But do you think that large language models, artificial intelligences, can take the place of that identity shaping through their outputs, through in particular outputs that you critically read, that you use your best discernment as a student to analyze and to figure out which points that, uh, and which perspectives that the large language model is generating, which of those resonate with me and which of those do I outright reject?
Sarah: [15:02] For sure. I think for better or worse. [laughs]
Taiyo: [15:05] But aren’t there some positives that we can think of - about large language models? Advantages of them over books?
Sarah: [15:08] Well, the positive thing is for students who are not doing the reading, at least they're getting something like at least they're getting a digestible, you know, [laughs] That’s why I’m like, “am I just complicit in this problem?”
Taiyo: [15:23] Okay, but isn't there something more. Like if I pick up a book from your bookshelf, and I start reading it, that’s a very different experience from reading the output of a large language model.
Sarah: Yeah.
Taiyo: From a prompt that I give it. Right?
Sarah: Oh, completely. Yeah.
Taiyo: Why is that? What? What's happening there?
Sarah: [15:40] It’s dialogic with a large language model, right? When you sit down with a book, it’s a monologic experience. You're, you're absorbing something. There’s the voice of this book that you're absorbing. Sure, you might have ideas in your head, but you’re not immediately - unless you are a note taker and writing in the margins, which nobody does anymore - you’re not turning that into a dialogic experience. And so that’s what's so cool about the large language model, especially as they get better and they hallucinate less. You know, this is all, all predicated on making sure the information you’re getting is accurate. But so far, in this last iteration, since the students have had available to them the ChatGPT EDU accounts from the CSU, now they’re all on a level playing field, with access to, like, the more sophisticated models, not the older models. I am seeing a lot of them making connections between things and between weeks of the semester that I never saw happening in my class before. Like they would just forget stuff that they learned in the beginning of the semester, whereas now it seems much more sustained.
CHAPTER 4 - Streamlining University Service: AI for Administrative Drudgery [16:21-25:04]
Taiyo: [16:41] That’s amazing. And you know, writing isn’t the only thing that ChatGPT can do pretty well, right? It can do a lot of other things. Like for example, it can code really well.
Sarah: [16:52] Oh, this is something that totally blows my mind and I know we’re kind of, we were just talking about teaching and now this will, I think, relate to some of the ways we’ve used it in our university service work.
Taiyo: Oh, yeah.
Sarah: [17:03] Um, but yeah, they can generate working computer code and then debug the programming mistakes of that code. Um, and so basically this opens up technical tasks to people without advanced training.
Taiyo: [17:14] These things do a very good job of coding, maybe not at the level of like the top 1% of coders in the world. But it certainly does a good enough job that you can take somebody who doesn’t know anything about coding and, if they were working with an LLM, quickly wind them up into a situation where they can write their own apps.
Sarah: [17:37] Yeah, I mean, I think this was the part that just really blew my mind, that somebody like me - I do not know anything about coding languages or how to code - but this year I was able to use ChatGPT to, to like make a code, to like scrape our website. Wait, why are you laughing?
Taiyo: [17:56] Okay. It’s very clear that you didn't know anything about programming because you just said make a code.
Sarah: [laughs] Why is that wrong?
Taiyo: And programmers do not talk like that.
Sarah: What’s the right phrase of that?
Taiyo: “Write code!”
Sarah: Just “write code?” “Write code.” Why isn’t it “write a code?” [laughs]
Taiyo: [18:14] [laughs] Because it's not!
Sarah: [laughs] I would say “use a language!” I guess “use language.”
Taiyo: Use a language?
Sarah: [18:23] Okay, so the point is, yeah, I could, I could write code without even knowing how to say “write code.” Because I could just say, “I need something that will allow me to do this thing,” and then it’ll say, “here's a thing in Python.”
Taiyo: [18:26] See, the difference between an LLM and a human is that if you ask an LLM to write a code, it won't laugh at you.
Sarah: [18:46] [laughs] That’s so good. So true. There isn’t the shame of messy human relationships when you're talking with your chat bot.
Taiyo: Exactly.
Sarah: [18:58] Thanks for teaching me the language. This is the value of interdisciplinary partnership. [laughs] But I was able to do something that was completely out of my wheelhouse and minimized like hours and hours and hours of my university service work this year, which as our campus Senate chair meant going over policies and dealing with the, like, bureaucratic and institutional nightmare of policies that were not readable and therefore not legible to LLMs. We have hundreds of current and archived policies that are not clearly marked on the website, and we need to compile all of them so that we can identify discrepancies. And so what I wanna do is not have to sit there all damn day locating these policies on different busted old websites and downloading them myself, and then realizing, oh, this one was actually an image because somebody scanned a piece of paper because the policy’s so old.
Taiyo: [19:53] Yeah, those scanned PDFs were scans of, of policies, which were written on paper. And so you couldn't just grab the text of those PDFs and use them in some way.
Sarah: Yeah.
Taiyo: [20:05] What was required was optical character recognition, which was able to look at the images of the letters and determine from that the letters that they represented,
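[A rough sketch - not the actual script from the episode - of the kind of OCR step Taiyo describes for scanned policy PDFs, assuming the pdf2image and pytesseract packages with poppler and the Tesseract engine installed; the file name is made up.]

```python
# Convert a scanned policy PDF into plain text via optical character recognition.
# Requires: pip install pdf2image pytesseract (plus poppler and Tesseract itself).
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("archived_policy_scan.pdf")  # each PDF page becomes an image

text = "\n\n".join(pytesseract.image_to_string(page) for page in pages)

with open("archived_policy_scan.txt", "w", encoding="utf-8") as f:
    f.write(text)  # now the policy is searchable text, not just pictures of letters
```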
Sarah: [20:16] But being able to, like, translate these images and then rapidly identify patterns and identify things that were out of alignment, completely minimized this work that I would’ve been like banging my head against the desk doing. So in all seriousness, I wanted to clarify something though that I didn't just do that by myself. Like, I had you there with me. We were together in your office, doing this mind-numbing, soul-destroying task of trying to identify all of Cal Maritime’s policies. You knew the terminology for stuff - like I didn't know what I needed to ask ChatGPT. So it’s not as simple as saying, “I wanna do this thing, tell me how to do it.” That’s not gonna get you a prompt that will actually give you the code you need.
Taiyo: [20:53] Are we sure that's true?
Sarah: [20:56] I don't think before that moment I even knew that you could write code to do that task, and so I needed you to be like, well, here’s the way you could do it. So you’re saying that if I just prompted ChatGPT and said, here's my problem, how do I solve it? It would suggest, why don't you write code?
Taiyo: [21:10] I think so. What? What? No, no, no. It wouldn’t suggest, “why don't you write code.” Or, or maybe it would, and then you would say, I don't know how to write code.
Sarah: [21:20] Yeah. And I believe from there I could prompt it to tell me how to do it.
Taiyo: Right.
Sarah: [21:24] But I think that, in that moment of learning something brand new for the first time, it was really helpful to have you there, like a human guide who knew the right language and knew how to phrase the prompt so that we got what we wanted really quickly.
Taiyo: [21:40] Right, right. So doesn’t this point to the value of a sort of education which promotes breadth, uh, where you’re not just focusing on, you’re not just sort of narrowly siloed into whatever discipline you're working in, but you have some basic awareness of what's possible by virtue of the other disciplines that you might not be studying in depth?
Sarah: [22:03] That is a beautiful framing for, I think, why general education is a good idea.
Taiyo: [22:07] I wanted to lead you there because I, that was what was in my mind for sure.
Sarah: [22:15] Yeah, I think so. I think you need to know that the thing exists before you know how to ask for it. And I guess there’s a certain extent where you can now use a large language model to describe the problem in a kind of nonadept way.
Taiyo: Right?
Sarah: [22:29] And then have it give you the language you need to prompt it, which is something I know that really got one of our colleagues excited when she was using ChatGPT and the deep research function after the rollout, and was asking me, “how do I basically get it to do the things that I want it to do?” And I said, “well, why don’t you tell it the problem and why don’t you ask it what the most effective prompt would be to elicit the response you’d like to get?” And she was like, “you can tell it to prompt itself!?” I think that is completely groundbreaking. And again, go, goes back to the access through natural language because it’s a conversation. You can go back and forth saying, well, here’s the thing that I want. And then if the solution doesn't appear to work, or if the solution just doesn't work, let’s say, let's say the solution, it proposes to write code. [laughs]
Taiyo: [23:22] Oh, excellent.
Sarah: [23:23] Thank you. And then as happened with us, that code didn't work right away. It needed a little bit of debugging. I remember at the time you used, you asked ChatGPT, how do I debug this? Right? “It looks like this isn't working. This is happening when I want this to happen,” and then it immediately identified the bugs.
Taiyo: [23:43] Yeah. What's really great about something like ChatGPT is that it doesn't face, it doesn't feel the same kind of frustration or despair at some of the inscrutable messages that you get when you are working in a programming language like Python. So oftentimes when you goof up something in Python, you'll get these error messages and hopefully if you're, if you're very good or experienced with Python, you probably are able to interpret those error messages and, and fix your code. However, particularly for neophytes, for students, for example, who are just learning Python for the first time, those error messages can be like a complete mind killer.
Sarah: Yeah.
Taiyo: In the sense that like, they stop your, like you, you just get frustrated. You don't know what to do with that error message. It means nothing to you.
Sarah: [24:37] Oh, totally. Yeah.
Taiyo: You know what I mean?
Sarah: [23:38] Yeah. I always think of Dolores in Westworld being like, it doesn't look like anything to me. Like that’s what happens to me when I get, when I was looking at that screen with you. [laughs]
Taiyo: [23:45] So nowadays what you can do is just copy-paste that error message and just shove it into ChatGPT with no further prompting - you don't even have to say anything - and it will diagnose the issue for you and then fix it.
CHAPTER 5 - When to Outsource, When to Engage [25:05-32:48]
Sarah: [25:00] Yeah. That's incredible. And that's something I see a parallel with in clarifying messaging. So I know a lot of instructors who've used it; they've put their own assignments in and said, “here's a way that students tend to misinterpret this assignment. How do I make the instructions clearer and make it clear that my intent is this?” Um, or “how do I rephrase these learning outcomes so they don't sound so jargony and are really clear in a measurable way for our assessment practices, but also in a meaningful way for students to understand what the point is.” So I've been thinking a lot about how fun it's been to outsource the kind of mind-numbing aspects of my job, and arguably this time that I freed up by outsourcing this work of writing code to ChatGPT and doing an administrative task that would've taken me a week freed me up to do things that I find personally more rewarding and intellectually stimulating. On the flip side, and this is gonna get a little meta for a moment, but last night when I was making a rough cut of this episode - something that we've been outsourcing to our wonderful editors at EditAudio - I felt like the through line of the episode only really became clear to me when I sat there doing the endless drudgery of watching clip after clip and then playing around with moving them around myself.
Taiyo: [26:19] Hmm.
Sarah: [26:20] And so, I don't know. I go back and forth on this, and this morning I'm like, yes, in praise of drudgery. Because getting so embedded in the text of like this transcript and seeing how we were talking about it made me see, oh, this is what I think we wanna say in this episode. [laughs]
Taiyo: [26:38] You've been thoroughly brainwashed, Sarah. I mean, listen. Okay, sure. There are some forms of drudgery that are probably quite productive, but I think we can agree that there are other forms of drudgery which are just miserable. I mean, I know for myself, even though I'm a mathematician, I love mathematics, I will absolutely admit that from grade seven to eleven, mathematics was my least favorite subject. It was mind-numbing. It was completely dull for me, right?
Sarah: [27:12] Well, how do you know like what drudgery is gonna turn out to be productive?
Taiyo: [27:16] Well, this requires discernment, and this is something that hopefully a good instructor, a good educator can provide their discernment about.
Sarah: [27:25] Hmm. Sometimes I think though, there's just things that, things that gotta play hard. [laughs]
Taiyo: [27:30] Oh yeah, like what? Like the five paragraph essay?
Sarah: [27:33] Well, that's for a whole other episode, but, well, hear me out on this, because I think there's actually a common argument that people made back when Nicholas Carr wrote The Shallows, uh, like a decade and a half ago - very famous book.
Taiyo: [27:46] Oh. Really?
Sarah: [27:47] Started as an Atlantic article called “Is Google Making Us Stupid?” And the thesis of it was basically the fact that you can now, like, Google stuff and you don't have to do the work of going into the card catalogs and putting in that effort.
Taiyo: [27:58] This is interdisciplinarity in action.
Sarah: [laughs] Yeah.
Taiyo: Okay, Wordcel. Go for it. [laughs]
Sarah: [28:07] So, so the argument was that he opens with actually a quote from 2001: A Space Odyssey. Like, “I'm afraid… my mind is going,” and talks about that, in an age of, of doing things on digital readers, he feels like his attention span is diminishing. And lots of people have reported this - that there's a feeling that people aren't reading as much anymore, that when we try, our attention is divided. And some attribute that to the fact that we're now reading in this piecemeal fashion; we're not sitting down with a good book in a quiet room. We are instead reading while we have like seven things open on our computers and music playing, and we're texting people and you're interrupted. And I think there's something compelling to that.
Taiyo: [28:43] Guilty, Guilty.
Sarah: [28:44] Maybe it's, for me, I think it's less about the reading on a screen in some cases, although some people do have a kind of tactile memory. But I think it's more about the disruption - that as soon as your train of thought is broken, I think you will be less likely to retain some of the information. Like, multitasking does not work. No matter how much I want to believe it does, 'cause I love multitasking, I have been convinced that it doesn't actually work, or that it will diminish, you know, what you do on one task; you should have just instead focused on that task. But anyway,
Taiyo: [29:13] This is again, a thing. So I've, we've all, I think heard, you can't multitask. Yeah. Humans cannot multitask. And I always feel a little skeptical when I hear those kinds of very broad, generalized claims.
Sarah: Mm-hmm. Yeah.
Taiyo: [29:27] Because I do think of human beings as being incredibly adaptable and flexible, and that the architecture of our minds does not necessarily dictate just one way of going about solving a problem.
Sarah: Totally. Yeah
Taiyo: [29:43] So, when I hear humans can't multitask, I think to myself, is that really true or is that just true about me or you, or about the way, or maybe it's a function of the way we've been trained through education. Maybe if education put a greater focus on multitasking
Sarah: on how to multitask?
Taiyo: maybe human beings will get better at it.
Sarah: [30:04] I'm looking for this article, I'll have to look this up somewhere, but there is an article about this in praise of multitasking, that makes this argument.
Taiyo: It does?
Sarah: [30:10] Yeah. Yeah, exactly that. Well, we haven't really practiced it the way we've practiced sitting quietly and reading. But to go back to this, this question about the drudgery, et cetera. So I think there's an argument that I find compelling that in offloading that work of actually reading all those policies myself, I didn't really get a deep knowledge of them. And so I, I wouldn't have known if there was a hallucination, I, I wouldn't have known if something was missed. You know, my, my check for that was to have the group of people who did write the original policies - like, it would've been an absolutely impossible task to get everybody scheduled in the same room in the middle of the semester to do this, and it was so much easier to just say, “Hey, I have now used a large language model to identify discrepancies in these policies. You are the author of the policy; can you please confirm the accuracy for your one little niche thing here?” And so it was kind of like harnessing the power of a, of a diffuse, like, collective group of people to get this task done in a way that I think was very efficient and had checks in place for accuracy. I think what you could say, though, is that I lost out on an opportunity to learn something about the institution in some way by, like, really sitting down and reading these policies. I'm… I say that, I can't even say this with a straight face, because I don't actually believe that part. Like, that drudgery to me? It's just like, nope. Which I think is interesting. But the reason I think that I don't see that as a missed opportunity is because in the time that I would've been sitting there downloading PDFs and reading policies that are defunct, I was able to do a whole bunch of other stuff that taught me more about the institution.
Taiyo: [31:48] Exactly.
Sarah: [31:49] Like it's, it is, in some way, there is a bit of a zero-sum game going on: with, like, every moment that you're spending doing drudgery, you are missing, you are not doing another thing
Taiyo: [31:59] for every moment that you're not doing drudgery, you could be doing something else.
Sarah: Yeah. Or every moment that you're stuck in drudgery is a missed opportunity to be doing something else.
Taiyo: [32:06] Yeah right. Exactly.
Sarah: [32:08] We're not talking about freeing up my time so I can go watch TV here.
Taiyo: [32:11] Right. So you, you probably do get benefit out of the drudgery. Mm-hmm. There's benefits to be had. I think that's an easy argument to make. However, the question should be, if you hadn't had to do that drudgery, are there greater benefits to be gained by doing something else.
Sarah: [32:32] Right.
Taiyo: Something that could be, you know, more impactful.
Sarah: Like for instance, learning a little bit about how to write code.
Taiyo: Very good!
Sarah: [32:41] Which now has opened up this whole world for me, and I'm taking our colleague Ariel's data science thing…
Taiyo: [32:44] And now you can make so many codes!
CHAPTER 6 - Unleashing Scientific Breakthroughs [32:48-35:38]
Sarah: [32:48] [laughs] And then I think for people who do have advanced training, these tools unleash the potential for scientific breakthroughs that were previously unimaginable.
Taiyo: [33:06] Right, right. So for example, there was this artificial intelligence released by Google, uh, specifically DeepMind, which is, uh, sort of the AI lab that's within the Google, uh, superstructure. But the achievement that, um, eventually earned DeepMind's founder Demis Hassabis the Nobel Prize was to build a deep learning model, uh, called AlphaFold, which was able to take the 200 million amino acid sequences of proteins that we know about and fold them - to figure out what shape these proteins will eventually have. You know, before you went into a coma, it would take a single scientist something like five years to figure out the shape of just a single protein, right?
Sarah: [33:54] People did like their whole PhDs on one protein, right?
Taiyo: [33:58] But now AlphaFold was able to fold 200 million of these proteins, thus compressing a task that would take a billion years of human-scientist time into mere months.
Sarah: [34:14] That's so wild when you frame it that way. I remember too, back when you first told me about ChatGPT, one of the things you were so excited about was the capacity to, like, crack mathematical puzzles that had eluded human mathematicians forever.
Taiyo: [34:29] Yeah, and for a long time it was sort of the conventional wisdom that these LLMs were not very good at mathematics. And that was true in my experience as well. I had it try to solve or factor even very simple quadratic polynomials, and it tripped on that very, very simple task. However, we're now at the point - in fact, very recently, Google again released something called AlphaEvolve, which is another artificial intelligence and neural network that is able to crack some of these mathematical puzzles that have eluded human thinkers and human mathematicians. And the insights that AlphaEvolve is generating are being implemented into Google's processes, thus improving… so just to give an example, it found a better way of doing something called matrix multiplication, which is a very important task for any tech company to be able to do well and efficiently. So incremental gains in the efficiency of matrix multiplication are gonna have very, very large impacts on a company's bottom line.
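[A toy illustration - not AlphaEvolve's algorithm - of why matrix multiplication efficiency matters: the textbook method for two n-by-n matrices does roughly n³ multiply-adds, so even small constant-factor savings compound at scale.]

```python
# Textbook (naive) matrix multiplication: about n**3 scalar multiply-adds.
def matmul(A, B):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # one of ~n**3 multiply-adds
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```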
CHAPTER 7 - [35:38-38:35]
Sarah: [35:38] There's a sense that we now have the capacity to exceed what was possible in human achievement: humanity at large can now accomplish things that would have been unimaginable on, like, the timescale of a single human life. Now that's happening in, like, minutes, potentially. So what are the downsides?
Taiyo: [35:57] So maybe the most glaring downside of these large language models is how much they hallucinate. So for example, it might cite journal articles with very plausible looking citations, but they're pointing to journal articles that do not exist.
Sarah: [36:14] Right, articles that don't exist, or mismatching authors. There's also the issue of bias. Mm. So they're gonna reflect the biases and maybe even amplify the biases that are at work in the data they were trained on.
Taiyo: [36:28] Right. So these kinds of amazing capabilities of large language models - well, they grant the user really immense capabilities and power. But what happens if this power, if this kind of power, falls into the hands of bad actors?
Sarah: [36:44] Of course there are a number of ways that people could use it for evil, but I think the important point to emphasize is that even in the so-called right hands, what happens when we have technological capabilities in the hands of people who - because humans are limited beings and finite beings - might not be able to see the ripple effects?
Taiyo: [37:05] So then this opens up the question of how do we get an artificial intelligence to share our values?
Sarah: [37:13] I think this is one of the hardest problems humanity has ever faced. How do you define human values to something non-human? Whose values? How do you translate human concepts like fairness, wellbeing, “don’t destroy humanity” - like, what does that even mean? - into precise code an AI can't misinterpret or find loopholes in?
Taiyo: [37:33] Right, that misinterpretation piece is also really interesting. You know, it reminds me of, you've seen Fantasia, right? Yeah, yeah. There was that story about Mickey Mouse. You know what I'm talking about? Mickey Mouse has to…
Sarah: Wait, the Sorcerer’s apprentice?
Taiyo: [37:48] Yeah. So Mickey Mouse, right. Like has to do these chores or errands. And so he casts a magic spell on the brooms, isn't that right? Yeah, yeah, yeah, to do the chores for him. But then the brooms start going crazy and start like flooding and everybody drowns or something like that. Is that how that works?
Sarah: [38:04] I think you're misremembering how that ended. I believe the sorcerer came in and stopped the brooms from… but yes. Oh yeah. He was about to drown. Um, great parable. Uh, it didn't start with Mickey Mouse. It started with Goethe, [laughs] which I drop not just to be a Comp Lit asshole, but to point out that throughout literary history, this has been kind of a perennial problem of how do you manage the outcomes of new technologies or things that are like beyond your capacity as an individual, starting from Prometheus stealing fire from the gods.
CHAPTER 8 [38:35-48:32]
Taiyo: [38:35] Given that these, that the interface for these artificial intelligences is natural language, what do you think are the core competencies that we need to be teaching our students in order to get them to a point where they're so-called AI literate, where they're able to use these tools in ways that are effective?
Sarah: [38:57] I think that what is, so what really makes me so excited, right? It's not - as much as I love, you know, having it write my emails and do these, the, the, like, service work that I hate doing - that's not really what excites me about it. The thing that kept me coming back and just makes me think that this is so full of potential in higher education is the kinds of collaborations that I've been able to have, I mean, even with you. Using this as a tool to translate between our two disciplines felt miraculous to me. And it's like you are a patient friend who will explain to me things about, like, your dissertation. And I'm just like, mm-hmm, I kind of get it. But there's a point where I, I just feel bad about asking you for the millionth time, like, what this means. And also because you're in your own field, you don't have the sort of fluency in, like, the way that I speak and think and, and frame things. And so what's so amazing about working with you and a large language model is that you can explain something and then say, okay, explain this to somebody with this particular educational background and also these, like, theoretical inclinations - why they should care about it. And then you're there as the human verifying, “Yes, that is accurate.” And so there's this kind of mutual growth that I think I see going on where it's like, I am gaining an understanding of what you do and then you're gaining an understanding of how to talk to me about it.
Taiyo: Right.
Sarah: [40:14] And that to me is everything. Like, this is, I think, a kind of fundamental problem that humanity faces right now: how to tap into the knowledge bases of people who speak and think differently from you.
Taiyo: [40:26] Absolutely. And we get such specialized trainings, uh, in our respective disciplines. We sometimes develop whole vocabularies for the same ideas or the same concepts, but we just talk about them in different ways. And because of that difference in vernacular, right, it becomes difficult to be able to communicate with somebody that's even in a very related discipline. So like you can have an algebraic topologist who is trying to talk to an algebraic geometer. These are two very, um, uh, specialized fields within mathematics.
Sarah: Yeah.
Taiyo: [41:03] And they have difficulty talking to one another. They share concepts, but they just have different words for them. And it takes work for the algebraic topologist to learn the algebraic geometer's way of thinking about things and talking about things mm-hmm. And vice versa.
Sarah: Mm-hmm.
Taiyo: [41:23] I imagine that this is, um, a problem all over human knowledge. Yep. Right. All over human knowledge. We're all inside of our tiny little silos where, you know, yeah. And sometimes, like, I've had the experience of, you know, writing a paper, advancing the, uh, frontiers of what's known about - in my case, right-angled hyperbolic orbifolds in dimension three. And you know who gets excited about these things? Maybe a dozen people in Russia. They're the ones that are in my little niche. That are like, wow, you did amazing work pushing things, uh, forward in our understanding of right-angled hyperbolic orbifolds in dimension three, and I'm like, wow, I can pat myself on the back. But yeah, ultimately, I wonder what the value is now, now that we have these kinds of tools that are amazing at, well, translating across disciplines - even related disciplines - but also at discerning kind of structural patterns in different disciplines.
Sarah: Yeah.
Taiyo: [42:34] And this gets into, um, something that's really taking on life of its own in mathematics. Anyway, this discipline called category theory, which is all about trying to find the sort of patterns within certain disciplines within mathematics.
Sarah: Oh, interesting.
Taiyo: And then linking or finding the similarities in patterns, which are encoded themselves in a kind of mathematical object called a functor or whatever, and, um, translating between worlds there. This is something that mathematicians already began the project of trying to do. Now with large language models, obviously it's in a kind of softer, language-based way.
Sarah: Right.
Taiyo: Um, but it's, it feels like it's doing something similar or it can be the kind of connective tissue that, uh, uh, allows a microbiologist to talk to a cancer researcher.
Sarah: Right.
Taiyo: Uh, for example.
Sarah: [43:34] And the potential there, aside from just being, like, a fun intellectual exercise and making new friends across universities - I think that, and so many people have said this, the problems of the 21st century are not problems that any one discipline or group is going to solve. They require the kinds of collaboration that, I think, pessimistically, perhaps were not possible before we had these tools to kind of navigate the sensitive emotional terrain that comes from having to explain your work to people who might not value it the way you do, or to explain your work using frames that just are gonna land badly or, or make people misunderstand what you're actually talking about. And so, I think about it as kind of like the triangulation of, like, two experts in different fields with a large language model as their little, like, robot co-teacher, and the potential that comes from having, like, the experts verifying from their end what's correct and then coming to consensus with this non-human intelligence about what the relations are between these fields. I think that just has an immense potential to solve, to solve structural problems in ways that maybe were unimaginable before.
Taiyo: [44:50] Oh my God, I have so much to say about this, really! Can I just go for a sec?
Sarah: Go,
Taiyo: Okay.
Sarah: Go!
Taiyo: [44:55] So in mathematics, I often think of there as being a mountain. Okay? There's like a mountain like this - a mountain, all right? And I call this the mountain of abstraction. Okay. Okay. And as you climb the mountain of abstraction, it gets more and more well abstract, that is to say more and more general. And mathematicians, they often start from different starting points down here at the base of the mountain. And they climb up the mountain in various ways and sometimes they meet up.
Sarah: Hmm.
Taiyo: In places. Right? They meet up in places even though they might have taken very different, very circuitous routes up the mountain.
Sarah: [45:35] I'm so bummed this is an audio episode, 'cause this illustration you're drawing on my whiteboard is like a work of art.
Taiyo: [45:41] Yes, I know. I know. You don't need to tell me, but you know what's, what's really interesting? This is what's really interesting. Sometimes there are fields that are climbing the mountain of abstraction and they miss this insight. In other words, they don't ever hit this particular point on the mountain.
Sarah: Right.
Taiyo: [46:00] But you know what's interesting? If you get to that point, and you have enough of an understanding of what's going on down here at the base, some base-level understanding of the various disciplines, then instead of trying to keep climbing up, you can turn around and you can look back down the mountain.
Sarah: Huh.
Taiyo: [46:18] And you can see all of the different paths, from the different disciplines that could have led to where you're standing now. And that act is generative! That generates new knowledge because those paths were not previously traversed.
Sarah: Right.
Taiyo: [46:40] They, for whatever reason - maybe it's sociological, cultural, maybe they just missed it - their focus is just different, but they didn't take those paths to where to land, where you are up the mountain.
Sarah: I love that.
Taiyo: [46:52] And this has been an engine for pushing mathematical boundaries forward, particularly in the 20th century, when the power of things like category theory, as applied to open mathematical problems of interest, was realized.
Sarah: Mm-hmm. I love this so much.
Taiyo: [47:10] Oh, so just to then expand out the view a little bit more [laughs]
Sarah: Mm-hmm. This is great
Taiyo: [47:14] If there's like a mountain of abstraction that encompasses not just mathematical disciplines, but all disciplines in some way, then what kinds of insights can we have if we have an oracle with a wide breadth of knowledge, like a large language model?
Sarah: the Tower of Babel with crisscrossing roads that… [laughs]
Taiyo: Absolutely.
Sarah: [47:48] And so maybe this is, like, the opportunity to think about that: We now have a tool that is gonna allow us to remix all of our disciplinary knowledges, and to do so in a way that is constantly drawing attention to the importance of a healthy dose of skepticism in what you read, and a consideration of structural factors like bias in the training data - the conditions of possibility that lead to a certain response. I think if we teach AI literacy well, we will teach those things, and that will ideally equip people to be able to collaborate in ways that they haven't been able to do before.
Taiyo: [48:15] Thanks for listening. My Robot Teacher is hosted by me - Taiyo Inoue.
Sarah: And me, Sarah Senk, and produced by EditAudio. Special thanks to the California Learning Lab for sponsoring this podcast.