My Robot Teacher Episode 10 Transcript
Teaching without a Script: Improv Pedagogy in the Probabilistic Classroom
Below is the full transcript of Episode 10 of My Robot Teacher (lightly edited for clarity and concision).
Guests:
Pedro Morales-Almazán: Teaching Professor of Mathematics, UC Santa Cruz’s Physical & Biological Sciences Division; specializing in regularization problems arising in quantum field theory and asymptotic methods in number theory
Julie Simons: Associate Professor of Teaching, UC Santa Cruz’s Baskin School of Engineering; specializing in applied mathematics with a focus on cellular motility, modeling, and computational simulation
Also available on: Apple / Spotify
COLD OPEN
Pedro Morales-Almazán: We don’t know what’s going to happen. And I think as academics we also should embrace that a little bit more.
Julie Simons: Once you add this huge probabilistic LLM, now you’re in a whole different regime where you have to be able to improvise.
CHAPTER 1 [00:18-6:40]
Sarah: Welcome back to My Robot Teacher. So Taiyo, when we last left off, you were waxing lyrical about Claude Code doing all the bureaucratic work that’s allowing you to now flip your classroom for the first time ever.
Taiyo: Yeah, the flipped classroom’s still going strong. Really incredible participation from my students. And maintaining my Canvas shell now is SO easy because I have this Claude Code-generated script, which will grab links from my YouTube channel and automatically post all of the content to my Canvas shell without me having to do anything. This is just a total game changer for me. Each of these announcements, if I’m at my best, would’ve taken me 30 minutes per class - 15 of which would be me feeling pity for myself as I wrangle all of these URLs and links and put them in a nicely formatted announcement. No longer: I can now offload all of this work to a script Claude Code has written for me using the YouTube API key and the Canvas API key. And it is magical.
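For listeners curious what a glue script like this might look like, here is a minimal, hypothetical sketch. This is not Taiyo’s actual script; it’s an illustration using the public YouTube Data API v3 `search` endpoint and the Canvas LMS REST API `discussion_topics` endpoint. The channel ID, course ID, Canvas host, and credential names are all placeholders you would replace with your own.

```python
"""Hypothetical sketch of a YouTube-to-Canvas announcement script.

Not the actual script from the episode; a minimal illustration using the
public YouTube Data API v3 and the Canvas LMS REST API. CHANNEL_ID,
COURSE_ID, CANVAS_BASE, and the credentials below are placeholders.
"""
import json
import os
import urllib.parse
import urllib.request

# Placeholder configuration: supply your own IDs and credentials.
YOUTUBE_KEY = os.environ.get("YOUTUBE_API_KEY", "YOUR_YOUTUBE_KEY")
CANVAS_TOKEN = os.environ.get("CANVAS_API_TOKEN", "YOUR_CANVAS_TOKEN")
CANVAS_BASE = "https://example.instructure.com"  # your institution's Canvas host
CHANNEL_ID = "UC_PLACEHOLDER"                    # your YouTube channel ID
COURSE_ID = "12345"                              # your Canvas course ID


def fetch_recent_videos(n=3):
    """Return the channel's newest uploads as [{'title': ..., 'url': ...}]."""
    query = urllib.parse.urlencode({
        "part": "snippet", "channelId": CHANNEL_ID, "order": "date",
        "maxResults": n, "type": "video", "key": YOUTUBE_KEY,
    })
    with urllib.request.urlopen(
            f"https://www.googleapis.com/youtube/v3/search?{query}") as resp:
        items = json.load(resp)["items"]
    return [{"title": it["snippet"]["title"],
             "url": f"https://youtu.be/{it['id']['videoId']}"} for it in items]


def format_announcement(videos):
    """Build the HTML body of the Canvas announcement from a video list."""
    links = "".join(f'<li><a href="{v["url"]}">{v["title"]}</a></li>'
                    for v in videos)
    return f"<p>New lecture videos are up:</p><ul>{links}</ul>"


def post_announcement(body):
    """Create a Canvas announcement via the discussion_topics endpoint."""
    data = urllib.parse.urlencode({
        "title": "This week's lecture videos",
        "message": body,
        "is_announcement": "true",
    }).encode()
    req = urllib.request.Request(
        f"{CANVAS_BASE}/api/v1/courses/{COURSE_ID}/discussion_topics",
        data=data, headers={"Authorization": f"Bearer {CANVAS_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (makes real API calls, so left commented out):
# post_announcement(format_announcement(fetch_recent_videos()))
```

Run on a schedule (cron, GitHub Actions, etc.), a script in this shape would keep a Canvas shell in sync with a YouTube channel without any manual link-wrangling.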
Sarah: It SOUNDS magical. But what’s your classroom like now? Did you manifest the dream of the 2010s? Is everyone engaged in active learning in class now?
Taiyo: YES I DID! I did manifest the dream of the 2010s! Don’t mock me! I’ve got 25 students, they’re watching my lecture videos before class, taking a pre-class quiz to test their knowledge, and then coming to class and spending 50 minutes 3x a week - the entire time working, solving problems, discussing course content with their peers while I walk around and give a healthy mix of positive AND negative reinforcement.
Sarah: Oh you mean, when you shine a laser pointer at them and yell, “Where’s that number gonna go?”
Taiyo: Look, I’m not actually burning their retinas, okay?
Sarah: Oh that’s good to know you’re experimenting within Cal State approved boundaries.
Taiyo: But seriously, I really think this AI enriched flipped classroom idea is so powerful for this particular moment because you’re turning your classroom into a community of learners.
Sarah: Mmm, love that.
Taiyo: There’s a kind of social accountability that kicks in, something that doesn’t happen when all the cognition is happening alone with headphones on at midnight.
Sarah: Well or worse, like being outsourced to ChatGPT because you give them the problem set to take home and you have no guarantee they’re actually doing it themselves anymore.
Taiyo: This was a problem even before ChatGPT; the internet already made it way too easy to find ready-made solutions to your homework.
Sarah: Yeah
Taiyo: But now it’s even easier to give away your cognition entirely. And I think that’s a tragedy. And I feel like this intense active learning experience means they can’t completely offload their cognition because they’re being asked to demonstrate and own it publicly!
Sarah: Yeah, that also feels to me like accountability without like icky surveillance.
Taiyo: What do you mean by that?
Sarah: It’s not, like, “I’m gonna put your problem set through an AI detector and hope that’s actually effective.” It’s more like you are expected to come to class and demonstrate the cognitive moves that allow you to solve a problem and understand what you’re actually doing in this math class.
Taiyo: Oh yeah, for sure. I’ve got my students working at the whiteboards, right, in these small groups, so they have to exercise judgment, um, constructive critique, analysis. They’re trying to understand what a peer who made a mistake was actually thinking, and then they have to demonstrate a kind of leadership in explaining where things went wrong. They’re problem-solving in teams, it’s like real teamwork. And I think it’s beautiful. These soft skills, social skills, whatever you want to call them, are going to be of paramount importance after college.
Sarah: Aww, you used to hate soft skills?
Taiyo: That is a malicious lie!
Sarah: [laughs] I think it’s interesting that we’re not even talking about AI in the classroom in a student-facing way. We’re talking about AI inasmuch as it’s what let you do this for the first time, because it was no longer such a leap once you didn’t have to spend all that time setting up the infrastructure yourself. But you’re not using AI in the classroom, right?
Taiyo: Yeah, I’m not using AI actively in the classroom, like in a student-facing way. I’m not closed off to it at all. But I’d still call what I’m doing right now “AI-enriched active learning.” But I think this is definitely different from what you’re doing in your classroom, which is extraordinarily student-facing AI use.
Sarah: Right, right. And in your case the class is AI-enriched and enables you to turn something that was passive learning into an active environment. I get what you’re saying. So how do you think it’s going to turn out?
Taiyo: I have no idea. We’ll see. I mean, I’ve got a basic underlying sense of what is effective. I know active learning is good. But really, when it comes to the day to day classroom dynamics now… I’m kinda just making it all up as I go along.
Sarah: Well, that’s fitting, because this episode is all about trying new things and adapting, and improvising inside the classroom with students (and sometimes with AI) in real time.
Taiyo: That’s right. We sat down with our friends Julie Simons and Pedro Morales from UC Santa Cruz to talk about what happens to the classroom when none of us quite knows what’s coming next. And maybe I should explain that I had a cold on the day I recorded this, which is why it sounds exactly like I have a cold in this episode. So anyway.
Sarah: Yeah, we didn’t train a very nasal AI bot on Taiyo’s corpus of work.
Taiyo: Did I really sound that bad?
Sarah: No.
Taiyo: Anyway, please enjoy this conversation with Julie and Pedro.
CHAPTER 2 [6:40 - 12:38]
Sarah: So we’re here right now with our former colleague, Julie Simons, who used to be our colleague down the hall at Cal Poly Maritime, and then defected to UC Santa Cruz.
Taiyo: Whoa, Sarah, relax.
Julie: Oops!
Sarah: And she’s here with one of her new work friends.
Taiyo: You know, Sarah, Julie is allowed to make new friends. You realize that, right?
Sarah: I know, I know. But I do wanna know, Julie, what is the basis of this new work friendship?
Julie: Well, in addition to Pedro successfully recruiting me, he and I are both teaching professors in sort of companion math departments at UC Santa Cruz. I’m in the applied math department, and he is in the math department. So we chat a lot about the issues our departments are facing. We co-teach some classes and things like that, so it’s been really great. The other thing that links us, I think, is our interest in DEI - you know, belonging and equity are really core to my values, and thinking about how to reach all students, frankly.
Taiyo: So if you’re a teaching professor, does that mean you don’t have to do as much research? Or what does that mean in terms of like, in terms of what you do at your job?
Julie: Yeah, I mean, I think it just tilts the balance of all the things that I think every professor really has to do. We’re all involved in teaching scholarship or research and service at our institutions, and if you’re a teaching professor, you’re, you’re expected to lean in more on the teaching side, but we’re still doing research as well. Pedro and I both have active research projects, both within sort of our mathematical disciplines as well as thinking about research questions that relate to pedagogy.
Taiyo: Wow. Cool.
Pedro: I mean, on top of what Julie says about our roles as teaching professors, which actually I wanna highlight are pretty unique to the UC system, I think they’re getting more and more popular at other institutions as people realize how much this particular role is needed in higher education, um, because it’s not only about instruction, but also thinking about teaching and pedagogy in general, holistically. ‘Cause we also have to think about our students’ career paths. Of course, we want to think about AI and its impact beyond just individual classes, but also on their own development as professionals. I would say that teaching professors, we are very equipped to think about these things, because it should be more than just checking to see if students are cheating or not, whatever that means. And also to think how we can effectively use new technologies to promote their own learning.
Sarah: Oh, I love that. I totally agree. Julie, can you tell our audience a little bit about your stance on AI?
Julie: Yeah, I won’t say that I’m on, you know, one extreme or the other. Like, I’m not an evangelist of AI, but I’m also very much not opposed. Uh, when all this stuff came around, I remember Taiyo asking me if I had seen all of the news about ChatGPT and if I’d tried it out. And he was so excited. And his eyes got real big. And he’s like, you’ve gotta try it. And I remember being like, okay, I’m gonna go try it, because it’s Taiyo, and what he tells me to do, usually I take seriously.
Taiyo: Aw.
Julie: So I checked it out, but I wasn’t like the two of you, um, like Taiyo and Sarah, about this. Both of you, I think, immediately got thinking, “Oh, we could use it to do this, and I wanna test these edge cases,” and all of these fun experiments that you have done. I wasn’t really on that page, but as I started playing with it, I started thinking about how students could interact with this, started thinking, like many people did, about what this might mean for personalization of education and tutoring. And more recently my feeling is also really centered around students and not letting students get left behind.
Sarah: Hmm.
Julie: I think a lot of our like STEM focused students tend to lean into technologies and get really excited about this, but they also might have, you know, the prerequisite skills to know how to test these tools and figure out how they work much more readily than some of our other students. Those other students are the ones that I spend a lot of time thinking about how to sort of support them and help in their personal development. And some of those students who are not exploring new tools have really great questions about ethics, about environmental stewardship and all of that. But what I worry about is whether those students get left behind by self-selecting out of it or by like society essentially selecting them out of it because of all sorts of barriers.
Sarah: And then they’re absent from the development of those technologies, yeah.
Julie: Exactly.
Sarah: Mmm. What also worries me is that self-selection could exacerbate achievement gaps, too: I see this happening now where some students opt out, others use it really effectively to support their learning, like, you know, doing stuff like using the quiz feature on ChatGPT or asking it to explain a concept three different ways (and confirming with me the one that clicks for them is actually accurate). And of course there are the risks to those who use it to avoid learning or offload it. Are you also seeing that reflected in the faculty conversation?
Julie: I think this is the debate that’s happening on every campus right now; there’s faculty who are really promoting it, faculty who are really not, and then when I talk to people who are hiring our graduates afterwards, they’re all expecting students to be AI literate, and not just literate, but AI savvy. I’ll talk to people who are like, I expect my new hires to pass their ideas through five different LLMs, vet all of them, and then explain which one is the best. Now that’s a level of cognition and understanding that we used to expect not from entry-level positions, but really after getting some experience. And so that’s now sort of leveling up what employers are gonna expect from new employees.
Sarah: Mmhmm.
CHAPTER 3 [12:39-15:27]
Taiyo: I think there’s always been a kind of debate in higher education about, you know, to what extent should we be in the service of the economy, of the workforce? Right? Uh, one of the roles certainly that we play is workforce development. Pedro, do you have any thoughts about that?
Pedro: I have a lot of thoughts about that.
Taiyo: Oh, please go for it.
Pedro: Maybe too many thoughts, but I think I have some practical thoughts about it. I think that we should acknowledge that a lot of our students come to higher education because of that. It’s a reality. But I also think that we have a responsibility, even more in the California system as public servants, to contribute to society and to support our students becoming better citizens. This means I don’t think it’s just about skill building; I think it is about holistically supporting them to become the best versions of themselves and, um, supporting their role in society. So I guess the question now translates into what kind of society are we going to have now with AI being an important actor in the economy, in entertainment, even in politics. Part of what Julie, I think, is also addressing is how to develop the critical thinking that students will need in order to navigate this new world. I mean, it’s becoming a new world, and one thing that I want to share, to say proudly, is that we don’t know what’s going to happen. And I think as academics we also should embrace that a little bit more. A lot of what we hear, in our bubbles or not, is highly speculative. We’re making things up, and I think we are addressing our own beliefs, probably hopes and fears, but sometimes I believe as academics we should acknowledge that we don’t have the answers yet for what’s going to happen with AI in a lot of fields. And specifically to our conversation right now, in learning. We don’t know to what extent this is going to impact student development, learning, and behavior in general. So I do believe that it’s important for us to recognize, again, that we are just still discovering and still figuring out how to better use AI in the different areas of humanity, but also within higher education.
CHAPTER 4 [15:28-24:12]
Sarah: One of the most compelling arguments I hear from colleagues who don’t want AI in their classrooms is that a work product is now so tantalizingly easy to generate that students will be unable to resist the temptation to reach for it, instead of building their own tolerance for persisting through not knowing. So how do we help students develop the tolerance to deal with uncertainty, and persist through not-knowing and also set expectations of what is and isn’t appropriate, while we’re also still figuring it out?
Pedro: So I personally would say it depends on the class, and what I try to do is usually, at the beginning of the term, to have a discussion with them, an honest discussion like, “Hey guys, what do you think would be an appropriate use of AI for this class?” And then it’s very interesting, because you are not coming in with a lot of assumptions; you’re genuinely interested in their learning. And this class was great, because we spent almost an entire lecture day on what would count as cheating, quote unquote, and what would be allowed.
Julie: I think that’s really great, Pedro. I think, you know, we’ve always tried to incorporate students, or at least more recently, the trend has been incorporating students into understanding the course structure. Over the last maybe 15 years, faculty have moved towards things like co-creating syllabi with students and talking through this. And so engaging them in understanding the learning process and our goals, and allowing them to goal-set in the class, I think is really helpful and really grounding for students, to try to recenter them on the process of learning itself. Anyway, that was just what this brought up for me. But going back a little bit to Sarah’s question about struggle, offloading all of that struggle to LLMs, and people’s concerns about this: I think, you know, a lot of us have been thinking about how to get our students to productively struggle for many years. A lot of our students, especially we see this in math classes, uh, where students will be like, “I was working on this problem for three hours last night.” And it’s a problem that should definitely not take that long. And the question then is: what are you calling working on the problem? That doesn’t sound like productive struggle to me; that doesn’t sound like you’re making any progress. And so getting students out of that unproductive struggle is actually one of the things that a lot of people who are trying to create AI tools specific to education right now are focused on: how to get students unstuck. How to get them towards a struggle that actually leads somewhere, instead of just feeling demoralized by a situation that doesn’t promote learning.
Sarah: Hmm, yeah, that’s something I’m definitely grappling with right now, you know, how do we create situations in the classroom that PROMOTE learning? It’s a perennial problem, I think. But what you said made me think of this social media post I saw this morning where someone said they have gone entirely back to pens and blue books because, I think the quote was, they “don’t want to get AI slop.” And I think there’s an assumption there - well, maybe more than one - like, first of all, that all AI writing is slop. I think it takes some skill to get NON-sloppy outputs. But, more importantly for the point I’m trying to make here, I think it presupposes that thinking alone with that blank page in front of you is the most rigorous kind of learning. And just to clarify, I’m not anti-pen-and-paper at all. I actually do a bunch of handwritten assignments right now in my own class, but I tend to treat them as one part of a scaffold, or one scaffold inside a larger writing process where students are allowed to use AI. And I’ve observed that when students use LLMs in a structured way, it can surface different cognitive “moves” in really interesting ways. But this is such a weird time where nobody has the answers yet, and so a lot of what I’m doing right now is intuition. It’s patterns I’m noticing in my students, kind of like hunches about what’s helping or not, and a sense of what seems to be backfiring or working. And so what I keep thinking about, rather than whether all AI use is good or bad, is really: to what extent does using an LLM help a student in a specific context, and with what other scaffolding in place, right?
Pedro: Sometimes we as educators also have to acknowledge that we don’t know what’s the best way to use these tools. We are still adjusting to this. And sometimes I feel - I don’t know how you guys feel about this, but - I feel like we have this pressure to say one way or another whether AI is good for society or bad for society, or good for learning or bad for learning. And I personally like to replace that expression, that belief, with “to what extent?” I think that a lot of the applications are on a spectrum. They’re not binary. Just recognizing that, I think, is something refreshing that we might need in academia: to sometimes acknowledge that we don’t have the answers, that we don’t know certain things, and that should allow us to be maybe more humble towards discovery, because in the end I do believe that academia has that responsibility. We are in pursuit of knowledge and discovery and curiosity. I think as educators we also should embrace that, and should embrace that we are discovering, we are trying things, and not everything is going to work out perfectly fine, but that’s okay. It’s part of the process.
Julie: You know, a lot of us academics think that academia should be this sort of bastion of curiosity and intellectual exploration, and that we have this great culture of it. But one thing I’ve been thinking about: we’re all very discipline-specific. We’re trained not just in one department, but in one subject area. We’re all so highly trained to be curious, really, in a narrow band, and things outside of that band are still scary to us. We may think they’re interesting, like, from afar, but to engage with that and admit just how little we know outside of where we’ve been trained is a challenge. When I think about this translating to how we teach and how we interact with students and what topics we teach, I think there’s a lot of fear about not just change, but the loss of control of our classroom and of our student experience. Now, we’ve never really had control of the student experience outside of what little pieces we control. And of course, our students have always done different things outside of the classroom than what we expect. Thanks, Chegg! But I think the thing for faculty right now who are maybe experiencing fear or resistance to this is: I don’t know what will happen. Not only do I not know how it’s gonna impact student learning, I also just don’t know how to react in that moment in class when students are engaging with these tools and they spit out something that is probabilistic in nature. This is not a deterministic process, and that feels scary for some faculty who are used to saying, I’m gonna go today and teach this lesson plan that I’ve created. I know what questions I’m gonna ask students. I kind of know what questions they might have or where their mistakes might lie. Once you add this huge probabilistic LLM behind everything and sort of let that loose, now you’re in a whole different regime where you have to be able to improvise. You have to be able to think on your feet and somehow address this. And that’s a scary, scary thought process, I think, for a lot of folks.
Taiyo: Just real quick, one of the most pleasurable things that has come out of the ChatGPT moment is the five-year Chegg stock ticker to see what happened when ChatGPT was released. This was a stock that was trading at around a hundred dollars five years ago, and now it’s at 79 cents. So just saying…
Julie: Beautiful. [laughs]
CHAPTER 5 [24:13-35:06]
Sarah: This conversation about fear and loss of control, though, it’s making me think of that discussion that we had last summer, Julie, when you introduced us to Pedro, um, and we were talking about improv, and how you both like to think about it as a pedagogical tool or at least a faculty development, uh, thing. Um, I’d love to hear more about that.
Julie: Yeah, for sure. Well, so this all started, uh, with Pedro sharing with me that he’s been thinking a lot about improv and pedagogy, which was, to me, outside of my wheelhouse. I am not an improv person, not an actor or theater person. Those ideas scare me. But then I went to a conference last fall where we actually did an improv session, and I was asked to lead one of those. I don’t know why, but I learned about all of these different improv activities that folks do, and it was amazing. It was like icebreakers on drugs or something, I don’t know, maybe on something really nice. But it was really transformative for the whole vibe of the conference. People got to know each other, and it set a norm of community from the beginning. That was the most lovely conference I’ve ever been to. And if anybody listening has been to an academic conference, you know it can be scary - a lot of people not talking to each other, or only talking to those people they already know - and this totally flipped all of that upside down. But Pedro actually is the expert in the room. Um, he’s been doing improv for quite some time and has been leading an improv group in the math community at UC Santa Cruz.
Sarah: Pedro, do you see this as like a faculty development opportunity? Get more educators comfortable with improv and maybe people will be a little more adaptable and willing to experiment in the classroom?
Pedro: I see it more as a philosophy of life, honestly, because it is. And this is to follow up what Julie was saying about that fear of losing control: it is a good antidote to that. If you’ve ever seen improv, you see how smooth things can be without any prior preparation - and I mean preparation in a very loose way, because the preparation has taken probably years and years of skill building. It is a different type of preparation, and I honestly think that in our classrooms we now have to figure out a different kind of preparation. That’s the challenge that we have now with AI. The basic idea of improv, the way I like to think about it, is that it provides you with a framework, and then you just basically focus on the little details on the go, which is doable, and it actually allows for better collaborations. If I tell you the script of a sketch, that’s just me basically speaking through your voice. But if we both are improvising a scene, both of us are creating at the same time; we’re co-creating. It would be wonderful if we could have a co-created experience in the classroom where not only students, quote unquote, are learning, but also the instructors - we are also learning. And I’ll be a little bit more radical, or maybe cynical is the best word, because I think that actually it is usually the opposite: it is usually the instructor that is learning, and the students are just there observing the instructor learning. Again, maybe that’s a little bit radical, but I do think that both instructors and students can come to this common place to co-create learning.
Sarah: I guess it is a bit radical to talk about it that way, but it makes sense to me: we talk about “student learning” but often in the classroom what’s most visible is actually the instructor’s thinking. We’re the ones iterating in public, breaking ideas down, uh, adjusting explanations for different audiences. I agree we need to stop treating students like spectators of our expertise and instead treat them like apprentices in the types of cognitive habits that produce expertise. I also think, I bet some people hear “improv” and assume that means no structure. But what you’re describing sounds more like a different kind of structure, or at least like it might require us to be more intentional about how we structure things and what we ask of students?
Pedro: I think that, um, at least my philosophy is that we can use this improv framework to be more successful at achieving our learning goals. But the price to pay, I guess, is that we have to be very clear: what do we actually want our students to accomplish? In other words, I think that we have to have a very clear idea of what it would mean for our students to learn something. And that’s the challenge that AI is bringing to our classrooms. I think that back in the day, um, we were all like The Karate Kid, like Mr. Miyagi, right? Students were learning without knowing that they were learning something, and as instructors, we were trying to teach them without telling them what was going on. Now we’re being confronted with having to be very clear about what we are doing, to what purpose, and how. So I would say, again, that’s my hopeful point of view of how AI is disrupting higher education: it is pushing us to really think and be very intentional about what it is our students will learn, how they will learn it, and how we can get there.
Sarah: Yeah, in the original Karate Kid, that “wax on, wax off” strategy assumes that students will trust your authority long enough to actually do the thing before they even get the point of it, which, I think, hasn’t been the case in higher ed for a while now, at least in my experience. So yeah, I think one of the ways we build that trust is to give a super clear articulation of what they’re learning. But that question - “How will they learn it, and how can we get there?” - that’s something, the exact route can’t be pre-determined for every student. So often in the classroom we teachers discover the “how” through iteration, and we might make missteps along the way, so it feels to me like part of our job now is also to teach students how to iterate through mistakes without shame.
Pedro: I think that’s also our role, right? Even more in math and in STEM, I do believe in the power of being wrong and learning from that. I usually joke that math is the only subject in which having problems is a good thing. We want to have problems, and many times we want to fail at them. Failure, sadly, has a negative connotation, where I do believe that failure is a sign of learning and progress: learning through failure.
Julie: Right, Pedro? This was when I brought my TAs that I was training - I was teaching a TA training class last fall, and we took a field trip to one of Pedro’s improv sessions. And, um, that was the big point that I think was so lovely that you brought to our students: embracing failure, and talking about how important that is in improv, but also in our classrooms, in our pedagogy, and modeling that for our students. And as we were doing these improv activities, you know, you have different sorts of things that you’re asked to do, um, physically with your body, and things you’re supposed to say and whatnot, and it goes quickly, and you’re supposed to sort of get into, I guess, a flow state with us and just not try to be in control. And that’s the thing that a lot of our students, and faculty, frankly, struggle with, I think, in a lot of these realms, because we’ve been taught that being correct is important. Knowing is important. And of course we want to know. We want that curiosity, and we want to learn and develop. But letting go of shame about mistakes, embracing that, and being able to laugh and sit with and celebrate the mistakes was something that was so beautiful about the improv session - like, Pedro told us at one point, if you make a mistake, you have to own it and shout, “I made a mistake. Yay!” and move forward with that, you know? And we all got into this business of teaching, I think, really focused on, or motivated by, facilitating growth in students. That’s what we do this work for. It doesn’t pay us a lot. There are a lot of other issues that come along with choosing education as your profession, but it is so amazing to be able to facilitate student growth and get them where they wanna be.
Taiyo: Do y’all ever make mistakes on purpose during your classes?
Sarah: Because Taiyo does.
Pedro: All the time!
Taiyo: Yeah, I do it too. And then what I like to do is try to gaslight my students into thinking that what I did wasn’t a mistake. They don’t like that very much.
Julie: They don’t like it.
Taiyo: No, they don’t like it when I do that. But I think it’s hilarious.
Julie: But that is also like, that’s again, that’s the kind of skillset that I want to be teaching them to do with the output of, of LLMs. Right?
Taiyo: Right.
Julie: To be critical, to be like: it’s spitting out this thing that, sure, it sounds right. You know, this sounds like something Taiyo would’ve told me in the classroom.
Taiyo: And be critical in particular of people in positions of authority. Like it’s incredibly important, right? Particularly these days. I mean, good god, just look at the news, right?
Sarah: Yeah. This takes me back to that, and I always say: “AI slop? What about human slop?”
Taiyo: Wait, hold on. You’re not saying that I was doing human slop, are you, Sarah? You better not.
Sarah: No, you were doing intentional human slop, right? Oh, yeah. I love how you, I mean, I’ve like heard the legends of you doing this, of being like, “this is right, right? It’s right? Right?” And, and the students all sort of looking at each other.
Pedro: You know what really scares me? It’s not artificial intelligence, it’s human stupidity.
Sarah: Boom. Yep, yep.
Taiyo: There you go.
CHAPTER 6 [35:07-42:53]
Taiyo: You all are on the front lines of math education in the 21st century. And one of the trends that I think we’ve all been feeling and observing is an increasing lack of preparation in our incoming students. For those that are not familiar, there was this report that came out of UCSD from their faculty senate, which reported on just this issue - that they’re seeing increasing deficits, if that’s the right word, and cracks in the foundational understanding of basic things: basic mathematics, basic writing skills, and that sort of thing. What do you think about that? Does AI have any role to play in helping educators with that issue? Particularly in light of some of the legislation, especially in the state of California, which is making remediation a sort of stigmatized, dirty word. What do you think about that?
Julie: [silence]
Taiyo: Oh, is this too spicy? Wait, hold on. Maybe this is too spicy?
Sarah: No, this is great. This is great. Literally like, oh, he’s going there.
Julie: Yep, yep. And we’re here.
Sarah: Maybe, maybe some context for listeners unfamiliar with, uh, the, the state of math education in the public system in California, a little background would be good.
Julie: Oh dear. Where did this start? Um, basically - gosh, what was it, 2017, Taiyo? - the CSU was mandated to remove any remediation classes. And by that, what we mean is classes students were required to take that are not considered college-level math, but that were deemed necessary before getting to, say, a pre-calculus class or a calculus class - what we consider a college-level class, one that is credit-bearing for students, meaning they actually get something on a transcript for it. In the past, with remediation, they sometimes had to take non-credit-bearing classes in order to finally get into credit-bearing classes. And so that was essentially banned in the CSU around 2017. And then more recently, the state legislature mandated the same thing in the community colleges. So now, basically across the entire state, this is essentially the deal. The UC system, because it typically has admitted students with more preparation, has not seen the same level of impact from that legislation. However, we are also seeing the same preparedness issues. I think every institution across the country is seeing math preparedness as a major, major problem. So in the UC San Diego study that you cited, Taiyo, students are coming in not being able to do eighth grade math. How do you throw them into a calculus class without being able to do eighth grade math? They don’t have those foundations, and now we can’t do remediation. So a lot of the solution has been to add co-requisite courses to somehow support students through this, and there’s been some compelling evidence that that’s worked on some campuses and not on others. I think it really depends on how well it is supported, and it’s really hard to do this at scale.
Julie: I know on our campus at UC Santa Cruz, it’s a major challenge just figuring out how to place students appropriately and support them. There are a lot of different support mechanisms, but it’s hard to know what is working and what is not, and what the most effective practice is right now. So I do think a lot of people are hoping maybe gen AI can solve that. But you can’t just assume that because it’s out there it will - it’s like how everybody thought 20 years ago that universities would cease to exist because you could learn everything for free on Coursera.
Sarah: Hmm.
Julie: That is not how most people learn. Honestly, I really do believe a lot of us learn best in community, in conversation, and by relating to each other. And so one thing that we talk about is that we still want that human connection, even through AI. We want to be using AI, using different tools, and preparing our students to be able to use these tools. But the soft skills are so important too. What they need to be able to do is also work with other humans, and that’s a challenge for some of our students right now.
Sarah: Wait, let me make sure I’m understanding this. I always thought that the catch-up route, if you didn’t meet the expectations for high school math, was that you could go to community college, get your pre-algebra foundation with real, substantive support, and then come back and take whatever courses are required - college algebra, calc, whatever. Is that still true? I thought our community college system was really incredible at that aspect of education.
Julie: That certainly was an option for students. It used to be that… I’m really proud of our community college system in California. I think they do amazing work, and I’m really proud of the transfer students that we get from community college. They are amazing.
Taiyo: Damn right!
Julie: And it used to be that I could tell students, “Hey, actually, if you went and took this class at community college, you might get more support than at the CSU,” just because of how the systems work differently and how their aims are different. That now is no longer really the case, because they’re subject to essentially the same rules. They still have different support mechanisms, but it’s a real challenge. Like, if you don’t know eighth grade math, there’s no class for you.
Sarah: Well, unless you can afford to pay private tutors or something. Wow.
Taiyo: Yeah. I know this is an often-cited use of AI, or potential positive benefit of AI. And Julie, you mentioned how not everybody learns in front of a screen, as you would have to if you were working with an AI tutor. But do you think there’s potential for AI tutors to sort of level the playing field? Because we know - and this was the case even when I was in high school 30 years ago - there’s the SAT-prep industrial complex, there’s the Kumon industrial complex, that we know rich families are able to put their kids into to accelerate their learning through these kinds of extracurricular activities, which are just not an option for all families. I wonder to what extent this personalized, 24/7, infinitely patient, etc., etc. - you’ve heard the drill - AI tutor could level that kind of playing field. Do you have any thoughts about that?
Julie: I think there’s potential, with a lot of asterisks, I would say. A worry I have is, of course, that even with AI we have paid models and unpaid models, right? You get a very different product depending on what you are or aren’t paying for. So there’s an equity concern there. I mean, I think a lot of us who work in the public education systems across the country, and in California certainly, are really hoping to reach first-generation students and to help social mobility for the general public - that’s one of our core values here. And so, is that gonna reach that population? I’m not convinced. I think there’s still a lot of work to be done.
CHAPTER 7 [42:55-47:26]
Sarah: It brings me back to something Pedro said earlier about what is the purpose of higher education - that it’s framed sometimes as workforce preparation or, uh, also character development or holistic development of the whole self. And it feels to me that sometimes those things are positioned as mutually exclusive. I don’t think they are.
Pedro: Yeah, I would agree completely with that, Sarah. Um, again, I don’t think it’s binary - it’s “to what extent,” right? It’s a spectrum. And what I think is more important for us as instructors and leaders in higher education is to be aware that these two things might not necessarily be competing against each other. We don’t have to choose to do one or the other. One of the things happening right now in industry is that a lot of entry-level jobs are basically being outsourced to AI. So if you are left with an economy in which you only have middle- to senior-level jobs, how do you become a junior so you can progress? And I think there needs to be this conversation between higher education and industry, because it’s not effective for anyone. It is not effective for anyone - maybe in the short term it’s really good, it will boost the economy or whatever, but in the end, if all you’re left with are senior-level jobs, how are students going to get those? So I do believe there are a lot of things we really need to be intentional about: again, going beyond the classroom and going beyond tools to improve productivity, but also asking what the role of AI in society is, in some sense, and how we can be intentional about that.
Julie: Yeah, returning to the equity issue, one thing that keeps me up at night - you know, faculty love to worry about all sorts of things affecting our students - is that I meet a lot of students who are anti-AI, or just not convinced that this is the direction the world should be going. And what I worry about is that this is different than other technologies. AI is being trained on its users, and so if the people who are opting out of using gen AI are the ones who are most ethically minded, who care most about society and our planet, then these tools are going to miss out on those voices. And those are the voices, I think, that are the most critical right now in leading us where we need to go. This presents a challenge, right? You have a student who doesn’t wanna use gen AI because they think it’s an unethical tool to use - it’s destroying our planet environmentally, there are data privacy issues, and ultimately, potentially, the thought is it might destroy humanity, right? Or overtake humanity. And so do we wanna contribute to that? But those are the people we need in these conversations. We need them to know how these tools work. And choosing not to engage with the tools means missing out on understanding the tools too.
Taiyo: I definitely agree with that. It is really important to have a diversity of perspectives and voices embodied in these AI systems that are currently being developed. I think, though, that an anti-AI stance in higher education is not necessarily an ethical stance, in my opinion. I mean, it’s ethical in the sense that it is a sort of normative claim, but to me, to deny your students access to this world-shaking technology that’s going to strengthen them - forget about workplace stuff, forget about the economy, just strengthen your ability to learn things, and that’s been my experience with AI - to deny them that technology wholesale, I don’t consider that to be an ethical move. People might believe that, but I would want to have that argument. And I think it’s a really critical, important argument to have, and I don’t think an outright ban on AI is responsive to the fact that there is an argument to be had there. Anyway…
Julie: [laughs]
CHAPTER 8 [47:27-57:40]
Taiyo: Oh my God. Why was that so awkward? I think I made it awkward, honestly, right, Sarah?
Sarah: Yeah, I think so. Um, because you basically just called every educator who has a no AI policy on their syllabus unethical.
Taiyo: Wait, wait a minute. Did I really come down that hard?
Sarah: I don’t know. I mean, I guess we’ll find out by the amount of hate mail we get.
Taiyo: Oh well. Bring it on. I’m ready. I respond really well to negative stimulation, so…
Sarah: We’ve established that.
Taiyo: Looking forward to it.
Sarah: Well, I mean, I think I agree at least with part of what you were saying: that there is an argument to be had, and that a lot of the blanket “AI-has-no-place-in-higher-ed” arguments are based on an assumption that using AI can only outsource cognitive labor, not enhance it.
Taiyo: Yeah.
Sarah: Or not support students, right?
Taiyo: Well, yeah, and I get that, and I also understand Julie’s point of view, where, you know, she was talking about how when you introduce this sort of probabilistic entity into your classroom, it can be difficult, because there’s a feeling that some degree of control you may have had in your classroom will be lost.
Sarah: Mm-hmm. Yeah. That part was really interesting to me. I was thinking a lot about this loss of control, because I think I pride myself on being very flexible and adaptable in the classroom. And yet, even though it’s been about a month since you first told me about Claude Code - and I was really excited and was like, show me how to do it - for some reason I’ve been resistant, and I haven’t done it. And I know it’s gonna save me so much time every day in class. I have a really great, talkative section in my critical thinking class this semester. And for some reason, I can’t pry myself away from the model where I do a 10-minute lecture at the beginning of every class period. This is a class I’ve taught a million times. It’s one of my favorite classes to teach. I could make video lectures so easily for this class.
Taiyo: Yeah.
Sarah: And I don’t know why I haven’t taken that leap. And something about what Julie said about the control thing… I don’t know. I think that’s sort of at the root of it in some way.
Taiyo: Yeah, I think that’s there for me too, in addition to the logistical nightmare that is flipping the classroom. I think the other thing that really deterred me from it was that loss of control - a departure from such a familiar model, for myself, of how teaching and learning should happen.
Sarah: Yeah.
Taiyo: When I was a student, I really admired so much my professors who were able to weave together ideas into, you know, such beautiful tapestries of knowledge. And I thought that was the way everybody learned and had those incredible aha moments and the ecstasy of epiphany and all of that kind of stuff, right?
Sarah: Right.
Taiyo: But that’s just not the case, right? The science of education has really established that active learning is what works best for the majority of our students. And I think it’s really important we all take heed of that.
Sarah: Yeah. One thing I keep coming back to, reading all this stuff right now, is the really promising research coming out about how impactful AI tutors can be as a supplement to a course.
Taiyo: Mm-hmm.
Sarah: One of the things - because I’m experimenting as I go - is that I really like to have the student engagement with AI happen in the classroom. I just started, this semester, giving them guided prompts to do as, like, quote-unquote homework. But then we go over the outputs in class. So far, though, for the past few years, most of the workshops I’ve done where they’re doing AI-generated writing or AI-assisted writing, they’re doing it in class, and that is totally active learning. I wish I had more time for that.
Taiyo: Mm-hmm.
Sarah: I think I have to do it. Taiyo, I think I actually have to follow through, and now I’m saying it on the air, so I don’t wanna be embarrassed in a month’s time.
Taiyo: Well, let’s, let’s do this together. I, I like,
Sarah: Oh my God.
Taiyo: Let’s figure out ways to, and share with one another, ways that we can make this flipped classroom thing - the AI-enriched flipped classroom - work. Let’s talk about ways we can make that work. What do you think?
Sarah: I think that would be an awesome idea. I mean, I need a motivator, right? I need a friend to do this with me.
Taiyo: Well, hello right here!
Sarah: [laughs] I know you’re already doing it. Well, I mean, here’s an interesting thing. Are you doing anything with AI tutoring with that class?
Taiyo: I’m not.
Sarah: Because this, I think, would be a challenge for both of us. Like, right now, I see you doing what I’d call an instructor-centered use of AI, where you’re using it to make your life easier so that you can do something that was available to you pre-AI but that you just couldn’t do because of logistics and time, right?
Taiyo: That’s right.
Sarah: Whereas I, for whatever reason, even though I know how easy it would be, have not done that. But I feel like I have been doing pretty cool stuff in the classroom with LLMs, and I wish I had more time to help guide students through the process of engaging with them. And what would free that up is if I had the lecture stuff handled. Because here’s the kicker that I’m realizing right now: the thing that would really take stuff to the next level in my class is if they were able to watch the 10-minute lecture and then have a conversation with an LLM - maybe a ChatGPT custom-trained on the content of my course. If they’re able to have that pre-class, low-stakes dialogue with the LLM, then class time is, you know - it feels like a much more honed springboard. That’s a weird metaphor, but a much better springboard.
Something where they don’t come to class like they currently do - and like how I’ve always done it. Again, this class is a favorite of mine and my students’, if evals are to be trusted, but I see them kind of overwhelmed after my 10-minute lecture. I think for a long time I’ve been like, “yes, that confusion on your face is the…” right? Like, I’ve been doing the same thing of treating the struggle as a kind of moral high ground, as opposed to experimenting with what it would look like to have them have an interaction with an LLM where they can say, like, what the hell is she talking about when she says this, right?
Taiyo: You give them space and time.
Sarah: Yeah.
Taiyo: And, uh, a sort of safe interlocutor - a low-stakes discussion with an LLM to rehearse their ideas before they come to class.
Sarah: Right.
Taiyo: And share publicly
Sarah: Right.
Taiyo: Their ideas with the rest of the class.
Sarah: Yeah, exactly.
Taiyo: That makes a lot of sense to me.
Sarah: I’ve done that to an extent, like in critical thinking right now, where we’re working on critical reading and explicating textual details of the stuff they’re reading - everything from poems to advertisements. And one of the things I started doing that has made discussion much more active than I think it’s ever been is having them ask an LLM: what are the implications or connotations of this word? What could they be? I ask them to try it themselves first, but then to write it down, ask for 10, pick the three that are the most compelling, and come to class prepared to explain why they think those are most compelling. And you could say that’s offloading the work of them sitting there struggling and thinking about what the connotations are. But what that ignores is that, realistically, probably more than half the class, for the whole time I’ve taught this, comes to class with “I don’t know what the connotations are,” right?
Taiyo: Mm-hmm.
Sarah: And we work through it together, and that’s been really awesome. I love seeing that, where they really struggle and then, over time, through class discussion, they get it - but it usually takes weeks. And I’m wondering if it could be accelerated in a way that didn’t offload something but enriched it, by giving them more examples, more models, and then also using class time to pick apart the outputs and build on them.
Taiyo: That’s, I mean, that sounds incredible to me, honestly.
Sarah: All right, let’s do it.
Taiyo: It really does. It really sounds great.
Sarah: All right.
Taiyo: You’re committed then.
Sarah: I’m committed.
Taiyo: Because this is going on the podcast. This podcast is gonna go out to our thousands of listeners. You are really committing publicly to this, right, Sarah?
Sarah: I guess so. You know, I love the pressure of a deadline and social anxiety.
Taiyo: Well, we’re gonna document all of this, every step along the way on this podcast!
Sarah: This is like the story of our lives: signing up for more work. We’ll let you know how it goes.
Taiyo: Well, maybe. Okay, honestly, I was gonna say no more work because of Claude Code, but I mean, this is gonna take some thinking and some planning. It’s gonna be awesome.
Sarah: All right, well, we’ll see what happens. Until next time, I’m Sarah Senk.
Taiyo: and I’m Taiyo Inoue.
Sarah: This has been My Robot Teacher, brought to you by the California Education Learning Lab. If you haven’t already, please subscribe to our YouTube Channel!
Taiyo: Leave a review on Apple, Apple Podcast. Please?!
Sarah: Leave a review on “Ah-pull?”
Taiyo: I’m sorry.
Sarah: Please leave us a review.
Taiyo: On Apple Podcasts.
Sarah: Especially if you were that guy at that conference we went to who said this was better than Hard Fork.
Taiyo: Oh yeah, please that guy. Shout out to that guy! [laughter]

