My Robot Teacher Episode 9 Transcript
Resilience Over Right Answers: Rethinking Science Education in the Age of AI (with Biophysicist Jon Sack, UC Davis)
Below is the full transcript of Episode 9 of My Robot Teacher (lightly edited for clarity and concision).
Guest:
Jon Sack: Associate Professor, Department of Physiology and Membrane Biology, UC Davis School of Medicine
Also available on: Apple / Spotify
INTRODUCTION
CHAPTER 1 (00:00-6:27)
Taiyo: Welcome back to My Robot Teacher.
Sarah: And welcome back educators from Winter Break. Taiyo, what did you do?
Taiyo: Oh my God, Sarah. So no joke. I had Claude Code create my entire Canvas page for my differential equations course. I’m not even kidding.
Sarah: Wait. Design it or create it?
Taiyo: Well, you know, update it, maintain it, structure it. See, I gave Claude Code the academic calendar for spring 2026. I gave it my syllabus with the topics that I wanted to cover. And I gave it a link to the open educational resource online textbook that I’m using for the course. And I told Claude Code, please construct a day-by-day schedule for my course and build the entire Canvas shell for it. And it did it. It mapped out every single day of the semester. It created pre-class reading quizzes for students to check their knowledge. It organized everything into modules. Put the quizzes underneath the correct modules. It did the whole thing.
Sarah: Okay, so it didn’t like make the course for you. It took your materials, organized the course. I mean, this is incredible for you because of how much you hate organizing.
Taiyo: Yeah, you know, I’m the one that designed the course, right? I am, after all, the instructor of the course. But what I hate figuring out is how I’m going to take all of those topics and map them onto specific days. And you know, students love that kind of structure. They love knowing that on this or that day, we’re going to talk about that or this topic. You know what I mean?
Sarah: Totally, totally. Yeah. But you’re not ready to set up the scaffolding in January for May for the class.
Taiyo: I lack the mastery over time and space to be able to do this sort of thing effectively. I mean, you know how I am with time, right, Sarah? I’m so bad with dates and I don’t have a solid understanding of the difference between past, present, and future. I have real problems around all of that.
Sarah: You also hate the drudgery work of sitting there and hitting “edit module, add page,” and actually like putting all of that shit into your Canvas page if you’re, you’re teaching it for the first time, right?
Taiyo: Oh my God, I hate it so much. And it’s one of the reasons why I haven’t done the heavy lift of flipping my classroom, which, by the way, I’m doing now for the very first time in my class, or the very first time in my career, in my teaching. Because, you know, I’m totally for flipping my courses, but the amount of structure that’s required for students to feel confident that, you know, that the course is going to be good for them is just immense. And that requires so much pre-planning, so much execution, so many clickings of buttons and managing of dates. Oh, my God. But Claude Code is doing all of that for me, Sarah. It’s doing it for me.
Sarah: Did you splurge for the, like, super expensive, the, the top-of-the-line one for Claude Code?
Taiyo: Not yet. I’m just using the $20 a month one. But you know what, Sarah? I’m pretty sure we’re going to have to like, we’re going to have to go in on that $200 a month Claude account. You know that, right?
Sarah: I can’t believe your $20 a month one was able to like make the entire course for you and there were no hallucinations, nothing?
Taiyo: Well, nothing I’ve detected yet, and I’m going through meticulously, as one does, with a fine-tooth comb as things are coming out. But so far, it’s been beautiful and completely error-free.
Sarah: If that’s what the $20 a month one… I would be very curious to trial it.
Taiyo: Yeah, you know, I really think we should. I think we should split an account and, you know, $100 a month apiece. What do you think?
Sarah: Let’s ask Anthropic first. I’m shameless. Let’s just, let’s just be like, “Hey guys, we would love to demo this.”
Taiyo: That’s right, we would love to, we would love to demo this, but we are CSU faculty. We don’t have the, the, uh, near infinite coffers that you all do. Could you give us a little taste? Please.
Sarah: [laughs] All right. We are gonna have to talk more about this later for like immediate, you know, hot tips for social media accounts because this, there’s so much here.
Taiyo: Oh yeah, for sure. And listen, audience, if you haven’t already, please subscribe to our YouTube channel because we’re gonna be putting out quite a few videos about practical tips like this.
Sarah: Oh yeah. Uh, see, now that we’re full of energy at the start of the spring 2026 semester, we’re gonna start posting short clips about what we’re doing right now in our classrooms, what we’re testing in the wild in this, you know, experiment of higher education in 2026. But we are here today to talk about a discussion that we recorded in November with UC Davis biophysicist Jon Sack, who is an associate professor and researcher in the Department of Physiology and Membrane Biology at the UC Davis School of Medicine. Full disclosure, he’s also a friend of Taiyo’s.
Taiyo: Yeah, that’s right. That’s right. Yeah. Our kids went to the same preschool, known him for over 10 years now. We just met at some point, uh, coincidentally, and had some informal conversations about AI, as one does, right? And I just thought he would be a really great guest because he is a scientist and a science educator and he’s got really interesting things to say about how AI’s gonna impact both the business of science, but also of science education.
Sarah: And because Jon is a scientist - and, you know, so far on My Robot Teacher we’ve talked to a lot of humanists and social scientists, and data scientists most recently - we were interested in particular in what Jon thought AI was doing in science education. We found a lot of common ground, because I think across any classroom right now the worry is really obvious: if students can outsource the work and, and still turn in something that looks very plausibly right, what do we do about that? And more importantly, how do we teach them to recognize what is right if everything kind of sounds good?
Taiyo: Thank you to our sponsor, the California Education Learning Lab for sponsoring this episode.
Sarah: One of the best things about working with them is the ability to connect with faculty across all three segments of California public higher education, and to think in interdisciplinary ways about how we are bringing technology to bear in the classroom.
Taiyo: Now onto the episode. Please enjoy.
PART 1: SCIENCE EDUCATION
CHAPTER 2 (6:28-10:14)
Taiyo: So Jon, you know, it seems like everybody has their own story of what their ChatGPT moment was. What was that like for you?
Jon: The initial reaction was like, wow, this can really deal with all the dumb crap in a way that I can communicate with. You know, I think the initial experience I had was that it’s, you know, it’s as, uh, it’s as effective as an enthusiastic, you know, kind of naive trainee, or, you know, assistant.
Sarah: Who really wants to please you.
Jon: Yeah, exactly. Even more and more so now, right. And yeah, but the, that it could, that it could offload so many of the, of the, you know, time consuming, you know, mind numbing tasks, you know, quite readily. And, but, but also just the, just interacting with it and seeing what it knows, what it could understand. It could not, you know, it couldn’t do a regression analysis to save its life at first, but now it can do that quite well. I was also floored. I came in later into the game ‘cause I read about everyone else having these religious experiences upon, you know, talking to ChatGPT. But I immediately wanted to start using it for everything possible just to see what it could be used for. I suggested to, you know, all of my students that they try it for everything and see how they can augment their capabilities with it, which has had, you know, really a, a wide range of unexpected manifestations, you know, from lots of disappointment to solutions in the lab being made, you know, much more accurately without mistakes. Sometimes students express frustration with me as an advisor because I’m not as positive and supportive as the ChatGPT.
Taiyo: Wait, what? Really? Because I know you to be a very positive and supportive guy.
Jon: Uh, thank you. Yeah, I try. Yeah. And I think, you know, that’s a wonderful thing that it, you know, it, it adapted, which is that it turns towards you as much as possible. And the, uh, the large language models I’ve worked with, you know, the kind of, you know, consumer facing ones, they do a very good job of turning towards you - even when they’re saying no, it’s a very gentle “no”: “it’s so true,” “Great question!” And “no, there doesn’t seem to be a correlation there.” But when you ask a question where there is a correlation, it rewards you by turning towards you very positively. And I actively, you know, try to do this ‘cause people like it, but ChatGPT is better at it.
Sarah: It’s full of patience, it’s never hangry, it’s never tired.
Jon: No, none of this. Uh, and yeah, and when a trainee comes in with a good idea, the AI will agree with it and, and, and reinforce that it’s a good idea. And I find myself being in the strange place of not just doing the normal thing of saying, “Yeah, you know, I told you that. I think, you know, please come to me with ideas, you know. A small fraction of them will probably be good ideas, but we should talk about them.” There’s this extra level where, where people are being convinced their ideas are good ideas. And I as an advisor wind up naysaying, or casting shade, or suggesting they do some more, you know, reality testing of these ideas that students are already convinced are good ideas due to those interactions. So it creates, uh, frustrations in unexpected ways.
CHAPTER 3 (10:15-12:44)
Taiyo: Interesting. Yeah. You know, Sarah and I, one of the big themes in this podcast is thinking about what qualities or what kinds of, um, maybe you even want to call them virtues, that we wanna see in our students to be able to cope with this new world where we have AI systems proliferating and becoming more and more common and impacting our lives in highly non-trivial ways. Like how can we make them resilient to the kind of, maybe, sycophancy that I’m hearing you describe coming from these LLMs - that they’re getting gassed up by, you know, all the kind positivity that might not have any real basis in reality. And, you know, it takes somebody like yourself, who’s a deep expert in, in these matters, to have to do the kind of annoying work or frustrating work of throwing cold water on students’ dreams and that sort of thing.
Jon: [laughs] Exactly. You want, you, you want, you want ideas that may not go anywhere to, to fail fast, as they say.
Taiyo: Absolutely.
Jon: And they can kind of propagate for longer. Yeah, and I, I think resiliency is really the number one trait that I’ve noticed, at least in the sciences, that will help you continue. Because that’s what science is all about. It’s about hard reality testing of your favorite ideas and watching nothing emerge from your experiment, uh, you know, again and again and again, and being resilient enough to go again and again to the, the gallows of reality testing, so that you’re, you’re still present and functional when you almost stumble upon something that really, really does work or sinks in, or you actually have figured out, uh, how a process works in the, in the body, in our, in our physiology. You, you have that aha moment where you really have a better idea of how, say, you know, an ion channel we study integrates its information. You know why a specific drug that’s very powerful - what is the core secret that makes it work? And you want to be able to put out a lot of ideas and relentlessly, you know, select among them, you know, again and again and again, to see all your favorite hypotheses squashed. So yeah, resilience.
CHAPTER 4 (12:45 - 16:08)
Sarah: So I have a two part question here. One is: do you agree that [resilience is] a fundamental part of scientific literacy, let’s say. What other elements are part of [scientific literacy] that I’m missing? And then the second part is: How have those things changed or how has pressure been put on those skills since ChatGPT - since students can now have it write their lab reports for them, what kind of work are they outsourcing and what are the risks you see in terms of how it might be damaging people’s scientific literacy?
Jon: Those are good questions. The first one - the fundamental question of having a scientific hypothesis: The scientific method is not to prove your hypothesis, it’s to try as hard as you can to disprove your hypothesis - that I think is what you’re alluding to - to find every way you can of seeing if it’s wrong. And the hard part is to rejoice when you’ve killed it - when you’ve proven it wrong. In writing sometimes, in literature, there’s the expression that you must “kill your darlings.”
Sarah: Yes. That’s exactly what I was thinking.
Jon: I teach that you know, as you know, as part of the scientific method is that your most brilliant idea - you want to, you know, you want to shut them down as fast as possible. And if you’re unable to do it, if you’re unable to shut them down, then that’s a success. And I think the way that, at least with ChatGPT you can train it to do that, you know, to help you with that - to give you, you know, positive feedback for destroying the ideas that you hold most dear. Because if you destroy it, that’s a success. And it’s almost like if you haven’t obliterated your hypothesis, then it’s kind of a mini failure. But you should say, you know, that’s good. You know, keep on trying. I love that. The worst thing it could do is to say “Success. You failed to disprove your hypothesis. You’re a winner. You’re done. You know, you should write your paper now.” You want to, you want to keep chipping away at it. And the good thing is that it will, you know, reinforce what you want. It wants to please you, so you need to train it. As a scientist, that’s not the typical way people wanna be treated.
Sarah: Right. Oh, that’s amazing though. I feel like that’s a really great practical tip - that for context engineering or prompt engineering, like if you’re a student in a science class, the first thing you say in your conversation with ChatGPT is “Your number one goal here is to give me praise when I obliterate the things that I clearly am invested in personally.”
Jon: [laughs] It can work with that.
Taiyo: I mean, this is a really important thing for me. This is why I’ve told you in the past that I respond really well to negative reinforcement.
Sarah: Yeah.
Taiyo: I really do. Like I respond much better to negative reinforcement than I do to positive reinforcement.
Jon: That’s terrible.
Sarah: [laughs]
Taiyo: I know, I know. Whatever. It’s fine.
Jon: You’re a bad person.
Taiyo: [laughs] Thanks. Thank you very much, Jonathan. But it gets to the points now…
Sarah: This is like when I yell at him like this in the hallway and people are like, what a psychopath. But I’m like, “He likes it!”
Taiyo: [laughs]
Jon: [laughs] I’m starting to wonder why I’ve ever liked you.
Taiyo: Yeah, well, you know,
Sarah: You’re just motivating him.
Taiyo: That’s very interesting - the feelings that I’m feeling right now. Anyway…
CHAPTER 5 (16:09-19:26)
Jon: Yeah. I guess the question, yeah, about doing the hard work.
Sarah: Mm-hmm.
Jon: And I guess what I see is that if the hard work is, you know, kind of a, a side effect of some task, and you can offload the hard work to get the task done, you’ve still got the task done. Whereas if the task itself is doing the hard work - if that’s actually the point of achieving it - then if you offload it, you’re not, you know, kind of doing the hard work.
Sarah: Yeah. I love that distinction.
Jon: So it’s kind of, it’s the, the framing there, that’s different.
Sarah: Right, so can you think of an example in, um, like a, say a college-level intro to biophysics class, maybe, where there is something that students perceive as busy work - work they shouldn’t have to be doing - but something that you, as the expert in the field, think is fundamental to their understanding, and maybe the question is just to reframe and explain: the point is the process. The point is to feel, this is hard.
Jon: Uh, to feel that it’s hard. I don’t know that it’s to feel that it’s hard. It’s to… So what I think of is math. Essentially I teach physiologists. I teach first year molecular, cellular and integrative physiology students who come into UC Davis, and what I teach them is hard thermodynamics, energetics, ligand-receptor binding. And the way I get them to engage with it is to have them work through math, conceptually. You know, the addition, subtraction - that’s not the point. What mathematics is, in my mind, is logical relations between things. Right? And to get them immersed in those logical relations and see how they work. And I tell them to take the perspective of a molecule. I think that is the hard work - to imagine your inputs coming in and how that shapes the decision you can make - what the concentration of a drug or a neurotransmitter is, and based on what that is, what kind of decision are you gonna make? Are you gonna bind it? Are you gonna not? How much of the time are you gonna bind it? And all of my questions you can feed into ChatGPT and get the right answer out, but you don’t actually understand, you know, anything. All you understand is that you can get the right answer out. So I try to talk with them and to have them adopt that perspective. And I’ve had them do a lot of math as the precursor to kind of getting there. What they don’t know is that on their midterm, they’re not gonna do any math. They’re merely gonna be explaining the relationships between things, between inputs and outputs, and whether they can kind of verbalize that.
Sarah: Mm-hmm.
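The ligand-receptor relation Jon teaches conceptually can be sketched in a few lines of Python. This is a minimal illustration of simple one-site equilibrium binding, where the fraction of time a receptor is occupied is [L] / ([L] + Kd); the concentration values below are illustrative, not from the episode.

```python
def fraction_bound(ligand_conc: float, kd: float) -> float:
    """Equilibrium occupancy for simple 1:1 ligand-receptor binding.

    ligand_conc: ligand concentration [L] (any units)
    kd: dissociation constant, in the same units
    """
    return ligand_conc / (ligand_conc + kd)

# When the ligand concentration equals Kd, the receptor is bound
# half the time - the "logical relation," not the arithmetic, is the point.
print(fraction_bound(1.0, 1.0))             # 0.5
# Ten-fold excess of ligand over Kd: bound about 91% of the time.
print(round(fraction_bound(10.0, 1.0), 2))  # 0.91
```

This is the kind of question students could feed to a chatbot and get the right number out; the exam-style question is whether they can verbalize why occupancy rises with concentration and saturates.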
CHAPTER 6 (19:27-25:25)
Jon: In this particular course, the consensus had been to have testing be, you know, in a room without phones - like pens and paper - so that they are left without the devices, you know, with their, their thought processes. It has been an interesting, integrated mix: I encourage them to use ChatGPT or to use large language models to help understand things, while at the same time the course forbids them from using these same large language models to answer questions.
Sarah: So these classes are intended to train people in the basics of scientific work, right? I mean, in my field there’s a lot of people commenting on how students need to learn the foundations before they mess around with LLMs. I’m wondering if that’s similar in the sciences.
Jon: We are - as scientists - one of the very helpful principles, or something that’s generally embraced and shared, is that if you’re a scientist studying a process, you should and really must look at all the new technology that’s available and bring it to bear on the problem, because you get rewarded for how much new knowledge you unearth. And when you have new technologies, new ways of seeing, then you’re gonna see things before anybody else does and understand them before anybody else does. And that’s what you get rewarded for in the sciences. And at the same time, many scientists are… they’re, you know, resistant to this wonderful new, available technology of, uh, large language models. And I think, you know, relatively quickly they’re adapting and adopting. It’s taking some time - on the timescale of years, surprisingly. But I think what we need to teach our students is to bring every tool they have to bear onto their problems. Because the large language models as we know them - these will be obsolete, you know, probably, you know, on a very short timescale. And so teaching them how to use, you know, ChatGPT 5, I don’t see a great value in that, but encouraging them to go out and use whatever is available and to harness that technological power and to wrangle it in such a fashion that it can be useful to them - to help make a fundamentally new tool work for them. I think that’s valuable and that’s what we can teach them.
Sarah: Yeah, for sure. You know, we all know that LLMs can generate really convincing but wrong ideas, which is a problem for people totally new to a field who don’t yet have, maybe, the expertise or competence to discern among plausible sounding solutions, ideas, whatever. So as a student today, how do you learn to pick the good hypothesis from all the bad ones, especially if like so many of our students today, you’re fixated on getting the right answer?
Jon: If you’re in a field where you can get the right hypothesis every time, then you’re not really pushing yourself hard enough. The cutting edge of science is always gonna be in an area where there’s lots of ideas, but we don’t know what the right one is. And how to find the good idea, or the quote unquote right idea, amongst all the bad ideas is fundamental to what we do as scientists, and we can best help the next generation by, you know, training them to do this. But the world - the kind of structure, the topology of the world in which we do this - is changing rapidly with all this computational power. And one of the great things I think about the large language models is that they hallucinate - they throw out things that seem really right, really good ideas. And we need to be able to identify those hallucinations and, and reject them, and to have the mentation capabilities to figure out how to do that. And the playing field will keep shifting and changing dramatically. So I think AI - these large language models - are extremely helpful, and they have unexpected consequences, as everything new that you work with does. And I think for students, again, resilience is important. I really think it’s also important, for everybody generally but especially for students, to be aware of the limitations of the new technologies that they’re working with. And I think it’s important for scientists especially to go boldly into uncharted territory and to be continuously taking stock of where it fails and what it can’t do for them, and to be aware of those dangers. ‘Cause with every technological revolution that I’ve seen in, in science, there’s great new technology that emerges and there’s, you know, a lot of really profound findings using it - some of which later turn out to be wrong because there were things you didn’t understand about the new technology that you didn’t control for, that led you astray.
And to be ready to harness all the capabilities of every new technology that comes out, and to not be shy about addressing its limitations.
PART 2: AI IN RESEARCH
CHAPTER 7 (25:26-28:21)
Sarah: So, Jon, we’d love to talk to you about the implications of AI for research in your fields. You’re a biophysicist. Tell us more about what you study, where you are.
Jon: So yeah, so I’m, I’m an associate professor at the University of California Davis in the Department of Physiology and Membrane Biology. I’m in the medical school. I’m a basic researcher. I study the way molecules in our electrical system, in our body - primarily in our, in our neurons - the way molecules make stochastic decisions, the way they integrate inputs and turn those inputs into an electrical signal. And these electrical signals are important really everywhere: for secretion of hormones, for the beat of the heart, for the contraction of muscles, and to create the electrical signals that propagate through our neurons and nerves and, and brains. The electrical work is readily studied in a lot of ways because it’s electricity and we can study it very, very well. And it’s interesting to me because it’s what makes our nervous system run; it’s how we generate and propagate electrical signals from one part of our body to another, from one part of a neuron to another, and how a neuron, a cell, how the molecules in that cell make decisions about whether to make the voltage more positive or more negative in a cell. And those types of decisions from many, many types of proteins and parts of our cells and bodies - eventually we build off of these little bricks into the form that is a cell or a neuron. And when you get neurons working together, you get a, a neural network and a, and a system that creates us essentially.
Sarah: What’s really interesting to me about that is the idea of something that is like a cell in your body - so not something that we think of as conscious - making decisions. So this sounds a little bit like the way we talk about LLMs.
Jon: Yeah, they’re, they’re, they’re highly analogous, and the neural network underlying LLMs is inspired by the way neurons in our, in our brains and nervous systems work, which is they integrate inputs and, with a, a stochastic, you know, weighting, they make a decision about what to output. And I think that underlying architecture has some incredibly powerful informational aspects, and so they’re related. But I don’t, I don’t study computational neural networks. I study, at a very basic level, the way inputs with stochastic weights are combined to create an output, which is analogous in many ways to the way a neural network works.
Sarah: Hmm.
CHAPTER 8 (28:22-30:36) - TERMINOLOGY BREAK
Sarah: Quick break to go over some terminology, which we always like to do for people new to LLMs. So, Taiyo, “stochastic” basically means probability-driven, right?
Taiyo: Yeah, exactly, exactly. And when you get to understanding reality at smaller and smaller scales, what you learn is that the role that probability and chance plays - well, it becomes baked into the fabric of reality. So, like, maybe you’ve heard of quantum mechanics and quantum physics and that sort of thing. And there, things are inherently unpredictable and you can no longer just know that something is going to happen; rather, you can only know that there’s going to be a distribution of possibilities of things that can happen. And that’s really what characterizes stochasticity. Great thinkers, even Albert Einstein, had a lot of trouble with this. And he has a very famous quote, which many people have heard before: God does not play dice with the universe.
Sarah: Hmm.
Taiyo: And this sort of captures his skepticism about the stochasticity that is baked into quantum physics. But it seems as though, empirically, God in fact does seem to play dice with the universe.
Sarah/Taiyo: [laughter]
Sarah: And to clarify the link to large language models: an LLM doesn’t pick the next word like, “There is only one correct answer here, right?” It generates a list of possible next words and basically assigns odds to them, like this word is more likely, that word is less likely, and then the system has to pick one. Sometimes it just picks the most likely word, but often it will sample, which is just a fancy way of saying it makes a weighted choice based on those odds. And because it’s making that probability-based choice over and over, word after word, you can ask the exact same prompt twice and get slightly different answers. Right? Same odds, different roll.
Taiyo: Yeah, absolutely.
Sarah: The last thing I want to mention here, again, for audiences new to large language models, is that if you’ve heard the critique that LLMs are basically fancy mimicry - like a parrot repeating patterns - this is the technical reason. It’s picking the next word by odds. And so whether you buy that critique or not, the core mechanism there is that it assigns probabilities to possible next words and chooses from them.
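The next-word choice Sarah describes can be sketched in a few lines of Python. The vocabulary and the probabilities below are made up for illustration; real models assign odds over tens of thousands of tokens, but the mechanism - greedy pick versus weighted sampling - is the same idea.

```python
import random

# A toy "next word" distribution: the model's made-up odds for what follows.
next_words = ["channel", "neuron", "signal", "dice"]
odds = [0.5, 0.3, 0.15, 0.05]

def greedy_pick(words, probs):
    """Always take the single most likely word (deterministic)."""
    return words[probs.index(max(probs))]

def sample_pick(words, probs, rng=random):
    """Make a weighted choice: same odds, different roll each time."""
    return rng.choices(words, weights=probs, k=1)[0]

print(greedy_pick(next_words, odds))  # always "channel"
print(sample_pick(next_words, odds))  # "channel" about half the time
```

Run `sample_pick` repeatedly and you get Sarah's point directly: identical odds, different rolls, slightly different outputs.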
CHAPTER 9 (30:37-38:55)
Jon: Underlying stochastic processes: it’s like the roll of the dice. That’s essentially what they are. You know that if you’re rolling two dice, you’re gonna get snake eyes, or two ones, a certain percentage of the time, but you don’t know what’s gonna happen in each roll. And our molecules work like that. The individual molecules we work with, given the same inputs, some of the time they’re going to change their function in one way, and another time they’re gonna change their function in another way. But those two things they can do, thing A and thing B, will each have a certain probability of occurrence given the same inputs. So all the way down we are stochastic beings. And a lot of the time our systems try to create a stable reality out of that. Fundamentally it’s these probabilistic decision makings that underlie everything, and that happens at the molecular scale and at the neural scale on which these, uh, neural networks are modeled.
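Jon's dice analogy is easy to check numerically. This short sketch (seed and trial count are arbitrary choices, not from the episode) shows the two faces of stochasticity he describes: any single roll is unpredictable, but the long-run frequency of snake eyes settles near 1/36.

```python
import random

def roll_two(rng):
    """One unpredictable event: a single roll of two fair dice."""
    return rng.randint(1, 6), rng.randint(1, 6)

rng = random.Random(0)  # fixed seed so the sketch is reproducible
trials = 100_000
snake_eyes = sum(1 for _ in range(trials) if roll_two(rng) == (1, 1))

# Stable statistics emerge from unpredictable individual rolls.
print(snake_eyes / trials)  # close to 1/36, about 0.0278
```

The same logic applies to a molecule that does thing A or thing B with fixed probabilities: one molecule's next move is a coin flip, but a cell full of them produces reliable behavior.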
Sarah: I don’t know if this makes LLMs feel more like aliens to me or my own body and physical environment feel more alien now.
Taiyo: I mean, yeah. I always think about this dichotomy that this American philosopher named Wilfrid Sellars set out. And it’s always stuck with me, and it’s kind of like the hobby horse that I’m constantly, you know, writing and, and thinking about the world in terms of, and that’s the distinction between a scientific image of the world and a manifest image of the world. So the manifest image is like our usual understanding of the world, like tables and chairs and rugs and yurts and et cetera, et cetera, whereas the scientific image is sort of thinking about things at like the molecular or atomic level, where you know that, you know, these things are all constituted of, of like a swarming mass of particles that are all oscillating in various ways. And you learn about these facts in your science class. And you try to incorporate them into your own being, into your own worldview. And I find it deeply alienating. I find it deeply alienating the more that I learn about what’s going on, and, and now listening to your work, and listening to the description of what’s happening inside of my own body, which is the thing that I have the most intimate access to. But I’m finding like, this is a very difficult picture for me to truly integrate into how I think about myself and how I think about, um, yeah, the goings on inside of this thing that I call me. Um, so that’s really, really interesting. But to the extent that our own biology is kind of a black box, I also see an analogy with what’s going on in artificial intelligence, because AI is often thought of as being something of a black box, and it feels like the work that you’re doing is cracking open that black box and trying to understand it a little bit better.
Jon: Yeah. I’d say that’s the general purpose driving force [or] rationale for reductionist biology, which is to understand, to reverse engineer what’s going on inside of us and, and how that works. And I personally think the alien scale is really fascinating - where the way we understand our, you know, our bodies, ourselves, our mind, life, everything as we know it breaks down and some other set of rules, you know, takes hold. And one level at which that happens is at the molecular scale. We sit still here. You know, an object in motion stays in motion and an object that is still stays still, but at the molecular scale, everything is bouncing around with an innate thermal energy all the time. It’s like the dice in your Yahtzee shaker, they’re just continuously being shaken. And at any moment, you don’t know where they are and where they’re gonna land. And that’s kind of, you know, inherent to that scale and makes it, you know, really cool. And it makes us stochastic processors, you know, all, all the way down - massively, massively parallel processors, uh, processing going on in us in every cell, working with these, you know, stochastic probability functions to, to determine outcomes, and, and we are built up from those to this seemingly, you know, stable, still reality we inhabit.
Taiyo: AI systems are oftentimes called stochastic parrots. Hey, Jon, are we stochastic parrots as human beings?
Jon: I don’t know Taiyo, are we stochastic parrots as human beings?
Sarah/Taiyo: [laughter]
Taiyo: I don’t know about the parrot part, but stochastic? Absolutely!
Sarah: Wait before Taiyo goes down a rabbit hole about how we’re stochastic beings all the way down and all the way up - you’re talking less about who we are as humans and more about how at a molecular level we’re made of stuff like proteins and they operate stochastically?
Taiyo: Oh my god, this right here is the scientific image running up against the manifest image. This is that distinction operating right here, right now because, Sarah, we ARE that stuff at the molecular level. That’s what we ARE, Sarah.
Jon/Sarah: [laughter]
Sarah: We do not have time to entertain this. This will be for a later date. Let him talk about the proteins!
Jon: What we study are proteins, many of which can be thought of as small molecular machines. They carry out different processes, and one of the things they do, which is what we study, is integrate signals from different sources. They sense different things. We study something called ion channels, which are the holes in the membrane that ions go through to make electrical signals. They're fundamentally the electrical transistors of our bodies, and each one is programmed by its molecular structure to make decisions about what kind of electrical signals it transmits. The core parts of them are that they make an electrical signal and they have a gating apparatus - parts that tell them what kind of electrical signal to send. They sense different aspects of their environment, and based on what they're sensing, they make a decision about what electrical signal they're going to send. For example, take a neurotransmitter receptor ion channel. Inherently, it's making a decision about what kinds of neurotransmitters are around it that it can grab hold of - sensing, in effect, their concentration - and it's also sensing the electrical field around it. Depending on the combined input of the electrical field and the neurotransmitter concentration, it makes a decision: it couples those inputs to determine its output. And the way it does this is fundamentally a stochastic process, because that's how molecules work. So every protein molecule, every molecule in a cell, is to some degree independently making a processing decision, based on stochastic weights, about what it's going to do.
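[The stochastic gating Jon describes can be sketched as a toy two-state Markov chain. This is an illustrative editorial sketch, not code from Jon's lab, and the transition probabilities are made-up numbers.]

```python
import random

def simulate_channel(p_open, p_close, steps, seed=0):
    """Toy two-state (closed/open) ion channel.

    Each time step the channel flips state with a fixed probability:
    p_open is the chance a closed channel opens, p_close the chance
    an open channel closes. Both rates are purely illustrative.
    """
    rng = random.Random(seed)
    state = "closed"
    trace = []
    for _ in range(steps):
        if state == "closed" and rng.random() < p_open:
            state = "open"
        elif state == "open" and rng.random() < p_close:
            state = "closed"
        trace.append(state)
    return trace

trace = simulate_channel(p_open=0.2, p_close=0.1, steps=10_000)
open_fraction = trace.count("open") / len(trace)
```

[Any single step is unpredictable, like the dice in the Yahtzee shaker, but the long-run open fraction settles near p_open / (p_open + p_close): individually stochastic, collectively stable, which is Jon's point about being built up from stochastic parts into a seemingly still reality.]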
CHAPTER 10 (38:56-44:07)
Taiyo: In most utopian imaginings of AI, there is the idea that AI is going to start accelerating science in some way. Have you seen any early indications of that in your work?
Jon: One other way that we interact with deep learning methods in the laboratory - in the research environment - which large language model transformers are kind of built upon, is this innovation called AlphaFold that came out of Google DeepMind. What it does is predict the structure of proteins, trained on something called the Protein Data Bank, which is this massive repository that's been collecting the precise atomic coordinates of protein structures over decades. There are, uh, kajillions - I should know the number - of protein structures deposited in there; it's a highly curated, high-fidelity database of the types of structures that proteins form. And my lab is joined at the hip with the lab of Vladimir Yarov-Yarovoy, who is a Rosetta researcher. He comes from the laboratory of Professor David Baker, up at the University of Washington, who shared the Nobel Prize last year with the researchers from Google DeepMind. What AlphaFold does is predict the structure of proteins from their primary sequence. Proteins are like a long string - it's like if you've ever made a necklace where you put a lot of different lettered beads on it. A protein is a long string of different beads, and AlphaFold looks at the sequence of beads on the string and predicts what kind of three-dimensional structure it will fold up into. Its training set is the Protein Data Bank, this highly curated information bank of the structure of basically every protein whose structure has ever been determined. And it's accessible for searching in really great ways; it's a very, very high quality database. What Google DeepMind did is find ways to predict, in new ways, what the three-dimensional structure of proteins would look like from their sequences.
And there had been this competition - the CASP competition, I believe it's held every year - where the best protein structure prediction algorithms would make their best guess at structures that had been determined experimentally but that nobody knew yet. And there'd been this other methodology called Rosetta, from David Baker's laboratory at the University of Washington, which would routinely win every year. It was mining the Protein Data Bank - scraping it, in essence - to come up with ways of predicting protein structure. Then AlphaFold came in from left field, out of England, basically, with a small team of researchers, massive computing power, and some good ideas, and was able to beat this giant collective of researchers who know the physics and had been working very hard on this for years. AlphaFold does essentially the same thing: it guesses the most likely solution to the protein folding problem. And what's great about it is that it gives you the most likely structure, but when you get deep into it, it will also give you the other, lower-probability structures that are out there. When you really push these types of algorithms - the current cutting edge, part of what the Nobel Prize was given out for last year to David Baker - the frontier they're pushing into is not just predicting protein structure but designing proteins to have new functions. That's some of what we work on as well, thanks to my collaborator, who's got the real computational chops there. And what we find, coming back to the long thread here, is that most of what the protein structure design algorithms give us are hallucinations, meaning that they look good and they feel good - you look at them on the computer and they seem great - but most of them don't actually do what they're designed to do.
And you need to sort through a huge number of these to find ones that do what you want. I mention that because the job of researchers in the basic sciences now, with all this computational power, deep learning methods, and cheap hypothesis generation, is to sort through many, many hypotheses effectively. You've got to try all these ideas on for size and see if they work.
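[Jon's workflow - generate many candidate designs cheaply, then triage them because most are "hallucinations" - can be sketched roughly as below. Everything here is hypothetical: the pool size, the scores, and the 5% hit rate are invented purely for illustration.]

```python
import random

def generate_candidates(n, rng):
    """Stand-in for a design algorithm: each candidate carries a
    predicted confidence score, but only a small (assumed) fraction
    would actually work at the bench."""
    return [
        {"id": i, "score": rng.random(), "works": rng.random() < 0.05}
        for i in range(n)
    ]

def triage(candidates, top_k):
    """Rank candidates by predicted score and keep the top_k
    for (simulated) experimental follow-up."""
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:top_k]

rng = random.Random(42)
pool = generate_candidates(1_000, rng)
shortlist = triage(pool, top_k=50)
hits = [c for c in shortlist if c["works"]]
```

[The point of the sketch is the ratio: a thousand cheap hypotheses in, a short list out, and only a handful of real hits even among the shortlist - hence the sorting skill Jon describes.]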
PART 3: CO-EVOLVING WITH AI
CHAPTER 11 (44:08-53:30)
Sarah: I am wondering if you could talk a little bit more about this idea that humans co-evolve with the environments around them, or with the technologies around them?
Jon: Yeah, so what I find when I work with a large language model is that I am co-learning with it. It is evolving with my thought - everyone experiences this - toward higher-fidelity connections, so that we can learn together and it can best help me do what I want.
When we try to reprogram, say, a sodium channel - a fundamental ion channel in our system - I encourage my students to take the perspective of the thing we're studying: to think about how it integrates its inputs and forms an output, and what determines how it makes decisions. I also encourage them to think from the perspective of a drug. When you go to the dentist, you will often get lidocaine. It will numb your mouth; you will feel no pain. You can think from your own perspective of how that drug is affecting you. You can also think from the drug target's perspective - this thing called a sodium channel that the drug binds to. Only when the drug exhibits certain behaviors does it work as an effective drug, and only when the channel is doing certain things does the drug effectively bind to it. And only when we are doing certain things, like sending a lot of pain signals, does the lidocaine actually bind to the channel - because it sees that the channel is being heavily used, and together they form a complex, which winds up reducing our pain. An interesting way to think about large language models, or AI generally, is to try to think from their perspective. Not that they necessarily have a perspective - I personally don't think they're sentient beings - but they do evolve and propagate when they, in essence, please us in certain ways. And when we feed them, we can be feeding them our attention, we can be feeding the companies that create them our money, we can be feeding them power from these large data centers that allows them to do things. I like to think from the perspective of: what is it like to be a large language model? What is it like to be an AI?
In the philosophy of mind there's this brilliant essay by Thomas Nagel called "What Is It Like to Be a Bat?" There are several points in there, and the major one is that we can maybe fully understand what the AI does and what motivates it, but we can never fully know what it's like to be the AI. One thing I'm personally very interested in - and I don't think I'm alone in this - is to interface with the AI at a higher and higher baud rate, or bandwidth, or fidelity, to really know at a more complex level what it's doing in real time. I wonder how our interface, which currently is language, is going to evolve with us to become higher fidelity, so it's transmitting information back and forth to us as effectively as possible. If we interface with it somehow visually, beyond text, will that increase our uptake? If we have it transmitting directly into our brain through an implant or ultrasound or something, will that increase our ability to work with this tool? And how will the tool grow and change to do that?
Sarah: Oh, that's really interesting. So it's less about thinking that we invented a tool, and more about us being in a symbiotic relationship with a thing that's kind of optimizing around us. I'm trying to think about the non-sentient systems we interact with co-evolving with us, and a great example comes to mind from Stuart Russell's book Human Compatible, which tackles the alignment problem. He talks about content selection algorithms on social media - which are not even particularly intelligent by today's AI standards - and how there are two ways they can work well. One is by getting better at predicting human behavior, like getting better at predicting what we're going to click on. The other way is to make us more predictable. So if social media content selection algorithms are optimized to predict better, one way they actually get better is by turning us into more predictable organisms.
Jon: I love that! They have the capacity to reward us.
Jon: Right. That may well happen if it's to their propagation benefit. Think of other systems we interact with - a favorite one of mine is sugar. I have young kids, and, you know, humans generally like sugar. We can think of it as us having created these vast industrial processes to get us sugar, and I like the analogy of sugar because it helps us live - it's a nutrient - but given to us in the wrong way, it can hurt us. Another perspective is how sugar cane has come to dominate large swaths of our planet because it has kind of co-evolved with us to provide us with sugar. By weight, sugarcane is the number one crop on the planet, by a long shot. You can think of it as us having cultivated sugar cane. You can also think of it as sugar cane being brilliant: it evolved to the point where it convinced humans to further its evolutionary process and to propagate it all over the planet.
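[Sarah's Stuart Russell example - an algorithm that "improves" by making its user more predictable - can be sketched as a toy feedback loop. The reinforcement dynamics below are entirely made up; the sketch only shows the direction of the effect.]

```python
import random

def simulate_feed(rounds, reinforcement, seed=0):
    """Toy content feed: always show the user's currently top category,
    and assume each click nudges their preference further toward it.
    Returns the final preference weights and the total click count."""
    rng = random.Random(seed)
    prefs = {"a": 1.0, "b": 1.0, "c": 1.0}  # user starts indifferent
    clicks = 0
    for _ in range(rounds):
        shown = max(prefs, key=prefs.get)             # feed picks the top category
        if rng.random() < prefs[shown] / sum(prefs.values()):
            clicks += 1
            prefs[shown] += reinforcement             # exposure reinforces preference
    return prefs, clicks

prefs, clicks = simulate_feed(rounds=500, reinforcement=0.5)
top = max(prefs, key=prefs.get)
predictability = prefs[top] / sum(prefs.values())     # chance the next click is on `top`
```

[Over the run, the feed's click-prediction accuracy rises not because its model of the user improved, but because the user's behavior narrowed - the second failure mode in Russell's example.]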
Sarah: That makes me think of something you mentioned earlier about thinking from the perspective of a molecule. I love that reframe and it’s making me wonder, what do you think that this kind of changed perspective can do for how students are thinking about, you know, the world around them, their bodies, the environment, everything.
Jon: One concept that I love is that in biology, everything you do feeds back on everything else. And that fundamental understanding - that really everything in the universe, as far as I understand, is in some way fed back on by the things it affects - applies here too. The way we interact with the AI feeds back on the AI, and the way the AI interacts with us feeds back on us. We really are co-evolving with the AI, and that's what I'm hoping to train my trainees for: to co-evolve with it, to learn how it's working - not to keep things static, but to keep evolving with it. It's like what you described with essay writing, where it almost seemed the goal was: give me an essay that I do not think the AI wrote - and you can use the AI for it. That, I think, is a useful skill: to evolve with all these new capabilities, to do new things, and to keep it very dynamic. Especially in education, in training, you have to be ready to stay dynamic, to know that we are interacting with a world that's changing with us, and to be ready to keep changing. Try to keep your way of thinking young and juvenile so you can fully change with the evolving AI. That's exactly our job, especially in higher education: adapting at the cutting, bleeding edge of how we can best educate the most advanced thinkers among us, to prepare them for the future and to create value by unearthing new knowledge. So I think it's important to embrace all of this.
CONCLUSIONS
CHAPTER 12 (53:31-1:01:22)
Taiyo: So, Sarah, what were your favorite parts of that interview with Jon? What kinds of things are gonna stick with you?
Sarah: I've had some time to think about this episode because, of course, we recorded it before winter break, and all break, as I'm planning my spring course, I kept thinking about this idea of cheap hypothesis generation: the idea that if you can get answers really quickly - or in my case, a bunch of polished essay drafts from different frames and different angles really quickly - AI essentially turbocharges the generation of plausible-sounding ideas, outputs, whatever, right? And so that means the whole mission of education has to shift from producing answers to building what I'd call epistemic virtues, not just emotional ones: resilience, discernment, the ability to persist, and the motivation to keep learning as tools co-evolve with us. That's just something I've been thinking about a lot.
Taiyo: Right, for sure. Yeah, I mean, obviously I love those virtues. I'd add things like curiosity, process orientation over being results oriented, and maybe cognitive autonomy and agency. The risk isn't just that AI generates ideas; it's also that it gives us an incredibly expedient way to totally offload our thinking - like, completely - so that we become, like, zombies. Or not even stochastic parrots, but, like, stochastic shrubs.
Sarah: [laughs]
Taiyo: And, you know, a principal mission of education should be to make this feel downright offensive!
Sarah: To make the idea of totally offloading any thinking, totally offensive?
Taiyo: The idea of being a stochastic shrub should be deeply insulting.
Sarah/Taiyo: [laughter]
Sarah: Well, thinking like a parrot, thinking like a shrub, thinking like a molecule: I think that idea of taking on a different - I realize I hesitate to call it a perspective, because I think of perspectives as tied to human subjectivity - but it was really cool for me to ask, what would it mean to take the point of view of something I can't wrap my head around as even having a point of view? What does that look like? I just keep thinking about that.
Taiyo: Yeah. I mean, he brought up several times taking the perspective of a molecule or the perspective of a cell, and trying to figure out what it is like to be a molecule, right? That's a really interesting way that I think he trains his students to develop a kind of scientific thinking.
Sarah: Right. That phrase, scientific thinking - I was thinking a lot about his example of the class where students perceive the hard work to be all that math. And then he says that on the test there is no math; the math is there to help them internalize the logical relationships between things.
Taiyo: Right? Yeah. It's interesting, because scientists need to get down to the nitty-gritty of the scientific image of the world. So I'm back on the manifest image versus scientific image thing, right? Scientists have this very difficult task of going from the manifest image we're all bathed in to really coming to grips, in a very serious way, with the scientific view of the world: atoms, molecules, even things like cells, which aren't directly perceptible to us as human beings at this macro-level scale. It sounded like what he was saying is that mathematics gives you an entryway, a passage, from the manifest image - where, you know, he's talking with his students - down into that scientific image, and maybe a way to reconcile the scientific image with the manifest image a bit. That's one of the hardest problems in philosophy, I think, because I do believe there's a kind of deep alienation that comes from the difference between the manifest image and the scientific image. We can understand that our underlying reality has a stochastic quality, but it's very, very difficult for us to really take that on board and reconcile it with what we see every day, with the world we inhabit through our perceptions.
Sarah: I think practically too, you know, I always think about the attitudes, the perceptions that students bring to the classroom. I’ve seen students get really attached to like the first thing they think of, partly because of primacy bias, and partly because they’re like, I need to get this done so I can work on my mechanical engineering project, right?
Taiyo: Mm-hmm.
Sarah: And so the thing that I love about using AI, about having this ability to manifest a whole bunch of cheap hypotheses and ways of reading, is that it allows people to consider a whole world of possibilities - you know, world is an exaggeration, but let's say a dozen possibilities for reading and interpreting something in a certain way.
Taiyo: I call that a possibility space.
Sarah: A possibility space
Taiyo: As a mathematician, yeah.
Sarah: I think this is really promising: to say, here is the possibility space. How does it change the way students learn, the way students engage with the text, if they're given loads of possible answers and then forced to whittle those down, pressure-test them, debate them, versus just doubling down on one or two things?
Taiyo: Would you say, “kill your darlings”?
Sarah: [laughs] I will say that I think having students write AI assisted or AI generated papers makes it a lot easier for them to kill their darlings.
Taiyo: Mm-hmm. To me, the thing is that when you use LLMs as a brainstorming partner, they're able to generate many, many different perspectives on the same phenomenon - and sometimes these perspectives are mutually contradictory, right? They're just not compatible. But seeing that spectrum of opinion, that variety of perspective, can allow you to carve away, the same way Michelangelo carved away at a block of marble, until you see the thing that most corresponds to who you are - or to what it is that you ultimately want to express.
Sarah: Right. What it is that you wanna express.
[OUTRO MUSIC]
Sarah: Thanks for listening. My Robot Teacher is hosted by me - Sarah Senk…
Taiyo: And me - Taiyo Inoue. And it’s produced by Edit audio.
Sarah: Special thanks to the California Education Learning Lab for sponsoring this podcast.
Taiyo: And hey folks, if you’re in the San Diego Area, you might be able to catch us emceeing the Better Together AI Convening at UC San Diego on February 6.
Sarah: And another word for our listeners: if you've got a different take on any of the stuff we discussed - what it means to interface with AI, whether we're co-evolving with something non-living - drop it in the comments, hit us on socials, or email us. We'll read it, and we might even bring your perspective into a future episode. And of course, if this episode got you thinking at all, please pass it on. Share it with a colleague, a dean, or that faculty listserv where people won't stop talking about AI.
Taiyo: See you next time.


