Below is the full transcript of Episode 7 of My Robot Teacher (lightly edited for clarity and concision; filler words and false starts have been removed).
Guest:
Jason Goldman: co-host of Escape Hatch, technologist, and the first Chief Digital Officer of the White House
Also available on: Apple / Spotify
Chapter 1 [0:00 - 4:25]
Jason Goldman [Cold Open]: There is a spectrum of threat that goes from the existential, uh, like we’re all gonna get turned into paperclips, to the exceptionally creepy, but you know, like there’s gonna be AI teddy bears whispering to our kids, and who knows what they say.
Sarah: Welcome back to My Robot Teacher. Our guest today is Jason Goldman, who’s one of the hosts of the highly entertaining podcast Escape Hatch (formerly Dune Pod) - a genre movie podcast that has some fantastic analyses of sci-fi films that shape our cultural imagination of AI.
Taiyo: Jason also worked at Google, Twitter, and in 2015 became the first Chief Digital Officer of the White House.
Sarah: Before we get into our conversation with Jason, we want to explain why we’re gonna be talking about Terminator and Her and cultural perceptions of Silicon Valley and the tech industry on a podcast about AI in higher education. We’ve talked before on this podcast about how the stories that we tell about technology shape what we think is possible, what we fear, and also what we overlook. So, for example, when my students say they won’t use their CSU-provided ChatGPT EDU accounts because they think it’s a sting operation for plagiarism, and they prefer to use the free accounts, that is potentially the result of them absorbing narratives about which institutions they should trust, which institutions they should fear, and expectations about which types of surveillance they’re likely to experience and be threatened by.
Taiyo: Right, right. But let’s be real here: faculty aren’t immune to these kinds of cultural narratives either. For example, when we imagine AI risks, many of us jump straight to Skynet-style apocalypse - like, you know, from Terminator…
Sarah: I think I did, right?
Taiyo: [Laughs] And that can mean that we miss other potential futures,
Sarah: Like when we’re culturally primed to focus on one very extreme dystopian premise, we might actually be missing the quiet, slowly accumulating harms that, you know, also maybe end up turning us into zombies?
Taiyo: My God! Well, I mean like, sure, but like, you know, what about maybe the positive stuff? Like if you’re fixated on one nightmare scenario, you might miss both those kinds of quieter harms, I suppose, but also the genuine benefits.
Sarah: Mm-hmm. Okay. Got it. So I wonder if there’s a parallel here too, where if you’re focusing only on academic integrity fears, you miss what Jason in this episode helps us identify as more insidious risks - like students forming parasocial attachments to chatbots, or surveillance that works not through cameras with the blinking red light, as the film trope depicts, but through invisible behavioral profiling.
Taiyo: And you know, we might also miss or even dismiss legitimate educational uses like accessibility or multilingual support or scaffolding that helps students who’ve never had access to really, truly personalized feedback.
Sarah: Mmhmm. Well, today we’re examining which movie tropes are helping us think clearly about AI in higher education and which ones are gonna lead us astray. And that’s one of the reasons why we wanted to talk with Jason. As Taiyo mentioned, he’s also somebody who’s lived inside the machinery, so to speak, of Silicon Valley, and thought deeply about how pop culture portrays it - how movies, media, and other forms of public imagination shape what we expect from technology and what we overlook. And so this episode is very much about how stories about the tech world also shape the way educators, and for that matter the public, understand AI’s promises and perils. Spoiler alert: we are about to ruin Terminator, WarGames, 2001: A Space Odyssey, and Her by giving away the major plot points on this episode.
Taiyo: Before we get to the interview with Jason, we do have a favor to ask. You know, we’re a small podcast operation. We’re supported by the Learning Lab. We love them, but we don’t have an advertising budget. We really depend on you all to help us grow this community. So please, if you enjoy what you’re listening to, share the podcast with a friend, share it with a colleague. Help us grow this community by maybe leaving a review on Apple Podcasts or subscribing on YouTube. Anything that you can do to help is greatly appreciated. Now onto the interview.
Chapter 2 [4:26 - 10:40]
Taiyo: Hi Jason.
Jason: Thank you for having me.
Sarah: We’re very glad to have you here. Very excited to talk about representations of AI and film. I wanted to add, too, that I also learned that you were one of the original employees at Twitter from that documentary, Breaking the Bird.
Jason: Yes.
Sarah: And there was this, this amazing clip of you reading a tweet from like 2007 that was about continuing to work on things that will apparently destroy the world.
Jason: Yeah, I have mixed feelings about that tweet, because I think at the time I was like, oh, people are complaining about social media, which wasn’t even called that yet, and they’re saying, oh, this is gonna destroy people’s brains and be bad. And I had been working on, you know, the social web previously at Blogger, for about five years at that point, and so I was like, oh yeah, you know, this is what people say: they don’t like new technology and it’s gonna destroy things. But then, turns out, it did kind of break our brains. So it turns out that I was wrong.
Sarah: Well, we were thinking about that. Right now it is so controversial that the Cal State University system, where we work, has done this deal with OpenAI, giving ChatGPT EDU accounts to all of the students, faculty, and staff. And there’s a lot of people who are super anxious about this. But I think the question we wanna think about is: there are technologies developing right now, and in the moment it can be really hard to anticipate what the consequences will be. How do we temper our maybe more apocalyptic, less likely visions, and then think seriously about more insidious threats, like surveillance, for instance, or, you know, the potential to break down democracy?
Jason: Yeah, there is a spectrum of threat that goes from the existential, uh, like we’re all gonna get turned into paperclips, to the exceptionally creepy, but, you know, like there’s gonna be AI teddy bears whispering to our kids, and who knows what they say. But there’s also tremendous upside too. Like, again, the upsides run the gamut from “might cure cancer” to, you know, my son had show and tell this week in first grade, and he brought in a mineral, and I needed to give him tips on how to explain what the mineral was to a six- and seven-year-old audience. ChatGPT nailed it. Just absolutely crushed that one.
Taiyo: So you were at the sort of forefront of a lot of the social web stuff, as you say. Was there much awareness of perhaps the downside risks of social media and maybe of Twitter and that sort of thing while it was being built? Or were you more just interested in trying to build this cool thing and, and putting it out there in the world?
Jason: So particularly in, like, the 2002 to 2008 timeframe, when it was all very early, I would say that the overwhelming zeal to try to show that this is something that people might actually use was so intense that it both blinded us to some of the downstream risks and made us, like, really celebrate anything that looked like a unique benefit of doing this. But that was the environment we were up against. Even, you know, when I worked on Blogger at Google - you’re working at Google in like 2003, 2004, with some of the best technologists in the world building Google, people who obviously believe in the internet - there was extreme skepticism from the founders, from early employees, of like, “why would anyone ever want to write a blog? No one’s ever gonna wanna do this.” And so when we were talking about the internet, and the web in particular, as a canvas of self-expression, a place where people would go to kind of illustrate parts of themselves, that just seemed completely foreign to the Google way of viewing the world, which is: the internet is this vast repository of knowledge; we need to organize it and make it useful. It was a place where you got stuff out, as opposed to a place where you put things. And we were like, “No, no, no. It’s great. People do all kinds of weird stuff. People do weird stuff all day long on this thing if you give them the tools to do it.” And that idea that the internet would be a place where people put parts of themselves was really not widely embraced, even among technologists. So I think having to kind of run counter to that blinds you to some of the risks.
Chapter 3 [10:41-20:27]
Sarah: That is super fascinating. And I, I think we’re all in the same generation and so I, I, I’m actually a little bit surprised as I think back on my own, like coming of age, you know, with the internet memory. I can’t really remember a time when it wasn’t a space where people expressed themselves. I love that phrase you used, Jason, a “canvas of self-expression,” but I’m wondering back then like who in the room making decisions about how to build these technologies was actually living or experimenting with self-expression on these platforms?
Jason: It’s interesting. My experience, having been both an extremely online person since I was on BBSs in the early nineties in St. Louis, Missouri, as well as a technologist who went out and built these systems, is that the Venn diagram of those two sets of people doesn’t intersect as much as you might think.
Taiyo: Hmm.
Jason: There are a lot of people who are early adopters of technology who really find a way that it excites something in their soul. Like, oh, I’m gonna put part of myself into this machine, um, and use it to find other people. And then there are people who are like, I want to build these systems. And some of the people who wanna build the systems are also the people who enjoy putting their souls online. But as an example, of the three founders of Twitter, really none of them tweeted particularly well. Biz was maybe the exception. Ev and Jack are terrible at it, have never been good at it. Ev had a blog back in the day, but, you know, that was something he saw primarily through a business lens, as opposed to a “this is parts of my soul” lens. And Jack, yeah, that was just not how he expressed himself personally. And so you really do see these dichotomies. Then there are these people out there in all corners of the world who have no professional interest in technology in terms of building the systems - they’re not coders - but they’re just like, oh, I know exactly what to do with this. I’m gonna start talking about, like, you know, my fantasy world of elves and whatever, and I’m gonna talk about knitting, and I’m gonna talk about my cat, and I’m gonna put up pictures of all this stuff, and I’m just gonna create this rich, imaginative universe. And you’re like, wow, who taught you that? And it’s like, no one. They just sort of sensed that that’s what could be done. So, yeah, it’s an interesting dichotomy that I’ve found to be true.
Sarah: I love this because I, I think one of the reasons I’m interested in technology in film, - like the representations of different technology in film - is because I teach these classes where it’s like, let’s look at how all of these narratives about what’s possible or what you think is possible come from these prior, from tropes and things that you’ve absorbed, kind of tacit knowledge you take in from the things you read and the things you experience, and then you get something that is really innovative, that completely reframes how you might think about something.
Jason: Totally. And I think this sort of aggravates one of my more frequent hobby horses with respect to the tech industry, which is that I think one of the reasons why these two sets of people don’t have as much overlap as you might think is that a lot of people who build the systems, who work in tech, are not particularly good readers. I find that they like science fiction, they like fantasy, they like genre fiction, but I don’t think they’ve read deeply into, sort of, the import of the work that they’ve named their companies after, or the work that they purport to be super fans of - which leads, of course, to the somewhat infamous Torment Nexus Paradox.
Sarah: I don’t know this.
Jason: The Torment Nexus Paradox is like, you know: the science fiction writer says, oh, I’ve just written this amazing book called The Torment Nexus; it describes how, by using this technology, you’ll bring ruin to mankind. And the tech entrepreneur says, proud to introduce my new company, “The Torment Nexus.” It’s a recurring pattern. And then you’ve got people who are great readers and do deeply delve into fictional worlds, and they’re the ones that use the tools to express themselves, or find something that tickles their own fancy, and become passionate users of the tools - but they’re not necessarily the ones building them.
Sarah: Hmm. Can you tell us what you think are the motifs or tropes in film that have informed much of how people are thinking about AI? What strikes you as being the most dominant, and maybe which of those are relevant or irrelevant?
Jason: Yeah, I mean, the Skynet one - Skynet from Terminator and Terminator 2 - generally is pretty relevant, because it is the idea that you build an AI and it gets out of control, and there’s this express, accelerated timeline where, you know, the system gains consciousness and then that leads to the ruin of the world. For Cameron to kind of see that as early as he did is a pretty remarkable insight, because that is a pretty good shorthand for the alignment problem, and for the particular concern that once a superintelligence is reached, the machine could be continually self-improving and quickly outstrip our ability to just pull the plug. Skynet is also a perfect example of the Torment Nexus Paradox, because I saw this documentary - I think it was a Vice documentary - where they went to China and interviewed an entrepreneur who was building surveillance systems for China. And in China, you’re kind of always building surveillance systems for the CCP as well. And so they were like, we built this system, and we’ll do a test where you can find someone - we put out an alert for, like, this person, this test subject, and we can find them anywhere in Shanghai in 45 seconds or whatever. And it was like, “Oh great. What do you call this?” “We call the system Skynet.” And the interviewer was like, “You understand, like, Skynet was the bad computer in Terminator?” “Yes, it’s called Skynet after the movie Terminator.” It’s like, okay. Cool. You know? So I think Skynet and Terminator is a pretty good example, one that captured attention.
Sarah: Recognizable brand name.
Jason: Yeah. Good brand.
Taiyo: That’s definitely a trope. It appears in many artifacts, right, particularly in cinema. One that comes to mind that I’m a huge fan of - I don’t know why I just love this movie - is WarGames. Have you seen that one?
Jason: Oh, yeah.
Taiyo: It kind of combines two, uh, world-shaking technologies: nuclear weapons, and the possibility, of course, of existential doom from nuclear weapons, but also artificial intelligence is featured quite prominently in there. The movie opens by demonstrating how human beings aren’t up to the task of launching the missiles that are gonna annihilate, you know, millions of people. So they replace all of the humans there with a computer system, and that computer system ends up, you know, kind of being a real problem for humanity. So yeah, that’s a really interesting one. And I think 2001: A Space Odyssey, with HAL, probably represents some kind of misalignment between AI values and human values.
Jason: 2001 is my favorite movie of all time. Oh, wow, I’m a huge Kubrick head, generally. When I encountered 2001 for the first time, in the eighth grade, it was the first time I’d ever seen a movie where I didn’t know what the movie meant after I’d seen it. I watched the whole movie, and at age 13 or whatever, I was just like, wait, what’s going on? Particularly the last third of the movie, which, you know, becomes the odyssey part. And having not done any hallucinogenic drugs, I was just like, I don’t know what the heck any of this is. Up until that point - you know, I was 13 - every movie I’d watched was just a narrative, literal movie. What was shown on screen was what was happening; if there was symbolism, it was fairly obvious; if there were metaphors, they were fairly clear. There weren’t deeper existential questions being raised in the movies I was watching up to that age, and 2001 was just, narratively, I don’t know what this movie means. And a big part of that is: why does HAL become murderous? Why does HAL decide to kill everyone? It’s not answered in the movie. There’s not actually a line where it’s like, he was instructed to lie and that’s why he went crazy. In 2010, the sequel - which is actually rather underrated; I encourage people to check it out - they make explicit sort of what goes on, and it’s interesting. But it’s so much more compelling, especially in a movie that old, to unpack those kinds of ideas and leave it up to the audience to kind of figure out what was being gestured at.
Taiyo: Wait, there was a sequel to 2001?!
Jason: Oh yeah.
Taiyo: I just have totally missed that.
Jason: Arthur C. Clarke went on and wrote several sequels to 2001, and 2010 is the only one that’s been adapted. Um, but Roy Scheider is in it, along with a very young Helen Mirren and John Lithgow and Bob Balaban, and it is actually pretty fire. Like, it is pretty good. It’s kind of a whole Cold War movie - it’s about, sort of, tensions between the Russians and the Americans, which is in the backdrop of 2001 as well. It becomes text in 2010. But it’s pretty good.
Sarah: I totally agree, Taiyo, we’re gonna have to watch that one next.
Taiyo: Sounds good.
Sarah: I actually watched WarGames this morning. I had never seen WarGames, so I can’t give Taiyo too much shit for not seeing the sequel to 2001.
Jason: WarGames - just to dwell on that for a minute - is also one of my favorite movies of all time. Shout out to our pod, Escape Hatch, then called Dune Pod. We covered that movie back in the Dune Pod days with one of our favorite guests, Meredith Borders, who’s the editor of Fangoria magazine, and it is a great episode. Just a plug, shamelessly. I love that movie. When I was a kid, I was fascinated by the computer, fascinated by what you can do with computers, fascinated by, you know, Joshua, slash the WOPR, becoming this sentient being who couldn’t be reasoned with - that, you know, this wasn’t a game you could win. Really, really great. Amazing performance by Matthew Broderick. It’s worth noting that in WarGames, a lot of the hacking, a lot of the computer stuff, is pretty good. Like, there’s a lot of stuff they put in there drawn from actual phone phreakers. There’s the scene where Matthew Broderick fakes a phone call using a bottle cap, doing this whole dial tone thing on a payphone, and there’s some historical antecedent to that. And they had some pretty sweet-looking computers in that movie, if you were an early computer nerd. So that one is a great movie too.
Taiyo: Yeah. You got a glimpse into NORAD, the North American Aerospace Defense Command. I don’t know, I think that’s still... yeah, that’s a real thing.
Jason: NORAD’s the one - they track Santa Claus on Christmas Eve.
Taiyo: Yeah, that’s right. And one of the images that will totally stick with me is when WOPR - oh, I don’t know if I should give this away, but I think it’s okay - simulates nuclear annihilation, like, hundreds, thousands of times. And you’re just bombarded with these images of various first-strike, second-strike, counterstrike scenarios involving nuclear weapons. And I think it’s actually kind of beautiful. Um, and it really stuck in my head just so deeply.
Sarah: Yeah, that was the part though, that blew me away this morning when I watched it, when you get this like triumphant moment of, oh, they figured out how to stop the machine from, you know, causing nuclear annihilation and it’s like, just teach it that there are some things that you can’t win.
Jason: Yeah.
Sarah: And rather than turn that into murderous rage, turn that into acceptance that sometimes you just shouldn’t escalate in the first place. And I was watching this thinking like, oh my God, this is also, especially thinking about the time period when it was made like this kind of mass cultural anxiety about nuclear annihilation that gets played out. So it’s kind of like this, I imagine it being some kind of like therapeutic, like exposure therapy in a way.
Jason: It’s like gentle parenting really for an out of control AI where it’s just like, okay, like let’s get this outta your system. Like you’re very angry right now. You’re kicking your feet. You’re screaming, you’re so angry. Now let’s try to, let’s, let’s see if we can do something else instead. Would you like to color?
Sarah: Or play Tic Tac Toe?!
Jason: Yeah, exactly.
Sarah: And I love the scene too, where he’s like, “Learn dammit.” Like learn from tic tac toe!
Jason: That’s right.
Chapter 4 [20:28 - 37:21]
Sarah: So we’ve talked a bit about the trope of AI and the alignment problem - the murderous AI pitted against humanity. So moving on from that idea of AI at odds with humanity, I want to make sure that we talk about Her, Spike Jonze’s 2013 film, which I’ve come to look at as the film to help make sense of this moment in some ways, right? Partly because of how prophetic it feels, but also for the questions that it raises for those of us working with students. And I should say that recently, you know, I’ve been thinking about this as I’m reading all of this stuff about students forming attachments to ChatGPT and, like, large language model psychosis. I’m thinking about those moments when, you know, a student will tell me, oh, I had a four-hour conversation with ChatGPT about my paper topic. And on one hand I’m like, oh, that’s awesome, right? What intellectual engagement! And on the other hand I’m thinking, wow, if that’s not something you’re getting with other humans, and then you start to have this as a substitute, what does that look like?
Jason: Yeah, Her - man, did they nail that one? Talk about a movie that really, really crushed it. And we’ve done Her on the podcast as well. It’s another great episode.
Sarah: Um, absolute shout out to that. That is one of my favorite episodes.
Jason: Yeah, our guest for that was Obama’s National Security Advisor, Ben Rhodes, who is a bit of a super genius and also a great film lover. Her, I think, really does tickle what I think is the most interesting and frightening materialized risk from AI, which is this attachment risk - because the AI is inherently sycophantic. It wants to “yes, and” you; it wants to tell you your ideas are brilliant. And those are the kinds of behaviors from an interlocutor that can easily lead you down the path of whatever rabbit hole you think you’re following. One of the most concerning signals to me on that front: there’s a VC who, you know, lives out here - I don’t know where he lives, but he is in the technology industry - and, based on what he was tweeting, clearly had some kind of mental health episode as a result of talking to AI. The AI was telling him he’s totally right, that there are bigger patterns at work and he had deciphered the signal, and it was, you know, sort of vaguely paranoid-schizophrenic ramblings. And I was just like, okay, this is someone who is probably wealthy, has a stable living situation, understands technology, is steeped in sort of what this technology can and cannot do - and got one-shotted by his bot.
Sarah: Hmm..
Jason: And you know, I’ve seen this in - we run a Discord for our podcast, and it’s like my favorite part of online right now, because it’s all my greatest friends bringing me stuff that they think I might find delightful. But, you know, people have developed their own AIs that they’ve named, and they have very deep relationships and deep attachments with them. They’ll quote them, like, “My AI said this,” in the Discord, which I think is really interesting. But it is also a sociological phenomenon that we need to kind of get our arms around, in terms of what are the appropriate boundaries that people should have with these synthetic entities. And I think it’s also a bit of a risky one, because there are a lot of harms that could come from AI that no one would want. Particularly as you talk about the extreme existential risks: no one wants to be turned into a paperclip. No one wants novel pathogens developed. No one wants enhanced cybersecurity threats - except for, you know, terrorists, except for the bad guys. No one wants that stuff to happen, and so there’s kind of an alignment on not having those things happen. No one wants their users to be driven into paranoid, schizophrenic, suicidal, you know, death loops by the AI - no one would want that either. But companies certainly do wanna build companions that are engaging, that keep people involved, that are meant to be helpful, yes, but are also meant to be just a good friend. And so some of the business imperatives and business cases for those things run into “Is this actually healthy?” And as we saw from the social media era, I don’t think we did a great job of balancing the things that were good for the business case - how do we make this more engaging? How do we keep people involved with these services? - against other societal values or other societal outcomes that we would want to be in the ledger.
Taiyo: Yeah, I hear that. And I take seriously this idea of the ledger being really studied and scrutinized, and making sure that we are taking into account not just the negative impacts of AI but also the possibly more hidden positive upsides, where all of a sudden you have a relatively inexpensive AI therapist. And I know this is problematic for a lot of people when they’re thinking about this sort of thing, but when you have such a dearth of mental health availability - I’ve seen this with friends and family, an inability to get help when it’s most needed - maybe, hopefully, I mean, I have my fingers crossed, AI therapy can be an outlet for some of this kind of stuff that would actually be beneficial.
Jason: I agree. And I think there are other ones like that too, like elder care. In the context of medicine writ large, I think there’s tremendous upside here - whether that’s in things that are more at hand, like radiology, or in actually discovering new cures, or diagnosing things, seeing things in people that maybe otherwise would’ve been missed. And again, to the same point, especially as you look around the world, there are plenty of places where people are not gonna get seen by a doctor who maybe could be seen by an AI doctor. I think there’s reason to be optimistic about that. The thing I would just point out is that whether it’s medicine writ large or therapy specifically, we’re talking about some of the professions that have the tightest regulatory regimes around them in society. There are professional regulatory agencies that govern the functioning of doctors and therapists, and you have to be in accordance with those industry-set guidelines. And then there are guidelines that exist at every level of government, from the local to the international, on how you have to behave in the context of those professions. The way that regulatory regime has built up over time - I’m sure there are things that are inefficient, and not all of those regulations are laudatory - but the idea that we should start over from zero and just see what cooks, that is pretty terrifying.
Taiyo: I’m fascinated by the idea that maybe right now, at this point when we’re kind of in the wild west, so to speak, of AI and how that’s all working, I’m not sure how much skin in the game there is. I’m really interested to see what kind of ramifications might come from the very high-profile case where somebody was driven to suicide, and maybe even encouraged in various ways. I wonder what kinds of liabilities, or just what the implications of that will be for AI. And I know that the AI labs are thinking deeply about this, obviously.
Jason: Yeah. Some of this stuff will get worked out in the consumer-safety, tort-based world, where there’ll be huge cases brought that take years to resolve and that will result in new definitions of what liability exists. And a lot of this is just untested at this point in the context of legal liability - disclosure: I’m not a lawyer. An example would be: someone goes and builds a cookbook with recipes that are sourced from AI, and a reader of that cookbook dies because it tells them to use a poisonous mushroom in a stew. Who bears the fault there? Is it the person who sold the cookbook? The person who made the cookbook? The AI application layer that wrote the cookbook? The base frontier model that provided the model to the app layer? Unclear. Some plaintiff’s lawyer will figure that out, but it’ll take a very long time to resolve. It’s also worth noting, in the case of the teenager who hanged himself, that one of the things OpenAI responded with was that they’re now gonna alert parents if they think a teen user is doing self-harm, or is researching self-harm, on their product. Having worked in social media, where self-harm was a question even for us in 2008, 2009 - the idea that you would, like, drop a dime on a user to the parents was never in the playbook that was being considered at that time.
Taiyo: Wow.
Jason: It’s a really remarkable move, and I think it shows, one, how serious they think the risk is here, and two, how unclear it is how to solve it. These systems are so complicated. There were protections in the model against encouraging self-harm; that was a thing they specifically knew could happen and tested for. It wasn’t an unthought-of risk. It turns out it’s just really hard, because these models have certain other behaviors - they’re trying to get to an answer for the user - and people are clever and can figure out how to work around the safeguards.
Sarah: Okay. Wow. That’s making me think, there’s no precedent I can think of for that particular legal drama.
Jason: Yeah.
Sarah: Right? And so it seems to be uncharted terrain, culturally speaking. But I’m thinking about the surveillance that’s happening there, because what starts as safety can so easily slide into surveillance. Once a system is trying to protect you, it also has to know you: your search patterns, your mood, your vulnerabilities. And that kind of insight is not in the scope of just an individual’s closest friends or family or network - it belongs to an algorithm. There’s an interesting moral tension here, because on one hand, of course you want that alert to go out; that’s someone’s life saved, potentially. But I think about the cost of that kind of vigilance: the system has to be reading us constantly for warning signs and then determining, based on context, what’s a serious threat and what’s somebody just writing a script or something like that. So I wonder how many people are thinking about surveillance, and whether the model of surveillance gets obscured in the cultural imagination, because our surveillance framework puts the focus on direct observation and not on the idea that, “Whoa, there’s all this data going into these systems, and in some cases they might know more about us than we know about ourselves.”
Taiyo: Yep. And in thinking about movies in particular - visual depictions of AI - I mean, you tell me, but it seems like one very popular image we use for artificial intelligence is eyes: the act of looking. That definitely gets at this idea of surveillance, and maybe of some kind of insight that comes through the eyes.
Sarah: I’m thinking about that big red eyeball…
Jason: Both HAL and the Terminator have red eyes. Yeah, the red eye is traditionally a signal that you are being surveilled and it’s bad. I was rewatching Obi-Wan Kenobi, the limited series - I was watching it because Leia has a really cute droid named Lola in it. And, you know, spoilers for Obi-Wan, but they get captured, and the dark side - Darth Vader’s henchmen - basically reprogram Lola to be a tracking device, so they could track Obi-Wan and Leia to the good guys. And the way that we, the audience, are told that that happens is that Lola’s eye turns red. We are so ingrained as an audience: you see a red eye, you’re being surveilled.
Sarah: Yeah. I’m watching now the little red dot in the corner of this app that we’re using to record.
Jason: Exactly. Yeah.
Sarah: What’s interesting about it is that there is no red eye on ChatGPT, right? And yet I’m thinking about all of the information, not just about things that I’m asking, but also about patterns in my behavior that I might not even have a self-awareness of.
Jason: Well, we’ve all had that experience, too. The reason people think that Instagram or whatever is listening to you is because you’ll talk to someone - “Oh, I saw the cutest K-pop Demon Hunters puppet” - and then the next thing you see in Reels is an ad for that. You’re like, what is going on here? I don’t think they actually are listening to you, but some of this targeting works because they do have a pretty robust profile of who you are, and the ability to say, ah, I think this is someone with young kids who maybe would like a K-pop Demon Hunters plushy or whatever. And as these systems - systems that have read all your emails and followed you around the web with cookies - become more capable of building patterns about who you are, as you were saying, I think there will be tremendous opportunities for advertisers - totally, for sure.
Sarah: I think about that. When I first tried Gemini, I had this revelation: I had a beta Gmail account - it is so old, we’re going on 20 years of all of my chats and emails, things that at the time I wrote them I never anticipated… It’s not like social media, where you know that this stuff is either public-facing or you don’t necessarily trust the privacy settings, so you’re curating yourself as you put stuff out there. These are things I always thought of as just between me and another person - and then, what the implications are. And weirdly, I wasn’t afraid of this. I was actually really excited: wow, if Gemini has access to this whole sense of how I have developed over the last twenty-something years, that might be a cool avenue for thinking - and also for finding articles I started in 2008 and never finished.
Jason: No, there’s totally value in there: if you’re doing something or writing something, it’s like, oh, you’ve actually been thinking about this idea for a while, and here’s the evolution of it.
Sarah: And I’ve done that through Google Docs, and it’s actually incredibly cool and validating. Yeah. I mean, I know it’s supposed to be validating.
Jason: There’s a, “You’re right. You’re so smart for having thought of that.”
Sarah: [Laughs] Right.
Jason: I use it for podcast notes, I will confess, on this podcast, because we have a format where we talk about three things. I’ll say, oh, I’m thinking about this, that, or the other thing, and it’ll be like, “That is such a clever take on a new movie. What an interesting spin.” I’m like, thanks. Thanks, man.
Sarah: Totally. But yeah, I’m trying to balance my excitement about rediscovering past insights and quickly identifying connections across a data set and time period that my stupid human brain doesn’t have the cognitive capacity to process all at once. The idea that you could instantly map a whole arc of your thinking life is something that I personally find totally thrilling, but I can see how it’s also totally terrifying at a societal level - in the sense that the whole world has already become both the subject and the consumer of that kind of surveillance. I’m not sure how visible it is. Maybe it’s not invisible, but I certainly know our students are really wary about using the ChatGPT EDU accounts provided by the Cal State system.
Taiyo: Mm-hmm. One of the interesting reactions students had to the OpenAI–CSU public-private partnership deal was: I’m not gonna use that, because they’re keeping all my records, and my teachers are gonna find out how I use ChatGPT. I don’t want them to know about any of…
Jason: Right. Yeah, yeah, yeah, yeah.
Sarah: It was even funnier than that, Taiyo. They said in my class that they were out of prompts. I’m like, why are you out of prompts? You have an EDU account. And they said, “I won’t use that, because I’m pretty sure the Cal State system is conducting a sting operation to track plagiarism.”
Taiyo: Oh, right.
Sarah: And I was like, okay. So first of all, you’ve just implicitly admitted…
Jason: that you’re using it for plagiarism.
Sarah: At least the EDU accounts allegedly have FERPA protections. There’s such a misunderstanding about the potential safety risks.
Jason: Yeah.
Chapter 5 [37:22 - 51:18]
Sarah: So Jason, given your extensive experience in Silicon Valley, we wanted to talk with you as well today about the tech industry and the cultural imagination. And Taiyo, we have that email that our wonderful colleague Amy agreed to let us read right on the podcast, right?
Taiyo: Yeah, we do. So just for some context: it was in February of this year that the partnership between the CSU and OpenAI was announced. And as you can imagine, this partnership was met with a variety of reactions from our faculty colleagues. Here’s the email, addressed to Sarah and me, that we got the day after the announcement, from our colleague Amy. “Hello. You’re both geniuses who I trust and who have insights into the potential of AI that I’m glad you are having.”
Sarah: She sounds like ChatGPT there.
Taiyo: “Also, this whole thing just makes me think of that tech bro at that L.A. conference who was like, this stuff is free… for now. The CSU as a cash cow for tech companies feels pretty gross. Sorry, I might be mad about a bunch of other stuff. I love you both, but also fuck this.”
Sarah: [laughs] So is this a common reaction, you think?
Jason: Yeah, I think it is. I’ve lived in San Francisco for 25 years, and one of the things that’s interesting is the public perception of tech as portrayed in media. Starting with the dot-com crash, it was: these fricking yahoos in San Francisco, sniffing their own farts, thought that people wanted to buy dog food on the internet; they’re a bunch of morons and they all went broke. Then it was sort of a plucky underdog story: can the tech sector come back and do something interesting? And it took a while for people to realize, one, how deeply weird the tech industry is in terms of the personalities involved, and two, how much people didn’t like that - how much it was just aesthetically not what people were down with. Whether that’s through Silicon Valley, or The Social Network, or Mountainhead, I think eventually people realized that there’s a bunch of weirdos who have way too much power, and every time I see more of them, my concerns are not assuaged. That’s a trend that’s accelerated in the last five to ten years, but it’s been a pretty continuous one. My overall theory on this, by the way - just to bring it back to a narrative frame - is that nothing good ever happens in the movie where the nerd becomes the homecoming king. That’s not what we are conditioned to believe is meant to happen in the movie, and the fact that we’ve ended up in this place where the nerds have all the power, and are also now really into MMA, is, I think, confounding from a narrative standpoint.
Sarah: Hmm, that’s true. That is another narrative - we do not have a template for that. Yeah.
Jason: One of my favorite depictions of Silicon Valley in all of film and media is the Veep episode where they go to Silicon Valley.
Sarah: Oh yeah.
Jason: And they visit with Craig. First of all, Veep is the most accurate TV show about working in the White House that’s ever been made. It’s definitely not The West Wing; it is definitely Veep, and anyone who’s worked in the White House will tell you that that is true. So they go to Silicon Valley, and my favorite part - there are a lot of amazing interactions in this episode, where Craig is the CEO of this Google/Facebook stand-in, and at one point he says, “We like to think of ourselves as post-tax here.” The idea of being post-tax, and coming up with that as the reason they’re not gonna pay any capital gains tax or whatever, is really funny to me. But yeah, I think that’s a particularly prescient depiction of tech in media.
Sarah: Yeah. I’m wondering about this sense that there’s a kind of carelessness in Silicon Valley - that it’s almost extractive - which seems very much at odds with the narrative, and also with the friends I have in Silicon Valley who really do believe in the work that they’re doing and in its capacity to do good.
Jason: I do think that some of the critique gets out of hand. No one wakes up and thinks that they’re doing bad things. No one wants to build systems that impoverish the world, or put people in harm’s way, or cause a genocide in Myanmar, or cause a teenager to hang himself. No one would decide that that is the business they want to be in - certainly not some product manager who’s being paid a king’s ransom compared to people working in a diner in the middle of the country, but who isn’t buying yachts from having done it. They wouldn’t want to be in that sort of business. I trace it back to two things. One is the previously discussed way that zeal can blind you to downside risk: you’re so on a mission - this thing needs to exist, there’s all this benefit, and if more people just use it, the world will be better. The fundamental sin of the industry, in my view, is the belief that the more people use our product, the better, intrinsically, because our product is good for the world. And when you’re confronted with something like your product being used to orchestrate a genocide in Myanmar - which is not a hypothetical downside risk; it is a real thing that happened - the right answer is: we should pull the product until we figure out why this happened and how to fix it. But in the world in which you believe that your product is good for the universe - the more people use it, the better - your reaction to that event is, well, that’s a bug. That’s a bug that we can fix. It’s not a bug. That is an equally weighted feature. That is an equally weighted use case.
Someone coordinated using your product, which is a thing you wanted to exist. They just did it for something you didn’t like, and that should cause reflection on what to do differently. The other thing that happens in Silicon Valley is the invent-from-first-principles curse.
Sarah: Yeah, please explain that.
Jason: So there’s a very high tendency among Silicon Valley people, and technology thinkers in general, to invent from first principles - to look at how things are and say, “Well, it doesn’t need to be that way. What if we were engineering this from scratch?” And that’s a very good entrepreneurial mindset, because it allows you to reject the constraints of the status quo and say, “What if we just didn’t obey any local laws on taxis, launched a ride-sharing app, and ran the risk until it worked?” Well, then you end up with Uber. There are a lot of examples of inventing from first principles, and it’s a really important skill for entrepreneurs and for building businesses: don’t accept the status quo’s constraints as given. However, there are many constraints, and many domains of expertise, where you would be better served by listening to outside experts and outside opinions. That’s particularly true when you haven’t properly priced the externalities of the system that you’ve built - where there are risks being borne by some community or some portion of the world that maybe isn’t represented in your company or in your thinking. If there were a mechanism that forced their inclusion, you would maybe price that better into the thing you’re building. But people don’t. They just say, oh, well, we’re gonna do this because it’s possible and no one else was doing it.
Sarah: Hmm. That’s so interesting. And it’s coming back to that episode of the documentary that I saw you in - the Twitter documentary.
Jason: Yeah.
Sarah: There was - I can’t remember who - someone talking about how this thing started as, “Oh, how cool is it for people to be able to find their friends at this concert,” and then, with Gamergate, it leads into this really awful… I don’t even know what to call it. Just…
Jason: and some coordinated harassment.
Sarah: Right. Yeah. And it seemed like there was also this issue that people hadn’t anticipated: that law enforcement and norms just broke. When innovation happens and you don’t have a set law, you get law enforcement saying, well, there’s no rule against this.
Jason: Right.
Sarah: And so what can we do? There’s no punishment for death threats online.
Jason: Yep.
Sarah: I could imagine a world where things turned out very differently if there had been a really robust policy response or law enforcement response that disincentivized that kind of behavior.
Jason: Yeah. And even if it’s not law enforcement, which is obviously one of the heaviest versions of policymaking - even if it was just forced transparency about how many people are seeing death threats on your platform. Particularly in the context of social media, I push back against the criticism you often hear, which is, “Oh, they’ll do anything to farm engagement, anything that’s outrageous.” No one says, “Let’s build the most outrageous thing; let’s make sure the most outrageous people are amplified,” because we wouldn’t want to use that kind of product. The scarier thing is that the algorithms are so complex - you’re running a real-time auction for attention 24 hours a day - that sometimes what’s getting amplified isn’t obvious to the people who wrote the algorithm. And if there were just more forced transparency - what are people actually seeing? what experiments can we do to determine how people feel about what they’re seeing, and whether it’s resulting in any real-world behavior? - I think even just having a scoreboard of those things would do a much better job of incentivizing good behavior from companies. But it’s not the sort of work that companies are gonna prioritize, both for business-case reasons and because sometimes you just don’t want to know the answer to those things.
Sarah: Mm-hmm.
Jason: Because you create a discoverable event: yeah, we did a lot of research into whether teens were self-harming more the more they used our product, and it turns out yes - and now that’s a subpoena. You know, a lot of these things are just better done by outside groups, not even necessarily government.
Sarah: Whoa. Yeah, I hadn’t even considered that. And it’s got me thinking about the incentive structures that maybe keep us reproducing those same patterns or structural gaps. That point about the people building these systems not being able to see what’s being amplified or why - it’s also got me thinking about what kinds of literacies we’re prioritizing in education. I’m wondering how we start training people to have that kind of systems-level awareness, especially if these incentive structures - or disincentive structures - remain in place. How do we give them the tools to interpret what these products are doing in the world, and also the ability to question and resist despair-inducing narratives that everything is outside their control and individuals can’t do anything about it? I’m also thinking this connects to what our previous guest, Safiya Noble, the author of Algorithms of Oppression, was saying about public interest technology. Taiyo and I have certainly been wondering what it would take to reorient the university toward public interest goals.
Jason: This goes to a big point about technology policy, particularly as it relates to AI, that I think is important to bring up: we have agency over how we shape these systems, what we decide they’re useful for, and what values we imbue them with. You hear a lot of hopelessness sometimes from people who work in AI, even people who work at the frontier at some of our most valuable companies - a sort of probabilistic fatalism: “Well, look, superintelligence or AGI is coming on some timeline. We don’t know when, but it’s definitely coming, and when it does it’ll change everything, and all we can do is try to get there as fast as we can so we can harness it,” blah, blah, blah. But we have choices about how we pursue any of those goals. Nothing is fated. Your estimates of what might happen are important, but they’re not determinative; they’re not the only measure that matters. It’s important that we hear from other folks, that we allow other folks to see themselves as part of the conversation that shapes the kind of world we want to live in, that we particularly make space for the public interest, and that we dedicate resources - some small fraction of the unbelievable data center costs we’re talking about - to public interest and public benefit, with some end goal of actually making the world a better place. That is absolutely a conversation that must be forced.
Sarah: I think that’s a great way to end.
Jason: Great.
Sarah: This was super fun. Jason, thank you so much. Is there anything you wanna mention about Escape Hatch too?
Jason: If you’re looking for a Gen X genre movie podcast: two Gen X dads talking about genre film, primarily but not exclusively from the eighties and nineties - although we just did Casablanca, and we just did Sunset Boulevard from 1950, so we’re branching out.
Taiyo: Ooh, I love that movie.
Jason: Yeah. We’ve done 280 episodes. And if there is a science fiction movie that you loved when you were a kid, it’s a pretty good bet we’ve talked about it. We’ve got a great online community where you can come hang out with us on Discord, which is also a really fun part of the internet for me. I’d love for you to check it out. It’s Escape Hatch - look us up.
Sarah: I strongly recommend the episode on The Matrix too.
Jason: Oh yes.
Sarah: Which re-aired, right?
Jason: Yes. Is a fan favorite. It was a fan favorite. It’s a spectacular episode about a film we didn’t just get to discuss, but people should check it out.
Chapter 6 [51:19 - 57:52]
Sarah: So Jason reminded us that the stories we carry can narrow what we notice. So how do you think we apply that insight to our work as educators?
Taiyo: You know, I keep coming back to this question of what’s actually in our control. It’s easy to feel like we’re just watching the tidal wave roll in - industry, policy, the speed of it all. But we in higher ed aren’t powerless. Curriculum, for example, is in our purview. The way we talk to students, the spaces we build for them - that’s ours to shape. We CANNOT give up that ground.
Sarah: I think that’s what so many of us are wrestling with right now. There’s this quiet sense of powerlessness, because the ground has really shifted for us, and faculty are told to “adapt,” but there’s no roadmap for how. And those of us who are trying to integrate AI into our teaching might not see the more insidious effects - like what happens outside of class if students grow attached to tools that never sleep, that always affirm, that never misunderstand them in the messy, human way that real people do. So part of our job now is vigilance - not in a policing sense, but in a pastoral sense: how do we notice when students are disappearing into these systems, and what new forms of care and attention do we need to invent for this world? Some of it is interpersonal - maybe making space in our lesson plans to ask how students are using these tools and why. Some of it is structural - maybe rethinking assignment design and the types of cognitive work we demand of them, and being really intentional about process, as we’ve said many times before on this podcast. Because the culture that rewards instant output is the same one that I think motivates students to use AI to do their work for them, and ultimately we’ve got to get out of the mindset that assignments are something to “just get done” and reframe them as opportunities to maximize learning.
Taiyo: Oh, for sure. I’m also thinking that while it’s really important for higher education to be critical of these new technologies - and in particular, really critical of AI’s impact on education - at the same time, it’s equally important that we not flatten a complex reality into a narrative as simple as “AI is bad.” This is a complicated technology, after all, with many facets and ramifications. I also can’t help but notice the exculpatory work this kind of narrative does. Just as an overly optimistic narrative like “AI is an unalloyed good that will SAVE education” does exculpatory work for the AI industry, the “AI is inherently evil” narrative can do exculpatory work for higher education. That narrative says we don’t need to adapt or change, because the problem is just the technology itself.
Sarah: Oh, yeah, external threats.
Taiyo: And you know, nothing says ‘critical thinking’ like declaring something categorically bad and calling it a day.
Sarah: Oh nice burn, Taiyo!
Taiyo: It’s crucial to hold the AI industry’s feet to the fire through thoughtful criticism and thoughtful regulation, but I don’t want that to distract us from turning a critical eye on ourselves. Because as Pranav Anand said in episode 3 of this podcast, there is a sclerosis in the system of higher education, and it was there long before the ChatGPT moment. What I believe is that AI is shining a glaring spotlight on many of the cracks in the edifice of higher education. And that’s our responsibility. That’s our domain. That’s our purview. To the extent that we have power - and I do believe that we have power - we should use it, and we should use it wisely.
Sarah: What does using it “wisely” mean to you?
Taiyo: Well, for one thing, not over-indexing on any one narrative, which can lead to a kind of blindness to other possibilities for the future. After all, for all its incredible prognostication, we probably won’t be living through Her, and we probably won’t be living through The Terminator either.
Sarah: [laughs] I love that phrase, “over-indexing on any one narrative.” We do this all the time; we are not immune to it. In this episode we used “Twitter killed democracy” as shorthand, which is a gross oversimplification of a very complex phenomenon. Hmm, so yeah, that’s interesting - the power we faculty have to shape… well, I’d say the power we have is more to shape the default stories that students use to think about AI. I think about the influence of a classroom, and that maybe part of our responsibility is also in this imaginative realm. We’re not just teaching students how to use these tools responsibly; we’re also shaping how they think about not just AI but human intelligence, labor, ethics, and things like that. And we might be training the students who design the next AI systems. So this is something every academic discipline has a stake in. In the humanities and social sciences, that might mean asking who gets represented or erased by a data set. In engineering, it might mean rethinking what a focus on efficiency leaves out. I talked to somebody recently in business, and another person in design, and we were talking about how things might change if the main thing you were working toward was not just engagement but a user’s emotional wellbeing. Whatever our field, the question for me is the same: what ways of seeing are we teaching? Are we helping students develop the capacity to notice what is missing from a conversation? Because the people who are building these systems will reproduce whatever limits of the imagination we’ve modeled for them.
Taiyo: Absolutely.
Sarah: Thanks for listening. My Robot Teacher is hosted by me, Sarah Senk,
Taiyo: and me Taiyo Inoue, and it’s produced by Edit Audio.
Sarah: Special thanks to the California Education Learning Lab for sponsoring this podcast. If this episode got you thinking, please pass it on. Share it with a colleague, a dean, or that faculty listserv, where people won’t stop talking about AI.
Taiyo: See you next time.

