Can a Computer Scientist Still Hope?
Reflecting on the Evolution of Computing, Coding, CS Education and Learning
The conversation below is based on an interview with Armando Fox, a Professor of Computer Science at UC Berkeley, who is also faculty advisor for digital learning strategy and a campus equity advisor. He was named a “Scientific American Top 50” researcher, helped design the Intel Pentium Pro microprocessor, founded a successful startup to commercialize research on mobile computing, and received the ACM Karl V. Karlstrom Outstanding Educator Award, among other honors. His current research focuses on computer science (CS) education and technology-enhanced learning, at the intersection of pedagogy, human-computer interaction, and programming systems. He is also a classically trained musician and performer, an avid musical theater fan and freelance music director, and a bilingual, bicultural Cuban-American living in San Francisco.
Learning Lab Director Lark Park interviewed Dr. Fox (who is also a two-time Learning Lab awardee with Professor Dan Garcia) after he co-published an Inside Higher Ed op-ed, “Blue Books Are Not the Answer to AI,” and just before he took students to the Computer History Museum in Mountain View, CA. Below is their conversation about how computer science education and coding have changed, what failure and responsibility in computing’s history look like, what students in CS should be learning and doing today, and why attaining “the sum total of human knowledge” is still what we ought to be encouraging for anyone studying CS.
Lark Park: Thanks so much for taking the time to do this. Human-computer interaction seems to be where a lot of people’s thoughts are. What’s the state of play in human-computer interaction?
Armando Fox: When I say that my research is at the intersection of those fields, that’s a nice way of saying I’m probably not really an expert in any of them. I’m sort of good enough in all of them to find interesting ways to combine them.
What fascinates me about HCI — human-computer interaction — is that it’s the rigorous study of how people engage with technology, and at its best it can tell us things about how technology can be brought to bear, broadly speaking, to make people’s lives better. That could be by automating or simplifying tasks that people don’t enjoy, or by helping people achieve their goals — to learn something or to get something done.
Unfortunately, since roughly the Web 2.0 era, a lot of valuable HCI research has been subverted in the service of what is widely called the attention economy — or the surveillance economy, if you want to be a little more sinister. I remember a colleague from Berkeley — this was in the early days of Facebook — a brilliant engineer who, in an after-dinner conversation, said it was distressing how much brilliant talent in engineering and HCI was being wasted on getting people to look at more ads.
The state of HCI is as vigorous as it’s ever been, and at its best we could be using it to deeply study the engagement between people and technology, including AI. HCI has been around for a long time, and there was a lot of work that really cared about that.
If I think back to the early days of Apple — there was a time when people at that company genuinely thought they could make the world a better place by providing well-designed, tasteful products that actually helped people do things, that were delightful to use, that made life better. A lot of what you hear about technology today has gotten very far away from that. The engagement with technology no longer really focuses on the benefit to the individual. It focuses on whether people will consume more ads, buy more things, spend more time on a site, become more enraged, or upvote something. It’s disappointing that so much attention is being paid to a field that historically has been able to do so much good.
Lark Park: Who’s responsible for this — I’ll just call it overemphasis? And who’s responsible for trying to get it back on track?
Armando Fox: There’s an amplification effect that is different from what it was just a few decades ago. The speed with which a piece of technology can be taken up by literally billions of people is unprecedented. That’s really only been the case since the early 2000s. If you unleash something, it can get much bigger than you expected, much faster than you expected. I think it’s more important than ever — for exactly that reason — to ask: when I make a piece of technology and put it out there and people start to take it up, who is being invited to participate? Who benefits if it’s taken up? Who am I excluding? What are the biases — implicit or explicit — that are built into its design assumptions, and how are those going to manifest when people who are nothing like me start using that technology?
One thing the history of technology teaches us — not just in CS, but really all technology — is that successful technologies ultimately get used in ways their creators did not anticipate. But that’s not a pass to say, well, I’m going to invent something, it’s going to be used in ways I couldn’t anticipate, so I’m off the hook. On the contrary, that means there’s that much more pressure on you to think deeply about the ways you at least could anticipate. I think there has been a general failure — mostly across industry — to do that. And when I say a general failure, I don’t just mean that potentially negative consequences have not been sufficiently investigated. I think there are times when they’ve been investigated, ignored, and set aside with the attitude of, well, that’s not great, but we can probably live with it. I will let readers decide which companies my observations might apply to.
Lark Park: Let’s get a little more historical perspective on computing. Living in California, with UC Berkeley close to Silicon Valley, the lore of the computing industry — Moore’s Law, and so on — looms large in terms of California identity. Is there something about computing history that people ought to know or would be surprised by?
Armando Fox: There’s tons. As a field, we don’t do nearly as much to understand our history as, for example, physicists do. Physicists know the history of their field; they understand when things were discovered, what mistakes were made, and what pitfalls to watch out for.
In CS, it’s fairly common for a new shiny thing — typically a language or a framework — to come along, generating a lot of noise in the blogosphere, and then one of the grizzled veterans will say, well, this idea kind of dates from the 1970s or 80s. We tried it then, there were some reasons it didn’t work, and some of those reasons are now different, so it’s worth trying again — but understand that this idea is not new. From a purely technical perspective, the canonical example is that the ideas behind deep learning have been around since basically the early 1970s, or earlier. We just didn’t have the data and compute platforms to really push on those ideas and see how viable they were.
But more importantly, there’s something that is often overlooked: there was a certain kind of person who was in a better position to make those advances — whether because of privilege, social and economic class, or other factors. A lot of the received history of computing is essentially the story of middle-aged white guys. I think it’s important to understand that the systems that were built, the way they were commercialized and deployed, and the design assumptions that went into them came from one particular perspective. To the extent that computer science ought to be helping solve real people’s problems, the history of the field does not necessarily come from the broadest perspective of what those problems are or what the needs of those people might be.
And then there’s a third thing we can learn from computing history: it’s also the history of business decisions. The only computing artifacts we use are the ones that at some point became commercially successful enough that it was worth someone’s while to produce them. A lot of the boom-and-bust cycles, the ways technologies get misused, the way technologies become the new bright shiny thing and get shoved into places they don’t belong and nobody really asked for — we can understand some of those things from computing history as well.
When I take students to the Computer History Museum [in a couple of days], I want them to come away with the stories at the intersection of computers and human beings — the people who made them, what they were trying to do, what background they came from, and what effect the things they built had. I want students to understand that context and then ask: which of these lessons could somehow be applied to what we’re seeing today? These students are about to graduate and hopefully land a position they enjoy, or maybe go to grad school — but they’re going to start spending thousands of hours a year of their talent and time doing something in computer science. What’s the effect of that going to be? Who is going to benefit from what you do? Who are you excluding? Is there anything in the history of the field that will inform how you approach this job? When is it okay to say no? How will you know if you’re being asked to do something that goes against your values? People who work in the social sciences think about this stuff all the time. Computer science students ought to think about it more, and that is the context from computing history I hope they — we — can learn from.
Lark Park: I imagine that kind of perspective does a lot to build students’ sense of agency — that there are real decision points here, there’s an ethics to all of this. People sort of understand that about the present day, and about how so much of this is driven by commercial interests. I’m probably one of those people who, thinking about the history of computing and how things came into being, would have thought about it [the history] more like the invention or discovery of fire — it just was, and could it have been any other way? And you’re telling me: yes, it could have been otherwise.
When we talked a couple of years ago about ChatGPT — about the advances in coding, and how coding had changed and was going to keep changing — you made this analogy to reading [and playing] music.
Armando Fox: With respect to performing a piece of music — the way we started out doing programming decades ago, say in the 40s and 50s, was analogous to see[ing] a note on a page, figur[ing] out which line of the staff it’s on, which key on the piano it corresponds to, and instruct[ing] the various muscles in your arm to assume the correct angles and press down on the key. As you become more experienced at reading music, that level of abstraction disappears, and you say, oh, that’s an arpeggio — I’ll just play that arpeggio. And if you become an improvising musician, like a jazz musician, there’s an even higher level of abstraction where the notes aren’t even written out. It just says, the style of the song is this, and the harmonies and the chords are these.
Over time, the levels of abstraction for programming have gotten higher and higher. In the early days, you pretty much had to be an electrical engineer, because that was the level at which programming was done. Most people doing programming today don’t know anything about electrical engineering, and that’s fine.
AI-assisted coding adds yet another layer, though it’s a much bigger jump. What’s different about it is that so far, all the layers of abstraction have been formal systems. When you write Python, you have to write it a certain way — it has syntax rules, it has semantics. You’re still formally specifying what it has to do; it’s just that each token or line of code accomplishes much more than it would have fifty years ago. The difference with AI is that you can use natural language — which is imprecise, not formal, and not well-specified — as a way of describing what you want.
The pitfall with that, of course, is that because you’re not using a formal language, it’s possible you’re not specifying things exactly right, or you’ve forgotten something, or there are cases you haven’t accounted for. You’ve described behaviors you want, but you’ve failed to describe behaviors that are bad and should be forbidden.
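[To make that pitfall concrete, here is a minimal sketch of our own, not part of the conversation; the withdraw function and its checks are invented for illustration. The natural-language request names the behavior we want, while the formal code is forced to decide about each behavior that should be forbidden.]

```python
# A minimal sketch (our example, not from the interview) of the gap between
# an informal request and a formal specification. The prompt "write a
# function that withdraws an amount from a balance" names the behavior we
# want but says nothing about behaviors that must be forbidden.

def withdraw(balance: float, amount: float) -> float:
    """Return the new balance after withdrawing `amount`."""
    if amount <= 0:
        # Never mentioned in the informal request: without this check,
        # a negative "withdrawal" would silently deposit money.
        raise ValueError("withdrawal amount must be positive")
    if amount > balance:
        # Also unmentioned: overdrafts have to be forbidden explicitly.
        raise ValueError("insufficient funds")
    return balance - amount

print(withdraw(100.0, 30.0))  # 70.0
```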
So the good news is that AI-based coding will enable more people with less training to do certain kinds of things. However, I think it’s a mistake to assume that means people with less training will be able to do everything that trained people are doing today. There are cases where you really need people with deep knowledge doing a very specialized task. But for every one of those people, there are now hundreds or thousands who don’t have that knowledge but can still do something useful.
If I stretch the analogy: there are specialty surgeons, general surgeons, general practitioners, nurses, physician assistants, and healthcare technicians. There are many different ways to add value in healthcare. I think AI is a new and interesting way to add value in the realm of getting machines to do useful things for us. I’m not confident that’s how most people are going to use it, but that is at least what’s possible.
Lark Park: What does that mean for CS students? How is what undergraduates are learning in computer science changing?
Armando Fox: What we’re trying to disentangle — at least in the conversations my colleagues and I are having — is this: for a long time, there has been a body of computer science concepts that are taught in a way that is entangled with learning to code, writing code, debugging code, reading code, reviewing code.
The challenge now is that certain parts of the code production task can be very efficiently automated. So the question is: if we tried to separate the concepts from the fact that those concepts are taught through the medium of writing code, what would those concepts be?
It’s going to be like calculus. Everyone I know who’s an engineer at some point took differential and integral calculus — they did closed-form integrals, they learned all the tricks for doing integration — and then in real life, you never do that. All interesting integration gets done through numerical methods. But your understanding of what integration is and what the limitations of numerical methods are informs the way you use that automation.
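[A small illustration of the analogy, ours rather than Fox’s: in practice the integration is numerical, and the conceptual knowledge (here, that the integral of sin x from 0 to π is exactly 2) is what lets you judge whether the automation is behaving.]

```python
import math

def trapezoid(f, a, b, n=10_000):
    """Approximate the integral of f over [a, b] with the trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The closed form you learned in calculus says the answer is exactly 2;
# that is what tells you the numerical result below is trustworthy.
approx = trapezoid(math.sin, 0.0, math.pi)
print(approx)             # very close to 2.0
print(abs(approx - 2.0))  # a tiny error only the closed form lets you measure
```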
How important is code comprehension, and can it be taught without code writing? I think there’s probably more than an incidental connection between that question and the question of learning to write versus learning to read.
The other thing that’s making this difficult is what’s happening in industry. If you look at the companies that are successfully using AI responsibly and improving their productivity, they’re companies that already had good systems in place, including the people skills aspect, which is hugely important. But those systems were put in place and are being maintained by people who are quite experienced.
So there’s a gap: how do you become a senior engineer without going through what was effectively an apprenticeship? That challenge is being mirrored in academia. I just finished teaching a software engineering honors course where students build software pro bono for nonprofits. We encourage students to use AI responsibly, we review their work, we have great discussions about it. But it works because these are advanced students. How did they get to be advanced students? In their generation, they got there by learning to code the hard way.
How do students get to the upper division now, in a world where AI can trivially do all the lower division assignments? What do we need to be teaching them so that even if they write a lot of AI-assisted code, they understand the limitations? What does that kernel of knowledge consist of? That’s what’s changing — we’re trying to identify it and ask how to teach it effectively, and whether there are places where we no longer need code writing as the vehicle.
Lark Park: That’s a big one: if the bottom rung of the career ladder is effectively broken, and industry is going to look for greater experience, how do students actually get that experience? Also, the signals aren’t clear. Some data suggests that software engineering jobs are disappearing. Then another report says software engineers are fine.
Armando Fox: Well, “software engineer” isn’t really a precise job title. If you told me registered nurse jobs were disappearing, that has a very specific meaning, in part because it’s tied to a standardized accreditation exam.
There’s no analogous thing you can say about software engineers. During the boom times of boot camps — learn JavaScript in ten days, we promise to get you a job in six months or your money back — a lot of us grizzled veterans were saying: if these boot camps are teaching you superficial skills, your utility is going to be short-lived, because at some point those skills are either no longer going to be necessary, or the tools are going to get better and you won’t be needed for that anymore. And that’s exactly what’s happening. I know people whose title is software engineer, but whose skill set is fairly limited to things that have now been largely automated away.
What part of front-end software engineering has not been automated away? The design part. The human-computer interaction part — the way a human being interacts with the site, the interactions they go through, the way cues are presented visually. That part is still really important. But the mechanics of how you get that to work have been largely automated away. And if you went through one of those boot camps and didn’t come away with fundamental, transferable knowledge that allows you to approach other analytical coding tasks, you’re probably done.
One of my students asked me: if I become a software engineer, is AI going to replace me? I said: no, what’s going to happen is that software engineers who have foundational knowledge and know how to use AI to be more productive are going to replace the ones who don’t.
If you have deep knowledge, if you can think about systems and architecture, if you can talk to a customer and listen to what they say they need and turn that into a technical plan with design alternatives — and if you can figure out how to use AI to make that process better — people want that.
Lark Park: What you’re saying sounds like a real endorsement of a university education. That what you’re going to get is not going to be fast, not going to be shallow — it’s going to be deep and comprehensive. Assuming you’re nodding yes to that, let me ask about anxiety that’s circulating: are students actually learning what we want them to learn? You co-authored a piece in Inside Higher Ed recently called “Blue Books Are Not the Answer to AI.” What’s driven the move to blue books is the concern that students are going to be able to cheat more, and we won’t really be able to tell whether they got the benefit of a true university education.
Armando Fox: Let me start by saying that the people advocating for returning to handwritten exams and blue books are, in my opinion, conflating two distinct things.
One is: we can no longer trust that when students produce work outside of a controlled environment, it’s their own work. That has always been true — there’s always been a gray market of people who will write your reports and do your projects for you. But with AI, the price of doing it has gone to zero. So one concern is whether, in the absence of a monitored environment, you can trust that the work product reflects actual learning.
That’s a perfectly valid concern, and I agree with it. It has always been important to make sure that when a student produces a piece of work, the fact that they produced it somehow implies that they learned the process. AI has sort of blown that up.
However, the second thing is: what is the right way to administer assessments in a controlled environment so you can trust the results? That’s what our piece was arguing for. AI presents an easy way to obtain a credential for now, because the credential is framed in terms of jumping through certain hoops, and there’s now a way to jump through those hoops without doing much actual work or gaining any real understanding. But you’ll still get the credential, and the credential becomes your ticket to economic security.
In CS, that second part of the equation is no longer as true as it was. You can try to AI your way through a Zoom interview if you’re very clever, but at some point you’re going to sit across the table from someone, and you’re not going to be able to do that. You can’t really fake it till you make it in that regard.
When we argued against blue books, we were saying: yes, we need a proctored environment, but there’s a better way to do that than handwritten exams with 800 people in a gymnasium.
I’d also like to frame our argument for computer-based testing not just as a fix for the AI cheating problem. There are real benefits to being able to do frequent, short assessments. It’s a key ingredient of mastery learning, for example, and it’s actually the reason we got interested in computer-based exams in the first place. Retention is better, it’s better for pedagogy, it’s better for student mental health. It’s also better for the quality of life of instructors and TAs.
Lark Park: In full disclosure, your two Learning Lab grants are about mastery learning.
Armando Fox: Very much so.
Lark Park: A lot of things that cause anxiety get thrown into the same stew. The other thing that’s related — but not in a good way [to mastery learning] — is the curved grading issue, which seems to be gaining some steam. The whole grade inflation conversation: everybody’s getting an A, something must be terribly wrong.
Armando Fox: Grade inflation is a problem, but curving doesn’t fix it. Grade inflation without a curve means too many people who got A’s should probably have gotten B’s or C’s. Grade inflation with a curve means not only that, but you can’t even tell what the people who got A’s know how to do. At least without the curve, you could say: the people who got A’s are supposed to know 90% of the material, but there are a bunch of people who know 80% and received an A regardless. That’s grade inflation. Grade curving means whether you get an A depends on what the next person got.
I can’t think of any justification for that — unless the goal of grades is to identify the top X percent of students within a particular cohort. And not all student cohorts are equal, not all instructors are equally effective, and course materials change.
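[A small sketch, ours and not part of the conversation, with invented cutoffs and scores, contrasting the two schemes: absolute grading reports mastery against a fixed bar, while curving reports rank within whichever cohort happened to enroll.]

```python
def absolute_grade(score: float) -> str:
    """Grade against fixed mastery thresholds (cutoffs assumed for illustration)."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"

def curved_grades(scores: list[float], top_fraction: float = 0.25) -> list[str]:
    """Give an A to the top `top_fraction` of the cohort, whatever they actually know."""
    cutoff_index = max(1, int(len(scores) * top_fraction))
    cutoff = sorted(scores, reverse=True)[cutoff_index - 1]
    return ["A" if s >= cutoff else "B" for s in scores]

# A cohort in which nobody has mastered even 80% of the material:
cohort = [62.0, 65.0, 68.0, 70.0]
print([absolute_grade(s) for s in cohort])  # ['C', 'C', 'C', 'C']
print(curved_grades(cohort))                # ['B', 'B', 'B', 'A'] -- an A regardless
```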
Lark Park: I do think higher ed has been on the train of wanting to identify the top five or ten percent. Part of that, I think, is about self-propagation for graduate school. But I think that tends to have collateral impacts that are less desirable.
Armando Fox: Even if your goal was explicitly to identify the top ten percent of performers, there are many ways to do that. When I talk to people in industry who are doing the hiring, they basically say: grades mostly don’t matter. That’s a little glib, because due to grade inflation, it now looks suspicious if you don’t have really good grades. There’s a kind of expectation — like, what, you didn’t get all A’s? Grades aren’t the signal that people who care are using to identify who’s best.
The signals they’re using are: did you participate in hackathons? Do you have personal projects you do not for credit, not for a class, not for money, but just because you have the passion for building things? Did you try to get involved in research? Are you a leader in an organization — technical or otherwise? That’s how you identify top performers.
Lark Park: So why has mastery learning been such a focus for you?
Armando Fox: Because I don’t think it has to be the way that it is. For a long time, I think the main reason not to do mastery learning was that it was too expensive. We developed an assembly line model of education because we had more and more people wanting an education, and a limited number of people who could teach them. So the kind of individualized attention — lots of practice, flexible deadlines — we couldn’t afford to do that. Our argument is: if you could afford to do it practically and without breaking the bank, why wouldn’t you? Because we know it works. If there’s something you’re not doing that you know would work, you have to ask yourself why. If the answer is you don’t have the resources, that’s unfortunate but understandable. But if technology can help you get closer to doing it — well, that’s an example of a good use of technology. I’m a big proponent of AI in the classroom for exactly that reason. There are things AI can help with that will help students learn faster.
Imagine having an AI conversational assistant or programming partner. In my software engineering courses, we have an AI play the role of a customer so that students develop the skills of interviewing someone non-technical. That’s hugely valuable. You can’t go out and get real industry people to role-play a customer for an hour with every student — that’s unaffordable. But if we can develop that skill in a less expensive way, and we know it’s an important skill, why wouldn’t we work on that?
So to me, mastery learning, once it’s explained in those terms, should be a no-brainer. We know it works; the only question is why we aren’t doing more of it. The implicit answer has been that we couldn’t afford it. We’re trying to wake people up to the fact that now, we sort of can.
Lark Park: I want to go back to something you said earlier about the history of computing — the role of identity, the people who innovated and made business decisions. There’s subjectivity there. I want to ask about your roots, and how they’ve informed the decisions you’ve made — how you engage as a professor.
Armando Fox: Let me give both a little-picture and a big-picture answer. As you mentioned, I do music and theater. Friends and colleagues have asked me: there’s AI-generated music now, AI is writing plays — how do you feel about that? Is it going to write music that’s better than yours?
I never really understood those arguments, because making music and making theater is fun. I can’t explain why it’s fun. I think it’s something deep in our DNA — we like storytelling, we like expressing emotions, we like the idea of connecting with another human being, having emotional experiences, bringing those experiences together. There’s some primal connection in there. So the fact that connecting to other human beings through that medium is enjoyable — whether tech or AI can do that better seems to me completely irrelevant. There’s an intrinsic joy in doing something that connects me with other human beings, and I think that’s fundamental. I try to get my students to see that.
There was a set of videos that came out in the mid-90s called The Triumph of the Nerds — not to be confused with Revenge of the Nerds. There’s a segment with Steve Jobs, when Apple was just hitting its early stride…. He was talking about the design of the Apple II and later the Macintosh. At some point as a college student, he had taken a class in calligraphy and developed an appreciation for the joy and beauty of beautiful writing — and because of that, he was so insistent that the Mac would have proportionally spaced fonts. It was the first mass-market computer to really attempt that.
He said — and I’m paraphrasing — that good product design is about trying to understand the best things that humans have done in all areas of knowledge and culture, and finding ways to bring those achievements into what you do.
That’s the other thing I try to convey to my students. Music and theater happen to be my thing, but I’m also a fan of reading history, of understanding political economy. There is no piece of knowledge I’ve ever come across where I could confidently say, that’s never going to be useful in my life as a computer scientist. One of my colleagues at Berkeley, Paul Hilfinger — a professor who retired a couple of years ago, legendary in the department — when students would ask what’s going to be on the exam, he would say: it’ll cover the sum total of human knowledge, but with a very strong focus on compilers. At the risk of putting words in his mouth, I think that was his way of saying: yes, you’re here to become a good computer scientist, but you can’t be a good computer scientist without being a good human being first. And to be a good human being, you ought to understand the best and worst things that human beings have done, and ask how you can bring those things to the project you’re working on today.
People ask: do you combine music and technology? Not directly, but music is an aesthetic. Music can be beautiful in different ways — the way Rachmaninoff is beautiful, or the way Mozart is beautiful, where it’s really transparent, or the way Bach is beautiful, where there’s deep structure combined with an aesthetic. All of those concepts have parallels in technology. Code can be beautiful. The way a piece of software interacts with you can be beautiful. But you have to bring the beauty — beauty is something human beings do. Without it, we’re going to build crappy things and become automatons.
Lark Park: Can I just say that everything you’ve said is not the typical view most people would have of a professor of computer science? And it’s exactly the kind of thing students should understand….
[On the topic of things that students should understand:] One of your sites had your “failure” bio…. I wanted to ask — looking at your “failure” resume, but also at who you are and all the successes you’ve had — are you happy with your career journey?
Armando Fox: I am happy with the way it’s gone — which is different than saying that at every moment I was happy with what happened next. It’s well known — I didn’t get tenure at Stanford. I’d be lying if I said that didn’t impact me at all. But I came to realize that I have to define success on my own terms.
I talk a lot about imposter syndrome. Students say, oh, I have imposter syndrome — everybody at Berkeley seems better prepared or smarter than me. I say: welcome to my life. We have six Turing Award winners among our faculty and emeritus faculty. Of course I have imposter syndrome — I have it every day. But I understand that I love learning and appreciating the best things people have done, and part of the cost of that is recognizing that I’m not going to do some of those things. On the other hand, there are things I do that bring me immeasurable joy. I can sit down and play the piano and have an experience that no one else can provide.
I feel very fortunate. And I say that having a long failure bio, with more items surely to come — but that’s okay.
Lark Park: As a proponent of public higher education in California, I’m thinking that not getting tenure at Stanford is what brought you to Cal — and that was a huge benefit. You teach so many more students at Cal than you otherwise would have at Stanford, and they [students] are the beneficiaries of that.
Armando Fox: Not just the number of students, though. We do have a wider range of students in terms of backgrounds and socioeconomic conditions. You and I have talked about my own background. My parents escaped from Cuba — it was after the Castro government came to power. It became literally illegal for professionals to leave the country. They left on the pretense of a one-year fellowship in Spain, because they had both finished medical school and been granted the equivalent of a one-year postdoc. On paper, they were supposed to come back after a year. In practice, they knew they were never going to come back. They were fleeing as political refugees, and they were allowed one suitcase per person. They literally walked away from everything — their houses, their friends, everything. Imagine putting some things in one suitcase, walking out of your house, and knowing you’re never going to be in that world again. It’s hard for me to imagine.
What I got from that is: what you know how to do, and what brings you joy — those are things no one can take from you. Focus on those things. That’s the other thing I try to tell my students: whatever background you come from, if you’re here to learn things, what you learn can never be taken from you. My parents, because they had medical knowledge and degrees, were able to rebuild from zero — start completely over in another country where the language spoken wasn’t their first language, with no connections, no social network, really no professional network. And they did pretty well.
So my parents always told me: do whatever you’re going to do, but do it well. Because that’s the one thing that can’t be taken. You might be forced to leave your country, things might be stolen from you, but your knowledge and your skills and the value that you can offer other people by virtue of those — no one can take those away.


