My Robot Teacher Episode 12 Transcript
Querying the Collective Mind: CrowdSmart Co-Founder Kim Polese on Collective Intelligence
Below is the full transcript of Episode 12 of My Robot Teacher (lightly edited for clarity and concision).
Guest:
Kim Polese: technology executive, co-founder, CrowdSmart Inc. and Common Good AI; Public Policy Institute of California Statewide Leadership Council member
Also available on: Apple / Spotify
INTRODUCTION [0:00-8:19]
KIM POLESE: There are many other ways to use AI, one of which is to use AI to help humans actually collectively problem solve and learn from each other, and also collectively learn from data, but by interacting with each other. And we can use AI to do that at scale.
SARAH: Welcome back to My Robot Teacher. I’m Sarah Senk.
TAIYO: And I’m Taiyo Inoue. And welcome to all our new listeners!
SARAH: Yes, happy to say we had a big bump in subscribers earlier this month, presumably because we got mentioned and quoted in the LA Times!
TAIYO: That’s some nice self-promotion there, Sarah. Well done.
SARAH: You know it!
TAIYO: To be clear, though, the article wasn’t entirely about us.
SARAH: No, not about us at all, but about Cal State faculty.
TAIYO: Right, true. The CSU, where Sarah and I are professors, released a report on one of the largest studies to date on AI in higher education, and the article was about how faculty opinions about AI are - SHOCKER! - very polarized.
SARAH: Shout out to the San Diego State University researchers behind the survey, whose work - like ours - is supported by the California Education Learning Lab. We hope you too will check out our YouTube channel and leave a review on Apple Podcasts to drive more of that sweet, sweet internet traffic our way.
TAIYO: My god, you’re a publicity monster!
SARAH: Today’s episode is about how to use AI to summon the wisdom of the crowd without the stupidity of the mob. Our guest is Kim Polese, co-founder of CrowdSmart AI and Common Good AI, whose work explores how AI can be used not just to generate content or outsource drudgery, but to help groups reason together, deliberate more effectively, and work through complex problems by surfacing their collective intelligence. Taiyo, what is collective intelligence as you understand it?
TAIYO: Oh, God, you’re really going to put me on the spot, aren’t you?
SARAH: You’re so good when I put you on the spot.
TAIYO: Okay, okay. I guess I think of collective intelligence as being like a group of minds that come together and function almost like a single distributed mind, capable of generating insights and ideas that no one person in the group could have generated or produced alone. You know, pulling out the intelligence of a collective can be really, really difficult. For instance, maybe... individuals in the group bring different assumptions or different ways of thinking. I mean, just take our collaboration, for example, right? We bring this up a lot, but I’m in math and you’re in comp lit and we have really different, very different ways of thinking about stuff. I think one of the really beautiful things about our collaboration is that when we bounce ideas off of one another and cast these ideas through our respective disciplinary lenses, we can see beautiful new insights that I don’t think either one of us could have come up with on our own.
SARAH: I love that. So we’re like a hive mind of two.
TAIYO: I mean, “hive mind” kind of has some negative connotations. I’m kind of thinking of the BORG now.
SARAH: I do feel like you have infected my mind in some way. I’ve noticed I use your mathy language for things. If I have an expectation of what somebody’s going to do and they really surprise me, I’ll say, hold on, I’m updating my priors, which I now know from you is something from Bayesian statistics.
TAIYO: Absolutely. That’s amazing. Yeah, exactly. When you come across new evidence, you update your priors to obtain a posterior distribution.
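(For listeners who want to see what that update looks like concretely, here is a toy Bayes' rule calculation in Python. All of the numbers are made up, purely for illustration.)

```python
# Toy Bayes' rule update: all numbers are made up, purely illustrative.

# Prior belief: probability that a colleague is "actually talkative".
prior_talkative = 0.2

# Likelihoods: chance of seeing them dominate a meeting under each hypothesis.
p_data_if_talkative = 0.7
p_data_if_quiet = 0.1

# Bayes' rule: posterior = likelihood * prior / evidence.
evidence = (p_data_if_talkative * prior_talkative
            + p_data_if_quiet * (1 - prior_talkative))
posterior_talkative = p_data_if_talkative * prior_talkative / evidence

print(f"Prior: {prior_talkative:.2f} -> Posterior: {posterior_talkative:.2f}")
# Prior: 0.20 -> Posterior: 0.64  (the "updated prior" Sarah is describing)
```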
SARAH: Oh my god, that’s the other thing I’ll say is “I’m really interested in the outliers in this distribution.” Again, these are words I would not have used two years ago, and I love it because having that language now makes my personal process of judging other people feel way more rigorous.
TAIYO: I mean, I have to say the influence is totally mutual. I’ve mentioned now almost on a daily basis George Lakoff’s Metaphors We Live By, right? And of course Walter Ong’s book, Orality and Literacy. That was a Sarah recommendation. You’ve also influenced my understanding of language, particularly from a post-structural point of view.
SARAH: The hilarious part about that for me is that I think my understanding of poststructuralism has been totally deepened by you explaining to me the geometry of high dimensional vector space when I asked you what the hell “word embedding” was. And I was like, how cool to have this new way of thinking about how meaning is relational.
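(A word embedding is just a list of numbers placing a word as a point in a high-dimensional space, and “meaning is relational” shows up as geometry: words that behave similarly end up pointing in similar directions. A toy sketch in Python with hand-made three-dimensional vectors; real embeddings are learned from text and have hundreds of dimensions.)

```python
import numpy as np

# Hand-made toy "embeddings". Real word embeddings are learned from text
# and have hundreds or thousands of dimensions, not three.
vectors = {
    "coffee": np.array([0.9, 0.1, 0.0]),
    "java":   np.array([0.8, 0.2, 0.1]),
    "poetry": np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    """Similarity of direction in the vector space (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Meaning is relational": a word's vector only means something relative
# to where the other words sit.
print(cosine(vectors["coffee"], vectors["java"]))    # high: close neighbors
print(cosine(vectors["coffee"], vectors["poetry"]))  # low: far apart
```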
TAIYO: [laugh]
SARAH: I think that speaks to what’s so cool about interdisciplinary projects of any kind: you learn new things, but you also learn new ways to think about the things you’ve been thinking about forever!
TAIYO: Right, and we’ve talked before on this podcast about how many of the defining problems of the 21st century are compound problems, where no single discipline can see the whole thing, much less solve it alone.
SARAH: I think we feel this acutely as educators because universities are full of some seriously smart people and yet universities are super dysfunctional - like, why is that!? That’s a problem, as I see it, of collective intelligence. Like for a group of intelligent people to act intelligently there are certain conditions that have to be in place; they need to be focused around the same thing, the same priority, or whatever. They need structures for working through disagreement, and they need to feel socially safe sharing controversial… those outlier ideas. So our ability to hive-mind is at least partly due to the trust we share as friends - like we have no shame in risking half-formed thoughts and we know there’s not gonna be strife if we ever say something that really offends the other one, because we’ll give each other the benefit of the doubt.
TAIYO: And how often do we ever offend each other, Sarah?
SARAH: Okay, fair. I actually think one basis of our friendship is that we are not-offended by the same things.
TAIYO: [laughs] Wait, so are you saying that we laugh at the same things that offend other people? Is that what you mean?
SARAH: There’s that hive-mind, parsing my deliberately ambiguous sentence exactly right.
TAIYO: Nice. Oh man, but seriously, that’s the larger question here. What happens when you try to scale that kind of generative exchange beyond two people and bring it to a large group of people who don’t personally know each other? What would it take for a classroom, a university, or a whole organization to think more intelligently together? And can AI help groups surface not just the knowledge each person already holds, but the new insight that emerges between people when the conditions are right?
SARAH: And so that brings us again to our guest, Kim Polese, who is a longtime technology leader. And as I said earlier, the co-founder and executive chairman of CrowdSmart AI, which helps organizations gather and analyze large amounts of group input and identify patterns in how people are thinking. CrowdSmart draws on some of the same underlying advances that ChatGPT has, but the aim is different. Instead of generating answers based on training on a massive corpus of internet text, this technology is designed to listen to a massive group conversation, map the ideas that people are contributing, and then track patterns of agreement and disagreement and emerging themes to help surface what the group seems to know and what it’s still working through. Kim is also the co-founder of Common Good AI, a nonprofit initiative that brings similar AI-enhanced deliberation tools into civic life with the goal of helping polarized communities identify common ground and work through shared problems.
TAIYO: Earlier in her career, Kim worked at Sun Microsystems, where she led the launch of Java in the mid-1990s. She also co-founded Marimba and served as co-founder and CEO of SpikeSource. She has held leadership roles across technology and public policy organizations, including the Obama Administration’s Innovation Advisory Board, the Public Policy Institute of California, TechNet, and the Silicon Valley Leadership Group. She also co-teaches Lean Launchpad at UC Berkeley’s Haas School of Business. Please enjoy the interview.
CHAPTER 2: Kim’s Background + Java and Product Management as Collective Intelligence [8:20-13:08]
Sarah: Kim, thank you so much for joining us today. Can you tell us a bit about what first got you interested in AI and how you started working in this industry?
Kim: Well, it started when I was 10 years old and actually encountered Eliza for the first time. So Eliza was a program written in the sixties that was essentially the first chatbot; it acted like a psychologist, a therapist. And I was a 10-year-old kid. My mom would take me up to the Lawrence Hall of Science in Berkeley.
Taiyo: You’re kidding. That’s amazing!
Kim: Yeah! And there was this old mainframe in the basement, this dusty old mainframe running this application called Eliza, and I would have conversations for hours with Eliza. Eliza would type, “How are you feeling today?” And I would respond, “I’m feeling crummy.” And Eliza would come back, “Well, why are you feeling crummy?” And I would say, “Well, because I had a fight with my best friend.” And Eliza would say, “Well, how did that make you feel?” And so it was like that: ultimately, it was the most annoying conversation with a therapist that one could have. However, as a 10-year-old kid, I was fascinated by this idea that we could encapsulate the behavior of humans in computers. And it really set me on a path to computing and also ultimately, AI. My educational background, my academic background is biophysics and computer science. So I was at Berkeley and also studied computer science at the University of Washington, and ultimately went into product management after that first job in AI - I went to work actually on the AI team for Eric Schmidt at Sun Microsystems, and then became a product manager. And that led to founding multiple companies. But the path led back to AI and to the work that I’m doing today, which feels like the most interesting from a technology and science standpoint, and also has the most potential for impact of everything I’ve worked on, including Java.
Sarah: That’s really interesting to me that you were a product manager for Java, right?
Kim: Yeah.
Sarah: It seems like this is a role where, especially on a product that big, like you would have to do a lot of synthesis work and coordination. Um, and it makes me wonder if people who have experiences like that, working with and managing teams of people and you know, trying to anticipate everything from what engineers are gonna do to consumer needs and so on, like you might be predisposed to thinking about collective intelligence. So looking back, does it feel connected to your interest in collective intelligence today?
Kim: It’s such an interesting question and I hadn’t really, you know, connected those dots, but you’re absolutely right because you know, as a product manager you have all of the responsibility, but none of the delegation authority - like people don’t report to you, you’re the manager of a product, not the team. And so you have to figure out what is the technical definition, what are the features, what’s the go-to-market strategy? What about the branding, what about the channel partnership strategy? The whole thing is up to you, and oftentimes you have, you know, strengths in different areas of those disciplines, maybe not all of them, and so you’re constantly going out and talking to a whole lot of people and learning and synthesizing, and then balancing different beliefs on the team about where should this product go. In the case of Java, it was a really big challenge. It was called Oak at the time, and we were trying to figure out how to get it out to the world, and it was too early. It was before the web, it was before there were smartphones, or set-top boxes. And so it was very, very difficult. And ultimately, after going out and talking to a whole lot of people and getting a bunch of data about where the information superhighway was going, we realized we needed to create a whole new go-to-market approach. But that would never have happened without synthesizing all of this insight and getting a bunch of data and, you know, balancing a lot of very, very strong opinions about where to take this.
Taiyo: You’re telling me Java used to be called Oak?
Kim: Yes! [laughs]
Taiyo: You’re kidding.
Kim: Yes. That was the code name.
Taiyo: So then where did Java come from?
Sarah: Kim named it, right?
Kim: So Java came about because, um, it was my responsibility to name this thing.
Taiyo: Wait, you, you..You named Java?
Sarah: We’re in the presence of a luminary.
Taiyo: That’s amazing.
Kim: I, I knew that this was a very important decision to the team and so I organized two brainstorming sessions and then wrote a bunch of things on the whiteboard, like waking up the web, bringing the web to life, ‘cause that’s what we were doing. And then we got on a coffee and caffeine riff, and out of that Java emerged. It was one of several names, and I chose Java because I felt it was the best. Eric Schmidt gave the thumbs up and it became the name.
Taiyo: Wow, this is so cool. Legendary. Legendary, amazing. Yeah.
Kim: But it was a collective intelligence exercise - the entire thing.
Sarah: Exactly.
Kim: That team also, it was made up not just of engineers, but artists and, you know, designers, people that were really thinking broadly.
CHAPTER 3: CrowdSmart [13:09-16:00]
Taiyo: Mm. Can you tell us a little bit more about what you’re doing right now? CrowdSmart AI and Common Good AI. We’d love to hear about it.
Kim: Yes. So this is a new approach to AI. So all of the focus over the last few years since large language models came out has been on learning from data, using AI to learn from data. So massive amounts of content, using large language models to synthesize and find patterns in that content. And then individuals asking questions of the AI. Incredibly powerful, amazing, you know, breakthroughs and a phenomenal productivity tool. However, there’s a whole other way to use AI. There are many other ways to use AI, one of which is to use AI to help humans actually collectively problem solve and learn from each other. And also collectively learn from data, but by interacting with each other. And we can use AI to do that at scale. So here’s the challenge in real life: if you’ve ever been in a room with a really good human facilitator, they’re great at finding, you know, alignment - what’s resonating - asking open-ended questions, encouraging the quiet voices, encouraging productive friction. The challenge is, after about 15 or 20 people in a room, the communication complexity problem is just off the charts. But that’s a perfect problem for AI. So what we can do is actually build new forms of AI that act like the best human facilitators, that encourage us to ask and answer open-ended questions. To, uh, surface insights that normally, often are buried, often because of, you know, all sorts of incentives that sort of run against saying what you really think. It might be organizational, hierarchical, whatever biases that come into play. Well, AI can basically, uh, eliminate all those challenges. And what the AI also is really, really good at is managing these multiple lines of communication. There’s something called Brooks’s Law, which says that ultimately you start to get exponential complexity once you have more than a few people. So three people is three lines of communication, 10 people is 45 lines of communication, 20 people is almost 200 lines of communication. And managing all of those different perspectives? Well, again, a perfect problem for AI.
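(The arithmetic behind those numbers is the pairwise "handshake" count: n people have n(n-1)/2 possible lines of communication between them, which is why the totals climb so fast. A quick check in Python:)

```python
def communication_lines(n: int) -> int:
    """Pairwise lines of communication among n people: n choose 2."""
    return n * (n - 1) // 2

for n in (3, 10, 20, 100):
    print(f"{n} people -> {communication_lines(n)} lines of communication")
# 3 -> 3, 10 -> 45, 20 -> 190, 100 -> 4950
```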
Sarah: Right.
Kim: So this is a new approach to AI that is using AI to both facilitate how humans collaborate and ideate and, you know, use their productive friction and their insights and their experience and their diverse perspectives to align on an actionable path forward, or the best decision about a complex problem, or a compendium of knowledge about something that is constantly evolving, where you wanna keep evolving that knowledge. That’s basically what this is. It’s a new approach to AI, using AI to help humans learn from each other and facilitate that process.
CHAPTER 4: A Different Approach to AI - Collective Intelligence [16:01-22:30]
Sarah: Right. Can you tell us a little bit more about the AI side of this? When you describe CrowdSmart as a different approach, are you still building on the technology that we know from foundation models like ChatGPT, or is the tech underneath it something distinct?
Kim: It certainly builds on and leverages what’s going on with LLMs. So we’re using transformer models, but instead of making sense of existing static data, we’re making sense of this conversation that’s happening between humans. And so we’re creating vectorized… essentially knowledge models, and using the transformer models to learn semantic meaning from what people are saying. So you start to identify themes - the transformer models are starting to identify themes. And then the AI, this, uh, new approach to AI, is using a set of algorithms - there are things called hidden Markov models and Bayesian belief networks - and there’s a whole set of approaches that are basically identifying: what do we know? What have we surfaced about the knowledge about this particular topic or decision or problem we’re trying to solve? And then encouraging that productive friction by surfacing diverse insights, enabling people to rank each other’s insights and respond to other people’s ideas, enabling people to disagree, change their mind. The system’s constantly prompting for the whys behind what you think. It’s not just about what you think, but why do you think that? What led you to believe that? And all of that conversation is actually being turned into a private language model, essentially a collective mind of the group.
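(Kim describes CrowdSmart’s own pipeline only at a high level - transformer embeddings plus probabilistic models like hidden Markov models and Bayesian belief networks - so the following is just a generic sketch of the first step she mentions: turn contributions into vectors, then group them into candidate themes. It assumes the open-source sentence-transformers and scikit-learn libraries and is not CrowdSmart’s implementation.)

```python
# Generic sketch: embed contributions, then group them into candidate themes.
# This is commodity tooling, not CrowdSmart's actual system.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

contributions = [
    "Our onboarding process confuses new hires.",
    "New employees don't know who to ask for help.",
    "We should pilot a four-day work week.",
    "Shorter weeks might improve focus and retention.",
]

# 1. Turn each contribution into a semantic vector (a transformer embedding).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(contributions)

# 2. Group similar contributions; each cluster is a candidate theme.
labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)

for label, text in zip(labels, contributions):
    print(f"theme {label}: {text}")
```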
Sarah: Of the group.
Kim: Yeah. Yeah.
Sarah: And with nothing coming from outside the group. It’s all training on things just…
Kim: Right. Just within the group. You can optionally bring in other data, you can actually incorporate agents that can bring knowledge about particular topics or fact check or whatever. But really the core of this is deliberation between humans, knowledge sharing.
Taiyo: So it really feels like you’re kind of drawing out the intelligence that was already sort of in the groups, in the organization.
Kim: Yes, Exactly.
Taiyo: Uh, but you’re using these, as you say, superhuman facilitators - these AIs - to surface these kinds of insights that are oftentimes inscrutable just because of the combinatorial issues, like as you were saying. When you have just even a moderate number of humans in the same room, the number of lines of conversation that can happen there just explodes. So what’s really interesting about this is the idea that it’s not so much the AI that’s generating the intelligence. It’s the human beings that are generating the intelligence.
Kim: Right.
Taiyo: It’s just that the AI’s able to pick up, through its superhuman machine powers, on the patterns that are in the conversations, and being able to coordinate people, connect people, all of these kinds of things. And maybe these things were not possible before, only because of the immense complexity of the organization.
Kim: That’s exactly it.
Sarah: I think what’s interesting to me here is the question of what kinds of social structures and dynamics establish the conditions for people to speak up, you know, in the first place. Like even before you’re talking about the scale of a really big organization. Even small group dynamics can be problematic, right? So, part of what this seems designed to do is let people contribute without maybe the, uh, the threat of social penalty.
Kim: That’s exactly right. One thing I didn’t mention is, to your point, identities are masked, so you don’t know who’s saying what. And that really encourages transparency. And this is a best practice of the science of collective intelligence. Also diversity of thought is a best practice, and diversity of the participants, and diversity in every sense: cognitive diversity, life experience, age, perspectives, right, you know, expertise. But the fact that identities are masked, to your point, really encourages people to say what they think. This approach to AI also actively resists trolling or attempts to game the conversation, because people are constantly ranking, upvoting, or responding to ideas that they like. And so the ones that are trying to game or, you know, troll always fall immediately to the bottom, because instead of optimizing for attention, which is what social media does, which encourages the trolls, this is optimizing for common ground and serendipitous breakthroughs, new ideas, diverse perspectives. And that optimizing for common ground and diverse ideas just naturally eliminates attempts to hijack the conversation.
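(One way to make the “optimizing for common ground, not attention” contrast concrete is to score an idea by how broadly it resonates across different participant groups rather than by raw engagement volume. The sketch below is a toy with made-up numbers, not CrowdSmart’s actual scoring function.)

```python
# Toy contrast between attention-style and common-ground-style scoring.
# Hypothetical vote counts, tallied per participant subgroup.
ideas = {
    "provocative hot take": {"group_a": 40, "group_b": 2,  "group_c": 1},
    "practical shared fix": {"group_a": 12, "group_b": 10, "group_c": 11},
}

def attention_score(votes):
    """What engagement-driven feeds reward: total raw volume."""
    return sum(votes.values())

def common_ground_score(votes):
    """Reward breadth of support: the weakest subgroup sets the score."""
    return min(votes.values())

for name, votes in ideas.items():
    print(name, attention_score(votes), common_ground_score(votes))
# The hot take wins on attention (43 vs 33) but loses on common ground (1 vs 10).
```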
Taiyo: Yeah. It does seem to me like a naive view of collective intelligence might just be about taking the average of everybody’s opinions. And what you end up with is very bland, boring insight, maybe, but not really something that, you know, is gonna be exciting. The exciting stuff is gonna live in the outliers, at the extremes of the distribution. And we wanna make sure that we’re honoring those kinds of minority, sometimes really out-there viewpoints, ‘cause those have a chance.
Kim: That’s right.
Taiyo: Of being really productive and being really brilliant
Sarah: In the classroom context, this happens all the time, where somebody might say, oh, it’s, it’s too out there. And I always say, “say it!” And - every day - it’s the most brilliant thing somebody said that week.
Kim: Right? Yeah.
Sarah: But there’s no convincing them, like, you know, “say the thing that you think is out there.”
Kim: Right! And that can shift the conversation in a whole new direction that suddenly uncovers a new insight - an “aha” in someone else. Right. That’s productive friction. And also, uh, encouraging serendipity.
Sarah: Yes!
Kim: It’s encouraging serendipity. That, and that’s so important.
Sarah: Mm. I think serendipity is such a fascinating thing because it’s like, you know, you find it when you are not searching for it.
Kim: That’s right.
Sarah: Right? It sort of comes up in these, in these surprising ways.
Kim: Yeah.
Sarah: And, maybe if you’re too focused on searching for something, you might miss the serendipitous thing because it doesn’t fit into your framework, right?
Kim: That’s right. And what it’s doing is it’s learning and it’s, it’s listening and it’s learning from the humans. So to your points, both of you, the AI is not making the decision or having a serendipitous thought.
Sarah: Right.
Kim: It is orchestrating the communication between the humans to encourage… to have the best chances of those serendipitous thoughts emerging.
CHAPTER 5: Why should we care about collective intelligence? [22:31-24:12]
Taiyo: Why should we care about collective intelligence? Right? Why? Why do we want the insights of an organization? Can you speak to that?
Kim: Yes. Well, think about the current approach to AI, which again is incredibly powerful, but it’s just one piece, and what it does is it learns from existing data. So imagine if someone said, “I know everything about you ‘cause I’ve read everything that you’ve, you know, that you’ve ever written, uh, or listened to everything you’ve ever said, and therefore, I don’t really need to interact with you anymore because I already know. You know, there’s no point in us having a conversation or ever getting together in real life,” or being part of a group. You know, that’s ridiculous. Obviously, we’re so much more than that: what lives in our brains, our experiences in life, our ability to make these aha connections in our minds, our ability to interact with each other and then have new insights emerge. That only comes from the interaction of humans, and from learning from living, real, live humans, which is different from learning from existing data. That’s just one piece of, you know, advancing human knowledge. Advancing human knowledge really depends - in a major way - on learning from living humans who are interacting with each other. And when you think about it, collective intelligence is how humanity has evolved throughout, you know, of course, millennia, and everything that we’ve done has been a collective ideation: interaction, trial and error, disagreements, ahas, breakthroughs together as groups.
CHAPTER 6: A Computational Model of the Collective Mind of the Group [24:13-30:42]
Taiyo: So are there examples that you can share of applications of this collective intelligence technology that it sounds like your work is really focused on? Are there examples that you would like to share with the audience?
Kim: Yeah. There are so many. And you can imagine any organization that’s trying to solve a problem, make a decision, advance their mission could benefit from collective intelligence - the collective intelligence of their stakeholders and the broader world. And stakeholders meaning their employees, their team, um, their customers, their partners.
Taiyo: Oh yeah.
Kim: So some examples, in sort of categories. One is organizations, companies. For example, a company called Alera. This is a big insurance rollup - a rollup of over 150 insurance companies. A private equity acquisition rolled up all these, you know, private insurance brokers and then made one insurance company. And the CEO realized there’s all this embedded knowledge in all these offices and all the teams, the people that work in all these different geographies, about what’s working, what’s not - and we need to create a unified culture, we need to drive, you know, a unified revenue model. And so there’s a whole new set of challenges. So the first question was, what are the challenges? What are the issues that you’re seeing? What are the problems, right? Because people can always see what’s not working. They can see what’s possible, but also what’s not working. And so once that open-ended exchange happened among all of the different employees, suddenly all these insights came up to the CEO, and where that CEO might’ve thought, oh, this is the path forward, now suddenly there’s new information emerging. So that’s one example.
Sarah: And what does that look like? Like how, just how the process works of getting that information.
Kim: What’s the user experience…
Sarah: That’s the word I was looking for! User experience!
Kim: Open-ended question. What, what do you think is working well and what do you think we could improve at this company?
Sarah: And they type it or speak it?
Kim: They type it and it’s asynchronous. Identities are masked. All the ideas that come to mind, a stream of consciousness, whatever comes to mind - what are all the ideas - now submit. Now you start to see other people’s ideas, small groups of other people’s ideas. It could be, you know, like seven max, seven people’s ideas max, ‘cause that’s about as much as you can consume. Well, those are optimized. Each set that you see is optimized for what might resonate with you, but also what might challenge you.
Sarah: Right.
Kim: Again, what the best human facilitators are doing. And now you’re presented with the opportunity to rank those ideas, click on the ones that resonate. If none do, skip - you get another group, another seven; skip, get another seven; skip. So you’re doing this ranking, responding to other people’s ideas. And the system on the back end is essentially doing this massive parallel A/B test, because of the trade-offs - there’s a lot of insight to be gained just by your being forced to say, “Hey, I think this is a great idea, not so much this other idea.” And as you have a lot of people doing that in parallel, suddenly now insights are starting to emerge from the group. And again, a key sort of secret sauce here is that everyone is seeing a different, unique, customized view based on what might resonate, but also what might challenge. And that is so key, because that’s about introducing productive friction.
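(As an illustration of that batching idea - a small set of ideas chosen mostly to resonate but partly to challenge - here is a hypothetical selection rule in Python. The affinity scores and the rule itself are made up for illustration; this is not the actual product logic.)

```python
# Hypothetical predicted "affinity" of each idea for one participant
# (1.0 = likely to resonate, 0.0 = likely to challenge). Made-up numbers.
idea_affinity = {
    "idea_a": 0.95, "idea_b": 0.88, "idea_c": 0.80, "idea_d": 0.72,
    "idea_e": 0.55, "idea_f": 0.41, "idea_g": 0.30, "idea_h": 0.18,
    "idea_i": 0.09, "idea_j": 0.04,
}

def compose_batch(affinity, size=5, challenge_share=0.4):
    """Mostly-resonant ideas plus a deliberate minority of challenging ones."""
    ranked = sorted(affinity, key=affinity.get, reverse=True)
    n_challenge = max(1, round(size * challenge_share))
    resonant = ranked[: size - n_challenge]   # most likely to resonate
    challenging = ranked[-n_challenge:]       # most likely to challenge
    return resonant + challenging

print(compose_batch(idea_affinity))
# ['idea_a', 'idea_b', 'idea_c', 'idea_i', 'idea_j']
```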
Sarah: And they’re asked to say why?
Kim: It’s always about why do you think this? And then others are seeing those reasons and then responding. Now someone’s going, yes, you know what, I saw that too, and I had an idea about how we might solve that. And now that’s starting to enter the conversation. Someone else is responding to that, going, “That’s an interesting idea; that might work if we tweaked it this way, you know?” And someone else comes in and says, you know what, “Eh, I’m not so sure, but what about this different approach?”
Sarah: Right.
Kim: And suddenly now you’re building this computational model of the collective mind of the group that is preserving everything that everyone has said. And so it’s also explainable, it’s, it’s auditable. You know how you reached that decision, which is very different from the existing approach, which is we don’t know how the LLMs came up with that answer.
Sarah: Yeah, exactly.
Kim: You know, might or might not be true, you know? But in this case it is fully explainable.
Sarah: Wow.
Kim: And it is, it is essentially an auditable, queryable computational model of the collective mind of the group that is constantly evolving as people are interacting with each other.
Sarah: I have goosebumps right now ‘cause I’m like, this is the pedagogical tool that I need in a seminar classroom in 2026 that I have never had before. Because that, that’s the key. It’s so hard to be able to identify…. What I wanna see is - I wanna see them making connections between something that we talked about a week ago and now something somebody else said. And often they’re not self-aware - you know, none of us are, or sometimes it’s like, “oh, I just thought of this thing.” And that might have come from a random sequence of people you talked to previously.
Kim: That’s right.
Sarah: I mean, that would be incredible to be able to make the genealogy of that thought visible! For one thing - it could make learning visible as iteration and revision of ideas rather than product or performance of competence, and it seems like it could surface a trail of intellectual development that no one instructor could realistically audit by themselves, too.
Kim: I get excited about this because I, I think this is, this has the potential, this approach, uh, using AI in this way, in this new way, has the potential to really advance civilization towards a new enlightenment. We don’t have to go down this dark path of giving our lives over to the AI overlords!
Sarah: Yeah, right!
Kim: You know, there is a different way and it’s, it’s all about harnessing what makes us most human.
Sarah: I wanna pause on that because I think what makes us most human in this context seems to be the ability to give our reasons and respond reflectively to other people’s reasons in, in our own voice, right? And, and actually learn something about how other people frame the problem as opposed to, like, responding, you know, adversarially to it. And so, it also sounds like, unlike a stereotypical chatbot of today, this system is never gonna tell the user, “oh, your idea is so brilliant and deep and nuanced” when it actually isn’t, because what this is doing is surfacing patterns in how other people in the group actually responded with their human brains to one another. Right. It’s not, it’s not like, uh, engaging with them in this one-on-one way, being sycophantic.
Kim: Yeah. That does not happen.
CHAPTER 7: What Makes Us Most Human [30:43-35:28]
Kim: So that’s one example of an organization. You can imagine all the different companies. We also, uh, Guitar Center is another.
Sarah: Oh, tell us about that, because after you talked about it last time, I took my kids to Guitar Center. And I was like, this is amazing. And I’m gonna sign them up for lessons now.
Kim: Oh cool! Love it. So a new CEO came in - again, a private equity, um, acquisition. When a new CEO comes in, oftentimes it’s interesting, ‘cause private equity creates the opportunity to kind of blank-slate: what would we do new and different? So there can be, uh, not-so-good outcomes with private equity, but there can be some interesting, you know, uh, positive ones as well. And so that’s not a pattern, necessarily, but I just find it interesting.
Taiyo: Mm-hmm.
Kim: So the new CEO came in. And he said, you know what, the insights about the Guitar Center of the future - like the Guitar Center of our dreams - will come from our stakeholders: from our employees, from the students who take classes, the musicians, the broader community, our, you know, everybody, the customers. And so I wanna learn what they think, what they dream the Guitar Center of the future could be. And this is a beloved brand. And it’s also a real, um, important hub in many communities, ‘cause people go there and take music classes and musicians, you know, meet each other and so forth. Anyway, so they got some really interesting insights that have informed the direction of really central, strategic, uh, decisions around guiding the path forward for Guitar Center. And it is all about listening and learning from the people who know best and who care most.
Sarah: Mm-hmm. That care - also you would need to have somebody in place, a leader who is willing to hear things that they don’t necessarily want to hear.
Kim: That’s exactly right.
Sarah: And change their own vision that they might be bringing in.
Kim: That’s right. And then a third example I’ll give is NATO.
Taiyo: NATO?!
Kim: So NATO is an interesting organization. They’ve got challenges obviously integrating all sorts of different perspectives ‘cause they’ve got multiple different cultures, languages and so forth. So 32 different member nations. You’ve got a lot of intelligence that’s often embedded deep in the organization at the edges of the organization, sometimes outside the organization and in a command and control environment that often doesn’t filter up.
Sarah: Right.
Kim: So, you know, some line employee or very low-level employee might have that insight, have identified a risk that, you know, has not filtered back up.
Taiyo: Yeah.
Kim: So this is a system that really surfaces those insights, encourages transparent communication and allows the serendipitous ideas and also insights about risks to filter up immediately to the top.
Taiyo: Yeah.
Kim: And they’ve been using it to improve decision accuracy, to come up with new innovations, to respond to terrible, you know, evolutions on the battlefield. And it’s been, uh - really, it’s led to some breakthroughs for NATO.
Taiyo: I’m really thinking we need to use this thing in the CSU. I know, honestly.
Sarah: Can you imagine?
Taiyo: Uh, yeah. Uh, because what it sounds like is these issues in NATO are exactly the sort of organizational issues that we see in the CSU, right? Where you have
Sarah: NATO might be, you know, a bit more broad, extreme, I don’t know.
Taiyo: Higher stakes, is that what you mean?
Sarah: Higher stakes. [laughs]
Taiyo: Well you know, higher education’s a pretty big deal.
Kim: That’s true.
Taiyo: And, uh, I care a lot about it. Um, and I see a lot of misunderstanding and miscommunication, which can happen
Kim: yeah, so much
Taiyo: Because there are different levels of stakeholders. We have students, we have faculty, we have staff, we have administration, we have the chancellor’s office, and all of these different levels have different priorities, different value systems, and sometimes there can be conflict.
Kim: Yes.
Taiyo: Um, and when a problem comes along - like, for example, enrollment issues that are not just hitting the CSU, they’re hitting all of higher education, right - there are different responses at the different levels of how we should solve this problem.
Kim: yes.
Taiyo: And there can be butting heads,
Kim: right.
Taiyo: And it would be wonderful if, like, there could be a tool like CrowdSmart AI or something along these lines, which could draw on the collective intelligence of the entire CSU as an organization and surface insights which, um, could help us navigate this enrollment decline, this demographic cliff that we’re facing, and do it smartly and do it in a way that’s gonna make us stronger, hopefully.
Sarah: Also then just being able to like show people then transparently… so if there’s, let’s say, a strategic plan that is the like result that comes out of this process of deliberation, I imagine,
Kim: Right.
Sarah: To be able to then go and say, “Well, here’s why” - the thing you said, you could go back and look and audit it
Kim: Complete
Sarah: And say, here’s why the thing I suggested wasn’t gonna work.
Kim: Yes.
Sarah: Like that alone, that never happens.
CHAPTER 8: How to Talk Across Difference [35:29-38:11]
Taiyo: I think another obstacle that we found in just the collectivity of our two brains in working on this podcast is that sometimes there can be miscommunication because we come from very different disciplinary backgrounds, right?
Kim: Uh, yes, exactly.
Taiyo: Um, I’ve said this many times, but Sarah, you know, she studied comp lit, she’s a humanist and, so she’s deeply embedded in that world, and I’m a mathematician, and I’m deeply embedded in a very different culture. Even though we both went through academia, we both got, you know, advanced degrees and all that, we speak different languages, oftentimes we think in very different ways, and what we’ve always felt is that there’s this promise of AI to sort of give us a translation tool, a way of translating ideas that are in her discipline into the language of mathematics, so that it’s more legible to somebody like me.
Kim: Right, right!
Taiyo: And vice versa, right?
Kim: Yes.
Taiyo: Um, and we found that that was a really incredible possible use case. Yes. And I think it’s been really productive for our collaboration.
Sarah: Totally.
Taiyo: I’m wondering how this could scale out to larger organizations, um, not just duos, but like thinking about like 20 people or in an organization, or even larger, yeah, that sort of thing,
Kim: What you’re talking about brings to mind the fact that this is building on top of the valuable uses of LLMs - like there’s some interesting diplomatic translation of the way people say things, right?
Sarah: Yeah.
Kim: And that can be another way that you bring that approach in. But this does, to your point, from the standpoint of a learning environment, this is essentially a learning management system
Sarah: Yeah.
Kim: A knowledge management system. There was something called knowledge management back in the nineties, and it sort of fell out of favor, but this really is a new form of knowledge management and evolution - not just management, but really knowledge evolution. And it’s scalable to any number of people. So that’s also very different from old approaches to knowledge management. So you can have thousands of people across a campus or an organization, even tens of thousands of people, interacting with each other using this system, responding to each other’s ideas, always saying why they think what they think. And again, because the AI - this is such a perfect problem for AI - AI can handle all of those multiple lines of communication. It’s infinitely scalable and it’s not using a ton of energy, because there’s no pre-training. So we don’t need to build big data centers and nuclear power plants.
CHAPTER 9: Common Good AI and Deliberative Tech [38:12-42:22]
Taiyo: We’ve been talking about how companies can use AI to support practices of collective decision-making. You’re also the founder of a nonprofit that applies this technology to deliberative democracy. Can you tell us a bit about that, and what’s happening in the not-for-profit space?
Kim: So what had happened was a lot of people were coming to us and saying, we can’t talk to each other anymore at our city council meetings or school board meetings, but we have problems in our community and decisions in our local cities that we need to make, we don’t have a choice, we need to make decisions, we need to be able to find common ground. Could we use this approach?
Sarah: Right, right.
Kim: And so we founded Common Good AI. We’re now working actually in partnership with the citizen assembly movement, and citizen assemblies are really interesting. They’re like citizen juries. It’s self-organized - it’s been happening around the world for about a decade in different cities, different countries - and citizens get together: people in a local city community will get together over a series of weekends, and it’s all randomly chosen. It represents the ideological and political and, um, demographic perspectives of that community. And then they actually come up with a set of decisions or priorities about whatever the topic was. There was recently one in Bend, Oregon on homelessness. This group of citizens came together - it’s usually about 20 or 30 people over a series of weekends. They get paid a stipend, they meet in a hotel room somewhere, and then after two or three months, they actually came up with a prioritized set of recommendations about what to do about homelessness.
Sarah: This sounds like what elected officials are supposed to be doing, right?
Kim: Yes, exactly. And it really is taking into our own hands the work that too often is not being done because of stymied political, um, systems. So what we’re doing now - the challenge, of course, with citizen assemblies is that they don’t scale, right? And also you have all the biases that can come into play being face to face, and quiet voices maybe don’t get heard, or whatever. I will say they’re incredibly effective and they’re showing the way of what’s possible. So now the question is, how do we scale this?
Sarah: Yeah.
Kim: How do we make it possible for an entire city to participate in this kind of process? And so at Common Good AI, we are working hand in hand with the citizen assembly movement. Again, there’s no one in charge - it’s decentralized, but it’s happening in a bunch of different cities. So there’s a new initiative launching now called American Forum, and it’s gonna be happening in three states: first South Carolina, then Nevada, and then New Hampshire. And it is, again, working in partnership with the citizen assembly, uh, teams in those states. It’s a combination of in-person gatherings and then these virtual engagements, and anyone can sign up to participate in those states, in cities throughout the state. And that combination of in-person and virtual is very powerful, because oftentimes, before you gather, you can really prioritize what is important to the group already.
Sarah: Right.
Kim: And now you’re together and you’ve already found common ground and maybe some breakthrough ideas. Then in real space, you know, there’s real magic to being together. And then you leave and you are able to synthesize what happened in that gathering and take it to a whole new level and bring other people in. And so those engagements have been very encouraging, because it’s helping people realize, wow, this is possible.
Sarah: Exactly.
Kim: We can actually, we can achieve breakthroughs and find a path forward together even though we thought we disagreed so violently.
Sarah: Right.
Kim: And so what we’re seeing already is people are finding common ground and realizing, yeah, I don’t hate you just because you voted for the other guy. And I actually realize I have so much more in common with you, and together we care about a whole lot of things - you know, we have more in common than we have that we disagree about. And already we’re seeing that happen in multiple different engagements. There are multiple tools now that are starting to emerge in this area of deliberative tech.
CHAPTER 10: Querying the Collective Mind [42:23-44:37]
Taiyo: I think one really provocative thing that you said that’s gonna get people’s juices flowing is the idea of the collective itself as being a kind of mind.
Kim: Yes.
Taiyo: I love that. That’s definitely my lane - I love thinking about minds at various levels of organization. Can you just say a little bit more about that? Maybe try to convince the more skeptical in our audience that we really should think of an organization, for example, as being a kind of mind that’s unique and different from just this person, that person, that person... you know what I’m saying?
Kim: Yes, I do, I do. And, uh, to me, this is the essence of what we’re talking about. It’s the ability to actually tap into the insights and the latent knowledge, the tacit knowledge, the experiences that we’ve had in life, and surface that in a way we’ve never been able to do at scale. And so the AI in this case, again, is being the facilitator, but it is helping us share what we know and believe, and encouraging that productive friction to enable ideas to emerge and patterns that we might start to see, to share that with somebody else. And suddenly now you’re creating a whole new computational model that is based on the collective insights of humans. And it is constantly evolving. It’s not static. As people continue to interact with each other and with these ideas, they are advancing this computational model of the collective mind. And you can query it. I mean, it is a queryable, private language model. And so you can actually go back and look at what did we get right? What did we get wrong? You can ask questions of that collective mind. And it’s just us. It’s not the AI, it’s not the AI’s mind. It’s our mind - as surfaced by, you know, the best, highest practices of really great facilitators.
Taiyo: Amazing.
CHAPTER 11: Conclusions - Human Opacity, Combinatorial Explosions and AI for Good [44:38-57:12]
Taiyo: So what have you been thinking about since we last talked to Kim?
Sarah: First, I was thinking, I really wanna be in one of these organizations for a day to try this thing out for myself.
Taiyo: Oh yeah, for sure. I mean, we haven’t even tried this product, this CrowdSmart AI thing, right? We’ve never actually done it.
Sarah: Yeah. Neither of us work for NATO, sadly.
Taiyo: Right.
Sarah: But yeah, I, you know, I was trying to think about ways we could use something like this as Cal State faculty, and I was thinking about how we were at a meeting recently where we were both reporting out, with two other people, from the same meeting, and our reports were so different. It was like we had different experiences of the meeting we were reporting out from. It was like Academic Senate Rashomon.
Taiyo: Wait, what. Remind me…
Sarah: You know the classic, you’ve seen this, the classic, the Kurosawa film that tells the same story from multiple perspectives, right?
Taiyo: Right, right, right, right. That’s a really fabulous movie! And doesn’t it have the dead guy’s ghost... Oh wait, spoiler alert! Doesn’t it have the dead guy’s ghost testifying through a medium or something like that?
Sarah: Yeah, that’s the one. Well, so after the interview I asked ChatGPT, what if Rashomon ended with an AI adjudication board? And it was like, “here are your areas of agreement, here are the key elements underlying, you know, underlying the disagreement in your stories.” And it said - and I love this response - “Then Rashomon would stop being a film about irreducible human opacity and become a film about procedural reconciliation. The AI board would treat contradiction not as the tragic condition of testimony, but as a dataset to be harmonized.” [laughs]
Taiyo: Why is that funny?
Sarah: What?
Taiyo: I mean, that’s exactly what the AI would do. Right? It would harmonize the testimonies. I don’t get it.
Sarah: Wait , so are you saying that you think human opacity is not irreducible?
Taiyo: No, I didn’t say that.
Sarah: I remember it differently.
Taiyo: Okay. Sick Rashomon reference, bro. Dude, your Rashomon references are out of control. Everybody knows that.
Sarah: Your pop culture meme references are out of control!
Taiyo: Okay. See you next time on the next episode of My Robot Teacher.
Sarah: Wait, wait, wait! No, no, no. For real. I do wanna hear your takeaways about the interview with Kim.
Taiyo: Oh, you do?
Sarah: Yes, I do. I’m very curious.
Taiyo: Okay. You know, I actually see a real connection between CrowdSmart AI, the product that we were talking about here today on this episode, and the project that we talked about on the last episode of My Robot Teacher - the Digital Democracy Project.
Sarah: Oh, yeah.
Taiyo: The way I see it is that one thing that AI is allowing humans to do now is to create a kind of synthetic or machine attention. I talked a little bit about this in the conclusion of that last episode. I mentioned the attention economy and the idea that, uh, now that we are pummeled by so much information, it’s our attention that becomes the bottleneck for processing all of that information. And I think of both of these projects, both CrowdSmart AI and the Digital Democracy Project, as being about creating and directing a machine intelligence, sure, but also a machine attention, toward important causes that kind of escape human attention because of issues around our finite nature.
Sarah: Right.
Taiyo: Our bounded rationality.
Sarah: Mm-hmm.
Taiyo: So I guess I think of the Digital Democracy Project as making legible the Kafkaesque proceedings of state and local government, while CrowdSmart is about dealing with the combinatorial explosion of conversations that can exist in a group of people.
Sarah: Wait, wait, what’s combinatorial explosion mean?
Taiyo: Okay, so you remember that Subway ad in which there was this enormous number of possible $5 footlong orders? Maybe I’m misremembering.
Sarah: Like I said, your pop culture meme references are out of control, but also, yes, like you can have lettuce and tomato and lettuce and mustard and lettuce and onion and lettuce.
Taiyo: Oh, very good!
Sarah: But you could also have tomato and mustard and tomato and onion.
Taiyo: Exactly.
Sarah: Okay.
Taiyo: All of that optionality, even though we’re talking about a finite system here, causes the number of possibilities to just explode. That’s combinatorial explosion.
Sarah: Ah, got it.
Taiyo: It’s like when people say the number of possible chess games exceeds the number of subatomic particles in the observable universe. That’s, again, a kind of combinatorial explosion. And I think of machine attention as being something which can overcome human limitations around this kind of combinatorial explosion. It can solve this problem for us and, you know, deal with the massive numbers of, for example, conversations that can happen even in moderately sized organizations.
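(To put numbers on the footlong example: if each of n optional toppings can be on or off, there are 2^n possible orders - the same kind of blow-up as the pairwise-conversation count. A quick illustration in Python:)

```python
# Each optional topping is either on or off, so n toppings give 2**n orders.
for n in (5, 10, 20):
    print(f"{n} toppings -> {2**n:,} possible sandwiches")
# 5 -> 32, 10 -> 1,024, 20 -> 1,048,576
```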
Sarah: Interesting.
Taiyo: So then I begin to think about how all of this might be relevant for pedagogy, for what we do in the classroom, right?
Sarah: Hmm. Mm-hmm.
Taiyo: And how this idea of machine attention could be implemented in a pedagogical context.
Sarah: Right. The reason I was like, ooh, is I think I just got an idea. You know, I typically teach seminar classes. They’re capped at 25 - like, that’s small by Cal State standards, but it’s also, like, a pretty big seminar class. And it means that when I am going around the room, like, observing and engaging with groups when I’m having them do group work, sometimes I notice that, as soon as I talk to a group, like, I have to go back and be like, put your phone away. Focus on this. Don’t do your, you know, navigation homework in my class, please. Right? I would love to have more accountability - the kind of accountability you have in your class, where they’re all up on the whiteboard solving problems right now in your flipped classroom. And so I’m thinking an interesting example of how to, like, deploy this in a classroom might be to have every student sit down and do a one-on-one conversation with other students in the class.
Taiyo: Mm-hmm.
Sarah: And then they could either like elect to record the conversation and copy the transcript into a Google Doc, or they could free write if they didn’t wanna record their voices to document what, like evidence of the discussion that two of them had. And there might be some really cool insights that they would have one-on-one that they’re not comfortable sharing with the entire class or even a group, right?
Taiyo: Yeah. Yeah.
Sarah: But that’s a lot. That’s like a grading nightmare.
Taiyo: That is a lot. Uh, like, like if you had 20 people in your class, you would have 20 times 19 divided by two… number of conversations. Uh, 190 conversations. Quick math!
Sarah: Yeah.
Taiyo: And if you just increase that to 25 people - just add five more people - it gets to be 300 conversations: 25 times 24 divided by two. Again, quick math, but that’s a lot of conversations to manage.
Sarah: Yeah. That is a lot of conversations to manage. And so this actually might really be a cool way of handling this end-of-semester problem - you know, we’re a month out from the end of the semester as we’re recording this right now, and it’s like, how do I keep them attentive on what’s happening? And that’s, I think, a really cool thing. And then, I’m imagining, the AI digest of all of these, like, conversational relics - it would be interesting to have the class then go in and reflect on what are the common things that people are discussing, what are the outliers, and then do a bit of, like, synthesis work around that.
Taiyo: Yeah. Yeah.
Sarah: Okay. That’s cool. I’m gonna do that.
Taiyo: There is also a logistical aspect of this, which is kind of nightmare sounding, which is figuring out how you’re gonna pair everybody for these various rounds of conversation. Right?
Sarah: Oh my God. Totally.
Taiyo: But guess what we can use now in order to solve that exact logistical problem.
Sarah: Yes!
Taiyo: Absolutely. Okay. AI will absolutely crush that problem.
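(For what it’s worth, the pairing logistics also have a classical non-AI solution: the round-robin “circle method” used for tournament scheduling generates rounds in which everyone meets everyone exactly once. A minimal sketch in Python, with a hypothetical roster:)

```python
def round_robin(students):
    """Circle method: yield rounds of pairs so everyone meets everyone once."""
    roster = list(students)
    if len(roster) % 2:            # odd class size: one student sits out per round
        roster.append(None)
    n = len(roster)
    for _ in range(n - 1):
        pairs = [(roster[i], roster[n - 1 - i]) for i in range(n // 2)]
        yield [(a, b) for a, b in pairs if a is not None and b is not None]
        # Rotate everyone except the first seat.
        roster = [roster[0]] + [roster[-1]] + roster[1:-1]

# Hypothetical 6-student roster: 5 rounds, 3 simultaneous conversations each.
for week, pairs in enumerate(round_robin(["Ana", "Bo", "Chi", "Dev", "Eli", "Fay"]), 1):
    print(f"Round {week}: {pairs}")
```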
Sarah: Okay, I’m trying this in my class tomorrow. I’m very excited.
Taiyo: You are? Okay. Yeah, yeah.
Sarah: I am. I’m gonna do, I mean, why not? This sounds super fun!
Taiyo: it really does
Sarah: And you know, a really cool experiment.
Taiyo: It really does. Yes.
Sarah: And I, you know, I think it’s a case of like, actually this is something that would be really, really pedagogically good for my students, but the, the thing that’s stopping me from doing it is that it would be an astronomical amount of work for me to do, to read all of those things myself.
Taiyo: Yeah.
Sarah: But the point is that, what we’re gonna do is put that like raw material of the conversations into ChatGPT, and then have the students analyze the output and then say in real time, you know, does this represent what we talked about in our pair? It’s a new kind of think, pair, share.
Taiyo: You know what we’re gonna use on this? We’re gonna use a transcription program to transcribe the conversation. You have to get permission, make sure you get students’ permission to do this. Okay?
Sarah: So I will - I think I’ve mentioned this before - with their permission, of course, I’ll say I’d like to record this conversation that we’re having, and then ChatGPT will make a digest. You can go back to it, and you can tell me - we’ll go over it together - so you can say, does this represent you accurately? So then I’m also modeling the process of, like, let’s go through it and confirm.
Taiyo: Okay.
Sarah: The thing I was gonna say is - and you know this is gonna speak to my tendency to immediately imagine the worst-case scenario and then have to entertain it in my head - that the very thing that makes this exciting to me is also what makes this technology potentially dangerous. Because I am talking about using this, yes, as a pedagogical tool, but also, in a sense, as a surveillance tool: this is gonna allow me to better gauge how my students are participating, and also to make them participate more, because now they’re accountable to just one other person and they have to document what they both said in this conversation.
Taiyo: Mm-hmm.
Sarah: Whereas like before, maybe they would get away with saying two or three things in class that were well timed and then I couldn’t call on everybody. Right?
Taiyo: Sure.
Sarah: I guess I’m thinking that a tool that helps an institution hear insights better can also help an institution police its members better. Right? Like, if you’re thinking about this as a map of, you know, human collective thinking - or collected thinking - maybe the same map that’s gonna reveal the buried insights within an organization is also gonna reveal your buried opposition.
Taiyo: Mm-hmm.
Sarah: And so I think any democratic promise here depends on the politics of it, like the governance and the access and transparency and who’s consenting and whether you’re gonna treat the dissenting opinions as knowledge to help inform good decision making or as documentation of, like, a threat to your power regime.
Taiyo: Yeah. Make a burn list, right? A burn book. Right.
Sarah: But yeah, I guess ultimately, right: does a tool like this, when used, you know, for deliberative democracy, create protected space for individuals to speak up and contribute more? Or could it just be used as a more elegant mechanism for managerial intelligence gathering, right?
Taiyo: Mm-hmm.
Sarah: I, I think, I mean, I think Kim speaks to this when she says that part of what CrowdSmart AI and Common Good AI are doing is trying to think about how you could use this in a way that doesn’t entrench power, but rather helps organizations actually, like, better pay attention to the insights of people who don’t have as much power in an organization.
Taiyo: Yeah, you know, I think this kind of analysis of upside and downside... there are gonna be upsides and downsides of really every technological advancement that happens, right?
Sarah: Yeah.
Taiyo: I mean it was true about something as innocent as writing, right? Uh, writing can be used for good or for evil.
Sarah: Very true or, right, the same thing that allows you to document things also exposes you to being read by various systems. Yes!
Taiyo: Sure.
Sarah: As the taker of Senate minutes for years, I am like keenly aware of this.
Taiyo: Yeah.
Sarah: That’s the pedagogical context too - making sure that we’re teaching students to think critically about how the same systems that can surface insight might also be used to repress insight. That awareness is part of a literacy of, you know, knowing how technology can be used.
Taiyo: Absolutely.
Sarah: Thanks for listening. My Robot Teacher is hosted by me, Sarah Senk.
Taiyo: And me, Taiyo Inoue. And it’s produced by Editaudio.
Sarah: Special thanks to the California Education Learning Lab for sponsoring this podcast. If this episode got you thinking, please pass it on. Share it with a colleague, a dean, or that faculty listserv where people won’t stop talking about AI.
Taiyo: See you next time!
