title Could AI help us, not replace us?

description The time has come for humanity to make a choice: Will we build AI to replace humans or enhance them? This hour, the "humanistic AI" philosophy, a test case, and a glimpse into the future of work.

Guests include Siri co-creator Tom Gruber, CENTURY Tech CEO Priya Lakhani and Robinhood CEO Vlad Tenev.

TED Radio Hour+ listeners now get access to bonus episodes, with more ideas from TED speakers and deeper conversations with Manoush. By signing up for Plus, you directly support our work and public media, so all your episodes (like this one!) come to you without sponsor breaks. Learn more at plus.npr.org/ted.


pubDate Fri, 03 Apr 2026 07:00:00 GMT

author NPR

duration 2975000

transcript

Speaker 1:
[00:00] Do you love pop culture? Hate some of it too? You're in good company. Pull up a metaphorical chair to Pop Culture Happy Hour, the podcast that breaks down the best and some of the most questionable moments in pop culture. We'll tell you what's great, what's interesting, and break it all down with debates that'll have you yelling at your speakers, but in a good way. Listen to NPR's Pop Culture Happy Hour by finding us wherever you get your podcasts.

Speaker 2:
[00:24] This is the TED Radio Hour. Each week, groundbreaking TED Talks.

Speaker 3:
[00:31] Our job now is to dream big.

Speaker 2:
[00:32] Delivered at TED conferences.

Speaker 1:
[00:34] To bring about the future we want to see.

Speaker 2:
[00:36] Around the world.

Speaker 4:
[00:37] To understand who we are.

Speaker 2:
[00:39] From those talks, we bring you speakers and ideas that will surprise you.

Speaker 4:
[00:44] You just don't know what you're gonna find.

Speaker 2:
[00:46] Challenge you. We truly have to ask ourselves, like, why is it noteworthy? And even change you. I literally feel like I'm a different person. Yes.

Speaker 5:
[00:54] Do you feel that way?

Speaker 2:
[00:55] Ideas worth spreading. From TED and NPR, I'm Manoush Zomorodi. On the show today, Building AI That Puts Humans First. 15 years ago, millions of people around the world might have had their first interaction with AI when they talked to... My name? It's Siri. Yes, Siri. Apple's voice-controlled virtual assistant was added to new iPhones in 2011. That was a year after Apple CEO Steve Jobs had set his sights on the technology and the people who'd been building it.

Speaker 5:
[01:38] He kind of surprised us. He literally called us on our iPhones at work. And he's like, hi, Steve. Yeah, sure, you're Steve, right.

Speaker 2:
[01:46] Tom Gruber was Siri's Chief Technology Officer and Head of Design. Steve Jobs invited Tom and his other two co-founders to his house.

Speaker 5:
[01:56] I mean, imagine being in Steve Jobs' house in Palo Alto. There's Ansel Adams on the wall. There's a beautiful Carver Amp. Like, his taste, this guy. And it's all private and quiet. And for three hours, we talked about what it would be like to build products together. And the reality distortion field completely worked at that point. Like, we were so seduced.

Speaker 2:
[02:17] They were made an offer they couldn't refuse and sold their company to Apple, who we should mention is a financial supporter of NPR. Siri was folded into Apple products and debuted on October 4th, 2011. Celebrities were ready to demonstrate how seamlessly this little voice could fit into our everyday lives.

Speaker 5:
[02:37] Remind me to clean up tomorrow.

Speaker 2:
[02:53] It was one of Jobs' last projects ever.

Speaker 5:
[02:57] The day after Siri launched is the day Steve Jobs died. And apparently, we're told that he did get to see the demo.

Speaker 2:
[03:07] So that was 15 years ago. At the time, were you worried about the ethical considerations since people did attribute human thinking to Siri? I mean, did you start to think, hmm, this is at the forefront of getting this kind of technology into regular people's hands. We really need to think about what's okay and what's not okay.

Speaker 5:
[03:27] Oh, yeah, absolutely. If people took Siri too seriously, it means they're out there on the part of the curve where people are more inclined than others to see agency in inanimate objects. But in general, I saw that, yeah, of course, AI was moving ahead while I was at Apple, the deep learning networks came to power. And we were then saying, look, we've got to lay down the ethical foundation because this stuff is coming fast. And we need to make sure that we don't use AI to exploit people, that we use it to augment and work with people.

Speaker 2:
[03:58] In the fall of 2016, big tech companies, including Amazon, Facebook, Google, IBM and Microsoft, competitors with each other, they came together to form a non-profit called the Partnership on Artificial Intelligence to Benefit People and Society. Apple soon joined as well. Tom Gruber was their representative. And in 2017, he shared his thoughts on what their goals should be from the TED stage.

Speaker 5:
[04:26] I'm here to offer you a new way to think about my field, artificial intelligence. I think the purpose of AI is to empower humans with machine intelligence. As machines get smarter, we get smarter. I call this humanistic AI, artificial intelligence designed to meet human needs by collaborating and augmenting people.

Speaker 2:
[04:52] That was around the time that you gave your TED Talk in which you introduced the idea of humanistic AI. Can you define what that was in your mind?

Speaker 5:
[05:02] The idea was really like, where do we stand in the world of the future of AI? So there were really two paths I saw. One path was, you can say, machine intelligence or machine-centric: delighting and celebrating how smart machines were getting relative to humans, and there were businesses being formed to use that to automate human work. And then the other way of looking at it, which I call humanistic AI, which wasn't just my idea, a lot of people had this idea, was that the purpose of AI should be to actually help people do things they're trying to do by either augmenting their intelligence or collaborating with them as an intelligence. And so it turns out that down the road now, there was a watershed, because now we see companies that are raising money by the billions with the explicit goal of automating white collar work. And then we have other companies that are raising billions of dollars and saying that our job is to help solve some of humanity's big problems and make people smarter. So that's why I wanted to give it a label, and that's why I call it humanistic AI.

Speaker 2:
[06:11] Yeah. I mean, I have to admit, I was in the audience, and I didn't get it at the time. I was like, this guy lives in the future. But the future came pretty fast. 2022, 2023, LLMs like ChatGPT and Claude and Perplexity, regular people started using them. And do you get the sense that people get it now?

Speaker 5:
[06:31] Oh, I think so now.

Speaker 2:
[06:35] It is 2026, and in the U.S., AI anxiety is at a fever pitch, from fears about the destruction of white collar work.

Speaker 5:
[06:44] What is happening? Why is this happening so fast?

Speaker 2:
[06:46] I value my brain. I value my ability to think.

Speaker 4:
[06:48] I don't want to outsource it.

Speaker 2:
[06:50] To protests in towns where massive data centers are being built.

Speaker 1:
[06:54] Data centers bound to fail.

Speaker 2:
[06:56] To debates over the government using AI as part of its military operations...

Speaker 5:
[07:01] ...Anthropic refusing to allow its powerful AI models to be used for autonomous lethal weapons and surveillance.

Speaker 2:
[07:06] For those who have been calling for ethical guidelines all along, this moment represents an opportunity. It's time, they say, for these conversations about how AI is built to move beyond Silicon Valley to the public. Because AI is no longer a futuristic possibility. It's here now. And decisions need to be made. Recently, a declaration of human rights for the AI age was signed by a cross-section of leaders from across the political spectrum, including Ralph Nader and Steve Bannon. Ensuring AI allows humans to flourish could just be the one idea that everyone, no matter their politics, can agree on. Making sure tech is built to serve humanity has been Tom Gruber's goal for decades. It all started when he was studying psychology and computer science in college in the 1970s.

Speaker 5:
[08:02] I was a little frustrated that the psychology of the day didn't have very good experimental methods to truly understand the mind. There wasn't really cognitive science yet. Then a computer showed up at my university in the late 70s, so I was able to start programming. Just by reading some papers and so on, I discovered artificial intelligence. I said, wow, this is the way to start studying intelligence by making it.

Speaker 2:
[08:25] A few years later, Tom began a graduate degree focusing on cognitive science and AI, working to build machines that could mimic the mind, at least a little bit.

Speaker 5:
[08:36] The models that we could build then were nothing like human intelligence. They were just crude approximations. Then we decided, okay, well, how would you augment human intelligence? One of the ways is to shore up where there's some impairment in human capability. The first project I did was to build what I call the communications assistant, for people who can't speak, who have cerebral palsy in this case, but also ALS and folks like that, that have neurological conditions that make it hard to speak like we're speaking here. So we built an AI program that had an LLM, a little language model, maybe a TLM, a tiny language model, that actually predicted the next word and the next phrase based on a single motor action, like a switch on their muscle. They could actually communicate in sentences.

Speaker 2:
[09:29] So was that voice then or no, not voice?

Speaker 5:
[09:32] Yeah, it was voice even then. They all sounded like a strangled Swedish person back then. Something called a Votrax voice synthesizer, very primitive and no prosody, but it was voice.

Speaker 2:
[09:44] I mean, this is pretty common, right, Tom, that a lot of extremely hard technical problems, they're often built for people who need them the absolute most. But then once the tech gets going, often developers find a way to get it into the hands of all of us. Is that what happened with Siri?

Speaker 5:
[10:04] Siri wasn't so much driven by disability, but it was definitely that kind of handicap. Imagine you're in a car and you're trying to text somebody or you're trying to get directions, and you are cognitively loaded.

Speaker 2:
[10:16] Meaning you're concentrating big time.

Speaker 5:
[10:18] Yeah, you should be concentrating on the road, and if you're distracted, it's dangerous. You're not supposed to touch it, so it's as if you can't use your hands, and you're not supposed to look at it. And so how would you use your mobile phone? Well, you have to have a voice interface. That's a dialogue. And so that's the kind of thing that Siri was built for.

Speaker 2:
[10:35] Where are we now with AI in our lives? It's not just a voice in the room. We're sort of at a tipping point in terms of understanding how they might change the way we live.

Speaker 5:
[10:46] Oh, absolutely. We are at a tipping point. I mean, everybody now has access to a really amazing, intelligent partner to help them do things in their lives. It's almost free. It's unbelievable. And it's the same conversational user interface on the front end, so it's really kind of kept that style. But the back end has gotten extremely smart. Like, you know, I mean, I shouldn't get excited. I've seen AI for 40 years, right? And all of a sudden, it's just like a million times different than it has been in the past. As Yuval Harari says, you know, anything that's made of words, AI will own. Who else has read everything in the world and can talk about it? So anyway, I am simultaneously freaked out and excited and scared and unbelievably optimistic. However, we have to act now on the basis that it has a superhuman ability to talk a good game, which means persuading people, getting them to do things against their will by convincing them and so on, but also helping them in ways that they couldn't be helped otherwise.

Speaker 2:
[11:48] We're not seeing any regulation around AI here in the United States. So going back to your idea of humanistic AI, what sort of guardrails do you think we need to be putting around it right now?

Speaker 5:
[12:01] Yeah, the humanistic AI framework would say that the objective function of the AI, that is, the thing that the AI is optimizing for, should be human benefit, not, say, profit or something else. Human benefit is hard to measure, so there's no easy prescription. If we're going to put guardrails on a thing, we have to solve the engineering and scientific problems of detecting that something the AI is doing is harmful to humans. We have to build theories of human harm and human benefit into our objective functions. So that's what I think we should be working on. I think it's not just a matter of regulation, it's a matter of the scientific agenda for AI research. We have a choice in how we use this powerful technology. We can choose to use AI to automate and compete with us, or we can use AI to augment and collaborate with us, to overcome our cognitive limitations and to help us do what we want to do, only better.

Speaker 2:
[13:11] In a minute, how ethics could be built into AI going forward. Today on the show, AI that puts humans first. I'm Manoush Zomorodi and you're listening to the TED Radio Hour from NPR. It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. We're spending the hour talking to Tom Gruber, one of the inventors of Siri, about how to build AI that puts humans first. Because it can feel like right now, AI is on a collision course with humanity. Recently, for example, the US Department of War struck a deal with OpenAI, while its competitor, Anthropic, fought back over military use of its models. And people responded. Some deleted OpenAI's ChatGPT in protest, or downloaded Anthropic's Claude to show support. Tom believes that if regulation falls short, it will be on consumers to push the market towards AI that feels safer. Meanwhile, he says, there are technologists who are trying right now to shape AI behavior in ways that could prevent future doomsday scenarios.

Speaker 5:
[14:35] For instance, imagine a scenario where the AI escapes and takes over, maybe takes your money, maybe starts running cyberbots and attacks people and so on. And so there are things that are prerequisites to those scenarios, like can it lie to you effectively, masquerade itself as something else, prevent itself from being turned off. These behaviors you can put guardrails around, and people are studying that.

Speaker 2:
[14:57] Yeah, because I feel like every so often there's some report that comes out that's like, oh, Claude manipulated the developers in some way. They've got to fix that.

Speaker 5:
[15:07] Oh, yeah, we're just seeing the beginning of this. So we have to come to grips with this. This is one area, just safety, where there's no easy answer. But I would hope that we have competition among the big AI model companies on which ones are going to be safest. So for example, back in the old days, there was a brand allegiance to Volvo, because it really went to great lengths to be safe, like the best airbags and the best heated seats and all that sort of thing. So I think we should be able to have AI compete on how safe it is, so that people who know could buy the one that is safest.

Speaker 2:
[15:43] Do you see that happening anytime soon?

Speaker 5:
[15:46] Globally, yes. We see a lot of distorted thinking at the national level here, and ideologically driven policy and so on. I think hopefully we'll get past that. But the key thing is that we still have a free market system, and we still have the freedom to choose among a set of AI products.

Speaker 2:
[16:06] I mean, you probably know a lot of these, mostly guys running these companies. What do they say? Do you bring this up with them? I don't really hear much about safety from Sam Altman.

Speaker 5:
[16:17] No, you don't hear it from him or Elon, but you hear it from Demis, from Satya at Microsoft, and definitely from Dario Amodei at Anthropic. So I think the studios that make the foundation models, Demis's and Dario's, DeepMind at Google and Anthropic, have always been safety conscious and have serious, well-funded teams working on that part of the problem. And that's the sort of healthy, normal thing you would expect from a company that's worth trillions.

Speaker 2:
[16:47] I mean, when I first started really digging into tech and reporting on all of this 15 years ago, I have to say I was a lot more optimistic and excited about the possibility of building these sort of altruistic systems. But we've been down this road, Tom, with social media, with the surveillance economy, attention economy. It seems like any chance for a tech company to take advantage of its users, it will. So what makes you think this could be different?

Speaker 5:
[17:18] One thing that's different is that the foundation infrastructure of AI, the models like Claude or Gemini and these models, they're super expensive to build, both in money, time, especially talent, which is very rare, for now. Anyway, they're hard to build these things. And there's only a few of them, maybe 10. The good news is that they're fairly omnipurpose at that point. And once you have the models, then the rest of the world builds applications on top of them. So the rest of the world can compete on humanistic applications. Well, I mean, look, you can go out there into the dark web and get the most nasty, evil software in the world, or you can go out there and find lovely games or educational things or whatever. You can find everything in the application world. I think we're going to see that with AI too.

Speaker 2:
[18:08] Is that kind of like how Apple created the app ecosystem, where you have to adhere to certain privacy standards?

Speaker 5:
[18:14] Yeah, that's right. Well, that's a very good analogy there. It would be great if there was something like an app store for AI that would do the work of at least minimally establishing whether an application is of benefit to humans or not. You know, X is using AI to do evil things right now. It's already happening. So we can't stop them, I can't stop them, you can't stop them. Only a government could stop them, and they're not stopping them this year. Right, so, okay. But you don't have to use X, you don't have to use Grok.

Speaker 2:
[18:46] There's a talk that I know you're giving that says that instead of talking about giving up our privacy and self-determination, we should start thinking differently. Instead of big brother, which is what tech has sort of become, surveilling all, we need to think about big mother.

Speaker 5:
[19:01] Yeah, exactly. I tried to figure out what would be the right symbol, and I found, I think, an African elephant is kind of a cool big mother. It's smart, protective. The matriarchs are the ones that run the herds; they're the wisest and smartest and so on. So imagine a female African elephant with her calf and think about her value alignment. She will do anything to protect the calf. And obviously humans too: human mothers, they nurture their children, they teach them right from wrong and truth from falsity, and then they show them skills to survive in the environment they're in. That's what AIs can and should do. Machines can now have access to everything in your life that's digital, and with that you can build amazingly good recommendations about how to use your attention, maybe give you insights on things you could discover or learn from. At the same time, that same data is extremely powerful if it's used in a surveillance economy against your interests, to addict you, to make you buy things. This is why it's a real ethical choice. How you optimize the AI is actually a massively important societal choice today. It's not a technical thing that we leave to the engineers or to the boards of directors of companies. It has to be done at the society level. And so I think big mother is the way to go: go ahead, have a lot of data. Mothers know everything about their kids, but mothers are aligned with their kids' interests.

Speaker 2:
[20:29] That was Siri co-creator and AI pioneer Tom Gruber. You can watch his TED Talk at ted.com. As Tom said, there are people around the world who are trying to build AI with a big mother mindset: sure, it knows more about our minds than mere humans can, but it uses those superpowers to make us better without trying to take advantage of us. Priya Lakhani says she is one of those people.

Speaker 3:
[20:58] I think AI could be the single most positive technology to impact everybody's lives in an educational context. But it has to be developed responsibly.

Speaker 2:
[21:11] 15 years ago, Priya was an entrepreneur building schools in India. But back home in the UK, she found that schools were struggling too.

Speaker 3:
[21:20] 20% of children cannot read or write well enough. In the United Kingdom, there were statistics to do with mathematics that were just as poor. And it was just a real shock to my system. I thought, there's a fundamental problem here. And if we don't fix this in the UK, then we're definitely not going to be able to help the sorts of places where I'm trying to make a difference. And so, just out of pure curiosity, I went to schools.

Speaker 2:
[21:43] What she saw were two big problems.

Speaker 3:
[21:46] You may have richer resources, more teachers in different ratios. But once you close the door of the classroom, you will often find there is a teacher stood at the front, delivering a sort of one-size-fits-all delivery of education. And then the second problem, which was prevalent across all schools, was with every teacher: workload was a massive problem. There were teachers who were stressed, they were anxious, they had far too much to do, to the point where they were considering leaving their roles. We were tens of thousands of teachers short in the United Kingdom. How can we have a professional who is trained for this position enjoying their role: inspiring children, imparting knowledge of a subject that they're really passionate about, and being able to do so confidently and in a way where they're not exhausted?

Speaker 2:
[22:32] She realized that replacing the outdated tech in classrooms could be the solution, if new tech could curate a tailored education experience for each child and give teachers instant data about where exactly each student was struggling. So she looked around at what was on the market.

Speaker 3:
[22:50] There was one very large company in the US that was talking about this already. It was touted as this massive AI system for education, sort of similar sounding, personalizing learning, reducing workload for teachers. But they were using a type of machine learning that is very common for retail systems. So recommender systems that track user behavior and then recommend what you like. So if a student is on the system and they're learning biology, you get a lot more biology.

Speaker 2:
[23:18] Kind of like TikTok.

Speaker 3:
[23:19] Right. The problem with that is that it's not going to work in education. That's because in education, sadly, sometimes you need to give someone what they don't like. And often you'll find, particularly with younger people, that they don't like the subject they're struggling in.

Speaker 2:
[23:35] Right.

Speaker 3:
[23:35] You can't ignore that. What you need to do is have a system that's a little bit more complicated, give them some things they like to keep them engaged. But you do have to hand them the foundational prerequisite knowledge that they're missing in order to increase their knowledge and skill set in areas where they're struggling. And so, it was going to be a more complex system from the outset. And it didn't exist. And that is where the journey to create CENTURY really started.

Speaker 2:
[24:00] CENTURY Tech is the educational tech company that Priya founded in 2013. She says the platform's goal is to support overworked teachers, not take their place. There is, of course, a big debate in education right now about the role of AI in teaching. Some believe funding should go straight into schools, not into technology that's trying to make money off of learning. Others think the only way to make education accessible to every child will be with the help of AI. That's the camp that Priya is in. But she says that doesn't mean replacing the hard, even grueling work that happens in the classroom between teachers and their students. Here's Priya Lakhani on the TED stage.

Speaker 3:
[24:42] We need to combine artificial intelligence with neuroscientific theory and the learning sciences to learn how every single brain in this room learns. Because if we can fix learning, we can improve outcomes. We can personalize education for every single one of us and provide intelligent insights to teachers to reduce the workload. So 12 years ago, I built a team. They built the technology. It exists. Students use it in over 140 countries. I thought it would be really important to share with you some student feedback that I have on our platform. Because it tells us what children's expectations are when they use an AI education partner. So I get feedback like, I think CENTURY will help me to achieve things that I thought were impossible. It's a golden child, right? My life's purpose has been fulfilled. And then these sweet, lovely, innocent children send me messages like this. I don't like this website. It makes me able to do my homework. Wait, I'm being bribed. I will give you 100,000 pounds. I'm not joking. You just need to give me no work. Give me a button to do the work for me.

Speaker 2:
[25:52] If I was to go into a classroom as a student to use CENTURY Tech, what would my experience be like?

Speaker 3:
[26:00] You know, the teacher would walk into the classroom and say, right, everyone, I want you to log in to the machine. And they could either set them learning material, or, usually, they set them a set of questions. The teacher has their dashboard open and they can see how the students are performing. So information would light up about who is struggling. The teacher would then walk around the classroom and be able to make those interventions in real time. Okay, so this student, why did you answer that question in that way? And the teacher's then using their expertise. So that's a blended learning environment. The teacher can utilize that information rather than just standing in front of the class, assuming everyone's taking in the knowledge, asking people to put their hands up, and spot-checking their knowledge. You can walk around and have that very targeted intervention with students.

Speaker 2:
[26:42] Okay, so that's in the classroom, but what about at home?

Speaker 3:
[26:46] So half of them will use it as teacher-set homework. So, Manoush, you've been given this assignment by your teacher, it's on Pythagoras' Theorem. You may be a student where, before that, it has automatically given you some maths work on roots and powers, because it knows that you don't understand that. And there's no point you doing the Pythagoras work if you don't understand roots and powers. In the same way, right, if you master that work, it can then do what's called a smart recommendation and stretch you as well. So some people will use it for assignments in that way, and then the others will use it in a flipped learning way. So teachers will be planning on teaching a lesson. They'll say, right, we would like you to learn the lesson on CENTURY the week before. Students do that in their own time. Teachers then receive the information of how the students have done. And then that lesson the following week is a far more targeted lesson as to what they did understand and what they didn't understand. So 50% of the platform is about the teacher, because we fundamentally believe that empowering teachers is one of the best ways to improve education. They should not be doing data analysis off spreadsheets in the evenings, which is what many of them sadly have to do. They should be receiving insights from our platform instantly, so that they can then go into the class the following week and focus on that particular concept, but not have to just teach it one-size-fits-all. They should be able to say, right, I can see that here is the misconception a third of you have. Or they can pair off the class into different peer groups. We have various dashboards that show you which kids are excelling in this particular topic, which ones are really struggling, and all of that data is turned into actionable insights. So it's a highly flexible system, and the reason that has to be the case, Manoush, is because teaching is as personalised as learning is.
I don't think it's right to build a system and say, here's the system and here is how one must use it. Teachers are different, they're professionals, and some of them will prefer to be standing at the front and inspiring. Some of them much prefer to be having those sort of one-to-one interactions walking around the classroom.

Speaker 2:
[28:48] So you're not replacing the teacher, you're making it possible for the teacher to shine in the way that they shine is the idea.

Speaker 3:
[28:55] 100%. And I think this replacement of the teacher is a really controversial topic for discussion in education, because a lot of big tech, for example, have said this is the future of education. I don't think they understand what education means. Education is not the transfer of knowledge from textbook into brain. Education is so much bigger than that. These schools, they're like the old village. They're providing an enormous amount to students beyond the formula for Pythagoras. This is about augmenting the teacher and being able to push us all ahead. If you can improve the baseline standard of education, we can focus on skills that really matter in an age of AI. The problem, Manoush, that a lot of neuroscientists and cognitive scientists have discovered is that we can over-rely on tools, and we can completely bypass the cognitive processes.

Speaker 2:
[29:48] Okay, I think I have an example, actually. Tell me if I'm right. So, I was going to spend time in Italy recently. And so, six months ahead of going, I downloaded Duolingo and I took Italian every single day. And I got to Italy, and literally all that would come out of my mouth was ciao, buongiorno, cappuccino, per favore. That is it.

Speaker 3:
[30:10] That is highly useful Italian for someone like me.

Speaker 2:
[30:13] Well, yes, but it did not get me very far. And so, what I can only imagine is that I was able to figure out very quickly what the game wanted me to say or how to please whatever the owl wanted me to do. But when it actually came to real life application of the alleged knowledge I had put in my brain, I had none. I really was distraught. I'd spent a lot of time and I didn't have anything to show for it. And I guess my fear is that in these classrooms, it looks like the kids are killing it on their test scores on all the work that they're doing. But when it comes to actually being competent adults in the quote real world, they will not be able to fend for themselves, whether it comes to knowledge or judgment.

Speaker 3:
[30:56] Well, this is what I like to call automation complacency. And it's a very digital transactive memory.

Speaker 2:
[31:03] Yes.

Speaker 3:
[31:03] And the problem is that you've done it very quickly, but you haven't thought deeply about that particular answer. It's this sort of reliance on technology where actually it kind of weakens the productive learning behaviors, right? It creates this unrealistic expectation about the ease of learning. So basically, people are poor judges of their own learning. So when information is presented fluently or quickly, when you're on these apps, right? You're doing it quite quickly. You're feeling quite good about yourself. You tend to believe that you understand it, even though your actual retention is low.

Speaker 2:
[31:38] When we come back, Priya Lakhani explains why kids need what's called a productive struggle in the classroom and how she thinks AI can help them get it. On the show today, how to build AI that puts humans first. I'm Manoush Zomorodi and you're listening to the TED Radio Hour from NPR. Don't go away. It's the TED Radio Hour from NPR, I'm Manoush Zomorodi. Today on the show, AI that puts humans first. We were just hearing from Priya Lakhani, the founder of CENTURY Tech, an education technology company. Priya says they use AI to create friction, to make kids work to learn, unlike chatbots that try to replace human thinking or simply transfer knowledge. Here she is on the TED stage.

Speaker 3:
[32:41] Think about how we felt when we first used ChatGPT. I think we thought, wow, I never need to do any work ever again. This is amazing. And then it hallucinated, and I think we've ended up with this sort of sinking realisation of acceptance, right? That the shortcuts don't really replace the work. They're very helpful, but we still need to learn and we need to think. Now, when we read those long answers that an LLM chatbot gives us, it feels very fluent. The problem is that we often mistake that fluency for learning. What we actually know about learning is that it requires what researchers call productive struggle. It's this sort of mental effort that builds understanding. Sustained mental effort strengthens the parts of the brain, and it's positively correlated with growth in the brain. Durable learning does not come from shortcuts. It comes from certain types of effort, and this is why AI is amazing for education, because AI can spot patterns in how we all learn. It can force you to generate an answer rather than just reveal the answer, and it can provide amazing structured feedback against expertly designed rubrics from teachers.

Speaker 2:
[34:00] So, AI that's effective in education doesn't spit out the answer. It doesn't expose the answers or nudge students towards the right answer the way gamified apps or chatbots might. That might be fun, but it's not terribly educational, you're saying.

Speaker 3:
[34:14] Exactly. So, these sorts of AI-generated explanations, or very quick learning, or quick apps, amplify that effect, because they make you think that it's clear, it's confident, particularly if they're anthropomorphizing the technology, so it feels like it's very human, you're kind of conversing with it, you're talking to it. But think about that working memory, right? Now, you can retrieve that after a short period of time. The problem is, can you do that later? So we can be cognitive misers, trying to conserve mental effort. In a learning context, if the technology replaces the very thinking that a student needs to develop, then learning can suffer. We've actually had teachers come to us and say, can you put in more coins and badges and characters? The kids love it. But the point is, if you learn that, oh, when I put in effort, I get a badge, you then start to build up an extrinsic value for learning. And that's actually really unhealthy. What we need people to have is intrinsic value. I'm learning for learning's sake because learning is good. So learning agility, just the ability to learn how to learn. I believe fundamentally that those overly gamified applications are bad for young people. That's why we don't do it. And it's a business, right? Companies are very focused on engagement. You're measured by your daily active users, your monthly active users. How often are they coming in? When do they engage? Why do they log off? Because the more engaged that people are, the more money you're going to make. There's a perverse incentive to overly engage them.

Speaker 2:
[35:46] So what are you measuring for then? If not engagement, then what?

Speaker 3:
[35:50] My key success metric is how quickly can I get this kid off the screen? And so we give guidance that's completely the opposite of a typical company. We say to schools, you know what? You really should not be on our system for more than about an hour and a half a week. If you're sitting there for nine hours a week, this system is actually not performing at the level at which we would hope it to be. It's really challenging because that generative AI market, it's a race led by the big tech companies. So when it comes to gen AI, how they build those tools is going to mean everything for the future of human flourishing. And it really sits with a handful of people in this world, which look, I run a tech company, I have shareholders and investors, but we're a social enterprise. We have a very different set of metrics as to how we're measured. There are so many instances where, for example, we have said, no, we're not going to build it in that way. Our neuroscientists will turn around and say, no, can't do that. That's generally known to be bad for kids. This is why educating the public about AI is really important. Which model was it built on? What was it trained on? Which data? What is it trying to do for you? Is it beneficial for you? Is it providing you with something where you are then exercising your brain, rather than just transferring the skill set over to the AI? If you can then answer those questions, you are going to be very, very well-equipped to decide whether that is good for you or bad for you. That is powerful.

Speaker 2:
[37:28] That was the founder of CENTURY Tech, Priya Lakhani. You can watch her full talk at ted.com. So Priya says her technology won't make teachers obsolete. But there are fears that educators and many, many more careers will be replaced by AI that can do their jobs. There are others, though, who say that's not how innovation works. The workforce won't disappear, though it will change, says Vlad Tenev. Vlad is the co-founder of Robinhood, a financial services app. He's exactly the kind of Silicon Valley billionaire you'd expect to tell you everything with AI is going to be fine. But his argument isn't, don't worry, it's look at history. Because what we think of as a job today won't be the same tomorrow.

Speaker 4:
[38:20] Let's take a moment and reflect back upon our lives when we were 20 years old. Think about the opportunities for work and career that lay in front of you. How many of you had a pretty good idea of what you wanted to do for your career? Not too many. How many were overwhelmed by all the options? I know I felt the same.

Speaker 2:
[38:43] Vlad recently gave a talk called AI is Coming for Your Job. Now what? In it, he told his own story of being 20 years old and graduating from Stanford University with a degree in mathematics.

Speaker 4:
[38:57] Nobody had sat me down to tell me that my pure math major wasn't going to be the most desirable qualification, and I probably wouldn't have listened if they did. Now my first month in graduate school, Lehman Brothers went under, the start of the global financial crisis. Most of my friends, particularly the ones that felt the most secure, found themselves packing up their cubicles. Some of us wondered whether the economy would recover at all, or whether we were in store for another decade-long Great Depression. But amidst the uncertainty, some of us found a source of optimism. The iPhone App Store came out that very same year, 2008.

Speaker 2:
[39:42] And the idea that anyone could build a digital game or service that could be delivered to millions of people on a device that lived in their pockets? Well, it was the digital equivalent of when the first pioneers found gold in California in the 1840s. Vlad was all in.

Speaker 4:
[40:01] I still remember when the instruction manual for how to build iPhone apps was released. I was up all night reading it, learning, trying to understand. I saw an opportunity for a new level playing field. Pretty much everything I've done since then was a product of both the economic malaise and the technological optimism of the time. But times have changed. The average 20-year-old today also has quite a bit of fear. But this time, emerging technology is not the antidote to that fear. It's the source. And they're asking themselves, will that career I'm looking at even be around in 10 years? One reason why it feels different this time is because AI, unlike the iPhone, is the first tool that we've built that's capable of leaving the toolbox. And we don't yet know its limits. A few years ago, I founded another company with the mission to build mathematical superintelligence, artificial intelligence that can reason and solve problems better than any mathematician. I always thought of mathematics as the pinnacle of human intellectual activity. So a superhuman AI at mathematics could potentially be superhuman at everything. Combine that with my day job, which is running a global financial services platform. It's led to me spending a lot of time pondering one very important question. What do we do in a world where the vast majority of today's jobs are gone? And I want to analyze this question rationally without fear and hyperbole. One way to do it is to look back through history and see if there's been a time where we faced this type of job disruption before, at anything near these levels, and how we as humans have navigated it. Now, I'm a technologist, not a historian, so with that caveat, let's go back in time to a world a 20-year-old would have known tens of thousands of years ago.

Speaker 2:
[42:16] In approximately 50,000 BCE, most people were hunters, gatherers, or toolmakers. Very few of us today know anyone with those job descriptions.

Speaker 4:
[42:29] The main occupations of the Paleolithic era are largely gone, but they didn't disappear overnight. Instead, they were subdivided into lots of other more specialized jobs.

Speaker 2:
[42:41] The next era, the Neolithic era, saw all kinds of different vocations popping up, thanks to advances in how people stored their food and built shelter, domesticated plants, and animals.

Speaker 4:
[42:54] Humans had mastered a few new things: farming, keeping livestock. The invention of these things allowed us to spend more time doing what we consider creative work, and less time on pure survival and subsistence. And this opened up a lot of new jobs. You had artisans like weavers, potters, you had farmers. These jobs too, largely all gone. In the US today, we should say, farmers make up less than 2% of the workforce. Let's move ahead through the changing jobs of the Bronze Age, the Iron Age, the Dark Ages, the Renaissance, and the Age of Exploration.

Speaker 2:
[43:34] Each age, the same thing happens. Jobs are lost, sure, but more different jobs take their place.

Speaker 4:
[43:43] Too many jobs to count, a lot of them are gone. Any blacksmiths or explorers in this room? I didn't think so. It might come back with space exploration. If you think about it, most of our last names are from jobs that our families no longer do. Potter, Butler, Butcher, Smith. Any Fletchers in the audience? Anyone know what a Fletcher is? I was going over my talk this weekend and my son said, I know what a Fletcher is, Dad. He plays Minecraft. A Fletcher is someone that makes and sells arrows. So if you know someone with that last name, their relatives were arms dealers. Now my point in all of this is that job disruption is an essential quality of human evolution. We want work to disappear because it means that we're doing our jobs as humans, making our lives better and easier. So with AI, maybe it's not the job disruption itself that makes us so nervous, but the speed with which it's happening. So why don't we accelerate? We're going to go right through the industrial revolution into the modern era. In the 20th century, a young person in the wake of companies expanding and automating would have found an entirely new menu of jobs that their parents never had access to. So instead of working in a factory, they would have had the selection of a wide assortment of new office jobs. And some of the parents were probably thinking, you sit in a chair all day. That's not real work. Now, the Internet era. We see all around us jobs that didn't exist before. We have people getting paid to play video games, eat at restaurants, travel, talk to their friends on video. Those last people we call podcast bros. And we take our jobs very seriously. But if you took someone from the 20th century, when people first started contemplating these problems, and they could peek into our world today, they would think that all of the predictions around technological unemployment came true. So where does all this leave our 20-year-old at the dawn of the AI era? 
One feature that we found is recurrent throughout generations is this feeling of exceptionalism. We'd like to think that somehow, we're at a discontinuity where history ends and we're in a new world with no precedent. And maybe it's true this time. We really don't know if we're building a super assistant or an apex predator. Certainly all change and disruption brings with it a painful transition. Jobs will disappear, perhaps they'll disappear at an accelerating rate. But at the same time, we see one undeniable trend. There's going to be new jobs and lots and lots of them across every imaginable field. Where the Internet gave people worldwide reach, AI gives them a world class staff. The jobs will not look like real work, much as our current jobs would have looked like leisure to our predecessors. And I bet that we would feel the same about our descendants in the future. And I can tell you with near certainty that a humanity that's capable of building a superintelligent AI also has the creativity to navigate through this potential job doom and gloom scenario. Although we'll never stop worrying about it, being hypervigilant about threats to our survival is a key part of evolution, of what makes us human. I can tell you that you shouldn't take predictions about future job disruption to keep you from doing something you feel very passionately about. You know, when I was a kid, in the 90s, teachers discouraged me from becoming a computer programmer. Back then, it was a common thought that all those jobs would be shipped off to China. So, even where it seems obvious, sometimes our predictions of the future end up being completely off. Humanity has always excelled at providing itself with meaning and purpose. Even in the darkest and most uncertain of times, I feel very, very confident that the 20-year-olds of the future, perhaps in collaboration with AI, will continue to build new things, which simultaneously we're going to be scared of, but also excited by.

Speaker 2:
[48:43] That was Vlad Tenev, the CEO of Robinhood, a mobile financial trading app, and one of the places where he thinks people will go when they no longer need to perform labor to earn an income and can instead do things like trade on various markets to earn money. Critics have called the platform risky, especially considering that billions have been invested in AI companies that aren't profitable yet. You can see Vlad's full talk at ted.com. Thank you so much for listening. Make sure you watch some of the videos that we've been making with our guests. If you're on Instagram, you can find them at Manoush Z. That's M-A-N-O-U-S-H-Z. This episode was produced by Phoebe Lett and edited by Sanaz Meshkinpour and me. Our production staff at NPR also includes Matthew Cloutier, James Delahoussaye, Fiona Geiran, Harsha Nahata, Rachel Faulkner White and Katie Monteleone. Our executive producer is Irene Noguchi. Our audio engineers were Damien Herring and Simon Jensen. Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Roxanne Hai Lash and Daniella Balarezo. I'm Manoush Zomorodi and you've been listening to the TED Radio Hour from NPR.