transcript
Speaker 1:
[00:00] In the 1980s, I was in the AI wave that was expert systems, and there were public companies, and the fifth generation, and the Japanese were going to be this big threat, and that AI didn't work. But this one is. Anything that lives in the emotional realm will be impacted not as much by AI, because we humans react to these things emotionally. And so again, I think we'll not watch robots playing basketball. STEM practically took over Stanford University. Okay. And now maybe what we'll see is a rotation back to the humanities and to understanding a combination of history and literature. If I had a three-year-old today, I would be like doubling down on the emotional skills. In 20 years, robots will do maybe 1 percent of the plumbing.
Speaker 2:
[00:48] At most.
Speaker 3:
[00:49] Reed Hastings is the co-founder and former CEO of Netflix, the company that helped define the streaming era. And in doing so, rewired how 2 billion people spend their evenings. Under his leadership, Netflix launched streaming, pioneered the original content model with House of Cards, went global in 190 countries, and produced some of the most watched programming in TV history. He ran it for 25 years.
Speaker 2:
[01:18] But Reed's ambitions have always extended well beyond entertainment. He served on the boards of Microsoft and Meta. He's currently on the boards of Bloomberg and Anthropic. He's given hundreds of millions of dollars to education reform, and he holds a master's degree in artificial intelligence from Stanford, from 1988, before most of us had AI on our radar.
Speaker 3:
[01:37] Today we're asking, what does someone with that vantage point across entertainment, technology, AI safety, and the long arc of institutional change actually think is happening right now? Where is the leverage? What are we getting wrong? And what would it look like for this moment to go genuinely well for humanity?
Speaker 2:
[02:01] This is a conversation about technology as a civilizational lever. About whether the people building AI and the people governing it are asking the right questions. About what entertainment can teach us about human nature, and what human nature can teach us about AI. And about what it means to have spent 35 years watching technology reshape the world and to still believe the best is ahead.
Speaker 3:
[02:25] And now to our conversation with Reed Hastings. Reed, awesome as usual to talk with you. I thought we'd start with something a little light for both of us, so I'll answer this too, but we'll start with you. In various weird ways we get mistaken for each other. What was one of the funny ways that comes to mind for you about being mistaken for me? And then I will also answer that question.
Speaker 1:
[02:45] Reid Hoffman, Reed Hastings, both in tech, so I get introduced sometimes at conferences, and someone's done the quick ChatGPT, and they do it on Reid or Reid H or something, and they call me the founder of LinkedIn, and I'm very excited because that's a hell of a business.
Speaker 3:
[03:00] Well, and similarly, because Netflix is a hell of a business, I think the last one I got, one of many, many, was, oh my God, that Wednesday show is awesome. You guys have done such a great job, which I'm now passing along to you. It's an awesome show, but that kind of thing. And we've been doing this for a while, so when we get the emails, we just kind of forward them to each other.
Speaker 2:
[03:22] Exactly. All right, well, people confuse you guys, but let's see if you guys confuse yourselves with each other. So I'm gonna say a quote, and you guys are gonna tell me who said it. Shout it out. Most entrepreneurial ideas will sound crazy, stupid and uneconomic, and then they'll turn out to be right.
Speaker 3:
[03:43] I think that's me.
Speaker 1:
[03:45] I would totally disagree.
Speaker 3:
[03:47] It was Reed, that's good.
Speaker 1:
[03:49] Most turn out to be wrong. That's the whole point of the contrarian thesis. And then occasionally they're right.
Speaker 3:
[03:54] Yes, but they sound stupid. Oh wait, no, Reed Hastings, you said it.
Speaker 1:
[03:59] No, I did not. I'm sure that's a misquote.
Speaker 2:
[04:02] Oh my God, we're gonna fact check.
Speaker 1:
[04:04] Okay, let's hit it from the top. But the quote is, you gotta have a contrarian thesis where everyone else thinks you're stupid, basically, and that it turns out to be right, which is rare. The whole point is that most of the time, the entrepreneurs fail because the idea doesn't work.
Speaker 2:
[04:21] Oh my God, they're so similar that they're confusing each other's quotes. All right, the future isn't something that experts and regulators can meticulously design. It's something that society explores and discovers collectively.
Speaker 1:
[04:34] Boring.
Speaker 3:
[04:36] So that means me and it's accurate.
Speaker 2:
[04:38] That was Reid Hoffman. Thank you, thank you. All right, we have a few more.
Speaker 3:
[04:41] Boring is frequently how Reid talks to me, but it's the details.
Speaker 1:
[04:45] The entertainment part of Netflix, the key to that was always, not that I am, but always trying to be thrilling, and boring is the only sin in entertainment.
Speaker 2:
[04:55] There you go. All right. Stone Age, Bronze Age, Iron Age, we define entire epochs of humanity by the technology that they use. Reed Hastings, all right, we got that. All right, two more. We are Homo Technae. When we cross the river, we are deepening our understanding of technology and ourselves. That was Reid Hoffman. Last one, companies rarely die from moving too fast, and they frequently die from moving too slowly. That was Reed Hastings, but I feel like you've said such similar stuff, such similar stuff.
Speaker 3:
[05:31] It's fricking Plutarch.
Speaker 2:
[05:33] Thank you for humoring me.
Speaker 1:
[05:35] Who's the better investor?
Speaker 3:
[05:39] Who's more entertaining?
Speaker 2:
[05:40] All right, there you go.
Speaker 1:
[05:42] Well, it turns out that operating personality is really different from allocating capital personality. So operating personality, you're like a dog with a bone, you never give up on a problem, you just work it, you work it. And investing personality is staying very broad, not falling in love with an idea, cutting your losses and moving on. And like when I tried investing, I just fell in love with all the entrepreneurs and just gave them money, and none of it worked out. And when Reid tried operating, we quickly found the right guy in Jeff to operate for him, who was an amazing operator.
Speaker 2:
[06:16] Fair enough. Reid, I'll turn it over to you.
Speaker 3:
[06:18] So you were CEO of Netflix for 25 years. And then when you handed over the role in January 2023, what happened the next day? What did you then go do?
Speaker 1:
[06:33] My schedule evaporated. So first of all, it was a secret. And second, I had a whole bunch of work meetings and a pretty full schedule that I then wasn't going to do. A bunch of internal and external things. First couple days, it was just like be at home and shock. And a lot of calls and just people saying congratulations, kind of nice sweet things. And then I said, well, I've always wanted to ski a lot. And I've only been able to ski five or 10 days a year because of work. So I'm going to take February and March and ski my brains out. So mostly that's what I did. And that was a super fun release and fulfillment, but it wasn't really planned. And it really gave space to the company. In many cases, it's good to have the ex-CEO not be itching to call in to make decisions. And, how about lunch, Ted? Greg, let's talk about strategy.
Speaker 2:
[07:37] That episode of Love is Blind was terrible.
Speaker 1:
[07:40] So I was able to just enjoy myself and focus on snowboarding.
Speaker 3:
[07:46] Was there anything that you found surprising that you missed?
Speaker 1:
[07:51] No, the huge surprise is how much I was okay with moving on. I thought I would miss everything, because it was my whole entire life. I loved every second of it. And then I just realized, I had done everything I wanted. I had done this huge global rollout. I had lived on an airplane around the world, every week in Seoul or Mumbai or Berlin. And it was incredibly fun and exciting, but I didn't need any more of it. And so I was super surprised that I didn't yearn to get back in the saddle. And of course, there's aspects you miss, and the people I missed. But I've stayed in touch with them on a personal basis. So it was surprisingly easy, because I certainly have heard of people who have a very hard time with it.
Speaker 2:
[08:42] We've certainly seen some CEOs who stepped away and then had to go right back because they missed it so much.
Speaker 3:
[08:48] Even in your former industry.
Speaker 2:
[08:51] But beyond Netflix, you've been on the boards of Meta, Bloomberg, so many influential companies. These companies were some tech, some less, different business models, different industries. What did you learn from being on the board of such diverse companies, obviously, in addition to Netflix, about the AI and media ecosystem?
Speaker 1:
[09:10] Well, the lucky break of my professional life in that way was that in 2005, Microsoft was looking to get someone from tech on the board, but they have conflict rules. So they couldn't have the head of Cisco on the board. So they started going down the list of tech people, and after a year, they had gotten down to the head of a domestic DVD rental service, Reed Hastings, for an interview. So I was like, you want to interview me? Then I went up and met with Steve Ballmer and Bill Gates, and we really hit it off. A little while after that, they put me on the board of directors of the Microsoft Corporation, which was a million times larger than Netflix at that time, and super global and super long-term in their thinking. So the stuff I got out of it was amazing, because they were willing to work on projects for things 10 years ahead, and I had never been able to do that, because the company didn't have enough profit stream to do that kind of thing. And so it was such incredible learning for me. Then later, as social became really important, I thought, oh, you know, I'm kind of old, I should really get much closer to this. This is going to transform Netflix, like it will transform photos and things. So I got on the board of Facebook, and Mark Zuckerberg was very open in the company, and I tried to do what I could to help. And I certainly learned a ton about social. But then it turned out that social really had very little to do with movies and TV shows. So it wasn't like this huge transformation of the rest of the universe, in the way that, say, AI is. So those were two long-term board assignments that I did, and I learned a lot on. And I certainly encourage all the CEOs I know to do one or two other companies, hopefully big companies, where they can learn from. And then I got on the Bloomberg board, really, as a favor to my friend Mike Bloomberg, who's an incredible philanthropist. 
He's been number one in the last two years of most giving in the US, and just a great human being and a great philanthropist. So that's been a favor and fun. And then I'm on the board right now of Anthropic, as well as Netflix. And on Netflix, it's pretty passive, because the CEOs trust me and I trust them. And so it's just kind of easy. But Anthropic is really exciting and intense. And I've really come to appreciate the team, the mission, and it's neat to have a seat kind of on the front edge of what they're doing. Absolutely.
Speaker 3:
[11:48] Well, one of the things some people know, but very early days in LinkedIn, I wrote to you asking you to join the board, because you're one of the top board picks for Silicon Valley intelligent people. But I want to go back to Microsoft. So you were there before me. I joined in 2017, actually called you, and as usual, got advice on this stuff. What do you think, speaking of long trajectories, Satya has done a great job at Microsoft. What do you think were the things that were the right setup for Satya doing that? How much of that is Satya's own brand of things, and how much is the patterns of work that were already present at Microsoft?
Speaker 1:
[12:34] Well, let's think about why Microsoft is 10, 15 times more valuable than when he started. The profit stream has continued to grow, but then it grew a lot before that under Steve Ballmer. So Office has stayed a very strong franchise, and they have withstood some erosion from Google, but there was no wholesale change. They won the Office wars, and they've made Office good enough that fewer people are switching, so they stabilized that. Windows didn't particularly get stabilized. Apple in particular has continued to grow in the face of Windows. Google Search has continued to grow in the face of Bing and all those. So Windows and Bing haven't really delivered. Azure has delivered in a big way. It's delivered in a big way because of the AI workload. That's the big validating workload. So then that comes back to Satya made one incredibly ballsy, insightful decision, which is to invest in OpenAI back in 2018.
Speaker 3:
[13:39] I thought you were going to say, buy LinkedIn. That's okay. You know, keep going.
Speaker 1:
[13:43] So, you know, he bet big on AI, and that catapulted them both reputationally and, a little bit, in profit and in the Azure business. So it was the workload that has grown Azure into a monster and a big success. Because Amazon had their own first-party workload, plus all the early adopters, it was really hard for Azure for a while.
Speaker 3:
[14:08] Early adopters like Netflix.
Speaker 1:
[14:10] That's right. So that's where Satya gets incredible credit. He also internally just got people talking and working together more than Steve had been able to do. But that, I mean, again, IBM was a nice place to work, but they just didn't make any product bets that really delivered. And Satya made a huge product bet that did deliver.
Speaker 2:
[14:31] So talking about AI, everyone is talking about AI, and you have also joined the public conversation. Where do you think people are asking the wrong questions, maybe having the wrong answers, or what is part of the discourse of AI that you think is totally missing, that people should be talking about more?
Speaker 1:
[14:48] Well, I'm not sure that whether AGI comes in 18 months or six years really is going to make much difference. So I think we should just sort of say, it's coming fast, and how do we want society to be, what do we imagine, in 10 or 20 years? Sort of get over the intensity of how fast it's happening: it's here. And what will we do, and how does it work, and how would the legal profession work? Like, as an example, in education, people say, oh, it doesn't change at all, and look at how medicine has changed from 50 years ago; educators are recalcitrant because they don't want to change. But if you look at the practice of law, like pleading before the Supreme Court, it's identical to 100 years ago, and we're sort of proud of that. So I imagine things like that, briefing before a Supreme Court, will actually work identically in 20 years when AI is fully ubiquitous. Now, the briefs will get AI edited and improved, and the research will be a little faster, but it's like in the noise, the way that today LexisNexis is, or Word for legal briefs. So some areas of the economy have a huge change, some areas not so much. So I think again, there's so much focus on, is the current version of Claude Code this or that, as opposed to, okay, it's going to happen, and the robot side, the humanoid robot, is going to happen. So then, what will society be in 10 to 20 years?
Speaker 2:
[16:26] Well, you're saying if so, education, medicine, law, those are either sort of highly regulated industries or industries with big unions. Do you think that's why they might be affected less and others will be affected more? Or to actually throw your question back at you, do you think there's a profession that will be the most affected by AI, whether it's in three years or 10?
Speaker 1:
[16:47] Well, I was going to say the least affected, I think will be entertainment. You're not going to watch a basketball game of robots.
Speaker 2:
[16:53] Right. March Madness is not going to be taken over by AI, we're safe.
Speaker 1:
[16:55] So we like the human conflict, and that draws us in. Most affected, like as a percentage reduction in employment? One theory would be software engineers, because everyone's working really hard on automated software development. But there's a substantial chance that while many companies will have reduced software engineering employment, there'll be many other opportunities for more software. So that kind of elastic response is, well, it's what we've seen in radiology, which is an interesting example, because radiology is image processing, which computers and AI are much better than humans at, and have been now for several years. Okay. And so you can now go into an MRI center, and if you're self-pay, 300 bucks, and get an MRI; it's dramatically down in pricing. And so people are getting many more scans, and they're being read by AI with a radiologist approving them. So I had thought four years ago there was going to be devastation in radiology, that it was going to be the first high-end profession to go.
Speaker 2:
[17:56] Everyone talked about this, right? Obvious.
Speaker 1:
[17:58] Well, we have a shortage now. We have 35,000 radiologists. We need about 40,000. Wages are high. And so it's a good example of, it's easy to get focused on the disaster case, the Armageddon. We love movies about Armageddon. We have them in our religions. It's like we love disaster. So we're drawn to these scenarios of AI wiping out things. And again, it hasn't happened in radiology. Maybe it will eventually, but not in the last five years. So my hunch is that many professions will be resilient, and that there's certainly much more demand for healthcare. So now, is there much more demand for legal services? Maybe. I mean, poor people are definitely under-lawyered; they get taken advantage of a lot. So maybe there will be a whole elastic response there, and we'll see. But if I had to guess, it might be lawyers as the kind of most affected, because it's very verbal, and it's somewhat formulaic. Not as much as writing software, but it's somewhat formulaic.
Speaker 3:
[19:04] Well, and also, if we thought about what would be an increase in productivity: if you had people shifting from legal work, which is generally transactional costs on things that are happening, to actual production of things, that would in fact be a general positive thing in society. And I say this as a child of two lawyers. So I actually think it would be a good thing if the work shifted more towards, how do we build more products and services? What are the ways that we create entertainment, create other work, you know, other kinds of things, versus the transactional cost, which is legitimately there on a number of things, but that's what legal work is.
Speaker 1:
[19:43] Well, I think where we both are is no one quite knows exactly, right? Which fields will be, but what I think we both see is an era of great promise, that the next 20 years will be super exciting. And I think it will usher in this era of abundance through both scenarios, which is again, you've got the white collar symbolic processing stuff that's getting pioneered now, like lawyering, writing software. And then we've got the Android humanoid robots coming at some pace. So that will be pretty exciting.
Speaker 3:
[20:17] So one of the things that obviously a lot of discussion of AI is on, the question is, are there safety issues? And the safety issues range, we've been in these conversations, anything from various smart people being, as you were referring to earlier, apocalypse doomsayers, whether it's Terminator scenarios, whether it's jobs, etc. So what do you think is the right way of trying to navigate the development of all of these, what we both agree are amazing future capabilities, medicine, amplification of software engineering, a whole bunch of other things, and navigating the safety? How much of that should be technical work, constitutional AI, etc.? How much of that should be incentives? How much of that should be regulatory? What are the guideposts that you throw out there, or principles by which people should think about how to navigate both keeping up our speed in building AI for all the good things and also navigating safety?
Speaker 1:
[21:22] Well, on the safety, we can break it down into a couple of categories. So there's the super disaster, Skynet-y case, where like AI takes over.
Speaker 3:
[21:32] Soon to be a Netflix film near you.
Speaker 1:
[21:34] Yeah, and we absolutely have to prevent it. It's not something that's going to happen in the short term, but the danger is that we could slide into it as these things take care of more and more of our life. So I think because the downside, or recovery from that, because we don't have time travel, is extremely hard, that's when we have to think about, even if it was low probability, like massive nuclear war, we have to do some stuff to prevent it, because it would be so disastrous. So that's the massive nuclear war equivalent case, that they take over. Then the other case is North Korean soldiers use it to design a virus, and a lot of people die. So think of it as, in the hands of the wrong people, it's very powerful. One scenario is sort of combined with synthetic biology. Another scenario is cracking into other computer systems. So it turns out AI is very good at breaking in, finding bugs in open source things through code analysis, or finding bugs in closed things by probing, in the way that national security agencies do today, but instead now it's in the hands of terrorists or other people. So again, those are two specific examples of bad people using AI as a powerful tool to do very bad things. So there we've got to make sure that the whole industry does tech prevention so that it's hard to do. Right now, sort of everyone is doing that, because they don't want to see these scenarios, but you could imagine that it might need to be regulated over time, to insist that all of the sufficiently powerful AI systems protect against these kinds of scenarios. And probably some of them will happen, and then we'll set up a protection regime afterwards to prevent that kind of thing. But no one of them is going to destroy all humanity at once, like massive nuclear war might.
Speaker 2:
[23:39] All right. We're going to go from nuclear war to AI writing.
Speaker 3:
[23:43] Which some people think is nuclear war, but it's not.
Speaker 2:
[23:46] So the New York Times recently ran a blind test, and 86,000 people took it. I don't know if you saw the results, but they were showing folks snippets of AI writing versus human writing, and 54 percent of people preferred the AI writing. Some people argued that was short-form, that wasn't novels, that's what AI is particularly good at. Other people were devastated that more people preferred the AI writing. What do you think, what does this mean? Does this mean that most writing is going to become AI in the future, or what are the implications of a test like this, where regular people preferred the AI writing?
Speaker 1:
[24:22] I think short-form writing on a specific topic is very different.
Speaker 2:
[24:27] Yeah.
Speaker 1:
[24:27] Writing a story and developing character, a character arc, a conflict, and a resolution is very different. Just look at how Shakespeare still has a huge impact, you know, when it was 400 years ago, and no one's been quite as good as him at a certain broad range of things. So I think it's an extremely rare talent at the very high end. But average writing is not, I don't think, the way to analyze it, and I'm not surprised that the AI is better than the average writer today.
Speaker 2:
[25:00] Right. Fair enough. So thinking about all these creative revolutions, it's like you were there for DVD to streaming, now obviously lots of people, they're concerned about AI is the entertainment industry. So forget the tools that people use, whether we'll use AI for editing or scripting or whatever, but the actual stories that are being told. Do you think the AI era ushers in new stories or different people who are telling these stories? What will the effects be?
Speaker 1:
[25:27] Well, people have been talking about democratization of film for 50 years. So the early wave of democratization was in the 90s, when you could shoot with digital instead of celluloid film.
Speaker 2:
[25:39] Sure.
Speaker 1:
[25:39] So you could do more takes and the cost of filmmaking was going to come down and that was going to democratize film. It really didn't change. We got bigger budgets, higher special effects. We did reshoots more.
Speaker 2:
[25:51] We just raised it up.
Speaker 1:
[25:54] It turned out that the constraint really wasn't the film cost. There's lots of student films produced today, as there were back then, and they just don't break through. Take our recent success with K-pop Demon Hunters. I mean, in some ways, wow, cool.
Speaker 2:
[26:07] I have a 10-year-old, an 8-year-old, and a 5-year-old. I certainly know about it.
Speaker 1:
[26:12] It's like our 28th animated film. Right. It's like, even for us, it's really, really hard. God knows we want to repeat K-pop, but other than the sequel we'll do, it's like, can we make the same lightning strike? So we are really working hard, but think of it as a very subtle set of high-end things. So it'll help a little, particularly in TV when you do a lot of episodes, and it'll help a little bit on the scripting. Where it will help particularly is then the script to screen. So you've got a big crowd shot in a big stadium. That might be a very expensive VFX shot now, a special effects shot. But now it'll be AI, and that will be lower cost. So the mechanical parts, the industrial parts, will be lower cost. But the backbone is the story, and then back to the basketball example: we don't want to watch robots. I think people will pay for real actors and people they recognize in that way. But we'll see. I mean, there's definitely a threat of short form. Do young people only go on TikTok and never watch a podcast or a Netflix? We'll see.
Speaker 2:
[27:30] I mean, we'll also see Val Kilmer has a new film out and he's been dead for several years. And so AI Val Kilmer, we'll see if that's a success or not.
Speaker 1:
[27:39] Recovering past IP and extending it is a niche.
Speaker 3:
[27:44] And by the way, people will watch robots; Love, Death & Robots is one of the great Netflix series. So what you watch robots doing is the interesting question. Now, what do you think are some of the interesting more side effects? Like obviously, if you came to me today and said, hey, AI will help us improve identification of potential hits early, that seems unlikely, unless there's some analysis of major data streams that the humans don't do yet. But what are some of the more side cases of how AI comes into this? It could be also like an AI to discuss the content with. I mean, what are some of the things that, not just reinventing the process, but thinking about AI at the corner cases, or AI at a corner case that may become big? Is there any of that that you've had musings on yet?
Speaker 1:
[28:43] We haven't found, say, the equivalent of sports betting. Sports betting is a whole kind of value and emotional engagement layer on top of sports. It's not really AI enabled, kind of nothing to do with AI, but with entertainment we're always looking for what's a layer of conversation, what's a layer of engagement that enriches the experience. So maybe AI will improve that, but it's not obvious what that will be. And in some part, the beauty of a show or a movie is that it is kind of self-contained, and it's been more than a hundred years we've had films. And so it's like a novel, where the novel as an art form has really stayed constant. And so there's something about the size and the capability that people are used to. Sure, there's short stories, and sure, there's epics, but like almost all of the business is novels, and that's what people read. And I think there's a lot of that with film and TV series: they're not artifacts, they're really reflections of human attention span and stories.
Speaker 2:
[29:53] I feel like people have been complaining forever about, you know, you have one superhero movie hit, and then you have 25 more superhero movies, or like Barbie hits, and then all of a sudden you have toys, like toy stories. Like we're not getting sort of the diversity of hits, because people are looking at an algorithm, or they're, well, you know, you gotta put this star with this story, and so you don't maybe get a K-pop Demon Hunters, which like you said was the 28th, and no one could have predicted it. Obviously, we're in sort of a time of heated rivalry, when no one could have predicted that. Do you think AI will flatten things? Like will AI lead to people just, you know, betting on past success because the algorithm told them to, and so they'll take fewer chances, or is that already happening? Like how does AI affect it from a data perspective?
Speaker 1:
[30:38] You know, very, very little. We're really predicting for that kind of human conflict and something, you know, you want something that's both familiar but also fresh. So those are our tensions. In terms of like over-sequelization, you look at the amount of new content that Netflix is producing; that's really gone away. In other words, sure, we'll have some sequels, and think of season two of Wednesday, season three, et cetera. So that's always been a part of the business. It's very enjoyable for people, because it's a story they already know, and you see more of it, but you don't want it to be the only thing. And that's certainly not true today. Today we have an incredible set of new films coming out all the time.
Speaker 2:
[31:19] Yeah. And do you like, how well do you think Netflix can predict a hit? Like sometimes things come by surprise, but you're like, no, no, no, we got it. We know. Or you're like, we sometimes we don't.
Speaker 1:
[31:29] Yeah, it's a mix of both.
Speaker 2:
[31:30] Okay.
Speaker 1:
[31:31] So season twos, we have much more data there.
Speaker 3:
[31:36] All right.
Speaker 2:
[31:37] Fair enough.
Speaker 3:
[31:38] One of the many smart things that Ted Sarandos said is that there's a much better business in increasing the upside, the quality, the volume of the reception of content by 10 percent, than cutting costs by 50 percent. And it was a way of kind of lensing why AI is potentially really amazing. What are the kinds of things, in terms of thinking about how AI can be additive, that you think the entertainment industry should be thinking about broadly, given obviously there's a lot of concerns and uncertainty around it?
Speaker 1:
[32:14] You know, in entertainment, there will be a lot of work on special effects, and sort of things that didn't fit in the budget before can now be done, you know, using AI, so that's a good example. But more or less, I think anything that lives in the emotional realm will be impacted not as much by AI, because we humans react to these things emotionally. And so again, I think we'll not watch robots playing basketball. And so it's the easiest example for people to get that things that are emotional, it's the pleasure, the emotional stimulation, that we'll still do. So you like to give and get real flowers, not fake flowers. So it's like, I don't think that's changing just because of AI. So where AI is very good is that thinking and logic, like coding. Wow, it's really breaking through. So I think there's just, think of it as tremendous excitement in medicine, in biology, in things that are very factual, logical, hard, and complex. And things that are emotional, it's not that AI won't eventually be able to add value, but that's certainly not the big thrust of the AI world.
Speaker 2:
[33:26] All right. This is something I'm very excited to talk about. You and I share a deep passion for education. And I feel like education is like at the center of AI. It's like, is it going to usher in a new era where everyone has amazing tutors and education is supercharged, or are public school students especially, perhaps just going to have AI slop constantly? Like as a philanthropist who's invested a lot here and sort of studied the education, especially in the United States, how do you think AI is going to affect education?
Speaker 1:
[33:58] Well, there are two big questions. One: what are we educating kids for? What kind of society are we educating them for? What skills should they have? The goal state in the past was pass a lot of AP exams, okay? And that's probably not the right goal state for the future world. So there's a big discussion to have, which is: what are the skills that you're trying to target for your kids? And then second is implementation, which is: can we use AI tutors to teach more of whatever we decide is important, to make teachers' lives better, to have kids learn more? So there are two different discussions that get kind of mushed together. On the second, how to teach more, there are some relatively clear paths, you know, different kinds of AI tutors. Just like there will be an AI doctor and an AI lawyer, there'll be an AI teacher, and that will come along. It'll be easier in private schools, easier in charter schools, and harder in school district schools, because of all the regulations they have. It'll be easier outside of the US, where they're more willing to do some of these things. So, you know, that will happen. What's unclear is the first question: if you have a three-year-old or a five-year-old, what do you want them to learn? Because the hard skills that we used to value, STEM, that's probably like coding. We spent 25 years saying, learn to code, learn to code. Oops, don't learn to code, don't learn to code. So it's probably true that all the hard facts of biology, chemistry, physics will be extremely specialized and not necessary as general knowledge. And probably not that valuable, let's say, if you had an AP result in this.
The things that are more emotional, how you read people, how you work with them, probably are quite valuable, because they're much harder for the computers to do, and humans like doing them with other humans more often. So I think we'll see a kind of shift of the value. You know, STEM practically took over Stanford University, okay? And now maybe what we'll see is a rotation back to the humanities, to understanding a combination of history and literature, but also kind of the physiology of the brain and how we interact with each other. If I had a three-year-old today, I would be doubling down on the emotional skills. There are some great middle schools, one's a charter school, Valor, another's a private school, Flourish, and they really focus, like in seventh grade, on these emotional circles, where you sit around and talk about your feelings, because they believe this kind of skill, knowing yourself and how you interact with other human beings, is going to be the thing that sustains those kids through their working life.
Speaker 2:
[37:03] I feel like when people talk about AI and education, Alpha School always comes up. We had Mackenzie on the pod a few months ago, and some people's criticism is that they're not even using cutting-edge AI. Other people say, look at their test scores, they're doing twice as well as anyone else. Some people complain about equity at $40,000 a year.
Speaker 1:
[37:25] $60,000, it's fantastic. If you can afford it, go.
Speaker 2:
[37:28] So why do you, like, the thing I love about it is actually less about the AI and more about giving agency to kids, focusing on other things. Why do you think Alpha School is so great?
Speaker 1:
[37:39] Well, it starts with Mackenzie and Joe's philosophy that the kids have to love school more than they love vacation. So the number one goal is that the kids don't want to go on vacation; they want to stay in school because they love it so much. Then they start from: okay, if we want them to love it, and we need them to know the basics and do well on tests, what are we going to do? So they do two hours on software a day, where it's relatively practice and drilling and these kinds of things, but they have a coach really motivating and helping. And then the reward for that is they get to spend the rest of the day doing stuff they want. A set of the schools are for athletes and people focused on sports, or they spend the rest of the afternoon doing a range of things, like watching TED Talks and talking about them. But I would say they're kind of like the Tesla Roadster. That was the first Tesla, an over-$100,000 specialist sports car. But it set electric cars as the aspiration. They became cool, because the first electric car was this extremely fast-accelerating thing. So Alpha School is our Tesla Roadster for AI schools.
Speaker 2:
[38:48] You think it can be done, that we can go down the cost curve and do it at a lower price?
Speaker 1:
[38:53] A hundred percent. Model 3 will be coming. I love it. So there'll be more and more innovation at that end.
Speaker 2:
[38:59] Cool. So Reid, we talked about education and how there's enormous opportunity for change, with Alpha School operating within the United States. How is education going to change outside the United States with AI?
Speaker 1:
[39:10] Well, I think what we'll see is a tremendous flourishing of AI teachers internationally. We have pretty good teachers in the US, and satisfaction might be high in rich countries generally. But in lower-income countries, the education budget might be $300 per kid per year, the class size might be 50 to 70 kids, and there's just not a lot of learning going on. I think the combination of a Starlink on every school, a tablet for every child, and then really good AI software will close the gap. So think of the way mobile phones are ubiquitous in the developing world. I think AI learning will be ubiquitous, and that will help. I don't know that they'll leapfrog the advanced countries, but they'll certainly close the gap that we've seen historically, and in places they may leapfrog.
Speaker 2:
[40:01] I feel like lately there have been breathless articles everywhere about how One Laptop per Child was the hope and then it was really a failure, we put so much money into it and it didn't work. And then other people are saying, well, that's really nice that you want that, but the cost of compute is too much. What would you say to the people who think that we've heard tech solutions in education before, and it's not gonna happen this time?
Speaker 1:
[40:23] Yeah, there's always the crushed-dream phenomenon. Like, in the 1980s, I was in the AI wave that was expert systems, and they were public companies, and the fifth generation, and the Japanese were gonna be this big threat, and that AI didn't work. But this one is working. So just because it failed before, because it was too early, doesn't mean it won't work now. I think one tablet per child around the world is very scalable, builds on the mobile phone platforms, and it's some of the same ideas that One Laptop per Child had. That was just 20 years too early.
Speaker 2:
[41:00] Yep, and I assume you think the cost of compute is just gonna go through the floor, so it won't be a barrier to entry for folks in the developing world.
Speaker 1:
[41:07] That's right, the costs are really driven off the phone market. We have $800 phones in the US, but in most of the world there are $50 phones that are quite usable. So think of it as a $50 phone, a Starlink per school, and some solar panels. It's very scalable.
Speaker 2:
[41:29] Fantastic.
Speaker 3:
[41:30] So one of the things you said earlier, which I completely agree with, and one of the things I think I would partially reframe, is kind of a question leading into education. I do think our human skills, how we collaborate, engaging with our humanity, understanding these things, will amplify up. And the skill of learning coding obviously goes away, because it's like, oh, this does the code. But there are two things that I think will still be pretty deep in what we're doing. One is a good systematic understanding of what truth in the world is, so like biology, physics, chemistry, et cetera, because that kind of iteration of understanding the nature of the world we're in is pretty fundamental scientific thinking. The patterns of thinking are important, the patterns of strategy are important. And I think actually even coding will go that way, because while people say, well, should I stop studying computer science? Like, well, no, you should study computer science, not coding.
Speaker 1:
[42:32] Right?
Speaker 3:
[42:32] Like, more of the thinking about the system or the pattern of it, in terms of how we operate. And I think that will become more general. I'm curious what you think of my modest reframing of your earlier statement.
Speaker 1:
[42:48] I think I'd just study math. You know, if you want to get to like patterns.
Speaker 3:
[42:53] Well, LLMs are terrible at counting, so yes.
Speaker 1:
[42:55] So, you know, whether it's algebra, I mean, there are so many interesting things in math, if you want to help people learn abstraction or search for truth. And look, there's going to be some role for science, and a few people will be drawn to it. But think of the last 20 years: as a society, we've been STEM, STEM, STEM, learn to code, STEM, STEM, STEM. So just as everyone sees that coding is overdone, my guess is we'll see that STEM is overdone. And the kinds of things that you do with a biology background will be done so much better and faster by AI that it'll be hard to compete for jobs in that space.
Speaker 3:
[43:40] So shifting to another thread, one of the things I think AI will do is break us a little bit out of the industrial model, which tends to be: go to high school, go to college, learn, now go work. Learning then, work now. And I think ongoing learning will be one of the key things. Do you think that will also be part of it? Not just, I have this one learning period, but rather, I'm constantly interweaving learning with what I'm doing?
Speaker 1:
[44:07] Absolutely. You know, ongoing skill acquisition has probably been important for a long time, because the stuff you learned in high school changes so much, and the knowledge in the field changes so much, and that's only going to continue. So certainly learning will need to be constant for people who want to make a living intellectually. But I don't think that's a break from the past, because it's been true for a while. It's just going to be even more true in this new world.
Speaker 2:
[44:42] So, we learned earlier that you were the one who said: Stone Age, Bronze Age, Iron Age, we define entire epochs of humanity by the technology they use. And so we're entering the AI age. Do you think, even more than in previous ages, that if you are left out of the AI age, either personally or as a country, you might actually fall behind, and there's going to be a bigger divergence between those who embrace the AI age and those who don't?
Speaker 1:
[45:13] Well, I don't know that just embracing it, like if you're a small country in Europe that has a glorious past, I don't think it's whether you embrace it or not that's going to change the outcome. I mean, you've got to try to figure out a strategy, because it's going to be dominated by China and the US. How do middle powers like Canada thrive? I think we're part of that thinking and solution, to be able to build a world that we all want to live in.
Speaker 2:
[45:46] So what would you do for these middle powers? They don't want to live in a world where it's just China and the US.
Speaker 1:
[45:54] I don't know that they have much of a choice. So, I mean, what they're doing is linking up together, and I think that probably makes sense. And certainly they'll embrace using American AI, because they'll need to. And maybe they will be able to get some local things going, or they'll do it by treaty, where they're not going to get shut off. But think of it as: AI is going to accentuate income gaps, both within the US and between the US and other countries. So it's going to create tensions between societies, just like it does within one society.
Speaker 3:
[46:36] I do think, and I think you would agree, that having an active AI strategy for countries, for industries, et cetera, is in fact really important. It isn't that so-called digital sovereignty isn't also important, but the most important thing is to actually be modernizing your industry, your country, your companies, your workforce, because it's a little bit like the industrial revolution. One of the things I put in Superagency is that England, with a quarter the population of France and a tenth the population of China, did not invent the industrial revolution, but built a multi-century empire because they embraced the industrial revolution most robustly, early. When you think about this from a country's or industry's or company's perspective, would you also say they should adopt that same thing of, we have to have an active strategy and be there early?
Speaker 1:
[47:40] Well, your example is interesting. You said the UK, which led industrialization. That's kind of the equivalent of America today, right? You didn't really phrase it as, what should Argentina do about industrialization, which is a very hard challenge, because the UK did everything in its power to keep Argentina from being able to industrialize, to maintain their strength and their power. So if you are a middle power like Argentina, should you have an industrialization policy? Sure, that probably makes sense. Will it work? Not clear at all, because the power imbalance with the UK was so high. There were laws preventing looms and various things from being in India, so the cotton from India had to come and get processed in the UK, and they enforced those laws with their military. So it sounds all great that Belgium should have an AI policy, and Estonia is a small country that's done a lot on, say, digital ID and good government tech. I guess that's better than not doing it. But I don't know that it's actually much of an answer for a middle-power country. I think the challenges for a middle-power country are quite substantial, and I don't have a good solution beyond them linking up and working well together. And I hope that the US, as the Western leader, embraces that in a very important way, which is not the current America First policy: I think it's much more in our long-term interest to have strong, long-standing allies.
Speaker 2:
[49:21] So earlier you said that you think AI is going to usher in this great era of abundance, whether it's 10 years or 20 years, when sort of AGI gets here. What are the things that need to happen for that to be true? Are there conditions that need to be met now? And did you mean abundance just in the US, or could it be global abundance too?
Speaker 1:
[49:43] It depends how the IP, or the rewards of it, are shared, but I think it'll be pretty global. Just like you saw with industrialization, it really did lift all boats, you know, at a different level than in the host country. So let's take nuclear fusion. If we can actually make it work, which will be assisted by AI, then we've got a tremendous energy source. If through it we're able to bring down the cost of solar, or invent new types of batteries and storage, it could really revolutionize energy usage and production, which then brings us to the age of abundance. Now, in early nuclear, we were a little bit naive, and we thought electricity would be too cheap to meter, okay? We're probably not going to get to that. But that's the kind of thinking for abundance that we want to do. Think about housing: it's very expensive to build, but what if it was robots building it 24 hours a day? Then, okay, you've got carpenters not working, so that's a negative, but you've got low-cost, very custom, beautiful housing. That would be another example. So I think in each industry we're gonna see much lower costs for doing things, and then incredible amounts of inventive energy, because maybe we don't need to build a house that way, maybe you just print it in some giant 3D printer. So there'll be tech revolution in that way too, which again would be like nuclear fusion.
Speaker 3:
[51:19] I'm gonna shift to another subject, because I think this is something that people frequently misunderstand. Why does so much of this innovation happen in Silicon Valley? What are the things that make Silicon Valley kind of a unique creator of these kinds of technologies? Why is half of the NASDAQ within 30 miles of where we're currently sitting?
Speaker 1:
[51:42] Well, if you look at the financial city of London, or you look at New York, or you look at Detroit for cars, this is not something specific about tech. You get very positive reinforcement when you've got a whole lot of talented people who can switch jobs without having to move, and then they move the ideas. In the US, we don't have very high protection against the ideas walking out the door. That can be very hard for a given company, because they feel like, my secrets spread. But it makes it very productive for the economy and for the ecosystem that those ideas do spread. So I think of liquidity, employees changing jobs, as probably the key ingredient, helped by a combination of things like LinkedIn. I wish we had non-employer-based healthcare; if healthcare were provided some other way than through employers, you'd have more liquidity. The less we do non-competes, the better. It's one of the things I think the Biden FTC had right: making it easier to compete with existing companies by eliminating non-competes. So that liquidity of movement, I think, is the most important thing.
Speaker 3:
[52:59] So, maybe a last question before we get to rapid fire, on jobs. Obviously the general discourse is loss of jobs and other sorts of issues, but there's way insufficient discussion of wages. Because even though I think there will be a lot of creation of jobs and a lot of transformation of jobs, what will happen, a little bit like the earlier comment on radiology, is: okay, which of these jobs are super valuable? Compensation goes up. Which of these jobs become less valuable? Compensation goes down. Have you had any thoughts, because you're a systems thinker, on the wage effects, and on what countries, companies, industries, but specifically also individuals, should be doing to navigate what will be happening with wages?
Speaker 1:
[53:48] Well, let's see, you described some jobs as valuable, but teachers are very valuable and not paid very much.
Speaker 3:
[53:55] Yes.
Speaker 1:
[53:55] So I don't think pay follows value very much. It follows shortages, in demand and supply, okay? So the question for wages is, what jobs will be in shortage? And I think it's all those jobs that are emotional, that computers are not very good at, because we'll have a lot of need for those. And the jobs that are more administrative, those are going to have lower wages, because you're competing with a computer that could do that job. So again, where AI does a job well, the pay for a human doing it will be less. And where it's super hard for the AI, wages will stay high.
Speaker 2:
[54:39] I feel like a lot of people are saying the solution to that is the trades. Everyone's like, oh, become a plumber, an HVAC electrician, because those are in shortage right now. Obviously the big question there is how soon the robots will catch up. It matters whether that's gonna be five years or 20 years or 40 years, in terms of people getting wage premiums from the trades. What would you tell people in that respect?
Speaker 1:
[55:00] Okay, so let's look at electric cars, sorry, I should say self-driving cars. 2007 was the DARPA challenge, and it actually did work, self-driving cars in a limited lab setting. And now, 20 years later, it pretty much works, okay? But the percentage of miles driven self-driving by the machines, by high-end Teslas and by Waymos, has got to be less than 1 percent of global miles. That's after 20 years. With robots in the home, we're not even to the stage of that DARPA challenge; that is, we don't have a demonstration in the home, okay, that can do all these things. So think of it as: in 20 years, robots will do maybe 1 percent of the plumbing.
Speaker 2:
[55:46] At most, right.
Speaker 1:
[55:47] Okay, so it just takes a super long time to build and deploy, and then to get them to be lower cost and higher safety than the alternative. But I think over 50 years it will happen. So plumbing specifically is still going to be a great field for the next 20 years.
Speaker 3:
[56:04] Thanks. So now rapid fire unless there's another question.
Speaker 1:
[56:07] Let's do it.
Speaker 3:
[56:08] All right. Is there a movie, song or book that fills you with optimism for the future?
Speaker 1:
[56:14] There's a cool movie I watched recently, The Queen of Chess, and it's about an Eastern European family in the 1980s, a father who raises his three daughters to get out of poverty via chess. All three become grandmasters and get to a middle-class or better living through this dedication. It's a situation where there's no reason they should have had hope, being in 1980s Romania, and yet they did, and they worked towards a future that has been great. So I love a documentary like that.
Speaker 2:
[56:48] All right. I got to check that out. What is the question that you wish people would ask you more often?
Speaker 1:
[56:53] I think people focus a lot on business success, which I've been super fortunate on, and less on joy. And so I think the question would be, what gives you joy? How do you increase joy in your life?
Speaker 2:
[57:07] All right. How do you increase joy in your life? Let's hear it.
Speaker 1:
[57:10] I would say that I'm trying to do more on mindfulness and on appreciation and noticing. Much of my work life was relatively frantic, kind of lots of e-mails, short-burst stuff. And I think I could have integrated more mindfulness into that busy time. But certainly now that I'm retired, I can.
Speaker 3:
[57:32] So where do you see progress or momentum outside of your industry? Let's call that tech entertainment that inspires you.
Speaker 1:
[57:40] Definitely medical work. I mean, the amount of improvement in cancer therapies, in health, in understanding insulin resistance, it goes on and on. What we're slowly learning about the body, and then the brain, which is even slower, but we're making some progress on that.
Speaker 2:
[57:59] All right. Our final question. Can you leave us with a final thought on what is possible to achieve in the next 15 years if everything breaks humanity's way? And what's the first step to get there?
Speaker 1:
[58:10] Well, if everything breaks humanity's way, it's because AI has unleashed human flourishing, and we find the political mechanisms to share that within our country, across different income groups, and then between countries, so the world as a whole is enhanced. And a first step for that would be to realize how interconnected we are, between people in our country and then between countries, and to try to get away from win-lose to win-win.
Speaker 2:
[58:42] Totally. Amen. Thanks so much.
Speaker 3:
[58:45] Reid, always a pleasure.
Speaker 1:
[58:46] Indeed.
Speaker 3:
[58:47] Possible is produced by Pallet Media. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young. Possible is produced by Tenasi Dilos, Katie Sanders, Spencer Strasmore, Imo Zou, Trent Barboza, and Tafadzwa Nimarundwe.
Speaker 2:
[59:03] Special thanks to Surya Yalamanchili, Saida Sapiyeva, Ian Alice, Greg Beato, Parth Patil, and Ben Rallis.