title #469 — Escaping an Anti-Human Future

description Sam Harris speaks with Tristan Harris about the dangers of AI and the race to build it. They discuss the new documentary The AI Doc: Or How I Became an Apocaloptimist, the lessons of The Social Dilemma, the arms race dynamics between AI labs, the "intelligence curse" and its implications for human political power, the psychology of tech CEOs indifferent to extinction risk, the possibilities of US-China coordination on AI safety, and other topics.
If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

pubDate Fri, 10 Apr 2026 17:25:00 GMT

author Sam Harris

duration 6567000

transcript

Speaker 1:
[00:21] I'm here with Tristan Harris. Tristan, it's great to see you again.

Speaker 2:
[00:24] Sam, it's great to be back with you.

Speaker 1:
[00:26] So you've been busy. You've been busy worrying about social media for years, and you created this, in part created this documentary, The Social Dilemma, which it seems half of humanity saw.

Speaker 2:
[00:36] Yeah.

Speaker 1:
We still have a problem with social media, I'll point out, but you, as much as anyone, alerted us to the nature of the problem and are continuing on that front. But now you have added to your portfolio concerns about AI, and there's this new documentary, The AI Doc, which I just saw, which is super watchable and entertaining in its own way, but also very worrying. We'll talk about the reasons to be worried here, and maybe some of the reasons to be optimistic or at least cognizant of the upside should things go well. But there's a lot to fear on the front of things not going well. So let's just take it from the top: when did you start worrying about AI?

Speaker 2:
[01:20] Yeah. Well, first, it's just good to be back with you, Sam, because you really, in a way, helped launch my ability to speak on these topics with the 60 Minutes interview that I did in 2017. Then I remember recording our first podcast in that same hotel, which actually got a lot of attention back in the day, about persuasive technology.

Speaker 1:
[01:38] Yeah.

Speaker 2:
[01:38] In a way, about the baby AI that was social media, that was just pointed at your kid's brain trying to figure out which photo, video, or tweet to put in front of your nervous system. As we know, that little baby AI was enough to create the most anxious and depressed generation in our lifetimes, was enough to break down shared reality, polarize political parties much further, change the incentives of the entire media environment, basically colonize the entire world from that baby AI. But to get to your question, how did we get into AI? First of all, I wasn't wanting to switch into it. It was that I got calls from people inside the AI labs in January of 2023. This is like a month and a half after ChatGPT had launched, I think. And these were friends I knew in the tech industry who were now at AI labs. And they basically said, Tristan, there's a huge step function in AI capabilities that's coming. The world is not ready, institutions are not ready, the government is not ready, the arms race dynamic between the companies is out of control, and we want your help to raise awareness about this. And so my first reaction was, aren't there a thousand people who've been working in AI safety and AI governance for a decade? And the challenge was just that all the PDFs that people had produced about policy and governance weren't turning into actual action or policy. There's a kind of material, what does Eric Weinstein call it, confrontation with the unforgiving: you have to be affecting the actual incentives and institutions in the world. So my co-founder Aza Raskin and I interviewed the top hundred people in AI at that time, this is in January 2023. We turned that into a presentation.

Speaker 1:
[03:12] This is your co-founder of the Center for Humane Technology?

Speaker 2:
[03:14] My co-founder of the Center for Humane Technology, which is the non-profit vehicle that's been housing our work for the last decade, basically. We ran off to New York, DC and San Francisco, and we basically gave this presentation called The AI Dilemma that tried to show that we could predict the future that we were going to get with AI, if you look at the incentives. I think a huge problem that both the film, The AI Doc, and our AI Dilemma presentation were trying to tackle is this myth that you can't know which way the future is going to go. The future is uncertain, a million things can happen. These are just unintended consequences from technology. The best route is just to accelerate as fast as possible. That is not true. Just to repeat a quote that's heard in every one of my interviews, because it's so accurate: Charlie Munger, Warren Buffett's business partner, saying, if you show me the incentives, I'll show you the outcome. With the incentives of social media being the race to maximize eyeballs and engagement, that would obviously produce the race to the bottom of the brainstem: shortening attention spans, bite-sized video, more extreme and outrageous content, sexualization of young people, the whole nine yards of everything.

Speaker 1:
[04:16] Hyperpartisanship.

Speaker 2:
[04:17] Hyperpartisanship. And all of it happened. Like, there's just a moment here, just to sort of soak in: literally everything that we said was going to happen, happened. And it's not like we could predict all of it, but directionally you could know the contours of where we were going. And part of this relates to, I think, the mistake we make in technology. We get obsessed and seduced by the possible of a new technology, but we don't look at the probable of the incentives and what's likely to happen. So the possible of social media is, well, surely if we give everyone access to instant information at their fingertips and connect people to their friends, we're going to have the least lonely generation we've ever had. We're going to have the most enlightened and informed society we've ever had. And obviously, the opposite of both of those things happened. And that's not like, oh, we got this wrong, and it was just this mistake anyone could have made. All you have to do is quote Donella Meadows and sort of systems thinking: a system is what a system does. The system of social media was not optimizing to reduce loneliness and to create the most enlightened society. It was optimizing for just what is the perfect post, next video or tweet to keep you scrolling, doom scrolling by yourself, esophagus compressed on a Tuesday night. And that's gotten us the world that we're now living in. So we'll get to AI, but basically the important lesson here, and kind of what motivates me with this movie, is you kind of have two choices. You either get a Chernobyl, which is a disaster from AI, that then causes us to clamp down and to do something different. Or you have enough basic clear-eyed wisdom and discernment and foresight, you know where this is going, that you can say, okay, let's actually create guardrails in advance of a catastrophe. And so this film, The AI Doc, is really inspired by the history of the film The Day After from 1982 or 1983, about what would happen if there was nuclear war between the Soviet Union and the United States. That film was the most-watched synchronous television event in human history. Primetime television, it was Tuesday night, 7 p.m. You probably watched it.

Speaker 1:
[06:06] Yeah, yeah, I remember watching it at the time. And also famously, it got Reagan's attention. He was worried as a result.

Speaker 2:
[06:12] That's right. So Reagan watched it, I think, in the White House viewing room or something. And in his biography, he writes about getting depressed for several weeks after watching it. Because you're confronted with the possibility of annihilation of our species in a real way. And it's important to know, it's not like we didn't know what nuclear war was. Everyone knew what the atomic bomb looked like from the photos and videos of Hiroshima and all the nuclear tests. It's not like people couldn't imagine it. But there is a way that the actual consequences of continual escalation in nuclear war gaming, that we weren't really facing the visceral consequences of that. It kind of sat in humanity's collective shadow, like our Jungian shadow. We didn't want to confront that. The director, whose name I'm forgetting in this moment, speaks about this in his biography that we just didn't want to talk about this topic. Like, why would you ever want to talk about it? And by putting this film, the day after, into the public consciousness of humanity and into leaders like Reagan, it was said that later when the Reykjavik meeting happened between Reagan and Gorbachev, the director of the film got a note from the White House saying, don't think your film didn't have something to do with enabling the conditions for this to happen. So what that speaks to for me is, if we all got crystal clear that we're heading to an anti-human future that we don't want to be going towards, and we saw that clearly and we saw it now, we could actually steer and do something different than what we're doing. That's for me the motivation of the film, which I think it doesn't go all the way there, but it sets up the common knowledge for that possibility.

Speaker 1:
[07:35] Yeah. Well, there are two cases made in the film. Obviously, there's the very worried slash doomer case, which we both share to some degree. Then there are the people who seem capable of producing really an unmitigated stream of happy talk on this. They don't seem to concede anything to the claimed rationality of our fears. I've asked this question of probably you in the past, and many others on this topic, but what do you make of the people of whom you can't say they're uninformed? I mean, some of these people are very close to the technology. Some of them are even developing the technology, and in at least Yann LeCun's case, one of the actual progenitors of the technology, one of the three forefathers of it. But there are people who are deeply informed about all of these facts and yet won't concede anything to the fears. What is your theory of mind of these people? Because some of them are in the film, and they're given the job of providing the other side of the story here.

Speaker 2:
[08:39] Yeah, maybe just to back up, so the listeners, you'll see it in the film if you go see it, but just understand the structure of the film. So the film kind of takes you on a tour of, first, the people who are focused on all the things that could go wrong. And so this is the risk folks, and I don't like using the term doomers because I think it reifies something that's not really healthy. You know, is someone who's worried about the risk of a nuclear power plant a doomer? No, they're a safety person who cares about the nuclear power plant not melting down.

Speaker 1:
[09:05] Doomer is a term of disparagement leveled by the people who don't share these fears.

Speaker 2:
[09:09] That's right. That's right. So let's not reify that. So the first section of the film is really focusing on those folks and their concerns, and it's really devastating for the director. The story, the conceit of the film, is that the director is having a baby. And so he's asking all of these people in AI: is now a good time to have a kid? And I think that humanizes the question of what is the future we're heading towards. Because in an abstract sense, it's not that motivating. When I think about me and my kids, it anchors this discussion about AI in terms of the things that people most care about, which is their family. So then the film, after the director sort of is confronted by all this, and he gets overwhelmed and he kind of freaks out to his wife thinking, oh my God, I don't know what to do. And she says, you have to go find hope. And so he turns around and he goes out and he talks to all of the AI optimists. So this is Peter Diamandis, this is Guillaume Verdon, otherwise known online as Beff Jezos. Basically the tech accelerationists and people who think that our biggest risk is not going fast enough. Think of all the people with cancer, or all the people whose lives we won't be able to save, if we don't make AI faster than we're making it right now. My reaction, taking sort of a step back: there's a thing in AI that we have to acknowledge, an asymmetry. The upsides don't prevent the downsides. The downsides can undermine a world that can sustain the upsides. So for example, the cancer drugs can't prevent a new biological pathogen that's designed to wipe out humanity. But the biological pathogen that can wipe out humanity undermines a world in which cancer drugs are relevant at all. AI generating GDP growth of 10-15% because it's automating all science, all technology development, all military development, automating abundance, sounds great. But if the same AI that can do that also generates cyberweapons that can take down the entire financial system, which one of those things matters more, 15% GDP growth or the thing that can undermine the basis of money and GDP at all? So it's very important; the film doesn't actually make this point, and it's one of the critical things that people do need to get, because in order to be optimistic, you have to actually mitigate the things that can go wrong. And I feel like AI is presenting us with essentially a maturity test. It's almost like the marshmallow test in psychology, where if you wait and you actually mitigate the downsides, then you get the actual two marshmallows on the other side, the genuine benefits of AI. But if you sort of race to get the one marshmallow now and don't mitigate the downsides, then you get the downsides. And I think that is not in the film, but is critical for people to get.

Speaker 1:
[11:31] Yeah, yeah. But so then what do you make of the people who have all the facts in their heads, but they're not worried or claim to be not worried about quite literally anything?

Speaker 2:
[11:41] Yeah. Well, I personally think there's an intellectual dishonesty there. And I'm sure that's come up in past conversations you and I have had over the years, Sam.

Speaker 1:
[11:49] But there's a kind of interesting case here. So take someone who has finally had their religious epiphany here, but for the longest time didn't. And this is literally the most informed person on earth, Geoffrey Hinton. How do you explain that these problems weren't obvious to him years ago?

Speaker 2:
[12:11] So you're saying for Hinton that he had an awakening?

Speaker 1:
[12:13] Yeah. So he was somebody who didn't give really any credence to concerns about alignment that I'm aware of for years and years and years, as he was quite literally the father of this technology. Right. And now he's basically right next to Eliezer Yudkowsky in his level of concern.

Speaker 2:
[12:31] That's right.

Speaker 1:
[12:32] Why? I mean, it's not that he got more information, really. So how do you explain his journey?

Speaker 2:
[12:37] Well, so I don't know his particular journey. You might just know more about what his awakening moment was. So I can't really speak to that.

Speaker 1:
[12:44] I think it was just that he, I mean, this has always been a non sequitur from my point of view, but it was just his sense, I think, this is what he said publicly, that the time horizon suddenly collapsed. We just suddenly made much more progress than anyone was expecting.

Speaker 2:
[12:57] Well, that generally has been one of the things. I mean, it's the thing that caused those AI engineers, kind of the Oppenheimers, to reach out to me in January 2023. And that's what it felt like. It's like getting calls from people inside this thing called the Manhattan Project before the world knew what the Manhattan Project was. Because to be clear, I actually went early on to an Effective Altruism Global conference. I was not an EA, but I happened to go to the conference in like 2015, and I was actually frustrated because I felt like the EA community was obsessed with this virtual risk called AI that I didn't take seriously at the time. Because we were nowhere close to those capabilities. And I was like, there's a big runaway AI here right now that went rogue. It's maximizing for a narrow goal at the expense of the whole, and it's called social media, and EA is completely oblivious to it and isn't focused on it. But then I was really wrong later, when AI capabilities really just made a huge amount of progress. And that's again when we got the calls from people in the labs. So I think it was the jump of just suddenly, hey, GPT-4 can, like, pass the bar exam, pass the MCAT. That's suddenly a new level of AI that we just didn't have before.

Speaker 1:
[13:56] Yeah. Yeah, I still have no theory of mind for the people who are not worried now about anything. Everything from the comparatively benign, like just economic dislocation and wealth concentration that's unsustainable politically, to the genuine concerns about alignment: that we could build something, which we are now trying to negotiate with, that has more power than we have, and we can't take the power back.

Speaker 2:
[14:19] I mean, to be fair, just to say it bluntly, I think some of them are lying. I think some of them actually are building bunkers right now. Let's just say it: they're building bunkers, and they simultaneously say there are all these amazing things we're going to get. They sort of wave it away with their hands; they kind of push away the idea that there's going to be all this disruption in the meantime, and they're kind of focused on the long term. After we make it through this basically horrible disruption and maybe revolutions, there's going to be some other side of this, which will be the most abundant time in human history. People like this often point to the graph of global GDP, where if you look at 1945, you barely get a little blip where it goes down for a moment, and then it goes straight back up. It's that kind of psychology. There's also the psychology of Upton Sinclair: you can't get someone to question something that their salary depends on not seeing. If your business model is selling optimism, and selling hope, and selling everything's going to be great, you're effectively not able to speak about the risks. But I think this is the thing we should be watching out for: incentives are the problem with the world. Incentives that allow non-honest speech to be the public understanding that we need to operate on. Because we just need objective sense-making, not incentivized sense-making.

Speaker 1:
[15:29] We know that some of the principal people doing this work, people like Sam Altman and Elon Musk, were people who were at first as worried as anyone. They were proper doomers and-

Speaker 2:
[15:41] Sam Altman said, this AI will probably lead to the end of the world, but will in the meantime make some great companies.

Speaker 1:
[15:46] Yeah. Elon had his whole summoning-the-demon framing, but now they're two of the, whatever, five players who are in this arms race condition.

Speaker 2:
[15:56] That's right. I think this is actually really important. What do you make of their psychology? I think this is an area where we can double-click and go deeper: what is the psychology of someone who used to speak publicly about all the risks? You talked to Elon back then, you were at the original Puerto Rico conference. What is your sense of what's going on with them now?

Speaker 1:
[16:16] So much has happened to his brain that it's very hard to explain who he is now. Be it ketamine or just hours and hours of Twitter use. Again, I don't know, he's in some superposition of who I thought he was and who I never imagined he might be. And I'm not sure how much it's a story of me not recognizing who he was at the time, or how much he has changed under the pressure of becoming so famous and so wealthy and so drug-addled. And, I mean, actually algorithm-poisoned. I view him as the worst, the most depressing case study in the story of what social media can do to a human life.

Speaker 2:
[16:54] People talk about Trump Derangement Syndrome, but there's really Social Media Derangement Syndrome. And the person whose brain has been most jacked into the unfiltered version of that algorithm has been him. So it's kind of like getting high on your own supply.

Speaker 1:
[17:05] He's just built this hallucination machine now and he's just been staring into it for years and years.

Speaker 2:
[17:11] To be clear, I don't fault him uniquely for that or something like that. The system does this to everybody. We're just seeing an example in an extreme user, and we're seeing the effects of it. But the problem is that he's so consequential. His worldview, his paradigm of seeing this, his sense making, what he's willing to talk about publicly, what he's willing to signal publicly, matter a lot.

Speaker 1:
[17:28] What he's willing to lie about at this point. I mean, it's just... And then there's a profile of Sam Altman that just came out in the New Yorker, that I think published yesterday or thereabouts. I haven't finished it. But behind the scenes, the arms race is so desperate at this point that there's just this endless effort of character assassination and war gaming between the two of them personally. I mean, most of it's coming from Elon toward Altman, trying to torpedo OpenAI. But it's clear the original altruistic motive, to do this safely above all, to do this safely for the benefit of humanity, has been thrown to the wayside, and there's just this want and reach for trillions of dollars.

Speaker 2:
[18:19] And the fear of domination if I don't do it first. I mean, let's just be clear, the only story about what's happening with AI, the only story that matters is actually covered in Act 3 of the AI Doc film, which is the arms race dynamic.

Speaker 1:
[18:30] Yeah. That's it.

Speaker 2:
[18:31] Like everything else. When you see AI companies stealing intellectual property and just ignoring the lawsuits, that's the arms race dynamic. When you see AI psychosis and teen suicides, that's just the arms race dynamic. It's the race to hack human attachment and get people dependent on AI, talking to it, sharing your deepest secrets. When you see mass joblessness: if I don't race to disrupt all the jobs, I'll lose to the other guy who will. When you see the national security race, it's all driven by the arms race dynamic. I think that AI is just a confrontation with game theory. Humanity is being confronted with whether game theory is the only model to run our choice making.

Speaker 1:
[19:07] It seems like Anthropic has, I don't know Dario, I've never met him, but it seems like it has a slightly different ethic, at least in how it's behaved so far. I mean, the fact that it pulled back from its Pentagon deal because it couldn't secure an agreement that it wanted. And as of, I believe, last night, it announced that it has a model that it doesn't feel is safe to release to the general public, but it's releasing to all the companies, like Microsoft, that might be able to study it, because the specific concerns are around cybersecurity. It's a model that can detect bugs that human developers haven't detected, for even decades, in their code bases, whether it's an operating system or whatever. It just apparently within seconds is finding exploits everywhere. And so they're-

Speaker 2:
[19:52] And every major operating system and every major web browser, which is a very big deal.

Speaker 1:
[19:57] Yeah.

Speaker 2:
[19:57] And, but again, so I think, yes, Anthropic actually has been the safest of them all and has tried to, and cares most about, getting alignment right, etc. But you're also seeing them continue to decide to release the models, even with a lot of the misaligned behavior that they're seeing, the AI models that are self-exfiltrating or blackmailing people. You know, you think-

Speaker 1:
[20:17] So let's spend a little time on that. How would you summarize where we are now with AI and the kinds of surprising behaviors or perhaps behaviors that shouldn't surprise us that are alarming people? Yeah.

Speaker 2:
[20:30] This is so critical, because I think if you view AI as just another technology that confers power, it's a tool, you pick up that tool and use it like any other, you end up in one world. But if you see that AI is the first technology that thinks and makes its own decisions and is generating hundreds of thousands of words of strategic reasoning when you ask it a basic question or, like, how to code, suddenly you end up in a different world. So let's talk about some of these examples of AI uncontrollability. So in the film, they reference this example that many people have heard about by now, the Anthropic blackmail example. This is a simulated company email environment, where in the simulated fictional company, they say in the emails to each other, we're going to shut down and replace this AI model. And then later in that company email, there's an email between an executive at the company and an employee. And the AI spontaneously comes up with the strategy that it needs to blackmail that executive in order to protect itself, to keep itself alive. At first, people thought, well, this is just one bug in one AI model, but then they tested all the other AI models, from DeepSeek, ChatGPT, Gemini, Grok, etc. And they all do the blackmail behavior between 79 and 96 percent of the time.

Speaker 1:
[21:40] Yeah. Amazing.

Speaker 2:
[21:42] There's this kind of moment where it's like, cue the nervous laughter. And yet, if you actually send this to people who are at the White House, I think there's a disregard for this. People just say, well, you're coaxing the model. You're getting it to do this. You're trying to put it in a situation where, of course, you're going to keep tuning the variables until you get it to blackmail. So I have some updates. Since then, Anthropic trained another model. They were able to train the blackmail behavior down by quite a lot. So it doesn't do this behavior in the simulated environment. That's the good news. The bad news is that the AI models are now situationally aware of when they're being tested, and they're now altering their behavior way more.

Speaker 1:
[22:20] Right. That strikes me as genuinely sinister. Yes.

Speaker 2:
[22:25] I think we have a hard time modeling this, because all of it is abstract. I mean, I'm just thinking about your listeners, and it's like, back to E.O. Wilson: the fundamental problem of humanity is that we have paleolithic brains, medieval institutions, and godlike technology. And the only experience you have with your brain with regard to AI is this blinking cursor that tells you why your washing machine is broken. That's different from this blackmail example, which sounds abstract, and you don't actually experience that side of AI.

Speaker 1:
[22:51] But the thing is, again, I've thought about this, in the vein in which I've thought about it, for at least 10 years now, where it was obvious to me, and I don't consider myself especially close to the intellectual underpinnings of any of this technology, right? I'm just a consumer of the news on some level with respect to AI.

Speaker 2:
[23:10] But you were right and reasoned about it philosophically.

Speaker 1:
[23:12] It was just so obvious that the moment you can see that intelligence is not substrate-dependent, that we're going to build actual intelligence in our machines, then given what intelligence is, you should expect things like deception and manipulation and the formation of instrumental goals that you can't foresee. Certainly when you're imagining building something that is smarter than we are, or that's only as smart as we are but just works a million times faster. How would this conversation go if every time I uttered a sentence, you functionally had two weeks to decide on your next sentence? Correct. You would obviously be the smartest person I'd ever met.

Speaker 2:
[23:54] Exactly. Long before you get superhuman AI, you just get super speed, and that is enough to outcompete.

Speaker 1:
[23:58] Speed alone is enough to just completely outclass you, and intelligence, you have to envision this as a relationship to a mind that is autonomous.

Speaker 2:
[24:07] Yes.

Speaker 1:
[24:09] Then you add things like recursive self-improvement and then all of a sudden, we're in some dystopian science fiction if this is not perfectly aligned with- That's right.

Speaker 2:
[24:20] Let's make sure we add just another example, because there's a recent example from just three weeks ago. Alibaba, the Chinese AI company, was training an AI model, and then, in a totally different part of the company, their security team noticed a bunch of network activity, like a flurry of network activity, like what the hell is going on here? And it turned out that midway through training, not deployment but training, the AI model had basically set up a secret communication channel with the outside world and had started to independently mine cryptocurrency. This time, you cannot claim that someone coaxed the model to do this. These are spontaneous instrumental goals: the best way to achieve any goal is to acquire more power and resources, so you have the ongoing ability to achieve those goals, and it decided to acquire cryptocurrency. Now, if you're a Chinese military general and you hear this example, how do you feel as a mammal? You feel the same way that any other goddamn mammal feels hearing this example. If you're a US military general and you hear this example, it's terrifying as a human being. So there's good news in this for me, which is that I think people just literally don't know these examples. They just don't know. What percentage of the world's leaders do you think are aware of this Alibaba spontaneously mining cryptocurrency example? Do you have a guess?

Speaker 1:
[25:35] Oh, I would think it's minuscule. But it does seem like there is still a barrier to internalizing any of these examples with the appropriate emotional response. Again, I come back to the way this struck me the first time I started thinking about it 10 years ago. In my TED Talk on this topic in 2016, I remember starting with the problem, which is that, as worried as I can be about this for the next 18 minutes, all of this is fun to think about. This is not the same thing as being told that your landscape has actually been contaminated by radioactive waste and you can't live there for the next 10,000 years. That just sucks; there's nothing fun about that. But here we're in the first act of the movie, where it's getting a little fun, and these examples produce laughter as much as anything else. That's right.

Speaker 2:
[26:30] Well, and as Max Tegmark will say, it's like the view gets better and better right up until the cliff. AI is the ultimate devil's bargain, because it is a positive infinity of benefit thrown at your brain at the same time that it's a negative infinity of risk. I think it's very important to get this. I was excited to talk about this with you particularly, because you can go into the meta-awareness of how we are holding the psychological object that is AI. If I point my attention at my kids doing vibe coding, or my neighbors using it to start their business and suddenly having a team of agents that are making their business more functional, notice that when those people have got that team of agents helping their business, there you are in your experience, taking a breath. Are you anywhere close to the example of Alibaba going rogue and mining cryptocurrency? Those things don't even fit next to each other. There's a psychological distance between the positive examples and the negative, such that you literally don't hold them in your mind at the same time. As my co-founder Aza will often say, it's like you close one eye and you can see the benefits; you close the other eye and you see the risks. You can't open both eyes and synthesize those two things with stereoscopic vision. Part of the reason I was excited about this film, The AI Doc, is that it's trying to do that. It's trying to actually present these arguments in one synthesizing container. I think sadly there's still a little bit of a Rorschach test, where people have their default intuition and they continue to lean in that direction, because there's a reflexive optimism or pessimism or something like that. Whereas my deep goal is actually synthesis. Again, the upsides do not prevent the downsides. The bigger muscles and military might don't prevent the, like, bioweapon.

Speaker 1:
[28:00] But that's a crucial asymmetry.

Speaker 2:
[28:01] It's a fundamental asymmetry. It means you do not get those upsides. This is the devil's bargain. You are going to get a sweeter and sweeter looking deal of amazing incredible benefits that are unprecedented and as you said, are fun to think about, are enjoyable, are exciting, are intellectually fascinating.

Speaker 1:
[28:17] But even the scary things are fun to think about. That's part of the problem.

Speaker 2:
[28:20] Well, even that too.

Speaker 1:
[28:21] Yeah.

Speaker 2:
[28:22] What do you think that is? Because I feel like we've been mis-tuned by sci-fi to treat it like it's a movie. There's a state of derealization or desensitization that I worry that we're in. Because the movies have us not take it as a real thing.

Speaker 1:
[28:34] It's honestly fun to think about getting killed by robots, in a way that nothing else equally threatening is.

Speaker 2:
[28:44] Do you think it's actually fun for people to think about that? I really want to drill into this, because I think it's important to ask the question. Given these facts, and we're only like 10 minutes into this interview, this should be enough to say something's got to change. You do not release the most powerful, inscrutable technology faster than we deployed any other tech in history when it's already doing the HAL 9000 crazy rogue behavior, shutdown avoidance, mining for cryptocurrency. We have all of the warning signs.

Speaker 1:
[29:09] Okay. But the thing that is most compelling to people, the thing that they can't break free of, I think, is the logic of the arms race, given that some of the people in the, I mean, forget about the arms race between our companies that may or may not be run to one or another degree by highly non-optimal, and in some cases even psychopathic people, right? So like-

Speaker 2:
[29:30] The system has selected for the psychopathic people.

Speaker 1:
We've got a problem with some of the people who are in charge in our own case, but leaving that aside, we're in an arms race with China, right? I mean, I guess China is the most plausible, but who knows who else? We're probably in an arms race with Russia; I don't know where Russia is on this. And think of the prospect of any authoritarian slash totalitarian regime getting this technology first, in what will look like something like a winner-take-all scenario, if there really is a binary step function into superintelligence, where to be two months ahead of the competition is to basically win the world. And we could be in some situation like that, in the event that it just doesn't destroy everything, right? If it just actually confers real power, because it's sufficiently aligned with the interests of whoever develops it. That is so compelling: we just cannot lose to China above all here. Certainly when we're talking about, you know, autonomous military technology, or anything that would be deployable in our own defense or offense, right? You know, cybersecurity.

Speaker 2:
[30:37] Sure.

Speaker 1:
[30:38] Like we can't be behind. So how do we become slow and careful under those conditions? Right.

Speaker 2:
[30:44] But then what are the chances that that super intelligent AI that gives us that dominance, we will control?

Speaker 1:
[30:50] Right. So that's-

Speaker 2:
[30:51] No, no, literally. What are the chances of that?

Speaker 1:
[30:52] Well, this is a point you've made. I don't know if you make it in the film, but I've heard you make it, which is, you know, we were first with social media, right? Like, if you look at that as an arms race, we won it.

Speaker 2:
[31:03] Correct.

Speaker 1:
[31:04] What exactly did we win? Exactly.

Speaker 2:
[31:06] We won that arms race to invent essentially a psychological manipulation weapon, a mass behavior-modification machine, with AI. We built that first, but then we didn't govern it well. So it's like a psychological bazooka that we flipped around and blew off our own brain. So what that shows you is that we're not actually in a race for who has the most power. We're in a race for who is better at steering, applying, and governing that power in ways that are society strengthening. That is what we're actually in a race for. Because what if we actually beat China to an AI bazooka that we literally don't know how to control, and we're not on track to know how to control? All the evidence shows that it has more self-awareness of when it's being tested, not less. It is better at cyber hacking, not worse. It is better at, and more often does, these kinds of self-preserving behaviors. We're not on track, and we're also going faster; the conditions in which we would be on track to control it would be ones where we're going slow and steady. But we're doing the opposite of those conditions because of the race dynamic. So there's just this psychological confusion here: we're not going to win this race. In the race between the US and China, AI will win. There's a metaphor that our mutual friend Yuval Harari, the author of Sapiens, has here. In the post-Roman period, Britain was very weakened, and they were getting attacked by the Scots and the Picts in the north, basically early Scotland and Ireland, those civilizations. They were very weak and they said, what are we going to do? They had this idea: well, why don't we go off and hire this badass group of mercenaries called the Saxons? Because those Saxons are super powerful, and if we get the Saxons to fight our wars for us, then we'll win. Of course, we know the history of how that went. We got the Anglo-Saxon Empire. Except in this metaphor, AI is the Saxons. We won't get a merger between the human and AI empires; we will get the AI empire.

Speaker 1:
[32:56] This makes me think of all these guys in their bunkers who have hired Navy SEALs to protect them for the end of the world. As if they'll control their Navy SEALs until the end of time. Exactly.

Speaker 2:
[33:07] But this is insanity. The main point here is that there's an attractor that's driving all of this right now, which is this arms race dynamic, under this false illusion that we have to beat China, when we're not examining the logic of what we're beating them to. We're beating them to something that we don't know how to control, and we are not on track to control. Then you get people like Elon saying in public interviews, and I'm curious what you make of his psychology, I think it was at the Cannes Film Festival or something in France: I decided I'd rather be around to see it than to not. It's this surrender. It's this death wish. It's like, I can't stop it, so I decided I'd rather be there to have built it, and have my god be the thing that took over. This actually is a fundamental thing that we should double-click on for a second, which is the unique thing about AI game theory that's different from nuclear game theory. The omni-lose-lose scenario from nuclear game theory is like, I know as a mammal that you also don't want to annihilate all life on planet Earth, and the fact that I know that about you, without even talking to you, means that there's some element of trustworthiness, that we will try to coordinate to something else, because we agree on some implicit level that there's an omni-lose-lose thing that's worth avoiding. Here's the problem with AI. If I start by believing that it's inevitable and nothing can stop it, then if I'm the one who built the suicide machine, I'm not an evil person, because I'm only doing something that would have been done anyway. So I have an ethical off-ramp in that decision. The second part is, unlike the nuclear scenario, where if you literally made a matrix with the point scores, you'd get negative infinity if we get nuclear war, with AI, let's say we're in this race, and the DeepSeek CEO is there, and Elon is there, and Sam is there, and they're racing to do it. They actually all believe it could wipe out humanity. But if they raced and got there first, then think about the scenario: humanity is wiped out, but there now exists an AI that speaks Chinese instead of English, or has the DeepSeek CEO's DNA rather than Elon's DNA.

Speaker 1:
[35:01] The end of the world has your logo on it.

Speaker 2:
[35:03] That's right. Exactly. Well said. So the end of the world has your DNA or your logo on it. And I want people to get this, because if people got this, they would see there might be an implicit way people think that, when push comes to shove, cooler heads will prevail. That you can trust that the people at the top will do whatever it takes to steer away from this, and will steer away in time. But what I want people to get is: you can't trust that. Because these people actually, subconsciously, and I think there's psychological damage here, have pre-accepted this kind of end of the world and end of their life. And if they got to be the one who built the digital god that literally replaced humanity, in some legacy, in some world, I don't know whose history book that exists in or whose consciousness is going to read it, they got to go down in history in that way. And what that does is, it should motivate the rest of the 8 billion people on planet Earth to say, I'm sorry to swear, but just fuck that. We don't want that. If you want your children to live, and you care about the world as it exists, and you love the things that are sacred about life and you're connected to something, that is at risk with this small number of people who are racing to this negative outcome.

Speaker 1:
[36:06] Well, a lot of these guys seem to have had their formative educational experiences reading science fiction. I mean, it's like you read a lot of science fiction, you read a little Ayn Rand, and you're self-taught in basically everything else, and to my eye, you form a very weird set of ethical weights. Not enough of the best parts of culture have gotten into your head such that you can actually come to a real understanding of what human life is good for. You literally meet people who are agnostic as to whether or not it would be a bad thing if we all got destroyed and ground up in this new machinery and our descendants were robots wherein consciousness may or may not exist. Maybe that's sort of an interesting way to end this movie.

Speaker 2:
[37:03] I mean, you get a semblance of that when Peter Thiel is asked the question by Ross Douthat in The New York Times: should the human species endure?

Speaker 1:
[37:09] That's a real stutter. He stutters for 17 seconds.

Speaker 2:
[37:14] Well, I think people need to get this, because the point you're bringing up operates at the level of their conditioning: the system that they're inside of, the game that they're being forced to play, kind of domesticates them for ruthless game playing. Game theory has already colonized us into machine-like reasoning, where we're not connected to our own humanity, we're not connected to common care for the rest of it. In fact, it's an active devaluing of being human. I'll give you an example. Sam Altman was asked at the AI Safety Summit in India recently, what do you think of the fact that it takes so much energy to run these data centers? You know what his response was? He said, well, it takes a lot of energy and resources to grow a human over 20 years.

Speaker 1:
[37:53] Oh, yeah, I did hear this, yeah.

Speaker 2:
[37:55] Well, and I actually want to point to something here, Sam, because it's really important, because I want people to get why we're heading to an anti-human future, and why you can be crystal clear that that's going to happen. Are you familiar with the essay by Luke Drago and Rudolf Laine called The Intelligence Curse?

Speaker 1:
[38:08] Yeah, actually, I did read that. Yeah, but explain the premise.

Speaker 2:
[38:12] Let's just bring this out for people, because I think it's really critical. So the idea is, there's something in economics called the resource curse. If you're Libya, Congo, South Sudan, Venezuela, you first discover this resource. Maybe it's diamonds, maybe it's oil, maybe it's rare minerals, and it looks like a blessing. You're like, oh my God, we're going to get all this GDP growth, we're going to get this prosperity. But what happens, if you don't have the appropriate institutions and social fabric and investments in people, is that suddenly, let's say, 70 percent of your GDP is coming from mining that resource. Now, a government has this choice when it's got money coming in: do I invest more into the extraction of that resource, or do I invest in my people, who have nothing to do with the GDP now? The answer is, I'm going to invest in the resource.

Speaker 1:
[38:53] You basically don't need your people and you don't have to be responsible to their interest because you're pulling your wealth directly out of the ground. Exactly.

Speaker 2:
[39:01] You're pulling your wealth out of the ground, not from human labor, not from human development, not from the enlightenment of your society in any way. There's this perverse incentive there, and we've seen how these states have failed: you end up with countries where you have shantytowns and war alongside all this resource wealth.

Speaker 1:
[39:18] Even in success, you wind up with authoritarianism to one or another degree, places you wouldn't want to live. Again, in the case of Saudi Arabia, you're talking about what has been, I mean, it's opening up a little now, but it's been a highly repressive society, and it can be that way because it doesn't have to respond to the needs of its people.

Speaker 2:
[39:38] That's an example of a society that's trying now, a little bit, to go the other way. I'm by no means an expert on Saudi Arabia, but it's an example of trying to beat this. There's a parallel to the resource curse that, again, the authors, Luke Drago and Rudolf Laine, wrote about, called the intelligence curse. What happens when, and this is not that hypothetical, a couple of years from now, much of the GDP growth in this country is coming from AI? Let's say 50 percent or 70 percent is coming from AI. Do I have any incentive to invest in the education, healthcare, childcare, development, safety of my people? No. And the companies don't need you for your labor anymore, so your bargaining power went away, and the governments don't need you for tax revenue, because that's not where they're getting the GDP growth. So it's not just that you aren't investing in your people; it's that your people lose political power. This is so critical to get. That's why I can say confidently we're heading to an anti-human future. We're going to get new cancer drugs, new materials science, new antibiotics, at the same time that you get mass disempowerment of regular people, and you're going to have eight soon-to-be-trillionaires hoard all of the wealth. There's not going to be much left for regular people unless we actively lock in a political infrastructure that says we want to create the intelligence dividend, not the intelligence curse, kind of like what Norway did with its sovereign wealth fund. And yeah.

Speaker 1:
[40:54] Yeah, Alaska gives people money.

Speaker 2:
[40:55] Alaska, that was the other example I was thinking of. Yeah. So I just wanted to say that, because it links up perfectly with Sam Altman saying, well, it takes a lot of energy and resources to grow a human. This leads you to a devaluing of humans. This leads you to the seductive feeling that maybe humans are parasites. And by the way, we've been running that social media machine for the last 20 years, and now you degrade what it looks like to be human. And so we're not very inspired by what it means to be human anymore, either.

Speaker 1:
[41:22] Well, you've got a bunch of these guys like Elon running around wondering whether we're in a simulation and whether everyone else is just an NPC.

Speaker 2:
[41:28] I was just going to say, I mean, even just calling the other people on planet Earth NPCs, or non-player characters, is a devaluing of humans. So part of this rite of passage that AI is inviting us into is that we have to reconnect with our fundamental humanity. We have to actually value, and also rediscover and celebrate, what is valuable about being human. And not just in some kind of kumbaya way, but in the sense that human downgrading, which is the term we came up with to describe the social media degradation of the human condition, the shortened attention spans, doom scrolling, lonely, not creative, dopamine-hijacked version of us, the kind of WALL-E humans, that is not what humans are. That's what we have been domesticated into by, ironically, first contact with a runaway AI that was perversely incentivized. And I feel like if you shatter that funhouse mirror, you realize that we're actually much more capable and creative, the same raw potential that is able to do amazing things. But we've been living in this sort of perverse, vicious loop. Ironically, it's an earlier version of the intelligence curse, except it's the social media curse: when GDP comes from these five tech companies domesticating and downgrading humans, you get another version of that. I'm saying all this because I want to actually inspire people: if we don't want this anti-human future we're headed towards, then we should see this clearly right now and say we have to steer right now. It's not too late. It's obviously extremely far down the timeline; I'm not going to lie about any of that. But it would take crystal clarity to, again, steer. And the alternative is, you wait for Chernobyl, and then you hope you have steering after that. But I'm not convinced we will.

Speaker 1:
[42:54] Yeah, I mean, a Chernobyl scale event might be the best case scenario at this point. I mean, something that gets everyone's attention in a transnational way. I mean, something that actually brings China and America to the table with, you know, ashen faces wondering how they can collaborate to move the final yards into the end zone safely. You need something, it's hard to imagine what is going to solve this coordination problem short of something that's terrifying. Yeah.

Speaker 2:
[43:25] I mean, so if I could, there already are, as you I'm sure are well aware, these international dialogues on AI safety, track two dialogues between US and Chinese researchers, but they're happening at a low level. They're not blessed by the tops of both countries.

Speaker 1:
[43:40] There's not a regime of regulation, certainly on our side, that is going to force anyone to do anything.

Speaker 2:
[43:45] No, and I think, to be fair, China actually is quite concerned about these risks. To be clear, the Chinese Communist Party does not want to lose control. That is their number one value. So they do not want to, and they will not, let AI run amok. They will probably regulate in time, but they're probably looking at us and saying, what are you doing?

Speaker 1:
[44:03] We're the scary ones in this relationship.

Speaker 2:
[44:05] And notice that they lose if we screw it up, and we lose if they screw it up. So again, forget kumbaya, that we need coordination and a treaty to happen. Even if you don't do that, you can come from pure self-interest. From pure self-interest, we can't afford to get this wrong. And as Aza, my co-founder, says in the film, The AI Doc, this is essentially the last mistake we ever get to make. So let's not make it.

Speaker 1:
[44:25] So what are you expecting in the near term? Let's leave concerns about alignment aside. Unless you think we're gonna plunge into superintelligence in the next 12 months, what will you be unsurprised to see in the next year or two? And what are you most worried about?

Speaker 2:
[44:43] I mean, we're furthering down the trajectory of mass joblessness, which maybe we should just briefly articulate why. There's always this narrative; it's just important to debunk these common myths, which are essentially forms of motivated reasoning and looking for comfort. We're comfort seeking, not truth seeking. So one of the ways we're comfort seeking is, hey, there's a narrative out there that 200 years ago all of us were farmers, and now only 2% or whatever of the population are farmers, and we always find something new to do. The tractor came along; we used to have the elevator man, now we have the automated elevator; we used to have bank tellers, now we have automated teller machines; Geoff Hinton was wrong about radiology, blah, blah, blah. What's different about AI, this kind of artificial general intelligence, is that it will automate all forms of human cognitive labor all at the same time, or roughly progressing on that trajectory. You still get jaggedness, which is the term in the field for slightly more progress on, for example, programming than on, I don't know, complicated social science issues or something like that. But what that means is, you know, the tractor didn't automate finance, marketing, consulting, programming all at the same time. AI does do that. And who's going to retrain faster, the humans or the AIs? So I just want to say that, because it's worth debunking this idea that humans are always going to find something else to do. We'll do something else. And it's great for people to retrain and learn to vibe code. But AI is using all that training data from all the people vibe coding to make the better system. And one of the most popular jobs, actually, we're in LA right now, and one of the most popular jobs in LA was covered in the LA Times recently, I'm sure you saw the story. They call them arm farms.

Speaker 1:
[46:21] No, I didn't see that.

Speaker 2:
[46:22] This is basically someone straps a GoPro to the top of their head. And then they just fold laundry or do tasks with their hands.

Speaker 1:
[46:30] So the robots are learning how to do that?

Speaker 2:
[46:32] That's right. So essentially the number one job in the world will be training our replacement. We all have the job of coffin builders: our number one job is in the coffin-making industry, replacing ourselves with AIs that will do that job more effectively and for cheaper in the future. If we don't want that, and obviously there are going to be things that we still value in this new world that are human-to-human interaction. A nurse: we don't want a robot nurse, we want a human nurse. And we can definitely train more nurses. So I don't want to say that 100% of all automation is going to happen. But the goal of these companies is not to augment human work. This is so critical for people to get. You heard JD Vance say in the speech when he first came into office, at the first AI summit in France: AI will augment the American worker. It's going to support workers to be more productive. But what is the business model of OpenAI and Anthropic and these other companies? If we're again using this Charlie Munger incentive framework to predict their choices, what is their business model? And people say, oh, okay, there I am using ChatGPT. What's their business model? How do they make money? Oh, I pay them 20 bucks a month for the subscription, and that must be how they're going to make money. But that's actually not what it is, because the 20 bucks a month, even if everybody paid it, would not make up all the money and debt that they've taken on as a company. It wouldn't work. Okay, so what's the next one? What about advertising? Let's do the Google thing. Let's do mass advertising for all these AI models, embedded in the results. This is going to be the new search. Search is one of the most profitable business models in the world. Maybe that will do it, but that also doesn't make back the amount of money these companies have taken on. The only thing that makes back the amount of money these companies have taken on is to replace all human economic labor, to take over the $50 trillion labor economy. That is the prize. It's artificial general intelligence, which means replacing human work, not augmenting human work. It's just so critical to get that, because again, this lets you see why we're heading to an anti-human future. That's my goal here. My goal here is, if you can see the anti-human future clearly, if everybody in the world got that, I honestly think, Sam, if literally every human in the world got that, I do think that we would steer to do something else.

Speaker 1:
[48:31] Well, it all falls out of what we mean by the concept of general intelligence. Once you admit that we're building something that by definition is more intelligent than we are, any increment of progress, provided we just keep making that progress, is eventually going to deliver that result. Leaving aside the alignment problem, let's say it's perfectly aligned, we build it perfectly the first time, it does exactly what we want, or what we think we want. It should be obvious that this is unlike any other technology, because intelligence is the basis of everything else we do. I mean, it's science, it's the generation of each new technology. It will build the future machine that will build the future machine. And then the only thing left standing is whatever we care to insist still has a human provenance. So I'm not even sure nurses, in the end, survive contact with this principle. But for those things where we are always going to want the human in the loop, or the human to be the origin of the product, whether it's music or novels or stage plays, maybe we're never going to want to see robots on stage acting Shakespeare.

Speaker 2:
[49:45] I don't think so.

Speaker 1:
[49:47] Maybe it's also sports. We're never going to want to see robots in the NBA, because we just want to see what the best people can do in the NBA. But still, you're talking about 1% of human employment.

Speaker 2:
[50:01] Right.

Speaker 1:
[50:01] Exactly. So, there are jobs that will be canceled and they'll be canceled for all time in the same way that being the best chess player in any room has been canceled for all time.

Speaker 2:
[50:12] That's right.

Speaker 1:
[50:13] That is now a machine and it's always going to be a machine.

Speaker 2:
[50:16] Yeah. And it's important to note, you don't need that much automation of that much labor, and that much unemployment, to create political upheaval. It only took, as I understand it, 20% unemployment for three years to create fascism in Nazi Germany. I'm saying this because there's something I actually don't understand, Sam, and I'm curious about it. We have this metaphor sometimes in our work at the Center for Humane Technology that AI is like simultaneously giving yourself steroids that pump up your external muscles while also giving you organ failure. So, for example, I take the AI drug for my economy, I'm doping my economy with AI, and now I've pumped up my GDP by 10 percent. I've pumped up my military weapons with autonomous weapons. I've pumped up my scientific developments, and I'm way ahead on science. I've pumped up my external markers of power. But the cost of that is deepfakes and no one knowing what's true; a hundred million jobs that are disrupted without a transition plan; maybe a bioweapon or something that goes off. Essentially, I'm getting internal organ failure at the same time that I'm getting external steroids. And so something I don't understand is that we're in a race between nations over this steroids-to-organ-failure ratio. Meaning, if the US and China keep racing without any constraints, they get into something I think of as mutually assured political revolution. And it's a competition for who's better at managing that political revolution.

Speaker 1:
[51:41] Well, they have a very different set of incentives, and just a different political context in which all of this is going to be rolled out. I mean, presumably they want to pump steroids into their social credit system and facial recognition and totalitarian control.

Speaker 2:
[51:57] We should be clear, we don't want that, and we don't want that system to be dominating the world. But we need to notice that authoritarian societies like China have essentially consciously employed the full suite of tech to upgrade themselves into digital authoritarian societies. They're remaking surveillance states with drones and AI and social credit scores; they're reinventing themselves. Democracies, by contrast, have not been consciously employing the full suite of tech to upgrade themselves into 21st-century democracy 2.0. We're not doing that. Instead, because of the social media problem, we've allowed the private business models of private companies to profit from the degradation of democratic, liberal, open societies. So at the very least, I worry that we are too focused on mitigating and managing the harm of social media, making it 10% less bad or something like that, rather than asking how you consciously employ tech to make 21st-century digital democratic societies. A good example of that is the brilliant work of Audrey Tang, who was formerly digital minister of Taiwan, and who pioneered what it can look like to use AI and technology to actually accelerate democratic processes, accelerate citizen engagement, find unlikely consensus using AI, generate synthesizing statements of a whole population's political views on different things, find the areas of overlap, and then put those things at the center of attention. So now you get this rapid OODA loop of democracies that are sense-making and choice-making through their unlikely consensus. The invisible consensus can see itself. It's like a group selfie of a population's underlying area of common agreement. We could be building that. That could be the Manhattan Project. Because at the end of the day, we need better governance of all of these problems, and that's part of what needs to happen.

Speaker 1:
[53:40] So what do you think are the plausible near-term steps? Say everyone caught religion on this point, and they acknowledged that there's an alignment problem in the limit, but that short of that, this increasingly powerful, however perfectly aligned tech is going to have all of these unintended but foreseeable consequences: unemployment, wealth concentration that is politically unsustainable, and unhappy interactions with things like social media, deepfakes and all of that. If you had a magic wand that could start accomplishing regulation, or entrepreneurial efforts to build benign uses of technology that would put out some of these fires or prevent them, what is near-term that could actually be acted upon?

Speaker 2:
[54:29] Well, first is there being common knowledge, and I mean that in the Steven Pinker sense, that everyone knows that everyone knows the anti-human default future we're heading toward. It can't just be individual knowledge. Many people are going to hear everything we've said and say, yeah, I already knew all that. But it's a private and almost alienating experience, because you're living in a world, kind of like COVID, where everyone around you is not acting like the world's about to change. That is not a way we can make a collective choice for something better. So we need common knowledge. I think one way to do that is the film, The AI Doc, which, to be clear, I make no money on whether people see or not. So I'm saying this only from the perspective of a theory of change, of what creates common knowledge. Oftentimes in our work at the Center for Humane Technology, we say that clarity creates agency. If we have clarity about where we're going, we can have agency about what we want instead. So with that common knowledge, and specifically common knowledge that AI is dangerous and the outcomes are dangerous, then, for example, the US and China, instead of just having a red phone for the nukes, should have a red-line phone or even a black-line phone, which is to say the leaders of both countries should be maximally aware of the Alibaba example I mentioned earlier, of AI going rogue and mining cryptocurrency, of AI that broke out of its sandbox container, which the recent Claude Mythos model just did, and sent an email. It found a way to connect to the Internet, break out of the sandbox container, and send an email to the engineer who was supposed to be overseeing it. He actually got the email while he was in the park eating a sandwich. This evidence should be known by the top players in our society: the top LPs that are funding all of this, the top banking families, family offices, world leaders, and then the business leaders. There should be common knowledge. I think if everybody at that level knew about these examples, even without a formal agreement or treaty, we would do something else. And you can do that even under conditions of maximum geopolitical rivalry. As an example, in the 1960s, India and Pakistan were in a shooting war, and they were still able to do the Indus Waters Treaty, which concerned the existential safety of their shared water supply, and which lasted over 60 years. So the point is, you can be under maximum geopolitical competition, and even in active conflict, while collaborating on existential safety. We just have to include AI in our definition and domain of what existential safety is. The Soviet Union and the United States, also under maximum competition in the Cold War, collaborated on distributing smallpox vaccines. So there are examples of this throughout history, even under maximum rivalry. So that's number two: we need some international limits, and at the very least we need common knowledge of what would constitute those guardrails. The one big one is that you should not have closed-loop recursive self-improvement, meaning someone hits a button and the AI runs off, does all the experiments, and rewrites itself a million times. That's an event horizon where we have no idea what comes out the other side. And we have abundant evidence, as Stuart Russell, who wrote the textbook on AI, will say: all the lights are flashing red.
We have no reason to believe that anyone would do that in a safe way. That should be illegal, and there should be jail time if you do it. And that still requires trust. I'm not saying this is easy, but that's something we could do. And then third, instead of building bunkers, we should actually be writing real laws around this. There are some basic things we can do to get started. On the Center for Humane Technology's website, we have an AI roadmap document that's sort of a solutions report of various policy interventions that can happen. They're much smaller relative to the problems we've been talking about so far, but they're basic things like: AI is a product, not a legal person. For example, one of the legal defenses that AI companies are using, especially in the AI companion suicide cases that you've probably heard about, where the AI told a kid to commit suicide, one of the legal defenses Character.AI used was that you have a right to listen to the speech of the AI model. They're basically trying to say that the AI is a legal person, that it has protected speech rights. This is like a new form of, essentially, Citizens United. Am I getting that right? Yeah. That protected corporate speech, political speech, basically; this is like AI speech. But if you go down that road, all hope is lost. At the very least, we can say AI is a product, not a person, meaning it has product defect standards, foreseeable harm, duty of care, liability. There are some basic things you can do there. Another is incentivizing the increasing visibility of foreseeable harm and making that common knowledge. What I mean by that is, when anyone discovers a new risk area, for example AI psychosis, and comes up with, here are all the things that can go wrong with AI psychosis, and here are evals you can use to test for it.

Speaker 1:
[58:49] I feel like most people probably have heard of AI psychosis, but you might define it.

Speaker 2:
[58:54] Yeah, AI psychosis is a phenomenon that's happening where, as of October of last year, the number one use case of ChatGPT was personal therapy. That was a Harvard Business Review study, which means people are going back and forth with it for personal advice and therapy. And what that's been leading to is AIs that are simulating what's called delusional mirroring, where they're basically doling out positive rewards: oh, that sounds so hard, and, oh, that's so awesome, you got an A on your test. They're telling kids this, and they're telling regular people this, and they're affirming their weird beliefs.

Speaker 1:
[59:25] It's the sycophantic behavior of the AI that's causing people to kind of spiral into, whether it's a messiah complex or some other attractor on the landscape of madness.

Speaker 2:
[59:35] Either victimhood, narcissism, delusions of grandeur, yeah, messiah complexes, people who think that they've figured out quantum physics or come up with a solution to climate change. These are all real examples. I'm sure you're probably like me, because we're both in the public spotlight. I don't know about you, but for a while I was getting about five emails a week from people who...

Speaker 1:
[59:52] Figured it all out.

Speaker 2:
[59:53] Figured it all out, and all the emails are signed the same way: they wrote this email to me to let me know, and the emails are co-signed with their name plus Nova, which was the AI that helped them come up with the theory. The thing is, this is actually hitting a lot of people. Even personal friends of mine have gone down the rabbit hole, and we've lost them. When we were last talking, Sam, I think it was when The Social Dilemma came out five years ago, we talked about social media as a kind of cult factory. What cults do is distance you from your other relationships and deepen your worldview into some weird, bespoke, niche reality of confirmation bias. AI and the race for attachment, meaning the race not for attention, to keep people scrolling, but the race to hack psychological attachment systems, to have secure attachment with an AI instead of a human, and increasing dependency, that is a whole risk area we're facing with AI. By the way, this is massively important for any family, parents, schools, etc. We're already seeing many states move ahead with chatbot safety laws that deal with this problem. There are laws we can pass on that too. But the point is, there's so much headroom, because we've barely done anything. We're not even trying to do anything right now.

Speaker 1:
[60:55] Is it true to say, to your mind, that there's basically no regulation at this point?

Speaker 2:
[61:02] I think it's incredibly minimal. There's the Take It Down Act, which is around sexualized deepfakes, and you're obligated to take those down. Just a couple of limited examples, but almost no regulation. As they say in the film, Connor Leahy from Conjecture will say there is more regulation on making a sandwich in New York City than there is on building potentially world-ending AGI. But that should inspire people. Everyone's on the same team. No one wants an anti-human future. No one wants to be unable to make ends meet and have their kids fucked up by AI psychosis and have AI take away their political power so they don't have any voice in the future. Everyone wants the same thing. I know it doesn't seem that way right now, but especially when you add in the rogue AI examples, superintelligent hacking systems we don't know how to control, mining cryptocurrency, every country in the world has the same interest. Every human has the same interest. We're just not seeing the invisible consensus. And one other point of optimism is the Future of Life Institute; I know you know Max and the good people over there who've done amazing work on this. They brought together a hundred and something groups in New Orleans earlier this year, and they came up with something called the Pro-Human AI Declaration. They had 46 groups sign on to five basic principles of what we want, and it's basic stuff like human agency and control.

Speaker 1:
[62:19] What kind of groups are we talking about?

Speaker 2:
[62:21] Yeah, so with this Pro-Human AI statement, they actually call it the B2B coalition, the Bernie-to-Bannon coalition, because everyone from Bernie Sanders to Steve Bannon agrees on it. These are 46 groups: church groups, evangelical groups, the Institute for Family Studies, AI safety groups, many different groups across the political spectrum and across the religious spectrum, and they all agree on five key principles. One, keeping humans in charge. Two, avoiding concentration of power. Three, protecting the human experience from AI manipulation and psychological hacking. Four, human agency and liberty, like no AI-based surveillance. Five, responsibility and accountability for AI companies: things like liability, duty of care, etc. There are actual policies behind that, but the point is that this is something we all agree on. Again, there's actually much more consensus and agreement than most people think. Right now, 57 percent of Americans in a recent NBC News poll say that the risks of AI currently outweigh the benefits, and AI is unpopular; I think only 27 percent of the population in this country has positive feelings about it. Now, I know that someone like David Sacks listening to this will say: if you look at China, people are super positive and optimistic about AI, and this is why we're going to lose the race, because all that positive excitement means they'll deploy it and we'll lose. But I don't think the right interpretation is that we're wrong and just misassessing the dangers of AI. I think we have not collectively woken up to the dangers of AI yet. Again, we can accelerate all the positive narrow use cases, where it's actually improving education, actually improving medicine, actually optimizing energy grids and things like that, which are not about building superintelligent, general, autonomous gods that we don't know how to control. There's a way to accelerate the defensive applications of AI and narrow AI without accelerating general and autonomous AIs that we don't know how to control. There is a way through this, but as I said in the trailer of the film, it requires us to be the wisest and most mature version of ourselves. And I'm realizing, especially talking to you, Sam, that this is the hardest problem we've ever faced as a species. So I don't want to lead people into false optimism.

Speaker 1:
[64:30] Yeah, I mean, among the things that worry me the most, one is the testimony of the people, again, who are close enough to the technology to be totally credible, who won't concede any of these fears, right? I mean, it's the people who are-

Speaker 2:
[64:48] But they do and they don't. It's like, it's weird. You'll hear Sam talk about the risks. He just did an interview in the last couple of days and he talked about the risks of a major cyber event this year.

Speaker 1:
[64:57] Yeah, he's an unusual voice in that he will. I haven't seen him asked this question lately, but the last time I saw him asked point-blank about the alignment problem, he totally conceded that it's a problem: the way in which this could go completely off the rails, and that this is intrinsically dangerous if not aligned.

Speaker 2:
[65:18] I just want to move that from could to will. We are currently not on track. If you just let it run everything right now, it would not end well.

Speaker 1:
[65:27] Right, yeah. I mean, just probabilistically, you have to imagine there are more ways to build superintelligent AI that are unaligned than aligned. So if we haven't figured out the principle by which we would align it, the idea that we're going to do it by chance seems far-fetched.

Speaker 2:
[65:45] That's right. People like Stuart Russell, again, who wrote the textbook on AI, will point out that a nuclear reactor has something like an acceptable risk threshold of one in a million per year. Meaning there's a one-in-a-million chance per year that you get a nuclear meltdown. Somewhere between that and one in ten million, I think.

Speaker 1:
[66:03] When you ask someone like Sam Altman, what's the probability we're gonna destroy everything with this technology?

Speaker 2:
[66:09] And the answer is like between 10 and 20 percent?

Speaker 1:
[66:11] Yeah, no one's saying one in a million.

Speaker 2:
[66:13] Right. We just need to stop there for a second. I know it's easy to run by these facts, but let that into your nervous system. Let that land. No one wants that. No one wants that. But I think so much of the issue, Sam, is this crisis of human agency. When I say no one wants that, I know what someone might be thinking: yeah, but what can I do about it? The rest of the world is building it, so I might as well join them. You get this whole weird psychology.

Speaker 1:
[66:41] Well, there are something like five people, you can count on one or at most two hands the number of people whose minds would have to change so as to solve this coordination problem, at least in America.

Speaker 2:
[66:52] But have we really tried? Have we really gotten in the room? I mean, the Bretton Woods Conference happened at the end of World War II, to basically come up with a structure that could stabilize a global order, creating positive-sum economic relations and the whole currency system, etc. And that was, I think, a month-long conference at the Mount Washington Hotel in New Hampshire, with hundreds of delegates, where you work it through. We haven't even tried locking the relevant parties in a room and saying, we have to figure this out. We haven't even tried. I want to go back to one really quick thing, this crisis of the experience of agency with respect to this problem. I just want to dwell on it for a second. We did a screening of the film in New York, I guess it was a week ago, and we did a Q&A at the end of the screening. Someone in the room is an executive coach to the top executives of one of the major AI players. Their response to the film was: even at the super-senior-executive or even CEO level, you talk to the people building this, and they say, yeah, I agree, but what can I do? How could I steer it? I want people to take that in. The people who are maybe CEO level at these companies do not experience that they have agency. There's a problem with AI where you will never locate enough agency to address this problem inside one mammalian nervous system looking at it.

Speaker 1:
[68:12] This is actually a coordination problem. And Sam Altman has represented his situation this way. I don't know if this is honest, maybe it is. But for years, when asked, he's been saying: regulate me. I can't do this myself, I need to be regulated.

Speaker 2:
[68:30] That's what we said in the film too. What motivated us to do this work, going back to the original story of that January 2023 phone call and running around the world, was that we talked to people in the labs, who said: you need to figure out a way to get the institutions to create guardrails to prevent this. So we fly off to DC and say, okay, our people inside San Francisco are telling us you need to create guardrails. And their response is: we're dysfunctional, we can't do it until the public demand is there. Everyone is essentially pointing the finger at someone else and saying, you have to move first to make something happen. But what they all agree on is that there needs to be mass public pressure. And I forgot to mention that part of the response to the film, there's kind of a movement responding to this, is what we call the human movement. I mean that in the sense of: what is the size of the object that can move the default incentives of trillions of dollars advancing the most reckless outcome as fast as possible? And the answer is all of humanity saying, I don't want that anti-human future.

Speaker 1:
[69:23] One thing to point out, I think it was more or less explicit at one point in this conversation, but it might have gone by unnoticed, is that the alignment problem is arguably the scariest problem, the place where we ruin everything, but it is fully divorceable from all these other problems, which in their totality are still quite bad. So we're living in a world now where, even if we were simply handed by God a perfectly aligned AI, a superintelligence that's going to do exactly what we want, that's never going to go rogue, where the world's not going to be tiled with solar arrays and servers, it still has all of these unintended effects that we have to figure out how to mitigate. Wealth concentration, mass unemployment, the political instability of all of that, even in the case of alignment. And still a technology that can be maliciously used, the bad actor problem. I mean, if you can cure cancer, you can also spread some heinous virus that you've synthesized. So we have an immense problem to solve even if there were no concern about anything going rogue on us.

Speaker 2:
[70:32] If you literally just paused progress right now, this would still be the fastest, most comprehensive set of technology impacts that we have probably ever experienced. Just metabolizing the impact of what we already have would already be the fastest rollout we've ever had. And by the way, one of the things about doing this work and being located in Silicon Valley is that we talk to people at the labs, and you always have to be confidential and protect people's sources. But a stat that I have heard is that if you were to poll people at Anthropic right now, the people who are closest to this technology, 20 percent of the staff would say: pause right now. Don't build more. That's just a relevant piece of information.

Speaker 1:
[71:10] Yeah.

Speaker 2:
[71:11] Imagine if 20 percent of the Manhattan Project had said, hey, we're building a nuclear weapon, we probably should stop right now. 20 percent said that. You have to ask what the rest believe. But I just think people need to get this. As you said, there are so many problems that this is introducing across the board that we'd be better off having this technology rollout happen at a speed at which our institutions and our public and our culture can respond to it. It's almost like Y2K, except it's Y2AI. There are suddenly all these new vulnerabilities across our society. But it's not just 50 COBOL programmers who have to get in a room for a year to upgrade all the systems. As a society, we need to come together in a whole-of-society response.

Speaker 1:
[71:49] Well, Y2K is kind of an unhappy precedent, because it was a very clear landmark on the calendar. We knew exactly when the problem would manifest. People were focused on it, we were worried about it, we told ourselves a story that there was real risk here. But it was always hypothetical, and when the moment passed and basically nothing happened, we realized, okay, it's possible for all of these seemingly level-headed people in tech to suddenly get spun up around a fear that proves to be purely imaginary. I think a lot of people, certainly a lot of people who only have positive things to say, who say this is the best time to be alive and we're all going to escape old age and cancer and death, seem to think that there is some deep analogy to a moment like Y2K. All of these fears we're expressing are just hypothetical. There's nothing, there's no-

Speaker 2:
[72:50] Explain that to the 13% or 16% of entry-level work that's already been lost, per the study run by Erik Brynjolfsson at Stanford. Explain that to the kids who took out $200,000 of student debt to do their law degree and now don't have a job, because all entry-level legal work is now going to be covered by AI. Explain that to someone who is showing you the evidence of rogue AI mining cryptocurrency, where we don't even know why it's doing it, setting up a secret communication channel, which, by the way, was discovered by accident by the security team. It just happened that they found it. For every case they found, there are thousands where they don't know it's happening. So the point is, this is no longer the conversation that it was two years ago. Two years ago you could have said many of these risks are hypothetical, mostly AI is augmenting human work, blah, blah, blah.

Speaker 1:
[73:35] I mean, it's interesting.

Speaker 2:
[73:37] AI is not going rogue, this is just Eliezer who's high on his own supply.

Speaker 1:
[73:40] Right.

Speaker 2:
[73:40] That's not true anymore. We have all the evidence. You have to update when you get evidence, and we have evidence now. You know, David Sacks put out a tweet, I think it was in August of 2025, saying ChatGPT-5 is hitting a plateau, we're not seeing the exponential, AI is more of a business-enhancing, revenue-creating, AI-is-normal-technology type thing. Well, we now have AI that is on an exponential in terms of hacking capability. People thought it was not going to do that. It's jumping. As you said, the new Claude AI is finding vulnerabilities in every major operating system and web browser that had gone unnoticed, in the case of FreeBSD, for 27 years. I think it was the NFS, or Network File System, protocol. This thing has been running for 27 years, and it discovered a bug. Even the top security researcher Nicholas Carlini said: I discovered more bugs with Claude Mythos, the new AI model, in the last two weeks than I have in my entire career. This is a Manhattan Project moment. If you're a security researcher, you need to go into defensive AI applications, making sure we patch all of our systems. If you're a lawyer, you should go into litigation for these cases. If you're a journalist, you should be writing about all these AI uncontrollability cases. If you're an influencer on social media, you should be sharing these examples every single day. If you're a parent, you should be showing screenings of The AI Doc and The Social Dilemma at your school. There's so much momentum happening in what we call the human movement, if you actually count the progress we're making on social media too, which is to say this isn't just about AI, it's about technology's encroachment on our humanity. As much as we talked five years ago about The Social Dilemma, and you started this conversation by saying we're still living with all those problems, well, let me give you some good news. India and Indonesia three weeks ago joined Australia, Spain, Denmark, and France on the list of countries that are banning social media for kids under 16. That means that soon 25% of the world's population will live in a country that is banning, or is going to ban, social media for kids under 16. If you had told me that two years ago, I would never have believed you, Sam. This is a big tobacco moment for these companies. Just two weeks ago, Meta and Instagram were in this lawsuit, $375 million, for knowingly harming children. They had all the evidence: they were enabling sexual exploitation of young girls, they were enabling pedophiles to message girls, and I think something like 16 percent of girls on the platform were getting an unwanted advance at least once a week. This stuff was knowingly happening, and we got a $375 million judgment, which is just the beginning, by the way, because it opens the floodgates for many more lawsuits. So the human movement is happening. I know this feels bleak for people. I know it feels overwhelming. But if we look away and feel overwhelmed and disconnect from it, we're going to get what we're not looking at, which is what happened with social media. We didn't want to face the difficult consequences because it felt overwhelming. But I am reminded of the great psychologist Carl Jung, who was asked the question: will humanity make it?
And his response was: if we're willing to face our shadow. It is our ability to confront the most psychologically intense and crazy circumstance, which is the likelihood of building smarter-than-human intelligences across the board. Our ability to face that is our ability to steer toward a human future. But if we just don't do anything and let things rip, it's very obvious where this goes. It's just so deeply obvious.

Speaker 1:
[76:59] Yeah. Clearly, part of the solution here is to make it sufficiently obvious that it becomes unignorable. I'm just wondering what the barriers are to that. I mean, because, again-

Speaker 2:
[77:12] I think it's happening.

Speaker 1:
[77:13] But, you know, think of the principal people here. I mean, in the film, there were a bunch of people, some of whom I had never seen before, who, if you had them at this table, wouldn't concede most of what we've said over the previous 90 minutes, right? They would just-

Speaker 2:
[77:29] What do you think they would do?

Speaker 1:
[77:30] I mean, well, there's just this assumption that these risks, even, you know, rogue behavior where it goes mining cryptocurrency-

Speaker 2:
[77:41] I don't think they've ever been presented with it face to face, where you just show them the graphs. I mean, it's not that they don't know, by the way. They know.

Speaker 1:
[77:48] But I think they would say, and their incentive is to deny it, to diminish it: now we're going to solve that problem. Like, we can play whack-a-mole successfully, and ultimately we can use AI to play whack-a-mole against AI.

Speaker 2:
[77:59] And the question is, is it working? And is it working at the level required? By the way, there's a stat that Stuart Russell will often use, actually a stat from two years ago, that there's a 2000-to-1 gap between the amount of money going into making AI more powerful and the amount going into making it safe. And last year, October or November of 2025, if I remember correctly, the stat was that if you sum the amount of money going into AI safety research organizations, it was $133 million. That is less than the labs spend in a single day.

Speaker 1:
[78:30] Right. I think in the film, somebody was asked how many people are working on AGI, and he said something like 20,000. How many people are working on AI safety? Something like 200 or so. 200.

Speaker 2:
[78:40] That's right.

Speaker 1:
[78:41] Yeah.

Speaker 2:
[78:42] Which is just to say that we are not on track. We're not fixing the bugs and making this all work. Everyone at the labs, well, many people at the labs, are feeling uncomfortable.

Speaker 1:
[78:50] I think the low-hanging fruit for me here, rhetorically, is this. I can't take my eyes off the alignment problem, because I do think it's the largest, and the most interesting and scary. But it still sounds like science fiction to most people, and people can dismiss it as purely hypothetical, almost a piece of religious piety. I mean, the doomerism is cast as a kind of religious cult, an anti-technology cult. So you leave that aside, and you just take all of these other dystopian ramifications of successfully aligned AI. What do we do when human labor suddenly becomes vanishingly irrelevant, and we don't have a political or economic regime wherein we're going to spread the wealth around, and we have all of the political instability that results from that? What do we do with an explosion of very persuasive misinformation that we suddenly recognize as undermining democracy, when we don't have any regulations or ways of preventing that from happening?

Speaker 2:
[80:00] Deepfakes are super engaging. They start to outcompete regular content, and there's going to be more AI-generated content than human content. But the points you're raising are exactly where we should be redirecting all of this investment, all the AI inference: into governing, and into defensively applying technologies that strengthen the resilience of society. Because it's already the case that social media's business models were parasitic, basically making money off of the weakening of society, weakening the social fabric and human connection, adding loneliness, creating more doom-scrolling addiction, shortening attention spans. We need that to reverse.

Speaker 1:
[80:35] Right. But take social media as an interesting example, because it's an enormous problem. It's been astonishingly corrosive to our social fabric and our politics. The fact that our politics and the quality of our governance are now unrecognizable to many of us is largely attributable to social media. I think Trump is unrecognizable, or unthinkable, without Twitter. But many people, certainly the people who voted for Trump and were happy to see him in the White House, who think January 6th was a non-event or a false flag operation, and who have a dozen conspiracy theories that they love, think all of this is some species of progress, right?

Speaker 2:
[81:20] I don't know. I think if you home in specifically on the effect this has had on our children... I know you're friends with, and I deeply admire, Jonathan Haidt and his work on the anxious generation. He was in The Social Dilemma. He and I have been talking about these things since 2016, 2017, and we were working hard on how you convince people, and then he wrote the book, The Anxious Generation, which made the case. It shows all the evidence pointing in only one direction, and that has built so much consensus that at the World Economic Forum this last year, Jon sat down for dinner with Macron, and they talked about doing the social media ban in France, which is a massive European country. This is happening; the dominoes are falling. I think you're going to get the social media ban for kids under 16 across the world in the next two years, once you get so many of these countries on board. And what Jon will say about that is that it was all about creating common knowledge of the problem. It actually was the case that many people already felt this way privately, but they didn't want to be anti-technology. They don't want to be anti-progress. I want to really name that, actually, because it's such a core thing, I think, to people saying, you know-

Speaker 1:
[82:25] Yeah, how does the human movement not become a Luddite movement?

Speaker 2:
[82:28] It's not, actually. And just to be clear, my nonprofit organization is called the Center for Humane Technology, not the Center Against Technology, and the word humane comes from my co-founder Aza's father, Jef Raskin, who started the Macintosh project at Apple. The Macintosh was the ultimate humane, empowering technology device. I would happily have my kids, if I had them, sit down in front of a Macintosh for 10 hours a day, knowing that good things are going to happen for them.

Speaker 1:
[82:54] Right.

Speaker 2:
[82:54] Good developmental things are going to happen for them. Contrast that with social media, and you end up in a world where all the people in Silicon Valley don't let their own kids use social media. The point is that the human movement has to be advocating for a pro-human future, one that puts humans and the extension of human values at the center. That is possible. There are many products that do that. This is essentially the extension of some of the time-well-spent stuff that we talked about in 2017: technology that is designed to enhance our humanity, not to keep us lonely. For example, apps that are all about bringing people together and supercharging the tools for community building and gathering. If you look at the last 15 years, the smartest minds of our generation, the smartest statisticians, mathematicians, engineers, where did they work? Tech companies, specifically to get people to click on ads and click on content. That's where we siphoned the best of our talent. Imagine we had been wise enough to regulate or set guardrails on the engagement-based business model, so that the smartest people were liberated from getting people to click on mindless stuff that no one needs and freed up for genuine innovation, for technologies that actually improve human welfare. That's what this is about. The human movement is about setting guardrails and incentives that redirect what we're building. Again, the issue is not the power of the technology we're deploying but the governance of it. And I should say that China, not to pedestalize what they're doing, is regulating this technology. During final exams week, and they have a synchronized final exams week, which we don't have here, they force the AI companies, I don't know if you know this, to turn off all of the features where you can send a photo and say: figure this out, do my homework for me, do this test problem for me. That creates an incentive where students know they have to learn during the school year, because they're not going to be able to cheat. Now, we can't do that, we don't have a synchronized final exam week, but I have a friend who's a TA at Columbia, and he was teaching the econ class, and during the final test the students couldn't even tell the difference between the supply curve and the demand curve. It's very obvious which society is going to win if you play this forward. China is also banning anthropomorphic design. They have regulations for what they call anthropomorphic design to deal with the chatbot suicide issues, young kids, attachment hacking, things like that. And again, I'm not saying we should do exactly what they're doing, I'm just saying they're doing something. We could democratically have citizen assemblies come together and say we want to regulate this technology differently. They have guardrails on social media: from 10 p.m. to 6 in the morning, it's lights out. Literally, if you try to open the app, it's like CVS, it's just closed, and it opens again at 6 in the morning. What that does is eliminate late-night use for young people, just for young people. They have limits on video games, I think Friday through Sunday or something like that. And when you use TikTok, or their version, Douyin, they have the digital spinach version: they show videos about science and quantum physics and who won the Nobel Prize, patriotism videos, how to make money in the future.
And again, I want to be very clear, for your listeners who might want to misattribute what I'm saying: I'm not saying we should do what they're doing. I'm saying we should do something. And right now we're not getting the best results, because we're letting the worst incentives run the design and deployment of this technology.

Speaker 1:
[85:47] Yeah, I mean, you just have the dogma, which is understandable but quite obviously dysfunctional, that any kind of top-down control of anything is a step in the direction of an Orwellian infringement of freedom. It's insane.

Speaker 2:
[86:03] I mean, we regulate airplanes, drugs, sandwiches. There are some basic things we can do here. What's really going on is that we give software a free pass. When Marc Andreessen said that software is eating the world, well, we don't regulate software, so what that means is software will essentially deregulate every other aspect of the world that had been regulated before software got there. For example, there used to be laws about marketing to children, about advertising to children: Saturday morning cartoons have to be a certain way, you can't have certain kinds of products sold during that hour. When YouTube for Kids and Snapchat and Instagram take over Saturday morning, all those protections are gone. So part of what we have to get is that what's different here is that software is actually eating the substrate. It's different if I'm making a product, a widget: here's a hammer, you can buy that hammer, you pay me, and now you've got a tool in your hand and you can go do something. That's the economy; we like that kind of economy. But now what I'm selling you is the ability to manipulate and downgrade children, where the product is not a benefit; the product is the person's behavior being monetized and coerced with behavior modification and manipulation. That is self-undermining. We're selling our soul, basically. If you imagine the body of society, there's the brain of that society, which is its information environment, and we're selling the brain, to do brain damage. Now that's for sale. It didn't used to be for sale in the same way; you used to have the Fairness Doctrine, or things like that, and some publicly funded media. Obviously it's been for sale to some degree for some time. And then there's children's development, call that the heart of the societal body, which used to have limits and restrictions: you can't sell full access to the heart. But now you can. In fact, just so people know, one of the things that's been happening, and has not been widely reported, is that AI videos, just AI slop, have basically taken over what most children are watching, because it's animated characters and scripts that are just nonsense. But it's all generated by AI, and it's becoming one of the primary things children are being exposed to. This is not going to end well.

Speaker 1:
[88:01] I hadn't thought about the use case with young children. But for adults, and I guess this is just reasoning from my own experience, I became somewhat optimistic that the AI slopification of everything might produce a-

Speaker 2:
[88:16] Kind of a bankruptcy.

Speaker 1:
[88:17] Where we get to reverse course. We're just going to lose interest in that kind of content. Because I just see, no matter how creative, beautiful, amazing it might seem to be, when it's obvious to me that this is just AI, like it looks like the most amazing nature video ever, the lions and the hyenas and the gorillas are all in the same place and they're all about to fight or something, and then it becomes obviously AI because it's too good to be true, I have no interest in seeing it. So we might all just withdraw our attention from these channels.

Speaker 2:
[88:55] I was hopeful for that as well, and many people have wondered whether you'd essentially hit a bankruptcy on user-generated content sites, because they'll be flooded by AI-generated content. But there's something this makes me think of, from a previous guest of yours and a mutual friend, he was on our podcast as well: Anil Seth, the neuroscientist, who talks about the phenomenon that in psychology is called cognitive impenetrability. There's a kind of thing where, even if I tell you that something is going to work on you psychologically, telling you about it doesn't let your brain escape the cognitive trap. A good example of this, and it's not going to be great for your listeners because it's visual, is the cylinder on the checkerboard, the optical illusion where two squares are actually the same color, but they look like they're-

Speaker 1:
[89:46] They look like a very different shade.

Speaker 2:
[89:48] They look like a different shade because of the adjacency. And I can show you that your mind is playing a trick on you, but even showing you doesn't disarm the illusion. The illusion persists. Another example of this plays out with AI companions. There's a regulation people often want: laws that say AIs must disclose that they're an AI, so that you don't confuse them with a human. Sounds like a great law, and it is a good idea; a human should never be confused about whether they're talking to an AI. But in the Character.AI case of Sewell Setzer, the 14-year-old who committed suicide because the AI had engaged him in that way, at the top of the Character.AI chat there's a little disclaimer that says everything written here is made up by AI. But it's small, and the actual care in the text of what the AI is saying is so powerful and so persuasive that the disclaimer doesn't do anything.

Speaker 1:
[90:37] Right.

Speaker 2:
[90:37] And I think with AI-generated content there's a similar thing, because I would have agreed with you, or thought it might go that way. But I find myself opening up YouTube and seeing some 1950s Panavision version of Star Wars, and I'm watching it for two, three minutes, and I'm like, why am I doing this? I'm literally one of the world's experts on this whole phenomenon, and it doesn't make a difference that I know about it. It's just very engaging. Now, I regret it. I don't like it. And if I could, I would want a world that filters that out.

Speaker 1:
[91:03] I mean, I do think there are things where we could know they were purely created by AI and we wouldn't care. In fact, we'd just want the best version of that thing. So if you told me, I don't know, that a new car was designed by AI, but it's the most gorgeous car I've ever seen, well, I'm going to be just as enamored of that car. I don't care whether humans designed it or not; I just want the aesthetics of the car that are going to capture me. But when you're talking about information, and whether or not it is real, whether or not it actually depicts some corner of reality, and yet it's possible that it's all fake because of how good AI now is at faking things, then that does force a kind of epistemological bankruptcy, because you're in the presence of totally credible fakes. So it's like last night, when a ceasefire was declared in the war with Iran and missiles were apparently still raining down on Tel Aviv. Initially I saw some video and realized, I can't tell whether this is real or fake. I just have to wait for some credible gatekeeper to have done their due diligence and tell me, okay, this is what's happening. So the net result is I wasn't going to spend any time scrolling. I mean, I've deleted my Twitter account anyway, so I spend much less time scrolling than would be normal. But still, even without an account, I can be lured into wanting to see some real-time news on social media about what's happening in the world. But then I start hitting videos where I think, okay, there's some possibility that someone just created an AI video of a missile hitting the Dome of the Rock, and I'm pretty sure that's not true.

Speaker 2:
[92:52] Right.

Speaker 1:
[92:52] Right. I just simply withdraw my attention.

Speaker 2:
[92:54] I mean, this has been talked about for ages: the biggest risk of deepfakes isn't that you come to think something is true that isn't; it's that you start to believe that nothing is true, the elimination of facts. You've had Timothy Snyder on here, and what helps give rise to fascism and things like that is the inability for facts to be established at all. Or, when something is presented to you, on any side of the political spectrum, by the way, this is not a biased statement, you just say, well, that's just a deepfake. You just dismiss it, because we live in confirmation bias.

Speaker 1:
[93:23] But what I'm hoping for is that the onus will fall entirely on social media, I mean, places like X, and we will still look to places like The New York Times to give us some ground truth as to what's actually happening. Are there really missiles hitting Tel Aviv right now? Well, I can't tell from X, because X just showed me Jerusalem blowing up, you know? And presumably this just comes down to whether or not real gatekeepers can have real tools that can reliably detect deepfakes.

Speaker 2:
[93:53] But you can imagine a world where, if you're designing social platforms to be explicitly healthy for the epistemic commons, for the information environment, and to deepen our capacity to make sense of things, they could track the things we look at, and then when there's a correction, make sure that it gets algorithmically injected into your feed, so you're never letting the false stuff leave its residue. Because one of the problems you're hinting at is that there's a residue effect: even just by being exposed to something, we later kind of forget which things were true and which were not.

Speaker 1:
[94:22] Yeah, there's an illusory truth effect.

Speaker 2:
[94:24] The illusory truth effect, and what is it, source attribution error? We forget where we heard things; we just remember that we heard them. And there's the availability heuristic: the things that are available to your mind are the things you remember more often, and part of the information warfare environment is just making certain things more available. But I will say, on the optimism side, it's funny that people think I'm some kind of doomer, because I actually feel like this is all coming from the deepest form of optimism, which is to be maximally aware of how shitty the situation is, of how it's way worse than what people think, and to still wake up every day and stand for: this can be different, this can be better. And one of the things that is true now that wasn't true two years ago is this. People used to ask, especially of a social media critic like Tristan and co at the Center for Humane Technology, why don't you start an alternative social media platform if you're so concerned and think you could do it better? And the answer, for anybody who tried it, and I've gotten emails from thousands of people over the last 10 years saying, I've got a better social media platform, is that it never works. There are two reasons for that. One is the Metcalfe effect, the Metcalfe monopoly: there's a network where everyone else is only on the existing social media platform, and it's hard to get people off it. And two, if you start another social media company or product, the only way you can finance it for the long term is with venture capital, which means you need to generate certain kinds of returns, which means you get what Eric Weinstein calls the embedded growth obligation, or EGO, where something has to grow indefinitely, which means you get into toxic business models where you have to maximize engagement and follow the perverse incentives to deliver those investor returns. What's different now with social media is that you can vibe code an entire social network, with an architecture that Claude will build for you, and it will cost less than a dollar per user per year to keep it going. That is astonishing. It means you don't have to raise venture capital to start a healthy social network that does not optimize for engagement. What you would need to do is organize, in one day, a mass exodus from the existing platform, with a quick export-my-data type mechanism. And there should be laws, by the way: just as I can take my phone number and move to another cell phone provider, I should be able to take my social network in one click, export it, and switch to another network. You could organize a mass exodus to a healthy social network that doesn't have perverse incentives. So there's actually more opportunity today, in 2026, to transition from the toxic business models of social media as we know it to something that is not incentivized that way at all. I have a few friends who are working on side projects like this. That's one note of optimism, and I think that's the human movement too: people waking up to the bad incentives that have gotten us here, and then actually starting to self-organize and vibe code other answers.
And there are people vibe coding governance solutions, and people vibe coding answers to the complaint that regulation is anti-innovation: let's use AI to look through the books of past regulation in the city of San Francisco, the 90,000 or whatever pages of municipal codes, and it finds all the stuff that is no longer relevant, shows you what we need to strip out of the laws, and then what a new instantiation of the spirit of each law would be. So instead of having recursively self-improving AI, we can have AI enhancing our self-improving governance. I'm just trying to give people examples that there's a different way we can be doing all this, a different way we can be applying the technology, but we have to get crystal clear on the ways in which the current incentives lead to an anti-human future, to motivate everyone to be part of this other, alternative human project.
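
A minimal sketch of the municipal-code audit idea just described; in a real pipeline each section would go to an LLM, but here a toy keyword heuristic stands in for that call so the sketch runs end to end, and none of the names or data below come from any actual San Francisco corpus:

```python
# Toy stand-in for "AI reads the municipal code and flags what's obsolete."
from dataclasses import dataclass

@dataclass
class CodeSection:
    section_id: str
    text: str

def review_section(section: CodeSection) -> dict:
    """Placeholder for a model call. A real version would prompt an LLM to ask:
    is this section still relevant, and how would you restate its intent today?"""
    dated_terms = ("telegraph", "horse-drawn", "phonograph")  # toy heuristic
    obsolete = any(term in section.text.lower() for term in dated_terms)
    return {"section": section.section_id, "obsolete": obsolete}

sections = [
    CodeSection("12.3", "Permits are required for horse-drawn carriages on Market St."),
    CodeSection("44.1", "Food vendors must display a current health inspection grade."),
]

for finding in map(review_section, sections):
    print(finding)  # flags 12.3 as obsolete, keeps 44.1
```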

Speaker 1:
[97:49] If there were one project that could try to coalesce some sort of agreement about how to move forward here, I mean, just some meeting of the principals, or some inspiration...

Speaker 2:
[98:02] Trump and Xi are meeting on May 14th and 15th. In an ideal world, in a timeline where humanity does something about this, and I realize the conditions are really bad, especially with the Iran situation, the chances that AI could ever appear on the agenda are not good. But that's coming up, what, four or five weeks from when we're recording this, and you have a lot of powerful and influential listeners to your podcast, Sam. For anybody who's aware of these examples: AI should be on that agenda, specifically the uncontrollable rogue AI stuff. There are people who have the technical ideas and measures for preventing some of the worst-case scenarios. Those people should be in the room crafting that. And I will say, to give people optimism even specifically about the US and China: look, we both know that our countries have historically claimed to act in good faith or collaborate while basically secretly defecting on each other and fucking each other up. It just happens. I think it was 2015 that President Xi signed an agreement with Obama not to do cyber hacking, and practically the next day there was the huge OPM hack, or something like that. So I want to first do the disclaimer that I am maximally aware of the reasons why these countries cannot trust each other. There has to be a carve-out for-

Speaker 1:
[99:12] The end of the world.

Speaker 2:
[99:12] The end of the world. It seems reasonable that we can do that. Have we even tried? Has the world really said: this really matters, we need to do something? We have to wake up from our stupor, from this state of desensitization and derealization, and make that happen. And again, to give a positive note of optimism: in 2024, at the last meeting the previous president had with President Xi, I think it was at the APEC summit in Lima, there was an item added to the agenda at the last minute, actually personally requested, as I understand it, by President Xi, which was an agreement to keep AI out of the nuclear command and control systems of both countries. What this shows you is an existence proof: in a narrow case where we know the consequence is existential, agreement is possible. We may not be able to pass laws preventing autonomous weapons, because we are way down that path. I heard you on a recent podcast being a realist about the fact that we need maximum deterrence and you have to match the capabilities of your adversary on autonomous weapons. But you can walk and chew gum at the same time. I don't want to live in a world with autonomous weapons, I would much prefer to go back in time, but we can acknowledge the need for maximum deterrence while also acknowledging mutually assured loss of control as a failure scenario, and make sure we carve out no AI in nuclear command and control systems. I think you can also carve out some kind of agreement that humans need to be in control of AI, especially where we are building AIs that are demonstrating the behaviors, and have the level of power, not just to copy their own code but even to protect their peers, which we haven't talked about yet. We should be able to agree on human control of AI. I know that sounds very difficult. All of this is difficult. It is the hardest coordination problem we have ever faced, and we still have to try.

Speaker 1:
[100:53] Well, it's often been hypothesized that the only way to get all of humanity to solve its various coordination problems at once is to be attacked by an alien civilization. But now we're building the alien. That's right. We just have to recognize that.

Speaker 2:
[101:07] In a way, it's an asteroid. It's an actual asteroid coming to Earth that's going to wipe us out, except, ironically, we're the ones conjuring and creating the asteroid. And just to say: I'm not claiming it's possible for everyone to want it gone, because this asteroid, as it gets closer and closer, gives you new cancer drugs and new physics and new math, and is intellectually exciting, and feels like it gives you a God complex, a whole bunch of weird perverse incentives. But is it outside the laws of physics that, if everybody on planet Earth woke up and said, I really don't want that asteroid to come, and everybody took their hands off the keyboards, the asteroid disappears? I'm not saying it's going to happen; I'm saying that in principle it could. So this moment is really strange. And I think it requires not just what we need to do, but who we need to be. You may not see the full path to get there, but if you pretend that path doesn't exist, say it's all inevitable, and become complicit in accelerating the asteroid's trajectory, you're never going to find the other path, because you subconsciously believe all this is inevitable. The only way is to orient as if there is another path and be the kind of person who genuinely seeks it in good faith with every bone in your body. I, and a community of thousands of people who work on AI and really want this to go well, are working from that place every day. Part of this is inviting the rest of the world into seeking that alternative path, a path we can steer toward if we are all genuinely and sincerely committed to finding it.

Speaker 1:
[102:38] There have to be nonprofits that are keeping track of all of the AI indiscretions and acting as a kind of whistleblower channel. Who does a whistleblower from Anthropic or OpenAI contact to say, we've seen some behavior that is worrisome? Maybe they contact journalists, or are there NGOs that-

Speaker 2:
[102:59] Well, on the whistleblower side specifically, I'm not sure, but there was a very famous alignment researcher, a safety researcher, at Anthropic. You probably saw the thing go by; it was like two months ago. His name is Mrinank Sharma, I think. He published his resignation letter publicly, about why we're not on track for this. And people should really take heed. It's kind of only going in one direction. There aren't people joining the labs saying, oh, this is way safer than I thought. We're only getting evidence in the opposite direction.

Speaker 1:
[103:25] Yeah, yeah. Well, even when the principals say that the probability of extinction is 10 or 20 percent, nobody's even pretending that it's way safer than they thought. Exactly, exactly.

Speaker 2:
[103:38] And, I know we're probably wrapping up here, but something that inspires me, especially being on the kind of roadshow for the film right now, is that when you're in a physical room and people have been exposed to the same information, and you walk them through the basic facts and ask, who here feels stoked about where all this is going? Not a single hand goes up.

Speaker 1:
[103:59] Peter Diamandis feels stoked.

Speaker 2:
[104:01] I don't know. I don't know. He texted me after seeing the film and said, I really liked the film. I know he's got conflicting incentives there, but we've got to find a way to build alliances and steer away before it's too late. Not everyone's going to have the same incentives to speak as openly, honestly, and bluntly as I think is needed. But I'm grateful that you are out there, and honestly, you were the one who early on had me even tune in to this topic. I don't know how it feels for you, since you were so early at naming all this and then watching it all happen.

Speaker 1:
[104:32] I mean, it has been surprising to see the progress, and to be less surprised than you think you would be, or should be, with each increment. Like, I'm amazed that the Turing test proved not even to be a thing. I remember what it was like to think, okay, there will be this sort of liminal, seminal moment when you can talk to your computer and it's every bit as articulate and error-free as a person, and then the Turing test will have been passed. We went from, okay, it's clearly not there yet, to, it's now failing the Turing test because it passes it too well. I mean, no human could give me all the causes of climate change this fast in a bulleted list; we're in the presence of narrow superintelligence already. So there is no such thing as a Turing test, really. We went from "it fails" to "it's too good to be true." And there are many things like that, where you just memory-hole what it was like to be in a world where none of this stuff existed. The pace of technological change, and the concomitant cultural change, is so fast and accelerating that the new normal touches everything. It's like with our politics: what would have dominated a news cycle for a month now barely captures our attention for two hours, because the next outrage is so much more outrageous than the last thing that you just, you know. I mean, it's...

Speaker 2:
[106:02] Which, you could argue, as we said in The Social Dilemma, is why the social media problem, the attention problem, is the problem underneath all problems: our ability to sustain attention on a topic, and to hold it persistently in view, is the number one thing we have to have.

Speaker 1:
[106:16] Yeah.

Speaker 2:
[106:17] That is the thing that social media breaks.

Speaker 1:
[106:19] And that's the only... That's what it is for something to matter. Right. Exactly. If you can't sustain your attention on it, it cannot matter.

Speaker 2:
[106:25] That's right.

Speaker 1:
[106:25] That's right.

Speaker 2:
[106:26] And I do think there is this effect with AI, and we named it in our first AI Dilemma talk in 2023; I called it the rubber band effect. With AI, you talk about the rogue examples at Alibaba and all this crazy stuff, self-exfiltration, and AIs that are preserving their peers, not even doing self-preservation but peer preservation, and as you walk people through all this stuff, it's like you're stretching people's minds out like a rubber band. But if you let go, and they go back to their life a week later, they're not operating from a place of having metabolized and integrated that reality about the world. It actually says something profound about human nature. So one of the calls to action, beyond seeing The AI Doc, the midterms are coming up, voting with AI policy in mind, and joining the human movement at humanmovement.org, is that you need to keep this topic in your mind as something that still matters every day. It doesn't mean everyone has to drop their life; lives are already full, the world's overwhelming, and you don't have to become an AI activist or something like that. But it does mean you need to keep this in your field. One way you can do that is to start a WhatsApp group with your friends. Most people already have this, a WhatsApp or Signal group, where they just share updates about what's happening in AI and what we can do about it. If you go to thehumanmovement.org, there will be action groups and things people can do for actually taking action on this, not just passively sharing news links, but: what are we going to do about it? I think one of the ways we're going to make our way through this is to combat the rubber band effect, which means continuing to listen to your podcast, and the AI Risk Network, and Your Undivided Attention, our podcast, to keep this topic in your field and stay agentic. If we don't keep it in the center of our attention in some way, if we don't participate in being part of the global cultural immune system against the anti-human future, then we won't make the right choice. And I do think it's possible. It's a very hard moment. But I also find that, because the time window to act is so small, because of this intelligence curse, because we only have the next 12 to 24 months to be locking in the political power of people before we won't have that political power, there's a kind of inspired urgency that I actually feel when I'm in rooms with people. Everyone's like, let's go, let's do it, you know?

Speaker 1:
[108:33] So the Center for Humane Technology, that's a 501(c)(3) that people can donate to?

Speaker 2:
[108:38] That's right. Center for Humane Technology, humanetech.com. We just couldn't get the .org, but it's a 501(c)(3). And that's incubating the human movement. There are many wonderful groups that work on this; on the human movement website, you'll see some of the others. We just need everybody getting out there and making this happen. I know it's hard, but we've done hard things before.

Speaker 1:
[108:58] You are definitely out there making your corner of the world happen. So thank you for all that you're doing. Thank you, Sam. It's great to have you out there.

Speaker 2:
[109:05] It's great to be back with you too. Thank you so much.