title Sam Altman’s Attacker, In His Own Words

description In this episode we talk with Daniel Alejandro Moreno-Gama, the man who was recently arrested and charged with attempting to kill Sam Altman. Several months before the attack, our team contacted a young man posting on Discord under the handle "Butlerian Jihadist," who referenced “Luigi-ing tech CEOs” to our producer. He agreed to an interview and to answer questions about his background and how he came to believe that AGI must be stopped for humanity to survive.

To leave a comment and sign up for our mailing list, visit us at our website here. To support our ability to report more stories like this, you can become a subscriber here. You can email us directly at [email protected]

THIS EPISODE FEATURES:

Daniel Moreno-Gama

LINKS:

San Francisco District Attorney’s Office Press Release

U.S. Department of Justice Press Release

Statement from Pause AI 

Statement from Sam Altman

San Francisco Police Department Press Conference

Daniel Moreno-Gama Criminal Complaint

CREDITS:

This episode of The Last Invention was reported and produced by Andy Mills, Simon Adler, Matthew Boll, Seth Temple Andrews, Ethan Mannello, and Carmen Hilbert.

Music for this episode was composed by Scott Devendorf, Ben Lanz, Cobey Bienert, and Simon Adler

The Last Invention artwork by Jacob Boll
Learn more about your ad choices. Visit megaphone.fm/adchoices

pubDate Thu, 16 Apr 2026 01:50:00 GMT

author Longview

duration 1217000

transcript

Speaker 1:
[00:00] All right, well, let's just start here. I'm going to call you Discord Dan. Are you okay with that?

Speaker 2:
[00:06] Yeah, that's great.

Speaker 1:
[00:07] So a few months ago, I interviewed this young man that our team had found posting anonymously on a Discord server called Stop AI. Can you tell me a little bit about your background? Where'd you grow up?

Speaker 2:
[00:18] Oh, suburbs. Mainly just suburbs my whole life.

Speaker 1:
[00:21] He'd gotten our attention because some of his posts were raising questions about the possible use of violence to stop the frontier AI labs from building AGI. While he was far from the only person online to be posting stuff like that, he was one of the very few people who was willing to sit down with us for an interview and talk to us about his views and his life. I hate to sound like an old man here, but you are what we call Generation Z, right? Yeah. Did you have a smartphone when you were super young? How would you describe your relationship to the internet throughout your life?

Speaker 2:
[00:58] Yeah, I grew up quite close to the internet. I think I got my iPad when I was like in sixth grade, or maybe younger, maybe fifth, fourth grade actually.

Speaker 1:
[01:09] The two of us talked about him growing up in the suburbs, about how starting around age nine, he'd begun to live online pretty much every day.

Speaker 2:
[01:17] Mainly just like YouTube comment sections. I would debate people on there.

Speaker 1:
[01:21] In the comments?

Speaker 2:
[01:22] Yeah.

Speaker 1:
[01:23] We talked about his shifting political views, how he was at first on the right.

Speaker 2:
[01:27] Just like Ben Shapiro clips basically.

Speaker 1:
[01:30] But how later he became a Bernie Sanders guy, and how all of that was essentially driven by what he was watching online. It was actually those online videos, he said, that led him to debates around AI.

Speaker 2:
[01:44] You know, the arguments of people like Yudkowsky and Max Tegmark, Conor Leahy, like on YouTube, for example.

Speaker 1:
[01:51] But when we got to the subject of the possible use of violence to stop AI, I asked him about the views that he'd shared online and what lengths he was willing to go to, to stop what he believed might lead to human extinction. Do you think that if we continue to see the industry move in the direction it's moving now, that by whatever means necessary, we have to stop the extinction of the human race? He paused.

Speaker 2:
[02:21] I'll say no comment.

Speaker 1:
[02:23] He eventually seemed to back away from some of the sentiments that he'd shared online. You don't really think it would be wise for someone to, let's say, kill Sam Altman?

Speaker 2:
[02:37] No.

Speaker 1:
[02:41] But then came Friday, April 10th.

Speaker 3:
[02:45] Scary Friday morning for OpenAI CEO Sam Altman and his employees.

Speaker 4:
[02:50] The disturbing fire bomb attack in San Francisco, authorities say targeting the home of OpenAI CEO Sam Altman, one of the leaders in artificial intelligence.

Speaker 1:
[03:00] At around 3:30 in the morning in a quiet neighborhood in San Francisco, home security cameras caught a young man throwing a Molotov cocktail at the home of Sam Altman. A few hours later, what appeared to be the same young man then attacked OpenAI's headquarters. When security guards approached him, he allegedly pulled out a jug of kerosene and said that he had come to burn the building to the ground and kill anyone who was inside.

Speaker 5:
[03:28] We're learning more about that suspect, as you said, arrested here in the vicinity of OpenAI's headquarters, reportedly making threats to try to burn the building down.

Speaker 1:
[03:37] And when the local authorities released the name of the man that they had arrested...

Speaker 6:
Investigators identified him as Daniel Moreno-Gama... the man they were holding as a suspect, Daniel Alejandro Moreno-Gama... Daniel Alejandro Moreno-Gama in custody.

Speaker 1:
[03:51] I was like, holy shit, that's Dan.

Speaker 2:
[03:56] I want to start by saying I don't believe I'm a lying person. I would normally only advocate for violence as the absolute final, like, I don't want to say final solution, but you know, the final, you know, final, okay, you get what I'm saying, okay.

Speaker 1:
[04:17] The FBI released a statement saying that it appeared Dan had a much larger plan and was carrying something of a manifesto.

Speaker 6:
[04:24] Sources tell me the suspect was driven by strong anti-AI views when he was arrested in San Francisco. He was carrying a manifesto that includes a list of names and addresses of other AI CEOs and investors.

Speaker 1:
[04:37] In addition to the names and addresses of several leaders and board members in the world of AI, he had also allegedly written, quote, if I'm going to advocate for others to kill and commit crimes, then I must lead by example and show that I am fully sincere in my message. I'm Andy Mills, and you're listening to The Last Invention. And in light of this recent attack, as well as the growing number of threats to those building AGI, to lawmakers who are voting on data centers, we wanted to share our interview with Dan. Right now, Dan is 20 years old, he's being held in custody without bail, and has been charged with 13 felonies, including attempted murder. But back in January, he was still only 19, and as you'll hear, still unsure of how far he and others should go in their belief that creating AGI may lead to human extinction. As far as I'm aware, this is the only recorded interview with Daniel Moreno-Gama. When is the first time you have any memory of hearing about artificial intelligence?

Speaker 2:
[06:00] When ChatGPT came out, and at first I thought it was the greatest thing on Earth. I thought, this is awesome. I get to basically cheat on everything. Because I wasn't thinking about the repercussions that might have on learning at the time. I was just a sophomore when ChatGPT came out.

Speaker 1:
[06:19] So you were in high school and you thought, oh, this is going to really make my homework easier.

Speaker 2:
[06:23] Yeah. But at the same time, I guess I was a bit more curious to, like, what actually is this? And so I kind of started looking a bit more into it. And that's, I can't exactly remember the first video I came across. But probably it was Yudkowsky, I imagine.

Speaker 7:
[06:40] The basic description I would give to the current scenario is, if anyone builds it, everyone dies.

Speaker 8:
[06:45] Eliezer Yudkowsky is a notable member of the group of AI commentators adamant in their belief that there's very little hope for humanity's survival short of brute force intervention.

Speaker 2:
[06:54] What really got me into it was the arguments of people like Yudkowsky.

Speaker 1:
[06:59] Please welcome Eliezer Yudkowsky.

Speaker 2:
[07:01] In my opinion, Eliezer is the most important thinker of our time.

Speaker 7:
[07:05] I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us.

Speaker 2:
[07:11] Don't listen to this episode if you're not ready for an existential crisis.

Speaker 1:
[07:15] And when you came across videos from people like Eliezer Yudkowsky, did you immediately think, like, wow, this guy's right?

Speaker 2:
[07:21] Well, I don't think many people would want to believe what Yudkowsky's saying.

Speaker 7:
[07:29] The AI doesn't hate you, neither does it love you, and you are made of atoms which it can use for something else. That's all there is to it in the end.

Speaker 2:
But, you know, as I kept watching it...

Speaker 7:
I am worried about the AI that is smarter than us. I'm worried about the AI that is good enough at AI research to build the AI, that builds the AI that is smarter than us and kills everyone.

Speaker 2:
[07:49] It was a mix of, you know, scary but very interesting.

Speaker 7:
[07:52] There's no fire alarm for artificial general intelligence.

Speaker 2:
[07:56] I've always liked watching debates. So I was like, okay, I hope he's kind of wrong. But over time, I realized very few of his main criticisms ever got refuted or rebutted in a proper way, I think. And that made me think, okay, this merits a similarly simple, solid counter-argument, but I just never saw that. A lot of the counter-arguments from, I guess, the accelerationist side seemed to be very fallacious. They never wanted to engage with his core arguments. And that made me realize that I should probably do something to get what he's saying out there more, get it to be part of the public consciousness a little bit more than it is currently.

Speaker 1:
[08:44] What form did that take? Are you getting into the comment section on YouTube videos? Is that kind of step one?

Speaker 2:
[08:49] That was step one, yeah. Then probably talking to my parents about it, strangely enough, talking to people in my life about it, being like, hey, did you know this technology is kind of strange? Like, we don't really understand what's going on about it. And, well, that didn't go too well. I kind of became a bit like annoying, a bit autistic about that. So my mom kind of recommended, maybe you should join an organization or something. I'm like, okay, yeah, that makes sense. So I did that with Pause AI in 2024.

Speaker 9:
[09:28] All right, well, a global call to stop artificial intelligence in its tracks.

Speaker 6:
[09:32] Close OpenAI!

Speaker 3:
[09:34] Close OpenAI!

Speaker 10:
[09:35] The protesters are demanding that OpenAI be shut down and that the government permanently ban the development of artificial general intelligence, or AGI.

Speaker 1:
[09:50] Do you at this point consider yourself an anti-AI activist? Is that the right language we should use to describe you?

Speaker 2:
[09:58] No. I think AI is too broad of a concept. I would say I'm an AI safetyist. I'm against general intelligence.

Speaker 1:
[10:06] You're saying that you're fine with, like, chatbots. You're even fine with, you know, maybe what DeepMind is up to with AlphaFold and trying to do scientific discoveries. What you're worried about is trying to actually create something more like a digital species, this AGI dream. That's the thing you're worried about.

Speaker 2:
[10:26] Yes, absolutely.

Speaker 1:
[10:27] And primarily, is your fear that it's going to disrupt the economy, that it's going to lead to some kind of AI dictatorship where one person or company has too much control? Or, like Nate Soares and Eliezer Yudkowsky, are you concerned that, in short, if anyone builds it, everyone dies?

Speaker 2:
[10:49] Yeah, I would say definitely the latter. And if most people were as educated as me on this topic, if they knew the amount of information, the amount of statistics I knew, they would probably lean towards my position pretty heavily, I'm guessing. Like, if there was a bridge where the engineer said there's a 25% chance that it collapses, most people probably wouldn't take that. And I think the doomers, right, I think they're actually a lot more populist and democratic than people make them out to be. I find that a lot of the accelerationists tend to be almost anti-democratic. They often talk about, like, how it's going to make the lives of people better, but they never seem to really dive deep into, like, the arguments that, like, these people didn't consent to that, they didn't vote for that. You wouldn't give them an option to vote for that because we know who would win.

Speaker 1:
[11:40] Mm-hmm. But what do you make of the argument that I've heard from several accelerationists in the past that, you know, it was not up to a vote whether Gutenberg should make his printing press, right? It's not up to the populace to decide if Edison makes the light bulb. You know, that's just not how the real world works. And so why would this be any different?

Speaker 2:
[12:03] I think the difference is Edison and Gutenberg couldn't say there was a one in four chance everybody dies because I made this invention. You know, the light bulb is just a different type of invention than what is essentially an autonomous species. This is less like the light bulb and more like the Manhattan Project, like nuclear weapons.

Speaker 1:
[12:23] And if there is truly an existential risk, that if the OpenAIs, if the DeepMinds, if the xAIs, if these big companies actually are going to make AGI, how great of a threat do you think that poses, and what kind of response should we be willing to make to mitigate that risk?

Speaker 2:
[12:53] Well, what response? I think we need policy, 100 percent. We need policy to ensure that this technology gets regulated on how big data centers can get, like maybe putting a moratorium or a cap on the construction of new data centers. I feel like we have regulated technology before. So there's no reason why we couldn't do it for artificial intelligence.

Speaker 1:
[13:17] Just to put my cards on the table here, I'm working on a story about political violence. And I want to understand just personally for you, if this technology really poses an existential threat, and right now, lawmakers are not motivated to heavily regulate it, what do you think we should be willing to do to ensure that this technology that you believe is going to lead to the likely extinction of the human race doesn't get made and doesn't get released?

Speaker 2:
[13:53] Well, first, I think before we even think about violence, we need to exhaust all our peaceful means first. I think protesting, I think sharing information, I think that needs to come way before we even consider that.

Speaker 1:
[14:07] But is it on the table? Do you think that if we continue to see the industry move in the direction it's moving now, that, by whatever means necessary, we have to stop the extinction of the human race?

Speaker 2:
[14:26] I'll say no comment.

Speaker 1:
[14:28] But is it true that you have online, at times, at least toyed with the idea of advocating violence against the leaders in these tech companies? I think my producer said something about a post about Luigi-ing some of these CEOs.

Speaker 2:
[14:45] Right. I mean, it was kind of... It shouldn't be taken too literally. I mean, people kind of say that all the time. I didn't really mean that as a threat or anything.

Speaker 1:
[14:57] You were essentially being provocative.

Speaker 2:
[14:59] Yes. That's kind of my idea. I'd rather be provocative with my statements than actually promote something like that.

Speaker 1:
[15:08] So you don't really think it would be wise for someone to, let's say, kill Sam Altman?

Speaker 2:
[15:16] No. I mean, I think these people, they have unlimited resources. One person is not really going to make that much of a dent. I understand the frustration with a person who might advocate for that, but it's not practical. It's not worth it. It's almost all risk, no reward. So, people may feel that way, but I don't know. Not too many people would do it.

Speaker 11:
[16:01] My name is Matt Kobo. I serve as the Acting Special Agent in Charge for the San Francisco FBI. Today's charges outline a dangerous and deliberate plan to bring violence into San Francisco. This was not spontaneous. This was planned, targeted, and extremely serious.

Speaker 1:
[16:33] Daniel's parents did not respond to my request for an interview, but released a statement saying that their son was, quote, a loving person who has been suffering recently from a mental illness. But according to law enforcement, in Daniel's three-part manifesto, he makes clear that based on his belief in humanity's, quote, impending extinction, Daniel had not only planned this attack on Sam Altman, but was also attempting to inspire others to attack and kill the leaders of the frontier AI labs. He allegedly supplied their names and home addresses. Daniel also allegedly included a personal note to Sam Altman that read, if by some miracle you live, then I want you to take this as a sign from the divine to redeem your life. In the last few days, many prominent AI safety organizations like Pause AI and figures like Eliezer Yudkowsky have released statements strongly condemning this attack and saying that impressionable young people should not follow Dan's lead. But online, several anonymous users in places like Reddit have already begun to compare Dan to Luigi Mangione, including calls that someone should finish what he started. This reporting is a part of our ongoing coverage of the AI race and the debate around it.

Speaker 12:
[18:05] There is a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control.

Speaker 1:
[18:16] To hear the backstory of this moment we're in, including the views of the Accelerationists.

Speaker 7:
[18:22] This really will be a world of abundance.

Speaker 1:
[18:24] The AI Doomers.

Speaker 13:
[18:26] I was selling AI as a great thing for decades, and I was wrong. I was wrong.

Speaker 1:
[18:34] We'd recommend going back and starting at episode one of our eight-part series. We're going to continue to cover this story as well as the ongoing developments around the AI revolution. This podcast is produced by Longview. If you'd like to drop us a line or send us a tip, you can reach us at hello at longview.report. To leave a comment or subscribe to our newsletter, visit us on our Substack, or you can become a subscriber and support our reporting. Links are in the show notes. Thank you for listening, and we'll be back soon.

Speaker 14:
[19:19] Hello, everyone, this is Matt, co-founder here at Longview, where we report stories that are grounded in curiosity and context, not political bias. As we say, it's not the left view, not the right view, but the Longview. One way we sustain this business is by advertising, but we are also listener-supported. And if you would like to go ad-free and support us at any dollar amount that you'd like, you can do that by clicking on the link in our show notes or by going to longviewinvestigations.com. Until then, here is a brief message from our sponsors. The Last Invention is brought to you by Quince. Every spring, I feel the same urge to clean things out. Closets, habits, routines. And with clothes, I keep coming back to the same idea. Fewer of them, but better quality. That's what I like about Quince. The materials feel elevated, and the cuts are clean, and the prices, they aren't insane. They make everyday staples using premium fabrics like 100% European linen, and this incredibly soft flow-knit fabric for their active wear. The kind of gear you end up wearing way beyond the gym. What surprised me most is the pricing. It's about 50% to 60% less than what you'd expect from comparable brands, and that's because Quince goes straight to the source, working directly with ethical factories and skipping the middlemen. So you're actually paying for the quality itself instead of the markup. My favorite piece these days is one of their blue chore jackets. It fits great, it's really comfortable, and I know it'll hold up even though I wear it all the time. Refresh your wardrobe with Quince. Go to quince.com/lastinvention for free shipping and 365 day returns. Now available in Canada too. Again, go to quince.com/lastinvention.