transcript
Speaker 1:
[00:00] I sold my car on Carvana last night.
Speaker 2:
[00:02] Well, that's cool.
Speaker 3:
[00:03] No, you don't understand.
Speaker 4:
[00:04] It went perfectly.
Speaker 5:
[00:05] Real offer, down to the penny.
Speaker 3:
[00:07] They're picking it up tomorrow.
Speaker 4:
[00:08] Nothing went wrong.
Speaker 2:
[00:09] So, what's the problem?
Speaker 1:
[00:10] That is the problem.
Speaker 3:
[00:11] Nothing in my life goes as smoothly.
Speaker 1:
[00:13] I'm waiting for the catch.
Speaker 6:
[00:14] Maybe there's no catch.
Speaker 4:
[00:15] That's exactly what a catch would want me to think.
Speaker 2:
[00:18] Wow, you need to relax.
Speaker 3:
[00:19] I need a knock on wood.
Speaker 1:
[00:20] Do we have wood? Is this table wood?
Speaker 2:
[00:22] I think it's laminate. Okay, yeah, that's good. That's close enough. Car selling without a catch. Sell your car today on Carvana. Pick up fees may apply.
Speaker 7:
[00:30] So you're saying with Hilton Honors, I can use points for a free night's stay anywhere? Anywhere. What about fancy places like the Canopy in Paris?
Speaker 3:
[00:38] Yeah, Hilton Honors, baby.
Speaker 7:
[00:40] Or relaxing sanctuaries like the Conrad in Tulum?
Speaker 6:
[00:43] Hilton Honors, baby.
Speaker 7:
[00:45] What about the five-star Waldorf Astoria in the Maldives? Are you going to do this for all 9,000 properties?
Speaker 8:
[00:52] When you want points that can take you anywhere, anytime, it matters where you stay. Hilton, for the stay. Book your spring break now.
Speaker 4:
[01:11] Hey everyone, welcome to Conspirituality, where we investigate the intersections of conspiracy theories and spiritual influence to uncover cults, pseudoscience and authoritarian extremism. I'm Derek Beres.
Speaker 1:
[01:23] I'm Matthew Remski.
Speaker 9:
[01:24] I'm Julian Walker.
Speaker 4:
[01:25] You can find us on Instagram and threads at ConspiritualityPod, as well as individually over on Blue Sky. You can access all of our episodes ad free, plus our Monday bonus episodes on Patreon, patreon.com/conspirituality. You can also grab our Monday bonus episodes via Apple subscriptions. As independent media creators, we really appreciate your support.
Speaker 1:
[01:50] And we have a couple of book announcements today because Derek and I have both come out with books this week. I'll just go first. I'm really lucky that North Atlantic Books has published Anti-Fascist Dad this week. I'm happy to say that here because it's really a consequence of doing all this work with you both, toiling in the right-wing extremism mines. When Trump locked in that second term and our Slack was burning up, our then 12-year-old came to me and said, well, what will happen now? And I had nothing in the moment. But of course, it wasn't long before I thought that, as a parent, I have to spend less of my extremely limited time on the details of fascist entrenchment and insanity, and more on how this stuff has been fought through the ages. I think next week, we're gonna interview each other about these books, and over the next couple of weeks, we'll publish sections of the audiobooks on the feed here. But Derek, you've got a book too.
Speaker 4:
[02:45] I do. It came out on Monday, purely coincidentally a day before yours, but I decided to publish mine that day because it's my favorite holiday, 4/20.
Speaker 1:
[02:55] Exactly.
Speaker 4:
[02:56] The book is called Well Enough: Finding Health Despite the Wellness Industry. After we collectively published our 2023 book Conspirituality, I wanted to write a memoir about my decades in the wellness industry leading up to the creation of the podcast, and go more in-depth about the history of what led there. It's half memoir, half the work I do on this podcast, and we'll put links in the show notes to both my book and to Matthew's book.
Speaker 9:
[03:25] Congratulations to you both.
Speaker 1:
[03:27] Thanks, Julian.
Speaker 4:
[03:28] Thank you, Matthew.
Speaker 9:
[03:35] Conspirituality 305, AI's cultish leader. Ronan Farrow is at it again. The reporter who launched the MeToo movement has a new feature in The New Yorker, written alongside staff writer Andrew Marantz, about OpenAI CEO Sam Altman. In many ways, the 16,000-word investigation is a meditation on the existential risks of AI being placed in the hands of a few powerful men, and in this case, a possible sociopath. Today, we discuss the article and then zoom out on broader questions in AI. Who is it for? How is it being used? And can it be reined in? On April 6th, Ronan Farrow and Andrew Marantz published an article in The New Yorker titled Sam Altman May Control Our Future, Can He Be Trusted? The writers spent 16,000 words unfolding the story of Sam Altman's career in tech, culminating in his current tenure as CEO of OpenAI. They use interviews, internal company documents, and notes and memos from former associates to paint a portrait of Altman as deceptive, dishonest, and opportunistic. Central to that story is how Altman has claimed to prioritize guarding against the potential dangers that generative AI poses to humanity while lying about the measures he was implementing to do so. It's a story of an archetype now emblematic of our times: the tech innovator and entrepreneur who surfs the colliding waves of creative ideas, market trends, public image management, and investor confidence, always with an eye on the prize of immense wealth, power, and personal glory. In a clever wink to the subject matter, the article ran alongside a creepy, high-res, AI-generated, animated gif of Sam Altman from the waist up. He's looking straight at us as he tries on different masks that all bear different facial expressions, and those heads are kind of hovering alongside him as replacements for whatever expression he's wanting to convey.
Speaker 4:
[05:47] I don't know if it's actually AI-generated. I mean, that's the thing about this. We now see everything as AI-generated. But remember, so much of design is not that. It could have been created with a suite of tools, because that's the sort of image, or moving image, they used. We've seen stuff like that for years, if not decades, but it is very creepy.
Speaker 9:
[06:11] Yeah. I mean, it seems like any impressive looking new piece of digital media, we're like, that's AI, right?
Speaker 4:
[06:17] Yeah.
Speaker 9:
[06:17] It's like, well, how is that sausage made? And yeah, in this case, it's a particularly creepy one. The article outlines Altman's journey from dropping out of Stanford at 19 in 2005 to work on his startup called Loopt, and how even that early on, he reportedly exaggerated trivial things, like he falsely claimed to everyone who worked there that he was the Missouri high school state champion ping pong player, and then turned out to be pretty terrible at the game.
Speaker 4:
[06:45] Yeah, because that was me, I was the champion.
Speaker 9:
[06:49] We should have checked. He then sold that company in 2012 and oversaw a whole litany of hugely successful startups at venture capital firm Y Combinator, from where he appears to have then been ousted for prioritizing his own investments over those of other partners. As of 2024, reporting puts his investment portfolio as worth about $2.8 billion. So whatever he was doing, it was working. And this includes several companies that are also in business with OpenAI currently. So that raises questions about conflicts of interest. Much of the money Altman used for personal investment in those Y Combinator companies came from his main funder and mentor in Silicon Valley. And all roads always lead back to this. He's that little-known and quite cuddly expert on the Antichrist named Peter Thiel. Now, Derek, you're going to give us more detail about OpenAI and how it factors into all of this in a minute. The short version that I'll just ping right here is that Altman was initially very focused on safety, centering these concerns in the 2015 founding documents of what was at first a non-profit company, set up purportedly to be an ethical competitor to Google in the space. But then, over time, failed promises, lies, and his prioritizing of speed and profit over safety, especially while negotiating that huge investment from Microsoft, led to internal pressure to have him fired. And that actually happened in November of 2023, but it only lasted a few days. And then he aggressively found his way back into being at the helm of the company. Over time, Altman has also shed his image as protector of humanity and become increasingly tech-optimist in his blog posts about AI, writing things like, we will all build ever more wonderful things for each other. Despite appearing to call for greater oversight and regulation of AI when testifying before the Senate, behind the scenes, the writers allege, he actually lobbied to make sure that those things didn't happen.
And these kinds of duplicitous, self-interested reversals are also exhibited in his political opinions and his donations to political parties. He steadily shifted, perhaps predictably, from being pro-Democrat and anti-Trump to currying favor, like a lot of tech oligarchs, with MAGA in the time we're in now. This is consistent with early ideas floated by Altman and others at OpenAI about setting up an investment bidding war for their technology between global governments because of the military edge the technology will provide. After abandoning that idea, Altman instead began going behind the board's back to court Saudi Arabia's Mohammed bin Salman for investment, and then to the UAE, deflecting ethical or geopolitical concerns as mere inconveniences to be negotiated. That aside, during the Biden administration, Altman also sought and failed to get security clearance for classified AI policy discussions. Skeptical about his trustworthiness in this area, staffers at the RAND Corporation cited extremely expensive gifts that he apparently has gotten from foreign governments. But as Trump was starting his second term, UAE National Security Advisor and Altman business associate Sheikh Tahnoun delivered a half-billion-dollar investment in a cryptocurrency company. And then, from the White House's Roosevelt Room, Altman, standing alongside Trump and other tech CEOs, announced a massive project called Stargate that will build AI infrastructure across the entire US. A few months later, the Trump administration rescinded export restrictions on AI technology. And then, the Saudis announced their plans to build a data center seven times larger than Central Park in the UAE. It will use about as much electricity as the city of Miami. By February of this year, Altman had maneuvered himself into position to be able to announce a partnership to deploy OpenAI's models on US military classified networks.
Speaker 4:
[11:10] I should also note, if anyone remembers this Stargate announcement, Trump took all credit for it, even though it had been in progress for months before Trump even knew about it, really.
Speaker 1:
[11:21] Didn't it also prompt an explosion amongst the sort of channeler crowd of like, oh, Stargate is coming, because it just sounded so cool. It was like that was a Lori Ladd wet dream kind of thing, right?
Speaker 9:
[11:33] Yeah, you would think so. We'd have to go back.
Speaker 1:
[11:35] I think I remember that.
Speaker 9:
[11:36] Oh, my God. The Stargate is finally here.
Speaker 1:
[11:39] It's open. Well, it's opening, right? You can go through it.
Speaker 9:
[11:43] I mean, I know a lot of people who say that every single month when the moon moves into a new sign, the Stargate is opening. Okay. OpenAI has now shuttered many of its safety-focused departments, and it has actually ceased listing safety protocols and expenses as one of its most significant activities on its most recent IRS disclosure form. Meanwhile, AI slop, AI voter suppression efforts, and seven wrongful death lawsuits against the company involving chatbots all validate the predictions that Altman and others were ironically making over a decade ago.
Speaker 4:
[12:23] Speaking briefly about their business: OpenAI follows a lot of the rhetoric of tech companies that eventually realize they have to earn a profit, that all that investment money wasn't just for philanthropic ends. I mean, remember the commercial where Facebook promised to connect the world, and Google said don't be evil, or that they won't ever do evil? OpenAI had its own Facebook moment. They were founded in 2015 as a nonprofit with the stated goal of advancing digital intelligence, quote, in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. By 2019, things turned as the nonprofit launched OpenAI LP, a capped-profit subsidiary with initial investors' returns capped at 100 times their investment.
Speaker 1:
[13:16] Oh, wow. That's really stiff. That's stiff, 100 times their investment. What a limitation.
Speaker 4:
[13:22] Usually in the investment world, there aren't caps at all. So having a cap is their attempt to soften what they were eventually going for anyway. Then they received billions of dollars from Microsoft, SoftBank and NVIDIA, and the increased corporate pressure forced OpenAI to separate its business from the nonprofit in order to attract further investment and then pursue a potential IPO, which could possibly be slated for this year. I mean, you have a rush of AI and cryptocurrency companies right now trying to IPO this year before the midterms, because they know that if they lose the House and the Senate, then it's gonna be really difficult to IPO. So it's actually kind of a mad dash at the moment. Here's the rhetoric the OpenAI board put forward at the time.
Speaker 1:
[14:12] We once again need to raise more capital than we'd imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.
Speaker 4:
[14:26] Now, as a founding member, Elon Musk plays into this story, as he wanted more and more money, even though he was on board with the initial rosy sentiment. I'll return to Musk in segment two. In 2025, though, OpenAI turned into a public benefit corporation, or PBC, with the same mission, while the nonprofit retained control and became a large shareholder. Now, OpenAI is unique in that its PBC is controlled by a nonprofit board. It's also important to note that the open part of OpenAI is a nod to the original open-source model, where anyone could access their code. There's a long history of open-source code that plays into the larger story, and I'm going to talk about that a little bit, but what matters for this particular story right now is that China's DeepSeek released its model in January 2025, which shook the American AI industry: because the code was open, they took it and evolved it very quickly, much faster than American companies were expecting. We can't diminish foreign competition on this front, especially since China does not have a great track record on human rights ethics, especially when it comes to science and technology. And in fact, users of DeepSeek today will quickly discover the model can't be critical of the CCP. When it first launched in the US, I did try this: if you ask about Tiananmen Square, the model punts and asks you to talk about other topics. Now that said, a lot of American companies were rattled by this rapid evolution of DeepSeek's abilities. And then you have Altman, ever the opportunist, using this leverage to make the case for profit seeking, which has a bit of truth to it, though he certainly used it to deflect for his own ambitions.
Speaker 1:
[16:18] You know, I think we were all interested in this article for different reasons, but in editorial, I think we zeroed in on the entry point of looking at the Altman story through the lens of cult formation. Not just because of the leadership story, but because of the subject matter, because AI has become, I think, iconic of both apocalyptic and salvation fantasies and projections. So it's pervasive, it's a hyperobject. You know, for the vast majority of people on the planet, it's poorly understood, and that would include me. The narratives that represent or exaggerate its capabilities in medicine or clean energy or the democratization of access to legal and bureaucratic processes, versus the consequences of mishandling it or it being dominated by weapons companies or evil Chinese communists, turn it into an almost godlike substance. And so the question is, like, who mediates for God? And you know, Derek, we'll talk later about open-source creators on this journey who create applications of shared benefit, or that's where they start. And you know that there's a robust community of people who are always doing that. But the field and the discourse around it are dominated by techno-oligarchs like Altman, who are culturally allowed, economically incentivized, and legally supported by the state to concentrate levels of wealth and executive power in their hands. They can't really be challenged through conventional or even democratic means. His board came close to ousting him, as we've said, for being a liar. But then he had the social capital gained through the perception of his wizardry to, you know, slime or strong-arm his way back into the driver's seat. Like, he seems like he's a little bit bulletproof.
Speaker 4:
[18:04] I can't answer who mediates for God because I don't believe in God. But I can say that, with men who want a lot of power, what we're seeing in AI right now is they don't want any regulations whatsoever. They don't want anyone mediating for them. That's the whole point. They are trying to get complete unregulated, unfettered access to use their technology however they want. So from their perspective, no one mediates for them. And that's the heart of the problem, or one of the main problems, from my perspective. And since we're talking about open source, I want to flag that this isn't unique to AI, because open-source projects have always attempted to be a bulwark against the corporatization of computing. And that's existed since the beginning of computers. GitHub Sponsors, jumping ahead to today, is one of the most famous ways for these types of projects to proliferate, and for people to use code and check code and see what other people are doing. But people involved in open source don't have the marketing or the media power of the major companies like those we're discussing. That doesn't mean they're not critical for technology, because they've led to major technological breakthroughs. What usually happens is the big companies come in, take those breakthroughs, and then privatize them.
Speaker 9:
[19:19] Yeah, and so there's a level of power there, to follow your conceit, Matthew, where the promise and the possibility and the mysteriousness, right? That hyperobject kind of quality of, oh, what is this incredible magical thing that we can project all of these sorts of meanings into? That becomes, the way you're saying it, okay, so who mediates for that mystery?
Speaker 1:
[19:45] Yeah, who's the wizard?
Speaker 9:
[19:46] Yeah, in this case, it's the tech CEOs who are kind of like the wizard or the shaman or the cult leader, who's saying, I alone can take you to the promised land of what AI represents.
Speaker 1:
[19:56] Yeah, and if I don't, then you might be in real trouble, right? I think that this paradox is actually key.
Speaker 9:
[20:03] There you go.
Speaker 1:
[20:04] Because the concentration of power is privatized and it's fickle, and, you know, one guy like Altman can wake up in the morning and make a key, central, hugely influential decision about unleashing one product or another on the world without showing much of any concern for its impact. It makes sense that, you know, crack investigators like Farrow and Marantz spend 18 months of their lives to give us an accurate psychological profile. Because when, for instance, OpenAI signs on with the DOD to help target select strikes on Iran, and Altman is personally on the phone driving the deal, it's a matter of public interest as to whether this apex wizard of a godly power is a sociopath. And I think the cult lens is most useful from that content side, because the power seems rooted in Altman's ability to manipulate a paradox. Like, his product, or his ability to channel the magic of AI, carries either ultimate destruction or the dawn of utopia. And he has power not just because of the money and the bureaucratic positioning and whatever sort of admin skills he has, but because he is a gateway to death or life. And he plays an ambivalent role at that gate. Like, what is he choosing in any given moment? Does he care enough about human beings to decide in our favor? I don't think we'll ever forget Peter Thiel sitting in that interview and saying, should humans survive? I'm not quite sure. I know they're different guys, but it's kind of a mindset that I'm pointing to. And then there's this question of, like, is Altman even in control at all? Because one weird aspect of his power is not just that he can decide whether his bots will provoke psychosis or guide drone strikes on children, but he's kind of like Rabbi Loew in Prague, who creates the golem but then can't ultimately control it. But he's there, he's still the best shot at mediating the danger.
Speaker 4:
[22:03] Just one note: you said his product. I think it's really important, and I know we're not going to get too much into it in this episode, but it's products.
Speaker 1:
[22:11] Sure.
Speaker 4:
[22:12] And that's part of the issue with AI in general: it's a suite of products that cuts across all technologies. I think part of the public furor, which is warranted, comes from that confusion in understanding that AI is just pervasive across many technologies. And I think it's really important that we talk about which specific applications of that technology we mean when we have these conversations.
Speaker 1:
[22:41] Yeah. And I think we will get to that when we start making a distinction between products and then asking, is there an underlying logic that sort of governs or pushes how they come to the fore, or how they become dominating in our lives, right?
Speaker 9:
[22:54] Yeah. And what you were referencing a moment ago, Matthew, I think is really central. Technology that creates such a significant power advantage over others globally, while also potentially enabling, as we've seen, such total control in new ways at home, does raise overwhelming moral questions and concerns about oversight. Who's watching the surveillance state, right? It's true for whoever has access to such advantageous tech, right, across the board. I share concerns about the US misusing it, whether for ICE or for foreign policy. And at the same time, there have been documented cases of China using AI surveillance and face recognition software in their ethnic cleansing of the Uyghur Muslims, and of Iran enacting similar AI-powered surveillance to identify and then target people breaking strict dress code laws. So in terms of realpolitik, it appears we're in yet another arms race, which makes our tech overlords even more powerful and dangerous.
Speaker 1:
[23:58] So, I mean, yes, everyone has evil uses for it. States have evil uses for it. I don't really understand the arms race argument. Like, what do you mean by that?
Speaker 9:
[24:07] Well, you know, in comparison to nuclear weapons, right? So the arms race is an extension of the evil uses. It's all very well for us to say, well, we should stop developing and using this kind of technology, but our enemies are not going to stop. And so if Russia, China, North Korea, Iran have the edge over Western powers in terms of AI technology, that's a serious problem, certainly for us, but I would say for the whole world.
Speaker 4:
[24:38] Just to give you an example, Matthew: AI-powered drone technology is used in war right now. And so if competitors, adversaries, develop that sort of technology, then whatever country isn't developing it will fall behind. And it's never an argument I like to make, because I am nonviolent in general and I would love a world without war. But when defense technologies are needed against actors we don't have control over, I don't really see how we stop it outside of regulations and voting the right people in, which doesn't seem like a lot at this moment in time.
Speaker 1:
[25:22] In a lot of instances on this podcast, I've described how this double offering of fear and promise, power and surrender, is at the heart of cultic formation, because followers get caught between terror and love, and they are abused by the very authority figure who can save them. And it's the pattern of domestic violence. And if Altman is an archetype of the top-level AI tycoon, I think this social dynamic mirrors and perhaps helps to lock in the economic power of his products and his role: he can help us, he can kill us. That means he has our full attention and our disorganized attachment. But at the same time, over the past several years, I've personally gone lukewarm on the cult framework in these instances, especially if, through over-psychologization, it obscures the more obvious and pervasive political economies that lock someone like Altman into a death embrace with his investors and his consumer base. Because the cult leader trope carries the stain of the monstrous: he's the disordered personality, he's weaponized mental illness, perhaps. But Altman doesn't rise to where he is without acting rationally within a permission structure. And so thinking of him as a bad apple can overlook the mold in the barrel, right? Like, in this case, the unrelenting economic logic that has allowed AI tycoons to prioritize accelerated extraction, steal human IP, and finish the process of labor immiseration. I'll circle back to that in segment three. But just today, I found out that TikTok flipped a button that forces everybody's past archive to be opted into AI remixing. Have you heard about this? You can only undo it through the app itself on your mobile phone. So if you've been on TikTok for, like, six years, every single one of your videos is now theirs to train their AI and do whatever they want with it.
And if you don't want to be part of that, you have to go back through your phone video by video and click off the button, right? Like some people are-
Speaker 9:
[27:41] Oh, it's for each video.
Speaker 1:
[27:42] For each video. You can't globally do it. And so it's like, what the, I mean, come on. You know, it's like, we just, snap, got the labor, right? Incredible stuff.
Speaker 4:
[27:54] That's the challenge of building profiles on these media platforms that are not open source. And that's it right there.
Speaker 9:
[28:02] Yeah. I mean, Matthew, I agree with everything you're saying there. There's a pulled-back, sort of meta question about this with regard to the nature of power, because it does seem that there is a kind of sociopathic-leaning personality type that will find their way into positions where they can abuse absolute power. And it seems to be the case regardless of the style of economic, political, or even religious system. Like, which ideology is immune? Not capitalist democracy, not state communism, not legally enforced Islamism, not organized Christianity, not traditional Buddhism. Sadly, and I thought it was for a long time, not the Jewish ethnostate, that's for sure. It seems like the only harm-reduction oversight we have has to include specific checks and balances with regard to how power is structured, and then make adjustments based on this new tech and make sure that no one is immune. I don't know how we then extend that to tech giants in terms of detail, but that seems to be the obvious move if their inventions carry the promise of transcending limitations for those who seek unaccountable power, which is what we're standing on the edge of, right?
Speaker 1:
[29:18] Well, maybe this is why I don't think the cult model is that great, despite the title of this episode: you just rattled off a combination of economic systems and ideologies and religions, but I think we're really talking about capitalism. It produces an ideology, but it's a mode of production with really specific qualities that are not promoted or, sort of, taught within traditional Buddhism or a Jewish ethnostate, right? It's very specific. And so, yeah, restructuring power is a huge job when it comes to a system that pretty much guarantees that Sam Altman is going to rise to the top and be rewarded for where he gets to.
Speaker 4:
[29:58] That sounds a bit like an inevitability argument.
Speaker 1:
[30:01] Well, yeah, not in a personal sense, but in a depersonalized sense. I mean, there's nothing that he's done technically with regard to his wealth accumulation or the way in which he has gathered his raw resources to train his models or the way he is impacting labor relations for the entire planet. There's nothing about that that is anything but celebrated, anything but validated, anything but legally permitted. And so I'm like, who wouldn't go into that spot? Because there's an empty permission structure that says, yeah, take what you want, right? Take what you want.
Speaker 9:
[30:41] Even more than that, Matthew, I think you're saying none of it is unexpected given the dynamics of capitalism, right? You're saying that's where it goes.
Speaker 1:
[30:49] Totally predictable.
Speaker 9:
[30:50] And what I'm saying is that regardless of all of these different ideological systems and types of political and economic and religious power arrangements, it seems like people like this tend to find their way to the top anyway and abuse power.
Speaker 1:
[31:05] I mean, I guess we're just talking about scale then, because, I mean, you could become a cult leader of a Buddhist monarchy, but it's not going to take over the world, right? No, it's not going to govern productive forces throughout the entire planet. Not yet, right?
Speaker 4:
[31:19] We still have a lot of history.
Speaker 1:
[31:21] We do. We have a lot of history in front of us. Right. One last thing that I would note about the cult framework here that I actually appreciate is that, you know, Altman's particular concentration of power, because it's so privatized, because it's so individual, it does resolve down to his body, to his actual physical presence and existence, in one way. I mean, he'll be replaced by somebody else, as I was arguing earlier, but from this framework, it matters that one guy tried to firebomb his house, and then days later, this couple drove by in their Honda Civic or whatever and shot it up. So the personal and psychological concentration brought by Farrow and Marantz, it didn't cue these crimes, but I think it's the framework that motivates the person who might say, well, yeah, we have a sociopath, we have to eliminate the sociopath. And so through the cultic lens, I think we know that organizations fall apart at the deaths of leaders, but we also know that there's too much money and infrastructure for a propaganda of the deed to change things in this case.
Speaker 9:
[32:33] Yeah, yeah, it's not like there aren't people waiting to replace him. You know, that guy, Daniel Moreno Gama, who threw that firebomb, later that same morning attacked OpenAI's headquarters, trying to break a window with a chair and then throw another firebomb in, allegedly saying that he had come to burn it down and kill anyone inside. And he apparently had a manifesto advocating violence against a list of AI executives, so maybe anyone who might take Altman's place. He provided their addresses in that manifesto, and it expressed his hope that others would follow his example. Now, his parents have called him a loving and good person who has been suffering a mental health crisis. In an online discussion late last year, he apparently called for Luigiing AI executives, which then got him an interview on a podcast, oddly, and then he walked that comment back during that podcast interview. So, strange stuff.
Speaker 1:
[33:29] The public defender did suggest as well that he was in a mental health crisis, but the DA is treating him as a rational actor.
Speaker 9:
[33:36] Well, that's predictable, right?
Speaker 1:
[33:37] But the problem, to me, is that I think that might have political consequences the state doesn't want to entertain, right? Because CNN is citing writings where he talks about impending extinction from AI, which to my ear could be argued as a rational interpretation of what Sam Altman has been saying for years, along with other tech doomers, in addition to whatever assessment of the technology Daniel has on his own. It's like, was he in a mental health crisis, or was he seeing the situation absolutely clearly, such that the only rational thing to do was to attack the head?
Speaker 9:
[34:09] Yeah. I mean, I said it's predictable because the defense is going to say, we need to go easy on this person, there are reasons beyond his control for his aberrant actions. And the prosecution is going to say, we should prosecute him to the full extent of the law. Don't try to tell me that this person deserves leniency because he has a history of autism and mental illness. No, we're going to string him up and make an example of him.
Speaker 1:
[34:38] I think either way, the state loses here, right? Because, I mean...
Speaker 9:
[34:41] Well, I don't know necessarily that the prosecutor is as connected to some kind of ideological drive to suppress legitimate political violence, if that's what you're saying this is.
Speaker 1:
[34:56] No, it's a function of what they have to do, though. They have to make the argument, and I'm saying a downstream effect of that is like, oh, he's a rational actor, right? Is it a rational action to want to attack the infrastructure, or the humans behind it?
Speaker 9:
[35:10] Yeah, I mean, it's possible that he has an accurate perception of the dangers of AI, and that that's rational, but, you know, we all have that to some extent, right, everyone in this conversation, but none of us is then taking the next step of saying we should go and try and burn down OpenAI's headquarters and kill anyone inside and claiming that that's rational, right?
Speaker 3:
[35:38] Blowing ad budget on metrics that look great till the CFO sees them? That's bull spend. And marketers are calling it out in Dashboard Confessions.
Speaker 10:
[35:48] I remember telling my boss, it'll be good for the brand when leads were slow. Yeah, it wasn't.
Speaker 3:
[35:56] Cut the bull spend. LinkedIn lets you target by company, job title and more. Advertise on LinkedIn. Spend $250 on your first campaign and get a $250 credit. Go to linkedin.com/campaign. Terms and conditions apply.
Speaker 2:
[36:10] This is a Bose moment. It's 10 blocks from the train to your apartment door. Ten basic, boring city blocks, until the beat drops in Bose clarity. Streetlights become spotlights as you strut down the sidewalk, your own personal runway. With Bose, you get every note, every bassline, every detail, just as you should. Those 10 blocks, they could be the best part of your day. Your life deserves music. Your music deserves Bose. Find your perfect product at bose.com.
Speaker 5:
[36:40] Introducing the new Best Skin Ever Ultra Slim Precision Concealer from Sephora Collection. It's full coverage with a matte finish and perfect for any look. Whether you're building it up for a full glam moment or targeting correction for a more natural vibe. At only $12, it's great for affordable touch-ups on the go. Get this new must-have concealer at Sephora or at sephora.com today.
Speaker 4:
[37:08] Let's turn to something lighter and fun. Data centers.
Speaker 9:
[37:13] So fascinating.
Speaker 4:
[37:14] This is going to be a heavy episode. The scale of power demand right now is staggering. A hyperscale data center can consume as much power as 100,000 homes. Meta's Hyperion data center, being built in Louisiana, is expected to draw more than twice the power of New Orleans. One report predicts that US data centers' total combined energy demand will nearly double between 2025 and 2028, from 80 gigawatts up to 150. That's like adding a country with the energy needs of Spain in just three years.
Speaker 9:
[37:51] We're so screwed.
Speaker 4:
[37:52] Then there's cooling the hardware, which is a massive cost. Larger data centers require up to 5 million gallons of water per day, equivalent to the needs of a city of 50,000 people. Much of this water evaporates and cannot be recovered as treatable wastewater. At the current rate of AI growth, by 2030 data centers could consume the equivalent of the annual household water usage of 6 to 10 million Americans. And of course there's the carbon footprint, which is predicted to reach nearly 80 million tons of carbon dioxide emissions this year, basically what New York City will produce. And while a number of projects do use solar and wind to power data centers, 56% of data center electricity still comes from fossil fuels.

One thing we haven't talked about much, though we've touched on it, is that this is happening under the Trump administration, which has zero interest in regulating this problem. And what's even more sinister is that the infrastructure burden is falling on ratepayers, so all of us. Residential electricity prices jumped 7.1% in 2025, which is more than double inflation. In some states, they rose 20%. I know my power bill has gone through the roof over four years of living here in Oregon. Then we get to the land issue. Communities are grappling with noise pollution, energy and water requirements that strain local resources, and loss of habitats. Large data centers, as you flagged earlier, can cover hundreds of acres.

Right before we recorded, the newest issue of Mother Jones came out; it focuses on AI. Friend of the pod Kiera Butler wrote an excellent piece about the strange relationship between religious conservatives and support for AI, some of whom believe a Christ-like Messiah is going to emerge from this technology. But I want to talk about the cover story, which is called Empire Builders. A lot of the info I have for this segment is from that article. I'll include a link in the show notes.
We're going to discuss some potential solutions at the end of the episode, but I'll preempt that by saying: if these centers are going to be built, sharing ownership or creating an Alaska-style kicker fund could alleviate some of this distress. In fact, Alex Bores, who is running for Congress in New York State right now, is calling for exactly that, a citizen kicker fund that would return some money to residents, which I think is excellent, but it still does not answer the environmental question.
Speaker 9:
[40:24] And as a result, as a result, the tech oligarchs are trying to smear him and shut him down and make sure he doesn't get any power, right?
Speaker 4:
[40:31] Yeah. Well, he found out they're trying to make an example of him, to show anyone who wants to regulate AI that they'd better watch out. And these builders who you just referenced, Julian, they mostly promise jobs, which gives them cover and is partly true, but never to the extent they promise. Here in Oregon, our government is offering tax incentives to data center builders to drive business to the state. We do need business here, that's a real problem, but I don't think this is the right way. There's one Google site built in The Dalles that has produced only 200 full-time jobs. Not nothing, but Google received $260 million in tax incentives to build there. And speaking of building, Tim Murphy, the Mother Jones author, writes about one particularly disturbing instance around Elon Musk's XAI data center project. It's a long excerpt, but for this conversation I feel it's really relevant, so I clipped it all. It highlights how billionaires snake around regulations.
Speaker 9:
[41:38] Colossus One, the first of three XAI facilities in and around Memphis, embodied the build-at-any-cost mindset that was propelling the hyperscale boom and the co-mingling of corporate and political power it was building toward. In 2024, desperately playing catch-up to OpenAI and Meta, Musk struck a deal with the Chamber of Commerce to construct what he has marketed as the world's largest and most powerful supercomputer in furtherance of XAI's mission to understand the true nature of the universe.
Speaker 1:
[42:11] I thought Grok could do that.
Speaker 9:
[42:13] It was up and running in 122 days, a remarkable feat that he pulled off by signing a ton of NDAs and treating the Clean Air Act like a CVS receipt. Although Musk built his fortune by collecting federal subsidies for green energy, XAI initially powered the site with dozens of old gas turbines, which it claimed, when pressed months later, were exempt from permit requirements because they were temporary. The Clean Air Act's exemptions were meant for things like lawnmowers, said Patrick Anderson, an attorney at the Southern Environmental Law Center, which threatened to sue XAI last year. Musk's turbines were emitting the amount of pollution you might see from a plant 10 times larger, Anderson said. The EPA appeared to accept the Law Center's argument in a regulatory decision in January. Two months later, Mississippi regulators approved 41 gas turbines at an XAI plant across the state line. The XAI experiment was clarifying in its brazenness. Musk hardly pitched his new neighbors at all. The deal was hammered out before the residents of nearby Boxtown, a largely Black neighborhood in a city with one of the highest asthma rates in the country, were aware it was happening.
Speaker 4:
[43:29] Hard to believe the South African wouldn't care about that. Musk, the South African, to be clear, Julian. One question we have between us, and that society has overall, is: do the costs outweigh the benefits? You've probably read about recent polls saying that people are now largely more concerned about the risks than the benefits. I am included in that number, even though I am not completely anti-AI. This is, to me, a difficult question to answer; I'm hard pressed to think of any technology that doesn't have both positive and negative aspects. So we have to weigh that. But this particular next quote is easy, because Murphy pings this elsewhere in the article, quote: The pride and joy of Grok is that it can create a racist Mickey Mouse.
Speaker 1:
[44:16] Democratic state representative Justin Pearson said, It hurts my stomach every time I see Grok in the news because I know that's being powered by the pollution that we're experiencing in our community. The racist posts were one facet of the problem. All the sexual abuse material was another. According to the New York Times, 41% of all images generated by the Colossus-trained Grok over a nine-day period starting in late December were sexualized images of women, while an analysis from the Center for Countering Digital Hate estimated that Grok had produced 23,000 sexualized images of children.
Speaker 4:
[44:53] Makes it pretty easy to hate AI when that is happening, and I completely understand that. And a lot of times we see, you know, recently in Budapest, for example, you saw so many people come out to celebrate the downfall of Viktor Orban. And I've been in Hungary, my family comes from there largely. And the thing about European nations is that, relatively speaking, they're smaller and they also have central gathering places. The US does not, and that's part of the challenge here because we're regulating basically 50 different countries that are trying to collaborate under one federal system. And the Musk incident highlights this perfectly. Oh, you're not gonna let me build here. I can just go one state over because they're gonna take my money. I'll return to Musk and data centers in a moment. But on the topic of states, there is actual traction happening in pushing back on the data center rush. So we're gonna go through just a quick bullet list here.
Speaker 9:
[45:52] In 2025 alone, over 40 states considered 267 data center related bills, a number of them bipartisan efforts to slow or stop them being constructed.
Speaker 1:
[46:03] On April 14th, the Maine Senate approved first-of-its-kind legislation banning large data centers in the state until November of 2027, so that they can create a council to study future electric load projections and identify strategies to protect Mainers from paying higher electricity rates.
Speaker 9:
[46:19] Maine is not alone. At least 11 states have proposed some legislation to restrict or ban data center development, while another dozen have seen local pushback or enacted restrictions tackling environmental concerns, consumer data or energy bills. Similar temporary bans are being proposed in New York, South Carolina, Oklahoma and Vermont. There are also dozens of local bans at the county and municipal level.
Speaker 1:
[46:43] Ohio residents are attempting to bypass the legislature entirely and get a measure on the November ballot that would permanently ban hyperscale data centers.
Speaker 9:
[46:53] Voters in Festus, Missouri, a suburb of St. Louis, replaced half of their city's eight-member city council this month amid a backlash over a local data center project.
Speaker 1:
[47:03] At least 36 data centers were blocked or delayed between May of 2024 and June of 2025, driven by concerns about rising electricity prices and environmental harms. This includes efforts in Virginia, Minnesota, Indiana, Missouri and Oregon. So we see a lot of local pushback to the extent that it can be organized, to the extent that local municipalities have strong administrative cores, right?
Speaker 4:
[47:31] And states.
Speaker 1:
[47:32] And states, right. Here's a story from where I live about how tech and AI capital is following its borderless global flow, because I think that's what these localized efforts are ultimately going to run up against: whenever some community wins a concession, some regulatory protection for their zone, the data center is just going to get booted down the road somewhere else. Anyway, if I drive west from here in Toronto to Niagara Falls, there's an endless bounty of green. There's vegetables exploding in the summer season, livestock all over the place, mixed crops. Everything's coming out of this ancient glacial till. It's deep black soil with excellent water retention and drainage, fed and regulated by these old glacial lakes. The soil is productive. It's regenerative. And that means it requires far less of the fertilizer that's now tied up in the Strait of Hormuz. It's a soil that approaches self-sufficiency when it's well managed.
Speaker 9:
[48:40] I just want to thank you, Matthew, for creating this little oasis of just describing the natural world. It's really refreshing in terms of everything else we've been talking about.
Speaker 1:
[48:49] Right, which I mean, I'm setting it up to not get... To burn it to the ground.
Speaker 9:
[48:54] Such is the tragic nature of our art form.
Speaker 1:
[48:57] In March of 2024, 12 farming families in the heart of this zone, it's called Wilmot Township, they get knocks on their doors. And now, who is it? These are agents working for Canacre, which is a US-linked land acquisition firm acting for the Region of Waterloo. And these guys say, we need you to sell your farms to us at $35,000 an acre, or face expropriation. And the land, once rezoned industrial, would be worth over a million an acre. So they're lowballing them too.
Speaker 9:
[49:31] Yeah, and somehow they managed to have a name that is just one letter away from spelling cancer. It's just Canacre.
Speaker 1:
[49:38] So now the purpose of the land grab is buried under NDAs and closed council meetings. There's FOIA requests that have been denied. And last year investigative reporting identified that one likely tenant would be Toyota building an EV battery facility to anchor Ontario's automotive supply chain. This is against the Trump tariff pressure on Canadian automakers. But now there is another potential tenant, and it's called QScale. It's a Quebec based AI infrastructure company, which has stated that Wilmot Township is among their shortlisted sites for a data center. Now on the side of the government, while this shit is going down, the premier here is Doug Ford. And he's rammed through something called Bill 162, which he calls the Get It Done Act, which sounds like what it sounds like. It dissolves local consultation boards to appoint strong regional mayors among other provisions to expedite land theft. So all of the mechanisms by which the states and the municipalities that we just covered in the previous section are sort of binding themselves together to vote on proposals and do things is all being sort of stripped out of the local democratic process that's been set up and functional in Ontario for probably more than a century. And additionally, there's a Federal Omnibus Bill getting rammed through Ottawa by the Carney government that does the same thing because what it does is it gives individual cabinet members just full personal discretion to sideline any regulation or law up to and including, or not including rather, the criminal code if they want to get something done, if they want to propose some project for the national good. So the farmers are organizing. But there's also a tradition of First Nations people who always end up putting their bodies in front of the bulldozers here. And I think everybody that I know, including myself, are wondering how we're gonna end up supporting that because I think it's gonna come to that.
Speaker 4:
[51:40] One last piece of this. You flagged that they'll punt it down the road somewhere, and that is true until it's not, because there's another place where they might be building data centers. This story concerns Artemis. A lot of people were really psyched about the recent lunar expedition and the forthcoming plans to return to the moon, and I don't want to take anything away from the awesome astronauts or people's love of space travel. But it is good to know just how deeply Elon Musk is entwined in this story as well, and what it actually represents for the broader story we're telling. I noticed a lot of people online crediting NASA for the Artemis mission, but for the last few years it's been getting harder to distinguish between our federal space agency and SpaceX, which looks like it's going to be IPO-ing at $1.75 trillion this June. As a private company, SpaceX has launched over 80% of all missions globally over the last three years. The next phase includes their next-generation V3 Starlink satellites, which Musk plans to use for eventual data centers in space. That's his big plan: occupy space with data centers so that incels can churn out AI porn on X.
Speaker 1:
[53:04] I miss the earth so much. I miss my wife. It's lonely out in space. On such a timeless flight. In fact, it's cold as hell, Elon. It's cold as hell.
Speaker 4:
[53:15] It's really important to know just how much space Musk occupies in space right now. SpaceX now has over 10,000 Starlink satellites floating out there, and the FCC recently granted him permission to go to 15,000. Those satellites are in part contracted with government agencies to provide internet service. Who really knows how much data he's allowed to collect? But back to Artemis: NASA has contracted SpaceX's Starship Human Landing System to transport Artemis astronauts from lunar orbit to the moon's surface, with SpaceX required to fly at least one uncrewed demo mission before the crewed Artemis landing. I believe that's in 2028. What you're seeing is not so much about landing on the moon again, go America, we're there. It is about eventually colonizing the moon with data centers, Musk has said this blatantly, and about serving as a launch pad for Musk to fly his rockets to Mars.
Speaker 10:
[54:51] 12-month special financing, now at The Home Depot. Offer valid April 16th through May 3rd, 2026. Exclusions apply. For licenses, see homedepot.com/licensenumbers.
Speaker 11:
[55:00] All right, ladies. When you've done the work, you want your hydration to do the same. Introducing new Gatorade Lower Sugar, now with no artificial flavors, sweeteners or colors. And 75% less sugar and all the electrolytes of regular Gatorade, now available nationwide.
Speaker 6:
[55:16] Your favorite local grocery stores like Kroger, Ralphs, Fred Meyer and more are now delivering on Uber Eats. Get 40% off your order of fresh, quality ingredients. Whether you just got home to an empty fridge or suddenly got a craving to whip up something new, you can get everything you need delivered in as little as 25 minutes. Get 40% off your order with code KROGER2026. Plus, members get $0 delivery fees. Order now on Uber Eats. Orders of $30 or more save up to $25. Ends 4/30/26. See app for details.
Speaker 4:
[55:46] Okay. Like I said, it's a lot today. I want to turn to a question about AI and computers that I've personally weighed for some time. Did we think, collectively, as humans, that technology would stop at some point? Historically speaking, every technology advances over time, sometimes over centuries, sometimes much faster. Given that we've had an accelerating population explosion since the 19th century, when we jumped from around a billion humans to over eight billion in less than 150 years, it makes sense that our technologies would also accelerate, given the number of people working on them and the collective knowledge we've acquired. AI isn't a concept that was added onto compute power as an afterthought. It was part of the foundation of computers in the very first place. The mid-19th-century mathematician Ada Lovelace was theorizing about these concepts nearly a century before the first computers came into existence. At 17, she met Charles Babbage, the inventor who was working on a theoretical computer called the Analytical Engine. Ada collaborated with Babbage for the rest of her life, and she's responsible for writing what is considered to be the first algorithm intended for the Analytical Engine. Her ideas laid the conceptual foundation of the modern general-purpose computer, based on her mathematical understanding of loops, which would become one of the most fundamental constructs in all of programming.
Speaker 9:
[57:19] Just genius.
Speaker 4:
[57:21] Yeah. Neither of these figures saw computers actually exist, just to be clear. This is all math beyond my scope of understanding, but as someone who loves history, I love stories like this for understanding origins. Now, in her notes, Lovelace wrote what became known as the Lovelace objection, noting that the hypothesized Analytical Engine cannot think, but only execute what humans input. And this philosophical boundary became one of the central debates in AI, because when computers did become a reality, AI was discussed from nearly day one. Fast forward to fellow mathematician and computer scientist Alan Turing, who responded to this question in a 1950 paper, arguing against what he called Lady Lovelace's objection. This is a landmark paper. It's called Computing Machinery and Intelligence, and it asks whether machines can think. This is where he proposed what became the Turing test, just four years after the first general-purpose electronic computers came online. From that day, programmers set out to create systems that passed the Turing test, which may or may not have happened in 2014, because that benchmark remains debated. What we do know is that early computing and AI co-evolved. The term artificial intelligence was coined at the 1956 Dartmouth Conference, building on advances in programming and logic. Early computing pioneers like John McCarthy, Marvin Minsky and Claude Shannon were actively building the foundations of both. The modern GPU was shaped by deep learning's demands as researchers attempted to create complex AI programs. In fact, Lisp, which is one of the oldest high-level languages still in use, was created specifically for AI research in 1958. Many other concepts in modern programming, like recursion, dynamic typing and garbage collection, all trace back to AI language work.
A lot of people right now are rightfully surprised by the seemingly rapid advances in AI, and I'm not trying to detract from the very important conversations that we and society are having about the benefits versus the problems. I'm just pointing out that this drive toward creating intelligent systems is not an afterthought in the devices everyone is listening to this podcast on right now. It was baked into the system from day one.
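[Editor's note: to make the Lovelace history above concrete, her Note G program computed Bernoulli numbers on the Analytical Engine by looping over a recurrence. Here is a minimal modern sketch of that same computation in Python. The function name and the standard recurrence shown here are a present-day illustration, not her original notation or indexing convention.]

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Compute exact Bernoulli numbers B_0..B_n via the standard recurrence
    B_m = -1/(m+1) * sum_{j<m} C(m+1, j) * B_j, with B_0 = 1.
    The loop over m is the kind of repeated operation Lovelace described."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        # Accumulate the weighted sum of all earlier Bernoulli numbers.
        acc = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B[m] = -acc / Fraction(m + 1)
    return B

# bernoulli(4) yields [1, -1/2, 1/6, 0, -1/30]
```

The point of the sketch is just that a loop plus stored intermediate results, exactly the constructs Lovelace reasoned about, is enough to mechanize a nontrivial piece of mathematics.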
Speaker 1:
[59:59] You know, I love this history as well. It's amazing what people are able to do. What I think of is that the baking process itself deserves a really careful look. So my comments are offered as a non-specialist; I don't understand much of this stuff at all. But I do listen for the story of power and class difference when talking about tech development in the industrial age. Now, Babbage and Lovelace are pulling on math that's centuries or millennia old. But if we just start our consideration from the industrial age, the pure-science narrative effectively shows how one concept leads to another with this kind of eureka feeling of naturalness and inevitability. But if we consider class interests, I think we start getting into how those flows of logic run parallel to flows of capital and power.
Speaker 4:
[60:54] Wait, that's interesting. I'm sorry, I don't mean to interrupt, but I just want to be clear on this. So if it feels like inevitability from a naturalist perspective, we have to question it. But if it feels like inevitability from an economic system, that is part of the logic behind it.
Speaker 1:
[61:11] Well, I think that there's nothing that is fully chosen or consented to when we get into the pervasiveness of capitalist logic. It's something that we can see as an economic system. It's not organic, in the sense that it arises out of a series of historical contingencies. But it's also something that, because we can see it, we can change it and we can question it. It's not like pure ideas that get handed down from teacher to student and then developed on, right?
Speaker 4:
[61:48] Okay, so thank you. So it's like all technologies, they can be changed and redirected depending on who's doing the engineering. That's an economic system, that's a technology.
Speaker 1:
[61:58] That's what I want to focus on: who's doing the engineering, right? Because when I was thinking about Altman, AI and late capitalism, what I remembered was this passage from Capital, Chapter 15, where Marx really starts grappling with mechanization, and specifically how it separates intellectual from physical power so that the former, the intellectual power, can dominate the latter, the physical power. So he writes: the separation of the intellectual powers of production from manual labor and the conversion of those powers into the might of capital over labor is finally completed by modern industry erected on the foundation of machinery. The special skill of each individual insignificant factory operative vanishes as an infinitesimal quantity before the science, the gigantic physical forces and the mass of labor that are embodied in the factory mechanism, and together with that mechanism constitute the power of the master. So there's a lot there, and I think it's a good passage for reflecting on why this stuff gets obsessively studied and argued about 150 years on, because it's also very poetic. But he's naming the triumph of mechanization in production as a peak point in a kind of Cartesian splitting, where the body shrinks in skill and relevance and galaxy brains can take over. And where this intersects with computing history is that Marx actually got to some of that in part by studying guys like Babbage directly. Because Marx is aware of where Babbage comes from; he pings everybody for where they come from. Babbage was the son of a banking family. And Lovelace was an aristocrat, the daughter of Lord Byron. So right there, the knowledge and training and access to intellectual power, from a social-relations point of view, is pertinent.
Speaker 4:
[63:48] Does he ever, I'm sorry, does he ever ping Engels for where he came from?
Speaker 1:
[63:52] Absolutely. In fact, I'm going to say later that Engels did the exact opposite of what Babbage did. Engels was a fucking class traitor, right? He went into his father's factory and he said, oh my God, what's happening here? And everybody knew that he was funding his friend, for sure.
Speaker 4:
[64:09] No, we talk about this. I'm not as schooled in this. So Marx grappled with the fact that he was being funded to do his work through this?
Speaker 1:
[64:16] He was completely aware of the contradictions of what he's doing, just as we are as critics of capitalism within a capitalist system.
Speaker 4:
[64:25] It's a broader point, because this is one of the central things I see in all the questioning of AI: people who are criticizing it and saying that it shouldn't exist are also using it. And that's a contradiction that I think is always going to be a challenge to square. So I'm glad that we can air out the contradictions and try to make sense of them.
Speaker 9:
[64:43] There's that contradiction, but there's also the thing that I think just sparked your question, which is that we're talking about Marx looking at someone like Babbage and pinging, oh, look at his background, look at where his money comes from. And so Derek is going, oh, well, if that's fair game, is it also fair game to say, look at where Engels's money came from? And absolutely it is.
Speaker 1:
[65:05] Yeah. And in fact, if you didn't, you would be dishonest about how material conditions produce the criticism of capitalism. Capitalism produces its own contradiction within its capacity to criticize itself. And so that's part of the story. So Babbage was interested in mechanization processes that shrunk production down into tiny little routines. He actually sort of prefigured Ford in that sense. And the point was to maximize efficiency. He wrote a book in 1832, On the Economy of Machinery and Manufactures, that formalized factory production as a mathematical problem. And his whole point is, you know, we can make more money if we miniaturize these processes. He didn't have any problem with that; that's what he wanted to do. Because he was showing that decomposing skilled work into cheaper unskilled components maximized profit. And he did that by surveying hundreds of factories. And as we're saying, Derek, a decade later Engels starts his own tour of factories with the opposite perspective, which he has access to because that's his own class. And he's paying attention to what industrial tech is doing to workers. And then he publishes his findings in basically the first book of political sociology in the world, The Condition of the Working Class in England. By the time he gets to Capital in the 1860s, Marx recognized Babbage's book as the clearest account of capitalist labor reduction ever written, and as a contribution to the analytical software that runs this economic logic. My point is that our mainstream discourse continually considers this current evolution of computing as if, I think, it is politically and economically neutral, or ambivalent, or without its own content and purpose. And I think there's a deeper story there, because things get produced through social relations. And I don't see a lot of democracy in the biggest choices that get made.
Speaker 9:
[67:08] Yeah, it's such an interesting thing to dig into. I think it's tricky to tease apart. This, again, is one of those big-picture questions, right? Or meta questions, like how science progresses or how knowledge about the world is discovered, and teasing that apart from who's deciding what to focus on and to what end, like, who do these discoveries serve and what are they trying to do with them? But somewhere in that distinction, I think, for example, there's the reality of how viruses work objectively and how vaccines can radically reduce their impact. And that's purely factual. While how we structure access to vaccines and how much they cost is more social and political. So I wonder how these layers extend to the ontology of what we discover about computer science over time, and then how that gets utilized. And of course, the question for me, when I hear where you're going, is: might we have discovered other things about how computer technology works if someone else was driving the ship?
Speaker 1:
[68:11] I don't know. That's a really great question. And I think the vaccine example is a great counter example because most people at the ground floor of studying variolation, inoculation, vaccination are going to belong to the same literate class that Babbage comes from who can fund experiments, record results. But I wouldn't think that maximizing productivity is their main aim. Like I'm thinking about the Chinese emperor who innovated variolation by grinding up the scabs and shooting them into nostrils. He was terrified of smallpox taking another one of his own kids, right?
Speaker 9:
[68:45] Yes, but there is something also to be said for, okay, mass production maximizes profits, but it also enables us to get to millions of vaccine shots going into children's arms and saving them from polio.
Speaker 1:
[68:57] Right, and because beating smallpox naturally invokes cross-class solidarity, for the people who understand that viruses don't care whether you're rich or poor, there's going to be an incentive there to scale, but profit motives are going to be downstream of that, I think. So maybe there's a distinction to make between production and medical sciences, like making wealth versus making health. Those seem to be different categories.
Speaker 4:
[69:27] Oh, they're not. I mean, that's well known because the social determinants of health show that the more wealth you have, the healthier you are. So they can't be disentangled in our current...
Speaker 1:
[69:37] Yeah, I was talking about what your intention was for making the actual thing, right? Like, are you making something so that you can maximize production of a number of widgets, or are you making vaccines? I know about that. We know about the social determinants of health.
Speaker 4:
[69:51] The intention behind it, yeah, that's a little trickier, too, I mean, because everyone always likes to talk about the polio vaccine, how it was open source, but in reality, it was only because the lawyers said you couldn't patent it. So there's long been a sort of economic motive behind a lot of the applications. Although, sure, some people are purely philanthropic. The study of variolation exists before that, in pre-literate societies. Rabies is actually the first example. Before the Chinese Empire developed, in the rural areas of pre-literate China, they were studying rabies, and they would advocate, if you were bitten by a dog, to open up the dog's head, take part of its brain, and smear it on the wound. So this is oral folk medicine, and that perpetuates across the concepts of variolation in Africa as well. So there is definitely a strong pre-literate drive to healing that exists. We know about this stuff because of the literature; it probably goes far beyond that. I do want to get to this question of ambivalence, though. I've heard you say it before. I don't personally think any technology is ambivalent. Maybe some people do, but that's why I said it's always a cost-benefit analysis that goes with any technology. The earliest technology of humans, a stick, can be used to harvest for grubs, or it could be used to poke someone's eye out. So the technological advances will always have to be considered in that light. And if anyone ever considered a technology ambivalent, I think they would just not really know what the technology is. And speaking of, I flagged this before, but I think we do need to talk about the open source movement, because that's a big part of this story. There's a movement of coders advocating for these very cross-class principles.
Before software was even decoupled from hardware, they didn't want to see the commercialization of what they felt could be a tool for everyone. And so in 1955, with IBM, people were talking about how they should not be using this for capitalist ends, that the software running this hardware should be shared; they were sending code back and forth through the mail before we were able to email things. And some of these people created the theoretical writings that would eventually lead to Bitcoin, because that comes from literature by anarchists who wanted a currency that existed outside of the banking systems. If you're talking about crass commercialization for profit, yes, that has absolutely existed with basically every technology we've produced. But if we're talking about the morality of origins, there have always been people advocating for the commons, often before commercialization even begins, because they can see it coming down the road, and they're like, let's get ahead of this. And unfortunately, it usually doesn't work out well.
Speaker 1:
[72:49] Well, I do know that there's this window in the 70s and 80s when the internet is conceptualized as the commons. And this is initially driven by people who pioneer the Californian ideology thing, which is anarcho-hippieism and libertarian resistance to regulation. So how long do those sort of emergences, those blips of common goodness and intentionality, last?
Speaker 4:
[73:15] They're not blips. I mean, they never have gone away. 1955 was an early instance of an open-source community, back when software, like I said, was embedded in hardware. But it picked up in the 70s because, ironically, Bill Gates came out after the quote-unquote hobbyists and said, hey, you're going to slow down development if you keep sharing this shit for free, which is kind of ironic. So the actual 70s moment you're talking about was a fuck-you to Bill Gates. And there has been a robust open-source community ever since. It's alive and well today. Again, I talked about GitHub before; you can view the work there. There are other open-source communities. The problem is you have to know where to look and what you're looking at. And that is a barrier for a lot of people.
Speaker 1:
[74:00] But every techno-optimist needs to learn that skill, right? They need to figure out how to use GitHub. They need to figure out what they're looking for.
Speaker 4:
[74:09] You're using that term wrong. I mean, you're using a term that was weaponized by Andreessen and then later Altman. People who enjoy technology aren't by default optimists. Like, you don't have to think technology is going to lead to a utopia in order to enjoy coding.
Speaker 1:
[74:27] Maybe I'm using the term wrong. What I'm talking about is, I'm not quite sure how we got to this discussion, actually, of open source versus proprietary. But when you're describing open source communities, and I use the term blip, I'm talking about something that attains broad cultural and economic reach. But you're saying that there's a network of open source creators who are always doing work parallel to the dominant culture, who might be suggesting that less privatization of profit, to accord with the socialization of labor, might be a good idea, that we might want to create more things that are just free, that we might want to improve society. There are a lot of people doing that, and I'm wondering if that's like the creative fringe of an art discipline, where people make things for use value and the joy of doing it, according to their resources, and occasionally an innovation gets pulled into the productive and economic stream, right?
Speaker 4:
[75:26] Oh, not occasionally, very often. I mean, Microsoft, Google, Meta, they all actually fund open source projects. For a while, Google had an allowance of something like five or ten hours a week where you could go work on your own projects and still be paid for them, because they wanted people to have joy in their work as well, not just the work they were doing for the company. It never rises to mainstream attention, the work that's being done behind the scenes all the time, but it does appear in the products they develop sometimes. There is very robust open software for pretty much anything that you'd otherwise need to pay for; you can go online and find it. When I need to rip videos, I use HandBrake, which is free. It's open source. It's constantly updated. No one makes a dollar off it unless you volunteer to give some donations. So these exist, but crossing over into the mainstream, no, there's not a marketing budget for this. And because they're not privatized, they often don't have the same security guarantees that private companies are beholden to. So it makes it a little more challenging. And just to answer your question about how we got on this: it was a question about the morality of origins. And I'm just saying, yes, Babbage had a theoretical understanding of where computers would eventually get to, but ever since computers first came into production, there has always been a community that has wanted to keep them away from commercialization.
Speaker 1:
[77:03] Yeah. And I could imagine Charles Babbage having, like, a socialist-minded brother who was also fascinated by analytical machines and computations and thought they would liberate generations of workers to pursue lives of cultural development on a commons-distributed income, right? Like, I suppose it could have happened.
Speaker 4:
[77:22] So I want to close on a different, hopeful note here, as much as it is a call to action. A friend of the pod and one of my closest, oldest friends, Dax-Devlon Ross, recently co-wrote an article with his friend Jason Council called John Henry for the AI Era: Raging With the Machine. I've linked to it in the show notes. Both are Black men who have worked in DEI consulting spaces for decades, Dax predominantly with nonprofits, and they make a compelling argument: Black people haven't been considered in many of the technologies that have been developed, so they don't want people to sit this one out either. Get in there, train the systems to be more representative of everyone, not just the oligarch class.
Speaker 9:
[78:06] And in the paper, they coin the term DEAI, where they write, Owners of capital, the industrialists and entrepreneurs, may feel insulated from immediate threats, yet the digital divide exacerbates vulnerabilities for those without access to AI's advantages. This moment demands a united response to reframe diversity, equity and inclusion, DEI, in the context of AI, moving from a framework that has faltered into one that actively addresses systemic inequalities. The future scenarios we face hinge on our ability to collaborate with AI, rather than resist it, ensuring that the benefits of innovation are shared equitably. In this new paradigm, the imperative is clear. We must consciously design AI systems that reflect our highest values and aspirations, or risk embedding historical inequities in the very fabric of our technological progress.
Speaker 4:
[79:02] And that basically sums up a lot of my own thinking on this topic. We have to regulate the industry. There's a lot of criticism that's needed. Public sentiment has turned against it. I just found out about a poll, though, that said 10% of people want to turn back the clock and have no AI, 10% of Americans want no regulation, they're the techno-optimists, and 80% are fearful but want it regulated and want it as part of their lives. So that's where we sort of land at this moment. The technology is here. It's going to continue to be developed. So as Dax and Jason suggest, look for opportunities where you can effect change within it. Whether or not computers will ever definitively pass the Turing test remains to be seen. But we also know that AI is still being made with human intelligence. And the more intelligence that takes part in it, from my perspective, the better.
Speaker 1:
[79:59] You know, I appreciate Dax and Jason's post. I think the question I have is, when they say we must consciously design AI systems that reflect our highest values and aspirations, will the capital that largely controls the tech and its distribution and the projects it's assigned to ultimately let anyone do that in any substantial way? Or, for example, is the host of the blog post that they're writing just fine with hosting that message because it's content that generates traffic, and because the message is not saying, like, well, we've got to get together and go do something about data centers?
Speaker 4:
[80:45] I don't know, maybe. I mean, they might want to do stuff about data centers, like regulate them or put moratoriums on them, which I think would be fine. That's not really the question they're addressing. What they are saying is that Black people want a voice. And there was a very poorly constructed video by Reese Witherspoon making an argument I agree with, even though I don't think she's being honest about it, which is women in tech. That's been a movement for a while. You have all of these different communities saying, hey, we want a voice as well. Will the capital flow to them? I don't know. Historically, it has not. From my perspective, them trying to get in on the game and advocating for others to do so is really important. And I think it's potentially a lot more effectual than not doing anything. So I'm happy to see people in the space actively working toward that.