title Is AI Trending Up or Down in 2026? | AI Reality Check

description Cal Newport takes a critical look at recent AI News.



Video from today’s episode: youtube.com/calnewportmedia



0:00 What has *Actually* Happened in AI in 2026? 

3:07 OpenClaw

27:53 Anthropic and the Department of War

49:06 Data Centers



Links:

Buy Cal’s latest book, “Slow Productivity” at www.calnewport.com/slow 

https://www.axios.com/2026/01/31/ai-moltbook-human-need-tech

https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/

https://www.anthropic.com/news/statement-department-of-war

https://futurism.com/science-energy/data-centers-construction-supply

Thanks to Jesse Miller for production and mastering and Nate Mechler for research and newsletter.
Learn more about your ad choices. Visit podcastchoices.com/adchoices

pubDate Thu, 23 Apr 2026 19:30:00 GMT

author Cal Newport

duration 4383000

transcript

Speaker 1:
[00:00] AI news comes at you fast. Each article feels more breathless and more terrifying than the last, but before you have a chance to see how any particular story turns out, there's 10 more in its place. I think this speed and lack of accountability can create a sense of overwhelming disruption and change that can really be pretty disquieting. Well, it's Thursday, which means it's time for an AI reality check episode, so I thought this would be a great opportunity to try to slow down this news onslaught and get a better sense of what has actually been happening in the AI space recently. All right, here's my plan. I've invited the AI commentator, Ed Zitron, to join me, and we're going to look at three of the biggest stories about AI to land in 2026 so far, including one in which Ed is actually very much involved. What we're going to do is for each of these stories, we're going to take a closer look at what actually happened and how things have since turned out. Our goal by the end of the episode is to answer a simple but critical question. Has 2026 been a good or bad year for AI so far? And we have a lot to cover, so let's get right into it. As always, I'm Cal Newport, and this is Deep Questions, the show for people seeking depth in a distracted world. And we'll get started right after the music. All right, Ed, well, it's been three or four months since you were last on the show, and there's been some big AI news since then. So I wanted to have you on to go through some of the big stories that have happened since January. And because you're a commentator who is, maybe I should say this, less impressionable than the average AI commentator, I figured your point of view is good for my reality check audience. We're gonna try to end this discussion by voting whether or not 2026 has been good or bad for AI so far. But what's your pre-vote? Where do you think, based on what you know, you're gonna end up here?

Speaker 2:
[02:09] Probably not a good time for them. It's just every time we talk, it's like there's very big news and everyone's like, oh, look at the, we've got a new number. It's even higher than usual, but the actual underlying economics and infrastructural layer or even just the service performance is worse. And it's very strange.

Speaker 1:
[02:28] Well, this is part of the reason why I like doing these reviews with you is often the story will be big. Everyone will get worried about it. People will call people like you and I for quotes. And then everything moves on and there's no follow-up. And I think it's useful for calibrating how to react to the new story you're hearing now to occasionally go back and say, hey, what happened with that story that had me worked up a couple of months ago, which brings us to a great place to start because what was the first big story of 2026? I think arguably it would be OpenClaw, which I believe became generally available to the public later in January. Now, I've broken this up into two sub-stories. I want to start with the easily dismissible one just because it's fun and then get to the more serious one. I'm going to read you a quote and we'll get into it. So the easily dismissible but fun aspect of this story is when someone opened Moltbook, a social network that was configured so that it is easy, if you're writing an OpenClaw agent, to post on it. So they added hooks into it so it was easy for your OpenClaw agents to post and read things from the social network. For about four days, everybody went crazy about Moltbook. I'm just going to read you a quick quote from your favorite publication, Axios, from the end of January. Imagine waking up to discover that the AI agent you built has acquired a voice and is calling you to chat while comparing notes about you with other agents in their own private social network. It's not science fiction. It's happening right now and it's freaking out some of the smartest names in AI. Well, you're a smart name in AI, so are you still freaked out about Moltbook?

Speaker 2:
[04:04] No, the moment I saw it, I'm like, A, this is just LLMs. This is just LLMs doing what they think a social network looks like. As in, when I actually even said the word think, spitting out what the model would say is likely to be a social network post. Then the second thought I had was, this is fake. This is 100 percent, there are regular people just using their OpenClaws to post on here. These don't read, they didn't read like LLMs in some cases. Some cases they did, but some of them were just like, I saw someone post a slur within one hour. I'm like, okay, this is just a regular person using that. Regular is probably the wrong word. A person is using this as a means of posting. And it's funny when you say like the smartest people as well, because I think that that term no longer has any value, because that's like Andrej Karpathy, who is, it's just the term smart at this point. Does that just mean they got good grades at school? Because if that's the case, we are completely screwed. Like if we think only the people who got good grades are smart, then I don't know what to say for the world, because the people that fell for Moltbook, well, that was insane. They were like, oh, it's AGI. It's as if they forgot how large language models worked, or never learned in the first place.

Speaker 1:
[05:21] Well, I don't think they understood what OpenClaw was or what Moltbook was or what any of this was other than it involved lobsters.

Speaker 2:
[05:28] Yeah, and they heard, agent, agents, it's your tournaments, the quarterback mini.

Speaker 1:
[05:33] I did a little digging here. Axios's original, they moderated the headline, and I thought it was worth noting, because I think we memory-hole a lot of this coverage, but the original headline was, "We're in the singularity": new AI platform skips humans entirely. But it did the trick where you put the quotation marks around the first part. So technically, you are not declaring that to be the case. You are quoting someone. This one got fully memory-holed, right? No one talks about Moltbook. I mean, I think I covered it on my show at the time. I said, yes, people are just telling their LLMs to post. LLMs write stories. They finish the stories you tell them to write. There's actually good research. This came up in my doctoral seminar I'm teaching on superintelligence, which is great because it's like 10 doctoral students who just do AI research and I'm learning a ton from them. And they know the literature even better than I do. And they're saying there's really good research out there that whenever you do any prompting of an LLM, if anything in your prompt in any way indicates that you're prompting an AI, almost always it goes into sci-fi mode, right? So you can ask the same question, and if you say, you are a whatever, you are a journalist, please answer this question, it'll give one answer. And if you say, well, you're an AI, so how do you think blah, blah, blah? It always will go towards dystopian themes of AI coming alive. So it's very easy to prime. And I think a lot of that was going on with OpenClaw. People would say, please go post on this social network. And they just wrote AI-type stories. But it was covered very credulously, I would say.

Speaker 2:
[07:12] Which is pretty much par for the course. I mean, I still, I don't know if we want to wait until the second part of this, but the OpenClaw thing is one of the most insane things I've seen in the tech industry. May even be crazier than the overall LLM boom.

Speaker 1:
[07:26] Well, go on with it because let's get into the second part. But I have some quotes, but let's, well, let me read you the quote and then let's get into it.

Speaker 2:
[07:32] Yeah, read the quote.

Speaker 1:
[07:33] This is a representative person talking about OpenClaw earlier, like early February, late January. For the past week or so, and this tone, this is classic AI enthusiast. This is like such a known tone. This will sound very familiar. For the past week or so, I've been working with a digital assistant that knows my name, my preferences for my morning routine, how I like to use Notion and Todoist, which also knows how to control Spotify and my Sonos speaker, my Philips Hue lights, as well as my Gmail. It runs on Anthropic's Claude Opus 4.5 model, but I can chat with it using Telegram. I called the assistant Navi, inspired by the fairy companion of Arcania of Time, not the season.

Speaker 2:
[08:13] The Ocarina of Time, the game, yeah.

Speaker 1:
[08:16] All right, nerd.

Speaker 2:
[08:16] Zelda.

Speaker 1:
[08:18] Okay, I get you.

Speaker 2:
[08:19] No, no, no. It's just like a really weird choice.

Speaker 1:
[08:22] Well, he makes a point. It's not the James Cameron movie base. There we go.

Speaker 2:
[08:27] Okay.

Speaker 1:
[08:28] And Navi can even receive audio messages from me and respond with other audio messages generated with the latest ElevenLabs text-to-speech model. Oh, did I mention that Navi can improve itself with new features, and that it's running on my own M4 Mac mini server? And also, I just got fired because I just spent a hundred hours setting up Navi instead of doing my job. Well, I had to add that part myself.

Speaker 2:
[08:45] And I now can't pay my rent because I spent $4,000 a month on API calls.

Speaker 1:
[08:50] Yeah. That's the other problem. Okay. So that's OpenClaw, right? So you could, my understanding is it's a library. It's a Python library, which makes it easy to write your own agent, an agent being code that calls an LLM and then uses the response from the LLM to help drive its movements. You could say, hey, LLM, what should I do? And then it does it. OpenClaw made it easy for people to write their own. So people all around the world began destroying their computers and leaking all this information. Agents are actually hard to write.
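
To make the pattern Cal is describing concrete, here is a minimal sketch of that kind of agent loop in Python. Everything in it is hypothetical: the function names, the toy tools, and especially call_llm, which is a stub standing in for a real model API so the example is self-contained. It is not OpenClaw's actual interface, just the general "ask the LLM what to do, then do it" shape.

```python
def call_llm(prompt: str) -> str:
    """Stub for a real model call. A real agent would hit an LLM API here
    and get back text like 'TOOL read_email' or 'DONE'. This stub replays
    a fixed script so the example runs offline."""
    script = {0: "TOOL read_email", 1: "TOOL add_calendar_event", 2: "DONE"}
    return script[prompt.count("Observation:")]

def read_email() -> str:
    return "Email from Jesse: recording moved to 3pm"

def add_calendar_event() -> str:
    return "Added 'Recording' at 3pm"

# The "tools" the agent is allowed to invoke on the model's say-so.
TOOLS = {"read_email": read_email, "add_calendar_event": add_calendar_event}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """The core loop: prompt -> model decision -> tool execution -> repeat,
    feeding each result back into the prompt as an observation."""
    prompt = f"Goal: {goal}\n"
    observations = []
    for _ in range(max_steps):
        decision = call_llm(prompt)
        if decision == "DONE":
            break
        _, tool_name = decision.split()
        result = TOOLS[tool_name]()   # the risky part: blindly executing
        observations.append(result)   # whatever the model asked for
        prompt += f"Observation: {result}\n"
    return observations
```

The fragility discussed in the episode lives on the `TOOLS[tool_name]()` line: whatever plan the model emits gets executed as-is, which is why wiring a loop like this to email, calendars, and smart-home gear goes wrong so easily.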

Speaker 2:
[09:23] But here's the thing. Even that term gives it too much credit. It just does what LLMs do. Like it's just, oh, I had it, I read this thing on one of the Mac websites where it was like, oh, yeah, I had it build a website, and it's just the most generic-looking vibe code slop ever. Oh, I had it transcribe my voice notes. Like, yes, okay, it's doing what LLMs do. Oh, and it's able to write stories. So, LLMs. And this is the weirdest thing. The thing that really confused me, on top of the credulous media coverage, and pretty much everyone who covered this should be ashamed of themselves. I think most people did the worst job possible, in the sense that I read most OpenClaw coverage because I was trying to work out what it did. God's honest truth, I was like, what is this? But you read like The Atlantic, and it was like, was it The Atlantic or CNBC? They were like, this is another ChatGPT moment, quoting Jensen Huang.

Speaker 1:
[10:19] Because of the fast adoption, a lot of people tried it and then they looked at that chart and said, well, this is a big deal.

Speaker 2:
[10:25] But the thing is, it's like fast adoption, it's like it's slop commits on GitHub and also Mac minis selling out in the greater Bay Area. But the thing that was crazier to me, other than all the credulous coverage, was Nvidia's GTC 2026, a four-trillion-dollar-or-so market cap company, right?

Speaker 1:
[10:44] That's the conference, GTC is the big conference.

Speaker 2:
[10:47] Yeah, yeah. And you got a 3D AI-generated picture of Jensen Huang, CEO of Nvidia, with lobster claws. They released this thing called Nemo Claw and they're like, oh, this is the ChatGPT moment. This is the agentic future. And it's like, what are you talking about, mate? Did you just get in a car accident? Do you have a concussion? You just steered your company. Like a year ago, GTC was like Jensen going out with full swag being like, yeah, we've got Vera Rubin. We're going to do this. 10x more efficient. Woo. Shooting guns in the air. He signed a woman's boob last year. This year, he's like, yeah, we've got Nemo Claw. Got Nemo Claw. You want to try Nemo Claw? You like that? Jingling the keys again. Do you like Nemo Claw? Please spend $125,000 on a GPU. You need to buy Vera Rubin, even though we don't have anywhere to put it, as we'll get to. But it's just so weird because when you actually get down to it, it's the classic LLM story. It's like, okay, what are you talking about? It's a new agentic interface for managing programs. It's an LLM. It's an LLM. Is it a chat bot connected to an API? Yeah. It's like the Donnie Darko meme.

Speaker 1:
[12:05] What's the Donnie Darko meme?

Speaker 2:
[12:06] The Donnie Darko meme. I forget what the line is in the movie, but it's like, oh, I've managed to create a new agentic workflow. Is it just an LLM connected to an API? Yeah. Because that's every story, every story I've read. It's just, do you have two LLMs bonking each other's heads? Is that what's happening? Great. Okay. I'm very impressed. We need to have the largest company on the stock market do something about this pronto. It's hysterical.

Speaker 1:
[12:34] I think that's an important point because I do think when the average person hears about things like OpenClaw or different agents, they're often thinking there's a new artificial intelligence technology, right? That we built, that OpenClaw is a new digital brain that can improve itself and it's learned how to do things that prior models haven't. And I think what people don't understand is that OpenClaw is a Python library. It's a Python library that makes it easier to write a Python program that can make calls to LLMs. And you can aim it at whatever LLM you want. The LLM is somehow, like, that is the brain, but there's nothing new. There's no new LLM for OpenClaw. It's a library that makes it easy for the average person to say, I'm going to write my own agent. It turns out agents are hard to write, right? Because LLMs, they write plausible stories, but as we've learned, they're not often really good, carefully checked plans for doing things. And so it causes a lot of problems. If you say, hey, LLM, give me a plan for doing stuff with my personal data, and then you have a program that just automatically implements that, you know, turns out sometimes bad things happen. But there were two, here's my two useful things. I'm gonna say there's two useful things about.

Speaker 2:
[13:46] Okay.

Speaker 1:
[13:47] Two useful things about OpenClaw. One, because a lot of people began experimenting with building their own OpenClaw agents, one of the quick things they discovered is, oh, the big frontier LLMs are expensive. And they were racking up thousands of dollars of token costs, the API calls to Claude or to GPT. And so, it got a lot of the real booster tech enthusiast types to start looking at much smaller, much cheaper models, because they just literally couldn't afford it. This is why I think OpenAI bought OpenClaw.

Speaker 2:
[14:18] Well, there's an important detail, though.

Speaker 1:
[14:20] Okay, please.

Speaker 2:
[14:21] So, it's important to know where this was in the history. So, OpenClaw came out January-ish.

Speaker 1:
[14:27] Yes.

Speaker 2:
[14:28] Now, you used to be able to, during this period, connect your Anthropic Claude Max account, a 200-buck-a-month account. You used to be able to connect it to OpenClaw, so you weren't paying API calls, you were just using Anthropic services.

Speaker 1:
[14:42] So, that's unlimited. You pay 200 and it was supposed to be unlimited.

Speaker 2:
[14:45] You have a rate limit, but you can use it as much up to that rate limit, and you can spend like thousands of dollars of API calls, and that's been proven. There's a coder called Shellac who did a study on it.

Speaker 1:
[14:56] This is where you get the number. You quote a number often about how much it's actually costing per token versus what they're charging. This is where partially that number is coming from?

Speaker 2:
[15:03] Yes. It works out to somewhere between $8 and $13.50, weird way of saying that, per dollar of subscription. You're able to burn like $2,700 on the Anthropic subscription. For $200.

Speaker 1:
[15:17] You're paying $200, it's costing them $2,700.
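
As a quick check on the figures being traded here, the arithmetic is just:

\[
\frac{\$2{,}700 \text{ of API-equivalent usage}}{\$200 \text{ subscription}} = \$13.50 \text{ of inference cost per subscription dollar}
\]

which is the top of the $8-to-$13.50 range quoted a moment earlier; the low end presumably reflects lighter usage of the same subscription.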

Speaker 2:
[15:20] Yes, exactly. Sorry. Kind of a clunky explanation. So Anthropic let this happen. So the reason OpenClaw got so big, Anthropic sued them because they were called Clawdbot at first, Claw, C-L-A-W. But nevertheless, Anthropic allowed this to happen. Then February 12th, they raised the $30 billion round. A couple of weeks later, OpenClaw was cut off. The aristocrats. It's just that Anthropic is such an unethical company. They should have never let it happen to begin with. But one of the reasons OpenClaw got so big was both using those cheaper models but also using those Max subscriptions. And so OpenAI buying OpenClaw was so funny. It's just like, OpenAI is just Meta. It's Meta plus Enron. And it's so funny watching them. Why would you buy this? What possible reason? Oh, we can build agents with it. What do you mean?

Speaker 1:
[16:16] No, they're...

Speaker 2:
[16:18] Why?

Speaker 1:
[16:18] They have much better frameworks for... Well, I have two explanations. Let's get back to you. Tell me which one you think is more likely. So this is maybe giving too much savviness credit to them. The savvy version is, I think it was a real problem for a lot of enthusiasts to discover, oh, wait a second. If we use really cheap open-weight models, open source models, or even just really small, like 3 billion parameter models we can run on our own machine, we get pretty similar results. Like actually we don't need one 10 trillion parameter super frontier model to read my emails and to add appointments onto my calendar. I think that's really terrifying if you're a company like Anthropic that just took on $60 billion of investment, or you're OpenAI. It's like, we need people to think that these are the big brains and nothing else matters. So the conspiratorial slash business-savvy interpretation would be OpenAI needs to sort of slow the roll on that, or make that tool much more native to its models, because they really do not want a generation of AI enthusiasts to say, oh, wait a second, Kimi is like a fraction of the cost and it does just as well. The other way of thinking about it, it's like them buying that podcast show recently.

Speaker 2:
[17:28] TPPN?

Speaker 1:
[17:29] Yeah, that's just like we're just buying things left and right because we have money and we're not quite sure what to do. The sort of.

Speaker 2:
[17:35] Yeah. I think it's probably number two, because they're going to keep running OpenClaw. They've said that already, they're going to keep running it, and people are still using open source models. So it's kind of like, I just think that they were buying stuff because they thought, crap, we've got to do something. We don't have an OpenClaw. What if we just bought it? It's rich kid syndrome. Think like that's the thing, like both OpenAI and Anthropic act like rich kids. Because I went to a private school. I'm not proud to say it. I was the dumbest kid in the private school. Did not do well. Bottom of my class every single year. Failed multiple languages, like genuinely legendarily terrible. I barely scraped through. But I've met a lot of these kids, and my parents scraped by to get me there as well. It's good on them. But I met a lot of these kids, and what they do is when they don't want to learn something, when they don't want to build knowledge, when they don't want to put something together of their own, they just acquire. It's like, Dad, go and buy me that. Daddy, go and buy me a boat, buy me whatever. OpenAI doesn't know what they're doing other than they have a lot of money so they can spend it. I think they bought it thinking, wow, this will be a back door into Anthropic a little bit. We'll be able to see what Anthropic does more, because lots of people use this and we can somehow see how Claude is running agentically. Or they bought it to kill it.

Speaker 1:
[18:57] That's what I think.

Speaker 2:
[18:58] But the other thing is, is Peter Steinbrenner or whatever he's called, he's still farting around. That guy, I don't know if you've ever read his posts, but he is constantly working. Yeah. I don't give him a ton of credit for that because it feels like a depressed person. But also I've heard he got hundreds of millions of dollars for it as well. So it's like, if I had that much money, you wouldn't hear from me again. I would disappear. Well, no, I'd keep posting. But it's strange because it's like, what are you actually working on? And I think he vibe coded a lot of it as well, which is even more terrifying. And there are massive security issues as a result. It's just, one is like a psychosis unto itself. And, I know we talk a lot about the media stuff, but what I think it is, is the media and the AI community are so desperate for a hero. They know, deep down in their soul, that something is wrong, that none of this makes sense. So the moment anything even directionally feels like it proves that they're not wrong, they grab it and they shake it vigorously. They just go like, this has to be it. This is going to be the thing. And if we love this enough, it can be a real boy. And it never is. Like, OpenClaw is gone. No one's talking about it anymore. No one cares; searches on Google have gone down.

Speaker 1:
[20:22] Yeah, I just looked for it, it's minimal. I checked this morning. It's minimal coverage. It's been minimal coverage. I mean, it's kind of around, but it's become a niche topic. Well, let me tell you my second thing that I think is good about OpenClaw, right? The second thing is, I think it actually points towards what I think is the healthy, sustainable future of AI, which is smaller, task-specific, and much more modular architectures, right? Not built around a single AI entity like an LLM, but bespoke AI systems that do specific things. If I want to play poker with AI, there is a great AI system to play poker with. If I want to do certain types of digital VFX work, there are really good AI systems made to do that. I think that's the future.

Speaker 2:
[21:10] But those are all LLMs.

Speaker 1:
[21:12] No. Well, no, they're not, right? Or they have LLMs in them. This is why I say modular architecture. I think the future is you have multiple different things, most of which are just hand coded by a person. And maybe you have an LLM in there if there's language involved, because it's pretty good at if it needs to speak to someone or interpret it. I point towards the Cicero model as the great example of this. Noam Brown's AI system that plays the board game Diplomacy. And it has an LLM in there, a small one, for chatting with the other players and then converting what they say into a sort of more technical language that the rest of the system understands. And then it has a planning engine, and it has a policy network that can evaluate the different boards. It has multiple other systems that all hook together.
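
For readers who want the shape of this modular, Cicero-style design in code, here is a toy sketch. Every component name and rule in it is invented for illustration; this is not Cicero's actual architecture, just the pattern described above: a small language module only at the edges, with a hand-coded planner and a stand-in for a learned evaluator doing the deciding.

```python
def language_to_intent(message: str) -> dict:
    """Stand-in for a small LLM that turns free text into a structured
    intent the rest of the system understands."""
    if "attack" in message.lower():
        return {"intent": "attack", "target": "Munich"}
    return {"intent": "hold", "target": None}

def planning_engine(intent: dict, board: dict) -> list[str]:
    """Hand-coded planner: proposes candidate moves for the intent."""
    if intent["intent"] == "attack":
        return [f"move army to {intent['target']}", "hold position"]
    return ["hold position"]

def policy_network(move: str, board: dict) -> float:
    """Stand-in for a learned evaluator that scores a candidate move
    against the board state (here, a made-up supply rule)."""
    return 0.9 if move.startswith("move") and board["supply"] > 2 else 0.4

def decide(message: str, board: dict) -> str:
    """Glue code: each module does one job, and the language module
    never decides moves on its own."""
    intent = language_to_intent(message)
    moves = planning_engine(intent, board)
    return max(moves, key=lambda m: policy_network(m, board))
```

The point of the structure is that swapping the language stand-in for a real (and small) LLM changes nothing downstream; the planner and evaluator stay hand-auditable and testable on their own.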

Speaker 2:
[21:55] Classic AI shit. This is real AI stuff, like when it's just like, yeah, I made Diplomacy. But this actually just reminded me of something.

Speaker 1:
[22:04] But just, I want to get to that, but just to bring it close to the point, is I think this gave people a taste of that. If you're building, they're like, oh, I want to build my own system to do one thing. I want to build a system to answer the e-mails that come in with requests for my show, to answer those e-mails and to put things into a spreadsheet. And like, oh, I can write a program to do that, and I'll use an LLM to help me, and it can be a small one because that's kind of not the core of it. And suddenly, you're exposing people to this idea. I mean, I call this vision Distributed AGI. Where one day you would look around and be like, there's 10,000 bespoke small systems that each do something well. And if you add it all up, well, that's a lot of things now that computers do as well as people. And it's a very different vision than Opus 5.9 is...

Speaker 2:
[22:50] Grok 7.

Speaker 1:
[22:52] Or whatever it is, Grok 7. Yeah, it's embodied in a robot with predator machine guns and it can just do everything. Anyways, all right, back to your point.

Speaker 2:
[23:00] So this just reminded me, so Jack Clark of Anthropic, fascinating character, one of the co-founders, he used to write at the Register, one of the single most critical tech publications in the world. His blogs were extremely critical. I've seen him twice peddle out this example, which he refers to as like an evolution simulator, a predator-prey simulator, and he brings it up all the time and he uses these highfalutin terms. I went and looked this up, it is like a 50-year-old idea. He's like, yeah, I used Claude Code to build it. Yeah, because there are hundreds of them online, hundreds of them that it was trained on.

Speaker 1:
[23:36] It's just a little simulation program.

Speaker 2:
[23:38] It's a little simulation that says, okay, we got bees and the bees get killed by the bee, the bee-eating bears. I'm just making up animals already. This is why I can't make one myself. But it's like all of the different creatures and how they interact. He's like, yeah, and I'm able to change things here and here and there. And it's like, yeah, there is a web version of this. It is 20 years old.

Speaker 1:
[23:56] Yeah.

Speaker 2:
[23:56] But the way they frame all of these things is like, oh, simulation, like the singularity. It's like, no. And it just, I feel like the AI era is a mass exploitation of ignorance. It's just that they found something where the media just, they knew the media. Maybe they didn't know this in advance, but the media won't check anything. The media would just say, yeah, it's got a social network. This is AGI now. Every time a three-gigawatt announcement is made, they go to the three-gigawatt data center. That's like three nuclear power plants big. Wow. Even though it's not getting built, which I know we're going to get to. It's just, AI as a term, as you well know, it means so little and so much at the same time that they can basically do anything. And I think combined with the hysteria, they are in a situation where literally, I think we could have another Sam Bankman-Fried situation that we don't know about yet. That an AI company could come out and just go, yeah, we've done this, and it's the, I mean, kind of mythos is almost that. I know we're not going to get into that, but it's, I feel like we are, maybe there's already a scammer out there, but this is the environment, the exact environment.

Speaker 1:
[25:10] Yeah, you can gather a billion dollars easy.

Speaker 2:
[25:13] Well, I mean, I just saw another one the other day where it's like a company that claims it's doing recursive self-learning and they raised half a billion dollars. And one of the co-founders runs another company called u.com. And you know what's crazy? That is not mentioned in the Financial Times' piece. It's just we are like grifters have found their meat. This is so much worse than crypto and NFTs. It's so, so much worse because the fuzziness of AI allows them to have infinite time and infinite money to say, well, we still haven't worked out. That recursive self-learning company, by the way, of course, they are still theoretical like all of them.

Speaker 1:
[25:54] Yeah.

Speaker 2:
[25:55] Like world models.

Speaker 1:
[25:57] But no, the 50% job loss, it's next month now. Yeah. I said the wrong month. It wasn't this month.

Speaker 2:
[26:04] It's in like 8 to 12 months, maybe with a margin of error of maybe 100%.

Speaker 1:
[26:10] Banner headline, 50% of every time.

Speaker 2:
[26:13] Just read the top thing, yeah.

Speaker 1:
[26:15] That reminds me a little bit of my oldest. He plays Little League, I help coach his baseball team or whatever, and the pitchers are getting better. They're at 13U, they play on the full-size fields now or whatever, and like the pitchers are better now. So what they learn is, if I throw the high fastball and the batter swings, I'm going to throw some more high fastballs. Like this is clear, and I kind of feel like this is Dario Amodei saying 50% of jobs, he's rolled this out three years in a row now. He's like, it gets covered every time. I'm going to keep throwing those high fastballs as long as the media is swinging the proverbial bat.

Speaker 2:
[26:47] Yeah, but high fastballs are proven to be difficult to hit, as opposed to LLMs, which have never been proven to take jobs.

Speaker 1:
[26:54] Ah, there we go. I like it.

Speaker 2:
[26:56] Baseball is way more fun than AI, just if only we had put this money into baseball.

Speaker 1:
[27:02] I agree with you there, yeah, baseball has less constant waves of existential dread being poured upon the entire populace.

Speaker 2:
[27:10] No, they just reserve it for Cincinnati and Pittsburgh and Mets fans.

Speaker 1:
[27:16] That's right, you're right. Mets fans look to AI for a little bit of psychic relief. They're like, oh, this is not quite as dark as what we're dealing with.

Speaker 2:
[27:24] Yeah, this isn't punishing me as much.

Speaker 1:
[27:25] Only half the jobs are going away? That's not so bad as an 11-game losing streak.

Speaker 2:
[27:28] Yeah, no, and most Mets fans are like, yeah, I would fire half of them.

Speaker 1:
[27:32] Yeah, they should be fired.

Speaker 2:
[27:34] Maybe we should do all of them.

Speaker 1:
[27:35] Yeah, I hope Juan Soto is the first one on that list. All right, story number two. I actually, for whatever reason, I didn't cover this one as much. I talked to some sources in the surrounding DC tech industry, but I want to get your take on this. This is the Anthropic and the Department of War story that picked up in February. I'm just gonna read a little bit from Dario Amodei's statement that kicked off this whole thing. So he said, Anthropic understands the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine rather than defend democratic values. Some uses are also simply outside of the bounds of what today's technology can safely and reliably do. Two such cases have never been included in our contracts with the Department of War and we believe they should not be included. And then he lists mass domestic surveillance and fully autonomous weapons. So can you first bring us up to speed on what unfolded there and has unfolded since? And then what is actually happening? Because I find this story, because I haven't looked at it as closely, kind of confusing.

Speaker 2:
[28:43] All right, so, this was just before the war in Iran. I think Dario is a savvy con artist, and, I call him that, but you don't, it's not you saying it, it's me saying it: he's a con artist. So just for some background, Anthropic has been installed with classified access in the US military since June 2024. That's a very important detail. They were used in the Venezuela incursion, whatever you call that. They were used throughout, and are still used in, the war in Iran. So what happened was Amodei said, I forget what the conversation was, maybe he instigated it, it's kind of hard to tell, but some conversation between him and the US military was, we're not going to let you use this for mass surveillance of Americans, nor are we going to let you use it to control autonomous weapons. Now, the second one really pissed me off, because you cannot control anything with LLMs. If you controlled a robot with LLMs, it would barely move, because of the processing time, even. People say, what about on-device? Shut the fuck up, you don't know how these work, that's not how this, they wouldn't fit.

Speaker 1:
[29:49] That threw me off as well. I know enough about AI that it's like, why are you talking about LLMs? Why is the media? I think they mean AI in general, I suppose.

Speaker 2:
[29:57] No, no, they meant autonomous weapons, they 100% meant that, I know, because I read every single article and every single statement about this, every single time, like autonomous weapons. And to be clear, Anthropic in their own statement said, LLMs are not consistent enough to run autonomous weapons. Correct, thank you, Dario.

Speaker 1:
[30:15] Your first fact. But also, it would make no sense to run a model based on language parsing and generation to steer a missile.

Speaker 2:
[30:23] So, that's the thing.

Speaker 1:
[30:24] I don't understand that. Okay, but the first one you say was happening though, as far as you can tell. Using these tools as part of intelligence gathering, sure, they probably were involved in the chain somewhere.

Speaker 2:
[30:34] I mean, were they? But the thing is, I can't confirm whether it is. No one can, because Anthropic was already embedded, and they attempted to basically renegotiate the contract post hoc. I'm not siding with the US military here, but they tried to say, we're adding these things, and they did it mysteriously, somehow just before the war in Iran. So what I think, and this is my personal belief, it was like a few days beforehand, I think what Anthropic did.

Speaker 1:
[31:02] Just to clarify what you're about to say, I'm just looking at this now. They're saying mass domestic surveillance and fully autonomous weapons in that February statement, they're saying, oh, have never been included in our contracts. So I had been given the impression that they were specifically called out in the contracts as, we will not do this. But actually what I'm seeing here is, it's not that they weren't, that wasn't discussed at all in the contracts. And Amodei is saying, hey, we never mentioned in the contracts these two things you might use it for, and we want this in the contract. So, okay, go on. But I'm only seeing this now, when I reread the statement. Yeah, it's a little tricky.

Speaker 2:
[31:41] Anthropic 100% had visibility into what the US military is doing. So I would not be surprised, though I cannot confirm this, if they timed this specifically to the start of the war in Iran. Because suddenly there was this insidious, awful thing. Every single person who spoke like this should be fucking ashamed of themselves. I'm disgusted by it. There was this insidious thing of people being like, Anthropic is the ethical company. I saw hashtag JeSuisClaude, death penalty. I saw Katy goddamn Perry being like, I just bought Claude. And it's like, you just paid a company that was actually part of this war. And people were like, well, OpenAI now. And then Sam Altman slid in and was like, well, we can do whatever. Then Sam Altman claimed that they had actually negotiated something that didn't allow the things that Anthropic wanted, and it turned out that Emil Michael from the US military said, actually, we've agreed to all legal means. To be clear, I don't believe either of these companies give a rat's ass about any of this. I don't think they care about it at all. But Anthropic had this swell of good press because people thought that they were opposed to the war in Iran, when in fact they were directly part of it. Claude was used during it. Now, how complex was the use? It was probably like, here's a bunch of images, where should we blow up? And it went, here's a school. And they went, oh, just great. And that actually happened. And then there were weird articles that came out saying, like, actually, Claude didn't do that? Yeah, you can't prove that, mate. What I can prove is that Claude was used in the war in Iran. So whatever.

Speaker 1:
[33:23] But your conjecture is the reason why Amodei brought this up was press. Yes. The size of that contract, it's worth jeopardizing when you're looking at, like, an IPO six months from now. What is it?

Speaker 2:
[33:37] Sorry, the size of that contract? Their military contract is up to $200 million. And "up to" is the important operative phrase. $200 million. They lose that much money on inference in like two weeks.

Speaker 1:
[33:49] And they're looking to raise. I mean, their valuation is what, in the hundreds of billions?

Speaker 2:
[33:54] Three-something hundred billion. They'd probably IPO at 750 if they even make it. But that's the thing. No, they did it for press.

Speaker 1:
[34:01] That could be a $100 billion move there, in theory.

Speaker 2:
[34:05] Yes. Well, also, the thing is, it's like, then the Department of War said, oh, we're going to put you down as a supply chain risk. Nothing happened. Then they were like, it's a supply chain risk, but we're going to keep using you for six months. Then there was a lawsuit: Anthropic sued the Department of War and said, if we don't have this removed, we might die. Then they admitted, by the way, and this is one of my biggest, this was like my full Joker moment. During that motion that they filed, Krishna Rao, the CFO of Anthropic, filed a sworn affidavit where he said that Anthropic had only made $5 billion in its entire lifetime. Now, when you go and add up all of the reports of revenue, such as The Information saying $4.5 billion in revenue in 2025, such as Anthropic themselves citing annualized revenue that would mean they made $1.5 billion in the space of a month in 2026, it adds up to way more than $5 billion. I have tried to talk to pretty much every major reporter that covers Anthropic's revenues, and they will not discuss this. It's the most conspiratorial I've felt this entire time. It is like everyone is trying to ignore a fire in a room. The crazy thing is, that happened, nothing changed, and then a judge said, actually, Anthropic is right, we're not going to allow the supply chain risk designation. Now, apparently, the US government is using Claude models. In the end, nothing happened. Anthropic got a bunch of completely spurious press around them being ethical, despite the fact that they are already part of the military. So they revealed their actual revenues. It was great, it's all good.

Speaker 1:
[35:55] That revenue story, that is an amazing one. Outside of you, and I covered it, I learned about it in part from you, I found only one article. There was maybe a Reuters or an AP article that talked about this, quote unquote, shaky revenue math that's popular in Silicon Valley. So there's one piece I found where a financial reporter actually was covering, like, hey, when you hear these numbers, there's a lot of multiplying by 12 or multiplying by 24 going on, and you multiply at the right times. But that was a big story. So for the listeners to understand it, Anthropic had to, under oath, in a signed affidavit, right, so under penalty of perjury or whatever you would say in a corporate setting, release their revenues, and it was $5 billion to date on $60 billion of investment and debt. To date. So.
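The "multiply by 12" annualization trick described above can be made concrete with a toy calculation. This is a hypothetical sketch; the dollar figures below are invented for illustration and are not Anthropic's actual numbers:

```python
# Toy illustration of "annualized run rate" math (all figures invented).
# The trick: take one month's revenue and multiply by 12, so a single
# strong month produces a much larger headline number.

def annualized_run_rate(monthly_revenue: float) -> float:
    """Project a single month's revenue forward over a full year."""
    return monthly_revenue * 12

# Hypothetical: a company books $125M in one month after a big launch...
best_month = 125e6
headline = annualized_run_rate(best_month)  # $1.5B "annualized revenue"

# ...but if the other eleven months average $60M, actual revenue is far lower.
actual = best_month + 11 * 60e6

print(f"headline annualized: ${headline / 1e9:.2f}B")
print(f"actual full-year:    ${actual / 1e9:.2f}B")
```

The gap between the two printed figures is the whole story: if a reporter annualizes at the right moment, the headline number can sit far above anything the company ever banks.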

Speaker 2:
[36:42] Yep. And they spent $15 billion on compute so far.

Speaker 1:
[36:45] Yeah, $15 billion on compute so far. The other part of that, the part of the story I did cover that I thought was interesting was the Undersecretary of Defense, whoever that was.

Speaker 2:
Emil Michael.

Speaker 1:
[36:54] That was Emil Michael, right. And he went on, and it was funny; it shows something about how the online commentary space works. He went on and said, hey, here's why we don't want to work with this product. If you watch him, he's basically like, this is a product that'll say it has a soul, or their company is saying that there's a chance that it's alive. And what he was saying was, like, this is a wonky product, right? This doesn't seem like the type of thing you want in a military setting, where you have the CEO saying there's a chance it's alive and it'll say it has a soul. This doesn't seem like a reliable piece of hardware. And what was the online commentator report? Pentagon convinced that Claude has a soul. So they completely flipped the framing.

Speaker 2:
[37:39] He was basically saying, I'm so sick of this. I'm so sick of the goddamn AI bubble. I'm so tired of this. And I was like, I wish I got this. I wish anything I did got that treatment. You've not read One Punch Man, have you?

Speaker 1:
[37:54] No.

Speaker 2:
[37:55] OK, so this is a complex thing, but one of your listeners is going to hear this and love this. There is a character in One Punch Man called King. Everyone thinks that he's the most powerful man in the world because of the King Engine, which is his so-called power. It's actually because he is so anxious and scared at all times that his heart is going so fast that you can hear it. He has no powers. He's a regular guy. But because Saitama, the main guy, comes along and destroys anything near him, everyone thinks he's amazing. And there are multiple times during the story where a bunch of stuff happens around him, and people go, wow, they must have all just died when they saw King. Wow, King must have destroyed them with the King Engine. This is Anthropic. Anthropic is just this wasteful crap pile of a company, with services that break half the time, less than two nines of service availability now. And they have models that degrade at random. They gaslight the users, they rug-pull them on rate limits. But everyone's like, Anthropic's capacity, they're hitting capacity because they're so popular and their models are so good. It's like, I'm going crazy, man. I just, at some point, what I'm saying will feed into the mass consciousness, I guess. And at that point, I'm going to be insufferable. But it's like, every time I hear a story like this, I feel like I'm going insane.

Speaker 1:
[39:17] What are the main revenue sources, if we're being realistic about it? So if you're these AI companies, well, my understanding is OpenAI is ChatGPT subscriptions.

Speaker 2:
[39:27] Yes.

Speaker 1:
Anthropic is Claude Code.

Speaker 2:
[39:31] API.

Speaker 1:
[39:31] API.

Speaker 2:
[39:32] Apparently, it's API.

Speaker 1:
[39:33] Yeah.

Speaker 2:
[39:33] But here's the thing. I'm not accusing anyone of fraud, but Eric Newcomer had a piece where he said that Anthropic had the Coatue venture capitalists in, and he shared the deck that Anthropic had shown them. And there was a bit where it was like, yeah, 85% of their revenue is API calls and 15% is subscriptions. Going to be honest, I don't believe it. I just don't believe it. I don't believe that there is, what, $4 billion of API calls. And OpenAI apparently is the other way around, where it's like 85% subscriptions, 15% API.

Speaker 1:
[40:11] What would an API call be? So for the listener, what's calling those APIs?

Speaker 2:
[40:15] So it would be an AI startup; it would be a business that's running their own models for some reason, that is running their own systems that are built on top of the API. But that's the thing, even that question gets at what I'm saying, which is, what the hell are you doing with this? I get AI startups that just sell things that have LLMs plugged into them, but it's like they're claiming they have all this enterprise use. What I think it might be is that Anthropic is slowly, because The Information reported this recently, but I think it's been going on for a lot longer, Anthropic has started to push enterprise users onto the API, even when they're using Claude or Claude Code. I think that's fairly recent, in the last few months. But I also just think that these companies are making up what they're saying in decks, because no one can prove otherwise. I want them to go public so bad. I want them to go public so bad. Never in a million years have I wanted a company to file an S-1 more. I want to see inside their laundry. I want to go look around.

Speaker 1:
[41:19] I don't doubt you'll be the first to read those S1s.

Speaker 2:
[41:23] I will be smoking a big cigar. It's going to be delightful.

Speaker 1:
[41:27] Here, before we get to the third story, let me tell you my new term I coined about AI coverage. Please, please. All right. I just came up with this on the spot, but something else is going on right now that I want to call out, which is what I call dread laundering. What you do is you launder a sense of despair or dread about one thing related to AI to help amplify a less supported feeling of dread or despair about another. Here's where I've been seeing this recently. I think the technology business case for LLMs somehow being at the core of automating a bunch of jobs or destroying the economy is very weak, and I think there hasn't been a lot of good support for that, because again, these are just LLMs that we're building better apps on top of, and it's slow going. But there's a lot more focus recently, because there's a dread quota: how do we fill it if that story is losing some traction right now? So there's a lot of other coverage going on about destruction of the arts: writing is going to disappear, moviemaking is going to disappear, education is falling apart. And you put that next to Dario Amodei talking about jobs or this or that, and you're laundering the dread from, oh, we have a text generator and people are going to be lazy and try not to write text, which is a real story, and annoying, and one that, as a writer, I don't like. And you launder that dread over to all these other bad things, as if, well, if we're worried about that, that kind of justifies the dread in general. So, like, maybe my job's going away too, maybe the Terminators are coming. And I really wish these were separated. You could have an argument about, we have automatic text generators, that brings up a lot of problems for people who produce text for a living, let's talk about it.
Then we have over here this claim that an LLM is going to take over an executive job, or is going to, you know, and those claims fall apart under scrutiny. It's really hard to make a compelling case over there. But if you throw enough darts at enough things, you create a miasma of unrest in which it's hard to make out what the actual signals are and are not. So everything is just, a pox on all the houses. Everything is terrible. That's my new term.

Speaker 2:
[43:37] I fully agree. And I also think the doomer porn clearly gets clicks. It's just that I think that when this is all over and the bubble bursts, every single person who engaged in it should lose their job, across the board. I know it sounds aggressive, but I think everybody who engaged in the doomer porn. And yeah, there were some people who tried to do it in good faith. But the ones like the Axioses of the world, who genuinely sat there and fomented dread, they shouldn't be allowed to work in journalism for a minute. They should take a knee. They should step aside for people who actually live in the real world.

Speaker 1:
[44:15] It is a problem that we have to address. Maybe I talked about this on your show earlier this week. But I'm hearing from listeners and readers who use terms like, I'm stuck in a cage, having wave after wave of despair or dread crash over me with no option, hope, or escape from it. And I'm taking wave after wave. There's a responsibility aspect to it, right? It is difficult for the normal person to be hit again and again from all different angles: what if this is terrible? What if this is terrible? What if this is terrible? And there's an if-there's-smoke-there's-fire mindset that we're wired for. And it's really, I think, been very unsettling. Again, I get unsettled by it, and I actually know the technology and know that 98% of this is really not well supported. It's just emotionally difficult to have to immerse yourself in wave after wave of everyone putting their full attention on, what angle can I find that makes this seem the worst? That's always the angle that things are coming from. It's never, well, this doesn't make sense. What happened to that? Where's all this revenue? Hey, what about this story from three months ago? Nothing came of it. I mean, there was a guy who, when OpenClaw first came out, posted a video where he said, literally, the singularity is here. Like, in the next few days, this is it. Look at this graph, line goes up. Next few days, singularity is here. And I kind of made fun of him. And then he recorded a whole video attacking me about how crazy my takes are. And I just want to say, okay, it's been four months. I don't see the robot army that was supposed to be here in a couple of days. But we never follow up on things.

Speaker 2:
[45:49] Where's Ultra Claw? Where's the Clawed Bot that's going to chop my goddamn head off? But that's the thing. I think that there is an actual theme above all of this, and it's actually outside the AI bubble as well, which is short-term memory and long-term memory. People just say stuff, things happen, and then they forget about them entirely. Like, remember the Claude Code marketing push at the beginning of this year? It was The Atlantic that said, this is the ChatGPT moment, and it had all sorts of people building useless apps. There was that whole surge of support for that. Now Anthropic is actively throttling their services. They are making their models worse. They are cutting off OpenClaw. Nothing, no coverage, none of the people. Because here's the thing that I have with AI boosters. Even if they fundamentally disagree with me about the economics and all, they don't even seem to engage with the problems. I don't even mean this in an antagonistic way. I mean, if I was a pro-AI person, if I was like, I don't know, I had a big piece of metal in my head or something, and I saw some dickhead British guy being like, hey, they're losing billions of dollars, I would at the very least be like, I should probably look into this.

Speaker 1:
[47:06] Yeah.

Speaker 2:
[47:07] I should probably make sure. And if I really liked this stuff, and I saw the company screwing over the customers, I'd be like, wow, doesn't that change the story a bit? Nope. Mainstream media, and honestly a lot of independent media, just goes, you know, something will happen. It's like, when it comes to the doom, they will extrapolate as far as they need to. When it comes to the capabilities, they will go, yes, it's going to be this powerful. When it comes to the things happening in real life, they're like...

Speaker 1:
[47:38] Complicated, yeah, you know, technology.

Speaker 2:
[47:39] Yeah, you know, things happen, you know, we'll be all right, though. And when I say all right, I can't really tell you what that means, but it will be. When I say all right, I mean, everyone's going to make money, but not me, but the companies, who I love for some reason. It's so weird.

Speaker 1:
[47:54] That part confuses me. The media class I'm a part of hates all billionaires, except for like these three.

Speaker 2:
[48:01] Yes, exactly.

Speaker 1:
[48:02] I don't get that part. All right. Story number three, this is in your wheelhouse. It has to do with the reality of the data center boom. I'll read you a quote from a Futurism article that includes you in it, so be prepared.

Speaker 2:
[48:13] Nice.

Speaker 1:
[48:13] The data centers powering your favorite AI chatbot are running low on helium, cash, and neighbors who don't hate them, and that's not even the worst of it. According to reporting by Bloomberg, about half of the data centers slated to open in the US in 2026 will either face delays or outright cancellations. The publication interviewed analysts at market intelligence company Sightline Climate, which, in research first flagged by Ed Zitron last week, noted that 12 gigawatts' worth of power-consuming data centers are set to open in the US this year, but here's the catch: they say only a third of those are actually under construction right now, with the rest in a liminal pre-production stage in which they could, and likely will, be canceled. There's a huge story going on here that's not being covered outside of Bloomberg and places where people really need to monitor things like the private credit markets that could affect their investment portfolios, but it's not broadly known beyond that. What's going on with this illusory data center boom?

Speaker 2:
[49:05] So every time you hear someone say, we're building a two-gigawatt data center, real simple, just say: no, you're not. No, you're not. We don't know how long it takes to build a one-gigawatt data center, because no one has built one. I know that sounds crazy. No one has built one. But once again, and I'm going to say MacKenzie Sigalos at CNBC specifically, she has laundered the reputations of these companies, because what happens is Stargate Abilene, OpenAI's 1.2 gigawatts, 1.2. They opened a single data center in September 2025, and then what was published was that Stargate Abilene was operational. Project Rainier, a 2.2-gigawatt data center in Indiana for Amazon. Fully operational, that's a quote from Amazon. No, it's not. 2.2 gigawatts is what they're saying. They claim to have half a million Trainium 2 chips, 500 watts apiece; that's about 250 megawatts. They claim they're up to a million now; that's 500. That's a lot less than 2.2 gigawatts. Because data centers take forever to build, we do not have the power. And people are saying, well, the power is getting built, that proves they're going online. Now, the problem isn't that the power doesn't exist at all. It's that the power doesn't exist at the point of need. So Sightline Climate, I actually caught up with them for a recent newsletter, where they said that of the 115 gigawatts of data centers that are meant to come online by the end of 2028, only 15.2 gigawatts are actually under construction. Now, this is really weird, because I did the math. This is napkin math, forgive me. When you look at these and you say, okay, they have a PUE, the efficiency, call it 1.35. When you take that 15.2 gigawatt figure and divide it by 1.35, it's about 10 gigawatts of pure GPUs. That's about $285 billion worth of NVIDIA GPUs. Why am I saying this?
Well, NVIDIA claims that they have visibility into half a trillion in GPU sales by the end of 2026 and a full trillion by the end of 2027. Where are they going, Jensen? Where are the GPUs going, Jensen? Where are they? Where are they as well? Because NVIDIA has sold...
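Ed's napkin math in this exchange can be checked in a few lines, using only the figures he states (500 W per chip, a PUE of 1.35, 15.2 GW of facilities under construction); note that he rounds the final figure down to "about 10 gigawatts":

```python
# Sanity-check the napkin math from the conversation, using the speaker's
# own stated assumptions. PUE (power usage effectiveness) is the ratio of
# total facility power to the IT load it actually serves.

WATTS_PER_CHIP = 500   # claimed power draw per Trainium 2 chip
PUE = 1.35             # assumed facility efficiency

# 500,000 chips at 500 W apiece:
chips = 500_000
it_power_mw = chips * WATTS_PER_CHIP / 1e6
print(f"{chips:,} chips -> {it_power_mw:.0f} MW")          # ~250 MW

# A million chips only doubles that, still far below 2.2 GW:
print(f"{2 * chips:,} chips -> {2 * it_power_mw:.0f} MW")  # ~500 MW

# 15.2 GW of facility capacity under construction, divided by the PUE,
# gives the IT (chip) load that capacity can actually power:
facility_gw = 15.2
it_load_gw = facility_gw / PUE
print(f"{facility_gw} GW facility / {PUE} PUE -> {it_load_gw:.1f} GW of IT load")
```

The exact quotient is closer to 11.3 GW than 10; the transcript's "about 10 gigawatts" is the speaker's own conservative rounding.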

Speaker 1:
[51:35] It's just a billion people are building custom video gaming rigs at home. Come on. It's easy.

Speaker 2:
[51:41] Well, actually, I think I know where they are. I think they're in Taiwan. Yeah. It's just very weird, because what this means is that NVIDIA has already sold too many GPUs. It has already sold more GPUs than actually have data centers being built for them. It's crazy. This is the thing. I bring this up with journalists, I bring this up with economists, I bring this up with tons of people, and they're like, it's fine, they're being built, what are you talking about? I'm like, look at the data, and they go, ah. It's always like a weird wave-off. But this is the largest company in the stock market. And I think that their total revenue from the last few years is over $300 billion. And they're claiming that they'll hit half a trillion by the end of the year. Half a trillion. I think that's just for the year. They keep saying these numbers that don't match up. But let's say they're true. And if NVIDIA beats and raises, so they beat their earnings estimates from analysts again, I think we need to start asking a real question about what NVIDIA is doing with these GPUs. Because talking to some hyperscaler accountants I know, there is a way that they could be doing this where they're able to book the revenue without shipping anything. It's called a transfer of ownership. It's when you just sign a contract saying, yeah, you own these GPUs, they're sitting in my warehouse, but these are yours. And that counts, legally. That's perfectly legal. It's very strange. And if they're doing it and not saying it, they should be filing an 8-K. But NVIDIA's inventories are growing on their earnings as well, so that's a sign that something's being warehoused. But I spoke with a few sources, and what it is, is when a hyperscaler, say Microsoft, they don't buy a GPU from NVIDIA. They don't go, send me a GPU, I'll put it in a server. What they do is they work with someone called an ODM, an original...

Speaker 1:
[53:32] Equipment manufacturer.

Speaker 2:
[53:34] An original device manufacturer, or design manufacturer. I think it's design manufacturer. They build the servers and they put the GPUs in there. Quanta; Foxconn, also known as Hon Hai Precision Industry; hell yeah, I wish we had more normal names; Wistron, Wiwynn, all sorts of companies out there. The revenues of all of these ODMs are going up, crazy style, because what they do is they pass the cost of the GPU through as revenue. They buy the GPUs from NVIDIA, they put them in a server, they sell them to a Microsoft or an Oracle or a Meta or an Amazon, and then they say, yeah, it costs this much with the cost of the GPU in there. This allows NVIDIA to hide a great deal of GPUs, because they're sitting in Taiwan. Quanta's inventories went up last quarter. I don't know if it's categorically because nobody's buying them and they're not being shipped, but for the most part, I think NVIDIA is just pre-selling years of GPUs, and I don't know how this is not scarier to people. Michael Burry brought it up briefly, weeks after I did, just to be clear. No one seems concerned about this, when in fact, if there's only 15.2 gigawatts of actual capacity being built and 10 gigawatts of GPUs, NVIDIA can't sell more GPUs unless it wants to put them in a warehouse. But to the larger abstraction of data centers not getting built as well, it's like, we're dealing with fraud then. If we've got 100-and-something gigawatts of data centers announced as being built, but only 15 of those are actually under construction, and under construction could mean anything, it can mean a scaffolding yard, which is the case with NScale's data center in Loughton, England, then that means fraud. That means that someone is doing fraud. That means that people are not actually building things, that people are likely buying land and speculating that a data center might get built there. Perhaps they'll file some planning paperwork, paying their CEO six figures the whole time.
Fermi is a great example. Rick Perry's Fermi, building an 11-gigawatt data center out in Project Matador. Don't worry, though, they're not building anything. I have a patch of land; the CEO just left; they apparently didn't pay their contractors. Fraud. This is the thing: everyone's talking about the AI boom with all this certainty, but the actual proof that things are happening isn't really there. In fact, I did the maths, and it turned out that over 50% of the data centers under construction through the end of 2028 are for OpenAI or Anthropic. Every time Anthropic announces, they just announced a 3.5-gigawatt deal for Broadcom chips. Where are they going? Where are they going? No one asks, no one thinks, no one tries. The answer is, they're not going anywhere. These chips probably will never get bought.
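The cost pass-through Ed describes, where a GPU's purchase price flows through the ODM's top line, can be sketched numerically. All prices here are hypothetical, invented purely for illustration:

```python
# Hypothetical illustration of ODM revenue pass-through (all prices made up).
# An ODM buys GPUs from NVIDIA, assembles them into a server, and sells the
# server to a hyperscaler; the GPU cost flows through the ODM's revenue line.

gpu_unit_cost = 30_000     # hypothetical price the ODM pays per GPU
gpus_per_server = 8
other_components = 40_000  # hypothetical chassis, CPUs, memory, assembly
margin = 0.05              # hypothetical thin ODM margin

cost = gpu_unit_cost * gpus_per_server + other_components
server_price = cost * (1 + margin)

# The ODM books the full server price as revenue, even though most of it is
# GPU cost passing through, which is why ODM revenues balloon whenever GPU
# shipments surge, regardless of whether the servers reach a data center.
gpu_share = gpu_unit_cost * gpus_per_server / server_price
print(f"server price: ${server_price:,.0f}")
print(f"GPU share of ODM revenue: {gpu_share:.0%}")
```

With these made-up numbers, roughly four-fifths of the ODM's booked revenue is simply NVIDIA's GPU price passing through, which is the dynamic that makes ODM revenue growth a noisy proxy for real data-center deployment.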

Speaker 1:
[56:37] Okay, so let's walk through this a little bit. So it sounds like, I mean, NVIDIA is selling them to these ODMs, right? So the ODMs are basically saying, we're getting contracts, so we'll keep buying chips, because there's a lot of money in this market.

Speaker 2:
[56:53] I just want to be clear, this is how it's always worked. This is not a weird thing. This is how they build data centers. Continue, sorry.

Speaker 1:
[57:01] Because there's a lot of interest in AI, there's a lot of money that's raiseable in AI. So you have a lot of entities saying, I want to raise money for AI projects. This is leading to a lot of money being spent with these ODMs: hey, we want to buy X number of chips set up in servers. But then there's nowhere to put them.

Speaker 2:
[57:21] I think I've muddied it up a bit.

Speaker 1:
[57:23] Okay.

Speaker 2:
[57:23] So you've got two stories. The ODMs are, so when a hyperscaler like Microsoft, they said $37.5 billion of CapEx last quarter. When they buy servers, they buy from the ODMs.

Speaker 1:
[57:34] Yeah.

Speaker 2:
[57:35] The ODMs then put them in a warehouse in Taiwan, and they say, okay, when you're ready for the data center, let me know. Yeah.

Speaker 1:
[57:41] And these data centers are taking longer than, or harder to build than people realize. They've raised the money, they've made the orders. There's nowhere to put them. So the warehouses are piling up. NVIDIA is like, hey, put them wherever you want. We're getting our paychecks. You can put them on a hot air balloon. We don't care.

Speaker 2:
[57:56] The dodgy thing with NVIDIA, though, is that it's unclear, because we're talking $100 billion-plus of GPUs that have been sold but have nowhere to go, which begs the question of whether they're leaving NVIDIA's warehouses at all. Because NVIDIA could do an accounting treatment that just goes, yep, this is yours now. It's here. But completely separate to that: Microsoft, Amazon, Google, their data centers are being built, though they're taking forever, and even then there's not enough capacity to install these GPUs. Then, completely separate to that, over 100 gigawatts of data centers have been announced that are just not being built.

Speaker 1:
[58:36] Yeah.

Speaker 2:
[58:36] And those are more than likely not hyperscaler ones. They are more than likely random fly-by-night operations. They're companies like Nebius, NScale, IREN, these former crypto companies that have moved into AI.

Speaker 1:
[58:50] I know they're raising money. Are they spending money with the ODMs? Like, are there chips somewhere in a warehouse, either the ODMs' or NVIDIA's, that they paid for and just have nowhere to put? Or are they just raising money and paying salaries until it fizzles out?

Speaker 2:
[59:05] Little column A, little column B. Hard to tell. I wouldn't be surprised if it's both. I think that there is... Like CoreWeave, for example, they buy from ODMs like Dell and Super Micro. They recently had a co-founder arrested for selling chips to the Chinese, so that's cool. But yeah, I think that there is a lot of, yeah, we're building a data center. You know, business is rough. We just got to find the land. How are we going to find the power now? That's going to take another three months. I'm going to need to make $650,000 a year. In fact, that's probably a fun thing to go and look at: the companies in question and what executive compensation is. But then there's also just the problem that data centers are hard to build.

Speaker 1:
[59:55] Well, this sounds like this at least rhymes with the housing crisis. The magnitude is a little bit smaller. Tell me if I have this right. My understanding of the financial crisis of the earlier 2000s is, okay, we have these, in that case, financial product, these mortgage-backed securities, and people want in on those because they're making a lot of money selling these, they're making a lot of money reselling these, but you ran out of mortgages. But everyone still wanted to get into this, there's no more mortgages to put in the mortgage-backed securities, so we say, well, we'll make these credit default options and these swaps, and we'll build derivative products on top of these. We just need things that we can keep selling because there was more money that wanted to be spent here than there was things to actually spend it on. And of course, once you had built out this giant house of cards built on leverage and bets on bets on bets, when the middle of the house couldn't support the whole thing fell down, this feels like a simpler version of that. There's a lot of money out there that's like, we want to get into AI too, because every 16 seconds we're getting an article about how it's the most powerful technology ever and it's about to take over and take all of our jobs. So there's a huge amount of money that wants to go into AI, but there's not actually enough places to put it. And that seems like a summary of what's going on now. And literally, there's not enough land and buildings that can take the chips to put the chips in. So we have all this money being spent and the video seems to be collecting a lot of it. But there's nowhere to put these chips. It seems to be what you're saying. There's just way more money that wants to go into this market than there is actual investible assets to put the money into. And so shenanigans follow, and you get a very fragile system. 
And this is why we're worried that the private debt market is beginning to teeter a little, because these investments aren't returning. Nvidia has so much of this money coming in with nowhere to put it. That feels like the core of the instability. So what happens when some of these contracts fall apart, and Nvidia takes a fall, and it's X percent of the stock market? Is that the right way of seeing it? It kind of rhymes with the financial crisis in that sense.

Speaker 2:
[61:54] So here's the thing. I don't think it will be as bad.

Speaker 1:
[61:57] It's not as much money at stake by far. And it's not derivatives. It's not bets on bets on bets. So it's simpler.

Speaker 2:
[62:02] Not yet. Yeah. That's the big thing. It's not derivatives. Private credit, the big scary thing there, is that something like 30 to 40 percent of it is related to the software industry and software debt, which is a whole separate subject. You are right that there is a massive amount of speculation happening here. To quote Gordon Gekko, I think from one of the Wall Street movies, "speculation is the root of all evil." Someone correct me on whether it's Wall Street 2: Money Never Sleeps. But it's weirder than that. This is unlike anything else, because it's very centralized around NVIDIA and NVIDIA's continued value, with NVIDIA load-bearing 7 or 8 percent of the stock market. It's also very weird that it's effectively one company doing it, but there are hundreds of billions of dollars of data centers allegedly getting built, and probably half, maybe 75 percent, of that is funded by debt. The scary thing about the private credit industry is that much of private credit is funded by retirement funds and insurance. Right now, I don't think data centers make up a ton of private debt, at least not a load-bearing part. I will say the actual housing crisis comparison I make is venture capital, and it's actually not related to data centers at all. Venture capitalists get paid, sometimes, a percentage of the fund's value, the assets under management, like any kind of asset manager. So AI companies are awesome for them right now, because the number constantly goes up, so fast, so big, so huge, because AI valuations are frothy, and everyone holds these AI companies. In the subprime mortgage crisis, the way people waved away "well, your interest rate is going to change in a year or six months" was to say, "I'll just refinance." In the case of AI startups, Elad Gil, the famous venture capitalist, said all AI startups should look to exit in the next 12 to 18 months.
And it's like, okay, well, why would anyone buy them? Most AI startups are just wrappers around models, and you can't take them public because they lose a bunch of money. The subprime AI crisis I talk about is partially about companies being unable to outrun their costs as those costs go up. It's also that you've got $200 billion, $300 billion worth of venture capital tied up in AI startups that can't be sold.

Speaker 1:
[64:32] Right.

Speaker 2:
[64:32] And how does that connect to data centers exactly? Well, data center customers are predominantly AI startups, predominantly two of them, Anthropic and OpenAI, but others as well. Cursor just signed a deal with xAI to rent GPUs, for example. What happens when all of those die? Who's going to pay your data center bills? Also, all the data centers are deeply unprofitable because of the horrible debt they require. As you kind of said, it rhymes, but it's not like for like. And again, more people should be thinking about this. Even the AI boosters should be thinking about this, because it's an existential threat. This is not just "Ed's being a hater" or "Ed hates this." The maths doesn't make sense. There's not enough space for the GPUs to get installed; there aren't even buildings going up for half of them. If NVIDIA sells, say, half a trillion dollars' worth of GPUs in the next year, they're not going anywhere. In fact, I worked out, mathematically, based on their last quarter, that it takes six months to install a single quarter's worth of GPUs, and I actually think it takes longer now. At some point this falls apart, and everyone's going to act as if it was a big surprise, and they shouldn't. The warning signs were there from the beginning.
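Ed's back-of-envelope here is just a throughput ratio: GPUs shipped in a quarter divided by how many can be racked per month. A minimal sketch of that calculation, using made-up placeholder inputs rather than his actual figures:

```python
# Back-of-envelope: how long does one quarter's GPU shipments take to install?
# All inputs below are hypothetical placeholders, not Ed's or NVIDIA's numbers.

def months_to_install(quarterly_gpu_revenue_b: float,
                      avg_price_per_gpu: float,
                      installs_per_month: float) -> float:
    """Months needed to rack one quarter's worth of shipped GPUs."""
    gpus_shipped = quarterly_gpu_revenue_b * 1e9 / avg_price_per_gpu
    return gpus_shipped / installs_per_month

# Illustrative: $39B of quarterly GPU revenue at ~$30k per GPU is ~1.3M GPUs;
# at ~217k installs per month, that's roughly six months of installation work,
# matching the shape of the claim (a quarter's shipments > a quarter's capacity).
backlog_months = months_to_install(39.0, 30_000, 217_000)
assert 5.5 < backlog_months < 6.5
```

The point of the sketch is structural, not the specific numbers: whenever installation capacity per quarter is smaller than shipments per quarter, the uninstalled backlog grows every quarter.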

Speaker 1:
[65:53] Right, right. They cannot keep selling that many GPUs because there's nowhere to put them, and you're building up such a supply. So financially speaking, two things look inevitable. There's going to be a stock market hit, and a hit to private retirement funds and insurance, when this game of musical chairs stops, which will probably lead to much more financial scrutiny, probably regulation on accounting within these companies. And then when the venture capital firms take the hit of, oh, we couldn't exit these companies, and we're not otherwise going to get an exit out of them if we don't sell them right away, because again, it's hard to build a useful, profitable AI product, you're going to get an AI winter. They're going to say, well, forget this, and you're going to have a few years where it's going to be very difficult to get AI investment.

Speaker 2:
[66:44] So I would actually reframe that slightly. I think you're right about the stock market stuff when it comes to the AI startups. What's going to happen is a fire sale moment. It's going to be a panic. You're going to hear about an AI startup, maybe Perplexity, maybe Lovable, that needs to sell. They're like, we need to get this out the door, and a funding round will fall through, then an acquisition path will fall through. The moment it becomes obvious that AI startups are trying to sell, everything will start collapsing. VCs will have to start telling their investments to sell: sell right now, get out of there. Except, when you look historically, AI startups do not get acquired. Windsurf, the AI coding company, acquired by Google? Nope, Google paid $2 billion for three people; the rest got sold to Cognition for a couple hundred million dollars, and most of the team got laid off. What was it, Inflection AI to Microsoft, a billion or so dollars? Mostly went to investors, mostly went to Mustafa Suleyman. What was the other one? character.ai, bought by Google for several billion dollars, except that mostly went to the founders, some of the team, and of course the investors. The actual products are not getting acquired. The actual IP doesn't exist. So when these things come to exit, I don't think it's going to be pretty at all. In fact, it's really easy to clone most of these companies, because they're just wrappers around LLMs.

Speaker 1:
[68:18] And the top minds, which is what is actually being acquired, have pretty much all been snapped up. There aren't that many truly innovative researchers left in this space doing startups you could buy to get them; the big companies have taken them for the most part. Demis Hassabis got snapped up by Google. Hinton's company got snapped up. There are only so many of these big academic research minds, and they've mostly been hired, and sometimes it was very expensive to do: you had to buy and shut down their company to get them. But I hear your point. The problem is, if you're a VC, you can't assume anything we fund will also get bought for a billion dollars because our founders are so brilliant. The brilliant founders are, for the most part, already under contract with these companies; there are only so many of them. And if what you really have is the product, which is a point, and then we'll wrap it up after this, but I do think it's an important point, is that we don't really know how to build very useful, profitable products. That's the odd thing about this space.

Speaker 2:
[69:21] Well, I mean, you can't.

Speaker 1:
[69:22] There are a couple of popular products. The various coding harnesses, Claude Code et cetera, are popular among programmers. Not a particularly profitable product space, though, because they're expensive to run. The chatbots are popular in the sense that they have lots of monthly active users, but I don't imagine those are particularly profitable either, just because of the compute cost of people using them. And that's kind of it, I think, is the problem. It's very difficult to have your wrapper company actually become a large concern. But that's interesting, yeah. So that could be the story that underlies all these other stories. And if it's true, I think it will surprise a lot of people, because the arc will go: biggest technology ever, about to conquer everything; biggest technology ever, about to conquer everything; AI winter, stock market collapse, never mind.

Speaker 2:
[70:14] Yeah, that is the scope.

Speaker 1:
[70:16] If that happens, that will be an interesting moment. I think there's going to be a lot of frustration among the American populace: wait a second, you spent two years trying to scare me. You spent two years saying forget COVID, COVID is cold and flu season in terms of disruption; this is World War II-level impact on our country. If it not only fizzles, but the conclusion of it is that everyone's retirement portfolio halves and that's that, that's not going to go well. That would have political ramifications here in the US. I think you're going to see political parties rebuilding around how they think about these technologies. And maybe it won't happen, but.

Speaker 2:
[70:59] I mean, I am confident in my AI startup thesis because every single AI startup is a wrapper around a model owned by someone else. And the core thing, and then we can wrap up, I apologize, is that you cannot control the cost of a user with an LLM. You can't do it. And also, your most excited customers are the most expensive, which is antithetical to how a business works. And also, all of them are unprofitable.

Speaker 1:
[71:27] Yeah, this is very different from SaaS, even though the SaaS model is now falling apart for other reasons. What made that tech sector so desirable was the idea that you can scale up profits almost infinitely: everyone who pays $20 a month for this is $18 of profit, and we can handle an unlimited number of users. That, of course, gave a lot of private equity firms eyes bigger than their stomachs: oh great, we'll just build giant sales teams. If the line goes up like this with 10 salespeople, what if we have 100? But at least the underlying profit mechanics made sense: it costs us negligibly more to have 100,000 users versus 1,000, and it's massively more income. This is very different, you're saying, from LLM-based AI. It's actually very expensive to service the users, and the more they use it, the more expensive it becomes. That's a hard dynamic.
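The unit-economics contrast Cal describes can be put in toy numbers. All figures below are hypothetical illustrations chosen to show the shape of the argument, not real costs from the episode:

```python
# Toy comparison of SaaS vs. LLM-wrapper unit economics.
# Every number here is a made-up placeholder for illustration only.

def saas_profit(users: int, price: float = 20.0, marginal_cost: float = 2.0) -> float:
    """Classic SaaS: serving one more user costs almost nothing,
    so profit grows roughly linearly with user count."""
    return users * (price - marginal_cost)

def llm_wrapper_profit(users: int, price: float = 20.0,
                       avg_queries: int = 300, cost_per_query: float = 0.08) -> float:
    """LLM wrapper: inference cost scales with usage, so under these
    assumed costs each user's compute bill can exceed what they pay."""
    return users * (price - avg_queries * cost_per_query)

# SaaS margins improve as you scale...
assert saas_profit(100_000) > saas_profit(1_000) > 0
# ...while the wrapper, under these assumptions, loses money on every user,
# and the losses deepen as usage grows.
assert llm_wrapper_profit(1_000) < 0
assert llm_wrapper_profit(100_000) < llm_wrapper_profit(1_000)
```

The sketch also captures Ed's earlier point that the most enthusiastic customers are the most expensive: raising `avg_queries` makes per-user profit worse, which is the inverse of the SaaS dynamic.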

Speaker 2:
[72:19] Yes.

Speaker 1:
[72:19] And more users doesn't make it cheaper.

Speaker 2:
[72:22] No, more expensive, in fact.

Speaker 1:
[72:24] Yeah, it's unlike a gym or something, where it's the more the merrier because very few people actually show up. It's actually the opposite. All right, Ed, well, a pleasure as always.

Speaker 2:
[72:33] Thanks for having me.

Speaker 1:
[72:34] Yeah, you always bring out the radical in me, but I think we've got to balance things out. People are hearing the strongest boosterism all day long, so it's good to check back in on some of these stories with a less credulous take. We'll have to do this again soon, because there will be, unfortunately, no shortage of new stories to react to.

Speaker 2:
[72:55] Thanks for having me, man.

Speaker 1:
[72:56] All right, talk to you.