title Iran's AI Supply Chain Threat, Claude vs. SaaS, and Elon's $60B Cursor Bet | EP #249

description In this episode, the Moonshot mates cover the Iran oil shock as a broader system shock, Anthropic’s Claude Design and the “unhobbled” frontier-lab threat to vertical software, the AI job transition, UAP disclosure, the power and geopolitics of data centers, China’s solar and robotics push, and the future of entrepreneurship and agency.



Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends  



Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360



Salim Ismail is the founder of OpenExO



Dave Blundin is the founder & GP of Link Ventures



Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified





My companies:



Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding  

  

Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy  



Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: https://www.fountainlife.com/peter 



_



Connect with Peter:

X

Instagram



Connect with Dave:

Web

X

LinkedIn

Instagram

TikTok



Connect with Salim:

X

Join Salim's Workshop to build your ExO 



Connect with Alex

Website

LinkedIn

X

Email

Substack 

Spotify

Threads



Listen to MOONSHOTS:

Apple

YouTube





*Recorded on April 21st, 2026

*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.
Learn more about your ad choices. Visit megaphone.fm/adchoices

pubDate Thu, 23 Apr 2026 15:00:00 GMT

author PHD Ventures

duration 9279000

transcript

Speaker 1:
[00:00] Having this global geopolitical dependency on one narrow volatile geographic region is completely unacceptable.

Speaker 2:
[00:08] This Iran war is not really just an oil shock, it's a system shock.

Speaker 3:
[00:12] TBD, whether, you know, what direction the US goes on that.

Speaker 1:
[00:14] It's almost easier to predict what happens after it ends, which is to say...

Speaker 4:
[00:20] We're seeing all of a sudden the frontier labs beginning to compete with the vertical businesses built on top of large language models.

Speaker 2:
[00:28] We're going to get this horde of startups building in an AI-native mode, and the big companies will struggle to adopt.

Speaker 4:
[00:35] SpaceX negotiates the right to buy Cursor for $60 billion. Is this just Elon's normal play?

Speaker 1:
[00:42] This is actually what a Dyson swarm probably looks like. This is where it's all coming down to. This is like the final countdown of whatever stage of the singularity we're in.

Speaker 3:
[00:56] Now that's a Moonshot, ladies and gentlemen.

Speaker 4:
[01:01] Everybody welcome to another episode of Moonshots. WTF just happened in tech this week. Your opportunity to hear about what's going on in the world at a hypersonic speed. Here to help prepare you for the future with my incredible Moonshot mates, our resident genius AWG. Alex, good to see you in your normal haunt.

Speaker 1:
[01:18] It's good to be haunted.

Speaker 4:
And, of course, our exponential investor DB2, Dave, back at MIT.

Speaker 3:
[01:30] Yes, back from San Fran for one day and then off to New York tonight. Yeah.

Speaker 4:
You know, it's funny. We talk about the three-day work week, the four-day work week. I think this has invented the nine- and ten-day work week for us. It's the hardest I've ever worked in my life.

Speaker 3:
[01:46] You know what? Alex is constantly reminding us there's only going to be one singularity in our lifetimes, and you have to just savor and use every minute of it. So it's worth it. It's so worth it.

Speaker 1:
[01:57] I also think, Peter, all these people who are absolutely convinced that human attention is a scarce resource, I think they're going to get proven wrong ultimately. I think we can manufacture more attention.

Speaker 4:
Now, I'm looking forward to that. You know, Peter two-of-ten is going to be dedicated to this podcast all the time.

Speaker 1:
[02:13] Or as they say, go fork yourself.

Speaker 4:
[02:15] Oh, no. Salim, you are worse than all of us put together. Where are you on the planet today?

Speaker 2:
[02:23] I'm at The Breakers and Palm Beach doing a keynote to 700 CEOs, which I did this morning. They're pretty freaked out, as they should be, because the world is changing pretty radically.

Speaker 3:
[02:34] By AI or by you?

Speaker 2:
About the content. Look, it's such a huge shock for people in the normal world to go, what is happening and how do I juxtapose with this? It's really a big topic. So it was a great conference, great questions. That's always the best barometer of whether an event was good or not, and the quality of the questions was excellent.

Speaker 4:
[03:00] Yeah, and the number of people who are standing up and screaming, okay.

Speaker 1:
[03:06] It's the usual Palm Beach dilemma of billionaires worrying that they might become millionaires again.

Speaker 2:
[03:11] Well, this is not Palm Beach residents. These are people coming in for this event from all over the country. So this was really Main Street USA in one place was really quite surreal.

Speaker 4:
I'm here in our Moonshots studio. Again, I still want to get you guys here. I had an amazing, amazing Saturday night, and I want to just tell you guys about it. So I was at the annual Breakthrough Prize, put on every year by Yuri and Julia Milner. It began back in 2012 and was co-founded by Sergey Brin, Mark Zuckerberg, and Anne Wojcicki. And I don't know if you remember, Salim, but this used to take place in the giant airship hangar at Moffett Field. Do you remember that giant structure? I remember it very well, right next to Singularity University. This year it took place at the Barker Hangar at Santa Monica Airport, right near my home. And Yuri Milner, who's the principal sponsor of the prize, is an amazing guy, a Russian-Israeli physicist turned VC billionaire who made some of the most prescient investments: he invested in Facebook and Twitter, Airbnb, WhatsApp. And he decided, along with Sergey and Anne and others, that they wanted to put on sort of the Oscars of Science, sort of the new Nobel Prize, and recognize the top researchers and breakthroughs. And they give out $3 million in cash to the researcher who made the breakthrough. So it's a big deal, bigger than the Nobel Prize. You know, past winners were Stephen Hawking for Hawking radiation back in 2013, Jennifer Doudna for CRISPR, and Demis Hassabis, who won it a couple of years ago in Life Sciences, of course, for his incredible work. And this Saturday night was like the best ever. I've been honored. I've been going since the beginning of this thing. All the Hollywood stars, all the tech CEOs were there, right? On the Hollywood side, it was like Ben Affleck and Anne Hathaway, Robert Downey Jr. On the tech side, Elon, Sundar, Demis were there. I spent time with Jensen and Sam Altman. Yeah, it really was extraordinary.

Speaker 2:
[05:18] Did you invite them on the pod?

Speaker 4:
[05:21] I actually did. Sam said he would come on as soon as things slowed down a tiny bit. He's got a busy couple of months ahead of him.

Speaker 2:
[05:28] That's like 2028.

Speaker 4:
[05:30] Well, hopefully sooner than that. Elon said he's happy to come back at the end of the year for a day of our annual catch up on what predictions came out, which didn't.

Speaker 3:
[05:41] Oh, well, that's going to be stacked.

Speaker 4:
Yeah, I invited Sundar and Demis as well. And Chamath was there from the All-In Podcast, and Chamath 100% said he's coming on the pod. So that will be super fun.

Speaker 3:
[05:55] That will be, oh my God, I'm so looking forward to that. What about Robert Downey Jr., Iron Man?

Speaker 4:
[06:00] I did chat with him about longevity. And yep, I think that could be fun. But I think having him comment on all the tech news isn't going to work out. But the winners...

Speaker 3:
[06:13] Yeah, but you know, I tell you, when the movie stars meet the great scientists, it's the most bizarre and awesome conflagration. They're so different yet, they're so impactful as a group together. And I think the movie stars, they really want world change and they can't really affect it, but the scientists can. But the movie stars can get the word out. So it really is pretty magical. But I can't tell you how impactful Iron Man and Tony Stark as a character have been at MIT. It's just mind blowing the impact. And then, you know, I don't know if Robert Downey Jr. really sounds and acts like Iron Man, his character, but I want to find out. I would love to at least try.

Speaker 4:
Super nice guy. Super nice guy. I mean, it was fascinating to see Hollywood turn out to rub shoulders with the CEOs and vice versa. And of course, you know, having Sam and Elon in the same room was interesting. And then the whole undercurrent, of course, was the conversation about whether AI and all of this is going to destroy or disrupt Hollywood. So fascinating. The winners this year, by the way, let me just mention them, since it was the point of the whole evening: gene therapy for inherited blindness; a gene therapy for curing sickle cell disease and beta thalassemia by turning back on the fetal version of your hemoglobin; the genetics of ALS and frontotemporal dementia; the muon's anomalous magnetic moment. Alex, you probably know all about that. And then the mathematics of wave behavior. So they did an incredible job explaining the implications of each of these breakthroughs, and it was just a great award. In this photo, you can see Jensen down at the bottom right, along with Yuri Milner, our host. It was just an incredible evening. Super, super pumped. And thank you to the Milners for putting this on this year.

Speaker 2:
[08:14] You know what's awesome about this is if you go back to the Renaissance, scientists at that time were treated like rock stars and global celebrities. And when we can recreate that, it actually affects civilization. So that's awesome.

Speaker 4:
Yeah. I mean, it's interesting, right? It's like, who do you celebrate in the world? It tells you a lot about the culture. And I remember when I was growing up in the 80s, all of the Wall Street bankers were being celebrated, right? So everybody was going into Wall Street. Dave, I don't know if you remember that. Yeah. And now-

Speaker 3:
[08:52] Some of the best and brightest. The people that would have changed the world with breakthroughs were going off and putting together pitch decks for real estate transactions. It was so frustrating and painful to see. But a lot of our classmates got sucked into that.

Speaker 4:
And luckily, people are now focused on changing the world. And anyway, again, an amazing evening. All right, let's get on to our docket of incredible stories. The first one here is on Anthropic's dominance; in particular, Anthropic releases Claude Design. So I'm going to play a little bit of a video here. It's Claude Design, released on top of Opus 4.7, allowing you to design almost anything you want. And what we saw here was that immediately the stocks of both Figma and Adobe started dropping. Figma was down 10%; Adobe was down about 2%. And so the question here, and I want to talk about the underlying part of this: we're seeing all of a sudden the frontier labs beginning to compete with the vertical businesses being built on top of large language models. Gentlemen, thoughts?

Speaker 1:
Well, I think the elephant in the room here is that this is just an unhobbling of capabilities that were already present in the model. There may be some fine-tuning or post-training involved at the back end, but this is essentially, I think, what one might call an unhobbling of latent capabilities. And I think it does raise the question, every time a few of these SaaS stocks have a mini SaaS-pocalypse freakout over some unhobbling of a base model, whether and how much of the remaining economy is one unhobbling of a scaffold away from utterly cratering existing verticals. I really do think, and I say this to every company I advise, you really don't want to be just a SaaS at this point. You don't want to be just a scaffold around models. You need your own vertically integrated native capabilities. If there is a physical-world integration that you can credibly claim or build out, you really should be doing that at this point, because software is getting dissolved. And in the case of Claude Design, I've been using Claude to design PowerPoints and websites and other, call them prototypically Adobe-esque, outputs and artifacts for months now, if not low numbers of years, across all of the frontier models. We already had these capabilities. It's just that through unhobblings like Claude Design or Cowork or even Claude Code originally, Anthropic is very successfully giving users permission to do what they could already do, with first-class chrome. And I think it does raise the question: how many other user experiences that right now are their own dedicated verticals are one unhobbling away from being just latent capabilities? I think near the top of my list is creating new businesses. I think creating a new business is just an unhobbling at this point.

Speaker 4:
[12:10] Dave, you're using...

Speaker 3:
I used it last night, and it's incredible what it can do, and it's so dog slow. It's like torture. And I'm sure it's dog slow because it's overloaded, and I'm sure it's overloaded because Anthropic doesn't have the compute to support all of the growth. Really important point: it's being shared, and they desperately need a Steve Jobs, Jony Ive kind of guy over there who understands the out-of-the-box experience. And the other thing they desperately need is some kind of ISV, independent software vendor, strategy. They need to build a software community that knows where they're going and can work with them and not just be in their way all the time. But right now, they haven't declared any roadmap. And I think we all know that any big company like Microsoft can kill any small company if they want to. But why would they want to? Some things they do want to do, some things they don't. But you have to be intentional and explicit about it. So it really feels like Anthropic is very PhD-ish and acting very PhD-ish, and they need to think a lot about how much power and influence they have now. Going out and just bombarding the stocks of Adobe and Figma, driving them through the floor, and then not offering enough speed to be usable. Like, what did that achieve? That's good for nobody. What is the plan, guys?

Speaker 2:
[13:31] Well, unless you shorted the stock before you made the announcement. That would be the classic play today, isn't it?

Speaker 4:
[13:39] They did recover. They did recover. But the markets hit them hard. And to your point, Alex, I think about what's going to be hit next. I'll give you my top ones. Legal research, document review, LexisNexis, business intelligence from Tableau, medical documentation, clinical decision support from Epic. On my list, I have the Bloomberg Terminal, financial modeling and research. I've seen a lot come out there. And then Workday, HR, recruiting, performance. All these are going to get hit.

Speaker 2:
It's easier to ask what's not, because AI is no longer a tool layer, it's the control layer over design, coding, et cetera, et cetera. And to the point Alex made elsewhere, the incumbents aren't competing against new software features, they're competing against compounding intelligence. And that's a hard task to take on.

Speaker 4:
Yeah. And then the question is, who owns the fast-moving target?

Speaker 2:
[14:36] Yeah. And I have a shout out here because last, on our last podcast, Dave, you complained about the design side and Alex, you guys talked about it and literally the answer to that was right there. So perfect.

Speaker 3:
It's worth pressing on this again. When the iPhone came out, Steve Jobs was originally opposed to the App Store. He thought it was a bad idea. He hated it. And the team there talked him into it, and it became the most incredible experience for everybody. I mean, you would never have had Waze and navigation and great weather apps and all these niche apps too. Like, I use the Tide app to know when the tide is coming in. None of that stuff would have ever existed if they hadn't said, Steve, this is a good idea. And this is where Anthropic is right now. You're about to destroy the careers of 80% of software developers. What do you want them to do? Give us the proactive future: where are you going, and what are they supposed to do with this new tool you rolled out? Tell us. And that's what they need to do next.

Speaker 2:
[15:33] So Claude is launching a partner program. We're in the running for that as OpenExO. So they are actually starting to put that ISV type of structure into place.

Speaker 4:
[15:44] Dave, I imagine you're going to see a lot of people come up with a business idea that they brainstorm with Claude and then they hit print and a pitch deck comes out and you're going to get AI slop in terms of business and pitch decks.

Speaker 1:
[15:58] We're already there.

Speaker 2:
[15:59] Yeah, that's there now.

Speaker 1:
[16:00] We were already there months ago.

Speaker 4:
[16:01] We're at an accelerating rate. Yeah.

Speaker 2:
I think now you do a bit of work and it builds the business for you, per Alex's earlier point.

Speaker 1:
Yes, and entire businesses, not just business plans. I think maybe we need to do a little bit of brainstorming for Anthropic here on the pod. I propose the theme: we need a plan for the country of non-geniuses outside the data center.

Speaker 3:
Totally, totally right. Totally right. And yeah, I think we're in a really weird moment in time right now where, if you use this tool to create a PowerPoint deck, what's the point of the PowerPoint? Who's going to even look at it? Oh, it's probably another AI. Well, then why is it in PowerPoint in the first place? Right now we're in this transition phase where a product like this makes sense, but it's only maybe a year of human history where something like this makes sense, because we're moving to a world where you just tell the AI the outcome you want, and it doesn't need a deck and it doesn't need a Figma design in the first place. So it's an interesting transition phase, but desperately in need of leadership. A roadmap from Dario or from Daniela would be incredibly welcome.

Speaker 4:
[17:04] Hey everybody, you may not know this, but I've got an incredible research team. And every week myself, my research team study the meta trends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these meta trend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the meta trends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Dario has been saying for at least the last year, AI is going to wipe out 50% of entry-level white collar jobs in the next one to five years. And so out of Anthropic comes a survey of the employees, not a large number of employees, a small number, but their estimate is that entry-level software engineers and researchers will be replaced by Mythos in three months. So a little bit of an extreme here. And the speed of replacement is key here. So what's most significant about this prediction is it's coming from the engineers that this tech is likely to replace. I guess you guys can all remember the golden ticket for your kid was always go and get a degree in coding, go get comp sci, get an internship, get an entry-level position and then work your way up the chain. And all of that is disappearing. And what I find fascinating here is the idea that software engineers and coders are really the canary in the coal mine for a whole slew of other jobs. Thoughts, gentlemen? Does this mean anything?

Speaker 3:
Human arrogance is unbelievable, actually. If you go to San Francisco and you talk to these exact people, they all say exactly what the slide said: this is going to replace almost all entry-level engineers. Are you an entry-level engineer? No, not me. I'm a hyper-senior whatever engineer. Well, you know it's on an exponential curve, right? You're talking about three months; what happens after those three months? Either way, you're coding yourself out of a job very, very quickly. Yeah, okay, I guess I am.

Speaker 2:
I'll take the other side of this. If one-third of the employees think they'll be replaced, why isn't it 80%? That would be much more indicative, I think. That one-third doesn't tell me a lot about their thinking. And yes, those jobs are going to be wiped out. I think we'll just change what we mean by an entry-level job.

Speaker 1:
Maybe the one-third who thought the entry-level SWEs are going to be replaced are the senior SWEs and executives at the company, who are in a better position to judge that. I would just comment, you know, I talk, we talk, I think it's fair to say, about being in the singularity. It's not just vibes, and recursive self-improvement is not just vibes. This is exactly what one would expect to see if we are, as I would argue we are, in the midst of recursive self-improvement, where, it's been widely reported, almost all of the code at Anthropic is now generated by Claude. At other companies too, at Apple, at Google. This was, I don't think we have a slide for it on this pod, but pretty spicy news in the past 24 to 48 hours: rumors out of Google DeepMind that a lot of Google DeepMind code is now being generated by Claude. So I think it's not just junior SWEs that are being replaced by hypothetical future releases of Claude. It's also other frontier models from other vendors that are being displaced. The blast radius is pretty wide for this.

Speaker 4:
Well, everybody's leapfrogging each other. I expect this will occur in every single lab.

Speaker 2:
I think there's also something here for organizations, because the metabolism really gets affected. How do you develop talent when AI is doing all the apprenticeship work? This is going to be a big structural problem over the next few years.

Speaker 4:
[20:55] Yeah. How do you get to senior level players if you don't have junior level players?

Speaker 2:
[20:58] Exactly.

Speaker 1:
The SWE ladder gets yanked up. But then we see, in other news in the past 24 hours, Meta announcing an upskilling program to teach people, over a relatively short period of time, how to lay optical fiber in data centers. So I think these are all just patches or bridges to an ultimate solution. We can debate what the ultimate Star Trek economics look like after this is all over. But in the meantime, certainly if you're a junior software engineer, maybe you want to start your own company or shift to some other career path that has more of a ladder to it.

Speaker 3:
I was going to say something similar. If you look at all the senior execs at OpenAI, and I don't know about Anthropic as much, but we know a lot of the people at OpenAI, the vast majority of them were previously founders and co-founders of other companies. So they became senior through the entrepreneurship pathway, not through the work-your-way-up-the-ranks pathway. So that's the answer to your question. Now, I know a lot of the listeners will then say, not everybody can be an entrepreneur.

Speaker 4:
We'll come back to that. At the end of this podcast, we're going to have a session addressing a question that's been asked by a lot of our listeners: hey, I don't think I can become an entrepreneur. Is it for everybody? We're going to dive into that a little bit. So, Dave, a question for you. I can imagine this three-month SWE-pocalypse works for the hyperscalers, where you've got these high-level software engineers, but what about large Fortune 500 companies? What about, like, Chase and Kaiser Permanente? I think these technologies are going to take a while to permeate through the larger companies. How fast is it going to happen there, do you think?

Speaker 3:
[22:40] I think it will vary depending on the quality of the management team, but at great well-managed companies, it will happen very, very quickly. I think you have to realize that after Sam's house got firebombed and then shot at and he has bullet holes in his door, the bias when you survey these people is toward downplaying, not toward exaggerating. That wasn't true a year ago. A year ago, the bias was toward exaggerating, but now it's the exact opposite. So the capabilities of the AI are going to be far, far ahead of what they want to talk about. The employers will then underreact, by and large, and a lot of them just don't want to deal with it. And so you'll see a subset of them that are early adopters. You know, part of it is cost reduction, but another bigger part of it is Greenfield building of new things that you never could have thought of before. And that's such a massive sweeping opportunity that those companies will get miles ahead, hopefully create new jobs as they grow. And then the legacy guys will be very, very late to catch up and will probably get crushed in the process, actually. And that includes some really big companies, you know, big banks, big insurance companies. But they just won't move fast enough and they won't see the little guy coming up. But you know, a little guy with an effective workforce of a billion AIs, they're going to go very, very quickly past you and you won't see it coming.

Speaker 2:
[24:01] I have a clear lens on how this will happen.

Speaker 4:
[24:04] Please.

Speaker 2:
I think what's going to happen is you're going to get this horde of startups building in an AI-native mode, and the big companies will struggle to adopt because, as I've mentioned before, all our workflows in big companies are human to human. All the approval lines are human to human. And you need to be AI-native, which means you completely need to redesign your workflow. Most big companies don't have the skill set or the ability to do that. So what will happen is they'll watch these startups come along, and they'll start buying them, because they have to in order to compete, and then they'll reverse into it that way. That's what I think happens.

Speaker 4:
[24:41] Innovation on the edge.

Speaker 3:
I think it's really clear, Salim, since you're in every country and every city in the world pretty much every week, as far as I can tell, that the range of speed of adoption by geography is mind-bogglingly different. It is just crazy, crazy how far ahead San Francisco, Austin, Boston are, moving at warp speed. And then you go elsewhere in the country, and it's much slower. But then you go to Europe, and it's like no action at all, no reaction whatsoever.

Speaker 4:
[25:11] Sublinear.

Speaker 3:
And so, yeah, very sublinear. There's a view of the world that if you think about some tribal village in the middle of Africa, and then AI comes into the world, does that tribal village even know, does anyone care, is it interrupted in any way? No, it just sits there and does what it's been doing for 100 years. Some of America is going to be like that too.

Speaker 4:
[25:32] But of course, Dave, we have more than 8 billion handsets on the planet, and AI is diffusing to all individuals. I told you in the last pod, my story, when I was in Morocco, of that young man who basically chatted with ChatGPT to come up with his own new novel career. It's going to be an initiative people take.

Speaker 2:
[25:55] I think people will grab the tools and start using them to improve their lives. It will be amazing.

Speaker 3:
Well, I think, you know, at the rate of growth that we're talking about, the people who choose to grab the tools will thrive like crazy, regardless of what happened to their old job. I'm almost positive of that now, because the rate of what it can do for you and with you is growing so much faster. And so, people who lament the 10 years they spent learning to code Python better than anyone else on the planet, and who say, I can't let go of it, they're going to suffer for not grabbing the new paradigm. You're trapped by your history. But if you let it go, just release it, focus on the new, and ride the wave, there's so much value being created that you'll actually end up ahead, not behind where you would have been.

Speaker 4:
Well, as we talk about speed of growth, Elon comes back into view, and here he is predicting the next rollouts of Grok. So Grok 4.4 will be twice the size of 4.3, at 1 trillion parameters. It will be coming out in early April. Then comes Grok 4.5 at 1.5 trillion parameters. Then 4.8, 4.9, and Grok 5, which he predicts to be AGI; Grok 6, ASI; Grok 7, ASI 2. I mean, this, I think, was like a 2 a.m. tweet by Elon.

Speaker 2:
[27:19] I mean, seriously, Elon, ASI 2, we haven't even defined AGI, for God's sake. Come on, insert my normal rant here. And how many parameters do you need? This is crazy.

Speaker 3:
Yeah, the more the merrier. Actually, I love the fact that he's willing to talk about parameters, because I did an interview of Sam Altman at MIT back in, I think it was 2020. And I said, how long till we get to a trillion parameters? And he said, we need to stop masturbating over parameter count. And the crowd loved that. And I was like, yeah, but I want the answer. Just give me the data, man. I want the facts. And he wouldn't do it. So here we are years later, and Elon's like, okay, here's our exact parameter count by model number. And you know, the IQ goes up with the parameter count. We all know that. Then you distill it back down and it gets all confusing, and Alex has to translate it for us. But you know, this is the roadmap, and he's right. And it's transparent and it's beautiful. It's just great that he's willing to talk about it.

Speaker 4:
Alex, what do you think of this?

Speaker 1:
[28:16] Maybe to chime in. I think this doesn't necessarily bode well for XAI's roadmap if they're bragging about parameter counts. I don't think this race to super intelligence, as with Moore's law, ending when Dennard scaling ended in the mid 2000s and the gigahertz per processor, per CPU processor core topped out at four or five gigahertz. I think it's unlikely that we're just going to see larger and larger parameter counts on models. I think most of these weights, I've commented on this in the past, most of these weights are probably wasted on world knowledge that could be safely externalized into a database or a text file. Doubly so if once we're able to fully resolve the attention bottleneck and our ability to expand context windows to infinity or at least to a few billion characters or tokens, I'm not sure that we need all of this knowledge built in directly into the models. So I worry a little bit that this may signify that XAI or SpaceX AI may be running the last race and not the current race, which is focused on distillation and iterated amplification and finding, if anything, what I would have hoped from Elon's late night tweet would be a race to reduce the number of parameters at constant capability. I think a race to the bottom where we're compressing more and more capability into a smaller and smaller parameter count, so-called intelligence density. I'd love to see him bragging that XAI is going to be doubling, tripling, 10Xing its intelligence density per model. And that's what we see from some of certain Neo labs that are bragging about their move. You know, Dave and I like to talk about ternary quantization or even just pure binarization of models. I think there's probably some sort of breakthrough or maybe a set of breakthroughs right around the corner that will enable us to take the capabilities of a raw, say, undistilled 10 trillion model and compress them down to maybe a few million parameter equivalents. 
That's what I'd like to see Elon focusing on.
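The ternary quantization Alex and Dave like to talk about can be made concrete with a small sketch. This is a minimal, illustrative version of the idea (weights collapse to {-1, 0, +1} plus one shared scale); the `threshold_ratio` knob and the example weights are hypothetical, not anything from the episode or any specific paper.

```python
def ternary_quantize(weights, threshold_ratio=0.7):
    """Sketch of ternary weight quantization.

    Weights whose magnitude falls below a cutoff become 0; the rest
    keep only their sign. One per-tensor float scale recovers the
    average surviving magnitude. threshold_ratio is an illustrative
    tuning knob, not a standard value.
    """
    mean_abs = sum(abs(w) for w in weights) / len(weights)
    delta = threshold_ratio * mean_abs  # magnitude cutoff
    ternary = [0 if abs(w) <= delta else (1 if w > 0 else -1) for w in weights]
    survivors = [abs(w) for w, t in zip(weights, ternary) if t != 0]
    scale = sum(survivors) / len(survivors) if survivors else 0.0
    return ternary, scale

# Six fp32 weights collapse to six 2-bit symbols plus a single float.
codes, scale = ternary_quantize([0.9, -0.05, 0.4, -1.2, 0.01, 0.6])
print(codes)  # [1, 0, 1, -1, 0, 1]
print(scale)  # 0.775
```

The "intelligence density" bet is that capability survives this kind of compression: each weight shrinks from 32 bits to roughly 1.6 bits, a 20x reduction, before any distillation.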

Speaker 3:
[30:25] Totally right. I mentioned that to Elon when we were in Austin, and he was absolutely aware of it. But reading the body language, I think you're exactly right, Alex: he's aware of it, and he's probably thinking, why do I not have those people? He has the hyperscaler people, the builders, the creators. To use a rough analogy, he's going to launch 50 tons and then 100 tons per Starship launch, but the opposite approach would be to just make the satellites smaller. That's much less marketable, saying the satellite is half as heavy now, but it's an equally important breakthrough.

Speaker 1:
[31:00] Dave, let's be fair.

Speaker 4:
[31:01] Let's be fair. I mean, he is doing that if you look at the progression of his V3 engines for Starship. They're becoming smaller, and he's the king of minimizing the number of components and making things more elegant. Alex, a question for you: is AGI also going to be a function of the amount of compute power that you have? If you look at it that way, xAI is at about 2 gigawatts by the end of this year, OpenAI is at 1.2, my research has Meta at about 1 gigawatt, and Anthropic and AWS together at about 1 gigawatt. So xAI is still leading in total compute power, which puts it in some kind of a good position, don't you think?

Speaker 1:
[31:47] Yes, and he's the king, the god emperor if you will, of brute forcing. So even if he had just said, here are the training kilowatt-hours or gigawatt-hours we're going to spend on these respective models, and we saw that spend growing, that would probably be a more convincing experience curve to highlight than raw parameter count, which again seems like he's pointing to a trend from two years ago that the rest of the industry has abandoned as ineffective.

Speaker 4:
[32:20] I mean, this is an intelligence meme, not a timeline. He gave a timeline for 4.4 through 4.5, but he's just saying Grok 5 is going to be AGI. And then the question I have is: if he gets to AGI, or whatever his definition of AGI is, and the same for OpenAI and Anthropic, do they release it, or do they hold on to it and use it for their own science breakthroughs, their own capabilities?

Speaker 1:
[32:50] I think we're going to see a lot of that. And the snide part of me wants to read the tweet, or X post, that you shared through a computer-science lens. Is it possible that he's actually defining AGI as Grok 5.0 and ASI as 6.0? If you look at those right arrows as pseudocode, that's how I read it.

Speaker 4:
[33:10] Well, also, right, we're going to define AGI when Amazon invests in Anthropic.

Speaker 1:
[33:16] That's right. Or OpenAI's definition: AGI is whatever it takes to produce $100 billion in revenue. So these definitions, Salim, are somewhat malleable. But, Peter, taking your question seriously about whether capabilities get so advanced that the frontier labs no longer want to share them: that's already starting to happen. We saw it with Mythos, which is already being used by a delimited set of users and isn't being shared with substantially all of Europe, other than the UK, which is no longer formally part of Europe politically. I chat with my friends at the frontier labs all the time, and they're expecting all of this to happen in the next couple of years.

Speaker 4:
[33:59] All right, more Elon news: xAI launches Grok text-to-speech, targeting voice developers. xAI just shipped a standalone speech API with a 5% error rate on phone calls, versus ElevenLabs at 12%. Of course, ElevenLabs has been the darling and the default. It's super competitively priced at 10 cents per hour and supports 25 languages. Here's the question: is this just Elon's normal play, where he comes in, vertically integrates, prices to zero, and uses his real-world data to dominate an area? Is he basically going to deliver what ElevenLabs is doing? They've been the king; they're valued at $6 billion and have been the voice AI for the last three or four years. Where do you see this going, guys?
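The 5%-versus-12% figures quoted above are presumably word error rates, the standard speech-transcription metric. A minimal sketch of how WER is computed (the classic word-level Levenshtein dynamic program; the phone-call sentences are invented examples):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length.

    Substitutions, insertions, and deletions each cost 1, per the
    standard Levenshtein dynamic program over word sequences.
    """
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

ref = "please hold while i transfer your call"
hyp = "please hold while i transfer you all"
print(round(word_error_rate(ref, hyp), 3))  # 2 substitutions / 7 words = 0.286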

Speaker 2:
[34:55] I see this as a direct parallel with what we saw with Claude Design. We're just going to keep getting compounding capability, and they're choosing to go after the voice thing because somebody was able to show it was much better, and they'll just keep launching things like this.

Speaker 3:
[35:11] Well, I think Elon cares tremendously about leveraging X and winning the race to the best generated media experience. That's image creation, video creation, voice, synthetic characters, virtual girlfriends and boyfriends; he wants to win all of that for sure, because he can distribute it through X. He has that ace in the hole, so he wants it all dominated by xAI, and speech is just on that roadmap. I don't think ElevenLabs has a lot to worry about, though. Staying ahead of that curve is just not that hard, and at the end of the day the market is expanding so much. There's a trillion dollars of global payroll in call centers and voice, and we've penetrated 0.0001% of it so far. So the fact that Elon is stepping on ElevenLabs' toes a little doesn't matter; ElevenLabs just needs to keep moving forward. It's the same thing we were saying before about Anthropic: just declare what your roadmap is so people can work with you and not against you. ElevenLabs should be fine.

Speaker 4:
[36:10] What's the guidance for entrepreneurs who are building a vertical on top of one of the frontier models? Should they be concerned about maintaining some kind of a moat that's not going to get instantly disrupted? How do you think about that?

Speaker 3:
[36:27] Yeah, totally. I had a great conversation with Liz Harkavy at a16z last week in San Francisco on this exact topic. She thinks a lot of the really young Y Combinator founding teams are going to get crushed, but the more mature teams doing any kind of enterprise distribution of these capabilities are thriving like crazy. Same for anyone with a consumer user base. Peter, you and I talk a lot about our moms. My mom has not been touched by AI in any way, shape, or form yet. She has money; your mom has money; she'll buy things. There are massive parts of the tech economy she would benefit from, everything from hailing an Uber more efficiently to deciding whether to get a Tesla. All of those are conversations our moms should be having with AI, and no one has touched it yet. It's wide open. So the key for an entrepreneur is to think about your data moat. Do you have any kind of flywheel effect with a data moat? And are you directly in the frontier labs' crosshairs, or are you doing something they're not going to do? That's tricky to predict because they haven't declared their roadmaps, but you should be fine. Just jump in.

Speaker 4:
[37:35] Isn't the other thing that it's easy to build? You see some capability, an app, a piece of software, and you can easily replicate it. Then it's a matter of whether you can market it and get customers, right? So I think the differentiators of customer relationships, reach, and brand are going to become more and more important. Do you agree with that?

Speaker 3:
[37:57] Well, also regulatory. We have a company here in the lab, Vokara, that does voice AI. It's doing great, growing like crazy, great valuation. But they're doing mortgages and insurance, things with regulatory components, and knowing what your AI can and can't do, quoting a price, having an accountable agent, all that detail: xAI isn't going to do any of that. So if they give you a better speech API, that just makes it easier for you to succeed as an entrepreneur. There's all kinds of regulatory knowledge, vertical knowledge, and data-moat knowledge that makes these companies sustainable. Salim, sorry, what were you about to say?

Speaker 2:
[38:33] Yeah, two things stand out to me here. One: if you're building a tech company, make sure your tech stack is agnostic about which models sit underneath it, so you can swap out the ElevenLabs API for xAI's or another one's, because, as we just talked about, your edge will be the differentiation you create close to the customer. This is where the MTP comes in. For years, people thought Box.net and Dropbox would be toast because of OneDrive, Google Drive, and iCloud. But because they were so passionate about storage and about delivering value to the customer, they absolutely thrived. What I remember from my days at Yahoo! is: yes, Yahoo! had a storage capability, but there were three people on that team, while we had 15 people on Yahoo! Finance and 15 on Yahoo! Personals. A side project is never going to outperform somebody dedicated to that one domain full-time, 100%. So people get scared of the big companies, but if you have passion and focus, you can outperform them all the time, especially, as Dave pointed out, in areas with regulatory issues, industry-specific knowledge, and tacit knowledge that's very specific to the application or use case. That's the part that keeps you going. The technology is swappable under the hood.
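Salim's model-agnostic-stack advice is essentially the adapter pattern with a vendor registry. A minimal sketch, where the two client classes are hypothetical stand-ins (the real ElevenLabs and xAI SDKs have their own, different interfaces that an adapter like this would wrap):

```python
from typing import Protocol


class SpeechAPI(Protocol):
    """The one interface the application is allowed to depend on."""
    def synthesize(self, text: str) -> bytes: ...


class ElevenLabsAdapter:
    """Hypothetical stand-in wrapping an ElevenLabs-style client."""
    def synthesize(self, text: str) -> bytes:
        return b"11L-audio:" + text.encode()


class XAIAdapter:
    """Hypothetical stand-in wrapping an xAI Grok-speech-style client."""
    def synthesize(self, text: str) -> bytes:
        return b"xai-audio:" + text.encode()


REGISTRY: dict[str, type] = {
    "elevenlabs": ElevenLabsAdapter,
    "xai": XAIAdapter,
}


def make_speech_client(vendor: str) -> SpeechAPI:
    """Swapping vendors becomes a one-word config change."""
    return REGISTRY[vendor]()


client = make_speech_client("xai")
audio = client.synthesize("Thanks for calling")
```

Because application code types against `SpeechAPI` rather than any vendor class, the "swap half of these over to a different AI vendor" move Dave describes later is a registry lookup, not a rewrite.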

Speaker 3:
[39:56] You know what's funny about that, Salim: I launched my first successful 1,000-concurrent-agent swarm yesterday, and I couldn't believe how easy it was. I gave it a whole bunch of math problems to work on. I wanted to do something productive, but I didn't know how to coordinate it into building something cool; I'll figure that out this week. I did burn a lot of tokens; I used Sonnet to keep the price down a little. But it works, the whole infrastructure works. And the funny part is, I can just say to Claude 4.7, switch half of these over to a different AI vendor, and it just does it. It's pretty funny how willing it is to swap. What Salim said is exactly right: you need to be flexible and nimble, but it's so easy now. In the old days, you had to really think ahead to plan swappable APIs. Now you literally write one line saying swap, and it just magically works. It's wild.

Speaker 4:
[40:51] Speaking about swapping, opening up.

Speaker 5:
[40:56] You've got a perfect segue. Wow.

Speaker 3:
[40:58] That's professional grade.

Speaker 4:
[41:00] OpenAI is swapping out its leadership. Three senior OpenAI leaders left a few days ago, on April 17th: friend of the pod Kevin Weill, the VP of Science; Bill Peebles, the head of Sora; and Srinivas Narayanan, the CTO of B2B. Best wishes to them. I texted Kevin to wish him all the best, and he's going to keep me informed about what he's up to next. A brilliant, super nice guy; he was on our stage at the Abundance Summit this year. The exits are apparently tied to restructuring, reallocating Sora resources, and decentralizing the science team. Again, OpenAI is heading toward its IPO, with a focus on use of capital and on the things driving near-term revenue. Do you read anything else into this, gentlemen? Dave, Alex?

Speaker 3:
[41:57] Well, it's... Go ahead, Alex. We'll think for a minute.

Speaker 1:
[42:02] Lots of thoughts. OpenAI has seen multiple waves of executives leave over the years. I'm reminded of the great Anthropic schism, when Dario and Jared and others all left at approximately the same time and formed Anthropic. Based on what I'm hearing, I would not at all be surprised if this is another one of those events. The Anthropic schism happened in part because an earlier version of OpenAI shifted its attention away from broader research, away from robotics, away from broader RL initiatives, to focus just on large language models. And I think this is another such moment. OpenAI has just completed its historically large fundraising round, $120-plus billion, the largest ever. They're targeting an IPO as soon as the end of this year. And they're in a death match with Anthropic to be the fastest to achieve recursively self-improving AI researchers and code generation, Codex versus Claude Code. From that vantage point, anything that isn't in the innermost loop of recursive self-improvement, and that means AI for science, video generation, maybe vertically integrating higher layers of the stack like B2B applications, is getting thrown out the window. I would not be surprised if in the next two to three months we discover that another Anthropic-like frontier lab has emerged from this schism, and we go from three or four or five frontier labs, depending on how you count, to a new star in the heavens: a new frontier lab, born with a curiously large amount of capital from inception and a totally new approach to AGI slash ASI. That would be my guess.

Speaker 4:
[43:58] Nice. Your point is an important one to make here: of the 11 co-founders of OpenAI, two remain, Sam and Wojciech. Dave, you and I met with Wojciech while we were there, and he's likely to be heading the OpenAI Foundation. But again, Kevin, congrats on wherever you're going next. Dave, what were you going to say?

Speaker 3:
[44:21] The numbers are so weird, and I'm going to throw this out there, maybe for the last time. Look at the underlying numbers: when we were at OpenAI, Peter, Mark Chen had just been offered a billion dollars to come over to Meta, and he turned it down. Remember that? Sam reacted by going to every single employee other than the baristas and giving them a million-dollar spot bonus. Not vesting or anything, just: here's a million dollars. Crazy, crazy numbers. A guy like Kevin joined two and a half, three years ago and probably got 0.1%, maybe 0.2%, vesting over four years.

Speaker 1:
[44:59] That would be a lot at this age.

Speaker 3:
[45:02] Yeah, well, today, every 0.1% at this new funding valuation is a billion dollars, a freaking billion dollars. So if he's half-vested on that, there's $500 million he doesn't vest if he leaves. He'll probably have six months of vesting severance. And again, I don't know if he left or got pushed out; we'll find out pretty soon, I think, but we don't know as of today. But that frees up half a billion dollars of comp to hire 20, 30, 40, 50 top AI researchers to compete directly with Anthropic. I'm not saying that's what's going on; I'm just saying those are the numbers. When a company goes up in value that much and you start talking about a trillion-dollar valuation, all these dynamics get wacky. The amount of money flowing around is absolutely wacky, not normal company dynamics.

Speaker 4:
[45:56] Rarified atmosphere. We're going to come back to this whole conversation at the end of the pod: we keep talking about billions and trillions, and the majority of listeners, including myself, can't relate to those numbers. All right, the speed of AI and the AI business is stunning. As we were recording this podcast today, two pieces of important news dropped, so we're doing a pickup here to cover them both. The first: the ChatGPT Images 2.0 model is released, reporting 99% text accuracy and extraordinary resolution. I'm going to play this video, and then I want to hear what everybody thinks about it.

Speaker 6:
[46:36] We are launching ImageGen 2.0. If we think of Dali as cave drawings and ImageGen 1 as ancient art, then ImageGen 2.0 is the Renaissance.

Speaker 7:
[46:45] ImageGen 2.0 is the smartest image generation model ever built, with the ability to generate complex, polished and production ready visuals with accurate text and structured design. You see, this model isn't just generating images, it's thinking.

Speaker 8:
[47:00] That's right. ImageGen 2.0 is thinking and researching, and it can even search the web to generate images with the most accurate information available. With that information, the model is able to generate graphics that explain complex systems, and images that solve math problems with proofs.

Speaker 9:
[47:21] And with new multilingual capabilities, you can create visuals with multiple languages for the entire world.

Speaker 10:
[47:27] And now, for the first time in ImageGeneration, you can create multiple distinct images at once. So you can generate entire magazines with structured typography and photorealistic photos, full renovation plans for every room in your house.

Speaker 4:
[47:40] So, we're having conversations not only with words but now images back and forth. Dave, thoughts on this one?

Speaker 3:
[47:48] It's incredible. You've got to try it; it's so easy to do. Just go to your ChatGPT interface and go to Create Image. The naming is terrible, so you wonder whether it's really the right thing, but as Alex was explaining to me a minute ago, it's all the same: you go to Create Image, and it now defaults to Images 2.0. You'll know as soon as you're there, though, because, as it says here on the slide, it creates images so quickly compared to before, and they are incredibly good.

Speaker 4:
[48:13] Stunning, huh?

Speaker 3:
[48:14] Stunning. So Alex, what's your normal test? The first time, I asked it for a specific scene in a specific location that only I would know. And it did it; it got the background right. It has insane amounts of built-in knowledge that fills in the gaps in your request compared to before. It's really good.

Speaker 4:
[48:33] Alex, do you have a standard test you use with image models?

Speaker 1:
[48:36] It's been an evolving test over time for OpenAI's image models. I'll often ask it a molecular diagram question. This time around, my first prompt to it was, show me the molecular diagram for, you know, a typical drug, say ibuprofen or acetaminophen.

Speaker 4:
[48:55] Or my favorite drug, caffeine: 1,3,7-trimethylxanthine.

Speaker 1:
[48:59] Caffeine's a little easier in some ways; maybe it's more popular. So it got a couple of the details wrong: it added a couple of alkane linkages in ibuprofen that weren't there. I tried a few other tasks. I asked it to create manga for a single chapter of Accelerando, and it did a decent job. I will say the text quality is vastly improved. If you look at the benchmarks, arena.ai scored it at 11512, which I believe is an Elo score, the largest gap yet; it's now the number-one text-to-image model in their arena, far ahead of number two, NanoBanana2. This feels like a remarkable jump. A few other comments, if I may. One is the way OpenAI rolled this out, commenting on the reasoning abilities. We talked maybe six to nine months ago on the pod about how eventually we'll see images as first-class reasoning modalities: the frontier models will start to think to themselves, as humans do, in a sort of gestalt image rather than in language. I think we're starting to see the dawn of first-class visual reasoning by the models, which is very exciting. Tool use by ChatGPT Images 2.0, or GPT-Image-2, depending on what you call it, is an interesting plot twist for image generation. Query whether that will lead to additional copyright-infringement claims; I don't know, but it's an interesting question. But the biggest question we should be asking, doubly so in the context of a story we covered elsewhere in this pod, the departure of key OpenAI executives ostensibly so OpenAI can better compete with Anthropic in the codegen races: why is OpenAI releasing such a compute-intensive model?
When Google launched Gemini's NanoBanana Pro and NanoBanana Pro 2, the rumors were that it sucked up an enormous amount of Google's compute and that Google was scrambling to find the compute to service the NanoBanana Pro demand. So why now, in the middle of a codegen and AI-researcher race, is OpenAI choosing to release what would ostensibly be a pretty compute-intensive modality? I'm speculating here, but if you look at the enterprise value of text-to-image generation, I'd guess it's an order of magnitude or more less valuable than code generation. So why, in the middle of this hot race, is OpenAI releasing an image model? Any ideas? I'll speculate; this is unvarnished speculation, because it's a pretty strange bit of timing to drop Sora on the one hand and launch a next-gen image model on the other. One thought, again admittedly speculation: if images are now much cheaper to generate, if algorithmic advances have suddenly made image generation a lot cheaper and faster, then maybe images are no longer that compute-intensive, and maybe OpenAI has achieved some algorithmic advance over, say, Google's NanoBanana Pro. In that case they can afford to allocate whatever compute this needs without breaking a sweat or interfering with their recursive self-improvement program. That's one theory. Another theory: maybe image generation is actually instrumental in competing with Anthropic. Anthropic, as we've discussed on the pod previously, doesn't let users generate images. So perhaps, again speculatively, OpenAI thinks reasoning over images matters; note that even in that launch video and in their demonstrations, OpenAI talks about generating screenshots, for example.
And in fact, Sam was teasing this release earlier today by posting what looked like a screenshot of macOS and saying, in effect, this isn't a real screenshot. So maybe OpenAI is betting on some form of supremacy in code generation using images as an intermediate representation. Maybe they think stronger image-gen capabilities will lead to better UI design, better codegen, something like that. Those were the first two speculations that popped into my mind.
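For context on the arena Elo score Alex cites: image arenas typically rank models by running Elo-style updates over pairwise human votes. A minimal sketch of one such update (starting ratings and the k-factor are illustrative conventions, not the arena's actual parameters):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update after a pairwise comparison.

    score_a is 1.0 if model A's image wins the vote, 0.0 if it
    loses, 0.5 for a tie. k controls how fast ratings move.
    """
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # logistic win prob
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Two models start at 1500; A wins a single head-to-head vote.
a, b = elo_update(1500, 1500, 1.0)
print(a, b)  # 1516.0 1484.0
```

Over thousands of votes these updates converge to a leaderboard where a large rating gap, like the one Alex describes, implies one model wins the head-to-head matchup far more often than not.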

Speaker 4:
[53:49] Fascinating. When we heard they were going to drop something, of course, we've been waiting for Spud to drop, their answer to Mythos. Still expected this month, possibly?

Speaker 1:
[54:02] Still expected, based on everything I'm hearing, imminently. It may be branded as a .5 model, ChatGPT 5.5, if you will. But as far as I can tell, it's still expected imminently.

Speaker 4:
[54:17] Salim, any thoughts on ChatGPT Image 2.0?

Speaker 2:
[54:21] I've got two thoughts. One: Alex's hypothesis is always a possibility, obviously, but maybe they also just need to show they're still relevant by launching this. And X is abuzz with it; the reactions are crazy. My standard test is to generate a photograph of the Gutenberg printing press from the 1500s; obviously, we didn't have cameras back then. It did really well. I ran this about six months ago with NanoBanana and ChatGPT, and the NanoBanana one was way more realistic. The new ChatGPT one looks better, but only marginally better, than the NanoBanana one from a few months ago. My second thought: as they generate buzz for an IPO, they need every ounce of consumer attention and retail-investor attention possible, and getting this out to creatives may be one way of doing it.

Speaker 1:
[55:27] I scratched my head briefly over that, because every narrative I've seen out of OpenAI in the lead-up to the IPO, including from their CFO, is about OpenAI transitioning from consumer use to enterprise use. So it's such a head-scratcher in my mind. Are enterprises just loving text-to-image, maybe? Or is it... Yeah.

Speaker 2:
[55:51] No, no, to my earlier point: there's a huge number of PDFs and PowerPoints and so on inside enterprises that could be scanned and re-rendered. That's what I thought was really interesting about the Claude Design thing. So maybe this is a response to say, hey, we're still relevant there too.

Speaker 3:
[56:09] I think you guys nailed it: exactly what Alex said and Salim reaffirmed, and a lot more. I use Anthropic constantly, and it's generating megatons of code for me. But if I ask it to create a diagram to explain what it did, it sucks beyond belief; it just absolutely falls apart. So then I have it write code to create diagrams, which is a royal pain in the ass. It just doesn't do it. So I use Gemini for most of my planning, because it's brilliant, and Claude for all the coding and all the complicated thinking now. But there's no role for OpenAI in that stack. If you look at what goes on in the real white-collar world, coding is a little piece of it, but architectures and design documents and images are a massively bigger part. So this actually makes OpenAI suddenly relevant in white-collar work. It's a good entry point, and once it's in the stack, usage tends to grow.

Speaker 4:
[57:03] I'm sure it's not a random move; I'm sure they've got real data behind it. The second story that came out, a very important one: SpaceX negotiates the right to buy Cursor for $60 billion. The war is on. We heard Elon say they have a lot of catching up to do, especially in coding. Well, guess what: buying Cursor helps them catch up to Anthropic. Cursor already has the best product out there, but they're compute-constrained, and of course Colossus gives them all the compute they need. Interesting breaking story. Dave, what do you think of it?

Speaker 3:
[57:42] Well, you got that storyline exactly right. They need a foundation model, they need compute, and xAI needs distribution to get in the game and be used. It's interesting, though: when you use Cursor right now, you can switch between models instantaneously. So you would assume that if this happens, it would somehow favor xAI, maybe put it at the top or make it the default. But as of right now, Cursor is completely neutral; it's easy to toggle between them. And remember, OpenAI was very close to buying Windsurf before it got blocked, so they turned around and featured Cursor in their GPT-5 launch. Remember? They kind of featured that partnership. So I don't know if this triggers some kind of bidding war. But if this news story is right, Elon has it locked up; he has a right to buy the entire company.

Speaker 4:
[58:41] Or they pay a $10 billion walk away fee. Fascinating.

Speaker 3:
[58:46] Yeah. Yeah, interesting structure.

Speaker 2:
[58:50] They bought optionality, and I think the trade-off is exactly right: Cursor needs compute, and these guys need access to code-generation capabilities.

Speaker 3:
[58:58] Yeah. I think this is also effectively a liquidity event. A lot of the Silicon Valley gang is saying, these valuations are incredible, but when are we going to see actual liquidity? This is at least $10 billion of liquidity, more likely $60 billion.

Speaker 4:
[59:12] Potentially in pre-IPO stock.

Speaker 1:
[59:15] There are a few angles here. I think this is a bizarre story, but it's the sort of bizarre story that could only happen in April of 2026. A few thoughts. One, dovetailing with the last story: we're seeing that code generation and the rise of AI researchers is, in some sense, the innermost loop within the innermost loop. Everything seems to be coming down to who among the frontier labs can generate the best code, so they can build the best AI researcher, to generate the best AI, to generate the best code. Second: Elon and others have commented after the SpaceX-xAI merger that xAI was not built correctly from the start and needed to start all over again. SpaceX's VP of Starlink, who's now also head of engineering for xAI, made similar comments publicly. There was a desire to reboot. Query whether this acquisition, or quote-unquote right to acquire Cursor, ultimately represents a complete reset of Grok, whether it's in some sense an admission that Grok's codegen abilities were not remaining competitive with the frontier. And what's the frontier? It's Claude. Next point: there have been many studies and analyses of Cursor's behavior, and of course Cursor, before it decided to launch its own first-party model, was for the longest time in a somewhat privileged position to watch user interactions with all of the existing third-party frontier models. So if you're Cursor, you have to ask yourself, you're in some sense in a position of quasi-vulnerability to those frontier-model vendors: what do you do?

Speaker 3:
[61:20] Quasi-vulnerability.

Speaker 1:
[61:22] Quasi-vulnerability.

Speaker 3:
[61:23] Very big quasi there, yeah.

Speaker 1:
[61:25] Yeah, the quasi is a load-bearing word fragment there. So you're in a position of vulnerability; what do you do? To the extent that you have access to user behavior, you take as much user behavior and context as you possibly can, you probably take a Chinese open-weight model, and you fine-tune the heck out of it on all of that user behavior. And to the extent that you can replicate Claude Code-level codegen abilities, you probably try to do that. So if you're SpaceX AI and you really want to be competitive with Anthropic in particular, what's the best way to do that? You buy yourself optionality from Cursor, which may be, question mark, question mark, one of the best or cheapest ways to acquire Anthropic-level user-interaction data for codegen, given all the users who have been interacting with Claude via Cursor. That's the second thought.

Speaker 4:
[62:27] Fascinating.

Speaker 1:
[62:28] Third thought, if I may: the whole cloud angle. SpaceX, right before our eyes, is turning into a hyperscaler. We talk about the Dyson swarm all the time, but this is actually what a Dyson swarm probably looks like: a cloud in the stars. And Cursor would seemingly be the first major tenant of the Dyson swarm that SpaceX AI is going to build in sun-synchronous orbit, or maybe around the sun. I don't know. Either way, even per SpaceX's announcement, this is potentially a million H100 equivalents; that's Colossus now, ground-based. But that's just an anchor third-party tenant. Historically, Colossus and Colossus 2, Elon-style, were fully vertically integrated, serving only Elon's applications. Now we're seeing that cloud infrastructure layer opened up to third parties. Cursor may or may not get acquired, but I'd expect many more deals like this, where Elon exposes his vast GPU fleet on the ground, and soon in orbit, to third parties. How soon until we're all running our compute loads on Colossus, Colossus 2, or whatever SpaceX names the orbital one, maybe Colossus 3? I think it's a very interesting time for orbital clouds, and we're starting to see the beginning of that.

Speaker 3:
[63:49] I think the second thing that Alex said is a great lead-in to our Blitzy podcast coming up in a couple of days, because I think it's very likely that Elon's incredibly confident in his foundation model, and then layering the chain-of-thought reasoning on top of that, fairly confident in that. But that next layer that actually makes code generation really work, there's a lot to it. Cursor has worked on it a lot. Blitzy has worked on it a lot. A handful of other companies have. But I think this elevates that layer of the stack to first class. The science and technology in that layer of the stack wasn't really part of the AI researcher lexicon, but it's become critically important to getting actual functional code, to actually getting the AI researcher to really exist. I think this elevates that layer to another level. Elon is saying, yeah, we could have the best foundation model ever, but we can't exist in the code generation world without this other thing. It's easier to buy it from Cursor than try and reinvent it.

Speaker 4:
[64:55] Salim, what are you going to say?

Speaker 2:
[64:56] Yeah, note that the $60 billion is a drop in the bucket compared to their overall market cap. It's like a little small.

Speaker 4:
[65:03] And you have to imagine Cursor is concerned about being disrupted, right? So, you know, they have everybody else creating capabilities equivalent to or greater than. This is a chance for them to become part of SpaceX AI as it's about to catapult literally to the stars. And it basically secures their position and makes them part of Elon's ecosystem.

Speaker 3:
[65:27] Well, so the Cursor team, you know, the four guys, they're awesome, but none of them is a natural public company CEO. Like, there's no obvious Mark Zuckerberg there. And so what are you going to do? Just stay private until eventually you're crushed? You know, you have to do something to lock this up. And this is a good way to do it.

Speaker 1:
[65:48] I also just want to contextualize, if I may. We talked about this elsewhere in the pod. Rumors flying again that many Google DeepMind researchers aren't using Gemini for codegen, they're using Claude. So I think in some sense, there's a sense in the air that codegen for recursive self-improvement AI researchers, this is where it's all coming down to. This is like the final countdown of whatever stage of the Singularity we're in, where SpaceX is scrambling with $60 billion acquisitions and OpenAI is dropping large chunks of their initiatives just to focus on this and Anthropic is hoarding the compute that they do have just to focus on this. If you sort of squint at the chessboard, all of the major players are dropping either large amounts of capital or entire other initiatives just to focus on codegen. I would speculate, and this is somewhat informed speculation, that's because codegen, in all of their minds, is the critical path to maximum change, maximum acceleration, and something interesting is certainly going to come out.

Speaker 4:
[67:00] To Grok 5.

Speaker 1:
[67:01] Maximum recursive self-improvement. Well, yeah, Grok 5, which is conveniently the definition in Elon's mind of AGI.

Speaker 4:
[67:09] I love it. All right, guys, and now let's go back to our regularly scheduled programming. Every day, 10 times a day, something is dropping. Extraordinary. Everybody, welcome to the health section of Moonshots brought to you by Fountain Life. We talk about AI on this Moonshot podcast all the time. One of the most important things AI is going to be able to do for you, besides educating your kids and helping you with your taxes, is making sure that you're living a healthy lifestyle, that you get a chance to get to 100 plus. I'm here today with Dr. Dawn Musalem, the Chief Medical Officer of Fountain Life and a part of my medical team, Dawn, a pleasure.

Speaker 11:
[67:45] Great to be here.

Speaker 4:
[67:46] You know, the thing that people are concerned about most about living to 100 or 120 is their cognitive abilities, making sure they don't have dementia. And the numbers about dementia are problematic. Can you share what you've learned?

Speaker 11:
[68:01] Such an important point. And you're right, at Fountain Life, our members, the number one thing people are most concerned about is losing their brain health, forgetting the name of their child, forgetting the face of their loved one. We know that when it comes to dementia, the conservative estimates are that 45% are entirely preventable. What was amazing is with the advanced testing we're doing at Fountain Life, one quarter of our members had advanced brain age. But what was really awesome is again, back to that prevention, when we partnered it with Healthy Living, this gives me chills, eating healthier, moving our bodies, sleep. Optimizing sleep is so important. You know what we saw? We saw that we improved that brain age by 26%. That is a big, big number to show that the majority of those individuals were able actually to improve the brain age.

Speaker 4:
[68:48] One of the things I love about Fountain is we're searching the world for the best therapeutics, the best approaches, and making sure we bring them to our members. So if having healthy brain function till 100, 120 is important to you, check out Fountain Life, go to fountainlife.com/peter. Make sure you become the CEO of your own health. All right, now back to the episode. Let's jump into the economy. I found this chart absolutely fascinating. This is how hyperscalers compare to the mega-projects in the United States. In particular, this is from a tweet from Finn Morehouse. Data center capex is hitting almost a trillion dollars by the end of this year, a trillion dollars spent over six years. That compares to the Apollo program, which spent $257 billion over 14 years. The Manhattan Project spent $36 billion over five years. The Interstate Highway program spent $620 billion over 37 years. You know, when I first looked at this chart, I was like, okay, the numbers are big, but really let's talk about inflation, or in particular about percent of GDP. So I did the work and put this chart together. And yeah, the data centers are still outpacing the Apollo program and Manhattan Project by a factor of five. And of course, in the right-hand column, all those were government funded, and we're talking about four private companies driving this trillion-dollar spend. Alex, your take?
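As a sanity check on the per-year comparison above, here is a quick illustrative calculation. The dollar totals are the ones quoted in the episode; the script and the ratios it implies are rough arithmetic, not figures from the chart itself:

```python
# Annualized spend for each mega-project, using the totals quoted above.
projects = {
    # name: (total spend in $B, program length in years)
    "Data centers":        (1000,  6),
    "Apollo":              ( 257, 14),
    "Manhattan Project":   (  36,  5),
    "Interstate highways": ( 620, 37),
}

for name, (total, years) in projects.items():
    per_year = total / years
    print(f"{name:20s} ${per_year:6.1f}B per year")
```

On a raw per-year basis the data-center buildout (about $167B/yr) runs at roughly nine times Apollo's rate in these dollars; the "factor of five" figure discussed in the episode additionally normalizes each program by the GDP of its own era.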

Speaker 1:
[70:21] Well, the Dyson Swarm is going to consume a huge portion of our economy. I don't know that this should be that surprising to the audience. If it's 1% one year and 10% the next year and 50% the following year, at some point all of this infra spend, all of this tiling the earth and soon the heavens with compute, surely this consumes the economy. This becomes the de facto economy. I would argue that this is fundamentally unlike the railroad buildout. Some may bellyache, saying, well, actually, the railroad buildout consumed a disproportionate amount of GDP and ultimately there was a collapse, or comparing it with Apollo or interstate highways. It may not happen in this economic cycle, it may happen two or three economic cycles later, but I really do think data centers are the infrastructure for the future of civilization. I really do think, say, 10, 20, 30 years from now, the majority of intelligence in our solar system is likely to be intelligence that's hosted on data centers and not human, maybe a good deal sooner than that. That's a very, very conservative outer bound. So if this is the entire machinery of civilization, then at what price do we price the spend? It's the future of our whole civilization.

Speaker 4:
[71:38] But Alex, the most interesting thing here is not the percent or the numbers. It's the fact that all these other mega-projects were government-funded. And here we see private funding driving this incredible spend.

Speaker 2:
[71:52] Yeah, I think this is very, very exciting, because in the past you'd have to build all the momentum and get it onto the political priorities. And there's so much spadework to do to get government behind a major initiative like this. And therefore everything just took forever. Major things took forever. Now, because of democratization, individuals and private companies can do things that only governments could do before. And that is huge. That will move civilization forward at a 10x faster rate than we did 50, 60, 70 years ago. 100x faster. And so this is, for me, the most exciting part.

Speaker 1:
[72:26] If the economy grows as quickly as some expect, including Elon and including myself, I'm not even sure the distinction between public spend versus private spend matters that much. If the economy ends up growing 10x, who cares whether it's the approximately one-third of the economy that is the public sector or the two-thirds that's the private sector. If the pie grows so wildly, it almost, in my mind, doesn't matter. In a much, much wealthier future, a few years from now, the government could pick up the tab for AI data centers if we want to. And that would still be enormous compared to the size of the economy that we have right now. So in my mind, to the extent I sound a little bit blase, perhaps, about the distinction between the private sector funding this buildout versus the public sector, it really is because I think this is the fundamental infra for the future of civilization that will get us to 10x year over year.
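The "who cares, if the pie grows 10x" argument turns on how fast a 10x economy could actually arrive. A quick sketch, where the growth rates are hypothetical inputs, not figures from the episode:

```python
import math

# Years needed for an economy to grow 10x at a constant annual growth rate.
for rate in (0.03, 0.10, 0.30, 0.50):
    years = math.log(10) / math.log(1 + rate)
    print(f"{rate:4.0%} annual growth -> 10x in {years:5.1f} years")
```

At historical ~3% growth, 10x is most of a century away; compressing it into "a few years," as Alex suggests, implies sustained hypergrowth north of 50% per year.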

Speaker 4:
[73:17] In the long run, yes.

Speaker 1:
[73:18] We're running out of long run, though. It's like a few years away.

Speaker 4:
[73:20] Yeah, I know, five years long run. Salim's point is the one I think is most interesting. The notion is getting political support. I mean, look at the Apollo program as a perfect example. It cratered back in '72 because the political support was gone. But all of a sudden, a lot of these projects take a very dedicated, long expenditure of capital. And that can be done by a single individual, right? Elon or Bezos can say, we're building out the moon, we're building out Mars, and fund the entire thing.

Speaker 3:
[73:52] Let me give you one story on the government versus private. That F-35 program that's on this chart is actually the second-fastest-growing spend of all time. And that's a single plane. Nick Leonard, who works on VoiceRun here in the lab, worked on the F-35. That thing's supposed to have a vertical takeoff capability, which is very, very hard to do. And so they asked him to work on a nozzle for the thrust. You know, it's like this rotating nozzle. And could we make it out of carbon fiber? And you're like, well, there's like a thousand-degree exhaust going through that thing. How are we going to make that out of carbon fiber? They said, well, just try. So they found a way to make it out of carbon fiber to reduce the weight a little bit. But they have to replace it every, like, 50 flight hours. And they said, fine, good enough. It's only taxpayer money. Go ahead and burn those things up. You know, what do they cost? A million dollars each? Great. We'll burn them up every 50 flight hours. So they actually made that choice. When I look at this chart, and I look at the data center buildout, this is the most important and biggest thing humanity has ever done. And whatever country or region does it first and fastest is going to have insane amounts of growth, success, power, whatever. I know the point of the slide is that this is the biggest and fastest thing ever, but to me, it's too small, too slow. It needs to be even bigger, even faster. And if you push any of it into the public sector, it's going to slow down and it's going to go nowhere. So it's great that it's in the private sector, but it's so different from all these other projects. This is going to change everything. AI underlies everything in the future.

Speaker 4:
[75:25] The total economy of all of society, right?

Speaker 2:
[75:28] Two quick points here. I remember, Peter, when we were doing the one-week executive programs at Singularity, we had a very senior official at the DoD come through. And he stood up and he goes, yeah, all you guys talk about exponentials, investors are all hovering at the knee of the curve, trying to take down the technologies as the curve turns upward. You forget that government is the one that's been funding the flat part of the curve for a very, very long time. And as government budgets drop and they run out of money, who's funding the flat part of the curve, right? And that answer was kind of stuck in my head for a long time, until we see now that the private sector can actually take on that type of investment and make that investment. To Dave's point, the US private sector is doing a huge amount. But I think there's a massive problem where governments around the world are not putting enough money into data centers for their sovereign future. And that is going to be a massive problem for them as this world moves on, because they're going to get left behind. So if you're in government in any country around the world, figure out a way of funding your own data centers and retaining your own sovereignty.

Speaker 3:
[76:36] Totally right. I think the only comparison that I like in history is World War II. And the industrialization buildout during World War II, just in terms of speed and mobilization, that's what a well-thought-out plan would do right now. Do what we did in 1940-41, not just in the US, but globally. That's what you should be doing right now in the AI race. And so, I don't know what we spent on World War II, but it was a heck of a lot more than the highway system and the railroad system.

Speaker 1:
[77:07] I'm curious to take a straw poll, if I may. So, we're all obviously very excited about 1% and a continued increase in the annual percentage of GDP for AI data centers. I'm curious, of the three of you, what do you think, looking backwards historically, will end up being the actual peak annual percentage of GDP spent on AI data centers?

Speaker 4:
[77:32] On Earth and in space?

Speaker 1:
[77:34] Everywhere.

Speaker 4:
[77:34] And on the moon? In the human sphere of existence?

Speaker 1:
[77:38] And after we've disassembled Jupiter.

Speaker 4:
[77:40] It will approach 100%.

Speaker 2:
[77:41] Yeah, it should just keep going.

Speaker 1:
[77:42] So, you think it goes to 100%. What about you, Salim?

Speaker 2:
[77:45] Well, I think over time, it should get higher and higher, because over time, as you talk about, you're in a loop. Everything becomes computronium and everything should be in service of building that.

Speaker 1:
[77:56] What about you, Dave?

Speaker 3:
[77:58] If you include building physical things to make people happy as part of the data center build out, it's 100%. If you take that out, I'd say probably 95%.

Speaker 1:
[78:08] I think it's going to... This is not investment advice. I think it's going to cap out at a maximum of somewhere between a quarter and a third of GDP and then decline after that, because at some point AI data centers are consuming so much of our available capital and resources that pressure goes through the roof to reduce that spend through new means of automation, like robots, and maybe we get our nano-assemblers, and that creates cost efficiencies that maybe compensate for... To the extent there is a bit of a Jevons paradox here, I don't actually think it's going to go to 100%. I think it'll peak somewhere.

Speaker 4:
[78:46] I think you're cheating on definitions here, Alex.

Speaker 2:
[78:48] Yeah, and also, at some point, we're going to be in a post-capital and post-scarcity world, in which case GDP becomes meaningless.

Speaker 1:
[78:58] You think GDP is going to become meaningless before we finish exhausting data?

Speaker 2:
[79:03] Well, let's say that GDP is out of date now. This is a whole other debate, but GDP is out of date now and has been for several decades. The guy who created GDP said it's one of the worst ways of measuring the economy. Do we need to be moving to other measures anyway?

Speaker 3:
[79:18] Honestly, I like the way Alex framed it a lot, because if you tell the general population, we're going to spend 99 percent of our money on data centers, they're like, well, how does that benefit me? That's terrible. That's the worst thing in the world. The way Alex phrased it, it's just a shift of what you define, but it's much more palatable to the world because in that definition you're using pure compute for about a third of the global economy, and the other two-thirds is robots and creation of physical goods and everything else.

Speaker 2:
[79:49] Let's get to AGI first, and then we can rethink this whole thing, and it'll figure it out for us.

Speaker 4:
[79:54] All right. Talking about the global economy, let's look at the US economy, in particular manufacturing. We had 17 quarters of negative manufacturing growth, 17 quarters of contraction, and we've now seen 16 quarters of pretty steady, accelerating growth in manufacturing. You know, I think we offshored most of America's manufacturing over the last 50-odd years. Doing the research here, the drivers of that contraction in those 17 quarters were the trade war uncertainties under Trump's first set of tariffs; the COVID-19 collapse, as we destroyed supply chains; semiconductor shortages, also part of the COVID collapse; and a hangover from the post-2008 financial crisis that never fully recovered. So, a lot of challenges as companies found themselves unable to manufacture their goods and services. And in the last 16 quarters we've seen the CHIPS Act, the Inflation Reduction Act, the Infrastructure Investment and Jobs Act, a reshoring wave, the AI infrastructure buildout, and defense spending basically driving up US manufacturing. So hopefully this continues. Any thoughts on this one, Dave?

Speaker 3:
[81:16] Well, if you want every American to be ultra wealthy, this is exactly the right thing to do. If you're worried about India, like Salim, you'd be worried sick about this because a lot of prosperity around Asia has come from outsourcing work that ends up in iPhones and in other manufactured products that then flow through America and then go back out to the world. But if we start on-shoring all that manufacturing again and roboticizing it, that depletes the rest of the world of those jobs. So I would expect that global hatred of America is going to go way up, that American wealth will go through the roof, and that this trend will continue, at least for the remainder of this administration. So try not to put a, like, you know, is this a good thing or a bad thing, spin on it because it affects different people in different parts of the world very, very differently. But massive, massive amount of capital is now flowing into the US from all over the world. And it's all, you know, because it's not labor dependent, it's robot dependent, it's basically all going to come back into the United States or North America.

Speaker 4:
[82:21] Salim, what are your thoughts on Dave's point?

Speaker 2:
[82:23] So I think that's exactly right. But what's also happening in parts of places like India is they're building their own manufacturing and their own automation capabilities, and they're doing it at reasonably low cost. Just look at the way India is tiling the country with solar panels and becoming energy self-sufficient. I think that's generally a good thing. There's a little comment to be made here, though. We've had a very bad political trope over the last many years now, that globalization has cost jobs. It turns out that's not the case at all. What's happened in the US specifically is financial engineering: private equity has gone in and bought companies in logistics, manufacturing, engineering, et cetera. When private equity goes in, they stall all innovation and outsource the jobs, and they've made it easier to do that. The irony around this, and this was uncovered by Robert Goldberg, is that the LPs in these private equity funds that are outsourcing the jobs are the California pension plans. So it's the pension of the very workers whose jobs are going to be outsourced, the worst incentive alignment ever. What we've now found, using the ExO model and others, is ways of de-risking innovation and doing it at low cost, back to that demonetization thing. You know, in the past when you wanted to do anything heavily innovative in a company, you had to make a big bet, right? The CFO hated you and would defund you as fast as they could, et cetera, et cetera. Today, because the cost of technology is so low, you can do disruptive innovation and try things out at a very low cost at the edge of the organization, and only double down when you see success. This is allowing US companies to get back in the game. Robert Goldberg, his MTP was to reinvent American exceptionalism, and he found a way of recreating that wheel.
And that's now, I think, starting to bite in a big way, where manufacturing and innovation are back with a vengeance in the US. And I think that's just fantastic. I think the ripple effects will be fantastic for the whole world as well, because it will push the whole world to operate that way. You will have to be able to compete on innovation going forward.

Speaker 1:
[84:36] I think manufacturing wants to be sovereign at the end of the day. I think we're going to have advanced nanotechnology and all sorts of other sci-fi-esque technologies that enable every sovereign nation state to effectively re-domesticate, or domesticate, their entire supply chain. I don't think globalization is necessary, given all of the technologies that I reasonably foresee over the next, call it, five to ten years. I don't think supply chains necessarily want to be global in the long term, in which case domestic capacity, for the US or otherwise, is limited more by technological advances and capability than by any sort of geopolitical shifts or shifting around of supply chains. With all of the advances that I expect to happen over the next five to ten years, I just don't see a bright future for global supply chains, and in particular globally supplied manufacturing, at all. Not investment advice.

Speaker 4:
[85:31] Alex, you know, if you think about the fact that we're talking about manufacturing physical goods and services, the other place we're going to see this probably is in food production with the vertical farms and protein-based meats. We're initiating the flip side of this, of course. Intelligence is being globalized, right? By definition, when we move these things to orbit, they're globalized planet-wide. So fascinating shifting of where things get made and consumed.

Speaker 1:
[86:02] Physically being globalized, but not politically. I think we end up with probably two sovereign Dyson, well, the US is going to wind up probably with, I think three or four competing corporate Dyson swarms, and who knows how many China will wind up with, query whether Europe or other parts of the world will be capable of launching their own Dyson swarms in time. But that's still a political division of the heavens.

Speaker 4:
[86:26] Salim, you're going to say?

Speaker 2:
[86:28] No, I mean, I just agree. I think this is an important point. By the way, a small point on why I mentioned Holland last time: it's a major global food exporter, and not just from vertical farming; they've been doing hydroponics, aeroponics, greenhouses. I'm actually getting a briefing from one of the top vertical farming companies in the world, so I'll come back and tell you guys what I learned from that.

Speaker 3:
[86:53] Well, I'll tell you, globalization exists fundamentally because of shipping. Everything is moved by boat. There's no other way to move it around the world. And the US Navy is the protector of all global shipping. You can see what's going on in the world right now. What's weird about the US is that we have all the natural resources that we need internally. And so we don't actually have a huge incentive to support globalization other than labor. We're heavily reliant on Asian labor to make iPhones, to make cars, to make parts. But if that all moves to robots, there's no natural reason why the US Navy needs to support and defend every transport ship in the entire world going forward. TBD what direction the US goes on that. But no other country is geared up right now to protect shipping. And you see Europe isn't even trying in the Strait of Hormuz. It's like, yeah, you know what, we don't want to eat that cost. You guys figure it out. But you're the biggest importer of the oil. Like, wait a minute.

Speaker 4:
[87:53] Our Navy is so critically important for this. Yeah. Dave, you surfaced this chart in our WhatsApp group. Do you want to brief us on it? It's pretty amazing.

Speaker 3:
[88:06] It's truly amazing. Well, now, this is Perplexity speaking. So if anyone fact checks this, then remember, it was Perplexity. But I think it's right.

Speaker 1:
[88:12] Blame the AI.

Speaker 3:
[88:15] I looked at the numbers, but yeah, so San Francisco is now bigger than all of China in market cap. This is just counting the market caps of the public companies. So it's about a billion people versus about a million people. The flow of wealth into San Francisco is going to make it the global financial capital of the world easily, if it's not already; people just haven't noticed yet. But you can see on the chart the rest of the US compared to San Francisco, and then China as a small fraction of that. And actually, let's see, there's another slide that showed the same thing, but I guess we took it out.

Speaker 4:
[89:03] The point, 14% of the global market cap is concentrated in 7,000 square miles. That's extraordinary. I mean, it's one major earthquake away from economic...

Speaker 1:
[89:16] There's a cheery thought. It's also, I think, in part reflective of, admittedly, a cliché that the way public companies work in China is materially different, usually, from the values of the very shareholder-activist West and the US. The cliché, of course, is that for Chinese public companies, profit is not necessarily as prized a measure of performance as it is in US and Western companies. And if you do a discounted cash flow analysis for Chinese public companies, and then you also discount the impact of the CCP and government regulation, maybe there's in some sense a suppression of book value that comes from that, that otherwise could be realized if they were a little bit more profit-oriented and a little bit less, say, oppressed by the CCP. But I think it's an interesting statistic, for sure. And certainly, if you're a US policymaker anywhere other than San Mateo County or San Francisco or Santa Clara, you would absolutely love for your geography to be the world center of the future. But I would pose maybe as an open question, how do we get diffusion from Silicon Valley and SF to the rest of the country and the rest of the world?

Speaker 4:
[90:31] The billionaire tax will do that for us.

Speaker 1:
[90:34] Billionaire tax will shake loose a few billionaires, for sure.

Speaker 4:
[90:36] No, no, no.

Speaker 3:
[90:37] The billionaire tax is interesting.

Speaker 4:
[90:39] It's moving people to Texas and Florida.

Speaker 3:
[90:44] The billionaire tax already pushed out half of the tax base that it would have taxed. That came up on the All-In podcast in a big way. And so it's completely counterproductive. But it's not affecting the inbound entrepreneurs at all. What's happening is people are forming their companies and their teams and their ideas all over the country, wherever there are universities mostly. But then they're moving to San Francisco. And they're young and they're not rich yet. So they move to San Francisco without thinking about the billionaire tax. Then later when they're billionaires, they're like, why am I here? And that's when they move to Austin or wherever. But it's not slowing down San Francisco in terms of entrepreneurship and growth. So the slide we're looking at here is tech market cap. Actually, San Francisco passed all of China in total market cap. You know, Alex pointed out that market cap is a little bit of a weird statistic in China. But I really feel like when you measure a global economy, if somebody makes dinner for somebody else and then they charge them 20 bucks for it, that counts as a dollar in the economy. But it's gone. You know, it got eaten. It turns to poop. It's not a thing. Somebody else creates a new chip or a new engine or whatever. It creates permanent value for that society. Massive amounts of gain. An iPhone, massive amount of permanent gain. That counts as a dollar in the economy too. So I think when you strip it out. Yeah, exactly. When you strip it all out, what's going on in San Francisco is absolutely extraordinary and unprecedented in world history. And then you should go and experience it for a week or two, no matter where you live, and just talk to people. You can't even walk down Market Street without five people talking about Anthropic or OpenAI right behind you.

Speaker 4:
[92:24] Go to the coffee shops. And the billboards.

Speaker 1:
[92:27] The billboards on the 101, especially, like every billboard is referencing superintelligence.

Speaker 4:
[92:32] Yeah, amazing. All right, let's jump into one of our favorite topics here on our pod, the race to space. This week, Blue Origin and their New Glenn did their third launch. It was a bittersweet New Glenn flight. Their reused booster flew beautifully. It landed beautifully. The challenge is that their AST satellite launched into the wrong orbit, and AST is having to deorbit that satellite. The first point I want to make on this story is that Blue Origin has demonstrated booster reuse three times faster than Falcon 9, in terms of time from when they started launching New Glenn to when they were able to reuse their booster. Of course, the reality is SpaceX has launched 600 times and Blue Origin's only launched three times. The big race right now is who gets the lunar contracts. We're going to the moon in 2028 with Artemis 4. Artemis 3, don't forget, is going to be testing operations in low Earth orbit for going to the moon. Artemis 4 is going to be making a lunar attempt, and right now SpaceX and Blue Origin are vying for the contract for NASA's lander there. Alex, I'll go to you as my space buddy in this conversation.

Speaker 1:
[93:51] Well, I think it's notable that United Launch Alliance, ULA, isn't even part of this conversation. It's just the NewSpace companies, SpaceX and Blue Origin. I think competition is profoundly good. I don't think we want to wind up in a singleton where a single company controls the Dyson swarm, which I think is still more or less the end game here. I think a lunar station on the South Pole with Artemis IV is just a waypoint to launching lots of AI data centers from the moon. And I think SpaceX, based on their recent public communications, would agree with that. So I view this as a very positive thing. And I would like to see ideally more than two Western companies that are capable of rapid reuse of boosters. Hopefully we'll see at least one or two more publicly traded, or soon-to-be publicly traded, companies in the space do it. We're certainly going to see quite a bit from China. I think this is very positive. I for one am looking forward to Artemis III. According to Administrator Isaacman's new roadmap, the crewed landing on the moon was originally going to be Artemis III; it's been pushed off to Artemis IV so that Artemis III can focus on a demonstration of the docking of the lander and the orbital vehicle. And I think that's going to be super interesting. I love the new cadence that NASA has announced under Jared Isaacman of rapid iteration, in some sense SpaceX-like rapid iteration, toward colonizing, or let's say disassembling, the solar system. This is the only way we get there, and this is an important step in that direction.

Speaker 4:
[95:32] And Jared has said he'll come on the pod. I wanted to wait until after Artemis II was done; I think it's time to start the invitation process to bring him on. Absolutely. The dark horse here is Relativity Space. A friend of mine, Tim Ellis, built that really to go up against the Falcon 9. Interestingly, when I met with Elon as he was announcing Starship, he said his plan was to retire Falcon 9 as soon as Starship is operational. Think about that: Falcon 9, the most successful launch vehicle by an order of magnitude, is going to be retired when Starship comes online. He always burns the ships right after he develops the next one; he retires the old system. He did that with Falcon 1, and now Falcon 9. It's going to be interesting to see what Eric Schmidt, who bought Relativity Space, does with it. I hope it gets up and operational. It would be great to have another super-heavy booster like this coming into operation.

Speaker 1:
[96:32] For sure. I think there's an interesting microeconomic question to ask: how many different transport-to-LEO services, or let's say heavy launch providers, do we actually need in order to keep that market competitive? Is that a long-term source of profit, or is it, as with, frankly, civilian air transport, a case where profit margins are driven to zero? It's just transport, just a dumb pass-through, and all of the profit is in LEO with data centers or orbital Hiltons or something like that.

Speaker 4:
[97:03] It's Boeing and Airbus, capturing 90% of...

Speaker 1:
[97:07] Except Europe isn't in this story.

Speaker 4:
[97:09] Yeah, Europe is not. Listen, they kind of tried, but they went old school all the way. And China still has not gotten to reusability yet; they're also trying, and having some failures along the way. All right, Alex, this is a fun story that you and I talk about over text all the time: the UFO and UAP disclosures that are coming. I'll play these two videos and then let's discuss. All right, the first one, President Trump on UFO declassification.

Speaker 5:
[97:43] I recently directed the Secretary of War to begin releasing government files relating to UFOs and unexplained aerial phenomena. This process is well underway, and we found many very interesting documents, I must say, and the first releases will begin very, very soon.

Speaker 4:
[98:03] All right, related, another story coming out of the news media and the White House.

Speaker 12:
[98:10] Ten missing scientists with access to classified stuff, nuclear material, aerospace, they've all gone missing or turned up dead in the last couple months. Based on what you've been briefed, what do you think is happening here and do you think that this is connected or totally random?

Speaker 5:
[98:26] Well, I hope it's random, but we're going to know in the next week and a half. I just left the meeting on that subject. So pretty serious stuff. But some of them were very important people, and we're going to look at it over the next week or two.

Speaker 4:
[98:39] AWG, over to you. What do you think of these? Are they connected? Tell us your thoughts.

Speaker 1:
[98:44] You know how Elon likes to say words to the effect that the strangest, most ironic future ends up being the one we find ourselves in. When I hear stories like this, I find myself wondering whether a UAP could land on the White House lawn, fully televised, and the general public would turn up its nose and ask what's next on television, because of over-politicization. If the president gives the proverbial Rose Garden address and makes some major announcement in connection with UAP disclosure or NHI, will people, or at least maybe half the population, say, I don't believe it for one second, because of the overly politicized climate of the moment, and ask what else is on? I do wonder whether we're going to find ourselves in that world. I will say, a few weeks ago I was on Capitol Hill doing a closed-door briefing with Senate and House staff on issues relating to science and technology, and on Capitol Hill this issue is taken extremely seriously. Contra, say, 10 years ago, when it was well outside the Overton window of discourse, it is very much inside the Overton window now. I chat with friends of mine at the relevant hyperscalers and government clouds who are actually in the process of carrying out the president's historic UAP directive, his now historic executive order regarding UAP disclosure, to declassify files on top-secret government networks connected with disclosure. As far as I can tell, this is all in process, and I'm hearing from my friends at the hyperscalers that UAP disclosure in connection with the president's executive order is scheduled to conclude no later than January of 2027.

Speaker 4:
[100:47] This has been discussed before. Like Project Blue Book, a number of presidents in the past have talked about disclosure, and what comes out is blanked out and not actually earthshaking. Are we finally going to see some earthshaking information here? I agree with your point, by the way, that this could be disclosed, UFOs on the White House lawn, alien body autopsies, and people would say, okay, what was the sports score last night? Exactly. But here's the other point I want to ask about. We saw this in Age of Disclosure. I love that documentary, and I urge people to go watch it; it's worth your while. A lot of very high-ranking generals and admirals speaking about what they've heard and what they've seen. But we also know that a lot of this was taken out of the government and put down into private companies to avoid FOIA. This is now apparently being managed by private contractors that are resistant to disclosure. Any thoughts on that?

Speaker 1:
Yeah, I will say, and I made similar comments when we last discussed the documentary Age of Disclosure, that if the central allegations in that documentary are accurate, and just to provide a one-sentence summary, the central allegation made by 35-plus senior current and former US government officials in both the executive and legislative branches is that there has been a highly illegal, 80-plus-year program to capture, retrieve, and reverse engineer crashed UAPs. If that central allegation turns out to be accurate, then I would argue, and I have argued in the past, that this is borderline a crime against humanity. If, literally as alleged in that documentary, there are civilizations, non-human intelligences, essentially raining technology down on us, and some illegally operated, quasi-governmental, quasi-contractor-based group is recovering artifacts that could advance human technology and civilization by centuries or millennia, and it's just being suppressed, or even worse, weaponized for purely offensive or defensive military purposes without the civilian applications, which could affect a variety of sectors, ever seeing the light of day, then that type of conspiracy, if it is indeed real, with all the appropriate caveats, is not just criminal but a crime against humanity that would have potentially set back human progress by maybe a century. I think it would be devastating. So that's one of several reasons why I think this is a very interesting space to watch. In my mind, we're getting superintelligence from AI one way or another. In some sense, there's relatively little alpha left in extrapolating the superintelligence explosion.
But when I think of the crazy left turns that could hit civilization in the next few years, it's a very short list of things that could derail the singularity or alter its course. Something like this is near the top of that list. And so I think it bears close scrutiny.

Speaker 4:
[104:27] Dave, do you remember the conversation with Elon? I asked Elon outright: we've seen the documentary, what do you think of UAPs, UFOs? And his comment, honestly, was underwhelming. He said, we have such advances in camera technology, why is all the imagery blurry?

Speaker 3:
[104:46] They're black and white even.

Speaker 4:
[104:48] And I was like, okay, even if you knew, you probably couldn't tell us. All right, I don't want to spend more time on this, but I do have a question here about the 10 missing scientists. You connected it all. Since Age of Disclosure, we've heard from a number of scientists, a number of people, saying, if I disclosed any of this to you, I would potentially lose my life. People feel like they're under threat. And now we see these missing scientists, these suicides. Anything on that, in brief, Alex?

Speaker 1:
[105:25] It's certainly concerning. Even in the past 24 to 48 hours there have been statements from both the executive branch and Congress about the active investigation. There has been messaging from congressional leaders that this is now being treated as connected, as opposed to just an unfortunate coincidence.

Speaker 4:
[105:47] We see a tweet down below by Representative Ogles that says, I have seen evidence so classified that just knowing it exists makes you a target.

Speaker 1:
[105:57] It is very concerning. I want to make maybe one closing comment, if I may, on the intersection between artificial superintelligence, ASI, and, to use the statutory term, non-human intelligence, NHI. And it is this. We talk, royal we, I talk, on this pod often about building the Dyson swarm, disassembling the moon, disassembling planets. I'd like to think at a larger scale for just a minute. I think as a civilization we're on the verge of having enough technology, thanks to AI, to send von Neumann probes, self-replicating probes, in all directions at relativistic speeds throughout our galaxy, and, all other things being equal, if we wanted to as a civilization, to convert our entire galaxy, over the course of tens of thousands of years, light-speed limitations obviously, to paperclips. If we wanted to paperclip the entire Milky Way galaxy and we're alone, we could, within a few years, with advances in AI and space. And so if there were ever a time for any non-human intelligence elsewhere in our Milky Way galaxy to make a cameo appearance and stop us from at least having the optionality of paperclipping the rest of our galaxy, they're going to have to make that cameo sometime in the next few years, because this intelligence explosion is about to give us, as a civilization, galactic superpowers.

Speaker 4:
[107:28] Yeah, and that of course parallels the appearance of these UAPs, these UFOs, at the dawn of our nuclear age, when we first had the ability to destroy ourselves. We could speak about this forever, but I'm going to move us along. I know you have a hard out at some point, Alex. So, in other news: here we see China continues to dominate in solar. I mean, these numbers are...

Speaker 3:
[107:51] That's a tough segue, Peter. How are you going to segue from that to this? Meanwhile.

Speaker 4:
[107:54] Good luck.

Speaker 12:
[107:55] Meanwhile.

Speaker 3:
[107:56] China made some solar panels. All right, Peter, this is where you earn your keep.

Speaker 4:
[108:02] Well, listen, China's dominance in solar is pretty extraordinary. China hit their 2030 goal five years early. 90% of new power capacity in China is wind and solar, and they have installed 1,500 gigawatts of solar, roughly half of total global solar capacity, compared to almost 300 gigawatts here in the United States. I point this out because solar is underdeveloped in the United States, and I truly hope we start developing it further. We have nuclear, we have fusion coming, but solar is here now, and I think we need to be deploying more of it. Salim, you want to comment on this?

Speaker 2:
[108:46] Well, this is just a big strategic thing. As you do solar, you need less oil. And therefore, what China is doing is slowly reducing its dependence on any kind of oil. So, energy is also the new substrate, and China is playing for substrate advantage.

Speaker 3:
[109:02] Yeah, very smart.

Speaker 2:
[109:02] All going for the... Yeah, good on them.

Speaker 3:
Yeah. Yeah, we need to get on this too, because we've already decided the Dyson swarm, the Dyson sphere, is the future, and that's solar too. Now, the panels are six times more efficient in space, but six times is not that much when you're building an entire Dyson sphere. So we're going to need massive amounts of solar manufacturing capacity.

Speaker 2:
And by the way, I've been watching carefully the advances in perovskite, and in the lab they're now hitting 35% solar conversion efficiency, which is amazing.

Speaker 3:
[109:36] And that's a crazy number.

Speaker 2:
That's a crazy number. And now the only question is, how do you make perovskite last long enough to match the competitiveness of silicon panels? And they're not far from it. They're not far from it at all. If you have perovskite, which is quite abundant and easy to make, you reduce rare-earth mineral dependence to nearly zero, which is incredible.

Speaker 4:
[109:56] And perovskite is cheap, which is one of its principal benefits. Two stories in the robotics world, actually three. The first is that Beijing ran its second humanoid robot half marathon. The Lightning robot won it in 50 minutes, 26 seconds. The human world record is around 57 minutes, held by Jacob Kiplimo. Last year, this half marathon took two hours and 40 minutes; this year it took 50 minutes. The fact of the matter is China is driving so hard on robots. It's a national priority, and they're using these kinds of competitions to highlight it. They have 150 humanoid robot companies. I'm going to turn it to you, Alex, on the second story, which is yours.

Speaker 1:
Yeah, this is sort of an interesting split screen. Let's rewind to last year, when China ran their first humanoid robot half marathon. I watched that story, I covered it, and I was so upset that the US and the West had nothing like it that I resolved to start a US equivalent. My first pitch was to Tom Grilk, the former head of the Boston Athletic Association, which runs the Boston Marathon, including at the time of the Boston bombing, trying to persuade them to host a humanoid or robotic marathon the weekend of the Boston Marathon. And they looked at me like I had dropped a dead fish on their table. So Tom recruited his son David, I recruited capital from my fund, O2-1T Capital, and we funded a startup in this area called the Professional Robotics League. This past Sunday, we're recording this on a Tuesday, we held the country's, North America's, and the West's first humanoid and non-humanoid professional robotics race ever. We did this in the Boston Seaport; you can see some video here from the event. We had a range of manufacturers, Chinese and Western. It was a 50-meter dash, because we had to start somewhere, and it was a tremendous success. I'm just sad we couldn't do a live podcast from the event, because there were so many Moonshots fans who showed up; people flew in from Texas and from Canada just to attend. This was a historic first for the country. Yes, the robots were operating a little slowly. Yes, we didn't have the support of the Chinese Communist Party forcing every robotics vendor to participate. But we had to start somewhere, and I think it was a wild, wild success. The most touching moment for me: we did an exhibition match the day before, on Saturday, with people just walking by on the street in the Boston Seaport.
A little boy came up to the railing, the little barrier we had set up, and said to me at one point, I think I could design a better robot. I want to be an engineer when I grow up. And I was ready to burst into tears. That was the most touching moment of the entire race. I would just say we're going to need to do quite a bit more of what the Professional Robotics League is doing in the West to make sure the US retains primacy in robotics. But we're going to do it.

Speaker 4:
Well, congratulations, Alex. The only elephant in the room here is that all three humanoid robots were Chinese. We need to get US manufacturers in there.

Speaker 1:
In that photo, we had some nice robot dogs.

Speaker 4:
[113:47] Our third robot story here, Salim, you kicked this to us. Why don't you go ahead and brief us?

Speaker 2:
[113:52] Yeah, so this is a picking robot, and the combination of vision plus AI plus dexterity is now leading to all sorts of new stock-picking systems. I was at the MODEX show, 30,000 people, and this is the video I promised, or one of them. Radical changes, and by the way, not a humanoid in sight, because I think in the back office you're going to see industrial robots like this.

Speaker 4:
[114:16] Only one arm.

Speaker 5:
[114:17] Front office, you may have a humanoid.

Speaker 1:
[114:20] Because you're a non-humanoid robot, Salim, you should rejoice.

Speaker 2:
[114:23] I should rejoice. But really interesting to see there. I just wanted to show this very quickly.

Speaker 4:
All right. You know, I talk about the Crisis News Network, CNN. This crossed my X filters: America's trust in mass media is at a record low. Only 28% of Americans now trust the mass media, the lowest in 50 years as measured by Gallup, down from 68% in 1972, and down 40% in the last five years. So just as we're seeing AI come online, we're seeing a collapse in trust across all of these areas. Salim, back to you on this one.

Speaker 2:
This is a business model problem. We've been telling the newspaper industry for 20 years that they were going to be in deep trouble with online, and that happened. I think the big challenge now is that legacy media has lost its monopoly on reality construction. We're going to have to think about how we deal with this. How do we deliver trust? As Jerry Michalski points out, in a world of abundance, the scarcity is trust. So you have to figure out how to bring trust back into the equation. By the way, audiences like this, and shows like this where we go direct voice, are way more effective than institutional polish at delivering trust. So we're going to need a lot more of this out in the world to restore any level of it. I think an XPRIZE around figuring out what is trustworthy and what is true would also be useful. And I'm really, really excited watching xAI, where people can say, hey, Grok, is this real or not, and bullshit gets called out very quickly. That's proving to be very powerful and very important over time.

Speaker 3:
I totally agree. I think that XPRIZE needs to come back. I know it's a really tough one to design, but this is going to become a crisis, because that line is only going down from here with deepfakes; there's no rebound in sight. And like Salim said, there are lots and lots of truthful, credible people out there, but they can't get above the clutter. If we can figure that one out, we might be able to reverse the trend.

Speaker 4:
And I think that if you're an entrepreneur out there, thinking about how to build something that delivers trust, in a way people actually believe is true, is a great opportunity. In other stories, we just saw Apple CEO Tim Cook step down, with the head of hardware engineering stepping into the role. Any thoughts on this, gents?

Speaker 3:
You know, I'd lost track of the fact that when Steve Jobs passed away, Apple was worth about $300 billion, and that was huge. Now, $4 trillion? It's just crazy how big everything has gotten. Meanwhile, Apple has not introduced a major new product category in that entire time; it's still the iPhone. It's crazy how little innovation has driven that much growth and profitability under Tim. He did a great job of cutting costs and vertically integrating everything, building Apple silicon, internal chips. The biggest innovation during that time frame was AirPods, if you can believe it.

Speaker 4:
[117:42] And of course, the phone is going to go away as a modality. We've been talking about that for a while, where headsets come in and pins and other technologies that are watching and listening all the time. And of course, your AI becomes your interface to sort of intermediating all the apps out there. Alex?

Speaker 1:
My best guess is that we'll look back on the Cook era as analogous to the Ballmer era at Microsoft. We had Bill Gates as the original visionary technocrat founder-CEO; then the antitrust trial pushed Bill, in a number of ways, to become chief software architect and focus on his foundation. Ballmer stepped in, more of an operator, less of a technological visionary, for a number of years. And then we got the Satya era, where we see Microsoft resurgent again, entering new categories, willing to drop initiatives that weren't working and refocus, where Microsoft could once more become a technical leader in the cloud and elsewhere. My best guess is we'll view the Jobs-to-Tim Cook-to-John Ternus transition as somewhat analogous. John came up not through operations like Tim Cook did, but through hardware engineering. So I would expect the new Apple under John Ternus, starting later this year, to be more focused on breakthrough hardware devices and hardware-centered innovation. Of course, the elephant in this particular room is AI, and Apple's seeming inability, after multiple swings at bat, to position itself as the lead distribution platform or server for AI. It has fallen prey to all the frontier models, and even Microsoft, with its quasi-failing Copilot initiative, has leaned more into AI than Apple has. So I would expect a new device-oriented AI strategy from John Ternus. I was chatting with a new friend from Apple marketing a couple weeks ago in San Francisco, asking them: given the popularity of OpenClaw, why on earth isn't Apple leaning into an OpenClaw-oriented device strategy, selling Mac minis like they're going out of style? Why isn't Apple marketing all of its hardware?

Speaker 2:
I think the answer is buried in this amazing stat. There are seven Magnificent Seven mega-tech companies. Four of them still have their original founders as controlling shareholders and active, and they're all doing AI. Three of them are on CEO number two, three, or four, and they're the three not doing AI: Apple, Microsoft, Amazon. That tells you everything.

Speaker 3:
I have two quick thoughts. One is, what a set of shoes for John to try and fill coming in. And second, what an amazing opportunity. Just imagine if you were CEO of Apple and could say, okay, let's now really take this to the next level. What a great time.

Speaker 4:
Salim, we all know the founder-CEO has such control over the company, not only because of their stock position, but because they're seen as the visionary. And if you come in and want to make a right turn on the company, you're fighting everybody. It's: don't disrupt our revenue, we have predictable revenue, the profits have been going up, you want to do what? It's really very, very difficult. Of course, Tim Cook will remain as chairman of the company. Good luck to Apple; a lot of change is coming.

Speaker 3:
[121:18] I like the fact they've got a hardware focused person in there. That's great.

Speaker 5:
[121:22] This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80 percent or more of the development work autonomously, while providing a guide for the final 20 percent of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.

Speaker 4:
You know, last time we had a great conversation about the five forks that humanity is going to take, and we got a lot of incredible feedback from our listeners. Our discussion today among the Moonshot Mates is going to be around the war in Iran, in particular the potential disruption coming in the Strait of Hormuz. Here we see an image of the Strait. Let's jump into this conversation; I think it's incredibly important. The Strait is going to impact AI infrastructure buildouts to a large degree, driving aluminum shortages, a global oil shortage, and natural gas and helium shortages, driving a lot of concerns. A lot of change is coming, and it's hitting already. Who wants to dive in first?

Speaker 2:
Well, you know, I grew up in Iran; that's where I spent my early childhood. And I'll tell you, it's just incredible to me that absent US involvement, Iran would have just continued down the nuclear weapons path unchecked, eventually gotten a nuclear weapon, and eventually launched it at Israel or wherever. That was just going to happen. So the US goes in with Israel, bombs a whole bunch of nuclear facilities, and then Iran shuts down the Strait of Hormuz. Now the US Navy has sunk the entire Iranian Navy and is trying to reopen the Strait, which is very, very hard to do, because it's such a narrow passage: it's very easy to mine, it's very easy to launch things from shore, and drones are dirt cheap now, so a drone can just blow up one of these oil tankers. They're very slow, right? It's not hard to build a drone that can hit an oil tanker. And what do you know, oil burns.

Speaker 4:
[124:07] Oh my God.

Speaker 2:
It's not an easy thing to fix. And about a quarter of the world's oil supply moves through that strait. So yeah, it's a clusterfuck, but what else were you going to do? So yes, we are approaching a massive material shortage. It'll show up mostly in Korea and Japan, who are huge importers through the strait. It's not going to affect the US too much; we have plenty of oil supply in the Western Hemisphere. But it's a big problem for East Asia.

Speaker 4:
Let's talk about helium. A third of the world's supply of helium comes out of Qatar, out of the Ras Laffan facility there. And helium is absolutely critical in the fabrication of chips, so this is going to hit TSMC and SK Hynix. Thoughts on this, Alex?

Speaker 1:
Yeah, I've seen scattered reports of certain memory chip facilities in South Korea reportedly facing multi-week deadlines to resolve their helium shortages before they're unable to manufacture memory chips. I'm not sure where this ends. It's almost easier to predict what happens after it ends, which is to say, I think it's high time to start helium startups. I think having this global geopolitical dependency on one narrow, volatile geographic region is completely unacceptable. A lot of helium comes as a byproduct of natural gas extraction. There have been various speculations over the years, and I know you and I, Peter, have talked about unconventional isotopes like helium-3 being minable from the moon and elsewhere. I look at material shortages and say, in the long term, each of these is a startup opportunity: a chance to come up with clever new ways to radically decrease the cost of production and to redomesticate it. From what I've read, there has been meaningful movement in the direction of strategic helium reserves, and I think that's probably an important component of the strategy. Truth be told, I don't know where the Iran war, and the helium shortage in particular, ends, but I feel more confident predicting what happens after it's over, and how we make sure we're never technologically in this position again.

Speaker 4:
Agreed. You know, abundance is the rule in the end. You have short-term scarcity, but at the end of the day, entrepreneurs will step in, and there is helium globally; it just needs to be put into production.

Speaker 1:
[126:51] It's actually pretty abundant in our universe. We just need a better way of harvesting it.

Speaker 4:
The other thing to hit on quickly, worth noting, is natural gas supply. Taiwan is almost completely dependent on imported natural gas, and the stat I read is that Taiwan has 11 days of natural gas reserves, so a prolonged blockade is going to cripple Taiwan's semiconductor industry. That is an even bigger concern for me than the helium supply.

Speaker 2:
Well, I'll tell you, Taiwan had a rain problem a couple of summers ago, a massive water shortage. The one thing they kept going was the chip fabs. Everybody had to stop showering, but the chip fabs did not slow down. So I suspect they'll sacrifice every creature comfort but keep those fabs running no matter what. But who knows?

Speaker 3:
A couple of comments here. For me, this Iran war is not really just an oil shock, it's a system shock, right? Because you've got helium issues, insurance spiking; there's a whole cascade of little bottlenecks coming along that are going to cause a huge challenge. Two struck me. One is aviation bottlenecks: Europe imports 30 to 40 percent of its jet fuel, and half of that comes from the Middle East, so that's going to be a big problem over the next few weeks. The second, which is really the big tragedy in my mind, is food inflation, because fertilizer not getting out there is going to be a huge problem for food production all around the world. Something like 30 percent of the global fertilizer supply passes through the Strait of Hormuz, or depends on the gas that goes through there. So this is a huge systems problem. I think it's going to accelerate de-globalization, because people will not want to be dependent on these global supply chains going forward. So there's a silver lining, but there's a lot of pain to get through first.

Speaker 4:
I want to share some feedback from our community. We read the comments you make every week, and there were three principal comments last week that I want to address. The first pushback, which we read throughout the comments, is: not everyone can be an entrepreneur. And Salim, you, Dave, Alex, and I are constantly saying, hey, become an entrepreneur. So let's address this comment, and let me acknowledge the criticism directly. You're right: not everyone wants to be an entrepreneur, and not everyone can be. But the fact of the matter is, becoming an entrepreneur has gotten easier by orders of magnitude. If you never thought you could be an entrepreneur, the opportunity is greater than ever before. Let me share a couple of data points. First off, the cost of starting a business has plummeted; it's come down 99.7%. You can see on the left, back in 2005, what a traditional startup cost: legal costs, hardware costs, all of these different costs. It's come down tremendously. And an entrepreneur by definition doesn't have to be someone going out and raising venture capital or starting a billion-dollar startup. You could be an entrepreneur if you're a nurse who sees an opportunity to start a home health care business, or a barber who wants to get another chair going in your shop. It's finding a problem and creating a solution to that problem. Salim, you want to jump in on this?

Speaker 3:
[130:40] Yes. I agree, the old model of entrepreneurship was very, very difficult, and most people cannot become entrepreneurs under it, okay? But everybody can become more agentic, more AI-leveraged, more resilient, right? The shift is not to entrepreneurship; it's moving from employment dependency to being self-sovereign in your capabilities. So there are other paths: founder, freelancer, craftsperson, operator, local business owner, as you mentioned, Peter, creator, co-op member, et cetera. It's a move towards dignity, okay? You don't want venture-backed chaos; you want stability, you want autonomy, you want meaning. There are lots of other roles beyond founder here. Don't become an entrepreneur because I said so. Become anti-fragile in a way that fits your life. That's the way to frame this.

Speaker 4:
[131:36] In the past, if you wanted to start a business, you needed a lawyer, an accountant, a web developer, a marketing team, months of capital for runway. Today you can register a business in six days, and you can build a website in an afternoon using AI, right? You can write a business plan. This is about finding a problem. No matter where you are in life, your ability to identify problems and opportunities is there if you look. And if you don't have those answers, brainstorm it with your friends. Here's some more data.

Speaker 1:
[132:06] Or if I may, Peter: or your AIs. I think even brainstorming problems is not a limiting factor; AI is quite good at identifying problems. This is one of the reasons I have to name-check a company I have a financial interest in: Henry Intelligent Machines, from friend of the pod Alex Finn, is premised on the thesis that not only is the cost of starting a business going towards zero, but everyone who wants to be an entrepreneur can and will be, in the same sense that anyone who has opinions or taste as a consumer can become an entrepreneur thanks to AI, with AI operators carrying out all of the difficult or non-obvious tasks. And you, aspirationally a one-person investor, magnate, or conglomerate owner, can oversee a fleet of AI operators being entrepreneurs, so that you can just be a gentleman or gentlewoman investor or magnate. I do think we are going to find ourselves in that future.

Speaker 4:
[133:08] Take control of your future. Here's the data. Back in 2019, 23.7% were solopreneurs. That has climbed steadily to 36.3%. We talked about it in the last pod: we are going to see companies go from 100% down to 20% of their employee base, but at the same time, we are going to see five times as many companies being started. And guess what? If you are not the CEO, if you are not the idea person, you can find an entrepreneurial company, a small company, and bring your capabilities, your passion, your experience to the table there. Salim, you were going to say?

Speaker 2:
[133:46] I don't even think the six million people in America who say their full-time or primary job is social media influencer even count in this. There are so many jobs like that emerging with AI, jobs that didn't previously exist in the world, that they kind of defy normal categorization. But they're going to be rampant.

Speaker 4:
[134:08] Yeah, for sure.

Speaker 2:
[134:10] I count those as entrepreneurs. Maybe listeners don't think of that as an entrepreneur. But to me, it's like, that's an entrepreneur.

Speaker 4:
[134:16] And this chart, you know, I want to dispel the age myth, right? There's a persistent myth that you need to be in your 20s to be an entrepreneur, and the data says otherwise. The mean age of founders of the top 0.1% fastest-growing companies is 45, not 25. Forty-five. And individuals aged 55 to 64 represent one of the fastest-growing segments of entrepreneurs in the United States. If you're 60 and you're thinking it's too late for me, guess again: your experience is the product, right? AI is your amplifier. Your ability, as Alex said, is to brainstorm an opportunity, brainstorm a challenge, find something that's right for you. I talked about the gentleman I met in Morocco who used ChatGPT to say: these are my abilities, this is what I can do, this is where I live, what kind of a business can I start to provide food for my family?

Speaker 3:
[135:15] Yeah. Just to reiterate, entrepreneurship is not the point. Agency is the point. Give yourself agency.

Speaker 4:
[135:23] Yeah, for sure. So what do you need to be an entrepreneur? I just want to hit on a few things. You don't need to be the CEO. You don't need to be a fundraiser. You need to be curious about how you can be useful. You need to be willing to create value first. A lot of people think being an entrepreneur is putting up something and having money come in. No, it's about delivering value first, right? You see it with Alex in his projects, Salim in your projects, Dave: it's delivering value to the people you want to serve, who you want to be a hero to. And it's being resourceful, drawing on all of your experience in life.

Speaker 3:
[136:00] Can I tell a quick anecdote? One of my favorite entrepreneurs in the world is Craig Newmark from Craigslist, right? He starts Craigslist, grows it, and basically stays in customer service; his title is customer service. At its peak, the company was doing $100 million a month in revenue with 30 employees, and he retains the title of customer service person. If you call Craigslist for customer service, he will usually answer the phone. And he's taken all his money and given it to veterans' causes and projects on the future of journalism. Just an incredibly magical way of framing it: you follow your passion, you do what you think you're best at, and you let the rest take care of itself.

Speaker 4:
[136:44] That point you just made is so important, right? Having purposeful alignment with whatever you start or whatever small team you join. Don't do it to make the money. Find something that gives you true meaning and purpose in life, and go focus on that. So I want to respond to everybody who said, hey, entrepreneurship is not for me, it's not for everybody. Okay, agreed, it's not for everybody, but the range of folks who can become entrepreneurs has been skyrocketing. And I encourage you to explore this for yourself, especially if you're just out of college and haven't gotten a job, or if you've been laid off. Find a few of your best friends, right? Because building something with two or three best friends, as we've said, Dave, is the best way to do it.

Speaker 2:
[137:31] Yeah, I think it's even more acute than that. As you move into a world of AGI and then ASI, to thrive you either need to be a founder, a joiner, or an investor, because all other career paths are going to disappear. You can't just join a big company and work your way up over 20 years; that's not going to exist in the world post-AGI, post-ASI. So what does that leave? Founders, joiners, investors.

Speaker 4:
[137:58] I love those three categories, Dave. That's great.

Speaker 2:
[138:01] Just decide what you're good at. There's always a way to add value. Decide which is your natural bent, pursue it, and get in the game.

Speaker 4:
[138:09] The second piece of criticism was on data center water usage. Quote: Dave is contradicting what AI says about the use of water in data centers. Dave, you want to take this one on?

Speaker 2:
[138:21] I love that you're blaming AI for the contradiction here. Come on, man, take accountability. I check the AI, too. Water use by data centers is less than 0.3% of total water use; it's way less than 1% of what agriculture uses. It's microscopic. But the real point is that the water that is used goes almost entirely to evaporative cooling. I wondered, why are we bothering with evaporative cooling at all? It turns out that data centers in Phoenix and other very hot locations with tons of sunshine love evaporative cooling because it's more efficient than regular mechanical cooling. But that's not going to work, because those are also the areas with the least water; that's what creates the problem. There's no reason we should be using evaporative cooling at all. Just put more solar panels on the roof and use regular mechanical cooling, and then you don't waste any water and everything is solved. This is not a real problem in the world. Fix it, for sure, but we can put it aside and move on.

Speaker 4:
[139:23] So it's not a technical issue. I imagine the hyperscalers are going to focus on this because of all the criticism. We talked about power, and of course they'll pay differential rates and provide their own power, but they're going to have to address the water issue too.

Speaker 2:
[139:41] Yeah, it's not a big problem. I checked the energy costs of moving over to mechanical cooling. It's not trivial, but it's about a 20% increase in your overall electricity bill. You just eat that with more solar panels, because this only matters in very hot, very sunny locations in the first place; that's the only place evaporative cooling works anyway. But those are also the areas where, quote, whiskey is for drinking and water is for fighting. The same locations where these data centers are going in, like Abilene, Texas, and Phoenix. So you've got to put up more solar.

Speaker 4:
[140:22] The third criticism, and it really is something we need to address, is the ivory-tower, out-of-touch criticism. Quote: massive ivory tower mentality, your silver spoon is showing. Of course the experts are more optimistic about AI, they're all rich. Laughing at middle- and lower-income people's concerns adds fuel to anti-AI sentiment. And: my electricity bill climbed 14%. Salim, you want to jump in here?

Speaker 3:
[140:53] Yeah. Look, everybody on this podcast, and everybody on other podcasts like All-In, we all started from nothing. My dad told me: here's the first year of university, that's what I can pay for, the rest is up to you. I was selling beer during beer strikes in Canada, trying to make money. We've all had to grind it out at various points. I arrived in Silicon Valley in 2006, at 43, with $2,000 to my name; I'd wiped myself out doing a startup, so I had to build from scratch. I think we've all been there in different ways. I've been very, very lucky, and I think the skill you most need to work on to be successful in this stuff is your luck. But the huge opportunity here is that you can do a huge amount today with very, very little capital. That is the magical story, and it's why we're so excited about entrepreneurship: you don't need to raise millions or billions of dollars. You can literally go get an account for 20 bucks and start changing things. That's the incredible opportunity, and that's why we get so excited around this.

Speaker 4:
[142:09] Yeah. We are heading towards abundance; we talk about this. It's not coming next quarter, it's coming over the next five to ten years. We've seen this, right? In the book I just published, We Are As Gods, there are 100 pages of charts in the back that tell the story: for most of human existence, it was the king and the queen, the pharaoh, the emperor on the hilltop, and everyone else living in abject squalor. And we've had the most extraordinary movement of people into the middle class and upper middle class. Today, individuals have access to more than the kings and queens had a century ago. The poorest people on the planet have a smartphone, access to lighting, a roof over their head. It's become extraordinary in that regard, and it's continuing.

Speaker 3:
[143:04] Can I give a tangible example here? Let me give you a very specific example you can latch onto. There's a fishing village in the middle of nowhere in Vietnam, where once a month a big ship used to deliver diesel fuel for the fishing boats. At some point the ship stops coming, and these people have no fuel to power their boats. So one of them literally buys a solar panel over the internet, looks up how to connect it to the propeller, and they invent a solar-powered fishing boat. Right? This is cutting-edge innovation using disruptive technology happening at the edge of the world. We on this podcast, and other people like us, get incredibly excited when we see that, because here are people at the edge of the world using the most advanced technology, and their mindset and their agency, to totally change their own world. That's what's possible today that was not possible a few years ago, and AI amplifies it a thousand times. That is now possible everywhere in the world. I used to call this PDI: permissionless disruptive innovation. In the past, to do something disruptive, you had to get the Medici family to back you, the government to approve you, the VC or investor to fund you. Now you can basically get together with the right mindset and do something crazy at near-zero cost. How can that not be exciting to people?

Speaker 4:
[144:21] Our mission is to inspire you, right? To give you the inspiration to dream bigger. At the end of the day, life can be difficult, and I've suffered massive setbacks myself, gone back to zero.

Speaker 3:
[144:36] And Peter, you spent 11 years trying to get approval for ZeroG from the FAA. You want to talk about grinding it out, right? It's the persistence. Most successful people, if you ask them what made them successful, will say it was their persistence; they just never stopped. Anyway, I can go on forever.

Speaker 4:
[144:56] You get persistence when you've found your purpose, your MTP, right? Every time I've tried to do anything just to make money, it's failed miserably, because doing anything big and bold in the world is hard work. You need to find the purpose that you dream about in the morning and that keeps you going through the night. So if anything we say on this pod comes across as ivory-tower mentality, apologies. That's not our purpose here. Our purpose here is to...

Speaker 3:
[145:24] And keep calling us out.

Speaker 4:
[145:26] Yeah. Trust in us comes from our ability to respond to your critique and your criticism. We do read your comments; please keep them coming. Our desire is to serve you in a number of ways: to keep you informed of what's going on in the world, to give you an optimistic point of view, and to inspire you to go bigger and bolder in anything and everything you do. And if you find yourself in a place where you're not enjoying life, where you're not able to thrive, you've got to have the guts and the fortitude to find someplace better, around better people. You heard this from Dave in the last podcast: you're the average of the five people you spend the most time with. So find incredibly good people, entrepreneurial people, people who want to make the world a better place, and go hang out with them. The reality is that you're facing challenges and difficulties that Salim, Dave, Alex and I probably never faced when we were entrepreneurs getting going, and we hear you there, right? Our job is to put ourselves in your shoes as we cover these stories and talk about what's possible, to provide a perspective that's hopefully useful to you. Dave, you want to add a final point here?

Speaker 2:
[146:39] Well, I really feel for bullets 2 and 3 here. When I was trying to get my first startup up and running, I was below the poverty line for eight or nine years. The muffler fell out the bottom of my car and I had to duct-tape it back on, and I was wondering how I was actually going to survive. But I could always fall back on an entry-level job at a JP Morgan or somewhere like that, so I wasn't really panicking. Now it's different; it's kind of do or die, because no one's hiring. And we're saying be an entrepreneur, because that is the right path forward, but the safety net is much scarier now than it was back then, and I feel for that. So bullet 2 says: of course the experts are more optimistic, they're all rich. But it's because we know we're moving into an era of unprecedented abundance, and asset values are going to go way, way up. So the point is fair. The last point really tears my heart out: laughing at middle- and lower-income people's concerns only adds fuel to anti-AI sentiment. AI is going to create more wealth and more success than anything in history, and anti-AI sentiment is the worst thing we could ever propagate. I don't remember us laughing at lower- and middle-income problems; in fact, I feel it acutely every day with the 1,100 people I have to think about. If we ever did, that was an error on our part, and it should absolutely never happen, because this is going to be a massively turbulent time. Now, I'm much more confident today than I was two weeks ago about Dario and Sam, especially after Sam's house got firebombed. If they just turn 10% of their AI toward this problem, I know that by 2030 we'll be beyond it.
I'm only worried about the window between now and 2030, when people who've built a 15-, 20-, 25-year career as a paralegal or some other white-collar professional see that entire career replaced by AI. You've worked way down this path, and that's devastating. But if you're nimble, you can actually get ahead of it. I know that's vague advice, but if you're nimble, you can get on this wave instead of getting crushed by it. If the big AI companies put just 10% of their effort into deliberately giving people a roadmap, they should start right now with the software engineers: you just rolled out capabilities that will kill Canva and Figma, so put a designed, deliberate roadmap in front of those software engineers, through some kind of independent software partnership program, so they know where to go next. That will solve a huge part of that swath. Then move on to legal, move on to the other professional services, and be deliberate about smoothing the pathway between 2027 and 2030 to create forward trajectories for everybody. I think it'll actually work. There's so much abundance coming that the overhead will be a rounding error, and we can make this smooth for everybody. You just have to design it; it's not going to happen by mistake.

Speaker 4:
[149:55] The three of us are on the XPRIZE board, and we're focused right now on launching the Abundance XPRIZE. If you haven't heard me speak about it before: the prize would go to a company or team that's able to deliver housing, food, water, electricity, and bandwidth to a family of four for 250 bucks a month. It's universal basic services. If you've got those basics covered, you're then able to think about: how do I upskill myself? How do I plan an entrepreneurial journey? But if you're worried about whether you can turn the lights on, whether you have a roof over your head, whether you can feed your kids, then nothing else matters. So very true. All right, gentlemen, it's a tough place to close it out, but it's an important one. The world is changing.

Speaker 3:
[150:46] Quick announcement.

Speaker 4:
[150:47] Yes, please.

Speaker 3:
[150:48] May 7th, 7 p.m. Eastern, I'm doing one of my Meaning of Life sessions online.

Speaker 4:
[150:55] Okay, where do people go to find out about that?

Speaker 3:
[150:58] Go to openexo.com and you'll see it somewhere.

Speaker 4:
[151:01] Amazing. And on May the 4th, we're going to be recording a Moonshots podcast at MIT; we're doing our We Are As Gods session with Ray Kurzweil. All right, our outro music today comes from Tomas; it's called Solve Everything. Gentlemen, enjoy the outro music.

Speaker 3:
[152:25] I love the chiseled jaw all these outro musics give me.

Speaker 2:
[152:28] You know, it's borderline fake news, actually. We look way too good in it.

Speaker 4:
[152:34] All right, gentlemen, I love you.

Speaker 3:
[152:36] See you next time.

Speaker 4:
[152:36] See you guys soon. If you made it to the end of this episode, which you obviously did, I consider you a Moonshot mate. Every week, my Moonshot mates and I spend a lot of energy and time to deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join my weekly newsletter, Metatrends. You may not know this, but I have a research team, and we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. I put this into a two-minute read every week. If you'd like to get the Metatrends newsletter, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.