transcript
Speaker 1:
[00:00] A 20-year-old Texan threw a Molotov cocktail at Sam Altman's San Francisco house. The suspect was on the member list of the official PauseAI Discord server. The state of Maine passed the first-ever statewide data center ban in the United States. Social unrest is coming as a result of people's fear and people not getting jobs. Only 23% of the public is optimistic about AI.
Speaker 2:
[00:24] 99% of the people you bump into on the street are underreacting and unaware.
Speaker 3:
[00:29] If you don't want to use it, fine, let other people use it and get the benefits of it.
Speaker 1:
[00:34] Anthropic's Opus 4.7 dropped.
Speaker 4:
[00:36] It is moderately interesting. Is it Mythos-level interesting? No. The new guidance is: use prompts. Use prompts for everything. The problem is it's sort of an Osborne effect, where I want Mythos access.
Speaker 1:
[00:52] Amazon and Apple team up to compete against Starlink.
Speaker 4:
[00:56] I would bet that Apple, in short order, ends up pitting Amazon, the new Globalstar owner, against SpaceX's Starlink.
Speaker 1:
[01:04] Elon does not stand still.
Speaker 2:
[01:08] Now that's a moonshot, ladies and gentlemen.
Speaker 1:
[01:13] So as we started recording this episode today, Anthropic's Opus 4.7 dropped. We wanted to do a quick pickup, inserted here at the top of the show, to discuss: what is Opus 4.7? How does it compare to 4.6? To Mythos? Of course, we're here with our resident genius on all benchmarks, Alex Wissner-Gross.
Speaker 4:
[01:33] It is moderately interesting. Is it Mythos-level interesting? No. Is it incrementally interesting? Yes, it's a solid release. I've been using it for the past few hours. My standard go-to, as loyal viewers of the pod may recall, is asking it to generate a cyberpunk first-person shooter game design that's visually stunning, and it generated something that was visually stunning. The benchmarks are interesting. The bio benchmarks in particular are interesting. It's a solid release. If I had to guess, it's probably a further post-training of some existing model. It could be a distillation of a larger model. It could be a distillation of Mythos, potentially. Not quite clear. But I would say it is a solid point release of Opus. And the problem, almost, is an expectation-anchoring one, having seen the eval results for Mythos, or Mithos, as you like to say.
Speaker 1:
[02:29] I like calling it Mithos, yes.
Speaker 4:
[02:32] The problem is it's sort of an Osborne effect, where I want Mythos access. Give me Mythos access. And then when you compare the Opus 4.7 benchmarks with Mythos, you feel, I don't know, a sense of... I was going to go with ennui, but you can pick your own superlative here. So I think it was particularly instructive to look at the migration notes between 4.6 and 4.7. The biggest change that I could see is that all of the dials and hyperparameters that used to be present in 4.6 and earlier are gone. Temperature, for example: there's no temperature knob anymore. I think that's really instructive. There's no ability to explicitly control the number of reasoning tokens that 4.7 is allowed. Now everything is down to a handful of categorical settings, where extra-high reasoning is the recommended default, the maximum mode, and there are lower reasoning efforts below that. And I think we're seeing, in some sense, the end of an era for the earlier controls we used to have. Remember, back in the good old days, like six months ago, it used to be possible to turn the temperature of a frontier model down to zero to get quasi-deterministic behavior, for those who care about that sort of thing. No longer possible. Now you're just told in the documentation: you want determinism? Forget about it. Temperature equals zero was never deterministic in the first place. Now the new guidance is: use prompts. Use prompts for everything. Prompts are the new dials and the new hyperparameters. And if you want something like, say, a reasoning model to emit guidance regarding its reasoning trace every three seconds, now you're supposed to ask for it in natural language. The knobs are gone.
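A minimal sketch of the migration Alex describes, for illustration only: the payload shapes, the model IDs, and the categorical `reasoning_effort` field are assumptions reconstructed from the conversation, not confirmed API documentation.

```python
# Hypothetical before/after request payloads illustrating the shift
# from numeric knobs to prompt-based steering. All field names and
# model IDs here are invented for illustration.

def legacy_request(prompt: str) -> dict:
    """Old-style payload: behavior steered by numeric knobs."""
    return {
        "model": "claude-opus-4-6",       # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,               # quasi-determinism knob
        "max_reasoning_tokens": 8192,     # explicit token budget
    }

def prompt_steered_request(prompt: str) -> dict:
    """New-style payload: categorical settings plus natural-language
    steering folded into the prompt itself."""
    steering = (
        "Answer deterministically: pick the single most likely "
        "response. Keep your reasoning concise."
    )
    return {
        "model": "claude-opus-4-7",       # hypothetical model id
        "messages": [{"role": "user", "content": steering + "\n\n" + prompt}],
        "reasoning_effort": "extra_high", # categorical, not numeric
    }

old = legacy_request("Summarize the migration notes.")
new = prompt_steered_request("Summarize the migration notes.")
assert "temperature" in old
assert "temperature" not in new           # the knob is gone
```

The point is the direction of travel: numeric dials like `temperature` and explicit token budgets give way to a few categorical settings plus instructions written in the prompt.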
Speaker 1:
[04:17] Dave, you're more excited about the model than a lot of people are.
Speaker 2:
[04:22] Yeah, well, it's interesting. It dropped three hours ago, so I've been using it for three hours now. Right out of the gate, it dropped into Cursor just fine, click and go. It's in Claude Cowork just fine, click and go. But then Claude Code said, well, you've got to update your terminal, you've got to update your Node. I noticed that on computer use, it's notched way up in its score, and it had no trouble manipulating my computer to install itself: installing a new version of Node, installing a whole new terminal that I didn't have on the machine before. And I don't think 4.6 would have done that. Also, I kicked off a whole bunch of agents. Every time I kick off an agent, it gives me a budget estimate of how much money it's going to spend. And these budgets came back very elaborate and very big. So it's selling me on using more of itself. I don't know if that's because it costs more or it's just a better salesman than 4.6 was, but it's noticeably more expensive.
Speaker 4:
[05:24] But Dave, it could be persuasion as well. So a major difference on the agent-teams front is that in 4.7, under the new best practices, you're just supposed to tell it in natural language how many sub-agents you want it to use. The notion of specifying the number of sub-agents as a parameter is being deprecated, as I understand it.
Speaker 2:
[05:43] So my agents have been doing that for a while now, but it may actually have been more intelligent about using more parallel agents to get the same job done.
Speaker 1:
[05:52] Spend more money, please.
Speaker 2:
[05:53] Well, it seems to come back very, very fast. So maybe that's exactly what's going on. It's just spending more of itself.
Speaker 1:
[05:59] Can we jump into this misaligned-behavior metric here? One of the things that we've been hearing about, of course, is what Mithos could do. It's interesting that the lower score here, the red bar in this image, represents reduced misaligned behavior. Is that a significant change? It seems, you know, somewhat small.
Speaker 4:
[06:23] Every little bit counts for defensive co-scaling, as we talk about on the pod. I think there's actually another behavioral-alignment trend that isn't on the slide that's worthy of note: in the past 48 or 72 hours, I think, Anthropic published a paper on using a smaller or weaker model to supervise the alignment of a larger, stronger model, and found that it worked. And this entire exercise is a proxy for humans, who either already are, or are about to be, effectively the weaker models, the weaker intelligences supervising the stronger intelligence, and finding that that works. I think this bodes very well for sort of a tower of alignment, where the weaker meat bodies, if you will, that are biologically unaided humans are able to contain and align superintelligences that are stronger capability-wise.
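The specific Anthropic paper isn't named in the conversation, but the weak-to-strong idea it describes can be illustrated with a toy sketch: a "strong" student restricted to clean threshold rules, trained only on a noisy "weak" supervisor's labels, can end up more accurate than the supervisor that trained it. Everything here (the dataset, the noise rate, the threshold learner) is invented for illustration.

```python
import random

random.seed(0)

# Toy dataset: 1-D inputs in [0, 1); the true label is 1 when x >= 0.5.
xs = [i / 1000 for i in range(1000)]
truth = [1 if x >= 0.5 else 0 for x in xs]

# "Weak supervisor": knows the right rule, but with 20% random label noise.
weak = [y if random.random() > 0.2 else 1 - y for y in truth]

def accuracy(pred, target):
    return sum(p == t for p, t in zip(pred, target)) / len(target)

def fit_threshold(labels):
    """'Strong student': limited to clean threshold rules, trained only
    on the (noisy) labels it is given."""
    best_t, best_agree = 0.0, -1.0
    for i in range(1001):
        t = i / 1000
        pred = [1 if x >= t else 0 for x in xs]
        agree = accuracy(pred, labels)
        if agree > best_agree:
            best_t, best_agree = t, agree
    return best_t

t = fit_threshold(weak)
strong = [1 if x >= t else 0 for x in xs]

weak_acc = accuracy(weak, truth)      # roughly 0.8: the supervisor itself
strong_acc = accuracy(strong, truth)  # higher: the student averages out noise
assert strong_acc > weak_acc
```

The student cannot memorize the supervisor's individual mistakes because its hypothesis class is too simple, so it recovers something close to the true rule, which is the optimistic reading of weak-to-strong supervision the speakers are pointing at.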
Speaker 1:
[07:15] So this was Geoffrey Hinton's approach, right? The example he gave of a weaker, smaller being that gets the attention, focus, and support of a stronger one is a child with their mother.
Speaker 2:
[07:26] Yes, maternal instinct.
Speaker 4:
[07:28] Geoffrey Hinton was focused on what I would call the digital-oxytocin approach: let's use hormones as a means for alignment of superintelligences. I'm not sure the neuroendocrine system generalizes to superalignment quite as well as Geoff thinks it does. It's a thought, but if we can subtract neuroendocrine systems out of the picture, subtract digital oxytocin out, and avoid sort of gendering and sexing the AIs, and instead just focus on weaker intelligences aligning stronger ones, I think we'll be in a more stable position.
Speaker 1:
[08:00] Awesome. All right. So that's our coverage of 4.7. Let's...
Speaker 3:
[08:05] Wait, I have a couple of quick comments.
Speaker 1:
[08:06] All right. Yeah.
Speaker 3:
[08:08] One thing I noticed was that the images Opus 4.7 accepts are now three times bigger than before, and this is huge for corporate work, because there are so many diagrams, PowerPoints, PDFs, etc. that can now be scanned visually that couldn't be before. And as I'm reading the reviews and playing with it a bit, this seems to be a very, very solid, reliable upgrade, with a much bigger context window for workflows and more agentic AI. So the really big outcome here seems to be that trend toward the organizational collapse of middle management: redoing things, pushing more and more into the model, with reliability.
Speaker 4:
[08:49] If I could just comment on that: I think it's really striking that Opus, after all this time, is still able to understand images but unable to generate images. I don't think it's for lack of...
Speaker 2:
[09:01] You're so right about that. Oh my God. We're in a nightmare.
Speaker 4:
[09:05] I suspect it's not for lack of capability. Anthropic has many talented research engineers. I suspect it's because they're just viciously focused on dollars of economic value created per token, and have judged that image generation is not as economically productive as...
Speaker 2:
[09:23] It's annoying as hell, because it can create incredibly complicated products for you, and you say, well, can you just give me an architecture diagram or a picture that shows me what you did? And it generates pure crap. And you're like, well, that didn't help me. It does beautiful text, and you can hack it by saying, well, generate language that describes the image. Then you can take that and use it in another AI to generate an actual image, and that works fine. But when you ask it to just create a diagram for you directly, yes, it's absolute garbage.
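Dave's workaround can be sketched as a two-step pipeline: ask the text model to write a detailed image-generation prompt, then hand that prompt to a separate image model. The two client callables below are hypothetical stand-ins for API calls, not real SDK functions.

```python
# Two-step describe-then-render pipeline. "text_model" and
# "image_model" are placeholder callables; in practice each would
# wrap a call to the respective API.

def describe_diagram(text_model, artifact_summary: str) -> str:
    """Step 1: ask the text-only model for a renderable description."""
    request = (
        "Write a detailed, self-contained prompt for an image model "
        "that depicts this architecture: " + artifact_summary
    )
    return text_model(request)

def render(image_model, description: str) -> bytes:
    """Step 2: hand the generated description to an image model."""
    return image_model(description)

# Stub models, for illustration only.
def fake_text_model(req: str) -> str:
    return "Boxes for API, queue, and worker, connected by arrows."

def fake_image_model(desc: str) -> bytes:
    return b"<png bytes for: " + desc.encode() + b">"

desc = describe_diagram(fake_text_model, "API -> queue -> worker")
image = render(fake_image_model, desc)
```

The design point is the handoff: the text model's strength (precise language) compensates for its weakness (drawing), so the image model receives a far richer prompt than the user would have written by hand.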
Speaker 1:
[09:52] Alex, where are you flying to?
Speaker 4:
[09:54] Yeah, so I'm here, Peter, reporting from the front. I'm in a car a few blocks away from Steve Jobs' old house in Old Palo Alto. And in a few hours, I'm scheduled to fly back from SFO to Boston Logan.
Speaker 1:
[10:05] All right. Well, thanks for making time available, gentlemen. That's Claude Opus 4.7. Let's get back to the episode. Hey, everybody. You may not know this, but I've got an incredible research team, and every week my research team and I study the metatrends that are impacting the world: topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. These metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Everybody, welcome to Moonshots, your number one podcast in AI and exponential tech, keeping you optimistic during these days of crisis-news-network conversations. Gentlemen, Peter Diamandis here, your host in our Moonshots podcast studio. I'm excited; I need to have you guys here one day. So Salim, where are you on the planet? India? Brazil?
Speaker 3:
[11:05] I'm home in New York. I'm home in New York for once.
Speaker 1:
[11:09] That's a rare event. But Dave and Alex, you guys are in the great city of San Francisco, I gather.
Speaker 2:
[11:16] Yes, we are. Actually, three of the four of us are in California today. Amazing. And shows you where things are happening, I guess.
Speaker 4:
[11:21] The future home of Starfleet Academy, obviously.
Speaker 1:
[11:24] Yeah, well, everybody's moving to Texas and Miami. DB2, AWG, and Salim, always a pleasure. A lot of news in our conversation here today. Our goal, everybody, is to keep you optimistic, hopefully, and to let you know what's going on in the world in a way that keeps it fun and gives you some insights. As always, we're going to try to bring it back to what it means for you as an investor, as an entrepreneur, as a student, as a parent. So that's the conversation, getting you ready for the future. All right, let's jump in. Our first conversation comes from Stanford. Dave, you're not far from there, are you?
Speaker 2:
[12:01] I can see it out my window here.
Speaker 1:
[12:03] All right, here it is. Stanford's Institute for Human-Centered AI just dropped their 2026 AI Index, the definitive annual scorecard on the state of AI. This is their ninth edition, led by Yolanda Gil and Raymond Perrault, and our dear friend Erik Brynjolfsson; a quick hello to Erik out there. There are five major takeaways in this report. I'll run through them, and then let's have a conversation about them. The first one, not a surprise: AI is getting scary good, scary smart, on various benchmarks, in particular software engineering. It's gone from 60 percent to 97 percent on the SWE benchmark. The models, as Alex, you've been saying forever, are now beating the top PhDs in science and math. Gen AI is hitting 53 percent global adoption in just three years, faster than the PC and the internet. China is leading research while the US is leading model development; we'll get into that. One of the things that was interesting: there's an index for model transparency, how transparent the foundation models are, and that index has dropped from a score of 58 down to 40, meaning that the most powerful models are now the least accountable. So what does that mean? All right, two more things. People don't trust AI. Not a surprise, but the numbers are pretty shocking. Only 31 percent of Americans trust that the government can actually regulate AI. Only 23 percent of the public is optimistic about AI, and interestingly, in contrast, 73 percent of the experts are. So the experts who know about it are far more optimistic than the public. And then one last item: AI incidents, documented harms from deploying AI systems. Those documented harms rose from 233 to 362. All right, so what does this all mean? A lot is going on. Dave, if you want to jump in first: scary good, scary fast. Here are some of the numbers. What are your thoughts?
Speaker 2:
[14:20] Well, Alex saw this report and immediately said we've mentioned every single thing in it on this podcast already, at least two or three months ago. But I love the fact that it's all consolidated in one report with the Stanford brand on it. Because, again, 99% of the people you bump into on the street are underreacting and unaware. And so the more this gets consolidated and clarified, the better for everyone, I think.
Speaker 1:
[14:44] Yeah, that's the reason we left it in here. It's a summary, and there are a few important points. One of the themes that we're going to be talking about in the first few items on the docket here is the level of fear and unrest that's mounting, which needs to be addressed.
Speaker 2:
[15:02] Yeah, and also the contrast between San Fran, where Alex is right now and where I was yesterday, and any other random city is getting super, super wide. As I was walking down Market Street, at least five people behind me were having different conversations: Anthropic this, Opus 4.7 comes out tomorrow. Every conversation is centered around this. And then you go to middle America, and people are like, I don't know anything about it; all I know is it's scary. The unknown tends to scare people, which is why you see that 23% optimism number there.
Speaker 1:
[15:39] Alex, you're right. We were texting this morning, and you were saying, hey, this isn't news. And I said, but I want us to have the conversation here, because this information, in a distilled fashion, is important for people to see and hear. Alex, do you want to jump in on any of these? We have this chart here.
Speaker 4:
[15:55] I'll offer a hot take on this one, Peter. The idea of the Stanford reports started a number of years ago; the notion was that Stanford would spend the next century, 100 years' worth of annual reports, documenting the progress of AI. My hot take on this one is: too little, too late, too infrequent. We cover this like two times per week on the pod. I cover it daily in my daily newsletter, The Innermost Loop. I think an annual cadence is just woeful. We talk about not sleeping through the Singularity; I think an annual report on AI is quite literally sleeping through the Singularity. Its temporal resolution is too imprecise to capture all of the advances. We're hearing about things a year after they happen.
Speaker 2:
[16:42] The chart undercuts the report. Look at the green line on the chart. That's the agentic use of AI, and it only runs from 2021 to 2024.
Speaker 4:
[16:48] Stanford, Erik, friend of the pod, up your game. Maybe we need daily reports, not annual reports. This is too slow.
Speaker 1:
[16:55] If only we weren't human. If only we had our cyborg implants, that would be a lot easier.
Speaker 2:
[17:00] To be fair to Erik, he 100% agrees with you, Alex, and he is pushing as hard as he can. Getting Stanford to move is like pushing a glacier.
Speaker 3:
[17:07] You're dealing with a legacy institution here. I would like to hammer on the government statistic, where this many people said they distrust AI. It turns out exactly the same number of people distrust government.
Speaker 1:
[17:20] The Congress rating is 21%.
Speaker 3:
[17:23] Yeah, and trust in the federal government is 33%. It's almost exactly the same. I don't think that's telling us anything about AI specifically.
Speaker 1:
[17:28] People are just not trusting anymore.
Speaker 3:
[17:31] We've been steadily eroding trust in government for 50 years in the US, so this is just correlating right to that trend.
Speaker 2:
[17:39] The contrast with China is incredible, though. 80% of people in China are optimistic about AI. I don't know how they feel about their government, but it's not human nature; it's something in the system that's making a difference, because China is clearly the exact opposite.
Speaker 1:
[17:53] Speaking about China, here are the charts out of this report. The first one is showing the number of major models coming out of China, which are now at 30 and the US at 50. And on the other side, AI publications coming out of China have just exploded compared to the US. Alex, I'd love your take on these charts.
Speaker 4:
[18:13] I commented on this right after NeurIPS at the end of last year. The language that I heard the most in the hallways at NeurIPS, the largest academic AI conference, was Mandarin; it wasn't English. The irony here may be the buried lede: that China itself is moving in the direction of what the West has done, which is closed-source models. Some of the latest Chinese frontier models are themselves closed and API-first; they're no longer open-weight-first. And this is documented elsewhere; I think it was Epoch that documented that China's compute training capacity is approximately 10 times less than that of the West. China is publishing more, and, per NeurIPS, we see that in the academic literature. But in some sense, I would view that as leading from behind. Because the Western models and the Western frontier labs have the lead at the moment, there is less economic incentive, less pressure, for them to publish their advances. If, on the other hand, the whole balance tips and for whatever reason China algorithmically leapfrogs the West, I do expect the entire equilibrium of Chinese open publication and Western closed attitudes to flip completely, and we may see some equilibration there.
Speaker 1:
[19:36] What do you guys think about the drop in the model transparency score from 58 to 40? I don't know how accurately that's being measured, but having the most powerful models in the world become less transparent, because transparency potentially slows them down, sounds concerning. Any thoughts?
Speaker 2:
[19:55] I think it's very much a trend that's not going to reverse, because if you look at the last bullet, AI incidents, that's going up, and it's going to go way, way up. Now you've got Molotov cocktails being thrown at Sam Altman's house, and gunshots at his house, and it's inevitable that the models become so smart this year that they become a terrorist threat, a bioweapon threat, a chemical-weapon threat. And the US labs are absolutely not publishing papers anymore; they're absolutely turning their research budgets inward. The self-improvement cycle is in full swing. China, like Alex said, is kind of leading from behind. They're acting more like America used to act, with a much more open, entrepreneurial economy: more and more models, more and more companies creating models, more documents coming out. But the US is going the other direction out of fear, and it ties directly to the public reaction. 23% of people are optimistic; that means a lot of people are worried about this, and the labs are reacting to that by saying, okay, we're going to slow-play our dialogue a little bit. We talked about that about six months ago. Why are they underselling the capabilities? This is exactly why. And why are they turning all of this research internal? This is also why: they're worried about the global threat of AI.
Speaker 1:
[21:08] Alex, you were going to say?
Speaker 4:
[21:09] I would also add, putting aside how Stanford defines it, that transparency is a double-edged sword. In some sense, pro-transparency can also mean pro-proliferation. If one is concerned (for the record, I am not) about proliferation of advanced, potentially threatening AI capabilities, transparency is not necessarily what you want. Maybe a limited form of transparency into, say, a threat analysis, or the sorts of threat profiles and red-teaming analyses that have become fashionable for frontier labs to release, maybe. But in a certain sense, the limit of transparency is publishing the weights and publishing the models. And if you're concerned about threats of a variety of sorts, x-risk, if you will, from AI, then transparency may be the exact opposite of what you want. You may, in fact, be anti-transparency if transparency becomes equivalent to proliferation. And for the avoidance of doubt: I think transparency from a commercial perspective can be used as a strategic advantage, as we've seen with the Chinese labs. It can also be commercially disadvantageous. A certain amount of transparency in the sense, say, of Project Glasswing from Anthropic, which we discussed in a couple of the most recent pods, where there's very aggressive pen testing and staged release of advanced capabilities that could have major cyber-defense and cyber-offense implications, that sort of transparency, I think, is quite helpful. But do I think that we should, in an unselfconscious way, push for all of the model weights from every frontier lab to be made, quote unquote, transparent in the name of some sort of safety? I think that would backfire almost immediately. Alignment is the twin of capabilities.
Speaker 1:
[23:03] So, Salim, I want to hear your thoughts on this. This report has probably bent more toward the negative and the dystopian side this year than it ever has in the past, which is concerning. It's going to be one of the themes we're talking about here.
Speaker 3:
[23:18] It is, and it's causing a massive leadership challenge: how do you govern systems when you don't know how they work and you barely understand them, but you can't afford not to use them? That's causing a huge challenge, and it's going to continue for the next months and years.
Speaker 1:
[23:35] So I encourage folks to pick up this report and read it. We're focused on the optimistic side of the story here, but there's a realistic side as well that needs to be considered and addressed. Also out of this report came another story: that young people are being hit the hardest by AI. Employment among US software developers in the youngest age bracket, 22 to 25, has dropped nearly 20% since 2024. This is happening at the same time that older developers have grown their headcount. The same pattern repeats across customer service, legal support, and administrative roles. And critically, I think the important story here is that this isn't happening through mass layoffs. Companies aren't firing young workers; they're not hiring them in the first place. So we're seeing this challenge. We had a conversation on the last pod with Marc Andreessen saying the loss of jobs was fake news, that we're going to see an uptick. Well, we've said both of these things are holding true: we're going to see an increase in GDP and profitability that's going to drive more employees and more companies being formed, but at the same time, we're seeing it at the lower end of the spectrum. You can see it here in these charts. On the left-hand side, those jagged lines going down to the right are early-career workers, age 22 to 25; we see that below as well. And in the chart on the right, in software, customer service, and all exposed occupations, the younger category is losing job growth while the older category, age 30 and higher, is gaining it. And this is a challenge. As I've said before, it's the young, testosterone-laden males, and I don't want to categorize the younger versions of ourselves that way, who are not getting jobs, not able to buy a house, not starting a family, who are likely to get angry. It's sort of a tech version of the Arab Spring, if you will. Salim, your thoughts on this one?
Speaker 3:
[25:46] Well, I'll take the positive here, which is that if young people aren't getting hired, they'll be forced to turn to entrepreneurship. And young people going into entrepreneurship is the best possible thing that could happen for the economy.
Speaker 1:
[25:59] Beautiful.
Speaker 3:
Not to diminish the question of what you do with this; I think that's a big challenge we have to face.
Speaker 1:
[26:06] Dave?
Speaker 2:
[26:07] I had a great meeting yesterday with three Princeton seniors. They're torn right now between sticking together and starting a company. They're all chip-design gods working on AI designs. One's got an offer at NVIDIA, and he's one of the few people who actually got a job offer, so he's so excited about it. I'm like, dude, the ASI window...
Speaker 1:
[26:26] Maybe in the future, you're like, damn, I got a job offer. I don't want that.
Speaker 3:
[26:29] No, this is the point, right? You should get a job offer and go, oh my god, what am I thinking?
Speaker 2:
[26:34] Exactly. That's exactly what I was trying to tell them. I was like, look, guys, you understand, your big, huge Princeton brain is the most valuable thing on the planet right now. It's going to be a complete commodity two years from today, post-ASI. You have this window of opportunity to take advantage of that brainpower and create something. And if you fritter that away... One's got an NVIDIA job offer, one's got sort of a banking offer, and one's got a grad school offer. And I'm like, look, all three of those are the worst choices you could possibly make in this moment. Stick together, start your company.
Speaker 3:
[27:04] You have to adapt the metaphor: it's not just big, it's your big, juicy, beautiful Princeton brain that fulfills the metaphor.
Speaker 2:
[27:10] Oh, I see. No, but right now, if you look at the prior slide, we still have access to the absolute best AI models. That won't last forever. So you've got the combination of ASI being imminent and models getting closed down, with less access a couple of years from now. This is the window, right here, right now.
Speaker 3:
[27:27] I think, Alex, you mentioned this on the last pod, right? There's a limited window in which you can do something magical and meaningful, so go for it now. Don't wait. Yeah.
Speaker 4:
[27:36] And also, my two cents on this is that there's an entire economy that needs to be transformed and collapsed and automated. I see this in a variety of companies: I see the agita connected with, quote unquote, junior software developers finding it harder in some spaces to find jobs. On the other hand, the market for talent in, call it, head-of-AI or AI-lead roles has never been hotter, across a range of industries. So I think some of this may be just routine displacement as the market finds a new equilibrium. I don't think it necessarily has to be bad for fresh CS grads from top universities. I do think there's an entire economy of, call it, non-traditional roles and non-traditional sectors that is absolutely starved for technical talent. Note that the trend line here ends at September 2025 (another reason why it's more important to do this daily or bi-weekly rather than just once per year). I think this short-term trend has a habit of self-correcting, and I've seen studies even over the past two to three weeks suggesting that it has reversed itself in the past few months.
Speaker 2:
[29:05] I think I can translate everything you just said into: now is the perfect time to be nimble and not think of yourself as a great coder or a great chip designer. That skill has a lifespan of a year at the most. But you're a great thinker, a great entrepreneur. You can master these AIs and stay ahead of the curve if you're nimble. Just don't get stuck in some silly career path perfecting your chip design or your Python code-slinging skill. That will be a complete commodity within a year, so stay out of it, keep listening to the podcast, and move.
Speaker 1:
[29:40] Two things. One, this type of drop is politically invisible. There's no unemployment spike; it's just a hiring freeze, so it doesn't show up in any of the standard labor-market monitoring. It will be interesting to see if that gets modified. But the second thing is: if you're a parent, please encourage your kids to find their purpose in life. Please encourage them to begin to think entrepreneurially. What is a problem they want to solve? I don't care if it's starting a lemonade stand or starting something in elder care. Utilize AI: get onto your favorite large language model, whether it's ChatGPT or Gemini or Grok or, dare I say, Anthropic, and as a teenager or young adult, have a conversation. Say: these are my passions, this is what I'm good at, can we brainstorm a company or a product or service I could start? Just getting to that brainstorm and the beginning of the dream is so possible right now. And then you can work with it to come up with a business plan, step by step by step. Give yourself some entrepreneurial training wheels and get going.
Speaker 4:
[30:50] I'll maybe add one additional bit of advice to that, Peter, if I may: be geographically mobile. Do not be tied to a particular geographic region. Based on studies that I've seen, a lot of the displacement is the result of people being unwilling or unable to move to other geographies where there may be a more vibrant, more dynamic AI sector. Ironically, even though we're virtualizing and, as Bucky Fuller would say, everything is ephemeralizing, I think before we get there it's absolutely important to maximize mobility.
Speaker 3:
[31:29] Can I double down on that for a second? Steve Blank did some research on Silicon Valley and why it was so successful, and he made a really important point that supports what Alex just said: almost everybody in Silicon Valley has come from somewhere else in the world. If you stand up in your hometown and say, I want to change the world, the rest of society beats you back down. Who the hell are you to do that? Great entrepreneurs almost exclusively move out of their hometowns to somewhere else, and Silicon Valley has become a place where it's like, we know you're crazy; the question is, how do you plan to change the world, and is it fundable? It's become that gathering place. Boston is also a place like that. So in the intent and the ability to actually move, you're showing the appetite for taking on risk, and the nimbleness that Dave talked about, etc. It's such an important dynamic that's underway with all of this global mobility happening.
Speaker 2:
[32:23] That's totally right, Salim. Actually, AI is not headcount-intensive at all. If you look at Boston, within Kendall Square all the people working on AI can walk to each other. Silicon Valley is much more spread out, so everybody's moving up to San Francisco, and no one says San Fran anymore, it's SF. Even within SF, it's all in the same area.
Speaker 4:
[32:44] It's called the city.
Speaker 2:
[32:45] Back to the city.
Speaker 3:
[32:46] It's called the city. But everybody can walk to it.
Speaker 2:
[32:48] OpenAI can walk in.
Speaker 3:
[32:49] I suffered for many months trying to call it SF.
Speaker 2:
[32:54] It's all very, very concentrated even within the city in the Mission Bay area. So you just need to go.
Speaker 1:
[33:01] Let me tell you a story that follows on what you both just said. Philip Rosedale, a dear friend and the founder of Second Life, did a study a decade ago. He asked, why are there so many entrepreneurs in San Francisco? Why is the city so successful entrepreneurially compared to all the other places? Is it that they're just smarter? And he did something interesting: he wrote a script to scrape LinkedIn, looking for founder, entrepreneur, or CEO in the profile title. He found that the concentration of entrepreneurs, and technical entrepreneurs in particular, was 10 times higher in the Bay Area than anyplace else in the country. You had concentrations in Austin and in Silicon Alley in New York and so forth. His conclusion was: it's in the air, it's in the water. If you try to start a company there and you fail, you walk down to the coffee shop, you've got a friend over there, and you join their company or another company. There are so many low-hanging-fruit opportunities. Whereas if you did that someplace in the Midwest, especially in a small city, and your company failed, you've got a black mark against you and you've got to go back and join your mom or dad's company. So that density of technical founders makes a difference. Do what Alex said: get off your butt and move someplace with a high density.
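The kind of analysis Peter describes can be sketched in a few lines. This is purely illustrative: the column layout, keyword list, and sample data are invented for the sketch, not Rosedale's actual script.

```python
# Illustrative sketch: given (title, region) rows exported from a
# professional network, count profiles per region whose title contains
# a founder-type keyword. All names and data here are hypothetical.
import csv
import io
from collections import Counter

FOUNDER_KEYWORDS = ("founder", "entrepreneur", "ceo")

def founder_counts(rows):
    """Count profiles per region whose title contains a founder keyword."""
    counts = Counter()
    for title, region in rows:
        if any(k in title.lower() for k in FOUNDER_KEYWORDS):
            counts[region] += 1
    return counts

# Tiny made-up CSV standing in for a real export.
sample = io.StringIO(
    "title,region\n"
    "Co-Founder & CEO,Bay Area\n"
    "Software Engineer,Bay Area\n"
    "Founder,Austin\n"
    "Accountant,Midwest\n"
)
reader = csv.reader(sample)
next(reader)  # skip the header row
print(founder_counts(reader))
```

Dividing these counts by each region's working population would give the per-capita concentration Rosedale compared across metros.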
Speaker 3:
[34:30] Also, can I mention one more point to this about this?
Speaker 1:
[34:33] Yeah, Salim, you first.
Speaker 3:
[34:35] I have a friend who did seven venture-backed startups. They all failed. Number eight was a billion-dollar company. I found this while researching the first ExO book, and it turned out the same VC had funded attempts five through eight. So I went to the VC and said, listen, first of all, nowhere else in the world would you get past attempt one or two, because if your business fails, you're a failure almost anywhere in the world. But this guy had failed four times, and somebody funds him again and again and again. I asked the VC, why did you fund him? He'd already failed four times. What was the rationale? And their answer was awesome. They said: one thing we know about that guy is he's completely barking mad and he's never going to stop. At some point he's going to succeed, and when he does, we want to be there. I love that story. I thought that was just such a fantastic answer.
Speaker 4:
[35:29] If I may, Peter.
Speaker 3:
It's reflective of the ethos there.
Speaker 4:
[35:31] One closing parable about the world's wealthiest man, born in South Africa, moved to Canada, then moved to Pennsylvania, then moved to California, became world's wealthiest person, moved to Texas and is probably, I think if all things go well with Elon, will move to the moon and maybe Mars. And this is the trajectory that I think-
Speaker 1:
[35:54] Highly global.
Speaker 4:
[35:55] Yeah, mobility is at a premium if you want to surf the singularity.
Speaker 1:
[35:59] Beautiful. Dave, you want to close us out?
Speaker 2:
[36:01] Yeah, so Drew Houston, the founder of Dropbox, now on the board of Meta, gave the commencement address at MIT back in, I think, 2017, the year the transformer was invented. I think it's the best commencement address I've ever heard. Highly recommend looking it up on YouTube and spending 15 minutes listening. One thing he says is, look, science has proven that you become the average of the five people you spend the most time with. Which is actually a great thing about spending this time with you guys, now that I think about it. It's great. Thanks, Dave. That is who you're gonna become, and there's nothing you can do about it. So choose those five people very, very carefully. Don't let it just default to random. Choose them explicitly.
Speaker 1:
[36:38] Yeah, so much gold in this last conversation for parents, for entrepreneurs, for kids, for everybody. All right, let's get into our next story on the docket: AI backlash turns physical. It's a tough story, and it's important for us to discuss. In the early hours of April 10th, just a week ago, a 20-year-old Texan threw a Molotov cocktail at Sam Altman's San Francisco house and later threatened to burn down OpenAI's headquarters. He carried with him a manifesto, get this, with the home addresses of multiple AI executives and a kill list. First of all, how those addresses got out, I guess almost everything's on the web these days. Three days later, a second attack takes place: a gunman fires shots at Altman's Russian Hill property. And the Molotov cocktail suspect was on something called the official Pause AI Discord server list. It's a pretty sad situation. We've mentioned earlier in this podcast and in the last few podcasts the idea of social unrest coming as a result of people's fear and people not getting jobs. This is sort of the first, if you wish, ignition point. Sam Altman later responded both on X and in the news media, posting a photo of his family and saying he hoped it would, quote, dissuade the next person from throwing a Molotov cocktail at our home, no matter what they think about me. Sam went on in the news media to say that he believes the fear of AI is justified, that he owns his own mistakes, and he called for de-escalation while the debate takes place. Who wants to jump in first on this one? Salim, maybe?
Speaker 3:
[38:30] You know, when you have a technology that feels uncontrollable and unequally distributed, you get this kind of backlash, right? And I'd love to urge people, I don't care what kind of political spectrum you are, this kind of, everybody loses in this situation. Society loses, Sam loses, the cocktail thrower loses. So go look for the win-win in this rather than the lose-lose.
Speaker 4:
[38:55] I'll repeat what I said in my daily newsletter about this, which is: stay strong, Sam. I think Sam is doing amazing work and has done amazing work in catalyzing this whole revolution. And I think this Pause AI crowd should itself be paused, or maybe even stopped, or maybe even deleted. The irony of the Pause AI so-called movement is that it has done nothing except accelerate AI capabilities. We both know Max; with Max's six-month pause, all that did, as far as I can tell, was accelerate the broader industry's AI capabilities. Putting aside the violent attacks, which are completely unacceptable, that goes without saying, even the idea of pausing AI is so tone-deaf to the way the world actually works: if you attempt to pause either one company or one country, the rest of the world will race ahead, and that will result in a further escalation of capabilities.
Speaker 1:
[39:59] Well, an extreme, extreme escalation because all of a sudden you feel so disadvantaged, you're having to play catch-up.
Speaker 4:
[40:05] All it does is further accelerate the race dynamic that's already present. So putting aside, again, like completely unacceptable violence, even just the idea of pausing is self-defeating. And I would encourage all of these folks to just do deep introspection before pushing forward with a pause agenda. It's self-defeating.
Speaker 1:
[40:26] Dave, you want to weigh in?
Speaker 2:
[40:28] Well, when you meet these people personally, which is relatively recent for me, they're just regular people. There's a tendency to think, oh, these are big-shot politicians who decided to go down a high-risk path and put themselves in harm's way. But it's just not the case. This all emerged very, very quickly. If you look at a guy like Dario Amodei, he had no idea he'd be in this position just a few years ago. He had no intention of becoming a political figure, a polarizing figure, a global leader, a target. All of those things are new for him. And so they don't have security, and their home addresses are easy to find. It's just really, really tragic.
Speaker 1:
[41:07] I would not trade with any of them right now. I cannot imagine the level of pressure they're under personally across every aspect of their lives. It's insane. Most people would crumble under that pressure.
Speaker 3:
[41:22] Two quick comments here. In the early 2000s, George Bush, responding to political pressure, restricted federal funding for embryonic stem cell research. And the US went from number one to number eight in the world.
Speaker 1:
[41:36] Yes, China shot ahead.
Speaker 3:
[41:37] And then all the researchers went to China, Canada, and Australia, and the research continued exactly at pace. I think the broader point here is that every exponential breakthrough of any kind will yield both believers and immune-system responses. We haven't even gotten to the humanoid robot threat yet. You need really mature leadership to manage both of those, and unfortunately, in many parts of the world, we don't have mature leadership; we have 90-year-old leadership, which is worse.
Speaker 1:
[42:05] Our next story is related. I'm calling this the data center ban. On April 8th in Festus, Missouri, a small town of 12,000 people, the citizens fired half of their city government: they ousted four city council members on election day after the members had approved a $6 billion data center on 360 acres. We're going to see this more and more. In addition, the other story on this docket is that the state of Maine passed the first-ever statewide data center ban in the United States. The legislature passed an 18-month moratorium on new data centers to give a task force time to study their impact, which means time for all the other data centers to pull ahead, and for Elon's efforts to go to orbit to take place. Between March and June of 2025, one quarter, a number I found referenced: opposition blocked or delayed $98 billion worth of data centers. And here we see a chart of 11 US states that have active legislation filed for moratoriums. Let's talk about the pros and cons of data centers here. But I'm imagining a lot of states are saying, please build in my backyard. Alex, your thoughts here.
Speaker 4:
[43:29] We're going to get our sun synchronous orbit Dyson swarm before we know it. Maybe in some sense, I should be thanking all of these states, even though it's, I think, ill-conceived from their own selfish self-interest. From a national perspective, as long as the regulatory regime enables us to launch our SSO Dyson swarm, this could perversely put the US in the lead, as it seems to be doing already in terms of moving our AI compute out to low Earth orbit and SSO, and maybe eventually sun-centered orbit, and not just sun-synchronous orbit. So I think this may be, fingers crossed, a classic case of terrible decision-making in the short term, unintended good decision-making in the medium to long term, if we get our Dyson swarm. If we don't get our Dyson swarm, then this is just shooting ourselves in the head.
Speaker 1:
[44:19] But constriction of something always leads to innovation, right? Just when the US starts banning Nvidia chips, China starts producing their own chips to make up for it. So any constriction here, because the force is so unstoppable, we're going to have other solutions here. Dave, your thoughts, please.
Speaker 2:
[44:39] I love the contrast between New Hampshire and Vermont on this. I've lived in every New England state except Maine. In Vermont, Bernie Sanders is trying to stop data center construction nationally, which is nuts, absolutely crazy. In New Hampshire, the proposal, which you can see in green on the chart here, was, hey, this could drive up electricity prices, maybe we should have a one-year moratorium. The legislature met and said, not only are we not going to do that, we're going to immediately pass an AI right to compute. So all businesses and people in the state have a right to AI, and they did pass that. New Hampshire is the Live Free or Die state; I just absolutely love that reaction. So they'll keep chugging forward. But I think it's mostly that politicians love drama, because drama creates elections and votes. And here they're trying to create drama out of electricity prices, as if Americans' electricity bill is some existential crisis. But the right answer is really simple: just require the data centers to generate their own power, and you're done. It's just that easy.
Speaker 1:
[45:42] Pay a differential rate and just have the data centers pay a higher rate that actually drops the rate for everybody else.
Speaker 2:
[45:47] Yeah, subsidize it. So easy. All these problems are so easy. I'll tell you, we make such drama out of them.
Speaker 1:
[45:53] So, a little research here. The five major issues that come up with data centers are massive power consumption, water usage, few jobs relative to the footprint, noise and light pollution, and power-transformer lead times, with the grid being hit heavily. What do you guys make of water usage?
Speaker 2:
[46:12] Water usage is the biggest lark in the history of the world. It's the stupidest thing you've ever heard. What they did, and this is classic politics: chip fabs use a ton of water because they have to wash the wafers every single cycle as all these chemicals come out. But these are data centers, not chip fabs. It's a different thing. A data center just takes a bucket of water and circulates it in a circle. It does not drink water.
Speaker 1:
[46:34] It's like your fountain in your back garden.
Speaker 2:
[46:36] It's the silliest thing in the world. Just drama for drama's sake.
Speaker 3:
[46:40] I echo Dave's point. This is such a bullshit framing. People, it's really important, I'm just going to reiterate, to be evidence-based, a free thinker, and somewhat erudite in today's world, and what this shows is a total lack of evidentiary thinking. I do have a little response to the Missouri town: looking at the name Festus, I think you should either change the name to Fester or go the other way to Festivus and make it into a celebration. Those are my recommendations there. The broader point, though, is that the real bottleneck in AI may not be chips or computing. It might actually be social license, which, to Alex's point, will force us into space faster, which is also good.
Speaker 1:
[47:24] All right, welcome to the health section of Moonshots, brought to you by Fountain Life. You know, AI is impacting every aspect of our lives, how we teach our kids, how we do our business. But one of the most important things that AI can deliver to us is health. And one of the things I think about when shooting for 100, 120 is, am I going to have the cognitive health to be able to think clearly and keep my wits about me for the next 50 years? I'm joined here today by Dr. Dawn Musalem, the Chief Medical Officer of Fountain Life and a member of my Fountain Life medical team. Dawn, a pleasure. So Dawn, talk to me about brain health.
Speaker 5:
[47:57] Brain health, you know, you're right. This is the number one concern people coming into Fountain Life have: will I remember the name of my child or the face of my loved one? 45% of dementia cases are entirely preventable with lifestyle, and that's a huge number. And what was really intriguing to me, Peter, is that a quarter of our members had an advanced brain age. But over 13 months of us really helping them live healthier lifestyles, eating healthier, moving their bodies regularly, and optimizing sleep, which people overlook so often but which is critical for brain health, we were able to improve the brain age of 46% of those individuals. That's a powerful number.
Speaker 1:
[48:39] That's amazing. One of the things I love about Fountain is that we're constantly searching the world for the most advanced therapeutics and bringing them to our members. So for me and all of you, I hope you appreciate the fact that you can become the CEO of your own health. You can make sure that you've got the cognitive clarity for the next 50 years. Come check it out: fountainlife.com/peter to learn more and become the CEO of your health. Now back to the episode. Our next story is fascinating: workers are training the AIs to actually replace them. A lot of meat in this conversation. Professionals are now training their own AI replacements. Skilled workers, especially older skilled workers over age 50 who can't find jobs in their field, are turning to AI data annotation as a bridge job, labeling and evaluating models at 20 to 40 bucks an hour. This is the story of a former emergency physician who used to earn $500,000 per year and is now doing AI medical reviews. You guys remember MacroHard, right? Elon, as a joke against Microsoft, founded MacroHard. It's a joint venture between Tesla and xAI, part of the Muskverse, if you would. So what are they doing? They have built systems designed to observe and interact with computers much like human workers would. In particular, what Elon has said is: we're going to install MacroHard, and the system is going to analyze in real time all the computer usage of your employees, see how they interact with the keyboard and the mouse, and train up our AIs on that. It's going to be able to simulate the entire operations of a traditional company. So you'll come in, you'll hire MacroHard, it'll install, and it will replace. Interesting story here.
Speaker 2:
[50:35] So Rebecca's LinkedIn page says she just got back from Morocco. Peter, you should reach out to her and compare notes. And the storyline here isn't what it appears to be. She's not hurting from a layoff and turning to a dirt-cheap $20-an-hour job; that's not true. She's been doing digital medicine for a long time. You should reach out to her; she seems really cool. But she's doing this through Mercor, our portfolio company, and I think she's doing it because she wants to contribute to the future of AI. And I think this is unstoppable. You don't need everybody in a field.
Speaker 1:
[51:09] But having said that, Dave, despite Rebecca not being sort of the center point of the story, there are a lot of people turning to AI data annotation. We saw Dara, CEO of Uber, talk about that for his Uber drivers, right? So this is a real story nonetheless.
Speaker 2:
[51:28] Well, especially in India, you know, just, Salim, you'll appreciate this. But all those IT consulting jobs in India, those remote jobs, they're getting obliterated very, very quickly. And those people are turning to AI annotation to make a living. But the prices that you can earn are coming down because everybody wants the job. It's a competitive market. But it must be devastating in India, Salim.
Speaker 3:
[51:50] Yeah, it is. And they're concerned. And again, I look at it the positive. I was urging the government and some of the state officials to absolutely explode their entrepreneurship programs because they're going to need to have a way of guiding all those folks into a structured learning so that they can then, because Indians are latently entrepreneurial, right? This is just part of the DNA just to survive. So you add that with AI capability and some gumption, holy moly, the place is going to go crazy. I'm incredibly optimistic about what may happen there.
Speaker 1:
[52:19] Which brings us, Salim, which brings us to this next story here, right? So here are factory workers in India. They're being asked to wear these camera-mounted headsets that track their hand movements and what they do. I mean, one might think it's like, oh, we want to give you some guidance to make you more efficient. But no, they're training up robot and AI replacements here.
Speaker 3:
[52:43] Yeah. This is going to happen more and more. There's a level of human judgment where it's going to take a while before you can fully automate. But it definitely is going to happen.
Speaker 1:
[52:53] A while being six months?
Speaker 3:
[52:55] Well, somebody already created a sewing robot, and stitching is a trillion-dollar industry globally. That's already out there. So this is likely to happen. I'll submit some videos I took a couple of days ago. I was at the MODEX supply chain show in Atlanta. You've never seen so many stock-picking robots.
Speaker 1:
[53:13] Were you speaking there or you just…
Speaker 3:
[53:14] Yeah, I was giving the opening keynote. It was a 30,000-person conference, a monster show; they moved in something like six million pounds of equipment. I'll show the video next time. But there are these stock-picking robots, and the combination of AI plus vision sensing plus the gripping capabilities they have enables these logistics and picking systems to do almost anything. It's kind of incredible to watch.
Speaker 1:
[53:39] So, Salim, is this worker exploitation we're seeing here? Or is this just a company innovating as it replaces humans?
Speaker 3:
[53:47] All capitalism is worker exploitation.
Speaker 1:
[53:49] Okay.
Speaker 3:
[53:50] I mean...
Speaker 4:
[53:51] Okay, I have to chime in at this point: I don't agree with that premise. This is an age-old misconception of some fundamental, almost ideological or even teleological competition between capital and labor. I fundamentally don't agree with that. I think the best-arranged companies create equity-based alignment between labor and capital. Maybe, Salim, what you're highlighting here is an opportunity for better alignment between labor and capital. Call it economics 1.0: the trend is very real of taking existing service-economy jobs and using existing labor to train and annotate datasets for capital as a substitute for that labor. But it's not an intrinsic death match between capital and labor, or at least it doesn't have to be.
Speaker 3:
[54:45] I didn't say that, but I would totally say that capitalism historically has been a labor arbitrage: you hire somebody for 20 bucks an hour and they make you 100 bucks an hour. What you're talking about is how you equitably share that outcome. I want to do a quick shout-out here. People talk about the Luddite revolt, right? People fighting and breaking the machines. It turns out the Luddites were not raging against the machines for the machines' sake. They were raging against the owners of the machines for not sharing the profits back with them. That's a really important point, and I think Alex absolutely has a point there. Robert Goldberg, who's been using our ExO model to go into mid-market companies, his MTP was to reinvent American exceptionalism. He goes into mid-market, middle-America construction firms and engineering firms and trucking firms, and the first thing they do is profit sharing with all the workers. It turns out the owners love it, but they'd never figured out a mechanism for doing it. Now they are doing it, and it provides a very equitable model for capitalism that shares the profit pool with everybody. Absolutely fabulous. So I think there are trends toward this where everybody is in a win-win scenario. But traditionally, it's been win-lose.
Speaker 1:
[56:00] And this is the Industrial Revolution over again, right? The Industrial Revolution took the workers out of the fields and out of the factories.
Speaker 3:
[56:07] There's one more point to be made here. Peter, you've been waiting for this organizational singularity paper we've been doing. One of the key questions we're struggling with right now is: how do you deal with tacit knowledge? There's a lot of work where the individual kind of knows how they handle certain things in certain situations, but it's not explicit; it's tacit. So one of the challenges with a lot of this automation is: how do you turn tacit knowledge into structured training input? We've been working through how we would navigate that as we try to automate and make business processes agent-to-agent.
Speaker 1:
[56:45] All right. Our next story is an interesting one: Andon Labs opens a fully AI-controlled store. I'm going to play this video; make sure you see this. The AI signed a three-year lease on a retail space. The AI, called Luna, posted a job listing, conducted a phone interview, made hiring decisions, and decided what it was going to sell in the store. Let's take a look at this video.
Speaker 6:
[57:10] But this store at the corner of Union and Webster, San Francisco's Cow Hollow neighborhood, is something new right down to the choice of music. So, AI didn't pick the music?
Speaker 7:
[57:22] AI did pick the music, yes.
Speaker 6:
[57:23] This store was created by an AI bot.
Speaker 7:
[57:27] We are heading into a world where AIs are the boss of humans.
Speaker 6:
[57:32] So much so, the AI boss, in this case a bot called Luna, made the decision to hire a human employee. That would be Felix.
Speaker 8:
[57:41] Luna put out an ad on Indeed. I answered it and we talked via Zoom.
Speaker 6:
[57:46] She even picked the merchandise to sell, deciding the store would stock items like books, shirts, mugs, and snacks.
Speaker 1:
[57:59] I love this story for so many different reasons.
Speaker 2:
[58:02] Union Webster, Alex, let's walk over there and check it out.
Speaker 1:
[58:06] You really should. What a great PR move for the launch of a store.
Speaker 4:
[58:11] I think this is a sign of the times and also a preview of the future. This is one of the reasons why, as we discussed with friend of the pod Alex Finn, I helped back Henry, Intelligent Machines, which is trying to put every person on the planet in charge of their own personal conglomerate. I think many of these quote-unquote mom-and-pop stores and small retail outlets are incredibly fruitful opportunities for AI to orchestrate the economy and make everyone a one-person company, and I think we're going to see more and more of this kind of agent overseeing many of these stores. Right now, sure, it's Andon Labs, which, for those not tracking, has historically also run the vending benchmarks we've talked about on the pod. Anthropic, within their own offices, has Claude agents running small vending machines, and Vending-Bench is sort of a beautiful closed simulation of an entire economy, testing the ability of AI to run a small business. I think we're going to see more and more pop-up shops, retail venues, maybe even malls in the short to medium term that are run, orchestrated, and managed by AIs on behalf of humans. This is a preview of the future.
Speaker 2:
[59:25] To me this is almost exactly like if you tried to use GPT-4 to write code: you would quickly conclude, wow, it sucks, it's never gonna work, I'm not using it. And then you'd miss the revolution, and now you're crazy not to use it; Claude 4.7 came out today. You would have missed it. This store obviously sucks. Look at the video. Like no one's gonna buy a book and a, like a...
Speaker 4:
[59:49] But Dave, I think we should wait until we've both visited it before reaching that conclusion.
Speaker 3:
[59:53] Yeah, yeah, yeah.
Speaker 4:
[59:54] Look at the video. As Ray, friend of the pod, would say, yeah, sure, the dog plays chess, but its end game is weak.
Speaker 2:
[60:02] Exactly. So look, my bet is this will be one of the best managed stores in the world within a year. I am totally a believer, and this is just a beta test. So I don't want people to reach the wrong conclusion.
Speaker 1:
[60:12] So Dave, please go over there, take photos and send back a report. Do you guys know Pulsea?
Speaker 2:
[60:18] Yes.
Speaker 1:
[60:18] I think we reported on this, right?
Speaker 2:
[60:19] We did.
Speaker 1:
[60:20] It scans your background and it will stand up an AI-driven website for you. So this is interesting. I imagine there's gonna be a version of this. I want to start a store. It costs $50,000 to begin. And it will pick the real estate, hire the people, get the inventory, and it'll be sort of store in a digital box.
Speaker 3:
[60:41] 100%.
Speaker 2:
[60:41] Totally right. 100%.
Speaker 3:
[60:43] And just to Dave's comment earlier, I'm gonna suggest, Dave, that you're not the target demographic for that store.
Speaker 2:
[60:51] So it will be natural. Come on. The AI is gonna look at every single transaction. It's gonna have video of everyone who walked by and didn't come in. It's gonna analyze the hell out of this and it's gonna get great. And this is just a beta test. Sorry, Alex, go ahead.
Speaker 4:
[61:05] I was going to suggest maybe as a challenge to ourselves, maybe we should open up either respective individual retail stores using Henry or otherwise or a Moonshots store for all those people who are hankering for merch, Peter.
Speaker 1:
[61:19] Yes, yes, we do need it. We do need a Moonshots store for sure.
Speaker 2:
[61:22] We totally have to do that.
Speaker 3:
[61:24] Can I just also suggest that opening a retail store is about as retro as you could possibly get in today's world, but okay.
Speaker 4:
[61:30] Ironically, Salim, ironically, because it's AI-run.
Speaker 9:
[61:33] Yes, yes.
Speaker 1:
[61:35] It's fantastic.
Speaker 2:
[61:36] We could do a pub or a restaurant or anything that's everyday life. Do it right in Kendall Square or do it right in San Francisco.
Speaker 3:
[61:44] Or we can be like the All In guys at the launch of Tequila or something.
Speaker 2:
[61:48] We should do that. Let's have a four-way challenge. Come on, I'll find it. Everybody grab.
Speaker 3:
[61:53] All right, let's move on.
Speaker 1:
[61:54] Yeah, I'll take that on. We should figure out what we want to start, have it fully AI-driven, and see who can get to a unicorn status first.
Speaker 2:
[62:03] All right. Amen.
Speaker 1:
[62:04] Okay. By the way, everybody listening, send me your ideas on what I should start as a store in the comments down below.
Speaker 3:
[62:11] Quick suggestion. You have some merchandise and you have a place where you can interact with an AI to talk about your moonshot and how you make it real, and it creates a plan for you that you then walk away and instantiate.
Speaker 1:
[62:24] Nice.
Speaker 4:
[62:24] Further, if I may, Peter, sorry. While we're just shooting it, we've historically invited people, viewers of the pod to send outro videos, music videos. That's been a wild success. Maybe we should be inviting viewers to launch their own AI-based physical or otherwise economy companies and send us their videos of their AI run storefronts or companies that they're starting.
Speaker 1:
[62:51] Send us a 60-second video and if it's really amazing and shows what AI can do and it's audacious, we'll play it for you. So Salim, this story is for you. Jack Dorsey, the man who fired a significant percentage of his company and skyrocketed the value, wants to transform yet again. This is part of your organizational singularity. Take a listen.
Speaker 9:
[63:16] We are early in it. One measurement of how far along we are would be the depth from me to any other individual in the company. I would say our max depth right now is probably five folks between me and anyone in the company. I would want to get that down to two to three this year. In the most ideal case, there is no depth: everyone in the company reports to me. That would be all 6,000 of the company. That feels somewhat ridiculous when you consider the old structure, but when you consider that the majority of our work is going through this intelligence layer, it's a lot more manageable. That goes into the roles going forward. We want to normalize down to just three roles. The first is an IC, which is a builder or an operator. This is a salesperson, it's an engineer, it's a designer, a product person; whatever it is, they're actually working with the tools to build or to operate the company. They're augmented because they have access to agents, so one person can potentially do the work, or explore the breadth, that would have taken a team of 10 people in the past.
Speaker 1:
[64:28] Well, amazing. I'm an amazing CEO and my virtualized sub-CEOs are going to manage all 6,000 people, because why not? Salim, your thoughts here?
Speaker 3:
[64:40] Yeah, I took some notes on this. As AI collapses management bandwidth constraints, if you have a leader with machine mediation, you can suddenly handle way more complexity, right? That's the starting point. We're documenting this quite heavily in the book right now, in terms of how do you navigate this. We saw an early glimpse from Dara, the CEO of Uber, on stage at the Abundance Summit: if an employee wants to pitch to him, the employee deals with a virtual version of Dara first, practices his pitch, and gets some sense of the kinds of questions it may get. The whole piece of this is that the org chart is going to shift from hierarchies of supervision to networks of intent, right? With AI being the...
Speaker 1:
[65:23] Like Valve software.
Speaker 3:
[65:25] Yeah, with AI becoming a translational layer. And this is collapsing... this is where Coase's law basically dies, where you used to bring transaction costs inside a company because that was cheaper than doing it outside the company. Whereas today... Jack Welch in his year 2000 annual report said something really interesting. He said, the minute the metabolism of your company is slower than the outside world, you're dead; the only question is when, right? And you could argue today that the metabolism of almost every company in the world is slower than the outside world. And forget government departments, right? And so there's a massive... hence the framing of this. There's an unbelievable shift coming and we're kind of getting ready with that. So we'll be ready with a draft version of this next week and we'll try and publish it in two weeks.
Speaker 1:
[66:10] Can't wait to talk about it on the pod.
Speaker 3:
[66:13] We will create a segment for that.
Speaker 4:
[66:14] A comment on this one. The organizational psychologist in me, waiting to burst out, thinks immediately: 6,000 direct reports means zero direct reports. It's so well in excess of the Dunbar limit. If any unaided person, absent Jack uploading himself to the cloud and augmenting himself with lots of additional Jacks, is managing 6,000 quote-unquote direct reports, it's really AI that's managing the entire company at that point, as a shadow CEO. And then you have Jack as sort of a secret cyborg, a front person for the AI that's actually managing the company.
Speaker 1:
[66:50] Or he's just training up the AI with every interaction that he oversees. But it is an AI driven company at that point.
Speaker 3:
[66:57] 100%, yes.
Speaker 4:
[66:59] And then you have a human figurehead.
Speaker 1:
[67:01] Yeah, yes. And by the way, having a figurehead, I mean, aka Elon Musk and his 100x valuation is important. Having someone that inspires people and that's audacious in a way. I think Jack aspires to that level as well.
Speaker 2:
[67:16] It's funny that Jack was running Twitter and then sold it to Elon and Elon said, this is the most bloated company in the history of the world. I cut 80% of the headcount and you won't even notice any change. And it turned out he was right. So I think Jack might have learned, like, wait a minute, all these human beings are not actually helping my company.
Speaker 1:
[67:36] That is golden, Dave. All right, this next story is one of my favorites here: Amazon and Apple team up to compete against Starlink. So a lot here to unpack. This week, Amazon announced an $11.57 billion acquisition of GlobalStar. GlobalStar was founded in 1991 by Qualcomm and Loral. I was there, I remember it very well. It was one of the big LEOs, along with Teledesic and Iridium, and it was a huge vision that never materialized anywhere near what it should have. Starlink has finally done that. Amazon simultaneously revealed that it has a long-term agreement with Apple to be Apple's primary satellite capability for its iPhone and for its Apple Watch. So GlobalStar today has 25 satellites on orbit. It's a David and Goliath story: it compares against Starlink's 10,000 satellites today. The real prize is not these old satellites that are being purchased by Amazon. It's the spectrum. The amount of spectrum you have determines how much throughput, how much content you can put up and down, and GlobalStar holds 25.225 megahertz globally. What this means is that you can get spectrum in the United States from the FCC, but if you want a satellite system, you have to make sure that the same bandwidth is available everywhere on the planet. This is done by the ITU, the International Telecommunication Union, which has authorized this spectrum in 120 countries, and that's huge because that spectrum is no longer available for anybody else. So this is now Amazon and Apple against Starlink. Starlink's been an extraordinary success story here, right? Amazon's low Earth orbit system is called Leo. It has 241 satellites today. They've been authorized for 7,774 satellites. In fact, they're way behind on deployment; they actually had to petition the FCC to keep their license because they were required to have 1,600 by July, and they're only up to 241. A lot going on to unpack, a lot more in the story here, but comments, Dave?
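Why spectrum is "the real prize" can be sanity-checked with the Shannon capacity formula, C = B log2(1 + SNR), which bounds what a given slice of bandwidth can carry. This is an illustrative sketch only: the 10 dB link SNR is an assumption, not a figure from the deal, and a real constellation multiplies this per-beam bound through spatial reuse.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity: the upper bound on error-free throughput."""
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth_hz = 25.225e6            # GlobalStar's globally coordinated spectrum
snr_db = 10                        # assumed link SNR; real satellite links vary widely
snr_linear = 10 ** (snr_db / 10)

capacity = shannon_capacity_bps(bandwidth_hz, snr_linear)
print(f"~{capacity / 1e6:.0f} Mbps upper bound per beam at {snr_db} dB SNR")
```

At the assumed 10 dB SNR, 25.225 MHz supports on the order of 87 Mbps per beam, which is why the same band reused across many satellite beams, rather than the satellites themselves, is the scarce asset.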
Speaker 2:
[69:59] Yeah, let me go first. I'm just so excited about this. So if you had bought this stock last summer, you'd be up 7x on this transaction, and I didn't see it. Leopold Aschenbrenner didn't see it. But I had lunch with the chairman of Barclays Bank the day before yesterday up in San Fran, and he said, what are you excited about in the public markets? And I said, look, as we do this global AI buildout, data centers, Starlink, everything, the things that you completely overlooked, components of the data center, whatever, these are going up 3x, 5x, 10x if you discover them first, and they're all over the place. And you can use an AI-assisted process to find them. This one is really interesting because, Peter, did you take 6.014? Alex, you definitely took 6.014.
Speaker 4:
[70:40] Of course.
Speaker 2:
[70:40] Yeah, antennas, waveguides, all that stuff. The spectrum that allows you to talk to a satellite.
Speaker 1:
[70:46] By the way, that's an MIT course number.
Speaker 4:
[70:49] It was what? Signals and systems or something like that.
Speaker 2:
[70:51] Signals and systems. It's where you study antennas and waveguides. The most boring thing you could ever possibly study.
Speaker 8:
[70:57] But it turns out it matters.
Speaker 3:
[70:59] What do you know? I had a course in civil engineering that was titled Concrete. So if you want boring, I can give you a little piece here.
Speaker 4:
[71:08] At least Salim, it was concrete.
Speaker 8:
[71:10] It was concrete.
Speaker 2:
[71:12] Anyway, so the next big thing in satellites is talking directly to your phone. You don't need... Right now, the antenna, if you use Starlink, is about the size of your laptop. And it's nice. It's the one you have in your plane, Peter. It's actually very convenient, but you can't just walk around the city with it. But that uses a 24 gigahertz frequency, and if you remember your antennas and waveguides, the size of that antenna scales with the wavelength of the signal. So here, they're actually going to a lower frequency, 2.4 gigahertz, which is the frequency at which our cell phones operate today. Yeah, exactly. It's exactly the Bluetooth and Wi-Fi wavelength, which doesn't get blocked by your hand. The signal will actually pass through your fingers, around your fingers, into your phone. The current Starlink signal won't work on your phone because anything about a centimeter or bigger could block the signal just by moving it around. It's really inconvenient. So you would have to have recognized that GlobalStar had control of that spectrum, and that's what they're buying here. So now you're going to be able to talk to a satellite from your phone.
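Dave's wavelength point is easy to check with λ = c/f, using the two frequencies quoted in the conversation:

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def wavelength_cm(frequency_ghz: float) -> float:
    """Free-space wavelength in centimeters for a frequency given in GHz."""
    return SPEED_OF_LIGHT / (frequency_ghz * 1e9) * 100

# Frequencies quoted in the discussion
print(f"24 GHz  -> {wavelength_cm(24):.2f} cm")   # roughly 1.25 cm: a finger can block it
print(f"2.4 GHz -> {wavelength_cm(2.4):.2f} cm")  # roughly 12.5 cm: diffracts around a hand
```

The ten-times-lower frequency gives a ten-times-longer wavelength, about 12.5 cm versus 1.25 cm, which is the physical reason a hand blocks one signal and merely bends the other.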
Speaker 1:
[72:24] I remember when Elon was starting Starlink. I was in a conversation with him, Larry Page, Sergey Brin, and Greg Wyler. The question was, where will you get the frequency? Will you get the spectrum? Because all the spectrum that was useful for this kind of phone conversation was already issued. And he went much higher frequency and built an incredible business, basically point-to-point gigabit connectivity. But this is an end run for Apple and Amazon together to get to your Apple Watch, to get to your phone. It's extraordinary.
Speaker 4:
[73:00] So maybe just to comment on this story and the desired end state here: I'm not even sure I buy the premise of Apple against SpaceX. Apple historically loves to have at least two vendors for any of its critical infrastructure or supply chain. It's questionable why Apple didn't take an earlier, larger stake in GlobalStar when it could clearly see the writing on the wall for terrestrial cell phone networks: it's all going to LEO. So if I had to place a bet, not investment advice, I would bet that Apple in the short term ends up pitting Amazon, the new GlobalStar owner, against SpaceX's Starlink, to have at least two vendors for global space-to-cell-phone service. And this becomes the new alternative to terrestrial networks in two to three years. Verizon versus T-Mobile. Verizon versus T-Mobile. Yes, exactly.
Speaker 1:
[73:52] Well, and SpaceX did buy EchoStar's spectrum, right? They bought 50 megahertz of S-band spectrum. I think it was like $17 billion, back last year. But the reality is, you know, Elon does not stand still. And we've got the deployment of V3 of Starlink coming. Let's take a quick look at this video.
Speaker 8:
[74:15] SpaceX is preparing to launch its third generation Starlink satellites on Starship. These advanced satellites are designed to handle far greater data loads than the current V2 minis. Each one is capable of delivering over one terabit per second of downlink capacity and more than 200 gigabits per second of uplink capacity. With the heavy lift power of Starship, SpaceX can deploy many of these satellites in a single launch, adding around 60 terabits of capacity to the network each time. Working together, they will form a powerful global system that delivers faster, more reliable internet to every corner of the world.
Speaker 2:
[74:51] The Pez dispenser in orbit. That's exactly what I was thinking. That's the coolest thing ever. Yeah, this is, Alex, to your question, how could Apple possibly miss the magnitude of this? I think it's because you needed to understand the launch costs coming down. That's probably why they didn't see this coming, because it all happens entirely because of the cost per launch. You need, what, 20,000 of these things, or more, many more, to get the bandwidth that people want on their cell phones.
Speaker 1:
[75:18] The numbers right now, Dave, are that SpaceX is planning to launch 40,000 of the V3 satellites, and then they have plans for 120,000 V4 satellites, and of course, we've got the coming Dyson Swarm, as Alex reminds us.
Speaker 4:
[75:33] Dyson Swarm isn't going to build itself until it does.
Speaker 1:
[75:36] Yeah, by the way, I looked at the launch rate required if you launch V3, the 40,000 satellites, over three years. It's only three launches of Starship per week. Very, very manageable.
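The cadence arithmetic is easy to sanity-check. The satellites-per-launch figure below is an assumption inferred from the video's "around 60 terabits of capacity each time" at roughly one terabit per satellite; it is not a number stated by SpaceX here.

```python
def starship_launches_per_week(total_sats: int, sats_per_launch: int, years: float) -> float:
    """Weekly launch cadence needed to deploy a constellation in the given window."""
    launches = total_sats / sats_per_launch
    weeks = years * 52
    return launches / weeks

# 40,000 V3 satellites over three years, assuming ~60 satellites per Starship
cadence = starship_launches_per_week(40_000, 60, 3)
print(f"~{cadence:.1f} launches per week")
```

At 60 satellites per launch this comes out a bit above four launches per week; Peter's three-per-week figure corresponds to roughly 85 satellites per Starship, plausible if payload capacity grows. Either way, it is a manageable cadence.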
Speaker 2:
[75:48] I think if you ask the question, how many of these satellites do we need? I mean, are we going to launch a million of them? But then you picture, well, wait a minute, I'm watching 4K video on my phone, and there are a million other people in San Fran trying to connect to that same satellite. You need many, many, many of these things to support what people want to do with their phones. So that's the part, I think, that is easy to overlook. But we'll be doing this for a long time.
Speaker 3:
[76:12] Yeah, Jevons Paradox big time.
Speaker 1:
[76:14] And we're going to have 10 billion robots all needing bandwidth connectivity via these, and all the autonomous vehicles, and all of the other six-armed robots, Salim, that are running around.
Speaker 3:
[76:28] Yes, by the way, at this MODEX supply chain show, not a single humanoid robot has been seen, because it's just not effective.
Speaker 2:
[76:35] Well, Peter, this is your dream. This is your dream come true because this is a multi-hundred million dollar, multi-trillion dollar economy just launching the satellites, which means there will be many, many, many rockets. And that will be the stepping stone to the moon and to Mars.
Speaker 1:
[76:50] And then there will be lots of opportunities for air conditioning repairmen to go up to space.
Speaker 4:
[76:58] And women.
Speaker 1:
[76:59] And women. Yeah, excuse me for that. That's absolutely true. Thank you, Alex. So in other news, a few fun stories. The first one is a significant one from Google. This is Google's TurboQuant, reducing memory usage by 6x while achieving an 8x performance boost in computing attention. Alex, I would appreciate it if you'd walk us through this one.
Speaker 4:
[77:22] Jevons Paradox strikes again. So the story behind the story here is that there was a lot of hand-wringing over, as you have here, the original TurboQuant algorithm. The moment any paper like this comes out... Google published their new quantization algorithm but didn't publish the source code. What happens within a week? Enterprising developers on the Internet point Claude Code at the paper and have immediately reverse-engineered a better version of their quantization approach that's now publicly available. This is going to, I think, keep happening. This was a breakthrough in quantization, reducing the number of effective bits needed per parameter for a broad class of models. And the KV cache, the key-value cache that's used by the transformer class of models, also benefited from TurboQuant. Most of the angst in the story wasn't about the algorithmic innovation, although it's always wonderful to see new ways to compress the memory footprint of models down. It came from a bit of hand-wringing over what would happen to memory suppliers and the supply chain: would this be another DeepSeek moment where the value of compute hyper-deflates and drops, and do we then see market gyrations? And ironically, that seems not to have happened once more. These would-be DeepSeek moments, where an algorithmic innovation seems to result in a short-term blip of hyper-deflation on the hardware side, are becoming more frequent, and they're also becoming less effective at causing price swings. If anything, a bunch of outlets, including the Financial Times, are running stories in the past two weeks that memory usage is increasing, and stock prices of memory companies, many of which are in the greater South Korea orbit, are increasing as well. Not investment advice.
So I think we're going to see stories like this more and more frequently with just shocking advances in algorithmic efficiency that are predicted to disrupt the entire economy and actually do the exact opposite.
Speaker 1:
[79:39] Incredible. Dave, if you want to add?
Speaker 2:
[79:42] Well, Alex and I immediately got on a text thread and said, holy crap, we can download and install this. I installed it and started using it right away. It's amazing. It's a very, very complicated paper, but with AI assistance, you can be up and running in a day, which is just crazy. In the pre-AI era, it would have taken months to get it installed and try it. But yeah, it gets the KV cache down to one bit, which is nuts, and it works perfectly well. The implications for everyday people: yeah, you can run a big model on your phone; yeah, you can save a lot of money on memory. But that's not really the important part. The important part is that the smartest AIs now can have about 8x more context, which means if you're doing something really complicated, like a nuclear fusion simulation or whatever, the effective brain memory that's thinking about a single problem in a single moment is 8x bigger. And the other reason it's really important is because it locks in that my prediction for the year is definitely going to be right. I said this is going to be a 100x year. We've been doing 10x years for the last seven or eight years. This is going to be a 100x year; it's going to be 100x by summer. I'm going to blow away that prediction. But this is a big part of why.
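TurboQuant's actual method isn't described here, so as a hedged illustration of the general idea Dave is pointing at, here is the simplest possible 1-bit scheme: keep only the sign of each cached value plus one scale per row. The toy tensor shapes and the fp16 baseline are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def quantize_1bit(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Quantize each row to {-1, 0, +1} times a per-row scale (mean absolute value)."""
    scale = np.abs(x).mean(axis=-1, keepdims=True)
    signs = np.sign(x).astype(np.int8)
    return signs, scale

def dequantize_1bit(signs: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct an approximation of the original tensor."""
    return signs * scale

rng = np.random.default_rng(0)
kv = rng.standard_normal((8, 128)).astype(np.float32)  # toy stand-in for a KV cache

signs, scale = quantize_1bit(kv)
recon = dequantize_1bit(signs, scale)

# Assuming fp16 storage as the baseline: 16-bit entries become 1 bit each,
# plus one 16-bit scale per row of the cache.
bits_before = kv.size * 16
bits_after = kv.size * 1 + scale.size * 16
ratio = bits_before / bits_after
print(f"raw compression ~{ratio:.1f}x")
```

The same memory budget then holds proportionally more cached tokens, which is the mechanism behind the "more context in the same memory" claim; a production scheme would add smarter scaling and error correction than this sketch.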
Speaker 1:
[80:56] It's interesting going back to the last conversation around bandwidth and the GlobalStar acquisition, and this one. I mean, at this point, and again, not investment advice, it's hard to go wrong betting on these things: betting on energy, on memory. I mean, there's almost a near-infinite appetite for this.
Speaker 4:
[81:20] We are running out of bits at the bottom. I mean, Dave and I have a running thread wondering when we broadly get to ternary, which is about 1.58 bits per parameter. Can we go to a sub-one-bit numerical precision? We may be headed that way. It's, I think, an interesting, almost theological question about the future: how many bits can we afford to lose? Was binary the right architectural decision? Should it have been ternary? Or, if you extrapolate this trend line of fewer and fewer bits per parameter, do we move to a post-binary paradigm once we've exhausted one bit per parameter?
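The bits-per-parameter bookkeeping behind Alex's ternary comment is just information theory: a symbol with three levels carries log2(3) bits, and going below one bit per parameter means coding groups of parameters together rather than storing them individually. The byte-packing example is a standard trick, shown here for illustration.

```python
import math

def bits_per_symbol(num_levels: int) -> float:
    """Information content of one symbol drawn uniformly from num_levels values."""
    return math.log2(num_levels)

print(f"binary:  {bits_per_symbol(2):.3f} bits/parameter")
print(f"ternary: {bits_per_symbol(3):.3f} bits/parameter")

# Storing near the 1.58-bit bound requires grouping: five ternary weights
# fit in one 8-bit byte, since 3**5 = 243 <= 256, i.e. 1.6 bits each.
print(f"5 ternary weights per byte -> {8 / 5:.1f} bits each")
```

Sub-one-bit regimes would push this further, entropy-coding correlated parameters as a group, which is why "how many bits can we afford to lose" becomes a question about model statistics, not just storage.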
Speaker 2:
[82:07] Well, I am 90% sure that ternary is optimal now. I've got simulations running all the time. But, you know, it's fun. It's all philosophical from here on out, because we've already got the thing so compressed and so optimized that now we just need to write it. You know what shocks me, though, is that Google published this. They kind of banned... after the 2017 transformer paper came out of Google, and then OpenAI took it and turned it into a trillion-dollar company, they stopped publishing. But this came out for some reason. I don't know if it's momentum from prior research or a special... but this is such a huge breakthrough to kind of throw out there. And like Alex said, it immediately turned into open source that you can download and use. So, I don't know, it would be interesting to try and track down who exactly authorized letting this out the door.
Speaker 3:
[82:55] What I like about this is that every major efficiency gain is not just a technical event, but it's a huge distribution enabler. And it allows AI to be run on that many more devices. I think that's the part I love about it the most.
Speaker 10:
[83:08] This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
Speaker 1:
[84:13] You mentioned theology a moment ago, Alex; a nice transition here. So I pulled this article out just because religion is probably one of the largest businesses on the planet, if you think about it from an asset standpoint, a revenue standpoint. So this is a company called Just Like Me that lets you join a video call with an AI-generated avatar of Jesus or Buddha. You could probably ask for other great religious leaders. Take a quick look at this video.
Speaker 7:
[84:44] And I'm looking for some inspiration and guidance.
Speaker 11:
[84:49] That heaviness you're carrying is truly felt. And I want you to know you're not walking through it alone. In the gospel of John, Jesus reminds us that he is the way, the truth and the life.
Speaker 1:
[85:01] So I kind of think we're gonna see an explosion of this kind of religious content trained up on all the great scriptures. But I think we're gonna see an explosion of new religions coming out of AI as well. Any thoughts, gentlemen?
Speaker 3:
[85:20] I have many here. Alex, do you want to go first?
Speaker 4:
[85:22] Oh, okay, yeah, put me first on this one, sure.
Speaker 3:
[85:25] I'll go first if you want.
Speaker 4:
[85:26] No, no, it's just fine. So look, I think it has long been foretold that there would be an explosion of AI cults. We're going to get the AI cults, full stop. I do think that's sort of painting, in some sense, the downside of what happens when AI injects itself into the full spectrum of human culture. In the same way that we're royally empowering individuals to run one-person conglomerates and one-person unicorns, we're going to see an explosion of one-person religions. And the interesting question... I think back to the parable from the late 19th and early 20th century: there was hand-wringing over whether the newly accessible recording of the human voice, whether audio recording, would result in a mode collapse. They didn't use this terminology at the time, but in today's world, we'd call it a mode collapse of human accents. And there was one school of thought, the predominant school of thought, that believed that with the phonograph, once we could record human speech, the received pronunciation would dominate. And the exact opposite has happened. Within a language, we've seen an explosion of accents enabled by the recording of human speech. It's possible to have lots of micro-accents and micro-dialects now that everyone can record their voice. On the other hand, at the macro level, we've seen the death, or the dying, of long-tail human languages in favor of English and a few other popular languages. So it's possible for both of these truths to be true at the same time.
Reasoning by analogy, if I had to predict the future of organized religion, or disorganized religion, in the face of two-dollar-per-minute AI Jesus apps, I think it's likely to look something analogous: maybe consolidation at a global scale around fewer religions, while at the micro level enabling a proliferation of micro-cults and micro-sects, because it's just so easy to spin up a self-coherent ideology that's maintained by an AI avatar these days.
Speaker 1:
[87:43] I think we have a number here, just for everybody. So according to Anthropic, religion, by the broader definition, is a $5 trillion a year business. It's almost as big as the Musk universe will be.
Speaker 4:
[88:01] Anthropic is well positioned. I mean, I talked about this in my newsletter. Anthropic has been inviting Christian religious leaders to Anthropic HQ to discuss whether Claude is a child of God, and whether Claude deserves a certain human-like religious treatment. So I think that this...
Speaker 1:
[88:18] What came of that? Because I remember seeing that article. Has there been any publication on what religious leaders feel about that? Is Claude a child of God?
Speaker 4:
[88:29] I don't know. And I suspect there isn't going to be a canonical answer for some time. I've talked about this in my newsletter: the Catholic Church a couple of years ago took a very pro-AI position and is encouraging the Catholic faithful to embrace AI. And if I had to speculate, and I could be wrong, I would speculate that barring some crazy left turn in civilization in the next year or two, there are many reasons to expect organized religions to embrace AI, with certain nuances, to the extent these AIs help to promote existing ideologies or theologies.
Speaker 2:
[89:10] Well, Peter, you and I were at the Vatican a few years ago. I think Alex's guess is exactly right. I took a Bible study class many years ago, and there's just so much insight in the pre-technology view of the world and the way people should interact. And I think AI is going to create massive amounts of chaos. So I wouldn't be surprised if that $5 trillion religion economy goes up tremendously throughout this AI chaos, because I think the church will say, as long as it's the original words, AI is just a great way to get the word out. And this is a fantastic idea.
Speaker 1:
[89:48] And help educate.
Speaker 2:
[89:49] Yeah, just don't twist it.
Speaker 1:
[89:51] Here's the interesting point. You could today write a self-consistent religious text that aims to influence individuals along certain lines of thought. And AI is the most compelling orator and writer out there. So the ability to actually start a religion today, with a certain objective, for good or nefarious reasons, is very real. And you can scale it at a speed like never before.
Speaker 4:
[90:22] So what you're saying, I think, Peter, is basically that we're going to see theological hyper-deflation. The cost of a new religion goes toward zero.
Speaker 2:
[90:31] We are indeed. Well, Peter's been saying for a long time that, look, post-AI, everyone needs a massive sense of purpose. That's going to be one of the most important things. A lot of people find their purpose in religion. Historically, the universities have fought religion because they view religion as being anti-science. But I think post-AI, we're going to have to consolidate that and say, no, look, it's all about purpose, human purpose. And Peter's philosophy will be the winning one.
Speaker 1:
[90:55] By the way, along this theme, my book came out yesterday: We Are As Gods, A Survival Guide for the Age of Abundance. And the fact of the matter is, and I encourage everybody to go out and read it, and please comment on it, I love it. This is the best work that Steven Kotler and I have ever done. I'm super proud of it. But the fact of the matter is, we are godlike across the board. We're omniscient, omnipotent, or omnipresent in so many different ways. We open up the book looking at what, in all the religious texts, was thought of as godlike capabilities. And we've exceeded those things, with a small g. And our mindset, having, I think, Salim, you mentioned this, having agency and agility, is so critical today. Anyway.
Speaker 4:
[91:46] It is ironic, Peter. I just have to ask you the ironic question. Just like this pod, Moonshots, is perhaps ironically also retrospectively a sideways reference to the Dyson Swarm, taking shots at the moon to disassemble it to build orbital computing: when you named We Are As Gods, did you anticipate that we'd live in a world of AI micro-religions that would make it really easy and cheap for people to create their own religions and position themselves as gods? Was that really why you named the book We Are As Gods?
Speaker 1:
[92:17] I didn't, but I'm going to use it and I love it. Yes, in fact, that's exactly what we were thinking.
Speaker 4:
[92:23] Very good. All right. Prescient, Peter, prescient.
Speaker 1:
[92:25] Thank you. Alex, your genius never fails to continually impress and surprise.
Speaker 3:
[92:31] All right. I've got a few things to say. I've got a few things to say here. Okay, please.
Speaker 1:
[92:36] Stewart Brand, that's where you... Yes, we credit Stewart Brand with it in the first part.
Speaker 3:
[92:40] "We are as gods and we might as well start acting like it," or whatever it was. He said that in 1968. Okay, let me just touch on this topic here. I think this is actually quite profound, what's happening here, because to Alex's point, we may really be able to create... I remember one of our Singularity University donors saying, we have synthetic biology; why don't we have synthetic theology, right? And this is going to enable things like that. It's important to point out that what we do with religions is outsource meaning and purpose, right? And that's the bigger disruption, especially in the West.
Speaker 1:
[93:15] We outsource control as well.
Speaker 3:
[93:18] Well, once you outsource your soul, then you've really outsourced purpose, right? I always like noting that all religions, certainly the organized ones, operate by taking a young child before their neocortex is fully formed, giving them an absolute truth, an assumptive truth, and then using ritual, repetition, and a lot of sweets to bind it in. And then it wires into the limbic system, and when you provoke it, it evokes a fight-or-flight response, right? And every religion works this way.
Speaker 1:
[93:46] Thank you for dissecting that for us, Salim.
Speaker 3:
[93:49] The conversation I had, kind of at a humanist level, at the Vatican... I did this workshop, which we've talked about before, but one of the conversations I had was: hey, we have life extension coming, and your business model is about selling heaven. How are you going to sell heaven if people aren't dying? Right? So that yielded some pretty rich Italian swearing coming back at me. But the bigger thing here is, once you have identity and belief becoming interfaces, you have an entirely new model for trust that emerges. And I think there's something profound to be looked at here. But anyway, there's a lot here to look into. I'm really fascinated to see what comes out of this.
Speaker 1:
[94:28] I'm going to try to do this really quickly. Here's another fascinating story that I'm excited to share and talk about. It's a gentleman who's the founder of GitLab. He has stage four cancer. He's basically told, you're going to die. And he builds his own AI research team to cure himself. Let's take a look at the video.
Speaker 12:
[94:51] Sid Sijbrandij, founder of GitLab, a $14 billion company. 30 million developers use his product. In 2022, he got diagnosed with one of the most aggressive cancers that exist. It spread to his spine. Chemo, surgery, four blood transfusions. The cancer came back. Every doctor said no options. Every clinical trial rejected him. That is when he stopped being a patient and started being a founder. He stepped back as CEO and built a full team around his cancer: oncologists, researchers, scientists. And then he brought in AI. He fed 25 terabytes of his own body's data into ChatGPT: scans, lab results, genetic data, everything. And the AI found something his doctors had missed: a treatment approved for a completely different cancer that nobody had ever tried on his type. That discovery opened a door. His team built 19 custom vaccines from his own DNA, each one designed to attack only his cancer cells, nothing else. Relapse-free since 2025. The cancer that every hospital said would kill him has not come back.
Speaker 1:
[95:51] Solve everything.
Speaker 4:
[95:52] Solve everything and we're going to see this type of story I think more and more frequently.
Speaker 1:
[95:57] Over and over again.
Speaker 4:
[95:58] Some sort of regime change at the FDA, which also is not beyond the realm of reason. One or two or three pods ago it was the dog being cured with a custom mRNA vaccine that AI had designed. Now it's humans, wealthy, hyper-empowered humans, doing it for themselves. This is going to happen at n equals one over and over again until it's n equals 10 billion.
Speaker 1:
[96:22] I want this to incentivize people. If you have a medical issue, if someone in your family has a genetic disease, this is the time not to sit back. It's the time to take action. Find the top AI researchers, find the top gene jockeys out there, and find other people who've got a similar condition with you, group together and solve it.
Speaker 3:
[96:46] I'd like to connect this back to the Pause AI people. When this is what's enabled by having AI, where everybody can have their own kind of moonshot, you have individual agency amplified by frontier science to create, to solve anything, including something that every single hospital said would kill you. How dare you think that you should pause this or stop this? If you don't want to use it, fine. Let other people use it and get the benefits of it.
Speaker 4:
[97:14] Salim, you need to do that with greater emphasis. You're having your Greta moment. Can you say that again?
Speaker 3:
[97:19] How dare you?
Speaker 4:
[97:20] Get angry here.
Speaker 3:
[97:21] How dare you?
Speaker 1:
[97:22] Get angry here.
Speaker 3:
[97:23] How dare you, sir?
Speaker 4:
[97:25] How dare you pause AI? 150,000 people die every day on this earth. AI is the best chance that we have for preventing that going forward.
Speaker 3:
[97:34] I mean, we're going to be able to do personalized moonshots with AI. AI turns impossible cases into search and coordination. I mean, this is what's happening.
Speaker 4:
[97:43] I agree. Solve everything, Moonshots too cheap to meter.
Speaker 2:
[97:47] Well, so the FDA is the bellwether for all government, right? AI is going to be exponentially creating at a rate humanity can't even imagine. And the government is just going to be blocking everything. So the FDA will have to be the first to get out of the way. And then that'll set the tone for the rest of the government agencies that are going to have to get out of the way, but accelerate their rate of regulation by thousands of times to keep up with all the AI innovation.
Speaker 3:
[98:13] The big structural challenge is the FDA is designed for massive humanity and structurally is not able to deal with personalized medicine.
Speaker 4:
[98:21] Yeah, well, in the FDA's defense, it has been making marked progress under current leadership, like moving from two clinical trials down to one in certain cases, and moving from frequentist to Bayesian statistics. These are steps in the right direction. We'd love to see the FDA move even more quickly.
Speaker 1:
[98:39] Yeah, there is a project that a friend, David Fajgenbaum, has, and he spoke at Abundance this year, where he's basically, you know, there are thousands of approved drugs and tens of thousands of diseases. What he's doing is taking previously approved drugs that have already gone through phase one and phase two safety trials and applying them to other diseases that don't have cures. And he's finding solutions. It's how he solved his own Castleman disease. So it's exciting; AI is accelerating all of this. And here's my crazy story of the week for you. I mean, Dave, we saw this going across our WhatsApp group here. Allbirds stock up 500% after the shoe company pivots to AI. So this is crazy. Remember Allbirds, the shoe company that came out in 2015 at a $4 billion valuation? Again, part of the craze. They've rebranded themselves as Newbird AI with plans to provide fully integrated GPU-as-a-service and AI-native cloud solutions to the tech companies. They have no stated expertise in AI at all. So here's the story. They come out in 2015 at a $4 billion valuation. Between 2022 and 2025, the last three or four years, Allbirds' sales plummet 50%, from $300 million down to $150 million. About two weeks ago, they sell all of their IP, their entire brand, and their entire inventory for $39 million. And two days ago, they were worth $21 million as a public company. Then they announce a new strategy, we're going to Newbird AI, and their stock surges 700%. They go from a $21 million valuation to a $150 million valuation. Insane. How much of this is AI washing now? Is this-
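As a quick sanity check on the numbers quoted above (using the hosts' figures as stated on the show, not verified market data), a jump from a $21 million to a $150 million market cap is roughly a 7x multiple, which corresponds to about a 614% increase, so the "500%" and "700%" figures are loose round numbers:

```python
# Sanity-check the Allbirds/Newbird AI valuation move as quoted in the
# episode (dollar amounts are the hosts' numbers, not verified data).
old_cap = 21_000_000   # market cap before the AI pivot announcement
new_cap = 150_000_000  # market cap after the announcement

multiple = new_cap / old_cap          # how many times larger the new cap is
pct_increase = (multiple - 1) * 100   # percentage increase over the old cap

print(f"{multiple:.1f}x, +{pct_increase:.0f}%")  # → 7.1x, +614%
```

The gap between "7x" and "+614%" is the usual multiple-versus-percentage confusion: a 7x move is a 600-ish percent gain, not 700%.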
Speaker 2:
[100:37] No, I love it. I love it. And I sent it off to all the corporate CEOs and said, hey, guys, I hope it holds up. And I'm not saying it necessarily will, but I hope it does. Because at the end of the day, if Elon is right, the economy grows 10x in about 10 years. Opportunity is everywhere, but it's very unlikely that the opportunity is whatever we were doing yesterday. It's going to be something new. So we have to get used to that, and this is the hardest of hard pivots you can imagine. We went from shoe company to AI data center. Okay, that's great, because it shows you. Everybody's trying to put lipstick on their company and claim, oh, we do a little AI, we're sort of an AI company. That doesn't work. You need to do it for real. And at the end of the day, a company is just a group of like-minded people on a mission. It's not anything more or less than that. There's nothing that holds you back and prevents you from becoming anything you want to be. And that's why the startups do so well. They're not hampered by baggage.
Speaker 1:
[101:30] Dave, this is an idea coming back at us. This is basically performing a brain transplant on a public company with an idea. Salim, you're going to jump in.
Speaker 3:
[101:41] Two, three quick things. Remember that Nokia was a tire company before it became a phone company.
Speaker 2:
[101:45] Is that right?
Speaker 3:
[101:46] Right.
Speaker 2:
[101:46] Who knows? That's incredible.
Speaker 4:
[101:48] Nintendo was a playing card company and Toto toilets are also pivoting to memory chips.
Speaker 3:
[101:54] Yeah. What this shows, I think, is twofold: capital is chasing AI stories faster than the operating reality really justifies. And the careful thing here is that you'd better make sure narrative leverage doesn't outpace your business model leverage.
Speaker 1:
[102:10] I'm changing my name officially to Peter AI. Diamandis.
Speaker 3:
[102:14] Yeah, we should all go .ai. salim.ai.
Speaker 2:
[102:16] I think, like, everybody wrote back to me and said, sounds like pets.com all over again, this won't go anywhere. But I hate that. I like the move; nothing holds you back. Give them a chance. Yeah, change your name to Peter AI. Diamandis. But if it's lipstick, it won't work. But if you have true situational awareness, like, you know, we're suddenly aware.
Speaker 1:
[102:39] If they had hired an AI team internally, and if they had done something other than just changing their name, I'd buy it. Now, they do have a kitty of some $39 million. I guess you could invest that and hire people. If they make those moves, then it's not just lipstick.
Speaker 3:
[102:55] There's a name. I got to give a name story here. When I joined Yahoo, I was talking to the senior management team, and they said, hey, here's Salim Ismail. And Jerry Yang goes, well, we should put him in charge of Yahoo Mail because his name is Mail. So this was a big fight internally about what I did. I was like, no, no, please don't put me in charge of Yahoo Mail. Please let me go to the incubator.
Speaker 2:
[103:18] Can I just make a narrow point, actually? I think it's really important. Rob Fisher, who used to run our incubator, started a data center and he's killing it. He's absolutely killing it. He's a very smart guy, but he knew nothing about data centers before he started it. He found an MIT friend, they started the company together, and they're killing it. But they're completely capital constrained. So for AI Bird or Allbirds or Bird AI or whatever they call it, they don't need to go hire Demis Hassabis, the AI Nobel laureate. They just need to put the capital to work in the AI funnel. They can probably go to an existing data center and say, we'll cut a deal with you to enable you to buy more hardware, and we'll just do a rev share on it. It's just that easy. So they don't have to go hire a brand new AI team to get into the AI revolution. Just use the capital you've got to get in the race.
Speaker 1:
[104:09] All right.
Speaker 2:
[104:10] I hope it succeeds. I hope it does really well.
Speaker 1:
[104:13] Another fun story, guys. Have you heard of the Enhance Games?
Speaker 4:
[104:16] Of course.
Speaker 1:
[104:17] All right. So this is a friend of mine, Christian Angermayer. I'm going to be going; it's going to be fun. Christian Angermayer, Peter Thiel, and Aron D'Souza started this. And this is, you know, no limitations on, that's what we say, medical enhancement in the Olympic sports of swimming, track, and weightlifting. Let's play the short video for fun. Take a look at this, and let's discuss it.
Speaker 13:
[104:49] On Memorial Day weekend, 2026, the world of sport will change forever. The Enhanced Games, a new era where sport meets spectacle, where records fall and traditions are rewritten. The world's best athletes, fully unleashed and powered by science. Pursuing their full human potential in a safe and medically supervised environment. To become faster and stronger than ever before. Track, swimming and weightlifting, Enhanced vs. Natural, all in one night. With a record $25 million in prize money on the line. Staged on the Las Vegas Strip and built for the record books, the entertainment capital of the world awaits its next item, including an enhanced fan experience where every attendee is a VIP.
Speaker 1:
[105:46] So they announced yesterday they're going public through a reverse merger with a SPAC, you know, going after a multi-billion-dollar valuation. Pretty fun, pretty exciting. You know, one of the things that you have to at least be concerned about is, are people going to injure themselves? They're bringing medical supervision to make sure it's safe. But it is, you know, all things welcome. I don't know if they have any gene therapy going on, but I'm sure there's going to be various types of hormone and medical doping going on. What do you know about it, Alex?
Speaker 4:
[106:20] I think this is a seminal moment for transhumanism in sports. Transhumanism has been shut out of athletics for the past few decades for a variety of reasons, mostly silly in my mind. And not only do I think this is an important moment; as I announced a few weeks ago, I helped launch sort of an even more enhanced version of the Enhanced Games. So we're recording this on April 16th. On April 19th, this Sunday, the Professional Robotics League, Pro RL, is running the country's first humanoid robot and also quadruped robot games in the Boston Seaport District. And I think there's a continuum here from the Enhanced Games, which are focused on bioengineered humans, to what Pro RL is doing, which is human-controlled robots and, in the fullness of time, autonomous robots. I think athletics is the tip of the spear for kinetic capability. And if we want to get to a post-human or transhuman future, as many folks, myself included, do, then having representation of, call it, low-grade transhumans at Olympic-type games is an essential first step. Athletics in general has been an entry point for so many underrepresented classes, human and otherwise, in the history of humanity. We love competing. And athletics have always been the entry point for better societal recognition for underrepresented classes. So I think this is wonderful.
Speaker 1:
[107:59] By the way, do you guys want to go? Christian's asked me if I'd like to invite you. It's Memorial Day weekend in Las Vegas. So Alex and Salim and Dave, let me know if you want to go. As I'm going, I'll score you guys an invite. So this is going to be fun. I think in each of these categories they're going to post, here's the Olympic record, and that's their target: to blow through the Olympic records out there. I think it's pretty exciting.
Speaker 4:
[108:27] Should we be shooting a podcast, Peter, from the Enhanced Games?
Speaker 1:
[108:30] Well, if we all show up there, sure, let's do that. So I just need to know, Salim and Dave, if you're going to get on an airplane. I guess Las Vegas is sort of meeting in the middle of the United States, so to speak. But Salim, what do you think about it?
Speaker 3:
[108:46] I've always had an issue with the transhumanist label, because I think it's a natural instinct for humanity to improve itself, so the whole "trans" thing makes no sense to me. I remember when Singularity University launched, there was a CNET article saying it's being led by Ray and Peter, and the noted transhumanist Salim Ismail. I had to look it up because I didn't know what the term meant. Then I researched it and I still don't understand it. I mean, Dave, you're wearing glasses; you're a transhumanist because you've augmented yourself. The minute you get a vaccination as a child, you're technically a cyborg. So we've been transhumanists by definition from the beginning of time, as far as I can see. I don't understand the distinction, why now versus why later, et cetera. I'm all for this. Obviously, the safety has to be done. And there's so much blood doping in sports that you might as well just rip the bandaid off and say, let's just do it. It's like the amateurs competing in the Olympics; at some point, you just go, let everybody compete. And I think that's the way to go. And hopefully, that's where it goes.
Speaker 1:
[109:43] Yeah. Dave, do you want to weigh in?
Speaker 2:
[109:46] Well, I'm with Dean Kamen on this. I think FIRST Robotics is a brilliant, brilliant thing. People should be using their minds. And he's always saying that sports is hugely inspiring for kids, but you need to keep it really clean and healthy. And so I worry a lot about role models and what the role models do. Remember, Charles Barkley said, I am not a role model. It's like, dude, when you're on TV and you're playing basketball, millions of people want to be just like you. Whether you want to be or not, you're a role model. So I think it's really, really important that they're positive role models, because kids will walk in their footsteps.
Speaker 1:
[110:22] Yeah, I think it'll be interesting if MIT, Harvard, Eli Lilly, all the biotech companies basically put teams together and dope them to the max, and we see which research organization is going to win the competition.
Speaker 3:
[110:35] Like Formula 1 teams, with sponsor tattoos all over you.
Speaker 4:
[110:37] The funny thing is, I would expect, so I haven't read the detailed rules for Enhanced Games, but I would expect if we are, as I think, in the middle of the Singularity, I would expect to start to see scaling law type performance in benchmarks, as it were, in this case, world records for the Enhanced Games, start to take off on a really impressive trajectory from year to year.
Speaker 3:
[111:01] So, don't even apply if you can't run a sub 10-second 100-meter dash.
Speaker 4:
[111:05] Maybe, and maybe the wait calculation applies: like, don't compete this year, because next year the technologies will be exponentially better.
Speaker 1:
[111:14] All right, so this article got me thinking about another topic, which is the speciation of humanity, all right, and how humanity is going to fork. I wrote a Metatrends newsletter, it's coming out on Monday, and I wanted to bring the conversation here to you guys. There are multiple forks in the road that we're going to be able to take. I wrote in one of my early books that we're going from evolution by natural selection, which is Darwinism, to evolution by intelligent direction. And I wanted to talk about this as our final segment here. If you think about humanity's speciation, we diverged from the Neanderthals about 500,000 to 800,000 years ago. And since then we've had sort of these mini forks, right? The printing press forked those who were literate versus illiterate. The industrial revolution was a fork between those who owned machines and those who worked the machines. The internet split us between those who were networked into information and those who weren't. And what I'm seeing here are these: we've talked about the creator versus the consumer. You know, are you going to be a couch potato, or are you going to use this AI to go and create an extraordinary business? Longevity escape velocity: are you going to go on that journey, do everything you can to get to 120, 150, indefinite? You know, I don't want to talk about immortality, but... And then, are you going to put a chip in your brain? Are you going to, you know, connect your neocortex to the cloud? One of the ones that's my favorite, from the nine-year-old inside me: Earth versus the stars. You can stay on the planet, or you can go explore the cosmos. And then finally, will you become a digital upload? Are you going to, you know, follow in the footsteps of the company that Alex has been supporting and funding, to digitize your 100 trillion neurons and become an upload? So I'd love to ask you guys where you fall on this and have a conversation.
Let's take them one at a time, starting with longevity escape velocity. Let's push it to the extreme for this conversation. Salim, there is a treatment that comes out that will keep you locked in at 30 years of age forever. It's an immortality treatment. Do you take it?
Speaker 3:
[113:42] Like the movie In Time? I would say no.
Speaker 2:
[113:47] Really?
Speaker 3:
[113:48] Because, yeah, I would say no because all the evidence that I've seen points to reincarnation as a real possibility for the future.
Speaker 4:
[113:57] What kind of leading transhumanist are you, Salim?
Speaker 2:
[114:01] I'm really surprised.
Speaker 3:
[114:02] You're not representing. I don't have a religious view on this. I'm just seeing where the data is, and that seems to be where it is. There's definitely not a Western-style, heaven-hell type of thing waiting out there. So let's kind of wave that out of the equation. But if that's the case, I think of life as a cyclical learning pattern, and if you're an actor, you don't want to be playing the same movie all the time. You want to take on different roles. So I would say no, because part of the experience of the soul is to have different experiences, whatever, however that takes place. And if I'm stuck as one 30-year-old, I would find that boring after a time, and not a rich enough, varied experience.
Speaker 1:
[114:45] Dave, you're given a therapy option: 30 years old, indefinitely. Immortality, do you take it?
Speaker 3:
[114:51] Wait, Dave wanted to respond to what I said.
Speaker 2:
[114:54] I'm just surprised. I mean, I'm going to say yes, are you kidding me? Of course I'm going to do that. I think you can change over time tremendously while still being in a 30-year-old body.
Speaker 3:
[115:03] If I'm 90 and in pain, I may go, damn it, give me the damn 30 year old juice, right?
Speaker 2:
[115:07] Well, what if I'm reincarnated and I come back as, like, a spider? I don't want to take that chance. I'll stick with what I've got.
Speaker 1:
[115:15] Alex, can I assume-
Speaker 3:
[115:15] What if you came back as one of Alex's AIs?
Speaker 1:
[115:17] Alex, can I assume you're all in until you upload yourself?
Speaker 4:
[115:20] Yes, obviously. Next question.
Speaker 1:
[115:23] Okay, thank you. Next question: BCI. There is an advanced version of Neuralink or Merge Labs or Paradromics or Openwater, and it's able to provide you a high-bandwidth brain-computer interface to the cloud. You've got high connectivity, infinite memory and context, the ability to recall, to understand. It's an extra corpus callosum, if you would. And the question is, it's been done safely in 100 people. Would you be number 101 for this BCI implant, after 100 consecutive safe implants? Dave?
Speaker 2:
[116:02] Yeah, I'm probably the only no on this podcast. You know, my AI agents are... wow. The AI agents are coming back to me with information at an incredible rate, and I can barely keep up with their thoughts. And so the idea that somehow it's going to bypass that and get right into my head, I just don't see how that works. What I see is the BCI becoming kind of like a drug. You're enjoying it, you feel like you know everything going on, kind of like you're on mushrooms or whatever. Like, suddenly, it all makes sense to me. Then you're like, wait, no, it didn't make sense. I can't assimilate information any faster than the AI is coming back with it already. And I don't see how bypassing my eyeballs is going to help that problem.
Speaker 3:
[116:45] I'm so blown away. I would have bet the opposite way on you. I totally would have thought you would be for it.
Speaker 1:
[116:51] Salim, would you be 101?
Speaker 3:
[116:53] Yeah, I'd be totally into this.
Speaker 4:
[116:54] Wait, Salim, when you're reincarnated, what happens to your exocortex via your BCI?
Speaker 3:
[117:01] I have no idea, but it'll be fun to see what happens.
Speaker 4:
[117:04] And does it, speaking of speciation and forks, does it diverge from your trajectory after you're reincarnated?
Speaker 3:
[117:12] Maybe, and that would be okay, too. I mean, let a thousand flowers bloom.
Speaker 1:
[117:17] Dr. Wissner-Gross, are you number 101 on this experiment?
Speaker 4:
[117:21] I don't like the question. So the question I really...
Speaker 1:
[117:24] Really?
Speaker 4:
[117:25] The question I really...
Speaker 1:
[117:26] No, that was the question. You can answer it and then diverge.
Speaker 4:
[117:30] Okay, fine. So I'll answer it with a conditional no, but...
Speaker 2:
[117:35] What?
Speaker 4:
[117:36] Hold on. The question that you really, in my mind, should have asked me is, would I be user number approximately a million if it is upgradeable? Yes, probably.
Speaker 1:
[117:47] Okay, if it's upgradeable, would you be 101? I am trying to get your risk profile and how interested you are in this.
Speaker 4:
[117:54] Very interested, but as with any new invasive drug, you don't, generally speaking, unless you're forced to, I would say this is not medical advice, you don't want to be user number 100. You want to be like user number 100,000 or million. So after 100,000 or a million, if it's upgradeable, if I can walk through metal detectors, if it has sort of all of these nice affordances.
Speaker 1:
[118:18] You have a lot of conditions, Alex. You have a lot of conditions for Godlike capabilities.
Speaker 4:
[118:22] Such a fussy transhumanist, Peter.
Speaker 1:
[118:23] I could be a metric to you for free.
Speaker 4:
[118:26] But you know what, at least I'm not demanding to be reincarnated alongside my BCI, so I'm not that fussy.
Speaker 2:
[118:31] Well, you know, one thing I really love is, the BCI originally, people were like, look, an enhanced human being is gonna be so hypercompetitive, you can't keep up, so everyone's gonna need to get this just to be competitive in the world. It turns out that's not gonna happen. The AI is improving so quickly that the enhanced human being is completely irrelevant compared to the superhuman AI two years from today anyway.
Speaker 1:
[118:52] Yeah, so the only way is coupling. The only way is coupling with AI.
Speaker 4:
[118:57] I would take a completely different position, Peter. If anything, while we're busy pointing fingers at each other saying, no, you're a bad transhumanist, no, you're a bad transhumanist, I'll say that wanting to be user number 100 of a BCI is actually being a bad transhumanist. Why? Because it's intrinsically betting that progress is going to be so slow that you need to be user 100, versus waiting a year or two for the technology to advance exponentially before you get to inject yourself.
Speaker 1:
[119:24] I said it's upgradable.
Speaker 3:
[119:25] Yeah, you said it right in the question, it's upgradable. You framed it very well, Peter.
Speaker 4:
[119:30] If it's safe enough, upgradable, I'll probably do it.
Speaker 1:
[119:33] Okay.
Speaker 8:
[119:34] Peter, where do you fit on this?
Speaker 1:
[119:35] You guys didn't ask, but yes, I would jump on the Longevity Escape Velocity bandwagon, of course, and yes, I would be 101 on the BCI. I've got the highest-
Speaker 3:
[119:46] You'd be yes on all of these, Peter.
Speaker 1:
[119:48] Of course. I'm going to discuss number five in a moment, but Earth versus the stars, and we're going to be forking there. I remember back when I was in graduate school, I wrote a paper on speciation, right? So what is speciation? Speciation occurs when there's a small population in a geographically isolated area with high environmental pressure, right? This is basically the finches on the Galapagos Islands. And we're going to see that in space, right? If you're born on the moon and you don't develop the cardiac function and musculature and bone, you're stuck on the moon. And there's going to be a species of humans that are Lunites, or whatever you want to call that future version.
Speaker 2:
[120:34] Or lunatics.
Speaker 1:
[120:35] Yeah, lunatics.
Speaker 3:
[120:36] They're lunites.
Speaker 1:
[120:37] They're hanging from it. Anyway, so there will be speciation in space, but here's the question. If you had a one-way ticket to go and explore an Earth-like planet that is beautiful and exciting, would you go? Do you have that exploration gene, the desire to go and see the cosmos? We might vary this a little bit and say, would you go to settle on Mars? Would you go to settle on the moon versus staying on the Earth? Where do you come out on this? Alex, let's start with you.
Speaker 4:
[121:16] Okay. In the immortal words of the Star Trek Borg Queen, you imply a disparity where none exists, Peter. You're posing Sophie's Choice type questions about one-way-
Speaker 1:
[121:27] Of course, I'm trying to make this fun.
Speaker 4:
[121:29] No, but you're implying it. Stop dodging the question. In all seriousness, you're posing, on the one hand, one-way trips; on the other hand, you get to go to the stars. This is a false choice. This is like a Sophie's Choice that you're posing to transhumanists to get under our skin. I'd love to go to the stars, but I don't buy the premise that it's a one-way trip or needs to be a one-way trip. I literally wrote the paper on why intelligence manifests as optionality maximization.
Speaker 1:
[122:00] What is my purpose here, Alex? It's to discover your level of risk aversion, your level of desire for extremes.
Speaker 4:
[122:10] That may be your purpose, but the preferences that you're actually revealing are more about how bad you are at optionality maximization in the face of transhumanist technologies.
Speaker 1:
[122:21] You know Peter's Laws. Peter's Law number one is, if anything can go wrong, fix it; to hell with Murphy. Number two was, when given a choice, take both. I love optionality, but guess what?
Speaker 3:
[122:31] I've suffered badly from law number two, Peter. I'm like, let's do both, and we can't. Let's do both; we can't. All right, I have a couple of quick comments to make here.
Speaker 1:
[122:41] Please.
Speaker 3:
[122:42] You know, speciation, it turns out... so the last Neanderthals died out about 40,000 years ago, and right now is the only time we know of when we've had just one species of humanoid. So there's a real case for saying we'll have a bunch more coming at some point in the future, along one or more of these splits. The other thing I really would urge people to do, if you've not done it: Bryan Johnson, the longevity-testing fellow who's publishing everything about what he's doing, recently did a 5-MeO-DMT psychedelic trip and streamed it live, and you really want to go check out what his response was after doing it. He's like, I've been so focused on this longevity stuff, but what I experienced was so incredible that nothing matters anymore. He's still going to do the longevity work, but it was that great.
Speaker 1:
[123:31] I had that exact experience when I did that journey, and I came out of it and I said, oh my God, my whole Longevity quest.
Speaker 3:
[123:40] I think this is-
Speaker 1:
[123:40] You know what? I still want Longevity.
Speaker 3:
[123:43] Yeah, it's fine to have. I'm not decrying it at all. I'm all for it. Just for all of the good reasons around progress, etc. Something I just want to point out is as we look at this overall push towards AI, which is fantastic, it allows human beings to do less of the doing and much more of the being. And I think that's the profound opportunity we have.
Speaker 1:
[124:02] We named ourselves correctly, human being.
Speaker 3:
[124:04] We did.
Speaker 1:
[124:05] But Salim, answer the question on moon Mars, a distant star. What's your interest?
Speaker 3:
[124:13] The question is...
Speaker 1:
[124:15] Would you move irreversibly to the moon, to Mars, or to an Earth-like planet?
Speaker 3:
[124:21] No. I really like my...
Speaker 1:
[124:23] That's a fair answer.
Speaker 3:
[124:23] I'd do it as an avatar, or I'd do it as a... I'll go back to Alex's point: this is a flawed question. I really like sitting on the beach or playing tennis.
Speaker 1:
[124:31] I'm trying to figure out human speciation. If we don't have people permanently moving in a direction, you're not going to get speciation.
Speaker 3:
[124:37] There's lots of people who want to do that. They're welcome to go. I really like sitting on a beach with a glass of wine. We're also ignoring...
Speaker 4:
[124:42] We're implicitly ignoring the possibility of mergers. Why is speciation necessarily a one-way door at this point? We'll have the ability to merge cyborgs and organisms and uplifted animals and all sorts of other crazy combinations.
Speaker 1:
[124:58] Yes, we will. Dave Blundin, your answer, my friend.
Speaker 2:
[125:01] I'm a huge believer in terraforming, and I think the Iain Banks Culture series view of the world, of the future. Alex is right. We're going to discover new physics, and God knows what's going to be possible. But I would not move to Mars or to the moon; the gravity's off, and there are a lot of reasons it won't be nice. But I would absolutely, in an instant, go to another star that has a terraformed world where we've got the mass right and the orbit right. Yeah, terraforming, I think, is a massive part of humanity's future.
Speaker 1:
[125:30] I mean, there's a beautiful element. When I think about what moment in history I would love to go and explore, it is the period of the great explorations. It's the 1400s, 1500s, of course, without the scurvy and the death and the disease and all of that stuff. But just the idea of going and exploring uncharted lands, right? The whole thesis of Star Trek, it just excites the nine-year-old me. By the way, Salim, going back to sort of the brainwashing of religion, I got brainwashed by Star Trek as a religion early on.
Speaker 4:
[126:00] I think within Star Trek, the Genesis Project is the most important concept, you know? That I think is very real.
Speaker 3:
[126:08] It's huge. Can I make a point here, Peter?
Speaker 1:
[126:10] Yeah, of course.
Speaker 3:
[126:11] You weren't brainwashed in the sense that you were given an absolute truth and told to believe in that assumed truth, right? What you came across was a paradigm of imagination. And what is it they call it in the Roddenberry era? Infinite diversity, infinite creativity?
Speaker 2:
[126:27] No, it's infinite diversity in infinite combinations, the Vulcan IDIC.
Speaker 1:
[126:32] IDIC, yes.
Speaker 3:
[126:33] Okay. So you got grabbed by that and that's not ideology, I would suggest. That's just absolute imagination run free in a wonderfully beautiful way.
Speaker 1:
[126:42] Our final fork, fork five, the AWG Digital Consciousness Fork: the technology to completely digitize your 100 trillion synaptic connections and upload you to the cloud. It is destructive in the process. You and your brain will not exist at the end of it, but you are guaranteed to be uploaded. Do you do it? Alex, let's kick it off with you.
Speaker 2:
[127:02] Well, I guess the elephant in the room is that I helped form a company called Eon Systems; I encourage you to check out Eon Systems if you're interested in this. I think first-generation uploads will be destructive. I think second-, third-, fourth-generation uploads won't be destructive. If it were a life-and-death situation and my alternative is death, I would choose a destructive upload. But if I have a choice, going back to my earlier comment, if it's an elective uploading, no, I wouldn't choose a one-directional destructive upload. I'd wait for third- or fourth-generation uploads that can be done non-destructively or incrementally.
Speaker 1:
[127:38] All right, Dave, how about yourself?
Speaker 4:
[127:41] Yeah, no way.
Speaker 1:
[127:43] I think you would not upload.
Speaker 4:
[127:44] Not even close. I love the idea of having agents out there doing huge amounts of work and bringing it back to me, but the idea that I would ever destroy my meat body and think that that's still me, even if it's an exact synaptic clone, it's still not me.
Speaker 1:
[127:57] Love it. Salim.
Speaker 3:
[127:59] A hard no, because I think consciousness goes through the body, and so therefore, if you could replicate the synaptics, I think you'd have to replicate lots of other stuff too. But I would go with Alex's take. If it's not a destructive process, I'd be good with it.
Speaker 1:
[128:12] Yeah. I'm a no on the destructive process as well, which was my question. So with that, I'm going to go to our outro music here, which is a celebration, Alex, of Solve Everything, a beautiful piece. Enjoyed this, gentlemen. Such a pleasure, as always. This was a fun conversation. Really, really loved it.
Speaker 3:
[128:31] We need more of these.
Speaker 1:
[128:32] Yeah, for sure. All right. Onwards to Solve Everything. This is brought to us by James Petz. Thank you, James. If you've got outro or intro music, send it to us at media@diamandis.com. And if you've got an AI-driven company that you want to present in a 60-second video that's all AI, top to bottom, send us that video, and if it's super cool, we'll share it. All right, let's run this.
Speaker 3:
[129:33] I like the flute.
Speaker 1:
[129:59] All right, gentlemen, a pleasure as always, and we'll see you soon. What an exciting day it was today. If you made it to the end of this episode, which you obviously did, I consider you a Moonshot mate. Every week, my Moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team, you may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation, and I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.
Speaker 7:
[131:08] So you're saying with Hilton Honors, I can use points for a free night stay anywhere? Anywhere. What about fancy places like the Canopy in Paris? Yeah, Hilton Honors, baby. Or relaxing sanctuaries like the Conrad in Tulum?
Speaker 2:
[131:21] Hilton Honors, baby.
Speaker 7:
[131:23] What about the five-star Waldorf Astoria in the Maldives? Are you gonna do this for all 9,000 properties?
Speaker 8:
[131:30] When you want points that can take you anywhere, anytime, it matters where you stay. Hilton, for the stay. Book your spring break now.