title Dylan Patel - The Infinite Demand for Tokens, Claude Mythos, and Supply Constraints

description This is my second conversation with Dylan Patel. Dylan is the founder and CEO of SemiAnalysis, where he tracks the semiconductor supply chain and AI infrastructure buildout.

This conversation is about the supply and demand of tokens. On demand, Dylan describes something completely explosive. He explains why the frontier model is the only model anyone wants, and willingness to pay for it is nearly unbounded. His own firm has gone from tens of thousands of dollars in AI spend last year to seven million this year.

On supply, we walk through the bottlenecks across memory, logic, and fab equipment that will determine how fast any of this can scale.

We also cover Claude Mythos and what the leading labs need to do to fix their growing public perception problem.

For the full show notes, transcript, and links to mentioned content, check out the episode page here.

-----

Become a Colossus member to get our quarterly print magazine and private audio experience, including exclusive profiles and early access to select episodes. Subscribe at colossus.com/subscribe.

-----

Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to ramp.com/invest to sign up for free and get a $250 welcome bonus.

-----

Trusted by thousands of businesses, Vanta continuously monitors your security posture and streamlines audits so you can win enterprise deals and build customer trust without the traditional overhead. Visit vanta.com/invest.

-----

WorkOS is the infrastructure B2B and AI-native companies use to sell to enterprise. It covers everything enterprise security requires: SSO, SCIM, RBAC, Audit Logs, AI governance, and more. Trusted by 2,000+ fast-growing companies, including OpenAI, Anthropic, Cursor, and Vercel.

-----

Rogo is the AI platform for finance. They're building agents for Wall Street that are trained to understand how bankers and investors actually do work: from diligence and modeling, to turning analysis into deliverables. To learn more, visit rogo.ai/invest.

-----

Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Visit ridgelineapps.com.

-----

Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).

Timestamps:

(00:00:00) Welcome to Invest Like The Best

(00:02:29) Intro: Dylan Patel

(00:03:09) SemiAnalysis AI Spend: Zero to $7M

(00:05:16) Real-World Examples of Claude Code

(00:11:41) Token Demand: “Completely Explosive”

(00:14:48) Why Everyone Wants the Frontier Model

(00:15:36) Mythos: Biggest Model Capability Jump in Two Years

(00:20:54) Fear of Rapid Model Progress

(00:23:45) Robotics as the Next Demand Wave

(00:26:03) Scaling Laws & Compute Efficiency

(00:27:24) OpenAI vs. Anthropic

(00:31:33) Supply Side: Bottlenecks Across the Stack

(00:33:26) TSMC CapEx Could Cause a Shortage

(00:36:45) CPUs, ASICs, and FPGAs

(00:40:12) Tokenomics

(00:42:20) Protests & AI Backlash

pubDate Thu, 23 Apr 2026 08:00:00 GMT

author Colossus | Investing & Business Podcasts

duration 2719000

transcript

Speaker 1:
[00:00] I know firsthand how complex the tech stack is for asset managers, and seemingly every new tool and data source makes the problem even worse, adding more complexity, more headcount, and more risk. Ridgeline offers a better way forward, one unified platform that automates away all that complexity across portfolio accounting, reconciliation, reporting, trading, compliance, and more, all at scale. Ridgeline is revolutionizing investment management, helping ambitious firms scale faster, operate smarter, and stay ahead of the curve. See what Ridgeline can unlock for your firm. Schedule a demo at ridgelineapps.com. OpenAI, Cursor, Anthropic, Perplexity, and Vercel all have something in common: they all use WorkOS. And here's why. To achieve enterprise adoption at scale, you have to deliver on core capabilities like SSO, SCIM, RBAC, and audit logs. That's where WorkOS comes in. Instead of spending months building these mission-critical capabilities yourself, you can just use WorkOS APIs to gain all of them on day zero. That's why so many of the top AI teams you hear about already run on WorkOS. WorkOS is the fastest way to become enterprise ready and stay focused on what matters most: your product. Visit workos.com to get started. Felix by Rogo is a personal finance agent that turns a single prompt into finished, client-ready work using your firm's own templates, context, and standards. Send Felix an email like, take these comments and turn them for me, or update my tracker with the context of these emails, or run the ability-to-pay math on this buyer, and Felix sends back finished PowerPoint decks, Excel models, and sourced research. Felix works the way your team already does, delivering work quickly and accurately around the clock. Learn more at rogo.ai/felix. Hello and welcome everyone. I'm Patrick O'Shaughnessy and this is Invest Like the Best.
This show is an open-ended exploration of markets, ideas, stories and strategies that will help you better invest both your time and your money. If you enjoy these conversations and want to go deeper, check out Colossus, our quarterly publication with in-depth profiles of the people shaping business and investing. You can find Colossus along with all of our podcasts at colossus.com.

Speaker 2:
[02:00] Patrick O'Shaughnessy is the CEO of PositiveSum. All opinions expressed by Patrick and podcast guests are solely their own opinions and do not reflect the opinion of PositiveSum. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of PositiveSum may maintain positions in the securities discussed in this podcast. To learn more, visit p-s-u-m dot v-c.

Speaker 1:
[02:28] This is my second conversation with Dylan Patel. Dylan is the founder and CEO of SemiAnalysis, where he tracks the semiconductor supply chain and AI infrastructure buildout. This conversation is about the supply and demand of tokens. On demand, Dylan describes something completely explosive. He explains why the frontier model is the only model anyone wants and willingness to pay for it is nearly unbounded. His own firm has gone from tens of thousands of dollars in AI spend last year to a $7 million run rate this year. On supply, we walk through the bottlenecks across memory, logic, and fab equipment that will determine how fast any of this can scale. We also cover Mythos and what the leading labs need to do to fix their growing perception problem. Please enjoy my conversation with Dylan Patel. You told me this incredible story about how your own team's use of tokens has changed dramatically this year.

Speaker 3:
[03:15] Yeah.

Speaker 1:
[03:15] Can you retell that story and what it is teaching you about what's going on in the world?

Speaker 3:
[03:20] Last year, we thought we were heavy users of AI. Everyone's using ChatGPT. Everyone's using Claude. We provided whatever subscriptions anyone wanted, on the order of tens of thousands of dollars of spend for our firm. This year, the spend has just skyrocketed. It really started in late December with Opus. That included Doug O'Laughlin, who is our president. He's very much leading the charge in the sense of non-technical people using AI for coding. He's basically pulled the whole firm along slowly over time. I think he's been the leader in doing that. Obviously, the engineers were using it anyway, but spend in January just started to inflect and rocket and rocket and rocket. We signed an enterprise contract with Anthropic, and it's gotten to the point where, I think when I last talked to you, it was a $5 million run rate. It's actually a $7 million run rate now. So we're spending $7 million.

Speaker 1:
[04:09] That was last week, by the way.

Speaker 3:
[04:11] A lot of that is just the usage. People who have never coded before are using Claude Code and spending thousands of dollars, sometimes in a day. It varies: some people spend thousands of dollars one day, or a couple of hundred dollars a day for a few days, and then go back to thousands of dollars. It's very variable across each individual user, but across the firm, we're spending $7 million a year now on Claude Code at the current rate, versus our salary expense being in the neighborhood of $25 million. So our Claude Code spend is north of 25 percent of salary. If this trajectory continues, then we'll spend more than 100 percent by the end of the year, which is a bit terrifying. Thankfully, I don't have to decide between people and AI because our company is growing so fast. It's more like, okay, I don't have to hire nearly as fast, I can spend a lot more on AI, it works, and we just grow faster. But I think other folks will start to reckon with the fact that if one person can do the work of five to 10 to 15 people using Claude Code, then all of a sudden, I should probably cut people. The use cases are so broad.
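The spend-versus-salary arithmetic above can be checked in a few lines. These are the approximate figures from the conversation, not exact accounting:

```python
# Back-of-envelope check of the spend figures Dylan cites
# (illustrative numbers taken from the conversation).
claude_run_rate = 7_000_000   # annualized Claude Code spend, $/yr
salary_expense = 25_000_000   # approximate annual salary expense, $/yr

share_of_salary = claude_run_rate / salary_expense
print(f"AI spend as a share of salary: {share_of_salary:.0%}")  # 28%

# The crossover point Dylan worries about is when AI spend matches
# the salary bill:
crossover_multiple = salary_expense / claude_run_rate
print(f"Spend must grow ~{crossover_multiple:.1f}x to exceed 100% of salary")
```

At the stated run rate the firm is at roughly 28 percent of salary, consistent with the "north of 25 percent" claim, and a bit under 4x more growth would put it past 100 percent.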

Speaker 1:
[05:15] Give a couple of examples.

Speaker 3:
[05:16] Okay. So for example, one thing is we have a reverse engineering lab in Oregon that we've been building for a year and a half. We have a bunch of fancy microscopes, scanning electron microscopes. The whole purpose of this is to reverse engineer chips. You get the architecture out of it, you get the materials they're using to manufacture, and this is some of the data we sell. Analyzing that data is a very slow process. Instead, one person on the team, with a couple of thousand dollars of Claude tokens, has been able to create this application that is GPU accelerated, runs on a server that we have at CoreWeave, and anytime we send it an image, it will take the picture of the chip and overlay where every single material is. Oh, this part is copper. Oh, this part of the gate is tantalum. This part of the gate is germanium. This part of the gate is cobalt. So you can do a finite element analysis of the entire stack-up of the chip. Very, very quickly, visual, with a dashboard, a GUI, everything, for a few thousand dollars of Claude. The person previously worked at Intel, and he said that was an entire team's job to build and maintain. Rack that up across the entire firm. It's insane. Another example that I think is super fun is Malcolm. He was an economist at a major bank before. Their economics department was like 100 or 200 people. What he built was the most incredible thing ever. He piped in all of this different data, FRED data, employment reports, and all these other things from various APIs. He signed a couple of contracts with folks to get API access to data. He pulled it all in, started running regressions, started looking at the impact of various economic revolutions on the economy from a deflationary, inflationary perspective. The Bureau of Labor Statistics has this entire set of 2,000 tasks. He graded that with AI:
which ones can be done by AI, which ones cannot, grading them across a rubric. About three percent are doable now with AI. So he's created this metric so that you can measure what can be done by AI, what the cost of doing those things with AI is, and therefore the deflationary aspect of it. Phantom GDP is what he's called it. Output can go up, but because cost falls so much, GDP theoretically shrinks. So he created this whole analysis and a brand new benchmark for language models, a set of evals across 2,000 different tasks.

Speaker 1:
[07:19] He did this all by himself.

Speaker 3:
[07:20] This is all by himself, yeah. And he's like, dude, this would have taken a team of 200 economists a year. He's completely cracked out on Claude. He's like, everything has changed.

Speaker 1:
[07:28] How do you think about it as a business owner, going from close to zero to 25 percent, accelerating towards whatever percent of total spend? At what point are you like, whoa, I need to put the brakes on this and be careful how much we're spending? Maybe we don't need to spend on the most cutting-edge thing, on Opus 4.7, which came out today. Maybe I can throw it back to something that's a little bit cheaper.

Speaker 3:
[07:48] I think of it this way: I'm in the information business. We sell analysis, we do consulting, we create datasets. I don't see why this wouldn't be completely commoditized on a pretty rapid basis if I'm not constantly improving. My first product that I was selling as a dataset, there are more people trying to do it now. We've made it constantly better and better and more detailed, and so it still sells in the market. But the way we were doing it in 2023 is not terribly different from what everyone else is doing now. If I don't raise the bar, then I will be commoditized. If I don't move fast enough, I will also lose my edge. So the question is, yes, AI commoditizes things, just like it commoditizes software. Those who can move fast and keep control of their customers, and keep providing them an awesome service, and keep improving the service, won't shrink; they'll grow faster. Those who are incumbents and not doing anything are going to lose. So it's a bit existential. If I don't adopt AI, someone else will and they will beat me. Another easy example is the energy space. We've had a few energy analysts for like a year now. We've been trying to build out this energy model. It's very complex. Energy's data services market is something like $900 million, so it's obviously a huge market for me to try and break into. We've been slowly grinding at it and it's been helpful for our data services business, but we really hadn't broken into the energy data services business despite a year of having multiple people on the team. Then Claude Code psychosis hits one of the people who leads the data center, energy, and industrial business at SemiAnalysis, Jeremy. It hits him. Now all of a sudden, in three weeks, he spent a lot. He's spending like $6,000 a day.
It was an insane amount, but he scraped every single power plant in the US, every single transmission line above a certain voltage, and created this entire mapping of the entire US grid, as well as a lot of demand sources, all from various public sources of data. He's got this dashboard where you can view and check. You can see all the micro regions of the US where there are power deficits and surpluses. All of these details, built in a handful of weeks. We started showing some of our customers who buy our data center dataset but are energy traders. We showed some of them and they're like, wow, how long did this take you? This is really good. This is better than XYZ Company. And then we dig deeper: XYZ Company has 100 people and has been working on this for a decade. Obviously, our thing is not fully as robust, but in some ways it is better. I'm going to commoditize these energy data services companies. Who's going to come commoditize me if I don't move faster? And so the question from a business owner's perspective is, yeah, I'm spending a lot, but what is that spend getting me? Is it getting me more revenue?

Speaker 1:
[10:14] Are you worried that in the limit, the people that control capital and investing capital who are often hiring you for what you do will just say, well, we have analysts too who are really smart about this, like we'll just build this ourselves. If it's getting that easy, at what point does it just all pool into the investment firms that stand to gain the most because they have the most leverage on top of the data or the insights that they glean?

Speaker 3:
[10:36] First of all, in any information services business, obviously I don't generate as much value as my customer does from said information, because if I sell you information for a dollar, you're only buying it for a dollar because you know that information helps you make a decision that lets you make more than one dollar, and so therefore you have made more money off of me than I did from the information myself. These investment funds all have their own information services, especially the Jane Streets of the world and the Citadels. They're really detailed on their data, and yet these folks also purchase data from us and continue to do so and continue to grow with us, because I think there's just some it factor. We move faster, we're more nimble. We're a smaller team that's focused on just one specific thing: AI infrastructure and the huge revolution that's causing in AI, on tokenomics and all these things. We see where it's headed, and so we're moving faster and building faster. I think investment professionals, yes, they'll try and build some of the stuff we do, but more likely they'll just buy the data from us, because it's cheaper for them to buy the data from us and then build on top of it than it is to build it themselves.

Speaker 1:
[11:40] I feel like every conversation I have with you, what I'm always getting at is just supply and demand of tokens. That's the thing that's interesting to me in the world right now. What has this experience taught you about the demand? Has it changed your view on the demand side of that equation? Just feeling it viscerally yourself?

Speaker 3:
[11:55] If we take a step back and look at the macro lens, right? Anthropic has gone from $9 billion revenue to, what, $35, $40 billion? Now, probably by this time this year, it's $40, $45 billion ARR. Their compute has not grown to the same degree. And if you do the calculations, and you assume they didn't decrease their research and development compute (they clearly didn't; they have Mythos, they have Opus 4.7, so they clearly didn't decrease their research compute spend), then even if you assume all incremental compute they've gotten has gone towards inference, their margins are at a floor of 72 percent. In reality, some of that incremental compute probably went to research and development, so maybe it's higher than 72 percent gross margins. To be clear, at the start of the year, there was a leak from their fundraising docs, someone leaked it, of 30-something percent gross margins. How on earth does a business grow margins like that? In principle, their demand is so high, they're able to cut back on usage limits, rate limits, all these things. What really matters is having an Anthropic rep and having an enterprise contract with them and getting the rate limit increases that you need, because otherwise, tokens are ultimately super in demand, and whoever can pay for them gets them. Anthropic has the same problem. I mean, not problem, it's just the reality of how capitalism works. Yes, people are sending them $40 billion ARR in tokens, but those tokens are generating way more than $40 billion in value. Various businesses will have different value generation per token, but as models get more and more intelligent, what really matters is access to these most intelligent tokens and leveraging them. You as a person deciding what is the best way to leverage these tokens to grow a business and generate value.
Because a lot of folks will want tokens and generate tokens, but the shitty SaaS startup in SF who is using Claude to generate their software product is not necessarily actually creating a ton of value, and therefore they're going to get priced out of tokens soon enough.
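The margin-floor reasoning above can be sketched with illustrative numbers. The revenue figures and the roughly 30 percent starting margin come from the conversation; the compute growth factor is an assumption chosen for illustration, and at roughly 1.8x it reproduces the ~72 percent floor Dylan cites:

```python
# Sketch of the gross-margin-floor argument, with illustrative inputs.
rev_start = 9e9        # revenue at the start of the year ($), per the conversation
margin_start = 0.30    # ~30% gross margin from the leaked fundraising docs
rev_now = 40e9         # current ARR estimate ($)
compute_growth = 1.8   # ASSUMED growth in inference compute cost (not disclosed)

# Inference cost implied by the starting margin.
inference_cost_start = rev_start * (1 - margin_start)   # $6.3B

# Worst case for margins: assume ALL incremental compute went to inference.
inference_cost_now = inference_cost_start * compute_growth

margin_floor = 1 - inference_cost_now / rev_now
print(f"Implied gross-margin floor: {margin_floor:.0%}")
```

The point of the sketch is directional, not precise: when revenue grows roughly 4x while inference compute grows far less, gross margin has to expand sharply even with no pricing change.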

Speaker 1:
[13:46] As your business scales up, everything gets more complex, especially your compliance and security needs. With so many tools offering band-aids and patches, it's unfortunately far too easy for something to slip through the cracks. Fortunately, Vanta is a powerful tool designed to simplify and automate security work and deliver a single source of truth for compliance and risk. There's a reason that Ramp, Cursor, and Snowflake all use Vanta. It frees them to focus on building amazing, differentiated products, knowing that compliance and security are under control. Learn more at vanta.com/invest. I know firsthand how complex the tech stack is for asset management firms, and seemingly every new tool and data source makes the problem even worse, adding more complexity, more headcount, and more risk. Ridgeline offers a better way forward, one unified platform that automates away that complexity across portfolio accounting, reconciliation, reporting, trading, compliance, and more, all at scale. Ridgeline is revolutionizing investment management, helping ambitious firms scale faster, operate smarter, and stay ahead of the curve. See what Ridgeline can unlock for your firm. Schedule a demo at ridgelineapps.com. I had this experience just today where, on the flight here, I got rate-limited out of something. I saw 4.7 came out, and what I immediately wanted was to be on 4.7 that second. I couldn't think about using 4.6 anymore, now that 4.7 existed. I was perfectly happy with 4.6 for the last many weeks. It's amazing. Are you surprised that people are so insistent on going to the most expensive leading-edge thing to the degree they are?

Speaker 3:
[15:14] Without a doubt. I think one of my funniest memories of the past month and a half is myself and a buddy of mine, Leopold, being on our knees in front of an Anthropic co-founder begging him for access to Mythos, and him pretending it doesn't exist even though we knew it existed. We're like, please give us access. He's like, I don't know what you're talking about.

Speaker 1:
[15:36] What was your reaction to that rate card or that eval card coming out?

Speaker 3:
[15:40] It was rumored in the Bay Area. We knew it was supposed to be really good, but if you just look at the benchmarks, and obviously benchmarks change over time, Mythos is potentially the biggest step up in model capabilities in two years. I think it's a really, really important detail that it's so good that they don't want to release it, even though they already announced the price to the people they did a selective release for, for cyber, and it's 5 or 10x the token cost. They just don't want to release it because they're worried about the impact on the world, and they're releasing a worse version, Opus 4.7, to us. They explicitly said in the model card, hey, we actually preferentially made it worse at cyber. I don't know if you read that. Whoever you are, if you have enough capital, you should get a freaking Enterprise Anthropic subscription where you pay per token, not with these subscriptions, because then you won't get rate-limited much. Then you need to figure out how to leverage those tokens on the highest value tasks and make money off of it. Because ultimately, what you're doing maybe a year or two from now, the business is actually just arbitraging tokens. The tokens are amazing, but let's figure out what direction to point them in. Then three or four years from now, the model will know what to do with the tokens and how to make the most value. You need to look at this retroactively: pick any benchmark, and the cost to hit a certain capability tier used to be X, and now it costs 1/100th or 1/1,000th of that. DeepSeek, for example, was 1/600th the cost for GPT-4 class capability. Since then, the costs have fallen further for GPT-4 class models. Of course, no one gives a crap about GPT-4 class models. They want the frontier, because the frontier lets them create the economically valuable things. But GPT-4 class models can still be used in stuff, and so people are using them in some tiny use cases. It's just that the costs have fallen so fast.
That's not really what's driving the demand. What's driving the demand is all these new use cases. For current Opus 4.6 or 4.7 tier models a year from now, my spend for the same exact quality of model would probably be like $70K. I bet you it'll be 100 times cheaper. Irrelevant, because I'm going to be using a way, way, way better model which can do way, way better things. Anthropic Mythos is more expensive as a model, but it spends a lot fewer tokens to do the thing, and therefore it is actually cheaper on most tasks than Opus 4.6, because it's just way more efficient, even though each individual token is smarter. There are crazy geniuses creating huge cost efficiency improvements every day. They work at the labs and they're making the models way more efficient. You see it every generation. GPT-5 nano, or whatever it was, was better than GPT-4, or 5 mini was better than GPT-4, at like 1/100th the cost. This just happens and we accept it at face value. Ultimately, you keep making things cheaper, and then you keep scaling them up, and you keep getting humongous improvements.
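The cost-collapse claim maps onto a simple halving model. The halving period below is an assumed illustrative parameter, not a measured figure from the conversation:

```python
# If the cost to reach a fixed capability tier halves every `halving_months`,
# the cumulative cost reduction after `months` is 2 ** (months / halving_months).
def cost_reduction(months: float, halving_months: float) -> float:
    return 2 ** (months / halving_months)

# With an assumed four-month halving period, two years of progress gives:
print(f"{cost_reduction(24, 4):.0f}x cheaper")  # 64x
```

Working backwards, a 100x to 1,000x collapse over two years implies the cost halves roughly every 2.4 to 3.6 months.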

Speaker 1:
[18:23] When I last saw you, Mythos had just come out, maybe the day before, or the card had just come out, and you said something like it actually made you feel a little scared, it was so good. What did you mean by that?

Speaker 3:
[18:34] Anthropic's whole goal in 2025, and even a lot of 2024, was, hey, by the end of 2025, we need an L4 software engineer in our model. They by and large achieved that with Opus 4.6. What they didn't say is that, if you look at Mythos and compare benchmarks, it's like an L6 engineer. L4 is pretty new. L6 is quite well-experienced. I think Anthropic said the model was available internally in February. So in two months, they've gone from an L4 engineer to an L6 engineer. What's next? When you think about model progress, it's only accelerated. Anthropic's release cadence has compressed, OpenAI's release cadence has compressed. Why? Because generally, to make a better model, you need a few things. You need amazing compute. Compute is very expensive, and it has a timescale that we track. It's growing, but it's set in stone for the short term; it's set in stone what you've already signed. There will be delays and shifts, and somehow you can find a little more, but it's generally pretty set in stone. There are amazing researchers that people are paying tens of millions of dollars for. Then lastly, there's implementation. Implementation has historically been very difficult. If I have an idea, I have to implement it, and implementing is hard. Now the ideas are there and implementation is very easy. It's expensive, but it's very easy. How does one decide which ideas to implement? It turns out, if implementation is so much easier, you can just implement more ideas and move on the treadmill faster and faster and faster, whether that is AI model research, so now your model release cadence has shrunk to two months from six months before, or, I want to take every power plant in the US and every transmission line and model it and run regressions and see the micro supply and demand; I can also do that. The ideas are cheap. Which idea makes sense?
Which idea is worth the capital you have to spend on the tokens, because the implementation is there? That's the key learning, and if implementation costs continue to tank, which they are, and we don't even have Mythos yet. It's only been a handful of hours since Opus 4.7 launched, but my team is pretty excited about it internally. What now comes to the world is a complete reordering of how economies work. What used to matter was that execution was very, very fucking difficult while ideas were cheap. Now, ideas are still cheap and plentiful, but execution is very easy too. So really only the good ideas are the ones that can justify the spend on super cheap implementation.

Speaker 1:
[20:55] So are you actually scared or does it just introduce an uncertainty that's hard to grapple with?

Speaker 3:
[21:00] The uncertainty is there, but I do think it causes some fear in terms of how society reforms itself. How does one exist in a world where your ability to implement something is not actually that important? Your ability to choose the correct idea for AI to implement, and then your ability to sell that idea, or sell what AI has implemented, is what matters. Your ability to garner capital towards that is what matters. Going back to the point that it's very important to always have the newest model: who's going to have access to the newest model? Anthropic's project, I know it's not called Earwig, but I troll Anthropic people by calling it Earwig and Glasswig. Anthropic Earwig, where they only release Mythos to certain companies for cyber, that's just going to be something that continues. Models will have less and less broad deployment. I know OpenAI and Anthropic and all these people are like, we want to have great AI for everyone. AI is very fucking expensive. Who's going to pay for the trillion dollars of infrastructure? People who have money and can build useful things with AI. Then you don't want people to distill your model, so you don't release models broadly; you release them to a smaller and smaller set of customers. Those customers are also now wrestling over the tokens, unless Anthropic jacks up prices. They could double their pricing on Opus and I would continue to pay, and I bet most users would continue to pay. And I bet that wouldn't solve the humongous capacity problem that they have. So then the question becomes, where does this cycle end, where token usage, and therefore the benefits of those tokens, the additional value generated on top of those tokens, aggregates among fewer and fewer and fewer companies? I don't have Mythos. You know who has Mythos? Top freaking banks.
Now, they're only using it for cybersecurity, but at some point I can envision a world where, hey, maybe because I have an enterprise Anthropic contract and because Anthropic people like me, they're willing to give us slightly earlier access or slightly higher rate limits for a model. I hope that's what happens. Then my competitor, whoever that is, doesn't have that and I'm able to fucking crush them. There are people like Ken Griffin of Citadel, super well connected and super rich. Say he goes and signs a deal with OpenAI or Anthropic that's like, yeah, I'm going to get access to your models and I'll buy the first $10 billion worth of tokens each year. So whenever you release a model, I'll spend the first $10 billion on tokens, and then everyone else can get the model after that. Okay, well, now what does that do? Now he's going to crush everyone in the markets. That's just an example. It could be any number of things. It could be cyber, like Anthropic is worried about, now I can hack people. It could be an information services business like mine, where I crush someone else. It's such a broad base. We don't know what these models can do. Anthropic doesn't know what these models can do. No one knows what these models can do. It's up to the end user to figure out where they can leverage the tokens to see what they can build and imagine, which is tremendously productive and uplifting for humanity. But then what happens to the concentration of resources and usage?

Speaker 1:
[23:46] Presumably, right now, robots consume essentially zero tokens versus everything else. What's your view of that? Is that a second demand curve that could start to ratchet? There's a new startup every single day within a mile of here trying to build something interesting in robotics.

Speaker 3:
[24:02] So there's this concept of software-only singularity, which is that the world has AI singularity but only in software, and now what about the rest of the world? The vast majority of the world is physical. You can see the world orient around hardware, not software. That's actually why I think software-only singularity is like just a blip, and not like we do get everything else. Because once software is super easy, what makes robots really hard? It's programming microcontrollers and actuators, and controlling all this stuff is very difficult. Right now, the interesting thing about AI models is, they're actually really inefficient in learning. It's just we're able to give them so much data that they're able to learn and pass us in certain ways. Currently, the robot models, VLAs, Vision Language Action Models, which is very popular right now, is probably not going to be the thing that ultimately scales beyond. They're inefficient in data, and we can't scale the data for them fast enough. There is going to be some way to large scale pre-trained robot models, where just like humans see all this data throughout their lives. What's interesting is humans, the reason why we're so good is we're sample efficient. One example, two example, we're good. So applying that to robotics. Once you have this software-only singularity, implementation is super cheap. Anyone can start to build these models that now robots are actually useful. And so I think in the next six to 18 months, we'll start seeing real breakthroughs in robotics that enable few-shot learning, i.e. there's a pre-trained robot model, and now there's a robot that you have hired or bought or whatever. You showed a few examples and it's able to do it. Right now, there's a lot of companies doing robots for advertisement or robots for simple stuff like that, and it'll be like, folding clothes, sure, sure, sure, no, but it's going to get really niche. 
Robots just for cleaning chalkboards, offered as a rental service, or as a model package that you download onto your standard robot that then does that. Anyway, there will be a huge acceleration in physical goods and deflationary effects there, but that's ultimately going to keep token demand going crazy. I don't think token demand slows down, personally.

Speaker 1:
[26:03] Did you learn anything else about the world based on Mythos' results and how it was built? This is my way of asking you to break down the components of the scaling laws.

Speaker 3:
[26:12] Mythos is a materially larger model than prior models. So yes, it is a much larger model. What chip it's trained on is not really relevant; it's the scale. Obviously, 100,000 Blackwells is equivalent to hundreds of thousands of prior-generation chips. TPUs and Trainium have their different release cadences, so it's not exactly mirrored one-to-one. But ultimately, yes, Mythos is a significantly larger model. It's proof that the scaling laws still work. Everything about it shows the trend line continues: more compute into the model makes the model better. But it's not just more compute making the model better. Along the whole way, we're also getting these compute efficiency wins, where all this research compute the labs are spending is actually turning into: if I want an X-capability-tier model, every six months — or even every two months — that cost is dramatically decreasing. But then if I scale it up massively, I get a humongous capability jump as well. And so, yes, it's proof that this is still happening. Google and Anthropic are not heavy users of GPUs on the training side, but OpenAI will start having their new class of models. I think they're taking a more sensible, principled approach of scaling in small steps. Anthropic really went for a huge jump. We'll see better and better models throughout the year, and the release cadence is only going to get faster.

Speaker 1:
[27:25] We've gone a long way into the conversation saying almost nothing about OpenAI, which would once have been so strange.

Speaker 3:
[27:29] So this is the interesting thing. Everyone's like, okay, so Anthropic has just won, right? They had Mythos in February. They never released it because they didn't feel the need to; they're already sold out. Their revenue is already adding $10 billion a month. Then you've got Opus 4.7 today, all before OpenAI's alleged spud release, which media such as The Information and others have posted about. So clearly, Anthropic is in the lead and OpenAI is cooked. What's interesting is that Anthropic is so constrained on compute, and they can only grow it so fast. To that point, Dario used to gloat about how OpenAI was being too aggressive on compute and Anthropic was more sensible in their scaling, and now Anthropic is like, fuck, I wish we had a lot more compute. OpenAI is able to pay the bills perfectly fine. In fact, they've raised a ton of money to get incremental compute in addition to the irresponsible levels of compute they were already buying from Oracle, CoreWeave, SoftBank, and Microsoft. Now they're getting Trainium as well from Amazon. They've done this insane thing on compute, and they also know they need more. But what's interesting is this: take Opus 4.6, ignore models getting better over time, and just consider the diffusion of this technology. You and I may jump on the model immediately, day one, but businesses take time, it takes time for people to learn, and the spark — the oh-shit, Claude-psychosis moment — doesn't hit everyone at the same time. By the end of the year, I don't think it's unreasonable that the economy would spend $100 billion on a 4.6 Opus-tier model. It's spending $40 billion right now.

Speaker 1:
[28:57] That's like a linear extrapolation.

Speaker 3:
[28:59] It's a linear extrapolation, not an exponential. To get the exponential, you need the better models. Anthropic won't have enough compute to do that, and presumably OpenAI and Google will hit that tier soon enough. Sure, Anthropic may get to charge 70-plus percent gross margins, but if OpenAI hits that tier next and charges 50 percent gross margins, they still get all of this incremental demand — and probably they also won't have enough compute to serve all the users. Maybe Mythos is a model where, if the world had enough compute, it'd be $500 billion of revenue or something crazy. There is such demand for these tokens and such limitations on compute. We see this with H100 prices skyrocketing and all these other things. The useful life of these GPUs continues to extend. It's pretty clear even the tier 2 lab is going to be sold out of tokens, let alone the tier 1 lab. The tier 1 lab will have better margins, but the tier 2 lab will be sold out, and probably the tier 3 lab will also be close to sold out. The economic value that the best model can deliver is growing faster than our ability to actually serve those tokens to people via the infrastructure. This gap will continue to grow, and the model labs will continue to have expanding margins — until people in the hardware and infrastructure supply chains are like, no, why don't I just jack up my margins?

Speaker 1:
[30:10] It's safe to say your assessment of the demand side is that it's completely explosive — in your own particular example here at SemiAnalysis, but also more broadly in what you call AI psychosis: people fall into this experience of what they can do, with the implementation difficulty going completely away. I've certainly felt that. My own token spend is through the absolute roof in just a matter of weeks. So that feels like a pretty good assessment. Anything we're missing on the demand side?

Speaker 3:
[30:34] If you don't use more tokens, you'll never escape the permanent underclass. You use more tokens, and you generate outsize economic value from the use of those tokens. A lot of people are doing it the boring, lazy way: oh, I guess I'll just work one hour a day instead of eight and have AI do most of my job. That's the boring way. The cool way is: I'll still work eight hours a day, I'll do 8x the work, and maybe I'll make 5x the money. You can do this with a job, obviously. There are people who have multiple jobs, people who start companies and start selling stuff, people who are hustling — which is what I view you and I as doing; we're mostly hustling. Get that economic value out of this AI before everyone is using it and it's table stakes, because it's still not table stakes. So there are three different problems here: using more tokens, generating value from those tokens, and capturing the value you created from the tokens. If you don't do these three things, you'll never escape the permanent underclass — i.e., as models continue to skyrocket in capability and the concentration of resources potentially happens.

Speaker 1:
[31:34] Let's talk about supply. What is changing at the frontier of supplying the entire stack that's required to serve all these tokens as the demand curve explodes?

Speaker 3:
[31:43] As demand skyrockets, prices are going up for everything on the supply side, whether it be the end GPUs, their prices are going up. In addition, their useful life is extending.

Speaker 1:
[31:52] H100 prices look like this.

Speaker 3:
[31:54] Yeah, exactly. There are people who have argued GPUs' useful lives are less than five years — complete nonsense. There are clusters re-signing now: three or four-year-old Hopper clusters re-signing for three or four more years, A100 clusters re-signing for another couple of years. So the useful life is clearly not five years. It's maybe even seven or eight years, arguably. We don't know yet — we'll see when Hopper gets there — but it's clearly not five years. So useful life is extending, and prices are going up on renewal. In effect, the gross margin on a cluster was not 35 percent; it's beyond that. So margins are expanding in the cloud layer. Margins are extremely healthy at the hardware layer, with NVIDIA still charging 75 or whatever percent gross margin. As we move down the stack to memory, obviously margins have skyrocketed there. In places like optics and logic, there are large prepayments and margins are growing slowly. More so, the companies making chips, like NVIDIA, are paying huge prepayments. So in effect, the cost of capital — or the timing of cash flows, or the return on invested capital — is going up even if the gross margin isn't. You see this across the whole supply chain. ASML is completely sold out, and they need Carl Zeiss to expand faster. Everyone is either sold out with margins going up or getting prepayments, which increases the return on invested capital because the invested capital is lower. This is a consistent trend across every part of the chain. Even making a PCB requires copper foil; that copper foil is sold out and people are making prepayments for it. Anything and everything that has a pulse is sold out, and people are jumping to get more incremental supply and fighting over the supply for the years after.

Speaker 1:
[33:26] What do you think are the most important bottlenecks? Typically in economic history, when there's this kind of demand, supply reorients and rises very quickly to meet it. It seems like it's almost impossible for supply to keep up right now — famous last words; historically, every shortage is followed by a glut. But what are the most interesting bottlenecks to you across the supply side?

Speaker 3:
[33:50] Supply chains are usually very fast to react. One unique thing is that our supply chains are now more complex than ever, the things we're building are more complex than ever, and therefore the lead times are longer. It's not like we haven't seen 18-month lead times in other industries; it's just that building incremental supply didn't used to take years. That's the case with memory. Memory can only grow capacity by low double-digit percentages a year — 20-30 percent, a bit less for NAND, a little higher for DRAM, but whatever. Even though the demand signal was very strong at the end of 2025 and the memory companies immediately started reacting, none of the incremental capacity they've decided to add on top of the typical 20-30 percent really gets here for years. They can stretch a little bit, but even building as fast as possible, the true incremental supply doesn't come until 2028 — late 2027 at best — which is a very unique thing. So the result is memory prices have gone through the roof, and guess what? They're going to double and triple again, at least on DRAM especially. People are like, oh, the memory story is overplayed, everyone gets it. It's like, no, no, no, you don't get it. DRAM will still double or triple from here, because that's how much capacity is required, and they have to steal capacity from somewhere else. And the only way to steal capacity from somewhere else in a capitalist economy is demand destruction via higher pricing. We're not rationing stuff here. So ultimately, that's what's going to happen, and margins continue to go up. I think logic also has humongous capacity problems. TSMC just had their earnings, and they keep upping capex. Ultimately, it takes them quite some time to build fabs. They're trying to do everything they can to squeeze every little bit of output out of every fab they have.
But ultimately, they're not raising prices fast — because they're good people, it seems. Single-digit price increases instead of the triple-digit price increases the memory guys have had. So you ultimately have this market where, yeah, TSMC is a great company, but are they actually going to extract all the value? I mentioned things like copper foil, glass fibers for PCBs, lasers. These are well-understood, niche supply chains, but they're very, very tight. Further upstream, the semiconductor wafer fabrication equipment supply chain is one that's gone up a lot, but it's still very underappreciated. TSMC capex this year, they say $56 billion. We've had $57.4 billion since January, and we may up it slightly more because we see some ways they can get incremental capex. But what people aren't focusing on is: what does that mean next year? What does that mean the year after? It turns out that three years from now — maybe two years from now, it might be 2028 — TSMC is going to spend $100 billion on capex. Seriously, they may spend $100 billion on capex in 2028, and people just can't fathom that. But what does that mean for their downstream supply chains — companies like Lam Research or Applied Materials or ASML — or their further downstream supply chains, like MKS Instruments and all these other companies? The tail of the whip just gets whipped harder and harder and harder, and there's a shortage if TSMC wants to spend $100 billion in 2028, which is a real possibility. I think people would say that's insane, but it's a real possibility.

Speaker 1:
[36:45] What about other parts of the chip ecosystem where GPUs have been completely dominant? What about CPUs or ASICs or things that start to pop out as both opportunities and bottlenecks beyond just NVIDIA's GPU dominance?

Speaker 3:
[36:58] ASICs are obviously taking off, but I'll pivot away from AI chips to talk about these other things. There's a project we did on FPGAs, and it turns out there are 120 FPGAs per next-generation AI rack. And then what about all the FPGA names? Then there are CPUs: all these reinforcement learning environments, plus all the slop code you and I are generating that is now running on some Vercel instance, or some AWS instance, or some bucket we've spun up — all of that requires CPUs. So CPUs are completely sold out, and demand is skyrocketing there.

Speaker 1:
[37:29] Help people understand the role that CPUs play in all of this.

Speaker 3:
[37:31] There are two main reasons why you need tons of CPUs. One is reinforcement learning — the CPU is very critical to that. Before, you would throw all the Internet's data into the model, train it, and it spits some stuff out. Now, you put all the Internet's data into the model, and then you put it in this environment. The environment is like, hey, model, try this out — and it tries a bunch of different things. At the end, the environment scores whether or not what it tried was successful, and it grades it. These environments can be anything. It can be, hey, check if the text was output in the right way — structured outputs. It can be very simple stuff or very complex stuff, and people are starting to get into very complex things: hey, I want you to open this file, change it, edit it, update it, submit it to this website; I want you to open up this physics simulation from Siemens and edit this CAD model. So the environments get more and more complex, and those environments run on CPUs. They don't run on GPUs, they don't run on ASICs. The ASICs run the model: they take the input data from the environment, run it through the model, and the model creates outputs — various different trajectories, ways it thinks it could solve the problem in different instances. Those trajectories are graded or scored, and the ones that are successful, you train on, you update, and you iterate, iterate, iterate. So CPUs are very useful for that. Then, once you have these great models and you're deploying them, those models are generating code, generating useful output. That useful output doesn't go from a GPU straight to the human brain; it goes from a GPU or an ASIC through to a deployed app running somewhere that actually just runs on CPUs. So that's another area where there's a lot of demand, and things are sold out in a large, large way.
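
The sample-grade-update loop described above can be sketched in a few lines. This is a hypothetical, minimal illustration — every name here (`environment_score`, `model_sample`, `rl_step`) is invented for the sketch, and the "environment" is a toy structured-outputs check rather than any real lab's grader:

```python
import random

def environment_score(output: str) -> float:
    """CPU-side grader: a toy 'structured outputs' check —
    did the model wrap its answer in the required tags?"""
    ok = output.startswith("<answer>") and output.endswith("</answer>")
    return 1.0 if ok else 0.0

def model_sample(prompt: str) -> str:
    """Stand-in for the GPU/ASIC side: emit one candidate trajectory.
    Sometimes it 'forgets' the required structure, so grading matters."""
    body = f"{prompt}:{random.randint(0, 9)}"
    return f"<answer>{body}</answer>" if random.random() < 0.7 else body

def rl_step(prompt: str, n_trajectories: int = 16) -> list[str]:
    """Sample many trajectories, grade each one in the environment,
    and keep only the successful ones to train on."""
    trajectories = [model_sample(prompt) for _ in range(n_trajectories)]
    graded = [(environment_score(t), t) for t in trajectories]
    return [t for score, t in graded if score > 0.0]

winners = rl_step("2+2")
# Every kept trajectory passed the environment's check.
assert all(t.startswith("<answer>") for t in winners)
```

The point of the sketch is the division of labor: `model_sample` is the GPU/ASIC work, while `environment_score` — which in practice might mean running a test suite, a browser, or a CAD simulation — is pure CPU work that has to run once per trajectory, which is where the CPU demand comes from.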

Speaker 1:
[39:12] To assess, and to try to be the world's best-informed person on, both the trajectory of supply and demand — what are the things you wish you knew that you don't?

Speaker 3:
[40:35] I think the hardest area for us and for everyone is understanding tokenomics — the economics of tokens. I think we have tremendously good insight into how much it costs to run infrastructure, what the cost of tokens is, what the cost of models is, what the margins of these labs are. But the usage and adoption is what's really difficult to model. Continuously, right? In January, we had crazy estimates for February; Anthropic smashed them. How do we calibrate this model? What are the data sources for this? In February, we had crazy assumptions for March. I know people were like, you're crazy, Dylan — and then they smashed them. Everyone sees the number and they're like, how do they add $10 billion of revenue? Who is using all these tokens? Why are they using them? What are they building with them? Then, more importantly, how is what they're building with these tokens actually diffusing into the economy? What value is that generating? Because it's not really something you can capture in any GDP statistic. All the value of the tokens that I use gets transformed into better information, which I then sell at a discount relative to what people used to sell information for. That information is now making its way through the economy, and people are making better investment decisions or better competitive decisions — whether they're a semiconductor company or a data center company or a hyperscaler. What is the value of this? What has that done to the economy? By every subjective metric, it's clearly amazing. But where is the phantom GDP? What is the phantom GDP? How do we track the real economic value? Because the GDP metrics are not accurate. If you were to ask what GDP Dylan Patel is generating, it's tiny compared to the value that I think is being created. I think you would say the same for Patrick. What is the value being created by these tokens?
Not on a simple basis — what is the knock-on effect of all the things these tokens are doing? I think that's the real question and challenge. It's hard to measure. I think we've got a tremendous read on the supply side, and a tremendous read on even a lot of the demand-side signals, but what value are these tokens generating? That's hard to quantify and measure.

Speaker 1:
[42:33] I hope we get a chance to do this every three months because this changes so quickly. What do you think is going to happen next? When I come back three months from now, and we're in San Francisco together again, what do you expect?

Speaker 3:
[42:42] Large-scale protests. Really? Yeah. I think there will be large-scale protests against Anthropic and OpenAI. People hate AI. AI is less popular than ICE, less popular than politicians. With Anthropic adding so much revenue, that's going to start causing business changes downstream. People are going to get more and more scared of AI. They'll start blaming more and more of their own problems — global, deep-seated problems that have existed for a long time — on AI. Those will bubble up and be blamed on AI. Probably some politician or some influencer will start weaponizing sentiment against AI. Look at the comments on news articles about Sam Altman having a Molotov cocktail thrown at his house twice in two weeks — people are cheering it on. This is just the beginning. I think we'll see large-scale protests against AI in three months.

Speaker 1:
[43:30] What is the counterweight to that? How should the AI industry head that off?

Speaker 3:
[43:35] First of all, Sam Altman and Dario have to stop giving interviews. They're so uncharismatic. I don't know what they're doing. Every interview they do is like, wow, normal people are going to hate you even more. Sam being on Tucker Carlson probably made all Republicans hate OpenAI — I'm just guessing. Same with Dario. I think that's first. Two, they need to start showing uplifting things that can be done with AI. Three, they need to stop constantly talking about how the capabilities are going to change the whole world, because then people develop a fear of that capability, because they have no connection.

Speaker 1:
[44:05] They don't know how to use it, yeah.

Speaker 3:
[44:06] There's no connection to it either. The average person doesn't know an Anthropic employee. The average person doesn't know an OpenAI employee. The average person doesn't know who these people are or what their goals are; they just view them as a sneaky cabal of 5,000 people at this company who are going to change the world, automate all the jobs, and destroy society. That's how they view it. And as for the people funding the building of all these data centers and power plants that they see as polluting the world — people don't quite understand what's happening. So they have to stop talking about the future thing that's going to happen, and talk only about how uplifting AI is in the present. I think it's a huge reorg and rebranding that needs to be done.

Speaker 1:
[44:40] This is so much fun. I love doing this with you. Thanks for your time.

Speaker 3:
[44:42] Awesome, Biggs.

Speaker 1:
[44:44] If you enjoyed this episode, visit colossus.com. You'll find every episode of this podcast complete with hand-edited transcripts. You can also subscribe to Colossus, our quarterly print, digital, and private audio publication, featuring in-depth profiles of the founders, investors, and companies that we admire most. Learn more at colossus.com/subscribe. Your finance team isn't losing money on big mistakes. It's leaking through a thousand tiny decisions nobody's watching. Ramp puts guardrails on spending before it happens. Real-time limits, automatic rules, zero firefighting. Try it at ramp.com/invest. As your business grows, Vanta scales with you, automating compliance and giving you a single source of truth for security and risk. Learn more at vanta.com/invest. Ridgeline is redefining asset management technology as a true partner, not just a software vendor. They've helped firms 5X in scale, enabling faster growth, smarter operations, and a competitive edge. Visit ridgelineapps.com to see what they can unlock for your firm. Every investment firm is unique, and generic AI doesn't understand your process. Rogo does. It's an AI platform built specifically for Wall Street, connected to your data, understanding your process, and producing real outputs. Check them out at rogo.ai/invest. The best AI and software companies from OpenAI to Cursor to Perplexity use WorkOS to become enterprise-ready overnight, not in months. Visit workos.com to skip the unglamorous infrastructure work and focus on your product.