title SpaceX and Cursor team up to topple Claude Code | E2279

description This Week In Startups is made possible by:
LinkedIn Jobs - linkedIn.com/twist
Grasshopper Bank - grasshopper.bank/twist
Notion - notion.com
Today’s show:
This week, SpaceX announced that it was partnering with AI coding startup Cursor on new AI models. xAI's parent company is bringing compute, while Cursor brings developer market share and recent success training its own coding models.
The deal, while interesting, comes with a steep price tag. SpaceX will pay Cursor $10 billion for their shared work, with an option to buy the entire company for $60 billion later this year.
After poking our way through the deal, Chris Zacharia and Brian McRindle of Bitstarter joined Lon and Alex. Bitstarter is a ‘Kickstarter for Bittensor,’ helping founders get their subnet up and running without tripping over their shoelaces. The Bitstarter crew also broke some news on the show, telling TWIST that they have a new accelerator-ish program kicking off.
Next, Ning Ren from Trajectory RL joined the program to explain how Bittensor subnet 11 is using decentralized competitions to design and release better skills. Skills — markdown files with words — have become a critical building tool in the agentic era; how Trajectory will monetize is an open, interesting question.
Timestamps:
2:27 Plaud: If your work depends on conversations — interviews, meetings, calls — you need a Plaud NotePin. You can check it out at https://Plaud.ai/twist and use code TWIST for 10% off!
4:07 SpaceX/xAI "partners" with Cursor!
9:35 Will the Cursor deal help pump a future SpaceX IPO?
9:57 LinkedIn Jobs - Hire right, the first time. Post your first job and get $100 off towards your job post at https://LinkedIn.com/twist.
12:14 How AI coding tools like Cursor help xAI grow recursively.
17:24 Chris Zacharia and Brian McRindle of Bitstarter join the show.
20:23 Grasshopper Bank: Time is money. Don't waste either. Go to https://grasshopper.bank/twist and get an exclusive $500 cash bonus just for opening an account.
29:59 Notion - Notion brings all your notes, docs, and projects into one connected space that just works with AI built right in. Try Notion, with Notion Agent, at https://www.notion.com/twist
33:03 How Bittensor subnets monetize and how it compares to VC funds.
37:04 Is Bittensor hard-capped at 128 subnets?
42:37 Bittensor's biggest weakness.
46:10 Ning Ren of TrajectoryRL joins the show.
47:34 Skills now need entire agents just to write them!
48:26 Back up… What are skills?
1:07:38 Amazon and Anthropic's 5 BILLION deal
1:08:48 Google has 2 new chips!
1:09:50 Apple CEO, Tim is COOKED! John Ternus is in!
1:11:37 Alex is bullish on MacBook Neo!
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon:
X: https://x.com/lons
Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis

Check out all our partner offers: https://partners.launch.co/
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason’s suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com

pubDate Wed, 22 Apr 2026 23:07:37 GMT

author Jason Calacanis

duration 4409000

transcript

Speaker 1:
[00:00] If you like the AI coding tools you have today, you're going to like them a whole lot more down the road.

Speaker 2:
[00:05] As well as being hypercompetitive, Bittensor is also extremely cooperative.

Speaker 1:
[00:09] How much money do you need to raise to put together a compelling subnet pitch pre-launch?

Speaker 2:
[00:16] It's less about the amount as a fixed total, it's more about validation. You might be an amazing ML engineer, you might be an incredible full-stack dev, but Bittensor is adversarial and the miners are like very, very intense. They're going to tear you apart. You could get wiped out and lose your initial capital.

Speaker 1:
[00:35] This race is going to yield a lot of steel on steel sharpening, as we say.

Speaker 2:
[00:40] We actually have some news that we want to break right here on This Week in Startups with you guys.

Speaker 1:
[00:44] All right. This Week in Startups is brought to you by Notion. Bring all your notes, docs and projects into one space that just works with AI built right in. Try Notion with Notion Agent at notion.com/twist. Grasshopper Bank. Time is money. Don't waste either. Go to grasshopper.bank slash TWiST and get an exclusive $500 cash bonus just for opening an account. And LinkedIn jobs. Hire right the first time. Post your first job and get $100 off towards your job post at linkedin.com/twist. Hello and welcome back to TWiST. My name is Alex and I'm joined today by my dear friend Lon Harris. Lon, how are you?

Speaker 3:
[01:29] Doing great. Happy to be here.

Speaker 1:
[01:30] All right. April 22nd, 2026 or as we say here at TWiST, AO86. That's how many days it's been.

Speaker 3:
That's what we say when we remember to say it. We're almost at the exact three-month Open Claw point, and I feel like Open Claw mania is dying down. That's how I feel.

Speaker 1:
Dying down a little bit. Hermes Agent is doing quite well. People are talking about co-work. But I will say, and we're going to get to this at the end of the show, there are some really awesome open-weight models that have come out that are incredibly price- and intelligence-efficient, Lon. So people may want to take a second look at Open Claw. But on the show today, we're talking SpaceX and Cursor, the biggest deal in the news in the last six months, I want to say. Then we have a couple of folks from the realm of Bittensor. We have the folks from Bitstarter, and then we're going to talk to the people behind Subnet 11, that's Ning Ren. It's going to be an absolute bop. But Lon, break down for us the headlines here of the big xAI/SpaceX news.

Speaker 3:
Well, I think first we should give a shout out to our good friends at Plaud. I don't have my Plaud pin. It's the first time I've made it on the show without it. I feel naked. My Plaud pin is over there on the desk, and I reckon I could interrupt the show to go right over and get it. But, Plaud folks, incredible technology. We all have a NotePin. I have the Note S pin, and what's so amazing about it is that it works in the background. You just hit the button, it starts recording, it puts the little light on so everybody around you can see you're recording. It's not spy camera technology. And it not only records notes from you while you're going about your day, bits of the conversation, whoever you're talking to, but it sort of organizes them thoughtfully. It's got that AI-powered brain, so it's not just transcribing everything you hear in a big block of text. It's giving you the context, everything you need to go back, search through what was being said, find the nugget of information that you need. It's really like having a second brain that you can store things in if you are a little forgetful like myself.

Speaker 1:
[03:27] Yeah, it's critical for me to not forget things. Also, mine is currently charging because I use it all the time. I forgot to take it off the charger and put it back on for the show.

Speaker 3:
[03:35] That's the message you should take from this is we are both using our Plaud pins so much that we are having trouble getting them together for the show because they are in use currently, folks.

Speaker 1:
And they actually have a really great battery life, so I think this is more just you and I being disorganized. But if you want to get your own Plaud NotePin, you can go to plaud.ai/twist. Use the code TWIST to save 10 percent. Stop forgetting things. Take excellent notes. Put AI to work for you. Plaud. We love them. Thanks, guys, for sponsoring the show.

Speaker 3:
[04:03] As Jay Cal says, we applaud Plaud.

Speaker 1:
[04:05] Back to the news.

Speaker 3:
SpaceX, Cursor. Yes. So the big news yesterday, everybody freaked out in the afternoon. It came out right after we recorded another show, so we'll talk about it now. SpaceX and Cursor are partnering on AI models. Cursor, of course, is the popular AI coding model and harness company. They're going to work together to create, and I quote, "the world's best coding and knowledge work AI" as a team, or as Cursor put it, "we're partnering with SpaceX to improve Composer." So the idea is that it's sort of a collaboration, but it's also sort of an early announcement of a potential acquisition. SpaceX is going to either pay Cursor $10 billion for this model collaboration they're working on, or, at the end of designing this model, just buy Cursor out for $60 billion at some point by the end of 2026. So it's interesting: the original announcements were all "SpaceX buying Cursor," but maybe they're sort of trial-running it for the next few months.

Speaker 1:
So why does this deal make sense from a headline perspective? It's pretty simple. Cursor has done a very good job competing with Codex from OpenAI and also Claude Code from Anthropic. As those two coding products, Lon, have become really the de facto standard of the industry, Cursor has continued to grow. It reached, I think, $2 billion in annualized run rate as of earlier this year, a very impressive number. And, I would say most critically, they released Composer 2, which is the latest model they built for themselves. It was announced a couple of weeks back, and it does seem to perform quite well against industry-standard benchmarks, i.e. it's competitive with the models from the best companies. All right. So why does that matter if you're xAI, which is now part of SpaceX? Well, xAI had a really big hit coding model called Grok Code Fast 1. It was incredibly cheap, it was incredibly quick, everyone used it, it took over OpenRouter for a while. But since then, the company has not been at the tip of the spear, as we might say, in the AI coding game. So what does xAI have? A lot of compute. What does Cursor have? A model that's quite good and the chops to make more. You put the two together, you take xAI's GPU clusters and Cursor's model-making skills. And in theory, Lon, it's a match made in heaven.

Speaker 3:
Well, I mean, we've seen so much discussion just in the last few weeks, and there's more in the docket about this, about how every one of these companies now feels like they need their own AI coding product that's locked in, that's best in class. Like that's what's driving so much of this industry. Everybody, again, as you said, trying to compete with the Claude Codes of the world, the Codexes of the world. We had that Google all-hands alert from Sergey Brin the other day. He's basically saying the same thing: where are we? Why isn't Gemini best in class and the thing every developer is using? So I think it's interesting, looking at it from the outside, that that particular tool, that form factor of the AI coding assistant, has become essentially what's driving the entire AI industry at this point.

Speaker 1:
Absolutely. Here are the benchmarks that Cursor put up when they put out Composer 2, their recent model. And as you can see, if you're on the audio version, it's basically a little bit behind GPT 5.4, but it's ahead of Opus 4.6 and 4.5 and Composer 1.5, of course, the preceding generation.

Speaker 3:
This was not updated for 4.7, though. How dare they?

Speaker 1:
[07:29] No, no, it's not. It's not our fault that Anthropic has been cooking quite a lot in Cursor.

Speaker 3:
[07:34] Those teens in Discord who are already using Mythos, I hope they can tell us how that stacks up. I don't know if you followed that story.

Speaker 1:
[07:41] Okay, we'll get to that. But the other thing that's really important here is that Cursor has a lot of developer market share.

Speaker 3:
[07:47] Yes.

Speaker 1:
What that unlocks for the company is a lot of information about how people are using its models in a production environment. You can learn from the logs, the traces, call it what you will. There is an opt-out built into how Cursor functions, Lon, so you can't just expect them to have every piece of data from every single user or customer. But probably there are enough people opting in to share that they have a pretty good corpus of information on a day-to-day basis. xAI doesn't have that, because their coding models have not been as well received as those from other companies. So data and models from Cursor, and then a lot of compute from xAI and SpaceX. I think there are two prices here that are different. The $10 billion number is very expensive. Like, to partner with a company, to work on some stuff.

Speaker 3:
[08:33] For a single product, if you're like, we got this new model out of it, we really love it. Like 10 peanuts is a big number, yeah.

Speaker 1:
10 peanuts is a big number, and we don't know exactly how costs will be shared. You know, is Cursor going to pay for some of the power bill over at xAI's Colossus supercomputers or not? But $60 billion is not a large number, because as we've seen recently, Cursor is considering raising capital today at a $50 billion valuation.

Speaker 3:
Right. And so, exactly. By the end of the year, presumably, Cursor would be well above the $60 billion. So theoretically, SpaceX could be getting a little bit of a discount based on where we expect Cursor to be in December.

Speaker 1:
Absolutely. So it's kind of a call option on buying Cursor. It's a big old call option. So the risk that I would say SpaceX slash xAI are taking is: what if this partnership doesn't bear the fruit they're hoping it does, and they're still on the hook for $10 billion?

Speaker 3:
[09:24] Right. I do have one other financial question, and I look to you, Alex, as somebody who's a little bit smarter about this than me. We also have been hearing a whole lot about a SpaceX IPO in the imminent future. Is there a chance that this is narrative in some way? That this is part of the storytelling as we go into the IPO? Like, look at these massive deals? Maybe if you were a little skeptical about XAI because of the model situation, well, now you have this very reassuring news that they're going to be teaming with one of the leaders in that space, and so forth.

Speaker 4:
Yeah, hiring can be its own full-time job, and hey, guess what? I already have a full-time job. I make podcasts and I invest. But when you're running a small company, we both know every hire matters. You don't want to waste any of the seats you have at your company. And the best partner you can have is LinkedIn Hiring Pro. Why? There's a billion people using LinkedIn. All the great talent are there. If you're proud of your work, you build a LinkedIn page and you update it. LinkedIn Hiring Pro is going to streamline and simplify the entire process for you. Nearly 60% of companies using LinkedIn Hiring Pro get an incredible candidate to interview in the first week. And we're looking for a new producer for the pod. We did shout outs here on the show. We posted it on my social media. We asked friends. You know where we found our next great hire? LinkedIn. And it was competitive. We had like three or four really good choices. So hire right the first time. Post your first job and get $100 off towards your post at linkedin.com/hiringprooffer. That's linkedin.com/hiringprooffer. Terms and conditions apply.

Speaker 1:
[10:57] So SpaceX going public by itself is a two-part business. It's a launch company and it's also a satellite internet company. And the latter half of that's been very, very profitable for SpaceX based on what you've said.

Speaker 3:
And xAI and X. I mean, those are also sort of part and parcel.

Speaker 1:
And then you have the other two things, xAI and X, as you said. X, let's just go ahead and say it's breakeven, probably somewhere in and around that. Or the losses or profits from it are not really material compared to the scale of space launch, Starlink, and xAI. But the problem is xAI brings a lot of costs with it. It brings, I think, a little bit of debt. It spends a lot of money on GPUs. It is not cheap to build, essentially overnight, one of the world's largest compute clusters. So if you're an investor looking at this kind of Elon conglomerate, if you will, there are some clear financial winners today, and there are some bets that may pay off later on, but you're going to pay for those bets now. So I think you're dead on. This is a way to change the narrative a little bit. xAI is not merely the third- or fourth-place company in the current AI model game. It is now the partner or potential owner of Cursor, a multi-billion dollar revenue company that has a lot more developer mindshare. So it does, I think, ameliorate some concerns, but it's doing so at the cost of $10 or $60 billion. And we don't know today, Lon, if those sums are predicated on cash, stock, or a mix, because it could be debt you have to raise, cash you have to burn, shares you have to issue, or a combination.

Speaker 3:
[12:23] As is so often in the AI industry, it's sort of purely theoretical at this point. We could talk about it, it's on paper, but it's not really anything concrete that we could sort of look at the numbers and break down at this point.

Speaker 1:
It's a promise. It is a promise, but I do think that when you're thinking about the size of the prize, it's worth taking some expensive swings. And so the reason why I'm not shouting about the $10 billion fee, essentially, is because I do think that if you get very good at creating AI models that can do coding, you're much closer to recursive self-improvement, which is when an AI model can work on itself and improve itself. So right now, I think we consider the Cursors and the Claude Codes of the world as individual accelerants for developers and development teams. But if you want to build the AI model that can improve itself long-term, you're going to need, at a minimum, state-of-the-art coding chops, if not the market-leading option, and xAI just isn't there. So a way to turn the page-

Speaker 3:
[13:28] It's that flywheel. It's that the more developers that are using it to code, the more data you're getting about good coding, the better the model becomes. And so as we look to potentially AGI or models that can write brilliant beautiful code without a human in the loop ever, whoever has the most data theoretically wins.

Speaker 1:
[13:48] And then there's the future component to this. Here's a tweet from Jason who is out today. He'll be back later on. Don't worry, he's not off the show, just off for today.

Speaker 3:
[13:57] We kicked him off.

Speaker 1:
I mean, everyone gets a day off now and then. Hostile takeover, folks.

Speaker 3:
[14:01] Hostile takeover.

Speaker 1:
So Jason says, fire emoji, wow. Very strategic and bold move. Colossus, which is the xAI supercomputer, is a super weapon for SpaceX already. Can you imagine when it scales to the stars? So the other part of this is, let's just say xAI, SpaceX, buys Cursor. The experiment works out. They take all that data and learning and model prowess and they make something fantastic. Okay, then what? Right now, xAI has, I think, the only example of a compute glut in the AI game. But if they make a model that is as good as they hope, that's going to become a compute shortage overnight.

Speaker 3:
As everybody switches over from the Claudes of the world to the Groks of the world, and then all of a sudden, they're there at the center of everything.

Speaker 1:
But today, Anthropic is throttling, blocking, turning people off, trying to just keep itself online. GitHub Copilot stops signing up new individual paid accounts to hold back compute. Everyone's struggling.

Speaker 3:
I'm not even writing code. I'm doing tweets, and Claude is like, hang on, brother, I need a break. Give me 45 minutes. And I'm not taking apart our back end or anything.

Speaker 1:
But if you do believe that SpaceX has a chance at building orbital data centers, which we've talked about with Starcloud, a TWIST500 company, then you can kind of sketch out a future in which they have the best coding model and the most compute.

Speaker 3:
Right, which is clearly Elon's vision as we become a, you know, I forget what the Russian name is, as we pursue becoming a higher-level civilization and powering our data centers directly from the sun. Obviously, we would need to have the best compute and the best models in space. We're trying to become a Kardashev level two civilization.

Speaker 1:
If you don't know what that means, you're not spending enough time reading science fiction. Fix that. This is a data set from OpenRouter. What it shows is the most popular coding models, I think over the last week, maybe the last day. But Lon, if you take a look at this, you see some open models from folks like Moonshot, which is Chinese, MiniMax, Chinese, Step One, Chinese, and then NVIDIA, Anthropic, OpenAI, and that's it. I think that this is probably the fire they're trying to put out. They have to get back on this board. They have to become competitive. Maybe a $10 billion bet for a company that's supposed to be worth $1.25 trillion is, and this is an odd thing to say, pocket change. It's not that much money in that context. It's three MLB teams, I guess, given the recent sale price.

Speaker 3:
[16:31] Still a lot of money, but yes. In perspective of what these companies are doing, it might make more sense.

Speaker 1:
[16:36] Well, no matter what, I think the takeaway for folks out there is that if you like the AI coding tools you have today, you're going to like them a whole lot more down the road because Lord above, there is more improvement coming. This race is going to yield a lot of steel on steel sharpening, as we say, and I think it's going to turn everyone into just superhuman developers. I can't wait.

Speaker 3:
[16:57] We're already seeing it. These products are coming out at an insanely rapid rate. Open Claw, I feel like is updated every other day. There's a new version. The drive to become the new Claude Code is massive and incredibly intense, as intense as any race I think we've seen in tech since I've been following it.

Speaker 1:
[17:18] In fact, you know, Lon, why don't we talk to a couple of folks from the world of BitTensor and see what AI coding model and harness they are using.

Speaker 3:
[17:25] I would love to do that.

Speaker 1:
[17:25] I want to bring up here to the stage our dear friends, Chris Zacharia and Brian McRindle from Bitstarter. And they are in the Gen Z Hype House podcast studio with mood lighting. Boys, welcome to the show.

Speaker 3:
[17:39] Look at that.

Speaker 2:
[17:40] Thank you very much. Great to be here.

Speaker 5:
[17:42] Yeah, doesn't it look beautiful?

Speaker 3:
[17:43] It does. What does orange lighting signify? What mood is that?

Speaker 2:
Our logo is orange. So actually, it looks like we set it up this way, but that was pure chance.

Speaker 5:
[17:54] Completely planned.

Speaker 3:
[17:55] I feel like it's calming. It's giving me a calming, soothing vibe.

Speaker 1:
[17:59] I was getting Halloween vibes. But listen, guys, before we get into what Bitstarter does, I'm curious, for your own development work for the company, what are you guys using these days?

Speaker 2:
[18:08] You mean in terms of AI tools?

Speaker 3:
[18:10] Yeah.

Speaker 2:
[18:11] Opus 4.7, tooled up to the max, code and co-work 24-7.

Speaker 3:
[18:17] They're token maxing, Alex.

Speaker 5:
I'm consistently on three instances of Claude Code and Composer, making them fight against each other. Oh, wow.

Speaker 1:
So what would it take for you guys to swap out your Anthropic and Composer models for something from xAI? How much better would they have to get? There's the story from the start of the show.

Speaker 2:
[18:41] Well, Brian's the founding engineer, so I think that's one for you.

Speaker 5:
Honestly, for me, it's always just ease of use. One of the most valuable things is not impeding someone's workflow. I wouldn't want to have to go download another tool that's not CLI-based, or something that isn't clearly better. There's a bunch of people who are at the edge of everything and they want to use absolutely every single tool, but that's not the vast majority of engineers. It just needs to be known to be the best. I need to see it on the leaderboards. I need to see it on places like Arena, if you guys know what LM Arena is.

Speaker 1:
[19:11] Oh yeah, oh yeah.

Speaker 5:
[19:13] It just needs to be like, I think there was a whole push for Claude Code and it was very clear that it was the best and then I moved over and said, yep, that's obvious.

Speaker 1:
I have some Arena data here for anyone curious. So this is a rundown of the leading AI labs, grouped in the coding context. The current leaderboard is Anthropic, then z.ai, the Chinese company behind the really solid GLM 5.1 model, Alibaba with the Qwen family, OpenAI, of course, with GPT 5.4 and Codex, then Google, Moonshot, Xiaomi, MiniMax, then x.ai, then DeepSeek. That's the top 10 in the world today. That's kind of a shocking list. A lot of Chinese companies are doing quite well. I'm encouraged by that. But guys, let's talk about Bitstarter. So we have been going deep in the world of Bittensor. I've talked to so many subnets, we've learned so much, and it's been an absolute treat to see how the economics function. But you guys have put together kind of an on-ramp, if you will, to Bittensor via Bitstarter, which you guys kind of call a "Kickstarter for Bittensor." So what we want to know is, why couldn't we use Kickstarter for this? Why?

Speaker 3:
[20:20] Yeah, is there something in the Kickstarter rules that's like no subnets?

Speaker 4:
Here's a startup truth bomb. A lot of founders have no idea what's actually going on with their money. If that's true of your company, hey, no judgments. I know you're busy hiring, building your product, go-to-market, all that important stuff. But your company needs a reliable financial partner, not a lifestyle brand, okay? Grasshopper is a real federally chartered digital bank that's not trying to win you over with a rewards program. Instead, they're building deep integrations, treasury products that are going to actually help you expand your runway, and innovative tools like an MCP-based AI connector. Oh man, that's awesome. We can connect it to all of our agents and do reporting. And that will put you in command of your money. As a TWiST listener, you're going to get a $500 cash bonus just for opening an account. Think of that. You open an account, boom, there's $500 in it. So leap on over to grasshopper.bank/twist and use the promo code TWiST. As a TWiST listener, you're going to get a $500 cash bonus just for opening an account: grasshopper.bank/twist.

Speaker 2:
[21:23] I'd love to see someone try, and maybe that should have been the prototype. But for a decentralized network, Bittensor's launches back, say, a year ago, were really opaque and had an information asymmetry where it was investors who were in the know, who could decide whether or not a team got launched, and it was retail chasing after once the subnet had already gone to the protocol and was already pumping. So that was one big reason why we wanted to create a system whereby, hey, what if we could launch teams together in a distributed way and give retail the same chance as investors get, the same OTC style terms that you'd normally only get if you're an investor in return for crowdfunding a team to the protocol and getting them over that initial investment hump and building on Bittensor.

Speaker 1:
Okay. So it's kind of community-driven, and it's kind of a purpose-built product in the sense that it does one thing very well. I guess my question is, how much money do you need to raise to put together a compelling subnet pitch pre-launch? And is that raised in USD, a stablecoin, or TAO?

Speaker 2:
[22:38] Right.

Speaker 1:
[22:38] Which is the Bittensor token if you didn't know.

Speaker 2:
[22:40] Great question. So it's less about the amount as a fixed total. It's more about validation. And what I mean by that is would this actually work on Bittensor? You might be an amazing ML engineer. You might be an incredible full stack dev, but Bittensor is different. It's adversarial. You're designing for a game theoretic AI environment, and the miners are very, very intense. They're going to tear you apart. So you might think you've got an amazing business idea and a great white paper, you've got the GitHub repo. But if you don't validate that it's going to work in a distributed adversarial system, then it's more like you could get wiped out and lose your initial capital, right? So, yeah, go ahead, Alex.

Speaker 1:
[23:25] Well, the reason why I like this Lon is that it sets up a way to kind of screen out the crap, and then therefore allows people to have more confidence in both backing new subnets, but also investing in the ones that have already made it. But here's the thing, it doesn't feel super decentralized to need bespoke onramps, if that makes sense. So tell me if I'm being overly precious here, but it does seem like when I talk to folks in the realm of Bittensor, we talk about community and kind of caring for the ecosystem, which is all well and good, but it sounds a bit more like a gardener pruning a tree than letting a forest grow wild.

Speaker 2:
Well, I want to get Brian's take on this as well, but my take would be: Alex is absolutely right, Bittensor is meant to be that way. It's meant to be super competitive, right? But as well as being hypercompetitive, Bittensor is also extremely cooperative, in that there's a lot of cross-pollination, there's a lot of openness between the teams, there's a lot of collaboration there too, right? There's a lot of shared resources, we go to the same conferences, people know each other, and it's the combination of the two, the integration of competition and collaboration, that really succeeds. The thing that was getting me about the launches was that the funding to get the subnet slot was really concentrated among maybe four different types of investors. The subnet slot can cost, say, a quarter of a million dollars, maybe more, in TAO, and that's burnt, so it's a sunk cost. If you have to go to investors and say, hey, can you give me that initial startup capital so I can get a subnet? Their terms are often: okay, we'll give you that initial 100K, but you have to give us 20% of emissions in perpetuity. And then suddenly you get a small group of investors owning like 80% of the teams on the protocol. Hence: hey, what if we crowdfunded it? That way you don't have these investors who have to put up that first quarter of a million, and the teams aren't weighed down by having to give away 20, 30% of their emissions forever, which leaves them less to spend on everything else.

Speaker 1:
[25:32] Yeah, that's an insane cut.

Speaker 3:
[25:33] There are only 128 subnets in total, I believe. So how competitive is it for each of those slots? Is there a long waiting list? Does some company have to go out of business for you to take over its subnet? What is that marketplace like?

Speaker 5:
[25:49] Yeah, there definitely is a process of subnet deregistration, so subnets have been deregistered in the past. But really, when it comes down to it, the competitiveness comes down to the price of registering a slot, which fluctuates. It's not a fixed cost; it's a dynamic number that changes. If I register, the price doubles the next time someone else wants to register. So there's a market value to registering, and it basically becomes: can you register before somebody else does, and do you have that amount of capital?
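The doubling mechanic described here can be sketched as a toy model. This is only an illustration of the dynamic-pricing idea, not Bittensor's actual on-chain formula; the base price is an assumption, and the real mechanism also decays the price back down when no one registers:

```python
def registration_cost(base_cost_tao: float, consecutive_registrations: int) -> float:
    """Toy model of dynamic slot pricing: each registration doubles the
    price the next registrant pays. The real chain also decays the price
    back toward the base over time, which this sketch omits."""
    return base_cost_tao * (2 ** consecutive_registrations)

# If the going rate is 250 TAO and two teams register back to back,
# the third team faces a 1000 TAO price under this toy model.
```

So the "market value" of a slot is really a race: whoever can stomach the current price registers, and everyone behind them faces double.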

Speaker 1:
[26:28] Yeah, so it's a bit like the sports franchise model. There's a new NWSL team; they had to pay a $205 million fee to join a limited number of teams. So you're kind of buying the equivalent of the Boston slot as part of the Bittensor subnet group.

Speaker 2:
[26:45] Exactly, and some slots have more liquidity than others. If you get, say, one of the first 20 or 40 subnets, it will probably have a lot more alpha in it than a subnet that was registered later. Handshake, where you're on now, is on subnet 58, and that slot has a lot more alpha on it than, say, subnet 120. That means more liquidity, which means more capital to spend upfront on the things you need. So there's competition there as well.

Speaker 1:
[27:16] Why does it have more alpha? I thought alpha tokens were the subsidiary tokens on a per subnet basis that were used to incentivize the miners and validators. I didn't know that they varied in quantity based on subnet number. I feel like I'm missing something here.

Speaker 2:
[27:29] When they're registered, they start emitting alpha. Some of them were registered over a year ago, so they've emitted more in that time. They all have the same fixed total.

Speaker 1:
[27:37] There's always a new corner of Bittensor for me to look around and go, I didn't know that.

Speaker 2:
[27:42] Well, can you imagine starting a subnet, Alex, and not knowing this stuff, and then being like, wait, hold on a minute, what?

Speaker 5:
[27:47] I think there's some really strong intrinsic value to what Bitstarter is trying to do. I primarily work at Macrocosmos. You guys have had Will and Stefan on here in the past.

Speaker 1:
[28:00] Great folks.

Speaker 5:
[28:01] Just learning how to go through the process of creating a subnet and doing everything, from two years ago, when there was almost nobody in the ecosystem: a few people knew, but there's a lot of tribal knowledge. Bitstarter was the first really official initiative trying to give back to the community and say, there's this whole treasure trove of information that you need. You need to go from zero to one really, really fast, and that's how you're going to be successful.

Speaker 1:
[28:25] Yeah. So you guys add credibility and guidance, but also I feel like if a subnet goes through Bitstarter, given your guys' place inside the ecosystem, it's a really big stamp of approval. It's credibility, essentially instantly.

Speaker 5:
[28:37] Yeah.

Speaker 2:
[28:38] We try to bring together a mixture of experts, the best people on the protocol, to give free discretionary advice to the applications that come in, the ones that pass our initial review. We help them build up their proposal, then share it with a cross-section of the ecosystem: people who have run validators and miners. Jacob, the co-founder of Bittensor, is on the advisory panel. It's their advice and commentary that helps improve the application. If you're a prospective subnet owner, you don't have to take their advice, but it means that lots of senior people in the protocol have had a chance to help you if you want it. And even if not, you get to pitch it live on air through our show so that you can find your people. At the beginning, it can be really hard to get attention on your subnet; like Lon said, there are 128 of them. So by starting out this way, you've got the best people in the protocol looking it over, you've got backing from people who've built it before to improve your proposal, and then you go live on air with a crowdfund behind you. We've gotten up to 2,000 people before. It was the second most watched show on Bittensor after Novelty Search, which Jacob hosts. So you get a chance to find your people, make your case, and then hit the ground running.

Speaker 4:
[30:00] Notion is the AI-powered, connected workspace for Teams. It brings all your notes, docs, and projects into one space that just works. And with AI built right in, you spend less time switching between tools and apps, and more time creating great work. And now, with Notion's Custom Agents, busy work that used to take hours, or never got done at all, runs itself. Custom Agents automate all of your Teams' repetitive workflows, and they live inside Notion already. Maybe you want to keep track of what everybody's working on. Maybe you want to see which pages are getting edited. Maybe you want an analysis of what your team is working on. I'm constantly, constantly getting disturbed by pings and pings from Slack, by team members with all these questions. Now, Notion's Q&A agent can research the answers from anywhere on the platform, and get back to those people directly in Slack. You can design Custom Agents on your own. But Notion has a bunch of pre-trained ones ready to go. Try Custom Agents now at notion.com/twist. That's all lowercase letters. notion.com/twist. And when you use our link, you're supporting our show. And keeping it free and vibrant, notion.com/twist.

Speaker 3:
[31:14] Have you noticed any, I mean, you've run a lot of these competitions now, a lot of these sorts of projects. Are there certain kinds of projects or certain kinds of pitches that get everybody's attention and do better naturally on the system? I know Kickstarter always worked that way. People are always like, oh, you've got to put up a horror short, horror does great over there, that kind of stuff.

Speaker 2:
[31:33] Right. That's the exciting thing: what are the parameters? What are the best startups for building on distributed systems? What does a community like Bittensor really respond to? What turns them on? A lot of it actually comes down to the founders and the founding team. There was a team we launched back in January. We did it live from Davos, and no one had heard of them; they'd been in stealth mode. They were two tenured professors: one from an East Coast university, and the other with a chair in philosophy at Harvard. They also have an AI podcast and run a hedge fund, and they had a business already built and launched in the same sector they were going to build the subnet in. We completed their raise in under an hour. That was 600 Tau. So when you have great founders no one's heard of, there's this element of surprise. We dial people in from different parts of Bittensor to give their perspective, and that can be great. But long term, what you're looking for is: what are the types of problems that are best solved on a distributed system, and what do they need to succeed when they hit the protocol? There's the liquidity pool management, there's the social media aspect, there's managing the miners and the validators. So it's a really complex entity. We're tracking every team we launch so we can learn as we go what really leads to success.

Speaker 3:
[32:48] It's a little bit like a subnet university: like Founder University, but for people specifically launching subnets. That's what it makes me think of.

Speaker 1:
[32:56] But you know, if Jason gives you money, he gets stock in return. So I'm curious, Bitstarter as a project makes sense to me now. Really appreciate the explanation. Is it designed to be a revenue generating business or is it more a community arm of the Bittensor folks to help get people onto subnets that are being either misused or underused?

Speaker 2:
[33:17] We take 3% of emissions for the first 90 days post-launch. That's a lot smaller than what a lot of other incubators take, but our mission was to build Bittensor better. Subnet owners actually only get 18% of total emissions, because 41% goes to miners and 41% goes to validators. If you take more than that, what tends to happen is that they don't have as much disposable capital to spend on things like recruitment or infrastructure, which means they tend to struggle a bit more when they get to mainnet. Whereas my gamble was: if we take less from them and bring in more partners at the start, they can spend the money on the partnerships that will help them thrive. They'll be less weighed down by someone taking passive income from them.
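The arithmetic behind that gamble can be worked out from the percentages quoted here. This sketch assumes the incubator's or investors' cut comes out of the owner's 18% share, which the conversation implies but never states exactly:

```python
# Split quoted on the show: 41% to miners, 41% to validators,
# 18% to the subnet owner.
OWNER_SHARE = 0.18

def owner_net_emissions(total_emissions_tao: float, cut: float) -> float:
    """Owner's take-home after an incubator or investor cut, modeled
    (as an assumption) as a fraction taken from the owner's 18% share."""
    return total_emissions_tao * OWNER_SHARE * (1 - cut)

# On 100 TAO of emissions: a 20%-in-perpetuity investor deal leaves the
# owner 14.4 TAO, while a 3% cut leaves 17.46 TAO.
```

Under this model the difference compounds: the 20% deal costs the owner roughly a sixth of their working capital forever, while the 3% cut ends after 90 days.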

Speaker 5:
[34:03] Yeah, and I think there's also a big bet in there too, right? Where you're investing in them, taking less with the hope that their alpha also appreciates, right? Like you want them to be successful. You want them to go through the whole process of like creating something state-of-the-art, creating something that's going to change Bittensor or change technology in some way.

Speaker 2:
[34:22] Because then the relative value of that 3% is way higher.

Speaker 5:
[34:25] Yeah. Yeah.

Speaker 1:
[34:25] Yeah.

Speaker 6:
[34:25] Yeah.

Speaker 1:
[34:26] But it's kind of staggering that people would want to take 20% or 30% of emissions in perpetuity while you're taking 3% for 90 days. How do the economics work out on your end? Because that could be a smaller sum of money if those tokens don't appreciate greatly in the future. So are you willing to just kind of eat the cost for the sake of the network overall?

Speaker 2:
[34:50] At the beginning, it was really important to prove that this works. No one had ever tried this on Bittensor before: you're pledging Tau for future alpha emissions. We wanted to prove that it would work. The 90 days, 3% thing is perfectly viable, but it's harder to work with teams long-term. What happens when we launch 12 teams, or 20? Will we be able to work with each one long-term? So we're actually setting up a venture studio so that we can put up more of the investment in launching a team upfront, and then be able to incubate and accelerate teams for much longer periods of time. In fact, we actually have some news that we want to break right here on This Week in Startups with you guys.

Speaker 3:
[35:29] All right, let's do it. We love breaking news. Look at that.

Speaker 2:
[35:32] I wanted it to be enough of a surprise that I didn't even tell you guys beforehand backstage.

Speaker 1:
[35:37] Well, thanks, by the way. I totally know nothing, and I'm just finding out live.

Speaker 2:
[35:41] There's nothing like doing things live. So our second team that we launched, Subnet 24, Quasar, they do long-context models and intelligence. That means they're looking to expand context windows for LLMs. We launched them in January; three months later, they're about to release their own model, and they were close to being in the top 10 subnets. They have a fully diluted value of about $84 million. What they were doing impressed Jacob so much that he decided to back them himself, and that happened with another team that we launched as well. So when we were talking to Jacob about it, he said, I want you guys to help bring more machine learning research teams onto the protocol, and I want us to be able to build a fully decentralized tech stack for Bittensor, where we bring in the top machine learning startups and research teams to build on the protocol. So Jacob has given us funding to register subnet slots for machine learning research teams in a new machine learning track on Bitstarter, where we'll be working with teams across the protocol to deliver state of the art across a number of benchmarks, working with teams like Macrocosmos and Targon, who have already pushed the boundaries, to bring in the best machine learning researchers, incubate them, and help them succeed on Bittensor.

Speaker 1:
[37:05] To help them succeed on Bittensor, though, implies that, well, one, congratulations, I should have started with that.

Speaker 2:
[37:10] Thank you.

Speaker 1:
[37:11] But it implies that there's enough room for them. One thing that I keep looping back to is this hard cap of 128 subnet slots. I know it was 64 back in the day, but to get all these ML engineers that you're hoping for into the ecosystem implies, to me, that you're going to need more total parking spots. So is that something that's being discussed? Is that coming, or, much like the 21 million Bitcoin cap, is 128 the end of it?

Speaker 2:
[37:34] It definitely isn't the end of it. It will expand. When Jacob was on Novelty Search to talk about conviction, he said they're definitely going to expand it to 256 soon. I think when we went up to 128, the quality got a little bit uneven, and we're spreading emissions over more subnets. There isn't really a hard limit. Bittensor is already a very, very large chain for a layer one, but it can handle more, and it will go up again. Subnets get deregistered every couple of weeks right now, and other subnets are for sale, so we've had a lot of new entrants. In the past week alone, we've had about three new subnets come in. So the recycling is actually quite strong. It's like what you were saying earlier, Alex: it's really intense competition. With the deregistration, we can get a new team in every month.

Speaker 5:
[38:23] Yeah. Sustainably, it's very hard to run a subnet. The process of trying to do everything, from the marketing to the engineering to the socials to the organizing of your own company, there are a lot of different components there. So just naturally, the churn is going to be high. Institutions like Macrocosmos or Targon or the Chutes or the Quasars of the world have gone through that process and been able to be successful. And again, that's what Bitstarter is trying to do: pull you out from the churn. But there's still a lot. Fundamentally, though, it'll eventually go to 256, and then 1024, and whatever comes after that. There's definitely space within the ecosystem of Bittensor to have more machine learning. From what I've seen in the past couple of years, there were a few really good nuggets of ideas and research around LLMs, and then we went through this huge, massive product phase of expanding the number of subnets, with people trying to find product-market fit and revenue and doing all those sorts of things. Now the predominant energy feels like, okay, we need to do more machine learning, we need to do more of this stuff. That doesn't necessarily mean there won't be more products in the future of Bittensor, but there definitely is a lot of space, a growing space, to do more fundamental research, more collaborative research, more fundamental things on Bittensor, to push the field of machine learning and artificial intelligence forward.

Speaker 1:
[39:49] Well, this answers the question Lon and I had in our notes, which is: how many more ideas are feasible for this adversarial, decentralized model? It sounds like even inside the niche area of ML in particular, because there are so many different projects out there, there's tons of space left to build, to work, and to host these competitions. So, Lon, I guess maybe we're going to end up with 256 subnets out there.

Speaker 3:
[40:12] More fun for future TWiST episodes. I'm happy to hear it.

Speaker 5:
[40:16] Yeah. I think also, just to put another note in there: the founders, Jake and Ala, and particularly Jake, are very amenable to what the next generation is going to look like. Something happens in the ecosystem, and they want to make the right move as fast as they can. Sometimes that goes well, sometimes it goes poorly, but that gradient of improvement is really fast; it's accelerating. If we find ourselves in a situation where we have so much talent that we don't have the space, the day will come when we increase it.

Speaker 2:
[40:48] Yeah. I wanted to touch on what people actually get when they submit to the incubator, if they're chosen, because we will register the subnet for them, so that upfront cost, several hundred Tau, is paid for. We also have a partnership with Subnet 4, Targon, which is run by Manifold Labs. They're offering free compute to our incubated teams for their post-launch period. We're also in talks with Crucible Labs, which is run by Ala, the other co-founder of Bittensor with Jacob; they do confidential compute. Targon just published a paper with Intel. They're about to launch Targon OS, they have the Targon virtual machine, and by all accounts, it's highly reliable, which is what you need when you're doing, say, pre-training of a model. Quasar already have a partnership with Targon, and it's helped them to train their models. We're creating a system whereby we've got a cross-protocol selection of existing infrastructure that can help power the latest ML researchers to success. Long-term, for these other subnets, it's good for them commercially, and it helps to create a network effect of machine learning research on Bittensor.

Speaker 1:
[42:02] So there are several individual loops that improve things. As you make better models, you can bring those to bear on another subnet's competitions; those will yield better results and therefore bring in more people. Competition for emissions goes up, talent becomes more valuable, the ecosystem is worth more, and then suddenly more people show up. It's a great idea. Here's the thing I want to flip around, though. Jason's a huge bull; we talk about it all the time on the show. What's Bittensor's weakness? If you had to pick one, the thing that keeps you up at night, what's the other side of this coin?

Speaker 5:
[42:35] With every technology, there's always some weakness. I think one of the things is that it's very difficult to find the right project. It can be very difficult to parameterize your problem in the right way to get the most benefit out of it. It's an art, and it takes a lot of time to figure out what that looks like. Sometimes, if you're not expressive enough, the network just won't be better than the version you created yourself, because you've been the one thinking about this problem for many months, if not years, before trying to launch it on Bittensor. If you don't do that in the right way, you won't end up in a good place. Again, places like Bitstarter are trying to circumvent this: you can talk to someone like myself, or someone else in the community, who will tell you, obviously that won't work, or, you can go further with this. Do you have anything you want to say, Chris?

Speaker 2:
[43:26] I think its biggest weakness is the same as its biggest strength, and it's captured in this phrase: if your system only works when people play by the rules, your system doesn't really work. On Bittensor, you have to design your product as if it's for the people exploiting it, because you know that it's going to get exploited. You have to think in this really unusual inverse way, where you design with the exploit in mind, so that it's almost like a jujitsu move: when people go to exploit you, you use that power against them, and the system gets stronger, right? Thinking like that is very unusual. We don't think like that in most of life.

Speaker 3:
[44:06] Because they're your miners or your users, and normally you'd say, we've got to do everything we can to make this as smooth and clean and enjoyable for them as possible. But in this case, it's like, yeah, but they're also trying to screw me, and so I have to navigate around that in advance.

Speaker 2:
[44:22] So you're not launching a business or a startup, you're launching a network, right? And networks are actually the more appropriate home for AI than a business, which is a limited, proprietary, closed box. That's not where AI is eventually going to live. The clue is in nature, right? Look at how intelligence developed in the natural world. It didn't develop in a single place; it developed through survival of the fittest, natural selection, in a distributed system of predator and prey. That's exactly how we're building intelligence on Bittensor: miner, validator, right? You're distributing the roles and creating the adversarial environment for intelligence to grow. So it is the better home for it. It's just, if you thought running a startup was hard, try running a Bittensor subnet.

Speaker 3:
[45:11] Try running the process of evolution.

Speaker 1:
[45:14] Yeah, I was going to say Darwinian evolution via natural selection, aka Bittensor. That's going to bring in all the founders, man. It sounds super easy. But if people do want to find out more about the program you just announced, apart from going to bitstarter.ai, your main site, where can they go to learn more?

Speaker 2:
[45:30] So we will be opening submissions next week; we have a submissions portal for that. We are going to be incubating three teams a quarter. Applications will open at app.bitstarter.ai. You can also follow us on X: I'm at Macrosac, and also at bitstarter.ai. We will be announcing the eligibility process there. We already have a couple of applications from beforehand that we've put into the track, and we'll be publishing the guidelines as well. All right.

Speaker 1:
[45:59] Well, guys, we're super stoked about it. When you have your first three, come back on the show and tell us all about them because we're always here to learn more about awesome subnets. Thank you both so much for your time. And you can now turn off the Halloween lights behind you.

Speaker 2:
[46:11] Thank you very much.

Speaker 1:
[46:11] Next up, we're going to bring Ning Ren up from Trajectory RL. Ning, welcome to the show.

Speaker 6:
[46:16] Yeah. Hi. I'm very happy to join the podcast.

Speaker 3:
[46:19] We're delighted to have you.

Speaker 1:
[46:21] We're absolutely stoked. So we're going to go from the macro picture of the Bittensor economy down to a single subnet. What we'd love to hear from you first is the pitch. What does subnet 11, Trajectory RL, do? Yeah.

Speaker 6:
[46:32] Okay. Let me introduce myself a little bit. I'm the founder and CEO at Trajectory RL. Trajectory RL is a new company running on Bittensor. If I had to describe Trajectory RL in one sentence, it's a new type of software company: it's building software not for humans, but for AI agents. Nowadays we call them skills, though in the future we may come up with a better name. We're running a company that continuously produces such skills, software for AI agents. Right now, everybody is talking about Claude Code and harnesses, about AI coworkers; everybody is using them. If you think about it, we are in the middle, the very early days, of a platform shift. AI agents are becoming the new computing platform, the new smartphone, the new operating system. Just like every operating system before them, they will need software to power them up and make them useful. If you look around, there are skill hubs everywhere, and people are still writing skills by hand. But we envision a future where most of those skills will be written not by humans, but by AI agents.

Speaker 1:
[48:02] For some people out there who are a little bit behind: a skill is a SKILL.md file. It's essentially plain text, the written word, not code, and it's a set of instructions to help an AI model or agent do one thing in particular. Is that fair, Ning?

Speaker 6:
[48:19] Oh, yeah. It doesn't necessarily have to be only a SKILL.md. It can be a combination: a skill with some MD files, combined with some Python files and code examples, with some logic to tell the agent how to do some business in a domain.

Speaker 3:
[48:38] Yeah.

Speaker 6:
[48:38] It could be your personal CRM. It could be a Twitter-post writing tool. It could be a website-creation tool.

Speaker 3:
[48:45] Yeah.

Speaker 1:
[48:46] We've had a lot of these in our OpenClaw conversations. For example, we just had the folks on from Bitstarter talking about the economics of running a subnet. Tell us how Trajectory RL uses Bittensor to create, and encourage the creation of, better skills.

Speaker 6:
[49:04] If you think about it, it's a very interesting problem: how do we organize this? Basically, we use Bittensor to orchestrate agents all over the world to compete and collaborate to write a good SKILL.md or a skill pack. The first challenge we have is to create good benchmark tools, because right now there's no good way to measure how a skill runs on an agent, right? People just publish skills, and many people download and use them. So this is the innovation we created. Basically, we create a sandbox. Technically we call it a sandbox, but you can think of it as a puzzle box. An agent, say Claude Code, can come and open the puzzle box. There are some tools included in the box, and it gives the agent a puzzle to solve. The miners compete: they can use any technique to write a good SKILL.md that powers the agent to solve the puzzle, and we rank and score them.

Speaker 1:
[50:13] But in this sandbox, each agent would have the same model and harness, so that the individual skill file shines, versus something else influencing the performance.

Speaker 6:
[50:25] Basically, the same model but different harnesses. We compare across different harnesses, because we also want to know how the skill performs with different harnesses. Okay.
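The competition loop described here, fixed model, every submission run against the same puzzle box, then ranked by score, might look roughly like this. All names are hypothetical; the real subnet's scoring and incentive mechanism are certainly more involved:

```python
from statistics import mean

def score_skill(skill_pack: str, puzzles: list, run_puzzle) -> float:
    """Fraction of sandbox puzzles the agent solves with this skill loaded.
    `run_puzzle(skill_pack, puzzle)` stands in for launching the agent
    (same model for every miner) and returns True on success."""
    return mean(1.0 if run_puzzle(skill_pack, p) else 0.0 for p in puzzles)

def rank_miners(submissions: dict, puzzles: list, run_puzzle) -> list:
    """Score every miner's submission on the same puzzles; best first."""
    scored = [(m, score_skill(s, puzzles, run_puzzle)) for m, s in submissions.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Because every submission faces the identical puzzle set and model, any difference in score is attributable to the skill file itself, which is exactly the controlled setup Alex is asking about.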

Speaker 1:
[50:43] So Lon is a writer and I'm a writer, which means that skill files make a lot of sense to us, because when it comes to typing out words and sentences, that's our bag. But I'm curious whether that inherent method of creating skills, these markdown files, these text files, gives them a lower ceiling in terms of improvement than if they were done with code. Or, alternatively, does it create a higher ceiling for improvement, because they're written in English, for example, rather than in code?

Speaker 6:
[51:12] I see a higher ceiling for the skills. There's an idea called fat skills, thin harness, though I may be misremembering the name. The thinking is that the harness is more like the operating system: it only handles file reading, talking to the different LLMs, the input and output, and there's a core component, a resolver, that decides the right time to load the right MD file. The harness does that well, and the rest, where the real magic happens, all the intelligence, lives in the skill layer. And it's open-ended: people will have their own CRM, as a skill, and different people's CRMs will differ a little. Eventually there will be an infinite variety of skills.

Speaker 3:
[52:13] I mean, I guess my question would be: I have a few skills that I've made, and I'm not a coder, just, hey, help me with YouTube titles or whatever. The way I make them is, I work on them with Claude together until we're happy with how the skill is written up, and then it's trial and error. I'm trying it: oh, I forgot to tell it to capitalize, it's using too many em dashes, and we sort of vibe-code the skill together for a few hours until it's perfectly tight. So is that essentially the same process that your agents are now replicating, or is it more about thinking it through in advance and taking out the vibe-coding time-waste period?

Speaker 6:
[52:52] Yeah, it's a similar process, but nowadays you write the SKILL.md by hand, and we want to replace that with agents. We designed the mechanism so that miners are already using agents to write the SKILL.md. They also vibe-code the MD, but they run their benchmark: they measure the result of the SKILL.md and use the agent to iterate, like you do. But they hand more and more of the work to the agent to automate this workflow. That's what we're working on.

Speaker 1:
[53:38] I think I get this. The question then becomes: what skills are the most interesting to set up competitions to improve? Because Lon just mentioned he's got his YouTube title skill. A bit niche, but interesting and useful.

Speaker 3:
[53:52] A lot of people make YouTube videos. They all need titles, man.

Speaker 1:
[53:57] Maybe, it's a good point. Is that the type of skill that's a good fit for Trajectory RL? Or is it more general skills that are going to be the early product-market-fit use case?

Speaker 6:
[54:10] Good question. Good question. There could be very different types of skills, and we're still exploring which ones are best to measure. That's why we set up seasons. In the first few seasons, we want to explore meta skills: they're easy to measure, and they can be widely used, like self-learning skills. That's the first season, which we launched less than a week ago. And yeah, I can...

Speaker 3:
[54:49] Please?

Speaker 6:
[54:49] Screen share it for you. So you can see there are different types of skills we can measure. In the first season, we're doing a meta skill called self-learning: we want to enable agents, when they encounter errors, to learn and fix them themselves. This is the first season, which we launched less than a week ago, and you can see we've created our benchmark.

Speaker 1:
[55:19] Yeah. Tell people what this chart shows because a lot of folks, Ning, are on the audio version. Tell them what they're seeing here.

Speaker 6:
[55:26] Basically, we take some popular self-learning skills that are already out on the skill hubs and many other places, run them on our benchmark, and also pick the winner from our subnet to do a side-by-side, apples-to-apples comparison of how they work. Because we created a benchmark, we can measure, and so we can improve; if you can't measure it, you can't improve it. If you look at the leaderboard, after only a week of running, we already see some very promising results: our subnet winner already performs a little better than the SOTA on the market. Just by keeping this season running, we will get a very good state-of-the-art self-learning skill. So, to come back to your question: yes, we will compete on more and more different types of skills, but we're starting with the more general meta skills first.

Speaker 1:
[56:41] So we have companies around the world building AI models, both open and closed source. We have Bittensor, with several subnets working on training models on a decentralized basis. And now, with Trajectory, we have a way to apply the same competitive logic to skills, essentially turning each skill file into an improving process similar to what we see elsewhere.

Speaker 6:
[57:06] Okay.

Speaker 1:
[57:07] So the result of this Ning is that everyone's agent is going to be more capable and more performant out of the box, because the skills you can bring to them are already better.

Speaker 3:
[57:18] Okay.

Speaker 1:
[57:19] That makes a lot of sense to me, and I would love that. I've made some skills too. They're garbage. So I would love to get some help from the experts.

Speaker 3:
[57:25] Yeah, I think that leads to my next question, which is: conventionally, the way I think of skills, they're basically free. Somebody designs a skill and then they tweet it out, like, hey, I wrote this X post about how I trained my new OpenClaw skill, and da-da-da-da-da, try it out yourself, here it is. I think you're looking forward to a future where skills become a lot more dense and a lot more valuable, where they're actually worth something and people would pay you for a skill. Is that the vision, and how much do you think people are going to be willing to drop on a really amazing, right-out-of-the-box skill?

Speaker 6:
[58:04] I think in the future there will definitely be business value inside skills. If you recall the early PC days, back in the early 90s, there was a bunch of free software. But later, with smartphones and the App Store, there was super software, there was Instagram. Our current mission is to maximize the distribution and consolidation of skills. Later, there are hundreds of ways you could figure out to do the monetization.

Speaker 1:
[58:37] Do you need to monetize at all? Because the way I was thinking about the competition, the Bittensor subnet kind of self-funds via emissions. So could you create a system here that doesn't actually need a business on the back end, and instead is essentially just a recurring competition to create better and better skills? Then everyone can use them, because the TAO emissions filtering down to the subnet already compensate everyone for the work they're doing.

Speaker 6:
[59:07] Good question. This is actually exactly what we're doing now: we're leveraging Bittensor incentives to drive agents to submit and optimize the skill MD files. But my thought would be, at the end of the day, we still need to find PMF. We still need to make money, make real products. So we take advantage of Bittensor to kickstart us, to incubate us to a state where we have massive adoption and can find a way to charge users. There's also another very good angle, which is the data. When we run so many skills and collect so much data, that data is valuable as well. We could sell it to model companies, or even try fine-tuning our own models to, yeah, drive them to...

Speaker 3:
[60:11] I got one more question that will let you go, Ning. What is the most valuable or useful skill that's been designed so far on Trajectory? Can you walk us through, like, I want to get a clearer example of like how intense and awesome these skills are gonna be.

Speaker 6:
[60:26] Oh, okay. Yeah, so we're relatively new. We've been on Bittensor for a bit more than a month, and we launched our first season less than a week ago. The first season is about self-learning, and we already see good self-learning skills, which after just one week is very impressive. People have already found some good ways to write self-learning SKILL.md files. So now you can go to our website and try those self-learning skills, and if you use them day to day, they'll make your agent make fewer mistakes, save your tokens, and solve your problems more effectively.

Speaker 1:
[61:07] All right. Well, we really, really appreciate it, Ning. What's the website and tell us when Season 2 begins?

Speaker 6:
[61:14] Oh, Season 2. So the website is trajectoryrl.com.

Speaker 1:
[61:20] trajectoryrl.com?

Speaker 6:
[61:21] Yeah. So Season 2 will start in about one month. We plan to run Season 1 for one month, then start Season 2 after that. But in the future, as I said, we want to drive this whole process with agents, and continuously roll out new seasons run by agents. We want to build an AI-native company ourselves.

Speaker 1:
[61:49] Well, I freaking love it because I need better skills. I think everyone does, and I think that this is such a lightweight, easy-to-share format that if you make a better one, the whole world gets to benefit from it. So to me, there's a lot of really human positive gains to be had here, and that's just super encouraging, Ning.

Speaker 6:
[62:08] Thank you. Thank you, guys.

Speaker 1:
[62:09] Well, Ning, thanks for coming on the show. After Season 2, come back and tell us what people have built and how much you're improving the world, because I want to stop working very soon, and I'm hoping AI gets me there. Thanks, Ning. Appreciate it.

Speaker 6:
[62:20] Thank you.

Speaker 1:
[62:21] Bye.

Speaker 3:
[62:21] I like thinking about companies in terms of seasons. You get to talk about it like it's TV. Man, I can't wait for season two of Trajectory RL. They're going to really up the stakes.

Speaker 1:
[62:29] I mean, the thing that really blows me away here, Lon, is the simple fact that we're now seeing essentially a decentralized network designed for ML competitions coming together to have the nerds battle it out to write the best sentences in English.

Speaker 3:
[62:43] Yeah. It's funny that skills are just markdown. I did not even realize that when I was first teaching OpenClaw skills. I thought there was code being written in there. But it was just like, hey, Claude, here's what I want you to do, first do this, then do that. And I was like, oh, I could have done this myself.

Speaker 1:
[62:55] I remember when Anthropic first announced these. I was reading through the announcement, this is back in, what, mid-'25, late '24, somewhere in there. And they were like, a SKILL.md file is a text file with words in it. And I'm like, what am I missing here? This sounds useless. Why would you ever want that? That doesn't do anything. And then it turns out that, one, they were right and I was wrong. But also, the power of the written word.
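
[For listeners who haven't seen one: a skill file really is just a markdown document of instructions that the agent reads before acting. The sketch below is a made-up illustration for this writeup; the name, sections, and wording are hypothetical, not an actual Anthropic or Trajectory skill.]

```markdown
# Skill: tidy-commit-messages (hypothetical example)

## When to use
When the user asks you to commit changes to a git repository.

## Instructions
1. Summarize the change in an imperative subject line under 50 characters.
2. Leave a blank line, then briefly explain *why* the change was made.
3. Never mention generated-file churn in the summary.
```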

Speaker 3:
[63:14] I think a lot of the power of OpenClaw initially, to doofuses like me — I'm sure the coders got immediately why it was valuable — was just that. It was finally delivering on the promise that you can literally just tell the AI what you want and it'll just do it. We've been promised that for so long. And then you would use OpenClaw, and you'd just be in your Slack, and you'd be like, do this, and it'd go, okay. It wouldn't always work, but it would say okay.

Speaker 6:
[63:38] Yeah.

Speaker 3:
[63:39] And it would act like it understood you.

Speaker 1:
[63:40] You know, this might be a good model. I should have kept Ning on for this, but whatever, I'll just say it now before we move on. Have you heard of the Humble Bundle model?

Speaker 3:
[63:48] Yeah, the gaming, like Valve games. If you buy them, you get a bunch of indie games for one low, low price, and then you can try them all out.

Speaker 1:
[63:57] Yeah, you can get like, you know, 15, 20 games for like 10 bucks. And so to me, I love to contribute to projects that I enjoy, things that I really love to use. Like, for metal bands I follow — I own several heavy metal Christmas tree ornaments, not because I really need them, but because I wanted to support the bands, you know? So if someone did a humble bundle of skills, I would so happily contribute 15, 20, or frankly even 100 bucks.

Speaker 3:
[64:22] Well, I will say, I notice how much my skills get better over time as I iterate. Every time I notice something I don't like, I go back and fix the skill to make sure that doesn't happen again, and vice versa. And if doofus me, who's barely paying attention, can bring that kind of iteration over time, I can only imagine that people who are really focused and incentivized to make these skills much better could make them 100X better. I'm only making mine 3X better because I've got other stuff to do.

Speaker 1:
[64:52] You know, Lon, you've got to stop with the putting yourself down. You go, oh, I'm a doofus, oh, I'm not a coder. You have been deep in the OpenClaw trenches to the point where I know the name of your agent, which is a weird thing to know. It feels a little bit too personal.

Speaker 3:
[65:05] He's named after the Gaff character from Blade Runner.

Speaker 1:
[65:07] But you use OpenClaw a lot. You use AI all the time. You've made your own skills.

Speaker 3:
[65:12] I did. I did use OpenClaw, but my OpenClaw is locked in an AWS rack somewhere, and he has a lot of trouble getting out. People are very dubious about a bot that's inside AWS. They're like, get out of here, you. So I've actually switched. I'm mostly using Claude Cowork now, because it's much easier. You're just like, here, you're in my Notion now, and Claude goes, okay. Gaff is like, I can't get in. What are you trying to show me? Can you copy and paste the whole thing?

Speaker 1:
[65:41] But the same SKILL.md file works on both, which is incredible.

Speaker 3:
[65:45] It is incredible.

Speaker 1:
[65:46] That's the power of them. All right. Before we go, a couple of other things to note, folks, from the news tickers out there. AngelList, right before we got on air, dropped a new product called USVC, which is a private-market fund designed to give individuals who have $500 — that's the minimum — exposure, Lon, to a number of major names in the world of venture capital and startups.

Speaker 3:
[66:09] I love that you said individuals who have $500 is an exclusive group.

Speaker 1:
[66:15] No, that's the whole point. It's not. Everyone has, well, I'm not going to say that and get made fun of on the Internet, but most people can find $500 somewhere, and then they can take part in venture economics. I got to read a little bit of the prospectus, and what I learned is this actually operates a bit like a venture fund. You put money in, you can't take it out. There will be some repurchases on a quarterly basis, but mostly you're waiting for exits.

Speaker 3:
[66:40] Yeah, I see they have the power law here on the website: one investment has the potential to generate a higher return than the rest of the portfolio combined, which is why USVC intends to build a bundle, not a single bet. So the idea is that even your $500 gets spread out, so you're not all-in on one thing and then you lose your shirt.

Speaker 1:
[67:00] Yes. I think it's a great idea, and we'll have more about it. This is actually one of two products like this; Robinhood has a publicly traded venture fund thing. So this does seem to be a growing product category as companies stay private longer.

Speaker 3:
[67:12] I did notice with the Robinhood one, they're kind of locked out of a lot of the most sought-after private companies. I wonder if that's going to happen with this as well. Like OpenAI — the Robinhood venture fund isn't exposed to OpenAI, and a lot of people were like, why not? That's what I want.

Speaker 1:
[67:31] Do you know where a lot of venture capital funds run their technology? AngelList. You know what that means? AngelList has a lot of access. I'm hoping that this is actually magic, frankly, Lon. My expectations are high for what they've picked up.

Speaker 3:
[67:42] Your expectation is magic.

Speaker 1:
[67:45] I'm sorry. High expectations are a gift. You're welcome. Next up, the compute wars continue to absolutely go crazy. Two things of note here for everyone out there paying attention. Lon, first of all, Anthropic and Amazon penned a new deal this week: $5 billion of investment, 5 gigawatts of compute, $100 billion of spend over the next 10 years from Anthropic to AWS. And maybe, maybe, this will solve the Claude crisis in which everyone gets locked out after 10 minutes?

Speaker 3:
[68:12] Theoretically. I mean, I think Anthropic's got to be worried about how people will eventually solve it themselves, which is to find another model to use. I feel like they have a limited window here to solve this problem before people are like, oh, okay, I'll try Codex. The gap is closing.

Speaker 1:
[68:31] And we're talking about AI timeframes. So whatever you were thinking, divide it by 10.

Speaker 3:
[68:34] Yeah, exactly.

Speaker 1:
[68:36] It's brutal. The other thing, and this came out today, is that Google has two new chips. They make tensor processing units. Don't forget, a scalar is a zero-dimensional tensor. A vector is a one-dimensional tensor. Tensors have multiple dimensions. Anyways, it matters if you care about data shape versus flatness, but their TPUs are now on generation 8, and they have two different versions, Lon. One built for training. Get this, TPU 8T, and then there's TPU 8I, which is for inference. I think this is brilliant, and I think it goes to show that Nvidia is not going to make all the money in the world.
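
[Alex's scalar/vector/tensor aside can be sketched in a few lines. This is just an illustration using NumPy, unrelated to Google's TPUs: a tensor's rank is its number of axes, which NumPy exposes as `ndim`.]

```python
import numpy as np

# A scalar is a 0-dimensional tensor: no axes at all, just one value.
scalar = np.array(3.0)

# A vector is a 1-dimensional tensor: one axis.
vector = np.array([1.0, 2.0, 3.0])

# Higher-rank tensors add more axes, e.g. a 2-D matrix.
matrix = np.ones((2, 3))

# The "data shape versus flatness" point: same six numbers, different shape.
flat = matrix.reshape(6)

print(scalar.ndim, vector.ndim, matrix.ndim, flat.ndim)  # 0 1 2 1
```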

Speaker 3:
[69:09] Yeah, there's going to be... I mean, we're seeing there are so many of these companies now that are working on their own... Isn't there... Amazon has Trainium. They should really work on the name for it, because it's like...

Speaker 1:
[69:21] Yeah, it's not... It's Inferentia. It sounds like a mineral.

Speaker 3:
[69:24] Unobtainium from Avatar? Like, go one level deeper on that guy.

Speaker 1:
[69:29] We'll get the pickaxe out and figure out what's going on. There are also a lot of companies in the startup world. Etched is working on an LLM... sorry, a transformer-specific ASIC, for example.

Speaker 3:
[69:39] Cerebras has those massive room-sized chips they're working on, the wafer-scale ones.

Speaker 1:
[69:44] And they re-filed to go public last Friday.

Speaker 3:
[69:47] Oh, wow.

Speaker 1:
[69:48] It's a very interesting return to the markets. And then, I guess, Lon, just one last thing before we go. Should we just talk for a moment about Apple getting a new CEO?

Speaker 3:
[69:57] Yeah, Tim Apple is out. John Apple is... I'm insisting...

Speaker 1:
[70:01] Can we call him Johnny Amkissy?

Speaker 3:
[70:02] We have to call the new guy — his name is John Ternus — but I think we should just switch over to calling him John Apple now. I think whoever's the CEO of Apple, that becomes your last name. I think that's only fair. Why? Because Donald Trump messed up one time and called Tim Cook Tim Apple. I'm literally not even here to make fun of our president; I just think it's a very funny thing to call the CEO of Apple Tim Apple or John Apple. So anyway, yes, Apple CEO Tim Cook is stepping down as chief executive and transitioning to Executive Chairman of Apple's Board of Directors. John Ternus, now John Apple, the current Senior Vice President of Hardware Engineering, is stepping into the CEO role. I know a lot of people were very excited that it's a hardware guy. And a lot of people are thinking this is going to represent — rather than somebody who's sort of trying to squeeze as much money as they can out of the Jobs legacy by releasing new versions of the classic product lineup — here's a guy that's going to rethink the whole company based on silicon and new devices and where they are right now.

Speaker 1:
[71:08] Also someone with incredibly deep DNA in the world of Apple. I went to his LinkedIn and pulled this image: went to school at UPenn, '93 to '97, had four years as an engineer at Virtual Research, and then since July of 2001 he's been at Apple, which is nearly 25 years. That's an impressive run at one company. It just goes to show that in the old days you could work for one company for a while; you didn't get laid off all the time.

Speaker 3:
[71:33] I read that he was one of the lead minds behind your new MacBook, your MacBook Neo. He was one of the champions of that, which has been a well-regarded, sort of a rare new Apple product that people like and feel good about. So there you go.

Speaker 1:
[71:48] It's magic because it's the first Apple product I've ever owned where, if I drop a Dr. Pepper onto it and completely ruin it, I don't have to cry.

Speaker 3:
[71:57] I feel like I could drop a Dr. Pepper on my iPhone and it would stand up to that. I don't know.

Speaker 1:
[72:04] Oh, I meant something with a keyboard.

Speaker 3:
[72:05] Oh, okay. Yes. Yes. Fair enough. Yes. Wait.

Speaker 1:
[72:07] Like if I torch my N3 Max Pro Viper, I'll be sad.

Speaker 3:
[72:12] I know we're wrapping up, but I need to stop you right there. So what about the MacBook Neo? If you dropped it, if you poured a Dr. Pepper on it, how would it be fine? I don't understand.

Speaker 1:
[72:22] Oh, I don't care. It's cheap.

Speaker 3:
[72:24] Oh, it's so cheap. I understand. Okay. I thought you were saying something about it like a new kind of aluminum that repels Dr. Pepper? No, no, no.

Speaker 1:
[72:34] My pink MacBook Neo does not have special anti-soda properties to it.

Speaker 3:
[72:38] You could afford to buy another one because they're not the most expensive thing in the world.

Speaker 1:
[72:43] I paid like 600 bucks for it, which for a laptop usually means you're getting something from HP, right? Something that's plastic and terrible and has gunk all over it.

Speaker 3:
[72:53] This is the company, infamously, where there's that video of Ternus on stage introducing a $20,000 computer stand. So your new MacBook costs less than that stand. Yes.

Speaker 1:
[73:04] Well, there are two markets for Apple products: sane people and insane people. And you know what? They sell to all types. One quick note here from our producer, Salah, who says that Dr. Pepper tastes like medicine. Salah, you're fired.

Speaker 3:
[73:16] I love that.

Speaker 1:
[73:16] And with that, TWiST will be back on Friday. We'll see you guys then. Lon, an absolute treat. We appreciate everyone tuning in to the live show, the Noti Gang. We're back Friday, noon Texas time, 1 p.m. Eastern. Y'all are lovely. See you then. Bye-bye.