title Elon Buys Cursor?

description SpaceX struck a deal giving it the option to acquire Cursor for $60B or pay $10B, as xAI scrambles to catch up in AI coding. Google unveiled new TPUs and an agent platform, OpenAI shipped ChatGPT Images 2.0, and Mythos got accessed by unauthorized users.



SpaceX says it's working with Cursor to build "the world's most useful models" and it has the right to acquire Cursor for $60B or pay $10B for the partnership (NYT)


Google unveils a new TPU lineup consisting of the TPU 8t for AI training and the TPU 8i for inference, with general availability scheduled for later in 2026 (Bloomberg)


OpenAI releases ChatGPT Images 2.0 with new "thinking capabilities", allowing it to search the web to help it create multiple images from a single prompt (The Verge)


Source: a handful of unauthorized users in a private Discord channel have been accessing Anthropic's Mythos model since the day the company announced it (Bloomberg)


Meta is installing tracking software on US staffers' computers to capture mouse movements, clicks, and keystrokes in work-related apps for use in AI training (Reuters)


Disclaimer:

● Initial 3-week subscription and 4 weeks of medication from $79 plus tax, and $179 per month plus tax for a 12-week subscription thereafter. Final pricing depends on program selection.

● Noom GLP-1Rx Program involves healthy diet, exercise and support. Individual results vary. Meds & personalization based on clinical need. Not reviewed by FDA for safety, efficacy, or quality. No affiliation with Novo Nordisk Inc., the only US source of FDA-approved semaglutide. Not available in all 50 US states.

● Based on an analysis of self-reported data from 1,254 engaged Noom users.

pubDate Wed, 22 Apr 2026 19:23:00 GMT

author Morning Brew

duration 1250000

transcript

Speaker 1:
[00:04] Welcome to the Tech Brew Ride Home for Wednesday, April 22nd, 2026. I'm Brian McCullough. Today, SpaceX struck a deal, giving it the option to acquire Cursor for $60 billion, or maybe pay $10 billion, as xAI scrambles to catch up in AI coding. Google unveiled new TPUs and an agent platform, OpenAI shipped ChatGPT Images 2.0, and what they were afraid of happening happened: Mythos got accessed by unauthorized users. Here's what you missed today in the world of tech. Today's episode is brought to you by Doppel. Disguises are getting pretty good these days, and I'm not just talking about when you throw on a pair of glasses and a hoodie and hope you won't get recognized. We're talking about the kind of disguises that end up in your inbox, on your phone, or on the web, blending in as your everyday internal email, casual text message, or normal website. Doppel strengthens teams' resilience by giving employees the tools and defenses they need to protect themselves from increasingly sophisticated social engineering threats. Their digital risk protection takes it one step further by keeping an eye on every channel to connect patterns and shut them down fast. From deepfakes to bad links to impersonation attempts, Doppel helps you stay ahead of these threats with their AI-native social engineering defense platform. Learn more at doppel.com. So I think SpaceX has just acquired Cursor, kind of, sort of. Quoting the Times: SpaceX, Elon Musk's rocket and satellite company, said on Tuesday that it had struck a deal with the artificial intelligence startup Cursor that could result in its acquiring the young company for $60 billion. In a social media post, the rocket maker said the combination with Cursor, which makes code-writing software, would allow it to build, quote, the world's most useful AI models. SpaceX added that the agreement gave it the option, quote, to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.
SpaceX is making the deal just as it prepares to go public in what is likely to be one of the largest initial public offerings ever. It is unclear if it plans to consummate a transaction with Cursor before or after its IPO, which could happen as early as June. A code-writing startup has seemingly little to do with rocket launches and a satellite internet service, which are SpaceX's main businesses. But Mr. Musk has been increasingly interested in AI. The tech mogul helped found OpenAI, the company behind ChatGPT, and in recent years established xAI, which created the Grok chatbot. After the hires were announced, Mr. Musk posted that xAI was not built right the first time around, so is being rebuilt from the foundations up. As for Cursor, the startup had been in talks to raise new funding in recent weeks, a person with knowledge of the matter said, but the rival coding tools from Anthropic and OpenAI created competitive pressure for the much smaller startup. Under its agreement with SpaceX, Cursor could obtain either a $10 billion injection of new capital or the $60 billion payday if the rocket company buys it. Cursor said in a blog post on Tuesday that its lack of access to computing power for training its AI models had bottlenecked its growth. The deal with SpaceX will give it access to xAI's infrastructure, which includes a supercomputer capable of training AI models. That will help Cursor, quote, dramatically scale up the intelligence of our models, the startup said. End quote. Now quoting The Decoder: For Elon Musk, the deal fills a hole xAI hasn't been able to patch on its own. xAI lags behind OpenAI's Codex and Anthropic's Claude Code on coding performance and tooling, and it's been losing talent. Back in March, xAI poached two former Cursor execs, Andrew Milich and Jason Ginsberg. The company is currently training several new Grok models. End quote.
Quoting MG Siegler: In noting a compute deal between the two last week, I wrote, would be very curious how this deal came together slash was structured. Because while the high-level notion made some level of sense, it sure felt like there would need to be a lot of structure around it for both sides. As it turned out, that structure is a $60 billion call option for SpaceX to buy the entire company. And if they don't, they'll pay a, quote, mirror $10 billion, sort of a de facto breakup fee, albeit tied to compute deals. Price aside, this actually makes more sense to me. Cursor is under immense pressure from the foundation labs, who have decided their space is the most important one to own at the moment. They clearly had an option on the table to keep going, which seems to say a lot, though also perhaps bullishness around SpaceX pre-IPO shares, obviously. And SpaceX, meaning the xAI subsidiary, is under immense pressure because they can't compete in that space yet. And why buy two udders when you can get the whole cow? Next question: will Anthropic and or OpenAI now fully pull their models from Cursor? Sure, that means forgoing money, but Anthropic in particular could undoubtedly use the capacity back at the moment. The real loser here may be Meta, which has no viable coding option yet. Forget space Twitter and space data centers. Now we have space vibe coding. End quote. Google has unveiled a new TPU lineup consisting of the TPU 8t for AI training and the TPU 8i for inference, with general availability scheduled for later in 2026. They also announced the Gemini Enterprise Agent Platform, a revamped developer tool built on Vertex AI that manages the full lifecycle of AI agent fleets. They also unveiled Workspace Intelligence, which understands, quote, complex semantic relationships between data and Google Workspace apps to provide personalized context when working among them.
But back to the headline, quoting Bloomberg: Alphabet's Google Cloud division unveiled the latest generation of its Tensor Processing Unit, or TPU, a homegrown chip that's designed to make AI computing services faster and more efficient. The new lineup will come in two versions, the company said at its Google Cloud Next event on Wednesday, where it also announced a $750 million fund to help boost corporate AI adoption and showed off tools for building AI agents. The TPU 8t is tailored for creating artificial intelligence software, while the TPU 8i is designed to run AI services after they've been created, a stage known as inference. Shares of Alphabet gained 1.7% before markets opened in New York on Wednesday. Google has emerged as one of the most successful makers of in-house AI chips in an industry dominated by NVIDIA. TPUs have become a hot commodity in Silicon Valley in recent months, and the company is looking to build on that momentum with the latest versions. The effort is part of a broader push to make it cheaper and less energy intensive to roll out AI software. The company also is working to make services more responsive. The new TPUs store more information on the chip, helping provide the rapid responses that users crave. But demands from increasingly complex layers of software are only growing. It's about how you deliver the lowest possible latency of the response at the lowest possible cost per transaction, said Mark Lohmeyer, Google's Vice President of Compute and AI Infrastructure. The number of transactions is going way up, and the cost per transaction needs to go way down for it to scale. Creating AI services and software is done by using systems that can sift through massive amounts of data very quickly to make connections and establish patterns that can be represented mathematically. Inference, running the software and services, benefits from processors that have huge amounts of memory integrated into them.
This approach helps make AI responses more instantaneous because the component doesn't have to go seek information stored elsewhere. It's particularly useful when computers reason through problems, taking multiple steps and learning from their own actions. The training chip, the 8t, can be combined into groups of 9,600 semiconductors, Google said. When deploying such massive systems, power is increasingly the major constraint in data centers. Owners, therefore, need systems that are more efficient to get the best out of the limited availability of electricity. TPU 8t delivers 124% more performance per watt than the preceding generation, with TPU 8i providing a gain of 117%. That step up is helped by improved in-house networking that increases the chips' ability to communicate with one another efficiently. AI systems built on the chips will be generally available later this year, Google said in a statement. The company will continue to offer services based on NVIDIA chips to customers who want to use the systems that currently dominate AI computing, it said. Google intends to be among the first to deploy gear based on a new design from NVIDIA coming in the second half of this year, Lohmeyer said. Like Google, NVIDIA is focusing more on the inference stage of AI. Its forthcoming lineup will include technology from its acquisition of Groq, technology tailored specifically for providing ultra-fast responsiveness. NVIDIA Chief Executive Officer Jensen Huang has said that more than 20% of AI workloads might be best served by that type of chip. Groq was founded in 2016 by a group of former Google engineers. Last December, NVIDIA paid $20 billion for a license to use its technology and hired most of its engineering team. Separately, on Wednesday, Google's cloud computing unit showcased a set of tools that can create AI agents and track their work within companies, including a dedicated inbox for the virtual bots to post information and progress reports.
Google also introduced updates across its Workspace productivity suite and offered up a vision in which AI agents dramatically overhaul the day-to-day routines of the average worker, end quote. Google also said this: 75% of new code created inside of Google is now generated by AI and reviewed by human engineers. That's up from 50% last fall. When I say micro, what comes to mind? Scopes? Bangs? Well, consider reframing that to micro-wins, micro-habits, and yes, even micro-doses of GLP-1s. That's the foundation of Noom's micro program. The Noom micro-dose GLP-1 program is the easy way to start GLP-1 medication. That's because Noom starts you on a smaller dose of medication and then gradually scales you up depending on how your body reacts. Noom found users lose on average 8 pounds in 30 days on their micro-dose protocol. The Noom GLP-1 micro-dose program starts at $79 and is delivered to your door in 7 days. Start your micro-dose GLP-1 journey today at noom.com. That's noom.com. Noom. Micro changes, big results. See podcast description for full disclaimers.

Speaker 2:
[11:01] Wishing you could be there live for the big game, soaking up the atmosphere in a crowd. But too often, life gets busy or the price holds you back. Priceline is here to help you make it happen. With millions of deals on flights, hotels, and rental cars, you can go see the game live. Don't just dream about the trip, book it with Priceline. Download the Priceline app or visit priceline.com. Actual prices may vary, limited time offer.

Speaker 1:
[11:30] Even I'm getting to the point where I can't keep up with all of the stuff happening. Quoting The Verge: OpenAI is rolling out the latest version of its AI-powered image generator with new thinking capabilities, allowing it to search the web to help it create multiple images from a single prompt. On Tuesday, OpenAI announced that ChatGPT Images 2.0 can now create more sophisticated images with improvements to its ability to follow instructions, preserve details of your choosing, and generate text. It's powered by OpenAI's new GPT Image 2 model, with new thinking capabilities available to ChatGPT Plus, Pro, Business, and Enterprise subscribers. When a thinking model is selected, the chatbot's image generator can pull information from the web, create visual explainers based on files you upload, and, quote, reason through the structure of the image before generating. ChatGPT Images 2.0 can also create up to eight images at once with thinking enabled, all while maintaining the same characters, objects, and styles in each scene. OpenAI says this should make it easier to generate things like manga pages, a series of social graphics, or design plans for every room in a house. All ChatGPT users can take advantage of updates that let ChatGPT Images 2.0, quote, better capture the defining characteristics of photos, in addition to pixel art, manga, cinematic stills, and other types of images. It can now generate images with a resolution of up to 2K and in more aspect ratios, ranging from wider formats such as three to one to taller ones like one to three. And it's not only better at generating English and other Latin-script languages. OpenAI says Images 2.0 makes significant gains in creating images containing text in Japanese, Korean, Chinese, Hindi, and Bengali. OpenAI first released ChatGPT Images last year and launched its last big update in December, adding faster image generation and better photo editing capabilities.
Since then, competition has only been getting stronger with the arrival of tools like Google's Nano Banana Pro and Microsoft's MAI-Image 2. ChatGPT Images 2.0 is available to all ChatGPT and Codex users starting today. End quote. This doesn't sound good to me, though. Quoting The Verge again: Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a, quote, small group of unauthorized users, Bloomberg reports. An unnamed member of the group, identified only as a third-party contractor for Anthropic, told the publication that members of a private online forum got into Mythos via a mix of tactics, utilizing the contractor's access and, quote, commonly used internet sleuthing tools. The Claude Mythos Preview is a new general-purpose model that's capable of identifying and exploiting vulnerabilities, quote, in every major operating system and every major web browser, when directed by a user to do so, according to Anthropic. Official access to the model is limited to a handful of companies through the Project Glasswing Initiative, including NVIDIA, Google, Amazon Web Services, Apple, and Microsoft. Governments are also eyeing the technology. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized. We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments, an Anthropic spokesperson said in a statement to Bloomberg. Anthropic currently has no evidence that the unauthorized access is impacting the company's systems or goes beyond the third-party vendor's environment. The model was reportedly accessed illicitly on April 7th, the same day that Anthropic announced it was releasing Mythos to a limited number of companies for testing.
The group that gained the unauthorized access has not been publicly identified, though Bloomberg reports that its members are part of a Discord channel that seeks out information about unreleased AI models. The group accessed Mythos by using knowledge of Anthropic's other model formats, obtained from a recent Mercor data breach, to make an educated guess about its online location. Members have been using Mythos regularly since gaining access, providing screenshots and a live demonstration of the model as evidence to Bloomberg, though reportedly not for cybersecurity purposes, in an attempt to avoid detection by Anthropic. Other unreleased Anthropic AI models have also been accessed by the group, according to Bloomberg. End quote. And finally today, Meta is installing tracking software on US staffers' computers to capture mouse movements, clicks, and keystrokes in work-related apps for, well, what do you think? Again, everything is data for AI now. Quoting Reuters: The tool, called the Model Capability Initiative, or MCI, will run on work-related apps and websites, and will also take occasional snapshots of the content on employees' screens, according to one of the memos posted by a staff AI research scientist on Tuesday in a channel for the company's model-building Meta Superintelligence Labs team. The purpose, according to the memo, was to improve the company's AI models in areas where they struggle to replicate how humans interact with computers, like choosing from drop-down menus and using keyboard shortcuts. This is where all Meta employees can help our models get better simply by doing their daily work, it said. The Facebook and Instagram owner has been moving aggressively to integrate AI into its workflows and reshape its workforce around the technology, arguing it will make the company operate more efficiently.
Meta CTO Andrew Bosworth told employees in a separate memo shared on Monday that the company would step up internal data collection as part of those AI-for-work efforts, now rebranded as the Agent Transformation Accelerator, or ATA. The vision we are building towards is one where our agents primarily do the work and our role is to direct, review, and help them improve, Bosworth said. The aim, he added, was for agents to automatically see where we felt the need to intervene so they can be better next time. Bosworth did not explicitly spell out how those agents would be trained, but said Meta would be rigorous about building up data and evals for all the types of interactions we have as we go about our work. Meta spokesperson Andy Stone said the data gathered via MCI would not be used for performance assessments or any other purpose besides model training, and that safeguards were in place to protect sensitive content, without elaborating on which types of data would be excluded from collection. If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them, things like mouse movements, clicking buttons, and navigating drop-down menus, said Stone. Meta is planning to lay off 10 percent of its workforce globally starting on May 20th and is eyeing additional large cuts later this year. End quote. Sometimes the implications of things write themselves, and I don't even have to say anything to underline that. Talk to you tomorrow.