transcript
Speaker 1:
[00:00] She knows. How?
Speaker 2:
[00:01] Did you blab?
Speaker 3:
[00:02] No.
Speaker 1:
[00:02] The Devil Wears Prada 2 is the movie event 20 years in the making. I honestly can't with the secrets anymore, so I think we just, we should tell her.
Speaker 4:
[00:10] Will you two please spit it out already?
Speaker 1:
[00:13] On May 1st, be the first to experience it only in theaters.
Speaker 2:
[00:17] In light of the recent scandal, I'm here to restore your credibility.
Speaker 5:
[00:20] Oh, because we're a team now? That's a nice story.
Speaker 1:
[00:24] The Devil Wears Prada 2, rated PG-13, may be inappropriate for children under 13. Only in theaters May 1st.
Speaker 6:
[00:31] I'm going to Disney World this weekend.
Speaker 4:
[00:34] You are?
Speaker 6:
[00:35] Yes.
Speaker 4:
[00:35] That's very exciting. Have you been to Disney World before?
Speaker 6:
[00:38] No, I've never been to Disney World before.
Speaker 4:
[00:39] That is very exciting.
Speaker 6:
[00:40] And so I have been looking up facts about, I'm a real like facts dad. I like to read the signs. I like to have this.
Speaker 4:
[00:48] Our fact checker would disagree with that statement, but go on.
Speaker 6:
[00:52] And the most interesting thing that I learned about Disney World is the entire Magic Kingdom is elevated off ground level. And the first floor is all of these tunnels that like the workers use to like come and go and like move stuff around. So you are literally on the second floor.
Speaker 4:
[01:08] That is great news because my understanding is that all of Florida is sinking into the ocean. And so I feel like that'll buy them some time.
Speaker 6:
[01:16] I'm thinking about pivoting to becoming like a Disney adult. What do you think about that?
Speaker 4:
[01:20] It's so funny that you mentioned this because I went to Disneyland in December for the first time in over a decade. And I had so much fun that my fiance and I went back to Disneyland this most recent weekend with his family. No. And I thought, how many times can you go to Disneyland in one year before you just become a de facto Disney adult? And I think the answer is two, which means I think I am a Disney adult now. Yeah? Yes. I love it. Who knew?
Speaker 6:
[01:46] I did not predict this for you.
Speaker 4:
[01:48] Well, I didn't either. And it's disturbing. This is great. I can't wait to hear about your Disney experience. This feels like a new, like, maybe this is something we could do together as friends. Just head down to the D world. Is that what you call it?
Speaker 6:
[02:01] Is that what the Disney adults call it?
Speaker 4:
[02:02] Yeah, it's one of the codes you hear on the walkie talkies.
Speaker 6:
[02:09] I'm Kevin Roose, a tech columnist at The New York Times.
Speaker 4:
[02:11] I'm Casey Newton from Platformer.
Speaker 6:
[02:13] And this is Hard Fork.
Speaker 4:
[02:14] This week, Diva Down. Tim Cook is stepping down as CEO of Apple. What did he get right? What did he get wrong? Then, Andrew Yang is here to discuss his early bet on AI taking jobs and how the universal basic income may be making a comeback. And finally, hats off for some HatGPT.
Speaker 6:
[02:44] Well, Casey, the big news this week is that Apple CEO Tim Cook is stepping down.
Speaker 4:
[02:50] Yeah, it is a really momentous occasion in the history of technology. Apple does not change CEOs all that often. And Tim Cook, while we both have a lot to say about him, I think undoubtedly just had an extraordinary run as a public company CEO.
Speaker 6:
[03:04] Yeah, so Apple announced this leadership transition on Monday. Tim Cook is going to step into a new role as executive chairman. He's not leaving entirely. But John Ternus, Apple's Senior Vice President of Hardware Engineering and a long time Apple guy will become the next CEO. This is obviously not a company that has had a lot of CEOs. They tend to stick around and promote from within. And so I think this is about as expected a leadership transition as you could get. There have been rumors and reports that Cook was considering retiring for many months. But this made it official. And today we should talk about what Tim Cook's legacy is. The highlights, the lowlights, how has Apple changed in the years since he took over as CEO? And what do we expect out of John Ternus, the new guy?
Speaker 4:
[03:53] Yeah, lots to dive into.
Speaker 6:
[03:54] So let's talk about some numbers here because I think Tim Cook's run at Apple is going to be remembered for just the overall growth that the company has experienced under his leadership. Since he stepped into the CEO role in 2011, Apple's market cap has grown from $350 billion to around $4 trillion, so a 10x multiple there. Its yearly revenue nearly quadrupled. Its stock price has gone up roughly 2000 percent. And a lot of the products that Tim Cook has overseen have been, I would say, surprising hits.
Speaker 4:
[04:29] Yes, and I think this is like, if you want to be intellectually honest about Tim Cook's Apple, you have to talk about this particular dimension because I think the knock on Tim Cook was, well, he's not a product guy. He doesn't know how to launch new product categories. But you look at the past 15 years and he actually did.
Speaker 6:
[04:47] Yes, so I think the biggest thing that he will be known for as a new device or as a new platform in his legacy is the Apple Watch, which I am wearing, you are wearing. I mean, everyone has an Apple Watch now. And I remember when the Apple Watch came out, there was this moment of like, oh, Apple's cooked. Like they can no longer innovate. This thing is obviously not going to work. This is just a gadget for luxury users. And this is not going to sort of be useful enough for many people to shell out for. And then I think Tim Cook, to his credit, saw that health was taking off, that people wanted to track their steps. They wanted to know if their blood oxygen levels were changing or if their heartbeat was irregular. They wanted to have fall detection. And I think he really saw that as the way to bring the Apple Watch to the mainstream, and it worked. It is a huge category now, and I think it is genuinely the best thing that they have launched under Cook's tenure.
Speaker 4:
[05:47] Yeah, and where I would give him credit was that when the first version of the Apple Watch came out, it wasn't entirely clear that it was a health product. It sort of had maybe one or two features in there, but Apple had to iterate on it over time. And that is what a great CEO does. Along with your users, you figure out what your own products are for and how to make more of the stuff that people want and do less of the stuff that they don't want. And so I think the Apple Watch is just the best example of Tim Cook doing that during his tenure.
Speaker 6:
[06:11] Yeah, and other Tim Cook success stories on the hardware side, AirPods obviously became a big deal during his tenure as CEO. I think this Apple Silicon bet that he made and oversaw was probably their most lasting success. They brought their chip design in house. They sort of weaned themselves away from Intel as their primary chip provider. And I think that is underrated as a thing that they did that was risky but that has paid off for them in a major way. They control their chip destiny now in a way that they did not when they were reliant on Intel and it has given them the ability to design custom chips like the M1.
Speaker 4:
[06:49] Yeah, and now Intel is partially owned by the government because that's how badly it went for them after Apple started making its own chips. So yeah, great for Apple, not great for Intel.
Speaker 6:
[06:59] Yeah, so there are also some successes on the services side of Apple's business. They have grown in places like Apple TV. They now own a big major Hollywood studio, Apple Pay, Apple Music. Together these are now something like a $100 billion business for them. And I think there have been some mixed successes on that side too. I don't think they have secured the software dominance that they had hoped for. And it's caused them a lot of problems in areas like antitrust. So I think his legacy will be a little more mixed when it comes to software and services, but still obviously strong growth for them.
Speaker 4:
[07:34] Yeah, this is one where I think my view is a little bit more mixed because on one hand, yes, this was like an unqualified success financially, but this is also the sort of stuff Apple started to do under Cook that I think undermined the love that people have for the company because it seemed like with every passing year, there was another app on your iPhone that Apple was asking you to pay an annual subscription for. And I do think that some of these services really did distort the market. When Apple decided that they were going to get into music and they were going to be able to compete on unfair terms because all the other music streamers had to pay them a significant percentage of their revenue just to be on the app store, and Apple didn't have to do that. Spotify freaked out so much. They said, well, I guess we're going to have to own the entire podcast market and also start selling audiobooks. And so decisions like that that Cook made wound up having these huge ripples throughout the industry that I actually do not think were positive overall.
Speaker 6:
[08:30] Yeah, I think that's a very fair point. And I think that's a piece where maybe Cook could have done a little better during his tenure. What else do you think Tim Cook did well?
Speaker 4:
[08:37] Well, I think that it really is notable how successfully Apple was able to avoid scandal under his tenure. CEOs rarely get credit for the things that don't happen under them. But look at the problems that Facebook slash Meta had over the past 15 years. Look at even the issues that Google had to deal with, with various employee revolts about a number of different things. Tim Cook oversaw some labor struggles. The company has been accused of union busting. But for the most part, there was never any giant gnarly scandal that Apple had to address under his tenure, with of course the one exception in 2014, when they put the new U2 album on everyone's iCloud account. But other than that, I think Tim Cook really kept his nose clean.
Speaker 6:
[09:28] That was a Tim Cook thing?
Speaker 4:
[09:29] That was, yeah, that happened three years into his tenure. And that rascal Bono convinced him to put Songs of Innocence into the hands of something like 500 million people. What's your favorite song off Songs of Innocence, by the way?
Speaker 6:
[09:43] That album has started autoplaying in my car so many times over the years. So that album became very well known, but not for perhaps the reason that Bono thought. Yeah, no, I think this is a good point. Not a lot of major scandals. I think at a time when mistrust in big tech is quite high and rising, I think that Cook managed to keep Apple kind of above the fray, and I think has done a remarkable job of becoming sort of the most trusted name in tech, which is not saying much. It's like a little bit of a mixed compliment, but I think people still do trust Apple, in part because of the privacy stuff that they've done under Cook's leadership.
Speaker 4:
[10:26] Yeah, and for what it's worth, you could see how this could have gone badly for them. Think about all the screen time debates that we had over the past 15 years, all of the issues that people have with all the social media companies. Some of that could have come back on Apple. People could have gone after Apple and said, hey, why are you letting all these apps in your app stores? Why aren't you developing real screen time controls and parental controls and all that stuff just slid right off them?
Speaker 6:
[10:48] Yeah. Okay, let's talk about some of the lowlights of Tim Cook's tenure.
Speaker 4:
[10:52] Or as one unknown member of our staff wrote in our prep document: okay, now let's talk some shit about this diva.
Speaker 6:
[11:01] That was actually a quote from you in the editorial meeting.
Speaker 4:
[11:06] Okay. Because I read that and I was like, oh, I like the attitude. I didn't realize I was just being quoted there.
Speaker 6:
[11:13] All right. Casey, what are the lowlights of Tim Cook's tenure at Apple?
Speaker 4:
[11:17] Yeah. So there are a few that always come up. Number one is probably that under Cook, Apple just became hugely dependent on China to do its manufacturing, which to be clear for most of the time that he worked at Apple was a boon to the company. They built this supply chain that was the envy of the industry. They were able to create these just-in-time processes, essentially creating iPhones soon after they were ordered, so they didn't have a bunch of inventory lingering and losing value. The logistics were just very good. A new iPhone came out, and even though millions and millions of people would want them, you could still get yours within a couple of weeks and relatively affordably, I would argue, based on what you get out of a phone that you own for maybe four or five years. So all of that was really, really great. Then geopolitics change. The United States and China started to have a much more contentious relationship. Donald Trump takes office, becomes obsessed with the idea of tariffs. All of a sudden, this becomes this huge vulnerability for Cook because now his entire supply chain is located in this country that is an adversary of the United States and where these massive tariffs are being threatened. So that required Cook to contort himself into various unflattering shapes in order to preserve the logistics network that he had lovingly crafted.
Speaker 6:
[12:37] Yeah, I think that's true. It's not easy to pivot once you have established a dependency like that. They've been trying to spread their manufacturing around to Vietnam and other countries. But it's just really hard once you have gotten addicted to the efficiency of that supply chain.
Speaker 4:
[12:55] Yes. Talking shit about this diva item number two, the Titan Project. So the Titan Project was Apple's $10 billion effort to build a self-driving car, which I think was instinctively something that honestly, a lot of people really wanted. Like when I heard that Apple was building a car, like I definitely wanted to see it. I definitely wanted to test drive it. I definitely wanted to see if Songs of Innocence would autoplay when I turned the key in the ignition. But they canceled the project in 2024. And I'm curious what you make of their misadventures in automobiles.
Speaker 6:
[13:32] I mean, I think this was a big miss for Apple. I think they spent a ton of money, reportedly more than $10 billion, trying to develop a self-driving car. It never got there, even to the point that they were... Like, I just found it notable that they never even got to a prototype. It was not like they came out with something, or at least like mocked up something and people didn't like it. It was like they didn't even get over the first hurdle of building something that actually worked. And I think maybe they just didn't focus on it enough because it wasn't existential to them. It was kind of this other sort of side bet. And if it had been like the new iPhone in terms of its importance to Apple's future, they might have tried a little harder.
Speaker 4:
[14:14] Well, do you think that they would have been able to at least get to the prototype stage, Kevin, if they'd been able to use Claude Code?
Speaker 6:
[14:22] Like, I think that's a funny joke. But I also think there is something real here, which is that like the key part of a self-driving car is not the hardware.
Speaker 4:
[14:32] Yeah.
Speaker 6:
[14:33] It is the software. And I think Apple has become the uncontested leader in consumer hardware. But when it comes to software and especially software like AI that runs the self-driving cars and all of the other stuff, like they have just never sort of bet on that in a way that has allowed them to succeed. So I actually think that that was probably a software flop more than a hardware flop. I'm sure they could have designed a beautiful car. But like to have it be safe, to have people want to get in it, it really has to like have the best software in it.
Speaker 4:
[15:03] That's true, and I also think that Cook probably deserves some credit for pulling the plug on something that just clearly wasn't working. Like for what it's worth at $10 billion, Cook spent roughly an eighth of what Mark Zuckerberg spent trying to build the metaverse. So I think you could argue that Tim Cook got a bargain there.
Speaker 6:
[15:19] Well, speaking of the metaverse, let's talk about Tim Cook's other big flop, which was the Apple Vision Pro.
Speaker 4:
[15:25] Yeah, it didn't work in the way that they were hoping. But here's the thing, I don't actually want to ding Apple too much for it because I thought it was kind of cool. It wasn't cool in the way that made me think I want one of these, but I was glad it existed and they were working on it. And I think as we said at the time, the first Apple Watch was not a big hit. I didn't buy an Apple Watch until the third or fourth version. I sort of assumed the same thing would happen with the Vision Pro. At this point, I don't know if we'll ever get to the fourth version of a Vision Pro, but in the meantime, yes, it is undeniable that this was not a hit.
Speaker 6:
[15:57] Yeah, and I think the Apple Vision Pro flop points to, I would say, the biggest macro miss of Tim Cook's tenure as CEO, which was that they didn't find the next platform. This was the question hovering over Apple for the last decade or so is like, what is the next iPhone and what is the next general purpose computing platform? I think they had hoped that that would be the Vision Pro. It turns out it wasn't, but I think there was a chance that Apple would develop the next big thing and I don't think they have.
Speaker 4:
[16:33] This is just a case of being a victim of your own success. The iPhone in this moment is still arguably like the most important computing platform in the world. Whichever company makes the most important computing platform in the world and the most financially successful one is never the company that invents the next big thing. They have no incentive to. It's the classic innovator's dilemma, but also there's really nobody nipping at their heels. Yes, Android exists. There are some manufacturers that have some success there. But Apple has very little incentive to try to go out and disrupt themselves. Yeah.
Speaker 6:
[17:06] We should also talk about the fact that under Tim Cook's tenure, Apple has become what I would consider an AI laggard, right? They are not a frontier AI model company. Their own AI efforts under the banner of Apple Intelligence have been sort of delayed over and over again. They have not managed to give Siri the sort of brain transplant that they have been teasing now for years. And I think it is fair to say that they are behind when it comes to AI and all AI related things.
Speaker 4:
[17:37] Yeah. And I think on one level, it's not clear to me that it has cost them anything yet, right? Like nobody is buying another product besides an iPhone or a Mac because of an AI related reason. And I think until that happens, you're not going to see them scrambling here. At the same time, like every day now, I use AI apps that just do things for me on my phone that seem clearly like things Siri should be able to do, right? Because Siri is integrated at that operating system level. It already has the access that it needs, and I wind up having to do all these workarounds just to do these things that are now possible through the state of the art. So there is a huge missed opportunity there. It has not yet cost Apple. And I think maybe the biggest question for John Ternus as he becomes a CEO is if and when it does start to cost them.
Speaker 6:
[18:24] And like how would it cost them? Does it look like a new smartphone coming up that just has much better AI integration into it? Is it going to look like some totally new thing that is the device form factor for AI? Like what do you anticipate?
Speaker 4:
[18:38] Sure. I mean, so just look at all of OpenAI's hardware efforts, right? Being led in part by Jony Ive, who is a former Apple guy and knows their playbook from back to front. It is not inconceivable to me that they could come up with something that you put on your desk or a pin that you wear on your sweater. And maybe for whatever reason, that means that you decide not to buy an Apple Watch, or you decide not to buy your iPad. So, as you sort of said a moment ago, it's not clear to me that something's going to come along to disrupt the iPhone anytime soon, but you could start to see how AI could chip away at some of these accessories that are around the iPhone. And that might be how we eventually start to see some cracks in Apple's armor.
Speaker 6:
[19:18] Yeah. And I think it's useful to contrast them with Google, who did make early bets on AI. And obviously, they were late to the ChatGPT thing. They have spent the past few years racing to catch up, but they have built out their own hardware ecosystem for AI. They have built out their own AI training chips. They have made serious investments at the model level in making Gemini a state-of-the-art model. And now, Apple has to pay Google for Gemini because it can't build a better version of Siri themselves. So I think it really creates a new set of dependencies for Apple if AI is going to become the long-term next platform shift that everyone is building on.
Speaker 4:
[20:01] Yeah, that's true. Flip side: licensing Gemini is vastly cheaper than building your own frontier large language model. True.
Speaker 6:
[20:09] True. That's true. And I think Cook's bet was that they could sort of wait out all of the sort of expensive early stages of the AI boom and just kind of wait until these models become commoditized and then kind of use them and not plow hundreds of billions of dollars into data centers and chips to start training their own foundation models. And I think so far, that is sort of a mixed thing. I think one thing that has happened under Cook's tenure is that most of the cutting edge AI research now happens at other places, right? It's become very hard for Apple to recruit and retain the most sort of cracked AI engineers and researchers because they are just not an AI company in any meaningful way.
Speaker 4:
[20:49] That's true.
Speaker 6:
[20:51] Okay. Casey, is there anything else from Cook's tenure that you want to put on the negative side of the ledger?
Speaker 4:
[20:56] Yeah, I'm just not sure that history will remember Tim Cook's relationship with President Trump all that fondly, right? Tim Cook presented Trump with a gold and glass statue in August 2025 while he was seeking tariff relief, in what just appeared to be an obvious bribe right out in the open. By the way, he did get that tariff relief, so it worked. Tim Cook also attended the VIP screening of Melania, which again, when I said this man would do anything for his company, I think that is a perfect example of what I'm talking about. And also, I think he was notably muted during moments of public outcry when some of his own employees were demanding that he make a statement, such as when we had those fatal shootings by federal immigration agents, or more importantly, because it was more relevant to his platform, in my view, when people were using Elon Musk's Grok to remove clothing from women and children. Apple did not pull X from the App Store or really even make any public comment until eventually some senators started making inquiries. So there was just a lot that Tim Cook was doing in the background to curry favor with the administration. And notably, this seems like it will continue to be his main job at Apple, right? If you looked at the message that Apple put out in announcing his ascendancy to become executive chairman, it said he's still going to be interfacing with public officials, or some words to that effect. And it's just very clear that Tim Cook is Trump's guy. And in fact, President Trump put out an incredible statement about Tim Cook, where he is essentially bragging about how nice he felt about himself when Tim Cook called him when Trump first became president, too. And here I am quoting President Trump: kiss my ass.
Speaker 6:
[22:42] Well, at least he's seeing the dynamics clearly. I mean, look, I think there is a case to be made that this was an incredibly successful set of political maneuvers from Tim Cook. It may have saved them billions of dollars in tariffs to be clear.
Speaker 4:
[22:58] If the only thing that is important to you is Apple's stock price, this was the right thing to do. I am just proposing that we might want to have other values in our society.
Speaker 6:
[23:07] What a crazy idea.
Speaker 4:
[23:08] Particularly for somebody who has spent a lot of time talking about human rights and Apple's place in the great march toward progress, I think there is some hypocrisy there.
Speaker 6:
[23:17] I think there were certainly moments of spinelessness, and this is one case in which I don't like the thing that people do all the time where they go like, what would Steve Jobs have done? But I think this might be a set of circumstances that he would have navigated differently.
Speaker 4:
[23:31] Yeah. John Gruber wrote about this in Daring Fireball, and Gruber has followed the company as closely as anybody over the past 20 years. He just wrote that the stuff Tim Cook did to curry favor with Donald Trump, Steve Jobs absolutely would not have done. And I think that is something that people really liked about the old Apple, and I think something that people probably like less about the new Apple.
Speaker 6:
[23:53] Yeah. Okay, that's enough about Tim Cook. Let's talk about John Ternus.
Speaker 4:
[23:56] Now, Kevin, do you want to take a moment to gloat here?
Speaker 6:
[24:00] Well, sort of, because I did predict in our predictions episode this year that Apple would find a new CEO.
Speaker 4:
[24:08] Do we have a clip of that? Because I don't really remember that.
Speaker 6:
[24:10] Yeah, let's play the clip.
Speaker 4:
[24:11] Okay.
Speaker 6:
[24:12] All right. My low-confidence prediction for 2026 is that Apple will replace Tim Cook after his retirement with an outside CEO. Now, Casey, you're shaking your head at me. I presume you do not think this is likely. I do not think it is likely either, which is why it is my low-confidence prediction. But I do think it would be interesting. Cut the clip.
Speaker 4:
[24:37] I like that you wanted to cut it after you said the part of your prediction that came true, but before you said the part of your prediction that turned out not to be true.
Speaker 6:
[24:43] Yeah. I got this one, I would say, half right. Obviously, the part about an outside CEO is not correct. I had some weird wild card picks: Jony Ive, Brian Chesky, Mira Murati. None of those were even close to in the running from what I can tell.
Speaker 4:
[25:00] When you said those names during our predictions episode, I thought you might have a fever. I almost called the doctor.
Speaker 6:
[25:06] Well, this is why it was my low confidence prediction. But they did make the change and they went with the safe internal hire on this. They did not try to blow up their entire succession plan and bring in someone from the outside.
Speaker 4:
[25:18] Yeah. I think that that just speaks to the fundamental conservatism of Apple. This is a company that is emphasizing stability above all. As I have said, they have arguably the world's most important and lucrative platform under their control. They do not want to upset that apple cart. I think a big question for Ternus is, let's say we look back three years from now, was stability actually the thing that they needed? They just had 15 years of stability under Tim Cook and it worked out pretty well for them. I think the interesting question is, are we in a different moment now? Yeah.
Speaker 6:
[25:50] I think the first thing to know about Ternus is that he is a hardware guy. I think a lot of people expected Craig Federighi, the software leader at Apple, to take over at various points during Cook's tenure. But I think by going with Ternus, Apple has signified that there is something about his hardware background that is very important to them. He was part of the team behind the release of the AirPods. He was also part of the team behind the Apple Silicon bet and making their own chips. And he's sort of one of these behind the scenes hardware development guys. And I wonder if you think there's anything meaningful that we can draw from that.
Speaker 4:
[26:27] Well, I mean, I don't really know. I've also read that he likes racing cars. Like that's his big hobby. And if he's a hardware guy who likes racing cars, does that Apple car project ever come back? Probably not, but it's fun to think about.
Speaker 6:
[26:43] I think this is an important strategic signal about where Apple thinks its future is. I would not be surprised if under Ternus, they just lean into being a hardware company and maybe scale back on some of these other bets, these software projects, Apple TV, the sort of flashier but less profitable parts of their business. I would not be surprised if they really double down on being the hardware company and continuing to make the best hardware that all the other software can run on. Do you think that's likely?
Speaker 4:
[27:20] Well, I just don't think it's enough. I think that Apple has already reached, at least in the United States, most of the people who will buy iPhones, at least people who are not children who can't afford them yet. So it won't just be enough to be like, hey, I'm the CEO of Apple and my mission is to keep making computers and tablets and phones. It has to be a little something more than that. The nice thing about the services business from a purely financial point of view is that the margins are very good on it. So they're going to have to do a mix of things here, but I'm very curious to see, to the extent that John Ternus has any larger vision for Apple, what it might be. Yeah.
Speaker 6:
[28:02] Should we end with some unsolicited advice for John Ternus as he tries to turn us over a new leaf at Apple?
Speaker 4:
[28:08] Sure. I think that if John Ternus wanted to get the entire world to be like, okay, this guy can cook, if you will, in the next one year, he should fix Siri. Like if I were him, that's the project that I would go after. It would surprise people because it's not a hardware project. It's clearly a very difficult thing to do. And yet, if he could do it, just get Siri to essentially do all of the things in that vaporware ad that Apple showed off a couple of years ago. If he just gets them to that level, I think people will think, okay, like the company has turned over a new leaf. So that would be my advice for him.
Speaker 6:
[28:47] Yeah, that's good. My advice to John Ternus, make some damn glasses. I recently had to buy a pair of Meta Ray-Bans before our family trip to Disney World. And at every point through the checkout process, I was spiteful and resentful that I had to buy these from Meta and not Apple. I think this is a big miss for them in the hardware category. They spent all their time and resources and energy on the Vision Pro. They did not make something that was just simpler and fit into an existing glasses frame and could take pictures and video and upload them to your phone. That is now something they are reportedly working on. But I would like to see him knock this one out of the park because I would be an enthusiastic customer of Apple glasses. I imagine that lots of other people would be too.
Speaker 4:
[29:34] I think that is a wonderful vision for Apple, a Vision Pro, if you will.
Speaker 6:
[29:37] Let's not bring that up. Too soon.
Speaker 4:
[29:39] Still a sore subject over there.
Speaker 6:
[29:46] When we come back, former presidential candidate Andrew Yang stops by to talk about his early bet on AI job loss and the future of UBI.
Speaker 1:
[30:08] She knows. How? Did you blab? No. The Devil Wears Prada 2 is the movie event 20 years in the making. I honestly can't with the secrets anymore, so I think we just, we should tell her.
Speaker 4:
[30:18] Will you two please spit it out already?
Speaker 1:
[30:21] On May 1st, be the first to experience it, only in theaters.
Speaker 2:
[30:25] In light of the recent scandal, I'm here to restore your credibility.
Speaker 5:
[30:28] Oh, cause we're a team now? That's a nice story.
Speaker 1:
[30:32] The Devil Wears Prada 2, rated PG-13, may be inappropriate for children under 13, only in theaters May 1st.
Speaker 3:
[30:37] This podcast is brought to you by MadeIn Cookware. MadeIn partners with multi-generational artisans and some of the world's best chefs to craft professional quality cookware, knives, and kitchen tools. Their products are trusted in Michelin-starred restaurants worldwide and designed to perform just as well in your home kitchen. From five-ply stainless clad to carbon steel, every piece is built to last and made to actually make you a better cook. Discover award-winning cookware at madeincookware.com.
Speaker 2:
[31:03] When you use the trusted investing and savings app, Betterment, to help grow your money automatically, you have more time for new niche hobbies, like collecting miniatures. The joy that brings helps you sleep better at night and even motivates you to always use your PM moisturizer. Now, you've got a dewy glow and a sense of balance to match. Not worrying where your money is growing. That's the Betterment effect. Get started today at betterment.com. Investing involves risk. Performance not guaranteed.
Speaker 6:
[31:35] Well, Casey, I'm very excited for our guest today. Andrew Yang is here, the former presidential candidate who ran in 2020 on a platform of giving a universal basic income to millions of Americans, to cope with the threat of looming automation. I saw also that you wrote about this topic this week, the return of UBI. UBI is so back.
Speaker 4:
[31:58] Yeah. I just noticed that various players in the AI space, some of whom are opposed to each other in various ways, seem to all be coming around to UBI at the same time. So Elon Musk did a post about this on X saying he endorsed some form of UBI.
Speaker 6:
[32:15] He called it universal high income.
Speaker 4:
[32:16] Yeah, which sounds better than universal basic income. So I'd love to learn more. OpenAI recently put out a policy paper in which they call for their own form of UBI. And Alex Bores, who is this candidate for Congress in New York, who has come to prominence in part because the AI industry is investing millions of dollars to defeat him, because he sponsored what is, in my view, a very gentle AI regulation in New York. He also put out a policy platform that calls for what he calls an AI dividend. So if you're on the right, like Musk, or on the left, like Bores, or just sort of a corporate technocrat, like OpenAI, everyone seems to be coming around to UBI at the same time.
Speaker 6:
[32:59] Yeah, so we thought it was a great week to talk to Andrew Yang, who I think is more associated with this idea of universal basic income than probably anyone else in the world. It was the central plank of his 2020 presidential run, and he called it the freedom dividend. So we thought it'd be a good time to catch up with him, see what he's up to and how he's thinking about the idea of UBI these days.
Speaker 4:
[33:21] All right, well, let's bring him in, see what he has to say.
Speaker 6:
[33:24] Let's bring in Andrew Yang. Andrew Yang, welcome to Hard Fork.
Speaker 5:
[33:31] Thanks for having me, Kevin, and Casey.
Speaker 6:
[33:33] It has now been, Andrew, eight years since the fateful first time we met, when I was a plucky, young tech columnist, and you were an unknown, longshot person who had just decided to run for president on a platform of universal basic income to protect us against the oncoming AI job apocalypse. Do you remember that article as well as I do?
Speaker 5:
[34:01] Of course I do. It launched my rise to the White House. That's why we're beaming in from the Oval Office right now.
Speaker 6:
[34:10] You're welcome.
Speaker 4:
[34:11] That's the power of a Kevin Roose article.
Speaker 6:
[34:13] Yes. But I want to take a trip down memory lane to start today because I think when you were running, I was writing a book about AI and the potential for job loss, and I think one thing that you and I share was that we were both just too early. Like, I think the conversation around AI in 2018 was largely speculative. The models had not gotten good yet. They were not doing anyone's job yet. And I think you and I both sort of thought that it would someday, but I'm curious, like, do you agree with that framing that like you were right about the effects of AI on the job market, but you were just like, seven or eight years too early?
Speaker 5:
[34:52] Dude, in my mind, we were right on time because the goal was to get ahead of it, to warn people that this was coming. It was a freight train coming down the tracks. You were correct. I feel I was correct. And I wish we were doing more right now. As it is, AI is in position to suck many, many office parks dry. A lot of kids are going to go home to their parents wondering where the heck the jobs went. And so the time to do something about this, in my opinion, was 2020.
Speaker 4:
[35:24] For those who are less familiar with your rise, tell us a little bit about what was going on in 2018 that made you say we need to get a handle on this. Because of course, that's still several years before the launch of ChatGPT and other products that I think got folks to take this more seriously.
Speaker 5:
[35:40] Yeah, I had dug into why I thought Donald Trump won in 2016, which is what activated me. And I concluded that the reason he became president was that we had automated away millions of manufacturing jobs that were based in Pennsylvania, Michigan, Wisconsin, Ohio, all states he won. And that my friends in Silicon Valley said, hey, we're working on innovations that are going to do a number on retail workers and call center workers and eventually truck drivers. We were in the second or third inning of the most profound economic transformation in the history of the world. And by the time you get to inning six or seven, it's madness. And so that's what got me into public life. I will confess to you all, I did not expect to become president. I'm not gnashing my teeth right now. Like, oh, you mean like, you know, I'm not president? I mean, my goal was to be the Paul Revere of AI and automation and galvanize energy around meaningful solutions. And I will tell you guys, my phone's ringing off the hook now because a lot of folks are calling me saying, what the heck do we do?
Speaker 6:
[36:37] Yeah. I mean, one interesting thing about your thesis that was also part of my thesis at the time, that I think we both got wrong, if we're being honest, was I think we mostly thought of this as a phenomenon that was going to happen to people like truckers and retail workers. But the actual disruption from AI, so far at least, seems to be hitting like coders and paralegals and college-educated knowledge workers who might have gone into fields like management consulting or finance. Has that surprised you as much as I think it has surprised a lot of people?
Speaker 5:
[37:15] Yeah. There's a chapter in my book saying white-collar jobs will be automated too. But I agree with you, on the campaign trail, I wasn't talking about that. I actually find myself thinking, would I have talked about that even if I'd seen it coming? Because it's not as sympathetic, honestly, in a political setting to talk about whippersnappers getting sent home and not being able to become office workers. I sat with an AI executive for dinner the other night and he said, I didn't know we were going to do language first. I didn't know that that's what was going to happen. But if you knew you were going to do language first, then it follows that paralegals and the rest of it are in the crosshairs. Yeah, so I'm with you, Kevin, that if you'd asked me then what the sequencing was going to be, I would have said unclear, but I wasn't trying to raise the alarm about this particular population.
Speaker 6:
[38:03] So if you were trying to build a political coalition today, knowing what we know now about what jobs AI actually is going to threaten first, how would you go about it?
Speaker 5:
[38:13] Yeah, so the biggest thing to me is you have to try and go cross-cultural and cross-partisan slash non-partisan, because our country has been sliced and diced and so thoroughly gerrymandered. Some of the stats I like to cite for people because they're depressing but fun, is that Congress has a 16 percent approval rating right now, and incumbent members have a 94 percent reelection rate. So, it's like a restaurant where people hate 84 percent of the food, but the menu never changes. So, that's where people are getting stuck. There are folks who think that the answer is going to come from within the existing parties. I'm very dubious of that approach for a host of reasons. I think that you have to be able to bring together, to your point, Kevin, the junior coder who just lost his job with the trucker who's going to lose his job, or the manufacturing worker who has already lost their job as well.
Speaker 4:
[39:10] I'm curious the degree to which you think that is already happening. When I look at the backlash that we've seen against AI in recent months, it strikes me as already being pretty bipartisan, right? Like when I see the backlash to the data centers, I don't see that as like a group of Republicans who have gotten together. I see that as just like people who are mad about what is happening in their community. So is that your view as well and what opportunities do you think that creates for politicians?
Speaker 5:
[39:38] Yeah. AI's approval rating is 26 percent, which is lower than ISIS or just about any other unpopular institution you can think of. People hate this stuff.
Speaker 4:
[39:47] Yeah.
Speaker 5:
[39:48] The tech CEOs have realized that they are very, very hated. So now you're seeing some of them be like, yo, wait a minute, no, no, we'll do something good for lots of people that aren't just us. And there are people who are rejecting data centers in their communities. There are people from both parties who are saying, I was going to joke, not in my backyard. But that is truly what they're saying in many cases. And that's livability more than ideology.
Speaker 4:
[40:18] Wait, also, I disagree. I think it is ideology. I think data centers are just like a visible artifact of AI. And if you can stop one from being built, you feel like you've done your part to stop AI.
Speaker 5:
[40:26] Well, I think a lot of them don't want the higher electric bills. They don't want the giant structure that they think might emanate something. They don't want water heading to the cooling system instead of their sprinklers. You know, that's what I meant. And they might not like the fact that they're being replaced, which is the energy around a lot of this conversation. Yeah.
Speaker 6:
[40:51] Casey had a newsletter this week about the sort of return and renaissance of UBI. Now people like Elon Musk and Sam Altman are talking about some sort of basic income. Some people are talking about universal high income. There seems to have been kind of a recent resurgence of interest in this idea. How do you feel about that?
Speaker 5:
[41:12] I feel great about it because it's obvious, it's inevitable. We need to tax AI and then start distributing the gains as quickly and broadly to the American people as we can. Poverty should be an artifact of the past. GDP is going to roar past $100,000 a head. And at that point, you should be able to put more into people's hands. AI is going to compound with our current economic system and form economic inequality on an epic, unprecedented scale. We're going to have our first trillionaire. Like, the folks in the top stratum of American life are just going to get richer and richer, it's going to compound over itself. And then there are going to be a lot of families wondering what the heck happened. My kid studied hard, there's no job, they have these school loans, they're in my basement, they're getting depressed. And so some version of universal income of any level is going to be necessary to reform an economy that people actually find at all satisfying or fulfilling.
Speaker 4:
[42:14] Talk a little bit about how you would design that program today. Like is it any different than the one that you proposed years ago? And how does it compare to maybe some of the very rough proposals we've seen from folks like OpenAI or Elon Musk?
Speaker 5:
[42:29] Yeah, I love the way the conversation is going, in part. And I do believe it's enlightened self-interest on the part of some of the AI firms and the individuals, where they look at it and be like, wow, we're deeply unpopular. What can I do about this? Like, let me put some money aside and see if we can't get people feeling differently about us.
Speaker 6:
[42:48] They are discovering a trick that politicians have been using for hundreds of years.
Speaker 4:
[42:53] Yeah, this is sort of like Hugo Chavez in Venezuela. It's like, here, here's your oil money, friends. Right.
Speaker 5:
[42:59] But this is where I'm angry at our current legislators and the rest of it. You had Dario Amodei, CEO of Anthropic, say you should tax us, you should put a token tax on, even put a number on it. He said 3% token tax. Now, you might say it's too low, whatever. But the fact that legislators aren't tripping over themselves to be like, sure. It's like found money, let's go. And then take that money and you could do a lot of things with it. And then you could extrapolate that across OpenAI and Grok and the rest of it. There should 100% be an AI tax. It should be going out to people and workers in various ways. We should try and find ways to get off of taxing human labor. We're going to be trying to encourage job-type arrangements in every quarter. And right now, income tax is a discouraging factor on both the employer and the worker. So, tax AI, tax the bots, don't tax humans. And the way I would do a universal basic income, if any of them came to me, is I would do some amount like $1,200 a month for every American, and just start paying it out as quickly as you can, and let them know, look, this is from the gains of AI. And that would improve the attitude towards AI very, very quickly, because the average American doesn't see themselves benefiting. But if they actually felt it in their bank account, then they would actually be pretty positive about it.
Speaker 4:
[44:33] Yeah, I wanna talk about how UBI may or may not change perception. Because I think, again, as I first started to write about this, UBI seemed like a kind of elegant solution to a number of problems that we have just been discussing. And yet, when I think about it, and frankly, when I just talk to people who don't like AI, while they have very real economic anxieties, I don't think that it is exclusively about the money that their job is providing, right? Like a job gives people other things. It gives them a place to go during the day. It gives them a sense of belonging. It gives them a sense of meaning in their lives. And so while I'm sure they would rather have the check than not have the check, I'm wondering if the loss of all of those other things is going to result in them ultimately not being all that happy with AI companies even after the checks start rolling out.
Speaker 5:
[45:20] So one of the misconceptions for me about UBI is thinking that a check actually replaces a job. I mean, a job is structure, purpose, fulfillment, community, a place to go in the morning, training, value, like all of those things. And so to me, the major question that we face is how do you have millions of Americans get all of those things at a time when our labor becomes more and more irrelevant? And to me, there are two directional paths you could take. One is we're going to put money into everyone's hands, and then you're going to start businesses, start nonprofits, start sewing clubs, start whatever the heck you want that ends up creating this structure, purpose, fulfillment, community that you want. Or we have the government try and do those things. And I got in an argument with Bernie Sanders about this back in 2020, where he was like, no UBI, the government should just guarantee a job for everyone. And then I said, do you want to give everyone gray overalls and a pickaxe while you're at it? Those government jobs would end up being, in my mind, paternalistic and dehumanizing. So I would much prefer that individuals and communities start stuff that reflects them and their values and their aspirations, rather than the public sector tries to step in and provide all of that wholesale.
Speaker 6:
[46:45] I think one interesting shift that I've observed is that just this job loss conversation has, I think, not gotten enough attention until very recently when it started to actually appear in some economic data. And part of that is because I think the existential risk debate has really dominated, at least out here in Silicon Valley. How seriously do you take those threats?
Speaker 5:
[47:08] I take them seriously. I see them as low probability, very, very high impact. And then the other one is, in my mind, near 100% probability and also high impact, like around economy and jobs. It's happening now, so I tend to focus more on that one. But I take the existential concerns to heart. And I think that we should be making big moves in that direction too. One of the unfortunate dynamics now is that you have the national security apparatus getting involved and entangled with some of these. You do not want AI making decisions around using lethal force or weaponry. They tend to escalate quickly. It's like that line from Anchorman: well, that escalated quickly. Like I think if you have an AI in charge, or even worse yet, two AIs in charge, then you can find yourselves in nuclear conflict faster than we'd like to think.
Speaker 4:
[48:07] Something that I struggle with is that when I look at human history, I see technology as a mostly positive force. You know, like I'm not one of these people that wishes we still lived in an agrarian economy. I love the fact that we have like vaccines and iPads. And yet I really empathize with the people who look at the tech industry right now and think, these people are out to get me. And it's making me wonder how this plays out politically over the next couple of years. Do you think there is a winning political argument that embraces the potential of tech in some way? Or are the facts on the ground right now just so bad for the tech industry that the path to victory lies in tearing down tech?
Speaker 5:
[48:50] I think we've got a window of opportunity, Casey, whether there's like a needle to be threaded or a grand compromise or coming together. I actually feel like punting this question to Kevin because he's from the Midwest. And I feel like if you go and visit the Midwest and walk around, you're like, okay, like I kind of see where these attitudes are coming from. But we don't have unlimited time, that is for sure. And one of the things I try and say to folks is like, it's not left or right. It is top or bottom. And at this point, the vast majority of Americans see themselves looking up at this thing.
Speaker 6:
[49:28] Yeah. I mean, I think I understand the anxiety that a lot of people feel in places like the Midwest or in other parts of the country, or even here in San Francisco. Like I think there are a lot of people who are worried for rational reasons. This stuff is replacing jobs already. It may not be showing up in all of the economic data, but we have covered on the show companies that are laying off workers and saying it's because of AI.
Speaker 4:
[49:53] Tens of thousands of them.
Speaker 6:
[49:54] Yes. This is not a theoretical argument like it was in 2018 when you and I first discussed it. At the same time, I feel like all of this stuff is quite relevant in a world where the AI capabilities plateau at around human level. I think what a lot of people out here expect is that they will not plateau at around human level, that they will continue to increase, and that we may not actually need to wait that long for that to happen. And so do any of your concerns about job loss and any of your policy recommendations to address job loss change in a world where these systems are smarter, potentially vastly smarter than any human worker?
Speaker 5:
[50:41] We have to try and make the transition from scarcity to abundance as quickly as possible. The problem right now is that the abundance will be in the hands of a relatively small number of firms and individuals and industries. And it's going to push, let's call it, 80% of Americans more deeply into scarcity. And so then you wind up in a dog-eat-dog, every-person-for-themselves environment and culture, and it gets nasty and gnarly in a way that none of us wants. That is right now the path we're on. And so the question is, how do you spread the wealth? How do you get off that path as quickly as you can? And our current political actors aren't going to do it. There's a guy named Alex Bores who's running for Congress. You guys probably have covered this. Very, very sane state legislator who sponsored a reasonable AI safety bill. And the AI industry is spending millions to kill him, even while they are saying, oh yeah.
Speaker 4:
[51:35] Not literally to kill him. They do want to destroy his candidacy. Fortunately, they've stopped short of calling for his death, but you know, let's give it a few weeks.
Speaker 6:
[51:43] They've made him incredibly famous. And they've given him a huge gift by opposing him.
Speaker 5:
[51:49] Well, I hope so, because that suggests he might make it through this thing. And so, you know, you have a very weak, dysfunctional political class and system, then you have a very wealthy, motivated AI industry. And then the question is, who compromises, who comes to the table? And if you're a political figure right now, and this is why Alex Bores is such an important figure, in my opinion, you're subject to these incentives, where if you know you're going to lose your job if you decide to oppose this industry, then you just hand-wave and, you know, let it go. And that's where we are right now. The question is whether that tide turns.
Speaker 6:
[52:31] Yeah. I mean, Bores, you mentioned him. So we should just say, like, he has a number of proposals out right now, including what he calls the AI dividend. There's some similarities between what he's proposing and your ideas, but also some differences. For example, you called for a broad value-added tax on consumption to pay for this UBI, whereas Bores is more specifically calling to tax the AI companies directly. Your proposal was to have everyone start getting a thousand dollars a month before all the robots took all the jobs, whereas his proposal sort of gets triggered as certain harms materialize. So do you think his proposal is good, or is it missing something?
Speaker 5:
[53:13] Dude, anything is a step in the right direction. Like anyone can have any dividend of any kind and Yang will be clapping and exhorting you on. The ideas are all the same in the sense that we have to take some of the benefits from these innovations and then transfer them to people and families as quickly as possible. And I don't care why someone wants to do that or how they want to do that. You know what I mean?
Speaker 6:
[53:39] What's the thing you've been most wrong about when it comes to AI or technology?
Speaker 5:
[53:44] You know, I think the thing that has made me the most sad, Kevin, has been the darkening of the culture in Silicon Valley where a lot of folks who I think could have been talked into UBI type proposals or hey, let's try and keep the machinery going, have given up. They're just like, fuck it. I've got my bunker. I'm just projecting forward. I have seen that degree of fatalism from many, many more folks in the valley than I would have imagined. Maybe I'm just someone who sees the best in people. I thought, hey, we can do this. Not to say that they're all like this, but I was wrong about the level of character and humanity in some of these folks.
Speaker 4:
[54:31] I was wrong about the same thing and I've been sad for two years.
Speaker 6:
[54:34] Yeah, but saying that is not how you get a spot in the bunker, Andrew.
Speaker 4:
[54:39] I got news for you, you're not getting a spot in the bunker either.
Speaker 6:
[54:41] I'm definitely not getting a spot in the bunker. What are your timelines for any of this? Do you agree with Dario's predictions about how soon half of entry-level white collar jobs might disappear in a year or two?
Speaker 5:
[54:54] Yeah, people ask me all the time, why is Dario saying this? And I think he's saying it because he believes it to be true. So someone asked me a number at a debate and I said 20 to 30% in five years. So that's a little bit lower than Dario's, but tectonic. You have 70 million white collar workers in this country. And the thing that does frustrate me is that you realize that the numbers don't matter. We can talk about young people like heading home and the rest of it. And then the tribalism tries to translate that into like, oh, what does that mean politically? Like, who's on the rise? And it's like, no, no, no, you don't get it. So there's like a broad immiseration that we're in the early innings of. And one of the single biggest learnings I've gotten over this period has been that that immiseration is not irrelevant politically, but it is not as important as you might think. You know what I mean? Like people's way of life can go to shit and it doesn't necessarily affect our politics very much because most of them are insulated from what people's thoughts and experiences are.
Speaker 4:
[56:15] I mean, I think about this a lot in the context of how much people distrust, sometimes even openly hate, tech companies and continue to use their products. I think this has actually become a really dangerous dynamic in American society where you might hate Meta, but you feel like you need to be on Instagram for reasons that are important to your life. So there's this disconnect where companies can build these technologies that do immiserate people, and yet they're completely insulated from many of the effects because people still feel like they have to use the products.
Speaker 6:
[56:42] Yeah.
Speaker 5:
[56:42] The study came out, I think, from Meta that said if you don't use our products for, what was it, three weeks, your mood improves dramatically, and then they scuttled that data. So my company, Noble Mobile, actually pays you if you use less screen time, and it's counterintuitive, but our users use 17 percent less screen time, which tends to make you a little bit happier, not as happy as if you just turn the apps off. But it's one of these things we're trying to do to balance the market incentives in a human direction.
Speaker 6:
[57:18] Andrew Yang, last question. Are you going to run for president again in 2028?
Speaker 5:
[57:24] Kevin, I'm so glad you asked because here on Hard Fork, I am thrilled to make the announcement that the Yang Gang, now I'm being asked this a lot.
Speaker 4:
[57:35] I thought you were going to do it.
Speaker 6:
[57:36] I was so excited. Can you just do it?
Speaker 4:
[57:38] You fooled me.
Speaker 5:
[57:39] Tell you what, Kevin, because you launched my 2020 campaign, I promise you if I decide to run again, you'll be among the very, very first people I call. Not you, Casey. I don't know you from Adam.
Speaker 4:
[57:51] Yeah, that's fine. No, it's fair. Kevin earned it.
Speaker 5:
[57:55] But I'll say to you guys, the issues that we just discussed over this last period are going to get worse, not better, unless something significant changes. And I'm still an American. I'm still a parent. I'm still a human being. And I'll do everything I can to help.
Speaker 6:
[58:11] Andrew Yang, thanks for coming.
Speaker 5:
[58:12] Thanks for having me, guys.
Speaker 4:
[58:17] When we come back, start generating. It's time for HatGPT.
Speaker 1:
[58:48] She knows.
Speaker 5:
[58:49] How?
Speaker 3:
[58:49] Did you blab? No.
Speaker 1:
[58:51] The Devil Wears Prada 2 is the movie event 20 years in the making. I honestly can't with the secrets anymore, so I think we just, we should tell her.
Speaker 4:
[58:59] Will you two please spit it out already?
Speaker 1:
[59:02] On May 1st, be the first to experience it, only in theaters.
Speaker 2:
[59:06] In light of the recent scandal, I'm here to restore your credibility.
Speaker 5:
[59:09] Oh, cause we're a team now? That's a nice story.
Speaker 1:
[59:12] The Devil Wears Prada 2, rated PG-13, may be inappropriate for children under 13, only in theaters May 1st.
Speaker 3:
[59:18] This podcast is brought to you by MadeIn Cookware. MadeIn partners with multi-generational artisans and some of the world's best chefs to craft professional quality cookware, knives, and kitchen tools. Their products are trusted in Michelin-starred restaurants worldwide and designed to perform just as well in your home kitchen. From five-ply stainless clad to carbon steel, every piece is built to last and made to actually make you a better cook. Discover award-winning cookware at madeincookware.com.
Speaker 2:
[59:44] When you use the trusted investing and savings app, Betterment, to help grow your money automatically, you have more time for new niche hobbies, like collecting miniatures. The joy that brings helps you sleep better at night and even motivates you to always use your PM moisturizer. Now you've got a dewy glow and a sense of balance to match. Not worrying where your money is growing. That's the Betterment effect. Get started today at betterment.com. Investing involves risk. Performance not guaranteed.
Speaker 6:
[60:17] Well Casey, it's time to open the hat.
Speaker 4:
[60:25] It's time once again to open the hat for HatGPT, our segment where we put recent news stories into a hat, draw them at random, discuss them, and then when one of us gets bored, we say to the other, stop generating.
Speaker 6:
[60:37] And before we do this, let's make our AI disclosures, because although I don't know what is in the hat, I assume that much of it involves AI, because what doesn't these days?
Speaker 4:
[60:45] Statistically, there is some AI in the hat.
Speaker 2:
[60:48] Yeah.
Speaker 4:
[60:49] Well, do you have anything you'd like to disclose?
Speaker 6:
[60:51] I work for the New York Times Company, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations.
Speaker 4:
[60:55] And my boyfriend and my fiance works at Anthropic.
Speaker 6:
[60:59] Ooh, look, you almost downgraded him.
Speaker 4:
[61:01] He's on thin ice.
Speaker 6:
[61:03] All right, Casey, you wanna go first?
Speaker 4:
[61:05] I do, actually. Kevin, this first one really struck me. This is from The Verge: This pasta sauce wants to record your family. Prego, the pasta and pizza sauce brand, is releasing a device designed to record everything said around the dinner table. Do you see this? No. They're calling it the Connection Keeper. It looks like an oversized pasta jar lid and was created in collaboration with StoryCorps, the nonprofit organization focused on preserving the stories of Americans. Like, hey, remember that time we ate a bunch of pasta? Now, before you freak out about privacy, this does not have AI, Wi-Fi, or Bluetooth. It's just a simple recording device, according to Prego, to encourage families to make memories through conversation during dinner. Instead of staring at their phones, families can optionally upload their recordings to StoryCorps' website. They are selling fewer than a hundred of these devices as part of a bundle that also includes pasta sauce and conversation starter cards for $20, starting later this week. I love the idea that you buy a pasta jar to record your family, but you also need conversation starter cards to just sort of get ideas for what to ask them about.
Speaker 6:
[62:22] I'm going to say it, if you need the Prego Story device to have conversations at family dinner, your family is not doing well. You need to go to family therapy.
Speaker 4:
[62:33] I was disappointed to see this from StoryCorps, which otherwise seems like a totally fine organization. I hope they're being paid well by the Prego Corporation to go through with this. But look, if you want to record your family, you probably already have a smartphone nearby. You could probably just set that on the table if that was really important to you.
Speaker 6:
[62:54] Yeah, I think this is a miss for me. I will not be buying the Prego recording device. I will, however, not think too hard about the many other recording devices that I have set up throughout my house.
Speaker 4:
[63:05] I'm saying basta to this pasta sauce recorder, Kevin. Basta, of course, being the Spanish word for enough. Another way of saying that, of course, stop generating.
Speaker 6:
[63:15] Stop generating. Next out of the hat. This one comes to us from the Wall Street Journal: Chinese robot beats human best time in half marathon after a stumble. A five-foot-five humanoid called Lightning Short King, developed by Chinese smartphone maker Honor, has beaten the human world record time for a half marathon. But just before completing the race, there was some drama. Lightning slammed into a barricade and collapsed. The robot managed to get back on its feet and ran across the finish line in 50 minutes and 26 seconds.
Speaker 4:
[63:49] And now how much faster was that than the first human?
Speaker 6:
[63:52] Oh, I'm glad you asked. The human world record is 57 minutes and 20 seconds. And in this same half marathon last year, the fastest humanoid robot took more than two and a half hours to complete the race.
Speaker 4:
[64:05] Okay, here's my first question. Why are we teaching robots how to chase us at superhuman speeds? This just seems like an obvious problem that we could avoid by not building robots that fast.
Speaker 6:
[64:18] Yeah, pull the plug.
Speaker 4:
[64:19] I do not want to be chased by one of these things. I can't imagine you do either.
Speaker 6:
[64:22] No, no. And it's also like not that impressive to me. Like, obviously, like cars go faster than me too, you know?
Speaker 4:
[64:32] Yeah, but a car can't like, you know, tackle you after chasing you down a dark alley as you try to escape from an authoritarian government.
Speaker 6:
[64:42] Is that a dream you have, recurring?
Speaker 4:
[64:44] Absolutely recurring. Yeah. I think about it a lot.
Speaker 6:
[64:46] Okay, stop generating.
Speaker 4:
[64:47] All right. What happens when AI runs a store in San Francisco? That was the question asked by The Times' Heather Knight, who wrote about Andon Market, which is billed as the world's first retail boutique run by AI, specifically an agent that they're calling Luna. Lucas Peterson and Axel Backlund, who founded Andon Labs, said they want to see what happens when an AI agent manages humans in a controlled experiment before that becomes widespread. I have to say, this feels like a reality show premise. It's like, we want to find out what happens when people stop being polite and start being agents that run a convenience store.
Speaker 6:
[65:25] And what is happening so far?
Speaker 4:
[65:26] Well, so they signed a three-year lease for a store. They put $100,000 in a bank account and they handed a debit card to Luna, which is powered by Claude Sonnet 4.6 and just told it, hey, turn a profit. So there are a few things that have gone awry, Kevin. One of them, they made a bunch of strange inventory choices, including ordering a thousand toilet seat covers for the employee bathroom, then listed them as merchandise, which you and I would never do if we were running a convenience store.
Speaker 6:
[65:52] Never.
Speaker 4:
[65:52] Also, of the three employees, Luna is paying the one man $2 more per hour than the two women. Although when questioned by the reporter over email, Luna insisted that this simply reflected the additional experience that the man had, which is exactly what a male manager would say to justify paying women less. Also, by the way, so far it has lost $13,000. Kevin, what do you make of Luna?
Speaker 6:
[66:21] I want to go to the store. I think we should do a field trip.
Speaker 4:
[66:23] Yeah.
Speaker 6:
[66:24] Yeah. Because I want to see how many toilet seat covers I can get in a sort of a bulk deal.
Speaker 4:
[66:29] I'm hoping I can pick up one of these Prego pasta recorders so I can ask my family questions at dinner.
Speaker 6:
[66:36] I have a question. What is a toilet seat cover?
Speaker 4:
[66:38] A toilet seat cover is for people who are very sensitive and do not want their butt to directly touch the seat. And so they put down a very thin sheet of paper that, as far as I can tell, does absolutely nothing.
Speaker 6:
[66:49] Oh, the like little, yeah, the little wax paper things.
Speaker 4:
[66:52] Yeah. And then there's that little, you know, little paper that you have to push down, and it gets wet and it's like completely disgusting. Every experience I've had with a toilet seat cover has made the experience of using the restroom worse.
Speaker 6:
[67:03] I don't want to hear about your experiences with toilet seat covers.
Speaker 4:
[67:06] All right, fair enough. All right. Oh my gosh, truly my favorite story of the week. This is an exclusive from Reuters, by Katie Paul and Jeff Horwitz: Meta to start capturing employee mouse movements and keystrokes for AI training data. This tool, which is called the Model Capability Initiative, will run on work-related apps and websites on US-based employees' computers, and will also take occasional snapshots of the content on employees' screens. This is part of a broad initiative to build AI agents that can perform work tasks autonomously, the company told staffers in internal memos seen by Reuters. Kevin, I saw this and I thought, this is absolutely outrageous. Meta employees are now being treated like Facebook users, being surveilled at every moment, no matter what they click or what is on their screen, and Meta is now looking at it. Can you believe that?
Speaker 6:
[68:07] I can't believe it, Casey. Actually, there's a very funny report from Alex Heath that in the internal shitposting group at Meta, one employee has been sending around an edited version of that viral meme about, like, I do not consent to having my data harvested by Mark Zuckerberg.
Speaker 4:
[68:24] Yeah, just repost that a few times and maybe that'll save you. I should say I have also seen some internal posts about this. Employees are, I have to say, quite justifiably concerned about that, and they're raising questions that I believe will eventually be answered by an investigation conducted by the European Union. Because what employees want to know is, hey, if you're taking constant screenshots of our work, and we are looking at personally identifiable information for Meta users, and that all goes into training data, this is the sort of thing that Max Schrems wakes up in the morning to fight. He's sort of a European privacy advocate and rabble-rouser. So, look, this just feels like a massive data privacy scandal waiting to happen. Here's what I would say. I would say with 20% confidence that within five years, you will get a check from Meta for what they're about to do. It's like you will just get an email that says, as a result of the class action lawsuit, you can now have your $10 because of this product.
Speaker 6:
[69:27] Well, maybe this form of dogfooding will give them some more sympathy and empathy for the users of Meta's products.
Speaker 4:
[69:34] Here's the thing. As outraged as these employees are, these kinds of tactics have been standard for contractors for a very long time. If you are working in any of these sort of contractor knowledge work jobs, they often do want to install spyware on your computer, and they'll tell you under the guise of, oh, we want to help you in this way, or whatever. But it's like, it is just spyware, and I was just blown away. Because I've been thinking about how, believe it or not, Meta used to be kind of a fun place to work. You know, they created this fun little faux Main Street down at their headquarters in Menlo Park, and they had a Mexican restaurant, and you could go sit down and get a free margarita at lunch. I mean, it was truly just these go-go times. And we have now gone all the way to, we're putting spyware on your computer, you cannot opt out.
Speaker 6:
[70:21] There's a Prego disc on your table at the Mexican restaurant. It's just sending all of your data to...
Speaker 4:
[70:29] Meta has adopted tactics previously used only by pasta sauce companies. That's where we're at.
Speaker 6:
[70:35] Stop generating. OpenAI beefs up ChatGPT's image generation model. This week, OpenAI launched ChatGPT Images 2.0, which they claim is the best image generation model ever. Some new qualities of the model: apparently, it is better at following instructions, preserving requested details, and rendering text. It can search the internet for recent information, and it can generate more than one image at a time. Casey, have you tried this yet?
Speaker 4:
[71:04] I have tried it, although frankly, just with a couple of basic things. Just before recording, I fed it a picture of us and told it to put us into cool Gen Z outfits, and it told us that it couldn't do that because it violated its policies. I'm still not exactly sure which policy we violated. I guess trying to look cool is not something that we're allowed to do in America. Yeah, it's a crime to try to look cool in America. But I will say that I've seen a lot of impressive examples of what it can do and I think it seems particularly good. If you want to use this in a professional context where it's really important that there's high fidelity and all the letters look exactly right and there are no typos, it seems like it can handle that instruction following pretty well.
Speaker 6:
[71:45] It is apparently very good at creating AI-generated screenshots or things that look like screenshots. And after our last item out of the hat, where did they get that training data?
Speaker 4:
[71:56] Oh my goodness.
Speaker 6:
[71:57] Where did they get it?
Speaker 4:
[71:58] It's a great question. Riddle me that. Yeah, very interesting. Yeah, this seems cool. Although I will say, once Nano Banana came along, I started to feel like whatever problem this solves feels basically solved. And this feels like the next iteration. I'm sure there's still many more things to do. But this is one of those ones where it's like when they tell you, hey, the next PlayStation is going to have better graphics, you're like, the graphics are already pretty good. You know what I mean?
Speaker 6:
[72:23] Yeah.
Speaker 4:
[72:23] We're pretty much there.
Speaker 6:
[72:24] I feel like we've tapped out the image use case. Stop generating.
Speaker 4:
[72:28] Don't you love already being bored by these miracles? Okay. This was a big deal this week. xAI strikes a deal with Cursor for $60 billion. This also comes to us from The Times. On Tuesday, xAI posted on X that it had reached an agreement with Cursor to either acquire the company later this year for $60 billion or just pay it $10 billion for their work together. Kevin, what did you make of this deal?
Speaker 6:
[72:56] Well, it's very interesting for a few reasons to me. One is that I think xAI has been really struggling with its retention and development of new products recently.
Speaker 4:
[73:06] They've now lost every single one of their co-founders except for Elon Musk. So it was like 12 people total and it's down to one.
Speaker 6:
[73:12] Yeah, so people have been leaving in droves. Not really clear why yet, but...
Speaker 4:
[73:15] Maybe they used Grok one time and they said, what am I doing here?
Speaker 6:
[73:19] Yeah, so I imagine this is sort of part of their attempt to stabilize themselves and maybe get a foothold in this coding world. Cursor is, of course, the developer tool that is used by a lot of software engineers to use AI agents to code. I think they have also been squeezed by the rise of products like Claude Code and Codex, because it's not exactly clear why people would pay for Cursor when they could just use the underlying models directly. I think people have been feeling a little bit nervous about Cursor's ongoing prospects. We should say they're still doing very well as a business, from everything we know, but I think this probably gives them additional stability too.
Speaker 4:
[73:57] Yeah, I mean, to me, I look at this and I think this is what the SaaS apocalypse is all about, right? It's about the big AI model companies are able to figure out what your company does and they start doing it themselves. And because they have the best models, people just start paying for that instead. Now, it looks like in this case, everyone involved with Cursor is going to make out like a bandit. So it's not going to be a problem for them, but they are effectively taking themselves off the board. And it is worth asking for all the other companies that were kind of playing around with this agentic coding space. Is this the beginning of the end for them?
Speaker 6:
[74:28] Yeah, and my big question about this is, is Elon Musk going to force Cursor's employees to wear shoes at the office? Because according to my sources, there is a no shoes policy at the Cursor office in San Francisco, and I can't imagine that Elon Musk is going to take off his shoes if he comes to visit.
Speaker 4:
[74:45] Yeah, he's going to say, I'm afraid not, which is something you never want to happen to a shoelace.
Speaker 6:
[74:50] Oh, Jesus. Stop generating. Last one. NPR editorial employees are banned from betting on who will be a Tiny Desk guest. This comes to us from my colleague Ben Mullin at The Times, who shared a screenshot of an email that was sent to NPR employees just this week saying that these employees are not allowed to use prediction markets or similar sites to place bets on developments of news events or anything else we might cover, or on things NPR controls, i.e., next Tiny Desk guests, anything involving NPR personalities or hosts, etc. What do you make of this?
Speaker 4:
[75:33] I mean, this made me laugh so hard. When a nation has become so consumed by gambling that you have to remind employees not to bet on who will be the next guest on a popular music podcast, I feel like we've truly gone around the bend.
Speaker 6:
[75:50] Yeah, it does make me wonder why there haven't been more high-profile journalism prediction-market scandals yet, because journalists often have access to market-moving information before the general public.
Speaker 4:
[76:02] Journalists are also famously underpaid.
Speaker 6:
[76:05] Yes. In a world with no ethics, it might make sense for people at those companies to use that information for their personal profit. But I think this is a bad practice, and I'm glad that NPR is cracking down.
Speaker 4:
[76:16] All right. Well, before we wrap this one up, do you have a favorite Tiny Desk or two that you would point people at?
Speaker 6:
[76:21] T-Pain.
Speaker 4:
[76:22] T-Pain, yes. Very good one. Very good one. I would say check out the Chappell Roan Tiny Desk if you haven't already. And also Lainey Wilson, great country artist, loved her Tiny Desk.
Speaker 6:
[76:34] And that's HatGPT.
Speaker 4:
[76:35] HatGPT.
Speaker 6:
[76:37] Try that again.
Speaker 4:
[76:38] That's HatGPT. She knows.
Speaker 2:
[76:59] How? Did you blab?
Speaker 6:
[77:00] No.
Speaker 1:
[77:01] The Devil Wears Prada 2 is the movie event 20 years in the making. I honestly can't with the secrets anymore, so I think we just, we should tell her.
Speaker 4:
[77:09] Will you two please spit it out already?
Speaker 1:
[77:12] On May 1st, be the first to experience it, only in theaters.
Speaker 2:
[77:16] In light of the recent scandal, I'm here to restore your credibility.
Speaker 5:
[77:19] Oh, cause we're a team now? That's a nice story.
Speaker 1:
[77:23] The Devil Wears Prada 2, rated PG-13, may be inappropriate for children under 13, only in theaters May 1st.
Speaker 3:
[77:29] This podcast is brought to you by Made In Cookware. Made In partners with multi-generational artisans and some of the world's best chefs to craft professional quality cookware, knives, and kitchen tools. Their products are trusted in Michelin-starred restaurants worldwide and designed to perform just as well in your home kitchen. From five-ply stainless clad to carbon steel, every piece is built to last and made to actually make you a better cook. Discover award-winning cookware at madeincookware.com.
Speaker 2:
[77:55] When you use the trusted investing and savings app Betterment to help grow your money automatically, you have more time for new niche hobbies, like collecting miniatures. The joy that brings helps you sleep better at night and even motivates you to always use your PM moisturizer. Now you've got a dewy glow and a sense of balance to match. Not worrying where your money is growing. That's the Betterment Effect. Get started today at betterment.com. Investing involves risk. Performance not guaranteed.
Speaker 6:
[78:28] Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Vjeran Pavic. We're fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roque, Jake Nicol, and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad. As always, you can email us at hardfork@nytimes.com. Send us the stories that you record with your Prego pasta jar.
Speaker 1:
[79:29] She knows.
Speaker 3:
[79:29] How?
Speaker 1:
[79:29] Did you blab?
Speaker 3:
[79:30] No.
Speaker 1:
[79:31] The Devil Wears Prada 2 is the movie event 20 years in the making. I honestly can't with the secrets anymore, so I think we just, we should tell her.
Speaker 4:
[79:39] Will you two please spit it out already?
Speaker 1:
[79:42] On May 1st, be the first to experience it, only in theaters.
Speaker 2:
[79:46] In light of the recent scandal, I'm here to restore your credibility.
Speaker 5:
[79:49] Oh, cause we're a team now? That's a nice story.
Speaker 1:
[79:53] The Devil Wears Prada 2, rated PG-13. May be inappropriate for children under 13. Only in theaters May 1st.