transcript
Speaker 1:
[00:00] Hey folks, look, I come on here and we do a podcast, we talk about world events, but I don't like to brag, I like to toot my own horn. I am influential in a variety of spheres. And one of the spheres that I think gets short shrift is obviously fashion. Am I Anna Wintour? Do I drive the trends? Am I Ferragamo? Is that a person that does this? I don't really know, I don't know. But the point is people look to me for wardrobe choices. Not just color, but like if their pants should fit. And especially this time of year, as the seasons are changing, you have got to refresh. Quince is what's going to help you refresh your wardrobe and bring out the spring in your personality. They've got all the essentials. And by the way, we're talking about 100% European linen. This is high quality premium material stuff built to last. The prices are like 50 to 60% less than similar brands. And I'm going to tell you how. They work directly with ethical factories. They cut out the middleman. You're paying for quality, not just brand markup. Refresh your wardrobe with Quince. Go to quince.com/tws for free shipping and 365 day returns. It's now available in Canada, too. Go to quince.com/tws for free shipping and 365 day returns. quince.com/tws.
Speaker 2:
[01:40] Kayak gets my flight, hotel and rental car right.
Speaker 1:
[01:43] So I can tune out travel advice that's just plain wrong.
Speaker 3:
[01:47] Bro, Skycoin, way better than points.
Speaker 4:
[01:50] Never fly during a Scorpio full moon.
Speaker 3:
[01:53] Just tell the manager you'll sue.
Speaker 4:
[01:55] Instant room upgrade.
Speaker 2:
[01:57] Stop taking bad travel advice. Start comparing hundreds of sites with Kayak and get your trip right.
Speaker 3:
[02:03] Bad advice? You talking to me?
Speaker 2:
[02:05] Kayak, got that right.
Speaker 1:
[02:18] Ladies and gentlemen, welcome. My name is Jon Stewart. It's another Weekly Show podcast on this Earth Day Eve. Is that? Do you celebrate? I love Earth. I can't wait. The pitter-patter of little feet at six in the morning running downstairs to open up the Earth Day presents. And as this glorious Earth is being celebrated while simultaneously being destroyed on the back end of it, I thought it would be appropriate not to worry about Iran, not to worry about climate change, but to worry about a third existential threat, which is AI, artificial intelligence. It is happening, people, and it's about time that we had a sober conversation about its deleterious effects but also its opportunities. And so we're going to go straight to the source. We're going to go to two brilliant, brilliant MIT economists. They're going to talk to us a little bit about the possibilities of AI, the collateral damage of AI, and the various ways we might be able to mitigate that. So we're just going to get right into it with those cats right now. Here they are. Folks, we're going to break it down today in terms of the AI revolution and what will be the repercussions for the American people, the American worker, the world writ large. Who do you go to for this kind of thing? You go to the experts, you go to the brilliant people, you go to Daron Acemoglu, Nobel Laureate, I don't throw that around, Nobel Laureate in Economics, MIT Institute Professor, and David Autor, Rubenfeld Professor of Economics at MIT. Guys, thank you so much for joining us today.
Speaker 2:
[04:02] Our pleasure, absolutely.
Speaker 3:
[04:03] Thanks for having us on.
Speaker 1:
[04:05] David and Daron, I am beginning to get increasingly discomforted by the speed at which AI seems to be infiltrating not just the popular consciousness and culture, but the workforce. So I want to ask you guys, what is our timeframe as this technology is– when are we going to really feel the full effect of this new technology?
Speaker 2:
[04:36] Just beginning to get worried about it now, Jon?
Speaker 1:
[04:39] No, Daron, you know me, you know me. We know each– no, I've been– you know I'm worried about everything.
Speaker 2:
[04:48] So am I, and I'm very worried about this too. Not about the timeline, because the timeline is so uncertain. It's hard for me to worry about something that's so uncertain. But with all of the consequences, I think we are definitely not ready for AI. The workforce isn't ready for AI. We don't know what it's going to do. I think the people who are really not ready for AI are the students whose learning is going to be affected in so many different ways. And we don't know, we have no guardrails, no ways of ensuring that students are actually learning how to learn and they can actually become experts in anything in the age of AI when they can get a lot of answers from AI. So there's just so many things to be concerned with.
Speaker 1:
[05:34] Now, David, will they need to learn anything? Because won't AI, what will they need to learn? Won't we all just be?
Speaker 3:
[05:42] If they don't need to learn anything, then they're just not needed as workers. And we don't want to be in that scenario.
Speaker 1:
[05:47] Right.
Speaker 3:
[05:47] So we do need people to have expertise and mastery. And I do think AI has both potential and risk.
Speaker 1:
[05:53] Right.
Speaker 3:
[05:53] And I think Daron will talk more about the risk. So I'll probably talk more about the potential. And let me point out that although I do not have a Nobel Prize, around here at MIT, it's more distinguished to not have one than to have one.
Speaker 1:
[06:05] David, can I tell you, I love how you've set yourself apart from your colleagues.
Speaker 3:
[06:09] Exactly.
Speaker 1:
[06:10] By not getting a Nobel Prize.
Speaker 3:
[06:12] Exactly. Someone's got to stand out.
Speaker 1:
[06:15] You know what? The idea that you have that rebellious spirit at MIT to go against the grain and not get a Nobel Prize, well, then let's start with that. David, the real concern is, look, and let's step back for a moment. We talk about disruptions for workers over time, industrial revolution, globalization. Those were the dynamics that really impacted workers, but those took place over time. David, you're going to talk more about the potential. Talk us through the previous disruptions and how AI fits into those paradigms.
Speaker 3:
[06:59] Sure. Let me first say, just to bring it to the present first, what we should be concerned about is not running out of jobs, per se, but ending up with jobs where expert labor is not needed. A future in which everyone is carrying the box from the UPS truck to the front door is very different from a future in which everyone is doing medical care. It's not the quantity, per se, but whether specialized human labor is still needed. I think it will be, but it really matters whether we are replaceable, whether we are all redundant versions of one another, or whether we have real added value in this economy. Now, we've been through lots of technological transitions. Some have been much more traumatic than others. The Industrial Revolution was very much so. There's a 60-year period that people refer to as Engels' Pause in the first Industrial Revolution where productivity was rising rapidly, and yet working-class wages were not. Artisanal labor, these people who had spent their lives developing expertise in weaving and so on, they were just wiped out. It took decades before there was actually need for specialized labor again. Who worked in those dark, satanic mills? It was basically unmarried women and indentured children doing dirty, dangerous, unskilled work. It took decades really into the late 1700s, I'm sorry, 1800s, I'm sorry, until we started to-
Speaker 1:
[08:23] This is why you don't have the Nobel. David, you got to know the right century.
Speaker 3:
[08:27] That's right. Until we actually started to use specialized skills again, where people needed to follow rules, they needed to master tools, and their expertise was really needed. That was a very traumatic technological transition, and eventually we came through it okay, but most of the people who were there at the outset did not. A lot of these transitions, young people adapt to them usually more successfully by choosing different careers. People don't make big career transitions in mid-adulthood. They don't go from being a steel worker to a doctor or a programmer to a nurse. Those transitions are generational. When it moves really fast, as it did in the era of the China trade shock, for example, people just get left behind. Places eventually recover, but individuals much less so.
Speaker 1:
[09:19] You talk about it, it's very interesting. Daron, maybe we'll ask you. We're talking about specialized labor. David is talking about the craftspeople who knew weaving and those things, and they're replaced by automation and these things. Manufacturing jobs that were replaced in the China shock maybe weren't considered as specialized, but still blue collar. Is AI going to bring about those same disruptions, but in what you would call, I guess, white collar labor or less specialized knowledge and more administrative knowledge?
Speaker 2:
[09:55] I think it certainly will.
Speaker 1:
[09:57] Right.
Speaker 2:
[09:58] The time frame is unclear. Just to add to what David said, this kind of experience is not a distant one. As David's own work shows, the China shock when it led to cheap imports coming and destroying parts of manufacturing had the same effect.
Speaker 1:
[10:17] You're talking about the 2000s when China was admitted to the WTO.
Speaker 2:
[10:21] Yeah, starting in 1990s, but especially starting in 2000s.
Speaker 3:
[10:24] But really after 2000s.
Speaker 2:
[10:26] And robots at a much smaller scale had exactly the same effects. Huge increase in productivity for steel, electronics, cars, but blue-collar workers lost their jobs. Many communities, just like with the Chinese imports shock, were thrown into recession. And the same thing can happen if there is very rapid displacement of white-collar jobs. Now, the timing is very unclear. There is a lot of hype and a lot of reality to the capabilities of AI models. So far, we're not seeing mass layoffs. We may be seeing some slowdown in hiring. It's unclear. And white-collar jobs are less concentrated geographically compared to, say, textiles or toys, the things that were affected by Chinese imports, or cars, definitely, or steel. But the number of jobs in white-collar occupations is high. So there could be a lot of people who lose their jobs. Now the thing is that despite the tremendous advances in AI over the last eight months or so, these models are not yet able to do the whole occupation for many of the white-collar jobs. Yet, that may be to come, or it may take a while. That's why there is so much uncertainty. But uncertainty is a very bad reason to be complacent.
Speaker 1:
[12:00] David, a story that those that are behind AI tell us is very different. When the people that are creating these AI models talk, they talk in utopian terms. We will be freed from the burden of the toil. We will paint and write poetry, even though AI is probably going to do that as well. But when they talk to their investors, they speak very differently. I want to ask you about a quote that I heard. There was a gentleman who was talking to his investors about AI and he said, it will allow you the benefit of productivity without the tax of human labor. He referred to human labor as a tax, as something that a company wants to avoid paying to retain productivity. That's what worries me, is that we talk a lot about this and it's always framed in terms of productivity.
Speaker 2:
[13:01] So wouldn't you like to be freed from your podcasting job, Jon?
Speaker 1:
[13:04] Listen man, I've been toiling in the podcast mines. I'm getting podcast lung. It's a terrible, it is a terrible crippling addiction.
Speaker 3:
[13:17] Yeah. So most of us are both workers and consumers and we're not going to be able to consume if we're not working. But of course, from the perspective of a firm, they want their customers, they'd rather not have their workers. Economists will tell you this, labor demand is derived demand. It's not that firms want labor.
Speaker 1:
[13:34] Explain that, derived demand. What is it? Yeah.
Speaker 3:
[13:36] They want to make stuff, right? And usually making stuff requires space and electricity and stuff and people. But if they could make it without the people, they would be just as happy. It's like Spinal Tap. If they had the sex and the drugs, they could do without the rock and roll. But of course, people have always been necessary. So although firms have always had this fantasy that they could fully automate, they've never been able to do so. And often it's kind of turned out not how they expected, right? During the era of numerically controlled machines, they thought they would de-skill and replace workers. Actually, they turned manufacturing workers into programmers. So it doesn't always work out the way that firms expect it to. But it may this time. There are certainly many, many more things that are subject to AI automation than were subject in the previous era. Because AI has a whole new set of capabilities, right? Previous computers could do routine tasks. They could follow rules. Rules specified so tightly that a non-sentient, non-improvisational, non-problem-solving, non-creative machine could just carry them out without having to understand what it's doing. That really limited the set of activities that we could subject to computer programming. But now AI learns inductively, right? It learns from unstructured information. It infers rules. It solves problems without our even understanding how it's solving them. That allows it to enter many, many new realms. Now, to make this very concrete, it's useful, I think, to contrast two occupations, one that people talk about all the time and one they should be talking about.
Speaker 1:
[15:04] Okay.
Speaker 3:
[15:04] The one they talk about all the time is long-haul truck drivers, right? There are about three and a half million of them in the United States and they say they're going to be replaced by autonomous vehicles. That is a problem we can handle because it's going to go very slowly, right? The day that, let's say, Elon Musk announces tomorrow he has a self-driving truck and let's just pretend we believe him and it's, you know.
Speaker 1:
[15:23] That's how I've been operating for years.
Speaker 3:
[15:26] It totally works. We're not going to throw all our trucks in the Atlantic Ocean and buy new ones tomorrow. It's going to take decades to replace all of that capital and all the infrastructure. So that's going to be a slow transition and labor markets can deal with transitions that happen at a couple of percentage points a year because people retire, new people don't enter. That's manageable.
Speaker 1:
[15:44] You're saying if it takes place over a generation.
Speaker 3:
[15:47] Absolutely.
Speaker 1:
[15:48] Then that's something that even though it will be disruptive, it won't be catastrophic.
Speaker 3:
[15:51] Exactly. Now let's think of call center workers. There are about as many of them in the United States as there are long-haul truckers. They're paid less, they're primarily women, but there are just as many. Those jobs can go very, very quickly, because automation can encroach rapidly. I don't think they'll all go. The ones that remain will actually be more specialized. They'll be at the top of the queue. When the AI says, I give up, you will be handed over to the last 20 people standing.
Speaker 1:
[16:19] So rather than 20 people, five people will handle what's left of the human tasks that need to be handled.
Speaker 3:
[16:26] That's a mixed bag. Those will be better jobs. They'll be higher paid. They'll be more expertise intensive, but there'll be fewer of them. We'll see this in language translation. We'll see this in call centers. We may see this in software as well. Software will bifurcate. We'll have a small number of people who build AI models, who run data centers, who run enterprise software, and they'll be highly paid and highly specialized. And then we'll have infinity vibe coders, and they'll be like Uber drivers. You'll call them up to write an app for you. There'll be a lot of them, and they won't be highly paid. So we're going to see a variety of impacts, but work that is fully cognitive is much, much more vulnerable. It can change much more quickly. Eventually robotics will also more and more enter the physical realm, but that's still some ways off.
Speaker 1:
[17:21] Ground News, it's this website and app. It's designed to give readers a better way, an easier way to navigate the news. If you go on the algorithmic, the Twitters and the things, or the weaponized news organizations, or the websites, you don't even understand how they're manipulating your worldview and how they're getting past the reptilian barriers that you have towards polarization and all those different things. Ground News gives you the information you need to be able to battle that. It pulls together every article about the same news story from all outlets all over the world and puts them in one place, and it's not incentivized for like the worst, most hostile, most partisan take. It tells you where it's coming from. They show you how reliable the source is and who's funding it. Who's funding it? Follow the money. Know who's behind the headline. I'm telling you, man, the Nobel Peace Center has even mentioned that Ground News is an excellent way to stay informed. Nobel Peace Center. That's, I think, the one that Trump started. I think it 3D prints Nobel Peace Prizes. It just hands them out. The platform is independently operated, supported by its subscribers. So they stay independent and they stay mission driven. They don't get sucked into this slop. If you want to see the full picture, go to Ground News. They can help you through the noise and get to the heart of the news. Go to groundnews.com/stuart. Subscribe for 40% off the unlimited access Vantage Subscription. Discount available only for a limited time. This brings the price down to like $5 a month. That's groundnews.com/stuart or scan the QR code on the screen. But so let's talk about that, Daron. When we talk about these sort of two areas of work, which is the human expertise that needs to be done, and then the physical work that robotics will do, everything is moving in that direction. AI feels like it's strip-mined the entirety of human accomplishment.
The 10,000 years that we have spent developing these areas of expertise, these areas of knowledge, the kinds of things that made us feel relevant to the progress of the human condition, AI comes in and six months later goes, okay, what else you got? What else are you going to feed me? And then it starts to move forward. Are you confident that... So what David's talking about is already a reduction of the human workforce. Is that the thing that you are most concerned about or is it the eradication?
Speaker 2:
[20:24] Yeah. Reduction is first and eradication is later. And in the process, wages will be stagnating or even declining. And everything David said, I agree with. But there's one other thing to add. Again, it's a wild card because we don't know how quickly these AI capabilities will develop and how quickly they will be adopted. But all of our earlier examples of displacement, which as I said and David said, haven't been so good for workers, such as during the first 80 years or so of the British Industrial Revolution or during the China and robot shocks, they were confined to a few occupations. Even then it was very hard for people to relocate and get jobs and newcomers to find jobs. But weavers during the British Industrial Revolution, once power looms came in, they lost about two-thirds of their earnings. But they could then become unskilled factory operators. Blue-collar workers went to construction or other things, or some of them withdrew from the labor force. If Dario Amodei or some of the other people who are most vocal about the capabilities of these models and what they will do to the workforce are correct, there are going to be many sectors at the same time being hit. So yes, if the rest of the economy was booming and 3.5 million customer service representatives were laid off, we could find other jobs for them, perhaps with somewhat lower pay. But what if all occupations are going in the same direction? That is Armageddon. Now I don't think that's going to happen anytime soon.
Speaker 1:
[22:11] David just sighed. You said Armageddon and David sighed.
Speaker 2:
[22:16] I will let David. I mean, that's not going to happen anytime soon. But I think we have to be prepared for it because some people are saying that's going to happen in the next two, three, four, five years. Either those claims are driving trillions of dollars of investment which are going to come to nothing, or there is going to be a grain of truth in some aspect of it. But either way, we have to be prepared for that. Now, displacement is real.
Speaker 1:
[22:46] So you're talking about either this is a financial bubble, where an incredible amount of capital is being poured into a technology that ultimately will be a bubble that resolves nothing and is not worth the investment, which causes a kind of financial catastrophe, or it's real and it causes a personal human labor catastrophe. Is that?
Speaker 2:
[23:16] I would say I'm somewhere in between. I think the speed at which this happens will be much slower, which will then lead to a lot of money being lost, because the investments need to be monetized, and they need to be monetized soon if these investments are going to pay off. So I am in the middle. I think that these capabilities will come at some point, but not as soon as these investments are being motivated by. But I am uncertain enough that, whether it's all of it being a bubble or all of it happening within the next five years, can I say with good conscience that's a zero probability event? I cannot. So many technologists are saying, look, in our labs, we have these even more amazing models. I don't believe it. I don't believe it, but I can't say, oh, necessarily it's wrong.
Speaker 1:
[24:06] How do you means test these hypotheses?
Speaker 2:
[24:10] I cannot. We cannot. Nobody can.
Speaker 1:
[24:11] We can't do it.
Speaker 2:
[24:12] Because they're all based on what's gonna come next year and we don't have access to it.
Speaker 1:
[24:16] So everything we're doing is we're looking backwards.
Speaker 2:
[24:19] We're looking backwards.
Speaker 1:
[24:20] But not forwards. David, you were gonna say something.
Speaker 3:
[24:22] Okay, so first thing, I don't think that the success of AI companies and the value of their investments entirely depends on them displacing labor. If we just got much more productive, that would also pay off, right? So if we got more efficient in healthcare, if we got better at transportation, if we did education better, so it doesn't all have to come from just throwing people out of work. And it's also important to remember that although these transitions have been wrenching, we're infinitely more wealthy than we were 200 years ago. We are much better off. None of us wants to live in...
Speaker 1:
[24:52] On the main, on the main. But obviously, if you look in certain... I don't think the Rust Belt would say, yeah, globalization was great for us.
Speaker 3:
[25:01] No, no, they're not starving, right? They're not... They're generally not starving. They have access... Look, I don't mean to be unsympathetic, but the standard of living almost anywhere in America, including in the least privileged places, people have indoor plumbing. They are not food deprived by and large. They have some access to education. They have some safety. It's much better than conditions in pre-industrial England, you know, 250 years ago. So, I don't think that... So, although there are always costs, and I don't mean to minimize them, I think they're real, and the transitional costs are enormous, and the beneficiaries are not the same as those who are harmed, so it's not like they just make these... But I think we should recognize there's enormous upside potential here as well. We shouldn't only be sentimental about what would be lost. We should also recognize the opportunity to accelerate science, to improve our adaptation to climate change and energy generation, to improve medicine, to do education better. We might do it worse, we could do it better. To distribute more of the world's wealth to more of the people in the world. I actually think artificial intelligence, like mobile telephony, can potentially be beneficial to the developing world by increasing self-sufficiency, by giving access to expertise in engineering and medicine that is not readily available.
Speaker 2:
[26:26] Can I just jump in there?
Speaker 1:
[26:27] Please.
Speaker 2:
[26:28] Because David and I have been studying these things together and separately for the last 30 years, and almost everything you'll hear from David I agree with, and most things you hear from me, well, David probably would disagree with them, but anyway. But there is one place of disagreement between me and David, and David put his finger on it. So let me expand, because I think this just again underscores the uncertainty. So David and I completely agree that there is a potential to use AI in what we call a pro-worker way. Meaning you make workers more productive, they become better at their jobs, they gain additional expertise, start performing new and more important and interesting problem-solving tasks. The place of disagreement between me and David is that I think that direction requires a complete change in the focus of the industry, and we won't get it on their current path. The current path is very automation focused. Whereas I think David thinks, well, whatever the companies do, somehow better things might come out. So I think he's more optimistic about those productivity gains that could then create meaningful jobs. I think we really are squandering that opportunity. That opportunity is there, but we're squandering it. And that's the most important reason why I love being on shows like yours, which people actually listen to, as opposed to what I say. Because I think we need to change the conversation. The conversation shouldn't just be about the doom and the gloom or the amazing promise of AI. It should be about, are we actually using these models, these capabilities for the right thing or the wrong thing? That's the main conversation we need to have.
Speaker 1:
[28:10] Well, let me mediate the dispute between you and David before...
Speaker 2:
[28:14] We've tried. Many people have tried.
Speaker 1:
[28:17] Before it turns physical. I don't want to get there. I don't know how close you are to each other's squares.
Speaker 3:
[28:22] I know. I've seen a lot of fist fights on this podcast.
Speaker 1:
[28:24] That's exactly right. And things do get out of control. And if we need to take it to the octagon, we'll take it to the octagon. I don't have a problem with any of that. But I think what we're talking about are sort of two separate things. So I want to see if we can tease those out a little bit. You know, you said a phrase, Daron, that I think is interesting, which is, you want to make it, you said worker.
Speaker 2:
[28:47] Pro-worker. Pro-worker.
Speaker 1:
[28:49] You said pro-worker. What David is talking about, I think, is sort of the patina over society, that these advances allow us to fight diseases that we couldn't before.
Speaker 2:
[29:01] Sure.
Speaker 1:
[29:01] But it's pro-human to a certain extent, but not necessarily pro-worker. So I guess, David, what I would say to you is, generally, those that are deploying these new things are not concerned about being pro-worker in any way. Now, the increase in productivity may help; they always say a rising tide lifts all boats. I always say, unless you don't have a boat, and then, really, then it's just water and you're treading it. But so, the people that run, it's sort of like globalization. What they learned was capital travels and labor doesn't. So if I can find ways to pay workers less or to give them less safe working conditions, so globalization was by no means pro-worker for workers that were accustomed to more first-world conditions. But if you were a worker in the global south, those investments were wildly pro-worker because your conditions improved. So how do we tease out what we mean by pro-worker and the standards of society that we're talking about raising?
Speaker 3:
[30:25] So Daron and I, along with our colleague Simon Johnson, also a Nobel Laureate, further increasing my distinction in not having one, just wrote a paper on pro-worker AI. What we mean is tools that extend the usefulness of human expertise and the range of things that we can do, give people new things to do, things that they didn't do before. And let me say, what do we mean by new things to do? I don't mean sorting blocks, but there are a quarter million data scientists in the United States right now. They earn about $120,000 a year at the median. Those jobs didn't exist 20 years ago.
Speaker 1:
[30:58] Now, what is a data scientist?
Speaker 3:
[31:00] A data scientist is someone who basically deals with, we have enormous amounts of data, we have enormous amounts of computing power. How do we process that? How do we organize that and make it accessible? The data that we have on the Internet is so complex. It's video, it's text, it's images, and data science is all about how you use that constructively. We had statistics, we had computers, but we had no tools for doing anything like that. And now there's tons of expert work. And a lot of new work, a lot of where the value of human work comes from, is demand for new forms of expertise. So, we've had electricians and plumbers for a while, now we have solar electricians and solar plumbers. They're people who do those fields, but they're specialized even further. Much of our medical work, we didn't have any pediatric oncologists 50 years ago. Or even someone who's a fitness coach, that's also a new form of work. And often that creates demand, it creates specialization, people earn a premium for that. It needs to keep moving, right? And so expertise is always being devalued by automation and then reinstated by new ideas, new creativity, new opportunity. And so both of those things happen, but we have much less control and predictability about the new work. It's easy to predict what will be automated. It's hard to predict how much new work there will be and where it will occur, and most important, who will do it. Most of the new work of the last 40 years has been for people with high levels of education, and the majority of American adults do not have a college degree; only about 40 percent do. And college graduates have done fine for the last 40 years. It's the majority of people who are not college graduates that we should be concerned about.
And so in our view, pro-worker AI, in particular, is AI that enables people without elite credentials to do more valuable medical care, to do more programming, to do more legal services, to do contracting, skilled repair. And we think there's opportunity there. But I agree with Daron. There's no guarantee that that's where we're going, where tech firms or even the market is pointing. Now I'll say, I don't think, with some exceptions that I won't name, I don't think most of the tech bros are evil. I don't think they mean to do harm.
Speaker 1:
[33:09] All right, now you and I are going to have a problem.
Speaker 3:
[33:12] But I think they don't really know how to control this, right? If you told them, if you said, Dario, this is how you make pro-worker AI, I think he would be very interested in that. I honestly don't think he knows.
Speaker 2:
[33:23] I thought we said that.
Speaker 3:
[33:24] I don't think he knows what that means precisely.
Speaker 1:
[33:27] But are they even interested in that? I'm curious what you guys think.
Speaker 2:
[33:31] No, they're not interested, Jon. They're not interested. They're not interested because they're locked into this AGI, artificial general intelligence craze, and your chops in this industry are measured by how close you can argue you really are to this AGI. AGI, if you take it seriously, and hopefully I don't think we have to take it seriously anytime soon, but if you do take it seriously, it means that these models can do everything, everything better than the very, very best experts. And then once combined with advanced robotics that are flexible enough, they can do all the work better. So a lot of economic intuitions are based on what David Ricardo introduced, which is comparative advantage. If you have an advantage in winemaking, fine, you'll make the wine and I'll do the podcasting. You won't do both podcasting and winemaking because you have a limited amount of time. Now, if indeed we get to AGI, that framework is out the window, because these models can operate very cheaply and they'll have an advantage over all human work. I don't believe we're getting there anytime soon, but that is the agenda, and that's the agenda that's driving the industry. That's the problem.
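Ricardo's comparative-advantage logic can be made concrete with a small numerical sketch (the productivity figures below are invented for illustration, not from the conversation):

```python
# Comparative advantage with made-up numbers: hours each producer
# needs to make one unit of each good.
hours = {
    "you": {"wine": 2, "podcast": 8},    # faster at both goods
    "me":  {"wine": 4, "podcast": 10},
}

def opportunity_cost(person, good, other_good):
    """Units of other_good given up per unit of good produced."""
    return hours[person][good] / hours[person][other_good]

# "you" give up 2/8 = 0.25 podcasts per bottle of wine;
# "me" gives up 4/10 = 0.40 podcasts per bottle.
# So "you" should specialize in wine and "me" in podcasting, even
# though "you" are faster at both. The binding constraint is limited
# time, which is exactly what a cheap AGI would remove.
print(opportunity_cost("you", "wine", "podcast"))  # 0.25
print(opportunity_cost("me", "wine", "podcast"))   # 0.4
```

The point of the sketch is that specialization follows from opportunity costs, not absolute skill; remove the time constraint and the whole framework stops binding.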
Speaker 1:
[34:41] Is the agenda AGI in the industry, or is the agenda to own the operating system of our society? That's where I'm more concerned. Both. We're bringing up where it may go, but some of it does have to do with the owners of these new technologies, Palantir, OpenAI, and how exploitative they want to be toward workers, and also, ideologically, what are they going to do if they own... You know, when the companies were laying fiber optic cables or laying electricity or any of those kinds of things, there was not an ideological component. But when you listen to the guys that are laying the new pipelines for whatever this society is going to be, they are ideological.
Speaker 2:
[35:39] 100%, Jon, you nailed it. You nailed it. I think there is an ideology of AI. AGI is part of it.
Speaker 1:
[35:45] It's very different.
Speaker 2:
[35:46] Let me just try to illustrate that going back to what David said, which again, that part was based on our joint work. So I agree sort of mostly.
Speaker 3:
[35:55] You're required to agree.
Speaker 1:
Disagreeing with your own work.
Speaker 3:
[35:58] Your name's on it, buddy.
Speaker 2:
[36:00] So the capability of using AI with non-expert workers to increase their expertise, to allow them to do new things, is definitely there. And I think it's the most exciting part. But fighting against that is the ideology and the practice of centralizing all information in the hands of a few companies and a few people. And if they control that information, and if they want to use it not to make the novices more expert, but to get rid of the novices, get rid of the experts, then you have a very different world, and that's the agenda. Now, can they achieve that agenda? Not necessarily, because there are technical barriers to it, but that's what they're trying to do. Yes, you're absolutely right.
Speaker 1:
[36:49] So, the avocado is one of nature's mysteries, as far as I'm concerned. I find it to be a very vexing, it's not, I want to say vegetable, I think it's a fruit, right? You know what? You'll Google it, you're right. You probably don't even have to Google it, you probably know it. Avocado Green Mattress. They sell mattresses, pillows, solid wood furniture. What more do you need? And no pits. It's all made from materials designed to support healthier living and more restorative sleep. Made without the harmful chemicals. Can actual avocados say that? Probably not. They only use certified organic non-toxic materials. Their products are designed to support deep restorative sleep so your body can properly recover, reset, and wake up ready to take on the day. Avocado products are made, not manufactured, and thoughtfully crafted with real materials to deliver lasting comfort and support. Go to avocadogreenmattress.com/tws to check out their mattress and furniture sale. That's avocadogreenmattress.com/tws, avocadogreenmattress.com/tws.
Speaker 3:
[38:05] Okay, so I would make three points. First, you shouldn't take Daron and me too seriously about telling you the future of AI, right? We're not experts in this. And I don't think you should take Dario Amodei very seriously about projecting the future of the economy. He means well, but people have been telling us forever that we'll run out of work because we're automating stuff. That hasn't happened before. It doesn't mean it can't happen, but it means thinking about it mechanically is not the right way to think about it. Second of all, I don't even think that when there's AGI, it will actually put all humans out of work. Many, many problems are not computational problems. They're political and interpersonal problems about who has control, who has ownership rights, who has the information. If I say today, here's a better way to reorganize MIT, I've got it, what happens? I've calculated it, I did it with my AGI. MIT will not be reorganized tomorrow. It's a political problem.
Speaker 2:
[38:57] Depends on whether you have dictatorial powers or not. If they also have the dictatorial powers, then it will be reorganized. Okay.
Speaker 3:
[39:02] Well, if we also throw democracy out, then we're in more trouble.
Speaker 1:
[39:07] But David, you made some really good points about the historical precursors of the Industrial Revolution and globalization. I just want to make a little bit of a point about human nature. When new technologies come along that are truly transformative, I'm thinking of splitting the atom. You have brilliant people working on splitting the atom, and if you split it one way, you can use it to power the world. If you split it another way, you can blow the world up. Which one did we try first? When we talk about AI and we're talking about the technology, it doesn't necessarily have to be transformative in the way that we're talking about theoretically. We can talk about how powerful it is for the general tools that humans use to rule over other humans. I'll give you an example. Palantir comes along with these incredibly powerful AI systems. What do they do? They suck information out of the system and then they funnel information about people who are undocumented, and the government then uses that information. It's not just about what it might do. It's about how governments or individuals will use these new powers to game the system and gain advantage over their competitors. Isn't that a more realistic conversation?
Speaker 2:
[40:41] Oh, you nailed it. You nailed it. Exactly, Jon. So I think for the next version of our paper with Simon Johnson.
Speaker 1:
[40:45] Daron, when are we writing the paper together?
Speaker 2:
[40:47] Exactly. I was just going to say you have to become a co-author.
Speaker 1:
[40:50] Where's my Nobel?
Speaker 2:
[40:53] Yeah, the direction of technology is highly malleable. And there is always a worse direction than the one you fear. And sometimes we find it, the more dictatorial, authoritarian, less democratic we are, the more likely we are to find that direction.
Speaker 1:
[41:13] Right.
Speaker 2:
[41:14] Nuclear weapons are much more likely in times of war or authoritarian control, and nuclear energy becomes much more reasonable if it's subject to democratic oversight. Exactly. The centralization of information, the ideology of AGI, and the meeting of the minds around the surveillance state and the technology are very worrying, precisely because they open those bad doors for us. And many of the people in the industry would have no problem walking through those doors head first.
Speaker 1:
[41:47] And David, I want to ask you about that, because you're making really good points about the ways that these new technologies can be used to uplift. But in my mind, I'm thinking atomic, I'm thinking splitting the atom. And are you concerned, because I think you're more optimistic about where this thing is going, about what I'm raising here?
Speaker 3:
[42:13] Oh, absolutely. I'm very concerned. And I think AI is, you know, God's gift to authoritarians, right?
Speaker 1:
[42:18] Right.
Speaker 3:
[42:18] It's great for centralizing control. It's great for monitoring. If we want to see mass surveillance and censorship at scale, go to China, and they're exporting that model. And we've privatized a lot of it. We're still doing it. I'm very concerned about that. So I'm trying to emphasize that there's opportunity, not that we're destined to get there. I think we're destined to have a range of outcomes, some of them quite terrible, some of them quite good, and very unevenly shared. And the balance may be towards the bad, it may be towards the good. But if we don't bear in mind that we have an opportunity, we'll certainly squander it.
Speaker 1:
[42:56] Understood.
Speaker 2:
[42:57] Absolutely. But, and this is the first and most important observation that David made, we also need to have the public conversation that those opportunities exist even though we're not currently targeting them. We're currently targeting something very different. Mass automation, the surveillance state, a new sort of merger between the security apparatus and tech companies: those are the things we are contemplating or practicing right now.
Speaker 3:
[43:27] And there's another conversation we're not having. I just want to loop back to a point you made, Jon, a little while ago, about all this stuff on the internet now being monetized. There's a really fascinating book by Max Kasy, an economist at Oxford, called The Means of Prediction, a play on the Marxian phrase, the means of production. And he makes what I think is a brilliant analogy. He says, look, the enclosure movement in medieval Europe was when all of a sudden the lords said of the common land, hey, we own that and we're just going to farm it ourselves. And it may have actually been a more efficient way of farming, but the commoners were just wiped out by it. Well, you could say that AI is in some sense enclosing the internet. It's taking all this common property and monetizing it. All the stuff we put out there, all our photos and all our writing and all our movies. And you say, oh, well, they're not enclosing it. It's still there, just where you left it. But of course, you never thought your artwork was going to compete with you. You never thought the story you wrote would be regurgitated and sold and you couldn't sell your work anywhere. So I do think this unilateral transfer of property rights is a huge thing that is under-recognized and under-discussed.
Speaker 2:
[44:32] Oh, yeah, that's so important. But can I add one thing? 100% agree with David, but it has an additional really bad effect. Which is that...
Speaker 1:
[44:42] He wants to be the black swan.
Speaker 2:
[44:45] Really dark soul black swan, yes, exactly.
Speaker 1:
[44:48] Yes, exactly. Go for it.
Speaker 2:
[44:49] But the kind of useful things that David and I are mentioning, that you can do pro-worker AI, that really requires very high quality data. If you're going to build a tool for electricians that lets novice electricians perform the expert tasks that solar electricians and the best seasoned ones can do, you require the data from those electricians dealing with the hardest problems. That data will not be produced unless there are property rights over data and there are data markets in which people can get the returns for the data that they create. But this enclosure thing that David described is a data extraction economy. So it's creating the opposite.
Speaker 1:
[45:38] Guys, this is blowing my mind. It's something that I had not thought of at all, but what you're bringing up is so interesting. So AI strip-mines the totality of human expertise and experience, right? Let's look at it in terms of music. You get royalties. If you write a song and somebody uses that song, they pay you a royalty. If somebody, you know, plagiarizes your lyrics or finds a way to take your melody and put it into their song, you're going to be paid for that. AI is a human expertise laundering machine. It's basically taking everything that we've got, training itself on it, in some ways replacing us, but without that royalty payment. Where the royalty payment goes is to OpenAI or to Palantir or to any of these other places. And if you ask them what they're doing with it, they'll say, that's proprietary.
Speaker 3:
[46:38] Yeah, we're in the Napster era of AI, right? Remember Napster? Everybody's music, just burn it, rip it, and share it. That was not viable. We wouldn't have a music industry if we hadn't gotten control of that, with Spotify, with Apple Music, where we pay royalties when we listen to those songs. Small royalties, but we do pay them.
Speaker 2:
[46:56] But the difference is that with Napster, it was the consumers who were doing the replication. Now it's the most powerful corporations humanity has seen who are doing it.
Speaker 3:
[47:05] But this is a failure of property rights, a failure of legislation. People say, oh, no, fair use allows that. Well, fair use never envisioned this. And so who cares what the law said? It's not applicable. We should be changing it. People should be compensated, and not just once. They should be compensated as their information is reused. And that's actually a manageable problem. I've talked to people at Google who've worked on this. They say, yeah, we know how to do that. We don't have an incentive to do it, but we know how to do it. And if the laws required it, we would support it. So by not recognizing that this enclosure is going on, that property rights are being reallocated, economics doesn't deal with that.
Speaker 1:
[47:43] It's reverse socialism.
Speaker 3:
[47:45] Exactly.
Speaker 1:
[47:48] They're taking from the workers and they're funneling it up to these five individuals. And to come back and torture this atomic analogy: you got the sense that people like Oppenheimer or Einstein were aware of the gravity of what was happening, and through the crucible of war maybe made some decisions they might not have made otherwise. In this environment, I don't get that sense from Altman, Karp, Thiel. Thiel was asked, you know, should the human race flourish and continue to exist? And he took like a five-second pause. Let me think about that for a second.
Speaker 3:
[48:31] That's a tough one there, yeah.
Speaker 1:
[48:32] So the nuance of what you're both bringing to the discussion seems utterly absent.
Speaker 2:
[48:40] And you know, you nailed it again, the war conditions. Einstein, who was very much a pacifist, because he was worried about Germany and the Third Reich, supported the atomic weapons, and several others did too. And you know what? Silicon Valley is also creating war conditions. They have the framing that with AGI, either China gets there first and we become their vassal state, or we have to go first. And that's creating this warlike condition: you have to allow us to do anything we want, even the worst things, because otherwise China is going to do them. So that's creating the equivalent of a 21st century war condition.
Speaker 3:
[49:18] And Oppenheimer, by the way, spent the rest of his career opposing the H-bomb, eventually was stripped of his security clearance, and died a broken man, effectively, because he was persecuted for trying to control the invention that he was so instrumental in creating. But maybe it makes sense to talk a little bit about what are some policies that we could have.
Speaker 1:
[49:36] Yeah, please do. Okay.
Speaker 3:
[49:37] So I would put them in three buckets, but let me start with one that people call wage insurance. Wage insurance is an idea that actually was experimented with during the presidential administration that ran from 2009 to 2017. I'm not going to say who the president was, but you can guess.
Speaker 1:
[49:52] I don't recall, but I think I remember him in a tan suit.
Speaker 3:
[49:55] That's right. Handsome guy.
Speaker 1:
[49:58] A very handsome guy, tan suit.
Speaker 3:
[50:00] Anyway, that's all I remember. But the idea was, look, you lose a job in manufacturing. Let's say you're making $50,000 a year, $25 an hour, and you can find another job, but it's going to be like 15 bucks an hour, right? And not only is that a low wage, but you're like, hey, that's beneath my dignity, right? I'm not going to take that job. So wage insurance says, hey, look, we get that. We're going to make up half the difference, up to, say, $8,000, for up to two years. Take the $15-an-hour job; you'll make $20, right? And then you can look for something better. And it gets people back into the workforce more quickly. It's like an earned income tax credit for returning workers. This program was so effective in terms of saving unemployment insurance money and generating additional payroll revenue that it paid for itself.
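David's arithmetic can be sketched directly (a hypothetical illustration; the transcript's "$8,000 up to two years" leaves per-year versus total ambiguous, so a per-year cap and a 2,000-hour work year are assumed here):

```python
def wage_insurance(old_hourly, new_hourly, hours_per_year=2000,
                   annual_cap=8000):
    """Annual wage-insurance subsidy: half the hourly wage gap, capped.
    The per-year cap and the 2,000-hour work year are assumptions."""
    gap_per_hour = max(old_hourly - new_hourly, 0) / 2
    return min(gap_per_hour * hours_per_year, annual_cap)

# Losing a $25/hr ($50,000/yr) job and taking a $15/hr one:
# half the $10 gap is $5/hr, so take-home feels like $20/hr,
# and the $10,000 annual subsidy is capped at $8,000.
print(wage_insurance(25, 15))  # 8000
```

Note that at these example wages the cap binds; a smaller wage drop would be matched dollar-for-dollar at half the gap.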
Speaker 1:
[50:41] How is that different, David, than unemployment insurance?
Speaker 3:
[50:44] Unemployment insurance, you get it while you're not working. This, you get it if you return to work.
Speaker 1:
[50:49] I see. Yeah, I see.
Speaker 3:
[50:51] Now, this needs to be scaled.
Speaker 1:
[50:52] It makes up, so I get what you're saying. It makes up in some ways the difference that you would have gotten from a job that was paying a little bit more to what's... Okay, that makes sense.
Speaker 3:
[51:03] I think, by the way, this is very politically viable, right? In America, we're not very friendly towards people who aren't working. If you're working, that's okay with us, right? Something that's subsidizing work rather than subsidizing leisure is something many people can get behind, especially if it's pretty cost effective. Now, we need a bigger demonstration than what was done, and Brian Kovak at Carnegie Mellon University is trying to stand up a multi-state demonstration of this. I've been speaking with funders trying to get it going. So that's one really actionable policy. And let me say, this is a no-regrets policy.
Speaker 1:
[51:35] I'm in.
Speaker 3:
[51:36] I like it. If the Armageddon doesn't come to pass, we won't go, oh damn, why did we do wage insurance? After all, this is just a good idea. It was a good idea 10 years ago, it's a good idea now. Let me pause here and turn it over to Daron for the next idea.
Speaker 2:
[51:49] Yeah, well, that's a great policy. I am fully behind it. But let me say, before I talk about the next policies, I think the most important step, even before the policies, is actually this conversation. This conversation needs to take place much more widely: there are many different things we can do with AI, and it's a choice what we do with AI. That's what's lost in the current media environment. For about 10 years, the entire mainstream media was so excited about the tech barons that they couldn't do anything wrong. Now they're talking about killer robots and doom, okay? That's a useful corrective, but we're actually missing the most important conversation. AI is not one thing. AI is a whole spectrum, and at one end of the spectrum, as we've been emphasizing, there are some terrible things, and at the other end of the spectrum, as we've said, there are feasible things that we can do that are much better. Who's going to decide that? Who are we going to empower to make those civilization-changing decisions? Dario Amodei? Sam Altman? Peter Thiel? No, I think the democratic process should have a hand in it, and people should become more informed about it. I think that conversation comes first, and then all the policies come on top of that.
Speaker 1:
[53:18] Folks, I don't know if you can hear it in my voice. I'm tired. I didn't sleep well last night. I need a good night's sleep. I always need a good night's sleep. And you know what I could do? I could buy a new mattress, a little maybe a princess bed, maybe get a little four poster thing, throw some mosquito netting on there, spend a ton of money. Or I could do the only thing that matters: get some nice sheets, some nice, clean, freshly done, comfortable sheets. That's what you need, the Boll and Branch way. The best way to get a better night's sleep is the bedding. Get the nice bedding. You don't want the chafing bedding. You don't want... I sleep in corduroy. Who would do that? Makes no sense. You can upgrade your sleep with Boll and Branch. Get 15% off your first order plus free shipping. bollandbranch.com/tws with code TWS. bollandbranch.com/tws. Code TWS to unlock 15% off. Exclusions apply.
Speaker 2:
[54:37] And then there are many policies that we can worry about. Like, for example, in the United States, we tax labor heavily, we subsidize capital.
Speaker 1:
[54:45] That's been that way for 50 years.
Speaker 2:
[54:47] How does that change the incentives? Well, it's gotten much worse over the last 25 years, and much, much worse with the Trump administration. And how do you think that changes firms' and technologists' decisions? It makes them lean more towards automation, because automation is being subsidized. That's right. So let's change that tax, and we can raise more taxes also, because we're just giving a pass to all capital income.
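The asymmetry Daron describes, labor taxed heavily while capital is effectively subsidized, can be sketched with hypothetical effective rates (the numbers below are invented for illustration, not actual US figures):

```python
# Illustrative effective tax rates on the two inputs (assumed numbers):
LABOR_TAX = 0.25    # payroll and income taxes raise labor's cost
CAPITAL_TAX = 0.05  # depreciation allowances etc. keep capital's low

def cost_to_firm(pre_tax_cost, tax_rate):
    """What the firm effectively pays for one unit of input."""
    return pre_tax_cost * (1 + tax_rate)

worker = cost_to_firm(100, LABOR_TAX)     # 125.0
machine = cost_to_firm(100, CAPITAL_TAX)  # 105.0

# A machine no more productive than the worker is still about
# 16% cheaper after taxes (105/125 = 0.84), so the tax code
# itself tilts firms toward automation.
print(worker, machine)
```

Equalizing the two rates removes the artificial wedge without taking any position on whether a given task should be automated.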
Speaker 1:
[55:13] But it's kind of a perpetual motion machine, because what happens is when these new technologies come along, capital flows towards them in such massive ways. Trillions and trillions of dollars flow in, building data centers and sucking up water and electricity and money. And then what they do with the profits is reinvest, not just in their technologies, but in their political power.
Speaker 2:
[55:39] Oh, 100 percent.
Speaker 1:
[55:40] They take their money and they bring it to bear on Washington. You know, it was a shocking moment to me, at the inauguration of an American president, to see in the front row, in the room at the swearing-in, not the people, but the tech companies, the ones that had the closest proximity and access to the president.
Speaker 2:
[56:03] And you know what's worse? We don't even know who owned whom, whether they owned the government or Trump owned them.
Speaker 1:
[56:09] We don't know which is what. David, you were going to say something, though.
Speaker 3:
[56:13] Oh, I just want to talk about another policy.
Speaker 1:
[56:14] Oh, okay, great. I like Daron's, though, the changing of the tax incentives to even things out. To talk about pro-worker, we value capital over labor, and I think the pendulum needs to swing back. So I think that was a really important point.
Speaker 3:
[56:29] But let me suggest another policy, which is what people call universal basic capital, right? So not universal basic income, which is like writing people a check every month, but the notion that when people are born, we give them an endowment of capital with voting rights, right? Like shares. And what does this do? Well, one, it diversifies. Most people's entire income is bound up in their human capital, right? Your income comes from your ability to produce valuable labor. Well, that's a pretty risky bet for anyone, right? Because the value of labor changes over time. Specialized skills sometimes become more valuable, sometimes they become worthless. So we distribute capital. And by the way, you can call them Trump accounts if you want, right? They're already being done.
Speaker 1:
[57:07] I think we're calling it Trump everything.
Speaker 3:
[57:09] That's right.
Speaker 1:
[57:10] This is actually The Weekly Show Trump podcast.
Speaker 3:
[57:13] That's right.
Speaker 1:
[57:14] We just add the word Trump to everything.
Speaker 3:
[57:16] And Daron has the Trump Prize in Economics. That's right. Yeah, just to return to our main theme. But so what does this do, right? One, it gives people a more diversified portfolio. It's something they can invest in, right? They can't spend it until they're 18. Second, it gives them ownership rights.
Speaker 1:
[57:30] What are they?
Speaker 3:
[57:31] Basically, it's like getting a bond when you're born.
Speaker 1:
[57:34] Okay.
Speaker 2:
[57:35] Like the Alaska Fund for everybody.
Speaker 1:
[57:36] Okay.
Speaker 3:
[57:37] That's right. But it gives people a diversified income portfolio somewhat. It also redistributes voting rights. They have voting rights over capital, right? And you could even set it up. So even if you sell your stocks, you maintain the voting rights.
Speaker 1:
[57:50] But what is the voting right? So the way that I would think about it is, it's reverse, it's Benjamin Button Social Security. So rather than it's a large fund, and then when you're born-
Speaker 3:
[58:03] That's why you're the comedian. That's good.
Speaker 1:
[58:04] You are given, well, I just watch a lot of movies. So when you're born, you are invested into this larger fund that has been built up. Now, then the questions come up: well, what is that fund invested in, and how does it grow?
Speaker 3:
[58:19] No, it's invested. It owns shares of these tech firms, for example, right? It owns a piece of the economy. I get it. And so then we all have some voting rights. And that's really important, because there's certainly a risk that labor will become less valuable and capital more so. And if so, we want more people to have ownership stakes. Part of the brilliance of the labor market is that in a country without slavery and without labor coercion, everyone owns at most one worker, themselves, right? So it's intrinsically relatively equal, but capital is not like that.
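David's "bond at birth" mechanics can be sketched as simple compound growth (every number here is hypothetical, not from any actual proposal):

```python
def endowment_at_18(initial=5000, annual_return=0.05):
    """Compound a birth endowment until it can be spent at age 18.
    The $5,000 stake and 5% return are invented for illustration."""
    return initial * (1 + annual_return) ** 18

# A hypothetical $5,000 stake at 5% grows to roughly $12,000 by
# age 18, while the voting rights attached to the shares are held
# throughout (and, per the proposal, could survive a later sale).
print(round(endowment_at_18(), 2))
```

The dollar figure matters less than the two features David names: a diversified income stream that isn't tied to the holder's own labor, and a redistribution of voting rights over capital.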
Speaker 1:
[58:49] So the reason why I'm slightly dubious about that is, and I'll tell you why, companies won't even do that for their own employees.
Speaker 3:
[58:58] No, the government has to do it. It has to be done publicly.
Speaker 1:
[59:02] Publicly, but the government is going to give away shares of privately owned companies.
Speaker 3:
[59:08] Or buy them. That's fine.
Speaker 1:
[59:10] Or buy them. OK. All right. All right. Now I'm feeling a little better.
Speaker 2:
[59:14] But here is the problem. Here is the problem. I completely agree with David's. You know, that would be a nice addition to a functioning labor market.
Speaker 1:
[59:22] Yes.
Speaker 2:
[59:22] But here is what I want to put a pin in, which is that the tech solution to these problems is universal basic income.
Speaker 3:
[59:30] I didn't say that. I hate UBI.
Speaker 2:
[59:32] Exactly. But yeah, I want to just underscore that. Or other schemes where people are somehow given a handout so that they can just not work. I think there are many problems with that. First of all, we don't know what to do with millions of people who don't work. That would be very bad for their mental health, for social peace. But even worse, if you create any system like that, based on dividends, based on income, based on other things, then as long as society thinks, oh, these are the creators, Peter Thiel, Elon Musk, etc., and the rest are living off the income that they've created, that would create a horrible two-tier society, where there are those with very, very high status and then all the rest.
Speaker 1:
[60:21] We have a horrible two-tiered society.
Speaker 3:
[60:22] I know, but it will get even worse.
Speaker 1:
[60:23] As it exists now.
Speaker 2:
[60:24] I know.
Speaker 3:
[60:24] I mean, look, in Norway, they have a sovereign wealth fund that's worth two times GDP, and it's coming from oil, but people are public owners of that, and they're doing okay.
Speaker 2:
[60:34] And they're working, but they're working in Norway.
Speaker 3:
[60:36] I'm in favor of work.
Speaker 1:
[60:37] But I want to push back on just a couple of things within that. The system that's already been designed is a two-tiered system, and there's already that sort of Randian philosophy that there are makers and takers. But when you have an economic system that requires labor at its cheapest level, and you have the outside pressure of globalization that continues to drive those wages down and conditions down, well, we've created the conditions for that permanent underclass, and then we blame those people as though their poverty is a function of vice, a function of a lack of virtue. And that's what I want to push back on. I don't view money that goes into those communities as handouts. I view it as investments, and we have to find a way within this. I love the idea of giving people some ownership over the industries that drive the country. I think for too long, we have allowed these companies the benefits of the stability of this country, the subsidies of this country, the investments of this country, and asked for no vig. And I do think the house should always win, and the house should be the American people, and there should be a rake.
Speaker 2:
[62:00] Right, 100%. Yeah, 100% Jon. Now you're definitely a co-author.
Speaker 1:
[62:05] Give me my prize.
Speaker 2:
[62:08] But you also put your finger, in passing, on something that's very important. And you might want to have Michael Sandel on the show to talk about this. Sort of this ideology of meritocracy, that somehow all of those who are so successful are well-deserving and virtuous, and all of those who have lost out from globalization, from technological change, from social change, are losers that deserve their fate. I think that's been very, very pernicious. I think you cannot understand the rise of Trump, the rise of anger in this country, without that meritocracy ideology. And he's been the most eloquent describer of this. And I think it's very, very important that you put your finger on it.
Speaker 3:
[62:51] Not Trump. Michael Sandel has been.
Speaker 1:
[62:58] Who would have thought there's so much fun to be had at MIT? No one would have thought that.
Speaker 2:
[63:02] Please don't have Trump on your show, Jon.
Speaker 1:
[63:11] Folks, I'm going to be honest with you. You know, a lot of times, I'll be pitching you products. Am I crazy about them? I don't know. Maybe I muster some enthusiasm. But then every now and again, a product comes along when I'm like, oh, I actually use them. I actually wear those. They're super comfortable. And that's what we got now. Folks, Bombas is in the house. Bombas, baby! That's the alliteration I'm looking for. Bombas, I can't even tell you how excited I am that we got Bombas. First of all, not about you, but I'm a sock man. I like a nice, comfortable sock. If you give me a sock, every other part of my body is immune to discomfort but my feet. You throw on a nice pair of socks, man, and you can have yourself a fine day. And Bombas is the most comfortable sock in the world. And man, just get rid of all your old socks. You know what happened to me recently? I had some socks in my drawer and I put them on. And it was as though the fabric had expired. Like when you pulled it on, it made a noise like the universe was coming apart. Almost crackling. Needless to say, I didn't have a good day that day. Here's even the best part about Bombas. For every item you purchase, an essential clothing item is donated to someone facing housing insecurity. A one-for-one model with over 200 million donations and counting. Head over to bombas.com/weekly and use code weekly for 20% off your first purchase. That's bombas.com/weekly. Code weekly at checkout. These are really interesting, and I really do like them. And what I love about it the most is these are actionable, specific ideas. What so frustrates me about our political process in this moment is that we have this incredibly powerful technology that sits just on the horizon, but we have a political system that is unable to articulate mostly anything but platitudes. We have to start talking about kitchen table issues. The working families must get to think.
Speaker 2:
[65:42] So you think like creating American AI dominion and cryptocurrency are not actionable issues?
Speaker 1:
[65:48] Well, let me tell you something. As a proud owner of Melania Coin, I can tell you that my future is set. But, you know, we are in this position. What's so, I don't want to say ironic, about it is that we could probably plug these questions into AI and come up with more specific, actionable, and interesting solutions than what's being offered by our political system.
Speaker 2:
[66:21] Right.
Speaker 1:
[66:22] And that's the part I can't wrap my head around. Where do you guys see, why is that the case?
Speaker 3:
[66:28] Well, I actually think that the idea of wage insurance has currency. It's being discussed. I've discussed it with people in the Trump administration, I've discussed it with people in the Democratic leadership, and I think there's enthusiasm for it. There are also, I should say, new efforts around modernizing training in a way where we can measure it and monetize it and return the revenues. Raj Chetty and the group at Opportunity Insights at Harvard are working on this in a really innovative way.
Speaker 1:
[66:57] Harvard Safety School. Talking MIT, baby.
Speaker 3:
[67:04] Yeah, exactly. So I do think there is a set of policies that are, again, what I would call no-regrets policies. We won't be sorry we did them even if the worst doesn't come to pass, and we know how to do them well. They're not totally out of reach. So I absolutely agree with Daron: we need to shape the conversation, we need to deploy the technology constructively. But we've also got to recognize we are in for a rough ride. Even if it goes well, we're in for a rough ride, because the transition is going to be so fast. So we should have policies that support people, support their income, support job transitions, and give them an ownership stake, so they're on some of the upside of this, not just the downside. Distributing capital more broadly would have that effect.
Speaker 1:
[67:40] David, I can't tell you how much I love that, and how much I think that, in some ways, that's what's gone wrong with the economic condition in this country over the last 50 years: labor has never been offered an ownership stake in the value of its productivity. And Daron, I want to ask you about that, and I've so appreciated this conversation. When we talk about productivity gains, because that's always how it's framed, productivity always outstrips wages. Always. And maybe that's just the way the system is.
Speaker 3:
[68:16] No, it's not how it was until the mid 1970s.
Speaker 2:
[68:19] Exactly.
Speaker 1:
[68:20] Well, I'm saying since the 19...
Speaker 3:
[68:22] Yeah, for 50 years.
Speaker 1:
[68:23] Since the Reagan Revolution.
Speaker 3:
[68:24] That's right.
Speaker 2:
[68:25] But you know, people say that about, like, oh, the capitalist system. Well, it was a capitalist system in Europe, in the United States, from 1940s to the mid 1970s, where wages grew faster than productivity. Workers with less than a college degree had faster wage gains than managers. That was feasible. There's nothing in the laws of economics or in the laws of democracy against that. We just chose a different path since 1980.
Speaker 1:
[68:51] Do you think at this point, those powerful corporations have, there's almost that they have us at an extortion point, where they say, if you try and do anything to regulate us, or you try and do anything to tax us, we'll leave.
Speaker 2:
[69:10] Well, look, this is such an important point, Jon. First of all, these corporations are absolutely enormous. It's not a fair comparison, but I just did the calculation last week: each one of the seven largest tech companies has annual revenues, in current dollars, twice as large as the entire British Empire's GDP in the middle of the 19th century. These are enormous, enormous corporations. They need to be regulated. But the rhetoric that they cannot be regulated, that AI cannot be regulated, that's false. China proves it. Okay, I don't approve of what China does. I don't approve of what they intend to do. But they show very clearly that AI can be regulated. A tech company like Alibaba is now completely subservient to the interests of the Communist Party in China. We could also make Google and OpenAI and Anthropic be much more in line with democratic priorities in the United States. There is nothing in the laws of economics, or in the laws of physics, that says these companies cannot be regulated.
Speaker 3:
[70:12] They're not delicate flowers. When Sam Altman says, oh, if you charge us for intellectual property, we'll be put out of business, that's not true. It's kind of pathetic, because what they're saying, in effect, is: we don't produce anything of value; if you actually made us pay for our inputs, no one would buy it. That's crazy, and it's not true. So I think there are constructive ways to see it. We don't need to shut it down, and we don't need to regulate it to death so it can't move. The US is innovative, and that's great. We have a lot to be proud of in that we have led this technology, we're building it out quickly, and it's valuable. But it's an opportunity, and we could squander it. We need to steer it, because left to its own, it's not going to be pro-worker.
Speaker 2:
[70:56] What you're hearing from both me and David is that AI is a very promising technology, but that's precisely the reason why we've got to take the care to make sure that we use it for the right things.
Speaker 1:
[71:06] Gentlemen, you have done the impossible. You have done the impossible, which is, you have somehow not allayed my fears, but you've given me hope that the future has actually not yet been written, and that creates the opportunity to write it in the proper way. And I think what you've done really well today is you've given specifics; none of this is platitude. This is all the specificity of: here's what it could do, here's the damage it's going to do, here's a way to mitigate it, and here are some ways to get us shared prosperity from it. And I think that's truly, I think that's the conversation. The two of you, have you thought about having a podcast?
Speaker 2:
[72:02] We were hoping we would join you after this.
Speaker 1:
[72:04] What? Oh, yes! Unfortunately, what I've done is, I had my data scientists, they've been strip mining this conversation. I don't need you.
Speaker 3:
[72:15] We're done.
Speaker 1:
[72:16] I've created AI avatars of the two of you, and now we're done.
Speaker 3:
[72:22] Fantastic. But guys, that frees some time.
Speaker 1:
[72:25] Man, thank you so much for this conversation. I've truly appreciated it. Daron Acemoglu, Nobel Laureate in Economics and MIT Institute Professor, and David Autor, Rubenfeld Professor of Economics at MIT. Guys, fantastic, I really appreciate it, and I hope to continue the conversation with both of you.
Speaker 3:
[72:42] Thank you so much for having us on. It's superb. We love what you're doing, and it's great to have this conversation.
Speaker 2:
[72:47] This was fantastic. It was a lot of fun. Thanks, Jon.
Speaker 1:
[72:53] Holy smokes. I'm feeling something. Are you feeling something at home? Are you listening to this? Are you feeling something? I'm feeling the possibility of futures unwritten, the opportunity that it gives us to correct our path, to put us on a righteous path towards a more positive, productive, equal future. My God. I apologize, we don't have our normal staff chat today, because as you can see, I'm on the road, so we weren't able to accomplish that. But man, I so appreciated what those gentlemen were saying, and the specificity of it, and I hope you did too. And it's put me in something that I've needed for a little bit, which is a better mood. And by the way, maybe I'm drinking the Kool-Aid too, but I am in a slightly better mood than I was at the beginning of this whole Schmoogege. I enjoyed that conversation tremendously. And thanks as always to our fantastic team: lead producer Lauren Walker, producers Brittany Mehmedovic and Gillian Spear, video editor and engineer Rob Vitolo, and audio engineer Nicole Boyce. They had to work today. Today was a day when I couldn't figure out how to log into Riverside, so they had to do a little extra work. And as always, our executive producers, Chris McShane and Caity Gray. Very nice. And we shall see you next week. The Weekly Show with Jon Stewart is a Comedy Central podcast. It's produced by Paramount Audio and Busboy Productions.
Speaker 4:
[74:48] In a food system this big, Feeding America races to get good food to neighbors facing hunger. That's why we work in real time, using technology to connect food to people fast. Hunger doesn't wait. Give now to help rescue good food for neighbors at feedingamerica.org/rescuefood.
Speaker 1:
[75:07] Paramount Podcasts.