title Are you a good driver?

description The story of how a secret project at Google led to driverless cars on American roads. And, an answer to the question: are the robots actually safer drivers than we are? 

Driven: The Race to Create the Autonomous Car, Alex Davies

⁠Support Search Engine!

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy

Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

pubDate Mon, 23 Mar 2026 09:00:00 GMT

author PJ Vogt

duration 4202000

transcript

Speaker 1:
[00:01] This episode of Search Engine is brought to you in part by Rosetta Stone. There's something about spring that just gives me fresh energy. The days are longer, everything feels lighter. It makes me want to actually follow through on the things I've been putting off, like learning a new language. If you start now, by summer, you could already feel so much more confident speaking and understanding. That's why Rosetta Stone is such a smart place to begin. They've been a trusted leader in language learning for over 30 years, with millions of users and 25 languages to choose from. Spanish, French, German, Japanese, and more. Their immersive method helps you learn naturally. There's no relying on English translations. Instead, you connect words, visuals, and meaning in context, so you're building real understanding from the start. I am on a mission to learn Spanish. I will update you in these ads as it goes. Ready to start learning a new language this spring? Visit rosettastone.com/searchengine today to explore Rosetta Stone and choose the language that's right for you. Go to rosettastone.com/searchengine now and begin your language learning journey. This episode of Search Engine is brought to you in part by Serval. IT teams are constantly pulled into repetitive requests. Password resets, access permissions, employee onboarding, and the more a company scales, the more these tickets slow everything down. Serval helps eliminate that friction by automating up to 80% of help desk tickets. While legacy tools like ServiceNow and Jira are adding AI as an afterthought, Serval was designed for AI agents from the start. Serval AI can generate production-ready automations in seconds simply by describing the task in plain English. Take onboarding as an example. Traditionally, it's a long process involving emails, Slack messages, approvals, and manual setup. 
With Serval, a manager requests onboarding in Slack, and AI automatically provisions access across systems with the required approvals. Serval powers the fastest-growing companies in the world like Perplexity, Mercor, Verkada, and Clay. Get your team out of the help desk and back to the work they enjoy. Book your free pilot at serval.com/search. That's serval.com/search. Before we start the story today, I wanna ask you to imagine a different version of your life. You're you, but it's almost 200 years ago. And unfortunately, in our hypothetical, it's Monday morning. It's Monday morning, and it's very early, pre-dawn. You wake up to this really hard rapping at your window. That's the knocker upper, here to get you up for work. We're in the 1800s, before the invention of the adjustable alarm clock. The knocker upper is a job. The knocker upper walks the neighborhood with a long stick and taps it on the windows of people's houses early in the morning to wake them up for work. Who wakes up the knocker upper for work? Nobody knows. But this is a job, a job that will actually exist for another century. Outside, the gas street lamps are still burning. The lamp lighter lit them the night before. He's supposed to come at dawn to extinguish them, but it's so early that he hasn't yet. Your lamp lighter is one of those neighbors you have a deep fondness for, a fixture. Every day, you watch him make the rounds at dusk with his ladder and his light. You yourself are a driver. Professional driver 200 years ago is also a job. You're a person who sits on a coach and holds the reins of a horse. You take passengers where they want to go. You start your work day. Okay, hypothetical over. Two of those jobs are obviously so long disappeared that most people don't know about them. The knocker upper is your iPhone alarm. The lamp lighter is the electric street light. The third one, driver, has persisted. As a job for some, as a routine human task for nearly everyone else. 
This is a story about whether that's about to change. It's about how the word driver, which right now makes me picture a human, could soon transform to refer to a machine. The same way the words dishwasher, printer and computer all did. I've thought about this maybe too much in the year I've been working on this story. In conversations constantly, I'd ask the humans I met the same question. Are you a good driver? Are you, do you consider yourself a good driver?

Speaker 2:
[04:49] I do within limits. I think I'm a good driver because I understand the limitations of my driving.

Speaker 1:
[04:59] This is Alex Davies. He wrote an excellent book called Driven: The Race to Create the Autonomous Car. Alex, like me, thinks a lot about human driving, about his own personal limitations. What are the limitations?

Speaker 2:
[05:11] The limitations are that I can't always pay attention to everything, that I get tired. I've been trying really hard to be calmer on the road. My husband and I are expecting our first baby this fall.

Speaker 1:
[05:26] Congratulations.

Speaker 2:
[05:27] Thank you. And I thought that along with like reading all the baby books, a good project to work on is just be calmer in the car.

Speaker 1:
[05:35] A very good resolution because of course for most of us, driving is the riskiest behavior we routinely engage in. In fact, even Alex, despite his good intentions, would actually get in a car accident just a few months after we first spoke. He was okay. It was the car that was totaled. Safety is the entire pitch for the driverless car, which is really a car driven by a computer. Driverless cars don't get drunk, tired or distracted. They never text or feel road rage. And these driverless cars, they aren't the future. They're actually already here. But it's funny, if you just don't happen to live in a place that already has them, it's easy to not see how fast things are changing. Robo taxis like Waymo are operating in 10 American cities, providing millions of rides to Americans. In China, the rollout is happening even more widely, in more than twice as many cities. But here, if you live in a place like San Francisco or Austin, today a driverless car is about as exotic as an Uber. A passenger in those cities opens up their phone and decides who should drive them. A human driver or a robot driver. How that happened is a story, a story we are living through right now, whose ending promises to totally reshape the places we live. Today, we're going to tell you how we got here in chapters. Chapter 1, dreams without drivers. It turns out this dream that inventors have had to replace the human driver with some kind of machine, that dream is about as old as the lamplighters.

Speaker 2:
[07:12] People have been thinking about a self-driving car for just about as long as there's been a human driven car.

Speaker 3:
[07:21] Why?

Speaker 2:
[07:22] There's this funny thing you lose when you move from the horse to the human driven car, which is that in a horse drawn carriage, the horse is not just going to run off a cliff if you let go of the reins. You lose sentience in your vehicle.

Speaker 1:
[07:41] When automobiles first arrived, these powerful and non-sentient cars, there was actually a passionate fight to keep them off the streets. It was the 1800s and people feared these new things. The steam powered vehicles thundering down the roads that soon evolved into gas powered vehicles, also thundering down the roads. The fear was partly about jobs. These vehicles were seen as a huge threat to a whole network of working class jobs. Horse breeders and horse farriers, horse feed suppliers, horse manure haulers, horse carriage manufacturers, not to mention the Teamsters. Teamsters, today, the word makes me think of the Teamsters Union. But originally, the Teamsters were the workers who drove teams of horses. Teamsters were like truckers before we had trucks. Cars seemed to imperil all these horse related jobs. Even if you weren't worried about these workers, the cars were also less safe. Some anti-car activists battled to stop or slow the new technology, mainly with regulations. There were red flag laws, which said if you had an automobile, you had to hire a person to walk in front of it, waving a giant red flag to warn people. In Pennsylvania, a law was proposed requiring horseless carriage drivers who encountered livestock to stop, disassemble their car, and hide the parts behind the bushes.

Speaker 3:
[09:04] The governor vetoed it.

Speaker 1:
[09:08] But the thing about these crazy anti-car activists is that directionally, they were right. Those cars did initially wipe out a lot of jobs, even if they created more, and cars were very unsafe. The cities that threw their doors open to cars without regulation were rewarded with astonishing death rates. Detroit let drivers pretty much run wild. In the early 1900s, deaths accumulated in a Detroit without driver's licenses, stoplights, or turn signals. Many of those deaths were children. It took decades for society to mostly learn to live with cars. The rest of the story is just the world you grew up in. We invented laws, licenses, driver's ed. We learned to better design roads. We invented the highway, the seatbelt, the airbag. All those things made driving less deadly, although the smartphone reversed some of that progress. Nationally today, deaths from cars are about as common in America as deaths from guns or opioids, about one in a hundred. It will probably happen to someone you know in your life, maybe several someones. Whether or not you see that as an urgent problem to solve depends on you. But as long as there have been cars, there have been people who wanted to truly solve what's left of the safety problem, the best way we knew how. They wanted to make the car more like the horse it had replaced, make the car more sentient.

Speaker 2:
[10:31] So that thought is there early and like early visions of it include, oh, well, we'll have radio controlled cars because they had radios at the time. There's a real effort at one point to build magnets under the road. And at each stage, what a self-driving car can be is dictated by the technology that's available at the time for the most part. Yeah. No one's thinking that much about a vehicle that thinks for itself. They're just thinking about a vehicle that the person in it doesn't have to drive.

Speaker 1:
[11:09] Many different attempts, many different failures. As many wonders as we invented, we could not approach nature's most majestic creation, a horse's brain, at least not until the turn of the millennium.

Speaker 4:
[11:34] Deep within the Department of Defense, there's a little known military agency that has created some of the most innovative technology of the 20th century. This is the story of DARPA.

Speaker 1:
[11:44] Chapter 2, DARPA's Million Dollar Prize.

Speaker 4:
[11:48] DARPA's current goal is to develop autonomous military vehicles, machines that can operate on their own without drivers.

Speaker 5:
[11:55] DARPA has always been intrigued with-

Speaker 1:
[11:57] This is from a documentary called The Million Dollar Challenge. Honestly, less a doc, more an ad for DARPA, the Pentagon's research arm. DARPA's mission is to try to keep American technology one generation ahead of everybody else. It doesn't always work, but DARPA has invented or funded a lot. GPS and the M16, the early Internet and the Predator drone. In 2002, DARPA decided to pursue the driverless car in a very unusual way.

Speaker 2:
[12:25] The director of DARPA, at the time a guy named Tony Tether, who had been a door-to-door salesman in his youth, definitely has that flair and that way of thinking, says, let's have a contest. Let's see who can put all of these ingredients that we've developed together into a proper self-driving car. His original idea is, we'll drive them down the Las Vegas Strip. That's almost immediately nixed, because it's insane.

Speaker 1:
[12:54] Oh, right. You would have to like literally gridlock a huge American city so people could put robot cars on it.

Speaker 2:
[13:03] Exactly. So he says, okay, do you know what? We'll do it in the desert. We'll do it in the desert, outside Las Vegas, and anyone who wants to can make a team, build a self-driving car, bring it to the desert, and we'll race them.

Speaker 1:
[13:18] The driver that DARPA wanted to replace was the American soldier. DARPA wanted a vehicle that could drive itself down roads that might be filled with hidden explosive devices. So in this moment, at the tail end of the dot-com boom, DARPA is trying to inspire tech to build something besides another website. DARPA's Tony Tether announces that the prize for whoever can win its Grand Challenge will be $1 million.

Speaker 2:
[13:42] The rules were very open. There were little rules, like you couldn't have two vehicles communicating with one another, but you could build any kind of vehicle you wanted. You could have six wheels, it could be a truck, it could be a motorcycle, it could be a tricycle. It just couldn't attack other vehicles. That was ruled out early on.

Speaker 1:
[14:00] Oh, was that a concern that people would just, like, sort of battle bot the thing? Your autonomous vehicle would have, like, a little shredder that would take out somebody else's?

Speaker 2:
[14:08] Someone asked in the first Q&A at this, like, they said, can we attack other vehicles? And they said, no. And it's funny you bring up battle bots because a lot of teams who entered this had battle bots history.

Speaker 1:
[14:21] Interesting.

Speaker 2:
[14:22] They were used to building robots for interesting purposes. And when they caught wind of this, they said, we can do this, we can scrap together some money, and this will just be fun.

Speaker 1:
[14:36] I'm going to tell you what happened in this robot race in the desert, not because I care so much about these early robot vehicles, but because I care a lot about the engineers who were making them. These would be the people who would later go on to lead development for the billion-dollar companies creating today's driverless cars. And these people had very different views about how to get that technology ready. Different values when it came to things like the acceptability of risking human life. Abstract differences that would become very concrete later on. To the point where people would be charged with federal crimes. That's the future. But listening to this part of the story, what I listen for is, how much of it can you detect already? How much are the differences already present? The first engineer I want you to pay attention to is a man named Chris Urmson. And way back in 2002, how did you end up being part of the DARPA Grand Challenge?

Speaker 5:
[15:35] It sounded like fun.

Speaker 1:
[15:38] Chris, these days the CEO of a large tech company. Back then, a PhD student at Carnegie Mellon University. When he first got recruited for the race, he was out in the field, observing a robot as it crept across the Atacama Desert, training for its future deployment on the surface of Mars.

Speaker 5:
[15:55] My PhD advisor came down and was really excited about this DARPA Grand Challenge thing and the idea that you'd have a robot run across the desert at 50 miles an hour. It just sounded exciting having spent the last couple of weeks walking behind a robot at very low speed.

Speaker 1:
[16:15] So Chris would join Carnegie Mellon's Red Team and help build a car called Sandstorm, a bright red Humvee with the top lopped off, a plethora of futuristic sensors mounted to it, like scanners a crackpot would use to search for aliens. You can see Chris back in that documentary. He explains to the filmmaker at the time that the hard part, of course, isn't the vehicle, it's the driver. How do you even begin to teach a computer to operate a Humvee at all?

Speaker 5:
[16:40] How does a computer make the steering wheel turn? How does a computer change the pressure on the brake and the throttle? Those are the issues that we're fighting through right now.

Speaker 1:
[16:49] Sandstorm represented the best entry from the contest's traditional academic crowd. But there's a different crowd there too. Represented best by a man named Anthony Levandowski. Can you tell me about Anthony Levandowski?

Speaker 2:
[17:02] Anthony Levandowski? Where to begin? So Anthony is like an entrepreneur. He's a really charming guy. He's six foot six. He's gangly. He's all get down. He grew up mostly in Belgium because his mom was working for the EU. For high school, he moved to Marin to live with his dad. And he's a hustler.

Speaker 6:
[17:35] My name is Anthony Levandowski. I was a grad student at Berkeley. Instead of continuing on to finish my PhD, I decided it was much better to do the Grand Challenge.

Speaker 1:
[17:46] We asked Anthony for an interview. He didn't respond. But here he is in the footage from back then. Anthony did not have the engineering experience or resources of a team like Carnegie Mellon's Red Team. So he tried something very different. A vehicle that had almost no chance of winning the race, but which was also perfectly designed to stand out, to get him a lot of attention, maybe a job. The race's only self-driving motorcycle. It was named Ghost Rider, a stubby little thing covered in stickers, with an antenna on the back and cameras on the front.

Speaker 6:
[18:17] There's a steering actuator on the top here, which allows us to modify the steering angle. So basically, if you're riding and you start to fall to the left, you steer left, that makes you turn to the left, and then you get the centripetal acceleration that pushes you back up to the right. And you're monitoring that in real time and making small adjustments and you stay balanced.
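The balance scheme Levandowski describes (steer into the fall so the bike's curved path pushes it back upright) can be sketched as a toy control loop in Python. Everything below, the simplified dynamics, the gains, and the 100 Hz rate, is invented for illustration; it is not Ghost Rider's actual controller.

```python
# Toy model of self-balancing by steering: lean into a fall, and the
# resulting centripetal acceleration rights the bike. All constants
# here are made up for illustration.

def balance_step(lean, lean_rate, dt=0.01, k_p=8.0, k_d=2.0):
    """One 100 Hz control/physics step. lean is the lean angle in radians."""
    steer = k_p * lean + k_d * lean_rate      # steer toward the fall
    # Gravity tips the bike further over; steering produces a righting
    # centripetal term (both grossly simplified).
    lean_accel = 9.8 * lean - 12.0 * steer
    lean_rate += lean_accel * dt              # semi-implicit Euler update
    lean += lean_rate * dt
    return lean, lean_rate

lean, rate = 0.1, 0.0                         # start tipped 0.1 rad over
for _ in range(1000):                         # ~10 simulated seconds
    lean, rate = balance_step(lean, rate)
# lean has decayed back toward zero: the bike stays upright
```

With the steering term disabled (k_p = k_d = 0), the same loop shows the lean angle growing without bound, which is, in miniature, what happened when the stabilization switch was left off.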

Speaker 7:
[18:37] The strobe light is on, the command from the tower is to move. Ladies and gentlemen, Sandstorm.

Speaker 1:
[18:46] The race happens on a Saturday in March of 2004.

Speaker 7:
[18:49] Autonomous vehicle traversing the desert with the goal of keeping our young military personnel out of harm's way.

Speaker 1:
[19:06] What happens the first time they try to do this competition?

Speaker 2:
[19:09] The 2004 Grand Challenge is an utter, hysterical disaster.

Speaker 1:
[19:18] Disaster number one, Ghost Rider, the motorcycle. Anthony Levandowski forgot to flip on the switch for the stabilization system. The bike immediately topples. Ghost Rider down.

Speaker 5:
[19:31] Anthony, good effort.

Speaker 2:
[19:34] And then, every vehicle after it fails miserably. Like one vehicle drives up onto a berm, flips off. One vehicle drives straight out, does an inexplicable U-turn, and just drives back to the starting line. And the rules are that once your vehicle starts, you can't do anything.

Speaker 1:
[19:53] Even Sandstorm got stuck on a berm. Chris Urmson just standing there, unable to help his robot.

Speaker 5:
[19:59] Poor thing was trying to get going, but its wheels were just spinning on the gravel, and tried so hard that it actually melted the rubber of the tires. And so there's these plumes of black smoke before they killed it.

Speaker 1:
[20:12] For the roboticists, this was obviously very disappointing. Chris Urmson compared it to an Olympic marathon, where the best runner only makes it two of the 26 miles. What this contest had done, though, was it had flushed all these inventors out. It had jumpstarted the scene that would develop this technology. One of the most important people there that day, actually just watching, was someone I haven't mentioned yet. A legendary roboticist named Sebastian Thrun.

Speaker 2:
[20:38] Sebastian Thrun, he was at the first Grand Challenge. He didn't bring a team, he wasn't participating. DARPA wanted to show off some other projects they'd been funding, including one of his robots, so he brings the robot and so he's there. And he watches this disaster and he thinks, I can do better than this.

Speaker 8:
[20:59] I looked at the very first iteration of this Grand Challenge where I didn't participate, I was a spectator.

Speaker 1:
[21:04] This, of course, is Sebastian Thrun. He grew up in West Germany, moved to the US, taught at Carnegie Mellon before moving to Stanford. Watching that day, he saw this fundamental error he believed all the entrants had made.

Speaker 8:
[21:16] I saw that all the teams treated this like a hardware problem. They looked at this and say, we have to build bigger wheels, and a bigger chassis, and so on. I looked at this and said, well, wait a minute, the challenge really is to build a self-driving car that can drive through the desert. I can get a rental car that can do it just fine, provided there's a person inside, and the challenge is really to take the person out of the driver's seat and replace it by a computer. That is not a problem of bigger tires. That's actually a software problem.

Speaker 1:
[21:51] Sebastian Thrun had a dual background, robotics and artificial intelligence, which probably explains his focus here on the robot driver's mind. He was thinking about something else too. The military wanted this tech to replace a relatively small number of drivers in its war zones. But Sebastian was already imagining something bigger. What would happen to traffic deaths worldwide if one day, everyone had access to a driverless car?

Speaker 8:
[22:16] I had experiences of losing people in my life to traffic accidents. And I felt, we lose over a million people in the world to traffic accidents. Wouldn't it be amazing if DARPA invented something that would save a million lives a year?

Speaker 9:
[22:29] In October of 2005, 43 teams have brought their vehicles to compete in a unique event. A race driven not by testosterone, but computer code.

Speaker 1:
[22:43] Chapter 3. Machine.

Speaker 3:
[22:45] Learning.

Speaker 9:
[22:50] The race course is a circular maze that zigzags for 132 miles.

Speaker 1:
[22:54] 18 months later, for the second Grand Challenge, DARPA doubled the bounty. Two million dollars. This footage is from a PBS documentary called The Great Robot Race, narrated to my mild joy by John Lithgow. Familiar faces have returned. Chris Urmson, back with the Carnegie Mellon team, this time with two vehicles, Highlander and Sandstorm. Anthony Levandowski, back with his motorcycle, which still doesn't work. He's knocked out in the qualifiers. Now, there's also Stanford's entrant. Compared to Sandstorm, the bulked up Hummer, the car looks measly. A blue SUV donated by Volkswagen. A baby-faced Thrun smiles next to his soccer-mom-looking vehicle.

Speaker 8:
[23:34] The vehicle's name is Stanley. So Stanley is nothing else but Stanford. But it also gives the vehicle a personality. We think of the vehicle more and more as an intelligent decision maker.

Speaker 9:
[23:46] Thrun is a computer scientist.

Speaker 2:
[23:47] And Thrun really brought more artificial intelligence, which at the time, we're talking 2005, was still rather primitive, especially compared to what we have today. But he could use it to teach his vehicle how to recognize the road and how to do it much faster. They found a dirt road out near Stanford, and they drive the car down it and have its cameras record what they were seeing.

Speaker 8:
[24:16] The robot Stanley was able to train itself as it went. And the way it worked is its eyes looked way ahead and it could see stuff way in the distance. When it drives over the stuff, it could tell if it was a good place to drive or not, because it could measure how slippery or how bumpy the road was. And then it could retroactively train and say, this green stuff over there is something good to drive on, aka grass, and this brownish stuff, aka mud, is not so good to drive on.

Speaker 1:
[24:44] And so it was able to detect patterns and generalize from what it had learned?

Speaker 8:
[24:50] Yeah, absolutely. And it did this like 30 times a second. I mean, just like a person.
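A stripped-down version of the self-supervised trick Thrun describes: the car later drives over terrain it first saw through its cameras, measures how rough the ride actually was, and uses that outcome to label its earlier view. The colors, thresholds, and nearest-mean classifier below are toy stand-ins, not Stanley's real pipeline.

```python
# Toy sketch of Stanley-style self-supervised terrain learning.
# Each record pairs the mean color of a patch the cameras saw ahead
# with the roughness measured once the car drove over that patch.
observations = [
    ((60, 140, 60), 0.1),   # greenish patch turned out smooth (grass)
    ((70, 150, 65), 0.2),
    ((120, 90, 60), 0.9),   # brownish patch turned out bumpy (mud)
    ((130, 95, 70), 0.8),
]

ROUGH_THRESHOLD = 0.5       # hypothetical cutoff for "bad to drive on"

def train(observations):
    """Label each patch by its driving outcome; keep mean good/bad colors."""
    good = [c for c, rough in observations if rough < ROUGH_THRESHOLD]
    bad = [c for c, rough in observations if rough >= ROUGH_THRESHOLD]
    mean = lambda colors: tuple(sum(ch) / len(colors) for ch in zip(*colors))
    return mean(good), mean(bad)

def classify(color, good_mean, bad_mean):
    """Nearest-mean verdict on a distant patch the cameras can see now."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "drivable" if dist(color, good_mean) < dist(color, bad_mean) else "avoid"

good_mean, bad_mean = train(observations)
verdict = classify((65, 145, 62), good_mean, bad_mean)   # a fresh greenish patch
```

The point of the trick, in spirit: no human ever labels an image. The bumpiness measurement does the labeling retroactively, which is what allowed the real system to keep re-training itself many times a second.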

Speaker 1:
[24:56] The race kicks off with Stanley sandwiched between Carnegie Mellon's two behemoths.

Speaker 9:
[25:01] Highlander leads the pack. Followed by Stanley and Sandstorm.

Speaker 1:
[25:06] What happens in the second race?

Speaker 2:
[25:07] The second race is as successful as the first race is disastrous.

Speaker 1:
[25:15] Nearly every entrant in the second race would go further than Sandstorm had in the first. Multiple vehicles would finish the course. The real question was who would do it fastest? And so at what point was it clear to you that you were going to win?

Speaker 8:
[25:30] Well, once we passed the front-running team, we kind of saw the vehicle descend into what was the hardest part of the race course, a very, very treacherous mountain pass. And we saw at a distance a dust cloud. We saw a helicopter. We saw little features that made us believe, wow, there's something happening that's magical. And this dust cloud then all of a sudden turned bluish because the car was blue and came closer. And then it came first to the finish line. It was unbelievably magical.

Speaker 1:
[26:00] At the end of the doc, over some criminally corny piano music, Sebastian Thrun gives his post-race interview. He's dressed a lot like a race car driver; watching, you could forget he wasn't in the car.

Speaker 8:
[26:10] It was just amazing to see this community of people. That community succeeded today. Behind me, there are three robots that made it all the way through the desert, and all three of them did the unthinkable. It's such a fantastic success for this community. I think we all win.

Speaker 1:
[26:30] A made-for-TV kumbaya moment. Still years before the race to build driverless cars would enter its cutthroat phase. What would happen next is that a small band of lunatics would take driverless cars out of the desert and start secretly driving them on public roads in the state of California. They would do this at the behest of a man who had been observing from the stands that day, disguised in a hat and sunglasses, who'd watched the challenge while his mind spun. That's after a short break. This episode of Search Engine is brought to you in part by Framer. Your marketing website sets the tone for your brand and is the one touch point every single one of your customers has. If you still struggle to make small changes and simple updates, you're leaving opportunity on the table. That's why so many companies, from early stage startups to Fortune 500s, are turning to Framer, the website builder that turns your .com from a formality into a tool for growth. Framer works like your team's favorite design tool. Real-time collaboration, a robust CMS with everything you need for great SEO, and advanced analytics with integrated A/B testing. Changes go live to the web in seconds with one click, without help from engineering. It's an enterprise solution with premium hosting, enterprise-grade security, and 99.99% uptime SLAs. Learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/search, for 30% off a Framer Pro annual plan. That's framer.com/search for 30% off. framer.com/search; rules and restrictions may apply. This episode of Search Engine is brought to you in part by Chime. Chime is changing the way people bank, and it feels like banking finally caught up to real life. This is smarter, fee-free banking built for you. Unlike old school banks that still charge overdraft fees and monthly fees just for existing, Chime is built for everyday people, not the 1%. 
With MyPay, you can access up to $500 of your pay when you want, and with Direct Deposit, you can get paid up to two days early. That kind of flexibility makes a real difference. You can forget overdraft fees, minimum balance requirements, and monthly fees. Chime isn't just smarter banking, it's the most rewarding way to bank. Join the millions who are already banking fee-free today. It just takes a few minutes to sign up. Head to chime.com/searchengine. That is chime.com/searchengine.

Speaker 10:
[29:16] Chime is a financial technology company, not a bank, banking services, a secure Chime Visa credit card, and MyPay line of credit provided by the Bancorp Bank NA or Stride Bank NA. MyPay eligibility requirements apply, and credit limit ranges $20 to $500. Optional services and products may have fees or charges. See chime.com/feesinfo. Advertised annual percentage yield with Chime Plus status only. Otherwise 1.00% APY applies. No min balance required. Chime Card on-time payment history may have a positive impact on your credit score. Results may vary. See chime.com for details and applicable terms.

Speaker 11:
[29:40] Have you ever felt like you are living just a B or B plus life? It's so dangerous to live that. More dangerous than a B minus or a C plus life, because when you're living a B or B plus life, you don't change it. You think it's good enough, is it? I'm Suzy Welch. I host a podcast called Becoming You. People think, OK, an A plus life is not available to me, but there is a way. We are all in the process of becoming ourselves. Listen to Becoming You wherever you get your podcasts.

Speaker 1:
[30:20] Welcome back to the show. Chapter four, Something Actually Useful for the World. The race in the desert had been designed as a spectacle, something flashy to draw out America's smartest roboticists. But it had drawn another person who'd come for his own reasons. Google's Larry Page arrived at the DARPA Grand Challenge in a baseball hat and sunglasses, a disguise. He found Sebastian Thrun and buttonholed him, asking him a million highly specific questions about things like the wavelength his LiDAR system used. But this meeting in the desert, this was not actually their first introduction.

Speaker 8:
[31:00] Well, the first time I met Larry, it was a bit earlier. He had built a small little robot that acted as a telepresence for meetings, and he was trying to drive it around the Google offices instead of going to meetings himself. And he sent me a message and said, I'm going to show you the robot I've built. And I, in the spirit of like craziness, I sent a message back saying, Larry, I'm so glad that Google lets you use 20% of your time to do something useful for the world. I expected either a rapid response or to never hear from him again. It turns out I was lucky. He responded immediately. I took his robot, I fixed it in the next 24 hours, and he was very happy.

Speaker 1:
[31:44] Larry Page, it turned out, had actually been interested in autonomous vehicles since at least grad school. That's what he'd wanted to do his thesis on before being guided by some wise PhD advisor towards search engines instead. Now, as a spectator at DARPA's Second Grand Challenge, he could see real-world evidence that autonomous vehicles might actually be a thing. At first, Larry Page hires Sebastian Thrun, along with fellow DARPA contestant Anthony Levandowski, just to build what will become Google Street View. They'll actually modify the roof-mounted camera system that Stanley the car had used to begin photographing American streets. But before long, Larry Page returns to Sebastian with his dream of a driverless car. So how soon after arriving at Google does Project Chauffeur begin? Larry Page says to you, I have a mission. How does this happen?

Speaker 8:
[32:39] And this is an embarrassing moment for me. It's about two years later, 2009, where I sit in my cubicle and Larry Page comes by and says, Sebastian, I think you should build a self-driving car that can drive anywhere in the world. And my immediate reaction was, no, taking the technology we built for this empty desert and putting it in the middle of Market Street in San Francisco is going to kill somebody. And Larry would come back the next day with the same idea, and I would give him the same answer. And both of us got increasingly more frustrated, like, god damn it, it can't be done. And eventually he came and said, look, Sebastian, okay, I get it. You can't do it. I want to explain to Eric Schmidt, the CEO at the time, and Sergey Brin, my co-founder, why it can't be done. Can you give me the technical reason why it can't be done? And that's the moment of incredible pain, because I go home and I can't think of a technical reason why not. It was this kind of moment where I felt, look, I'm the world expert on self-driving cars, and I'm the person who insists it can't be done. That taught me an incredibly important lesson about experts. That for the rest of my life, I decided experts are usually experts of the past, not the future. And if you ask an expert about innovation, something crazy new, they're the least likely person to say, yes, it can be done.

Speaker 1:
[33:59] So this is where the Google self-driving car project begins in 2009. It's led by Sebastian, joined by others from the DARPA Challenges. The methodical Chris Urmson was running most things day to day. Anthony Levandowski, the flashy motorcycle guy, would work on hardware. Dmitry Dolgov, another DARPA veteran, would be responsible for planning and optimization. It was a secret project. They'd report directly to Larry Page, a small enough team that there'd be no bureaucracy, few emails, fewer meetings. Just 11 engineers who, writer Alex Davies says, represented some of the best young talent in the country.

Speaker 2:
[34:33] And so Google builds this very quiet team, and it says to them, build us a self-driving car. And because that goal is super nebulous, they give them two challenges. They say, safely log 100,000 miles on public roads, but they also give them a challenge called the Larry 1K.

Speaker 8:
[34:58] So Larry and Serg and I sat together, and the two of them carved out 1,000 total miles of road surface in California.

Speaker 2:
[35:06] They open up Google Maps, and they just click around, and they look for 10 separate 100-mile routes that are really tricky.

Speaker 8:
[35:17] Absolutely everything, like the Bay Bridge and Lake Tahoe and Highway 1 to Los Angeles and Market Street and even crooked Lombard Street.

Speaker 2:
[35:25] And they say to the team, you have to drive each of these 100-mile routes without one human takeover of the system, without one failure of the car.

Speaker 1:
[35:35] To get off to a running start, the team licenses the code from Stanford's DARPA Urban Challenge vehicle. Anthony Levandowski goes to a local Toyota dealership and buys eight Priuses, takes them back to Google, and retrofits them to accept a computer as a driver. He hooks that computer driver electronically into the brakes, the gas, the steering. These Priuses get a radar system behind the bumper, cameras, a lidar system spinning 360 degrees on top. Lidar is like radar, but it uses laser light instead of radio waves. At first, the team gives each Prius a cool name, like Knight Rider.

Speaker 12:
[36:12] But I think we quickly realized that we're not going to be able to name all these vehicles as we scale up our fleet, and so we just started to number them like, you know, Prius 27.

Speaker 1:
[36:21] This is Don Burnett. He had been a researcher working on autonomous submarines. He lost a friend in a car accident, separately got in a bad accident himself, and decided he wanted to do work on self-driving cars. That's how he eventually ended up on the team in its early days.

Speaker 12:
[36:36] I was on the Motion Planning and Behavior Decision-Making team, and my responsibility was to work on the nudging behavior.

Speaker 1:
[36:44] Nudging: when a big truck passes a human driver on the right, the driver will nudge a little to the left. For us, it's an instinct. Don's job was to teach a computer to nudge.

Speaker 12:
[36:54] They're trying to encode the behavior that you would use as a driver under kind of partially good perception. And it's a really tricky problem.

Speaker 1:
[37:04] A team of academic roboticists, some of whom had had friends die in cars, spending Google's money to see if they could make driving safer. It was a weird era. There's this big concert venue near Google's offices called the Shoreline Amphitheater. In 2009, you could have seen Sheryl Crow there, The Killers, Phish. But the most interesting show that year was one almost nobody knew about. In the venue parking lot, on days when there was no concert, no tour buses around to see them, the Google team would run its first test runs of their driverless cars, essentially hiding in plain sight. A Prius driving itself around the amphitheater parking lot with an attentive safety driver sitting behind the wheel, just in case. The team was making sure the basics functioned, that the sensors could really recognize another car, that the computer in the car was abiding by their orders. These were the baby steps that happened in this parking lot and at an empty airplane runway that was close to their offices. Spring 2009, the team tries actual real road driving for the first time. Chris Urmson takes one of the Priuses out on the Central Expressway, speed limit 45 miles per hour. There are humans driving here. And immediately, outside the confines of the empty parking lot and empty airplane runway, here's what's clear. They had a real problem. The car was swerving wildly.

Speaker 5:
[38:30] It was weaving around like a drunken sailor. And we realized that the scale of the runway was such that you didn't notice the one or two foot kind of oscillation it had in lateral control. And you put it on Central Expressway, and suddenly, you know, yep, turns out actually that's a problem.

Speaker 1:
[38:52] One more problem to fix. Listening to this story, it's funny because I can imagine it giving me a totally different feeling than it does. A tech company with nobody's permission was testing driverless cars on public roads in California. I don't know why that strikes me as being about invention, instead of just hubris and impunity. Maybe it's because I know that Google would be one of the few tech companies whose driverless cars would not cause any fatal accidents in testing. And that the team would just take more safety precautions than the other companies who'd rush in later to catch up with them, once this was an arms race. The way these cars were designed, the safety driver sat behind the steering wheel, ready to take over, and in the other seat sat their partner, watching the monitor displaying a graphical interface designed by Dmitry Dolgov. The people watching the screen would call out problems ahead, some discrepancy between what the sensors were seeing and what was actually in the road. This is what teaching a car to drive actually looked like. A two-person team manning the cars, logging errors, going back to the office to troubleshoot, and then updating the code. I asked Don Burnett about this era. And while you're doing this, and then, like, you leave work and you get in your car that you drive as a human, did you find yourself thinking more carefully, like, how do I know what I know when I'm driving? You're trying to teach a machine by day. Did it affect how you thought about human driving by night?

Speaker 12:
[40:20] Almost obnoxiously so to any passengers in the car with me. I was obsessed with one big question, which is why do humans drive the way they drive? And it turns out there were no good answers. And I still think they're not great answers. And instead of actually answering that question, we've just turned to machine learning to infer the deep truths behind why humans do what they do. And so there's some basic principles that you can understand. Like we try to minimize lateral acceleration, meaning you don't want to be thrown to the outside of your car when you're making a turn. So you're going to slow down. But how much do you slow down, right? And it turns out that's contextual.

Speaker 1:
[41:02] Don gave me an example. So you're trying to figure out the right speed and angle for the car on one of those tight, curvy onramps onto the highway. You want it to feel comfortable for a passenger. Don says you can work out the math. The lateral acceleration is 2 meters per second squared. But the surprising thing is that number only applies on the onramp.

Speaker 12:
[41:24] If I put you at a cul-de-sac in a neighborhood, and you were going to do a U-turn at the end of the cul-de-sac, even though the speed is significantly slower, if you did 2 meters per second squared of lateral acceleration around a cul-de-sac, you would tell your driver they were crazy. It would feel incredibly uncomfortable. Like incredibly uncomfortable.

Speaker 1:
[41:49] You would feel like you were in Mario Kart.

Speaker 12:
[41:51] Yes, it would feel like Mario Kart. And remember, this is a force, so the physical feeling on your body is exactly the same. But the contextual awareness of the situation, of speeding up to get on the highway versus making a U-turn on a residential street, tricks your brain into feeling opposite about the situation. And so it turns out the limit for a cul-de-sac is around 0.75. It's almost three times less than you would be willing to tolerate as you accelerate onto a highway. And so there were things like that where you couldn't just say humans have specific physical restrictions, right, from a forces perspective. The context matters, and when the context matters, now all of a sudden, anything is game. So things like that is where I spent my time as a researcher, trying to figure out, okay, how are we going to make this comfortable for passengers?
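Don's two numbers imply a simple relationship you can sketch in a few lines. This is only an illustration of the physics he's describing, not Waymo's planning code, and the curve radii below are assumptions chosen for the example, not figures from the episode.

```python
import math

# A minimal sketch (not Waymo's actual code): comfortable cornering speed
# from a lateral-acceleration limit. For a curve of radius r, lateral
# acceleration is a = v^2 / r, so the max comfortable speed is v = sqrt(a * r).
# The two limits are the ones Don cites: ~2.0 m/s^2 on a highway onramp,
# ~0.75 m/s^2 in a cul-de-sac.

def max_comfortable_speed(radius_m: float, lat_accel_limit: float) -> float:
    """Highest speed (m/s) that keeps lateral acceleration under the limit."""
    return math.sqrt(lat_accel_limit * radius_m)

# Illustrative radii (assumed for the example):
onramp = max_comfortable_speed(radius_m=80.0, lat_accel_limit=2.0)      # ~12.6 m/s, about 28 mph
cul_de_sac = max_comfortable_speed(radius_m=9.0, lat_accel_limit=0.75)  # ~2.6 m/s, about 6 mph
print(f"onramp: {onramp:.1f} m/s, cul-de-sac: {cul_de_sac:.1f} m/s")
```

The hard part Don is describing is visible here: the formula is trivial, but nothing in it tells you which limit, 2.0 or 0.75, applies to a given stretch of road. That's the context problem.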

Speaker 1:
[42:47] All these little problems to solve. But there was one gift, which was that the team at this point had an overarching goal uniting them. The DARPA challenge had told them, drive across this patch of desert. The Larry 1K challenge told them, drive these 10 routes without human intervention. The specificity of the mission meant they never had to squabble about why they were there. By 2010, just a year in, the team was really on a roll.

Speaker 2:
[43:14] They start knocking out routes.

Speaker 12:
[43:17] Each one of the routes was unique and distinct and different and had its own challenges.

Speaker 2:
[43:22] Down Route 1, Silicon Valley to Carmel.

Speaker 12:
[43:25] The bridges run, where we had to go across all of the bridges in the Bay Area, starting in Mountain View, finishing crossing the Golden Gate Bridge.

Speaker 2:
[43:33] It's Chris Urmson in the car, it's Anthony Levandowski in the car.

Speaker 12:
[43:37] I was in the car with Dmitry, Chris, and Anthony. It was the four of us in the Prius.

Speaker 2:
[43:42] They were figuring out the technology much faster than they thought they could.

Speaker 1:
[43:46] The Larry 1K was set up like a video game, meaning they'd get to try the route over and over until they could complete it without a single human takeover. Then, they'd move on to the next one.

Speaker 5:
[43:56] It was really a proof of concept exercise. Can you even make this happen once?

Speaker 2:
[44:04] When they fail a route, they know what the car can't handle, so they go back and say, you have to be better at doing X, Y, Z.

Speaker 12:
[44:11] And then we got back to the office. We regrouped. We went back out, I think, at like 11 p.m. And by 1 a.m., we had completed the route.

Speaker 2:
[44:21] They buy a bottle of Korbel champagne. They all write their names on it.

Speaker 1:
[44:25] Korbel, $13.99 a bottle. The champagne they have at Trader Joe's. They had one for every route they completed.

Speaker 2:
[44:32] And one by one, they pick off the Larry 1K routes, and they think this is going to take them about two years when they start out, and they do it in a little bit more than a year, nearly twice as fast as they had expected.

Speaker 1:
[44:46] By fall of 2010, they're done. Here's Chris Urmson.

Speaker 5:
[44:49] And I think we had a big party up at Sebastian's house in Los Altos Hills. So, you know, it was pretty spectacular, right?

Speaker 2:
[44:57] They throw each other in the pool, they celebrate, and then they're not entirely sure what to do next.

Speaker 5:
[45:04] It was kind of, okay, and now what?

Speaker 1:
[45:08] The team had pulled off a kind of miracle in a year. A driverless car with human supervision, with lots of human coding, but still, a driverless car successfully navigating some very tricky roads in California. They've done this safely, they've done it quickly, and now, things would begin to wobble. Competition would arrive, the team itself would begin to schism, and one member, a person who believed the team was moving too slowly, would actually take matters into his own hands in a particularly extreme way. After the break. This episode of Search Engine is brought to you in part by MUBI, the global film company that champions great cinema. From iconic directors to emerging auteurs, there's always something new to discover. If you're looking for something really special, check out Father, Mother, Sister, Brother, the eagerly awaited new film from Jim Jarmusch, now streaming on MUBI in the US. It follows adult children navigating their relationships with somewhat distant parents and each other. It stars Tom Waits, Adam Driver, Mayim Bialik, Charlotte Rampling, Cate Blanchett, Vicky Krieps, Indya Moore, and Luka Sabbat. MUBI is a curated streaming service dedicated to elevating great cinema from around the globe. Perfect for lovers of great cinema and for anyone who hasn't discovered how much they love it yet. To stream the best of cinema, you can try MUBI free for 30 days at mubi.com/searchengine. That's mubi.com/searchengine for a whole month of great cinema for free.

Speaker 2:
[46:52] Thy ticket, Lady Jennifer of Coolidge.

Speaker 11:
[46:55] Well, many thanks, good sir.

Speaker 13:
[46:57] Here is my Discover card.

Speaker 14:
[47:01] They accept Discover at Renaissance fairs?

Speaker 13:
[47:03] Yeah, they do here.

Speaker 15:
[47:04] Discover is accepted at the places I love to shop. Get it with the times.

Speaker 16:
[47:09] With the times?

Speaker 17:
[47:11] You're playing the lute.

Speaker 10:
[47:14] Yeah, and it sounds pretty good, right?

Speaker 18:
[47:15] Discover is accepted at 99% of places that take credit cards nationwide. Based on the February 2025 Nielsen Report.

Speaker 1:
[47:23] This episode of Search Engine is brought to you in part by Claude from Anthropic. When you're chasing down an answer, following a thread through contradictory sources, or trying to figure out why something doesn't add up, you need tools that help you think, not just search. Claude is built for people who ask uncomfortable questions. When you're researching a story, working through what different sources actually say, or trying to understand why the official version doesn't hold, Claude digs deeper with you, surfaces contradictions, asks the follow-up questions that push you past the easy explanation. It's designed to help you think deeper, not to keep you clicking. A thinking partner for people who don't stop until they actually understand. Try Claude for free at claude.ai/searchengine and see why the world's best problem solvers choose Claude as their thinking partner. Welcome back to the show. As early as 2010, Google's Driverless Car Project had developed some very impressive self-driving technology. But what they were struggling to decide was this. What was the actual product they were developing here? Here's Sebastian Thrun.

Speaker 8:
[48:37] We had a lot of debates inside Google about what the right business model was. At some point, we actually had a big debate, we should just buy Tesla. And Tesla was worth $2 billion at the time. I remember this. Maybe we should have in hindsight. But joking aside here, there was a debate whether this is more of an assistive technology or a disruptive replacement technology.

Speaker 1:
[49:03] Basically, should they follow the route that Tesla ultimately would? Design self-driving as a feature in your car, something that could take over sometimes but still need human monitoring? Or was it better to wait until the car could fully drive itself? Thrun would eventually come around to this second version of self-driving. Specifically, he'd come around to the idea of self-driving robo-taxis.

Speaker 8:
[49:24] A taxi service type system is way more capital efficient than ownership. An owned car is being used about 4% of the time and it's parked 96% of the time. Imagine a city without parked cars, where every car is being utilized, call it 50% of the time, which means we need only about 10% of the number of cars we have today, when we own cars. That's going to happen, there's absolutely no question.

Speaker 1:
[49:49] What Sebastian is describing here so matter-of-factly is a fairly radical reimagination of American cities. The idea that robo-taxis would be so cheap and widely available that most people just wouldn't own cars, that we could put something else, anything else, in the places where we put most of our parking lots and parking spaces, that is a far-fetched idea, just given how much of American identity is tied into personal car ownership. A far-fetched idea, and for it to begin to happen, Google would have to bring a product to market. But the years passed and they didn't. Some people who were there felt stuck. Don Burnett says he believes life at Google got dangerously cushy. The food was great, the money was too. These former academics making much more than they'd ever expected.

Speaker 12:
[50:40] There was a lack of urgency on the team to actually make something viable. We had a funding supply that effectively felt infinite. And maybe it was, maybe it wasn't. But it certainly felt infinite. And when you have infinite funding, you're not forced to make hard decisions. You're not forced to focus. You're not forced to look at the opportunity, the market, the customer and be the best. It was more like, hey, let's take our time, let's make sure we do it right, which is on its face a good principle. But at the end of the day, I think the lack of urgency wasn't for everyone.

Speaker 2:
[51:19] And within the team, you get Team Chris and Team Anthony. And they start butting heads all the time.

Speaker 1:
[51:26] Chris and Anthony, meaning Chris Urmson, official head of the project, versus Anthony Levandowski, who I still think of as the motorcycle guy.

Speaker 2:
[51:34] The main difference in their approach is how quickly they want to move. Anthony is very okay with risk, we'll say. He gets one of these cars and he's driving it back, and he lives in Berkeley, works in Palo Alto. He's just using this car like on the Bay Bridge every day, probably outside the bounds of what the team actually wanted. And he's not like necessarily logging data, he's just enjoying his self-driving car and taking it all over the place. Chris comes from an academic background. He's that Canadian, very nice, very careful, very risk averse.

Speaker 1:
[52:13] When I asked Chris Urmson about all this, his memory was slightly different. In his memory, Team Anthony was pretty much just Anthony. And Anthony, he said, was a move fast and break things kind of guy. Move fast and break things, a motto famously coined by Mark Zuckerberg. It defines a way of developing technology which once might have felt cute and revolutionary, but which today, at least to me, feels pretty irresponsible. Chris didn't think that philosophy was an option for their team. Even if their cars were statistically safer than human drivers, he knew that the first news story about a self-driving car in a fatal accident was going to be a huge deal. Anecdote was going to demolish data if they weren't extremely careful. By all accounts, Anthony Levandowski felt differently. But he actually wasn't the only one. Here's Don Burnett.

Speaker 12:
[53:06] There were some people on the team, very famously, including myself, that started to get the itch kind of towards the three to four year mark. The itch of like, okay, where is this going?

Speaker 19:
[53:18] Who is it for?

Speaker 12:
[53:19] How are they going to use it? Where are they going to use it? And I felt like the leadership didn't have great answers to that. There was no commercial race, right? We had no competition and there was no market for the product.

Speaker 1:
[53:29] But competition would soon arrive, in the form of Uber.

Speaker 12:
[53:34] Uber's self-driving program. This was the oh shit moment for me. Uber announced their self-driving program. And I remember like it was yesterday, waking up, reading the news, going to my desk in the morning and thinking, oh crap, these guys are going to eat our lunch.

Speaker 1:
[53:54] In 2013, then CEO of Uber Travis Kalanick had gotten a ride in one of Google's prototype driverless cars. Sitting in a taxi without a human driver, he'd understood that this could be the end of his company. So Uber plunged headlong into the driverless car race. The company hired nearly half of Carnegie Mellon's top robotics lab. And not long after, we know through court records and emails that Uber also began communicating with Anthony Levandowski, who, in 2016, would leave Google, quitting just before he could be fired for recruiting team members away, including Don Burnett. Anthony would then start his own autonomous vehicle company. Uber would soon buy that company for almost $700 million, even though the company had no product and was only months old. Which raised a mystery. Why would Uber pay so much for a company whose only assets seemed to be its people?

Speaker 2:
[54:51] This is where Google goes into its computer security logs and realizes that not long before he left, Anthony Levandowski downloaded something like 14,000 technical files onto his computer and moved them onto an external disk.

Speaker 1:
[55:08] Obviously, you can't do that. I mean, I'm assuming obviously you can't do that.

Speaker 2:
[55:11] No, you definitely cannot do that. This is the kind of thing that maybe, if he had stayed there, Anthony would have done and been like, oh, it's just so I could have access to it somewhere else, and he probably would have gotten away with it. But when you then go and work for Uber and start running their direct competitor self-driving car program, that's when you get in trouble, and that's when what's technically called Waymo at this point, Google's program, sues Uber and puts Anthony at the center of an enormous legal battle between these tech giants.

Speaker 20:
[55:55] Secrets and subterfuge in Silicon Valley, a former Google engineer has been charged with stealing files from Alphabet's self-driving car project and taking them to Uber.

Speaker 21:
[56:06] Specifically, it involves a former lead engineer of Google's self-driving car unit, Anthony Levandowski. Now, he's accused of using his personal laptop and downloading more than 14,000 files from...

Speaker 1:
[56:18] In 2016, Google had just spun its driverless car unit into a new entity, Waymo. Waymo sued Uber. Uber had to settle to the tune of $245 million. And in a separate criminal trial, Anthony Levandowski pled guilty to stealing trade secrets. Afterwards, Uber continued its driverless car program without him, still pursuing its move fast, break things strategy, which in 2018 led to the death of a woman named Elaine Herzberg.

Speaker 18:
[56:47] Uber is hitting the brakes on its self driving cars after one of them hit and killed a woman in Arizona.

Speaker 16:
[56:53] The vehicle was in autonomous mode, but it did have a safety driver on board.

Speaker 17:
[56:58] But a police report later indicating the safety driver was streaming TV shows on her phone for three hours that night, including at the time of the crash.

Speaker 1:
[57:08] The way this story was reported, nearly everyone blamed the safety driver. She was on her phone. She was streaming an episode of The Voice.

Speaker 17:
[57:14] Tempe investigators saying had Vazquez been paying attention to the road, she could have stopped the car 42 feet before impact. The NTSB slamming Uber.

Speaker 1:
[57:25] There was some important additional context, which is that Uber's robot driver was also just much worse than Waymo's. A statistic I found jaw-dropping: at this point, Waymo's safety drivers were having to take over from the car once every 5,600 miles. Uber's safety drivers that year had to intervene more than once every 13 miles. Despite that, five months before the crash, over employee objections, Uber had cut its safety crews. Instead of two humans, they just used one. One safety driver overseeing a robot driver that was arguably not ready to be on public roads. In the last moments of Elaine Herzberg's life, the robot spent an indefensible 5.6 seconds trying and failing to guess the shape in the road that was a human body pushing a bike. Over those 5.6 seconds, the robot kept reclassifying her. Was she an unknown object? A vehicle? A bicycle? During that time spent wondering, the car did not slow down. Soon after Elaine Herzberg's death, Uber halted its testing program.
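To put those two intervention figures on the same scale, here's a back-of-envelope comparison. A sketch only: the mileage numbers are the approximate ones cited above, and real disengagement reporting varies in methodology from company to company.

```python
# Rough comparison of the two takeover statistics cited above.
# Approximate figures from the reporting, not an official data pull.
waymo_miles_per_takeover = 5600  # Waymo: one human takeover per ~5,600 miles
uber_miles_per_takeover = 13     # Uber that year: more than one per 13 miles

# How many times farther Waymo's robot drove between human interventions
ratio = waymo_miles_per_takeover / uber_miles_per_takeover
print(f"Waymo's driver went roughly {ratio:.0f}x farther between takeovers")
```

That ratio, a factor of over 400, is the gap between the two programs hiding inside one sentence of narration.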

Speaker 14:
[58:31] Uber has temporarily suspended its driverless fleet nationwide as the NTSB, police, Uber and the National Highway Traffic Safety Administration investigate.

Speaker 1:
[58:41] We reached out to Uber for comment. A spokesperson said that the fatal collision was indeed a tragedy, which had a significant impact on Uber and the entire industry. There would be other competitors who would shut down after similar accidents. There would also be Tesla, which by 2020 was publicly marketing a product the company called Full Self-Driving, but which absolutely was not. Meanwhile, Waymo had slowly continued to develop its tech. Their robotaxis would be ready for riders by 2020. The team had gotten an unexpected boost from a technology that was, at the time, very little understood. In 2026, when most people talk about artificial intelligence, the conversation defaults to products like ChatGPT and Claude. But artificial intelligence has been a core part of driverless cars going back two decades. In the 2010s, neural net advances meant that you could now begin to feed a computer system large amounts of data and watch as its perception, prediction, and decision-making abilities improved. Here's Sebastian Thrun.

Speaker 8:
[59:45] That technology of massive data training was with us from the get-go, but has become more and more and more and more important. The surprise for all of us has been that size matters. When you put a million documents into an AI, it's fine. A hundred million is fine. But when you put a hundred billion documents into an AI, it is unbelievably smart. And that, I think, shocked everybody, myself included.

Speaker 1:
[60:11] The Google Brain team, the deep learning people, started working with the driverless car team to use training data to help the computer driver learn things like, how to better predict when another car was about to suddenly switch lanes, how to more reliably spot pedestrians. Over the years, as the car drove more miles, as the team gathered more data, plugged that data into their AI systems and tweaked those systems, the engineers say the robot driver kept improving. As they tested the car in new weather conditions, they discovered problems that required hardware fixes. For instance, in Phoenix, Waymo had to design miniature wipers for their car's lidar sensors to deal with the dust storms and heavy rains. In 2020, Waymo finally debuts to the public in Arizona. In the years after, it will roll out to 10 more American cities. A funny consequence of Waymo's long development cycle is that the public's attitude towards Silicon Valley has just really changed in that time. There's more suspicion towards Google than there was back in 2009 when the project first started. And so now, many people look at the Waymo driver with a raised eyebrow, with a question immediately on their lips. Chapter 5. Are you a good driver?

Speaker 7:
[61:20] All right. Autonomous vehicles can now get you around Atlanta.

Speaker 22:
[61:23] The future of driving through Austin is here, except it comes without a driver.

Speaker 7:
[61:28] The hailing app is now taking passengers in Miami.

Speaker 1:
[61:32] A fleet of white electric Jaguars covered in 40 different sensors, cameras, radar, lidar. It's an expensive car, as much as $150,000 by some estimates. In the news stories, you see the inside, where the human driver would normally sit. There's an empty seat. You're not allowed in. With a steering wheel in front of it, vestigial, it turns itself.

Speaker 18:
[61:53] Cars without drivers are here.

Speaker 14:
[61:55] Yeah, it sounds like something out of the Jetsons, but get ready because you may look over at the car next to you and see it rolling down the street.

Speaker 1:
[62:02] The TV newscasters always use the same gee-whiz tone. They can never resist a Jetsons reference. In every city, the influencers show up to record testimonials for their daily serving of clout.

Speaker 19:
[62:13] So in today's video, I'm about to take my first ever driverless car. It's with an app called Waymo. Waymo is basically driverless car Uber, where it's like ride service. You call it, go to wherever you need it to go, but there's no driver. You guys, this is creepy. It's like I'm being driven around by a ghost person.

Speaker 2:
[62:32] It's a little terrifying.

Speaker 19:
[62:33] It is definitely-

Speaker 1:
[62:34] Robotaxis poll hilariously badly. According to J.D. Power, a data analytics firm, among people who've not ridden in one, consumer confidence is at 20 percent. But among people who have taken a ride, the number shoots up to 76 percent. It's a thing I didn't capture in this story, but when I sat in one a couple of years ago, I just found it persuasive as an experience.

Speaker 20:
[62:58] You know what? I'm not as nervous as I thought I was going to be.

Speaker 13:
[63:01] This is actually quite relaxing.

Speaker 23:
[63:02] Nice gradual turn, felt very safe.

Speaker 17:
[63:04] You know, it was kind of freaky at first, but now it's pretty chill.

Speaker 19:
[63:08] It's a smooth ride though. It wouldn't drive fast, it wouldn't jerk in.

Speaker 24:
[63:11] It's driving like you always hope your Uber driver would.

Speaker 19:
[63:14] So I guess that's one of the big selling points.

Speaker 1:
[63:15] Chris Urmson, that methodical team leader, had left Google years ago, but he told me about his experience as a civilian consumer, trying Waymo out in the world.

Speaker 5:
[63:24] My universal experience has been, and you can tell me if this was your experience, the first couple of minutes in the vehicle, it's, huh, that's crazy. There's nobody behind the wheel. Ooh, swimming with sharks. And then a few minutes in, it's like, okay, is this just gonna drive? Is that all it does? And then 10 minutes in, people are looking at their phone.

Speaker 1:
[63:49] People tend to feel safe in these cars, but are they? Actually. So we know that the Waymo driver has now driven over 200 million real-world miles, and they've released safety data so far for the first 127 million miles. Waymo's fairly transparent. They released their crash and safety data unredacted to the public. By contrast, Tesla redacts the details of its crashes. The company says they are confidential business information. In Waymo's case, I've looked at the data. I've looked at how the company interprets it, how skeptical independent researchers interpret it. I wanted to walk through it with an autonomous vehicle reporter I trust. His name is Timothy B. Lee, author of the newsletter Understanding AI. I asked him how much our picture of the Waymo safety data has been evolving.

Speaker 23:
[64:37] So it's been pretty consistent the last couple of years. They are scaling up, and so all the numbers get bigger, like the total number of miles gets bigger, the number of crashes gets bigger, but the crashes per mile have not changed a ton. Waymo says, and I think this is correct, that it's roughly 80 percent safer in terms of crashes that are severe enough to trigger an airbag, crashes severe enough to cause an injury, and also crashes involving vulnerable road users like pedestrians or bicyclists.

Speaker 1:
[65:08] So 80 percent fewer airbag crashes than human drivers, and actually 90 percent fewer crashes that cause a serious injury. Some independent experts have small quibbles with the methodology, but broadly they find Waymo's data credible. Timothy pointed out there's one very important thing we don't know: the fatal crash comparison. For every 100 million miles humans drive, we cause a little over one fatal crash. The Waymo driver has driven 200 million miles without causing a fatal crash, but statistically speaking, that could still be a fluke. Some academics have suggested we need about 300 million miles to have statistical confidence. In the hundreds of millions of miles the Waymo driver has traveled, it was involved in two fatal crashes which it did not appear to cause. Here are the details of those crashes. In one, a speeding human driver rear-ended a line of vehicles at a stoplight. There's an empty Waymo in the line of struck cars. In another crash, a Waymo was yielding for a pedestrian. It was rear-ended by a motorcycle. The motorcycle driver was then struck by a second car. That's everything. When Timothy B. Lee looks at the entire safety picture, the results we have so far from this big experiment Waymo is conducting on American roads, what he sees is mainly promising.
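The "could still be a fluke" point can be made concrete with a quick Poisson calculation. This is a back-of-envelope sketch, not Waymo's or Lee's actual analysis: it assumes fatal crashes arrive independently at the human rate cited above, a little over one per 100 million miles, rounded here to 1.1.

```python
import math

# If fatal crashes arrive as a Poisson process at the human rate, the
# probability of driving N miles with ZERO fatal crashes is exp(-rate * N).
HUMAN_FATAL_RATE = 1.1 / 100e6  # approx. fatal crashes per mile for human drivers

def p_zero_fatal(miles: float, rate: float = HUMAN_FATAL_RATE) -> float:
    """Poisson probability of zero fatal crashes over `miles` at `rate`."""
    return math.exp(-rate * miles)

# Over Waymo's ~200 million miles, a driver no better than a human would
# still have about an 11% chance of a clean record, which is why zero
# at-fault fatalities so far is encouraging but not statistically conclusive.
print(f"{p_zero_fatal(200e6):.1%}")
```

Run the same calculation at 300 million miles and the chance drops to under 4 percent, which is roughly why academics cite that figure as the point where a clean record stops looking like luck.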

Speaker 23:
[66:28] So far, it's been better than human drivers. And so far, I think the case for allowing them to do the experiment is very strong.

Speaker 1:
[66:35] Which doesn't mean we shouldn't scrutinize this Waymo experiment as it continues. I find myself paying a lot of attention to Waymo crashes, which isn't hard. They make headlines. The most harrowing one recently was this January.

Speaker 21:
[66:48] A child near an elementary school in Santa Monica is struck by a Waymo.

Speaker 13:
[66:52] A child ran across the street from behind a double-parked car, and a Waymo hit the kid. Santa Monica police say the child, a 10-year-old girl, was not hurt.

Speaker 1:
[67:01] The company issued a statement. Waymo said its driver had braked hard, reducing speed from 17 to under 6 miles per hour. A faster reaction, they claimed, than a human driver would have been capable of. What happened next at the accident scene actually answers a question I had had. What does a Waymo do after a car crash, since there's no human driver to help? Waymo employs what they call human fleet response agents, human beings who can't remotely drive the cars, but who the car can ask questions if it gets confused. In Santa Monica, the Waymo called one of those humans. The human called 911, and this is the strangest part of Waymo's statement. Apparently, the car then waited at the scene of the accident until the police dismissed it. That's what we know so far, but there are two federal agencies investigating this crash, and so we'll have a full report in the future. One problem that's not really captured in the safety data that I've seen is what I'd call troubling edge cases. You see them in videos on social media. A Waymo gets stuck at a dead stoplight, or it blocks an emergency vehicle, or, in an example Timothy gave, Waymos were driving past stopped school buses in Austin.

Speaker 23:
[68:07] I think it's reasonable to say this is a clear-cut rule that the vehicle should follow. These edge cases are still very rare, and so if it's a one-in-10-million thing, I think it's not that big a deal as long as they are making progress, which for most of these I think they are.

Speaker 1:
[68:20] Timothy pointed to one area where Waymo has not been as transparent as he'd like. Those human response agents, some of whom are based here and some in the Philippines. There are questions about what specifically they do, and about how this will all work as Waymo scales up. We asked Waymo for comment on everything you heard in this episode, especially the recent safety incidents. A spokesperson said that the data to date indicates that the Waymo driver is already making roads safer in the places where they operate, and says that Waymo continues to work with policymakers and regulators to improve its technology. That's the safety picture so far, which to me, after many months of looking at this and talking to experts, looks pretty good. As Waymo continues its rollout, other companies are quickly following behind.

Speaker 25:
[69:04] Amazon's new driverless taxi is launching in Las Vegas this summer, and it's expected to arrive in LA in the next few years.

Speaker 1:
[69:10] There are other robo-taxi companies, like Amazon's Zoox. Uber is back in the mix, not making the technology, but partnering with these robo-taxi companies.

Speaker 25:
[69:18] We recently struck a partnership with Uber to bring its AVs to Abu Dhabi.

Speaker 1:
[69:23] And many of those early Waymo engineers are now CEOs of autonomous companies themselves. Dmitri Dolgov is actually co-CEO at Waymo, but other team members run driverless trucking companies.

Speaker 14:
[69:34] Got Don Burnette, founder and CEO of Kodiak AI.

Speaker 10:
[69:37] Don, thank you so much for joining us.

Speaker 14:
[69:38] It's good to see you again.

Speaker 1:
[69:40] Don Burnette is head of Kodiak AI, which has its technology deployed in driverless trucks in the Permian Basin.

Speaker 24:
[69:46] Please welcome CEO of Aurora, Chris Urmson. A big round of applause.

Speaker 1:
[69:52] Chris Urmson now heads Aurora, which currently has semi-trucks on Texas highways. And my personal favorite plot development, which just emerged this week.

Speaker 17:
[70:00] I just broke on The Information that Uber founder, Travis Kalanick, is starting a new self-driving car company, with financial backing from Uber, and in partnership with Anthony Levandowski. Now, for those who have been...

Speaker 1:
[70:13] They say there are no second acts in American lives. Somehow, both of these men seem to be on their fourth. The big picture, though, is that everywhere in America today that you see a driver, taxi, truck, food delivery, there are several companies working on the robot version, trying their best to make driver, as a job, start to go the way of the knocker-upper, of the lamplighter. The knocker-uppers, by the way, disappeared quietly. The lamplighters did not. Writer Carl Benedikt Frey tells the story of the Lamplighters Union, how their strikes plunged New York City briefly into darkness, to the delight of lovers and thieves. In Verviers, Belgium, the lamplighters' strikes turned violent, ending in an attack on the local police headquarters. The army was brought in. The lamplighters lost their fight, in part just because they were so outnumbered. But the drivers today, fighting to save their livelihoods, are a significantly bigger force.

Speaker 4:
[71:11] Please stand up. Everybody that's ride-share, union members or someone who drives a vehicle, stand up.

Speaker 1:
[71:20] 4.8 million Americans drive for a living. It's one of the most common jobs we have. And these workers do not plan to surrender to the California tech companies.

Speaker 4:
[71:29] They're doing this because they stand to make an unfathomable amount of money if they eliminate driving jobs for working class people.

Speaker 2:
[71:38] I understand it is business, it is capitalism, but not in my city at the expense of our jobs.

Speaker 1:
[71:46] These drivers are represented by unions, backed by politicians, and in cities across America, blue cities, they're organizing. So far, they're winning.

Speaker 22:
[71:56] Humans drive this city, not machines.

Speaker 4:
[71:58] Labor drives this city.

Speaker 5:
[72:00] Keep the workers in the workforce.

Speaker 9:
[72:02] If it works in another city, great.

Speaker 17:
[72:03] Have fun.

Speaker 4:
[72:04] Not here.

Speaker 15:
[72:05] Not Boston.

Speaker 17:
[72:06] Thank you.

Speaker 1:
[72:12] Next week, the fight to save a job, to save the human driver. Don't miss this one. Thank you for listening to our episode. I just wanna say, making deeply reported stories like this one is only possible because of our listeners, particularly our premium subscribers who pay to support the show. We are releasing our full interview with Sebastian Thrun, who used to lead Google X, their secret special projects lab. Totally fascinating conversation with the kind of person who just sort of lives in the future and has a million strange ideas about it. We are releasing that for our Incognito Mode members only. It'll be in your feed. If you would like to know the future, sign up at searchengine.show. Again, your membership specifically enables projects like this one. So thank you. Search Engine is a presentation of Odyssey. It was created by me, PJ Vogt, and Sruthi Pinnamaneni. Garrett Graham is our senior producer. Emily Malterra is our associate producer. Theme, original composition, and mixing by Armen Bazarian. Our production intern is Piper Dumont. This episode was fact-checked by Mary Mathis. Our executive producer is Leah Reis-Dennis. Thanks to the rest of the team at Odyssey. Rob Morandi, Craig Cox, Eric Donnelly, Colin Gaynor, Maura Curran, Josephine Francis, Kurt Courtney, and Hilary Schaff. Thanks for listening. We'll see you next week with the second part of this story.

Speaker 15:
[73:50] Guys, it's no use putting it off. The best time for an underwear refresh is now. Tommy John underwear is designed for a perfect fit that stays put all day. They're zero-shape thanks to four times more stretch than competing brands, and their innovative horizontal quick draw fly is a game changer. With over 30 million pairs sold, there are thousands of men out there more comfortable than you. Don't settle for less. Go to tommyjohn.com today for 25% off your first order with Code Comfort. That's tommyjohn.com, Code Comfort. Tommy John. Comfort Perfected.