transcript
Speaker 1:
[00:02] Bloomberg Audio Studios, podcasts, radio, news.
Speaker 2:
[00:18] Hello, and welcome to another episode of the Odd Lots Podcast. I'm Joe Weisenthal.
Speaker 3:
[00:23] And I'm Tracy Alloway.
Speaker 2:
[00:24] Tracy, when we talk about AI, we talk a lot about the big labs, the big independent labs, particularly OpenAI and Anthropic. But of course, we know that the legacy tech companies, the so-called hyperscalers, et cetera, they're not just going to give up this groundbreaking technology without a fight.
Speaker 3:
[00:41] No, and we've certainly seen various efforts to, I don't want to say catch up, but to keep pace, I guess.
Speaker 2:
[00:48] So of the major legacy companies, which I would say are like Microsoft, Amazon, Meta, and Google, or sorry, Alphabet. I would say clearly Alphabet is the one that has the model that people are talking about, Gemini. In a way, that's surprising because one common intuition that people have is that companies aren't very good at developing the thing that disrupts their own legacy business, right? This is just a famous sort of B-school thing that people talk about all the time.
Speaker 3:
[01:20] Yes. So the issue one would think with AI and Google in particular is that Google, famous for its search, no one believes me, by the way, the first time I ever used Google in like, I guess it was the either late 1990s or early 2000s, I told my parents to invest in the company and they didn't listen to me.
Speaker 2:
[01:37] Oh really?
Speaker 3:
[01:37] Yeah. They don't believe me either, but I swear I did. The first time I used it, I was like, oh my God, this is so much better.
Speaker 2:
[01:43] So I feel so dumb because I remember the Google IPO very well, and I thought I was very smart because it was like I was 20, I think it was 2004. I thought I was very smart back in those days. I don't think it, now that I'm older, I realize I don't know anything. But I thought I was very smart. I was like, oh, these sheep, they're just buying, it's overvalued. Should I short this IPO? It's like a bubble, etc. Thank God, I didn't put on some short trade and go bankrupt. But it never occurred to me to go long, because I don't, and I think it's the journalist temperament, we're all very cynical and skeptical.
Speaker 3:
[02:14] You have to be an optimist to be an investor.
Speaker 2:
[02:16] Yeah, you have to be an optimist, you have to be willing to be part of the crowd, you have to ride the wave, we don't really like riding waves, etc. And so I was like, oh, should I short this? Anyway, I should have bought it.
Speaker 1:
[02:26] Okay.
Speaker 2:
[02:27] I should have held it. And you know what, I should have bought it, even though we don't trade, should have bought it in 2023, when everyone was saying that ChatGPT was going to eat its lunch, should have bought it a year later when people were saying, oh, Gemini is too woke, etc. All these turns, there are opportunities. But anyway, I'm happy to just talk about it.
Speaker 3:
[02:46] All right. Now that we've gone on a very long segue, what we are getting at, it's an important segue. What we are getting at is that in theory, AI would seem to pose a threat to Google's core business, which is search. So if you type in a query in Google now, I think I have one open from the last time we recorded a podcast. I don't know why I was looking at the sub. It says, can you see tankers physically from the Strait of Malacca?
Speaker 2:
[03:08] Oh yeah.
Speaker 3:
[03:09] In the Strait of Malacca from Singapore. So if you type that into Google, it used to be you would just get a bunch of search results.
Speaker 2:
[03:15] That's right.
Speaker 3:
[03:16] Now, you get an AI overview, which basically pulls in a bunch of results from other pages and gives you a summary. So the question is, if people are just going to be looking at these summaries instead of actually going to the pages that Google Search used to turn out as links at the top of the page, what does that mean for traffic via Google Search?
Speaker 2:
[03:37] Well, and then there's the other big element here, which is that people stop clicking as much on links, or even seeing links. What does that mean for the advertising business? The expectation is that you see the answer right there, etc. One of the questions we have about OpenAI in particular is, are they ever going to be able to launch an advertising business? Can these two things combine, etc.? We know that to this day, Google search ads are the greatest money printer that's ever been invented, basically, in the history of the world. So how this is going to interact and how Google is thinking about these questions, as you say, it's a little unclear to me. Because yeah, it's nice. I could just put in the same question. Can you see tankers in the Strait of Malacca? I don't know if these are right. I don't know if the AI overview is right, but it's there.
Speaker 3:
[04:24] It's there. So lots of interesting questions here. Also, AI slop, right? Are the search results that are being turned up actually of any certain quality, right?
Speaker 2:
[04:34] It's a great point. And one of the reasons perhaps that a lot of people, and I would include myself in this, use AI more and more is because I have like, I have some issues, let's say, with search. So anyway.
Speaker 3:
[04:48] I'm gonna clip that. I have some issues, Joe Weisenthal.
Speaker 2:
[04:52] Anyway, we really do have the perfect guest to talk about this. Someone who is literally right in the middle of all of this and can answer all of our questions. We're gonna be speaking with Liz Reid. She is the VP of Search at Google. She has been at the company for over 20 years and has been in her current role on the search team for a few years. And so we're gonna hopefully get answers to all these questions. So Liz, thank you so much for coming on the Odd Lots podcast.
Speaker 4:
[05:19] Thank you for having me. Delighted to be here today.
Speaker 2:
[05:22] Absolutely. Why don't you just start by telling us, like, what's your role at Google? What does it mean? Okay, you're the VP of Search at a company that people know for being a search engine company. But what does your title actually entail?
Speaker 4:
[05:35] I lead the search team, which you can think of as covering the product, the engineering, our UX designers, and data science. Sort of the team that fundamentally builds the search product that you use.
Speaker 3:
[05:45] So how much of your day to day is taken up thinking about AI nowadays, versus, let's say, two years ago?
Speaker 4:
[05:53] Well, two years ago, I would say it was also still a fairly large amount. I think AI is a deeply transformative technology in what it opens up. I think AI has been in search for many years in different forms. It's much more in the forefront these days with things like AI Overviews and AI mode. But if you go back several years, it was how we transformed a bunch of ranking with efforts like BERT and MUM that were built on some of the early transformer breakthroughs. But at the end of the day, if you go back once upon a time, AI didn't just refer to generative AI, it referred to general machine learning and other things like that. In the early 2000s, Google had spell correction, which used AI and felt revolutionary at the time.
Speaker 3:
[06:36] No one calls spell check AI anymore.
Speaker 4:
[06:39] Nobody calls spell check AI, but like it was at the time. It just shows you how far the world has come. But the opportunity to really transform search and realize Google's mission at a new level is really exciting and an amazing opportunity and very humbling to do this for a product that so many people use.
Speaker 2:
[06:57] By the way, you mentioned BERT. My little software hobby project, training a machine learning model to tell whether something is more indicative of the written or the spoken word, is based on BERT. So thank you for developing that, and thank you for open sourcing it so that someone like myself can train it. But talk to us about how you're thinking about this core tension: for years Google had a business where you put a term in a search bar and then people click out, and some of the clicks were to organic results and some of the clicks were to paid results. And now we're entering this world in which people expect to get whatever they want right there from the query, without that impulse for a click. And you have a business that is still dominated by that click out, one way or the other. And I want to drill into the details, but big picture, is that a real tension?
Speaker 4:
[07:53] I think what's interesting about it is that the space of search is very big and what people are trying to do is very big. And sometimes people really want quick answers and they want it right in front of them. And sometimes they want to go deep or they want to hear from particular individuals. Right. I think there's this sort of myth that people want AI or the web, when what I actually think we see is that people want AI and the web together. There are certainly questions for which you just want the quick answer and then you're done. And that's been true in many ways for years. We'll talk about this with AI, but I bet most of the time you look up the weather, you just want to know what the temperature is and you're done. Right. But then you're going to go on a trip, and you're going to go and actually dig in more, because you're going surfing or other pieces. I think if you think about this, people are like, oh, I have an answer, so why would I click an ad? Well, the answer doesn't buy the pair of shoes. You actually have to buy the shoes, right? So you still have to go pick a merchant for that. People often care to hear people's perspectives, right? Like you'll talk about, okay, well, we want a bunch of answers, and yet this is like a golden age for podcasts, right? So clearly people sometimes want to spend a couple of seconds, and other times they'll spend a whole hour listening to things. And so one of the things we see with the shift with AI Overviews is that you get more of this pronouncement of what your goal is, okay? If all you were going to do was go to the web page, see the fact and immediately click back, you're going to spend like half a second on the page, okay? You see those things shift. But if what you were going to go and do is read an article for five minutes, you're still interested in reading that article for five minutes, right?
AI Overviews might help point you to the right page, so we see fewer bounce clicks, where a user would sort of go and immediately come back because they weren't happy. You see people go, though, and they want to hear from other people. They want to hear their expertise, their perspective, their unique take. I take fashion as an interesting example sometimes. If you hate fashion, then you love using chatbots to replace the need, right? You didn't really want to spend any time, and that's fine. But if you were someone who was spending a lot of time reading influencers and their takes on fashion, you have not decided to replace that with a chatbot, right? You're still going to want to hear from those fashion tastemakers. So there's an opportunity with AI Overviews to help you get started and then make it easy for you to dig in and connect. I think people's interest in connecting with other people is just as strong these days in many ways.
Speaker 3:
[10:23] So just at a simplistic level, can you tell me how does Google determine whether it shows the AI overview or not? So if I type in Corgi into Google Search, I'm biased because I have two Corgis, it just gives me a bunch of links to the American Kennel Club and the Corgi subreddit and things like that. If I type in what is a Corgi question mark, it gives me the AI overview. Is it just everything with a question mark returns an AI result or how are you actually deciding what to present to users?
Speaker 4:
[10:54] No. An important premise of this is that we shouldn't give you AI for the sake of giving you AI. The point is to use it when we think it adds value for people. And so it's not really associated with question marks. Question marks are often when people are looking for more of a description, they have a harder question, which maybe a single web page doesn't answer, or whatever else. But what we're really doing is looking at signals from users to say, does the AI overview provide additional value or not? I haven't studied the Corgi query in detail, but for a query like that, probably what we're seeing is that most people aren't just trying to figure out what a Corgi is. Maybe they want to see pictures of it. They want to click on knowing more about the dog breed because they're trying to engage with it. And so we basically learn over time, based on user signals, the same way we learn about when you should show the weather OneBox, and when you should show local results, and when you should show sports. If the AI overview provides additional value, great, we show it. If it doesn't, then we want to get out of the way, right? Take a search for Wikipedia. You go and you type in Wikipedia, which a lot of people do. They want to get to Wikipedia. They don't want us to go and say, let me give you the history of Wikipedia. That's not why they searched for that, right? If they search for Odd Lots, they probably want to quickly get to your podcast. And so we have a variety of signals that try and help us understand when it is adding value and when it's not. And we get smarter over time, as people change how they ask questions and as the models get smarter, right? Like, we don't want to put in an AI overview if we think it's not going to be high quality. So as the models have gotten more powerful, we can cover more cases, and we just continue to develop, really with the focus being: what is the best response to give a user for the question they ask?
Speaker 2:
[12:54] I have a question about what you see among user behavior. And my question is, do you see the same user or a cohort of people who use both google.com and gemini.google.com? And do you see distinct patterns of queries from the same user, but different types of searches? Or do people just sort of throw the question mostly in the Google search box and start from there? Or do you see people who just use gemini and do everything there? Like, what do you see in terms of emergent patterns about how an individual chooses which of the sites to enter into first, and how they do different queries in each one?
Speaker 4:
[13:36] Yeah. So maybe just so we're all talking about the same thing: there's your main search page, there's AI mode that's part of search, and then there's the Gemini app. And I would say there are a lot of users, so their behavior varies across all of them. But there are some patterns, okay? There's plenty of people who co-use across them. There's plenty of people that are actually using several AI products right now just in general, right? Not even just within Google. Across Gemini and search, if it's an informational query, then the probability that they're using search or AI mode is going to be higher. If it's a creative query, or more of a productivity question, I want to, like, please rewrite this to make it sound more formal, those types of questions are going to be more Gemini oriented. Between AI mode and the main search page, some people get to AI mode mostly via AI Overviews. They start on AI Overviews and they transition. For those who go direct to AI mode, they tend to do that for queries that they consider more complex, longer questions, questions where they expect that they're going to do more follow-ups, versus if you're doing a very browsey query, you might prefer the full SERP. If you know that your goal is to just get to a particular web page, you're more likely to start with the search result page. There's obviously overlap in the use cases, but AI mode tends to be longer, more complex, more conversational queries versus more traditional queries. And between Gemini and search, there's more of a productivity-and-creativity versus information slant.
Speaker 3:
[15:16] So since we're talking about user behavior, one of the things that seems to be happening now is people will use an LLM, doesn't matter which one. They'll ask a question, and then they will go and fact check the answer that they get on Google. And I'm really curious if that's something that you're aware of as a sort of user behavior. And if the idea is that maybe Google becomes, maybe not an AI Overview-er per se, but maybe the sort of fact checker of last resort for other LLMs.
Speaker 4:
[15:48] I think we're definitely aware that people use Google as a fact checker for some of their LLM use cases. I think people have used Google as a place to fact check information pre-LLMs for a number of things. A friend tells them something, you know, and so on. But I think people use search for a lot more than just fact checking, right, or even just looking up facts. They want to go browse what they're going to go buy. They want to go check the sports score of their latest team. And we do see that with the presence of AI Overviews, people are asking longer questions, they're asking more conversational questions. And so some of these questions they had started bringing to an LLM, and as we brought AI Overviews in, they took those same types of questions and brought them to search.
Speaker 2:
[16:34] One of the complaints about search, and I would say I've complained about this, is how many of the results in the SERP are, like, almost too timely. Let's say someone enters the news, there's a headline about so-and-so. I'm like, I'm really curious about this person. And so I search their name and I get like 10 results, or however many results, all from the last day since they made the news. And it actually is difficult for me to find information that didn't have the context of the news, so it was, like, unbiased in some way. Is that recognized as an issue? Because I certainly feel it as a user, but I'm curious from your perspective at Google, whether this is something that you think about as a way in which search becomes less than ideal.
Speaker 4:
[17:23] I think one of the things that's generally very challenging about search is that people enter the same query, which is often very short, with a lot of different intents in mind. So in a bunch of the examples you gave, probably our data says that most people just want to see the recent articles, and those are the ones that get all of the clicks, but you didn't. So how do we figure out all of the different intents and match across them? So I think there's this question about both, how do you get the different facets of a question, and how can you personalize the results more effectively for people? I do see more, and this is anecdotal as opposed to complete data, but to your example, people using AI mode and AI Overviews for some of the people queries, to understand more about the person independent of the rest of the news articles. Most of the people searching for it know who the person is, but some of the people searching for it don't, and so you can see behavior like that. But I do think one of the interesting things about the evolution of AI is that people stop talking just in keyword-ese as much, and they start expressing more of what they want, and then that becomes much easier for us to give an answer. If you say, tell me about someone, versus what's new with someone, it's actually much easier for us to figure out how to give better results than if all you say is someone. An example query we used to talk about, in a different way, is falafel. What do you want to know with falafel? Some people don't know what falafel is, they want a definition. Some people want recipes. Some people want to find where to eat. Some people want nutritional information. They all just use the word falafel, and it's just harder to figure out how to do that across all of them. We also see tensions on things like, well, do you want video results or do you want more text-based results?
People are very opinionated about what the right answer is, but they are not very opinionated in the same direction. And so we try and meet multiple billions of people's needs at once.
Speaker 2:
[19:28] Just to be clear, the falafel example is great. Would you say that today you see a greater diversity of falafel related queries, whereas maybe like five or ten years ago you just get falafel, and now people know that they could type in, where does falafel come from? What is a falafel recipe, etc.? Have people gotten more sophisticated over time in their query specifications about falafel?
Speaker 4:
[19:55] I don't know falafel very specifically, but in general, yes. We have seen with AI Overviews meaningfully longer queries, and we see more natural language queries, but it's also not even something as basic as that. It can also be, like, you were searching for restaurants. We used to laugh about that. Before I worked on search, I worked on maps and local search, some of the intersection with search, and people would just be like, restaurants in New York, and you're like, what do you want me to do with that query? The best restaurants in New York are going to take three months, and 99.9 percent of the population can't afford to go to them. So are you picking 10 random ones, et cetera? But part of why people would do that is they had a much more complex need: I want a restaurant in this location for five people, it can't be too pricey, I have a vegan member, I also have kids. That was the question they had in their mind, and in the old world of keyword-ese, none of that information would come through, and so you wouldn't feel confident you could just put in the question. Now with AI Overviews and AI mode, you can actually, and you see people do this, they tell you the real problem. They don't take their need and translate it to what the computer understands. They try to give the computer their actual need, and expect us to do the translation. And I think that's really exciting to see, because one, we can be more helpful, but also those are real problems people had. If you go back to the mission, it was organize the world's information and make it universally accessible and useful. I like that useful part, right? It's not just that it's organized; is it useful to you? And I think one of the most exciting things about AI, the transformation going on right now, is that you can actually make information much more useful to people.
And that really opens things up, so people just ask more questions, because we can actually do a better job meeting their needs.
Speaker 3:
[21:43] Does that come with any complications in terms of privacy or competition for Google? If people aren't using keywords as much anymore, if they're doing basically query brain dumps into the prompt and saying, you know, I am so and so, I have a kid, I live here, I want to do the following things, this is my issue. Is that an added layer of complexity that you have to deal with as like a large search company?
Speaker 4:
[22:11] I mean, from a privacy perspective, we give people a range of different options. They can be in incognito, they can be signed out, they can be signed in. And I think Google has a long tradition of really treating people's data with a great deal of care and having cutting edge security and privacy by design. So I think people are seeing the value and they have continued their trust in Google. I think it means it's a harder job on quality, right? You have to take this question, there are many parts, and you have to figure out how you break it apart. And you have to do work to think about things like latency, because if everyone uses the same keyword and it's not personalized, then you can cache it all. If all of a sudden the queries get much more diverse, it has consequences there. But I think we just see that it's very empowering for people, right? That it takes some of the work out of searching. I think sometimes, a few years ago, people said, like, oh, what more can you do with Google Search? But if you actually ask them, okay, when was the last time you spent 20 minutes searching when you would have preferred to spend two? It's actually not that hard for them: oh, the last time I was trying to go find a service provider, the last time I was trying to go do these bigger tasks in life. And so it's been kind of exciting to just make people's lives easier by helping them address their real need.
Speaker 2:
[23:34] I have so many different theoretical questions I could ask. Here's one, actually, that was inspired by our producer Dash, a conversation that I had with him 10 minutes ago, partly about this and related to something else. I imagine in your career at Google, spanning over two decades at this point, you've been involved in quite a bit of recruiting, and recruiting software engineers in particular. And I imagine that's a particularly important aspect in some way or another for the VP of Search. Given what we've seen with AI coding and so forth, when you're doing one of these LeetCode software developer interviews, et cetera, is it different today than five years ago? Do you have to think really differently about the battery of technical questions that you would propose to a software engineer today, given just the restructuring of the nature of the job in a world of AI-generated code?
Speaker 4:
[24:30] I think the process is definitely evolving. I wouldn't say that we have perfected the science yet.
Speaker 2:
[24:35] Okay.
Speaker 4:
[24:35] But there are two angles from which you're thinking about it. One is you don't want to ask questions for which they just go and type in the answer in the chatbot and recite it back to you. Okay. So to the extent that your goal is to understand, are they critically thinking, are they able to think through a problem, you want to make sure that that's actually what you're assessing. So is it in person? How are you doing that in some basic way? But there's the other thing: the tools are powerful. You can use them in ways that make you more effective, and you can use them in ways that make you less productive. As fluency with AI is changing, the way a software engineer might approach a problem now is different than how they might have approached the problem five years ago without some of these tools. So I think we're all learning how to change how we ask that question. Are you building up that expertise? Are you building up that fluency? And the fluency isn't fixed. What was possible with the tools six months ago, let alone two years ago, is different than what's possible now. And in six months, it will be different. So you have to start thinking about how part of your interview is about fluency with the use of tools, in the same way that when IDEs became important, or when people stopped using assembly language and started using higher-level languages, you had to evolve the interviews. It's just that it's happening very fast. So we all have to be on our toes, but it's exciting. You play with this tool and it doesn't work for something, and then it's not like, play with it two years later, it's like, play with it three months later. Maybe the tool will now work for these things.
Speaker 3:
[26:08] Okay. Well, speaking of tools, I mean, one of the things, Joe, I think you've said this, we've been playing around with Claude Code, this idea that actually when you start vibe coding everything and telling your agent to do everything, it feels like you don't even necessarily need a computer, much less a search engine, presumably. So I'm just curious, if you gaze five or 10 years into the future, what do you think the default entry point for interacting with the web is actually going to be? Is it going to be a search engine like Google? Is it going to be a specific LLM? Is it going to be my personal agent that I've vibe coded for all of my preferences?
Speaker 2:
[26:47] Can I just add on to this question? This is something I think about. If I want to send an e-mail to Tracy today, then what I have to do is find the tab in my browser, I scroll over there, okay, that's my Gmail tab, etc. Whatever. I would like to just be in my terminal. It'd be so much easier to say, here, send an e-mail to Tracy saying this. There are all these steps that I currently do because of the nature of graphical user interfaces that, now that I've gotten into, like, Claude Code or whatever, feel a little clunky, a little yesterday. So yeah, I'm extremely curious: will the web, with the series of boxes that we drag and drop, etc., is that the future, or will it just be someone talking in English to their computer?
Speaker 4:
[27:29] I don't think, like 10 years is a long time right now, where the tech is. That's fair. In one year. Three months from now.
Speaker 2:
[27:36] Will we still have browsers in three months from now?
Speaker 4:
[27:39] We will be like, okay, we believe in 10 years, will it be an AGI, will anyone be doing anything the same? Okay. So with that aside, I think there are some things I believe in and some things I think we don't know. I think you already see, if you go back 10, 20 years, that the way you interact with the tech has evolved a bunch. It used to be it was just the laptop. Well, now it's the phone. Well, now it's also the watch. In some cases, it's the glasses. This sense that it should feel like the information is at your fingertips in whatever medium is useful. But I don't know that this becomes, well, we haven't so far replaced all of the old ones. You use the phone a lot more. But my guess is you're not doing all your Claude Code work on your phone; you're doing some of it on the desktop.
Speaker 2:
[28:27] That's true.
Speaker 4:
The introduction of the watch has supplemented, but it hasn't eliminated, the desktop. So what's been interesting actually is that it hasn't gone in the direction of converging to one answer. It's actually increased the form factors, so you want to be able to access this information wherever you are, in whatever form factor makes sense. So will it be glasses? Will it be something else? Quite possibly. But let's even say glasses become a big deal. Glasses are very small screens even there. You're probably still going to do your big productivity thing on a desktop. So I think what you'll see is that the access point is not confined to one thing, but that the key is to eliminate the friction and the toil. To your point, you had to do six steps. You didn't want to do the six steps. Why should you do the six steps? I think you see that some things are much easier to do with a chat interface, and then some things, actually, a chat interface is a super slow way to do. If you have a list and you have to go say, please remove this long title for the 10th item, that's actually much harder to do with chat than with an interface that does that. So I don't think it necessarily converges on a single thing. I do think it should feel much more adaptive, to your point about, well, if this is the way you prefer to interact, not just where you are, but how you interact. And can you customize, can you create, to what extent do the user interfaces look designed for you versus designed for general use, and can you have influence on them? I think you'll see that. I do think we sometimes, like, we're very aware of what doesn't work well. We're not necessarily aware of what does work well. Like, companies spend huge efforts working on how they do shopping carts really well. Yeah. Okay. This belief that sort of the chatbot will have a more optimized shopping cart for every shopping place in the world than the one you go to every day? I don't know. Not clear. Right?
For those things. But I do think it should feel much more personal. It should feel much more dynamic. It should feel much more ambient and available to you. And I don't think it will be one size fits all, either per person or per form document.
Speaker 2:
[30:53] I've been reading some articles. I think I saw a big one in The Information recently. Let's talk about one of your competitors, Meta. Kind of, yeah, it's a competitor. And it was like, everyone's token maxing there, and there's a token leaderboard, and people are competing to show that they're using AI more than others. And from my perspective, that boggles my mind, because compute is a cost, and just using compute per se does not strike me as a particularly good way of measuring who is productively contributing to the company. I mean, I could certainly find an easy, quick, recursive way to burn tokens.
Speaker 3:
[31:28] And generate a bunch of AI images of corgis.
Speaker 2:
[31:31] Yeah, generate one and then just keep telling it to improve itself, et cetera. The flip side, which some people point to, is like, look, it doesn't matter at this point, because everyone has to figure out how they're going to use AI productively in their work. So you know what, don't even worry about metering AI. Tell everyone to put the pedal to the metal on AI use. And if someone is maxing out on tokens, it means they're experimenting with something, and then they'll find something that really is a productivity enhancer. I'm curious if, from your perspective, it makes sense to essentially see token consumption or compute use as a proxy for someone who's doing their job aggressively well.
Speaker 4:
[32:13] I think the thing with all of these proxy metrics is, if you use them blindly, you're going to run yourself into trouble. If you as a leader don't use judgment on them, then you get the example of, like, I will just create a job that runs in the background and does dumb things to burn the tokens. As a leader, your job is to use good judgment and not just think about the incentives. If somebody isn't playing around at all with the tools, when we know that they can improve productivity, then we need to figure out why and how we can help support them. Maybe there's some issue with the part of the system they're working on and we should go fix it, or maybe we just need to help upskill them, or whatever else the case is. I do think there is a level of experimentation required. I don't think it works if your answer is, you need to ensure that all your token use is completely optimized. It's not going to work. People have to learn what's possible. They're doing different jobs. The tech is changing. But it can neither be, don't use the tools, nor, just max your tokens blindly. It's a noisy signal, but it's a signal. So go look at it and understand it as a place of where to look. Don't use it as a final judgment.
Speaker 3:
[33:28] So speaking of measures and not oversimplifying them, I want to go back to the core tension that we started the conversation out with, which is the AI results versus people actually clicking through to results and generating traffic. I know you were talking about AI being expansionary or complementary for Google Search, but I'm very curious how you actually measure that. The more granular you can get on this, the better. What are you specifically looking at to say that actually, this is something that's good for our business versus something that's detracting from the core?
Speaker 4:
[34:06] So I guess I would say Google's guiding North Star has always been focus on the user. The biggest question at the heart of it is, how do we make a great experience for users? Then you want to be thoughtful, obviously, about other concerns. If you don't have a healthy ecosystem, you can't build an ongoing service, so you need to make sure you're nurturing a healthy ecosystem. If you make no money, then you can't fund this wonderful service, so you have to be thoughtful about those. But the place you start with is, try and build something amazing for users. One of the things we've seen again and again with Google Search is, if you're doing a really great job, people will not just do another query, they will come back to you more often. They will take their phone out of their pocket an extra time. That's a high bar. It's one thing to go and say, I've shown you something, can you do one more thing while I'm showing it to you? It's another thing to get you to decide you're going to bother to unlock your phone, you're going to boot up your desktop, you're going to navigate in the browser. And so one of the things we really look for when we're doing these changes is, does it cause people to come to Search more often? Not just use Search more, but come more often. We also do various UX research studies and try and understand what people are happy about or not, what are the things they find frustrating, are more users adopting it, not just how much are they using it. So we look at a bunch of different metrics, but one of the biggest is really, do you choose to come and ask Google, do you essentially hire Google more often for the things you need? One of the things that's very surprising to people at times is, they think they come somewhere for all the questions they have already today. Like maybe they think they come to Google all the time, or they think they go to Google plus LLMs, or Google plus LLMs plus TikTok plus whatever.
They think they ask all the questions they have. But that's not true. You actually make a calculation when a question goes through your mind: is it worth spending any time to figure out the answer to this question? If the answer is no, then you just don't ask the question. When we talk about AI as an expansionary moment, what we really mean is, there's a whole bunch of questions people have, a whole bunch of curiosity, that people are not exploring. They're not exploring because they view it as too difficult, or too much time, or they're not sure it will be worth it. AI lowers that barrier, and it can lower that barrier in ways that are sometimes surprising for a US English speaker, which is, actually, in a bunch of countries, not all the content on the web is in the language you speak. LLMs can help unlock that content. AI Overviews, because it's using an LLM, can be more multilingual than the web corpuses are by default. So suddenly information that wasn't available to you as a Hindi speaker is now available. It can be visual. You had a question about this flower, you had a question about that cool purse you saw, like, where can you buy it, but you didn't know how to describe it; now it's possible. It can also just be like, my kid has a question, do I say, I don't know, or do I go ask the question? You see with young kids, they ask questions all the time. They go, why, why, why, why, why, why? At some point, parents are like, because.
Speaker 3:
[37:23] Stop bothering me, go ask the LLM.
Speaker 4:
[37:25] They go, okay.
Speaker 2:
[37:27] Let me Gemini that for you.
Speaker 4:
[37:27] But they do that because from a kid's perspective, they assume adults know everything, and it is no cost to them. They're not worried about their time and other things. As an adult, it's not that you're not curious. You just don't think everything is known and you don't have the time. If you lower that barrier, it allows you to be that kid again, that just sort of explores all of these things or get started on those projects that felt daunting or enables you to save or learn a new skill or whatever else, and that's really exciting.
Speaker 3:
[38:01] I don't want to dismiss the wonder of being a kid and learning about the world, but I'm going to sound very callous in a second. How do you make money off of that? Like, how do you make money off of the AI Overviews? Is it just customer retention? Is that what we're basically boiling it down to? Because if it is customer retention, you could make a strong argument that everything can just go through Gemini instead of Search.
Speaker 4:
[38:25] I think I would say a couple of things. Search only shows ads on a subset of queries, like less than a quarter of queries. So there's a whole bunch of queries, pre-AI Overviews, that you don't make money on, because many of them don't have commercial intent. You asked a question about tankers earlier. Probably pre-AI Overviews, that query wouldn't have shown ads anyway.
Speaker 2:
[38:48] It doesn't show ads now, by the way. I checked this too. There were no ads on that either.
Speaker 4:
[38:53] It didn't show ads before. There are no ads on that query. Nobody's trying to advertise something on that. So those queries, AI Overviews don't disrupt. Then there's a whole class of queries where you're shopping. The presence of an AI Overview or a Gemini answer doesn't preclude the need to still buy the item. So there's still this huge opportunity with ads, because there's all of this choice going on. I think you also see that there's an expansion of queries, to this point. So you get more queries, and some of those queries are more commercial. Some of them are not, but some of them are more commercial. So those become new opportunities for ads. There can also be things like, when the query is underspecified or it's a single query, you actually don't know as much, so maybe you can't target the ads as well. If people start expressing more of their need, if it's more of a conversation and they're going further down funnel, you can actually create better ads. So you can think about new opportunities for ad formats. Some number of years ago, people would have said, how can you make money from a feed? Well, Instagram ads are very popular. So there are new ad formats as you recognize new technology and new opportunities, but the commercial needs are still often there, and the desire for user choice is still often there. So there's still a lot of possibility going forward. And so the balance has worked out very well for us right now.
Speaker 2:
[40:15] So I realized that I am a user of both the Gemini app, or gemini.google.com, and just google.com. And I have a sort of real intuition of which one I go to for which purpose. So if I want to look up the capital of Moldova, I'll just search capital of Moldova, which I did. Do you know what it is, Tracy? Sorry, I'm not trying to stump you.
Speaker 4:
[40:39] No.
Speaker 3:
[40:40] I feel like you're going to say it and I won't know.
Speaker 2:
[40:41] Chișinău. I'd never heard of it.
Speaker 3:
[40:42] Oh no, I didn't know.
Speaker 2:
[40:43] Yeah, I didn't know that either. But anyway, if I want to understand what are some academic papers that have been written about why it is that high-frequency trading firms tend to not have outside capital, et cetera, and what was the theory for this, I think at this point I would use Gemini for that, or some other ones, but within the context of this conversation, that's more of a Gemini query for me than a Google one at this point. Will there always be two boxes, or do you foresee that eventually there is just one box? Like, why do we need two boxes?
Speaker 4:
[41:24] I don't know what life will be like in five years. Sometimes people want a particular experience: although the information need seems similar, they actually want different experiences. So take a pre-LLM example. People use YouTube for search, some. In the US, they use it some. In India, they use it a huge amount. They bring a bunch of queries there that you would bring to Google Search in the US. You could say, okay, well, why haven't we collapsed the YouTube and Search boxes into one search box? It hasn't necessarily been the case. We have the Google app and we have Chrome. They both allow you to search, and they both allow you to browse the web. You have a set of people that love the Google app, and you have a set of people that love Chrome, and you have a set of people that use both on a phone. But you can't necessarily convince either population that they want to stop using one app and just switch to the other app. The space is so huge, and it's changing so quickly right now, that I don't think we know yet for sure whether or not you can create one sufficiently dynamic, personalized experience, such that one app, one entry point, can truly do it all. When people come for restaurant searches, they come to Maps and they come to Google Search. We have not collapsed the Maps app into the Search app. At some point, it becomes big: you're putting all this directions code into the Google Search app, and is that actually useful, even if you had a full Maps view? I think we're just going to have to learn over time about what's good. But the space is really giant, and they do have different emphasis right now on what they try to excel at. You want to make sure that in the attempt to bring things together, you don't become only okay at everything, and you want to make sure that you can shine at all the use cases people need. That may mean two products, or that may not, or it may be a third product. 
I don't know. In five years, there may be a third product that replaces all the products. You have your personal agent, you don't talk to any products. I don't know.
Speaker 3:
[43:27] So I realize we kind of promised to talk about AI slop a little bit in this conversation. So one of the things that's happening with AI is not just that I can ask a bunch of questions I might not otherwise have had time or the inclination to ask, but also AI is being used to generate vast amounts of content that are aimed at potentially answering any silly question I or anyone else on Earth might have. Yeah, just churning it out. And I'm very curious how Search is weighing, I guess, the quality of its results in the new slop era of the Internet.
Speaker 4:
[44:03] I think there's a tendency sometimes to think about AI slop as if it's new. Before AI slop, there was slop, right?
Speaker 3:
Human-generated slop.
Speaker 4:
[44:14] There was human-generated slop; now there's AI-generated slop. So there has always been slop on the web. And so what doesn't really matter at some level is how much slop is on the web, so much as, is there great content on the web, and can you surface it? This is Google's bread and butter in ranking, and it has a long history of looking for spam and trying to drop it and make sure it doesn't show. We crawl many, many more pages than we even put in our index. There are pages we put in the index that we never surface, right? So that we can keep that rate of spam and slop very low. And it is a constant effort, right? It's not a problem you solve, because for some of the people generating the spam, there are a lot of financial incentives associated with it. But that is what people have come to trust Google for: that it will show great information, and it's a thing that we will continue to put a huge amount of effort into. And so that's the way I would think about it. It's not how much AI slop or human-generated slop or whatever automated slop there is, pre-AI or post. It's making sure that the information you do see is trusted.
Speaker 2:
[45:28] Liz Reid, thank you so much for coming on Odd Lots. That was a fascinating conversation. I have like a billion more questions, but we'll have you on in three months when the entire world has changed and we'll get an update from you.
Speaker 4:
[45:42] Thank you very much. It was a pleasure to be on with you.
Speaker 2:
[45:57] I like the point about human-generated slop. Do you remember?
Speaker 3:
[46:00] Yeah, but the difference is the volume.
Speaker 2:
[46:02] No, I know, I know.
Speaker 3:
[46:02] Like, I get it, but.
Speaker 2:
[46:03] But like, do you remember Jason Calacanis's startup Mahalo?
Speaker 3:
[46:07] No.
Speaker 2:
[46:08] So Jason Calacanis, who's been on the podcast before, he had this startup for a while. It was like the biggest piece of garbage in the world. No, for real, people need to go look. It was called Mahalo, and the idea was they were going to hire a lot of people to write articles that were not very good, to appear in Google.
Speaker 3:
[46:25] Oh, I see. Yeah, just swamp the search engine.
Speaker 2:
[46:28] Yeah, there was a famous one that my old colleague, I think at Business Insider, Nick Carlson, discovered. And it was like, if you searched how to play the xylophone, there was a Mahalo article for that, and it was, I swear to God, okay, it was: step one, decide if you want to play a xylophone.
Speaker 3:
[46:46] Well, that's an important step.
Speaker 2:
[46:48] Step two, get a xylophone. Step three, learn to read sheet music. Step four, practice reading sheet music and play this. So like this was actually like this, I was just remembering like it is, people have been trying to like stuff complete garbage into the search results for a very long time. And I always get a chuckle thinking about that example, and you should go look for it. I'm so glad that Liz brought that.
Speaker 3:
[47:14] I am looking for it now. I'm very distracted.
Speaker 2:
[47:17] It was so bad.
Speaker 3:
[47:19] Wait, we're just gonna like laugh about this article for-
Speaker 2:
[47:22] Look at this, just search Mahalo, how to play the xylophone.
Speaker 3:
[47:27] I see something from mahalo.com on YouTube. That can't be it.
Speaker 2:
[47:31] Yeah, yeah, just search. So yeah, Business Insider, February 2011: "Hilariously Useless: Mahalo's Guide to Playing the Xylophone."
Speaker 3:
[47:38] Did you write that?
Speaker 2:
[47:39] No, Nick Carlson wrote it. He wrote it up. Unfortunately, now it's behind the paywall, and it's itself covered in sloppy ads, so, I guess. But anyway, sorry.
Speaker 3:
[47:51] Decide whether you want to buy a used or new xylophone. Metal xylophones are less expensive than wooden ones. That's useful. That's a useful tip I took on there.
Speaker 2:
[48:00] When you see that, it's like, please AI save us from this human-generated garbage. They were trying to clog search results before.
Speaker 3:
[48:08] On a serious note, I did think the point about not customer retention, but expanding the volume of user queries on the platform made a lot of sense.
Speaker 2:
[48:19] Totally.
Speaker 3:
[48:20] Which I hadn't really considered that much before. So even if you do get a bunch of no-click users, they are more inclined to come back to the platform in the future. Maybe some of that eventually lands in clicks.
Speaker 2:
[48:33] The other thing I hadn't thought of, and I thought it was a great point, which is that Google currently runs multiple search boxes. There's the YouTube. Are you looking at the Mahalo article and cracking up?
Speaker 3:
[48:44] I'm sorry, I am. Step four is experiment with different mallets.
Speaker 2:
[48:49] It's so good.
Speaker 3:
[48:52] I just really want to play the xylophone.
Speaker 2:
[48:54] It's so good.
Speaker 3:
[48:58] Step five is practice regularly.
Speaker 2:
[49:00] Yeah. I think AI is much better. It's so good, isn't it? It's like the biggest steaming pile of garbage I've ever seen on the Internet. Fifteen years or ten years before anyone had ever...
Speaker 3:
[49:14] Can we have Calacanis back on the podcast just to talk about this?
Speaker 2:
[49:17] We should have Jason back on the podcast just to grill him about what exactly he was thinking and his sins against the Internet for having put this on there.
Speaker 3:
[49:27] Well, he was an early adopter of non-AI slop.
Speaker 2:
[49:31] But anyway, that point about there are multiple search boxes already, right? There is the YouTube search box. You're still left.
Speaker 3:
[49:38] I'm sorry. I was trying to make eye contact with you and not look at my computer.
Speaker 2:
[49:44] Should we just leave it there? Yeah.
Speaker 3:
[49:45] OK. All right. Yes, shall we leave it there?
Speaker 2:
[49:48] Let's leave it there.
Speaker 3:
[49:49] This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me at Tracy Alloway.
Speaker 2:
[49:54] And I'm Joe Weisenthal. You can follow me at The Stalwart. Follow our producers, Carmen Rodriguez at Carmen Armann, Dashiell Bennett at Dashbot, and Kail Brooks at Kail Brooks. And for more Odd Lots content, go to bloomberg.com/oddlots, where we have a daily newsletter and all of our episodes. And you can chat about all of these topics 24/7 in our Discord, discord.gg/oddlots.
Speaker 3:
[50:15] And if you enjoy Odd Lots, if you like it when we talk about AI and human-generated slop, then please leave us a positive review on your favorite podcast platform. And remember, if you are a Bloomberg subscriber, you can listen to all of our episodes absolutely ad free. All you need to do is find the Bloomberg channel on Apple Podcasts and follow the instructions there. Thanks for listening.