transcript
Speaker 1:
[00:05] Hello and welcome to Better Offline. I'm of course your host Ed Zitron. As ever, support your neighborhood Zitron by subscribing to the Premium Newsletter, discount link in the episode notes, of course, buy a T-shirt, download a blog, whatever it is you want to do, okay? It's not up to me what you do. But today, I'm joined by the incredible comp sci professor and commentator Cal Newport. Cal, thank you for joining me.
Speaker 2:
[00:37] Always a pleasure, Ed.
Speaker 1:
[00:39] So, I kind of wanted to start with, I asked you for a quote a few, like a week ago, maybe two weeks ago, I can't remember how time works anymore, but it was around the way the reporters cover AI and how it seems that a lot of the reporting is kind of directionally true rather than actually true.
Speaker 2:
[00:57] Yes, and I want to add something to it since then. So, I've been thinking about that quote. What I said, if I remember it properly, is that I was picking up a lot in the reporting on AI that you would lean into a story without having necessarily verified that the details are true, and that this is what's actually going on, say, with the new AI model. You would lean into it anyways because it was what I call directionally correct. It makes the general point that you see it as your job as a reporter to make, which is, hey, you need to be worried about this, or this is a big deal, right? And so, I think that is a problem. There's another issue I'm seeing, though. I've sort of been refining my thinking on this. I'm also wondering if some of what I'm seeing in some of the reporting on this is just an embrace of the form of, I'm going to give you a stress wave with no relief. Just like, we're all going to take turns. I will choose an area you haven't thought about. How about, mathematics is going to go away? Mathematicians are going to be like, okay, I'll take that one. Yeah, let's go.
Speaker 1:
[02:02] Negative click bait.
Speaker 2:
[02:04] Yeah, but there's this weird sort of passivity to it, where it's like, I call it head-shaking doomerism. You're just like, this field's just going away, what can we do? This sort of passive head-shaking. It's a very specific style. You don't see a lot of other reporting historically, I think, that takes on this resignation of, I'm just gonna make the case that you're screwed, and then kind of give you a shoulder shrug, and then we're gonna drop the mic and walk off. And I'm kind of getting tired of this. I think there is a cost to stressing the hell out of people. I mean, I'm getting letters all the time now from people. They'll say things like, I feel like I'm trapped in a cage just being hit with wave after wave of stress and there's no outlet. There's no door or possibility of making things better. And I think the CEOs are doing it. And I think increasingly we're seeing commentators doing it as well. This is not good in many different ways. So I don't know, I'm adding that to my list. Some of it's directionally true reporting, like they really are worried that people aren't worried enough. And some of it, I think, is just sport now. Can you find an area to come in and write a head-shaking article about, one that's only trying to undermine the existence of this important human activity or this job or our lives or whatever? It's a very unusual style that quickly became a standard.
Speaker 1:
[03:20] And I see it a lot with anything to do with AI and jobs studies. Like I've been sent this Tufts report where it's like, oh yeah, AI affected, or, they find these weird weasel words, like jobs that could be at risk from AI at some point, and we put them in one bucket, and then jobs that might one day be, we'll put that in another bucket. And there you go. Like you said, don't know what we're meant to do with this. Don't know what anyone's meant to do with this information, but it's just like, well, there you have it. There you have it, we're all fucked. It's the end. Even though the data does not say that. Like I've read, I think, every AI jobs report now. Every single one. And they're all the same. They are all, right now, AI can do this. And then you look at what it says, and it's like, it can do law. Well, it can't really do law. It can do one sigma within law, kind of. And even then, it isn't really obvious. And the people saying it can do that are partners at law firms that don't write motions, or don't do the grunt work. So it feels like the reporters have either given up or are just looking for clicks. And it's hard to tell sometimes.
Speaker 2:
[04:33] This is what I'm trying to figure out. Because I'm realizing, if it's entirely just, I think this is directionally true and that's good enough, then they should be way more upset, and in the streets, and sparking a revolution, right? Like if you actually really believed 50% of the economy was gonna be automated, that we're gonna have to have government checks just so we can afford to buy the cat food to eat after all the jobs are gone. If you really thought that our entire infrastructure is about to collapse, or super intelligence was going to emerge suddenly and be a threat to human existence, would you just want to write a sort of too-cool-for-school head-shaking resignation article? You would be like, where are the John Connors, right? We need to get on the cool trench coats and get out there and go against the Skynet revolution. You would be on your feet, nothing would be more important to you. So this is my case about the tech CEOs. I think there's a moral hazard here that we're not putting our finger on properly, right? So you have the tech CEOs in the AI space that will just come out and just drop these bombs. Like, white collar bloodbath, you know, I never actually said that, that's Axios putting words in people's mouths.
Speaker 1:
[05:43] That was Axios. I thought that that was a, he definitely, Dario Amodei, Wario, he did say 50%, but I thought he said the bloodbath, that's my bad.
Speaker 2:
[05:53] Well, I trust that; The New Yorker fact-checkers figured that out for me. But Axios does a lot of this, where they put these really quotable quotes in the headlines of articles on interviews or speeches given by AI people. And it turns out the thing in the headline wasn't what they said, it was directionally what they said. But anyways, so they're out there making these big statements. The jobs are going away. The internet as we know it is about to all fall apart because Mythos is going to have this new capability. The super intelligence is coming. I don't even know what's going to happen. There's two possible things going on here, and both of them are morally bad. One, which is the one I think is true, is that this is largely marketing. It works, it gets reported, it keeps them seeming inevitable and important, in which case that's a huge moral hazard, because you are making many, many people, normal people, stressed the hell out.
Speaker 1:
[06:43] Actively scaring them.
Speaker 2:
[06:44] Actively scaring them. The other option is you actually believe it's true. Well, this is an even larger moral trap that you've just fallen into because you are now perpetuating something that's going to cause exponentially more harm. You should be the very first person shutting down your company and trying to get the other ones to do it as well. So it's this weird moral trap they've set up where whatever is actually going on here, if they're coming out here saying these things, it is bad. This can't possibly, normatively speaking, be the right ethical behavior to be out there saying these scary things all the time because either you need to be building the barricade or you're just scaring people for the marketing. Neither of these, I think, is something that's defensible.
Speaker 1:
[07:22] I have a third and worse option, and I'll choose Axios. I think Axios, there are some good reporters there, but I think the leadership over there is disgusting. I think that they are aligning themselves with the companies. I think that if you watch, there was a Jim, what's his name? Interviewing Sam Altman. I think that there is a level of, and I would put this across people like Kevin Roose and Casey Newton, these are my words, not Cal's, that they're saying, we think this is gonna happen, and we're here to tell ya, great news. This is good news for me, the writer, because I will be safe somehow. I will be fine, you will not, you should be scared, but it's also a good thing, cause economy, marketing, market, good. And it's a very incoherent message, cause it's like, to your point, if this was a virus, like a pandemic, you wouldn't be writing, hey, millions of people are gonna die. Well, pretty good, right? Hey, it will be good. We'll have less people, that'd be good, right? It would be seen as peculiar.
Speaker 2:
[08:25] Someone did write that. Someone did write that, by the way.
Speaker 1:
[08:28] Someone did write that?
Speaker 2:
[08:29] They did say, I remember early pandemic, someone did write, hey, you know what? This is good for the planet. It didn't go over well. They were like, hey, we're driving less, this is great. And we're overpopulated, like, uh-oh.
Speaker 1:
[08:39] I mean, that's a different conversation, maybe, but in all seriousness, you didn't have mainstream media being like, well, COVID's gonna kill everyone. The end, I guess, you know, maybe we'll just be inside forever. You didn't have this kind of strain, in fact, you had the direct opposite, it was, we need to get outside again, who cares about this thing?
Speaker 2:
[08:58] Well, okay.
Speaker 1:
[08:59] And it's just, yeah, go on.
Speaker 2:
[09:00] Yeah, I think that's an interesting, and I wanna just pull on that thread a little bit, because I think COVID gives an interesting, I think it gives two different interesting observations that go in both directions, right? So, I think you're definitely right, what you're saying is when the pandemic was coming or it was getting bad, really a lot of the coverage was about what should we be doing, or who are the people doing the wrong thing? But it was very much coming from this angle of like, okay, we need to do whatever it is. Like we need to be better about this, it's gotta be vaccines, it's gotta be masks, it's gotta be pickier mitigation, whether you like it or not. It was very focused on what should we be doing, or who is it that's getting in the way of a plan that maybe would get us out of this? Which is where I think you're very right, is that you did not see a lot of COVID pieces that were just, well, I'm just gonna kind of walk through like all the different ways, you know, you might die and the morgues are gonna fill up, and you know, that's COVID. And they kind of shrugged it off.
Speaker 1:
[09:54] That's just how life goes.
Speaker 2:
[09:55] But I also think the other thing we saw in a lot of COVID coverage is something that we are seeing in the AI coverage. That's where I saw a lot of the directionally true, not factually true, but directionally true. There was definitely a period early on in COVID, because I was following that coverage quite carefully, where the papers were thinking, okay, this is the right behavior. They were probably right about a lot of these things. But I just would notice this. There'd be a lot of, okay, we need people to buy into, for example, the lockdowns or whatever. And there'd be a lot of directionally true reporting, where maybe they would put up a photo of a mass grave that was sort of unrelated to COVID. Or there'd be pushback from, like, conservatives about schools, and then they'd put a lot of articles in the paper about teachers dying of COVID, even though they weren't in school, they got COVID elsewhere. And if you really pushed on it, it was because it's directionally true. The more general truth here is, we need to be worried about this, or, these mitigations work. It doesn't matter if this photo is actually right, or if this teacher who died in Orlando hadn't yet been back in a school building, it's serving the directional truth. So COVID highlights something we're seeing now, that the reporters doing directional reporting, like, we should be scared about it, I dare you not to be scared now, are just trying to ratchet it up. But then you also get the contrast, which is this new style of head-shaking resignation. And actually, I don't think the reporters think they're gonna be safe. They're also like, writing's gonna go away, the media is gonna go away. So it's an almost nihilistic type of approach to this. Like, yeah, I'm screwed, we're all screwed, what are we gonna do? And that is definitely different from what we saw during that last crisis, which was obviously much more actually severe than what's happening now. So it's really confusing me, to be honest.
Speaker 1:
[11:46] Well, the directional reporting during COVID, yeah, they probably shouldn't have done it, but at the same time, it was actually in pursuit of something good. It was an attempt to make people take this seriously, because that's ultimately what it was. Take this seriously, don't go outside, don't meet with people, don't be indoors with people, blah, blah, blah. Great. In this case, it's like, yep, you should be scared of this, and what should you do? Fuck knows. Use ChatGPT, I guess. And what's really confusing to me as well is, you say these people don't think they'll be safe. For the most part, I actually take back what I said. I think a lot of them just don't acknowledge it. They don't acknowledge the core ridiculousness of being like, well, everyone's jobs are going to get replaced. Don't know, like the Garfield meme with him looking at the crossed-out Garfield on the TV. Yeah, flawlessly described there. It's just frustrating as well, because it is terrifying people. And I'm not saying literally Axios or whoever, but stories like this are what made a mentally unstable person throw a Molotov cocktail at Sam Altman's house. It's obvious that these people were scared of the AI doom, partly because, to your point, what the fuck are we meant to do about it? Because using these tools is not, I don't really see how that works. Because going along that line of logic, if the answer is you need to use this stuff now, but the eventual endpoint is that it's intelligent enough to do everything for you, how does using it now matter at all? Surely ChatGPT would be seen as like a rock versus a shotgun at that point. It's just technologically irrelevant if they get to AGI, which they probably won't. It's just naturally illogical stuff.
Speaker 2:
[13:32] Yeah, and I'm with you. I've been making that same argument. This idea that you need to learn how to prompt some generation of a chat bot that exists right now is going to be the key to your long-term prospects. I mean, even if, as you say, AI ends up playing a major sustained role in the economy, it's not going to be everyone typing on a web interface to a chat bot that's sycophantic and has a personality. I think I've heard you say this recently, and I agree with it as well. I don't think we should be chatting with technology. We should not be chatting in a sort of anthropomorphized, humanized way. Doesn't mean you can't do natural language processing. I mean, Google is natural language processing. You're writing your Google searches in natural language, but no one's having a conversation with Google. You list the keywords as quickly as possible, and Google's pretty good at figuring out population, Spain, 1982, and you press enter, and you get that information. You're not like, hey, so I'm wondering what the population is of Spain in 1982. Can you help me find that, question mark. There's something odd about that anthropomorphized conversational interface. I guess we saw a lot of Star Trek growing up, and that's what we think the future's supposed to be like, but it has all sorts of problems.
Speaker 1:
[14:42] Remember Star Trek, when he would go, computer, do this, and the computer didn't go, that's a great idea, Jean-Luc. What a great idea. Thank you. The computer just did the thing. I don't have any trouble with natural language queries, because I think the whole reason that, say, ChatGPT has grown comes from search. I think that is the core of it, because ChatGPT and Claude and all of them are better at understanding what you asked for. Not saying the data output is necessarily great, but the inference they make from what you say is better than Google, or at least better than Google has been. I feel like it was better before. I think that had Google not boofed it on this one, we wouldn't be in this spot. But even then, using Google now, it forces the AI summaries on you, and you can do minus AI and all that, but sometimes I don't remember to. It's just turned search into this nightmare. But nevertheless, back to what you're saying, I agree. I think the anthropomorphization needs to go. I think that these things need to respond like terminal windows or what have you. They need to respond like computers and go, okay, here you go. I just don't need all that kludge. I don't need to be told, oh, what a great idea. I know I had it. Or indeed, if I'm being told that, I need to be told if it's a bad idea. But I don't even necessarily need an answer. I just need stuff to look at so that I can come to my own conclusions.
Speaker 2:
[16:09] I think it's hard, actually. I think it's actually hard to get a language model to do that. Because if you go back to the base layer of what's happening in the pre-training, you're building a language model that's trying to win at the token guessing game. So I'm trying to guess what word, or part of a word, actually comes next in what I assume to be a real piece of text. And then if you do that autoregressively, so you call it again and again and again, adding the answer to the input so it grows out an answer, what you're gonna get is text expansion. You've given me a text that I'm trying to expand as if there was a real text that exists and I'm trying to match it. You get that kind of indirectly. So really, its idiom is the type of text it's trained on, which for the most part is prose-style text. So you can tune it away from that, like you can tune its mood, you can tune its sycophancy, but it might be hard to actually tune an LLM, because it deals with human-written prose as its main training data. It might be harder than we think to tune that away from being verbose and to just give a table. Now, I guess you could take its output and then maybe run that through another thing that then strips away the other piece. It's possible, but I think the anthropomorphized verbosity we see in language models is kind of the native tongue of this particular technology, which is why we still have a lot of chatbots being emphasized, and tools that are built upon LLMs as the digital brain are still way more scarce than you would imagine, outside of maybe computer programming and coding harnesses. We just don't have a lot of other examples where we use the LLM as a general-purpose digital brain. Because this verbosity is okay when humans can interpret it, but it's not great if the LLM is a digital brain interfacing between you and another computer. That computer doesn't need to hear that its idea is great, or want to parse different types of text. There's some interesting things going on there about the fundamental nature of these things.
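To make the loop Cal is describing concrete, here is a minimal sketch of autoregressive generation in Python. Every name in it (model.predict_next, tokenizer, eos_id) is a hypothetical stand-in for illustration, not any vendor's real API.

```python
# Minimal sketch of autoregressive generation: the model only ever
# guesses the next token, and an "answer" emerges by appending each
# guess to the input and calling the model again. All names hypothetical.

def generate(model, tokenizer, prompt: str, max_new_tokens: int = 200) -> str:
    tokens = tokenizer.encode(prompt)             # the text so far
    for _ in range(max_new_tokens):
        next_token = model.predict_next(tokens)   # one guess: what plausibly comes next?
        if next_token == tokenizer.eos_id:        # the model decides the "text" is finished
            break
        tokens.append(next_token)                 # the guess becomes part of the input
    return tokenizer.decode(tokens)               # text expansion, nothing more
```

Nothing in this loop plans, evaluates, or formats; it only extends text, which is why verbose prose is the model's native idiom.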
Speaker 1:
[18:01] But even then, with Google AI Mode, it still seems like it can give fairly short answers. But if you argue with it, as I have, it will just keep providing, even Google's will provide you with, just hot dog shit. It will just claim something is true. I just did a piece on private credit even. My favorite thing is being like, what fund is this part of? And it goes, it's part of this fund. That fund was founded after this happened. And it goes, okay, well, maybe it's this one. Different fund, three years old, wasn't it? Not involved. Do you have proof of that?
Speaker 2:
[18:39] Well, this is what you don't see in Star Trek, is Captain Kirk, or whoever, I'm gonna mix up the episodes here, say like, hey, computer, we are approaching Deep Space Nine, prepare docking procedures. And the computer is like, photon torpedo fired, station destroyed. And you're like, well, no, I said we're supposed to dock. Oh, you're right, Kirk. I shouldn't have fired the photon torpedoes.
Speaker 1:
[19:01] You're holding me accountable, Captain Kirk.
Speaker 2:
[19:03] That was, I did the opposite of the thing, you know. Yeah, that didn't happen in Star Trek.
Speaker 1:
[19:18] So, one thing that's really been driving me insane, by which I mean going on Twitter, is looking at people like Aaron Levie of Box and Brian Armstrong of Coinbase talking about agents spending money, and the agentic web, and how we need to prepare the web for agents doing stuff. And the agents will do this. Fantastical, doesn't exist, agents don't do that. They just don't have the ability. It's like, oh, they'll use computers. Computer use is basically non-functional in AI, and it takes insane amounts of compute. It feels like a conversation keeps happening in theory, in the media, on social media, about something that's possibly completely impossible. But the certainty they discuss it with is insane to me. This whole agent conversation, I've never seen anything like it in my life.
Speaker 2:
[20:06] I mean, it does feel a little bit like crypto to me. I think that is kind of a fair comparison, where if you had blockchain-driven software, in theory, that software would kind of work, but it just gave you a worse version of what you could already do for pennies using an actual Amazon server somewhere. All you were really gaining was some sort of cyber-libertarian philosophical feel-goods, like, yes, but this was purely decentralized. I got worse versions of software to be decentralized.
Speaker 1:
[20:38] But now no one can control it.
Speaker 2:
[20:40] This is what early agents... I mean, okay, so here's what I've been writing about agents. I've been thinking a lot about it. The issue is, I don't think people understand what they are. I think people think that it's a new type of digital brain that is now able to go out and do more autonomous activity. I always see this get mixed up. It's like people talking about Mythos breaking out of its sandbox to do XYZ. Mythos is a language model. You can give it an input and it can give you a token. You're talking about a program that is calling Mythos and then taking actions based on what it returned. This is really what we're talking about with agents: the digital brains are LLMs. Then you write a program that will say to the LLM, give me a plan for doing X. The LLM spits out text that seems like a reasonable plan. Then the program executes that plan on behalf of the LLM. I wrote about this earlier this year: LLMs as a digital brain are bad planners. You're not going to get consistently usable plans, because what an LLM is actually trying to do is finish the story you gave it. So all it wants to do is produce a story that sounds reasonable. So it's giving you reasonable-sounding plans. Yeah, that's what a plan for doing this would more or less sound like. But what it's not doing is actually doing step-by-step evaluations. It doesn't have a clearly isolated goal that it's trying to measure how close you're getting to. It doesn't have a world model to evaluate what's going to happen with the steps that unfold next. And so in almost every context, it turns out, a digital brain that's just an LLM by itself doesn't lead to good agents. In programming, it seems to work a little bit better. But I do think Gary Marcus, I don't know if it was a scoop, but Gary Marcus has captured in a recent newsletter something really important. When the code for Claude Code, Anthropic's coding harness that sits on top of their LLMs to do coding, leaked, it turns out they've added a huge amount of old-fashioned, hand-coded, symbolic-AI-style rules and pattern recognizers and special if-thens. They've just been sitting there tuning this program for specifically doing computer programming, and the LLM is being isolated a little bit more to just the code production. They've just gone back to old-fashioned. That's just an old-fashioned system that is plussing up an LLM. But I'm with you. It's very hard. Just asking an LLM, give me a plan for doing X: for almost any scenario of X, you really can't trust a plan from a model whose goal is primarily to finish text, to finish the story you gave it in a reasonable-sounding way. That's not how we plan. That's not how we think about planning, and it doesn't give you consistently usable plans. But you're right, it's, the magic agents are coming. They've been saying this. I wrote an article in January: what happened to the year of the agent? 2025 was supposed to be the year of the agent. All we got was coding agents. That's the only thing that got worked out that whole year. And I have the receipts: early 2025, all of these executives saying, your work as a knowledge worker, not as a computer programmer, but just as a knowledge worker, is going to be largely done with agents. Agents are going to be a major part of your workforce in just a normal office setting.
And none of that happened, because it turns out just asking an LLM, give me a plan for doing X, doesn't often actually produce a workable plan.
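A minimal sketch of the agent pattern Cal is describing, in Python. The call_llm helper is a hypothetical stand-in for any chat-completion API, not a real framework; the point is only that the LLM emits text, and an ordinary program turns that text into actions.

```python
# Sketch of an "agent": the digital brain is an LLM that can only
# produce text; a plain program asks it for a plan, then executes the
# steps itself. call_llm is a hypothetical stand-in, not a real API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up a real model here")

def run_agent(goal: str) -> None:
    # Ask the model to finish the "story" of a plan. Nothing below checks
    # the plan against the goal or a world model; the plan only has to
    # *sound* reasonable, which is exactly the weakness described above.
    plan_text = call_llm(f"Give me a plan for: {goal}\nOne step per line.")
    steps = [line.strip() for line in plan_text.splitlines() if line.strip()]
    for step in steps:
        print("executing:", step)  # a real harness would touch files, APIs, etc.
```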
Speaker 1:
[24:07] And as a result, the only way to make agents work, which they do not, is to build a bunch of symbolic or if-this-then-that shit, just like scripts. I mean, if you use Manus, for example, it's just writing a shit ton of Python. And it's writing it to do stuff where it's like, oh yeah, let me just do this. And it just writes a Python tool to fill out a spreadsheet. It's insane. It's really insane. But what's more insane to me is that the conversation around agents is as if they're already here. I'm about to read you something from Box CEO Aaron Levie, the CEO of a public company. One corollary to the fact that AI agents take real work to set up in a company at scale is that the role of the forward deployed engineer, or whatever it gets called in the future, isn't going away anytime soon. When a vendor sells any kind of agents to an organization, you're no longer just selling a software tool that gets implemented and you're done. You're fundamentally selling some sort of actual workflow being done by your technology. What are you fucking talking about? You are a cloud storage and collaboration company, what do you sell? And the answer is nothing. They don't sell any agents. Agents are, oh, agents are gonna do this. What you are describing is a different kind of technology. That's it. It's something else that doesn't exist. But this is everywhere. You look at any consultancy right now, any conference right now, there will be a speech about agents. Even Meredith Whittaker, who I deeply, deeply respect, went on stage last year and was like, yeah, AI agents using money, they're booking plane tickets. No, they're not.
Speaker 2:
[25:36] They're not.
Speaker 1:
[25:37] That's not happening. And I say this again, deeply respect Meredith. I said this online, people flip their shit at me. It's like, oh, she's directionally correct.
Speaker 2:
[25:46] Yeah.
Speaker 1:
[25:47] She's directionally correct. And it's like, let's be scared of the things that exist, because I think it's perhaps scarier for a different reason, that we have large swaths of the tech industry talking about something that doesn't exist. Agents don't, like, they don't. They don't exist. People are talking about the agentic internet. I keep reading about it, even on The Verge. I read it all over the shop, where it's like, oh yeah, well, the internet needs to be rebuilt for agents to use. It's like, what do you mean? And they never say, because the answer is, when we come up with something else. Because I don't even think neurosymbolic makes sense for this, neurosymbolic being the one where they have a deterministic system that they access, from what I understand. The other thing as well, now that I think about it out loud: how would they actually browse the internet? Where are they being housed? Are we using GPUs to make them browse the internet? That's very, very convoluted and probably quite expensive to do, and to what end?
Speaker 2:
[26:55] That's the real question, right? I mean, I've seen these proposals. Basically, where a lot of these proposals go is, the agents were supposed to, we thought that we could just make AI do anything, so we'll have it use the mouse and just use our computers for us. Oh, that's hard. We don't know how to do that. All right, so what we'll do is, we'll rewire all the applications that anyone uses on the internet so that we don't actually have to use the mouse. We can have a text interface so that an LLM, like the coding agents do, can give a description of how to do something in Excel, in text, without having to actually move a mouse or click things around. Then these evolved to say, okay, well, what's the one type of instruction that we're good at producing? Because when LLMs produce plans, they're directionally correct plans. They don't actually get the thing done. But what LLMs are good at is producing code that compiles, and we can actually check that it works. So this is where this whole vision has changed: all applications and internet websites should have a code-accessible API that you can expose, and then an LLM can write a program that will access that API. So we don't need to teach the LLM how to use Excel, it'll write a Python program that'll call hooks into Excel. The problem with this is, no one wants to open up their application to just agents in general. Microsoft is like, I want to write a custom tool for my program, why would I expose my program for anyone else to use? But your original question is the big one: to what end? I've been writing about this recently, especially with work and AI. You gotta find the real bottlenecks, right? It's the drunk looking for the keys under the streetlight. There's a lot of this going on, where this is what we can do with AI right now, so this now becomes the key to productivity. But the real bottlenecks in people's work are often not the things that we're trying to aim AI at. I don't know that people are super frustrated at booking a plane ticket online.
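As a rough sketch of the "expose an API instead of a mouse" idea Cal just outlined: the application publishes code-accessible hooks, and the model emits a small program against them, code whose success we can at least check, unlike a free-form plan. SpreadsheetAPI and its methods below are invented for illustration; they are not Excel's real object model.

```python
# Hypothetical hooks an application vendor might expose:
class SpreadsheetAPI:
    def delete_rows_before(self, column: str, cutoff: int) -> int:
        """Remove rows whose value in `column` is below `cutoff`; return the count."""
        ...

    def make_pie_chart(self, column: str, title: str) -> None:
        """Render a pie chart from the values in `column`."""
        ...

# The kind of snippet an LLM would then be asked to generate, instead of
# being taught to move a mouse around the UI:
def generated_task(sheet: SpreadsheetAPI) -> None:
    sheet.delete_rows_before(column="C", cutoff=2020)
    sheet.make_pie_chart(column="C", title="Values from 2020 on")
```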
Speaker 1:
[28:51] Yeah, it's really easy.
Speaker 2:
[28:52] How often do you book plane tickets? And you kind of want to look anyway, like, let me see, maybe this time will be better, what seats are available. It takes five minutes. So it was a huge jump to go from a travel agent to a web interface, but this is not a bottleneck in people's lives now, where I want to give complicated...
Speaker 1:
[29:07] I book flights all the time, and they're easy. They're so simple. I can do it while sitting on the toilet. I don't want an agent to choose. And people are like, oh, your calendar will tell it. My calendar doesn't lay out my entire day. I don't have every single thing I do on there. It's just strange.
Speaker 2:
[29:25] Well, I had the same argument with social science researchers, who are like, if you're geeky enough to learn coding agents, this is revolutionizing science research, because now, for example, you could have it write a program to process a data file and then format it into a plot. And that might have taken you four hours to do, and you work with it for a half hour and you get that result. This is revolutionizing research. And I'm saying, well, it's not. The bottleneck for social science researchers is not analyzing data and producing plots. You're not sitting there doing that eight hours a day every day. It's not as if doing this twice as fast means you'll produce twice as many papers. I might write one paper in a three-month period. Yeah, in there, there's like four hours I spent making a plot. And sure, it'd be nice if that four hours became 30 minutes. But that's four hours out of a multi-month process of thinking about this paper.
Speaker 1:
[30:17] What is a plot, by the way?
Speaker 2:
[30:19] Like a graph.
Speaker 1:
[30:20] Oh, right.
Speaker 2:
[30:21] Yeah, the computer science term. But yeah, it's nice that that got a little bit faster, but that's not the bottleneck. That's not what's going to unlock a lot more research. It's like, man, I would write more papers if it wasn't for how long it takes me to draw a graph.
Speaker 1:
[30:34] Isn't the problem data?
Speaker 2:
[30:36] Getting the data.
Speaker 1:
[30:36] Like actually collecting data?
Speaker 2:
[30:38] That's what it is. I wrote about this, talking to a well-known business school professor years ago for my book, Deep Work, and he talked about how he realized, oh, being a business professor, publishing papers is about data access. I have to spend most of my year talking to people, building relationships, trying to set up an agreement with a company where I can get good data that I can get three papers out of. In all of that work, there's one day in there where you're crunching the numbers and making a plot. And it's nice if you can do that a little bit faster, but it's not a productivity bottleneck. It's a marginal efficiency gain. I think there's a lot of that going on right now with AI and productivity, as we look at what the AI can do and then try to make that thing into somehow being the key to getting things done.
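The arithmetic behind that point is essentially Amdahl's law, and it's worth running once with the illustrative numbers from the conversation (a roughly four-hour plotting step inside a roughly three-month paper; all figures here are assumptions, not data):

```python
# Amdahl's-law style check on the "plots got faster" claim, using the
# illustrative numbers from the conversation. All figures are assumed.

project_hours = 3 * 20 * 8      # ~3 months of working days, in hours (~480)
plot_before = 4.0               # hours spent making the plot by hand
plot_after = 0.5                # hours with an AI assistant

saved = plot_before - plot_after
speedup = project_hours / (project_hours - saved)
print(f"time saved: {saved} h out of {project_hours} h")  # 3.5 h out of 480 h
print(f"overall speedup: {speedup:.4f}x")                 # about 1.0073x
```

Speeding one four-hour step up eightfold moves the whole project by well under one percent, which is the "marginal efficiency" point.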
Speaker 1:
[31:20] I just, my productivity problem is that the UI and UX and everything sucks. Everything's disjointed. Setting up Riverside is always fun. They move the menus around. Projects are in a different place. That takes up time. Moving files places also takes up a lot of time. This morning when I put out my private credit piece, I had to do these threads. I had to click around a website and put in the alt text, but I had to tweak it slightly. It's like, I don't know how AI would possibly help me here and they're not working on that.
Speaker 2:
[31:49] Well, they tried. They tried, though. I thought that was going to be, this is what I was excited about earlier in the gen AI revolution. I was like, okay, here's the real value prop: natural language interface into advanced features on software, where I can just say, all right, I want you to go take this column in the spreadsheet and get rid of all the rows that have values before this, and then I want to make a pie chart, because I don't want to learn how to do all that in Excel. I don't know how to do that. They tried it. I mean, this is Microsoft Copilot. But it turns out we underestimated the degree to which, when we as humans are interacting with a chat bot, we're incredibly gracious. We're able to adjust and get the gist of what it means, filter out the part of the chat bot response that's not really relevant, or ask the follow-up question. When they tried to just use LLM responses to automate actions within programs, it's just not accurate enough. So they wanted that to be the case, that you could just be talking to a Riverside bot and you'd never have to press a button ever again in Riverside. It's just not accurate enough. LLMs are fine for human conversation. They're just not accurate enough in this general case.
Speaker 1:
[32:56] Also, that thing you're describing with how they want the agentic web to just be a series of APIs, so that every agent writes Python or what have you to use them, that's a massive computational increase for no reason, because you're basically saying instead of someone clicking a mouse and hitting a keyboard, we will write code for everything.
Speaker 2:
[33:17] Yeah.
Speaker 1:
[33:19] What a truly insane idea. I mean, it's just very like Salesforce today. I don't know if you saw, they announced that they're doing Salesforce Headless 360. Marc Benioff needs to fire everyone in marketing, but they've made it so that you can do everything in Salesforce via an API. Which is, I mean, the first question I always ask is, what does Salesforce do? Because I've talked to so many people and they can't tell me. There's like 21 different features, no one knows what they do. But it's just a very bizarre thing. It's very much a cart-before-the-horse thing, but also, what agent? This is the thing that really drives me insane. They're talking about, we built this API for the agentic web, for agents to use. Which one? What agent? What are you talking about? Well, it will be in the future. You changed something materially with your publicly traded company worth $300 billion because it might happen, we're getting ahead of it. And you talk to members of the media about this, and they just go, yeah, you know, it will happen. It's obviously going to happen. They wouldn't put this much money behind it if it wasn't going to. It's like, I don't know, especially with Salesforce. And I'm like, you don't think Salesforce would spend a bunch of money for no reason? Well, buddy, you've not been following Salesforce at all then. But yeah, go on.
Speaker 2:
[34:42] Yeah, I was going to say, how much did Meta spend on the Metaverse?
Speaker 1:
[34:45] Over $70 billion. Where did that money go? Where did it go? Where did it go?
Speaker 2:
[34:53] Customizing floating dinosaur avatars.
Speaker 1:
[34:56] Not building legs. But let's change-
Speaker 2:
[34:58] That's the second 50 billion, right? If they had gotten the second half of the investment, they would have got to the legs. They're just not there yet.
Speaker 1:
[35:04] Another 100 billion and they'll have toes. So, changing subject a little, Mythos has been one of my favorite media hysterias recently. I genuinely wonder, if they ran War of the Worlds again today, I think Axios would have a headline in two minutes and it'd be like, there are aliens, they're attacking, I heard it on a podcast. I've looked through the system card. I don't know if you have, for Mythos.
Speaker 2:
[35:30] It's wacky. I can't believe we're letting people get away with having a psychologist talking to the chat bot in your system card. It's nuts. It's all gone to marketing.
Speaker 1:
[35:43] They had a psychiatrist or a psychologist, I can't remember, talk to it and be like, yeah, we found these emotional features. We need regulators to stop this stuff, because I've heard, and people's response to this is, well, banks are having meetings about it and the government's having meetings about it. Governments have meetings about NFTs. Gavin Newsom signed an executive order about Web3. These people will meet and talk about anything. Oh, it's scary and they're not talking about it, which means it's powerful. Well, how is it powerful? What does it do? Because, I think you probably saw this as well, it didn't list how many false positives there were. It also didn't mention that the FreeBSD bug that they talk about, that they found, wasn't actually exploitable. I think it was something about the level it was at. I forget, I don't do programming other than very simple Python, a dog's Python.
Speaker 2:
[36:42] Yeah, I mean, the FreeBSD kernel is full of bugs. All these things are full of bugs.
Speaker 1:
[36:46] Because they're open source.
Speaker 2:
[36:48] I had to have this conversation with someone recently where they were like, Mythos, can you believe it, of all the places, it found a bug in the kernel of Linux. In Linux? Are you kidding me? All day long it's just bug fixes having to be pushed into that repository. Yeah, the Mythos story, I think, I mean, A, someone needs to get a Nobel Prize in Marketing, because it was absolutely brilliant what they did there. I've spent a lot of time on it. It's complicated because, again, you can't really trust the system cards or the gonzo stuff that Anthropic puts out, and it's not publicly available. But there were, I think, a few very telling things. There's two features they say Mythos has. One is finding vulnerabilities in source code, and two is writing programs to exploit them. It's first really important that people understand, this is something that people have been doing with LLMs since the beginning of publicly available LLMs. Not only is there nothing new about that, but I found, and we put this on my podcast, almost word for word from the Anthropic system card: they said, in the Opus 4.6 system card, for a publicly available model that's already been out for many months, almost word for word what they said about Mythos, except with no coverage of it and no fear. They said, we have found 500 zero-day vulnerabilities, including some that had been existing for decades without having been discovered. That is what they said about what Opus 4.6 could do. For Mythos, they said the same thing, they just replaced the word 500 with thousands. But when Opus 4.6 came out, there was no, oh my God, they have found many hundreds of zero-day exploits, many of which had been around for decades, because they didn't push that marketing button. No one particularly cared about it. I went back on my podcast and showed multiple papers. This has been a huge concern, and it's a real concern, by the way, right? Part of what slows down cracking, right, breaking into systems, is the fact that it's annoying and hard, and LLMs have made it easier. GPT-4 was good at finding exploits, right? This was a big deal. They were like, GPT-3.5 wasn't great at it, GPT-4 is. And then as we got the more recent models, they've been much better at writing code to exploit them, because we had better agents for it, better able to produce multi-step software, and so they can better build software to exploit them. This is a real issue, but it's not new with Mythos, right? But Mythos was presented as if some Rubicon had been passed. And there were a couple things I noticed right off the bat. One, they made the mistake of listing a bunch of the vulnerabilities they had found, to try to brag: look at this thing in FreeBSD, look at this thing in FFmpeg or whatever. They showed all these exploits they found. What they didn't count on is a lot of security researchers saying, well, wait a second, why don't I aim a much smaller, cheaper model at that same source code and say, can you find any vulnerabilities? They could find the same ones. So the evidence that it's finding vulnerabilities better, we don't have any way of knowing that's true. And if anything, we're actually getting a lot of reports that they were paying big bounties to security researchers: I'm gonna give you access to Mythos, I'm gonna pay you for any bugs you can report that you found with it. So they had security researchers just, who knows how many false positives were coming out of that.
And then on the exploitation side, we only really have one study. It comes from AISI, who I do not trust, but it's the only independent study. The fact that they were given access itself should make us maybe a little bit suspect, but it basically just showed normal progression. No massive leap; model by model gets a little bit better on some of these tests and benchmarks, and Mythos has no out-of-scale leap. On some it's about the same, on some it's a little bit better. And yet it got covered as if we had just turned on, you know, WOPR from the movie WarGames. Like we had some new entity that was, on its own, undermining security. And I do not think that. I think that was highly credulous coverage of what almost certainly is just a standard, slight, jagged move forward on these various capabilities that we've been seeing for the last three years.
Speaker 1:
[41:10] Also, when you said that, the difference between Opus 4.6 and Mythos, 500 versus thousands, makes me ask the very simple question of, did they look as hard? To your point about the security researchers. They didn't. Like, did they spend as much time? Probably not. So they probably could have found them. Also, by the way, I immediately was looking it up: the AI Safety Institute is, of course, heavily linked to effective altruism.
Speaker 2:
[41:33] Can I say why I'm upset at AISI? I've talked about them a lot. Two weeks ago, I did a, I don't know when this is coming out, but I did a podcast in, whenever, March, where I looked at this report, and mainly I looked at the Guardian's coverage of this report done by AISI, and it was just the most inane thing. The headline was, Massive Increase in AI Scheming is Detected. And they had a chart.
Speaker 1:
[41:53] Jesus Christ.
Speaker 2:
[41:54] And they had a chart. And the bad line went up. It went up in like January, and it goes up. And if you read this article about this study, they're like, something's going on, scheming has been increasing rapidly recently. And they gave some examples of it or whatever. And so I look at this. It's like, well, I want to look at it. What is going on here? So I look at this chart. What are they charting? Oh, they're charting tweets per day, tweets they've detected about AI doing things that you didn't want it to do. And I said, huh. So when does this line start going up? The week that OpenClaw was released to the public, and everyone just started building their own bad agents and then tweeting about how bad they were. And you know what word was not mentioned in that article? OpenClaw. Even though the examples they were giving were OpenClaw examples. They just said, scheming just started rising, I guess AI is becoming sentient. And all they were measuring was people-
Speaker 1:
[42:47] Multiple tweets paraphrasing the same viral story to use their own fucking language.
Speaker 2:
[42:53] And then I looked at the biggest spike. I was like, well, this day in February on this chart had the biggest spike. It was like, oh, there was this one tweet about OpenClaw, like erasing someone's e-mails, and then it got re-tweeted. It went super viral. I was like, okay, great. The real headline of this article, letting people write their own agents leads to terrible agents. That's it. But the whole- Anyway, so that's AISI.
Speaker 1:
[43:18] I'm looking at the tweets as well. One of them is from a 47-follower account with an AI avatar, called underscore underscore just underscore underscore Lisa. And it's, this is really bad, Opus is editing files and making up reasons, it's deleting adult content. So, hallucinations.
Speaker 2:
[43:35] And also, Opus is not doing that. The stupid OpenClaw program you wrote, that's prompting Opus and then taking action on your computer based on what it says, is deleting your files. The program you wrote, that you gave access to your files and just said, whatever we get from this prompt, execute it, is erasing your files. Opus can't do anything; it can produce tokens. But here's the other point I want to make about Mythos that I don't think is being made. And it reminds me of the Sherlock Holmes story of the dog that didn't bark, right? Where the actual piece of evidence that mattered is not what you heard but what you didn't hear. This is what I think the real story here is: you did not hear Dario Amodei, in the lead-up to the Mythos release, in the last year, let's say, or the last two years, you did not hear him talking about, what we're working on and why AI is important is because we're going to be able to find vulnerabilities in software that have been long hidden, we're going to build the ultimate cybersecurity machine. This was not discussed. That's old-fashioned stuff. That's boring stuff. That's stuff we were worried about even back in the GPT-2 days. What we've been hearing about steadily was, jobs are going to be automated. We're going to have whole creative industries wiped out. We might have sentience coming, and at the very least, AGI and these massive disruptions. This is what they've been focusing on again and again. Then their biggest, best model, their newest, greatest, bestest model that they trained forever and used all the electricity on, what did they say about it? None of those things. They didn't talk about any of the things they said the key to AI was, the things they were afraid of, the things they're excited about. Instead, they went back and talked about a boring, parochial old feature that has been an issue that nerdy security researchers have been talking about for a half decade now. That to me is, if I was an investor, I would say, take off the Greek-helmet cosplay of Mythos coming to destroy us. Hold on a second. Is this better at automating jobs? Is this better at producing code? Is this AGI? Why are we talking about finding bugs? We were worried about that with GPT-4. That's a problem, but that's not something new. Uh-oh, something must be going on. You just put a lot of money into a new model, and the best thing you could find to emphasize was, it's good at finding bugs. I think that is a problem. It's what they didn't say about this model. They would have much, much, much rather been able to brag, this model is now much better at any of those things that they have been saying are the key to the AI future. And you didn't hear them talk much at all about any of those.
Speaker 1:
[46:05] Yeah, and that's the thing. If it was so powerful, like, here's the thing. I don't know what would make me convinced that LLMs were the future, but a step toward it would be, we typed create a Slack competitor, which they claim they did once, and then didn't show it and refused to. And they said, oh, it worked autonomously for 30 hours, but then wouldn't talk about it. If they were like, we created a Slack clone, here it is. And it was bug-free. Like, if it actually just worked, and they were like, we have done this. Because theoretically, if this SaaSpocalypse story was true, which it's not, the AI is going to replace all software. If they actually did that, because someone from Anthropic just left the board of Figma, and they created a Figma clone, and the stock went down because the market's run by toddlers, if they were like, we've released a clone of Microsoft Word, we've done Anthropic Word, and we now sell that as part of our subscription, that would actually be quite something. But the thing is, they're not. It gets back to the old talking point of, if they made AGI, why would they sell it? Wouldn't it be a massive competitive advantage to keep it? And I think you're right. I think maybe Mythos is not as powerful as they say, and they've just had to dress it up. But it gets back to the thing of the directionally true media coverage. It's like, well, this is scary, right? I mean, that system card's like 180 pages long. I haven't got all day. I have to write three 100-word blogs a week. I couldn't possibly spend time reading this.
Speaker 2:
[47:31] We need so much more skepticism. We need so much more skepticism, right? I mean, this is why, again, the most skeptical, well, we're not skeptics, but, I call it the East Coast computer scientists. People who are technically minded but not near Silicon Valley, so not in that world. It's very hard to be a professor in a world where there's hundreds of millions of dollars being handed around, and they try to ignore it. But the East Coast computer scientists are all baffled. You talk to any East Coast computer scientist, they're all baffled by how oftentimes there's claims that are just not true or wildly exaggerated. Why are we so credulous? I mean, it'd be one thing if it was a government agency we didn't realize was trying to, you know, protect the fact that there are UFOs, and they're just straight up lying, and we'd never encountered that before. We're like, I didn't realize that. But no, it's a business, right? And the credulity with which we're taking these claims. Like Mythos: I think the most important story there is, yeah, this is another example of what I wrote last summer, that AI's hit a bit of a wall, in the sense that all of the improvements that have really come over the last two years have almost all been either in post-training or, more importantly, in the harnesses that you build. It comes in the software you're building to take advantage of it.
Speaker 1:
[48:42] What is a harness? I've seen this word used a lot. I think it's good for me and the listeners to hear the exact definition.
Speaker 2:
[48:48] I think of it as like a computer program that can do stuff. You can talk to it, it can do stuff, and it'll prompt or talk to an LLM as like its digital brain. So the harness might actually be able to touch your file system, write to files, compile code, move things around, but to figure out what actions to take, it will also then prompt an LLM and say, okay, what should I do next? And you can put it on different-
Speaker 1:
Is that just a wrapper?
Speaker 2:
Yeah, it's a wrapper.
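A minimal sketch of that wrapper idea, in Python, with every name hypothetical: the harness is the part that owns the real capabilities (file access, a step budget, hand-coded guardrails), and it consults the LLM only for what to try next.

```python
# Sketch of a harness/wrapper: a plain program that can actually touch
# the file system, consulting an LLM as its "digital brain" for the next
# action. The guardrails are old-fashioned hand-coded rules, no ML.
# call_llm and the READ/WRITE/DONE action grammar are hypothetical.

from pathlib import Path

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any chat-completion API")

def harness(task: str, workdir: Path, max_steps: int = 20) -> None:
    history = f"Task: {task}\n"
    for _ in range(max_steps):  # hand-coded step budget
        reply = call_llm(history + "Reply with one action: READ <file>, WRITE <file> <text>, or DONE")
        verb, _, rest = reply.strip().partition(" ")
        if verb == "DONE":
            return
        if verb == "READ":
            # The harness, not the model, reads the disk and feeds it back.
            history += (workdir / rest).read_text() + "\n"
        elif verb == "WRITE":
            name, _, text = rest.partition(" ")
            (workdir / name).write_text(text)  # the harness takes the action
            history += f"wrote {name}\n"
        else:
            history += f"Invalid action: {reply}\n"  # plain if-then rule
```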
Speaker 1:
[49:13] So this-
Speaker 2:
[49:15] But that's where all the progress has come from. All of the progress in coding agents over about the last year, especially starting this fall, has come from better wrappers, better harnesses. It's all, let's build better, just hand-coded, no machine learning, no intelligence, no Skynet here, just hand-coded programs that will call LLMs. Let's just keep tuning and tweaking those to be better and better. Of course, the programmers building those particular programs are building them to do their type of work, so it's a field they understand really well. They can really just sit there and twist and tune. Also, programmers are very adaptable. They like tools and they'll adapt around the weaknesses. It's like a best-case scenario. But this is another indication that we're not getting these fundamental giant leaps in the capabilities of the digital brains. It's either some benchmarking, like, we tuned it to do better on a particular benchmark, or we built better programs around it. So when you put in the money that they put into Mythos, and if really the best thing you had to emphasize when it was done is, we have a cybersecurity benchmark where Opus 4.6 was at 66.7 and this is at 83.1, that isn't necessarily going to justify what's going on. Or that AISI has this one thing, there's only one thing in there where they see a leap from Mythos, at a particular contrived security scenario they came up with. And this big leap that got them all worried was that Opus 4.6 could, on average, complete 16 out of 32 steps in this challenge, and Mythos, on average, could do 22 steps out of 32. Wow. That's hundreds and hundreds of millions of dollars of training electricity or whatever. I think that's an issue.
Speaker 1:
[50:49] I just, I think that, and maybe this is a simplistic point, I don't think they know what they're doing at this point. Like I don't get the sense that Anthropic or even OpenAI has a strategy because today, as we're speaking, this will be out next Wednesday, but they released Anthropic Design. The thing I mentioned, the Figma clone, it's like, why are you fucking cloning Figma? What are you doing?
Speaker 2:
[51:12] I thought you were going to automate the economy.
Speaker 1:
[51:15] Yeah, I thought you were going to replace, so you've made a Figma clone. Like we heard the rumors last year that they were going to do a product, OpenAI was going to do a productivity suite. It's like, why? It's like they're doing everything they can to ignore the core problem, which is the core technology is not going anywhere. Because Mythos appears to be, they called it a step change, but that's a nice way of saying incremental improvement.
Speaker 2:
[51:42] That's a hundred percent correct. Yeah, and let me tell you why I would be worried if I was them. Here's the worrisome thing about Mythos, right? Again, they talked about these vulnerabilities hidden for decades that Mythos found or what have you, and multiple different independent security teams replicated it: they were able to find most of those vulnerabilities using three-to-five-billion-parameter open-weight models. So put that in perspective, right? A model like Mythos is going to have hundreds of billions, if not a trillion, parameters. And they used a three-to-five-billion-parameter model off the shelf. You could run this model on a chip inside your-
Speaker 1:
[52:20] Sorry, 10 trillion.
Speaker 2:
[52:22] 10 trillion. Oh, okay.
Speaker 1:
[52:23] 10 trillion parameter.
Speaker 2:
[52:24] That's crazy.
Speaker 1:
[52:26] Love the number, bro.
Speaker 2:
[52:27] Is that true?
Speaker 1:
[52:28] Yeah, that's what it says.
Speaker 2:
[52:29] Oh my God. 10 trillion parameters is insane. That better be either gaming the stock market and creating billions of dollars a day in fancy option returns, or changing lead into gold. Because running something that has 10 trillion parameters to do almost anything else is like launching ourselves into space and landing every time we want to do something. That seems so incredibly expensive. The real fear then is, well, wait a second, what if they could do most of this stuff with a free, cheap model that I could just run on a machine at home? That's what keeps, I think, Dario Amodei up at night. That's what keeps Sam Altman up at night. It's the future. Look, I've been pitching this, right? I think the useful, and the only ethical and sustainable, future for AI is what I call distributed AGI. I think it's just what the future is going to be: you have specialized applications for different things. Oh, we want to do this thing over here, so we built something that has some AI in it. Maybe it has an LLM, or it's a modular architecture with a billion-parameter model and a world model in there, and it's really good at doing this one thing, and it's small and it mainly runs on-chip, and now this program can do this thing I used to have to do. And you multiply that across 10,000 different use cases, and you're like, oh, we kind of have AGI, right? There are all these different things that have AI tools, and they do pretty well. That's probably the most plausible future. It's a future I really like for a lot of reasons. There'll be a lot of things we can't make progress on and a lot of things we will, but it's a much more heterogeneous future. There's no giant HAL 9000 brain. It's economically more interesting and diverse. It doesn't have all the sustainability issues. That has to be the future. But the problem with that future, if you're Sam Altman or Dario Amodei, is that their entire moat is needing the 10 trillion parameters. They want that to be the key to the AI future, because that moat is something no one can cross. And if that's not the moat, if it's just, oh, if I want to build a poker-playing AI that's really good, I just need people who are good at poker to spend a couple of years and figure out a cool custom system, and that thing now does well, if that's the future, you don't need OpenAI and you don't need Anthropic. And I think that probably might be the future, and I think that's terrifying to them, and they're trying to race to an IPO, and they're marketing out of their butts. Like, what can we do to keep things going so at least we can get our stock on the market? That's what would keep me up at night if I was them: the actual future might have a lot of AI in it, and it's not going to be nearly as sexy as they're hoping.
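For a rough sense of the scale gap Cal is describing, here is a back-of-the-envelope sketch of what it takes just to hold these models in memory. The numbers are illustrative, using the reported (and, as Ed notes below, unconfirmed) 10-trillion figure, not anything Anthropic has published.

```python
def weights_gb(params: float, bytes_per_param: float = 2) -> float:
    """Approximate memory needed to hold model weights at 16-bit precision."""
    return params * bytes_per_param / 1e9

# A 3-5B open-weight model fits on a laptop or phone-class chip.
print(weights_gb(3e9))    # ~6 GB
print(weights_gb(5e9))    # ~10 GB

# A 10-trillion-parameter model (reported, unconfirmed) needs a data center.
print(weights_gb(10e12))  # ~20,000 GB, i.e. ~20 TB of weights alone
```

Even with aggressive quantization, the big model is three-plus orders of magnitude heavier, which is the whole economics of the distributed-AGI argument: if the small model finds most of the same vulnerabilities, the giant one is hard to justify.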
Speaker 1:
[55:01] Also, by the way, that 10 trillion number, I can't source it to Anthropic. I've seen it reported in multiple places. This is a problem.
Speaker 2:
[55:09] They never talk about it.
Speaker 1:
[55:11] We have an issue with news right now where mythology just spreads, which is ironic considering the name. But the other thing as well: hundreds of billions, a trillion parameters, you're just using a nuke to kill a single gopher. You're just like, we're going to throw everything we have at it. To the point that, I don't know if you've been seeing it, Anthropic has had a ton of trouble keeping its service online, and they're making the models dumber. It just feels like we're in this weird hysterical moment where no one knows why they're doing this, but everyone's ready to accept whatever anyone says. We're all doing this insane thing, and we're just going to repeat whatever confirms the bias and makes us look less dumb, and the more excited we are, the better.
Speaker 2:
[55:58] I think the frontier models are like F1 cars, and the equivalent of points on the F1 circuit is your position on the benchmark leaderboards. To do this, you build these giant models and you spend all this money on electricity, and they're so big they're not even economically viable for people to use, which might really be what's going on with Mythos: we have to make this seem super premium, because otherwise people aren't going to accept being charged $5,000 a month. And just like with Red Bull or Ferrari, your F1 car doing well on that leaderboard just lets people know this company builds good cars, and then you can sell your normal cars. I think that's a lot of what's going on here. They want to be high on that leaderboard because it means, we know how to do AI, we AI smart, even though the future of actual consumer-deployed products is going to be much more like a Honda Odyssey minivan than a top Formula One car.
Speaker 1:
[56:52] Well, Cal, it's been an absolute pleasure having you as ever. Where can people find you?
Speaker 2:
[56:57] You can find me at calnewport.com. My podcast is Deep Questions. The Thursday episodes are all AI reality checks, where I take a fun story. Actually, Ed's coming up. He may have already been on it by the time this comes out, or maybe it's the day after this comes out, so now you have to check it out and get a double dose. You bring this out of me, Ed, by the way. You bring out my ornery side. I'm normally the very picture of the staid professor, the New Yorker writer, just like, well, on the one hand, on the other. You bring this out of me. I love it.
Speaker 1:
[57:29] The thing is, you're only critical of things that need it. You're still willing to humor these things as long as there's something to humor, and that's why I like having you on, because people claim I'm just a hater, so we've got to have people on for a little balance. But thank you for joining me. Thank you, everyone, for listening. You have a monologue coming up as well on Friday. Thank you all. Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski.com, mattosowski.com. You can email me at ez at betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r slash Better Offline to check out our Reddit. Thank you so much for listening.
Speaker 3:
[58:29] Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.