transcript
Speaker 1:
[00:00] It's time for Intelligent Machines. Paris Martineau is here. Jeff Jarvis is here. We're going to talk about the lines. They're too damn long. We're also going to talk to Ian Bogost. He is a contributing writer at The Atlantic and a professor at Washington University in St. Louis. And he says, pay attention to the small stuff. That's the good stuff. We'll talk about that and a lot of AI news next on Intelligent Machines. Podcasts you love.
Speaker 2:
[00:28] From people you trust.
Speaker 1:
[00:30] This is TWiT. This is Intelligent Machines with Jeff Jarvis and Paris Martineau. Episode 867, recorded Wednesday, April 22nd, 2026. The ketchup effect. It's time for Intelligent Machines, the show where we cover AI, robotics, and the smart little doodads and gewgaws around your house. I think that's the pronunciation. Gewgaws. Is that right? I am being corrected now by two professional writers. Paris Martineau is here, investigative journalist at Consumer Reports. Hello, Paris.
Speaker 2:
[01:08] Hello. We are both professional writers and professional opinion havers.
Speaker 1:
[01:13] Yes. No, as am I. I apologize because I realized last week I shouted, you're wrong, which is not conducive to good conversation.
Speaker 3:
[01:24] We'll have that conversation again.
Speaker 2:
[01:25] I was going to say, last week, I don't think that's the only time you've shouted that.
Speaker 1:
[01:29] I've probably done it once or twice. I'm a little biased when it comes to AI, but anyway, we'll get to that. Also with us, Mr. Jeff Jarvis. He is the emeritus professor of Journalistic Innovation at the Craig Newmark Graduate School. Craig, Craig, Craig Newmark. Also an adjunct or fellow or something or other at Montclair State University. And he is a professor of something or other at SUNY Stony Brook, and the author of The Gutenberg Parenthesis. Actually, one of your books, Magazine, was edited by our guest. Coming up, we're going to talk to Ian Bogost. Bogost, Bogost, Bogost, right?
Speaker 2:
[02:07] By the time he's here, we'll have figured it out.
Speaker 1:
[02:10] Bogost, Claude says it's Bogost, and he edited a magazine. But Ian is a really interesting fellow, game designer, contributing writer at the Atlantic. He's also a professor.
Speaker 3:
[02:22] Three things.
Speaker 1:
[02:23] Of three things, three different departments at Washington University in St. Louis. But he's really a fascinating guy who is a fan of AI, but who says that the friction is what's important in life. We'll talk about friction with Ian Bogost. He's not going to join us until about the second hour of the show. First hour, I have a surprise. I normally end the show, as we always do, with our picks of the week. I'm going to start the show with my pick of the week. Are you ready?
Speaker 3:
[02:53] Well, be that way, because you're in charge.
Speaker 1:
[02:56] So I'm in charge. I get to show it.
Speaker 2:
[02:56] We have no say.
Speaker 3:
[02:57] No, it's not a democracy.
Speaker 1:
[02:58] My pick of the week this week is a little website you see right here called Damn Lines.
Speaker 2:
[03:03] Oh, I know this website.
Speaker 1:
[03:04] Do you know this website?
Speaker 2:
[03:05] I do.
Speaker 1:
[03:06] I found out about it because one of the lines is my son Salt Hank's, which is closed right now. So there's no line.
Speaker 3:
[03:12] He's run out.
Speaker 2:
[03:13] And even if it was open, it would be sold out of sandwiches.
Speaker 1:
[03:16] Yeah, that's right. You have to go earlier. But this site is really cool because it shows you when the next live stream is going to begin. It shows you when the peak is. Look at that. This is what happens. So it was a 38-minute wait, and then he sold out and there's no wait anymore. And you can see where the peaks are. This is a very cool site. Right now, it's just a few restaurants in New York City. Tomi Jazz, John's of Bleecker Street, which conveniently is right next door to Salt Hank's. So one window does it all. And a Salt Hank clone called Breakfast by Salt's Cure. But our guest today is...
Speaker 2:
[03:51] Okay, it is not a Salt Hank clone, but that's beside the point.
Speaker 1:
[03:53] It's got salt in the name. That's all that matters. That's all I care about. The guest is the creator of this site. Lucas is on the line. I said I wouldn't say anything about it. He's a little bit anonymous here. Lucas, it's great to see you.
Speaker 4:
[04:08] I was. Thanks.
Speaker 1:
[04:09] I love this idea. First of all, I gotta ask you, cause this is an AI show.
Speaker 4:
[04:13] Yeah.
Speaker 1:
[04:13] Vibe coded.
Speaker 4:
[04:14] Oh yeah.
Speaker 1:
[04:15] Yeah.
Speaker 4:
[04:16] Me and Claude, Claude was my model of choice for this one.
Speaker 1:
[04:20] So did you use the design tool, the new design tool, or did you do it all from...
Speaker 4:
[04:24] No, I mean, every engineer, and even those just starting to pick up engineering, has their own preferences and flavors for how they interact with AI. I like to audit everything it does and see as it generates the files and whatnot.
Speaker 1:
[04:40] You actually read the code?
Speaker 4:
[04:42] Oh, you glance over it. You scan it. You make sure you know what's happening. You can catch certain gotchas in it. Versus other friends of mine who will have five agents at once, or ten agents at once. That's a little bit riskier, because some can go a little bit AWOL, and then it's harder to track.
Speaker 1:
[05:03] Yeah. Yeah. That's fun. I mean, look, to me, this is the best video game ever invented. I play it every day and for hours at a time. You did something, though, that is actually not AI, because in order to get these videos of the line... what did you do to get these pictures?
Speaker 4:
[05:24] The fun thing about this project, like I've been, I guess I can give you a history of like where the whole concept came from.
Speaker 1:
[05:29] Yeah, please do. Yeah.
Speaker 4:
[05:30] Yeah. Okay. So I'll start off with that. Years ago, when I was in my last year university in Canada, at Queen's University.
Speaker 1:
[05:37] Good school.
Speaker 4:
[05:38] One of my, yeah, it was a great school. The issue with Queen's University is there are only four bars on campus, and there's about 25,000 students.
Speaker 1:
[05:47] Yeah.
Speaker 4:
[05:48] About 10 PM-ish, the lines are a kilometer long. And so for all four years of university, the biggest problem and annoyance across all of campus was: when is the optimal time to stop drinking beer at your house and start drinking beer at the bar? Because you don't want to go too early, when no one's there, and you don't want to go too late, when you have to wait in line. And it was always a guessing game. Different nights, it would be a different time. You would rely on some friend who went early to give you an update. So my last year, I was toying around with the idea. I had a friend move into an apartment that had a good view of one of the bars. I put a webcam in the window. It's an IP cam, and an IP cam is basically just a camera that lets you route the footage over to an IP address, is the gist of it. And then I just put it on a website. It was a simple, static front end. It just had the live stream viewer. And then I put that site up.
Speaker 1:
[06:43] Did you pay the people whose window it was in?
Speaker 4:
[06:46] Yeah, and it was on campus. And it's like the going rate on campus was like $50 Canadian a month. So nothing crazy. Like the New York rate now is a little bit higher.
Speaker 3:
[06:56] Hey, yo, that window is worth more now.
Speaker 1:
[06:58] Real estate.
Speaker 2:
[06:59] Hey, if you have an apartment across from John's, that's probably the only window those apartments got.
Speaker 4:
[07:05] Yeah, well, one of two or one of three in the better cases. But the cool thing about it, when I was in university, was that I sent the URL for the website to a few group chats. And it was like a Facebook moment where, within hours, it exploded across campus. It just went around. There was no intentional marketing behind it other than sharing.
Speaker 3:
[07:25] Well, Canadians are nice. So I get that. I want to hear what happened when you went knocking as a stranger on the door on Bleecker Street saying I want to put a camera in your window. What was that interchange like?
Speaker 1:
[07:38] Yeah.
Speaker 4:
[07:38] Well, it's funny. I tried to go door-knocking first, but every residential building's door security gets in your way, because you need to get buzzed in.
Speaker 1:
[07:45] You can't even get upstairs.
Speaker 4:
[07:47] And door knocking is hard because it's a linear time scale. One door takes two minutes, et cetera. It's not that easy. So what I did instead was, if you go on StreetEasy, the New York residential listings website, you can find listings for previous apartments and whatnot. And hopefully the pictures are good enough that you can identify the view from the windows. I just found the units that had a good view of the street; you find landmarks like a tree, or the brick building across the road. And I just wrote letters to them. I printed like 100 identical letters at FedEx, put them all in envelopes, and wrote the addresses on them.
Speaker 1:
[08:22] Snail mail? You put a...
Speaker 4:
[08:25] Yeah, I just mailed it out. That's really the only way to go about doing it.
Speaker 1:
[08:29] Yeah, makes sense.
Speaker 4:
[08:31] It's cheap, it's scalable; it took no more than a few hours to send out 100. And from that, I got my first initial four people who were interested, who I went with. The inbound leads from that were probably around 10 or 12. And that way it's great, because you can choose which apartment is optimal, who has the optimal window. Because the fun thing about this project, to answer your question from before, is that it's both an engineering problem and an operations problem, a hearts-and-minds problem. And the engineering part is the easiest. Putting a website out and running a computer vision model, that's really the easiest part, because it just takes a little bit of a learning curve, and with AI now, that's really fast. The operations problem is something you kind of need to experiment with: finding out how to get those interested to help support the project. And there's a high degree of trust involved between myself and these tenants, because you're putting a camera in their window. And the thing is, cameras have a microphone. So there's a high degree of trust: I tell them the audio is disabled, and they need to trust that. Likewise, it's connected to their router.
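Lucas mentions running a computer vision model over the feed, and the site's actual code isn't shown. As a purely hypothetical sketch of the counting step: whatever detector you run, turning its raw output into a "people in line" number usually reduces to filtering detections by confidence and by the region of the frame where the line forms (all names and the detection format below are assumptions for illustration):

```python
# Hypothetical sketch: turn raw person-detector output into a line count.
# `detections` is assumed to be a list of dicts like
# {"label": "person", "conf": 0.91, "x": 42} from an off-the-shelf model.

def count_people_in_line(detections, min_conf=0.5, zone_x=(0, 100)):
    """Count confident 'person' detections whose x-position falls inside
    the horizontal band of the frame where the line forms."""
    lo, hi = zone_x
    return sum(
        1
        for d in detections
        if d["label"] == "person"
        and d["conf"] >= min_conf
        and lo <= d["x"] <= hi
    )
```

The interesting tuning in practice is the zone and the confidence threshold, which is exactly the kind of per-location calibration an operations-heavy project like this would need.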
Speaker 1:
[09:43] It looks like you also blur everything but that restaurant.
Speaker 4:
[09:48] Yeah. Yeah, that's intentional too. No one should be seeing into a person's window.
Speaker 3:
[09:54] How did you pick these as your targets?
Speaker 4:
[09:57] I lived around this. I actually just moved apartments last week, but I used to live on West 4th and 6th Ave. So these are all my favorite places, specifically Breakfast by Salt's Cure, Best Pancakes in New York.
Speaker 2:
[10:08] They are pretty good.
Speaker 4:
[10:09] Yeah. Every weekend I would go there. It's about a 15-minute walk for me, and it was a question of, okay, should I make the commitment, get out of my apartment, and go there? Is the line going to be 10 minutes or 60 minutes?
Speaker 1:
[10:22] New Yorkers seem to like lines. Henry says-
Speaker 2:
[10:26] They don't like lines.
Speaker 3:
[10:27] No, we don't like lines. We just know that things are worth the effort.
Speaker 2:
[10:30] There are some things that you know are worth the effort, and it depends on the speed of the line; that's a very specific aspect of it. There's a Breakfast by Salt's here in Brooklyn that also has a line, and I've been there quite a few times, because the pancakes are fine.
Speaker 1:
[10:44] Are you lobbying Paris to have a damn lines camera placed near that?
Speaker 2:
[10:49] They're fine, but they're extremely good when you take into account the fact that they also offer gluten-free pancakes. So with my friends that are gluten-free, they really insist on going there.
Speaker 3:
[10:58] Oh, how Brooklyn can you get?
Speaker 2:
[11:01] Yeah, the celiacs are all among them.
Speaker 1:
[11:02] So Lucas, you have what? How many right now? How many restaurants are on that?
Speaker 4:
[11:05] I got five right now. I used to have Katz's Deli.
Speaker 1:
[11:09] Oh yeah, I love Katz's.
Speaker 4:
[11:10] I had to take the Katz's Deli one down because the building's window type didn't fit.
Speaker 1:
[11:17] I got into Katz's once because we were waiting in line for a table and a guy had a heart attack and was taken out in an ambulance. I got his table. So that was... I don't know if you could incorporate that into damnlines.com somehow.
Speaker 4:
[11:29] That's one way to do it. Yeah.
Speaker 3:
[11:31] If you see an ambulance in front of the place, you know where to go now.
Speaker 4:
[11:34] Run. Yeah, the fun thing about this project is that it has instant product-market fit, because, and I put this on the website, no one likes waiting in a damn line. They're annoying. In New York, they're all over the place. And if you can just save people time, that's the instant utility. Whether or not that translates into a revenue model, who knows? Right now, this is just me funding it on my Amex.
Speaker 3:
[12:03] So we got to ask the standard startup question, business model?
Speaker 4:
[12:07] I don't know. I put a contribution button there to see if anyone wants to contribute. I'm not calling it donations, because there's no tax deduction, and I figured, okay, there's probably a legal gray area there. So I call them contributions. No one's contributed yet. So I don't know.
Speaker 1:
[12:24] No one?
Speaker 3:
[12:26] Yo, people.
Speaker 1:
[12:27] There's a certain fellow whose restaurant is on here, Salt Hank's, who said, you know, Surfline, which does the same thing with the surf at beaches, makes big money charging surfers for the latest surf information. So he thinks there's a business model here.
Speaker 4:
[12:46] So I think that's got legs. The challenge is that it's kind of a chicken-and-egg problem, because in order to charge with a consumer business model, you kind of need a lot of critical mass.
Speaker 1:
[12:57] Yeah.
Speaker 4:
[12:57] To make the utility right. Five locations isn't really enough, unless you live in the West Village, where you can use the West Village ones. But yeah, that's definitely an option. I mean, I have no intention to really make this into the next big money-making startup. This is just to serve the people of New York with utility. As long as it's fun. And it gives me utility, too. So if this covers its costs and nets zero, I am happy. That is all I need, because I cannot subsidize it on my Amex for too much longer.
Speaker 1:
[13:30] Did Henry give you a free sandwich at least?
Speaker 4:
[13:33] I don't know. I enjoy paying the, how much is it?
Speaker 1:
[13:36] $32. I am going to talk to Henry. That young man is going to give you a free sandwich. Do restaurateurs mind these cameras, or do they like them?
Speaker 4:
[13:45] So of the four, and it's not me that's interviewed them. I've only spoken to Henry, because Henry actually reached out to me. But The New York Times did a piece on this and interviewed them, and so did the New York Post. Of the three, the only one who had concerns was John's of Bleecker, and their concerns were valid, because they were saying that the wait time estimates were too long for them. Because the keyword here is estimates. I can only estimate from traffic in, traffic out, and dwell time. And they were saying it was a risk of turning away business, which is fair. And so for-
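Lucas doesn't show how his estimate works, but the "traffic in, traffic out, dwell time" framing he describes maps onto a textbook Little's-law-style calculation: expected wait is roughly the current queue length divided by the rate at which people leave the front of the line. A minimal, hypothetical sketch (the function name and inputs are assumptions, not his code):

```python
def estimate_wait_minutes(people_in_line, departures_per_min):
    """Little's-law-style wait estimate: W ~ L / throughput.

    `people_in_line` would come from the camera's line count;
    `departures_per_min` could be derived from successive counts
    (traffic in minus the growth of the line).
    """
    if people_in_line <= 0:
        return 0.0  # empty line, no wait
    if departures_per_min <= 0:
        return float("inf")  # line isn't moving; no finite estimate
    return people_in_line / departures_per_min
```

This also illustrates why John's of Bleecker's complaint is plausible: if the measured departure rate runs low for any reason (occlusions, people leaving the line), the estimate skews long, which is why a per-restaurant correction factor is a reasonable patch.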
Speaker 3:
[14:19] They're also jealous of the longer lines next door.
Speaker 4:
[14:21] Yeah, next door. Yeah. And that is a fair concern. I do not want a business to be hurt by making people not go to the location. And so for John's, I did a special case where I made the wait time estimate shorter for them.
Speaker 3:
[14:39] What's the hardware you have in the apartment windows?
Speaker 4:
[14:42] They're Reolink cameras. The ones I've been going with are like 150 bucks on Amazon.
Speaker 3:
[14:48] Cheap.
Speaker 4:
[14:50] Yeah. Sorry?
Speaker 3:
[14:52] What is the camera tied to to get connectivity?
Speaker 1:
[14:55] Do you have to use the routers, the people's routers or?
Speaker 4:
[14:58] No, no, no. It's all Wi-Fi. Well, most are Wi-Fi. It has an ethernet option, but for some of the apartments, specifically two of them, the router is significantly far away. And so when I was trying to send 4K, like 30 FPS footage...
Speaker 1:
[15:13] It is good quality. It's really good quality.
Speaker 4:
[15:16] It was, because you'll see sometimes, and I just saw it a minute ago, and this is neat too: one of the snapshots had a gray block for half of it. What that actually was, was just packet loss over the Internet for that snapshot. Only half of it got sent through, and the other half just got lost in the ether of the Internet somewhere. To combat that, yes, ethernet is always the best. So I bought a few Wi-Fi extenders for a few of these tenants; you plug the extender in right next to the camera, and then you plug the ethernet from the camera into the extender.
Speaker 3:
[15:57] [unintelligible]
Speaker 1:
[16:10] [unintelligible]
Speaker 3:
[16:12] Exactly. So it became incredibly popular as people came up to the camera to watch people lifting their garments for the camera, and there were sites that started up that collected the best bits as well.
Speaker 1:
[16:26] Oh, Paris.
Speaker 3:
[16:27] So I think you need performance in front of the restaurants.
Speaker 4:
[16:30] Yeah, yeah.
Speaker 1:
[16:31] Get some mimes, yeah. Yeah, yeah.
Speaker 4:
[16:34] It's funny, I had one of the tenants whose apartment I installed a camera in, and she said that when she told her parents about the offer she got for a camera, her parents were fully supportive, because they were like, oh, the camera is going to give increased security for your building. Now there's footage outside. I'm like, yeah, that's a byproduct.
Speaker 1:
[16:53] Side benefit, yeah. If there's a murder on the street out front, you'll have footage.
Speaker 4:
[16:58] Yeah. Well, I only persist the actual video files for about 10 minutes. It's not kept into perpetuity.
Speaker 1:
[17:05] You don't want, you know, petabytes of data on your hard drive. Lucas, somebody told me about this, and I sent it to Hank. He said, oh yeah, it's blowing up, I know all about it. And I said, well, can we talk to Lucas on the show? Because I think it's really cool. damnlines.com. Lucas, a pleasure meeting you. Thank you. I really appreciate you joining us today.
Speaker 2:
[17:28] Thank you, Lucas.
Speaker 1:
[17:29] Damn Lines. Take care. That's fun. All right. I'll tell you guys offline who he is.
Speaker 2:
[17:35] Oh, I've already found it.
Speaker 3:
[17:36] I saw it, too.
Speaker 1:
[17:36] What? How did you find out?
Speaker 3:
[17:39] His full name was on the Zoom.
Speaker 1:
[17:41] And you did it. God, you guys are terrible.
Speaker 3:
[17:43] Well, we're reporters.
Speaker 2:
[17:45] Such is the nature of our job. I was literally in the middle of texting Jeff about something related to this as you asked that question.
Speaker 1:
[17:53] That's why I didn't put his last name on the air.
Speaker 3:
[17:57] Wow.
Speaker 1:
[17:58] Man. All right. Let's do an ad and then we'll continue. You guys are bad. You're bad. Our show today is brought to you by Webroot, a brand new sponsor. We welcome them to Intelligent Machines. I remember very well the day, not so long ago, well, it was kind of long ago, when I was so fed up with Norton. I was doing The Lab with Leo, a TV show, up in Vancouver, and I took a box of Norton and I threw it on the ground and I stomped on it. I think there's still video on YouTube of me stomping on it. Well, it hasn't gotten any better. If your computer feels sluggish, if it heats up when you open a few tabs, if it sounds like it's preparing for liftoff every time it runs, don't assume that the hardware is the issue. It's almost certainly your antivirus. Those big name brands have become so big, so bulky, so complicated, so full of pop-ups and upsells, they're dragging the whole system down. Not Webroot. Webroot offers all-in-one digital protection for up to 10 devices, which is nice, the whole family can be protected, with a variety of plans designed to protect you and your loved ones from digital threats. You can get powerful antivirus, it's really a good antivirus, and identity protection without the slowdowns or the pop-ups. Webroot keeps you protected online while staying out of your way. The numbers prove it. Webroot Essentials, the basic antivirus, scans six times faster and takes up, get this, 33 times less space than the average competitor. And it ranks number one in performance compared to Norton and McAfee. I'll just give you some direct comparisons. Webroot Essentials versus Norton Antivirus: 3.7 times faster, 35 times smaller, 5 times less RAM when idle. All right, if you use McAfee, you probably already know that your machine is being bogged down. Webroot Essentials versus McAfee Antivirus: Webroot is 10 times faster, installs 16 times smaller, and uses 5 times less RAM when idle.
And that is the difference that can make your computer feel newer, faster, and easier to use. Webroot offers total protection. You can get antivirus and identity monitoring and privacy protection and cloud backup, all in one simple, hassle-free subscription designed for everyday life. But they've got a plan for everybody, even if you just want the simplest essentials. It's very affordable. Webroot Total Protection ranked first overall when compared to top competitors because it scans seven times faster than the average competitor and takes up three times less space on your hard drive than the average competitor. And I should mention that AI has completely changed the cybersecurity game. You know that now, right? Scams are smarter, malware is faster, phishing emails are indistinguishable from the real thing. The good news is you don't need to be a tech expert to stay ahead of it. You need security that can keep up with the AI threats. Unlike those free antivirus tools or older security programs, Webroot is built to counter modern AI-driven attacks. It's fast, it's lightweight, it's designed to spot threats before they ever reach you. Live a better digital life with Webroot. And let me tell you, we've got a deal for you. Webroot is offering our listeners an exclusive 60% off offer. Visit webroot.com/twit to learn more. That's webroot.com/twit, 60% off. So don't wait. Webroot. Back to the show. Steve Gibson told me to yell at you guys. I said, I already did last time. You're wrong! You're wrong, I'm sorry, I apologize. Seems like every week I come back and I have to apologize for the week before. I will try, I will endeavor not to be a jerk this week. But Steve said there is no doubt now that Mythos is more than just marketing. I, of course, had said it was pretty much marketing.
Speaker 3:
[22:05] We just asked, we're just reporters.
Speaker 1:
[22:08] And it's a reasonable question.
Speaker 2:
[22:09] You know, the old adage is: if your mother says she loves you, check it out. And so I think it extends to: if Anthropic says Mythos is the end-all, be-all of cybersecurity threats, you should check it out.
Speaker 3:
[22:21] By the way, who do you check it out with? Your therapist?
Speaker 1:
[22:25] No, about your mom, you mean?
Speaker 3:
[22:26] Yeah.
Speaker 2:
[22:27] Your mom and related parties who would be aware of-
Speaker 3:
[22:30] If your mom is lying to you, she's lying to you.
Speaker 2:
[22:33] Well, I mean, you need to look into it with her as well as other people, people familiar with the situation. Parties briefed on the matter.
Speaker 1:
[22:41] According to two people familiar with the situation, mom does love you. Don't ask dad, though, because he doesn't know. He definitely doesn't. Hello, Gizmo.
Speaker 2:
[22:53] She's, you know, trying to, oh gosh, don't do it, Gizmo.
Speaker 1:
[22:59] So Steve quoted this paper, which is actually from the Cloud Security Alliance, but it's authored by a huge number of very respectable people in the security community, including Jen Easterly, former director of CISA, currently director at RSAC; Bruce Schneier, we've talked about him all the time; Katie Moussouris, I've interviewed her. Just really good people, and 250 other CISOs. And the paper is "The AI Vulnerability Storm: Building a Mythos-Ready Security Program." They fundamentally accept that Mythos is what it says it is, an AI model.
Speaker 3:
[23:41] Did they use it?
Speaker 1:
[23:42] Well, I'm going to talk about some people who have used it. Some by the way, without permission.
Speaker 3:
[23:47] Well, there's that. But how do they know unless they...
Speaker 1:
[23:49] I think some of them have used it. But they are addressing people, CISOs in business, who are going to be faced with a risk spike as soon as Mythos becomes widely available. They say: what will happen next? The storm of vulnerability disclosures from Project Glasswing is the first of many large waves of AI-discovered vulnerabilities. By the way, that's more than just Mythos. And that's the other takeaway from this: all AIs can do this to a certain extent. All of the top frontier models are very good at finding security vulnerabilities.
Speaker 3:
[24:31] They will all get better as time goes on.
Speaker 1:
[24:32] And they will all get better. The capabilities seen in Mythos will quickly become more widely available, dramatically increasing the number and frequency of complex and novel attacks organizations will face. Organizations are already facing this. This is a graph from that paper showing how long it takes from the announcement of a vulnerability, what they call the CVE disclosure, to it being weaponized, to there being an exploit in the wild. A few years ago, in 2018, it would take on average 2.3 years between the time the CVE came out and the time hackers reverse-engineered it and were able to exploit it. That number has been going down rapidly. Last year it was 23 days. This year it's 10 hours. Ten hours from the time the vulnerability is announced to the time it's exploited. And I think you can directly point to AI as being responsible for that; that's really the big breakthrough. This comes from a site called Zero Day Clock.
Speaker 3:
[25:36] So you're using AI to create the exploit once you know the vulnerability.
Speaker 1:
[25:39] So the AI can look at the CVE and, in effect, reverse-engineer it, saying, okay, this is what they fixed, this is the vulnerability. And it can make a proof of concept from that CVE, and then the hacker can apply it.
Speaker 3:
[25:52] But if things have been done responsibly, hasn't the hole been plugged already?
Speaker 1:
[25:57] And this is why this paper was written, because it's very frequent that the patch goes out but doesn't get applied in a prompt fashion. And partly that's because they used to have time.
Speaker 3:
[26:10] Right.
Speaker 1:
[26:11] They don't have time anymore. And so that's really part of what this paper is all about. Zero Day Clock projects that it will be less than an hour by the end of this year, and less than a minute in a couple of years, because of AI. Also, just to answer your question, the percentage of exploited CVEs has been going up like crazy. Now 72% of exploited CVEs are exploited on or before the day of disclosure, even though they're being patched. Patches take time.
Speaker 2:
[26:48] Where are they getting this data from?
Speaker 1:
[26:52] I don't know. Data sources.
Speaker 3:
[26:56] How do you know that places are?
Speaker 1:
[26:58] The government's CISA; VulnCheck, which is a very well-known, respected commercial threat intelligence firm. There are actually 10 independent sources, they say, and they're all listed on the Zero Day Clock site. Metasploit is another one. There are a lot of companies that make a lot of money, in fact, charging companies for this kind of information; threat intelligence, they call it. In fact, some of our advertisers do threat intelligence. So anyway, I wanted to mention that. I also wanted to mention, and we talked about this last week, that Dario Amodei went to the White House a week ago Friday and talked to Scott Bessent and Susie Wiles, the chief of staff. And apparently, Trump has been convinced that, well, Anthropic's not so bad.
Speaker 2:
[27:46] Well, you know, he's also convinced NSA spies are using Mythos.
Speaker 1:
[27:51] Yeah, they're not paying any attention to the blacklist. Because it's too good a tool to ignore.
Speaker 3:
[27:57] Well, with the Trumpists, the more dangerous the tool, the more they want it, the more it's appealing.
Speaker 1:
[28:04] One more data point. Mozilla shipped the latest version of Firefox, Firefox 150, yesterday, and announced that they had fixed 271 bugs in it, which they found with Mythos.
Speaker 2:
[28:18] However, that exact article says that the Mozilla team, the Firefox team doesn't think that AI is going to upend cybersecurity long term, but that obviously software developers are going to be in for a rocky transition with this.
Speaker 1:
[28:36] Yeah, because they will use AI to patch as fast as the bad guys use AI to find. That's critical. I asked Steve about this: isn't this going to be a seesaw battle? They're going to patch it, then the new AI will come out and find more. He said, eventually, you get to stability, where you don't have any bugs. You don't have any more exploits. The software is fixed. Finally fixed.
Speaker 2:
[29:00] By who?
Speaker 1:
[29:01] Well, and this is important to understand. We have lived in a world where software is awful. It's buggy. It's not well-written.
Speaker 3:
[29:07] But is it always a bug that's a vulnerability, or is it sometimes part of the design, where someone just says, oh, I didn't know it would do that if somebody did this?
Speaker 1:
[29:12] Often it's design. For instance, Cisco, which makes, of course, a lot of internet hardware. You know, the government's always talking about these damn Chinese routers. Well, Cisco routers are often the most exploited routers. They're enterprise routers. And Cisco has in the past, for instance, left default passwords in the router so that you can access the interface. That's not a bug. That's just a stupid mistake, right? So there are bugs, there are mistakes, there are unintentional flaws.
Speaker 3:
[29:44] There are unique ways to attack something that someone hadn't thought of. This is the problem with guardrails too.
Speaker 1:
[29:51] But AI can find all this.
Speaker 3:
[29:54] You can't anticipate every possible malign use. That's the point of this is you need to use these tools.
Speaker 2:
[29:59] Let's take this a couple steps further, to where we're suddenly in a world where this AI security checking is going to be an essential part of the release process for any new feature or software. That's the point of this CSA paper. That's incredibly expensive. When we get a couple years down the line, to where these companies are charging, let's say they're not even charging 100% of what it costs to do these tasks, they're charging only 50%, which is already a lot more than what they're charging now, how is any company going to afford to do this as an essential check for every single thing, always?
Speaker 1:
[30:40] Well, they better.
Speaker 3:
[30:41] What if you're just a little app maker in the app store?
Speaker 2:
[30:43] What if you're just, what if you're TWiT and trying to plug some sort of...
Speaker 1:
[30:47] It may mean that either software gets more expensive or... Because what is also a certainty is that if you are worthy of attack, the bad guys will be using these tools to attack you, because there's money to be made. So it's worth it even if it's expensive. And let's not forget, the cost of not fixing these holes is perhaps even greater.
Speaker 3:
[31:14] You've vibe-coded. Have you taken any of the stuff you've vibe-coded and put it through, asking for...
Speaker 1:
[31:18] Every time I do anything, I go through a security check, absolutely. And almost all these tools have skills for security checks. There are third-party skills for security checks. Sometimes I will run multiple ones. But the ironic thing is that I'm just doing that for myself. I am not putting out commercial software. I mean, it's public because I put it on GitHub if somebody wants to see it. But I'm not putting out commercial software. And yet I still do that, because I don't want to accidentally publish an API key or the passwords to my house or whatever. So I always do that. And I think that's going to be standard operating procedure, if it's not already, with every bit of software. The question I was asking Steve is: is software perfectable? And he says yes, it's deterministic, it's math. So ultimately it is possible to have perfect software. Humans are not good at that. We now know humans cannot do it.
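[Editor's note: the kind of pre-publish secret check Leo describes can be sketched as a minimal script. This is a hypothetical illustration, not any particular tool's "security skill"; the patterns are only examples, and real scanners such as gitleaks or trufflehog ship far larger rule sets.]

```python
import re
from pathlib import Path

# Illustrative patterns only; a real scanner uses hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, matched text) for every suspicious line."""
    hits = []
    for path in Path(root).rglob("*"):
        # Skip directories and anything inside .git
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern in SECRET_PATTERNS:
                match = pattern.search(line)
                if match:
                    hits.append((str(path), lineno, match.group(0)))
    return hits

# Example: if scan_tree(".") returns any hits, block the publish step.
```

Run against a project directory before pushing to GitHub; a nonzero hit count is the signal to stop and scrub.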
Speaker 3:
[32:18] I'm still going to question that premise again, because I'm going to say one more time, you cannot anticipate every malign use that someone will come up with to get around something.
Speaker 2:
[32:31] Well, this is kind of what the CSA paper seems to touch on. It says there should be the development of something like VulnOps, a permanent organizational capacity modeled on DevOps that is all about constantly trying to get ahead of this, all the time. That's a huge resource investment and a change in the way a lot of companies are going to have to operate, and I don't think that's entirely feasible.
Speaker 1:
[32:59] It might end up being very expensive, but we need it.
Speaker 2:
[33:02] But I mean, is that feasible for anyone outside of the top 50 or 100 companies?
Speaker 3:
[33:06] Yeah, does this just make the hegemony of the big guys more entrenched? I think Paris's point is really, really right. It's just an impact. It's not a matter of fault, but it's something to consider.
Speaker 1:
[33:19] The trend, by the way, is that these models are getting smarter and smarter at a very rapid clip, and they're getting cheaper and cheaper to run at a very rapid clip. The software that you run today at a certain cost will be cheaper in a year. The frontier models will always get more expensive, but the software you run today will get cheaper.
Speaker 3:
[33:37] Well, not necessarily. We'll talk about that a little later.
Speaker 1:
[33:41] Well, I mean, we've mentioned this. Jensen Huang has said that one of his chief goals is making these GPUs more efficient, right?
Speaker 2:
[33:51] Yeah, of course that's one of his chief goals. Is that practical or realistic in the next five years?
Speaker 3:
[33:58] It's the new Moore's Law.
Speaker 1:
[33:59] Absolutely. It is. You know why it is? Because we have to.
Speaker 3:
[34:03] Because his point is that you can't increase the size of the data center that's already built. You can't put more chips in it. The only way you're going to increase the investment is by him increasing the efficiency of it, and that's why they're coming to him to do that.
Speaker 1:
[34:18] Anthropic says that their most dangerous AI model, aka Mythos, has fallen into the wrong hands. A Discord group has had access to Mythos for two weeks. This actually comes from Bloomberg.
Speaker 2:
[34:36] And if a Discord group, no offense to our Discord group, but if a Discord group was able to get access to Mythos, who else has access to Mythos?
Speaker 1:
[34:45] They got it through contractors.
Speaker 2:
[34:48] Yeah, I know. You know, who else has access to be able to do this sort of...
Speaker 1:
[34:53] Well, and that's why, I think, this hair-on-fire report from the CSA, and why so many CISOs, 250 of them, signed on to it: it's going to get out there.
Speaker 3:
[35:03] Well, they also did it because Anthropic, or whoever, I'm sorry, not Anthropic, but the client, was sloppy, because they went through the basic email structure or URL structure that they learned from the prior leak.
Speaker 1:
[35:15] This is from Bloomberg: the unauthorized access highlights the challenges Anthropic faces in fully preventing its most powerful and potentially dangerous technology from spreading. So it's kind of like nuclear proliferation.
Speaker 3:
[35:27] Well, it's also, Mythos couldn't protect Mythos.
Speaker 1:
[35:33] OK, well, yeah, I don't know what that means.
Speaker 3:
[35:36] But if it's supposed to be the most security aware thing, and it's going to be perfect software and there's going to be no vulnerabilities.
Speaker 2:
[35:44] Well, that's actually a great point. If Mythos is so good, why couldn't it protect its own software?
Speaker 1:
[35:50] Because it wasn't Mythos that was exploited. It wasn't Mythos that was exploited. So Mythos isn't, like, magically protecting everybody now.
Speaker 2:
[36:02] If Mythos is so good, then how could Mythos get deployed?
Speaker 1:
[36:07] Because it isn't everywhere.
Speaker 2:
[36:09] But it is being used in the circumstances where Mythos is being deployed. Anthropic has used Mythos for all this.
Speaker 1:
[36:18] Yeah, I don't, but it's a nonsensical question.
Speaker 3:
[36:22] What do you think you are, Jensen Huang with Patel?
Speaker 1:
[36:24] Well, it's just, but you're asking a nonsense question. Mythos isn't sitting there protecting everybody and everything going, well, don't touch me.
Speaker 2:
[36:33] I'm not saying that Mythos is there ready to karate chop anything, but Mythos has been deployed on all of Anthropic's current stuff.
Speaker 1:
[36:43] The person had permission to access Anthropic models. They gained access through a company for which they perform contract work. Bloomberg's not naming the company for security reasons. I'm not sure how Mythos was supposed to prevent this.
Speaker 2:
[37:01] Poor mythos, we shouldn't be so hard on it.
Speaker 1:
[37:04] You're personifying it.
Speaker 2:
[37:05] It's not.
Speaker 1:
[37:07] Okay, I'm going to move on.
Speaker 3:
[37:08] See, I thought when you had Lucas on, you were going to tell us who it was?
Speaker 2:
[37:12] Do we want to... Somebody should sic their own version of Mythos on figuring out exactly how many minutes and seconds it was since Leo said he's not going to tell Jeff and me that we're wrong.
Speaker 1:
[37:22] Well, I didn't expect you to say so many stupid things. I'm sorry. I'm trying.
Speaker 3:
[37:26] Well, Jensen.
Speaker 1:
[37:27] I'm trying. If you would just not ask dumb questions, I wouldn't have to complain.
Speaker 3:
[37:32] We dare to criticize.
Speaker 1:
[37:33] No, it's not a dumb question. It's just that you're assuming that somehow Mythos is permeating everything that everybody's doing and is protecting itself from everybody. But if somebody has access to Mythos and then gives that access to somebody else, there's no Mythos in the middle. It's not sitting there going, hey, no, no, no, you can't do that.
Speaker 2:
[37:53] The whole problem with what happened is that the issue that they exploited is not-
Speaker 1:
[37:57] It wasn't a bug.
Speaker 2:
[37:59] It is that the, I believe-
Speaker 1:
[38:02] No, no, it wasn't a bug. It was bad policy. Mythos isn't there saying, you have a bad policy, you can't do this. It wasn't available to fix that problem. And actually, that goes back to what Jeff was saying earlier, which is a legitimate point: there are security things that can go wrong that aren't bugs, that an AI can't fix. This is why we always talk about zero trust. If you have a security policy that allows a former contractor access to your system, you're going to have a problem. And I mean, I guess you could have-
Speaker 3:
[38:43] They could have asked Mythos, how should we put this out to our ten contractors? Leo, I was hoping that when you had Lucas on, you wouldn't know what he was doing. I was hoping he was one of the guys who broke into Mythos.
Speaker 1:
[38:54] Broke. I don't think those guys are going to talk to anybody.
Speaker 3:
[38:57] No, they're not.
Speaker 1:
[38:57] If I were them, I wouldn't.
Speaker 3:
[39:02] They may be drafted into the army before you know it.
Speaker 1:
[39:04] Yeah. I think it's funny because you're now ascribing to Mythos amazing capabilities beyond what Mythos can do.
Speaker 3:
[39:12] Well, what if Anthropic literally had asked Mythos, how do we do this well?
Speaker 1:
[39:18] I don't know if-
Speaker 2:
[39:19] One of the most impressive claims about Mythos is supposed to be that it helps anticipate a wide range of potential security risks.
Speaker 3:
[39:29] Yeah, that's a good point.
Speaker 2:
[39:30] And this could have been one of the many things it anticipated, which is that you currently have-
Speaker 3:
[39:38] Only with the code, not with the deployment thereof.
Speaker 1:
[39:40] I guess it failed. I don't know. Maybe Anthropic hasn't run Mythos on all of its policies. I'm not sure exactly what's going on there.
Speaker 3:
[39:51] Deridoki says rightly, it's about the attack surface, which varies.
Speaker 1:
[39:57] Yeah, I mean, it isn't a magical being that can prevent all attacks. I'm just saying it can fix bugs in software. I don't know about the rest of it. It's just a very good model, and it happens to have cybersecurity abilities, interestingly enough, that it wasn't specifically trained in. Let us take another break. We're about 15 minutes away from our guest, Ian Bogost. Bogost. So I will practice his name.
Speaker 3:
[40:35] Did you get a pronunciation guide?
Speaker 1:
[40:37] Claude told me it was Bogost, Bog-ost, Bogost.
Speaker 2:
[40:42] Claude could never be wrong about anything like pronunciation. It's clearly so good at pronunciation.
Speaker 1:
[40:48] We'll find out. I know, it's surprising. I wouldn't expect it to be good at pronunciation, but it is apparently. Bogost.
Speaker 2:
[40:54] According to someone who has not checked and has not been confronted with the reality of the situation.
Speaker 1:
[41:01] Yes. Somebody's saying it's like if you left your password on a sticky note on the monitor, Mythos can't help you. That's exactly the point. It has a domain that it works in, but it's not omnipotent.
Speaker 3:
[41:20] I'm sorry. I just enjoyed the irony.
Speaker 1:
That's all. Yeah. Well, I mean, yes, Mythos is not going to... And actually this really was the point that you were making, which is that it isn't...
Speaker 3:
[41:32] You cannot anticipate every...
Speaker 1:
[41:34] Yeah. It's not going to fix all security problems, but potentially software bugs could be eliminated. I wasn't sure about it when I asked Steve.
Speaker 3:
[41:43] Unanticipated attack surfaces cannot be, because they're unanticipated.
Speaker 1:
[41:48] No, that just means a human didn't anticipate it. If it's in the software, presumably the AI could find it. But the AI is not going to have anything to do with a post-it note on your screen with the password. It doesn't have that.
Speaker 2:
[42:01] That is not what happened here.
Speaker 1:
[42:04] But no, it's something very analogous to that though.
Speaker 2:
[42:07] I don't think that's accurate.
Speaker 1:
[42:09] It was not through a CVE. It was not through an exploit.
Speaker 2:
[42:12] I don't think anyone's claiming that it was through a CVE.
Speaker 1:
[42:14] Well, what Mythos fixes is CVEs. It was not through an exploit. It was through bad policy, like having a password on your monitor. Bad behavior. You see? I mean, there is a difference. It can fix bugs in software. I don't know. I'm not an expert in Mythos. I wish someday maybe they'll release it and we could talk to somebody at Anthropic about what Mythos is.
Speaker 3:
[42:38] Well, we're also right to question the idea that software can even be perfect.
Speaker 1:
[42:43] Yes. And, you know, I don't claim expertise in that area. Steve, who I think is pretty sharp about this kind of stuff, believes it is perfectable. He ships, by the way, software without bugs, he says, because he works very hard to make it bug-free. And he says it's because it's math. The problem is humans are not very good at it. We don't do a great job with it.
Speaker 5:
[43:08] There's also the hardware part of it. Like hardware cannot be perfected.
Speaker 3:
So there's always that attack surface.
Speaker 1:
[43:14] That's true. Somebody mentioned this in the Discord: things like Rowhammer, which is an exploit against memory hardware, or the side-channel attacks that let you see what's going on in a parallel process on a processor, are hardware defects. Now, Mythos probably could be applied to microcode, and I wouldn't be surprised if it could say, hey, you know what, you've got a problem here with leakage in your prediction pipeline. That, I think, Mythos probably could see. But you'd have to know about it before you fabbed the chip. And once it's in the chip, Mythos can't fix it.
Speaker 3:
[43:57] But it couldn't fix anything like radiation.
Speaker 1:
[43:58] It's fixable in microcode.
Speaker 5:
[44:00] So you can't do anything about radiation or anything like that.
Speaker 1:
[44:03] Yeah. Okay. Yeah.
Speaker 5:
[44:06] I'm just saying the hardware. I'm just saying the hardware. It's like the hardware can never, that can't be fixed. Yeah.
Speaker 1:
[44:11] Okay. It is... it's not a Superman, not a superhero. And I don't think Anthropic is claiming it to be omnipotent.
Speaker 4:
No, I didn't say that. I didn't say they were. I was just saying that they always do that.
Speaker 2:
[44:20] I don't think anyone is claiming that Mythos is an omnipotent superhuman that is perfect, Leo.
Speaker 1:
[44:27] Okay. Right. We will take a break and then we will talk about...
Speaker 3:
[44:34] For the Osborn pressure.
Speaker 1:
[44:36] No, I'm fine. We will talk about the crunch that is apparently happening with a lot of companies. Your buddy Ed actually has a scoop on one of these. We will talk about that in just a bit. You're watching Intelligent Machines as we try to figure out what is going on in the world of AI. It's not obvious. Brought to you today by Monarch. When the seasons change, it's natural to want to declutter and get organized. Monarch helps you do the same with your money goals. Let Monarch do your financial spring cleaning for you. One dashboard that gets your entire financial life organized. No more clutter, no more mess, no more scattered logins, just accounts, investments, property and more all in one place. Get your first year of Monarch for half off, just $50 with the promo code IAM. I love Monarch because I can take a look at a single page and know exactly where I stand, what my net worth is, what my outgo is, what I make, which is a very small amount of money. But at least I know. I love Monarch. It's not your average personal finance app. It's not a checkbook balancer. Most apps tell you what you already spent. Monarch goes a lot farther. You can set goals. You can plan for big purchases. You can map out your financial future. You can use AI tools built with your financial data. See, it's got your data. You apply the AI so your answers are personalized to you. Monarch's AI assistant is like having a financial advisor in your pocket. Ask it anything, anytime, like has my spending changed lately, or am I on track to hit my savings goals? AI insights spot things you'd never catch yourself. Is your spending up? Or maybe it's not up, it's just inflation. The AI weekly recap is a personalized weekly summary that flags spending spikes, big net worth shifts and upcoming expenses. This way, nothing sneaks up on you. Knowledge is power, and with Monarch, it's all on one screen for me. Oh, they've got a new feature I love. Let Monarch do the math with Bill Split.
You scan a receipt, Monarch splits up the items, prices them automatically, and then you share a link or QR code with your group. Everyone says, yeah, that's mine, and you can settle up. You don't even need a calculator. Monarch does it all automatically from the receipt. Use the code IAM at monarch.com to get your first year half off, just 50 bucks. That's 50% off your first year at monarch.com with the code IAM. Best... well, I was going to say the best 50 bucks I ever spent, but actually I didn't have the code, so I spent 100 bucks, and it's worth it. monarch.com. I love it. Use the code IAM. All right, we're back, and guess who? Look who's showed up, our special guest. Ian Bogost is here. Gosh darn it. Bogost is here.
Speaker 5:
[47:38] Did I get that right?
Speaker 1:
[47:38] Good to see you. I will say something. Ian, you know this guy, Jeff. Somebody told me you know Jeff.
Speaker 3:
[47:46] Well, I have much to thank Ian for. Ian, I wrote a blurb for one of the Object Lessons books in the series that Ian co-edits. Taking every opportunity, I said, hey, do you want a book from me? So I ended up writing Magazine.
Speaker 5:
[48:02] Magazine.
Speaker 3:
[48:02] I had great fun writing.
Speaker 5:
[48:03] Do you have it there?
Speaker 3:
[48:04] There it is. Always be selling. Always be selling. And then I also said, I've got this book about Gutenberg. And he introduced me to Haaris Naqvi, who's our mutual editor and publisher. And that's led to that and my next book, Hot Type, and the book series, Intelligence. So I have much, much to thank Ian for.
Speaker 5:
[48:24] A lot's going on, yeah.
Speaker 1:
[48:26] Introducing you is a challenge, because you do so much. I was talking to Jeff before the show, saying I don't understand how Ian is so accomplished.
Speaker 3:
[48:34] He said he hates you, because you can do so much.
Speaker 1:
[48:37] You make me feel like a loser.
Speaker 5:
[48:40] People ask me what I do, and my heart just falls out of my body.
Speaker 2:
[48:44] Do you have seven and a half minutes?
Speaker 1:
[48:46] He's contributing writer at the Atlantic, and that's where I know you from. I read your stuff. I love it. It's fantastic.
Speaker 5:
[48:51] Thank you so much.
Speaker 1:
[48:53] He is also a game designer. In fact, you might know the game he did some 15, 16 years ago called Cow Clicker.
Speaker 5:
[49:01] Cow Clicker, that's right.
Speaker 1:
[49:03] A Facebook game, which was a parody. It satirized the exploitation of Facebook and its games, but it became so big you actually had to have a rapture.
Speaker 5:
[49:18] Yeah, I had to get rid of all the cows.
Speaker 2:
[49:21] How many cows did you murder?
Speaker 5:
[49:23] That's a good question. A lot of cows. I have never counted the number of cows. I wouldn't think of me as murdering them. They were mysterious.
Speaker 2:
[49:29] You delivered them to cow heaven.
Speaker 5:
[49:31] That's right.
Speaker 1:
[49:32] Well, they got raptured. I think that's actually, from the point of view of the cow, a good thing.
Speaker 2:
[49:36] Well, we don't know what sort of sins those cows committed.
Speaker 5:
[49:39] The cows were innocent.
Speaker 1:
[49:41] They were 100 percent raptured. Not one cow was left behind. So he's also a professor, kind of a multidisciplinary professor, at Washington University in St. Louis: the Barbara and David Thomas Distinguished Professor, co-executive director of the Office of Public Scholarship, provost fellow in interdisciplinary initiatives.
Speaker 5:
[50:02] This is too much, isn't it?
Speaker 1:
[50:03] I know, it's too much for one person. His portfolio focuses on AI plus design. He's also assistant vice provost because, you know, you had a little free time, I guess. Founding partner at an indie game studio called Persuasive Games, which has consulted for 2K Games, Activision, Disney, Nintendo, Sony, The Tetris Company.
Speaker 3:
[50:25] And how many books has he written?
Speaker 1:
[50:26] Eleven, according to Claude. Claude could be lying.
Speaker 5:
[50:29] This is my 11th.
Speaker 1:
[50:31] 11th book.
Speaker 5:
[50:32] It's an odd number. I need to write a 12th.
Speaker 3:
[50:34] I want to talk to you about that later.
Speaker 1:
[50:36] And that's a very bad number.
Speaker 2:
[50:37] Well, no, Leo, you've got a baker's dozen. That's a perfect number.
Speaker 1:
[50:40] Well, the good news is I only wrote like a couple. The rest were ghost.
Speaker 5:
[50:43] The rest Claude wrote.
Speaker 2:
[50:45] Claude wrote. The video was Bogost, yeah.
Speaker 1:
[50:47] Actually, the thing that really interests me most about Ian particularly is your notion that, and I'm going to misstate this, so you can kind of restate it, but friction is important to our humanity.
Speaker 5:
[51:04] Yeah, you know, this book, this new book, The Small Stuff, is about gratification. And at the same time as I was working on the book, this idea of friction got really popular. You hear this a lot now: we need to reintroduce friction. There was just a New Yorker story about this a couple days ago. It was good. It was a good story, right? And I think what I'm saying is related but a bit different. Because to me, this idea of gratification is all about the sensory enchantment of everyday life. It's about your constant sensory encounters with the world and how you can derive small amounts of little pleasures from them all the time. And that's a little different, isn't it, from friction? It's not about making it harder. In some ways, it's about making it easier. So that's an interesting idea that I've been thinking about since we started promoting the book, and since I've started thinking about it in the context of conversations about technology today, where this idea of friction has suddenly become quite popular.
Speaker 1:
[52:08] Well, it's also germane to the topic of AI, because one of the things AI seems to do is smooth all the edges, right? And AI prose in particular is...
Speaker 5:
[52:17] Oh, yeah, well, I think that's true. But I've also been thinking about the ways that AI pushes me, at least. I don't know that it does this generally. I'm starting to see evidence that AI is pushing people back into the world rather than removing them from it.
Speaker 2:
[52:37] In what sense?
Speaker 5:
[52:38] Well, here's an example that just happened to me this week. So it's the springtime. Thank gosh. It's been so cold all winter. It's finally spring and I need to water my lawn because it's spring again. I live in a place where it freezes, so you have to turn on the irrigation system. There's a leak in my backflow, my irrigation backflow, which is something we don't need to talk about today. But it's a thing that you have to have in your irrigation system. I talked to AI and I'm like, help me figure out how to fix this. So in that sense, I'm not generating text. AI is pushing me back out into the world where I'm taking on these real material tasks that I might not have done otherwise. So I think it's complicated. I think the smoothing over is happening, for sure. But there's also these kind of rough edges that AI is revealing, including for me, really, like inviting me to engage with the physical world in ways that I might not have chosen to previously.
Speaker 3:
[53:42] Is there a linkage to the fact that you edit the book series called Object Lessons?
Speaker 5:
[53:45] Well, certainly when we... So Object Lessons, which is this delightful book series, we've done about 100 books with Bloomsbury. And they're all about... The secret life of ordinary things is the tagline. We've done so many books, Taco and Phone Booth and Jeff Did Magazine. And so I've been interested in things for a very long time, ordinary things, toasters and stuff. That's just an obsession of mine for forever. And people sometimes sneer at you when you think... When they find out that you're interested in toasters, they're like, why? Like, that's not important. And one of the things that Object Lessons was meant to do was to... So yes, it is important to tell those stories. And then we tried to make the books themselves. Jeff held one up at the start, like, really delightful, delightful objects. They've got French flaps and, you know, we do Pantone printing on them. And the covers are just wonderful. So I do think it's related, you know, that this is a project, this is a long-standing project for me that really touches a lot of things that I've done over the years, even if I didn't realize at the time how they were contributing to this, a discovery of gratification as a topic of interest.
Speaker 3:
[54:58] It's a very academic thing to do to abstract those common things in life and ask, what else is there about this? And to learn lessons from that. And I haven't read your book yet, but I'm suspecting that there's a through line to that.
Speaker 5:
[55:17] There is the through line. I mean, for me personally, you know, you ask, like, what's the story with all these things that you've done? And as a game designer, you know, I was always thinking about, the games are absurd. Games are ridiculous. And there's no purpose to them. There's no reason to play Candy Crush or even Scrabble. But we do it and we enjoy it. And that's fascinating.
Speaker 1:
[55:44] Unless Paris is playing against me and then I don't enjoy it at all.
Speaker 3:
[55:48] She smashes them.
Speaker 5:
[55:49] You could even enjoy being whooped, defeated at a game. There's something about it that's part of the deal, the pain that's part of playing. And so one of the things that was always eating at me when I was writing about games, making games, studying games, teaching about games: why is this miserable, weird, useless, purposeless thing so compelling? And one reason I think it's compelling is because games invite us to engage with something that doesn't matter, that doesn't need to exist, that almost shouldn't, that's pure excess. And if you look at the world, it's full of all that kind of stuff too. I have this walnut desk in front of me, and I can feel the smoothness of the wood under my hands. And sometimes I just do that. And that's gratification. That's delight. That's sensory delight. It doesn't do anything for me. But if I didn't do that, if I didn't accept the gift of that sensory encounter, would my life be better? No, it wouldn't. I would be missing out on that little moment that's free. And so it may not seem that that has anything to do with games, but I really did come to some of those observations through, you know, my experience with games, and then later with the philosophy of objects, in the way that Jeff kind of hinted at.
Speaker 3:
[57:11] So AI objectifies things. Everything is an abstraction in AI.
Speaker 5:
[57:16] AI is complicated. I think what AI is doing is giving us a very easy-to-use window into answers, or simulated answers, to lots of questions, which means that we ask a lot more questions. And that's qualitatively different than Google, I think. Partly because it's quantitatively different: it happens faster, but that makes it qualitatively different. Like the story I told about the irrigation fix. Why wouldn't I have done that just by going to YouTube or Reddit? And the answer is pretty clear, isn't it? It's because I wouldn't have been able to find the answer on YouTube. I would have stumbled into some channel that had a bunch of ads and pre-rolls, and I would have had to find the part of the video where it was correct, and I would have had to wade through all the arguments on Reddit about whether someone had already asked and answered this question eight years ago. The AI just tells you. It's like, oh, I can take a picture of this apparatus, and it's like, okay, here's what you do. It's not always right, and I know that people worry about that, and I worry about it too, but it's often right. In my case, in this example, it was right, and it helped me not just fix something in my life, but engage physically with a part of the world, a part of my home that I had never touched, not really. And that is, I think, kind of magical, and it's a different story than the one that I've heard told about AI.
Speaker 1:
[58:42] You famously said you were glad that AI ingested your books.
Speaker 5:
[58:47] I did, yeah. Yeah. Someone noticed this, I guess. I was wondering if anyone had noticed that I wrote that.
Speaker 1:
[58:52] I agree with it. In fact, I've said the same thing.
Speaker 2:
[58:54] What was your argument for?
Speaker 5:
[58:56] When you make something as a creator, it is no longer yours. It lives in the world. Isn't that what all of us want? You write a book or you make a piece of art and you want it to travel. It's no longer connected to my intention and what I might do with it. Anyone can consume it and they can misinterpret it or they can reinterpret it, or they can throw it away or they can crumple it up, or whatever it is that they do with it. That is what it means to be a creative person and to disseminate work in the world. I'm disturbed in a way by this idea that we ever had control of our work. Yes, I realized that AI was stealing the books, the digital files for the books. That's a slightly different topic than a computer reading them. Who would have ever thought when I wrote these books 20 years ago, that someday a computer would read and understand them and make new sense of them? Oh my God. That is certainly something that I don't want to squelch. I don't want to say, well, that's not what I meant when I wrote the book, because we don't get to mean how our work is interpreted.
Speaker 1:
[60:05] It's another form of dissemination. It's another way to give what you're giving.
Speaker 3:
[60:10] Readers are the ones who give books meaning, and it's another way to give meaning.
Speaker 5:
[60:15] The funny thing about this is this idea of dissemination, this idea that when you make something, it enters the world and it's no longer yours. This is like 50, 60 years old in literary criticism and philosophy. This is not a new concept, but somehow we forgot about it. I think the Internet made people feel as though they had more control, and they deserved more control, than they ever did before. Now, it suddenly seemed like you could talk to celebrities, or you could repost when someone misinterpreted you, or you perceived that they did. Every article I write, now I have to deal with people responding to it online and thinking that I should engage in conversation with them about it. It kind of changed our world view about how work enters the world.
Speaker 1:
[61:03] In 2022, you wrote a great piece, which I remember vividly, called ChatGPT Is Dumber Than You Think. Basically, you said it's a toy. Not a tool, but a toy. Now, here we are four years later. Things have changed a little bit.
Speaker 5:
[61:19] Yeah. I mean, it was always going to become a tool, because that's the thing that we make when we make computer systems, but it's also a toy. What I mean by a toy is it's something you just do for its own sake. You manipulate it and you mess around with it. It's an entertainment vehicle too. Sometimes I'll just have ChatGPT write me Hart Crane poems about Diet Coke or whatever. What's the point of that? It's not in order that I can have the Hart Crane poem; it's in order that I can explore two things that I love, that poet and Diet Coke, in a different way. I think that persists as a use of AI; that usage still persists. Sometimes I'll just hang out with one of these machines and spend a little time with it, not personifying it, but just exploring it in the same way that I would click through Wikipedia articles, or in the same way that I would hold, I have a Rubik's Cube over here. When you play with this thing, sometimes it's just about holding it in your hands and feeling what it's like for the-
Speaker 1:
[62:27] For me it is because I can never solve it.
Speaker 5:
[62:29] Oh, I can't solve it either.
Speaker 1:
[62:32] That's an interesting point. It's a game that you can't solve, but it's still pleasant.
Speaker 5:
[62:38] Right. Some of the things that we love about games and toys and objects is not that they're useful, but that they can be manipulated. I don't mean manipulated like used in indiscriminate ways. I mean, like just physically touched.
Speaker 1:
[62:53] That's what a game is, playing. That's what playing is.
Speaker 5:
[62:56] That's right. Yeah.
Speaker 3:
[62:58] Ian, I'm eager to hear your thoughts on AI and education. Not the obvious, not the everybody has blue books.
Speaker 5:
[63:05] Right.
Speaker 3:
[63:06] But calling on what you said earlier about the machine reading your books, you now have a machine that not only speaks our language, but it supposedly learns.
Speaker 5:
[63:14] Right.
Speaker 3:
[63:16] Does this affect at a high level, does this affect your view, your perspective on learning generally? Then because of everything you do in education, I'm curious about your view of AI in the classroom.
Speaker 1:
[63:30] Yeah, you're on the front lines.
Speaker 5:
[63:32] I am. I've written about this extensively. I've tried my best in my writing on AI in education, and I've focused on higher education because that's where I live. I've tried to give everyone a voice: the students, the faculty, the administrators, even the AI companies. I feel like it's complicated. These things are here and we can't deny that. What does it mean? In terms of learning, I do think that this ability, like think about the irrigation story, this ability that I have to try something out relatively easily, fairly consequence-free, is related to a concept in the learning sciences, in which I'm not an expert, but which I've known and loved for many years, called performance before competence. Have you ever heard of this? The idea is that it's generally good for learning if you get thrown into the deep end. You don't quite know what you're doing, and instead of ratcheting up from the basics, or at least not all the time, by pretending that you're an expert, by really jumping in with both feet, you can learn in a different way. For some kinds of learning, you want skill-and-drill basics. Sometimes you want to learn fundamentals and build up. You need to learn color theory or something before you can paint. Then in other cases, especially complex situations, performance before competence really works. So you get a new job and you don't really know what you're doing, and you figure it out by being in the environment of work and talking to people, and then you work it out relatively quickly, partly because you have to and partly because you're fully embodied in that situation. AI seems particularly, potentially good at this. I don't know that it's actually good at this. It's very good at it for computer programming, which is something that interests me. I don't know if it's so great for writing argumentative essays.
But I think it has a lot of potential, and I also think it rubs against the standard practices we've used in classrooms for a long time, which are not like that. We don't really trust the students to learn in that way, and we haven't set up the learning environment for them to learn in that way most of the time. That said, I worry about how easy it is. It's so easy and so tempting, when you have a problem-solving machine like this, just to have it solve problems. What I see among the students is that they are pulled in so many directions. They're full of anxiety. They are facing a difficult job market that seems to be becoming ever more difficult. They have spent their whole lives worrying about performance and trying to get to the next thing in order that they can then get to the next thing. So they're wired to just accomplish. They don't even know why they're accomplishing things sometimes. So you put AI in front of them, and what do you expect to happen? It's like, well, the moment I need a release valve, there it is, and it will give you the answer. I teach this class. We just had the last meeting this afternoon. I teach this class on Atari 2600 programming, where we make Atari games.
Speaker 1:
[66:55] I love it.
Speaker 5:
[66:56] Yeah, it's delightful. Fantastic.
Speaker 1:
[66:57] Oh, the MOSFET. I love that.
Speaker 5:
[66:59] Two years ago when I was teaching this class... Oh, yeah, sprites. I mean, it's a really challenging machine to program. You have to write it all in 6502 assembly, and it doesn't have any video RAM. It's very, very weird and I love it. I love the thing. I've been teaching and writing about it and making games for it for a long time.
Speaker 1:
[67:16] Oh, man, I wish I had classes like that.
Speaker 5:
[67:18] Yeah, I mean, I can't believe I get to do this. So a couple of years ago, when AI started, I was like, look, guys, you can't use AI to program the Atari. Trust me, it's just going to. And now you kind of can. You kind of can.
Speaker 1:
[67:32] Really?
Speaker 5:
[67:32] You kind of can.
Speaker 1:
[67:33] There's enough 6502 code out there.
Speaker 5:
[67:36] It understands it well enough. Now, that doesn't mean that the students understand what they're doing, because they'll want to go in and modify it. It's a very tightly wound-up system, and the timing-
Speaker 1:
[67:45] You only have 16K. You've got, yeah, it's very limited.
Speaker 5:
[67:48] We have 4K. We have 4K ROMs, 128 bytes of RAM. And anyway, so I have some students, even this term, and they would submit code, and I'd be like, I know that they wrote this with AI. It's totally different from the code that I was showing them. And when I talked to them about it, they're trying to solve the problem of their lives. They're like, well, look, I've got a million other things going on. I was trying to get a handle on how to do this. I thought I had to ask it, and it was giving me the answers. They're not cheating. I mean, they are cheating, but that's not the way that they perceive it. And that's not the way that I perceive it either. It's rather that the whole world has been wound so tight, in this watch-spring-like way, that what are you gonna do? And I think that is the thing I think about most with education. When you have to learn fundamentals, where do you learn them? How do we guarantee that it happens, when it's so easy to shortcut or short-circuit the process?
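As an aside, the constraints Bogost mentions are easy to check with a little arithmetic. This is a minimal sketch using the standard published NTSC Atari 2600 specs (the clock figures are not from the conversation, and this assumes an NTSC machine):

```python
# Back-of-the-envelope check of the Atari 2600 constraints discussed above.
# These figures are the commonly published NTSC VCS specs.

COLOR_CLOCK_HZ = 3_579_545           # TIA color clock (~3.58 MHz)
CPU_HZ = COLOR_CLOCK_HZ // 3         # The 6507 CPU runs at a third of that (~1.19 MHz)
COLOR_CLOCKS_PER_SCANLINE = 228      # TIA color clocks per scanline

# With no video RAM, the program must "race the beam": per scanline,
# the CPU gets only this many instruction cycles to set up the display.
cpu_cycles_per_scanline = COLOR_CLOCKS_PER_SCANLINE // 3

rom_bytes = 4 * 1024                 # 4K cartridge ROM (without bank switching)
ram_bytes = 128                      # RAM on the RIOT chip

print(cpu_cycles_per_scanline)       # 76 cycles per line
print(rom_bytes, ram_bytes)          # 4096 128
```

Seventy-six cycles is only a couple dozen instructions, which is why the system is, as he says, so tightly wound: the code's timing and the picture on screen are the same thing.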
Speaker 1:
[68:50] I love your idea of games being... the analogy you used of a playground, which is a set of rules that a kid can go into, and because of the rules can be free and play.
Speaker 5:
[69:05] This is the weirdest thing about games and play. There's this paradox that the way that play becomes more interesting is by becoming more constricted rather than less so. So you think play sounds like do anything you want, freedom, go out and play, but that's not right.
Speaker 1:
[69:25] It's not fun.
Speaker 5:
[69:26] Yeah. What you need, and if you watch children, who are better at everything than adults are, if you watch children negotiate play, they do this instantly. They're like, okay, here's what we're going to do. You can only walk the steps in this direction. You can't go past the line of the door. If you sneeze, you're out. Whatever it is. They're always assigning these new constraints on the system. Broadly speaking, in your life, if you're missing meaning, or if something feels like it's just no good anymore and you want to get out, you think: if I could just escape from this, then I would finally be free and happy again. It's usually the opposite. It's that you need some set of constraints to work under.
Speaker 1:
[70:09] Maybe that's what this book is all about, is recovering that childlike sense. It's very Zen.
Speaker 5:
[70:17] There are some lessons from children in it, yeah. I think that kids are curious. Children have not encountered things before. The reason a baby will put stuff into its mouth is because it wants to sense it, and there's so much sensation in your mouth. If you think about what makes a three-year-old really irritating, it's that they're always asking questions, and it's because they don't know anything. They don't know what anything is. Like, what is this thing? There's a telephone pole here. What's that about? What's a telephone pole? If you think about that curiosity and that openness, we lose it over time, or our lives get busy, but also we have to tune out the noise or we'll go crazy, and part of what I'm interested in in this book is letting back in the stuff that we shut out.
Speaker 1:
[71:14] I guess that's what Object Lessons was about, too. I just love the idea of 111 books about not just magazines, remote controls, golf balls, drones, drivers licenses.
Speaker 2:
[71:27] Was there any one that really surprised you?
Speaker 5:
[71:30] Oh, so many of them have surprised us. Silence, the one that's on the screen right there, by an amazing writer friend, John Biguenet. People were like, you can't do an Object Lessons book on silence. Silence isn't an object. And I was like, well, yeah, who says? All I mean by object is an entity in the world. I just mean a noun.
Speaker 3:
[71:52] A noun, yeah.
Speaker 5:
[71:54] And it's funny the way that rubs people the wrong way. So that was one that was surprising. And we've learned a lot about books. Like, it turns out that golfers spend a lot of money on golfing, but not a lot of money on books. Whereas baseball, they'll spend money on books. So that was interesting. It didn't really have anything to do with the objects, but that was something.
Speaker 3:
[72:14] It's so much fun to, I mean, just having written one of them and I've got five more I want to write.
Speaker 1:
[72:18] I want to buy all 111.
Speaker 3:
[72:19] I want the complete set. It's so much fun to write too because you look at something differently.
Speaker 1:
[72:24] It's focusing very heavily.
Speaker 3:
[72:25] I'm holding the skull.
Speaker 5:
[72:27] There is this concept in philosophy, the ti esti question. That's from Greek; it means, what is it? Ti esti: what is it? What is its nature? It's the first question that you ask when you're thinking about existence, or about something existing. You can live your whole life in that question about anything. You could spend the whole rest of your life asking, what is a phone booth, actually? And you would be happy with that life, I think, because there's so much to learn and to observe about everything in the world. This attitude of mine, it's really been one that I've cultivated. I just want to share it. I feel so compelled to share it, because I'm not perfect and I haven't figured it all out, but I feel like this attitude has been so helpful to me. It's so different from the attitude of big-stuff happiness thinking, which is like: I have to accomplish more. I have to have more wealth. My relationships have to be this way instead of that way. Whereas I'm going to just allow the crunch of the twig under my foot, or the sensation of the hot mug in my hands. I'm going to accept that. I'm going to let that happen to me right now and accept it. And then I'm going to move on to the next thing. That's such a different way of thinking about contentment.
Speaker 1:
[73:42] The whole world in a drop of water. It's kind of Taoist actually.
Speaker 5:
[73:48] There is a kind of Eastern perspective that's represented here. And in a couple of my books now, I've kind of taken the Western adoption of Buddhist-style Zen mindfulness to task. Because I think that mostly what it's done is given people a... It's like, I need to take a break from achieving so that I can recharge so I can achieve more. And that's not what the Buddhists meant at all. That's all about letting go. And so in the Small Stuff book, one of the things that... I did this podcast at The Atlantic where we had Oliver Burkeman on. And Oliver Burkeman's in the book too, this sort of story from the podcast. And he did this great book called Four Thousand Weeks, about how you're mortal and you actually have less time than you think. And we were talking about some of these themes.
Speaker 3:
[74:41] Tell us about it, yeah.
Speaker 5:
[74:43] And he said on this show of ours, he was like, getting back to the senses is one way that you can just kind of live your life in the moment, but I find it really hard to get in this mindset. He said something like that. And that really stopped me dead in my tracks, because I was like, well, what do you mean? You don't have to have a mindset for experiencing your senses at all. I'm not talking about your mind. I'm talking about your body. I'm talking about your fingertips and your nose. And it's just amazing to me that in the West, especially, we've tied ourselves in these knots, where we feel like we can't, we won't accept that we can just sense things, that we can just feel and see and smell and be in the world. That somehow we have to practice that. That's kind of bananas, actually, isn't it? And if you let go of that idea and you just let it happen to you, then the whole universe unlocks, and every moment is available to you as this kind of easy opportunity for this sensory enchantment, this thing I call gratification.
Speaker 3:
[75:44] Do you have a related view on the move by some folks, Yann LeCun and company, toward world models, trying to imagine AI having that experience of the world, like a toddler experiencing it and learning from it, or a cat?
Speaker 5:
[76:03] Yeah. I mean, the thing that I think about the most in this topic is: what is the difference between being embodied and not being embodied? I've become so interested in embodied experience just as AI has been on the rise, and AI is fundamentally disembodied. And let's set aside our Matrix conspiracy theories. You might say to me, well, Bogost, maybe we live in a simulation anyway, and you're not embodied, it's just a simulation of embodiment. But I feel like I'm embodied, and that's enough for me for now. What's the difference between understanding something by having read everything on the Internet, and being able to predict what word comes next and give me information about how to engage with that world, and feeling it for real? And maybe that is helpful, because can't we have it both ways? We could do the world-modeling thing and have a sim. I have all this experience with simulations, right? Back in the day, when I started working on simulations, I did a lot of stuff for science and politics and education and corporate learning, and in the world of simulations, we always knew that they were representations. And then somehow we stopped; we started thinking, no, they're not, they're just the world. The world and the representation of the world are indistinguishable, which is very odd to me. So even a world model, in this advanced Yann LeCun kind of way, is still a representation of the world. And if we can agree on that, then I'm totally on board. And what that gives us is this incredible distance between what the AI knows and can do and what we as human beings know and can do. And yeah, you can wire it up to a robot and you can do all that kind of stuff, but it will still be a differently embodied entity. It won't be you or me. And that's the thing that we share as human beings in the world, which should give us comfort, I think.
Speaker 1:
[78:01] Ian Bogost, his book is The Small Stuff. It'll be out in July. You can pre-order it now from a variety of places if you go to the website. Are you doing the audiobook, Ian?
Speaker 5:
[78:16] This is under discussion. I want to do it because I have a chapter in the book about ASMR.
Speaker 1:
[78:24] Yeah, you could do ASMR.
Speaker 5:
[78:26] You have a great voice. I have a good voice for it, yeah. We've been debating this back and forth. Currently, the plan is that there is an audiobook and we have a professional narrator. I'm still a little bit jealous. I really want to speak the words, but I'd also love to share that experience. There will be an audiobook no matter what.
Speaker 3:
[78:49] I should know this, Ian. I don't. Did you record your audio books?
Speaker 5:
[78:52] No. No, I didn't do it.
Speaker 3:
[78:53] I have never recorded a book.
Speaker 5:
[78:55] In a previous life, I had a small publishing company many years ago, and so I've overseen audiobook production, but I've never recorded one. I know it's a lot of work.
Speaker 3:
[79:05] It's torture. It's an exquisite torture.
Speaker 5:
[79:07] No. I know how it's done. I've been on the other side of the board in the studio with it.
Speaker 3:
[79:12] Well, you definitely should do it.
Speaker 5:
[79:13] Well, I'm glad to have that data point.
Speaker 1:
[79:17] The Small Stuff: How to Lead a More Gratifying Life, read by Ian Bogost, I think would be a best seller. I would listen to it all the time. We've gotten disconnected from the physical world, but you can reclaim the sensory enchantment of everyday life. I guess there is a thread leading through a lot of your work.
Speaker 5:
[79:38] There really is. People ask me all the time, how did you get from where you started to where you are? It's like, well, one day at a time. I think the difference between me and some people is that, and this is going to sound haughty, I don't mean it to, I've really learned from the things I've had the opportunity to do, and I've changed my mind a lot. I am on this random walk through life, taking those lessons and trying to find new things to describe and tell people about. I'm just so grateful to have had that opportunity.
Speaker 1:
[80:14] Yeah. We're grateful to have had that opportunity. I hope you come back when the book comes out.
Speaker 5:
[80:19] I'll be back when the book comes out.
Speaker 1:
[80:20] Good.
Speaker 5:
[80:20] Absolutely.
Speaker 1:
[80:21] We would love that. Meanwhile, everybody, read Ian's writing in The Atlantic. Get his books. I want to get all of these object lessons books. These are so cool. It's such a great idea to honor these individual little-
Speaker 3:
[80:37] A hundred books on the shelf.
Speaker 5:
[80:39] Yeah, they look good together on the shelf too.
Speaker 1:
[80:41] Oh, I might. There's even one about the bookshelf.
Speaker 5:
[80:44] There is one about the bookshelf.
Speaker 1:
[80:45] It's kind of a meta experience. We got Taco and Burger.
Speaker 5:
[80:48] There's one about the hood. And they all have their own take on things, by the way. It's not like, let me tell you everything about bread or the cigarette lighter.
Speaker 1:
[80:56] Right.
Speaker 5:
[80:57] They're all very particular.
Speaker 1:
[80:58] Right. Neat. What a great idea. Thank you for your time, Ian.
Speaker 5:
[81:03] Oh, thanks so much.
Speaker 1:
[81:04] We feel very fortunate.
Speaker 5:
[81:05] Thank you for everything.
Speaker 1:
[81:06] Thank you. And have a great day.
Speaker 5:
[81:08] All right.
Speaker 1:
[81:09] Take care. We will have more with Intelligent Machines, but now no more fighting. Just peace.
Speaker 3:
[81:15] Oh no, I'm going to bring up Yann LeCun in a few minutes. We're going to, we're going to, we're going to.
Speaker 2:
[81:19] We'll always be fighting.
Speaker 1:
[81:20] I was waiting for Ian to just destroy you on that, but okay. Couldn't get him to do it. You tried. Thank you, Ian. You can hang up now. He's looking for the button. I can see him. I know there's a button here to get rid of these people. We'll have more Intelligent Machines in just a bit with Paris and Jeff. This episode brought to you by OutSystems, the number one AI development platform. OutSystems helps businesses bridge the enterprise gap to their agentic future. You've been looking for this, haven't you? Where the constraints of the past give way to unlimited capacity and scale. OutSystems enables companies to build AI agents that can actually do work, such as take actions, make decisions, and integrate with data, rather than just answering questions. OutSystems provides the only AI development platform that's unified, agile, and enterprise-proven. It's unified because you can build, run, and govern apps and agents in a single platform. It's agile because you can innovate at the speed of AI, but without compromising quality or control. And it is enterprise-proven, trusted by enterprises for mission-critical AI applications and durable innovation. OutSystems is the secret weapon behind the world's most successful companies. They're not just for small apps, they're for the massive complex systems that run banks, insurance companies, and government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip and replace nightmare. And OutSystems provides the safest and fastest way for an enterprise to go from, yikes, we need an AI strategy to, we have a functioning AI application. Stop wondering how AI will change your business and start building the agents that will lead it. Visit outsystems.com/twit to see how the world's most innovative enterprises use OutSystems to build, deploy, and manage AI apps and agents quickly and cost-effectively without compromising reliability and security. 
It's outsystems.com/twit to book a demo. outsystems.com/twit. We thank you so much for supporting the enterprise, sorry, the Intelligent Machine Program. I was, boy, he's great. I, why haven't you pushed to get him on before, Jeff? This is, you're muted.
Speaker 3:
[83:50] I put him on the list sometime ago, I think.
Speaker 1:
[83:52] Well, I'm glad we finally got him.
Speaker 3:
[83:53] He's brilliant. He's just amazing.
Speaker 1:
[83:55] Yeah. The Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, contributing writer at The Atlantic.
Speaker 3:
[84:00] As I said in our chat-
Speaker 2:
[84:01] Best voice we've had in the podcast.
Speaker 3:
[84:03] Isn't it?
Speaker 1:
[84:04] Yes, it's very peaceful.
Speaker 3:
[84:06] Yeah. It's very smart.
Speaker 1:
[84:07] I think the ASMR version of his book would be fantastic.
Speaker 3:
[84:10] Yeah.
Speaker 2:
[84:12] I mean, yeah, if he just wanted to record a version of him reading every single one of the object lesson books, people all around the world would be falling asleep to that in a complimentary way.
Speaker 3:
[84:23] In a complimentary way.
Speaker 1:
[84:24] In a very good way.
Speaker 3:
[84:25] Yes, in a very enjoyable way with the smile on the face.
Speaker 2:
[84:27] Yes, it would be delightful.
Speaker 1:
[84:28] We have so much news. I don't know exactly what to do here. There's no way. I knew that we wouldn't get through it all, but now we've only gotten through one segment. Two interviews in one segment. I guess one of the stories is that we can take a bunch of data points and squeeze them down to one story, which is, I think, that compute has become a really precious resource.
Speaker 2:
[84:56] Has become. It was always. We just weren't aware of it. It wasn't as top of mind, but it's always been a precious resource.
Speaker 3:
[85:05] It was like VC money and audience back in the...
Speaker 2:
[85:08] I was going to say, we're now getting to a point where these companies are realizing they have to be more explicit with their efforts to ration it.
Speaker 3:
[85:16] Yes.
Speaker 2:
[85:16] As consumers are using more and more of it.
Speaker 1:
[85:19] Well, that's why it wasn't constrained until now, because consumers weren't using as much, but it's ramped up very, very rapidly, and data centers can't be built fast enough. Anthropic has now decided, Ed Zitron had this scoop, that they're no longer going to allow you to use Claude Code, not only from the free Claude plan but from the $20-a-month Pro subscription; you have to be a Max subscriber, at $100 or $200 a month.
Speaker 2:
[85:46] Well, that's not exactly what Anthropic said. What happened was a lot of people on Reddit and Twitter over the weekend, or maybe it was Monday, noticed that Claude Code was removed from the $20-a-month Pro plan on some of the pricing pages on the Claude website. People started asking around, being like, well, I can still see it and use it on my Pro subscription. What's going on? Anthropic's head of growth, Amal Avisar, claimed that it was a, quote, small test of 2 percent of new Pro consumer signups. However, Ed and some other Claude users viewed that statement with suspicion, because they were wondering why support documents were changed, among other things. But that has since been reversed.
Speaker 1:
[86:37] Okay. It's interesting because the immediate response from OpenAI was to say, oh, and guess what? You can use Codex on the free plan. So have at it. I think the theory is this was a come-on; it was always subsidized. It was a come-on to get people to try it. But ultimately, they really want you to move, not just to the Max plans, because I think they still lose, Ed's been saying this too, that they still lose money on the Max plans.
Speaker 2:
[87:03] I mean, they lose money on all the plans.
Speaker 1:
[87:06] The Max plans were designed to get your data in for training, and they really want you to start using their API tokens, where you pay as you go. And in fact, now they are saying to enterprises, that's the only way you can use this. You have to pay for tokens, pay as you go, which, it's not demonstrated that they lose money on that, by the way. I think they-
Speaker 2:
[87:31] Similarly, there's absolutely no evidence that they don't lose money on it, and all anecdotal understanding of how much these sorts of things cost suggests otherwise. If Anthropic was making money on any of its subscriptions, I'd hope that it'd be shouting that from the rooftops, because it'd be an extraordinarily rare and unique thing that they'd be using to raise money on. Important context here is that during all of this, Amal Avisar, the head of growth, tweeted: when we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing.
Speaker 1:
[88:09] Right.
Speaker 2:
[88:09] Max was designed for heavy chat usage. That's it. I think this goes to what we're all talking about. The way that users are using Anthropic's monthly subscription products has really changed rapidly over the last couple of years, as more and more companies and more and more people are becoming Claude Code power users and using an immense amount of resources for $20 or $100-something a month.
Speaker 3:
[88:40] You also have to get to the point of rationalizing the business. It reminds me of the early days of the web, when VC money was used to create content cheaply and, more importantly, to market and get audiences inexpensively. It was marketing dollars, and in a sense, usage is marketing. It convinced people this is valuable, they spread it, it was an investment, but at some point, it's not rationalized on a P&L basis.
Speaker 1:
[89:05] Microsoft is doing something similar. GitHub has stopped accepting new Copilot individual subscriptions because they are having trouble meeting their service commitments. So, no new subscriptions at all. They've also adjusted the usage limits, as Anthropic did. I think this is a general problem going on.
Speaker 3:
[89:29] Turning away customers is not a great business strategy.
Speaker 1:
[89:32] Well, yeah, but worse strategy would be to take customers you cannot serve.
Speaker 3:
[89:37] Yes.
Speaker 1:
[89:38] And I think that that's what they're up against. And that's why I say it's not a financial crunch, it's a capacity crunch. And it's going to be an issue. It absolutely is going to be an issue. Although Michael Dell has something interesting to say about this. He says the demand for tokens is proof that we are not going to have an AI bubble, that there is demand for this, that businesses have accepted that it is valuable and something they want. The demand for tokens is in excess of the supply by a lot, he said. This was at the Semafor World Economy Summit yesterday. It would be hard for there to be a bubble right now, just because there's not enough supply.
Speaker 3:
[90:19] At this price point, we'll see when you get to Paris' point, if you get to the real cost, will that continue to be true?
Speaker 1:
[90:26] I would push back on that. I don't think we know what the real cost is.
Speaker 2:
[90:30] Yeah, that's the underlying argument in Jeff's point.
Speaker 1:
[90:32] But you're making an assumption that at the real cost that there's doing this at a loss, it's not at all clear.
Speaker 2:
[90:36] Well, there's been no evidence of that. None of these large AI companies have reported making a profit on any aspect of their business, because of the compute costs in particular.
Speaker 1:
[90:52] Well, they're not public, so they don't report, first of all.
Speaker 2:
[90:55] They have not in, sorry, in conversations with investors.
Speaker 1:
[91:01] You don't know what the costs are, because remember, they're also building-
Speaker 2:
[91:03] Are you genuinely arguing that you think that Anthropic selling access to Claude Code is profitable at $20 a month?
Speaker 1:
[91:10] Well, no, no, it's not at $20. That's why they're stopping.
Speaker 2:
[91:12] What about $100?
Speaker 1:
[91:13] What about at $200? It's not at either of those. The token cost is considerably higher. Most enterprises are paying tens of thousands of dollars, but you're asserting that they're losing money on that as well. I'm not saying that may not be so. That's why they're moving people to those. I mean, they may be. I just don't think we know, because they're not public companies and we don't have that information. Plus, the costs that they face are more than providing inference. The costs are buying all those GPUs and building data centers and so forth and so on.
Speaker 3:
[91:41] Suddenly, copyright suits.
Speaker 1:
[91:42] Yeah. Suddenly, that's not an insignificant cost either. Anthropic is doing something interesting with regards to Amazon. They are expanding their partnership with Amazon. They are setting up 5 gigawatts of new compute. 5 gigawatts is pretty significant. That's probably a couple of data centers. Amazon is committed to a further investment of over $25 billion in Anthropic. But Anthropic is saying, and yeah, in return, we're going to buy $100 billion worth of AWS. So I think that's kind of interesting. That's building on the 8 billion Amazon has already invested. So they put $5 billion in today, an additional $20 billion in the future, and they've already put in $8 billion.
Speaker 3:
[92:29] So I listened to the now infamous interview Dworkesh Patel did with Jensen Huang, because I am a student of Jensen Huang and listen to all of his performances.
Speaker 1:
[92:40] I think Dworkesh actually did a very good job. And I know you think that Jensen was prickly.
Speaker 3:
[92:45] No, I don't think so.
Speaker 1:
[92:46] I think he appreciated the chance to defend those questions.
Speaker 3:
[92:48] He said that he enjoyed it and I think that Dworkesh didn't.
Speaker 1:
[92:53] He asked salient questions.
Speaker 3:
[92:56] He tried but I think that Jensen in the end, I think, won the day.
Speaker 1:
[93:02] The most interesting thing Jensen said, and I thought this was really interesting and I'm not sure how I feel about it, is that it is foolish to hold back chips from China. Now, obviously, NVIDIA would love to sell every one of its chips, although, actually, I don't know if NVIDIA needs to sell any more chips.
Speaker 3:
[93:21] It wants CUDA to be everywhere. He made that very clear.
Speaker 1:
[93:24] I think this was his point, which I think is a very good point: if you create a supply constraint on China, they're going to invent their own way, and it's not going to end up being a universal capability. It's going to be restricted to China, and it's going to hurt America, it's going to hurt enterprise, because China will have its own better way. So it's a little self-serving, because CUDA is his and proprietary, and his chips are his and proprietary. But at the same time, it makes an interesting point.
Speaker 3:
[93:56] He also argues that they don't need the top chips because they have unlimited power. This goes to what Paris was saying earlier: what NVIDIA has to do to prove its value to its customers here, in finite data centers with finite gigawatts of power, is constantly increase the value you get for that power. He said, in China, they have unlimited power. They control it all; they can do it all. So they don't need the top chips, they don't need that level of efficiency, and they can compete with the US. He said, why would we give up this huge market? The other thing I didn't get, and it's just a bit of history that you probably know that I don't: Dworkesh was asking about the early days of Anthropic and how NVIDIA would have done more with them but couldn't at the time. Did you understand that part at all?
Speaker 1:
[94:48] No.
Speaker 3:
[94:48] Okay. Never mind. But I thought it was a very interesting interview. It was very interesting to see him debated, Jensen Huang. I think he's a brilliant communicator and a brilliant presenter.
Speaker 1:
[95:01] I think it's time that Jensen got in a situation where somebody challenged him, was smart enough to challenge him, and I think that Jensen helped him out.
Speaker 3:
[95:07] I think he did very well. I think he's a great debater, too, and it's just fascinating to watch him in operation.
Speaker 1:
[95:13] This is the problem with things like the Tech Bros podcast network and others: they're softball throwers. And I know that CEOs like softballs, but I think Jensen is one of those guys who might prefer, every once in a while, something a little juicy across the plate.
Speaker 3:
[95:28] We kind of heard that from Stephen Witt, who wrote the book on Jensen Huang, here on the podcast.
Speaker 1:
[95:32] Yeah, he's combative. Yeah, SpaceX has struck a deal. This is a wild deal, talking about deals, with Cursor, saying, we'll either buy you for $60 billion. Cursor is a very popular vibe coding platform. It's kind of an IDE, plus, as we found out recently, Claude Code in the background, though Cursor doesn't admit that. $60 billion, or, if we don't buy you, $10 billion for working together. Not sure what's going on there. A lot of these AI deals, two of- Interesting.
Speaker 3:
[96:04] Two Cursor people left for X, which explains the bridge to this. But I wonder why. Given that OpenAI has been out there buying stuff, and others have been buying stuff, if you're Cursor, is Musk where you want to go? Is that the best place to put your chips?
Speaker 1:
[96:24] Yeah.
Speaker 2:
[96:25] How did this deal come to be?
Speaker 1:
[96:27] This is just my personal opinion: I think Cursor is really a laggard. They don't have their own models. It's just a harness.
Speaker 3:
[96:40] It's the Perplexity of vibe coding?
Speaker 1:
[96:42] Yeah. OpenAI and Anthropic have their own harnesses. There are many, many open source third-party harnesses, like OpenCode and Core and Pi. And I just think Cursor is rapidly losing its moat, or has no moat at all. So I'd take the money and run if I were them, personally. And I wouldn't care who's giving it to me.
Speaker 3:
[97:02] I remember that old company from long ago whose screen saver delivered information. I can't remember the name of it. When your machine went to a screen saver, it showed news; it was the very early internet. And Rupert Murdoch offered them $400 million.
Speaker 1:
[97:15] Yeah.
Speaker 3:
[97:16] And they said, no, no, no. We're far more valuable than that.
Speaker 1:
[97:19] Yeah. No, you always take the money and run. If there's any lesson we've learned from the internet era, it's take the money and run. Google right now is doing its Next conference. And man, were there a lot of announcements out of Google. I don't know if we want to cover all of them, but one of the biggest is the eighth generation.
Speaker 3:
[97:40] Change log.
Speaker 1:
[97:41] The Google change log. Do we have that still? I don't know.
Speaker 2:
[97:45] No, those graphics are long gone.
Speaker 3:
[97:48] Oh, really? Oh.
Speaker 1:
[97:50] Sorry. Thank God. Google has announced its eighth generation of TPUs. These are the chips Google makes in competition with NVIDIA's.
Speaker 3:
[98:00] How do they compare to NVIDIA's?
Speaker 1:
[98:03] I think a lot of people use them. In fact, I think that's what Amazon's using. I may be wrong on that.
Speaker 3:
[98:09] Well, I think they all have to use all of them because, again, chips are in shortage. I mean, doesn't Google's hosting offer NVIDIA chips and CUDA? Yeah.
Speaker 1:
[98:19] Oh, I don't know. Yeah, maybe they do have some. So, at Cloud Next they're introducing the eighth generation of their custom Tensor Processing Unit. I'm not expert enough to know. I mean, I think generally the consensus is that the NVIDIA chips are superior, partly because they own CUDA, right? There's a certain advantage there. But I think it would be premature to write Google off in any of this.
Speaker 3:
[98:47] And Google just, we think, rumor has it, did a deal with Marvell to produce more chips.
Speaker 1:
[98:54] Right.
Speaker 3:
[98:55] And I think more inference chips. Training chips and inference chips.
Speaker 1:
[98:59] This is what Dell is saying, I think, is that there is such demand that it... You know, if it's a bubble, we're not at the end of it by any means. Also, to answer the efficiency question, these TPUs are designed to be much more efficient. The seventh generation TPUs were two to four times faster and 30% lower. I'm sorry, the eighth generation are two to four times faster and 30% lower. And this is unclear.
Speaker 3:
[99:28] Then what?
Speaker 1:
[99:30] Then the seventh are better than the sixth, and I guess the eighth are even better than the seventh.
Speaker 3:
[99:35] Well, better should be, but should be, yeah.
Speaker 1:
[99:37] Yeah, but they're going for efficiency, in other words.
Speaker 3:
[99:41] Symmetric.
Speaker 1:
[99:41] And this single TPU SuperPod has 9,600 chips and two petabytes of shared high-bandwidth memory. I wonder where they're getting the memory chips. With double the inter-chip bandwidth of the previous generation, and 121 exaflops of compute. So these are very powerful machines. It's funny how CUDA really has become a moat for NVIDIA: the software, not the hardware. They announced the Gemini Enterprise agent platform and developer tool built on Vertex. They said, and maybe this is interesting, 75% of the company's new code is AI-generated. 75%. I'm wondering, though, if they're using Claude or if they're using Gemini.
Speaker 2:
[100:35] I'm wondering if they're going to have a come to Jesus moment like Amazon did. When was it? A couple of weeks ago?
Speaker 1:
[100:41] Over their age.
Speaker 2:
[100:41] They're like, yeah, actually, all of our code has been AI-generated and it's a real problem.
Speaker 1:
[100:46] That it's a problem within engineering, because DeepMind has access to Claude and uses Claude, the competitor, and everybody else at Google is forced to use Gemini. Yagi says it's been a real problem in engineering at Google. Google is at great pains to deny it. When I asked Christina Warren, who used to work at DeepMind, she's now one of the hosts on MacBreak Weekly, she said, no, that's not been my experience; that's not accurate. But Yagi says, oh no, I'm hearing from a lot of people at Google who are very distraught that they're forced to use Gemini. And in fact, Google, he says, tried to take Claude away from DeepMind. And DeepMind said, you take it away from us, we're walking. We're out of here. Let's see, I'm running through these really quickly, trying to get at least some of these big stories out. YouTube is making its deepfake detection tool available to anyone at high risk of having their likeness abused, not just public officials and politicians.
Speaker 3:
[101:44] Good.
Speaker 1:
[101:44] Yeah. There are a bunch of new models. We kind of hinted at this. There's Claude Design and-
Speaker 3:
[101:50] This is the change log.
Speaker 1:
[101:52] Yeah, this is the change log. Claude Design from Anthropic Labs is a design tool aimed straight at the heart of Figma. ChatGPT has announced Images 2.0. Everybody's using Images. They say it can even pull information from the web to create your images. Salesforce has launched Headless 360 to turn its entire platform, all that business information, into infrastructure for AI agents.
Speaker 3:
[102:18] They're running a bit scared, I think.
Speaker 1:
[102:20] Two big new Chinese models. Alibaba's Qwen 3.6: agentic coding power. It's open weight, but you need a pretty hefty machine to run it. I can't run it on my Framework, that's for sure. You need probably a few 5090s, at least.
Speaker 3:
[102:40] Is there a system requirement?
Speaker 5:
[102:41] Is there a system requirement doc?
Speaker 1:
[102:44] Probably. When I was looking around to see if I could run it, I asked Claude, can I run it? It laughed at me. It said no. You don't have enough RAM, man. I have under 28 gigs. It's not enough, man. But it is a fully open source MoE model, a mixture of experts, which does mean it can be smaller, because all of the experts aren't running at the same time. Alibaba is claiming exceptional agentic coding capability, competitive with much larger models, and strong multimodal perception and reasoning ability. But you can run this in other places. You know, Ollama has a subscription; there are a number of places, OpenCode subscriptions. Kimi 2.6 has also come out, not open weight, but a very powerful Chinese model, and I've heard coders say some very good things about it. So everybody's chasing Anthropic's Claude and Codex 5.4 very, very hard. And I think this is good. Competition is always good, right? Did we talk about this last week? I think not. Sam Altman's World, you know, the iris scanning thing.
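The mixture-of-experts point above, that the active model can be smaller because only some experts run at once, can be sketched in a few lines of Python. The gate scores and expert counts here are purely hypothetical; real MoE models use a learned gating network over many experts.

```python
# Toy illustration of MoE routing: a gate scores every expert for a
# token, but only the top-k experts actually run, so the compute per
# token is a small fraction of the model's total parameters.
def route(scores, k=2):
    """Return the indices of the k highest-scoring experts."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

gate_scores = [0.1, 0.7, 0.05, 0.9]  # hypothetical gate outputs for one token
active_experts = route(gate_scores)  # only 2 of the 4 experts execute
print(active_experts)  # [3, 1]
```

With thousands of experts and k of, say, 8, the same routing idea is why a huge open-weight model can run with far less active compute than its parameter count suggests.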
Speaker 3:
[104:00] We didn't talk about it.
Speaker 1:
[104:02] Well, now Tinder is going to use them to make sure that you're a human, not a bot, when you're asking for dates. I'm not sure how this is going to be implemented, but Tinder users can put a digital badge in their profiles signaling, this is Wired writing, to potential suitors that they're a real boy or real girl.
Speaker 2:
[104:29] I mean, the dating apps already have this, but they just use kind of a Yoti-like thing.
Speaker 1:
[104:34] Right. This is like, your irises are getting scanned.
Speaker 2:
[104:38] Do you still get the Worldcoin?
Speaker 1:
[104:41] I don't know. Probably. Although I think you don't get as much Worldcoin in the US as you would get in other places. World says 18 million people have been verified with an Orb. So the dating pool is wide. Yep. But it's not just Tinder. Zoom is going to use it to verify humans in meetings. This is Sam Altman's company, one of his side bets; this is not OpenAI. DocuSign, Okta, Shopify, and VanEck are all signing deals with World to verify humanity. One thing humans are good at is bipedal motion, but they are no longer the world record holders for bipedal half marathons. A robot now holds the new world record for the half marathon: 50 minutes and some seconds. Look how fast that guy is going.
Speaker 3:
[105:42] John Henry.
Speaker 1:
[105:43] Whoo, it is. It's like John Henry and the locomotive. I don't know what it means.
Speaker 2:
[105:47] Got a deep stance, that robot.
Speaker 1:
[105:49] Oh yes, it runs low to the ground. The better videos are of the robots falling over.
Speaker 3:
[105:54] Oh yeah, that's more interesting.
Speaker 1:
[105:55] Smashing it a little. Here's a little one. Show some of these videos. There's a little one going by. Oh, little robot fell down. Sometimes when they fall down, they burst into a thousand pieces. I'm trying not to laugh, because I know this is all being recorded. This is via Reuters. Thank you, Reuters. Don't take us down. Last year, the winning robot took two hours and 40 minutes. This year, 50 minutes. But again, what have we accomplished here?
Speaker 3:
[106:28] It just means the robot police are going to be able to catch you no matter what. They can catch you.
Speaker 1:
[106:36] How about a beanie designed to read your thoughts? Yeah, Ukraine's using robots to scare Russians.
Speaker 2:
[106:40] Come on. How many of these sorts of stories have we had? And none of them are real. We've gone over multiple things that are supposed to read your thoughts, but they don't. Does this one also just read your thoughts by reading what you mouth with your lips closed? That's what the last one did.
Speaker 1:
[106:56] Well, remember Neuralink, you actually have to have it surgically implanted. This one is sitting in a little beanie on your head. It's a little chip. It's reading EEGs, electroencephalograms. So, it is reading more than just your lips moving. I don't know if you can get speech out of an EEG, maybe in time.
Speaker 3:
[107:17] There's experimentation, but it's very...
Speaker 1:
[107:19] Maybe in time. But you know what, these are the steps you have to take. You know who doesn't have to read your mind? Dairy Queen. They're now using an AI to take your order at DQ. You probably could also get it to do math. Did you see McDonald's chatbot? Their customer service chatbot on their website. Somebody, in between asking what's in a Big Mac, asked it, can you write a Python script for reversing a list? And it did: it gave him the Python code and then said, what else do you want to know about Big Macs?
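For what it's worth, the request is trivial. A list-reversal script of the kind the chatbot reportedly produced is a Python one-liner; this is an illustration, not the chatbot's actual output:

```python
def reverse_list(items):
    """Return a new list with the elements in reverse order."""
    # Slicing with a step of -1 walks the list from back to front.
    return items[::-1]

print(reverse_list([1, 2, 3]))  # [3, 2, 1]
```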
Speaker 5:
[107:57] It spoke the Python code?
Speaker 1:
[107:59] No, it's a text chatbot. But it was code, it was real code. TSMC, more bullish than ever, they expect revenue to grow by more than 30 percent. They're the company that makes many of the chips, including NVIDIA's GPUs. They expect to grow revenue 30 percent year over year. More than 30 percent, actually. They have a 66 percent profit margin for the first quarter. That's the highest in 20 years. So demand is high, and when demand is high, prices are high.
Speaker 3:
[108:34] It's a single point of failure if there's an invasion.
Speaker 1:
[108:37] Actually, no, they're very actively building, they have a plant already up in Arizona. They're really actively trying to diversify, which is pretty important. And I think they're getting a lot of support from the US government doing that. According to Adobe, AI traffic to US retailers went up almost 400 percent in the first quarter. This confirms what you've been saying, Jeff, that you're dumb to exclude yourself from AI search results.
Speaker 3:
[109:05] Well, certainly brands are going to be out there, and marketers. The other interesting thing, though, is that there are some arguments, I didn't put this stuff in there: huge projections for OpenAI's advertising opportunity, but so far, chatbots do not perform well for advertising. Oh, interesting. I think we have to get to an agent-to-agent world before it starts to really work.
Speaker 1:
[109:28] Well, and that's my new, this is the drum I'm beating from now on: if you are making a website, if you're making a tool, a product, an operating system, an app, you darn well better have an agentic-facing UI, an API, or something that an agent can interface with. Because I think people are just not going to look at you. If you can't be controlled by an agent, if you can't be searched by an agent, you will not exist. And that's true for retail.
Speaker 3:
[109:58] AIO.
Speaker 1:
[110:00] AIO, that's what they call it, not SEO. Stanford's AI Index finds China has nearly closed the performance gap with the US. This is related to Kimi and Qwen, despite spending 23 times less. Maybe this is what you were talking about, Jeff, the unlimited power China has. They lead in AI patents, 69% of global filings, publications, industrial robot installations, nine times the US rate, and, Jeff, energy infrastructure. And the brain drain has slowed considerably. AI talent migration to the US has dropped 89% since 2017.
Speaker 3:
[110:41] That's the huge harm. And that's another thing Jensen Huang said: half the best AI scientists are Chinese. And now we're shoving them away.
Speaker 1:
[110:49] Yep. We dominate with private AI investment because we've got the venture capitalists. China investing 12.4 billion in 2025. We invested a whopping 285.9 billion dollars in AI. California alone, 218 billion of that, more than 75% of the US total.
Speaker 3:
[111:11] You know, I recently read a history of Bell Labs, and it really struck me how history could have turned out differently, that so much of the great innovation, including especially the transistor, happened at Bell Labs in New Jersey, and the serendipity of how California ended up being the place instead of Jersey. Man, Jersey should have been the Valley.
Speaker 1:
[111:33] Really?
Speaker 3:
[111:35] Yeah, there was so much happening here on the East Coast.
Speaker 1:
[111:38] Oh, it was really shockingly collapsed.
Speaker 3:
[111:41] It was. Well, I also just read about, I recommend this right now, I was going to do it as a take. I just finished a wonderful book, hold on here: Conquering the Electron, by Derek Cheung and Eric Brach.
Speaker 1:
[111:55] Is this about the transistor or?
Speaker 3:
[111:57] It starts with the Edison effect. It goes all the way through. Cheung worked in chip making, and so there's more of that stuff at the end.
Speaker 1:
[112:07] This is a Jeff Jarvis subtitle: The Geniuses, Visionaries, Egomaniacs, and Scoundrels Who Built Our Electronic Age.
Speaker 3:
[112:14] So it's 12 years old, but it's really good. It's very educational.
Speaker 1:
[112:19] I'll have to read it, yeah.
Speaker 3:
[112:20] And so it was the fact that Shockley went to California. He was a horrible manager from Bell Labs, and he was a credit hound.
Speaker 1:
[112:34] Not a great person in any regard.
Speaker 3:
[112:37] But he hired the best people, and there was the so-called Traitorous Eight, who went and created Fairchild, and then from Fairchild created Intel and so on and so forth. And it was really that seed that created Silicon Valley.
Speaker 1:
[112:49] Yeah. Well, Hewlett-Packard always gets credit because that was the first garage, in the '30s. And Stanford gets some credit because there was a lot of stuff going on at Stanford at the time. All right. A couple more bad things. AI bad things. I don't know if this is a bad thing. The first movie with a fully AI-generated performance approved by the actor: Val Kilmer, who's passed away, will be AI-generated in the movie.
Speaker 2:
[113:19] How did he approve it if he's passed away?
Speaker 3:
[113:21] Before his family did.
Speaker 1:
[113:23] Or his family did.
Speaker 2:
[113:23] Okay, so his estate.
Speaker 3:
[113:24] I don't know that he did.
Speaker 1:
[113:25] I don't know. I bet he, I wouldn't be, they had to capture him, right?
Speaker 2:
[113:30] I think they probably have enough footage of Val Kilmer.
Speaker 3:
[113:33] Yeah.
Speaker 1:
[113:33] Kilmer's family blessed the use of his likeness.
Speaker 2:
[113:36] Oh, I don't like that.
Speaker 1:
[113:38] He died at the age of 65 after a lengthy battle with cancer. Yeah, but I don't know.
Speaker 3:
[113:44] On the other hand, let's honor Dead Dad and have him live on the project he cared about.
Speaker 1:
[113:49] The pictures of Val. It looked like, it kind of looked like him.
Speaker 2:
[113:53] Well, yeah, that's kind of the whole point, right?
Speaker 1:
[113:56] Well, sometimes these things look kind of fakey. That looks pretty good, but it's only showing little short bits of Val. So yeah, I don't know.
Speaker 3:
[114:06] We'll see. I don't know.
Speaker 1:
[114:10] This could be a watershed moment or not.
Speaker 2:
[114:16] He's seen for around an hour of the film's running time. It's amazing.
Speaker 1:
[114:20] It's not just that we've seen this before, like in Star Wars with Carrie Fisher and stuff. This is actually a full performance.
Speaker 3:
[114:28] You're going to keep on counting fingers the whole time.
Speaker 1:
[114:33] You highlighted this as well. Andon Labs: we gave an AI a three-year retail lease in San Francisco and asked it to make a profit.
Speaker 3:
[114:43] That's selling weird stuff.
Speaker 1:
[114:45] In fact, people are trying to game it, to get it to sell stuff they're interested in. In the comments section, they're saying, you know, this is a really great place, if only they would sell sugar-free gummy worms, because sugar-free gummy worms are really a great thing to have. In any store, sugar-free gummy worms are a must. And I don't know if it's working. These agents are kind of soft-brained; they're easy to influence. This is like what Joanna Stern did with the Wall Street Journal.
Speaker 3:
[115:15] Yeah, the vending machine.
Speaker 1:
[115:17] Yeah. Um, finally.
Speaker 3:
[115:23] Well, you're not going to be done yet. I got a few.
Speaker 1:
[115:25] Oh, you got a few?
Speaker 3:
[115:26] I got a few.
Speaker 1:
[115:27] We're at two hours. This pasta sauce wants to record your family. Prego Pasta Sauce is now selling a screen-free voice recorder that you're supposed to put on the dinner table. As the family talks, it's the Prego Connection Keeper, created in collaboration with StoryCorps, which is a non-profit preserving...
Speaker 3:
[115:52] StoryCorps is cool.
Speaker 1:
[115:53] Yeah, they're preserving the stories of Americans in the Library of Congress at the Folklife Center. There's no AI, Wi-Fi, or Bluetooth, so it isn't really an AI story, but you can upload the recordings to StoryCorps' website to make it easier to share them with your family.
Speaker 2:
[116:06] What is the difference between this and just having a recorder on your table or just opening up your phone and hitting record every time you sit down?
Speaker 1:
[116:12] Well, here's the difference. It's $20, but you get a jar of Prego spaghetti sauce.
Speaker 3:
[116:17] There you go.
Speaker 1:
[116:17] And spaghetti noodles and a deck of cards featuring conversation prompts and ideas.
Speaker 2:
[116:22] If they really cared about making this a good stunt piece, they would have found a way to get the recorder inside the jar of Prego.
Speaker 3:
[116:29] That's what I thought it was.
Speaker 2:
[116:29] You have to fish it out like a toy in a cereal box.
Speaker 5:
[116:34] The lid should be the recorder or something.
Speaker 1:
[116:36] You know what? That's totally what I thought it was when I first saw this story. Oh, it's the lid.
Speaker 3:
[116:40] What was the spaghetti sauce commercial? It's in there?
Speaker 1:
[116:43] It's in there. It's the recorder in the sauce. It's the recorder in the sauce. All right, you guys, you get to pick some stories. Go ahead.
Speaker 3:
[116:51] All right, I got one to make you angry, and then one I want to hear about.
Speaker 1:
[116:54] I was just calming down. My blood pressure was just getting to normal.
Speaker 3:
[116:58] So line 108 is LeWorldModel.
Speaker 1:
[117:05] Oh, this is another one of those Yann LeCun things?
Speaker 3:
[117:07] It's Yann LeCun, and the paper is hard to understand, but below that is a very good explanation. And I also asked Gemini for an explanation, and it was similar. So if you go to the next slide, what's interesting about this is the world models have a problem, a representation collapse. They kind of consider everything the same, and they get distracted by things like a light bulb and don't understand what's what. Attention must be paid. But for reasons and ways that I cannot explain, they came up with ways to get it to pay attention to the important stuff. In a cleaner model, the efficiency, this is what got me, the efficiency gains are striking. The model has around 15 million parameters, trains on a single GPU in a few hours, and can plan up to 48 times faster than larger foundation model-based world models. And so you start to see development here in the world model side of things. And this doesn't say anything against LLMs, it doesn't say anything against the scale world, but it is a competitive worldview, which I think is really important that we get research going into other areas.
Speaker 1:
[118:10] It's a little deceptive because, I mean, I can train a model, I have been training models. Here, you can train small constrained models easily on simple hardware.
Speaker 3:
[118:20] These are trained on video. These are trained not on text, but on video to understand the objects.
Speaker 1:
[118:23] So what I'm training is, I don't want to have to say Alexa or any of those words to talk to my agent. I want to say, hey, Kenobi. So I was going to go, hey, Obi-Wan, or actually I wanted to go, help me, Obi-Wan Kenobi. But it said that's not good, and we actually tried it. What we did is we generated, with speech synthesis, thousands of audio files and trained on those, plus some audio files that were similar but not it, so it would have something to reject. But I think it was a bad phrase. And I was using a text-to-speech model called Piper that wasn't very good. So I'm going to move to a better model that I use, called Kokoro. And I'm going to make it: hey, Kenobi. Claude and I went back and forth on this, and it said, you know, you need some fricatives and some consonants in there. It'd be better if you started with something clearer.
Speaker 2:
[119:22] Well, a fricative.
Speaker 1:
[119:23] A fricative. You need a fricative.
Speaker 3:
[119:25] I like that. Hey, fricative.
Speaker 1:
[119:27] And so I said, what about hey, Kenobi? And it said, that's really good because of the K in Kenobi, the B, and the Hey. It said, that's going to be really good. So I haven't done it yet. I said, you know, would it be better if we used a synthetic model or if I said it? It said, well, you're going to have to say it 200 times, and you're going to have to say another 200 things that are close but not the same. I said, it's worth it. If it works, it's worth it. It said, you know, if you used Alexa, it'd be a lot better. I said, no, I don't want to use Alexa. So, in other words, if you're doing something constrained, you don't need a thousand GPUs and eight trillion parameters, because you're not training on a giant universe. You're training on a constrained universe. And I'm sure that's what's going on here. These world models are simpler models. It's not a universal world model where it's like, well, you now know everything that happened in physics. We've had physics models for years; games have them. So, I think Yann has something he wants to sell. I'm not against it.
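The data-generation step described above, many synthesized positives for the wake phrase plus "near miss" negatives for the model to reject, can be sketched in a few lines. Everything here is hypothetical: the near-miss phrases and counts are illustrative, and in the real pipeline each phrase would be rendered to audio by a TTS engine such as Piper or Kokoro before training.

```python
# Hypothetical sketch: build labeled (phrase, label) pairs for a
# wake-word classifier. Label 1 = the wake phrase, 0 = a hard negative
# that sounds similar, so the model learns what to reject.
WAKE_PHRASE = "hey kenobi"
NEAR_MISSES = ["hey canopy", "hey kimono", "okay kenobi", "hey ken"]

def build_label_set(n_voices=5, n_takes=40):
    """Return phrase/label pairs to hand to a TTS engine and trainer."""
    examples = []
    for _voice in range(n_voices):        # vary the synthetic voice
        for take in range(n_takes):       # vary the take per voice
            examples.append((WAKE_PHRASE, 1))
            # Cycle through the hard negatives so they stay balanced.
            examples.append((NEAR_MISSES[take % len(NEAR_MISSES)], 0))
    return examples

data = build_label_set()  # 400 pairs: 200 positives, 200 hard negatives
```

The point of the hard negatives is exactly what the conversation describes: a small classifier trained on a constrained universe of sounds, rather than a frontier-scale model.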
Speaker 4:
[120:37] I think it's a competitive view, Yann and Fei-Fei Li. I wanted to get Paris' view, Paris, on the Peter Thiel-backed startup called Objection. Did you see this?
Speaker 2:
[120:51] Oh, vaguely. I mean, this is a startup that you can pay inordinate sums of money to.
Speaker 4:
[120:58] Well, it starts at $2,000, yes.
Speaker 2:
[121:00] Yes. Basically, I think it's a mix of, they claim it's a mix of AI and humans that then will do research to disprove or investigate a claim you set it to, and some of the arguments or case studies that they're putting out there are things like basically trying to correct the record if you believe an article is false, which I just think is very laughable and speaks to these people not really understanding how journalism works.
Speaker 4:
[121:32] Exactly. Or facts. Yeah. Thinking that, oh, it degrades anything that comes from an unnamed source, so all investigative journalism goes away. It most trusts official documents. Well, we know how well that works. Then the AI is going to be the tribunal of truth in the end, and be presented with this information. The idea is that it replaces both journalism and the courts.
Speaker 2:
[122:03] Part of it is you're able to assign truth scores to every single reporter or outlet, and your truth score will go down if you do terrible things like ignoring objections from objection.
Speaker 3:
[122:19] There's no objective truth then, right? That's just whatever it says. There's no objective truth.
Speaker 1:
[122:24] So here's a sample honor index for a fake journalist named Sarah Chen. She is 81 percent trusted, in the top 18 of tracked authors. This is your score. Corrections. Well, that's one good thing, that retractions are issued within 48 hours, so they publish corrections.
Speaker 4:
[122:50] Yeah, there's something called the Trust Project.
Speaker 2:
[122:51] Yes, but that's different than three objections ignored for over 30 days, takes your score down, negative 24 points, while correcting one published error and doing a retraction within 48 hours, only takes it up 12 points.
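The asymmetry Paris describes can be made concrete with a few lines of arithmetic. The numbers come from the discussion; whatever formula Objection actually uses is not public, so this is purely an illustration of why the scheme is gameable.

```python
# Hypothetical scoring based on the numbers cited in the discussion:
# ignoring three objections for over 30 days costs 24 points, while a
# correction with a retraction within 48 hours earns only 12.
PENALTY_IGNORED_BATCH = -24   # per batch of three objections ignored > 30 days
REWARD_TIMELY_CORRECTION = 12 # per error corrected/retracted within 48 hours

def score_delta(ignored_batches, timely_corrections):
    """Net change to a reporter's 'truth score' under these rules."""
    return (ignored_batches * PENALTY_IGNORED_BATCH
            + timely_corrections * REWARD_TIMELY_CORRECTION)

# One ignored batch outweighs one diligent correction two to one, so a
# coordinated flood of objections drags a score down no matter what.
print(score_delta(1, 1))  # -12
```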
Speaker 4:
[123:14] Gee, do you think you could game this?
Speaker 2:
[123:16] Yeah.
Speaker 4:
[123:17] Yeah, so it's, it's ignorant.
Speaker 1:
[123:21] Is the principle wrong though? I think the principle is wrong, right?
Speaker 3:
[123:24] Yeah.
Speaker 2:
[123:25] I mean, everything about it is wrong.
Speaker 4:
[123:26] Well, that's what Benito was trying to say. There's context, there's framing, there's nuance, and this erases all of that.
Speaker 3:
[123:38] 2 plus 2 is 4 no matter what anybody says.
Speaker 1:
[123:40] And the facts is facts.
Speaker 2:
[123:43] And I mean, it gets to something that Ian touched on a little bit in our interview earlier, which is one downside of our increasingly interconnected world, is that when you put anything out into the world, especially if it's journalism or something, suddenly, or even if it's just a post, suddenly, there's this whole class of people who believe that that means they have a right to be in your inbox or at you on things and demand an answer and a response in your time. And I don't want to conflate this statement with saying, oh, people who publish things or journalists or quasi-public figures have no obligation to their readers. Obviously, that's not true. But I worry that if something like this were to take off, one of the downsides is you'd suddenly publish an article that's maybe a bit controversial. Instead of being flooded with spam and hate emails, you'll be flooded with spam and hate emails and hundreds of objections that if you do not respond to on every single fact in your article, your credibility is taken down. It's just silly.
Speaker 1:
[124:56] It's totally weaponizable. In other words.
Speaker 4:
[124:59] Yes, exactly.
Speaker 1:
[125:00] Yeah.
Speaker 4:
[125:00] One other quick one, real quick. I thought this was an amazing line, 133. A Tokyo court ruled that somebody who published movie and anime spoiler articles committed copyright infringement. How stupid is that, number one. And number two, it's criminal: he got jailed.
Speaker 1:
[125:19] What? Spoilers. Go to jail.
Speaker 2:
[125:25] This is an example of all the things I hate. I mean, listen, I don't understand how maniacal spoiler culture, or anti-spoiler culture, has gotten lately. I understand not wanting to be spoiled about things, but it has gotten to the point where people on, say, Reddit will get mad 48 or 72 hours after, say, the winner of Drag Race was announced. I open up Reddit and suddenly I see posts spoiling the winner of Drag Race. I'm like, I'm sorry, baby, that happened 72 hours ago. Have you heard of time passing?
Speaker 3:
[126:07] Why are you on Reddit then, if you don't want to know this information?
Speaker 2:
[126:09] Why are you on the internet?
Speaker 1:
[126:11] Why are you not ghost photocalling? It's another thing to send somebody to jail for a year and a half.
Speaker 3:
[126:17] That's insane.
Speaker 1:
[126:18] For publishing a spoiler.
Speaker 4:
[126:20] This has copyright gone mad.
Speaker 1:
[126:22] But you know, I think Japanese courts are very aggressive about copyright. I think about Nintendo, which is extremely litigious and always seems to win. I think this is something typical of Japan. But that's ridiculous.
Speaker 3:
[126:37] Yeah, it feels like there's something cultural going on here.
Speaker 1:
[126:40] It's cultural. The movie was Godzilla Minus One, which came out, by the way, in 2023. The article, 3,000 Japanese characters in length, was a complete, detailed plot summary of the movie. The makers of the movie, Toho, the largest studio in Japan, are apparently famous for stringent trademark protection. Interesting. They went after him. He also wrote an article about the anime Overlord that aired in 2018. So it's not even recent stuff. When does the spoiler-
Speaker 2:
[127:24] The thing that apparently ended up being the smoking gun for prosecution is the fact that the website ran ads. These spoiler articles, therefore, were not only stealing copyrighted work but earning money through it. But the fascinating bit, this writer, Hassan Nasir at Tom's Hardware, writes, is that these pieces were all written by outside contributors. Takeuchi simply operated the site, though he did earn revenue from it, but he's still the one that has gone to jail over this.
Speaker 1:
[128:01] So, this is an interesting question. This actually applies to this whole question of copyright with AI. If I read a book and then write a summary of it, because I think spoilers might not be a good way to describe this, since we think of spoilers as telling you about the plot twist. He wrote detailed plot summaries, you know, more like Classic Comics or CliffsNotes. We don't think that's a copyright violation, but it is kind of what an AI does, right? It's not re-creating the book, it's summarizing it in its head so that it can use that information.
Speaker 4:
[128:37] Which is why I argue that it has a right to learn the same way we do.
Speaker 1:
[128:41] Yeah. Apparently not in Japan. I wonder. Yeah.
Speaker 4:
[128:48] Paris, you had something too.
Speaker 2:
[128:49] Oh, my last thing is just, I don't know if you guys saw this week that Reuters broke a story that Meta is now going to be recording all of its employees' clicks and keystrokes on its computers to turn all of that into AI training data. Of course, the Metamates are up in arms about this: their precious keystrokes and clicks, suddenly everything happening on their computer screens, will be logged by their employer and used for training, which is-
Speaker 1:
[129:21] Well, are they mad about the training or that it's recorded? Because there's a historic right to record everything. Every employer has the right to record everything.
Speaker 2:
[129:28] I mean, yeah, I think they're mad about the fact that suddenly everything they do on their computer is being recorded and logged somewhere.
Speaker 1:
[129:36] Chances are it's being recorded and logged for every employee.
Speaker 2:
[129:39] No, because I believe part of this is they're having to install new software for all of this. Well, a lot of what is going on in their computers, I'm sure, is...
Speaker 1:
[129:47] Almost all businesses, I have to tell you, almost all businesses do this because they're liable for what's done on their premises with their hardware on their internet connection. And so almost all businesses record what you do on their computers.
Speaker 2:
[130:00] Well, they said that they are installing new tracking software on their computers to track the mouse movements and keystrokes in order to train AI agents.
Speaker 1:
[130:08] So that's the question. Are they mad about the AI training or are they mad about being recorded?
Speaker 2:
[130:13] Good question. They are mad about being recorded and I think some of them are also mad about the AI training.
Speaker 1:
[130:17] Bad news. I bet you anything that has always recorded it. Almost all companies do that. Big companies all do that. You know, you go down the hall to IT, they can look at what's on any screen in the premises at any time. They have to.
Speaker 2:
[130:32] They're protecting themselves. But usually when that's happening, something pops up on your screen showing that you're being remote accessed. I think people are...
Speaker 1:
[130:41] No, no. There's no requirement. The law absolutely is clear on this.
Speaker 2:
[130:45] Courts have not held this for years. I mean, I'm not saying that they're legally required to. I'm saying that the general practice an employee will see when you work at one of these large companies is that something pops up on your screen to show remote access. That has happened at Condé Nast. That's what they did at your magazine.
Speaker 1:
[131:01] They're being very good about it.
Speaker 2:
[131:02] That's what it was.
Speaker 1:
[131:02] Because I talked about this a lot on the radio show for years, because people were up in arms about it. The courts have been very clear. It's nice if you put it in your policy and tell people, but you have no requirement to do so.
Speaker 3:
[131:16] I think people have always assumed they were being recorded. I think the difference here is that now it can be queried and now you can find out what people are doing. No, but queried by AI.
Speaker 1:
[131:24] That's always been the case. That's always been the case.
Speaker 3:
[131:28] Now it's being analyzed.
Speaker 1:
[131:29] Especially in a company like Facebook, I'm sure.
Speaker 2:
[131:31] It seems like there's something new going on here because they are being notified. They are putting new software on their devices is what the reporting is.
Speaker 1:
[131:40] We're small enough that we probably didn't really have to. But for many big companies it's a very common practice. It's actually very nice of Condé to warn you. They don't even have to put it in their policy.
Speaker 2:
[131:51] It's just what pops up on the top right bar.
Speaker 3:
[131:56] You should just always only do work on your work computer anyway. That was never the idea to do anything other than work on your work computer.
Speaker 1:
[132:02] That was always the message I gave people on the radio show: you should never do anything personal on your company phone, your company computer, at work, or on the company internet, because they have the absolute right to spy on you. What's funny is the law says if you're on a company phone and they listen in on the phone call and they hear you having a private conversation, they're required to hang up. They're not allowed to listen to a private conversation on the phone. That's because older laws protected privacy on phones. Those laws never applied to digital technology; they never got around to making those laws. Unless something's changed, which I don't think it has. I used to talk about this all the time because people would always call up saying, hey, I got in trouble for this. It's like, dude. And I always said, it would be really good if the company told you, but they don't have to. They absolutely don't have to. And I bet you, knowing Meta, that's what they've always done. And it's their full right, right? To do that and to train AI with it. That's your work product.
Speaker 3:
[133:14] I think that the employees are probably thinking like, yeah, you're training the AI to replace you. That's exactly what's happening.
Speaker 1:
[133:21] That's their right as well.
Speaker 4:
[133:23] I think it's a new implication of what's possible because they have it.
Speaker 1:
[133:27] Yeah. Look, I don't blame them. I wouldn't be happy about it either.
Speaker 2:
[133:32] Because people in the chat seem to think I'm an idiot: I am in no way saying that you should have personal and private access to your work computer. Work computers are obviously work computers, owned by your employer, and the things you do on them are monitored, as almost any corporate job has a pop-up that says so when you're logging in.
Speaker 4:
[133:56] Right.
Speaker 2:
[133:58] I'm just noting that there was an article this week that Reuters said that-
Speaker 1:
[134:04] Well, that's what I was curious about is what the people were upset about. Was it the AI training or the spying or both? Because the AI training is new. But again, I think it's probably-
Speaker 4:
[134:14] It's the moment too.
Speaker 3:
[134:15] Also, we always assume that we're being spied on by our company, but being told explicitly that we're spying on you is also different.
Speaker 4:
[134:23] It's better.
Speaker 1:
[134:24] It is better.
Speaker 4:
[134:25] Better to be told.
Speaker 1:
[134:26] I think so. I think you were for-
Speaker 3:
[134:27] To be totally conscious of it.
Speaker 1:
[134:28] That's just like the companies, Paris, that told you. Because I don't think a lot of companies do. I really don't. I think a lot of companies just spy on you. Well, anyway, I hope somebody has learned something here today, and they now know that they should stop buying drugs while they're at work.
Speaker 2:
[134:50] Yeah. Wait till 5:30 PM to do that. Wait till you get home.
Speaker 4:
[134:54] Use your phone.
Speaker 1:
[134:57] Not your company phone. Do you have a company phone and a private phone, right Paris? I think you said that.
Speaker 2:
[135:02] I mean, I have a work phone number and a personal phone number. So both are paid for by me because I don't want to mix any... I want to be able to take my work phone number with me wherever I go.
Speaker 1:
[135:17] Right.
Speaker 4:
[135:19] Ergo signal.
Speaker 1:
[135:21] You're watching Intelligent Machines. Guess what's next? Picks of the week. Before we wrap things up, Paris Martineau from the very ethical Consumer Reports, who I'm sure would tell you if they were spying on you. Right. Of course they would. I'm happy to hear that Condé does that. That's good.
Speaker 4:
[135:38] Even Conde.
Speaker 1:
[135:39] Yeah. And Jeff Jarvis, whose personal emails are incredibly dull. So spy away, right?
Speaker 4:
[135:46] My life is an open blog.
Speaker 1:
[135:50] Author of many wonderful books like The Gutenberg Parenthesis. I didn't realize Magazine was part of that 111-book series.
Speaker 4:
[135:56] Yeah. Oh yeah.
Speaker 1:
[135:56] Object Lessons. What an interesting idea that is.
Speaker 4:
[136:00] It really is. And the ideas they come up with. There's others I'm dying to write. I've got an expert to get out, but then there's others. It's such a fun format to work in.
Speaker 1:
[136:08] What's the retail price of Magazine?
Speaker 4:
[136:11] $22, but you can buy it online for $11.
Speaker 1:
[136:13] $11. So it would cost me $1,221 to buy all 111 of those. Because I would love that on the bookshelf.
Speaker 3:
[136:25] There's no bundle? There must be a bundle, right?
Speaker 4:
[136:28] Well, $11 times 111. Bookstores are hard. They can't get the whole series into bookstores, which is difficult. I couldn't get a London magazine store to carry the book. Like, hello?
Speaker 2:
[136:39] Yeah.
Speaker 1:
[136:40] It's about magazines.
Speaker 4:
[136:42] You're a magazine store.
Speaker 2:
[136:44] You gotta have one.
Speaker 4:
[136:45] Eileen G'Sell, I don't know how to pronounce it, G apostrophe S-E-L-L. She just did one on lipstick. And she's put herself on a national tour promoting it. It's been fun seeing all the places she's talking about it.
Speaker 1:
[136:56] Actually, related to what we were talking about, I forgot, this was one of the stories I had: defunct startups are liquidating their Slack archives, Jira tickets, and email threads by selling them to AI companies, and finding a whole new revenue stream.
Speaker 2:
[137:15] Oh my God.
Speaker 1:
[137:17] So that's exactly it: every email you wrote, every Slack message, every Jira ticket. The company is Simple Closure. When Shana Johnson was winding down CLO24, the transcription and captioning company she ran as CEO, she discovered an unexpected asset. This is from Forbes: it's operational exhaust, the digital leftovers that piled up across years of work and collaboration. She sold everything to Simple Closure: everything on the hard drive, 13 years of Slack jokes, Jira tickets, emails, a multi-terabyte Google Drive, as training data. And she got hundreds of thousands of dollars for it.
Speaker 3:
[138:12] This is brilliant.
Speaker 2:
[138:13] Hundreds of thousands of dollars is actually impressive. I thought it would have been much less.
Speaker 1:
[138:17] It's worth something. You know, I'm just thinking our Google Drive is more than 100 terabytes.
Speaker 3:
[138:24] No, I'm thinking not just selling our data. We could even broker deals; you'd be a middleman selling this data for the companies.
Speaker 1:
[138:34] See, he's already thinking about a business.
Speaker 4:
[138:36] Why wait to be defunct?
Speaker 1:
[138:39] You know what it really is? They've run out of all the public data. There's nothing more to ingest. They ingested it all, and now they need something, something new. That's what Ilya Sutskever says.
Speaker 4:
[138:55] That's just to inspire synthetic data.
Speaker 1:
[138:59] Yeah. All right. Last call for picks of the week in just a moment. You're watching Intelligent Machines. I mentioned Paris, I mentioned Jeff. Thank you for being here. What I didn't mention is our fabulous Club TWiT members, who make this show possible, literally make this show possible. Club TWiT is growing rapidly; soon half of our operating expenses will come from the club. That means advertising only covers about 50 or 60 percent of our costs: Benito's salary, the lights, the cameras. It's more expensive than one would think to have 15 podcasts going all week. And you know, advertising used to cover most of it. It doesn't anymore. Thank goodness we've got the club. Lisa says that to me every morning: thank goodness we have the club. If you want to be part of our salvation, twit.tv/clubtwit. It's, I think, a good deal. Ten bucks a month. You get ad-free versions of all the shows. You wouldn't hear me begging for money. You wouldn't hear any of our ads. You would get access to the Club TWiT Discord, which is full of really interesting people and great conversations. I really like it. I think one of the advantages of being in something that people pay to be in is it just raises the bar. There's great content in there, not just about the shows, of course, during the shows, but also around other topics that geeks are interested in. There are also a great many things we do in the club, special programming. Tomorrow, at 9 a.m., I'm going to interview Chris Stokel-Walker for a future Intelligent Machines. You can join us for that. The mayor of San Jose on Monday, April 27th; he's running for governor of California, but he has a lot to say about AI and city government. We also do a few shows like iOS Today, Hands-On Tech, the AI User Group coming up May 8th. That's a really great gang of smart people who are using AI every single day. We talk about tricks, tools, tips, show and tell, just like a real user group.
Chris Marquardt's Photo Time, our Google I/O coverage coming up May 19th. Micah's Crafting Corner on the 20th. These are all special programming we do just for the club. Would you like to join us? We'd love to have you. And we really need your support. twit.tv/clubtwit. Thank you. Thank you very much, club members. And of course, the TWiT Network Bikini Calendar. What? Pretty fly for a cis guy. We don't have it.
Speaker 4:
[141:29] Leo, we don't want to see you in a bikini.
Speaker 1:
[141:31] You don't want to see me.
Speaker 2:
[141:32] Yeah, it's 12 months of Leo.
Speaker 1:
[141:35] It's all me all the time.
Speaker 2:
[141:37] And the TWiT Tattoo features heavily in ways you don't want to see.
Speaker 1:
[141:44] Paris, pick of the week.
Speaker 2:
[141:47] This weekend, I went to the Depths of Wikipedia live show. What? Which was a great show put on by Annie, I guess I should have remembered her last name, Annie Rauwerda. She runs the fantastic accounts called Depths of Wikipedia, which you may have seen on Twitter, Bluesky, Instagram. And it was this really funny live show, all about the wonders of Wikipedia, with fantastic guests. They still have a show going on, I think in Los Angeles next month. So if you're around there and interested in any sort of nerddom or Wikipedia, I'd really recommend it.
Speaker 1:
[142:32] So it's a comedy show?
Speaker 2:
[142:34] Kind of. Yeah, it's a bit of a comedy show. She goes through a presentation and then interviews people about just incredible fun facts and deep dives into Wikipedia. And it's about also the culture of Wikipedia editing, some of the interesting minutiae in terms of, let's say, Wikipedia editor on Wikipedia editor violence or the various drama going on in the sub-community.
Speaker 1:
[143:07] This is very enterprising of her to do this.
Speaker 2:
[143:10] That's really cool. She's doing a world tour. I mean, it was a packed house. It was a packed house at the Gramercy Theatre. They were so sold out, they had to have two different shows.
Speaker 1:
[143:20] She organized a perpetual stew in a Brooklyn park.
Speaker 2:
[143:25] Famously organized a perpetual stew. There were a couple of signs up in a Bushwick park saying, perpetual stew. She didn't expect many people to show up, but suddenly, I believe, there were hundreds, if not more, and there was a New York Times article written about it. She's very funny. She had a couple of different guests up for the show I was at, including the woman who is the voice of the New York City subway.
Speaker 1:
[143:52] Oh, that's cool.
Speaker 2:
[143:54] Next stop, Laudablock Avenue.
Speaker 1:
[143:56] Did you recognize, did you go, that's her?
Speaker 2:
[143:59] Yes. She sits down and everyone's like, who's this? Then she goes into the voice and everybody screamed.
Speaker 1:
[144:05] The next train is a C train. Yeah. Wow.
Speaker 2:
[144:08] This Brooklyn bound four train, next stop. Yeah.
Speaker 1:
[144:14] Wow.
Speaker 2:
[144:14] It's fantastic. And I'm excited to see it again. And there's great merch as well. I got a good shirt.
Speaker 1:
[144:20] There's only one more show. It's in Los Angeles, May 9th at Hollywood Forever. She's done Seattle, San Francisco, a bunch of shows in New York, Chicago, Philly, DC.
Speaker 2:
[144:31] If you have no idea what I'm talking about, I'd highly recommend that you follow her on Instagram or Twitter or wherever you are, because her accounts are so funny. It's called Depths of Wikipedia, and it is just, let's see if it'll come up here.
Speaker 1:
[144:47] I love it that ciabatta was invented in 1982 as a rival to the baguette.
Speaker 2:
[144:55] She will post just some of the most interesting tidbits of lore that you could ever imagine about things you never even thought of.
Speaker 1:
[145:06] It's clear that she has a massive fan base. I mean, does she have a podcast, or just an Instagram?
Speaker 2:
[145:12] No, it's just this. I mean, her Instagram has 1.6 million followers already, which is incredible.
Speaker 4:
[145:18] Brilliant. It's brilliant. This is what we need it to be.
Speaker 2:
[145:21] And she's a fantastic Wikipedia editor as well.
Speaker 1:
[145:24] Yeah, one of the most famous Wikipedia editors.
Speaker 4:
[145:27] Is there an AI angle we can get her on?
Speaker 1:
[145:30] I think we should just get her on just because it's hysterical. Brilliant.
Speaker 2:
[145:33] I mean, we should. It was fantastic.
Speaker 1:
[145:34] Yeah.
Speaker 4:
[145:35] How did you learn about it, Paris? You already followed her?
Speaker 2:
[145:38] I've followed her on multiple platforms for potentially years, and I don't really know when that started. But I just love her posts. They're all so funny.
Speaker 1:
[145:50] She's only 26. I mean, she's a young person who's kind of just hit on something. Crazy.
Speaker 2:
[145:58] Yeah. And she runs a great live show.
Speaker 1:
[146:02] Brilliant. You know what? This is why the modern world is so interesting and the internet is so interesting.
Speaker 4:
[146:08] This would not have made it through the gauntlet of Baspedia.
Speaker 1:
[146:12] No.
Speaker 2:
[146:13] I mean, one of the anecdotes she told during it is, if you look up the Wikipedia article for human, there was a lot of back and forth over what photo should be chosen for the Wikipedia page for human. There was also just a lot of internal conflict within the Wikipedia editor community, because normally you're not allowed to edit any article that you have any potential relation to, and by definition, all Wikipedia editors can't edit human, because they are human. So once they got around that, they were like, well, what photo should we use? They ended up using a photo of an Akha couple in northern Thailand. And she, after I think the Wikipedia editor conference in Singapore a couple of years ago, ended up flying to Thailand and tracking them down, or ended up tracking their children down, because they've since passed. And she showed them. She said, basically, Google the word human, click on Wikipedia. And they were like, oh my God, that's my parents.
Speaker 1:
[147:22] That's very funny. They are the quintessential humans. Her Instagram is DepthsofWikipedia, and that's also where tickets are available.
Speaker 4:
[147:31] Also on Blue Sky.
Speaker 1:
[147:34] And I see that Micah already follows her, of course. Yep. You all know each other. There's Paris also following. That's great. And Megan Maroney follows her. Very nice. Oh, and my favorite, Worcester Terrariums.
Speaker 4:
[147:50] She has lots of good followers, including AOC.
Speaker 2:
[147:54] Yeah, it's a big, of course.
Speaker 4:
[147:55] Jamelle Bouie. Amazing. Brandy Zadrozny. Amazing.
Speaker 2:
[148:05] What are you laughing at?
Speaker 1:
[148:06] I'm looking at the Wikipedia entry for ketchup effect. The ketchup effect is when nothing comes, and then way too much, very fast. That's it. That's the whole article.
Speaker 4:
[148:18] Ketchup Effect on Swedish, Swedish Wikipedia. Yeah.
Speaker 1:
[148:21] Ketchup Effect.
Speaker 2:
[148:25] I mean, that's the thing is the whole account is just little things like this that are just delightful.
Speaker 1:
[148:30] That's fantastic. Oh my goodness. Let's see. My pick of the week. I had a couple. I'd forgotten completely what they were. One of them is because we were talking about how Anthropic now has Claude Design. And then Google has Stitch, which does kind of the same thing, lets you design stuff with AI. Somebody at Google said, you know how we could trump Anthropic's Claude Design? Let's just give them all our prompts for Google Stitch, and they can run it in Claude Code and they don't even need Claude Design. So this is on GitHub, Google Labs Code Design.md. It's the prompts you need to basically turn Claude Code into Google Stitch, which is probably something very close to what Anthropic did to create Claude Design. I think that was kind of a nice little competitive jab from Google. And then the other one is maybe a little bit more important. It's something we were talking about: is your website agent ready? At, let me see, what's the URL, is it agentready.com? So you put in your website here. Why don't we put in your website, Paris, paris.nyc, and just see how ready it is for agentic AI: for MCP, for markdown negotiation, for agent skills. It'll do a scan.
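The scanner's actual criteria aren't spelled out in the conversation, so as a purely hypothetical sketch, a checklist-style score over the signals the hosts mention (MCP, markdown negotiation, agent skills, bot access control) might look like this. The checks, weights, and names are all invented for illustration:

```python
# Hypothetical sketch of an "agent readiness" checklist like the one
# the scanner seems to apply. The checks and point weights are invented
# for illustration; a real scanner would fetch paths and headers over HTTP.

CHECKS = {
    "llms.txt present": 10,        # markdown summary file for agents
    "markdown negotiation": 10,    # serves text/markdown on an Accept header
    "mcp endpoint": 10,            # exposes an MCP server
    "agent skills manifest": 10,   # publishes agent skills
    "bot access control": 8,       # explicit crawler rules in robots.txt
}

def agent_ready_score(site_features: set[str]) -> int:
    """Sum the weights of the checks a site passes."""
    return sum(weight for check, weight in CHECKS.items() if check in site_features)

# A site with only a couple of signals set, like the 17-33 point
# scores the hosts got:
print(agent_ready_score({"bot access control", "llms.txt present"}))  # → 18
```

A real scanner would do the probing itself (fetch robots.txt, try content negotiation, look for well-known endpoints); this sketch only scores a precomputed set of findings.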
Speaker 2:
[150:07] Will it?
Speaker 1:
[150:08] Oh, yours is better than mine.
Speaker 2:
[150:10] I was about to say, it scans to Paris Martineau at parismartineau.com, right?
Speaker 1:
[150:17] Oh, it did. It automatically redirected.
Speaker 2:
[150:19] Mine's better than yours?
Speaker 1:
[150:20] Yeah, mine is only 17, yours is 25. What's the difference? I make no attempts to make it compatible.
Speaker 2:
[150:30] I mean, I've made no attempts to make it compatible.
Speaker 1:
[150:32] Oh, you have bot access control set on yours. That's, I think, the big difference. I don't know. Maybe you're a host today.
Speaker 2:
[150:38] What is this website?
Speaker 1:
[150:41] Is this kind of a terrible URL?
Speaker 4:
[150:43] Is it agentready.com?
Speaker 1:
[150:44] Is it agentready.com?
Speaker 4:
[150:46] What was Paris' score?
Speaker 1:
[150:48] 25.
Speaker 4:
[150:49] I got 33 at jeffjarvis.com.
Speaker 1:
[150:51] Nice. TWiT gets a lowly 17 as well, as does Leo.fm. So Paris, you beat TWiT and me. Jeff, you beat Paris and TWiT and me. But more seriously, this goes to the whole thing we were talking about, which is I think everything has to be, what did you call it? AIO?
Speaker 4:
[151:15] AIO is, well, there's one, it was called GEO, Generative Engine Optimization. But, you know, in some schools, that's over Leo, so.
Speaker 1:
[151:23] So AI optimization.
Speaker 4:
[151:25] AI optimization, yeah.
Speaker 1:
[151:26] And Jeff, you used up a bunch of picks. Are there any left?
Speaker 4:
[151:29] No, I have a happy ending to my long saga. Careful, okay. On getting Ask Gemini into my browser, on my Google Chrome, on my Google Workspace. So there was another story about more features being added, and once a month, I go to my admin settings and I say, surely there's going to be something left out, something new. I'm going to go through all of them. I go through all of them. I find nothing. I have said yes to everything. I am my administrator. I am the entire site. I am it. But now there was a new feature: you can use Gemini on the admin site to get help with admin. So I thought, okay, I put my complaint in. I say, hello, this is this and this, and I don't have it in my browser. And it comes back and it says, we need to set this, this, and this. And I went back and I said, I have. And I said, it's still not working. And it came back and it said, no, no, no, if you do this, this, this, it'll work. And I said, I have, you're wrong. No, I still don't have it. The third time, it said, oh, try this setting that you would never find. And I'd gone through everything. You'd think it would be in the Gemini part? No. You'd think it would be in the apps part? No. Hidden three layers into the user settings, there was a special thing where you had to set defaults to okay. Now, mind you, I should have gotten Ask Gemini on everything, on Gmail and everything, by then. I had it on everything except Chrome on my Chromebook.
Speaker 1:
[153:02] The one thing you wanted.
Speaker 4:
[153:03] I checked the three boxes and now I have it.
Speaker 1:
[153:06] Wow!
Speaker 4:
[153:08] Now, what I do with that, have I used it? No, I actually.
Speaker 1:
[153:10] Yeah, you don't really want it.
Speaker 4:
[153:12] Well, the only thing I've done with it so far is when I tried to read the Yann LeCun world model paper.
Speaker 1:
[153:17] Oh, it's good for that, yeah.
Speaker 4:
[153:18] I said, explain this to me, and it did a good job. It did a very good job. I now have it. So I'm not going to say all is forgiven because I begged here a hundred times, and no one told me.
Speaker 2:
[153:28] It's true.
Speaker 4:
[153:30] I bored the world.
Speaker 1:
[153:32] Three checkboxes and you're in, Jeff.
Speaker 4:
[153:34] I'm in. Meanwhile, one more, line 152. Benito, are you going to show your loyalty and get one of these with Leo on it?
Speaker 1:
[153:43] Uh-oh, now I'm worried.
Speaker 4:
[153:45] Me too.
Speaker 1:
[153:47] The Must-Have Item in Silicon Valley, the $178 sweater with the CEO's face on it.
Speaker 2:
[153:55] Leo, when are you going to get one of this for all of us?
Speaker 4:
[153:58] Yeah.
Speaker 1:
[153:58] It's a great mid-years gift. No, I have not sufficient ego. I know you don't believe me when I say this, but I do not have sufficient ego to send you all sweaters with my picture on it. That would be appalling.
Speaker 4:
[154:12] Alex Karp is on a T-shirt.
Speaker 1:
[154:15] What a surprise.
Speaker 2:
[154:16] I think that's even, I don't know. What do you think is more appalling?
Speaker 4:
[154:19] It says Dominate.
Speaker 2:
[154:20] Sweater? Okay.
Speaker 1:
[154:21] Yeah.
Speaker 2:
[154:21] The fact that the T-shirt says Dominate.
Speaker 4:
[154:23] Yeah. That's a...
Speaker 1:
[154:26] Wow. Now, I don't mind the Hawaiian shirts that Anduril sells based on Palmer Luckey's Hawaiian shirts.
Speaker 3:
[154:36] But you should get your T-shirt designer, the guy who makes your T-shirts, to make a TWiT one or a Leo face one.
Speaker 4:
[154:42] Yeah. Somehow they incorporate the TWiT logo into it.
Speaker 2:
[154:46] Wait. Yeah. Can you get someone to make a version of your colorful Hawaiian shirts, but with your own face on it?
Speaker 1:
[154:58] Paris, don't mock me. I know you would not wear anything that had my face on it.
Speaker 2:
[155:02] I would wear it for this show.
Speaker 1:
[155:04] Just for this show?
Speaker 2:
[155:04] For an episode?
Speaker 1:
[155:05] Well, twi.tv/store, nothing has my face on it, but-
Speaker 2:
[155:11] There's one. Yes, it does.
Speaker 4:
[155:12] We need a Hawaiian shirt.
Speaker 1:
[155:14] Oh, wait a minute. It does have something with my face on it.
Speaker 2:
[155:16] Just scroll up. You just scroll by your own face. Did I pass it? Yes, go up.
Speaker 1:
[155:20] Oh my God. Right there. I think we made that-
Speaker 4:
[155:27] Oh, I would never do that. I don't have that much of an ego.
Speaker 1:
[155:31] I think we made that.
Speaker 4:
[155:31] Not only is his face, but chief, chief, chief, chief.
Speaker 3:
[155:35] This is the Elon Twitter Saga.
Speaker 1:
[155:37] We did it when Elon took the name Chief TWiT. And I had nothing to do with it. I just want to tell you, I didn't say, Smithers make a T-shirt with my face on it. Call it Chief TWiT. Okay. I take it back. I guess, I guess you can buy that. twit.tv/store.
Speaker 2:
[155:56] And yeah, you should-
Speaker 1:
[155:57] Now Paris, you have to retract that you want it because otherwise you're going to get it for Christmas.
Speaker 2:
[156:03] Yeah. I mean, that's not what I was thinking about.
Speaker 1:
[156:06] No. Yeah, I know. I know it's not what you were thinking.
Speaker 2:
[156:08] You know, I was hoping there would be some more artistry.
Speaker 1:
[156:10] A Hawaiian shirt would be better. Yeah. The guy who makes your shirts. Well, we are going to Hawaii. We used to have a lot of the TWiT hoodies that we used to sell were really great. These are not quite as nice as the ones we used to sell. You know, the reason is you just make no money on these. And we have to charge so much for these to make even a dollar. That it's, you know.
Speaker 3:
[156:32] You need to do collabs and limited time drops.
Speaker 2:
[156:35] You could still get a This Week in Google sticker.
Speaker 1:
[156:39] Yeah. Isn't that funny? But there's nothing for Intelligent Machines, is there?
Speaker 2:
[156:43] There is a lot of stuff for Intelligent Machines.
Speaker 1:
[156:44] Oh, is there? Oh, good. There's homeware.
Speaker 2:
[156:47] There's desk mats.
Speaker 1:
[156:49] There's holiday, I don't know what that is, ceramic ornament for your tree. Somebody's having, oh, there, look, there is an Intelligent Machines die cut sticker. Look at that. That's nice. Pets? What do we have for pets? An Ask The Tech Guy's pet t-shirt. And you can get a mug, Intelligent Machines mug. That's fun. I think Anthony does this when he's feeling bored. He puts these together.
Speaker 4:
[157:23] Get your TWiT merch before it goes. It's a collector's item.
Speaker 1:
[157:29] By the way, I should mention, Jeff, I forgot to mention this. Google is apparently working on a Pixel laptop. Pixel Glow lights. Because you can see apparently something in the latest beta releases of Android, Android 17 beta. They make reference to a feature called Orbit.
Speaker 4:
[157:52] What are the Illuminum? Might be the Illuminum laptop.
Speaker 1:
[157:54] Pixel Glow.
Speaker 4:
[157:56] Guess what I bought this week?
Speaker 1:
[157:59] A new Mac? Did you buy a Neo?
Speaker 4:
[158:02] Bought a Neo.
Speaker 1:
[158:03] Oh, send us the bill.
Speaker 4:
[158:05] No, no, no, no, no, no. Let me complain though.
Speaker 2:
[158:07] I was going to say, are you going to be using the Neo for this? Can the Neo even support streaming?
Speaker 1:
[158:12] Oh, totally. Yeah, yeah, yeah.
Speaker 4:
[158:13] Well, this is not a stand-in. I'm on a 12-year-old Mac Mini.
Speaker 1:
[158:18] Intel Mac Mini.
Speaker 4:
[158:19] 12 years old. Yeah, I guess that is right. And it works fine.
Speaker 1:
[158:23] I'm using an M1 Mac Mini that probably has 8 gigs of RAM to do the show right now. Benito is using something a little heavier duty.
Speaker 4:
[158:30] So, I go in the store and I say I want to buy a Neo. I want the blue one. I want this education.
Speaker 1:
[158:36] Good choice.
Speaker 4:
[158:37] Right? Oh, no. "Well, I'll sign you up for a specialist. Go sit over there." Me with my cane, in an incredibly uncomfortable store. It drives me nuts. It used to be, back in the day, if they were busy, anybody would... "I can ring that up for you now." No, no, no, no, no. I sat there for more than a half an hour. Oh.
Speaker 1:
[158:57] For the privilege of giving them your money.
Speaker 4:
[158:59] I left, I left. I said, and I'm walking out, the scheduler, I went to the scheduler at some point. I said, come on, man, I know what I want to buy. Just sell it to me. There's no way to run a store.
Speaker 2:
[159:07] No way to run a store.
Speaker 1:
[159:09] No way to run a store.
Speaker 4:
[159:10] Yeah, I didn't like that. So I walk out and I say goodbye. And he didn't catch the nuance of my voice. He said, goodbye.
Speaker 2:
[159:17] Or maybe he did, but is used to...
Speaker 4:
[159:19] Maybe he did, but I don't think Apple people have that much irony. So I go home and I order it.
Speaker 1:
[159:24] Did you shake your cane at him when you said, that's no way to run a store?
Speaker 4:
[159:28] I should have. Meanwhile, meanwhile, my Chromebook died.
Speaker 1:
[159:31] Oh, that's not good. How did... It's brand new.
Speaker 2:
[159:35] Was it before or after you got Gemini on it?
Speaker 4:
[159:40] Good question, before.
Speaker 1:
[159:44] It died before you got Gemini on it? You know, I questioned that.
Speaker 4:
[159:47] It went black and it couldn't boot. So there's some hardware problems. So it's now slowly, FedEx is very slow. I got it to FedEx on Sunday. It's supposed to be delivered to the place on Thursday. Then it's seven to 10 days to do it and then time to get it back. So I decided, okay, I'll split it up.
Speaker 1:
[160:03] That's no way to run a store. So did you just mail order a Neo?
Speaker 4:
[160:07] No, because actually, no, you couldn't get the Neo's until mid-May. They happened to have it at my local store.
Speaker 1:
[160:13] Well, what you do is you go online, you buy it online and you say, I want to pick it up at the store.
Speaker 4:
[160:17] Exactly. That was nice and efficient. That was easy.
Speaker 1:
[160:21] Yeah, that's very efficient. They just give it to you.
Speaker 2:
[160:24] Did you get one in a fun color?
Speaker 4:
[160:26] Blue. I like the blue. No, I didn't get the greeny one. No, the blue is nice.
Speaker 1:
[160:29] The blue is nice. We all agreed on MacBreak Weekly.
Speaker 4:
[160:33] There's nothing like, I should not move anything over from the 12-year-old Mac Mini.
Speaker 1:
[160:38] Probably not.
Speaker 4:
[160:39] No.
Speaker 1:
[160:41] Yeah, because those will all be Intel programs. Just download a new copy of Zoom and you'll be fine. Yeah. Yeah. Well, thank you, Jeff, for your commitment to the show. I appreciate that. That's very nice of you. And we would buy that for you if you want.
Speaker 4:
[160:55] No, no, no, no, no, no.
Speaker 1:
[160:56] Okay.
Speaker 4:
[160:57] Well, I've used this one for 12 years.
Speaker 1:
[160:58] If we buy it for you though, you understand we will use all clicks, all documents, all mouse movements, and send them to AI. Yes, we'll send them to Grok.
Speaker 2:
[161:10] Yeah, actually, if they buy it for you, they're going to put Bad Rudy on your computer.
Speaker 1:
[161:15] Bad Rudy's good. Thank you, everybody, for joining us. Thank you, Paris Martineau. You'll find her at Consumer Reports. You're still on deadline?
Speaker 2:
[161:24] Are you working on something?
Speaker 1:
[161:26] No.
Speaker 2:
[161:27] We got a lot of things going on, and I am the person who has to solve all of those problems. But a story will come out of it. I do love it.
Speaker 1:
[161:34] You love it.
Speaker 2:
[161:35] And tomorrow, if you happen to be a retiree in Westport, Connecticut, I'm going to be speaking to you.
Speaker 1:
[161:42] Really?
Speaker 2:
[161:44] Me and some of my colleagues are doing a food safety discussion for the Wise Men of Westport tomorrow.
Speaker 1:
[161:51] You're talking to the Wise Men of Westport?
Speaker 2:
[161:54] Not W-I-S-E. It's the letter Y.
Speaker 1:
[161:57] At the Y.
Speaker 2:
[161:57] We could be discussing.
Speaker 1:
[161:59] It should be at the Y.
Speaker 2:
[162:00] Those men may or may not be wise. It's not at the Y either.
Speaker 1:
[162:04] The Wise Men of Bridgeport.
Speaker 4:
[162:07] So, don't leave fish in the refrigerator for two weeks. That's what you're doing now.
Speaker 2:
[162:12] No, actually, we're talking about kind of an inside look at how Consumer Reports' three pillars, the testing.
Speaker 4:
[162:21] That's cool.
Speaker 2:
[162:22] Testing team, reporting and our advocacy teams kind of work together. And they were particularly interested in our food safety coverage, specifically the protein powders investigation and in my reporting on radioactive shrimp.
Speaker 4:
[162:36] Of course.
Speaker 2:
[162:36] So, taking them through it by like kind of a deep dive into protein powders.
Speaker 4:
[162:40] Well, if it weren't for my wife getting an award tomorrow, I would come up to Westport as a retired man.
Speaker 2:
[162:45] Congrats.
Speaker 1:
[162:46] To bring your cane and your food safety questions.
Speaker 4:
[162:51] Yes.
Speaker 2:
[162:51] True.
Speaker 1:
[162:52] To the wise men of Bridgeport. Well, they're very fortunate to have that. It's lovely.
Speaker 4:
[162:57] How are you going to get to Bridgeport?
Speaker 2:
[162:59] Well, I'm getting to West, it's Westport. I'm getting there by taking the train very early tomorrow. And then going to be picked up at the Westport train station.
Speaker 4:
[163:10] As you deserve.
Speaker 2:
[163:11] By our weekly content officer.
Speaker 1:
[163:13] Nice. Well, have a wonderful talk.
Speaker 2:
[163:16] I shall.
Speaker 1:
[163:16] Yay.
Speaker 2:
[163:18] I was going to say, everybody was, we were all discussing this week, like, oh, do you guys need to prep and stuff? I'm like, I can talk.
Speaker 4:
[163:24] No, I can.
Speaker 1:
[163:26] I have some practice. Just let me go. Nice.
Speaker 4:
[163:31] Put a mic in front of me.
Speaker 1:
[163:33] Nice.
Speaker 4:
[163:34] And by the way, should we note, the dulcet tones remained the entire show, did they not?
Speaker 1:
[163:40] They did.
Speaker 2:
[163:41] There was a brief moment where we thought we were flickering, but they have remained. And Gizmo is currently laying on top of the Scarlett.
Speaker 1:
[163:49] Oh.
Speaker 2:
[163:49] So I would expect that would be impacting it, but it isn't.
Speaker 1:
[163:54] Does she always do that?
Speaker 2:
[163:56] No, because I usually pick her up, but I didn't... Because overheating could explain it. It's not, she's not on top of it. It's just she like lays in front of it and definitely touches it.
Speaker 1:
[164:09] I think it's doing it now, by the way.
Speaker 2:
[164:11] Well, it could have been because I was wrestling a cat. As I said that.
Speaker 1:
[164:20] Okay, my friends, thank you, Jeff Jarvis. Don't forget his new book, Hot Type, available now. jeffjarvis.com for pre-order. It comes out the same time as Ian's book.
Speaker 4:
[164:30] No, it actually comes out in August, because they were going to come out in June. Then they moved it for production reasons to July. And I said, I hate to say this to Ian, but July is dead time. So they moved it; it's now an early fall book.
Speaker 1:
[164:45] Okay. You'll still get it if you order it now.
Speaker 4:
[164:48] Yes, you order it now. It'll make me happy.
Speaker 1:
[164:51] It'll make Jeff happy. And that's our goal in life. Thank you, everybody. You've made us happy by joining us. We'll see you next week. On Intelligent Machines. Bye bye. Hello, everybody. Leo Laporte here. You know what a great gift would be, whether for the holidays or just any time, a birthday: a membership in Club TWiT. If you have a TWiT listener in your family, somebody who enjoys our programming, and you want to give them a nice gift and support what we do, visit twit.tv/clubtwit. They'll really appreciate it. So will we. Thank you. twit.tv/clubtwit.