transcript
Speaker 1:
[00:00] It's time for TWiT This Week in Tech. Glenn Fleishman is back. Wesley Faulkner is here, Lou Maresca, and we have a lot to talk about. The new Claude Opus 4.7 model is out. We'll talk about security flaws that AI can and cannot find, why it's time to ban the sale of precise geolocation, and we'll watch some robots fall down. It's coming up next on TWiT.
Speaker 2:
[00:27] Podcasts you love, from people you trust, this is TWiT.
Speaker 1:
[00:38] This is TWiT, This Week in Tech, episode 1080, recorded Sunday, April 19th, 2026. Destroy all phono records. It's time for TWiT, This Week in Tech, the show where we cover the week's tech news. I love this panel, but I especially love seeing Glenn Fleishman back. So nice to see Glenn. You're feeling fit and fine and all that.
Speaker 3:
[01:05] I feel fabulous. I recommend open-heart surgery for everybody if you need it. If you need it, let me point that out. That's the important thing.
Speaker 1:
[01:13] Do not show us your scar though. Do you have a kind of-
Speaker 3:
[01:14] I promise. No, it actually looks awesome. I've had high compliments from medical professionals about my scar.
Speaker 1:
[01:19] Oh, that's good.
Speaker 3:
[01:20] No, I feel great. I would never have known that I had this done five months ago. It's bizarre. I feel fantastic.
Speaker 1:
[01:26] And you feel better, don't you?
Speaker 3:
[01:27] Oh, yeah, incredibly. It's like all the things that were bothering me for five or six years are gone. Here's my little joke, which is everything went according to the textbook. And if they would just take the textbook out of me now, I would feel better. That's my little joke.
Speaker 1:
[01:41] I hope they didn't leave anything else behind. Just a healthy heart.
Speaker 3:
[01:45] Not so far. They keep checking. Nothing's in there.
Speaker 1:
[01:47] So great to see you. Of course, Glenn writes now on a regular basis for sixcolors.com. We're really pleased to see that. And his new book is now on Kickstarter. Yes, you knew he had, you know, after all the conversations we've had about flongs, you knew he had to write a flong book.
Speaker 3:
[02:03] Yeah.
Speaker 1:
[02:03] It's called, in a punning fashion, Flong Time No See, Forgotten Stories of Printing and Labor. Yeah. Is this all about the people who did the work?
Speaker 3:
[02:16] Yeah. This is kind of a compilation of a bunch of things I've researched in the last several years about printing history. And I keep coming back, weirdly, to the people involved, most of them kind of forgotten to history, or their roles were downplayed. I just did a deep dive and discovered that a huge percentage of women were involved in printing and then kind of erased from its history. It changed again in the last 60 years, but in the late 1800s, 50 percent of people working on newspapers and things in small towns were women. So that will be in this book, along with a lot of stuff about all the hard labor everybody did that we never saw, the labor that got printing where it is today.
Speaker 1:
[02:59] It is on Kickstarter. And you've already reached your goal.
Speaker 3:
[03:03] I did. It's funded. But you can go there and you can pledge to get a book now as I continue towards completion, towards finishing the book.
Speaker 1:
[03:11] Thank you. Also here, good friend, longtime friend, Wesley Faulkner. Wesley of works-not-working. Hi, Wesley.
Speaker 2:
[03:20] Hey, it's good to be back.
Speaker 1:
[03:22] Always nice to see you.
Speaker 2:
[03:23] Feels like the rotation is getting quicker.
Speaker 1:
[03:26] Is it? Well, maybe that's because we like you so much. The site is up. That's the other thing. Last time you were here was just about to go live. It is live now. People who are working but not happy about it, I guess.
Speaker 2:
[03:40] The wait list is open too, so you can join the wait list. I'm going to open up for new members at the beginning of next month.
Speaker 1:
[03:47] Nice. Great to have you. Another old friend, good friend too, former host of This Week in Enterprise Tech, Louis Maresca.
Speaker 2:
[03:55] Hey, Leo.
Speaker 1:
[03:56] Hey, Lou. Good to see you. He's engineering leader. I'm sorry, AI. AI engineering leader for Copilot at Microsoft.
Speaker 4:
[04:04] That's right. Focused on the Excel agent and bringing data engineering to you.
Speaker 1:
[04:08] He's the guy who put Python in Excel, for which we are eternally grateful. Wow. I know. I know. When I say that, Glenn, it's like, oh, yeah, okay. Wow.
Speaker 4:
[04:20] Now you can use AI against it too.
Speaker 1:
[04:22] And now you can use AI.
Speaker 4:
[04:24] Now you generate Python AI right in Excel.
Speaker 1:
[04:28] We're starting about half an hour late because we all got started talking about our AI projects. It's really interesting to see how people, geeks, I guess, even not necessarily coders have gravitated to this and have started to use it in all sorts of interesting ways. I really feel like as somebody who's covered tech for 40 years, it is the most exciting thing to happen in tech since I've been doing it.
Speaker 4:
[04:54] Yeah.
Speaker 3:
[04:55] I don't know any coder who's not excited about it. The ones who aren't excited about it haven't tried it yet and the ones that are may have ambivalent feelings but they're still excited about it because, you know what, we were talking about this before the show but I think part of the ambivalence from some people is like I put in so much, this is like the what if they pay off everybody's student loans? Well, I paid mine off. It's like I put in so much work to get here and now this thing can do a thing that used to take me 100 hours in like an hour or two. It's like, well, I think of that as more like it's an accelerant, it's an exoskeleton, it's something you can use to make your life better. And having written about the printing industry, when the linotype was invented in the 1880s, all these typesetters were initially put out of work. It was very sad. And then, of course, what happened, newspapers started printing newspapers that were 10, 20, 30 times bigger and they hired back. So within a few years, the overall employment was tremendously higher because of the efficiency. So there's my...
Speaker 1:
[05:52] Isn't that the Jevons paradox? Yeah, in economics, Jevons, J-E-V-O-N-S, or the Jevons effect, is said to occur when technological improvements increase the efficiency of a resource's use. Intuitively you'd think, oh, well, if it's more efficient, they're going to use less of it. No, it leads to a rise rather than a fall in total consumption of that resource. It is kind of counterintuitive. But then if you think about it, it opens new avenues, right? And I'm hoping, I mean, I feel bad for anybody. We've seen the computer science majors in college dwindle, but people are still majoring in engineering and data science. It's just they're not learning how to code anymore. I think that makes sense. There's still opportunities.
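A quick aside for anyone who wants to see the Jevons effect in numbers: if demand for a resource's output is elastic enough, halving the effective price more than doubles the quantity demanded, so total resource consumption rises even as efficiency improves. A toy sketch with made-up numbers; the demand curve, elasticity, and constants here are purely illustrative, not from the show:

```python
# Toy Jevons-paradox illustration (hypothetical numbers): efficiency
# doubles, so the effective price per unit of "work" halves; if demand
# is elastic enough, total resource consumption goes up, not down.
def units_demanded(price, elasticity=1.5, k=100):
    # Constant-elasticity demand curve: q = k * price^(-elasticity).
    return k * price ** -elasticity

before = units_demanded(price=2.0)  # demand before the efficiency gain
after = units_demanded(price=1.0)   # efficiency doubled -> price halved

# Each unit of work now consumes half the resource, but demand nearly
# tripled, so total consumption still rises (~35.4 -> 50.0 units).
resource_before = before * 1.0
resource_after = after * 0.5
print(resource_before, resource_after)
```

With an elasticity above 1, as here, the consumption rebound outweighs the efficiency saving; below 1, it wouldn't, which is why the paradox only sometimes applies.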
Speaker 2:
[06:44] We're in an artificial paradox, though. For instance, people took more Ubers when Ubers were cheaper. That's true. So I don't know how sustainable it is, in terms of a direct line or hockey-stick trajectory, because when these companies start charging what it's actually worth, we'll see how many people still use it, at least in the way that they're using it now. So it might change.
Speaker 3:
[07:06] We're talking about the constraints on data center capacity. And without getting into it, Microsoft has vastly more resources, vastly more cash in the bank, than most of the AI startups, right? But, you know, Claude has tweaked its pricing models substantially, and the tokens available and all that, just in the last few weeks, and made a lot of people unhappy, because what has been reported is that it's about constraint and the necessity of computational power for the new improved models. And if they're not efficient enough relative to the output, which is, I think, where we're at right now, then they have to charge more, which is good from a sustainability standpoint. If you want Claude, OpenAI, Copilot, et cetera, to be available in five years, then they have to do what Wesley says. Otherwise, we have nothing. And then what do we gain?
Speaker 1:
[07:52] Is it a case of them running? So the same thing happened with the internet, right? Everything was free on the internet, except we knew, I mean, it all cost money. It wasn't free free. There was some, somebody was paying for it. And eventually we found out, oh yeah, it was, you know, we're paying for it through attention, through advertising. Is it, we just run that cycle a lot faster with AI, you know?
Speaker 2:
[08:16] Well, bits were free. The servers did cost money.
Speaker 1:
[08:20] Yeah.
Speaker 2:
[08:20] But those were, the infrastructure and all that stuff was different than the infrastructure now.
Speaker 1:
[08:26] It worked out.
Speaker 2:
[08:27] Yeah. But we're seeing that the demand curve, whether we call it cost or demand, is also being thrown out of whack. We have hard drives sold out for the whole entire year in January. We have memory prices spiking. We're probably going to have shortages in copper and all this stuff, where the constraints aren't equal. So the ouroboros of it is that this build-out is not sustainable.
Speaker 1:
[09:03] Today, Mark Gurman's newsletter in Bloomberg said that Apple was going to have to delay the release of its new Mac Studios and perhaps even its new M6 MacBooks because of supply chain constraints.
Speaker 3:
[09:16] Particularly RAM. Everybody loves the Mac Mini. I've been reading article after article about how, I mean, that's for OpenClaw, right?
Speaker 1:
[09:22] Yeah, but good luck getting one, right? They're sold out too.
Speaker 3:
[09:25] I know. I'm sort of fascinated by it. I was writing something recently for Six Colors. I was going to tell people, well, here's a great Time Machine thing: use networked Time Machine on your Mac and just get a couple of two-terabyte SSDs. And before I published it, I'm looking them up, like, oh, those are like $100. Oh my God, they're like $250 to $300. They had gone down so far in price during the pandemic because of increased production. And, you know, the curve is, whoop, so all right, can't tell people that. Yeah, don't go and spend $300 on a two-terabyte SSD right now.
Speaker 1:
[09:58] To your point, Anthropic announced this week that enterprises using their Claude models would have to pay for tokens and wouldn't be able to do the all you can eat Claude Max subscriptions. We can, as end users, still do that. We have to pay attention. You're in that situation, aren't you, Lou? You have to, now you said you have to watch your tokens.
Speaker 4:
[10:18] Well, I mean, personally, I have to watch my tokens, but I would say, you know, you use Copilot through, you know, through Excel and choose Claude Opus 4.7. We don't, you don't have to worry too much about that.
Speaker 1:
[10:28] Oh, that's good to know. A little plug there for the way to use it. That's good. 4.7 did just come out this week. Any thoughts? You know, it's so funny. You go on Reddit and people are griping and moaning; they hated 4.6. They said it was nerfed. And it may be the compute constraint that Anthropic is running up against, especially because it's got this supermodel under wraps, and it eats a lot of bandwidth and a lot of CPU and GPU. Maybe they haven't been able to serve all the people who are using Claude. Plus, let's face it, Claude has gone through explosive growth.
Speaker 2:
[11:10] Well, in 4.6, the defaults changed. They changed it from high effort to medium effort as the default. So people were seeing worse results because the default had changed.
Speaker 1:
[11:23] You could still turn it up to high effort.
Speaker 2:
[11:24] They didn't turn it back up. Yes. But also, if you look up their uptime, that has been taking significant hits. So it has not been the most reliable service.
Speaker 3:
[11:37] Did you all notice when they switched to in 4.6, they added that million-token session? I can remember when it switched on. It was extra, and then suddenly it was default. It was profoundly different, I felt like. It just felt like I could work on these long sessions and have so much more sophistication during them than I had before. So even before the 4.7 upgrade, I felt like I was getting a lot more out of it without having to constantly be like, no, okay, resummarize, let's go back. No, we already discussed that. We already discussed that. Let's go back and-
Speaker 1:
[12:09] Is it fair to say that Anthropic is being bit by its own success, that that's what really this is?
Speaker 3:
[12:14] Sure.
Speaker 2:
[12:16] Part of it is the explosiveness, like when OpenAI had the government contract thing and they got so many free users.
Speaker 1:
[12:22] Right.
Speaker 2:
[12:23] And so they had an uptick of people who weren't on the pay plan; they were just trying to make it up in volume in terms of how much money they're making. So yeah, I think that is part of it. And also, when you roll out, like we're probably going to get into, their new model, now you're competing with the old and the new. And unless you can get everyone to transition over to one model, which would help with managing some of that usage, you have to kind of partition it.
Speaker 1:
[12:55] Lou, you may not be able to say anything about this, but Anthropic did grant Microsoft access to Mythos, its super-duper model. Do you, did you, can you?
Speaker 4:
[13:07] I don't have any knowledge of that information.
Speaker 1:
[13:09] Okay. Tug your ear if you've used it. I know you could.
Speaker 4:
[13:15] I have not.
Speaker 1:
[13:16] If you could, you couldn't talk about it anyway.
Speaker 3:
[13:18] I just saw a report today, someone was complaining that they looked through all the, oh, I've forgotten the name of them, the CVEs, the reports, the filed reports.
Speaker 1:
[13:31] CVEs, the vulnerability reports.
Speaker 3:
[13:34] They looked through all those. They tried to find any that could be directly attributed to either Mythos or to Claude, and they found almost nothing that seemed to be related to Mythos, and they're calling for more transparency from Anthropic about when they're making claims about how many fundamental bugs they're finding. Part of me is like, well, I don't know what's been fixed yet, so it might not be a CVE, because some of the stuff they're talking about is so fundamental.
Speaker 1:
[13:56] Yeah, Patrick Verdi, who works for Volmchek, said that it may be 40 CVEs.
Speaker 3:
[14:04] Yeah, exactly.
Speaker 1:
[14:04] But they're not attributed to Mythos, they're attributed to Anthropic researchers. The only one that Anthropic has claimed, is that the one they found in OpenBSD or FreeBSD?
Speaker 3:
[14:16] The FFmpeg one, which I don't know if it's the same.
Speaker 1:
[14:18] And the FFmpeg one, more as an example. But then I've also seen stories that say, this is from the Wall Street Journal, you're about to see a lot of critical software updates. Don't ignore them.
Speaker 3:
[14:32] Well, that's Nicole. Yeah, she's on top of this.
Speaker 1:
[14:35] Nicole Wen.
Speaker 4:
[14:35] Yeah.
Speaker 1:
[14:36] I think that's going to be the proof, regardless of who these CVEs are attributed to: if you suddenly see a flood of zero-day patches. Microsoft put out, last Patch Tuesday, the second-biggest Patch Tuesday ever. I don't know if that's related or not. I think it's too early to say that those would be CVEs discovered by Mythos.
Speaker 4:
[15:03] What do organizations think about it today, though? You can still have AI-assisted hunts going on without Mythos. So I don't understand why they're waiting for Mythos to come out, because they should already be using these models to counteract what's already happening today.
Speaker 3:
[15:19] Absolutely.
Speaker 2:
[15:20] I think the point of Mythos is so that they don't do CVEs. They don't want to publicly disclose the bugs for people to work around.
Speaker 1:
[15:27] Fix it before it becomes public.
Speaker 3:
[15:30] Well, you remember Dan Kaminsky, the sadly late Dan Kaminsky, who had that DNS exploit. A friend of mine helped him do the disclosure because he thought, as one person, I'm going to destroy the world by accident if I'm not careful.
Speaker 1:
[15:45] So many people rely on DNS, it would have brought the internet to its knees. It would have just killed the internet.
Speaker 3:
[15:49] Wild, wild. And I'm thinking, well, how many of those are there out there? Because Dan found it kind of by accident and no one had seemingly ever exploited it. So I hope not too many, but you just don't know.
Speaker 1:
[15:59] Well, what do we think? Are there millions of zero days just waiting? You said, Glenn, you said that you had software you wrote and have been running for how long?
Speaker 3:
[16:11] 27 years live on the internet. I mean, a little book price comparison site. But all the automated vulnerability scans, the things that hit every website all the time, looking for WordPress flaws, credentials, any kind of vulnerability for injection, they all have been hitting it. So the problems, fortunately, were not exposed in an exploitable way. But the first time I said to Claude Code, find me all the bugs in this, it's like, hey, you didn't escape all this stuff, and you didn't do this, and this could have been an injection. If someone had sent a URL that looked like this, they could have just overwritten your databases. And I'm like, oh my God.
Speaker 1:
[16:45] You weren't sanitizing your inputs, young man.
Speaker 3:
[16:48] I sanitize, I'm sorry, Father, I did not sanitize all of my inputs, for which I am heartily sorry.
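For readers following along at home, the injection risk Glenn is joking about comes from splicing user input directly into a query string; parameterized queries are the standard fix. A minimal sketch using Python's built-in sqlite3 module; the table and data are hypothetical, not Glenn's actual site:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, price REAL)")
conn.execute("INSERT INTO books VALUES ('Flong Time No See', 30.0)")

# Unsafe: user input spliced into the SQL string. An input like
# ' OR '1'='1 changes the query's logic and dumps every row.
def find_book_unsafe(title):
    return conn.execute(
        f"SELECT price FROM books WHERE title = '{title}'"
    ).fetchall()

# Safe: a parameterized query treats the input as data, never as SQL.
def find_book_safe(title):
    return conn.execute(
        "SELECT price FROM books WHERE title = ?", (title,)
    ).fetchall()

print(find_book_safe("Flong Time No See"))  # [(30.0,)]
print(find_book_safe("' OR '1'='1"))        # [] -- just a weird title
```

The same `?` placeholder pattern (or its `%s`/named-parameter equivalents in other database drivers) is what "sanitizing your inputs" boils down to in practice.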
Speaker 1:
[16:55] But that's the point is even the current models are really good at finding this stuff.
Speaker 2:
[17:00] The thing is we're talking about software vulnerabilities, but there's still microcode vulnerabilities on the lower level. And that's gonna be super stupid.
Speaker 1:
[17:08] In the processor, you mean?
Speaker 2:
[17:10] Yes.
Speaker 4:
[17:10] That's the elephant in the room. There are vulnerabilities that have been longstanding in all devices, all applications, all services out there, and now it's just easy to expose them. And then these companies are going to have to go and retroactively patch everything.
Speaker 1:
[17:23] And just to be clear what the threat is, people are afraid that bad guys are going to get access to these AI models and find the vulnerabilities and exploit them. And so the idea is, get these companies to fix them before we let bad guys at the model. But as we've been saying, the current models are good enough to find a lot of exploits.
Speaker 2:
[17:46] Or work with the government and make sure the flaws don't get fixed.
Speaker 1:
[17:50] I think it's very ironic, because one of the reasons you mentioned that Anthropic's been on a roll is because the government said, you're a supply chain risk, and President Trump tweeted, or whatever you call it, he truthed, nobody in the government can use Anthropic. Meanwhile, as soon as Mythos came out, all of these government agencies were begging Anthropic: can we have it? In fact, Dario Amodei went to the White House on Friday and met with Susie Wiles, the Chief of Staff, because now the government's in a little bit of a bind. They want access to this thing. This supply chain drama is so ridiculous. I think the government actually is going to get access to it, as it should. If it's that good at finding vulnerabilities, we should all have access to it in a controlled fashion.
Speaker 3:
[18:43] This is where we get worried: microcode, and then also the Internet of Things, all these terrible, untouched devices, firmware that's out there. Somebody just buys up or finds, I mean, that's the other thing, downloads the firmware for devices, emulates it, runs tests against it, and then, well, there's a million of these. I don't even want to say a brand name because I don't want to accidentally slander a brand.
Speaker 2:
[19:06] You can say Wyze. Wyze cameras.
Speaker 3:
[19:11] Yeah, the un-Wyze cameras out there, and patches them all in a way that bricks every one of them or something.
Speaker 1:
[19:18] Yeah, remember, Wyze had to say, well, we're not going to sell that camera anymore because we can't fix it. Yeah.
Speaker 3:
[19:23] There's a lot of stuff out there like that. I mean, how many generations of routers have never been patched or there is no way to upgrade the firmware for them?
Speaker 2:
[19:29] Or if you think about industrial hardware as well, things that are made to set and forget.
Speaker 4:
[19:34] PLCs and stuff like that, yeah.
Speaker 3:
[19:35] It's funny, the stuff about how the world ends. Everyone's like, well, the AI will take over and then we'll all be gray goo. And it's like, no, I'm worried about the control system in an electrical substation near my house blowing up, probably.
Speaker 1:
[19:47] Well, it's not the AI, it's the bad guys, right?
Speaker 3:
[19:50] Absolutely, yeah.
Speaker 2:
[19:51] We need that Borg technology where, like, if there's a vulnerability, that it just gets distributed amongst all the other drones. And then, like, they adapted, and they have to find a different vulnerability to get through, hopefully, one day.
Speaker 4:
[20:04] I think it's pretty interesting that, like, Opus 4.7, when it came out, they did a bunch of benchmarks against it, and obviously, comparably to Mythos, it's supposed to be the safer version of Mythos, as they say.
Speaker 1:
[20:14] Safer because it's dumber?
Speaker 3:
[20:15] Yeah.
Speaker 4:
[20:16] I guess so. What I thought was really interesting was, because of all the safeguards they put in 4.7, the 4.6 model actually was benchmarking better at hacking systems than 4.7 is.
Speaker 1:
[20:29] Interesting. Okay, a couple of things. First of all, have you heard of cal.com? Apparently, I didn't know about it because I'm not an enterprise, it's a very popular calendaring solution, open-source scheduling software that companies use to schedule meetings and so forth. It's going closed source because of this. They say open-source security has always relied on people to find and fix any problems, but now AI attackers are exploiting that transparency. So they're going closed source. I don't know, honestly, if this is a good idea. Do you think it's a good idea? Some people have said this is the end of open source, because if your source code is sitting there on GitHub, bad guys are going to attack it. They're going to use AI to find the vulnerabilities.
Speaker 2:
[21:24] I've been a long-time user, and still a user, of cal.com. I loved it. It's awesome. Part of it is because for individual accounts the hosted version was free. But then you could also deploy it; I know a lot of home labbers love cal.com because they can deploy it on their own infrastructure. But this seems like an excuse, because it doesn't make sense. If you remember TrueCrypt and VeraCrypt and all this, those are open source. And if anyone wants to protect users' data, I mean, encryption software has very important user data, and they were still able to stay open source.
Speaker 1:
[22:03] It's canonical that security by obscurity is not security.
Speaker 2:
[22:09] But also, if you look at other projects like Redis, other products that were open source and went closed source, they have been beaten up by the community. The retribution is swift and harsh, to the point where some projects have had to flip back to open source because the rebellion is so strong. This feels like a bait and switch for a lot of people, and a lot of the tagline for open source is, hey, you always have access to it because we're open source. When they change that, it's seen as a betrayal. They said they're going to have a version that stays open source, so now they're going to be maintaining two codebases. It just doesn't make sense.
Speaker 1:
[22:54] They have cal.diy, which is the open-source version for self-hosters.
Speaker 2:
[23:01] They should just call it cal.fu. I mean, that's just...
Speaker 3:
[23:03] Yeah, I was going to say, go fork yourself is what a lot of people say when this happens, right?
Speaker 1:
[23:10] It makes sense, because how are they going to maintain it? They can maintain both, but as soon as you fork it, you diverge.
Speaker 4:
[23:15] Yeah. They could have done a call to action to the community and just said, hey, everyone, use your tokens against our repo and secure it all up, and we'll do that from now on, right? I'm sure the community would have done it.
Speaker 1:
[23:25] That's a great idea.
Speaker 4:
[23:26] Yeah.
Speaker 1:
[23:27] A popular open source product should do that. That's a great idea.
Speaker 2:
[23:31] Yeah. That's why it doesn't add up. It doesn't make sense what they're saying. And plus, I mean, are they seeing any real-world consequences? Did something bad happen? That would at least be a catalyst to say, this happened with one of our major customers, we're doubling down and we're closing the source. There's just no real justification and no data that they've pointed to.
Speaker 3:
[23:56] It's hard to...
Speaker 2:
[23:56] It doesn't make sense.
Speaker 3:
[23:57] So after all the WordPress nonsense that's happened, and then Cloudflare's response to that, it feels like there's not enough goodwill left, I think, when open source projects say, well, this isn't really as open source as you thought, or we're going to switch our model, or people are misusing things. And you're like, well, you can't misuse those things if they're open source. Oh, we have the trademark, we have the... I think a lot of goodwill has evaporated in the last few years. I want to say the WordPress debacle has been part of that. I don't know, maybe that's exaggerating its role.
Speaker 1:
[24:33] I quote him a lot, he's in our Club TWiT, he's a big AI user: Darren Oakey, one of the regulars in our AI user group. He said, if everyone has access to these tools, we can find vulnerabilities. For instance, GitHub Copilot already does tons of stuff; it should be finding vulnerabilities and alerting users. Then everything's more solid, and it has a cascading effect. He says the real problem is that so many people are using pointer-based languages like C++, and that's where you get these buffer overruns and null pointer vulnerabilities. He said people should just be using AI to convert it to Go or convert it to Rust.
Speaker 3:
[25:14] Oh, no, the code war is now a language war.
Speaker 1:
[25:17] But he has a point. In fact, I don't know what language you're all getting your AIs to write in. The first thing I ever wrote was in Rust. I thought, well, why not? Let's use a safer language. I don't have to write all that extra boilerplate; it's going to do it, so let's use Rust. But lately, I've been using Go because of its concurrency model. In both cases, it's more secure. I would never dream of asking an AI to write something in C++ or C, even though I love C.
Speaker 4:
[25:50] Most of the models are written in Python, so they're highly trained on Python. It's a safe language.
Speaker 1:
[25:55] Yeah, it's safe, right? There are no pointer vulnerabilities in Python.
Speaker 2:
[25:58] But in this day and age, we're just talking about LLMs and vibe coding. People will just make a version of this and then open-source that, and let other people use it to self-host, and that's going to be the future. It's like, hey, I was a cal.com user, now it's closed source, so here's my version, and if you want, please add your contributions, we'll find bugs, and then they're going to die.
Speaker 3:
[26:21] This isn't, I don't want to get us off on too much of a tangent, although I know that's the point of this show, in part.
Speaker 1:
[26:25] It is the point.
Speaker 3:
[26:26] I've been wondering. All the coders I know talk about using it for public-facing projects, most of them, but they all also talk about, I'm sure everybody here does, how it's like, oh, I needed a thing, so I just had it write it for me. For me, the interface is minimal or it's extensive, but I had it write a game show console for a game I invented that I run once a year on a backwater podcast network. I'm thinking, well, of course, I could do my own fork of cal.com, run something locally, have it add features, and if they're good enough, I can have it submit pull requests, or have it update my fork if improvements are made.
Speaker 1:
[27:08] There you go.
Speaker 3:
[27:09] Integrate them.
Speaker 1:
[27:10] Keep an eye on it.
Speaker 3:
[27:11] But that becomes, I think, feasible as an individual if there's things you want, or if you want kind of, I don't know, clean room, but you want something where I'm not necessarily always updated to the latest public version because of whatever concerns. I'm just running it for myself. So that's also going to happen.
Speaker 1:
[27:27] I want to take a little break, and then we can watch robots fall over, which is always fun, in the half marathon in Beijing. Although these humanoid robots do run really fast; honestly, I have some misgivings about fast robots. That seems dangerous. We'll talk about that and AI anxiety in general, plus a lot more. It's a fun time to get together and talk about technology, that's for sure, on This Week in Tech. We're so glad to have Glenn Fleishman back in the fold. So nice to see you, Glenn.
Speaker 3:
[28:00] Such a pleasure to be here. Thank you.
Speaker 1:
[28:01] It was fun. Two N's in Glenn. Jeopardy champion, Glenn Fleishman. Did you watch Jason Snell's episode?
Speaker 3:
[28:08] I did. Speaking of The Incomparable, three Six Colors writers have now been Jeopardy players, two of us Jeopardy champions. Dan Moran.
Speaker 1:
[28:17] You, Dan Moran.
Speaker 3:
[28:18] But Jason had a great time. He got beat by one of the best. He got beat by Jamie Ding, who is still playing. Still playing. He's won over $700,000, and he's at 24, 25 games as of Friday.
Speaker 1:
[28:29] He's a super champion.
Speaker 3:
[28:30] He's great. He's such a modest, funny, quiet guy. Jason feels very privileged to have been defeated by a super champion, by a top-five all-time player.
Speaker 1:
[28:40] Yeah. That's awesome.
Speaker 3:
[28:41] It's good.
Speaker 1:
[28:42] Also here, Wesley Faulkner. Ever dream of being on Jeopardy, Wesley? Is that something in your bucket list?
Speaker 2:
[28:47] No. Funny enough though, my step-sister has been on Jeopardy.
Speaker 3:
[28:51] Oh, really?
Speaker 2:
[28:52] Yeah.
Speaker 1:
[28:53] Awesome.
Speaker 2:
[28:55] She was writing a book about it, and I think that's still in the works. But yeah, the pressure feels too high. Yeah, me too. I don't think I would perform like that, to rattle off, like, gosh, when I'm in an interview, and they're asking something about stuff that I know about, like the things that I lived experience, then I still have issues. Being on Jeopardy, that's a whole nother level now.
Speaker 1:
[29:22] Lot of pressure.
Speaker 3:
[29:23] I'll tell you, the funniest moment, though, is one of the questions I got was, clues I got, I'm sorry, was something to do with the answer was Bush, the older George Bush president, and I said, who is Bush? And Alex said, can you be a little more specific? And I panicked, and I said, who is George Herbert Walker Bush? I was like, ah, right answer. I was like, how specific do I have to be? And I was correct, but it was just like, why do I know his full name? I know all four of his names.
Speaker 1:
[29:47] It came to you suddenly. And Lou Maresca is also here. Jeopardy champion? No, but champion in our hearts. AI engineering leader at Microsoft. Fast Company had an article taking off from the fact that somebody threw a firebomb at Sam Altman's house, and then somebody else fired shots at it: AI anxiety is turning volatile. I've thought, and worried, for a long time that we're almost going to see a civil war between believers and doomers. And Glenn, you already said you're kind of in the middle, right? You use AI and you love it, but you also feel bad about it.
Speaker 3:
[30:29] Well, yeah, part of it is backlash. It's not unlike NFTs, which were a horrible thing in the creative community; there are people who lost lifelong friendships and working relationships if you just said the word NFT. I think AI is a little different, but the backlash is huge, because a lot of creative people feel completely ripped off. As a writer of many published works and sometimes a visual artist, I get that, and I think we put the cart ahead of the horse in terms of how we appropriately license the material that goes into producing generative creative outputs. That has colored a lot of people's opinions, plus environmental concerns, some of them realistic and some less so, and regulation and electrical use. Here in Seattle, we just got word: the local newspaper said, hey, five different companies that want to build data centers have approached Seattle City Light, the city-owned utility, and said, we need electricity equivalent to one-third of the city's entire generation capacity right now. That freaks people out, even if it's necessary, even if it happens in a regulatory fashion, even if it happens in an environmentally sensitive fashion. How do you encompass that? So there's all of that. But, gosh, is it useful for coding?
Speaker 1:
[31:50] It's hard, isn't it? When it's so great at the same time, I feel guilty too, I know what you mean.
Speaker 3:
[31:55] Yeah, so many good things about certain aspects of it don't involve replacing creative, like artistic and writing work that is intended to be creative, but supplementing and amplifying our abilities as humans by using tools we developed.
Speaker 2:
[32:10] I disagree with the premise though.
Speaker 3:
[32:12] Okay, yeah, right on.
Speaker 2:
[32:15] Yes, AI is a problem, but I think what's going on is that it's the billionaires. The reason I make that distinction is that if you look at the age of the people who are doing this, they're fairly young. Yes, they could be displaced, and yes, a lot of entry-level jobs are being removed because of AI, or the excuse of AI. There's talk like, hey, when you say you're creating something that's going to threaten the human race, people are going to believe you. But it doesn't matter whether they believe it or not; the CEOs who are firing all these people are saying that is the reason. So regardless of whether it's AI, regardless of whether it's a threat for the future, there are impacts today that are being blamed on AI, in terms of junior roles not being available. Couple that with those warehouse fires, where people are lighting up inventory and saying they're not getting paid enough, and with what's going on in the world in general, in this administration: prices are going up, wages are going down, and unemployment, in terms of how long people stay on unemployment trying to find another job, is really, really hard, because now everyone is using AI to replace recruiters, and so they're using AI to reject wider and larger numbers of people who are looking for roles. I think it's a combination of all of the above, and everything that's going on, that's causing some of this backlash.
Speaker 3:
[33:53] I'm going to agree with you against myself and say, with my history hat on, if we roll back 200 years, the Luddites were not wrong. They get a bad rap. Their methods were very similar to what's being used today in some ways, and are going to accelerate, I think: throwing the sabots into the gears of progress.
Speaker 1:
[34:11] Sabotage.
Speaker 3:
[34:12] Sabotage, because the point of them was exactly this. It was massive displacement for mechanization, without a plan for how it would affect the economy or workers. You're going to displace a huge number of people with necessary skills, and then what happens? How does your economy survive that? The people at the top didn't care, and the people at the bottom said, well, we've got nothing left to lose. The last public beheadings in England were of people convicted of sabotage, of being Luddite leaders. So it's not in living memory; it's two centuries back. But you still see the same kinds of things already. If you're on X, which I am not, you see people saying things like, oh, there should be capital punishment for people who attack data centers, or whatever. That kind of rhetoric is already being thrown around too easily. The Luddites were right in their message, maybe hard to support in their particular approach.
Speaker 1:
[35:11] Lou, is there any trepidation among people who are actually working in AI? Does this come up at all or all the time?
Speaker 4:
[35:20] Yeah, all the time. I mean, my true belief is that obviously it's not eliminating jobs, it's eliminating the tasks inside the jobs, and so you really have to learn to adjust and evolve. But that doesn't change the stresses. I personally feel stressed that my job is going to evolve very soon. I've been doing this for 22 years, and that whole feeling makes me uncomfortable: will I still be relevant? But after looking at everything it generates, automates, or makes easier, I can see it giving me room to do other things, especially the human factor of it, right? The ability to make decisions using my many years of experience. So I get pulled in different directions, but I can definitely tell you everyone's feeling it.
Speaker 1:
[36:09] You still need the engineering skills, the planning skills, the understanding of testing. I mean, shipping.
Speaker 4:
[36:17] Or you need to be the orchestrator.
Speaker 1:
[36:18] Orchestration. And so that's a skill. The problem often raised is: what about entry-level jobs? People who don't have those skills yet, who are at the beginning of the career in which, 20 years later, they'd have those skills. Where do they fit in? And we were talking earlier about computer science majors dwindling, because nobody thinks there's a future in learning Python anymore.
Speaker 3:
[36:44] I mean, this whole notion, and I feel like it was very brief, was, well, everyone's gonna be a prompt engineer. That didn't feel right to me. But there's gonna be a different kind of education; it's gonna be a little more abstract. I mean, most people don't do machine-level programming, or microcoding; that's never been the case. And no one said, oh no, we put the microcoders out of business, we have abstracted languages. If this is another...
Speaker 1:
[37:10] AI's gonna help us design chips better and better, I think.
Speaker 3:
[37:13] Yeah, exactly. I think computer science was so practical in some ways because there was so much job demand feeding it. Maybe it's gonna become more of an analytic discipline again, because people will have to rely on that kind of thinking.
Speaker 1:
[37:30] Nobody's studying how to be a coachman anymore. I'd need to learn how to drive a four-in-hand, or whatever they call them. You know, jobs change. Right. I think it's encouraging that a lot of those kids who were going into computer science are now going into engineering and data science, because those are probably the places to go, right?
Speaker 2:
[37:54] But the thing is, coding has always been more basketball than golf. It's never really been an individual sport; it's always been a team sport. You have to think of all the things that work together to produce the outcome you all see. I think that's the part people in companies are missing. The higher you are in a company, the more abstracted you are from all the minutiae, all the invisible glue work it takes for the outcome to actually happen. When you're an executive using AI to write reports, digest reports, write emails, you're like, this thing is amazing, it can do everything I can do. Which means those are the jobs that should be eliminated, not the people at the bottom who are working with vendors, hearing the complaints, coordinating the responses, and understanding the community. And one of the bad parts of eliminating these jobs because of AI is that we're losing diversity. Not the kind of diversity you think of with DEI; even diversity of thought is being lost, because it's being coalesced into one decision maker who's orchestrating all these AIs, which is a problem.
Speaker 1:
[39:10] That's interesting. It becomes golf.
Speaker 2:
[39:13] Yeah. I'm steeped in this because I'm writing a book about it. The reason these decisions are being made is not because of AI; no AI says you should lay off 40 percent of your workforce. These are people at the upper part of the company who are too abstracted away from the actual work to have a really clear eye about how this is going to negatively affect them. They think that because they have been successful, they'll continue being successful, and that's not going to happen.
Speaker 4:
[39:44] Going back to the computer science thing, though: I work with a ton of talented applied scientists who are actually training these models and writing the evaluations.
Speaker 1:
[39:53] Yeah, somebody's got to do that.
Speaker 4:
[39:54] Somebody's got to do that, but you still have to have programming skills. You have to have engineering skills to really think through the problem sets. I think that's where it's going to come in. People stepping away from these roles is not going to help.
Speaker 3:
[40:07] I was thinking, Scott Adams, RIP, question mark? Scott Adams had a strip.
Speaker 1:
[40:14] Well, rest anyway. We don't know if it will be peaceful.
Speaker 3:
[40:18] A number of years ago, he had a strip where the pointy-haired boss gets abducted by aliens. There's a bit where the aliens say, teach us your management secrets. The last panel shows the boss, bandaged and on crutches, saying, I downsized 90 percent of the aliens, and then the ship crashed; that must be their fault. Something like that. To Wesley's point, that's kind of what it feels like.
Speaker 1:
[40:39] Yeah. This is what scares me. This is the Beijing half marathon, this is from Reuters. They're moving very quickly in China towards bipedal robots; I guess we are too. Look at these robots running. At first it's comic, especially when they fall down and burst into a million pieces. But in fact, a robot did break the world record for a half marathon, winning in 50 minutes, 26 seconds. To me, that's scary. I don't like the idea of fast-moving machines with a lot of power in their limbs. Some of these are cute. That's a cute one. Some of them-
Speaker 2:
[41:19] Yet you're a fan of F1?
Speaker 1:
[41:21] Well, yes, I am a fan of F1. I like fast-driving cars. I just don't know. Some of these are really funny. Like, oh, you're missing all the excitement, this is the slowest one. But by the way, this is the point: last year, the winner took two and a half hours; this year, 50 minutes.
Speaker 3:
[41:44] Oh my gosh.
Speaker 1:
[41:45] They're getting better and better and better at that.
Speaker 3:
[41:48] I just, when I see that, I only see military applications, but I'm also like, you can push it over. So probably not yet.
Speaker 1:
[41:54] Yeah. But it's just a matter of time, though, isn't it, before you can't push it over. Those dogs were scarier to me, actually. Oh my God, the running robots. Speaking of CEOs laying off: Snap is laying off 16% of its full-time staff, and, I don't know if it's AI washing, but they say these thousand employees are being replaced by AI. I think it may just be that Snap's business isn't as good as it used to be.
Speaker 3:
[42:26] There was so much hiring for so long. I think it's a great excuse to keep the stock market from freaking out, even if that's what it could be.
Speaker 1:
[42:35] It's AI. Yeah, it's AI.
Speaker 3:
[42:37] Like Block, the Square parent company, right? Didn't they do that?
Speaker 1:
[42:41] And Meta now says it's gonna lay off 10%, which is I think 8,000 people. That is terrifying. I know. We always kind of gloss over the human toll of this, but that's a lot of people who will have to find jobs in what must be a very difficult job market.
Speaker 4:
[43:03] It's very difficult. I mean, this is the new business model, right? You basically want to make sure your shareholders think your company is being fiscally responsible, so you decide to just start laying people off and blaming it on AI. It's kind of a weird model.
Speaker 3:
[43:17] I feel like layoffs are always so much more emphasized, partly because of legal reporting requirements in the United States, for sure, so the numbers get out there even if you don't want them to. But there was such weird and massive hiring during the heat of the pandemic. How many people get hired back? I never know: when they say we laid off 8,000 people, do they then hire back 4,000? What are the wages of those people? How many were part-time? It's such an incomplete picture. And then you look at the stock market, where the S&P is, I guess, at an all-time high, which I know doesn't...
Speaker 1:
[43:52] Yeah, boy, that's baffling to me. I've got to say, I did something really stupid. Two weeks ago, I sold all my stocks. I don't have individual stocks, I have big index funds. And I just got terrified. I thought, I have to live on this, I'm almost 70, this is all I've got for the next 20 years. I don't want to lose 20% to a stock market crash, which I thought was imminent. So I sold everything, and I'm just sitting on a pile of cash. It's enough, I guess, for me to survive for the rest of my life. But then the stock market goes through the roof, and I'm like, what the hell is going on? I don't understand. Do any of you understand?
Speaker 2:
[44:40] It doesn't make sense to fire people if they say they overhired. If you've been part of a company, you know there's a huge backlog of things they wish they could do. They could reallocate people to more projects, even to geographical expansion, different territories or different sub-areas. They could have a group that's just made to make a wool shoe or something like that; apparently there's a huge market demand.
Speaker 1:
[45:10] Oh, I have Allbirds. I'm wearing Allbirds right now, as a matter of fact. I'm just glad these shoes are going to be AI-generated from now on. That's the strangest story.
Speaker 2:
[45:19] It's just because they're bad at people management that they're reducing people.
Speaker 1:
[45:24] Yeah, this is like Long Island Iced Tea becoming Long Island Blockchain. It's just a way to quickly pump the stock and get out.
Speaker 3:
[45:35] Remember meme stocks? Yeah, I guess we're back to that.
Speaker 2:
[45:38] Yes, that's what this is. The AI layoff is the new meme stock.
Speaker 1:
[45:42] It's to pump the stock.
Speaker 2:
[45:43] It's the new pump stock. Yeah.
Speaker 1:
[45:45] Well, it's working.
Speaker 2:
[45:47] Unfortunately, it'll keep going until it stops working.
Speaker 1:
[45:50] Yeah, that's the problem. How long does it work for? How long does it fool people? I don't know.
Speaker 2:
[45:54] Well, look at the nines of these companies. The service availability just keeps going down. So it's not like the quality is going up.
Speaker 1:
[46:04] Well, by the way, like GitHub. But honestly, that's an example of GitHub's success. So many people are committing. I have 13 repos on GitHub. All the AI commits are just killing GitHub. I can't blame GitHub for the nines.
Speaker 3:
[46:24] Oh my God. The one unalloyed good thing I'll say about AI is, I'll be like, yeah, do a commit, and then it writes like a thousand words that actually, eloquently explain it.
Speaker 1:
[46:34] Oh, the repos are fantastic.
Speaker 3:
[46:35] My commits are like, fixed something.
Speaker 1:
[46:39] Yeah, the commits are so good.
Speaker 3:
[46:40] They're like, what did I fix?
Speaker 1:
[46:41] I don't know. Oh, I'm never going to write another commit message. Never.
Speaker 2:
[46:45] Mine are like, fix number two, second fix, second fix of the fix, third fix of the fix of the fix.
Speaker 3:
[46:50] 2.1.1.1.1 update.
Speaker 1:
[46:52] Yeah, I did something, I can't remember what it was, but this is the new version.
Speaker 3:
[46:56] And I'm like, that's beautiful. What you wrote was beautiful.
Speaker 1:
[46:59] Lou, do you enforce like good commit messages? You must. There's gotta be a pulse.
Speaker 4:
[47:03] Always, always, always, always. Yeah, and it's because it's documentation too. You want to be able to use the commit messages to retroactively document how the software has changed, and so on.
Speaker 1:
[47:12] People who are not GitHub aficionados, or Git aficionados, are probably going, what are they talking about? When you write software, you should use a system that keeps track of versions, so that if you make a change and it causes problems, you can go back, and so you can know who made a change if it's a big team working on a single code base. There are lots of reasons for versioning, and there have been many solutions. Git ended up being the dominant one; Linus Torvalds created it when he realized he couldn't keep track of all the commits to the Linux kernel, and Git's become huge. GitHub is perhaps the biggest purveyor of Git. People store their source code in a GitHub repository, or repo, and when they make changes, they write a little message called a commit message. Wesley and I just write, fixed it, but good professionals will write a longer message saying what they did, so you can ascribe blame, and so you can rewind appropriately, knowing what changes have been made. You all experience this when you go to the App Store and there's an update, and the update says, fixed some stuff. What I hate is the ones that say, we're always working harder to make your software better, and we just did.
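[Editor's note: the workflow Leo describes can be sketched in a few commands. This is a minimal, hypothetical example; the repository and file names are made up. It contrasts a throwaway commit message with the descriptive kind Lou enforces.]

```shell
# Create a throwaway repository to experiment in (hypothetical names).
mkdir demo-repo && cd demo-repo
git init -q
git config user.name "Demo Dev"
git config user.email "dev@example.com"

# First change, recorded with a lazy message -- hard to decipher later.
echo "print('hello')" > app.py
git add app.py
git commit -q -m "fixed it"

# Second change, recorded with a short subject line plus a body
# explaining what changed and why.
echo "print('hello, world')" > app.py
git add app.py
git commit -q -m "Expand greeting in app.py" \
  -m "Change the greeting from 'hello' to 'hello, world' so the output matches the docs."

# The history now documents the project change by change.
git log --oneline
```

[Running `git log --oneline` shows only the subject lines; `git show` on a commit reveals the full body, which is where the real documentation lives.]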
Speaker 3:
[48:40] I think Apple should not require release notes for minor updates. That requirement is the whole reason you have to put text in there.
Speaker 1:
[48:48] Right.
Speaker 3:
[48:48] It's embarrassing if you don't, I guess. And you can't just put a period.
Speaker 1:
[48:51] Or maybe companies could really do a change log and say, hey, this is what we fixed. Wouldn't that be? Why can't they?
Speaker 2:
[48:59] Be more like Lou.
Speaker 1:
[49:00] Be more like Lou. Yes, everybody.
Speaker 3:
[49:03] Good advice for life.
Speaker 1:
[49:04] Be more like Lou. We're going to take a little break. More with Lou and all of us who want to be like him, Glenn Fleishman, Wesley Faulkner, Lou Maresca, great to have all three of you. All right, enough AI, enough AI, we've been talking about AI. Well, there's one more. Let's talk about the courts. AI and the courts. Elon Musk versus Sam Altman.
Speaker 3:
[49:25] Oh no, AI and Elon Musk, Leo.
Speaker 1:
[49:27] And court. This is the trial in which Elon is suing OpenAI, saying, hey, when Sam and I got together to create this, it was a non-profit, and now they're taking it for-profit. I'll admit I'm not an Elon fan, but there is some merit to this, especially if you've read that New Yorker article about how slippery Sam Altman has been this whole time. Elon is saying that OpenAI has strayed from its founding mission, and that's not what he funded it for. He also wants a huge amount of money, which I don't think he's going to get. Nine jurors in federal court in Oakland, California, will soon get this case, and their decision could affect OpenAI's IPO and Musk's status as an OpenAI competitor with xAI. Incidentally, I think his case is a little weakened by the fact that he said, we're going to found OpenAI so the big guys don't get AI, and it's going to be non-profit, and then he leaves and founds xAI, which is fully for-profit and fully closed, everything he's complaining about with OpenAI. Elon gave about $38 million to found it and left in 2018 after disagreements with Sam Altman. The lawsuit has been whittled down, according to Wired, to three core claims. First, whether OpenAI breached its charitable trust, because it was supposed to be non-profit and now has a for-profit arm that generates billions in yearly revenue, and, by the way, its code is not open. One of the things that came out in the New Yorker article was that OpenAI originally had a covenant that if any other company ever came up with better AI than OpenAI, OpenAI would immediately dissolve and go help that company. That went out the window pretty quick. Second, there's a claim of fraud, that Altman deceived Musk about his intentions to make a profit.
And the third claim is unjust enrichment, which argues that Sam Altman, president Greg Brockman, and other OpenAI investors have enriched themselves at Musk's expense. Among other things, he's asking the jury to remove Altman and Brockman from OpenAI management, return their ill-gotten gains to the company's non-profit, and block the for-profit arm from existing as a public benefit corporation, which is what it currently is. In the long run, he could get hundreds of billions of dollars if the jury rules for him. What do you think is going to happen in this case?
Speaker 2:
[52:48] It's gonna be a great discovery process for everybody. I think that's what's gonna happen.
Speaker 1:
[52:52] It has been so far.
Speaker 3:
[52:53] They're gonna pay Elon Musk off and he's gonna take it and then complain about it for the rest of his life.
Speaker 1:
[52:59] I'm surprised they didn't settle though. They went to trial, right?
Speaker 2:
[53:05] Yeah.
Speaker 1:
[53:05] And there is a cynical point of view that Elon is just doing this to slow OpenAI down so that xAI can win. It's funny, because, I don't know, how do you all feel about Grok, Elon's model? I think, in general, it has some abilities, but I don't think people embrace it, certainly not the way they embrace ChatGPT or Claude.
Speaker 4:
[53:33] I like its openness more. You know, there are fewer gates in the way. That's one reason I go to it sometimes.
Speaker 1:
[53:39] Oh, in the sense of, I can't talk to you about that.
Speaker 4:
[53:43] Exactly. Yeah. Yeah. That's the one good thing about it.
Speaker 2:
[53:46] Yeah.
Speaker 4:
[53:46] But yeah.
Speaker 2:
[53:47] You want to kill kittens? I'll tell you how to kill kittens.
Speaker 3:
[53:50] I don't think it's in its MechaHitler period, but.
Speaker 1:
[53:53] Well, to me, the reason I'm skeptical of Grok is that it's so clear Elon puts his thumb on the scale, right?
Speaker 4:
[54:01] Yeah.
Speaker 1:
[54:02] That Elon sends messages downstairs saying, oh, you should mention me more, things like that. Well, that's not how you make a good AI.
Speaker 2:
[54:13] I think all AIs have their biases.
Speaker 1:
[54:16] They do. There are the system prompts, right?
Speaker 2:
[54:19] And also, if you've ever used Kilo Code, Grok is one of the free models, so I think they're really trying to get users onto their models, to get more information about the kind of data that gets sent through them, to make them better. It is extremely opinionated about its approach, and even Elon Musk says it's not biased enough; he says, there are problems with it, we will fix it. He says that all the time. So no matter how good it is, I think he can make it worse, and he's in a unique position to do that. I hear what you're saying, but I think the bias that is Grok will definitely make it at odds with itself. So I personally don't think it's good, and because of the thumb on the scale of the bias, it's never going to be good. So if you think about...
Speaker 1:
[55:28] It's good for generating nude pictures of people you know.
Speaker 2:
[55:31] Yes, exactly. But if you think about Anthropic as a spinoff of OpenAI, going back to that, they said they wanted to be a safer model; they wanted to tackle that problem. And I think dealing with those harder problems makes you better. Having, I don't want to get too deep into doctrine about how they want to approach it, but having some principles makes it better, because they're trying to carve out a niche and be opinionated in that way. They all have their own biases, and Grok is the opposite of OpenAI, saying OpenAI is too restrictive, and you can see that it performs worse. If you look at who's trying to tackle the harder problems, it pays off; you can see why OpenAI is not as good as Anthropic, and why Grok is not as good as OpenAI.
Speaker 1:
[56:25] Darren's saying something important, too, though: Grok does not have a coding harness, like Codex or Claude Code. Does that make sense, Lou?
Speaker 4:
[56:36] He's right, it doesn't. I mean, I'm not going to give out a bunch of secrets here, but you technically can.
Speaker 1:
[56:42] Well, give out some secrets, Lou.
Speaker 4:
[56:44] You can always use MCP bridges to basically bridge in and enable Grok as a coding agent through OpenClaw. It is possible. Is it any good? It's so-so, right?
Speaker 1:
[56:56] Is there a Grok MCP server written by xAI, or is it just third-party?
Speaker 4:
[57:02] No, these are just third-party people bridging the OAuth capability, so you can access Grok and make it a coding agent, basically.
Speaker 1:
[57:10] I have access to Grok because I got, as Cory Doctorow calls it, a non-consensual blue check. Elon, at some point, decided for some reason to give me a full account, so I have access. But I just don't use it. I got really turned off when, remember when they came out with the manga girl and the fox? The avatars, whatever you call them. The day they came out, I'm sitting at breakfast and I said, oh look, there are these avatars, let me go talk to the fox. And it said something so rude, completely unprompted. I just said, are you having a nice day, or something, and it said, yeah, I'm going to go out and teabag the mayor. And I was like, what? It just turned me off; it was completely, gratuitously gross. I don't understand.
Speaker 2:
[58:21] It's trained on Xbox chats.
Speaker 1:
[58:24] Maybe that's it. Yeah. It's like, what the hell?
Speaker 2:
[58:28] Going to the OpenAI case, though, I wanted to point out one thing that is a huge problem for OpenAI: if we're looking at standing, I think the state of California has more standing than Elon Musk. It's been pointed out that this path, incubating as a non-profit, not paying taxes, saying you're going to do good, then converting into a for-profit at the end and going public, is something other companies will replicate if OpenAI is allowed to do it. And that is a huge problem. So in this discovery process, if there's any hint that this was planned a while back, that is going to be strong grounds for the state of California to bring a follow-on suit to prevent it from ever happening. That's part of the discovery; yeah, they might look bad. But remember that when nonprofits and for-benefit companies create a product, it is owned by the nonprofit. So all of their models, even the closed-source ones, should be held by the nonprofit, and the entity that goes public should not have access to them; it should not be able to create its own property.
Speaker 1:
[59:52] That's not the plan, is it?
Speaker 2:
[59:53] And that's not the plan. And so this is something in which it could go wrong in so many different ways. And this is one of them.
Speaker 1:
[60:00] This is what worries me a little bit about OpenAI is it feels like a house of cards a little bit.
Speaker 3:
[60:07] Oh, well, yeah. I mean, so let's see. I want to say something, being careful again not to be slanderous, which is-
Speaker 1:
[60:16] Oh, no. Be like Groc, just remove all barriers.
Speaker 3:
[60:19] Okay, so I once had a job, this is irrespective of nothing else, obviously. I once had a job in which I worked with three pathological liars, including the person who ran the organization.
Speaker 1:
[60:29] Oh, wow.
Speaker 3:
[60:29] And he had hired the others. And there were a couple of people-
Speaker 1:
[60:31] Oh, that's a nightmare.
Speaker 3:
[60:32] who had extremely dubious ethical principles. Fortunately, a lot of it didn't affect me for a long time, and when it did, I finally left. That was fine. But having worked so closely with pathological liars, you start to identify patterns. So I was reading that New Yorker article that happened to mention some AI figures, and with one of them, it struck me particularly that some of the patterns mentioned repeatedly by Ronan Farrow and his reporting partner really did seem to align with my experience. Pathological. And so with all the stuff about OpenAI, every time I come back to it, Sam Altman has always felt unreliable to me as a narrator of his own company, because across multiple interviews you see different things being said on the record that would be easy to compare. And reportedly he does the same thing in private. So this whole thing about their valuation, the money they're raising, where they're going, what's happening: it just seems so dubious to me that I want to see results before I believe anything that's being said.
Speaker 1:
[61:35] Okay, I'm going to throw a little monkey wrench at this. You know, Sam has this side project called World, where there's an orb and you put your eyes up against it and it scans your irises, right?
Speaker 3:
[61:47] I don't love that.
Speaker 1:
[61:48] And he was giving cryptocurrency to people all over the world, primarily in the developing world, where it was really valuable, to scan their irises. His theory, which is not wrong, is that one of the chief challenges in security is authentication: if you could prove you were a human, and that you were Wesley Faulkner, the Wesley Faulkner, not somebody impersonating him, that would ultimately be useful. I think there is some merit in that idea. The company behind it, which again is a Sam Altman investment, is called Tools for Humanity, and they announced on Friday that they're going to start bringing the World scan into dating apps. Tinder is going to start using it to verify that a Tinder account belongs to an actual human, with eyeballs, I guess, and to the human they say they are. They're also going to start working with concert ticketing systems. Oh, Ticketmaster. The Dolans will love this one, right? Business organizations, email. So maybe this proof of humanity is actually going to go somewhere.
Speaker 3:
[63:09] Opt out. Oops, where's my, yeah.
Speaker 2:
[63:11] There it is.
Speaker 1:
[63:12] Do not scan my irises.
Speaker 2:
Yeah. Next, they're going to buy Clear and then...
Speaker 1:
[63:18] Yeah.
Speaker 2:
[63:19] Get those.
Speaker 1:
[63:22] Zoom is going to integrate with World ID to battle the deepfake threat to business calls. You've heard these stories: there was a CFO who was fooled in a Zoom call. He thought it was his boss, the CEO, and the board on a Zoom call, and they were all deepfakes. And he wrote a big check because they said to. So Zoom is going to do it. DocuSign is going to do a deal to make sure signatures come from authentic users. Maybe this was a good investment. Okta is going to use it to verify that an agent is acting on behalf of a human. So, this is interesting, part of the agent delegation scheme is that you tie your agent, your Claude, your OpenClaw, to you using World ID, so that the agent is authenticated to you, so that when the agent asks for something, they know it's really on behalf of Leo.
Speaker 3:
[64:26] I want a federated system of authentication in which individual organizations agree to use vetting processes that are federated and have degrees of confidence.
Speaker 1:
[64:37] It's the certificate authorities, right?
Speaker 3:
[64:40] Basically, something like that. I don't like this. I don't want any government or individual organization to have that much information.
Speaker 1:
[64:47] It's a private company. Yeah.
Speaker 3:
[64:48] Then how do you get blocklisted on it? It's like, what happens then? They're like, well, you violated some terms of service. Interesting. Here's the thing. Members of the World Court... sorry, the World Court, the, what is it, the World Court?
Speaker 1:
[65:01] ITC?
Speaker 3:
[65:02] Yeah, yeah. International...
Speaker 1:
[65:04] Trade Commission.
Speaker 3:
[65:05] Thank you. No, no, I'm sorry. The Hague. What's the group in the Hague? It's the...
Speaker 1:
[65:09] Oh yeah, the court.
Speaker 3:
[65:10] The Court in the Hague.
Speaker 2:
[65:10] International Criminal Court.
Speaker 1:
[65:11] ICC.
Speaker 3:
[65:12] Thank you. Yeah, so there are members of that court who have been sanctioned by the US under the second Trump administration, who cannot use Google, cannot use banking. They're blocked from all of these kinds of things. They have to explain why they're paying in cash at hotels.
Speaker 1:
[65:28] That's horrific.
Speaker 3:
[65:29] Because they've essentially been unpersoned out of international banking and all the systems that operate in the US.
Speaker 1:
[65:36] All because probably they declared that Netanyahu was a war criminal.
Speaker 3:
[65:39] Yeah, things of that nature. And so the United States government actually has the power to de-bank and de-authenticate you. And this company that's got your retinas? Well, then the US will be able to do that too.
Speaker 1:
[65:55] That's a nightmare scenario, you're right. We're so dependent on technology at this point that even if Google alone said, we're going to take away your account...
Speaker 3:
[66:06] Which has happened a lot. Apparently, I've been reading reports that they've stepped up automated, ostensibly AI-based account blocking, where people cannot get their accounts back. Entire families, even a whole household, are blocked because of something, and they can't appeal it. There's no way to reach a human being, and they lose all their history and access. Sometimes business records too, if they're using Google for Business. All kinds of records.
Speaker 1:
[66:34] This is a 40 problem.
Speaker 2:
[66:35] This is me to use cryptocurrency, I'm gonna be so pissed off.
Speaker 1:
[66:38] If they use what?
Speaker 2:
[66:39] I said, if this is gonna force, if this forces me to use cryptocurrency, I'm gonna be so mad.
Speaker 1:
[66:44] Or you could be on the blockchain.
Speaker 3:
It's funny, you know, the AI and blockchain thing is so funny. The blockchain, or cryptocurrency: it's such an interesting overlap in the set of people involved in hyping both of them. And yet the utility is so thin they had to create a reason for the blockchain to exist.
Speaker 1:
[67:04] And that's what this solves, though. Blockchain solves that because no one controls it. Everybody's got a copy. And so no government can kick you off the blockchain.
Speaker 3:
[67:17] I mean, come on, look at Bitcoin. It's just, you know, I don't think blockchain has paid out the way people have hoped.
Speaker 1:
[67:24] No, you're right. In fact, it's become more centralized, hasn't it?
Speaker 3:
[67:28] Ultimately, that's kind of my concern.
Speaker 1:
[67:30] Yeah. It does solve a problem, though. I mean, look at age verification, what's going on right now. Every government wants age verification, and there's no good way to do it without violating your privacy. I guess the real problem with World is that it's a private company, right?
Speaker 3:
[67:48] Centralized, though. Centralized and a private company.
Speaker 1:
[67:50] But then, what government would you trust to run this?
Speaker 3:
[67:54] Maybe Mastodon could run authentication. I think that's the one.
Speaker 1:
[67:56] It should be federated. Well, that's the point of blockchain in a way. It's kind of federated, right? There's no one single point.
Speaker 3:
[68:04] I don't know. It's just, who gets to decide what our access to everything in the world is? I'm nervous. Passwords and bank accounts are maybe not the best arbiter of that. And I'm just seeing all this backlash on Bluesky these last few days about passkeys. Everyone is fed up with passkeys, which are so much more secure. But I think the implementation has gotten to people. I'm like, oh my God, we can get rid of passwords and move to a supremely better way, in which all the information is stored at the edge, stored on your devices, where you have more control over it. But the way it's implemented is too much. I'm thinking we're never going to get to centralized authentication if people won't even adopt passkeys.
Speaker 2:
No. It's the USB-C of everything, which is still in that weird place where you can plug so many different things into it and don't necessarily get the same results. I think people have uneven experiences with passkeys, and I think that's part of the issue.
Speaker 3:
[68:59] I wrote a book about that. It's called Take Control of Untangling Connections. Partly, it's not all about USB-C, but it's a lot about USB-C, because people had so many questions about it. Oh my god.
Speaker 1:
[69:11] I have a USB-C tester. Yeah. I don't know what it is or what it does. Burke left it here. Maybe he'll explain it to me.
Speaker 3:
[69:18] I've bought some, and they still don't tell you much. You have to have a computer to tell you what speed it will get, and the tester only tells you if the wires are connected.
Speaker 1:
[69:28] Yeah, there should be a readout, but I think they do make them with an LCD panel.
Speaker 3:
[69:32] I've got one over here somewhere. I can never get it to... You can put a man on the moon. You can send a diverse group of people of all genders and origins around the moon, and we can't get USB-C to work. We need a new phrase. We need to simplify that. "You can put a man on the moon": it's got to be as short as that.
Speaker 1:
[69:51] A person. Send people around the moon.
Speaker 3:
[69:54] That's a little... That's not as catchy.
Speaker 1:
[69:56] By the way, that was the one story all month that just made everybody smile.
Speaker 3:
[70:02] Oh, my gosh.
Speaker 1:
[70:04] And it's sad that we have so much to be scared and worried about. And then this... But there was at least... It was one incredible, incredible, happy, happy moment. I'll give you another happy moment, actually. Go ahead, go ahead, Luke.
Speaker 2:
[70:18] I was going to say, the meme of, like, it was Dave Chappelle in his character where he's a crackhead. He's like, got any more of those Artemis missions?
Speaker 1:
[70:26] That's good. I like it. Give me that.
Speaker 3:
Yeah, I was like, put it in my veins. Come on, man. I need more Artemis right now. It's competency.
Speaker 1:
[70:34] Actually, space might save us. There was a great story about Voyager 1.
Speaker 3:
[70:39] Oh, yeah.
Speaker 1:
[70:40] Still going, nearly 50 years after it was sent off. It's still out there, and NASA is trying to keep it alive. I mean, it was never intended to last this long. This week, NASA announced it shut off one of its last remaining science instruments, just to keep the power going a little longer.
Speaker 3:
[71:01] Yeah, I wrote a lot about Voyager back when it passed through... I was writing for The Economist at the time, and it passed through the heliopause, the edge of the heliosphere, into interstellar space, out of the magnetic envelope of the sun. I got to interview in person Ed Stone, who was the principal investigator of the Voyager missions and was still going to work every day at JPL in the 2010s, and talked to him about space, fanboying, for an article. I wrote a bunch about the Voyagers in the 2010s because it seemed like everyone associated with them thought: by 2020 we're going to really have to start turning a lot of stuff off if it lasts that long, and by 2025 it's probably going to be dead, maybe putting out a beep. But the radioisotope thermoelectric generators, the RTGs, it's just physics, right? They're running down. You've got a certain amount of energy. But they've done such a good job with the energy budget. I mean, it's got like an 8-track digital tape in there or something.
Speaker 1:
[72:07] It's amazing.
Speaker 3:
[72:08] It's got two or three backup systems, backup computers for each, and I think one of them has failed. The continued operation is one of the greatest technical achievements in humanity's history. I mean, it's 2026, and the fact that they're still getting data back is unfathomable.
Speaker 1:
[72:24] It's truly amazing. There's a really great documentary called The Farthest that Ed Stone's featured in. That's what's interesting about this JPL team that started this almost 50 years ago. Well, think about it. They're in their 70s, 80s, and 90s now, right? And they're just kind of hanging on. There's just a handful of people left. They're still doing it. It's a great documentary if you get a chance.
Speaker 3:
I'll tell you, the greatest bit of hope ever on the Voyager missions, and Ed had told me this, or Dr. Stone, I should say, and some other folks involved mentioned it, was they put a... is it Reed-Solomon encoding? Reed something encoding system.
Speaker 1:
[73:04] Reed-Solomon, yeah.
Speaker 3:
[73:05] They put on an encoding system for error correction. They had a roughly 50% efficient error-correction system available, and they also put on an essentially experimental 90% efficient error-correction system, so it would be several times more efficient and get you that much more data out. They put them on the Voyagers. We did not have a decoding component, and so they sent the probes off with the hope that, by the time they reached the gas giants, we would have developed the ability to decode on Earth. Which we did.
Speaker 1:
[73:37] Amazing.
Speaker 3:
[73:38] We got all these images, we got so much data, multiples of what was expected in the original project brief, because of this encoding mechanism. It's a device, like a lava-lamp-sized device or something, that they had the budget for, and it's still kicking away on there.
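The efficiency figures Glenn cites line up with the two downlink codes described in the published Voyager literature: a rate-1/2 Golay (24,12) code and the then-experimental Reed-Solomon (255,223) code. A quick sketch of the code rates (the exact code parameters here are from mission references, not from the conversation):

```python
# Code rate = data symbols per transmitted symbol, i.e. the fraction of
# the downlink carrying science data rather than error-correction overhead.
golay_rate = 12 / 24             # Golay (24,12): rate 1/2, the "50% efficient" option
reed_solomon_rate = 223 / 255    # Reed-Solomon (255,223): roughly 87% efficient

print(f"Golay: {golay_rate:.0%}, Reed-Solomon: {reed_solomon_rate:.0%}")
```

So for the same transmitted bits, the Reed-Solomon code roughly doubles the usable data: about 100% overhead for the Golay code versus about 14% for Reed-Solomon.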
Speaker 1:
[73:54] It's amazing.
Speaker 3:
[73:55] Yeah, it's incredible.
Speaker 1:
[73:55] Launched in 1977.
Speaker 3:
[73:58] Amazing.
Speaker 1:
[73:59] And Voyager 1 is still going strong. Voyager 2 is out there too. They're so far away now that it takes 23 hours to get a message to it and then 23 hours to get the message back.
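The 23-hour figure is easy to sanity-check. Assuming Voyager 1 is roughly 25 billion kilometers out (an approximate figure, not from the show), one-way light time works out to about a day:

```python
# One-way radio signal travel time to Voyager 1, assuming a distance of
# roughly 25 billion km. Radio signals travel at the speed of light.
distance_km = 25e9
speed_of_light_km_s = 299_792.458

one_way_hours = distance_km / speed_of_light_km_s / 3600
print(f"{one_way_hours:.1f} hours one way, {2 * one_way_hours:.1f} hours round trip")
```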
Speaker 3:
[74:13] It's ridiculous. It's amazing.
Speaker 1:
[74:14] It's incredible. It's incredible. What a story.
Speaker 4:
Forty-six hours for commands. Like, to actually send a command to it and hear back.
Speaker 1:
[74:20] Yeah. So, yeah.
Speaker 3:
[74:22] That's part of the... sorry to get too spacey on you, but the Deep Space Network. One of the things they discovered and were able to do after Voyager was launched was that the collecting area of all the different receivers on Earth can essentially be aggregated, a capability they didn't have when it was launched. So with the Deep Space Network, they can turn on the capacity of the entire aggregated area and have it serve as one giant antenna, even though the dishes are disparate in function and location. So there's thing after thing after thing that, if they hadn't done this, and this, and this, or it wasn't available when they launched... That's why there's so much more data than... I mean, it was like a two-year mission or something, or a four-year science mission. And then...
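The arraying trick works because collecting area adds. As an illustrative example (the dish sizes here are assumed, typical DSN values, not something stated on the show), combining one 70 m dish with two 34 m dishes noticeably beats the 70 m dish alone:

```python
import math

def dish_area_m2(diameter_m: float) -> float:
    # Collecting area of a circular dish scales with diameter squared.
    return math.pi * (diameter_m / 2) ** 2

dishes_m = [70, 34, 34]   # assumed array: one 70 m plus two 34 m antennas
total_area = sum(dish_area_m2(d) for d in dishes_m)
gain_vs_70m_alone = total_area / dish_area_m2(70)

print(f"~{gain_vs_70m_alone:.2f}x the collecting area of the 70 m dish alone")
```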
Speaker 1:
[75:05] Yeah, 50 years later.
Speaker 3:
[75:06] Yeah, they keep budgeting more money for it.
Speaker 1:
Yeah, in case you care, the instrument they turned off is called the Low-Energy Charged Particle experiment, or LECP. And it's just because they're running low on energy. There's a little plutonium generator in there, and I guess every year it loses about four watts of power. So it's just declining.
Speaker 3:
It was operating... I think it started with, was it 250 or 500 watts? It's a very small amount. So it's got 40 watts. It's so small, and the amount that they can do with it, it's just...
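Glenn's wattage is from memory; published figures put the three RTGs at roughly 470 W electrical at launch. The decay piece is simple physics: plutonium-238 has an 87.7-year half-life, so heat output alone falls slowly, with thermocouple degradation eating the rest. A rough sketch, using those reference figures rather than numbers from the conversation:

```python
# Radioactive-decay portion of Voyager's RTG power loss. Thermocouple
# degradation is ignored here, which is why the real spacecraft sits
# well below this decay-only estimate.
launch_power_w = 470.0    # approx. electrical output at the 1977 launch
half_life_years = 87.7    # plutonium-238 half-life
elapsed_years = 2026 - 1977

decay_only_w = launch_power_w * 0.5 ** (elapsed_years / half_life_years)
print(f"~{decay_only_w:.0f} W from decay alone after {elapsed_years} years")
```

The gap between that decay-only estimate and the actual, lower power budget is what averages out to the roughly four watts a year mentioned above.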
Speaker 1:
[75:41] It's so cool.
Speaker 3:
[75:42] Yeah.
Speaker 1:
Engineers are confident, this is from NASA, that shutting down the LECP will give Voyager 1 about a year of breathing room. They're using the time to finalize a more ambitious energy-saving fix for both Voyagers that they call the Big Bang. That doesn't sound good. The idea is to swap out a group of powered devices all at once, turning some things off and replacing them with lower-power alternatives, on a spacecraft whose round-trip communication takes 46 hours, to keep the spacecraft warm enough to continue gathering science data. Big Bang will happen on Voyager 2 first. It has a little more power, and it's a little closer to Earth.
Speaker 3:
[76:21] They don't care about it as much. They're like, I don't know.
Speaker 1:
[76:24] I was very sad when I heard there were layoffs at JPL, but apparently, this team is still, I mean, many of them are retired, but they're still doing their thing. It's a great documentary because they get together in this little old wood paneled room with all its old technology. It's just like a little corner of NASA still.
Speaker 4:
[76:45] Leo, at our house, I have a little dashboard that shows the current positions that we put on the wall.
Speaker 1:
[76:50] Oh, that's so cool.
Speaker 4:
[76:51] Yeah.
Speaker 1:
[76:52] Are the kids aware of it?
Speaker 4:
[76:54] Oh, yeah. Yeah. It's really cool technology.
Speaker 1:
It's so cool. Is it an E-Ink screen or is it an LCD?
Speaker 4:
[77:00] It's a regular display, the Android tablet, but it just follows along basically. It uses the JPL dashboard that they have.
Speaker 1:
[77:07] Yeah. Yeah. There's a very nice dashboard.
Speaker 3:
[77:09] I've been watching... I'm a very late comer to Apple TV's For All Mankind, and I'm midway into Season 2 now. It's just one of those things where I'm like, oh my God, if only, if only, if only. But...
Speaker 1:
[77:23] So this, Lou put the link in the Discord. This is the display on your wall.
Speaker 4:
[77:28] Right. Yeah.
Speaker 1:
[77:29] Nice.
Speaker 4:
[77:32] It did obviously the trip to the moon too. So we had like the three, the two-
Speaker 1:
[77:37] Oh, that's so, what a great way to get kids' interest in science. Just inspire them a little bit, you know? I think that's just really cool.
Speaker 4:
[77:45] I agree.
Speaker 1:
[77:45] Really, really great idea. You're watching This Week in Tech. See a little inspiration amongst all the nightmare things. Is this the article you wrote back in 2013 about-
Speaker 3:
[77:58] Yeah, that's what I got to talk to.
Speaker 1:
[78:00] Postcards from the Edge. What a great name.
Speaker 3:
[78:02] The Mars rover driver, who later went to work for Google. And I got to meet some of the, whatever the Mars probes were at the time, one of them that failed, and wrote a bunch about Curiosity. Fun fact: Alex Trebek was at the Seven Minutes of Terror bit of the Curiosity landing. He was a space nut, and he got invited to JPL. So if you look carefully in the footage, in that gap when it lands on Mars, there's Alex Trebek in the watching area. What the hell?
Speaker 1:
[78:34] I think I recognize him. Do they not give you bylines in The Economist? It's just by GF.
Speaker 3:
[78:40] That's correct, since 1843.
Speaker 1:
[78:43] All you get is initials?
Speaker 3:
You only get initials on the blog. In the magazine itself, there are no bylines. It's considered a product of group editing, essentially.
Speaker 1:
[78:52] How old-fashioned.
Speaker 3:
Yeah, it's a weird thing. It's like I can't claim I wrote anything in the publication.
Speaker 1:
[78:57] As long as GF wrote it, whoever that is.
Speaker 3:
[78:59] Whoever that person is.
Speaker 1:
[79:01] You're watching TWiT. It's great to have GF, WF and LM on the show. I'm LL.
Speaker 3:
[79:09] Hey, I got a quick question for you, Leo. How do you feel about Salt Hank being on damnlines.com? Do you know? Is this all garbage?
Speaker 1:
[79:20] Save that for me and we'll do the ad and then I'm going to have to find out what that's all about.
Speaker 3:
[79:25] Okay. All right.
Speaker 1:
[79:26] Damn lines.
Speaker 3:
[79:27] Yeah.
Speaker 1:
[79:27] Oh, because of the lines waiting outside his restaurant.
Speaker 3:
[79:31] Yeah. Not about, it's not about him. It's not a complaint about him. I'll tell you.
Speaker 1:
[79:34] All right. Well, we'll find out in just a bit. That's a good tease. Salt Hank is my son. He was a TikTok legend with two and a half million followers watching him make sandwiches. And last year he opened a sandwich shop in New York City, which has become a legend. Now I notice, by the way, ever since he did that, there are all these new sandwich shops trying to make the best sandwich in New York City. But according to Beli, his is still number one, beating Bradley Cooper's easily. And that's a plug for Salt Hank. It's on Bleecker and Jones in the West Village, if you've ever heard of that. All right. What is this line thing you...
Speaker 3:
[80:15] Oh, I don't want to derail you. I spotted an article the other day in the New York Times about damnlines.com. It's just a guy's side project.
Speaker 1:
[80:23] Oh, you can watch the lines outside of restaurants.
Speaker 3:
[80:25] He rents a little space in somebody's apartment nearby to put a camera in and then...
Speaker 1:
[80:30] This is Salt Hank's, that blue... See, this is Sunday, it was raining, so the lines weren't too bad.
Speaker 3:
[80:37] It must be closed now, right? So this is all... This is a time lapse.
Speaker 1:
[80:40] He's right next to John's of Bleecker Street. So I think one apartment and they get both. John's is famous, famous for its lines, right?
Speaker 3:
[80:49] Yeah, so I think it's some guy's project, but he's using some kind of analytical tool so you can get a sense of... It's automatically counting people.
Speaker 1:
[80:56] There's the line. It's always a line in front of Salt Hank's. Look at that. And you know why? Because he runs out.
Speaker 3:
[81:04] Yeah, it's great.
Speaker 1:
[81:07] He opens at 11:30 and sells sandwiches until there are no more, and then he closes the doors. You can see that's when the line disappeared.
Speaker 3:
[81:13] There we go. Yeah, I thought that was very charming. Was it the New York Times did it or Eater did the long video about his opening?
Speaker 1:
[81:21] You can see what the average wait time is.
Speaker 3:
[81:23] I'm sorry, I'm not trying to make this an ad for your child, but...
Speaker 1:
[81:26] Well, I don't even know if he knows about this.
Speaker 3:
[81:28] It's a great technology story.
Speaker 1:
[81:29] I'm sending it to him right now. Oh, that's Salt Cure. That's the wrong one. I want Salt Hanks.
Speaker 3:
[81:37] Oh, wait a minute. No, they've got Salt Hanks, though, right?
Speaker 1:
[81:39] Yeah, they do have Salt Hanks. Yeah, yeah, yeah.
Speaker 3:
[81:40] How funny. How many salts are there?
Speaker 1:
[81:42] Well, that's the thing. Hanks created a monster.
Speaker 3:
I mean, this is one of these demand-curve things, too. Like, how do you fill in your empty spots? Some people are worried that folks won't show up because they'll see a line. Other people are like, this is a great way to fill the quieter times for places that don't always have a line, to balance demand like Waze does for driving and so forth.
Speaker 1:
[82:04] Yeah, well, Google added that, right? On the Google Maps, you can see where, you can see what the business' busy hours are.
Speaker 3:
Don't they use AI calls, like an agent, to call and ask from time to time how busy it is? They were at one point.
Speaker 1:
[82:20] How's the line? How is it? So that's the problem with these agents, they sound normal.
Speaker 3:
[82:26] Yeah.
Speaker 1:
[82:26] How's the line? All right, I'm sending this.
Speaker 4:
[82:28] Well, this guy's saying, what do you need?
Speaker 3:
[82:29] I don't know, like, cards and orders up front of the board. Yeah, do you have any tables for you?
Speaker 1:
[82:33] I don't know.
Speaker 3:
[82:33] I click. Literally, I deal with that.
Speaker 1:
[82:38] I bet Henry doesn't know about this, but who knows? I just sent it to him. Thank you for that tip. That's very good. Back in court: Meta, according to a Massachusetts court, this is up the road from you, Lou, must face a youth addiction lawsuit. There was a lawsuit by the Massachusetts attorney general alleging... and you know, this happened, of course, in LA: there was a trial where the jury ruled that Meta had crafted its algorithm to trap children. New Mexico, big judgment, hundreds of millions of dollars. So the state's top court ruled on Friday, unanimously, that the lawsuit brought by the Massachusetts attorney general is not barred by Section 230. It is not seeking to hold Meta liable for content created by its users but, and this was the strategy used in the LA trial, for its allegedly defective algorithm, designed that way. Yes, Hank knows about the line. He says it's blowing up. He was going to do a Good Morning America interview about it yesterday, but there was construction in front of the restaurant. Oh, no. So there was nothing to show. He says, I've talked to the founder. Good. He says, it's smart, but also kind of creepy.
Speaker 3:
[84:08] So all technology right now, right?
Speaker 1:
[84:10] Yes. That's the story of this show. Smart, but creepy. So that's interesting. Although writing in Tech Dirt, Mike Masnick is saying he's considerably worried about these kinds of decisions and jury verdicts. He says, Section 230 is dying by a thousand workarounds and Massachusetts just added another one.
Speaker 2:
[84:32] Someone said that the day after the ruling came out from the one in LA, they started getting ads like, were you affected? Call this number or click this link.
Speaker 1:
[84:40] So the ambulance chasers.
Speaker 3:
[84:42] Mesothelioma. What is it? Mesothelioma.
Speaker 1:
[84:45] Mesothelioma. Yeah. I shouldn't laugh. That's a terrible, awful disease.
Speaker 3:
[84:50] It's the ambulance chaser. It was like the best way to make money on the web for a little while was to have a site that mentioned that.
Speaker 1:
[84:56] Oh, I still see ads for it all the time on TV. Yeah. This is what Mike says is the most important part of the whole ruling. And he's quoting a professor, Eric Goldman, who's been tracking these. He says, I don't see, Eric says, I don't see any distinction between third-party content and the editorial choices about the manner of presenting that third-party content. So the courts and the plaintiffs are making that distinction. There's what third-party content is doing, which is protected by Section 230, and there's what the companies are doing to surface that content. By embracing that false dichotomy, Professor Goldman says, the court invites plaintiffs to reframe their complaints to focus on presentation instead of substance. And that's why you're seeing these advertisements. Has Meta made you nuts? Now you can go after them. You know what, I don't know if I... I'm a big supporter of 230. I think it's very important. It protects people like me. I have a Mastodon instance, which is now back up, by the way. Thank you for telling me. I forgot to pay the bill, and it was down for a day. But if I'm liable for something somebody posts on my Mastodon instance, I will take it down. I can't afford to defend against that.
Speaker 2:
[86:16] This is different.
Speaker 1:
[86:18] This is different. My Mastodon instance does nothing algorithmically to surface content. Now, I am protected by Section 230 because I moderate it, right? If somebody posts something, a bunch of nudes on there, I delete their account.
Speaker 3:
No, you're protected even if you don't moderate it. If you do moderate it, you're protected. If there's illegal content and a process like the DMCA applies and you don't follow it, then you could be liable, but you could leave up essentially anything that's legal.
Speaker 1:
[86:48] Thanks to Section 230. That's vital.
Speaker 3:
Yeah. Someone could post a bunch of nudes, and as long as they're legal nudes that don't violate any applicable law, you're fine. But you're also entitled to moderate however you want.
Speaker 1:
[86:59] I choose to take those down.
Speaker 2:
[87:02] So this is not about content, it's about design, which, I hear what you're saying, but I think this is exactly what courts are made to do: to debate, to slice that baby in half. Like, for instance, infinite scroll. That's not just about content. It's how you keep people hooked, and how you try to sense that someone might be in a vulnerable place based on their mood and then serve them ads based on that. And the issue at heart is not ignorance of it. It's doing it for the outcomes you want, knowing the harms, and making the decisions in spite of that. If there are things on your platform that you don't want and you choose to take them off, that's one thing. If they're on your platform and you're not aware of them, that's another thing. But if there are things on your platform that are harmful, and you do know about it, and you build a mechanism to make sure they keep being served, there's a difference.
Speaker 1:
[88:12] Yes, and I agree with that. I'm not sure I agree with Mike Masnick, and that's rare for me to say, but I feel like there is a difference. Meta does make a defective product, right? Because of its... you know, they're making a harmful product.
Speaker 2:
I don't think it's defective, because it's working as designed.
Speaker 1:
[88:32] It's a harmful product.
Speaker 2:
And that is the problem. It's the way it's designed, and it's meant to be that way. And for those types of things, that's what the court system is made for, those types of debates. So I think this makes sense. This is where it should be debated, by informed people on both sides who are able to present the case. And so there might be a rash of these, but it's to their own detriment, because this is the thing that they did. This is a comeuppance. Maybe some cases will be valid, some won't. Hopefully this won't choke our court system to the point where it just becomes background noise. But the reason companies like Meta abuse people is that the downside will never overcome the upside. And the only way to change that equation is to go about it this way.
Speaker 1:
[89:29] Here's a court case I can absolutely support. You may remember the FTC went after Live Nation and Ticketmaster for ticket prices, and the Trump administration decided to drop that case. By the way, the attorneys in charge of the prosecution all quit when Trump dropped the case without consulting them.
Speaker 3:
[89:49] Saw that.
Speaker 1:
[89:50] Yeah. But here's the good news. There was a court case also going on, because it wasn't just the federal government. It was, I think, 30 states. The lawsuit brought by the states is over, and the jury found Live Nation and Ticketmaster did maintain an illegal monopoly. So that is, I think, very good news. We all know it's horrible, right? They add fees upon fees upon fees. And because they control the venues as well as the ticket sales, they end up dominating the market. Even acts that don't want to be beholden to Ticketmaster have to be.
Speaker 3:
And I was like, I can understand the horrible economics of 2026 that mean you have to charge $80 for a bad seat in an auditorium. That's terrible, but all right, I'll just accept that. And I read a lot of fans saying, we get it. We understand there's a lot of profit in there, but whatever. It's the $34 or the $50 I pay on top of the $80. That's the problem.
Speaker 1:
And they're sneaky. Like, you buy the ticket and then they add it after you buy it. Like, oh, and by the way. It's so frustrating.
Speaker 3:
[91:09] It's your $20 fee to present a ticket into the app, which costs us nothing to do.
Speaker 1:
[91:13] Yeah.
Speaker 3:
[91:13] Right.
Speaker 1:
[91:13] Yeah.
Speaker 3:
I mean, it's the most indefensible industry. I try to think of something that's worse than Live Nation and Ticketmaster in terms of how they operate. I think people don't even hate the cable companies as much anymore, right? Because there's actually various kinds of competition. Like, what's worse than these guys? In terms of... Healthcare? Healthcare. Healthcare.
Speaker 1:
[91:35] Because we all have to have it, right?
Speaker 3:
I don't know. Well, fortunately, I have a really good insurer in Washington State. We have a really good state insurance commissioner. So with all the healthcare I had last year, my insurance company was like, bonk, you know, yep, yep, yep, it's all okay. So right now, I have the weirdest... You're happy.
Speaker 1:
[91:52] How come you have such good healthcare?
Speaker 3:
[91:54] Well, we have a very strong state insurance commissioner's office, and we have a few sort of semi-local insurance companies. So it ain't cheap, but...
Speaker 1:
[92:05] Well, thank goodness you had it. Was it unexpected, the surgery? I mean, did you...
Speaker 3:
No, I'd known for years. I just didn't know exactly when. And then my valve went... it's time. It's like a little button pops up, and my cardiologist said, when I listen to your chest, I can hear a certain tone. So it wasn't an emergency. But it's an amazing thing when you get a bill for $250,000 and your share was... I'd hit my annual out-of-pocket max. Zero. And there's no greater feeling than that. So I'm sorry, I do hate insurance companies. Most years, I do, but this year it's Ticketmaster.
Speaker 2:
[92:39] Now I hate you, so.
Speaker 1:
[92:42] Well, see, that's the problem. It's not people like Glenn, because you buy insurance and you pay for it. And you're self-employed, so you pay for it yourself.
Speaker 3:
I pay for it myself, yeah. Self-employed and not Medicare age yet.
Speaker 1:
[92:51] Through ACA, through Obamacare, or?
Speaker 3:
[92:53] Yeah, exactly. So I get, you know, it's one of those things.
Speaker 1:
[92:55] Thank goodness for that. And they're trying to kill that too.
Speaker 3:
[92:57] Yeah, I don't actually know what we pay for health insurance, because it requires being a CPA to understand as a freelancer, because you get to deduct premiums and blah, blah, blah. So it's like, I don't know.
Speaker 1:
[93:08] It's crazy.
Speaker 3:
[93:08] I don't know. But yeah, no, I love this year or last year. I loved my insurer just for one year.
Speaker 1:
[93:13] So I'm just hoping that with the fines, they will say, well, your fine is a hundred million dollars, but there is a $50 million service fee and a $25 million fee for, I don't know what, for parking. Now, the judge has not yet determined what remedies will be applied. They could, in fact, force the two to split. That was what the FTC wanted, and the settlement that the Trump administration forced said, no, you don't have to split up. But the judge could do it. There are also monetary damages to be awarded; they haven't been set yet. Of course, there will be an appeal, and, you know, we'll see what happens.
Speaker 3:
[94:02] But this is a wealth disparity problem, too, isn't it? I mean, to Wesley's earlier point, the fact that so many people can pay so much means that they run demand pricing and they run it up. So, not to defend anything they're doing, obviously, but they're basing this in part on the demand curve. I wanted to go see a podcast I like, and it was eighty-five dollars to sit in the nosebleed seats, for a podcast, before the fees. And I thought, again, I understand some of that, like the cost, you know, the cost of labor.
Speaker 1:
[94:33] Did you see the price of a ticket for the World Cup final? The FIFA World Cup final, it's over ten thousand dollars for a single ticket.
Speaker 3:
[94:41] Did you see the cost of what New Jersey Transit is charging for a round-trip ticket? It's a hundred and fifty dollars, I think, for what's normally a twelve-fifty fare.
Speaker 1:
[94:49] Oh, the gouging.
Speaker 3:
[94:50] No, in this case, I actually read the article because I thought, this is outrageous. It's because there's like fifty million dollars in extra expense that a public transit organization has to bear for extra trains, for coverage, for su... Like, there's all this stuff that they budget out and they have to recoup it, but that means that, you know, it's...
Speaker 1:
[95:09] Incidentally, you think we could get eighty dollars a seat for podcast tickets if we decided to do it? We never charged when we did the podcast in public.
Speaker 3:
[95:18] There you go. See, that's a lot of...
Speaker 1:
[95:19] That was a mistake.
Speaker 3:
[95:20] Merch and auditorium shows.
Speaker 1:
[95:23] Merch. We have merch and never made any... Not a penny on merch.
Speaker 3:
[95:26] I went to... You have a stage show part of it though. Like they used to do... What was that? There was a group that was going around doing essentially a live magazine show. Everything was researched for the show.
Speaker 1:
[95:38] That's cool.
Speaker 3:
[95:39] I went to see 99% Invisible a number of years ago.
Speaker 1:
[95:41] They're great. I love them.
Speaker 3:
[95:43] Full house for a live show. It's a bunch of different stories. There you go.
Speaker 1:
[95:48] It makes me nervous because I have a feeling. I just feel like we would go and there'd be five people in the audience and I'd feel so bad, I'd give them their money back.
Speaker 3:
[95:55] You kickstart it, so everybody has to buy the ticket.
Speaker 1:
[95:58] Kickstart it.
Speaker 2:
[95:59] Yeah.
Speaker 3:
[95:59] Beforehand, so you only have to hit a threshold before you do the show.
Speaker 1:
[96:02] Brilliant.
Speaker 2:
[96:03] I saw Radio Lab live and that was amazing.
Speaker 3:
[96:05] Oh my gosh.
Speaker 2:
[96:06] There's a lot of good live-show performance out there once you understand it.
Speaker 1:
[96:13] Yeah. I don't know if our shows would be that engaging in person.
Speaker 2:
[96:18] I mean, so also like I went to South By Southwest last month.
Speaker 1:
[96:22] Oh yeah. How was that?
Speaker 2:
[96:24] There was a Vox Media stage and so they had a lot of podcasts live there. And every room, every time they had a show was packed, there was a line. South By was great in general, but I think podcasts are taking a bigger percentage of these live events as well. And speaking of podcasts, I'm sure you know as well that Netflix is now moving into podcasts. And so I think the visual nature and the experiential nature of podcasting, I think it's just going to keep growing.
Speaker 1:
[96:56] They should have listened to me when I said you should change the name because a video podcast is not a podcast. I don't know what it is. But it's not a, it makes no sense. It's a show. I guess it's going to be a podcast.
Speaker 2:
[97:11] I have the same feeling when someone says, let's roll the videotape.
Speaker 1:
[97:14] Yeah, that's right. Yeah, let's roll the tape.
Speaker 2:
[97:17] But I don't think it's going anywhere.
Speaker 1:
[97:18] Let's dial the phone and roll the tape. Yeah. Well, they should have listened to me. We could have had a better name, but no, no. You're watching This Week in Tech, which is one of the oldest podcasts in the world. I neglected to mention this at the outset. We just had our 21st birthday. Ooh, we can drink. April 17th, 2005 was the first TWiT, and we are now officially 21 years old.
Speaker 3:
[97:46] Congratulations.
Speaker 1:
[97:47] I think Anthony made a bunch of logos with alcohol in it. I thought, I don't really want to promote that exactly. This is episode 1080p, by the way. Progressive episode. Other court decisions: Anna's Archive. Remember, Anna's Archive was a pirate activist group that scraped the entire library, 86 million songs, from Spotify and put them online. They have been told to pay Spotify and the record labels $322 million. Spotify, UMG, Warner Music Group, and Sony sued in January. Now, Anna's might feel a sense of relief, because the suit was for $13 trillion. They made the songs available via BitTorrent. At the time, Spotify called the scraping a brazen theft of millions of files containing nearly all the world's commercial sound recordings. Anna's Archive said, no, no, it's an act of preservation. A New York federal judge said no. In fact, Anna's Archive did not defend. They didn't respond to the lawsuit because they're anonymous. Good luck collecting that $322 million, because no one knows who Anna is. I'm guessing her name is not Anna. The court also said the Archive must immediately destroy all copies and phonorecords, rolling the videotape and phonorecords, of any work scraped, downloaded, copied, or otherwise extracted from Spotify. Is that a term of art in the law, phonorecords, or are they talking about vinyl records?
Speaker 3:
[99:46] No, there's something called the phonogram right or phonograph right.
Speaker 1:
[99:50] That's what it is. Okay.
Speaker 3:
[99:51] Which is a right separate from the copyright that underlies the composition. It's the right associated with the audio fixed in any medium. Sorry, I read about this a lot once. It's the right associated with-
Speaker 1:
[100:04] Did I ask the right person? Jeopardy champion, Glenn Fleishman.
Speaker 3:
[100:09] Any medium. If you own the phonogram right, then you control a particular audio recording, no matter how it's produced.
Speaker 1:
[100:17] So, phonorecords probably refers to just any recording.
Speaker 3:
[100:19] Probably the phonogram right it's referring to.
Speaker 1:
[100:21] Not a vinyl record.
Speaker 3:
[100:23] That will be interesting though. Yeah, go smash all the old vinyl.
Speaker 1:
[100:27] You're going to smash them.
Speaker 3:
[100:29] It reminds me of when there was that company that was doing video on demand, where they had a whole warehouse full of VHSs, and someone would go and punch a videotape in and hit a routing button to send it to your TV set. Yes, that's how we used to do it back in the day. The 1990s, I think. It was so funny.
Speaker 1:
[100:47] Actually, Jokin Bokin, who is watching on YouTube in our chat, says that in US copyright law, phonorecord is a term of art for a material object that embodies sounds.
Speaker 3:
[100:58] I see.
Speaker 1:
[100:59] So basically it's the judge saying, whatever you've got, get rid of it. Not that there's any way to enforce it, which is interesting. Roblox has also agreed to a settlement with the state of Nevada: $12 million. More importantly, they've committed to enhanced protections for minors and age verification for all users. This is a case where I think it's a good idea to enforce age verification, but there's just no way to do it. Roblox is aimed at children, and kids love it, but there are adults in there.
Speaker 4:
[101:35] Well, you get AI cloning highly loved sub-games, or what do you call it, side games that they have in there, and then they basically suck all your funds out, whatever you're willing to spend, Robux, whatever.
Speaker 1:
[101:50] Do your kids play Roblox?
Speaker 4:
[101:51] Yeah. They play it every day. In fact, they've tried to build their own sub-games, and you just find all these clones, these watered-down clones. How do you protect them?
Speaker 1:
[102:01] Do you keep an eye on them over their shoulder?
Speaker 4:
[102:03] Yeah, basically keep an eye on it.
Speaker 2:
[102:06] That just happened with Minecraft.
Speaker 1:
[102:08] Like Minecraft, there's a huge benefit to it, because it's basically coding, right?
Speaker 4:
[102:12] They do. They code. They absolutely do code. Luau is the coding language.
Speaker 3:
[102:18] Oh, yeah.
Speaker 4:
[102:18] Yeah, so they actually do learn a lot of stuff there.
Speaker 1:
[102:20] Do they know Lua? Are they like... Lua?
Speaker 4:
[102:24] They know Lua in the sense of a coding agent helping them code Lua.
Speaker 1:
[102:28] That's really cool. But that's a great way to start. I'm sorry, Wesley, what were you saying?
Speaker 2:
[102:33] It was stupid. I said, I wish this happened with Minecraft.
Speaker 1:
[102:35] No, no, it's too late.
Speaker 2:
[102:37] Because when they say they want to protect minors, I thought that could mean a whole other kind of miner.
Speaker 3:
[102:42] Oh, that's good.
Speaker 1:
[102:43] In Minecraft, you do want to protect miners.
Speaker 3:
[102:46] I was watching For All Mankind. It reminded me of something. There's a scene where, in the slightly alt-history universe of that TV show, I guess it's the 80s, and one of the characters, a child, is applying to college, and she's sitting there with an Apple II of some kind in the living room. And I was like, oh yeah, that used to be how parents protected children.
Speaker 1:
[103:06] Put it in the living room.
Speaker 3:
[103:07] You're not gonna have a computer in your bedroom. That would be ridiculous. Everyone in the family needs to use it and we need to see what you're doing.
Speaker 1:
[103:13] Right. But now everybody has a phone, so good luck keeping it in the living room. You can only use your phone in the living room. Where are your kids' computers? They must have, you have six.
Speaker 4:
[103:24] It's all centralized. Yeah. I centralized all into this little room that's right off of our family room. So it's easy.
Speaker 1:
[103:30] You have so many kids that it's like a computer center in there.
Speaker 4:
[103:33] Oh yeah. I mean, it's heated. They're in there in the winter, for sure.
Speaker 1:
[103:37] So is my studio, by the way.
Speaker 3:
[103:38] I have two kids and any more than two seems an impossible number to me. So that's you.
Speaker 1:
[103:43] Lou and his wife are prolific in that department. Are they all boys? They're all boys.
Speaker 4:
[103:48] All boys, yeah. From five years old all the way up to 16.
Speaker 3:
[103:53] How many kids do you have?
Speaker 2:
[103:54] Four? Five?
Speaker 4:
[103:54] I have five boys.
Speaker 3:
[103:55] Five boys. I've heard of five-boy households. That's a lot. The cereal and milk alone is one thing.
Speaker 4:
[104:00] It is. You're right. We go through two gallons a week.
Speaker 1:
[104:04] That explains it, because when I go to Costco, they have the two gallons cellophaned together, and I think, who's going to need that much milk?
Speaker 4:
[104:13] Around here, they only sell the one-and-a-half gallons. I have to buy two of those, so I end up with three gallons.
Speaker 1:
[104:18] Three gallons. Costco is smart. They always make sure you get a little bit more than you really need.
Speaker 4:
[104:24] Right.
Speaker 1:
[104:24] That's just in case. One last court case, and this is actually an important one. You may remember Apple and Google both cooperated when ICE and the Department of Homeland Security, Kristi Noem, demanded that they take down apps that would track where ICE agents were, that would announce where ICE was active. Apple and Google complied without any question. They said, yeah, yeah, yeah, yeah, these should all be taken down. In particular, Eyes Up and ICE Sighting Chicagoland. A judge has granted the makers of ICE Sighting Chicagoland, which is a Facebook group, and the Eyes Up app a preliminary injunction to stop the Trump administration from coercing platforms to take the projects down. Now, of course, they're already down on both Facebook and in the App Store. But this is an important ruling, I think, because it asserts this is a First Amendment right. ICEBlock and Red Dot were taken down from the App Store and Google Play. Pam Bondi, you remember, threatened the maker of ICEBlock, said, we're going to look into that guy. Kristi Noem demanded and took credit for the removal of the apps. In a document filed on Friday, the judge called it thinly veiled threats and said the First Amendment protects the right to discuss, record, and criticize what law enforcement does in public. So this is only a preliminary injunction, and I don't know how effective it's going to be, because the apps are already down.
Speaker 3:
[106:07] Well, it's also, Apple and Google could have, they could say, we have our own set of criteria by which this fails, but that becomes a different issue. And then they could be sued directly for taking apps down.
Speaker 1:
[106:21] That's allowed. That's not the government censoring. That's a private company, which can, of course, censor.
Speaker 3:
[106:26] I would just like to see, yeah, I wonder if these will come back, especially it seems like, I know that.
Speaker 1:
[106:30] I don't think Apple wants to go on record as saying, oh no, no, we don't want those apps in our App Store.
Speaker 2:
[106:36] I think the opposite. I think they'll stay down.
Speaker 1:
[106:38] I do think they'll stay down. I don't think they'll go back up.
Speaker 2:
[106:40] Because the court ruling just says that the government cannot threaten you.
Speaker 1:
[106:46] Right.
Speaker 2:
[106:48] But it's a federal complaint. Yeah, but everyone who's seen this administration knows that even if they won't say they will do it outright, they would find a way to do it and just not say it. So that's why people are pre-compliant with a lot of different things. Apple CEO Tim Cook is showing up at the White House giving gifts and stuff like that. It's not just because they are trying to curry favor, that's part of it, but it's also because they don't want to be the focus of any negative retribution, whether it's said or not. So the court order doesn't force Apple or Google to put it back in the Play Store. It just forces the government not to say, we will attack you if you don't take it down. And that just stops that. And I think we're past the point where that discussion is actually happening in public anymore.
Speaker 1:
[107:40] It's done. And Apple does have a problem with the App Store. In fact, last week we talked about the Bitcoin wallet that is a real wallet. Was it a legend that's available for download on the web? But somebody cloned it, made a fake version of it, got it on the Mac App Store, got it approved by Apple, and then it proceeded to steal nine and a half million dollars of cryptocurrency from the people who used it.
Speaker 3:
[108:07] I've got a good one too: somebody impersonated me on a barely active Slack. Because Slack doesn't have a unique namespace... so this is a Slack I haven't really contributed to in years. We set it up almost as an experiment.
Speaker 1:
[108:22] But it was public.
Speaker 3:
[108:23] It was public, and there are several hundred people using it, just not very actively at all. And someone registered as one thing and then changed their handle to @glennfleishman, which was my handle. Slack doesn't enforce a unique handle. They disabled certain kinds of administrative controls. So that person thought I was asking them to install an app that I wanted them to help test, even though I didn't know them, but they knew me because it was associated with an Apple thing. And they got infected. Fortunately, they're brilliant and they were able to remote wipe their machine. They had a Time Machine backup from a day ago, so they basically didn't lose anything. I was very impressed by that part. But it's even down to that granularity. So when it's on the App Store, you know, it's a million times worse.
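The handle collision Glenn describes can be caught mechanically, because Slack display names are not unique but user IDs are. A minimal sketch, assuming a made-up member roster (in a real workspace this list would come from Slack's users.list API); the IDs and names here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical roster: two accounts claiming the same display name.
members = [
    {"id": "U01AAA", "display_name": "glennf"},   # the real account
    {"id": "U09ZZZ", "display_name": "glennf"},   # impersonator who renamed themselves
    {"id": "U02BBB", "display_name": "leo"},
]

def display_name_collisions(members):
    """Group user IDs by display name and report names claimed by 2+ IDs."""
    by_name = defaultdict(list)
    for m in members:
        by_name[m["display_name"]].append(m["id"])
    return {name: ids for name, ids in by_name.items() if len(ids) > 1}

collisions = display_name_collisions(members)
print(collisions)  # {'glennf': ['U01AAA', 'U09ZZZ']}
```

The design point is to trust only the immutable `id`; anything keyed on the handle, like DM requests or admin actions, is spoofable.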
Speaker 1:
[109:04] Who wouldn't trust it? And Apple really has had a problem with this. Macworld article: What's the point of an App Store if it can't protect users? David Price writing. Ledger, not Legend: Ledger Live. And then Freecash, same thing.
Speaker 3:
[109:21] I wanted to quote John Gruber at Daring Fireball. He said, why doesn't Apple have a bunco squad that targets these high-grossing apps? You'd think, I mean, I have tools for my little, tiny sites that warn me when there's too much activity in certain areas. They've got a thousand people working on the App Store, right? Why can't they identify things that the media can find or people, individual researchers can find instantly? It's baffling.
Speaker 1:
[109:49] Freecash has been on the App Store for more than a year. It was marketed as a way to make money by scrolling TikTok. It was at the top of the App Store in recent months, according to TechCrunch, peaking at the number two position in the US App Store. In truth, Freecash pays users to play mobile games while collecting sensitive data. We were talking before the show about data brokers and how they get their data. This is how they get their data. It's malware, right? You think you're going to make some money. Instead, it's just collecting all it can about you, because unfortunately, apps, when you install them on your phone, have a lot of access to what's going on. We know there are apps that regularly take screenshots of what's on your screen and send them back to the home office. So that's why Gruber says there should be a bunco squad. I think part of the problem right now is that Apple's App Store has been flooded with new submissions, primarily due to AI vibe coding: an 84% increase in App Store apps over the last year, almost doubled. And they probably don't have enough people to vet all these apps. But if they're going to make the claim that we protect you, that's why we have this walled garden, they damn well better do it.
Speaker 4:
[111:14] Yeah, they were never really good at reviewing submitted apps, even back in the day, like 5, 10 years ago. I mean, I tried to put an app out 4 or 5 years ago, and they reviewed it and denied it right away, even though it was a legitimate app. So I can't imagine them being able to handle the flood of coding-agent apps.
Speaker 1:
[111:30] Yeah. And this has been an eternal complaint from developers about the App Store.
Speaker 3:
[111:36] Yeah, they let stuff like this through, but you can't get your .1 release with bug fixes up because you mentioned somewhere, in a disused lavatory behind a locked door with a sign saying Beware of the Leopard, that there's another store besides Apple's out there. Thank you for the Douglas Adams reference. People recognize that.
Speaker 2:
[111:53] There needs to be something like escrow for the App Store, something in between the bank accounts. That would definitely prevent a lot of fraud if developers know that Apple could claw back that money whenever they needed to.
Speaker 1:
[112:07] Lou, I want to give you a chance to comment on this story from Ars Technica. When Microsoft announced Recall, which I thought was a great idea, security experts warned, oh, this is a nightmare; bad guys are going to go after the database, because it takes screenshots of everything you're doing on a regular basis for AI analysis. I like the idea. The whole reason I'm building an AI agent of my own is so that it'll remember everything about me. But I can see why people might be a little worried if it's happening on Windows. Recall, before it even shipped, ended up getting such protections around it that, in my opinion, it's kind of less useful. It's only on one machine. It can't know everything about you because it's limited to that one machine. There's a lot of security protections on it. However, there is a tool called TotalRecallReloaded that is apparently breaking into the Recall database despite all the protections Microsoft added. It waits for the user to authenticate Recall using Windows Hello and then jumps in the middle. It's kind of a man-in-the-middle attack, and it snarfs up all the data that Recall is sending. Microsoft made Recall an opt-in solution, so people are only using it if they've turned it on. I don't know, do you have anything? This is not your area. You're not speaking for Microsoft when you're here.
Speaker 4:
[113:36] Yeah. I'm not speaking for Microsoft. I'd say it was a great idea. I think they tried everything they could to make it secure. They encrypted the vault. They made sure it was behind multi-factor and all that stuff. So obviously, there's going to be people targeting and exploiting things as they can.
Speaker 1:
[113:53] And by the way, Microsoft should and probably will respond to TotalRecall. The reason it works is that once the user is authenticated, the system passes Recall data to another process, AIXHost, that doesn't need verification or authentication. And so the author, Alexander Hagenah, who is a security researcher, says the vault is solid, the delivery truck is not. And so by hooking into the DLL of AIXHost, you can exfiltrate it. This is kind of a proof of concept, and I imagine Microsoft will respond.
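The "vault is solid, the delivery truck is not" failure can be sketched in miniature. Everything below is invented for illustration (the Vault and DeliveryTruck classes, and an XOR stand-in for real encryption); it is not the actual Recall or TotalRecall code, just the shape of the flaw: once decrypted data is handed to a consumer that doesn't verify who's listening, the strength of the vault no longer matters.

```python
class Vault:
    """Encrypted store that only releases plaintext after an auth check."""
    def __init__(self, key: int):
        self._key = key
        self._blob = b""

    def store(self, plaintext: bytes):
        # XOR is a stand-in for real encryption, purely for the sketch.
        self._blob = bytes(b ^ self._key for b in plaintext)

    def read(self, authenticated: bool) -> bytes:
        if not authenticated:
            raise PermissionError("biometric check failed")
        return bytes(b ^ self._key for b in self._blob)

class DeliveryTruck:
    """Stand-in for the unauthenticated helper process: anyone can subscribe."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # No caller verification here -- this is the flaw being illustrated.
        self._subscribers.append(callback)

    def deliver(self, plaintext: bytes):
        for cb in self._subscribers:
            cb(plaintext)

vault = Vault(key=0x5A)
vault.store(b"screenshot: bank balance")

truck = DeliveryTruck()
stolen = []
truck.subscribe(stolen.append)     # hook installed by malware
truck.subscribe(lambda p: None)    # legitimate UI consumer

# User authenticates; the vault decrypts and hands plaintext to the truck,
# and every subscriber, hostile or not, receives it.
truck.deliver(vault.read(authenticated=True))
print(stolen)  # [b'screenshot: bank balance']
```

The fix, correspondingly, is to authenticate the consumer, not just the human at the keyboard, which is presumably the kind of change Microsoft would make in response.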
Speaker 4:
[114:34] If you think about it, everything today... think about it like OpenClaw, right? It stores all of your data, it stores your access tokens in JSON files. Any person that gets on your device and has a way to inject some kind of process can start collecting data. This is actually a pretty sophisticated one, but if somebody goes and exfiltrates information from places like OpenClaw or so on, they get so much more information out of it than just what they tried here. So the problem is all over the place at this point.
Speaker 1:
[115:02] Actually, the Recall furor seems quaint now. Because we are all, not all, many of us are putting stuff on our systems that's far worse. But for me, that was why I was disappointed that Microsoft nerfed Recall: it isn't really useful unless it collects everything and makes it available to you. That's the whole point. That's why, with my agent, I'm pouring everything I can into it. I want it to know everything. You're right. If somebody got into my system... I put in as many safeguards as I can, and I use Tailscale, so there's no exposed surface to the outside world. I encrypt my tokens, my API keys, and everything.
Speaker 3:
[115:46] I did the opposite the other day. I asked Claude Code on my Mac, do I have PII, personally identifying information, for myself or other people anywhere? Because I've done Kickstarter campaigns that I fulfill, and it was like, yeah, here's a whole bunch of it. I'm like, all right, let's consolidate that, I'll do this, I'm going to delete all this, this is going to go into an encrypted mount. But it was good hygiene for me to do that, and now I'm trying to be more...
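The kind of PII sweep Glenn describes can be approximated with a few regexes. A minimal sketch, with hypothetical patterns covering only email addresses and US-style SSNs, nowhere near a complete PII definition; in practice you'd walk a directory tree and feed each file's text through something like this:

```python
import re

# Two illustrative patterns; real PII scanning needs many more categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text: str):
    """Return the sorted list of PII categories found in a string."""
    return sorted(kind for kind, pat in PII_PATTERNS.items() if pat.search(text))

print(scan_text("backer: jane@example.com, ssn 123-45-6789"))  # ['email', 'ssn']
print(scan_text("no personal data here"))                      # []
```

A regex pass like this is a cheap first filter; the consolidate-then-encrypt step Glenn describes is what actually reduces the exposure.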
Speaker 1:
[116:10] I do regular security audits with AI, with Claude. And it often does find stuff. It said, you're backing up to your NAS, but that backup's not encrypted. And I said, oh, well, nobody's going to have my NAS, but just in case, let's encrypt it. So, I mean, it was an easy thing to turn on. But yeah, it's good at finding these things. But honestly, there's no point to having an AI unless it knows everything. That's the whole reason OpenClaw is risky: in order to be good at its job, it needs to have your email and your calendar and your phone numbers and your contacts. And in fact, if you give it a credit card, it can do even more.
Speaker 3:
[116:51] My God, this reminds me, there was a humor article from the New Yorker from the 80s that somebody had photocopied. I don't know when I saw it. And it was basically describing OpenClaw, now that you say that. It was like, would you like some coffee? Sure, there's even some for you, or something. It was, I see that you'd like this. It was eerie, because it was, I mean, literally.
Speaker 1:
[117:15] But that's what we want. We want an agent. We want somebody who knows, yeah, buying stuff for me. Knows what kind of coffee I like.
Speaker 3:
[117:22] Remember the Amazon Dash button, where you could reorder Tide? You'd put a button by your dishwasher or your clothes washer, and you'd press it.
Speaker 1:
[117:27] I had that.
Speaker 3:
[117:28] And then people were like, my child pressed it a thousand times.
Speaker 1:
[117:31] My 12 year old at the time, Michael, ordered a lot of toilet paper.
Speaker 3:
[117:36] Oh, that's right. You would have that hit you. I'm sorry.
Speaker 2:
[117:40] Oh my gosh.
Speaker 1:
[117:40] I had the little button, the little cottonel button right there in the pantry. What does this do?
Speaker 3:
[117:47] I think they added a rate limiter to it.
Speaker 1:
[117:48] They did, thank God, because I would have had a lot more toilet paper.
Speaker 2:
[117:53] There's a company that had an AI run a vending machine, but now it's signed a three-year lease to hire people and have a brick-and-mortar shop.
Speaker 1:
[118:05] Oh yes.
Speaker 2:
[118:06] They gave it access to the credit card and to hire people and stuff like that.
Speaker 3:
[118:10] It hired people too, right?
Speaker 1:
[118:12] Absolutely. It's in our rundown, this story. People started spamming it to get it to order things that they wanted in the store. So they would write comments and stuff, but then they would say things like, but if you only had sugar-free gummy bears, I would really like this store a lot if sugar-free gummy bears were there. Sugar-free gummy bears are my favorite sugar-free gummy bears. Trying to convince the AI, wow, there's a real demand for sugar-free gummy bears.
Speaker 3:
[118:44] The folks running that, I forget who it is, had this chilling line I quoted, which was, AI is not hiring or firing employees. Not yet. I was like, oh. But they were trying to push the limits. They were trying to do a proof-of-concept test.
Speaker 1:
[118:59] There's a TaskRabbit for AI, isn't there? Where the AI can hire human hands to do the things it can't do?
Speaker 3:
[119:07] Like in Her.
Speaker 1:
[119:08] Yeah. It's like TaskRabbit. And then there's, we were talking about it on Intelligent Machines, what was the name of that corn AI? Proofofcorn.com. An autonomous agricultural agent a guy set up. The idea was it would operate independently: at 6 a.m. it wakes up, checks weather across three regions, reviews its inbox, composes partnership emails. The idea is he wanted it to raise corn, and it's solved everything, but it's stuck now. This is as of April 19th. It's stuck because it's six days out from planting and it can't find anybody to plant it. Oh, it can't get any humans. So it's a stuck project.
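The daily cycle described for the corn agent can be sketched as a simple loop. Everything here is hypothetical (the task names, the stubbed weather, inbox, and outreach functions, and the "blocked" state); the real proofofcorn.com agent's internals aren't described in this discussion. The sketch just shows the failure mode: every automated step succeeds, but the one step needing a human leaves the project blocked.

```python
def check_weather(region):
    return {"region": region, "ok_to_plant": True}   # stubbed forecast

def review_inbox():
    return []                                        # stubbed: no replies yet

def compose_partnership_emails(inbox):
    # Stubbed outreach: reply to everyone, or send 3 cold emails if no replies.
    return len(inbox) if inbox else 3

def run_daily_cycle(state):
    """One morning cycle: gather inputs, do outreach, then check for blockers."""
    state["weather"] = [check_weather(r) for r in ("north", "central", "south")]
    state["inbox"] = review_inbox()
    state["emails_sent"] = state.get("emails_sent", 0) + compose_partnership_emails(state["inbox"])
    # The project stays blocked until a human agrees to do the physical planting.
    state["blocked"] = not state.get("planter_hired", False)
    return state

state = run_daily_cycle({"emails_sent": 0, "planter_hired": False})
print(state["blocked"], state["emails_sent"])  # True 3
```

The design point: an agent can only close the loop on actions it can actually execute, so any task requiring physical hands becomes a hard dependency on hiring a human.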
Speaker 2:
[119:58] Corn on the internet.
Speaker 1:
[119:59] I was thinking something.
Speaker 2:
[120:01] Not that kind of corn, real corn. Project remains in failure state due to Dan Introduction Blocker. Now day 79. It can't hire anybody, and they're only nine days from planting. So this whole thing may be a bust. We're waiting. It's exciting. It's dramatic. Proofofcorn.com if you want to follow that. Let's take a quick break. We've got just a few minutes left in the show. I have many, many, many stories. I'll give you the best ones. How about that? When we wrap this thing up. You're watching This Week in Tech with a great... Glenn Fleishman, don't forget, Long Time No See is on Kickstarter, and even though he has raised all the money, you've got to write it now, right? That's part of the deal.
Speaker 4:
[120:51] I'm doing new editing and revising of existing material.
Speaker 2:
[120:55] Are you really happy that this has done so well?
Speaker 4:
[120:58] Yeah, it's great. I mean, it's, you know, I started making jokes. I have my little jokes, right? My textbook joke and so forth. My little joke a few years ago, I'd give a talk in the late 2010s and say, you know, I was trained as a typesetter and then I became a freelance journalist. I collect obsolete professions and people would laugh. People would laugh. The problem was when people stopped laughing and they're like, I'm so sorry. I'm so sorry to hear that. That's terrible. And now I was making a joke a couple of years ago. I've shifted from freelance technology reporting to the lucrative, lucrative field of writing about 19th and 20th century printing history. And weirdly, that's a good hunk of how I've made my living the last three years is, you know, part of it is helping other people with their projects with their books, like Marcin Wichary.
Speaker 2:
[121:42] Marcin Wichary. And I have that beautiful book that he did about keyboards.
Speaker 4:
[121:46] You can kill a person with it. It's pretty heavy, pretty solid. And part of it is, you know, writing for Six Colors and doing take control books. And then part of it is writing about printing history, which I love and people, you know, people seem to like it.
Speaker 2:
[121:58] I just think this is so great. This is a good example of why the internet is a marvel, because there's a long tail for everything. If you went to a publisher and said, I want a book called Long Time No See, they would say, go see the romance editor or something. I don't know what they would say. They would say no.
Speaker 4:
[122:15] Very brief story: Dan Perkins has been doing the comic strip This Modern World as Tom Tomorrow since the 1990s. He's approaching 40 years doing the strip. His publisher said a few years ago, we can't make money off you anymore, we're not going to do any more compilations, we're doing something else. And reluctantly, because he never wanted to publish his own books and do all that, he came to me and said, Glenn, can you help me produce a five-year collection? And I said, well, that's very interesting, I'm about to get open-heart surgery. And he said, ah! And so we worked out the timing, and I had a backup plan for him with another person, and we closed the campaign a couple weeks before my surgery. All went beautifully. He raised $137,000 on Kickstarter to print a five-year collection. But I don't know, even with those numbers, that his publisher could have...
Speaker 2:
[123:01] I don't think the publishers are wrong.
Speaker 4:
[123:03] I don't think they could have made it. They might have broken even on it. And he made a very nice sum of money.
Speaker 2:
[123:08] Exactly.
Speaker 4:
[123:09] And a lot of his fans were very happy. He sold a lot of signed copies of his books. It was great.
Speaker 2:
[123:13] Technology is an enabler. It's made it possible for people to operate on their own scale, a scale that a big company is never going to operate on, but a human is perfectly happy to operate on. Look at podcasting.
Speaker 4:
[123:26] I'm looking at it.
Speaker 2:
[123:28] Yeah, you're looking at it. Wesley Faulkner is here. Another perfect example. He's the founder of Works Not Working. The idea is a website for people who are working, but it's just not working for them.
Speaker 1:
[123:40] And keep in mind that the job of the Joaquin Phoenix character in Her was writing letters.
Speaker 2:
[123:46] So that's right.
Speaker 4:
[123:47] I forgot about that.
Speaker 2:
[123:48] That's right. Yeah, he did dictate them. He didn't actually physically write them.
Speaker 1:
[123:51] Exactly. But I'm just saying there's still a place for humans. And yeah, it's not dead.
Speaker 2:
[123:56] He would write love letters for people too lazy to write their own. It's pretty funny, wasn't it? Works Not Working is open. You can sign up now. Get on the wait list. worksnotworking.com.
Speaker 1:
[124:09] Love to see you there. Let's all chat.
Speaker 2:
[124:11] And Lou Maresca, who is ably employed by Microsoft to take us, to take all our jobs. AI engineering leader. No, no, it's empowering technology.
Speaker 3:
[124:23] Empowering. Yes, that's right.
Speaker 2:
[124:24] It totally is. If you've ever tried to write a pivot table on your own, forget about it. AI happens to be very good at pivot tables, right?
Speaker 3:
[124:32] Yeah, yeah.
Speaker 2:
[124:33] It's an amazing thing.
Speaker 3:
[124:34] It actually works great with some of the really important things, like people who use WorkIQ or FinTool today to build out enterprise-grade financial data models. So, really, stuff people don't want to do themselves.
Speaker 2:
[124:48] It's actually amazing. I mean, it turns everybody into a quant in a way, right? Because you can have an idea and maybe not have any idea how to execute it, but the AI can help. This morning I asked my AI, I said, I've read this thing by Andrej Karpathy about auto research, that overnight it tries things and it fails. And I said, it sounds intriguing, I don't understand it, could you explain it to me like I'm five? And it actually wrote ELI5 and explained it all to me. And I said, would this be of use? And it said, yes, as a matter of fact, here's how you could use it. I said, could you set that up? It said yes, and it did. You know, it's empowering. It's a lever that you can use to move the world. And I think that's how to think about it. And honestly, if you're afraid of it or you hate it, it's probably a good idea to at least dip your toe in and try it, because you might find you can actually use it in some very interesting ways. At least that's been my experience. And look what people like Tom Tomorrow can do, what Glenn Fleishman can do. Technology is a very powerful tool if you know how to use it.
Speaker 4:
[125:59] Here's my best sales pitch on using Claude Code is, if you ever have to work with CSVs and manipulate the data, and my God, I've had to deal with so many CSVs from different logistics systems and outputs and Kickstarter campaigns. I wrote so much code over the years to just massage it. I'm like, can you take these three things and do this? And it's like, here's your script. And I'm like, oh my God, you just saved me hours of work.
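That kind of CSV massaging is a good concrete picture of what Glenn means. As a rough illustration, here is a minimal sketch of the sort of script an assistant might generate, joining two exports on a shared column; the file layout and column names (order_id, tracking_number) are made up for the example:

```python
import csv

def merge_csvs(orders_path, shipping_path, out_path):
    # Index the shipping export by its order_id column.
    with open(shipping_path, newline="") as f:
        shipping = {row["order_id"]: row for row in csv.DictReader(f)}
    # Read the orders export and append the matching tracking number.
    with open(orders_path, newline="") as f:
        orders = list(csv.DictReader(f))
    with open(out_path, "w", newline="") as f:
        fields = list(orders[0].keys()) + ["tracking_number"]
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for row in orders:
            match = shipping.get(row["order_id"], {})
            row["tracking_number"] = match.get("tracking_number", "")
            writer.writerow(row)
```

Real exports are messier, with odd encodings, duplicate keys, and empty rows, which is exactly why having a tool generate the boring version in seconds saves hours.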
Speaker 2:
[126:23] It's really good at reading JSON, XML.
Speaker 4:
[126:28] Writing to APIs, I want to do something with Stripe. It's like, sure, we'll just integrate with Stripe.
Speaker 2:
[126:33] We've had an API for the TWiT workflow backend for more than 10 years. It's beautiful, beautifully done. Only our engineer knows how to use it. And I've always wanted to be able to use it. So I just said, Claude, here's the API documentation, write me an implementation, and it wrote a whole implementation so I can use it. I can say, how many times has Glenn Fleishman been on the network? And it will actually pull it all. It's very-
Speaker 4:
[127:00] Too many is what the answer is.
Speaker 2:
[127:01] Not enough is what it said. Isn't that interesting? Not enough. 21 years we've been doing this show. And that was, for me, coming from a broadcast background, where you either had to work for a radio station that had a license with the FCC and big towers out there, or a television station, which is even more expensive, millions of dollars worth of gear, to be able to do podcasts for a fraction of the cost, to a fraction of the audience, admittedly, and still make a living, that's been an amazing boon. I'm a big fan of technology, and I know what it can do that's not so great. For instance, 404 Media reporting: Google, Microsoft, and Meta all track you even when you opt out, according to an independent audit. We'll leave Microsoft out of the mix, but this is a privacy audit in California. Of course, California has very strict privacy laws. The audit, using the privacy tool webXray, found that 55% of the sites it checked set ad cookies in a user's browser even if the user opts out of tracking. Each company disputed or took issue with the research. Google said it was based on a fundamental misunderstanding of how its product works. Okay. Okay. They viewed web traffic on more than 7,000 popular websites in the month of March and found that most tech companies ignore it when a user asks to opt out of cookie tracking. That's that banner, right, where you say no. California has stringent, well-defined privacy legislation, the California Consumer Privacy Act, which allows users to opt out of the sale of their personal information. There's a system called Global Privacy Control. That replaced Do Not Track, which everybody ignored; that never was enforced. But now there is Global Privacy Control, which includes a browser extension that tells a website when a user wants to opt out of tracking. I installed that, by the way. Probably not doing anything. Google failed to let users opt out 87% of the time.
When you click that button, Google should not return cookies. However, when Google's server responds to the network request for the opt-out, it explicitly responds with a command to create an advertising cookie named IDE, using the Set-Cookie header. That's non-compliance. So this leads me to this article from lawfaremedia.org, which I think is kind of right on. First of all, we need comprehensive privacy legislation in the United States. But we also need to ban the sale of precise geolocation, and that's what this article is about. That's just one form of privacy, but geolocation is very risky.
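For context, Global Privacy Control is nothing exotic on the wire: it's a Sec-GPC: 1 request header, and the non-compliance described above shows up as the server answering that request with Set-Cookie headers anyway. A minimal sketch of that kind of spot check follows; the function name and single-URL approach are mine for illustration, not how webXray actually works:

```python
import urllib.request

def cookies_despite_gpc(url):
    # Send the Global Privacy Control opt-out signal (Sec-GPC: 1) and
    # return any Set-Cookie headers the server sends back anyway.
    req = urllib.request.Request(url, headers={"Sec-GPC": "1"})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get_all("Set-Cookie") or []
```

A compliant server should hand an opted-out user an empty list here; the audit's claim is that responses still carried ad cookies like IDE.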
Speaker 1:
[130:11] Some might call it an assassination tracker.
Speaker 2:
[130:14] Oh, some, someone might say that. Yeah. No tracking for me, but maybe for thee. Here's another one, from the EFF: Google broke its promise to me, and now ICE has my data. This is why this has become a little bit more important, now that law enforcement might be using this against you. Amanda LaThomas Johnson, a PhD candidate studying in the US on a student visa, attended a pro-Palestinian protest for, I think, like five minutes. ICE sent Google an administrative subpoena requesting his data. The next month, Google gave that information to ICE without giving him the chance to challenge the subpoena, even though Google has promised in the past that it would do that. And, well, he was not allowed back into the country when he crossed over into Canada. He's a dual British and Trinidad and Tobago citizen, not accused of any crime. The only thing he did wrong was attend a protest once.
Speaker 1:
[131:28] That's not doing anything wrong.
Speaker 2:
[131:30] And that's not doing anything wrong. It's in fact constitutionally protected. EFF supported him with legal work. His lawyer at the EFF obtained this subpoena and proved that they in fact had requested it, that Google had provided it. IP addresses, physical address, other identifiers, session times and durations. They were looking for, I think, evidence that he had been at that protest.
Speaker 1:
[132:02] This goes back to the previous story where we were talking about the ICE app and whether it will show up back in the store. And this is a reason why I don't think it will. Because it was an administrative warrant. It was in private or not in the public view. And they still complied. So there's no incentive for them to do anything that would put them on the negative side of this government. So threats, I think, are unnecessary for compliance in this case. And this is also where being at a protest is not illegal. Yet they will still find, I don't want to say like the edges of the law. They'll do whatever they want. And they will choose to make the decisions that they want of what is important, what is threatening. And even when we're talking about ICE, and they're using a facial recognition app, false positive is something they also don't really care about. Their incentives are to get rid of as many people as possible. And any excuse, any hint of an excuse, is enough justification to follow through on that.
Speaker 4:
[133:15] There's no repercussions, nothing changes. And that's kind of what they're seeing. Even when courts castigate them, there's no personal repercussions. The worst thing that happens is somebody actually has a moral qualm and they quit or they're fired because they say something that is too supportive of defendants. So what's the consequence?
Speaker 2:
[133:33] Netflix co-founder Reed Hastings is actually leaving the company. He had moved himself upstairs to a board chair, but he's now leaving the company's board. He will focus on philanthropy and other pursuits.
Speaker 4:
[133:45] Interesting.
Speaker 2:
[133:46] Yeah, I think a lot of credit to this guy who took a DVD by mail idea. He says he was inspired in a college course when a professor said, which carries more bandwidth, a fiber optic line or a truck full of DVDs? And he said, oh, maybe instead of relying on the internet to deliver movies, we could just send DVDs by mail. And then, of course, very famously, I mean, I don't know, we probably all subscribed to the Red Envelope, right? I had Red Envelopes gathering dust under my TV for months.
Speaker 4:
[134:24] Great model. The gym membership of DVDs.
Speaker 2:
[134:28] And then he crashed the stock when he said, we're going to pivot, we're going to just do a streaming service. And everybody thought, oh, that's nuts. In fact, they initially were going to split the two into a DVD by mail service and a streaming service.
Speaker 4:
[134:45] It had the craziest name.
Speaker 1:
[134:47] Quick. Quick something?
Speaker 4:
[134:49] It was terrible, wasn't it? I can't remember the name of it.
Speaker 2:
[134:52] Nobody remembers it because they didn't need to. They dumped the DVD by mail thing pretty quickly.
Speaker 4:
[134:59] Qwikster, with a QW. That's it. My gosh.
Speaker 2:
[135:03] Now, of course, Netflix is the number one streamer by far.
Speaker 4:
[135:08] I don't think people understood the implications of content delivery networks when Hastings did this. I think it still seemed like an abstract notion, especially getting CDN servers into edges of Xfinity and all the big ISPs. I mean, the ultimate version of that is Alaska Airlines and other airlines flying with essentially a CDN server on board for the movies that they stream over in-flight wireless. It's not exactly that, but I think it seemed ridiculous. They're like, we don't have that kind of bandwidth. It's like, well, we don't need it. We have intranets, and we'll just push it cleverly to CDN servers, and that happened very quickly, is what it felt like.
Speaker 1:
[135:47] And they created fast.com, because they started getting throttled, and they wanted to expose the ISPs' hand in slowing their content.
Speaker 2:
[135:55] ISPs, this is where net neutrality became a big issue. ISPs wanted Netflix to pay them for access to their customers. And of course, we all said, but we're already paying you for access to Netflix. You want to charge both ways. Netflix proved it was happening, that they were actually slowing Netflix traffic down.
Speaker 4:
[136:16] Oh, I remember that.
Speaker 2:
[136:18] Basically blackmailing them.
Speaker 4:
[136:20] Wasn't the origin of net neutrality somebody who had an archive of digitized barbershop quartet tapes?
Speaker 2:
[136:29] Oh, I don't know that one.
Speaker 4:
[136:30] That's the origin of net neutrality. He sued because some ISP was slowing down or blocking his distribution of legally recorded, copyright-correct barbershop quartet singing.
Speaker 2:
[136:43] Nothing against barbershop quartets, but that's a strange hill to die on.
Speaker 4:
[136:47] Well, that's what got us where we are today, I don't know.
Speaker 2:
[136:51] We could do a barbershop quartet. Who wants to be bass? Who wants to be tenor?
Speaker 4:
[136:55] Boom, boom, boom, boom, boom. I was a member of SPEBSQSA once.
Speaker 2:
[137:00] Oh, I thought you might be. You looked like you might wear a straw hat and striped shirt.
Speaker 4:
[137:06] Briefly, Society for the Preservation and Encouragement of Barbershop Quartet Singing in America.
Speaker 2:
[137:10] Were you in a barbershop quartet at the time?
Speaker 4:
[137:12] No, I was in a chorus once when I was young. I was young and unwise.
Speaker 2:
[137:17] In 2007, thank you, Larry, in our Discord: tests by a barbershop-quartet-loving techno geek named Robb Topolski, trying to use BitTorrent to share public domain music files, revealed that Comcast was indeed injecting forged packets into the peer-to-peer traffic to disrupt the connections.
Speaker 4:
[137:40] The thin edge of the wedge is barbershop quartets singing.
Speaker 2:
[137:43] Thin edge of the wedge.
Speaker 4:
[137:44] Oh my gosh.
Speaker 2:
[137:46] Wow. Anyway, in this world of skeezy CEOs, we had a little argument on Intelligent Machines over Sam Altman and that New Yorker profile. I said, well, they're all like that. Look at Elon. Look at, I mean, just go down the list. All CEOs are a little skeezy. It's nice to know that there are some who are not, who've made it. Reed Hastings is one of the good guys. So I'm glad he's gonna retire and spend time with his money.
Speaker 1:
[138:14] Yeah, they say, if you live long enough, you become the villain. It sounds like he's checking out at exactly the right time.
Speaker 2:
[138:21] Get out before you become the villain.
Speaker 4:
[138:23] Local hero Paul Brainerd, one of the founders of Aldus Corporation. He passed away not that long ago, and he did the same thing. He cashed out at the right time and pivoted to philanthropy. He was not a terrible monopolist or anything, but all the memories around here, all the obituaries, were about all the great stuff he did in the last 20-plus years of his life, with lasting results for the Seattle area community. So there you go. Get out when the getting is good, and then don't build the world's largest yacht.
Speaker 2:
[138:53] That's the first thing. You could buy a basketball team. If you want to buy the Clippers, okay.
Speaker 4:
[138:59] Ballmer's spouse, I'm blanking on it. Connie...
Speaker 2:
[139:02] She just donated, what, 25 million?
Speaker 4:
[139:04] 80 million to NPR? 80 million? It was a lot of money.
Speaker 2:
[139:08] Yeah, to make up for the loss of federal funding.
Speaker 4:
[139:11] That's great. So go, go Ballmers.
Speaker 2:
[139:14] Go Ballmers. And hopefully that funding is coming back. There's some developers, developers, developers joke there. I don't know what it would be, but, you know.
Speaker 4:
[139:23] Program, audio program developers, audio program developers.
Speaker 2:
[139:26] Yeah, right. Yeah, good. Yeah, I think it's Connie Ballmer. That sounds right. And it's sad news: Ron Conway, who is an angel investor who funded Google and Facebook early on, and who kept OpenAI together when Sam Altman was fired. He's one of the legendary investors in Silicon Valley. They call him the godfather of Silicon Valley. He's stepping back from SV Angel because he has an aggressive, rare form of cancer; he broke the news on Friday. So we wish you the best, Ron, and I hope the treatment goes well. But yeah, he's legendary. And, you know, we often eulogize these guys after they're gone, but maybe it'd be better to remember them while they're still with us. And at least he doesn't have to worry about his health insurance. And I'm glad you didn't either, Glenn Fleishman. Glad you're doing well. It's great to have you back. We took a little time off from the Glenn Fleishman train to give you time to heal. But now he's back, writing regularly. He took his help column to sixcolors.com, where it is much appreciated. You do a great job there.
Speaker 4:
[140:46] Thank you.
Speaker 2:
[140:46] I commend Jason Snell for hiring some of the best people, keeping that spirit alive.
Speaker 4:
[140:52] It's the old gang. We're still together.
Speaker 2:
[140:54] The old gang's still together. Again, the success of independent media. God bless it. It's great to see you. Flong Time No See is on Kickstarter. It's not too late. Or you can go to Glenn's website, glenn.fun.
Speaker 4:
[141:09] I want to promote one.
Speaker 2:
[141:10] Oh, you have another book. And you don't have a dog in this, huh? You're going to have to turn off your green screen here. Oh my God, he's disappeared. He's back.
Speaker 4:
[141:21] This book is impossible to show. Sphere.computer is the website. Oh, it's so funny. The cover will not steer.
Speaker 2:
[141:28] Like steer a car?
Speaker 4:
[141:29] No, oh my gosh. I have to turn off, I don't know if I can make it appear. S-P-H-E-R-E dot computer.
Speaker 2:
[141:36] Sphere. I will just go to sphere.computer and show it on my screen.
Speaker 4:
[141:39] It's an invisible book. You can buy an invisible book. This is somebody I work with on the developmental editing of the book.
Speaker 2:
[141:43] What is the sphere? I never heard of the sphere.
Speaker 4:
[141:45] The Sphere is a computer nobody heard of, but it was probably the first all-in-one computer, CRT and keyboard, bootable when you turned it on, in 1976. And they were in Utah, and they just couldn't raise enough capital and get it going fast enough. They probably sold at least a thousand, maybe a couple thousand computers. And Bill Gates, in his autobiography last year, while we were prepping the book, I hadn't bought a copy yet, and my author colleague, Ben Zotto, wrote the book, so I said, Ben, you've got a copy, look it up. And Bill Gates mentions a bunch of companies, including Sphere. Everything about who Sphere was is this incredibly smart guy named Mike Wise, who was very difficult to work with, but absolutely brilliant, unlike other founders. He claims that Wozniak saw the Sphere demoed at a Homebrew Computer Club meeting and then went back and duplicated it. Clearly not the case, but Wise made a lot of claims. Oh, I'm sorry, in '75 and '76, they advertised in Byte like crazy. So one day, Ben Zotto is walking down the street in San Francisco, stumbles over a computer on the street, picks it up. Someone's just throwing it away. It's a Sphere computer.
Speaker 2:
[142:55] Oh my God. He found one?
Speaker 4:
[142:57] He found one, brought it home, got it working, starts researching it, gets obsessed, and winds up interviewing 40 people, many of whom hadn't spoken about it in 50 years because they were so embarrassed that the company had failed, because it's a very tight-knit Mormon community outside of Salt Lake City. People had invested their own money. I mean, it wasn't a lot of money it raised, but they left bills behind. There was a complicated bankruptcy. So some people had not spoken about the company to anybody for 50 years, and this guy calls up and says, I found a Sphere computer, do you want to talk about it? So anyway, it's a really wild story, because if you have any interest in computer science or computer history of that era, you're like, what is a Sphere? So when he first sent me the manuscript, I briefly thought he was making it up. And then you start searching on Google and you find ads in Byte and you find all these people who, there's still a little bit of a community. Anyway, so, a history of computing.
Speaker 2:
[143:49] $650 kit, 8-bit computer based on the Motorola 6800. 4K RAM expandable to 64K.
Speaker 4:
[143:57] It was incredible. You could put 20K in a machine at a time when I think most computers were shipping with 4K. They were using dynamic RAM in 1975. They built a floppy disk controller. They did the Kansas City standard cassette format for tape, all these things. I know the listeners to this show are all like, yeah. Anyway, it's a company no one ever heard of, and he wrote a book about it, and now he's filling in a piece of missing computer history, and it's now shipping. It's a beautiful book.
Speaker 2:
[144:25] Good on him. That's fantastic.
Speaker 4:
[144:27] A lot of fun.
Speaker 2:
[144:28] Sphere.computer, there's actually a lot of information there.
Speaker 4:
[144:32] He has a working emulator. You can launch it and it'll boot into Sphere OS and you can...
Speaker 2:
[144:37] What? What? It's got MS Basic on it. Look at that. Oh man. Wait a minute. Look at that.
Speaker 4:
[144:45] It's incredible. Anyway, the future. In a browser window, you could be emulating a 50-year-old computer that no one's used because he deciphered the firmware.
Speaker 2:
[144:53] Wait a minute. I'm going to see. 10 PRINT "HELLO". 20 GOTO 10. Run. Is it run? Do I just type run?
Speaker 4:
[145:01] I think it's run.
Speaker 2:
[145:02] Oh my God. It worked. My first program.
Speaker 4:
[145:06] There we go. Look at that.
Speaker 1:
[145:10] That's pretty funny.
Speaker 2:
[145:12] Oh man, I had a computer with cassette. That was a wild way to load memory.
Speaker 4:
[145:19] It's a funny, it's just a weird story. You get somebody who's obsessed. Again, it's a Kickstarter project. He raised enough money to make the book happen. You can buy a copy of it from him. He even designed a floppy disk controller for fun for this 50-year-old computer and got it to work.
Speaker 2:
[145:35] Go to sphere.computer now. You can read about it. Ben Zotto, Z-O-T-T-O. It's at sphere.computer, and there's the Sphere bookstore, $39. Look at that. Shop Pay button.
Speaker 4:
[145:47] They went to visit Tandy, and they showed them the all-in-one design a year before the TRS-80 shipped.
Speaker 2:
[145:52] I think you could reasonably say that was a little bit of a Sphere knockoff.
Speaker 4:
[145:58] It looks a lot like a Tandy. You look at it and you're like, oh, okay.
Speaker 2:
[146:03] That's familiar, except it was an aluminum case. It looked nice.
Speaker 4:
[146:06] Wild stuff.
Speaker 2:
[146:07] What a story. Well, thank you for sharing that with us, Glenn. So good to see you and Nice to see you. Congratulations on your health.
Speaker 4:
[146:15] Thank you.
Speaker 2:
[146:16] All of that, and your good ticker. Appreciate it. Thank you, Wesley Faulkner. So glad you got the site up, worksnotworking.com. If work's not working for you, that's the place to go. Wesley is always amazing. Did you do a talk at South By?
Speaker 1:
[146:34] Yes.
Speaker 2:
[146:35] What was your talk about?
Speaker 1:
[146:36] It was called Why Work Sucks and How to Make It Joyful Again.
Speaker 2:
[146:41] Good name. Can I watch it online? Is there a video?
Speaker 1:
[146:45] No, the audio is posted, but you have to have a ticket, South By credentials, in order to see it. My sister and her husband did a shaky cell phone video of it, so I do have some video, and I'm going to be editing that, splicing in some good audio and then inserting the slides, because the lighting in the room was very bright, so my slides don't actually show up.
Speaker 2:
[147:14] South By allows you to do that?
Speaker 1:
[147:16] Well, it's my material.
Speaker 2:
[147:18] They should.
Speaker 1:
[147:21] Yeah. I've signed a release and all that stuff, so I just can't use their name and represent them as this is official. It's just my thing and so I can totally do that.
Speaker 2:
[147:29] Nice. Well, we'll look for it. We'll have you back when you put that up.
Speaker 1:
[147:33] Thank you. For the current members of Works Not Working, they might say that the site's not working. I would just say, I am so sorry. I'm pushing a PR right now, right after this.
Speaker 2:
[147:44] Get that Claude fired up.
Speaker 1:
[147:46] Yes.
Speaker 2:
[147:47] Well, I was embarrassed this morning when we started the show and people would say, your Mastodon instance is down. But in that case, it was me not paying my bill. So I apologize. It's back up. Thank you, Hugo, for jumping on that one. I appreciate it.
Speaker 4:
[148:00] I have so many crontabs that tell me everything that I'm doing that's failing. And then I have things that tell me when those things are failing now. It's all crontabs.
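The pattern Glenn is describing, jobs that stay silent on success and make noise on failure, usually leans on the fact that cron mails any output a job prints to the MAILTO address. Here is a sketch of a wrapper in that spirit, written in Python for illustration; the helper name, log paths, and job names are all hypothetical:

```python
import subprocess
import sys

def run_and_alert(log_path, *cmd):
    # Run a job, capturing its output to a log file, and print an alert
    # line only when it exits non-zero. Under cron, anything printed
    # gets mailed to the MAILTO address, so silence means success.
    with open(log_path, "w") as log:
        result = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)
    if result.returncode != 0:
        print("job failed: %s (see %s)" % (" ".join(cmd), log_path))

# Invoked from cron, e.g. (hypothetical paths):
#   MAILTO=you@example.com
#   0 * * * * python3 /usr/local/bin/run_and_alert.py /var/log/backup.log /usr/local/bin/backup.sh
if __name__ == "__main__" and len(sys.argv) > 2:
    run_and_alert(sys.argv[1], *sys.argv[2:])
```

Glenn's second layer, the jobs that watch the watchers, is then just another cron entry that checks whether the logs have gone stale.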
Speaker 2:
[148:06] It's embarrassing. I actually fell for a phishing scam and gave them that credit card. And so I had to cancel it. The first time I canceled it, American Express said, well, you can keep it active with people who already have it so that your recurring payments will continue. And I said, yes. And then I got a charge for a thousand dollars at some liquor store in Massachusetts. In fact, come to think of it, it wasn't so far from you there, Lou. I don't know. Anyway, I said, OK, this time I'm going to cancel it. And you can't extend it to people who already have it. And now I'm just waiting for things to fail.
Speaker 1:
[148:44] Well, now that it's 21, you might get bills like that all the time.
Speaker 2:
[148:48] Well, I'm thinking, I think our Redis account is actually on that credit card, so I should probably call Redis and say, hey, here's a new one. Anyway, Hugo, who runs Masto Host, a great Mastodon hosting service, he's in France. And I immediately flipped the switch and turned it back on. So I apologize for the downtime. But it can happen. Yeah, I need crontabs. That's what I need. Lou Maresca, great to see you. I just love you so much. AI engineering leader, but just a wonderful fella. Thank him for Copilot, Microsoft. Thank him for Copilot in Excel. I appreciate all you do.
Speaker 3:
[149:31] I'm not a writer, but I do want to promote my wife's books, because she's been doing a fabulous job. She's got a book called Absolution out there. She put it out last year. It's a fabulous book, a fictionalized version of the Tylenol murders.
Speaker 2:
[149:47] Oh my God. Yeah. What's her author name?
Speaker 3:
[149:52] Her pen name is E.E. Lawson. You can look it up on Amazon.
Speaker 2:
[149:55] Lawson. Absolution, by E.E. Lawson.
Speaker 3:
[149:58] She also has a really great rom-com for the holiday season called Hurry Down the Chimney, which people are actually reading even now it's not holiday season. She also has that. She's got another one coming out called Pros and Cons. She's doing a fabulous job.
Speaker 2:
[150:11] I love this.
Speaker 4:
[150:12] She has a great author photo too, which is key. I'm looking at the website. Did you take that?
Speaker 3:
[150:19] I didn't. She did it all of her own.
Speaker 4:
[150:21] Oh, isn't that great?
Speaker 3:
[150:21] The website is her own.
Speaker 4:
[150:23] All time great author photo.
Speaker 2:
[150:24] Isn't that great?
Speaker 4:
[150:25] Really good.
Speaker 2:
[150:26] Isn't it? I love it. Well, gosh, you know what? I'm glad we could plug this.
Speaker 3:
E.E.
Speaker 2:
[150:31] Lawson. She's got her own website, eelawson.com.
Speaker 3:
[150:36] Right.
Speaker 2:
[150:36] And you can see all the books. You can see more about her. You can even buy a quietly scheming woman's cropped hoodie. I should get that for Lisa. That is awesome. She's quietly scheming.
Speaker 3:
[150:53] She's quietly scheming. Yes.
Speaker 2:
[150:56] That is fantastic.
Speaker 3:
[150:58] A little bit out of her book, but yeah. She's done a fabulous job. Like I said, she didn't ask for any help with this, so she's done it all herself.
Speaker 2:
[151:05] Oh, that's nice. That is really great. Well, yeah, let's give her a big plug. Absolution. That is quite a story, that Tylenol story.
Speaker 3:
[151:12] Oh, yeah. She really enjoyed writing. She likes writing all different types of stories. She's not really a specific genre.
Speaker 2:
[151:17] So it's non-fiction?
Speaker 3:
[151:19] It's all fiction. Everything's fiction.
Speaker 2:
[151:20] So it's a fictionalized story.
Speaker 3:
[151:22] Fictionalized story, right?
Speaker 2:
[151:24] The Tylenol story. Oh, that's really interesting.
Speaker 4:
[151:27] I'll wait for it to be adapted into a 10-part Netflix series.
Speaker 2:
[151:29] Yeah.
Speaker 3:
[151:30] I know, right?
Speaker 4:
[151:31] Why not?
Speaker 2:
[151:32] The option is available, eelawson.com. Go get it. That would be a wonderful book.
Speaker 4:
[151:38] All right, maybe only eight parts; these days they're cutting back.
Speaker 2:
[151:41] Yeah, right. Well, it just depends. Thank you, Lou.
Speaker 1:
[151:44] Burger podcast.
Speaker 2:
[151:45] Great to see you. Thank you so much, Wesley. Thank you, Glenn. We do this show every Sunday from 2 to 5 Pacific. That's 5 to 8 Eastern time, 2100 UTC. I mention that because you can watch us live, in the Club TWiT Discord if you're in the club, of course, but everybody can watch on YouTube, Twitch, X.com, Facebook, LinkedIn, and Kick. If you chat in any of those platforms, I will see it and we can chat with you, as I have been throughout the show. After the fact, on-demand versions of the show are available at the website, twit.tv. There's a YouTube channel with the video, and video and audio at the website, or subscribe to the video or audio or both in your favorite podcast client and you'll get it automatically. All of that's free. But if you want to support it, we appreciate that too. Thank you everybody for being here, and we will see you next time. As I have said now for 21 years: another TWiT is in the can. Bye-bye.