transcript
Speaker 1:
[00:08] This is the Daily Tech News for Wednesday, May 22nd, 2026. We're gonna tell you some stuff it's good to know, we're gonna give you the context, and we're gonna help each other understand, right?
Speaker 2:
[00:18] We are, Tom. Today, Andy Beach is telling us how Google is using AI in advertising, and Framework's MacBook for Linux users may have achieved the dream of the mainstream modular laptop.
Speaker 1:
[00:30] Very excited about all that. I'm Tom Merritt.
Speaker 2:
[00:33] And I'm Sarah Lane.
Speaker 1:
[00:34] Let's start with what you need to know with that big story. Yeah, for those who follow Framework, the new Framework Laptop 13 Pro probably doesn't have too many things that are unexpected. Seems like everybody's pleased with it. But those who are new to the modular laptop maker may want to pay closer attention this time, because not only have they got the modular part down, which they have been doing for a long time, but it's got a solid build. That is the big thing I keep hearing from people: this doesn't feel like something that might be a little flimsy, this is a very good, solid build. It's called the Laptop 13 Pro. And in fact, Framework's CEO has described it as a MacBook Pro for Linux users. You cannot get it with macOS, although you can get it with Windows as well as Linux. But the company is clearly targeting Linux users and says they have slightly more of them in their user base than they do Windows users. Something like a little more than 50%. The stock model of the Laptop 13 Pro, which can of course be customized, that's the whole point with Framework, comes with a bigger 74-watt-hour battery. They say that's good for 20 hours of 4K Netflix. Remember, video isn't necessarily the most taxing thing, but it's good for comparing it to other models. A new chassis design in silver or black is available, machined out of a 6000-series aluminum block, hence the solidity. You get more memory, you get a haptic trackpad, and it has Dolby Atmos-certified side-firing speakers on both sides. You can pick from Intel's Core Ultra 5, X7 or X9, those are Panther Lake chips, or if you pay a little more, you can get the AMD Ryzen AI 300 series mainboard. It ships with a 13.5-inch 3:2, 2880 x 1920 touch display. Framework has changed its thinking on this. They used to not be into the touch stuff, now they are. Variable refresh rate between 30 and 120 hertz, a 700-nit-capable backlight, and the RAM is very interesting.
They have switched to more power-efficient LPDDR5X RAM, which is not soldered to the board. And Framework is big enough now to have secured supply, so they're not too concerned about running out. They will have to change prices on some models, but they've been very transparent about that. It's almost entirely backwards compatible with the previous Framework models. That means, with the exception of that larger battery, a part from the Laptop 13 Pro can slot into the original 13 that was launched back in 2021. Even the battery could be added to the old machine. You just have to get a new bottom cover and a new input cover, but you wouldn't have to replace the whole thing. So if you have a Framework laptop and are jealous of these upgrades, you don't need to ditch the existing one. Just buy the modules you want the most. However, if you do want the Framework Laptop 13 Pro right out of the gate, you can order it right now. Shipping will start in June. Pre-built systems with Windows or Ubuntu start at $1,500. And then the DIY model, where you do everything yourself, starts at $1,199.
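Quick sanity check on that battery claim, using just the numbers quoted above: 20 hours from a 74-watt-hour pack implies an average draw of about 3.7 watts, which is low but believable for hardware-decoded video. The arithmetic:

```python
# Framework's quoted figures: 74 Wh battery, 20 hours of 4K streaming.
battery_wh = 74.0
runtime_h = 20.0

avg_draw_w = battery_wh / runtime_h
print(f"Implied average draw: {avg_draw_w:.2f} W")  # Implied average draw: 3.70 W
```

That's why the caveat about video matters: playback leans on dedicated decode hardware, so it's a best-case number, not what you'd see under a heavy workload.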
Speaker 2:
[03:47] All right. So here's my question for all Framework enthusiasts. Like you mentioned, backwards compatible is going to be very attractive to a lot of folks. Like, all right, I need this, I need that. Okay, the larger battery would be the only real setback. But unless you're just very anti-DIY, which I'm going to assume a lot of Framework people are not, why not just do that with the older model?
Speaker 1:
[04:15] Yeah, I think a lot of them will. I feel like this laptop is meant to bring new people into the ecosystem. But if you have the previous 13, you have the 12, we're talking about a different system, but if you have the previous 13, upgrade that mainboard, maybe go to the trouble of changing the chassis a little bit to upgrade the battery. But the cool thing about it is, at some point, you could then buy the Laptop 13 Pro chassis and just put your internals into it, right? If you get to that point, you don't have to buy a new laptop. That's Framework's whole deal.
Speaker 2:
[04:54] I love this. I also love the sort of perhaps snarky nod to the MacBook. You know, the MacBook for Linux people. Also, touch screen. Interesting. Maybe we're going to be first here.
Speaker 1:
[05:07] Yeah.
Speaker 2:
[05:08] Should be an interesting 2026.
Speaker 1:
[05:10] I think what they found is that there are a lot of Windows users who are switching to Linux. And there were a lot of MacBook Linux users who were like, yeah, but I just like the MacBook Pro. I don't want to go to another Linux machine. But Apple Silicon has made that a not impossible, but more difficult proposition. And it's definitely harder on the most recent silicon from Apple. So they're making something where you're going to get a build quality that will make you happy if you're a Mac user, because that's the big thing even about the MacBook Neo: the build quality. But you'll be able to have the modularity, the repairability, the DIY aspect of it, and put whatever Linux you want on it.
Speaker 2:
[05:51] Yeah, I think $1,500 for the pre-built system versus $1,200 for the DIY model. There are going to be people who say, $300, I just don't want to do this. But I think that's sort of the fun. That's the fun of it.
Speaker 1:
[06:06] For a lot of people, especially if you don't want to put Windows on it, I mean, there's no reason not to go for it that way.
Speaker 2:
[06:14] Well, all of you listening probably have thoughts on this. That's why we do the show. DTNS is made possible by all of you listening to us right now, thanks to Ali Sanjabi, Aby Puppy, Dale McKayhee and Jack Burnham.
Speaker 1:
[06:27] Thanks everybody. There's more we need to know today. Let's get to the briefs.
Speaker 2:
[06:34] Let's do it. All right. A lot of Google news coming out today. Google Cloud announced new tensor processing units, or TPUs, which are faster and more efficient. We've got the TPU-8T, aka Sunfish, for creating models, and the TPU-8I, aka Zebrafish, for running services; in other words, training and inferencing. General availability is coming later this year. Its Ironwood TPUs are now generally available as well. Google Cloud and Wiz launched new security agents, including threat hunting and detection engineering to combat zero-day exploits. These sound like Lifetime shows, but no, they're actually security agents. The Vertex AI platform is now the Gemini Enterprise Agent Platform, combining the model selection, model building, and tuning services of Vertex AI with security, DevOps, orchestration, and more, all in one place. It includes access to all Gemini models, as well as models and agents from Anthropic, from Box, from Workday, from Salesforce, ServiceNow, and others. There's a no-code agent builder called Workspace Studio, and an agent development kit that lets companies design their own agents. Each agent has a cryptographic ID for security and can be tested in an agent simulation tool before shipping. Workspace Intelligence claims to better understand semantic relationships between data and apps to better anticipate and execute tasks for you. Semantic arguments? No more, says Google. And a new feature called Ask Gemini lets you use Google Chat like a command line, where you can ask Gemini to just do things for you. Document generation, scheduling meetings, file searching, that kind of stuff.
Speaker 1:
[08:19] Yeah. A lot of this is Google saying, we're going to take the promise of all the Gemini stuff and make it easier, make it work better, make it anticipate things for you, which I think is good and useful if you're working in this space. The TPU stuff is really interesting. They're still offering NVIDIA through Google Cloud. And this is all, by the way, coming out of Google Next, which is its big cloud conference. So the new TPUs, breaking it into inferencing and training, turned a lot of heads, like, oh, okay, we're not just having one model that's out there. That is interesting. Some people think it's a great idea, some people don't, but it certainly gives you more customizability if you're in the enterprise and want to build these kinds of data center cloud instances and all of that. Again, like on the last one, if you are in the enterprise and you're like, this is the thing that really gets me excited and/or angry about Google Cloud announcements, feedback at dailytechnewsshow.com. But this is the big announcement of the day coming out of Google Next. And it does show, if nothing else, that Google wants to come for Claude Code, they want to come for your ability to use agents without having to turn to Anthropic or OpenAI, and they're staying in the race.
Speaker 2:
[09:40] I tried something extra, something I thought was extremely simple, with Gemini this morning, just because I was looking at all this news. I hosted Daily Tech Headlines this morning, and we do some stuff inside WordPress, and it has to be formatted a certain way. And for whatever reason, OpenAI just did something really weird with my formatting, where the heading was heading one, and huge, instead of heading five. I don't think anybody would be that upset about it, but I was like, this just looks crazy. Let me just say, okay, everything that's heading one, make it heading five. And Gemini was like, here's some code that you can use, and it was just a bust. And I thought, all right, well, maybe I should ask some other ones. I didn't really have time to do that. But I think for folks who are out there saying, this all sounds very promising, what can it realistically do for me? I'm not a coder, I'm not an engineer, or maybe I'm just trying to make workflows better for myself. I think there's still a really big learning curve here.
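For what it's worth, the bulk change described here, turning every heading one into heading five, is only a few lines of code if you can get at the post's HTML directly. A minimal sketch in Python; the function name is ours, and it assumes the post body is plain HTML with standard `<h1>` tags rather than WordPress block-editor markup:

```python
import re

def demote_headings(html: str, src: int = 1, dst: int = 5) -> str:
    """Rewrite every <hN>...</hN> pair from level src to level dst."""
    # Opening tags, with or without attributes (e.g. <h1 class="title">).
    html = re.sub(rf"<h{src}(\s[^>]*)?>", rf"<h{dst}\1>", html)
    # Matching closing tags.
    return html.replace(f"</h{src}>", f"</h{dst}>")

print(demote_headings("<h1>Daily Tech Headlines</h1><p>intro</p>"))
# → <h5>Daily Tech Headlines</h5><p>intro</p>
```

Real WordPress posts in the block editor wrap headings in comment markup, so a regex like this is only a sketch; a robust fix would go through the editor itself or an HTML parser.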
Speaker 1:
[10:51] Yeah, and there are so many different versions of Gemini, right? There's the one you use today, which is entirely different from the workspace one that they announced, right? Because this is deployed on the enterprise by your company. And what they're promising is, you'll be able to go into Gemini, or go into Google Chat. You won't even have to go to Gemini. You'll go to Google Chat and say, I need a new slide based on the data that just came in from Sarah. And it'll be able to go into your Gmail, figure out that it came from Sarah, find that data, and then create a presentation for you in the background while you're working on something else, and then pop up in the chat and go, hey, here's your presentation. You know, you're good to go. As well as being able to anticipate things because of what it knows about you and be able to say, like, oh, I know you have these meetings today, so I prepared some notes for you. Some proactive stuff like that. I am very curious how people who are in the Copilot universe versus people who are in the Google universe compare notes on this sort of thing, because I don't think there is a standard way or a standard set of features yet, right? We're still playing around with all this different stuff. That all sounds great in the demo, right? But which things are actually useful in practice is a whole different thing. Totally. PJC Reese noted this on our DTNS subreddit. Bloomberg reports that a small group of unauthorized users claimed in a private Discord that they were able to access Anthropic's Mythos model. Now, that's significant because Mythos has been limited to 40 organizations because of its reported ability to find or exploit security vulnerabilities. Anthropic says it doesn't want to let it fall into the wrong hands. This sounds like it fell into the wrong hands. Now, Bloomberg has verified the documents. So they say, we looked at the stuff. It sounds like they really do have access to Mythos.
The people running it say, we're not using it for security purposes. We're just toying around with it. The Discord chat is apparently people who like to go out and find models and see what they can do. So this is at least gray hat, if not white hat. But it also may be less significant than it looks. One member of the group works for a third-party contractor for Anthropic. And according to Bloomberg, it sounds like they combined that access with some other information they gathered, like guessing the URL. They looked at some leaked Anthropic material and figured, when Anthropic deploys a model, it generally puts it in a certain location, maybe not the exact URL, but this is where Mythos is likely to be. And it turns out it was there, and then they were able to access it. Anthropic is investigating the report, but says it has no evidence that the access went beyond a third-party vendor's environment. So I don't know. It sounds like somebody probably shouldn't be using it in this way, but maybe they didn't break in. So you could still have an objection to this happening. And Anthropic, if it is this dangerous, probably should tighten this up and not let private Discord friends see a third-party contractor's access. However, I don't think it got hacked in the same way that you might think. Meanwhile, in evidence of what Mythos can actually do, Mozilla said it used Mythos to look for bugs in Firefox 150, the most recent version of the browser, and it found 271 of them. Now, none of these bugs are the kind that could not have been found by a human, according to Mozilla. It's just that it found them immediately, where it would have taken humans a lot longer.
Speaker 2:
[14:14] Yeah, which is the first part of the story: unauthorized users claiming, anyway, that they were able to access Mythos. Concerning, certainly. If you are some sort of contractor working with Anthropic to figure out how to lock this down a little better, well, that's one thing. If you're not that person, that's a whole different thing entirely, especially because, again, this has been such a slow rollout, and kind of secretive, really, because they want everything to be solid before the world gets it, for good and bad.
Speaker 1:
[14:57] I'm a little surprised Anthropic isn't coming out a little harder, saying, yes, it was a third-party contractor, and we have talked to that contractor about restricting that access.
Speaker 2:
[15:05] Right. And we're not upset about it. We paid them to do this, for example.
Speaker 1:
[15:10] But: that person was misusing it, and they have been locked out. Like, if this is as threatening as Anthropic wants us to believe it is, and there's debate about that, then they should be quickly saying, that third-party contractor had access, but they don't anymore, because they misused it. That would be the thing I would expect to see from them.
Speaker 2:
[15:30] Yeah. The idea that Mozilla used Mythos to look for bugs in Firefox 150 and found almost 300 of them, that is promising in theory.
Speaker 1:
[15:41] Yeah.
Speaker 2:
[15:41] I think so too. I know that there are certainly humans who have done this work historically, who now have a lot more work on their plate. Now, finding a bug and fixing a bug are not necessarily the same thing. I know in many cases the bugs were fixed, but the fixes still have to go through human review and be signed off on by humans. Firefox is not just going to get updated somewhere without any human intervention. Yeah. And I know that there are people who feel some type of way about this. If you're somebody who historically could squash bugs and was good at it, how much can you realistically do in a day, a week, a month? If you don't have to do a lot of the legwork in a particular day because a lot of that legwork is done for you, sounds great, right? Job gets easier. But now you have more legwork on the other side of it. I don't really work in this scenario, so I can't say for sure. But I would love to know from anybody out there who's like, yeah, my workload is actually worse now, even though we're better off, we are more secure, we have fixed more things.
Speaker 1:
[16:59] The idea is that these tools help you speed through that workload a little better as well. But like you said, you still got to vet it. There's still work that has to be done on it. It does seem like Mozilla is overall pleased saying, we have now squashed a bunch of bugs before anybody could have known they were there, which is a good thing.
Speaker 2:
[17:16] Indeed. Well, if you want to hear us talk about something on the show or something that we've already talked about resonated with you, one way to let us know is in our subreddit. Submit stories and vote on them, reddit.com/r/dailytechnewsshow.
Speaker 1:
[17:35] Now some quick headlines that are just good to know might make you look a little smarter in the future if you know this stuff.
Speaker 2:
[17:40] OpenAI released ChatGPT Images 2.0. I played around with it a little bit this morning. It was kind of fun. It's much better at rendering text, including Latin characters as well as Japanese kanji, Korean, Hindi and Bengali.
Speaker 1:
[17:56] Writer sources say Meta sent memos to its employees saying it will use the keyboard strokes and mouse movements of its employees to help train agents.
Speaker 2:
[18:06] It says it will not be used for performance reviews.
Speaker 1:
[18:10] Yeah.
Speaker 2:
[18:10] And I bet not everybody thinks that's true. I mean, I'm not saying that's what they're doing.
Speaker 1:
[18:16] They would have done it already if that's what they were going to do. Like, they clearly do need more data on agents to make agents better at acting like humans with mouse and keyboard. So even though I don't necessarily trust Meta, I kind of believe them here. They're like, yeah, no, we would have already figured out how to do that for your performance review. We seriously just want those movements.
Speaker 2:
[18:38] Sure.
Speaker 1:
[18:38] Yeah.
Speaker 2:
[18:38] And yeah, there could have been, you know, clandestine initiatives all along. But if I were at Meta right now, I'd just be like, oh, no.
Speaker 1:
[18:47] I'm not sure how useful this would be in any other scenario anyway. If there is one, I'm sure somebody will figure it out and we'll hear about it later. And that's the thing to be suspicious of. But I feel like this is a little less of a hair-on-fire situation.
Speaker 2:
[19:03] China's Anker announced a chip called Thus, which it'll use in headphones to execute neural networks and power features like noise canceling.
Speaker 1:
[19:13] Thus spoke Anker. SpaceX has partnered with Cursor. Cursor is like the big Codex competitor, or Claude Code competitor, out there. They're partnering with SpaceX to develop new software to assist in coding and knowledge work, and here's the key: SpaceX will either pay them for that, or it can just buy Cursor later this year for $60 billion.
Speaker 2:
[19:38] Canada's government introduced the Canadian Space Launch Act to develop space launch capabilities that don't rely on any other country.
Speaker 1:
[19:46] I think they were saying they're the only G7 country that doesn't have this capability. So yeah, good for them. The Attorney General of the US state of Florida announced a criminal investigation into OpenAI related to a shooting at Florida State University in 2025. OpenAI says it has been cooperating with the state on this and released chat logs from the shooter showing, in their words, factual responses to questions with information that may be found broadly across public sources on the internet, and that it did not encourage or promote illegal or harmful activity. But the AG is still going to investigate.
Speaker 2:
[20:21] Google Wallet is now offering live updates for tracking your flight in Android 16+.
Speaker 1:
[20:26] Oh, that is very handy. And thanks to Gadget Virtuoso for mentioning this one in the subreddit. The Internet Engineering Task Force, aka the IETF, has released a draft proposal for IPv8, which would include IPv4 as a subset. If you don't know what that is, these are the numbers behind the domain names, the location numbers for your machines. We've tried to get everybody onto IPv6 because they were running out of IPv4 addresses, and there have been varying levels of success at that. So IPv8 is a way to say, let's just include IPv4 as a subset, add more numbers, and see if that works. It'll be interesting to follow that as well. Those are the essentials for today. Let's dive a little deeper.
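The "include IPv4 as a subset" idea has a precedent you can poke at today: IPv6 already reserves the `::ffff:0:0/96` block so that every IPv4 address has an IPv6 form (RFC 4291's IPv4-mapped addresses). We can't show the IPv8 draft itself, but here's that existing mapping using Python's standard `ipaddress` module:

```python
import ipaddress

# Every IPv4 address has an IPv6 form inside the ::ffff:0:0/96 block.
v4 = ipaddress.IPv4Address("192.0.2.1")
v6 = ipaddress.IPv6Address(f"::ffff:{v4}")

print(v6)              # the same address, seen from the IPv6 side
print(v6.ipv4_mapped)  # recovers the original IPv4 address: 192.0.2.1
```

If IPv8 follows the same playbook, existing IPv4 addresses would keep working as a carve-out inside the larger address space.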
Speaker 2:
[21:12] Let's do it. At its NewFronts presentation, Google announced that Gemini is now the brain behind how ads get bought, how creators get discovered, and what you see on your TV. Andy Beach says this is what it looks like when a foundation model becomes media infrastructure.
Speaker 1:
[21:30] Andy Beach, welcome back.
Speaker 3:
[21:32] Hey, Tom, good to see you.
Speaker 1:
[21:33] Yeah, good to see you too. So Gemini is becoming the operating system of advertising. How much am I overstating it when I say that?
Speaker 3:
[21:42] You know, I have a lot of, I somewhat had problems with this one because I wonder, is it a marketing piece or is it actually a tech piece that's out there? So I don't 100% know the answer, and I think we're going to find it out together over the next couple of months.
Speaker 1:
[21:55] So what is changing here? What is different for how Google is doing this based on what they've said about how they're incorporating Gemini?
Speaker 3:
[22:02] Yeah, so I think, and this is where my hesitation came in, part of this is just a rebranding of what they already had, something called Brand Connect into the Creator Partnerships, which now is powered by Gemini. And so they're taking a set of tools they already had, which was already using Gemini or components of Gemini in some way. And they're now just sort of blanket saying, this is powered by Gemini and it is rebranded as this product piece. And what it really does is allow creators to work more with the studio controls and the studio pieces using natural language prompts versus some of the menus and the pieces that they had before. And then they're connecting in their ad advisor, which the marketers use to go find the content sources that they want. And that's now set up and more optimized to fully use Gemini to create those campaign reports that their businesses rely on. So it is Gemini becoming an operating system for it, but I don't know that that's particularly different from what they were already doing. They've just now given it a name and wrapped a nicer bow around it.
Speaker 1:
[23:14] Yeah. And I guess the front end is a little different because you can use natural language prompts. Right, which you couldn't do before.
Speaker 3:
[23:20] Yeah.
Speaker 1:
[23:21] Does this matter much? How is this going to play out in your opinion?
Speaker 3:
[23:28] Well, I will be curious to see exactly how it plays out. I don't think it's a bad idea, but I also don't think it's guaranteed to work. I feel like we've seen foundation model makers try to just squeeze their model in everywhere, and it hasn't always worked to date. The one that particularly jumps to mind, which is different from this, is when a smart TV has an LLM attached to it, whether that's Gemini or Copilot or something else, and it allows you to make changes to your television or the things that you're watching that way, versus the traditional menus you've had to click through. That hasn't been wildly successful, but it's also very much a consumer product, and it's very hard to change consumer behavior. This, on the other hand, is a technically smaller market. Creators are a big market, but they're still a niche. You're wanting to use studio tools or marketing tools as part of your business. So I think it has more chance of success out of the gate. But I think an LLM by itself doesn't just become an operating system and allow you to replace things. So I think there's going to be a lot of work over the next couple of months for them to truly get an integration that hits the sweet spot for what an AI does, and then what UI, what user experience and user interface, we need in the applications to do the work that we're traditionally used to doing.
Speaker 1:
[24:57] Okay. So it is significant in that it's the moving of one of these foundation models into something. It's still hard to tell how much it has moved in from what you're saying.
Speaker 3:
[25:08] Famously, Google will kill a thing if it's not working. So I think we'll know pretty quickly whether this is going to be successful or not. By the end of the year, if this is taking off and getting traction, they'll continue to grow it and maybe it'll manifest more. But come December, if we have a news article we're talking about where they've quietly killed this off, that'll be the indication.
Speaker 1:
[25:34] Yeah. Or reintroduced a different version of this to replace it, which is also a very Google move.
Speaker 3:
[25:40] A very Google thing to do.
Speaker 1:
[25:41] Yeah. Well, Andy, thank you so much for bringing this to our attention, helping us understand it. Where can folks go to find more of what you do?
Speaker 3:
[25:48] So I write about this all the time. Media and AI are the sweet spot for me. My substack is enginesofchange.ai.
Speaker 1:
[25:56] Thanks Andy.
Speaker 3:
[25:57] Thanks Tom.
Speaker 1:
[25:59] I knew they'd need to save themselves with advertising somehow, right?
Speaker 2:
[26:04] I mean, if Google's good at anything, it's this.
Speaker 1:
[26:07] Yeah. Yes.
Speaker 2:
[26:08] Thank you, Andy. As always, good stuff. We like to end every episode of DTNS with some shared perspectives. With the news of WhatsApp providing a paid tier for cosmetic features, long time WhatsApp user Mohan recalled the first time they actually charged for something.
Speaker 1:
[26:24] Yeah. Mohan wrote, I remember when I first started to use WhatsApp, circa 2008, I had to pay a dollar for a year. This was prior to Meta, known as Facebook back then, buying it and making it free. Do you remember that? They used to ask you to pay 99 cents a year.
Speaker 2:
[26:42] So, circa 2008, I was not using WhatsApp at all. I knew what it was, but it hadn't caught on in my social circle at all. In fact, to be honest, it still hasn't, except for my friends who live outside of the US, in which case, that's the only way that they will talk to me. But no, I don't remember it being a dollar a year.
Speaker 1:
[27:03] Yeah, 99 cents a year. It was crazy.
Speaker 2:
[27:05] I mean, it's going to make some money somehow. Well, if you're thinking about the WhatsApp days of yore or anything else that we talked about today, let us know. Share it with us. Feedback at dailytechnewsshow.com.
Speaker 1:
[27:20] Big thanks to Andy Beach and Mohan for contributing to today's show. Thank you for being along for today's show. And you can keep this show coming. Just become a patron. It's easy. Go to patreon.com/d-t-n-s. See you there.
Speaker 4:
[27:42] The DTNS Family of Podcasts, helping each other understand. Diamond Club hopes you have enjoyed this program.