title Yes. Exactly. - The Zero-Day Ticking Clock

description Security leaders warn the era of AI-driven bug hunting has arrived, with Mythos uncovering hundreds of overlooked vulnerabilities in code bases as trusted as Firefox. Are defenders ready for the avalanche of exploits and the frantic race to patch?

A disgruntled developer discloses multiple Windows 0-days.
Microsoft purchases its own bugs in massive campaign.
VeraCrypt & Wireshark suddenly lost their dev accounts.
A serious problem with re-captured domain names.
How AI might help to secure open source repositories.
A listener wonders what we thought of Project Hail Mary.
Cybersecurity professionals tell us what Mythos means.
Show Notes - https://www.grc.com/sn/SN-1075-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts!

Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:
canary.tools/twit - use code: TWIT
joindeleteme.com/twit promo code TWIT
hoxhunt.com/securitynow
meter.com/securitynow
zscaler.com/security

pubDate Wed, 22 Apr 2026 02:25:44 GMT

author TWiT

duration 9628000

transcript

Speaker 1:
[00:00] It's time for Security Now. Steve Gibson is here. He's got some more thoughts on Project Mythos, the new super smart AI that can find security flaws. He's saying, you know what, this isn't hype, and he's got some real evidence behind it. We'll talk about Microsoft buying its own bugs back and ignoring other ones. And what does Steve think of Project Hail Mary? Yeah, even that, coming up next on Security Now.

Speaker 2:
[00:34] Podcasts you love from people you trust.

Speaker 1:
[00:38] This is TWiT. This is Security Now with Steve Gibson. Episode 1075 recorded Tuesday, April 21st, 2026. Yes, exactly. It's time for Security Now. Yay, Tuesday has come. You've been waiting all week long to hear from this guy right here, Steve Gibson, the man in charge at Security Now. Hey, Steve, good afternoon to you.

Speaker 2:
[01:06] Leo, great to be with you again, as always. I got some feedback from the 20,000 plus listeners who joined my email list to receive the show notes.

Speaker 1:
[01:19] Yeah.

Speaker 2:
[01:20] As a consequence of my own schedule over the weekend, I needed to start work on this Friday. I finished Saturday afternoon and immediately sent out the show notes. The feedback was, wait, is it Sunday? No.

Speaker 1:
[01:38] People are paying attention, Steve. You can't pull one over on them.

Speaker 2:
[01:40] Unfortunately, they're paying very close attention, because my emailing system assumes that I'll be doing the emailing on the morning of the podcast, so it auto-fills the day's date at the top of the email. But since I'm sending the mailing out typically on Sundays and occasionally on a Saturday, I have to go in and remove the placeholder for the auto-fill and replace it with the actual date. Unfortunately, something is wrong with me, because this is not the first time I've put in 2024 as the year. And so I sent it out as 4-21-2024. And so several of our sharp-eyed listeners said, uh, I don't think so.

Speaker 1:
[02:31] Have we traveled in time?

Speaker 2:
[02:33] And now I'm thinking I'm just going to leave it with auto-fill, and it'll date the e-mail on the send date rather than the podcast's date, since, you know, that would work too. Anyway, we're going to have some fun. I hope that today's podcast will put to rest any question about what Mythos means, because two days after last Tuesday's podcast, the entire industry of security professionals, Bruce Schneier, whom we know well, Google's CISO, I mean, a who's who, all co-signed, authored, and produced a document that is intended to get the industry's attention, because they all agree with me. So I titled today's podcast, Yes, Exactly, which is meant to say, I love the name.

Speaker 1:
[03:35] Yes, exactly.

Speaker 2:
[03:38] Yes, it is what I said last week. But I want to share something. Remember I said that last week's show notes was revision three? In the first revision, I wrote, this could be the most significant podcast in the history of Security Now. I did some deep breathing, and in one of the follow-on revisions I removed that sentence. But my point was, we're talking about something potentially big. Of course, my working title then was Mythos: Marketing or Mayhem, because this could be a big deal. So anyway, we've got an amazing document that I'm going to share. And the other thing that is useful is, I've heard from some listeners who are having a hard time convincing upper management that they need to respond. Because of course, any response is going to be expensive, right? I mean, it's going to require expenditures of talent, equipment, you know, upgrades, upheaval, whatever.

Speaker 1:
[04:52] Well, that's why the flaws are still there in the first place.

Speaker 2:
[04:56] Exactly. And then there's the issue of the new things that haven't yet been found. So that's one of the things that this document offers. And in fact, this is also the first time I've had two shortcuts in one podcast for the same file, because you can get to this two ways, grc.sc/mythos or, by episode number, grc.sc/1075. Because I want there to be no possible reason that our listeners can't get the PDF and send it up to the C-suite, folks, because it is written for them. There are takeaways and bullet points and priority lists, and this is what you have to do, because a tsunami is very likely coming. And in fact, I realize there's a... I'm already giving this away. I've got this whole thing in my head. There's very much a Y2K aspect to this, right? Think about it, you know? Everyone who said after we went into the year 2000, oh, look, that was nothing. Nothing happened. Well, folks, there was a reason nothing happened. It's because everybody who needed to actually took it seriously and prevented something from happening. So anyway, we're going to have, I think, a great follow-up today to last week's. Last week was just my opinion. Today, we've got everybody's opinion. But we're going to talk about a disgruntled developer who has been disclosing multiple Windows zero-days because he's upset with Microsoft. Microsoft purchasing its own bugs in a massive campaign. The story behind something from a couple of weeks ago that many of our listeners wrote to me about. I didn't know what to say about it. I could have talked about it last week, but actually I bumped it because last week's podcast was full, about how VeraCrypt and Wireshark and some other projects suddenly lost their dev accounts at Microsoft. They were like, what happened? They were unable to do revisions of their software for some reason. We now have the whole story there. It's good I waited a week, because we were going to talk about it anyway.
We've got a serious problem of recaptured domain names, which is reminiscent of the bucket reuse that we talked about with AWS a couple of weeks back. Also, a listener feedback-inspired exploration of how exactly AI might best help to secure open-source repositories. A listener wrote to say, hey, I never heard you and Leo talk about your opinions of Project Hail Mary. Could you say a few words? So we will. And then we're going to end with what cybersecurity professionals across the industry tell us about what Mythos means.

Speaker 1:
[08:04] Oh, that'll be interesting.

Speaker 2:
[08:06] And of course, again, the title of today's podcast is, Yes, Exactly.

Speaker 1:
[08:12] That gives you some idea of what's to come. Awesome. We also have a lovely picture of the week, which I haven't seen, but I know it's lovely because it always is.

Speaker 2:
[08:22] I think this one, this one is a bit of a hoot. So yeah, a bit of a hoot coming up.

Speaker 1:
[08:30] You're watching Security Now. We're glad you're here, and we are so glad to have our sponsor with us, the wonderful folks at Thinkst Canary. We've talked about these so many times; I'll give you a quick recap if you're new here. They're honeypots, and honeypots are notoriously difficult to write well. But the idea is you put something on your network that's very attractive to hackers. And you might say, well, why would a hacker be in my network? Well, that's the question, isn't it? I think a lot of times we assume we've got such great perimeter defenses there's no way a bad guy could get in, except, if you've listened to this show even a little, you know that breaches are happening with increased frequency all the time now. And that means somebody's got into your network. And the real issue, in my opinion, and I don't think I'm alone in this, is how do you know? Because these bad guys are often very good at covering their tracks. That's one of the real skills of being a hacker: you delete logs, you hide any record of your presence. So how would you know if somebody's in your network? The stats say most people don't. On average, a company will not know that they've been breached for 91 days. That's a problem, because that means a bad guy has three months to go through your stuff: exfiltrate information they could use to blackmail you or your customers, with huge potential for reputation damage, plant little time-bomb ransomware that could go off. I mean, they could do all sorts of damage. You need to know the minute somebody's in your network. That's why you need a Thinkst Canary. It's a honeypot that is, A, designed by people who have been doing this for years. They are experts in breaching networks. They've been teaching private industry and government how to do it for years. They're super smart.
They've written something that is highly secure, because you don't want to put something in your network that's not. So it is absolutely rock solid. One of the things they do that I like, and I asked them about this when I saw them at RSAC a couple of weeks ago: they're always pushing out updates. They're always adding features but also fixing bugs, making sure that your Thinkst Canary is solid as a rock, so you can put it on your network with confidence. It's a honeypot, very easy to deploy. You plug it in. I've got mine right here. It has two connectors on it, a USB connector for power, and it's got a network connection. It actually just looks like an external hard drive, just a black box. Nobody will really notice it. It looks like many devices hooked into your network, but the cool thing about this is it can impersonate almost anything. So you go into your Thinkst Canary console and you'll choose the personality you're going to apply to this Canary. Mine has been, for a long time, a Synology NAS, but it could be a SharePoint server, a Windows server, a Linux server. You can turn on any ports you want. You can light it up like a Christmas tree, give it every possible service, or just have a few select ones, you know, like Windows file sharing. It could be a SCADA device. It could be an SSH server. It also, and this is really neat, can create files that phone home. These files can be really anything: a WireGuard configuration, and by the way, hackers love that, because that means they can get into other networks that are theoretically secured, right? It could be a spreadsheet. You can put them on your local drive, but even on your cloud drive. So I have a few spreadsheets that say things like "payroll information," something a bad guy cannot help but open.
The thing is, the minute the intruder, whether it's an outside hacker or a malicious insider, tries to open that fake SSH server or tries to access that spreadsheet that says "payroll information," your Thinkst Canary will immediately tell you: there's somebody in the network, you've got a problem. No false alerts, just the alerts that matter. In any way you want, by the way: text message, SMS, web hooks, Slack, syslog, of course it could be email, any way you want. They're very flexible. They even have an API, so you could write your own tool if you want. This is so cool. You choose a profile for your Thinkst Canary device, you register it with the hosted console, that's how it does the monitoring, that's how it generates the notifications, and then you just wait. You sit back and relax. An adversary cannot help but make themselves known by accessing your Thinkst Canary. Now, let me tell you, a big bank might have hundreds of these. You certainly should have one for every network segment, in all the nooks and crannies of your network, right? And let's not forget, you can put them in the cloud too. Visit canary.tools/twit. Let's say you want five of them, okay? That's 7,500 bucks a year. You get the five Thinkst Canaries, you get your own hosted console, you get upgrades, support, and maintenance. Oh, and if you use the code TWiT in the "How did you hear about us?" box, you get 10% off the price, and not just for the first year: forever, for as long as you have your Thinkst Canaries. You should also be reassured that you can always return your Thinkst Canaries. They have a very generous two-month money-back guarantee for a full refund. I should point out, though, that this is their 10th year advertising with us, 10 years, and during all that time that full refund has never been claimed. Not once.
Visit canary.tools/twit, enter the code TWiT in the "How did you hear about us?" box, 10% off for life. canary.tools/twit, use the offer code TWiT. We thank them so much for 10 years of support for what Steve's been doing here at Security Now. That is, I think, probably our longest standing sponsor.

Speaker 2:
[14:19] Monitoring your network is one thing you can do to secure it, but you also need to keep an eye on what comes through the door.

Speaker 1:
[14:27] You need to know. Yeah. All right. I am prepared to show you all the picture of the week.

Speaker 2:
[14:34] I gave this picture the caption, hyphen usage is uncommon, but there are times when there's no substitute. All right.

Speaker 1:
[14:48] This is going to be another punctuation issue.

Speaker 2:
[14:50] Hyphen usage is uncommon, but there are times when there's no substitute.

Speaker 1:
[15:05] Okay. Okay. Somebody took this quite literally. Do you want to describe it?

Speaker 2:
[15:10] Yes. So we have a sign. You can sort of see it, a keyboard on a shelf above and maybe a monitor, and there's a power strip in the back. There's some sort of, you know, sensitive electronics and PC stuff.

Speaker 1:
[15:25] We had signs like this all over our studio because I always bring my coffee in and spill it. Yes.

Speaker 2:
[15:30] And the sign says, "No drinks back here unless they have a screw on top." And then it says, "Thanks, The Management." Now, they clearly meant a screw-on top, not a screw on top. So what we have is a styrofoam cup with one of those little plastic lids and a long, about an inch-and-a-half, wood screw sitting on top of the cup. So it has a screw on top, which is all you need.

Speaker 1:
[16:05] It satisfies all the requirements.

Speaker 2:
[16:07] Yes. And had it said "screw-on," with a hyphen, then it would have been clear that you didn't want a screw on the top of the cup. You wanted a screw-on top.

Speaker 1:
[16:19] Again, punctuation...

Speaker 2:
[16:22] Yes.

Speaker 1:
[16:22] Can be very important.

Speaker 2:
[16:23] Yes. Not many people use hyphens, but I like to use them.

Speaker 1:
[16:28] I like hyphens.

Speaker 2:
[16:29] Yeah, I do too. Yeah. It's not quite clear when you need them, but I just say, hey, err on the positive side, in the direction of hyphenating, because what the hell? Okay, so.

Speaker 1:
[16:39] We mentioned last time that the Patch Tuesday last week was the second biggest Patch Tuesday of all time. I mean, big numbers.

Speaker 2:
[16:47] Yes. And we don't know yet whether that is Mythos related, but we know that Microsoft is one of the companies named as having been given access to Mythos by Anthropic. And you have to wonder, because I've heard people talk about, like somebody was saying on some show, oh, what do they think? There's no way that it's not going to escape, that it's not going to get out. It's like, no, they're giving them access to the model online. They're not giving them a Mythos to go.

Speaker 1:
[17:22] No, there is no Mythos to go. Shall we wrap it up for you, or would you like to wear it out?

Speaker 2:
[17:27] Please make sure it's not like one of those unreleased iPhones that gets left at a bar when you walk out, you know. So anyway, there's no problem there. But the fact that it's cloud-based means that Microsoft would need to trust their competitor, because of course they're all in with OpenAI, whereas this is coming from Anthropic. They would need to trust Anthropic with their source code uploads into Anthropic's cloud in order to have Mythos rummaging around in Microsoft source. So there's that. But then on the other hand, everybody is going to have to trust Anthropic in that fashion, because it's their cloud. There's no local Mythos yet, as far as I know. Anyway, last Thursday the 16th, Bleeping Computer's headline was "New Microsoft Defender 'Red Sun' zero-day proof of concept grants system privileges." Red Sun is the name it's been given. As we know, elevation of privilege is almost as important as remote code execution, because oftentimes the remote code that you're executing is in the context of a user login, where the whole OS has security wrapped around the user to keep the user from misbehaving. You need to first get into the user account, but then you need to get out of the user account into the system account, into root. Again, elevation of privilege is a big deal. So Bleeping Computer's piece told the story of the disgruntled developer, and I'll share some of what this guy wrote, because it sounds like he himself is a little more than disgruntled. He's a little sketchy. But anyway, this guy has been publishing, and this is not even the first, fully working proof-of-concept exploit code for his discoveries, plural, of elevation of privilege vulnerabilities in both workstation and server Windows. Versions from Server 2019 on have been vulnerable to this.
So on the 16th, Bleeping Computer's headline was "New Microsoft Windows Defender zero-day proof of concept grants system privileges," and the following day, on the 17th, they followed up that reporting with another piece titled "Recently leaked Windows zero-days now exploited in attacks." In other words, this guy put the proofs of concept up on GitHub, and the next day bad guys had found them and were exploiting them to hurt Windows users. So, not good. Bleeping Computer said threat actors are exploiting three recently disclosed Windows security vulnerabilities in attacks to gain SYSTEM or elevated administrator permissions. Since the start of the month, the security researcher known as Chaotic Eclipse or, oh, there's a hyphen, Nightmare-Eclipse, so he's hyphenating, has published proof-of-concept exploit code for all three security issues in protest of how the Microsoft Security Response Center, you know, MSRC, handled the disclosure process. And we're not getting much visibility into what that means, exactly. Bleeping said two of the vulnerabilities, dubbed Blue Hammer and Red Sun, are Microsoft Defender local privilege escalation flaws, while the third, known as Undefend, can be exploited as a standard user to block Microsoft Defender definition updates. At the time of the leak, the security flaws these exploits targeted were considered zero-days by Microsoft's definition, which, remember, is a little different than the industry's, since they had no official patches or updates to address them. Normally, zero-day is about surprise. In this case, it's about response, essentially, to something that hasn't yet been patched or updated. They said on Thursday, this is of last week, Huntress Labs security researchers reported seeing all three zero-day exploits deployed in the wild, meaning in use to hurt people, with the Blue Hammer vulnerability being exploited since April 10th.
They also spotted Undefend and Red Sun exploits on a Windows device that was breached using a compromised SSL VPN user account, showing evidence of what they're calling hands-on-keyboard threat actor activity, meaning not just automated scanning, but an attacker logged in through an SSL VPN, hitting keystrokes in order to explore and exploit the vulnerable connection. They said, while Microsoft is tracking the Blue Hammer vulnerability as CVE-2026-33825 and has patched it in the April 2026 security update, it got fixed last Tuesday, which was Patch Tuesday of this month, attackers can use the Red Sun exploit to gain SYSTEM privileges on Windows 10, Windows 11, and Windows Server 2019 and later systems when Windows Defender is enabled. It actually uses Defender in order to leverage its attack, so, weirdly, disabling Defender disables the zero-day. And they said it works even after applying the April Patch Tuesday updates. So this is a vulnerability post-Patch Tuesday of this month, and we don't know when it's going to get fixed. Maybe an emergency out-of-cycle update, who knows? They said that the disgruntled researcher explained, quote: When Windows Defender realizes that a malicious file has a cloud tag, meaning, you know, like, remember, the Mark of the Web, a tag files are able to get saying we're going to treat this differently because you downloaded it off the internet. The researcher continues, quote, for whatever stupid and hilarious reason, the antivirus that's supposed to protect decides it would be a good idea to just rewrite the file it found to its original location. The proof of concept abuses this behavior to overwrite system files and gain administrative privilege, unquote. And we'll actually get a little more detail about that in a second.
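For readers curious about the "cloud tag" idea Steve alludes to: it is conceptually similar to the Mark of the Web, which Windows records in an NTFS alternate data stream named Zone.Identifier attached to downloaded files. The stream's contents are a tiny INI fragment. The following Python sketch parses such a fragment; the helper name zone_id_from_motw and the sample text are illustrative assumptions, not something from the show.

```python
# The Mark of the Web is stored in an NTFS alternate data stream named
# "Zone.Identifier" attached to a downloaded file. Its contents are a
# small INI fragment; ZoneId=3 means the file came from the Internet.
# zone_id_from_motw and the sample below are illustrative only.
import configparser
from typing import Optional

ZONE_NAMES = {
    0: "Local Machine",
    1: "Local Intranet",
    2: "Trusted Sites",
    3: "Internet",
    4: "Restricted Sites",
}

def zone_id_from_motw(stream_text: str) -> Optional[int]:
    """Parse Zone.Identifier stream contents; return the ZoneId or None."""
    parser = configparser.ConfigParser()
    try:
        parser.read_string(stream_text)
        return parser.getint("ZoneTransfer", "ZoneId")
    except (configparser.Error, ValueError):
        return None

if __name__ == "__main__":
    # Typical stream contents after a browser download.
    sample = "[ZoneTransfer]\nZoneId=3\nReferrerUrl=https://example.com/\n"
    zid = zone_id_from_motw(sample)
    print(zid, ZONE_NAMES.get(zid))  # 3 Internet
```

On a Windows NTFS volume the stream itself can typically be opened as `open(r"download.exe:Zone.Identifier")`; this is the tag that browsers write and that Windows consults when deciding whether to treat a file as having come from the internet.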
So they wrote, when Bleeping Computer contacted Microsoft earlier this week for more information on the disclosure reported by the anonymous researcher, a Microsoft spokesperson told Bleeping Computer, and of course this is going to be as helpful as they generally are, quote: Microsoft has a customer commitment to investigate reported security issues and update impacted devices to protect customers as soon as possible. We also support coordinated vulnerability disclosure, a widely adopted industry practice that helps ensure issues are carefully investigated and addressed before public disclosure, supporting both customer protection and the security research community, unquote. So thank you for that, Microsoft. Two days before that, on Wednesday, and this is me talking now, this person going by the moniker Chaotic Eclipse posted his own diatribe over on Blogspot. And I think it's worth sharing, since it gives us some impression of who's doing the disgruntling. Dated Wednesday, April 15th, the Blogspot post was titled "Public disclosure: a response for CVE-2026-33825 patch." It reads: Here is the code, enjoy. And then he's got a GitHub link, github.com/nightmare-eclipse/redsun. So it looks like Nightmare-Eclipse is the guy's name, and Red Sun is the exploit. He continues: Now to address what some media articles wrote. First of all, I want to talk about MSRC's official response regarding Blue Hammer. That's his previous release of a zero-day exploit proof of concept. He then quotes Microsoft's statement, the one Bleeping Computer just showed us, for what it's worth: Microsoft has a customer commitment to investigate reported security issues and update impacted devices. And he says: This is a very generic response, almost as if they don't care. And they don't. Why? Because MSRC was fully aware of this public disclosure. A case was filed but was dismissed by them.
And they are also aware that this one will be disclosed. But again, they are ignorant. Period. This is, again, Mr. Disgruntled. Normally, he writes, I would go through the process of begging them to fix a bug. But to summarize, I was told personally by them that they will ruin my life. And they did. And I'm not sure if I was the only person who had this horrible experience, or a few people did, but I think most would just eat it and cut their losses. But for me, they took away everything. They mopped the floor with me and pulled every childish game they could. It was so bad, I mean so bad, that at some point I was wondering if I was dealing with a massive corporation or someone who was just having fun seeing me suffer, but it seems to be a collective decision. Wow. I know. One other thing: they do everything but support the research community. I won't disclose details, but they sabotage people a lot. I mean, just look at the past. Microsoft is the only major company with a track record of multiple vulnerabilities being publicly disclosed just because the researchers were so upset by how MSRC treated them. Unfortunately, the folks who have the capacity to stop those disclosures not only don't care, but also seem to push harder for even worse exploits to be released. I didn't want to be evil, but they are actively poking me to start releasing RCEs, which I will be doing at some point, dot, dot, dot. He finishes: I will personally make sure that it gets funnier every single time Microsoft releases a patch. Okay. So we've talked before about vulnerability discoverers feeling that their brilliance is not being sufficiently recognized or rewarded. In an earlier posting on March 26th, last month, this person wrote, quote: I never wanted to reopen a blog and a new GitHub account to drop code. But someone violated our agreement and left me homeless with nothing. They knew this would happen and they still stabbed me in the back. Anyways, this is their decision, not mine. Unquote.
Okay. So my presumption, without knowing anything specific, is that Microsoft almost certainly treated this researcher the same way they treat everyone else, but he or she believed that they deserved special treatment. You know, we've certainly shared horror stories in the past about the way some researchers have been treated. But Microsoft is not evil. It's full of good people, a great many good people. The result is that it's a big lumbering machine that doesn't care about anything, but only because caring is not what big lumbering machines are optimized to do. This researcher appears to have adopted an "I didn't want to, but you made me do it" rationale for his actions. Reading between the lines, my guess is maybe he was counting on receiving a big bug bounty payout that he desperately needed, which never came. It sort of sounds like he may have released the proof of concept before Microsoft went through the formal disclosure process and so blew his opportunity, because he pre-released. And now, of course, he's blaming Microsoft for this. It's unfortunate that this person is having trouble with life. I looked at the details of the proof of concept that he designed, and it's a slick bit of work. The well-known security researcher Will Dormann, from whom Bleeping Computer often seeks confirmation of complex issues, posted about this new Red Sun exploit over on Mastodon. Will wrote: From the same author as Blue Hammer, we now have Red Sun. This works around 100 percent reliably to go from unprivileged user to SYSTEM against Windows 11 and Windows Server 2019 and beyond, with April 2026 updates, as well as Windows 10. As long as you have Windows Defender enabled, any system that has cldapi.dll should be affected. CLDAPI sounds like the Windows Cloud Files API, and it is. In the next quote from Will, he refers to EICAR, E-I-C-A-R. That's the abbreviation for the European Institute for Computer Antivirus Research.
The file they produced, which itself is known as EICAR, is a popular pseudo-malware test file that can be used to deliberately freak out any good AV tool without actually containing or doing anything malicious. It's just used as a test file to see if AV detects it. So in a follow-on Mastodon posting, Will explains what this thing does. He says: This exploit uses the Cloud Files API, writes EICAR to a file using it, meaning using the Cloud Files API, uses an oplock to win a volume shadow copy race, and uses a directory junction reparse point to redirect the file rewrite with new contents into C:\Windows\System32\TieringEngineService.exe. At this point, the Cloud Files infrastructure runs the attacker-planted TieringEngineService.exe, which is the Red Sun exploit itself, as SYSTEM. And he writes: game over. In other words, this is the proof of concept that the disgruntled developer engineered, which, as I said, is some slick work. I mean, that's kind of tricky to figure out. Anyway, our primary takeaway here is that all fully patched Windows desktop and server systems, patched as of last week's mega Patch Tuesday, which as you said, Leo, was the second biggest ever, but apparently not quite big enough, are currently vulnerable to this exploit, which is now being actively used in the wild. It's not the end of the world, since something bad must first get into a machine so that it's able to trick Windows Defender into performing that odd file-rewrite dance. That allows attacker-provided code to be run with full SYSTEM privilege, but the attacker has to first get in there and provide the code. Still, Huntress Labs is observing it under active use. It would be nice if Microsoft were to fix this before May's Patch Tuesday, which is still a full three and a half weeks away. This is a bad problem, and Microsoft didn't get there in time; they should probably get this updated.
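Since EICAR comes up here, it's worth seeing just how simple the test file really is. The following Python sketch writes the standard 68-byte EICAR string to a temporary file; the file name is an arbitrary choice for this sketch, and on a machine with real-time AV protection enabled you should expect the file to be flagged or quarantined the moment it lands on disk.

```python
# The EICAR test string: a standardized, harmless 68-byte sequence that
# any well-behaved antivirus engine is expected to detect as if it were
# malware. Handy for checking that real-time protection actually works.
# The temp-file name below is arbitrary, chosen only for this sketch.
import os
import tempfile

EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

def write_eicar(path: str) -> int:
    """Write the EICAR test string to path; return the byte count (68)."""
    data = EICAR.encode("ascii")
    with open(path, "wb") as f:
        f.write(data)
    return len(data)

if __name__ == "__main__":
    path = os.path.join(tempfile.gettempdir(), "eicar-test.txt")
    n = write_eicar(path)
    print(f"wrote {n} bytes to {path}")
    # A live AV scanner may quarantine the file before this remove runs.
    if os.path.exists(path):
        os.remove(path)
```

This is also why the exploit writing EICAR via the Cloud Files API is such a clean trigger: Defender is guaranteed to react to it, which is exactly the reaction the race condition abuses.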

Speaker 1:
[35:28] In the meantime, I think it's hilarious. Turn off Windows Defender.

Speaker 2:
[35:32] Yes, actually, that is the only mitigation: turn off Defender, because it's Defender, this weird behavior that Defender has, that's being used. I'm sure Microsoft knows why they're doing this, but it ends up you can leverage it in order to get yourself attacked. Yeah. I wouldn't turn off Windows Defender, though. I don't know what I would do. Maybe the 0patch guys; I didn't think to look over at 0patch.com. Maybe they have a quickie patch. They offer free patches for vulnerabilities that are known but for which Microsoft has not yet provided fixes, and they might do that. That might be an opportunity.

Speaker 1:
[36:18] I'm checking right now just to see.

Speaker 2:
[36:20] I'll let you know. Cool. Meanwhile, Microsoft has been buying up their own bugs. While we're on the topic of Microsoft and bugs, Bleeping Computer also reported that Microsoft has been breaking records for bug bounty payouts. Before I take note of the irony inherent in this, I'll share what Bleeping Computer wrote. They said Microsoft has awarded 2.3, get this, 2.3 million dollars to security researchers after receiving nearly 700 submissions during this year's Zero Day Quest, which is the name of it, Zero Day Quest, you know, ZDQ, hacking contest. Tom Gallagher, Vice President of Engineering at the Microsoft Security Response Center, that MSRC we were just talking about, said that over 80, eight-zero, of the flaws found during the live event at Microsoft's Redmond campus were high-impact cloud and AI security vulnerabilities. So that's just great. We've all been using that software, which has 80 high-impact cloud and AI security vulnerabilities. And actually, Leo, that may account for more of the April Patch Tuesday than Mythos.

Speaker 1:
[37:47] So you don't really need Mythos, there's plenty to go around.

Speaker 2:
[37:50] Well, just pay, because 2.3 million dollars. I mean, what we've seen is this bug bounty concept paying off. You need to motivate researchers who have, you know, only so many hours in the day to go running around chasing after Microsoft vulnerabilities. Although it doesn't seem there's any scarcity of those. So Bleeping Computer continues. Gallagher said, quote, During the 2026 live hacking event, Microsoft partnered with the global security research community, representing more than 20 countries and a wide range of professional backgrounds from high school students to college professors. Researchers conducted all testing within authorized environments in accordance with Microsoft's rules of engagement, demonstrating potential impact without accessing customer data or other tenant systems. Within these constraints, researchers identified critical paths involving credential exposure, SSRF, server-side request forgery chains, and cross-tenant access. Lots of cloud, lots of AI. He wrote, or rather Bleeping Computer said, last August Microsoft announced that it would increase the prize pool at this year's Zero Day Quest hacking contest to five million in bounties, which the company described as the largest hacking event in history. In 2025, Zero Day Quest also generated significant participation from the security community following Microsoft's offer of four million in rewards for vulnerabilities in cloud and AI products and platforms. So they bumped it from last year's four million to this year's five million. After the hacking competition concluded, Microsoft announced it had paid 1.6 million in rewards after receiving more than 600 vulnerability submissions. So last year, they offered four million and paid out 1.6 after 600 vulnerability submissions. This year, they offered five million and paid out 2.3 million. So lots more actual problems found.
The Zero Day Quest contest is part of Microsoft's Secure Future Initiative, a cybersecurity engineering effort launched in November 2023, following a scathing report from the Cyber Safety Review Board of the US Department of Homeland Security that found the company's security culture inadequate and requiring an overhaul. Of course, we talked about this at the time. I mean, it really raked them over the coals, like for charging excess money for logging security events. Sure, they could just turn it on, but let's make some more money from having so many bugs that people have to log in order to track them. Right. Anyway, Bleeping said last August, Gallagher said, as part of our Secure Future Initiative, we will transparently share critical vulnerabilities through the CVE program even if no customer action is required. Here, and I hate this word: learnings from the Zero Day Quest will be shared across Microsoft. They're sharing their learnings.

Speaker 1:
[41:20] They love that word. I don't know why.

Speaker 2:
[41:22] But they love it. I mean, yeah.

Speaker 1:
[41:24] It's corporate talk, I think.

Speaker 2:
[41:26] The things we learned is the way I learned my learnings. Anyway, to help improve cloud and AI security in alignment with SFI's core principles, secure by default, by design, and in operations. Apparently, however, not in code. Finally, earlier last August, Microsoft announced it had paid a record $17 million to 344 security researchers across 59 countries through its bug bounty program between July 2024 and June 2025. You know, there's a mixed blessing in bragging about how many millions of dollars you have paid to 344 security researchers who have found really bad problems with your software. On the one hand, okay, I think it's great that Microsoft software will now, or will soon be, you know, that many bugs fewer in cloud and AI security vulnerabilities. That's of course good for everyone. But as I said, it seems a little ironic to have Microsoft gleefully bragging about how many hundreds of bugs researchers were able to find throughout their products when they were sufficiently motivated to do so. So anyway, as we know, none of those bugs should have been there in the first place to be found. But hey, Microsoft has way more cash on hand than it knows what to do with. So dangling increasing quantities of cold, hard cash in front of security researchers who will then be motivated to go bug hunting? That's definitely money well spent. Let's have more of it, because Microsoft can certainly afford to pay. We're going to talk about the mysterious disappearing developer accounts, Leo, after you tell our listeners about one of our sponsors who's still here with us. They've not mysteriously disappeared.

Speaker 1:
[43:31] No, unlike those developer accounts. I just, I like the, what was it? Hands-on keyboard. That's another good one.

Speaker 2:
[43:38] Hands-on keyboard attacker. You can see like, Oh, Boris is hunting. Boris is hunting and pecking.

Speaker 1:
[43:46] Wow.

Speaker 2:
[43:46] Yes.

Speaker 1:
[43:47] Wow. Uh, Retcon5 in our Discord says the term learnings was not in common use in the 19th and 20th centuries, although the countable-noun sense of learning, as in things learned, dates to Middle English, and the plural learnings to early modern English. Note that early use of learnings often had the sense or connotation of teachings. Yeah. I've heard teachings before.

Speaker 2:
[44:11] Yeah.

Speaker 1:
[44:12] As was the case with learn generally, it's found occasional use for centuries, including by Shakespeare, in parallel construction.

Speaker 2:
[44:20] If you're subject to teachings, what you come away with is learnings.

Speaker 1:
[44:24] Learnings. I don't like it either. I agree with you a hundred percent. It feels like corporate speak.

Speaker 2:
[44:29] It feels like, what are you smoking on the peninsula?

Speaker 1:
[44:35] All right. Let's take a little time out. Steve will caffeinate. I will educate this episode of Security Now and advocate, advocate this, and I hope you learn it. Learning 8. This episode of Security Now brought to you by Delete Me. Actually, this is something you might want to learn about. Ever wonder how much of your personal data is out there on the internet for anyone to see? We were talking on Sunday on TWiT before the show about this whole data broker issue and, like, where do they get the information from and who are they selling it to? What we do know is there are more than 500 data brokers out there, collecting every little bit of information they can. Your name, your contact info, your social security number, your home address, even information about your family members. They collect it in a variety of ways. I mean, that's why we're always talking about browser fingerprinting and things like that. Frankly, companies sell it to them, and it's not at all unusual for them to offer libraries for companies to incorporate into their mobile apps, because mobile is great for this, because you put everything in your phone, right? So there's a lot of ways they get this information. The fact is they know it, and worse, they will sell it to anyone. They don't care. To marketers, of course, but also to hackers, to harassers, to law enforcement, to other countries, nation states. Anyone on the web can buy your private data, including your social security number, and it's completely legal in this country. It shouldn't be. I know it shouldn't be, but it is. And what does that lead to? Well, I mean, your imagination is the limit. Identity theft, phishing attempts, doxing, harassment, hacking. But I think it's very important to know that there is something you can do about it. You can protect your privacy with Delete Me. We learned about Delete Me when we started getting phished a lot.
And what was weird is the phishing texts knew way too much about our company organization, about our management, about their employees, their direct reports, phone numbers. And of course, the more personal information a phishing attempt has, the more effective it is because it's believable, right? We knew immediately there was a problem. Fortunately, our team is pretty savvy. They didn't fall for it, but we wanted to immediately go out and get this personal information off the internet. So we subscribed to Delete Me. Yes, it's a subscription service. It removes that personal info from hundreds of data brokers. It's very simple. You sign up. You give it exactly the information you want deleted. And then the Delete Me experts take it from there. And this is good because they know exactly who the data brokers are. Every one of the data brokers is required by law to have a removal page, but they're sneaky. They hide it. It's in a different spot for everybody. It's not immediately obvious. But that's why you need these Delete Me experts, because they know exactly where to go. So Delete Me will send you regular, personalized privacy reports showing what info they found, where they found it, what they removed. And this is why it's a subscription. It's not a one-time service, because even if they remove it, it comes back. It's like cockroaches. These data brokers, some of them, quote, go out of business, change their name, and then just continue on. Some of them, even after you delete the information, will start repopulating it. So Delete Me is always working for you, constantly monitoring and removing that personal information you don't want on the internet. You could, I guess, do it yourself, but it's really hard. To put it simply, Delete Me does the hard work for you of wiping your family's information, your personal information, your company's personal information from data broker websites. Take control of your data, keep your private life private. Sign up for Delete Me.
We've got a special discount today for our listeners: 20% off your individual Delete Me plan when you go to joindeleteme.com/twit and use the promo code TWIT at checkout. Now, that's the only way to get 20% off. joindeleteme.com/twit, enter the code TWIT at checkout. Go directly to joindeleteme.com. That's one word, joindeleteme.com/twit. Please do us a favor, use the offer code TWIT; you do yourself a favor, you get 20% off. joindeleteme.com/twit, and the offer code is TWIT. We thank them so much for supporting Security Now and for getting our private information off the Internet. We appreciate that too. Steve.

Speaker 2:
[49:28] I think those little, remember when there were all those fake sweepstakes sites?

Speaker 1:
[49:33] That's another way.

Speaker 2:
[49:34] Yeah.

Speaker 1:
[49:34] Because we know that that happened on Facebook. That was the Cambridge Analytica scandal. They were making quizzes like, which Star Wars figure am I? Simply by virtue of taking those quizzes, not only did they get all your Facebook information, they got all your friends' Facebook information. So even if you said, oh, I'm not going to do that, if your friends do it, they're revealing it. Now that hole has been plugged, but there's always more. Because they're like cockroaches. You can't get rid of them. All right. Anyway, on we go. Let's talk about the missing Microsofties.

Speaker 2:
[50:08] One of the recent bits of news that was, as I said, bumped so that we'd have time to thoroughly examine Anthropic's Mythos, was that Microsoft, last week, had, without apparent cause or reason, suddenly dropped a number of driver developer accounts for products such as VeraCrypt and WireGuard. I mean, like the well-known WireGuard, the next-generation VPN. They inherently incorporate kernel driver components in order to obtain the deep OS-level access they need in order to operate. So this was a huge concern for many users of these products, and I heard from many of our listeners who picked up on this news. Okay, as it turns out, it was just as well that I waited, since last week the only news we had was that the accounts had been dropped. We didn't know why. Today we know. The reason for Microsoft's suspension of these accounts turned out not to be a mistake, but was entirely deliberate, which is also a reason for some concern. I know it's probably going to hit home for many of our listeners, since it's an issue that I've been talking about recently during that whole process of updating my code signing certificate, what I had to go through, like getting my CPA to sign an attestation letter that, yes, he'd just laid eyes on me and I was real, and he was putting his license on the line in order to vouch for me. It's like, yikes, none of that had ever been needed before. So once again, Bleeping Computer was on top of this. Under their heading, Microsoft rolls out fast track to reinstate Windows hardware dev accounts. Kind of an oops. Anyway, Bleeping Computer explained, writing, Microsoft has rolled out a fast-track response to help developers regain access to accounts recently suspended from its Windows hardware program, following widespread complaints that they were locked out without warning.
Last week, the company suspended Windows hardware developer accounts used to publish Windows drivers and updates for widely used tools like WireGuard, VeraCrypt, Memtest86, super popular, and Windscribe. The suspensions prevented developers from releasing new Windows builds and security patches, raising concerns about potential delays in responding to vulnerabilities. VeraCrypt developer Mounir Idrassi stated that his account, so this is VeraCrypt, the successor to TrueCrypt as we know, that was taken over by someone in France, this Idrassi guy, he said his account had been terminated without warning, and that he was unable to reach a human support representative, leaving him unable to publish Windows updates. Similar experiences were reported by WireGuard maintainer Jason Donenfeld and others, who described being locked out without recourse, or facing lengthy or unclear appeals processes. You know, the Microsoft machine was just sort of ignoring them. After many developers took to X to report the suspensions, Microsoft Vice President Scott Hanselman said the accounts were suspended for failing to complete identity verification in the Windows hardware program, and that the company had been emailing these people, which they call partners, about the requirement since October of 2025. Right. So six months of, we're trying to reach you and you're not replying. Microsoft requires identity verification for the Windows hardware program because it allows developers to sign and distribute, actually, it doesn't anymore, Microsoft is doing the signing now, but it does allow developers to develop and distribute kernel-level drivers under the program, which run, writes Bleeping Computer, with high privileges, well, yeah, in the kernel they could do anything, and have been abused by threat actors in past attacks.
However, they write, many developers claimed, and there are so many it's probably true, claimed they had not received any prior notification, including emails, before they were suspended. While Hanselman and others at Microsoft had been working to reinstate accounts, Microsoft yesterday introduced a temporary process to fast-track reinstatement for suspended accounts. An update to Microsoft's advisory adds, quote, We've heard your feedback. We know that some partners whose accounts were suspended following account verification are experiencing challenges regaining access to the Hardware Dev Center, the HDC. Protecting the security of the Windows ecosystem remains our highest priority, and we are adding a temporary process to accelerate the reinstatement experience for partners who are able to resolve outstanding compliance requirements. Wow. Under the new process, developers are told, they wrote, to open a support case through the hardware program as the fastest way to reinstate accounts. Requests must include a clear business justification explaining how access to the Hardware Dev Center will be used. Microsoft says that once reinstated, all outstanding compliance requirements must still be resolved before full access is restored. This is an interim, on a provisional trial basis, we're going to give you back the account that you previously had access to for years, until we decided, nope, no more, suddenly we don't know who you are. But now you're going to have to tell us who you are, and prove it, of course. So Microsoft said it advised partners to ensure they're signed in with the correct account when submitting tickets, and to continue prompting Copilot to create a ticket if automated assistance fails, whatever that means. For those unable to submit requests through standard channels, Microsoft provided an alternative support contact to help initiate the process.
Microsoft has not said how long this accelerated process will remain in effect, that is, what this grace period is going to be, so affected developers are advised to act quickly. Okay, so I would tend to believe the developers over Microsoft regarding this complete lack of attempts to inform them. As I noted earlier, Microsoft is no longer an entity that is actually able to care. Caring is not something that it does. It's just too big, and caring is a distraction. So someone, somewhere, doubtless decided that the best way to get developer attention, or just to remove dead accounts, would be to simply suspend all currently non-compliant accounts for non-compliance. This has the advantage, as I said, of weeding out any older accounts that no one really cares about that much, since they won't be immediately inconvenienced by their inability to access Microsoft's developer portal. Conversely, those who are inconvenienced will be highly motivated to get their identity-proving act together. As we know, this may involve getting an affiliated attorney or CPA to sign some attestation papers. It's what I went through. Basically, this is the same process. You need to have identity verification that is bulletproof. Of course, that's true, because running code in the Windows kernel is a privilege that none of us want bad guys to have. We do want Microsoft to make that as bulletproof as possible. While Microsoft could have been way more gentle about this, it did get the job done. That's what that was all about, the reason Microsoft suddenly suspended a bunch of developer accounts, many whose owners were immediately inconvenienced because they were using them actively. Now, basically, it's like, okay, we're going to give them back to you for a while, but you need to get your identity made compliant. So that will happen. Now, this next piece of news beautifully exemplifies a problem we've seen before.
That's, I think, largely a consequence of the aging Internet and critical aspects of its design that were never very well thought through, since, in defense of its designers, they never did and never could have foreseen what their creation, which we call the Internet, would become. I mean, I'm in awe of the original design of these protocols that have stood the test of time. There are some aspects that haven't. Bleeping Computer's headline for this reporting was, quote, Signed software abused to deploy antivirus-killing scripts. Not a great headline. While that's factually true, it's more of a consequence of the problem than the problem itself. Okay, so let's start with what Bleeping Computer reported. They said a digitally signed, meaning, you know, there is a real company behind it, so it's digitally signed, and pretty much these days anything has to be, a digitally signed adware tool, so not malware, not, you know, not evil, but just unwanted, has deployed payloads running with system privileges that disabled antivirus protections on tens of thousands of endpoints, meaning host computers where it was installed, some in the educational, utilities, government, and health care sectors. In a single day, researchers observed more than 23,500 infected hosts across 124 countries trying to connect to the operator's infrastructure, with hundreds of infected endpoints being present in high-value networks. Okay. So they're saying 23,500 PCs have this adware tool. Hundreds of them are in, like, really important networks, and they're all reaching out trying to phone home to this operator's infrastructure. Bleeping wrote, Security researchers at managed security company Huntress discovered the campaign on March 22nd, when signed executables viewed as potentially unwanted programs, love that, PUPs, potentially unwanted programs, do you really want this?
Which is what that original OptOut tool that I wrote for that old Aureate, later Radiate, adware was. Anyway, they said, potentially unwanted programs triggered alerts in multiple managed environments. So Huntress is in the environment management business, and they saw these things doing this. They wrote, PUPs or adware are regarded more as a nuisance than malicious, as their role is typically to generate revenue for the developer by showing advertisement pop-ups, banners, or through browser redirects. They'll infect browser URLs to bounce through some other redirect before they go to the site that you actually intend. Huntress researchers say that the software was signed by a company called Dragon Boss Solutions LLC. Sounds kind of Chinese. Involved in, quote, search monetization research, whatever that is, activity, and promoting various tools. For example, the ChromeStera browser. Oh, Leo, don't leave home without the ChromeStera browser. Chrome-in-us, whatever that is. The World Wide Web. Oh, that's catchy. Web Genius, and the Artificious browser. I don't know if I want the Artificious browser. Anyway, all labeled as browsers, but detected as PUPs by multiple security solutions. So they're recognized as, are you sure you want this? Beyond annoying users with ads and redirects, Huntress researchers say the browsers from Dragon Boss Solutions also feature an advanced update mechanism that deploys, get this, an antivirus killer. In other words, they've found out that there are things that don't like them, so let's kill those, because we don't want to be unliked. Huntress researchers discovered that the operation relied on the update mechanism from the commercial Advanced Installer authoring tool to deploy MSI and PowerShell payloads. Analyzing the configuration file for the update process revealed several flags that made the operation completely silent. No user interaction required. You don't want to bother users with those pesky permission dialogues.
It also installed the payloads with elevated system privileges, prevented users from disabling automatic updates, and checked frequently for new updates. So basically, badly behaving malware, I would agree. Potentially unwanted? Probably definitely unwanted. That would be a dupe: a Definitely Unwanted Program.

Speaker 1:
[66:05] Not a pup, it's a dupe.

Speaker 2:
[66:06] Yeah, that's right. Okay. So none of those things seem deliberately malicious, right? Having been harassed by false positive AV detections, I can at least understand the motivation behind creating exceptions for one's code. As we know, that's not the approach I take, like killing off AV that bothers you. Mostly this seems like software written entirely with the convenience of its publisher rather than its user in mind. That's bad software, no doubt about it, but that's also life. So, the reporting continues, saying, according to the researchers, the update process retrieves an MSI payload, setup.msi, disguised, this is weird, disguised as a GIF image, which is currently flagged as malicious on VirusTotal, but only by five out of 69 or 70 security vendors. So, not many false positives, or positive positives. Anyway, it does seem a little sketchy. Why would any software publisher who thinks of themselves as legitimate retrieve a Windows setup.msi file disguised as a GIF image?

Speaker 1:
[67:34] What?

Speaker 2:
[67:34] Okay. Anyway, they continue writing, the MSI payload includes several legitimate DLLs that Advanced Installer uses for specific tasks, such as executing PowerShell scripts, looking for specific software on the system, or other custom actions defined in a separate file named !_string_data that includes instructions for the installer. Huntress says that before deploying the main payload, the MSI installer conducts reconnaissance by checking admin status, detecting virtual machines, verifying internet connectivity, and querying the registry for installed antivirus products from Malwarebytes, Kaspersky, McAfee, and ESET. The security products are disabled using a PowerShell script named ClockRemovable.ps1, which is placed in two locations. The researchers say that installers for the Opera, Chrome, Firefox, and Edge browsers are also targeted, likely to avoid potential interference with the adware's browser hijacking. Yeah, you want to turn off anything that might get in the way. The ClockRemovable.ps1 script also executes a routine when the system boots, at logon, and every 30 minutes, to make sure that antivirus products are no longer present on the system, by stopping services, killing processes, deleting installation directories, wow, you know, really wiping them, and registry entries, silently running vendors' uninstallers, love that, and forcefully deleting files when uninstallers fail to successfully uninstall. It also ensures that the security products cannot be reinstalled or updated by blocking the vendors' domains through modifying the hosts file and null-routing them, redirecting them to 0.0.0.0. Wow. So again, not technically malicious, but you just don't want that.
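As an aside, the setup.msi-disguised-as-a-GIF trick Steve mentions is easy to spot programmatically, since a file's magic bytes don't lie even when its name does. Here's a hedged Python sketch; the file names are illustrative, but the GIF and OLE compound file signatures are the published ones:

```python
# Detecting a payload whose extension claims GIF but whose leading
# bytes say otherwise, e.g. an MSI (an OLE compound file) delivered
# as "setup.gif".
GIF_MAGICS = (b"GIF87a", b"GIF89a")
OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"  # MSI/Office container signature

def classify(header: bytes) -> str:
    """Classify a file by its leading bytes rather than its name."""
    if header.startswith(GIF_MAGICS):
        return "gif"
    if header.startswith(OLE_MAGIC):
        return "ole-compound"
    return "unknown"

def looks_disguised(filename: str, header: bytes) -> bool:
    """True when a .gif extension is hiding a non-GIF payload."""
    return filename.lower().endswith(".gif") and classify(header) != "gif"
```

Reading just the first dozen bytes of every downloaded "image" and comparing against the claimed type is cheap, which is why many gateways do exactly this.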
So what's clearly going on here is that the publishers of this definitely mal-behaving crapware have previously experienced well-deserved run-ins with a handful of alert anti-crapware utilities that want to warn their users that this is a potentially unwanted program in spades. So these cretins have upped the ante by making their adware offerings even more obnoxious in the things they do to get anything that doesn't like them off the system and keep it off. Like, you can't even contact those AV companies any longer, because your browser will not resolve the domain, because the hosts file has been edited in order to null-route them. Wow. Okay, so here's something curious and interesting. During the analysis, Huntress found that the operator did not register the main update domain, chromesterabrowser.com, or the fallback domain, worldwidewebframework3.com, used in the campaign, presenting them, Huntress, with the opportunity to sinkhole the connection from all infected hosts. In other words, the domain got abandoned, Huntress saw nobody had re-registered it, so they did. As such, writes Bleeping Computer, they registered the main update domain and watched tens of thousands of compromised PCs reach out, looking for instructions that in the wrong hands could have been anything. Based on the source IP addresses of the endpoints, these PCs that have this crap on them, the researchers identified 324 infected hosts residing in high-value networks. Remember, that's 324 out of 23,500.
So, there's 23,500 PCs overall, 324 in high-value networks, specifically 221 in academic institutions in North America, Europe, and Asia, 41 in OT, as we're calling them now, operational technology, networks in the energy and transport sectors and at critical infrastructure providers, 35 municipal governments, state agencies, and public utilities, 24 primary and secondary educational institutions, three health care organizations, hospital systems, and public health care providers, and the networks of multiple Fortune 500 companies. So if a bad guy had registered that domain before Huntress did, they'd have access into all of those networks. Bleeping Computer wrote that they tried to reach out to Dragon Boss Solutions, but could not find contact information, as their site is no longer operational. Huntress warns that while the malicious tool currently uses an AV killer, the mechanism to introduce far more dangerous payloads into infected systems is in place, and could be leveraged at any time to escalate the attacks. Additionally, since the main update domain was not registered, anyone could claim it and push arbitrary payloads to thousands of already infected machines, with no security solutions protecting them by design, and through an already established infrastructure. Huntress recommends that system admins look for WMI event subscriptions containing the string MB removal or MB setup, scheduled tasks referencing WMI load or clock removal, and processes signed by Dragon Boss Solutions LLC. Additionally, review the hosts file for entries blocking AV vendor domains, and check Microsoft Defender exclusions for suspicious paths such as D Google, E Microsoft, or DD apps. Okay, so this particular incident is not the end of the world. But as I noted at the start, it's another perfect example of something the Internet was never designed to handle.
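Huntress's hosts-file recommendation is easy to automate. A minimal Python sketch; the vendor domain list here is illustrative, not Huntress's actual indicator set:

```python
# Scan hosts-file text for entries that null-route security vendors,
# the tell-tale left behind by this campaign's AV killer.
NULL_ROUTES = {"0.0.0.0", "127.0.0.1", "::"}
AV_VENDORS = ("malwarebytes.com", "kaspersky.com", "mcafee.com", "eset.com")

def suspicious_hosts_entries(hosts_text: str) -> list[tuple[str, str]]:
    """Return (ip, hostname) pairs that null-route an AV vendor domain."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        if ip in NULL_ROUTES:
            for name in names:
                if name.lower().endswith(AV_VENDORS):
                    hits.append((ip, name))
    return hits
```

On Windows you'd feed it the contents of C:\Windows\System32\drivers\etc\hosts; any hit is worth investigating, since legitimate software essentially never black-holes an AV vendor.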
Some random company may itself not be explicitly evil, but might have sloppy, uncaring, and abusive coders who install software that does things to its hosting PCs that would raise serious concerns from anyone who understood what was going on. But as we know, the phrase, anyone who understood what was going on, almost never describes the end user, who decided, hey, you know, I'll bet that ChromeStera browser would be a lot better than Chrome, so I'm using ChromeStera instead of Chrome. It's like, okay. So here's the problem that the Internet's designers never considered. What happens when the progenitors of ill-begotten and very badly designed software, and not necessarily even that, any software which uses an infrastructure to phone home to check for updates, and then has the power to automatically download and install them, what happens when that software, which continually reaches out to the Internet for updates, eventually, and if it's a fly-by-night company, probably inevitably, goes out of business? Their horrible software remains installed and alive and querying for updates. I know that all of us have stuff on our PCs that we installed some time ago and then stopped using, but probably haven't taken the time to remove because it's not bothering us. But then their various domains also expire.

Speaker 1:
[77:06] Oops.

Speaker 2:
[77:08] Now anybody could re-register them. Fortunately, in this instance, Huntress are the good guys who re-registered those expired domains for the sake of their research. But if bad guys were to do this, they would have stumbled upon the mother lode. 221 academic institutions, 41 operational technology networks and infrastructure providers, 35 municipal governments, state agencies and public utilities, 24 school systems, three healthcare organizations, and the networks of multiple Fortune 500 companies. They could get into all of them. Ransomware, anyone? This abandoned software would literally have a ready-to-go back door into the networks of all of those 324 high-value targets. And here's the concern to think about. This cannot be an isolated event. This particular discovery was Huntress showing that they're awake and alert and doing their managed security thing. That's great. But similar events are doubtless happening across the Internet. Companies are abandoning their previous failed software offerings, which included technology to phone home. Then home is abandoned too. Note that it's one thing when some random website's domain is abandoned, but it's an entirely different matter when automation that's been silently installed into user machines is making those queries. This creates a ready-made back door into every one of the networks that's reaching out to abandoned domains. We're in a world where there is no accountability for the actions of the software while it's in use, right? I mean, people can download this crapware, and it does that to their machines. Horrible things: installing scheduled tasks, stripping AV out, running the AV uninstallers, and if that doesn't work, removing their registry entries and manually deleting the software from their machines, black-holing their domains by putting 0.0.0.0 in the hosts file, and the user doesn't know. They said, yeah, I really want the ChromeStera browser. Sounds great. And this happens.
There's no accountability in our current environment. Companies can do whatever they want, including this kind of crap. You know, basically we're in a world where we have a rent-a-domain-name system, right? We rent a domain name, and as long as we're willing to pay for it, we get to keep it. But when we decide we don't want to rent that domain name any longer, then after it expires it's up for grabs. Just like the AWS abandoned-bucket problem, where bad guys could grab abandoned buckets that still had activity on them. So unfortunately, this re-registering of a domain is assumed and encouraged, but it leaves us with some serious potential for security problems. It's not something our forefathers on the internet thought about, because they could never have imagined that the net would become what it has. But this problem of recycling domains creates a whole new world of security problems.
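The phone-home risk described above can be checked for proactively. Here's a minimal, hypothetical Python sketch that flags which of an application's hard-coded callback domains no longer resolve and so may be up for re-registration. The domain names are made up, and the resolver is injectable so the logic can be exercised without network access:

```python
import socket

def find_abandoned(domains, resolve=None):
    """Partition phone-home domains into resolving and unresolvable lists.

    An NXDOMAIN result is a hint (not proof) that the domain has lapsed
    and could be re-registered by anyone -- the Huntress scenario.
    `resolve` is injectable so the check can be tested offline.
    """
    resolve = resolve or (lambda d: socket.gethostbyname(d))
    live, dead = [], []
    for d in domains:
        try:
            resolve(d)       # raises socket.gaierror on NXDOMAIN
            live.append(d)
        except (socket.gaierror, OSError):
            dead.append(d)
    return live, dead
```

In practice a failed lookup is only a starting point; you would confirm against WHOIS expiry data before concluding a domain has truly lapsed rather than being momentarily unreachable.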

Speaker 1:
[80:56] What an interesting story. Yeah. From Stera. I can't wait to get it. By the way, I just saw this news cross the wire.

Speaker 2:
[81:06] Yeah.

Speaker 1:
[81:06] Mozilla is saying now that it used Mythos on Firefox and that it found 271 bugs, which they patched in their current version 150. So this is the first that we've seen of an actual admission that Mythos was used by an independent third party.

Speaker 2:
[81:29] Yep.

Speaker 1:
[81:31] 271 bugs in shipping software. Yeah. It's been tested and tested and tested. Oh, my God.

Speaker 2:
[81:39] Pounded on. And we know the largest attack surface on anyone's computer is the web browser.

Speaker 1:
[81:49] Yeah. One of the things Mozilla said is, our belief is that the tools have changed dramatically, and there were categories of bugs that you could find with human analysis but couldn't find with automated analysis, which meant that threat actors had an advantage if they were willing to spend the time and energy. We couldn't keep up.

Speaker 2:
[82:13] Now it is finding them with automated analysis. Wow.

Speaker 1:
[82:17] Every piece of software, this is, by the way, Bobby Holley, Firefox's CTO. Every piece of software is going to have to make this transition, because every piece of software has a lot of bugs buried underneath the surface that are now discoverable. This is a transitory moment that is difficult and requires coordinated focus and a lot of grit to get through. But I think that this is a finite moment, even as the models become more advanced. He said, yes, we are flooded now with things we have to fix, but at least we know about them.

Speaker 2:
[82:51] Yeah. And when AI is in the pre-delivery pipeline, we are not going to be there again. So as I said, it's transient mayhem, potentially. It's Y2K. Y2K is a perfect model.

Speaker 1:
[83:07] I think this confirms what you just said. Exactly. Yep. It is Y2K. It's hair on fire, but for a limited time only. Yes.

Speaker 2:
[83:16] And you know that going forward, with the Mozilla team having seen this, they will vet anything they do now through AI, like a hyper-lint, to catch any of the problems before they ship.

Speaker 1:
[83:35] That's what Holly is saying basically is, this is now incumbent on everybody.

Speaker 2:
[83:39] It's the new model.

Speaker 1:
[83:40] It's the new model. It's the future. But this is a real confirmation that Mythos, it wasn't merely marketing hype, that there is something going on. If you can find 271 bugs in a highly tested version, current version of...

Speaker 2:
[83:56] Please tell Jeff and Paris, I'm so annoyed with their like, is it really? Yes. Read something.

Speaker 1:
[84:04] It is now. Well, we didn't, I mean, to be fair, we weren't sure. I was. Yeah, you said that last week. Yeah. Yeah. And I mentioned that to them. But now we have absolute confirmation. This is the real deal. Because they've been using automated tools before. This is not... This is a special category.

Speaker 2:
[84:28] And I will, when we get to our main topic, I will... The guys at Aisle, remember A-I-S-L-E, they're the guys that found all the problems in OpenSSL. And so they have a little bit of pushback against Anthropic, which I'll share, to round this out. But anyway, as I said, the podcast is titled, Yes, Exactly.

Speaker 1:
[84:52] Exactly. And you were... You called it. You were absolutely right. I take it you would like to pause to celebrate.

Speaker 2:
[85:00] And it is time for a pause to refresh.

Speaker 1:
[85:03] To refresh. As I talk about our sponsor for this segment of Security Now, we're glad you're here. Thank you for watching. And I think you're glad you're watching too. The show today is brought to you by Hoxhunt. It's like Foxhunt with an H: H-O-X-H-U-N-T. As a security leader, you've been there, right? The eye rolls during training, the one-size-fits-all "phishing simulations," I'll put that in quotes, that your employees spot from a mile away, and the report button that gets ignored more often than not. Your program is running, but it's not changing employee behavior. Meanwhile, we're seeing that AI is making real attacks more convincing by the day, and leadership is starting to ask the question you don't have a clear answer to: is this actually working? Well, Hoxhunt is built to answer that. Hoxhunt empowers your employees to spot and stop advanced phishing attacks and drive measurable behavior change through personalized, gamified micro-training. It's powered by AI and behavioral science, and it really works. And you'll love it, because as an admin, Hoxhunt is doing all the heavy lifting. The simulations run automatically, and not just email; email, of course, but Slack and Teams too. And just like the real attacks, they're personalized to each employee based on role, location, and behavior. Every simulation uses AI to mirror those real-world attacks, so your employees are actually being tested on stuff that's real, that's actually getting through, not some outdated template that they say, oh yeah, we've seen this before. Gamified training keeps the engagement high. They love it. They get gold stars. They get stickers. I mean, they love it. And it doesn't feel punitive, because I'll tell you something, nobody ever learned anything by being punished. Every interaction generates a coaching moment. You're not just tracking completion, yes, they pressed the button, yes, they finished the test. You're building behavioral indicators that tell a real story.
Reporting rates, repeat-clicker reduction, and time to report: the kind of metrics that hold up when leadership asks the hard questions. But you don't have to take my word for it. With over 3,500 verified reviews on G2, Hoxhunt is the top-rated security training platform, recognized both for best results and easiest to use. It's also recognized as Customers' Choice by Gartner. Thousands of companies use it: Qualcomm, DocuSign, and Nokia. They trust Hoxhunt to train millions of employees worldwide. Visit hoxhunt.com/securitynow today to learn why modern secure companies are making the switch to Hoxhunt. That's hoxhunt.com/securitynow. We thank them so much for their support of Security Now and Steve Gibson. We've got some feedback. What? Oh, you're muted. Hello.

Speaker 2:
[88:06] Thank you. Okay. So feedback. A listener shared some musings over strategies for securing open-source repositories, and it provided a perfect setup for looking at this aspect of the future. His name is Gene Hastings, who listens to us; I'm familiar with the name, he's sent email in the past. He wrote: A colleague and I often meet to talk about DevOps and related issues, you know, system and personal health. He's more Dev, I'm more Ops. Both often cranky. One of our listeners. In any event, we were talking about the nightmare that is having a project depend on libraries all over the net, and what steps might be taken to provide some degree of defense. He said, I was already aware of version pinning, and there was the recent news about a compromised package where the infection modified it without changing the version. I recalled, long after our conversation, that one would need to store a hash of the package and compare it on retrieval, right? Because then a modification will get detected; the hashes won't match. He said, that's little protection against a compromised new version or a first-time use, but some nonetheless. There's also the concern as to the trustworthiness of the package's own dependencies. All this led me to reflect that what may need to happen next is to have each package and its components not only signed by the author, but also by an independent auditor. Obviously, this does not scale physically or financially. So the next step is to have a trusted, agentic auditor that does not charge a fortune for each signing. Such automation will be necessary soon. This led me to a further thought. Imagine a new project philosophically akin to Let's Encrypt, a service for smaller developers who can do an automatic audit at a tolerable expense.
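Gene's hash-pinning step is simple to sketch. This hypothetical Python fragment shows the mechanism he describes, catching in-place tampering of an already-pinned version while, as he concedes, offering no protection for a malicious first release:

```python
import hashlib

def verify_package(data: bytes, pinned_sha256: str) -> bool:
    """True if the downloaded bytes match the hash pinned at audit time.

    Detects an attacker modifying an already-published package without
    bumping its version number; says nothing about whether the version
    that was originally pinned was trustworthy in the first place.
    """
    return hashlib.sha256(data).hexdigest() == pinned_sha256.lower()
```

This is essentially what lockfile integrity fields and pip's `--require-hashes` mode already do: record the hash once, then refuse any retrieval that doesn't match it.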
He says, if all of the following are true: the agents like Mythos and its descendants are competent, the agents are efficient, the agents are trustworthy, the agents are not priced out of reach, with some flavor for everyone, and the owners of the agents are trustworthy, he has an exclamation point on that one, he said, then there could be a future for us and the Internet. Apparently, otherwise, forget about it. That's all over. He said, as an aside, I am an AI skeptic. I do not trust that which cannot be explained. Getting back to operations, if I don't have a half-decent idea what a system and its configuration is doing, I am very reluctant to put my name on it. I am willing to trust people who are able to understand the systems to assure me that I can be fairly reassured. At the moment, such people are hard to find amid the tsunami of hype. I'm not as concerned about the quality of the technologies as I am about the people pushing them. I wouldn't trust simple driving directions from the likes of Sam Altman, Mark Zuckerberg, or Jeff Bezos. I do not trust their motives or plans. Okay, so Gene's idea is to have a trusted agentic auditor examine and then sign the library, all for a low cost. The trouble with this is that then we need some authority to manage the trust in these AI agent signatures, and on the trusting end, some new root store that users of these signed libraries could use to look up and verify the trust. In other words, a whole bunch of new stuff. I think there's a more direct, cleaner and straightforward means of accomplishing the same thing. We simply move to a world, very much like what we were just talking about, Leo, with Mozilla. We move to a world where anything that a public code repository offers for broad public consumption first passes the scrutiny of an AI agent. An AI will be guarding the exits, essentially. Code cannot leave the repository without first being checked by the AI agent.
And the process might not be autonomous. The repository's AI might have some questions for a package's authors that would need to be answered and negotiated before a new or updated package could be made widely available. And since the use of an AI will certainly come at a non-zero cost, for the foreseeable future at least, well, there will probably always be some cost, because there's always going to be some compute involved, I'd imagine that there would be some form of rate limiting on new submissions being made available publicly for review and publishing. You know, non-professional authors who are in the habit of constantly revising their code to make an endless series of incremental improvements might have a release delay or some sort of submission limit imposed. But the idea being, in the same way that Mozilla will be running their Firefox code from now on through AI, the solution is for repositories to do the same thing: to clean anything that is being released to the public before it gets out there. And I suspect that solves this problem. You know, the vast majority of a repository's code is mostly static, right? So an AI will only need to give it the once-over one time. And from then on, those who pull it could rely upon its security more than they ever have been able to before. And most code only changes incrementally. So an AI could retain the context that it developed during that original once-over, then bring itself back up to speed and only look at the changes, all the deltas to the code, in order to minimize the recurring cost of continuing to review code which incrementally changes over time. So I think the whole system can be made practical. So what I know is this: the year is currently 2026. See, I got it right this time. It's not 2024, it's 2026. Even though AI today costs far more to run than it's able to generate in revenue, I am sure that the economics of AI will be radically different in the future.
Just as the economics, for example, of mass storage and computation have been utterly transformed over the past 50 years.
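The delta-review idea above could be bookkept like this. A hedged sketch: the gatekeeper agent itself is imagined, and only the change-tracking that would keep its recurring cost proportional to the delta is shown:

```python
import hashlib

def files_needing_review(tree: dict, reviewed: dict) -> list:
    """Return only the files whose contents changed since the last AI pass.

    `tree` maps path -> current file contents (bytes); `reviewed` maps
    path -> the content hash recorded when the repository's (hypothetical)
    gatekeeper agent last signed off. Unchanged files keep their prior
    verdict, so review cost tracks the size of the delta, not the whole
    code base.
    """
    out = []
    for path, contents in tree.items():
        digest = hashlib.sha256(contents).hexdigest()
        if reviewed.get(path) != digest:
            out.append(path)       # new file, or contents changed
    return sorted(out)
```

After the agent approves a file, its hash would be written back into the `reviewed` map, so the next submission only surfaces genuinely new deltas.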

Speaker 1:
[96:02] There's a rich history of this. This has always happened.

Speaker 2:
[96:04] Today, we're all walking around with globally connected pocket computers that would have boggled the minds of our grandparents.

Speaker 1:
[96:13] Yeah. So our parents forget the grandparents.

Speaker 2:
[96:17] Yeah. It should be clear to everyone that AI, which continues to boggle our minds today, will be just as accepted and taken for granted by our grandkids as the Internet is by today's kids. I mean, kids growing up today have always had the Internet. They don't know life without it. We're still sort of like, wow, remember those days. Remember books.

Speaker 1:
[96:44] Yeah, I used to be remember CDs, DVDs, records.

Speaker 2:
[96:49] And finally, GP, our listener, says: Dear Steve and Leo, given April's security-related news, I can see how thoughts on the Project Hail Mary movie might have been pushed to the wayside. I'm wondering what you gentlemen thought of the film and its treatment of the source material. I felt the movie struck a nice balance. It did justice to the book while allowing those who have not read it to enjoy the story without being overwhelmed by a flood of science, which could have easily turned it into a five-part mini-series.

Speaker 1:
[97:21] Oh yeah. He said a lot of science in the book.

Speaker 2:
[97:24] Yeah, there is. Well, and that's why we love Andy Weir's writing, right? So he said, my young one enjoyed the movie so much that they wanted to read the book.

Speaker 1:
[97:39] So good. Yeah, that's good.

Speaker 2:
[97:41] Yes. So we signed up to borrow it from the library. However, we were number 110 in the queue of the public library to borrow the book.

Speaker 1:
[97:54] Yeah, it's one of the best seller lists again, I think.

Speaker 2:
[97:56] Yeah. So we opted for my old copy on Audible instead. Although I salute him for reading, because I'm still, I think reading is primal. But anyway, he said, listening to the story again did not diminish the movie. It only enhanced the experience for both me and my little one. It's like getting the inside story, if you get my drift. This is one of the few times in recent history where a movie did not ruin the book, but actually improved upon it. Good job to the production team on this one. All the best GP. So Leo?

Speaker 1:
[98:31] Yeah, I'd agree 100 percent. In fact, I'm re-listening to the book, which I started right after the movie. The other thing we did, though, is we also re-watched The Martian, because Lisa and I had a little inside bet. After Project Hail Mary, I said, oh, that was as good as The Martian. She said, no, it wasn't. She said it was really good, but it's not as good as The Martian. I said, oh. And then we watched The Martian, and I have to agree with her. The Martian was remarkably good. I think that's partly because Ridley Scott directed it. I think the directors of Project Hail Mary, who are chiefly famous for The Lego Movie, maybe have a little bit more of a kiddie sensibility. I could see it would appeal to his little ones, because, you know, Ryland Grace is, a lot of the time, like, ooh, you know, things the kids would like.

Speaker 2:
[99:25] Yeah.

Speaker 1:
[99:25] But it's a little over the top. That bugged me a little. I do feel it was very true to the book. The book has infinitely more detail. Because you had to cut all that stuff out. I'd forgotten how much science there is in the book. And so there's stuff that I thought, oh boy, they left that part out of the movie.

Speaker 2:
[99:44] For example, I love the details of breeding astrophage from the book. It was so good. And we just got a little suggestion of it in the movie.

Speaker 1:
[99:56] Almost all the science is suggested in the movie. They focus on the drama, the interpersonal relationships, and the science gets a second.

Speaker 2:
[100:06] So, my theory. I reread the book when we knew there was going to be a movie, because I read it originally when Andy wrote it, and I thought about this question of movie versus book a lot. Of course, famously, I've complained here that with Jurassic Park, when I was watching the movie, I was incensed, because so much was left out. I mean, some arguably really important stuff. On the other hand, look, Jurassic Park was a phenomenon as a movie. So who can say? What I've decided is it's really not fair to compare the two. What they have in common is the concept and a similar plot. But you really are addressing two different audiences. A book reader or Audible listener is a different audience than somebody who wants to go to a movie for two hours and be entertained.

Speaker 1:
[101:11] They're different media. Yes, absolutely. And you have to be native to the medium. Otherwise, it just isn't going to work. And I understand that. But I do agree with you, and with our correspondent, that this movie makes you want to read the book, which is really great. And you don't feel disappointed in either direction, which is very unusual. I'm almost always disappointed by science fiction movies not living up to the book. In this case, no. I think for both The Martian and Project Hail Mary, the movies are great. They really do a good job. Yeah. So we're in agreement.

Speaker 2:
[101:48] Yeah. Okay. Let's take a break. And then we're going to plow into what the experts say and what, and what you just shared a perfect example from Mozilla, what they found when they ran Mythos against their Firefox code base.

Speaker 1:
[102:03] Yeah. Yeah. Very interesting. I do have to point out Redcon5 asked in the Discord chat, our Club TWiT chat, how many of the 271 bugs were severe, and they actually didn't talk about severity. So I don't know. They might have been smaller bugs. We don't know. So that's the next question. But I guess a bug is a bug is a bug.

Speaker 2:
[102:27] I mean, and we know how often bugs can be elevated.

Speaker 1:
[102:32] Right.

Speaker 2:
[102:33] Yes, into something more severe.

Speaker 1:
[102:35] Right. Right. This episode of Security Now brought to you by METER, the company building better networks. Talk about severe. If you're a network engineer, you know, you are facing severe constraints. Legacy providers, inflexible pricing, IT resource constraints, stretching you thin, complex deployments across fragmented tools. It's a wonder anything works. You're mission critical to the business, but you're working with infrastructure that just wasn't built for today's demands. And to be fair, I mean, no one anticipated how much bandwidth we need, how much bandwidth we'd be using. I mean, it's been an explosion. It's been great. But you need your hardware, your network stack to keep up. And that's why businesses are switching to meter. This is brand new, created by two network engineers who feel your pain, who said there's got to be a better way. Meter delivers full stack networking infrastructure for wired and wireless and cellular that's built, specifically built for performance and scalability. And I would add a third term to that, reliability, right? And they know that the way to get there is to do the whole stack. So Meter designs the hardware, they write the firmware, they build the software, they manage deployments, they even provide support after the deployment. Meter offers everything, I mean, even down to ISP procurement, they cover security and routing and switching, a lot of things you need, wireless, firewall, cellular, power, you know, a lot of times we forget how important power is to the reliability of this whole thing, not Meter, because they've been there, they know it, they feel your pain, they cover DNS security, VPNs, SD-WAN, multi-site workflows, all in a single solution. I had a great conversation with the Meter engineers, and they were talking about, well, you know, one of the real pain points they see a lot is a company acquires another company. 
Now you've got two heterogeneous systems that may not interoperate well. And then throw in a 100,000 square foot warehouse with wireless that works in one corner but not the other, and can never get back to the home office, and you can see why you need Meter. Meter's single integrated networking stack just works in all of those hostile environments: major hospitals, branch offices, those huge warehouses, even large campuses, even data centers. You know who uses Meter? Reddit. That's pretty good. I mean, that's a challenging environment. Here's another one. The assistant director of technology for the Webb School of Knoxville said, we had more than 20 games on our campus between our two facilities. Each of the 20 games was streamed via wired and wireless connections. The event went off without a hitch. He says, we could never have done this before Meter redesigned our network. With Meter, you get a single partner for all your connectivity needs. That's nice. One phone number to call, from first site survey to ongoing support, without the complexity of managing multiple providers or multiple tools. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility, reimagining what it means for businesses to get and stay online. Meter is built for the bandwidth demands of today and tomorrow. I know tomorrow you're going to say, what was the name of that? We need that. What was the name of that that Leo was talking about? Remember it: Meter. Thanks to Meter so much for supporting Security Now, and go right now to meter.com/securitynow and book a demo. That's meter.com/securitynow. Book a demo. This is networking done right. This is what you need. meter.com/securitynow. By the way, 15 of the Firefox CVEs were low, 18 were moderate, and 13 were high, at least of those. At least that's according to Joke and Boken on YouTube. So.

Speaker 2:
[106:51] And I realize that the proper response to the guy in the club is to listen to what the Mozilla guy is saying. Yeah. He is saying, you've got to do this. This is significant.

Speaker 1:
[107:02] Yeah.

Speaker 2:
[107:03] You know, this wasn't, this wasn't dust that was found.

Speaker 1:
[107:06] Yeah.

Speaker 2:
[107:07] You know, they were like, whoa.

Speaker 1:
[107:10] And that number is huge. 271 is mind boggling, but if 13 were high, this is from version 149 to 150. This is, this is huge.

Speaker 2:
[107:22] Yeah.

Speaker 1:
[107:23] Anyway, let's talk about it.

Speaker 2:
[107:25] And that's just one package. I mean, think of all the software in the industry. Think of how minutely Firefox has been curated and developed over time, how much scrutiny it's received. And even so, AI found what people could not. Now imagine the typical software that's just thrown together and out the door.

Speaker 1:
[107:54] Think about Windows, how many hundreds of millions of lines of code.

Speaker 2:
Well, and how many bugs they know about. Like, remember, didn't they famously ship Windows 7 with, like, 10,000 or 20,000 known bugs? Like, what? How does it even get off the ground? Oh my gosh. Yeah. All right. It's a revolution. Okay.

Speaker 1:
[108:15] So exactly Steve.

Speaker 2:
Yeah. As I noted several times last week, my original working title for last week's podcast was Mythos: Marketing or Mayhem. But once I'd assembled and examined all the data, I realized that leaving the answer to the question that title implied up in the air would be wrong, because, really, after looking at the facts with no bias, there's no way that Mythos was only marketing. We had evidence of it. So, you know, I acknowledged that it was certainly also marketing, but it was far more than only that. And I think that's where people get confused: they just mistrust people's motives to such a degree these days. But again, it could be both, and it was. It happens that Anthropic used this for marketing, but I'm going to make the point at the end of the podcast: thank God, because it broke out. That's the difference, and this breakout is what we're talking about today. I titled today's podcast Yes, Exactly, because last Thursday, two days after, as I said at the top of the show, two days after our What Mythos Means podcast was delivered, an incredibly significant group of industry veterans who pretty much comprise a who's who of the cybersecurity industry all weighed in with a formal emergency wake-up call for the entire cybersecurity world. The organizer and publisher was a group calling themselves the Cloud Security Alliance, and I have a link to the most recent version of their 23-page paper in the show notes. They titled it The AI Vulnerability Storm: Building a Mythos-Ready Security Program. The paper enumerates its 16 primary contributing authors. Because this is important for appreciating the weight of the paper's stated concerns, I'm going to share them briefly. They are Jen Easterly, CEO of the RSA Conference and former director of CISA.
Bruce Schneier, who we all know, renowned cryptographer, currently chief of security architecture at Inrupt and fellow and lecturer at the Harvard Kennedy School. Chris Inglis, the White House's former national cyber director. Phil Venables, Ballistic Ventures, formerly the CISO of Google Cloud. Heather Adkins, current CISO of Google. Rob Joyce, the NSA's former cybersecurity director. Sanel Yu, the CTO of Gnostic and former chief security scientist for Bank of America. Katie Moussouris, the founder and CEO of Luta Security. John N. Stewart, Talens Venture and former CSTO for Cisco. James Lyne, CEO of the SANS Security Institute. Dave Lewis, global advisory CISO for 1Password. Maxim Kowalski, managing director of the AI Security COE for Consortium Networks. Jim Reavis and John Yeo, the CEO and CISO, respectively, of that Cloud Security Alliance. Joshua Sacks, CTO and co-founder at Security Superintelligence Labs, former AI and Llama security head at Meta. And finally, Rami Husani, CISO for none other than Cloudflare. As I said, the who's who. In addition to those primary contributing authors, the paper's content was also reviewed by a list of CISOs that pretty much includes everyone else. I'm not going to read them, since there are too many of them, but I've reproduced that page from the report in the show notes so you can see it. I mean, anybody who I didn't just read: former head of security for Netflix, CISO for Brave Technology, global field CISO for Fastly. Your eye just drops on any of them. Everybody, basically, understood what Mythos meant. Okay, so we've clearly established the provenance of this document. I want to first share the executive summary overview and then the key takeaways for CISOs, followed by their brief summary of why Mythos is so important.
Much of this will sound exactly like I did last week, two days before this was published, which is of course why I immodestly titled today's podcast Yes, Exactly. This amazing group of experts even used some of the same phrases that I used, given the impossible-to-exaggerate significance of Mythos and the successor systems that are sure to follow, and not only from Anthropic. But I get it. As I said last week, they're just first, but they were the ones that broke through, and breaking through is what we really needed for our industry to get the wake-up call it needs. So I think it's crucial for the listeners of this podcast to appreciate that it's not just me with a lone opinion here. Okay, so the authors of the executive summary set it up as a sort of topical Q&A. They wrote: What happened? Answer: AI, as demonstrated by Anthropic's Mythos. So again, note that even they didn't fall to their knees in front of Mythos. They're saying AI, as demonstrated by Anthropic's Mythos, has significantly increased the likelihood of attackers discovering new vulnerabilities, creating new exploits, and using them in complex automated attacks at scale. While AI also increases the speed of patch development and reduces defects in new software, defenders still face a heavier relative burden due to the inherent limitations of patching. Attackers gain asymmetric benefits. That's what I referred to last week when I was talking about the existing installed base of software that hasn't had the opportunity to be screened through AI. It's already deployed, it's in devices and appliances, and much of it has been forgotten, but not by the attackers who want to use it to get in. So they asked the question: How is this different from the status quo? And answered: In the near term, security organizations will likely be overwhelmed by the need to apply patches and respond to AI-discovered vulnerabilities, exploits, and autonomous attacks. What to do now to deal with the current risk spike?
Adjust risk calculations and reorient security program resources for an increasing volume of patches, decreasing time to patch, and more persistent and complex attacks. Focus on the basics and harden your environment further: segmentation, egress filtering, multi-factor authentication, and defense in depth and breadth all increase the difficulty for attackers. What do we believe will happen next? The storm of vulnerability disclosures from Project Glasswing is the first of many large waves of AI-discovered vulnerabilities that may occur in rapid sequence. The capabilities seen in Mythos will quickly become more widely available, dramatically increasing the number and frequency of complex novel attacks organizations will face. Finally, what else should start now to be ready for the next waves? Prioritize robust dependency management to reduce vulnerabilities in third-party and open-source components. Enforce automated security assessments consistently in your development process, including using LLM-powered agents to find vulnerabilities before attackers do. Introduce AI agents to the cyber workforce across the board, enabling defenders to match attackers' speed and begin closing the gap. Re-evaluate your risk tolerance for operational downtime caused by vulnerability remediation to account for shorter adversary timelines. Update governance for more efficient vendor onboarding, and increase headcount to facilitate faster-cycle deployment of new AI-based defenses. And, as an industry, we need to strengthen our coalitions' cooperation and coordination.
Despite these clear alarms being rung by many security professionals who have no profit stake in any of this being true, inertia being what it is, many organizations will nevertheless wait to see if anything really happens. For what it's worth, I did not wait. Although GRC's border security has always been as strong as I've been able to make it, as I've mentioned before, I did have two deliberately exposed SSH servers listening for connections from any US domestic IP. Foreign IPs have always been hard blocked. I'm referring to them now in the past tense because after Mythos, they're already shut down. I've used those SSH links to allow me to deal with the rare IP changes in my two Cox cable connections.

Speaker 1:
[120:00] The SSH session allows me to update the firewall filters that block all other connections from anywhere other than my two remote work locations. Even though those SSH servers are both using the strongest multi-factor identity authentication available, that might not matter if some bypass vulnerability is found. I don't need those SSH servers as much as I need security. So I'm going to take a wait-and-see approach in the opposite direction. Rather than waiting to see whether a problem is found and then hoping I get the news quickly enough, I'm going to assume that someone using Mythos might discover something unforeseen in the SSH server software I'm using. So I'm going to wait and see about that before I feel safe to poke my head out again. And in fact, I may drop SSH completely, with its inherently open ports, altogether and come up with an affirmatively more secure solution. Leo, like you were talking about using Tailscale in order to get into your inside network, because Tailscale is able to do NAT penetration, in which case you don't have to have any open ports.
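The source-IP allow-list Steve describes can be sketched in a few lines. This is only an illustration of the idea, not his actual configuration; the networks below are documentation-reserved example CIDRs standing in for his two remote work locations, and a real deployment would enforce this in the border firewall itself rather than in application code.

```python
import ipaddress

# Hypothetical allow-list in the spirit of Steve's setup: drop every
# connection except those from the known remote work locations.
# These CIDRs are made-up examples from the documentation ranges.
ALLOWED_SOURCES = [
    ipaddress.ip_network("198.51.100.0/28"),  # example: remote site A
    ipaddress.ip_network("203.0.113.42/32"),  # example: remote site B
]

def is_allowed(source_ip: str) -> bool:
    """Return True only if the source IP falls inside an allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(is_allowed("203.0.113.42"))  # True: a known remote location
print(is_allowed("192.0.2.7"))     # False: everything else is blocked
```

The point of the design is the default-deny posture: anything not explicitly listed never reaches the SSH daemon at all, which is why a rare ISP-side IP change at a remote site is the only maintenance event.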

Speaker 2:
[121:22] Yeah, so that's what I use and it's great.

Speaker 1:
[121:24] I love it. So this wonderful call-to-action paper next offers some key takeaways for CISOs. Here's what the paper's authors recommend CISOs consider. Use LLM-based vulnerability discovery and remediation capabilities. They said, unlike defensive AI technologies, LLM-based vulnerability discovery capabilities are already mature and could be used to your advantage. Start immediately by asking an agent for a security review of any code and build towards a vuln-ops capability. Update your risk metrics. With the shifting landscape, many of your metrics and risk assessments may be outdated and could affect business reporting. Consider how to update these and communicate the challenge with stakeholders. Accelerate your team by the use of coding agents. And you were just talking about this on Mac Break Weekly, how some group at Apple are not-

Speaker 2:
[122:35] The Siri group is being sent to learn how to vibe code, almost 200 of them. Because I guess they weren't, they didn't-

Speaker 1:
[122:46] Take it seriously.

Speaker 2:
[122:47] Take it seriously, yes.

Speaker 1:
[122:48] They didn't realize what the benefits were. So these guys are saying to CISOs, accelerate your team by the use of coding agents. While defensive AI technologies are lagging behind offensive ones, agents can already accelerate human action across the board from incident response to GRC. Encourage and require your team to use these agents to accelerate their capabilities. Triage and test patches. Red team your environment. Automate audit data collection and accelerate security operations overall. Prepare to respond to more incidents. Run tabletop exercises for multiple simultaneous high-vulnerability incidents occurring within the same week. And have playbooks in place for high-level critical incidents. I mean, these guys are literally predicting a storm is coming. Examine how to automate remediation capabilities to the degree possible. Verify and enable mitigating controls such as segmentation, egress filtering, zero-trust architectures, phishing-resistant multi-factor authentication, and secrets rotation to limit impact when exploitation occurs. The supply chain will be affected. Increase focus on the basics. The basics remain valid and can be prioritized for risks that cannot otherwise be mitigated. Segmentation, patching known vulnerabilities, identity and access management, and defense in depth and breadth, all increase the difficulty for attackers to lower latent risk. Expanding these efforts while there is time is prudent. In other words, do it now before it's too late. They said, we cannot outwork machine speed threats. Reprioritize, automate, and prepare for burnout. The cadence and volume of vulnerability disclosures will exceed anything we have experienced before. They're literally saying, understand everybody, bad guys, China and Russia and North Korea, they're going to get this capability and they are going to come at us hard. They wrote, the cadence and volume of vulnerability disclosures will exceed anything we have experienced before. 
Consider how you manage current priorities and request additional headcount and budget for reserve capacity to avoid exhausting available resources or potentially burning out existing staff. This in parallel with adoption of coding agents, re-prioritization, putting more automation in place, and helping your team through career uncertainties and upskilling challenges. Yikes. Evolve to a Mythos-ready security program. Mythos, they wrote, is likely one of many changes coming to cybersecurity risk. If not already underway, seriously consider incorporating Mythos and its implications into your strategy. Build collective defense now. Attackers already operate as syndicates, crowdsourcing, sharing tools, and moving as a collective. Defenders must do the same and leverage our coordinating groups. Engage now with sector coordinating groups, ISACs, CERTs, and standards bodies to share threat intelligence, coordinate response, and produce sector-specific guidance for this moment, especially when considering organizations that fall below the cyber poverty line as introduced by Wendy Nather. So, just to pause, a little over three years ago, back in 2023, Cisco's CISO, Wendy Nather, articulated a concept she termed the cyber poverty line. It was the point below which an organization cannot afford to invest in the minimum required security to remain safe on the Internet. So, like, you do need to invest in security. The bottom of page 17 of the show notes duplicates a breathtaking chart from the very cool and somewhat unnerving website, zerodayclock.com. You know, zerodayclock.com. The chart shows how the vulnerability-versus-exploit race has radically changed over just the past eight years. Eight years, at the bottom of page 17, a beautiful chart. Eight years ago, in 2018, the average TTE, time to exploit, was 2.3 years. 
In other words, just eight years ago, on average, there was a 2.3 year gap between the public disclosure of a security vulnerability in a CVE and its confirmed use in an attack exploit. 2.3 years.

Speaker 2:
[128:35] Wow, we had a lot of time back in the day.

Speaker 1:
[128:37] We did.

Speaker 2:
[128:38] Not anymore.

Speaker 1:
[128:39] Look at this chart, Leo, at the bottom of page 17.

Speaker 2:
[128:43] How many days now do we have in a zero day?

Speaker 1:
[128:46] Well, watch how this happens. The next year, in 2019, that exploitation gap had dropped from 2.3 to 1.9 years. In 2020, a year later, it was down to 1.3 years to exploit. 2021 averaged 10.8 months from CVE publication to exploitation. A year later, 2022 dropped that 10.8 months down to 9.7. The next month, 4.9. I'm sorry, the next year, 2023 was down to 4.9 months. 2024, just 2 years ago, we were down to 56 days. Last year, 23.2 days. Shockingly, so far this year, we are seeing exploits appear an average of 10 hours after their CVE vulnerabilities have been published.

Speaker 2:
[129:50] That's AI, right? I mean, that's got to be AI.

Speaker 1:
[129:53] That is, and we've been talking about this on the podcast, mostly theoretically, because it was obvious it was going to happen. Bad guys are sitting, waiting for new vulnerabilities to be published, and they instantly jump on them. Ten hours. Ten hours. And so, I mean, there is just no time. As the writers of this paper said, humans cannot outperform machine-driven attacks. It can't, it won't, it doesn't happen. So, think about that: eight years, Leo, gone from 2.3 years to 10 hours. So everybody should check out zerodayclock.com. It's got this chart and a bunch of others where these sorts of stats are being maintained, and it is breathtaking. Okay. So next, I'm going to share just the brief introduction that these cybersecurity industry expert authors wrote for the paper, but Leo, let's first take our final break.
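For a rough sense of the collapse Steve just walked through, the figures he read from the zerodayclock.com chart can all be normalized to hours. This is just the transcribed numbers restated; months are approximated as 30 days, so the intermediate values are ballpark only.

```python
# Time-to-exploit figures as read from the zerodayclock.com chart,
# normalized to hours (years = 365 days, months approximated as 30 days).
tte_hours = {
    2018: 2.3 * 365 * 24,   # 2.3 years
    2019: 1.9 * 365 * 24,   # 1.9 years
    2020: 1.3 * 365 * 24,   # 1.3 years
    2021: 10.8 * 30 * 24,   # 10.8 months
    2022: 9.7 * 30 * 24,    # 9.7 months
    2023: 4.9 * 30 * 24,    # 4.9 months
    2024: 56 * 24,          # 56 days
    2025: 23.2 * 24,        # 23.2 days
    2026: 10,               # 10 hours
}

for year, hours in tte_hours.items():
    print(f"{year}: {hours:,.0f} hours from CVE publication to exploit")

shrinkage = tte_hours[2018] / tte_hours[2026]
print(f"Roughly a {shrinkage:,.0f}x collapse in eight years")
```

The final ratio, about two thousand to one, is the number behind the "2.3 years to 10 hours" remark.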

Speaker 2:
[131:01] Good thinking. Thank you for remembering, Steve. Final break and then the finale of Yes, Exactly. That graph. Wow. That says it all, I think. I mean, unbelievable. Our show today brought to you, we'll get back to it in just a moment. I know, the thrilling and gripping conclusion. But first, a word from our sponsor, Zscaler, the world's largest cloud security platform. The potential rewards of AI, as you can see every day, are too great to ignore, but so are the risks. Now, I'm not just talking about this instant vulnerability invention, but the risks of the loss of sensitive data through accidental exfiltration by your own staff, by your own team using the AI. And then there are, of course, attacks against enterprise-managed AI. And then there is the obvious risk of generative AI being used by threat actors, helping them rapidly create phishing lures, write malicious code, look at this, 10 hours, automate data extraction, all kinds of tools. And they do it fast and they do it at scale. So let's talk about that first issue of accidentally exfiltrating information. There were 1.3 million instances of social security numbers leaked to AI applications last year. I bet that number doubled this year around April 15th when we were all finishing our taxes, and I bet your employees thought, you know, may as well run this by ChatGPT just to see what it thinks. And of course, what's in your tax return? Everything a bad guy needs: name, address, birthdate, social, everything. ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations last year. And that's the inadvertent stuff. That's not even mal-actors. That's just people doing their thing. It really is time to rethink your organization's use of public and private AI. Check out what Siva, the Director of Security and Infrastructure at ZWARA, says about using Zscaler to prevent AI attacks.

Speaker 3:
[133:22] With Zscaler being in line, in a security protection strategy helps us monitor all the traffic. So even if a bad actor were to use AI, because we have a tight security framework around our endpoint helps us proactively prevent that activity from happening. AI is tremendous in terms of its opportunities, but it also brings in challenges. We're confident that Zscaler is going to help us ensure that we're not slowed down by security challenges but continue to take advantage of all the advancements.

Speaker 2:
[133:52] Thanks Siva. With Zscaler Zero Trust plus AI, you can safely adopt generative AI and private AI to boost productivity across the business. Their Zero Trust architecture plus AI helps you reduce the risks of AI-related data loss, protects against these AI attacks we've been talking about, and guarantees greater productivity and compliance. Learn more at zscaler.com/security. You owe it to yourself. That's zscaler.com/security. We thank you so much for your support. Security Now and Steve Gibson. All right, on we go. Oh, Steve, I think you're muted again.

Speaker 1:
[134:36] Did it again. I didn't want you to hear me typing. So, okay. Thank you. Okay. So the brief introduction that these cybersecurity industry expert authors wrote for the paper. They explain, and I'm going to point our listeners to it, and recommend that they point their bosses, anybody who doesn't understand this, to it as well; the paper was written for the C-suite guys to understand. And that's why it's got the who's who behind it. So they wrote, many of our assumptions about the capabilities of AI in vulnerability research, exploitation, and autonomous attacks may be outdated. Throughout 2025 and into 2026, we've seen continuous examples of increasing capabilities, both in research and in actual in-the-wild attacks. AI-driven vulnerability discovery and exploitation has been accelerating for over a year. Anthropic's Claude Mythos Preview represents a step change in that trajectory. Autonomously finding thousands of critical vulnerabilities across every major operating system and browser, generating working exploits without human guidance, and empowering autonomous attack orchestration, all at a speed and scale that outpaces any prior capability. The asymmetry this creates is structural. AI lowers the cost and skill floor for discovering and exploiting vulnerabilities faster than organizations can patch them. This is what I was talking about last week when I said, now script kiddies can be expert attackers and exploiters because you just ask AI for some attacks. The window between discovery and weaponization has collapsed to hours. Attackers gain disproportionate benefit, and current patch cycles, response processes, and risk metrics were not built for this environment. While many of these capabilities predate this model, Mythos-class capabilities do represent a step change and will proliferate, meaning Anthropic is only first, they're not the last. The organizations that respond well will be those that build the muscle now. 
The processes, the tooling, and a culture willing to adopt AI as a core part of how security gets done. The adaptability will help determine who meets the next wave on their own terms. This moment requires reprioritizing resources, reviewing risk levels and controls, and leveraging AI where feasible. At the time of this writing, most AI defensive controls and approaches are not yet mature. That said, AI attacker technology may be used for defense purposes, and coding agents will help. To finally place all this into context, I want to share Appendix A of their paper, which they titled Historical Precedent, meaning where we came from, because this will help everybody to put this in context. They said, this all began with the DARPA Cyber Grand Challenge, a landmark competition organized by DARPA in 2016, so a decade ago, that demonstrated the potential of fully automated cybersecurity systems. Teams developed autonomous platforms capable of identifying, exploiting, and patching software vulnerabilities in real time without human intervention. The challenge highlighted a shift toward machine-speed cyber defense, showing how automation and artificial intelligence could significantly enhance vulnerability management and incident response, while also raising important questions about trust, control, and the future role of human operators in cybersecurity, meaning humans are going to be obsolete. By mid-2025, XBOW, an autonomous offensive security company, topped the HackerOne leaderboard. The DARPA AI Cyber Challenge found 54 vulnerabilities in four hours of compute. Google's Big Sleep discovered real zero days in open source. Claude was used to automate full attack chains from reconnaissance through exfiltration, and open source tools such as Raptor proved autonomous vulnerability research is available to anyone able to use an agent. In September 2025, Heather Adkins of Google and Gadi Evron, the CEO of Knostic, published a warning. 
They warned that attackers were racing toward a singularity moment, with autonomous vulnerability discovery and exploitation roughly six months away. Wow. Well, that's impressive. Their timing was exactly correct. That was six months ago. In February 2026, Anthropic, using Claude Opus 4.6, reported more than 500 high-severity vulnerabilities in open-source software. Aisle, remember, A-I-S-L-E, found 12 OpenSSL zero days, including a CVSS 9.8 vulnerability dating back to 1998. Linux kernel maintainers saw vulnerability reports climb from 2 to 10 per week, largely hallucinated at first, but that changed rapidly. The volume has held steady, but the reports are now all verified as real bugs. The curl project, which originally discontinued its bug bounty program because it was drowning in hallucinated vulnerability reports, AI slop, last week echoed the observation from the Linux team, reporting an increasing number of AI-supported high-quality security incidents. Sysdig documented an AI-based attack that reached admin-level access in eight minutes. This week, Gambit released a report on the AI-led compromise of Mexican government infrastructure originally reported in February. Actually, I saw that and skipped over reporting it due to show length. But briefly, an attacker used a combination of both ChatGPT and Claude to attack, rapidly penetrate, inventory, and exfiltrate a much larger amount of data from the Mexican government than would ever have been possible without the aid of AI automation. He used an AI-automated attack. They end their historical timeline by telling us about the zero-day clock, writing that in March, Sergej Epp and others introduced the zero-day clock, visually demonstrating the disappearing time-to-exploit window, demonstrating the drastic fall in time to exploitation to less than a day in 2026: 10 hours. 
It's worth noting that the historical collapse in time to exploit has not yet produced a proportional increase in the impact of exploitation. Many of the most consequential incidents of recent years involved credential abuse, social engineering, or supply chain compromise rather than zero days. The zero-day clock trend is a leading indicator of where attacker capability is heading, not a direct measure of current damage. So it's predicting what's going to happen shortly. Okay, so then the AI-driven security research company, Aisle, A-I-S-L-E, remember that we talked about them at the time. They found the problems in OpenSSL. They responded a little disgruntled themselves, understandably, to all of the Mythos buzz. And so it was in February that we reported on them finding those 15 vulnerabilities in OpenSSL, 12 of which entirely composed a major update to OpenSSL. And as we know, this paper briefly mentioned them in passing. They're grumbling somewhat, saying that they were able to reproduce Anthropic's results themselves without the mythical Mythos. They wrote, and I have a link to their report in the show notes. They said, this is Aisle, we took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weight models. Those models recovered much of the same analysis. Eight out of eight models detected Mythos' flagship FreeBSD exploit, including one with only 3.6 billion active parameters, costing $0.11 per million tokens. A 5.1 billion active parameter open model recovered the core chain of the 27-year-old OpenBSD bug. On a basic security reasoning task, small open models outperformed most frontier models from every major lab. The capability rankings reshuffled completely across tasks. There is no stable best model across cybersecurity tasks; the capability frontier is jagged. 
In other words, they're just saying, hold on here, we've got our own small, cheap models that we are able to deploy that do the same thing as Mythos. I don't doubt that Aisle did what they claimed. Although, admittedly, there's much they didn't say. For example, even with isolated code, confirming an already known problem feels different from making brand-new discoveries, although I know in theory there should be no difference. We also don't know how autonomous their system was. That was one of the main points that Anthropic has been making about Mythos, is that you just ask it pretty please to attack somebody and it's able to. It's only natural for Aisle, a commercial enterprise whose specific and narrow focus is to offer commercial vulnerability discovery services to enterprises, to be somewhat miffed over all the breathless industry and media coverage Mythos has generated. They should be celebrating their own systems if they're able to meaningfully compete with Mythos' outcome for far less money. As I said, once the dust has settled, it's going to all come down to who can do the most with the fewest resources. So if Aisle's got a bunch of tricks up their sleeve that allow them to offer these services far more economically, then I say that's great. Bravo. However, everyone who's been paying attention knows that what the cybersecurity industry most needs right now, this instant, without delay, is a swift kick in the pants. This Security Now podcast informed its listeners of Aisle's AI-driven vulnerability discovery news back in February. It's one of the reasons that Anthropic's claims for Mythos made so much sense to us, right? Because, like, we saw this coming, you know, this made sense. But Aisle did not break through in February. Mythos did. Even if Mythos were hype, which none of these experts, who should know, believe it to be, one thing should be abundantly clear. 
Even looking only at Aisle's results with OpenSSL from February, the next stage of AI-driven rapid vulnerability discovery and exploitation is here now. And, as all of these experts also agreed, we're not ready for it. So I'm all for whatever hype this industry is able to muster, if it will help to instill some much-needed fear and action in an industry which appears to have become far too comfortable with the status quo. You know, as I said at the top and a couple of times, let's turn this around. Let's have another Y2K event that never happens. Not because it isn't real, but because it is, and everyone who needed to understand it understood, and then took action to prevent the apocalypse from ever happening when everything rolled over to the year 2000. Anyway, as I said, I've created two GRC shortcut links to this very significant paper to make it even easier for our listeners to get to it. You can either go to grc.sc slash mythos, M-Y-T-H-O-S, which everyone should be able to remember. That'll just bounce you over to the PDF, or this week's episode number, grc.sc slash 1075. That'll do the same thing.
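Aisle's pricing claim a moment ago is easy to sanity-check with back-of-envelope arithmetic. The $0.11-per-million-tokens figure is theirs; the codebase size below is a made-up illustration, not a number from their report.

```python
# Back-of-envelope cost of Aisle's small-model analysis at their quoted
# rate of $0.11 per million tokens. The codebase size is hypothetical.
price_per_million_tokens = 0.11
codebase_tokens = 10_000_000  # illustration: a ~10-million-token codebase

cost = codebase_tokens / 1_000_000 * price_per_million_tokens
print(f"${cost:.2f} to pass the whole codebase through once")  # prints $1.10 ...
```

Which is the point of their grumbling: at prices like that, running even very large code bases through a capable open-weight model is effectively free compared to the cost of the breach it might prevent.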

Speaker 2:
[149:51] Nice.

Speaker 1:
[149:52] And I think it is very clear that the, I mean, I get it that when we're talking about increasing head count and reshuffling priorities and all this, I mean, these are expensive things to ask for a problem that hasn't yet manifested. The problem is by the time it does, it could be too late.

Speaker 2:
[150:13] It's like way more expensive.

Speaker 1:
[150:15] Yes. It's like waiting to see what, you know, like if the elevators stop running on January 1st of the year 2000, like I'd rather not get stuck in an elevator. Thank you very much.

Speaker 2:
[150:27] Okay. Now here's the question. Models are going to continue to get better. I think there's no doubt about that. There was some question a year ago, maybe that maybe we'd hit a plateau and models weren't getting better fast. I think we all see that that is not the case.

Speaker 1:
[150:43] We're learning how to use this new thing. Yeah. Like the notion of parallel agents.

Speaker 2:
[150:48] Yeah, we're getting better at it. Yeah.

Speaker 1:
[150:49] A collection of different capabilities that are brought in.

Speaker 2:
[150:54] Right.

Speaker 1:
[150:54] So yeah, so we're learning basically how to ask.

Speaker 2:
[150:58] So presumably the 271 bugs we found in Firefox this time. Next time we might find more. We might find more again as models get better. We might find more again. Is there a point where software just becomes perfect and there are no more bugs? Or I think we can get it.

Speaker 1:
[151:17] Yeah, I think there is. Software is math. Math is 100% predictable. You know, there's no random number generator like there is in AI. There's no random number generator in our software. It is deterministic. And I hope that's something that doesn't get lost in the details: basically, humans have created software that is too complex for them to hold in their heads.

Speaker 2:
[151:50] Right.

Speaker 1:
[151:51] That that's what's happened is we don't understand our own creation, but AI can be scaled to be able to understand, you know, and I use understand in air quotes. I know it's not conscious. It's not actually understanding, but to weave through all the combinatorial ins and outs. And God, who was it? There was another person who I just saw it. I think it might have been in an email feedback. Somebody else. Oh, it was one of our listeners. I'll share it next week. It was one of our listeners who has been maintaining a package that is exposed to the Internet and it involves SQL and he was curious. So he aimed Claude code at the software that he wrote, and it found a vulnerability that astonished him. And he said, it wasn't super critical because it only blah, blah, blah, blah, blah. I know there were lots of ways that you had to have it, but he was amazed by what it found in his own code. And so he stood there thinking, my God, is this true? And then he actually, oh, he didn't want to upset it by asking it for an exploit. So he wrote the exploit himself.

Speaker 2:
[153:11] Actually, Glenn Fleishman on Sunday was reporting something very similar. He's had a web-facing or internet-facing tool running for, I think he said, like 20 years, a long time. It's just a little thing that he runs, some sort of a book search or something. And he said he fired Claude Code at it. It found bugs that had been running for 20 years, that no one had seen, that he hadn't seen, that he was able to fix. I mean, I think that you nailed it, which is that it's gotten impossible for us as human beings to make perfect software. But this is a machine. It is tireless. It doesn't make the same kinds of mistakes.

Speaker 1:
[153:52] And to use chess, to fall back to chess again: you know, super chess grandmasters are able to look at a board and see things in it I can't even begin to describe. They were able to hold their own for a long time. No more. No, that's gone.

Speaker 2:
[154:11] It's no longer even close.

Speaker 1:
[154:13] And that suggests that now we have computers that are able to look at the same thing. Again, there's no one's rolling dice in chess. It is a deterministic board game. And they can take us down now.

Speaker 2:
[154:29] Meanwhile, I've been working on my firmware. Let me just, help me Obi-Wan. Help me Obi-Wan. Just gonna see if it's listening. Help me Obi-Wan. Still working on it. It's a thorny problem we're working on here.

Speaker 1:
[154:43] But we are, you know, we're, okay. You're a little younger than I am. You're in your late 60s. I'm now 71.

Speaker 2:
[154:51] No, I'm almost 70. 70 in November. I'm not so far behind.

Speaker 1:
[154:55] Okay. So we're still going to be here in another 10 years. I hope so.

Speaker 2:
[154:59] God willing.

Speaker 1:
[155:00] The world is going to be different.

Speaker 2:
[155:01] I know. And I love that. I thought I was going to miss the apocalypse.

Speaker 1:
[155:05] It's going to happen so fast. That what's so cool is there's so much money behind software development that there will be a huge push to make this happen. Oh yeah.

Speaker 2:
[155:16] And the other thing that's encouraging is this, these tools are getting more efficient, which means they're taking less hardware to do more, which means not only will they improve, but they will be more accessible. They will be less expensive.

Speaker 1:
[155:30] I'm convinced cloud crap is going to go away. I hope so. We're going to have local running models because we'll have little software, you know, AI boxes in our homes that we talk to and they're able to do what we want.

Speaker 2:
[155:44] Yep. Yep. That's what I'm working on right now. That's exactly it. I'm trying to make, because I'm so dissatisfied with Alexa and Siri and all these other assistants. I'm trying to make an assistant that works the same way, but is local and knows me and has memory and all of that stuff. And we're getting, I'm getting closer than I ever thought I would. And I think, you know, by the time I'm 80.

Speaker 1:
[156:05] And here's the key, Leo, you are having fun.

Speaker 2:
[156:08] Oh, and it's so, it's the best game ever. You are having fun.

Speaker 1:
[156:12] Yep.

Speaker 2:
[156:12] It's similar. I mean, I still love coding, but coding is more like hand, hand building furniture.

Speaker 1:
[156:17] Leo, it is modern coding. This is what coding is going to become. People are going to be removed from the code generating loop. And we, and we will be directing AI to write our code. And this is me who is still coding in assembly language. I'm saying it's over folks. People are going to be taken out.

Speaker 2:
[156:37] Wow. And you know what? Maybe, maybe that's the right thing because we had our time. Computers are going to do a better job of this. This is their native tongue, you know? Steve Gibson does such a good job. I'm so glad we have you to rely on. And let's, let's hope we get to keep doing this for many, many more years to come. You'll find him at grc.com, the Gibson Research Corporation. That's where you'll find Spinrite, the world's best mass storage maintenance, performance enhancing and recovery tool. If you've got mass storage, you've got to have Spinrite. Get the current version. 6.1 is there. Free upgrades for anybody who's ever bought it in the past. If you haven't bought it, buy it now. Get on, get on the train. He also has something brand new that he wrote that's fantastic. The DNS Benchmark Pro that lets you figure out what the best DNS provider would be for your particular situation. That's where you live. It's less than 10 bucks. It matters where you live. All of that at grc.com. While you're there, sign up for his newsletters. He has two of them. One is a product announcement newsletter that you'll never get anything from. And he works very carefully.

Speaker 1:
[157:49] Methodically.

Speaker 2:
[157:50] Methodically. The other, though, you will get a weekly email. The show notes, sometimes from the wrong year, most of the time from the right year. No, it's always the correct show notes. It's just the year that it says it's from is different. And those come out, you know, usually on a Sunday before the show on Tuesday, or thereabouts. So, sign up there at grc.com/email. Actually, what you're really doing with that form there is whitelisting your email so you can send him pictures of the week and comments and suggestions and stuff. You do have to do that because he doesn't want any spam. It's a very effective system he's come up with. Let's see, what else? Oh, we have copies of the show at our website, twit.tv/sn. There's also a YouTube channel dedicated to the video. You can also subscribe in your favorite podcast client. However you do it, I don't think you want to miss a single episode of this show. If you're one of those folks who wants the most recent version of the show, you can actually watch us do it live Tuesdays right after Mac Break Weekly. That's 1:30 Pacific, 4:30 Eastern, 20:30 UTC. Club members get to watch in the Club TWiT Discord. How nice. The rest of you can watch on YouTube, Twitch, x.com, Facebook, LinkedIn, or Kick. You take your pick. You can watch live if you want. If you're not a Club member, do join the Club. Very important to us to keep doing what we're doing. Advertising does not cover all of our expenses, barely covers 70 percent. The Club makes up the difference. Without you, we really wouldn't be able to do what we're doing. So please think about it. If you can afford it, 10 bucks a month, you get ad-free versions of all the shows, you get access to the Discord, which is a great place. Smart people, really fun to hang out. You get special programming just for Club members. We really need you now more than ever. twit.tv/clubtwit. Steve, have a wonderful week. And we'll see you next time.

Speaker 1:
[159:48] I'll be here next Tuesday. Bye.

Speaker 2:
[159:51] Hello everybody, Leo Laporte here. You know what a great gift would be, whether for the holidays or at just anytime, a birthday, a membership in Club TWiT. If you have a TWiT listener in your family, somebody who enjoys our programming, and you want to give them a nice gift and support what we do, visit twit.tv/clubtwit. They'll really appreciate it. And so will we.

Speaker 1:
[160:15] Thank you.

Speaker 2:
[160:16] twit.tv/clubtwit.