title Yes. Exactly. - The Zero-Day Ticking Clock

description Security leaders warn the era of AI-driven bug hunting has arrived, with Mythos uncovering hundreds of overlooked vulnerabilities in code bases as trusted as Firefox. Are defenders ready for the avalanche of exploits and the frantic race to patch?

A disgruntled developer discloses multiple Windows 0-days.
Microsoft purchases its own bugs in massive campaign.
VeraCrypt & Wireshark suddenly lost their dev accounts.
A serious problem with re-captured domain names.
How might AI help to secure open source repositories?
A listener wonders what we thought of Project Hail Mary.
Cybersecurity professionals tell us what Mythos means.
Show Notes - https://www.grc.com/sn/SN-1075-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts!

Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:
canary.tools/twit - use code: TWIT
joindeleteme.com/twit promo code TWIT
hoxhunt.com/securitynow
meter.com/securitynow
zscaler.com/security

pubDate Wed, 22 Apr 2026 02:25:44 GMT

author TWiT

duration 9628000

transcript

Speaker 1:
[00:00] It's time for Security Now, Steve Gibson is here. He's got some more thoughts on Project Mythos, the new super smart AI that can find security flaws. He's saying, you know what? This isn't hype, and he's got some real evidence behind it. We'll talk about Microsoft buying its own bugs back and ignoring other ones. And what does Steve think of Project Hail Mary? Yeah, even that coming up next on Security Now. Podcasts you love.

Speaker 2:
[00:36] From people you trust.

Speaker 1:
[00:38] This is TWiT. This is Security Now with Steve Gibson. Episode 1075 recorded Tuesday, April 21st, 2026. Yes, exactly. It's time for Security Now. Yay, Tuesday has come. You've been waiting all week long to hear from this guy right here, Steve Gibson, the man in charge at Security Now. Hey, Steve, good afternoon to you.

Speaker 2:
[01:06] Leo, great to be with you again, as always. I got some feedback from the 20,000 plus listeners who joined my email list to receive the show notes. As a consequence of my own schedule over the weekend, I needed to start work on this Friday. I finished Saturday afternoon and immediately sent out the show notes and the feedback was, wait, is it Sunday?

Speaker 1:
[01:38] People are paying attention, Steve. You can't pull one over.

Speaker 2:
[01:40] Unfortunately, they're paying very close attention because my emailing system assumed that I would be doing the emailing in the morning of the podcast. It autofills the day's date at the top of the email. But since I'm sending the mailing out, typically on Sundays and occasionally on a Saturday, I have to go in and remove the placeholder for the autofill and replace it with the actual date. Unfortunately, something is wrong with me because this is not the first time I've put in 2024 as the year. And so I sent it out as 4-21-2024. And so several of our sharp-eyed listeners said, uh, I don't think so.

Speaker 1:
[02:31] And we traveled in time.

Speaker 2:
[02:33] And now I'm thinking I'm just gonna leave it with autofill and it'll date the email on the send date rather than the podcast's date, since, you know, that would work forever. Anyway, we're gonna have some fun. I hope that today's podcast will put to rest any question about what Mythos means, because two days after last Tuesday's podcast, the entire industry of security professionals, Bruce Schneier, whom we know well, Google's CISO, I mean, a who's who, all co-signed and authored and produced a document that is intended to get the industry's attention, because they all agree with me. So I titled today's podcast, Yes, Exactly, which is meant to say, I love the name.

Speaker 1:
[03:35] Yes, exactly.

Speaker 2:
[03:38] Yes, it is what I said last week. But I want to share, just so you really know where my head was at one point. Remember I said that last week's show notes was revision three. In the first revision, I wrote, this could be the most significant podcast in the history of Security Now. Now, I did some deep breathing, and in one of the follow-on revisions, I removed that sentence. But my point was we're talking about something potentially big. Well, then of course, my working name was Mythos Marketing or Mayhem, because this could be a big deal. So anyway, we got an amazing document that I'm going to share. The other thing that is useful is I've heard from some listeners who are having a hard time convincing upper management that they need to respond, because of course, any response is going to be expensive, right? I mean, it's going to require expenditures of talent, equipment, upgrades, upheaval, whatever.

Speaker 1:
[04:52] Well, that's why the flaws are still there in the first place.

Speaker 2:
[04:56] Exactly. And of course, then there's the issue of the new things that haven't yet been found. So one of the things that this document offers, and in fact, this is also the first time I've had two shortcuts in one podcast for the same file, because you can get to this two ways, GRC.sc slash Mythos or episode number, GRC.sc slash 1075. Because I want there to be no possible reason that our listeners can't get the PDF and send it up to the C-suite, folks, because it is written for them. There are takeaways and bullet points and priority lists, and this is what you have to do, because a tsunami is very likely coming. In fact, I realize I'm already giving this away. I've got this whole thing in my head. There's very much a Y2K aspect to this. Think about it. Everyone who said after we went into the year 2000, oh, look, that was nothing. Nothing happened. Well, folks, there was a reason nothing happened. It's because everybody who needed to actually took it seriously and prevented something from happening. Anyway, we're going to have, I think, a great follow-up today to last week's. Last week was just my opinion. Today, we've got everybody's opinion. But we're going to talk about a disgruntled developer who has been disclosing multiple Windows zero days because he's upset with Microsoft. Microsoft purchasing its own bugs in a massive campaign. The story behind something from a couple of weeks ago that many of our listeners wrote to me about. I didn't know what to say about it. I could have talked about it last week, but actually I bumped it because last week's podcast was full: how VeraCrypt and Wireshark and some other projects suddenly lost their dev accounts at Microsoft. They were like, what happened? They were unable to do revisions of their software for some reason. We now have the whole story there. So it's good I waited a week because we were going to talk about it anyway.
We got a serious problem of recaptured domain names, which is reminiscent of the bucket reuse that we talked about with AWS a couple of weeks back. Also, a listener feedback-inspired exploration of how exactly AI might best help to secure open source repositories. A listener wrote to say, hey, I never heard you and Leo talk about your opinions of Project Hail Mary. Could you say a few words? So we will. Then we're going to end with what cybersecurity professionals across the industry tell us about what Mythos means.

Speaker 1:
[08:04] Oh, that'll be interesting.

Speaker 2:
[08:06] And of course, again, the title of today's podcast is, yes, exactly.

Speaker 1:
[08:12] That gives you some idea of what's to come. Awesome. We also have a lovely picture of the week, which I haven't seen, but I know it's lovely because it always is.

Speaker 2:
[08:22] I think this one, this one is a bit of a hoot. So yeah, a bit of a hoot coming up.

Speaker 1:
All right. I am prepared to show you all the picture of the week.

Speaker 2:
[08:35] I gave this picture the caption, hyphen usage is uncommon, but there are times when there's no substitute. All right.

Speaker 1:
[08:46] I like the, I like the, this is going to be another punctuation.

Speaker 2:
[08:51] Hyphen usage is uncommon, but there are times when there's no substitute. Okay.

Speaker 1:
[09:06] Okay. Somebody took this quite literally.

Speaker 2:
[09:09] Do you want to describe it?

Speaker 1:
[09:10] Yes.

Speaker 2:
[09:10] Yes. So we have a sign, and you can sort of see a keyboard on a shelf above, and maybe a monitor, and there's a power strip in the back. There's some sort of, you know, sensitive electronics and, you know, PC stuff.

Speaker 1:
[09:26] We had signs like this all over our studio because I always bring my coffee in and spill it.

Speaker 2:
Yes. And the sign says, no drinks back here unless they have a screw on top now. And then it says, thank the management. Now they clearly meant a screw-on top, not a screw on top. So what we have then is a Styrofoam cup with one of those little plastic lids and a long, about an inch and a half, wood screw sitting on top of the cup. So it has a screw on top, which is all you need.

Speaker 1:
[10:06] It satisfies all requirements.

Speaker 2:
[10:08] Yes. And had it said screw hyphen on, then it would have been clear that you didn't want a screw on the top of the cup. You wanted a screw-on top.

Speaker 1:
Again, punctuation can be very important.

Speaker 2:
[10:24] Yes. Not many people use hyphens, but I like to hyphenate.

Speaker 1:
[10:28] I like hyphens.

Speaker 2:
[10:29] Yeah, I do too. Yeah. Not quite clear when you need them, but I just say, hey, err in the direction of hyphenating, because what the hell.

Speaker 1:
[10:39] Okay, so. So we mentioned last time that the Patch Tuesday last week was the second biggest Patch Tuesday of all time. I mean, big numbers.

Speaker 2:
[10:47] Yes. And we don't know yet whether that is Mythos related, but we know that Microsoft is one of the companies that Anthropic has given access to Mythos. And you have to wonder, because I've heard people talk about, like somebody was saying on some show, oh, what do they think? There's no way that it's not going to escape. It's not going to get out. It's like, no. They're giving them access to the model online. They're not giving them a Mythos to go.

Speaker 1:
[11:24] No, here's your Mythos. We'll wrap it up for you. Do you want to wear it out or?

Speaker 2:
[11:28] Please make sure it's not like one of those unreleased iPhones that gets left at a bar, you know. So anyway, there's no problem there. But the fact that it's cloud based means that Microsoft would need to trust their competitor, because of course they're all in with OpenAI, whereas this is coming from Anthropic. They would need to trust Anthropic with their source code uploads into Anthropic's cloud in order to have Mythos rummaging around in Microsoft source. So there's that. But then on the other hand, everybody is going to have to trust Anthropic in that fashion because it's their cloud. There's no local Mythos yet, as far as I know. Anyway, last Thursday the 16th, Bleeping Computer's headline was New Microsoft Defender Red Sun Zero-Day Proof of Concept Grants System Privileges. Red Sun, that's the name it's been given. And as we know, elevation of privilege is almost as important as remote code execution, because oftentimes the remote code that you're executing is in the context of a user login, where the whole OS has got security wrapped around the user to keep the user from misbehaving. So you need to first get into the user account, but then you need to get out of the user account into the system account, into root. So again, elevation of privilege is a big deal. So Bleeping Computer's piece told the story of the disgruntled developer, and I'll share some of what this guy wrote, because it sounds like he himself is a little more than disgruntled. He's a little sketchy. But anyway, this guy has been publishing, and this is not even the first, fully working proof of concept exploit code for his discoveries, plural, of elevation of privilege vulnerabilities in both workstation and server. From Server 2019 on, systems have been vulnerable to this.
So on the 16th, Bleeping Computer's headline was New Microsoft Windows Defender Zero Day Proof of Concept Grants System Privileges. The following day, the 17th, they followed up that reporting with another piece titled, Recently Leaked Windows Zero Days Now Exploited in Attacks. In other words, this guy put the proofs of concept up on GitHub, and the next day, bad guys had found them and were exploiting them to hurt Windows users. So, not good. Bleeping Computer said, threat actors are exploiting three recently disclosed Windows security vulnerabilities in attacks to gain system or elevated administrator permissions. Since the start of the month, the security researcher known as chaotic eclipse or nightmare. Oh, there's a hyphen. Nightmare hyphen eclipse. So, he's hyphenating. He has published proof of concept exploit code for all three security issues in protest of how the Microsoft Security Response Center, you know, MSRC, handled the disclosure process. And we're not getting much visibility into what that means exactly. Bleeping said, two of the vulnerabilities, dubbed Blue Hammer and Red Sun, are Microsoft Defender local privilege escalation flaws, while the third, known as Undefend, can be exploited as a standard user to block Microsoft Defender definition updates. At the time of the leak, the security flaws these exploits targeted were considered zero days by Microsoft's definition, which, remember, is a little different than the industry's, since they had no official patches or updates to address them. Normally, zero day is about surprise. In this case, it's essentially about something that hasn't yet been patched or updated. They said on Thursday, this is last week, Huntress Labs security researchers reported seeing all three zero day exploits deployed in the wild, meaning in use to hurt people, with the Blue Hammer vulnerability being exploited since April 10th.
They also spotted Undefend and Red Sun exploits on a Windows device that was breached using a compromised SSL VPN user account, showing evidence of what they're calling hands-on-keyboard threat actor activity, meaning not just automated scan stuff, but an attacker logged in through an SSL VPN hitting keystrokes in order to explore and exploit the vulnerable connection. They said, while Microsoft is tracking the Blue Hammer vulnerability as CVE-2026-33825 and has patched it in the April 2026 security updates. So it got fixed last Tuesday, which was Patch Tuesday of this month. They said attackers can use the Red Sun exploit to gain system privileges on Windows 10, Windows 11, and Windows Server 2019 and later systems when Windows Defender is enabled. It actually uses Defender in order to leverage its attack. So weirdly, disabling Defender disables the zero day. And they said, even after applying the April Patch Tuesday update. So this is a vulnerability post Patch Tuesday of this month. So we don't know when it's going to get fixed. Maybe an emergency out of cycle update. Who knows? They said that the disgruntled researcher explained. So this is the researcher saying, quote, when Windows Defender realizes that a malicious file has a cloud tag, meaning, you know, like the mark of the web, which tags files to say, we're going to treat this differently because you downloaded this off the Internet. So when a malicious file has a cloud tag, this is the disgruntled researcher writing, quote, for whatever stupid and hilarious reason, the antivirus that's supposed to protect decides it would be a good idea to just rewrite the file it found to its original location. The proof of concept abuses this behavior to overwrite system files and gain administrative privilege, unquote. We'll actually get a little more detail about that in a second.
So they wrote, when Bleeping Computer contacted Microsoft earlier this week for more information on the disclosure reported by the anonymous researcher, a Microsoft spokesperson told Bleeping Computer, and of course this is going to be as helpful as they generally are, quote, Microsoft has a customer commitment to investigate reported security issues and update impacted devices to protect customers as soon as possible. We also support coordinated vulnerability disclosure, a widely adopted industry practice that helps ensure issues are carefully investigated and addressed before public disclosure, supporting both customer protection and the security research community, unquote. So thank you for that, Microsoft. Two days before that, on Wednesday, this person, and this is me talking now, this person going by the moniker chaotic eclipse, posted his own diatribe over on blogspot. And I think it's worth sharing since it gives us some impression of who's doing the disgruntling. Dated Wednesday, April 15th, the blogspot post was titled Public Disclosure, a response for CVE-2026-33825 patch. It reads, here is the code, enjoy. And then he's got a GitHub link, github.com/nightmare-eclipse/redsun. So it looks like nightmare-eclipse is the guy's name, right? And Red Sun is the exploit. He continues, now to address what some media articles wrote. First of all, I want to talk about MSRC's official response regarding Blue Hammer. That's his previous release of a zero day exploit proof of concept code. He quotes, Microsoft has a customer commitment to investigate reported security issues and update impacted devices. So this is him quoting what Bleeping Computer just showed us they had said, for what it's worth. So he said, this is a very generic response, almost as if they don't care, and they don't. Why? Because MSRC was fully aware of this public disclosure. A case was filed but was dismissed by them.
And they are also aware that this one will be disclosed. But again, they are ignorant. This is again, Mr. Disgruntled. Normally, he writes, I would go through the process of begging them to fix a bug, but to summarize, I was told personally by them that they will ruin my life. And they did. And I'm not sure if I was the only person who had this horrible experience or a few people did, but I think most would just eat it and cut their losses. But for me, they took away everything. They mopped the floor with me and pulled every childish game they could. It was so bad, at some point I was wondering if I was dealing with a massive corporation or someone who is just having fun seeing me suffer, but it seems to be a collective decision. Wow. And one other thing, they do everything but support the research community. I won't disclose details, but they sabotage people a lot. I mean, just look at the past. Microsoft is the only major company who had a track record of multiple vulnerabilities being publicly disclosed just because the researchers were so upset by how MSRC treated them. Unfortunately, the folks who have the capacity to stop those disclosures not only don't care, but also seem to push harder for even worse exploits to be released. I didn't want to be evil, but they are actively poking me to start releasing RCEs, which I will be doing at some point, dot, dot, dot. He finishes, I will personally make sure that it gets funnier every single time Microsoft releases a patch. Okay. So we've talked about vulnerability discoverers feeling that their brilliance is not being sufficiently recognized or rewarded. In an earlier posting on March 26th, last month, this person wrote, quote, I never wanted to reopen a blog and a new GitHub account to drop code. But someone violated our agreement and left me homeless with nothing. They knew this would happen and they still stabbed me in the back. Anyways, this is their decision, not mine. Unquote. Okay.
So my presumption, without knowing anything specific, is that Microsoft almost certainly treated this researcher the same way they treat everyone else. But he or she believed that they deserved special treatment. We've certainly shared horror stories in the past about the way some researchers have been treated. But Microsoft is not evil. It's full of good people, a great many good people. So the result is that it's a big lumbering machine that doesn't care about anything, but only because caring is not what big lumbering machines are optimized to do. This researcher appears to have adopted an 'I didn't want to, but you made me do it' rationale for his actions. Reading between the lines, my guess is maybe he was counting on receiving a big bug bounty payout that he desperately needed, which never came. It sort of sounds like he may have released the proof of concept before Microsoft went through the formal disclosure process and so blew his opportunity because he pre-released. And so now, of course, he's blaming Microsoft for this. It's unfortunate that this person is having trouble with life. I looked at the details of the proof of concept that he designed, and it's such a slick bit of work that the well-known security researcher Will Dormann, from whom Bleeping Computer often seeks confirmation of complex issues, posted about this new Red Sun exploit over on Mastodon. Will wrote, from the same author as Blue Hammer, we now have Red Sun. This works around 100 percent reliably to go from unprivileged user to system against Windows 11 and Windows Server 2019 and beyond, with April 2026 updates, as well as Windows 10. As long as you have Windows Defender enabled, any system that has cldapi.dll should be affected. Okay, so cldapi sounds like the Windows Cloud API, and it is. In his next quote, Will refers to EICAR, E-I-C-A-R. That's the abbreviation for the European Institute for Computer Antivirus Research.
The file they produced, which itself is known as EICAR, is a popular pseudo malware test file that can be used to deliberately freak out any good AV tool without actually itself containing or doing anything malicious. It's just used as a test file to see if AV detects it. In a follow-on Mastodon posting, Will writes to explain what this thing does. He says, this exploit uses the Cloud Files API, writes EICAR to a file using it, meaning using the Cloud Files API, uses an oplock to win a volume shadow copy race, and uses a directory junction reparse point to redirect the file rewrite with new contents into C:\Windows\System32\TieringEngineService.exe. At this point, the Cloud Files infrastructure runs the attacker-planted TieringEngineService.exe, which is the Red Sun exploit itself, as system. And he writes, game over. In other words, this is the proof of concept that the disgruntled developer engineered, which, as I said, is some slick work. I mean, that's kind of tricky to figure out. Anyway, our primary takeaway here is that all fully patched, as of last week's mega Patch Tuesday, as you said, Leo, the second biggest ever, but apparently not quite big enough, Windows desktop and server systems are currently vulnerable to this exploit, which is now being actively used in the wild. It's not the end of the world, since something bad must first get into a machine so that it's able to trick Windows Defender into performing that odd file rewrite dance that allows attacker-provided code to be run with full system privilege. The attacker has to first get in there and provide the code. But while it's not the end of the world, Huntress Labs is observing it under active use. It would be nice if Microsoft were to fix this before May's Patch Tuesday, which is still a full three and a half weeks away.
I mean, this is a bad problem, and Microsoft didn't get there in time, and they probably should get this updated.
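Will's description of the exploit chain boils down to a link-following attack: an unprivileged party plants a link at a path a privileged process will later write to, so the privileged write lands somewhere the attacker chose. As a minimal sketch of that general class of bug (not the actual Red Sun code), here is a POSIX analogue in Python, with a symlink standing in for a Windows directory junction reparse point and the standard EICAR test string as the planted content. All file names are invented for the demo, and the oplock and volume shadow copy race are omitted:

```python
# Hypothetical demo of the "link following" bug class: the trusted write
# below is silently redirected through a pre-planted symlink.
import os
import tempfile

# The standard 68-byte EICAR antivirus test string (harmless by design).
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def redirected_write(workdir: str) -> str:
    """Simulate a privileged 'restore' that follows an attacker's link."""
    quarantine = os.path.join(workdir, "quarantine.dat")      # path the trusted process rewrites
    target = os.path.join(workdir, "protected_location.exe")  # where the attacker wants the bytes
    os.symlink(target, quarantine)        # attacker plants the link first
    with open(quarantine, "w") as f:      # trusted process "restores" the file...
        f.write(EICAR)                    # ...but the write follows the link
    return target

with tempfile.TemporaryDirectory() as d:
    planted = redirected_write(d)
    print(os.path.isfile(planted))  # prints True: the write landed at the redirected path
```

The real exploit additionally has to win a race so that Defender's own privileged rewrite follows the junction at the right moment; the sketch only shows why letting a privileged file restore follow attacker-controllable links is dangerous.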

Speaker 1:
[29:29] In the meantime, I think it's hilarious. Turn off Windows Defender.

Speaker 2:
[29:33] Yes. Actually, that's the mitigation, is turn off Defender, because Defender is being used. This is weird behavior that Defender has, and I'm sure Microsoft knows why they're doing it, but it ends up that attackers can leverage it in order to attack you. Yeah. I wouldn't turn off Windows Defender. I don't know what I would do. Maybe the 0patch guys. I didn't think to look over at 0patch.com. Maybe they have a quickie patch. They offer for free patches for vulnerabilities that are known, but for which Microsoft has not yet provided fixes, and they might do that. So that might be an opportunity if they...

Speaker 1:
[30:19] I'm checking right now just to see.

Speaker 2:
[30:21] I'll let you know. Cool. Meanwhile, Microsoft has been buying up their own bugs. While we're on the topic of Microsoft and bugs, Bleeping Computer also reported that Microsoft has been breaking records for bug bounty payouts. And before I take note of the irony inherent in this, I'll share what Bleeping Computer wrote. They said Microsoft has awarded 2.3, get this, 2.3 million dollars to security researchers after receiving nearly 700 submissions during this year's Zero Day Quest, which is the name of it, Zero Day Quest, ZDQ hacking contest. Tom Gallagher, Vice President of Engineering at the Microsoft Security Response Center, that MSRC we were just talking about, said that over 80 of the flaws found during the live event at Microsoft's Redmond campus were high-impact cloud and AI security vulnerabilities. That's just great. We've all been using that software, which has 80 high-impact cloud and AI security vulnerabilities. Actually, Leo, that may account for the April Patch Tuesday more than Mythos stuff.

Speaker 1:
[31:48] You don't really need Mythos. There's plenty to go around.

Speaker 2:
[31:51] Well, $2.3 million. I mean, what we've seen is this bug bounty concept paying off. You need to motivate researchers, who have only so many hours in the day, to go running around chasing after Microsoft vulnerabilities. Although it doesn't seem that there's any scarcity of those. So Bleeping Computer continues. Gallagher said, quote, During the 2026 live hacking event, Microsoft partnered with the global security research community, representing more than 20 countries and a wide range of professional backgrounds, from high school students to college professors. Researchers conducted all testing within authorized environments in accordance with Microsoft's rules of engagement, demonstrating potential impact without accessing customer data or other tenant systems. Within these constraints, researchers identified critical paths involving credential exposure, SSRF, server-side request forgery chains, and cross-tenant access. So lots of cloud, lots of AI. Bleeping Computer said, last August, Microsoft announced that it would increase the prize pool at this year's Zero Day Quest hacking contest to 5 million in bounties, which the company described as the largest hacking event in history. In 2025, Zero Day Quest also generated significant participation from the security community following Microsoft's offer of 4 million in rewards for vulnerabilities in cloud and AI products and platforms. So they bumped it from last year's 4 million to this year's 5 million. After the hacking competition concluded, Microsoft announced it had paid 1.6 million in rewards after receiving more than 600 vulnerability submissions. So last year, they offered 4 million and paid out 1.6 after 600 vulnerability submissions. This year, they offered 5 million and paid out 2.3 million. So lots more actual problems found.
The Zero Day Quest contest is part of Microsoft's Secure Future Initiative, a cybersecurity engineering effort launched in November 2023, following a scathing report from the Cyber Safety Review Board of the U.S. Department of Homeland Security that found the company's security culture inadequate and required an overhaul. Of course, we talked about this at the time. I mean, it just really raked them over the coals. Charging excess money for logging security events, rather than just turning it on. You know, let's make some more money from having so many bugs that people have to log in order to track them. Right. Anyway, Bleeping said, last August Gallagher said, quote, as part of our Secure Future Initiative, we will transparently share critical vulnerabilities through the CVE program, even if no customer action is required. And here, I hate this word, learnings from the Zero Day Quest will be shared across Microsoft. They're sharing their learnings.

Speaker 1:
[35:21] They love that word.

Speaker 2:
[35:22] I don't know why, but they love it. I mean, yeah, the things we learned, the things we learned is the way I learned my learnings, to help improve cloud and AI security in alignment with SFI's core principles: secure by design, by default, and in operations. Apparently, however, not in code. Finally, earlier last August, Microsoft announced it had paid a record $17 million to 344 security researchers across 59 countries through its bug bounty program between July 2024 and June 2025. You know that there's a mixed blessing, right, in bragging about how many millions of dollars you have paid to 344 security researchers who have found really bad problems with your software. On the one hand, okay, I think it's great that Microsoft's software will now or will soon have that many fewer cloud and AI security vulnerabilities. That's of course good for everyone. But as I said, it seems a little ironic to have Microsoft gleefully bragging about how many hundreds of bugs researchers were able to find throughout their products when they were sufficiently motivated to do so. So anyway, as we know, none of those bugs should have been there in the first place to be found. But hey, Microsoft has way more cash on hand than it knows what to do with. So dangling increasing quantities of cold, hard cash in front of security researchers, who will then be motivated to go bug hunting, that's definitely money well spent. Let's have more of it, because Microsoft can certainly afford to pay. We're going to talk about the mysterious disappearing developer accounts, Leo, after you tell our listeners about one of our sponsors who's still here with us. They've not mysteriously disappeared, no, unlike those developer accounts.

Speaker 1:
[37:34] I just, I like the, what was it? Hands-on keyboard. That's another good one.

Speaker 2:
[37:38] Yeah, hands-on-keyboard attacker. You can see it, like, oh, Boris is hunting. Boris is hunting and pecking. Yes.

Speaker 1:
[37:47] Wow. Retcon five in our Discord says the term learnings was not in common use in the 19th and 20th centuries, although the countable noun sense of learning, as in things learned, dates to Middle English and the plural learnings to early modern English. Note that early use of learnings often has the sense or connotation of teachings. Yeah. I've heard teachings before. Yeah. As was the case with learn generally, it has found occasional use for centuries, including by Shakespeare.

Speaker 2:
[38:19] So I guess if you're subject to teachings, what you come away with is learnings.

Speaker 1:
[38:24] Learnings. I don't like it either. I agree with you 100 percent. It feels like corporate speak.

Speaker 2:
[38:29] It feels like, what are you smoking on the peninsula?

Speaker 1:
[38:35] Steve.

Speaker 2:
[38:37] I think those little, remember when they were, there were all those fake sweepstakes sites?

Speaker 1:
[38:42] That's another way.

Speaker 2:
[38:43] Yeah.

Speaker 1:
[38:44] Because we know that that happened on Facebook. That was the Cambridge Analytica scandal. They were making quizzes like, which Star Wars figure am I? And simply by virtue of taking those quizzes, not only did they get all your Facebook information, they got all your friends' Facebook information. So even if you said, oh, I'm not going to do that, if your friends do it, they're revealing it. Yeah. No, that hole's been plugged, but there's always more because they're like cockroaches. You can't get rid of them. All right. Anyway, on we go. Let's talk about the missing Microsofties.

Speaker 2:
[39:17] One of the recent bits of news that was, as I said, bumped so that we'd have time to thoroughly examine Anthropic's Mythos last week, was that Microsoft had, without apparent cause or reason, suddenly dropped a number of driver developer accounts for products such as VeraCrypt and WireGuard. I mean, the well-known WireGuard next-generation VPN. These products inherently incorporate kernel driver components in order to obtain the deep OS-level access they need in order to operate. So this was a huge concern for many users of these products, and I heard from many of our listeners who picked up on this news. As it turns out, as I said before, it's just as well that I waited. As of last week, the only news that we had was that the accounts had been dropped. We didn't know why. Today, we know. Microsoft's suspension of these accounts turned out not to be a mistake, but was entirely deliberate, which is also a reason for some concern. I know it's probably going to hit home for many of our listeners, since it's an issue that I've been talking about recently during that whole process of updating my code signing certificate and what I had to go through, like getting my CPA to sign an attestation letter that yes, he'd just laid eyes on me and I was real and he was putting his license on the line in order to vouch for me. It's like, yikes, none of that had ever been needed before. So once again, Bleeping Computer was on top of this under their heading Microsoft rolls out fast track to reinstate Windows hardware dev accounts. Kind of an oops. Anyway, Bleeping Computer explained, writing: Microsoft has rolled out a fast track response to help developers regain access to accounts recently suspended from its Windows hardware program following widespread complaints that they were locked out without warning. 
Last week, the company suspended Windows hardware developer accounts used to publish Windows drivers and updates for widely used tools like WireGuard, VeraCrypt, Memtest86 (super popular), and Windscribe. The suspensions prevented developers from releasing new Windows builds and security patches, raising concerns about potential delays in responding to vulnerabilities. VeraCrypt developer Mounir Idrissi stated, so this is VeraCrypt, the successor to TrueCrypt as we know, which was taken over by someone in France, this Idrissi guy, he said his account had been terminated without warning and that he was unable to reach a human support representative, leaving him unable to publish Windows updates. Similar experiences were reported by WireGuard maintainer Jason Donenfeld and others, who described being locked out without access, or facing lengthy or unclear appeals processes. You know, the Microsoft machine was just sort of ignoring them. After many developers took to X to report the suspensions, Microsoft Vice President Scott Hanselman said the accounts were suspended for failing to complete identity verification in the Windows hardware program and that the company had been e-mailing these people, which they call partners, about the requirement since October of 2025. Right? So six months of we're trying to reach you and you're not replying. Microsoft requires identity verification for the Windows hardware program because it allows developers to sign and distribute, actually, it doesn't anymore, remember, Microsoft is doing the signing now, and distribute kernel-level drivers. It does allow developers to develop kernel-level drivers under the program, which run, writes Bleeping Computer, with high privileges, well, yeah, in the kernel, where they could do anything, and have been abused by threat actors in past attacks. 
However, they write, many developers claimed, and there are so many, it's probably true, claimed they had not received any prior notification, including emails, before they were suspended. While Hanselman and others at Microsoft have been working to reinstate accounts, Microsoft yesterday introduced a temporary process to fast track reinstatement for suspended accounts. An update to Microsoft's advisory adds, quote, We've heard your feedback. We know that some partners whose accounts were suspended following account verification are experiencing challenges regaining access to the Hardware Dev Center, the HDC. Protecting the security of the Windows ecosystem remains our highest priority. We are adding a temporary process to accelerate the reinstatement experience for partners who are able to resolve outstanding compliance requirements. Wow. Under the new process, developers are told, they wrote, to open a support case through the hardware program as the fastest way to reinstate accounts. Requests must include a clear business justification explaining how access to the hardware dev center will be used. Microsoft says that once reinstated, all outstanding compliance requirements must still be resolved before full access is restored. This is an interim, you know, provisional trial basis. We're going to give you your account back that you have previously had access to for years, until we decided, nope, no more, suddenly we don't know who you are. But now you're going to have to tell us who you are and prove it, of course. So Microsoft said it advised partners to ensure they're signed in with the correct account when submitting tickets and to continue prompting Copilot to create a ticket if automated assistance fails, whatever that means. For those unable to submit requests through standard channels, Microsoft provided an alternative support contact to help initiate the process. Microsoft has not said how long this accelerated process will remain in effect. 
That is, how long this grace period is going to be. So affected developers are advised to act quickly. Okay, so I would tend to believe the developers over Microsoft regarding this complete lack of attempts to inform them. As I noted earlier, Microsoft is no longer an entity that is actually able to care. Caring is not something that it does. It's just too big. And caring is a distraction. So, someone somewhere doubtless decided that the best way to get developer attention, or just to remove dead accounts, would be to simply suspend all currently non-compliant accounts for non-compliance. This has the advantage, as I said, of weeding out any older accounts that no one really cares about that much, since they won't be immediately inconvenienced by their inability to access Microsoft's developer portal. And conversely, those who are inconvenienced will be highly motivated to get their identity-proving act together. As we know, this may involve getting an affiliated attorney or CPA to sign some attestation papers. It's what I went through. Basically, this is the same process. You need to have identity verification that is bulletproof. And of course, that's true because running code in the Windows kernel is a privilege that none of us want bad guys to have. So we do want Microsoft to make that as bulletproof as possible. While Microsoft could have been way more gentle about this, this did get the job done. So, you know, that's what that was all about. The reason Microsoft suddenly suspended a bunch of developer accounts, many of whose owners were immediately inconvenienced because they were using them actively. Now, you know, basically it's like, OK, we're going to give them back to you for a while, but you need to get your identity made compliant. So that will happen. OK, now this next piece of news beautifully exemplifies a problem we've seen before. 
That's, I think, largely a consequence of the aging Internet and aspects, critical aspects, of its design that were never very well thought through. And, in defense of its designers, they never did, and never could have, foreseen what their creation, which we call the Internet, would become. I mean, I'm in awe of the original design of these protocols that have stood the test of time. There are some aspects that haven't. Bleeping Computer's headline for this reporting was, quote, signed software abused to deploy anti-virus killing scripts. That's not a great headline. While that's factually true, it's more of the consequence of the problem than the problem itself. Okay, so let's start with what Bleeping Computer reported. They said a digitally signed, meaning, you know, there's a real company behind it, so it's digitally signed, and pretty much these days anything has to be, a digitally signed adware tool. So not malware, not evil, but just unwanted. And a digitally signed adware tool has deployed payloads running with system privileges that disabled antivirus protections on tens of thousands of endpoints, meaning host computers where it was installed, some in the educational, utilities, government, and health care sectors. In a single day, researchers observed more than 23,500 infected hosts across 124 countries trying to connect to the operator's infrastructure, with hundreds of infected endpoints being present in high-value networks. Okay. So they're saying 23,500 PCs have this adware tool. Hundreds of them are in like really important networks, and they're all reaching out, trying to phone home to this operator's infrastructure. Bleeping wrote, Security researchers at managed security company Huntress discovered the campaign on March 22nd, when signed executables viewed as potentially unwanted programs. Love that. PUPs, potentially unwanted programs. Do you really want this? 
Which is what that original OptOut tool that I wrote for that old Aureate or Radiate adware was. Anyway, they said, potentially unwanted programs triggered alerts in multiple managed environments. So Huntress is in the environment management business and they saw these things doing this. They wrote, PUPs or adware are regarded more as a nuisance than malicious, as their role is typically to generate revenue for the developer by showing advertisement pop-ups, banners, or through browser redirects. Now they'll intercept browser URLs to bounce through some other redirect before they go to the site that you actually intend. Huntress researchers say that the software was signed by a company called Dragon Boss Solutions, LLC. Sounds kind of Chinese. Involved in, quote, search monetization research activity, whatever that is, and promoting various tools. For example, the Chrome Stera browser. Oh, Leo, don't leave home without the Chrome Stera browser. Chrominus, whatever that is, the World Wide Web, oh, that's catchy, Web Genius, and the Artificious Browser. I don't know if I want the Artificious Browser. Anyway, all labeled as browsers, but detected as PUPs by multiple security solutions. So they're recognized as, are you sure you want this? Beyond annoying users with ads and redirects, Huntress researchers say the browsers from Dragon Boss Solutions also feature an advanced update mechanism that deploys, and get this, an antivirus killer. In other words, they've found out that there's things that don't like them, so let's kill that because we don't want to be unliked. Huntress researchers discovered that the operation relied on the update mechanism from the commercial Advanced Installer authoring tool to deploy MSI and PowerShell payloads. Analyzing the configuration file for the update process revealed several flags that made the operation completely silent, no user interaction required. You don't want to bother users with those pesky permission dialogs. 
It also installed the payloads with elevated system privileges, prevented users from disabling automatic updates, and checked frequently for new updates. So basically, badly behaving malware, I would agree, potentially unwanted, probably definitely unwanted. That would be dupe. Definitely unwanted programs.

Speaker 1:
[55:14] Not a pup, it's a dupe.

Speaker 2:
[55:15] That's right. Okay. So none of those things seem deliberately malicious, right? Having been harassed by false positive AV detections, I can at least understand their motivation behind creating exceptions for one's code. As we know, that's not the approach I take, like killing off AV that bothers you. Mostly this seems like software written entirely with the convenience of its publisher rather than its user in mind. That's bad software, no doubt about it, but that's also life. So the reporting continues saying, according to the researchers, the update process retrieves an MSI payload, setup.msi, disguised, this is weird, disguised as a GIF image, which is currently flagged as malicious on virus total, but only by five out of 69 or 70 security vendors. So not many false positives or positive positives. Anyway, it does seem a little sketchy. Why would any software publisher who thinks of themselves as legitimate, retrieve a windows setup.msi file disguised as a GIF image?

Speaker 1:
[56:43] What?

Speaker 2:
[56:44] Okay. Anyway, they continue writing, the MSI payload includes several legitimate DLLs that Advanced Installer uses for specific tasks, such as executing PowerShell scripts, looking for specific software on the system, or other custom actions defined in a separate file named exclamation point underscore string data that includes instructions for the installer. Huntress says that before deploying the main payload, the MSI installer conducts reconnaissance by checking the admin status, detecting virtual machines, verifying internet connectivity, and querying the registry for installed antivirus products from Malwarebytes, Kaspersky, McAfee, and ESET. The security products are disabled using a PowerShell script named ClockRemovable.ps1, you know, PowerShell, which is placed in two locations. The researchers say that installers for the Opera, Chrome, Firefox, and Edge browsers are also targeted, likely to avoid potential interference with the adware's browser hijacking. Yeah, you want to turn off, you know, anything that might get in the way. The ClockRemovable.ps1 script also executes a routine when the system boots, and at logon, and every 30 minutes, to make sure that antivirus products are no longer present on the system by stopping services, killing processes, deleting installation directories, wow, you know, really wiping them, and registry entries, silently running vendors' uninstallers, love that, and forcefully deleting files when uninstallers fail to successfully uninstall. It also ensures that the security products cannot be reinstalled or updated by blocking the vendors' domains through modifying the hosts file and null routing them, redirecting them to 0.0.0.0. Wow. So again, not technically malicious, but you just don't want that. 
So what's clearly going on here is that the publishers of this definitely malbehaving crapware have previously experienced well-deserved run-ins with a handful of alert anti-crapware utilities that want to warn their users that this is a potentially unwanted program in spades. So these cretins have upped the ante by making their adware offerings even more obnoxious in the things they do to get anything that doesn't like them off the system and keep them off. Like, you can't even contact those AV companies any longer because your browser will not resolve the domain, because the hosts file has been edited in order to null route them. Wow. Okay, so here's something curious and interesting. During the analysis, Huntress found that the operator did not register the main update domain ChromeSteraBrowser, chromesterabrowser.com, or the fallback domain worldwidewebframework3.com used in the campaign, presenting them with the opportunity to sinkhole the connection from all infected hosts. In other words, the domain got abandoned, Huntress saw nobody had re-registered it, so they did. As such, writes Bleeping Computer, they registered the main update domain and watched tens of thousands of compromised PCs reach out, looking for instructions that in the wrong hands could have been anything. Based on the source IP addresses of the endpoints, these PCs that have this crap on them, the researchers identified 324 infected hosts residing in high-value networks. Remember, that's 324 out of 23,500. So there's 23,500 PCs overall, 324 in high-value networks, specifically 221 in academic institutions in North America, Europe, and Asia. 
41 OT, as we're calling them now, operational technology networks in the energy and transport sectors and at critical infrastructure providers, 35 municipal governments, state agencies and public utilities, 24 primary and secondary educational institutions, three health care organizations, hospital systems and public health care providers, and the networks of multiple Fortune 500 companies. So if a bad guy had registered that domain before Huntress did, they'd have access into all of those networks. Bleeping Computer wrote that they tried to reach out to Dragon Boss Solutions, but could not find contact information, as their site is no longer operational. Huntress warns that while the malicious tool currently uses an AV killer, the mechanism to introduce far more dangerous payloads into infected systems is in place and could be leveraged at any time to escalate the attacks. Additionally, since the main update domain was not registered, anyone could claim it and push arbitrary payloads to thousands of already infected machines, with no security solutions protecting them by design, and through an already established infrastructure. Huntress recommends that system admins look for WMI event subscriptions containing the string MB removal or MB setup, scheduled tasks referencing WMI load or ClockRemovable, and processes signed by Dragon Boss Solutions LLC. Additionally, review the hosts file for entries blocking AV vendor domains and check Microsoft Defender exclusions for suspicious paths such as D Google, E Microsoft, or DD apps. OK, so this particular incident is not the end of the world. But as I noted at the start, it's another perfect example of something the Internet was never designed to handle. Some random company may itself not be explicitly evil, but might have sloppy, uncaring, and abusive coders who install software that does things to its hosting PCs that would raise serious concerns from anyone who understood what was going on. 
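That last Huntress recommendation, reviewing the hosts file for null-routed AV vendor domains, is simple enough to sketch. This is a hypothetical illustration, not Huntress's actual tooling, and the vendor domain list is illustrative, drawn from the vendors named in the story:

```python
# Sketch: scan hosts-file text for entries that null-route security-vendor
# domains, one of the indicators of compromise described above.
NULL_ROUTES = {"0.0.0.0", "127.0.0.1"}
AV_VENDOR_DOMAINS = ("malwarebytes.com", "kaspersky.com", "mcafee.com", "eset.com")

def find_av_null_routes(hosts_text: str) -> list[tuple[str, str]]:
    """Return (ip, hostname) pairs that null-route a known AV vendor domain."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        ip, hostnames = parts[0], parts[1:]
        if ip not in NULL_ROUTES:
            continue
        for host in hostnames:
            if any(host == d or host.endswith("." + d) for d in AV_VENDOR_DOMAINS):
                hits.append((ip, host))
    return hits

sample = """
# legitimate entry
127.0.0.1 localhost
0.0.0.0 updates.eset.com
0.0.0.0 www.malwarebytes.com
"""
print(find_av_null_routes(sample))
# → [('0.0.0.0', 'updates.eset.com'), ('0.0.0.0', 'www.malwarebytes.com')]
```

On a real Windows box the text would come from C:\Windows\System32\drivers\etc\hosts; note that `127.0.0.1 localhost` is benign and only flagged domains matter.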
But as we know, the phrase, from anyone who understood what was going on, is almost never going to include the end user who decided, hey, I'll bet that Chrome Stera browser would be a lot better than Chrome. So I'm using Chrome Stera instead of Chrome. It's like, OK. So here's the problem that the Internet's designers never considered. What happens when the progenitors of ill-begotten and very badly designed software, and not necessarily even that, like any software which is using an infrastructure to phone home, to check for updates, and which then has the power to automatically download them and put them in place, what happens when that software, which continually reaches out to the Internet for updates, eventually, and if it's a fly-by-night company, probably inevitably, goes out of business? Their horrible software remains installed and alive, and querying for updates. I know that all of us have stuff on our PCs that we installed some time ago and then stopped using, but probably haven't taken the time to remove, because it's not bothering us. But then their various domains also expire.

Speaker 1:
[66:15] Oops.

Speaker 2:
[66:17] Now anybody could re-register them. Fortunately, in this instance, Huntress are the good guys who re-registered those expired domains for the sake of their research. But if bad guys were to do this, they would have stumbled upon the mother lode. Two hundred and twenty-one academic institutions, forty-one operational technology networks and infrastructure providers, thirty-five municipal governments, state agencies and public utilities, twenty-four school systems, three health care organizations, and the networks of multiple Fortune 500 companies, they could get into all of them. Ransomware, anyone? This abandoned software would literally have a ready-to-go back door into the networks of all of those three hundred and twenty-four high-value targets. And here's the concern to think about. This cannot be an isolated event. This particular discovery was Huntress showing that they're awake and alert and doing their managed security thing. That's great. But similar events are doubtless happening across the Internet. Companies are abandoning their previous failed software offerings, which included technology to phone home. Then home is abandoned too. Note that it's one thing when some random website's domain is abandoned, but it's an entirely different matter when automation that's been silently installed into user machines is making those queries. This creates a ready-made back door into every one of the networks that's reaching out to abandoned domains. We're in a world where there is no accountability for the actions of the software while it's in use. People can download this crapware and it does that to their machines. Horrible things, installing scheduled tasks, stripping AV out, running the AV uninstallers and, if that doesn't work, removing their registry entries and manually deleting the software from their machines, black-holing their domains by putting 0.0.0.0 in the hosts file. The user doesn't know. 
They said, Yeah, I really want the ChromeStera browser. Sounds great. This happens. There's no accountability in our current environment. Companies can do whatever they want, including this crap. Basically, we're in a world where we have a rent-a-domain-name system. We rent a domain name, and as long as we're willing to pay for it, we get to keep it. But when we decide we don't want to rent that domain name any longer, after it expires, it's up for grabs. Just like the AWS abandoned bucket problem was, where bad guys could grab abandoned buckets that still had activity on them. Unfortunately, this re-registering a domain is assumed and encouraged, but it leaves us with some serious potential for security problems. It's not something our forefathers on the Internet thought about, because they could have never imagined that the net would become what it has. But this problem of recycling domains, it creates a whole new world of security problems.

Speaker 1:
[70:05] What an interesting story.

Speaker 2:
[70:06] Yeah.

Speaker 1:
[70:08] From Stera. I can't wait to get it. By the way, I just saw this news cross the wire. Mozilla is saying now that it used Mythos on Firefox and that it found 271 bugs, which they patched in their current version 150. So this is the first that we've seen of an actual admission that Mythos was used, and by an independent third party.

Speaker 2:
[70:38] Yep.

Speaker 1:
[70:40] 271 bugs in shipping software.

Speaker 2:
[70:46] Yeah.

Speaker 1:
[70:46] It's been tested and tested and tested. Oh my God.

Speaker 2:
[70:48] Pounded on and we know it is the largest attack surface on anyone's computer is the web browser.

Speaker 1:
[70:58] Yeah. One of the things Mozilla said is, our belief is that the tools have changed dramatically, and there were categories of bugs you could find with human analysis that you couldn't find with automated analysis, which means that threat actors had an advantage if they were willing to spend the time and energy. We couldn't keep up.

Speaker 2:
[71:22] And now it is finding them with automated analysis.

Speaker 1:
[71:26] Every piece of software, this is, by the way, this is Holley, Bobby Holley, Firefox's CTO. Every piece of software is going to have to make this transition, because every piece of software has a lot of bugs buried underneath the surface that are now discoverable. This is a transitory moment that is difficult and requires a coordinated focus and a lot of grit to get through. But I think that this is a finite moment, even as the models become more advanced. He said, yes, we are flooded now with things we have to fix, but at least we know about them. Yeah.

Speaker 2:
[72:01] And when AI is in the pre-delivery pipeline, we are not going to be there again. So as I said, we are going to have, it's transient mayhem potentially. It's Y2K. Y2K is a perfect model.

Speaker 1:
[72:16] I think this confirms what you just said. Exactly. Yep. It is Y2K. It's hair on fire, but for a limited time only. Yes.

Speaker 2:
[72:26] And you know that going forward, with the Mozilla team having seen this, they will vet anything they do now through AI, like a hyper-lint, in order to catch any of the problems before they ship.

Speaker 1:
[72:44] That's what Holley is saying, basically, is this is now incumbent on everybody.

Speaker 2:
[72:48] It's the new model.

Speaker 1:
[72:49] It's the new model. It's the future. But this is a real confirmation that Mythos wasn't merely marketing hype, that there is something going on. If you can find 271 bugs in a highly tested, current version of it.

Speaker 2:
[73:06] Please tell Jeff and Paris, I'm so annoyed with them. Like, is it really? Yes. Read something.

Speaker 1:
[73:13] Yes, it is now. Well, we didn't. I mean, to be fair, we weren't sure, you know, because I was, yeah, you said that last week. Yeah, yeah. And I mentioned that to them. But now we have absolute confirmation. This is the real deal. Because they've been using automated tools before. This is not, this is a special category.

Speaker 2:
[73:37] And I will, when we get to our main topic, I will, the guys at Aisle, remember, A-I-S-L-E, they're the guys that found all the problems in OpenSSL. And so they have a little bit of pushback against Anthropic, which I'll share to round this out. But anyway, as I said, the podcast is titled, Yes, Exactly.

Speaker 1:
[74:01] Exactly. And you were, you called it. You were absolutely right. And we've got some feedback. What? Oh, you're muted. Hello?

Speaker 2:
[74:14] Thank you. Okay. So feedback. A listener shared some musings over strategies for securing open source repositories. And it provided a perfect setup for looking at this aspect of the future. So his name is Gene Hastings, who listens to us, and I'm familiar with the name. He's sent email in the past. He wrote, a colleague and I often meet to talk about DevOps and related issues, you know, system and personal health. He's more Dev, I'm more Ops. Both often cranky. One of our listeners. In any event, we were talking about the nightmare that is having a project's dependencies on libraries all over the net, and what steps might be taken to provide some degree of defense. He said, I was already aware of version pinning, and there was the recent news about a compromised package where the infection modified it without changing the version. I recalled, long after our conversation, that one would need to store a hash of the package and compare it on retrieval, right? Because then a modification would get detected, since the hashes wouldn't match. He said, little protection against a compromised new version or a first-time use, but some nonetheless. There's also the concern as to the trustworthiness of the package's own dependencies. All this led me to reflect that what may need to happen next is to have each package and its components not only signed by the author, but also by an independent auditor. Obviously, this does not scale physically or financially. So the next step is to have a trusted agentic auditor that does not charge a fortune for each signing. Such automation will be necessary soon. This led me to a further thought. Imagine a new project philosophically akin to Let's Encrypt, a service for smaller developers who can do an automatic audit at a tolerable expense. He says, if all of the following are true: the agents, like Mythos and descendants, are competent. The agents are efficient. The agents are trustworthy. 
The agents are not priced out of reach, with some flavor for everyone. The owners of the agents are trustworthy. He has an exclamation point on that one. He said, then there could be a future for us and the Internet. Apparently otherwise, forget about it. That's all over. He said, as an aside, I am an AI skeptic. I do not trust that which cannot be explained. Getting back to operations, if I don't have a half decent idea what a system and its configuration is doing, I am very reluctant to put my name on it. I am willing to trust people who are able to understand the systems to assure me, so that I can be fairly reassured. At the moment, such people are hard to find amid the tsunami of hype. I'm not as concerned about the quality of the technologies as I am about the people pushing them. I wouldn't trust simple driving directions from the likes of Sam Altman, Mark Zuckerberg, or Jeff Bezos. I do not trust their motives or plans. Signed, Gene Hastings. OK, so, as he said, often cranky, cranky Gene. He's suggesting a future solution, which might be a system in the form of Let's Encrypt, where individual developers would need to have an AI-based agent audit their code for problems and, you know, unsuspected or unwanted behaviors, and would then sign the library, all for a low cost. The trouble with this is that then we need some authority to manage the trust in these AI agent signatures. And on the trusting end, some sort of new root store that users of these signed libraries could use to look up and verify the trust. In other words, a whole bunch of new stuff. I think there's a more direct, cleaner, and straightforward means of accomplishing the same thing. We simply move to a world very much like what we were just talking about, Leo, with Mozilla. We move to a world where anything that a public code repository offers for broad public consumption first passes the scrutiny of an AI agent. An AI will be guarding the exits, essentially. 
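The hash pinning Gene described earlier, store a package's hash once, then verify every later retrieval against it, can be sketched in a few lines. The package contents here are hypothetical, purely for illustration:

```python
import hashlib

# Sketch of hash pinning: a silently modified package (same version number,
# different bytes) fails the comparison against the recorded hash.
def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_package(data: bytes, pinned_hash: str) -> bool:
    """True only if the retrieved bytes match the hash recorded at pin time."""
    return sha256_of(data) == pinned_hash

original = b"def helper(): return 42\n"
pinned = sha256_of(original)            # recorded when the package was vetted

tampered = b"def helper(): exfiltrate(); return 42\n"  # same 'version'
print(verify_package(original, pinned))   # → True
print(verify_package(tampered, pinned))   # → False
```

This is essentially what pip's hash-checking mode does when a requirements file pins each package with a `--hash=sha256:...` entry; as Gene notes, it catches in-place tampering but says nothing about whether a brand-new version, hashed for the first time, is trustworthy.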
Code cannot leave the repository without first being checked by the AI agent. And the process might not be autonomous. The repository's AI might have some questions for a package's authors that would need to be answered and negotiated before a new or updated package could be made widely available. And since the use of an AI will certainly come at a non-zero cost for the foreseeable future, at least, I don't know, I mean, there will probably always be some cost because there is always going to be some compute, I'd imagine that there would be some form of rate limiting on new submissions being made available publicly for review and publishing. You know, non-professional authors who are in the habit of constantly revising their code to make an endless series of incremental improvements might have a release delay or some sort of submission limit imposed. But the idea being, in the same way that Mozilla will be running their Firefox code from now on through AI, the solution is for repositories to do the same thing, to clean anything that is being released to the public before it gets out there. And I suspect that solves this problem. You know, the vast majority of a repository's code is mostly static, right? So, an AI will only need to give it the once-over one time. And from then on, those who pull it could rely upon its security more than they ever have been able to before. And most code only changes incrementally. So, an AI could retain the context that it developed during that original once-over, and then bring itself back up to speed and only look at the changes, all the deltas to the code, in order to minimize the recurring cost of continuing to review code which incrementally changes over time. So, I think the whole system can be made practical. So, what I know is this, the year is currently 2026. See, I got it right this time, it's not 2024, 2026. 
While AI today costs far more to run than it's able to generate in revenue, I am sure that the economics of AI will be radically different in the future, just as the economics, for example, of mass storage and computation have been utterly transformed over the past 50 years.
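For the technically curious, the repository gate Steve sketches, a full AI review the first time a package is seen plus cheap delta reviews on later updates, can be roughed out in a few lines. This is purely a hypothetical sketch: `PublishGate` and `review_fn` are made-up names, and `review_fn` just stands in for whatever AI agent a repository would actually call.

```python
import hashlib

# Hypothetical sketch only: "an AI guarding the exits" of a package
# repository. review_fn stands in for the AI review agent; here it is
# any callable that returns a list of findings for the text it's given.

class PublishGate:
    def __init__(self, review_fn):
        self.review_fn = review_fn
        self.seen = {}  # package name -> (content hash, findings, content)

    def submit(self, package, content):
        digest = hashlib.sha256(content.encode()).hexdigest()
        prior = self.seen.get(package)
        if prior and prior[0] == digest:
            return prior[1]  # static code: reuse the original once-over
        if prior:
            # Incremental update: review only lines not present before,
            # minimizing the recurring AI compute cost for code that
            # only changes incrementally over time.
            old_lines = set(prior[2].splitlines())
            delta = "\n".join(l for l in content.splitlines()
                              if l not in old_lines)
            findings = self.review_fn(delta)
        else:
            findings = self.review_fn(content)  # first-time full review
        self.seen[package] = (digest, findings, content)
        return findings

# A toy "reviewer" that flags any use of eval():
gate = PublishGate(lambda t: [l for l in t.splitlines() if "eval" in l])
print(gate.submit("demo", "a = 1\neval(x)"))   # ['eval(x)']
print(gate.submit("demo", "a = 1\neval(x)"))   # unchanged, cached: ['eval(x)']
```

A real gate would of course need a far smarter reviewer and a proper diff, but the caching shape is the point: mostly static code costs one review, ever.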

Speaker 1:
[82:09] There's a rich history of this, this has always happened.

Speaker 2:
[82:12] Today, we're all walking around with globally connected pocket computers that would have boggled the minds of our grandparents.

Speaker 1:
[82:21] Yeah, our parents, forget the grandparents.

Speaker 2:
[82:24] Yeah, it should be clear to everyone that AI, which continues to boggle our minds today, will be just as accepted and taken for granted by our grandkids as the Internet is by today's kids. So, you know, I mean, kids growing up today, they've always had the Internet. That's just like, yeah, they don't know life without it. We're still sort of like, wow, remember those days. Remember books.

Speaker 1:
[82:51] I remember CDs, DVDs, records.

Speaker 2:
[82:56] And finally, GP, our listener, says, Dear Steve and Leo, given April's security related news, I can see how thoughts on the Project Hail Mary movie might have been pushed to the wayside. I'm wondering what you gentlemen thought of the film and its treatment of the source material. I felt the movie struck a nice balance. It did justice to the book while allowing those who have not read it to enjoy the story without being overwhelmed by a flood of science, which could have easily turned it into a five-part miniseries.

Speaker 1:
[83:28] Oh, yeah. He said a lot of science in the book.

Speaker 2:
[83:31] Yeah, there is. Well, and that's why we love Andy Weir's writing, right? So he said, my young one enjoyed the movie so much that they wanted to read the book. So good.

Speaker 1:
[83:47] Yeah, that's good.

Speaker 2:
[83:49] Yes. So we signed up to borrow it from the library. However, we were number 110 in the public library's queue to borrow the book. Yeah.

Speaker 1:
[84:01] So it's on the bestseller list again, I think. Yeah.

Speaker 2:
[84:03] Yeah. So we opted for my old copy on Audible instead. Although I salute him for reading, because, you know, I think reading is primal. But anyway, he said, listening to the story again did not diminish the movie. It only enhanced the experience for both me and my little one. It's like getting the inside story, if you get my drift. This is one of the few times in recent history where a movie did not ruin the book, but actually improved upon it. Good job to the production team on this one. All the best, GP. So, Leo.

Speaker 1:
[84:39] Yeah, I'd agree a hundred percent. In fact, I'm re-listening to the book, which I started right after the movie. The other thing we did do, though, is we also re-watched The Martian, because Lisa and I had a little inside bet. After Project Hail Mary, I said, oh, that was as good as The Martian. She said, no, it wasn't. She said it was really good, but it's not as good as The Martian. I said, oh. And then we watched The Martian, and I have to agree with her. The Martian was remarkably good. Yeah. I think that's partly because Ridley Scott directed it. I think the directors of Project Hail Mary, who are chiefly famous for The Lego Movie, maybe have a little bit more of a kiddie sensibility. It would appeal to his little ones because, you know, Grace, uh, Ryland Grace is like, there's a lot of times he goes, oh, you know, things the kids would like.

Speaker 2:
[85:32] Yeah.

Speaker 1:
[85:33] But it's a little over the top. That bugged me a little. I do feel it was very true to the book. The book has infinitely more detail.

Speaker 2:
[85:40] Yeah.

Speaker 1:
[85:40] Because you had to cut all that stuff out. I'd forgotten how much science there is in the book. And so there's stuff that I thought, oh boy, they left that part out of the movie.

Speaker 2:
[85:51] But I forgot. I love the details of breeding astrophage from the book. It was so good. And we just got a little suggestion of it in the movie.

Speaker 1:
[86:04] Almost all the science is suggested in the movie. Yeah. They focus on the drama, the interpersonal relationships, and the science comes second.

Speaker 2:
[86:14] So, my theory. I re-read the book when we knew there was going to be a movie, because I read it originally when Andy wrote it, and I thought about this question of movie versus book a lot. Of course, famously, I've complained here that with Jurassic Park, when I was watching the movie, I was incensed because so much was left out. I mean, some arguably really important stuff. On the other hand, look, Jurassic Park was a phenomenon as a movie. So who can say? What I've decided is it's really not fair to compare. Yes, they are two different things. What they have in common is a similar plot. So they share the concept and the plot, but you really are addressing two different audiences. A book reader or Audible listener is a different audience than somebody who wants to go to a movie for two hours and be entertained.

Speaker 1:
[87:18] They're different media. Yes, absolutely. And you have to be native to the medium, otherwise it just isn't going to work. And I understand that. But I do agree with you, and with our correspondent, that this movie, which is unusual, makes you want to read the book, which is really great. And you don't feel disappointed in either direction, which is very unusual. I almost always feel disappointed by science fiction movies not living up to the books. In this case, no. I think for both The Martian and Project Hail Mary, the movies are great. They really do a good job. Yeah. So we're in agreement.

Speaker 2:
[87:55] Yeah. Okay. Let's take a break. And then we're going to plow into what the experts say, and what you just shared a perfect example of from Mozilla: what they found when they ran Mythos against their Firefox code base.

Speaker 1:
[88:11] Yeah. Yeah. Very interesting. I do have to point out that Redcon5 asked in the Discord chat, our Club TWiT chat, how many of the 271 bugs were severe. And they actually didn't talk about severity, so I don't know. They might have been smaller bugs. We don't know. So that's the next question. But I guess a bug is a bug is a bug.

Speaker 2:
[88:34] I mean, and we know how often bugs can be elevated.

Speaker 1:
[88:40] Right.

Speaker 2:
[88:40] Yes. Into something more severe.

Speaker 1:
[88:42] Right. 15 of the Firefox CVEs were low, 18 were moderate, 13 were high. At least, that's according to Joke and Boken on YouTube. So.

Speaker 2:
[88:54] And I realized that the proper response to the guy in the club is: listen to what the Mozilla guy is saying. Yeah. He is saying, you've got to do this; this is significant.

Speaker 1:
[89:06] Yeah.

Speaker 2:
[89:06] You know, this wasn't, this wasn't dust that was found.

Speaker 1:
[89:10] Yeah.

Speaker 2:
[89:10] You know, they were like, Whoa.

Speaker 1:
[89:12] So, and that number is huge. 271 is mind boggling. But if 13 were high, and this is from version 149 to 150, this is huge.

Speaker 2:
[89:26] Yeah.

Speaker 1:
[89:26] Anyway, let's talk about it.

Speaker 2:
[89:28] And that's just one package. I mean, think of all the software in the industry. Think of how minutely Firefox has been curated and developed over time, how much scrutiny it's received. And even so, AI found what people could not. Now imagine the typical software that's just thrown together and out the door.

Speaker 1:
[89:57] Think about Windows, how many hundreds of millions of lines of code.

Speaker 2:
[90:01] Well, and how many bugs they know about. And, like, didn't they famously ship Windows 7 with like 10,000 or 20,000 known bugs? Like, what? How does it even get off the ground? Oh my gosh. Yeah, it's a revolution. Okay.

Speaker 1:
[90:19] So exactly Steve.

Speaker 2:
[90:21] Yeah. As I noted several times last week, my original working title for last week's podcast was Mythos: Marketing or Mayhem. But once I'd assembled and examined all the data, I realized that leaving the answer to the question that title implied up in the air would be wrong, because there's no way, really, after looking at the facts with no bias, there's no way that Mythos was only marketing. We had evidence of it. So, you know, I acknowledged that it was certainly also marketing, but it was far more than only that. And I think that's where people get confused: they just mistrust people's motives to such a degree these days. But again, it could be both, and it was. Anthropic used this for marketing, but, and I'm going to make this point at the end of the podcast, thank goodness, because it broke out. That's the difference. And this breakout is what we're talking about today. I titled today's podcast Yes, Exactly because last Thursday, two days after, as I said at the top of the show, two days after our What Mythos Means podcast was delivered, an incredibly significant group of industry veterans who pretty much comprise a who's who of the cybersecurity industry all weighed in with a formal emergency wake-up call for the entire cybersecurity world. The organizer and publisher was a group calling themselves the Cloud Security Alliance. I have a link to the most recent version of their 23-page paper in the show notes. They titled it The AI Vulnerability Storm: Building a Mythos-Ready Security Program. The paper enumerates its 16 primary contributing authors. Because this is important for appreciating the weight of the paper's stated concerns, I'm going to share them briefly.
They are Jen Easterly, CEO of the RSA Conference and former Director of CISA; Bruce Schneier, who we all know, renowned cryptographer, current chief of security architecture at Inrupt and fellow and lecturer at the Harvard Kennedy School; Chris Inglis, the White House's former National Cyber Director; Phil Venables of Ballistic Ventures, formerly the CISO of Google Cloud; Heather Adkins, current CISO of Google; Rob Joyce, the NSA's former Cybersecurity Director; Sounil Yu, the CTO of Knostic and former Chief Security Scientist for Bank of America; Katie Moussouris, the founder and CEO of Luta Security; John N. Stewart, Talens Venture and former CSTO for Cisco; James Lyne, CEO of the SANS Security Institute; Dave Lewis, Global Advisory CISO for 1Password; Maxim Kowalski, Managing Director of the AI Security COE for Consortium Networks; Jim Reavis and John Yeo, who are the CEO and CSO, respectively, of that Cloud Security Alliance; Joshua Saxe, CTO and co-founder at Security Superintelligence Labs, former AI and Llama security head at Meta; and finally, Rami Husani, CSO for none other than Cloudflare. So, as I said, the who's who. In addition to those primary contributing authors, the paper's content was also reviewed by a list of CISOs that pretty much includes everyone else. I'm not going to read them since there are too many of them, but I've reproduced that page from the report in the show notes so you can just see it. I mean, there's hardly anybody who I didn't just read: former head of security for Netflix, CSO for Brave Technology, global field CSO for Fastly. Your eye just drops on any of them. So everybody basically understood what Mythos meant. Okay. So we've clearly established the provenance of this document. So I want to first share the executive summary overview, then the key takeaways for CISOs, followed by their brief summary of why Mythos is so important.
Much of this will sound exactly like what I said last week, two days before this was published, which is, of course, why I immodestly titled today's podcast Yes, Exactly. This amazing group of experts even used some of the same phrases that I used, given the impossible-to-exaggerate significance of Mythos and the successor systems that are sure to follow, and not only from Anthropic. But I get it. As I said last week, they're just first, but they were the one that broke through, and breaking through is what we really needed for our industry to get the wake-up call it needs. So I think it's crucial for the listeners of this podcast to appreciate that it's not just me with a lone opinion here. Okay, so the authors of the executive summary set it up as a sort of topical Q&A. They wrote: What happened? Answer: AI, as demonstrated by Anthropic's Mythos. So again, note that even they didn't fall to their knees in front of Mythos. They're saying, AI, as demonstrated by Anthropic's Mythos, has significantly increased the likelihood of attackers discovering new vulnerabilities, creating new exploits and using them in complex automated attacks at scale. While AI also increases the speed of patch development and reduces defects in new software, defenders still face a heavier relative burden due to the inherent limitations of patching. Attackers gain asymmetric benefits. That's what I referred to last week when I was talking about the existing installed base of software that hasn't had the opportunity to be screened through AI. It's already deployed, it's in devices and appliances, and much of it has been forgotten, but not by the attackers who want to use it to get in. So they asked the question, how is this different from the status quo? And answered: in the near term, security organizations will likely be overwhelmed by the need to apply patches and respond to AI-discovered vulnerabilities, exploits, and autonomous attacks. What to do now to deal with the current risk spike?
Adjust risk calculations and reorient security program resources for increasing volume of patches, decreasing time to patch, and more persistent and complex attacks. Focus on the basics and harden your environment further. Segmentation, egress filtering, multi-factor authentication, and defense in depth and breadth all increase the difficulty for attackers. What do we believe will happen next? The storm of vulnerability disclosures from Project Glasswing is the first of many large waves of AI-discovered vulnerabilities that may occur in rapid sequence. The capabilities seen in Mythos will quickly become more widely available, dramatically increasing the number and frequency of complex novel attacks organizations will face. Finally, what else should start now to be ready for the next waves? Prioritize robust dependency management to reduce vulnerabilities in third-party and open-source components. Enforce automated security assessments consistently in your development process, including using LLM-powered agents to find vulnerabilities before attackers do. Introduce AI agents to the cyber workforce across the board, enabling defenders to match attackers' speed and begin closing the gap. Re-evaluate your risk tolerance for operational downtime caused by vulnerability remediation to account for shorter adversary timelines. Update governance for more efficient vendor onboarding and increase headcount to facilitate a faster-cycle deployment of new AI-based defenses. As an industry, we need to strengthen our coalitions, cooperation and coordination. Okay, so I think it should be clear from these executive summary bullet points that the cybersecurity industry's posture on Mythos is that there is less than no time to waste. This is not the time to adopt a wait-and-see posture and to be reactive to events. By the time a reaction is indicated, it will be too late.
Despite these clear alarms being rung by many security professionals who have no profit stake in any of this being true, inertia being what it is, many organizations will nevertheless wait to see if anything really happens. For what it's worth, I did not wait. Although GRC's border security has always been as strong as I've been able to make it, as I mentioned before, I did have two deliberately exposed SSH servers listening for connections from any US domestic IP. Foreign IPs have always been hard blocked. I'm referring to them now in the past tense because, after Mythos, they're already shut down. I've used those SSH links to allow me to deal with the rare IP changes in my two Cox cable connections. An SSH session allows me to update the firewall filters that block all other connections from anywhere other than my two remote work locations. Even though those SSH servers are both using the strongest multi-factor authentication available, that might not matter if some bypass vulnerability is found. I don't need those SSH servers as much as I need security. So I'm gonna take a wait-and-see approach in the opposite direction. Rather than waiting to see whether a problem is found and then hoping I get the news quickly enough, I'm gonna assume that someone using Mythos might discover something unforeseen in the SSH server software I'm using. So I'm gonna wait and see about that before I feel safe to poke my head out again. And in fact, I may drop SSH completely, with its inherently open ports, altogether and come up with an affirmatively more secure solution. Leo, like you were talking about using Tailscale in order to get into your internal network, because Tailscale is able to do NAT penetration, in which case you don't have to have any open ports.
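The allow-list-only posture Steve describes, blocking everything except his two remote work locations, can be illustrated with a short sketch. This is purely hypothetical: the RFC 5737 example addresses stand in for his real IPs, the nftables table and chain names are made up, and the rules are only printed rather than applied (applying them would require root).

```shell
#!/bin/sh
# Hypothetical sketch: default-deny inbound SSH with two allowed source
# IPs, in the spirit of the setup described above. Dry run: rules are
# printed, not loaded.

ALLOWED_IPS="203.0.113.10 198.51.100.22"   # placeholder example addresses

emit_rules() {
  echo "nft add table inet fw"
  echo "nft add chain inet fw input '{ type filter hook input priority 0; policy drop; }'"
  for ip in $ALLOWED_IPS; do
    # Only these sources may reach the SSH port; everything else drops.
    echo "nft add rule inet fw input ip saddr $ip tcp dport 22 accept"
  done
}

emit_rules
```

The key property is the drop policy on the chain: nothing is reachable unless a rule explicitly allows it, which is why an IP change at the remote end requires some other path back in.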

Speaker 1:
[103:25] Yeah, that's what I use and it's great.

Speaker 2:
[103:27] I love it. So, this wonderful call-to-action paper next offers some key takeaways for CISOs. Here's what the paper's authors recommend CISOs consider. Use LLM-based vulnerability discovery and remediation capabilities. They said, unlike defensive AI technologies, LLM-based vulnerability discovery capabilities are already mature and can be used to your advantage. Start immediately by asking an agent for a security review of any code and build towards a vuln-ops capability. Update your risk metrics. With the shifting landscape, many of your metrics and risk assessments may be outdated and could affect business reporting. Consider how to update these and communicate the challenge with stakeholders. Accelerate your team by the use of coding agents. And you were just talking about this on MacBreak Weekly, how some group at Apple are not-

Speaker 1:
[104:39] The Siri group is being sent to learn how to vibe code, almost 200 of them. Because I guess they weren't, they didn't-

Speaker 2:
[104:49] Take it seriously. They didn't realize what the benefits were. So these guys are saying to CISOs, accelerate your team by the use of coding agents. While defensive AI technologies are lagging behind offensive ones, agents can already accelerate human action across the board, from incident response to GRC. Encourage and require your team to use these agents to accelerate their capabilities. Triage and test patches. Red team your environment. Automate audit data collection. And accelerate security operations overall. Prepare to respond to more incidents. Run tabletop exercises for multiple simultaneous high-vulnerability incidents occurring within the same week, and have playbooks in place for high-level critical incidents. These guys are literally predicting a storm is coming. Examine how to automate remediation capabilities to the degree possible. Verify and enable mitigating controls such as segmentation, egress filtering, zero trust architectures, phishing-resistant multi-factor authentication, and secrets rotation to limit impact when exploitation occurs. The supply chain will be affected. Increase focus on the basics. The basics remain valid and can be prioritized for risks that cannot otherwise be mitigated. Segmentation, patching known vulnerabilities, identity and access management, and defense in depth and breadth all increase the difficulty for attackers. To lower latent risk, expanding these efforts while there is time is prudent. In other words, do it now before it's too late. They said, we cannot outwork machine-speed threats. Reprioritize, automate and prepare for burnout. The cadence and volume of vulnerability disclosures will exceed anything we have experienced before. They're literally saying, understand, everybody: bad guys, China and Russia and North Korea, they're going to get this capability and they are going to come at us hard.
They wrote, the cadence and volume of vulnerability disclosures will exceed anything we have experienced before. Consider how you manage current priorities and request additional headcount and budget for reserve capacity to avoid exhausting available resources or potentially burning out existing staff. This in parallel with adoption of coding agents, reprioritization, putting more automation in place and helping your team through career uncertainties and upskilling challenges. Yikes. Evolve to a Mythos-ready security program. Mythos, they wrote, is likely one of many changes coming to cybersecurity risk. If not already underway, seriously consider incorporating Mythos and its implications into your strategy. Build collective defense now. Attackers already operate as syndicates, crowdsourcing, sharing tools and moving as a collective. Engage now with sector coordinating groups, ISACs, CERTs and standards bodies to share threat intelligence, coordinate response and produce sector-specific guidance for this moment. Defenders must do the same and leverage our coordinating groups, especially when considering organizations that fall below the cyber poverty line as introduced by Wendy Nather. Just to pause: a little over three years ago, back in 2023, Cisco's Wendy Nather articulated a concept she termed the cyber poverty line. It was the point below which an organization cannot afford to invest in the minimum required security to remain safe on the Internet. So, yes, you do need to invest in security. The bottom of page 17 of the show notes duplicates a breathtaking chart from the very cool and somewhat unnerving website 0dayclock.com. You know, zero-day-clock dot com. The chart shows how the vulnerability-versus-exploit race has radically changed over just the past eight years. At the bottom of page 17, a beautiful chart. Eight years ago, in 2018, the average TTE, time to exploit, was 2.3 years.
In other words, just eight years ago, on average, there was a 2.3 year gap between the public disclosure of a security vulnerability in a CVE, and its confirmed use in an attack exploit. 2.3 years.

Speaker 1:
[110:38] Wow, we had a lot of time back in the day.

Speaker 2:
[110:41] We did.

Speaker 1:
[110:42] Not anymore.

Speaker 2:
[110:43] Look at this chart, Leo, at the bottom of page 17.

Speaker 1:
[110:46] How many days now do we have in a zero day?

Speaker 2:
[110:49] Well, watch how this happens. The next year in 2019, that exploitation gap had dropped from 2.3 to 1.9 years. In 2020, a year later, it was down to 1.3 years to exploit. 2021 averaged 10.8 months from CVE publication to exploitation. A year later, 2022 dropped that 10.8 months down to 9.7. The next year, 2023 was down to 4.9 months. 2024, just two years ago, we were down to 56 days. Last year, 23.2 days. Shockingly, so far this year, we are seeing exploits appear an average of 10 hours after their CVE vulnerabilities have been published.
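The figures Steve just read can be sanity-checked with a little arithmetic: going from 2.3 years in 2018 to 10 hours in 2026 is an overall collapse of roughly 2,000x, which averages out to the exploitation window shrinking by about 2.6x every year.

```python
# Back-of-the-envelope check on the zero-day clock figures quoted above.
HOURS_PER_YEAR = 365 * 24

tte_2018 = 2.3 * HOURS_PER_YEAR   # 2.3 years, expressed in hours
tte_2026 = 10                     # 10 hours

total_shrink = tte_2018 / tte_2026        # overall collapse, 2018 -> 2026
annual_factor = total_shrink ** (1 / 8)   # average per-year shrink factor

print(round(total_shrink))       # roughly 2015x overall
print(round(annual_factor, 2))   # roughly 2.59x per year
```

In other words, the yearly declines Steve lists are consistent with the window more than halving every single year for eight straight years.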

Speaker 1:
[111:54] That's AI, right? I mean, that's got to be AI.

Speaker 2:
[111:57] That is, and we've been talking about this on the podcast, mostly theoretically because it was obvious it was going to happen. Bad guys are sitting, waiting for new vulnerabilities to be published, and they instantly jump on them. Ten hours. Ten hours. So, I mean, there is just no time. As the writers of this paper said, humans cannot outperform machine-driven attacks. It can't, it won't, it doesn't happen. So, from, think about that, eight years, Leo, gone from 2.3 years to 10 hours. So, everybody should check out the 0dayclock.com. It's got this chart and a bunch of others where these sorts of stats are being maintained and it is breathtaking. Okay, so next, I'm going to share just the brief introduction that these cybersecurity industry expert authors wrote for the paper. But Leo, let's first take our final break.

Speaker 1:
[113:04] Good thinking. Thank you for remembering, Steve. All right, on we go. Steve, I think you're muted again.

Speaker 2:
[113:15] Did it again. I didn't want you to hear me typing. So, okay, thank you. Okay, so, the brief introduction that these cybersecurity industry expert authors wrote for the paper. I'm going to point our listeners at it, and recommend that they point their bosses at it, anybody who doesn't understand this. The paper was written for the C-suite guys to understand, and that's why it's got the who's who behind it. So they wrote: ″Many of our assumptions about the capabilities of AI in vulnerability research, exploitation, and autonomous attacks may be outdated. Throughout 2025 and into 2026, we've seen continuous examples of increasing capabilities, both in research and in actual in-the-wild attacks. AI-driven vulnerability discovery and exploitation has been accelerating for over a year. Anthropic's Claude Mythos Preview represents a step change in that trajectory: autonomously finding thousands of critical vulnerabilities across every major operating system and browser, generating working exploits without human guidance, and empowering autonomous attack orchestration, all at a speed and scale that outpaces any prior capability. The asymmetry this creates is structural. AI lowers the cost and skill floor for discovering and exploiting vulnerabilities faster than organizations can patch them. This is what I was talking about last week when I said, you know, now script kiddies can be expert attackers and exploiters, because you just ask AI for some attacks. The window between discovery and weaponization has collapsed to hours. Attackers gain disproportionate benefit, and current patch cycles, response processes and risk metrics were not built for this environment. While many of these capabilities predate this model, Mythos-class capabilities do represent a step change and will proliferate, meaning Anthropic is only first, they're not the last.
The organizations that respond well will be those that build the muscle now: the processes, the tooling and a culture willing to adopt AI as a core part of how security gets done. That adaptability will help determine who meets the next wave on their own terms. This moment requires reprioritizing resources, reviewing risk levels and controls and leveraging AI where feasible. At the time of this writing, most AI defensive controls and approaches are not yet mature. That said, AI attacker technology may be used for defense purposes, and coding agents will help. Okay. To finally place all this into context, I want to share Appendix A of their paper, which they titled Historical Precedent, meaning where we came from, because this will help everybody to put this in context. They said, this all began with the DARPA Cyber Grand Challenge, a landmark competition organized by DARPA in 2016, so a decade ago, that demonstrated the potential of fully automated cybersecurity systems. Teams developed autonomous platforms capable of identifying, exploiting, and patching software vulnerabilities in real time without human intervention. The challenge highlighted a shift toward machine-speed cyber defense, showing how automation and artificial intelligence could significantly enhance vulnerability management and incident response, while also raising important questions about trust, control and the future role of human operators in cybersecurity, meaning humans are going to be obsolete. By mid-2025, XBOW, an autonomous offensive security company, topped the HackerOne leaderboard. The DARPA AI Cyber Challenge found 54 vulnerabilities in four hours of compute. Google's Big Sleep discovered real zero-days in open source. Anthropic's Claude was used to automate full attack chains from reconnaissance through exfiltration. And open-source tools such as Raptor proved autonomous vulnerability research is available to anyone able to use an agent.
In September 2025, Heather Adkins, the CISO for Google, and Gadi Evron, the CEO of Knostic, published a warning that attackers were racing toward a singularity moment, with autonomous vulnerability discovery and exploitation roughly six months away. Wow. Well, that's impressive. Their timing was exactly correct. That was six months ago. In February 2026, Anthropic, using Claude Opus 4.6, reported more than 500 high-severity vulnerabilities in open-source software. Aisle, remember, A-I-S-L-E, found 12 OpenSSL zero-days, including a CVSS 9.8 vulnerability dating back to 1998. Linux kernel maintainers saw vulnerability reports climb from 2 to 10 per week, largely hallucinated at first, but that changed rapidly. The volume has held steady, but the reports are now all verified as real bugs.

Speaker 1:
[119:56] The curl project, which originally discontinued its bug bounty because it was drowning in hallucinated vulnerability reports, AI slop, last week echoed the observation from the Linux team, reporting an increasing number of AI-supported, high-quality security reports. Sysdig documented an AI-based attack that reached admin-level access in eight minutes. This week, Gambit released a report on the AI-led compromise of Mexican government infrastructure originally reported in February. Actually, I saw that and skipped over reporting it due to show length. But briefly, an attacker used a combination of both ChatGPT and Claude to attack, rapidly penetrate, inventory and exfiltrate a much larger amount of data from the Mexican government than would ever have been possible without the aid of AI automation. It was an AI-automated attack. They end their historical timeline by telling us about the zero-day clock, writing: in March, Sergej Epp and others introduced the zero-day clock, visually demonstrating the disappearing time-to-exploit, demonstrating the drastic fall in time to exploitation to less than a day in 2026. Ten hours. It's worth noting that the historical collapse in time to exploit has not yet produced a proportional increase in the impact of exploitation. Many of the most consequential incidents of recent years involved credential abuse, social engineering, or supply chain compromise rather than zero-days. The zero-day clock trend is a leading indicator of where attacker capability is heading, not a direct measure of current damage. So it's predicting what's going to happen shortly. Okay, so then the AI-driven security research company, Aisle, A-I-S-L-E, remember that we talked about them at the time? They found the problems in OpenSSL. They responded a little disgruntled themselves, understandably, to all of the Mythos buzz.
And so, it was in February that we reported on them finding those 15 vulnerabilities in OpenSSL, 12 of which entirely composed a major update to OpenSSL. And, as we know, this paper only briefly mentioned them in passing. So they're grumbling somewhat, saying that they were able to reproduce Anthropic's results themselves without the mythical Mythos. They wrote, and I have a link to their report in the show notes: We took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weight models. Those models recovered much of the same analysis. Eight out of eight models detected Mythos' flagship FreeBSD exploit, including one with only 3.6 billion active parameters costing $1.11 per million tokens. A 5.1 billion active-parameter open model recovered the core chain of the 27-year-old OpenBSD bug. And on a basic security reasoning task, small open models outperformed most frontier models from every major lab. The capability rankings reshuffled completely across tasks. There is no stable best model across cybersecurity tasks. The capability frontier is jagged. In other words, they're just saying, hold on here, we've got our own small, cheap models that we are able to deploy that do the same thing as Mythos. And I don't doubt that Aisle did what they claimed, although, admittedly, there's much they didn't say. For example, even with isolated code, confirming an already-known problem feels different from making brand-new discoveries, although I know in theory there should be no difference. We also don't know how autonomous their system was. That was one of the main points that Anthropic has been making about Mythos: that you just ask it pretty please to attack somebody and it's able to.
And it's only natural for Aisle, a commercial enterprise whose specific and narrow focus is to offer commercial vulnerability discovery services to enterprises, to be somewhat miffed over all the breathless industry and media coverage Mythos has generated. They should be celebrating their own systems if they're able to meaningfully compete with Mythos' outcome for far less money. As I said, once the dust has settled, it's going to all come down to who can do the most with the fewest resources. So if Aisle's got some bunch of tricks up their sleeve that allows them to offer these services far more economically, then I say that's great. Bravo. However, everyone who's been paying attention knows that what the cybersecurity industry most needs right now, this instant, without delay, is a swift kick in the pants. This Security Now podcast informed its listeners of Aisle's AI-driven vulnerability discovery news back in February. It's one of the reasons that Anthropic's claims for Mythos made so much sense to us, right? Because we saw this coming; this made sense. But Aisle did not break through in February. Mythos did. Even if Mythos were hype, which none of these experts, who should know, believe it to be, it should be abundantly clear, even looking at Aisle's results with OpenSSL from February, that the next stage of AI-driven rapid vulnerability discovery and exploitation is here now. And that, as all of these experts also agreed, we're not ready for it. So I'm all for any hype this industry is able to muster, if it will help to instill some much needed fear and action in an industry which appears to have become far too comfortable with the status quo. You know, as I said at the top and a couple of times, let's have another Y2K event that never happens. Not because it isn't real, but because it is.
And everyone who needed to, understood, and then took action to prevent the apocalypse from ever happening when everything rolled over to the year 2000. Anyway, as I said, I've created two GRC shortcut links to this very significant paper to make it even easier for our listeners to get to it. You can either go to GRC.sc slash Mythos, M-Y-T-H-O-S, which everyone should be able to remember, and that'll just bounce you over to the PDF, or use this week's episode number, GRC.sc slash 1075. That'll do the same thing.

Speaker 2:
[128:31] Nice.

Speaker 1:
[128:31] And I think it is very clear that the... I mean, I get it that when we're talking about increasing headcount and reshuffling priorities and all this, I mean, these are expensive things to ask for a problem that hasn't yet manifested. The problem is, by the time it does, it could be too late. It's like waiting...

Speaker 2:
[128:53] A lot more expensive.

Speaker 1:
[128:55] Yes. It's like waiting to see what... Like if the elevators stop running on January 1st of the year 2000. They're like, I'd rather not get stuck in an elevator. Thank you very much.

Speaker 2:
[129:07] OK. Now here's the question. Models are going to continue to get better. I think there's no doubt about that. There was some question a year ago, maybe, that maybe we'd hit a plateau and models weren't getting better fast. I think we all see that that is not the case.

Speaker 1:
[129:22] We're learning how to use this new thing.

Speaker 2:
[129:25] Yeah.

Speaker 1:
[129:25] Like the notion of parallel agents.

Speaker 2:
[129:27] Yeah, we're getting better at it. Yeah.

Speaker 1:
[129:29] And a collection of different capabilities that are brought in.

Speaker 2:
[129:33] Right.

Speaker 1:
[129:34] So, yeah, so we're learning basically how to ask.

Speaker 2:
[129:37] So, presumably the 271 bugs they found in Firefox this time, next time we might find more. We might find more again as models get better. We might find more again. Is there a point where software just becomes perfect and there are no more bugs?

Speaker 1:
[129:57] Yeah, I think there is. Software is math. Math is 100 percent predictable. Right. You know, there's no random number generator like there is in AI. There's no random number generator in our software. It is deterministic. And I hope that's something that doesn't get lost in the details. Basically, humans have created software that is too complex for them to hold in their heads.

Speaker 2:
[130:30] Right.

Speaker 1:
[130:31] That's what's happened: we don't understand our own creation. But AI can be scaled to be able to understand, and I use understand in air quotes. I know it's not conscious; it's not actually understanding. But to weave through all the combinatorial ins and outs. And God, who was it? There was another person who I just saw. I think it might have been in an email feedback. Oh, it was one of our listeners. I'll share it next week. It was one of our listeners who has been maintaining a package that is exposed to the internet, and it involves SQL. And he was curious. So he aimed Claude Code at the software that he wrote, and it found a vulnerability that astonished him. And he said it wasn't super critical, because it only, blah, blah, blah; I know there were lots of conditions you had to meet. But he was amazed by what it found in his own code. And so he stood there thinking, my God, is this true? And then, oh, he didn't want to upset it by asking it for an exploit, so he wrote the exploit himself.
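For context on the kind of flaw being described in a long-lived, internet-facing SQL package, here is a minimal, purely hypothetical sketch of the classic injection pattern that this sort of AI-driven code review tends to surface. None of these names come from the listener's actual code; they're illustrative only, shown in Python with an in-memory SQLite database:

```python
import sqlite3


def setup() -> sqlite3.Connection:
    # In-memory database with one table, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE books (title TEXT)")
    conn.execute("INSERT INTO books VALUES ('Project Hail Mary')")
    conn.commit()
    return conn


def search_vulnerable(conn: sqlite3.Connection, term: str) -> list:
    # BAD: user input is interpolated straight into the SQL text,
    # so a crafted search term can rewrite the WHERE clause.
    return conn.execute(
        f"SELECT title FROM books WHERE title LIKE '%{term}%'"
    ).fetchall()


def search_safe(conn: sqlite3.Connection, term: str) -> list:
    # GOOD: parameterized query; the driver treats the input as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT title FROM books WHERE title LIKE ?", (f"%{term}%",)
    ).fetchall()


conn = setup()
# The injected term turns the clause into: title LIKE '%' OR '1'='1%'
# where LIKE '%' matches every row, leaking the whole table.
print(search_vulnerable(conn, "' OR '1'='1"))
# The parameterized version just searches for that literal string
# and finds nothing.
print(search_safe(conn, "' OR '1'='1"))
```

The fix is mechanical once the flaw is spotted, which is exactly why tireless automated review is good at this: it checks every query, not just the ones a human remembers to look at.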

Speaker 2:
[131:51] Actually, Glenn Fleishman on Sunday was reporting something very similar. He's had a web-facing, internet-facing tool running for, I think he said, like 20 years; a long time. It's just a little thing that he runs, some sort of a book search or something. And he said he fired Claude Code at it. It found bugs that had been there for 20 years, that no one had seen, that he hadn't seen, and that he was able to fix. I mean, I think you nailed it, which is that it's gotten impossible for us as human beings to make perfect software. But this is a machine. It is tireless. It doesn't make the same kinds of mistakes.

Speaker 1:
[132:32] And to use chess, to fall back to chess again. You know, super chess grandmasters are able to look at a board and see things in it I can't even begin to describe. They were able to hold their own for a long time. No more. That's gone.

Speaker 2:
[132:51] It's no longer even close.

Speaker 1:
[132:52] And that suggests that now we have computers that are able to look at the same thing. Again, no one's rolling dice in chess. It is a deterministic board game, and they can take us down now.

Speaker 2:
[133:09] Meanwhile, I've been working on my firmware. Let me just, help me Obi-Wan. Help me Obi-Wan. Just going to see if it's listening. Help me Obi-Wan. Still working on it. It's a very, it's a thorny problem we're working on here.

Speaker 1:
[133:22] But we are, you know, we're okay. You're a little younger than I am. You're in your late sixties. I'm now 71.

Speaker 2:
[133:31] No, I'm almost 70. I'll be 70 in November. I'm not so far behind.

Speaker 1:
[133:34] Okay. So we're still going to be here in another 10 years. I hope so.

Speaker 2:
[133:39] God willing.

Speaker 1:
[133:39] The world is going to be different.

Speaker 2:
[133:41] I know. And I love that. I thought I was going to miss the apocalypse.

Speaker 1:
[133:45] It's going to happen so fast. That what's so cool is there's so much money behind software development that there will be a huge push to make this happen.

Speaker 2:
[133:55] Oh yeah. And the other thing that's encouraging is this, these tools are getting more efficient, which means they're taking less hardware to do more, which means not only will they improve, but they will be more accessible.

Speaker 1:
[134:08] Yes.

Speaker 2:
[134:08] They will be less expensive.

Speaker 1:
[134:10] I'm convinced cloud crap is going to go away. I hope so. We're going to have locally running models, because we'll have little AI boxes in our homes that we talk to, and they're able to do what we want.

Speaker 2:
[134:24] Yep. Yep. That's what I'm working on right now. That's exactly it. Because I'm so dissatisfied with Alexa and Siri and all these other assistants, I'm trying to make an assistant that works the same way, but is local and knows me and has memory and all of that stuff. I'm getting closer than I ever thought I would. I think by the time...

Speaker 1:
[134:45] Here's the key, Leo. You are having fun.

Speaker 2:
[134:48] Oh, it's the best game ever. You are having fun. It's like coding. I mean, I still love coding, but coding is more like hand building furniture.

Speaker 1:
[134:57] Leo, it is modern coding. This is what coding is going to become. People are going to be removed from the code-generating loop, and we will be directing AI to write our code. And this is me, who is still coding in assembly language, saying it's over, folks. People are going to be taken out.

Speaker 2:
[135:17] Wow. And you know what? Maybe that's the right thing because we had our time. Computers are going to do a better job of this. This is their native tongue, you know? Steve Gibson does such a good job. I'm so glad we have you to rely on. And let's hope we get to keep doing this for many, many more years to come. You'll find him at grc.com. The Gibson Research Corporation. That's where you'll find Spinrite, the world's best mass storage maintenance, performance enhancing and recovery tool. If you've got mass storage, you've got to have Spinrite, get the current version. 6.1 is there. Free upgrades for anybody who's ever bought it in the past. If you haven't bought it, buy it now. Get on the train. He also has something brand new that he wrote that's fantastic. The DNS Benchmark Pro that lets you figure out what the best DNS provider would be for your particular situation. That's all.

Speaker 1:
[136:13] It matters where you live.

Speaker 2:
[136:14] It's less than 10 bucks. It matters where you live. All of that at grc.com. While you're there, sign up for his newsletters. He has two of them. One is a product announcement newsletter that you'll rarely get anything from, and he works very carefully.

Speaker 1:
[136:29] Methodically.

Speaker 2:
[136:30] Methodically. The other, though, you will get: a weekly email, the show notes, sometimes from the wrong year, most of the time from the right year. No, it's always the correct show notes; it's just the year that it says it's from is different. Those come out usually on Sunday, before the show on Tuesday, or thereabouts. So sign up there at grc.com/email. What you're really doing, that form there, is just whitelisting your email so you can send him pictures of the week and comments and suggestions and stuff. You do have to do that, because he doesn't want any spam. It's a very effective system he's come up with. Let's see, what else? Oh, we have copies of the show on our website, twit.tv/sn. There's also a YouTube channel dedicated to the video. You can also subscribe on your favorite podcast client, however you do it. I don't think you want to miss a single episode of this show. If you're one of those folks who wants the most recent version of the show, you can actually watch us do it live Tuesdays right after MacBreak Weekly. That's 1:30 Pacific, 4:30 Eastern, 20:30 UTC. Club members get to watch in the Club TWiT Discord. How nice. The rest of you can watch on YouTube, Twitch, X.com, Facebook, LinkedIn, or Kick. You take your pick, so you can watch live if you want. If you're not a Club member, do join the Club. It's very important to us to keep doing what we're doing. Advertising does not cover all of our expenses; it barely covers 70 percent. The Club makes up the difference, and without you, we really wouldn't be able to do what we're doing. So please think about it. If you can afford it, 10 bucks a month, you get ad-free versions of all the shows, you get access to Discord, which is a great place, smart people, really fun to hang out. You get special programming just for Club members. We really need you now more than ever. twit.tv/clubtwit. Steve, have a wonderful week. Will do.

Speaker 1:
[138:27] See you next time. I'll be here next Tuesday. Bye.