title 2.5 Admins 296: Beware of the Leopard

description Microsoft locks devs out of important accounts, the foreign router ban exemptions make even less sense, Backblaze shows that “unlimited” never means that, and attempting to avoid software that’s written with AI.

 

Plugs

Support us on Patreon and get an ad-free RSS feed with some early episodes

Do More with Less: Cost-Efficient Storage on the New TrueNAS with Enhanced Fast Dedup

The Hidden Value of CPU-Intensive Compression on Modern Hardware

 

News/discussion

Microsoft locks out VeraCrypt and WireGuard devs, blames verification process

Action Required: Account Verification for Windows Hardware Program Begins October 16, 2025

FCC exempts Netgear from ban on foreign routers, doesn’t explain why

Backblaze has quietly stopped backing up your data

 

Free consulting

We were asked about avoiding software that’s written with AI.

See our contact page for ways to get in touch.

 

pubDate Thu, 23 Apr 2026 16:52:00 GMT

author The Late Night Linux Family

duration 1832000

transcript

Speaker 1:
[00:00] This Late Night Linux Family Podcast is made possible by our patrons. Go to latenightlinux.com/support for details of how you can join them. Support us on Patreon for access to ad-free episodes and early releases. That's latenightlinux.com/support. Two and a half Admins, Episode 296. I'm Joe. I'm Jim.

Speaker 2:
[00:21] And I'm Allan.

Speaker 1:
[00:23] And here we are again. And before we get started, you got a couple of plugs, Allan. First, a webinar.

Speaker 2:
[00:27] Yeah. Wednesday, April 29th at 11 a.m. Eastern or 5 p.m. Central European time, we're having Do More with Less: Cost-Efficient Storage on the New TrueNAS with Enhanced Fast Dedup. So if you've ever been interested in the new dedup capabilities in ZFS, definitely come check it out.

Speaker 1:
[00:44] Okay. And you've got an article, The Hidden Value of CPU Intensive Compression on Modern Hardware.

Speaker 2:
[00:48] Yeah. You know, for a long time you could only have so much compression because you only had so much CPU. But now that your storage server likely has 32 cores or more, you can actually use even Zstandard at the higher levels and get that much more compression, and therefore storage savings. And especially if you're paying $350 a terabyte for NVMe, it can make a real big difference.
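The level-versus-CPU tradeoff Allan describes is easy to sketch. Zstandard isn't in Python's older standard library, so this minimal example uses the stdlib zlib module instead; the shape of the tradeoff is the same: higher levels spend more CPU for smaller output.

```python
import time
import zlib

# Highly compressible sample data (repetitive text, like logs or configs).
data = b"timestamp=2026-04-23 level=info msg=request served status=200\n" * 20_000

# Compare compression levels: higher levels spend more CPU for smaller output.
for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: {len(compressed):>6} bytes, "
          f"ratio {ratio:6.1f}x, {elapsed * 1000:.2f} ms")
```

With many cores, that extra per-stream CPU cost can be parallelized across writes, which is why the higher levels become practical on modern storage servers.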

Speaker 1:
[01:11] Right. Well, links to both of those in the show notes. Microsoft locks out VeraCrypt and WireGuard devs, blames verification process. Now, this has been resolved, so let's keep that in mind.

Speaker 3:
[01:23] Well, this has been resolved for the two people in question. But I very seriously doubt that the issue of accounts just randomly getting locked out for no good reason, and then having to go through a dystopian, Orwellian nightmare trying to deal with it, has gone away. No, that issue is very much still there. Jason Donenfeld was one of the two individuals who got locked out recently. He's the founder of the WireGuard project. And it was very interesting to me that Microsoft gave him the runaround. Because this is not just an issue of, you know, one day these folks go to sign in to their Azure accounts and, nope, you're not allowed. You've been banned. You've been permanently banned. No reason why. No warnings. Just, you may not log in. You know, you try to contact support. Well, support wants you to log in to do the support thing, which you can't do, because that's what you're calling support about. So much like, you know, what I've been complaining about recently with having to deal with MFA resets and such, Donenfeld found himself stuck in this endless loop. Well, the thing is, Microsoft has a lot of interest in WireGuard. When WireGuardNT came out, there was real collaboration between devs at Microsoft and Donenfeld getting everything just right for Windows, the same way that the Linux kernel community got directly involved, you know, before WireGuard could make its way into the Linux kernel. There was refactoring involved. It's not like he's just some guy who has a software project that happens to run on Windows and his crap got locked out. Microsoft knows who he is, and yet he went through round after round with people telling him, okay, well, this is the appeals process. It will take 60 days to resolve. We're not gonna tell you what documentation you need to provide us. Best of luck with that. We'll let you know at the end of 60 days, you know, whether you passed or not.
Now, he finally found some back channel where somebody said, okay, now we're gonna go ahead and get this resolved. And he didn't have to wait out the entire 60 days. But again, that ain't a fix.

Speaker 2:
[03:20] Yeah, like as the Register article points out, if there had been a zero-day or just a security vulnerability in WireGuard at the time, he would have had no way to push that change out to all the Windows users. The thing he was trying to log in to do at the time was, he had spent all this work getting a new in-kernel driver for Windows up to spec with the Windows Hardware Lab Kit test suite and getting the extended validation code-signing certificate applied. Then he goes to log in and it's like, nope, screw you.

Speaker 3:
[03:51] The other thing that Jason didn't mention is that even absent undiscovered vulnerabilities in WireGuard that this would have been the perfect time to exploit while he was locked out for at least 60 days. When you get locked out like that and you get told things like, well, you can file for an appeal, it'll be 60 days, we're not going to tell you what you need to provide us, it's just going to be radio silence till the 60 days is up, and then we'll let you know what we decided. The other thing that tells you is that process, it's opaque as hell, it's really arbitrary, and that is also a big security problem because you don't know you're the only one that's going to file an appeal during that time frame. If they decide to approve somebody else's appeal, then what?

Speaker 1:
[04:34] Well, Microsoft did post about this in October last year. They had a blog post, Action required account verification for Windows hardware program begins October 16th, 2025. And in this post, they say that effective October 16th, 2025, Microsoft will initiate mandatory account verification for all partners in the Windows hardware program who have not completed account verification since April 2024. And it goes on and on and on. And they actually updated that post recently to say, we heard your feedback. We know that some partners whose accounts were suspended following account verification are experiencing challenges regaining access to hardware dev center. And then a bunch of instructions of how to get back in. So they have made some efforts here, to be fair to them.

Speaker 3:
[05:20] First step to get back in, of course, is you need to log in to your account.

Speaker 2:
[05:23] Which just tells you deactivated, get lost.

Speaker 3:
[05:26] I'm joking. I haven't read what Joe is looking at. But honestly, I would not be surprised, because I have seen so many sets of instructions on account recovery that tell you to first log in to the account so that you can do any number of things that are only available once you're authenticated into the portal in question, which is the very thing you're locked out of and can't log into. I don't know why enormous tech organizations keep getting that wrong, but boy, they sure do.

Speaker 1:
[05:53] It looks like you don't have to log in, to be fair. You have to have access to email and a bunch of other stuff. But no, I think they have actually sorted this out.

Speaker 2:
[06:02] Well, they still haven't explained how that email that they were supposed to send to Jason never got to him.

Speaker 1:
[06:07] Well, as we know, emails are notoriously unreliable. How many times have you advised people to not monitor their servers via email, for example?

Speaker 2:
[06:15] Right.

Speaker 1:
[06:15] But yeah, they should have communicated this better than they did. That is very, very clear. It shouldn't have been as hard as it was before they fixed it. Who knows how good the fix is.

Speaker 3:
[06:27] I don't know that everybody's going to agree with me on this one. But as a certified old who remembers when things were done differently, TM, Donenfeld is prominent enough and well-known enough at Microsoft that they should have called him, you know? I mean, if the email didn't arrive, if this didn't happen, if the other didn't happen, like somebody should have reached out to him to make that right because he's doing Microsoft at least as many favors as Microsoft is doing him. And the idea that an organization has gotten so moribund that it can't even figure out, oh, hey, you know, I've arbitrarily locked out somebody mission-critical for 60 days, and like, my only response is, who gives a shit? That's not a healthy org. That's not a healthy way to run things.

Speaker 1:
[07:16] FCC exempts Netgear from ban on foreign routers. Doesn't explain why.

Speaker 3:
[07:22] Money. Baksheesh. Corruption.

Speaker 1:
[07:26] Opacity and corruption from the Trump government? No. So we did talk about this a few weeks ago, so this is an update to that story, but the headline isn't even necessarily the most interesting part. It's the fact that the Trump administration might block updates to existing routers.

Speaker 3:
[07:45] Yeah, that was the really big headline for me. The Trump administration has reserved the right to block security updates to routers, and wow, the intent is obvious. I don't see any other way to look at this, is that it's just another bit of pressure to comply. But just wow, like any possible, possible argument that this is for the benefit of the American consumer is just completely out the airlock. Once you say, you know what? No more security updates.

Speaker 1:
[08:17] Well, presumably the argument is that nefarious shit could be shipped in through the updates.

Speaker 3:
[08:23] Sure, that's the argument. Is it a good argument?

Speaker 1:
[08:27] No.

Speaker 3:
It is fair to say that yes, you can absolutely sneak nasty crap in in the guise of a security update, or along with a security update, or what have you. But the United States government has neither the resources nor the interest to actually code-audit every security update for every model of consumer router manufactured. It's not possible. It's not going to happen. They never intended for that to happen. So to just say we're going to arbitrarily block all security updates is essentially saying, we don't really give the slightest shit about consumers' security; all we care about is making sure the thing that we wanted gets done.

Speaker 2:
[09:06] Yeah. So basically, all previously approved routers can have software updates until March of 2027, and then that will expire unless they grant that exemption again. And even for Netgear, with their routers and switches and so on now being allowed, the exemption only lasts until October 1st of 2027; then again, if not renewed, they would be banned again.

Speaker 1:
[09:30] So last time we talked about this, we got a bit of pushback from some people in the audience because we'd played down the importance of a router, because of HTTPS and all the rest of it. Some people said to us, well, that's not the issue with dodgy routers. It's the fact that they give the nefarious people a toehold into your network.

Speaker 3:
[09:51] Like literally any other device on your network. Any single internet connected device can be used as a toehold.

Speaker 1:
[09:58] Yeah, you replied to one of our listeners and said, well, what about web browsers?

Speaker 3:
[10:02] That's a whole other can of worms. I hadn't even gotten into that part yet. I'm literally just saying that your smart thermostat, your smart washer and dryer, your smart refrigerator, your Wi-Fi access points.

Speaker 1:
[10:12] Your Android tablet that hasn't been updated for five years.

Speaker 3:
[10:15] Nope, nope, we're not getting into general-purpose computing devices yet. We're sticking with IoT. We're talking about things that people don't think of as computers, but that they just blithely plug in and don't think of as a security risk. And my point is that any single device that is internet accessible, that you haven't audited every single line of code of, can be reaching outbound to phone home, and that same connection that it establishes outbound can be followed back inbound, from which they have a beachhead into your LAN. Now, to Joe's point, even these, I don't think, are the biggest real security threat. The biggest security threat to the typical American consumer is going to be their own system, their own web browser. Partly because web browsers are insanely complicated software projects. Any modern web browser absolutely dwarfs operating systems from, you know, when I was coming up in computing. And I'm not talking dwarfs like DOS, I'm talking like Windows NT, Windows 2000. I believe we're already at the point where modern browsers are larger than the entire Windows 2000 operating system was. That is an awful lot of attack surface. There's also an awful lot of pressure against security in browsers. I know that's going to raise people's eyebrows hearing me say that, but it's because of what people want to do with browsers. And I've been railing about this literally for 20 years now. Flash pissed me off because it was such a horrible security intrusion into the browser. And the web dev community loudly told me I was an idiot right up until the point where the entire internet collectively got sick of it and said, no, Flash is deprecated and it can never come back. It is dead, never to return.

Speaker 1:
[11:56] Arguably, Apple did that by not allowing it on the iPhone.

Speaker 3:
[11:59] Right. But my point is, the reason Flash was such a nightmare is because it's what took the web away from pure HTML that was still simple enough to say, okay, well, we know what this is allowed to do and what it's not allowed to do, and we can see what's in the browser versus what's out of the browser. But as the technology became more popular, again, particularly with mainstream consumers, everybody starts saying, oh, but, you know, what if we could do anything with the browser? What if we could make it play video games? What if we could go full screen? What if we could draw outside the browser window, spawn new windows? Like every single time somebody found something that couldn't be done in a browser that could be done with a local application, there's a sudden push to find a way to do that in the browser, until you get to the point where, essentially, we haven't really just been viewing web pages for a long time. When you jump into a browser and hop onto the internet in 2026, you're doing more than just rendering documents in a markup language; you're running external code, is what you're doing. And you are hoping and praying that the sandbox in your browser keeps you safe from any nasty ways that this Turing-complete set of languages might take advantage of you.

Speaker 2:
[13:13] So Netgear isn't the only company that got devices approved; a second company, Adtran, also got some devices approved, although they point out that those devices in particular seem to be more service-provider oriented. And even with Netgear, in addition to their routers, it was specifically their cable modem. And I wonder if some of the things getting approved have more to do with American telcos pushing for, well, if you're going to ban all routers, which router do you want us to use to hook people up to the internet?

Speaker 3:
[13:42] That seems entirely possible, although, was it Arris that sells cable modems with integrated routers? I didn't see anything about them getting exemptions.

Speaker 2:
[13:52] Yeah, the Amazon mesh Wi-Fi thing hasn't been approved yet either, so I don't know.

Speaker 3:
[13:56] The eero, sure. The thing that bothers me the most is that this clearly has nothing to do with actually improving our nation's security posture or the safety of the American consumer. If I was going to rank the real InfoSec risks to the American consumer's household, the very first one is all the devices that the user actively uses to touch the Internet, beginning with computers and phones and tablets, anything with a full-on browser in it. The next thing down the list is going to be any gaming consoles; again, enormous attack surface. Any internet-connected device works, anything that can maintain an outbound connection, and not only is the console itself very complex, not only is it probably running, well, if you've got an Xbox, it's essentially running Windows underneath, which, enough said about that, but also you're frequently installing gigabytes of game code on these things, running it, and allowing those gigabytes of code to touch the Internet. So there's a lot of potential for compromise there. Following that, I would be worried about IoT devices, especially any of the Alibaba, AliExpress type stuff, but also, honestly, I'm not much happier about Maytag putting out a smart washing machine than I am about whatever crap you got off of AliExpress, because in my experience, the level of QA and attention to the code that goes into it, and frankly the ethics frequently, are not any better with the American manufacturers. Finally, below that, you've got the router. And the reason that I'm ranking the router so low, at the bottom, again, is just because it has so much less attack surface, and it's really not providing any significant additional benefit that any other toehold wouldn't have. I mean, you can talk about, like, well, if you get hold of the router, then you know all the devices that are on the network.
Yeah, if you get hold of any device on the network, you can do that. You can just run an nmap scan. You can look at the MAC addresses. You can build up a list of devices and vendors. There's just not much that you can't do from just about any device on the LAN, especially a consumer LAN, which is not going to be segmented or carved up into VLANs. It's just going to be, well, you got in, game over.
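Jim's point, that any compromised device can inventory the LAN, takes very little code. This is a minimal, self-contained sketch that parses `arp -a`-style output and guesses vendors from MAC prefixes; the hosts, addresses, and OUI-to-vendor assignments below are invented for the example (a real tool would use the full IEEE OUI registry).

```python
import re

# Sample output in the style of `arp -a` on Linux (hypothetical hosts).
ARP_OUTPUT = """\
router.lan (192.168.1.1) at 3c:84:6a:01:02:03 [ether] on eth0
printer.lan (192.168.1.23) at 00:1b:a9:aa:bb:cc [ether] on eth0
tv.lan (192.168.1.40) at fc:f1:52:dd:ee:ff [ether] on eth0
"""

# Tiny illustrative OUI-to-vendor map; the assignments are made up here.
OUI_VENDORS = {
    "3c:84:6a": "TP-Link",
    "00:1b:a9": "Brother",
    "fc:f1:52": "Sony",
}

def enumerate_devices(arp_text):
    """Extract (ip, mac, vendor guess) tuples from arp-style output."""
    pattern = re.compile(r"\((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f:]{17})")
    devices = []
    for ip, mac in pattern.findall(arp_text):
        vendor = OUI_VENDORS.get(mac[:8], "unknown")
        devices.append((ip, mac, vendor))
    return devices

for ip, mac, vendor in enumerate_devices(ARP_OUTPUT):
    print(f"{ip:<15} {mac}  {vendor}")
```

The ARP cache fills passively as devices talk; no privileged position on the router is needed, which is Jim's argument for ranking the router low as a unique threat.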

Speaker 2:
[16:08] Yeah, and if we had to actually solve some of these security problems, it would be more the other stuff we talked about, I guess it would be two or three years ago now. The idea of the nutrition-style warning labels on routers, about, we promise we'll do security updates until at least this many years from now, or this one's been validated, and so on. Or we talked about NIST coming up with a set of criteria to judge whether a router met at least some minimum standards. Like, not every copy of this router has the same default password, and they've actually run at least this battery of tests on it to make sure there's nothing stupidly wrong with it.

Speaker 3:
[16:45] Well, Allan, don't you see that's exactly what Uncle Sam is doing for us soon. We'll be able to see the Made in the USA sticker on all the boxes, and that'll answer all our security questions.

Speaker 2:
[16:54] Yay.

Speaker 1:
[16:56] Backblaze has quietly stopped backing up your data. This is a post from Robert Rees from just over a month ago that I missed at the time, but I thought it was worth shining a light on.

Speaker 2:
Yeah, this seems to be a change in the default ignore list. As part of the article, they explain that, unexpectedly, Backblaze's backup client skips the .git directory when backing things up, which most of the time is okay, except when you actually need to undo something in the repo itself, as opposed to the actual files in the repo.

Speaker 3:
[17:28] This probably shouldn't be surprising, given that Backblaze's typical plan is unlimited storage. It's never true, it's never true. When somebody tells you you get unlimited something for limited dollars per month, it is always a lie. You don't know in what way that lie is going to come back and bite you in the ass, but it's going to, and this is how it bit people in the ass with Backblaze.

Speaker 1:
[17:54] It is worth noting though that on Backblaze's pricing page, it's got a nice chart of what's included and what's not in their various plans. On the basic personal backup plan, that's $99 a year, data covered, it says, and then you mouse over the information thing and it says, all user-generated data will be backed up from computers and connected external drives. So why is my Git directory not part of that then?

Speaker 3:
[18:20] Because unlimited storage is a lie and they were looking for ways to have to store less crap.

Speaker 2:
[18:25] To be clear, we're talking about the .git directory in your Git checkout. So all the source code will get backed up, but the machine-generated metadata of the commit history from the repo won't be. But most of the time, you've pushed that to some Git server somewhere, so it doesn't matter. But that's not a fun surprise to run into.
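A backup client's default ignore list amounts to a directory-pruning walk: once `.git` is pruned, nothing under it is ever seen. Here's a minimal sketch; the exclusion patterns are illustrative, not Backblaze's actual list.

```python
import fnmatch
import os
import tempfile

# Exclusion patterns in the style of a backup client's default ignore list.
# These patterns are illustrative, not Backblaze's actual list.
EXCLUDE_DIRS = {".git", "node_modules", ".cache"}
EXCLUDE_GLOBS = ["*.tmp", "*.swp"]

def files_to_back_up(root):
    """Walk `root`, yielding files that survive the exclusion rules."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending into them;
        # this is exactly how a .git directory silently drops out of a backup.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDE_DIRS]
        for name in filenames:
            if not any(fnmatch.fnmatch(name, g) for g in EXCLUDE_GLOBS):
                yield os.path.join(dirpath, name)

# Demo: a toy checkout with a source file, a temp file, and .git metadata.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, ".git", "objects"))
    for rel in (os.path.join(".git", "HEAD"), "main.py", "notes.tmp"):
        open(os.path.join(root, rel), "w").close()
    backed_up = sorted(os.path.relpath(p, root) for p in files_to_back_up(root))
    print(backed_up)  # only main.py survives the ignore list
```

The working files come back from a restore; the commit history under `.git` is just gone, which is the surprise the article is about.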

Speaker 1:
[18:46] Well, also Dropbox and OneDrive folders are now not included as well. And you could argue that that's not files on the local computer, but if they're pulled down from OneDrive and Dropbox onto a local computer, then they're files on your local computer.

Speaker 2:
[19:03] Well, in particular, there's no way for the Backblaze client to tell which is a file that started on your computer and went into Dropbox and got sent, and now goes to the other computer, versus the other way. And they've just said, if you're using some other software, like OneDrive or Dropbox, to back this up, we don't have to, so we'll exclude it to save storage on our end.

Speaker 1:
[19:19] And this stuff was in changelogs, but they didn't really shout about these changes, did they?

Speaker 3:
[19:25] Oh, yeah. And, you know, everybody reads changelogs on their cloud services on a regular basis, every time they, you know, add a feature or add a new line of code, whatever. Yep. Yep. That's what people do. That's how the Internet works. What was the line from The Hitchhiker's Guide to the Galaxy? Oh, yes, we absolutely posted notice of the upcoming demolition of your house, Mr. Dent. It was in, what was it, a locked filing cabinet, in a disused basement, behind a sign that said 'Beware of the Leopard', as I recall. That's pretty much what Backblaze did here.

Speaker 1:
[19:54] But like you say, it comes down to this, nothing unlimited ever is.

Speaker 3:
Yeah, and they probably felt pretty justified in this. Well, look, we're offering unlimited backup for a reasonable price, but it's getting more expensive than we wanted it to, and this stuff's already getting backed up elsewhere, so we don't have to worry about it. Like, I can see where, from a human perspective, you might feel justified, even though you're not; really you're just popping the bubble on that lie. My point, again, is that when you have a fixed plan, where you just say, okay, you have to pay me X dollars in order to get Y terabytes of storage on my cloud backup server, there's no reason for that company to even start thinking about what kind of folder it's backing up. Why would you care? You know how your accounting works. You know how much money you need to charge per unit of actual resources consumed, versus just telling individual consumers, you know what? We're guessing you're not going to use more than $99 a year worth of our resources. We're pretty sure of that, so we'll tell you you can have as much as you want.

Speaker 2:
[21:00] Yeah, and if you're actually paying for every gigabyte you back up, they want you to back up as much as you wish. Send me all your files.

Speaker 1:
[21:08] Let's do some free consulting then. But first, just a quick thank you to everyone who supports us with PayPal and Patreon. We really do appreciate that. If you want to join those people, you can go to 2.5admins.com/support. Remember that for various amounts on Patreon, you can get an advert free RSS feed of either just this show, or all the shows in the Late Night Linux family. If you want to send in your questions for Jim and Allan or your feedback, you can email show at 2.5admins.com. Scott writes, when the Late Night Linux guys recently talked about how badly vibe coding is infecting our software chain, it sounded like they were tackling it from a desktop user's perspective. I understand that it is what it is, but what about software we are hosting on servers, especially when it is for other people as well as ourselves? I've been wanting to de-Google and de-Cloud as much as possible, and I thought Nextcloud seemed like a good fit, but it seems like they've leaned really hard into LLM use. It's incredibly draining and time-consuming trying to detect and avoid software Claude and friends have contributed to. I'm half tempted to freeze or pin software on my server to the last pre-Claude versions. This is a huge shift not only in mindset, but also the scale of responsibility when coming from someone who has always tried to keep as up-to-date as possible and relied on a lot of community-built software.

Speaker 3:
[22:23] Okay. So first off, no, we cannot possibly advise you to pin all of your software to pre-Claude versions and leave it there. That would be incredibly irresponsible of us. I personally have spent decades telling people, stop trying to become a bottleneck in the flow of security updates flowing to your system. In the mid-2000s, it used to be very common practice for Windows Admins to just refuse to apply Windows updates because maybe they might break something. In the 90s, even more so than that, it could make sense when you had, you know, completely isolated servers with no access to the internet and only 10 people could touch them. It actually made a hell of a lot of sense to leave your NT 4.0 server unpatched and just operational, you know, with uptime of like five and six years plus. But as information security has gotten to be a bigger and bigger problem over the years, basically the more people who can possibly touch your systems, the more up-to-date you'd better freaking keep them because vulnerabilities get disclosed, they get passed around, they get traded, and the older a vulnerability on your system is, well, the more likely somebody's going to find it and nail you with it.

Speaker 2:
Yeah, and you know, there's a bit of hysteria over the AI stuff at the moment. And while I, for one, am not pro putting AI all over the place, at the same time, we've also seen this backlash against Claude cause problems when Claude-based research reports a vulnerability. And it's like, well, is that a contribution from Claude? Does that taint our whole software now, because we fixed the vulnerability it reported? So just because there's a contribution from Claude doesn't necessarily mean the software is tainted either. It really comes down to how much you trust the maintainers of the software to have code-reviewed or validated the contributions. It doesn't matter if they were from Claude or from junior programmers. Bad code is bad code. And either there's a thorough review or there's not. And so you can see strong projects with good policies accepting AI code, and it's fine, and you can also see projects that outright refuse AI code, and either people contribute it and don't say that it's AI code, or they just contribute bad code and it gets in, because the maintainers are just clicking merge on every pull request, not actually looking at what's inside.

Speaker 3:
Yeah, Allan hit the nail on the head. As much as we try to draw a hard, bright line between human effort and AI effort, that's not really entirely possible. There are a world of image classifiers out there, for example, that claim they can tell you whether or not an image was AI-generated. That is a lie. They cannot tell that for certain. There may be a tell that can be found that a given model might know about, but it's not like there's some magic thing that's always going to be true of anything generated by an AI that makes it discernible in that sense. So what we're left with is the real problem: what we do know is that, as a class, AIs write really shitty code. They don't check their work. This is really the big issue. They may not understand the actual problem that they're trying to solve. They may not understand the language that well. There's all sorts of ways they can screw up, almost all of which can also be screwed up by shitty humans. The problem with the LLM contributions is not that they're so different from human contributions; it's that they're so likely to map well to subpar human contributions. So the better metric than "has a Claude contribution ever shown up in this project" is usually going to be more along the lines of, all right, well, how many contributions labeled Claude have shown up in this project? Do we have any declarations from the project maintainers about how they manage and review code contributions?

Speaker 1:
[26:09] Yeah, was the initial commit one big dump from Claude? That's a pretty bad sign, right?

Speaker 3:
Well, yeah, yeah, that's certainly your worst case. But I think you're going to have to look at it, you know, essentially the same way that we classify spam in email. You know, classifying spam versus ham is very difficult, and that's the reason why we don't usually use block lists as a one-hit instant fail, because block lists get things wrong. Content analysis of an email by, you know, Bayesian filters and, you know, you name it, they get stuff wrong. Handwritten rules get things wrong. So what do we do? We create a scoring system. And for all of these rules, all the ways that we can test an email, we say, okay, well, if you failed this test, that's going to cost you this many points. If you pass this one, it won't cost you points. If you pass the other one, maybe that gives you a couple of points back, because we basically are never going to see that from spam. So you wind up getting all of this into a system that's not nice and simple and clean-cut. You don't have a little light that definitely says spam or ham, but you have something that's a lot more useful than just, well, this was in Spamhaus, and therefore, no, I won't accept it.
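Jim's spam-scoring analogy translates directly into code: an additive score in the spirit of SpamAssassin, where no single rule decides and some rules subtract points. The rules, weights, and contribution fields below are invented for illustration, not any project's real policy.

```python
# Additive scoring in the spirit of SpamAssassin: each triggered rule
# adjusts the score, and only the total decides. All weights are made up.
RULES = [
    ("bulk_initial_commit", 3.0, lambda c: c["initial_commit_lines"] > 5000),
    ("labeled_ai_generated", 1.5, lambda c: c["ai_trailer_present"]),
    ("no_review_policy", 2.0, lambda c: not c["has_review_policy"]),
    # Negative weight: a maintainer sign-off buys points back.
    ("maintainer_signoff", -2.5, lambda c: c["signed_off_by_maintainer"]),
]

def score_contribution(contribution, threshold=3.0):
    """Sum the weights of all triggered rules; flag if the total crosses threshold."""
    total = sum(weight for _, weight, check in RULES if check(contribution))
    return total, total >= threshold

suspicious = {
    "initial_commit_lines": 12000,
    "ai_trailer_present": True,
    "has_review_policy": False,
    "signed_off_by_maintainer": False,
}
total, flagged = score_contribution(suspicious)
print(f"score={total:.1f} flagged={flagged}")  # score=6.5 flagged=True
```

The point of the design, as with spam filters, is that any single signal (including "says Claude on it") is weak evidence on its own; only the aggregate is actionable.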

Speaker 2:
[27:22] Yeah, I think it's something where just, as you said, the whole software ecosystem is going to have to adapt to this and come up with stricter rules about how AI is used. And for a lot of projects, that's come in the form of better guidance to people that are trying to use AI on how to do it and how to engineer prompts and spelling out their responsibilities as the human in the loop.

Speaker 1:
[27:45] Yeah, responsibility was a word I was just dying to say there. You have to take responsibility for code that you are committing. Like, it's your code, even if you used an LLM or an agent or whatever to write it, you are responsible for it. You can't just dump it and not read through it and understand it.

Speaker 3:
That's why the whole idea of blocking any repository that has accepted code that says Claude on it is pretty badly flawed. Because let's say that you're completely successful in that, and everybody does that, and everybody knows, don't ever use any software where Claude committed something to the repository. What happens then? The same people that were using Claude continue to use it, but now they copy and paste the stuff that Claude generated and submit it directly themselves, and you just no longer have an easy way to see that Claude was involved.

Speaker 1:
[28:36] Or they tell their agent to do it for them.

Speaker 3:
[28:38] Yeah, you don't see the LLM's name, you see a human name as far as you know, but you don't know, do you?

Speaker 2:
[28:44] Yeah, and it's why it's also on the open source project to police the quality of the contributions. But at the same time, we already have maintainers that are burned out and we're just amplifying their workload. So a lot of that's going to be more people are going to have to contribute and we're going to have to solve this existing problem in open source of underfunded and understaffed maintainers to deal with this influx of stuff that needs even closer scrutiny, because we can't just trust that somebody's done this and they've done their due diligence and they tested it, versus they said the AI probably tested it.

Speaker 3:
[29:20] Well, the obvious solution, Allan, is we replace those poor overworked and unpaid maintainers with AI. Problem solved, podcast over.

Speaker 1:
[29:27] Yeah, great idea. Well, one thing that maintainers are perfectly within their rights to have a policy on is no massive code dumps, like split it into smaller pull requests that can actually be reviewed properly, right?

Speaker 3:
You listening, Kent Overstreet?

Speaker 2:
[29:46] A lot of projects already have that, like FreeBSD, you can post a bigger pull request, but probably the first thing you're going to get told is, if you want anybody to look at this, you're going to have to break it into smaller pieces, because nobody's got 10 hours to sit down and review this. They got an hour a night where they will maybe get through smaller pieces if you break it up. Plus, it's just not possible to not have your eyes glaze over with that much. If you can't break it down into more separate, explainable pieces, you don't understand it well enough to contribute it yet either.

Speaker 3:
[30:14] Right.

Speaker 1:
[30:14] Well, we better get out of here then. Remember, show at 2.5admins.com if you want to send any questions or feedback. You can find me at joeress.com/mastodon.

Speaker 3:
[30:23] You can find me at mercenariesysadmin.com.

Speaker 2:
[30:25] I'm at Allan Jude.

Speaker 1:
[30:26] We'll see you next week.