transcript
Speaker 1:
[00:00] The Team House, with your hosts Jack Murphy and David Parke. Hey, everybody, welcome to The Team House, episode 269. I'm Dave Parke, co-host Jack Murphy, and behind the wheels of steel, D. Tonight we welcome our guest, Jeff Man: NSA for 10 years, 28 years in the crypto and hacking community outside of the NSA. So Jeff, thank you very much for coming out from the shadows and sharing your time with us.
Speaker 2:
[00:46] Hey, happy to join you here this evening. Looking forward to having a fun conversation, a little stroll down memory lane, as it were.
Speaker 3:
[00:56] Hey, I just want to hit everyone up before we get started and let you know about our Patreon. You can find the link down in the description. If you guys sign up, you get access to all these episodes ad free. We really appreciate you guys supporting the channel. So if you can, please go take a look at it. Again, the link is down in the description.
Speaker 1:
[01:16] All right. Jeff, on to you. One of the things we like to ask our guests is, what's your origin story? How did you grow up and what led you into the crypto world, the cryptography world?
Speaker 2:
[01:32] Well, it's a great question. Ironically, on the podcast that I co-host, Paul's Security Weekly, we often start the interviews with the same kind of how-did-you-get-your-start question. For many years, if somebody asked me, how did you get your start, I'd say, well, I sort of cut my teeth, I got started at NSA. But I realized a couple of years ago that that doesn't really tell the story. The real story is, how did I get to NSA in the first place? I'll try to be succinct. I grew up in a family of pretty smart people. My dad was a physicist. He actually, in the 1950s, came to the Washington, DC area and went to work for the Naval Research Laboratory, around the time that they were experimenting with hydrogen bombs, hydrogen devices, I guess the first one was not technically a bomb. He used to tell stories about how he was on a ship in the South Pacific and he got to watch the detonation of the first hydrogen device obliterating a little atoll called Enewetak. So my dad being a physicist, and me being like many people with daddy issues, I grew up thinking, I'm not gonna be a physicist. I tried to avoid physics, and I did. I'm the youngest of four boys. We all liked to do puzzles. We're all sort of analytical problem solvers. And I really grew up doing puzzles, crossword puzzles, cryptoquips. Back when we used to have newspapers and comics pages, there'd always be like a little Caesar cipher type of cryptogram that you had to solve, usually a famous quote or something like that. Yeah, I went to college, didn't know what I wanted to do. I graduated with a business degree because it was the easiest major I could find that required the least amount of work, the least amount of term papers, and I didn't have to take physics. My mom at the time had gone back to work and she was working for a different naval institution called the Naval Surface Weapons Center at the time.
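The newspaper cryptograms he's describing are simple shift ciphers: every letter moves a fixed number of places down the alphabet. A minimal Python sketch (the shift and message here are just examples):

```python
# Caesar cipher: shift each letter by a fixed offset, wrapping around
# the 26-letter alphabet; non-letters pass through unchanged.

def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # shifts back to ATTACK AT DAWN
```

Solving one without knowing the shift is just a matter of trying all 25 offsets and reading off the one that makes sense.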
She actually got me a summer intern job before the senior year of college working, ironically, for a physicist. Only this guy was doing anti-submarine warfare research. My first week on the job, my first day on the job, he asked me, what do you know about anti-submarine warfare? Of course, I didn't know anything about it. He's like, well, I could explain it to you, but there's a book came out recently. It explains it about as good as anything does. He handed me a copy of The Hunt for Red October. I thought, this is really cool. My first week on the job, I get to sit and read a book. Summer intern job, graduated, looking for what do I want to do with a business degree. I was putting in applications to a lot of different places. My mom who worked in human resources or personnel, as they called it back in the day, she had a friend whose daughter had gotten a job at this place called the National Security Agency. Being born and raised in Maryland, I had never heard of it because it used to be very clandestine and nobody knew it existed. Nobody was supposed to know it existed. There were no signs on the highway or anything like that. But I filled out a standard government application, mailed it in, got a response from them and went to Fort Meade for a couple days of aptitude and skills testing, psych exam, polygraph, all sorts of different prodding and poking. But most of it was just taking these various skill level exams, aptitude tests. And long story short is I scored really well on the tests, and so they offered me a job. What I didn't know was they had just hired me when I first went to work for NSA, and this is back in 1984, I'm sorry, 1986. 84 was that George Orwell book. I was granted a secret clearance, but I was going through the background investigation to get a top secret clearance, so I had to wait a couple months. 
While I was waiting, I essentially went on a bunch of job interviews, and I ended up in what at the time was the defensive side of the house, which we called at the time communications security, soon to be renamed information security, later on to be renamed information assurance, now sort of dissolved, and you have US Cyber Command. But I'm getting ahead of myself. So I went to work for the Manual Crypto Systems branch, and they were looking for someone to do cryptographic analysis of manual crypto systems that they produced and that were fielded primarily by the military. So I went to work for them. There was somebody there on assignment from the operations side, a real cryptanalyst. He took me under his wing and became my mentor, and he was actually the one that advised, yeah, this is a pretty good job, you should take this. So one of my first assignments was actually for a customer that was US Special Forces. So there's a little connection there. I can tell that story in a minute. But the day I knew that I was in the right place and I had found the right place to be was, I'd mentioned growing up, my whole family liked to do puzzles. And when we would go on vacation at the beach in the summer, we'd buy a single copy of a Dell Crossword Puzzles magazine that had all sorts of different types of puzzles in it. But they always had one or two logic problems. And we all loved to do the logic problems. There was usually a little table that you could fill out to help you solve all the clues. And basically the logic problems were maybe eight or ten statements about a bunch of different things, and you had to, based on just a couple of clues, connect the dots. Maybe it was, there's five different students taking five different classes, what's their favorite subject, from five different teachers in five different classrooms. And they'd give you just very sparse clues, like Sally loves biology and it's next to the red room.
And statements like that, you put it together and try to figure out whose class, who's the teacher, who's the student, what's the subject, that type of thing. So that was something I grew up on. One day at lunch, I'm talking to my mentor and he's working on something. I asked him what he's working on. He says, oh, I'm writing a logic problem. I'm like, oh, I love logic problems. He says, yeah, I write logic problems as a side business for Dell Crosswords. So it was like, you know, the planets were in alignment. I knew I was in the right place. So my start at NSA was really in cryptology. And I was doing analysis of, you know, systems and really just designing systems. My very first assignment was to come up with a replacement, a new memory cryptosystem for Special Forces. When they were deployed, they had at the time one-time pads, paper pads with the key, the random key, written out on them that they would use to manually encrypt and decrypt messages and then send them. But if they had to, you know, exit someplace really quickly, or they're on the run and they had to drop all their paper, they still wanted to have a way to communicate securely. So they needed a way of doing a memory cryptosystem. So that was my first assignment, to come up with a new memory cryptosystem for them. In doing that, I had just been through the five months of waiting to get my clearance, taking all sorts of introduction to cryptography classes, history of cryptography classes. I'd learned about things called cipher wheels. If you've seen A Christmas Story, the Little Orphan Annie decoder ring. I thought there ought to be a way to take a Vigenère table, which is what Special Forces used, which is the alphabet, 26 offsets, in a big table, which for Special Forces actually translated into, try to get this on screen for you, I think it's 123 unique three-letter groupings that they called trigraphs. They would memorize these things, the commos.
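The Vigenère table he's describing pairs a key letter with a plaintext letter to produce a cipher letter; each such (key, plain, cipher) triple is one trigraph on the wheel. A sketch of the table arithmetic in Python (the actual Special Forces system isn't public, so this is just the textbook operation):

```python
# Vigenère tableau arithmetic: cipher = (plain + key) mod 26.
# Each (key letter, plain letter, cipher letter) triple is one
# "trigraph": line up two of them and the third falls out.

A = ord('A')

def vig_encrypt(plain: str, key: str) -> str:
    return ''.join(chr((ord(p) + ord(k) - 2 * A) % 26 + A)
                   for p, k in zip(plain, key))

def vig_decrypt(cipher: str, key: str) -> str:
    return ''.join(chr((ord(c) - ord(k)) % 26 + A)
                   for c, k in zip(cipher, key))

print(vig_encrypt("HELLO", "XMCKL"))  # EQNVZ
print(vig_decrypt("EQNVZ", "XMCKL"))  # HELLO
```

With a key as long as the message, truly random, and used only once, this same arithmetic becomes a one-time pad.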
Speaker 3:
[10:26] When you put something through a one-time pad and a trigraph, it's considered impossible to decrypt, right?
Speaker 2:
[10:33] Absolutely. There is no cryptographic solution for it. There's no brute forcing. It's completely random, and based on the fact that there's only two copies of the key in the world, one on each end, as long as it's not stolen or compromised and used only once, it's unbreakable. But anyway, I was struggling to... I wanted to use essentially the same algorithm, use these trigraphs and use this Vigenère table. And I thought there ought to be a way to do it on a wheel. So I, like, figured it out with graph paper and drew one out, and my mentor helped me with it. And we kind of came up with the design. The first one was glued to cardboard. Took it with me the next time I went to, what's that place in North Carolina called now? Fort Liberty? Yeah. Used to be called Fort, we can't say it anymore. But I turned my back to write on the board, turned around, and the thing was gone. They'd stolen it from me. And I'm like, guys, where's my wheel? And they're all like looking around. So after a couple of visits and bringing multiple handmade copies, I finally said, you know, we're NSA, we're in the business of making crypto systems and all sorts of crypto for you. Why don't we just make a bunch of wheels? There was a machine shop at NSA at the time because back in those days, they were building little black boxes, engineering little black boxes that would go in different places. So I had them make a prototype of this thing that we called the Vigenère wheel. The three-letter combinations would just line up. You get your two letters and the third letter appears in the window. They loved it. So we ended up producing 15,000 of them and distributing them to US Special Forces. This was all the different groups. This was probably in 1988, I would say. And as far as I can tell, they were using it up into the early 2000s, until digital crypto solutions and encrypted phones and stuff became popular.
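The unbreakability he's describing is easy to demonstrate with a byte-level one-time pad, where XOR plays the role the mod-26 table played on paper. A small sketch (the message is invented):

```python
import secrets

# One-time pad over bytes: XOR the message with a truly random key of
# the same length. The same operation encrypts and decrypts. With only
# two copies of the key in the world, used once, the ciphertext carries
# no usable information about the plaintext.

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "pad must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"RENDEZVOUS AT 0400"
key = secrets.token_bytes(len(message))   # one "page" of pad
cipher = otp(message, key)

assert otp(cipher, key) == message        # same page round-trips it
```

There is nothing to brute-force: for any ciphertext, every possible plaintext of the same length corresponds to some key, so trying keys just enumerates all messages.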
So that was my very first assignment: I made a wheel. And if I may, shameless pitch: at Fort Meade, which is where the National Security Agency is located in Maryland, there's something called the National Cryptologic Museum. And at the end of this month, the end of April, a copy, one of the production models of what came to be known as the Whiz Wheel, or I came to learn that that's what they called it, is going to be put on display at the National Cryptologic Museum. They're excited about it because they're not usually putting stuff on display where the people responsible for it are still alive. Right. I'm excited about it because something I did that was just a silly little thing, as far as I was concerned, actually turned out to be very instrumental in the mission of US Special Forces for over a decade. I had the opportunity to meet someone that was a former Green Beret a couple of years ago at DEF CON, a hacker conference in Vegas. Actually, a friend of mine met him and found out he was a Green Beret and asked, oh, do you remember the Whiz Wheel? He said yes. They said, would you like to meet the guy that invented it? I met the guy, and long story short, he said, I think you might qualify for membership in our alumni association because you made a significant contribution. He got me a lifetime membership in the Special Forces Association. Fantastic.
Speaker 1:
[14:08] Yeah.
Speaker 2:
[14:10] I had the opportunity, COVID came along, blew things up, but I had the opportunity to speak at their convention last year. It was in Indianapolis, which is chapter 500, like the Indy 500. I asked the guys there when I was speaking, I said, I've been walking around with the prototypes, I've had two of them for 30-some-odd years, but I've never seen a production model of the Whiz Wheel before. And I put out an appeal if anybody was willing to donate them. I was trying to get a couple, one of which was to be put in the National Cryptologic Museum. That was the goal, anyway. They came up with two. One has been donated and will be put on display. This is another one. This is a production model of the Whiz Wheel. And this one is designated, if we ever get a contact at the Special Operations Museum that's down in North Carolina at Fort Liberty, that's where we want to put the other one. This is a little piece of history.
Speaker 3:
[15:13] That's amazing.
Speaker 2:
[15:16] So, I'll pause for a minute. That's how I got my start: just solving puzzles, got into crypto, designed something, came up with a little quick fix that was really just an aid for me, and it ended up being something that was, you know, pretty critical to the mission, many missions that I don't even know about, of many Special Forces teams.
Speaker 3:
[15:37] Before we get deeper into it, since I think you're the first guest we've had on from NSA, and we've done all kinds of different federal agencies, could you explain to our audience a little bit about what the National Security Agency is, what their mandate is, their job, why they came about?
Speaker 2:
[15:58] Sure. I mean, I'm not a historian. I can give you a little bit of the history. I've probably forgotten more than I know about it at this point. NSA, I believe, was started in the late 40s. It was sort of after World War II. You know, organizations that were doing code breaking and things like that during World War II kind of got reorganized, and they came up with this idea for the National Security Agency. I want to say '48 or '49 was when it was convened.
Speaker 3:
[16:30] So it was like the National Information Agency or something first, wasn't it? Yeah.
Speaker 2:
[16:35] You probably know more, and you can Google quicker while I'm talking to get the exact story. It'll come back later on in my story, but I'll share it now. The charter, the mission of NSA, is, as I always used to describe it to people, the operations side, what we call operations, is basically to be the big ear of the country, responsible primarily for monitoring and intercepting signals. Anything that was going out over the airwaves, which back in those days was mostly radio, a little bit of, eventually, television, maybe some telephones, but primarily radio waves, the whole spectrum of sound. NSA's mission was to listen to everything and try to intercept whatever they could from other countries, adversarial countries, nation states, as we call them these days, and just keep tabs on everything. So at one level, it was a big collections agency. It would collect a lot of information, and there'd be people that would try to break codes and ciphers when those were in play. Others would translate foreign languages that they intercepted, and there'd be other people that would read it and try to extract useful information that gets put together in daily reports that get sent to the White House and the Pentagon and other places. Anybody like that is a customer for the intercepted collections and communications. At a broad level, that's what the mission has always been, with some rules that were put in place in the early 70s after Watergate and the Watergate investigations. Senate subcommittee hearings happened after Watergate, one of which was a subcommittee chaired by Senator Frank Church, and their output was called the Church Proceedings. They published several volumes of material.
But in essence, what they discovered as a result of the Watergate investigations, the Watergate break-in from the early 70s, was that the three-letter agencies, like NSA, FBI, the CIA, had a lot of power and a lot of capabilities at their hands, with not a whole lot of any kind of oversight or rules dictating how they would operate, rules of engagement, as it were. So one of the outcomes of that was what I came to learn when I went to work for NSA, is the NSA Charter, which is still to this day a classified document. But basically what it says is that NSA can only do what NSA does to other countries, foreign nationals, and specifically NSA cannot do what it does to US citizens. Now fast forward to 9-11 and the Patriot Act, the rules kind of changed a little bit, but I mean that's the charter that NSA was built on.
Speaker 3:
[19:50] But you guys are also in charge of like maintaining America's communication security as far as the US government, right?
Speaker 2:
[20:01] Well, yeah, I was just warming up to that. When I went to work for NSA, I was working on what we would have called the defensive side, information security, communications security, and it probably comprised maybe 10 or 15 or 20 percent of the personnel and the resources of NSA. So even when I was there, there were people there that had been there for a while, working the mission for a while, and everybody had a chip on their shoulder. Everybody in InfoSec, as we called it, was considered the bastard stepchild, because operations got all the headlines, operations got all the budget, operations got all the glory, and InfoSec had the mission of providing secure communications and crypto to all of the US, whether it's the military or any level of government where they needed to have secure communications. That was NSA's purview. That was NSA's responsibility, the InfoSec side. So I came into an organization that had an inferiority complex. It always did and probably always will. Of course, it doesn't exist anymore. But there was always this conflict between operations, which is what everybody knows NSA for, what they're doing, and then us doing the really important stuff that you don't get any credit for, like making sure that people can't steal any of our communications. So a lot of cryptographers, a lot of mathematicians coming up with the algorithms and the machines, the little black boxes, that would secure the communications for, you know, the military primarily, any level of government, interdepartmental communications, you know, embassies abroad and things like that.
Speaker 1:
[21:53] And you went in, you said you went in in around 86, is that correct?
Speaker 2:
[21:58] Correct.
Speaker 1:
[21:58] So the Cold War was still a very real thing at that point in time.
Speaker 2:
[22:03] Why yes, yes, it was. Which is one of the main reasons why I was hired. I was hired at a time when NSA was hiring 100 people a week. And they'd been doing that for a couple of years, because they'd gone through a lean time in the 70s where they really didn't hire that many people. The guy that was my mentor had been hired in the early 70s. And then they just had a handful of hires from like the early 70s to the early 80s. And then they really hired a bunch of people. This is where I get a chip on my shoulder. We didn't call it STEM back then. They called it Critical Skills, but they were mostly looking for mathematicians, computer scientists, and engineers. And if you had a degree in any of those fields, you would get a job offer, and you were paid on an accelerated pay scale. So you got paid extra. I think the engineers made the most, but don't quote me on that. You know, anywhere from 10, 15, 20, to 25% more than what I was making as just a peon, a regular employee. But, you know, they hired me because I scored well on the aptitude tests, the skills tests. And so I was not a critical skill. And those hundred people around me that were hired the same week I was, they were first in line for promotions, first in line for training opportunities, first in line for diversity tours, going to other organizations, because the game at the time was, if you wanted to be promoted up past a certain level, you had to have what was called a professionalization degree. And the professionalization degree would be similar to the certs that we know of in the cybersecurity field these days. And to get that professionalization, you had to have a certain amount of work experience, a certain diversity of work experience, working in different places. You had to have continuing education and, depending on what field you were choosing, various other things.
If you wanted to go into computers, you had to write a computer program at some point, and so on and so forth. So I, being just a regular employee, was not getting the opportunity to get the diversity tours. And I tried to get into an intern program and I wasn't qualified for it. Not because I wasn't a critical skill, but because I had a horrible GPA in college. I won't say what it was on air because people would be shocked. But you know, my mentor did a good job of kind of nurturing me and talking to friends of his, like on the operations side of the house, and getting me some diversity tours on my own, because he knew I was going to need it. But yeah, they hired a bunch of people. They would go off to get a graduate degree and the government would pay for it. They called it the 20/20 program. So they'd work 20 hours, go to school for 20 hours. And then they had to give back government time to offset the time that they went to school. But what they failed to figure out for many years was that the clock was running while they were in school.
Speaker 3:
[25:14] And their retirement.
Speaker 2:
[25:16] So you could literally go to grad school, get a graduate degree completely paid for by the government, and after about three months, you could quit and go out to the private sector and get paid more. And that's what a lot of people did. So they were kind of growing by attrition. And because I didn't qualify for the 20/20 program initially, I didn't get to go to that. I didn't get to do the intern programs. I just sat in this little office and designed a wheel that was used by Special Forces for 12 years. And, I'm told, saved lives. I was also there at sort of the beginning of the computer age. IBM PCs were kind of a thing. I think in my first office, I had a standalone IBM PC. It wasn't networked yet. It didn't have Windows on it. It was just DOS. In fact, I think my first one didn't even have a hard drive. But one of my early assignments, can't say it was my second assignment, but one of my early assignments in this office was, I was approached by another customer, another military branch, and they were responsible for communicating with one-time pads with people that, shall we say, had been recruited in certain places in Eastern Europe. The one-time pads that they were using in the field were really tiny, the kind you could hide in the heel of your shoe. They were printed on rice paper so that when you used it, you could destroy it by eating it. But the caseworkers, the handlers, were in SCIFs, controlled spaces, offices on the good side of the world. Their version of the one-time pad was sort of like a legal pad. But they came to us and they said, it takes us hours and hours to decrypt and encrypt these messages, because they're getting situation reports from these people. They said, there's this PC sitting on our desk. Is there any way we could use that? Me being young and naïve, I said, I don't see why not.
Of course, I didn't know it at the time, but I was working for an engineering organization whose mantra was, there's no such thing as software, there's only hardware. All they did was build little black boxes. I took up the project of coming up with a design for writing a computer program that could run on the IBM PC, taking the one-time pad key and, instead of printing it on paper, putting it on a floppy disk, which I forgot to grab, so you'll have to look at the save icon on your Word document, and that's what a floppy disk looks like. I had to go through an engineering process, a design review process called the FSRS, the Functional Security Requirement Specification. It was a specification for building secure hardware, and I was building a software program, so I had to fudge my way through it. I had to go through a review process with all the executive management of InfoSec. InfoSec was organized as a directorate, and inside the directorate were various groups, and every group had offices and divisions, and so on and so forth. But all the group chiefs, and there were like five or six of them, got together, and that was the board of directors, as it were. I had to present the idea to them, and they said, yeah, go ahead and do it. And I came back with the design, and it had to go through its own security review, which produced issues that had to be addressed. And I went through that process and eventually went back and pitched it to them and said, okay, I've met all the security requirements, met all the objections, we're ready to go, it's ready to field. And the director, the chairman of the board, I don't know what his exact title was at this point, he said, okay, we'll let you do this. And literally, he said, don't do this again. To my knowledge, it was the first software-based system that NSA ever produced. And it was simply a computer program that would automate the process of doing a manual encryption and decryption with a one-time pad.
But I actually ran into somebody about 10 years ago at a conference that remembered using it. We called it Centaur, because it was half paper and half electronic. Every system we produced had to have a cool mythological name attached to it, so we came up with Centaur, the semi-automated one-time pad. Can't show it to you because it was software.
Speaker 3:
[29:45] Just, correct me if I'm wrong, trying to paint the picture here: the end user in Good Guy Land is taking an Oregon Trail floppy disk, putting it into the computer, and then typing in the encrypted message he had received, and the computer would spit out the decrypted message. Correct. That's pretty cool.
Speaker 2:
[30:09] And conversely, if he wanted to send a message, he's typing in a message and hitting the button to encrypt it. And the trick was, one of the secrets of a one-time pad is you use one page at a time, as much of it as you need, and then you destroy it. So we had to come up with a way of securely deleting a page of key at a time off the floppy disk, and part of that was coming up with a secure deletion, or secure overwrite, routine. That was a requirement, and so I went searching and asking various offices, you know, can you show me one? Can you give me the specs for one? And it had never been done before. So it was a requirement, but we had to come up with, what would this look like? And so we had to come up with a routine for doing an overwrite of the one-time pad key that was on a floppy disk. Doing it well enough that, you know, other really smart people at similar agencies couldn't figure out how to read the data off the floppy disk. It used to be like a flimsy piece of plastic where stuff was printed on it, you know, bits and bytes in various sectors, kind of like a vinyl album, only smaller and much more compact. And, you know, things that get deleted off of memory space on floppy disks and hard drives, traditionally, at least in those days, didn't really get deleted. You would just move the needle to a different part of the record and start writing new information there, and the pointer to where your information was, which was kept in a master list on the drive or on the floppy disk, that was erased. So you didn't have the location anymore, but nothing was done to remove the data off the drive itself. Eventually, it would come around and get overwritten. So we had to figure out, how do we zone in on exactly where it is and delete the right amount of key so that it couldn't be recovered? So there was some engineering, as it were, or software design, that had to be done. And people weren't happy about it, but they let us do it.
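The page-at-a-time overwrite routine he's describing can be sketched in modern terms like this. This is an illustration of the idea, not the actual routine: the page size and pass count are invented, and on today's journaling filesystems and SSDs an in-place overwrite is no longer the reliable erase it was on a floppy's fixed sectors.

```python
import os

# Sketch of a page-at-a-time secure overwrite: seek to the used portion
# of the key file, overwrite it in place, and force it out to the medium.

PAGE_SIZE = 256  # hypothetical size of one pad "page" in bytes

def burn_page(path: str, page_index: int, passes: int = 3) -> None:
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(page_index * PAGE_SIZE)
            f.write(os.urandom(PAGE_SIZE))   # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())             # push past the OS caches
        f.seek(page_index * PAGE_SIZE)
        f.write(b"\x00" * PAGE_SIZE)         # final zero pass
        f.flush()
        os.fsync(f.fileno())
```

The key point matches the story: deleting the directory entry only erases the pointer, so the routine has to zone in on the exact sectors holding the used key and write over them directly.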
Speaker 1:
[32:18] In the late 80s or around that time, how were you keeping up with what was going on in the computer industry, because it was moving fast? I remember in 88 hearing about the first one gig hard drive and thinking, what would anybody ever do with a gig of a hard drive? That's insane.
Speaker 2:
[32:42] Hey, I had the same thought when I got my 10 megabyte hard drive on my IBM PC. Who would ever fill that up? And now I think I have more storage space on my smartphone than the supercomputer that I used to use in the early days of NSA. So yeah, I mean, I'll try to give a politically polite answer to that question. On the operations side, all you have to do is figure out how to intercept stuff. And as communications got more advanced in terms of the cryptography, you and other sister organizations perhaps come up with other ways of capturing the data, perhaps maybe before or after it's been encrypted or decrypted. And that's the land of espionage and so on and so forth. On the InfoSec side, it was actually really a struggle. And I saw it at the very beginning. And it came to a head later on in my career in the early 90s, where technology was catching up with InfoSec, which was responsible for taking three to five years to design a little black box, and we'll get it to you when it's ready, and we're responsible for providing all the secure communications. Probably the first, I'm skipping forward a little bit, but the first real test of that for the government in general, but for NSA in particular, was when a program came out called Pretty Good Privacy, which, don't quote me on what year it came out, probably late 80s, early 90s. And it was an encryption program, and it was written with public algorithms, not NSA-designed algorithms. And it was based on what we call public key cryptography, which is where you have a pair of keys, one that does the encryption and one that does the decryption. And everybody uses it, if you're online at all, every day, multiple times a day. But the idea is you have a public key that is used to encrypt the data, and that can be sent anywhere, it's not secret. And the only way you decrypt a message that's been encrypted with that key is if you hold the secret key, and you hold that close, that's the private key.
And it's a one-way relationship like that. So you have to do a key exchange: if I want to communicate with you, I give you my public key, you give me your public key, we do something to verify we're really talking to each other, and then we're off and running, we can send messages to each other. Well, so fast forward a little bit. You know, I left the Manual Crypto Systems office, I was there for about three years, and then I did finally get into an intern program. There's not much to this story, it'll be quick. I went over to the operations side of the house. I did happen to be there during Desert Shield, Desert Storm. So I got my certificate of appreciation for participating in Desert Shield, Desert Storm. I was an intern, so I was doing six-month tours in various offices. My last tour of the intern program was back on the InfoSec side in what was called Fielded Systems Evaluations. So I'm back on InfoSec, it's the early 90s. There was a time when one of our clients, and this was probably, I would guess, 93 or 94, one of our clients, one of the military branches, came to NSA and said, why are we spending multi-millions of dollars on a secure communication system with you guys? Why can't we just use PGP? That was really a slap in the face to the power structure, at least on the InfoSec side of things. And there was literally an all-hands-on-deck call put out for everybody in InfoSec: stop what you're doing, everybody work on an attack against PGP. And there were a couple guys in an office nearby that actually did come up with an attack against it. And they were paraded around as heroes. They got huge cash awards. They were taken down to the Pentagon, the White House. I mean, the red carpet was rolled out for these guys. Months later, when all the dust settled down, everybody's got a short attention span, they did a lunch and learn in our lab to just tell us peons that worked with them about the attack that they'd come up with against PGP.
And what they essentially had done was figured out a way to send a document, let's say a Word document, only it wasn't Word, it was some predecessor, and they found some unused bit space in the document where they were able to insert a virus, as it were. And if they sent this document to somebody and could trick them into opening the document, it would execute this code that would essentially steal the key rings, the secret key rings, and attach them to an email back to whoever had sent the document. That might sound familiar to you guys if you keep up with the cybersecurity schemes of today. It sounds a little bit like a phishing attack. Yeah. Only we don't click on attachments anymore. We click on links. But I remember sitting there and hearing them describe this, and then they got to the point where they're asking, does anybody have any questions? And I raised my hand. I said, wouldn't this work against our stuff too? And they kind of looked at each other, and they're like, well, yeah. I said, well, so what's the big deal? And they said, well, our mission was to come up with an attack against PGP, and that's what we did. I'm like, okay, if that's how you sleep at night. It was funny then at the time, and it's kind of a funny story now, but I mean, they did make a difference. They did come up with an attack. But as is true most often, and I've been in this business 40-some years, when you're attacking crypto, very rarely are you going after the algorithm itself. You're going after the implementation, either the implementation of the cryptography itself or what we call the key management or the key distribution. So they didn't essentially break the algorithm, they just stole the key. When has that ever happened before in the history of the world? Right.
Speaker 1:
[39:32] Jeff, can I back you up real quick? I just want to ask, because you were, you know, the Soviet Union was a real threat when you joined the NSA, and then in 1989, the wall fell, and the Soviet Union was no longer. Did the NSA at all go through any kind of identity crisis? Were there issues where, like, who's our enemy now? Or did you guys just kind of have a mission and drive forward?
Speaker 2:
[39:58] I don't know if anybody in power would admit to it, but absolutely, there were issues. Because once the evil empire fell, that was President Reagan's term for the Soviet Union, the evil empire. Once they fell, yeah, for the first time in a long time, NSA had to worry about budget requisitions. They had to go before Congress and justify what they were doing. And I'm not a conspiracy theorist, but Desert Shield, Desert Storm happened shortly after the wall fell, and terrorism became the thing that kept things alive. But that wasn't really a clear and present danger, quoting my Tom Clancy books. It wasn't something you could put your finger on. I mean, I remember watching videos about terrorists when I was waiting for my top secret clearance to come through, and classified briefings at the time about what did the terrorists do back in the 70s and 80s? They'd hijack planes, they'd blow them up. That was the thing back then. There was one plane in particular that nobody knew it at the time, but there were people on it from NSA and CIA, and there were suspicions about whether people knew.
Speaker 3:
[41:31] Oh, you're talking about Lockerbie?
Speaker 2:
[41:34] I can neither confirm nor deny, but it's been a long time, so it's probably declassified at this point. There was the one plane where they landed somewhere and they killed a passenger and shoved them out the pilot's window. And it was, I think it was a Navy enlisted person.
Speaker 1:
[41:57] It was a diver, I think, right?
Speaker 2:
[41:59] And the reason they tagged him, or pulled him out, was because he was in uniform. Because what I remember hearing at the time, the briefing I got, the video I watched about that, was there was a flight attendant that had been asked to collect the passports of all the passengers. And for whatever reason, US citizens get a blue passport, but government employees get a red passport. And so she was able, as she was collecting the passports, to somehow hide the fact that she was collecting red passports. I mean, when I was at NSA, I was issued a red, you know, it's more like a burgundy, passport. Yeah. And that's your official passport to use on international travel. And you're only allowed to use that passport. But then I was pulled aside and told, take both. And I did. And, you know, for the official stuff, getting through customs, the red one comes out; everywhere else it was the blue passport. I'm just Joe Citizen. Largely because of that experience of that plane being hijacked. So, yes, there was an identity crisis. They had to justify budgets in a way that had never been required before. And computers were becoming much more of a thing. I mean, we sort of leapfrogged over the whole machine age to the digital age. And NSA was largely unequipped for that and slow to respond. You know, probably too soon, but think of a large ship that's pointed towards the pylon of a bridge. How hard is it to steer that and turn that thing? I'm five miles away from what used to be that particular bridge. So, they were very slow. There was also a certain amount of attitude, I would say, in sort of the old guard. What was it Henry Ford said? You can have whatever color car you want, as long as it's black. I mean, they sort of had a monopoly on crypto. And so they weren't very quick to change. They did start farming things out to contractors and third parties.
The classified telephone that was popular at the time that I was there was called a Secure Telephone Unit, STU. And they were up to the STU-III, the third version, which looked like an old-fashioned office desktop phone. And there were three contractors that were allowed to build it: RCA, GE, and Motorola, I believe, were the three models. If you're old enough to remember those, you've worked for the government. So, early 90s, I'm back in this Fielded Systems Evaluation office, and that's where I started doing penetration testing, though that's not what we called it then, but trying to break into computers and network systems. We were assigned to break into military facilities throughout the world, and at some point we decided, why don't we just call it penetration testing? That's what the world's calling it, let's become hackers. So, that was the early 90s. NSA, trying to respond to the changing world, reorganized and formed what they called the Systems and Network Attack Center. The vision was that it was going to be a center of excellence and have all the really smart people, and NSA has lots of really smart people, and they were going to be experts on everything related to computers and networks. And of course, we had been doing this for a couple of years at that point, this small team of people, and because we were involved in something that's interconnected, we realized very quickly, there's a whole lot of people in the world that are focused on this problem. I don't care who you are, you have a small subset of 10, 20, 100, 200 people. There's no way to compete against the whole world for that kind of brainpower and distributed thinking, let's say. But they went about doing the reorganization, and that's when the office that I was in got pulled into it, and we were sort of formally given the task, the small group of people that I worked with, of just doing...
We called it vulnerability and threat assessment, but for lack of a better term, we said we're hackers and we're learning how to do pen tests. So that was... We were formed officially, I guess, in 93, 94, at least in terms of this new organization. We moved to a different... I'm sorry.
Speaker 1:
[46:42] I just want to... I'm curious, because you coming from cryptology, had computers been a hobby? You know, had you been learning C or C++? Like, I don't know what languages were prevalent at that time, but how were you, personally and then as an organization, how were you catching up with these teenage kids who had nothing better to do than to figure out how to, you know, break into shit?
Speaker 2:
[47:20] Well, I mean, I graduated from high school in 1980, and I remember taking a computer math class, so it was late 70s, but it was, you know, a very rudimentary type of PC. I think I was programming in BASIC, and it was kind of cool. We wrote our programs to punch tape. It was even before the era of floppy disks. So, you know, I'd have two or three or four feet of punch tape that I would have to feed into a machine to read my program. So, yeah, I was kind of interested in it. One of my older brothers, sort of the brain of the family, was into physics and engineering, and he was always buying the new toy of the month. So he built a computer, you know, built it from scratch, kind of like you'd build the old ham radios. Of course, he did that when he was a kid. But at some point, he built his own computer, very rudimentary. And then, you know, whatever was popular at the time, the Apple IIe or Macintosh or something like that. He was always getting computers. He was the first one to have the first video game, Pong. And he was the first one to get a Nintendo and an Atari. You know, I kind of grew up playing video games at the arcade. Everybody remembers that: you put a quarter in a machine and play the game and keep putting the quarters in. So I was into it because it was new and it was kind of fun and different, but I wasn't like, how does this work, digging into the innards of it. But at NSA, when I was in the intern program, one of the assignments was to work for a programming office, and I had to write a computer program. And at the time, NSA was converting from their own mainframe supercomputers, which had their own custom operating system and their own primary programming language that all their number-crunching, cryptanalytic-calculating, statistical-counting types of programs had been written in.
They were migrating over to what at the time was fairly common, Unix workstations, primarily Sun Microsystems machines running SunOS, which would later be called Solaris. So the IBM PC left and in came a Sun workstation, the old pizza boxes, SPARCstation 5s and 10s, whatever they called them. So I had to rewrite, in C, a program that had been written in a proprietary language at NSA. And of course, I got it to compile and then got it to hang the first time I ran it, because it worked, but it wasn't optimized for the number-crunching type of thing it needed to do. So, you know, I did that. It was kind of cool, but I wasn't really into it, into it. But the idea of breaking into things, that was kind of cool. The idea of going someplace where you weren't supposed to be, learning a hidden trick or a hidden feature. There weren't many exploits in those days. It was mostly features of the operating system, documented or undocumented, or just learning the tricks of how to fool the computer, trick the computer into giving you stuff. Of course, a lot of stuff was just there, and it wasn't that hard to do. Other people had figured out a lot of the ways to do stuff. The terminology in those days was script kiddie. Starting out, I was much more of a script kiddie, just doing the stuff that other people had figured out, but trying it on our classified networks, even though it was something that was discovered out in the real world. But because I had a cryptanalytic background, one of the things that I enjoyed doing was password cracking. And of course, I didn't write the programs. I was using the programs that were available at the time, but learning how to tweak them and fine-tune them. Password guessing was a thing back then. I was actually pretty decent at guessing passwords. Nobody does that anymore these days. There were a lot of our customers when we were doing these fielded systems evaluations.
We were going to military bases throughout the world, and they always had some real whippersnapper teenager, but he was also an E-4 or an E-5 by now. And because he knew computers, he was responsible for computers. So he'd come up with the idea of a random password generator. They knew password security was a thing back then. So they wanted to come up with ways of defeating the password cracking tools, or just making passwords less prone to being guessed. And they inevitably were horrible, because, you know, from a cryptanalytic, statistical, brute-forcing perspective, they almost inevitably fell. I mean, I remember one guy, I want to say he was at a base, doesn't matter where he was, but he thought he had this program that was really cool, and it was producing really random-looking passwords. And we cracked 100% of them in minutes. It was just that bad. So that's where I kind of applied the cryptanalytic stuff that I had learned to some aspect of it. And we didn't call it cyber security at the time. We actually called it internet security. But that was something I could kind of focus on as sort of a niche area. It's like, oh yeah, I'll focus on password cracking and how to come up with strong passwords or random passwords and the few types of cryptanalytic things that were associated with operating systems at the time. That was sort of my focus. The other focus I had, I guess, was that I worked with people, both while I was at NSA and then even into the private sector days years after that, who would love to just break into a system, get root (it was all root because it was all Unix back then), and say they were done. And I was more like, well, we've just broken onto a computer or a server, why don't we look at what's on it and see what's there, what kind of information is there. They were all about the hunt, let's conquer another box. Let's root another box.
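The failure mode he describes can be illustrated with a hypothetical generator (not the one from the story): if the "random" passwords secretly follow a pattern, the effective keyspace is small enough to enumerate outright, which is exactly how 100% of them fall.

```python
import itertools
import random

CONSONANTS, VOWELS = "bcdfg", "aeiou"   # tiny alphabet, for illustration only

def weak_password():
    """'Random-looking' password that secretly follows a CVCVCV pattern."""
    return "".join(random.choice(CONSONANTS) + random.choice(VOWELS)
                   for _ in range(3))

# An attacker who spots the pattern can enumerate the entire keyspace.
keyspace = {"".join(chars)
            for chars in itertools.product(*[CONSONANTS, VOWELS] * 3)}

assert weak_password() in keyspace   # every generated password falls
assert len(keyspace) == 5 ** 6       # only 15,625 candidates to try
```

A real generator drawing uniformly from all printable characters would face a keyspace of trillions instead; the pattern is what kills it.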
I was more about the analytical: what kind of information is here, what can we learn about our target or our customers, what is sensitive here that might give us more of a clue of where to look next, or whether we'd found the crown jewels, just looking at stuff. So I tended to do more of an analytical deep dive. Let's see what we've got, rather than just keep knocking over boxes after boxes after boxes and saying, we're done.
Speaker 1:
[54:40] Right.
Speaker 2:
[54:41] Moving on.
Speaker 1:
[54:42] So how did that develop for you? Because while all these other people are trying to get root, now you want to get into the system, you want to go through the various file systems and everything like that, and move throughout the system. Like, what does that look like for you compared to what everybody else was focused on?
Speaker 2:
[55:05] So back in those days, the methodology, ironically, is based mostly off a film that came out in the early 90s called Sneakers. Robert Redford and Ben Kingsley were the stars. And that was sort of the first movie that showed what people would more commonly refer to as a red team exercise these days, because it was a combination of computer hacking and maybe physical penetration testing. The methodology was simple back in those days: you have a target, you have a company, an organization. Everybody had their own routable IP addresses. There was no address masking in those days, no private addressing. Everything was Internet reachable because everything was connected. So you'd find out what the target was, whether it was a Class C address or a series of Class C addresses, which is 256 potential addresses each. And then you'd do a probe of each IP address, do some sort of rudimentary scan to see what's alive, what's answering. And once you found live targets, you'd do a port scan, which is basically, okay, what's this machine talking on? In TCP/IP, there's 65,535 potential channels that you can talk on. And there are some commonly reserved ports that are associated with specific protocols, specific services. You start there, and most of the communication protocols back then were clear text. There wasn't a lot of encryption going on. So you would find what they were talking on, and that's usually when you could connect to a system, maybe steal a password, maybe guess a password, maybe force one of the programs that was listening to hiccup and give you access. There are many different methods of doing it. But the goal then was to get access. And it didn't have to be root. It could be any user account. And then once you had that foothold, that toe in the door, then you'd try to elevate your privileges to root.
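The probe-then-port-scan step he outlines (sweep the address range, then see which of the 65,535 ports answer) can be sketched as a minimal TCP connect scanner. This is a hedged illustration, not his actual tooling, and should only be pointed at hosts you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # A completed TCP handshake means something is listening there.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or no host answering
    return open_ports

# e.g. scan_ports("127.0.0.1", [22, 25, 80, 443]) lists whichever of
# those well-known service ports have a listener on the local machine.
```

Real scanners (then and now) add tricks like SYN-only probes and banner grabbing, but the core loop is this: try each channel, note what answers.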
And once you're on the system, there was any number of ways of doing that, including reading the password file, which was world readable. Anybody could look at what the password hashes were. I'm not using the word correctly: the "encrypted" passwords; they're hashed passwords. You could copy that and run it through your password cracking program, which conveniently was called Crack. So, elevating privilege. I mean, that was sort of the modus operandi. The first thing to do is get to root, because once you're at root, you have access to everything: any file system, any folder, anything that was locked down and protected, root had access to it, because root was what we called the god account. It could go anywhere. It could do anything. Which is why we used to say to our clients, if we've got root, we're done. But they would very rarely understand that, comprehend that, and take it to heart. Which is why it became beneficial to say, okay, you're not getting it that we have root. But would you understand it if I said we're looking at your financials for the previous quarter and we can see all of it, or we can look at the payroll and tell everybody what they're being paid, who got what bonus, and of the people sitting next to each other, one person's getting paid 15% more, and he's a guy and she's a woman, and we can blow things up. Or, you know, research data, or we know where the money is. There's any number of things. That tends to be something that lands: I have no idea what you're talking about with getting root, but you can do this. Right.
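The attack he's describing, copying the world-readable hashes and feeding them to a cracker like Crack, is at heart a dictionary attack. A minimal sketch, using SHA-256 purely for illustration (the Unix password hashes of that era were DES-crypt based, and Crack itself did far more, like word mangling and rules):

```python
import hashlib

def sha256_hex(word):
    """Stand-in for the one-way hash applied to passwords."""
    return hashlib.sha256(word.encode()).hexdigest()

def dictionary_crack(target_hash, wordlist):
    """Hash each candidate word and compare it against the stolen hash."""
    for word in wordlist:
        if sha256_hex(word) == target_hash:
            return word
    return None

stolen = sha256_hex("letmein")          # a hash lifted from a password file
guesses = ["password", "123456", "letmein", "qwerty"]
assert dictionary_crack(stolen, guesses) == "letmein"
```

The point of salting and slow hash functions in modern systems is precisely to make this loop expensive; the hashes of that era had neither defense in any meaningful strength.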
I mean, and I'm blurring the lines a little between my NSA days and my private sector days, when we first started out doing this at NSA, and we started calling it pen testing, and we started being asked, not just by our military customers, but by offices within NSA and other classified networks within the community, we started having to come up with processes and kind of formalize a methodology, because we had to get permission to do it. You know, I mentioned early on in the interview the Church Committee proceedings and the NSA charter. That became an issue, at least early on, because even though we were white hat hackers, the good guys trying to break in, because we were NSA, we technically weren't allowed to break into computers and networks that were US owned and operated. But as long as it was in the classified world, it wasn't really that much of an issue. But we did have to start talking to our General Counsel. And for whatever reason, I volunteered to do that. You know, I was a business major. So finally I was like, oh, we need organization, we need structure. I can do that. My friends that I worked with, they were much more into the gears and the weeds of the technology than into business processes. Business processes, I got that, I can do that. So I started talking to the lawyers. I tell a story that, well, to level set: everything that we did in terms of our techniques for breaking into computers and networks, when we were working within the classified realm, everything we did by rule had to be classified at the level of our target. So naturally, if we were working on top secret systems, everything that we did was classified top secret. In order to get authorization to do top secret stuff against top secret targets, you had to go through bureaucracy and red tape and get all sorts of permissions, which took a god-awful amount of time.
I mean, we literally would have to wait weeks to get permission to try to break into something that was even within NSA, like another organization, another office within NSA. Of course, what we didn't tell the powers that be: we'd already broken in, we already knew how to do it. Then we'd do the paperwork of, this is the way we're going to try it, this is our attack methodology, and then we'd have to go off and get permissions, which was on a typed-up piece of paper that had to be signed or initialed by every level of management from our branch, on up to the group level, over to the group that was the target, and down their management chain. And this is paper passing from desk to desk, secretary to secretary. It might sit on a desk for hours or days. So it would take weeks. I tell this story in a talk I've given at a couple of different conferences, but usually when I'm telling the story about what was our tradecraft, what did we do, I have to qualify and say, technically I can't tell you what we did because it was top secret. And then at some point I say, okay, I'll tell you one. So I have this big disclaimer banner, top secret. And I say, okay, one of our primary cyber weapons that we used against top secret systems was something called the ping command. Let that sink in. Or if you don't know what the ping command is, it's a system-level command that comes with every Unix operating system. It's named after submarine sonar: it sends out a signal and waits for a response. Are you there? Yes, I'm here. It will ping every single address in whatever your target space is. Very rudimentary, very common. Part of the operating system; it's a feature. But because the lawyers looked at it and said, well, you're eliciting a response from the target, therefore this has to be considered an active attack, therefore it qualifies as a top-secret cyber weapon.
Speaker 1:
[63:26] Wow.
Speaker 2:
[63:27] That's the logic that we were dealing with. And that's where I kind of went, okay, we've got to fix this. So I started talking to the lawyers and started teaching them about our methodologies. And their idea was, why don't you just show us what you do and we'll pre-approve it, so that when you get a job request to do an attack, you can just tell us, well, we're going to do a little of this and a little of that and a little of this over here. It'll be like an a la carte menu, and we already know what they are and what they do, and we'll just pre-approve it and it'll be pretty quick. I'm like, yeah, the problem is, you don't know what you're doing until you're in the middle of it.
Speaker 1:
[64:02] Right.
Speaker 2:
[64:04] It starts with the probing, we called it recon. What's out there? What are they talking on? How are they communicating? What are they listening on? What are the ports and channels that are open? So I went through a process where I would meet weekly with our lawyers and just teach them the fundamentals of penetration testing and hacking and how computer networks work. I say all this because one time I was showing the lawyer: even though he was on an isolated sub-network that he thought was very super secret, because he's dealing in all sorts of legal proceedings and investigations, and he had his folders and files on his computer that he thought were completely protected and top secret, I was like, well, let's look at that. So we were sitting in our office, which was in a physically separate building, probably 10 miles apart. I said, let's go over to your network. See, here we are. Here's your file system. We're on your system now. We had him log in, and I said, let's look at your directory structure. And I'm looking through it, and in Unix file permissions, there's this concept of the owner, a group membership, and then the world. For each category, there's the option of read, write, and execute. Let's just go with read for now. I was looking at his folders that were supposedly top secret, his eyes only. I'm like, that folder is not only owner-readable, and not only lawyer-group readable, General Counsel's Office readable, it's set so anybody can read it. Look, I've just clicked on the folder. Here are all these files. Look, I can click on this document here and open it up. He's like, oh my God, don't do that. That's all secret stuff. Oh my God. So he got this really great education on how to set file permissions so he could actually lock down his folders.
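The lesson the lawyer got maps directly onto the standard Unix permission bits (read/write/execute for owner, group, and world). A small sketch of checking and then locking down a world-readable file, assuming a Unix-like system (on Windows, `os.chmod` only toggles a read-only flag):

```python
import os
import stat
import tempfile

# Create a file that, like the lawyer's folders, is world-readable.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)                  # rw-r--r--: owner, group, AND world read
assert os.stat(path).st_mode & stat.S_IROTH   # "anybody can read it"

# Lock it down so only the owner can read or write it.
os.chmod(path, 0o600)                  # rw-------
mode = os.stat(path).st_mode
assert not (mode & (stat.S_IRGRP | stat.S_IROTH))

os.remove(path)
```

The shell equivalent he would have taught is `chmod 600` (or `chmod go-rwx`) on the offending folders and files.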
Speaker 1:
[66:03] You're not doing anything superbly technical right now. You're just accessing his network and he has open permissions. You're not even technically really hacking, you're just showing him how much access a knowledgeable person would have.
Speaker 2:
[66:21] Right. Yeah. That's a good way to summarize it. I mean, the hackers that are out there these days, the security researchers, they're trying to come up with creative ways of breaking things, using a methodology that's similar to what was done back in those days. But in the early days, it was much more just taking advantage of what I would call undocumented features. What can the system do? Taking advantage of knowing more about how it works than the users, because in the early days, most users didn't really know how it worked. They could barely get it to work, and they were happy if they could get it to work, and there wasn't anybody telling them to do anything else.
Speaker 3:
[67:05] I have a question. As you describe all of this, it actually reminds me a bit of Richard Marcinko's Red Cell, which was testing physical security at military bases. You guys were, of course, doing that in the electronic space. I was wondering, did you guys get any sort of pushback or political fallout from what you were doing? People who were shocked or embarrassed and maybe even angry that you were able to penetrate their systems?
Speaker 2:
[67:35] Interesting segue question. Initially, no, when it was mostly military targets that had asked us to do it, and then internal targets. Actually, I take that back. We did have one internal target one time that was supposedly isolated with what we would these days call internal segmentation. Supposedly there was a firewall or some router with some access control lists in place. And we were doing initial probing. And I think we had a target of either an IP address or maybe an IP range. But us being us, we just kept going. What else can we see? Where else can we go? And this particular target, which was an internal office, did have some sort of monitoring in place. And they were detecting our activity. And we technically went beyond the bounds. But we didn't break into anything. It was like, well, the door was open. Everything was answering. We just kept going. There was nothing blocking us. We didn't subvert anything. This is how far we could go. But there was a point where we sort of got called to the carpet. And I guess I'd been doing a lot of the work, so I got called into a meeting with the customer. And poor guy. I still feel sorry for this guy. The guy that they had assigned to be the investigator was apparently from some branch of the military police. And he came in with a stack of notebooks with printouts of all the activity that he had seen us doing, me doing, all printed out. Because they thought they had caught a bad guy. They were ready to throw the book at us. And we're like, well, no, we had this request to do this thing. And we just kind of didn't know where the boundary was. And we just kept going. And they're like, oh, well, thanks for letting us know. We didn't realize it was that porous. And the guy never got a chance to open it. I mean, that stack must have been a foot high.
Speaker 3:
[69:47] This might be a little sensitive, but I mean, as far as like the attack surfaces that you guys used, I mean, did you have to be inside the NSA to get to even launch this attack? Or were you guys replicating an outside attack, you know, perhaps a foreign adversary?
Speaker 2:
[70:03] Well, you know, our targets, at least in that case, were internal to internal. And technically, whatever we were doing was classified at the level of the target. So technically, what we were doing was top secret. But it's probably a safe bet to think that we were using a lot of the techniques that were publicly available, because guess where we were learning how to do it? Publicly accessible stuff. So yeah, that's how I'm going to answer that question.
Speaker 1:
[70:35] What was your relationship like? Because I remember in the late 80s, going to my local game shop to buy D&D stuff, and there was always a copy of 2600 there.
Speaker 3:
[70:48] It's a nice magazine.
Speaker 1:
[70:49] And for people who don't know, 2600 was like the OG hacker magazine, a little booklet, pamphlet type of thing. And then DEFCON started in the early 90s. So there was this vibrant hacker community out there that was moving along with the times, from Captain Crunch and phreaking and all that. How was your relationship with them, these people who were sort of breaking the law and on the cutting edge, but also pushing it?
Speaker 3:
[71:30] Right.
Speaker 2:
[71:32] I mean, at the time, we didn't interact with many of the people in that part of the community. I've certainly, over the last 10 years or so, had the privilege of meeting many of those folks and comparing notes and so on and so forth. But we were certainly learning from them. Back in those days, it was bulletin boards and mailing lists, and our best resource was the internet and learning all the places where people were posting stuff about hacking and breaking into things. So we were certainly learning from them. And I would even say that we felt like we were behind them. We considered ourselves to be students learning all this stuff; they were doing it, and we were just trying to pick up on it and learn from them. So there was, I guess, from our perspective, a certain amount of respect. But there's a handful of people that went south of the law and got caught and prosecuted. I have different opinions on some of those people. There was certainly mythology associated with it. There's sort of the elite, the "leet" hackers. The uber hackers is what we called them back then. I hoped to someday meet some of them. But we were kind of learning and doing stuff and figuring out stuff. We certainly had access to a lot of resources that a lot of people don't have access to. I mean, we had access to Unix source code. And this is before the days of Linux. And the Unix source code is something that the agency, NSA, paid God knows how much money for. So we were able to look at all the internals, all the function calls, all the libraries. We had a fair amount of opportunity to tear things apart. We had a fair amount of resources that maybe not everybody has. But we still considered ourselves to be students, still learning. It's funny, because we'll get to why I left NSA in a little bit, hopefully.
But I was out in the private sector for a few years doing the penetration testing and basically trying to convince companies, back in those days, if you're going to play on the Internet, you really need to have a firewall. You really need to have some sort of secure architecture. You need to have some sort of clue or plan as to what you're doing. So you need to put a security program in place and figure out what it is you want to protect and need to protect. And at some point, I got really frustrated with being hired by clients every six months to break in: we'd break in the same way time after time, and we'd tell them, this is really easy to fix, and they didn't seem to care to fix it. And at some point I'm like, okay, I'm done pen testing, because that doesn't seem to be getting the message across, and I ventured into, I need to just be able to talk to companies and explain it to them and explain why they care and explain why it matters. And about the time I made that decision is about the time that this thing called PCI came along, the payment card industry. And I got sucked into that, but it was nice at the time because there were a lot of companies that had to do PCI, and it's a private sector regulatory security standard that's of, by, and for the credit card industry. So it's not a federally mandated thing. It's voluntary. But if you don't do it, you don't get to take credit cards if you're a retailer or any kind of business that wants to make money. So for me, it was beautiful because it gave me a captive audience. And I did that for a lot of years. And one of the people that I worked with at NSA in our little hacking group, our pen test group, went out into the private sector, became an entrepreneur, started a company. And we finally came to terms, and he found a way for me to come work for him. And when I came to work for him, which, gosh, has been 10 years ago at this point, he said, oh, I want you to be an evangelist.
I want you to start going to the conferences and start telling stories and talk about the stuff that we did. And, I mean, there wasn't much of a hacking community in terms of conferences and training and such back when I walked away, in the early 2000s, 2004-ish. But compare that to 2013, 2014, 2015, where there's lots of hacker conferences all throughout the country, there's Security BSides conferences, so on and so forth. So, you know, I was kind of nervous, because I'd kind of been away. I walked away from pen testing. I was just talking to people for the better part of 10 years and explaining a particular security standard, which to this day is still a decent standard: here's all the fundamental things that you should think about and do. But as I went back to these conferences and started meeting people, over time, one of the thoughts I had was, oh, I'm going to meet all these smart hackers. And they've had 10 years to keep working and growing. And I've been going to these conferences now for 10 years. And I'm still waiting to meet those uber people that, in my perception, were so advanced. Not to say that I'm advanced, but I think we were all in it together and we were all at a similar level, which is always learning. I mean, nobody claims to have a complete understanding of all of this. There's always more to learn, and there's always more to discover. And there's always layers and layers and layers. But I've had the privilege of meeting a lot of the people that I considered to be the pioneers and my heroes over the last couple of years. I've met a lot of people that were members of some of the famous hacking groups and hacking collectives from back in the day. And I've met a lot of people, and I apologize, I hope this does not come across as egotistical.
But as I meet all these people that, you know, grew up on a farm in the Midwest, got into phone phreaking to get free long distance and then later free cable, and they just kept going and they figured out some things. Nobody's had the experiences that I've had. Right.
Speaker 3:
[78:31] Yeah.
Speaker 2:
[78:32] Which, you know, and for me, it was just, you know, the right time and the right place type of thing. But I've never met anybody yet to this day that I'm like, I'm completely in awe of.
Speaker 3:
[78:44] An Uber hacker.
Speaker 2:
[78:45] One or two exceptions. The uber hackers. Most of them are almost as excited to meet me as I am to meet them. I remember before COVID, I think the last DEF CON, so it would have been 2019, I was sitting around with some folks, and one of the guys I was sitting with was a guy whose name is Weld Pond. He's a member of the L0pht, which became famous back in the 90s for producing one of the first, if not the first, password cracking routines that would work on Windows passwords. It was called L0phtCrack, and they were a hacker collective, a bunch of smart guys out of Boston, Berkeley, Harvard, MIT type people. And I'm sitting there with one of them. And then one of the other guys I was sitting with, I was introduced to, he's one of the original members of Cult of the Dead Cow. And they're famous for other reasons. And I'm like, wait a minute. He's the L0pht, he's Cult of the Dead Cow. And now is probably when I should mention the nickname for our hacking group at NSA, which came to be known as the Pit. And so I'm a member of the Pit. I'm one of the founders, architects of the first penetration testing team at NSA. And we called it the Pit. So I'm like, it's the L0pht, it's the Pit, it's Cult of the Dead Cow. I'm like, guys, let's get our picture taken together. So I had somebody take our picture. I was like, you guys don't know this, but this is really historical. Because, you know, dark side, dark side, and the white hat guy on the side of the good. But you know, smart guys; nobody's uber that I've ever met. Most of the people, especially from the early days, are all pretty humble. You always hear about all the real elitist, arrogant jerks. And there are some out there, but most of the people that are really serious about this craft, as it were, are pretty humble and pretty eager to share, and love to swap stories and share stories. And I've certainly had a lot of great opportunities to do that.
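For context, the LM hash scheme that L0phtCrack attacked is publicly documented to uppercase the password, pad it to 14 bytes, and split it into two independent 7-byte halves. A rough back-of-the-envelope sketch of why that collapses the search space; the character-set sizes are approximations, not figures from the interview:

```python
# Rough keyspace arithmetic for the old Windows LM hash scheme.
# Assumptions: ~95 printable ASCII characters for a "true" password,
# ~69 usable characters once passwords are forced to uppercase.

full_space = 95 ** 14        # a genuine 14-character password search space
lm_half = 69 ** 7            # one 7-character half, searched on its own
lm_space = 2 * lm_half       # both halves can be cracked independently

print(f"naive 14-char space: {full_space:.3e}")
print(f"effective LM space:  {lm_space:.3e}")
print(f"reduction factor:    {full_space / lm_space:.1e}")
```

The two-independent-halves design is the main reason mid-90s commodity hardware could brute-force LM-hashed passwords in hours rather than centuries.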
One of my idols, one of our motivations back when we were forming the Pit: when we were reorganized into this thing called the Systems and Network Attack Center, the SNAC, the Center of Excellence for Computer and Network Security, back in 1994, we got moved to a new building, and we got moved to an office, and we nicknamed our office the Pit. And one of our motivators was a book called The Cuckoo's Egg, written by a gentleman named Cliff Stoll. Cliff Stoll is like a Berkeley astronomer, physicist, smart guy. And he had noticed, by a matter of circumstances, that somebody was breaking into the university mainframe and stealing a lot of government secrets, because back in those days, the only things that were connected on the internet were mainframes from either the government or research universities. And he set out to track down and find the people that were breaking in. Fascinating story; he sort of invented forensics. And he documented his experiences in a book that's called The Cuckoo's Egg. Must read if anybody's interested at all in this discipline. A couple years ago, again, pre-COVID, in fact, I'm going back to the same conference where I met Cliff Stoll at the end of this week, but I was at a security conference up in Canada. He was the keynote. So I'm like, fanboy, I get to meet Cliff Stoll. And he's a goofy, quirky, weird kind of guy. He did a keynote presentation with a viewgraph projector. That's how quirky this guy is. 2019. You probably don't even know what a viewgraph is. Overhead projector. His talk was a...
Speaker 1:
[83:00] Transparency. Yeah, the transparency.
Speaker 2:
[83:02] A transparency that he laid down on a box that lit up through a lens that would project.
Speaker 1:
[83:08] Yeah.
Speaker 2:
[83:08] I mean, old school. Totally geeky and quirky and cool. And I had to go up and introduce myself and meet him, get my picture taken with him. And I told him I was NSA. And he's like, oh, yeah. He visited NSA as part of his tale of trying to catch these bad guys, who turned out to be West German hackers. And to my chagrin, the only time I've really been nervous to give a talk: he did the keynote, and I think I was the second or third talk after him. He's sitting in the front row. You know, one of my heroes is going to sit and listen to me give a talk. But that's how cool he was. I've met the guy that wrote PGP, Phil Zimmermann, a couple years ago. I've pretty much met all the pioneers at some point. And what's funny is a lot of those people got into it out of necessity. They didn't start out as computer scientists, and they didn't start out as programmers or administrators. They just had a job, and computers became a thing, and so they wanted to learn about it and make it work to get something done. A lot of them went back to their day job. You know, Cliff Stoll is still an astronomer, whatever he does, and a lot of these other guys, a lot of university professors and university researchers, went back to their first love. There's very few of the early renowned people that actually saw the dollar signs, went with it, and became millionaires.
Speaker 3:
[84:43] Jeff, to backtrack a little bit, do you want to talk about, I mean, you mentioned it briefly, why you ended up leaving the NSA after.
Speaker 1:
[84:51] Even before that though, you do have, when we met at a conference, you showed me orders, or military-wise, I'd call them orders, but a document authorizing you to do the, was it the very first pen test of an outside organization? All right.
Speaker 2:
[85:14] It's the same answer to both questions, the same story. Okay. And so I'll try to, there's a lot, I have a lot of stories, I apologize. Hopefully people are entertained.
Speaker 3:
[85:24] This is a podcast. People love stories, Jeff.
Speaker 2:
[85:27] All right. So I'll keep going. And they can play me at 1.5, which makes it go even quicker. So, you know, I'm in the Pit, we're doing all these pen tests of military bases throughout the world, and NSA facilities and other classified environments. And for whatever reason, and all I can say is it's because I was the business major, I became the biz dev person and was trying to formalize what we did. I was the only one that was really interested in talking to managers and suits, people other than just talking the tech and doing the stuff, talking to the lawyers. So in doing all that, we were putting together a methodology and we were writing it down, so it could be a repeatable process. It was something that had a beginning and an end, and we took into account all the things we needed to think about before, during, and after doing the engagement. And somewhere along the line, I started working with some people from another organization called DISA, the Defense Information Systems Agency. And they got me connected to some people at the Department of Justice. The internet was new. Everybody was plugging into the internet, and everybody was like, oh, all the potential of the internet. But then they were also saying, oh, but maybe we should think about security. So I went down to D.C. This is 1996. The first time I met them was probably April or May. Went down to the Department of Justice building, went into some big, beautiful conference room, mahogany walls, big, huge table, everything's wood. Meeting with these people, and basically they wanted us to do a pen test of their internet presence. I'm like, yeah, sure, no problem, we can do that. So I go back and talk to the lawyers, and the lawyers are like, whoa, hello, time out. It's an unclassified network. That's kind of new and different.
And NSA is responsible for the security of classified systems, but the organization that was responsible for the security of unclassified systems at the time was NIST, the National Institute of Standards and Technology. And at the time, that was kind of a tongue-in-cheek running gag, because NIST didn't have a whole lot of technical capability similar to the kind of stuff that NSA did. So I'm talking to the lawyers. I'm like, well, can we make this happen? And the lawyer is like, yeah, we can make it happen, but there's hoops that you've got to jump through. So we proceeded to go through several weeks and months of hoop jumping to make this happen. And one of the first things he told me was, well, when you have this type of relationship, it's got to be sort of a handshake agreement between cabinet-level positions. And I'm like, well, what does that mean? He said, what it means is the attorney general, which is what the DOJ rolls up under, basically has to ask the Secretary of Defense for a favor and say, hey, can you have your guys come over and take a look at our systems? So, you asked me to look it up. I've got a copy of the original email, not email, I'm sorry, letter that came from the office of the attorney general saying, hey, your guys have been talking to our guys. I'm paraphrasing it. And basically we want you to, well, I can read it to you. Therefore, I am formally requesting that DISA and NSA work with us to provide a vulnerability assessment on the security posture of DOJ sensitive systems and network connectivity, to include the Systems Network Architecture, SNA, and Virtual Telecommunications Access Method, VTAM. It's government, everything's got to have an acronym. Also the secure network architecture. Did I say that already?
I am requesting that the assessment begin with the testing and evaluation of the security configurations in the financial management information system, which is used by several components within the DOJ. It goes on and on, a little over a page, signed by the Attorney General at the time, Janet Reno. You got that? Yeah. Okay. And it was actually addressed to the person that was designated by the Secretary of Defense at the time, the Assistant Secretary of Defense responsible for C3I, the Honorable Emmett Paige Jr. Wow. Okay. So that was the first step. And then what had to happen was, gosh, I hope I get this in the right order. This is a response from NSA. Of course, letters by the government are all written by peons like me, and they just eventually get sent up and signed by the people. You've seen the movies where they throw papers in front of the president, and he just signs them one after another. So this is a draft letter from Emmett Paige back to Janet Reno saying, basically, we're on it. And there's another letter that I have, this one from somebody at DISA to the Department of Justice, saying basically, we're on it. And probably the most interesting one, and it has an official processing form because it's got to have lots and lots of signatures to approve it, is the letter that was drafted for the signature of the director of NSA. And if you see that there, right there on the bottom line, I am the point of contact for this project, which says, yeah, we'd be happy to, members of the Systems and Network Attack Center will go down and do this. Now, on the cover sheet, it actually talks about, I think you can see this here. It had a code name.
Speaker 3:
[91:57] Project Eagle.
Speaker 2:
[91:57] The effort is Project Eagle. So, this letter, which is a copy of it, but it's signed and it's dated, we'll see the date, 21 August.
Speaker 3:
[92:11] 1996.
Speaker 1:
[92:12] Yeah.
Speaker 2:
[92:12] 1996. So, this is what happened. Of course, the letter is signed. This is all going back around, getting all the signatures. It had not yet been delivered. I think the 21st of August 1996 was like a Wednesday or Thursday. The weekend before, and it's before the letter had been delivered, the DOJ website was defaced. It was the first hack of a DOD or government website. Rather famous; the hackers basically replaced the entire website. They replaced Janet Reno's picture with a picture of Adolf Hitler. They had all sorts of more colorful things on it. This happened on a weekend, the weekend before this letter was going to be delivered and we were going to be golden. So I get a call Monday morning from my contact at the DOJ saying, we had a problem over the weekend. We were hacked. I don't know if you heard about it, but help. And so I'm like, well, let me see what I can do. I hung up the phone and I called the lawyers up, the general counsel's office, and I explained to them what happened. And I said, we're this close to being legal to go down there and do the work. What do I have to do in order to get a team of people down there the next day? I mean, I want to help them out. They're desperate. They need help. What can we do for them? And they gave me three criteria. They said, well, don't go on your own accord. Make sure you're sent by management. Get the request in writing from the DOJ. And don't go alone. I mean, that was it. I'm like, okay. I assembled a team. I called back the DOJ and said, send me something that requests this. I got it hours later. And then we went to our management and said, hey, this is what's happening. Will you let us go? And they said yes. So Tuesday morning, we go down and we're looking at everything. Of course, in those days, everybody had their own servers serving up their web pages, and maybe they were part of their network, maybe they were outside of their network, maybe they weren't.
But when they were, when they discovered the breach, the DOJ admins, they took the systems down, took them offline and wiped them and rebuilt them. So whatever evidence might have been there was largely gone to begin with.
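The wipe-and-rebuild response described here is exactly what later forensic doctrine forbids: it destroys the evidence. A minimal sketch of the preservation step responders now take first, hashing the media before any analysis, so findings can later be shown to come from untampered evidence (the function name and approach are illustrative, not from the transcript):

```python
# Minimal evidence-preservation sketch: record a cryptographic digest of
# a disk image (here, any file) before analysis begins. Purely illustrative.
import hashlib

def evidence_hash(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of the file at `path`, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Recording the digest up front, ideally on write-once media, is what lets an analyst demonstrate later that nothing examined was altered after seizure.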
Speaker 1:
[94:41] Yeah.
Speaker 2:
[94:42] I mean, there were no forensic guides. There were no rules back then. This is 1996. Nothing had been written yet about how to do this, other than Cliff Stoll and The Cuckoo's Egg. But what he was talking about was mostly on phone lines and phone switches and PBXs, private branch exchanges, all phone related. So we're there Tuesday, Wednesday. There were other systems that hadn't been affected, but we were looking for evidence of tampering and any footprints, as it were, electronic footprints, to see if we could pull anything together. We're there Tuesday. We're there Wednesday. We go down Thursday. Mid-morning Thursday, I got a call from somebody back in the Pit and they said, Jeff, the shit's hit the fan. You guys have got to drop what you're doing and come back now. We dropped what we were doing. We went back and got herded into the Deputy Director's Conference Room. The lawyer that I had been working with for the previous year proceeded to read us the riot act, yelling at me in particular for doing something that was potentially illegal that could get the director not only fired but prosecuted. What the hell were you thinking? I'm like, you knew about it. And technically, when I called the lawyers on that Monday morning, both the general counsel, this guy, and his deputy answered the phone, and I said, I've got an issue. Who wants to take it? And the general counsel deferred to his deputy. So I did this with the deputy general counsel, not the main guy. But it's the main guy that was yelling at me. So I got put on double secret probation, since I was the ringleader. And the first time I ever heard of the Church Committee proceedings is when the lawyer was yelling at me, saying, don't you know you violated the NSA charter? Don't you know you could get the director fired, if not prosecuted? I was put on probation. I was investigated internally. I found out many years later, because I bumped into this lawyer after 20-some-odd years at DEF CON, ironically.
Turns out they were not only trying to fire me, they were trying to prosecute me as well.
Speaker 1:
[96:52] That attorney, or the administration, the director? The powers that be?
Speaker 2:
[96:59] This was above him and it was above me. In fact, I learned that, you know, I'd been pissed off at this guy for 20-some-odd years for yelling at me, and now we're buds. And it turns out he was getting a lot of flak, too, because he had ultimately sent us, or his office had sent us.
Speaker 1:
[97:14] Yeah.
Speaker 2:
[97:16] His deputy resigned. But, you know, after going on double secret probation and having to talk to internal security and tell the story, and pretty much everybody I talked to was like, that's it? You were just trying to help? It kind of soured me on continuing to work there. We eventually were exonerated, and we got pulled back into the deputy director's office, and a bunch of the senior-level management were talking to us and counseling us, and they basically said, you know, we like what you guys do. We want you to do it. But if you're going to do it here, you have to follow our rules. And so we said, fine. I was gone from NSA by the end of September of 1996. So like six weeks after this all went down, I was gone from NSA, because it was the end of the fiscal year and they were doing a buyout to get people to leave. This was one of the fallouts of the fall of the Soviet Union and fighting for budget. They were paying people to leave. And we'd been kind of toying around; a bunch of us were looking for, you know, higher-paid jobs in the private sector and all that kind of stuff. So I took the first offer that came along, and I was offered money to leave, and I got the hell out of Dodge. End of September 1996, tried not to let the door hit me on the way out type of thing. Which, looking back on it almost 30 years later, if it hadn't gone south, I mean, there was something cool and fun and patriotic about doing it there. We were thinking we were doing a good thing. There was the allure of more money out in the private sector, but I'll tell you what, when I went out into the private sector, more than the increase in pay, it was the idea that I could be hired by a company to do a pen test one week, do the job for the next couple weeks, take a couple weeks to write the report, maybe a month later come in and present our findings, give them recommendations, and we were done.
In and out, you know, maybe a month, maybe six weeks. Whereas six weeks at NSA, we would have still been trying to get permission to run the ping command. Right. So much more than the money, it was the lack of bureaucracy and the more focused, less complicated engagement: there's a job to do, do it, report on it, give the feedback, thank you, you're done type of thing. That was very refreshing. So the reason I left NSA was, they tried to get me to leave involuntarily, but I kind of took the opportunity when they gave it to me to get out and go to the private sector, where largely I've had a more receptive audience in my clients over the years. You know, not every time do they want to hear what I have to tell them in terms of how they're insecure and what they need to do differently, or what they need to invest in. But generally, if you can explain it to people, and I think I do a reasonable job of explaining to people why they should care, why they should worry, what they need to invest in, or at least, okay, you've got limited resources, here's your options, here's the pros and cons of what you decide to do or not do. At least they can make a more informed decision, or at least what I believe is a more informed decision, about how to approach this thing that we now call cybersecurity and protect your organization. And oh, by the way, we're losing, and nobody can afford to do everything that they need to do to provide that mythical 100% level of protection, because it doesn't exist. And yeah, we have a very burgeoning industry that keeps going, hundreds of billions of dollars spent on technology, where what ultimately causes many companies to fall is a process issue or a failure of people and personnel to do something pretty trivial.
Speaker 1:
[101:42] Yeah.
Speaker 2:
[101:43] When you get down to it.
Speaker 1:
[101:44] How does...
Speaker 2:
[101:45] Keep spending your money, people.
Speaker 1:
[101:47] How does, you know, when we look at the United States, we are a free country, and limits on the government are a good thing. And yet, and I don't want to say yet as though we should erase freedoms in any way, shape, or form, but how does the NSA, particularly in this infosec environment, compete against countries like China, Iran, Russia that do not have any moral compunction, any laws that limit their government's reach? How do we compete against that?
Speaker 2:
[102:39] Well, that's a very complicated question to answer. And philosophically, this came up a while ago in a conversation, and I now have the opportunity to say it, so I'll say it: one should think twice about automatically assuming that what we're doing is moral because we're doing it to protect us. I'll just throw that out there to make people think. But generally speaking, we are a moral, responsible society and government that does operate under rules, and most people take the rules fairly seriously. There's always exceptions. And because there's rules and there's bounds, and more than that, there's just so much that could happen, so much that could go wrong, and you never know what's going to happen and where, where do you put your attention and focus and your limited resources? We're almost setting ourselves up as a society, not to mention the pockets of industry within our country, which some would argue the government should be protecting. It's not really a winnable situation, in my opinion, whereas other countries, we are certainly told, aren't as strict on rules and regulation. I doubt if Chinese hacking groups, whether they're military or paramilitary or funded by the government, are going through a lot of procedures and bureaucracy and red tape. Just a perception. So, I mean, we handcuff ourselves. And, of course, I work tangentially, I have relationships tangentially with a lot of people that are involved in the mission of protecting the country, cybersecurity, national defense, and so on and so forth. To be honest, and if any of them are listening, I apologize ahead of time, but given my experience working with the government and in the private sector, I've always felt that if you're working for the government, it's because you're not good enough to make it in the private sector, so you're kind of second tier to begin with.
And there are exceptions. I mean, that's just a very broad blanket, probably ignorant statement, if you will. But in my experience, the real cutting-edge stuff happens in the private sector. And here's why. For better or for worse, in the private sector, everything is driven by the dollar. Everything is financially motivated. Companies exist because they're trying to make money. That's free commerce. That's what we do as a free country. And I often tell my clients in the private sector, when they talk about risk, and you hear all these words bandied about, like risk and vulnerability and threat and security, I tell my clients and anybody that will listen, frankly: when I was working for the military, when I was working as a civilian, the idea of risk was all computed around loss of human life. Troops on the battlefield, citizens abroad and domestic, embassy workers, State Department employees, and stuff like that. It all had to do with loss of life. In the private sector, it's all about money. That's very different, especially when everything you do comes at a cost, or everything you don't do potentially comes at a cost. So it's a different motivational factor. And I'm not saying it's a... Somebody posted on LinkedIn in the last year or two.
Speaker 1:
[106:49] But we're losing you just a little bit. I think your signal is a little low, but...
Speaker 2:
[106:54] Oh, no.
Speaker 1:
[106:55] Yeah. Can you repeat that last...
Speaker 2:
[106:57] Can you hear me now?
Speaker 1:
[106:58] Yeah, we got you.
Speaker 2:
[107:00] Okay. How far back do you need to go?
Speaker 1:
[107:03] Just like the last sentence or two. Yeah.
Speaker 2:
[107:08] Well, what I'm saying is the idea of risk, why you do security, why you do the things, it's very different if you're pursuing the national defense, which is basically loss of human life at some degree, versus the private sector, which is how much money you're going to lose, or how much money are you going to spend, or how much revenue are you going to lose, or how much... It's all a financial basis. It's not that one is right and one is wrong, it's just they're very different. In a lot of ways in the private sector, it's a lot better to understand dollars and cents. That's a pretty easy equation to understand. In the national defense concept, it's how do you put a price on a human life? I mean, you intuitively don't want to lose anybody's lives, but I'm sure we've all seen reports or heard people talk about, you know, generals planning battles and, you know, even the Normandy invasion in World War II. Everybody knew people were going to die. Right. And the calculations that were being done on what was an acceptable level of loss of human life given the potential gain. I mean, and that's where I defer to the people that do work for the government and do work for the national defense, because they do take that very seriously and it's very hard. But it's also very politically motivated and there's a lot of, there's a lot of stuff, bureaucracy and stuff that goes on with that, where maybe I'm taking the easy road out by just working in the private sector. And it's all about money. Do we have any questions for Jeff?
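The dollars-and-cents view of private-sector risk described here is often formalized as annualized loss expectancy, ALE = single loss expectancy times annual rate of occurrence. A minimal sketch of that arithmetic; every dollar figure and rate below is invented purely for illustration:

```python
# Annualized Loss Expectancy: the textbook way private-sector risk gets
# reduced to dollars. All numbers below are invented for illustration.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss from one risk scenario (ALE = SLE * ARO)."""
    return single_loss_expectancy * annual_rate_of_occurrence

breach_cost = 250_000       # hypothetical cost of one incident (SLE)
incidents_per_year = 0.2    # hypothetical likelihood (ARO): one every 5 years
control_cost = 30_000       # hypothetical yearly cost of a mitigating control

expected_loss = ale(breach_cost, incidents_per_year)
print(f"expected annual loss: ${expected_loss:,.0f}")
if control_cost < expected_loss:
    print("control pays for itself on expected-loss grounds")
```

The point of the exercise matches the speaker's contrast: in commerce the decision reduces to comparing two dollar amounts, whereas national-defense risk has no such clean equation.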
Speaker 1:
[109:01] Yeah, we do. But I want to ask you, in your opinion: the government is notoriously cheap, right? What they pay soldiers, what they pay case officers, what they pay NSA analysts and operators, what they pay their federal law enforcement. And for a lot of those jobs, whether it's a soldier or an FBI agent or whatever, there are not a lot of comparable jobs on the outside, so they can pay on the cheap. When it comes to the NSA, though, you guys may be a GS-12 or GS-13, step five, but then you can turn around to Mandiant or CrowdStrike or whatever and earn three times, four times what you're making. Do you feel that the NSA, that the government in general, needs to deal with this new reality, and that the NSA should pay people what they're worth on the outside in order to keep that talent?
Speaker 2:
[110:12] I mean, the short answer is yes, but it's complicated, because, and this is where I kind of do have a little bit of deference to the people that do work for the government, because they do believe in the mission and are patriots and things like that. But there is this stigma, at the very least, that if they were really good at what they did, they'd be in the private sector making the bigger dollars. But that doesn't mean that everybody out in the private sector that's making the big bucks is deserving of the big bucks. So, you know, it's kind of a two-way street.
Speaker 3:
[110:51] You might not necessarily want them deciding who lives and who dies either.
Speaker 1:
[110:56] Right. Right.
Speaker 2:
[110:58] I mean, I talk to a lot of people, you know, since I go out to a lot of conferences. I was at a conference last weekend, and after I spoke, I was talking to probably a dozen college students that had come from one college, and they were just peppering me with questions. And refreshingly, they did not ask what I usually get when I talk to students: How much does this pay? How much can you make in cybersecurity? They were mostly interested in the technology; they have a passion for whatever this stuff is. But I try to tell people, find something you like to do, find something you enjoy doing. Don't get hung up on money, because you can make a lot of money and still not feel like that's arriving, like you've made it. I have yet to meet anybody that's happy and satisfied just because they make a lot of money. But I know a lot of people that are really happy with what they're doing and really satisfied with their job; some make a lot of money, some don't. Some are in the government, some aren't. But the happiest people I know are the ones that are doing what they love and feeling like they make a difference. I've been doing the credit card industry for 20 years. I go home at night and fall asleep thinking, wow, I've allowed a company to make money on credit card interest. Contrast that with somebody that can go home at night and fall asleep because they know they helped save lives or promote the national defense. It's a hard nut to crack, but I think there's a stigma, at least for me, that if you work for the government, it's because you couldn't cut it in the private sector where they pay the big bucks.
Of course, a lot of people put their time in in the government, and then they get the posh job at the big companies out in the private sector. And, I'm grossly generalizing, but I'm not impressed by most of the people you see, the public figures, the ones that are always getting interviewed on CNN and all the different news channels, and so on and so forth. The people that really are good at doing all this stuff, that love it and are passionate about it, you don't know who they are. I don't know who they are, because they're just in the trenches doing it, and they're doing it for whatever makes them satisfied, and God bless them, because we need those people.
Speaker 1:
[113:36] I think it's interesting, because you talk about the mission, and I can see how, similar to the military, the people in the NSA have a mission and a purpose. And as you experienced, I think the challenge with the mission and patriotism and that sense of purpose is that the only thing standing between that and bitterness is one bad manager, one bad leader, and they can steal that entire sense from a person. How is the NSA when it comes to their leadership development and their management development and things like that?
Speaker 2:
[114:21] Yeah, I don't know. When I was there, which was the better part of 30 years ago, there was a stigma that if you wanted to advance in your career, go up the pay grade ladder, to get beyond a certain level you had to get into management. So there was either the technical track or the management track, and the management track is who made the big bucks. And I use the term technical loosely; technical could be you're a cryptologist, technical could be anything, but technical, not management. You're labor, not management. The people that were really good at the technology and wanted to advance at some point had to kind of suck it up and say, if I want to go further, I've got to get into management. I don't know that they've completely solved that. I was actually invited back to NSA last fall for an alumni open house, because they're basically trying to recruit people that used to work there, because they're hiring; there's certainly a need. We talked about how they don't pay well. And someone like me, whose clearance expired over 20 years ago, I simply asked, is there any way to streamline me getting back in? You want me. I'm certainly capable. I certainly have a lot of experience, but there's that background investigation, getting the clearance again. And the very long-winded answer, though I never really got a good answer, was no, there is no shortcut. But, gosh, I think I was at RSA a few years ago, and I went to the NSA booth, because that's sort of a pilgrimage every time I go to RSA Conference. And I met a young lady at the booth, and she's like, oh, you're Jeff Man. I'm like, oh, she knows who I am. She knows my stature in the industry, and my background and stuff. And she said, oh, I used to go to school with your daughter. So I'm like, oh, okay. So she had no idea who I was, other than I was the father of a classmate of hers. So my daughter now is in her early 30s. This woman's in her early 30s.
She's senior level management at NSA, and she might be the smartest person around, but my gosh, early 30s, probably at NSA since college, so she's got maybe 10 years of experience, and she's in a really senior level role. That doesn't give you warm fuzzies, and it's nothing personal against her. It's not because she's a woman. It's not because she's young. It's because she's got maybe 10 years of experience, and how much of that 10 years has been spent off in programs getting more education and training and doing this, that and the other? And my impression is they're working with what they've got to work with. And again, it's not a knock on her personally. She seems to be very smart and very wonderful, but she's made comments about how NSA is on top of their game. At this open house, the director was talking about how NSA is on top of their game. He's a very compelling speaker. But I'm like, yeah. Then I started talking to some of the people, and I'm like, yeah, you're still full of it. And that's just my opinion. So they talk a good game, but at the end of the day, it's still a government job. And they've got lots of stupid bureaucracy and rules and regulations. And because they're sort of the only game in town and they sort of look inward, they don't see the big picture and they don't see the outside. I've been trying to offer them: hey, I've been out in the private sector for 25, 26, 27, 28 years now. I've learned a few things. If you want to be more engaging with the private sector, why don't you bring me in and let me tell you how to maybe do that? Because your me-first approach of, we're NSA, you should listen to us, that's not gonna cut it in the real world, because people are like, oh yeah, you're NSA. What does that mean at the end of the day? Gosh, I hope I'm not getting fired or arrested after this podcast.
Speaker 1:
[118:30] And one last question before we get to viewer questions, something I'm curious about. Back during the naval era, when you had letters of marque, when the government couldn't control everything, we had the idea of privateers. Do you think that the government, in this cyber warfare world, in this cyber environment, when there are 14-year-olds who are just brilliant and doing crazy, amazing stuff, and there are groups out there, do you think that the government in this one arena should turn to a privateer model?
Speaker 2:
[119:20] It's an interesting question.
[119:28] I would say, I was having a conversation in the last couple of weeks with some people at one of the conferences I go to, actually it might have been on the podcast I do, but they were basically talking about how there's certain hacker groups out there that are going after certain groups, not necessarily nation state actors, but sex trafficking, child trafficking types of groups. There's conscientious hackers that just go after them because it needs to be done. It's not technically sanctioned by the government, it's not sanctioned by anybody, but nobody's really complaining. So, I mean, that's my most recent frame of reference. My bias is, if NSA or the government puts its fingerprints on it, it's gonna get stupid at some point. Could there be sort of an unofficial handshake? Well, there's this shadow group out there that's just doing the responsible, right thing. That might work for a while, but of course, that could go wrong for many reasons, too, because absolute power corrupts absolutely. But there are serious hackers out there that are socially minded, socially conscious, want to do the right thing, and are frustrated at the bureaucracy and the limits that government puts on things out of necessity, which make it very difficult to do what needs to be done in a fashion or a manner that can and should be done. Yeah, I don't know what you would call it, whether you would call it privateering per se, or just looking the other way. Does there need to be some oversight? Does there need to be some kind of stopgap? But I could see that happening. On the flip side, do I believe in vigilantism? Not necessarily. That sounds intuitively wrong. But I mean, anything can work for a while, and anything can go south when the wrong personality and the wrong motives come into play. People often ask about hacking back, and whether that should be done by companies, or left to the government.
Speaker 2:
[122:03] Right. Yeah.
Speaker 1:
[122:05] You know, this is where the difference between the private sector, where money is the risk, and the government, protecting the US and US entities and things like that, gets a little bit fuzzy and tricky for me. But I tend to want, I'd rather have the government in control of the actual war fighting, because that's sort of what they're in the business of doing, because I think it could get real ugly, and lots of bad things could happen to innocent people if it's done by the wrong people for the wrong reasons, or even the wrong people for the right reasons, but outside of the boundaries of control. You know, there's a reason why we have a Geneva Convention, which doesn't make sense at some level. Like, why do we have people sitting down, coming up with rules on how to conduct warfare? At some level, it makes perfect sense, and at another level, it's a head scratcher. It's the same type of thing for hacking and hacktivism, stuff like that. It makes sense at one level, and at another level, it's like, man, you don't want to go there. That's very sketchy. And I can go either way depending on my mood and depending on what the situation is.
Speaker 2:
[123:24] So, again, I'm sorry, but one more follow-up question, because you mentioned the Geneva Convention. And I'm curious, in your experience, if a non-state actor, a hacker group, shuts down a hospital with ransomware, should they be considered a viable military target?
Speaker 1:
[123:47] Hmm, that's an interesting question. From a Geneva Convention perspective, and again, this is a conversation we had on our podcast a couple of weeks ago with a gentleman named Josh Corman. You know, hackers can be good or bad, but the bad guys used to have sort of a code of conduct or ethics: you wouldn't go after, say, a children's hospital and hit them with malware or ransomware. But the perpetrators, the bad guys that are doing this now, they're looking for targets of opportunity. They're not looking at who it is as much. So there used to be some idea of responsible crime, and that can kind of go away at some point. So should they be targeted by military action? I would tend to say yes. But again, that's the situation where there's private groups, hacking groups, good guys groups, that are actively targeting those types of organizations and doing what they can to take them down in a logical, technological sense. I don't think it's in a military sense, you know, a physical sense. But yeah, there's certain lines that get crossed where most people will say, yeah, that's something that shouldn't be done. That's not cool. And it used to be that there were responsible criminals that wouldn't do something like that, but that seems to have gone out the window. So, you know, whatever works to get this stuff to stop happening, I'd be tempted to condone that to a degree, if that makes sense.
Speaker 2:
[125:44] I agree with you. I mean, I was just curious. I mean, you're the expert here, but I feel as though if, and according to the Geneva Convention, they're responsible for the loss of life, they're a viable target, but I don't know from a cyber perspective, somebody as experienced as you, what your thoughts would be. All right, let me get to the question.
Speaker 1:
[126:04] Well, final comment on that. What's interesting to me is, again, we talked earlier about some things kind of coming full circle or overlapping. Maybe this was off the air, but, you know, signals intelligence is becoming a thing again. Because they're targeting hospitals, which can't afford the security, can't afford the ransom, and critical infrastructure, the idea of the risk being loss of human life is becoming more tangible and real in the private sector. So it's not a full circle thing, but it is a blending where more action is required and more action from the government is necessary, even if that means regulation and regulatory compliance, but also assistance. So it is an interesting time we're living in, but I think it's interesting that risk in the private sector, which has been money for so long, is now starting to be human life again, which is something that the military understands. So yeah, maybe they should step in.
Speaker 2:
[127:22] So viewer questions. M. Corbin, thank you very much. Really appreciate it. Does Bitcoin have a future as a tool for power projection in the future? And also, what is your take on the 2000 US-China Hacker War?
Speaker 1:
[127:37] I try to avoid Bitcoin as much as possible. Does it have a future? No comment. And I haven't heard of the other one. I don't do a lot in the technology realm. I focus more on people and processes. That's just a general disclaimer. So try to ask me another question. I'm sorry, I can't answer the first one.
Speaker 2:
[128:05] Johnny, thank you very much for the donation. I don't see a question. If you have one, please throw it in the chat. Oh, I see another one. Global Media, thank you very much. Support The Team House, get those likes up. Yes, everybody, if you haven't liked this, please throw us a like and subscribe if you haven't. Johnny, thank you very much. I wonder if Jeff thinks CPU architecture can be secure. Intel, Apple, TSMC have been shown to have un-patchable physical vulnerabilities in chips, which leak secure keys.
Speaker 1:
[128:41] Yeah. I had a chief scientist, I believe it was in my early days at NSA, so it would have been in the 80s, maybe early 90s, that used to have a mantra: what can be created by man can be broken by man. So in that context, can CPUs ultimately be made 100 percent secure, unbreakable? No. To me, we're having two different discussions that often get lumped under this mantle of cybersecurity. The first is the idea of securing all the things as much as possible, creating a secure state, which is kind of a noun. And then the second thing is security: what do you do, given you can't do the first? What do you do to monitor and detect and respond in your network, your environment, given that something inevitably is going to fail in terms of the technology? So in that sense, what I'm saying is, no, I don't think CPUs can ultimately be secured 100 percent. But given that, what do you do? Maybe you don't invest as much in trying to find a better CPU. What is done these days by the organizations that you referred to is probably good enough for most people. But the few that care, and the few that are going to be impacted the most by somebody that figures out a compromise, figures out a way to work around what we used to call a feature, they're the ones that need to care about it. And they need to know how to detect it, to minimize the damage, to respond to it. I am a proponent of the process. Security is something you do; it's not a state that you achieve. There's making things secure, and then there's security, which is the diligence and the monitoring and the standing guard and standing watch, so that you see the attack when it's happening, you intercept it early, you minimize the damage. That, to me, is the essence of security.
Speaker 2:
[131:01] Do you think that hardware manufacturers and software manufacturers are transparent enough with the community in terms of what they think the weaknesses are so that people can be diligent, or do you think they could be more transparent?
Speaker 1:
[131:19] Short answer is no. I don't think they're as transparent as they could be. The podcast that I do, Paul's Security Weekly, securityweekly.com. Paul Asadoorian, the Paul in Paul's Security Weekly, works for a hardware vulnerability research company that does hardware hacking. Well, I don't need to say the name. We'll let him do that. Go to Security Weekly, you'll figure it out. But he focuses a lot on hardware vulnerabilities right now, so that's a topic that has come up a lot on our podcast over the last year or so. And he reports very routinely on the research he's doing at his day job on the insecurities of hardware and how hard it is to secure hardware. And it's not really a new frontier, because it's been around forever. I mean, I worked at NSA when it was all hardware and there was no software. So it's semantics, it's blurring the lines, but hardware is also prone to insecurities and vulnerabilities and bugs and weaknesses and misconfigurations. And they're out there. They typically don't become publicly known until either somebody exploits them or some researcher discovers them. And then the sky is falling. You have to temper it with the likelihood that somebody is going to go after something like that, going to go to that degree of attack to try to exploit it. A general principle, I'd say, is the bad guys are going to do whatever works, whatever is easiest. I mean, they have their own cost-benefit analysis, as it were. So they're going to do what works and what's easy, and they're going to hit the targets that are vulnerable. They don't necessarily target specific organizations, which to me is one of the big 800-pound gorillas in the room: we have this industry that makes people protect against all sorts of stuff, but most of the bad guys aren't targeting specific organizations.
If they did, they'd sort of have unlimited resources, and they could go after them any way they can, and they can take the time. And if it means exploiting a hardware vulnerability, they will. I think the line is drawn when the hardware vulnerability can be exploited in a way that is reproducible, and it can become something that's random in terms of, let's find somebody who's vulnerable. We don't care who it is, even if it's a children's hospital; let's exploit it and make money off of it. Commoditized types of attacks that target anybody. No offense, we're just targeting whoever's vulnerable.
Speaker 2:
[134:24] Do you think that ransomware as a service has kind of increased that tendency? You might have ransomware gangs that do have those codes of conduct, but then when it's ransomware as a service, you just have some script kiddie out there who's like, ah, fuck it, I'll just find whoever will pay.
Speaker 1:
[134:44] Well, I mean, it's simple economics, and you're not really paying attention to who the target is. It's whoever's vulnerable that you can make money off of. I mean, ransomware in general, I think, has changed the dynamic of cybersecurity significantly, because of the way I was classically taught about this problem, which back in the early days we called data security or information security. Most people have probably heard of the CIA triad, the three components of the security of data: confidentiality, integrity, and availability. Confidentiality: keeping secrets secret. Integrity: knowing that the data is valid, that it hasn't been altered or tampered with. And then availability: can you get to the data when you need it? Most of this cybersecurity industry, which is mostly technology-based, focuses on the confidentiality problem: trying to keep things secret, trying to keep things safe, trying to keep things from being stolen. You know, denial of service has been a problem off and on, and distributed denial of service has been a problem off and on over the last 20 years or so, but we sort of solved those problems. Integrity issues, faking the data, whether you can trust the data, that can come into play with phishing schemes and fraud schemes, scams and stuff like that. But availability, that's something that we haven't really invested a lot of technology solutions in. And everybody believes that technology is how you solve the problems. And it's even more twisted than that, because it's not just ransomware, where we're going to hold your data, and if you don't pay, we don't give it back to you and you lose access to it. Now it's sort of... I don't know if somebody's come up with a good term for it, but holding the data and threatening to release it, rather than just giving it back to you. I don't know what a good term for it is, but that's coming up more.
Speaker 2:
[137:03] Yeah, it's extortion.
Speaker 1:
Kind of blackmail.
Speaker 2:
[137:05] Yeah.
Speaker 1:
[137:08] You know, there are no good technical solutions to prevent that, other than the things that we've been preaching for the last 30 years, sort of basic security hygiene, to try to prevent that stuff from happening. I mean, with all the ransomware attacks that are out there, you don't often hear people talking about how the ransomware attack was launched in the first place, or how it got into the environment. But it's usually a phishing attack, which is not a technical failure, although you could argue that it could be. Why am I getting an email in my inbox that's got a phishing link in it? Why isn't there technology out there that filters it out or blocks it? There's that aspect of it. But we don't have a lot of good technology out there that prevents people from clicking on a link or falling prey to a really, really convincing, clever phishing scam. Or, to date myself back almost 30 years, opening an attachment of a document in an email that I got from a trusted source that says, hey, read this. And by doing so, I've launched a virus or malware, what we used to call viruses and trojans and malware, but what we pejoratively call ransomware these days.
Speaker 2:
[138:30] Right. Well, I mean, in these days and times, it's amazing how many organizations aren't even enforcing a basic 2FA to log in to stuff. It's incredible the basic steps that aren't being taken often.
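For readers who want to see what the "basic 2FA" mentioned here actually does under the hood: the most common form is a time-based one-time password (TOTP, RFC 6238), where the server and your authenticator app share a secret and both derive a short code from the current 30-second window. This is a minimal illustrative sketch, not production code; the function name is our own.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    # Counter = number of `step`-second windows since the Unix epoch
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): low nibble of last byte picks a 4-byte slice
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a secret that never crosses the wire at login time, a phished or reused password alone is no longer enough to get in, which is Jeff's point about putting MFA on everything.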
Speaker 1:
[138:48] I agree with that, and what I often shake my head at is the fact that while there are so many vendors out there trying to sell you convincing solutions, and I'm talking primarily about the private sector because that's where I've been most of the last 30 years, without regulation, without compliance, most companies aren't going to do it, because why should they? They don't have to, and until they get popped, until they get breached, they don't get the religion of, oh, we really should have done that. I've been doing the payment card industry for 20 years. The PCI Data Security Standard is a pretty decent, high-level set of rules of things that you should do to secure your organization, your network, to protect data that you care about being stolen. Specifically, it's credit card information, but you can apply it to anything. Most organizations that I work with are doing it because they have to. And in the early days, even before PCI, when I was working with companies in the private sector, and even in the beginning days of PCI, the companies I worked with weren't asking, what do we need to do to be secure? They were asking, what is everybody else doing that's a peer in my industry, so that I can do as little or as much as anybody else, so that when something bad happens, I can say, well, I was doing best practice, and therefore not get fined, or not be held liable or accountable, because it could happen to anybody. And it could happen to anybody. It's a weird dynamic, but most companies out there, if they don't have a reason to do it, they're not going to do it. But you can sort of explain that with a financial model, because everything's money-based in terms of the risk model. Well, it hasn't happened yet. Why should we spend money to protect against something that hasn't even happened yet? So, there's a financial logic to it.
And of course, it blows up when the bad thing happens. And that's when we get called in, and we help them straighten things out. And they get religion. But what's in the news these days in the private sector? Critical infrastructure, utility companies. And I hear people talking about, well, there's NIST this, NIST that, and the MITRE ATT&CK framework can do this, that and the other. And there's all these things. I'm like, they're a utility. Somebody in that company is collecting credit cards to pay the water bill, the electric bill. So they know PCI is in there somewhere. If you just did what PCI said to do, you'd be pretty much okay. But nobody seems to be connecting the dots on that. Nobody likes to talk about PCI. That's old. It's stupid. It's not flashy and new and shiny. Right. But it is today, because PCI 4.0 is now the law of the land.
Speaker 3:
[142:15] Do we have anything else for Jeff?
Speaker 2:
[142:17] How long do you think it will take? Thanks, John Jones. How long do you think it will take for AI-based security controls to become as commonplace in the private sector as Layer 7 firewalls are today?
Speaker 1:
[142:29] Oh, God. AI, the latest buzzword thing that I'm trying to avoid ever dealing with. You could probably map this to other things, using the firewall as the analogy. Everybody's got a firewall these days. I'm sorry, they don't have firewalls anymore, because their infrastructure is now in the cloud and it's protected by software. 10 years, with a little bit of acceleration, I'll say 5 years. That's my guess.
Speaker 2:
[143:08] And then, M. Corbin. Oh, Justin Zulu, thank you very much. What are some things that an average person could do to protect themselves going forward?
Speaker 1:
[143:22] Probably the biggest thing is put what the industry calls multi-factor authentication, what we used to call two-factor authentication, on everything. I'm not a personal fan of password vaults, because I'm old school enough to think that you shouldn't put all your secrets online, period, or trust technology, period. WarGames, 1983: don't trust the WOPR. But use a really, really, really long password, and I would even advocate phrases, poems, song lyrics. Try to think of obscure song lyrics and then apply random uppercase, lowercase, special characters. Everybody knows to substitute the number 4 for the letter A and the number 3 for the letter E, but don't do it on the first letter or the last letter, and don't do it on every letter. Put spaces in between the words, or better yet, put spaces somewhere inside the words and not between the words. Because that's going to protect against password cracking, brute forcing. But more than that, I would say make sure you're always using some sort of multi-factor authentication on everything. There's a lot of people talking about using password vaults, where you get to use those super long, randomly generated passwords that are stored in the vault. But password vault companies have fallen victim to compromise, so they're not a perfect solution. In fact, I interviewed the CEO of LastPass last summer at Black Hat. It was part of the podcast I do; we did live interviews of executives. That was an interesting conversation. I didn't know the guy wasn't the founder of LastPass. He had become CEO a couple of years earlier, and months before, they had had not one but two major breaches. So I was like, ouch. But I'm old school. I don't believe that you should put all your eggs in the technology, digital basket. I think this is your best tool right here. My current domain password for my day job company is, I think, like 38 characters long.
It's a line of a song that I know. I mix it up a little bit, enough to protect against the cracking. But just the sheer length of it, 38 characters, nobody's going to guess it. Even if you knew what album I was citing a lyric from, because of the various permutations, yeah, you could brute force it, but it would take you a while, because I mix up the spaces and the uppercase and lowercase characters and special characters and stuff like that. And because I grew up typing with 10 fingers and not thumbs, I can type my 38-character password faster than probably most people can type a 10- or 12-character password while they're doing it like this. That's just me being a crotchety, curmudgeonly old-timer. Get off my lawn.
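To put rough numbers on why 38 characters beats 10 or 12: a back-of-the-envelope sketch. The guess rate below is an assumed figure for illustration, and the math only holds for a truly random string; a passphrase built from a known lyric has far less entropy than its length suggests, which is exactly why the random case changes, substitutions, and mid-word spaces matter.

```python
import math

def entropy_bits(charset_size, length):
    # A truly random string of `length` symbols drawn from `charset_size`
    # possibilities carries length * log2(charset_size) bits of entropy.
    return length * math.log2(charset_size)

def years_to_exhaust(bits, guesses_per_second=1e12):
    # Worst-case time to enumerate the whole keyspace at a fixed guess rate
    # (1e12 guesses/second is an assumed, generous offline-cracking figure).
    seconds = (2 ** bits) / guesses_per_second
    return seconds / (3600 * 24 * 365)

# 10 random printable-ASCII characters (95 symbols) vs. 38 characters
short = entropy_bits(95, 10)   # about 66 bits: a couple of years at this rate
long_ = entropy_bits(95, 38)   # about 250 bits: far beyond any feasible search
```

Even granting the attacker the base lyric, each inserted space, case flip, or character substitution multiplies the number of permutations to try, which is the "it would take you a while" Jeff describes.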
Speaker 2:
[146:56] So Jeff, my question, because I do use a password vault, my question to you would be: in this digital world where everything we do requires a password, and obviously you don't want to reuse the same password, how do you manage 30 passwords without a vault? Do you write them all down? Do you personally just remember them all? How does the average person manage that?
Speaker 1:
[147:26] Well, A, I'm not the average person, for better or for worse. You know, we used to talk about having passwords you care about, and passwords that are the throwaway passwords. Of course, I've talked to developers doing stuff in Azure or AWS, where they need to know like 300 passwords for all the various different systems they've got working. That can be a little bit excessive, but I guess I'm more of the mindset that you have the throwaway password. You need to make a password, have it be decent, but I'm okay with repeating passwords for accounts that I don't care about. Now, the thinking on that is you don't want to use a password in multiple places and use it someplace where something's going to get stolen, something you care about. So I sort of distinguish the throwaway password: oh, I've got to sign up for something, I've got to create an account, I'm never going to use this again, I need a password. I have a throwaway password that's just something lame. And then the passwords on the accounts that I care about, which are much fewer, are either unique or permutations on a very, very long string. There's a couple of considerations to be made, and I can argue myself out of this, because it's not just stealing the hash and cracking it to try to figure out what the password is. If you're using a password in multiple places and it gets compromised in one place, it can be used in many other places. That's another type of attack. There's the possibility that even your best password somehow gets intercepted while you're using it, in a fashion where it can be copied. More rare, but still a possibility. But the bad guys don't often do it that way, because there's easier ways to do it. So I guess I could be proven wrong. I'm happy to be proven wrong and argued out of it, but I'm still of a mind that I have throwaway passwords that I'll use repeatedly in many places.
And I don't care if you knock over this account and that account and that account, because I just set up the account so I could download the white paper, damn it, and read it. Right. But, you know, I mean, shoot, my rental car company, and I won't say which rental car company I use, when I initially set up the password on their first app, they asked for a PIN. So I have a four-digit password at my car rental company. And I keep thinking I should change it, but then I keep thinking, I don't really care if somebody rents a car in my name, because I could probably sort that out. I'm not going to be ultimately held liable for it. And who's going to do that anyway? So I have a four-digit PIN that is my password for my car rental company to this day. And I set it probably 25 years ago.
Speaker 3:
[150:46] Jeff, tell us about your podcast and where people can go to find it. Sure.
Speaker 1:
[150:54] I'm on a podcast called Paul's Security Weekly. You can find it at securityweekly.com, and if you search on all the podcast catchers... I think we're on YouTube and Twitch; securityweekly.com is the way you'll get there for subscribing. Paul Asadoorian is the Paul in Paul's Security Weekly. He started the podcast with his friend Larry Pesce back in 2006, I believe, so it's one of the oldest security podcasts around. And it was built on the premise of practitioners just sitting around having drinks, talking shop. Paul's a cigar smoker, so much like your studio there, the liquor flows freely, the cigars are smoked. And I met Paul about 10 years ago when I went to work for this vendor that was a friend of mine, and he got me involved in the podcast. I've been doing it about nine years now. We're a weekly podcast. Paul actually made it his own company at some point, which was acquired at some point. But it's a network of shows. We drop probably 10 hours of content a week. There's Paul's Security Weekly, the flagship, Application Security Weekly, Enterprise Security Weekly, Business Security Weekly, and twice-weekly security news segments. Lots of content, but the people at the end of the day are practitioners that are in this because they're passionate about it. We talk shop, we talk about all sorts of things, like we've been doing tonight.
Speaker 3:
[152:31] And for people listening, we'll have a link in the description to go and check it out.
Speaker 2:
[152:35] And where else can people find you?
Speaker 1:
[152:41] I do a lot of conference speaking to this day thanks to my friend that pushed me out into the conference world. I'm actually going to be up in Canada later this week at a conference called the Atlantic Security Conference. I'll be at B-Sides Harrisburg, Pennsylvania in two weeks. End of the month, I'm going to be in Boise, Idaho at the Boise ISSA Conference. In May, I will be in St. Louis at the ShowMeCon hacker conference. So a lot of conferences. I'll be around for what we call Hacker Summer Camp, B-Sides Vegas and Black Hat and DEF CON. I'll be out in San Francisco for RSA. I'm on Twitter, although nobody's on Twitter anymore, but you can find me there at Mr. Jeff Man. If you spell my name right, you can find me on LinkedIn. Go to YouTube and type in my name and security, and you'll find many recordings of talks I've given. My NSA days, my first couple of years where I was in the crypto shop, I did a talk, and I had the marketing team come up with a sticker for it, because hackers love stickers. I did Tales from the Crypt Analyst, and then when I did the talk about the NSA red team, the first pen testing team, that was the sequel, More Tales from the Crypt Analyst. And this year I've commissioned new art, and I'm giving Tales from the Crypt Analyst: The Afterlife. That's the talk I'm giving this year.
Speaker 2:
[154:21] Jeff, we throw stickers up on our door. We want as many of those stickers as we, one of each. Yeah, if you have them, we'd love to.
Speaker 1:
[154:31] I'm going to have to get more of these made.
Speaker 2:
[154:33] Yeah.
Speaker 1:
[154:34] More of these made because I'm down to the last couple. But the woman that is responsible for all these stickers, her Twitter handle is OneDarkOne. She does a lot of graphic art for a lot of the hacker conferences and the B-Sides. So I call her a con artist. She literally is a con artist.
Speaker 2:
[154:54] I have two more questions real quick. Anyway, and Dean might have some from Patreon, but M. Corbin, thank you very much. Any way to circumvent hackers for hire used by foreign nations?
Speaker 1:
[155:07] Pay them more?
Speaker 2:
[155:10] Mohammed Sabani, thank you very much for the very generous donation. So there's a couple of questions. Do you like YubiKeys for passwords?
Speaker 1:
[155:20] I've not used them, but yes. I think they're a good thing to do if you want to drop the money for them. Yes, I think they're good.
Speaker 2:
[155:29] Seriousness of quantum, hold on, sorry, I lost that. Seriousness of quantum compute threat.
Speaker 1:
[155:39] We'll get there. But like any other technology, it'll have the potential for being used for good and bad. In the old days, the Cold War was often referred to as a game of cat and mouse. The Soviets would do something that would be devastating, but eventually we'd figure it out. And then we'd do something that was devastating, and eventually they'd figure it out. So kind of this cat and mouse game. I think the same is roughly true with all the technological advances. Quantum was what we were all talking about a year ago, but of course AI is the thing now that everybody is talking about. So it has the potential for good, it has the potential for evil, it's overhyped and not there yet. The quantum thing is becoming real, but until quantum computing is available on the smartphone, or reasonably affordable by people that aren't nation-state status, it's not going to be an issue yet. What's interesting about quantum, I will add, is that because quantum will have the ability to break things once it arrives, including stuff that was encrypted in the past, that's where you have to start thinking now about what you're protecting with the current cryptography, especially for stuff you're storing, because it could be cracked in the future by quantum computing. So think about what you're saving, and think about why you're saving it and storing it. And keep in mind that what you're storing now, based on what algorithms you're using to store it, could become susceptible to compromise. But like everything else security related, maybe the protection isn't just coming up with a stronger algorithm. Maybe it's preventing it from being stolen in the first place. Or if it does get stolen, you catch the people doing it and prosecute them. There's always more than one way to solve the problem. There are no single-point solutions, quantum included, AI included, where, okay, we've got this, so we're done, we're good.
We can walk away now and not think about it.
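[Editor's aside: the "harvest now, decrypt later" risk Jeff describes can be sketched in a few lines. RSA's security rests on factoring being hard; a large quantum computer running Shor's algorithm would factor quickly. With deliberately tiny primes, plain trial division plays the role of that future attacker. This toy sketch is ours, not from the show; real RSA uses primes of 1024 bits or more.]

```python
def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n (feasible only for tiny n)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

# Key generation with toy primes.
p, q = 61, 53
n = p * q                     # public modulus: 3233
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

# The "harvested" ciphertext, recorded today while the key is still strong.
plaintext = 42
ciphertext = pow(plaintext, e, n)

# Years later: the attacker factors n, rebuilds the private key, and decrypts.
p2 = trial_factor(n)
q2 = n // p2
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
recovered = pow(ciphertext, d2, n)

print(recovered == plaintext)   # True
```

The point Jeff makes survives the toy scale: ciphertext stored today is only as safe as the hardest problem an adversary can solve over the data's whole lifetime.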
Speaker 2:
[158:04] Right. How best, and this is still from Mohammed Sabani, how best to develop US talent earlier, like Unit 8200. And I think this goes into maybe the idea of, obviously there are a lot of legal things people can do now to develop their hacking skills, unlike the past. But let's say you have a kid who is curious, maybe with a criminal bent, kind of a ne'er-do-well, but reforms his ways. Is there a way? Do you feel like there's a way to bring these people into the government?
Speaker 1:
[158:51] Well, not speaking for the government, I would say yes. But the government has rules. I mean, when I was hired at NSA, I had to go through a background investigation. I had to go through a polygraph. They wanted to know all your deepest, darkest secrets. They claimed at the time that if you had done something in your past, it wouldn't necessarily mean you didn't get hired. They just wanted to know about it so you couldn't be blackmailed in the future. Right. So, I mean, I think the government's getting smarter at knowing that they have to sort of cast a wider net and not necessarily go after the cookie-cutter STEM person. I mean, I'm the living proof of that. I was not a critical skill. I was not a STEM person. I was hired by NSA and I did some things that were meaningful. Given my GPA and my educational background, if it wasn't for those aptitude tests recognizing my potential, I would not have been hired by NSA then, or even to this day. So, what I'm trying to advocate for is, let's figure out a way to find the people with the potential and the aptitude that aren't necessarily the cookie cutter, you know, they're in a STEM curriculum or they're from a certain neighborhood or they're a certain skin color or they're a certain ethnicity or they're a certain orientation. Let's find the people that have the potential and the aptitude because they test well in a certain skill set, and let's promote that. That to me transcends all the other issues. And I'm the living proof of that, because I had no business being hired by NSA if all they were looking for was computer scientists, engineers, and mathematicians, because I was none of the three. But I ran circles around the people that they hired that did have those degrees, that left after three years with a graduate degree and went off and made a lot more money out in the private sector.
Speaker 2:
[161:07] Right. We have a couple.
Speaker 1:
[161:09] Yes, I have a chip on my shoulder.
Speaker 2:
[161:11] We have a couple of questions coming in. So I just want to make sure we get to them. Thoughts on Mattermost Messaging?
Speaker 1:
[161:23] I'm not sure I know what that is.
Speaker 2:
[161:26] Yeah, I think it's a new secure, Signal-style messenger. I'm not sure of it. Also from Mohammed Sabani, how much difficulty does a red teamer like you have keeping up with the relentless pace of development and knowledge needed, from networks to VMs, OSINT, to Kali Linux tools, etc.?
Speaker 1:
[161:48] So I don't do the red teaming anymore. I hung up my hat, or my gloves, on doing that about 20 years ago. I've been for the last 20 years trying to talk to people about the possibilities and what could happen and what could go wrong, and what they need to do to prevent it from a process perspective, rather than keeping up with the technical stuff. That being said, we talk about this ad nauseum on the podcast, because the other co-hosts are active red teamers. When we do get down to it, while the technology has changed and the techniques have necessarily changed, the underlying motivations and methodologies, the foundational principles of security, have not and generally do not change. So in that sense, I don't need to keep up with it, because nothing has changed. Then sprinkle on top of that, for all the stuff that's going on, the two most common reasons why companies still get breached, to this day in 2024, are something to do with passwords, weak passwords, stolen passwords, exploited passwords, and the exploitation of trust relationships. Those are two broad terms, but very rarely is it technology related. I mean, we were talking about vulnerabilities and CVE scores a couple of weeks ago, and the statistic is something like only 3 percent of all the published CVEs have ever been used by bad guys to steal something, to exploit something. Yet we have a whole industry built around driving down the vulnerability count. CVEs, CVEs.
Speaker 2:
[163:47] Yeah. And so the CVEs, what you mentioned, are the critical vulnerabilities that come out through the various, like Microsoft has a CVE Tuesday or Wednesday, I don't remember.
Speaker 1:
[164:00] Well, it's Patch Tuesday.
Speaker 2:
[164:02] Patch Tuesday.
Speaker 1:
The CVE is Common Vulnerability...
Speaker 2:
[164:06] Common Vulnerability. Okay.
Speaker 1:
[164:08] What's the E stand for? I can't think of what it is. So basically, I mean, what we're really getting down to is most companies are running a vulnerability scanner of some ilk and responding to the results. And the results are ranked critical, high, medium, low, based on some sort of statistical calculation, which is called a CVE score. And it's got lots of different factors involved. And I'm somewhat generalizing, but in my almost 30 years of experience in the private sector, most companies jump at the scan results more than anything else they do in their security program. And so the argument and the discussion we've been having on our podcasts over the last couple of months is, what happens when a vendor discovers a vulnerability in something that they produce, because somebody discovered it and disclosed it, whether they got a bug bounty or not, but they told the vendor about it. And the vendor decides to fix it, but not issue a CVE. Right. Does it ever get to the scanner? Does it ever get a finding? Does it ever get a ranking? And do companies ever respond to it by doing the patch or the version upgrade? That I think is a very serious issue from the perspective of most companies. They had it drilled into their heads that everything starts with, what does the vulnerability scanner tell us to do? Because everything we do is associated with driving down the vulnerability count, because that's how we manage risk. Overly simplistic, wrong, and we could go another couple hours talking about that, but we shouldn't.
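[Editor's aside: Jeff's "only about 3 percent of CVEs are ever exploited" point suggests a triage that doesn't just sort by raw severity score. A minimal sketch: findings known to be exploited in the wild (for example, ones appearing on a list like CISA's KEV catalog) come first, regardless of score. The field names and data below are hypothetical.]

```python
# Hypothetical scanner output; "exploited_in_wild" would be joined in from
# an external known-exploited list, not from the scanner itself.
findings = [
    {"cve": "CVE-2024-0001", "score": 9.8, "exploited_in_wild": False},
    {"cve": "CVE-2023-1111", "score": 7.5, "exploited_in_wild": True},
    {"cve": "CVE-2024-0002", "score": 5.3, "exploited_in_wild": False},
    {"cve": "CVE-2022-2222", "score": 6.1, "exploited_in_wild": True},
]

def triage(findings):
    """Order findings: actively exploited first, then by descending score."""
    return sorted(findings,
                  key=lambda f: (not f["exploited_in_wild"], -f["score"]))

for f in triage(findings):
    flag = "EXPLOITED" if f["exploited_in_wild"] else ""
    print(f'{f["cve"]}  {f["score"]:>4}  {flag}')
```

Under this ordering, a 6.1 that attackers actually use outranks a 9.8 that nobody has ever weaponized, which is the inversion of the score-chasing habit Jeff is criticizing.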
Speaker 2:
[165:54] Another one, Mohammed Sabani, again, thank you very much. Finally, for the lads, how much difficulty do the glowies, I guess that's the new slang for feds, have in tracing Monero transactions? Beautiful algo, LOL, asking for friends. Of course, Mohammed, we're always asking for friends.
Speaker 1:
[166:17] Sure.
Speaker 2:
[166:18] But when it comes to crypto and stuff like that, a lot of people have this impression that it's anonymous, but it's really not. Can you tell us a little bit from your experience or from your knowledge, like how do the feds track Monero or Bitcoin or anything else like that?
Speaker 1:
[166:45] I mean, I can't speak definitively because I don't work with them or for them anymore, but given what little I know about it, if they're motivated to track it, they can track it. There are ways to do it. I would hesitate to say that they're tracking everybody just because they're financially and economically bound just like everybody else. But if they have a reason to go after you, the indicators are there. I mean, if you're asking, are you safe to do it and the government's not watching you, I think a certain amount of big brother fear is probably healthy. But I wouldn't lose sleep over it either.
Speaker 2:
[167:36] I think one of the, and it was Darknet Diaries' Jack Rhysider who actually recommended you to me. In one of his episodes, they talked about a Department of Homeland Security operation against child pornographers, and how they tracked the crypto going in. And the thing is, they may not be able to track crypto in terms of where it's going inside the system, but eventually you've got to cash out. And they can follow it to that cash-out point. They can follow it from the buy point, they can follow it from the cash-out point. So I think, just kind of emphasizing your point, if you think you're getting away with something, you're probably not.
Speaker 1:
[168:28] Well, I mean, probably a similar analogy is encrypting data. Data was encrypted initially for transmission, for communication, that was the mantra back then, or in the modern world for storage as well. But if you're encrypting data to protect it, sooner or later you're going to want to decrypt it so you can use it, or refer to it, or access it. So the attack points are either before it's encrypted or after it's decrypted. So I think that's a similar analogy to what you're painting. Jack Rhysider, it's Darknet Diaries he does, that's probably where you saw him. I'm episode 83, if anybody wants to go listen to it. I'm the second half of episode 83; it's entitled NSA Cryptologists. I met Jack again at DEF CON a couple of years ago, and I'm like, oh, you do Darknet Diaries, you should really interview me. And he checked me out and he's like, yeah, I really should. So, you know, different elements and aspects of the story I've been telling tonight would come out in the Darknet Diaries episode.
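[Editor's aside: the cash-out tracing Dave describes works because, on a transparent ledger like Bitcoin's, the transfers form a public graph that investigators can walk until funds hit a regulated exchange with KYC records. Monero deliberately obscures this graph, which is why it's the harder case. The addresses and edges below are invented for illustration.]

```python
from collections import deque

# Toy transaction graph: each address maps to the addresses it sent funds to.
transfers = {
    "suspect_wallet": ["mixer_in"],
    "mixer_in": ["hop_a", "hop_b"],
    "hop_a": ["hop_c"],
    "hop_b": ["exchange_deposit"],   # the cash-out point with KYC records
    "hop_c": [],
}

def trace_to(start, target, graph):
    """Breadth-first search for a path of transfers from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no path of transfers connects the two addresses

path = trace_to("suspect_wallet", "exchange_deposit", transfers)
print(" -> ".join(path))
```

Real chain-analysis tooling deals with amounts, timing, and clustering heuristics rather than a clean graph like this, but the principle is the same: the endpoints, buy-in and cash-out, are where anonymity breaks down.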
Speaker 2:
[169:41] Yeah, he's a really great guy. Andrew just asked a question. Does the cyber liability insurance run its own penetration testing teams?
Speaker 1:
[169:52] I'm not aware of any that do it directly, but a lot of times the insurance riders are very closely connected to other companies that do provide some level of assurance that the insuree, if that's the right term, is insurable. And they would simply do it. But I mean, the first couple of years of the cyber insurance industry was all questionnaires. And that was supposed to magically validate that you were worthy of the cyber insurance, especially if there was a claim filed. So I don't think any of them do it directly, but they certainly, because of claims against it and the need to, and I'm not an insurance expert, but actuarial tables, figuring out how much you need to charge people that want to have this type of insurance, based on how many claims are going to be filed and what's fair and all that kind of stuff, so the insurance companies can still make a profit. They're starting to get more responsible. I mean, cyber insurance has been around for almost 10 years. And I remember being asked about it almost 10 years ago. And I'm like, people are silly to think that they can skirt or dodge regulatory compliance by just getting cyber insurance. And in this context, it was PCI. Because I'm like, have you ever tried to file a claim against an insurance company? You can be damn sure that they're going to come back and say, were you doing all the things that you should be doing? So if you think the PCI assessment or audit was bad, wait till the cyber insurance adjuster comes out and starts looking under the hood.
Speaker 2:
[171:38] And a lot of times, I think what they'll do is they'll hire the forensics people to go in and say, well, they didn't do this, and the insurance company will have an easy out.
Speaker 1:
[171:48] Right.
Speaker 2:
[171:49] Yeah.
Speaker 1:
[171:49] But I have heard of, I mean, partnerships, I guess, or relationships where the insurance carriers do have relationships. Again, they don't do it themselves, but they probably have partner companies that will do a little tire kicking, a little bit of vetting of the people trying to get the policy to make sure that they're meeting some sort of minimal standard. Similar to, like, I don't think insurance companies hire doctors. They don't have doctors on their payroll, but you have to get a physical to get a life insurance policy most of the time. So they have partnerships and relationships, or you have to have the notarized signature of a doctor. I got to renew my driver's license, and I'm like, I can do it in the mail, right? Except for I got to have the back of the form filled out by the eye doctor saying, I can still see.
Speaker 2:
[172:39] Right. So the insurance companies will hire somebody that will boot up Kali and say, yeah, okay, you know, we ran port scans, they're fine, yeah, whatever. But then if things go awry, the insurance company can also, the claim can also be like, oh, well, you weren't meeting this thing.
Speaker 1:
[172:58] Yeah, it's very complicated. There's certainly something to be said for, you know, some sort of minimum level of security, which is typically measured by some sort of compliance standard. Yeah. And the cyber insurance companies are certainly getting smarter. But you triggered me a little bit, because there's also this prevailing attitude in our world and in our industry that the ultimate test is a pen test, which at some level, yeah, if you can afford it, that might be true, because that's where the rubber hits the road, a live-fire test. Most companies don't want to pay for that. And I'm guilty of this. When I first came into the private sector, I started with, let's do a, we called it a pen test, but it was really a vulnerability assessment. Let's see what you've got. Let's see what we have to work with. Let's see what your holes are, your vulnerabilities are, and let's start by closing them. I kind of thought that the industry would evolve. Because that was almost 30 years ago. God, that's almost 30 years ago. But when I got back into this, talking to red team and pen testing companies in the last 10 years or so, I'm like, wow, this has become the ultimate test, and this is where you start. And you should not start your journey of security with a pen test. That's the last thing you should do. Literally, that's the last thing you should do, because there are all sorts of more cost-effective, economical ways to put security in place and test it, with stopgaps and checks. And the ultimate live-fire test, when you think you're ready for it and you're mature enough, is a pen test, a real pen test. Not a vulnerability scan, not a Nessus scan, not somebody running a tool suite, this, that, or the other, but an actual, you want people to try to come after you, and you're going to pay them to do it, let them do it. Again, which is the methodology that was portrayed in the movie Sneakers, which came out in 1992.
Speaker 2:
[175:15] Right.
Speaker 3:
[175:16] Jeff, thank you for spending your Monday evening with us and sharing all these secrets.
Speaker 1:
[175:22] Heck, it's almost Tuesday.
Speaker 2:
[175:25] I know, we've kept you so long. We really appreciate it.
Speaker 3:
[175:28] We will be back on Friday with Jonna Mendez. Otherwise, Jeff, any final thoughts, any final things you want to put out there before we get going tonight?
Speaker 1:
[175:41] There's no way to summarize this. Be diligent, be smart, be caring, and don't believe the vendor.
Speaker 2:
[175:53] And again, people can find you on Twitter at realjeffman.
Speaker 1:
[176:00] Mr. Jeff Man on Twitter. You can find me on LinkedIn.
Speaker 2:
Two Fs, one N.
Speaker 1:
Two Fs, one N.
Speaker 2:
[176:10] And the podcast one more time for everybody, please.
Speaker 1:
[176:13] Paul's Security Weekly. You can find us at simply securityweekly.com.
Speaker 3:
[176:20] All right. Well, thank you so much, Jeff. And we will see all you guys out there on Friday. All right.
Speaker 1:
[176:26] Hey, thanks for indulging me with all this time.
Speaker 3:
[176:31] Absolutely.
Speaker 2:
[176:31] Thank you, Jeff. We really appreciate your time. We had a question from Andrew. I'm going to ask you real quick. And this last question we're going to take. If I'm a Fortune 500 company, what is a Pentest going to cost me?
Speaker 1:
[176:48] It's probably a percentage of your revenue. The presumption is a Fortune 500 company is a mature enterprise, and so you're going to pay more. But there's a lot of, I mean, last time I looked, nine of the Fortune 10 companies and 98 of the Fortune 100 companies have to do PCI, at least in some part, and PCI is notorious for taking a very minimal approach to pen testing. So it could cost you a lot, but it's very much dependent on what you want to get out of it. If you want to do a pen test, the first conversation you should have is, what are the goals and the objectives? Because they are legion, and you need to understand what you're asking for before you ask for it, and you should expect to pay accordingly. Most companies aren't ready for it, even in the Fortune 500, frankly. I'd say maybe 10% of the Fortune 500 are really, really mature enough and ready for a pen test to really have a pen test. A pen test being no holds barred: can somebody get in by any means to do something. But again, that's the goal or the objective. Are they trying to steal something? Are they trying to gain access to something? Are they trying to prove a point? Are they trying to, whatever it is, exfiltrate data, lock the data? I mean, I don't know how many pen tests out there emulate a ransomware attack. I don't know. I'm going to have to ask my friends if they do that. I don't think they do that.
Speaker 2:
[178:29] When you talk about this full-scope pen test, you're not just talking about hackers or the technical aspect. You're talking about social engineering. You're talking about physical, like Deviant Ollam and those guys. You're talking about the entire gamut, correct?
Speaker 1:
[178:46] Yeah. I mean, and I apologize, because somewhere in the time that I took off from this industry, this term red teaming came about. What I call pen testing is comprehensive.
Speaker 2:
[178:58] Correct.
Speaker 1:
[178:58] But most people would call what I'm describing as a pen test these days a red team. It's Deviant Ollam, by the way. That's how you pronounce it.
Speaker 2:
[179:07] Okay.
Speaker 1:
[179:08] I said Olam for years.
Speaker 2:
[179:09] I said Olam.
Speaker 1:
[179:10] So we interviewed him, but it's Ollam. But yeah, I mean, no holds barred means somebody wants to go after you and they're going to do it by any means possible. It's not simple. Now, the presumption was, when the Internet came along, that the path of least resistance, the easiest way rather than physically having to go to a place and try to break into it, was like, oh, they're connected to the Internet. Let's try to get in over the Internet. But once defenses came up in terms of the technology and the network perspective, the physical type of thing was back on the table. And the irony is, if you really want to go after a particular company and you're motivated and you have resources, no holds barred means you'll try everything. There was a movie that came out, I don't know, in the 2000s maybe, Harrison Ford. It was called Firewall. No spoilers, but the premise of the movie is Harrison Ford's a firewall admin or a network admin at a bank. And the bad guys kidnap his family and put guns to their heads and said, give us the passwords, give us the YubiKey, give us the RSA key, you know, help us do the multi-factor authentication, log on to this firewall that will get us into the network, that will get us to the safe to steal the money, because we've got guns to your family's heads. You know, that's rather extreme, right? But for motivated nation-state bad guys that are really going after you, those are the measures that they'll go to. Most companies, you know, can't and shouldn't afford to pay for a simulation of that type of exercise, but you ought to at least talk about it. You know, tabletop it. What would happen if somebody did X, Y, or Z? But not everybody needs to worry about that, because most bad guys aren't going to do that, because it's easier just to launch the ransomware attack, send out the phishing attack, and just see who bites. And they're not targeting you specifically. They'll just target whoever takes the bait.
And if it happens to be a children's hospital and people die, you know, that's not what they're worried about.
Speaker 2:
[181:45] Right.
Speaker 1:
[181:47] Problematic world we're living in right now.
Speaker 2:
[181:50] Did we have anything on Patreon? Okay. Jeff, thank you so much. We deeply, deeply appreciate your time.
Speaker 1:
[182:00] I appreciate you giving me the time in the audience. And yeah, feel free, anybody that's listening to reach out to me. LinkedIn is probably the best way to find me. I do honestly try to respond to people happy to give back, happy to answer questions and mentor where I can.
Speaker 2:
[182:21] And check Jeff out on Paul's Security Weekly. It's P-A-U-L-S Security Weekly, correct?
Speaker 1:
[182:29] It is, but the website, if you go there, is just simply securityweekly.com.
Speaker 3:
[182:35] All right, guys.
Speaker 1:
[182:36] You'll find us there.
Speaker 3:
[182:37] We will see you guys on Friday. Take care out there.