Category Archives: Security

Defensive Security Podcast Episode 279

In Episode 279 of the Defensive Security Podcast, Jerry Bell and Andrew Kalat discuss the latest cybersecurity news and issues. Stories include Transport for London requiring in-person password resets after a security incident, Google's new 'air-gapped' backup service, the impact of a rogue 'Whois' server, and the ongoing ramifications of the MOVEit breach. The episode also explores workforce challenges in cybersecurity, such as the gap between the number of professionals and the actual needs of organizations, and discusses the trend of just-in-time talent versus long-term training and development.

 

Links:

  • https://www.bleepingcomputer.com/news/security/tfl-requires-in-person-password-resets-for-30-000-employees-after-hack/
  • https://www.securityweek.com/google-introduces-air-gapped-backup-vault-to-thwart-ransomware/
  • https://arstechnica.com/security/2024/09/rogue-whois-server-gives-researcher-superpowers-no-one-should-ever-have/
  • https://www.cybersecuritydive.com/news/global-cyber-workforce-flatlines-isc2/726667/
  • https://www.cybersecuritydive.com/news/moveit-wisconsin-medicare/726441/

Transcript:

Jerry: [00:00:00] Here we go. Today is Sunday, September 15th, 2024. And this is episode 279 of the defensive security podcast. My name is Jerry Bell and joining me today as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. Happy Sunday to you.

Jerry: Happy Sunday. Just a reminder that the thoughts and opinions we express on the show are ours and do not represent those of our employers or.

Andrew: present, or future.

Jerry: For those of us who have employers, that is. Not that I'm bitter or anything. It's,

Andrew: It’s, I envy your lack of a job. I don’t envy your lack of a paycheck. So that is the conflict.

Jerry: It’s very interesting times right now for me.

Andrew: Indeed.

Jerry: All right. So our first story today comes from Bleeping Computer. And the title here is "TfL," which is Transport for London, "requires in-person password [00:01:00] resets for 30,000 employees." So, for those of you who may not be aware, Transport for London had suffered what I guess has been described as a nebulous security incident.

They haven't really pushed out a lot of information about what happened. They have said that it does not affect customers. But it apparently does impact some back office systems and did take certain parts of their services offline. Like, I think they couldn't issue refunds, and there were a few other transportation-related things that were broken as a result.

But I think in the aftermath of trying to make sure that they’ve evicted the bad guy who, by the way, apparently has been arrested.

Andrew: That’s rare. Somebody actually got arrested.

Jerry: yeah. And not only that, but apparently it was somebody local.

Andrew: Oops.

Jerry: In the country, which may or may not be associated with an unknown, named [00:02:00] threat actor, by the way, that was involved in some other ransomware attacks.

Andrew: Kids don’t hack in your own backyard.

Jerry: That's right. Make sure you don't have extradition treaties with where you're attacking. So what I thought was most interesting was their approach here to getting back up and going. TfL had disabled access for all of their employees and is requiring their employees to show up at a designated site to prove their identity in order to regain access.

This isn't the first organization that's done this, but it is something that I suspect a lot of organizations don't think about the logistics of, in the aftermath of a big hack. And if you're a large company spread out all over the place, the logistics of that could be pretty daunting.

Andrew: Yeah. It's wild to me that they want in-person [00:03:00] verification of 30,000 employees. But given the nature of their company and business, I'm guessing they're all very centrally located, used to going to physical offices. But man, can you imagine if you were a remote employee and you don't have any office anywhere near you, how would you handle that? I'm probably not going to get on a plane to go get my password re-enabled.

Jerry: Exactly.

Andrew: You know what it did remind me of, though? Remember back, PGP and PGP key signing?

Jerry: Oh, the key parties. Yes.

Andrew: Yes. Where, you basically, it's a web of trust, and people you trust could verify and sign another key. Like at a key signing party, because we were fun back then; that's what nerds used to do. And then that's how you had the circle of trust. So maybe they could do something similar, where a verified employee could verify another employee, but then you've got the whole insider threat issue, et cetera. Yeah, it just reminded me of that.

Jerry: No, nobody trusts Bob.

Andrew: [00:04:00] It's true. Your friend Bob, how many times has he been in prison? Most recently, like, where, Rwanda? I think I heard,

Jerry: He’s got the frequent visitor card.

Andrew: but yet has some of the best stories.

Jerry: He does, he definitely does. So apparently they make reference to a similar incident that happened at Dick's Sporting Goods, I will emphasize the Sporting Goods. They had a similar issue, and that is a nationwide retailer, here in the U.S. at least; I don't know if they're outside of the U.S. And so that really wouldn't be possible like it is with Transport for London.

I assume that most of the people associated with Transport for London are local, or within a reasonable driving or commuting distance, as the case may be. But in the situation with a nationwide retailer, I think they had to go with virtual in-person. So they basically had Zoom meetings [00:05:00] with employees, and I assume had them show pictures of their government ID and so on.

So the logistics of that is interesting. It isn't really something I've spent a lot of time thinking about. But I know in the aftermath of a big attack like this, establishing trust and certainty in who has access to your network would be super important. So I think it's worth putting into your game plan,

Andrew: Yeah, it is. It is a wild one. And what do you trust? Especially in the age of, deep fakes and easily convincing AI copies of other employees. And I don’t know, it’s an interesting one.

Jerry: right?

Andrew: Ciao.

Jerry: Our next... yeah, it was certainly an unfolding story, which I don't think is over yet, based on everything I'm reading.

Andrew: I did see one quote in here that made me chuckle, which is a quote the transport [00:06:00] agency added on their employee hub: "Some customers may ask questions about the security of our network and their data. First and foremost, we must reassure that our network is safe." Okay, define safe. That's just us being safe-ish.

Jerry: Safe-ish. Safe now,

Andrew: Safe, safe-y. It resembles something that is sometimes called appropriately safe. Based on the criteria that we came up with, it's completely safe.

Jerry: Which I'm sure is true, because they had also had a Clop ransomware infection, I guess, a couple of months prior to this. So

Andrew: What do you use for Clop? Is that like a cream? Is that like a, how is that treated, typically?

Jerry: Every time I hear Clop, it takes me back to the Monty Python coconut horse trotting.

That's what I think about when I hear the word Clop,

Andrew: That's fair.

Jerry: [00:07:00] Which is oddly appropriate given that this is in the UK, which is where Monty Python hails from.

Andrew: I thought you'd say where they have coconuts.

Jerry: Only if they're transported by swallows.

Andrew: You youngins will just have to go.

Jerry: Gotta go watch that movie. Alright, it's worth it. By the way, I remember making both my sons watch it, and they protested. And now, I think they've each seen it like 30 or 40 times,

Andrew: So when you say protested, did you, like, have to duct tape them to a chair and pry their eyes open and do a whole, yeah, Trainspotting situation?

Jerry: I think they thought it was like an actual movie about the Holy Grail.

Andrew: Which, why would they be opposed to that? That could also be interesting.

Jerry: I don’t know.

Andrew: Indiana Jones did a fine movie on it.

Jerry: It’s true. But it, that does not hold a candle to [00:08:00] the Monty Python Holy Grail movie. Let’s just be

Andrew: We, we learned a lot. We learned about facing the peril. We learned that Camelot is a silly place. And we learned how to end a movie when you don’t have a better plan. Again, way off topic, but you young’uns will just have to go discover. Do you,

Jerry: So back on topic, our next story comes from Security Week. And the title here is "Google introduces air-gapped backup vault to thwart ransomware." And I'm going to put quotes, as they do, over "air-gapped," because as they describe it, it is logically air-gapped, not actually air-gapped. And by the way, I don't necessarily mean to take away from the utility of the solution that they're offering here, but calling it air-gapped, I think, is maybe a little bit of a misnomer.

So they, they being [00:09:00] Google, are offering a service where you, as a Google Cloud customer, can store data backups to a storage service that does not appear as part of your cloud account. It's part of a Google-managed project that is transparent to your account. So if somebody were to take over your account, for example, or to compromise systems within your account, they actually wouldn't be able to do anything with that backup, which I think is pretty smart. The one thing that I was wondering: obviously, you are not necessarily protected in the case that Google's cloud itself becomes the victim of something bad, but that is kind of a theoretical issue at this point.

But the one that concerns me a bit is what happens, as we have seen in some other cases. [00:10:00] There was a company, I'm forgetting the name at the moment, whose AWS account was basically deleted, and they had all of their data, all of their backups, in their cloud account. They had it split across different availability zones, and it didn't matter, because the actor actually deleted everything in their account, and I believe they actually deleted the account itself.

And I do wonder the same thing here: if your account were to be taken over, would that backup persist? Would you have the ability after the fact to prove to Google who you were and be able to resurrect that? I,

Andrew: Do you mean the one that happened accidentally that Google did with that Australian pension fund or like a bad actor getting in and deleting it?

Jerry: Bad actor that got in.

Andrew: Gotcha. Yep.

Jerry: There was a it was a GitHub competitor,

Andrew: Yes.

Jerry: [00:11:00] can’t remember the name. It was

Andrew: I will look at,

Jerry: several years ago. Yeah, I do think, and I say this an increasing amount, I do think we are on the cusp of much more aggressive, what I'll call cloud-native attacks, where adversaries are actually attacking not just the workloads in the cloud, but the cloud resources themselves, the cloud accounts and whatnot.

So I think as time goes on, things like this are going to become much more important, and questions like what I just asked, I think, are going to become increasingly important to

Andrew: Yeah, it's interesting. It makes sense, first of all, to make sure that if I've got a bad actor or ransomware or whatever that's out there deleting things, I don't want it to just delete my backups, which is something we've always talked about could be a weakness in your automated systems.

If they've got full admin rights into your cloud environment, what stops them from going [00:12:00] after your backups? So that makes sense. It is interesting how strong that quote-unquote logical air gapping is. It makes me wonder a little; somebody should probably test it. But I'm surprised this wasn't offered before, honestly. It also makes me think, remember the days when we used to back up to tape and send those tapes off site to underground storage facilities? And

Jerry: And half the time the tapes would fall off the truck

Andrew: right.

Jerry: and spilled out on the freeway. Yes,

Andrew: And you never test restoring them, and then when you do need to restore them, it’s gonna take 43 months and half of them are bad.

It was a weird time.

Jerry: You'd recall the tapes, and the tapes would come back in a locked box, and there'd be tapes missing.

Andrew: Right.

Jerry: It was just like the grand old days. I don't know why we don't still do that.

Andrew: I won't go on the we're-old rant, but boy, it makes me feel old. But this makes sense. What I'm also curious about, I haven't looked into this, is how many versions of backups do you [00:13:00] have? Because the other thing I think about is, you've got ransomware, and it automatically backs up. How many iterations in am I, or am I just backing up encrypted data I can't restore because it's encrypted?

The backup system doesn't know the difference. It's just backing up an iterative change. So that's something else to think about: okay, how many snapshots back can I go? Because that starts to get expensive. But if I'm just automatically backing up my encrypted data, oops. It's interesting, and I like the concept, and it's meant to fight one particular source of pain, which is ransomware deleting your backups.

Jerry: Yeah. I really liked the concept too. I think things like this are going to become increasingly important as time goes on. Happy to see things like this starting to emerge,

Andrew: Indeed.

Jerry: but now, again, it comes back to making sure that it is actually working.

Andrew: Yeah. And testing, like, a restore, and [00:14:00] do the assumptions you have work?

And that's one thing, not to go off on a bit of a side rant, that I see a lot: organizations don't have enough time built into their IT or security schedules to actually test these things. They just go, oh, we think it's going to work.

And the first time they test it is during a crisis, which is a terrible idea. You want to be able to test when you're not in crisis mode and see how well this stuff works.
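Testing a restore outside of a crisis, as discussed above, can be largely automated. Here is a minimal sketch in Python of the idea, not tied to any particular backup product; the archive path and the JSON manifest of expected SHA-256 hashes are assumptions made for the example.

```python
import hashlib
import json
import tarfile
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(archive: str, manifest: str) -> bool:
    """Restore a backup archive to a scratch directory and compare every file
    against a manifest of {relative_path: sha256} captured at backup time."""
    expected = json.loads(Path(manifest).read_text())
    ok = True
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)  # the actual "restore" step
        for rel_path, want in expected.items():
            restored = Path(scratch, rel_path)
            if not restored.exists():
                print(f"MISSING  {rel_path}")
                ok = False
            elif sha256_of(restored) != want:
                print(f"CORRUPT  {rel_path}")
                ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical paths; point these at a real backup and its manifest.
    print("restore OK" if verify_restore("backup.tar.gz", "manifest.json") else "restore FAILED")
```

Run on a schedule, a check like this turns "we think the backups work" into evidence, which is the point being made here.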

Jerry: Absolutely. Our next story comes from Ars Technica, and the title is "Rogue Whois server gives researcher superpowers no one should ever have."

Andrew: This one was crazy.

Jerry: Yeah. So there's a company called watchTowr, which, as is the way with all things tech now, isn't spelled correctly. I won't hold that against them. One of their researchers found during their stay at Black Hat that the .mobi top-level domain had recently changed the location of its Whois [00:15:00] server.

So previously it was a domain hosted on the .net top-level domain, but apparently at some time in the recent past, they moved that to, unsurprisingly, a name hosted on the .mobi TLD. And I guess through probably some bit of corporate cost savings or missteps, I don't know.

They let the .net version of that domain expire, which is problematic. And so this person realized that, registered the domain, and then actually started seeing legitimate Whois requests coming in. And then they set up a Whois server and found that they would have had the opportunity to do quite a few bad things, like creating TLS certificates [00:16:00] for domains, because VeriSign and others were still pointing their Whois queries to the old .net domain.

So they hadn't completely switched over from the .net domain to the .mobi domain, and as a result chaos ensued, and it's really hard to put bounds on how bad this could be, right? They go through quite a few different situations. This could have allowed, for example, intercepting email and lots of different telemetry-based attacks.

But I don’t even know that we have a good handle on the art of the possible when something like this happens.

Andrew: Yeah. Plus the TLS certificate trust that comes natively with this, which is massive. That just can cascade into a whole bunch of shenanigans when you can [00:17:00] own the authority around TLS certificates for an entire domain like that. That's huge.

Jerry: Which they were able to do in this instance. So, really bad for sure. I thought it was interesting because in my former role, I saw lots of situations similar to this. And not just in my immediately former role, but in lots of former roles: companies often register or create internal domains.

And those domains sometimes start off as, trying to think of a good example, let's say .fun, it's a stupid one, right? When you created your Active Directory domain back in 1997, that TLD wasn't around, but over the

Andrew: Right.

Jerry: That [00:18:00] did become a domain, and nobody thinks twice about it. And suddenly now you're susceptible to a whole class of attacks. And I think there's a broad range of problems that the industry has associated with domain names expiring. For example, a lot of companies, as they acquire other companies, transition that company's email to the acquiring company's domain. And over time, sometimes, not all the time, they let those old domains expire, somebody comes along, and you can pretty much guarantee that there's still almost certainly valid email going to that domain. And so I think there's this whole class of problems that we don't often think about.

It's a super simple and dumb problem space that has emerged [00:19:00] around domain clashes, domain problems, people letting domains expire. I don't feel like this is something that is well represented in different security frameworks and policies and whatnot, because it's often in the corner, but as this proves, it has been and can be a big source of problems.

And so I think it's really important to keep your eye on this.
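One way to "keep your eye on this" is to periodically check the registration status of every domain the company depends on, including acquired-company domains and old internal suffixes. Below is a minimal, hedged sketch that speaks the plain Whois protocol (RFC 3912, TCP port 43); the Verisign server only answers for .com and .net names, so the server choice and the loose expiry-line matching are assumptions made for the example, and real tooling would need per-TLD handling.

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com", port: int = 43) -> str:
    """Send a raw Whois query (the name followed by CRLF) and return the reply."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def expiry_lines(domain: str) -> list[str]:
    """Pull out any lines that look like registrar or expiry information so a
    human (or a cron job) can flag domains that are about to lapse."""
    reply = whois_query(domain)
    return [line.strip() for line in reply.splitlines()
            if "expir" in line.lower() or "registrar:" in line.lower()]

if __name__ == "__main__":
    # Hypothetical watch list: your own domains, acquired-company domains, old AD suffixes.
    for name in ["example.com", "example.net"]:
        print(name)
        for line in expiry_lines(name):
            print("  ", line)
```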

Andrew: Yeah, I agree completely. And it's to the point you made earlier about ADs or internal domains being set up, and then suddenly, many years down the line, that becoming a new top-level domain. It reminds me of when people didn't follow RFC 1918 and used random IP addresses that later are routable, and they can't figure out why they're having weird routing [00:20:00] issues talking to certain parts of the internet and not others. And it's like, there's

Jerry: That

Andrew: got to watch that.

And what's interesting is, and I say this with all respect, a lot of folks today don't understand how the plumbing of the internet works anymore. It's been abstracted away from them. And for a lot of people, this sort of problem with DNS reminds me a little bit of how fragile BGP is.

And very few people really understand BGP anymore. They don't have to, they don't need to know it. That's a SaaS provider problem, that's a cloud provider problem. But it's very much a real problem. Like, you and I, at one point in our careers, went through the process of registering for our own /19 and figuring out all the fun of what it took to route that and share that, and all those things that came with it, which I think was valuable, not to just pat ourselves on the back. But it's interesting today when you go talk to people about some of the complexities of DNS; they have no idea. They don't know how all this works. They don't know that this is even a susceptible problem, because I think there's this inherent [00:21:00] belief that there's just some overriding authority managing all the top-level domains and all the top-level Whois servers. There's not. Be careful.
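The RFC 1918 point lends itself to a simple sanity check: flag any "internal" address that is not inside the three private ranges, since that space is, or may become, publicly routable. A small sketch, with a made-up host list standing in for a real inventory:

```python
import ipaddress

# The three RFC 1918 private ranges. Anything "internal" outside these is
# squatting on space that is (or may become) publicly routable.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

if __name__ == "__main__":
    # Hypothetical inventory of "internal" hosts pulled from IPAM or DNS.
    internal_hosts = ["10.1.2.3", "172.20.0.5", "192.168.1.10", "30.40.50.60"]
    for addr in internal_hosts:
        if not is_rfc1918(addr):
            print(f"{addr} is not RFC 1918 private space -- expect weird routing someday")
```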

Jerry: Yeah, definitely. Definitely. All right. The next story is a bit of a follow-up to one we talked about last time. It comes from Cybersecurity Dive, and the title is "Global cybersecurity workforce growth flatlines, stalling at 5.5 million pros." This is based off of a report released by ISC2, which, for those of you who don't know, are the people who create and maintain the CISSP and a bunch of other certification programs. What they identified is that the growth of the cybersecurity workforce was a tenth of a percent year over year, which is interesting. [00:22:00] Like, from five-ish million to 5.5 million. It

Andrew: Wait, that’s not a tenth of a percent. that’s, 10%.

Jerry: You're right. 5.45 to 5.5. There you

Andrew: There you go.

Jerry: you. I can do math. I

Andrew: I’m here to help. I’m here to help.

Jerry: promise. But this was the first time that the growth has really stalled in quite a few years.

What I found most interesting with this particular report and this particular article is that it explained something that we continue to talk about, both on the show and as an industry: the dichotomy between people's experience in trying to get a job in security and the way that the industry talks about the number of unfilled [00:23:00] security jobs. Because those two things, as we talked about last time, aren't in concert, right? There's a gap somewhere. And this one, for the first time, started to explain it in a way that made sense to me. And what they describe is that the workforce, the number of people who are employed in the security sphere, went up very quickly.

The number of people that are needed to keep companies secure, as identified through interviews with companies, is growing dramatically, and outpaces by a large margin the number of people who are qualified to work. Where it [00:24:00] breaks down is that just because I say that I need 50 more people on my team to keep our company secure,

it doesn't mean that I get to go hire 50 people. It just means that in order to do what I think is a responsible job, and I'm making this up completely, by the way, in order to do a good job of keeping my company secure, I would need 50 more people than I have. And so

Andrew: Right.

Jerry: then gets counted in the total number of these quote-unquote unfilled security roles,

Andrew: Really that’s just the,

Jerry: exist.

Andrew: That’s just the beginning point of negotiation for your budget.

Jerry: Yes. Yes.

Is yes.

Andrew: So when they refer to workforce, do they mean the number of people employed in the cybersecurity industry, or the [00:25:00] number of people available to fill jobs in the industry?

Jerry: They’re talking about the number of people butts in seats.

Andrew: Okay. So there could be, if they're saying there's 5.5 million people in the cybersecurity workforce collecting a paycheck, but there's 10 million qualified people seeking jobs, that's one of your gaps, right? There's just not the jobs out there for the number of qualified people. Which, if that's true, which we've heard the opposite, there's a skills gap and there's a capability gap, which could go back to some companies maybe asking for the wrong things, like 10 years of experience in a technology that's been around for two years, which we've seen over and over again. Or, if there's too many people chasing too few jobs, it can drive down salaries. So I don't know, it's interesting. If people are willing to accept jobs for less, basically in competition with somebody else, that can also depress wages, or at least cap [00:26:00] growth. So I don't know. We keep hearing, to your point, very conflicting things about the market in the industry, including, hey, we don't make it easy for new people, or entry-level roles, or mentoring, or journeyman roles, or ways to bring people in so that we can build people up. And if you want to hire experienced people, where do they start getting experience?

So I think some of that comes to play too.

Jerry: I think it's all intertwined, right? In the article, they point out that there are 5.5 million butts in seats in the security sphere. They believe, based on their data, that there's a need for 10.2 million people, right? So that creates a big gap. But again, that doesn't mean that there's 4.7 million unfilled jobs.

Andrew: Yeah. I certainly don't see those job listings,

Jerry: It means [00:27:00] that, at a top level, we think that in order to do a responsible job of protecting every company, we would have to have 4.7 million more people working in security than are available today. But where I think this folds back into what you were saying about wages is that, for a long time, security people have had it great.

And I say that as one of them; we were pretty highly compensated. And so it's a difficult thing, especially as of late, to continue adding more and more people to your payroll at the salaries that people are getting. And so there is part of me... as we talked about last time, the U.S. government is launching an initiative to train up hundreds of thousands more people to enter the workforce. The reality is, those people are going to be [00:28:00] competing with people who are already unable to find jobs, but the net effect, I think, of that is going to be deflationary

on cybersecurity job salaries.

Andrew: It’s possible.

It’s, yeah.

Jerry: And then in doing so, theoretically, companies will be able to hire more of them.

Andrew: Yeah, I think the danger is always, is that training going to align with what companies need?

Jerry: I don't think so, because I think we have created this... and I know that we've gone way off topic for a security podcast. But look, I managed a very large team inside of a very large company, so I had an interesting vantage point. What I observed is that [00:29:00] companies have adopted this position of what I refer to as just-in-time talent.

We create this profile of expectations of what people need to have in order to come on board for an entry-level role, like you've got to have 10 years of experience and you've got to know all of these very specific security tools, for an entry-level role.

Like, how do you get an entry-level role if you don't have that? You end up in this kind of catch-22. But on the other side, one of the concerns I've got is that, as an industry, for a long time security people came out of IT, right? You came out of application development or system administration or network engineering or help desk.

And a lot of these people had a [00:30:00] very broad and deep background in, maybe not every aspect of IT, but in lots of aspects of it. And now security has become a field unto its own. And so you go through school and you graduate with a degree in security, and it's all been about security, and not necessarily about the implementation and operation of it inside companies.

And, by the way, I'm not in any way downplaying the importance of the stuff that you learn in school. What I'm saying is, I think you come out lacking some of the important context that you need in order to be effective.

The other side is that a lot of that context tends to be pretty specific to a company.

And I think where we're at is that companies have largely lost the patience, for whatever [00:31:00] reason, to train people, to do on-the-job training and grow people. And that scares me, not only from the human aspect, but also from the ability-to-be-effective aspect, because now I think we're artificially governing the effectiveness of people, because we've got people with relatively narrow sets of skills coming into the workforce. And I suppose in some instances that's okay, but I don't know, maybe I'm just getting old.

Andrew: No, I agree with your point. And again, I'm also getting old, but I find there are very few generalists anymore. Everybody's very hyper-specialized. And I think that's a bit of a shame. Yeah, you could be super good at one particular thing, and that's very valuable, and there's value in that, but I also find a lot of value in [00:32:00] generalists who come into security. They just have such a breadth of understanding of how these things are supposed to work together, and what's normal, that I think it brings a lot of value to the job.

And it goes back to what we were saying earlier: people don't understand DNS, they don't understand BGP, they don't understand IP routing, because they don't have to. And I guess that's okay. I guess maybe the world has gotten so complex that this is the way it needs to be, but I do think it's a real shame. Big companies, like IBM, massive companies, those are the types of companies that I think should be able to grow their own talent with mentorship, and the whole concept of the way we used to do things with apprenticeship, raising people up and giving them that opportunity to grow and build that skill set. And maybe their salary is a little low initially, but as they grow, hopefully that skill set will grow and the salary will grow. Or, [00:33:00] sadly, they probably will just bounce to another company. That's, I think, what companies worry about:

you train them and they leave. What if you don't train them and they stay?

Yes, that's the way I'd counter that, but it is a problem. I don't know that I have a solution, but I'm a big fan of trying to promote people who are interested from IT into security. Not that one is better than the other, but I do think those folks with IT backgrounds have a lot of basic understanding that really helps with general security engineering and SOC work.

The other thing I'd say is, it takes a long time to ramp up. I don't know that companies respect that anymore. It may take six months to a year to really be effective in, at least, a security operations role and understand what normal is for a company. And it feels like everybody's moving too fast for that now. I guess this is the whole get-off-my-lawn speech. It's an interesting problem.

Jerry: From an individual standpoint, I think it's [00:34:00] clearly a much more competitive market than it used to be. And I think it's becoming increasingly important for people who are serious about getting into it and finding a job to be able to differentiate yourself. And I know that's heretical to say in some circles, but if you want the job... I'm not saying that you have to work 200 hours a week, but you've got to be able to separate yourself from the pack. Otherwise, I don't know what to say; you'll be looking for work for a long time.

Andrew: Just don't start a competing defensive security podcast. We don't need the competition, please.

Jerry: No, we definitely don't. By the way, for those of you who don't know, I recently lost my job, and it's okay. Not complaining. It's actually been an amazing experience. And I've been working with a career coach who's awesome. By the way, if you have the opportunity to work with a career [00:35:00] coach, that's probably one of the best things, because they can call bullshit on you; they hold you to account.

But one of the things that mine told me was that this is a difficult economy right now in which to find a job. It takes a long time, and a lot of false starts and a lot of tries, to find a job right now. And I don't know if it's at a historical low, I don't know, but it's definitely difficult. I've got kids that have recently graduated from college, and I look at the struggles they are having with finding jobs as recent college graduates, and it's just a difficult economy, and I don't see that getting better anytime soon. Maybe when and if interest rates go back to negative, then we'll start seeing lots of startups again, but I don't know.

Andrew: I do think, [00:36:00] certainly, this is a well-trodden road that other people do a better job talking about than I do. But I think there are certain roles that we have treated poorly culturally, like blue-collar jobs and trade jobs, that have a huge, massive shortage of workers and are desperate for workers. And those are good-paying jobs with great benefits, but we went down this path of everybody needs to go to college and everybody needs a white-collar job. And I think that's not great for people or our society. And the other thing I'd be curious about: you're seeing both sides of it.

You're at a very senior point in your career, and my first thought would be, is it tougher to find a senior-level job? Because there's less of those, in theory. But you're also saying your kids right out of college are basically looking for their first major career job, which is the opposite end of that spectrum, and they're struggling. I did tell you underwater basket weaving would be a tough role for them to find a career in, but they insisted.

Jerry: You [00:37:00] warned them. It's fair.

I think it's all up and down the scale. So certainly for me, if I do another thing, it'll probably be another more senior-level executive-type role. I don't know if it'll be a CISO again. That was hard, and I don't know if I've got that in me again.

It was fun for sure. When I talk to my kids and other young people, one of the bits of feedback I get is there’s been a lot of people who have lost their jobs. And I think this is also true, maybe particularly in the IT space, lots of layoffs in the IT world over the past 18 months.

And those people have experience, and they're unable to find jobs necessarily at the same level that they were at. And so they may be competing... I guess what I'm trying to say is, entry-level people who are coming out of college may very well be competing against [00:38:00] people who are not entry level for entry-level jobs, because those other people can't find other jobs.

And so they're trying to find any kind of work. And so people entering the workforce are not competing against other people entering the workforce. They're competing with other

Andrew: yeah.

Jerry: people who may have experience, who have recently lost their jobs. And I think that's... it is what it is,

A challenge.

So

Andrew: Yeah. One last thing I'll say on this is that, in theory, the unemployment rate is low. So are we just going through a cyclical change, where those jobs are moving to other areas outside of IT? I don't know. See, this is the challenge. You see this conflicting data of, we have all these unfilled jobs, but then people can't get hired.

I don’t know. I don’t know what to say.

Just, I'm thankful I have a job, and I will do my best not to be as frustrated tomorrow, Monday morning, as I normally am.

Jerry: I’m thankful to be in the spot I’m [00:39:00] in, even though I don’t have a job,

Andrew: You do have a job. Your job is to entertain me on this podcast and our 12 listeners.

Jerry: hopefully I’m doing it, doing okay.

Andrew: You’re meeting expectations.

Jerry: Good, good. All right. Our last story also comes from Cybersecurity Dive. And the title here is "MOVEit victims are still coming forward. This time it's Wisconsin Medicare." There isn't anything necessarily new here. We obviously were on hiatus when the big MOVEit breach happened, in the second quarter of 2023, but we are now about 18 months on, and we're still hearing about net new victims of this. And I find it just mind-boggling. Now, this particular entity reported to the Centers for Medicare and [00:40:00] Medicaid that they were breached back in July, but presumably the actual breach happened much earlier than that and was only recently detected.

And I don’t know why that is. I don’t know if

Andrew: I certainly hope that, I certainly hope that like they hadn’t gone unpatched all this long and just suddenly got popped.

Jerry: I doubt that, but it's possible, right? It's

But the thing that I've been concerned about, and again, this is an asset management problem, is that I don't know how many of these things were out there that companies didn't realize,

Andrew: yeah.

Jerry: Like being managed by a subcontractor to a subcontractor and Hey, the magic just happens.

I pay the bill and the files just appear. I don’t know how it happened.

Andrew: Yeah, I look at this as understanding your attack surface. To me, you really need to understand everything that's associated with the company that's open to the internet. I know that's not the only way to attack [00:41:00] things; I know that things start at endpoints with phishing attacks. But nonetheless, for these sorts of widespread, vulnerable, remote-code-exploit sorts of things, you have to know what's open to the internet that you're associated with.

Jerry: Yes.

Andrew: I just feel like that's table stakes. You've got to know your attack surface, and a lot of companies don't, because, one, it's a tough problem, but two, it's not something that a lot of companies spend time, I think, worrying about. But I think this is a great example. What if this has been sitting out there and just got ignored? Nobody's really maintaining it, nobody's really patching it, nobody really knows. But I think in cases like this, if you're a security organization and something like MOVEit pops, it'd be nice to look at a list and say, huh, do we have MOVEit? Oh yeah, we do. We should go fix that.

Jerry: And not only do you have it, but do your suppliers have it

Andrew: Send them an Excel sheet for them to fill out. You'll be fine.

Jerry: then have them send it to their suppliers and their suppliers send [00:42:00] it to their suppliers.

Andrew: And eventually it just circles back around.

Jerry: Turtles all the way down. You know what the other issue was? Again, because we didn't have a chance to talk about it at the time, the other issue I think that really exacerbated this MOVEit situation: obviously this was a very widely used tool, which blows my mind.

It was much more widely used than I ever would have thought. A file transfer tool made by Progress Software, I think it was Progress Software, right? Anyway, whoever maintains it; sorry, Progress, if I got you wrong, you've got your own problems, so I'm not going to feel too bad.

The issue is, this application had a vulnerability that made it very trivial for adversaries to pull files off of the appliance. One of the things that came out in the aftermath of that is that people would allow files to be uploaded, and then they'd just sit [00:43:00] there. Like, they would obviously copy them off, but they wouldn't clean them out.

And so

Andrew: Look, that's not part of my KPIs, man. My job is to get the file over there, not delete it later.

Jerry: right.

Andrew: Look, I’m,

Jerry: thing that isn't in frameworks. I was listening to a book today, and they made reference to the old George Box quote: all models are wrong, some models are useful. And I got to

Andrew: yeah,

Jerry: like all security frameworks are wrong. Some are useful and

Andrew: What we need is another framework of the useful bits.

Jerry: Totally, absolutely. And then it's the old XKCD, I forget which number that was. But again, I struggle, because there isn't something that makes it obvious that, hey, that's a problem. [00:44:00] It's intuitively obvious in hindsight that you shouldn't store files forever on the damn file transfer tool, right?

Like, you should be cleaning that off periodically, or in real time as you're pulling data off of it, but that's not what happens. And for the most part, how many companies' security policies would say that? I don't know that there's many. Is that part of ISO 27000 or PCI?

Or, no, it's not very clearly enumerated, but it is super important. The thing that is enumerated is: you've got to patch the thing, know the thing exists. But I think there's also a very... did I lose you?

Andrew: Yep. My Chrome.

Jerry: There's a very real problem with data minimization. And I don't mean that in terms of... we've talked about it [00:45:00] in the context of, you shouldn't take every stinking piece of data from your customers and squirrel it away. I'm talking more about, does that data have to sit there? Can, or

Andrew: Right.

Jerry: can you move it? And that's especially important when you've got something sitting on the edge, right? This was a device that was exposed to the internet.

Andrew: Yeah. The tough part is, probably 95 to 99 percent of the time, that's never a problem. And cleaning up old files is probably not high-value leverage work for a lot of employees, but

It’s like a whole data classification system. Nobody wants to do it. It’s too much of a pain in the butt until the one time it bites you.

Jerry: Yeah. I think the other thing that bothers me a little bit about this is that companies will make that tradeoff, right? Like, sure, I could pay [00:46:00] Bob to sit there and delete those files, or I could pay Bob to go do something more productive. But it's the people whose data is represented there who are actually the ones that are harmed in this, and they don't get a,

Andrew: Sure.

Jerry: And that’s right.

Andrew: An easy solve may be just an auto-expire, like 30 days and it's auto-deleted.

Jerry: Which comes back to

Andrew: And you just,

Jerry: responsibility, should that have been the default setting?

Andrew: yeah,

Jerry: I

Andrew: it’s,

Jerry: I don’t know. Anyway,
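The auto-expire idea Andrew floated a few lines up, delete anything older than 30 days, can be as simple as a scheduled cleanup script if the transfer product doesn't offer retention settings of its own. A minimal sketch, with the upload directory and retention window as assumed values:

```python
import time
from pathlib import Path

RETENTION_DAYS = 30                          # assumed policy: nothing older than 30 days survives
UPLOAD_DIR = Path("/srv/transfer/uploads")   # hypothetical upload landing directory

def purge_old_files(root: Path, retention_days: int = RETENTION_DAYS, dry_run: bool = True) -> int:
    """Delete (or, with dry_run=True, just report) files whose modification time
    is older than the retention window. Returns the number of files affected."""
    cutoff = time.time() - retention_days * 24 * 60 * 60
    affected = 0
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            affected += 1
            if dry_run:
                print(f"would delete {path}")
            else:
                path.unlink()
    return affected

if __name__ == "__main__":
    # Run from cron; start with dry_run=True to see what would go away.
    count = purge_old_files(UPLOAD_DIR)
    print(f"{count} file(s) older than {RETENTION_DAYS} days")
```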

Andrew: It was Progress, by

Jerry: was

Andrew: the way. I did confirm it was indeed Progress. Yes.

Jerry: yes. They’ve had a long run of spectacular F ups.

Andrew: Your old man memory was accurate in this case.

Jerry: Back in my day, progress was a database.

Andrew: That’s [00:47:00] true.

Jerry: Surprised to hear that Progress is all this other crap, and apparently no database. So, times are funny. Funny what happens over

Andrew: They lost it somewhere.

Jerry: All right.

Andrew: Anyway.

Jerry: I think we've peaked and we're on our way back down, and so we will end it here.

Andrew: Oh, I hope people enjoyed our first video podcast, the defensive security show.

Jerry: We will do better next time. I’m

Andrew: It only took 279 episodes. Yes, we will do better next time. And we had a little technical bobble there. I don't know how much it's going to show up, but hopefully we'll get it sorted out.

Jerry: Yeah, your browser won't stay running, huh? Call the neighbor kids to come look at it.

Andrew: I’m just hoping I didn’t lose too much of my side of the recording. [00:48:00] That’s all

Jerry: good point.

Andrew: we’ll see. We’ll sort it out. But anyway,

Jerry: Thank you for listening. You can find the show and all of our previous episodes on our website at www.defensivesecurity.org. You can find the podcast on just about every podcast service under the sun, and if we aren't on one, let us know and we'll get that fixed. You can follow Mr. Kalat on X, formerly Twitter. I really hate that name, by the way. It's just like

Andrew: I still call it Twitter, I'm old,

Jerry: Oh, go ahead. Where can they find you?

Andrew: On both Twitter and infosec.exchange, at lerg, L-E-R-G.

Jerry: All right. Good deal. You can find me on infosec.exchange at Jerry. And with that, we will talk to you again real soon.

Andrew: Have a great week guys. Bye bye.

 

Defensive Security Podcast Episode 274

https://www.bleepingcomputer.com/news/security/over-3-000-github-accounts-used-by-malware-distribution-service/
https://blog.knowbe4.com/how-a-north-korean-fake-it-worker-tried-to-infiltrate-us
https://arstechnica.com/security/2024/07/secure-boot-is-completely-compromised-on-200-models-from-5-big-device-makers/
https://www.darkreading.com/cybersecurity-operations/crowdstrike-outage-losses-estimated-staggering-54b
 https://cdn.prod.website-files.com/64b69422439318309c9f1e44/66a24d5478783782964c1f6f_CrowdStrikes%20Impact%20on%20the%20Fortune%20500_%202024%20_Parametrix%20Analysis.pdf
https://www.darkreading.com/vulnerabilities-threats/unexpected-lessons-learned-from-the-crowdstrike-event

Summary:

Episode 274: Malware on GitHub, North Korean Developer Scam & Secure Boot Failures In this episode of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat discuss several notable security stories and issues. They start with a malware distribution service that leverages compromised GitHub accounts and WordPress sites. They then cover a security warning from KnowBe4 about hiring a supposed North Korean agent as a senior developer. They dive into the significance of two separate vulnerable firmware signing keys affecting over 500 hardware models. Lastly, they explore the massive financial impact of the recent CrowdStrike outage, with losses estimated at $5.4 billion. Throughout the episode, the hosts provide insights, potential solutions, and share personal experiences related to these cybersecurity challenges.

00:00 Introduction and Casual Banter

00:30 Funemployment and Retirement Reflections

01:54 Disclaimer and First Story Introduction

02:17 Malware Distribution via GitHub

04:24 WordPress Security Issues

08:09 North Korean Developer Incident

14:36 Lessons Learned and Recommendations

23:27 Secure Boot Vulnerabilities

29:19 Cloud Providers and Firmware Security

30:47 The Epidemic of Leaked Keys on GitHub

33:35 Challenges in Development and Security Practices

35:36 CrowdStrike Outage and Its Financial Impact

39:16 Legal and Technical Implications of the Outage

57:33 Concluding Thoughts and Future Plans

 

Transcript:

Episode 274
===

jerry: [00:00:00] Today is Wednesday, July 31st, 2024. And this is episode 274 of the defensive security podcast. My name is Jerry Bell and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. How are you? My good sir.

jerry: So good. It hurts. How are you?

Andrew: I’m doing good. it’s Wednesday, which is halfway through the week. So I can’t complain too much.

jerry: It’s just another day to me though.

Andrew: I, how are you enjoying your funemployment?

jerry: It is awesome. funny story, when my dad retired, he told me something sad. He said, one of the things that you don’t realize is that the weekend starts losing its appeal,

Andrew: Because every day is the weekend.

jerry: because it’s just another day and, holidays are just another day.

jerry: There's not really something to look forward to; when you're working, you typically look forward to the weekend, but now it's just another day. I am finding that to be true. I'm going to be [00:01:00] spending some time, coming up, down at the beach, which will be a whole different experience, not having to work and actually be at the beach, which will be cool.

Andrew: So you don’t have to wrap your laptop in plastic when you take it surfing with you anymore.

jerry: That is very true. No more conference calls while out on the boogie board.

Andrew: I will say the random appearance of sharks behind you on your zoom sessions will be missed.

Andrew: Of course, we’ll have to find a way to bring that back. I live in jealousy of your funemployment. I will just say that. But not that you didn’t work your ass off and earned it, right? This is 25 years of blood, sweat, and tears given to this industry to get you to this point. So you earned it

jerry: I’m going to have to be responsible again at some point, but I am having fun in the meantime.

Andrew: As well you should.

jerry: Before we get into the stories for today, I just want to remind everybody that the thoughts and [00:02:00] opinions we express on the show are ours and do not represent anybody else, including employers, cats, farm animals, spouses, children, et cetera, et cetera.

Andrew: There's that one llama in Belarus, though, that agrees 100 percent with what we have to say.

jerry: Very true. Getting into the stories, we have one from Bleeping Computer, and this one is titled "Over 3,000 GitHub accounts used by malware distribution service." I thought this one was particularly interesting and notable. There is a malware distribution as a service that leverages both, let's call them, fake or contrived GitHub accounts, as well as compromised WordPress sites.

jerry: And what they're effectively leveraging is the brand reputation of GitHub. And so they have a fairly complicated setup of driving [00:03:00] victims, through watering hole attacks and SEO-type lures, to these sites, and they have different templates that entice people to download these encrypted zip files that are hosted on GitHub.

jerry: And what they're taking advantage of is two things. Number one, people generally think that GitHub is a reputable place to find files, and so your level of concern goes down when you download something that you think is coming from a reputable place. And I think the other, perhaps more problematic angle, from my perspective at least, is that GitHub is something that most companies allow access to.

jerry: It is something that, by design, many companies, not all, encourage their employees to interact with. And so you really can't block it, [00:04:00] or at least it's more difficult to block it. And because it's one kind of amorphous thing, you don't have the ability to granularly say you can go to this part of GitHub and not this other part of GitHub.

Andrew: Yeah. I agree on all points. It's absolutely leveraging and abusing the reputation of GitHub to get this malware out there, and it's effective. Using WordPress doesn't surprise me; just about every day I see some other plugin has a massive vulnerability. So I'm not blaming WordPress, I'm blaming their plugin ecosystem as being highly toxic, in the original sense.

jerry: I know that WordPress has a lot of detractors, especially in the security community, but over 50 percent of the entire internet's websites run WordPress, right? That is pretty impressive.

Andrew: There’s something to be said for The amount of coverage or the amount of instances out [00:05:00] there equals how many bad guys are poking at you. So if you’re not widely deployed, you’re probably also not getting widely tested. So there is absolutely some of that aspect of, Hey, if you’re a well used tool, you’re likely to have more security problems.

Andrew: So statistically that makes sense, but it's not a bad tool. Don't get me wrong, it's a super useful tool. It's just amazing how often I see advisories about really nasty exploits on various plugins for WordPress.

jerry: Yeah. the barrier to entry for plugin development is incredibly low and there are just an absolute ton of them. There’s many thousands. So it isn’t surprising.

Andrew: People who are running WordPress sites are not super technical admins. They're usually marketing folks or content generation folks. So when they're looking for, hey, I need something that makes a pretty picture, do something like this in WordPress, they probably aren't looking at it with the same level of technical rigor you and I might.

jerry: I will tell you, in prior [00:06:00] jobs where we had customers hosted on our infrastructure, this was a big problem, because customers would walk away, right? It's so easy to set up a WordPress instance, which is, by the way, part of its value proposition, but I think it also contributes to the low ongoing attachment, the low ongoing care and feeding of it.

jerry: It's so easy to set up and then just walk away from, and it's a big problem. I think that the WordPress team themselves have done a pretty good job of mitigating the issues to the extent they can. Most of it auto-updates these days.

Andrew: Yeah.

jerry: More to go, right?

Andrew: What's interesting to me is the way you describe that often sounds like the same problems we have with SaaS and cloud in general. It's so easy to set up and walk away from and not manage well, and that leads to all sorts of similar problems. This story is more about [00:07:00] GitHub, and I agree with all your points.

Andrew: I just went down my WordPress rant rabbit hole, but yeah, I get it. GitHub is an interesting one. And I don’t have a lot of good solves for that one.

jerry: No, it's not an easily solvable issue. I think this is one of those cases where education will certainly help. Proper endpoint detection, I should say endpoint protection, will help, being able to identify that somebody has downloaded and run something that is potentially malicious on their device.

jerry: But beyond that, unless you're willing to take that leap and not permit people to access GitHub, it's hard to defend against. And by the way, if the industry as a whole decided, hey, this is too risky, the bad guys would just move somewhere else.

Andrew: Yeah, absolutely. They're leveraging whatever has a good solid reputation and that sort of functionality. It's not GitHub's fault. And I think you're right: if it [00:08:00] wasn't GitHub, it'd be somebody else that serves the same purpose or function. They're a victim of their own success, in that sense.

jerry: 100%. So the next story we have comes from the KnowBe4 blog. KnowBe4 is, I think, a fairly well-known security education company. They are not without controversy, in my experience. I think they're the ones that offer the ransomware guarantee or warranty, if I'm not mistaken.

jerry: But this story is not about that. The story is about how they hired a North Korean, allegedly, as a senior developer. So they did what I would describe as a public service announcement, describing how they had been duped by, purportedly, a North Korean government agent who was trying to infiltrate their [00:09:00] company.

jerry: It's a little unclear exactly what their mission was. But the story basically is that this company was trying to hire a senior developer. They went through the interview process. They selected what they believed was the best candidate. They performed all their normal interviews, background checks, and whatnot.

jerry: Then they made an offer. The offer was accepted. They dispatched a laptop to this person. And then suddenly that laptop was observed attempting to install malware. And that is what really set off the series of events that alerted them to what was going on. So in the end analysis, the person they hired was, again, somebody allegedly from North Korea.

jerry: They had used a stolen identity of a legitimate person in the U.S. They [00:10:00] altered a photo, and it's a little unclear to me exactly in what ways; they show a before and after of how the photo was altered, but it's not entirely clear to me if that was the photo of the person whose identity they stole or somebody else. That aspect isn't clear to me. The laptop was sent to what they call an IT laptop shipping mill, I think is what they call it.

jerry: Basically, it's a place that will set these laptops up, either on a network to allow remote access from overseas, or they will send them out. I think in this instance, it was allowing remote access in. So the bad actor was apparently in North Korea, VPNing into the laptop and then connecting from the laptop, which I guess was located in California, and accessing the company's networks.

jerry: When KnowBe4's security [00:11:00] team identified that there was malicious activity going on, they tried to contact this very new employee, and the new employee said, I can't talk right now, I'm trying to debug something, but I've got a personal or family issue going on. And so over the course of a couple of hours, the security team decided to isolate the laptop, and then things started to fall apart, and they realized exactly what had happened.

jerry: It's not entirely clear to me, by the way, from reading the article, exactly how they stitched together that this person was a North Korean agent. That is not part of, at least, the stories that I have read. But it's interesting, because this resonated with me personally. I worked for a large multinational company for a very long time.

Andrew: And you’re a North Korean agent.

jerry: And I yes, that’s, that is true. Sorry, everybody had to find out that way. And anyhow,

Andrew: Didn’t they all know? Is that breaking news?

jerry: [00:12:00] I think it might be for some,

Andrew: Oh you’re a very nice North Korean agent.

jerry: I try to be

Andrew: Yeah.

Andrew: Anyway. Yeah. It’s a crazy story. I sent this to some of my friends in HR and they’re like, what, this happens? Yeah, it happens. So that tells me there’s some awareness. Interestingly, according to the timeline that KnowBe4 published, their SOC was on it. They detected the user’s activity. Suspicious activity began at 9:55 p.m.

Andrew: And by 10:20 p.m., 25 minutes later, they’re like, nope, shut him down.

jerry: yeah. But by all accounts, they did a great job.

Andrew: Yeah, and the SOC person was suspicious enough. And also the North Korean angle: it looks like KnowBe4 reached out to the FBI and Mandiant, and so it’s probably coming from one of those two, who are saying that this was a North Korean thing.

Andrew: Although from the story as well, it looks like this may not necessarily have been malicious in terms of state-sponsored malicious [00:13:00] activity so much as it might’ve been just a way to earn legitimate money to send back to North Korea to fund whatever. At least according to what they quote in their blog post, the FBI was saying, sometimes this is just straight up people working in sanctioned countries, getting around the sanctions and doing legitimate work.

Andrew: But this guy immediately started loading malware, which is not great.

jerry: Yeah. And realistically, if he hadn’t started loading malware, I don’t know if it would have been very easy to detect that this was going on.

Andrew: sure. So a lot of lessons learned though. A lot of takeaways.

jerry: Yeah. Anyway, I was saying that this hit home for me because I have had similar experiences, not with foreign agents, North Korean agents, or people in embargoed countries, but I have had experience with people falsifying who they [00:14:00] are when they get hired, and it’s difficult to catch, especially in these remote working

jerry: environments. I’m not saying that it can’t also be done in a face-to-face type environment, but I think that it is easier to pull off in today’s remote working environment. I’m still, by the way, a big proponent of remote working. Don’t misunderstand in any way. But I do understand this one.

jerry: I can definitely sympathize. Someday I will tell some funny stories about experiences I’ve had. So, they did call out some tips and recommendations as a result of this. Again, this was intended to be a public service announcement, so they wanted to describe the event and what they learned from it.

jerry: So their tips to prevent this were: scan your remote devices and make sure no one remotes into them. I think that was [00:15:00] really addressing how the person was connecting remotely into the laptop. But by the way, even if you were to not permit VNC or remote desktop or things like that, it’s not a guarantee, because there are plenty of cheap ass network KVMs, right?

jerry: So just be aware of that. The next one is better vetting: make sure the employee is physically where they are supposed to be. I think that’s also a difficult one. Then, better resume scanning for career inconsistencies. By the way, I’m assuming they are listing these tips because some aspect of the root cause analysis is tied back to each of these things.

jerry: So I’m guessing that if they had looked more closely, they would have identified inconsistencies in the career of the person whose [00:16:00] identity they had stolen. And so I think what they’re saying is, make sure that everything ties out. The next one is get these people on video camera and ask them about the work they’re doing so that you can see firsthand, (a) that their picture looks like who you think you’re hiring, and (b) that they are in fact who they say they are.

jerry: And I can definitely tell you firsthand that’s an important one.

Andrew: Yeah. But that’s getting easier and easier to fake.

jerry: Yeah.

Andrew: It’s going to be an interesting problem as deepfake AI technology gets cheaper and more widely accessible.

jerry: So our friend Bob, you told me about a situation in an Asian country where it is, at least in the recent past, fairly common for [00:17:00] what I’ll loosely call, or what he loosely called, job fairs, where people who were job prospecting could go and hire an expert. This person would offer their services to go through interviews

Andrew: wow,

jerry: And purport to be the person who’s looking for the job, and they would rely on the silly Westerners’ inability to distinguish between different Asians, and they would end up hiring and bringing on board somebody who was not the person they had actually interviewed.

jerry: And it was it was done at an industrial scale.

Andrew: obviously that is a short lived scam for the person getting hired, but I guess they get a paycheck or two out of the deal and move on.

jerry: I think, and I think sometimes it, sometimes I think it can work out for a long time, [00:18:00] unfortunately.

Andrew: I guess that’s true. Wow, that is crazy.

jerry: Anyway, the next one is: the laptop shipping address being different from where they are supposed to live and work is a red flag. So I’m assuming that this person declared that they had an address in place A, and they asked for the laptop to be shipped to place B, and that should definitely be a red flag.

jerry: I’m a little surprised that has to be called out. Seems obvious to me. They do have a couple of recommended process improvements, including one around background checks that appear to be inadequate and names that are not consistent. So basically, where the person’s name and the results of their background check don’t necessarily line up correctly.

jerry: You need to make sure that you’re properly vetting your references and do not solely rely on email-[00:19:00]based references. I think that’s also a good thing, although not airtight, especially if you’re trying to fend off any type of, I want to say nation state, but for that kind of adversary, it’s kind of a low bar to clear.

Andrew: If they’re already a scammer, they can probably get over that hurdle of having three valid, references who will lie for them.

jerry: That’s my thinking too. Implement enhanced monitoring for any continued attempts to access systems: that makes very good sense. And by the way, I don’t think it’s unique to this situation. When you terminate someone, you need to make sure that their access is terminated globally and you don’t have back doorways into your environment.

jerry: Review and strengthen access controls and authentication processes. And then finally, conduct security awareness training. And then they have a couple of things to look out for, like use of voice-over-IP numbers, discrepancies in addresses and dates of [00:20:00] birth, conflicting personal information like marital status, and then also family emergencies. Which, again, is what happened when the company’s SOC tried to contact the employee and ask what was going on.

jerry: This person said, oh, I can’t talk. I’ve got a family emergency going on. And then sophisticated use of VPNs and VMs for accessing company systems. That’s, by the way, a very important one, and something I know Bob was doing pretty pervasively: looking for evidence of unauthorized connections from known VPN sources,

Andrew: Right.

jerry: And then attempting to execute malware and subsequent coverups.
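(For illustration: a very rough sketch of the kind of check described above, assuming you already export authentication events with a source IP and keep a list of known VPN or datacenter egress ranges. The file names and CSV columns here are made up.)

```python
import csv
import ipaddress

def load_vpn_ranges(path):
    """Load known VPN/datacenter egress ranges, one CIDR per line."""
    with open(path) as f:
        return [ipaddress.ip_network(line.strip()) for line in f if line.strip()]

def flag_vpn_logins(auth_log_csv, vpn_ranges):
    """Yield (user, src_ip) for logins that originate from a known VPN range."""
    with open(auth_log_csv, newline="") as f:
        for row in csv.DictReader(f):          # expects 'user' and 'src_ip' columns
            ip = ipaddress.ip_address(row["src_ip"])
            if any(ip in net for net in vpn_ranges):
                yield row["user"], row["src_ip"]

if __name__ == "__main__":
    ranges = load_vpn_ranges("vpn_ranges.txt")              # hypothetical range list
    for user, ip in flag_vpn_logins("logins.csv", ranges):  # hypothetical log export
        print(f"review: {user} logged in from known VPN address {ip}")
```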

jerry: So I think those are all really good. I will also say, by the way, that obviously KnowBe4 published this in part to get their name out. It was intended to help, but it’s also a bit of what that one judge called corporate puffery, [00:21:00] which is my new favorite legal term. It was shared far and wide, but it was very clear, because this company had to publish a follow-up blog post, that the original post was being interpreted as them disclosing a data breach.

jerry: I thought that angle was also very interesting: you have to be careful how you couch things. In the FAQ, by the way, they actually draw a delineation between how the infosec media covered their public service announcement and how the general media covered it.

jerry: The InfoSec press, as they describe it, covered it as lessons learned, here are some things to look out for, and did what they had hoped, but the general media basically just covered it as if it was just another data breach, [00:22:00] which I thought was quite interesting and something to be aware of.

Andrew: There are two other things that I took away. Interesting that, one, their EDR software caught the malware. So that’s important. I don’t know if it was CrowdStrike, but we’ll just mention that for the search algorithms, because everybody has to talk about CrowdStrike. The other thing that I thought was interesting is the way this is written.

Andrew: It’s clearly adapted a bit from an internal document, the way the verbiage and the tenses are used. So they say here,

Andrew: Let me find it really quick. Ah, I don’t have to, it’s good: “We have new employees in a highly restricted area when they start and they have no access to production systems.” So it sounds to me, based on that, that they have some sort of tiering for brand new employees, that they don’t automatically get access to anything, that there’s some sort of training period or ramp-up period where they’re vetted or whatnot, whatever the purpose is for that particular sort of [00:23:00] sandbox that they’re left in initially.

Andrew: Not a bad idea for this sort of circumstance.

jerry: a great idea and a great call out. Yeah,

Andrew: Crazy story though. I’m really glad that KnowBe4 published all this and we can learn from it. Kudos to them. Yeah, you’re right. There’s probably some cynical self-serving nature to it, but nonetheless, it’s good fodder for us.

jerry: Absolutely. Absolutely. And I think it’s happening probably a lot more than we want to admit. Moving on to the next story. This one comes from Ars Technica and the title is “Secure Boot is completely broken on 200+ models from 5 big device makers.” So I think most people hopefully know what Secure Boot is.

jerry: It’s intended to establish a level of trust in the firmware that is running on the hardware. And back in the early 2010s that was a big deal, because there were some pretty novel firmware-[00:24:00]based attacks, and the specter of really persistent and very difficult to eradicate firmware-level malware was running pretty rampant.

jerry: So the industry created this Secure Boot concept, and it relies on public key cryptography. And the one thing that everybody should realize by now is that we are really bad at keeping keys secret.

jerry: And so there are two distinct problems identified in the article. One is that back in 2022, one of the signing keys from AMI was posted on a public GitHub repository. Now, it was encrypted, “strongly” encrypted with a four-character password, so it was very easy to crack and reverse engineer.

Andrew: Indeed.

jerry: Yeah. So that basically means that systems whose [00:25:00] Secure Boot relies on that particular key can be bypassed by an adversary. They can now load malware. They can basically sign a malicious version of firmware and run that on the system, and…

Andrew: It’ll be trusted. And the only way to fix this is basically to deploy firmware updates to the hardware to rotate the key, which is not going to happen.

jerry: And by the way, once a system is infected, like if you have malicious firmware, it is very difficult to eradicate. This is not something where you can necessarily just wipe the hard drive and start over, and the tools and detection capabilities for malicious firmware are very immature.

Andrew: Mhm. This is a tough one. You’re relying purely on the secrecy of a [00:26:00] key, and if that key gets leaked, in this case it blows up the entire ecosystem. Wow, that’s rough. And the other thing that I’ll bring up is I have seen so many secrets get pushed to GitHub by accident. It’s so easy to do and it’s a huge problem that I don’t know that we talk about enough.

Andrew: Once it’s pushed, it’s almost impossible to ever wipe from the record because it’s permanently in that repo’s history. So once that happens, you’re basically forced to rotate that secret. And in this case, you can’t, because it’s relying on firmware you can’t centrally update without massive amounts of effort.

Andrew: So now you’ve got this problem forever

jerry: Yeah, correct. Now in the,

Andrew: for those hardware.

jerry: In the other instance, which is maybe a little less bad, I guess in some ways it’s better and in other ways it’s worse. [00:27:00] What we just talked about affected 200 or so models. There were another 300 models that were impacted by a different problem.

jerry: And that other problem was that one of the firmware manufacturers released prototype code that had a stock test key in it.

jerry: And in fact, the common name in this key is “DO NOT TRUST.” The hardware manufacturers did not actually rotate the key as they were supposed to. And so in the analysis of the first problem that we were just talking about, this company called Binarly was doing some analysis of other systems, looking for other problems.

jerry: And they found an apparently very pervasive problem across many different models of hardware that have this stock test key [00:28:00] instead of an actual legitimate key. And so it’s literally the same key used everywhere. It was never intended to be used in production systems, but apparently the hardware manufacturers missed the memo. I guess the one good news in all of this is that I think all of the hardware now is, or almost all of it is, out of support and probably pretty old.
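(For illustration only: a sketch of how you might flag that specific issue on systems you manage, assuming you have already extracted the DER-encoded platform key certificate from the firmware’s PK variable; how you dump that certificate varies by platform and is out of scope here. This uses the third-party cryptography package.)

```python
from cryptography import x509
from cryptography.x509.oid import NameOID

def is_untrusted_test_key(der_cert_bytes: bytes) -> bool:
    """Return True if the platform key certificate carries the telltale
    'DO NOT TRUST' common name shipped with the vendor's test key."""
    cert = x509.load_der_x509_certificate(der_cert_bytes)
    common_names = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)
    return any("DO NOT TRUST" in attr.value.upper() for attr in common_names)

if __name__ == "__main__":
    # Hypothetical file containing the DER-encoded PK certificate dumped from firmware.
    with open("platform_key.der", "rb") as f:
        if is_untrusted_test_key(f.read()):
            print("Secure Boot platform key looks like a vendor test key; treat as untrusted")
```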

jerry: Not necessarily a good look in terms of our ability here. And who knows how many people have had these keys over the years, providing an opportunity for bad actors to leverage the vulnerability. I would say for most people or for most organizations, it’s not an exigent threat.

jerry: Because something else has to go horribly wrong in order for a bad person to exploit it, unless you’re talking about a laptop or a desktop PC. From a server [00:29:00] perspective, something else has to go horribly wrong: you’ve got to already be compromised. And if you were, this provides a really gnarly method of persistence.

jerry: If that were to be properly leveraged. But again, coming from where I was, there is a category of company that has to worry about this very deeply, and that is cloud providers.

Andrew: that’s true. Somewhere there is hardware backing all that stuff up.

jerry: Yeah. And because you have to have administrative rights on the system in order to flash it. Like, I run infosec.exchange and I have a bunch of bare metal servers that I rent from a data center. I have root. I could conceivably update the firmware on those.

Andrew: You could.

jerry: If I updated the firmware in a way that maintained persistence and my [00:30:00] hosting provider was not sophisticated, I could establish a means of persistence to compromise the next person or organization that is using that piece of hardware.

Andrew: You rent it. You flash it with your code. You un rent it. And they rent it to somebody else.

jerry: Yes,

Andrew: It doesn’t matter if they loaded fresh OS or whatever. It’s sitting there in firmware. It’s at the BIOS level.

jerry: exactly. So that those are the kinds of things that kept me up at night.

Andrew: But to your point, I don’t know how much of a real world hardcore threat this is. It’s a very interesting situation, but I don’t think that’s what’s going to burden most companies.

jerry: No. So you touched on something I wanted to hammer home, and that is our just terrible stewardship of keys, particularly as it pertains to GitHub. I’m sure [00:31:00] there are others, but I suspect that in 50 years GitHub is going to be viewed as the nexus point for many data breaches that were caused by leaked keys.

Andrew: It’s just too easy.

jerry: It’s an epidemic.

Andrew: and I’m not throwing shade at the developers. It is just so easy to do that even the most well-intentioned and diligent developer can get caught by this. Even if you have tools that are supposed to check for secrets before that pull request is completed, they’re not foolproof, secrets look different from one another, and the tools are not great.

Andrew: There’s no one perfect tool for this. So, I don’t know, the only thing I can say, and it’s really tough when we’re talking about firmware-based things, but for everything else: try to design with the intent that you’ve got to rotate those keys on a regular basis, and make it easier for yourself.[00:32:00]

Andrew: Much easier said than done.
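(As a rough illustration of the kind of pre-merge check being described, here is a bare-bones pre-commit hook that greps staged changes for a few common secret patterns. Real scanners such as gitleaks or trufflehog are far more thorough; this sketch only shows the idea.)

```python
import re
import subprocess
import sys

# A few common patterns; real scanners ship hundreds of rules.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "hardcoded credential": re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def staged_diff() -> str:
    """Return the diff of whatever is currently staged for commit."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def main() -> int:
    findings = []
    for line in staged_diff().splitlines():
        if not line.startswith("+"):
            continue  # only scan lines being added
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((name, line[:80]))
    for name, snippet in findings:
        print(f"possible {name}: {snippet}")
    return 1 if findings else 0  # a non-zero exit code blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```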

jerry: Certainly true. I will say, if you start out with that design principle in mind, it’s much easier than going back and trying to retrofit. I think that goes without saying, but it is doubly true in this instance. So keep an eye on those keys. Definitely rotate them when and where you can; at a minimum, know what keys you have and how you would rotate them if they were disclosed.

Andrew: And I don’t want to get down in the weeds of all the coding practices, because I’m certainly not an expert there. However, with the dangerous level of knowledge that I have, just enough to be very dangerous, I understand there are ways to not directly embed those secrets in your code or contain them in the code.

Andrew: You can call them as a reference and [00:33:00] otherwise teach your devs how to do that, so that they don’t have to worry about that secret accidentally getting pushed. That’s one very crude example of a very complicated problem that I’m not doing a good job talking about. But for those who know much better than I, try to help your devs.

Andrew: Don’t put secrets in GitHub, please. Thank you.
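(A minimal sketch of the “reference, don’t embed” pattern being described: the code only names the secret, and the value is injected at runtime by the deployment environment, a vault, or a cloud secrets manager. The variable name is hypothetical.)

```python
import os

# Bad: once committed, this credential lives in the repo history forever.
# DB_PASSWORD = "hunter2-prod"

# Better: the code holds only a reference; the value arrives at runtime.
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if password is None:
        raise RuntimeError("DB_PASSWORD not set; configure it in the runtime environment")
    return password
```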

jerry: It is difficult. And I will tell you, having wrestled with this problem mightily, it requires you to create an ecosystem. And one of the challenges I’ve observed is that we are, how best to say it, commoditizing development. I know that’s perhaps a crass way of saying it, but we’re putting people in development roles that don’t really have a lot of development background.

jerry: And we’re basically throwing them out of the nest and [00:34:00] saying, okay, it’s time to fly. And they know how to write code, but they’re not even aware that this class of problem exists until somebody from the security team comes along and says, what did you do?

jerry: Let alone knowing how and where to actually store secrets. And so it’s very incumbent on the leadership of the security team to make sure that ecosystem both exists and that your developers are aware of how to use it. Otherwise this is going to keep happening.

Andrew: A single slide during onboarding isn’t good enough?

jerry: Oh, so it’s add that slide to your annual training and you’re good.

Andrew: All right. And maybe just something like just don’t share secrets. Just as a bullet point, move on. I think

jerry: So yeah, don’t share secrets, right? Don’t share your passwords. That’s right. I have, by the [00:35:00] way, been involved in situations where something like this has happened. And I have asked the person, at what point did you think this was okay?

jerry: This is a password. And he said, no, it’s not a password. It’s a secret.

Andrew: Right.

jerry: Not allowed to share passwords. There’s no, no prohibition on secrets. So

Andrew: makes my code go. I need it. It’s important.

jerry: very cautious about terminology. All right. Let’s keep plugging along here. The next one comes from Dark Reading and the title is “CrowdStrike Outage Losses Estimated at Staggering $5.4 Billion.”

Andrew: And I’m going to go on record and say that’s probably low.

jerry: Yeah, it was funny: in the immediate aftermath of the event, or maybe a couple of days after, somebody was asking the question of how much money [00:36:00] was lost. And at the time we had just gotten the number of eight and a half million systems impacted. And so I did a bit of back-of-the-napkin math.

jerry: And I came up with between five and ten billion, which is pretty interesting, given that it’s coming out to be $5.4 billion. I do think, by the way, the eight and a half million number is now being walked back. I haven’t heard a new number, but Microsoft seems to be signaling that eight and a half million is probably very conservative, and CrowdStrike doesn’t seem to be very motivated to go correct the record.

Andrew: Yeah, from what I understand, those were just the systems that were able to send a crash dump to Microsoft, which is what they based it on. And they’re now saying that was probably just a subset, so that 8.5 million number that we’ve been talking about is probably way low. I’m just going to take a guess:

Andrew: It’s at least [00:37:00] 10x that number.

jerry: It’s a lot. It was an interesting discontinuity, because I think CrowdStrike has something like 15 percent of market share, and this was like 1 percent or less of Windows systems that were impacted, and it was hard to reconcile those. The only thing I could think of is that the 1 percent included home users, but I don’t know, it just doesn’t seem right.

jerry: It’ll be very interesting to see what the actual number is. I think it’s certainly going to be a lot higher than eight and a half million, just based on the news coverage. How many freaking airports did we see pictures of, with people on ladders resetting the displays?

jerry: You know, it’s mind boggling the amount of effort that went into recovering from this. I

Andrew: Side story. I saw this amazing [00:38:00] article, I won’t do it justice because I didn’t have it prepped, but somebody figured out they could use a barcode scanner to speed up the recovery process, because Windows treats a barcode scanner as a keyboard, and they figured out how to encode the actual command with the, um, the encryption key they needed.

jerry: A key, the Bit… like the BitLocker key. Yeah.

Andrew: Thank you, the BitLocker key. And they created custom barcodes for each PC that the key was tied to. And instead of typing it all in, when they got to the point where they needed it, they scanned the barcode and boom, it was recovered. And I was like, I need to go find that story. It was amazing. A little bit of a hack that I thought was awesome.
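(To give a sense of how that trick might be reproduced, here is a sketch that renders a BitLocker recovery key as a Code 128 barcode using the third-party python-barcode package; since the scanner presents as a keyboard, scanning the image “types” the key into the recovery prompt. The key, filenames, and workflow here are invented; a real version would pull each machine’s key from wherever your recovery keys are escrowed.)

```python
# Requires: pip install python-barcode pillow
import barcode
from barcode.writer import ImageWriter

# Hypothetical 48-digit recovery key for one machine (8 groups of 6 digits).
recovery_key = "111111-222222-333333-444444-555555-666666-777777-888888"

# Code 128 can encode the digits and dashes; a USB scanner will "type" them on scan.
code = barcode.get("code128", recovery_key, writer=ImageWriter())
code.save("machine-1234-recovery")  # writes machine-1234-recovery.png
```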

jerry: Yeah. That was a,

Andrew: also I was thinking.

jerry: I was going to say, it was a company in Australia that did that, I think, if I remember correctly. Yeah, that was very clever.

Andrew: I was also thinking randomly when I saw that same picture of those people up in the ladder fixing that airport monitor. I’m like, if you’re down to that monitor, you probably got your critical stuff back.

jerry: That is very true. Although different teams, [00:39:00] probably different

Andrew: That’s true. That’s true. And I know there’s a lot more to talk about here, but one last thing I’ll say: we’ll probably get a lot better read on this because a whole bunch of legal actions are starting to spin up, and that’ll all come out in discovery.

jerry: Delta Air Lines, notably, and we were sharing some stories back and forth about Delta before the recording here, announced that they estimate their losses at half a billion dollars, and they have retained legal counsel and are contemplating legal action against both CrowdStrike and Microsoft.

Andrew: Yeah. The other interesting thing since the last we talked about this that came up, we were taking a little bit of a grain of salt, but we were talking a lot about why CrowdStrike was operating in the kernel and that, if that was a good idea or not, and why are they allowed to do it and should they be allowed to do it, et cetera, et cetera.

Andrew: And you can’t do that on a Mac. And one thing Microsoft came out and said is that, based on an EU [00:40:00] legal agreement, they were forced, due to antitrust-type considerations, to basically allow any security tool to have the same level of access as Microsoft’s built-in security tools. The EU didn’t want Microsoft to have monopoly access to the operating system.

Andrew: So they needed to allow third parties to have the same capabilities. So if Microsoft Defender has kernel access, competitors need to also be able to have kernel access. That’s going to be an interesting thing to see play out. And I think Microsoft is walking a very careful line of pointing that out without playing too much of a victim.

Andrew: But it is an interesting caveat that I think adds to the nuance of the conversation we were having last week on this topic.

jerry: Yeah, they’ve come out pretty forcefully and said that they’re intending to refactor access to the kernel. And I think they actually made the comment that they are intending to move in a direction that is much more like the Apple iOS [00:41:00] ecosystem. So that’ll be quite interesting to watch.

Andrew: Yeah, I don’t blame them. Microsoft’s in a tough spot here, I think. And I’m not a Microsoft lover or an apologist, but they’ve caught a lot of flack for something that kind of wasn’t their direct fault, but somewhat the result of decisions and ecosystems and permissions that were granted. And I think they realize now that they just can’t cede that

Andrew: ground, or that they’re still going to be held accountable for what other people are doing in their operating system.

jerry: Yeah, in legal circles there’s this concept of an attractive nuisance, and I have a feeling that’s going to be part of the basis of the claims against Microsoft: that they should have known.

jerry: And therefore they’re somewhat culpable for not taking action. And anybody who’s had somebody slip and fall on their driveway, at least in the U.S., I don’t know [00:42:00] that this is a big problem outside of the U.S., has had that sort of problem, right?

jerry: It’s an attractive nuisance. You should have done something about it. You should have known that this was going to be a problem. Somebody got hurt as a result, and now you’re at least in part responsible. So again, I don’t know. I’m not presupposing what a court would find. I don’t think this, by the way, is the kind of thing that would actually end up going to trial, but hey, we’ll see. This story, by the way,

jerry: is based on a report that comes from a company called Parametrix, which is, I guess, a think tank that provides information to insurance companies. And so it was they who pinned the estimated losses at $5.4 billion. And they identified that healthcare companies were the most impacted, followed by [00:43:00] banking as well as transportation companies.

jerry: We actually have a breakdown based on their telemetry. They’ve identified different categories and how much each category of industry lost as a result of this; it all totals up to the $5.4 billion. There were some interesting conclusions at the end of the report, which was linked in the article from Dark Reading.

jerry: And I’ll just read through those. Some of them, by the way, are a little confusing and contrary to what I understood to be the case. So the first one is recovery disparities between cloud and traditional infrastructures: they point out that organizations that had their impacted assets in the cloud recovered much more quickly than those with traditional infrastructure.

jerry: And on the one hand, I know that there were a lot of problems. But [00:44:00] on the other hand, I’m wondering if that whole conclusion is skewed by the fact that you don’t have laptops and kiosks and whatnot sitting in the cloud. And so those things that were impacted actually required you to dispatch someone to go out and fix them, whereas with infrastructure that’s sitting in a cloud, somebody from their basement was able to just plug through them one by one without having to actually dispatch people to go fix them.

jerry: So I don’t know how much that played into it.

Andrew: I had one other thought there, which is that maybe a certain number of these systems that went down were abstracted in some sort of hypervisor, and the management plane was still remotely available to SOC or NOC engineers who could fix it without actually having to put hands on physical keyboards at the devices.

jerry: That’s possible. Very possible.

Andrew: I don’t know, but I’m assuming the underlying infrastructure running it [00:45:00] didn’t go down; it was the actual virtual machines inside of it, running CrowdStrike at that layer, that caused the blue screen. And I can fix that more easily than having to go plug into a USB port on a physical device.

jerry: I think they were able, if nothing else, they were able to use a virtual KVM to connect the systems.

Andrew: Yeah. So that’s much quicker. I would think.

jerry: Yeah, that’d be my expectation. So their second conclusion was misplaced priorities. They call out that the focus on non-systemic cyber perils was ultimately a catalyst for the CrowdStrike outage.

jerry: My read of that is that we’re all focused on things like ransomware and worms and whatnot, and we weren’t focused on the potential impacts of other components in our technology stack failing catastrophically.[00:46:00]

Andrew: Yeah, I challenge this. I think that’s some Monday morning quarterbacking.

jerry: Again, keep in mind that I don’t think you can divorce that one from the next item, which is achieving portfolio diversification. So keep in mind that this was written for insurance companies,

Andrew: That’s fair. Carry on.

jerry: not necessarily for IT companies. So what they’re pointing out here is that

jerry: there was a concentration of risk: the intersection of pervasive use of CrowdStrike and pervasive use of Windows led to this very systemic outage. And so if you are an insurance company, you should probably be thinking about this just like how insurance companies don’t want to have all of their customers in Florida.

jerry: Because then if a hurricane comes [00:47:00] through, all of their customers are making simultaneous claims at the same time. They want to have a diversified customer base: some people in Alabama and some people in New York and some people in Ohio and some in Florida. And so carry that into IT. Do you want to have companies under your insurance umbrella who are only using Windows, or only using Windows and CrowdStrike? Because, this report is pointing out, that creates a situation not unlike having all of your homeowner customers in Florida. It’s a very one-sided risk.

Andrew: I’m also very curious how these lawsuits are going to go, if they do indeed go. As we’ve very cynically noted, the EULAs that these companies write are not really [00:48:00] setting themselves up for a lot of liability. And I saw a quote from the CEO of Delta, and I won’t get it right, but basically he was saying, hey, they’ve got to be held accountable and we need to hold them accountable.

Andrew: And my initial thought, having been in a lot of discussions around procurement, is that’s probably something your legal team should have handled when you signed that contract. What did your legal team agree to, and do they really have any actual liability in this circumstance? In my gut, I don’t know; I’ve never read the EULA for CrowdStrike.

Andrew: I’ve never bought CrowdStrike, so I don’t know. But having looked at many other EULAs, they have this limited liability stuff: hey, yeah, we might screw up, this stuff isn’t perfect, and this is how much we’re liable for. So it will be very interesting, I think, how that might play out in the real world.

jerry: Yeah. I’ve seen snippets of CrowdStrike’s EULA and contracts, and in the immediate aftermath, one of the notable discussion points was that CrowdStrike does not [00:49:00] permit you, the customer, to use CrowdStrike in, I’m paraphrasing, but in a context of life-critical or safety-critical uses.

jerry: And I think there were some other similar words. It is not uncommon, by the way, for software companies to limit their liability based on how much you paid; you can’t necessarily sue them for damages that exceed the amount that you’ve paid them. That is a fairly common thing.

jerry: Now, the reality is, as was stated by the CEO of Delta, they are going to go after Delta, sorry, after CrowdStrike, and it will be [00:50:00] interesting to see how that is perceived by the court. I will tell you, at least in the U.S., and again, my context is all U.S. law,

jerry: you can’t disclaim away, you can’t write away in a contract, gross negligence, right? If you are found to be grossly negligent, it doesn’t matter what you wrote or what the customer agreed to in a contract; there can still be a finding against you.

jerry: And so I think that’s very likely the stance or the angle that Delta will take, and probably some other companies as well. I think it’ll be very interesting to see how this plays out because, again, I’m not an attorney, but I have worked with quite a few of them over the years, and I’ve seen a lot of contract cases go to court and be settled out of court.

jerry: I think personally this is the kind of thing that would get settled out of court, because CrowdStrike is not going to want a drawn-out legal proceeding that gives their IT processes and development processes a deep [00:51:00] examination. And likewise, Delta probably doesn’t want a deep examination of exactly how they were using CrowdStrike and whether that was in concert with the terms of their license agreement and whatnot.

jerry: I think the challenge for CrowdStrike is,

jerry: I, again, I’m not saying this isn’t warranted, but there are a lot of companies right now that are probably in talks with legal counsel and what their options are. And I think, once one happens, this is going to quickly turn into either a class action case, or it’s going to turn into some, some revolving door of litigation.

jerry: And the reality is, I think, unless CrowdStrike goes to trial with this and is found not to have been negligent, and their contractual limitations are found to be enforceable, they stand a lot to [00:52:00] lose.

Andrew: Yeah, that’s true. The one thing that you touched on that I thought was interesting is the EULA bit about not being used in critical infrastructure or life-critical contexts. The one other area that I have any sort of familiarity with is aviation software, which is meant, obviously, to run aircraft and important functions on aircraft.

Andrew: And that software is incredibly expensive and very highly regulated, because it is meant for that critical use case. So what that sort of tells me is that we have this tension going on here: we want relatively inexpensive software, almost commodity grade, that can run everywhere to solve this problem, but that inexpensive software doesn’t come with the level of care that goes into very highly critical software.

Andrew: I’m not explaining this well, but the reason it costs so much money to write aviation software is because the level of diligence is [00:53:00] so high, because the consequence of failure is so high, as well as all the regulatory oversight. And that’s where it comes into it. We don’t have that in this case, which is probably why

Andrew: we have that EULA stipulation of don’t run it in a critical case. But if you’re expecting that level of due care and carefulness in coding, it comes with that price. Would corporations be willing to pay that price? Instead of paying 150 bucks a node or whatever it is for CrowdStrike, are they willing to pay $5,000 a node?

Andrew: That’s where the rubber meets the road.

jerry: If you look at the case of Delta, obviously they’re an airline, and they have planes where the software that runs on them is life-critical, but I don’t think that’s what failed.

jerry: The things that failed, I think, were kind of the back office stuff: the planning and the scheduling of crews and departures and arrivals and whatnot. Things that are not necessarily [00:54:00] important to an airplane staying up in the air, but rather to orchestrating the movement of planes and passengers around the world.

jerry: And that is not, by itself, I think, a life-critical thing. Although in aggregate, there was certainly a story about an elderly person in Florida who missed his flight as a result of this and has not been seen since.

Andrew: Yeah, and I’m sure that there are second-order effects. What I’m trying to draw a parallel to is: if you’re demanding software with zero defects, it’s much, much more expensive.

jerry: Oh, sure.

Andrew: And that’s where I’m going with this: whether we like it or not, there are trade-offs. If you look at the extreme end, and I only say this because this is what I’m familiar with, I’m sure it’s the same for power plants, software that runs super critical stuff is also super expensive.

Andrew: So [00:55:00] if you want that level of dependability, you probably aren’t going to do it on commodity multifunction operating systems and hardware, and you’re probably not going to do it with lower-cost tooling like this. So I guess what I’m trying to say is, if companies expect zero defects, they’re going to have to pay for it.

Andrew: And we’re not in that scenario right now.

jerry: Temper your expectations, I think is what you’re saying.

Andrew: Yeah. It’s a “you can’t have fast, cheap, reliable; pick one” kind of situation.

jerry: Yeah, that’s fair. I think this is one of those situations where it’s obvious in hindsight. It’s a simple failure, a stupid failure. It wouldn’t have required highly sophisticated quality assurance processes to catch it. But it’s one of those things that’s obvious in hindsight and not necessarily obvious going in.

jerry: And so I think, broadly speaking, [00:56:00] you’re right. We in IT, and really civilization and society in general, have built this infrastructure around software and systems that we beat the shit out of our vendors, pardon my French, to get the lowest prices out of. And then things happen, and we are…

Andrew: And generally you might say it’s worked like how much, and

jerry: worked.

Andrew: Agree. Not

jerry: it’s worked spectacularly well.

Andrew: paying

jerry: Like, how many times do we have this discussion? This is not a common problem. On the other side, I’ll just contrast that with every single month: Microsoft, and I think Oracle does it quarterly, and many other companies do too, release a cadre of patches for critical vulnerabilities.

jerry: So I think to some extent, on the one hand, we’ve built an [00:57:00] ecosystem that is tolerant of a certain level of software badness or lack of quality. But on the other hand, when things like this come along, it highlights how interconnected and interdependent our systems are, and that the things we’re buying are not as resilient as they need to be, given

jerry: that we have the expectation that they would never fail,

Andrew: at least not on the commodity, multifunctional hardware and software we’re using.

jerry: All right. Hopefully, through the magic of editing, it was not obvious that we just had a catastrophic connectivity failure and whatnot, but I think we’re going to cut it off here and we’ll pick up the rest of the stories next time.

Andrew: Sounds good.

jerry: I appreciate everybody’s time and attention, and I hope you learned something and, certainly appreciate your your patronage here.

jerry: So you can [00:58:00] follow our podcast at our website, www.defensivesecurity.org. You can find back episodes on just about any podcast service, including now, thanks to Mr. Kalat, YouTube.

Andrew: Audio only for the moment, just an experiment. We’ll see how it goes.

jerry: We’ll be we’ll be experimenting with video, but we both, unfortunately, have to undergo some plastic surgery things before we can

Andrew: About a 12 month intensive workout program with the world’s leading exercise physiologists, I believe as well.

jerry: Yes, but that’s something to look forward to in the future. You can follow me on social media. I’m primarily on my Mastodon instance, Infosec.Exchange. You can follow me at @Jerry@Infosec.Exchange. You can follow Andy where?

Andrew: On X at @lerg, L-E-R-G, and on Infosec.Exchange at @lerg.

jerry: awesome. [00:59:00] And with that, I wish everybody a great week. I hope you have a great weekend ahead and we will talk again very soon. Thank you all.

Andrew: Thanks very much. Bye bye.

 

Defensive Security Podcast Episode 273

The Joe Sullivan Verdict – Unfair? – Which Part? (cybertheory.io)

Fujitsu Details Non-Ransomware Cyberattack (webpronews.com)

5 Key Questions CISOs Must Ask Themselves About Their Cybersecurity Strategy (thehackernews.com)

Sizable Chunk of SEC Charges Vs. SolarWinds Dismissed (darkreading.com)

CrowdStrike CEO apologizes for crashing IT systems around the world, details fix | CSO Online

Summary:

Cybersecurity Updates: Uber’s Legal Trouble, SolarWinds SEC Outcome, and CrowdStrike Outage

In Episode 273 of the Defensive Security Podcast, Jerry Bell and Andrew Kalat discuss recent quiet weeks in cybersecurity and correct the record on Uber’s CISO conviction. They delve into essential questions CISOs should consider about their cybersecurity strategies, including budget justification and risk reporting. The episode highlights the significant impact of CrowdStrike’s recent updates causing massive system crashes and explores the court’s decision to dismiss several SEC charges against SolarWinds. The hosts provide insights into navigating cybersecurity complexities and emphasize the importance of effective communication and collaboration within organizations.

00:00 Introduction and Banter
01:52 Correction on Uber’s CISO Conviction
04:07 Recommendations for CISOs
09:28 Fujitsu’s Non-Ransomware Cyber Attack
12:13 Key Questions for CISOs
32:47 Corporate Puffery and SEC Charges
33:15 Internal vs External Communications
33:52 SolarWinds Security Assessment
36:36 CrowdStrike CEO Apologizes
37:16 Global IT Systems Crash
37:57 CrowdStrike’s Kernel-Level Issues
40:55 Industry Reactions and Lessons
42:58 Balancing Security and Risk
49:26 CrowdStrike’s Future and Market Impact

01:03:46 Conclusion and Final Thoughts

 

Transcript:


jerry: [00:00:00] All right, here we go. Today is Sunday, July 21st, 2024, and this is episode 273 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight as always is Mr. Andrew Kalat.

Andy: Good evening, Jerry. I’m not sure why we’re bothering to do a show. Nothing’s happened in the past couple of weeks.

Andy: It’s been really quiet.

jerry: Last week was very quiet.

Andy: Yeah, sometimes You just need a couple quiet weeks.

jerry: Yeah. Yeah, nothing going on. So before we get into the stories, a reminder that the thoughts and opinions we express on this podcast do not represent Andrew’s employers

Andy: Or your potential future employers

jerry: or my potential future employers

Andy: as you’re currently quote enjoying more time with family end quote

jerry: Yes, which by the way Is highly recommended if you can do it.

Andy: Your big thumbs up to being an unemployed bum.

jerry: It’s been amazing. Absolutely [00:01:00] amazing. I I forgot what living was like.

jerry: I’ll say it that way.

Andy: Having watched your career from next door-ish, not from afar but not too close, I think you earned it. I think you absolutely earned some downtime, my friend. You’ve worked your ass off.

jerry: Thank you. Thank you. It’s been fun.

Andy: And I’ve seen your many floral pics. I’m not saying that you’re an orchid hoarder, but some of us are concerned.

jerry: I actually think that may be a fair characterization. I’m not aware of any 12 step programs for for this disorder here.

Andy: There’s a TV show called hoarders where they go into people’s houses who are hoarders and try to help them. I look forward to your episode.

jerry: I, yes, I won’t say any more. Won’t say any more. So before we get into the new stories, I did want to correct the record on something we talked about on the last episode [00:02:00] regarding Uber’s CISO, who had been criminally convicted. Richard Bejtlich on infosec.exchange actually pointed out to us that it was not failure to report the breach that was the problem; it was a few other issues, which is what Mr. Sullivan had actually been convicted of. So I’m going to stick a story into the show notes that has a very extensive write-up about the issues, and that is from cybertheory.io. And in essence, I would distill it down as saying, again, I guess he was convicted, so it’s not alleged: he was convicted of obstruction of an official government investigation. He was convicted of obstructing the ongoing FTC investigation about the 2013/2014 breach, [00:03:00] which had been disclosed previously.

jerry: The FTC was rooting through their business and asking questions, and unfortunately, apparently, Mr. Sullivan did not provide the information related to this breach in response to open questions. And then furthermore, he was convicted of what I’ll summarize as concealment.

jerry: He was concealing the fact that there was a felony. And the felony was not something that he had done. The felony was that Uber had been hacked by someone and was being extorted. But because, he had been asked directly, Hey, have you had any, any issues like this?

jerry: And he said, no, that becomes a concealment, an additional concealment charge. And so the jury convicted him on both of those charges, not on failure to disclose a breach.

Andy: Yeah, it’s we went down the wrong path on that one. We were a little, we put out some bad info. [00:04:00] We were wrong.

jerry: So I’m correcting the record and I certainly appreciate Richard for for getting us back on the right track there.

jerry: This article, by the way, does have a couple of interesting recommendations that I’ll just throw out there. One of them is hopefully these are fairly obvious. Do not actively conceal information about security incidents or ransomware payments, even if you’re directed to do so by your management.

Andy: Yeah. I think, let’s play it out for a second. If you’re in that situation, what do you do? Resign?

jerry: Yes. Or do you,

Andy: yeah, I think that’s,

jerry: I mean you either resign or you have to become a whistleblower.

Andy: Yeah, that’s true. Your career has probably ended there at that company either way. Most likely. But it’s better than going to jail.

jerry: It’s a lot better than going to jail. I think what I saw is Sullivan is up for four to eight years in prison, depending on how he’s sentenced.

Andy: Feds don’t like it when you lie to them. They really don’t like it.

jerry: No, they don’t. The next recommendation is: if your company’s under investigation, get help, and potentially [00:05:00] that means getting your own personal legal representation to help you understand what reporting obligations you may have for any open information requests. And I say that because, in this instance, Sullivan had confirmed with the CEO of Uber at the time what they were going to disclose and not disclose, and the CEO signed off on it. And he also went to the chief privacy lawyer, who, by the way, was the person who was managing the FTC investigation, and the chief privacy lawyer also signed off on it.

Like the joke goes, HR is not your friend. Your legal team may also not be your friend. At some point, if you’re in a legally precarious position, you may need your own counsel, which is crappy.

Andy: That is crazy. How much is that going to cost? And wow, that’s one more reason to think long and hard before accepting a role as CISO at a public company.

jerry: Yeah, this, by the way I’m skipping over all sorts of good stuff in this story. So I invite everybody to read it. And it’s a pretty long read.

jerry: It talks about the differences between the directors of companies and officers of companies and the different obligations and duties they have related to shareholders and customers and employees and whatnot. And what was very interesting is the point they were making that CISOs don’t have that kind of a responsibility, right?

jerry: They’re not corporate officers in the same way. And so when you read the article, and I apologize for not sending it to you, I just realized, it was very clear that the author was pointing out that the government, and I suspect at the behest of Uber, was really specifically [00:07:00] going after Sullivan, right?

jerry: Because in exchange for testimony, people got immunity in order to testify against Sullivan. And that kind of went all up and down, including, you know, some of the lawyers. So, by the way, I think he clearly had some bad judgment here. But also, he wasn’t the only one. This was a family affair, but he’s the one who’s really taken the beating. The next recommendation was: paying a ransom in return for a promise to delete copies of undisclosed data does not relieve your responsibility to report the issue under many global laws and regulations.

jerry: So just because you’ve gotten an assurance, after you’ve paid a ransom, that the data has been destroyed, you still, in almost all cases, are going to have a responsibility to report. And one of the things the author here says is you really should let everybody know; there are vehicles to [00:08:00] inform, at least in the U.S., CISA and the FBI, and I’m sure there are similar agencies in different countries. To help insulate yourself, do not alter data or logs to conceal a breach or other crime. That seems pretty self-evident, but I think the implication is that

jerry: that’s what happened here. And then also, lastly, do not create documents that contain false information.

Andy: Shocking.

jerry: Yes. So again, not, nothing in there that is like earth shattering but it’s a good reminder,

Andy: yeah. And I, I don’t know if but our good friend Bob actually got out of the South American prison he’s been in for a while, and I heard from him, and he’s doing well, he’s got three new tattoos and lost two fingers, but otherwise he’s doing well. He was telling me that he once worked for a CISO that actually fabricated evidence for an internal auditor.

Andy: And thought it was a fun [00:09:00] game

Andy: and how he had a tough time knowing how to handle that.

jerry: And the ethics of how to disclose that, right?

Andy: Especially because as he described it, it was a very powerful CISO who had a reputation for retaliatory behavior to those who did not bow before him. Damn. So

jerry: yeah, Bob has all the best stories.

Andy: He does. He does. I look forward to hearing more about his South American prison stint.

jerry: All right. Our next story today comes from webpronews.com, and the title here is “Fujitsu Details Non-Ransomware Cyberattack.” It feels like it’s been so long since we’ve talked about something that wasn’t ransomware.

Andy: I feel like these bad guys just, lost a good ransomware opportunity.

jerry: Clearly they did. So there’s not a huge amount of detail, but basically Fujitsu was the victim of some sort of [00:10:00] data-exfiltrating worm that crawled through their network. They haven’t published any details about who or how, or why, or what was taken, but what was most interesting to me is that the industry right now is very taken by ransomware or more pedestrian hacks to mine cryptocurrency or send spam or do those sorts of things.

jerry: It’s been a while since I can think of us actually having something destructive, or something whose job was not to be immediately obvious in your environment.

Andy: Yeah. Again, the details are very sketchy, but if I had to guess, maybe this was some sort of corporate espionage, based on the way they described it, and again, the details are sparse.

Andy: It was low and slow and very quiet, [00:11:00] trying to spread throughout their environment. It didn’t get very far. They said, what, 49 systems? And they had a lot of interesting caveats: it didn’t get to our cloud, it didn’t do that. So there were a lot of things it didn’t do.

Andy: They didn’t tell us much about what it did do. But if I had to guess, maybe some sort of corporate espionage. Or just random script kiddies; you can’t always attribute motivation. So I’ll say

jerry: it this way: intellectual property theft. The motivations for that, I guess, are an exercise left to the reader.

jerry: They did say that data was exfiltrated successfully. They didn’t say what data, but my guess is they were after some sort of intellectual property. The reason for bringing this up is not that it has a whole lot of actionable information, but more that there are other threats out there still; it’s not all ransomware and web shells and that sort of stuff.[00:12:00]

Andy: Indeed, but to be fair, that is the majority of it. Protect your cybers. You know what helps? A solid EDR. That’s a little foreshadowing for a future story.

jerry: We’ll get there. We’ll get there. All right. The next story comes from thehackernews.com, and the title here is “5 Key Questions CISOs Must Ask Themselves About Their Cybersecurity Strategy.”

Andy: Apparently, we need to add a sixth one, which is, Am I going to go to jail?

jerry: So the key questions here: number one, how do I justify my cybersecurity… Actually, you know what, I’m going to back up for a second, because there were a couple of other salient data points in here. The first one was that they pointed out that only 5 percent of CISOs report directly to the CEO, and that two thirds of CISOs are two or more levels below the CEO in the reporting chain. And those two facts indicate a potential lack of high-level influence, to [00:13:00] use their words. I will tell you, the placement of the CISO in an organization isn’t necessarily an indicator of how much power they have. Somebody who reports to the CEO is going to be more influential, for sure, but there are lots of different organizational designs, especially when you go into larger companies.

Andy: Sure. I would say also, if they’re highly regulated, that CISO has a lot of inherent authority because of the regulations that are being enforced upon that organization by external third parties.

jerry: The Ponemon, or Pokemon, Institute found that only 37 percent of organizations think they effectively utilize their CISO’s expertise.

jerry: I kind of wonder, who are they asking? Are they asking the CISOs? Anyway, I am curious about the [00:14:00] methodology behind that study. It doesn’t necessarily surprise me. Just moving somebody into a different place in the organization doesn’t necessarily mean that they’re going to more fully use the talents or expertise of a CISO.

Andy: Yeah. If it’s anything like most organizations, they delegate to that CISO. The assumption seems to be that the boards or the executive teams would be asking deep cyber questions of the CISO, which is an odd expectation.

jerry: It is an odd expectation. And related to what you’re saying, Gartner finds that only 10 percent of boards have a dedicated cybersecurity committee overseen by a board member.

Andy: The way I would look at both of those stats is more, how much influence does the CISO have on whether the company operates in a less risky or more risky way, right?

Andy: It’s not about leveraging their expertise. It’s about how influential they are in [00:15:00] guiding the company away from risk and what those trade offs are.

jerry: It also comes down to what the company values. This is financial risk management.

Andy: The flip side is I think a lot of executives think of CISOs as constantly saying the sky is falling to get bigger budgets, build their empire, and hire more people. And they see it as a black hole we’re throwing money into that we can’t, which this article goes into, justify.

Andy: We can’t prove the ROI on it.

jerry: Yes, exactly. So the key questions to ask yourself: number one, how do I justify my cybersecurity budget? I think that’s a perennial challenge that anybody in security leadership has. How do you justify, or demonstrate, that you are spending the right amount of money?

jerry: You’re not spending too much, you’re not spending too little. Generally [00:16:00] speaking, and this is like one of those mass psychosis episodes, you often do that by benchmarking yourself against your competitors.

Andy: It’s a safe answer.

jerry: And they do it by benchmarking themselves against their competitors.

Andy: You’ve got the theory of the wisdom of crowds, right? If I’m around the average, I must be doing fairly close to correct. But not all companies are the same. Not all companies have the same risk tolerance. Not all companies have the same corporate structure or the same financial situation. So I get it, that’s where my mind goes: what percentage of G&A is spent on cyber in my industry? That’s what I’m going to go ask for.
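
A back-of-the-envelope version of the benchmarking Andy is describing might look like the sketch below. Every figure is made up purely for illustration; in practice the peer percentages would come from whatever survey or peer data you trust.

```python
# Hypothetical peer-benchmarking math; all figures are invented for illustration.
peer_security_spend_pct = [0.052, 0.061, 0.048, 0.070, 0.055]  # security spend as a share of G&A at peer companies
our_ga_budget = 40_000_000                                     # our annual G&A budget, in dollars

benchmark_pct = sum(peer_security_spend_pct) / len(peer_security_spend_pct)
suggested_ask = our_ga_budget * benchmark_pct

print(f"Peer average: {benchmark_pct:.1%} of G&A")
print(f"Benchmark-based budget ask: ${suggested_ask:,.0f}")
```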

jerry: Number two is how do I master the art of risk reporting, which, by the way, I think is not entirely disassociated from the last one, right? Because part of your budget, and I dare say a major part of your budget, is intended to address [00:17:00] risk. And what they’re really pointing out here is how do you communicate to the senior leadership team, the board of directors and so on, the level of cyber risk that you have in your organization in terms that make sense to them.

Andy: That’s an incredibly challenging question, honestly.

jerry: Yeah. So something that was very interesting to me, at least, as I was reading this, because look, I struggle with all these things too, right? All five of these things, and we haven’t gotten to all of them yet, resonated with me. And what’s super interesting is we all have to make this up on our own.

Andy: You didn’t go through that section of the CISSP?

jerry: There’s not a GAAP for this. In accounting, you have GAAP, generally accepted accounting principles. There really isn’t a GAAP-type methodology for risk reporting. And [00:18:00] perhaps there should be.
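
One building block that often gets cited for more standardized risk reporting is annualized loss expectancy: single loss expectancy times annualized rate of occurrence, the simplest ancestor of approaches like FAIR. A minimal sketch with invented numbers, just to show the shape of the math:

```python
# Annualized loss expectancy: ALE = single loss expectancy (SLE) x annualized rate of occurrence (ARO).
# Every input below is a made-up estimate; real programs use ranges and distributions (e.g., FAIR).
risks = {
    "ransomware outage":       {"sle": 2_500_000, "aro": 0.10},
    "phishing-led wire fraud": {"sle":   400_000, "aro": 0.50},
    "lost or stolen laptop":   {"sle":    50_000, "aro": 2.00},
}

for name, r in risks.items():
    ale = r["sle"] * r["aro"]
    print(f"{name:25} ALE ~ ${ale:,.0f}/year")
```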

Andy: This is why we are often accused of being an immature industry by well trodden business leaders who have a shared language.

Andy: We’re wizards and witches walking around speaking spells that they don’t understand, out of black boxes that don’t make sense.

jerry: So I think this is an area where we can certainly mature. I would love to hear from anybody in the audience who thinks there’s a common methodology that people can adopt here; I’d love to talk about that in a future episode. All right. Number three is how do I celebrate security achievements?

jerry: I have a problem with the way some of this was worded. This is in quotes, by the way: public recognition of attacks that were deflected can simultaneously deter attackers and reassure stakeholders of the organization’s commitment to data [00:19:00] protection.

jerry: When I read that, I immediately thought of Oracle’s unbreakable Linux, or unhackable, what do you call it?

Andy: Yeah.

jerry: It’s like putting a chip on your shoulder and begging someone to come in.

Andy: If I really dug into this, define what an attack is, define when I’ve deflected it. Like every firewall drop, log entry, is that an attack I stopped?

Andy: Like I’ve seen that kind of shenanigans. Or is it more, hey, we had an incident that started and we contained it. Or is it, I don’t know, every time my email security tool stopped a phishing attack? There’s all those sorts of metrics you can run, but is it valuable?

jerry: Then you get into, like, how many spams did I reject?

jerry: How many phishing emails did I reject? Which we make fun [00:20:00] of, right? Because they’re metrics. They’re not achievements.

Andy: But you’re trying to prove a negative here. This has been the fundamental problem with the industry from day one: you’re spending money to stop something. How do you know that, if you hadn’t spent that money, bad things would have happened?

jerry: The only thing I can say is, if you take a more capability focused view rather than a metrics focused view, I think that’s perhaps where the opportunity lies. We had a gap in our authentication scheme because we didn’t have multi factor authentication.

jerry: We implemented multi factor authentication, we closed a huge hole. Super simplistic example. But I will say there’s another aspect of this that you have to be aware of. And perhaps I worked alongside too many lawyers, [00:21:00] but one of the pitfalls of taking credit for doing some security thing is that you’re tacitly admitting that you weren’t doing it before.

jerry: Yeah,

Andy: that’s true.

Andy: Our new version no longer does X. Wait, you were doing X before? Don’t worry about that. The fact is we’re not doing it now.

jerry: We implemented multi factor authentication. Oh so wait a minute,

Andy: Right? It’s a tough one. But also, you can never be at zero risk. If you’re completely safe, you’ve either way overspent, or you’ve added so much friction, or you’ve inhibited people’s ability to do their jobs so much, that you’re now breaking the business in a different way.

Andy: You’re not going to get to risk zero. So what’s the right balance?

jerry: Yeah. And the business doesn’t want you to get there. I remember working effectively as the CIO for a company that [00:22:00] we both worked for once. And the COO pitched it to me in the form of a question: what is your approach to passing audits, Jerry?

jerry: Do you want to do really well? And I said, yeah, I think you should do really well. And he said, no. He said, if you fail audits, you’re going to get fired. And if there are no issues ever found, you’re probably also going to get fired, because you’re spending too much money.

jerry: So you’ve got to find the right balance, because that’s what the business wants. If you are spending enough money to do everything perfectly, that’s coming at the expense of other things the business could be investing in. I think his point was not

jerry: to accept too much risk, but that to do things perfectly, as you continue to move up the [00:23:00] maturity ladder, it gets more and more expensive. And the marginal utility starts to decline.

jerry: Sure.

jerry: Anyhow, all that said, it is very important, if for no other reason than morale, to celebrate. But you’ve got to be smart about it.

Andy: I wouldn’t do it publicly, frankly.

jerry: I wouldn’t either.

Maybe internally, somewhat company wide, or at least department wide. You need to understand what motivates your people and what their reward systems are like, and work to those. But I am very much against putting that bullseye on your back by saying you’re not hackable, or you’re about to get a free audit.

jerry: Oh, yes. Oh, yes. So number four is how do I collaborate with other teams better? [00:24:00] So again, this whole article is aimed at CISOs, and CISOs are almost always an executive level position. And I learned a lot, right? I ended my career, I don’t know if that’s the end or if there’s more to come, as an executive.

jerry: And I learned a lot about what that means. And one of the most important aspects is that you do partner with those other people. That’s an intrinsic part of being an executive.

Andy: Yeah. It becomes about working well with other departments. And that means sometimes you’ve got to give and take and be willing to lose a battle to win the war, as they say.

jerry: So it’s super important, but I think this is not security centric. This is a fundamental [00:25:00] tenet of what it means to be a leader in an organization.

Andy: And I think we as technology people often get promoted up with a background that isn’t well suited for that, to be completely honest, to the point where many technical people score poorly in those quote unquote soft skills.

Andy: But if you want to get to that level of the organization, it is required.

jerry: Yes, absolutely. Now, they point out that there are tangible security benefits. So for example, building bridges with HR allows you to do things like integrate security requirements into the onboarding and offboarding processes and whatnot.

jerry: And also, having those relationships throughout the organization is very key, especially in times of crisis, like an incident or what have you. You’ve got to have the trust of the team, and the team needs to have trust in you.

jerry: Last [00:26:00] one is how do I focus on what matters most? This is a hard one. And I think in large measure it’s because there are so many variables. Every company values things differently; they have different risk appetites, they’re in different industries, they move at different speeds, they have different idiosyncrasies, they like or don’t like different technologies. And in many instances companies hire a CISO based on who that person is, not necessarily on what they need. They hire them based on the reputation: Jerry’s an incident response focused person.

jerry: We have lots of incidents.

Andy: Because of Jerry?

jerry: Maybe.

Andy: Oh, that’s fair. Job security.

jerry: I think you’ve got to take a step back and, as you’re figuring out how to focus on what matters most, you’ve got to, [00:27:00] first define what is it that matters most.

Andy: Yeah. The other thing I would add, and I don’t know how to integrate this, but there’s a cultural aspect to a company. The culture matters. So I may want to do a highly impactful security initiative, let’s say something like DLP with a document classification system, and I may work for a large financial who’s completely on board with that and very comfortable with it.

Andy: Great. But if you work for a small startup, you’re getting a lot of pushback on that because that’s not their culture. From my perspective, and obviously I haven’t been a CISO, but I’ve been senior level and am currently a director, you can’t go faster than the culture of the company will allow you to go if you’re implementing potentially friction inducing security controls.

Andy: And so that I think can help determine your priorities. What’s acceptable to the business from an [00:28:00] impact perspective.

jerry: That’s a good point. I’ll tag on to that and say one of the things I’ve learned over the years is that, regardless of how much money you’re given, there is only so much change that an organization can undergo.

Andy: Oh yeah. That’s a good point.

jerry: In a certain period of time. So someone comes to you and says, you have a blank check. I need you to implement DLP, replace all our firewalls, a bunch of additional, very disruptive things. You’re not going to be able to do it.

jerry: Even if you have the money to do it, there’s a finite capacity for change in an organization, and that threshold is going to be different in different organizations. One of the challenges that you as a leader have to figure out is where that threshold is, so you don’t cross it. Because if you cross it, things start to fall apart and you don’t actually make progress on anything.

jerry: And I’ve seen that [00:29:00] happen time and time again, where it’s, hey, we’re all in, we’re going to blow the doors off the budget, we’ve got a lot of things that have to change.

Andy: Especially after a breach.

jerry: Yeah. And you end up having not accomplished anything.

jerry: You’ve burned through all the money, but you’ve not fully accomplished anything. And so you’ve got to be very measured in that.

Andy: And the flip side, if you’re an aggressive go getter as a leader and you commit to some aggressive schedule and then you don’t get it done or you don’t get there as fast as you want, then you potentially have executives looking at you like you’re ineffective.

Andy: So it’s a careful balance.

jerry: And that’s just Leadership 101. You’ve got to meet your commitments.

jerry: Alright. So they do talk a little bit about effective communication between the CISO and [00:30:00] up. I think that permeated those five items. But this maybe goes back to the idea of GAAP-style reporting: I think we as a security community often do a pretty bad job of

jerry: communicating in non technical terms. We talk about CVEs and DDoS volumes and things like that, but translating that into business impact is really what the business needs from you.

Andy: And it’s also a likelihood percentage, not a guarantee.

Andy: It’s at best a guess, based on best practices and observing what happens to other companies, and lots of inputs and data, but there are no guarantees.

jerry: I think one of the struggles when I read articles like this is they often talk about [00:31:00] things like how many fewer incidents did you have, or how many fewer breaches did you have,

jerry: and whatnot, and using that as how you communicate the effectiveness of your program or what you need to improve. I think the reality is that breaches tend to be very transformational, pivotal incidents. They’re often not countable. You don’t stay in a CISO role and have so many breaches that you can show trends over time, right?

jerry: If you’re in that position and you have that kind of data, something’s wrong.

Andy: Yeah. We need to learn from other people’s breaches, right?

jerry: Exactly.

jerry: All right. Moving on, the next one comes from Dark Reading, and the title is sizable chunk of SEC’s charges against SolarWinds tossed out of court. I will [00:32:00] admit I have not read all 107 pages of the judge’s ruling, so shame on me for that.

Andy: You’re an unemployed bum, what else do you have to do?

jerry: Absolutely nothing. The SEC filed a lawsuit against SolarWinds and the SolarWinds CISO, alleging lots of things. Everything was dismissed except for the statements that the CISO had made about the security program at SolarWinds prior to the breach. There were apparently inaccuracies in their 8-K, which, for those of you who don’t know, is a form the SEC requires you to file in the wake of a material event like a breach.

jerry: And so that was part of the case. There were other statements made post breach that the judge, I did find this in a different article, described as corporate puffery that is not [00:33:00] actionable. I thought that was pretty funny.

Andy: That is pretty funny. I think that needs to be a thing. I got to work that into more and more conversations.

jerry: It’s interesting that a lot of the reaction to this, which means that there are apparently other implications in the ruling, a lot of the post judgment discussion has been, oh gosh, this is really a good thing because it allows teams internally to communicate amongst themselves without fear of what you write being used against you.

jerry: However, that actually isn’t obvious as part of what the SEC was charging them with. I really want to go read those 107 pages to understand what exactly the SEC was alleging. But in some regards it’s neither here nor there. What is most interesting, though, is the charges that do remain, which are those that [00:34:00] basically said: before the breach, the SolarWinds CISO had come in, performed an assessment, found lots of problems, and documented those problems, but then went externally to customers and perhaps investors and made claims about the robustness of their security program.

jerry: And that is what the SEC is still going after, and that is what the judge is allowing them to continue pursuing.

Andy: Because the theory the SEC is going after, I’m assuming, is that accurate information should be disclosed to the investing public, so they know how to appropriately measure the risk of investing in a company, and all that stuff that comes with being a public company.

Andy: They’re very particular about making sure that the information that is disclosed is accurate and not misleading. We [00:35:00] see all sorts of stuff about that, just going back to Elon Musk’s tweets about Tesla getting him in a lot of trouble with the SEC and that sort of thing.

Andy: Like they take that stuff very seriously.

jerry: Oh yes. Yes, indeed. So probably more to come on this after I have a chance to read the court decision, but I would definitely say, have a measured approach to communicating, especially if you’re aware that there are security gaps or weaknesses in your environment.

jerry: If you end up in a position where you are representing things radically differently in internal communications versus external communications, you should probably take a step back and ask yourself what you’re doing. I guess it probably won’t be a problem if you’re not breached, but if you are, that’s going to be exhibit A.

jerry: Yep. And as we’ve now [00:36:00] seen, the company isn’t going to have your back. They’re not going to stand in front of you and take the bullet; they’re going to be like, oh, look, our CISO, he was a terrible guy.

Andy: Which, by the way, is why people get so frustrated at statements put out by companies sounding like legalese and business-speak: they’re protecting themselves with very specific language for these sorts of circumstances.

jerry: Yes.

jerry: So there’s one more story. It’s a small thing, probably not even worth talking about. I wasn’t even sure if we were going to get to it. You know what? Let’s talk about it. This one comes from CSO Online, and the title is CrowdStrike CEO apologizes for crashing IT systems around the world and details fix.

Andy: Yeah, it was it was a thing.

jerry: It was a thing. I have to tell you, I woke up on Friday to a text from my wife, who had [00:37:00] already been up for hours, asking me if I was happy that I was no longer in corporate IT. And I thought, what?

Andy: What just happened?

jerry: What just happened? And so of course I jumped on to infosec.exchange and quickly learned that 8.5 million Windows systems around the world had simultaneously blue screened and would not come back up without intervention.

Andy: Like on site physical intervention.

jerry: Yes. Apparently, though, if you rebooted it somewhere between three and 15 times, it might come back on its own.

Andy: I heard that too. I have no idea how accurate that is.

jerry: I don’t know either.

Andy: But that’s going to be the new help desk joke. Have you tried rebooting it 15 times?

jerry: Yes, it is already meme fodder for sure.

Andy: Insane amount of memes. So what happened?

jerry: CrowdStrike, I think most people know, is an [00:38:00] EDR agent that runs on

jerry: probably 10 to 15 percent of corporate systems around the world. It’s a significant number. They deliver these content updates, which are roughly equivalent to what we used to think of as antivirus updates. And these, by the way, are multiple-times-a-day updates. These aren’t new versions of the software; these are fast, quickly delivered things. So what happened was, on Friday, CrowdStrike pushed out an update to how it analyzes named pipes, and the definition file that they pushed out had some sort of error. The nature of the error hasn’t been [00:39:00] disclosed that I’ve seen, at least, and that error caused Windows to crash, basically.

jerry: Like crash

Andy: hard. Crash hard

jerry: but with blue screen basically.

Andy: Yeah

jerry: and put it in a blue screen loop at that point. Because of how CrowdStrike integrates with Windows, it would blue screen again immediately as part of the startup process. So, as you described, you would end up in a blue screen loop.

jerry: And so the only option you had was to go into safe mode and remove a file. People came up with all sorts of creative ways of doing that with scripts, and even Microsoft released an image on a thumb drive that you could boot. Now, where it went horribly wrong for some people, and ironically Azure was one of the most problematic places, is where you have disk encryption, [00:40:00]

Andy: right?

Andy: You’re probably most often using BitLocker. And if you don’t have your recovery key, you can’t access the disk without fully booting.
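
For context, the widely reported manual fix was to boot into safe mode or the Windows Recovery Environment and delete the bad channel file from CrowdStrike’s driver folder. The path and C-00000291*.sys pattern below reflect CrowdStrike’s public remediation guidance; the script is only a sketch of the kind of cleanup admins improvised, not an official tool.

```python
# Illustrative sketch only: the shape of the widely reported manual fix after booting
# into safe mode or WinRE. Path and filename pattern are as described in CrowdStrike's
# public remediation guidance; do not run this blindly on a healthy system.
import glob
import os

CROWDSTRIKE_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
BAD_CHANNEL_FILE_PATTERN = "C-00000291*.sys"

for path in glob.glob(os.path.join(CROWDSTRIKE_DIR, BAD_CHANNEL_FILE_PATTERN)):
    print(f"Removing {path}")
    os.remove(path)
```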

jerry: So lots of IT folks got a lot of exercise over the past weekend. And by the way, if they’re looking a little like they’re dragging this week, go buy them a donut and say thank you.

jerry: Because they’ve had a

Andy: bad couple of days. Yeah, no kidding. It’s also amazing how many people have turned into kernel level programming experts on social media in the past three days.

jerry: 100%,

jerry: they were formerly political scientists and constitutional scholars and trial attorneys and epidemiologists and climate scientists and whatnot.

jerry: So it does not surprise me that they are also kernel experts.

Andy: There’s been a lot of really intense finger pointing and [00:41:00] debate going on when, honestly, we don’t even fully know the entire story yet.

jerry: No. There’s a whole lot of hoopla about layoffs of QA people and the impact of return to office, but we really don’t know what happened.

jerry: What I find most interesting is that this is a process that happens multiple times a day, for the most part, and this hasn’t happened before. So something went horribly wrong. And I don’t know if that was because they skipped the process, or because there was a gap in coverage, like the set of circumstances that arose here were just never accounted for, like nobody thought that was a possibility,

Andy: which, by the way, happens in almost every engineering discipline.

Andy: We learn through failure. Okay, let me back way up and say I am not in any way a CrowdStrike apologist. In fact, I’m not a huge fan of CrowdStrike. [00:42:00] However, I’m seeing a whole lot of holier than thou, you-should-have-XYZ’d on social media. That puts me in a contrarian mood to counter those arguments with a cold dash of reality: there’s a whole lot of reasons companies do what they do in the way they do it.

Andy: And anyway, I don’t want to get on my entire soapbox when there’s more to unpack here, but I think it’s very easy to point out failures without weighing them against benefits.

jerry: Yeah, absolutely. CrowdStrike works quite well for a lot of people. And I dare say it has saved a lot of asses and a lot of personal data.

Andy: There are certain people, like somebody we know who very commonly comments on these things, who were in a couple of articles saying that this just proves that automatic updates are a bad idea. I don’t know that that’s true. I would say you can’t say that unless you [00:43:00] measure the value of automatic updates that have stopped breaches and stopped problems, because those updates were so rapid and so aggressive, against this outage, right?

Andy: You have to balance both sides of that scale. You can’t just look at this and say, yeah, this was a massive screw up, it caused massive chaos and a huge loss of income for a lot of people and disrupted a lot of people’s lives. Okay, that’s bad, but weigh that against, for those using tools with automatic updates, how many problems were solved and avoided by those rapid automatic updates, which is so difficult to measure, but must be thought about.
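
Andy’s point is essentially an expected-value comparison. A toy sketch, with every number invented purely to show the shape of the tradeoff:

```python
# Toy expected-value framing of "rapid auto-updates vs. one catastrophic bad update."
# Every number is invented purely to illustrate the tradeoff being described.
incidents_prevented_per_year = 3        # breaches plausibly stopped because updates landed quickly
avg_cost_per_prevented_incident = 4_000_000

bad_update_probability_per_year = 0.02  # chance of a fleet-wide bad update in a given year
bad_update_cost = 20_000_000            # recovery, downtime, and cleanup when it happens

expected_benefit = incidents_prevented_per_year * avg_cost_per_prevented_incident
expected_outage_cost = bad_update_probability_per_year * bad_update_cost

print(f"Expected annual benefit of rapid updates: ${expected_benefit:,.0f}")
print(f"Expected annual cost of a bad update:     ${expected_outage_cost:,.0f}")
```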

jerry: I think the most problematic aspect of that is that the value is amortized and spread out over thousands or tens of thousands of customers, over a long period of time, but this failure impacted everybody at the [00:44:00] same time and caused mass chaos, mass inconvenience, mass outages,

jerry: all at the same time. So I completely agree with you, but I think that’s what’s setting everybody’s alarm bells off: oh my gosh, we have this big systemic risk, which realistically hasn’t happened perhaps as often as you might think. And by the way, I know there are a lot of people who are also talking smugly about how good it is to be on Linux and not on Windows, because this issue only impacted Windows, not Linux. But I will tell you, as the former user of a very large install base of CrowdStrike on Linux, it had problems. A lot of them. We had big problems. I don’t know that there was ever one as catastrophic as this, where it happened all at the same time, [00:45:00] but it’s not like the CrowdStrike agent on Linux hasn’t also had issues.

Andy: Sure. The question becomes, what problems would you have had if you hadn’t run it? What value did it bring?

jerry: That’s obvious. I think for many organizations, this is the way they identify that they’ve been intruded on.

Andy: Yeah, I mean, cyber insurance companies basically mandate that you have EDR

Andy: for ransomware containment. For good or ill, it’s table stakes now. And those cybersecurity companies out there who are currently casting stones at CrowdStrike, saying we don’t do things this way and we would never have this problem? Yeah, good luck. You are the definition of glass houses.

Andy: Interestingly, that 8.5 million stat apparently was, according to Microsoft, less than 1 percent of the Windows fleet in the world, which I find fascinating, that only 1 percent caused this much [00:46:00] chaos. So I wonder how many of the impacts were second order, not direct, but secondary fallout of some critical system somewhere going down. But we’ve seen problems like this before,

Andy: just on smaller scales. Even Windows Defender has had somewhat similar outages that have caused problems. This is the whole debate of Windows Update: do you automate, do you trust it, do you test it? And I go back to, okay, you may have a problem, but is that risk worth the efficiency and speed of getting a patch out there before you get hit with the exploit for whatever recently patched problem? You can’t just look at one side of the equation, which is so frustrating to me.

Andy: And so many people are out there clout farming right now, just being, I told you so’s or you guys are just dumb and they’re not looking at the big picture at all.

jerry: So on the converse side, there [00:47:00] were huge impacts, and I think that we do have to do better. But that said, I don’t think it’s a wise idea to run out and uninstall your CrowdStrike agent.

jerry: There are other technologies, other ways of hooking in, that are perhaps less risky, but not no risk, for sure.

Andy: And are you talking about kernel level integration versus non kernel level?

jerry: Yeah, so like eBPF versus kernel modules and whatnot. That, by the way, solves some problems but creates other problems. And we’ve not seen big failures with those other companies, but have we not seen them because they’re just smaller, or because they don’t happen?

Andy: And look, let me be very clear: I am not a coder. I don’t fully understand what I’m talking about here. I am just going off a rough understanding, trying to get my arms around this issue.

Andy: So take everything I’m about [00:48:00] to say with a grain of salt, but my understanding is that the advantage of being in the kernel is you’ve got much deeper access that’s faster, more efficient, and more tamper resistant than running in user space. And as a security control that’s trying to stop things like rootkits, which were a big problem back in the day, not so much now, I think that’s where that came from. We’ve seen a number of vendors who are saying running at the kernel level is just irresponsible, we don’t do that, and here’s why we don’t do that, we’re differentiated because of XYZ. Okay, cool. But there must be a trade off, and yeah, you can’t crash the system like that.

Andy: But are you also as capable at detection and what is your resource impact? And I honestly don’t know, right? And when you listen to these other vendors who don’t use kernel mode, they of course have very compelling arguments and they drink their own Kool Aid and they believe their own marketing.

Andy: And maybe they’re right. I don’t know, but I also don’t think CrowdStrike is just completely irresponsible for running in the [00:49:00] kernel, as some people are saying. I think that has a benefit, which is why they did it. Now, whether that benefit is worth it is the question, but I don’t think they’re just malicious.

Andy: So I don’t know. I just see a lot of people going off on this. And I admittedly, I don’t know enough to probably really participate in that conversation, but

Andy: it’s a tough,

jerry: I do think that CrowdStrike, my understanding at least, is moving in the direction of a similar strategy, they just aren’t there yet: not using kernel mode, not using a kernel module. So don’t misunderstand, they are using this thing called eBPF.

jerry: It’s a very uniform way of getting visibility into what’s happening in the kernel without actually loading your own driver or module into the [00:50:00] kernel. One of the big problems I had with CrowdStrike was this constant churn of, do I patch my kernel or do I leave CrowdStrike running, because I can’t do both.
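
For a rough sense of what “visibility without loading your own kernel module” means, here is a minimal eBPF example using the bcc toolkit on Linux. It just traces process execution and is not anything CrowdStrike-specific; it assumes root privileges and the bcc package are available.

```python
#!/usr/bin/env python3
# Minimal eBPF example via the bcc toolkit: observe execve() calls from user space
# without shipping a custom kernel module. Linux only; requires root and bcc installed.
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

int trace_exec(struct pt_regs *ctx) {
    char comm[16];
    bpf_get_current_comm(&comm, sizeof(comm));
    bpf_trace_printk("exec by %s\n", comm);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

print("Tracing execve()... Ctrl-C to stop")
b.trace_print()
```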

jerry: And they get out of sequence

Andy: on supporting each other.

jerry: Yeah. And that, by the way, comes back to the fact that they have to create a kernel module that’s tailored to different kernel versions, and it depends on how the kernel changes from one version to the next.

jerry: Sometimes it’s fine; most of the time it’s not fine. And so you end up with this problem. It’s less of a problem on Windows, because Windows kernels are pretty stable, and I think they have a lot more interlocking with vendors like CrowdStrike than Linux does. But in any event,

jerry: I agree with your thesis there. We are benefiting from technology like this and assuming that it could never go wrong, and that’s probably not a good assumption, as we’ve now seen. But at the same time, I do think this could have gone better.

Andy: Absolutely. But even if you assume this could go wrong at some point, is planning for this to go wrong again a wise use of time and money versus all the other problems you’re likely to deal with? Is this a black swan event that isn’t likely to happen often enough to bother building mitigations for?

jerry: So I was standing in line for dinner with my wife, and she asked me this question, because at the time a lot of flights were still canceled, and I didn’t have a good answer: what can companies do to avoid this? Can they prepare in some kind [00:52:00] of disaster recovery way for it?

jerry: And the reality is, I don’t think you can. You could say, you know what, I’m going to get rid of CrowdStrike and go with SentinelOne or somebody else, but you’re taking a leap of faith that they won’t also have a problem, or that

jerry: they’re not blind to some kind of attack that CrowdStrike could have seen.

Andy: You could duplicate your infrastructure with failover and run two different EDRs on each of the backups, but then so much more cost complexity. Are you going to run them as well? Are you going to have the mastery of two different vendors?

Andy: That introduces a whole lot of complexity. It’s easy to just say, go do it, but it’s very complicated for a “maybe that happens.” We talk about uptime and percentages. And, again, I’m not a CrowdStrike defender, that’s the funny part. I’m just frustrated with the thought leaders out there who are just [00:53:00] pounding the table with, look how bad this is, without putting it in context.

Andy: If we look at the percentage of these channel file updates that went out without a problem versus the ones that didn’t, is that a fair estimation of success versus failure rate? Are we holding CrowdStrike to five nines, six nines, three nines, two nines? There’s no perfect solution. And by the way, the closer you get to perfect, the more expensive it gets.

Andy: When we talk about uptime and systems being five nines, that’s a hell of a lot more expensive than two nines or three nines. So are we being unrealistic in saying this should never happen and CrowdStrike should go out of business? Okay, fine, then how much more are you willing to pay for a system that never has this problem?
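
For reference, the arithmetic behind those “nines,” i.e. how much downtime per year each availability target allows:

```python
# Allowed downtime per year implied by each availability target ("nines").
minutes_per_year = 365 * 24 * 60

for nines in (2, 3, 4, 5):
    availability = 1 - 10 ** (-nines)            # e.g., 3 nines = 99.9%
    downtime_minutes = minutes_per_year * (1 - availability)
    print(f"{nines} nines ({availability:.4%}): ~{downtime_minutes:,.1f} minutes of downtime per year")
```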

Andy: And how slow are you willing to accept updates against new, novel techniques, which is what they said they were pushing out? Because they’ve got this tension between getting these updates out fast versus doing all the checks and QA we all want to see.

Andy: So what are you willing to trade [00:54:00] off? Cost, complexity, time, risk, and the risk that if you don’t get the update fast enough, that self propagating ransomware hits you before you got it, and, oh sorry, we were in QA at the time. Are you willing to accept that? We’ve got to be adults in the room and look at all sides of the equation, not just point fingers at somebody when they screw up without recognizing the other side.

Andy: And again, I am not a CrowdStrike apologist here. I’m frustrated with the mindset of I’m going to build my thought leadership by just pointing out the bad things without ever balancing it against what the good is. Sorry, I’m a little frustrated.

jerry: It’s what we’ve built our industry around. I know.

Andy: Am I wrong? I’m not, am I wrong? Not to put you on the spot, but

jerry: The problem I have with the situation is that it reliably crashed every Windows computer it landed on.

Andy: I’m not sure that’s a hundred percent true. And [00:55:00] the only reason I say that is I’ve seen some social media imagery of a bunch of check-in kiosks at an airport where only one of five was down. I don’t know why; I have no idea why. It’s very flimsy evidence, but I would say let’s get a bit more root cause analysis before we can completely say that’s true. But it was obviously very effective, right?

Andy: And very immediately impactful to a super high percentage of the machines this was installed on.

jerry: So I guess my question, my concern, is how did this happen? Because I’m assuming, and I feel pretty confident, that they have some kind of testing pipeline where, before they push an update out, it goes through some standard QA checks, and something got missed.

jerry: I’m assuming something didn’t happen: either it didn’t crash their [00:56:00] version of Windows in their test pipeline, or it did crash and it wasn’t detected, or somebody skipped that step altogether. I don’t know; my concern lies there. What was the failure mode? And hopefully they’ll come out of this better than they were. In general, as an industry, we advance by kicking sand in other people’s faces.

jerry: That’s not

Andy: wrong. And by the way, I’m not trying to say let’s just shrug, say stuff happens, and move on. Obviously we have to learn from this, we have to understand the implications, and we have to adapt to it. I’m sure that every other vendor doing similar things is very curious what went wrong, right?

Andy: And hopefully we can all learn from it. Hopefully CrowdStrike will be very transparent. There’s no guarantee, but hopefully. And mind you, they’re going to get massively punished in the market for this, as they should. There are going to be a lot of people not [00:57:00] renewing CrowdStrike over this incident, and I have no problem with that.

Andy: That’s how the industry works. How they handle this incident is going to be highly impactful to how well they keep their customer base. But they’re also massively deployed. Some people say they’re too highly deployed. I don’t know, 14 percent of the industry,

Andy: I think I heard somebody say, and I don’t know how accurate that is. They’re one of the big boys in the EDR space. And frankly, having met a number of their people, they know it and they have some swagger, so maybe this will knock them down a notch. Admittedly, when I first heard this, I was like, ah, couldn’t have happened to a better company.

Andy: But the other aspect is Microsoft. Man, they’re taking a bunch of crap over this because it was their systems, but it was really not their fault, as far as we know. And they’re trying to step up: they’re putting out recovery information, they’re trying to put out tooling.

Andy: They’re trying to help, which I appreciate, but [00:58:00] it’s certainly a mess, don’t get me wrong. And I know I went down a ranty rabbit hole of just the stuff I didn’t like, partially because I’m assuming people who listen to the show know the details, right? There’s no reason for us to rehash all the basics.

Andy: We’re trying to get into what we think people care about, but.

jerry: Yeah, I guess the net point is, shit happens. I think we have to be pragmatic: EDR is an incredibly important aspect of our controls, and I think auto update is as well. The whole point is to be as up to date as you can be, because the adversaries are moving very fast.

Andy: Yeah. And we as an industry are not moving away from auto update. AI is the antithesis of manual updating, guys. Buckle up. If you don’t like auto updating, you’re not going to like AI much.

jerry: So we [00:59:00] have to find a way, I think, to coexist, and I don’t have a lot of magic words to say. This happened to CrowdStrike, but it has happened before: it happened to McAfee, and it is an incredible coincidence that the CEO of CrowdStrike was the CTO of McAfee when that happened. It’s happened to Symantec, and I think it’s happened to Microsoft multiple times.

jerry: Now, I think the difference between those and this, again, is the time proximity. All of these eight and a half million systems went down at roughly the same time. And contrasted with a lot of the others, these were almost all corporate slash business systems.

jerry: Because, well, actually Amazon’s been trying to sell me CrowdStrike for my home computer, but generally you don’t run CrowdStrike on your home PC.

jerry: You run it on the stuff that you care [01:00:00] about, because it’s really freaking expensive.

jerry: And when those systems go down, the world notices, and when eight and a half million of them go down all at the same time, it becomes big news, and lots of people consternate over it. And look, again, as an industry there’s so much clamoring for airtime, and we love a crisis.

jerry: We love to talk about why you shouldn’t be using kernel modules, why you shouldn’t have auto updates, why you shouldn’t do this, why you shouldn’t do that. It’s what we do, and it’s annoying. And I think sometimes you can cross the threshold of causing more problems than you’re solving, because if you’re trying to solve a problem that may never happen again in your career by ripping things out or duplicating your EDR environment, it’s just not necessarily cost effective.

jerry: So I don’t have a great answer. Look, this has been an industry wide problem, [01:01:00] and I don’t even think everybody’s fully recovered. I don’t know how many of the eight and a half million systems are still sitting there with a blue screen, but it’s not zero, I’ll tell you that.

Andy: Delta is still canceling flights, just as one small example.

jerry: Yeah. I had packages delayed; UPS is saying, your shipment is delayed because of a technology failure. So it’s far reaching, but I think we have to be thoughtful and not have a knee jerk reaction to what happened here.

jerry: I think CrowdStrike has a lot to answer for. One of the quotes in this article is hilarious. I’ll read it here, quoting the CEO: “the outage was caused by a defect found in a Falcon content update for Windows hosts,” Kurtz said, as if the defect was a naturally occurring phenomenon discovered by his [01:02:00] staff.

Andy: I’d guess probably about 80 lawyers are all over every single sentence being uttered right now. You never

jerry: The first law is you don’t accept responsibility, at least not until you know more. But look, I’m guessing the fog of war is thick over there right now.

jerry: I’m sure they know what happened by now. And like you said, hopefully they will be transparent about it, but

Andy: holy cow. What was interesting, if you get into their blog about the technical details: 04:09 UTC is when they put out the bad file, and it was fixed by 05:27 UTC. So about an hour and 18 minutes,

jerry: Call it 80 minutes.

Andy: Yeah. So that’s pretty fast, right? Obviously they knew they had a problem pretty fast. But it’s too late; once it’s out there, especially because the machines are down, you can’t push a fix to them. It’s a worst case scenario for them, in that they couldn’t [01:03:00] push out an autofix. And as a side thing, we’re also seeing a lot of bad actors starting to jump on this, trying to send out malicious content under the guise of helping with CrowdStrike issues.

Andy: So that’s always fun.

jerry: Dozens, maybe hundreds of domain names registered, many of which were malicious, some of which were parodies. Good times. Anyway, not a lot happened on Friday or last week. Hopefully it’ll be another quiet week.

Andy: It is for you, unemployed bum.

jerry: It’s interesting, I don’t know how I ever had time to work.

Andy: I, I think you need to put some air quotes around work, but sure.

jerry: Ouch.

Andy: All right, we have gone pretty long today, maybe we should wrap this bad boy

jerry: up. Yes, indeed. I appreciate everybody’s time. Hopefully this was interesting to you. If you like the show, you can find it [01:04:00] on our website, www.defensivesecurity.org. You can find this podcast, and the 272 that preceded it, on your favorite podcast app, except for Spotify.

jerry: I’m still working on that. And you can find Lerg where?

Andy: I’m on X slash Twitter at L E R G, and also on infosec.exchange at L E R G, lerg.

jerry: You can find me at jerry on infosec.exchange, and not so much on X anymore. And with that, we will talk again next week. Thank you, everybody.

Andy: Have a great week. Bye

jerry: bye.

 

Defensive Security Podcast Episode 252

https://www.bankinfosecurity.com/capital-one-must-turn-over-mandiant-forensics-report-a-14352

https://www.databreachtoday.com/insider-threat-lessons-from-3-incidents-a-14312

https://www.zdnet.com/article/ransomware-deploys-virtual-machines-to-hide-itself-from-antivirus-software/

Defensive Security Podcast Episode 233

https://www.securityweek.com/hackers-using-rdp-are-increasingly-using-network-tunneling-bypass-protections

https://www.zdnet.com/article/trojan-malware-is-back-and-its-the-biggest-hacking-threat-to-your-business/

https://www.csoonline.com/article/3336923/security/phishing-has-become-the-root-of-most-cyber-evil.html

https://www.darkreading.com/attacks-breaches/ransomware-attack-via-msp-locks-customers-out-of-systems/d/d-id/1333825

https://www.dlapiper.com/~/media/files/insights/publications/2019/02/dla-piper-gdpr-data-breach-survey-february-2019.pdf

Defensive Security Podcast Episode 231

https://lifehacker.com/why-smart-people-make-stupid-mistakes-1831503216

https://www.chicagotribune.com/business/ct-biz-tribune-publishing-malware-20181230-story,amp.html

https://www.securityweek.com/was-north-korea-wrongly-accused-ransomware-attacks

https://www.healthcareitnews.com/news/staff-lapses-and-it-system-vulnerabilities-are-key-reasons-behind-singhealth-cyberattack

https://www.nextgov.com/cybersecurity/2019/01/hhs-releases-voluntary-cybersecurity-practices-health-industry/153835/

https://www.zdnet.com/article/data-of-2-4-million-blur-password-manager-users-left-exposed-online/

https://arstechnica.com/information-technology/2018/12/iranian-phishers-bypass-2fa-protections-offered-by-yahoo-mail-and-gmail/

Defensive Security Podcast Episode 230

https://arstechnica.com/information-technology/2018/11/hacker-backdoors-widely-used-open-source-software-to-steal-bitcoin/

https://krebsonsecurity.com/2018/11/marriott-data-on-500-million-guests-stolen-in-4-year-breach/

https://krebsonsecurity.com/2018/12/what-the-marriott-breach-says-about-security/