Defensive Security Podcast Episode 282

Episode 282: Exploiting Trust in Cybersecurity Practices In episode 282 of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat discuss several cybersecurity topics. They highlight a phishing attack outlined by Microsoft, where cybercriminals leverage file-hosting services like OneDrive and Dropbox to exploit trust and compromise identities. The episode also explores concerns about AI systems like Grammarly sharing company-confidential info, and emphasizes the growing need for well-defined governance policies. They touch on a cyberattack affecting American Water’s billing systems and the potential implications for OT systems. The final discussion surrounds Kaspersky’s decision to replace its software on US systems with UltraAV, raising alarms over cyber responsibilities and government influence over IT.

 

Links:

  • https://www.microsoft.com/en-us/security/blog/2024/10/08/file-hosting-services-misused-for-identity-phishing/
  • https://www.tenable.com/blog/cybersecurity-snapshot-employees-are-oversharing-work-info-with-ai-tools-cybersecurity
  • https://go.theregister.com/feed/www.theregister.com/2024/10/07/american_water_cyberattack/
  • https://www.theregister.com/2024/09/24/ultraav_kaspersky_antivirus/

Defensive Security Podcast Episode 281

In this episode of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat discuss various cybersecurity events and issues. The episode opens with discussion on the recent weather impacts affecting Asheville and lessons for disaster preparedness in the security industry. A significant portion of the episode is dedicated to CrowdStrike’s recent Capitol Hill testimony, examining the fallout from their admitted testing failures and the implications of needed kernel access for security software. The hosts also explore an ongoing GDPR violation by Meta related to storing user passwords in plain text, and a hyped but less-critical-than-expected Linux vulnerability in the CUPS printing system. Finally, they delve into potential risks associated with AI systems like ChatGPT and the increasing need for security in OT and ICS environments. The episode concludes with a reminder about the essential nature of cybersecurity fundamentals.

Links:

  • https://www.cybersecuritydive.com/news/crowdstrike-mea-culpa-testimony-takeaways/727986/
  • https://www.bleepingcomputer.com/news/legal/ireland-fines-meta-91-million-for-storing-passwords-in-plaintext/
  • https://thehackernews.com/2024/09/critical-linux-cups-printing-system.html?m=1
  • https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/
  • https://industrialcyber.co/cisa/cisa-alerts-ot-ics-operators-of-ongoing-cyber-threats-especially-across-water-and-wastewater-systems/

Defensive Security Podcast Episode 280

In this episode of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat delve into key cybersecurity topics. They discuss a recent statement by CISA director Jen Easterly on holding software manufacturers accountable for product defects rather than vulnerabilities, and the need for derogatory names for threat actors to deter cybercrime. The episode also covers Disney’s decision to ditch Slack following a data breach, and the impact of valid account misuse in critical infrastructure attacks. Additionally, they explore new tough cyber regulations in the EU under NIS2, and a Google security flaw from a Black Hat presentation concerning dependency confusion in Apache Airflow. The hosts share their thoughts on industry responses, regulations, and how enterprises can improve their security posture.
00:00 Introduction and Podcast Setup
00:59 First Story: CISA Boss on Insecure Software
03:26 Debate on Software Security Responsibility
11:12 Open Source Software Challenges
15:20 Cloud Imposter Vulnerability
22:22 Disney’s Data Breach and Slack
27:37 Slack Data Breach Concerns
29:26 Critical Infrastructure Vulnerabilities
35:21 EU’s New Cyber Regulations
43:42 Global Regulatory Challenges
48:42 Conclusion and Sign-Off

Links:

  • https://www.theregister.com/2024/09/20/cisa_sloppy_vendors_cybercrime_villains/
  • https://www.tenable.com/blog/cloudimposer-executing-code-on-millions-of-google-servers-with-a-single-malicious-package
  • https://www.cnbc.com/2024/09/19/disney-to-ditch-slack-after-july-data-breach-.html
  • https://www.cybersecuritydive.com/news/cisa-critical-infrastructure-attacks/727225/
  • https://www.cnbc.com/amp/2024/09/20/eu-nis-2-what-tough-new-cyber-regulations-mean-for-big-business.html

Defensive Security Podcast Episode 279

In Episode 279 of the Defensive Security Podcast, Jerry Bell and Andrew Kalat discuss the latest cybersecurity news and issues. Stories include Transport for London requiring in-person password resets after a security incident, Google’s new ‘air-gapped’ backup service, the impact of a rogue WHOIS server, and the ongoing ramifications of the MOVEit breach. The episode also explores workforce challenges in cybersecurity, such as the gap between the number of professionals and the actual needs of organizations, and discusses the trend of just-in-time talent versus long-term training and development.

 

Links:

  • https://www.bleepingcomputer.com/news/security/tfl-requires-in-person-password-resets-for-30-000-employees-after-hack/
  • https://www.securityweek.com/google-introduces-air-gapped-backup-vault-to-thwart-ransomware/
  • https://arstechnica.com/security/2024/09/rogue-whois-server-gives-researcher-superpowers-no-one-should-ever-have/
  • https://www.cybersecuritydive.com/news/global-cyber-workforce-flatlines-isc2/726667/
  • https://www.cybersecuritydive.com/news/moveit-wisconsin-medicare/726441/

Transcript:

Jerry: [00:00:00] Here we go. Today is Sunday, September 15th, 2024, and this is episode 279 of the Defensive Security Podcast. My name is Jerry Bell, and joining me today as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. Happy Sunday to you.

Jerry: Happy Sunday. Just a reminder that the thoughts and opinions we express on the show are ours and do not represent those of our employers or...

Andrew: present, or future.

Jerry: For those of us who have employers, that is. Not that I’m bitter or anything. It’s...

Andrew: It’s, I envy your lack of a job. I don’t envy your lack of a paycheck. So that is the conflict.

Jerry: It’s very interesting times right now for me.

Andrew: Indeed.

Jerry: All right. So our first story today comes from Bleeping Computer. And the title here is TFL, which is Transport for London, requires in-person password [00:01:00] resets for 30,000 employees. So, for those of you who may not be aware, Transport for London had suffered what I guess has been described as a nebulous security incident.

They haven’t really pushed out a lot of information about what happened. They have said that it does not affect customers, but it apparently does impact some back office systems, and that did take certain parts of their services offline. Like, I think they couldn’t issue refunds, and there were a few other transportation-related things that were broken as a result.

But I think in the aftermath of trying to make sure that they’ve evicted the bad guy who, by the way, apparently has been arrested.

Andrew: That’s rare. Somebody actually got arrested.

Jerry: yeah. And not only that, but apparently it was somebody local.

Andrew: Oops.

Jerry: In in the country which may or may not be associated with an unknown named [00:02:00] threat actor, by the way, that was involved in some other ransomware attacks.

Andrew: Kids don’t hack in your own backyard.

Jerry: That’s right. Make sure you don’t have extradition treaties with where you’re attacking. So what I thought was most interesting was their approach here to getting back up and going. TFL had disabled access for all of their employees, and they’re requiring their employees to show up at a designated site to prove their identity in order to regain access.

This isn’t the first organization that’s done this, but it is something that I suspect a lot of organizations don’t think about the logistics of in the aftermath of a big hack. And if you’re a large company spread out all over the place, the logistics of that could be pretty daunting.

Andrew: Yeah. It’s wild to me that they want in-person [00:03:00] verification of 30,000 employees. But given the nature of their company and business, I’m guessing they’re all very centrally located, used to going to physical offices. But man, can you imagine if you were a remote employee and you don’t have an office anywhere near you, how would you handle that? I’m probably not going to get on a plane to go get my password re-enabled.

Jerry: Exactly.

Andrew: You know what it did, remind me of though is, remember back PGP and PGP key signing?

Jerry: Oh, the key parties. Yes.

Andrew: Yes. Where, you basically, it’s a web of trust, and people you trust could verify and sign another key. Like at a key signing party, because we were fun back then; that’s what nerds used to do. And then that’s how you had the circle of trust. So maybe they could do something similar, where a verified employee could verify another employee. Then you’ve got the whole insider threat issue, et cetera. Yeah, it just reminded me of that.

Jerry: No, nobody trusts Bob.

Andrew: [00:04:00] It’s true. Your friend Bob, how many times has he been in prison? Most recently, like, where, Rwanda? I think I heard.

Jerry: He’s got the frequent visitor card.

Andrew: but yet has some of the best stories.

Jerry: He does, he definitely does. So apparently they make reference to a similar incident that happened at Dick’s Sporting Goods. I will emphasize the sporting goods. They had a similar issue, and that is a nationwide retailer, here in the U.S. at least; I don’t know if they’re outside of the U.S. And so that really wouldn’t be possible the way it was with Transport for London.

I assume that most of the people associated with it are local, or within a reasonable driving or commuting distance, as the case may be. But in the situation with a nationwide retailer, I think they had to go with virtual in-person. So they basically had Zoom meetings [00:05:00] with employees, and I assume had them show pictures of their government ID and so on.

So the logistics of that is interesting. And it isn’t really something I’ve spent a lot of time thinking about. But I know in the aftermath of a big attack like this, establishing trust and certainty in who has access to your network would be super important. So I think it’s worth putting into your game plan,

Andrew: Yeah, it is. It is a wild one. And what do you trust? Especially in the age of, deep fakes and easily convincing AI copies of other employees. And I don’t know, it’s an interesting one.

Jerry: right?

Andrew: Ciao.

Jerry: our next, yeah, it was it was certainly a an unfolding story, which I don’t think is over yet based on everything I’m reading.

Andrew: I did see one quote in here that made me chuckle, which is, this is a quote from the transport [00:06:00] agency added on their employee hub: Some customers may ask questions about the security of our network and their data. First and foremost, we must reassure that our network is safe. Okay, define safe. That’s just us being safe-ish.

Jerry: Safe-ish. Safe now.

Andrew: Safe, safe-y. It resembles something that is sometimes called appropriately safe. Based on the criteria that we came up with, it’s completely safe.

Jerry: Which I’m sure is true, because they had also had a Clop ransomware infection, I guess, a couple of months prior to this. So

Andrew: What do you use for Clop? Is that like a cream? Is that like a, how is that treated typically?

Jerry: Every time I hear Clop, it takes me back to the Monty Python coconut horse trotting. That’s what I think about when I hear the word Clop,

Andrew: That’s fair.

Jerry: [00:07:00] which is oddly appropriate given that this is in the UK, which is where where Monty Python hails from.

Andrew: I thought you say where they have coconuts.

Jerry: Only if they’re if they’re transported by swallows.

Andrew: You youngins will just have to go.

Jerry: Gotta go watch that movie. Alright, it’s worth it. I, by the way, I remember making my son, both my sons watch it, and they protested. And now, I think they’ve each seen it like 30 or 40 times,

Andrew: So when you say protested, did you, like, have to duct tape them to a chair and, like, pry their eyes open and do a whole, yeah, Trainspotting situation?

Jerry: I think they thought it was like an actual movie about the Holy Grail.

Andrew: Which, why would they be opposed to that? That could also be interesting.

Jerry: I don’t know.

Andrew: Indiana Jones did a fine movie on it.

Jerry: It’s true. But it, that does not hold a candle to [00:08:00] the Monty Python Holy Grail movie. Let’s just be

Andrew: We, we learned a lot. We learned about facing the peril. We learned that Camelot is a silly place. And we learned how to end a movie when you don’t have a better plan. Again, way off topic, but you young’uns will just have to go discover. Do you,

Jerry: So back on topic, our next story comes from SecurityWeek, and the title here is Google introduces ‘air-gapped’ backup vault to thwart ransomware. And I’m going to put quotes, as they do, over air-gapped, because as they describe it, it is logically air-gapped, not actually air-gapped. And by the way, I don’t necessarily mean to take away from the utility of the solution that they’re offering here, but calling it air-gapped, I think, is maybe a little bit of a misnomer.

So they, they being [00:09:00] Google, are offering a service where you as a Google Cloud customer can store data backups to a storage service that does not appear as part of your cloud account. It’s part of a Google-managed project that is transparent to your account. So if somebody were to take over your account, for example, or to compromise systems within your account, they actually wouldn’t be able to do anything with that backup, which I think is pretty smart. The one thing that I was wondering: obviously you are not necessarily protected in the case that Google’s cloud itself becomes the victim of something bad, but that is kind of a theoretical issue at this point.

But the one that concerns me a bit is what happens, as we have seen in some other cases. [00:10:00] There was a company, I’m forgetting the name at the moment, whose AWS account was basically deleted, and they had all of their data, all of their backups in their cloud account. And they had it split across different availability zones, and it didn’t matter, because the actor actually deleted everything in their account, and I believe they actually deleted the account itself.

And I do wonder the same thing: if your account were to be taken over, would that backup persist? Would you have the ability after the fact to prove to Google who you were and be able to resurrect that?

Andrew: Do you mean the one that happened accidentally that Google did with that Australian pension fund or like a bad actor getting in and deleting it?

Jerry: Bad actor that got

Andrew: Gotcha. Yep.

Jerry: There was a it was a GitHub competitor,

Andrew: Yes.

Jerry: [00:11:00] can’t remember the name. It was

Andrew: I will look at,

Jerry: several years ago. Yeah, I do think, and I say this an increasing amount: I do think we are on the cusp of much more aggressive, what I’ll call cloud-native attacks, where adversaries are actually attacking not just the workloads in the cloud, but actually the cloud resources themselves, the cloud accounts and whatnot.

So I think as time goes on, things like this are going to become much more important, and questions like what I just asked, I think, are going to become increasingly important to

Andrew: Yeah, it’s interesting. It makes sense, first of all, to make sure that if I’ve got a bad actor or ransomware or whatever out there deleting things, I don’t want it to just delete my backups, which is something we’ve always talked about could be a weakness in your automated systems.

If they’ve got full admin rights into your cloud environment, what stops them from going [00:12:00] after your backups? So that makes sense. It is interesting how strong that quote-unquote logical air gapping is. It makes me wonder a little; somebody should probably test it. But I’m surprised this wasn’t offered before, honestly. It also makes me think, remember the days when we used to back up to tape and send those tapes off-site to underground storage facilities? And

Jerry: And half the time the tapes would fall off the truck

Andrew: right.

Jerry: and spilled out onto the freeway. Yes,

Andrew: And you never test restoring them, and then when you do need to restore them, it’s gonna take 43 months and half of them are bad.

It was a weird time.

Jerry: You’d recall the tapes, and the tapes would come back in a locked box, and there’d be tapes missing.

Andrew: Right.

Jerry: It was just like the grand old days. I don’t know why we don’t still do that.

Andrew: I won’t go on the we’re-old rant, but boy, it makes me feel old. But this makes sense. What I’m also curious about, I haven’t looked into this, is how many versions of backups do you [00:13:00] have? Because the other thing I think about is, you’ve got ransomware, and it automatically backs up. How many iterations in am I, or am I just backing up encrypted data I can’t restore because it’s encrypted? The backup system doesn’t know the difference. It’s just backing up an iterative change. So that’s something else to think about: okay, how many snapshots back can I go? Because that starts to get expensive. But if I’m just automatically backing up my encrypted data, oops. It’s interesting, and I like the concept, and it’s meant to fight one particular source of pain, which is ransomware deleting your backups.

Jerry: Yeah, I really like the concept too. I think things like this are going to become increasingly important as time goes on. Happy to see things like this starting to emerge,

Andrew: Indeed.

Jerry: but now, again, it comes back to making sure that it is actually working.

Andrew: Yeah. And testing, like, a restore, and [00:14:00] do the assumptions you have work?

And that’s one thing, not to go off on a bit of a side rant, that I see a lot: organizations don’t have enough time built into their IT or security schedules to actually test these things. They just go, oh, we think it’s going to work.

And the first time they test it is during a crisis, which is a terrible idea. You want to be able to test when you’re not in crisis mode and see how well this stuff works.
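The “logical air gap” the hosts describe can be modeled with a toy sketch: backups land in storage that the tenant’s own credentials cannot purge until a retention lock expires. This is purely illustrative Python; the class and method names are invented for the example and are not Google’s actual Backup Vault API.

```python
from datetime import datetime, timedelta

class BackupVault:
    """Toy model of a 'logically air-gapped' vault: backups live outside the
    tenant's own account and cannot be deleted until a retention lock expires.
    Illustrative only -- not a real cloud provider API."""

    def __init__(self, retention_days: int):
        self.retention = timedelta(days=retention_days)
        self._objects = {}  # name -> (payload, locked_until)

    def write(self, name: str, payload: bytes, now: datetime) -> None:
        # Writes set a lock; even the tenant's admin credentials
        # cannot shorten it afterward in this model.
        self._objects[name] = (payload, now + self.retention)

    def delete(self, name: str, now: datetime) -> None:
        _, locked_until = self._objects[name]
        if now < locked_until:
            raise PermissionError(f"{name} is retention-locked until {locked_until}")
        del self._objects[name]

    def restore(self, name: str) -> bytes:
        return self._objects[name][0]

vault = BackupVault(retention_days=14)
t0 = datetime(2024, 9, 1)
vault.write("db-snapshot", b"...backup bytes...", now=t0)

# A compromised admin account trying to purge backups on day 3 is refused:
try:
    vault.delete("db-snapshot", now=t0 + timedelta(days=3))
except PermissionError:
    print("delete refused: retention lock still active")

print(vault.restore("db-snapshot") == b"...backup bytes...")  # True
```

The point of the design, as discussed above, is that ransomware with full rights in your account still cannot reach backups held in someone else’s.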

Jerry: Absolutely. Our next story comes from Ars Technica, and the title is Rogue WHOIS server gives researcher superpowers no one should ever have.

Andrew: This one was crazy.

Jerry: Yeah. So there’s a company called watchTowr, which of course, as with all things tech now, isn’t spelled correctly. I won’t hold that against them. One of their researchers found during their stay at Black Hat that the .mobi top-level domain had recently changed the location of its WHOIS [00:15:00] server.

So previously it was a domain hosted on the .net top-level domain, but apparently at some time in the recent past, they moved that to, unsurprisingly, a name hosted on the .mobi TLD. And I guess through probably some bit of corporate cost savings or missteps, I don’t know.

They let that domain, the .net version of that domain, expire, which is problematic. And so this person realized that, registered the domain, and then actually started seeing legitimate WHOIS requests coming in. And then they set up a WHOIS server and found that they would have had the opportunity to do quite a few bad things, like creating TLS certificates [00:16:00] for domains, because VeriSign and others were still pointing their WHOIS to the old .net.

So they hadn’t completely switched over from the .net domain to the .mobi domain, and as a result chaos ensued, and it’s really hard to put bounds on how bad this could be, right? They go through quite a few different situations. This could have allowed, for example, intercepting email and lots of different telemetry-based attacks.

But I don’t even know that we have a good handle on the art of the possible when something like this happens.

Andrew: Yeah. Plus the the TLS certificate trust that comes natively with this, which is massive. Like that just can cascade into a whole bunch of shenanigans when you can [00:17:00] own The authority around TLS certificates around an entire domain like that. That’s huge.

Jerry: Which they were able to do in this instance. So really bad, for sure. I thought it was interesting because in my former role, I saw lots of situations similar to this. And not just in my immediately former role, but in lots of former roles: companies often register or create internal domains.

And those domains sometimes, they start off as, I’m trying to think of a good example. Let’s say .fun, it’s a stupid one, right? When you created your Active Directory domain back in 1997, that TLD wasn’t around, but over the

Andrew: Right.

Jerry: That, that [00:18:00] did become a domain and, nobody thinks twice about it. And suddenly now you’re susceptible to a whole class of attacks. And I think there’s a broad range of problems that the industry has associated with domain names either expiring or for example, a lot of companies as they acquire other companies, they they, Transition.

That company’s email to the acquired company’s domain. And over time, sometimes, not all the time, but they let those domains expire, somebody comes along and you can pretty much guarantee that there’s still almost certainly valid email going to that domain. And so there’s, I think there’s this whole class of problems.

That we don’t often think about. It’s a super simple and dumb problem space that has emerged [00:19:00] around domain clashes, domain problems, people letting domains expire. So I don’t feel like this is something that is well represented in different security frameworks and policies and whatnot, because it’s often in the corner. But it is definitely, and as this proves, has been and can be a big source of problems.

And so I think it’s really important to keep your eye on this.
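A minimal sketch of the kind of domain-expiry watch Jerry suggests keeping an eye on, assuming a registry that emits the common “Registry Expiry Date:” field (registries vary). The WHOIS helper follows RFC 3912 (plain text over TCP port 43) and needs network access to actually run; the parsing is pure and shown against a made-up sample response.

```python
import re
import socket
from datetime import datetime, timezone

def whois_query(domain: str, server: str, port: int = 43) -> str:
    # RFC 3912: send the query followed by CRLF, read until the server closes.
    with socket.create_connection((server, port), timeout=10) as s:
        s.sendall(domain.encode() + b"\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def parse_expiry(whois_text: str) -> datetime:
    # Assumes the ICANN-style "Registry Expiry Date: <ISO 8601>" field.
    m = re.search(r"Registry Expiry Date:\s*(\S+)", whois_text)
    if not m:
        raise ValueError("no expiry field found")
    return datetime.fromisoformat(m.group(1).replace("Z", "+00:00"))

def days_until_expiry(whois_text: str, now: datetime) -> int:
    return (parse_expiry(whois_text) - now).days

# Made-up sample response for illustration:
sample = "Domain Name: EXAMPLE.COM\nRegistry Expiry Date: 2025-08-13T04:00:00Z\n"
print(days_until_expiry(sample, now=datetime(2024, 9, 15, tzinfo=timezone.utc)))  # 332
```

Run periodically over every domain a company still depends on (including ones inherited through acquisitions), an alert on a low day count is a cheap guard against exactly the expiry failure in this story.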

Andrew: Yeah, I agree completely. And it’s to the point you made earlier about AD or internal domains being set up, and then suddenly, many years down the line, that becoming a new top-level domain. It reminds me of when people didn’t follow RFC 1918 and used random IP addresses that later became routable, and couldn’t figure out why they were having weird routing [00:20:00] issues talking to certain parts of the internet and not others. And it’s like, there’s

Jerry: That

Andrew: got to watch that.

And what’s interesting is this, with all respect, but a lot of folks today don’t understand how the plumbing of the internet works anymore. It’s been abstracted away from them. And this sort of problem with DNS reminds me a little bit of how fragile BGP is. And very few people really understand BGP anymore. They don’t have to, they don’t need to know it. That’s a SaaS provider problem. That’s a cloud provider problem. But it’s very much a real problem. Like, you and I, at one point in our career, went through the process of registering for our own /19 and figuring out all the fun of what it took to route that and share that, and all those things that came with it, which I think was valuable. Not to just pat ourselves on the back, but it’s interesting today when you go talk to people about some of the complexities of DNS, they have no idea. They don’t know how all this works. They don’t know that this is even a susceptible problem, because I think there’s this inherent [00:21:00] belief that there’s just some overriding authority managing all the top-level domains and all the top-level WHOIS servers. There’s not. Be careful.

Jerry: Yeah, definitely. Definitely. All right. The next story, this one is a bit of a follow-up to one we talked about last time. It comes from Cybersecurity Dive, and the title is Global cybersecurity workforce growth flatlines, stalling at 5.5 million pros. This is based off of a report released by ISC2, which is, for those of you who don’t know, the people who create and maintain the CISSP and a bunch of other certification programs.

What they identified is that the cybersecurity workforce grew a tenth of a percent year to year, which is interesting. [00:22:00] Like, from five point four five ish million to 5.5 million. It

Andrew: Wait, that’s not a tenth of a percent. That’s 10%.

Jerry: You’re right. 5.45 to 5.5. There you

Andrew: There you go.

Jerry: you. I can do math. I

Andrew: I’m here to help. I’m here to help.

Jerry: promise. But this was the first time that the growth has really stalled in quite a few years.

What I found most interesting with this particular report and this particular article is it explained something that we continue to talk about, both on the show and as an industry: the dichotomy between people’s experience in trying to get a job in security and the way that the industry talks about the number of unfilled [00:23:00] security jobs. Because those two things, as we talked about last time, aren’t in concert, right? There’s a gap somewhere. And this one for the first time started to explain it in a way that made sense to me. And what they describe is that the workforce, the number of people who are employed in the security sphere, went up very quickly.

The number of people that are needed to keep companies secure, as identified through interviews with companies, is growing dramatically, and outpaces by a large margin the number of people who are qualified to work. Where it [00:24:00] breaks down is that just because I say that I need 50 more people on my team to keep our company secure doesn’t mean that I get to go hire 50 people. It just means, in order to do what I think is a responsible job, and I’m making this up completely, by the way, in order to do a good job of keeping my company secure, I would need 50 more people than I have. And so

Andrew: Right.

Jerry: that then gets counted in the total number of these, quote, unfilled security roles,

Andrew: Really that’s just the,

Jerry: exist.

Andrew: That’s just the beginning point of negotiation for your budget.

Jerry: Yes. Yes.

Is yes.

Andrew: So when they refer to workforce, do they mean the number of people employed in the cybersecurity industry or the [00:25:00] number of people available to fill jobs in the industry?

Jerry: They’re talking about the number of people, butts in seats.

Andrew: Okay. So there could be, if they’re saying there’s 5.5 million people in the cybersecurity workforce collecting a paycheck, but there’s 10 million qualified people seeking jobs, that’s one of your gaps, right? There’s just not the jobs out there for the number of qualified people. Which, if that’s true, which we’ve heard the opposite, there’s a skills gap and there’s a capability gap, which could go back to some companies maybe asking for the wrong things, like 10 years of experience in a technology that’s been around for two years, which we’ve seen over and over again. Or if there’s too many people chasing too few jobs, it can drive down salaries. So I don’t know. It’s interesting. If people are willing to accept jobs for less, basically in competition with somebody else, that can also depress wages or at least cap [00:26:00] growth. So I don’t know. We keep hearing, to your point, very conflicting things about the market in the industry, including, hey, we don’t make it easy for new people, entry-level roles or mentoring or journeyman roles or ways to bring people in so that we can build people up. And you want to hire experienced people; where do they start getting experience?

So I think some of that comes to play too.

Jerry: I think it’s all intertwined, right? In the article, they point out that there are 5.5 million butts in seats in the security sphere. They believe, based on their data, that there’s a need for 10.2 million people, right? So that creates a big gap. But again, that doesn’t mean that there’s 4.7 million unfilled jobs.
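Taking the on-air figures at face value, the arithmetic works out like this (and the earlier dispute over “a tenth of a percent” versus “10%” actually lands at just under one percent):

```python
# Figures as discussed on the show: workforce of roughly 5.45M growing to
# 5.5M, against a stated need of 10.2M people.
workforce_prev = 5.45e6
workforce_now = 5.5e6
needed = 10.2e6

growth_pct = (workforce_now - workforce_prev) / workforce_prev * 100
gap = needed - workforce_now

print(f"year-over-year growth: {growth_pct:.1f}%")  # year-over-year growth: 0.9%
print(f"workforce gap: {gap / 1e6:.1f} million")    # workforce gap: 4.7 million
```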

Andrew: Yeah. I certainly don’t see those job listings,

Jerry: It means [00:27:00] that, at a top level, we think in order to do a responsible job of protecting every company, we would have to have 4.7 million more people working in security than are available today. But where I think this folds back into what you were saying about wages is that, for a long time, security people have had it great.

And I say that as one of them. We were pretty highly compensated, and so it’s a difficult thing, especially as of late, to continue adding more and more people to your payroll at the salaries that people are getting. And so there is part of me, as we talked about last time, the U.S. government is launching an initiative to train up hundreds of thousands more people to enter the workforce. The reality is, those people are going to be [00:28:00] competing with people who are already unable to find jobs, but the net effect, I think, is going to be deflationary.

On cybersecurity job salaries.

Andrew: It’s possible. It’s, yeah.

Jerry: And then, in doing so, theoretically, companies will be able to hire more of them.

Andrew: Yeah, I think the danger is always, is that training going to align with what companies need?

Jerry: I don’t think so, because I think we have created this, and I know that we’ve gone way off from the security podcast here. But look, I managed a very large team inside of a very large company, so I had an interesting vantage point. What I observed is that [00:29:00] companies have adopted this position of what I refer to as just-in-time talent. We create this profile of expectations of what people need to have in order to come on board for an entry-level role, like you’ve got to have 10 years of experience and you’ve got to know all of these very specific security tools, for an entry-level role.

Like, how do you get an entry-level role if you don’t have that? You end up in this kind of catch-22. But on the other side, one of the concerns I’ve got is that, as an industry, for a long time security people came out of IT, right? You came out of application development or system administration or network engineering or help desk.

And a lot of these people had a [00:30:00] very broad and deep background in, maybe not every aspect of IT, but in lots of aspects of it. And now security has become a field unto its own. And so you go through school and you graduate with a degree in security, and it’s all been about security, and not necessarily about the implementation and operation of IT inside companies.

And, by the way, I’m not in any way downplaying the importance of the stuff that you learn in school. What I’m saying is, I think you come out lacking some of the important context that you need in order to be effective.

The other side. Is that a lot of that context tends to be pretty specific to a company.

And I think where we’re at is that companies have largely lost the patience, for whatever [00:31:00] reason, to train people, to do on-the-job training and grow people. And that scares me, not only from the human aspect, but also from the ability to be effective, because now I think we’re artificially governing the effectiveness of people.

We’ve got people with relatively narrow sets of skills coming into the workforce. And I suppose in some instances that’s okay, but I don’t know, maybe I’m just getting old.

Andrew: No, I agree with your point. And again, I’m also getting old, but I find there are very few generalists anymore. Everybody’s very hyper-specialized, and I think that’s a bit of a shame. Sure, you could be super good at one particular thing, and that’s very valuable, but I also find a lot of value in

[00:32:00] generalists who come into security with such a breadth of understanding of how these things are supposed to work together and what’s normal that it brings a lot of value to the job.

And it goes back to what we were saying earlier. People don’t understand DNS, they don’t understand BGP, they don’t understand IP routing, because they don’t have to. And I guess that’s okay; maybe the world has gotten so complex that this is the way it needs to be, but I do think it’s a real shame. A massive company like IBM, those are the types of companies that I think should be able to grow their own talent with mentorship, the whole concept of the way we used to do things with apprenticeship, raising people up and giving them that opportunity to grow and build that skillset. And maybe their salary is a little low initially, but as they grow, hopefully that skillset will grow and the salary will grow, or [00:33:00] sadly, they probably will just bounce to another company. That’s, I think, what companies worry about:

is you train them and they leave. What if you don’t train them and they stay?

Yes. That’s the way I’d counter that, but it is a problem. I don’t know that I have a solution, but I’m a big fan of trying to promote people who are interested from IT into security. Not that one is better than the other, but I do think those folks with IT backgrounds have a lot of basic understanding that really helps with general security engineering and SOC work.

The other thing I’d say is, it takes a long time to ramp up. I don’t know that companies respect that anymore. It may take six months to a year to really be effective in at least a security operations role and understand what normal is for a company. And it feels like everybody’s moving too fast for that now. I guess this is the whole get-off-my-lawn speech. It’s an interesting problem.

Jerry: From an [00:34:00] individual standpoint, I think it’s clearly a much more competitive market than it used to be. And I think it’s becoming increasingly important for people who are serious about getting in and finding a job to be able to differentiate themselves. And I know that’s

heretical to say in some circles. I’m not saying that you have to work 200 hours a week, but you’ve got to be able to separate yourself from the pack. Otherwise, I don’t know what to say; you’ll be looking for work for a long time.

Andrew: Just don’t start a competing defensive security podcast. We don’t need the competition,

please.

Jerry: No, we definitely don’t. By the way, for those of you who don’t know, I recently lost my job, and it’s okay. Not complaining. It’s actually been an amazing experience, and I’ve been working with a career coach who’s awesome. By the way, if you have the opportunity to work with a career [00:35:00] coach, that’s probably one of the best things, because they can call bullshit on you; they hold you to account.

But one of the things that mine told me was that this is a difficult economy right now to find a job in. It takes a long time, and a lot of false starts and a lot of tries, to find a job right now. I don’t know if it’s at a historical low. But I’ve got kids that have recently graduated from college, and I look at the struggles they are having with finding jobs as recent college graduates, and it’s just a difficult economy. And I don’t see that getting better

anytime soon. Maybe when and if interest rates go back down toward zero, then we’ll start seeing lots of startups again, but I don’t know.

Andrew: I do think [00:36:00] certainly this is a well-trodden road that other people do a better job talking about than I do. But I think there are certain roles that we have treated poorly culturally, like blue collar jobs and trade jobs, that have a huge, massive shortage of workers, where employers are desperate for workers. And those are good-paying jobs with great benefits. We went down this path of everybody needs to go to college and everybody needs a white collar job, and I think that’s not great for people or our society. The other thing I’d be curious about, since you’re seeing both sides of it:

You’re at a very senior point in your career, and my first thought would be, is it tougher to find a senior level job? Cause there are fewer of those, in theory. But you’re also saying your kids right out of college are basically looking for their first major career job, which is the opposite end of that spectrum, and they’re struggling. I did tell you underwater basket weaving would be a tough field for them to find a career in, but they insisted.

Jerry: You, [00:37:00] you warned them. It’s fair.

I think it’s all up and down the scale. Certainly for me, if I do another thing, it’ll probably be another, more senior level, executive type role. I don’t know if it’ll be a CISO again. That was hard, and I don’t know if I’ve got that in me again.

It was fun for sure. When I talk to my kids and other young people, one of the bits of feedback I get is that there have been a lot of people who have lost their jobs. And I think this is maybe particularly true in the IT space; lots of layoffs in the IT world over the past 18 months.

And those people have experience, and they’re unable to find jobs at the same level that they were at. I guess what I’m trying to say is, entry level people who are coming out of college may very well be competing against [00:38:00] people who are not entry level for entry level jobs, because those other people can’t find other jobs.

And so they’re trying to find any kind of work. And so people entering the workforce are not competing against other people entering the workforce. They’re competing with other

Andrew: yeah.

Jerry: people who may have experience, who have recently lost their job. And it is what it is:

a challenge.

So

Andrew: Yeah. One last thing I’ll say on this is that, in theory, the unemployment rate is low. So are we just going through a cyclical change where those jobs are moving to other areas outside of IT? I don’t know. This is the challenge: you see this conflicting data of, we have all these unfilled jobs, but then people can’t get hired.

I don’t know. I don’t know what to say.

Just, I’m thankful I have a job, and I will do my best not to be as frustrated tomorrow, Monday morning, as I normally am.

Jerry: I’m thankful to be in the spot I’m [00:39:00] in, even though I don’t have a job,

Andrew: You do have a job. Your job is to entertain me on this podcast and our 12 listeners.

Jerry: Hopefully I’m doing okay.

Andrew: You’re meeting expectations.

Jerry: Good, good. All right. Our last story also comes from Cybersecurity Dive, and the title here is “MOVEit victims are still coming forward, this time it’s Wisconsin Medicare.” There isn’t anything necessarily new here. We obviously were on hiatus when the big MOVEit breach

happened in the second quarter of 2023, but we are now about 18 months on, and we’re still hearing about net new victims of this. And I find it just mind-boggling. Now, this particular entity reported to the Centers for Medicare and [00:40:00] Medicaid Services that they were breached back in July, but presumably the actual breach happened much earlier than that and was only recently detected.

And I don’t know why that is. I don’t know if

Andrew: I certainly hope that they hadn’t gone unpatched all this long and just suddenly got popped.

Jerry: I doubt that, but it’s possible, right?

But the thing that I’ve been concerned about, and again, this is an asset management problem: I don’t know how many of these things were out there that companies didn’t realize they had,

Andrew: yeah.

Jerry: like being managed by a subcontractor to a subcontractor, and hey, the magic just happens.

I pay the bill and the files just appear. I don’t know how it happened.

Andrew: Yeah, I look at this as understanding your attack surface. To me, you really need to understand everything that’s associated with the company that’s open to the internet. I know that’s not the only way to attack [00:41:00] things; I know that things start at endpoints with phishing attacks. But nonetheless, for these sorts of widespread, vulnerable, remote code exploit sort of things, you have to know what’s open to the internet that’s associated with you.

Jerry: Yes.

Andrew: I just feel like that’s table stakes. You’ve got to know your attack surface, and a lot of companies don’t. One, it’s a tough problem. But two, it’s not something that a lot of companies spend time worrying about. And I think this is a great example. What if this has been sitting out there and just got ignored? Nobody’s really maintaining it, nobody’s really patching it, nobody really knows. In cases like this, if you’re a security organization and something like MOVEit pops, it’d be nice to look at a list and say, huh, do we have MOVEit? Oh yeah, we do. We should go fix that.
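Andrew’s “look at a list” idea can be sketched as a simple query against an asset inventory. This is a minimal illustration, not a real tool; the inventory format, hostnames, and product names below are all made up:

```python
# Sketch: when a vulnerability drops in a product (e.g. MOVEit),
# query a hypothetical asset inventory for internet-exposed instances.

inventory = [
    {"host": "ftp1.example.com", "product": "MOVEit Transfer", "internet_facing": True},
    {"host": "web1.example.com", "product": "nginx", "internet_facing": True},
    {"host": "files.internal", "product": "MOVEit Transfer", "internet_facing": False},
]

def exposed_instances(assets, product_name):
    """Return hosts running the named product that are open to the internet."""
    return [
        a["host"]
        for a in assets
        if product_name.lower() in a["product"].lower() and a["internet_facing"]
    ]

print(exposed_instances(inventory, "moveit"))  # → ['ftp1.example.com']
```

The lookup is trivial once the list exists; the hard part, as discussed here, is keeping the inventory itself accurate, including gear run by subcontractors.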

Jerry: And not only do you have it, but do your suppliers have it?

Andrew: Send them an Excel sheet for them to fill out. You’ll be

fine.

Jerry: then have them send it to their suppliers and their suppliers send [00:42:00] it to their suppliers.

Andrew: And eventually it just circles back around.

Jerry: Turtles all the way down. You know what else? Again, because we didn’t have a chance to talk about it at the time: the other issue I think really exacerbated this MOVEit problem. Obviously this was a very widely used file transfer tool, much more widely used than I ever would have thought, which blows my mind. It was Progress Software, I think, right? Anyway, whoever maintains it. Sorry, Progress, if I got you wrong; you’ve got your own problems, so I’m not going to feel too bad. The issue is this

application had a vulnerability that made it very trivial for adversaries to pull files off of the appliance. One of the things that came out in the aftermath is that people would allow files to be uploaded and then just sit [00:43:00] there. They would obviously copy them off, but they wouldn’t clean them out.

And so

Andrew: Look, that’s not part of my KPIs, man. My job is to get the file over there, not delete it later.

Jerry: right.

Andrew: Look, I’m,

Jerry: It’s the kind of thing that isn’t in frameworks. I was listening to a book today, and they made reference to the old George Box quote: all models are wrong, some models are useful. And I got to thinking,

Andrew: yeah,

Jerry: all security frameworks are wrong, some are useful, and

Andrew: What we need is another framework of just the useful bits.

Jerry: Totally, absolutely. And then it’s the old XKCD, I forget which number that was. But again, I struggle, because there isn’t something that makes it obvious that, hey, that’s a problem. [00:44:00] It’s intuitively obvious in hindsight that you shouldn’t store files forever on the damn file transfer tool, right?

Like, you should be cleaning that off periodically, or in real time as you’re pulling data off of it, but that’s not what happens. And for the most part, how many companies’ security policies would say that? I don’t know that there are many. Is that part of ISO 27001 or PCI?

No, it’s not very clearly enumerated, but it is super important. The things that are enumerated: you’ve got to patch the thing, know the thing exists. But I think there’s also a very... did I lose you?

Andrew: Yep. My Chrome.

Jerry: There’s a very real problem with data minimization. And I don’t mean that in the sense we’ve talked about it [00:45:00] before, in the context of you shouldn’t collect every stinking piece of data from your customers and squirrel it away. I’m talking more about: does that data have to sit there? Or

Andrew: Right.

Jerry: can you move it? And that’s especially important when you’ve got something sitting on the edge, right? This was a device that was exposed to the internet.

Andrew: Yeah. The tough part is, probably 95 to 99 percent of the time, that’s never a problem. And cleaning up old files is probably not high value leverage work for a lot of employees, but

it’s like a whole data classification system. Nobody wants to do it. It’s too much of a pain in the butt, until the one time it bites you.

Jerry: Yeah. I think the other thing that bothers me a little bit about this is that companies will make that trade-off, right? Sure, I could pay [00:46:00] Bob to sit there and delete those files, or I could pay Bob to go do something more productive. But it’s the people whose data is represented there who actually end up being the ones harmed in this, and they don’t get a say.

Andrew: Sure.

Jerry: And that’s right.

Andrew: An easy solve may be just an auto-expire, like 30 days and it’s auto-deleted.

Jerry: Which comes back to

Andrew: And you just,

Jerry: responsibility. Should that have been the default setting?

Andrew: yeah,

Jerry: I

Andrew: it’s,

Jerry: I don’t know. Anyway,
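Andrew’s auto-expire suggestion amounts to a scheduled retention sweep on the transfer directory. A minimal sketch, assuming a 30-day window; the directory path is made up for illustration:

```python
# Sketch of a 30-day auto-expire sweep for a file-transfer drop directory.
# The path and retention window are illustrative, not from the episode.
import time
from pathlib import Path

RETENTION_DAYS = 30

def purge_stale_files(drop_dir, retention_days=RETENTION_DAYS):
    """Delete files older than the retention window; return what was removed."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(drop_dir).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed

# Typically run daily from cron or a systemd timer, e.g.:
# purge_stale_files("/srv/transfer/inbox")
```

As an opt-in setting this only helps the customers who think to turn it on, which is the default-setting question raised above.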

Andrew: it was Progress, by

Jerry: was

Andrew: the way. I did confirm it was indeed Progress. Yes.

Jerry: Yes. They’ve had a long run of spectacular f-ups.

Andrew: Your old man memory was accurate in this case.

Jerry: Back in my day, Progress was a database.

Andrew: That’s [00:47:00] true.

Jerry: I was surprised to hear that Progress is all this other crap now, and apparently no database. So times are funny. Funny what happens over

Andrew: They lost it somewhere.

Jerry: All right.

Andrew: Anyway.

Jerry: I think we’ve peaked and we’re on our way back down, so we will end it here.

Andrew: Oh, I hope people enjoyed our first video podcast, the defensive security show.

Jerry: We will do better next time. I’m

Andrew: It only took 279 episodes. Yes, we will do better next time. And we had a little technical bobble there. I don’t know how much it’s going to show up, but hopefully we’ll get it sorted out.

Jerry: Yeah, your browser won’t stay running, huh? Call the neighbor kids to come look at it.

Andrew: I’m just hoping I didn’t lose too much of my side of the recording. [00:48:00] That’s all

Jerry: good point.

Andrew: we’ll see. We’ll sort it out. But anyway,

Jerry: Thank you for listening. You can find the show and all of our previous episodes on our website at www.defensivesecurity.org. You can find the podcast on just about every podcast service under the sun, and if we aren’t on one, let us know and we’ll get that fixed. You can follow Mr. Kalat on X. Me, I really hate that name, by the way. It’s just like

Andrew: I still call it Twitter, I’m old,

Jerry: Oh, go ahead. Where can they find you?

Andrew: On Twitter and on infosec.exchange, both at lerg, L E R G.

Jerry: All right, good deal. You can find me on infosec.exchange at Jerry. And with that, we will talk to you again real soon.

Andrew: Have a great week guys. Bye bye.

 

Defensive Security Podcast Episode 278

In episode 278 of the Defensive Security Podcast, Jerry Bell and Andrew Kalat discuss various recent cybersecurity topics. The episode starts with light-hearted banter about vacations before diving into the main topics. Key discussions include a new vulnerability in YubiKey that requires sophisticated physical attacks, resulting in a low overall risk but sparking debate about hardware firmware updates for security keys. Another key topic is Verkada being fined for CAN-SPAM Act violations and lack of proper security measures, including exposing 150,000 live camera feeds. The hosts also explore reports showing diverging trends in security budgets and spending, with some organizations reducing budgets while overall industry spending increases. They highlight the need for effective use of security products and potential over-reliance on third-party services. The episode also delves into the growing threat of deepfake scams targeting businesses, emphasizing the need for robust authentication policies and awareness training to mitigate risks. Finally, the hosts reflect on the broader challenges of balancing security needs with budget constraints in an evolving threat landscape.

Links:

https://www.bleepingcomputer.com/news/security/new-eucleak-attack-lets-threat-actors-clone-yubikey-fido-keys/
https://www.bleepingcomputer.com/news/security/verkada-to-pay-295-million-for-alleged-can-spam-act-violations/
https://www.cybersecuritydive.com/news/iran-cyberattacks-us-critical-infrastructure/725877/
https://www.theregister.com/2024/09/05/security_spending_boom_slowing/ vs https://www.cybersecuritydive.com/news/infosec-spending-surge-gartner/726081/
https://www.cybersecuritydive.com/news/deepfake-scam-businesses-finance-threat/726043/

Transcript

Jerry: All right, here we go. Today is Saturday, September 7th, 2024, and this is episode 278 of the Defensive Security Podcast. My name is Jerry Bell, and joining me today, as always, is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. How are you, kind sir?

Jerry: Doing fantastic. How are you?

Andrew: I’m great. Just got back from a little vacation, which was lovely. Saw a lot of Canada, saw some whales, saw some trains. It was

Jerry: Did you see any moose?

Andrew: Oddly we did not see a single moose, which was a bummer. We crossed from Toronto to Vancouver on a train and didn’t see a single moose.

I saw a metric crap ton of ducks though. I couldn’t believe literally in the thousands. I don’t know why.

Jerry: Any geese with the ducks? Cause

Andrew: We saw a

Jerry: geese are pretty scary.

Andrew: We were sealed away from them, so we were protected.

Jerry: I don’t know.

Andrew: hard to

Jerry: I don’t know. I wouldn’t bet my life on that.

Andrew: But yeah, we saw a decent chunk of gooses, but mostly ducks.

Jerry: Good deal.

Andrew: Indeed. I’m good. Now, catching back up on work.

Jerry: And you’re back.

Andrew: And you are apparently the Southern Command Center.

Jerry: I am for another another day or two.

Andrew: Nice. Never sucks to be at the beach.

Jerry: It definitely does not. No, no bad days at the beach.

Andrew: Nice.

Jerry: All right. A reminder before we get started that the thoughts and opinions we express in the show are ours and do not represent those of our employers.

Andrew: Past, present, or future.

Jerry: That’s right. So our first story today comes from Bleeping Computer, and this one was a bit, oh, what’s the best way to say it, controversial on the social media sites over the past week. The title is “New EUCLEAK,” and I’m not even going to try to pronounce that, “attack

lets threat actors clone YubiKey FIDO keys.”

Andrew: Shut down the internet. Shut

Jerry: Shut it down, just throw away your Yubikeys, it’s over.

Andrew: And apparently it can happen from 12 miles away with trivial equipment, right?

Jerry: No, actually, the bad actor here actually has to steal it, and it takes some pretty sophisticated knowledge and equipment. The equipment, they allege, costs about $11,000. However, the YubiKey actually has to be disassembled; they have to take the protective covering off, instrument it, and then they’re able to leverage a vulnerability in an Infineon chip that’s contained in these YubiKeys to extract the private key. So it’s not a trivial attack. You have to lose physical possession of the token for some period of time. But if you were the victim of this, it is possible that some adversary willing to put in the time and effort could clone your key unbeknownst to you, and then find a way to reconstitute the packaging and slide it back into your drawer, and you would be none the wiser.

Andrew: In all seriousness, I think this has a very low likelihood of impacting the average listener of our show, or the average person who cares about such things. But if you’re a very high profile target and some sort of state intelligence service wanted to kidnap you and steal your YubiKey, and then gain access to things before those sorts of permissions got revoked in some way, shape, or form, I guess that could be viable. But this doesn’t seem like something that would happen to the average person.

Jerry: Oh, a hundred percent. And despite some of the initial banter about this, I still think you’re much better off using one. I’m sure there are definitely certain use cases where you would be concerned about this, but for the average person, like you said, it’s really not a big deal.

So this does impact the YubiKey 5 series, and I think also the YubiHSM 2, up through the firmware that was released, I think, in May of 2024. The challenge is that you can’t actually update firmware on YubiKeys. That was a security decision.

Andrew: yeah, that seems like a wise security decision if you ask me.

Jerry: Yeah. I have observed quite a few people who are now trying to find alternate security keys, because they feel a little dejected by the fact that you can’t update the firmware on them. But I think it’s important to understand that that actually is a very important security function, right?

The inability to muck with the firmware on these keys is very important.

Andrew: right, otherwise a piece of malware could be doing that too.

Jerry: Exactly.

Andrew: Which we would not be all that happy about.

Jerry: No. Sad in fact.

Andrew: I get the sort of knee-jerk reaction of, I want to be able to update this to patch flaws and such, but keep in mind that everything like that can be used by a bad actor just as easily, if not more easily.

Be careful what you wish for.

Jerry: Yeah. Now, what’s interesting is that all of the hoopla around this is about YubiKeys, but the Infineon chip is actually used by multiple different types of security products, including some EFI secure boot implementations, which, I guess at this point, have their own problems already.

And then I believe, since this particular article was written, there are some other actual security keys, similar to YubiKeys, that have been identified as also using this Infineon chip, so almost certainly going to be vulnerable in the same way.

Andrew: But I guess, nothing to really panic about. But boy, this got a lot of press. A lot of social media traction.

Jerry: It really did. So anyway, I thought it was important to discuss, because again, for most people this is really not a big deal. Yubico themselves rated the vulnerability with a CVSS score of 4.9, to give you an idea, and that seems right to me.

Andrew: Did it get a mascot?

Jerry: It did not get a mascot. There was some attempts some valiant attempts made.

Andrew: What about a jingle?

Jerry: I haven’t seen a jingle yet either, but it did get a name

Andrew: All right

Jerry: and it has a website. So

Andrew: Geez. Okay, so mild panic then. If it’s got a name and a website, that equals mild panic. But if it’s got a mascot and a jingle, I’m full-on panic.

Jerry: What else are you going to do? If it’s got a jingle, you gotta panic.

Andrew: The tough part is, this is probably getting traction at executive levels, where they may not have the time or the knowledge to dig into the details, and they’re probably freaking out in certain C-suites, but

Jerry: Yeah.

Andrew: send them our show. Tell them these two random guys on the internet said not to freak out.

Jerry: Yeah. You can’t put anything on the internet that’s not true. That’s right. But I was thinking, it’s been a while since Yubico has released a new version of the YubiKey. So

Andrew: So maybe this is driving an upgrade cycle. Maybe

Jerry: maybe

Andrew: they did it themselves, to get people to buy new keys. Is that what you’re saying, Jerry?

Jerry: it could be just like how the antivirus companies are releasing all the viruses. Yes. That’s right.

Andrew: that’s some smart thinking right there.

That is, you know what? That’s the kind of cutting-edge analysis you get on this

Jerry: Thought leadership right there.

Andrew: man. We’ve got to get out in front of this. All right, here’s the plan: let’s spend 20 years making a company and then break our main thing so people have to buy new things.

Jerry: It’s a good idea. It’s solid. I don’t see any any faults in this plan.

Andrew: Hey, how’s that working out for CrowdStrike?

Jerry: We’ll find out soon.

Andrew: Indeed.

Jerry: All right. The next story comes from Bleeping Computer, and the title is “Verkada to pay $2.95 million for alleged CAN-SPAM Act violations.” For those of you not in the U.S., CAN-SPAM is a law passed a couple of years ago, probably more than a couple at this point, that, unlike what you might expect, actually does permit spam under certain particular circumstances.

It requires, for example, an unsubscribe link, which this company didn’t provide. Verkada, by the way, makes security cameras. There were two big, unrelated issues here. The first one is that their marketing team did what marketing teams do: they went wild with their prospects, people who expressed interest in the cameras, and started spamming the crap out of them without any way to opt out.

And so that was actually the genesis of the $2.95 million fine. But on the other side, the company had been running around saying that their cameras are super secure, that they’re HIPAA compliant, that they meet Privacy Shield requirements, and a few other things. And at the same time, they got hacked and lost

quite a lot of data, some of it sensitive: actual video feeds from sensitive places like mental institutions, the kinds of video that you would not want to have exposed. So in addition to that roughly $3 million fine for spamming, they also now have to appoint and pay for

a security overseer for the next 30, sorry, 20 years. And I think they have to report any data breaches to the Federal Trade Commission within 10 days, or they face additional sanctions. I thought this one was interesting because we’re starting to see a definite trend, at least in the U.S., of the government holding companies to account when they make what I’ll call, in retrospect, false claims about the security capabilities of their offerings. And I think it’s really important for people to understand that if you are going to make those claims, you better not have a problem like this, because you’re going to end up in the crosshairs.

Andrew: Yes, and this wasn’t the SEC, so it didn’t matter whether they were public or not. This was the FTC, the Federal Trade Commission. That’s interesting. Some people would say, if I’m not public, I don’t have to worry as much about this sort of liability, but guess what?

Jerry: you do.

Andrew: Looking at the details, 150,000 live camera feeds were exposed. That’s impressive. And it looks like they didn’t know about the breach until AWS flagged suspicious activity.

Jerry: Yeah.

Andrew: So kudos to AWS, but man,

Jerry: It’s not a good look.

Andrew: how would you like to be like that security overseer? How would you like that job? Just, working for the FTC, waiting for companies to come tell you stuff that they screwed up on.

Jerry: Don’t know. It. Could be could be good. Could be bad.

Andrew: Yeah. It’s interesting, because the other thing that’s called out here is that Verkada did not implement basic security measures on its products, such as demanding the use of complex passwords, encrypting customer data at rest, and implementing secure network controls. On the complex passwords, it’s a little unclear to me whether they mean this within their own environment or their customers’: requiring customers to use complex passwords, which gets back into that whole Snowflake conversation we were having on a previous episode, of how much a company is liable for a customer’s poor use of the security features that are offered. Now, the other aspects here are very much Verkada’s choices in how they ran their environment and set up their IT and production environments and such. But it does make me wonder: are we going to see more things pushed on ensuring that these companies are ensuring their customers are being somewhat safe with their use of the tooling?

Jerry: Yeah. I don’t know exactly where this particular issue sits, but I will say, broadly speaking, I think the expectation is that as a technology provider, whether that’s a service provider or some kind of piece of technology, if you’re going to assert that it’s quote-unquote secure, the expectation is that you have some sort of guardrails in place, for example mandatory multi-factor authentication or mandatory password complexity. And if you don’t, and lots of your customers end up getting hosed because they’re using bad passwords, obviously they have a problem. But I think

what we’re seeing increasingly is that you as the provider also have a problem, despite the fact that it’s based on the choices of your customers. Look, we can debate whether that’s the right approach or not, but I think that is, in fact, what is happening.
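A provider-side guardrail like mandatory password complexity is just a server-enforced check at account creation or password change. The rules below are a generic illustration, not Verkada’s (or any vendor’s) actual policy:

```python
import re

def meets_complexity(password, min_length=12):
    """Generic complexity check: minimum length plus mixed character classes.

    Illustrative rules only; a real policy might also check breached-password
    lists or require MFA rather than leaning on passwords alone.
    """
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
    )

# The provider rejects weak choices server-side rather than trusting
# every customer to pick a strong password on their own.
```

The point is where the check lives: with the provider, so a customer’s bad choice gets stopped before it becomes everybody’s breach.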

Andrew: Yeah, makes sense. You’ve got to be careful with what you say. If you allege you’ve got good security, or at least your marketing and sales people do, and you don’t, and you get bit, there are consequences.

Jerry: Yeah, absolutely. All right. The next story comes from Cybersecurity Dive, and the title here is “Iran-linked actors ramping up cyberattacks on US critical infrastructure.” There’s not a whole lot of technical, gorpy detail in here, but they do make reference to a number of different threat actor names, like Pioneer Kitten.

And I’m always fascinated by the names that emerge with these threat actors, but what struck me more is that the targets of these attacks are actual security products: Cisco firewalls, F5s, Palo Altos are being attacked by these threat actors in a couple of different ways.

This one particular threat actor is asserted to be associated with Iran, and what they’re doing is facilitating initial compromise. Apparently one of the ways they’re doing this is through exploitation of vulnerabilities in the security products, and then they’re using that access to basically sell access to ransomware actors, taking a cut of any proceeds that those

ransomware actors end up getting. I think it’s an interesting approach, because again, they’re asserting that this threat actor is associated with Iran’s Revolutionary Guard, the IRGC, who apparently is really in it for the money and not for other uses like data destruction or intellectual property theft and whatnot.

Andrew: Iran is a heavily sanctioned country, much like North Korea. They’re probably seeking hard capital, so it makes sense. As I’ve often stated on this program, I’m very skeptical of the veracity of some of these attribution reports. It’s tough to know how accurate they are, because there’s no easy way to check. Somebody says it’s so, and okay, based on what? How do you know? How certain are you? And how do you know it’s not another actor trying to look like that actor? So I’m always quite skeptical. I’m just cautious about buying into this, but boy, is attribution such a focus of the industry, and it’s interesting.

It makes for good fodder for conversation, and at the executive suite, I think they like to know that. But I’ve said it a few times, and I’ll probably say it a few more times: as a defender, I don’t know that I care. I think I care about the TTPs. I care about the typical tactics, the typical techniques, and the typical approaches. But whether it’s coming from Iran or Billy down the street doesn’t really change my job too much. I just need to know what they’re up to and what I need to defend against. That’s not the point of your story. I know, I just went off on a rant.

Jerry: I will say, back to why I thought this was interesting: back when I was working, I had observed quite a few organizations, especially smaller ones, getting compromised and ransomwared as a result of running older, vulnerable edge protection devices like Ciscos and Palo Altos and whatnot.

And it seemed to help, in the spirit of quickly identifying or investigating the breach, to understand what they were likely doing. So for example, if your Palo Alto firewall gets compromised, the first thing to look for is evidence that somebody’s trying to move laterally and deploy ransomware, because that’s probably what’s going on.

But broadly speaking, I’m with you: you have to be threat actor agnostic. You need to defend your environment regardless of who’s trying to get in. What I am really trying to impart here is that as an industry, maybe not the more sophisticated companies, but broadly speaking, we do a pretty bad job of maintaining certain really key pieces of our infrastructure, like those firewalls. For whatever reason, I don’t know why, I have observed small companies keeping their workstations patched and their servers patched, but it’s really common for their firewalls to be end of life.

Andrew: Yeah, I think there are a lot of reasons. I think cost is one. I think perceived interruption of business and downtime from patching network gear is another. In general, network gear is not thought about as something that needs to be patched very often, unlike general purpose computers. But I agree it’s a problem, and it ebbs and flows: we get troughs and peaks of how often we see these sorts of devices have problems like this. And it seems like lately we’ve had a big spike in remote access VPN type technologies having pretty serious vulnerabilities. What always scares me, what I think about is, okay, you’re 28 patches behind and something serious pops. That’s so much more disruptive to patch than if you were on a more current patch level and just had to deploy one patch to fix a serious vulnerability. So there’s a hygiene aspect of keeping up to date on this stuff that I think makes your life easier in a crisis.

But also, to your point, it’s very interesting that these types of actors get initial access and then broker that out and sell it, and we’re not catching them at the initial access phase. They can dwell without much notice. So that’s another interesting problem that we should get better at.

Jerry: Yeah. I think in part it’s because even well-instrumented organizations are instrumented, especially at the edge, to look for, how best to describe it, malicious things that are transiting through the firewall, and not necessarily attacks on the firewall itself.

I’m not saying that well, but my experience was that a lot of these technologies don’t have, unto themselves, things like EDR-type capabilities to tell you that something is going horribly wrong on the firewall itself.

We’re blind to that.

Andrew: Yeah. Or they have log events that are obscure enough that your SIEM wouldn’t know how to make heads or tails of them without custom alerting. And when somebody is breaching something through some sort of vulnerability, how it reacts is highly unpredictable; hence it’s a vulnerability, hence it’s doing something it wasn’t designed to do. So it’s not that simple. We usually have to look for some sort of second-order impact or movement, like you mentioned, lateral movement to deploy ransomware, to detect that intrusion. Often, sadly.
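To make the custom-alerting point concrete, here is a minimal, hypothetical sketch of the kind of rule Andrew is describing: scanning firewall logs for events that suggest the firewall itself is being attacked, rather than traffic passing through it. The log formats and patterns below are invented for illustration and are not taken from any real product.

```python
# Hypothetical sketch: flag firewall *self*-targeting events that a generic
# SIEM rule set might ignore. Patterns and log formats are illustrative only.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"admin login .* from (?!10\.)"),     # admin login from outside an assumed 10.x mgmt net
    re.compile(r"config(uration)? changed"),         # unexpected config change
    re.compile(r"unexpected reboot|process crash"),  # possible exploit side effect
]

def flag_firewall_events(log_lines):
    """Return log lines that suggest the firewall itself is under attack."""
    return [line for line in log_lines
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS)]

logs = [
    "2024-10-08T02:11:09 fw1 admin login succeeded from 203.0.113.7",
    "2024-10-08T02:12:41 fw1 configuration changed by user admin",
    "2024-10-08T09:00:00 fw1 traffic allowed 10.0.0.5 -> 8.8.8.8",
]
print(flag_firewall_events(logs))  # first two lines are flagged
```

In practice this logic would live in a SIEM correlation rule rather than a script; the sketch just shows that the signal is about the device, not the traffic.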

Jerry: Yes, indeed. All right, moving on to our next story, this one comes from the register and the title is security boom is over with over a third of CISOs reporting flat or falling budgets.

Andrew: Back it up. Let’s go home.

Jerry: Yeah, it was fun while it lasted. Which, by the way, normally we don’t talk about that sort of thing, but I thought it was very interesting because it’s contrasted with another story from Cybersecurity Dive, which is titled “Infosec spending to hit a three-year growth peak, reaching $212 billion next year.”

So the Register story is based on a survey from IANS, and the Cybersecurity Dive story is based on a report from Gartner.

Andrew: Which I’m assuming is also based on a survey from Gartner.

Jerry: Yes,

Andrew: list of customers and whatnot.

Jerry: Now, when I was first contemplating this: look, I’m pretty involved socially, and I see a lot of people struggling to get work in the security industry right now. And we don’t have a story about it, but the US government recently announced that it’s going to put a bunch of money into training people up to be cybersecurity workers, because there are supposedly all these unfilled security jobs.

At the same time, we’ve got a lot of people who can’t find work in the security industry. So I think that’s an interesting dichotomy, which kind of aligns with the story from The Register but is antithetical to, or in opposition to, the Gartner report. Now it occurred to me, though, that they’re actually maybe saying the same thing.

Or at least they’re not mutually exclusive. I think what the IANS report is saying is that, in general, security budget growth is slowing; in some places it has stopped, and in some places it’s even starting to go down, particularly as it pertains to hiring new people. And the Gartner report is really talking about industry spending.

So how much are companies spending on security products and services? And it occurs to me that when you put those two things together: you have less money to spend, but the money that you do have you’re spending on third-party services and software, which means that you have less money to grow your team.

Andrew: Oh, CapEx versus OpEx. Although everything’s gone to SaaS, so it’s all OpEx now these

Jerry: Yeah, not a lot of CapEx. But I think, so one

Andrew: I could be

Jerry: of the concerns I had as a CISO is, when you buy a thing, whether it’s a SaaS thing or an on-prem thing, you actually have to do stuff with it. It’s an obligation. You’ve adopted a puppy; you have to care for and feed that puppy. And in the worst case, look at what happened to Target many years ago now, probably almost a decade ago,

when they got hacked and all of the logs that indicated they had been hacked were sitting there, but nobody was looking at them.

And so I’m concerned that as an industry, we’re becoming very enamored with technology and less so with the ability to actually use that technology. Now, that’s maybe a naive and uninformed view, but that’s the perspective I bring to the show.

Andrew: It’s fair. No, one of my strong principles in the teams I run is to try to develop mastery over your tools: understand what’s normal and what isn’t, tinker with them, get to know them, play with them, know how to interpret what they’re saying. The challenge with that is you need a lot of free time, and you need a lot of initiative and a self-starting mindset to go do that. And if you’re being pulled in a thousand different directions, it’s hard. So you’re very reliant on the tool raising the alarm, rather than sensing something is a little off based on familiarity with the tool or the environment or the situation. So I do think we’re also still recovering from, or adjusting to, a higher interest rate environment.

I think when we were in a zero interest rate environment, at least in the U.S., a lot of tech companies hired a ton of people. And so we’ve seen the slow-motion layoff wave for the last year or two, as a lot of these companies probably overhired. And with interest rates going up, part of the goal of those higher rates is basically to slow down industry growth and spending and make money more expensive. And it’s working: companies are slowing down a little bit, and so they’re slowly laying off here and there and adjusting. I wonder if that’s part of this too: a number of companies with their budgets taking a bit of a pause as they adjust, or their staffing levels coming down a little, or whatnot. The other thing this reminded me of, while I’m ranting, is this consolidation push, which happens about every five to ten years, and which is referenced a bit in one of these articles. Consolidation around certain tools or certain vendors, trying to get functionality in one particular solution from one particular vendor, seems to be hot right now with executives. What I don’t like about that is that typically a given solution or tool will be very good in one area, and then, to grab some of that consolidation money and increase their footprint, they’ll start expanding into other areas. But typically the offerings in those other areas are substandard.

They’re not great. They’re certainly not best of breed.

Jerry: Almost checkbox, right?

Andrew: Exactly. They are checkbox coverage of that other bit of functionality, and usually it sucks compared to the best of breed out there. But it’s enough to convince an executive that they don’t need a separate tool, that they can gain efficiencies in that consolidation.

But I think we end up with a lot of substandard tools in these offshoot areas that these tools are growing into. Now, they may get better over time, but typically I think that nuance is lost when we’re pushing to consolidate: yeah, it says it does this, but how well does it do it in this other area that wasn’t the core purpose we bought this particular tool or vendor to cover?

I think some of that may be going on too. I hear a lot of that push right now to consolidate and get down to fewer vendors and simpler platforms, but I think you risk losing some functionality and capability when you do that.

Jerry: And I think that’s been a long-term IT trend that is now crossing over into the security world. If you look at companies like HP and Oracle and IBM and others, they have a few best-of-breed things and a whole lot of checkbox crap, and their sales tactic is to convince company executives that, look, you don’t have to go everywhere.

You can get everything here,

Andrew: And it works, which is helpful, because our show has a couple of best-of-breed episodes and a bunch of checkbox episodes, so I’m glad that’s acceptable to the industry. The other thing that I pulled out of one of these articles, I think it was the Register one, I’m going to quote: “An encouraging sign also is that security spending as a proportion of the overall IT budget is on the rise, up from 8.6 percent in 2020 to 13.2 percent this year. This trend looks set to continue, Kowalski opined, but still, security spending was typically less than 1 percent of the revenue of those [surveyed].” I always find that interesting. And I know what’s going to happen: a lot of CISOs and CFOs are going to look at that and go, that’s what I should spend. 13.2 percent. That’s my target.
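The benchmark arithmetic being quoted can be sketched quickly. The company figures below are invented for illustration; only the 13.2 percent and under-1-percent-of-revenue benchmarks come from the survey being discussed.

```python
# Back-of-the-envelope benchmarking math. All company figures are assumed
# for illustration; only the survey benchmarks quoted above are real.
it_budget = 20_000_000        # annual IT budget, USD (assumed)
revenue = 1_500_000_000       # annual revenue, USD (assumed)
security_spend = 1_900_000    # annual security spend, USD (assumed)

pct_of_it = security_spend / it_budget * 100
pct_of_revenue = security_spend / revenue * 100

print(f"Security spend is {pct_of_it:.1f}% of IT budget "
      f"(survey benchmark: 13.2%)")
print(f"Security spend is {pct_of_revenue:.2f}% of revenue "
      f"(survey: typically under 1%)")
```

Which is exactly the comparison the hosts worry CFOs will treat as a target rather than a data point.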

Jerry: Yep.

Andrew: Regardless of their circumstance or their situation, companies love to try to measure against the averages around them and use, in theory, wisdom of crowds to measure what they should be spending on security.

Jerry: Yeah. That is the normal benchmark. That’s the normal playbook, whether you’re a CIO or a CISO coming into a company: the gold standard is benchmarking against your competition.

Andrew: It’s safest. It’s easiest. It’s an easy button. You can’t get criticized if you’re doing what’s average for the industry.

Jerry: Right,

Andrew: I think that’s a bit of a shortcut, a bit of an easy button, a naive way to look at it. Your risk tolerance and your environment and your situation are very different from your competitor’s.

Jerry: yeah, it

Andrew: got

Jerry: assumes text to text parity and lots of other

Andrew: Right.

Jerry: that are almost certainly not the case.

Andrew: So, just something I pulled out to riff on for a minute; I worry about that number being used as a target. Now, the flip side is, if you’re well below 13.2 percent of your IT budget, you can go use this as evidence to fight for more budget.

Jerry: That’s certainly true, though you’re not gonna get it in this market. I think if we take a big step back, the reality we find ourselves in is that there’s a lot of pressure on spending, and I certainly felt it as a CISO. I think many people in that role feel the same, and it’s borne out in the IANS report: we’re being asked as an industry to do more

with either the same or with less. The threat landscape is certainly not getting any better; it’s getting worse at a faster pace, as we often talk about here. And from a personal liability standpoint, it’s also getting more complex, at least in the U.S. Companies and their officers and their executives are now starting to be held personally liable in certain edge-case instances.

So there are a lot of, I think, really fundamental changes afoot. But if you look at the big contour, there is an expectation of driving efficiency. And, just going back to my time as a CISO, it’s difficult for an organization to commit to hiring a person.

It’s easier for an organization to commit to spend, to signing a contract that lasts a year or two, because you know how much it’s going to cost, you can choose not to renew it, and you can perhaps get out of the contract in the middle of it. I think that drives some of what is being borne out in the Gartner report: we are starting to see companies trading off people for services. But as we’ve talked about in the past, even with things like managed security services, I think it’s very difficult to take commodity services and have them be really effective in the absence of some kind of abstraction layer.

I think the place that we have to get to, as an industry, is figuring out how to optimally run vended services alongside our internal IT. I’m struggling with the right way to convey it, but I think we’ve got a long way to go. My big concern is that we’re going to overshoot the mark, that we’re going to cut too far,

Andrew: Sure.

Jerry: and as security leaders, we’re going to end up getting exposed.

And as we’ve started to see, our employers are probably not going to have our backs.

Andrew: And the indicators that you’ve cut too far are usually very lagging indicators; you’re not going to know right away. It’s very difficult to know until you do a post-mortem after a massive breach. And often those decisions made sense in the moment. It’s a very tough job, justifying security ROI, which goes back to: hey, what’s the average of the industry doing? Okay, that must be safe.

Jerry: I think even that is a little perilous. I guess it’s one thing if you feel like you’ve got all your bases covered and you’re trying to decide whether to take the next step of maturity or what have you. But most companies have a big risk register.

And I always worry that those risk registers are gonna be exhibit A.

Andrew: Jerry, if it’s on the risk register, nobody can attack it.

Jerry: True.

Andrew: rule.

Jerry: I forgot about that. I’m sorry. You’re right.

Andrew: It’s out of bounds if it’s on there.

Jerry: It’s out of bounds.

Andrew: No, you’re right. But what’s worse, having it on a risk register or not knowing at all? Knowing your risk, I think, is the first step.

Jerry: I completely agree. I guess my point was, it’s hard. I think it’s very difficult, especially in some kind of litigation or regulatory situation, to justify having gnarly-looking things on a risk register at the same time you’re cutting your budget. That was my point.

Andrew: Yeah, that’s fair. But just to play devil’s advocate as a CFO or a CEO: hey, I’m trying to keep my company afloat. I can’t just throw money at security.

Jerry: It’s certainly true. And I guess that kind of comes back to risk tolerance. I think one of the really important problems that we have is: what does it even mean to have a risk tolerance, right? Because

Andrew: Yeah.

Jerry: there seems to be like this

Andrew: And is it well understood and truly internalized by the executives making that decision?

Jerry: Correct. In the past it was, okay, I’ve accepted a risk, and oh gosh, something bad happened: the thing whose risk I accepted was exploited, and we had a breach, and we got our hands slapped or we got some negative press. And then you moved on. But now you may get personally sued, or perhaps go to jail. The dynamics, I think, are changing a bit. And so this concept of accepting the risk is starting to take on a different flavor than it has in the past. Candidly, it’s probably overdue. Because look, as society marches on and becomes more and more online and digital and reliant on personal records and personal data, the harm that can come from that information being stolen or maliciously used is becoming much more perilous for the people whose data is exposed. So it makes sense. But I’m not sure that we’ve, broadly speaking, embraced that yet.

Andrew: No, I agree. It’s a tough problem; it always has been, and it’s constantly evolving. And you go back to, okay, then what’s the framework I can use? What are the standards I can measure against? How often are those just finding the last set of problems? We constantly see these sorts of approaches come into play. Whether it’s PCI, just as an example: yeah, we’re completely PCI compliant. Great. You still got hacked, right? We’re following the NIST Cybersecurity Framework. Okay, but how well? And there’s so much complexity.

It’s not easy. Getting the basics right is really tough. So, I don’t know, it’s a tough problem. We should probably start caveating this show with “the advice in this show is for entertainment purposes only,” so we don’t get sued.

Jerry: That’s a good point. A very good point. Anyhow, time marches on, the risk marches on, and the next story is a good example of that. This one also comes from Cybersecurity Dive, and the title is “Deepfake scams escalate, hitting more than half of businesses.”

Andrew: That was an amazing segue, by the way.

Jerry: I’m trying like I’m

Andrew: was, that’s years of work

Jerry: Right there. I’m done. Like I,

Andrew: Almost professional.

Jerry: yeah, a couple, like I say, just a few hundred more episodes and I’ll actually know what I’m doing here.

Andrew: Anyway

Jerry: It’s not been all that long that the CEO, the CFO, CEO, business email compromise concept has emerged. It started off probably 10, 10 ish, 15 years ago where, the CFO would send an urgent email to their accounting team saying I need you to wire a bunch of money to to this business email.

Bank because we’re buying a company and super secret. You can’t talk to anybody about it, but you know what, you got to do it immediately. And those for a period of time were pretty effective. And we it’s a community we adapted a bit. We built some processes and it still happens by the way, like companies are still falling victim to fairly unsophisticated attacks.

But one of the things we said was, you’ve got to make sure that you know who you’re talking to. Is that email really from your CFO? And then that morphed into deepfake audio calls, where somebody would alter their voice so it sounded like the CFO, and for a period of time that was somewhat effective.

And now the next iteration of that is deepfake video, where the CFO is getting onto a Webex or a Zoom meeting with you, and they’re face to face, asking you to transfer money and

Andrew: interactive video.

Jerry: right.

Andrew: Yeah, wild.

Jerry: And that’s what this is about. They interviewed a bunch of people, not IT or security people, but finance people.

And they said that of the 1,500 people they interviewed, 85 percent view this as an existential threat. They identified that roughly half of companies have been targeted, and of those that have been targeted, about 43 percent have fallen victim, which is a big number.
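For a sense of scale, the survey numbers above work out roughly like this. Treating “roughly half” as exactly 50 percent is an assumption; this is illustrative arithmetic, not a claim about the survey’s methodology.

```python
# Back-of-the-envelope arithmetic on the survey numbers quoted above:
# 1,500 finance professionals surveyed; ~50% of companies targeted;
# 43% of targeted companies fell victim. Extrapolation is illustrative only.
surveyed = 1500
targeted_rate = 0.50
victim_rate_among_targeted = 0.43

targeted = surveyed * targeted_rate
victims = targeted * victim_rate_among_targeted

print(f"~{targeted:.0f} of {surveyed} respondents' companies were targeted")
print(f"~{victims:.0f} fell victim "
      f"({victims / surveyed:.0%} of all respondents)")
```

So within the survey population, on the order of a fifth of all respondents’ companies fell victim, which is why the hosts call it a big number.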

I don’t know how well that extrapolates; obviously they didn’t interview every company in the world, but still, that seems like a big number. And when you tie that, by the way, to the Gartner report that we just talked about: the headline on that report was actually quite interesting. It said Gartner forecasts, I’m sorry, wrong headline.

Gartner predicts that by 2027, so three years from now, 70 percent of cyberattacks will involve generative AI.

Andrew: That’s wild. My toothbrush now has generative AI, so sure, but

Jerry: I think what they’re saying is, it’s going to be things like this, where they’re using generative AI to create phishing lures and to create deepfakes and whatnot.

Andrew: sense.

Jerry: I don’t think that they’re predicting that LLMs are going to start coming up with novel ways of hacking into your Palo Alto firewall, though maybe that happens.

I don’t know. I think it’s more like coming up with creative ways to execute kind of old style attacks.

Andrew: You won’t have the badly translated poor English messages any longer.

Jerry: That’s true.

Andrew: They’ll be in proper American English. It’ll sound normal and appropriate, and you won’t have that trigger of weird verbiage that sounds like a non-native English speaker, which is usually a good tell. So that’s one thing that’ll make it a little harder, for sure.

Jerry: So there was a report published in May, which this article makes reference to, by one of the Big Four accounting firms, Deloitte, that said they expect fraud losses from generative AI to reach $40 billion by 2027.

Andrew: That’s wild. And I honestly don’t know the answer to this, and I probably should, and I feel bad that I don’t, but in a personal fraud case, let’s say somebody scams you into transferring money with Zelle, for instance, your personal homeowner’s insurance and your bank

are like, no, sorry, we’re not covering that. You authorized that transaction. Yeah, you were scammed, and for us in the industry, we look at that as basically stealing money. But right now those banks and vendors and such are like, no, sorry, you authorized it.

It’s an authorized transaction; we don’t own the liability. I wonder if that’s the case with cyber insurance. If you fell for, or were a victim of, I shouldn’t say “fell for,” because that makes it sound like you’re at fault. If you were a victim of this sort of scam, would there be coverage, or would it be the same sort of story of, no, you authorized it?

This is an authorized transaction. I don’t know. I really don’t.

Jerry: That’s a good question. I’m not an insurance expert, but I do wonder if it would fall more under errors and omissions

Andrew: Could be.

Jerry: than cyber insurance. But I don’t know. It’s a good question. A very good question.

Andrew: Maybe one of our dozen or so listeners might know.

Jerry: Yeah. Hit us up. Let us know if you know the answer to that. In any event, this is certainly a rapidly evolving area. I know that there are a lot of people who are very bearish on generative AI, and there are some valid-ish reasons for that, but I don’t think it’s going away. You may not think it has a lot of utility, and it may be unsustainable from an ecological perspective;

all that stuff can be true, and it still doesn’t go away. So I think that’s probably where we’re at. And certainly from the perspective of adversaries, I think we’re on what is likely to be a dramatic uptick in using this technology in adversarial fashion.

Andrew: I have some decent advice, which is: educate your finance teams and your executive teams about the risk. Show them examples. If there’s one thing to take away, it’s to look for that sense of urgency. That’s usually very common in these scams, and it’s very effective psychologically. But if you can get them to understand that if someone is pushing you with a sense of urgency, they should be suspicious. The other thing is to have processes that cannot be deviated from, which require people to authorize transactions by following a process. So if somebody is asking you to deviate from the process, that should be a red flag. Those are very common techniques in these scams, so I think that’s key. And because there’s only a small number of people who control finances in a company, it can go a long way. It’s not like you’re trying to broadly harden your entire population against phishing attacks. You’re trying to get the few people who control the purse strings to know about this, and be wary, and be wise.

Now, the flip side is, it adds friction, right? If your boss legitimately comes to you and says, I need you to do this, and you say, no, because our policy is X, Y, Z, that’s a little uncomfortable. It’s probably going to slow business down, but it’s probably one of the only ways to protect yourself against these types of scams.

Jerry: Absolutely. Absolutely. I’m concerned about what other ways we’ll see this materialize.

Andrew: I’m actually not here. This has been a deepfake scam.

Jerry: You’re just an, you’re just an LLM.

Andrew: This is actually one of my cats and probably doing a better job than I can do.

Jerry: I was about to say that.

Andrew: Look, hey, results, I’m results oriented. The show got done. Speaking of, where’s Betty? Stop hopping up on my desk. We’re not in video yet. But soon, we might be.

Jerry: getting close.

Andrew: random cats wandering through the shot.

Jerry: Yeah, it’ll be deep fake video.

We’ll be like we’ll pick different celebrities to portray us, but it’ll be okay.

Andrew: Ugh. What is happening? I have no idea.

Jerry: Anyway, I think this is one area to watch. Like you said, there are some definite ways that we can avoid losses in this particular area, but I am also concerned about ways that we maybe haven’t even thought of or seen exploited yet, in other types of roles. For example, a lot of organizations, even the ones, by the way, that work in person, meet virtually.

And how long will it be before we start reading about adversaries sliding into those meetings, portraying someone, and stealing intellectual property? Or,

Andrew: And how long before the Zooms, the Google Meets, and the WebExes of the world offer an enhanced package with anti-deepfake for a small, multi-thousand-dollar-per-month fee?

Jerry: Hey, there you go. We should patent that.

Andrew: We should.

Jerry: Brilliant.

Andrew: You know it’s coming. You know it is. It’s gotta be. It’s inevitable.

Jerry: Yeah. And it’ll be LLM, generative-AI based, too.

Probably with some blockchain and cloud thrown in, I’m guessing.

Andrew: How could it not be?

Jerry: I don’t know.

Andrew: It’s table stakes.

Jerry: I think we’ve hit terminal altitude and now we’re headed back down. So I think that’s it for today. I certainly appreciate everybody’s time and attention.

Hopefully you found this interesting. If you liked the show, give us some love on your favorite podcast app. We definitely like that; it helps other people find the show. If you didn’t like it, you can also give us a positive rating, because that still works. You don’t have to give us a bad rating.

You could give us a good rating and then just not listen again. That works too.

Andrew: It’ll encourage us to get even better.

Jerry: That’s right. Absolutely.

Andrew: We’re

Jerry: So you,

Andrew: positive reinforcement here.

Jerry: Not negative reinforcement, positive. All right. You can follow the show on our website at www.defensivesecurity.org, and you can download the podcast on just about every podcast platform out there now. I don’t know that we’ve missed any, although if we have, we’d love to hear about it.

You can follow Mr. Kalat on social media at, where?

Andrew: I’m on both X, formerly known as Twitter, and infosec.exchange, Jerry’s fine Mastodon service, at lerg, L-E-R-G, which someday I will explain, but not today.

Jerry: Which is, by the way, the very best in social media. You can follow me on infosec.exchange at @jerry@infosec.exchange. And with that, we bid you adieu. Have a great week, everybody.

Andrew: See you later. Bye bye.

 

Defensive Security Podcast Episode 277

In this episode, Jerry Bell and Andrew Kalat discuss various topics in the cybersecurity landscape, including the influence of cyber insurance on risk reduction for companies and how insurers offer guidance to lower risks. They touch upon the potential challenges with cybersecurity maturity in organizations and the consultant effect. The episode also goes into detail about issues surrounding kernel-level access of security tools, implications of a CrowdStrike outage, and upcoming changes by Microsoft to address these issues. They recount a case about a North Korean operation involving a laptop farm to gain employment in U.S. companies, posing major security concerns. The discussion highlights the pitfalls of relying on end-of-life software, especially in M&A scenarios, and how this could be a significant vulnerability. Lastly, they explore the massive data breaches from Snowflake and the shared security responsibilities between service providers and customers, emphasizing the importance of multi-factor authentication and proper security management.

Links:

https://www.cybersecuritydive.com/news/insurance-cyber-risk-reduction/724852/

https://arstechnica.com/information-technology/2024/08/crowdstrike-unhappy-with-shady-commentary-from-competitors-after-outage/

https://www.cnbc.com/2024/08/23/microsoft-plans-september-cybersecurity-event-after-crowdstrike-outage.html

https://arstechnica.com/security/2024/08/nashville-man-arrested-for-running-laptop-farm-to-get-jobs-for-north-koreans/

https://www.darkreading.com/vulnerabilities-threats/why-end-of-life-for-applications-is-beginning-of-life-for-hackers

https://www.cybersecuritydive.com/news/snowflake-security-responsibility-customers/724994/

 

Transcript:

Jerry: Here we go. Today is Saturday, August 24th, and this is episode 277 of the defensive security podcast. My name is Jerry Bell and joining me today as always is Mr. Andrew Kalat.

Andrew: Good evening, my good sir Jerry. How are you?

Jerry: I am awesome. How are you?

Andrew: I’m good. I’m good. I’m getting ready for a little bit of a vacation coming up next week So a little bit of senioritis. If I’m starting to check out on the show, you’ll know why

Jerry: Congrats and earned. I know.

Andrew: Thank you, but otherwise doing great and happy to be here as always

Jerry: Good. Good deal. All right. Just a reminder that the thoughts and opinions we express on this show are ours and do not represent anyone else, including employers, cats, relatives, you name it.

Andrew: various sentient plants

Jerry: Exactly. Okay. So jumping into some stories today. First one comes from cybersecuritydive.com, which, by the way, has a lot of surprisingly good content.

Andrew: Yeah, I have enjoyed a lot of what they write. We've had a couple good stories from there.

Jerry: Yeah. Yeah. So the title here is Insurance coverage drives cyber risk reduction for companies, researchers say. The gist of this story is that there were two recent studies done, or reports released: one from a company called Omeda and another one from Forrester, which I think we all know and love.

And I'll summarize it and say that both reports indicate that companies which have cyber insurance tend to be better at, quote, reducing risk, more likely to detect, respond, and recover from data breaches and malicious attacks compared to organizations without coverage. So I thought that was a little interesting.

On the other hand, to me it feels like a bit of availability bias. By that, what I mean is: if you survey people who work out at the gym about their diet, you will probably find that they eat a healthier diet than the public at large.

Andrew: But I go.

Jerry: you just go.

Andrew: I, look,

Jerry: I’m not saying, I’m not saying everybody, right?

Andrew: least I show up, right? And I’ve been told showing up is half the battle.

Jerry: It is half the battle, that’s right. Knowing is the other half.

Then doing is the other half.

Andrew: I will say, speaking of G. I. Joe quotes, I thought catching on fire was going to be a far bigger problem in my life than it turned out to be.

Jerry: That and quicksand.

Andrew: I, we were

Lot about that as children of

Jerry: quick, quicksand.

Andrew: Heh.

Jerry: QuickSand was, I, I lived in fear of QuickSand, but it turns out it’s really not that big of a concern.

Andrew: For as much as I heard stop drop and roll done it

Jerry: Yet.

Andrew: That's true. The day is young. Anyway, back to your story. I think you're right. I will also say, having worked with a number of these companies, they do interestingly have their own incentives toward trying to keep you from getting hacked, because they have to pay out. So they do push certain things. I've seen it myself, and I won't say where or when, it doesn't matter, but if you have things like one of the well known EDR tools well deployed, they might cut you a break on your rates. Because they have their actuarial tables saying, hey, if you're using certain bits of technology, that lowers your risk of, usually, ransomware, right?

Jerry: Sure.

Andrew: It seems to me, my opinion is, that these insurance companies feel that some of the well known EDR brands in a Windows environment are very effective, or decently effective, at stopping ransomware. Therefore they're less likely to pay out, therefore they lower your rates. So there might be some of that too. They do give companies guidance on what they see across their industry to reduce risk.

Jerry: I think that makes sense. I'll say, on one hand, like I was saying before, I think companies that buy cyber insurance are probably more mature, more invested in protecting their environment than others. But I think that there's also this consultant effect: when you want to drive change, whatever kind of change that is, reorganizing, revamping your security program, justifying additional expenses, outside guidance typically carries a lot more weight than something that comes from internal.

Andrew: Sad but

Jerry: and so I think, yeah, anybody who's been in the industry for a long time, or really any amount of time, knows this. It's the classic CISO trick, right? When you come into a new organization as a CISO, the first thing you do is you go off and you hire a big name consultant.

You burn half a million bucks on a consulting engagement. And at that point, it's not you telling the company, hey, we've got to spend a bunch of money to improve our security program. It's some hard-to-argue-with independent third party who is making that assessment. And to some extent you argue with that at your own peril, right?

Because now it's an assessment that becomes Exhibit A if something goes wrong, which is both a blessing and a curse. But my experience is it certainly helps a lot. And I think that this cyber insurance, with its somewhat prescriptive guidance and expectations around the kinds of controls and technologies you need to have in place, is a very similar kind of thing, right?

If you're engaging with them, they're going to be opinionated on what you should and shouldn't be doing, and then, like a consulting engagement, it's a third party giving you that guidance. And so I think that tends to carry a lot more weight.

Andrew: Agreed on all points. The only caveat I would add is that the recommendations that come from some insurance companies are typically not customized to your particular risk environment or situation. They are very broad approaches to reducing risk across many different types of environments, with many different risk profiles, technology stacks, and all that sort of stuff. So they're somewhat generic recommendations, I think.

Jerry: I think you're probably right. In any event, I thought it was quite interesting. Certainly having that insurance can help. I will tell you, in my time as a CISO, in dealing with customers and to some extent business partners, there was, I would say, a growing expectation that you have cyber insurance.

Actually, I experienced firsthand quite a few customers writing it into contracts, that you have it. Now, I don't know how far and wide that permeates the industry, but I think it's probably becoming a lot more common these days, because companies have this interdependence, so it's not necessarily just a cloud service provider where that kind of thing can manifest. Look at, over the, what now, 12, 13 years we've been doing the show.

How many times have we talked about a company like, let's say, Target or Home Depot getting hacked as a result of something happening with one of their suppliers? And so I think, as time goes on, we're going to see that becoming table stakes to have these business relationships, especially with larger and more mature companies.

Andrew: Why do you think that is? What does the third party assume you will get from that insurance? Just that you have the ability to recover from an incident and sustain as a going concern? Or do they assume that if you have insurance, it comes with requirements that level up the maturity of your program? What value do you think that third party sees in their business partner having cyber insurance?

Jerry: That's a great question. I think it's both, actually. I think there is this naive view that if something bad were to happen, this insurance would provide that buffer. It would make sure the company didn't go out of business. But the reality is that, especially if you look at some of the really large hacks, they can happen with relatively small organizations who are, I would say, fairly highly leveraged, at least in terms of their insurance policy. So yeah, it's great, they may have a $5 million insurance policy, but if they're, let's say, a hundred million dollar company and they get hit with $50 million in breach fees, their $5 million in insurance coverage isn't really going to go very far.

So I don't know that it's extraordinarily useful in terms of protecting customers from harm. I think there's a facade that it provides. And I also think it does give at least a segment of roles at companies this warm, fuzzy feeling that somebody else is looking over their shoulders.

In that respect, it's not different than, like, a SOC 2 or an ISO SoA or what have you.

Andrew: I wonder if there's some sort of implied, hey, if you've had ransomware, you can recover faster. The other thing I think about is the perverse incentive. When we look at insurance in general, it's to shift risk. It's to shift

Jerry: I

Andrew: risk to a third party. So is there the risk that an executive committee will say, hey, we don't need to invest much in cybersecurity, because we have insurance if something bad happens?

Jerry: mean, I would love to sit here and say no, that would never happen. I don't think it happens at every organization, but I definitely expect it happens more than it should.

Andrew: Yeah, it's interesting. It's an interesting interplay of competing priorities when you start to introduce these sorts of things, and what sort of behavioral economics comes into play.

Jerry: Yeah, absolutely. All right. Anyway, go talk to your insurance carrier; it might help you with your internal program and justify additional improvements to your program. Our next story comes from Ars Technica, and the title here is CrowdStrike unhappy with "shady commentary" from competitors after outage.

Andrew: I'm shocked. Shocked, I say.

Jerry: Totally surprised by this. So we've talked about this several times, and I'm sure we'll talk about it several more times. CrowdStrike obviously had a pretty devastating snafu with one of its products that caused probably the largest single meltdown of IT in history, and a lot of their competitors have been capitalizing on that outage. And so now this story is talking about the back-and-forth, tit-for-tat mudslinging that's been going on in the wake of it. I think they call out SentinelOne in particular. CrowdStrike is, I think, getting a little peeved at how their competitors are behaving, basically saying, hey, this could have happened to anybody.

And I think there are a lot of differing opinions in the industry. Based on my experience and exposure to the industry, I don't think everybody's on that bus. I think there are a lot of people who think that, no, this really would be a lot less likely with other companies. Although it is interesting that SentinelOne is, I think, one of the more aggressive mudslingers; they also, by the way, as far as I can tell, do access the Windows kernel.

And in fact, the next story we have actually talks directly about that.

Andrew: Yep, they do. And this goes back to something I don't have expertise in, so I'm just dancing around and pontificating on something I can't be authoritative about. But what I keep seeing is that most security tooling vendors feel they need to be in the Windows kernel to be effective, given the way Windows is architected today. It's interesting when they, they being various competitors of CrowdStrike, talk about safer methodologies, whatever that means. I think that somewhat implies perhaps not operating at the kernel level. However, safer in terms of not causing an outage, perhaps, but are they as effective at spotting and stopping malware? I don't know. My assumption is there's always some sort of trade-off. We've got most of the industry wanting to operate at the kernel level, we've got another story that talks about this a little bit, and Microsoft themselves is saying maybe we can find ways to make this effective. It seems to me, not having worked at those companies, that operating at the kernel level allows these security tools to be more resilient against malware trying to shut them down, and in theory be faster and more effective. If they're operating at the user level, in user space, the implication I'm getting from these articles is that malware could shut down the anti-malware tool and do whatever it wants to do. That appears to be harder at the kernel level; the tool is better able to protect itself and spot things at a deeper level in the operating system. I don't know if that's true, but it seems most of these companies operate that way. And in fact, there was even an implication, we talked about it on a previous show, from Alex Stamos, who's the newly appointed CISO, or CSO, one of the two, over at SentinelOne.

Jerry: CTO. CTO.

Andrew: Now it says Chief Information Security Officer in this particular article.

Jerry: Oh, okay.

Andrew: Alright. Anyway, he talked about, hey, we'll back out of using the kernel if all of our competitors will as well. So there's clearly some advantage to being there, and I don't know that anybody really wants to talk about that.

Jerry: I think we're talking about this as if it's one monolithic choice: you're either there or you're not there.

Andrew: Yeah.

Jerry: that's probably not the right way to think about it. I suspect the nuance is that there are certain kinds of functions that you don't need to perform in the kernel.

And as an example, you don't need to parse your definition files in the kernel. You could do that in user mode and then pull the results into the kernel module. I suspect there's some of that. It certainly adds a lot of complexity, and I'm not going to argue that. I don't know anything about SentinelOne and their technology, but I'm going to guess: when they're trying to throw rocks at CrowdStrike, I think what they're probably saying is, we do

more things outside of the kernel. But like you said, if they were to completely move out of the kernel, their ability to function would be impaired. And that kind of dovetails into the second story about this, which is from CNBC. The title there is Microsoft plans September cybersecurity event to discuss changes after CrowdStrike outage.

And I don't know if this was directly an outgrowth of the comment that Ed Bastian, the CEO of Delta, made to CNBC, gosh, right after the outage. He said something like, we don't see this sort of problem with Apple. And he was really talking about, and by the way, I'm assuming Ed had heard this from his security people, hey, Apple just doesn't allow this. Which is true, right?

They are much stricter about this access. Microsoft will certainly point to the decision that was rendered against them in the EU that forced them to open this up, because, by the way, Microsoft is a direct competitor to both SentinelOne and CrowdStrike when it comes to endpoint security products.

Like, they have their own product, which is in contrast to Apple. Apple doesn't really have an equivalent thing like Microsoft Defender, right? They,

Andrew: a single brand around security. They do things a little differently. This is a tough comparison, and I struggle when I hear this, and I get frustrated, because it's in a vacuum. It isn't looking at market share. It isn't looking at context. It isn't looking at all the tradeoffs Microsoft had to make to run an open hardware ecosystem, and all the backward compatibility choices they made versus Apple, to just say Apple just does it better.

It's not, and I'm not an anti-Apple fan, I use a Mac all day long, but it is disingenuous to say it outside the context of everything else around that ecosystem that has contributed to this. And all the frustrations people have with Apple being so hardcore about, oh yeah, sorry, this hardware is no longer supported, go away. Or, where is Apple's server ecosystem? He's not wrong, but it's also like one tenth of the story.

Jerry: Sure. You

Andrew: like that. there’s, it’s not just, it’s not just Microsoft is stupid. It’s, and Apple is smart. There’s so much more that goes into this. It’s my frustration.

Jerry: know, as a society, though, we’ve boiled everything down to 15 second soundbites, it’s just the way of the world.

Andrew: You’re not

Jerry: lost our, we’ve lost our tolerance for nuance.

Andrew: But that’s

Jerry: But,

Andrew: Jerry. To bring the nuance back.

Jerry: That's exactly right. Microsoft, on September 10th, which is coming up fast, is going to have this summit with endpoint security providers. And I think what they're trying to establish is a set of best practices around what is and is not done in the kernel, so that they can avoid catastrophes like this going forward.

And then also, as we've talked about in the past, they're going to start trying to encourage these companies to use the eBPF interface, which is an alternative way of hooking into the kernel that has less ability to cause harm. It provides probably not the exact same level of control and visibility, but something very substantially similar, without some of the downside.

So I think Microsoft's eBPF implementation is rapidly maturing relative to what's been out there for Linux for some time, but in my experience it's not something the security industry has really embraced yet. I think this is probably going to be the forcing function that really drives us in that direction.

And this, by the way, may be the thing that ultimately enables Microsoft to say: no more access to the kernel. If you want to do this, you've got to do it through this particular feature, like eBPF. That's how I see this playing out. I think if you were to look five years down the road, I don't think companies are going to be linking and hooking into the kernel.

I think they're going to be forced through a function like eBPF. It's just Jerry's wild-ass speculation, though.
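The "do more in user mode" idea from earlier in the discussion, parsing definition files outside the kernel, can be sketched as a toy content-update loader. Everything here, the field names, the actions, the file format, is hypothetical; it just illustrates the fail-safe pattern of validating an update in a user-mode process before any kernel component ever consumes it:

```python
import json

# Hypothetical sketch: a user-mode loader validates a content/definition
# update before anything is handed to the kernel component, so a malformed
# update fails safely in user space instead of crashing in kernel mode.

REQUIRED_FIELDS = {"rule_id", "pattern", "action"}
ALLOWED_ACTIONS = {"alert", "block", "log"}

def validate_definitions(raw: str) -> list[dict]:
    """Parse a definitions update; raise ValueError rather than pass
    bad data onward to a (hypothetical) kernel module."""
    try:
        rules = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"malformed definitions file: {e}") from e
    if not isinstance(rules, list):
        raise ValueError("expected a list of rules")
    for rule in rules:
        missing = REQUIRED_FIELDS - rule.keys()
        if missing:
            raise ValueError(f"rule missing fields: {missing}")
        if rule["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {rule['action']}")
    return rules

good = '[{"rule_id": 1, "pattern": "evil.exe", "action": "block"}]'
bad = '[{"rule_id": 2, "pattern": "x"}]'   # missing "action"

print(len(validate_definitions(good)))     # one valid rule
try:
    validate_definitions(bad)
except ValueError as e:
    print("rejected:", e)
```

The point of the design is that a bad update raises an error in a restartable user-mode process; the kernel side only ever sees data that already passed validation.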

Andrew: It makes sense. One quote I do want to mention from this article, that kind of goes to what we were saying in the previous story: quote, software from CrowdStrike, Check Point, SentinelOne, and others in the endpoint protection market currently depend on kernel mode. Such access

Jerry: Right.

Andrew: quote, monitor and stop bad behavior and prevent malware from turning off security software, end quote, a spokesperson said. So that kind of

Jerry: Yeah.

Andrew: what we were saying earlier that there’s clearly some

Jerry: Yes.

Andrew: sure that they’re trying to get, like you mentioned, new methodology to give them that same capability without being deep in the kernel.

Jerry: But I think both the customers of these endpoint products and the manufacturers of them are going to say, okay, fine, but then tell me how you're going to help our endpoint security product not get killed by the ransomware companies, the adversaries, who are pretty adept at stopping things even when they're running in the kernel today. It's a lot easier to do with user mode processes.

So I think there's going to be a bit of a meet-me-in-the-middle here, where Microsoft is going to say, get your crap out of the kernel. And those companies are going to say, then give us something. Enable us, because I don't think they're fully enabled today. Yes,

Andrew: wouldn't be fair. And I think that's where the EU came out with their ruling years ago that started all this. So make no mistake, I guess is what I'm saying here: there is absolutely a competitive standoff going on. These companies are frenemies in this situation. They don't want to back out and leave Microsoft, their competitor, with the only capability to do something they can't do, and then they can't compete.

Jerry: Exactly. And by the way, Microsoft, we talked about it before, we have seen instances where Microsoft has shot themselves in the foot too, right? It's not unprecedented that Microsoft's own updates have caused outages. So I've got to believe they're thinking about this too.

So, great, they kick all the other endpoint security products out of the kernel, and then suddenly they're the cause of the next CrowdStrike-scale outage, because their security product is now the only one in the kernel. How bad would that be for them? Holy crap.

Anyway I think that’s what we’re going to see. Agreed.

Andrew: on you there.

Jerry: No, all good. So our next story is also from Ars Technica, and this is a bit of a follow-up to the KnowBe4 story that we talked about, I think, two or three episodes ago. The title here is Nashville Man Arrested for Running Laptop Farm to Get Jobs for North Koreans.

So, if you recall, the security awareness company KnowBe4 published a blog post about how they had hired what turned out to be a North Korean agent. The way that went down was they interviewed, selected, and hired a person who turned out to be a North Korean citizen, and they shipped the laptop to a laptop farm.

And I actually had some questions about how that worked. This article explains how it goes down. It's very enlightening. So this person, named Matthew Isaac Knoot, lived in Nashville. He had a relationship with a set of people in North Korea, and it looks like it was a fairly sophisticated operation: he would help track down identities that they could use.

He provided a place for North Koreans who were being hired, unknowingly by the way, by U.S. companies, to have their laptops sent to. He would receive the laptops, put them on his home network, install remote access software, and allow the North Koreans remote access into the laptops so they could, quote, do their work.

And it's just a fascinating thing. I guess they said that each of the North Korean employees was making about $250,000 over the roughly year-long period this was going on. The allegation by the US government is that the money these people were earning was in turn being used by North Korea to fund their weapons program.

So obviously not super awesome. And they do go on to say in the article, by the way, that this is not a one-off situation. They refer to another person, named Christina Marie Chapman, down in Arizona, who basically did the very same thing. So it's fascinating. I didn't realize this was as large of a problem as it is, but apparently this is becoming an industrial-scale operation run by individuals.

I'm surprised.

Andrew: Yeah, it's interesting. It's a little different to the KnowBe4 case, because in this case it looks like the North Korean IT employees are doing legitimate work. They're not immediately installing malicious software; they're trying to earn their wage, as it were. No immediately malicious activity in this case.

But I'm also very curious how the money was moved over to North Korea. I'm sure it's paid to a US bank account, and then what does it look like to get it over to North Korea? Which is somewhat non-trivial. I don't know if they use Bitcoin or something like that. But that's a big part of the charges here: the wire fraud, that sort of thing.

But the other thing I think about is, if, as an employer, you don't allow your random employee to install software, it would stop a lot of this. I get that's a big cultural taboo, and there's a lot of gnashing of teeth around that topic, but if they couldn't install remote access tools, or if you as an IT department or security department monitored for those remote access tools, it certainly would stop a lot of this.

It just wouldn't work, unless some other methodology is found as a way around it. It's one more reason to not necessarily let local users have full admin rights.

Jerry: Even if you do, I think it's very prudent to actively look for, block, and investigate people who are installing remote access tools, because remote access tools, whether it's RDP or TeamViewer or any of the myriad other software, that stuff has been the source of so many security incidents over the years.

And in fact, it’s one of the common ways that frauds, like just garden variety fraud is perpetrated where, the Windows help desk scam, where they’ll call you up and ultimately install some sort of remote access software to get into your system. So I think this is really important.

Now I will say, I think this particular set of scams was reliant on them installing this remote access software. But if they were sophisticated, network KVMs are pretty cheap these days. So it's not necessarily a home run to say we're fully protected because we don't allow that and we would certainly detect it.

There are other ways around that. It's another step, and I can imagine it would perhaps increase the cost and complexity of hosting these farms, but probably not prohibitively. And it goes back to: especially in a remote working situation, you've got to have good diligence on, and good awareness of, who you're hiring.

And by the way, I say that as somebody who is a full-throated supporter of remote work.
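The monitoring for remote access tools that Jerry describes can be sketched as a simple watchlist check over running process names. The tool names and the process snapshot here are illustrative; a real control would feed this from EDR or endpoint inventory telemetry and pair detection with blocking:

```python
# Hypothetical sketch of a remote-access-tool (RAT) watchlist check.
# The watchlist entries and process snapshot are made up for illustration.

RAT_WATCHLIST = {"teamviewer", "anydesk", "vncserver", "ammyy"}

def flag_remote_access_tools(process_names: list[str]) -> list[str]:
    """Return process names that match known remote access tools."""
    hits = []
    for name in process_names:
        # Normalize: lowercase and strip a Windows-style .exe suffix.
        base = name.lower().removesuffix(".exe")
        if base in RAT_WATCHLIST:
            hits.append(name)
    return hits

snapshot = ["explorer.exe", "TeamViewer.exe", "chrome.exe", "AnyDesk.exe"]
print(flag_remote_access_tools(snapshot))  # ['TeamViewer.exe', 'AnyDesk.exe']
```

Name matching alone is easy to evade (renamed binaries, portable builds), which is why the hosts' point stands: treat this as one detection layer among several, not the whole control.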

Andrew: Yeah, certainly. But I also feel like you've spent a lot of time thinking about this. Is this your new post-retirement career? Are you setting up laptop farms?

Jerry: My kids have moved. My oldest son has moved out, and someday soon my youngest will move out. So I'm trying to figure out: do I go the Airbnb route, or do I go the hosting-laptop-farms route? I don't know yet.

Andrew: That explains the eight new air conditioning units you just added onto your house.

Jerry: Yeah. Then the Bitcoin mining, like you gotta diversify.

Andrew: You having free time is dangerous. I don’t know that this is a good thing at all.

Jerry: What is the saying about idle hands?

Andrew: Indeed.

Jerry: All right. The next article comes from Dark Reading, and the title here is Why End of Life for Applications is the Beginning of Life for Hackers. This is a big problem. The gist of this story is that end-of-life applications are a boon for threat actors. They make reference to 35,000 applications moving to end-of-life status over the course of the next year.

I think that's probably optimistic. I think it depends on how you define application, for sure. But look, in my personal experience, this is probably one of the larger problems we have in IT. We talked last time about patching and how nobody wants to patch stuff.

But there are so many issues that come along with using end-of-life software. Not the least of which, by the way, is that most vulnerability management programs are built around subscribing to vendor alerts to understand that a patch needs to be applied. When you're using end-of-life software, that doesn't happen anymore.

It just goes quiet.

Andrew: Right, nobody's checking end-of-life software for vulnerabilities, much less issuing patches.

Jerry: Yeah, exactly.

Andrew: Yeah.

Jerry: Yeah, most of the time your vendor notifications are not vulnerability based, they're patch based. They announce the availability of a patch to fix a vulnerability, and so now you know you have a piece of work that has to be done: a patch was released, and you've got to go apply that patch.

With end-of-life software, there's no patch. A lot of vulnerability scanners work in a similar way, especially as it pertains to less well known applications. Now, certainly if you're using Tenable Nessus and you're running an out-of-support version of Linux or Windows, it flags that itself as a critical vulnerability, but you don't have any granularity about what the actual technical vulnerabilities are, because they don't know.

It's just: it's end of life, who knows what sort of vulnerabilities there are. And I think it starts to descend into obscurity after that, when you get into open source components and whatnot. You just don't know; you're unaware that they're not being maintained anymore, and that becomes a big problem. I will also say, they talk a little bit about how companies can defend against letting things go end of life.

But I will say, in my experience, it's an easy trap to fall into when something goes end of life, right? Because, hey, it's end of life, but there are no known vulnerabilities, and we have this other thing that needs to be done, and it's super high priority, it's going to make a billion dollars, blah, blah, blah.

And at the time, that's true, right? It's a low-risk thing. Your version of WordPress is out of date; there are no known vulnerabilities. But then you start to collect these things, and suddenly you're buried under too much technical debt. It's really hard to get out from under, and you end up in this position where you've got so much of this debt that you have a hard time even understanding not only what all is end of life, but whether there are actually vulnerabilities.

At the time you made that risk acceptance to let this end-of-life thing happen, you knew there weren't any. But are you actually keeping up with the vendor and with the industry to know whether that has changed? And I think, if you're talking about one thing, one application, it's fairly easy to manage.

But once you start accumulating a lot of these, it becomes really unmanageable.
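One way to keep that accumulating debt visible is a periodic end-of-life report over the software inventory. This is a hedged sketch; the product names and dates are made up, and the 180-day warning window is an arbitrary choice, not a standard:

```python
from datetime import date

# Hypothetical sketch: classify inventory items against published EOL dates.
# "unknown" captures the gap the hosts discuss: components (often open
# source) with no published end-of-life date at all.

EOL_DATES = {
    "legacy-cms": date(2023, 6, 1),
    "app-server": date(2026, 1, 15),
    "old-runtime": date(2024, 11, 30),
}

def eol_report(inventory: list[str], today: date) -> dict[str, str]:
    """Classify each component: past EOL, approaching EOL (within 180
    days), supported, or unknown (no published EOL date)."""
    report = {}
    for name in inventory:
        eol = EOL_DATES.get(name)
        if eol is None:
            report[name] = "unknown"
        elif eol < today:
            report[name] = "past EOL"
        elif (eol - today).days <= 180:
            report[name] = "approaching EOL"
        else:
            report[name] = "supported"
    return report

print(eol_report(["legacy-cms", "app-server", "some-oss-lib"], date(2024, 8, 24)))
```

Run on a schedule, this turns "we accepted that risk once" into a standing list that has to be re-justified, which is exactly the discipline Jerry says gets lost as the debt piles up.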

Andrew: Yeah, you echoed a lot of the notes I had as well, which is: once you get so far behind, it's that much more difficult to get caught back up. And then it becomes that much more of a fight, comparing against other higher-priority things, to work on patching or massive upgrading, as opposed to just keeping things up to date bit by bit. The other thing I think about is, if something is that far end of life, it's not just a security thing. You don't necessarily know if other interrelated components still support that version, or are tested against that version, and you might start seeing some weird buggy artifacts as a result. And you brought up the open source thing. Most of these end-of-life checks usually come from some sort of end-of-life policy statement from a commercially supported application or operating system. The problem with a lot of these open source dependencies and third-party packages is they don't have that.

They don't have any sort of published end-of-life or end-of-support methodology. So how do you know when something goes end of life when it's open source? It hasn't been updated in three years; is that just because it's really stable, or because it's been abandoned? And what criteria do you want to use there?

Do you have a criteria that says, Hey, we need to use currently maintained third party dependencies in our code. Okay. How do you define what that is? Something that has had an update in the last X amount of months we see all the time that. Open source packages just become abandoned ware without any notification, without any understanding of that.

It’s just in hindsight you go, oh yeah, that guy stopped working on that three years ago. Who knows?
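The "updated in the last X months" criterion Andrew describes can be reduced to a simple check. This is only a sketch: the package names, dates, and the roughly 18-month threshold are invented examples, not a recommendation from the hosts.

```python
from datetime import datetime, timedelta

# Hypothetical staleness check for open source dependencies: flag anything
# whose most recent commit is older than a threshold, so a human can decide
# whether it is "really stable" or abandoned. All names and dates are invented.
STALE_AFTER_DAYS = 540  # roughly 18 months; pick your own criterion

def is_stale(last_commit: datetime, now: datetime,
             threshold_days: int = STALE_AFTER_DAYS) -> bool:
    """True when the project has had no commits within the threshold."""
    return (now - last_commit) > timedelta(days=threshold_days)

# Last-commit dates as you might pull them from a repository API.
deps = {
    "active-lib": datetime(2024, 7, 1),     # maintained recently
    "quiet-lib": datetime(2023, 11, 15),    # quiet, but under the threshold
    "abandoned-lib": datetime(2021, 3, 2),  # no commits in over three years
}

now = datetime(2024, 8, 15)
flagged = sorted(n for n, last in deps.items() if is_stale(last, now))
print(flagged)  # ['abandoned-lib']
```

The interesting part is the policy question the code makes explicit: whatever threshold you pick, a flagged package still needs a human to decide whether it is stable or abandoned.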

Jerry: Yeah, it’s certainly not universally true. There are plenty of open source projects, especially the larger ones, that have a defined roadmap. But I think you’re spot on. If you don’t see that there’s been an update on a particular piece of open source, is it because there’s nothing wrong with it? Or is it on you to go look at its GitHub repository and see that people have been jamming the issue log with requests to fix some vulnerability, requests that are falling on deaf ears? And when you start to think about the many tens of thousands or even hundreds of thousands of open source components some applications use, it becomes really challenging. I think that’s where some of the open source management tools like Mend and others can start to help. But even those aren’t infallible, and even so, you have to have, as an organization, some amount of discipline and capacity to make sure that you’re staying up to speed.

Andrew: Yeah.

Jerry: The other thing by the way that I wanted to hammer on because I hate it.

I hate it with a burning passion, and it has happened every time I’ve been involved in it. And it’s something that I feel like, as an industry, we have to do something about. And that is, in the aftermath of an acquisition: accounting systems. Can we just talk about those for a second? I don’t know how many acquisitions I’ve been in; I’ve been on the acquired side and I’ve been on the acquiring side a bunch of times.

And it happens every single goddamn time. The acquired company has an accounting system. And why do companies acquire other companies? One of the main reasons is this thing called synergy, right? And synergy basically means that we don’t want to run duplicate HR systems and accounting and whatnot.

If we consolidate all that backend stuff and continue to make money on the products they sell, then we’ve increased the value; the whole is more than the sum of the parts. And that’s why a lot of companies acquire each other, right?

Andrew: Yeah, sorry, I was gonna say: the back office overhead.

Jerry: Exactly. So when a company acquires another company, one of the things they want to do very quickly, as quickly as possible, is start to realize those savings. And one of the first things that happens is putting out to pasture the accounting system. This could also be the HR system, although nobody does HR internally anymore.

It’s like running your own voicemail system these days. But accounting systems seem to still be very much insourced. And every single time I’ve seen this happen, without exception, the company gets acquired and they stop paying for maintenance on the accounting application.

And it ends up running on some old-ass operating system that is out of date. You can’t update the operating system, because the accounting system won’t work on the new operating system. You can’t update the accounting system, because the vendor is out of business, or you don’t have a license to upgrade anymore, and it would cost millions of dollars and it’s not in the business case. Oh my God.

And it wouldn’t provide any accretive value to the company if you did upgrade it, because it’s not going to be used anymore. But you know what? If you shut it off, people will go to jail. At least that’s what I’ve been told, over and over again. And so you end up with this thing sitting in the corner which is, as best I can describe it, an attractive nuisance, because everything about it is terrible. It’s all out of date, and it can’t go away, because Joe from accounting says that somebody will go to jail if you turn it off.

Anyway, I may be a little angry about this. Ah,

Andrew: If you just ran your accounting on Excel, like God intended, you wouldn’t have this problem.

Jerry: It’s so true.

Andrew: No accounting is more complicated than what can be done in an Excel sheet.

Jerry: Amen. Look, I’m going to be forthright and say I’ve never seen an effective counter to this. It’s been a problem every single time I’ve seen it happen. I shouldn’t say no company, but I think most companies will not be effective in changing those facts.

It’s going to happen: your acquired company is going to have an accounting system, and there’s not going to be an appetite to update it. There it is, and so you have to mitigate the risk of it. I think that means having a defined approach to doing so, whether it’s a separate VLAN that has no access, or requiring multi-factor authentication to get in and out of that network. It could be pretty simple and dirty, but have a plan, because it happens no matter how mad it makes me. It happens.

And so I think we’ve got to recognize that there are cases where that will be the situation and come up with relatively workable mitigations around it, but it can’t be the rule.

Andrew: What’s your ideal use case? That they just migrate all the data off the old system to the new system and just kill it?

Jerry: I don’t know.

Andrew: Because I’m guessing the reason the old system is maintained is that they need a system of record for the last seven years or whatever, for tax purposes or government regulatory purposes. I’m assuming that’s why.

Jerry: Typically, yes. Frankly, I think the best course would be to figure out the different types of reports that are needed, run exports, and have those exports exist in a spreadsheet. Now, I’m not an accountant,

Andrew: Right out of Excel. See, look, it all comes back.

Jerry: But I don’t know if there’s some statutory requirement that that system of record has to be there. Because if the SEC or the Department of Justice or some other legal authority came to you and wanted to investigate, why did you say you made X dollars seven quarters ago, before you were acquired?

You’ve got to be able to go back and replay that. Maybe that’s why, and maybe that can’t be done through exports. I don’t know. But in every instance, the accounting folks have insisted that the system has to be available. It’s not enough to just dump the data.

Now, I don’t know if that’s laziness or what. The other problem I have, while I’m beating on this drum: over time, the people who are familiar with that system go away,

Andrew: Certainly.

Jerry: and so one of the things you have to watch out for is that eventually, when that system’s usefulness is done, there isn’t anybody left to say, hey, now it’s time to turn it off.

Andrew: And then who wants to take the risk of being the one who makes the wrong call? So they say people go to jail. Can we talk about who might go to jail and if that’s really a bad thing?

Jerry: I like where this is going.

Andrew: just weighing the outcome. What are my options?

Jerry: I think I’ve beaten that one to death. The last story we have today is also from Cybersecurity Dive, and the title is “After a wave of attacks, Snowflake insists security burden rests with customers.” Now, Snowflake had a large problem, and this happened earlier in 2024, before we got back to podcasting.

But I would say that I don’t think it’s an overstatement to say that the data breaches associated with Snowflake will probably go down as the largest in history. Bigger than anything there ever was, and perhaps bigger than anything there ever will be again. A huge number of customers lost lots of data.

Now, the point of this story is that Snowflake is saying, hey, it wasn’t us. It was our customers.

Andrew: Because their customers used single-factor logins, with passwords that were easy to find in other dumps. Bad actors ran a widespread campaign to test all of those passwords against Snowflake user accounts, and lo and behold, a bunch of them worked. And Snowflake is saying the customers shouldn’t have relied on single-factor authentication and reused passwords.

Jerry: Yes.

Andrew: And that’s on you, because you didn’t take appropriate security measures.
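The campaign described here is classic credential stuffing, and part of why it is detectable on the provider side is its shape: one source trying many distinct usernames with a high failure rate. The following is a toy sketch with fabricated IPs, usernames, and thresholds, not anything Snowflake actually runs.

```python
from collections import defaultdict

# Toy illustration of why a stuffing campaign stands out in login telemetry:
# one source IP trying many *distinct* usernames, mostly failing, looks
# nothing like one legitimate user mistyping a password. All events below
# are fabricated examples.

def flag_stuffing(events, min_users=3, min_fail_rate=0.7):
    """Return source IPs that tried many distinct accounts and mostly failed."""
    by_ip = defaultdict(list)
    for ip, user, ok in events:
        by_ip[ip].append((user, ok))
    suspects = []
    for ip, attempts in by_ip.items():
        distinct_users = {u for u, _ in attempts}
        fails = sum(1 for _, ok in attempts if not ok)
        if len(distinct_users) >= min_users and fails / len(attempts) >= min_fail_rate:
            suspects.append(ip)
    return sorted(suspects)

events = [
    ("198.51.100.7", "alice", False),
    ("198.51.100.7", "bob", False),
    ("198.51.100.7", "carol", False),
    ("198.51.100.7", "dave", True),   # a reused password that worked
    ("203.0.113.5", "erin", False),   # a real user fat-fingering once
    ("203.0.113.5", "erin", True),
]
print(flag_stuffing(events))  # ['198.51.100.7']
```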

Jerry: Correct. Yeah, we just provided you with an account and a place to store or process your data. It was on you, the customer, to pick a good password. And that’s basically what they are saying here. They’re saying our Snowflake systems didn’t get hacked; our infrastructure is fine.

Everything worked as designed. The fact that bad guys got your password and stole all your data is a horrible thing, but it’s not our fault. It’s your fault because, it was your password that they got. We don’t know how they got your password. Was it the same password you used on LinkedIn or on, Ashley Madison?

I don’t know. Who knows? They’re saying it’s not our problem, not our fault. Of course they give lip service to the fact that, hey, these are customers, and of course we care about them, and we’re “in it together,” as they say. But: not our fault.

By the way, Snowflake has since implemented some changes which require mandatory multi-factor authentication for new customers, and it also gives customers the ability to enforce multi-factor authentication for all of their users or for specific roles in their account.

By the way, I should have said, for those of you not in the know: Snowflake is what I would call a managed storage, managed database provider. They do lots of value-added services around data analytics and whatnot. So the kinds of data that you would have stored in Snowflake are the kinds of data that you wouldn’t want to get compromised.

And so I think this was a central place, one-stop shopping, for some adversaries to go and do their credential stuffing. It looks like they got somewhere in the neighborhood of 150 or 200 different customer accounts and pulled all that data out. And even now, by the way, we’re still hearing about net new companies that were hacked or had their data stolen.

And I don’t know if that’s because they’ve been, quote, responding to and investigating the incident, or if they just recently realized that this had happened. But this is a big problem. And I think more interesting than the incident itself, because I don’t think Snowflake is all that commonly used in the industry, is the concept that these service providers have a hard line of demarcation around what they’re responsible for. So when we as consumers of these services decide what we’re going to use, what do we do? We look at their SOC 2, and we look at their PCI report, and we look at their ISO certification. We look for all these things, but how often do we look at the security capabilities of their services? Do they require multi-factor authentication by default? I can say this authoritatively: in my time, I don’t know that I ever saw any customers asking about that. And they should.

And this is… look, go ahead.

Andrew: No, I think this absolutely is a product management decision. And the implication here is that security causes friction.

Jerry: Yes.

Andrew: And friction will make us uncompetitive. Let me bring it back to a bank as an example. A bank knows how to secure your accounts. A bank could easily force multi-factor authentication.

They could force tokens. They could force all sorts of things. But they know there’s a certain percentage of customers who will move away from them as a result. That friction will be more complicated than those customers want to deal with; they will not value the security, they will look at it as a detriment to the service, and they will go someplace else. So that is a decision every company and every product manager makes about what security to build into their tool. In my mind, and obviously I have no insider insight into Snowflake, the idea that those admins could allow single-factor, static passwords is a product design choice to reduce friction and make the platform easier to use. Multi-factor authentication is a very mature, known, solved problem. So if it’s not being put in place, it’s because somebody made a choice, and that choice is typically competitive in nature. So to your point, if customers are not demanding it, it’s not going to show up.

The other thing that occurs to me, while I’ve stolen the floor for a moment, is that with the rise of all of these SaaS vendors, the administrators of these tools are typically no longer IT professionals or security professionals. They are users of the data. So they may not understand, or have any insight into, the security implications of administering these tools. All they know is, hey, I need to make an account. Okay, I made an account. They may not have the background or the guidance to know that single-factor is bad and why, like you would if a more traditional IT or security team were administering these tools. So I think we’ve also got a problem with these SaaS tools that have become so ubiquitous and easy to use that we’ve enabled less technical staff to administer them, and I think things like this become oversights that come back to bite us.

Jerry: I couldn’t have said it better. I think that’s exactly what’s going on. The friction of a product requiring multi-factor authentication isn’t on IT and security departments; it’s on the business users. And when it’s the business users who are specifying and deciding what services to use, it’s not hard to imagine that they’ll pick one that is easier to log into, even though that could have a devastating effect, like it did for a lot of the customers of Snowflake here. And it’s a complicated thing, because if all of the providers required multi-factor authentication, there wouldn’t be a differentiation; the business users would have a similar experience across all of the different providers. But we know that’s not true.

I think if you zoom out, the concern I have as an industry is that, broadly speaking, we’re not deeply aware of the responsibilities that we are picking up for properly managing those services. We are, to some extent, doing what we think is due diligence, looking at those providers and saying they’re a reputable company, they’re secure, they have these certifications, but then that’s the end of it.

We don’t think about what our obligations are. And this manifests itself in so many different ways. How many times have we talked about open S3 buckets? This is another permutation of that. The big Capital One breach in AWS was also about misconfiguring how they set up their IAM.

The devil is in the details of how we manage these software-as-a-service systems. And these providers aren’t going to come hit us on the back of the head and say, you big dummy, we saw that you don’t have multi-factor authentication turned on and you really need to go do that.

Now, maybe Snowflake will start doing that because of the reputation damage that they’ve incurred as a result of this breach, which they assert isn’t their fault. It is having an impact on them, and I think probably a negative one.

It’s attracting attention that I’m sure they don’t want to have. They say there’s no such thing as bad PR, but I think you don’t want to have your name associated with the largest data breaches there have ever been. But again,

Andrew: Except on a podcast like this, with tens of listeners.

Jerry: Tens of listeners. And I’m not disagreeing, by the way, with the premise of Snowflake’s comments that their customers were responsible. This was not intended to be a bash-on-Snowflake segment. It was more that we have to understand how companies like Snowflake view their relationship with their customers.

You as the customer are responsible for ensuring that you are properly securing your stuff.

Andrew: So I go back to: maybe we’ve got non-technical people running these tools in companies, and maybe what we should be doing is allocating GRC’s time to go audit how they’re running these tools

Jerry: Yes.

Andrew: and clean them up.

Jerry: I don’t have a better option or better idea. Yeah,

Andrew: There are tools out there that are meant to allow you to apply policy, and those are technical controls for the same problem, which is that proper policy should be applied. I think the challenge is we have such a sprawl of SaaS tools that it’s really difficult to stay on top of and ahead of them. I’ve worked at companies that try to mandate, as best they can: hey, if you want to buy a SaaS tool, it’s got to integrate with single sign-on or MFA. You could do that, and then ten minutes later the admin sets up another username and password that you don’t have purview into. And they don’t necessarily know they’re doing anything wrong.
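One concrete audit a GRC or security team can run against that shadow-local-account problem is to pull a user export from each SaaS tool and flag accounts that are not backed by SSO. A minimal sketch, where the export format and field names are fabricated (real SaaS admin APIs differ per vendor):

```python
# Minimal sketch of auditing a SaaS user export for accounts that bypass an
# "SSO required" purchasing policy. The export format and field names here
# are fabricated; real SaaS admin APIs differ per vendor.
users = [
    {"email": "ann@example.com", "auth": "sso"},
    {"email": "bob@example.com", "auth": "password"},         # shadow local account
    {"email": "svc-report@example.com", "auth": "password"},  # service account
]

def local_accounts(user_list):
    """Accounts not backed by SSO: the ones to review, disable, or migrate."""
    return sorted(u["email"] for u in user_list if u.get("auth") != "sso")

print(local_accounts(users))  # ['bob@example.com', 'svc-report@example.com']
```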

Jerry: exactly. They have a business objective to meet. They’re trying to solve a business problem.

So

Andrew: This is, I think, one of the unintended consequences of the democratization of admin capabilities through SaaS and cloud sprawl: it is making life more difficult for security teams.

Jerry: Yes. And I do wonder, by the way, and I don’t know that we’ll ever have clarity on this, but of the 165 companies, I think it is at last count, whose data was breached from their Snowflake accounts: how many of their IT and security departments found it a surprise that they were using Snowflake in that way, or that they didn’t have multi-factor authentication turned on?

How many times was that a surprise? And I think it’s going to be an unfortunately high number. Yes,

Andrew: And what I unfortunately foresee happening is executives will just say, fine, security, IT, go fix it. Without allocating enough resources.

Jerry: Yes, you failed in your job. Which, I guess, is not a completely unfair statement. But on the other hand, I think we have to be enabled in our jobs, and I’m not sure that always happens.

Andrew: Yeah. Yeah.

Jerry: So anyway, those are the stories for today. Oh, go ahead. Sorry. I think there’s a lag now.

Andrew: Yeah, we might have stepped on each other a little bit this show, but we’ll figure it out.

Jerry: I think there’s a bit of a lag; I don’t know if it’s because of the way we’re recording or what. So in any event, that is the show for today. I appreciate everybody’s attention and hope you found this useful. And if you like the show, you can go listen to back episodes.

Everything is available on our website at www.defensivesecurity.org or on your favorite podcast player. If you do like the show, we would love, love, love for you to give us a five-star review; that helps make sure that other people are able to find us.

And by the way, if you don’t like the show. You can also still give us a five star review.

Andrew: Even more

Jerry: And just not listen. Yeah.

Andrew: That’s true.

Five stars is free.

Jerry: Yeah, and then you just don’t listen anymore.

Anyway.

Andrew: just play it for your cats. That’s what I do.

Jerry: Yes.

I’m not even going to continue down that road.

Andrew: That’s fair, it’s going off the rails.

Jerry: All right. You can find Mr. Kalat on the social media. Where?

Andrew: I’m on Twitter slash X at lerg, L-E-R-G, and on InfoSec Exchange on the Fediverse, also at lerg, L-E-R-G.

Jerry: Awesome. You can find me on the Fediverse at jerry@infosec.exchange. And with that, we will talk to you again very soon. Thank you all. Have a great week.

Andrew: week. Bye bye.

Defensive Security Podcast Episode 276

Check out the latest Defensive Security Podcast Ep. 276! From cow milking robots held ransom to why IT folks dread patching, Jerry Bell and Andrew Kalat cover it all. Tune in and stay informed on the latest in cybersecurity!

Summary:

In episode 276 of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat delve into a variety of security topics including a ransomware attack on a Swiss farm’s milking machine leading to the tragic death of a cow, issues with patch management in IT industries, and an alarming new wormable IPv6 vulnerability patch from Microsoft. The episode also covers a fascinating study on the exposure and exploitation of AWS credentials left in public places, highlighting the urgency of automating patching and establishing robust credential management systems. The hosts engage listeners with a mix of humor and in-depth technical discussions aimed at shedding light on critical cybersecurity challenges.

00:00 Introduction and Casual Banter
01:14 Milking Robot Ransomware Incident
04:47 Patch Management Challenges
05:41 CrowdStrike Outage and Patching Strategies
08:24 The Importance of Regular Maintenance and Automation
15:01 Technical Debt and Ownership Issues
18:57 Vulnerability Management and Exploitation
25:55 Prioritizing Vulnerability Patching
26:14 AWS Credentials Left in Public: A Case Study
29:06 The Speed of Credential Exploitation
31:05 Container Image Vulnerabilities
37:07 Teaching Secure Development Practices
40:02 Microsoft’s IPv6 Security Bug
43:29 Podcast Wrap-Up and Social Media Plugs

Links:

  •  https://securityaffairs.com/166839/cyber-crime/cow-milking-robot-hacked.html
  • https://www.theregister.com/2024/07/25/patch_management_study/
  • https://www.cybersecuritydive.com/news/misguided-lessons-crowdstrike-outage/723991/
  • https://cybenari.com/2024/08/whats-the-worst-place-to-leave-your-secrets/
  • https://www.theregister.com/2024/08/14/august_patch_tuesday_ipv6/

 

Transcript:

Jerry: Today is Thursday, August 15th, 2024. And this is episode 276 of the defensive security podcast. My name is Jerry Bell and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. Once again, from your southern compound, I see.

Jerry: Once again, and for the final time for two whole weeks, and then I’ll be back.

Andrew: Alright hopefully next time you come back, you’ll have yet another hurricane to dodge.

Jerry: God, I hope not.

Andrew: How are you, sir?

Jerry: I’m doing great. It’s a, it’s been a great couple of weeks and I’m looking forward to going home for a little bit and then then coming back. How are you?

Andrew: I’m good, man. It’s getting towards the end of summer. Looking forward to a fall trip coming up pretty soon, and just cruising along. Livin’ the dream.

Jerry: We will make up for last week’s banter about storms and just get into some stories. But first a reminder that the thoughts and opinions we express are those of us and not our employers.

Andrew: Indeed. Which is important because they would probably fire me. You’ve tried.

Jerry: I would, yeah. So the first story we have tonight is very moving.

Andrew: I got some beef with these people.

Jerry: Great. Very moving. This one comes from Security Affairs, and the title is “Crooks took control of a cow milking robot, causing the death of a cow.” Now, I will tell you that the headline is much more salacious than the actual story. When I saw the headline, I thought, oh my God, somebody hacked a robot and it somehow killed the cow. But no, that’s not actually what happened.

Andrew: Now, also, let’s just say up front, the death of a cow is terrible, and we are not making light of that. But we are gonna milk this story for a little while.

Jerry: that’s very true.

Andrew: I’m almost out of cow puns.

Jerry: Thank God for that. So what happened here is this farm in Sweden had their milking machine ransomwared, and the farmer noticed that he was no longer able to manage the system and contacted support for that system. And they said, no, you’ve been ransomwared.

Actually, the milking machine itself apparently was pretty trivial to get back up and running, but apparently what was lost in the attack was important health information about the cows, including when some of the cows were inseminated. And because of that, they didn’t know that one of the pregnant cows was supposed to have given birth, but actually hadn’t.

What turned out to be the case is that the calf unfortunately passed away inside the cow, and the farmer didn’t know it until they found the cow lying lethargic in its stall and called a vet. And unfortunately, at that point it was too late to save the cow.

This is an unfortunate situation where a ransomware attack did cause a fatality.

Andrew: Yeah, and I think in the interest of accuracy, I think it was in Switzerland,

Jerry: Is it Switzerland? Okay. I knew it started with an S-W.

Andrew: That’s fair. You’re close. It’s Europe.

Jerry: It’s all up there.

Andrew: But yeah, the theory is that if they had better tracking of the date the cow had been inseminated, they would have known that the cow was in distress in labor and could have done something more proactive to save the cow and potentially the calf. And unfortunately, because they didn’t have that data, because it was in this ransomwared milking robot, we ended up with a dead cow and a dead calf.

Jerry: So, not to grill the farmer too much, but I was thinking that,

Andrew: Wow!

Jerry: I’m sorry. I was thinking that they clearly had an ability to recover. What they thought was the important aspect of that machine’s operation, which was milking, they were able to get back up and running pretty quickly.

But it seemed to me like they were unaware that this other information was tied to that same system. I don’t fully understand it; it seems like it’s a little more complicated than I’ve got it envisioned in my mind. But very clearly they hadn’t thought through all the potential harm.

A good lesson, I think for us all.

Andrew: I feel like we’ve butchered this story.

Jerry: The next story we have for today comes from theregister.com, and the title is “Patch management still seemingly abysmal because no one wants the job.” I can’t stop laughing. All right.

Andrew: A cow died! That’s tragic!

Jerry: I’m laughing at your terrible attempts at humor.

Andrew: I couldn’t work leather in there. I tried. I kept trying to come up with a leather pun.

Jerry: We appreciate your efforts.

So anyhow, this next story talks about the challenge that we as an IT industry have with patching, and basically that it is a very boring task that not a lot of people in IT actually want to do. And so it highlights, again, the importance of automation.

I paired this with the complementary story titled “Misguided lessons from CrowdStrike outage could be disastrous,” from Cybersecurity Dive. I put these two together for a reason, because one of the takeaways some people drew from the recent CrowdStrike disaster is that we need to go slower with patching and updates, and perhaps not rely on automatic updates.

And these two articles really point out the folly in that. Number one, the article from the Register points out that relying on manual patching is a losing proposition, because really nobody wants to do it and it doesn’t scale. IT operations is already a crap job in many instances, and then expecting people to do things manually is a problem.

The second article points out the security issues that come along with adopting that strategy, which is that you’re exposing your environment unnecessarily. In fact, the improvements in your security posture and the reduction in the likelihood of some kind of attack far outweigh the remote possibility of what happened with CrowdStrike. Now, there is kind of an asterisk at the bottom: they point out the importance of doing staged deployments of patches, which, at least from my perspective, is one of the central lessons of the CrowdStrike disaster: go fast, but stage it.
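The "go fast, but stage it" idea can be sketched as a ring-based rollout: patch a small canary ring first, verify health, then widen, and halt the moment something looks wrong. The ring names, hosts, and health check below are hypothetical stand-ins, not any vendor’s actual mechanism.

```python
# Hypothetical ring-based (staged) patch rollout: apply to a small canary
# ring first, verify health, then widen; halt on the first unhealthy host.
RINGS = [
    ("canary", ["host-01"]),
    ("early", ["host-02", "host-03"]),
    ("broad", ["host-04", "host-05", "host-06"]),
]

def deploy(patch_id, apply, healthy):
    """Roll patch_id out ring by ring, stopping on the first unhealthy host."""
    patched = []
    for ring, hosts in RINGS:
        for host in hosts:
            apply(patch_id, host)
            if not healthy(host):
                return patched, f"halted in ring '{ring}' at {host}"
            patched.append(host)
    return patched, "complete"

# Simulated run: the patch breaks host-04, so the broad ring never finishes
# and the blast radius stays at three hosts instead of six.
patched, status = deploy(
    "patch-2024-08",
    apply=lambda p, h: None,           # stand-in for real deployment tooling
    healthy=lambda h: h != "host-04",  # stand-in for a real health probe
)
print(patched, status)
```

The design choice worth noting is that the speed argument survives: every healthy ring still gets the patch quickly, and staging only limits how far a bad update propagates.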

Andrew: Yeah, it’s an interesting problem we’re struggling with here, which is: how many times have we saved our own butts, without knowing it, by automatically or rapidly patching? It’s very difficult to prove that negative. And so it’s very difficult to weigh the pros and cons; there’s no empirical data showing where automatic or rapid patching solved or avoided a problem, versus when patching broke something.

Because all we know about is when it breaks, like when a Microsoft patch rolls out and breaks things. And the feeling from a lot of folks is that it has to be perfect every time; every time we have a problem, we break some of that trust, and it hurts the credibility of auto patching or rapid patching. The other thing that comes to mind is I would love to get more IT folks, technical operations folks, SREs, and DevOps folks comfortable with the concept of patching as just part of regular maintenance that is built into their process. A lot of times it feels like a patch is interrupt-driven, toil-type work, where they have to stop what they’re doing to go work on it.

Whereas in my mind, at least the way I look at it from a risk management perspective: unless something’s on fire, or is a known RCE, or is known exploited, or meets certain criteria, I’m good. Hey, patch on a monthly cadence and just catch everything up on that cadence, whatever it is. I can work within that cadence.

If I’ve got something that I think is a higher priority, we can try to interrupt that cadence or drive a different one to get it patched or mitigated in some way. But the problem often is that every one of these patches seems to be a one-off action if you’re not doing automatic patching in some way. That is very cognitively dissonant with what a lot of these teams are doing, and I don’t know how to get across very well that you will always have to patch. This will never stop, so you have to plan for it. You have to build time for it, and build automation and cycles for and around it, and it’ll be a lot less painful. It feels like pushing the rock up the hill on that one.

Jerry: One of my observations was that an impediment to fast patching is the reluctance to take downtime, or the potential impacts from downtime. And I think that dovetails with what you just said; in part, that concern stems from the way we design our IT systems and environments. If we design them in a way that they’re not patchable without interrupting operations, then my view is we’ve not been successful in designing the environment to meet the business.

And that’s something I tried hard to drive, and I think in some aspects I was successful and in others I was not. But I think that is one of the real key things that we as IT leaders or security leaders really need to be imparting to the teams: when we’re designing things, they need to be maintainable, not as an interrupt, like you described it, but in the normal course of business, without heroic efforts.

You have to be able to patch, you have to be able to take the system down. You can't say, gosh, this system is so important we can't ever take it down, we'd lose millions of dollars. That's not a good look. You didn't design it right.

Andrew: That system is going to go down. Is it going to be on your schedule or not? The other thing I think about with patching is not just vulnerability management. Let's say you don't patch, and suddenly you've got a very urgent vulnerability that does need to be patched, and you're four major versions and three sub-versions behind. Now you have this massive uplift that's probably going to be far more disruptive to get that security patch applied, as opposed to staying relatively current, n minus one or n minus two, where it's much less disruptive to get up to date.

Not to mention all of the end-of-life and end-of-support issues that come with running really old software, and not even knowing what vulnerabilities might be running out there. Just keeping things current as a matter of course, I believe, makes dealing with emergency patches much, much easier. But all these things take time and resources away from what is perceived to be higher-value activities, so it's constantly a resource battle.

Jerry: There was a quote related to what you just said at the end of this article. It said, quote, I think it mostly comes down to technical debt. It's a very unsexy thing to work on. Nobody wants to do it, and everyone feels like it should be automated, but nobody wants to take responsibility for doing it.

He added, the net effect is that nothing gets done and people stay in a state of technical debt, where they're not able to prioritize it.

Andrew: That’s not a great place to be.

Jerry: No. There was another interesting quote that I often see thrown around, and it has to do with the percentage of patches. I'll just give the quote; it's toward the beginning of the article. Quote, patching is still notoriously difficult, Forrester principal analyst Andrew Hewitt told The Register.

Hewitt, who specializes in IT ops, said that while organizations strive for a 97 to 99 percent patch rate, they typically only manage to successfully fix between 75 and 85 percent of issues in their software. I'm left wondering: what does that mean?

Andrew: Yeah, like in what time frame? In what? I don't know. I feel like what he's talking about, maybe, is that they only have the ability to automatically patch up to 85 percent of the deployed software in their environment.

Jerry: That could be, it’s a little ambiguous.

Andrew: It is. And from my perspective, there are actually a couple of different things we're talking about here, and we're not being very specific. Are we talking about IT operations, corporate IT solutions, systems, and servers? I work in a software shop, so we've got the whole software side of this equation too, for the code we're writing, and keeping all that stuff up to date, which is a whole other complicated problem, some of which I think would be inappropriate for me to talk about. It's doubly difficult, I think, if you're a software dev shop, to keep all of your components and dependencies and containers and all that stuff up to date.

Jerry: Absolutely. A couple of other random thoughts on my part: in my view, this gets harder and more complicated in larger organizations, because you end up having these kind of siloed functions where responsibility for patching isn't necessarily clear, whereas in a smaller shop,

you may have an IT function that's responsible end to end for everything. But in large organizations, oftentimes you'll have a platform engineering team that's responsible for, let's say, operating systems, and that team is a service provider for other parts of the business.

And those other parts of the business may not have a full appreciation for what they're responsible for from an application perspective. And especially in larger companies, where they want to reduce headcount and cut costs, those application people, in my experience, as well as the platform team, are ripe targets for reductions.

And when that happens, you end up in this kind of weird spot of having systems and no clear owner of who's actually responsible. You may even know that you have to patch it, but you may not know whose job it is.

Andrew: Yeah, absolutely. In my perfect world, every application has a technical owner, and every underlying operating system or container has a technical owner. Might be the same person, might be different, and they each have their own set of expectations. Often they're different, and often they're not talking to each other. So there can be issues in dependencies between the two that they're not coordinating well, and then you get gridlock and nobody does anything.

Jerry: So these are pragmatic problems that, in my experience, present themselves as sand in the gears, right? They make it very difficult to move swiftly. And that's what, in my experience, drives that heroic effort, especially when something important comes down the line, because now you have to pay extra attention, because there isn't a well-functioning process.

And I think that's something, as an industry, we need to focus on. Oh, go ahead.

Andrew: I was just going to say, in my mind, some of the ways you solve this, and these are easier said than done: properly maintained IT asset management is key. And in my mind, you've got to push your business to make sure that somebody has accountability for every layer of that application, and push your business to say, hey, if we're not willing to invest in maintaining this, and nobody's going to take ownership of it, it shouldn't be in our environment. It must be well owned. It's like when you adopt a dog: somebody's got to take care of it, and you can't just neglect it in the backyard. We run into stuff all the time where it's just, oh, nobody knows what that is. Then get rid of it. Every single thing out there is attack surface, something that could be attacked, and if it's not being maintained, it becomes far riskier from an attack surface perspective. I also think about telling people, hey, before you go buy a piece of software: do you have the cycles to maintain it? Do you have the expertise to maintain it?

Jerry: The business commitment to fund its ongoing operations, right?

Andrew: Exactly. I don't know, it gets stickier. And now we have this concept of SaaS, where a lot of people are buying software and not even thinking about the back end of it, because it's all just automagic to them. So they get surprised when it's, oh, this one's in-house, we've got to actually patch it ourselves.

Jerry: The other article, in Cybersecurity Dive, had another interesting quote that I thought lacked some context. The quote was: there were 26,447 vulnerabilities disclosed last year, and bad actors exploited 75 percent of vulnerabilities within 19 days.

Andrew: No, that's not right.

Jerry: Yeah, here is the missing context.

Oh, and it also says one in four high-risk vulnerabilities were exploited the same day they were disclosed. Now, the missing context is this: the quote is referring to a report by Qualys that came out at the beginning of the year. What it was saying is that about 1 percent of vulnerabilities are what they call high risk, and those are the vulnerabilities that end up having exploits created, which is an interesting data point in and of itself: only 1 percent of vulnerabilities are what people go after.

Our goal in patching is to patch all of them. What they're saying is that 75 percent of that 1 percent, the ones which had exploits created, had those exploits created within 19 days.
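For context, the arithmetic behind that correction works out roughly like this (figures taken from the quotes above; the rounding is ours):

```python
# Back-of-the-envelope math for the Qualys stats discussed above.
total_disclosed = 26_447   # vulnerabilities disclosed last year (as quoted)
high_risk_share = 0.01     # ~1% are "high risk" (ended up with exploits)
exploited_in_19d = 0.75    # 75% of those were weaponized within 19 days
same_day = 0.25            # 25% were exploited the day they were disclosed

high_risk = round(total_disclosed * high_risk_share)   # ~264 vulnerabilities
within_19_days = round(high_risk * exploited_in_19d)   # ~198
day_zero = round(high_risk * same_day)                 # ~66

print(high_risk, within_19_days, day_zero)  # 264 198 66
```

So the headline "75 percent exploited within 19 days" is really about a couple hundred vulnerabilities, not twenty thousand.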

Andrew: That's a lot more in line with my understanding.

Jerry: And 25 percent were exploited the same day. That's the important context; it's a very salacious statement without it. And I will say, as a security leader, one of the challenges we have is, again, that there were almost 27,000 vulnerabilities last year, and I think we're going to blow the doors off that number this year. They're not all equally important. Obviously they're rated at different levels of severity, but the reality, for those of us who pay attention, is that it's not just the critical vulnerabilities that are leading to systems being exploited and hacked and data breaches and whatnot.

There are plenty of instances where you have lower-severity vulnerabilities, from a CVSS perspective, being exploited, either on their own or chained together. But the problem is knowing which ones are important. And so there's a whole cottage industry growing up around trying to help you prioritize better which vulnerabilities to go after.

But that is the problem, right? I feel like we have kind of a crying-wolf problem, because 99 percent of the time or more, the thing we're saying the business has to go off and spend lots of time on, disrupting their availability and pulling people in on the weekends and whatnot, is not exploited, not targeted by the bad guys. You only know which ones are in that camp after the fact.

So if you had that visibility before the fact, it'd be better, but that's a very naive hope at this point.

Andrew: Yeah. If 1%.

Jerry: If I could only predict the winning lottery numbers.

Andrew: The other thing, and the debate this opens up, which I've had many times in my career, is ops folks, whoever, and they're not the bad guys, they're just asking questions, trying to prioritize, saying: prove to me how this is exploitable. That's a really unfair question. I can't, because I'm not a hacker who can predict every single way this could be used against a business.

I have to play the odds. I have to play statistically what I know to be true, which is that some of them will be exploited. One of the things I can do is prioritize: hey, what's open to the internet? What's my attack surface? What services do I know are reachable from the internet, even anonymously? Maybe those are my top priority, and I watch those carefully for open RCEs or likely exploitable things, and I prioritize on those. But at the end of the day, not patching something because I can't prove it's exploitable assumes I can predict what every bad guy is ever going to do in the future, or chained attacks in the future that I'm not aware of.

And I think that’s a really difficult thing to prove.

Jerry: Yeah, a hundred percent. There are some things that can help you, some things beyond just CVSS scores, that can help you a bit. Certainly if you look at something and it is wormable, right, remote code execution of any sort, that's something in my estimation you really need to prioritize. And CISA, the Cybersecurity and Infrastructure Security Agency, whose name still pisses me off

all these years later, because it has the word security in it too many times, but they didn't ask me. They have this list they call the KEV. It's the Known Exploited Vulnerabilities list, which in previous years was a joke because they didn't update it very often, but now it's actually updated very aggressively.

And so it contains the list of vulnerabilities that the US government and some foreign partners see actively being exploited in the wild. So that's also a data point. And I would say my perspective is that shouldn't be the only thing you patch. In my view, your approach should be: we're going to patch it all, but those are the ones we're not going to relent on.

There's always going to be a need, there's going to be some sort of end-of-quarter situation or what have you, but these are the ones that you should be looking at and saying, no, these can't wait. We have to patch those.

Andrew: Yep, 100 percent. And a lot of your vulnerability management tools are now integrating that list, so it can help you, right in the tool, know what the prioritization is. But bear in mind, there are a lot of assumptions in that: that those authorities have noted activity, have noted and shared it, understood it. And zero days happen.

Jerry: The reality is somebody had to get hacked. Tautologically, somebody had to get hacked for it to be on the list.

Andrew: Right, so don't rely only on that, but it is absolutely a good prioritization tool and a good focusing item: look, we know this is known exploitable, we're seeing exploits in the wild, we need to get this patched.
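The triage rule the hosts land on, KEV entries and internet-facing RCEs first regardless of raw CVSS, can be sketched as a simple sort key. The field names here are invented for illustration, not drawn from any particular scanner:

```python
# Toy prioritization: KEV-listed vulns first, then internet-facing,
# then wormable/RCE, then by raw CVSS score. Illustrative only.
def priority(vuln: dict) -> tuple:
    # Lower tuples sort first; False < True, so we negate the booleans.
    return (
        not vuln.get("in_kev", False),           # KEV entries: never relent
        not vuln.get("internet_facing", False),  # reachable from the internet
        not vuln.get("rce", False),              # remote code execution / wormable
        -vuln.get("cvss", 0.0),                  # finally, raw severity
    )

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "in_kev": False, "internet_facing": False, "rce": True},
    {"id": "CVE-B", "cvss": 6.5, "in_kev": True,  "internet_facing": True,  "rce": False},
    {"id": "CVE-C", "cvss": 7.5, "in_kev": False, "internet_facing": True,  "rce": True},
]
for v in sorted(vulns, key=priority):
    print(v["id"])  # CVE-B comes first despite its lower CVSS score
```

The design point is the one Jerry makes: KEV membership and exposure are evidence of real-world risk, while CVSS alone is not.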

Jerry: Yeah, absolutely. So, moving on to the next story. This one is from a cybersecurity consulting company called Cybernary; I guess that's how you would say it.

Andrew: I’d go with that. That seems reasonable.

Jerry: The title, and I'm sure somebody will correct me if I got it wrong, is "What's the worst place to leave your secrets?" It's research into what happens to AWS credentials that are left in public places. I thought this was a fascinating read, especially given where I had come from. I've been saying for some time now on the show that API keys and the like are the next big horizon for attacks.

And in fact, we had been seeing that. In my former role, I think we were on the upswing; we saw a lot of it manifesting itself as attackers using those keys to mine crypto. They would hijack servers or platforms or containers or whatever to mine cryptocurrency.

But I think over time we're going to see that morph into more data theft and perhaps less overt actions. I'm sure it's already happening; I don't mean to say that it isn't. But I think it's in the periphery right now, where a lot of the activity, at least a lot of the voluminous activity, tends to be what I'll call more benign, like, again, crypto mining.

But anyway, the approach this organization took here was pretty interesting. There's a company called Thinkst that has this concept of canary tokens and canary credentials, and they are exactly what they sound like: a set of secrets that you can create through this company and watch how they're used.

You can get an alert when somebody tries to use them. And that's exactly what they did here. There were 121 total access attempts; I don't know exactly how many credentials they created. They created a number of credentials, spread them around, and used a number of different services.

Let's see: they had GitHub, GitLab, Bitbucket, Docker Hub. They created their own FTP server, web server, and blog. They put them on Pastebin and JSFiddle. They put them into the npm and PyPI repositories, which we just talked about. They put them in various cloud storage buckets. And then they just waited to see how and when they were accessed.
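For readers unfamiliar with the idea, here is a rough conceptual sketch of how canary credentials work. This is not Thinkst's implementation; the key format and function names are invented for illustration:

```python
import secrets
import string
import time

# Mint a decoy credential shaped like an AWS access key ID
# (AKIA + 16 uppercase letters/digits). Purely illustrative format.
def mint_canary() -> str:
    alphabet = string.ascii_uppercase + string.digits
    return "AKIA" + "".join(secrets.choice(alphabet) for _ in range(16))

PLANTED = {}  # token -> metadata about where we planted it

def plant(token: str, location: str) -> None:
    PLANTED[token] = {"location": location, "planted_at": time.time()}

def check_usage(log_line: str):
    # Fires whenever a planted decoy shows up anywhere, e.g. in an auth log.
    for token, meta in PLANTED.items():
        if token in log_line:
            return f"ALERT: canary planted on {meta['location']} was used"
    return None

tok = mint_canary()
plant(tok, "pastebin")
print(check_usage(f"AuthFailure key={tok}"))  # fires the alert
```

The decoy has no real permissions; its only job is to phone home the moment a scraper tries it, which is how the researchers timed each channel.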

And unsurprisingly to me, this was pretty successful; it was a very successful test. I am a little surprised at some of the results. One of the charts they give shows how long it took for credentials planted in different places to be used.

Andrew: It’s insane.

Jerry: So npm was less than 60 seconds.

Andrew: So that means, from the time of posting, somebody somehow picked it up with some kind of scanner and turned around and tested using it in less than a minute.

Jerry: Yes.

Andrew: That’s insane.

Jerry: PyPI was 120 seconds, so just shy of two minutes. GitHub was 127 seconds, just over two minutes. Pastebin was 50 minutes, their own web server 47 hours, and Docker Hub 6.9 days.

Andrew: Man, what's going on with Docker Hub? Just nobody cares? Nobody's gotten around to it?

Jerry: I don't think it's that nobody cares. I think it's a lot more involved; it's not as readily scannable, I would say.

Andrew: I can tell you from my own experience in previous roles, we used to get reports all the time: hey, you've got a secret out here; hey, you've got a secret out here, from people looking for bounties. I still want to know what tools they're using to find this stuff so rapidly, because it's fast.

Jerry: Yes.

Jerry: GitHub, for example, will give you an API you can subscribe to. Again, it's not perfect, because obviously they're typically relying on randomness, or something being prefixed with "password=" or what have you. It's not a perfect match, but there are lots of tools out there that people are using.

The one that I found most interesting is more aligned with the Docker Hub case. I think it's a much larger problem that hasn't manifested itself as a big problem yet, and that is, with container images, you can continue to iterate on them.

By default, when you spin up a container, it is the end state of a whole bunch of what I'll just call layers. So if you, let's say, included credentials at some point in a configuration file, and then you later deleted that file, when you spin up that container as a running image, you won't find that file.

But it actually is still in that container image file. And so if you were to upload that container image to, let's say, Docker Hub, and somebody knew what they were doing, they could actually look through the history and find that file. And that has happened; I've seen it happen a fair number of times. You have to go through some extra steps to squash down the container image so that you basically purge all the history and only end up with the last intended state of the container filesystem. But not a lot of people know that. How many people know you have to do that?

Andrew: Well, including you and the six people listening to this show, maybe four others.
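The layering behavior Jerry describes can be demonstrated without Docker at all: an image is essentially a stack of tar archives, and a deletion in a later layer is recorded as a ".wh." whiteout entry while the original bytes remain in the earlier layer. A self-contained sketch of that stacking:

```python
import io
import tarfile

def make_layer(files: dict) -> bytes:
    """Build a tar archive (one image layer) from {name: bytes}."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Layer 1 adds a credentials file; layer 2 "deletes" it via an OCI-style whiteout.
layer1 = make_layer({"app/config": b"password=hunter2"})
layer2 = make_layer({"app/.wh.config": b""})

def merged_view(layers: list) -> dict:
    """What the running container sees after stacking the layers."""
    fs = {}
    for layer in layers:
        with tarfile.open(fileobj=io.BytesIO(layer)) as tar:
            for member in tar.getmembers():
                parts = member.name.rsplit("/", 1)
                if parts[-1].startswith(".wh."):
                    # Whiteout: hide the file it names from the merged view.
                    fs.pop("/".join(parts[:-1] + [parts[-1][4:]]), None)
                else:
                    fs[member.name] = tar.extractfile(member).read()
    return fs

print(merged_view([layer1, layer2]))  # {} -- the secret is "gone"
print(b"hunter2" in layer1)           # True -- but still in the pushed layer
```

The running container's merged filesystem is empty, yet anyone who pulls the image still gets layer 1, secret included, which is exactly why squashing (or never committing the secret) matters.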

Jerry: So there's a lot of nuance here. I thought the timing was just fascinating. Based on my experience, I knew it was going to be fast, but I did not expect it to be that fast. Now, in terms of where most of the credentials were used,

that was also very interesting, and in some respects not what I expected. The place where the most credentials were used was Pastebin, which is interesting, because Pastebin also had a relatively long time to detection. And so I think it means that people are aggressively crawling it.

And then the second most common was a website, and that one does not surprise me, because crawling websites has been a thing for a very long time, and there are lots and lots of tools out there to help identify credentials. So obviously it's a little dependent on how you present them.

If you have a password.txt file, and that's available in a directory index on your web page, that's probably going to get hit a lot more.

Andrew: I’m, you know what?

Jerry: Yeah, I know. You're not even going to go there. Yep. I'll tell you, the trouble with your mom... there you go. Feel better?

Andrew: I feel like she’s going to tan your hide.

Jerry: See, there you go. You got the leather joke after all. Just like your mom.

Andrew: Out of nowhere.

Jerry: All right. Then GitHub was a distant third.

Andrew: which surprises me. I,

Jerry: That did surprise me too.

Andrew: Yeah. And I also know GitHub is a place that tons and tons of secrets get leaked, and GitLab and similar, because it's very easy for developers to accidentally leak secrets in their code up to these public repos. And then you can never get rid of them;

You’ve got to rotate them.

Jerry: So my view is it's more a reflection of the complexity of finding them, because in a repository you've got to search through a lot of crap, and I don't think the tools to search for them are as sophisticated as, let's say, a web crawler hitting Pastebin or a website.

Andrew: Which is fascinating: the incentive is on third parties finding the mistake, and they've got better tooling. Now, to be fair, GitHub, for instance, has plenty of tools you can buy, both homegrown at GitHub and from third parties, that in theory will help you detect a secret before you commit it. But they're not perfect, and not everybody has them.
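A bare-bones sketch of the kind of pre-commit secret check Andrew mentions. Real scanners (gitleaks, GitHub's own secret scanning, and so on) use far larger pattern sets plus entropy heuristics; the two patterns here are only illustrative:

```python
import re

# Two illustrative patterns: AWS access key IDs have a fixed AKIA prefix,
# and hardcoded assignments like password = "..." are a common giveaway.
PATTERNS = [
    ("aws-access-key-id", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    ("hardcoded-password", re.compile(r"(?i)\bpassword\s*[:=]\s*['\"][^'\"]+['\"]")),
]

def scan(text: str) -> list:
    """Return (rule, matched-text) pairs for anything that looks like a secret."""
    hits = []
    for rule, pattern in PATTERNS:
        for match in pattern.finditer(text):
            hits.append((rule, match.group(0)))
    return hits

code = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\npassword = "hunter2"\n'
for rule, value in scan(code):
    print(rule, value)  # flags both lines
```

Wired into a pre-commit hook, a check like this fails the commit before the secret ever reaches a public repo, which is much cheaper than rotating after a bounty hunter finds it.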

Jerry: Correct. And I also think, in my experience, it's much more of a common problem, from a likelihood-of-exposure perspective, for the average IT shop: you're much more likely to see your keys leak through GitHub than through people posting them on a website or on Pastebin.

But knowing that if they do end up on Pastebin, somebody is going to find them is, I think, important. In my experience, it's Docker Hub and the code repositories, like PyPI and npm and GitLab and GitHub. That's where it happens, right? That's where we leak them.

It's interesting that in this test they tried out all the different channels to see which ones were more or less likely to get hit. But in my experience, GitHub and Docker Hub and the like are the places you really have to focus on and worry about, because that's where they're leaking.

Andrew: Yeah. It makes sense. It’s a fascinating study.

Jerry: Yeah. And it... sorry, go ahead.

Andrew: I would love for other people to replicate it and see if they get similar findings.

Jerry: Yes. And this is one of those things where, again, the tooling is not deterministic; there's no deterministic way to tell whether or not your code has a password in it. There are tools, like you said, that will help identify them. To me, it's important to create what I would call a three-legs-of-the-stool approach.

One leg is making sure that you have those scanning tools available. Another is making sure that you have the tools available to store credentials securely, like having HashiCorp Vault or something like that available. And the third leg of the stool is making sure that the developers know those tools exist, know how to use them,

and know that's how they're expected to actually use them. Again, it's not perfect. It's not a firewall. You're still reliant on people, who make mistakes.

Andrew: Two questions. First of all, that three legged stool, would that be a milking stool?

Jerry: Yes.

Andrew: Second, less a question, more a comment: I would also try to teach your teams, hey, try to develop this software with the idea that we may have to rotate this secret at some point.

Jerry: Oh, great point. Yes.

Andrew: and try not to back yourself into a corner that makes it very difficult to rotate.

Jerry: Yeah. I'll go one step further and say that not only should you do that, but you should at the same time implement a strategy where those credentials are automatically rotated on some periodic basis. Whether it's a month, a quarter, every six months, a year, it doesn't really matter. Having that rotation automated gives you the ability, in the worst-case scenario where somebody calls you up and says, hey, we just found our key on GitHub, to go exercise that automation without having to create it on the spot or incur downtime.

Otherwise, the worst case is you're stuck in this hellacious situation of: I've got to rotate the keys, but the only guy who knows how this application works is on a cruise right now, and if we rotate it, we know it's going to go down. So you end up in a really bad spot.

And I’ve seen that happen so many times.
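The scheduled-rotation policy Jerry describes reduces to a small inventory check: record when each credential was last rotated and flag anything past its maximum age, so the "rotate now" path gets exercised routinely instead of only during an incident. The 90-day window and key names are arbitrary examples:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # example policy: rotate quarterly

def keys_due_for_rotation(keys: dict, now=None) -> list:
    """keys maps key-id -> datetime (UTC) of the last rotation."""
    now = now or datetime.now(timezone.utc)
    return sorted(k for k, rotated in keys.items() if now - rotated > MAX_AGE)

now = datetime(2024, 8, 1, tzinfo=timezone.utc)
inventory = {
    "billing-api": datetime(2024, 1, 5, tzinfo=timezone.utc),   # stale
    "deploy-bot":  datetime(2024, 7, 20, tzinfo=timezone.utc),  # fresh
}
print(keys_due_for_rotation(inventory, now))  # ['billing-api']
```

The point is not the date math; it is that a list like this, fed into the same automation you would use in an emergency, proves the rotation machinery works before you need it at 2 a.m.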

Andrew: And then the CISO ends up foaming at the mouth like a mad cow.

Jerry: Yes, that's right. I cannot wait for this to be over. All right. The last story, mercifully, is also from theregister.com, and the title is "Microsoft patches scary wormable hijack-my-box-via-IPv6 security bug and others." It's been a while since we've had one that feels like this. The issue here is that Microsoft just released a patch, as part of its Patch Tuesday, for a light-touch, pre-authentication remote code execution exploit over the network, but only over IPv6.

Andrew: which to me is holy crap, big deal. That’s really scary.

Jerry: incredible.

Andrew: And, I don't know, I feel like this hasn't gotten a ton of attention yet. Maybe because there wasn't a website and a mascot and a theme song and a catchy name.

Jerry: Yes. And

Andrew: But if you've got IPv6 running on pretty much any modern version of Windows: zero-click RCE exploit over the network, have a nice day. That's scary. That's a big deal.

Jerry: The better part is that it is IPv6. Now, I guess on the downside, it's IPv6, and IPv6 typically isn't shielded by things like NAT-based firewalls, so quite often you have a line of sight from the internet straight to your device, which is a problem. Obviously not always the case. On the other hand, it's not widely adopted.

Andrew: But a lot of modern Windows systems are turning it on by default. In fact, I would wager a lot of people have IPv6 turned on and don't even know it.

Jerry: Very true.

Andrew: Now, you've got to have all the interim networking equipment also supporting IPv6 for that to be a problem. But it could be.
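Checking whether a host even has a working IPv6 stack, which, as Andrew notes, many people have never done, takes only a few lines. This shows only that the local stack is present, not that the surrounding network actually routes IPv6:

```python
import socket

def ipv6_stack_present() -> bool:
    """True if the OS supports IPv6 and we can open an IPv6 socket."""
    if not socket.has_ipv6:
        return False
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.close()
        return True
    except OSError:
        return False

print(ipv6_stack_present())
```

If this prints True on a box you assumed was IPv4-only, that box is in scope for IPv6-only bugs like the one discussed here.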

Jerry: The researcher who identified this has not released any exploit code, or in fact any details other than that it exists. But I would say, now that a patch exists, I think it's fair to say every freaking security researcher out there right now is trying to reverse those patches to figure out exactly what changed, in hopes of finding out what the

problem was, because they want to create blogware and make a name for it, I'm sure. This is a huge deal. I think it's a four-alarm fire; you've got to get this one patched, like, yesterday.

Andrew: Yeah, it's been a while since we've seen something like this. Like you said at the top of the story: wormable, zero-click RCE, just being on the network with IPv6 is all it takes. And I think everything past Windows Server 2008 is vulnerable. Obviously patches are out, but it's gnarly. It's a big deal.

Jerry: As you would say, get ye to the patchery.

Andrew: Get ye to the patchery. I’ve not used that lately much. I need to get back to that. Fresh patches available to you at the patchery.

Jerry: All right. I think I think we’ll cut it off there and then ride the rest home.

Andrew: Go do some grazing in the meadow. As you can probably imagine, this is not our first rodeo.

Jerry: Jesus Christ. Where did I go wrong? Anyway, I sincerely apologize, but I also find it weird.

Andrew: I don’t apologize in the least.

Jerry: We’ll, I’m sure there’ll be more.

Andrew: Look man, this is a tough job. You gotta add a little lightness to it. It can drain your soul if you’re not careful.

Jerry: Absolutely. Now, I was...

Andrew: But once again, I feel bad for the cow and the calf. That’s terrible. That’s, I don’t wish that on anyone.

Jerry: All right. Just a reminder that you can find all of our podcast episodes on our website at www.defensivesecurity.org, including jokes like that and the infamous llama jokes from way, way back. You can find Mr. Kalat on X at lerg.

Andrew: That is correct.

Jerry: Wonderful, beautiful social media site, InfoSec. Exchange at L E R G there as well. And I am at Jerry on InfoSec. Exchange. And by the way, if you like this show, give us a good rating on your favorite podcast platform. If you don’t like this show, keep it to yourself.

Andrew: Or still give us a good rating. That's fine.

Jerry: Or just, yeah, that works.

Andrew: That's allowed.

Jerry: That works too.

We don’t discriminate.

Andrew: Hopefully you find it useful. That's all we can ask; that's our hope.

Jerry: That’s right.

Andrew: It's just us riffing about craziness for an hour, and hopefully you pick up a thing or two that you can take, use, and be happy.

Jerry: All right. Have a lovely week ahead and weekend. And we’ll talk to you again very soon.

Andrew: See you later guys. Bye bye.

Defensive Security Podcast Episode 275

Links:

Transcript:

Jerry: Today is Wednesday, August 7th, 2024. And this is episode 275 of the Defensive Security Podcast. My name is Jerry Bell and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. How are you? Good, sir.

Jerry: I am amazing. It is blistering hot at the beach, but it’s awesome.

Andrew: recording from your southern compound.

Jerry: I am.

Andrew: Nice.

Jerry: Yeah, Bell Estate South.

Andrew: And Debby was not an issue.

Jerry: Debby's not here. We got probably 45 minutes' worth of rain.

Andrew: Yeah, it seems, at this point in real time, to have stalled out over South Carolina.

Jerry: Yeah, it looks like several feet of rain hitting Savannah. That is nuts. But no, it was not a big issue here. I was pretty worried. I packed up all my Milwaukee batteries with lights and whatnot in preparation for the worst, and got extra tranquilizers for my dog, who hates storms.

But no, it’s been absolutely amazing here.

Andrew: So you took the tranqs instead? Is that what I'm hearing?

Jerry: Absolutely. You gotta sleep somehow.

Andrew: That's fair. I'm glad it was a non-event, at least for your little neck of the woods.

Jerry: Yeah, it was nice. You could actually see some of the storm clouds off in the distance, and that is the best way to watch a hurricane: when it's far away.

Andrew: That's true. Unlike a few I've been through, stuck on islands.

Jerry: Yeah, that's right. Since I've been here, I have been in the building for two hurricanes, the building's been hit by three tornadoes, and then there was also an unsuccessful base jump.

Andrew: So we’re saying you are cursed. Is that what we’re saying?

Jerry: I am the human equivalent of a plastic flamingo, which attracts tornadoes, for those who don't know. Anyway.

Yeah.

Andrew: So, after that meteorological update,

Jerry: Yeah, just a reminder that the thoughts and opinions we express on the show are ours and do not represent those of our employers, past, present, or future.

Andrew: maybe even our

Jerry: Or our pets. my pet is licking me right now and she says, nope, it’s not her opinion.

Andrew: fair,

Jerry: Okay. I would say this is going to be a CrowdStrike-heavy episode.

Andrew: three weeks in a row.

Jerry: Yeah, it continues to get more and more interesting. Obviously the main event itself is largely behind us and now we are in the lawyer up phase of the party.

Andrew: the blamestorming

Jerry: The blamestorming has indeed begun. The first topic we have to talk about here is the actual formal, full root cause analysis, which was released yesterday by CrowdStrike. It is a 12-page document with lots of marketing fluff in it.

And, I would say, only a little bit of substance. I don't think there's anything remarkably telling or revolutionary in the document, but it does indicate technically what went wrong, and it gives some indications of the potential improvements for their quality assurance, which I think is where a lot of this went wrong.

So I'm not going to go through the details in uber-technical specificity, but the net is that this channel file update is for an inter-process communication agent, for lack of a better term. That agent expects configuration files that have 21 parameters, but through some unfortunate bad planning, their test harness was actually marking the 21st as a catch-all, as an asterisk; it was effectively being marked as not used. And so in this particular update, they actually started using it, and that ended up causing their parser to perform what ultimately was an out-of-bounds read.

Because that parser wasn’t set up to actually read it. And so when that read attempted to happen in kernel space, it tried to access memory. It wasn’t allowed to access, wasn’t allocated. And that caused the blue screen. And because the same thing happened every time it booted up.

You just had this endless boot loop until that particular file got removed. I think the more substantive issue, and that’s the kind of thing that can happen,
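The failure mode Jerry describes can be sketched in miniature. This is a hypothetical illustration in Python, not CrowdStrike’s actual code: a parser that assumes a 21-field record reads past the end when only 20 fields are wired up, while a bounds-checked version degrades gracefully instead of crashing.

```python
# Hypothetical sketch of the failure mode: a parser that assumes a
# 21-field record performs an out-of-bounds read when given 20 fields.
# In user space this raises an error; the kernel-space equivalent of
# the invalid memory access crashes the whole OS (blue screen).

EXPECTED_FIELDS = 21

def unsafe_parse(fields):
    # Blindly trusts the layout: indexes field 21 without checking.
    return fields[EXPECTED_FIELDS - 1]  # IndexError if only 20 fields arrive

def safe_parse(fields):
    # Bounds-check before reading; reject the malformed record instead.
    if len(fields) < EXPECTED_FIELDS:
        return None
    return fields[EXPECTED_FIELDS - 1]

update = [f"param{i}" for i in range(20)]  # only 20 fields delivered

assert safe_parse(update) is None  # safe path refuses the record
try:
    unsafe_parse(update)
except IndexError:
    print("out-of-bounds read")  # the user-space analogue of the BSOD
```

The point of the sketch is only that a one-line bounds check separates "bad update is rejected" from "machine does not boot."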

Andrew: So let me restate that to make sure I understand. The application was expecting a file that had 21 fields in it, and it got a file with 20.

Jerry: Yes.

Andrew: And when it went to read that 21st field, it wasn’t allowed to, and the way that systems protect themselves is to do a kernel panic and shut down if you’re trying to read something you’re not allowed to.

Jerry: Yes.

Andrew: If you’re in kernel space,

Jerry: Windows basically says something is horrifically wrong. This should not happen.

Andrew: If I went by that criteria, I’d shut down every day.

Jerry: And so if that were to happen in user space, the application that performed that read would crash. But when it happens in kernel space, Windows attempts to protect itself and it blue screens.

And so the challenge is that the testing harness was built assuming that 21st parameter was always set up as a catch-all, and so it was effectively being ignored.

And I think there were really two issues here. One was that their testing harness obviously wasn’t properly designed. But they also did not have staged deployments. They have a process where, once an update goes through that test harness and passes, it goes out far and wide.

There is no staged deployment ring concept like you have in, let’s say, Microsoft Windows updates. And because of that, it blasted out. Everybody implicitly trusted CrowdStrike updates, those got applied pretty much as fast as they were delivered, and the rest is now history.
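The deployment-ring concept being referred to can be sketched roughly like this. It’s a hypothetical illustration: the ring sizes, host names, and health check are made up, not any vendor’s real pipeline.

```python
# Hypothetical sketch of a ring-based (canary) rollout with a health gate
# between rings. Ring fractions are illustrative.

RINGS = [0.01, 0.10, 0.50, 1.00]  # fraction of the fleet per ring

def rollout(hosts, deploy, healthy):
    """Deploy ring by ring; halt if any deployed host becomes unhealthy."""
    done = 0
    for ring in RINGS:
        target = int(len(hosts) * ring)
        for h in hosts[done:target]:
            deploy(h)
        done = target
        if not all(healthy(h) for h in hosts[:done]):
            return ("halted", done)  # stop before the blast radius grows
    return ("complete", done)

# Simulate an update that crashes every host it touches: the health gate
# stops it after the 1% canary ring instead of the whole fleet.
hosts = [f"host{i:03d}" for i in range(100)]
crashed = set()
status, affected = rollout(hosts, crashed.add, lambda h: h not in crashed)
print(status, affected)  # → halted 1 (only the canary ring was touched)
```

The contrast with the incident is exactly the `affected` count: with no rings, every host is in the first batch.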

Andrew: I think it’s a very complicated series of events that led to this. And just reacting to a lot of the zeitgeist in the social media world around this, there’s a lot of angry finger-pointing, some of which is probably well warranted, but it’s interesting to see how the error chain came together.

And going back to another area I know a little bit about, aviation incident investigation and things like when space shuttles explode or fall apart, I’ve read a lot of books on those. Anyway, where I’m going with this is that there’s very rarely one root cause. Usually it’s a series of events, an error chain, that led to the situation. If any one of those events had been slightly different, this would have been caught; all the Swiss cheese holes lined up just right for this to happen. I’m not absolving or apologizing for it. It’s just interesting how complex these situations truly are, compared to how a lot of people knee-jerk their opinions, usually based on their own bias around what they care about.

Jerry: When I was reading it, it reminded me of the show, I think you’ve probably watched it too, called Engineering Disasters. And the learning in each one of those episodes is that a sequence of disconnected things all lined up in just the right way for that disaster to happen.

Andrew: right.

Jerry: And I think that is definitely what happened here.

Andrew: It’s awful for everybody involved, but there’s a part of me that finds these things fascinating to watch play out.

Jerry: For me, what is most troublesome is that this is not unprecedented, right? Obviously the number of systems that were impacted is unprecedented, but that’s probably more a function of how interconnected and dependent we are on computers than at any other point in time.

But this sort of thing has happened in the past, right? This has happened with Symantec and McAfee and Microsoft, probably about five or six different times, and several others that I’m probably missing. But one of the things that distinguishes this from those is that those were much less impactful, because they did staged rollouts.

And so when it happened, it was devastating to the people who were among the first, the canaries who had the problem, but this is a different thing. I think the fundamental coding and architecture errors are hard to foresee.

They’re easy to see in hindsight, right? This is like the signal-and-the-noise thing. The failure is easy to identify after the fact, because it’s obvious. Duh, it’s so obvious that this was going to happen. But it’s only obvious after the fact.

Andrew: Certainly.

Jerry: weren’t looking at it beforehand and saying, Oh, we’re just going to accept the risk. They just, it wasn’t in their mind. And so, that part I find less, obviously it is the thing that caused it, but what I find most problematic is the fact that they hadn’t adopted what I would call the industry standard practice of the tiered rollouts.

Andrew: I’m sure that was an intentional decision. I obviously don’t know for sure; I have no idea about the decisions that go on at CrowdStrike, I’ve never worked there. However, in their mind, I would imagine there was a value in not doing staged rollouts.

Jerry: sure,

Andrew: So dig, if you will, the picture: they are using these sensors not just to stop things, but also to gather.

These sensors are reporting back to CrowdStrike all the time about potentially malicious or confirmed malicious activity, and then they use that data. This update was basically a detection they were trying to get out to the world about something they thought was malicious. Their go-to-market strategy is to stop breaches, specifically around ransomware. Their business model is really geared toward ransomware; that’s my opinion, I’m not speaking for them, it’s just my read of it. And they care greatly about stopping ransomware in Windows environments. They know that ransomware spreads quickly and rapidly, and I’m sure in their mind the value of getting these updates out as quickly as possible, to as many sensors as possible, is worth it. So let’s say they know about a new attack type, and they pick it up at one customer, and they start staging it: they stage it to 10 percent of customers, then 30 percent, then 50 percent.

Meanwhile, you’re at a customer who hasn’t gotten staged yet and you get hit by it. Are you happy with their staged deployment at that point?

No. So I’m just playing devil’s advocate a little bit: it probably was an intentional decision not to do staged rollouts. It’s not like they were simply unaware of the concept. I think they felt there was value in updating as rapidly as they possibly could, to as many sensors as they could.

Jerry: Likely. And I would say the proof is in the pudding, right? That is exactly what they were doing.

Andrew: In hindsight, it’s a bad decision, but it might not have been a bad decision at the time.

There were inputs as to why they made that call, I guess is what I’m saying.

Jerry: I’m sure there is. And by the way, I’m not advocating for something like what Microsoft does, where the rollout happens over the course of days or weeks. I’m talking more about hours. That, potentially, in my mind at least, is a happy medium, where you could conceivably have canaries that go an hour before the rest of the world, right?

So yeah, you miss an hour. And I’m sure that potentially exposes some number of customers, and maybe this is such a rare thing that a one-hour difference is something their customers feel is important enough to accept the risk on. I don’t know. It’s a fair question.

Andrew: They are saying in their root cause analysis that one of the things they will do as part of the mitigation is provide customers more control over the deployment of rapid-release content updates. To me, that means customers may get to choose when and where rapid-release content updates are deployed. They want to go down that path of: hey, if you want to be as safe as possible from malware, minimizing your concern about operational risk, deploy as quickly as you can; if you are concerned about operational risk, here are these tools. But the trade-off is you might be more exposed to malware risk.

Jerry: Correct. And from a customer perspective, it does give you the opportunity to do your own soak testing. But again, depending on your risk tolerance, and the size of your environment and exposure and whatnot, you would probably also do well to do your own testing

before you push it out far and wide. But again, that takes resources. It takes time, it takes people, it takes labor. And if it drops after work, right before a long weekend, are you going to have the appetite to do that?

Andrew: and how consistent is your fleet?

Are you on the same patch level? Are you on the same build? Are you on the same hardware? There’s a whole lot of things that come into play if you’re only testing on a couple of examples.

Jerry: Yeah.

Andrew: yeah, I don’t disagree. I know I’m being a little contrarian on this. It’s more a matter of, again, people are saying things like, if you only just, and I’m like think that through a little, and you’re going to find out that there’s still holes in your planning and that you may have to accept some risk for the benefit of the tool. And have a way to solve that for you. There’s no perfect answer.

It’s you’ve got to figure out that trade off for your own organization.

Jerry: Yeah, I think that’s fair. The other thing, and again, I don’t know a whole lot about their release pipeline process, but I find it fascinating that they clearly don’t, as part of their release process, verify that an update doesn’t blue screen a system. There’s a very different distinction between this channel file update causing CrowdStrike to crash, or to fail to detect things and then require a subsequent update to fix it, versus the agent creating an unbootable system. And it seems like there’s a very utilitarian, very specific, and I think deterministic set of tests they could put inline in their release pipeline: you deploy it onto X number of different Windows systems. Does it crash? If it doesn’t, then you release it, with the assumption that if it’s not a catastrophic problem, any other problem can be fixed with a subsequent update.

Andrew: Yeah. Or, if it’s not crashing on the largest deployed population, for example, any crashes you do have will be limited to a relatively small percentage of systems.
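The deterministic pre-release gate Jerry describes, push the update to a handful of representative machines and simply check that they stay up, could look something like this minimal sketch. All names, builds, and the simulated boot check are hypothetical.

```python
# Hypothetical smoke-test gate: apply the update to a small set of
# representative test machines and block the release if any fails to
# boot cleanly afterward. The "boot check" here is simulated.

def smoke_test_gate(test_machines, apply_update, boots_ok):
    """Return True only if every test machine survives the update."""
    for machine in test_machines:
        apply_update(machine)
        if not boots_ok(machine):
            return False  # any crash blocks the release outright
    return True

# Simulated fleet: pretend the update crashes machines on build "17H2".
fleet = [{"name": "vm1", "build": "22H2"}, {"name": "vm2", "build": "17H2"}]
applied = []
gate = smoke_test_gate(fleet, applied.append, lambda m: m["build"] != "17H2")
print("release" if gate else "blocked")  # → blocked
```

This only catches the catastrophic "does not boot" class of defect, which is exactly the point being made: that class is cheap and deterministic to test for.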

Jerry: All right. So anyway, it’s an interesting read. It’s 12 or 13 pages long, with a lot more technical detail about the actual nature of the crash, so I invite you to read it if you’re interested in that sort of thing. Moving on to the next story, which is related. I’ll take a step back and say, for those of you who aren’t aware, there’s a bit of a war of words between Delta Airlines, CrowdStrike, and Microsoft.

So Delta CEO Ed Bastian has now very famously said that this outage cost Delta Airlines about half a billion US dollars in losses, and that he feels compelled, on behalf of his customers and shareholders and employees, to try to recoup some of those losses. And so he’s signaled that he’s going to be suing

both CrowdStrike and Microsoft. And we’ve talked about this a little bit in previous shows, but now CrowdStrike and Microsoft have separately responded to those legal threats. I don’t think they’ve actually materialized as filed lawsuits yet, but the first one is from the register and the title is CrowdStrike Unhappy About Delta’s Litigation Threat Claims Airline Refused Free On Site Help.

So this particular story is about an open letter that one of CrowdStrike’s lawyers sent to the legal counsel that Delta retained. And I think that legal counsel is a fairly high-profile legal team that was involved in prior high-profile cases. I’ll summarize the letter: it is basically saying, you, Delta, should be cautious about what you ask for.

And by the way, if you are going to proceed with this, we would expect you to retain a certain set of information that we think will be useful in this litigation. And there was one quote that I think summarizes the whole thing very well: “Should Delta pursue this path,” meaning the lawsuit, “Delta will have to explain to the public, its shareholders, and ultimately a jury, why CrowdStrike took responsibility for its actions swiftly, transparently, constructively, while Delta did not.”

Now, there is an interesting adjunct to this: both Microsoft and CrowdStrike had offered assistance to Delta. And in the industry, there’s lots of armchair quarterbacking about how silly that is, because what would they really be able to do, right? In this particular instance, you need hands and feet going and sitting in front of computers to do the thing, to get the systems back up.

And so what were either CrowdStrike or Microsoft really going to do? I think that’s a fair characterization, but, although it’s not highlighted in this particular letter, Delta basically said to both Microsoft and CrowdStrike, no, we’re good.

We don’t need your help. And so now both CrowdStrike and Microsoft are throwing that back in their face, saying, if it was so horrible, why didn’t you accept our offer of help? But I think that does raise the question: what help could they really have been? And I actually don’t have an answer for that.

Andrew: It’s interesting, because these lawyers for CrowdStrike and Microsoft also go into this whole, and this is clearly public posturing ahead of the

Jerry: Oh yeah.

Andrew: lawsuits, but they’re saying, hey, you, Delta, clearly were different in the way you approached this than your competitors. So what’s up with your organization that makes you different?

Why didn’t you want our help? What were you hiding? And they’re implying that dirty laundry would come out on Delta’s side, that they’ve got skeletons in the closet that Microsoft and CrowdStrike are aware of and are brandishing: hey, if you want to drag us into court, you’re not going to come out of this unscathed. I’m sure this is going to come up great during negotiations for renewal on CrowdStrike with Delta. I’m sure that sales team is really happy right now.

Jerry: Oh yes.

Andrew: this is wild. I also, and we’ve talked about this before, I don’t know what sort of legal leg Delta really has to stand on.

I guess this is what the court system will explore, but yeah, it’s an ugly one. And it was pretty aggressive on the part of both Microsoft’s and CrowdStrike’s lawyers to be like, Delta, yeah, we screwed up, but you screwed up a lot too.

Jerry: So, the Microsoft response. There’s a similar article from The Verge titled “Microsoft says Delta ignored Satya Nadella’s offer of CrowdStrike help.” Microsoft similarly wrote a letter to Delta’s lawyers, but this one actually shines a light on a line of questioning that I hadn’t thought of before. It does highlight, again, that Microsoft obviously did reach out and offer help, and they were told that no, Delta had it under control.

And what Microsoft, and the reporter at The Verge, seem to be insinuating is that the problem for Delta was not actually with Windows. Obviously the problem started with CrowdStrike causing their Windows systems to blue screen. But what they seem to be asserting is that when that happened, it created downstream problems in Delta’s legacy infrastructure, which is not based on Windows, and that then had to get fixed.

And that is what took all the time. Now, that is an interesting point, although I will say it stands in a bit of contrast to some of the images we saw of contractors standing on ladders in airports rebooting display terminals up to a week after the outage. It’s an interesting point: what Microsoft is asserting here is that Delta has chosen not to modernize its IT.

And when this incident happened, they were able to get their Windows systems back up and running relatively fast, as evidenced by their refusal of help. The problem that took all the time was with their old, aging IT infrastructure.

Now, it’s an interesting thing, because legally it almost doesn’t matter: even if that’s true, and the court finds in favor of Delta, the fact that Delta didn’t invest doesn’t necessarily absolve either Microsoft or CrowdStrike of anything.

Andrew: Yeah. And Southwest is still running on the Commodore 64 and they were fine.

Jerry: true.

Andrew: counterpoint,

Jerry: That’s very true.

Andrew: It’s almost, do you want the court to decide what constitutes due diligence and what is an appropriate level of IT modernization? And you’ve heard this point a couple of times: you don’t think this is ever going to go to trial?

Jerry: No, they’re trying this in the court of public opinion. This is never going to go to trial; it’s going to get settled out of court. None of these parties, well, maybe Microsoft, I don’t know, they may be okay, but certainly CrowdStrike and Delta are not going to want to go through the process of discovery, and have all of this shit about their IT programs and development programs laid bare for the public to see. So I think it’s going to get settled out of court. Right now this is about PR and damage control. And we don’t have a story about it, but CrowdStrike’s shareholders have filed a lawsuit against CrowdStrike for loss of shareholder value.

Andrew: Indeed.

It’s interesting. So I have one other topic on the CrowdStrike thing before I move on, if you’ll indulge me.

Jerry: And I think this is the LinkedIn post, right? I’m going to put these links in the show notes for you.

Andrew: Okay. So, Alex Stamos, who’s well known and has been at all sorts of interesting organizations, recently joined SentinelOne. Now, SentinelOne is a direct competitor of CrowdStrike, so keep that in mind. And full disclosure: I happen to use SentinelOne, so I know it better, but I don’t have any strong opinions one way or the other; I just want to be transparent about that. Anyway, about a week ago, Alex put a post out on LinkedIn talking about this outage. So keep in mind it’s coming from a competitor. He’s basically alleging that it is false to say this could have happened to anybody: it’s clear that CrowdStrike made intentional architectural, engineering, and process decisions that led to this global catastrophe. He has a bunch of points, but there’s one specifically that I want to drill into here, which is point 4 in his post.

And this addresses: why are people running at the kernel level? Why do these tools have kernel access? A lot of people have said, hey, you shouldn’t do that, you don’t need to do that. You don’t do that on Mac, you don’t do that on other systems, so why are you doing it on Windows? Point four, quoting: “Additionally, there are models for architecting EDR with minimal kernel access,” and the team at SentinelOne is “willing to work with Microsoft on exploring these models, assuming Microsoft holds their own products to the same standards,” end quote. I think this is a very key point to this entire discussion and debate, because this goes back to 2009, when the EU, through an antitrust settlement with Microsoft, who was competing with other security tooling vendors, required Microsoft to give those other vendors the exact same access to the kernel that Microsoft had, for the efficacy of their security tooling. So what Microsoft is saying is, look, we didn’t have a choice: the government, the EU, made us give folks access to the kernel. And what I’m reading here is SentinelOne saying: we also have access to the kernel because we have to compete with Microsoft’s tooling. So if Microsoft is willing to not be in the kernel with its security tooling, we’ll do the same. And that calls attention, I think, to the fact that there’s obviously some benefit to running your security tooling at the kernel level; otherwise they wouldn’t be worried about competing with Microsoft, who has kernel access.

Jerry: When I read that, the whole thing is interesting because it’s coming from a competitor of CrowdStrike. And so I come back to my new favorite legal term, which is corporate puffery.

Andrew: indeed.

Jerry: And I see corporate puffery. Obviously there are some well-founded points in here, but I will say that the first point and that fourth point stand in contrast to each other. Because on the one hand, they seem to be saying nobody else is doing it that way, CrowdStrike is standing alone in how they do this. And on the other hand: by the way, we look forward to working with Microsoft to figure out how we can also not do it.

Andrew: Right?

Jerry: I don’t know. It’s an interesting point. There is an associated video where Alex talks about his thoughts in more depth. I think it fundamentally isn’t wrong: if there is an alternative way to do this that is safer, outside of the kernel, and we don’t lose visibility, we should be doing that.

Andrew: Yeah. Speed, severity of impact, capabilities of the agent, there are multiple things there. And I just think there’s more at play here than “we chose to do it because it was easier.” I think there’s a competition issue here with Microsoft.

Jerry: Yeah. There was one other point in here that I wanted to talk about. I picked up on his second point, which is, quote: “It is dangerous to claim that any security product could have caused this global outage, because you’re telling CEOs, CIOs, and boards around the world that it is highly risky to deploy advanced security products.”

“In the long run, that makes the world harder to defend and less secure.” In any event, I don’t disagree conceptually. We shouldn’t be doing things that make our job harder, but we also shouldn’t be sugarcoating things and telling our senior leadership that there is no risk.

Andrew: The other thing I find interesting about this point that is not ever included in these conversations is the cost of the software.

So I’m making up numbers here, don’t quote me, but I’m in the right area. Let’s say, for the average company, the average CrowdStrike agent costs, again, I’m making up numbers, but just work with me, 50 bucks a head, right? It’s probably more than that, but let’s just say it’s 50 bucks a unit. What if I offered you a thousand bucks a unit, but I promise you it’ll never crash? Is it worth it to you? I’ve got to apply so many more resources and so much more time and energy to do that extra checking, and I’m going to develop slower, and I’m going to be behind everybody else, but it’s more stable. Is it worth it to you? There has to be that trade-off, that question of speed versus cost versus stability.

Jerry: So I,

Andrew: And we never

Jerry: Yeah, I think it just becomes unaffordable, right? It’s not a question of do I want it, or is it necessary. At that price point, you’re probably spending two or three times more than your overall budget just on a single tool, not even including what it would require to run the tool.

So I think it would be awesome to have aviation, life-safety-grade stuff, but it’s really not practical. But I guess my,

Andrew: That’s my point. Do we ever acknowledge that? Hey, look, there are folks who develop things that run aircraft, and that costs many multiples of standard software development because of the life-safety grade, I used a good term earlier that I’ve blanked on, that isn’t affordable for the average generic system. But is it then fair to assume that the average system would have the same stability as that highly refined, or however you want to say it, system?

Or are we just deluding ourselves?

Jerry: No, I strongly suspect most organizations would not say that it’s worth it to them, right? They would trade off the chance of some amount of downtime to, A, not spend that exorbitant amount of money, and B, still have generally a reasonable amount of protection in place, which I think is the tacit trade-off that we’ve made.

The thing that is concerning to me is that this particular bullet seems to be asserting that we shouldn’t be upfront about the potential downside, right? And again, I didn’t talk to Alex, I don’t know Alex, but I think what he’s trying to say is: we have a hard enough time as it is.

We have a hard enough time allocating money for security programs as it is. And if we go and tell our senior leadership team that, hey, not only do we have to spend all this money, but by the way, there’s a non-trivial chance that it’s going to cause a global outage for us, because it happens to everybody…

I think that’s what he’s railing against. But on the other side, I think we tell our boards, at our own detriment, that there’s no risk in running CrowdStrike or SentinelOne or any other product.

Andrew: let’s take it back to password managers, right? When somebody has a problem like LastPass did, for instance, We all came out and said, look, don’t throw password managers out.

Jerry: right?

Andrew: Like you’re still better off running a password manager statistically and everything else.

You’re still better off using unique, complex passwords on every site, managed through a password manager. It’s still better, even with these sorts of incidents. So I feel this is a similar conversation. Yeah, there’s risk, but there’s no zero-risk option. You can’t be in business and have zero risk. I think Alex knows better, frankly, than to imply that there’s a zero-risk option here. There’s always some trade-off.

Jerry: Yeah, I think he’s doing two things. One, he works for a vendor, and so he needs to make sure that companies keep buying XDR software. But more broadly, I think he is hoping that we don’t shoot ourselves in the foot by talking our senior leadership teams out of allocating money for these important products.

Andrew: Fair. And by the way, I don’t know Alex; I think I’ve met him once or twice, and I certainly am not trying to claim I know what he’s thinking. I just think it’s somewhat disingenuous to say that there’s zero risk in running software like this.

Jerry: All right. So moving on to more exciting things.

Andrew: From SecurityWeek, the title here is “Thousands of Devices Wiped Remotely Following Mobile Guardian Hack.” Mobile Guardian is an MDM, mobile device management, solution that focuses on the education sector.

Jerry: They had an incident on August 4th that resulted in some number of the iPads managed through their platform being deregistered, and the process of being deregistered caused the devices to be wiped. There are actually some fairly sad-looking pictures of kids in front of stacks of broken iPads, because they’ve been deregistered and wiped.

This particular article doesn’t go into it, but it seems like this company, Mobile Guardian, has actually had a run of security-related issues leading up to this one. This one apparently isn’t related to the prior ones, but it does give me

concerns about the health of their offering. Right now, in fact, they’ve got their offering deactivated, and any of the systems managed by them are currently not functional until they fix whatever issues they have. I think they took the proactive step of disabling their central management infrastructure.

So that’s reduced the functionality of those devices pretty significantly.

Andrew: Yeah, it’s an interesting attack vector. Somebody gets into your centralized management software and can just wipe all your devices. That’s tough to recover from, and not one I’d thought about before, to be honest, from an MDM perspective.

Jerry: Yeah. IT tools that give you central management of your infrastructure have long concerned me. In this particular case, it’s outsourced to a third-party company, but we have lots of different flavors, whether that’s Ansible or Salt or Active Directory or many others, that serve as a concentration point. It is a place an adversary can get into and, with one compromise, cause untold chaos for your organization.

And so it really needs to be well protected. In this instance, the country of Singapore was pretty significantly impacted, and they have actually decided to move off of Mobile Guardian. I’m going to guess they won’t be the last one.

Andrew: Yeah, this is an interesting corollary of, is it worth it to have the protection? They’ve decided, nope, it

Jerry: Yeah I think

Andrew: need

Jerry: My guess is they’ll move to something else, but

Andrew: I should say not even just the protection, but the manageability. Which, yeah, you’re right, they’re going to go to a competitor, I’m sure. But the capabilities offered by centralized manageability are what can be used against you, which is what you were saying.

Jerry: Yeah,

Andrew: But I don’t know how you’d manage a modern, large infrastructure without it. You’ve just got to

Jerry: I don’t think you can. But I also don’t think that many IT shops spend enough time thinking about how to protect those orchestration and management platforms, and I don’t think we necessarily do enough due diligence on companies like Mobile Guardian.

And again, this is the first time I’ve ever heard of them; I don’t know anything about them. They could be an absolutely fine company just having a bad run. I don’t really know. But to me it highlights an issue that I think is going to continue to grow in prominence.

So now on to the other topic I want to talk about, which is really focused on targeting IT people. The first story is from Bleeping Computer; the title is “Stack Exchange Abused to Spread Malicious PyPI Packages as Answers.” Again, it’s not a massive attack, but I thought the technique was super interesting and worth talking about.

There are a couple of different blockchain platforms involved, one of them called Raydium, and I forget the name of the other one. Raydium doesn’t have any official Python modules. So what the adversary did was create some malicious packages in the PyPI repository. Then they created a whole bunch of fake accounts on Stack Exchange, started answering questions, and basically talked up how to use these bogus packages they had uploaded. And it worked: they got almost 2,100 downloads.
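On the defensive side, one lightweight control is to vet unfamiliar packages before installing them, for example flagging packages that are brand new, have very few releases, or carry no project links. A minimal sketch operating on metadata shaped like PyPI’s JSON API response (the thresholds and example data are illustrative, not a real audit tool):

```python
# Hypothetical vetting heuristic for an unfamiliar package, operating on
# metadata of the shape returned by PyPI's JSON API (/pypi/<name>/json).
# Thresholds are arbitrary illustrations.
from datetime import datetime, timezone

def risk_flags(meta, now):
    """Return a list of reasons an unfamiliar package looks risky."""
    flags = []
    releases = meta.get("releases", {})
    if len(releases) <= 2:
        flags.append("very few releases")
    uploads = [datetime.fromisoformat(f["upload_time_iso_8601"])
               for files in releases.values() for f in files]
    if uploads and (now - min(uploads)).days < 30:
        flags.append("first upload less than 30 days ago")
    if not meta.get("info", {}).get("project_urls"):
        flags.append("no project URLs (repo, homepage)")
    return flags

# A package that appeared a week ago with one release and no links
# trips all three heuristics:
meta = {"info": {"project_urls": None},
        "releases": {"0.1": [{"upload_time_iso_8601": "2024-08-01T00:00:00+00:00"}]}}
print(risk_flags(meta, datetime(2024, 8, 8, tzinfo=timezone.utc)))
```

None of these signals is proof of malice on its own; the point is to force a human look before a brand-new package lands in a build.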

Andrew: That’s actually really what we’re up to with the show. It’s a long con: we start telling people to go install malware once we’ve gotten over 5,000 episodes out and people trust us. That’s what we’re doing.

Jerry: we’ll be there sometime around 2050.

Andrew: but

Jerry: amazing.

Andrew: They’re abusing the reputational trust that comes with a platform of valid, appropriate answers. It’s the easy button to go find an answer: oh yeah, I trust it. You get some malware. Have a nice day.

Jerry: Yeah. So this particular one was capturing credentials and other sensitive information out of crypto wallets, out of Telegram and other instant messaging software, and also information out of browsers. But what I thought was quite interesting was that it was focused on developers.

It was targeted at developers, and as, I guess, now a former CISO, this causes me a lot of concern, right? You have a population of developers who are... I would argue that what makes an effective developer these days is figuring out how to get efficient answers to your technical problems. You can’t hire developers who know everything, who are completely prescient and omniscient.

The average IT person is an effective IT person because they know where to go to get the answers to the problems they have, whether that’s programming, or system administration, or helpdesk, or anything like that. And this worries me. I think we have to be very cognizant of this trend going forward.

And I think in the near term, we at least have to be incorporating something about this into our security education for developers. But I think this is probably going to come up more broadly. And that, by the way, dovetails into the other way this manifests, or has manifested, and I’m sure there are lots more ways. This one also comes from Bleeping Computer, and the title is “Ransomware Gang Targets IT Workers with New SharpRhino Malware.”

And this is more of a watering hole slash

It’s watering hole, SEO, domain hijacking, where they are purporting to be Angry IP Scanner. They’re doing typosquatting or malicious ad buys, trying to divert you, the IT person, to a malicious website they control that looks like the legitimate source of Angry IP Scanner, for you to download.

And somebody downloading that in the context of being an IT or security person in a company, probably more often than not, has some sort of elevated privileges, because they’re probably trying to use it to do some important IT work.

Andrew: Indeed.
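One cheap defense against that look-alike-download trick is verifying a published checksum before running anything. A minimal sketch; the installer filename and the idea of a vendor-published hash in the comment are hypothetical illustrations:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_download(path: str, expected_sha256: str) -> bool:
    """Compare a file's digest to the hash published on the vendor's site."""
    return sha256_of(path).lower() == expected_sha256.lower()


# Hypothetical usage -- installer name and hash are made up for illustration:
# if not verify_download("ipscan-setup.exe", PUBLISHED_HASH):
#     raise SystemExit("Checksum mismatch -- do not run this installer")
```

This only helps if the hash comes from a channel the attacker doesn’t control, which is exactly what typosquatted sites try to defeat, so treat it as one layer, not a cure.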

Jerry: Between the two, I think we should expect an increased focus on targeting IT workers. I think we’ve seen IT people being targeted by more sophisticated adversaries for quite some time. What I think is becoming more novel is that this is really becoming a commodity-type

tactic, where you’ve got lower-tier ransomware actors and people trying to steal crypto wallets actually targeting IT people. So I think we’re almost certainly going to see more and more criminal elements using these sorts of tactics to try to infiltrate. We know that entry as a service, that initial entry point into an organization, is part of the industrialization of the attack chain, where you can, as a ransomware actor or as somebody trying to steal intellectual property, go and buy access.

So you’ve got bad actors who are trying to find ways of infiltrating companies. And this, I think, is going to become a much more prominent and effective way in the future, because we IT people hold ourselves in high regard in our ability to not fall for stuff.

Yeah.

Andrew: Believe me, it happens. And the bad guys can keep getting more sophisticated and keep trying different tactics, and they start to hit on things that work for a while, or catch certain people at a certain time, under a certain set of stressors or circumstances. Going back to the malicious PyPI packages, I was curious: I do wonder if any static code analysis tools would have picked that up. I don’t know. Maybe I’ll play with that.

Jerry: This is

Andrew: Or that sort of

Jerry: The thing is, they were not detected by the normal anti-malware, but I don’t know if they would have been detected by your Aquas and whatnot.

Andrew: And is it worth having a process of scanning any third-party code that you bring in? I don’t know. I’m just musing out loud here. I honestly don’t know. And I think for a while, wasn’t Google... I think a couple of people were trying to do a safe package repository, where it was vetted and secured.

And I don’t know how well that’s been adopted, but it might be another idea. Don’t just get random stuff, like only go to the safe

Jerry: There’s,

Andrew: But that may slow you

Jerry: yeah.

Andrew: may not be able to answer all the questions you need that way. And, you may not find what you need there.
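The vetting Andrew is musing about can at least be started cheaply. The PyPI JSON API (`https://pypi.org/pypi/<name>/json`) is real; the specific red-flag heuristics below are illustrative assumptions, not a vetted rule set:

```python
import json
import urllib.request


def fetch_pypi_metadata(package: str) -> dict:
    """Pull a package's metadata from the public PyPI JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


def red_flags(meta: dict) -> list:
    """Cheap heuristics over PyPI metadata -- not a substitute for
    actual code review or a real scanner, just a first filter."""
    flags = []
    info = meta.get("info", {})
    releases = meta.get("releases", {})
    if len(releases) <= 2:
        flags.append("very few releases")
    if not info.get("home_page") and not info.get("project_urls"):
        flags.append("no homepage or repository link")
    if not (info.get("description") or "").strip():
        flags.append("empty description")
    return flags
```

A brand-new package with no repo link and an empty description is exactly the profile of the bogus Raydium modules, though a determined attacker can dress all of this up, which is why it’s a triage signal, not a verdict.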

Jerry: There’s a lot of focus on supply chain. And I think we’re seeing a lot more focus on supply chain security as it pertains to open source. But my observation is that it tends to focus more on products, where we’re to some extent becoming compelled to care about this because, for example, the US government has created some regulatory hoopla where you now have to create SBOMs and make attestations about the integrity of your software development practices and whatnot.

So I think in certain contexts that makes sense. But there are a lot of companies with a lot of developers that don’t make products or offer services for sale. We have huge numbers of people creating internal applications and whatnot. And I think it’s going to be a very long time before we see that kind of maturity being applied to purely internal operations.

Andrew: Fair. And these are also expensive things to ask an organization to do. You’d have to be pretty mature and pretty well staffed and well funded to do some of these things, too.

Jerry: I definitely agree, but I think we have to be mindful about the ecosystem we operate our, what I’ll call, privileged users in, right? It goes back to that quote: with great power comes great responsibility. If you’re going to have IT people with elevated privileges, they should have to operate within a tighter set of parameters.

And whether that’s the context of what Microsoft used to call their Red Forest... I forget what they call it these days. I’m getting old. But I think it’s very difficult to combat all the different ways these things can manifest. And in particular, a lot of companies aren’t going to have the tolerance to stop their employees from leveraging open source that way.

But that doesn’t mean that we get to throw up our hands and say it’s just too hard. I think it means that we need to take a different look at how we construct our environment in a way that is more tolerant to the problems that can come from this sort of thing.

Andrew: Yep, I would agree. It makes me think, somewhat similarly, that they’re a high-value target, similar to how executives are a high-value target. It’s all the more reason they shouldn’t break the rules or have exceptions to the rules, because the consequences of them being hacked are so much higher than for the average employee.

Jerry: Oh, a hundred percent. Absolutely. All right, that is the show for today. I appreciate everybody’s attention. Thank you for working through my noisy cat and our technical problems. If you like the show, you can find all of our back episodes on our website at defensivesecurity.org. You can also find us on your favorite podcast platform.

I think we’ve got them all now, if I’m not mistaken, pretty much all.

Andrew: Unless there’s ones out there we don’t know about.

Jerry: Yeah,

Andrew: We’ll

Jerry: absolutely. You can follow me on the Fediverse at @jerry@infosec.exchange. You can follow Andy where?

Andrew: Also on the Fediverse, at @lerg, L-E-R-G, @infosec.exchange. I’m also still hanging around the Twitter slash X world, also at lerg, L-E-R-G.

Jerry: All right. I look forward to talking to everybody again real soon. I hope you have a great week.

Andrew: Thanks everybody.

Jerry: Take care.

Andrew: bye.

Defensive Security Podcast Episode 274

https://www.bleepingcomputer.com/news/security/over-3-000-github-accounts-used-by-malware-distribution-service/
https://blog.knowbe4.com/how-a-north-korean-fake-it-worker-tried-to-infiltrate-us
https://arstechnica.com/security/2024/07/secure-boot-is-completely-compromised-on-200-models-from-5-big-device-makers/
https://www.darkreading.com/cybersecurity-operations/crowdstrike-outage-losses-estimated-staggering-54b
 https://cdn.prod.website-files.com/64b69422439318309c9f1e44/66a24d5478783782964c1f6f_CrowdStrikes%20Impact%20on%20the%20Fortune%20500_%202024%20_Parametrix%20Analysis.pdf
https://www.darkreading.com/vulnerabilities-threats/unexpected-lessons-learned-from-the-crowdstrike-event

Summary:

Episode 274: Malware on GitHub, North Korean Developer Scam & Secure Boot Failures In this episode of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat discuss several notable security stories and issues. They start with a malware distribution service that leverages compromised GitHub accounts and WordPress sites. They then cover a security warning from KnowBe4 about hiring a supposed North Korean agent as a senior developer. They dive into the significance of two separate vulnerable firmware signing keys affecting over 500 hardware models. Lastly, they explore the massive financial impact of the recent CrowdStrike outage, with losses estimated at $5.4 billion. Throughout the episode, the hosts provide insights, potential solutions, and share personal experiences related to these cybersecurity challenges.

00:00 Introduction and Casual Banter

00:30 Funemployment and Retirement Reflections

01:54 Disclaimer and First Story Introduction

02:17 Malware Distribution via GitHub

04:24 WordPress Security Issues

08:09 North Korean Developer Incident

14:36 Lessons Learned and Recommendations

23:27 Secure Boot Vulnerabilities

29:19 Cloud Providers and Firmware Security

30:47 The Epidemic of Leaked Keys on GitHub

33:35 Challenges in Development and Security Practices

35:36 CrowdStrike Outage and Its Financial Impact

39:16 Legal and Technical Implications of the Outage

57:33 Concluding Thoughts and Future Plans

 

Transcript:

Episode 274
===

jerry: [00:00:00] Today is Wednesday, July 31st, 2024, and this is episode 274 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight, as always, is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. How are you, my good sir?

jerry: So good. It hurts. How are you?

Andrew: I’m doing good. it’s Wednesday, which is halfway through the week. So I can’t complain too much.

jerry: It’s just another day to me though.

Andrew: I, how are you enjoying your funemployment?

jerry: It is awesome. funny story, when my dad retired, he told me something sad. He said, one of the things that you don’t realize is that the weekend starts losing its appeal,

Andrew: Because every day is the weekend.

jerry: because it’s just another day and, holidays are just another day.

jerry: There’s not really something to look forward to when you’re working. You typically look forward to the weekend. It’s just another day. I am finding that to be true. I’m going to be [00:01:00] spending some time coming up down at the beach, which will be a whole different experience, not having to work and actually be at the beach, which will be cool.

Andrew: So you don’t have to wrap your laptop in plastic when you take it surfing with you anymore.

jerry: That is very true. No more conference calls while out on the boogie board.

Andrew: I will say the random appearance of sharks behind you on your zoom sessions will be missed.

Andrew: Of course, we’ll have to find a way to bring that back. I live in jealousy of your funemployment. I will just say that. But not that you didn’t work your ass off and earned it, right? This is 25 years of blood, sweat, and tears given to this industry to get you to this point. So you earned it

jerry: I’m going to have to be responsible again at some point, but I am having fun in the meantime.

Andrew: as well. You should

jerry: before we get into the stories for today, I just want to remind everybody that the thoughts and [00:02:00] opinions we express on the show are ours and do not represent anybody else, including employers, cats, farm animals, spouses, children, et cetera, et cetera.

Andrew: there’s that one llama in Belarus, though, that agrees 100 percent with what we have to say.

jerry: Very true. Getting into the stories, we have one from Bleeping Computer, and this one is titled “Over 3,000 GitHub Accounts Used by Malware Distribution Service.” I thought this one was particularly interesting and notable. There is a malware distribution as a service that leverages both, let’s call them, fake or contrived GitHub accounts, as well as compromised WordPress sites.

jerry: What they’re effectively leveraging is the brand reputation of GitHub. And so they have a fairly complicated setup of driving [00:03:00] victims through watering hole attacks and SEO-type lures to get people to these sites, and they have different templates that entice people to download these encrypted zip files that are hosted on GitHub.

jerry: And what they’re taking advantage of is two things. Number one, people generally think that GitHub is a reputable place to find files, and so your level of concern goes down when you download something that you think is coming from a reputable place. And I think the other, perhaps more problematic angle, from my perspective at least, is that GitHub is something most companies allow access to.

jerry: It is something that, by design, many companies, not all, encourage their employees to interact with. And so you really can’t block it, [00:04:00] or at least it’s more difficult to block. And because it’s one kind of amorphous thing, you don’t have the ability to granularly say you can go to this part of GitHub and not this other part of GitHub.

Andrew: Yeah, I agree on all points. It’s absolutely leveraging and abusing the reputation of GitHub to get this malware out there, and it’s effective. Using WordPress doesn’t surprise me; just about every day I see some other plugin has a massive vulnerability. So I’m not blaming WordPress, I’m blaming their plugin ecosystem as being highly toxic, in the original sense.

jerry: I know that WordPress has a lot of detractors, especially in the security community, but over 50 percent of the websites on the entire internet run WordPress, right? That is pretty impressive.

Andrew: There’s something to be said for The amount of coverage or the amount of instances out [00:05:00] there equals how many bad guys are poking at you. So if you’re not widely deployed, you’re probably also not getting widely tested. So there is absolutely some of that aspect of, Hey, if you’re a well used tool, you’re likely to have more security problems.

Andrew: So statistically that makes sense, but it’s not a bad tool. Don’t get me wrong. It’s a super useful tool. It’s just amazing how often I see advisories about. Really nasty exploits on various plugins for WordPress.

jerry: Yeah, the barrier to entry for plugin development is incredibly low, and there are just an absolute ton of them, many thousands. So it isn’t surprising.

Andrew: People who are running WordPress sites are often not super technical admins. They’re usually marketing folks or content generation folks. So when they’re looking for, hey, I need something that makes a pretty picture or does something like this in WordPress, they probably aren’t looking at it with the same level of technical rigor you and I might.

jerry: I will tell you, in prior [00:06:00] jobs where we had customers hosted on our infrastructure, this was a big problem, because customers would walk away, right? It’s so easy to set up a WordPress instance, which, by the way, is part of its value proposition, but I think it also contributes to the low ongoing attachment, the lack of ongoing care and feeding.

jerry: It’s so easy to set up and then just walk away from, and it’s a big problem. I think the WordPress team themselves have done a pretty good job of mitigating the issues to the extent they can. Most of it auto-updates these days.

Andrew: Yeah.

jerry: More to go, right?

Andrew: What’s interesting to me is that the way you describe that often sounds like the same problems we have with SaaS and cloud in general: it’s so easy to set up, walk away from, and not manage well, and that leads to all sorts of similar problems. This story is more about [00:07:00] GitHub, and I agree with all your points.

Andrew: I just went down my WordPress rant rabbit hole, but yeah, I get it. GitHub is an interesting one. And I don’t have a lot of good solves for that one.

jerry: No, it is not an easily solvable issue. I think this is one of those cases where education will certainly help, and proper endpoint detection, I should say endpoint protection, will help you identify that somebody has downloaded and run something potentially malicious on their device.

jerry: But beyond that, unless you’re willing to take that leap and not permit people to access GitHub, it’s hard to defend against. And by the way, if the industry as a whole decided, hey, this is too risky, the bad guys would just move somewhere else.

Andrew: Yeah, absolutely. They’re leveraging whatever has a good solid reputation and the right sort of functionality. It’s not GitHub’s fault. And I think you’re right: if it [00:08:00] wasn’t GitHub, it’d be somebody else that serves the same function. They’re a victim of their own success in that sense.

jerry: 100%. So the next story we have comes from the KnowBe4 blog. KnowBe4 is, I think, a fairly well-known security education company. They are not without controversy, in my experience. I think they’re the ones that offer the ransomware guarantee or warranty, if I’m not mistaken.

jerry: But this story is not about that. The story is about how they hired a North Korean, allegedly, as a senior developer. So they did what I would describe as a public service announcement, describing how they had been duped by, purportedly, a North Korean government agent who was trying to infiltrate their [00:09:00] company.

jerry: It’s a little unclear exactly what their mission was. But the story basically is that this company was trying to hire a senior developer. They went through the interview process, selected what they believed was the best candidate, and performed all their normal interviews, background checks and whatnot.

jerry: Then they made an offer. The offer was accepted. They dispatched a laptop to this person. And then suddenly that laptop was observed attempting to install malware. And that is what set off the series of events that alerted them to what was going on. So in the end analysis, the person they hired was, again, somebody allegedly from North Korea.

jerry: They had used the stolen identity of a legitimate person in the US. They [00:10:00] altered a photo, and it’s a little unclear to me in exactly what ways; they show a before and after of how the photo was altered, but it’s not entirely clear whether that was the photo of the person whose identity they stole or of somebody else. The laptop was sent to what they call an IT mule laptop farm, I think is what they call it.

jerry: Basically, it’s a place that will set these laptops up on a network to allow remote access from overseas, or they will send them out. I think in this instance it was allowing remote access in. So the bad actor was apparently in North Korea, VPNing into the laptop and then connecting from the laptop, which I guess was located in California, into the company’s networks.

jerry: When KnowBe4’s security [00:11:00] team identified that there was malicious activity going on, they tried to contact this very new employee, and the new employee said, I can’t talk right now; I’m trying to debug something, but I’ve got a personal or family issue going on. And so over the course of a couple of hours, the security team decided to isolate the laptop, and then things started to fall apart and they realized exactly what had happened.

jerry: It’s not entirely clear to me, by the way, from reading the article, exactly how they stitched together that this person was a North Korean agent. That is not part of, at least, the stories that I have read. But it’s interesting, because this resonated with me personally. I worked for a large multinational company for a very long time.

Andrew: And you’re a North Korean agent.

jerry: And I yes, that’s, that is true. Sorry, everybody had to find out that way. And anyhow,

Andrew: Didn’t they all know? Is that breaking news?

jerry: [00:12:00] I think it might be for some,

Andrew: Oh you’re a very nice North Korean agent.

jerry: I try to be

Andrew: Yeah.

Andrew: Anyway, yeah, it’s a crazy story. I sent this to some of my friends in HR, and they’re like, what, this happens? Yeah, it happens. So that tells me there’s some awareness. Interestingly, according to the timeline that KnowBe4 published, their SOC was on it. They detected the user’s suspicious activity; it began at 9:55 p.

Andrew: m., and by 10:20 p.m., 25 minutes later, they’re like, nope, shut him down.

jerry: yeah. But by all accounts, they did a great job.

Andrew: Yeah, and the SOC person was suspicious enough. And on the North Korean angle, it looks like KnowBe4 reached out to the FBI and Mandiant, so the attribution is probably coming from one of those two, who are saying this was a North Korean thing.

Andrew: Although from the story as well, it looks like this may not necessarily have been malicious in terms of state-sponsored [00:13:00] activity so much as just a way to earn money to send back to North Korea to fund whatever. As they say, at least according to what they quote in their blog post, the FBI was saying sometimes this is just straight-up people working in sanctioned countries, getting around the sanctions and doing legitimate work.

Andrew: But this guy immediately started loading malware, which is not great.

jerry: Yeah. And realistically, if he hadn’t started loading malware, I don’t know that it would have been very easy to detect that this was going on.

Andrew: Sure. So a lot of lessons learned, though. A lot of takeaways.

jerry: Yeah. Anyway, as I was saying, this hit home for me, because I have had similar experiences, not with foreign agents, North Korean agents, or people in embargoed countries, but I have had experience with people falsifying who they [00:14:00] are when they get hired. And it’s difficult to detect, especially in these remote working

jerry: environments. I’m not saying it can’t also be done in a face-to-face environment, but I think it’s easier to pull off in today’s remote working world. I’m still, by the way, a big proponent of remote working; don’t misunderstand in any way. But I do understand this one.

jerry: I can definitely sympathize. Someday I will tell some funny stories about experiences I’ve had. So, they did call out some tips and recommendations as a result of this. Again, this was intended to be a public service announcement, so they wanted to describe the event and what they learned from it.

jerry: So their tips to prevent this were: scan your remote devices and make sure no one remotes into them. I think that was [00:15:00] really addressing how the person was connecting remotely into the laptop. But, by the way, even if you were to not permit VNC or remote desktop or things like that, it’s not a guarantee, because there are plenty of cheap network KVMs, right?

jerry: So just be aware of that. The next one is better vetting: make sure the employee is physically where they are supposed to be. I think that’s also a difficult one. Then, better resume scanning for career inconsistencies. By the way, I’m assuming they are listing these tips because some aspect of the root cause analysis ties back to each of these things.

jerry: So I’m guessing that if they had looked more closely, they would have identified inconsistencies in the career of the person whose [00:16:00] identity had been stolen. And so I think what they’re saying is, make sure that everything ties out. The next one is: get these people on video camera and ask them about the work they’re doing, so that you can see firsthand, A, that their picture looks like who you think you’re hiring, and B, that they are in fact who they say they are.

jerry: And I can definitely tell you firsthand that’s an important one.

Andrew: Yeah. But that’s getting easier and easier to fake.

jerry: Yeah.

Andrew: That’s going to be an interesting problem, as deepfake AI technology is getting cheaper and more widely accessible.

jerry: So our friend Bob told me about a situation in an Asian country where it is, at least in the recent past, fairly common for [00:17:00] what he loosely called job fairs, where people who were job prospecting could go and hire an expert. This person would offer their services to go through interviews and

Andrew: wow,

jerry: purport to be the person who’s looking for the job. They would rely on the silly Westerners’ inability to distinguish between different Asian faces, and the company would end up hiring and bringing on board somebody who was not the person they had actually interviewed.

jerry: And it was done at an industrial scale.

Andrew: obviously that is a short lived scam for the person getting hired, but I guess they get a paycheck or two out of the deal and move on.

jerry: I think sometimes it can work out for a long time, [00:18:00] unfortunately.

Andrew: I guess that’s true. Wow, that is crazy.

jerry: Anyway, the next one is: the laptop shipping address being different from where they are supposed to live and work is a red flag. I’m assuming this person declared they had an address in place A and asked for the laptop to be shipped to place B, and that should definitely be a red flag.

jerry: I’m a little surprised that has to be called out; it seems obvious to me. They do have a couple of recommended process improvements, including cases where background checks appear to be inadequate and names are not consistent, so basically where the person’s name and the results of their background check don’t line up correctly.

jerry: You need to make sure that you’re properly vetting your references and do not solely rely on email-based [00:19:00] references. I think that’s also a good one, although not airtight, especially if you’re trying to fend off any type of, I want to say nation-state, adversary; it’s kind of a low bar to clear.

Andrew: If they’re already a scammer, they can probably get over that hurdle of having three valid, references who will lie for them.

jerry: That’s my thinking. Next: implement enhanced monitoring for any continued attempts to access systems. That makes very good sense. And by the way, I don’t think this is unique to the situation: when you terminate someone, you need to make sure their access is terminated globally and you don’t have backdoor ways into your environment.

jerry: Then: review and strengthen access controls and authentication processes, and finally, conduct security awareness training. They also have a couple of things to look out for, like use of voice-over-IP numbers, discrepancies in addresses and dates of [00:20:00] birth, conflicting personal information like marital status, and also family emergencies. Again, when the company’s SOC tried to contact the employee and ask what was going on,

jerry: this person said, oh, I can’t talk, I’ve got a family emergency going on. And then: sophisticated use of VPNs and VMs for accessing company systems. That’s, by the way, a very important one, and something I know Bob was doing pretty pervasively: looking for evidence of unauthorized connections from known VPN sources.

Andrew: Right.
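That kind of known-VPN-source check is simple to sketch. The blocklist ranges below are placeholders drawn from the IETF documentation address space; a real deployment would pull ranges from a commercial VPN/proxy intelligence feed rather than hard-coding them:

```python
import ipaddress

# Hypothetical blocklist -- these are documentation ranges standing in
# for real VPN/proxy exit ranges from a threat-intel feed.
KNOWN_VPN_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]


def is_vpn_source(ip: str) -> bool:
    """Flag a login source IP that falls inside a known VPN/proxy range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)


def flag_logins(events: list) -> list:
    """Return the subset of auth events coming from flagged ranges.
    Each event is assumed to be a dict with a 'source_ip' key."""
    return [e for e in events if is_vpn_source(e["source_ip"])]
```

In practice you would feed this from your identity provider’s sign-in logs and treat hits as triage signals, since plenty of legitimate employees also use VPNs.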

jerry: And then attempting to execute malware and subsequent coverups.

jerry: So I think all of those are really good. I will also say, by the way, that obviously KnowBe4 published this in part to get their name out; some of it is what that one judge called corporate puffery, [00:21:00] which is my new favorite legal term. I wish it were shared far and wide. But it was very clear, because the company had to post a follow-up blog post, that the original post was being interpreted as them disclosing a data breach.

jerry: I thought that angle was also very interesting: you have to be careful how you couch things. In the FAQ, by the way, they actually draw a delineation between how the infosec media covered their public service announcement and how the general media covered it.

jerry: The infosec press, as they describe it, covered it as lessons learned, here are some things to look out for, and did what they had hoped. But the general media basically just covered it as if it was just another data breach, [00:22:00] which I thought was quite interesting and something to be aware of.

Andrew: There are two other things I took away. Interesting that, one, their EDR software caught the malware, so that’s important. I don’t know if it was CrowdStrike, but we’ll just mention that for the search algorithms, because everybody has to talk about CrowdStrike. The other thing I thought was interesting is the way this is written.

Andrew: It’s clearly a bit of an internal document, the way the verbiage is used, the tenses, and that sort of thing. So they say here,

Andrew: let me find it really quick. Ah, I don’t have to, it’s good: “We have new employees in a highly restricted area when they start, and they have no access to production systems.” So it sounds to me, based on that, that they have some sort of tiering for brand-new employees, that they don’t automatically get access to anything, that there’s some sort of training or ramp-up period where they’re vetted or whatnot, whatever the purpose is for that particular [00:23:00] sandbox they’re left in initially.

Andrew: Not a bad idea for this sort of circumstance.

jerry: a great idea and a great call out. Yeah,

Andrew: Crazy story, though. I’m really glad that KnowBe4 published all this and we can learn from it. Kudos to them. Yeah, you’re right, there’s probably some cynical, self-serving nature to it, but nonetheless, it’s good fodder for us.

jerry: absolutely. Absolutely. And I think it’s happening probably a lot more than we want to admit. Moving on to the next story. This one comes from Ars Technica, and the title is “Secure Boot Is Completely Broken on 200-plus Models from Five Big Device Makers.” So I think most people hopefully know what Secure Boot is.

jerry: It’s intended to establish a level of trust in the firmware that is running on the hardware. Back in the early 2010s that was a big deal, because there were some pretty novel firmware-based [00:24:00] attacks, and the specter of really persistent, very difficult to eradicate firmware-level malware was running pretty rampant.

jerry: So the industry created this Secure Boot concept, and it relies on public key cryptography. And the one thing that everybody should realize by now is that we are really bad at protecting keys.

jerry: And so there are two distinct problems identified in the article. One is that back in 2022, one of the signing keys from AMI was posted to a public GitHub repository. Now, it was encrypted, “strongly” encrypted with a four-character password, so it was very easy to crack.

Andrew: Indeed.

jerry: Yeah. So that basically means that systems whose [00:25:00] Secure Boot relies on that particular key can be bypassed by an adversary. They can now load malware; they can basically sign a malicious version of firmware and run it on the system.

Andrew: It’ll be trusted. And the only way to fix this is basically to deploy firmware updates to the hardware to rotate the key, which is not going to happen.
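The trust model the hosts are describing can be sketched with a toy signature check. This is a simplification, not real secure boot (real implementations use full-size vendor keys, X.509 certificates, and a chain of trust anchored in hardware), but it shows why a leaked private key is fatal: whoever holds it can produce firmware the machine will happily accept.

```python
# Toy sketch of the secure boot trust model. NOT real secure boot:
# real implementations use full-size RSA/ECDSA keys and X.509 certs,
# but the principle is the same.
import hashlib
import random

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, valid for n < 3.3e24."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int = 64) -> int:
    while True:
        c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(c) and (c - 1) % 65537 != 0:
            return c

# Vendor generates a keypair. The PUBLIC key (n, e) is burned into the
# hardware; the PRIVATE exponent d must stay secret.
p = random_prime()
q = random_prime()
while q == p:
    q = random_prime()
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

def sign(firmware: bytes) -> int:
    """Vendor side: sign the firmware hash with the private key."""
    h = int.from_bytes(hashlib.sha256(firmware).digest(), "big") % n
    return pow(h, d, n)

def verify(firmware: bytes, sig: int) -> bool:
    """Boot ROM side: recompute the hash, check it against the signature."""
    h = int.from_bytes(hashlib.sha256(firmware).digest(), "big") % n
    return pow(sig, e, n) == h

good = b"legitimate firmware image"
sig = sign(good)
print(verify(good, sig))                  # True: boot proceeds
print(verify(b"tampered firmware", sig))  # False: boot refused
# ...unless the attacker has d (the leaked key), in which case they can
# call sign() on their own image and it verifies just fine.
```

Note that once `d` leaks, nothing in `verify()` can tell vendor-signed firmware from attacker-signed firmware; the only fix is rotating the public key in hardware, which is exactly the part that is hard to do at scale.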

jerry: And by the way, once a system is infected, like if you have malicious firmware, it is very difficult. This is not something where you can necessarily just wipe the hard drive and start over. It’s very difficult to eradicate; the tools and detection capabilities for malicious firmware are very immature.

Andrew: Mhm. this is a tough one. When you’re relying purely on the secrecy of a [00:26:00] key, and if that key were to get leaked, in this case, it blows up that entire ecosystem. Wow, that’s rough. And the other thing that I’ll bring up is I have seen so many secrets get pushed to GitHub by accident. It’s so easy to do and it’s a huge problem that I don’t know that we talk about enough.

Andrew: Once it’s pushed, too, it’s almost impossible to ever wipe from that record, because it’s permanently in that repo’s history. So once that happens, you’re basically forced to rotate that secret. And in this case, you can’t, because it’s relying on firmware you can’t centrally update without massive amounts of effort.

Andrew: So now you’ve got this problem forever

jerry: Yeah, correct. Now in the,

Andrew: for those hardware.

jerry: Now, the other instance, which is maybe a little less bad... I guess in some ways it’s better, and in other ways it’s worse. [00:27:00] What we just talked about affected 200 or so models. There were another 300 models that were impacted by a different problem.

jerry: And that other problem was that one of the firmware manufacturers released prototype code that had a stock test key in it.

jerry: And in fact, the common name in this key is “DO NOT TRUST.” So the hardware manufacturers did not actually rotate the key as they were supposed to. And in the analysis of the first problem that we were just talking about, this company called Binarly was doing some analysis of other systems, looking for other problems.

jerry: And they found an apparently very pervasive problem across many different models of hardware that have this stock test key [00:28:00] instead of an actual legitimate key. And so it’s literally the same key used everywhere. It was never intended to be used in production systems, but apparently the hardware manufacturers didn’t get the memo. I guess the one good news in all of this is that I think all of the hardware now is, or almost all of it is, out of support and probably pretty old.

jerry: Not necessarily a good look for the industry, though. And who knows how many people have had these keys over the years, providing an opportunity for bad actors to leverage the vulnerability. I would say for most people, or for most organizations, it’s not an exigent threat.

jerry: Because something else has to go horribly wrong in order for a bad person to use it, unless you’re talking about a laptop or a desktop, like a personal PC. From a server [00:29:00] perspective, something else has to go horribly wrong. You’ve got to be compromised first. And if you were, this provides a really gnarly method of persistence.

jerry: If that were to be properly leveraged. But again, coming from where I was, there is a category of company that has to worry about this very deeply, and that is cloud providers.

Andrew: That’s true. Somewhere there is hardware backing all that stuff up.

jerry: Yeah. And because you have to have administrative rights on the system in order to flash it... Like, I run infosec.exchange and I have a bunch of bare metal servers that I rent from a data center. I have root; I could conceivably update the firmware on those.

Andrew: You could.

jerry: If I updated the firmware in a way that maintained persistence, and my [00:30:00] hosting provider was not sophisticated, I could establish a means of persistence to compromise the next person or organization that uses that piece of hardware.

Andrew: You rent it. You flash it with your code. You un rent it. And they rent it to somebody else.

jerry: Yes,

Andrew: It doesn’t matter if they loaded fresh OS or whatever. It’s sitting there in firmware. It’s at the BIOS level.

jerry: Exactly. So those are the kinds of things that kept me up at night.

Andrew: But to your point, I don’t know how much of a real world hardcore threat this is. It’s a very interesting situation, but I don’t think that’s what’s going to burden most companies.

jerry: No. So you touched on something I wanted to hammer home, and that is our just terrible stewardship of keys, particularly as it pertains to GitHub. I’m sure [00:31:00] there are others, but GitHub, I suspect, is in 50 years going to be viewed as the nexus point for many data breaches that were caused by leaked keys.

Andrew: It’s just too easy.

jerry: It’s an epidemic.

Andrew: And I’m not throwing shade at the developers. It is just so easy to do that even the most well-intentioned and diligent developer can get caught by this. Even if you have tools that are supposed to check for secrets before that pull request is completed, they’re not foolproof, and secrets look different in different places, and the tools aren’t great.

Andrew: There’s no one perfect tool for this. So, I don’t know, the only thing I can say, and it’s really tough when we’re talking about firmware-based things, but for everything else: try to design with the intent that you’ve got to rotate those keys on a regular basis, and make it easier for yourself.[00:32:00]

Andrew: Much easier said than done.
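The pre-commit checks Andrew mentions are essentially pattern matchers, which is exactly why they are not foolproof. A minimal sketch of that kind of heuristic scanner (real tools such as gitleaks or truffleHog use far larger rule sets plus entropy analysis; the rules below are illustrative, not exhaustive):

```python
# Minimal sketch of a heuristic secret scanner for staged changes.
# Anything that doesn't match a known pattern sails straight through,
# which is the "not foolproof" problem the hosts describe.
import re

RULES = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic assignment": re.compile(
        r"(?i)(?:password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan(diff_text: str):
    """Return (line number, rule name) for every suspicious line."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

staged = 'db_password = "hunter2hunter2"\ntoken = load_from_vault("token")\n'
print(scan(staged))  # [(1, 'Generic assignment')] -- line 2 is clean
```

Note that line 2 passes only because the secret is resolved elsewhere at runtime; a hardcoded token with an unusual format would also pass, which is why scanning is a backstop, not a substitute for keeping secrets out of the repo in the first place.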

jerry: Certainly true. I will say, if you start out with that design principle in mind, it’s much easier than going back and trying to retrofit. And I think that goes without saying, but it is doubly true in this instance. So keep an eye on those keys. Definitely rotate them when and where you can; minimally, know what keys you have and how you would rotate them if they were disclosed.

Andrew: And I don’t want to get down in the weeds of all the coding practices, because I’m certainly not an expert there. However, with the very dangerous level of knowledge that I have, just enough to be very dangerous, I understand there are ways to not directly embed those secrets in your code or contain them in the code.

Andrew: You can call them as a reference, and [00:33:00] otherwise teach your devs how to do that, so that they don’t have to worry about that secret accidentally getting pushed. That’s one very crude example of a very complicated problem that I’m not doing a good job talking about. But for those who know much better than I: try to help your devs.

Andrew: Don’t put secrets in GitHub, please. Thank you.
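What Andrew is gesturing at, calling the secret "as a reference," usually means the code holds only a name, and the value is injected at runtime by a CI secret store, vault, or container environment. A minimal sketch (the variable name `PAYMENTS_API_KEY` is invented for illustration):

```python
# The code holds only a *reference* to the secret; the value is injected
# at runtime and never lands in git history.
import os

# Bad: the literal secret lives in the repo history forever once pushed.
# API_KEY = "sk-live-abc123..."

def get_api_key() -> str:
    """Resolve the secret at runtime from the environment."""
    key = os.environ.get("PAYMENTS_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENTS_API_KEY not set; inject it at deploy time")
    return key

# Simulating the deploy-time injection a CI system or vault would do:
os.environ["PAYMENTS_API_KEY"] = "example-value-from-secret-store"
print(get_api_key())
```

Failing loudly when the variable is missing is deliberate: a misconfigured deployment surfaces immediately, instead of tempting someone to paste the real value back into the source as a "quick fix."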

jerry: It is a difficult problem. And I will tell you, I’ve wrestled with this problem mightily; it requires you to create an ecosystem. And one of the challenges I’ve observed is that, how best to say it, we are commoditizing development. I know that’s perhaps a crass way of saying it, but we’re putting people in development roles that don’t really have a lot of development background.

jerry: And we’re basically throwing them out of the nest and [00:34:00] saying, okay, it’s time to fly. And they know how to write code, but they’re not even aware that this class of problem exists until somebody from the security team comes along and says, what did you do?

jerry: Let alone knowing how and where to actually store them. And so it’s very incumbent on the leadership of the security team to make sure that ecosystem both exists and that your developers are aware of how to use it. Yeah. Otherwise, this is going to keep happening.

Andrew: A single slide during onboarding isn’t good enough?

jerry: Oh, so it’s add that slide to your annual training and you’re good.

Andrew: All right. And maybe just something like just don’t share secrets. Just as a bullet point, move on. I think

jerry: So yeah, don’t share secrets, right? Don’t share your passwords. That’s right. I have, by the [00:35:00] way, been involved in situations where something like this has happened. And I asked the person, like, at what point did you think it was okay?

jerry: This is a password. And he said, no, it’s not a password. It’s a secret.

Andrew: Right.

jerry: You’re not allowed to share passwords. There’s no prohibition on secrets. So

Andrew: It makes my code go. I need it. It’s important.

jerry: Be very cautious about terminology. All right, let’s keep plugging along here. The next one comes from Dark Reading, and the title is “CrowdStrike Outage Losses Estimated at Staggering $5.4 Billion.”

Andrew: And I’m going to go on record and say that’s probably low.

jerry: Yeah, it was funny: in the immediate aftermath of the event, or maybe a couple of days after, somebody was asking the question of how much money [00:36:00] was lost. And at the time, we had just gotten the number of eight and a half million systems impacted. And so I did a bit of quick back-of-the-napkin math.

jerry: And I came up with between five and ten billion, which is pretty interesting now that it’s coming out to be $5.4 billion. I do think, by the way, the eight and a half million number is now being walked back. I haven’t heard a new number, but Microsoft seems to be signaling that eight and a half million is probably very conservative, and CrowdStrike doesn’t seem to be very motivated to go correct the record.
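Jerry’s back-of-the-napkin estimate, made explicit: the 8.5 million figure is from the episode, while the per-machine cost range is an assumption for illustration, which is exactly why the estimate spans billions rather than landing on one number.

```python
# Napkin math behind the $5B-$10B range. The per-machine costs are
# assumed values, not from any source.
affected = 8_500_000
low_cost_per_machine = 600     # assumed: quick remote fix, minor downtime
high_cost_per_machine = 1_200  # assumed: hands-on recovery, lost business

low = affected * low_cost_per_machine
high = affected * high_cost_per_machine
print(f"${low / 1e9:.1f}B - ${high / 1e9:.1f}B")  # $5.1B - $10.2B
```

If the affected-system count is really 10x higher, as Andrew guesses below, the same per-machine assumptions would push the total well past the $5.4 billion headline figure.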

Andrew: Yeah, from what I understand, those were just the systems that were able to send a crash dump to Microsoft, which is what they based it on. And they’re now saying that was probably just a subset. So that 8.5 million number that we’ve been talking about is probably way low. I’m just going to take a guess:

Andrew: It’s at least [00:37:00] 10x that number.

jerry: It’s a lot. It was an interesting discontinuity, because I think CrowdStrike has something like 15 percent market share, and this was supposedly 1 percent or less of Windows systems that were impacted, and it was hard to reconcile those. The only thing I could think of is that the 1 percent included home users, but I don’t know, it just doesn’t seem right.

jerry: It’ll be very interesting to see what the actual number is. I think it’s certainly going to be a lot higher than eight and a half million, just based on the news coverage. How many freaking airports did we see pictures of, with people on ladders resetting the displays?

jerry: You know, it’s mind-boggling, the amount of effort that went into recovering from this.

Andrew: Side story: I saw this amazing [00:38:00] article. I won’t do it justice because I didn’t have it prepped, but somebody figured out they could use a barcode scanner to speed up the process, because Windows treats a barcode scanner as a keyboard, and they figured out how to encode the actual command with the recovery key they needed.

jerry: A key... the BitLocker key. Yeah.

Andrew: Thank you, the BitLocker key. And they created custom barcodes for each PC it was tied to. And instead of typing it all in, they just got to the point where they needed it, scanned the barcode, and boom, it was recovered. I was like, I need to go find that story. It was amazing. A little bit of a hack that I thought was awesome.

jerry: Yeah. That was a,

Andrew: also I was thinking.

jerry: I was going to say, it was a company in Australia that did that, I think, if I remember. Yeah, that was very clever.

Andrew: I was also thinking randomly, when I saw that same picture of those people up on the ladder fixing that airport monitor: if you’re down to that monitor, you probably got your critical stuff back.

jerry: That is very true. Although different teams, [00:39:00] probably different

Andrew: That’s true. That’s true. And I know there’s a lot more to talk about here, but one last thing I’ll say, since I jumped in: we’ll probably get a much better read on this, because a whole bunch of legal actions are starting to spin up, and that’ll all come out in discovery.

jerry: Delta Airlines, notably (we were sharing some stories back and forth about Delta Airlines before the recording), announced that they estimate their losses at half a billion dollars, and they have retained legal counsel and are contemplating legal action against both CrowdStrike and Microsoft.

Andrew: Yeah. The other interesting thing that came up since we last talked about this, and we were taking it with a little bit of a grain of salt: we were talking a lot about why CrowdStrike was operating in the kernel, whether that was a good idea or not, why they’re allowed to do it, should they be allowed to do it, et cetera, et cetera.

Andrew: And you can’t do that on a Mac. And one thing Microsoft came out and said is that, based on an EU [00:40:00] legal agreement, they were forced, due to antitrust-type considerations, to basically allow any security tool to have the same level of access as Microsoft’s built-in security tools. The EU didn’t want Microsoft to have monopoly access to the operating system.

Andrew: So they needed to allow third parties to have the same capabilities. If Microsoft Defender has kernel access, competitors need to also be able to have kernel access. That’s going to be an interesting thing to see play out. And I think Microsoft is walking a very careful line of pointing that out without playing too much of a victim.

Andrew: But it is an interesting caveat that I think adds to the nuance of the conversation we were having last week on this topic.

jerry: Yeah, they’ve come out pretty forcefully and said that they’re intending to refactor access to the kernel. And I think they actually made the comment that they’re intending to move in a direction that is much more like iOS, the Apple iOS [00:41:00] ecosystem. So that’ll be quite interesting to watch.

Andrew: Yeah, I don’t blame them. Microsoft’s in a tough spot here, I think. And I’m not a Microsoft lover or an apologist, but they’ve caught a lot of flack for something that wasn’t directly their fault, but somewhat the result of decisions and ecosystems and permissions that were granted. And I think they realize now that they just can’t cede that ground, that they’re still going to be held accountable for what other people are doing in their operating system.

jerry: Yeah, in legal circles there’s this concept of an attractive nuisance, and I have a feeling that’s going to be part of the basis of claims against Microsoft: that they should have known.

jerry: And therefore, they’re somewhat culpable for not taking action. Anybody who’s had somebody slip and fall on their driveway, at least in the US (I don’t know [00:42:00] that this is a big problem outside the US), you have that sort of problem, right?

jerry: It’s an attractive nuisance. You should have done something about it. You should have known that this was going to be a problem. Somebody got hurt as a result, and now you’re at least in part responsible. Again, I’m not presupposing what a court would find. I don’t think this, by the way, is the kind of thing that would actually end up going to trial, but hey, we’ll see. This whole story here,

jerry: by the way, is based on a report from a company called Parametrix, which is, I guess, a think tank that provides information to insurance companies. And it was they who pinned the estimated losses at half a billion, I’m sorry, at $5.4 billion. And they identified that healthcare companies were the most impacted, followed by [00:43:00] banking as well as transportation companies.

jerry: We actually have a breakdown based on their telemetry. They’ve identified different categories of industry and how much each category lost as a result of this; it all totals up to the $5.4 billion. There were some interesting conclusions at the end of the report, which was linked in the article from Dark Reading.

jerry: And I’ll just read through those. Some of them, by the way, are a little confusing, contrary to what I understood to be the case. So the first one is recovery disparities between cloud and traditional infrastructures: they point out that organizations whose impacted assets were in the cloud recovered much more quickly than those with traditional infrastructure.

jerry: And on the one hand, I know that there were a lot of problems. But [00:44:00] on the other hand, I’m wondering if that whole concept is skewed by the fact that you don’t have laptops and kiosks and whatnot sitting in the cloud. The impacted things on premises actually required you to dispatch someone to go out and fix them, whereas with infrastructure that’s sitting in a cloud, somebody from their basement was able to just plug through them one by one, without having to actually dispatch people.

jerry: So I don’t know how much that played into it.

Andrew: I had one other thought there, which is that maybe a certain number of these systems that went down were abstracted in some sort of hypervisor, and the management plane was still remotely available to SOC or NOC engineers, who could fix it without actually having to put hands on physical keyboards at the devices.

jerry: That’s possible. Very possible.

Andrew: You know, because I’m assuming the underlying infrastructure running it [00:45:00] didn’t go down. It was the actual virtual machines inside of it, running CrowdStrike at that layer, that blue-screened. And I don’t know if you can fix that easier than having to go plug into a

Andrew: USB port on a physical device.

jerry: I think, if nothing else, they were able to use a virtual KVM to connect to the systems.

Andrew: Yeah. So that’s much quicker. I would think.

jerry: Yeah, that’d be my expectation. So their second conclusion was misplaced priorities. They call out that the focus on non-systemic cyber perils was ultimately a catalyst for the CrowdStrike outage.

jerry: My read of that is that we were all focused on things like ransomware and worms and whatnot; we weren’t focused on the potential impacts of other components in our technology stack failing catastrophically.[00:46:00]

Andrew: Yeah, I challenge this. I think that’s some Monday morning quarterbacking.

jerry: Again, keep in mind that I don’t think you can divorce that one from the next item, which is achieving portfolio diversification. And keep in mind that this was written for insurance companies,

Andrew: That’s fair. Carry on.

jerry: not necessarily for IT companies. So what they’re pointing out here is that

jerry: there was a concentration of risk. The intersection of pervasive use of CrowdStrike and pervasive use of Windows led to this very systemic outage. And so if you are an insurance company, you should probably be thinking about it just like how insurance companies don’t want to have all of their customers in Florida.

jerry: Because then, if a hurricane comes [00:47:00] through, all of their customers are making simultaneous claims at the same time. They want to have a diversified customer base: some people in Alabama, some in New York, some in Ohio, some in Florida. And carrying that over into IT: do you want to have companies under your insurance umbrella who are only using Windows and CrowdStrike? Because, as this points out, that creates a situation not unlike having all of your homeowner customers in Florida. That’s a very one-sided risk.
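The Florida analogy can be made quantitative. One standard concentration measure an insurer might apply is the Herfindahl-Hirschman index over its book of business, here keyed by each insured company’s OS-plus-EDR stack rather than geography. The shares below are invented for illustration:

```python
# Sketch of a concentration metric over an insurer's book of business.
# The technology-stack shares are illustrative, not real market data.
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared shares.
    1.0 = everything in one bucket; lower = more diversified."""
    return sum(s * s for s in shares)

concentrated = {"Windows+CrowdStrike": 0.9, "Linux+other": 0.1}
diversified = {"Windows+CrowdStrike": 0.3, "Windows+Defender": 0.3,
               "macOS": 0.2, "Linux+other": 0.2}

print(round(hhi(concentrated.values()), 2))  # 0.82: one event hits nearly everyone
print(round(hhi(diversified.values()), 2))   # 0.26: correlated loss is capped
```

The high-HHI book behaves like the all-Florida portfolio: a single systemic event (one bad channel update) triggers simultaneous claims across most of it, while the diversified book caps how much of the portfolio any one stack failure can touch.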

Andrew: I’m also very curious how these lawsuits are going to go, if they do indeed go. As we’ve very cynically noted, the EULAs that these companies write don’t really set them up for a lot of liability. [00:48:00] And I saw a quote from the CEO of Delta, and I won’t get it right, but basically he was saying, hey, they’ve got to be held accountable and we need to hold them accountable.

Andrew: And my initial thought, having been in a lot of discussions around procurement, is that’s probably something your legal team should have done when you signed that contract. What did your legal team agree to, and do they really have any actual liability in this circumstance? In my gut... I’ve never read the EULA for CrowdStrike.

Andrew: I’ve never bought CrowdStrike, so I don’t know. But having looked at many other EULAs, they have this limited-liability stuff: hey, we might screw up, this stuff isn’t perfect, and this is how much we’re liable for. So it will be very interesting, I think, how that might play out in the real world.

jerry: Yeah. I’ve seen snippets of CrowdStrike’s EULA and contracts, and in the immediate aftermath, one of the notable discussion points was that CrowdStrike does not [00:49:00] permit you, the customer, to use CrowdStrike in, and I’m paraphrasing, a life-critical or safety-critical context.

jerry: And I think there were some other similar words. It is not uncommon, by the way, for software companies to limit their liability based on how much you paid; you can’t necessarily sue them for damages that exceed the amount that you’ve paid them. That is a fairly common thing.

jerry: Now, the reality is, as was stated by the CEO of Delta, they are going to go after, sorry, Delta is going to go after CrowdStrike, and it will be interesting to see how that is perceived by the court. I will tell you, at least in the US, and again, my context is all US law:

jerry: You can’t disclaim away, you can’t write away in a contract, gross negligence, right? That is something where, if you are found to be grossly negligent, it doesn’t matter what you wrote [00:50:00] and what the customer agreed to in a contract; there can still be a finding against you.

jerry: And so I think that’s very likely the stance or the angle that Delta will take, and probably some other companies as well. I think it’ll be very interesting to see how this plays out because, again, I’m not an attorney, but I have worked with quite a few of them over the years, and I’ve seen a lot of contract cases go to court and be settled out of court.

jerry: I think personally this is the kind of thing that would get settled out of court, because CrowdStrike is not going to want a drawn-out legal proceeding that gives their IT processes and development processes a deep [00:51:00] examination. And likewise, Delta probably doesn’t want a deep examination of exactly how they were using CrowdStrike and whether that was in concert with the terms of their license agreement.

jerry: I think the challenge for CrowdStrike is,

jerry: Again, I’m not saying this isn’t warranted, but there are a lot of companies right now that are probably in talks with legal counsel about what their options are. And I think once one case happens, this is going to quickly turn into either a class action or some revolving door of litigation.

jerry: And the reality is, I think, unless CrowdStrike goes to trial with this and is found not to have been negligent, and their contract [00:52:00] limitations are found to be enforceable, they stand a lot to lose.

Andrew: Yeah, that’s true. The one thing that you touched on that I thought was interesting is the EULA bit about not being used for critical infrastructure or life safety. The one other area that I have any sort of familiarity with is aviation software, which is meant, obviously, to run aircraft and important functions on aircraft.

Andrew: And that software is incredibly expensive and very highly regulated, because it is meant for that critical use case. So what that tells me is that we have this tension going on: we want relatively inexpensive, almost commodity-grade software that can run everywhere to solve this problem, but that inexpensive software doesn’t come with the level of care that goes into very highly critical software.

Andrew: I’m not explaining this well, but if you look at why it costs so much money to write aviation software, it’s because the level of diligence is [00:53:00] so high, because the consequence of failure is so high, as well as all the regulatory oversight. We don’t have that in this case, which is probably why

Andrew: we have that EULA stipulation of “don’t run it in a critical case.” But if you’re expecting that level of due care and carefulness in coding, it comes with a price, and would corporations be willing to pay that price? Instead of paying 150 bucks a node or whatever it is for CrowdStrike, are they willing to pay 5,000 a node?

Andrew: That’s where the rubber meets the road.

jerry: If you look at the case of Delta, obviously they’re an airline, and the software that runs on their planes is life-critical, but I don’t think that’s what failed here.

jerry: The things that failed, I think, were the back-office stuff: the planning and scheduling of crews, departures, and arrivals, things that are not necessarily [00:54:00] important to an airplane staying up in the air, but rather to orchestrating the movement of planes and passengers around the world.

jerry: And that is not, by itself, I think, a life-critical thing. Although in aggregate... there was certainly a story about an elderly person in Florida who missed his flight as a result of this and has not been seen since.

Andrew: Yeah, and I’m sure there are second-order effects. What I’m trying to draw a parallel to is: if you’re demanding software with zero defects, it’s much, much more expensive.

jerry: Oh, sure.

Andrew: And that’s where I’m going with this: whether we like it or not, there are trade-offs. And I only say this because this is what I’m familiar with; I’m sure it’s the same for power plants. Software that runs super-critical stuff is also super expensive.

Andrew: So [00:55:00] if you want that level of dependability, you probably aren’t going to get it on commodity, multifunction operating systems and hardware, and you’re probably not going to get it with lower-cost tooling like this. So I guess what I’m trying to say is, if companies expect zero defects, they’re going to have to pay for it.

Andrew: And we’re not in that scenario right now.

jerry: Temper your expectations, I think is what you’re saying.

Andrew: Yeah. It’s a “fast, cheap, reliable: pick one” kind of situation.

jerry: Yeah, that’s fair. I think this is one of those situations where it’s obvious in hindsight. It’s a simple failure; it’s a stupid failure. It didn’t require highly sophisticated quality assurance processes to catch. But it’s one of those things that’s obvious in hindsight, not necessarily obvious going in.

jerry: And so I think, broadly speaking, [00:56:00] you’re right. We in IT, and really civilization and society in general, have built this infrastructure around software and systems that we beat the shit out of our vendors, pardon my French, to get the lowest prices on. And then things happen, and we’re surprised.

Andrew: And generally, you might say it’s worked, like, how much...

jerry: It’s worked.

Andrew: Agree. Not paying...

jerry: It’s worked spectacularly well.

jerry: Like, how many times do we have this discussion? This is not a common problem. On the other side, I’ll just contrast that with every single month, Microsoft, and I think Oracle does it quarterly, and many other companies do too, releasing a cadre of patches for critical vulnerabilities.

jerry: So I think, to some extent, on the one hand we’ve built an [00:57:00] ecosystem that is tolerant of a certain level of software badness, or lack of quality. But on the other hand, when things like this come along, it highlights how interconnected and interdependent our systems are, and that the things we’re buying are not as resilient as they need to be.

jerry: And we have the expectation that they would never fail,

Andrew: And not on the commodity, multifunction hardware and software we’re using.

jerry: All right. Hopefully, through the magic of editing, it was not obvious that we just had a catastrophic connectivity failure, but I think we’re going to cut it off here, and we’ll pick up the rest of the stories next time.

Andrew: Sounds good.

jerry: I appreciate everybody’s time and attention. I hope you learned something, and we certainly appreciate your patronage here.

jerry: So you can [00:58:00] follow our podcast at our website, www.defensivesecurity.org. You can find back episodes on just about any podcast service, including now, thanks to Mr. Kalat, YouTube.

Andrew: Audio only for the moment, just an experiment. We’ll see how it goes.

jerry: We’ll be experimenting with video, but we both, unfortunately, have to undergo some plastic surgery things before we can

Andrew: About a 12 month intensive workout program with the world’s leading exercise physiologists, I believe as well.

jerry: Yes, but that’s to look forward to in the future. You can follow me on social media; I’m primarily on my Mastodon instance, Infosec.Exchange. You can follow me at @Jerry@Infosec.Exchange. You can follow Andy where?

Andrew: On X at @lerg, that’s L-E-R-G, and on Infosec.Exchange, also @lerg.

jerry: awesome. [00:59:00] And with that, I wish everybody a great week. I hope you have a great weekend ahead and we will talk again very soon. Thank you all.

Andrew: Thanks very much. Bye bye.

 

Defensive Security Podcast Episode 273

The Joe Sullivan Verdict – Unfair? – Which Part? (cybertheory.io)

Fujitsu Details Non-Ransomware Cyberattack (webpronews.com)

5 Key Questions CISOs Must Ask Themselves About Their Cybersecurity Strategy (thehackernews.com)

Sizable Chunk of SEC Charges Vs. SolarWinds Dismissed (darkreading.com)

CrowdStrike CEO apologizes for crashing IT systems around the world, details fix | CSO Online

Summary:

Cybersecurity Updates: Uber’s Legal Trouble, SolarWinds SEC Outcome, and CrowdStrike Outage

In Episode 273 of the Defensive Security Podcast, Jerry Bell and Andrew Kalat discuss recent quiet weeks in cybersecurity and correct the record on Uber’s CISO conviction. They delve into essential questions CISOs should consider about their cybersecurity strategies, including budget justification and risk reporting. The episode highlights the significant impact of CrowdStrike’s recent updates causing massive system crashes and explores the court’s decision to dismiss several SEC charges against SolarWinds. The hosts provide insights into navigating cybersecurity complexities and emphasize the importance of effective communication and collaboration within organizations.

00:00 Introduction and Banter
01:52 Correction on Uber’s CISO Conviction
04:07 Recommendations for CISOs
09:28 Fujitsu’s Non-Ransomware Cyber Attack
12:13 Key Questions for CISOs
32:47 Corporate Puffery and SEC Charges
33:15 Internal vs External Communications
33:52 SolarWinds Security Assessment
36:36 CrowdStrike CEO Apologizes
37:16 Global IT Systems Crash
37:57 CrowdStrike’s Kernel-Level Issues
40:55 Industry Reactions and Lessons
42:58 Balancing Security and Risk
49:26 CrowdStrike’s Future and Market Impact

01:03:46 Conclusion and Final Thoughts

 

Transcript:


jerry: [00:00:00] All right, here we go. Today is Sunday, July 21st, 2024, and this is episode 273 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight as always is Mr. Andrew Kalat.

Andy: Good evening, Jerry. I’m not sure why we’re bothering to do a show. Nothing’s happened in the past couple of weeks.

Andy: It’s been really quiet.

jerry: Last week was very quiet.

Andy: Yeah, sometimes You just need a couple quiet weeks.

jerry: Yeah. Yeah, nothing going on. So before we get into the stories, a reminder that the thoughts and opinions we express on this podcast do not represent Andrew’s employers

Andy: Or your potential future employers

jerry: or my potential future employers

Andy: as you’re currently quote enjoying more time with family end quote

jerry: Yes, which by the way Is highly recommended if you can do it.

Andy: Your big thumbs up for being an unemployed bum.

jerry: It’s been amazing. Absolutely [00:01:00] amazing. I I forgot what living was like.

jerry: I’ll say it that way.

Andy: Having watched your career from next door-ish, not afar, but not too close, I think you earned it. I think you absolutely earned some downtime, my friend. You’ve worked your ass off.

jerry: Thank you. Thank you. It’s been fun.

Andy: And I’ve seen your many floral pics. I’m not saying that you’re an orchid hoarder, but some of us are concerned.

jerry: I actually think that may be a fair characterization. I’m not aware of any 12-step programs for this particular disorder.

Andy: There’s a TV show called Hoarders, where they go into the houses of people who hoard and try to help them. I look forward to your episode.

jerry: Yes, I won’t say any more. So before we get into the new stories, I did want to correct the record on something we talked about on the last episode [00:02:00] regarding Uber’s CISO, who had been criminally convicted. Richard Bejtlich on infosec.exchange pointed out to us that it was not failure to report the breach that was the problem; it was a few other issues that Mr. Sullivan had actually been convicted of. So I’m going to stick a story into the show notes that has a very extensive write-up about the issues, and that is from cybertheory.io. In essence, I would distill it down as saying, and he was convicted, so it’s not alleged: he was convicted of obstruction of an official government investigation. He was convicted of obstructing the ongoing FTC investigation about the 2013/2014 breach, [00:03:00] which had been disclosed previously.

jerry: The FTC was rooting through their business and asking questions, and unfortunately Mr. Sullivan did not provide the information related to this breach in response to open questions. And then furthermore, he was convicted of what I’ll summarize as concealment.

jerry: He was concealing the fact that there was a felony. And the felony was not something that he had done. The felony was that Uber had been hacked by someone and was being extorted. But because, he had been asked directly, Hey, have you had any, any issues like this?

jerry: And he said, no, that becomes a concealment, an additional concealment charge. And so the jury convicted him on both of those charges, not on failure to disclose a breach.

Andy: Yeah, it’s we went down the wrong path on that one. We were a little, we put out some bad info. [00:04:00] We were wrong.

jerry: So I’m correcting the record and I certainly appreciate Richard for for getting us back on the right track there.

jerry: This article, by the way, does have a couple of interesting recommendations that I’ll just throw out there. One of them is hopefully these are fairly obvious. Do not actively conceal information about security incidents or ransomware payments, even if you’re directed to do so by your management.

Andy: Yeah. I think, let’s play that out for a second. If you’re in that situation, what do you do? Resign?

jerry: Yes. Or do you,

Andy: yeah, I think that’s,

jerry: I mean you either resign or you have to become a whistleblower.

Andy: Yeah, that’s true. Your career has probably ended there at that company either way. Most likely. But it’s better than going to jail.

jerry: It’s a lot better than going to jail. I think what I saw is that Sullivan is up for four to eight years in prison, depending on how he’s sentenced.

Andy: Feds don’t like it when you lie to them. They really don’t like it.

jerry: No, they don’t. The next recommendation is: if your company’s under investigation, get help, and potentially [00:05:00] that means getting your own personal legal representation to help you understand what reporting obligations you may have for any open information requests. And I say that because, in this instance, Sullivan had confirmed with the CEO of Uber at the time what they were going to disclose and not disclose, and the CEO signed off on it. He also went to the chief privacy lawyer, who, by the way, was the person managing the FTC investigation, and the chief privacy lawyer also signed off on it.

Like the joke goes, HR is not your friend. Your legal team may also not be your friend. At some point, if you’re in a legally precarious position, you may need your own counsel, which is crappy.

Andy: That is crazy. How much is that going to cost? Wow. I don’t know, [00:06:00] one more reason to think long and hard before accepting a role as CISO at a public company.

jerry: Yeah, this, by the way I’m skipping over all sorts of good stuff in this story. So I invite everybody to read it. And it’s a pretty long read.

jerry: It talks about the differences between directors of companies and officers of companies, and the different obligations and duties they have related to shareholders and customers and employees and whatnot. And what was very interesting is the point they were making: that CISOs don’t have that kind of responsibility, right?

jerry: They’re not corporate officers in the same way. And when you read the article, and I apologize for not sending it to you, I just realized, it was very clear that the author was pointing out that the government, I suspect at the behest of Uber, was really specifically [00:07:00] going after Sullivan, right?

jerry: Because in exchange for testimony, people got immunity in order to testify against Sullivan. And that went all up and down, including, you know, some of the lawyers. So, by the way, I think he clearly had some bad judgment here. But he also wasn’t the only one. This was a family affair, but he’s the one who’s really taken the beating. The next recommendation was that paying a ransom in return for a promise to delete copies of data, not disclose data, does not relieve your responsibility to report the issue under many global laws and regulations.

jerry: So just because you’ve gotten an assurance, after you’ve paid a ransom, that the data has been destroyed, you still, in almost all cases, are going to have a responsibility to report. And one of the things the author here says is you really should let everybody know; there are vehicles to [00:08:00] inform, at least in the U.S., CISA and the FBI, and I’m sure there are similar agencies in other countries. To help insulate yourself, do not alter data or logs to conceal a breach or other crime. That seems pretty self-evident, but I think the implication is that

jerry: that’s what happened here. And then lastly, do not create documents that contain false information.

Andy: Shocking.

jerry: Yes. So again, nothing in there that is earth-shattering, but it’s a good reminder.

Andy: yeah. And I, I don’t know if but our good friend Bob actually got out of the South American prison he’s been in for a while, and I heard from him, and he’s doing well, he’s got three new tattoos and lost two fingers, but otherwise he’s doing well. He was telling me that he once worked for a CISO that actually fabricated evidence for an internal auditor.

Andy: And thought it was a fun [00:09:00] game, and how he had a tough time knowing how to handle that.

jerry: And the ethics of how to disclose that, right?

Andy: Especially because as he described it, it was a very powerful CISO who had a reputation for retaliatory behavior to those who did not bow before him. Damn. So

jerry: yeah, Bob has all the best stories.

Andy: He does. He does. I look forward to hearing more about his South American prison stint.

jerry: All right. Our next story today comes from webpronews.com, and the title here is “Fujitsu Details Non-Ransomware Cyberattack.” It feels like it’s been so long since we’ve talked about something that wasn’t ransomware.

Andy: I feel like these bad guys just, lost a good ransomware opportunity.

jerry: Clearly they did. So there’s not a huge amount of detail, but basically Fujitsu was the victim of some sort of [00:10:00] data-exfiltrating worm that crawled through their network. They haven’t published any details about who or how, or why, or what was taken. But what was most interesting to me is that the industry right now is very taken by ransomware, or more pedestrian hacks to mine cryptocurrency or send spam or do those sorts of things.

jerry: It’s been a while. I can’t think of the last time we actually had something destructive, or something whose job was not to be immediately obvious in your environment.

Andy: Yeah. Again, the details are very sketchy, but if I had to guess, maybe this was some sort of corporate espionage. The way they described it, and again the details are sparse,

Andy: It was low and slow and very quiet, [00:11:00] trying to spread throughout their environment. It didn’t get very far. They said, what, 49 systems? And they had a lot of interesting caveats: it didn’t get to our cloud, it didn’t do that. So there are a lot of things it didn’t do.

Andy: They didn’t tell us much about what it did do. But if I had to guess, maybe some sort of corporate espionage. Or just random script kiddies; you can never reliably attribute motivation.

jerry: I’ll say it this way: intellectual property theft. The motivations for that, I guess, are an exercise left to the reader.

jerry: They did say that data was exfiltrated successfully. They didn’t say what data, but my guess is they were after some sort of intellectual property. The reason for bringing this up is not that it has a whole lot of actionable information, but more that there are still other threats out there; it’s not all ransomware and web shells and that sort of stuff. [00:12:00]
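As an aside for the defenders in the audience: one common way to hunt for a data-exfiltrating worm like the one described here is to baseline each host’s outbound traffic and flag outliers. A minimal sketch of that idea, with the function name, the minimum baseline length, and the z-score threshold all invented for illustration:

```python
from statistics import mean, stdev

def flag_exfil_candidates(daily_bytes_out, z_threshold=3.0):
    """Flag hosts whose most recent daily outbound volume is an
    outlier versus that host's own history (hypothetical thresholds)."""
    flagged = []
    for host, history in daily_bytes_out.items():
        if len(history) < 8:
            continue  # not enough history to form a baseline
        baseline, latest = history[:-1], history[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # flat baseline: any jump should stand out
        if (latest - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged
```

A truly low-and-slow attacker, as this one was described, deliberately stays under volume thresholds, so this complements, rather than replaces, egress controls and watching for unusual destinations.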

Andy: Indeed, but to be fair, that is the majority of it. Protect your cybers. You know what helps? A solid EDR. A little foreshadowing for a future story.

jerry: We’ll get there. We’ll get there. All right. The next story comes from thehackernews.com, and the title here is “Five Key Questions CISOs Must Ask Themselves About Their Cybersecurity Strategy.”

Andy: Apparently, we need to add a sixth one, which is, Am I going to go to jail?

jerry: So the key questions here. Number one: how do I justify my cybersecurity... Actually, you know what, I’m going to back up for a second, because there were a couple of other salient data points in here. The first one was that only 5 percent of CISOs report directly to the CEO, and two-thirds of CISOs are two or more levels below the CEO in the reporting chain. Those two facts indicate a “potential lack of high-level influence,” to [00:13:00] use their words. I will tell you, the placement of the CISO in an organization isn’t necessarily an indicator of how much power they have. Somebody who reports to the CEO is going to be more influential, for sure, but there are lots of different organizational designs, especially when you go into larger companies.

Andy: Sure. I would say also, if they’re highly regulated, that CISO has a lot of inherent authority because of the regulations being enforced upon that organization by external third parties.

jerry: The Ponemon, or Pokemon, Institute found that only 37 percent of organizations think they effectively utilize their CISO’s expertise.

jerry: I kind of wonder, who are they asking? Are they asking the CISOs? Anyway, I am curious about the [00:14:00] methodology behind that study. It doesn’t necessarily surprise me. Just moving somebody into a different place in the organization doesn’t necessarily mean they’re going to more fully use the talents or expertise of a CISO.

Andy: Yeah. If it’s anything like most organizations, they delegate to that CISO. The assumption seems to be that boards or executive teams would be asking deep cyber questions of the CISO, which is an odd expectation.

jerry: It is an odd expectation. And related to what you’re saying, Gartner finds that only 10 percent of boards have a dedicated cybersecurity committee overseen by a board member.

Andy: The way I would look at both of those stats is more: how much influence does the CISO have on whether the company operates in a less risky or more risky way, right?

Andy: It’s not about leveraging their expertise. It’s about how influential are they to [00:15:00] guide the company away from risk and what those trade offs are.

jerry: It also comes down to what the company values. This is financial risk management.

Andy: And the flip side is, I think a lot of executives think of CISOs as constantly calling that the sky is falling to get bigger budgets and build their empire with more people. And this is a black hole we’re throwing money into that, as this article goes into, we can’t justify.

Andy: We can’t prove the ROI on it.

jerry: Yes, exactly. So the first key question to ask yourself: how do I justify my cybersecurity budget? That is, I think, a perennial challenge that anybody in security leadership has. How do you justify, or demonstrate, that you are spending the right amount of money?

jerry: You’re not spending too much, you’re not spending too little. Generally [00:16:00] speaking, and this is like one of those mass psychosis episodes, you often do that by benchmarking yourself against your competitors.

Andy: It’s a safe answer.

jerry: And they do it by benchmarking themselves against their competitors.

Andy: You’ve got the theory of the wisdom of crowds, right? If I’m around the average, I must be doing fairly close to correct. But not all companies are the same. Not all companies have the same risk tolerance, the same corporate structure, the same financial situation. So I get it; that’s where my mind goes. What percentage of G&A is spent on cyber in my industry? That’s what I’m going to go ask for.

jerry: Number two is how do I master the art of risk reporting, which, by the way, I think is not entirely disassociated from the last one, right? Because part of your budget, and I dare say a major part of your budget, is intended to address [00:17:00] risk. What they’re really pointing out here is: how do you communicate to the senior leadership team, the board of directors, and so on, the level of cyber risk that you have in your organization, in terms that make sense to them?

Andy: That’s an incredibly challenging question, honestly.

jerry: Yeah. So something that was very interesting to me, at least, as I was reading this, because look, I struggle with all these things too, right? All five of these things, and we haven’t gotten to all of them yet, resonated with me. And what’s super interesting is we all have to make this up on our own.

Andy: You didn’t go through that section of the CISSP?

jerry: There’s not, like in accounting you have GAAP, generally accepted accounting principles; there’s really no GAAP-type methodology for risk reporting. And [00:18:00] perhaps there should be.

Andy: This is why we are often accused of being an immature industry from other well trodden business leaders who have a shared language.

Andy: We’re wizards and witches walking in speaking spells that they don’t understand out of black boxes that don’t make sense.
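For what it’s worth, one long-standing candidate for a shared vocabulary here is annualized loss expectancy: ALE equals single loss expectancy times annualized rate of occurrence, the formula behind quantified approaches like FAIR. A minimal sketch, with every scenario, dollar figure, and frequency invented purely for illustration:

```python
def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """Classic ALE: expected yearly loss from one risk scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenarios: (name, cost per occurrence, occurrences per year)
scenarios = [
    ("Phishing-led account takeover", 250_000, 0.5),
    ("Ransomware outbreak", 4_000_000, 0.05),
]
for name, sle, aro in scenarios:
    print(f"{name}: ${annualized_loss_expectancy(sle, aro):,.0f} per year")
```

Expressing risk as expected dollars per year, however rough the inputs, at least lands in the same units the rest of the leadership team budgets in.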

jerry: So I think this is an area where we can certainly mature. I would love to hear from anybody in the audience who thinks there’s a common methodology people can adopt here; I’d love to talk about that in a future episode. All right. Number three: how do I celebrate security achievements?

jerry: I have a problem with the way some of this was worded. “Public recognition of attacks that were deflected,” this is in quotes, by the way, “can simultaneously deter attackers and reassure stakeholders of the organization’s commitment to data [00:19:00] protection.”

jerry: So I’m reminded of when I read that I, I immediately thought of Oracle’s unbreakable Linux or unhackable, what do you call it?

Andy: Yeah.

jerry: It’s like putting a chip on your shoulder and begging someone to come in.

Andy: If I really dug into this: define what an attack is, and define when I’ve deflected it. Is every firewall drop log entry an attack I stopped?

Andy: I’ve seen that kind of shenanigans. Or is it more, hey, we had an incident that started and we contained it? Or is it, I don’t know, every time my email security tool stopped a phishing attack? There are all those sorts of metrics you can run, but is it valuable?

jerry: That’s where you get into, like, how many spams did I reject?

jerry: How many phishing emails did I reject? Which we make fun [00:20:00] of, right? Because they’re metrics. They’re not achievements.

Andy: But you’re trying to prove a negative here. This is, this has been the fundamental problem from day one with the industry is you’re spending money to stop something. How do you know if you hadn’t spent that money, that things would have happened?

jerry: The only thing I can say is, if you take a more capability-focused view rather than a metrics-focused view, I think that’s perhaps where the opportunity lies. We had a gap in our authentication scheme because we didn’t have multi-factor authentication.

jerry: We implemented multi-factor authentication; we closed a huge hole. A super simplistic example. But I will say, there’s another aspect of this that you have to be aware of. Perhaps I worked alongside too many lawyers, [00:21:00] but one of the pitfalls of taking credit for doing some security thing is that you’re tacitly admitting that you weren’t doing it before.

jerry: Yeah,

Andy: that’s true.

Andy: Our new version no longer does X. Wait, you were doing X before? Don’t worry about that. The fact is we’re not doing it now.

jerry: We implemented multi factor authentication. Oh so wait a minute,

Andy: Right? It’s a tough one. Yeah. But also, you can never get to risk zero. If you’re completely safe, you’ve either way overspent, or you’ve added so much friction to the business, or so inhibited the ability for people to do their jobs, that you’re now breaking the business in a different way.

Andy: You’re not going to get to risk zero. So what’s the right balance?

jerry: Yeah. And the business doesn’t want you to. I remember working effectively as the CIO for a company that [00:22:00] we both worked for once. And the COO pitched it to me in the form of a question: what is your approach to passing audits, Jerry?

jerry: Do you want to do really well? And I said, yeah, I think you should do really well. And he said, no. If you fail audits, you’re going to get fired. And if there are no issues ever found, you’re probably going to get fired, because you’re spending too much money.

jerry: So you’ve got to find the right balance, because that’s what the business wants. If you are spending enough money to do everything perfectly, that’s coming at the expense of other things the business could be investing in. I think his point was not to accept too much risk, but that doing things perfectly, as you continue to move up the [00:23:00] maturity ladder, gets more and more expensive, and the marginal utility starts to decline.

jerry: Sure.

jerry: Anyhow, all that said, it is very important, if for no other reason than morale, to celebrate. But you’ve got to be smart about it.

Andy: I wouldn’t do it publicly, frankly.

jerry: I wouldn’t either.

Maybe internally, somewhat company-wide, or at least department-wide. You need to understand what motivates your people and what their reward systems are like, and work to those. But I am very much against putting that bullseye on your back by saying you’re not hackable, or you’re about to get a free audit.

jerry: Oh, yes. So number four is how do I collaborate with other teams better? [00:24:00] So again, this whole article is aimed at CISOs, and CISOs are almost always an executive-level position. And I learned a lot, right? I ended my career, I don’t know if that’s the end or if there’s more to come, as an executive.

jerry: And I learned a lot about what that means. One of the most important aspects is that you do partner with those other people. That’s an intrinsic part of being an executive.

Andy: Yeah. It becomes about working well with other departments. And that means sometimes you’ve got to give and take and be willing to lose a battle to win the war, as they say.

jerry: So it’s super important, but I think this is not security-centric. This is a fundamental [00:25:00] tenet of what it means to be a leader in an organization.

Andy: And I think we as technology people often get promoted up with a background that isn’t well suited for that, to be completely honest, to the point where many technical people score poorly on those quote-unquote soft skills.

Andy: But if you want to get to that level of organization, it is required.

jerry: Yes, absolutely. Now, they point out that there are tangible security benefits. So, for example, building bridges with HR allows you to do things like integrate security requirements into the onboarding and offboarding processes and whatnot.

jerry: And also, having those relationships throughout the organization is very key, especially in times of crisis, like an incident or what have you. You’ve got to have the trust of the team, and the team needs to have trust in you.

jerry: The last [00:26:00] one is how do I focus on what matters most? This is a hard one, and I think in large measure it’s because there are so many variables. Every company values things differently; they have different risk appetites, they’re in different industries, they move at different speeds, they have different idiosyncrasies. They like different technologies, or they don’t. And in many instances, companies hire a CISO based on who that person is, not necessarily what they need. They hire them based on reputation: Jerry’s an incident-response-focused person.

jerry: We have lots of incidents.

Andy: Because of Jerry?

jerry: Maybe.

Andy: Oh, that’s fair. Job security.

jerry: I think you’ve got to take a step back, and as you’re figuring out how to focus on what matters most, you’ve got to [00:27:00] first define what it is that matters most.

Andy: Yeah. The other thing I would add, and I don’t know how to integrate this, is that there’s a cultural aspect of a company. The culture matters. I may want to do a highly impactful security initiative, let’s say something like DLP with a document classification system, and I may work for a large financial that’s completely on board and very comfortable with that.

Andy: Great. But if you work for a small startup, you’re going to get a lot of pushback, because that’s not their culture. From my perspective, and obviously I haven’t been a CISO, but I’ve been senior level and am currently a director, you can’t go faster than the culture of the company will allow if you’re implementing potentially friction-inducing security controls.

Andy: And so that I think can help determine your priorities. What’s acceptable to the business from an [00:28:00] impact perspective.

jerry: That’s a good point. I’ll tag on to that and say, one of the things I’ve learned over the years is that, regardless of how much money you’re given, there is only so much change that an organization can undergo

Andy: Oh yeah. That’s a good point.

jerry: In a certain period of time. So someone comes to you and says, you have a blank check; I need you to implement DLP, replace all our firewalls, and a bunch of additional very disruptive things. You’re not going to be able to do it.

jerry: Even if you have the money to do it, there’s a finite capacity for change in an organization, and that threshold is going to be different in different organizations. That’s one of the challenges you as a leader have to figure out: where that threshold is, so you don’t cross it. Because if you cross it, things start to fall apart and you don’t actually make progress on anything.

jerry: And I’ve seen that [00:29:00] happen time and time again. Hey, we’re all in, we’re going to blow the doors off the budget, we’ve got a lot of things that have to change.

Andy: Especially after a breach.

jerry: Yeah. And you end up having not accomplished anything.

jerry: You’ve burned through all the money, but you’ve not fully accomplished anything. So you’ve got to be very measured in that.

Andy: And the flip side, if you’re an aggressive go getter as a leader and you commit to some aggressive schedule and then you don’t get it done or you don’t get there as fast as you want, then you potentially have executives looking at you like you’re ineffective.

Andy: So it’s a careful balance.

jerry: And that’s Leadership 101: you’ve got to meet your commitments.

jerry: Alright. So they do talk a little bit about effective communication between the CISO and [00:30:00] up. I think that permeated those five items. But, and this maybe goes back to the idea of GAAP-style reporting, I think we as a security community often do a pretty bad job of communicating in non-technical terms. We talk about CVEs and DDoS volumes and things like that, but translating that into business impact is really what the business needs from you.

Andy: And it’s also a likelihood percentage, not a guarantee.

Andy: Correct. And it’s at best a guess. Based on best practices and observing what happens to other companies and, lots of inputs and data, but there’s no guarantees.

jerry: I think one of the struggles when I read articles like this is that they often talk about [00:31:00] things like how many fewer incidents did you have, or how many fewer breaches did you have,

jerry: and whatnot, and using that as how you communicate the effectiveness of your program or what you need to improve. I think the reality is that breaches tend to be very transformational, pivotal incidents. They’re often not really countable. You don’t stay in a CISO role and have so many breaches that you can show trends over time, right?

jerry: It’s just, if you’re in that position and you have that kind of data, something’s wrong.

Andy: Yeah. We need to learn from other people’s breaches, right?

jerry: Exactly.

jerry: All right. Moving on, the next one comes from Dark Reading, and the title is “Sizable Chunk of SEC Charges Vs. SolarWinds Dismissed.” I will [00:32:00] admit I have not read all 107 pages of the judge’s ruling, so shame on me for that.

Andy: You’re an unemployed bum, what else do you have to do?

jerry: Absolutely nothing. The SEC filed a lawsuit against SolarWinds and SolarWinds’ CISO, alleging lots of things. Everything was dismissed except for the statements the CISO had made about the security program at SolarWinds prior to the breach. There were also apparently inaccuracies in their 8-K, which, for those of you who don’t know, is a form that you have to file in the wake of a breach, as required by the SEC.

jerry: And so that was part of the case. There were other statements made post-breach that the judge, I did find in a different article, described as “corporate puffery” that is not [00:33:00] actionable. I thought that was pretty funny.

Andy: That is pretty funny. I think that needs to be a thing. I’ve got to work that into more conversations.

jerry: It’s interesting: a lot of the reaction to this suggests that there are other implications in the ruling. A lot of the post-judgment discussion has been, oh gosh, this is really a good thing, because it allows teams to communicate internally amongst themselves without fear of what you write being used against you.

jerry: However, that actually isn’t obvious as part of what the SEC was charging them with. I really want to go read those 107 pages to understand what exactly the SEC was alleging. But in some regards it’s neither here nor there. What is most interesting, though, is the charges that do remain, which are those that [00:34:00] basically said: before the breach, the SolarWinds CISO had come in, performed an assessment, found lots of problems, and documented those problems, but then went externally to customers and perhaps investors and made claims about the robustness of their security program.

jerry: And that is what the SEC is still going after, and that is what the judge is allowing them to continue pursuing.

Andy: Because the theory the SEC is going after, I’m assuming, is that accurate information should be disclosed to the investing public, so they know how to appropriately measure the risk of investing in a company, and all that stuff that comes with being a public company.

Andy: They’re very particular about making sure that the information that is disclosed is accurate and not misleading. We [00:35:00] see all sorts of stuff about misleading statements; just going back to Elon Musk’s tweets about Tesla getting him in a lot of trouble with the SEC, that sort of thing.

Andy: Like they take that stuff very seriously.

jerry: Oh yes, indeed. So, probably more to come on this after I have a chance to read the court decision. But I would definitely say: have a measured approach to communicating, especially if you’re aware that there are security gaps or weaknesses in your environment.

jerry: If you end up in a position where you are representing things radically differently in internal communications versus external communications, you should probably take a step back and ask yourself what you’re doing. I guess it probably won’t be a problem if you’re not breached, but if you are, that’s going to be exhibit A.

jerry: Yep. And as we've now [00:36:00] seen, the company isn't going to have your back. They're not going to stand in front of you and take the bullet. They're going to say, oh, look at our CISO, he was a terrible guy.

Andy: Which, by the way, is why people get so frustrated by statements put out by companies sounding like legalese and business-speak: because they're protecting themselves with very specific language for these sorts of circumstances.

jerry: Yes.

jerry: So there's one more story. It's a small thing, probably not even worth talking about. I wasn't even sure if we were going to get to it. You know what? Let's talk about it. This one comes from CSO Online, and the title is "CrowdStrike CEO apologizes for crashing IT systems around the world, details fix."

Andy: Yeah, it was it was a thing.

jerry: It was a thing. I have to tell you, I woke up on Friday to a text from my wife, who had [00:37:00] already been up for hours, asking me if I was happy that I was no longer in corporate IT. And I went, what?

Andy: What just happened?

jerry: What just happened? And so of course I jumped onto infosec.exchange and quickly learned that 8.5 million Windows systems around the world had simultaneously blue screened and would not come back up without intervention.

Andy: Like on site physical intervention.

jerry: Yes. Yeah. Apparently, though, if you could reboot it somewhere between three and 15 times, it might come back on its own.

Andy: I heard that too. I have no idea how accurate that is.

jerry: I don’t know either.

Andy: But that’s going to be the new help desk joke. Have you tried rebooting it 15 times?

jerry: Yes. Yes, it is already meme fodder for sure.

Andy: Insane amount of memes. So what happened?

jerry: CrowdStrike, I think most people know, is an [00:38:00] EDR agent that runs on probably 10 to 15 percent of corporate systems around the world. It's a significant number. They deliver these content updates, which are roughly equivalent to what we used to think of as antivirus updates, and they delivered one on Friday. These, by the way, are multiple-times-a-day updates. They aren't new versions of the software; they're quickly delivered things. So what happened was, on Friday, CrowdStrike pushed out a change, an update in how it analyzes named pipes, and the definition file they pushed out had some sort of error. The nature of the error hasn't been [00:39:00] disclosed that I've seen, at least, and that error caused Windows to crash, basically.

jerry: Like crash

Andy: hard. Crash hard

jerry: but with blue screen basically.

Andy: Yeah

jerry: and it put you in a blue screen loop at that point. Because of how CrowdStrike integrates with Windows, it would blue screen again immediately as part of the startup process. So, as you described, you would end up in a blue screen loop.

jerry: And so the only option you had was to go into Safe Mode and remove a file. People came up with all sorts of creative ways of doing that with scripts, and Microsoft even released an image on a thumb drive that you could boot. Now, where it went horribly wrong for some people, and ironically Azure was one of the most problematic places, is where you have disk encryption, [00:40:00]
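For reference, the manual fix that circulated matched CrowdStrike's public remediation guidance: boot into Safe Mode and delete the channel file(s) matching `C-00000291*.sys` from the CrowdStrike driver directory. Here is a hedged sketch of the file-location step only; the path and glob pattern come from that public guidance, not from the episode, and this deliberately lists candidates rather than deleting anything:

```python
from pathlib import Path

# Path and pattern per CrowdStrike's published remediation guidance for the
# July 2024 incident (an assumption here, not stated in the episode).
CROWDSTRIKE_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")
BAD_CHANNEL_GLOB = "C-00000291*.sys"

def find_bad_channel_files(driver_dir: Path) -> list[Path]:
    """Return channel files matching the faulty-update pattern.

    In Safe Mode, an administrator would delete whatever this returns,
    then reboot normally.
    """
    return sorted(driver_dir.glob(BAD_CHANNEL_GLOB))

if __name__ == "__main__":
    for path in find_bad_channel_files(CROWDSTRIKE_DIR):
        print(path)
```

On a machine without CrowdStrike installed, the glob simply matches nothing, so the sketch is safe to run anywhere.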

Andy: right?

Andy: You're probably most often using BitLocker. And if you don't have your recovery key, you can't access the disk without fully booting.

jerry: So lots of IT folks got a lot of exercise over the past weekend. And by the way, if they're looking a little draggy this week, go buy them a donut and say thank you.

jerry: Because they've had a

Andy: bad couple of days. Yeah, no kidding. It's also amazing how many people have turned into kernel-level programming experts on social media in the past three days.

jerry: 100%,

jerry: They were formerly political scientists and constitutional scholars and trial attorneys and epidemiologists and climate scientists and whatnot.

jerry: So it does not surprise me that they are also kernel experts.

Andy: There's been a lot of really intense finger pointing and [00:41:00] debate going on, when honestly we don't even fully know the entire story yet.

jerry: No, we don't. There's a whole lot of hoopla about layoffs of QA people and the impact of return to office, but we really don't know what happened.

jerry: What I find most interesting is that this is a process that happens multiple times a day, and this hasn't happened before. So something went horribly wrong, and I don't know if that was because they skipped the process, or because there was a gap in coverage: the set of circumstances that arose here was just never accounted for, nobody thought it was a possibility.

Andy: Which, by the way, happens in almost every engineering discipline.

Andy: We learn through failure. Okay, let me back way up and say I am not in any way a CrowdStrike apologist. In fact, I'm not a huge fan of CrowdStrike. [00:42:00] However, I'm seeing a whole lot of holier-than-thou "you should have XYZ'd" on social media, and that puts me in a contrarian mood to counter those arguments with a cold dash of reality: there are a whole lot of reasons companies do what they do in the way they do it.

Andy: Anyway, I don't want to get on my entire soapbox here, but I think it's very easy to point out failures without weighing them against benefits.

jerry: Yeah, absolutely. CrowdStrike works quite well for a lot of people. And I dare say it has saved a lot of asses and a lot of personal data.

Andy: There are certain people, like somebody we know who very commonly comments on these things, who were in a couple of articles saying that this just proves that automatic updates are a bad idea. I don't know that that's true. I would say you can't [00:43:00] say that unless you measure the value of automatic updates that have stopped breaches and stopped problems, because those updates were so rapid and so aggressive, against this outage, right?

Andy: You have to balance both sides of that scale. You can't just look at this and say, yeah, this was a massive screw-up: it caused massive chaos, a huge loss of income for a lot of people, and disrupted a lot of lives. Okay, that's bad. But weigh that against, for those using tools with automatic updates, how many problems were solved and avoided by those rapid, automatic updates, which is so difficult to measure but must be thought of.

jerry: I think the most problematic aspect of that is that the value is amortized and spread out over thousands or tens of thousands of customers over [00:44:00] a long period of time, but this failure impacted everybody at the same time and caused mass chaos, mass inconvenience, mass outages.

jerry: All at the same time. So I completely agree with you, but I think that's what's setting off everybody's alarm bells: oh my gosh, we have this big systemic risk, which realistically hasn't materialized perhaps as often as you might think. And I know, by the way, there are a lot of people talking smugly about how good it is to be on Linux and not on Windows, because this issue only impacted Windows. But I will tell you, as the former operator of a very large install base of CrowdStrike on Linux: it had problems, and a lot of them. We had big problems. I don't know that there was ever one as catastrophic as this, where it happened all at the same time, [00:45:00] but it's not like the CrowdStrike agent on Linux hasn't also had issues.

Andy: Sure. But the question becomes: what problems would you have had if you hadn't run it? What value did it bring?

jerry: That's obvious, I think. For many organizations, this is how they identify that they've been intruded on.

Andy: Yeah, I mean, cyber insurance companies basically mandate that you have EDR

Andy: for ransomware containment. For good or ill, it's now table stakes. And by the way, those cybersecurity companies out there who are currently casting stones at CrowdStrike, saying "we don't do things this way and we would never have this problem": yeah, good luck. You are the definition of glass houses.

Andy: Interestingly, that 8.5 million stat was apparently, according to Microsoft, less than 1 percent of the Windows fleet in the world, which I find fascinating: only 1 percent caused this much [00:46:00] chaos. So I wonder how many of those are second-order impacts and other indirect, secondary fallout of some critical system somewhere going down. But we've seen problems like this before.

Andy: It's just been on smaller scales. Even Windows Defender has had somewhat similar outages that caused problems. This is the whole debate of Windows Update: do you automate, do you trust it, do you test it? And I go back to, okay, you may have a problem, but is that problem worth the efficiency and speed of getting a patch out there before you get hit with the exploit for whatever recently patched problem? You can't just look at one side of the equation, which is so frustrating to me.

Andy: And so many people are out there clout farming right now, just being "I told you so's" or "you guys are just dumb," and they're not looking at the big picture at all.
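On Andy's 8.5 million figure: taking Microsoft's "less than 1 percent of the Windows fleet" framing at face value, the implied minimum fleet size is simple arithmetic (a floor, since 1 percent is an upper bound on the share; the numbers below are just the ones quoted in the episode):

```python
affected = 8_500_000   # Windows hosts that blue screened, per Microsoft
# "Less than 1 percent" of the fleet means the fleet is at least 100x larger.
implied_fleet_floor = affected * 100
print(f"{implied_fleet_floor:,}")  # → 850,000,000
```

So the claim implies a worldwide Windows fleet of at least roughly 850 million machines, which is consistent with Microsoft's public Windows install-base figures.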

jerry: So on the converse side, there [00:47:00] were huge impacts, and I think that we do have to do better. But that said, I don't think it's a wise idea to run out and uninstall your CrowdStrike agent.

jerry: There are other technologies, other ways of linking in, that are perhaps less risky, but not no-risk, for sure.

Andy: And are you talking about kernel-level integration versus non-kernel-level?

jerry: Yeah, like eBPF versus kernel modules and whatnot. But that, by the way, solves some problems and creates other problems. And so we've not seen big failures with those other companies, but have we not seen them because they're just small, or because they don't happen?

Andy: And look, let me be very clear: I am not a coder. I don't fully understand what I'm talking about here. I am just going off a rough understanding, trying to get my arms around this issue.

Andy: So take everything I'm about [00:48:00] to say with a grain of salt, but my understanding is that the advantage of being in the kernel is you've got much deeper access that's faster, more efficient, and more tamper resistant than running in user space. And as a security control that's trying to stop things like rootkits, which were a big problem back in the day, not so much now, I think that's where that came from. We've seen a number of vendors saying running at the kernel level is just irresponsible: "We don't do that, and here's why. We're differentiated because of XYZ." Okay, cool. But there must be a trade-off. Yeah, you can't crash the system like that.

Andy: But are you also as capable at detection, and what is your resource impact? I honestly don't know, right? And when you listen to these other vendors who don't use kernel mode, they of course have very compelling arguments, and they drink their own Kool-Aid and believe their own marketing.

Andy: And maybe they're right, I don't know. But I also don't think CrowdStrike is completely irresponsible for running in the [00:49:00] kernel, as some people are saying. I think it has a benefit, which is why they did it. Now, whether that benefit is worth it is the question, but I don't think they're just malicious.

Andy: So I don't know. I just see a lot of people going off on this, and admittedly I don't know enough to really participate in that conversation. It's a tough one.

jerry: I do think, my understanding at least, is that CrowdStrike was, or is, moving in the direction of a similar strategy of not using kernel mode, not using a kernel module; they just aren't there yet. So don't misunderstand: they are using this thing called eBPF.

jerry: It's a very uniform way of getting visibility into what's happening in the kernel without actually loading your own driver or module into the [00:50:00] kernel. One of the big problems I had with CrowdStrike was this constant churn of: do I patch my kernel, or do I leave CrowdStrike running? Because I can't do both.

jerry: And being out of sequence

Andy: on supporting each other.

jerry: Yeah. And that, by the way, comes back to the fact that they have to create a kernel module tailored to different kernel versions, and it depends on how the kernel changes from one version to the next.

jerry: Sometimes it's fine; most of the time it's not. And so you end up with this problem. It's less of a problem on Windows, because Windows kernels are pretty stable, and I think Microsoft does a lot more interlocking with vendors like CrowdStrike than Linux does. But in any event, I agree with your thesis there: we [00:51:00] are benefiting from technology like this while assuming it could never go wrong. That's probably not a good assumption, as we've now seen. But at the same time, I do think this could have gone better.

Andy: Absolutely. But granting the assumption that this could go wrong at some point: is planning for it to go wrong again a wise use of time and money, versus all the other problems you're likely to deal with? Is this a black swan event that isn't likely to happen often enough to bother building mitigations for?

jerry: So I was standing in line for dinner with my wife, and she asked me this question, because at the time a lot of flights were still canceled, and I didn't have a good answer: what can companies do to avoid this? Can they prepare in some kind [00:52:00] of disaster recovery way?

jerry: And the reality is, I don't think you can. You could say, you know what, I'm going to get rid of CrowdStrike and go with SentinelOne or somebody else, but you're taking a leap of faith that they won't also have a problem, or that they won't be blind to some kind of attack that CrowdStrike could have seen.

Andy: You could duplicate your infrastructure with failover and run two different EDRs, one on each of the backups, but then there's so much more cost and complexity. Are you going to run them as well? Are you going to have mastery of two different vendors?

Andy: That introduces a whole lot of complexity. It's easy to just say "go do it," but it's very complicated for a maybe. We talk about uptime and percentages, and I always want to say, again, I'm not a CrowdStrike defender, that's the funny part. I'm just frustrated with the thought leaders out there who are [00:53:00] pounding the table with "look how bad this is" without putting it in context.

Andy: If we look at the percentage of these channel file updates that went out without a problem versus the ones that didn't, is that a fair estimation of success versus failure rate? Are we holding CrowdStrike to five nines? Six nines? Three nines? Two nines? There's no perfect solution. And by the way, the closer you get to perfect, the more expensive it gets.

Andy: When we talk about uptime, a system at five nines is a hell of a lot more expensive than one at two nines or three nines. So are we being unrealistic in saying this should never happen and CrowdStrike should go out of business? Okay, fine: then how much more are you willing to pay for a system [00:54:00] that never has this problem? And how slow are you willing to accept updates against new, novel techniques being, which is what they said they were pushing out? Because they've got this tension between getting these updates out fast versus doing all the checks and QA we all want to see.

Andy: So what are you willing to trade off? Cost, complexity, time, risk, and the risk that if you don't get the update fast enough, that self-propagating ransomware hits you before you got it. And, "oh, sorry, we were in QA at the time." Are you willing to accept that? We've got to be adults in the room and look at all sides of the equation, not just point fingers at somebody when they screw up without recognizing the other side.
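Those "nines" translate into concrete downtime budgets. A quick sketch of the standard availability arithmetic (generic SLA math, not figures from the episode):

```python
def downtime_per_year(nines: int) -> float:
    """Minutes of allowed downtime per year at an N-nines availability target."""
    unavailability = 10 ** (-nines)          # e.g. 3 nines -> 0.001
    minutes_per_year = 365.25 * 24 * 60      # 525,960 minutes in an average year
    return minutes_per_year * unavailability

for n in (2, 3, 5):
    print(f"{n} nines: about {downtime_per_year(n):,.1f} minutes of downtime per year")
```

At two nines that's roughly three and a half days a year of allowed downtime; at five nines, about five minutes, which illustrates why each extra nine gets so much more expensive.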

Andy: And again, I am not a CrowdStrike apologist here. I'm frustrated with the mindset of building thought leadership by pointing out the bad things without ever balancing them against the good. Sorry, I'm a little frustrated.

jerry: It’s what we’ve built our industry around. I know.

Andy: Am I wrong? I’m not, am I wrong? Not to put you on the spot, but

jerry: The problem I have with the situation is that it reliably crashed every Windows computer it landed on.

Andy: I'm not sure that's a hundred percent true, and [00:55:00] the only reason I say that is I've seen some social media imagery of a bunch of check-in kiosks at an airport where only one of five was down. I don't know why; I have no idea why. It's very flimsy evidence, but I would say let's get a bit more root cause analysis before we can completely say that's true. It was obviously very highly effective, though, and very immediately impactful to a super high percentage of the machines it was installed on.

jerry: So I guess my question, my concern, is how this happened. I'm assuming, and I feel pretty confident, that they have some kind of testing pipeline where, before they push an update out, it goes through some standard QA checks, and something got missed.

jerry: I'm assuming something didn't happen: either it didn't crash their [00:56:00] version of Windows in their test pipeline, or it did crash and that wasn't detected, or somebody skipped that step altogether. I don't know; my concern lies there. What was the failure mode? Hopefully they'll come out of this better than they were. In general, as an industry, we advance by kicking sand in other people's faces.

jerry: That’s not

Andy: wrong. And by the way, I'm not trying to say let's just shrug and move on. Obviously we have to learn from this, we have to understand the implications, and we have to adapt to it. I'm sure every other vendor doing similar things is very curious what went wrong, right?

Andy: And hopefully we can all learn from it. Hopefully CrowdStrike will be very transparent. There's no guarantee, but hopefully. And mind you, they're going to get massively punished in the market for this, as they should. There are going to be a lot of people not [00:57:00] renewing CrowdStrike over this incident, and I have no problem with that.

Andy: That's how the industry works, and how they handle this incident is going to be highly impactful to how well they keep their customer base. But they're also massively widely deployed. Some people say they're too widely deployed, I don't know, 14 percent of the industry,

Andy: I think I heard somebody say; I don't know how accurate that is. They're one of the big boys in the EDR space, and frankly, having met a number of their people, they know it and they have some swagger. So maybe that'll knock them down a notch. Admittedly, when I first heard this, I thought, couldn't have happened to a better company.

Andy: But the other aspect is Microsoft. Man, they're taking a bunch of crap over this because it was their systems, but as far as we know it was really not their fault at all. They're trying to step up, though: they're putting out recovery information, they're trying to put out tooling.

Andy: They're trying to help, which I appreciate. But [00:58:00] it's certainly a mess, don't get me wrong. And I know I went down a ranty rabbit hole about the stuff I didn't like, partially because I'm assuming people who listen to the show already know the details, right? There's no reason for us to rehash all the basics.

Andy: We're trying to get into what we think people care about.

jerry: Yeah, I guess the net point is: shit happens. I think we have to be pragmatic. EDR is an incredibly important aspect of our controls, and I think auto-update is as well. The whole point is to be as up to date as you can be, because the adversaries are moving very fast.

Andy: Yeah. And we as an industry are not moving away from auto-update. AI is the antithesis of manual updating, guys. Buckle up: if you don't like auto-updating, you're not going to like AI much.

jerry: So we [00:59:00] have to find a way, I think, to coexist, and I don't have a lot of magic words to say. This happened to CrowdStrike, but it happened to McAfee too, and it is an incredible coincidence that the CEO of CrowdStrike was the CTO of McAfee when that happened. It's happened to Symantec, and I think it's happened to Microsoft multiple times.

jerry: Now, I think the difference between those and this, again, is the time proximity: all of these eight and a half million systems went down at roughly the same time. And in contrast with a lot of the others, these were almost all corporate or business systems.

jerry: Because you don't run CrowdStrike at home. Actually, Amazon's been trying to sell me CrowdStrike for my home computer, but generally you don't run CrowdStrike on your home PC.

jerry: You run them on, on, the stuff that you care [01:00:00] about, because it’s really freaking expensive.

jerry: And when those systems go down, the world notices. When eight and a half million of them go down all at the same time, it becomes big news, and lots of people consternate over it. And look, again, as an industry, there's so much clamoring for airtime, and we love a crisis.

jerry: We love to talk about why you shouldn't be using kernel modules, why you shouldn't have auto-updates, why you shouldn't do this, why you shouldn't do that. It's what we do, and it's annoying. And sometimes you can cross the threshold of causing more problems than you're solving, because if you're trying to solve a problem that may never happen again in your career by ripping things out or duplicating your EDR environment, it's just not necessarily cost effective.

jerry: So I don't have a great answer. Look, this has been an industry-wide problem, [01:01:00] and I don't even think everybody's fully recovered. I don't know how many of the eight and a half million systems are still sitting there with a blue screen, but it's not zero, I'll tell you that.

Andy: Delta is still canceling flights, just as one small, tiny example.

jerry: Yeah. I had packages delayed; UPS is saying "your shipment is delayed because of a technology failure." So it's far reaching, but I think we have to be thoughtful and not have a knee-jerk reaction to what happened here.

jerry: I think CrowdStrike has a lot to answer for. And one of the quotes in this article is hilarious. I'll read it here, quoting the CEO: "The outage was caused by a defect found in a Falcon content update for Windows hosts," Kurtz said, as if the defect was a naturally occurring phenomenon discovered by his [01:02:00] staff.

Andy: Probably about 80 lawyers are all over every single sentence being uttered right now.

jerry: Oh yes. You never accept responsibility, at least not until you know more; that's the first law. But look, I'm guessing the fog of war is thick over there right now.

jerry: I'm sure they know what happened by now. And like you said, hopefully they will be transparent about it, but

Andy: Holy cow. What is interesting, if you get into their blog about the technical details: 04:09 UTC is when they put out the bad file, and it was fixed by 05:27 UTC. So an hour and 18 minutes,

jerry: 78 minutes.
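Checking that window against the quoted timestamps (assuming the July 19, 2024 incident date; the times are the ones cited from CrowdStrike's blog):

```python
from datetime import datetime, timezone

bad_file_pushed = datetime(2024, 7, 19, 4, 9, tzinfo=timezone.utc)   # 04:09 UTC
fix_deployed = datetime(2024, 7, 19, 5, 27, tzinfo=timezone.utc)     # 05:27 UTC

minutes_exposed = int((fix_deployed - bad_file_pushed).total_seconds() // 60)
print(minutes_exposed, "minutes")  # → 78 minutes
```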

Andy: Yeah. So that's pretty fast, right? Obviously they knew they had a problem very quickly, but it's too late once it's out there, especially because the machines are down: you can't push a fix to them. It's a worst case scenario for them, in that they couldn't [01:03:00] push out an autofix. As a side thing, we're also seeing a lot of bad actors jumping on this, trying to send out malicious content under the guise of helping with CrowdStrike issues.

Andy: So that’s always fun.

jerry: Dozens, maybe hundreds of domain names registered, many of which were malicious, some of which were parodies. Good times. Anyway, not a lot happened on Friday or last week. Hopefully it'll be another quiet week.

Andy: It is for you, unemployed bum.

jerry: I don’t, it’s interesting, I don’t know how I ever had time to work.

Andy: I, I think you need to put some air quotes around work, but sure.

jerry: Ouch.

Andy: All right, we have gone pretty long today. Maybe we should wrap this bad boy up.

jerry: Yes, indeed. I appreciate everybody's time. Hopefully this was interesting to you. If you like the show, you can find it [01:04:00] on our website, www.defensivesecurity.org. You can find this podcast, and the 272 that preceded it, on your favorite podcast app, except for Spotify.

jerry: I’m still working on that. And you can find Lurg, where?

Andy: I'm on X slash Twitter at L E R G, and also on infosec.exchange at L E R G, "Lerg."

jerry: You can find me at @jerry on infosec.exchange, and not so much on X anymore. And with that, we will talk again next week. Thank you, everybody.

Andy: Have a great week. Bye

jerry: bye.