
Defensive Security Podcast Episode 277

In this episode, Jerry Bell and Andrew Kalat discuss various topics in the cybersecurity landscape, including the influence of cyber insurance on risk reduction for companies and how insurers offer guidance to lower risks. They touch upon the potential challenges with cybersecurity maturity in organizations and the consultant effect. The episode also goes into detail about issues surrounding kernel-level access of security tools, implications of a CrowdStrike outage, and upcoming changes by Microsoft to address these issues. They recount a case about a North Korean operation involving a laptop farm to gain employment in U.S. companies, posing major security concerns. The discussion highlights the pitfalls of relying on end-of-life software, especially in M&A scenarios, and how this could be a significant vulnerability. Lastly, they explore the massive data breaches from Snowflake and the shared security responsibilities between service providers and customers, emphasizing the importance of multi-factor authentication and proper security management.

Links:

https://www.cybersecuritydive.com/news/insurance-cyber-risk-reduction/724852/

https://arstechnica.com/information-technology/2024/08/crowdstrike-unhappy-with-shady-commentary-from-competitors-after-outage/

https://www.cnbc.com/2024/08/23/microsoft-plans-september-cybersecurity-event-after-crowdstrike-outage.html

https://arstechnica.com/security/2024/08/nashville-man-arrested-for-running-laptop-farm-to-get-jobs-for-north-koreans/

https://www.darkreading.com/vulnerabilities-threats/why-end-of-life-for-applications-is-beginning-of-life-for-hackers

https://www.cybersecuritydive.com/news/snowflake-security-responsibility-customers/724994/

 

Transcript:

Jerry: Here we go. Today is Saturday, August 24th, and this is episode 277 of the defensive security podcast. My name is Jerry Bell and joining me today as always is Mr. Andrew Kalat.

Andrew: Good evening, my good sir Jerry. How are you?

Jerry: I am awesome. How are you?

Andrew: I’m good. I’m good. I’m getting ready for a little bit of a vacation coming up next week So a little bit of senioritis. If I’m starting to check out on the show, you’ll know why

Jerry: Congrats and earned. I know.

Andrew: Thank you, but otherwise doing great and happy to be here as always

Jerry: Good. Good deal. All right. Just a reminder that the thoughts and opinions we express on this show are ours and do not represent anyone else, including employers, cats, relatives, you name it.

Andrew: various sentient plants

Jerry: Exactly. Okay. So jumping into some stories today. First one comes from cybersecuritydive.com, which, by the way, has a lot of surprisingly good content.

Andrew: Yeah, I have enjoyed a lot of what they write. We’ve covered a couple good stories from there.

Jerry: Yeah. Yeah. So the title here is “Insurance coverage drives cyber risk reduction for companies, researchers say.” The gist of this story is that there were two recent studies done, or reports released: one from a company called Omdia and another one from Forrester, which I think we all know and love.

And I’ll summarize it and say that both reports indicate that companies which have cyber insurance tend to be better at, quote, reducing risk, more likely to detect, respond, and recover from data breaches and malicious attacks compared to organizations without coverage. So I thought that was a little interesting.

On the other hand, to me it feels like a bit of availability bias. By that, what I mean is: if you go survey people who work out at the gym about their diet, you will probably find out that they eat a healthier diet than the public at large.

Andrew: But I go.

Jerry: you just go.

Andrew: I, look,

Jerry: I’m not saying, I’m not saying everybody, right?

Andrew: At least I show up, right? And I’ve been told showing up is half the battle.

Jerry: It is half the battle, that’s right. Knowing is the other half.

Then doing is the other half.

Andrew: I will say, speaking of G. I. Joe quotes, I thought catching on fire was going to be a far bigger problem in my life than it turned out to be.

Jerry: That and quicksand.

Andrew: I, we were warned a lot about that as children of the 80s.

Jerry: quick, quicksand.

Andrew: Heh.

Jerry: Quicksand, I lived in fear of quicksand, but it turns out it’s really not that big of a concern.

Andrew: For as much as I heard “stop, drop, and roll,” I’ve never actually done it.

Jerry: Yet.

Andrew: That’s true. The day is young. Anyway, back to your story. I think you’re right. I will also say, having worked with a number of these companies, they do interestingly have their own incentives toward trying to keep you from getting hacked, because otherwise they have to pay out. So they do push certain things. I’ve seen it myself, and it doesn’t matter where or when, but if you have things like one of the well-known EDR tools well deployed, they might cut you a break on your rates. Because they have their actuarial tables saying, hey, if you’re using certain bits of technology, that lowers your risk of, usually, ransomware, right? So they

Jerry: Sure.

Andrew: seems to me, my opinion is, that these insurance companies feel some of the well-known EDR brands in a Windows environment are very effective, or decently effective, at stopping ransomware. Therefore they’re less likely to pay out, therefore they lower your rates. So there might be some of that too. They do tend to give companies guidance on what they see across their industry to reduce risk.

Jerry: I think that makes sense. I’ll say, on one hand, like I was saying before, I think companies that buy cyber insurance are probably more mature, more invested in protecting their environment than others. But I think there’s also this consultant effect: when you want to drive change, whatever kind of change that is, reorganizing, revamping your security program, justifying additional expenses for anything, outside guidance typically carries a lot more weight than something that comes from internal.

Andrew: Sad but true.

Jerry: and so I think, yeah, anybody who’s been in the industry for a long time, or really any amount of time, knows this. It’s the CISO trick, right? When you come into a new organization as a CISO, the first thing you do is go off and hire a big-name consultant.

You burn a half a million bucks on a consulting engagement. And at that point, it’s not you telling the company, hey, we’ve got to spend a bunch of money to improve our security program. It’s some hard-to-argue-with independent third party who is making that assessment. And to some extent you argue with that at your own peril, right?

Because now it’s an assessment that becomes exhibit A if something goes wrong, which is both a blessing and a curse. But my experience is it certainly helps a lot. And I think this cyber insurance, with its somewhat prescriptive guidance and expectations around the kinds of controls and technologies you need to have in place, is a very similar kind of thing, right?

If you’re engaging with them, they’re going to be opinionated on what you should and shouldn’t be doing, and then, like a consulting engagement, it’s a third party giving you that guidance. And so I think that tends to carry a lot more weight.

Andrew: Agreed on all points. The only caveat I would say is that sometimes these recommendations from insurance companies are not customized to your particular risk environment or situation. They are very broad approaches to reducing risk across many different types of environments, with many different risk profiles, technology stacks, and all that sort of stuff. So they’re somewhat generic recommendations, I think.

Jerry: I think you’re probably right. In any event, I thought it was quite interesting. Certainly having that insurance can help. I will tell you, in my time as a CISO, in dealing with customers and to some extent business partners, there was, I would say, a growing expectation that you have to have cyber insurance.

Actually, I experienced firsthand quite a few customers writing into contracts that you have cyber insurance. Now, I don’t know how far and wide that permeates the industry, but I think it’s probably becoming a lot more common these days, because companies have this interdependence, and so it’s not just a cloud service provider relationship where that kind of risk can manifest. Look at, over the, what now, 12, 13 years we’ve been doing the show:

How many times have we talked about a company like, let’s say, Target or Home Depot getting hacked as a result of something happening with one of their suppliers? And so I think, as time goes on, we’re going to see that becoming kind of table stakes to have these business relationships, especially with larger and more mature companies.

Andrew: Why do you think that is? What do you think the third party is assuming you will get from that insurance? Just that you have the ability to recover from an incident and sustain as a going concern? Or do they assume that if you have insurance, it comes with requirements that level up the maturity of your program? What value do you think that third party sees in their business partner having cyber insurance?

Jerry: That’s a great question. I think it’s both, actually. I think there is this naive view that if something bad were to happen, this insurance would provide that buffer. It would make sure the company didn’t go out of business. But the reality is that some of the really large hacks can happen to relatively small organizations who are, I would say, fairly highly leveraged, at least in terms of their insurance policy. So yeah, it’s great, they may have a $5 million insurance policy, but if they’re, let’s say, a hundred-million-dollar company and they get hit with $50 million in breach fees, their $5 million in insurance coverage isn’t really going to go very far.

So I don’t know that it’s extraordinarily useful in terms of protecting customers from harm. I think there’s a facade that it provides. And I also think it gives at least a segment of roles at companies this warm, fuzzy feeling that somebody else is looking over their shoulders.

In that respect, it’s not different than, like, a SOC 2 or an ISO certification or what have you.

Andrew: I wonder if there’s some sort of implied, hey, if you’ve had ransomware, you can recover faster. The other thing I think about is the perverse incentive. So when we look at insurance in general, it’s to shift risk. It’s to shift

Jerry: I

Andrew: risk to a third party. So is there the risk that an executive committee will say, hey, we don’t need to invest much in cybersecurity, because we have insurance if something bad happens?

Jerry: mean, I would love to sit here and say no, that would never happen. I don’t think it happens at every organization, but I definitely expect it happens more than it should.

Andrew: Yeah, it’s interesting. It’s an interesting interplay of competing priorities when you start to introduce these sorts of things, and what sort of behavioral economics comes into play.

Jerry: Yeah, absolutely. All right. Anyway, go talk to your insurance carrier; it might help you with your internal program and justify additional improvements. Our next story comes from Ars Technica, and the title here is “CrowdStrike unhappy with ‘shady commentary’ from competitors after outage.”

Andrew: I’m shocked. Shocked, I say.

Jerry: Totally surprised by this. So we’ve talked about this several times, and I’m sure we’ll talk about it several more times. CrowdStrike obviously had a pretty devastating snafu with one of its products that caused probably the largest single meltdown of IT in history, and a lot of their competitors have been capitalizing on that outage. And so now this story is talking about, in the wake of some of the back-and-forth, tit-for-tat mudslinging that’s been going on, I think they call out SentinelOne in particular, CrowdStrike is, I think, getting a little peeved at how their competitors are behaving, basically saying, hey, this could have happened to anybody.

And I think there are a lot of differing opinions in the industry, based on my experience and exposure to the industry. I don’t think everybody’s on that bus. I think there are a lot of people who think that, no, this really would be a lot less likely with other companies. Although it is interesting that SentinelOne is, I think, one of the more aggressive mudslingers, but they also, by the way, as far as I can tell, do access the Windows kernel.

And in fact, the next story we have actually talks directly about that.

Andrew: Yep, they do. And this goes back to something I don’t have expertise in, so I’m just dancing around and pontificating on something I can’t be authoritative about. But what I keep seeing is that most security tooling vendors feel they need to be in the Windows kernel to be effective, the way Windows is architected today. It’s interesting when they, they being various competitors of CrowdStrike, talk about safer methodologies, whatever that means, and I think that somewhat implies perhaps not operating at the kernel level. However, safer in terms of not causing an outage, perhaps, but are they as effective at spotting and stopping malware? I don’t know. My assumption is there’s always some sort of trade-off. We’ve got most of the industry wanting to operate at the kernel level, and we’ve got another story that talks about this a little bit, with Microsoft themselves saying maybe we can find ways to make this effective. It seems to me, not having worked at those companies, that operating at the kernel level allows these security tools to be more resilient against malware trying to shut them down, and in theory faster and more effective. If they operate at the user level, in user space, the implication I’m getting from these articles is that malware could shut down the anti-malware tool and do whatever it wants to do. That appears to be harder at the kernel level: the tool is better able to protect itself and spot things at a deeper level in the operating system. I don’t know if that’s true, but most of these companies seem to operate that way. And in fact, there was even an implication, we talked about it on a previous show, from Alex Stamos, who’s the newly appointed CSO, or CISO, one of the two, over at SentinelOne.

Jerry: CTO. CTO.

Andrew: Now it says Chief Information Security Officer in this particular article.

Jerry: Oh, okay.

Andrew: All right. Anyway, he talked about, hey, we’ll back out of using the kernel if all of our competitors will as well. So there’s clearly some advantage to being there, and I don’t know that anybody really wants to talk about that.

Jerry: I think we are talking about this as if it’s one monolithic choice: you’re either there or you’re not there.

Andrew: Yeah.

Jerry: that’s probably not the right way to think about it. I suspect the nuance is that there are certain kinds of functions you don’t need to perform in the kernel.

And as an example, you don’t need to parse your definition files in the kernel. You could do that in user mode and then pull the results into the kernel module. I suspect there’s some of that. It certainly adds a lot of complexity, and I’m not going to argue that. I don’t know anything about SentinelOne and their technology, but I’m going to guess that when they’re trying to throw rocks at CrowdStrike, what they’re probably saying is: we do more things outside of the kernel. But like you said, if they were to completely move out of the kernel, their ability to function would be impaired. And that kind of dovetails into the second story about this, which is from CNBC. The title there is “Microsoft plans September cybersecurity event to discuss changes after CrowdStrike outage.”

And there was, I don’t know if this was directly an outgrowth of the comment that Ed Bastian, the CEO of Delta, made to CNBC right after the outage. He said something like, we don’t see this sort of problem with Apple. And, by the way, I’m assuming Ed had heard that from his security people: hey, Apple just doesn’t allow this. Which is true, right?

They are much stricter about this kind of access. Microsoft will certainly point to the decision that was rendered against them in the EU that forced them to open this up, because, by the way, Microsoft is a direct competitor to both SentinelOne and CrowdStrike when it comes to endpoint security products.

They have their own product, which is in contrast to Apple. Apple doesn’t really have an equivalent to Microsoft Defender, right? They,

Andrew: or a single brand around security. They do things a little differently. This is a tough comparison, and I struggle when I hear this, and I get frustrated, because it’s in a vacuum that isn’t looking at market share. It isn’t looking at context. It isn’t looking at all the tradeoffs Microsoft had to make to run an open hardware ecosystem, and all the backward compatibility choices they made that Apple didn’t, to just say Apple does it better.

And I’m not an anti-Apple fan. I use a Mac all day long, but it is disingenuous to say that outside of the context of everything else around that ecosystem that has contributed to this. And all the frustrations people have with Apple being so hardcore about, oh yeah, sorry, this hardware is no longer supported, go away. Or, where is Apple’s server ecosystem? He’s not wrong, but it’s also like one-tenth of the story.

Jerry: Sure. You

Andrew: like that. It’s not just that Microsoft is stupid and Apple is smart. There’s so much more that goes into this. That’s my frustration.

Jerry: know, as a society, though, we’ve boiled everything down to 15-second soundbites. It’s just the way of the world.

Andrew: You’re not wrong.

Jerry: We’ve lost our tolerance for nuance.

Andrew: But that’s your job, Jerry. To bring the nuance back.

Jerry: That’s exactly right. Microsoft, on September 10th, which is coming up fast, is going to have this summit with endpoint security providers. And I think what they’re trying to establish is a set of best practices around what is and is not done in the kernel, so that they can avoid catastrophes like this going forward.

And then also, as we’ve talked about in the past, they’re going to start trying to encourage these companies to use the eBPF interface, which is an alternative way of hooking into the kernel. It provides probably not the exact same level of control and visibility, but something very substantially similar, without some of the downside.

So I think Microsoft’s eBPF implementation is rapidly maturing relative to what’s been out there for Linux for some time, but in my experience it’s not something the security industry has really embraced yet. I think this is probably going to be the forcing function that really drives us in that direction.

And this, by the way, may be the thing that ultimately enables Microsoft to say: no more access to the kernel. If you want to do this, you’ve got to do it through this particular feature, like eBPF. That’s how I see this playing out. If you were to look five years down the road, I don’t think companies are going to be hooking directly into the kernel.

I think they’re going to be forced through a function like eBPF. It’s just Jerry’s wild-ass speculation, though.

Andrew: It makes sense. One quote I do want to mention from this article, which kind of goes to what we were saying in the previous story: quote, software from CrowdStrike, Check Point, SentinelOne, and others in the endpoint protection market currently depends on kernel mode. Such access is needed to,

Jerry: Right.

Andrew: quote, monitor and stop bad behavior and prevent malware from turning off security software, end quote, a spokesperson said. So that kind of

Jerry: Yeah.

Andrew: what we were saying earlier, that there’s clearly some advantage there. I’m

Jerry: Yes.

Andrew: sure that they’re trying to get, like you mentioned, a new methodology to give them that same capability without being deep in the kernel.

Jerry: But I think both customers of these endpoint products and the manufacturers of them are going to say: okay, fine, but then tell me how you’re going to help our endpoint security product not get killed by the ransomware companies, the adversaries, who are pretty adept at stopping things even when they’re running in the kernel today. It’s a lot easier to do with user-mode processes.

So I think there’s going to be a bit of a meet-me-in-the-middle here, where Microsoft is going to say, get your crap out of the kernel, and those companies are going to say: then give us something. Enable us. Because I don’t think they’re fully enabled today. Yes,

Andrew: Otherwise it wouldn’t be fair. And I think that’s where the EU was coming from with their ruling years ago that started all this. So make no mistake, I guess is what I’m saying here: there is absolutely a competitive standoff going on. These companies are frenemies in this situation. They don’t want to back out and leave Windows, their competitor, as the only one with a capability they can’t match, because then they can’t compete.

Jerry: Exactly. And by the way, Microsoft, we’ve talked about it before: we have seen instances where Microsoft has shot themselves in the foot too, right? It’s not unprecedented that Microsoft’s own updates have caused outages. So I’ve got to believe they’re thinking about this too.

Say they kick all the other endpoint security products out of the kernel, and then suddenly they’re the cause of the next CrowdStrike-scale outage, because their security product is now the only one in the kernel. How bad would that be for them? Holy crap.

Anyway, I think that’s what we’re going to see.

Andrew: Agreed. Sorry, I stepped on you there.

Jerry: No, all good. So our next story is also from Ars Technica, and this is a bit of a follow-up from the KnowBe4 story that we talked about, I think, two or three episodes ago. The title here is “Nashville man arrested for running laptop farm to get jobs for North Koreans.”

If you recall, the security awareness company KnowBe4 published a blog post about how they had hired what turned out to be a North Korean agent. The way that went down was they interviewed, selected, and hired a person who turned out to be a North Korean citizen, and they shipped the laptop to a laptop farm.

And I actually had some questions about how that worked. This article explains how it goes down, and it’s very enlightening. So this person, Matthew Isaac Knoot, lived in Nashville. He had a relationship with a set of people in North Korea, and it looks like it was a fairly sophisticated operation, where he would help track down identities they could use.

He provided a place for North Koreans, who were being hired, unknowingly by the way, by U.S. companies, to have their laptops sent to. He would receive the laptops, put them on his home network, install remote access software, and allow the North Koreans remote access into the laptops so they could, quote, do their work.

And it’s just fascinating. I guess they said that each one of the North Korean employees was making about $250,000 over the roughly year-long period this was going on. The allegation by the U.S. government is that the money these people were earning was in turn being used by North Korea to fund their weapons program.

So obviously not super awesome. And they do go on to say in the article that, by the way, this is not a one-off situation. They refer to another person, Christina Marie Chapman, down in Arizona, who basically did the very same thing. So it’s fascinating. I didn’t realize this was as large of a problem as it is, but apparently it’s becoming an industrial-scale operation run by individuals.

I’m surprised.

Andrew: Yeah, it’s interesting. It’s a little different from the KnowBe4 case, because here it looks like the North Korean IT employees are doing legitimate work. They’re not immediately installing malicious software; they’re trying to earn their wage, as it were. There’s no overtly malicious activity in this case.

But I’m also very curious how the money was moved over to North Korea. I’m sure it’s paid to a U.S. bank account, but then what does it look like to get it over to North Korea, which is somewhat nontrivial? I don’t know if they used Bitcoin or something like that. But that’s a big part of the charges here, the wire fraud, that sort of thing.

But the other thing I think about is: if you as an employer don’t allow your random employees to install software, it would stop a lot of this. I get that’s a big cultural taboo, and there’s a lot of gnashing of teeth around that topic, but if they couldn’t install remote access tools, or if you as an IT department or security department monitored for those remote access tools, it certainly would stop a lot of this.

It just wouldn’t work, unless some other methodology is found. As a way to fight this, it’s one more reason to not necessarily let local users have full admin rights.

Jerry: Even if you do, I think it’s very prudent to actively look for, block, and investigate people who are installing remote access tools, because remote access tools, whether it’s RDP or TeamViewer or any of the myriad other software, have been the source of so many security incidents over the years.

And in fact, it’s one of the common ways that garden-variety fraud is perpetrated: the Windows help desk scam, where they’ll call you up and ultimately install some sort of remote access software to get into your system. So I think this is really important.
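The kind of hunting Jerry describes can be sketched in a few lines. This is a toy illustration only: the tool names and the process snapshot are made-up examples, and real detection matches on binary hashes, code signers, and behavior, not just process names.

```python
# Toy sketch: flag running processes whose names match known remote-access tools.
# The watchlist is illustrative and far from exhaustive.
KNOWN_RAT_NAMES = {
    "teamviewer.exe", "anydesk.exe", "vncserver.exe",
    "ammyy.exe", "rustdesk.exe", "atera.exe",
}

def flag_remote_access_tools(process_names):
    """Return, sorted, the process names that match the watchlist (case-insensitive)."""
    return sorted(
        name for name in process_names
        if name.lower() in KNOWN_RAT_NAMES
    )

if __name__ == "__main__":
    # A hypothetical process snapshot; a real tool would enumerate live processes.
    snapshot = ["explorer.exe", "TeamViewer.exe", "chrome.exe", "AnyDesk.exe"]
    print(flag_remote_access_tools(snapshot))  # ['AnyDesk.exe', 'TeamViewer.exe']
```

In practice you would feed this from an EDR or asset-inventory feed and treat a hit as the start of an investigation, not proof of fraud.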

Now I will say, I think this particular set of scams was reliant on installing remote access software. But if they were sophisticated, network KVMs are pretty cheap these days. So it’s not necessarily a home run to say we’re fully protected because we don’t allow that and would certainly detect it.

There are other ways around it. It’s another step, and I can imagine it would perhaps increase the cost and complexity of hosting these, but probably not prohibitively. And it goes back to: especially in a remote working situation, you’ve got to have good diligence on, and good awareness of, who you’re hiring.

And by the way, I say that as somebody who is a full-throated supporter of remote work.

Andrew: Yeah, certainly. But I also feel like you’ve spent a lot of time thinking about this. Is this your new post-retirement career? Are you setting up laptop farms?

Jerry: Well, my kids have moved. My oldest son has moved out, and someday soon my youngest will move out. So I’m trying to figure out: do I go the Airbnb route, or the hosting-laptop-farms route? I don’t know yet.

Andrew: That explains the eight new air conditioning units you just added onto your house.

Jerry: Yeah. Then the Bitcoin mining, like you gotta diversify.

Andrew: You having free time is dangerous. I don’t know that this is a good thing at all.

Jerry: What is the saying about idle hands?

Andrew: Indeed.

Jerry: All right. The next article comes from Dark Reading, and the title here is “Why end of life for applications is the beginning of life for hackers.” This is a big problem. The gist of this story is that end-of-life applications are a boon for threat actors. They make reference to 35,000 applications moving to end-of-life status over the course of the next year.

I think that’s probably optimistic, and it depends on how you define application, for sure. But look, in my personal experience, this is probably one of the larger problems we have in IT. We talked last time about patching and how nobody wants to patch stuff.

But there are so many issues that come along with using end-of-life software. Not the least of which, by the way, is that most vulnerability management programs are built around subscribing to vendor alerts to understand that a patch needs to be applied. When you’re using end-of-life software, that doesn’t happen anymore. It just goes quiet.

Andrew: Right, vendors don’t announce vulnerabilities after end of life, much less issue patches.

Jerry: Yeah, exactly.

Andrew: Yeah.

Jerry: Yeah, most of the time your vendor notifications are not vulnerability-based, they’re patch-based. They announce the availability of a patch to fix a vulnerability, and so now you know you have a piece of work that has to be done: a patch was released, and you’ve got to go apply that patch.

With end-of-life software, there’s no patch. A lot of vulnerability scanners work in a similar way, especially as it pertains to less well-known applications. Now, certainly if you’re using Tenable Nessus and you’re running an out-of-support version of Linux or Windows, it flags that itself as a critical vulnerability, but you don’t have any granularity about what the actual technical vulnerabilities are, because they don’t know.

It’s just: it’s end of life, who knows what sort of vulnerabilities there are. And it starts to descend into obscurity after that, when you get into open source components and whatnot. You’re simply unaware that they’re not being maintained anymore, and that becomes a big problem. I will also say, they talk a little bit about how companies can defend against letting things go end of life.

But I will say, in my experience, it’s an easy trap to fall into when something goes end of life, right? Because, hey, it’s end of life, but there are no known vulnerabilities, and we have this other thing that needs to be done that’s super high priority. It’s going to make a billion dollars, blah, blah, blah.

And at the time, that’s true, right? It’s a low-risk thing. Your version of WordPress is out of date, but there are no known vulnerabilities. But then you start to collect these things, and suddenly you’re buried under too much technical debt. It’s really hard to get out from under, and you end up in this position where you’ve got so much of this debt that you have a hard time even understanding not only what all is end of life, but whether there are actually vulnerabilities.

At the time you made that risk acceptance to allow this end-of-life thing to happen, you knew there weren’t any. But are you actually keeping up with the vendor and with the industry to know whether that has changed? If you’re talking about one thing, one application, it’s fairly easy to manage.

But once you start accumulating a lot of these, it becomes really unmanageable.
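The trap Jerry describes, individually low-risk EOL acceptances piling up into unmanageable debt, is partly an inventory problem, and even a trivial report helps surface it. Here is a minimal sketch with made-up product names and dates; a real version would pull EOL dates from vendor notices or a service like endoflife.date rather than a hardcoded dict.

```python
# Sketch: given an inventory with vendor end-of-life dates, report what is
# already past EOL and what goes EOL soon. All names and dates are illustrative.
from datetime import date, timedelta

INVENTORY = {
    "legacy-wordpress": date(2023, 11, 1),
    "app-server":       date(2026, 6, 30),
    "old-linux-distro": date(2024, 5, 1),
}

def eol_report(inventory, today, warn_days=180):
    """Split inventory into (past EOL, approaching EOL within warn_days)."""
    past, soon = [], []
    for name, eol in sorted(inventory.items()):
        if eol <= today:
            past.append(name)
        elif eol <= today + timedelta(days=warn_days):
            soon.append(name)
    return past, soon

if __name__ == "__main__":
    past, soon = eol_report(INVENTORY, today=date(2024, 8, 24))
    print("past EOL:", past)   # past EOL: ['legacy-wordpress', 'old-linux-distro']
    print("EOL soon:", soon)
```

The point is not the code but the discipline: the report only works if every risk acceptance lands in the inventory with a date attached.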

Andrew: Yeah, you echoed a lot of the notes I had as well. Once you get so far behind, it’s that much more difficult to get caught back up. And then it becomes that much more of a fight, comparing against other higher-priority things, to work on patching or massive upgrading, as opposed to just keeping things up to date bit by bit. The other thing I think about is: if something is that far past end of life, it’s not just a security thing. You don’t necessarily know if other interrelated components support that version anymore, or are tested against it, and you might start seeing some weird buggy artifacts as a result. And you brought up the open source thing. Most of these end-of-life checks usually come from some sort of end-of-life policy statement from a commercially supported application or operating system. The problem with a lot of these open source dependencies and third-party packages is they don’t have that.

They don’t have any sort of published end-of-life or end-of-support methodology. So how do you know when something goes end of life when it’s open source? It hasn’t been updated in three years. Is that because it’s just really stable, or because it’s been abandoned? And what criteria do you want to use there?

Do you have a criterion that says, hey, we need to use currently maintained third-party dependencies in our code? Okay, how do you define what that is? Something that has had an update in the last X months? We see all the time that open source packages just become abandonware, without any notification, without any announcement.

It’s just in hindsight you go, oh yeah, that guy stopped working on that three years ago. Who knows.
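That “has it had an update in the last X months” policy can at least be automated as a first-pass triage. Here’s a minimal sketch in Python; the package names, dates, and the roughly eighteen-month threshold are all made up for illustration, and in practice the release dates would come from a registry or repository API:

```python
from datetime import date, timedelta

def is_possibly_abandoned(last_release: date, today: date,
                          threshold_days: int = 540) -> bool:
    """Flag a dependency whose most recent release is older than the
    threshold. Staleness alone can't distinguish 'very stable' from
    'abandoned'; it only tells you which packages deserve a closer look."""
    return (today - last_release) > timedelta(days=threshold_days)

# Hypothetical inventory: package name -> date of its last release.
inventory = {
    "left-padish": date(2021, 3, 1),   # years without a release: flag it
    "requestish": date(2024, 6, 10),   # recently released: fine
}

today = date(2024, 8, 24)
flagged = [name for name, released in inventory.items()
           if is_possibly_abandoned(released, today)]
print(flagged)  # ['left-padish']
```

A flagged package isn’t automatically a problem; it’s the prompt to go look at the repository, the issue tracker, and whether the maintainer has said anything about the project’s status.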

Jerry: Yeah, it’s certainly not universally true. There are plenty of open source projects, especially the larger ones, that have a defined roadmap. But I think you’re spot on. If you don’t see that there’s been an update on a particular piece of open source, is it because there’s nothing wrong with it? Or is it on you to go look at its GitHub repository and see that people have been

jamming the issue log with requests to fix some vulnerability, requests that are falling on deaf ears? And when you start to think about the many tens of thousands, or even hundreds of thousands, of open source components that some applications use, it becomes really challenging. I think that’s where some of the open source management tools, like Mend and others, can start to help. But even those aren’t infallible, and even so, you have to have, as an organization, some amount of discipline and capacity to make sure that you’re staying up to speed.

Andrew: Yeah.

Jerry: The other thing, by the way, that I wanted to hammer on, because I hate it.

I hate it with a burning passion, and it has happened every time I’ve been involved in one. It’s something that I feel like we as an industry have to do something about. And that is: in the aftermath of an acquisition, accounting systems. Can we just talk about those for a second? I don’t know how many acquisitions I’ve been through; I’ve been on the acquired side and on the acquirer side a bunch of times.

And it happens every single goddamn time. The acquired company has an accounting system. And why do companies acquire other companies? One of the main reasons is this thing called synergy, right? And synergy basically means that we don’t want to run duplicate HR systems and accounting and whatnot.

If we consolidate all that backend stuff and continue to make money on the products they sell, then we’ve increased the value; the whole is more than the sum of the parts. And that’s why a lot of companies acquire other companies, right?

Andrew: Yeah, sorry, I was gonna say: the back office overhead.

Jerry: Exactly. So when a company acquires another company, one of the things it wants to do very quickly, as quickly as possible, is start to realize those savings. And one of the first things that happens is putting out to pasture the accounting system. This could also be the HR system, although nobody does HR internally anymore.

It’s like running your own voicemail system these days. But accounting systems seem to still be very much insourced. And every single time, without exception, every single time I’ve seen this happen: a company gets acquired, and they stop paying for maintenance on the accounting application.

And it ends up running on some ancient operating system that is out of date. You can’t update the operating system, because the accounting system won’t work on the new operating system. You can’t update the accounting system, because the vendor is out of business, or you don’t have a license to upgrade anymore, and it would cost millions of dollars and it’s not in the business case. My, oh my God.

And it wouldn’t provide any accretive value to the company if you did upgrade it, because it’s not going to be used anymore. But you know what? If you shut it off, people will go to jail. At least, that’s what I’ve been told over and over again. And so you end up with this thing sitting in the corner which is, as best I can describe it, an attractive nuisance. Everything about it is terrible, it’s all out of date, and it can’t go away, because Joe from accounting says that somebody will go to jail if you turn it off.

Anyway, I may be a little angry about this. Ah,

Andrew: If you just ran your accounting on Excel, like God intended, you wouldn’t have this problem.

Jerry: It’s so true.

Andrew: No business is more complicated than what can be done in an Excel sheet.

Jerry: Amen. Look, I’m going to just be forthright and say I’ve never seen an effective counter to this. It’s been a problem every single time I’ve seen it happen. The only thing I can say is: have a plan. I shouldn’t say no company can change those facts, but I think most companies will not be effective in changing them.

It’s going to happen: your acquired company is going to have an accounting system, and there’s not going to be an appetite to update it. There it is, and so you have to mitigate the risk of it. And I think that means having a defined approach to doing so, whether it’s a separate VLAN that has no access, or requiring multi-factor authentication to get in and out of that network. It could be pretty simple and dirty, but have a plan, because it happens, no matter how mad it makes me.

And so I think we’ve got to recognize that there are cases where that will be the reality, and come up with relatively workable mitigations around it. But it can’t be the rule.

Andrew: What’s your ideal outcome? That they just migrate all the data off the old system to the new system and kill it? Because I’m guessing the reason the old system is maintained is that they need a system of record for the last seven years or whatever, for tax purposes or government regulatory purposes. I’m assuming that’s why.

Jerry: Typically, yes. Frankly, I think the best course would be to figure out the different types of reports that are needed, run exports, and have those exports exist in a spreadsheet. Now, I don’t know; I’m not an accountant.

Andrew: Right into a spreadsheet. See, look, it all comes back.

Jerry: But I don’t know if there’s some statutory requirement that the system of record has to be there. Because if the SEC, or the Department of Justice, or some other legal authority came to you and wanted to investigate: why did you say you made X dollars seven quarters ago, before you were acquired?

You’ve got to be able to go back and replay that. Maybe that’s why, and maybe that can’t be done through exports. I don’t know. But in every instance, the accounting folks have insisted that the system has to be available. It’s not enough to just dump the data.
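For what it’s worth, the mechanical half of the export idea is the easy part. A toy sketch, with SQLite standing in for whatever the real accounting backend is (a real migration would use the vendor’s supported export path, and the accountants, not IT, would decide which reports actually satisfy the record-keeping requirement):

```python
import csv
import sqlite3

def export_all_tables(db_path: str, out_dir: str) -> list[str]:
    """Dump every table in a SQLite database to its own CSV file,
    header row included, as a crude system-of-record snapshot."""
    conn = sqlite3.connect(db_path)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    written = []
    for table in tables:
        cur = conn.execute(f'SELECT * FROM "{table}"')
        path = f"{out_dir}/{table}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])
            writer.writerows(cur)
        written.append(path)
    conn.close()
    return written
```

The hard part, as the discussion above suggests, is convincing anyone that the CSVs are a legally sufficient replacement for the live system.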

Now, I don’t know if that’s laziness or what. The other problem I have, while I’m beating on this drum: over time, the people who are familiar with that system go away,

Andrew: Certainly.

Jerry: and so one of the things you have to watch out for is that eventually, when that system’s usefulness is done, there isn’t anybody left to say, hey, now it’s time to turn it off.

Andrew: And then who wants to take the risk of being the one who makes the wrong call? So they say people go to jail. Can we talk about who might go to jail and if that’s really a bad thing?

Jerry: I like where this is going.

Andrew: I’m just weighing the outcomes. What are my options?

Jerry: I think I’ve beat that one to death. The last story we have today is also from Cybersecurity Dive, and the title is “After wave of attacks, Snowflake insists security burden rests with customers.” Now, Snowflake had a large problem, and this happened earlier in 2024, before we got back to podcasting.

But I would say that I don’t think it’s an overstatement to say that the data breaches associated with Snowflake will probably go down as the largest in history. Bigger than anything there ever was, and perhaps bigger than anything there ever will be again. A huge number of customers

lost lots of data. Now, the point of this story is that Snowflake is saying, hey, it wasn’t us. It was our customers.

Andrew: Because their customers allowed single-factor login with passwords that were easy to find in other dumps of passwords. Bad actors ran a widespread campaign to test all of those passwords against Snowflake user accounts, and lo and behold, a bunch of them worked. And Snowflake is saying that’s on the customer.

Jerry: Yes.

Andrew: You shouldn’t have relied on single-factor auth and reused passwords.

And that’s on you because you didn’t take appropriate security measures.
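On the customer side, one cheap defense against exactly this attack is screening passwords against known breach corpora before accepting them. A sketch against Have I Been Pwned’s public k-anonymity range API; the injectable `fetch` parameter isn’t part of any real API, it’s just there so the function can be exercised offline:

```python
import hashlib
import urllib.request

def pwned_count(password: str, fetch=None) -> int:
    """Return how many times a password appears in the Have I Been Pwned
    corpus, via the k-anonymity range API: only the first five hex
    characters of the SHA-1 hash ever leave the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    if fetch is None:
        def fetch(p):
            url = f"https://api.pwnedpasswords.com/range/{p}"
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode("utf-8")
    # Response is one "HASH_SUFFIX:COUNT" entry per line.
    for line in fetch(prefix).splitlines():
        tail, _, count = line.partition(":")
        if tail.strip() == suffix:
            return int(count)
    return 0
```

Because only a five-character hash prefix is sent, the service never learns which password is being checked; a nonzero count is a strong reason to reject the password at signup.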

Jerry: Correct. Yeah, we just provided you with an account and a place to store your data or process your data. It was on you, the customer, to pick a good password. And that’s basically what they are saying here. They’re saying our Snowflake systems didn’t get hacked; our infrastructure is fine.

Everything worked as designed. The fact that bad guys got your password and stole all your data is a horrible thing, but it’s not our fault. It’s your fault, because it was your password that they got. We don’t know how they got your password. Was it the same password you used on LinkedIn, or on Ashley Madison?

I don’t know. Who knows? And they’re saying it’s not our problem, not our fault. Of course, they do give lip service to the fact that, hey, these are customers, and of course we care about them, and we’re in it together, as they say. But: not our fault.

By the way, Snowflake, I should say, has since implemented some changes, which require mandatory multi-factor authentication for new customers. It also gives customers the ability to enforce multi-factor authentication for all of their users, or for specific roles in their account.

I should have said, by the way, for those of you not in the know: Snowflake is what I would call a managed storage and database provider. They do lots of value-added services around data analytics and whatnot. So the kinds of data that you would have stored in Snowflake are the kinds of data that you wouldn’t want to get compromised.

And so I think this was a central place, one-stop shopping, for adversaries to go and do their credential stuffing. It looks like they got into somewhere in the neighborhood of 150 or 200 different customer accounts and pulled all that data out. And even now, by the way, we’re still hearing about net-new companies that were hacked or had their data stolen.

And I don’t know if that’s because they’ve been, quote, responding to and investigating the incident, or if they just recently realized that it happened. But this is a big problem. And, because I don’t think Snowflake is all that commonly used in the industry, more interesting than the breach itself is the concept that these service providers have a hard line of demarcation around what they’re responsible for.

So when we as consumers of these services decide what we’re going to use, what do we do? We look at their SOC 2, we look at their PCI report, we look at their ISO certification. We look for all these things. But how often do we look at the security capabilities of the service itself? Do they require multi-factor authentication by default? I can say this authoritatively: in my time, I don’t know that I ever saw any customers asking about that. And they should.

And this is the crux of it. But go ahead.

Andrew: No, I think this absolutely is a product management decision. And I think

Jerry: Yes.

Andrew: the implication here is that security causes friction. Friction will

Jerry: Yes.

Andrew: make us uncompetitive. So let me bring it back to a bank as an example. A bank knows how to secure your accounts. A bank could easily force multi-factor authentication.

They could force tokens. They could force all sorts of things. But they know there’s a certain percentage of customers who will move away from them as a result; that friction will be more complicated than those customers want to deal with, and they will not value the security. They will look at it as a detriment to the service and go someplace else. So that is a decision that every company and every product manager makes about what security gets built into their tool. In my mind (obviously I have no insider insight into Snowflake), the idea that those admins could allow single-factor, static passwords is a product design choice to reduce friction and make the platform easier to use. Multi-factor authentication is a very mature, known, solved problem. So if it’s not being put in place, it’s because somebody made a choice, and that choice typically is competitive in nature. So, to your point,

if customers are not demanding it, it’s not going to show up. The other thing that occurs to me, while I’ve stolen the floor for a moment: with the rise of all of these SaaS vendors, the administrators are typically no longer IT professionals or security professionals. They are users of the data. So they may not even understand, or have any insight into, the security implications of administering these tools. All they know is, hey, I need to make an account. Okay, I made an account. They may not have the background or the guidance to know single factor is bad and why, like you would if a more traditional IT or security team were administering these tools. So I think we’ve also got a problem with these SaaS tools becoming so ubiquitous and easy to use that we’ve enabled less technical staff to administer them, and I think things like this become oversights that come back to bite us.

Jerry: I couldn’t have said it better. I think that’s exactly what’s going on. And so it’s not friction on IT and security departments when a product requires multi-factor authentication; it’s friction on the business users. And when it’s the business users who are specifying and deciding what services to use,

it’s not hard to imagine that they’ll pick the one that is easier to log into, even though it could have a devastating effect, like it did for a lot of the customers of Snowflake here. And it’s a complicated thing, because if all of the providers were requiring multi-factor authentication, there wouldn’t be a difference.

It wouldn’t be a differentiator. The business users would have a similar experience across all of the different providers. But we know that’s not true. And I think, if you zoom out, the concern I have for us as an industry is that, broadly speaking, we’re not deeply aware of the responsibilities we are picking up for properly managing those services. We are, to some extent, doing what we think is due diligence: looking at those providers and saying they’re a reputable company, they’re secure, they have these certifications. But then that’s the end of it.

We don’t think about what our obligations are. And this manifests itself in so many different ways. How many times have we talked about open S3 buckets? This is another permutation of that. We had the big Capital One breach in AWS: also a misconfiguration of how the customer set up their IAM.

The devil is in the details of how we manage these software-as-a-service systems, and these providers aren’t going to come to us, hit us on the back of the head, and say, you big dummy, we saw that you don’t have multi-factor authentication turned on and you really need to go fix that.

Now, maybe Snowflake will start doing that because of the reputational damage they’ve incurred as a result of this breach, which they assert isn’t their fault. It is having an impact on them. I think it is probably having a negative impact on them.

It’s attracting attention that I’m sure they don’t want to have. They say there’s no such thing as bad PR, but I think you don’t want your name associated with the largest data breaches there have ever been. But again,

Andrew: Except on a podcast like this, with tens of listeners.

Jerry: Tens of listeners. And I’m not disagreeing, by the way, with the premise of Snowflake’s comments that their customers were responsible. This was not intended to be a bash-on-Snowflake segment. It was more that we have to understand how companies like Snowflake view their relationship with their customers.

You as the customer are responsible for ensuring that you are properly securing your stuff.

Andrew: So I go back to: maybe we’ve got non-technical people running these tools in companies, and maybe what we should be doing is allocating GRC’s time to go audit how they’re running these tools

Jerry: Yes.

Andrew: and clean them up.

Jerry: I don’t have a better option or a better idea. Yeah,

Andrew: There are tools out there that are meant to allow you to apply policy, and those are technical controls for the same problem, which is that proper policy should be applied. And I think the challenge is we have such a sprawl of SaaS tools that it’s really difficult to stay on top of, and ahead of. I’ve worked at companies that try to mandate, as best they can: hey, if you want to buy a SaaS tool, it’s got to integrate with single sign-on or MFA. But you can do that, and then ten minutes later the admin sets up another username and password that you don’t have purview into. And they don’t necessarily know they’re doing anything wrong.

Jerry: Exactly. They have a business objective to meet. They’re trying to solve a business problem.

Andrew: This is, I think, one of the unintended consequences of the democratization of admin capabilities through SaaS and cloud sprawl: it’s making life more difficult for security teams.

Jerry: Yes. And I do wonder, by the way (I don’t know that we’ll ever have clarity on this): of the, I think it’s 165 at last count, companies whose data was breached from their Snowflake accounts, how many of their IT and security departments were surprised to learn that they were using Snowflake in that way, or that they didn’t have multi-factor authentication turned on?

How many times was that a surprise? And I think it’s going to be an unfortunately high number.

Andrew: And what I unfortunately foresee happening is executives will just say, fine, security, IT, go fix it. Without allocating enough resources.

Jerry: Yes: you failed in your job. Which, I guess, is not a completely unfair statement. But on the other hand, I think we have to be enabled in our jobs, and I’m not sure that always happens.

Andrew: Yeah. Yeah.

Jerry: So anyway, those are the stories for today. Oh, go ahead. Sorry, I think there’s a lag now.

Andrew: Yeah, we might have stepped on each other a little bit this show, but we’ll figure it out.

Jerry: I don’t know if it’s because of the way we’re recording or what, but I think there’s a bit of a lag. In any event, that is the show for today. I appreciate everybody’s attention and hope you found this useful. If you like the show, you can go listen to back episodes.

Everything is available on our website at www.defensivesecurity.org or on your favorite podcast player. If you like the show, we would love, love, love for you to give us a five-star review. That helps make sure that other people are able to find us.

And by the way, if you don’t like the show, you can also still give us a five-star review.

Andrew: Even more so.

Jerry: And just not listen. Yeah.

Andrew: That’s true. Five stars is free.

Jerry: Yeah, and then you just don’t listen anymore.

Anyway.

Andrew: Just play it for your cats. That’s what I do.

Jerry: Yes.

I’m not even going to continue down that road.

Andrew: That’s fair, it’s going off the rails.

Jerry: All right. You can find Mr. Kalat on the social media, where?

Andrew: I’m on Twitter slash X at lerg, L-E-R-G, and on Infosec Exchange on the Fediverse, also at lerg, L-E-R-G.

Jerry: Awesome. You can find me on the Fediverse at jerry@infosec.exchange. And with that, we will talk to you again very soon. Thank you all. Have a great week.

Andrew: Have a great week. Bye-bye.

Defensive Security Podcast Episode 276

Check out the latest Defensive Security Podcast Ep. 276! From cow milking robots held ransom to why IT folks dread patching, Jerry Bell and Andrew Kalat cover it all. Tune in and stay informed on the latest in cybersecurity!

Summary:

In episode 276 of the Defensive Security Podcast, hosts Jerry Bell and Andrew Kalat delve into a variety of security topics, including a ransomware attack on a Swiss farm’s milking machine that led to the tragic death of a cow, issues with patch management in the IT industry, and an alarming new wormable IPv6 vulnerability patched by Microsoft. The episode also covers a fascinating study on the exposure and exploitation of AWS credentials left in public places, highlighting the urgency of automating patching and establishing robust credential management systems. The hosts engage listeners with a mix of humor and in-depth technical discussion aimed at shedding light on critical cybersecurity challenges.

00:00 Introduction and Casual Banter
01:14 Milking Robot Ransomware Incident
04:47 Patch Management Challenges
05:41 CrowdStrike Outage and Patching Strategies
08:24 The Importance of Regular Maintenance and Automation
15:01 Technical Debt and Ownership Issues
18:57 Vulnerability Management and Exploitation
25:55 Prioritizing Vulnerability Patching
26:14 AWS Credentials Left in Public: A Case Study
29:06 The Speed of Credential Exploitation
31:05 Container Image Vulnerabilities
37:07 Teaching Secure Development Practices
40:02 Microsoft’s IPv6 Security Bug
43:29 Podcast Wrap-Up and Social Media Plugs

Links:

• https://securityaffairs.com/166839/cyber-crime/cow-milking-robot-hacked.html
• https://www.theregister.com/2024/07/25/patch_management_study/
• https://www.cybersecuritydive.com/news/misguided-lessons-crowdstrike-outage/723991/
• https://cybenari.com/2024/08/whats-the-worst-place-to-leave-your-secrets/
• https://www.theregister.com/2024/08/14/august_patch_tuesday_ipv6/

 

Transcript:

Jerry: Today is Thursday, August 15th, 2024. And this is episode 276 of the defensive security podcast. My name is Jerry Bell and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. Once again, from your southern compound, I see.

Jerry: Once again, and for the final time for two whole weeks, and then I’ll be back.

Andrew: Alright hopefully next time you come back, you’ll have yet another hurricane to dodge.

Jerry: God, I hope not.

Andrew: How are you, sir?

Jerry: I’m doing great. It’s been a great couple of weeks, and I’m looking forward to going home for a little bit and then coming back. How are you?

Andrew: I’m good, man. It’s getting towards the end of summer. I’m looking forward to a fall trip coming up pretty soon, and just cruising along. Livin’ the dream.

Jerry: We will make up for last week’s banter about storms and just get into some stories. But first, a reminder that the thoughts and opinions we express are ours and not our employers’.

Andrew: Indeed. Which is important because they would probably fire me. You’ve tried.

Jerry: I would, yeah. So the first story we have tonight is very... moving.

Andrew: I got some beef with these people.

Jerry: Great. Very moving. This one comes from Security Affairs, and the title is “Crooks took control of a cow milking robot, causing the death of a cow.” Now, I will tell you that the headline is much more salacious than the actual story. When I saw the headline, I thought, oh my God, somebody hacked a robot and it somehow killed the cow. But no, that’s not actually what happened.

Andrew: Now, also, let’s just say up front, the death of a cow is terrible, and we are not making light of that. But we are gonna milk this story for a little while.

Jerry: that’s very true.

Andrew: I’m almost out of cow puns.

Jerry: Thank God for that. So what happened here is this farm in Sweden had their milking machine, a milking robot, I guess, hit with ransomware. The farmer noticed that he was no longer able to manage the system and contacted support for that system, and they said, no, you’ve been ransomwared.

Actually, the milking machine itself apparently was pretty trivial to get back up and running, but what was lost in the attack was important health information about the cows, including when some of the cows were inseminated. And because of that, they didn’t know that one of the pregnant cows was supposed to have given birth but actually hadn’t.

What turned out to be the case is that the cow’s fetus unfortunately passed away inside the cow, and the farmer didn’t know it until they found the cow lying lethargic in its stall and called a vet. Unfortunately, at that point it was too late to save the cow.

This is an unfortunate situation where a ransomware attack did cause a fatality.

Andrew: Yeah, and I think in the interest of accuracy, I think it was in Switzerland,

Jerry: Is it Switzerland? Okay. I knew it started with an S-W.

Andrew: That’s fair. You’re close. It’s Europe.

Jerry: It’s all up there.

Andrew: But yeah, the theory is that if they had a better record of the date the cow had been inseminated, they would have known that the cow was in distress in labor and could have done something more proactive to save the cow, and potentially the calf. And unfortunately, because they didn’t have that data, because it was in this ransomwared milking robot, we ended up with a dead cow and a dead calf.

Jerry: So, not to grill the farmer too much, but I was thinking that,

Andrew: Wow!

Jerry: I’m sorry. I was thinking that they clearly had an ability to recover what they thought was the important aspect of that machine’s operation, which was milking; they were able to get that back up and running pretty quickly.

But it seemed to me like they were unaware that this other information was tied to that same system. I don’t fully understand it; it seems like it’s a little more complicated than I’ve got it envisioned in my mind. But very clearly, they hadn’t thought through all the potential harm.

A good lesson, I think, for us all.

Andrew: I feel like we’ve butchered this story.

Jerry: The next story we have for today comes from theregister.com, and the title is “Patch management still seemingly abysmal because no one wants the job.” Can’t stop laughing. All right.

Andrew: A cow died! That’s tragic!

Jerry: I’m laughing at your terrible attempts at humor.

Andrew: I couldn’t work leather in there. I tried. I kept trying to come up with a leather pun.

Jerry: We appreciate your efforts.

So anyhow, this next story talks about the challenge that we as an IT industry have with patching: basically, that it is a very boring task that not a lot of people in IT actually want to do. And so it highlights, again, the importance of automation.

Pair this with the complementary story, titled “Misguided lessons from CrowdStrike outage could be disastrous,” from Cybersecurity Dive. I put these two together for a reason, because one of the, I think, takeaways from the recent CrowdStrike disaster is that we need to go slower with patching and updates, and perhaps not rely on automatic updates.

And these two articles really point out the folly in that. Number one, the article from The Register points out that relying on manual patching is a losing proposition, because really nobody wants to do it and it doesn’t scale. IT operations is already a crap job in many instances, and then expecting people to do things manually on top of that is a problem.

The second article points out the security issues that come along with adopting that strategy: you’re exposing your environment unnecessarily. In fact, the improvements in your security posture, the reduction in the likelihood of some kind of attack, far outweigh the remote possibility of what happened

like we saw with CrowdStrike. Now, there is a kind of an asterisk at the bottom: they point out the importance of doing staged deployments of patches, which I think is one of the central lessons, at least from my perspective, of the CrowdStrike disaster: go fast, but stage it.
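The “go fast, but stage it” idea is easy to sketch: carve your fleet into rings, push to a tiny canary first, and only promote the update once the previous ring looks healthy. A rough illustration of the ring-planning half (the fractions are arbitrary, and the health gate you’d check between rings is left out):

```python
import random

def plan_rings(hosts, ring_fractions=(0.01, 0.10, 0.39, 0.50)):
    """Split a host inventory into deployment rings: a tiny canary first,
    then progressively larger waves. Every host lands in exactly one ring."""
    shuffled = hosts[:]
    random.shuffle(shuffled)  # avoid rings that mirror naming or racking order
    rings, start = [], 0
    for frac in ring_fractions:
        size = max(1, round(len(shuffled) * frac))
        rings.append(shuffled[start:start + size])
        start += size
    rings[-1].extend(shuffled[start:])  # any rounding remainder goes last
    return rings

rings = plan_rings([f"host{i:03d}" for i in range(100)])
print([len(r) for r in rings])  # e.g. [1, 10, 39, 50]
```

In practice, each ring would be followed by a soak period and an automated health check before the next ring gets the update; the point is that the canary ring catches a bad push at a blast radius of one host, not the whole fleet.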

Andrew: Yeah, it’s an interesting problem that we’re struggling with here, which is: how many times have we saved our own butts, without knowing it, by automated or rapid patching? It’s very difficult to prove that negative. And so it’s very difficult to weigh the pros and cons; there’s no empirical data showing where automatic or rapid patching solved or avoided a problem, versus when patching broke something.

Because all we know about is when it breaks, like when a Microsoft patch rolls out and breaks things. And it’s one of those things where the feeling from a lot of folks is that it has to be perfect every time, and every time we have a problem, we break some of that trust. It hurts the credibility of auto-patching, or rapid patching. The other thing that comes to mind is I would love to get more IT folks, technical operations folks, SREs, and DevOps folks comfortable with the concept of patching as just part of regular maintenance that is built into their process. A lot of times, it feels like a patch is interrupt-driven, toil-type work that they have to stop what they’re doing to go work on.

Whereas in my mind, at least the way I look at it from a risk-manager perspective: unless something’s on fire, or is a known RCE or known-exploited, or meets certain other criteria, I’m good. Hey, patch on a monthly cadence and just catch everything up on that monthly cadence, whatever it is. I can work within that cadence.

If I’ve got something that I think is a higher priority, we can try to interrupt that, or drive a different cadence to get it patched or mitigated in some way. But the problem often is that every one of these patches seems to be a one-off action if you’re not doing automatic patching in some way. That is very cognitively dissonant with what a lot of these teams are doing, and I don’t know how to get across very well that you will always have to patch. This will never stop, so you have to plan for it.

You have to build time for it. You have to build automation and cycles for it and around it, and it’ll be a lot less painful. It feels like pushing the rock up the hill on that one.

Jerry: One of my observations is that an impediment to fast patching is the reluctance to take downtime, or fear of the potential impacts of downtime. And I think that dovetails with what you just said; in part, that concern stems from the way we design our IT systems and our IT environments. If we design them in a way that they’re not patchable without interrupting operations, then my view is we’ve not been successful in designing the environment to meet the business.

And that’s something that I tried hard to drive, and I think in some aspects I was successful and in others I was not. But I think that is one of the real key things that we as IT leaders or security leaders really need to be imparting to our teams: when we’re designing things, they need to be maintainable, not as an interrupt, as you described it, but in the normal course of business. Without heroic efforts, it has to be maintainable.

You have to be able to patch, you have to be able to take the system down. You can’t say, gosh, this system is so important that we can’t ever take it down, we’re going to lose millions of dollars. That’s not a good look. You didn’t design it right.

Andrew: That system is gonna go down. Is it gonna be on your schedule or not? The other thing I think about with patching is not just vulnerability management. Let’s say you don’t patch, and suddenly you’ve got a very urgent vulnerability that does need to be patched, and you’re four major versions and three sub-versions behind. Now you have this massive uplift that’s probably going to be far more disruptive to get that security patch applied, as opposed to if you’re staying relatively current, n minus one or n minus two, where it’s much less disruptive to get up to date.

Not to mention all of the end-of-life and end-of-support issues that come with running really old software, and you don’t even know what vulnerabilities might be running out there. But just keeping things current as a matter of course, I believe, makes dealing with emergency patches much, much easier. All these things take time and resources away from what is perceived to be higher-value activities, so it’s constantly a resource battle.

Jerry: And there was a quote related to what you just said at the end of this article. It said, quote, I think it mostly comes down to technical debt, he explained. It’s a very unsexy thing to work on. Nobody wants to do it, and everyone feels like it should be automated, but nobody wants to take responsibility for doing it.

He added, the net effect is that nothing gets done and people stay in the state of technical debt, where they’re not able to prioritize it.

Andrew: That’s not a great place to be.

Jerry: No. There was another interesting quote that I often see thrown around, and it has to do with the percentage of patches. I’ll just give the quote; it’s towards the beginning of the article. Patching is still notoriously difficult, Forrester principal analyst Andrew Hewitt told The Register.

Hewitt, who specializes in IT ops, said that while organizations strive for a 97 to 99 percent patch rate, they typically only manage to successfully fix between 75 and 85 percent of issues in their software. I’m left wondering: what does that mean?

Andrew: Yeah, like in what time frame? In what? I don’t know. I feel like what he’s talking about, maybe, is that they only have the ability to automatically patch up to 85 percent of the deployed software in their environment.

Jerry: That could be, it’s a little ambiguous.

Andrew: It is. And from my perspective, there are actually a couple different things we’re talking about here, and we’re not being very specific. We’re talking about IT operations, corporate IT solutions and systems and servers. I work in a software shop, so we’ve got the whole software side of this equation too, for the code we’re writing and keeping all that stuff up to date, which is a whole other complicated problem, some of which I think would be inappropriate for me to talk about. So it’s doubly difficult, I think, if you’re a software dev shop, to keep all of your components and dependencies and containers and all that stuff up to date.

Jerry: Absolutely. Absolutely. I will also say, a couple of other random thoughts on my part: this, in my view, gets harder or more complicated in larger organizations, because you end up having these kind of siloed functions where responsibility for patching isn’t necessarily clear, whereas in a smaller shop,

you may have an IT function who’s responsible end to end for everything. But in large organizations, oftentimes you’ll have a platform engineering team who’s responsible for, let’s say, operating systems, and that team is a service provider for other parts of the business.

And those other parts of the business may not have a full appreciation for what they’re responsible for from an application perspective. Especially in larger companies, where they want to reduce headcount and cut costs, those application-type people, in my experience, as well as the platform team, are ripe targets for reductions.

And when that happens, you end up in this kind of weird spot of having systems and no clear owner of who’s actually responsible. You may even know that you have to patch it, but you may not know whose job it is.

Andrew: Yeah, absolutely. In my perfect world, every application has a technical owner and every underlying operating system or underlying container has a technical owner. Might be the same, might be different. And they have their own set of expectations. Often they’re different and often they’re not talking to each other. So there could be issues in dependencies between the two that they’re not coordinating well. And then you get gridlock and nobody does anything.

Jerry: So these are pragmatic problems. In my experience, they present themselves as sand in the gears, right? They make it very difficult to move swiftly. And that’s what, in my experience, drives that heroic effort, especially when something important comes down the line, because now you have to pay extra attention, because there isn’t a well-functioning process.

And I think that’s something we, as an industry, need to focus on. Oh, go ahead.

Andrew: I was just gonna say, in my mind, some of the ways you solve this are easily said, difficult to do. Properly maintained asset management, IT asset management, is key. And in my mind, you’ve got to push your business to make sure that somebody has accountability for every layer of that application, and push your business to say, hey, if we’re not willing to invest in maintaining this, and nobody’s going to take ownership of it, it shouldn’t be in our environment. It must be well owned. It’s like when you adopt a dog: somebody’s got to take care of it, and you can’t just neglect it in the backyard. We run into stuff all the time where it’s just, oh, nobody knows what that is. Then get rid of it. Every single thing out there is attack surface, something that could be attacked, and if it’s not being maintained, it becomes far riskier from an attack surface perspective. And I also think about telling people, hey, before you go buy a piece of software: do you have the cycles to maintain it? Do you have the expertise to maintain it?

Jerry: The business commitment to fund its ongoing operations, right?

Andrew: Exactly. And it gets stickier. Now we have this concept of SaaS, where a lot of people are buying software and not even thinking about the back end of it, because it’s all just automagic to them. So they get surprised when it’s, oh, it’s in-house, we’ve got to actually patch it ourselves. Yeah.

Jerry: The other article, in Cybersecurity Dive, had another interesting quote that I thought lacked some context. The quote was: there were 26,447 vulnerabilities disclosed last year, and bad actors exploited 75 percent of vulnerabilities within 19 days.

Andrew: No, that’s not right.

Jerry: Yeah, here’s, here is the missing context.

Oh, and it also says one in four high-risk vulnerabilities were exploited the same day they were disclosed. Now, the missing context is this: the quote is referring to a report by Qualys that came out at the beginning of the year. What it was saying is that about 1 percent of vulnerabilities are what they call high risk, and those are the vulnerabilities that end up having exploits created, which is an interesting data point in and of itself: that only 1 percent of vulnerabilities are what people go after.

Our goal in patching is to patch all of them. What they’re saying is that 75 percent of the 1 percent which had exploits created had those exploits created within 19 days.

Andrew: That makes, that’s a lot more in line with my understanding.

Jerry: And 25 percent were exploited within the same day. That’s the important context; it’s a very salacious statement without it. And I will say that as a security leader, one of the challenges we have is, again, that there were almost 27,000 vulnerabilities. I think we’re going to blow the doors off that number this year.

Now, they’re not all equally important. Obviously they’re rated at different levels of severity. But the reality, for those of us who pay attention, is that it’s not just the critical vulnerabilities that are leading to systems being exploited and hacked and data breaches and whatnot.

There’s plenty of instances where you have vulnerabilities that are lower severity from a CVSS perspective being exploited, either on their own or chained together. But the problem is knowing which ones are important. And so there’s a whole cottage industry growing up around trying to help you prioritize better which vulnerabilities to go after.

But that is the problem, right? I feel like we have kind of a crying wolf problem, because 99 percent of the time or more, the thing that we’re saying the business has to go off and spend lots of time on, disrupting their availability and pulling people in on the weekends and whatnot, is not exploited; it’s not targeted by the bad guys. You only know which ones are in that camp after the fact.

So if you had that visibility before the fact, it’d be better, but that’s a very naive hope at this point.

Andrew: Yeah. If 1%.

Jerry: If I could only predict the winning lottery numbers.

Andrew: The other thing, and this opens up a debate I’ve had many times in my career: ops folks, whomever, they’re not the bad guys, they’re just asking questions, trying to prioritize. Prove to me how this is exploitable. That’s a really unfair question. I can’t, because I’m not a hacker who can predict every single way this could be used against a business.

I have to play the odds. I have to play statistically what I know to be true, which is that some of them will be exploited. One of the things I can do is prioritize: hey, what’s open to the internet? What’s my attack surface? What services do I know are open to anonymous browsing, or not browsing, but reachability from the internet? Maybe those are my top priority, and I watch those carefully for open RCEs or likely exploitable things, and I prioritize on those. But at the end of the day, not patching something because I can’t prove it’s exploitable assumes that I can predict what every bad guy is ever going to do in the future, or chain attacks in the future that I’m not aware of.

And I think that’s a really difficult thing to prove.

Jerry: Yeah, a hundred percent. There are some things that can help you, some things beyond just CVSS scores. Certainly, if you look at something and it is wormable, right, remote code execution of any sort, that’s something in my estimation that you really need to prioritize. CISA, the Cybersecurity and Infrastructure Security Agency, whose name still pisses me off

all these years later, because it has the word security in it too many times, but they didn’t ask me. They have this list they call the KEV. It’s the Known Exploited Vulnerabilities list, which in previous years was a joke because they didn’t update it very often, but now it’s actually updated very aggressively.

And so it contains the list of vulnerabilities that the U.S. government and some other foreign partners see actively being exploited in the wild. So that’s also a data point. And my perspective is that it shouldn’t be the thing where you say those are the only ones we’re going to patch. In my view, your approach should be: we’re going to patch it all, but those are the ones that we’re not going to relent on.

There’s always going to be a need; there’s going to be some sort of end-of-quarter situation or what have you. But these are the ones that you should be looking at and saying, no, these can’t wait, we have to patch those.

Andrew: Yep, 100 percent. And a lot of your vulnerability management tools are now integrating that list, so it can help you, right in the tool, know what the prioritization is. But bear in mind, there are a lot of assumptions in that: that those authorities have noted activity, have understood it and shared it. And zero days happen.

Jerry: The reality is somebody had to get hacked. Tautologically, somebody had to get hacked for it to be on the list.

Andrew: Right. So don’t rely only on that, but it is absolutely a good prioritization tool and a good focusing item: look, we know this is known exploitable, we’re seeing exploits in the wild, we need to get this patched.
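The triage policy the two of them describe, patch everything on a cadence, but never let a KEV-listed finding slip, can be sketched in a few lines. This is a hypothetical illustration: the CVE IDs, scores, and the contents of the `kev` set are made up, not real KEV data (the real catalog is a feed you would pull from CISA).

```python
# Hypothetical sketch: rank findings so KEV-listed CVEs can never be deferred.
# CVE IDs, CVSS scores, and the KEV set below are invented for illustration.

def prioritize(findings, kev_ids):
    """Sort findings: KEV-listed first, then by descending CVSS score."""
    return sorted(
        findings,
        key=lambda f: (f["cve"] not in kev_ids, -f["cvss"]),
    )

findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8},
    {"cve": "CVE-2024-0002", "cvss": 5.4},  # lower severity...
    {"cve": "CVE-2024-0003", "cvss": 7.5},
]
kev = {"CVE-2024-0002"}  # ...but pretend this one is on the KEV list

for f in prioritize(findings, kev):
    tag = "KEV - do not defer" if f["cve"] in kev else "normal cadence"
    print(f["cve"], f["cvss"], tag)
```

Note how the KEV entry outranks a 9.8 CVSS finding, which mirrors the point that exploitation-in-the-wild beats raw severity as a prioritization signal.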

Jerry: Yeah, absolutely. So moving on to the next story. This one is from a cybersecurity consulting company called Cybenari, I guess that’s how you would say it.

Andrew: I’d go with that. That seems reasonable.

Jerry: I’m sure somebody will correct me if I got it wrong. The title here is “What’s the worst place to leave your secrets?”, research into what happens to AWS credentials that are left in public places. I thought this was a fascinating read, especially given where I had come from. I’ve been saying for some time now on the show that API keys and whatnot are the next big horizon for attacks.

And in fact, we had been seeing that; I think it’s actually on the upswing. In my former role, we saw a lot of it manifesting itself as attackers using those keys to mine crypto. They would hijack servers or platforms or containers or whatever to mine cryptocurrency.

But I think over time we’re going to see that morph into more data theft and perhaps less overt actions. I’m sure it’s already happening; I don’t mean to say that it isn’t. But I think it’s in the periphery right now, where a lot of the activity, at least a lot of the voluminous activity, tends to be what I’ll call more benign, like, again, crypto mining.

But anyway, the approach that this organization took here was pretty interesting. There’s a company called Thinkst that has this concept of canary tokens and canary credentials. And they are exactly what they sound like: a set of secrets that you can create through this company and watch how they’re used.

You can get an alert when somebody tries to use them. And that’s exactly what they did here. They created, I think it was 121... no, 121 is the number of total attempts; I don’t know exactly how many credentials they created. They created a number of credentials, spread them around, and used a number of different services.

Let’s see: they had GitHub, GitLab, Bitbucket, Docker Hub. They created their own FTP server, web server, and blog. They put them on Pastebin and JSFiddle. They put them into the npmjs repository and the PyPI repository, which we just talked about. They put them in various cloud storage buckets. And then they just waited to see how and when they were accessed.

And unsurprisingly to me, this was pretty successful; it was a very successful test. I am a little surprised at some of the results. One of the charts they give is how long it took for credentials planted in different places to be used.

Andrew: It’s insane.

Jerry: So npmjs was less than 60 seconds.

Andrew: So that means, from the time of posting, somebody somehow picked it up with some kind of scanner and could turn around and test using it in less than a minute.

Jerry: Yes.

Andrew: That’s insane.

Jerry: PyPI was 120 seconds, so just shy of two minutes. GitHub was 127 seconds, just over two minutes. Pastebin was 50 minutes, their own web server 47 hours, and Docker Hub 6.9 days.
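Those time-to-first-use figures are easier to compare side by side when normalized to seconds. A quick sketch, using only the numbers as reported in the article (the 6.9-day Docker Hub figure is rounded):

```python
# Time from planting a canary credential to its first attempted use,
# per the article's chart, normalized to seconds for comparison.
TIMES = {
    "npmjs": 60,                          # "less than 60 seconds"
    "PyPI": 120,                          # 2 minutes
    "GitHub": 127,                        # just over 2 minutes
    "Pastebin": 50 * 60,                  # 50 minutes
    "own web server": 47 * 3600,          # 47 hours
    "Docker Hub": round(6.9 * 86400),     # 6.9 days
}

# Fastest to slowest: roughly four orders of magnitude of spread.
for place, secs in sorted(TIMES.items(), key=lambda kv: kv[1]):
    print(f"{place:>15}: {secs:>7} s")
```

The spread is the striking part: the package registries are scanned in seconds, while Docker Hub takes days, which matches the later point that image contents are harder to scan.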

Andrew: Man, what’s going on with Docker Hub? Does nobody care? Has nobody gotten around to it?

Jerry: Nobody cares. I think it’s a lot more involved; it’s not as readily scannable, I would say.

Andrew: I can tell you from my own experience in previous roles, we used to get reports all the time: hey, you’ve got a secret out here; hey, you’ve got a secret out here, from people looking for bounties. I still want to know what tools they’re using to find this stuff so rapidly, because it’s fast.

Jerry: Yes.

Andrew: And

Jerry: Like GitHub will give you an API, so you can actually subscribe to a feed. Again, it’s not perfect, because they’re typically relying on randomness, or something being prefixed with password-equals or what have you. It’s not a perfect match, but there are lots of tools out there that people are using.
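A toy version of the pattern matching Jerry is describing might look like the following. This is a deliberately naive sketch with just two illustrative rules, the well-known AWS access key ID shape (AKIA plus 16 uppercase alphanumerics) and a crude password-equals pattern; real scanners use hundreds of rules plus entropy checks, and the sample strings are invented.

```python
import re

# Toy secret scanner: two illustrative patterns only.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"password\s*=\s*\S+", re.IGNORECASE),
}

def scan(text):
    """Return (rule_name, matched_text) pairs found in text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# AWS's documented example key ID, plus a fake password assignment.
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\npassword = hunter2\n'
for rule, matched in scan(sample):
    print(rule, "->", matched)
```

As Jerry says, this kind of matching is inherently fuzzy: a high-entropy key with no recognizable prefix would sail right past both rules.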

The one that I found most interesting is more aligned with the Docker Hub case. I think it’s a much larger problem that hasn’t manifested itself as a big problem yet, and that is: with container images, you can continue to iterate on them.

By default, when you spin up a container, what you get is the end state of a whole bunch of what I’ll just call layers. So if you, let’s say, included credentials at some point in a configuration file, and then you later deleted that file, when you spin up that container as a running image, you won’t find that file.

But it actually is still in that container image file. And so if you were to upload that container image to, let’s say, Docker Hub, and somebody knew what they were doing, they could actually look through the history and find that file. And that has happened; I’ve seen it happen a fair number of times. You have to go through some extra steps to squash down the container image so that you basically purge all the history and end up with only the last intended state of the container. But not a lot of people know that. How many people know you have to do that?
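The layering behavior Jerry describes can be modeled in a few lines. This is a simplified sketch, not the real OCI image format: here each layer is just a dict of path to content, with `None` standing in for the whiteout marker a real image uses to record a deletion. The point is that "deleted" files remain recoverable from earlier layers.

```python
# Simplified model of image layers: each layer maps path -> content,
# with None meaning "deleted in this layer" (a stand-in for OCI whiteouts).

def merged_view(layers):
    """The filesystem a running container actually sees."""
    fs = {}
    for layer in layers:
        for path, content in layer.items():
            if content is None:
                fs.pop(path, None)  # deletion hides the file from the merge
            else:
                fs[path] = content
    return fs

def leaked_files(layers):
    """Files present somewhere in layer history but absent from the final view."""
    final = merged_view(layers)
    seen = {p for layer in layers for p, c in layer.items() if c is not None}
    return seen - set(final)

layers = [
    {"/app/main.py": "print('hi')", "/app/config": "AWS_SECRET=abc123"},
    {"/app/config": None},  # "deleted", but layer 1 still carries the bytes
]

print(merged_view(layers))   # the config file is gone from the running image
print(leaked_files(layers))  # yet still recoverable from layer history
```

Squashing an image, as Jerry mentions, is effectively replacing the layer list with a single layer equal to `merged_view(layers)`, which is what actually discards the history.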

Andrew: Well, including you and the six people listening to this show, maybe four others.

Jerry: So there’s a lot of nuance here. I thought the timing was just fascinating. Based on my experience, I knew it was going to be fast, but I did not expect it to be that fast. Now, in terms of where most of the credentials were used,

that was also very interesting, and in some respects not what I expected. The place where the most credentials were used was Pastebin, which is interesting, because Pastebin also had a relatively long time to first use. And so I think it means that people are more aggressively crawling it.

And then the second most common was a website. That one does not surprise me, because crawling websites has been a thing for a very long time, and there are lots and lots of tools out there to help identify credentials. Obviously it’s a little dependent on how you present them.

If you have a password.txt file, and that’s available in a directory index on your web page, that’s probably going to get hit on a lot more.

Andrew: I’m, you know what?

Jerry: Yeah, I know. You’re not even going to go there. Yep. You’re I’ll tell you the trouble with your mom. There you go. Feel better.

Andrew: I feel like she’s going to tan your hide.

Jerry: See, there you go. You got the leather joke after all. Just like your mom.

Andrew: Oh, out of nowhere,

Jerry: All right. Then GitHub was a distant third.

Andrew: which surprises me. I,

Jerry: That did surprise me too.

Andrew: Yeah. And I also know GitHub is a place that tons and tons of secrets get leaked, and GitLab and similar, because it’s very easy for developers to accidentally leak secrets in their code up to these public repos. And then you can never get rid of them.

You’ve got to rotate them.

Jerry: I think so. My view is it’s more a reflection of the complexity of finding them, because in a repository you’ve got to search through a lot of crap, and I don’t think the tools to search for them are as sophisticated as, let’s say, a web crawler hitting the Pastebin website.

Andrew: Which is fascinating: the incentive is on third parties finding the mistake, and they’ve got better tooling, then. Now, to be fair, GitHub for instance has plenty of tools you can buy, both homegrown at GitHub and from third parties, that in theory will help you detect a secret before you commit it. But they’re not perfect, and not everybody has them.

Jerry: Correct. Correct. And I also think it’s much more of a common problem, in terms of likelihood of exposure, for the average IT shop: you’re much more likely to see your keys leak through GitHub than through people posting them on a website or on Pastebin.

But knowing that if they do end up on Pastebin, somebody’s going to find them is, I think, important to know. In my experience, it’s Docker Hub and the code repositories, like PyPI and npm and GitLab and GitHub. That’s where it happens, right? That’s where we leak them.

It’s interesting that in this test they tried out all the different channels to see which ones were more or less likely to get hit on. But in my experience, GitHub and Docker Hub and whatnot are the places that you have to really focus and worry about, because that’s where they’re leaking.

Andrew: Yeah. It makes sense. It’s a fascinating study.

Jerry: Yeah. And it

sorry, go ahead.

Andrew: I would love for other people to replicate it and see if they get similar findings.

Jerry: Yes. Yes. And this is one of those things where, again, the tooling is imperfect; there’s not a deterministic way to tell whether or not your code has a password in it. There are tools, like you said, that will help identify them. To me, it’s important to take what I would call a three-legs-of-the-stool approach.

One is making sure that you have those scanning tools available. Another is making sure that you have tools available for storing credentials securely, like having HashiCorp Vault or something like that. And then the third leg of the stool is making sure that the developers know how to use them,

know that they exist, and know that that’s how they’re expected to actually use them. Again, it’s not perfect. It’s not a firewall. You’re still reliant on people, who make mistakes.

Andrew: Two questions. First of all, that three legged stool, would that be a milking stool?

Jerry: Yes.

Andrew: Second, plus a question, more comment. I would also try to teach your teams, hey, try to develop this software with the idea that we may have to rotate this secret at some point.

Jerry: Oh, great point. Yes.

Andrew: and try not to back yourself into a corner that makes it very difficult to rotate.

Jerry: Yeah. I’ll go one step further and say that not only should you do that, but you should at the same time implement a strategy where those credentials are automatically rotated on some periodic basis. Whether it’s a month, a quarter, every six months, a year, it doesn’t really matter. Having that change automated gives you the ability, in the worst-case scenario where somebody calls you up and says, hey, we just found our key on GitHub, to go exercise that automation without having to create it on the spot or incur downtime or whatnot.

Otherwise, the worst case is you’re stuck in this hellacious situation of: I’ve got to rotate the keys, but the only guy who knows how this application works is on a cruise right now, and if we rotate them, we know it’s going to go down. You end up in a really bad spot.

And I’ve seen that happen so many times.

Andrew: And then the CISO ends up foaming at the mouth like a mad cow.

Jerry: Yes, that’s right. I cannot wait for this to be over. All right, the last story, mercifully, is also from theregister.com. The title is “Microsoft patches scary wormable hijack-my-box-via-IPv6 security bug, and others.” It’s been a while since we’ve had one that feels like this. The issue here is that Microsoft just released a patch, as part of its Patch Tuesday, for a pre-authentication remote code execution over the network, but only over IPv6.

Andrew: which to me is holy crap, big deal. That’s really scary.

Jerry: incredible.

Andrew: And, I don’t know, I feel like this hasn’t gotten a ton of attention yet. Maybe because there wasn’t a website and a mascot and a theme song and a catchy name.

Jerry: Yes. And

Andrew: But if you’ve got IPv6 running on pretty much any modern version of Windows: zero-click RCE exploit, have a nice day. That’s scary. That’s a big deal.

Jerry: The better part is that it’s IPv6. Now, I guess on the downside, IPv6 typically isn’t behind things like NAT-based firewalls, so quite often you have a line of sight from the internet straight to your device, which is a problem. Obviously not always the case. On the other hand, it’s not widely adopted.

Andrew: But a lot of modern Windows systems are turning it on by default. In fact, I would wager a lot of people have IPv6 turned on and don’t even know it.

Jerry: Very true.

Andrew: Now, you’ve also got to have all the intermediate networking equipment supporting IPv6 for it to be a problem, but it could be.
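On Andrew's point that many people have IPv6 on without knowing it: one quick, best-effort check is whether the host can even create an IPv6 socket. This sketch only tells you the local stack exists; it says nothing about whether the machine is actually reachable over v6 from anywhere, which is what matters for this bug.

```python
import socket

def ipv6_available():
    """Best-effort check: can this host create an IPv6 socket at all?"""
    if not socket.has_ipv6:
        # Python was built without IPv6 support on this platform.
        return False
    try:
        # Creating (not binding) an AF_INET6 socket proves the stack exists.
        with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM):
            return True
    except OSError:
        # Stack compiled in but disabled at the OS level.
        return False

print("IPv6 stack present:", ipv6_available())
```

A positive result here is a reason to go check your actual interface configuration and firewall rules, not a verdict either way.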

Jerry: So the researcher who identified this has not released any exploit code or, in fact, any details other than that it exists. But now that the patch exists, I think it’s fair to say every freaking security researcher out there right now is trying to reverse those patches to figure out exactly what changed, in hopes of finding out what the problem was, because they want to create blogware and make a name for it and whatnot, I’m sure. This is a huge deal. I think it’s a four-alarm fire; you’ve got to get this one patched, like, yesterday.

Andrew: Yeah. It’s been a while since we’ve seen something like this. Like you said at the top of the story: wormable, zero-click RCE, just being on the network with IPv6 is all it takes. And I think everything past Windows Server 2008 is vulnerable. Obviously patches are out, but it’s gnarly. It’s a big deal.

Jerry: As you would say, get ye to the patchery.

Andrew: Get ye to the patchery. I’ve not used that lately much. I need to get back to that. Fresh patches available to you at the patchery.

Jerry: All right. I think I think we’ll cut it off there and then ride the rest home.

Andrew: Go do some grazing in the meadow. As you can probably imagine, this is not our first rodeo.

Jerry: Jesus Christ. Where did I go wrong? Anyway, I sincerely apologize, but I also find it weird.

Andrew: I don’t apologize in the least.

Jerry: We’ll, I’m sure there’ll be more.

Andrew: Look man, this is a tough job. You gotta add a little lightness to it. It can drain your soul if you’re not careful.

Jerry: Absolutely. Now I Was

Andrew: But once again, I feel bad for the cow and the calf. That’s terrible. That’s, I don’t wish that on anyone.

Jerry: Alright. Just a reminder that you can find all of our podcast episodes on our website, www.defensivesecurity.org, including jokes like that and the infamous llama jokes from way, way back. You can find Mr. Kalat on X at lerg.

Andrew: That is correct.

Jerry: And on the wonderful, beautiful social media site infosec.exchange, at lerg there as well. And I am at jerry on infosec.exchange. And by the way, if you like this show, give us a good rating on your favorite podcast platform. If you don’t like this show, keep it to yourself.

Andrew: Or still give us a good rating. That’s fine.

Jerry: Or just, yeah, that works.

Andrew: That’s allowed.

Jerry: That works too.

We don’t discriminate.

Andrew: Hopefully you find it useful. That’s our hope.

Jerry: That’s right.

Andrew: It’s us riffing about craziness for an hour, and hopefully you pick up a thing or two that you can take, use, and be happy.

Jerry: All right. Have a lovely week ahead and weekend. And we’ll talk to you again very soon.

Andrew: See you later guys. Bye bye.