Defensive Security Podcast Episode 273

The Joe Sullivan Verdict – Unfair? – Which Part? (cybertheory.io)

Fujitsu Details Non-Ransomware Cyberattack (webpronews.com)

5 Key Questions CISOs Must Ask Themselves About Their Cybersecurity Strategy (thehackernews.com)

Sizable Chunk of SEC Charges Vs. SolarWinds Dismissed (darkreading.com)

CrowdStrike CEO apologizes for crashing IT systems around the world, details fix | CSO Online

Summary:

Cybersecurity Updates: Uber’s Legal Trouble, SolarWinds SEC Outcome, and CrowdStrike Outage

In Episode 273 of the Defensive Security Podcast, Jerry Bell and Andrew Kalat discuss recent quiet weeks in cybersecurity and correct the record on Uber’s CISO conviction. They delve into essential questions CISOs should consider about their cybersecurity strategies, including budget justification and risk reporting. The episode highlights the significant impact of CrowdStrike’s recent updates causing massive system crashes and explores the court’s decision to dismiss several SEC charges against SolarWinds. The hosts provide insights into navigating cybersecurity complexities and emphasize the importance of effective communication and collaboration within organizations.

00:00 Introduction and Banter
01:52 Correction on Uber’s CISO Conviction
04:07 Recommendations for CISOs
09:28 Fujitsu’s Non-Ransomware Cyber Attack
12:13 Key Questions for CISOs
32:47 Corporate Puffery and SEC Charges
33:15 Internal vs External Communications
33:52 SolarWinds Security Assessment
36:36 CrowdStrike CEO Apologizes
37:16 Global IT Systems Crash
37:57 CrowdStrike’s Kernel-Level Issues
40:55 Industry Reactions and Lessons
42:58 Balancing Security and Risk
49:26 CrowdStrike’s Future and Market Impact

01:03:46 Conclusion and Final Thoughts

 

Transcript:


jerry: [00:00:00] All right, here we go. Today is Sunday, July 21st, 2024, and this is episode 273 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight as always is Mr. Andrew Kalat.

Andy: Good evening, Jerry. I’m not sure why we’re bothering to do a show. Nothing’s happened in the past couple of weeks.

Andy: It’s been really quiet.

jerry: Last week was very quiet.

Andy: Yeah, sometimes You just need a couple quiet weeks.

jerry: Yeah. Yeah, nothing going on. So before we get into the stories, a reminder that the thoughts and opinions we express on this podcast do not represent Andrew's employers

Andy: Or your potential future employers

jerry: or my potential future employers

Andy: as you’re currently quote enjoying more time with family end quote

jerry: Yes, which by the way Is highly recommended if you can do it.

Andy: You’re big thumbs up of being an unemployed bum.

jerry: It’s been amazing. Absolutely [00:01:00] amazing. I I forgot what living was like.

jerry: I’ll say it that way.

Andy: Having watched your career from next door-ish, not afar, but not too close: I think you earned it. I think you absolutely earned some downtime, my friend. You've worked your ass off.

jerry: Thank you. Thank you. It’s been fun.

Andy: And I’ve seen your many floral picks. I don’t, I’m not saying that you’re an orchid hoarder, but some of us are concerned.

jerry: I actually think that may be a fair characterization. I'm not aware of any 12-step programs for this disorder.

Andy: There’s a TV show called hoarders where they go into people’s houses who are hoarders and try to help them. I look forward to your episode.

jerry: I yes, I won’t say anymore. Won’t say anymore. So before we get into the new stories, I did want to correct the record on something we talked about on the last episode [00:02:00] regarding. Uber’s CISO that had been criminally convicted. Richard Bejtlich on infosec. exchange actually pointed out to us that it was not failure to report the breach that was the problem. It was a few other issues, which is what Mr. Sullivan had actually been convicted of. So I’m going to stick a story into the show notes. That has a very very extensive write up about the issues and that is from cybertheory. io. And in essence, I would distill it down as saying again, I guess he was convicted so it’s not alleged. He was convicted of obstruction of an official government investigation. He was convicted of obstructing the ongoing FTC investigation about the 2013 slash 2014 breach, [00:03:00] which had been disclosed previously.

jerry: The FTC was rooting through their business and asking questions, and unfortunately, apparently, Mr. Sullivan did not provide the information related to this breach in response to open questions. And then furthermore, he was convicted of what I'll summarize as concealment.

jerry: He was concealing the fact that there was a felony. And the felony was not something that he had done. The felony was that Uber had been hacked by someone and was being extorted. But because, he had been asked directly, Hey, have you had any, any issues like this?

jerry: And he said, no, that becomes a concealment, an additional concealment charge. And so the jury convicted him on both of those charges, not on failure to disclose a breach.

Andy: Yeah, it’s we went down the wrong path on that one. We were a little, we put out some bad info. [00:04:00] We were wrong.

jerry: So I’m correcting the record and I certainly appreciate Richard for for getting us back on the right track there.

jerry: This article, by the way, does have a couple of interesting recommendations that I'll just throw out there, and hopefully these are fairly obvious. One of them: do not actively conceal information about security incidents or ransomware payments, even if you're directed to do so by your management.

Andy: Yeah. I think, let’s put it out for a second. If you’re in that situation, what do you do? Resign?

jerry: Yes. Or do you,

Andy: yeah, I think that’s,

jerry: I mean you either resign or you have to become a whistleblower.

Andy: Yeah, that’s true. Your career has probably ended there at that company either way. Most likely. But it’s better than going to jail.

jerry: It’s a lot better than going to jail. I think what I saw is he Sullivan is up for four to eight years in prison, depending on how he’s sentenced.

Andy: Feds don’t like it when you lie to them. They really don’t like it.

jerry: No, they don’t. Next recommendation is if you’re, if your company’s under investigation, get help and potentially [00:05:00] that means getting your own personal legal representation to help you understand what reporting obligations you may have for any open information requests. And I say that because. In this instance, Sullivan had confirmed with the CEO of Uber at the time about what they were going to disclose and not disclose and the CEO signed off on it. And he also went to the chief privacy lawyer, who by the way, was the person who was managing the FTC investigation and the chief privacy lawyer also signed off on it.

Like the joke goes, HR is not your friend. Your legal team may also not be your friend. At some point, if you're in a legally precarious position, you may need your own counsel, which is crappy.

Andy: That is crazy. How much is that going to cost? And wow. I don't know, [00:06:00] one more reason to think long and hard before accepting a role as CISO at a public company.

jerry: Yeah, this, by the way I’m skipping over all sorts of good stuff in this story. So I invite everybody to read it. And it’s a pretty long read.

jerry: It talks about the differences between directors of companies and officers of companies, and the different obligations and duties they have related to shareholders and customers and employees and whatnot. And what was very interesting, the point they were making, is that CISOs don't have that kind of responsibility, right?

jerry: They don’t, they’re not corporate officers in the same way. And so what they, what, when you read the article, and I apologize for not sending it to you. I just realized, when you read the article it was very clear that there The author here was pointing out that the government and I suspect with, at the behest of Uber, was really specifically [00:07:00] going after Sullivan, right?

jerry: Because in exchange for testimony, people got immunity in order to testify against Sullivan. And that kind of went all up and down, including, you know, some of the lawyers. So, by the way, I think he clearly had some bad judgment here. But also, he wasn't the only one. This was a family affair, but he's the one who's really taken the beating. The next recommendation: paying a ransom in return for a promise to delete copies of data does not relieve your responsibility to report the issue under many global laws and regulations.

jerry: So just because you've gotten an assurance, after you've paid a ransom, that the data has been destroyed, you still, in almost all cases, are going to have a responsibility to report. And one of the things the author says is you really should let everybody know; there are vehicles to [00:08:00] inform, at least in the U.S., CISA and the FBI, and I'm sure there are similar agencies in other countries. To help insulate yourself, do not alter data or logs to conceal a breach or other crime. That seems pretty self-evident, but I think the implication is that

jerry: That’s what happened here. And then also lastly, do not create documents that, contain false information.

Andy: Shocking.

jerry: Yes. So again, nothing in there that is earth-shattering, but it's a good reminder.

Andy: yeah. And I, I don’t know if but our good friend Bob actually got out of the South American prison he’s been in for a while, and I heard from him, and he’s doing well, he’s got three new tattoos and lost two fingers, but otherwise he’s doing well. He was telling me that he once worked for a CISO that actually fabricated evidence for an internal auditor.

Andy: And thought it was a fun [00:09:00] game

Andy: and how he had a tough time knowing how to handle that.

jerry: And the ethics of how to disclose that, right?

Andy: Especially because as he described it, it was a very powerful CISO who had a reputation for retaliatory behavior to those who did not bow before him. Damn. So

jerry: yeah, Bob has all the best stories.

Andy: He does. He does. I look forward to hearing more about his South American prison stint.

jerry: All right. Our next story today comes from webpronews.com. The title here is Fujitsu Details Non-Ransomware Cyberattack. It feels like it's been so long since we've talked about something that wasn't ransomware.

Andy: I feel like these bad guys just, lost a good ransomware opportunity.

jerry: Clearly they did. So there's not a huge amount of detail, but basically Fujitsu was the victim of some sort of [00:10:00] data-exfiltrating worm that crawled through their network. They haven't published any details about who, how, why, or what was taken. But what was most interesting to me is that the industry right now is very taken by ransomware, or more pedestrian hacks of things to mine cryptocurrency or send spam or do those sorts of things.

jerry: It’s been a while since I’ve. I can think of the last time we actually had a, like a a destructive or, something whose job was not. To be immediately obvious that it’s in your environment.

Andy: Yeah. If I had to guess, and again the details are very sketchy, maybe this was some sort of corporate espionage. It appears, the way they described it, and again the details are sparse.

Andy: It was low and slow and very quiet, [00:11:00] trying to spread throughout their environment. It didn't get very far. They said, what, 49 systems? 49. And they had a lot of interesting caveats: it didn't get to our cloud, it didn't do that. So there's a lot of things it didn't do.

Andy: They didn’t tell us much about what I did do. But if I had to guess, maybe some sort of corporate espionage. Yeah, maybe that’s, or just random script kitties being like, you can never always attribute motivation. So I’ll say

jerry: this way, intellectual property theft, the motivations for that, I guess this is an exercise left to the reader, but.

jerry: They did say that data was exfiltrated successfully. They didn’t say what data but I, my guess is, they were after some sort of intellectual property theft. The reason for bringing this up is not that this has a whole lot of actionable information, but more that, that there are other threats out there still, it’s not all, it’s not all ransomware and web shells and that sort of stuff.[00:12:00]

Andy: Indeed, but to be fair, that is the majority of it. Protect your cybers. You know what helps? A solid EDR. That's a little foreshadowing for a future story.

jerry: We’ll get there. We’ll get there. All right. The next story comes from thehackernews. com and the title here is five key questions CISOs must ask themselves about their cybersecurity strategy.

Andy: Apparently, we need to add a sixth one, which is, Am I going to go to jail?

jerry: So the key questions here. Number one: how do I justify my cybersecurity budget? Actually, you know what, I'm going to back up for a second, because there were a couple of other salient data points in here. The first one: they pointed out that only 5 percent of CISOs report directly to the CEO, and that two-thirds of CISOs are two or more levels below the CEO in the reporting chain. Those two facts indicate a potential lack of high-level influence, to [00:13:00] use their words. I will tell you, the placement of the CISO in an organization isn't necessarily an indicator of how much power they have. Somebody who reports to the CEO is going to be more influential, for sure, but there are lots of different organizational designs, especially when you get into larger companies.

Andy: Sure. I would say also, if they're highly regulated, that CISO has a lot of inherent authority because of the regulations being enforced upon that organization by external third parties.

jerry: The Ponemon or Pokemon Institute found that only 37 percent of organizations think they effectively utilize their CISOs expertise.

jerry: I kind of wonder who they're asking there. Are they asking the CISOs? Anyway, I am curious about the [00:14:00] methodology behind that study. It doesn't necessarily surprise me. Just moving somebody into a different place in the organization doesn't necessarily mean that they're going to more fully use the talents or expertise of a CISO.

Andy: Yeah. If it’s anything in most organizations, it’s. They delegate to that CISO, not like what the assumption, is that the boards of the executive teams would be asking deep cyber questions of the CISO, which is an odd expectation.

jerry: It is an odd expectation. And related to what you're saying, Gartner finds that only 10 percent of boards have a dedicated cybersecurity committee overseen by a board member.

Andy: The way I would look at both of those stats is more: how much influence does the CISO have on whether the company operates in a less risky or more risky way, right?

Andy: It’s not about leveraging their expertise. It’s about how influential are they to [00:15:00] guide the company away from risk and what those trade offs are.

jerry: It also comes down to what the company values. This is financial risk management.

Andy: And the flip side is, I think a lot of executives think of CISOs as constantly crying that the sky is falling to get better budgets and build their empire with more people, and that this is a black hole we're throwing money into that, as this article goes into, we can't justify.

Andy: We can't prove the ROI on it.

jerry: Yes, exactly. So the key questions to ask yourself. Number one: how do I justify my cybersecurity budget? And that is, I think, a perennial challenge for anybody in security leadership. How do you justify, or demonstrate, that you are spending the right amount of money?

jerry: You’re not spending too much. You’re not spending too little. Generally [00:16:00] speaking, and this is like a, one of those mass psychosis. episodes. You do that by often benchmarking yourself against your competitors.

Andy: It’s a safe answer.

jerry: And they do it by benchmarking themselves against their competitors.

Andy: You’ve got the theory of the wisdom of crowds, right? What’s if I’m around the average, I must be doing fairly close to correct, but not all companies are the same. Not all companies have the same risk tolerance. Not all companies have the same, corporate structure in the same financial situation. So I get it. That’s where my mind goes. What percentage of G&A is spent on cyber in the, my industry? That’s what I’m going to go ask for.

jerry: Number two is how do I master the art of risk reporting, which, by the way, I think is not entirely disassociated from the last one, right? Because part of your budget, I dare say a major part of your budget, is intended to address [00:17:00] risk. What they're really pointing out here is: how do you communicate to the senior leadership team, the board of directors, and so on the level of cyber risk that you have in your organization, in terms that make sense to them?

Andy: That’s an incredibly challenging question, honestly.

jerry: Yeah. So something that was very interesting, to me at least, as I was reading this: look, I struggle with all these things too, right? All five of these things, and we haven't gotten to all of them yet, resonated with me. And what's super interesting is we all have to make this up on our own.

Andy: You didn’t go through that section of the CISSP?

jerry: There’s not like a GAAP, in, in in accounting, you have the GAAP generally accepted accounting principles. There’s really a gap type methodology for this in risk reporting. And [00:18:00] perhaps there should be.

Andy: This is why we are often accused of being an immature industry by other well-trodden business leaders who have a shared language.

Andy: We’re wizards and witches walking in speaking spells that they don’t understand out of black boxes that don’t make sense.

jerry: So I think this is an area where we can certainly mature. I would love to hear from anybody in the audience who thinks there's a common methodology that people can adopt here. I'd love to talk about that in a future episode. All right. Number three: how do I celebrate security achievements?

jerry: I have a problem with the way some of this was worded. This is in quotes, by the way: public recognition of attacks that were deflected can simultaneously deter attackers and reassure stakeholders of the organization's commitment to data [00:19:00] protection.

jerry: So I’m reminded of when I read that I, I immediately thought of Oracle’s unbreakable Linux or unhackable, what do you call it?

Andy: Yeah.

jerry: It’s like putting a chip on your shoulder and Begging someone to come in.

Andy: If I really dug into this: define what an attack is, define when I've deflected it. Like, is every firewall drop log entry an attack I stopped?

Andy: Like I’ve seen that kind of shenanigans. Or is it more, hey, we had an incident that started and we contained it. Or is it, I don’t know, every time my email security tool stopped a phishing attack? There’s all those sorts of metrics you can run, but is it valuable?

jerry: There’s all you get into like how many spams did I reject?

jerry: How many phishing emails did I reject? Which we make fun [00:20:00] of, right? Because they're metrics; they're not achievements.

Andy: But you’re trying to prove a negative here. This is, this has been the fundamental problem from day one with the industry is you’re spending money to stop something. How do you know if you hadn’t spent that money, that things would have happened?

jerry: The only thing I can say is, if you take a more capability-focused view rather than a metrics-focused view, I think that's perhaps where the opportunity lies. We had a gap in our authentication scheme because we didn't have multi-factor authentication.

jerry: We implement multi-factor authentication, we close a huge hole. Yes, a super simplistic example. But I will say, there's another aspect of this that you have to be aware of. Perhaps I worked alongside too many lawyers, [00:21:00] but one of the pitfalls of taking credit for doing some security thing is that you're tacitly admitting that you weren't doing it before.

jerry: Yeah,

Andy: that’s true.

Andy: Our new version no longer does X. Wait, you were doing X before? Don’t worry about that. The fact is we’re not doing it now.

jerry: We implemented multi factor authentication. Oh so wait a minute,

Andy: right? It’s a tough one. Yeah. I, but I also, You also can never be, if you’re completely risk zero and completely safe, you’ve either way overspent, or you’ve added so much friction to business, or you’ve inhibited the ability for people to do the jobs that you’re now breaking the business in a different way.

Andy: You’re not going to get to risk zero. So what’s the right balance?

jerry: Yeah. And the business doesn't want you to. I remember working effectively as the CIO for a company that [00:22:00] we both worked for once. And the COO pitched it to me in the form of a question: what is your approach to passing audits, Jerry?

jerry: Do you want to do really well? And I said, yeah, I think you should do really well. And he said no. He said, if you fail audits, you're going to get fired. And if there are no issues ever found, you're probably also going to get fired, because you're spending too much money.

jerry: So you've got to find the right balance, because that's what the business wants. If you are spending enough money to do everything perfectly, that's coming at the expense of other things the business could be investing in.

jerry: I think his point was not to accept too much risk, but that doing things perfectly, as you continue to move up the [00:23:00] maturity ladder, gets more and more expensive. And the marginal utility starts to decline.

jerry: Sure.

jerry: Anyhow, all that said, it is very important, if for no other reason than morale, to celebrate. But you've got to be smart about it.

Andy: I wouldn’t do it publicly, frankly.

jerry: I wouldn’t either.

Maybe internally, somewhat company-wide, or at least department-wide. You need to understand what motivates your people and what their reward systems are like, and work to those. But I am very much against putting that bullseye on your back by saying you're not hackable, or you're about to get a free audit.

jerry: Oh, yes. Oh, yes. So number four is how do I collaborate with other teams? [00:24:00] So again, this whole article is aimed at CISOs, and CISOs are almost always an executive-level position. And I learned a lot, right? I ended my career, I don't know if that's the end or if there's more to come, as an executive.

jerry: I learned a lot about what that means. And one of the most important aspects is that you do partner with those other people. That's, like, an intrinsic part of being an executive.

Andy: Yeah. It becomes about working well with other departments. And that means sometimes you’ve got to give and take and be willing to lose a battle to win the war, as they say.

jerry: So it’s super important, but I think this is not. It’s not security centric. This is a fundamental [00:25:00] tenet of what it means to be a leader in an organization.

Andy: And I think we as technology people often get promoted up with a background that isn't well suited for that, to be completely honest, to the point where many technical people score poorly on those quote-unquote soft skills.

Andy: But if you want to get to that level of the organization, it is required.

jerry: Yes. Yes, absolutely. Now, they point out that there are tangible security benefits. So for example, building bridges with HR allows you to do things like integrate security requirements into the onboarding and offboarding processes and whatnot.

jerry: And also, having those relationships throughout the organization is very key, especially in times of crisis, like in an incident or what have you. You've got to have the trust of the team, and the team needs to have trust in you.

jerry: Last [00:26:00] one is how do I focus on what matters most? This is a hard one. And I think in large measure it's because there are so many variables. Every company values things differently. They have different risk appetites. They're in different industries. They move at different speeds. They have different idiosyncrasies. They like different technologies, or they don't. And in many instances, companies hire a CISO based on who that person is, not necessarily what they need. They hire them based on reputation: Jerry's an incident-response-focused person.

jerry: We have lots of incidents.

Andy: Because of Jerry?

jerry: Maybe.

Andy: Oh, that’s fair. Job security.

jerry: I think you’ve got to take a step back and, as you’re figuring out how to focus on what matters most, you’ve got to, [00:27:00] first define what is it that matters most.

Andy: Yeah. The other thing I would add, and I don't know how to integrate this, but there's a cultural aspect of a company. The culture matters. So I may want to do a highly impactful security initiative, let's say something like, I don't know, DLP with a document classification system, and I may work for a large financial who's completely on board with that and very comfortable with that.

Andy: Great. But if you work for a small startup, you're getting a lot of pushback on that, because that's not their culture. From my perspective, and obviously I haven't been a CISO, but I've been senior level and am currently a director, you can't go faster than the culture of the company will allow if you're implementing potentially friction-inducing security controls.

Andy: And so that, I think, can help determine your priorities: what's acceptable to the business from an [00:28:00] impact perspective.

jerry: That’s a good, it’s a good point. I’ll tag on to that and say, one of the things I’ve learned over the years is that. Regardless of how much money you’re given, there is only so much change than an organization can undergo.

Andy: Oh yeah. That’s a good point.

jerry: In a certain period of time. So someone comes to you and says, you have a blank check, I need you to implement DLP, replace all our firewalls, and a bunch of additional very disruptive things. You're not going to be able to do it.

jerry: Even if you have the money to do it, there's a finite capacity for change in an organization, and that threshold is going to be different in different organizations. That's one of the challenges that you as a leader have to figure out: where that threshold is, so you don't cross it. Because if you cross it, things start to fall apart and you don't actually make progress on anything.

jerry: And I’ve seen that [00:29:00] happen time and time again, where, we, we, Hey, we’re all in we are going to blow the doors off the budget. We’re going to, we’ve got a lot of things that have to change.

Andy: Especially after a breach.

jerry: Yeah. And you end up having not accomplished anything.

jerry: You've burned through all the money, but you've not fully accomplished anything. And so you've got to be very, very measured in that.

Andy: And on the flip side, if you're an aggressive go-getter as a leader and you commit to some aggressive schedule, and then you don't get it done or don't get there as fast as you want, then you potentially have executives looking at you like you're ineffective.

Andy: So it’s a careful balance.

jerry: And that’s it. Leadership 101. Like you’ve got to, you’ve got to meet your commitments.

jerry: Alright. So they do talk a little bit about communication, effective communication between the CISO and [00:30:00] up. I think that permeated those five items. But, and this maybe goes back to the idea of GAAP-style reporting, I think we as a security community often do a pretty bad job of communicating in non-technical terms. We talk about CVEs and DDoS volumes and things like that, but translating that into business impact is really what the business needs from you.

Andy: When it’s also a likelihood percentage, not a guarantee. Correct.

Andy: Correct. And it’s at best a guess. Based on best practices and observing what happens to other companies and, lots of inputs and data, but there’s no guarantees.

jerry: I think one of the struggles, when I read articles like this, is that they often talk about [00:31:00] things like how many fewer incidents did you have, or how many fewer breaches did you have?

jerry: And using that as how you communicate the effectiveness of your program, or what you need to improve. I think the reality is that breaches tend to be very transformational, pivotal incidents. They're often not really countable. You don't stay in a CISO role and have so many breaches that you can show trends over time, right?

jerry: It's just, if you're in that position and you have that kind of data, something's wrong.

Andy: Yeah. We need to learn from other people's breaches, right?

jerry: Exactly.

jerry: All right. Moving on, the next one comes from Dark Reading. The title is Sizable Chunk of SEC's Charges Against SolarWinds Tossed Out of Court. I will [00:32:00] admit I have not read all 107 pages of the judge's ruling, so shame on me for that.

Andy: You’re an unemployed bum, what else do you have to do?

jerry: Absolutely nothing. The SEC filed a lawsuit against SolarWinds and SolarWinds' CISO, alleging lots of things. Everything was dismissed except for the statements the CISO had made about the security program at SolarWinds prior to the breach. So there were inaccuracies in their 8-K, which, for those of you who don't know, is a form that you have to file in the wake of a breach, as required by the SEC, and it apparently had some inaccuracies.

jerry: And so that was part of the case. There were other statements made post-breach that the judge, as I found in a different article, described as corporate puffery that is not [00:33:00] actionable. I thought that was pretty funny.

Andy: That is pretty funny. I think that needs to be a thing. I got to work that into more and more conversations.

jerry: It’s interesting that, A lot of the reaction to this, which means that there are apparently other implications in the ruling. A lot of a lot of the, post judgment discussion has been, Oh gosh, this is really a good thing because it allows teams internally to communicate amongst themselves without fear of what you write being used against you.

jerry: However, that actually isn't obvious as part of what the SEC was charging them with. I really want to go read those 107 pages to understand what exactly the SEC was alleging. But in some regards it's neither here nor there. What is most interesting is the charges that do remain, which are those that [00:34:00] basically said: before the breach, the SolarWinds CISO had come in, performed an assessment, found lots of problems, and documented those problems, but then would go externally to customers and perhaps investors and make claims about the robustness of their security program.

jerry: And that is what the SEC is still going after, and that is what the judge is allowing them to continue pursuing.

Andy: Because the theory the SEC is going after, I'm assuming, is that accurate information should be disclosed to the investing public so they know how to appropriately measure the risk of investing in a company, and all the rest that comes with being a public company.

Andy: They're very particular about making sure that the information that is disclosed is accurate and not misleading. We [00:35:00] see all sorts of stuff about misleading statements, just going back to Elon Musk's tweets about Tesla getting him in a lot of trouble with the SEC. They take that stuff very seriously.

jerry: Oh yes. Yes, indeed. So probably more to come on this after I have a chance to read the court decision, but I would definitely say: have a measured approach to communicating, especially if you're aware that there are security gaps or weaknesses in your environment.

jerry: If you end up in a position where you are representing things radically differently in internal communications versus external communications, you should probably take a step back and ask yourself what you're doing. I guess it probably won't be a problem if you're not breached, but if you are, that's going to be Exhibit A.

jerry: Yep. And as we've now [00:36:00] seen, the company isn't going to have your back. They're not going to stand in front of you and take the bullet. They're going to be like, "Oh my God, our CISO, he was a terrible guy."

Andy: Which, by the way, is why people get so frustrated with statements put out by companies sounding like legalese and business-speak: because they're protecting themselves with very specific language for exactly these sorts of circumstances.

jerry: Yes.

jerry: So there's one more story. It's a small thing, probably not even worth talking about. I wasn't even sure if we were going to get to it. You know what? Let's talk about it. This one comes from CSO Online, and the title is "CrowdStrike CEO apologizes for crashing IT systems around the world, details fix."

Andy: Yeah, it was it was a thing.

jerry: It was a thing. I have to tell you, I woke up on Friday to a text from my wife, who had [00:37:00] already been up for hours, asking me if I was happy that I was no longer in corporate IT. And I was like, what?

Andy: What just happened?

jerry: What just happened? So of course I jumped on to infosec.exchange and quickly learned that 8.5 million Windows systems around the world had simultaneously blue-screened and would not come back up without intervention.

Andy: Like on site physical intervention.

jerry: Yes. Apparently, though, if you rebooted it somewhere between three and 15 times, it might come back on its own.

Andy: I heard that too. I have no idea how accurate that is.

jerry: I don’t know either.

Andy: But that’s going to be the new help desk joke. Have you tried rebooting it 15 times?

jerry: Yes. Yes, it is already meme fodder, for sure.

Andy: Insane amount of memes. So what happened?

jerry: CrowdStrike, I think most people know, is an [00:38:00] EDR agent that runs on, I think, probably 10 to 15 percent of corporate systems around the world. It's a significant number. They deliver these content updates, which are roughly equivalent to what we used to think of as antivirus updates. And these, by the way, are multiple-times-a-day updates. These aren't new versions of the software; they're fast, quickly delivered things. What happened was, on Friday, CrowdStrike pushed out an update to how it analyzes named pipes, and the definition file they pushed out had some sort of error. The nature of the error hasn't been [00:39:00] disclosed, at least that I've seen, and that error caused Windows to crash, basically.

jerry: Like, crash hard.

Andy: Crash hard.

jerry: But with a blue screen, basically, and it put the machine in a blue screen loop at that point. Because of how CrowdStrike integrates with Windows, it would blue screen again immediately as part of the startup process. So, as you described, you would end up in a blue screen loop.

jerry: So the only option you had was to go into Safe Mode and remove a file. People came up with all sorts of creative ways of doing that with scripts, and Microsoft even released an image you could put on a thumb drive and boot from. Now, where it went horribly wrong for some people, and ironically I think Azure was one of the most problematic places, was where you have disk encryption, [00:40:00]
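The file removal Jerry describes was widely scripted by admins that weekend. Here is a minimal Python sketch of that kind of cleanup script; the `C-00000291*.sys` file pattern follows CrowdStrike's public remediation guidance, but this is an illustration written for this post, not an official tool, and on a real machine it had to be run from Safe Mode or the Recovery Environment against `C:\Windows\System32\drivers\CrowdStrike`.

```python
import glob
import os

def remove_bad_channel_files(driver_dir: str) -> list:
    """Delete files matching the faulty channel-file pattern; return the paths removed."""
    # The pattern below is the one named in CrowdStrike's public guidance.
    pattern = os.path.join(driver_dir, "C-00000291*.sys")
    removed = []
    for path in glob.glob(pattern):
        os.remove(path)
        removed.append(path)
    return removed

# On an affected host (from Safe Mode), this would be invoked roughly as:
# remove_bad_channel_files(r"C:\Windows\System32\drivers\CrowdStrike")
```

Healthy channel files with other numbers are left alone; only the faulty pattern is matched.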

Andy: Right. You're probably most often using BitLocker, and if you don't have your recovery key, you can't access the disk without fully booting.

jerry: So lots of IT folks got a lot of exercise over the past weekend. And by the way, if they're looking a little worn out, if they're dragging this week, go buy them a donut, say hi, and say thank you. Because they've had a bad couple of days.

Andy: Yeah, no kidding. It's also amazing how many people have turned into kernel-level programming experts on social media in the past three days.

jerry: 100%. They're formerly political scientists and constitutional scholars and trial attorneys and epidemiologists and climate scientists and whatnot, so it does not surprise me that they are also kernel experts.

Andy: There's been a lot of really intense finger-pointing and [00:41:00] debate going on, when honestly we don't even fully know the entire story yet.

jerry: No. There's a whole lot of hoopla about layoffs of QA people and the impact of return-to-office, but we really don't know what happened.

jerry: What I find most interesting is that this is a process that happens multiple times a day, for the most part, and it hasn't failed like this before. So something went horribly wrong, and I don't know if that was because they skipped the process, or because there was a gap in coverage, like the set of circumstances that arose here was just never accounted for and nobody thought it was a possibility.

Andy: Which, by the way, happens in almost every engineering discipline. We learn through failure. Okay, let me back way up and say I am not in any way a CrowdStrike apologist. In fact, I'm not a huge fan of CrowdStrike. [00:42:00] However, I'm seeing a whole lot of holier-than-thou "you should have XYZ'd" on social media, and that puts me in a contrarian mood to counter those arguments with a cold dash of reality: there are a whole lot of reasons companies do what they do in the way they do it. I don't want to get on my entire soapbox when there's more to cover here, but I think it's very easy to point out failures without weighing them against benefits.

jerry: Yeah, absolutely. CrowdStrike works quite well for a lot of people. And I dare say it has saved a lot of asses and a lot of personal data.

Andy: There are certain people, like somebody we know who very commonly comments on these things, who were quoted in a couple of articles saying that this just proves automatic updates are a bad idea. I don't know that that's true. I would say you can't [00:43:00] say that unless you measure the value of the automatic updates that have stopped breaches and problems precisely because they were so rapid and so aggressive, and weigh that against this outage. You have to balance both sides of that scale. Yes, this was a massive screw-up. It caused massive chaos, a huge loss of income for a lot of people, and disrupted a lot of lives. Okay, that's bad. But weigh that against how many problems were solved and avoided by those rapid, automatic updates, which is so difficult to measure, but must be thought about.

jerry: I think the most problematic aspect is that the value is amortized, spread out over thousands or tens of thousands of customers over [00:44:00] a long period of time, but this failure impacted everybody at the same time and caused mass chaos, mass inconvenience, and mass outages, all at once. So I completely agree with you, but I think that's what's setting off everybody's alarm bells: oh my gosh, we have this big systemic risk, which realistically hasn't materialized perhaps as often as you might think. And I know, by the way, there are a lot of people talking smugly about how good it is to be on Linux and not on Windows, because this issue only impacted Windows. But I will tell you, as the former operator of a very large install base of CrowdStrike on Linux: it had problems, and a lot of them. We had big problems. I don't know that there was ever one as catastrophic as this, where it happened all at the same time, [00:45:00] but it's not as if the CrowdStrike agent on Linux hasn't also had issues.
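Jerry's amortization point can be put in back-of-envelope terms. Every number in this sketch is invented purely for illustration; none come from the episode.

```python
# Toy expected-value comparison of rapid auto-updates versus a rare
# correlated failure. All inputs below are made up for illustration.
UPDATES_PER_YEAR = 3 * 365            # several content pushes per day
P_BREACH_AVOIDED_PER_UPDATE = 1e-4    # small, diffuse benefit of each push
BREACH_COST = 5_000_000               # cost of one breach that didn't happen

P_BAD_UPDATE_PER_YEAR = 0.1           # rare failure, but it hits everyone at once
OUTAGE_COST = 2_000_000               # cost of one correlated outage

expected_benefit = UPDATES_PER_YEAR * P_BREACH_AVOIDED_PER_UPDATE * BREACH_COST
expected_outage_cost = P_BAD_UPDATE_PER_YEAR * OUTAGE_COST
```

With these made-up inputs the diffuse annual benefit (about $547,500) exceeds the expected correlated loss (about $200,000), yet the loss, when it lands, lands on everyone in the same week, which is exactly why it dominates the headlines.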

Andy: Sure. But the question becomes: what problems would you have had if you hadn't run it? What value did it bring?

jerry: That's obvious, I think. For many organizations, this is the way they identify that they've been intruded on.

Andy: Yeah, cyber insurance companies basically mandate that you have EDR for ransomware containment. For good or ill, it's table stakes now. And by the way, those cybersecurity companies out there currently casting stones at CrowdStrike, saying "we don't do things this way, and we would never have this problem": good luck. You are the definition of glass houses.

Andy: Interestingly, that 8.5 million stat was apparently, according to Microsoft, less than 1 percent of the Windows fleet in the world, which I find fascinating: only 1 percent caused this much [00:46:00] chaos. So I wonder how many of the impacts were second-order, not direct, but secondary fallout from some critical system somewhere going down. But we've seen problems like this before.

Andy: Just on smaller scales. Even Windows Defender has had somewhat similar outages that caused problems. This is the whole debate of Windows Update: do you automate, do you trust it, do you test it? And I go back to, okay, you may have a problem, but is that problem worth the efficiency and speed of getting a patch out there before you get hit with the exploit for whatever recently patched issue? You can't just look at one side of the equation, which is so frustrating to me.

Andy: And so many people are out there clout-farming right now, just being "I told you so's" or "you guys are just dumb," and they're not looking at the big picture at all.

jerry: On the converse side, there [00:47:00] were huge impacts, and I think we do have to do better. But that said, I don't think it's a wise idea to run out and uninstall your CrowdStrike agent.

jerry: There are other technologies, other ways of hooking in, that are perhaps less risky, but not no-risk, for sure.

Andy: And are you talking about kernel-level integration versus non-kernel-level?

jerry: Yeah, like eBPF versus kernel modules and whatnot. But that, by the way, solves some problems and creates other problems. We've not seen big failures from the companies taking those other approaches, but have we not seen them because those companies are just smaller, or because the failures don't happen?

Andy: And look, let me be very clear: I am not a coder. I don't fully understand what I'm talking about here. I'm just going off a rough understanding from trying to get my arms around this issue.

Andy: So take everything I'm about [00:48:00] to say with a grain of salt, but my understanding is that the advantage of being in the kernel is that you've got much deeper access that's faster, more efficient, and more tamper-resistant than running in user space. As a security control trying to stop things like rootkits, which were a big problem back in the day, not so much now, I think that's where it came from. We've seen a number of vendors saying that running at the kernel level is just irresponsible: "we don't do that, and here's why; we're differentiated because of XYZ." Okay, cool. But there must be a trade-off. Sure, you can't crash the system like that.

Andy: But are you also as capable at detection, and what is your resource impact? I honestly don't know. And when you listen to these other vendors who don't use kernel mode, they of course have very compelling arguments, and they drink their own Kool-Aid and believe their own marketing.

Andy: Maybe they're right. I don't know. But I also don't think CrowdStrike is just completely irresponsible for running in the [00:49:00] kernel, as some people are saying. I think it has a benefit, which is why they did it. Whether that benefit is worth it is the question, but I don't think they're just malicious.

Andy: So I don't know. I just see a lot of people going off on this, and admittedly I don't know enough to really participate in that conversation, but it's a tough one.

jerry: I do think, my understanding at least, is that CrowdStrike was, or is, moving in the direction of a similar strategy of not using a kernel module; they just aren't there yet. So don't misunderstand: they are using this thing called eBPF.

jerry: It's a very uniform way of getting visibility into what's happening in the kernel without actually loading your own driver or module into the [00:50:00] kernel. One of the big problems I had with CrowdStrike was this constant churn of: do I patch my kernel, or do I leave CrowdStrike running? Because I can't do both.

jerry: And out of sequence

Andy: on supporting each other.

jerry: Yeah. And that, by the way, comes back to the fact that they have to create a kernel module tailored to different kernel versions. Depending on how the kernel changes from one version to the next, sometimes it's fine; most of the time it's not fine. And so you end up with this problem. It's less of a problem on Windows, because Windows kernels are pretty stable, and I think Microsoft does a lot more interlocking with vendors like CrowdStrike than Linux does. But in any event, I [00:51:00] agree with your thesis there: we are benefiting from technology like this and assuming that it could never go wrong. That's probably not a good assumption, as we've now seen. But at the same time, I do think this could have gone better.

Andy: Absolutely. But even assuming this could go wrong at some point, is planning for it to go wrong again a wise use of time and money, versus all the other problems you're likely to deal with? Is this a black swan event that isn't likely to happen often enough to bother building mitigations for?

jerry: So I was standing in line for dinner with my wife, and she asked me this question, because at the time a lot of flights were still [00:52:00] canceled, and I didn't have a good answer: what can companies do to avoid this? Can they prepare in some kind of disaster-recovery way?

jerry: And the reality is, I don't think you can. You could say, you know what, I'm going to get rid of CrowdStrike and go with SentinelOne or somebody else, but you're taking a leap of faith that they won't also have a problem, or that they won't be blind to some kind of attack that CrowdStrike could have seen.

Andy: You could duplicate your infrastructure with failover and run a different EDR on each of the backups, but that's so much more cost and complexity. Are you going to run them both well? Are you going to have mastery of two different vendors? That introduces a whole lot of complexity. It's easy to just say "go do it," but it's very complicated for a maybe.

Andy: We talk about uptime and percentages. And again, I'm not a CrowdStrike defender, that's the funny part. I'm just [00:53:00] frustrated with the thought leaders out there who are pounding the table with "look how bad this is" without putting it in context.

Andy: If we look at the percentage of these channel file updates that went out without a problem versus the ones that didn't, is that a fair estimation of success versus failure rate? Are we holding CrowdStrike to five nines? Six nines? Three nines? Two nines? There's no perfect solution. And by the way, the closer you get to perfect, the more expensive it gets.

Andy: When we talk about uptime, five nines is a hell of a lot more expensive than two nines or three nines. So are we being unrealistic in saying this should never happen and CrowdStrike should go out of business? Okay, fine: then how much more are you willing to pay for a system that never has this problem? And how slowly are you willing to accept updates against new, novel attack techniques, which is what they said they were pushing out here? Because they've got this tension between getting these updates out fast versus doing all the checks and QA we all want to see.

Andy: So what are you willing to trade [00:54:00] off? Cost, complexity, time, risk, and the risk that if you don't get the update fast enough, self-propagating ransomware hits you before you got it, and "oh, sorry, we were in QA at the time." Are you willing to accept that? We've got to be adults in the room and look at all sides of the equation, not just point fingers at somebody when they screw up without recognizing the other side.

Andy: And again, I am not a CrowdStrike apologist here. I'm frustrated with the mindset of building thought leadership by pointing out the bad things without ever balancing them against the good. Sorry, I'm a little frustrated.

jerry: It’s what we’ve built our industry around. I know.

Andy: Am I wrong? Not to put you on the spot, but am I wrong?

jerry: The problem I have with the situation is that it reliably crashed every Windows computer it landed on.

Andy: I'm not sure that's a hundred percent true, and [00:55:00] the only reason I say that is I've seen some social media imagery of a bunch of check-in kiosks at an airport where only one of five was down. I don't know why; I have no idea why. It's very flimsy evidence, but I would say let's get a bit more root cause analysis before we can completely say that's true. It was obviously highly effective, though, and very immediately impactful to a super high percentage of the machines this was installed on.

jerry: So I guess my question, my concern, is how this happened. I'm assuming, and I feel pretty confident, that they have some kind of testing pipeline where, before they push an update out, it goes through some standard QA checks, and something got missed.

jerry: I'm assuming either something didn't happen and it didn't crash their [00:56:00] version of Windows in their test pipeline, or it did crash and wasn't detected, or somebody skipped that step altogether. I don't know; my concern lies there. What was the failure mode? Hopefully they'll come out of this better than they were. In general, as an industry, we advance by kicking sand in other people's faces.
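The kind of safety net being assumed here can be sketched as a staged rollout: push to a small canary pool first and halt the fleet-wide release on any crash. This is a generic illustration, not CrowdStrike's actual (undisclosed) pipeline; the `Host` class and its survival check are invented stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    applied: list = field(default_factory=list)

    def survives(self, update: dict) -> bool:
        # Stand-in for "load the update and see whether the host stays up":
        # here a malformed update is simply one whose content is missing.
        return update.get("content") is not None

    def apply(self, update: dict) -> None:
        self.applied.append(update["id"])

def staged_deploy(update: dict, canaries: list, fleet: list) -> int:
    """Push `update` to the full fleet only if every canary host survives it."""
    crashed = [h.name for h in canaries if not h.survives(update)]
    if crashed:
        raise RuntimeError(f"halting rollout: canary crash on {crashed}")
    for h in fleet:
        h.apply(update)
    return len(fleet)
```

The point of the sketch is the gate itself: a bad update should take down a couple of canaries, not 8.5 million machines.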

jerry: That's not

Andy: wrong. And by the way, I'm not trying to say, "oops, that sucked, let's move on." Obviously we have to learn from this, we have to understand the implications, and we have to adapt. I'm sure every other vendor doing similar things is very curious about what went wrong, right?

Andy: And hopefully we can all learn from it. Hopefully CrowdStrike will be very transparent. There's no guarantee, but hopefully. And mind you, they're going to get massively punished in the market for this, as they should. There are going to be a lot of people not [00:57:00] renewing CrowdStrike over this incident, and I have no problem with that.

Andy: That's how the industry works, and how they handle this incident is going to be highly impactful to how well they keep their customer base. But they're also massively widely deployed; some people say too widely deployed. I don't know, 14 percent of the industry, I think I heard somebody say. I don't know how accurate that is.

Andy: They're one of the big boys in the EDR space. And frankly, having met a number of their people, they know it and they have some swagger, so maybe this will knock them down a notch. Admittedly, when I first heard the news, I was like, ah, couldn't have happened to a better company.

Andy: But the other aspect is Microsoft. Man, they're taking a bunch of crap over this because it was their systems, but as far as we know it was really not their fault at all. And they're trying to step up: they're putting out recovery information, they're putting out tooling.

Andy: They're trying to help, which I appreciate. But [00:58:00] it's certainly a mess, don't get me wrong. And I know I went down a ranty rabbit hole of the stuff I didn't like, partially because I'm assuming people who listen to the show already know the details, so there's no reason for us to rehash all the basics. We're trying to get into what we think people care about.

jerry: Yeah, I guess the net point is: shit happens. I think we have to be pragmatic. EDR is an incredibly important aspect of our controls, and I think auto-update is as well. The whole point is to be as up to date as you can be, because the adversaries are moving very fast.

Andy: Yeah. And we as an industry are not moving away from auto-update. AI is the antithesis of manual updating, guys. Buckle up: if you don't like auto-updating, you're not going to like AI much.

jerry: So we [00:59:00] have to find a way, I think, to coexist, and I don't have a lot of magic words to offer. This happened to CrowdStrike, but it happened to McAfee too, and it is an incredible coincidence that the CEO of CrowdStrike was the CTO of McAfee when that happened. It's happened to Symantec, and I think it's happened to Microsoft multiple times.

jerry: Now, I think the difference between those and this, again, is the time proximity: all eight and a half million of these systems went down at roughly the same time. And in contrast with a lot of the others, these were almost all corporate or business systems.

jerry: Because you generally don't run CrowdStrike on your home PC, although Amazon has actually been trying to sell me CrowdStrike for my home computer.

jerry: You run it on the stuff that you care [01:00:00] about, because it's really freaking expensive.

jerry: And when those systems go down, the world notices. When eight and a half million of them go down all at the same time, it becomes big news, and lots of people consternate over it. And look, again, as an industry there's so much clamoring for airtime, and we love a crisis.

jerry: We love to talk about why you shouldn't be using kernel modules, why you shouldn't have auto-updates, why you shouldn't do this or that. It's what we do, and it's annoying. And sometimes you can cross the threshold of causing more problems than you're solving, because if you're trying to solve a problem that may never happen again in your career by ripping things out or duplicating your EDR environment, it's just not necessarily cost-effective.

jerry: So I don't have a great answer. Look, this has been an industry-wide problem, [01:01:00] and I don't even think everybody's fully recovered. I don't know how many of the eight and a half million systems are still sitting there with a blue screen, but it's not zero. I'll tell you that.

Andy: Delta is still canceling flights, just as one small example.

jerry: Yeah, I've had packages delayed; UPS is saying "your shipment is delayed because of a technology failure." So it's far-reaching, but I think we have to be thoughtful and not have a knee-jerk reaction to what happened here.

jerry: I think CrowdStrike has a lot to answer for. And one of the quotes in this article is hilarious. I'll read it here, quoting the CEO: "The outage was caused by a defect found in a Falcon content update for Windows hosts," Kurtz said, as if the defect was a naturally occurring phenomenon discovered by his [01:02:00] staff.

Andy: I'd guess probably 80 lawyers are all over every single sentence being uttered right now. Oh yeah.

jerry: The first law is you don't accept responsibility, at least not until you know more. But look, I'm guessing the fog of war is thick over there right now.

jerry: I'm sure they know what happened by now. And like you said, hopefully they'll be transparent about it, but

Andy: Holy cow. What's interesting is, if you get into their blog about the technical details, 04:09 UTC is when they put out the bad file, and it was fixed by 05:27 UTC. So about an hour and twenty minutes.

jerry: 80 minutes.

Andy: Yeah. So that's pretty fast, right? Obviously they knew they had a problem pretty quickly, but it was too late. Once it's out there, especially because the machines are down, you can't push a fix to them. It's a worst-case scenario for them, in that they couldn't [01:03:00] push out an auto-fix. And as a side note, we're also seeing a lot of bad actors jumping on this, trying to send out malicious content under the guise of helping with CrowdStrike issues.

Andy: So that’s always fun.

jerry: Dozens, maybe hundreds of domain names registered, many of which were malicious, some of which were parodies. Good times. Anyway, not a lot happened on Friday or last week. Hopefully it'll be another quiet week.

Andy: It is for you, unemployed bum.

jerry: It's interesting, I don't know how I ever had time to work.

Andy: I think you need to put some air quotes around "work," but sure.

jerry: Ouch.

Andy: All right, we have gone pretty long today. Maybe we should wrap this bad boy up.

jerry: Yes, indeed. I appreciate everybody's time. Hopefully this was interesting to you. If you like the show, you can find it [01:04:00] on our website, www.defensivesecurity.org. You can find this podcast, and the 272 that preceded it, on your favorite podcast app, except for Spotify; I'm still working on that. And you can find Lerg where?

Andy: I'm on X slash Twitter at L-E-R-G, and also on infosec.exchange at L-E-R-G, Lerg.

jerry: You can find me at jerry on infosec.exchange, and not so much on X anymore. And with that, we will talk again next week. Thank you, everybody.

Andy: Have a great week. Bye.

jerry: Bye.

 

Defensive Security Podcast Episode 272

Links:

https://www.darkreading.com/cybersecurity-operations/a-cisos-guide-to-avoiding-jail-after-a-breach

https://www.csoonline.com/article/2512955/us-supreme-court-ruling-will-likely-cause-cyber-regulation-chaos.html/

https://sansec.io/research/polyfill-supply-chain-attack

https://www.securityweek.com/over-380k-hosts-still-referencing-malicious-polyfill-domain-censys/

https://www.tenable.com/blog/how-the-regresshion-vulnerability-could-impact-your-cloud-environment

 

Transcript
===

[00:00:00]

jerry: All right. Here we go. Today is Sunday, July 7th, 2024, and this is episode 272 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. This is a newly reestablished record, twice in a week.

jerry: twice in a week. I can’t believe it.

Andrew: I know. Awesome. Yeah, you just had to quit that crappy job of yours that provided income for your family and pets and everything else. But now you're an unemployed bum.

jerry: Yeah, I can podcast all I want, 24/7. I think I'm going to become an influencer. I'm just going to be live all the time now.

Andrew: You could. I look forward to you asking me to subscribe and hit that notify button.

jerry: That’s right. Hit that subscribe button

Andrew: Like leave a rating and a comment

jerry: Like and subscribe. All right, getting with the program, we're getting back into our normal rhythm. As per normal, we've got a couple of stories to talk about. The first one comes from Dark Reading, and the title is "A CISO's Guide to Avoiding Jail After a Breach."

Andrew: Before we get there, I want to throw out the disclaimer that thoughts and opinions do not reflect any of our employers, past, present, or future.

jerry: That’s a great point. Or, my cats.

Andrew: Unlike you, I have to worry about getting fired.

jerry: I still have a boss. She can fire me.

Andrew: That’s called divorce, sir. But true.

jerry: Yeah.

Andrew: Anyway, back to your story.

jerry: Anyway, yeah: "A CISO's Guide to Avoiding Jail After a Breach." This is following on an upcoming talk at, I think it's Black Hat, about how CISOs can try to insulate themselves from the [00:02:00] potential legal perils that can arise as a result of their jobs. It'll be interesting to see what's actually in that talk, because the article itself, in my estimation, despite what the title says, doesn't actually give you a lot of actionable information on how to avoid jail. They do quote Mr. Sullivan, who was the CISO for Uber.

jerry: And they give a little bit of background, and how it's interesting that he is now a convicted felon, although I think that's still working its way through the appeals process. He previously was appointed to a cybersecurity board by President Obama.

jerry: And before that he was a federal prosecutor. In fact, as the article points out, he was the prosecutor in the first DMCA case, which I thought was quite interesting. I didn't know that about him. What's interesting is that this article at least is based largely on [00:03:00] interviews with him, including recommendations on things like communicating with your board and your executive leadership team. But I'm assuming that he had done that at Uber.

Andrew: Yeah, this is such a tough one for me, and it makes, I think a lot of good people make references in the article. I want to shy away from being a CISO if there’s this sort of potential personal liability. When, there’s a lot of factors that come into play about why a company might be breached that aren’t always within the control of the CISO, whether it be budget, whether it be focus, whether it be company priorities, and you have an active adversary who is looking for any possible way to get into your environment.

Andrew: So what becomes the benchmark of what constitutes breach negligence, up to the point of going to jail? That’s the one that [00:04:00] I’ve struggled with so much, and I think those who haven’t really worked in the field can very easily just point to mistakes that are made, but they don’t necessarily understand the complexity of what goes into that chain of events and chain of decisions that led to that situation.

Andrew: Every job I’ve been in where we were making serious decisions about cybersecurity, it was a budgetary trade-off and a priority trade-off, and an existential threat to the company if we don’t do X, Y, and Z. Demands come from five or six different organizations at the same time up to that CFO or CEO, and they have to make hard calls about where those resources and priorities go to keep people employed. And you pair that with a very hostile third party intentionally trying to breach you. It’s a tough situation, and I don’t think any of us knows what the rules look like at this point to keep yourself out of [00:05:00] trouble. You’ve been in this position, not the going-to-jail part, but this threat was much more meaningful to you in your last role than it is to me.

jerry: It is very uncomfortable. I’ll tell you, when the Uber CISO got charged and the CISO of SolarWinds got charged, that’s an uncomfortable feeling, an exposed feeling. In criminal law, there’s this concept of strict liability.

jerry: And strict liability basically means the thing happened, and because the thing happened and you are responsible for the thing, there are no mitigating factors. Your state of mind, your motivations, none of that matters in a strict liability case.

jerry: And to some extent, it feels like that in this instance, I don’t think it really is, although, when you’re a CISO sometimes that thought can cross your mind. Now in the article, they actually point out that, though the CISO is the [00:06:00] lightning rod when things go wrong. It is not just the CISO that is responsible for, what went wrong.

jerry: As they describe it, it takes a community, and the results of that community are, as we’ve now seen, or is alleged, being pinned on a particular individual. And I know this from having read the Uber case. I’m not so familiar with the SolarWinds case, although I’m obviously familiar with what happened there. With Uber, it was a situation where they had, basically, a data breach, and the allegation was that the adversary was trying to hold it for ransom. They successfully negotiated, at least this is my understanding of how the case went, a payment through [00:07:00] the bug bounty program to the adversaries, perhaps adversaries isn’t the right word, who allegedly deleted the data, and because of that, they didn’t report the breach.

jerry: And so it was really the failure to report that breach which the government was coming after him for, basically being deceptive to investors. And it’s not necessarily that he was malicious or what have you; basically, my layman’s read is that he was defrauding

jerry: investors by withholding information about a breach that he was obligated to report. So that’s a tough situation. And what concerns me is that this is somebody who was a federal prosecutor. I had plenty of competent legal counsel surrounding me.

jerry: And that was a good thing. It felt good. And I’m quite certain he did too; further, he himself [00:08:00] was a prosecutor. And so I have a hard time accepting, and maybe it’s just very naive of me, I have a hard time accepting that he was actually trying to misrepresent things or hide things.

jerry: I guess that’s where I’m at on this one. It feels bad and the article points out that, because of this, one of the, one of the whispers as they describe it in the industry is that it’s forcing people who are qualified for the role and understand the perils that they face to shy away from taking that role.

jerry: And that then leads to people who are maybe not as qualified taking the role and then obviously not doing as good of a job. And therefore actually, the net effect is a weaker security posture.

Andrew: Yeah. If we try to get some advice out of this, or give some advice out of this, the one thing they mentioned is, for lack of a better [00:09:00] term, tie some other people in the organization to the same decision, right?

Andrew: Make sure that your board is aware and your executives are aware, and that you’re not the only one holding the risk bag at the end of the day. If you have to own the risk yourself, then you need to have formal control. Now, in this case, in theory, he got in trouble because he didn’t notify the SEC, and it was a public company and a material breach.

Andrew: And, so stockholders weren’t informed more so than he was negligent in his cybersecurity duties in terms of technical controls and audits and that sort of thing. However, that feels the way things are going. We hear more and more calls for hold companies accountable directly and legally with risk of jail for breaches.

Andrew: And this, there’s a lot of nuance here that’s not exactly what happened here. But I find that very troubling and [00:10:00] obviously, I have a bias because I’m in the industry and I would be at risk of that potentially. But I just don’t think it’s that simple. There’s no CISO that has that much control over an environment that they should be solely responsible for taking the fall if a breach were to happen, although that does happen all the time, but it’s one thing to lose your job is another thing to go to jail.

jerry: Yeah. And I think the author here points out that, at least as Mr. Sullivan describes it, he feels like he was put forward by Uber as a sacrificial lamb. I guess what I don’t really understand is how much better it would have been for him if he had done a more effective job at creating what I’ll loosely call co-conspirators within the company.

jerry: I think what they’re trying to say is that you as a CISO should go to the board, to your CEO, to whoever, and articulate the risk, [00:11:00] not with the intention of them becoming co-conspirators, but of them saying: gosh, now that I know about it, I don’t want to go to jail; I’m going to reallocate the money or do whatever is required in order to address the particular risk. Now, I think in this instance, it wasn’t a case of we have to go spend more money on security. It was more: hey, we had this issue, do we disclose it or not?

jerry: And I think that’s maybe a slightly different take. I would assume, by the way, just again having played in this pool, that he didn’t make that decision alone.

Andrew: Sure. Part of me, and maybe this is not exactly apples to apples, but I think about a lawyer advising an executive on the legality of something. That executive can take that advice or reject that advice. A CISO advising a company on the legality or outcome or [00:12:00] risk of a decision doesn’t always make that decision. They’re somewhat beholden to their leadership on which way the company wants to go.

jerry: There was an unwritten aspect to this that I wanted to discuss a bit. And that is: the subtext of all of this, I think, is going to create an adversarial relationship between the CISO and the CISO’s employer, because it feels to me like what the government would have preferred is for the CISO to run to the government and say: hey, my employer isn’t acting ethically.

jerry: I’m not necessarily saying that’s what happened in Uber’s case or any of these cases, but I think that’s what the government is trying to push. Now, granted, there’s a not-so-gray line beyond which you have an ethical duty to rat on your employer.

jerry: You can imagine all sorts of situations not [00:13:00] even in the realm of security where, you would be obligated to go and and report them. But it feels to me they’re trying to lower that bar.

Andrew: Yeah, I can see that. Unfortunately, this is probably going to be messy to get sorted out, and it’s going to take a lot of case law and a lot of precedent. That makes me nervous. If I were offered a CISO opportunity at a public company, I’d probably think real long and hard about passing on it, or about trying to assure some level of protection to avoid this problem.

jerry: Our next story throws some sand in the gears there. This one comes from CSO Online, and the title here is US Supreme Court Ruling Will Likely Cause Cyber Regulation Chaos.

jerry: And so unless you’ve been living under a rock, or perhaps just not in the US, you’re probably aware that the Supreme Court, I guess it was last [00:14:00] week, overturned what has been referred to as the Chevron deference doctrine. The name comes from the oil company Chevron, and it stems from a 1984, so 40-year-old, ruling by the Supreme Court that I’ll summarize to say that ambiguous laws passed by Congress can be interpreted by regulators like the FCC, the FDA, the SEC and so on. In the US, at least, a lot of regulations are very high level. To pick a stupid example, a law will say: use strong authentication. And then it’ll be up to a regulator to say strong authentication means that you use multi-factor authentication

jerry: That isn’t SMS based.

jerry: That initial ruling was intended to establish that courts aren’t experts [00:15:00] in all matters of law.

jerry: And by default, courts should be deferring to these regulators. That has stood the test of time for quite a long time, and now it was overturned in this session of the Supreme Court, whatever you want to think about the sensibility of it.

jerry: I think the challenge that we now have: I have made the joke on social media that right now the most promising career opportunity has got to be trial lawyer, because there are going to be all manner of court cases challenging different regulations which, in the past, were pretty well established as following rules set by the executive branch in the U.S. But now, as this article points out, that covers things ranging from the SEC’s requirements around data breach [00:16:00] notifications to the Gramm-Leach-Bliley Act of 1999.

jerry: There’s a broad range of regulations in the security space which are likely to be challenged in court, because the laws behind them basically don’t prescribe the way they’re currently being enacted. And so we should assume that these will be challenged in court, and given the Supreme Court’s ruling, the established prescription coming out of the executive branch is no longer to be deferred to.

jerry: And it’s unclear at this point, by the way, how courts are going to pick up their new mantle of responsibility in interpreting these things, because judges aren’t experts in security. So I think that’s why they’re calling it chaos right now, because we don’t really know what’s going to happen. Over the long term, I think things will normalize.

Andrew: Yeah. Businesses hate uncertainty.

jerry: [00:17:00] Yes.

Andrew: And for good or ill, businesses can have a huge impact on government legislation. So I think this will get sorted out eventually, but I think you’re right. What we counted on, or at least how we tried to work with these regulatory agencies and understand these rules, has now all changed, and I think you’re right.

Andrew: There are probably going to be a ton of these rules that have the force of law being challenged now in court. And I think ultimately Congress probably has the reins to fix this if they want, but I think that’s another interesting problem. SCOTUS is saying: look, you regulatory agencies are taking the power of law into your own hands, and we don’t like that.

Andrew: So the power of law comes from Congress and elected officials in Congress. Then, Congress, you need to do a better job of defining these rules specifically. That presents its [00:18:00] own set of interesting challenges, because how well will they do that? And we’ve seen a lot of well-intentioned laws, especially in very complex areas, have their own set of problems because of all of the trade-offs and compromises that go into legislative work in Congress.

Andrew: So it will be very interesting. This could have a lot of wide-ranging impacts. And again, to your point, I’m not getting anywhere near whether they should or shouldn’t have done this, but I think the intent was: unelected regulators shouldn’t make law, Congress should make law. But that’s easier said than done.

jerry: Yeah. I think it’s that, plus the Constitution itself very directly says that it is up to the judicial branch to interpret laws passed by Congress, and not the executive branch. And that’s [00:19:00] where, if you read the majority opinion, to sum it up, that’s what they’re saying.

jerry: I think the challenge is that when the Constitution was written, it was a much, much simpler time.

Andrew: There are a lot of interesting arguments that you see out there, and a lot of very passionate opinions on this. So I’m trying very hard to stay away from the political rhetoric around it, and I concur that this throws a lot of accepted precedent around our industry into question.

jerry: But, going back to the previous story, I don’t know, again, I’m not a, I’m not an attorney. However, if I were Joe Sullivan, I would feel like I have a new avenue of appeal.

Andrew: Sure. Yeah. The SEC made this law, in essence, would be his argument, and based on this particular ruling by SCOTUS, [00:20:00] that was an inappropriate rule, or an inappropriate law.

Andrew: And therefore, obviously I’m not a lawyer, because I’m not articulating this like a lawyer, but he could say that’s why I shouldn’t have been tried and convicted, and please politely pound sand.

jerry: I do think the opinion did say something along the lines of: it doesn’t overturn previously held court cases, and people are due their day in court.

jerry: So if he has an avenue for appeal, that’s how the justice system works. This is hot off the presses; I think the echoes are still circling the earth. We’ll be seeing the outcome of this for a while, and I don’t think we exactly know what’s going to happen next. Stay tuned, and we’ll check in on this periodically.

jerry: Okay. The next one comes in two stories, one from Sansec and one from SecurityWeek. This is [00:21:00] regarding the polyfill.io issue. I’m hesitant to call it a supply chain attack, but I guess that’s what everybody’s calling it.

Andrew: Come on, get on the bandwagon.

jerry: I know, I know.

Andrew: If you want to be an influencer, man, you got to use the influencer language.

jerry: It makes me feel dirty to call it a supply chain attack. Why? What makes me so uncomfortable calling it a supply chain attack? That’s a good question, and the answer is I don’t really know.

jerry: It just feels wrong.

Andrew: Did your mother talk to you a lot about supply chain attacks?

jerry: See that’s, maybe that’s the problem.

Andrew: Okay. Imagine you’re walking in a desert and you come across a supply chain attack upside down stuck on its back. Do you help it? But you’re not turning it over. Why aren’t you turning it over, Jerry?

jerry: I don’t even know where this is going.

Andrew: I had to lighten it up after the last two stories, man. You were being a downer.

jerry: Polyfill is [00:22:00] a JavaScript library that many organizations included in their own websites. Oversimplifying it: it enables some types of more advanced or newer functions of modern web browsers to work in older versions of web browsers. And I don’t fully understand the sanity behind this. Maybe this will start to cause some rethink on how this works, but this JavaScript library is called by reference. Rather than it being served up by your web server, you are referring to it as a remote document hosted, in this instance, on polyfill.io.

Andrew: So instead of the static code living in your HTML code, you’re saying: go get the code snippet from this spot and serve it up.

jerry: Correct. It’s telling the web browser to go get the code directly. What happened [00:23:00] back in February, and I don’t fully understand what precipitated this, was that the polyfill.js library and the polyfill.io domain were sold to a Chinese company. And that company then altered the JavaScript library to, depending on where you’re located and other factors, either serve you malware or serve you spam ads and so on.
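One widely recommended mitigation for third-party scripts loaded by reference, worth noting here, is Subresource Integrity (SRI): the page pins a cryptographic hash of the expected script, so the browser refuses to execute altered code. As a hedged sketch (the script bytes below are a made-up placeholder, not the real polyfill code), computing an SRI value looks like:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384, the commonly used digest)."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical: hash the copy of the third-party script you reviewed and trust.
pinned = sri_hash(b"console.log('polyfill placeholder');")

# The page would then reference the script with the pinned hash, e.g.:
# <script src="https://cdn.example/polyfill.min.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
print(pinned)
```

If the remote host later serves different bytes, the browser blocks the script rather than running it, which would have blunted exactly this kind of domain takeover for sites that pinned a hash.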

Andrew: So you’re saying there are not hot singles in my area ready to meet me?

jerry: It’s surprising, but there probably are actually.

Andrew: carry on.

jerry: They can’t all be using Polyfill. Anyhow, there were, depending on who you believe, somewhere from 100,000 [00:24:00] websites that were including this polyfill.io code, to tens of millions as purported by Cloudflare. So at this point, by the way, the issue is somewhat mitigated.

jerry: I’ll come back to why I say somewhat mitigated. The polyfill.io domain, which was hosting the malicious code, has been taken down. Most of the big CDN providers are redirecting to their own local known-good copies, but again, they haven’t solved the underlying issue: it’s still pointing to JavaScript code that’s hosted by somebody else. Although, presumably, companies like Cloudflare and Akamai and Fastly are probably more trustworthy than Funnull in China.

Andrew: Yeah yeah, because they actually came out and denied any malicious intent and cried foul on this whole thing too, which was interesting.

jerry: Yes. [00:25:00] But people have done a pretty good job. In fact, the Sansec report gives a pretty thorough examination of what was being served up, and you can very clearly see it’s serving up some domain lookalikes. I find it hilarious: googie-anaiytics.com, which is supposed to look like google-analytics.com. And I suppose if it were in all caps, it would probably look a lot more like that. But the other interesting thing is that these researchers noticed that the same company also owns several other domains, some of which have also been serving up malware.

jerry: And those have also been taken down, but there are also others that haven’t been seen serving malware yet and are still active. So it’s probably worth having your threat intel teams take a look at this, because my guess would be that at some point in the future the [00:26:00] other domains that this organization owns will probably likewise be used to serve up malware.

Andrew: Bold of you to assume that all of us have threat Intel teams.

jerry: Fair enough. You do you just, it just may be you.

Andrew: Correct. Me and Google.

jerry: Yes.

Andrew: And my RSS feed of handy blogs, but yes,

jerry: that’s right,

Andrew: but yeah, they seem to have, oh, a wee bit of a history of being up to no good.

Andrew: This particular Chinese developer.

jerry: Yes. Defending against this, I think, is pretty tough beyond what I said on the supply side. I think it’s a bad idea. Maybe I’m a purist, maybe I’m old school and should be put out to pasture, but I think it’s risky, as we’ve seen many times now.

jerry: This is far from the first time this has happened with including by reference things [00:27:00] hosted as part of some kind of open source program. I’m not necessarily picking on open source there; I think it happens less often with commercial software. But we’ve now seen it happen quite a few times with these open source programs, including things like browser extensions and whatnot.

jerry: Now, having said that, you can imagine a universe where this existed simply and solely as a GitHub repo, and companies, instead of referring to polyfill.io, were downloading the polyfill code to their own web servers. And most likely you would have between a hundred thousand and 10 million websites serving locally hosted code, but then again, nobody updates,

Andrew: right? We wouldn’t be impacted, because we’re running 28-year-old versions.

jerry: So maybe not.

Andrew: Yeah, but boy, to your point, it gives me a little bit of the [00:28:00] heebie-jeebies to say that the website you’re responsible for is dynamically loading content that you don’t have control over and serving it. But that’s perhaps very naive of me.

Andrew: I don’t do much website development. I don’t know if that’s common, but as a security guy, that makes me go, Ooh, that’s risky. So we don’t control that at all. Some third party does. And we’re serving that to our customers or visitors to come to our website and we just have to trust it. Okay. But that probably exists in many other aspects of a modern supply chain or a modern development environment where you just have to trust it and hope that.

Andrew: people are picking up any sort of malicious behavior and reporting it, as they did in this case, which is helpful. But then it causes everybody to scramble to find where they’re using this, which then goes to: hey, how good is your software bill of materials or software asset management program? How quickly can you identify where you’re using this?

Andrew: And then there was a lot of confusion when this first came out, because there are different sorts of [00:29:00] styles or instances of polyfill; some were impacted, some were not. What truly was at risk? The upside is that the domain was blackholed pretty quickly. Anyway, it seems so fragile, right? You’ve got this third-party code that you don’t control. You don’t know who’s at the other end. You’ve probably ignored that it’s even out there and forgotten about it, especially since this is defunct code. And that’s a whole other area that drives me a little crazy at night: how do you know when open source software is no longer being maintained and has silently or quietly gone end of life, and you should be replacing it? I’ve contemplated things like: hey, if there hasn’t been an update within one year, do we call that no longer maintained?

Andrew: I don’t know. I don’t have a good answer. I play around with that idea with my developers, because we want to make sure that code is well maintained and the third-party code that we’re using is kept up to date. We don’t want end-of-life code in general, but I don’t know what [00:30:00] constitutes end of life in open source anymore.
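The "how quickly can you find where you’re using this" question above can be approximated with a simple scan, even without a formal SBOM. A minimal sketch (the file extensions and the single suspect domain are illustrative assumptions; a real list would come from your own threat intel sources):

```python
import re
from pathlib import Path

# Domain reported as suspect in the polyfill.io incident coverage;
# extend this pattern from your own intel feeds.
SUSPECT_PATTERN = re.compile(r"polyfill\.io", re.IGNORECASE)

def find_references(root: str, exts=(".html", ".js", ".jsx", ".ts")) -> list[tuple[str, int]]:
    """Return (path, line_number) pairs for lines referencing the suspect domain."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in exts or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SUSPECT_PATTERN.search(line):
                hits.append((str(path), lineno))
    return hits
```

This is no substitute for a real software bill of materials, but it answers the third-party questionnaire question of "are we exposed" in minutes rather than days.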

jerry: I think we will eventually see some sort of health rating for open source projects. And that health rating will be based on things like: where are the developers located in the world? How long on average does it take for reported vulnerabilities to get fixed? How frequently are commits and releases of code being made? And other things like that. But that doesn’t necessarily mean a whole lot. Look at what happened with, what was it, XZ.

Andrew: Yeah. Yeah.

jerry: That was a very, arguably, won’t call it healthy, right?

jerry: But it was an active project that had a malicious a malicious contributor who found ways of contributing malicious code in ways that were difficult to discern. And then, you look at what happened with open SSL and then open SSH and [00:31:00] it’s not a guarantee, but I think

jerry: it would be good to know that, hey, you have code in your environment that is included by reference and it was just bought by a company who’s known to be a malicious adversary. And we don’t have that. We don’t have any way of doing that today.

Andrew: So you want like a restaurant health inspector to just show up and be like, all right, show me your cleanliness.

jerry: So I think that we will get there.

Andrew: You want a sign in the window: this restaurant slash GitHub repository earned a B-minus, but has great brisket.

jerry: Sometimes you just have to risk it. Good brisket is good brisket. So I think that’s going to happen, but what that doesn’t solve is the demand side. That solves, I think, part of the supply side; you still have to know to go look for the health score.

Andrew: Or have some sort of tooling, some third-party [00:32:00] software security suite, that scans your code and alerts you on these things in some way, in theory. And I’m sure, by the way, that there are probably vendors out there that think they do this today and would be happy to pimp us on their solution.

jerry: Oh, I feel quite certain that my LinkedIn DMs will be lit up with people wanting to come on the show to talk about their fancy AI-enabled source code analyzer.

Andrew: But it’s just one more thing devs now have to worry about, and security teams have to worry about. It’s in competition with developing new features and new functionality and fixing bugs. This is now just one more input to worry about, which competes for priorities, which is why it’s not that simple.

jerry: It’s very true. Way back when I was a CISO.

Andrew: You mean two weeks ago?

jerry: Way back. The way I had always characterized it is: using open source software is like adopting a puppy. You can’t ignore it. It needs to be cared for. You have to feed it and clean up after [00:33:00] it and walk it and whatnot. I don’t think that is a common approach. I think we typically consume it as a matter of convenience and assume that it will be good forever. I think we’re starting to get better about developing an inventory of what we have through SBOMs, and that of course will lead to better intelligence on what needs to be updated when it has a vulnerability, and that’s certainly goodness. But I think the end-to-end process in many organizations needs a lot of work.

Andrew: Yeah. I also think this is never going to go away. Rightly or wrongly, companies will always be reliant on third-party open source software now, so we’ve got to find a way to manage it. And this is also a relatively rare event, given the hundreds or maybe thousands of open source projects that people use regularly.

Andrew: This doesn’t happen very [00:34:00] often.

jerry: It’s the shark attack syndrome: you hear about it every time it happens, so it seems like it happens often. But when it does happen, it can be spectacular.

Andrew: It’s interesting, because when these things hit a certain level of press awareness, it also drives third-party risk management engagement between vendors. Inevitably, at least in my experience, when we see something like this hit, if you are a vendor to other businesses, you will see their third-party risk management teams spinning up questionnaires to their suppliers: hey, are you impacted by this, and what’s your plan?

Andrew: Which then drives another sense of urgency and a sense of reaction. That may be false urgency that’s taking your resources away from something that’s more important. But you can’t really ignore it. The urgency goes up when customers are demanding a reaction in this way, whether or not it’s truly your most important risk that you’re working, it doesn’t matter.

jerry: Having come from a service provider, I [00:35:00] lived that pain, and I’m sure you do too. You have to deal with it both ways. You have your own customers who want you to answer their questions, but then you have your own suppliers, and if for no other reason than to be able to answer your customers’ questions with a straight face, you’ve got to go and ask them. I think one of the challenges with that is: where does it end? I’m a supplier to some other company, and I have suppliers, and they have suppliers, and they have suppliers; it’s turtles all the way down. Assuming everybody acted responsibly and they all got their vendor questionnaires out right away, how long would it take to actually be able to authoritatively answer those questions?

jerry: I don’t know. I think there’s a lot of kabuki dance there, if that’s an appropriate term.

Andrew: It’s executives saying, we have to do something, go do something. [00:36:00]

jerry: That’s true.

Andrew: And so then the risk management folks or third party risk manager or whoever do something and then they could point, Hey, look, we did something.

Andrew: We’re waiting for responses back from Bob’s budget cloud provider.

jerry: There’s a lot of hand-wringing that goes on. I will also say, having worked in certain contexts, you may end up with small suppliers who may not know they have to go do something.

jerry: And so your questionnaire may in fact be the thing that prompts them to go take action because their job is to deliver parts. They’re not a traditional service provider. They have some other business focus.

jerry: In those instances, it could very well be, because like you said not everybody has a threat intel team, that you are in fact telling them they have something to worry about. It doesn’t make it any less annoying, though, especially if you have a more robust security program in place. Because I don’t [00:37:00] know, in my experience, I’m not sure anything genuinely beneficial has come from those vendor questionnaires, other than potentially, like I said, occasionally telling a supplier who was otherwise unaware.

Andrew: I think it breeds a false sense of security that you’ve got a well managed supply chain and a well managed third party risk management.

Andrew: I question the effectiveness.

jerry: Yeah I can agree with that.

Andrew: So, not to be too cynical about it, but I always wonder: what are you going to do? How soon could you shift to another provider? Okay, let’s play this out. Let’s say you ask me, and I’m running Bob’s budget cloud provider.

Andrew: Do I have polyfill? And I say, I don’t know. What are you going to do? You’re going to cancel your contract. Maybe you’re going to choose to go someplace else. Maybe it’s going to take time. Yeah, it could influence your decision to renew or continue new [00:38:00] business or whatnot. But

jerry: I think what you’re trying to say, and I agree, is that it doesn’t change the facts for that particular situation.

Andrew: Right yeah. And do you want me to spend time answering your questions or go fixing the problem?

jerry: I want you to do both, dammit. That’s their view. What do I pay you for?

Andrew: I don’t know. It’s a tough spot. I don’t have a really warm fuzzy about these sorts of fire drills that get spun up around big-media infosec events.

Andrew: I think it’s the shark attack: do you have sharks in your lagoon? Maybe.

jerry: I feel like this whole area is very immature. It’s a veneer that, in most instances, I think is worse than useless because it does create a false sense of security.

Andrew: Yeah, I agree. And how do you know I’m not lying to you when I fill out your little form?

jerry: That’s the concern. We’re lying and there was a breach, like you would, you as the [00:39:00] customer would, crucify them in the media, or in a lawsuit

Andrew: Yeah, at the end of the day, it either becomes a breach of contract or, I don’t know, I’m not a lawyer, and I haven’t fully articulated my thoughts on this yet. But there’s something I’ve just never really felt was very effective or useful about these sorts of questionnaires that go out around these well-publicized security events.

jerry: Yeah, I agree. I agree. I think there is likely something sensible in it, as a consumer.

jerry: It is helpful to know the situation with your suppliers and how exposed you are, because your management wants to know, hey, what's my level of exposure to this thing? And you don't want to turn your pockets inside out and say, I don't know. But at the same time, I'm not sure that the way we're doing it today is really establishing that level of reliable intelligence. The last story comes from Tenable. The title is "How the regreSSHion vulnerability could impact your cloud environment." [00:40:00] regreSSHion is cutely spelled with the SSH capitalized. This regreSSHion vulnerability was a recently discovered slash disclosed vulnerability in OpenSSH.

jerry: I think it affects versions released between 2021 and as recently as a couple of weeks or months ago, and it can, under certain circumstances, allow for remote code execution. So, kind of bad.

Andrew: Yeah, remote code execution, unauthenticated, against OpenSSH that's open to the world.

Andrew: Correct, but it's not that easy to pull off.

jerry: Correct. There are a lot of caveats, and it's not necessarily the easiest thing to exploit. I think they say it takes about 10,000 authentication attempts. And even with that, you have to understand the exact version of OpenSSH and information about the platform it's running on, like [00:41:00] whether it's 32-bit, 64-bit, et cetera.

Andrew: Yeah. And I think those tests were against 32-bit. It's much tougher against 64-bit, because you've got to basically get the right address collision in memory, is my understanding. Take that with a little grain of salt, but that was my understanding.
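Since exploitability hinges on the exact OpenSSH version, a first triage step is just parsing the banner a server advertises. A minimal sketch, assuming the commonly reported affected range for the regreSSHion regression (8.5p1 up to, but not including, the fixed 9.8p1; verify the exact bounds against the official advisory before relying on them):

```python
import re

# Assumed affected range for the regreSSHion regression:
# OpenSSH 8.5p1 <= version < 9.8p1 (check the advisory for exact bounds).
AFFECTED_MIN = (8, 5)
FIXED = (9, 8)

def parse_openssh_version(banner: str):
    """Extract (major, minor) from a banner like 'SSH-2.0-OpenSSH_9.6p1'."""
    m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    return (int(m.group(1)), int(m.group(2))) if m else None

def banner_in_affected_range(banner: str) -> bool:
    """True if the advertised version falls inside the assumed affected range."""
    v = parse_openssh_version(banner)
    return v is not None and AFFECTED_MIN <= v < FIXED
```

In practice you would feed this the banner collected from port 22 (for example via `ssh-keyscan` or a raw socket read); a banner check alone can't tell you whether a distro has backported the fix, so treat a hit as "investigate", not "vulnerable".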

jerry: But not impossible. And so the point of this post is, OpenSSH is exposed everywhere.

jerry: Like, it's everywhere. And they point back to cloud, and I think they point to cloud for two reasons. Reason number one: cloud incentivizes, or makes it really easy and in some instances preferable, to expose SSH as a way of managing your cloud systems. And in those instances, there's almost always going to be OpenSSH. Unless it's RDP, then it's all good.

Andrew: It’s much preferred.

jerry: RDP is way better.

Andrew: There’s a GUI. There’s pictures.

jerry: There’s pictures. That’s right.

Andrew: A mouse works.

jerry: How [00:42:00] much better could it get? And then the other reason they are picking on cloud providers is that, as a consumer, with most cloud providers you're provisioning your servers using images provided by the cloud provider. And those images may not be updated as frequently as maybe they should be. So when you provision a system, it is quite likely to come vulnerable right out of the gate, and you've got to get in there and patch it right away.

jerry: You’ve got to know that’s your responsibility and it’s not actually protected by the magic cloud security dust.

Andrew: At least, not your cloud. Maybe Bob’s budget secure cloud is, I don’t know, that joke didn’t work out, but you make an interesting point. And I think I was talking to somebody about this and I was trying to make the example that when we started doing this stuff pre cloud, because we’re old. [00:43:00] The concept of something being exposed to the internet was a big deal. Everything was in a data center behind a firewall, typically. And typically if you wanted to expose something to the internet, like an SSH Port or an HTTP port, an HTTPS port, that usually had a lot of steps to go through, and most companies would also make sure that you’re hardening it and making sure that, it really needed to be exposed.

Andrew: But with cloud, and I think you referenced this, it's exposed by default most of the time. There's not this concept of a thick firewall where only the most important, well vetted, well secured things would be exposed to the internet. There is no more quote unquote perimeter. Everything's just open to the internet. And that's the way the paradigm is taught now with a lot of cloud providers: there isn't necessarily this concept of private stuff in the cloud versus public stuff. It's just stuff. And yeah, they talk about limiting ACLs and only opening the ports you have to, and that sort of thing.

Andrew: But I think it's super easy and super simple for people to just build something, because "I've got to [00:44:00] get to it," and open SSH or whatever, or literally RDP, and do what they've got to do. And to your point, yeah, most of these images are not hand-rolled images. It's some sort of image that you grab off of some catalog and spin up, and it probably has a bunch of vulnerable stuff in it.

Andrew: But SSH we think of as safe-ish. Even security folks say, only have SSH open. But this, to me, speaks more and more to the fact that it still matters what your attack surface is, and you still shouldn't be exposing stuff that doesn't need to be exposed to the internet, because you never know when something like this is going to come along, even on your quote unquote safe protocols.

Andrew: So the less you have exposed, the less you have to worry about this. Now, I'm not saying that the only thing that gets attacked is the stuff that's open to the internet; we know that's not true. But it's one more hurdle that the bad guy has to get through. And again, it buys you more time to manage stuff if it's not directly exposed as an attack surface to a random guy coming from [00:45:00] China.

jerry: So the recommendations coming out of this are a couple. First, obviously, patch the vulnerability. Second, when you are using cloud services and provisioning systems with a cloud-provided image, make sure you keep them patched; even newly provisioned systems are probably missing patches, and they need to be patched posthaste. Then there's limiting access: they talk about least privilege, and they talk about that on two axes.

jerry: The first axis is network access to SSH: not everything should have access to SSH. It is not a bad practice to go back to the bastion host approach, a relatively untrusted system that you use as a jumping-off point to get deeper into the network, so that you don't have every one of your systems' SSH exposed to the internet. It gives you [00:46:00] one place to patch, and a lot more ability to focus your monitoring and whatnot. The other axis they point out is that, in the context of cloud providers, you can assign access privileges to systems. So if your system is compromised, it's going to inherit all the access that you've given it through your cloud provider. That could be access to S3 storage buckets or other cloud resources that aren't directly on the compromised system; because that system was delegated access to other resources, it provides basically seamless access for an adversary to get to them. And that's another, in my view, benefit of that relatively untrusted bastion host concept that doesn't have any of those privileges associated with it.
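One concrete way to act on the network-access axis is to audit for security groups that leave SSH open to the world. A minimal sketch, assuming AWS-style security group structures (in a real audit you would feed this the output of boto3's `ec2.describe_security_groups()`, which returns this shape):

```python
def world_open_ssh_groups(security_groups):
    """Return IDs of security groups allowing 0.0.0.0/0 inbound on port 22.

    Expects dicts shaped like EC2 DescribeSecurityGroups results:
    each group has "GroupId" and a list of "IpPermissions" rules.
    """
    flagged = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            # IpProtocol "-1" means all protocols/ports; otherwise the
            # FromPort..ToPort range must span 22. Missing bounds are
            # treated as the widest range, which errs toward flagging.
            from_p = rule.get("FromPort", 0)
            to_p = rule.get("ToPort", 65535)
            covers_ssh = rule.get("IpProtocol") == "-1" or from_p <= 22 <= to_p
            open_world = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in rule.get("IpRanges", []))
            if covers_ssh and open_world:
                flagged.append(sg["GroupId"])
                break  # one matching rule is enough to flag the group
    return flagged
```

This deliberately over-flags (a wide port range that merely includes 22 is still reported), which is the right bias for an audit script; IPv6 ranges (`Ipv6Ranges`) would need the same check in a fuller version.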

Andrew: Yeah, it’s a tough sell. I don’t think most cloud [00:47:00] architects think about it that way at all.

jerry: You are absolutely right. They don’t think about that until they’ve been breached. And then they do. Yeah. And I can authoritatively say that given where I came from.

Andrew: That’s fair. And part of the goal of this show is to try to take lessons. So you don’t have to learn the hard way.

jerry: There is a better way. And it’s not, no, it’s not as convenient. Not everything that we used to do back in the old days, when we rode around on dinosaurs was a bad idea. There are certain things that, probably are still apt even in today’s cloud based world.

jerry: I think one of the challenges I've seen is, how best to describe it, the bastardized embracing of zero trust. In concept, it's a great idea. But it's like the whole NIST password guidance that came out a couple of years ago, where people looked at it and said, oh, NIST says I don't need to change my [00:48:00] passwords anymore. It does actually say that, but in the context of several other things that need to be in place. In the same way, zero trust presupposes certain other things. I think where zero trust starts to break down is when you have vulnerabilities that allow the bypassing of those trust enforcement points.

Andrew: Yeah. If you can't trust the actual authentication and authorization technology involved, zero trust is dependent upon that. I think the takeaway for me is: you can never get to zero risk, but you never know when you might have to rapidly patch something really critical.

Andrew: And are you built to respond quickly? Can you identify quickly? Can you find it quickly? And can you patch it quickly? That’s the question.

jerry: And you can make it harder or easier on yourself. Design choices you make can make that harder or easier.

Andrew: Yeah. As well as how you run your teams. One thing that I’ve often tried to instill in [00:49:00] the teams that I work with is I can’t tell you what vulnerabilities are going to show up in the next quarter, but I know something’s going to show up. So you should plan for 10 to 20 percent of your cycles to be unplanned, interrupt driven work driven by security.

Andrew: And if you don’t, if you’re committing all of your time to things, not security, when I show up, it’s a fire drill, but I know I’m going to show up and I know I’m going to have asks. So plan for them. Even if I can’t tell you what they are, a smart team will reserve that time as an insurance policy, but that’s a tough sell.

Andrew: It’s a tough sell. Yeah. They don’t always buy into it, but that’s my theory. I try to do it, to explain at least and try to get them to buy into. And sometimes it works. Sometimes it doesn’t.

jerry: All right. I think I think with that, we’ll call it a show.

Andrew: Given the weather gods are fighting us today.

jerry: Yeah. I see [00:50:00] that it's starting to move into my area, so it'll probably be here as well. So thank you to everybody for joining us again. Hopefully you found this interesting and helpful. If you did, tell a friend and subscribe.

Andrew: And buy something from our sponsor today, sponsored by Jerry’s llamas,

jerry: The best llamas there are. All right.

Andrew: I feel like all the podcasts need to need, use our code Jerry’s big llama box. Dot com.

Andrew: I’m just going to stop before this goes completely off the rails.

jerry: That happened about 45 minutes ago.

jerry: So just a reminder, you can follow the podcast on our website at defensivesecurity.org. You can follow Lerg at

Andrew: Lerg, L-E-R-G, on both X slash Twitter and infosec.exchange slash Mastodon.

jerry: And you can follow me on infosec.exchange at Jerry. And [00:51:00] with that, we will talk again next week. Thank you.

Andrew: Have a great week, everybody.

Andrew: Bye bye.

 

Defensive Security Podcast Episode 269

https://www.bleepingcomputer.com/news/security/cosmicstrand-uefi-malware-found-in-gigabyte-asus-motherboards/

https://www.bleepingcomputer.com/news/security/hackers-scan-for-vulnerabilities-within-15-minutes-of-disclosure/

https://www.techcircle.in/2022/07/31/paytm-mall-refutes-cyber-breach-report-says-users-data-safe

Defensive Security Podcast Episode 268

 

Stories:

https://www.scmagazine.com/feature/incident-response/why-solarwinds-just-may-be-one-of-the-most-secure-software-companies-in-the-tech-universe

https://www.computerweekly.com/news/252522789/Log4Shell-on-its-way-to-becoming-endemic

https://www.bleepingcomputer.com/news/security/hackers-impersonate-cybersecurity-firms-in-callback-phishing-attacks/

https://www.cybersecuritydive.com/news/microsoft-rollback-macro-blocking-office/627004/

jerry: [00:00:00] All right, here we go. Today is Sunday, July 17th, 2022, and this is episode 268 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight as always is Mr. Andrew Kalat.

Andy: Hello, Jerry. How are you, sir?

jerry: great. How are you doing?

Andy: I’m doing good. I see nobody else can see it, but I see this amazing background that you’ve done with your studio and all sorts of cool pictures. Did you take those.

jerry: I did not take those. They are straight off Amazon, actually.

jerry: I'll have to post a picture at some [00:01:00] point, but the pictures are actually sound-absorbing panels.

Andy: Wow. There's jokes; I'm not going to make them. But anyway, I'm doing great. Good to see ya.

jerry: Awesome. Just a reminder that the thoughts and opinions we express on the show are ours and do not represent those of our employers. But as you are apt to point out, they could be for the right price.

Andy: That's true. That's true. And by the way, what that really means is you're not going to change our opinions. You're just going to hire them.

jerry: Correct. right. Sponsor our existing opinions.

Andy: Someday that’ll work.

jerry: All right. So we have some interesting stories today. The first one comes from scmagazine.com. The title is "Why SolarWinds just may be one of the most secure software companies in the tech universe."

Andy: It's a pretty interesting one. I went into this a little [00:02:00] cynical, but there's a lot of really interesting stuff in here.

jerry: Yeah, there is, I think.

jerry: What I found interesting: a couple of things. One is very obvious: this is a planted attempt to get back into the good graces of the IT world. But at the same time, it is very clear that they have made some pretty significant improvements in their security posture, and I think for that it deserves a discussion.

Andy: Yeah, not only improvements, but they also have this strong appearance of transparency and sharing lessons learned, which we appreciate.

jerry: Correct. The one thing, and we'll get into it a little bit, is they still don't really tell you how the thing happened.

Andy: Aliens.

jerry: Obviously it was aliens. They did tell you what happened, though. In the [00:03:00] article, the CISO of SolarWinds describes that the attack didn't actually change their code base. The attack wasn't against their code repository; it was actually against one of their build systems.

jerry: So the adversary here was injecting code at build time, basically. It wasn't something that they could detect through code reviews; it was actually being added as part of the build process. And by inference, they had, at least they assert they had, good control over their

jerry: source code, but they did not have good control over the build process. In the article, they go through the security uplifts they've made to their build process, which are quite interesting. I would describe it as three parallel build channels that are run by three different teams.

jerry: And at the [00:04:00] end of each of those, there's a comparison, and if the builds don't match, something is wrong. They call it a deterministic build. Their security team does one, a DevOps team does another, and the QA team does a third, all building the same set of code, so they should end up with the same final product. All of the systems are contained unto themselves: they don't commingle, and they don't have access to each other. So there should be a very low opportunity for an adversary to have access to all three

jerry: environments and do the same thing they did without being detected at the end, when they do the comparison between the three builds. It's a novel approach; I hadn't thought about it.

jerry: My first blush was that it seemed excessive, but the more I think about it, it's probably not a huge amount of [00:05:00] resources to do, so maybe it makes sense.
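The comparison step at the heart of a deterministic, multi-channel build is simple to illustrate: if independently produced artifacts are byte-identical, their hashes agree, and any divergence flags possible tampering. A sketch of just that comparison (an illustration of the idea, not SolarWinds' actual tooling):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash one build artifact."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def builds_match(artifact_paths):
    """True only if every independently built artifact is byte-identical.

    With three isolated build channels, an attacker would have to inject
    the exact same bytes into all three for this check to pass.
    """
    return len({sha256_of(p) for p in artifact_paths}) == 1
```

The hard part in practice isn't this comparison, it's making the builds reproducible in the first place (pinned toolchains, stripped timestamps, deterministic archive ordering) so that honest builds really do hash identically.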

Andy: Yeah.

Andy: And also, they mentioned that three different people are in charge of it. So to corrupt it, or somehow inject into all three, would take somehow corrupting three different individuals, some way.

jerry: Yeah, the three teams would have to collude.

Andy: Yeah. Which is difficult.

jerry: Yeah.

jerry: Yep. Absolutely.

jerry: So, I haven't looked into it, but they actually say that they've open sourced their approach to this, what I'll just call the multi-channel build. I thought that was interesting.

jerry: It's a good read. They talk about how they changed from their prior model of having one centralized SOC under the company CISO to three different SOCs that monitor different aspects of the environment. They went from having a kind of part-time

jerry: red team to a [00:06:00] dedicated red team focused on the build environment. I will say the one reservation I have is that this kind of feels maybe a little bit like they're fighting the last war. All the stuff they're describing is very focused on addressing the thing that failed last time.

jerry: And, are they making equal improvements in other areas?

Andy: Could be. I would say that

Andy: they're stuck in a bit of a pickle here, where they need to address the common question: how do you stop this from happening again? That is what most people are going to ask them. It's what the government's asking them; it's what customers are asking them. So they're somewhat forced, whether or not that's the most efficient use of resources; to deal with that problem right there, they have no choice. But I also feel like a lot of the changes they made to their build process would catch a great many other supply chain type [00:07:00] attack outcomes.

Andy: It seems to me.

jerry: Fair. Fair enough.

Andy: It's also interesting because a lot of these things are easy to explain at a high level, but I bet there's a lot of devil in the details. They mentioned that they halted all new development of any new features for seven months and turned all attention to security.

jerry: Yeah, it sounded like they moved from, I think, an on-prem dev and build environment to one that was up in AWS, so that they could dynamically create and destroy them as needed.

Andy: Yeah, it's interesting. The fundamental concept this article is raising is: hey, once you've been breached and you secure yourself,

Andy: do you have a lower likelihood of being breached in the future? You have the board's attention now. You have the budget now. You have the people now. You have the mandate to secure the company.

Andy: And is that true?

jerry: I think it is situational. There are some, I'm drawing a blank, I think it's one of the hotel chains, I don't want to say the wrong name, but I believe there are also instances, readily available, where the contrary is true. They just keep getting hacked over and over.

Andy: And I sometimes wonder if that has to do with the complexity of their environment and the legacy stuff in it. I don't know anything about SolarWinds, but I'm guessing they have a fairly modern IT footprint that may be somewhat easy to retrofit, as opposed to a hotel chain:

Andy: probably some huge data centers that are incredibly archaic in their architecture and design.

jerry: That's a good point. It's a very different business model, right?

Andy: And they talked about how they're spending. They've got three different tiers of SOCs now, outsourcing two of them. They're spending a crap ton of money on security.

jerry: Yes.

Andy: With CrowdStrike watching all their endpoint [00:09:00] stuff; they mentioned that here, and I'm sure CrowdStrike appreciated it. Their own tier three SOC. They've got a lot of stuff, and they also note that their retention rates for customers are back up in the nineties, which is pretty good. So I don't know. Clearly this is a PR thing.

Andy: But at the same time, I really do appreciate. A company that’s gone through this sharing as much as they’re sharing because the rest of us can learn from it.

jerry: Yeah, absolutely.

Andy: And the other thing is interesting because I work for a software company now. It's a small company, nothing the size of these guys, and we don't have the resources these guys have. But I think about how many points in our dev chain could probably be easily corrupted in a supply chain attack

Andy: that they're stopping with their model. And I wonder, what could I do? How much of this could you do on a budget? There's a huge amount of people involved here. There's a huge amount of [00:10:00] red tape and bureaucracy and checks and balances that must add tremendously to the cost.

Andy: It probably slows things down a little bit, and you would probably get pushback if you just tried to show up at your dev shop and say, hey, we're doing this now, without having gone through this sort of event. So what I'm dancing around here is the concept of culture. Post-breach, you now have a culture that is probably more willing to accept what could be perceived as draconian security mandates over how they do things,

Andy: as opposed to pre-breach.

jerry: Yeah. It probably doesn’t scale down very well.

Andy: Yeah.

jerry: With the overhead that they've poured on. They also point out in the article that it remains to be seen how well SolarWinds continues carrying on. But like you said, it does seem like they've taken this and learned from it. And not only learned from it, but also, like we see in this article,

jerry: are trying to [00:11:00] help the rest of the industry learn. Which is, by the way, what we're trying to do here on the show. Kudos to them

jerry: for that.

Andy: Yeah. I also wonder how many other development shops

Andy: will learn from this and adopt some of these practices, so they're not the next supply chain attack. Cause that's really where the benefit comes.

jerry: Yeah. Yeah, absolutely.

Andy: Yeah.

jerry: All right. On to the next story, which comes from computerweekly.com. The title here is "Log4Shell on its way to becoming endemic." So the US government, after President Joe Biden's cyber executive order in, I think it was, 2021, formed this cybersecurity,

jerry: what is it called, the

jerry: Cyber

Andy: Safety Review Board.

jerry: Safety Review Board. I couldn't remember the S.

Andy: Yeah.

jerry: Which I think was modeled after the [00:12:00] NTSB, or what have you. They released this report last week, which describes what happened, or at least their analysis of what happened, in the Log4j incident last year. And I have mixed emotions

jerry: about this one. One of the key findings is that open source development doesn't have the same level of maturity and resources that commercial software does. On the one hand, one of the promises of open source was that many eyes make bugs very shallow, which I think we've seen is not really holding water very well. But the other problem is that it's asserting that open source developers are uniquely making security mistakes in their development. [00:13:00] The last I checked, every single month for the past 20-plus years, Microsoft releases a

jerry: set of patches for security bugs in their software, and they are not open source. So what's a little frustrating to me is that it feels like they didn't address the elephant in the room, which was not necessarily that the open source developers here did

jerry: Set of patches For security bugs in their software and they are not open source. And so I, I think it w what’s a little frustrating to me was they didn’t. It feels like they didn’t address the elephant in the room. Which was not necessarily that the. Th that the open source developers here did

jerry: a bad job. They didn’t understand how to. Code securely. It’s self-evident that they made a, they made some mistakes. But the bigger problem is the fact that it was rolled up into so freaking many. Other open source. In non open source packages in and multi-tiered right. It’s.

jerry: Combined into a package that’s combined into another package. That’s combined into another package. That’s. [00:14:00] Combined into this commercial software. And the big challenge we had as an industry. Was figuring out where they, where all that stuff was. And then even after that Trying to beat on your vendors.

jerry: To come to terms with the fact that they actually have log4j in there environment, and then having to make these like painful decisions do we stop using. For instance, VMware, because we know that they have yet that they have log4j and they haven’t released the patch. At the time they have, since, by the way,

jerry: But. Th that is I think that’s the more concerning problem. Not just obviously for log4j but when you look across the industry, we have lots of things like log4j that are. Pretty managed by either a single person or a very small team on a best effort basis. And they serve some kind of important function and they just keep getting.

jerry: Consolidated. And I don’t [00:15:00] think there’s a real appreciation for how pervasively, some of these things. Are being used. They do talk about in the recommendations about creating built in a better bill of material for software, which I think is good. But it’s still, that’s like coming at it the wrong way.

jerry: It seems to me like we need to be looking for hotspots and addressing those hotspots. And I just don’t, I’m not seeing that it’s concerning to me.

Andy: what do you mean by hotspots?

jerry: Hotspots in terms of potentially poorly managed, or, that's not the right way to say it, less well-managed open source packages that have become super ingrained

jerry: in the IT ecosystem, like Log4j, like OpenSSL has been, and some of the others: Bash, and so on.

jerry: We see this come and go. But at the end of the day, I don't know that we have a good handle on where those things are. So we're just going to continue to get [00:16:00] surprised when some enterprising researcher lifts up a rug that nobody's looked under before and realizes, oh gosh, there's this piece of code that was managed by

jerry: a teenager in the proverbial basement, and they've since moved on to college, and it's not being maintained anymore. But it's being used by everybody and their dog.

jerry: We don't seem to be thinking about that problem, at least in that way.
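Finding those hotspots on disk is at least partially scriptable. A hedged sketch: walk a directory tree looking for Log4j jar filenames. This only catches un-renamed, unshaded copies; the scanners released during Log4Shell also looked inside fat jars and at class fingerprints, which this deliberately skips:

```python
import os

def find_log4j_jars(root):
    """Return paths of files whose names look like log4j-core jars.

    Naive by design: a renamed or shaded copy will not be found,
    so an empty result means "none found", not "none present".
    """
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.startswith("log4j-core") and name.endswith(".jar"):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

The same walk-and-match pattern generalizes to other embedded dependencies (OpenSSL shared objects, vendored copies of a library), which is what makes the "hotspot" question partially answerable even without vendor cooperation.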

Andy: Yeah, you said something early on in covering this, too, about how open source is less rigorous in its controls than commercial software. But I think it's very fair to say that the vast majority of commercial applications are reusing tons of open source in their code, right?

Andy: The kind of odd implication there is that commercial entities write everything from the ground up, when that's not true. Now, here's the flip side: if I've got a well-known, mature, vetted [00:17:00] package that does its job well that I can include in my software package, I could potentially save myself a lot of bugs

Andy: and vulnerabilities, because that package has been so well vetted. In theory, right?

jerry: A hundred percent.

jerry: Yep.

Andy: It's like writing your own encryption algorithm: bad idea. There's a whole litany of people who've been ruined because they thought they knew better. And that's a really hard problem to solve. So I think there's value in having almost like engineering standards, the way a known type and strength of concrete

Andy: is reused because it's a known quantity, as opposed to, hey, we're just going to invent some new concrete and give it a whirl. I see it a little bit like that. But I agree with you. I also wonder how often

Andy: dev shops can spare someone whose whole job is to dig deep into the ecosystem of all the packages they pull in when they do their development, and to know the life cycle of those [00:18:00] to the level we're talking about, versus: hey, that's a solved problem, I'll just pull it off the shelf and move on.

jerry: I think that is the very issue as I see it. That is the problem, because I don't think most companies have the ability to do that.

Andy: What are you thinking, like a curated

Andy: market of open source tools that are well maintained?

jerry: I think we're headed in that direction. I don't love the idea, by any stretch; I don't mean to imply that I do. But I don't see a good alternative. And the reason is that, like you said, as the developer of an application, whether it's open source or not,

jerry: you don't want to recreate something that already exists, and you want to use something that's reliable. I think one of the problems is that these smaller pieces of open source technology, like, I have a strong feeling that when Log4j started out, they didn't expect that they were [00:19:00] going to be in every fricking piece of commercial and open source software out there.

jerry: It just happened. It happened over time.

jerry: And I just think there was little consideration on both sides of the equation for what was happening. It was just happening, and nobody really was aware of it.

Andy: It's not like the Log4j team was like, come use me everywhere. And then there's a little bit of: hey, I wrote this; it's up to you if you want to use it; that's on you.

jerry: Yeah. It's there. Caveat emptor.

Andy: So it's,

Andy: yeah, I don't know. It's a tough problem. I don't know that the software bill of materials is your solve, either. I know a lot of people are talking about it. I know that it helps, but.

jerry: I think it helps insomuch as, if you as a manufacturer of software, or even you as a consumer, have an SBOM that goes all the way down (which, by the way, is itself [00:20:00] pretty tricky), then when something like Log4j hits, it becomes much easier to look across your environment and say: yep, I got it there and there.

Andy: Yeah.

jerry: That's what I have to go fix. You're also dependent, by the way, on your closed-source commercial software providers doing a similar kind of job. So I think there's a coming set of standards and processes that the industry is going to have to get to, because this problem isn't going to go away. It's going to continue to get worse.
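With an SBOM in hand, the "do I have it, and where" question becomes a simple query. A sketch assuming a CycloneDX-style JSON document with its standard top-level `components` array (field names per the CycloneDX JSON format; other SBOM formats like SPDX arrange this differently):

```python
import json

# A tiny CycloneDX-style SBOM fragment used as a self-contained demo;
# in practice you would json.loads() the SBOM a vendor ships you.
SAMPLE_SBOM = json.loads("""
{"components": [
  {"name": "log4j-core", "version": "2.14.1"},
  {"name": "jackson-databind", "version": "2.13.0"}
]}
""")

def find_component(sbom, name):
    """Return (name, version) pairs for components whose name contains `name`."""
    return [(c.get("name"), c.get("version"))
            for c in sbom.get("components", [])
            if name.lower() in (c.get("name") or "").lower()]
```

`find_component(SAMPLE_SBOM, "log4j")` returns the one matching component with its version, which is exactly the lookup that took the industry weeks to do manually during Log4Shell; the catch, as noted above, is that this only works if the SBOM genuinely goes all the way down the dependency tree.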

jerry: And somebody, some enterprising government like Australia or India or the US, is going to stuff a solution none of us would like down our throats, or we're going to have to come up with something.

Andy: Yeah. You’re not wrong.

Andy: It’ll be interesting to see how it plays out.

Andy: Now that the genie's out of the bottle, you've got to assume some of these big cybercrime syndicates, or whatever term you want to use, are attempting to replicate this.

jerry: Oh, a hundred percent. A hundred percent. They've got to be looking around saying: what [00:21:00] open source components exist pervasively, and what would be easy-ish

jerry: for me to take over slash compromise, so that I could roll up into as many environments as I can? That would be super convenient as an adversary.

jerry: So anyway, there's lots more to come on that. I do think we're going to see a lot of hyper-focus on

jerry: source code supply chain and open source coming. And I fear that it's going to be largely misguided, at least for a while.

Andy: Fair enough.

jerry: All right. The next story comes from Bleeping Computer, and this is a fascinating one. The title is “Hackers impersonate cybersecurity firms in callback phishing attacks.”

Andy: Clever people.

jerry: We have a story here about an adversary, or maybe multiple adversaries, who have become super enterprising and are [00:22:00] sending letters to unwitting employees at different companies. I don’t know how well targeted this is; there’s really not a lot of discussion about that. But in the example they cite, they have a letter.

jerry: I think it comes by way of email, on CrowdStrike letterhead. And it basically says, hey, CrowdStrike and your employer have this contract in place, we’ve seen some anomalous activity, you and your company are beholden to different regulatory requirements, and we have to move really fast, so we need you to call this phone number and schedule an assessment. And unlike a lot of these things, it’s pretty well written. I would like to think that if I got it, I would say that’s BS. But it really is well written: it’s not full of grammatical errors, and it kind of makes sense.

jerry: And apparently if you follow the instructions, [00:23:00] the hypothesis is that it will lead, unsurprisingly, to a ransomware infection, because they’ll install a remote access Trojan on your workstation and then use that as a beachhead to get into your

jerry: company’s network.

Andy: Yeah. I hate to say it, but another good reason why you shouldn’t let your employees just randomly install software.

jerry: Yes.

Andy: And you have to assume there’ll be some. This is where I struggle, by the way, with social engineering training. I really do believe, and it’s not a moral failure, it’s not an intelligence failure, it’s a psychological weakness of how human beings’ brains work, that

Andy: these bad guys are exploiting, and they will find some percentage, in certain circumstances, that will fall for these sorts of efforts. And you’ve got to be resilient against that. I don’t think you can train that risk away.

jerry: Yeah, I would say that it’s [00:24:00] perilous to think that you can train it away, because then you start to think that when it happens, it’s the failure of the person. And I actually think that’s the wrong way to think about it. Obviously, you want to do some level of training.

Andy: Sure.

jerry: If for no other reason than you’re obligated to do it by many regulations and whatnot. But also, you want people to understand what to look for; it helps in the long run. At the end of the day, though, we have to design our environments to withstand that kind of

jerry: issue, right?

Andy: Yeah.

jerry: If our security is predicated on someone recognizing that a well-written email on CrowdStrike letterhead is fake, we have problems.

Andy: Yeah. If you can be taken down by one errant click from an employee,

Andy: that, I think, is a problem you need to solve.

jerry: Yeah. And that’s a failure on [00:25:00] our

jerry: IT and security side, not on the employee side.

Andy: Yeah.

jerry: So anyway, be on the lookout. I hadn’t heard of this before, and it makes total sense in hindsight, but it’s something to watch out for.

jerry: All right. The last story we have comes from cybersecuritydive.com, one of my new favorite websites, by the way; there’s good stuff on there. The title is “Microsoft rollback on macro blocking in Office sows confusion.” So earlier in the year, Microsoft made a much-heralded announcement that they were going to be blocking

jerry: macros in Microsoft Office files that originated from the internet. And that was borne out, by the way: some researchers have said that as much as two thirds of [00:26:00] the attacks involving macros fell away. So it was a pretty effective control. Then Microsoft, last week,

jerry: announced that they were reversing course and re-enabling macros, I assume because CFOs everywhere were in full meltdown that their fancy spreadsheets were no longer working. Obviously we should assume that the attacks are going to be back on the upswing. Apparently this is a temporary reprieve; it’s a little unclear when Microsoft is going to re-enable the blocking. But I have a strong feeling that a lot of

jerry: organizations have taken a breather on this front, because Microsoft solved it for us, and now we need to be back on the defensive.

Andy: Yeah, I’m really curious what the conversation was like that forced them to reverse course. What broke that was such a big deal, that was so imperative? Because this has been a [00:27:00] problem for at least 15 years with Microsoft.

jerry: Yeah.

Andy: At least. This was a pretty big win, and now it’s kind of getting rolled back. So I was disappointed.

jerry: So there are, I think, some links in here. You can actually go back and re-enable the blocking through group policy settings, if you’re so inclined, which is probably a really good idea. As an IT industry, I think we’re worse off for this change until they re-enable it.
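For anyone who wants to audit or restore the blocking Jerry mentions, the group policy ultimately lands in a registry value. My understanding is that it is `blockcontentexecutionfrominternet` (DWORD, 1 = block) under each Office app’s `Software\Policies\Microsoft\Office\16.0\<app>\Security` key; treat the exact path as an assumption and confirm it against Microsoft’s documentation before relying on it. A rough Python sketch of checking it:

```python
import sys

# Assumed policy location for Office 2016 and later; verify before relying on it.
POLICY_KEY = r"Software\Policies\Microsoft\Office\16.0\{app}\Security"
VALUE_NAME = "blockcontentexecutionfrominternet"

def macros_blocked(value):
    """Interpret the policy DWORD: 1 means macros in files from the internet are blocked."""
    return value == 1

def audit(apps=("word", "excel", "powerpoint")):
    """Report per-app blocking state; returns {} on non-Windows hosts."""
    if sys.platform != "win32":
        return {}
    import winreg  # Windows-only module, so imported lazily
    results = {}
    for app in apps:
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, POLICY_KEY.format(app=app)) as key:
                value, _ = winreg.QueryValueEx(key, VALUE_NAME)
            results[app] = macros_blocked(value)
        except OSError:
            # Policy absent: after Microsoft's rollback, blocking is off by default.
            results[app] = False
    return results
```

Setting the value through group policy (rather than editing the registry directly) keeps it enforced even after Microsoft's default changes.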

Andy: Yeah. Without knowing all the reasons behind it, this feels like such a pure example of the productivity-versus-security trade-off playing out in real time.

jerry: Yeah. I can almost guarantee you that’s what’s going on.

jerry: So yeah, that is a little concerning. Definitely be on the lookout.

Andy: Indeed. We’ll see what happens to be continued. Stay tuned.

jerry: To be continued. And that is [00:28:00] the story for tonight. Just one little bit of editorial: I spend a lot of time during the week reading different stories. I have all kinds of Google alerts set up for different security stories and whatnot to help pick what we talk about on these podcasts.

jerry: And it is amazing to me how many stories that are couched as news are actually, basically, marketing pieces.

Andy: Yeah.

jerry: I know that we’ve talked about this in the past, but it is alarming. I’ve actually gotten to the point now where I drop down to the end to see what they’re going to try to sell me before I get too invested in the article.

Andy: I look at who wrote it. And if they’re not a staff writer, if they’re a contributing writer, like the chief marketing officer from blah, blah, blah, I’m like, nope.

jerry: Yeah.

Andy: I very quickly just stop reading it if it’s something written by an employee of a vendor of some variety. [00:29:00] And I don’t mean to be that harsh about it. It’s just that

Andy: there’s a bias there. They believe their own marketing and their own dog food, and they’re clearly pushing the problem they know how to solve.

jerry: Yeah, they’re characterizing the problem as something their offerings can solve.

Andy: Right.

jerry: And I think it’s certainly an understandable position, but I’m concerned that, as an industry,

jerry: where do we go to get actual best practices? Because if everything you read is written by a security vendor, the best practices become install CrowdStrike, install Red Canary, install McAfee, install...

Andy: You bring up an interesting side point, which is: I’m seeing some movement in the cyber insurance industry where they’re basically saying, at the broadest level, for those that are less sophisticated: these are the three EDRs we want you [00:30:00] to have one of, and if it’s not one of these three, you don’t get premium pricing.

jerry: Oh, that’s interesting.

Andy: And you’re like, wow. Especially because it’s such a blanket statement, and so many environments are different. I’m not passing judgment on the efficacy of those three vendors, which is why I’m not naming them. It’s more that it feels like a very

Andy: unnuanced opinion. A very blunt instrument being applied there.

jerry: Yeah, and it also

jerry: ignores a whole spectrum of other stuff that you should be doing.

Andy: That’s just their EDR table stakes, right? And it’s all coming very much from ransomware. They’re just getting their asses kicked on ransomware payouts, and so they’re asking: what will stop ransomware?

jerry: Fair enough. That’s a fair point.

Andy: Back to your point about so many marketing pieces masquerading as infosec news: I think that’s very true. And on that note, I want to thank today’s sponsor, Bob’s Budget Firewalls.

jerry: [00:31:00] We proudly have, I think, cleared 10 years of no vendor sponsorship. No sponsorship of any kind, other than donations.

Andy: Yes, which we appreciate.

jerry: All right, that is the show for this week. Happy to have done two weeks in a row now. Got to make a habit of this.

Andy: I know this is great. I appreciate it.

jerry: All

Andy: all four listeners that we still have.

jerry: I moved to a commercial podcast hosting platform, and so we actually now get some metrics, and we have

jerry: about 10,000-ish.

Andy: Wow.

jerry: Or so.

Andy: Counting the inmates that are forced to listen as part of their correction?

jerry: No, see,

jerry: I think actually, because that’s a one-to-many thing, there’s probably like one stream forcing maybe 500 people to listen.

jerry: And then when they do crowd control, like that could be thousands of

Andy: That is true.

Andy: I was quite [00:32:00] entertained, and really proud of you, when I found out that your voice was found to be one of the best tools to disperse crowds.

jerry: Hey, we all have to be good at something, right?

Andy: It is up there with fire.

jerry: Yeah. Yeah.

Andy: Neck and neck. Better than tear gas. Are you aware of this?

jerry: I was not aware that I had overtaken tear gas.

Andy: It’s impressive. My friend, you should be proud.

jerry: I, I am.

Andy: You should be proud.

jerry: I am. I’m going to go tell them.

jerry: All

Andy: All right.

jerry: Have a good one, everyone.

Andy: Alrighty. Bye.

jerry: Bye.

Defensive Security Podcast Episode 267


 

Links:

https://www.justice.gov/opa/pr/aerojet-rocketdyne-agrees-pay-9-million-resolve-false-claims-act-allegations-cybersecurity

https://us-cert.cisa.gov/ncas/alerts/aa22-187a

https://www.zdnet.com/article/these-are-the-cybersecurity-threats-of-tomorrow-that-you-should-be-thinking-about-today/

jerry: [00:00:00] Alright, here we go. Today is Sunday, July 10th, 2022, and this is episode 267 of the Defensive Security Podcast. My name is Jerry Bell, and joining me tonight, as always, is Mr. Andrew Kalat.

Andy: Good evening, Jerry. How are you, good sir?

jerry: I’m doing great. How are you doing?

Andy: I’m good, man. It’s hot and steamy in Atlanta, I’ll tell you that much.

jerry: Yeah. I’ve been back for a month from my beach place, and I think today’s the first day that we’ve not had a heat advisory. [00:01:00]

Andy: Yeah, that’s crazy.

jerry: It has been brutally hot here.

Andy: Now, when you say beach place, you might have to be more specific, ’cause you’ve got like seven beach houses now.

jerry: Well, the southernmost beach house. Yes.

Andy: Yeah. One is the Chateau. One’s technically a compound.

jerry: One’s an island,

Andy: that’s.

Andy: We’re probably going to have to name them, because they’re tough to keep straight.

jerry: They definitely are. Yup.

Andy: But I, for one, appreciate your new land baron activities, and look forward to

Andy: Jerrylandia being launched and seceding from the United States.

jerry: Hell. Yeah. That’s right.

Andy: I’ll start applying for citizenship whenever I can.

jerry: Good plan. Good plan. All right, a reminder: we probably should have already said this, but the thoughts and opinions we express on the show are ours and do not represent those of our employers.

Andy: But for enough money, they could

jerry: Yeah, everything is negotiable. [00:02:00] All right. A couple of really interesting stories crossed my desk recently, and the first one comes from the US Department of Justice, of all places. The title here is Aerojet Rocketdyne agrees to pay $9 million to resolve False Claims Act allegations

jerry: of cybersecurity violations in federal government contracts. So the story here is that there’s this act, as you can probably tell from the title, called the False Claims Act, which permits an employee of a company that does business with the US government to sue the company, claiming that the company is misrepresenting itself in the execution of its contracts. And if that [00:03:00] lawsuit is successful, the person making the allegation, basically a whistleblower kind of arrangement, gets a cut of the settlement. In this particular case, the whistleblower received $2.61 million of the $9 million.

Andy: Wow. So his company, in theory, was lying about their security controls, and he found out about it or knew about it, blew the whistle, and is getting $2.61 million.

jerry: Correct. Correct.

Andy: I have to go check everything in my company. I’ll be right back.

jerry: I’m guessing that his lawyers will probably take about $2 million of the $2.61 million, but hey, it’s

jerry: still money, right?

Andy: That’s crazy. Probably a lot of our listeners are too young for this, but it reminds me of the days of the Business Software Alliance, where you could turn in your employer for using pirated software and get a cut of the penalty, but not in the, you [00:04:00] know, seven-figure range.

jerry: Yeah, this is really quite interesting. And what’s more interesting is that there is apparently some indication that the US government may expand the scope of this to include non-government contracts, perhaps even public companies under the jurisdiction of the Securities and Exchange Commission. I don’t think that’s codified yet.

jerry: It’s probably just hyperbole at this point, but holy moly, it really drives home the point that we need to do what we say and say what we do.

Andy: So what were the gaps, or what were the misses, that they said they had?

jerry: I have done a little bit of searching around. I didn’t go through all of the details in the case, and because it was a settlement there may not be actual details available, but I’ve not been able to find the specifics of what they were not doing.

Andy: Yeah, I did [00:05:00] go look, because I was very curious about this. I did a bunch of searching and found some summaries of the case and some of the legal documentation. The best I was able to get to is that there was a matrix of 56 security controls, or something along those lines, don’t quote me on that, and that the company only had satisfactory coverage of five to ten of them.

jerry: Oh, wow.

Andy: And there was another one where a third-party pen test got into the company in four hours. It looks like there were a bunch of unpatched vulnerabilities. It’s in legalese, right? So it’s a little tough to translate into our world at times.

Andy: But I’m actually quite curious and I might want to do some more research trying to figure out what exactly were the gaps and I guess at the end of the day, they agreed to these things contractually. And just didn’t do them.

jerry: Correct. That’s the net of it.

Andy: This is primarily if you’re doing business with the government, the US government?

jerry: Correct, if you have a government contract.

jerry: Yeah, for now. And I do think that over time, like I said, my [00:06:00] understanding is that the scope of this may increase.

Andy: I really feel like this is huge. This could open the door.

Andy: Because you and I both know how often those contractual obligations, and the way you answer those questions, are a little squishy.

jerry: Yeah. “Optimistic,” I think. I think “optimistic” might be the word.

Andy: That’s fair. But it’s also interesting trying to have federal judges navigate this very complex world. That’s a crazy story. We’ll see where it goes.

jerry: So anyway, it really highlights the point about being very honest and upfront with what we’re doing. If we commit to doing something, we need to do it.

Andy: Yeah, it just gets fuzzy when there’s business deals on the back end of that answer.

jerry: I completely agree.

jerry: All right. The next story is also pretty interesting, and it also comes from a US government agency. This one comes [00:07:00] from CISA, the Cybersecurity and Infrastructure Security Agency. I hate the name; I really wish they’d come up with a different one. It has the word security in it way too many times. Anyway, the title here is “North Korea state-sponsored cyber actors use Maui ransomware to target the healthcare and public health sector.”

jerry: From a threat actor standpoint, there’s not a ton of innovation here. They’re not doing anything super sophisticated that we don’t see in a lot of other campaigns. What is most interesting is that the US government has attributed this particular campaign to North Korea. And North Korea is one of the most, perhaps the most, heavily sanctioned countries in the world as far as the US government is concerned. So if you, as an entity in the US, somehow support an [00:08:00] organization, person, or entity in North Korea, you can be subject to penalties from the US government.

jerry: And the point here is that if you are a victim of this ransomware campaign and you pay the ransom, you may run afoul of those sanctions. So in addition to whatever penalties you might incur as a result of the breach, you may actually run into some pretty significant additional penalties as a result of supporting the North Korean government.

Andy: Well, that is an interesting little problem, isn’t it?

jerry: Yes, it is. Yes, it is.

Andy: What you need is a shell company to run your ransomware payment through.

jerry: I have a feeling there is a lot of that going on in the world.

Andy: We saw some shenanigans with lawyers doing it as a proxy, using, in essence, [00:09:00] privileged communications to hide it, at least allegedly, in some previous stories we’ve covered. But that’s an interesting problem. I can see how that would be a challenge. Maybe if you only paid the ransomware in bulk wheat shipments.

jerry: A barter system.

Andy: Because we send them food.

jerry: That’s true.

Andy: That’s allowed.

jerry: So you recover your data by paying in humanitarian aid.

Andy: I think Twinkies-for-data is a perfect campaign we should launch.

jerry: I don’t even know what to say.

Andy: Either pay three Bitcoin, which is now probably worth like 30 bucks, I don’t know, I haven’t checked lately, or

Andy: two semis full of Twinkies.

jerry: But how are you going to get the Twinkies to them? That’s what I want to know.

Andy: They have ships. They make ships that they go on and they go across the sea and then they take them off the ships. Did you not read the books I gave you?

jerry: Oh, geez. Showing my ignorance. I will say that there are some recommendations down at the bottom. Some [00:10:00] of them are interesting, things you haven’t seen recommended a lot before, but many of them are just the normal run-of-the-mill platitudes: only use secured networks and avoid using public wifi; consider installing a VPN.

jerry: I get so tired of the “you should consider doing X.” Well, okay, I considered it.

jerry: You should consider not using administrative rights for your users. Okay. I considered it.

Andy: Well, the real problem here is that ransomware is not one threat. It is the outcome of...

jerry: Exactly.

Andy: Yeah. That’s why ransomware defense is an interesting problem. Unless you’re actually just trying to stop the pure encryption component of it, how that ransomware starts can be highly varied.

jerry: It’s the end link in the chain, right? Because as they talk about earlier in the [00:11:00] advisory, this particular Maui ransomware is actually pretty manual. It apparently has to be launched by hand from the command line. So whoever the threat actor is, they found some way into the system. And you can infer, assuming CISA actually has that kind of insight, by reading through their recommendations, how they think the North Koreans are getting in: using RDP that’s exposed to the internet, then moving laterally using the credentials of users who have administrative rights, and so on. So you can infer from what they’re saying not to do how it’s probably being propagated. But sometimes it’s a little difficult to tell, with these kinds of recommendations, how much is the result of actual observations and how much is just, yeah, we have this list [00:12:00] of good hygiene practices and we think this is what you should be doing.

Andy: Yeah. I think the problem is that true ransomware defense is highly varied based on the individual company’s stance, platform, environment, and situation, and it’s very difficult to roll that into a couple of paragraphs in a generic article.

jerry: Yeah. Yep. Absolutely.

jerry: So anyway, don’t get ransomwared. And if you do, don’t pay off the North Koreans, because you’re going to get a double whammy.

Andy: I still don’t quite understand how the North Koreans are launching these from their Commodore 64s. But maybe we’ll talk about that in another show.

jerry: It is a fair question how they’re coming into possession, but I would expect it’s coming in via countries like China and Russia, where they may not have those sanctions in place.

jerry: Which, by the way, I think is also the answer to the related question you would ask: well, how are they getting internet access? My understanding is it is [00:13:00] coming through China.

jerry: So the last story today comes from ZDNet, and the title here is “These are the cybersecurity threats of tomorrow that you should be thinking about today.”

Andy: Well, hold on. If I’m thinking about tomorrow’s threats today, when am I thinking about today’s threats?

jerry: Well, you were supposed to think about today’s threats yesterday.

Andy: Oh,

jerry: And you were supposed to think about yesterday’s threats last week.

Andy: I’m gonna have to start over.

jerry: Yeah, well, look, this career isn’t meant for everybody.

Andy: Is this what they mean by thought leaders?

jerry: I think it is.

jerry: I think it is.

jerry: So the first threat of tomorrow that you need to worry about today is quantum threats. And that’s not the James Bond Quantum, right? This is quantum computing hacking all your public key crypto, which, by the way, is going to be here probably sooner than anybody really wants to [00:14:00] admit.

jerry: The world of quantum computing is advancing quite rapidly. And I think the interesting thing to consider is that we are creating just massive amounts of encrypted data on a daily basis today, and presumably it’s infeasible to decrypt almost all of it because of the complexity.

jerry: But in the near future that won’t be the case. The things that we’re creating today could be relatively easily decrypted using things like quantum computing. So presumably there are enterprising companies and state actors and whatnot squirreling away encrypted data today

jerry: that will be decrypted somewhere down the line. So at some point, I think even before quantum attacks against cryptography become [00:15:00] technically feasible, we’re going to have to shift to quantum-resistant crypto. Which is going to be interesting, because you’re going out on a limb in saying that

jerry: the quantum-resistant crypto that we’re making actually will be quantum resistant, because we don’t have the kind of quantum computers that can verify that hypothesis.

Andy: In fact, NIST just kicked off the process to solicit and evaluate candidate quantum-resistant public key cryptography algorithms.

jerry: Yeah.

Andy: Which, by the way, is not a quick process. I think the last time they updated, it took three or four or five years of review. So it’s not a quick endeavor, typically. But yeah, it’s an odd one. And I guess the theory behind this is that quantum computers just do math differently, so all of the time variables

Andy: for how long it would take to break standard encryption today don’t apply, or apply very differently, to quantum computing.

jerry: Yeah. [00:16:00] Correct. Conceivably, a properly scaled quantum computer could take contemporary public key crypto and break it in a very short time.

Andy: In essence by brute forcing it just much, much, much faster.

jerry: It’s not actually brute forcing. It’s just solving the math.

Andy: Well, I guess what I’m saying is there’s nothing inherently weak in the encryption algorithm; it’s the speed of the computing that’s changing.

jerry: It’s the approach.

Andy: You could break it. You could break today’s encryption as well. It just takes a very long time.

jerry: Well, the strength in the encryption today comes from the fact that we have to basically just brute force, you know, trying to factor numbers. But in the world of quantum computing,

jerry: you don’t have to actually brute force. You

Andy: Hmm.

jerry: can just solve it. You don’t even have to brute force to solve it. [00:17:00]
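A toy illustration of the point: RSA’s security rests on factoring being slow. With a textbook-sized modulus you can factor classically in an instant and recover the private key; Shor’s algorithm on a sufficiently large quantum computer would do the equivalent for real 2048-bit keys in polynomial time. The numbers below are the classic textbook example, not a real key:

```python
def factor(n):
    """Trial division: instant for toy moduli, hopeless at real RSA sizes."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("no nontrivial factor")

def recover_private_exponent(n, e):
    """Once n is factored, deriving the RSA private key is just arithmetic."""
    p, q = factor(n)
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse (Python 3.8+)

# Classic textbook keypair: n = 3233 = 53 * 61, e = 17.
n, e = 3233, 17
d = recover_private_exponent(n, e)        # 2753
message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message   # decrypts with the recovered key
```

At real key sizes the trial-division loop would run for longer than the age of the universe, which is exactly the assumption a large quantum computer would break.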

Andy: I’ve got to be honest, I feel woefully ignorant and naive about these issues, and I clearly need to educate myself, because I did not understand that.

jerry: Well, it is

jerry: just very different. It’s a very different thing. You can’t take a binary computer

jerry: and compare it to a quantum computer.

jerry: It’s almost like an analog computer. Anyway, it’s very interesting. I actually think it’s more closely aligned to what have been described as DNA computers, where you can break segments of DNA up and have them assemble themselves into the answer to a complicated question that you would have a really hard time answering with a traditional computer. I think it’s closer to that

jerry: than it is to an actual binary computer. The concepts are difficult to translate, which, by the way, is the concern I have as we approach creating these [00:18:00] quantum-safe algorithms: we’re hypothesizing about how quantum computing is going to evolve.

jerry: Anyway.

Andy: Yeah. Before we move on, the last thing I’ll say is this feels like the magic black box that is a bit of a boogeyman, because a lot of people don’t understand it. But

jerry: Totally. But it is

jerry: a practical thing, and it is a responsible thing for us to go and create these quantum-safe algorithms and start migrating to them

jerry: as soon as practical. I’m just concerned about how confident we can be that they’re actually quantum safe. So the next future threat, which I don’t think is actually a future threat, I think it’s already here, is software supply chain attacks. This is obviously things like what happened with SolarWinds and Microsoft Exchange, and

jerry: many others, where the threat actors, I think, are finding it a lot easier to attack purveyors of [00:19:00] software and software-as-a-service companies and whatnot. Because if you do that, you don’t just attack one company; you can conceivably, with one attack, get

jerry: access to many different organizations, as we saw with Kaseya and SolarWinds as well. So yeah, I definitely think this is on the upswing. My fear is that, as an industry, our response to this threat is more spreadsheets.

jerry: I

Andy: And answer more questions.

jerry: Yeah. I

Andy: No. Look, we need to go back to pioneer times and code all of our own software. That’s the only solution, Jerry.

jerry: It’s like the Hyundai version of

Andy: Look.

jerry: We have to build it.

Andy: Yeah. And you can’t download anybody else’s code, because you don’t know what’s in it. You have to code it yourself. It’s the only option.

jerry: It’s true. And by the way, building your own encryption is still irresponsible. So try to figure that one out.

Andy: [00:20:00] I’m kidding, of course. Yeah, it’s a tough problem. There’s so much inherent trust that you establish. Look back at SolarWinds: how many times have you and I said, hey, upgrades and patches are important? And then that became the attack vector. Let’s just hope that remains a rarity.

Andy: And let’s also hope that somebody else gets hit by it before you do, and you have time to react.

jerry: Yeah, we have to find a more mature way as an industry of handling this threat. There are some approaches evolving, like SLSA

Andy: Yeah.

jerry: and whatnot. But from the consumer side, it’s still little more than spreadsheetware: are you or are you not SLSA-compliant? Well, we already know from the first story that people are apt to lie.

Andy: Yeah. To defend yourself from this, I think you can do a lot of threat modeling, and I’ll go back to the whole concept of least privilege as best you can. But some of these software [00:21:00] supply chain risks happen because you have no choice but to grant a massive amount of trust to some

Andy: third-party software

Andy: just to function.

jerry: Completely agree. Open source adds another level of complexity there. And by the way, I have no particular aversion to open source. The problem we have is that it’s apt to be abandoned, and it’s easy for it to get handed off from a quote good person to a quote bad person. It’s

jerry: conceivable that a quote good person goes bad.

jerry: And there are many other permutations. Some of these open source applications are so stacked on top of each other, and they feed even the commercial applications. There are like tens of thousands of packages. How do you

jerry: get your hands around that?
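There’s no complete answer, but one concrete partial defense is pinning every artifact you consume by cryptographic hash and refusing anything that doesn’t match, which is the idea behind pip’s `--require-hashes` mode and Go’s `go.sum` file. A minimal sketch, with a fabricated package name and contents:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hashes recorded when each artifact was first vetted. The package name
# and contents here are fabricated for illustration.
PINNED = {
    "leftpadder-1.0.tar.gz": sha256_hex(b"vetted contents of leftpadder 1.0"),
}

def verify(filename: str, data: bytes) -> bool:
    """Refuse anything whose hash doesn't match the pin, including silent 'upgrades'."""
    expected = PINNED.get(filename)
    return expected is not None and sha256_hex(data) == expected
```

Pinning doesn’t prove the vetted version was clean in the first place, but it does stop a tampered or swapped artifact from sliding into a build unnoticed.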

Andy: Yeah, that is crazy.

Andy: You brought up something that reminded me: we’ve even seen some very [00:22:00] popular, well-known

Andy: packages be turned into protestware, where

jerry: Yeah.

Andy: the maintainers sabotage them on purpose, for various reasons. It’s rare, but we’ve seen it a couple of times.

jerry: Over the years we’ve seen browser plugins being sold by their

Andy: Yeah.

jerry: author and maintainer to malicious or quasi-malicious companies. So it happens. And it’s a really difficult problem to solve, but we’re going to have to reconcile how to solve it eventually.

jerry: The next one is the internet of things making us more vulnerable. Blah, blah, blah: the more devices you have on your network. I think it’s a flavor of the same thing. You have these embedded devices, typically lower cost, whether it’s a copier,

jerry: a thermometer, or whatever. They have crappy firmware, they get abandoned, they’re still on your network, and they become a launching point. And they point out in the [00:23:00] article the famous, I guess infamous, story about the Las Vegas casino that was hacked through its

jerry: aquarium thermometer, which is, I don’t know, something’s wrong there if that can happen. But there are stories about people getting into wireless networks through smart light bulbs. There are a lot of stories like that.

jerry: But I think it's a similar kind of thing: we have to figure out how to handle that. The one that makes me the most concerned, though, is deepfakes powering business email compromise attacks.

jerry: I actually ran kind of an experiment with the software I'm recording this with now. The average person has access to technology that allows you, pretty easily, to do deepfake-type stuff. And if you think about that in the context of what we've seen [00:24:00] with

jerry: business email compromise, where somebody's masquerading as the CFO asking to transfer money or to change bank account information, this opens up a whole new world. Especially when you add the layer of video deepfakes, holy crap. Like having a WebEx with the person you think is your boss, and by all appearances, it is.

jerry: And here you are talking, virtually face to face, with who you think is your boss, giving you an instruction to do something. And it's not real.

Andy: Yeah. In fact, Jerry's not even here. I deepfaked Jerry's entire portion of this podcast.

jerry: That's true. That's true. I was never real, by the way.

Andy: That's actually not true. Jerry has evolved into a llama. He's been replaced by a deepfake AI. I'm kidding. I don't mean to make light of this, because I think it's absolutely [00:25:00] legitimate. We as humans have evolved to trust our senses and to identify people visually and audibly. We don't have any built-in skepticism; we just inherently trust them.

Andy: And this problem plays on that psychological fact: we trust our senses when we identify somebody, because we're very good at identifying things. So the fact that we have now moved to this digital environment and digital comms, and we can deepfake this successfully, is really powerful and dangerous.

Andy: And I can see this wreaking a lot of havoc. Absolutely.

jerry: Yeah, it seems a little scary to me, to be honest. And I think we're going to have to come up with better processes.

jerry: You're going to have to have multifactor. I think we're going to get to a point where [00:26:00] you just can't trust

jerry: what you see and hear. And by the way, that scares the crap out of me when you think about things like evidence submitted in court from surveillance cameras. Your mind can go to lots of problematic places. But narrowly, from a business standpoint,

jerry: resisting business email compromise type attacks really puts the focus back on having a robust process,

jerry: where even if your boss, the CFO, whoever, calls you up on a WebEx,

jerry: you still have to have some systematic way,

jerry: one that requires authentication and whatnot,

jerry: where the person has to prove who they are.

jerry: We just have to do that.

Andy: Yeah.

Andy: No matter what, you can't violate the process that

Andy: authenticates, at multiple levels, that this person is who they say they are and that they're authorized to do it. Which is difficult, especially for small companies. It's a lot [00:27:00] of discipline and bureaucratic red tape. But otherwise, it's just going to get too trivially easy to fake a phone call from the CEO

Andy: with a perfect voice representation.

jerry: Let alone a perfect video.

Andy: Right.
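The out-of-band control the hosts are describing can be sketched in a few lines. This is a hypothetical illustration, not any real product or the hosts' specific design: a wire-change request is only honored when a one-time code, derived from the request details and delivered over a separate, pre-provisioned channel, checks out. The names `approval_code` and `verify_request` are invented for the example.

```python
# Hypothetical sketch of an out-of-band approval check for a
# wire-change request. The deepfaked "CFO" on the video call never
# sees the second channel, so they cannot produce a valid code.
import hashlib
import hmac
import secrets

# In practice this key would be provisioned out of band, not generated
# at import time; generated here so the sketch is self-contained.
SHARED_KEY = secrets.token_bytes(32)


def approval_code(request_id: str, new_account: str) -> str:
    """One-time code sent to the approver over a second channel."""
    msg = f"{request_id}:{new_account}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()[:8]


def verify_request(request_id: str, new_account: str, code: str) -> bool:
    """Honor the change only if the out-of-band code matches exactly."""
    expected = approval_code(request_id, new_account)
    return hmac.compare_digest(expected, code)
```

The design point is the one Andy makes: the verification lives in a process and a second channel, not in recognizing a face or a voice, so a perfect audio or video impersonation buys the attacker nothing.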

jerry: Yeah. Anyway, that's something to keep you awake at night. Destructive malware attacks is next. Again, I think we've seen this quite a lot. We had WannaCry, NotPetya. And candidly, the scourge of the internet right now is ransomware. I think they're imagining something more physically damaging, but candidly, ransomware is already in this category.

jerry: So I think we're already living in this one.

jerry: And then finally, the skills crisis. Although I guess I'll go back one. We've talked in the past about some of the forward-looking innovations in [00:28:00] malware, probably moving into firmware.

jerry: And that may be where we see this going next: it does get more destructive for the average organization, but by means of attacking firmware, where you can't recover your hardware. You can't just wipe a system and reinstall.

Andy: Yeah. It just bricks the entire

Andy: whatever.

jerry: Right.

Andy: At the motherboard level or hard drive level.

jerry: Or it's infected in a way that you just can't clean. You can never reestablish trust.

Andy: You mean like installing Windows?

jerry: For instance.

Andy: Sorry. I’m kidding. I’m kidding.

jerry: Yeah. Although, on that funny point: the author of systemd, for those of you who are Linux nerds like me, recently left Red Hat and moved to Microsoft.

Andy: Aye.

jerry: And systemd has been a pretty controversial thing, because, in my view, it's starting [00:29:00] to move Linux into kind of a Windows mode of operation. So I think the next plan is like Order 66.

Andy: Oh,

jerry: right. And systemd was like the clones.

jerry: And Order 66 is: they're going to rename systemd to be svchost.

Andy: So who are the Jedi in this example, exactly?

jerry: I don’t know, everybody’s bad.

jerry: So there's no light side of the Force. It's just the dark and the darker side of the Force.

Andy: But with order 66, like they kill all the Jedi. So who are they going to like.

jerry: Oh, that's the Linux people. The people who are using Linux.

Andy: I see.

jerry: Yeah.

Andy: Yeah.

jerry: The only thing left will be Windows. Yeah.

Andy: And my one lone copy of OS/2 Warp that I'm still running.

jerry: That is very true.

jerry: Yup.

Andy: It's a dangerous world, my friend.

jerry: So the final frontier of risks that we will have to worry about [00:30:00] tomorrow is the skills crisis.

jerry: I will say it a little bit differently. I'm not sure it's so much a skills crisis as the ability to pay for the skills we need.

Andy: So you think there are plenty of people out there, we're just not paying them enough?

jerry: Yeah, I think so. It's probably naive of me to say there are plenty of people out there, but there are people out there. My observation is that a lot of organizations have lots and lots of job openings, and they moan and complain about the lack of people. But it's a lack of people who are willing to take the job at the

jerry: rate that they're offering.

Andy: Okay. So is that a supply and demand problem as well? If there were more supply to meet that demand, it would drive down the pricing.

jerry: Yes, I think so. To be honest, [00:31:00] one of the ways to look at this is that a lot of organizations are

jerry: trying to dramatically increase the supply to get the cost to go down. The reality is that most organizations don't have the number of people, at the skill level, that they need. But at the same time, I think that's a symptom

jerry: of a bigger problem: I don't think most companies invest the amount of money they need to invest in securing and operating their IT. We're overextended. We're over-leveraged in IT, and we're trying to figure out how to fix it without fixing that

jerry: over-leveraged situation. That's just Jerry's macroeconomic baloney.

Andy: Yeah, I see some truth in there, though.

Andy: We also have [00:32:00] probably a lot of bad practices, and failures to follow best practices for various reasons, that we're compensating for with other security controls, as opposed to just making things more inherently secure.

jerry: Oh a hundred percent.

jerry: A hundred percent.

Andy: Now, that's a complicated thing; there may be very good reasons why we do that. I'm trying not to be dogmatic that there's only one way to do things. But I also think a lot of legacy businesses have so much tech debt, and the cost to, say, rebuild with best practices is so high,

Andy: that they probably don't have much choice.

jerry: Absolutely.

Andy: Yeah, I don't know. It's an interesting problem.

Andy: I do wonder if it's

Andy: going to stabilize, or if we're always going to be in this scenario.

jerry: It is.

jerry: On the one hand,

jerry: there's a lot of peril in thinking that the patent office is going to close. But on the other hand, how much more innovation, how many more features, does your word processor need?

Andy: [00:33:00] Apparently lots.

jerry: Well, I guess there will always be innovation, but I think the rate of innovation is going to start to flatten out a bit. And

Andy: we've been saying that for years, and

jerry: Yeah, well, it’s

Andy: Moore's law has proved us wrong over and over again. But I hear you: word processors and spreadsheets are pretty mature, pretty commodity. How much more tech do you need in them? But heck, we're still debating whether or not macros should be on or off in Office by default.

jerry: Oh, gosh, that’s right. Yep.

Andy: No, I hear ya. I also feel like it's going to slow down soon, and I feel like we're two old guys telling people to get off our lawns.

jerry: Well, I think the complexity will probably shift around. To some extent, what might end up happening is we see the devices employees have move away from being general purpose computers to more specialized,

jerry: iPad-type [00:34:00] devices. Not like green-screen terminals, but less general computing and more specialized, which I think is easier to secure. That just shifts a lot of the complexity into other parts of the environment, into your infrastructure.

Andy: Well, going specialized, isn't that more like an IoT problem? Which we also can't seem to keep updated and secure.

jerry: Well, that's true. I guess we're

jerry: pretty bad all the way around.

Andy: Yeah. Sorry, clearly there's no hope for any of us.

Andy: We’re doomed.

jerry: Yeah.

jerry: All right. Well, I guess we'll just keep milking the machine for a while, and then we all retire to our private island,

Andy: Jerrylandia.

jerry: Anyway, I thought this was a pretty interesting list of things to be thinking about. The most interesting one by far, I thought, was the deepfake threat.

jerry: I wanted to call that out. Anyhow, that is [00:35:00] the show for today. Thank you all for listening. Sorry it's been so long; life continues to get in the way of making podcasts. Every time I think it's going to level out and I'll be less busy, something happens.

jerry: Hopefully. Fingers crossed.

Andy: Fair enough. But hey, we appreciate you guys sticking with us and hopefully still finding some value in the podcast. We enjoy making these when we can.

jerry: Take care, everyone.

Andy: Have a great week.

Andy: Buh-bye.

jerry: Bye.
