Software Integrity Blog



Achieving Open Source Security in Container Environments

Today, open source components are at the heart of most modern applications, transforming how we architect solutions in every industry. Black Duck’s 2017 Open Source Security and Risk Analysis, which examined more than 1,000 commercial applications, revealed that 96% of the applications scanned used open source. More than 60% of those applications contained known security vulnerabilities in their open source components, and on average the vulnerabilities identified had been publicly known for more than four years.

As we saw in the news last year, unremediated known vulnerabilities can cause serious damage. The massive Equifax data breach that exposed the private data of 145.5 million people was due to exploitation of a known vulnerability (CVE-2017-5638) in Apache Struts, a popular open source framework for creating web applications. Use of Apache Struts is by no means unusual – the framework is used across the Fortune 100 to provide web applications in Java, powering both front-end and back-end applications. The vulnerability was originally reported in March 2017, and at the time of disclosure there were already exploits freely available. (For a complete timeline of the Apache Struts vulnerability from bug to breach, check out this post.)

A few key takeaways from the Equifax breach:

  1. Visibility is critical. Without a complete inventory of the open source components your teams are using, including the origin of each component, you are leaving your applications at risk. You simply cannot protect yourself if you don’t know what’s in your code and what it depends upon.
  2. Open source vulnerability management must be both automated and integrated into your development and DevOps tools and processes. Tracking open source components via an Excel spreadsheet is simply not an option in a rapid deployment model. Further, having your CI process fail a build due to the presence of a security vulnerability is considered a best practice.
  3. Minimize the time between when vulnerabilities are reported and when your teams can patch or mitigate them. In 2017, more than 30 new vulnerabilities were reported every day. Systems like the National Vulnerability Database (NVD) provide insight into vulnerabilities, but we’ve observed that it can take upwards of three weeks for a vulnerability to be fully documented in the NVD. When exploits are available via a simple Google search, three weeks is far too long to wait before identification and remediation.
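The second takeaway, failing a CI build when a scan finds a known vulnerability, can be sketched in a few lines of Python. The JSON report format, the non-Struts component, and the severity threshold below are hypothetical illustrations, not any particular scanner's output:

```python
import json

# Minimal CI gate sketch: fail the build when a scan report contains a
# vulnerability at or above a severity threshold. The report format (a
# JSON list of findings with "component", "version", "cve", and "cvss"
# fields) is hypothetical; real scanners each have their own output.
SEVERITY_THRESHOLD = 7.0  # block on CVSS "high" and above

def build_blockers(report_json, threshold=SEVERITY_THRESHOLD):
    """Return the findings severe enough to fail the build."""
    findings = json.loads(report_json)
    return [f for f in findings if f.get("cvss", 0.0) >= threshold]

report = json.dumps([
    {"component": "struts2-core", "version": "2.3.31",
     "cve": "CVE-2017-5638", "cvss": 10.0},
    {"component": "util-lib", "version": "1.4",    # made-up component
     "cve": "CVE-0000-0000", "cvss": 4.3},         # placeholder CVE id
])

blockers = build_blockers(report)
for f in blockers:
    # A real pipeline step would exit non-zero here to fail the build.
    print(f"BLOCKED: {f['component']} {f['version']} ({f['cve']})")
```

Run as part of the build, this turns "a vulnerability is present" into a hard stop rather than a spreadsheet entry.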

The Equifax data breach isn’t the only recent exploitation of a known open source security vulnerability. Gloucester City Council in England was recently issued a six-figure fine for a breach of UK data protection laws. Essentially, the council failed to ensure that the open source software it was using was updated to fix “Heartbleed,” a critical security flaw in OpenSSL. Although Heartbleed was disclosed nearly four years ago, and IT staff at the council flagged the need to update the software, the patch issued was never applied. Gloucester City Council “did not have sufficient processes in place to ensure its systems had been updated while changes to suppliers were made,” said the UK’s Information Commissioner’s Office (ICO), which imposed the £100,000 fine.

The council’s failure resulted in a series of security breaches, including:

  1. compromised Twitter accounts belonging to senior officers at Gloucester;
  2. access to 16 users’ mailboxes via the Heartbleed vulnerability in the SonicWall appliance (which contained an affected version of OpenSSL) used to route traffic to Gloucester’s services; and
  3. access to 30,000 emails from a senior officer’s mailbox, containing financial and sensitive personal information on past and current employees.

“Gloucester appears to have overlooked the need to ensure that it had robust measures in place to ensure that the [OpenSSL] patch was applied,” the report concludes.

The ICO isn’t alone in regulating data privacy and imposing fines. Personal data privacy regulation continues to grow, including the rapidly approaching EU General Data Protection Regulation (GDPR), which goes into effect on May 25, 2018. The GDPR mandates that all companies processing and holding the personal data of European citizens must protect that information, regardless of where it is sent, processed, or stored, and proof of protection must be verifiable. Once the regulation takes effect, the penalties for non-compliance can be severe: organizations can be fined up to 4% of annual global revenue or up to €20 million (approximately 22.3 million USD), whichever figure is higher. Given the ICO’s recent fine of Gloucester City Council over the Heartbleed vulnerability, it seems extremely likely that appropriate security under GDPR will include open source vulnerability management.

However, data breaches and security vulnerabilities don’t – and shouldn’t – deter organizations from using open source components and frameworks. Open source allows those developing custom code to focus on building unique functionality, creating core intellectual property, and delivering competitive differentiation. The exponential growth of open source usage over the last several years has changed the game, requiring organizations to think differently about how to manage custom and open source code risk. The Forrester Wave™: Software Composition Analysis, Q1 2017 indicated that developers are creating applications using only 10-20% new code – the remaining 80-90% of the code is built from open source components.

That’s exactly why you need to manage open source usage in your organization, using a bill of materials to determine which versions of open source components are in use, and where each component exists in your applications. As companies begin leveraging containers on a massive scale to rapidly package and deliver software applications, they need to have a clear understanding of the components and dependencies in their container images as well. And while development teams need to be able to find security vulnerabilities quickly in their development environments, operations teams require the same insight to prevent unsecure containers from becoming a danger in production environments.
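A bill of materials makes questions like "which images still ship component X at version Y?" trivial to answer. Here is a minimal sketch, assuming a per-image component inventory has already been produced by a composition-analysis scan; all image names and versions below are made up for illustration:

```python
# Bill-of-materials lookup sketch: given a per-image component inventory
# (which in practice would come from scanning each container image), find
# every image that ships a specific component version. All image names
# and versions below are made up for illustration.
IMAGE_BOMS = {
    "web-frontend:1.4": {"openssl": "1.0.1f", "nginx": "1.9.4"},
    "api-backend:2.0": {"openssl": "1.0.2k", "struts2-core": "2.3.31"},
    "batch-worker:0.9": {"openssl": "1.0.1f"},
}

def images_with(component, version, boms=IMAGE_BOMS):
    """Return the image tags whose BOM contains component == version."""
    return sorted(image for image, bom in boms.items()
                  if bom.get(component) == version)

# Which images still carry a Heartbleed-affected OpenSSL build?
print(images_with("openssl", "1.0.1f"))
```

The same lookup serves both audiences mentioned above: developers checking their build outputs, and operations teams checking what is actually running.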

Addressing these security risks is essential, because hackers have not missed the growing popularity of containers. Containers represent a new attack vector, one that may offer opportunities organizations have not yet considered.

Once containers are deployed, however, vulnerabilities found in them aren’t patched in place with the latest update. The best approach to remediating vulnerable containers is to update the base images and applications, then rebuild and redeploy the resulting image as new containers. This is an important operational difference, and DevOps teams will have to adjust their processes and tools to account for it.
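That operational difference can be made concrete: remediation means tracking which running containers were started from images that have since been rebuilt, and redeploying them. A minimal sketch, with made-up image names and digests standing in for real registry data:

```python
# Sketch: running containers are not patched in place, so remediation
# means rebuilding the image and redeploying. This helper flags running
# containers whose image digest no longer matches the latest rebuild.
# All names and digest values are made up for illustration.
LATEST_DIGESTS = {
    "web-frontend": "sha256:aa11",  # rebuilt on a patched base image
    "api-backend": "sha256:bb22",
}

RUNNING = [
    {"name": "web-1", "image": "web-frontend", "digest": "sha256:aa11"},
    {"name": "web-2", "image": "web-frontend", "digest": "sha256:old1"},
    {"name": "api-1", "image": "api-backend", "digest": "sha256:old2"},
]

def needs_redeploy(running=RUNNING, latest=LATEST_DIGESTS):
    """Names of containers still running an image older than the latest rebuild."""
    return [c["name"] for c in running
            if latest.get(c["image"]) != c["digest"]]

print(needs_redeploy())
```

In practice an orchestrator performs the rolling replacement itself; the point is that the unit of patching is the image, not the running container.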

Define a container security strategy

Your organization must define a container security strategy as a first step, then use tools that help you enforce that strategy throughout the DevOps lifecycle. Selecting the right tools is important: they must both validate and enforce compliance with your container security policies, including a mechanism to prevent containers with known security vulnerabilities from being deployed.

Despite the many excellent traditional security tools available, most of them aren’t designed to manage the security risks associated with hundreds or thousands of containers. The large-scale use of containers is new; so are the tools you need to manage them. Keep that in mind as you research container orchestrators and container security tools – traditional tools won’t help you manage open source security risks in large scale container deployments. Don’t forget to evaluate tools that give you run-time visibility into container networking and process behavior, like that of our partner NeuVector.

Secure against known vulnerabilities

New security vulnerabilities are being disclosed every day, which is why it’s essential that you monitor your containers continuously. Last year the National Vulnerability Database documented more than 18,000 vulnerabilities, a significant increase from 2016. When your Operations team is managing thousands of running containers, finding and mitigating or remediating every newly disclosed vulnerability in each container is not something you can leave to chance and a spreadsheet.
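Continuous monitoring means matching each new disclosure against the current inventory as it arrives, rather than waiting for the next full rescan. A sketch of that matching step follows; the disclosure format, component names, and versions are all hypothetical:

```python
# Sketch of continuous monitoring: when a new vulnerability disclosure
# arrives (component plus affected versions), match it immediately
# against the running-container inventory. The disclosure format,
# component names, and versions are all hypothetical.
RUNNING_INVENTORY = {
    "payments:3.1": {"log-lib": "2.14.0"},
    "search:1.7": {"log-lib": "2.17.1"},
    "reports:2.2": {"log-lib": "2.14.0", "xml-parser": "1.2"},
}

def affected_images(disclosure, inventory=RUNNING_INVENTORY):
    """Images running a version listed as affected in the disclosure."""
    component = disclosure["component"]
    bad_versions = set(disclosure["affected_versions"])
    return sorted(image for image, bom in inventory.items()
                  if bom.get(component) in bad_versions)

new_disclosure = {"id": "CVE-0000-0001",  # placeholder identifier
                  "component": "log-lib",
                  "affected_versions": ["2.14.0", "2.15.0"]}
print(affected_images(new_disclosure))
```

Because the inventory was built in advance, triage takes a lookup instead of a fleet-wide rescan, which matters when disclosures arrive daily.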

We know that every organization is different, so the approach to container security is unique to each containerized environment. It takes only a single vulnerable container out of thousands to cause a breach, which is why you need visibility into every container image simultaneously. And you can’t rely on tools alone – your organization needs people to manage disclosed vulnerabilities. Tools can and must scale, but people don’t, and the time it takes you to remediate is critical.

Once you have visibility into each container, consider grouping containers based on similar security risks. Smart grouping makes it harder for attackers to expand a compromise to other container groups and makes it easier for you to detect and contain the breach.
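One way to implement such grouping is to bucket images into isolation tiers by risk score and apply stricter network and runtime policies to the higher tiers. A toy sketch follows; the scores and tier cut-offs are illustrative, and real grouping might also weigh exposure (internet-facing vs. internal) and data sensitivity:

```python
# Sketch of grouping container images into isolation tiers by risk
# score, so a compromise in one tier is harder to extend to another.
# Scores and cut-offs are illustrative, not a recommended scheme.
SCORES = {"web-frontend": 9.1, "api-backend": 6.2,
          "batch-worker": 2.0, "cache": 3.9}

def tier(score):
    if score >= 7.0:
        return "high-risk"
    if score >= 4.0:
        return "medium-risk"
    return "low-risk"

def group_by_tier(scores=SCORES):
    groups = {}
    for image, score in scores.items():
        groups.setdefault(tier(score), []).append(image)
    return {t: sorted(images) for t, images in groups.items()}

print(group_by_tier())
```

Each resulting tier can then map onto its own namespace or network segment, so detection and containment happen per group rather than across the whole fleet.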

Finally, make sure you are proactive about container security. While containers speed software delivery, it is unwise to ignore the new risks they pose to application security. The most effective way for you to control security risks associated with vulnerabilities in containers is to find and remove vulnerabilities in base images and application dependencies. The volume of newly disclosed vulnerabilities combined with the huge numbers of containers deployed in production environments requires an agile and dedicated container security solution that will help you prevent, detect, and respond quickly to threats directed at containers.

Recently Black Duck launched OpsSight for OpenShift and Kubernetes to help address container security. Once a container is scanned, OpsSight continually monitors Black Duck’s vulnerability database to determine whether any new vulnerabilities have been discovered that impact components in that container. Should a new vulnerability be disclosed, OpsSight proactively updates the container metadata with vulnerability information and can notify security response teams of the event. This allows operations teams to move from an unknown and uncertain vulnerability state to a known one, with automated triggering of response plans. The response plan could require development teams to rebuild and test the application, or operations teams to simply rebuild the container image. Imagine how differently the Equifax experience might have gone if their production operations team had been proactively notified that their systems were vulnerable – all without requiring an external scan.

This post was originally published on the NeuVector blog.

