Open source software maturity activities

Practice these open source software maturity activities to help ensure the open source you use and contribute to becomes as secure as your proprietary code.

I’m at the BSIMM3 Conference, in an open source breakout session. The context: you’re an organization with a reasonable application security program. The question: how do you apply that same process maturity to open source, where no ‘throat to choke’ exists? Your organization and its software-providing vendors may not be perfect, but at least you can choke someone when a vulnerability surfaces. If you believe in the value of Software Security Framework (SSF) activities for built and purchased software, you understand that assurance activities (source code review or penetration testing) may apply to open source, but applying others (such as training, SDL gating/exceptions, and so forth) might be as impossible as shooting ghosts.

Applying a software maturity model to open source

So, we have a control problem:

We can’t tackle the “process improvement” problem with our open source providers like we can with those who build our software in-house, or those vendors from whom we acquire code.

Understanding, based on this, that we lack many of the ‘knobs and dials’ for improving the cleanliness of the pipes through which open source software flows… we may have a flow-rate problem as well. Though I lack hard evidence, I’d bet the following represents the proverbial iceberg’s tip:

An organization may deploy as much open source code as its own in-house developed code, or more.

In this light, it’s interesting to think about the money organizations spend to secure the code they build versus the amount of open source they consume (participants indicated they didn’t track this spend separately from their development efforts).

Open source software maturity activities

Harumph. Our breakout group generated great ideas worth sharing. First, we unearthed a lot of things attending organizations already do. Next, we brainstormed valuable next steps. Here are the categories of activities we came up with:

  • Inventory and inventory control
  • Vulnerability identification (assessment)
  • Vulnerability management
  • Ownership of open source
  • Policy (use)
  • Policy (contribution)

Inventory and inventory control

  • Identification. All participating organizations had a manual discovery process for open source usage. Many wanted better automated schemes, despite some existing tool usage (a minimal sketch of such a scan follows this list).
  • Identification of masked open source. Many participating organizations realize that not only do they adopt open source software directly, but the third-party code (and open source) they absorb also contains open source software. Discovering this ‘masked open source’ represents as big a problem as identifying the open source software used directly.
  • Centralized open source distribution. Some organizations allow developers to pull and deploy open source software directly from the web, whereas others allow access to and use of open source only from a centrally managed repository. Centralizing distribution may provide only improved integrity, or it may be used to implement an ‘approved package list’.
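
To make the automated-discovery idea concrete, here is a minimal sketch (not anything the breakout group built) that walks a source tree and collects declared dependencies from common manifest files into a per-application inventory. The manifest formats handled and the overall shape are illustrative assumptions only:

```python
#!/usr/bin/env python3
"""Open source inventory sketch (hypothetical): walk a source tree and
collect declared dependencies from a few common manifest formats."""
import json
import sys
import xml.etree.ElementTree as ET
from pathlib import Path


def scan(root: Path) -> dict[str, set[str]]:
    """Return {application directory: {declared dependency names}}."""
    inventory: dict[str, set[str]] = {}
    for manifest in root.rglob("*"):
        deps: set[str] = set()
        if manifest.name == "package.json":
            data = json.loads(manifest.read_text())
            deps = set(data.get("dependencies", {})) | set(data.get("devDependencies", {}))
        elif manifest.name == "requirements.txt":
            deps = {line.split("==")[0].strip()
                    for line in manifest.read_text().splitlines()
                    if line.strip() and not line.startswith("#")}
        elif manifest.name == "pom.xml":
            ns = {"m": "http://maven.apache.org/POM/4.0.0"}
            tree = ET.parse(manifest)
            deps = {f"{d.findtext('m:groupId', '', ns)}:{d.findtext('m:artifactId', '', ns)}"
                    for d in tree.getroot().iterfind(".//m:dependency", ns)}
        if deps:
            inventory.setdefault(str(manifest.parent), set()).update(deps)
    return inventory


if __name__ == "__main__":
    for app, deps in scan(Path(sys.argv[1])).items():
        print(app, sorted(deps))
```

Note that a manifest scan like this finds only dependencies that are declared; the ‘masked open source’ problem above calls for deeper techniques, such as fingerprinting the binaries and archives that third-party packages ship.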

Vulnerability identification (assessment)

  • Using assessment tools. About three-quarters of workshop respondents used the same tools they use to assess their application security posture on their open source assets. This, to me, seems worth its own blog entry. I had to wonder aloud, “As organizations move from detective assessment schemes (source code review, penetration testing) to so-called preventive ones (threat modeling, architecture analysis, misuse/abuse cases), how do they consider open source?” I’m finding that organizations blazing trails into security architecture work commonly omit discussion of their open source frameworks.

Vulnerability management

  • Root cause analysis. When a vulnerability is found (regardless of means or source: internal/external), organizations sometimes can point to a person or group who understands the vulnerable component (notable exceptions exist for purchased software or software maintained by a development team that has since vanished). In open source, the organization must expend resources to “get smart” on the vulnerability’s root cause and to make trade-offs about mitigation strategies and their impacts. This represents an extra cost on which I’d enjoy having greater visibility.
  • Vulnerability impact analysis. Almost every participant had some regime by which they discovered (through assessment, feeds, or other means) new vulnerabilities within their adopted open source code base. Everybody possessed some ability to figure out which lines of business or development teams might be affected by newly discovered vulnerabilities (see the sketch after this list).
  • Patch management. Only about one half of participants felt they had a good strategy, having assessed the impacted teams, for distributing a patch that remediated an open source vulnerability in an organization-wide manner. More strategically, several schemes seemed available to organizations beyond the straightforward “penetrate-and-patch” loop. Alternatives included:
    • Wrapping open source
    • Hardening open source (and centrally distributing)
    • Sandboxing/compartmentalizing open source
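
As a sketch of the vulnerability impact analysis idea, the following hypothetical snippet cross-references an inventory of deployed components, tagged with their owning teams, against a simplified advisory feed. The `Advisory` and `Deployment` shapes and the version-tuple comparison are assumptions for illustration, not any participant’s actual regime:

```python
"""Vulnerability impact analysis sketch (hypothetical data shapes)."""
from dataclasses import dataclass


@dataclass
class Advisory:
    component: str             # e.g. a library name from a vulnerability feed
    fixed_in: tuple[int, ...]  # first safe version; anything older is affected


@dataclass
class Deployment:
    component: str
    version: tuple[int, ...]
    team: str                  # line of business or dev team that owns the deployment


def impacted(advisories: list[Advisory], inventory: list[Deployment]) -> dict[str, list[str]]:
    """Return {team: [vulnerable component@version, ...]}."""
    hits: dict[str, list[str]] = {}
    for adv in advisories:
        for dep in inventory:
            if dep.component == adv.component and dep.version < adv.fixed_in:
                label = f"{dep.component}@{'.'.join(map(str, dep.version))}"
                hits.setdefault(dep.team, []).append(label)
    return hits


if __name__ == "__main__":
    feed = [Advisory("examplelib", fixed_in=(1, 4, 2))]
    inv = [Deployment("examplelib", (1, 3, 0), "payments"),
           Deployment("examplelib", (1, 4, 2), "mobile")]
    print(impacted(feed, inv))   # {'payments': ['examplelib@1.3.0']}
```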

Ownership of open source and risk

  • Maintain a preferred list of open source. Seemingly related to the “centralized distribution” item (but surprisingly uncorrelated in our survey), this meant that someone from security owned assessing the risk and proffering a “thumbs-up” or “thumbs-down” that could grant or withhold preferred-list membership (a minimal sketch of such a gate follows this list).
  • Revisit the preferred list. Organizations expressed that they found value in pruning the approved list of open source software based on non-use, newly identified risk, and similar factors. About one-quarter of our group engaged in this activity.
  • Ownership of identified risk. Some participants avidly encouraged use of open source within their applications. In these organizations, when a developer chooses to include open source (as opposed to writing a widget themselves), they own any risk newly identified in that open source. This reminded me eerily of Wall St. traders. Equity investment creates risk. Margin calls create leveraged risk. In this metaphor, choosing to adopt open source seems like a margin call. It’s very possible that a developer can absorb more risk into the organization than they themselves could effectively own up to in black-swan scenarios. It’s unclear how to measure this exposure when adopting open source.
  • Collaboration. Certainly an organization-specific and unsolved problem: participants indicated there may be a “third stakeholder” in the process of identifying and managing open source vulnerability. Two examples given were 1) clearing houses from which organizations purchase open source software and 2) support organizations (a la RedHat).
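
For the preferred-list idea, here is a minimal sketch of what the security-owned “thumbs-up/thumbs-down” gate might look like. The `Approval` fields (approved versions, last review date, risk owner) are hypothetical, chosen only to reflect the revisit and risk-ownership activities above:

```python
"""Preferred-list gate sketch (hypothetical policy fields and package names)."""
from datetime import date
from typing import NamedTuple


class Approval(NamedTuple):
    package: str
    approved_versions: set[str]
    last_reviewed: date   # supports the periodic "revisit the preferred list" activity
    owner: str            # who owns newly identified risk for this package


APPROVED = {
    "examplelib": Approval("examplelib", {"1.4.2", "1.4.3"}, date(2011, 6, 1), "app-sec"),
}


def check(package: str, version: str, review_window_days: int = 365) -> str:
    """Gate a requested package/version against the security-owned approved list."""
    entry = APPROVED.get(package)
    if entry is None:
        return "denied: not on the preferred list; request a security review"
    if version not in entry.approved_versions:
        return f"denied: version {version} not approved"
    if (date.today() - entry.last_reviewed).days > review_window_days:
        return "flagged: approval is stale; prune or re-review"
    return f"approved (risk owner: {entry.owner})"


print(check("examplelib", "1.3.0"))   # denied: version 1.3.0 not approved
```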

Policy (use)

  • Security policy/standards. About half of participants had some kind of security policy or standards addressing how to securely use open source software within the organization.
  • Training. About one quarter of participants trained their developers to use some portion of their open source securely. Interestingly enough, the one quarter of our respondents that trained developers did not line up well with the one half that had security standards. Wild.

Policy (contribution)

  • Legal permission to contribute to open source. Some organizations see open source contribution as a key activity. Others did as well but have suffered a clamp-down from their legal departments, which fear liability. Still others never liked the idea.
  • Community notification on vulnerability. When an organization contributes to open source and later finds (or is notified of) a vulnerability in its contributions, it may need a way to notify the broader community. Organizations also complained about very high latency in the community notifying them of vulnerability in their code. This proved a surprising problem in our brainstorming session. Why? Contributors explained that, often, their contributions were either 1) forked or 2) baked into other products that masked use of the contributions. In either case, it wasn’t evident to those disclosing the vulnerability that the (contributing) organization was responsible for the vulnerable code.

I will absolutely not let it go without saying that, though this entry contains many of my own thoughts, it heavily relies on the work of many in our breakout session, well led by HP’s Brian Chess. Thanks, all, for a great discussion.
-jOHN

 
Posted by John Steven

John Steven is a former senior director at Synopsys. His expertise runs the gamut of software security—from threat modeling and architectural risk analysis to static analysis and security testing. He has led the design and development of business-critical production applications for large organizations in a range of industries. After joining Synopsys as a security researcher in 1998, John provided strategic direction and built security groups for many multinational corporations, including Coke, EMC, Qualcomm, Marriott, and FINRA. His keen interest in automation contributed to keeping Synopsys technology at the cutting edge. He has served as co-editor of the Building Security In department of IEEE Security & Privacy magazine and as the leader of the Northern Virginia OWASP chapter. John speaks regularly at conferences and trade shows.

