Software Integrity

 

OWASP Top 10—A7: Request for removal and replacement

Foreword by Jim Ivers
Vice President, Marketing, Synopsys Software Integrity Group

If you’re a software security professional, you’re probably familiar with the OWASP Top 10. Even if you aren’t in the AppSec trenches every day, you may have heard of it. It’s a widely referenced list of the 10 most critical web application security risks that prudent organizations endeavor to mitigate at all costs (or within reason). The OWASP Top 10 is a foundational plank of application security compliance, particularly important to web applications, and used in highly regulated industries.

Every few years, the OWASP Foundation updates its Top 10 list to ensure that it continues to address the needs of the web app ecosystem in the context of emerging technologies and evolving threats. The OWASP Top 10 was last updated in 2013, and as you may have guessed, a “release candidate” for the 2017 revision is currently published for public comment.

Interestingly enough, two new “security risks” have made their way into the 2017 release candidate, taking the place of entries that have diminished in relevance. For Synopsys and many others, one (and arguably both) of these newcomers is a point of contention, representing a slippery-slope departure from the Top 10’s original tenets of risk management.

The following article is an official response to the call for review of the latest revision of the OWASP Top 10. The document was prepared by John Steven, Senior Director, Security Technology and Applied Research of the Synopsys Software Integrity Group.

It is the position of Synopsys that software security technologies emerge and evolve rapidly. Therefore, it is prudent to resist including in the Top 10 Application Security Risks list any elements that prescribe security controls, particular security testing, or particular security testing products, as the new revision does. We believe that such prescriptive language may induce organizations to rely too heavily on any one technology, leaving them vulnerable to attack, which is contrary to the spirit and intent of the list.

We invite other organizations and individuals to consider our position and to direct comments regarding this new language to the official avenues of feedback provided by OWASP, specifically via email to OWASP-TopTen@lists.owasp.org before the June 30, 2017 deadline. We also invite you to engage the security and developer communities on this topic.

Overview

Proposed entry “A7 – Insufficient Attack Protection” is an inappropriate and potentially dangerous addition to the 2017 OWASP Top 10 Application Security Risks list. We recommend replacing proposed entry A7 with a defensible security risk prior to finalizing the new Top 10 list and making it generally available.

In general, we caution against including any elements that prescribe security controls or particular security testing in the Top 10 Application Security Risks list, as the newly proposed entries A7 and A10 do. We believe that such inclusions muddle the clarity and purpose of the OWASP Top 10 as well as reduce its utility.

Why the OWASP Top 10 matters

The OWASP Top 10 has had a huge impact on our industry. Over the years, the project grew from an awareness spearhead to a foundational plank of application security compliance, particularly pertaining to PCI. Changes to the Top 10 drive mandated spend across industries.

Even accepting the effect of long-standing data collection, process, and transparency problems with the OWASP Top 10, A7 is a particularly extraordinary proposal. Our recommendation to remove proposed entry A7 rests on three primary grounds. The OWASP Top 10 project maintainers:

  • Show no tie between the underlying data and the elements of A7’s description.
  • Formulate A7 as judgment on security control sufficiency rather than risk due to software vulnerability.
  • Conflate patterns of secure design with a product category.

Formulation as risk or prescription?

The OWASP Top 10 list bills itself as an enumeration of “Application Security Risks” [SR]. Proposed entry A7 is not formulated as an application security risk (of injection/scripting or forgery, of impersonation or escalation of privilege, of exposure, or any other). Instead, A7 is written to mandate the use of an unspecified amount of a security control. Using the language of a presumptive close, A7 advocates,

“Be sure to understand what types of attacks are covered by attack protection.”

With this sentence, the proposed A7 conjures a new security control category—“attack protection”—promoting it to the level of “Authentication” or “Encryption.” “Attack Protection,” however, is not an accepted type of security control: it’s not indicated in OWASP’s definition of a security control [SC] nor is it present in source material from ISACA or COBIT. It is interesting to note, however, that Contrast Security’s website is the first non-ad result in Google when searching for “attack protection” and “application security.” It is also the only result on the first page of search results that uses the words together in this context with similar meaning. IBM, Radware, and others use the two words together but not as a moniker for a single class of control, let alone as a valid category of risk that would befit the Top 10 list. Additionally, “attack protection” is not indicated as a control type by OWASP’s “Proactive Controls” project [PC].

By coining a new control type without the benefit of broad industry acceptance, the proposed entry advocates a specific means of protection (a “how”):

“You can use technologies like WAFs, RASP, and OWASP AppSensor to detect or block attacks, and/or virtually patch vulnerabilities.”

This can be interpreted to mean exclusively Gartner’s emerging product category: RASP. The listed alternatives offer a false choice. Notwithstanding its OWASP ‘flagship’ status, AppSensor hasn’t had a major release announcement in about two years [15]. Its wiki, mailing list, and GitHub repository are not up to date, rich in content, or dynamic. Though conceived of and maintained by gifted individuals, AppSensor cannot be considered viable for industry-level adoption by either SMBs or enterprises. To be clear, we believe it is possible to get value out of AppSensor, but only after expending considerable effort, much of it in green fields. Indicating AppSensor as a drop-in control is analogous to indicating that using Express.js gives you a website.

WAFs, the other listed alternative, have been shown to be ineffective even after years of maturation. In particular, testing shows that a lack of application-layer visibility and stateful context prevents WAFs from resisting manual attack effectively. In our professional services vulnerability discovery experience, we see little to no attack resistance afforded by WAFs. We also see very little customer interest in WAFs; previous adopters are moving on. That leaves RASP. Thus, the proposed A7 is written to say, “if you don’t want A7, get RASP.”

Another challenge exists with A7’s proposed formulation: it speaks to virtual patching in addition to introducing “attack protection.” Elements of A7’s description (“Threat Agents,” “Attack Vectors”) demand runtime protection. Others focus on the speed at which organizations respond and patch (“Technical Impacts” and “Business Impacts” particularly). Still others (“Security Weakness,” “Am I vulnerable…,” “How do I prevent…”) cover both runtime protection and speed of patching. As was the case with its coined term “attack protection,” neither A7’s description nor the “Virtual Patching” best practice page lists a viable industry-wide solution to the virtual patching problem (though, again, both represent great work by gifted teams).

The main issue is that if the proposed A7 were re-worded as a risk, it is unclear which risk we would be considering. It could be risk resulting from either:

  • Process and other organizational gaps resulting in a security initiative’s lack of patching capabilities; or
  • Control-centric gaps resulting in failure of an application to exhibit an emergent behavior coined “attack protection.”

Even in committed DevOps cultures, these two capabilities are separate, owned by different stakeholders. They are two complementary and additive concepts. Process concerns and security capabilities, such as patch management, have no place in the OWASP Top 10—a list of application security risks. Instead, they belong in OpenSAMM or a related project. Calls for use of a particular tool (such as RASP) also do not have a place in the Top 10.

Some may counter that while the OWASP Top 10 list of application security risks exists in the application domain, risks may be reasonably countered by controls in a different domain. Indeed, OWASP and other standards documentation indicate that training or security testing (activities within a security initiative) and WAFs (a tool deployed by an initiative) are valid security controls. Yet, if the 2017 OWASP Top 10 RC intends to shine light on this dramatically broader scope, then why has every other Top 10 release entry in its history been confined to the scope of software vulnerability? Why has A7 been documented as a control prescription and not a risk?

Though it is less flagrant, some may perceive the proposed entry A10 as another pitch for RASP and/or IAST technologies. The item explicitly indicates that:

“Dynamic and sometimes even static tools don’t work well on APIs”

then warns,

“Be sure your security analysis and testing covers all your APIs and your tools can discover and analyze them all effectively.”

Wichers admits to re-treading other entries in the context of APIs [DW]. This is a direct admission that A10, as proposed, does not reflect a unique application security risk but rather the application of other risks to a specific application design element: the API. One could consider the limitations of A10’s formulation at length in its own right, or the “supportive” effect it has on A7, but those topics are beyond the scope of this article.

‘Data’-driven

Others have pointed out the lack of a meaningful connection between the published data trove and the proposed Top 10 entries. In his articles, Glas [BG1][BG2] concludes:

“There is data (to some extent) for each of the Top 10 categories, with the exception of the new ones (A7 & A10). <snip> Without data, we have to look a little more at where the other entries may have come from. The only references I could find to A7 and A9 were recommendations from Contrast to add them to the Top 10. While I don’t disagree that these are issues, I’m trying to determine the justification behind making them the only two new entries in this Top 10.”

Glas points out that he cannot find any satisfying justification for A7 among mailing lists, the slack channel, or elsewhere.

Conflating design with products

Conflating a single product category, like WAF or RASP, with an important property of secure design leads organizations in the wrong direction. It leaves them vulnerable to classes of attack even when the products work as intended, subject to DoS or worse.

When maturity and capability allow, and risk appetite indicates, we believe that organizations should design software to respond to conditions observed at runtime with security in mind. We have been designing systems that do this for over two decades and have spoken on the topic at length at developer conferences like SecAppDev. Because of this, we can speak to both the challenge and the value of such an approach.

Like any cross-cutting concern, an application’s ability to detect and respond to conditions at runtime affects a host of other security controls and requires a broad range of features and functions, acting individually or in concert. A few classes of response include behavioral access control, dynamic log-level adjustment, velocity throttling, and user and administrative notifications. The tools and frameworks listed by A7 have no claim to behavioral access control and only proof-of-concept functionality to address log adjustment, velocity throttling, or notification.
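
To make concrete why these responses are application-level, cross-cutting concerns rather than drop-in product features, consider the following minimal Java sketch. It is purely illustrative: the class, thresholds, and notification hook are hypothetical and are not drawn from AppSensor or any RASP product. It couples one response category, velocity throttling, with dynamic log-level adjustment and an administrative notification, all of which require state and coordination inside the application itself.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Hypothetical sketch of an application-level runtime response:
    // velocity throttling plus dynamic log-level adjustment and an
    // administrative notification. Names and thresholds are illustrative.
    public class RuntimeResponseSketch {

        private static final Logger LOG = Logger.getLogger("app.security");
        private static final int MAX_FAILURES = 5;                 // illustrative threshold
        private static final Duration WINDOW = Duration.ofMinutes(1);

        // Per-user sliding window of recent failure timestamps.
        private final Map<String, Deque<Instant>> failures = new ConcurrentHashMap<>();

        // Record a failed, suspicious request and decide whether to throttle the user.
        public boolean recordFailureAndCheckThrottle(String userId) {
            Deque<Instant> window = failures.computeIfAbsent(userId, k -> new ArrayDeque<>());
            Instant now = Instant.now();
            synchronized (window) {
                window.addLast(now);
                // Drop events older than the sliding window.
                while (!window.isEmpty() && window.peekFirst().isBefore(now.minus(WINDOW))) {
                    window.removeFirst();
                }
                if (window.size() >= MAX_FAILURES) {
                    respond(userId, window.size());
                    return true; // caller should throttle or step up authentication
                }
            }
            return false;
        }

        // Coordinate several response categories: logging detail, throttling, notification.
        private void respond(String userId, int recentFailures) {
            LOG.setLevel(Level.FINE); // dynamic log-level adjustment for forensics
            LOG.warning(() -> "Velocity threshold exceeded for user " + userId
                    + " (" + recentFailures + " failures in the last minute)");
            notifyAdministrators(userId, recentFailures);
        }

        private void notifyAdministrators(String userId, int recentFailures) {
            // Placeholder: a real system would page on-call staff or raise a ticket.
            System.out.println("ALERT: throttling " + userId + " after " + recentFailures + " failures");
        }

        public static void main(String[] args) {
            RuntimeResponseSketch sketch = new RuntimeResponseSketch();
            for (int i = 0; i < 6; i++) {
                boolean throttled = sketch.recordFailureAndCheckThrottle("alice");
                System.out.println("attempt " + (i + 1) + " throttled=" + throttled);
            }
        }
    }

Even this toy example must track per-user state, choose thresholds appropriate to the business, and decide whom to notify; generalizing it across the response categories above is precisely the design work that no single product category supplies.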

Countless other domain-specific dynamic response categories exist, based on organizations’ business logic. Neither RASP products nor AppSensor have the abstractions or integration necessary to provide a policy-driven dynamic runtime response across response categories and/or in areas of business logic. Neither has the ability to specify and manage detection or response at the level of an organizational or business-unit policy description, as an SMB or enterprise would need. A sketch of what such a policy abstraction might look like follows.
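
For illustration only, the following Java sketch (entirely hypothetical; the detection and response names are ours) suggests the shape of the policy-level abstraction we find absent from both RASP products and AppSensor: detections bound to responses per business unit, managed as policy rather than as product configuration.

    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of a policy abstraction: a business unit declares which
    // detection categories it cares about and which response categories apply, and
    // the application resolves responses from policy rather than from hard-coded
    // product behavior. All names are illustrative.
    public class ResponsePolicySketch {

        enum Detection { AUTH_VELOCITY, PARAMETER_TAMPERING, PRICING_ABUSE }
        enum Response { THROTTLE, STEP_UP_AUTH, RAISE_LOG_LEVEL, NOTIFY_FRAUD_TEAM }

        // Policy expressed per business unit, as an enterprise would need to manage it.
        private final Map<String, Map<Detection, List<Response>>> policy = Map.of(
            "retail-checkout", Map.of(
                Detection.PRICING_ABUSE, List.of(Response.THROTTLE, Response.NOTIFY_FRAUD_TEAM),
                Detection.AUTH_VELOCITY, List.of(Response.STEP_UP_AUTH)),
            "internal-admin", Map.of(
                Detection.PARAMETER_TAMPERING, List.of(Response.RAISE_LOG_LEVEL, Response.NOTIFY_FRAUD_TEAM)));

        // Resolve the responses a given business unit's policy prescribes for a detection.
        public List<Response> responsesFor(String businessUnit, Detection detection) {
            return policy.getOrDefault(businessUnit, Map.of()).getOrDefault(detection, List.of());
        }

        public static void main(String[] args) {
            ResponsePolicySketch sketch = new ResponsePolicySketch();
            System.out.println(sketch.responsesFor("retail-checkout", Detection.PRICING_ABUSE));
            // -> [THROTTLE, NOTIFY_FRAUD_TEAM]
        }
    }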

Proposed actions

The scope of this letter is to demonstrate, before the end of the comment period, the weakness of A7’s proposed formulation and to show that it is inappropriate for inclusion in a list of Top 10 application security risks. Providing specific and actionable advice on what should replace the proposed entries A7 (and potentially A10) is beyond that scope.

However, consideration of the evolution of OWASP’s Top 10 list from 2003 to the 2017 RC shows the maintainers have continued to wrestle with the challenge of characterizing authentication/authorization risks, particularly as they pertain to access to resources provided by APIs (object or account references, for instance). Data from OWASP community participants in the Top 10 project can be used to tune the previous release candidate into a practical Top 10 risk list that both addresses these persistent ontological challenges and provides a fresh update to those building systems with modern languages, frameworks, and platforms. That, not a call for RASP, is what the industry deserves.

Footnotes

  1. [BG1] – https://nvisium.com/blog/2017/04/18/musings-on-the-owasp-top-10-2017-rc1/
  2. [BG2] – https://nvisium.com/blog/2017/04/24/musings-on-the-owasp-top-10-2017-rc1-pt2/ 
  3. [DW] – http://lists.owasp.org/pipermail/owasp-topten/2017-May/001480.html
  4. [PC] – https://www.owasp.org/index.php/OWASP_Proactive_Controls
  5. [SC] – https://www.owasp.org/index.php/Category:Control
  6. [SR] – https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
  7. [15] – https://www.owasp.org/index.php/OWASP_AppSensor_Project

 

As we were crafting this response, important changes to the Top 10 occurred, principally due to the OWASP summit. Andrew van der Stock has taken ownership of the project. We pledge to work with him and high-quality partners like Brian Glas to reconsider the data in pursuit of a vendor-neutral and impactful release candidate. Additionally, Andrew recently accepted a position with Synopsys Consulting Services. Welcome to Synopsys, Andrew!

Moving forward, I will personally redouble my focus on neutrality, and the community should as well. Re-evaluating the data is essential, and since the community sees little or no tie between the available data and RC1’s A7, we’re hopeful. The remaining challenge, for those who, like myself, sincerely support the concepts behind A7, is the same one that faced A9 (“OSS risk”) in the Top 10’s last evolution: OWASP’s new leadership will have to wrestle with how best to draw attention to issues that are vital to the industry but for which there is a fundamental lack of visibility due to the limitations of existing vulnerability discovery tools and services.