Software Integrity

OWASP Top 10 2017: But is it fixed?

Months back, I called outright for the removal of “A7: Insufficient Attack Protection” from the OWASP Top 10. The OWASP Top 10 team recently published a second release candidate (RC2) for OWASP Top 10 2017—and A7, which was in RC1, is conspicuously absent. So is the Top 10 fixed?

My argument to remove A7 was grounded in three concerns. Here’s a summary of the indictments:

  1. It was ontologically unclear. There was no definition of what “attack protection” entailed.
  2. It was taxonomically invalid. Attack protection is a control, not a risk, and so didn’t belong on a list of risks.
  3. It was biased. Because the OWASP Top 10 can be sponsored, the inclusion of this item without any anchor in the data suggested strong bias.

I characterize this list as the weakest of arguments. Unlike other, more stringent “goodness” criteria that could be applied to the Top 10 RC, merely requesting that an item be defined, be appropriate to the list’s stated type, and appear unbiased is about as low a bar as can be set. Though I received disappointingly scant public support, there must have been much tacit agreement, as no one offered a superficial, let alone substantive, argument against my position. In RC2, this item and the other problematic new addition, A10, were removed. Two legacy items were merged, one was dropped, and three were added. So is it fixed?

While we can apply the three criteria above to any past, present, or future candidate list item in a straightforward way, the Top 10’s credibility is not simply the sum of the validity of its items. Though cheeky, the “Is it fixed?” formulation confines analysis of the Top 10 to small beer.

The OWASP Top 10 has become a de facto standard and the backing specification for regulation such as PCI DSS that drives organizations’ application security approach and spend. The Top 10 should change with software development and its associated risk. In this context, we should consider not whether the 2017 release is “fixed” but whether the OWASP Top 10 project is healthy. I believe the answer here is a resounding yes.

Admirably, Dave Wichers handed project ownership over to Andrew van der Stock this year. Andrew has thought about risks and controls for many years, doing yeoman’s work maturing the Application Security Verification Standard (ASVS). But it’s the bold actions he quickly took, more than his prior experience, that have restored faith in OWASP leadership.

Diversity of stakeholders in OWASP leadership

Perhaps foremost among Andrew’s actions was bringing three other contributors into project leadership. The leaders now represent product- and service-centric application security vendors, security architect practitioners outside the vendor space, and contributing OWASP volunteers from outside the United States. This diversity of affiliations and perspectives better reflects the Top 10’s reach toward, and impact on, its various constituencies.

Transparency

Complementing the additional leadership is another powerful improvement: transparency. Some describe OWASP as “radically transparent,” and the Top 10 project now reflects that openness as well.

GitHub has replaced closed meetings and private conversations.* At the time of publication, 125 GitHub issues from 40 contributors had been resolved in the production of RC2, leaving a traceable record of who requested (and resolved) what. Andrew proactively communicated process and changes throughout, on mailing lists and elsewhere. The team has documented its results in easy-to-consume presentations.

Data

In the AppSec USA 2017 keynote, I highlighted concerns with existing OWASP data, pointing out its limited visibility into vulnerability classes and its biases stemming from vendors’ technology capabilities and approaches. Brian Glas, too, wrote on the all-important distinction between tool-based and manual assessment approaches and the negative impact this has on Top 10 data.

Now 114,000 apps’ worth of data has driven Top 10 item selection. A raft of firms, my own included, contributed data where they hadn’t previously. This assessment data was augmented by 516 surveys, allowing the community’s many other stakeholders to contribute even if they don’t run assessment practices or software-as-a-service (SaaS) assessment tools. Surveys also further democratized the somewhat controversial (but, I think, essential) inclusion of so-called forward-looking items in the list. These are items that contributors believe need explicit mention precisely because they’re likely underrepresented in the data: tools aren’t yet equipped to discover such risks.

So it’s fixed then…

One can legitimately complain about the OWASP Top 10 2017 RC2. The contributing community, methodology, and data don’t reflect the full range of stakeholders that the release will ultimately affect through PCI DSS. The list itself represents a rather quaint subset of application security software vulnerabilities, falling well short of fulfilling its “top 10 security risks” aim. The writing, though much improved, can still elicit a groan as contributors walk a tightrope between conveying general classes of problems and describing specific instances of those problems compelling and concrete enough to direct vendors and practitioners.

The items within RC2 are not the risks I would have chosen, nor do they reflect what some high-end assessment practices find and address. Again, this doesn’t matter: the present-day OWASP Top 10 project is healthy because it has dramatically improved in leadership and representation, transparency, and data quality.

It’s reasonable to believe that the community’s reaction to these improvements will sustain the project’s newfound dynamism and that the list will continue to improve. The leadership team seems serious about balancing the pace of change against stability in a specification that backs regulatory action. And compared with what we had before, that’s a healthy situation indeed. Job well done, OWASP Foundation.

* This doesn’t imply that these communications, meetings, or decisions were conducted in closed or private settings for nefarious purposes. Nonetheless, they left the community without an easily auditable trail allowing public scrutiny of how or why decisions were made.