One of my favorite books, “The Hitchhiker’s Guide to the Galaxy,” describes itself in the introduction like this:
“In many of the more relaxed civilizations on the Outer Eastern Rim of the Galaxy, the Hitchhiker’s Guide has already supplanted the great Encyclopedia Galactica as the standard repository of all knowledge and wisdom, for though it has many omissions and contains much that is apocryphal, or at least wildly inaccurate, it scores over the older, more pedestrian work in two important respects:
First, it is slightly cheaper; and second, it has the words ‘DON’T PANIC’ inscribed in large friendly letters on its cover.”
The OWASP Top 10 is like the “Hitchhiker’s Guide” in many ways. It supplanted OWASP’s first document—the OWASP Developer Guide—and overshadows nearly all else that we do at the OWASP Foundation, including the excellent Cheat Sheets, Testing Guide, and Application Security Verification Standard (ASVS). The Top 10 is downloaded millions of times every year, and for better or worse, it’s the backbone of nearly every application security program and many training courses and tertiary syllabuses—and the bane of many developers’ lives.
I think it has succeeded because it’s short, it’s easily digestible, and it doesn’t ask too much of the organizations and individuals adopting it. Organizations like that last aspect because, unlike many compliance regimes, the OWASP Top 10 is very approachable. Although it’s not a standard, that’s how it’s used.
Earlier this year, the first release candidate (RC1) of OWASP Top 10 2017 came out after a long gestation period. I think we can safely say that it wasn’t well-received. The social media pile-on against the founding OWASP Top 10 leadership was not one of #InfoSec’s finest moments. I can’t imagine that the peer review of many other standards or documents has ever caused such commotion.
We found out the OWASP Top 10 has a lot of passionate stakeholders. In other words, change the OWASP Top 10 at your peril.
However, there were positives that came out of this process. Community members, such as Brian Glas, reviewed and reanalyzed the available data, and it became obvious that there were issues with the construction of the OWASP Top 10. There was confusion in the community about the inclusion of two new items (which have since been removed). But the OWASP Top 10 has always included forward-looking issues for which there is little to no evidence. For example, in 2007 I put in cross-site request forgery (CSRF) because no application or app framework had CSRF protection. And in 2013, we included “Using components with known vulnerabilities,” an issue that is now understood to be responsible for one of the largest breaches of all time.
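To illustrate the kind of protection that was missing when CSRF went into the 2007 list, here is a minimal sketch of the synchronizer-token pattern that frameworks now ship by default. The function names and session dictionary are illustrative assumptions, not code from any particular framework:

```python
import hmac
import secrets

# Minimal sketch of the synchronizer-token CSRF defense.
# Names here are hypothetical, for illustration only.

def issue_csrf_token(session):
    """Generate a random token and store it in the server-side session."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token  # embedded in a hidden form field on each page

def verify_csrf_token(session, submitted_token):
    """Reject state-changing requests whose token doesn't match the session."""
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, submitted_token)
```

Because a cross-site attacker cannot read the victim’s token, forged requests fail this check; the point of putting CSRF in the Top 10 was to push exactly this pattern into the frameworks themselves.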
I can only imagine the emotional toll that responsibility for the OWASP Top 10 has taken on the leadership team of Dave Wichers and Jeff Williams. When Dave had clearly had enough, he passed the leadership baton to me. This came at an interesting time for me personally: I had just accepted a senior principal consultant role in the United States with Synopsys and was busy moving from Australia back to the States. But leading the OWASP Top 10 is not something you can turn down lightly.
I knew taking on the OWASP Top 10 would be a lot of work, having seen firsthand how long it takes to research and write up. I knew I needed help, and that I needed to fix a fair few things, possibly even start over.
One of the first issues to be addressed was governance; I needed to appear—and to be—independent. I appointed co-leads Neil Smithline and Torsten Gigler, two contributors I didn’t know well at the time but am glad I do now. Neil has been helping with the OWASP Top 10 since 2004, and Torsten has been translating it since 2010, as well as putting up the wiki version. After a short while, we added Brian Glas, whose data analysis of the original RC1 was key to our understanding what needed fixing. Going forward, I want to make sure all OWASP flagship projects have this diversity of leadership, as having more leaders adds a wealth of different experiences and perspectives to any project.
The next issue was transparency; many felt that RC1 had appeared out of nowhere. I didn’t think this was true—nor did those who carefully watched the OWASP Top 10 list—but I agreed that we needed to increase transparency. So we uploaded everything to GitHub, we translated the PowerPoint deck to Markdown, and we worked in the open, with full accountability and traceability. If an issue was logged by a community member, you could see the change and who made it. If we received a change from a pull request, you could see who submitted it, what exactly changed, and who merged it.
At the OWASP Project Summit in London in June 2017, I participated remotely in the OWASP Top 10 sessions. We resolved the following: we would reopen a short data call; 8 of the 10 items would be identified by data; and up to 2 of the items would be forward-looking, drawn from a short list of around 25 common issues and decided by a survey of the community.
Because of these decisions, we received a great deal of new data, some of it game-changing. We received data from small and large consultancies, tool vendors, managed service providers, and, at the very end, one of the largest bug bounty firms. In all, we received good data on 114,000 apps and many thousands of bug bounties. We received 516 survey responses, which led to our including deserialization and insufficient logging and monitoring for the first time. But where do these issues belong in the list?
After analysis, it became clear the order of the OWASP Top 10 was at odds with the data. Previously, to maintain stability, when an issue had a place in the OWASP Top 10 but had become less important, we wouldn’t move it from its old position immediately. This time around, I asked Twitter whether folks wanted the order to be based on the existing order, potential impact (e.g., the number of breached records or the financial loss), the likelihood as defined by the data, or risk (which is a mix of the previous two). Of our 200 respondents, 59% wanted risk-based ordering. So that’s what we did. And that’s the reason cross-site scripting (XSS) has fallen to A7 and others have fallen out altogether. In some ways, this is good, as it likely means the OWASP Top 10 is helping to reduce previously prevalent web application security issues.
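Risk-based ordering can be pictured as a simple combination of likelihood and impact. The sketch below uses invented placeholder scores, not the actual Top 10 data, purely to show how the combination can reorder items:

```python
# Illustrative sketch of risk-based ordering: risk combines likelihood
# (how often an issue appears in the data) with potential impact.
# All scores below are made-up placeholders, not real OWASP inputs.

issues = {
    "Injection": {"likelihood": 0.8, "impact": 0.9},
    "XSS": {"likelihood": 0.9, "impact": 0.4},
    "Insecure Deserialization": {"likelihood": 0.3, "impact": 0.9},
}

def risk_score(scores):
    """One simple model: risk as the product of likelihood and impact."""
    return scores["likelihood"] * scores["impact"]

ranked = sorted(issues, key=lambda name: risk_score(issues[name]), reverse=True)
```

Under a weighting like this, a very common but lower-impact issue can rank below a rarer, higher-impact one, which is the kind of effect that moved XSS down the list.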
Here’s the new OWASP Top 10 2017:

A1: Injection
A2: Broken Authentication
A3: Sensitive Data Exposure
A4: XML External Entities (XXE)
A5: Broken Access Control
A6: Security Misconfiguration
A7: Cross-Site Scripting (XSS)
A8: Insecure Deserialization
A9: Using Components with Known Vulnerabilities
A10: Insufficient Logging and Monitoring
All these issues are important, and I will likely run through each of them in future blog posts, but for now I want to thank the community and particularly our data contributors for their passion and involvement in helping us make a better, stronger, more evidence-based OWASP Top 10.
I commend those who provided constructive feedback, like John Steven in an earlier post on this blog. We don’t exactly see eye to eye on this, but I think his input made the OWASP Top 10 a far stronger document, one able to withstand scrutiny and peer review. All the data we used is available and will inform not just the OWASP Top 10 but many other OWASP projects, such as the ASVS, in the coming months.
We are in the final stages of preparing the OWASP Top 10 2017. If you want to review and provide feedback, you can get a fairly recent copy in Markdown, PDF, or PowerPoint format at GitHub. Please use the golden-master branch until Nov. 13 and the 2017-final branch Nov. 13–20. The OWASP Top 10 2017 comes out on Nov. 20 in all the usual places. Please log any issues at GitHub.
Lastly, I want to thank the founding OWASP leadership team: Dave Wichers and Jeff Williams. The OWASP Top 10 would not be where it is today without them.
Andrew van der Stock is a senior principal consultant at Synopsys, providing technical leadership in security architecture, threat modeling, security architecture reviews, secure coding guidelines and reviews, assurance and penetration tests, risk assessments, and developer training. He has worked in the IT industry for over 20 years and is a seasoned web application security specialist and enterprise security architect. Andrew currently leads the OWASP Top 10 2017 and Application Security Verification Standard projects.