Posted by Mike Lyman on July 18, 2017
There is a sad reality in the software world: developer education and training not only neglect software security, they often teach developers insecure practices outright. This ranges from the ‘get it to work and move on’ habit to insecure code samples in the tutorials and forums we all use when learning new approaches. A study published earlier this year shows that insecure code samples from tutorials, vulnerable to attacks like SQL injection and cross-site scripting (XSS), find their way into real-world production code.
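To make the tutorial problem concrete, here is a minimal sketch (in Python with the standard-library sqlite3 module; the study itself examined PHP tutorials) contrasting the classic insecure tutorial pattern, building a SQL query by string concatenation, with the parameterized form that prevents injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Insecure, tutorial-style pattern: concatenation lets the input
# rewrite the query, so it matches every row instead of none.
insecure = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Secure: a parameterized query treats the input as data, not SQL.
secure = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(insecure)  # the injected clause matches all rows
print(secure)    # no user is literally named "' OR '1'='1"
```

The fix is a one-line change, which is exactly why insecure samples are so insidious: the concatenated version works fine in a demo and only fails when someone feeds it hostile input.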
In April 2017, Tommi Unruh, Bhargava Shastry, Malte Skoruppa, Federico Maggi, Konrad Rieck, Jean-Pierre Seifert, and Fabian Yamaguchi published “Leveraging Flawed Tutorials for Seeding Large-Scale Web Vulnerability Discovery.” They used known insecure code samples from popular tutorials on the web to find similar weaknesses in actual software projects on GitHub.
The aim was to use the tutorials to find weaknesses, and the results show that bad examples do end up in real-world code.
The researchers examined everything from little-known, little-used projects to well-known, widely used ones. They created an automated method to flag tutorial-derived code in projects, then manually verified that each flagged snippet was substantially the same as the code in an insecure tutorial and that it did in fact replicate the weakness.
Across 64,415 codebases on GitHub, their spider found 820 pieces of code that looked similar to the tutorial code they were looking for. Manual verification confirmed that 117 were substantially the same code and contained the same weaknesses as the tutorial samples. The problems appeared in less popular and more popular software alike.
While the numbers may look small, these are instances where a handful of bad tutorial code samples appear to have directly led to the same weaknesses in real-world open source projects. And the findings cover only the tutorial code the researchers were specifically looking for; they do not reflect all the other bad sample code developers rely on every day.
Developer education all too often teaches exactly the wrong way to secure software, and those wrong methods appear, often with little change, in production code. Developers learn the wrong methods, then practice and implement them in the real world. Combine that with the ‘get it to work and move on’ habit so many developers pick up along the way, and the wrong methods aren’t corrected before the project’s release. Organizations then get hacked as a result.
In a perfect world, developers wouldn’t learn insecure practices or produce insecure sample code. However, that is a tall order. We are seeing improvements, but as the study shows, we have a very long way to go.
If you produce sample code to teach developers, please make sure your samples are secure. Until everybody does that, the solution to insecure code largely falls to developers and their teams.
As individual developers, remember that sample code in tutorials is almost always incomplete. To get the immediate lesson across, sample code is simplified so the lesson is easy to see and grasp. Necessary steps like input validation and error checking are often left out and need to be added by the developer following the sample. The same is often true of other elements affecting code security. When following sample code, ensure that what you build cannot be exploited. Tutorial writers may have the luxury of writing insecure code; real-world developers do not.
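The input validation and error checking that tutorials leave out can be sketched like this. The `load_profile` function and its allow-list pattern are illustrative assumptions, not from the study; the point is that the validation wrapper, which a tutorial would usually skip, is part of the real code:

```python
import re

# Allow-list validation: accept only plain alphanumeric identifiers.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def load_profile(username: str) -> str:
    # The step tutorials omit: reject hostile or malformed input
    # before it reaches a query, file path, or template.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    # ... the actual lookup would go here; handling its failures
    # (missing user, backend error) is equally part of the real code.
    return f"profile for {username}"

print(load_profile("alice_99"))        # accepted
try:
    load_profile("alice'; DROP--")     # rejected before any query runs
except ValueError as err:
    print("rejected:", err)
```

Allow-listing what is valid, rather than block-listing what looks dangerous, is the safer default because it fails closed when an attacker finds an encoding the author did not anticipate.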
Development teams play a major role in ensuring that bad code doesn’t make it into production. Conduct peer code reviews to identify and correct any mistakes made during development. ‘Get it to work and move on’ leaves room for error. Peer review puts other eyes on the code and can help identify errors before the code is checked in.
A best practice is to have a security-aware developer involved in this process, specifically looking for code that may cause security issues. Many firms establish the role of a security champion, a person who acts as a satellite of the main software security group (SSG) and sits close to development teams as their resident security expert. Peer review and the participation of a security expert go a long way toward keeping insecure sample code out of production.
There are many static analysis tools available to help locate problems in code before they reach production. These tools, both commercial and free, can scan code for problems relatively quickly and identify common security bugs created in development. Tools like Synopsys’ SecureAssist act like a spell checker for developers. They identify security problems in code as they are being written and provide immediate remediation advice to the developer.
Development teams, no matter the industry in which they work, should look at the places where developers go to learn new skills and ask questions. Look at the quality of the lessons and the answers available. If the quality is consistently poor, direct your developers elsewhere. If the quality is good, list the source as a preferred source for your developers. Make sure they understand that the code they are seeing in other places may not meet your standards. Also, communicate that additional work will probably be required beyond copying, pasting, and rewriting slightly.
It is sad that developer training actively creates security problems in our software, but that is where we stand today. Research has shown that sample code with bad security practices appears in production code. As developers and as companies developing software, we must remain vigilant: remember that the lessons developers learn are often incomplete or wrong, and back developers up with good processes and procedures that catch not only their own errors but the errors they have imported from elsewhere.