Posted by Synopsys Editorial Team on November 8, 2016
As we continue to sharpen our security skills, it’s important to have a firm understanding of foundational security concepts and the nomenclature surrounding them. That includes the difference between software bugs and design flaws. While almost all software contains fixable bugs (usually measured in the thousands), design flaws are often overlooked, and they carry serious security implications.
Gary McGraw has been preaching this for years: “Bugs and flaws are two very different types of security defects. We believe there has been quite a bit more focus on common bugs than there has been on secure design and the avoidance of flaws, which is worrying since design flaws account for 50 percent of software security issues.”
Let’s take a closer look at where he draws the line between the two: bugs are implementation-level defects in the code itself, while flaws are deeper problems in the design or architecture of the system.
With that high-level framework in mind, let’s go through a few examples of how these play out in the real world.
File uploads are one of the best examples of the bug vs. flaw distinction. Let’s say you have an application that lets users upload PDF files to be printed by your organization (quite the snazzy app, too). A bug in such a system would be something like this: file permissions are not set properly on the uploads folder, and the web server cannot write files there upon upload.
Fixing the file permissions should easily fix the bug. But wait! Let’s take a closer look at this uploads folder. Is access to the folder itself restricted? Is there any file type validation in the upload handler? Does the system sanitize the PDFs? Is the upload folder properly jailed on the server so no executables can run within it?
For each “no” answer to the questions above, you have a potential security flaw. None of these things will break your application in any obvious way, but they will be the first questions hackers ask.
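The file type check from the questions above can be sketched as a simple whitelist plus a content check. This is a minimal illustration, not a production upload handler; the function and constant names are hypothetical, and a real system would also enforce size limits, scan contents, and restrict the destination folder.

```python
import os

# Hypothetical whitelist-based upload check for the PDF-printing app.
ALLOWED_EXTENSIONS = {".pdf"}
PDF_MAGIC = b"%PDF-"  # well-formed PDF files begin with these bytes

def is_acceptable_upload(filename: str, data: bytes) -> bool:
    """Reject anything that is not plausibly a PDF."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False  # wrong or missing extension
    if not data.startswith(PDF_MAGIC):
        return False  # extension claims PDF, but the content does not
    return True

print(is_acceptable_upload("report.pdf", b"%PDF-1.7 ..."))  # True
print(is_acceptable_upload("shell.php", b"<?php ..."))      # False
print(is_acceptable_upload("fake.pdf", b"MZ\x90\x00"))      # False
```

Note that checking both the extension and the leading bytes closes the classic gap where an attacker renames an executable to `something.pdf`.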
Bugs and flaws happen in third-party software all the time. Luckily, in larger projects they are fixed and patched quickly, and new versions are released. But you still have to apply those updates.
A bug due to out-of-date third-party software might be something like this: the charting library we are using has a display bug that distorts the bar charts on a time-series axis.
A security flaw in this system isn’t something that’s going to be immediately noticeable. Your site will appear to be working normally, but the flaw is there, like a secret passage into your access layers, happily open and waiting. Discovery of that passage leads to something like Heartbleed or the DROWN attack.
Due to the nature of a flaw, everything will appear to be fine before, during, and potentially even after the exploit. The only way to know is by keeping your software up to date and monitoring the disclosure channels of whichever third-party tech you’re using.
Remember: the more popular your third-party software is, the more exploits there are going to be; just look at tools like WordPress, which are constantly under attack. The upside is that a large community often comes with lots of contributors, and people tend to look out for each other.
Client-side trust means assuming the client sending requests to a server has done its due diligence in analyzing, filtering, and sanitizing the data that comes from the user, and is only passing along correct data.
It shouldn’t come as a surprise that you should NEVER give blanket trust to a client. Yet people still make mistakes with this seemingly obvious security practice. There is no bug here unless your client or API has a bug related to the data transfer; this is a flaw in mindset and architecture.
Never trust user input, and never trust client-side packets.
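One way to act on that rule is to recompute anything security-relevant on the server and ignore what the client claims. The sketch below assumes a hypothetical order endpoint for the PDF-printing app from earlier; the field names and price table are illustrative only.

```python
# Server-side source of truth: prices in cents. The client never sets these.
PRICES = {"pdf-print": 5_00, "poster-print": 20_00}

def compute_total(order: dict) -> int:
    """Derive the total from server-side data, ignoring any client-supplied total."""
    item = order.get("item")
    if item not in PRICES:
        raise ValueError("unknown item")
    qty = int(order.get("quantity", 0))
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of range")
    return PRICES[item] * qty

# A tampered client claims a total of 1 cent; the server ignores that field.
tampered = {"item": "poster-print", "quantity": 3, "total": 1}
print(compute_total(tampered))  # 6000 cents, not the client's claimed 1
```

The design point is that the `total` field in the request is treated as noise: the server derives it from data only the server controls.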
Of course, always refer to the security commandments of whichever platform you’re using, such as the OWASP Top 10 list. These cover things like insecure cookies, missing input sanitization, weak ACLs, and other areas where you as a developer can take the lead in securing your application.
These are exploits that are known, testable, and verifiable. If nothing else, reviewing them will help you get into the minds of hackers, and that new perspective might reveal a few other flaws lurking in your system.
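As one concrete example from that checklist, cookie hardening can be sketched with Python’s standard-library `http.cookies` module. The session value here is illustrative; the attributes (`Secure`, `HttpOnly`, `SameSite`) are standard cookie attributes from RFC 6265 and its extensions.

```python
from http.cookies import SimpleCookie

# Sketch: a session cookie with hardened attributes.
cookie = SimpleCookie()
cookie["session"] = "abc123"              # value is illustrative
cookie["session"]["secure"] = True        # only sent over HTTPS
cookie["session"]["httponly"] = True      # not readable by page JavaScript
cookie["session"]["samesite"] = "Strict"  # not sent on cross-site requests

# Renders a Set-Cookie header carrying all three flags.
print(cookie.output(header="Set-Cookie:"))
```

Each flag removes one avenue of attack: `Secure` blocks plaintext interception, `HttpOnly` blunts cookie theft via XSS, and `SameSite` limits CSRF.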
Knowing what you’re dealing with is going to help you know what to look for. You can think of it like this: bugs will reveal themselves to you, but you need to go looking for flaws.