Posted by Jamie Boote on June 6, 2016
Have you ever heard the old saying “You get what you get and you don’t get upset”? While that may apply to after school snacks and birthday presents, it shouldn’t be the case for software security. A software feature isn’t simply accepted by the software owner; there’s a strategic process of critique, justification, and analysis before it’s deployed. Security should be treated with the same attention to detail. After all, secure software doesn’t just happen out of nowhere—it has to be a requirement of the strategic development process. The requirements should be clear, consistent, testable, and measurable to effectively deploy secure software.
Traditionally, requirements are about defining what something can do or be. A hammer has to be capable of driving nails. A door lock needs to keep a door closed until it’s unlocked with a specific key. A car needs to move travelers from point A to point B along the nation’s roads. It also needs to work with the modern gasoline formulation. These types of requirements work fine for physical objects, but fall short when designing software.
Additionally, these objects can be used for more than just their intended purpose, and their purposes can be circumvented to suit the user. For instance, a hammer can be used to break a window, a door lock can be picked, and a car can be used to transport stolen goods. Similarly, software can be abused or made vulnerable. The key difference is that GM isn’t liable when their cars are used as getaway vehicles. However, when your software’s capabilities and permissions are hacked, you (as the software owner) are the one who suffers.
Security vulnerabilities allow software to be abused in ways that the developers never intended. Imagine being able to design a hammer that can only hammer nails and nothing else. By building robust software security requirements, you can lock down what your software does so that it can only be used as intended.
Fortunately, building software that is immune to the OWASP Top 10 is easier than building a hammer that turns to marshmallows when used to hit anything but nails.
A security requirement is a goal set out for an application at its inception. Every application fits a need or a requirement. Some applications allow customers to perform actions without needing help from a company representative. Just as those actions and outcomes are laid out as goals for the final application, the security goals must also be included. A security requirement is not a magic wand that you can wave at an application and say “Thou shalt not be compromised by hackers” any more than a New Year’s resolution is a magic wand that you can wave at yourself to lose weight. Just like a resolution to lose weight, being vague is a recipe for failure. How much weight? How will you lose it? Will you exercise, diet, or both? What milestones will you set? In security, the same types of questions exist. What kinds of vulnerabilities are you looking to prevent? How will you measure whether your requirement is met? What preventative measures will you take to ensure that vulnerabilities aren’t built into the code itself?
When building a security requirement, be specific about the kinds of vulnerabilities to prevent. Take this requirement example: “[Application X] shall not execute commands embedded in user-provided data that force the application to manipulate database tables in unintended ways.” This is a fancy way of saying that the application should not be vulnerable to SQL injection attacks. It can be verified with specific kinds of tests, both on the source code itself and on the compiled application. These attacks are preventable with a combination of rejecting or scrubbing bad input from the user, using parameterized database queries that flag data as data rather than as commands to be acted upon, and encoding the output of database calls to prevent bad data from attacking functionality down the line.
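As a rough illustration of the first two mitigations, here is a minimal sketch in Python using the standard library’s sqlite3 module. The table, column, and function names are hypothetical, and a real application would layer on more validation than this:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Mitigation 1: reject obviously bad input before it reaches the database.
    if not username.isalnum():
        raise ValueError("username must be alphanumeric")
    # Mitigation 2: a parameterized query (the ? placeholder) marks the
    # user-supplied value as data, never as SQL to be executed.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))  # → (1, 'alice')
# find_user(conn, "'; DROP TABLE users; --")  # rejected before any SQL runs
```

Because the requirement names a concrete failure mode, a test like this one can be written against it, which is exactly what makes the requirement measurable.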
In order to build good requirements, make sure you can answer questions about them. A security requirement should be built much like a functionality requirement; it shouldn’t be vague or unattainable. Anticipate the questions that the developers will have and answer them ahead of time.
When building a requirement, remember that it is a goal that someone must achieve. By creating specific and achievable requirements, the designers and developers can meet the security goals for an application.
If you are entrenched in the requirements or contracting world, you are already aware of the basic kinds of requirements: functional, non-functional, and derived. Security requirements fall into the same categories. Just as performance requirements define what a system has to do and be in order to perform according to specifications, security requirements define what a system has to do and be in order to perform securely.
When defining functional non-security requirements, you see things like “If the scan button is pressed, the lasers shall activate and scan for a barcode.” This is what a barcode scanner needs to do. When a security requirement is written, it describes the things a system has to do to enforce security, like “The cashier must log in with a magnetic stripe card and PIN before the cash register is ready to process sales.”
A functional security requirement describes functional behavior that enforces security. It can be directly tested and observed. Requirements dealing with access control, data integrity, authentication, and wrong-password lockouts fall under functional requirements.
Non-functional requirements describe what a system has to be. These are statements that support qualities like auditability and uptime. A non-functional security requirement is a statement like “Audit logs shall be verbose enough to support forensics.” Verbose logging is not itself a piece of functionality, but it supports audit requirements from regulations that may apply.
Derived requirements are inspired by the functional and non-functional requirements. When a system has a user ID and PIN functional requirement, a derived requirement may define the number of PIN guesses before an account is locked out. For audit logs, a derived requirement may support the integrity of the logs, such as log injection prevention.
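A derived requirement like the PIN-guess limit is concrete enough to sketch directly. The following is a simplified illustration, with hypothetical names and a hard-coded limit that a real system would make configurable:

```python
MAX_ATTEMPTS = 3  # the derived requirement: lock out after 3 failed guesses

class Account:
    """Toy model of an account protected by a user ID and PIN."""

    def __init__(self, pin: str):
        self._pin = pin
        self.failed_attempts = 0
        self.locked = False

    def verify_pin(self, attempt: str) -> bool:
        # A locked account rejects every attempt, even a correct one.
        if self.locked:
            return False
        # Real code should use a constant-time comparison here.
        if attempt == self._pin:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True
        return False

acct = Account("4321")
for guess in ("0000", "1111", "2222"):
    acct.verify_pin(guess)
print(acct.locked)  # → True: three bad guesses trip the lockout
```

Note how the derived requirement (the lockout) exists only to protect the functional requirement (PIN login) from abuse, which is the pattern the next paragraph describes.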
Derived requirements are tricky because they stem from abuse cases. Not only must the requirements designer think like a user and a customer, but they also have to think like an attacker. Every bit of functionality given to the user can be abused by an attacker. Login functionality can become password guessing attempts, uploading files can open a system up to hosting malware, and accepting text can open the door to cross-site scripting or SQL injection.
Security requirements can come from many sources during the requirements and early design phases. When defining functionality, that functionality must be defined securely or have supporting requirements to ensure that the business logic is secure. Generic guidance from industry best practices and regulatory requirements must be tailored to meet specific application requirements.
Abuse cases are a way to think like an attacker. A use case is flipped on its head and designers analyze how the functionality can be abused. If a user is allowed to generate reports with sensitive data, how might an unauthorized user gain access to those reports and their sensitive data? Those abuse cases are often answered by industry best practices which can be used to build requirements for how the application handles access to privileged data.
Security requirements can also come from analysis of the design via architecture risk analysis. If a web application uses a specific framework or language, industry knowledge of attack patterns and vulnerabilities can be applied. If a framework prevents cross-site scripting in some situations and not others, a requirement which speaks to how the developers will prevent cross-site scripting in insecure situations needs to be defined.
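For a requirement like that, the usual mitigation in contexts the framework doesn’t cover is explicit output encoding. Here is a minimal sketch using Python’s standard library; the template and function name are hypothetical, and real templates must also consider attribute, URL, and JavaScript contexts:

```python
import html

def render_comment(comment: str) -> str:
    # html.escape converts <, >, &, and quotes so the browser treats the
    # user's text as data to display, not markup to execute.
    return '<p class="comment">{}</p>'.format(html.escape(comment, quote=True))

print(render_comment("<script>alert('xss')</script>"))
# The script tag comes out as &lt;script&gt;..., rendered as literal text
```

A requirement written this way (“user-supplied text shall be HTML-encoded before rendering in contexts the framework does not auto-escape”) is testable in exactly the sense the earlier SQL injection example was.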
Every security requirement should address a specific security need, so knowledge of the vulnerabilities that could exist in an application is essential. Generic guidance and knowledge are not enough; specific security requirements arise from the specific application requirements.
It doesn’t matter if you build software in-house or if you outsource your software to third-party vendors; building sound security requirements can benefit you. By defining your security requirements early, you can spare yourself from nasty surprises later. Sound security requirements will help internally as they provide a clear road map for developers. They can also help with external regulatory requirements. Implementing measures to keep software from getting hacked is a good strategy and security requirements are a fantastic start to not being unhappy with what you get.
The best time to plant an oak tree was 20 years ago.
The next best time is now.
– Ancient Proverb
Build your software security requirements early and sit in the shade of securely built software later.