A security requirement is a goal set out for an application at its inception. Every application fits a need or a requirement. For example, an application might need to allow customers to perform actions without calling customer service. Just as you lay out those actions and outcomes as goals for the final application, you must include the security goals.
A software security requirement is not a magic wand that you can wave at an application and say, “Thou shalt not be compromised by hackers,” any more than a New Year’s resolution is a magic wand that you can wave at yourself to lose weight. Just like a resolution to lose weight, being vague is a recipe for failure. How much weight? How will you lose it? Will you exercise, diet, or both? What milestones will you put out there?
In security, the same types of questions exist. What kinds of vulnerabilities are you looking to prevent? How will you measure whether your requirement is met? What preventative measures will you take to ensure that vulnerabilities aren’t built into the code itself?
When building a software security requirement, be specific about the kinds of vulnerabilities to prevent. Take this requirement example: “[Application X] shall not execute a command embedded in data provided by users that forces the application to manipulate the database tables in unintended ways.” This is a fancy way of saying that the application should not be vulnerable to SQL injection attacks. You can prevent these attacks with a combination of rejecting or scrubbing bad input from the user, using parameterized queries (carefully crafted database queries that flag data as data, not as commands to be acted upon), and encoding the output of database calls to prevent bad data from attacking functionality down the line. Then you can test this requirement with specific kinds of software tests, both on the source code and on the compiled application.
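As a minimal sketch of the “flag data as data” part of that requirement, here is a parameterized query using Python’s built-in sqlite3 module. The table, columns, and sample rows are illustrative assumptions, not part of any real application:

```python
import sqlite3

# Hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # The ? placeholder marks user input as data, never as SQL commands,
    # so embedded SQL fragments are treated as literal text.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))        # matches the real row
print(find_user(conn, "' OR '1'='1"))  # classic injection payload matches nothing
```

Because the payload is bound as a value rather than concatenated into the query string, the injection attempt simply fails to match any row, which is exactly the behavior the requirement demands and the behavior a test can verify.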
Requirements for your requirements
To build good requirements, make sure that you can answer some basic questions about them. A software security requirement should be much like a functionality requirement; it shouldn’t be vague or unattainable. Anticipate developers’ questions and answer them ahead of time. Here’s how:
- Is this testable? Can we test this requirement in the final application? “Be secure” is not a testable requirement. “Encode all user-supplied output” is.
- Is this measurable? When we test for this, can we determine coverage and effectiveness?
- Is this complete? Are we forgetting something? Are we mandating checks for user-supplied data to databases but not logs?
- Is this clear? Will the people responsible for designing, implementing, testing, and delivering on this requirement understand the intent of the requirement?
- Is this unambiguous? Could someone interpret this requirement in any other way?
- Are these requirements consistent? Are we approaching each security requirement in the same way to ensure that the security measures are applied consistently across the board?
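The “testable” criterion above can be made concrete. “Encode all user-supplied output” is testable because you can write an automated check for it, as in this Python sketch (the render_comment function is a hypothetical example, not a real API):

```python
import html

def render_comment(user_text):
    # Encode user-supplied output before embedding it in HTML.
    return "<p>" + html.escape(user_text) + "</p>"

# A concrete, repeatable test for the requirement
# "encode all user-supplied output":
rendered = render_comment("<script>alert(1)</script>")
assert "<script>" not in rendered
assert rendered == "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
```

“Be secure,” by contrast, gives you nothing you could put in an assert statement, which is exactly why it fails the testability question.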
When building a requirement, remember that it is a goal that someone must achieve. Designers and developers can’t meet the security goals for an application unless you create specific and achievable requirements.
Types of security requirements
If you’re entrenched in the requirements or contracting world, you’re already aware of the basic kinds of requirements: functional, nonfunctional, and derived. Software security requirements fall into the same categories. Just as performance requirements define what a system has to do and be to perform according to specification, security requirements define what a system has to do and be to perform securely.
When defining functional nonsecurity requirements, you see statements such as “If the scan button is pressed, the lasers shall activate and scan for a barcode.” This is what a barcode scanner needs to do. Likewise, a security requirement describes something a system has to do to enforce security. For example: “The cashier must log in with a magnetic stripe card and PIN before the cash register is ready to process sales.”
Functional requirements describe what a system has to do. So functional security requirements describe functional behavior that enforces security. Functional requirements can be directly tested and observed. Requirements related to access control, data integrity, authentication, and incorrect-password lockouts fall under functional requirements.
Nonfunctional requirements describe what a system has to be. These are statements that support qualities such as auditability and uptime. Nonfunctional security requirements are statements such as “Audit logs shall be verbose enough to support forensics.” Verbose logging is not a direct functional requirement, but it supports audit requirements from regulations that might apply.
Derived requirements are inspired by the functional and nonfunctional requirements. For example, if a system has a user ID and PIN functional requirement, a derived requirement might define the number of allowable incorrect PIN guesses before an account is locked out. For audit logs, a derived requirement might protect the integrity of the logs, for example by mandating log injection prevention.
Derived requirements are tricky because they stem from abuse cases. Not only must requirements designers think like a user and a customer, but they also have to think like an attacker. Every bit of functionality given to users is something an attacker could abuse. For example, log-in functionality could invite password-guessing attempts, uploading files could open a system up to hosting malware, and accepting text could open the door to cross-site scripting or SQL injection.
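The two derived requirements just mentioned, a PIN-guess lockout and log injection prevention, can be sketched in a few lines of Python. The class, function, and the limit of three attempts are illustrative assumptions; a real derived requirement would spell out the exact limit and log format:

```python
MAX_PIN_ATTEMPTS = 3  # hypothetical limit; the real number comes from the requirement

class Account:
    """Tracks failed PIN guesses and locks the account after too many."""
    def __init__(self, pin):
        self._pin = pin
        self.failed_attempts = 0
        self.locked = False

    def check_pin(self, guess):
        if self.locked:
            return False  # a locked account rejects even the correct PIN
        if guess == self._pin:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        self.locked = self.failed_attempts >= MAX_PIN_ATTEMPTS
        return False

def audit_log_line(user, action):
    # Strip CR/LF from user-controlled fields so an attacker can't
    # forge extra entries in the audit log (log injection prevention).
    safe_user = user.replace("\r", " ").replace("\n", " ")
    return f"user={safe_user} action={action}"
```

Both functions are directly testable: feed in three bad PINs and assert the account locks; feed in a username containing a newline and assert the log line stays a single line.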
Software security requirements can come from many sources in the requirements and early design phases. When you’re defining functionality, you must define it securely or provide supporting requirements to ensure that the business logic is secure. You should tailor generic guidance from industry best practices and regulatory requirements to meet specific application requirements.
Abuse cases are one way to think like an attacker. Designers flip a use case on its head and analyze how the functionality could be abused. If a user is allowed to generate reports with sensitive data, how might an unauthorized user gain access to those reports and their sensitive data? Abuse cases are often answered by industry best practices, which you can use to build requirements for how the application handles access to privileged data.
Software security requirements can also come from an analysis of the design via architecture risk analysis. If a web application uses a specific framework or language, you’ll need to apply industry knowledge of attack patterns and vulnerabilities. If a framework prevents cross-site scripting in some situations and not others, you’ll need to define a requirement that speaks to how the developers will prevent cross-site scripting in insecure situations.
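As a sketch of such a framework-gap requirement: many templating frameworks auto-escape HTML text but cannot stop a javascript: URL placed in an href attribute, so a requirement might mandate URL scheme validation for that insecure situation. The safe_href function and the allowed-scheme list below are illustrative assumptions:

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}  # assumption: only web links are legitimate

def safe_href(user_url):
    # Auto-escaping stops <script> injection in HTML text, but not a
    # javascript: URL in an href attribute. Validate the scheme instead
    # and fall back to a harmless placeholder link.
    if urlparse(user_url).scheme.lower() in ALLOWED_SCHEMES:
        return user_url
    return "#"

print(safe_href("https://example.com"))  # allowed through unchanged
print(safe_href("javascript:alert(1)"))  # rejected, replaced with "#"
```

The point is not this particular check but the pattern: where the framework’s protection ends, a written requirement tells developers exactly what they must do instead.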
Every security requirement should address a specific security need, so it’s essential to know about the vulnerabilities that could exist in an application. Generic guidance and knowledge are not enough. Specific security requirements will arise from specific application requirements.