Posted by Synopsys Editorial Team on January 18, 2017
Modern websites and applications are feature-rich. They provide the user with an intuitive flow through business logic and data. Application developers write these features, rely on their operation, and may even re-use them in their code. Due to rapid, feature-driven development and code sharing, when a vulnerability is introduced in code (and goes undetected) it can spread very quickly.
Applications in an organization may share nothing but a technology stack, with no common business function, and still inherit one another's vulnerabilities. Fortunately, some developers and their software security groups (SSGs) are beginning to look at a security activity known as code review.
There are many articles out there from reputable software vendors that cover the benefits of ad-hoc code reviews. Here’s a quick summary of some of those benefits:
When developers review code for dependent applications, their mental model of how the code comes together to support business functions becomes clearer. Additionally, knowledge transfer between developers is easier when they're sharing code.
As code review results are communicated (through email or a bug tracking system), trends are analyzed to build documentation. Over time the company can build a documented, community-reviewed process for application development. This helps to standardize solutions for common business functions and streamline common development complexities.
Developers don't aim to introduce errors into code. Code reviews provide a second set of eyes to identify points of interest the author may not have caught, and a reviewer may raise questions about intent, edge cases, and error handling that the author never considered.
When posing these questions early in the software development life cycle (SDLC), common implementation bugs and design flaws can be identified and resolved before they propagate. This saves time during QA testing and even after the software has moved into production. Finding issues early also saves the time spent documenting failing tests.
Once the basic code review practice is ingrained in the SDLC for each application, what’s next?
Many companies select an off-the-shelf or open source product to introduce tool-driven automated testing. Implementing an automated tool is an efficient way to identify points of interest in code. However, it is imperative to understand that purchasing a tool does not mature a code review practice. It also isn’t a replacement for manual code review. Without providing the necessary training to configure the tool, interpret results, or document points of interest, adding an automated tool to the SDLC may be meaningless.
It is important to select a tool that supports your technology stack, identifies points of interest in code, and integrates with your other systems. A tool may offer only basic support for some programming languages while providing extensive rule sets for others. Configuring a tool to produce meaningful results, and to record those results in the systems you already use, is highly valuable.
Before landing on a tool that best suits your SDLC, start documenting answers to a few questions during those ad-hoc manual reviews:
Data that the application handles may be of interest to an attacker. If the application has an authentication mechanism, this information may include a session token or data that’s available once a user has authenticated. Generally, information that’s available after login, or that provides authentication, is considered confidential, non-public information.
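One concrete point a reviewer might document here is whether confidential values, such as session tokens, can leak into logs or error messages. The sketch below is a minimal, hypothetical example of masking sensitive fields before logging; the field names are illustrative assumptions, not a standard list.

```python
# Hypothetical sketch: mask confidential fields before a record is logged.
# The set of field names is an illustrative assumption for this example.
SENSITIVE_FIELDS = {"session_token", "password", "api_key"}

def mask_sensitive(record: dict) -> dict:
    """Return a copy of a log record with confidential fields masked."""
    return {
        key: "***" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

masked = mask_sensitive({"user": "alice", "session_token": "abc123"})
print(masked)  # {'user': 'alice', 'session_token': '***'}
```

A reviewer asking "where does this token end up?" can turn an informal observation like this into a documented, repeatable check.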
Any input from an outside source that the application uses to render information requires validation. The application may interpret unvalidated input differently than expected, and inaccurate or misrepresented data can render an application useless to clients.
Denial of service attacks are often an attacker's last resort because they're easily observable. However, their effectiveness is undeniable, so it's essential to document and test the areas where the application may become unavailable.
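One availability check a review could document is whether untrusted input is bounded before expensive work begins. The sketch below assumes a hypothetical upload handler and limit; the point is the pattern of refusing oversized input up front rather than letting it exhaust memory or CPU.

```python
# Hypothetical policy value: cap uploads at roughly 1 MB.
MAX_UPLOAD_BYTES = 1_000_000

def accept_upload(payload: bytes) -> bytes:
    """Refuse oversized payloads before any expensive processing starts."""
    if len(payload) > MAX_UPLOAD_BYTES:
        raise ValueError("payload too large")
    return payload

accept_upload(b"small payload")            # accepted
# accept_upload(b"x" * 2_000_000)          # raises ValueError
```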
Preparing answers to these questions accomplishes several tasks that encourage the code review practice to grow organically. These questions amend the original review process to go beyond business logic errors and into malicious intent.
Considering this type of information strengthens the code review process by adding security test cases. Further, as reviewers design these test cases and create them in automated tools, they begin to become tool mentors. Tool mentors can increase automated tool coverage by tweaking configurations and creating new test cases. They may also be able to connect automated tools to existing systems for bug tracking, remediation, and reporting. Statistics produced by this workflow can prompt further questions about where the review effort pays off.
Of course, this also comes at the opportunity cost of a new feature or release. Still, the time spent upfront could save you from an avalanche of development time later in the life cycle. Spending more time upfront can help to:
Finding an issue in QA or production requires additional steps to track down the code that requires remediation. Most of these cases also require documentation replicating the issue to understand it. These fixes are often rolled into a new or subsequent release.
Issues documented during code review can contain direct references to the code, making the results easier to analyze. If the documented issues are analyzed and have fixes in place, the buggy code never reaches the runtime environments.
Emergency production changes are a nightmare for every developer. You know, that nightmare where work calls at an inopportune hour with a person on the other end explaining critical changes that need to be made to the production environment at that very moment. Depending on the issue, a phone call like this could put the developer in a very difficult situation. They may have to track down the issue, document it, and then implement a fix. If code review can alleviate a percentage of these issues, the time savings could be substantial.
Depending on the industry, auditors may inquire about the types of tests you're performing on software. Undoubtedly, if you write software, then you have customers who want to use your code with confidence that it's tested and secure. Building a mature code review practice gives you a well-documented, community-backed, custom-built process to reference and continue building on.
Fixing issues early in the SDLC provides developers with additional time to spend on new features, training, test cases, and documentation. This single security activity can change the face of your development process for the better.