Posted by Arvinder Saini on January 12, 2017
A well-defined software development life cycle (SDLC) is essential to developing reliable software with fewer bugs. At Synopsys, we often make the claim that it's important to fix bugs early in the SDLC to save time and money. But how much of a cost difference does it really make to fix bugs during various SDLC phases? This post examines that question by highlighting the costs incurred when fixing bugs at different stages of the software life cycle.
When it comes to bugs or security issues identified within the SDLC, timing matters. During development, it is more cost effective and efficient to fix bugs in the earlier stages rather than the later ones; the cost of a fix increases exponentially as the software moves forward through the SDLC.
To provide an example, the Systems Sciences Institute at IBM reported that the cost to fix a bug found during implementation was around six times higher than one identified during design. Furthermore, according to IBM, bugs found during the testing phase could be 15 times more costly than those found during design.
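To make these multipliers concrete, here is a minimal cost model in Python. The base cost of $100 per design-stage fix and the bug counts are made-up inputs for illustration; only the 1x/6x/15x ratios come from the IBM figures cited above.

```python
# Illustrative model of the IBM-reported cost multipliers per SDLC phase.
# BASE_COST and the bug counts below are hypothetical inputs.
MULTIPLIERS = {"design": 1, "implementation": 6, "testing": 15}
BASE_COST = 100  # assumed cost of fixing one bug found during design

def total_fix_cost(bugs_found):
    """Sum the cost of fixing bugs, weighted by the phase they were found in."""
    return sum(BASE_COST * MULTIPLIERS[phase] * count
               for phase, count in bugs_found.items())

# Same 10 bugs, caught at different points in the life cycle.
caught_early = {"design": 8, "implementation": 2, "testing": 0}
caught_late = {"design": 0, "implementation": 2, "testing": 8}

print(total_fix_cost(caught_early))  # 8*100 + 2*600          = 2200 -> 2000? see below
```

Running this, the early scenario costs 8 × $100 + 2 × $600 = $2,000, while the late scenario costs 2 × $600 + 8 × $1,500 = $13,200: the same bugs become more than six times as expensive to fix simply by being found later.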
This analysis holds true in most cases, as it usually becomes harder to rectify an issue once the product approaches the end of its development life cycle. Bugs introduced during the design phase, if not caught early, cost more to fix because they have a broader impact and are more complex to resolve. The changes made for a bug fix can also affect the application's functionality, which in turn may require developers to make compensating changes elsewhere in the codebase, adding further cost, time, and effort.
Consider a banking application in which a security flaw is identified after the application's release. Because the product might be used by thousands of people, the bug can cost the bank a great deal of money to remediate. The effort, time, and money the bank must now spend resolving the issue is exponentially more than if the flaw had been fixed during the earlier stages of the product's development. Additionally, the complexity of deploying changes to a live production environment further increases the overall cost of late-stage maintenance.
A real-world example of catching a bug only after the application is in production is the Samsung Galaxy Note 7 fiasco. There was speculation that one of the problems the Note 7 phones faced involved their battery management system, which monitors electric current and stops the charging process when the battery is fully charged. A fault in this system could cause the battery to overcharge, become unstable, and eventually explode.
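The charge-termination check described above can be sketched in a few lines. This is purely illustrative: the 4.2 V cutoff and the function names are assumptions, and a real battery management system tracks temperature, current, and per-cell voltage with dedicated hardware.

```python
# Illustrative sketch of a charge-termination check.
# FULL_CHARGE_VOLTAGE is an assumed lithium-ion cutoff, not a real BMS spec.
FULL_CHARGE_VOLTAGE = 4.2  # volts

def should_keep_charging(cell_voltage):
    """Correct behavior: stop charging once the cell reaches full charge."""
    return cell_voltage < FULL_CHARGE_VOLTAGE

def buggy_should_keep_charging(cell_voltage):
    """A faulty version with no cutoff check: the cell keeps charging."""
    return True

print(should_keep_charging(4.25))        # charging stops above the cutoff
print(buggy_should_keep_charging(4.25))  # charging continues: overcharge
```

A single missing guard condition like this is trivial to catch in a design review or unit test, yet catastrophic once millions of devices are in the field.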
This fault cost Samsung nearly $17 billion. Had it been caught at an early stage, a great deal of money, many headaches, and Samsung's reputation would have been saved.
Improving security throughout the SDLC helps to create more reliable software, and conducting security assessments during development helps address issues related to software security. In a traditional SDLC, security testing is done at the end, after the development team's required functionality is in place. However, security testing can be merged into the SDLC in a very fluid manner. Conducting security activities and considering risk factors throughout development prevents bugs from entering the later phases.
To ensure that bugs are fixed at an earlier stage within the SDLC, take advantage of the following security testing practices:
Bugs are unavoidable. These practices allow you to tie security into your software development process, ensuring that software issues are reduced and resolved along your software development journey.