Software Integrity

 

Abuse cases: How to think like a hacker

Use cases have become common practice in agile software development as a means to protect developers from delivering code that falls short of the intended feature request. Product managers draft use cases to ensure the software meets their business objectives, and developers build the application to those specs. In a perfect world, this prevents unnecessary and redundant work.

The security world hasn’t evolved in the same ways. There are no protections in place to help developers anticipate what a malicious user might do with a feature. There should be.

If use cases create a “path of least resistance for the users to get what they want,” then we need a view of your application that puts the most resistance between the bad guys and the data/access they want, while minimizing the impact on legitimate users.

What we need, then, is the opposite of use cases: abuse cases. We need to think like an attacker at the unit level, the system level, and the infrastructure level; to prepare our code bases for misuse that compromises sensitive information; and to prevent embarrassing business failures.

Sure, the bulk of this burden falls to your security and product management teams. It’s impossible to define thorough abuse cases entirely within development. But there are a few things developers can do now to get the process started. Let’s break these down.

Abuse cases at the unit test view

Most developers never get past writing unit tests for use cases. For each function or method, a test or two is written to ensure that the function does what it says it’s going to do.

To switch to “abuse case” thinking, ask yourself: How can I interrogate the system for useful information by feeding it unexpected input: the wrong type or size? Null? How might I be able to harm the system by calling a function repeatedly?

Include at least one abuse case for every positive test case you have. If the security benefits of coding defensively like this aren’t immediately apparent, look at it this way: thinking small will help you think large. If you practice thinking about ways to break your functions, you’ll have an easier time with the following sections.
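As a sketch of what this looks like in practice, here’s a hypothetical `truncateName()` helper (the function, its rules, and its limits are ours, purely for illustration) with one positive case followed by abuse cases: oversized input, an embedded null byte, and a null argument.

```php
// Hypothetical helper under test (not from any real codebase):
// sanitizes a display name by stripping control characters and
// capping the length at 64 bytes.
function truncateName($name) {
    $clean = preg_replace('/[\x00-\x1F\x7F]/', '', (string) $name);
    return strlen($clean) > 64 ? substr($clean, 0, 64) : $clean;
}

// Positive case: normal input passes through untouched.
assert(truncateName('Ada') === 'Ada');

// Abuse cases: oversized input, embedded null byte, wrong type.
assert(strlen(truncateName(str_repeat('A', 10000))) === 64);
assert(strpos(truncateName("Ada\0Lovelace"), "\0") === false);
assert(truncateName(null) === '');
```

The abuse cases outnumber the happy path three to one, which is roughly the posture you want: the happy path is one input; the hostile inputs are everything else.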

Unit tests are great for solidifying individual components of your application, but security errors often happen at the intersection of two components. For that, we’ll need something a little more integrated.

Abuse cases at the system view

The system view is how most of the bad guys will access an application: interfacing with it as a user, or as a scripted facsimile of one.

These are a bit trickier, but with enough thought, you can avoid a number of scary scenarios and build protections into your automated testing suite. Selenium has become the de facto standard for testing web applications, and with it you can test for a number of major flaws, including:

  1. SQL Injections
  2. Cross-Site Scripting Attacks
  3. Session Hijacking
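One simple building block for the first item: after a Selenium-driven test submits a payload like `' OR '1'='1` into a form, scan the returned page for database error leakage. The helper below is a sketch of ours (the function name and the signature list are illustrative, not exhaustive, and not from any standard API).

```php
// Hypothetical check used after submitting an injection payload:
// does the response body leak database error text?
function leaksSqlError($html) {
    $signatures = [
        'You have an error in your SQL syntax', // MySQL
        'unterminated quoted string',           // PostgreSQL
        'SQLSTATE[',                            // PDO exceptions
    ];
    foreach ($signatures as $sig) {
        if (stripos($html, $sig) !== false) {
            return true;
        }
    }
    return false;
}

assert(leaksSqlError('<h1>Search results</h1>') === false);
assert(leaksSqlError('Warning: SQLSTATE[42000]: Syntax error') === true);
```

A clean page doesn’t prove you’re safe, but a leaked error message proves you have a problem, so this makes a cheap, automatable abuse case.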

Next, look at your authentication:

  1. Can you brute force the username or password field in any way?
  2. Is there a time delta between a request with an invalid username and a request with an invalid password?
  3. Are your database IDs guessable or sequential?
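The second question, the timing delta, has a standard fix: do the same amount of work whether or not the username exists. Here’s a minimal sketch, assuming a `$users` map stands in for your real user store. `password_verify()` compares in constant time, and verifying a dummy hash for unknown users keeps invalid-username and invalid-password requests looking alike.

```php
// Seed a fake user store. $dummyHash exists only so that unknown
// usernames still pay the cost of a hash verification.
$users = ['alice' => password_hash('correct horse', PASSWORD_DEFAULT)];
$dummyHash = password_hash('dummy', PASSWORD_DEFAULT);

function checkLogin(array $users, $dummyHash, $username, $password) {
    // Always verify against *some* hash so timing doesn't reveal
    // whether the username exists.
    $hash = isset($users[$username]) ? $users[$username] : $dummyHash;
    $valid = password_verify($password, $hash);
    // Only succeed when the user exists AND the password matches.
    return $valid && isset($users[$username]);
}

assert(checkLogin($users, $dummyHash, 'alice', 'correct horse') === true);
assert(checkLogin($users, $dummyHash, 'alice', 'wrong') === false);
assert(checkLogin($users, $dummyHash, 'mallory', 'dummy') === false);
```

For the third question, unguessable IDs, expose something like `bin2hex(random_bytes(16))` to the outside world instead of your auto-increment primary keys.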

Your system is your code, but your system is not ONLY your code. Remember human factors too:

  1. What type of attacks would two-factor authentication prevent?
  2. Can a poorly-trained support technician give harmful access to a user’s account?
  3. Can an unexpected user action wreak havoc on your system?

The “at scale” or stress-test level

Another method of attacking your application happens only at scale: the dreaded denial-of-service attack. Hackers are finding increasingly creative ways to bring your system down by maxing out resources, commonly CPU, bandwidth, or RAM (whichever breaks first). When multiple attackers work together, it’s called a Distributed Denial of Service, or DDoS, attack. This happened as recently as October 21, 2016, when hackers used IoT cameras to take out a centralized target, the Dyn DNS service.

A recent DDoS attack leveraged the sizable daily traffic to China’s Baidu search engine. Hackers were able to insert malicious code into the search results page, causing anybody using the search engine to unwittingly participate in a DDoS against GitHub.

You also might remember an early episode of Mr. Robot, where they took this one step further. After the DDoS attack caused the servers to reboot, a rootkit was installed that gave their fictional “fsociety” group control over the Evil Corp servers.

Organizations try to combat high load with auto-scaling, which opens a secondary vector of attack: bankrupting a company with infrastructure costs. And attackers can target hard drive space, too.

These types of exploits can only happen at scale, and the abuse cases are obvious:

  1. Can you, through the user-level interface, max out CPU, bandwidth, or RAM?
  2. Can you fill up the system’s hard drive?
  3. Do you have an edge caching layer? If so, does it provide built-in DDoS protection?
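One in-application defense against the first two abuse cases is rate limiting expensive endpoints. Below is a minimal fixed-window limiter of our own devising (in production you’d back this with Redis or push it to the edge; the in-memory array is only for illustration).

```php
// Minimal fixed-window rate limiter: at most $limit requests per
// client within a sliding window of $windowSeconds.
class RateLimiter {
    private $limit;
    private $window;
    private $hits = [];

    public function __construct($limit, $windowSeconds) {
        $this->limit = $limit;
        $this->window = $windowSeconds;
    }

    public function allow($clientId, $now = null) {
        $now = $now ?? time();
        $windowStart = $now - $this->window;
        // Forget timestamps that have aged out of the window.
        $this->hits[$clientId] = array_values(array_filter(
            $this->hits[$clientId] ?? [],
            function ($t) use ($windowStart) { return $t > $windowStart; }
        ));
        if (count($this->hits[$clientId]) >= $this->limit) {
            return false; // over budget: reject before doing real work
        }
        $this->hits[$clientId][] = $now;
        return true;
    }
}

$limiter = new RateLimiter(3, 60);
assert($limiter->allow('10.0.0.1', 1000) === true);
assert($limiter->allow('10.0.0.1', 1001) === true);
assert($limiter->allow('10.0.0.1', 1002) === true);
assert($limiter->allow('10.0.0.1', 1003) === false); // 4th hit in window blocked
assert($limiter->allow('10.0.0.2', 1003) === true);  // other clients unaffected
```

A limiter like this won’t stop a distributed attack on its own, but it does cap how much CPU, bandwidth, or disk any single client can burn through your user-level interface.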

There are plenty of load-testing tools that you can use, from the simple siege to the more complex WebLOAD. Remember: with abuse cases, you’re trying to break your app in as many ways as you can think of.

The secret weapon: fuzzing

It’s not a pragmatic use of your time to attempt to document all the ways your system could be hacked and all the methods people might use. Luckily, just like Alan Turing didn’t do all of his cryptography by hand, you can get a computer to do this for you.

Enter fuzzing.

Developed at the University of Wisconsin, fuzzing is a technique where random input, pseudorandom input, or user behavior is applied to a system. This technique is incredibly useful when it comes to helping you detect the unexpected.

Let’s take a look at this example. Here’s a painfully simple PHP function:

function doubleNumber($x) {
    return $x * 2;
}

A simple, deterministic test might run it through a hundred or so variations and just ensure that it’s working.

public function testPoorly() {
    for ($i = -100; $i < 100; $i++) {
        $this->assertEquals($i * 2, doubleNumber($i));
    }
}

This is obviously ridiculous, but becomes more interesting when we add fuzzing to it:

public function testMultiplicationProperties() {
    for ($i = 0; $i < 50000; $i++) {
        $positiveRand = mt_rand(1, 1000000);
        $isDoubleBigger = $positiveRand < doubleNumber($positiveRand);
        $this->assertTrue($isDoubleBigger);
    }

    for ($i = 0; $i < 50000; $i++) {
        $negativeRand = mt_rand(-1000000, -1);
        $isDoubleSmaller = doubleNumber($negativeRand) < $negativeRand;
        $this->assertTrue($isDoubleSmaller);
    }
}

So here we have a much wilder and more random test of our simple function: 50,000 random numbers between 1 and 1 million, then another 50,000 between -1 million and -1. While it’s doubtful that you’ll run into any flaws trying to double a number in PHP, this becomes infinitely more complex the instant you start writing “real functions” that do “real things.”

Note that another emergent property of fuzzing is that you end up testing properties of your code instead of the direct results. In the above example, the code tests:

  1. For positive input, is the output BIGGER than the input?
  2. For negative input, is the output SMALLER than the input?

Now our very simple example has become fairly interesting. Fuzzing has that effect on your testing. You’ll start to notice some unexpected results soon, even at the unit test level. You can even apply fuzzing at the system level with Selenium.
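To see what “unexpected results” can look like even here, push the fuzz range past the article’s 1-to-1-million bounds and feed `doubleNumber` some extreme values. The assertions below are our own additions, not part of the original test suite.

```php
function doubleNumber($x) {
    return $x * 2;
}

// Extreme inputs surface behavior the happy-path ranges never touch.
assert(is_float(doubleNumber(PHP_INT_MAX)));      // integer silently overflows into a float
assert(is_infinite(doubleNumber(PHP_FLOAT_MAX))); // float overflows to INF
assert(is_nan(doubleNumber(NAN)));                // NaN propagates without complaint
```

None of these are crashes, which is exactly the point: the function “works,” yet its output type and magnitude quietly stop matching what callers assume. Those are the seams attackers probe.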

Conclusion

Hacks can happen at every layer, so it’s important to be prepared. Writing misuse or abuse cases is an exercise in “thinking like the enemy,” and a great way to train yourself to have a security-first mindset. If you’re actively thinking of ways that your system may be compromised, then you’re that much further ahead in the great arms race of software security.