
Increasing static visibility

People sometimes talk loosely about an important difference between static analysis and dynamic analysis: static analyzers, they say, achieve 100% coverage, while dynamic tools struggle to reach even double-digit statement coverage of an application under test.

Dan Cornell wrote a blog post on static analysis coverage. He observed that while the static tool he used completed its scan, it didn't find things it should have. He implicitly framed this as a coverage problem.

I’ve often put things as follows:

Your tool can’t find what it can’t see and it can’t see what it doesn’t parse.

I said this often years back because, at the time, it was quite difficult to integrate tools effectively into a build so that they would in fact 'see' everything and successfully parse it (*1). We referred to how well we, as operators, had integrated the tool as providing visibility into the code. These days, market-leading tools don't have nearly the problem they used to with finding the code they need to parse… or parsing that code successfully (*2).

These days, the frameworks developers use for persistence, object remoting, application navigation, and view display challenge visibility most. Tools see the code for your controller and parse it successfully but they don’t necessarily understand what it represents. This happens in almost every application I scan, even the painfully simple ones.

We couldn't have our Consultants going out into the world armed with tools bearing only spotty visibility into applications, so I wrote a utility, identifyEntryPoints, to find entry points on its own. Nor do we want Consultants guessing at what their static analysis tool missed, or popping the hood open to discern what it found, instead of doing what they're paid to do: assess the application. So I wrote another utility, entryPointCompare, to compare what the tool found vs. what I was able to find programmatically. See actual output below (I've replaced the static tool's name with 'TOOL XXX'):

Sanguine:demo jsteven$ /Users/jsteven/code/demo/working/engine/util/entrypointCompare.py 
2011-02-06 09:07:08,158 INFO     bois_1.0        bin.execPythonRobot STARTING: /Users/jsteven/code/demo/working/engine/util/entrypointCompare.py
Factory 31, [TOOL XXX] 0
 TOOL XXX missed set(['Index', 'csrchat', 'accountsList', 'Transfer', 'editProfileFormBean', 'newAccountFormBean', 'fileList', 'Auth', 'newClientFormBean', 'qaFormBean', 'UserProfile', 'SetupProfileAction', 'uploadFile', 'VerifyAnswerAction', 'BeanEditProfileAction', 'existingClientFormBean', 'GetDocuments', 'ChatPoll', 'announcements', 'modifyDocument', 'BeanCreateAccountAction', 'ChangePasswordAction', 'VerifyClientAction', 'ChatSend', 'Help', 'fileUploader', 'BackOfficeAdmin', 'passwordFormBean', 'BeanCreateClientAction', 'action', 'Accounting'])

Here, I ran the test on our own internal "Bank of Insecurities", a simple Struts1 application. The utility's output lists all the actions and form beans that, each in its own way, account for web-based input. Sure looks like some pretty important stuff got missed in there, huh?
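
For the curious, the comparison itself needn't be elaborate. What follows is not the actual utility, just a minimal sketch of the same idea, assuming a Struts1 struts-config.xml and a plain-text export of the entry points the tool claims it modeled (both file paths are placeholders):

#!/usr/bin/env python
# Minimal sketch of an entryPointCompare-style check -- not the real utility.
# Assumes a Struts1 struts-config.xml and a text file listing the entry points
# the static tool reported, one per line; both paths below are hypothetical.
import sys
import xml.etree.ElementTree as ET

def struts_entry_points(config_path):
    """Collect action paths and form-bean names from struts-config.xml."""
    root = ET.parse(config_path).getroot()
    actions = {a.get('path', '').lstrip('/') for a in root.iter('action') if a.get('path')}
    form_beans = {f.get('name') for f in root.iter('form-bean') if f.get('name')}
    return actions | form_beans

def tool_entry_points(report_path):
    """Read the entry points the static tool says it modeled."""
    with open(report_path) as fh:
        return {line.strip() for line in fh if line.strip()}

if __name__ == '__main__':
    found = struts_entry_points(sys.argv[1])   # e.g. WEB-INF/struts-config.xml
    reported = tool_entry_points(sys.argv[2])  # e.g. tool_entry_points.txt
    print('Factory %d, [TOOL XXX] %d' % (len(found), len(reported)))
    missed = found - reported
    if missed:
        print(' TOOL XXX missed %r' % missed)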

The next step was to write a third program, registerEntryPoints, which would write rules for the missing entry/exit points as appropriate so that the beefy commercial static analysis tool could do what it does best. These utilities represent just some of the internal workings of our ESP platform.
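
I won't publish any vendor's actual rule format here (see *3), but the shape of the registration job is simple: for each missed entry point, emit a rule stub your tool can import. A rough sketch, with the XML shape below entirely made up as a stand-in:

# Sketch only: emits generic rule stubs for missed entry points.
# The <rules>/<source> shape is a made-up stand-in, not any vendor's real format.
from xml.sax.saxutils import quoteattr

def write_rule_stubs(missed, out_path='custom_entry_rules.xml'):
    with open(out_path, 'w') as out:
        out.write('<rules>\n')
        for name in sorted(missed):
            # Mark each missed action/form bean as an untrusted (tainted) web source.
            out.write('  <source id=%s taint="WEB"/>\n' % quoteattr(name))
        out.write('</rules>\n')

write_rule_stubs({'Transfer', 'newAccountFormBean', 'ChatSend'})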

Yes, but my tool finds stuff all the time
Wouldn't a tool find anything without detecting entry points? Well, it would likely find potential vulnerabilities, but only through more local, syntactic analysis. Remember, "you can't find what you can't see…"
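
To make the distinction concrete, a purely local check can flag a dangerous-looking call without knowing anything about where input enters the application. The toy pattern-match below illustrates that mode of analysis (it's an illustration, not how any particular tool works):

import re
import sys

# Toy 'local syntactic' check: flag suspicious-looking calls wherever they appear,
# with no notion of whether attacker-controlled data ever reaches them.
SUSPECT = re.compile(r'Runtime\.getRuntime\(\)\.exec|createStatement\(\)\.execute')

for path in sys.argv[1:]:
    with open(path) as src:
        for lineno, line in enumerate(src, 1):
            if SUSPECT.search(line):
                print('%s:%d: possible injection point (no data-flow evidence)' % (path, lineno))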

I’m sensing a theme in your posts jOHN
OK, so, that's two blog posts (Mr. Cornell's and my own) complaining about what I refer to as visibility. Now what?

Spend time cataloging your chosen tool’s performance
Our experience indicates that as few as 10-13 reasonably chosen applications can serve as a representative sample for as many as 300 applications on the Java platform. When you pilot those 10-13 applications, conduct the following steps to avoid the visibility failures seen above in the portfolio as a whole:

  1. On-board the application using whatever tool interface gives the most feedback about missing artifacts (*3)
  2. Explore scan logs for identified entry points
  3. Manually explore the application's deployment descriptors and critical configuration files (see the sketch after this list)
  4. Document controller logic as framework default or developer extended
  5. Identify key entities within the DAO/persistence framework
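
For step 3, a few lines of script go a long way toward making the notes repeatable. A rough helper along these lines (the web.xml path is illustrative) pulls every servlet and filter mapping the container will route, which is exactly the kind of artifact you want in the pilot record:

# Rough helper for step 3: list servlet and filter mappings from web.xml so the
# pilot notes capture every URL pattern the container routes. Path is illustrative.
import xml.etree.ElementTree as ET

def web_xml_mappings(path='WEB-INF/web.xml'):
    root = ET.parse(path).getroot()
    local = lambda el: el.tag.split('}')[-1]   # tolerate namespaced descriptors
    mappings = []
    for el in root.iter():
        if local(el) in ('servlet-mapping', 'filter-mapping'):
            name = pattern = None
            for child in el:
                if local(child) in ('servlet-name', 'filter-name'):
                    name = (child.text or '').strip()
                elif local(child) == 'url-pattern':
                    pattern = (child.text or '').strip()
            mappings.append((name, pattern))
    return mappings

for name, pattern in web_xml_mappings():
    print('%-30s %s' % (name, pattern))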

When you've done that for 10-13 apps, our representative sample, stop and take stock of what you've collected. You'll have created a very good list of items for which you'll want to create custom rules. Different tools call the kind of rules you'll create different things, but suffice it to say you'll be writing rules to tell the tool where the application is:

  • Entry: Taking input from untrusted web sources
  • Entry: Taking input from untrusted partner applications
  • Exit: Placing data in an untrusted view (browser, service response, etc.)
  • Exit: Conducting CRUD operations on entity data

The above list is by no means exhaustive. Indeed, you'll want to look at data originating from the persistence tier and headed to the web (not listed above), but we often consider this a second phase of customization. You'll also want to begin considering data entry/exit from 2nd- and 3rd-party components within the applications being tested. Again, another good subsequent customization phase.
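
One way to keep that catalog honest as it grows is to tag each collected item with the entry/exit category above before any rules get written. A trivial structure is enough; the example below uses names from the Bank of Insecurities output plus a few invented ones for the partner and persistence cases:

# Illustrative catalog structure; PartnerQuoteService and AccountDAO.save are invented examples.
from collections import Counter

ENTRY_WEB, ENTRY_PARTNER = 'entry.web', 'entry.partner'
EXIT_VIEW, EXIT_CRUD = 'exit.view', 'exit.crud'

catalog = {
    'bank-of-insecurities': [
        ('Transfer',            ENTRY_WEB,     'Struts1 action'),
        ('newAccountFormBean',  ENTRY_WEB,     'Struts1 form bean'),
        ('PartnerQuoteService', ENTRY_PARTNER, 'remoted partner interface'),
        ('accountsList',        EXIT_VIEW,     'view rendering account data'),
        ('AccountDAO.save',     EXIT_CRUD,     'persistence-tier CRUD call'),
    ],
}

# Rules get written per category, so a quick tally shows where customization effort will go.
print(Counter(category for items in catalog.values() for _, category, _ in items))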

Gosh, this sounds expensive
Yes; well. Compared to simply running a static analysis tool using its IDE-based GUI, triaging results, and calling it quits, this is darned expensive. However, it dramatically raises your visibility into an application's vulnerabilities (you should also expect false-positive reduction).

Compare this to the difference in quality and depth between a manual, tool-assisted penetration test and an automated vulnerability scan. An engineer running AppScan costs much less than an expert penetration tester, and organizations have come to not only expect different results but to see these as two very different services. I predict that we'll see a similar distinction between two services (automated and more expert-driven) in the static space within a few years.

Leveraging Manual Efforts Across Assurance Activities
If your organization doesn’t conduct manual penetration testing and is unwilling to pay your engineers to properly on-board applications into its static analysis tool, then neither static nor dynamic analysis will benefit from high visibility into applications under test. The cost will look “high” and “extra” as well.

Conducting the kind of analysis described by this entry (targeting entry and exit points) can inform both static and dynamic analysis alike. Conduct this exercise for applications built with representative technology stacks. On the static front, write custom rules based on your work. On the dynamic side, furnish this work to your dynamic testers (even if they are external vendors) so that their exploration benefits from it.
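
As a concrete hand-off, the same entry-point inventory can be flattened into a seed list for the dynamic testers (or their scanner) to crawl from. A minimal sketch, assuming the common *.do mapping for Struts1 actions; the base URL and file name are placeholders:

# Turn the Struts action inventory into crawl seeds for dynamic testing.
# Base URL, output file, and the *.do mapping are assumptions for illustration.
def write_seed_urls(actions, base='https://qa.example.com/boi', out_path='seed_urls.txt'):
    with open(out_path, 'w') as out:
        for action in sorted(actions):
            out.write('%s/%s.do\n' % (base, action))

write_seed_urls({'Transfer', 'ChatSend', 'GetDocuments', 'uploadFile'})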

(*1) – Coverity’s Prevent was always a notable exception and able to integrate with even challenging build environments without much pain.
(*2) – Limitations still exist with complex (distributed) build systems, code-generation schemes, and even the more mundane JSP compilation process.
(*3) – Ask if you need help here, last time I published tool details, vendors got cranky.