How to increase visibility for static and dynamic analysis

Sometimes, people talk loosely about an important difference between static and dynamic analysis. Static analysis tools, they say, achieve 100% coverage. They may complain that dynamic analysis tools struggle to get even double-digit statement coverage of an application under test.

Dan Cornell wrote a blog post on static analysis coverage. He observed that while the static analysis tool he used completed its scan, it didn’t find things it should have. He implicitly referred to this as coverage.

I’ve often put things as follows:

Your tool can’t find what it can’t see, and it can’t see what it doesn’t parse.

Why visibility is important in static and dynamic analysis

We said this often years back because, at the time, it was quite difficult to integrate analysis tools effectively into a build so that they would in fact “see” everything and parse it successfully. We referred to how well we, as operators, had integrated the tool as the visibility it had into the code. These days, market-leading analysis tools don’t have nearly the problem they used to with finding the code they need to parse, or with parsing that code successfully. (But limitations still exist with complex [distributed] build systems, code-generation schemes, and even the more mundane JSP compilation process.)

These days, the frameworks that developers use for persistence, object remoting, application navigation, and view display are the ones that challenge visibility most. Analysis tools see the code for your controller and parse it successfully, but they don’t necessarily understand what it represents. This happens in almost every application I scan, even the painfully simple ones.

Find what your static and dynamic analysis tools are missing

We couldn’t have our consultants going out into the world armed with analysis tools that had only spotty visibility into applications, so I wrote a utility, identifyEntryPoints, to find entry points on its own. We don’t want consultants guessing at what their static analysis tool missed, or popping the hood open to discern what it found, instead of doing what they’re paid to do: assess the application. So I wrote another utility, entryPointCompare, to compare what the analysis tool found against what I was able to find programmatically. See actual output below (I’ve replaced the static analysis tool’s name with ‘TOOL XXX’):

Sanguine:demo jsteven$ /Users/jsteven/code/demo/working/engine/util/entrypointCompare.py
2011-02-06 09:07:08,158 INFO bois_1.0 bin.execPythonRobot STARTING: /Users/jsteven/code/demo/working/engine/util/entrypointCompare.py
Factory 31, [TOOL XXX] 0
TOOL XXX missed set(['Index', 'csrchat', 'accountsList', 'Transfer', 'editProfileFormBean', 'newAccountFormBean', 'fileList', 'Auth', 'newClientFormBean', 'qaFormBean', 'UserProfile', 'SetupProfileAction', 'uploadFile', 'VerifyAnswerAction', 'BeanEditProfileAction', 'existingClientFormBean', 'GetDocuments', 'ChatPoll', 'announcements', 'modifyDocument', 'BeanCreateAccountAction', 'ChangePasswordAction', 'VerifyClientAction', 'ChatSend', 'Help', 'fileUploader', 'BackOfficeAdmin', 'passwordFormBean', 'BeanCreateClientAction', 'action', 'Accounting'])

Here, I ran the test on our own internal “Bank of Insecurities,” a simple Struts application. The utility’s output lists all the actions and form beans that accept web-based input, each in its own way. Sure looks like some pretty important stuff was missed in there, huh?
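The comparison itself is conceptually simple. The following is only a sketch of the entryPointCompare idea; the real internal utility works directly against the analysis tool’s project files, whereas here both sides are assumed to have been exported to flat text files, one entry-point name per line.

#!/usr/bin/env python
# Sketch of the entryPointCompare idea only. The flat-file export format is
# an assumption made for illustration; the real utility reads the analysis
# tool's own project artifacts.

def load_names(path):
    """Read one entry-point name per line (a hypothetical export format)."""
    with open(path) as handle:
        return set(line.strip() for line in handle if line.strip())

def compare(factory_export, tool_export):
    """Report how many entry points each side found, and which ones the tool missed."""
    ours = load_names(factory_export)   # what identifyEntryPoints-style tooling found
    theirs = load_names(tool_export)    # what the analysis tool reports it found
    print("Factory %d, [TOOL XXX] %d" % (len(ours), len(theirs)))
    missed = ours - theirs
    if missed:
        print("TOOL XXX missed %s" % sorted(missed))

if __name__ == "__main__":
    compare("factory-entry-points.txt", "tool-entry-points.txt")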

The next step was to write a third program, registerEntryPoints, which would write rules for the missing entry/exit points as appropriate so that the beefy commercial static analysis tool could do what it does best. These utilities represent just some of the internal workings of our ESP platform.
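Writing the rules themselves is necessarily tool-specific. The sketch below conveys only the shape of the registerEntryPoints idea; the rule format is invented for illustration and is not any vendor’s actual schema.

# Sketch of the registerEntryPoints idea. The <entry-point> "rule" element
# below is invented purely for illustration; every commercial tool has its own
# custom-rule schema, and the real utility emits rules in the tool's native format.
from xml.sax.saxutils import quoteattr

def write_rules(missed_entry_points, out_path, kind="web-input"):
    """Emit a stub rule for each entry point the analysis tool failed to see."""
    with open(out_path, "w") as out:
        out.write("<custom-rules>\n")
        for name in sorted(missed_entry_points):
            out.write("  <entry-point name=%s kind=%s/>\n"
                      % (quoteattr(name), quoteattr(kind)))
        out.write("</custom-rules>\n")

if __name__ == "__main__":
    write_rules(["Transfer", "uploadFile", "ChangePasswordAction"], "custom-rules.xml")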

Yes, but my analysis tool finds stuff all the time.

Would an analysis tool find nothing at all without detecting entry points? It would likely still report potential vulnerabilities, but only through more local, syntactic analysis. Remember, you can’t find what you can’t see.
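To make that distinction concrete, here’s a toy illustration (not how any particular tool actually works): a purely syntactic check can flag a suspicious sink wherever it appears, but only knowledge of the entry points lets the tool say whether untrusted data actually reaches it.

import re

# Toy illustration of syntactic vs. entry-point-aware analysis. Real tools
# model dataflow far more precisely; this only shows why entry points matter.
SINK = re.compile(r'executeQuery\(\s*".*"\s*\+')   # string-built SQL query

def syntactic_findings(units):
    """Flag every concatenated query, no matter where its data comes from."""
    return [name for name, body in units.items() if SINK.search(body)]

def entry_point_aware_findings(units, entry_points):
    """Flag only sinks inside code units known to receive untrusted input."""
    return [name for name, body in units.items()
            if name in entry_points and SINK.search(body)]

if __name__ == "__main__":
    units = {
        "Transfer.execute":  'rs = stmt.executeQuery("SELECT * FROM acct WHERE id=" + form.getId());',
        "Report.nightlyJob": 'rs = stmt.executeQuery("SELECT * FROM audit WHERE day=" + today);',
    }
    print(syntactic_findings(units))                                # both flagged
    print(entry_point_aware_findings(units, {"Transfer.execute"}))  # only the web-facing unit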

Spend time cataloging your tool’s performance

Our experience indicates that as few as 10-13 reasonably chosen applications can serve as a representative sample for as many as 300 applications on the Java platform. When you pilot those 10-13 applications, conduct the following steps to avoid the visibility failures seen above in the portfolio as a whole:

  1. Onboard the application using whatever analysis tool interface gives the most feedback about missing artifacts.
  2. Explore scan logs for identified entry points.
  3. Manually explore the application’s deployment descriptors and critical configuration files (see the sketch after this list).
  4. Document controller logic as framework default or developer extended.
  5. Identify key entities within the DAO/persistence framework.
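For a Struts 1.x application like the Bank of Insecurities above, step 3 can be partially automated. The sketch below assumes the standard struts-config.xml descriptor; other frameworks (and other Struts versions) keep this information in different files and elements.

# Sketch: enumerate the Struts actions and form beans declared in
# struts-config.xml. Assumes the standard Struts 1.x descriptor; adjust the
# path and element names for other frameworks.
import xml.etree.ElementTree as ET

def struts_entry_points(config_path="WEB-INF/struts-config.xml"):
    root = ET.parse(config_path).getroot()
    actions = sorted(a.get("path", "").lstrip("/")
                     for a in root.iter("action"))        # from <action-mappings>
    form_beans = sorted(f.get("name", "")
                        for f in root.iter("form-bean"))  # from <form-beans>
    return actions, form_beans

if __name__ == "__main__":
    actions, forms = struts_entry_points()
    print("actions:", actions)
    print("form beans:", forms)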

When you’ve done that for 10-13 apps, our representative sample, stop and take stock of what you’ve collected. You’ll have created a very good list of items for which you’ll want to create custom rules. Different tools call these rules different things, but suffice it to say you’ll be writing rules to tell the analysis tool where the application is (one way to record what you’ve collected is sketched after the list):

  • Entry: Taking input from untrusted web sources
  • Entry: Taking input from untrusted partner applications
  • Exit: Placing data in an untrusted view (browser, service response, etc.)
  • Exit: Conducting CRUD operations on entity data
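How you record what you’ve collected is up to you; even a flat catalog like the sketch below is enough to drive rule generation later. The field names and classifications here are invented for illustration, not any tool’s schema.

# Illustrative only: one way to record the sources and sinks cataloged during
# the pilot before turning them into tool-specific custom rules. Field names
# and classifications are invented; use whatever structure your team prefers.
CATALOG = [
    {"app": "bank-of-insecurities", "kind": "entry", "origin": "web",
     "artifact": "Transfer"},
    {"app": "bank-of-insecurities", "kind": "entry", "origin": "partner",
     "artifact": "GetDocuments"},
    {"app": "bank-of-insecurities", "kind": "exit", "destination": "view",
     "artifact": "announcements"},
    {"app": "bank-of-insecurities", "kind": "exit", "destination": "persistence",
     "artifact": "newAccountFormBean"},
]

def items(kind):
    """Filter the catalog by 'entry' or 'exit' when generating rules."""
    return [item for item in CATALOG if item["kind"] == kind]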

The above list is by no means exhaustive. Indeed, you’ll want to look at data originating from the persistence tier and headed to the web (not listed above), but we often consider this a second phase of customization. You’ll also want to begin considering data entry/exit through second- and third-party components within the applications being tested. Again, that’s a good subsequent customization phase.

Gosh, this sounds expensive.

Yes, well. Compared to simply running a static analysis tool through its IDE-based GUI, triaging results, and calling it quits, this is darned expensive. However, it dramatically raises your visibility into an application’s vulnerabilities (you should also expect a reduction in false positives).

Compare this to the potential difference in quality and depth between a manual, tool-assisted penetration test and an automated vulnerability scan. An engineer running AppScan costs much less than an expert penetration tester, and organizations have come to not only expect different results but to see these as two very different services. I predict that we’ll see a similar distinction between two services (automated and more expert-driven) emerge in the static space in a few years.

Leverage manual efforts across assurance activities

If your organization doesn’t conduct manual penetration testing and is unwilling to pay your engineers to properly onboard applications into its static analysis tool, then neither static analysis nor dynamic analysis will benefit from high visibility into applications under test. The cost will look “high” and “extra” as well.

Conducting the kind of analysis described in this entry (targeting entry and exit points) can inform static analysis and dynamic analysis alike. Conduct this exercise for applications built with representative technology stacks. On the static front, write custom rules based on your work. On the dynamic side, furnish the work to your dynamic testers (even if they are external vendors) so that their exploration benefits from it.

 
Posted by John Steven

John Steven is a former senior director at Synopsys. His expertise runs the gamut of software security—from threat modeling and architectural risk analysis to static analysis and security testing. He has led the design and development of business-critical production applications for large organizations in a range of industries. After joining Synopsys as a security researcher in 1998, John provided strategic direction and built security groups for many multinational corporations, including Coke, EMC, Qualcomm, Marriott, and FINRA. His keen interest in automation contributed to keeping Synopsys technology at the cutting edge. He has served as co-editor of the Building Security In department of IEEE Security & Privacy magazine and as the leader of the Northern Virginia OWASP chapter. John speaks regularly at conferences and trade shows.

