Software Integrity Blog

Author Archive

Mike Lyman


Mike Lyman is a senior security consultant at Synopsys. He works with customers on secure code reviews and vulnerability assessments and trains developers in secure development. Prior to Synopsys, Mike spent 12 years with SAIC, where he helped create their software assurance offering for DoD customers at Redstone Arsenal, AL, pioneering most of the processes and procedures used by the practice. He learned IT security in the trenches with Microsoft's network security team through the heady days of SQL Slammer, Code Red, and Nimda. Before that, he was a software developer supporting US Army project offices at Redstone Arsenal and served on active duty as an officer in the US Army. He has been a CSSLP since 2008 and a CISSP since 2002.


Posts by Mike Lyman:

 

Insecure example code leads to insecure production code

Insecure sample code in tutorials leads to insecure production code. How do you train your developers to spot insecure code examples and fix them before reusing them?

Continue Reading...

Posted in Security Training & Awareness | Comments Off on Insecure example code leads to insecure production code

 

Squash more bugs with this code review checklist

“All software projects are guaranteed to have one artifact in common—source code. Because of this guarantee, it makes sense to center a software assurance activity around code itself.”

Continue Reading...

Posted in Security Training & Awareness, Static Analysis (SAST) | Comments Off on Squash more bugs with this code review checklist

 

How to avoid the blind spot in static analysis tools caused by frameworks

If your static analysis tool can’t see into your frameworks, it can’t report security issues. Here’s how to ensure static analysis coverage of blind spots.

Continue Reading...

Posted in Static Analysis (SAST), Web Application Security | Comments Off on How to avoid the blind spot in static analysis tools caused by frameworks

 

When and how to support static analysis tools with manual code review

Analyzing source code for security bugs gets a lot of attention these days because it is so easy to turn the job over to a static analysis tool that looks for the bugs for you. The tools are reasonably fast, efficient, and pretty good at what they do. Most can be automated like a lot of other testing, which makes them very easy to use. But if you are not careful, you may still miss issues lurking in your code, or waste developers' time on false positives the tool was not sure about. As good as these tools have become, they still need to be backstopped by manual code review performed by security experts who can overcome the tools' limitations. Let's examine how manual code review complements static analysis tools to achieve code review best practices.

Overcome the tool trade-offs

Like all software, static analysis tools are a collection of trade-offs. If they go for speed, the depth of their analysis suffers and you get more false positives. If they try to reduce false positives, they run slower. If tools are inexpensive, chances are there is less expertise and less original research behind them; if they have more expertise and more research behind them, you help foot the bill by paying more. One tool may be very good at catching some classes of bugs, another may be good at catching other classes; none are likely to be good at catching them all. These trade-offs affect the tool results.

Why manual code review is key

Manual follow-up to the tools can help overcome these trade-offs. A reviewer who knows the tools knows which rules produce reliable results and which produce weak ones, and weeds out the weak findings before they ever waste a developer's time, shielding developers from the noise that all static analysis tools create. Manual review after a tool run can also identify where the tools can be tweaked to give more reliable results. That may be as advanced as writing custom rules or using existing tool features to help the tools better understand what various parts of the code are doing, or as simple as configuration changes that help the tools run better.

Context and environment

All tools suffer from a lack of understanding of the environment surrounding the software they are analyzing. They also lack any real understanding of the context of what they are looking at.
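To make that last point concrete, here is a minimal Java sketch of the kind of finding where context decides everything. The class and method names are hypothetical, invented for illustration rather than taken from the post. A tool that keys on string concatenation in SQL will likely flag both query builders, but only one of them is reachable from user input, and only a reviewer with that context can triage the pair correctly.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ReportDao {

    // A scanner that keys on string concatenation in SQL will often flag
    // both methods below. Only one is a real injection risk; deciding which
    // takes the context a human reviewer brings.

    // Likely a false positive: the column name comes from a fixed whitelist
    // inside the application, never from user input.
    public ResultSet totalsByColumn(Connection conn, ReportColumn column) throws SQLException {
        String sql = "SELECT " + column.sqlName() + ", COUNT(*) FROM orders GROUP BY " + column.sqlName();
        return conn.createStatement().executeQuery(sql);
    }

    // A real finding: customerName arrives from the request and is concatenated
    // straight into the query.
    public ResultSet ordersForCustomer(Connection conn, String customerName) throws SQLException {
        String sql = "SELECT * FROM orders WHERE customer_name = '" + customerName + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // The fix the reviewer would push for: a parameterized query.
    public ResultSet ordersForCustomerSafe(Connection conn, String customerName) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT * FROM orders WHERE customer_name = ?");
        ps.setString(1, customerName);
        return ps.executeQuery();
    }

    // Hypothetical enum standing in for an application-controlled whitelist.
    public enum ReportColumn {
        REGION("region"), PRODUCT("product");

        private final String sqlName;

        ReportColumn(String sqlName) { this.sqlName = sqlName; }

        String sqlName() { return sqlName; }
    }
}

In a triage pass, the reviewer would mark the whitelist-driven query as a candidate false positive (or recommend parameterizing it anyway as a hardening measure) and confirm the second finding as a true injection path, with the prepared-statement version shown last as the fix.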

Continue Reading...

Posted in Static Analysis (SAST) | Comments Off on When and how to support static analysis tools with manual code review

 

Benefits of secure code review: Developer education

Secure code review benefits you not just by making the code immediately under review more secure but also by teaching developers to write more secure code.

Continue Reading...

Posted in Security Training & Awareness, Static Analysis (SAST) | Comments Off on Benefits of secure code review: Developer education

 

Benefits of code scanning for code review

“All software projects are guaranteed to have one artifact in common – source code. Because of this guarantee, it makes sense to center a software assurance activity around code itself.”

Continue Reading...

Posted in Static Analysis (SAST) | Comments Off on Benefits of code scanning for code review