There are two primary limiting factors that can make a secure code review tricky: humans and automation. For a human reviewer, the limiting factor is throughput: at best, an individual can review several hundred lines of code in a work day. Since modern software often comprises tens or even hundreds of thousands of lines of code, it is highly unlikely that a human can manually review every line; approaching the process with manual methods alone would require nearly as many reviewers as developers.
Automated tools can review code far faster than humans. The trade-off is that automation is much more prone both to missing security flaws (false negatives) and to flagging issues that are not actually problems (false positives). In addition, automated tools often do not understand the context in which code is written.
To overcome these limitations, a review should combine manual and automated efforts. Automated tools can quickly scan the code base to identify areas of interest and potential vulnerabilities, and triaging those automated findings guides the manual investigation. Manual review is also more effective for certain classes of flaws, such as those in authentication and cryptography logic.
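To make the triage workflow concrete, the following is a minimal sketch of the kind of pattern matching a simple automated scanner performs. The pattern list and the example snippet are illustrative assumptions, not a real tool's rule set; the point is that a textual match carries no context, so every hit still needs human triage.

```python
# Hypothetical patterns: calls that often, but not always, indicate a
# security concern. A match says nothing about context, which is why
# each finding must still be triaged by a human reviewer.
RISKY_PATTERNS = {
    "eval(": "possible code injection",
    "strcpy(": "possible buffer overflow",
    "md5(": "weak hash used for a security purpose",
}

def scan(source: str):
    """Flag lines containing risky calls.

    Returns a list of (line_number, line_text, reason) tuples.
    """
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RISKY_PATTERNS.items():
            if pattern in line:
                findings.append((line_no, line.strip(), reason))
    return findings

# Illustrative input: one true positive and one false positive.
example = """\
password_hash = md5(password)   # security-sensitive: true positive
cache_key = md5(file_bytes)     # not security-sensitive: false positive
"""

for line_no, line, reason in scan(example):
    print(f"line {line_no}: {reason}: {line}")
```

Both lines are flagged because the scanner only sees the text `md5(`, not how the result is used; deciding that the second hit is harmless is exactly the manual triage step described above.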