A defender can find and fix a thousand vulnerabilities in their software, but if they miss even one, the attacker has already won.
We're not going to lie: defensive security is tough to get right. OWASP alone lists almost 200 classes of vulnerabilities, and between your standard XSS exploit and more obscure attacks like NoSQL injection, there are more ways for an attacker to exploit your application than any single team of engineers can be expected to protect against - at least, if they want time left over to actually build a product. That's why we're firm believers in integrating vulnerability scanning into your DevOps process: if we can detect almost all of your vulnerabilities before your code even hits production, your engineers can spend more of their time solving problems instead of securing against them.
That's the goal, at least, but making sure no vulnerabilities slip by in the first place is a task of its own. Most engineers agree that writing correct code is much easier with a solid test suite, and vulnerability scanning is no different - except that some vulnerabilities only manifest on, say, a misconfigured Tomcat server running on a Windows box. Unit tests are great, but unless you actually stress the application in a production-like setting, you risk letting some particularly nasty bugs slide through - in our case, false negatives for vulnerabilities with severe consequences.
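To make that concrete, here's a minimal sketch of the kind of detection logic a scanner's unit tests cover well: inject a marker payload and check whether the response reflects it unescaped. The helper name and payload below are ours for illustration - this isn't our actual scanner code - and note what the unit test *can't* tell you: whether the check still fires against a real, quirkily configured server, which is exactly where integration tests come in.

```ruby
require "cgi"

# Illustrative marker payload for a reflected-XSS check.
XSS_PAYLOAD = "<script>alert('tinfoil')</script>"

# Hypothetical helper: does the response body reflect our payload unescaped?
# A real scanner would obtain `body` from a live HTTP request; here we run
# the detection logic against canned response bodies.
def reflects_unescaped?(body, payload = XSS_PAYLOAD)
  body.include?(payload)
end

# A vulnerable page echoes the input verbatim...
vulnerable = "<p>You searched for: #{XSS_PAYLOAD}</p>"
# ...while a safe page HTML-escapes it first.
safe = "<p>You searched for: #{CGI.escapeHTML(XSS_PAYLOAD)}</p>"

puts reflects_unescaped?(vulnerable)  # => true  (flag it)
puts reflects_unescaped?(safe)        # => false (no finding)
```

Checks like this are cheap and fast in isolation; the expensive part is verifying them end-to-end against every server stack your customers actually run.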
In the course of building Tinfoil Security, we've written integration tests which pit our scanner against everything from Sinatra servers to your standard LAMP setup, with a few Windows stacks thrown in for good measure. We soon found that our dependence on so many virtual machines meant that running our tests entirely locally was out of the question - our development machines just weren't powerful enough to run through the suite in any reasonable amount of time while also letting our engineers stay productive as the tests ran. We evaluated a few solutions, but anything viable required more resources than our small team was willing to throw at the problem. In the end, we couldn't reasonably justify including some of our more expensive integration tests as part of our development cycle.