From a regulatory perspective, we’re in a period where applications of all forms need continuous monitoring. For some, “continuous monitoring” simply means that a network security solution is in place. But history provides many examples of how a lack of awareness of what’s in an application and what’s running on a system led to a security breach.
Periodic scanning with network scanning tools as a proxy for continuous monitoring doesn’t work well either. The problem is one of vantage point: the view from an external scanner and the view an attacker gets from an adjacent internal system can be quite different. Continuous monitoring demands an understanding of the risks present in the system and ongoing validation that those risks haven’t increased.
Put another way:
- If an application owner rolled back an application to a previous, vulnerable version, how long would it take you to recognize the increased risk?
- If a new security disclosure was issued an hour ago, how long would it take you to determine precisely which running containers are impacted? (A sketch of this kind of check follows the list.)
- If an application that only runs at specific points in the year starts up, how long would it take you to determine whether it was vulnerable?
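To make the disclosure question concrete, here is a minimal sketch (an illustration, not our product) of checking which running containers use an image reference known to be affected. The image names, the VULNERABLE_IMAGES list, and the reliance on the Docker CLI are assumptions for illustration only; a real vulnerability feed would match on image digests and installed package versions rather than mutable tags.

```python
import subprocess

# Hypothetical set of image references affected by a new disclosure;
# in practice this would come from your vulnerability feed, not be hard-coded.
VULNERABLE_IMAGES = {
    "example.com/payments:1.4.2",
    "example.com/frontend:2.0.0",
}

def running_images():
    """Yield (container name, image reference) for every running container."""
    out = subprocess.run(
        ["docker", "ps", "--format", "{{.Names}}\t{{.Image}}"],
        capture_output=True, text=True, check=True,
    )
    for line in out.stdout.splitlines():
        name, image = line.split("\t", 1)
        yield name, image

def main():
    impacted = [
        (name, image)
        for name, image in running_images()
        if image in VULNERABLE_IMAGES
    ]
    if impacted:
        for name, image in impacted:
            print(f"ALERT: container {name} is running vulnerable image {image}")
    else:
        print("No running containers match the affected image list.")

if __name__ == "__main__":
    main()
```

Even this toy version shows why tag-level matching isn’t enough: a rolled-back or retagged image can carry old, vulnerable components while looking current, which is exactly the awareness gap continuous monitoring is meant to close.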
These are important considerations for anyone placing containerized applications into production. If your container security solution can’t automatically scan everything regardless of source, continuously monitor for new security threats, and alert on continued usage of vulnerable images, then you probably need a new solution. As it happens, we have such a solution in our portfolio.
I encourage you to find Synopsys at RSA and ask for me—Tim Mackey—by name. I’ll be happy to help you build a more secure data center using container technologies.