Using containers? What’s hidden in your container images?

Tim Mackey

Apr 02, 2018 / 5 min read

Do you know what’s in your containers? No, the question has nothing to do with those mystery containers in your fridge. But if you don’t know what’s in those lovely Docker containers which are all the rage, you could be in store for just as rude a surprise as discovering what might be hiding deep in your fridge.

Seriously, this has everything to do with basic data center hygiene: knowing what software is present in your environment. Executables are made to be run, and if one you don't expect to run does run, things can become problematic quickly. This is the lesson of modern malicious software: stuff that shouldn't be present gets exploited. And containerized applications are no different from any other software platform.

How does software get into a container?

The first step in understanding what’s in a container is to understand how something gets into a container in the first place—a fairly simple process.

  1. You start from a “base image”. In a Docker container, this is the origin image someone decided to use as the core foundation for the custom container image. For practical purposes, there is an effectively unlimited supply of possible base images, and not all are created equal. First, the contents of a base image often aren't disclosed. Second, there isn't always a ready way to verify that the author of a given image should be trusted, or that the author is who they claim to be.
  2. Next, you'll want to update that base image with any security updates. There is one thing you can guarantee about a base image: it's unlikely to be current with security patches. That's not a knock on the image authors; with the passage of time, security disclosures are made, and not all images are patched quickly. Knowing this, you'll want to apply your own set of patches.
  3. Application files are then copied into the container image to create an application-specific image. Ideally these application files have been through multiple security scans and are fully vetted. Of course, as with base images, we know that over time, security disclosures will be made against these application dependencies, so we must have a patch strategy defined. More on that later.
  4. While rare, someone could log into a running container and modify its contents. If a running container is modified via an interactive login, it’s important to know that those changes will automatically be discarded when the container is recycled. This means that the interactive user will need to save the running container as a container image.
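The first three steps above can be sketched as a minimal Dockerfile. The base image tag, package manager, and file paths below are illustrative assumptions, not recommendations:

```dockerfile
# Step 1: start from a base image (hypothetical tag; verify the publisher before trusting it)
FROM ubuntu:16.04

# Step 2: apply current security updates yourself; the base image is unlikely to have them
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

# Step 3: copy the (already scanned and vetted) application files into the image
COPY ./app /opt/app

# Define the default process to start; any other executable present could also be run
CMD ["/opt/app/server"]
```

Step 4 corresponds to `docker commit`, which saves a modified running container as a new image; without it, interactive changes are discarded when the container is recycled.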

How does a container image turn into a running container?

Container images are immutable, read-only versions of a container. Until they are executed, they are nothing more than a binary file stored in a container registry. To get them into a running state, a container orchestration system like Kubernetes, OpenShift, or Docker Swarm is used. The container orchestration system first “pulls” the container image from the specified registry to a server node. There are many possible registries—some private, some public, and many run by third parties. Once the image has been pulled to a given server node by the orchestration system, it can then be executed.

While the container image typically defines an application to start by default, in reality any executable in the image can be started, and more than one process may be running. Put in security terms, unless you limit what can run in the image, assume everything can. The best way to limit what can run in the image is to limit what’s present—but that requires an understanding of what is present, of course.
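One common way to limit what's present is a multi-stage build, where build tools never reach the shipped image. A sketch under assumed image names (the toolchain and base tags are placeholders):

```dockerfile
# Build stage: compilers and build tools live here, not in the shipped image
FROM golang:1.10 AS build          # hypothetical toolchain image
COPY ./src /go/src/app
RUN go build -o /app /go/src/app

# Runtime stage: start from a minimal base so there is little besides the application to run
FROM alpine:3.7                    # hypothetical minimal base
COPY --from=build /app /app
CMD ["/app"]
```

The smaller the runtime image, the shorter the list of things an attacker could execute, and the easier it is to know what is present.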

When should I scan container images?

In the realm of DevOps, there is a strong desire to move everything to the left, that is, toward the developer. While it's true the best place to fix an issue in software is as close as possible to where the code is created, the risks of consumption sit to the right of where a container image is created. Put another way, the best a container image author can do is scan the image when it is pushed into a registry. Unfortunately, by the time that same image is run, the security risk could easily have changed.

Core image scanning rules

  1. Images can come from anywhere and be created by anyone
  2. Images were likely scanned when pushed to a registry, but it’s also likely that the scan is now old
  3. If someone modifies an image, they need to save it and then cause it to be run
  4. Images could be cached on nodes, and the cache could become stale
  5. Images can be referenced by tags which can hide the true version of the image
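Rules 4 and 5 can be partially mitigated in the deployment spec itself. A Kubernetes sketch (names and digest are placeholders) that pins the image by digest rather than a mutable tag and forces a fresh pull on every start:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app              # placeholder name
spec:
  containers:
  - name: app
    # A digest identifies the exact image contents; a tag like ":latest" can silently move
    image: registry.example.com/app@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
    # Always pull so a stale node-local cache is never used
    imagePullPolicy: Always
```

Pinning by digest doesn't remove the need to scan, but it does ensure that what you scanned is what you're running.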

In short, any container security scanning solution must be able to scan container images regardless of the source registry. Due to the latency between when an image is pushed to a registry and when it is used, any scan occurring prior to when the image is used should be considered suspect. This also means that regardless of how much you trust the registry and any scanning it might have performed, if the scan wasn’t done when you consumed the image, the scan results may be incorrect.

Continuous monitoring of containerized applications

From a regulatory perspective, we’re in a period where applications of all forms need to have continuous monitoring in place. For some, the term “continuous monitoring” means that a network security solution is in place. But history provides us with many examples of how a lack of awareness of what’s in an application and what’s running on a system led to a security breach.

Periodic scanning using network scanning tools as a proxy for continuous monitoring also doesn’t work well. This is due to an issue of locality of reference and perspective. When you look at a system from the outside and an attacker looks at it from an adjacent internal system, the two views can be quite different. Continuous monitoring demands an understanding of the risks present in the system and ongoing validation that those risks haven’t increased.

Put another way:

  • If an application owner rolled back an application to a previous, vulnerable version, how long would it take you to recognize the increased risk?
  • If a new security disclosure was issued an hour ago, how long would it take you to determine precisely which running containers are impacted?
  • If an application that runs only at specific points in the year starts up, how long would it take you to determine whether it is vulnerable?
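The questions above reduce to a join between two pieces of data you must keep current: an inventory of what is actually running, and a feed of known-vulnerable components. A minimal Python sketch of that join; every image, component, and version below is invented for illustration:

```python
# Inventory: which components (and versions) each running image contains.
# All names and versions here are invented for illustration.
running_images = {
    "registry.example.com/app:2.1": {"openssl": "1.0.2g", "zlib": "1.2.11"},
    "registry.example.com/api:1.4": {"openssl": "1.1.0h"},
}

# Feed of new disclosures: (component, set of affected versions)
new_disclosures = [
    ("openssl", {"1.0.2g", "1.0.2h"}),
]

def impacted_images(inventory, disclosures):
    """Return the running images containing a newly disclosed vulnerable component."""
    hits = []
    for image, components in inventory.items():
        for component, bad_versions in disclosures:
            if components.get(component) in bad_versions:
                hits.append(image)
    return sorted(set(hits))

print(impacted_images(running_images, new_disclosures))
# A rollback to a vulnerable version, or a seasonal application starting up, changes
# the inventory; re-running this join on every change is what continuous monitoring
# automates.
```

The hard part in practice is keeping both inputs fresh, which is exactly why a point-in-time scan at push is insufficient.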

These are important considerations for anyone placing containerized applications into production. And if your container security solution can’t automatically scan everything regardless of source, and continuously monitor for new security threats while alerting on continued usage of vulnerable images, then you probably need a new solution. As it happens, we have such a solution in our portfolio.

I encourage you to find Synopsys at RSA and ask for me—Tim Mackey—by name. I’ll be happy to help you build a more secure data center using container technologies.
