Learn how to choose React Native libraries that abide by application security principles in order to build secure mobile applications.
When we choose a library, we usually think about its functionality ("Does it do the right job?") and performance ("Will it slow my application down?"). But we should also think about security: "Will it make my application vulnerable?"
Typically, applications are built with principles of security in mind, including least privilege, defense in depth, open design, and minimizing the attack surface.
The implementation of these principles in a mobile application varies slightly because the threat model differs from that of web applications, and it varies even more for applications built with React Native. Let's examine React Native's impact on the application's overall security for each of these principles.
This principle specifies that an application should have only the permissions that are essential to it. A typical mobile application is composed of several components, such as services and receivers in the Android environment. These components need to interact with various device processes for the application to function. For example, a music streaming application may have a service component that allows it to save music to the local file system. However, if that application has components that request access to the device location or camera, it might indicate a violation of least privilege.
Typically, with native applications, permissions are set in configuration files before building. React Native libraries, on the other hand, allow applications to read and set permissions at runtime. This does not necessarily introduce new risks, but, as with native code, the React Native code must not request overly broad permissions.
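As a sketch of applying least privilege to runtime requests, the helper below (all names and the permission allowlist are hypothetical) filters the permissions a library asks for against the set the application actually needs:

```typescript
// Hypothetical allowlist: the only permissions this app genuinely needs.
const ESSENTIAL_PERMISSIONS = new Set([
  "android.permission.INTERNET",
  "android.permission.WRITE_EXTERNAL_STORAGE", // e.g., saving music offline
]);

// Drop any permission a library asks for that is not on the allowlist.
function filterToEssential(requested: string[]): string[] {
  return requested.filter((p) => ESSENTIAL_PERMISSIONS.has(p));
}

const toRequest = filterToEssential([
  "android.permission.INTERNET",
  "android.permission.CAMERA", // over-privileged request: filtered out
]);
console.log(toRequest); // -> ["android.permission.INTERNET"]
```

In a real application, the filtered list is what would be handed to React Native's `PermissionsAndroid.requestMultiple`, so an over-reaching library never triggers a prompt the app cannot justify.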
This principle refers to security best practices that help an application defend against chaining of vulnerabilities. Like web applications, mobile applications can be safeguarded against common risks such as data theft and code manipulation. Examples of defensive controls include restricting the application to non-jailbroken devices and to devices that are on the latest OS version, avoiding insecure or deprecated APIs, or not allowing debuggers to be attached to the application process.
Lately, more applications are implementing these defensive controls in React Native code rather than in native code. It is important to understand how these controls work. Consider the practice of restricting the application to non-jailbroken devices. When selecting a jailbreak detection library, analyze the implementation and the APIs it uses. Does the implementation depend on a Boolean return type? If so, it is considered event based and is easy to bypass, especially when complementary defenses are lacking.
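To see why a Boolean verdict is fragile, consider this simplified, hypothetical detection routine (the file paths are illustrative jailbreak artifacts):

```typescript
// Event-based check: the entire verdict collapses into one boolean.
const JAILBREAK_ARTIFACTS = [
  "/Applications/Cydia.app",
  "/bin/bash",
  "/usr/sbin/sshd",
];

function isDeviceJailbroken(existingPaths: string[]): boolean {
  return existingPaths.some((p) => JAILBREAK_ARTIFACTS.includes(p));
}

// An attacker who hooks this single function (for example, with an
// instrumentation framework) can force it to return false, bypassing the
// entire defense in one step. Complementary controls (repeated checks,
// code integrity verification, server-side signals) raise the cost of
// that bypass.
```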
In general, when choosing a React Native library to defend against typical mobile risks, scrutinize its implementation and avoid deprecated methods. Further, ensure that default values and other configuration options that affect the application's security do not introduce vulnerabilities.
This principle states that an application's security should not rely on the secrecy of its inner workings. With traditional web applications, only a small part of the implementation resides on the client side, and that client-side code is easily reachable by attackers, who can study its logic and exploit it. Determining what should and shouldn't be revealed in client-side code is quite straightforward, at least for web applications: find a balance by incorporating an open design, reveal only what is requisite on the client side, and do the heavy lifting securely on the server side.
But mobile applications do the heavy lifting on the client side (the mobile device), especially if using a hybrid framework such as React Native. So should this recommendation of an open design principle be taken with a grain of salt?
Yes. Mobile applications tend to move logic and data storage to the client side, so obscurity is more important here than in web applications. Though attackers will eventually get to whatever is obscured with enough time and resources, certain controls, such as the ones discussed in the previous section, will delay them.
Typically, a mobile application’s binary and data exist on the device, so obfuscating code in the application binary would make it harder for attackers to understand how the application works, thwarting targeted attacks. But mitigations should still be employed on the server side. It should also be noted that sole reliance on obfuscation is considered bad practice.
This principle recommends limiting the number of entry points into an application. With traditional web applications, the attack surface includes input fields and URL parameters. With mobile applications, the attack surface includes both user inputs and the sandbox the application resides in. A sandbox is a containerized location on the device within which the application operates; the application process can access only the data present in that sandbox. The operating system minimizes the attack surface to an extent via sandboxing, but this does not address all security concerns. A weakly defended attack surface can still allow data to be leaked (weak confidentiality) or data and code to be manipulated (weak integrity).
For example, applications need to store user data on the device to stay fast and responsive. But what if the device is compromised? Should applications accept the minimal but real risk of leaking a user's personal information or credentials? What if the user is an administrator with higher privileges? It is important not only to be conscious of where on the device this information is stored but also of what access controls are in place to defend against leaks. Mobile applications also allow deep linking, in which the application, or a specific functionality within it, is invoked from outside its sandbox. If deep linking is not configured securely, the application's functionality and data can be at risk.
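One concrete control is validating incoming deep links before acting on them. The sketch below (the host name is a placeholder) accepts only HTTPS links to an allowlisted domain and rejects everything else, including custom schemes:

```typescript
// Hypothetical allowlist of hosts this app trusts for deep links.
const TRUSTED_HOSTS = new Set(["app.example.com"]);

function isTrustedDeepLink(link: string): boolean {
  try {
    const url = new URL(link);
    // Reject custom schemes and unknown hosts outright.
    return url.protocol === "https:" && TRUSTED_HOSTS.has(url.hostname);
  } catch {
    return false; // unparseable input is never trusted
  }
}

console.log(isTrustedDeepLink("https://app.example.com/profile")); // true
console.log(isTrustedDeepLink("myapp://payload"));                 // false
```

In a React Native app, a check like this would run inside the `Linking` URL event handler, before any navigation or data access is triggered by the link.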
Furthermore, the client on which the mobile application is installed and the server it communicates with need a means of trusting each other. Most applications achieve this via certificate validation and certificate pinning, in which trusted certificates are embedded in the client application. The mobile operating system also lets applications predefine rules for secure network communication, such as App Transport Security (ATS) controls on iOS. For example, the configuration "NSExceptionAllowsInsecureHTTPLoads," which allows insecure communication over HTTP, should be limited to trusted domains.
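For instance, scoping that exception to a single domain in the iOS Info.plist keeps ATS protections intact for all other traffic (the domain below is a placeholder):

```xml
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSExceptionDomains</key>
  <dict>
    <!-- Placeholder: the one legacy host that still requires HTTP -->
    <key>legacy.example.com</key>
    <dict>
      <key>NSExceptionAllowsInsecureHTTPLoads</key>
      <true/>
    </dict>
  </dict>
</dict>
```

A blanket `NSAllowsArbitraryLoads` setting, by contrast, disables ATS globally and widens the attack surface for every connection the app makes.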
When it comes to application data, applications should follow a zero trust approach not only with clients but with users as well. Users can make insecure decisions when dealing with sensitive data; for example, they may install and use third-party keyboards that perform keylogging. By placing certain checks in the code, the application can protect user data from leaks. For example, the application can allow only the system keyboard for sensitive input fields such as passwords, or prevent sensitive information from being copied to the device clipboard.
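A minimal sketch of the clipboard check (the field shape and names are hypothetical): a single gate decides whether a field's value may ever reach the clipboard.

```typescript
type Field = { name: string; sensitive: boolean; value: string };

// Returns the value to place on the clipboard, or null to refuse the copy.
function copyableValue(field: Field): string | null {
  // Sensitive fields (passwords, tokens) never reach the clipboard.
  return field.sensitive ? null : field.value;
}

console.log(copyableValue({ name: "username", sensitive: false, value: "alice" }));   // "alice"
console.log(copyableValue({ name: "password", sensitive: true, value: "hunter2" })); // null
```

In React Native, a gate like this would sit in front of the clipboard API's `setString` call, while `secureTextEntry` and `contextMenuHidden` on `TextInput` provide the UI-level equivalents for password fields.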
As mentioned in the "Defense-in-depth" section, these libraries should be chosen after careful analysis of their implementations and default configurations. In short, pay extra attention to device settings and configurations, and ensure the library APIs fail safe. Trusting third-party libraries, certificates, or software with system-wide access (keyboards, clipboards) is inherently risky, and communication should be limited to trusted servers and protocols (HTTPS).
When building mobile applications with React Native, many functionalities are implemented with external libraries. Thus, it is imperative to choose them wisely and ensure they do not make mobile exploits easier for attackers. Best practices include verifying that libraries are at their latest version, are implemented securely, offer the same configurable options as their native peers, and do not use insecure defaults.
With Rapid Scan Static 2022.12.1, React Native libraries can be vetted against these principles, as relevant API safety and configuration checks are built into the engine. The publicly available Code Sight™ IDE plugin for VS Code and IntelliJ can be used to explore these new capabilities. Customers of Synopsys Coverity® and Black Duck® products will get these capabilities in their next major releases.
Vineeta is a senior research engineer at Synopsys. She evaluates current technologies and languages in the industry to identify methods of using them securely. Her research contributes to static analysis solutions that influence server-side, mobile, and client-side areas of security. As a software security enthusiast, she obtained her master’s degree in Computer Science from Indiana University and found her calling in 2016 in application security. Her key interests lie in web and mobile application security. Vineeta also enjoys sunshine and being outside.