Securing your Mobile JavaScript Applications

July 7, 2017 | 5 min Read

JavaScript has been used as a client-side language for over 20 years and as a server-side language for close to 10. In the past 3 years, JavaScript has emerged as a language of choice for mobile app developers, especially those looking for a cross-platform solution. Technology stacks such as React Native and Tabris.js are the obvious choices, but some engineers are rolling their own runtimes with tools such as J2V8.

As more organizations migrate towards JavaScript on Mobile, JavaScript security is becoming a top priority. At EclipseSource, we have been building JavaScript-based mobile apps for customers in the health care and financial services sectors. In this post, we will highlight some of the security concerns we have uncovered when using JavaScript on Mobile and discuss how to mitigate them.

Repackaging Protection

One attack that is difficult to protect against involves someone taking your application, decompiling it, modifying it, and re-publishing it to the app stores. The attacker then tries to get users to install the modified version instead of your official one. A well-crafted clone will still behave like the original, so the user has no idea they are running a version that has been tampered with.

To protect against this, you can check at runtime whether the code you are executing matches your signing key. On Android, you can check the fingerprint of the signing certificate using activity.getPackageManager().getPackageInfo(packageName, PackageManager.GET_SIGNATURES).signatures. Of course, if someone decompiles your app, it is trivial to remove this check. To address this, you can either compile the check directly into your JavaScript engine, which, as a native library, is harder to tamper with; or you can embed the check in your JavaScript and ship encrypted & obfuscated JavaScript.
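As a rough illustration, here is a minimal sketch of what such a runtime check might look like in plain Java on Android. The class name and the expected fingerprint constant are illustrative, and in practice this logic would be compiled into, or called from, the native layer.

```java
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import java.security.MessageDigest;

public final class SignatureCheck {

    // Illustrative value: the SHA-256 fingerprint of your release signing certificate.
    private static final String EXPECTED_FINGERPRINT = "AB:CD:EF:...";

    public static boolean isSignedWithReleaseKey(Context context) {
        try {
            PackageInfo info = context.getPackageManager().getPackageInfo(
                    context.getPackageName(), PackageManager.GET_SIGNATURES);
            for (Signature signature : info.signatures) {
                // Hash the signing certificate and compare it to the expected fingerprint.
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                byte[] hash = digest.digest(signature.toByteArray());
                if (EXPECTED_FINGERPRINT.equals(toHex(hash))) {
                    return true;
                }
            }
        } catch (Exception e) {
            // If the check cannot be performed, treat the app as tampered with.
        }
        return false;
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < bytes.length; i++) {
            if (i > 0) sb.append(':');
            sb.append(String.format("%02X", bytes[i] & 0xFF));
        }
        return sb.toString();
    }
}
```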

JavaScript Code Obfuscation & Encryption

Shipping JavaScript-based mobile apps often means shipping your JavaScript in plain text to all of your users. Even without decompiling your app, an attacker can often find your JavaScript embedded directly in the packaged app. Anybody with a simple editor can inspect your source code to gain insight into your application and search for security vulnerabilities. Minifying your JavaScript helps a bit, but will only slow an attacker down. A better approach is to encrypt your code with an AES key and use that same key at runtime to decrypt the JavaScript as it's executed. The AES key can either be encrypted and embedded directly in the binary, or it can be negotiated with a server and passed at runtime.
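The following sketch shows how the decryption step might look, assuming the JavaScript bundle ships as an AES/CBC-encrypted asset and the key and IV are supplied by the native layer or a server handshake. The class and method names are hypothetical; the point is that only the decrypted source is ever handed to the JavaScript engine.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class ScriptDecryptor {

    /**
     * Decrypts an AES/CBC-encrypted JavaScript bundle. The key and IV are
     * assumed to come from the native layer or a server negotiation.
     */
    public static String decryptScript(InputStream encrypted, byte[] aesKey, byte[] iv)
            throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(aesKey, "AES"),
                new IvParameterSpec(iv));

        byte[] plainText = cipher.doFinal(readAll(encrypted));

        // The decrypted source can now be handed to the JavaScript engine,
        // e.g. v8.executeScript(script) when using J2V8.
        return new String(plainText, StandardCharsets.UTF_8);
    }

    private static byte[] readAll(InputStream in) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}
```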

Of course, in the end, the code must be decrypted as it's executed. If someone dumps the memory of your app while it's running, they may find the JavaScript there. Even if you obfuscate and encrypt your code, you should still perform code signing & verification to make sure nobody has tampered with it.

JavaScript Code Signing & Verification

Code signing and verification is a mechanism used to ensure that the code being executed has not been tampered with. When code is authored, a fingerprint is computed and signed with a private key. At execution time, the same fingerprint is computed and the matching public key is used to verify the signature. If verification succeeds, the code is executed; otherwise it is rejected with a security exception.
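Here is a minimal sketch of the verification step, assuming the JavaScript ships with a detached SHA256withRSA signature and an X.509-encoded public key. The class name and signature scheme are illustrative choices, not a prescription.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public final class ScriptVerifier {

    /**
     * Verifies a detached signature over the JavaScript source before it is
     * executed. The public key bytes (X.509/DER encoded) could be embedded in
     * the native library or negotiated at runtime.
     */
    public static void verifyOrThrow(String script, byte[] signatureBytes, byte[] publicKeyBytes)
            throws Exception {
        PublicKey publicKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(publicKeyBytes));

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(script.getBytes(StandardCharsets.UTF_8));

        if (!verifier.verify(signatureBytes)) {
            // Reject tampered code instead of executing it.
            throw new SecurityException("JavaScript signature verification failed");
        }
    }
}
```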

Like the AES key, the public key can either be embedded in the application or negotiated at runtime. Public keys are considered safe to distribute, but if an attacker can locate the public key, they could replace it with their own. Obfuscating the key and embedding it directly in the JavaScript engine makes this attack harder. Furthermore, if the APK itself is signed, an attacker won't be able to simply swap out the binary.

We suggest performing JavaScript code signing & verification as a minimum for all JavaScript-based apps. This means signing and verifying not only your own source code, but all of your dependencies too.

Root Detection

Finally, most attack vectors require root access to a user's device. With root access, an attacker can install alternate VMs or manipulate the Java code running on an Android device using reflection. Refusing to run the JavaScript engine on a rooted phone makes these attacks harder. This can be viewed as an extreme measure, but it helps protect highly sensitive data from exposure in a potentially hostile environment.
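Root detection is inherently heuristic, but a basic check might look like the sketch below. The list of su locations is illustrative and far from exhaustive, and a determined attacker can defeat checks like these; they simply raise the bar.

```java
import android.os.Build;
import java.io.File;

public final class RootDetector {

    // Common locations of the su binary; illustrative, not exhaustive.
    private static final String[] SU_PATHS = {
            "/system/bin/su", "/system/xbin/su", "/sbin/su", "/system/sd/xbin/su"
    };

    public static boolean isLikelyRooted() {
        // Test-keys in the build tags usually indicate a custom or rooted ROM.
        String buildTags = Build.TAGS;
        if (buildTags != null && buildTags.contains("test-keys")) {
            return true;
        }
        // Check for an su binary on the filesystem.
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }
}
```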

However, disabling apps on rooted phones is also a double-edged sword. Some people root their phones to install security patches, and in some cases rooted phones may actually be more secure than stock installs. If you are going to perform root detection, think about your target audience and whether this security measure actually makes sense for your app.

Summary

As JavaScript grows and starts to play a more prominent role in the financial and health services sectors, security requirements are becoming a top priority. Because JavaScript is an interpreted language, the source code often ships in plain text to all of your clients' devices. However, there are several measures you can take to address the risks, such as repackaging protection, code obfuscation & encryption, code signing & verification, and root detection. Embedding these checks directly in the JavaScript engine, and halting all execution if any of them fail, can help mitigate the security risks associated with JavaScript on Mobile.

If you are interested in learning more about mobile security, or in hearing how we manage this with Tabris.js and J2V8, please get in touch. For more information on J2V8 and JavaScript security, follow me on Twitter.

Ian Bull

Ian is an Eclipse committer and EclipseSource Distinguished Engineer with a passion for developer productivity.

He leads the J2V8 project and has served on several …