Application security incidents cause serious disruption and scrutiny for any company. Fingers will be pointed, blame will be cast, and heads will roll. But right now all that matters is how you respond.
Security incidents are also far more common than you think. To illustrate this point, set up a baseline CentOS VM and give it a public IP address. Log in once, then log out and wait a week. When you log back in, you will see something like the following message toward the bottom of the server's message-of-the-day:
There have been 32 failed login attempts since your last login.
There are millions of scripts and spiders and probes out there, autonomously sniffing around and reporting back what systems they find. If security isn’t at the top of your priority list, it’s likely you’ll be hacked sooner or later. While we highly, highly recommend integrating developer-driven security into your engineering strategy to prevent any application security incidents, here are a few things you can do to handle this situation with grace if it should happen to you.
First, you need to figure out exactly how hackers got in, what data was accessed, and what holes still exist that would allow a similar attack to happen in the future. Try to think like a hacker and backtrack from the evidence to possible attack vectors and exploits.
Note that you're also collecting evidence here. Take screenshots, save infected files, save log files, and make sure to move incriminating files away from folders like /tmp, where they may be deleted by a system process. Protect everything that proves the hack. This information will help you with both your incident report to users and the report you will send to the proper authorities.
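As a sketch of that preservation step, here's a small stdlib-only Python helper (the function name and directory layout are illustrative, not a standard forensic tool) that copies each suspect file somewhere safe and records a SHA-256 hash so you can later prove the evidence wasn't altered after collection:

```python
import hashlib
import shutil
from pathlib import Path

def preserve_evidence(suspect_paths, evidence_dir):
    """Copy suspect files to a safe directory and record SHA-256 hashes.

    Returns a manifest mapping each original path to its hash; include
    the manifest in your incident report to document integrity.
    """
    evidence_dir = Path(evidence_dir)
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for src in map(Path, suspect_paths):
        digest = hashlib.sha256(src.read_bytes()).hexdigest()
        # Prefix the copy with its hash so the filename itself
        # documents what was collected.
        shutil.copy2(src, evidence_dir / f"{digest}_{src.name}")
        manifest[str(src)] = digest
    return manifest
```

In a real investigation you'd also capture file ownership, timestamps, and a chain-of-custody note, but the hash-and-copy core is the part that protects the evidence from a cleanup cron job.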
After that, go to a skeleton crew. You want as few people as possible with the most expertise. The industry name for this group is the Computer Security Incident Response Team (CSIRT). Right now, your office needs to clear out except for the designated response team so they can move quickly and easily.
Everybody gone? Good. Now turn off as many systems as you can. Of course, consult your Legal and Management departments before doing this, but be prepared to make the case that leaving the system up while the attack is ongoing may risk even more damage, compromised data, and downtime. Hopefully your app has some sort of maintenance mode. If not, a quick DNS redirect to a static page can do the trick for a web app. Don't be afraid to think creatively here, and in all of the steps below.
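If your framework speaks WSGI, a maintenance mode can be as small as a wrapper that short-circuits every request. This is a minimal sketch, not a production middleware; the page content and the one-hour Retry-After value are placeholders:

```python
MAINTENANCE_PAGE = b"<h1>Down for maintenance</h1><p>We'll be back soon.</p>"

def maintenance_mode(app, enabled=lambda: True):
    """Wrap a WSGI app; while enabled() is true, answer every request
    with 503 + Retry-After instead of touching the real application."""
    def wrapper(environ, start_response):
        if enabled():
            start_response("503 Service Unavailable", [
                ("Content-Type", "text/html; charset=utf-8"),
                ("Retry-After", "3600"),
                ("Content-Length", str(len(MAINTENANCE_PAGE))),
            ])
            return [MAINTENANCE_PAGE]
        return app(environ, start_response)
    return wrapper
```

Passing `enabled` as a callable (say, one that checks a flag file) lets you flip maintenance mode on and off without redeploying.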
Upper management typically controls the release of the disclosure, but you can help them by giving them the following information, to be passed along to users:
Informing your users is responsible and gives you the best opportunity to look like you’re on top of the problem, instead of being the hapless victim of it.
Now that it’s quiet in the office, the systems are quiet, and the users have been notified, it’s time to stop and think. How did this happen? How did we get into this mess? You know your system better than anybody else.
John Allspaw from Etsy has a concept called Blameless Post Mortem, built around the idea that "human error is seen as the effect of systemic vulnerabilities deeper inside the organization." This is another way of saying that security holes don't just exist at one layer of your organization or infrastructure—your problems are likely due not just to policy and procedure, but to how consistently those policies are (or aren't) followed.
It’s hard to give specific code-level examples here since every application is different, but you are in a unique position to find and patch the holes. This is your application. Finding and patching the problem should be your first order of business. You might have luck starting from the exploited data/access layer and working backwards. It’s likely clues will present themselves if they haven’t already.
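Even so, one generic starting point is mining your access logs for known exploit signatures and noisy clients. The sketch below assumes a simplified log line of the form `ip method path status`; a real access log needs a real parser, and the signature list is illustrative, not exhaustive:

```python
import re
from collections import Counter

# Crude signatures for common probes (path traversal, SQL injection,
# sensitive-file reads, reflected XSS). Tune for your own stack.
SUSPICIOUS = re.compile(r"(\.\./|union\s+select|/etc/passwd|<script)", re.I)

def suspicious_requests(log_lines, threshold=5):
    """Return (flagged_lines, noisy_ips): lines matching exploit
    signatures, plus client IPs with `threshold`+ auth failures."""
    flagged = [line for line in log_lines if SUSPICIOUS.search(line)]
    failures = Counter()
    for line in log_lines:
        parts = line.split()
        # Assumes the status code is the last field and the IP the first.
        if parts and parts[-1] in ("401", "403"):
            failures[parts[0]] += 1
    noisy_ips = [ip for ip, n in failures.items() if n >= threshold]
    return flagged, noisy_ips
```

Run something like this over the window before the breach was detected; a burst of 401s or a traversal attempt is often the thread to pull on.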
Ideally, there will be no next time, but in a world where you're in an arms race with state-sponsored hackers going after high-value targets with extreme tenacity, you never know. Use this experience as a wake-up call for your organization and implement a security-centric approach to development.
Here are a few key questions to ask yourself at this point:
Once the main exploit is patched, it’s time to look over your entire codebase with an eye for security. If a flaw exists in one place it’s likely to exist in other places. There are resources and tools out there that will help you do a security audit. Plenty.
Doing an audit once isn't enough. You will also need to adopt a more developer-centric security model.
Another model that works is “security peer programming.” This is where a member of the security team and a member of the development team sit and work together on the same bit of code, with security in mind. This can help expose blind spots that developers might have and also serves as a wonderful incidental peer-education tool as the development team’s security knowledge grows.
An exploit discovered fairly recently used login redirects to determine whether a visitor is logged in to various social networks. Static content, simply by way of how it's served by Apache or Nginx, or how your application handles requests, can leak a surprising amount of data.
How does that work? Well, think about it: you are serving static content like a CSS file or image file behind a login. Somebody else can include that file on their own site, and as users pass through, they can tell whether each user is logged in to YOUR site based on how the static asset loads. This exploit can be used for social engineering, data mining, and de-anonymization.
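A quick way to audit for this is to fetch each static asset twice, once with a session cookie and once without, and compare the responses. The helper below is a sketch of just the comparison step; the probing itself (and the tuple format) is assumed:

```python
def auth_dependent_assets(probes):
    """probes: iterable of (path, anon_status, authed_status) observed
    by requesting each static asset with and without a session cookie.

    Any asset whose response depends on the session can be embedded on
    a third-party page to test whether a visitor is logged in to your
    site. Safe assets answer identically regardless of session state.
    """
    return [path for path, anon, authed in probes if anon != authed]
```

For example, an asset that returns 302 (redirect to login) anonymously but 200 when authenticated is exactly the kind of oracle this exploit relies on.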
It turns out, CDNs aren’t just good for performance, they’re good for security too. After you’ve done this, refer back to question #1 to see if you can ditch any code you had written to deal with those files in the first place.
Beef up your authentication. The easiest win here is to enable two-factor authentication, or TFA. If you're not familiar, TFA adds a second layer of security by requiring users to verify a secondary measure of trust, usually a phone. You can build a simple version using only SMS, or you can get fancy and use Duo Mobile or RSA.
For added security, you can design your TFA system to re-authenticate whenever a user enters a section of your application like the account or payment settings.
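To make the moving parts concrete, here's a minimal stdlib-only sketch of an SMS-style code issuer and verifier. The delivery step and server-side storage are left out, and the five-minute `CODE_TTL` is an illustrative choice, not a recommendation:

```python
import hmac
import secrets
import time

CODE_TTL = 300  # seconds a code stays valid (illustrative)

def issue_code():
    """Return a (code, issued_at) pair. Send `code` out of band
    (e.g. SMS) and keep both values server-side for verification."""
    return f"{secrets.randbelow(10**6):06d}", time.time()

def verify_code(submitted, code, issued_at, now=None):
    """Constant-time comparison plus an expiry window, so codes
    can't be probed via timing or replayed long after issuance."""
    now = time.time() if now is None else now
    if now - issued_at > CODE_TTL:
        return False
    return hmac.compare_digest(submitted, code)
```

A production system would also rate-limit attempts and invalidate a code after first use; `hmac.compare_digest` is the important detail people tend to skip.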
There's a lot of debate about key-based versus role-based authorization. Here, the more methods you use to make sure that the agent seeking access should truly be able to gain access, the better.
From top to bottom, you want something like this: the database can only be accessed by machines on its own subnet that are granted a role with database-access privileges and are making the request with the proper API keys, initiated by a properly authenticated user who has the application-level privileges to make the request in the first place. Yes, some of that is IT's job, but from the user level up it's on the developer. Whatever your role in the organization, it's good to know how your security stack works. For more information on this see the OWASP Application Security Verification Standard (starting on page 19).
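That layered check can be sketched as a single gate that denies unless every layer agrees. All the field names and the hard-coded values below are hypothetical stand-ins for whatever your framework and infrastructure actually provide:

```python
VALID_API_KEYS = {"example-key-1"}  # hypothetical key store

def authorize(request):
    """Deny unless every layer agrees. `request` is a plain dict
    standing in for your framework's request context; each check
    mirrors one layer of the stack described above."""
    checks = [
        request.get("subnet") == "10.0.1.0/24",       # network layer
        "db_access" in request.get("roles", ()),       # role layer
        request.get("api_key") in VALID_API_KEYS,      # key layer
        request.get("user_authenticated", False),      # session layer
        request.get("user_can_query", False),          # app-level privilege
    ]
    return all(checks)
```

The point of the shape is that a compromise of any one layer (a leaked API key, say) still fails the other four checks.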
No, this is not “Install RelicSnag” or whatever new cloud monitoring service you have available. This is an opportunity to sit and think about which application events you want logged and which ones you want monitored. Add these logging events to your code.
Some obvious events are:
Some non-obvious ones are:
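Whichever events make your list, emit them in a structured, machine-parseable form so alerting tools can match on an event name instead of scraping free-form messages. A minimal sketch using only the standard library (the logger name and field names are arbitrary choices):

```python
import json
import logging

security_log = logging.getLogger("security")

def log_security_event(event, **fields):
    """Emit one structured line per security event. Returning the
    record makes the function easy to test and reuse."""
    record = {"event": event, **fields}
    security_log.warning(json.dumps(record, sort_keys=True))
    return record
```

For example, `log_security_event("failed_login", user="alice", ip="203.0.113.5")` gives your monitoring something unambiguous to count and alert on.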
A lot of the time, the biggest security flaw in an organization is a human being. Let’s take the example of a support call. With a few pieces of PII, a hacker can get an astonishing amount of access to a user’s account from a poorly trained support technician.
Additionally, all it takes is somebody opening one wrong email attachment to infect your entire network with a worm, Trojan horse, or worse. The period immediately after a properly handled security breach is a great time to bring up training opportunities, not just for your developers, but for your organization at large. You're only as strong as your weakest link, and oftentimes that link is a poorly trained human being.
Even though email falls under the IT or Security team, you should be thinking about your application and its code, and how you might be able to protect your application against even trusted devices – as we learned from the Mirai botnet attack, even trusted IP addresses can turn against you. Think of it like The Matrix: anybody can become an Agent at any moment.
Breaches inarguably demonstrate a failure to secure your systems. It's on you and your organization to respond quickly and effectively.
Use this as a wake-up call to truly implement developer-driven security. Learn from the hackers, learn from your security team, and learn from each other. It only takes an instant to build security in while you’re coding and a relatively small amount of time to see significant gains. A developer-driven, security-first approach will have rippling effects on the rest of your application.