Behavioral security at RSA Conference 2018

Wednesday, RSA 2018: On any given day, there are more than 150 sessions to choose from here. Good luck getting to even 5% of those. The good news is that attendees can get access to most of the sessions they missed after the fact, since the slide presentations are posted and videos are made of just about every one. So you can keep “attending” for months to come.

But here are some highlights from a small slice of it, taken in real time:

It didn’t get nearly as much buzz as the keynote from Monica Lewinsky of Bill-Clinton-and-blue-dress fame, but the message was still powerful: Behavioral analytics is changing the world of security.

We all know we can’t escape our genes. Turns out we can’t escape our behavior either. It’s hardwired into all of us, to the point that with the data analytics enabled by machine learning (ML), it’s possible to tell if a person trying to do much of anything online isn’t really you.

Which can be great for keeping malicious attackers posing as you from breaching your company.

But it also means it’s possible to tell when the person really is you, which might sound more than a bit intrusive to a lot of us, even if we’re not trying to do malicious things online. It’s a bit unsettling to hear that the way you browse, tap your keyboard, or even hold your smartphone is as reliable as a fingerprint or retina scan in identifying you—and can be done remotely.

Behavioral identification doesn’t have to intrude on privacy, though. That was the message from Jim Routh, CISO at Aetna, in a talk titled “Model-Driven Security: It’s Closer Than You Think.”

Actually, it’s not just close—it’s already here. At Aetna, Routh said, the company uses just about every conventional countermeasure against the No. 1 threat vector—phishing emails—and the major tools hackers use to succeed: spoofed domains, look-alike domains, display name deception, and compromised accounts.

But if an attacker does get past those lines of defense and starts to move laterally to try to gain more privileges, then behavioral analytics takes over.

It begins with the separation of privileges from every ID. Those who want or need privileges for a certain amount of time ask for them. The way they then use those privileges is analyzed through what he called privileged access management (PAM), which doesn’t rely on conventional authentication.

“Your username, password, and PIN can be exploited,” he said. “They’re binary. But your behavior doesn’t lie.”

So he collects behavioral information from every user, in the form of 30–60 attributes that range from the top three apps they use to how they hold their phones. “I’m not storing it,” he said. “I’m just assigning it a number to come to a risk deviation score.”
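The talk didn’t detail the math behind that risk deviation score, but the idea—comparing a handful of numeric behavioral attributes against a user’s historical baseline—can be sketched as a simple z-score aggregation. The attribute names, thresholds, and scoring formula below are purely illustrative assumptions, not Aetna’s actual model:

```python
import math

def risk_deviation_score(observed, baseline_mean, baseline_std):
    """Score how far a user's current behavioral attributes deviate
    from their historical baseline (larger = more anomalous)."""
    total = 0.0
    for key, value in observed.items():
        mean = baseline_mean[key]
        std = baseline_std[key] or 1.0  # guard against zero variance
        total += ((value - mean) / std) ** 2
    # Root mean square of the per-attribute z-scores
    return math.sqrt(total / len(observed))

# Hypothetical attributes: typing cadence, session length, phone tilt
baseline_mean = {"keystroke_ms": 120.0, "session_min": 14.0, "tilt_deg": 32.0}
baseline_std  = {"keystroke_ms": 15.0,  "session_min": 4.0,  "tilt_deg": 6.0}

normal  = {"keystroke_ms": 125.0, "session_min": 13.0, "tilt_deg": 30.0}
anomaly = {"keystroke_ms": 310.0, "session_min": 55.0, "tilt_deg": 80.0}

print(risk_deviation_score(normal, baseline_mean, baseline_std))   # low score
print(risk_deviation_score(anomaly, baseline_mean, baseline_std))  # high score
```

A real system would use far more attributes and a learned model rather than fixed per-attribute statistics, but the shape is the same: the raw behavioral data reduces to a single number that can be thresholded, so nothing personally identifying needs to be stored.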

He said the model can detect anomalous behavior in milliseconds and has had a false positive rate of 0.002%. “And we can do it by pushing a button, not writing code, which is expensive,” he said.

And it sets a much higher bar for attackers. “It reduces friction for users and increases it for attackers,” he said, adding that “this is what security looks like now.”

Behavior, Part II:

John Elliott, head of payment security at easyJet, put a much more analog spin on behavior with a pitch for applying a personality test along the lines of Myers-Briggs to potential third-party suppliers to see, before you hire them, if they are likely to put your organization at risk.

He said that evaluation comes down to three major characteristics: knowledge, ability, and intent. As in, do they know what to do, do they have the capability to do it, and will they actually do it?

Among the ways to find out:
  • Do a sniff test, bringing your own experience with other vendors to bear.
  • Use external rating agencies. “That won’t necessarily expose intent, but it can give you valuable information,” he said.
  • Ask open questions. “Make the vendor think,” he said. “Ask questions that can’t be answered by salespeople.”
And among those suggested questions:
  • What do you see as the top three cyber threats to your business?
  • How do you gain threat intelligence for the short, medium, and long term?
  • What formal or informal information-sharing networks are you a member of?
  • How many pen tests and red team exercises have you done in the last 12 months?
  • What do you plan to do differently next year?
  • How many people are more than 50% dedicated to CyberSec or InfoSec?
  • Do you think this is enough?
  • How many days would it take malicious attackers to breach your defenses and gain access to your critical systems?
  • How quickly would you detect it?

Elliott even suggested posing questions that have no answers, to see if the vendor will try to blow some smoke. “Ask them, ‘What’s the RPO, RQO, and RTO for the systems that support the service you provide to us?’

“RQO is nothing. But ask it anyway to see if they answer it.”

None of this comes cheap, he warned. But it could be a lot cheaper to find out if vendors will be a dream or a headache before getting locked into a contract with them.

Are connected medical devices saving or harming patients?

A panel of experts discussing whether connected medical devices are saving or harming patients agreed that things are pretty much the way they’ve been for a number of years. There’s still a long way to go, but things are improving, with better anomaly detection, better government involvement by the FDA, and more awareness by doctors, hospital staff, and patients that they are all stakeholders.

But toward the end of the panel, David Scott, product security officer at Becton Dickinson and Co., noted what those in the Synopsys Software Integrity Group have been preaching for more than a decade—that it is much better to build security into products than to try to patch it on later.

Connected medical devices “have to be secure by design,” he said. “And that has to run from the concept to the end of life of a device.”

How secure are automotive safety features?

Feeling secure in your new car, with all the safety features like adaptive cruise control, rearview cameras, and accident avoidance? You shouldn’t.

That was the rather bleak message from Sergey Kravchenko, senior business development manager, future technologies, at Kaspersky Lab, who noted the obvious reality—that cars are not just vehicles but content providers and are largely controlled by computers.

A few years ago, he said, modern cars were relatively secure, after manufacturers created better security for navigation updates.

But he said hackers have found vulnerabilities that give them “access to all the functions—voice control, external cameras, engine. All of those have already been hacked and monetized by ‘gray garage’ businesses,” he said.

“Your car—all cars—are hackable, and you don’t even need the dark web.”

Some of the possibilities might appeal to owners, like the ability to roll back your odometer.

But most of them are ominous, like being able to unlock a car and control crucial functions like brakes and the engine. Hackers can also track the location of a vehicle through access to the GPS.

And things are apparently not improving. Kravchenko said that a couple of years ago, his firm tested nine car apps for resistance to several cyber attack vectors. “None were secure,” he said. “One year later, we tested 13, and the original nine were still insecure. Of four new apps tested, only one was resistant.

“It’s not just a connected car—it’s a connected world,” he said.

We have been warned.

Posted by Taylor Armerding

Taylor Armerding is an award-winning journalist who left the declining field of mainstream newspapers in 2011 to write in the explosively expanding field of information security. He has previously written for CSO Online and the Sophos blog Naked Security. When he’s not writing he hikes, bikes, golfs, and plays bluegrass music.
