A new financial services cybersecurity report reveals an industry aware of online threats but not doing enough to protect its systems, networks and data.
The original version of this post was published in Forbes.
When it comes to cybersecurity, the U.S. business world is definitely not the IT version of Lake Wobegon. Everybody ain’t above average.
But the financial services industry (FSI) can claim that, on average, it is above average—for the past decade it has ranked better or “more mature” than average in the annual BSIMM (Building Security In Maturity Model) survey of software security initiatives (SSI) in multiple industry verticals.
That does not mean there is no room for improvement, however. A new report out today, The State of Software Security in the Financial Services Industry, commissioned by the Synopsys Cybersecurity Research Center (CyRC) and conducted by the Ponemon Institute, reveals an industry that is both aware of and concerned about online threats but acknowledges that it is not doing enough to protect its systems, networks and data from them.
Not really walking the talk, in other words.
Indeed, Exhibit A this week is news of the breach of Capital One, the credit card, banking and loan giant that is obviously a major FSI player.
FBI agents arrested software engineer Paige A. Thompson, of Seattle, on Monday, on suspicion of downloading Capital One credit application data of about 106 million people from a rented Amazon cloud data server that multiple reports said was “misconfigured.” The compromised data included personal information and, in some cases, Social Security and bank account numbers.
That kind of event may not be frequent, but it is inevitable, given the key findings of the Ponemon survey of more than 400 security practitioners within the FSI.
Drew Kilbourne, managing director of security consulting at Synopsys, said the survey results show that while there is no single right approach to software security, “there is a significant need for improvement in supply chain risk management.”
All of which, collectively, paints a rather ominous picture—a picture that doesn’t surprise Alissa Knight, senior analyst at Aite and author of a report released in April that documents how she reverse engineered 30 FSI mobile apps and found 180 “critical” security problems with them.
“I was able to reverse engineer them in about eight minutes,” she says, “and found things like hard-coded credentials, weak encryption and private key exposure. There’s some scary stuff—not even validating the username and password.”
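Findings like hard-coded credentials are easy to illustrate. The sketch below (the key value and environment variable name are hypothetical) contrasts the anti-pattern Knight describes with a safer runtime lookup:

```python
import os

# Anti-pattern: a secret embedded in source code ships inside the app binary,
# where anyone who decompiles the package can read it.
API_KEY = "sk_live_51HxEXAMPLEKEY"  # hypothetical hard-coded key


def get_api_key() -> str:
    """Safer: resolve the secret at runtime from the environment (or a vault
    or platform keystore), so it never appears in the distributed artifact."""
    key = os.environ.get("PAYMENTS_API_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError("PAYMENTS_API_KEY is not configured")
    return key
```

The same principle applies whatever the secret store is: the distributed code should contain only a reference to the credential, never the credential itself.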
She added that she has found the FSI, in general, has “very poor security hygiene around their APIs [application programming interfaces].”
Kilbourne noted that while the FSI as an industry is relatively mature regarding software security, “organizations are grappling with a rapidly evolving technology landscape and facing increasingly sophisticated adversaries.”
But ominous does not have to mean hopeless. The industry can evolve as well, and the survey results show that organizations are aware that they have a problem.
Organizations also don’t have to reinvent the wheel, so to speak, to improve. The ways to do it are well established and accessible.
It is, as experts regularly say, impossible to make an organization bulletproof. But it is possible to make it much more difficult for hackers to breach an organization, and to make it easier to detect those who do manage to get inside.
One of the most crucial, and fundamental, moves would be to reverse what the majority of respondents said they are doing today. Instead of “managing risk” with pen testing and patch management at the end of the SDLC, do what software experts have been preaching for more than a decade: Shift left. That’s another way of saying “don’t wait until the end.” Build security into software from the beginning and throughout development.
That requires the use of multiple, automated tools that can help developers find and fix bugs before pen testers find them later, when they take more time and money to fix. Or before a product on the market needs emergency patches because it is getting hacked.
It is critically important to remember that there is no “all-in-one” tool that magically catches every defect in software code. It takes at least a half dozen tools, an alphabet soup of acronyms that includes SAST, DAST, IAST and SCA. Each looks for specific things, such as vulnerable open-source components, or tests the software in a different state: static, dynamic or interactive.
As Kilbourne put it, “Each tool does its one thing really well, but each is looking for different stuff.”
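A toy illustration of why no single tool suffices: the sketch below (the secret pattern and the vulnerable-package list are made up for the example) shows a SAST-style check and an SCA-style check, each blind to the class of problem the other finds.

```python
import re

# SAST-style check: pattern-match source text for likely embedded secrets.
SECRET_PATTERN = re.compile(
    r'(password|api_key|secret)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE
)


def find_hardcoded_secrets(source: str) -> list:
    """Return suspicious assignments found in raw source text."""
    return [m.group(0) for m in SECRET_PATTERN.finditer(source)]


# SCA-style check: compare declared dependencies against a (hypothetical)
# list of components with known vulnerabilities.
KNOWN_VULNERABLE = {("leftpad", "1.0.0"), ("oldssl", "0.9.8")}  # made-up entries


def find_vulnerable_deps(deps: dict) -> list:
    """Return declared dependencies that match the known-vulnerable list."""
    return [
        f"{name}=={ver}" for name, ver in deps.items()
        if (name, ver) in KNOWN_VULNERABLE
    ]
```

The secret scanner knows nothing about dependency versions, and the dependency check never reads a line of the application's own code, which is exactly Kilbourne's point about each tool looking for different stuff.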
Nabil Hannan, managing principal, financial services at Synopsys, said organizations should start by setting priorities. “Instead of trying to find everything at once, automation should focus on the low-hanging fruit,” he said. “By using multiple analysis techniques, organizations can get a higher level of confidence that they are detecting and blocking the most common vulnerabilities sooner rather than later.”
Yes, it will cost something to employ those tools, but it will save both time and money in the long run. Patching software after its release is, as experts say, trying to “bolt security on” instead of “building security in.”
A second fundamental is to address third-party risks.
One step is to demand the same security standards from third-party vendors. If they aren’t secure, neither are you. If they can get hacked, you can get hacked. Indeed, “island hopping” is the term that describes what cybercriminals do to expand on a breach of a victim’s network—they’re hoping to breach other organizations in the victim’s supply chain as well.
The infamous and catastrophic breach of mega-retailer Target at the end of 2013 is just one example. The breach, which exposed 40 million debit and credit card numbers and 70 million other records that included addresses and phone numbers, was enabled by an email phishing attack on a third party—a heating, air conditioning and refrigeration contractor.
So organizations should require their vendors to test their software during development, to demonstrate compliance with industry security standards and to use an independent measurement of their SSI.
Knight says another third-party risk is that many FSI organizations outsource development of their mobile apps, giving those vendors the API key and then not realizing that those developers have “hardcoded the key into the code [of the apps].”
“Their security is spotty at best,” she says, “and it’s pretty systemic.”
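Extracting a key hardcoded this way requires no sophisticated tooling; a strings-style scan over the compiled artifact is often enough. A minimal sketch, using a made-up key format:

```python
import re

# Many API keys follow recognizable formats; this hypothetical pattern
# matches an "sk_"-prefixed token like those issued by some payment APIs.
KEY_PATTERN = re.compile(rb"sk_[A-Za-z0-9]{16,}")


def extract_keys(artifact: bytes) -> list:
    """Scan the raw bytes of a compiled app for strings shaped like API keys."""
    return KEY_PATTERN.findall(artifact)


# Simulated app binary with a key compiled into it.
binary = b"\x00\x01auth\x00sk_AbCdEf0123456789XYZ\x00\xff"
```

Anyone who downloads the app can run the equivalent of this scan, which is why a key handed to an outsourced developer and baked into the build is effectively public.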
And Hannan said yet another challenge is securing the supply chain from insider threats. “It’s something that’s challenging to accomplish and almost impossible to fully automate, and requires new workflows and governance processes,” he said.
Third, make the workforce security savvy. Train employees to spot phishing emails, which are designed to trick them into clicking on a malicious link, and “vishing” phone calls, in which an attacker posing as anyone from law enforcement to somebody in HR exploits employees’ desire to be helpful and tries to get them to give up confidential information.
Other security basics apply as well.
Much of that is the digital equivalent of locking the safe and the doors at night and turning on the security system. Any organization that does all of it won’t just be better than average; it will be way above average.