A recent report recommends a national cyber incident reporting law. But how do we help organizations report data breaches if they fear regulatory sanctions?
The Cyberspace Solarium Commission recently published a report with over 80 recommendations for implementing a “strategy of layered cyber deterrence” for national security. Section 5.2.2, “Pass a National Cyber Incident Reporting Law,” raises the question of whether an organization filing a report in the interest of national security will be subject to punishment under one or more data security or privacy regulations.
The report recommends that “reported incidents may not be used to inform or drive punitive measures taken by regulatory agencies.” But if such a law were enacted, organizations might not trust that the government, having knowledge of an incident, would not provide that information to agencies that oversee reporting requirements for other regulations.
So how can we encourage organizations to report data breaches in the interest of national security when they fear that by doing so, they’d expose themselves to regulatory sanctions? We asked three experts for their opinions.
Most critical infrastructure entities (utilities, rail, oil & gas, etc.) already operate under regulations that require the implementation of cybersecurity protocols and incident reporting in the event of a breach. To encourage critical infrastructure entities to come forward and share this information, we need to ensure that they are not penalized with large fines, which can deter reporting, and instead offer them assistance to address and fix these breaches.
The DHS then needs to reinforce this message, assuring critical infrastructure entities that they will receive government support when they report these incidents. Finally, the amount of support provided by the government needs to reflect the severity of the breach. For example, a breach of customer data, while severe, may not have the same devastating impact as an incident that results in loss of control of mission-critical operations in the power grid.
Have you ever been having something of a crisis and your 5-year-old tries to “help”?
Have you ever had someone take one small snippet of something they saw or heard completely out of context and just run with it to the worst possible conclusions?
Have you ever had your sphincter pucker up tight enough to rip fabric from a chair when you hear the words, “We’re from the government and we’re here to help”?
NO ONE wants ANYONE second-guessing their actions taken in times of crisis. NO ONE wants ANYONE to dig into every nook and cranny of the circumstances surrounding the crisis. NO ONE wants ANYONE to give their Monday-morning quarterbacking rendition of what coulda-shoulda-woulda happened if only x, y, and z.
There’s also no way to make the reports anonymous. There will always be an opsec screwup that identifies the company involved.
There’s always some detail that’s going to get misused, if not by a regulator, then by the cybersecurity insurance company or by the public once the details show up on redditwitterfacegram.
The GDPR people complain that companies don’t meet the 72-hour reporting requirement, so they want to actually make it shorter! Hey, you effing morons, if I can’t run a 4-minute mile, do you think requiring that I run a 3-minute mile is going to make me faster? At the end of 72 hours, all I know is that it appears that something happened. I don’t want to share that with anyone. There have been multiple cases where Company X’s data has been found online and all the fault lies with Company Y.
Company X isn’t going to say squat that isn’t absolutely required by law until they know what they’re talking about, and no “greater good” is going to change that.
—Sammy Migues, principal scientist, Synopsys
To Sammy’s point, no one wants to be “that guy,” but most (not all) critical infrastructure organizations are used to a high level of scrutiny. They are used to being called out for violations, so another stick to beat them over the head with is not an incentive; it will only reinforce their reluctance to report.
My initial thought is to leverage parts of the model the FAA has in place for in-flight incidents. Having said that, it is a joke to believe there would be no traceability; the information will be leaked and judged in the court of public opinion. This type of activity is best left to industry. The work that Auto-ISAC and H-ISAC [formerly NH-ISAC] have accomplished in a short period points to some great successes. Granted, there needs to be more transparency and safe harbor to share incident information in a more public forum, but the idea that a difficult-to-execute law would carry no “punitive measures” is ludicrous.
On another note: Define “incident.” Multiple standards have conflicting guidance and requirements for the artifacts needed to declare an incident. Should a single entity create the definition of “incident”? No! There are too many variations for a single definition to encompass the entire sector, and forcing one would lead to more challenges.
If anything, there should be better ways to provide safe harbor and guidance that encourage organizations to share incident information. Encourage the development and management of industry-led ISACs that allow for flexibility and continuity of service. The many ISACs would probably welcome an overarching ISAC that helps develop consistent messaging where applicable, something that could be carried out by a NIST-like entity. There are better ways to address this issue; more government regulation is not one of them for this particular topic.
—Chris Clark, principal security engineer, Synopsys