Since the launch of the popular Verizon Data Breach Investigations Report (DBIR) and its subsequent imitators, I have been asking what I believe to be a simple and fundamental question: are the breaches in these reports representative of all breaches, or only of the breaches clumsy enough to be detected?
The basic assumption is that these reports include breaches because those breaches were detected. I suppose I could attribute that detection to the savvy of the attacked organizations. Alas, statistics from these very studies show that most attacks are detected by third parties, not by the attacked party. So I feel comfortable eliminating any bias toward the detection acumen of the attacked organization.
Think back to the hide-and-seek of your childhood. In my experience, the worst hiders were very likely the first caught. If you need further illustration, might I suggest you consult the old Monty Python "How Not to Be Seen" sketch. The better you were at hiding, the less chance you had of being found. Shoot, I had one buddy who was so good he might still be hidden if he hadn't gotten hungry and come out to eat.
So it seemed sensible to ask whether the reports were skewed toward the worst hiders in the attacker population. Not to be coarse, but are we finding the work of the left tail of the bell curve of hacker competence and intelligence?
Let me stop here and make sure there is no misunderstanding about how I feel about the validity and usefulness of the breach reports. I worked with many of those responsible for the DBIR in my Cybertrust days, and I know firsthand that they are smart, dedicated, and knowledgeable professionals. The report is a must-read, and I have used its statistics liberally. My question really lies with the available data.
I have raised this question in the past, and the responses ranged from casual curiosity to accusations that I was sowing fear, uncertainty, and doubt (FUD). That struck a nerve, as I have tried very hard to stay away from FUD in my work, and it felt like a harsh interpretation of my question. Of course, this entire subject is a bit of a Gordian knot, as it is hard for me to produce proof to support my fear that we don't know what we don't know. But the more I look at the statistics, the more I see unanswered questions that lie beyond the available evidence.
The breach counts in the collective reports actually rely on two things: detection and disclosure. The DBIR is based on the Verizon caseload, a handful of participating partners, and cooperation from law enforcement agencies in several countries. How many breaches are detected that never show up in the DBIR or the other reports? How many breaches are never reported to the authorities? Regulatory mandates require an organization to disclose breaches that involve the loss of certain types of data, but what happens when those regulatory lines are not crossed?
I go back to what we don't know. How many breaches go undiscovered? How many breaches are discovered but not disclosed? And are the detected and disclosed breaches representative of the broader population, or only of the less well-crafted and less well-executed attacks?
These questions have ramifications, particularly in the context of the evidence we do have. For example, if it turns out that the discovered breaches are the work of attackers who are not exactly the sharpest knives in the drawer, what does that say about the ability of organizations to detect breaches, given that the average time from infiltration to detection is measured in months according to various reports?
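To make that sampling concern concrete, consider a minimal simulation sketch in Python. Every number in it is invented: attacker skill is drawn uniformly at random, and detection probability is simply assumed to fall as skill rises. Nothing here models any real report; it only shows the shape of the problem.

    import random

    random.seed(7)

    # A hypothetical population of 100,000 attackers. "Skill" is drawn
    # uniformly between 0.0 (clumsy) and 1.0 (expert); every number here
    # is invented purely to illustrate the sampling effect.
    population = [random.random() for _ in range(100_000)]

    # Assumed relationship: the clumsier the attacker, the more likely
    # the breach is detected. A skill-0.0 attacker is caught 90% of the
    # time; a skill-1.0 attacker is almost never caught.
    def detection_probability(skill):
        return 0.9 * (1.0 - skill)

    # The "breach report" sees only the detected subset.
    detected = [s for s in population
                if random.random() < detection_probability(s)]

    def mean(values):
        return sum(values) / len(values)

    print(f"Mean skill of all attackers:      {mean(population):.2f}")  # roughly 0.50
    print(f"Mean skill of detected attackers: {mean(detected):.2f}")    # roughly 0.33

The full population averages a skill of about 0.50, but the detected subset averages about 0.33. None of this proves the real-world reports are skewed; it shows only that if detection probability varies with attacker skill, a sample built solely from detected breaches cannot be read as a portrait of the whole attacker population.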
We need more data. Unfortunately, a reasonable conclusion to draw from the collective evidence of these studies is that most organizations are not equipped to detect breaches. That, of course, adds to the conundrum: the evidence suggests we will struggle to gather the very evidence we need. Another twist in the Gordian knot.
In the end, organizations should consider the data from these reports but exercise care in using it to make security decisions. Maturity models such as the Building Security In Maturity Model (BSIMM) should be added to balance the conversation. Organizations need to thoughtfully weigh their threats and risks in the context of their unique operating environment and security readiness. Most of all, never confuse a lack of detection with assurance of a lack of attack.
Jim Ivers is the senior director of marketing within Synopsys' Software Integrity Group where he leads all aspects of SIG's global marketing strategies, branding initiatives, and programs, as well as product management and product marketing. Jim is a 30-year technology veteran who has spent the last ten years in IT security. Prior to Synopsys, Jim was the CMO at companies such as Cigital, Covata, Triumfant, Vovici, and Cybertrust, a $200M security solutions provider that was sold to Verizon Business. Jim also served as VP of Marketing for webMethods and VP of Product Management for Information Builders.