Adelphi University has launched the Anita D’Amico Endowment Fund to support research related to cyber security and psychology.
People’s attitudes, motivations, and behaviors are at the heart of many cyber security practices. Dr. Anita D’Amico, one of the newest staff members at Synopsys SIG, has spent more than two decades studying human factors in cyber security. Her alma mater, Adelphi University, recently launched an endowment in D’Amico’s name to fund cyber security and psychology research.
D’Amico was appointed vice president of market development at Synopsys Software Integrity Group in June 2021, shortly after Synopsys acquired Code Dx, a startup where she served as CEO. Her background differs from that of most Synopsys staff: she is an experimental psychologist who has applied her talents to cyber security for over 20 years, and to software security for the past decade. We spoke with D’Amico to learn more about the intersection of cyber security and psychology, and about the endowment fund.
Psychologists conduct research on how to improve human performance, how to avoid human mistakes that can lead to unsafe situations, and how people can work together productively. Early in my career I studied human factors that contribute to merchant ship accidents; worked on the design of the space station habitation module to ensure that teams could live and work collaboratively in space; and designed displays for surveillance aircraft. I was accustomed to applying psychological principles to a diverse set of domains.
So when Northrop Grumman asked me to head up their first Information Warfare team back in the 1990s, I saw it as a new and increasingly important domain in which to practice. I was drawn to the important role that human security analysts play in detecting and responding to cyber incidents. The specific question that caught my interest was: “How would the U.S. Department of Defense (DoD) know if it was the target of a well-orchestrated cyber attack by an adversary?” At the time, DoD agencies had firewalls and intrusion detection systems, but these systems didn’t connect and correlate their results. It was up to the people monitoring these systems to mentally correlate the results and detect patterns that could be indicators of a coordinated attack. So I started looking into what patterns the experts were detecting, and how those patterns could be visualized to make it easier for people to see them.
I worked on the visualization of cyber security information for several years. Initially I worked on visualizations to make it easier for people to see patterns in data that were not automatically detected by the information security systems. People are excellent pattern detectors, and sometimes can see things that even modern intelligent systems may not. But even if automated systems detect patterns, people want to see those patterns visualized so that they can understand them, and equally important, explain them to others. So I worked on research to design visualizations that conform to the perceptual and cognitive abilities of human beings.
I also performed research on the decision processes of cyber incident responders, and the information gaps that hindered their decisions. In fact, it was my interest in cyber security decision-making that led me to leave Northrop Grumman and start an organization called Secure Decisions, a division of a software company focused exclusively on R&D to help people make more-secure decisions.
About 10 years ago I responded to a Department of Homeland Security (DHS) call for research to correlate the results of many different software security scanners. At the time, application security (AppSec) analysts were manually pulling together thousands of findings from scanners like Coverity®, Fortify, and FindBugs, as well as manual code reviews. It was very labor- and time-intensive. It reminded me of the correlation problem I tackled a decade earlier, trying to figure out how people mentally correlate the results of different firewall and intrusion detection systems. But this time it was focused on software security systems. Ken Prole, my co-principal investigator, and I led a team of software engineers and AppSec specialists to come up with a solution. After another six years of research, including how to use machine learning to automate the human analysts’ prioritization practices, we spun out a separate company, Code Dx, to commercialize the results of the research. Today the results of that research can be found in the Code Dx AppSec vulnerability correlation and risk management system, which was recently acquired by Synopsys.
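The kind of correlation described above can be illustrated with a minimal sketch. All field names here are hypothetical (real scanners emit richer formats, such as SARIF); the idea is simply to normalize each tool’s findings to a common key and merge duplicates that point at the same location and weakness:

```python
from collections import defaultdict

def normalize(finding):
    """Map a tool-specific finding onto a common (file, line, CWE) key.
    The field names are illustrative, not any scanner's actual schema."""
    return (finding["file"], finding["line"], finding.get("cwe"))

def correlate(results_by_tool):
    """Group findings from multiple scanners that refer to the same
    location and weakness, so one issue is reported once, along with
    the set of tools that flagged it."""
    merged = defaultdict(set)
    for tool, findings in results_by_tool.items():
        for finding in findings:
            merged[normalize(finding)].add(tool)
    return merged

issues = correlate({
    "scanner_a": [{"file": "auth.c", "line": 42, "cwe": "CWE-89"}],
    "scanner_b": [{"file": "auth.c", "line": 42, "cwe": "CWE-89"},
                  {"file": "io.c", "line": 7, "cwe": "CWE-120"}],
})
# Two distinct issues remain; the first was reported by both tools.
```

A production system also has to reconcile disagreements between tools and rank the merged findings, which is where the machine-learning prioritization research mentioned above comes in.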
There sure are. Let’s start with risk-taking and risk tolerance. Many cyber security incidents are traced back to people not taking cyber risks seriously. Organizations fail to back up their critical systems, patch their software, and institute and enforce authentication processes. They know that ransomware and data breaches occur, but they don’t perceive that their organization is at risk. Individual people expose themselves to similar risks through bad “cyber hygiene,” such as poor password practices or jumping onto unsecured Wi-Fi. There is a disconnect between organizations’ and individuals’ knowledge that a serious cyber incident can happen and their willingness to take the precautions that would avoid it. Why are some organizations and people so willing to take risks while others aren’t? And what will it take for them to change? People certainly are capable of changing their risky attitudes and behaviors. There have been major societal and individual changes regarding risks related to seat belts, cigarette smoking, and sexual practices. So what are the factors that can result in changes to cyber risk-taking?
Another particular interest of mine is how secure code development is influenced by human factors. Most cyber breaches are traced back to an attacker exploiting a software vulnerability that was inadvertently inserted by a software developer. What are the characteristics of software developers or development teams that make them more or less likely to produce vulnerable code? We know that team size and context-switching have an effect, but what else is at play? If we understood the factors that contribute to developers introducing vulnerabilities, then we could take measures to mitigate those factors. It’s the ultimate “shift left” in software security.
And I remain interested in how data representation affects security decision-making, particularly with respect to software security. What are the fundamental decision processes that developers, DevOps security teams, and CISOs engage in about software security? What is the essential data they need to make informed decisions and how is that best represented and delivered to them?
Adelphi University, where I received my PhD, just launched the Anita D’Amico Endowment Fund for cyber security and psychology research. Psychological principles help us understand the human side of cyber security—the behaviors of attackers and defenders. The purpose of the endowment fund is to finance interdisciplinary research in cyber security and psychology conducted by faculty, graduate students, and upperclassmen.