Gary McGraw: This is the 113th in a series of interviews with security gurus and practitioners and I am happy to have with me today Chandu Ketkar. Hi Chandu. Chandu Ketkar is the technical manager (now Principal Consultant) at Cigital where he’s focused on architecture analysis and code analysis for several years. Chandu has over 20 years of experience in the software industry as a developer, manager and entrepreneur. Past positions include work at Alcatel, Swift, IXI and Plutonian. Chandu has an MS in computer science from Virginia Tech and an MS in EE from IIT. Chandu lives in Virginia with his family. Thank you for joining us today.
Chandu Ketkar: Thanks for having me.
McGraw: As a person who worked as a developer, what sort of insight can you give us on the mind of a developer as opposed to the mind of a security person?
Ketkar: Having worked as a developer for a long time, I can say that developers are really focused on building features. If you look at a typical developer’s work life, they are going from sprint to sprint to sprint, and the focus is on getting things done, not always on getting things done right. That’s where the security guys come in. Their focus is how to architect things in a secure way, how to code things in a secure way. The security guys bring that focus, which developers lack for lack of time and other factors.
McGraw: Do you think that the fact that software developers are taught and managed to think about features and functions makes them treat security as just another feature instead of a property of the system?
Ketkar: It’s about the culture of the company. I worked in some companies where security was part of the culture of the development teams, and there are other companies where I worked where security had no role to play; it was all about features. It really flows from the top down. That’s how I look at it. Since I joined Cigital and started to think about this more, I have begun to believe that developers and security people are really playing on the same team more and more. For myself, I think that if I had had the security knowledge I have today, it would have made me a much better developer.
McGraw: Back in the past. Would it be fun to go back and fix all your old code?
Ketkar: Get a time machine to go back to the past.
McGraw: So, what can software security do better to appeal to developers and architects that may not know too much about security yet?
Ketkar: Part of it is education, and I like to connect dots. If you look at economics, there is the principal-agent problem. We security people are the principals: we own the security problem. But we are not the agents of change. When we say, “Hey, we need to design something in a secure way or code something in a secure way,” the agents of change are the engineering teams. So the ‘us versus them’ mentality, which exists in some corners of the security world, is something I’m not a big fan of.
McGraw: Me neither.
Ketkar: We need to take a page from how the understanding of the principal-agent problem has revolutionized the way boards and management teams play on the same team. We need to do the same in security, and I am very hopeful that we are leaning in that direction.
McGraw: Can you give me an example of a domain where that pattern existed and it went away or was fixed?
Ketkar: I think incentive stock options (ISOs) were created expressly to align the agents (the management team) with the principals (the shareholders and board members of a company). Incentive stock options have aligned their interests, and I think we really need to figure out how we can do the same in security.
McGraw: That’s interesting. Let’s talk about some of your past work. One of your projects at Cigital a million years ago was this thing called ESP, which focused on automating static and dynamic analysis and combining results in a sophisticated package. What lessons did you learn from working on ESP with regard to tools and their actual use out there in the real world?
Ketkar: I learned two lessons. One of the lessons John Steven made me aware of is that today’s tools are good, but they are leaving out a large class of problems. For example, a typical static analysis tool like Fortify works well in some environments, but when we start using various dependency injection frameworks, where the code is wired at configuration and run time, these tools don’t really do a great job. So there is this depth problem. But the more fundamental problem, as I see it, is the integration problem. You go to a big enterprise, and they have ten vendors, ten different tools producing ten different reports, and ten different business processes looking at what those reports are trying to mitigate. So it’s a much bigger integration problem, and I think bigger organizations need help in that space. When I was part of ESP, I fashioned ESP as an integration platform: “Hey, you can bring your WhiteHat results, your Fortify results, all types of results, and we can normalize them so that you have a single findings database.” And then you can work off of that.
McGraw: I guess that makes it easier to figure out what to fix.
Ketkar: Yes. That makes it easier to fix, and if you’re a manager, you really want a holistic view of what’s happening, not to go to different silos asking, “What’s happening in static analysis? What’s happening in WhiteHat?” You really want a holistic view of your security landscape, and that is not possible unless we have supporting tools and infrastructure built for it.
McGraw: Let’s talk about code review tools a little bit more in depth. Code review tools have been around for a decade, actually slightly more than a decade, and they’re all the rage. But they really don’t even begin to scratch the surface of design. Why not?
Ketkar: If you look at software development, by the time you’re talking about code reviews, you’re already late in the game. Design has already happened. A lot of key decisions have already been made: whether I’m going to use two-factor authentication, whether I will use token-based authentication. Those decisions are made before anything goes to coding. So, to me, fixing problems in coding is late in the game.
McGraw: You have to do it to catch bugs, but there are also flaws.
Ketkar: Oh, yes, absolutely. There are flaws. You’re going to catch some stuff, but independent Microsoft research shows a roughly 50/50 split between flaws and bugs. So you’re just not going to be able to catch flaws by looking at code.
McGraw: Let’s talk about software design and security a little bit more. Why is architecture risk analysis important for understanding security posture of a piece of software? And how does that really differ from code review? Like, code review can use a tool; what do we use for architecture risk analysis? And what are the key steps?
Ketkar: Architecture risk analysis is really a big area. The foundation of architecture risk analysis, and of most security activities, is threat modeling. Threat modeling is the fundamental piece that gives you insight into the threat landscape of your application or system. Once you have that understanding, it can drive many decisions in your SDLC. What are those decisions? You could use threat modeling to drive security requirements. You could use it to drive pen testing. You could use it to drive security testing by creating abuse cases and security test cases. If you look at the security touchpoints from Cigital, all these ideas are very well expressed in that model. But architecture risk analysis is different from threat modeling.
McGraw: Okay, let’s try to get to the bottom of that. How are they different?
Ketkar: Threat modeling is fundamentally a technical activity. It looks at your threat landscape, at all possible threats that could compromise your application, and identifies assets, potential threats, potential controls you may need, and so on. It’s a very fundamental activity. However, when you wear a businessman’s hat, you have limited resources to spend. You can’t fix every problem. So which problem should I fix? Which threat should I look at first? That’s where architecture risk analysis comes into play, because now we are talking about a different paradigm. We are in the risk landscape now.
McGraw: Risk Management land. Talk about what kind of steps go into an architecture risk analysis in your view.
Ketkar: In architecture risk analysis, there are two fundamental things that we do. One is to understand the business context. So we understand what the business objective is. What are the business risks? That is one stream of understanding.
McGraw: So you answered the, “Who Cares?” problem.
Ketkar: Correct. And why should I care? If I lose this data, what damage does it do to my brand? Should I fix it because my security guys are telling me to fix it, or should I accept the risk? Those types of decisions can be driven, to some extent, by doing business risk analysis. Then we do threat modeling as the second piece. In detail: we decompose the system, and we fundamentally focus on any two subsystems that communicate across different trust zones, because that’s where we find most of the problems; that’s where most of the assumptions get violated. We continue through this decomposition and modeling of the system. Once we do that, we start identifying assets. We always wear a security analyst’s hat, so assets are things that are important from a security viewpoint, which means that if they’re compromised, it causes some business-level impact. Then we find out what threats could compromise these assets and what controls we could potentially implement. When we do the analysis, we obviously do known-attack analysis: we have a large library of attacks that we know.
McGraw: Right. So those are written down in advance, or known in advance.
Ketkar: We have checklists that we use. Then we understand all the dependencies; we look at dependency risk. So we take a holistic risk view when we do threat modeling, and then we tie those results to the business analysis, prioritize risks, and say, “We believe threat modeling suggests that these findings are really high priority, and they will have a big impact if you do not fix them.”
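[Editor’s note: the workflow Ketkar describes (decompose, identify assets, enumerate threats, map candidate controls, then rank by business impact) can be sketched as a tiny data model. This is a minimal illustration, not any Cigital tool; all names and scores are hypothetical.]

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    asset: str            # what the threat compromises
    business_impact: int  # 1 (low) .. 5 (severe), from the business-risk stream
    likelihood: int       # 1 .. 5, informed by known-attack analysis
    controls: list = field(default_factory=list)  # candidate mitigations

def prioritize(threats):
    """Rank threats by business impact first, then likelihood,
    so limited resources go to the highest-priority findings."""
    return sorted(threats,
                  key=lambda t: (t.business_impact, t.likelihood),
                  reverse=True)

# Hypothetical findings from a threat-modeling exercise.
findings = [
    Threat("SQL injection via search form", "patient records", 5, 4,
           ["parameterized queries"]),
    Threat("verbose error pages", "stack traces", 2, 5,
           ["generic error handler"]),
    Threat("session fixation", "user sessions", 4, 2,
           ["rotate session ID on login"]),
]
```

Calling `prioritize(findings)` puts the SQL injection finding first, because its business-level impact dominates, which is exactly the point of tying threat modeling back to business risk.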
McGraw: So this sounds like a lot of work. How do you scale something like that?
Ketkar: Well, that’s a great question, because as much as I love threat modeling and architecture risk analysis, I find that our clients, and most companies I have worked with, are struggling to apply them across an application portfolio. Right now, the application of these architecture risk-based services is limited to highly critical applications, which is not sufficient.
McGraw: So they’re not scaling across the portfolio at all times.
Ketkar: Correct. Scaling is a problem right now. We have come up with some solutions, and we have some really neat ideas that we are working on. For example, let’s look at threat modeling. There are a couple of issues when you go into threat modeling and ARA. One is the scaling issue. Another is a consistency, reliability, and efficiency issue. They are linked, but they’re two separate issues. So how do I improve the consistency of my threat modeling? How do I make it reliable?
McGraw: I guess writing stuff down helps (both laugh).
Ketkar: Let’s talk about STRIDE, because STRIDE is used…
McGraw: This is Microsoft’s approach to threat modeling.
Ketkar: I just want to point out where STRIDE does not quite work. If you spell out S-T-R-I-D-E (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), these are really impacts of attacks. When you look at tampering, how many ways can you tamper with data? I can tamper with data on a communication channel, I can tamper with it in the database, I can mount memory-based attacks; I can tamper with it all over the place. So when someone using STRIDE focuses on tampering, they need checklists to ensure they are covering all the attacks that could tamper with data. I worked with a couple of clients and found that they use STRIDE, but they’re highly inconsistent, and they are not getting the results they wanted.
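[Editor’s note: the gap Ketkar points out is that a STRIDE category names an impact, not the concrete attacks behind it. A written-down expansion, sketched below with illustrative entries only, is one way to make two analysts walk the same list.]

```python
# Hypothetical checklist mapping: each STRIDE category expands to the
# concrete attack vectors an analyst must walk through. Entries here
# are examples, not a complete catalog.
STRIDE_CHECKLISTS = {
    "Tampering": [
        "data modified on the communication channel (no TLS / weak TLS)",
        "data modified at rest in the database",
        "memory-based tampering (debugger, process injection)",
        "tampering with client-side state (hidden fields, cookies)",
    ],
    "Spoofing": [
        "credential theft or replay",
        "missing mutual authentication between services",
    ],
    # Repudiation, Information Disclosure, Denial of Service, and
    # Elevation of Privilege would be filled in the same way.
}

def checklist_for(category):
    """Return the written-down checks for a STRIDE category, so the
    coverage no longer depends on which expert is in the room."""
    return STRIDE_CHECKLISTS.get(category, [])
```

An analyst reviewing “Tampering” then iterates `checklist_for("Tampering")` instead of relying on whatever attacks happen to come to mind.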
McGraw: I guess what happens is that if you have experts that are doing this and they’re pretty good at it, they have this list in their head, but they might have a different approach than some other expert in the next cube over.
McGraw: There’s a lot of work to be done there clearly.
Ketkar: Again, we could learn from some other industries; Gary knows I always like to connect dots. For example, there is a great book called The Checklist Manifesto.
McGraw: I like the title.
Ketkar: It was written by a Harvard surgeon, Atul Gawande, who is trying to change the healthcare industry, because 15 years ago the surgeons there looked down on checklists.
McGraw: Oh yeah. So this is the guy who worked on the critical care stuff.
Ketkar: Correct. And he learned from the airline industry. It’s a very interesting story he tells about the legendary B-17 bomber from Boeing, which the military bought. When they went for a test flight, one of their most senior pilots crashed the plane and died. The military was trying to figure out what had gone wrong, and they realized that the plane was too complicated to fly. It was the first four-engine plane, and pilots could not fly it from memory. So Boeing came up with checklists, and that was the beginning of checklists: pre-flight, in-flight, and post-flight. Commercial aviation, which learned from the military, has become very safe now. I think the same thing is happening in healthcare. Surgeons who jeered at checklists, thinking they were smarter than checklists, are now using checklists very successfully, and it is increasing the reliability of the healthcare industry. I think security is no different. We need to say, “OK, you know what? We need the Amits of the world and the John Stevens of the world to solve the really hard problems, but 70 to 80 percent of the problems are so mundane, so well known, and so well researched. Let’s write them down. Let’s write the solutions down and create some checklist-based approaches that give us reliability and consistency.”
McGraw: We tried to do that with IEEE Center for Secure Design work in the beginning. Have you found that work to be of use?
Ketkar: Yes, absolutely. At a high level, it gives me a good framework to think with. But what I’m talking about is more at the operational level. For example, pattern-based threat modeling. When we decompose a system, we always see REST services, databases, enterprise services, message queues. We know, when we see a REST service, what to look for. Can we use an 80-20 rule and write down what threats I should be looking at and under what circumstances those threats are applicable (the applicability criteria), because not every threat applies to every system? Once a threat applies, we know what controls we need to mitigate it. We can eventually tie that to security requirements and abuse cases. It can really, I believe, change the way we do threat modeling.
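[Editor’s note: the “applicability criteria” idea can be sketched as a pattern entry for one component type, here a REST service. The threats, traits, and controls below are illustrative examples, not an actual pattern catalog.]

```python
# One hypothetical pattern entry for a REST service. Each threat carries
# an applicability criterion: a predicate over the traits the analyst
# observed, so a threat only lands on systems where it can apply.
REST_SERVICE_PATTERN = [
    {
        "threat": "parameter tampering on resource IDs",
        "applies_if": lambda traits: "object-ids-in-url" in traits,
        "controls": ["server-side authorization check per object"],
    },
    {
        "threat": "token theft via unencrypted transport",
        "applies_if": lambda traits: "tls" not in traits,
        "controls": ["require TLS", "use short-lived tokens"],
    },
]

def applicable_threats(traits):
    """Filter the pattern down to the threats whose applicability
    criteria match this particular system's traits."""
    return [p for p in REST_SERVICE_PATTERN if p["applies_if"](traits)]
```

A service that exposes object IDs in its URLs but already terminates TLS would match only the first entry, which is the 80-20 payoff: the routine threats and controls come from the written pattern, and the expert's time goes to the unusual ones.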
McGraw: And ultimately, you just want those designs to be done right in the first place. So I suppose design patterns are the ultimate goal of this kind of work.
Ketkar: Correct. I’m doing a lot of work with some folks at Cigital on security architecture patterns, and that solves a different business problem. The business problem it addresses is this: we are getting better at knowing what the right things are from a security viewpoint. So I know what the right thing to do is, but how to do it right, how to operationalize it, is where I think we have some challenges. For example, a big organization has policies and standards that capture its security requirements, and now we have to drive those policies and standards through the engineering teams so we do the right thing. We need to put some tools in the hands of our security people and the engineering teams, and security architecture patterns are one of those tools. So when you say you want to use a token-based authentication system, you understand what the best practices are, what the risks are, and you have real actionable guidance.
McGraw: You’ve been active in the medical device domain – we’ve even written about this together, you and I – What kind of errors do you often find in medical devices? And, I know you take a holistic approach to this when you do these sorts of things yourself. So it includes ARA but it also includes things bigger than that I suppose. What kinds of problems are you typically finding in today’s medical devices?
Ketkar: So, medical devices. They’re fundamentally different from a lot of other embedded systems I have looked at because their life span is very long. Something that was built in the late 80s or mid 90s is still operational today. So what does that do? As a security guy, you can go and say, “Yeah, do RSA encryption, or do AES-256,” and they will say, “Well, I can’t. I don’t have the firepower.”
McGraw: Not enough computation.
Ketkar: Correct. So that is the first problem we deal with as experts. The second problem I have found is that the context in which some of these devices operate really requires us to create more pragmatic and innovative solutions. Just to give you an example, I looked at a couple of devices used in operating rooms. You turn the machine on and it begins working. The machine contained patient data. My first reaction was, “Hey, is there no authentication?” (Both laugh.) And they all began laughing and said, “No! No! No! Doctors will never authenticate, because this is brain surgery equipment and it had better turn on and work.” So we need to balance security and the context in which the machine is being used. It really compelled me to think hard and create innovative solutions. We have to accept the reality. Going back to Gary’s original question, I am finding all kinds of issues, starting with physical security: with all of these machines you can simply get the firmware out, and platforms are not hardened. If you just look at the recent Hospira drug pump disclosure, the drug pumps accept any firmware. They don’t check whether the firmware is signed, and that is really security 101. You are seeing basic issues when it comes to security. The second thing I’ve found is that medical devices operate in an environment with two security concerns: one is patient safety; the other is privacy.
McGraw: And I think that HIPAA has made people pay more attention to privacy than safety, which is a high irony. I don’t think you care about your medical data getting lost if you’re dead. I mean, I wouldn’t (both laugh).
Ketkar: That’s an interesting way to look at it, Gary. But I’ve done some research, and medical data fetches a lot of money on today’s black market. Look at what happened with Anthem: someone got hold of an admin password to the database and stole 80 million records. I simply cannot fathom it. When I read the Anthem story, two things came to mind. The first is separation of duty, a very fundamental security principle. One admin had access to 80 million records; how is that possible? The military solved that problem a long time ago. The second thing is defense in depth. Today’s databases support features such that if a query would return 80 million rows, the database should refuse to give you that. (Both laugh.) They support a maximum number of rows returned per query. So you can have defense in depth, and you can have separation of duty. A lot of these things were a mess in the case of Anthem.
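[Editor’s note: the row-cap idea Ketkar describes can also be enforced in application code as one defense-in-depth layer. This is a minimal sketch using Python’s standard sqlite3 module; the function name, cap value, and schema are illustrative, and real deployments would enforce limits in the database or data-access layer as well.]

```python
import sqlite3

class TooManyRowsError(Exception):
    """Raised when a query tries to return more rows than the cap allows."""

def fetch_capped(conn, sql, params=(), max_rows=1000):
    """Defense in depth: refuse to hand back a result set larger than
    max_rows, so a single stolen credential cannot dump every record
    with one query."""
    cur = conn.execute(sql, params)
    rows = cur.fetchmany(max_rows + 1)  # read one extra row to detect overflow
    if len(rows) > max_rows:
        raise TooManyRowsError(f"query exceeded the {max_rows}-row cap")
    return rows
```

A bulk-export job that legitimately needs more rows would go through a separate, separately authorized path, which is where this control meets the separation-of-duty principle from the same discussion.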
McGraw: So, we’re currently building BSIMM 6 this week and next week which is going to include a healthcare vertical. Care to hazard a guess about how healthcare security compares to financial services security? What’s your guess?
Ketkar: I think I’m not even guessing here. (Both laugh.)
McGraw: What’s the answer then?
Ketkar: The answer is that financial security is way, way better than healthcare security.
McGraw: Yeah, that’s what we’re finding. So, the good news is that healthcare as a vertical is beginning to pay attention to this and they’re starting to work on it. But, boy do they have some work to do.
Ketkar: Yes, and this is actually very interesting. When I gave a talk (thanks to you, Gary) at the Archimedes Conference at the University of Michigan, I simply shared my findings from the assessments I had done on medical devices. And here comes a company from Boston, BitSight, whose CTO, using only publicly accessible data, reached the same conclusions that we at Cigital had reached.
McGraw: Without really looking at devices.
Ketkar: Correct. So the external and internal view are really in sync in this case.
McGraw: That is very interesting. So, thanks. This has been good. I’ve got one last question for you. What’s your favorite piece of Indian classical music?
Ketkar: Oh, OK. That’s going to be a long answer.
McGraw: Well, you have to hold it to 30 seconds or less.
Ketkar: I’m trained as an Indian classical musician, and I like the Jaipur style of singing. Kishori Amonkar is one of the real exponents of that style, and I like her very much. I enjoy all types of Indian classical music as well as Western classical.
McGraw: Thanks a lot. This has been very interesting.
Ketkar: Thanks, Gary.
McGraw: This has been a Silver Bullet security podcast with Gary McGraw. Silver Bullet is co-sponsored by Cigital and IEEE Security and Privacy magazine and syndicated by SearchSecurity. The May/June 2015 issue of IEEE S&P magazine focuses on diversity, crypto, and identity management.