Show 119: Jacob West Discusses the IEEE CSD, Bugs, Flaws, and Wearable Devices

February 29, 2016

As the Chief Architect for Security Products at NetSuite, Jacob West leads research and development for technology to identify and mitigate security threats. West has over a decade of experience developing, delivering, and monetizing innovative security solutions. Prior to his role at NetSuite, he served as the CTO for Enterprise Security Products (ESP) at HP where he founded and led HP Security Research. West is the co-author of Secure Programming with Static Analysis, and is a founding member of the IEEE Center for Secure Design. Listen as Gary and Jacob discuss secure design, the critical difference between bugs and flaws, and wearable device security.

Transcript

Gary McGraw: This is a Silver Bullet Security Podcast with Gary McGraw. I’m your host Gary McGraw, CTO of Cigital and author of Software Security. This podcast series is co-sponsored by Cigital and IEEE Security and Privacy Magazine. For more, see www.computer.org/security and www.cigital.com/silverbullet. This is the 119th in a series of interviews with security gurus and I’m pleased to have back on Silver Bullet, Jacob West. Hi Jacob.

Jacob West: Hey Gary. It’s great to be here.

McGraw: Jacob West is Chief Architect for Security Products at NetSuite. In that role, he leads research and development for technology to identify and mitigate security threats. Prior to that, West was CTO for Enterprise Security Products at HP, where he founded and led HP Security Research. Jacob co-authored Secure Programming with Static Analysis in 2007 with Brian Chess, and that book was published in my Addison-Wesley Software Security series. He is currently a co-author of the BSIMM and a founding member of the IEEE Center for Secure Design, which we’re going to talk about today. Jacob lives with his husband, Johnathan, in San Francisco. Thanks for joining us today.

West: Looking forward to the conversation.

McGraw: So, way back in episode 78 (I had to go look it up) we discussed the arc of your career, from intern to pundit, building static analysis tools, BSIMM stuff—I think BSIMM4 had just come out and now we’re on BSIMM6. This time, I’m interested in discussing the IEEE Center for Secure Design work you just published. So first of all, what is IEEE CSD and why is it important?

West: As you know, we founded the IEEE CSD a couple of years ago to try to expand the focus in security from what we think the main focus today has been, which is finding and fixing bugs, to include looking for, and actively trying to avoid, design flaws that we believe can and do lead to very serious security problems today. So, not just focusing on the implementation, but thinking about the security implications of the design from the beginning all the way through the project.

McGraw: That makes sense. Can you explain with some examples—an example of a bug and an example of a flaw so people can get in their mind what the difference is?

West: Sure. So, an example of a bug is something like cross-site scripting, let’s say. Developers try to build a web page and they want to generate nice content for their users. What they’re not thinking about is an attacker supplying some malicious value. Because they don’t scrub that value, because they don’t validate it before they output it into the page, that’s a bug, right? It’s a mistake they made in their code. The attacker is able to deliver not only the characters the programmer might have expected, but also script that might run in the user’s browser and execute an attack. So, no one intended for the website to have that feature. But the programmer made a mistake when he or she was building the site that allowed that attack to succeed. That’s a traditional security vulnerability or bug.
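
To make the bug West describes concrete, here is a minimal sketch in Python; the page-building functions are hypothetical illustrations, not code discussed on the show:

    import html

    def render_greeting_unsafe(username: str) -> str:
        # Bug: attacker-controlled input is concatenated straight into the page.
        # A username like "<script>steal(document.cookie)</script>" executes
        # in the victim's browser.
        return "<p>Welcome, " + username + "!</p>"

    def render_greeting_safe(username: str) -> str:
        # Fix: escape the value for the HTML context before output.
        return "<p>Welcome, " + html.escape(username) + "!</p>"

    print(render_greeting_unsafe("<script>alert(1)</script>"))  # script survives intact
    print(render_greeting_safe("<script>alert(1)</script>"))    # rendered as inert text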

McGraw: Ok. How about a flaw?

West: An example of a flaw is a design decision—so something the system did intend to do—but that probably isn’t a good idea from a security standpoint. One example would be thinking about authentication mechanisms and how we authenticate a user. A system might be designed to allow a simple login and password. It might be designed to allow any arbitrary password, even very insecure ones like a dictionary word. Now, that wasn’t a mistake a programmer made when he or she was building the site. That was part of the design. Or, if you really think about it, a requirement that was missing from the design: that the password scheme require strong passwords.
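
As an illustration of what that missing requirement might look like once it is written into the design, here is a minimal sketch in Python; the thresholds and the tiny dictionary are hypothetical stand-ins, not a policy from the show:

    # Design-level requirement: the password scheme must reject weak passwords.
    DICTIONARY_WORDS = {"password", "letmein", "dragon", "sunshine"}  # stand-in list

    def password_meets_policy(password: str) -> bool:
        if len(password) < 12:                      # minimum length
            return False
        if password.lower() in DICTIONARY_WORDS:    # no plain dictionary words
            return False
        # Require at least two character classes.
        classes = [any(c.islower() for c in password),
                   any(c.isupper() for c in password),
                   any(c.isdigit() for c in password)]
        return sum(classes) >= 2

    assert not password_meets_policy("dragon")        # rejected: short, dictionary word
    assert password_meets_policy("correct-Horse-42")  # accepted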

McGraw: Right. Another example which I like to use because it’s really simple is an error of omission, and that is “forgot to authenticate user.”

West: Absolutely. That’s a huge hole in the design if you want that to be a secure system.
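
A minimal sketch of that omission, again in Python with a hypothetical handler and a toy session store (not from the show), might look like this. The flawed handler performs a sensitive action without ever checking who is asking:

    SESSIONS = {"token-abc": "alice"}  # toy session store: token -> user

    def current_user(request: dict):
        return SESSIONS.get(request.get("session_token"))

    def delete_account_flawed(request: dict) -> str:
        # Flaw: no authentication check at all; anyone who can reach this
        # endpoint can delete any account.
        return "deleted " + request["target_user"]

    def delete_account_fixed(request: dict) -> str:
        user = current_user(request)
        if user is None:
            return "401 unauthorized"      # authenticate first
        if user != request["target_user"]:
            return "403 forbidden"         # then authorize
        return "deleted " + request["target_user"]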

McGraw: Right. So, I mean, there is a little bit of trickiness here because you gave a great example of cross-site scripting as a bug, but it turns out if you look at it sideways and you think about the APIs that a programmer is using to get input and the frameworks they’re using, there might be ways to solve that entire class of bugs with a design tweak. In fact, Google has done that.

West: Absolutely. And this, for a long time, has been one of my favorite interview questions: asking people which would be easier to fix, SQL injection or cross-site scripting? Because, for a long time, we’ve had a designed-in mechanism to make SQL injection very easy to avoid—parameterized queries. And we haven’t had, until more recently, an equivalent on the cross-site scripting side. So, great example.
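
A minimal sketch of that asymmetry, using Python’s built-in sqlite3 module (an illustration, not code from the show):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    name = "alice' OR '1'='1"  # attacker-controlled input

    # Vulnerable: string concatenation lets the input rewrite the query,
    # so the WHERE clause matches every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
    print(rows)  # [('alice',)] -- every user comes back

    # Safe: the parameterized query treats the input strictly as data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print(rows)  # [] -- no user has that literal name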

McGraw: I think we get the difference between bugs and flaws. How prevalent are flaws, and who are the sorts of people that need to know about them and avoid them?

West: Experts like yourself have often cited roughly a 50/50 split between flaws and bugs that lead to security problems. Now, as you said, the center of that spectrum is a little bit gray, right? Some things can be addressed with a design decision, or, if they’re not addressed in that way, might lead to an implementation bug. It’s not necessarily a black and white division. In terms of who needs to be aware of secure design principles, it’s really everyone involved in the development and production of software. Certainly from an architect’s standpoint, as you’re imagining the initial system design, security is a huge consideration. But even as smaller design decisions are made, as the overall system coalesces and development proceeds, you can see developers and product managers all needing to be aware of the security implications of typically missing security requirements, both in the design phase and as that design matures throughout the project.

McGraw: Right. So, tell me about the IEEE Center for Secure Design, now that we know why it exists. What is it and who’s in it? How did it form? What’s the deal?

West: We pulled experts in software security together—and this was a couple of years ago that we founded the group, as I mentioned—from three main sources or communities. First is folks like myself from the commercial world. Many of us have built security products in the past, oversee security practices, or employ security practitioners. Another group is folks from academia—folks who are responsible for teaching the architecture and design principles whose security implications the center is so concerned about. The third is folks from government, who probably have a different view on some of these systems; maybe they have different considerations from a process standpoint on how systems are designed, or maybe the attackers they deal with are different. We think these three diverse groups brought a wide-ranging set of perspectives. More than just perspectives, we actually asked people to bring real data to the first meeting, and so the first document we published was a list of the top 10 software security design flaws. We built that document out of the raw data and experience that that group brought together the first time we met.

McGraw: Right. So that was published in 2014, was it? Is that right?

West: Yeah, that’s right. Late in 2014.

McGraw: And then you recently published another report with the IEEE Center for Secure Design called “WearFit: Security Design Analysis of a Wearable Fitness Tracker,” which is obviously a fictitious fitness tracker, not a real product. But what it does is bring the top 10 flaws into a real example so you can get clear on what a flaw might look like in an actual system. Tell us about that report.

West: You’re exactly right about the way we developed this report. In the initial top 10 flaws document, we tried to give examples wherever we could of how the general guidance around the flaw, or the general guidance for avoiding the flaw, applies to different kinds of systems. What we did with this latest report was kind of invert that equation and start with a real system design. As you mentioned, this isn’t an actual company. We weren’t looking to pick on anyone in particular, but we looked at wearable fitness tracking devices across the industry and tried to understand how they are designed: what the hardware constraints, protocol constraints, and implementation decisions that have been made really look like. Then we designed a fictitious system that closely resembles real-world systems. Once we had that design agreed upon, we took the top 10 flaws from the original CSD document and applied them systematically, one by one, to complete a 10-step design review, in a way, of the WearFit system we created.

McGraw: Got it. So, it’s kind of like the report that would come out of a design review.

West: Exactly right. And, wherever we can, we talk people through what would have happened in that design review: the discussions that would have occurred and the key parts of the system design that would have been examined. And we try to add color about why certain design decisions would have been made in order to achieve certain security properties in the final system.

McGraw: So, who is ‘we’? Who are the co-authors of this thing?

West: I was lucky enough to work with three really good friends: Yoshi Kohno from the University of Washington, David Lindsay from Synopsys (previously Coverity), and Joe Sechman, who used to be a colleague of mine at HP Enterprise.

McGraw: Cool. So, you guys got together and formed this example. I know in the early days we were trying to call it WTF, the wearable tracker for fitness, but we got rid of the WTF.

West: We enjoyed the WTF name, but we thought it might get a little distracting. We tried to keep it as vanilla as possible.

McGraw: You’re no fun. Vanilla? I mean it’s—come on. It’s the IEEE. They’ve got plenty of vanilla.

West: We could use a little more fun in security, that’s for sure. [both laugh]

McGraw: Anyway, I’m going to call it WTF. What was the most difficult flaw to work in when you were looking at the 10 and when you were thinking about the design? Was there one that was like ‘oh man, how are we going to describe this one?’ Or, ‘where would that be found in a system like this?’

West: You know, I think we spent the most time talking about the interplay between privacy and cryptography. That really relates to a couple of different flaws about what’s important and then how you protect it. I think these were particularly interesting in the fitness tracker scenario, partly because you are significantly hardware constrained. You’ve got to be able to encrypt data on a very small, wearable device. Not a lot of power. Not a lot of CPU. You’ve got to be able to communicate that eventually up to the cloud to be able to make use of it. And the data’s sensitivity isn’t exactly clear. If it were credit card numbers, no one would argue that they weren’t sensitive. If they were ambient temperature readings, unrelated to you, well, you can get that information anywhere. It’s public already. But what about my steps, and my heart rate, and the information that this device is collecting about my person? Just how sensitive is that? How should it be protected and how should it be shared? These are topics we spent a lot of time discussing.
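
As one illustration of the kind of tradeoff West describes, a constrained device might use ChaCha20-Poly1305, an authenticated cipher that runs fast in software on small CPUs without AES hardware acceleration. This sketch uses the third-party Python cryptography package and hypothetical field names; it is not taken from the WearFit design:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()   # e.g., provisioned when the device is paired
    aead = ChaCha20Poly1305(key)

    sample = b'{"heart_rate": 72, "steps": 5301}'
    nonce = os.urandom(12)                  # must never repeat for a given key

    # Encrypt and authenticate the sample before it leaves the device for the cloud.
    ciphertext = aead.encrypt(nonce, sample, b"device-id:1234")

    # The server decrypts with the shared key; any tampering raises an exception.
    assert aead.decrypt(nonce, ciphertext, b"device-id:1234") == sample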

McGraw: I can give you a very concrete example of that. Those fitness trackers actually can tell when a person is having sex.

West: That’s an example we talked about in the paper. A very concerning one, depending on who has access to your data.

McGraw: Exactly. If you weren’t supposed to be having sex at the time your data show you were. There are all these implications of the data.

West: Yeah. You’re on a business trip or something. Who knows. [both laugh]

McGraw: I never go on those. What are you talking about? [both laugh] Let’s talk about the importance of the process by which you find flaws like this. Now, you guys did it in an ad hoc ‘we’re a bunch of smart guys’ manner, I would imagine. But, systematizing an approach to something like architecture risk analysis, or threat modeling, or whatever you want to call it has been a big challenge. Do you talk about that or have you had thoughts about that?

West: You know, we don’t talk about it too much; and I think one motivation for that is that documents that feel like they are about a process can be off-putting to some people. So, we try to basically walk someone through a process, show them a process, without really talking about the process itself. And the document is structured in exactly the way my co-authors and I thought about the project.

McGraw: We’ll call that the Socratic Method, how about?

West: There we go. I’ll take it. Up front, we talk about the system’s design in pretty technical detail. Then we talk about the different attack categories, the types of threats that we think the system might face. We group those into high-level buckets: things like denial of service, compromising the integrity of the device, and stealing a user’s health data. And we enumerate lots of examples of those potential attacks. And, with the combination of that design and a very ad hoc threat model—in terms of what we were concerned about—we were then able to proceed through the top 10 design flaws, thinking about how the design and the threats would interact with a system that was eventually implemented. This worked really, really well for us.

McGraw: So, you made them concrete in this example. If you’re confused out there, and you’re listening to this and you still don’t get what a flaw is, that is the point of this report. So you can go look at an example and you can say ‘oh, I see what they mean when they say “authorize after you authenticate” or whatever.’

We’ll be right back after this message.

This is Gary McGraw, your host for the Silver Bullet Security Podcast. If you like what you’re hearing here, you should check out my monthly security column published by SearchSecurity and Information Security Magazine. You can find the most recent column at www.SearchSecurity.com/McGraw. All of my writings are collected on my web page at https://www.garymcgraw.com/technology/writings/. Thanks for listening.

McGraw: Why is design review important, again? And, who should do it and how should they do it and should it be a process? I mean, that’s a little boring, but what do you think?

West: Yes. So, everyone should do design review, meaning every organization that’s building software of any import. In terms of who should conduct the design review, you’ve got to know something about design and you’ve got to know something about security. So somebody with architecture chops, whether that’s their title or not, is pretty important. But, they’ve got to understand the security side of it. You could take an architect and train them up on this, but it’s going to take some investment. Most likely, you need somebody with some experience in software security specifically.

McGraw: I want to push on that a little bit. So, because you guys focused on flaws, and you focused on finding and showing those flaws in a hypothetical design, it makes it clear that flaws are not always about putting security features in the wrong place—or mis-designing security features. Sometimes they’re about other aspects of the design altogether that impact security. I think that’s very similar to what happens when we think about security testing. When we’re testing a security feature, that’s normal testing because security features are just a function of the code like any other feature. So, you use functional testing techniques. But when we’re doing adversarial testing, we’re doing a different kind of testing. I guess it’s important to realize that that also exists when you’re doing design review.

West: That’s a great point, and I would say—I picked the cryptography and privacy example earlier, but the vast majority of what we talk about in the report, and the design decisions that we believe have a security implication, are related to non-security functionality, right? They aren’t the crypto or the authentication mechanism necessarily. They’re just about how the system moves data around and services its users. I think that’s a really good point—it’s not just about the security features, it’s about the security implications of the way the rest of the features are designed also.
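
A minimal sketch of that idea, with hypothetical names and a toy data store (not from the report): a plain “fetch my step history” feature whose design, not its security features, creates the exposure, because it trusts a client-supplied identity:

    STEP_DATA = {"alice": [5301, 6120], "bob": [980]}   # toy per-user store
    SESSIONS = {"token-abc": "alice"}                   # toy session store

    def get_steps_flawed(request: dict):
        # Design flaw: the endpoint trusts a user id supplied by the client,
        # so any logged-in user can read anyone else's fitness history.
        return STEP_DATA.get(request["user_id"])

    def get_steps_fixed(request: dict):
        # The design derives identity from the session, not the request body.
        user = SESSIONS.get(request.get("session_token"))
        return STEP_DATA.get(user) if user else None

    print(get_steps_flawed({"session_token": "token-abc", "user_id": "bob"}))  # leaks bob's data
    print(get_steps_fixed({"session_token": "token-abc"}))                     # alice's own data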

McGraw: Yep. Ok. So, you’ve explained why it’s important and who should carry it out. Are there processes for doing design reviews that are more principled than just ‘be really smart’ and ‘have security experience’?

West: There are processes. Various organizations and individuals have published some of this. I think one of the best ways to learn is to work with someone who has experience in it. Whether that’s someone you hire from a consulting firm or whether that’s someone that you know and you just beg some time off of, or whether you build up that capability internally. Whatever the case may be, the best way to learn is to go out on a ride with somebody and see how they do it.

McGraw: Yeah. I’ve done “apprenticeship” for many, many years and taught lots of people over the last two decades how I do security analysis, but that doesn’t really scale. You have to institutionalize it somewhat. We’ve been working on that at Cigital, but there’s a long way to go. You still have to be very experienced to carry out the process, because the process does have places where it says ‘and now, insert your brain here and do something clever.’

West: Yeah. Absolutely. I mean, it comes back to the point that you’ve got to understand how systems are architected and what security threats might face them, and not everybody knows that stuff.

McGraw: Let’s switch channels a bit. So, you’re a co-author of the BSIMM. What role does the kind of security design analysis and the kind of security design that leads to the document that you just published—what role does that play in the BSIMM?

West: As you know, those activities are significantly represented in the BSIMM. And I think one of the interesting things we’ve seen over the years of collecting and publishing the BSIMM data is just how much of an investment firms do make in design-related activities. Obviously, we could always see more in the industry, but I think the BSIMM data really validates that this is an area where more mature firms are clearly investing.

McGraw: I see. So, I guess there are a lot of people who still need to understand that software security is not just penetration testing, and software security is not just code review, and you know…there are many activities that you undertake and we put them into practices in the BSIMM and architecture is so important that it’s a major practice.

West: Absolutely right.

McGraw: You know, if you’re trying to figure out how to do this architecture analysis, you mentioned the possibility of figuring out how to do it inside your own organization. Do you think that’s really possible? Or, do you really need to go out and find somebody who’s done this for a while and who knows how to look at these problems? Have you had any experience trying to create that capability in an organization?

West: I have. But it always involved very exceptional individuals. So, I don’t think it’s impossible for a firm to do this on their own, but I think they’ve got to have the right people or be able to get the right people. That’s a challenge, right? It’s hard to hire a good architect even independent of security. So, it’s not something I think is 100% reproducible, and that’s one of the reasons I think we’re going to continue to see outside firms provide a lot of help in this area.

McGraw: That makes sense. I guess one of the challenges is, if you think about education, or even certification, there is no course, or set of courses, or university program that you can go to to become a software architect. That’s not where software architects come from. You know, they come from coding for 20 years and one day you turn around and everybody’s asking you questions about how they should do this and how they should do that. So you put down your keyboard and you pick up a magic marker, never to code again, and voilà, you’re an architect. [laughs] So you sort of have to have that, plus some security knowledge, to carry this out—which is a challenge. It’s a challenge for the planet.

West: Absolutely. One of my pet peeves today, and something that I speak publicly about as often as I can, is the challenge we have in creating new security people. We have a huge talent shortage in security. Firms can’t hire enough people to solve their problems, and that doesn’t even account for tackling areas that maybe aren’t being looked at today, like design review.

McGraw: Right.

West: And at the same time, the top universities, the top computer science programs in the country are still doing very little to instruct undergraduates in software security and secure coding. So, I think we have a big divide to span and the industry is going to have to do everything we can, kind of guerrilla-style, to create these skills. I’ll throw it out there, I think we should all put pressure back on the universities in whatever way we can to start to meet this demand a little bit better, as well.

McGraw: Well, let’s kind of talk about that a little bit more. What are your thoughts about the evolution of software security as a discipline since your time as a student of David Wagner at Berkeley?

West: Well, I think it hasn’t evolved nearly enough, frankly. You know, I graduated from UC Berkeley in 2004, and at the time had really no exposure through coursework to any security topic. Maybe a security feature here or there, but certainly nothing to do with robust secure programming skills, certainly nothing to do with secure design principles. Unfortunately today, more than a decade later, we don’t see a whole lot of change there. You can graduate from any one of the top universities in the country today with an undergraduate degree in computer science and have really never been exposed to software security topics. That’s not to say that they aren’t available in many schools today. But, you have to be on the lookout for them and really go hunt them down for yourself. I think the point we need to get to is the point where security is treated as a fundamental property of software, and therefore of computer science; and it’s taught as a part of every discipline that we teach today—operating systems, networks, data structures, and databases. All of these have security implications and we need to teach these subjects in the right way.

McGraw: Yeah. That makes sense. I mean, the challenge is, of course, that real-world architecture is different from academic project architecture. Not always at the graduate level, but when you’re talking about undergrads and simple class projects and stuff like that, often the architecture is pretty straightforward. There might be security implications, but it means you don’t get any real experience with the walls of code you might find in a financial institution, for example.

West: Yeah. That’s right. I think we’re always going to have a delta between someone leaving a degree program of some sort and going into the workforce. There’s always going to be a gap between academic scale and commercial scale. What we can do, I think, is start to inject a lot more software security along the way, so that when they get to that final step of scaling up to the real thing, they’ve seen at least all of the important building blocks that lead up to it, whereas today, I think, security is often just missing from that equation.

McGraw: Right. So, now I want to veer us off the tracks completely. You know today is my 50th birthday and you said you were going to sing “Happy Birthday” on the podcast. So, are you going to do it?

West: I’m definitely not going to do it. I thought about getting a fun recording of someone else singing, but I think we’ll save your listeners from having to hear either of us.

McGraw: Aw. I was hoping they could sing along out of tune. [both laugh] Alright, well you had your chance.

West: Alright, well that’s my challenge to your listeners, then. Everybody should take a video recording of yourself singing “Happy Birthday” to Gary, and tweet it at him.

McGraw: Thanks a lot, Jacob. [laughs]

West: Yeah. I know you’re looking forward to it.

McGraw: Alright, so here’s the last question. Tell us about your latest foodie adventure. You and I like to hit great restaurants all over the planet. What’s the latest cool thing that you did in foodie land?

West: Well, I don’t like to repeat restaurants too often. Living in San Francisco, we have enough wonderful choices and enough change amongst those choices that it’s just not worth it, typically. But, I broke that rule and went back to a place that I love called Coi—spelled ‘C-o-i’—but they pronounce it ‘quoi,’ for some reason. And it was fun because that restaurant has been open for, I don’t even know, more than a decade, and has had the same chef running it for quite a long time. The chef passed the torch to a successor just a few weeks ago, in January. I was able to eat there just a week or two after the new chef had been installed. Let me just say that the torch was passed very successfully. The new menu was spectacular. I think he’s really filled big shoes very well. So, for everyone who’s visiting San Francisco and wants to have a nice Michelin-star meal, Coi is a fantastic choice.

McGraw: Cool. Well, thanks very much. We always like your food advice, because it tends to be good. Thanks for your time today, Jacob. I really appreciate it.

West: Gary, it was fun as always. Thanks a lot and happy birthday.

McGraw: Thanks. This has been a Silver Bullet Security Podcast with Gary McGraw. Silver Bullet is co-sponsored by Cigital and IEEE Security and Privacy Magazine and syndicated by SearchSecurity. The September/October 2015 issue of IEEE S&P is devoted to the economics of cybersecurity. The issue features our interview with European cryptographer Bart Preneel.
