Show 126: Mike Pittenger Discusses Open Source Software Security

September 29, 2016

Mike Pittenger is the VP of Security Strategy at Black Duck Software where he is responsible for strategic leadership of security solutions, including product direction and strategic alliances. He has 30 years of experience in technology and business, more than 25 years of management experience, and has spent the past 15 years focusing on security. Mike previously served as VP and General Manager of the product division of @stake. After @stake’s acquisition, he led the spin-out of his team to form Veracode. He later served as VP of the product and training division of Cigital. Mike also works as an independent consultant helping security companies identify, define, and prioritize their security product approaches.

Listen as Gary and Mike discuss open source security including OpenSSL, containerization, and progress being made in the industry.


Transcript


Gary McGraw: This is a Silver Bullet Security Podcast with Gary McGraw. I’m your host Gary McGraw, CTO of Cigital and author of Software Security. This podcast series is co-sponsored by Cigital and IEEE Security and Privacy Magazine. This is the 126th in a series of interviews with security gurus, and I am pleased to have today with me, Mike Pittenger. Hi, Mike.

Mike Pittenger: Hey, Gary. How are you doing?

McGraw: I’m good. So, Mike Pittenger is the Vice President of Security Strategy at Black Duck Software. Mike has been working in security, and in software security in particular, for 15 of the 30 years he’s been in technology. Previous employers include Cigital, Veracode, and @stake. Mike also works as an independent consultant helping security companies identify, define, and prioritize their security product approaches. At Black Duck, Mike’s responsible for strategic leadership of security solutions including product direction and strategic alliances. Mike has an AB in Economics from Dartmouth and an MBA from Bentley College. He lives with his family in New Hampshire and just became a grandfather, about which, congratulations.

Pittenger: Thanks, Gary. Yeah, it’s pretty special.

McGraw: Very cool. Full disclosure for all my listeners, I worked with Mike at Cigital, and I’m also now an adviser to Black Duck. So, you should know that ahead of time.

All right, let’s get started. What got you going in security, and why did you get going in security, and when did you become aware of software security issues and all that?

Pittenger: Yeah. Good question. So, like a lot of guys in the security space, I started my career with six years at a mail order smokehouse in Vermont. No joke. It was a job that was offered to me out of college, and that led me to a holding company in Massachusetts that was starting a mail order division in consumer electronics. Did that for about six years and then joined the parent company to do M&A transactions. So, mostly on the sell side for a company called Dynatech. I ended up joining—I got into software by joining a company that I had divested to a division of Eaton, a company called MPSI. And they did expert systems for diagnosing electromechanical problems and went from there to—ended up at a company called Authentica. And we got Jack Hembrough who—

McGraw: Ah, yeah. I remember Authentica actually. You know, I was an adviser to Authentica too, a million years ago.

Pittenger: Oh, were you really?

McGraw: Yeah I really was.

Pittenger: With John Bruce?

McGraw: Yep.

Pittenger: Or was it—yep. So, Jack Hembrough, who subsequently became the Chairman of Application Security, Inc., I worked with in consumer electronics. And he asked me to come over because they were looking for somebody to run business development and ended up joining Authentica and just kind of got the security bug and recognized it as a growth market and went from there to @stake in ’03. And that was—

McGraw: Which was, believe it or not, 13 years ago.

Pittenger: Yeah. Isn’t it something? And they were, you know, similar to what Cigital is now, but obviously smaller, and I was running the product division there. So, we had L0phtCrack which was, and remains, pretty well known. But the real reason for me being brought in was that they were working on this technology to analyze binaries and, in their minds, automated manual code review. And we brought that to market and within a year we were acquired by Symantec and they told us to stop all sales immediately. And—

McGraw: And that eventually became Veracode’s stuff.

Pittenger: That’s it. We spun that out and formed Veracode in ’06. That’s right. So—

McGraw: Which was a mere 10 years ago.

Pittenger: So, I stayed there for a couple of years and then went off and began consulting and worked with Security Innovation and Cigital. Black Duck was a client, Digital Guardian, BeyondTrust, CoreLogic, you know, a whole handful of other companies.

McGraw: So I’m interested to know, I guess it was when you moved to Authentica when you really got interested or began to understand the role that software plays in security, not for security. You know, because Authentica in some sense they were doing—they were a security feature company. They were building data protection technology.

Pittenger: Yeah. It was digital rights management. Right.

McGraw: Right. So, you know there’s a difference between security software and software security that most people who listen to this podcast understand. But I’m wondering when, in your mind, that kind of kicked in?

Pittenger: That really kicked in with, I mean, we talked about it at Authentica, but it really kicked in when I joined @stake. I mean, this is what they lived and breathed every day.

McGraw: Right, right, okay. And so, you know, given all the stuff that you’ve done, what’s your personal preference? Do you like private consulting? Do you like working for a firm? Are there certain kinds of firms you like working for? It’s interesting to learn, from all of somebody’s experience, how the world works.

Pittenger: Yeah. I mean, the consulting was obviously a lot of fun, and it’s lucrative. The fun part of that was doing projects where you were, kind of, doing strategy and helping companies understand the market value and the technologies that they’ve delivered. So I worked on a project with CoreLogic that was being sponsored by DARPA, and it was just fascinating and you get to talk to a lot of companies and learn about what they’re doing and, you know, where they see value and what not.

The other half of that, which tends to pay the bills a lot of the time but is less intellectually stimulating, is writing white papers, writing content, doing webinars, and that type of thing. And the nice thing here, in my role here, is I get to do a bit of both of those. So we’re working on determining what we should be doing around open source security and how that strategy should manifest itself in the product. But also doing a lot of speaking, and writing, and traveling around to see customers, and that type of thing. So, being really the advocate and the evangelist around security and open source, and helping people understand the difference between what we do and what some of the traditional automated security testing tools do.

McGraw: Yeah. Let’s talk about that. So these days your focus has certainly shifted to open source. First off, a trick a question. So, you ready? What’s more important, identifying open source in your pile or managing the open source in your pile?

Pittenger: I think they’re equally important. You can’t protect yourself against stuff that you don’t have visibility to, right?

You know what I mean. How do you manage something if you don’t know it’s there?

McGraw: Of course, yeah. That’s why it was a trick question. So, how many people that are worried about open source are managing versus just identifying, do you think?

Pittenger: I think it’s really only the leading edge of companies, the ones that have awareness and are trying to do something about open source, that are really managing it. I mean, most are really just—it’s all of a sudden dawned on them that for the past, you know, 10, 15, 20 years they’ve been consuming open source in their application development program and have no idea what they have there, what the hygiene or the profile of that open source is. So, it’s really kind of a triage process for a lot of companies when they first start to do this. They’ve thought that they’ve been managing software bugs simply by doing pen testing and static analysis and so on—

McGraw: On their own code.

Pittenger: Right, not recognizing that those really aren’t effective against the types of bugs that we see in open source. Or in analyzing open source, period.

McGraw: Well, let’s get a little bit closer to that. So, how much open source is out there? And let’s talk about that. Let’s figure out whether it should be automatically patched like most software is and why it isn’t, whether it could be, and how much churn there is in the space, so we can get a—let’s approach the problem from that angle.

Pittenger: Yeah. So, we’ve been doing this for 14 years at Black Duck. Our knowledge base—we started building it back in ’02, I guess it was. We now track 1.9 million discrete open source projects and every version of those that comes out in multiple forms and—

McGraw: And theoretically, they have more than one line of code in each.

Pittenger: Yeah, yeah. I mean, you have these big—OpenSSL, you know, just kind of bloated over time and so on. And you have the Linux distros and stuff which are obviously a little bit bigger, as well as small utility components and so on.

McGraw: So, there’s a lot out there.

Pittenger: Yeah, there’s a lot out there. And then in terms of its consumption, we sell the software but we also provide an on-demand service. Typically, you know, if I’m going to buy your software company, I want to make sure I have freedom to operate and no IP risks. So, they’ll pull in Black Duck to analyze their code, kind of a one-off audit, and we do 700 or 800 of those a year. We issued a report in April on that, and we’re going to issue another one next month. But we looked at those audits we did in Q4 of last year and Q1 of this year, and on average—well, every application had open source. On average, 35% of the files in each application were open source, 65% custom code. But you have to recognize that’s a lagging indicator, because if a company is mature enough to be, you know, “I want to buy your software company,” then that code base we’re looking at could be 5 or 10 years old.

McGraw: Well, so let’s talk about that a little bit. In the early days, it seemed to me that the person that was worried about the open source problem was the CFO because of the GPL virus, so to speak. You know, and for those of you who don’t know, the GPL is this license that says anything that this code glob shows up in is also GPLed. So, it really worried CFOs because it was like an infection that took over your software if you used a glob of open source code in it. And that was driving the open source identification problem in those days. Is that still true? Is that, kind of, the main driver or has it shifted to security people or technology people?

Pittenger: No. It’s—yeah, it’s really shifted. So, the first 10 or 12 years of Black Duck’s life, it was almost exclusively focused on license risk. And you know, the average—the common buyer was you know, an attorney or a CIO or a CFO as you point out. And in the early days, the reason they wanted to identify the open source is so they could get rid of it. Because of these squirrely licenses some of it was issued under, they didn’t trust other peoples’ code and so on.

Now, a majority of our business is being driven by security, as people have realized that—well, I’ll give you an example. I was talking to a CISO that we both know at a very big financial services company, and I was asking them how they manage their open source. This was when I was consulting to Black Duck, and he went through a really lengthy explanation about their design review, their architectural review board, and how they, you know, really carefully vet this stuff. And they’ll ask the design team what third-party components you’re using, what licenses are they issued under, what alternatives did you consider, tell me about vulnerability density, you know, code—

McGraw: As if those guys really know this.

Pittenger: Right. And then as kind of a postscript to the whole thing he said, but of course, that’s unenforceable. Because when you tell a developer, “Hey, you can use project Foo 5.2,” they only hear the first half of that sentence. And they’re like, “Oh good, I can use project Foo and that’s great because I’ve been using it for three years. It’s in my workspace. I know all the APIs.” And they’re less concerned about the version number because they continue to be more focused on functionality.

And they really weren’t—most of the people I talked to weren’t aware that there was a way of automatically tracking the open source that was being used and getting alerts on when the health or the security of those components changed. And we’ve been working with a lot of different companies to try and get the messaging out that static analysis is good, dynamic analysis is good. We use both of those internally, but it’s really focused on finding common coding mistakes that could result in security vulnerabilities and—

McGraw: Not on inventory controls so to speak.

Pittenger: No, no. Buffer over-reads, theoretically, you should be able to find those with static analysis or dynamic analysis. Heartbleed was a buffer over-read in the code base for two and a half years. I mean, how many thousands of times was OpenSSL subject to, you know—

McGraw: Actually, let’s dig into that. So, the Coverity engine which is quite phenomenal was used on that project in particular. And the reason—

Pittenger: It was the Codenomicon one, right?

McGraw: No, no, no.

Pittenger: Oh, yes. Right.

McGraw: It was Coverity. It was Andy Chou and those guys, and it didn’t find the bug which was an easy bug because the code was so crappy. I mean, seriously. And if you looked at the code and you tried to understand the dependencies and all the include files, you could see how a static analyzer would get confused about that. So—

Pittenger: Yeah, it just couldn’t track the control flow or the data flow.

McGraw: Yeah, exactly. Because it was basically, just to put it bluntly, really shitty code. But it did wake people up to the idea that they were relying on this code that somebody else wrote that they didn’t know anything about that was free. And you know, in some sense, the Heartbleed thing is still with us. How many websites are out there that still have Heartbleed bugs?

Pittenger: Oh my. Well, in this analysis that we did on their audit data, I mentioned it was 35% open source, 12% of those applications we looked at still had the Heartbleed problem.

McGraw: Unbelievable.

Pittenger: About seven percent had ShellShock and—

McGraw: And Drown, too, I suppose is another one to look for.

Pittenger: Yeah, yeah. So, it’s not that these vulnerabilities aren’t well publicized. I learned about Heartbleed on Good Morning America.

McGraw: You should stop watching that show. You know, Matt Lauer is useless.

Pittenger: And you know, so people know about it. So you can only conclude that they maybe didn’t know that they were using OpenSSL or they didn’t know that they were using OpenSSL multiple times in the same application. I mean, you remember when that came out and people were running around with their hair on fire.

McGraw: Oh, I know. We had a fire drill over here at Cigital. It took us—we had to roll lots of certs. So, it took some doing.

We’ll be right back after this message.

If you like what you’re hearing on Silver Bullet, make sure to check out my other projects on garymcgraw.com. There you can find writings, videos, and even original music.
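The bug class behind the Heartbleed discussion above, a server trusting an attacker-supplied length field when echoing a heartbeat payload, can be modeled in a few lines. This is a simplified sketch, not OpenSSL’s actual code; the “memory” layout and secret are invented for illustration:

```python
# Simplified model of the Heartbleed bug class: the responder echoes back
# claimed_len bytes starting at the payload, trusting the attacker's length
# field instead of the payload's real size. Adjacent "memory" (here, a
# secret stored right after the payload) leaks on the over-read.

def heartbeat_vulnerable(memory, payload_start, payload_len, claimed_len):
    # BUG: no check that claimed_len <= payload_len
    return memory[payload_start:payload_start + claimed_len]

def heartbeat_fixed(memory, payload_start, payload_len, claimed_len):
    # The fix: silently discard heartbeats whose claimed length exceeds
    # the actual payload length.
    if claimed_len > payload_len:
        return b""
    return memory[payload_start:payload_start + claimed_len]

memory = b"PING" + b"SECRET_PRIVATE_KEY"   # 4-byte payload, then a secret
leaked = heartbeat_vulnerable(memory, 0, 4, 22)
print(leaked)   # the 4-byte payload plus 18 bytes of adjacent secret
print(heartbeat_fixed(memory, 0, 4, 22))   # empty: malformed request dropped
```

A bounds check of one line is all that was missing, which is part of why the two-and-a-half-year dwell time stung so much.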

McGraw: So, how many open source zero-days do you suppose are lurking out there? Any idea about that?

Pittenger: More than you could count, I suspect. I mean you know, there are 2,000 to 3,000 vulnerabilities disclosed each year just in NVD in open source components. And you know, again, if you look at ShellShock, that was in the code base for 25 years before it was disclosed. So, it’s really optimistic to think that nobody else knew about this until some Good Samaritan disclosed it to the community.

McGraw: Right. But it also emphasizes something that I think is really important to emphasize in this podcast, which is the many eyeballs thing is a crock of shit.

Pittenger: Yeah. It’s the wrong eyeballs, right?

McGraw: Yeah. It’s the wrong eyeballs and it’s also just plain old wrong. So, you can’t really crowdsource your security review to everybody and nobody.

But, I want to talk about some modern stuff, too. So there’s been this big shift to containers that, you know, everybody’s using them and it seems to be causing more hidden open source out there than ever. Like, you know, we see major shops moving to the container view and actually using containers in pretty clever ways to enhance their security, but the container itself is a piece of open source. So, how much of that stuff is out there?

Pittenger: Well, so we did an analysis of Docker Hub and I can’t remember whether they called them the approved containers? And something like two-thirds of them had known vulnerabilities in them. So, you’ve got this Linux stack typically, or you might have a LAMP stack or something, and the beauty of the containers is you can strip off the application layer and put on a new application or strip it down to the Linux Kernel or the core and put on—build a new stack up and a new application.

But as you know well, the security of any software project is pretty ephemeral. And if you use something long enough, and the code base ages, there’s going to be vulnerabilities in it. So, what they end up doing in this is just keep reusing a Linux stack with, perhaps, an old version of OpenSSL or something like that. And you just tend to propagate those vulnerabilities across multiple applications because you just haven’t checked the kind of underlying OS on this.
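The propagation Pittenger describes, one stale base layer feeding many images, can be illustrated with a toy check. The image names, base layers, and pinned versions below are all invented for illustration:

```python
# Toy illustration of vulnerability propagation through shared base layers:
# many container images inherit the same base, so one outdated package in
# the base shows up in every derived image.

BASE_LAYERS = {
    "base-linux:2014": {"openssl": "1.0.1f", "bash": "4.2"},
    "base-linux:2016": {"openssl": "1.0.2h", "bash": "4.3.42"},
}

IMAGES = {
    "web-app": "base-linux:2014",
    "api-service": "base-linux:2014",
    "batch-worker": "base-linux:2016",
}

# (package, version) pairs with known advisories, e.g. Heartbleed-era OpenSSL
VULNERABLE = {("openssl", "1.0.1f")}

def affected_images():
    """Return the sorted names of images whose base layer ships a flagged package."""
    hits = []
    for image, base in IMAGES.items():
        for pkg, ver in BASE_LAYERS[base].items():
            if (pkg, ver) in VULNERABLE:
                hits.append(image)
    return sorted(hits)

print(affected_images())   # every image built on the stale 2014 base
```

One stale base taints every image built on it, which is why scanning only the application layer misses most of the exposure.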

McGraw: Well, who’s supposed to keep you know, container-based open source secure? Is that somebody’s job?

Pittenger: It doesn’t seem to be. I mean, people talk a lot about DevOps, and this is where you start to get, in a large organization, you’re going to have an application security team who’s going to be responsible for the application layer. And then, you probably have an IT security team who’s responsible for the OS and the hardware that things are running on and what not. And the containers seem to cross over there a little bit. So, I don’t think we’re going to see a DevOps security team that’s independent of IT or application security. So, that’s kind of a long-winded way—I think it probably belongs in the domain of the IT security people.

McGraw: Yeah, but they’re not doing it?

Pittenger: No.

McGraw: I mean, this is the thing that has always, sort of, bothered me about open source which is the idea that everybody was going to somehow do this stuff that costs lots of money, magically, like security analysis or reliability testing or quality testing, and it seems like an aspect of open source that has yet to be properly addressed. What we do now, and what Black Duck can help you do, I guess, is find out how big a problem you have and figure out which components you should and shouldn’t use, in some sense. But, it isn’t really going to the open source projects and saying, “Hey, you guys, get your act together,” or is it?

Pittenger: No. You’re right. Right now, we’re trying to deal with helping organizations identify and manage the open source that they’re using. We’ve got our research team that’s working more on providing more remediation content and guidance. But again, that’s focused more on the—

McGraw: What you’ve already got.

Pittenger: Yeah. It’s focused on the users of the open source as opposed to the developers of the open source. There’s some overlap there as well.

McGraw: So, that’s tricky because, you know, part of my religion (used loosely) is to fix what you find. I preach that all the time. It’s great that you used goat sacrifice and some pen testing to find that problem. Now, tell me how you fixed it. In fact, I don’t really care how you found it. Did you use static analysis? Fantastic. Did you use dynamic analysis? Great. Did you use binary analysis? Fantastic. What’d you do to fix it? And the answer often is, “Uhmm, yeah, uhh.” And the problem with open source is the ‘who fixes it’ is still unknown.

One more thing, though, because there’s an even worse wrinkle which is if end users don’t know that it’s broken, then the nobody who’s supposed to fix it has more reasons not to fix it.

Pittenger: Yeah. The good news in this is that—semi-good news—is that most of these disclosures are done responsibly. So, when a vulnerability is disclosed in open source, it’s typically accompanied by a fix.

McGraw: A fix. And so, what you guys can help people do is to say, “Hey, this thing just got fixed and you have a copy of it. So, fix your stuff.”

Pittenger: Fix your stuff, but yes. And that’s one of the options, right? The other option is “What can I do to mitigate the risk of this until such time as I can fix it, because—”

McGraw: What’s an example of that, Mike?

Pittenger: Generating indicators of attack, indicators of compromise. So, one of the concerns that a CISO has when an open source vulnerability is disclosed is, you know, it’s just become public, but did somebody else already know about it? Was I probed? Am I going to be subject to a non-targeted attack? So, if we can provide them with indicators of compromise or indicators of attack—log patterns they can set up as rules in their IDS—they can go review their log files and see if they’ve been probed. As another guy we both know put it, “If the NSA wants to get into my environment, they’re going to get into my environment, and I’m probably not going to lose my job.”

McGraw: No, no, no. That’s wrong. The NSA has already been in your environment for a while.

Pittenger: Yeah. So, his bigger worry was if a 17-year-old kid from Hackensack gets into my environment using a public exploit on a known vulnerability, I’m in big trouble. And, you know, taking a public exploit and running it across a thousand IP addresses and seeing which ones come back positive and then determining how far you want to take each attack, it’s a more economical approach to it, frankly. And, you know, the bad guys have quotas too.
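The retroactive log review Pittenger mentions amounts to scanning historical logs for a published exploit signature once disclosure happens. A minimal sketch; the log lines are invented, though the "() {" marker is the pattern public ShellShock exploits placed in HTTP headers:

```python
# Sketch of retroactive log review: once an exploit's request signature is
# public, scan historical logs to see whether you were probed, possibly
# before disclosure. The signature here is the ShellShock-style "() {"
# header payload; the log lines are made up.

SIGNATURE = "() {"

def find_probes(log_lines, signature=SIGNATURE):
    """Return (line_number, line) pairs whose content contains the signature."""
    return [(i, line) for i, line in enumerate(log_lines, 1) if signature in line]

logs = [
    '10.0.0.5 - GET /index.html "Mozilla/5.0"',
    '203.0.113.9 - GET /cgi-bin/status "() { :;}; /bin/bash -c id"',
]
for lineno, line in find_probes(logs):
    print(lineno, line)   # only the probe line matches
```

In practice this is what IDS rules and published indicators of compromise automate, but even a grep over old access logs answers the CISO’s “was I probed?” question.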

McGraw: Yeah. All right. So, let’s come full circle. Ultimately, it would be great if we could get the developers of open source not to write broken software, but there’s an awful lot of already broken software that’s been distributed that millions of people use. And so, we have to attack the problem from not only worrying about fixing new open source and teaching those developers how not to screw it up, but also by finding out what we’re relying on and, hopefully, using better components if there’s some that are better than others. But failing that, just keeping an eye on our broken stuff to see when it’s attacked.

Pittenger: Yeah. I think a lot of this is simply vulnerability management which companies typically think of as being, “Have I patched my Windows devices, and my Linux boxes, and Adobe, and other commercial applications?” And what we’re trying to get across here is that patch management as it were, or vulnerability management, includes your own applications. And we’ve got a lot more people who are capable of finding these types of bugs because we have a smarter, overall, security community than we did 10 years ago. So, we’re going to continue to see more and more vulnerabilities disclosed because open source is, kind of, the playground for researchers. And so just having awareness of what you’re using and staying on top of that so that you don’t have to go through that fire drill every time a new vulnerability is disclosed, just makes life easier and helps you on the defense.
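Extending vulnerability management to your own applications boils down to checking a bill of materials against known-vulnerable versions. A minimal sketch of the idea, not Black Duck’s actual data or API; the component list is illustrative, though the CVE IDs are the real Heartbleed and ShellShock advisories:

```python
# Minimal sketch of open source inventory checking: compare a declared
# bill of materials against a map of known-vulnerable component versions.

KNOWN_VULNS = {
    # (component, version) -> advisory IDs
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],   # Heartbleed-era release
    ("bash", "4.2"): ["CVE-2014-6271"],          # ShellShock-era release
}

def audit(bom):
    """Return (component, version, advisories) tuples for flagged entries."""
    findings = []
    for component, version in bom:
        advisories = KNOWN_VULNS.get((component, version))
        if advisories:
            findings.append((component, version, advisories))
    return findings

bom = [("openssl", "1.0.1f"), ("zlib", "1.2.11")]
print(audit(bom))   # flags the Heartbleed-era OpenSSL, passes zlib
```

The hard part in real tooling is building the inventory and the vulnerability map in the first place; once you have both, the check itself is a lookup you can rerun every time a new advisory lands.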

McGraw: That makes sense. So, let’s just zoom up 85 feet or whatever, 8,500 feet, to a bigger question for you. You’ve been doing software security since the @stake days and through the Veracode days, and worked at Cigital for a while. Do you think we’ve been making progress in the last 20 years on this problem or not?

Pittenger: Oh, I certainly do. I think we’ve made a lot of progress. I mean, number one, there’s awareness. We see more software security groups in organizations so companies are starting to take this more seriously. I think we’re seeing a—we still see a skew towards IT security in terms of spending which really isn’t justified if you look at the data and how the attacks are happening. But I think that will change over time. We’ll see that even out more because, you know, there’s no perimeter here.

McGraw: Perimeter security is great, but it requires a perimeter.

Pittenger: Yeah, right, right. And port 80 doesn’t count, right? So, I think we have made progress from that. But clearly, there’s a lot more to go because we’re still penetrating and patching largely, as opposed to building security in. You know, there’s a lot of work that still needs to be done to get developers smarter about this. To make companies recognize that they need to invest in, not just the skills and the money, but the time it takes to make sure that you’re integrating all of these security activities into each stage of the SDL.

McGraw: Yeah. And, you know, it’s kind of interesting. I think that if you guys collect lots of data about what you’re finding out there, and you know which components are the most important, so to speak, and also which ones seem to be the most vulnerable, that can help figure out which developers to target in the open source community to get up to speed to do a better job, too. So, I’d like to see that happen, you know, over the next five years or so. That’d be cool.

Pittenger: Yeah. That’s a good point, Gary, because everybody—we constantly talk about Heartbleed because people recognize it. But there have been 60 or 70 additional vulnerabilities in OpenSSL since then.

McGraw: Yeah. Some people even abandoned the whole project. They’re like, “We’re making our own glob.”

Well, cool. It’s been a very interesting conversation. Thanks for your time today.

Pittenger: Yeah. Thanks, Gary.

McGraw: I have one other question, I guess, which has nothing to do with any of the stuff that we’ve been talking about. What’s your favorite kind of music to listen to and enjoy?

Pittenger: Bluegrass.

McGraw: Okay. And what are you listening to now?

Pittenger: Well, I have Larry Keel on the high-rotation list right now. And then—

McGraw: Okay. This is because your second daughter got married, is that right?

Pittenger: That’s right. She got married, and Larry and Jenny played at the wedding and reception. And then there’s a new band up out of Maine called The Ghost of Paul Revere. It’s kind of a banjo, a guitar, acoustic bass, and a harmonica. Four guys from Western Maine, and lots of harmony and just fun music.

McGraw: Cool. Well, thanks. We’ll see if we can find some pointers to put those on the website.

Pittenger: Okay. Cool.

McGraw: Yep. Thanks again for your time, Mike.

Pittenger: Okay. Thanks, Gary.

McGraw: This has been a Silver Bullet Security Podcast with Gary McGraw. Silver Bullet is co-sponsored by Cigital and IEEE Security and Privacy Magazine, and syndicated by Search Security. The July-August issue of IEEE S&P includes articles on verifiable and electronic voting, and on breaking down barriers between security and business. The issue also includes our interview with Marty Hellman, co-inventor of public-key cryptography and recent Turing Award winner.

While you’re there, be sure to watch the video we produced to celebrate the 120th episode of Silver Bullet. Holy cow, ten years of Silver Bullet in a row. 
