Comparing Security Models with Bob Blakley

July 17, 2009

For the 40th episode of The Silver Bullet Security Podcast, Gary interviews Bob Blakley, VP and research director of The Burton Group’s Identity and Privacy Strategies. Gary and Bob discuss the importance of liberal arts degrees, the (over) complications of CORBA security, whether computer security requires a complete shift in approach, cybersecurity and governments, and the movie Perils in Nude Modeling (really).

Transcript

Gary McGraw: Bob, your degree from Michigan was the last in the computer and communications program, and I think John Holland's was the first (John's the father of genetic algorithms for those listeners who don't know that). Are we doing enough these days to teach technology professionals to think?

Bob Blakley: I guess. I have strong feelings about that in a broader context than just technologists. There are courses that used to be taught, for thousands of years, that taught people how to think, and they just aren't in the undergraduate curriculum anymore. My favorite pet peeve is rhetoric: we don't teach people rhetoric anymore—how to analyze an argument and determine whether or not the methods used to make the argument are legitimate. There are all sorts of proposed public policy and technological ills that could conceivably be avoided, especially in a democracy, if people understood how to think about arguments.

As far as technologists are concerned, I think we teach them more about specialized disciplines and less about general disciplines like mathematics, and we also tend to have them specialize earlier. I didn't specialize until well into my graduate career. My undergraduate degree is in classics from Princeton University, and that degree in itself came about because I got out of various other curricula under both honorable and slightly less-than-honorable circumstances. I had a very diverse background, as many of my colleagues at Michigan did, but of the 12 graduate students admitted to the program, only one was an undergraduate computer science major. Today, that would be unheard of.

McGraw: I have a degree in the lucrative field of philosophy.

Blakley: Yes, I think there's much worse preparation for life than that. My classics degree focused a lot on philosophy, and in the National Academy of Sciences work that I've been doing, one of the things that I always consciously try to do is go back and study and bring into the discussion the philosophical background of what we're talking about. For example, identity. There's a very rich philosophical background about identity, and identity is a complex topic, so if you aren't reading what Locke said about identity and what Nietzsche said about identity and what the Buddha said about identity, you're just not paying attention.

McGraw: I would recommend [Robert] Nozick, too. I don't know if you've ever read his modern philosophy.

Blakley: Yes. And it goes right down to the modern day, right? It's not like philosophy just went out of business in the mid-19th century or anything like that. There are lots of people, like Dan Dennett, doing extremely interesting work about the function of the mind and how that feeds into identity.

McGraw: Switching gears, in my view, the Java 2 security model and the CORBA security model remained inscrutable to most practitioners, and that kind of rendered their uptake a little more tepid than the ideas probably warranted.

Blakley: The Java 2 security model has a feature that I don't like very much, which is that its approach to fine-grained authorization doesn't have as many indirections in it as it ought to. This is the JSR 115 [Java Authorization Contract for Containers] architecture. The result is that authorizations have to be expressed very early in the process of designing and deploying an application, and if you change the policy, you have to redeploy an object. It is very complicated. These fine-grained models are all very complicated, and they haven't been adopted very widely yet. Maybe the combination of [XACML] and claims-based authorization will succeed in externalizing authorization from the Java environment.
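
The distinction Blakley is drawing is between baking the permission check into the deployed object and asking an external decision point at runtime. Here is a minimal sketch in Java of the externalized style he hopes XACML and claims-based authorization will enable; the PolicyDecisionPoint interface is hypothetical, a stand-in for an external policy engine rather than any real XACML or JSR 115 API:

    import java.util.Map;

    // Hypothetical external decision point: policy lives outside the
    // application, so changing it does not require redeploying objects.
    interface PolicyDecisionPoint {
        boolean isAllowed(String subject, String operation, String resource,
                          Map<String, String> claims);
    }

    class AccountService {
        private final PolicyDecisionPoint pdp;

        AccountService(PolicyDecisionPoint pdp) {
            this.pdp = pdp;
        }

        void closeAccount(String subject, String accountId,
                          Map<String, String> claims) {
            // The application only enforces the decision it is handed.
            if (!pdp.isAllowed(subject, "close", "account:" + accountId,
                               claims)) {
                throw new SecurityException("access denied");
            }
            // ... proceed with closing the account ...
        }
    }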

McGraw: I don't know. I'm a little skeptical about that. What about the CORBA idea? How does CORBA differ?

Blakley: The CORBA model has a feature that I like even less, which is that I—number one—was instrumental in designing it and—number two—subsequently failed to explain it in any comprehensible way to people who might adopt it. I sometimes go back and think, 'Well, I wonder whether the fact that it wasn't adopted was just a side-effect of the larger failure to adopt CORBA generally,' but I don't think so. I designed pieces of the DCE security model and pieces of the CORBA security model and a variety of other things, including an operating system security model for OS/2, all of which had this characteristic that they were way too complicated. In a sense, all of the things that I was famous for a long time in the security community were failures, and they were failures not in a shallow sense but in a deep sense—namely, repeated failures to learn the same lesson. Which I eventually did learn.

I was also the editor of a CORBA spec called RAD—Resource Access Decision—which I think is still my favorite access control interface and model. It's extremely simple. After that, I became the first general editor of the SAML specification. This is an OASIS specification—the Security Assertion Markup Language. It has become very successful, and it became very successful because a bunch of us—Prateek Mishra and Keith Mailer and many others—decided very early in the process that we were going to include absolutely nothing that could not be demonstrated to be essential. As a result, it was very simple. Hal Lockhart was able to draw a very elegant diagram of it that made it very clear to people, and it became successful.
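
The RAD interface Blakley mentions really is small: one question, one boolean answer. Paraphrased here in Java rather than CORBA IDL; the names approximate the spec's AccessDecision operation and are not quoted from it:

    import java.util.List;

    // An attribute asserted about the caller (a role, a group, a clearance).
    record SecurityAttribute(String family, String value) {}

    // The whole model: may a caller with these attributes perform this
    // operation on this (hierarchically named) resource?
    interface AccessDecision {
        boolean accessAllowed(List<String> resourceName,
                              String operation,
                              List<SecurityAttribute> attributes);
    }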

McGraw: Do you believe that in the quest to make security more usable, we should just focus most of our attention on simplicity?

Blakley: Yes, I believe that—in what may be the most radical way in the industry. I regularly tell audiences and our customers at Burton Group that no general-purpose device—and I mean this in a technical sense, a general-purpose computing system, meaning a true and complete computational device—can, in principle, ever be made secure. You laugh, but I'm perfectly serious.

McGraw: It's like perpetual motion. I laugh because I believe you. I believe you have one easy proposition on your hands.

Blakley: The proposition is very simple to state, right? By definition, a true and complete computing system has infinitely much behavior. Well, a secure computing system has a finite amount of desirable behavior, and therefore what's left over is this still infinite amount of undesirable behavior that you have to somehow prevent. And you prevent it not by the design of the system but by constraining the system after it's been designed and built.
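
Stated in rough set terms (an informal rendering, not Blakley's own formalism): if B is the set of behaviors a Turing-complete system can exhibit and D is the set of behaviors we actually want, then

    \[
      |B| = \infty,\quad |D| < \infty
      \;\Longrightarrow\;
      |B \setminus D| = \infty .
    \]

However D is chosen, the undesired remainder stays infinite, which is why the fencing-off has to happen after design rather than within it.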

McGraw: We build it so it can do some stuff, and then we get all mad when it does it.

Blakley: Right, we build it so that it can do everything, including everything we don't want, and then we build a set of safety interlocks that don't work, and we deploy it, and it hurts people. Not a surprise.

McGraw: I guess you started thinking about some of this in your famous paper "The Emperor's Old Armor" [http://portal.acm.org/citation.cfm?id=304855], which was published back in the IBM days?

Blakley: Oh, yes, that was published at my point of maximum depression. In 1994, everything was going wrong. Security was self-evidently getting much worse, which it still is. And '94 was not a good time for IBM, and I was at an Open Group meeting, and I was whining to Ellen McDermott that everything was horrible, and, in inimitable style, she said, 'Well, why don't you quit whining about it and do something?' And I thought, 'Well, that's actually good advice.'

McGraw: I thought I might read the manifesto from that paper because it's worth quoting. You said, 'No viable secure system design can be based on principles of policy, integrity, and secrecy because in the modern world, integrity and secrecy are not achievable and policy's not manageable.'

I suppose you were a curmudgeon before your time?

Blakley: Yes, very much. Well, second-generation curmudgeon, right? My dad's a cryptographer.

I think it's really true; it's demonstrably true. If you look at the systems that we have in operation, everything that we try to build to protect secrecy doesn't; and everything that we try to build that has good integrity, you know, that doesn't need a 'patch Tuesday' or any vulnerability disclosures, turns out not to have good integrity; and, in the huge majority of applications, we just use the default policy, which is inappropriate for the situation, and when we try to manage policy, we then have to buy another suite of tools that tells us what policy we have actually created, and we have to review them periodically to see whether or not they're out of sync.

You know, secure systems should be secure by default. They should be inherently secure. That is, they should be incapable of doing things which are not safe, and their default configuration should be one in which they're secure. We're not close to that these days, and we won't get close to it until we begin building special-purpose devices that have, as their only task, the preservation of a security property, and learning how to mix those things together with the general-purpose elements in ways that produce security.

McGraw: I suppose you can draw a clear inference to software security as a likewise doomed enterprise?

Blakley: Well, it's hard for me to know what software security means. I know what you mean by it, but the idea that we are going to teach programmers to use a general-purpose programming language to create true and complete systems that are secure is an incorrect idea.

We're not going to do that. Now, I don't mean that the enterprise is useless. Clearly, it's better to have programmers writing good code than bad code, so we should be continuing to teach them to do that. Not only that, but maybe a general-purpose programming environment is the ideal way to show people who don't know yet how hard security really is: hey guys, work on this for a while and see how you do.

I think the exercise is noble and valuable, but a general-purpose computer system produced using the best methodology we know how to design is still not going to be secure.

McGraw: Switching gears to politics. President Obama recently delivered a speech about cybersecurity, I guess based on the 60-day review that Melissa Hathaway performed. Do you hold out any hope that cybersecurity initiatives in the government can move past cyber platitudes into action?

Blakley: Oh, absolutely. I firmly believe that in the United States, the government is us. If we wake up and demand that something be done, something will be done. We have to demand that the right thing be done, and I don't think this is rocket science. The right thing is that we have to demand that people who produce computing devices that are unsafe and hurt people be held accountable for those failures.

People are not too stupid to build a safe computing device. They just haven't been focused on it in a way that deeply affects their livelihood and well-being yet. Policymakers absolutely have a role to play in providing security, and the role that they have to play is to construct a playing field in which the incentives drive us toward, rather than away from, production of secure systems.

McGraw: It just seems like the market pressures of the "invisible hand" toward the impossible to attain—faster, better, cheaper—outweigh any sort of policy wants these days.

Blakley: What that means is that under the current regime of incentives, we prefer to pay later rather than pay now. Well, paying later is often a lot more expensive than paying now.

McGraw: If you're not in office, it's way cheaper from a personal perspective.

Blakley: That depends, right? You might've thought that about financial regulation a while back, but even the people who put in place the system of regulation that failed in the case of the banking system have now lost 80 percent of their 401(k) value and a whole bunch of assets elsewhere, and it's a lesson that could conceivably be learned. But psychologically, it's a very hard lesson. Risk management studies consistently show that we always prioritize small, current gains over the possibility of large future losses.

McGraw: Do you think that liability shifts that calculus?

Blakley: Liability regimes can shift that equation if they're designed properly. But designing them properly is a subtle business.

McGraw: Here's a short question for you. How's privacy related to identity?

Blakley: Right. It would not take more than one or two seconds to talk about that. The thing about privacy [is that] people always confuse privacy with secrecy, right? Because the cryptographers got in there and started doing their mischief before we really thought about the problem from a technological point of view. I'll give just two brief examples to illustrate the tenuous relationship between secrecy and privacy and between identity and privacy.

The first example is, let's say that you know something about yourself which you would prefer to keep private. Well, until you tell somebody else, you don't have a privacy problem. You know something about yourself, and you can just keep your mouth shut, and you're fine. Privacy rights only [come into play] when we interact socially with people who know things about us that are sensitive. Clearly, it's not about keeping secrets; it's about sharing information in a way where its sensitivity is respected by those we share it with.

The second example that I like to give is, let's imagine that a letter is delivered to your house from a sexually transmitted disease testing clinic. The letter could, in principle, have on it your name and address and the name and return address of the facility. Well, it's going to have your name and address, or you're not going to get it, and you probably want to get it. The only personally identifiable information in the equation, namely your name and address, is going to be on the envelope. Keeping that secret or redacting it or transforming it in some way isn't possible because otherwise, you don't get the letter. On the other hand, the return address of the business is not personally identifiable information, right? It's not about you, it's about the business. It can be taken off, and the letter can be delivered in a plain brown envelope—and that does go some way toward protecting privacy. If you think it's just as simple as secrecy of personally identifiable information, you get it wrong.

McGraw: Interesting. The last question for you—you're credited in IMDb with making possible the film Perils in Nude Modeling. Do tell.

Blakley: 'Making possible' would be too strong a word. I provided some funding for some production. This was a student production by some people I know at the University of Texas. It's a short film that was produced as a senior project, and I know both the director and some of the staff, and also some of the people who starred in the movie. I recently—along with my two sisters and other family members, my two kids—entered the Austin 48 Hour Film Project [www.48hourfilm.com/austin/]. We produced a film for that project in which we drew the horror genre, and the premise of the film is that Schrödinger's cat has nine lives and comes back and does the experiment on him [Schrödinger].

McGraw: Well, I hope we all get to see it someday on the Net.

Blakley: It will undoubtedly be infesting a YouTube near you sometime soon.
