Posted by Robert Vamosi on April 20, 2016
Procurement language in software: the concept of holding someone contractually liable for the statements they make about the quality, reliability, and—most of all—security of the software they are providing. Many industries have specific hardware procurement requirements for parts introduced into their supply chains, but what about software? Until recently, there has not been real pressure to have supply chain software vendors attest to the validity of their wares. But with the introduction of software into automobiles, television sets, and medical devices, software integrity has taken on greater meaning.
Software today is assembled with up to 90% of the final software code coming from a combination of open source and third parties. Doesn’t it make sense to test and know what’s inside that code?
In this two-part Code Review podcast, I talked with Mike Ahmadi, director of critical systems security at Synopsys, and with George Wrenn, CSO and vice president of cybersecurity at Schneider Electric, about the purpose of procurement language. In Part 1 this week we talk about unintended features in software, both malicious and nonmalicious, authorized and unauthorized, and how to establish trust. [See Part 2 here.]
You can listen to the podcast on SoundCloud or read the transcript below.
The conversation starts with Mike Ahmadi describing how he came to write the procurement language document used by Synopsys.
Ahmadi: I started working on procurement language with the Mayo Clinic almost two years ago. They provided me with the procurement language they were using in the medical device space, the requirements they give their device vendors to show that a product has been adequately tested and can give them some level of cyber assurance for devices being used on their network and in the hospital. They created a document, released it publicly, and said we could use it as we see fit. Then, approximately one year ago, after the now infamous hack of the Jeep Cherokee by Charlie Miller and Chris Valasek, we were approached by the automotive industry. They told us: we are spending quite a bit of time and effort on cyber security, but we deal with an extended supply chain, and in the case of the Jeep Cherokee it was actually a product made by Harman that was attacked. They asked: how do we provide the supply chain with some sort of language that they can test against, and that provides us with some level of cyber assurance beyond just them saying that their product is secure? So I created a document which laid out various testing methodologies and language, modeled on the initial language from the Mayo Clinic, and gave it to the industry, and they loved it. That eventually led to the creation of an SAE working group, the Cybersecurity Assurance Testing Task Force, which I now chair.
Vamosi: So, George, when you look at procurement language, what are some of the things that you feel have to be part of it for it to be an effective document?
Wrenn: As more and more of the parts and software that comprise our products are sourced from different geographies, different companies, and different rule sets, it's important that we have a universal set of procurement language in our contracts. We call it our brand label process: a set of required security activities, including, for example, a right-to-audit clause that we have to have in there so that we can go in and look at what they are doing. That's very important. The big thrust of the contract language is really trying to paint a picture around the things that would introduce unintended functionality at a subunit level, for things that go into our larger product assembly.
Vamosi: So you mention unintended functionality. There are different classes of unintended functionality. Can you explain a little bit about that?
Wrenn: So we have this idea of malicious versus nonmalicious unintended functionality. In some cases it's just an oversight by a programmer. In other cases there might be some testing hooks left in, or some embedded documentation in the code that, once compiled, someone could run strings on and pull out passwords or other information they should not have access to. Those are examples of potentially nonmalicious unintended functionality. The malicious unintended functionality tends to be things like remote kill switches in air defense systems, things that could shut down a SCADA system, shut down a plant, shut down a power grid. These would be, in our consideration, a class of unintended functionality that we do not want or intend in our product. We need to be very careful about the malicious. There's also authorized versus unauthorized. So it's a continuum of unintended functionality. Our mission at Schneider is to have no unintended functionality in our product, in our offering.
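Wrenn's strings example is easy to demonstrate. The sketch below is a hypothetical illustration, not anything from the podcast or Schneider's tooling: it mimics what the Unix strings utility does, scanning a compiled artifact for runs of printable characters, which is exactly how a hardcoded credential leaks out of a binary.

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Recover runs of printable ASCII from a binary blob, as `strings` does."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# A hypothetical compiled artifact: opaque bytes surrounding a hardcoded credential.
binary = b"\x7fELF\x02\x01\x00\x00" + b"admin_password=hunter2" + b"\x00\x8f\xfe\x01"

# The credential surfaces immediately, no reverse engineering required.
leaked = extract_strings(binary)
```

No special access is needed: anyone who can download the firmware or executable can run this scan, which is why shipped binaries should never carry embedded secrets.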
Ahmadi: To expand on what you're saying: if it is a function that could potentially lead to a hazardous situation when exercised by an unauthorized user, it could very well be something intended to be used only by an authorized user. So it is important to understand that some things are built into a system, and we run across this all the time: some people will tell us it is a feature, but the fact that an unauthorized person can access this feature makes it unintended functionality.
Wrenn: One of the things we'll see in a lot of legacy systems is security by obscurity, right? We see many examples in many software companies where they thought, oh, no one will ever look at that. Another myth that has historically been a challenge for the entire industry is this idea that, in order to be authorized to use some functionality, unintended or in many cases intended, like a kill switch, the operator has to be physically in front of the device or the control system to actually do it. The argument goes that if a malicious user or attacker has gotten that far into the heart of the plant or the automotive factory, at that point they might as well light the place on fire, right? They don't have to use cyber; they could use a baseball bat or a hammer and physically damage something and take out a system if they are that close. So you see those types of arguments being made. But I think a trustworthy system is like cryptography: it should be resistant to attack even when all the details are known and published. A cryptographic system like Rijndael, the cipher behind AES, has all of its specifications and source code published, and even then it is resistant to attack. So there should be no Easter eggs or hidden trap doors or things that certain people know about that can subvert the system. There shouldn't be properties of people, knowledge, or other things in the universe that allow those entities to execute unintended functionality. In cryptography that's done by publishing the full specification and source code, so a lot of eyes can go over it.
And it's designed to be resistant to attack even with full knowledge of the system. So as we look at the secure supply chain and at assembling complex products, we have to raise the bar: let's assume that the adversary has full knowledge of the entire stack, and then let's have a set of tests that look for unintended functionality or security weaknesses, authorized and unauthorized, and let's be able to test for that and verify it, so we can trust it because we verified it. Does that make sense?
Ahmadi: That's an argument I've been making lately in many of the discussions I have been having with people. There is a commonly used phrase out there today, "trust but verify," and I always ask, "What does that really mean?" You can either trust or you can verify. I say a more appropriate version of the phrase is trust because you verify. Right? I use the example of my children. If I ask them to clean their room, and they come back and say "Yes, it's clean," I walk over to their room and make sure it is clean, right? I trust them because I look at the room, see that it is clean, and thereby verify it. "Trust but verify" carries the notion that just because someone told you they did something, you can trust them; so where does the verification come in?
Wrenn: But I think there is more to it. Trust because you verify, in your example, is scalable. In the first couple of instances you peek in the room and see that, yes, they did clean their room. After that you can take a more automated approach and not go back to that room every time, because you've verified in the past that they follow the right practices and there is evidence of that. So you might have a randomized verification schedule, or you may decide not to verify at all, because every time you have verified, the room has been clean. In the supply chain there have to be some economies of scale, some automation: once you've repeatedly confirmed that someone follows the proper practice, you may use a more statistical method rather than going through a manual and laborious process each time. To me, "trust but verify" implies a laborious, repetitive, non-learning process.
Ahmadi: I will say that holds true only in certain cases, where I believe cyber security doesn't always fit. Number one, it holds true for a system of lower criticality. Despite all the practices we have around air travel, every single time someone flies a plane they go step by step and check off everything on a known checklist, regardless of how much they trust the organization that builds and maintains the airplane. And by the way, they find things quite frequently. I know this because I've had plenty of flights delayed for that reason. Why do they do that? These are very critical systems, and that's very important. The other thing is that it works well only for things that are relatively static. The problem with security is that it's dynamic. You can verify some things today and they could be good for a while, but a year from now the entire vulnerability and attack surface has completely changed, right? So it's less static. Depending on criticality, for a low-criticality system you can say, okay, I can lengthen the mean time between verifications, but for high-criticality systems you should be verifying as often as possible.
Wrenn: Let me address that point. As a pilot, I like that analogy, and I go through those very same processes, but let me give you an example of cockpit and checklist automation that gets at what you're saying. Some aircraft, like the Cirrus SR22 and others, have a flight management system with a menu-driven electronic checklist: you actually have to go through it to get to the flight director and the GPS before you can operate the airplane. When you first get into the plane you can use a paper checklist and miss things; this forces a level of automation in the checking, because you have to step through each item with a knob and click it off. We also touch each control: if we say "fuel open," the hand goes down and moves the fuel control to closed and back to open to verify that it is open, and then you click it off on the FMS and go to the next thing. So there are ways of tightening that up, making it stronger but also faster. It's like the Japanese idea of poka-yoke: you can only put the USB stick in one way; certain physical controls make it impossible to mis-operate the intended feature. The more poka-yoke, automation, and error reduction you can implement in the system, the more it improves trustworthiness while lessening the burden …
Ahmadi: … well …
Wrenn: … so that should be our objective. What is the goal of my trust system? Over time it should become more accurate and more automated.
Ahmadi: That's actually fantastic, and in fact I couldn't have planned this better if I'd tried to set it up myself, because that's really the essence of what we're talking about here at Synopsys. We provide automated testing tools. You could do all of this manually. You can manually go through code line by line. You can manually look up CVEs, CVSS scores, and CWEs to define your metrics. You can manually comb through and create test cases that fuzz something at a protocol level. Or you could connect to our automated testing tools, push the button, and wait. That's the whole point. And by the way, you are right; it actually improves your outcome. I won't say the software is error-free, but you dramatically reduce the number of errors when you automate the test tools, and you can get more done in a shorter period of time.
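Ahmadi's point about automating protocol-level test generation can be sketched in a few lines. The toy below is an illustration only, not Synopsys tooling: a deliberately buggy length-prefixed parser, plus a deterministic single-byte mutation sweep over a known-good input, a minimal stand-in for mutation fuzzing, which flags any input that fails with something other than the parser's documented error.

```python
def parse_length_prefixed(msg: bytes) -> bytes:
    """Toy protocol: a 1-byte length prefix followed by the payload."""
    length = msg[0]
    payload = msg[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated message")  # documented, intended failure mode
    # Latent bug: indexing an empty payload raises IndexError when length == 0.
    last_byte = payload[length - 1]            # (unused; simulates a real defect)
    return payload

def fuzz(parser, seed: bytes) -> list[bytes]:
    """Try every single-byte mutation of a known-good seed and collect inputs
    that fail with anything other than the documented ValueError."""
    failures = []
    for i in range(len(seed)):
        for b in range(256):
            mutated = seed[:i] + bytes([b]) + seed[i + 1:]
            try:
                parser(mutated)
            except ValueError:
                pass                           # expected rejection of bad input
            except Exception:
                failures.append(mutated)       # unintended behavior surfaced
    return failures

# 1,536 machine-generated test cases; the zero-length message crashes the parser.
crashes = fuzz(parse_length_prefixed, b"\x05hello")
```

A real fuzzer randomizes and coverage-guides this loop over millions of inputs; the point of the conversation is that the tool, not a human, enumerates the cases.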
Vamosi: In Part 2 of this discussion, we’ll discuss CVEs, context, and certifications in the procurement process.