Show 147: Interview with Kathleen Fisher

Kathleen Fisher is a professor and Chair of the Tufts Department of Computer Science. Previously, Dr. Fisher was a Program Manager at DARPA, where she started and managed HACMS and PPAML. She also has been a Faculty Member at Stanford, and a Principal Member of the Technical Staff at AT&T Labs Research. Kathleen's research focuses on advancing the theory and practice of programming languages. Recently she's been exploring synergies between machine learning and programming languages with an emphasis on building more secure systems. Dr. Fisher is an ACM Fellow. She's a recipient of the SIGPLAN Distinguished Service Award, vice chair of DARPA's ISAT Study Group, and a Trustee at Harvey Mudd College. Kathleen holds a B.Sc. in math and computer science, and a Ph.D. in computer science from Stanford. She lives with her husband in Cambridge, Massachusetts. Her daughter Elaine is in grad school.

Listen as Gary and Kathleen discuss scientific research versus hacking "research," programming languages and software security, hacking (or not hacking) autonomous helicopters at DARPA, why machine learning looks pretty similar to how it looked 25 years ago, and more.


Transcript

Gary McGraw: This is a Silver Bullet Security Podcast with Gary McGraw. I’m your host, Gary McGraw, vice president of security technology at Synopsys and author of “Software Security.” This podcast series is co-sponsored by Synopsys and IEEE Security & Privacy magazine. For more, see www.computer.org/security and www.synopsys.com/silverbullet. This is the 147th in a series of monthly interviews with security gurus, and I’m super pleased to have today with me Kathleen Fisher. Hi, Kathleen.

 

Kathleen Fisher: Hi.

 

Gary: Kathleen Fisher is a professor and chair of the Tufts Department of Computer Science. Previously, Dr. Fisher was a program manager at DARPA, where she started and managed HACMS and PPAML. She also has been a faculty member at Stanford and a principal member of the technical staff at AT&T Labs Research. Kathleen’s research focuses on advancing the theory and practice of programming languages. Recently, she’s been exploring synergies between machine learning and programming languages with an emphasis on building more secure systems. Dr. Fisher is an ACM Fellow. She’s a recipient of the SIGPLAN Distinguished Service Award, vice chair of DARPA’s ISAT Study Group, and a trustee at Harvey Mudd College. Kathleen holds a B.Sc. in math and computer science and a Ph.D. in computer science from Stanford. She lives with her husband in Cambridge, Massachusetts. Her daughter, Elaine, is in grad school. Thanks for joining us today.

 

Kathleen: My pleasure.

 

Gary: So you’ve been doing scientific research for many, many years. I’m interested in your view of research of the DARPA variety in science land versus “research” of the Black Hat / DEF CON variety. So what are your thoughts on all that?

 

Kathleen: I think that DARPA does really interesting project-focused research. So DARPA’s charter is to explore and find potential for surprise, right? We don’t want to be taken by surprise, and we’d like to be able to take our adversaries by surprise. So DARPA spends a lot of energy thinking about where that potential for surprise might be in a bottom-up organizational structure. Program managers are hired for their potential to find interesting surprises and then explore the space around that potential surprise. So unlike, say, the National Science Foundation, whose mission is just to advance the state of knowledge, DARPA is really about that surprise and taking technical risk off the table in a project-focused way. DARPA identifies some problem area that is important for national security where the state of technology is shifting, so the landscape might be radically different in a few years, and dives in to explore what’s possible, what’s not possible, and what it takes to get there.

 

Gary: So DARPA is interesting in science land in its own right, and differs from, sort of, you might say, “normal” scientific research a little bit because it’s driven by a mission.

 

Kathleen: Yeah, it’s driven by a mission, right. So I think both NSF’s and DARPA’s focuses are important and complementary, right? NSF is about advancing science for the sake of science, which is fantastic and really important. DARPA is about solving particular problems, and in the process, almost always advances the state of knowledge as well, because if we already knew how to do everything, it probably wouldn’t be on that frontier of surprise that DARPA is focused on.

 

Gary: So now let’s talk about the “research” that’s done by “researchers” at Black Hat and DEF CON. You know, the hacker people.

 

Kathleen: Yeah, OK.

 

Gary: Contrast that to, kind of, science research. This is a thing that I try to do with both sides of the debate—or not debate, just the divide. Maybe to spur your thinking, you know, science based on literature knows where it came from and has an agenda for moving forward. And some of the hacking research is very much hands-on, uses real systems, but, you know, is often not quite as disciplined. So I’m interested in your view of whether/how they fit together.

 

Kathleen: It’s interesting. Both are actually really valuable. So, like, from my experience at DARPA, one of the things you have to do to convince DARPA to invest money in an area is show that there are problems. And the kinds of experiences that come out of Black Hat show examples where “Oh, right, that system which we really would like to be secure isn’t actually at all secure. And here’s how some really clever individual was able to go off and figure out how to hack into it.” Those kinds of, you know, one-off experiments that require really deep expertise and knowledge are really helpful data points for spurring “Oh, we need to go be more systematic about how we build security into these systems.” So in some sense, the two kinds of research can be really complementary. The sort of practical “in the weeds of the reality of today” work that often gets published at Black Hat is really helpful for showing where there are big weaknesses. And the longer-range “OK, what are we going to do systematically to raise the standards so that it’s harder to find those kinds of vulnerabilities?” is something that is done in a more systematic, literature-based kind of approach.

 

Gary: Yeah, that makes sense. Slightly different topic: I grew up on Scheme and wrote my ridiculous thesis code in Scheme. So what’s your favorite programming language? I know you’re a hugely active programming languages person.

 

Kathleen: Oh, yeah. My favorite programming language, I’m kind of torn. I like functional programming languages a lot. I like the connections with mathematics and the really crisp, clean way you can specify certain kinds of functions that way. ML is a language that I grew up with that I really like. I like the pattern matching and the recursive functions, the way of structuring your code that way. Not so good for user interfaces, but most of the code that I have to do isn’t user interface–focused. But ML is kind of not super actively being developed anymore, and most of the action on the programming language research side is now in a language called Haskell, which also has some adoption in the commercial world. I like a lot of the features of Haskell. I like how it’s open and exploring the frontier. It’s based on what’s called lazy evaluation, which means that the order in which parts of a program are evaluated is determined by when they’re actually needed to compute an answer for the user. And so it’s really hard to predict when things are going to be executed, which makes it really hard to predict how much memory or time it’s going to take, which is a pretty significant downside. So I’m kind of conflicted between those two, ML and Haskell.
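A minimal sketch in Haskell of the two features Kathleen mentions, pattern matching with recursion and lazy evaluation (illustrative code, not from the interview); only the five squares that main demands are ever computed, even though the input list is conceptually infinite:

    -- Pattern matching on a recursive function over a list.
    squares :: [Integer] -> [Integer]
    squares []       = []                  -- the empty-list case
    squares (x : xs) = x * x : squares xs  -- the head/tail case, recursively

    main :: IO ()
    main = print (take 5 (squares [0 ..]))  -- laziness: prints [0,1,4,9,16]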

 

Gary: How about Moby? Tell us about Moby.

 

Kathleen: Moby is a language that John Reppy and I worked on just after I finished my Ph.D., looking at combining the strengths of object-oriented and functional programming into one system using structural subtyping in addition to declared subtyping. With structural subtyping, you just look at the type and see if, you know, one type has everything that the other type needs in order for objects of the one to pretend to be objects of the other, versus sort of nominal subtyping, where what matters is what you declared. It’s like interfaces in Java have structural subtyping, versus classes in Java, which have declared or nominal subtyping. So Moby was working on having both of those existing in the same framework. Scala is probably the most similar language now to the work that John and I did on Moby.

 

Gary: Yeah, interesting. I miss my programming languages days. They were long, long ago.

 

Kathleen: Yeah, I really like programming languages. One of the things I like about the field is that you can do theoretical work, and design work, and implementation work, and experimental work, and applications work all within the same project. Because you can develop a language, and explore its theory, and design what it looks like, and then build applications and measure how it’s doing. So you don’t get stuck in only proving theorems, or only building, writing code, or only doing measurements. You get to do all of them together in a coherent way.

 

Gary: Yeah, that’s cool. So like me, I think you believe we can build systems that are more secure if we really work on it.

 

Kathleen: Yeah, if we wanted to, we could, for sure.

 

Gary: And you worked on HACMS at DARPA, and that sort of demonstrated one way to go about this. So what do you think is holding us back from building systems that are secure?

 

Kathleen: What is holding us back? So, I mean, on the one hand, it’s a really hard problem, right? Security, to get it right, you have to get so many things right at the same time. An attacker just has to find one way where you screwed up. But that doesn’t mean we shouldn’t try to get all those things right. HACMS was really focused on vulnerabilities that come about through errors in the implementation, so bugs in coding. And the goal was to use mathematically based techniques, formal methods, to produce code that comes along with a proof of correctness. So if you can prove that correctness, then it doesn’t have memory-safety kinds of vulnerabilities, for example. And we showed that we could do that, not for a hundred million lines of code but for, you know, smaller systems that still had security relevance.

 

So what’s holding us back? You know, one is that the problem seems so daunting when you get to a hundred-million-line codebase that people sort of say, “Well, just don’t bother at all.” And I think that’s not the right response. Even if we can’t secure all of a hundred million lines of code, it doesn’t mean we can’t reasonably secure pieces of it that have a disproportionate effect on the overall security of the system. So part of it is it’s a really hard problem.

 

Another part is we don’t have proper accounting. The people who make the security bugs and who cause the problems typically don’t pay the bill for cleaning them up. And so there’s…incentives are misaligned. It’s really hard to count the actual cost of the security cleanup, so it’s hard to know, kind of, how much money is spent on this problem. The sort of thought experiment is, what if you had all that money and you distributed it to the people who are writing the software and told them, “You must use this money to fix these bugs”? How much better would software be in that case? So I think if our incentives were more properly aligned, we would have better-quality software.

 

I think also the downstream costs to consumers are probably going to go higher as more and more security vulnerabilities happen and more information gets stolen, which might cause consumers to decide, “Maybe I’m willing to wait another three months for my new app update in exchange for having that code be written to a little bit higher quality.” So that’s a third impediment: the sort of demand for new features instantaneously. In a competitive market, it means companies are much more interested in delivering those new features as fast as possible and not worrying so much about what the security consequences of those features might be down the road.

 

Gary: Absolutely. And I think you’re right about the economic incentives being misaligned. One way to tie those together is to have the maintenance budget be computed as part of the total cost of software creation so that when people hit their date, they also have to estimate what it’s going to cost to do maintenance over the next year or two. And if they’re wrong, they get dinged, and if they’re right, they get a bonus. And you know, Phil Venables did that at Goldman a zillion years ago, and that worked pretty well.

 

Kathleen: Right. I think one of the challenges with that is the sort of uncertainty of when a particular, you know, latent exploit will be triggered. You could have a system that has a huge vulnerability in it that nobody’s found or exploited for a really long time, and so the cost has been essentially zero. And maybe it will never be exploited, so the cost is always zero, versus, you know, something’s been sort of puttering along just fine for, you know, years. And all of a sudden, there’s a really clever hacker who figures out how to exploit it, and now the cost is astronomical. How do you build that into a model to estimate what that maintenance budget should be? It’s like the probability of finding it versus the damage if you find it. That probability is super hard to calculate, and that makes it really hard to reason about.

 

I do think, you know, insurance companies are starting to offer insurance for this kind of thing. So they’re starting to develop models that they’re confident enough in that they can sell products based on. But I think that we’re a long way from a mature insurance model that might help align those incentives.

 

Gary: Yeah, I agree with that. Let’s talk about tech transfer a little bit. So in my experience, it takes about a decade, a huge bucket of money, and then some very passionate advocate to transfer tech from the lab to the commercial world. What do you think?

 

Kathleen: Yeah, I think that that’s maybe even optimistic. It depends on how aggressive the tech change is. So, like, the tech that was developed in HACMS, the formal methods–based techniques, is a pretty huge divergence from normal software development practices. So it probably will take more time than that, unfortunately. Particularly for a technology that isn’t immediately tied to a financial return on investment. If you have new tech where investors can see, like, “Wow, as soon as we get this deployed, we’re going to get a 10-times return on investment,” that goes out very quickly. But tech that is, you know, “Yeah, if you use us, you possibly won’t have to spend a billion dollars down the road” is a much less compelling sell. And so it takes a lot longer, and you need the kind of, you know, external motivators to say, like, “Oh, this is why I need to go ahead and adopt this technology.”

 

And that’s part of what we were trying to do with HACMS. So in the program, we had researchers build tools and techniques that allowed them to build software for vehicles that red teams had a much harder time breaking into, right? The quadcopter, the red team couldn’t break into it with six weeks of full knowledge of the system. From that quadcopter, which was an open source quadcopter you can buy from Amazon, the technologies got ported to Boeing’s Unmanned Little Bird, where Boeing researchers—not formal methods researchers but Boeing aviation engineers—took those tools and techniques and applied them to the Unmanned Little Bird. And then the red team tested that system and, again, was not able to break in or disrupt the operation of the helicopter, even having full access to the system and full knowledge of the system. They were so confident that they allowed the red team to attack the helicopter while it was in flight.

 

Gary: Yeah, I heard that. And they also gave them the code for the camera.

 

Kathleen: Right, exactly. They had root access to the partition for the camera. They had all the knowledge, and all they could do was crash the partition that was running the camera. And then the master partition noticed that the camera partition was down and restarted it. So the effect was the camera was down for like a minute, and that was it. So that shows transfer from the formal methods researchers to the aviation engineers.

 

And then, in phase 3, Ray Richards, who was the program manager then, spent a lot of time working with transition partners in the military (who have different motivations than the commercial sector: they want to make sure that their platforms are under their own control and not under adversaries’ control), demonstrating successful transition to a whole bunch of different kinds of military systems. You know, big trucks and various kinds of quadcopters and various kinds of space technology, for example.

 

The long-term goal there is the hope that Pentagon procurement officers understand that this technology is possible, so when they’re writing their requests for proposals, they put in requirements like “You must demonstrate that this part of the system has these properties in a machine-checked way.” And then, you know, the companies that normally bid on those calls come and say, “Oh, that’s ridiculous, we can’t…no one can do that. You have to take that out. No one would be able to respond…”

 

Gary: You can say, “Can so.”

 

Kathleen: The procurement officer can say, like, “Well, these, you know, grad students were able to do it. Why can’t your highly paid engineers replicate what a few grad students did?” And so the idea is to keep those clauses in the calls and have the companies say, like, “Well, I want to get that contract. So I’m going to go hire some people, and build some tools, and figure out how to do this stuff.” And then, you know, they build up their capacity, successfully bid, successfully deliver product, and then create a virtuous cycle to the point where the companies have sufficient technology expertise that they can start saying, “Well, you know, in addition to providing this for military customers, maybe there are medical device customers. Maybe we don’t want to have insulin pumps and pacemakers vulnerable to remote hacking.” Those are also systems where these kinds of techniques are really appropriate because the codebases are relatively small and the cost of a vulnerability is super, super high. So that’s the dream. Like, who knows when it will actually happen?

 

Gary: It all happens. It just takes longer than we want it to. We’ll be right back after this message.

 

If you like what you’re hearing on Silver Bullet, make sure to check out my other projects on garymcgraw.com. There, you can find writings, videos, and even original music.

 

What’s the research “valley of death,” and how do we overcome it? And what did you do as a PM about that?

 

Kathleen: Yeah. So part of what we did was we structured the program so that there were paired systems. There was a research platform that everybody had full access to: there were no ITAR restrictions, there were no classification restrictions, everybody had full access. So that allowed a kind of maximally free flow of ideas. But then there was a paired sister system that was proprietary. So the Unmanned Little Bird, for example, was proprietary to Boeing, and really only the Boeing engineers had access to that system. And so to deliver the stage 2 and stage 3 results for DARPA, the Boeing engineers had to be able to use the tools and techniques on the Unmanned Little Bird platform because the knowledge had to go…

 

Gary: That made a very nice threshold, yeah.

 

Kathleen: Yeah, exactly, it had to go from the heads of the researchers in the universities into the heads of the very smart and very well-trained engineers at the company who were not formal methods experts, and they had to apply the tools and techniques. So that’s, you know, stage 1. So there are people at Rockwell Collins, there are people at Boeing and other companies who had to learn how to do that on their own. And now the hope is that that’s enough of a beachhead to get more uptake in those companies. I think the long-term value proposition is reasonably clear, particularly for companies that supply sensitive systems: they really don’t want to be introducing vulnerabilities, for their, like, long-term reputation and viability. But there aren’t a huge number of formal methods researchers, and the tools are often, kind of, somewhere between alpha and beta stage. The documentation is kind of poor.

 

Gary: “Researchware” is what we used to call it.

 

Kathleen: Yeah, researchware, exactly. So you still have all of those challenges. But I think the sort of proof of concept that HACMS represented, saying, like, “No, really, you can secure something as large as an autonomous helicopter big enough to fly two people” to the state where, you know, professional red teams can’t break into it in six weeks despite having full access, full knowledge of the system, and root access to a partition on the system—they didn’t have to get into it; they were given that access—is, you know, pretty impressive, and hopefully silences some of those doubts that say, “Oh, no, this is not possible. Why bother?”

 

Gary: Yeah, what a cool story. So in addition to HACMS, you also worked on PPAML at DARPA. What progress has been made in the last 25 years in machine learning? Or are we just basically taking the stuff that we did 25 years ago and putting it on way better computers?

 

Kathleen: Well, I do think a lot of the success of neural nets is we’re putting it on way better computers.

 

Gary: Me too. I’m glad you agree with that. I was a little worried about the answer to this one.

 

Kathleen: Yeah, I mean, I think, actually, in some sense, parts of machine learning or AI and parts of formal methods share some characteristics, in that both of them turn out, at least, to have a core component that is the searching of a vast space. And so when you have computers that have a lot of memory and a lot of cycles, you can search much, much bigger spaces than you could 50 years ago. So I think a lot of the initial failures of AI and formal methods were not because the approaches were wrong but because computers weren’t fast enough yet.

 

Gary: Absolutely agree, yep.

 

Kathleen: And we’re kind of in a golden period right now for both machine learning and AI and formal methods. And the computers are powerful enough that we can do really interesting things now that we couldn’t before.

 

Gary: Yeah. I saw that you recently published something on genetic algorithms, and that’s another technology that is just ripe for the picking right now, I think.

 

Kathleen: Yeah, we had fun with that. That’s a project with an undergrad at Tufts. Like I was talking about earlier with Haskell, it has lazy evaluation, and that means it’s really hard to figure out when exactly things are going to execute, which means you can have a lot of unpredictability about the memory and time resources required for your program. Bang annotations are a way of having the programmer tell the compiler, like, “I don’t care what your lazy evaluation says. Execute this now.” But the downside with that is it’s really hard to figure out where to put them, actually, to get performance improvements. It’s something that sort of expert Haskell programmers know how to do, but the everyday, you know, Jane Doe is not so good at. And so this project was using, as you said, genetic algorithms to come up with suggestions for good places to put in those bang annotations, and we saw pretty significant performance improvements.
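A minimal sketch of what such a bang annotation looks like (illustrative Haskell, not code from the paper): without the bang, laziness piles up a chain of unevaluated additions; the bang forces the accumulator at each step, keeping memory use flat.

    {-# LANGUAGE BangPatterns #-}

    -- The bang on acc forces the accumulator at every step instead of
    -- letting lazy evaluation pile up unevaluated (acc + x) thunks.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

    main :: IO ()
    main = print (sumStrict [1 .. 1000000])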

 

I actually have a paper submitted right now that follows on from that. So the downside of that approach is it introduced a lot of bangs, and for each bang you kind of have to reason about, well, is this safe or not? So the follow-on paper that’s under review right now is figuring out how to minimize the number of bangs. It maintains most of the performance and reduces the number of bangs inferred by about 90%.

 

Gary: Wow, really cool. So how do probabilistic programming languages work?

 

Kathleen: Yeah, so with probabilistic programming languages, you build a model of the system that you’re interested in as a program. And one of the things you can do is say, like, “Well, I don’t know what X is exactly, but I think it’s drawn from this kind of a distribution.” And then you can just use X as a normal value. And then you can run the program forward, and when it gets to that X, it says, “Oh, OK. X is drawn from, say, a normal distribution with these parameters. I’m going to say X is 10 for this run.” And then you get to the end, and you get the answer out.

 

So running it forward isn’t actually the interesting direction. The interesting direction is you give it a bunch of data and you say, “Given that I’ve observed all of this data, what’s the most likely value for X?” So you kind of run it backwards to get those answers. And so it’s a nice way of raising the level of abstraction of machine learning problems.
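A minimal sketch of the “run it forward” direction in Haskell (illustrative only; it assumes the standard random package, and the model and its parameters are invented for the example). X is drawn from an approximated normal distribution and then used like any ordinary value; the backward, inference direction she describes is what a real probabilistic programming runtime automates.

    import Control.Monad (replicateM)
    import System.Random (randomRIO)

    -- Crude normal sample: the sum of 12 uniform(0,1) draws has mean 6
    -- and standard deviation 1 (central limit theorem).
    sampleNormal :: Double -> Double -> IO Double
    sampleNormal mean sd = do
      us <- replicateM 12 (randomRIO (0, 1))
      pure (mean + sd * (sum us - 6))

    -- Toy model, run forward: draw X ~ Normal(10, 2), then use X normally.
    modelForward :: IO Double
    modelForward = do
      x <- sampleNormal 10 2
      pure (3 * x + 1)

    main :: IO ()
    main = modelForward >>= print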

 

So like most machine learning these days is focused very much on sort of matrices of numbers, where you have a model, but it’s a very, very weak model, and you don’t really understand how the pieces fit together. Like this is the explainable AI problem. Why is that a cat 0.23? Like, 0.23 means nothing to me. Like, tell me why it’s a cat, right? And being able to explain, “Well, it’s a cat because it has whiskers, it has ears, it has fur, and it’s about this size”—those are the kinds of things that you might put in a model. And so the probabilistic programming languages are designed to make it easy for you to build the model and then dump in the data and figure out, like, what part of the space of models that the probabilistic program represents is the most likely.

 

There’s actually some fair controversy in the machine learning community about how useful models are. There’s definitely a school that thinks models are pernicious because they bias you in favor of your prejudices that have been embodied in the model, instead of allowing you to be freely guided by just the data. My personal assessment is that right now, because there’s sort of so much low-hanging fruit, we can make do with really very simple or almost-nonexistent models. But as that low-hanging fruit goes away, we’re going to need to encode higher-level understanding of what’s going on to be maximally effective.

 

Gary: Yeah. My very first journal publication was about using CBR to explain genetic algorithm results and the schema theorem. Like, that was 30 years ago or something. The probabilistic programming reminds me just a smidge of simulated annealing for some reason, I guess because of the pushback or the backflow or something like that. It’s very cool.

 

Kathleen: Yeah, I think that they’re related.

 

Gary: So tell me what role you think programming languages play in software security, and then we’ll sort of put that to bed.

 

Kathleen: Oh, OK. Well, the language that you use has a huge impact on what you’re able to do effectively and efficiently and kind of what you leave lying around. Like C is very good at expressing low-level…I mean, there’s no abstraction barrier. It gives you direct access to the machine, which is very effective and important when you’re programming low-level machine code. But it’s notoriously difficult for people not to make mistakes with buffers and overflows.

 

Gary: Something about like, you know, power saws and kindergartners.

 

Kathleen: Yeah, yeah, exactly, something like that. And so like there’s been a bunch of work on, how can you make a better C? Something that still gives you access but has a lot more seatbelts and suspenders on it too, so you can get efficiency, but you’re less likely to make mistakes? Rust is an example of that. In general, the sort of overhead, the cognitive overhead, of using the fancier type systems has not yet proven worthwhile. Although I think that goes back to our earlier conversation about how we’re not appropriately accounting for the cost of the security vulnerabilities, and so we’re not appropriately rewarding the richer set of languages that make it easier.

 

I think, you know, also different languages have different targets, right? So C is very good at the low-level descriptions, but it’s not so good for higher-level programming. An example would be like writing a parser, right? So parsers are all over the place because everywhere we get data from a wire, we have to convert it into a representation that’s suitable for the program that’s going to use that data. And, you know, people who write parsers in C should just stop writing parsers in C. Like the number of vulnerabilities that come about because somebody wrote a parser in C is really high, actually. And so part of what you need to do with languages is choose the right language for the task that you’re working on.
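To make the contrast concrete, a minimal, illustrative sketch of a parser written in a higher-level language (Haskell, with an invented key=value input format): there are no manual buffers or length calculations to get wrong, which is where parsers written in C typically pick up their memory-safety vulnerabilities.

    import Data.Char (isAlphaNum)

    -- Parse one "key=value" line; no buffer or length arithmetic anywhere.
    parsePair :: String -> Maybe (String, String)
    parsePair s = case break (== '=') s of
      (key, '=' : val) | not (null key), all isAlphaNum key -> Just (key, val)
      _                                                     -> Nothing

    main :: IO ()
    main = print (map parsePair ["port=8080", "no equals sign"])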

 

When I was in grad school, I remember, you know, a fellow grad student asking about what my research was, and I was, like, oh, I said, “Programming languages.” And he gave me this look of disbelief, like, “Why would you ever be doing research in programming languages? We have C++. What more could you possibly want?”

 

Gary: Oh, good Lord, one of those.

 

Kathleen: Yeah, I mean, this is when, you know, you basically had to write code in the language of the operating system that you were running on, because everything ran on a single machine in a monolithic way. And if you weren’t in that language, you paid a huge performance cost, and performance was everything.

 

Now, with distributed computation and the web, many, I mean, almost every application is written in a whole smorgasbord of different languages. And if you’re already, like, passing your messages back across the network, it really doesn’t matter if one system is running in one language and the other system is running in a different language, because the overhead of changing from one language representation to the other doesn’t even register on the scale when you’re sending a packet across the network and have to serialize and deserialize it anyway. So there’s a lot more…

 

Gary: We’re waiting for the monkey to move the mouse.

 

Kathleen: Exactly. So there’s a lot more flexibility these days. There’s a much more open ecosystem. So like students now are not averse to learning many languages, because they know they’re going to have to learn many languages no matter what they end up doing. So that’s a really exciting development in the last 15, 20 years in programming languages, going from a sort of “You will write in C++” to, you know, “What’s the language that we’re using here for this purpose, and why are we using that?” And that opens up the ability to use languages that are better, which have better security properties, for the particular task. So like don’t write your parsers in C. Write your low-level device driver in C, and then use tools to prove that it always terminates.

 

And so one of the interesting things about C, actually—on a slight digression—is that it’s often actually easier to prove properties of C than to prove properties of higher-level languages, because the higher-level languages depend upon a garbage collector and other runtime pieces that are written in C. So you have to be able to prove C code correct before you can prove Java code correct.

 

Gary: Yup. I remember that from the very early Java days. We were actually looking at the virtual machine, which was, of course, written in C. So last topic: You’ve been active in CRA’s Committee on the Status of Women, and obviously, you are a very accomplished woman in technology. So the question’s kind of dumb but important: What role can men like me play in creating a welcoming environment for women in tech?

 

Kathleen: Oh, right. Well, sincerely asking questions like that is a really good start. I think, you know, arguments that say, you know, “Women just aren’t as good” or “Women, for some reason, aren’t interested” are not helpful. We don’t have a way of testing that systematically. Like, we can’t take an 18-year-old woman who hasn’t been subject to 18 years of culture saying “You shouldn’t go into computer science” and run a sort of experiment that says, “Well, since that woman didn’t decide to go into computer science, it’s because she wasn’t good enough or because she didn’t want to.” You can’t erase those first 18 years. So it helps to have majority populations acknowledge and recognize that, and to say it out loud, so it’s not always the women who are saying, “Wait a minute, you don’t have any scientific basis for making that claim.” When you say you have a scientific basis for making that claim, you’re not being helpful, because if you think about it, you can’t possibly have a scientific basis for making that claim; we don’t have the ability to undo those 18 years of culture influencing our young women. So, you know, basically, standing up and saying, “I care about diversity and inclusion. I care about having an open door. Anybody who’s interested can come in, and we’ll figure out how to get the skills and training necessary for anyone who wants to succeed” is really very helpful.

 

Gary: Cool. Last question, and it’s a flyer. So I recently read this book called “Luna: Wolf Moon” at the beach, which is really new sci-fi. So I’m wondering—I know you read some sci-fi—what’s the best sci-fi book that you’ve read recently?

 

Kathleen: Hard question. I mean, probably the best I’ve read recently is Patrick Rothfuss’s “The Kingkiller Chronicle.” I just recently reread book 1 and book 2, and I’m desperately waiting for book 3 to come out. I mean, book 3 has been coming for a really long time. But one of the reasons why it’s worth waiting for is how carefully crafted the books are. So if you go back and read book 1, you see all of these things that were foreshadowing or giving you more information about things that you knew nothing about until you read book 2. How tightly coupled everything is gives a lot of encouragement and excitement about what’s going to happen in book 3, and there’s a lot of meat there to dig into. Like, what’s really going on? The story is happening at one level, but there are multiple other levels happening at the same time, and when you study it, you can see what’s going on. So there’s lots of motivation to speculate and talk about it and figure out what you think’s happening, and there are lots of online forums of people talking about this and discovering clues. So it’s not just the experience of reading the book but sort of participating in the full community.

 

Gary: Cool. Well, thanks. This has been really fun.

 

Kathleen: Yeah, indeed. Thanks for having me.

 

Gary: This has been a Silver Bullet Security Podcast with Gary McGraw. Silver Bullet is co-sponsored by Synopsys and IEEE Security & Privacy magazine and syndicated by SearchSecurity. The March/April issue of IEEE S&P magazine features our interview with Bank of America CISO Craig Froelich. The issue is also devoted to hacking without humans and covers the DARPA Cyber Grand Challenge, focused on both offense and defense. Show links, notes, and an online discussion can be found on the Silver Bullet webpage at www.synopsys.com/silverbullet. This is Gary McGraw.

 

