Show 140: Adrienne Porter Felt discusses usable security at Google and web versus mobile permission models

Adrienne Porter Felt is a senior staff software engineer on the Chrome Security team, where she leads Google’s usable security efforts. Dr. Felt’s work spans front-end development of security user interfaces, experimental design, large-scale data analysis, and management. Previously, she was a research scientist on Google’s Security Research team. She has also worked as a security consultant at HP Enterprise Security. Dr. Felt earned a Ph.D. in computer science from UC Berkeley and holds a B.S. in computer science from the University of Virginia. She lives in California with her husband, Mark, and young son, Emerson.

Listen as Gary and Adrienne discuss usable security, web and mobile security indicators, browser warnings, permission models, and more. 




Gary McGraw: This is a Silver Bullet Security Podcast with Gary McGraw. I’m your host, Gary McGraw, vice president of security technology at Synopsys and author of “Software Security.” This podcast series is co-sponsored by Synopsys and IEEE Security and Privacy Magazine. This is the 140th in a series of interviews with security gurus, and I am super pleased to have Adrienne Porter Felt with me today. Hi, Adrienne.


Adrienne Porter Felt: Hi, Gary.


Gary: Adrienne Porter Felt is a staff software engineer on the Google Chrome Security team, where she leads Google’s usable security efforts. Dr. Felt does a mix of front-end work, building user interfaces, experimental design, large-scale data analysis, and managing. Previously, Adrienne was a research scientist on Google’s security research team and a security consultant at HP Enterprise Security. Dr. Felt earned a Ph.D. in computer science from UC Berkeley, where my friend David Wagner was her advisor. She also holds a B.S. in computer science from the University of Virginia. She lives in California with her husband, Mark, and her young son, Emerson. Thanks for joining us today.


Adrienne: It’s great to be talking.


Gary: So I’m really interested to know how you made the academic route versus corporate route decision as you launched your career after grad school. Was that an easy thing or a tough thing or an obvious thing?


Adrienne: That’s a really great question, and it felt like a very difficult decision at the time. I was very torn whether I wanted to pursue an academic career or an industry career. But I was really interested in this idea that, oh, wow, I could actually go make, you know, big immediate changes that billions of people would be impacted by. And so ultimately, industry drew me in because of the possibility for immediate impact. With that being said, though, I did start off, actually, on research at Google. So I still was participating in the research community for a long time. I’m now working pretty squarely in products, but I do like to still work with the research community. They produce lots of cool ideas that we’re able to bring into Chrome.


Gary: Yeah. And I’ve noticed that you’re still very active in the science community too, which we’ll talk about later. So you kind of have the best of both worlds, I guess.


Adrienne: I’m trying to. I’m trying to.


Gary: In my view, you had the dream team of advisors, starting with Dave Evans at UVA and including David Wagner. Both of those guys are amazing to me. I really like them both a lot. I was hoping that you’d graduated from the BACS program at UVA, because I helped Dave establish that. I didn’t know if you knew that or not. But what role did your advisors play in your formation as a leading researcher these days in computer security?


Adrienne: The relationship with an advisor is really important, and it was really important to me. So as an undergraduate, I honestly was a little bit lost. I wasn’t sure what I wanted to do, and frankly, I wasn’t always the best student. But David Evans invited me to do some research with him as a rising junior. It was the summer between the two school years, and it gave me an opportunity to see, oh, this stuff we’re learning in class, here’s how you use it. Here’s the impact it can have. Here is why you would actually need to apply that. No, not just for the sake of learning how to write an if statement or learning how to write a search algorithm, but here’s the application of it.


And I found that super exciting. It got me interested and excited about computer science as something that can be applied and used in many places. And he also gave me the basic foundational skills that I needed outside of the classroom to pursue research. Skills like, how do you write a paper? How do you define a research statement? So that was really important. And then moving on to graduate school, David Wagner was able to teach me, how do you reason about a problem that’s so big you can’t wrap your mind around it? What are the questions you need to ask? How do you break it down?


Gary: They’re both incredibly nice people too.


Adrienne: Yeah, they both are.


Gary: And hugely supportive. I can’t imagine picking advisors that are more kind of…I mean, they’re diverse, but they are cut from the same cloth in some sense.


Adrienne: I feel permanently indebted to both of them.


Gary: So professionally, you spend lots of time these days thinking about security indicators and browser warnings and permission models. So the overarching question that occurs to me is, how much control should users have when it comes to security decisions?


Adrienne: That’s an excellent question. I think that ideally users should have as much control as they want, which is why when we implement new warnings—like, for example, our malware warnings—we could make our lives easier by just saying, “OK, users can’t click through that warning at all.” That would protect everyone. But we don’t do that. We make it so that you can click through. We just make it so we give you as much information as we can.


Now, there is a bit of a tension here. There are cases where website owners through HSTS say, “We don’t want anyone to go to our site if their connection is not secure.” And we do honor that. That’s how HSTS works. We can enforce so that there has to be a secure connection. So there is a little bit of a tension there, but I think, whenever possible, I like to err on the side of giving people control even if in the end they might not end up secure but they’re making an informed decision.
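To make the HSTS mechanism concrete: a site opts in by sending a response header over HTTPS. A minimal example (the one-year max-age value here is just an illustration):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Once the browser has seen this header, it automatically upgrades plain-HTTP requests to that site to HTTPS and hard-fails on certificate errors, with no warning for the user to click through.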


Gary: Right. I guess the real tension comes with whether or not people can make an informed decision. I mean, some people can be informed all you want, and they just make the dumbest decision ever. And so I guess that’s part of it too, huh? Do you factor that in?


Adrienne: Well, I wouldn’t necessarily say that people make dumb decisions. They might make different decisions than I do, but that might be because they’re weighing the situation differently. Maybe they feel differently about the risk. Maybe they feel, “Oh, OK, you know, what’s the worst thing that happens? Someone is able to get control of my computer.” Maybe they don’t care too much about that. And you know, that’s not something that I can…you know, I would be really upset, but that’s not something that everyone necessarily cares about.


Some people think, “Oh, a virus. Whatever, I’ll just wipe my computer.” So you know, that’s not really a decision that I want to be in the position of making for people. But it is difficult, especially with topics like HTTPS. Connection security is a really nuanced topic. A lot of people who work in software engineering honestly are confused about it themselves. It’s a really difficult topic to get right. So it’s hard for people to understand.


Gary: Yeah. So it’s the understanding, I guess. And part of what you do is to try to make that understanding easier to grasp for mere mortals by having indicators that make sense, I suppose.


Adrienne: And one of the ways we tackle this problem is by trying to make it so people don’t have to make the choice. So for example, between HTTP and HTTPS, we’re pushing really hard to make it that every website is HTTPS so that people don’t need to think about the connection security indicator, because frankly, I don’t think people should have to learn about that.


Gary: Right. That makes sense. So how similar or dissimilar are mobile permission systems versus web permission systems? And should they all be the same?


Adrienne: They’re different across each platform right now. So on Android, there’s still a mix of run-time permissions and install-time permissions. They’re moving towards the run-time model in newer versions of Android, where you see the permission prompts while you’re using the application, although the original permission system was install-time based, where you saw all the prompts when you’re installing the app. Chrome has been historically and still is run-time-oriented, where you see permission prompts at run time. iOS has been more similar to Chrome in that it’s been historically all run time.


So they are moving…everyone’s moving in the direction of run-time permissions, although there are differences between platforms about what the exact permissions are and how nuanced they are, how long they’re saved. I do admit that I can see how it could be a little bit confusing for people in that there are some differences, but they’re similar enough at this point that I don’t think it’s probably that confusing.


Gary: Maybe. I think you’re just super smart.


Adrienne: Thank you.


Gary: So why not just wrap permissions granted to an app, or whatever you want to call it, with some sort of least privilege wrapper that, you know, you compute what it really needs versus, you know, having it ask for everything and the kitchen sink when it doesn’t really need that and you don’t want to grant it that as a user?


Adrienne: Well, most of the modern permission systems are actually relatively granular. In general, we’re no longer seeing these really broad blanket user-access control requests. They do tend to be relatively granular. There are some exceptions. Like Bluetooth is, in some ways, related to geolocation sometimes in a way that can be confusing, because there are physical Bluetooth beacons that might give away your physical location. So there are those weird interdependencies, but in most cases, the permissions are relatively granular now.


Gary: Yeah, I know, but I mean, I just had to use a parking app at the Reston Town Center, and it was just unbelievable what it thought it needed to have access to in order to just let me park in a place for an hour. It was so egregious that I was like, “I’m just not going to park here.”


Adrienne: You probably want to take that up with the app developer.


Gary: Oh, we have. Yeah, I know. But the issue is that sometimes developers of things ask for way more than they really require. It’d be kind of cool to be able to compute what the thing actually needs versus what it’s asking for and set that level. I don’t know if people are working on that or not, but it just seems like something that would be cool to me.


Adrienne: So with something like that, honestly, I think that’s more of a human problem than a technical problem. I think there are incentives that sometimes push developers on all types of platforms into adding functionality to their apps that isn’t necessarily core functionality. So the app is going to ask permission for it. And it doesn’t mean the app doesn’t need it from a technical perspective, because it may actually be making API calls that use all those permissions. But that doesn’t mean it’s central to the functionality of the application.


Gary: Right. Gotcha.


Adrienne: Which is why it feels to me like an incentive problem, where if consumers are unhappy with it and push back, they might hear that feedback.


Gary: Yup, I think that’s a good position to have.


We’ll be right back after this message.


If you like what you’re hearing on Silver Bullet, make sure to check out my other projects, where you can find writings, videos, and even original music.


Gary: So I know you talked about this at SOUPS in 2016, but how well do security indicators in browsers actually work these days?


Adrienne: That’s a very nuanced question, because it depends a lot on who the target audience is. We see different levels of understanding of web security in general, not just indicators, across different groups of people. Someone who’s a knowledge worker, working in an office, who’s worked with computers for the last 30 years probably has a relatively sophisticated understanding of how browsers work. They probably have an association between a green lock and security, even if they can’t tell you all the details of HTTPS.


On the other hand, someone who’s just come online for the first time on their phone and has never received an education on technology, and whose friends are in a similar position, is likely going to come from a different perspective and background and therefore not have the immediate implicit associations with certain icons that someone who’s been seeing them for 20 years has. So I would say, honestly, that I feel like our security indicators work better for some populations and worse for others, which is something we’re working on, and it’s an area of active development.


Gary: Yeah. I mean, user interface and security design is something that needs a lot more attention. And it’s good that people are paying more than they have been in the past, but we certainly have plenty of work to do. So maybe you can tell us a story about HTTPS and Chrome’s role in making TLS or SSL or Transport Security, or whatever you want to call it, more prominent and perhaps invisible.


Adrienne: Promoting HTTPS adoption is definitely a cross-industry effort. Before I start talking about Chrome’s contributions, I do want to highlight that. We’ve worked with other organizations, like Let’s Encrypt and Firefox and other browsers, for a long time, and they all play an important part in the story too.


Chrome’s focus has been on a few fronts. I’ll talk, for the most part, about the one I have the most experience with, which is promoting HTTPS adoption, where we’ve been doing a mix of one-on-one outreach to developers at large companies and helping them make the transition. We’ve been changing our Chrome UI in order to help inform people about whether they’re using HTTP or HTTPS, which drives consumers to ask websites to implement HTTPS and creates demand for it.


And we’ve been adding better tooling, like the Security panel in Chrome’s developer tools, to tell developers more about their HTTPS setup and help them debug problems. Also, when developers have errors, we send emails via Search Console to developers who are signed up for Google Webmaster Tools to tell them about problems with their HTTPS configuration. So we’ve been trying to provide a mix of tooling and incentives to help companies move to HTTPS.


Gary: That’s cool. I think you guys have made a huge difference. I mean, I can tell. Even from high geekdom, you can see it, so that’s very cool.


Let’s switch gears to managing people and, in particular, managing geeks. So what’s your view on doing that? Since you have to do it.


Adrienne: I like being a manager. It’s not something I feel like I have to do. Google definitely does have lots of strong individual contributors who aren’t managers, but I do enjoy managing, and I’ve chosen it as part of my profession. I think one of my favorite parts of managing is seeing how people can grow, you know? Someone takes on a new challenge, and I help figure out how to nudge them towards, you know, stepping up to that challenge and seeing them get through it. I really enjoy that process.


Gary: And then, switching gears again…like you, I’m a huge believer in real data and analysis of such. The BSIMM project is an example of that. What are your thoughts about measuring security, given your view into the security field as somebody who’s thinking a lot about usability?


Adrienne: Measurement in general is always difficult, because there’s always the thing you want to measure and then what you actually can measure, and there’s always a gap between the two. In addition to working in security, I also manage the Chrome metrics team and work on measurement problems across all of Chrome, and we have that problem everywhere. But one of the places where security in particular is so challenging is that often, when you ask people about it, the direct answers they give you are not necessarily how they would act in the real situation, which makes it very difficult to study.


So for example, if you showed someone a picture of a warning, and you asked them what they would do, they won’t necessarily tell you how they would actually act—not because they’re lying to you, not because they’re trying to trick you, but because it’s very difficult to predict the real circumstance you’ll be in, right? What if it’s the middle of the night, your kid’s sick, you’re trying to look up what you…like should you give them Tylenol or ibuprofen, and what’s the difference? When confronted with a hurdle in that type of situation, you may just ignore it, whereas, you know, if it’s in the middle of the day, you’re sitting in your office, you’re trying to act as a corporate resource, and you see a warning, you might have a very different reaction.


Gary: That makes sense.


Adrienne: So just asking people questions about it in the abstract isn’t always a very effective way of understanding the reason for their behavior.


Gary: That makes sense, but I’m still trying to get to this notion of trying to measure security as well. What do you think about that? I mean, there’s obviously no first-order measurement, but what about approaching security measurement through user behavior? Does that make any sense, or am I just totally on crack?


Adrienne: No, I think that’s the right direction. So it is challenging because, you know, as I mentioned, it’s hard to ask people. But if you’re able to observe their actual behavior, that can lead to a lot more insight. Now we are in that position, luckily, ourselves. Chrome users can volunteer to share aggregate statistics and other sources of data like that with Chrome, and we’re able to use that as an input when we make decisions. So we’ll run experiments, for example, on our warnings, and we can compare 2 different or 3 different or even 20 different versions of warnings and see which ones are the most effective in practice. And then we roll out the change that looks like it’s going to be the most effective.


We deploy this same type of A/B testing across most or all of our security features. We try to be pretty rigorous scientifically when we’re making changes to security UI. There’s always a little bit of risk when we launch a new security UI change that we could actually make people less safe. So we’re very careful. We watch it like a hawk, and if it looks like there’s any problems, we pull back the launch.
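The comparison described above can be sketched in a few lines. This is a hypothetical illustration (the variant names, counts, and simple rate comparison are invented for this example; Chrome’s actual analysis pipeline is not public in this form):

```javascript
// Hypothetical sketch of comparing security-warning variants in an A/B test.
// The variant names and counts below are invented; a real analysis would
// also check statistical significance before rolling out a change.
function safestVariant(variants) {
  // For a security warning, a LOWER click-through rate is better:
  // fewer users are bypassing the warning.
  return variants.reduce((best, v) =>
    v.clickedThrough / v.shown < best.clickedThrough / best.shown ? v : best
  );
}

const results = [
  { name: "warning-A", shown: 10000, clickedThrough: 2400 },
  { name: "warning-B", shown: 10000, clickedThrough: 1700 },
  { name: "warning-C", shown: 10000, clickedThrough: 2100 },
];

console.log(safestVariant(results).name); // "warning-B"
```

Here the metric of interest is the warning click-through rate, where lower is safer; rolling out the variant with the lowest rate mirrors the “most effective in practice” criterion described above.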


Gary: Very interesting. So congrats on your upcoming role as USENIX Security chair in 2018. That’s awesome. I read with interest your article on [“house”] about…I guess it’s “hoss”? How do you say that?


Adrienne: “House.”


Gary: “House”…about peer review. So my question is, are you planning to or have you already addressed some of your concerns about gender bias in program committee makeup and so on through USENIX?


Adrienne: Yeah, so let me give a little bit of a backstory, which is that one day I was…Look. You know, I sit on program committees all the time. And one day I was sitting in a program committee, and I was looking around the room and thinking, “Wow! There are not a lot of other women here.” And so out of curiosity, I went and I calculated the gender balance for a set of different program committees. And it was a little disheartening. You know, some do better than others, but in general, women are underrepresented on the program committees of all the major security conferences.


And I don’t have, honestly, a great solution as to how to fix it. When we were putting together our program committee, we did significant outreach ourselves to try to increase the number of women on the committee, and I have to admit we didn’t get as far as I wanted. We weren’t able to find as many women. And part of the problem, in my opinion, is that women aren’t necessarily being trained and prepared for work on program committees at the same rate.


Gary: Yeah. That makes sense. I mean, it’s kind of a system bias that has to wash out.


Adrienne: So we made an attempt to include people, both men and women, who might not historically have been on program committees but who are technically very qualified. You know, I’m in industry. I think there are lots of great men and women in industry who haven’t necessarily been trained. They don’t have Ph.D.s, so they didn’t necessarily learn how to review papers in school, but they know all the technical information. They may be experts, world experts, in their fields, and we’re trying to get some of those people involved in USENIX Security this year in order to bring new people into the pipeline. I’m hoping that if other people are able to do this in other years, perhaps we can address some of the diversity challenges that we have, both in terms of gender and race. But it’s not going to happen all at once.


Gary: No, it won’t happen overnight, but it’s incredibly great that you’re making tangible progress on that and objectively, you know, setting out to do that ahead of time. So good work.


Last question has nothing to do with any of this technical stuff we’ve been talking about. So I’ve renovated a whole bunch of houses over the years too. The last one was built in 1760, so there’s not a square in the whole thing. What’s the most interesting experience you guys have had working on your bungalow? And I know you can look at the website and find all that stuff, but, you know, I just want to have a quick story.


Adrienne: Oh, gosh. Renovating an old house is definitely a fun challenge. I thought I was good at project management as a software engineer, and so I barreled in thinking, “I can do project management for a construction project.” And to illustrate how well that ended up, I had a baby last year, and our house still wasn’t done, so we lived in a hotel. We hired an architect after that.


Gary: That does sound like a software project. So that came as a surprise, huh?


Adrienne: Yes. So we hired an architect, and things are going more smoothly now.


Gary: That’s good. Well, listen. Thanks so much for your time. I feel like I could go on and talk to you for hours, but it’s been very interesting.


Adrienne: It was great talking. Thanks for having me on.


Gary: This has been a Silver Bullet Security Podcast with Gary McGraw. Silver Bullet is co-sponsored by Synopsys and IEEE Security and Privacy Magazine and syndicated by Search Security. The September/October issue of IEEE S&P Magazine is a special issue on genomic privacy. It also features our interview with Ksenia Dmitrieva-Peguero of Synopsys. Show links, notes, and an online discussion can be found on the Silver Bullet web page. This is Gary McGraw.