Show 149: Brittany Postnikoff discusses the maker culture and the problems with robots

August 30, 2018

Brittany Postnikoff is a graduate student in the Cryptography, Security, and Privacy Lab at the University of Waterloo. She researches the interplay between robots and social engineering to predict and mitigate the negative impact of social robots on security and privacy. As an undergrad at the University of Manitoba, she focused on human-robot interaction. Her work on robot skiing won first prize at the Humanoid Applications Challenge of the International Conference on Robotics and Automation in 2015. Brittany has given talks at ShmooCon, Troopers, Black Hat, and DEF CON. She holds diplomas in business administration and business IT from Red River College and is working on a master’s degree at Waterloo, in Canada.

Listen as Gary and Brittany discuss robotics, maker culture, and the hands-on nature of learning. They closely examine the security and privacy problems that robots introduce—including the ethical implications and built-in biases of human-robot interactions. Don’t miss their discussion of robot vulnerability today and find out how vulnerable off-the-shelf robots really are.


Transcript

Gary McGraw: This is a Silver Bullet Security Podcast with Gary McGraw. I’m your host, Gary McGraw, vice president of security technology at Synopsys and author of “Software Security.” This podcast series is co-sponsored by Synopsys and IEEE Security & Privacy magazine. For more, see www.computer.org/security and www.synopsys.com/silverbullet. This is the 149th in a series of interviews with security gurus, and I am super pleased to have with me today Brittany Postnikoff. Hi, Brittany.

Brittany Postnikoff: Hi.

Gary: Brittany Postnikoff is a graduate student in the Cryptography, Security, and Privacy Lab at the University of Waterloo. She researches the interplay between robots and social engineering to predict and mitigate the negative impact of social robots on security and privacy. As an undergrad at the University of Manitoba, she focused on human–robot interaction. Her work on robot skiing won first prize at the Humanoid Applications Challenge of the International Conference on Robotics and Automation in 2015. Brittany has given talks at ShmooCon, Troopers, Black Hat, and DEF CON, and probably a lot of other places. Brittany holds diplomas in business administration and business IT from Red River College and is working on a master’s degree now at Waterloo, Canada, as we speak.

So thanks for joining us today.

Brittany: Yeah, I’m excited.

Gary: You’re currently a grad student at Waterloo in security and privacy, and I’m super envious because Waterloo has quite the deserved reputation as a hands-on place when it comes to tech. How do you think that the maker mindset has influenced or not influenced tech education at grad schools, especially Waterloo?

Brittany: That’s a good question. With the maker culture, we see a lot of people in grad school trying out a lot of the new technologies. So things like 3D printers, hands-on electronics, and all that makes the research a lot easier, especially with some of the embedded security stuff I’m doing. The maker culture has given a lot of access to things that would otherwise require a lot of money or dedicated lab resources. So maker culture has definitely been helpful that way in reducing the barriers to being able to do science.

Gary: It sort of seems to fit the Waterloo culture too. I know that up there, there seems to be a…Isn’t there a yearlong internship that’s supposed to be outside, out in the world somewhere, and then you come back and finish your studies? Does Waterloo still require that in maths?

Brittany: Yeah, that’s actually pretty common in most universities now. It’s called the co-op programs, and it’s usually a semester of school, then a semester of work, and then come back, semester of school, semester of work, back and forth, until you get the three or four semesters, which is a year to a little over a year. They actually had the same program in my undergrad as well.

Gary: That’s really cool. So it seems like this “Let’s be hands-on and learn stuff that we can apply, and also let’s build actual things in maker land” is a really nice way for the culture to come together to make it so you can build stuff and try things out.

Brittany: Yeah, very much. There are a number of other people doing that in the area as well. Even just the number of startups and YouTubers making things in Waterloo is really interesting. You constantly run across people who are trying out really neat things.

Gary: Well, I’m going to date myself here. I built a robot out of a toy car in 1993, before there was this thing called maker land, and it eventually became a major part of my friend Lisa Meeden’s thesis in machine learning. We used a neural network and self-training way back then to control the robot. So robot culture, even back then, was about really building stuff, and hands-on things, and making stuff actually do something. You did the same thing, I guess, at Manitoba. What technology did your University of Manitoba skiing robot—I guess you called it Jennifer—use under the covers?

Brittany: Jennifer was actually a DARwIn-OP robot from the company Robotis. It’s a robot I really appreciate, because they have a lot of their designs online, they have their software online, so they’re really into that open source culture, which is something I definitely appreciate in companies. It gives us a lot of leeway with what we do with the robots, because a lot of the others I’ve used are very proprietary; they’re black boxes. You can’t do anything, you don’t know what information they collect, you don’t know what information they send elsewhere. So I find the DARwIn-OP to be a really accessible, comfortable robot to deal with.

Gary: Yeah, less of a black box, I guess.

Brittany: Yeah.

Gary: In ’93, the robot was so tiny, it was like an eight-bit microcontroller. You could look at all of its memory on one little piece of paper.

Brittany: That’s fantastic.

Gary: So it made life a lot easier, but it also made it hard to do anything fun. There’s no way in heck we could’ve made anything ski back then. We were just trying to make it drive forward.

Brittany: I mean, that’s the funny thing: that’s still a problem with so many robots, just getting them to move properly in one direction. We’ve had problems where, when the robot runs out of battery, for example, it kind of walks like it’s drunk. So you’re like, “Oh, that robot’s tired. It needs a break.” So you have the opportunity to take the batteries out, charge them, and try your code again, because the physical things really do affect the performance.

Gary: Absolutely. So you’ve been digging into the ethical implications of human–robot interaction for some time and given some really good talks on the subject, I’ll say, so check them out on YouTube. Tell us a bit about robot authority, and how people react to robots, and what you’ve learned in these studies.

Brittany: Oh gosh, favorite topic. There are so many interesting things with robots, and specifically what I look at is usually social robots, so robots that people interact with on a social level using typical human–human interaction techniques. Things like gaze, like where your eyes are and how they move during a conversation. Body posture, how the humans face the robots and how the robots face the humans, and whether it’s just their head looking at the human or whether the robot’s body is directed towards the human, and how that affects interaction. Or even things like…(so you asked about authority) when we put robots into these authority positions, people hook onto that. They understand that the robot is in a position to tell them what to do, that the experiment relies on the person listening to the robot, and when the robot…The one experiment that happened in my lab was they had a person sitting at a computer. They had to rename a bunch of files by hand, and if they used any shortcuts, they had to redo it.

Gary: That sounds kind of boring.

Brittany: That was the goal, finding out who wants to do these tedious tasks. Who wants to do their paperwork at an office job, right? So you’ve got this robot as a manager, and the robot would tell people, “We need more data. Can you please continue? We need more data. Can you please continue?” Over and over. So the people would try to stop working. They’d be like, “I’m done. I’m not renaming these files anymore.” And the robot would say, “But we need more data. We need more data,” and keep pushing them. So this is a bit Milgram-y, so we had to go through ethics approval, obviously, especially for a human-subjects experiment at a university. But it was three strikes: once a person said “I’m not going to continue” three times in a row, and did not touch another file or say that they were going to start again, anything like that, they were finally allowed to stop. And it was interesting how many people actually listened to the robot.

Gary: So there’s a natural propensity to do what the robot says, just because it’s a robot.

Brittany: Well, especially if it is in that position of authority. And I mean, adding a tie to it doesn’t hurt.

Gary: That’s really hilarious. So here’s kind of a crazy question. I bet you’ve thought about this, but I’m just really interested to hear what you say about it. What do you think of Asimov’s Three Laws of Robotics?

Brittany: They’re bullshit.

Gary: OK, that’s great. So you guys who work on robot–human interaction just think that’s fantasy land and it’s totally irrelevant to your work?

Brittany: There are so many academic papers on why they don’t work and why they are bad laws. I mean, the books talk about how they’re bad laws. Even in the sci-fi universe they’re a part of, they don’t work.

Gary: That’s true. But you know, most people, when they think about robots and ethics, I think that’s the first thing that comes to mind. You guys have got some work to do in your field.

Brittany: It’s just such a catchy phrase, right?

Gary: Yeah.

Brittany: So that’s the thing that really…I usually get this question almost every talk, and it’s…I just need to do a talk on why these laws don’t work.

Gary: You actually should. I think that’d be really helpful, especially for people who haven’t done any work in robotics at all. I mean, one of the things that people are not aware of yet is where the field stands. What can robots do and not do today? Because there’s so much baloney on TV, you don’t know what’s real and what’s not real. It’s all very confusing for normals, I think.

Brittany: I agree. That’s definitely one of the hardest parts about the talks I give: convincing people to agree on what a robot is. Sci-fi and the media have given people such an idea of what a robot is, and everybody has their own opinion on the definition, so it’s usually very difficult to agree on that. When I say I’m talking about robots, people tend to disagree with my definition. And then I show them the 50 papers I’ve read and all the definitions and how they’re all different anyway.

Gary: Yeah. There’s a lot more work that needs to be done for, I guess, basic understanding. Then we can maybe understand the work that you guys are doing a little bit better. So I wanted to pursue a slightly different angle. How secure are robots themselves these days? Like you said you used an off-the-shelf but open source-y robot in your work at Manitoba. What did you find out about the security of the robot itself?

Brittany: For that robot, that was actually a little bit before I started to get into security, so I don’t actually know much about that one in particular. But some of the others I’ve picked off the shelf are scarily vulnerable. There was one that I was dealing with that I handed to some undergrads and said, “Hey, you should try to change the URL on this.” Because most robots have their own web servers in them, so connect to the robot, connect to the web servers, change some URLs, and you get access to a bunch of things that you otherwise wouldn’t have access to. And the interesting thing is the robot sent so much information back to the company, we were able to get credentials for the company servers.

Gary: I love it. So can you patch a robot these days?

Brittany: Generally, no. There are some that are starting to get better about it, but it’s not a great system yet. There isn’t any push to have patches. There isn’t a lot of push to have security, which is a big reason why I do my talks: I want people to be aware of the issues so they can go back to their companies or robot manufacturers, or think about what they’re buying, and actually start pushing people to add security to these things.

Gary: Yeah. Well, believe me, it’s a long uphill battle. I’ve been doing it for about 25 years—not in robot land, just in “Hey, your software should be secure, guys.” And we’ve made some progress but not enough.

Brittany: Even then, some of the companies that have tried to add security have done their best, I’m sure, but they end up doing things that make things even worse. Like there’s one robot you can connect to via Bluetooth from your phone, but if you stop using the app for even a second, just to answer a call, anybody else with the app can take over your robot the same way, just a Bluetooth connection, no pairing, just access. And this is an off-roading, outdoor robot, supposed to be used in parks and sand dunes, kind of a crazy robot, and it’s like a pill. It looks like a capsule. So if somebody else connects to it, the best you can do is maybe catch it and hold it in the air for an hour until its batteries run out, because there’s no off switch.

Gary: That’s crazy. So I was going to ask you, can you fix the hardware if a piece of your robot breaks? Is it easy to fix, or do you have to send it back? I’m trying to figure out the security vulnerability angle: you can patch software, but only if they let you patch software. What about hardware?

Brittany: That’s actually another issue I keep meaning to bring more attention to. If you get one of the expensive robots, like the DARwIn, it’s $15,000 but it’s completely open. You can 3D print parts so you can fix it. But there’s another robot that is fairly expensive, I think it’s down to $25,000, and people constantly ask if they can buy it at Best Buy because it looks like a toy. But it’s not.

Gary: A really expensive toy.

Brittany: Yeah. This robot, you get a two-year warranty on it, which is pretty great, but if anything breaks, you have to send it back. The issue with that is there’s actually no way to reliably wipe the data off the robot or clean anything out. So if the robot has files on it, video or audio of your lab or whatever space it’s in, that data stays on the robot when you send it back. And I’m located in Canada, so if I’m sending it, it’s going to another country for sure. So what happens when I send that data over a border? Because it’s contained in the robot. And these robots are being used in hospitals and schools, and in private government locations as well. So you also have to think about what data is stored on the robot, because when you have to send it back for any sort of fix, like a physical hardware fix, you’re actually opening yourself up to a bunch of other issues too.

Gary: That’s really interesting. We’ll be right back after this message.

If you like what you’re hearing on Silver Bullet, make sure to check out my other projects on garymcgraw.com. There, you can find writings, videos, and even original music.

Do you think anybody’s thinking about building security in, and privacy too, to robots and how they’re designed and how they’re updated and all that stuff?

Brittany: You know, it is starting to happen. There are a few robot companies that I typically use in my presentations, and they’ve started to worry about security because they’ve had a couple of bad scenarios in the last year or so, and they’re getting a lot of media flak for it. Because, again, those robots are being used in banks and in sales positions and places where money and personal information is involved. So people are starting to worry about security. Also, that’s how I end up getting a lot of robots in my lab: I want to do my social engineering experiments, so I kind of offer companies a free pen test if they’ll give me their $50,000 robot for a few months.

Gary: Right. That’s interesting. And it probably works like a charm.

Brittany: Oh, you have no idea how many robots I’ve gotten that way.

Gary: Humans are really gullible and susceptible to all kinds of social engineering, regardless of robots. But what kinds of security and privacy concerns arise from this gullibility when it comes to robots?

Brittany: That’s a great question. There are a few social engineering attacks that are better done by robots, I think. I’m trying to run those experiments in the next year to have stats on this. But there are scenarios where, with permission from the companies and people in them, I’ve hidden Roombas or Roomba-likes in different companies. They have 1080p HD cameras and microphones and everything, so they can capture audio and video. And they have apps, so I can control the robot from anywhere in the world. I don’t have to be near the building. I could be in a different country. But I can navigate the robot in the building, and people are like, “Oh, I guess we got a vacuum bot,” and they just don’t really think about it. I can hide it under a desk until everybody is gone for the day. You know how with some of the automatic doors, you can’t get in during the night but you can get out? Well, I can trip those with a vacuum bot.

Gary: Very nice. Yeah. I think our door at Synopsys in Virginia would do that if you used a robot. I’m not sure how far down the sensor goes, but it could carry something maybe.

Brittany: Well, that’s another attack I like doing, is food delivery robots.

Gary: Yeah. If the robot comes bearing cookies, it probably always gets in.

Brittany: Yeah. There’s actually a great paper on this called Piggybacking. I should link that on my website. It was done by another university. They had a robot that had cookies on it. In the paper, they don’t say “robot social engineering attack,” but that’s definitely what it is. What they do is they have this robot sitting outside a dorm, and it asks students to let it in. And the dorm is supposed to be locked, and there are signs everywhere saying, “Don’t let anybody else in.” Even without the cookies, it was fairly effective. But when they put the cookies on the robot and added stickers saying it was a delivery bot, it was extremely effective.

Gary: That’s hilarious.

Brittany: The craziest thing about it was that there are risks with that, and people acknowledged them, like, “Hey, this robot might be a bomb.” Statistically, the people who said the robot was more likely to be a bomb were also more likely to let the robot into the dorm.

Gary: Yeah. Well, not only that, what if the cookies were poisoned? I always thought the best attack on an airplane would just be to bring a bunch of Cinnabons that were poisoned and hand them out to the crew.

Brittany: Wow.

Gary: I probably shouldn’t talk about that, but it was published 15 years ago. Nobody’s done it yet. So let’s talk about the other kind of bots. Bots on social media have led to all sorts of political problems and seem to be actively targeting Western democracy, frankly. And that’s because people believe bots on social media, but people believe robots even more than bots. Why? Why do they do that?

Brittany: It’s the physicality. This is one thing that has been shown in a number of papers: physicality and movement really convince people to give each individual embodied robot its own identity. And people will name them, just like people name cars and stuff. They’ll name the robots. They’ll care for the robots. Because robots move (movement is huge), and because they seem to have goals that are unique to them and they work towards something, people will assume they are another living being. And I think that’s a bit easier. We even see that with humans, right? We’re more willing to be compassionate to people we see in person rather than people we see on the internet, right?

Gary: Right. Yeah, that’s really just fascinating as heck. When do you suppose we’ll see the first robot at some sort of resistance rally?

Brittany: Oh, that’s a good question. I mean, I kind of just want to bring one with me to the next rally and make that happen.

Gary: That’d be pretty cool. It’d be fun to watch, just to see if the authority thing ports over to that kind of political activation.

Brittany: Yeah, like what would people do to stop a robot rally? Would it actually be more effective than humans because they’re worried about property damages?

Gary: Yeah, who knows. This is a very interesting frontier that you’re working in. I think there’s a lot of room to do some really interesting experiments.

Brittany: That’s one of the fun things: I have not come across anybody else specifically looking at robot social engineering and identifying it as such. So many people in the human–robot interaction field, even when I went to the Human–Robot Interaction Conference this last year, didn’t know what social engineering was.

Gary: Right.

Brittany: They didn’t realize how much of their own research is applicable to it. So when I showed these people how their research could be used negatively, so many of them just had their minds blown.

Gary: Yeah. So in your ShmooCon talk, you describe some commonsense ways to defend against some of the things we’ve talked about today. Why do people trust the robots and not think that there’s some person behind there controlling the robot? Is one level of indirection enough to just throw everybody completely off the scent?

Brittany: So this is another thing that has been found in human–robot interaction research, an idea called Wizard-of-Ozing, and we use it all the time. It’s where you’re sitting behind a curtain controlling the robot while somebody’s interacting with it. This works because people acknowledge the robot as a living being when it seems to be acting autonomously and has that movement, so it’s pretty much indistinguishable. We use this in experiments to simulate the scenario where the AI people have created their AI, like, “OK, what happens once an AI is driving this?” Because the Wizard-of-Ozing works, we get the same effects as if the robot was doing it itself. So yeah, the one level of indirection is enough. And even when a third party is operating the robot, people are like, “Ah, it must be a bug if it starts acting weird.”

Gary: Well, you remember the Mechanical Turk from long, long ago, well before Amazon’s Mechanical Turk. People give all sorts of agency to these machines if they just seem to act right, I guess.

Brittany: Yeah, very much.

Gary: And robots, as you pointed out, make really good surveillance tech, because it’s kind of like a laptop. If you leave a laptop or a smartphone lying around in a conference room with all of its mics and cameras on, most people will just leave it there and have their meeting anyway. This is a thing we did once at a technical advisory board. We actually left a laptop on with the mic on and listened in the next room to see what they had to say about the advisory board. Which was doubly interesting, because some of us also spoke Hebrew, which is what the advisory board members were speaking among themselves, and it really surprised the heck out of them when we came back and said, “Hey, you know, we know what you were saying.” You could do that with a robot too, obviously. It might even be easier.

Brittany: Well yeah, because robots are walking, talking vulnerabilities. You get to a point where you can…like if you’re not getting the right audio or the right video with your laptop, you can’t move it, right? But the point with the robot is most of them are able to move.

Gary: Right.

Brittany: And situate themselves differently. So all of a sudden, if you need that one person’s face, the vacuum bot is moving. Oh well.

Gary: There it is, very nice.

Brittany: Yeah.

Gary: Now, sort of a big, ridiculous issue, but I’m just going to raise it anyway: Do robots deserve rights?

Brittany: I very much think so, and there are a lot of reasons why. The main reason is that we acknowledge them as living beings, and even the most skeptical people, who are like, “Nah, that’s a machine,” will mostly still act towards the robot like it’s a living being. There’s one story of a friend who was playing with a robot that had foam skin on it. He took the skin off, was messing with it, and put it back together. And then the robot started making distressed noises, because it’s like a little pet robot that children are supposed to learn how to take care of. And he picked it up and started petting it, like, “OK, it’s OK, it’s OK.” And he’s like, “What the hell?”

Gary: Yeah.

Brittany: It’s stuff like that.

Gary: It’s just a program that makes it do that, right?

Brittany: Right. And so that’s the thing: you get to the point where people recognize robots, especially social robots, more as living beings, and they give them their own personalities and think about them as another entity. And how we treat robots very much reflects how we treat other people, and this has very much been shown in experiments with children. Children will be like, “Yes, this robot is 100% a living being. It might as well be a person.”

Gary: Right.

Brittany: But they will treat it like other children, which means they’ll beat it up. They’ll push it. They’ll see how far they can get the robot to go before it cries. And the thing is, the children that don’t learn to socialize with the robots well also typically treat other children very badly. It’s a learned behavior, and parents that don’t correct their children when they hurt robots, those kids are often, again, more willing to hurt other children, because, “Well, I can hurt that living being. Why can’t I hurt this one?”

Gary: Sure. Well, I mean it goes back to the psychopath who kills puppies or whatever in their early-stage career, before they start killing humans.

Brittany: That’s a thing with robots too: people, like older adults, who mistreat robots often score further along that scale as well, the psychopath or sociopath scale.

Gary: So do robots have some kind of level of consciousness, just not whatever human consciousness is? I mean, maybe you start with, I don’t know, thermostats. They have a little bit of consciousness. You move on up through robots and dogs, and you eventually get to humans? I don’t know. What do you think?

Brittany: That’s a tough question, but one way I like to answer this is through Chinese room theory.

Gary: Sure.

Brittany: Have you heard of that one?

Gary: Of course.

Brittany: For people who don’t know, the idea is there’s a room, and say there’s a person inside the room, and you feed in an English word and you want to get Chinese out the other side. So what happens is you put the English into the room, and then the person in the room checks all these boxes and matches: “Oh yeah, this word aligns to that character on the page.”

Gary: They just take squiggles and squaggles and rules, and produce the answer, right?

Brittany: Yeah. But when you think about how people learn languages, it’s very much the same, right? Chinese room theory is like, that’s how robots work. You put stuff in, they give stuff out. You put stuff in, they give stuff out. But when humans learn languages, do we not start by learning the same way?

Gary: I don’t know. Pinker thinks that it’s innate. I mean this is a big, thorny issue. I probably shouldn’t have brought it up, but I personally think Searle is completely wrong, but that’s just me.

Brittany: Yeah, it’s one of these things where most people use Chinese room theory to prove that robots don’t have consciousness, but I like to use it to prove that they do, because humans act the same way. We learn rules slowly as we’re growing up too, right?

Gary: I think you have a sort of gigantic pile of work to do in your future career, which is pretty cool. And there are a lot of interesting questions that deserve some thinking from a security and privacy perspective. Who else is working on this besides you? I really don’t know of anybody else doing this work.

Brittany: Like I said before, I don’t know anyone else that is specifically looking at social robots and how to use their social abilities for privacy and security exploitation. There are lots of people working on industrial robots, and how to get into those, and what their vulnerabilities are, and that’s great.

Gary: Right.

Brittany: Then there’s the human–robot interaction people looking at the social side. But so far I’m the only one knitting these two together.

Gary: Yep.

Brittany: And I actually founded the field of robot social engineering earlier this year with the first paper defining the topic.

Gary: Just wait till we can have our cars be totally automated—which makes them robots, by the way, just with wheels. Then all the work will apply to cars too. I’m going to convince your car to do the wrong thing.

Brittany: You know, I’ve had some great conversations with people in the last few months, and I’m fully willing to admit that cars that can act completely autonomously are social robots. It took a while to convince me, but I’m finally there.

Gary: Well, good. All right, last question: What is your favorite William Gibson novel?

Brittany: A lot of people like to think I’m a particular character from Neuromancer, and that’s the one I usually get compared to most. So I think I’d have to pick that one, just because of the self-identification, you know?

Gary: Cool, well, we’ll have to tell William Gibson that on Twitter.

Brittany: Definitely.

Gary: Thanks for your time today. It’s been really interesting.

Brittany: Yeah, thank you.

Gary: This has been a Silver Bullet Security Podcast with Gary McGraw. Silver Bullet is co-sponsored by Synopsys and IEEE Security & Privacy magazine and syndicated by Search Security. The July/August issue of IEEE S&P magazine is a special issue devoted to blockchain security and privacy. The issue highlights our interview with Nicholas Weaver from ICSI, who has a thing or two to say about cryptocurrencies and blockchain, and the things are not so nice, so you should check that out. Show links, notes, and an online discussion can be found on the Silver Bullet webpage at www.synopsys.com/silverbullet. This is Gary McGraw.
