Brittany “Straithe” Postnikoff is a graduate researcher at the University of Waterloo in Ontario who has been researching robot social engineering — the intersection of human-robot interaction and security and privacy — for the past four years.
Postnikoff has found human-robot interaction (HRI) to be surprisingly close to human-human interaction in many cases, which piqued her interest in the security and privacy concerns that could arise if robots were used in social engineering attacks. Her research has included using cameras on robots to spy on people, getting victims to give up personal information and even using a robot to change people's opinions.
Although her research is still in its early days, Postnikoff has found striking results, not the least of which is how little security is built into consumer robots on the market today.
How did you begin studying robot social engineering? Did you start on the social engineering side or on the robotics side?
Brittany ‘Straithe’ Postnikoff: I guess I started on the social engineering side, but I didn’t understand that at the time. For background, I collect post-secondary pieces of paper. I have college diplomas in both business administration and business information technology. And in both of those programs, I acquired people management skills, which I learned were useful in social engineering attacks when I attended DEF CON for the first time.
As for robot social engineering, I casually began studying this topic shortly after I started university to get my computer science degree. I had joined a very small robotics team, and within my first three months of university, the team flew to China for a competition.
During this competition, the robots and the humans on the team wore matching blue jerseys and toques that looked like Jayne’s from ‘Firefly.’ You can look up ‘Jennifer the Skiing Robot’ to see what we looked like.
So many people stopped my teammate and me during the competition to take photos with us and our robots. We noticed this wasn't happening to the other teams. What was really interesting to me was that people cheered for us and our robots even if we were their competition.
I wondered why. Why are people cheering for our robot instead of theirs? Why are we getting all this extra attention? It’s then I started to see opportunities to blend my marketing and human resources knowledge with robots, security and privacy.
Luckily, my undergraduate university was also host to a human-robot interaction lab. I joined the lab the next semester and learned from my senior researchers about concepts like robot use of authority, body positioning and gesturing, concepts that are the foundation of the robot social engineering research that I now pursue full time.
Are there any major differences between what people would normally think of as social engineering and robot social engineering?
Postnikoff: Well, the biggest and clearest difference is that the attack is performed by a robot instead of a human. Otherwise, the base attacks are generally quite close to human-performed attacks.
Like humans, robots can make use of authority to convince people to do things; they can make use of empathy to convince someone to take particular actions and so on. What is important for a robot social engineering attack is that the robot has a body and is able to interact with humans on a social level.
The interesting thing about the embodiment of a robot is that people will believe each physical robot is its own individual entity, especially if the robot is known to act autonomously. It doesn’t normally occur to people that a typically autonomous robot acting erratically might have been taken over by a third party.
In researching your work, it appears that the human empathy toward the robot is a big part of the attack. Is that right?
Postnikoff: Yes, just like with some human-performed social engineering attacks, robots that are able to interact on a social level with humans can make use of a victim’s empathetic side in order to perform some attacks. For example, a malicious entity could infect a robot with ransomware and only restore the robot once the ransom has been paid.
If it’s a robot that someone is extremely attached to, in need of, or if they have put a lot of work into personalizing the robot and training it, this could be a particularly devastating attack.
What is next in your robot social engineering research?
Postnikoff: Next in my research is performing more attacks in both controlled environments and in the wild in order to collect some stats on how effective these attacks really are. I think it's important to determine how widespread this issue could become. Hopefully, I'll be able to post those results publicly in a couple of months.
How does artificial intelligence factor into your research into robot social engineering?
Postnikoff: Artificial intelligence is very important, but tangential to the research that I’m currently pursuing. In HRI, we often use the ‘Wizard of Oz’ technique, which involves a person sitting behind a curtain — or in a different room — and controlling the robot while another person is in the same room as the robot and interacting with it. The people interacting with the robot often can’t tell that the robot is being controlled and will assume that the robot is acting on its own. For this reason, I don’t spend time researching AI, because we can fake it effectively enough for our purposes at this time.
Many other experts are working on AI, and my time is better spent focusing on how the physical embodiment of robots and the actions of robots can impact the people they interact with.
How many robots do you have right now?
Postnikoff: Right now, I have direct access to about 30 robots, but only five different models of robots. Thankfully, I have a lot of friends and contacts at other universities and companies who are willing to let me play with their robots and complete tests and experiments once in a while.
Sometimes, I help them set up their own experiments to try with the robots, and they let me know what happened as a result. Or, I provide them with the background information and resources they need for their own research. Additionally, people will send me robots to perform experiments on if I promise to do security assessments on them.
To me, these are all win-win scenarios.
Are they all consumer robots?
Postnikoff: For the most part, yes. I try to work through all the different types of robots — consumer, industrial, medical and so on. But, unfortunately, many of the medical and industrial robots are quite pricey and are harder to get access to. This leaves me to experiment primarily with consumer robots.
Consumer robots are also more likely to be widespread, which does offer some benefits considering the research that I do — especially when I can show what sorts of things I can do inside somebody's home. That said, much of my research also applies to what can happen inside companies that make use of robots, such as banks and malls, when they don't understand what can be done with a social robot if it's not adequately secured.
How have you found the security to be in the robots you use?
Postnikoff: Not great. A number of the robots I deal with really do need a lot of help. And that’s one reason why I’m trying to bring awareness of this topic to the security and privacy community, especially before robots become more widespread.
What’s interesting here is that the topic of robot security overlaps heavily with IoT security, and most of what is being done in that field to make devices more secure also applies to robots.
For the robots that you control, is it generally difficult to gain that control access?
Postnikoff: It depends on the robot, but many are surprisingly easy to gain control over. I was mentoring some first-year computer science students at my university, and after a bit of instruction and background, they were able to get into the robots, despite having had no experience doing this sort of thing just hours before.
A number of the robots I've looked at had default credentials, sent usernames and passwords in plaintext, transmitted unencrypted video streams and so on. These are a lot of the same problems that plague many of the other devices that people in this industry see.
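Default credentials are the most straightforward of these weaknesses to screen for. As a minimal sketch of that kind of check — the credential pairs and function name here are invented for illustration, not taken from any real robot or from Postnikoff's assessments — a basic test might compare a robot's login against a list of well-known factory defaults:

```python
# Hypothetical illustration of a default-credential check.
# The credential pairs below are common factory defaults seen across
# many IoT-class devices; they are not tied to any specific robot.
COMMON_DEFAULT_CREDS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", ""),
    ("user", "1234"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if the login pair is a well-known factory default."""
    return (username, password) in COMMON_DEFAULT_CREDS
```

A real assessment would run such checks against the robot's actual management interface, and would also look for the plaintext logins and unencrypted video streams Postnikoff mentions, for instance by inspecting the robot's network traffic.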
What kinds of robot social engineering attacks have you run?
Postnikoff: One of my favorite attacks is putting snacks on top of the Roomba-like robot as a way to get access into a locked space.
First, I research who might be in the space, then write that person's name on a nameplate and put it on the robot, along with the robot's nametag and the snacks. I use an app to drive the robot to the door, and I get it to run into the door a few times. People hear the robot's knock, answer the door and might let it in. Meanwhile, I'm able to use the app to look through the robot's camera and hear through its microphones to observe what is happening in the space.
There is a paper out by [Serena] Booth et al. called ‘Piggybacking Robots’ that does a great job of describing a similar attack that inspired me to try this. So, if you ever try one of those food delivery robots that are in D.C. or the Silicon Valley area, you might not want to let them into your house if you don't have to. You never know who might be piggybacking on the robot's camera or microphone feed.
Do you have to be within Bluetooth range to be able to control the robots, or can they be controlled over the internet?
Postnikoff: Some yes; others no. A lot of the robots that I’m personally dealing with have remote-access capabilities. That is actually a common feature that companies selling consumer robots like to boast about. They might say that if you want to check if your front door is locked, you can hop into the robot, point it at your door and use the robot’s camera to check if the door is locked. That might be great for you, but this same capability is also pretty great for an attacker if they can get remote access.
Is there anything else people should know about robot social engineering research?
Postnikoff: Robot social engineering attacks are starting to happen in the wild. I have had a number of groups approach me with incidents involving their social robots that could easily be classified as robot social engineering attacks. If we start focusing on this issue now, we can prevent greater issues in the future.