Deepfake technology has advanced at a rapid pace, but the infosec community is still undecided about how much of a threat deepfakes represent.
Many are familiar with deepfakes in their video and image form, where machine learning technology generates a celebrity saying something they didn’t say or puts a different celebrity in their place. However, deepfakes can also appear in audio and even text-based forms. Several sessions at RSA Conference 2020 examined how convincing these fakes can be, as well as technical approaches to detect them. But so far, threat researchers are unsure whether deepfakes have been used in cyberattacks in the wild.
To explore the potential risk of deepfakes, SearchSecurity asked a number of experts about the threat deepfakes pose to society. In other words, should we be worried about deepfakes?
There was a clear divide in the responses between those who see deepfakes as a real threat and those who were more lukewarm on the idea.
Concern about deepfakes
Some security experts at RSA Conference 2020 feared that deepfakes would be used as part of disinformation campaigns in U.S. elections. McAfee senior principal engineer and chief data scientist Celeste Fralick said that with the political climate being the way it is around the world, deepfakes are “absolutely something that we should be worried about.”
Fralick cited a demonstration of deepfake technology during an RSAC session presented by Sherin Mathews, senior data scientist at McAfee, and Amanda House, data scientist at McAfee.
“We have a number of examples, like Bill Hader morphing into Tom Cruise and morphing back. I never realized they looked alike, but when you see the video you can see them morph. So certainly in this political climate I think that it’s something to be worried about. Are we looking at the real thing?”
Jake Olcott, BitSight’s vice president of communications and government affairs, agreed, saying that deepfakes are “a huge threat to democracy.” He noted that the platforms that control the distribution of content, like social media sites, are doing very little to stop the spread of misinformation.
“I’m concerned that because the fakes are so good, people are either not interested in distinguishing between what’s true and what’s not, but also that the malicious actors, they recognize that there’s sort of just like a weak spot and they want to just continue to pump this stuff out.”
CrowdStrike CTO Mike Sentonas made the point that deepfakes are getting harder to spot and easier to create.
“I think it’s something we’ll more and more have to deal with as a community.”
Deepfake threats aren’t pressing
Other security experts such as Patrick Sullivan, Akamai CTO of security strategy, weren’t as concerned about the potential use of deepfakes in cyberattacks.
“I don’t know if we should be worrying. I think people should be educated. We live in a democracy, and part of that is you have to educate yourself on things that can influence you as someone who lives in a democracy,” Sullivan said. “I think people are much smarter about the ways someone may try to divide online, how bots are able to amplify a message, and I think the next thing people need to get their arms around is video, which has always been an unquestionable point of data, which you may have to be more skeptical about.”
Malwarebytes Labs director Adam Kujawa said that while he’s not especially worried about the much-publicized deepfake videos, he is concerned about deepfake text and systems that automatically predict or generate text based on a user’s input.
“[That] I see as being pretty dangerous because if you utilize that with limited input derived from social media accounts, [you could] create a pretty convincing spear phishing email, almost on the fly.”
That said, he echoed Sullivan’s point that people are generally able to spot when something is obviously not real.
“They are getting better [however], and we need to develop technology that can identify these things you and I won’t be able to, because eventually that’s going to happen,” Kujawa said.
Greg Young, Trend Micro’s vice president of cybersecurity, went as far as to call deepfakes “not a big deal.”
However, he added, “I think where it’s going to be used is business email compromise, where you try to get a CEO or CFO to send you a Western Union payment. So if I can imitate that person’s voice, deepfake for voice alone would be very useful because I can tell the CFO to do this thing if I’m the person pretending to be the CEO, and they’re going to do it. We don’t leave video messages today, so the video side I’m less concerned about. I think deepfakes will be used more in disinformation campaigns. We’ve already seen some of that today.”