By Steve Durbin, Managing Director, Information Security Forum
Advanced deepfakes of high-profile individuals and executives will soon threaten to undermine digital communications, spreading highly credible fake news and misinformation. Deepfakes will appear with alarming frequency and realism, turbocharging social engineering attacks.
The underlying artificial intelligence (AI) technology will be used to manipulate both video and audio, enabling attackers to impersonate individuals convincingly. As attackers deploy increasingly sophisticated deepfakes, they will cause serious financial damage, using fake videos and images to sway financial markets, promote political agendas or gain competitive advantage. Severe reputational damage will follow when executives or other high-profile individuals have their identities compromised.
Organizations and individuals will face security challenges beyond anything they have dealt with before. Nation states, activists and hacking groups will use deepfakes to spread disinformation at scale, leaving individuals and organizations unable to distinguish fact from fiction, truth from lies.
Deepfakes Lead to Deep Trouble
The advancement and increasing availability of AI technologies will enable attackers to create highly realistic digital copies of executives in real time, superimposing facial features and mimicking vocal patterns. As deepfake technologies become more believable, many organizations will be affected by this new, highly convincing threat. Organizations using poor-quality audio and video streaming services will find it particularly challenging to identify the threat, as the imperfections in even the most simplistic deepfakes will go unnoticed.
Early-generation deepfakes have already been used to replicate the audio and visual likeness of public figures such as politicians, celebrities and CEOs. The risk posed by these unsophisticated examples is low, as viewers can usually tell whether a video is authentic or fake. However, the technology is progressing quickly: the first reported case of AI-assisted voice phishing (vishing) – social engineering over the phone using a synthesized voice – was used to perform a high-profile scam in early 2019. By replicating the voice of an energy company’s chief executive, attackers convinced an employee within the organization to transfer $243,000 to a fraudulent supplier. Because the voice on the other end of the phone sounded exactly like the CEO, the employee went ahead with the transfer.
While the manipulation of images has a considerable history, often used as propaganda in times of conflict, the easy availability of digital tools, the highly realistic nature of the doctored content, and the existence of new media channels to distribute misinformation have turned deepfakes into a viable attack mechanism. With a growing number of important government elections taking place in the coming months, the impact of deepfakes is likely to far exceed that of existing ‘fake news’.
As deepfakes become more common and the technology behind them becomes cheaper and more widely available, major concerns will arise around the likelihood of attackers targeting organizations for financial gain, blackmail or defamation. Not only do popular mobile applications such as Snapchat and Zao allow individuals to create deepfake content with ease, but attackers will also be able to buy and sell highly convincing deepfake technologies or services on the dark web and use bots to generate fake content on social media.
While deepfakes have already begun to cause considerable concern in the media, the speed at which this technology is advancing will inevitably result in negative impacts on targeted organizations. Traditional attempts to identify and counter defamation will be unable to deal with the sophistication of deepfakes. Established forms of communication will be questioned, as the real becomes indistinguishable from the fake and trust erodes further in an already fractured world.
Preparation Begins Now
Organizations need to enhance current technical security controls to mitigate the threat deepfakes pose to the business. Training and awareness programs will also need revamping, with special attention paid to this highly believable threat.
In the short term, organizations should incorporate an understanding of deepfakes into their security awareness program, including a dedicated executive-level briefing on the threat.
In the long term, organizations should enhance or invest in identity controls and content management products that protect against deepfakes, review authorization processes for financial transactions in the context of deepfakes, and monitor deepfake activity in related industry sectors.
About the Author
Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security, digitalization and the emerging security threat landscape across both the corporate and personal environments. Steve can be reached online at @stevedurbin and at our company website www.securityforum.org.