Page 71 - Cyber Defense eMagazine March 2024
In fact, this is already happening. In February, an employee was duped into sending $25m of company funds to malicious actors after falling for a deepfake video scam in which fraudsters posed as the firm’s CEO in a video conference (The Guardian).
This attack is the canary in the coal mine – hackers have expanded their arsenal and are bringing deepfakes to the cybersecurity gunfight. Corporates cannot afford to rely on governments for protection – they move too slowly. They need to develop their own multi-layered defensive strategy now.
But what does this look like?
The cornerstone has to be compliance. It’s never enough on its own, but if your employees are regularly
leaving the door wide open for hackers then it doesn’t matter how much you spend on new technologies.
So, corporates need to invest in training and informing their employees about the threat posed by deepfakes and how to mitigate it. An email newsletter is not going to cut it – there should be regular, mandated training sessions. These might involve simulated phishing exercises with deepfakes, or interactive workshops where employees are trained to spot red flags. There also need to be rapid internal reporting mechanisms so that employees can reach specialized IT teams as soon as a threat is identified.
Getting employees up to speed on this will be no small feat. Unfortunately, as any IT professional knows, most workers outside the industry chronically lack cybersecurity basics. It will take regular, proactive measures to ensure that they aren’t accidentally exposing the company. Done successfully, though, the result is a well-educated, vigilant workforce – the foundation of a comprehensive cyber defense.
Alongside training staff, corporates should also be onboarding the latest authentication and verification tech. Failing to invest in modern defensive systems leaves corporations in the Stone Age, facing down hackers who have AI and deepfakes at their fingertips.
These might include advanced forensic analysis tools that use machine learning and image processing
to identify manipulated media content. Biometric authentication needs to be ramped up, with enhanced
facial and voice recognition, to verify identities. Digital watermarking can be deployed to mark authentic
content for employees.
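To make the watermarking idea concrete: one lightweight stand-in for a full watermarking system is a keyed cryptographic tag attached to each piece of internal media, so tampered or fabricated copies fail verification. The sketch below is illustrative only – the key handling, file name, and function names are assumptions, not a description of any specific product:

```python
import hmac
import hashlib

# Hypothetical signing key held by the security team (assumption for
# illustration; a real deployment would store this in a secrets manager).
SIGNING_KEY = b"replace-with-a-securely-stored-key"

def authenticity_tag(content: bytes) -> str:
    """Compute an HMAC-SHA256 tag marking content as company-authentic."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check a tag against the content using a constant-time comparison."""
    return hmac.compare_digest(authenticity_tag(content), tag)

# Illustrative media bytes; any alteration invalidates the tag.
video = b"...bytes of an all-hands recording..."
tag = authenticity_tag(video)
print(verify(video, tag))           # genuine copy verifies
print(verify(video + b"x", tag))    # altered copy fails
```

Unlike embedded watermarks, a detached tag like this does not survive re-encoding, but it illustrates the core principle: authenticity is established by verification against a secret the fraudster does not hold.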
These technologies can be rapidly integrated into company practices today. More long-term technological
defenses might involve AI-powered deepfake detection tools – using machine learning algorithms, trained
on datasets composed of authentic images and deepfakes, to detect fraudulent content. But these will
take significant amounts of time, data and expertise to build.
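The detection approach described above – training a classifier on labelled authentic and fabricated media – can be sketched in miniature. The example below is a toy, with a synthetic one-dimensional "artifact score" standing in for real image features and a hand-rolled logistic regression standing in for a production model; every dataset and variable name here is an assumption for illustration:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for a labelled training corpus (assumption: real
# systems extract rich features from large sets of real and fake media).
# Deepfakes are modelled as skewing toward higher artifact scores.
authentic = [random.gauss(0.3, 0.1) for _ in range(200)]  # label 0
deepfake = [random.gauss(0.7, 0.1) for _ in range(200)]   # label 1
data = [(x, 0) for x in authentic] + [(x, 1) for x in deepfake]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Fit a one-feature logistic-regression classifier by gradient descent.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict(artifact_score: float) -> int:
    """Return 1 if the media is classified as a deepfake, else 0."""
    return 1 if sigmoid(w * artifact_score + b) >= 0.5 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The gap between this toy and a deployable detector – feature extraction from video, adversarially robust training data, held-out evaluation – is exactly the "time, data and expertise" cost the article flags.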
Whilst these first two strategies are preventative, it’s also crucial to have damage limitation in place should a breach occur. Corporates need robust access controls around sensitive information, so that a successful scam at the lower levels of the business does not result in high-level IP and trade secrets being lost.
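The access-control point can be shown with a minimal role-based sketch: even if a lower-level account is compromised by a deepfake scam, the check below never grants it high-level material. The role names, resource names, and clearance levels are illustrative assumptions:

```python
# Minimal role-based access control sketch (names and levels are
# hypothetical examples, not a reference implementation).
ROLE_CLEARANCE = {"intern": 1, "engineer": 2, "executive": 3}
RESOURCE_LEVEL = {"staff-handbook": 1, "source-code": 2, "trade-secrets": 3}

def can_access(role: str, resource: str) -> bool:
    """Grant access only when the role's clearance meets the resource's
    sensitivity; unknown roles or resources are denied by default."""
    return ROLE_CLEARANCE.get(role, 0) >= RESOURCE_LEVEL.get(resource, 99)

print(can_access("intern", "trade-secrets"))     # a phished intern account
print(can_access("executive", "trade-secrets"))  # still gets nothing new
```

The deny-by-default behaviour for unrecognised roles and resources is the important design choice: a scammed junior account yields only what that account could already see.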
Strict and defined network separation is also crucial, based on the principle of least privilege. If malicious
actors gain access to a network through an employee, this should not allow them to pivot into other areas
of the business. Added to this, containment measures can prevent dissemination of fraudulent media
Copyright © 2024, Cyber Defense Magazine. All rights reserved worldwide.