Recent advances in generative artificial intelligence (AI) are making it increasingly difficult to tell the difference between what is real and what is not. Computer-generated clips designed to look real, known as deepfakes, not only distort reality, but can be used to destroy the reputations of their victims and, worse, foster social unrest. But what can we do about it?
Celebrities, ordinary people, and the erosion of trust
Anyone who follows the news regularly might be forgiven for thinking that deepfakes only target celebrities and politicians. The reality is that malicious actors generate fake content to harass, intimidate and spread misinformation about ordinary people too. In fact, when fake media attacks a person who doesn’t have the economic or social means of a Taylor Swift or a Donald Trump, that’s a real problem. For example, women in South Korea recently took to the streets to protest against an upsurge in deepfake pornography. The national police agency is investigating hundreds of cases in which the faces of real women and girls have been digitally superimposed onto a body without their knowledge or consent. In Spain, a court sentenced 15 schoolchildren to a year’s probation for creating and spreading deepfake pornography of girls as young as 11. Stories like this are surfacing all the time somewhere in the world. I was on a UN conference panel recently where a journalist from the Middle East described her experience with a deepfake in which she was falsely depicted with her boss in order to discredit her. The experience took a heavy toll on both her and her family.
And while the effects on individuals can be devastating, deepfakes pose a wider threat to society. According to MIT Technology Review, deepfakes are “a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections”. The World Economic Forum’s Global Risks Report identifies fake media as one of our most pernicious challenges. According to the WEF, the proliferation of disinformation has eroded trust not only in digital platforms, but also in institutions. For WITNESS, an NGO that helps people use video as a tool for defending human rights, this erosion of trust directly threatens its mission.
In fact, there is growing concern that the misuse of digitally altered media can trigger social unrest, whether through a misinformation campaign designed to influence the political stage or through the encouragement of hate speech and criminal activity to sow division.
Deepfakes and an unstable geopolitical stage
We are living in times of heightened geopolitical instability. Conflicts are erupting across regions, economic crises are deepening, political polarization is increasing, and nations are grappling with the rapid pace of technological advancement, which brings both opportunity and risk, while the impacts of climate change exacerbate all of the above. With no standards in place to check the threat, it’s an all-you-can-eat buffet for bad actors.
The good news is that a standards collaboration on AI and multimedia authenticity was launched at the global AI for Good Summit in Geneva earlier this year. Experts highlighted the potential societal impacts of generative AI and the urgent need to address its misuse for spreading misinformation.
WITNESS is one of a number of organizations that have signed up to apply standards – internationally agreed best practices – for detecting deepfakes and generative AI. Under the leadership of the three international standards organizations, IEC, ISO and ITU, the alliance is leveraging the insights and expertise of a broad range of stakeholders. Big tech is represented by the likes of Adobe and Microsoft. Other organizations taking part include the German research institute Fraunhofer and CAICT, a technology-focused think tank based in China, as well as the authentication specialists DataTrails and Deep Media.
Why? Because standards can create safeguards to protect the digital space and help rebuild trust, so that when we look at the news we can judge whether to believe it. In a world where trust has been broken and our natural human defenses no longer seem to be enough, standards can help re-establish our confidence by contextualizing what we see and hear.
A good example comes from Deep Media, which recently commented on a hyper-realistic video of Elvis Presley singing in a modern-day setting. Although the video went viral, the chances that anyone believed it was authentic are slim to none. Deep Media noted that while such technologies open doors for creative expression, they also carry risks of misinformation and reputational damage. Being able to detect deepfakes is crucial for preserving integrity and trust.
Not all fake media is bad
The AI for Good Summit brought together a diverse range of organizations, including the Content Authenticity Initiative (CAI), Coalition for Content Provenance and Authenticity (C2PA), IETF, IEC, ISO, ITU and JPEG, in recognition of the critical role of technical standards in supporting government policy measures. They agreed on a multistakeholder collaboration to develop global standards for AI watermarking, multimedia authenticity and deepfake detection technologies.
The discussions underscored the importance of transparency in AI, particularly regarding the provenance of data used to train AI models. Establishing the authenticity and reliability of this data is crucial for understanding potential biases and ensuring the accuracy of AI-generated content. The ability to verify the authenticity and ownership of generative AI multimedia content was identified as essential for protecting digital rights.
The aim is to detect rather than stop fake media because, unlike the murky world of fake news, the intentions behind fake media are often good. The term ‘fake’ can be applied to any manipulated media. The entertainment industry, for example, benefits enormously from new technologies that create realistic special effects or generate natural-looking scenes around actors in the studio. It is the same with news, where leading public service broadcasters use virtual studios for their news bulletins and the set is often either computer-generated or a video image.
Standards for AI and generative technologies
The multistakeholder collaboration aims to provide a global forum for dialogue on priority areas and requirements for technical standards, mapping the landscape of existing standards, identifying gaps where new standards are needed, and facilitating the sharing of knowledge among stakeholders.
This initiative is expected to play a crucial role in supporting government policy measures, ensuring transparency and protecting the rights of users and content creators in the rapidly evolving field of AI and generative technologies.
And for a bit more good news, members of the collaboration have already created a new standard, known as JPEG Trust, which will enable photos and videos to be ‘tagged’ and authenticated, so that users and creators can build trust in media content. JPEG Trust has been developed by the Joint Photographic Experts Group (JPEG), a committee run jointly by IEC, ISO and ITU. This is the same group of experts behind the JPEG technology that has enabled the world to use and share billions of images each day for over 30 years.
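For readers curious about what such ‘tagging’ involves, here is a minimal, purely illustrative sketch of the underlying idea – it does not use the actual JPEG Trust or C2PA formats. The principle is to bind a signed provenance manifest to a cryptographic hash of the media file, so that any later alteration can be detected. The file name, manifest fields and shared-secret signature below are simplifying assumptions; real provenance standards rely on public-key certificates and embed the manifest in the file itself.

```python
# Illustrative sketch only: NOT the JPEG Trust or C2PA format.
# It binds a signed manifest (who created the asset, with what tool)
# to a hash of the file, so any subsequent edit can be detected.

import hashlib
import hmac
import json
from pathlib import Path

SECRET_KEY = b"demo-signing-key"  # real systems use public-key certificates, not a shared secret


def create_manifest(image_path: str, creator: str, tool: str) -> dict:
    """Build a provenance manifest bound to the exact bytes of the image."""
    digest = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    claim = {
        "asset": Path(image_path).name,
        "sha256": digest,
        "creator": creator,
        "generator": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(image_path: str, manifest: dict) -> bool:
    """Check the manifest signature and that the image bytes are unchanged."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest itself was tampered with
    current = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    return current == manifest["sha256"]  # False if the image was edited after tagging


if __name__ == "__main__":
    # "photo.jpg" is a hypothetical local file used only for illustration.
    m = create_manifest("photo.jpg", creator="Newsroom X", tool="Camera Y")
    print(verify_manifest("photo.jpg", m))  # True until photo.jpg is modified
```

The design choice that matters here is the binding: the signature covers both the content hash and the creation claims, so neither the media nor its provenance record can be altered independently without detection.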
About the Author
Alessandra Sala, Chair of the Multimedia Authenticity Collaboration for the International Electrotechnical Commission and Sr. Director of AI and Data Science at Shutterstock.
A research and scientific leader in Artificial Intelligence, Alessandra chairs the AI and Multimedia Authenticity Collaboration, which brings together IEC, ISO, ITU and other standards organizations to foster a unified response to deepfake technology, addressing risks while harnessing the benefits of generative AI.
Alessandra is also the Sr. Director of AI and Data Science at Shutterstock. She has over 18 years’ experience in research and innovation gained while working in academic and commercial environments. Alessandra is passionate about advanced analytics, machine learning and computational models, with a focus on transferring innovation from research to products.
As Co-chair of the UNESCO Women for Ethical AI Platform, Alessandra is working with a strong community of women to foster diversity, inclusion and equality for women and minorities while encouraging a global ethical approach in AI. As Global President of Women in AI (a non-profit do-tank working towards gender-inclusive AI that benefits global society) Alessandra leads a community of women from 171 countries that empowers women and minorities to become AI & Data experts, innovators and leaders. In an advisory capacity, Alessandra also serves as the Governance Committee Chair at the Science Foundation Ireland Centre for Research Training in Machine Learning.
Among several awards, Alessandra won the 2024 Grace Hopper and the 2021 XV International Prize “Le Tecno-visionarie” in the AI – Industrial Research category. In her previous role, Alessandra was Head of Analytics Research at Nokia Bell Labs, where she led research teams in several locations and developed a successful strategy for the Data Analytics division, while driving change across activities such as her contributions to the Nokia AI Ethics Advisory Board.
Alessandra can be reached online at https://www.linkedin.com/in/salaalessandra/, through the IEC at https://iec.ch/homepage, or via her company website at https://www.shutterstock.com.