The Great AI Swindle

AI washing, or making inflated or misleading claims about AI capabilities, is nothing new. In some ways it is to be expected when a new disruptive technology hits the limelight.

But since the launch of ChatGPT, it has gone into hyperdrive. In the scramble to seem innovative and capture market share, some companies are grossly overstating their AI prowess.

Nothing new here, surely, in a world where gaining attention is everything. However, the fallout of making such claims may go beyond marketing hype. AI washing is creating a false sense of security while masking serious threats.

AI is more than just ChatGPT

It’s important to cut through the hype: ChatGPT and Copilot are really exciting pieces of technology, but they’re just the latest chapter in an AI story spanning decades. What’s new here is the use of a specific type of neural network (that is, a mathematical model inspired by the structure of neurons in biological systems) called a transformer.

Because of the impact of ChatGPT, what most people now mean when they refer to AI is this sort of transformer-based neural network. These Large Language Models (LLMs) represent a groundbreaking advance, but they are merely the latest evolution of AI, not its totality.

Businesses have been using AI and machine learning (ML) for decades to spot anomalies, group data, give recommendations, and more. ML often doesn’t use neural networks, and has been part of the software developer’s repertoire for many years.

Companies using ML in their products are technically correct in stating they have AI, though the claim can be disingenuous. AI washing now thrives in the disconnect between what those of us on the research side mean by "AI" and what companies mean when they claim to be "AI-powered". What many actually ship is built on well-trodden ML methods or, worse, on crude hard-coded logic masquerading as AI. In the latter case, the system has none of the characteristics of AI, such as perceiving and learning from its environment.
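To make that distinction concrete, here is a deliberately tiny, hypothetical sketch in Python (all names, data, and thresholds are invented): the first function is the kind of hard-coded rule that sometimes gets marketed as AI, while the second is a model whose behaviour is actually learned from example data.

```python
# Hypothetical illustration only: invented names, data, and thresholds.
from sklearn.tree import DecisionTreeClassifier

def rule_based_fraud_check(amount, is_foreign):
    # "AI-powered" in the brochure, an IF statement in the code:
    # the behaviour never changes unless a developer edits it.
    return amount > 1000 and is_foreign

# A (toy) learned model: its behaviour comes from the training data
# and changes when the data changes - the hallmark of machine learning.
X_train = [[50, 0], [2000, 1], [30, 0], [5000, 1]]  # [amount, is_foreign]
y_train = [0, 1, 0, 1]                              # 0 = legitimate, 1 = fraud
model = DecisionTreeClassifier().fit(X_train, y_train)

print(rule_based_fraud_check(2500, True))  # fixed, hand-written rule
print(model.predict([[2500, 1]]))          # decision learned from data
```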

The temptation to deceive

It’s easy to see why companies might do this. In today’s market, claiming AI capabilities carries serious weight. Numerous studies show consumers and business decision-makers view AI adoption as a competitive necessity. In one survey, 73% of consumers said that AI can have a positive impact on their customer experience.

Unsurprisingly, less scrupulous vendors are happy to slap the AI label on glorified IF statements in their code; doing so can help make sales and attract investors. However, bad-faith claims obscure real progress and push aside necessary conversations about the challenges of secure and responsible AI deployment.

Even well-meaning companies can succumb. We have seen cases where companies claimed to use AI when, in reality, they had yet to kick off the project in question. These companies felt they needed to claim an AI footprint to be seen as leaders in their field while they scrambled to hire scarce and expensive AI talent.

Hidden risks

Disillusionment is a real risk, but AI washing brings dangers beyond disappointment. Fake AI claims obscure real progress and stifle important conversations around responsible AI use.

Companies touting “military-grade AI” with a straight face make it that much harder for genuine innovations to gain traction and trust. Hype drowns out expert calls for better data privacy, transparency, and fairness.

Most disturbingly, focusing on fictional AI diverts attention and resources from clear and present dangers. Ultimately, AI is still software, and thus susceptible to cybersecurity risks that many professionals will be familiar with. Advanced generative AI models are already being incorporated into critical enterprise systems and customer-facing products. If built and deployed without security in mind, these models can be highly susceptible to cyber attacks. Indeed, it has been shown that bad actors can siphon sensitive data, reconstruct proprietary models, and introduce malicious backdoors, all without a single line of malicious code to tip off conventional cybersecurity tooling.
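To illustrate one of the attack classes mentioned above, the sketch below shows a minimal, hypothetical model-extraction scenario in Python: the attacker never sees the victim's training data or model weights, only its predictions, yet can train a surrogate that closely mimics the proprietary model. Every model, dataset, and number here is invented for illustration.

```python
# Hypothetical sketch of model extraction; all data and models are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for a victim's proprietary model exposed behind a prediction API.
X_private = rng.normal(size=(500, 4))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X_private, y_private)

# Attacker: no access to training data or weights, only to the API's answers.
X_queries = rng.normal(size=(2000, 4))
stolen_labels = victim.predict(X_queries)           # harvested via ordinary API calls
surrogate = LogisticRegression().fit(X_queries, stolen_labels)

# The surrogate now approximates the proprietary model's behaviour.
X_test = rng.normal(size=(200, 4))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"Surrogate agrees with the victim model on {agreement:.0%} of unseen inputs")
```

Real-world extraction is noisier and needs far more queries, but nothing in the exchange above looks different from legitimate API traffic, which is exactly why conventional controls can miss it.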

The AI washing hype cycle doesn't just provide false comfort; it actively conceals genuine risks that businesses are struggling to navigate and manage. Overclaiming not only misleads consumers but can also lead businesses to make poorly informed decisions about AI adoption and investment. Companies may end up wasting resources on ineffective or insecure AI solutions, putting their data, intellectual property, and reputation at risk.

Earning trust in the age of democratized AI

To be clear, AI has the potential to be truly transformative for competitive businesses. But as complexity grows, so does the attack surface and the imperative for purpose-built, battle-tested AI security.

It is becoming increasingly difficult for a business to understand or manage AI security. To effectively navigate the AI landscape and make informed decisions, business leaders need to invest in AI literacy and education. This includes understanding the capabilities and limitations of different AI technologies, as well as best practices for secure and ethical AI deployment.

In the era of accessible deep learning, trust must be earned continuously with transparency, not bought with marketing fluff. Only by cutting through the AI-washing noise can enterprises build lasting value and safeguard the future.

About the Author

Peter Garraghan, CEO and Co-Founder of Mindgard, is an internationally recognised expert in AI infrastructure and security. He has pioneered research innovations that were implemented globally by a leading technology company used by over 1 billion people. As a professor at Lancaster University, he has raised over €11.6 million in research funding and published over 60 scientific papers. Mindgard is a deep-tech startup specialising in cybersecurity for companies working with AI, GenAI and LLMs. Mindgard was founded in 2022 at world-renowned Lancaster University and is now based in London, UK. It has achieved €3.5 million in funding, backed by leading investors like IQ Capital and Lakestar. Mindgard's primary product, born from eight years of rigorous R&D in AI security, offers an automated platform for comprehensive security testing, red teaming, and rapid detection/response.

Peter Garraghan can be reached online at LinkedIn and at our company website https://mindgard.ai/
