Artificial Deception: The State Of “AI” In Defense and Offense

By Ken Westin, Field CISO, Panther Labs

If you have seen any of my talks, you know I often say that the infosec industry wouldn't exist without deception. I've seen enough nature documentaries to know deception exists throughout the animal kingdom, but humans have a singular cunning for deceiving each other to gain resources, whether in war or crime. Deception in society doesn't usually become a "crime" until property is lost or someone is harmed. And, of course, it has kept pace with technology, moving into the world of cybercrime; the use of artificial intelligence (AI) is no different. At Black Hat and Def Con this year, I saw an interesting dichotomy in the realm of AI, specifically the application of data science and machine learning in defensive and offensive security.

Artificial Intelligence or Mechanical Turk?

Walking the show floor at Black Hat, I saw most vendors pitching some sort of AI that would "revolutionize" defense. I found some of these messages deceptive in themselves, making promises the industry has heard for years, only to dissolve into vaporware and disappointment. The advances in machine learning (ML) and large language models (LLMs) over the past few years have been genuinely promising, although still a bit overhyped as "AI" when, in reality, these technologies require reliable data inputs along with ongoing human tuning and supervision.

Machine learning models are only as good as the data they are fed. As any data scientist will tell you, the majority of their job is data prep and cleansing. That dependence on data also makes the models themselves susceptible to deception through data poisoning and model manipulation. The application of LLMs through tools such as ChatGPT has been a fantastic breakthrough in applied data science, with the promise of increased productivity across many industries. An LLM, however, is a machine learning model, rooted in natural language processing (NLP), that is trained on massive amounts of text. Some companies have been deceptive about how this technology works, confusing the industry.
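To make the poisoning risk concrete, here is a minimal, purely illustrative Python sketch using synthetic data and scikit-learn (not any vendor's actual model): an attacker who can tamper with training labels, relabeling "malicious" samples as "benign," degrades the classifier without ever touching the model's code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "telemetry": two feature clusters, label 1 = malicious.
X = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(3, 1, (500, 4))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Targeted label-flipping: relabel 60% of the malicious training
# samples as benign, teaching the model that attacker-like activity
# is normal. The model code itself is never modified.
malicious = np.where(y_train == 1)[0]
flipped = rng.choice(malicious, size=int(0.6 * len(malicious)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression().fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))

This same dependence on trustworthy inputs is why data provenance and pipeline integrity matter as much as the model itself.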

Although LLM technology can seemingly create content from a prompt out of thin air, there is more to it than meets the eye. LLMs rely on data inputs like any other model, leveraging existing works, whether articles, blog posts, art, or even code. It should be no surprise, then, that content creators are now filing mass lawsuits claiming copyright infringement against companies like OpenAI, the maker of ChatGPT; source code is no different, not to mention the privacy implications.

LLMs also have another negative side effect: "hallucinations," in which the model outputs nonsense or untrue content that can trick or confuse anyone who believes it. This is why even some of the most advanced uses of "AI" require a human in the loop to verify the output. Interestingly, while this technology can deceive us by accident, the same technology can be, and is being, used offensively to manipulate data models and people, and in many respects the offense is ahead of the defense.
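As a sketch of what that human-in-the-loop gate can look like, the Python below treats generated text as a draft, never ground truth, until a person signs off. The generate_summary() function and the incident ID are hypothetical stand-ins for whatever LLM API and ticketing system you actually use.

from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    text: str
    approved: bool = False

def generate_summary(prompt: str) -> Draft:
    # Hypothetical placeholder for a real LLM call.
    return Draft(prompt=prompt, text=f"[model output for: {prompt}]")

def human_review(draft: Draft) -> Draft:
    # Nothing is published until a person reads and approves the draft.
    print("--- model draft ---")
    print(draft.text)
    draft.approved = input("Publish this? [y/N] ").strip().lower() == "y"
    return draft

draft = human_review(generate_summary("Summarize incident INC-1234"))
print("published" if draft.approved else "rejected; returned to analyst")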

Generative Deception

At Def Con, I saw the other side of "AI": the offensive side. Both the Social Engineering and Misinformation Villages have grown over the years. The Social Engineering CTF was amazing to watch as teams targeted employees at companies to see how much valuable reconnaissance information they could gather. This can now be taken a step further: voice synthesis can mimic the voice of an authority figure, family member, or celebrity to gain a target's trust.

The increasingly widespread use of this technology will pose a significant threat to organizations and individuals, particularly because many non-tech-savvy people are unaware of it and the models keep getting more convincing. In addition, the use of generative AI to create ever more realistic videos and images is already finding its way into propaganda, fraud, and social engineering at a horrifying rate, and most security awareness training programs and other defenses against these attacks are slow to catch up.

Human-in-the-Middle

In creating AI tools to make us more productive and creative, we also opened a Pandora's box, as these same tools can be used to deceive us. I presented on this topic a while back at the SANS Data Science Lightning Summit in a talk titled "Cyborgs vs. Androids," where I argued that AI technology should be thought of less as an autonomous entity that will replace the security analyst or engineer and more as a cyborg, where we leverage these technologies to enhance the security analyst or engineer.
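A minimal sketch of that cyborg pattern, with hypothetical alert sources and weights: the model ranks the queue so the analyst sees the riskiest items first, but it never closes an alert on its own.

def score_alert(alert):
    # Stand-in for a trained model; here, a trivial source-based heuristic.
    weights = {"auth": 0.9, "antivirus": 0.7, "dns": 0.4}
    return weights.get(alert["source"], 0.5)

queue = [
    {"id": "alert-1", "source": "dns"},
    {"id": "alert-2", "source": "auth"},
    {"id": "alert-3", "source": "antivirus"},
]

for alert in queue:
    alert["risk"] = score_alert(alert)

# The model prioritizes; the analyst still investigates and decides.
for alert in sorted(queue, key=lambda a: a["risk"], reverse=True):
    print(f"{alert['id']} (risk {alert['risk']:.1f}) -> route to analyst")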

Organizations also need to consider the potential liability of using some of these tools, given that the technology is new, that questions remain about data provenance, and that legislation regarding their use may be coming.

By keeping the human at the center, we are better able to harness the power of AI while ensuring it has the proper inputs and that its outputs are monitored. Trained humans are still better than machines at identifying patterns and detecting human deception; the challenge is that they are overwhelmed with data, tooling, and threats. The more we can leverage AI to enhance analysts' capabilities and make their jobs easier, the better we will defend against a whole new generation of threats. Or maybe this post was written by an AI to convince you that's the case ;-).

About the Author

Ken Westin is Field CISO of Panther Labs. He has been in the cybersecurity field for over 15 years, working with companies to improve their security posture through detection engineering, threat hunting, insider threat programs, and vulnerability research. In the past, he has worked closely with law enforcement, helping to unveil organized crime groups. His work has been featured in Wired, Forbes, the New York Times, Good Morning America, and others, and he is regularly consulted as an expert on cybersecurity, cybercrime, and surveillance.

Ken can be reached online at LinkedIn (https://www.linkedin.com/in/kwestin/) and at our company website https://panther.com/
