Page 148 - Cyber Defense eMagazine March 2024
It should be noted that transforming breach data into a format compatible with LLMs demands considerable expertise and resources. This isn't a casual pursuit; it's a multibillion-dollar industry, often backed by governments. Although the practice is not yet widespread, elite threat actors with ample resources could use LLMs to analyze stolen data in unprecedented ways. As malicious AI applications mature, even those who aren't well-versed in the mechanics of network security will be able to enter the game. This includes state-sponsored threat actors who ignore government regulations on what AI can and cannot do, as well as hacktivists who will use AI to generate new exploits to make their point.
Imagine if threat actors could feed data from multiple breaches into an LLM, converting it into a readable, queryable format. They could then instruct the AI to discern patterns and details about the individuals in the database, going well beyond what has been done before. This could mean extracting extensive personal information about a person, their family, and their associates, enabling a range of malicious activities.
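To see why aggregation is so powerful, consider a toy sketch of the underlying mechanic: merging records from separate dumps on a shared identifier such as an email address. The records below are entirely made up, and real dumps would first require extensive parsing and normalization, but the join itself is what collapses scattered fields into a single profile per person. The same mechanic is what defensive breach-monitoring services rely on to assess an individual's exposure.

```python
from collections import defaultdict

# Hypothetical, made-up records standing in for two separate breach dumps;
# real dumps would first need parsing and normalization into this shape.
breach_a = [
    {"email": "jdoe@example.com", "password_hash": "ab12...", "phone": "555-0100"},
    {"email": "asmith@example.com", "password_hash": "cd34..."},
]
breach_b = [
    {"email": "jdoe@example.com", "employer": "Acme Corp", "city": "Springfield"},
]

def aggregate(*dumps):
    """Merge records from multiple dumps keyed on a shared identifier
    (here, a normalized email), so fields scattered across breaches
    collapse into one combined profile per person."""
    profiles = defaultdict(dict)
    for dump in dumps:
        for record in dump:
            key = record["email"].strip().lower()
            profiles[key].update(record)
    return dict(profiles)

profiles = aggregate(breach_a, breach_b)
# jdoe's profile now combines password hash, phone, employer, and city
# from two breaches that individually revealed much less.
```

Once profiles are unified like this, an LLM layered on top simply makes the querying and pattern-finding conversational rather than manual.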
For instance, threat actors could cross-reference this information with social media profiles, conference
attendance records, or employment details from platforms like LinkedIn. This knowledge could be
exploited for activities ranging from utility service disruptions to identity-based extortion.
While this may sound like a scenario from a Hollywood movie, the capability exists today, though not without challenges. Mainstream AI platforms, such as those from OpenAI and Google, have built-in ethical protections. However, if hackers gain access to an openly available model with advanced capabilities, they could potentially modify it to strip those safeguards out.
Contrast this with traditional methods: early data breaches primarily involved the theft of credit card and Social Security numbers. Credit card details were sold on the dark web, enabling subsequent illicit transactions. In the current landscape, breaches involve far more diverse data, such as medical records. Attackers now need to formulate specific queries, or understand how to build them, against the stolen data, a process AI makes far more efficient. Specifically, AI enhances the ability to cross-reference datasets, identify patterns, and track individuals and their associations, a significant departure from conventional cyberattack methods.
Recently, several companies have popped up almost overnight that use this new type of aggregated breach database to let anyone search for what exposed data may be out there about a specific individual. Out of curiosity, I went to Malwarebytes the other day and entered my work email to see what might be lurking on the dark web about me. Within minutes, the site correctly revealed my work address, where I live, and 14 breaches in which a password of mine had been exposed.
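Lookup services of this kind typically sit on top of breach corpora such as Have I Been Pwned. As a minimal defensive sketch, the publicly documented Pwned Passwords range API lets you check whether a password appears in known breach data without ever transmitting the password or its full hash: only the first five characters of the SHA-1 digest leave your machine (the k-anonymity model). The function names here are my own; the endpoint and response format are HIBP's.

```python
import hashlib
import urllib.request

API = "https://api.pwnedpasswords.com/range/"  # public Pwned Passwords endpoint

def split_sha1(password: str) -> tuple[str, str]:
    """Return the 5-char prefix and 35-char suffix of the upper-case
    SHA-1 hex digest. Only the prefix is ever sent over the network."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """How many times the password appears in known breach corpora
    (0 if not found). The server returns every suffix matching the
    prefix; the comparison happens locally."""
    prefix, suffix = split_sha1(password)
    with urllib.request.urlopen(API + prefix) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

# pwned_count("password") queries the live API; split_sha1 alone
# never touches the network.
```

Checking your own credentials this way is the benign mirror image of the aggregation the article warns about: the same breach data, queried for defense rather than exploitation.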
It cannot be emphasized enough: the rising threat of AI-enhanced cyberattacks is cause for concern. The technology to detect such attacks is being developed in tandem with evolving threat actor tactics. While companies have primarily focused on using AI for threat hunting through tools like security information and event management (SIEM) platforms, threat actors at every level are now employing AI to craft sophisticated phishing attacks. In the context of AI-enhanced phishing emails, for example, the conventional method of spotting language and grammar errors is becoming less effective: AI can now generate phishing emails that appear professionally written.
Copyright © 2024, Cyber Defense Magazine. All rights reserved worldwide.