Page 201 - Cyber Defense eMagazine April 2023
applications. Computerphobia reached its peak in the mid-1980s, marked by fears about humans losing
jobs or becoming dependent on devices for critical thinking. Organ transplantation, space travel, DNA
manipulation, etc. have all elicited strong reactions that emphasized both amazing potential and dreadful
consequences, finally settling into an equilibrium that is nuanced and complex.
As a cyber defender and risk manager, it’s vital to see both sides of this innovation and develop an
approach that balances the potential threat and business opportunity. Uncertainty and fear would demand
blocking all access to ChatGPT and the API. Excitement and wonder would propose feeding terabytes
of data into the platform for its near-prescient insights. Before either approach, further consideration is
warranted. Let’s take a few examples of GPT’s more high-profile concerns and counter-balancing
opportunities:
A few weeks ago, ChatGPT was criticized in numerous articles for enabling the creation of advanced,
polymorphic malware. While most of those articles omitted key facts, namely that the web version didn't
actually produce the malware and that ample human intervention was required, using ChatGPT as a
malware engine was theoretically possible. However, one must also consider the potential benefit of using
ChatGPT to stub out software for developers, speeding new product development. The promise of low-
code or no-code applications with the assistance of a tool like ChatGPT is now more than marketing
hype. Take a more specific use – writing a routine to encrypt a large store of content. The resulting code
could be used to secure an important transaction or to power ransomware. ChatGPT doesn't know or
understand the difference.
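To make that dual-use point concrete, here is a minimal sketch of such a routine. This is a toy cipher (an XOR keystream derived from SHA-256), not production cryptography, and the data and key are invented for illustration; the point is that nothing in the code itself signals benign or malicious intent.

```python
import hashlib

def encrypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data against a SHA-256-derived keystream.
    Illustrative only; real code should use a vetted library (e.g. AES-GCM)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        # Derive successive keystream blocks from the key and a counter.
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# The same routine protects a backup or locks a victim's files.
ciphertext = encrypt(b"quarterly financials", b"k3y")
restored = encrypt(ciphertext, b"k3y")  # XOR is symmetric: same call decrypts
print(restored)  # b'quarterly financials'
```

Whether this secures a transaction or underpins ransomware depends entirely on the operator, which is exactly why blanket judgments about the tool miss the mark.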
More recently, the internet was abuzz with news that hackers had bypassed ChatGPT's controls to
create new service offerings like automated phishing email. The original article (since updated for
clarity) omitted the key point that the bypass was simply use of the API, which currently lacks some of
the constraints of the web version. (API abuse is prohibited by OpenAI policy, not by a technical
control.) As in the scenario above, the API can be used to generate phish, at least until OpenAI
detects and terminates the access, but it can also generate phish-testing campaigns and awareness
posts that vary in content and tone, keeping the message fresh.
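As a sketch of that defensive use, the snippet below builds a batch of varied prompts for an awareness campaign. The payload shape follows OpenAI's chat completions API as documented in early 2023, but the tones, themes, and prompt wording are hypothetical examples, and no request is actually sent here.

```python
import json

# Hypothetical tone/theme combinations for a phish-awareness campaign.
TONES = ["urgent", "friendly", "formal"]
THEMES = ["password reset", "shared invoice", "delivery notice"]

def build_request(tone: str, theme: str) -> dict:
    """Build one chat-completions payload; varying tone keeps the message fresh."""
    prompt = (f"Write a short, {tone} security-awareness post teaching staff "
              f"to spot a '{theme}' phishing email.")
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }

batch = [build_request(t, th) for t in TONES for th in THEMES]
print(len(batch))  # 9 distinct prompts from 3 tones x 3 themes
print(json.dumps(batch[0], indent=2))
```

The same few lines, pointed at a malicious prompt, would produce phish instead of training content, which is the crux of the policy-versus-technical-control distinction above.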
Currently, numerous articles claim to show how to turn off the response controls, enabling ChatGPT to
answer without the filters that screen out responses which could enable unlawful activity. While these
claims are proving dubious upon further scrutiny, consider the potential uses in threat modeling or role
playing. Ask ChatGPT to act as an insider threat and it will decline. Ask ChatGPT instead to help you, as
a CISO, brainstorm ways an insider could harm your organization, and it becomes an insightful method
for verifying your defenses. In a 10-minute span, ChatGPT can guide you from a 10,000-foot view down
into the weeds. In this example, ChatGPT could walk one from high-level threats, like monitoring cloud
storage, down to a user-awareness quiz about social engineering attacks, answers included! Extending the use
case to role playing, think about the tremendous value of interacting with an AI programmed to emulate
malicious behavior in helping people prepare for real scenarios.
The impact of ChatGPT may mark the beginning of an AI race as the big players – Microsoft, Google,
Baidu, Meta, Amazon – invest millions upon millions to build the most complete AI platform. They’ll push
the envelope of innovation, adding features and functionality as desired, then following with mitigations
and controls as required. Like the innovations before it, we’ll be enthralled with the excitement and
possibility as we simultaneously wrestle with the uncertainty and fear. We’ll climb and climb until we reach