Page 178 - Cyber Defense eMagazine September 2023
into the corporate environment. Specific to ChatGPT, there are many unknowns regarding its ongoing
evolution and how it impacts data and information security. From an infosec perspective, managing
“unknowns” is not anyone’s view of ideal. Cybersecurity is the art and science of attempting to achieve full transparency into risk and then mitigating and controlling that risk.
Even if an organization secures its connectivity to OpenAI, it is challenging to ensure data protection,
particularly given the tremendous data troves gathered by ChatGPT. In late March, OpenAI disclosed
a data breach that exposed portions of user chat history as well as personal user information including
names, email/payment addresses, and portions of credit card data over a nine-hour window. That same
week, threat intelligence company GreyNoise issued a warning regarding a new ChatGPT plugin capability that expanded data collection, which the company believed had been exploited in the
wild. Samsung employees also leaked sensitive data into the ChatGPT program; as a result, Samsung
lost control of some of its intellectual property. Since there is little to no legal precedent for this type of activity,
such leaks have the potential to cost organizations billions in lost opportunity and revenue.
There is also little visibility into how the large tech companies that control these platforms may leverage
this newfound treasure trove of previously undisclosed intellectual property.
These issues highlight the vulnerability of the product and raise serious concerns about the security of
sensitive information that businesses, knowingly or unknowingly, entrust to ChatGPT. As with all third
parties, these platforms must be vetted and their vendors contractually bound to protect the data to your
organization’s standards before being permitted access to it.
The security issues also underscore the legal obligations of organizations to secure their own and their
clients’ data. Law firms with attorney-client privilege and those subject to regulations such as the Health
Insurance Portability and Accountability Act (HIPAA) and the EU’s General Data Protection Regulation
(GDPR) are particularly affected. Organizations must ensure the security and privacy of their information.
Using a third-party service like ChatGPT creates challenges to these obligations.
Importantly, OpenAI’s ChatGPT and Google’s Bard learn from and store information from many sources.
Organizations should never place corporate and client information into these platforms, as it must be
assumed it can be viewed by those unauthorized to do so (intentionally or otherwise). The lack of clarity
and transparency around how data is being handled creates a real risk for businesses using ChatGPT.
Yet, lacking direct action by IT or security teams to impose controls, users can easily copy and paste
data of any level of corporate sensitivity into the platform, without their organization’s knowledge or
consent. Thus, these platforms should be blocked by default, despite their current attraction and surging
popularity. For organizations that require research and development on these platforms, access should
be permitted only to those specific groups.
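The default-deny posture with narrow R&D exceptions described above can be sketched as a simple policy check. This is a minimal illustration, not a product configuration; the group names and the shape of the user-directory lookup are hypothetical.

```python
# Sketch: default-deny access policy for generative-AI platforms,
# permitting only members of explicitly approved R&D groups.
# The group names below are hypothetical placeholders.

APPROVED_GROUPS = {"ai-research", "security-rnd"}  # hypothetical approved groups


def may_access_genai(user_groups: set) -> bool:
    """Deny by default; permit only users in an approved R&D group."""
    return bool(user_groups & APPROVED_GROUPS)


# A user outside the approved groups is blocked by default:
print(may_access_genai({"engineering", "qa"}))  # False
# A member of an approved R&D group is permitted:
print(may_access_genai({"ai-research"}))        # True
```

In practice this decision would live in a web proxy or secure web gateway rule rather than application code, but the logic is the same: an empty intersection with the approved list means the request is denied.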
Today, blocking these platforms by default is difficult because new ones (and their related scams) appear
quickly. Fortinet, Palo Alto Networks, Cisco, and other security vendors have not yet created holistic
block lists that cover all the available OpenAI and ChatGPT options. Thus, IT teams are left to compile
and maintain manual lists of these tools for blocking.
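A manually compiled blocklist of this kind typically matches a domain and all of its subdomains. The sketch below shows one way to check hostnames against such a list; the domains shown are illustrative and deliberately incomplete, since no vendor yet ships a holistic category.

```python
# Sketch: matching outbound hostnames against a manually maintained
# blocklist of generative-AI domains. The list is a partial example,
# not an exhaustive inventory of OpenAI/ChatGPT endpoints.

BLOCKED_DOMAINS = {"openai.com", "bard.google.com"}  # manually curated, partial


def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is blocklisted."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check the hostname itself and each parent domain, e.g.
    # chat.openai.com -> chat.openai.com, openai.com, com
    return any(".".join(labels[i:]) in BLOCKED_DOMAINS for i in range(len(labels)))


print(is_blocked("chat.openai.com"))  # True  (parent domain openai.com is listed)
print(is_blocked("example.com"))      # False
```

The parent-domain walk is what makes manual lists tractable: one entry such as `openai.com` covers every subdomain, so the list only needs updating when an entirely new platform or scam domain appears.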
Copyright © 2023, Cyber Defense Magazine. All rights reserved worldwide.