By Brett Raybould, EMEA Solutions Architect, Menlo Security
Today, headlines surrounding AI tools such as ChatGPT are largely pessimistic.
From doom-mongering narratives that the technology will put millions out of work to growing calls for heavy regulation, it is often the negative angles that drive clicks online.
When topics of change or uncertainty are involved, such a reaction is perhaps only natural. But we’re likely to look back and see this one-sided argument as largely sensationalist.
The good: AI offers huge productivity potential
While the benefits are perhaps not talked about as much, the truth is that AI has the potential to improve our lives in a variety of ways.
Technology can operate in those pockets where humans are typically neither interested nor effective. Take large datasets, for example: AI is great at analysing them efficiently and effectively, highlighting correlations and themes.
By using it to complete laborious, mundane and repetitive jobs quickly and accurately, organisations free employees to focus on higher-value tasks, bringing greater creativity and productivity to their roles.
ChatGPT has emerged as a shining light in this regard. We are already seeing the platform integrated into corporate systems, supporting areas such as customer success and technical support. Here it has been introduced in an advisory role: employees can use it to scan email text for an indication of its tone, gaining a greater understanding of how they are coming across in customer support interactions, along with suggestions for improvements or edits.
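As a rough sketch of how such an advisory integration might be wired up, a helper could assemble a tone-review request in the Chat Completions message format. The prompt wording, model name and function names below are illustrative assumptions, not a description of any specific vendor's integration:

```python
# Illustrative sketch: building a tone-check request for a support email.
# The system prompt and function name are assumptions for illustration;
# they do not describe any particular product's implementation.

def build_tone_check_messages(email_text: str) -> list[dict]:
    """Assemble Chat Completions-style messages asking for a tone review."""
    system = (
        "You review customer-support emails. Describe the tone of the "
        "message and suggest edits that make it clearer and friendlier."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": email_text},
    ]

# The messages would then be sent to a chat model, e.g. (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_tone_check_messages(draft),
# )

draft = "Your ticket is closed. Reopen it yourself if the problem persists."
print(build_tone_check_messages(draft))
```

Keeping the model call behind a thin helper like this also makes it easy to swap providers or add logging without touching the support workflow itself.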
The bad: The risks surrounding ChatGPT
Of course, there are always two sides to the same coin, and reasons for hesitancy around ChatGPT remain. From security to data loss, the platform presents several challenges.
For many companies, concerns centre on the potential leaking of trade secrets, along with confidentiality, copyright and ethical-use issues. Further, the accuracy of the data and outputs that ChatGPT provides cannot always be verified or relied upon. Indeed, ChatGPT is a learning platform: if it is fed bad data, it will produce bad data.
It is also important to recognise that ChatGPT itself suffered a breach in 2023, caused by a bug in an open-source library.
ChatGPT was named the fastest-growing app of all time, having racked up 100 million active users in just two months – a figure Instagram took two and a half years to reach. This broad user base makes it the perfect target for a watering hole attack (one that compromises users by infecting the websites they regularly visit with malicious code).
If an attacker succeeds in infiltrating ChatGPT – for example, by exploiting an undiscovered vulnerability – they could serve malicious code through it, potentially affecting millions of users.
The ugly: Enhancing threat actor tactics
The other concern isn't the risk of using natural language processing platforms themselves. Rather, it is the ways in which threat actors are leveraging them for malicious ends.
According to a BlackBerry survey of IT professionals, more than seven in 10 believe foreign states are likely already using ChatGPT for malicious purposes against other nations.
There are a variety of ways in which adversaries can tap into intelligent AI platforms. In the same way that customer service professionals may leverage the platform, threat actors can use it to make their phishing lures look more official and coherent.
It also means cybercriminals no longer need to rely on their first language or dialect: they can use generative AI to translate phishing emails convincingly into many languages.
In addition to translation, tone alteration and written enhancements, generative AI tools can also be jailbroken, at which point they can be asked to generate things like malware and viruses, lowering the skill floor for would-be attackers.
In this sense, ChatGPT could democratise cybercrime in the same way ransomware-as-a-service has – a shift that would lead to a massive spike in the volume of attacks witnessed globally.
Managing ChatGPT effectively
We see many organisations focused on rapidly building policies and controls in response to these potential threats. Of course, that has been hard to do – many people didn't even know what ChatGPT was at the start of the year.
Where understanding and policy development are still in progress, some companies are blocking the platform outright. However, this isn't necessarily the right long-term approach.
ChatGPT will be key to unlocking user productivity and creativity. Organisations must therefore find ways to harness it securely and safely.
OpenAI itself has recognised the importance of addressing security concerns in order to fulfil the platform's potential. The company recently announced the rollout of ChatGPT Enterprise, offering capabilities such as data encryption and a promise that customer prompts and company data will not be used to train OpenAI models.
These are steps in the right direction. However, to combat the full range of risks, organisations should embrace a diverse suite of security tools. Browser isolation, for example, can prevent files, images and text from being pasted to an external site (ChatGPT included) where they could be misused, and can set character limits to prevent large amounts of data being removed.
Additionally, isolation can record session data, allowing organisations to track end-user policy violations on platforms like ChatGPT – such as the submission of sensitive data – in their web logs.
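A minimal sketch of the kind of paste-control logic described above might look as follows. The site list, character limit and function names are illustrative assumptions for this sketch, not a specific product's policy engine:

```python
# Illustrative sketch of a data-loss-prevention paste check: block pastes to
# listed generative-AI sites and cap how much text can leave in one action.
# The site list, limit and structure are assumptions for illustration only.

BLOCKED_PASTE_SITES = {"chat.openai.com", "chatgpt.com"}
MAX_PASTE_CHARS = 500  # illustrative character limit

def check_paste(destination_host: str, text: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a paste attempt to an external site."""
    if destination_host in BLOCKED_PASTE_SITES:
        return False, "pasting to this site is blocked by policy"
    if len(text) > MAX_PASTE_CHARS:
        return False, "paste exceeds the allowed character limit"
    return True, "allowed"

print(check_paste("chat.openai.com", "quarterly revenue figures..."))
print(check_paste("example.com", "x" * 10_000))
```

In practice such checks run inside the isolation layer rather than on the endpoint, so the policy decision and the audit record are made before any data reaches the external site.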
Isolation is a key component in ensuring that ChatGPT is used securely, making it easier to enact key policies and processes. For firms proactively seeking to harness the benefits of AI and gain a competitive advantage, it is a vital tool.
About the Author
Brett Raybould – EMEA Solutions Architect, Menlo Security. Brett is passionate about security and providing solutions to organisations looking to protect their most critical assets. Having worked for over 15 years at various tier 1 vendors specialising in the detection of inbound threats across web and email, as well as data loss prevention, Brett joined Menlo Security in 2016 and discovered how isolation provides a new approach to solving the problems that detection-based systems continue to struggle with.