Page 52 - Cyber Defense eMagazine October 2023

There are a variety of ways in which adversaries can tap into intelligent AI platforms. In the same way that customer service professionals may leverage the platform, threat actors can use it to make their phishing lures look more official and coherent.

It also means cybercriminals no longer need to rely on their first language or dialect. They can use generative AI to translate phishing emails convincingly into many languages.

In addition to translation, tone alteration and writing enhancements, generative AI tools can also be jailbroken. Once jailbroken, they can be asked to generate things like malware and viruses, lowering the skill floor required for threat actors.

In this sense, ChatGPT could democratise cybercrime in the same way that ransomware-as-a-service did – a shift that would lead to a massive spike in the volume of attacks we witness globally.



            Managing ChatGPT effectively

We see many organisations focused on rapidly building their policies and controls in response to these potential threats. Of course, that has been hard to do – many people didn't even know what ChatGPT was at the start of the year.

            Where understanding and policy development are still in progress, some companies are outright blocking
            the use of the platform. However, this isn’t necessarily the right approach long term.

            ChatGPT will be key in unlocking user productivity and creativity. Therefore, organisations must find ways
            in which to harness it in a secure and safe manner.

OpenAI itself has recognised the importance of addressing security concerns in order to fulfil the platform's potential. The company recently announced the rollout of ChatGPT Enterprise, offering capabilities such as data encryption, along with the promise that customer prompts and company data will not be used to train OpenAI models.

These are steps in the right direction. However, to combat the full range of risks, organisations should embrace a diverse suite of security tools to maximise protection. As an example, browser isolation can prevent the pasting of files, images and text to an external site (ChatGPT included) where they could be misused. It can also enforce character limits to prevent large amounts of data being removed in a single action.
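In practice, such a paste control reduces to a simple policy check applied before content leaves the browser. The sketch below is illustrative only – the blocked domain list and character limit are assumptions for the example, not details from any specific isolation product:

```python
# Minimal sketch of an isolation-style paste control: block pastes to
# untrusted destinations and cap how much text can leave in one action.
# The domains and limit below are illustrative assumptions.

BLOCKED_DOMAINS = {"chat.openai.com", "chatgpt.com"}  # hypothetical policy
PASTE_CHAR_LIMIT = 500  # hypothetical policy value

def allow_paste(destination_domain: str, text: str) -> bool:
    """Return True if the paste is permitted under the policy."""
    if destination_domain in BLOCKED_DOMAINS:
        return False  # external AI platform: block outright
    if len(text) > PASTE_CHAR_LIMIT:
        return False  # too much data leaving in a single paste
    return True
```

A real deployment would enforce this at the isolation layer rather than in application code, but the decision logic is the same.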

Additionally, isolation can record session data, allowing organisations to keep track of end-user policy violations on platforms like ChatGPT – such as the submission of sensitive data – in their web logs.
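To illustrate, a session-recording hook might flag sensitive submissions roughly as follows. The patterns and log format here are hypothetical stand-ins for an organisation's own DLP classifiers:

```python
import re

# Hypothetical detection patterns; real deployments would use the
# organisation's own data-classification rules.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def log_violations(user: str, site: str, submitted_text: str) -> list[dict]:
    """Return web-log entries for any sensitive data found in a session."""
    entries = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(submitted_text):
            entries.append({"user": user, "site": site, "violation": label})
    return entries
```

Each returned entry corresponds to one policy violation the organisation can later review in its web logs.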

Isolation is a key component for ensuring that ChatGPT is used in a secure manner, making it easier to enact key policies and processes. For firms proactively seeking to harness the benefits of AI and gain a competitive advantage, isolation is a vital tool.










            Copyright © 2023, Cyber Defense Magazine. All rights reserved worldwide.