
Technology can operate in those pockets where humans are typically neither interested nor effective. Take a large dataset, for example: AI is great at analysing it efficiently and effectively, highlighting correlations and themes.

By using it to complete laborious, mundane, repetitive jobs quickly and accurately, organisations free their employees to focus on higher-value tasks, making them more creative and productive in their roles.

ChatGPT has emerged as a shining light in this regard. Already we're seeing the platform being integrated into corporate systems, supporting areas such as customer success and technical support. In this example, it's been introduced in an advisory role: employees can use it to scan email text for an indication of its tone, gaining a greater understanding of how they come across in customer support interactions, along with suggestions for improvements or edits.
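As an illustration, the sketch below shows how such a tone-checking assistant might be wired up. It assumes the OpenAI Python SDK (the 0.x-era interface) and the gpt-3.5-turbo model; the prompt wording and the review_email_tone function are illustrative, not taken from any specific vendor integration.

```python
# A minimal sketch of the tone-checking use case described above.
# Assumptions: the OpenAI Python SDK (0.x interface), the gpt-3.5-turbo model,
# and an illustrative system prompt; none of these come from a real product.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def review_email_tone(draft: str) -> str:
    """Ask the model to assess the tone of a draft reply and suggest edits."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You review customer support emails. Describe the tone of "
                    "the draft and suggest concise edits to make it clearer "
                    "and more courteous."
                ),
            },
            {"role": "user", "content": draft},
        ],
        temperature=0.2,  # keep the advice consistent rather than creative
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review_email_tone(
        "We already told you how to fix this. Read the manual before "
        "opening another ticket."
    ))
```

Used this way, the model never acts on the customer directly; it simply advises the employee, who decides whether to accept the suggested edits.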



            The bad: The risks surrounding ChatGPT

Of course, there are always two sides to the same coin, and reasons for hesitancy around ChatGPT remain. From security to data loss, the platform presents several challenges.

For many companies, concerns centre on the potential leakage of trade secrets, along with confidentiality, copyright and ethical-use issues. Further, the accuracy of the data and outputs ChatGPT provides cannot always be verified or relied upon. Indeed, ChatGPT is a learning platform – if it's fed bad data, it will produce bad data.

It's also important to recognise that ChatGPT itself has already suffered a breach in 2023, caused by a bug in an open-source library.

It was named the fastest-growing app of all time, having racked up 100 million active users in just two months – a figure Instagram only reached after two and a half years. This broad user base makes it the perfect platform for threat actors to target with a watering hole attack (one designed to compromise users by infecting websites they regularly visit or luring them to malicious ones).

If an attacker succeeds in infiltrating ChatGPT – something that could be achieved through as-yet-undiscovered vulnerabilities – they could in turn serve malicious code through it, potentially affecting millions of users.




            The ugly: Enhancing threat actor tactics

The other concern isn't centred on the risks of using natural language processing platforms themselves. Rather, it's about the ways in which threat actors are leveraging them for malicious ends.

According to a BlackBerry survey of IT professionals, more than seven in ten feel that foreign states are likely already using ChatGPT for malicious purposes against other nations.




