Page 151 - Cyber Defense eMagazine June 2024

How AI is being used to automate cyber attacks

            Modern hackers have found ways to leverage AI to automate cyber attacks: an AI model can be trained
            to probe a network for weaknesses continuously, often identifying them before network operators even
            know they exist. The effects are twofold. First, automation significantly increases the volume of attacks,
            because attackers can operate far more efficiently; second, that same efficiency makes these attacks
            much more difficult to detect and respond to.

            Considering how connected our world is today, the prospect of automated cyber attacks is incredibly
            frightening. If a hacker strikes a high-value target, such as a network powering a supply chain or critical
            infrastructure, the damage could be catastrophic. Everything from shipping routes to traffic lights, air
            traffic control systems, power grids, telecommunications networks, and financial markets is vulnerable
            to this type of AI-powered cyber threat.



            The abuse of generative AI for scams and fraud

            The second potentially harmful capability of artificial intelligence that has taken the world by storm is its
            ability to synthesize written and audiovisual information from user prompts. This category of AI models,
            known as generative AI, has been used for several legitimate purposes, including drafting emails,
            powering customer service chatbots, and more. However, bad actors have still found ways to leverage
            this technology for their own gain.

            One of the most dangerous criminal applications of generative AI is the enhancement of phishing scams.
            In these schemes, a fraudster attempts to convince a victim to share personal information by
            impersonating a trusted source, such as a friend, loved one, coworker, boss, or business partner. It was
            once relatively easy to distinguish these fraudulent messages from legitimate ones thanks to telltale
            mistakes like grammatical errors and inconsistencies in voice, but generative AI has allowed scammers
            to make their messages significantly more convincing. By training a model on a library of materials
            written by the person they hope to impersonate, scammers can mimic an individual's writing style far
            more accurately.

            The materials that generative AI can produce extend beyond writing: the technology can now also be
            used to create convincing fraudulent images and audio clips known as deepfakes. Deepfake photos and
            audio clips have been used for all sorts of nefarious purposes, from reputational damage and blackmail
            to the spread of misinformation and the manipulation of elections or financial markets. As AI grows more
            advanced, distinguishing legitimate materials from fraudulent ones is more difficult than ever.



            Fighting fire with fire in AI

            Thankfully, many of the tools that wrongdoers use to wreak havoc can also be applied to more positive
            ends. For instance, the same models that hackers use to probe networks for vulnerabilities can be
            leveraged by network operators to find and fix those weaknesses first. Additionally,





            Copyright © 2024, Cyber Defense Magazine. All rights reserved worldwide.