
As both businesses and individual users grow more comfortable using generative AI, there will be a significant spike in activity associated with those crawlers. Imperva Senior Product Manager Lynn Marks agrees, noting that data scraping is "becoming more of an issue for organizations" as their data is used to train the large language models (LLMs) that inform generative AI tools.

Triebes points out that generative AI will make its presence felt in other areas as well, including a shift toward AI-based coding in the future. Peter Klimek, Director of Technology within the Office of the CTO, agrees, saying that "new and/or junior developers will benefit greatly" from AI-enabled development tools, which increase productivity and output by automating routine tasks. However, he acknowledges that those same tools will "help script kiddies graduate into skilled hackers capable of carrying out more complex exploits." In the near term, Triebes believes generative AI will primarily be used to perpetrate fraud.

            “It will be much easier for fraudsters to masquerade as somebody else—at least online,” explains Triebes.
            “AI will lead to a new breed of fraud and social engineering attacks. A fraudster could scrape the internet
            for information about you and then weaponize a voice recording of you. Through generative AI, they can
            create a pseudo version of you. If they package that effectively, they could contact your bank and request
            a password reset.”

Ron Bennatan, Imperva Fellow, Data Security, agrees. He expects to see an increase in attacks as attackers leverage AI to fool their victims, noting, "because LLMs are so good at both understanding humans and creating text communications that really look like they were created by humans, attackers will be able to target and 'hack' individuals far better than before."

Alan Ryan, AVP, UK & Ireland, notes that as attackers invest in AI, so too must defenders. Bad actors are investing heavily in AI to gain the upper hand, which means organizations need to ensure they are investing in these solutions as well. Ryan says AI doesn't necessarily "change the balance of 'good vs. evil,'" but simply represents the next evolution of the ongoing cat-and-mouse game between attackers and defenders.



            API Security Will Take on Greater Prominence

As attackers target APIs with greater regularity, organizations will be forced to take a more proactive approach toward identifying, classifying, and protecting all API endpoints in production. This is particularly true for large organizations: enterprises with revenue of at least US$100 billion are between three and four times more likely to experience API insecurity than small or midsize businesses.

Unfortunately, while API ecosystems are expanding rapidly, most organizations are still in the early stages of understanding how to effectively protect them. Although it's common for today's businesses to have between 50 and 500 APIs in production, many don't know where those APIs are deployed or what data they are accessing. That puts these organizations, and their valuable data, at extreme risk.
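As one illustration of what that discovery-and-classification work might look like in practice, the sketch below is a hypothetical example, not drawn from the article or from any Imperva tooling. It assumes each service can export an OpenAPI 3.x document as JSON; the file name openapi.json, the sensitive-field keyword list, and the report format are all assumptions for the sake of the example.

```python
# Minimal sketch: inventory API endpoints from an exported OpenAPI document and
# flag those whose response schemas expose sensitive-looking fields.
# Assumptions: spec is JSON, $ref resolution is omitted for brevity.
import json

SENSITIVE_KEYWORDS = {"ssn", "password", "card", "token", "email", "dob"}

def load_spec(path: str) -> dict:
    """Load an OpenAPI document that has already been exported as JSON."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def flag_sensitive(schema: dict) -> set:
    """Recursively collect property names that look like sensitive data."""
    hits = set()
    for name, sub in (schema.get("properties") or {}).items():
        if any(k in name.lower() for k in SENSITIVE_KEYWORDS):
            hits.add(name)
        hits |= flag_sensitive(sub)   # nested objects
    if "items" in schema:             # arrays of objects
        hits |= flag_sensitive(schema["items"])
    return hits

def inventory(spec: dict) -> list:
    """Build a simple endpoint inventory: method, path, sensitive fields seen."""
    rows = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method.lower() not in {"get", "post", "put", "patch", "delete"}:
                continue
            fields = set()
            for resp in (op.get("responses") or {}).values():
                for media in (resp.get("content") or {}).values():
                    fields |= flag_sensitive(media.get("schema") or {})
            rows.append({"method": method.upper(), "path": path,
                         "sensitive_fields": sorted(fields)})
    return rows

if __name__ == "__main__":
    for row in inventory(load_spec("openapi.json")):
        marker = "!!" if row["sensitive_fields"] else "  "
        print(f'{marker} {row["method"]:6} {row["path"]} {row["sensitive_fields"]}')
```

Even a rough inventory like this gives security teams a starting point for the classification and protection work the paragraph above describes; production-grade discovery would also need to catch undocumented ("shadow") endpoints that never appear in a spec.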









