Page 269 - Cyber Defense eMagazine Annual RSA Edition for 2024

AI Security Challenges

            There’s already no shortage of commentary speculating on AI’s concomitant dangers, most of which fall
            into one of three broad categories:

               1.  Training Data-Related

            AI systems have voracious appetites for data. Their need to ingest and analyze large, diverse datasets
            means that AI developers face the full gamut of risks that attach whenever a sufficiently rich,
            centralized data asset is created: questions of storage, usage, and access. Data must be protected from
            external bad actors, of course, but it must also be safeguarded against accidental unauthorized access
            or misuse. AI further complicates basic security because AI agents are increasingly able to make their
            own decisions about what to do with the data assets they can reach.
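            One common mitigation for that last point is to never hand an agent the raw data store at all, but only
            a view scoped to the task at hand. The sketch below is a minimal illustration of that idea; the class
            name, dataset names, and sample records are all hypothetical, not taken from any particular framework.

```python
# Minimal sketch of per-task least-privilege access for an AI agent.
# All names and data here are hypothetical illustrations.


class ScopedDataAccess:
    """Wraps a data store so an agent can only read explicitly granted assets."""

    def __init__(self, datasets, granted):
        self._datasets = datasets
        self._granted = frozenset(granted)  # immutable: the agent cannot widen its own scope

    def read(self, name):
        if name not in self._granted:
            # Deny by default: anything not explicitly granted is off-limits.
            raise PermissionError(f"agent not granted access to {name!r}")
        return self._datasets[name]


store = {
    "support_tickets": ["ticket 1042: login issue"],
    "payroll": ["employee salary records"],
}

# The agent receives only the scoped view, never `store` itself.
agent_view = ScopedDataAccess(store, granted={"support_tickets"})
agent_view.read("support_tickets")   # allowed
# agent_view.read("payroll")         # would raise PermissionError
```

            The design choice worth noting is deny-by-default: the agent's reachable surface is defined by an
            explicit allowlist rather than by whatever it happens to discover.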

               2.  Deep Analysis & Inference

            AI technologies have a genuinely unprecedented capacity to analyze massive datasets and draw complex
            inferences, which can turn them into powerful surveillance tools, even inadvertently. We’ve already
            seen obvious instances of AI surveillance in facial recognition programs, but the true power, and risk,
            of sophisticated AI lies in its ability to draw accurate conclusions about individuals from a broad
            array of seemingly unrelated data points. These capabilities aren’t a risk only in the hands of bad
            actors; they can inadvertently uncover sensitive information in the regular course of otherwise benign
            activities, like behavioral customer segmentation.
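            The statistical core of that risk can be shown with a toy simulation: signals that are each only a weak,
            noisy proxy for a sensitive attribute become a much stronger predictor once combined. Everything below
            is synthetic and the signal names are hypothetical; this is an illustration of the principle, not a
            model of any real system.

```python
# Toy sketch: three weak, seemingly unrelated signals, combined,
# reveal a sensitive attribute more reliably than any one alone.
import random

random.seed(0)


def make_record():
    sensitive = random.random() < 0.5
    # Each signal agrees with the sensitive trait only 70% of the time,
    # i.e. on its own it is a weak proxy.
    noisy_signal = lambda: sensitive if random.random() < 0.7 else not sensitive
    return sensitive, (noisy_signal(), noisy_signal(), noisy_signal())


records = [make_record() for _ in range(100_000)]

# Predict from one signal vs. a majority vote over all three.
single = sum(sigs[0] == s for s, sigs in records) / len(records)
majority = sum((sum(sigs) >= 2) == s for s, sigs in records) / len(records)

print(f"one signal: {single:.3f}, three combined: {majority:.3f}")
# Each signal alone is ~70% accurate; the majority vote reaches ~78%.
```

            Adding more weak signals widens that gap further, which is why large, linked datasets can expose traits
            no single data point would.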

               3.  Misuse of Capabilities

            AI will accelerate virtually every industry, including fraud. We’re seeing it already with cloned voices
            and deepfakes: AI can be used to create a distorted view of the world, then distribute that view in an
            incredibly broad or highly targeted fashion, depending upon the desires of individual bad actors. The
            potential attack vectors are too many to count, ranging as they do from email fraud to driving
            recruitment for extremist groups. The key takeaway here is that AI can serve as a force multiplier for
            the most malignant forms of human creativity.

            Beyond these, there’s a long tail of potential threat vectors that could arguably justify the creation
            of additional categories, but that discussion is starting to feel a bit academic. For our purposes,
            it’s principally important to understand that AI-related privacy and security risks take multiple
            forms, recapitulate older vulnerabilities while creating new ones, and can arise from benign as well
            as malign intentions. The sheer breadth of possibility here demands a complex, active, and
            collaborative response.



            De-Risking AI: The Story So Far

            Just as AI is in its relative infancy as a technology, so too are efforts to comprehensively safeguard
            privacy in an AI-driven world. At the regulatory level, a number of national and international bodies
            have begun enshrining privacy protections and bringing AI under at least partial regulation.






