
discussed, these algorithms can further exacerbate inequalities among already marginalized groups and the technologically illiterate. As the pace of innovation and rate of change increases, many will be left behind.

In this article I’ll attempt to tease out some of the more granular issues that are being overlooked or under-examined. As a point of reference, I will use the current White House Office of Science and Technology Policy (OSTP) Blueprint for an AI Bill of Rights and will discuss other potential measures for regulation and best practices to improve trust, transparency, safety, and accountability while minimizing the harms of AI, particularly as they relate to marginalized communities.



            The AI Dilemma

To put it simply, AI relies on massive amounts of data to create statistical correlations that accelerate decision-making. In the context of generative AI, models can create new text, images, sound, and more based on their training data sets. These operations carry privacy and security risks, and the platforms are already grappling with output that may be biased or discriminatory.



            Privacy and Security

AI algorithms depend on vast amounts of personal data being collected, stored, and analyzed. As with any technology, the potential for data breaches and unauthorized access poses severe risks. Data leaks, tampering, and downtime of essential services can have significant effects on the individuals and businesses that depend on these AI systems. Effective cybersecurity controls must be implemented to minimize the likelihood of exposure, misuse, and other compromise. By its nature, the complexity of AI systems often makes it challenging for users to understand how their data is being used, raising concerns about transparency and true informed consent. Do you know what data is being collected and how it's being used and shared? Even if you know, can you do anything about it? Clear communication and robust data privacy and security practices are critical to effectively protecting users.



            Bias and Discrimination

AI algorithms depend heavily on the quality of the training data they receive. Unfortunately, numerous headline-making stories have demonstrated the inherent risk of these platforms inadvertently amplifying existing biases, which can lead to unfair treatment of different groups, often those already marginalized. Gender biases in training sets can lead to unequal treatment, as shown in the well-documented case of Amazon’s recruiting tool: trained on previous resumes, which were predominantly men’s, the algorithm inadvertently favored male applicants.
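Simple audits can surface this kind of disparity before a model is put into production. The sketch below is a hypothetical illustration, not a description of Amazon’s system: it assumes a screening model whose outcomes are grouped by a gender label and checks the widely cited four-fifths (80%) rule of thumb for disparate impact. The data, group labels, and threshold are assumptions for demonstration only.

    # Hypothetical sketch: audit a screening model's outcomes for group disparity
    # using selection rates and the four-fifths rule. All data here is illustrative.
    from collections import defaultdict

    def selection_rates(records):
        """records: iterable of (group, selected) pairs, selected is True/False."""
        counts = defaultdict(lambda: [0, 0])  # group -> [selected_count, total_count]
        for group, selected in records:
            counts[group][1] += 1
            if selected:
                counts[group][0] += 1
        return {g: sel / total for g, (sel, total) in counts.items()}

    def disparate_impact_ratio(rates):
        """Ratio of the lowest selection rate to the highest; below 0.8 flags concern."""
        return min(rates.values()) / max(rates.values())

    # Illustrative outcomes: (group label, advanced-to-interview flag)
    outcomes = [("men", True)] * 48 + [("men", False)] * 52 \
             + [("women", True)] * 29 + [("women", False)] * 71

    rates = selection_rates(outcomes)
    print(rates)                                        # {'men': 0.48, 'women': 0.29}
    print(f"ratio = {disparate_impact_ratio(rates):.2f}")  # 0.60, below 0.8: escalate for human review

A check like this does not prove or disprove bias on its own, but a failing ratio is exactly the kind of signal that should trigger the human review and fallback mechanisms discussed below.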

Leveraging biased data sets may also perpetuate systemic racism, leading to discriminatory decision-making that affects equal employment opportunities, financial lending, or law enforcement. One example is an AI-based tool used to score the likelihood of criminal re-offense that incorrectly labelled Black defendants as twice as likely to reoffend as white defendants. Human intervention and fallback mechanisms are crucial in these situations, especially before the biases are known. But that said – and



