
knowing that nothing is ever easy – there are highly publicized instances of AI being over-corrected by manual intervention, such as with Google's Perspective API, an AI tool developed to curb hate speech online. It faced criticism for overcorrecting and censoring words used in benign contexts, sparking conversations about free speech and the policing of AI.




            Blueprint for an AI Bill of Rights

To address the aforementioned risks, many have looked to government regulation. One example is provided by the White House Office of Science and Technology Policy (OSTP), which introduced a Blueprint for an AI Bill of Rights in 2022, designed to protect the civil rights of the American public as the world adapts to AI seeping into nearly every aspect of society. This framework outlines five principles:



            Safe and Effective Systems

AI systems must be thoroughly vetted with rigorous testing and ongoing monitoring to ensure they operate safely and effectively. This is achieved through pre-deployment testing, risk identification and mitigation, and continuous monitoring. Implementing safety measures, along with regular audits to verify system performance and reliability, is key to this success. The foundational goals of protecting users from harm and building public trust in AI technologies depend on prioritizing both the safety and the efficacy of these systems. Transparency is paramount in the development process, so that AI systems are both technically sound and aligned with ethical standards and user expectations.
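To make the idea of a pre-deployment check concrete, the short Python sketch below gates a model release on accuracy and false-positive thresholds measured against a held-out evaluation set. The function names, thresholds, and toy data are assumptions for illustration only; they are not prescribed by the Blueprint or any standard.

# Illustrative sketch only: a pre-deployment gate that blocks release unless a
# candidate model clears accuracy and false-positive thresholds on a held-out
# evaluation set. All names and thresholds here are hypothetical.

ACCURACY_FLOOR = 0.95          # minimum acceptable accuracy before deployment
FALSE_POSITIVE_CEILING = 0.02  # maximum tolerated false-positive rate

def evaluate_model(predictions, labels):
    """Return (accuracy, false-positive rate) from parallel 0/1 lists."""
    total = len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    false_positives = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    negatives = sum(y == 0 for y in labels) or 1
    return correct / total, false_positives / negatives

def pre_deployment_gate(predictions, labels):
    """Approve deployment only if both safety thresholds are met."""
    accuracy, fpr = evaluate_model(predictions, labels)
    print(f"accuracy={accuracy:.3f}, false_positive_rate={fpr:.3f}")
    return accuracy >= ACCURACY_FLOOR and fpr <= FALSE_POSITIVE_CEILING

# Toy evaluation data standing in for a real held-out test set.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
truth = [1, 0, 1, 0, 0, 0, 1, 0]
print("deploy" if pre_deployment_gate(preds, truth) else "hold back for review")

The same check can run on a schedule against live traffic samples, turning the one-time gate into the continuous monitoring the principle calls for.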




            Algorithmic Discrimination Protections
AI systems must be designed equitably and prevented from perpetuating existing biases and discrimination. Designers, developers, and deployers of AI systems must include safeguards to actively identify, address, and mitigate biases in algorithms and data sets. Leveraging diverse and representative training data helps proactively minimize the risk of discriminatory outcomes, and regular audits and impact assessments can aid in detecting and correcting biases in AI decision-making processes. A minimal example of such an audit appears below. By implementing these protections, AI systems are better positioned to treat individuals fairly, regardless of race, gender, religion, or other protected classifications, leading to greater inclusivity and effectiveness.
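As a simple illustration of what such an audit might look like, the Python sketch below compares positive-decision rates across demographic groups and flags any group whose rate falls below a chosen fraction of the highest rate. The group labels, toy decisions, and 80 percent threshold are assumptions for the example, not a legal or regulatory standard.

# Illustrative sketch only: a demographic-parity audit that flags groups whose
# positive-decision rate falls well below the best-served group's rate.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the positive-decision rate for each group label."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision == 1)
    return {g: positives[g] / totals[g] for g in totals}

def parity_audit(decisions, groups, min_ratio=0.8):
    """Flag groups whose rate is below min_ratio of the highest group rate."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best > 0 and r / best < min_ratio}
    return rates, flagged

# Toy data: decisions (1 = approved) alongside a group label for each record.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, flagged = parity_audit(decisions, groups)
print("selection rates:", rates)
print("groups below the parity threshold:", flagged)

Run as part of a regular impact assessment, a check like this surfaces disparities early enough to investigate the underlying data or model before harm compounds.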



            Data Privacy

The rights associated with strong data privacy and security protections underscore the criticality of individual control and agency over personal data. Achieving this principle requires robust security practices that prevent unauthorized access, misuse, and exploitation, along with transparency about data collection practices. Individuals must be clearly informed about how their data is being used and given the opportunity to make informed choices about



