
Here in the US, President Biden issued an Executive Order governing the “safe, secure, and trustworthy” development of AI, while Senator Chuck Schumer has called for comprehensive legislation governing AI. At the same time, the FTC is finalizing rules around AI-based impersonations. The United Nations is getting in on the act, too, forming an advisory body on AI. Apart from governments, the IEEE is assembling its own global initiative on AI ethics, while every major technology player developing AI systems has its own internal ethics and governance board.

There’s clearly no shortage of desire to enshrine protections into AI development, but these efforts are early, disaggregated, and, in the case of big tech self-regulation, often opaque. For the time being, at least, we’re stuck relying on existing privacy and data protection legislation (e.g., GDPR, CCPA, HIPAA, and so on). While many of the protections within these regulatory packages apply to at least some degree, no privacy laws on the books ever contemplated the unique challenges posed by AI – and even if they did, they’d remain an uneven patchwork of protection.

The bottom line: we’re a long way away from universal AI-related privacy protections – assuming we even want that kind of global agreement, which is very much open to debate. If and when that regulation does come, however, it may not be enough, or it might upset the balance of risks and benefits we discussed earlier. A new way of thinking is required.



            The Path Forward

The comprehensive risks associated with AI demand a comprehensive response, which means that AI innovators, governments, regulators, and watchdog groups must work closely together. Legislators and regulators are not technical experts; they will need guidance and transparency. By the same token, AI innovators and other technical experts may not be acquainted with the full array of tools in a regulator’s toolkit – or the implications of using or not using them – so it’s imperative that both sides collaborate in good faith.

            As they do, they should focus on two principal objectives:



               1.  Ward off the worst-case scenarios.

Better protect data by acknowledging that “notice and choice” data consent frameworks (e.g., opt-in or opt-out) are simply inadequate. Going forward, it’s imperative that we move toward models that give granular control of data collection and sharing to individual data subjects themselves. As part of this, we need novel ways to let users proactively control their data footprint in a centralized fashion rather than navigating countless individual consent “agreements” with distinct websites, brands, and so on; a rough sketch of what such a model might look like follows below. This paradigm shift, even in the absence of AI-specific regulation, would dramatically reduce the potential for AI to surveil, compromise, or assist in defrauding individuals.

               2.  Effectively navigate the best-case scenario.

So what happens if AI develops quickly and humanely into truly generalized intelligences? Even assuming all goes well, this poses a new problem: we’re faced with a full-on “authenticity crisis.” What’s




