Page 254 - Cyber Defense eMagazine Annual RSA Edition for 2024

Moreover, this expansive data collection has started ringing alarm bells in regulatory corridors. As AI adoption continues to soar and data collection practices expand, the spotlight is increasingly on heightened regulatory scrutiny and compliance pressures.



            Untangling the Regulatory Maze

Data collection practices initially meant to give businesses a competitive advantage could inadvertently pull organizations into an intricate labyrinth of regulations, where non-compliance can invite hefty fines and dent reputations. For example, the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) were created to raise awareness around digital data risks and now govern the digital domain. These stringent laws dictate how businesses must manage Personally Identifiable Information (PII), with rigorous data collection, processing, and storage protocols.

With such regulatory frameworks in place, companies engaging in AI-enhanced data collection stand on the brink of a complex regulatory environment. To navigate this evolving maze effectively, organizations must pre-empt potential roadblocks. This involves curating a robust data governance framework, anonymizing data where possible, restricting unwarranted data collection, and diligently ensuring compliance with existing data protection laws.
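One of the safeguards above, anonymizing data where possible, can be sketched in code. The following is a minimal, illustrative Python example of pseudonymizing PII fields with a keyed hash before storage. The field list, key handling, and token length are assumptions made for the example, not a compliance-grade implementation; a real deployment would source the key from a secrets manager and define PII fields per its governance framework.

```python
import hashlib
import hmac

# Hypothetical secret key for illustration only; in practice this would
# come from a secrets manager and be rotated per policy.
PSEUDONYM_KEY = b"example-rotation-key"

# Illustrative set of fields treated as PII under laws like GDPR/CCPA.
PII_FIELDS = {"name", "email", "phone"}

def pseudonymize(record: dict) -> dict:
    """Replace PII fields with keyed hashes so records can still be
    joined for analytics without exposing the raw identifiers."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # truncated pseudonym token
        else:
            out[field] = value
    return out

record = {"name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}
safe = pseudonymize(record)
```

Because the hash is keyed and deterministic, the same individual maps to the same token across datasets, preserving analytic joins while keeping raw identifiers out of storage.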

However, even as the AI juggernaut rolls on, the nature of collected data is shifting. While some organizations primarily focus on gathering PII, they could inadvertently collect more sensitive data categories like Protected Health Information (PHI) and biometric data.



            Securing Sensitive Data

The collection of PHI—which encompasses health-related details linked to an individual or disclosed during healthcare services—is governed by stricter regulations under the US Health Insurance Portability and Accountability Act (HIPAA). Simultaneously, indiscriminate collection of biometric data—unique biological attributes such as fingerprints or facial recognition patterns—is also coming under the purview of virtually uncompromising regulations, such as the Illinois Biometric Information Privacy Act (BIPA).


In a world dependent on AI tools, particularly chatbots and facial recognition platforms, the inadvertent collection of PHI and biometric data can become problematic, forcing organizations into a realm of stringent and sophisticated regulations. For instance, AI can infer that a person has a certain illness or condition by extracting biometric features from an image, which means biometric data may effectively be collected any time an individual posts a photo on social media mentioning they are sick. These circumstances, in turn, raise the question of whether posts about emotional well-being will lead to accidental psychometric data collection.

To manage such risks, organizations can proactively establish clear and concise privacy policies that seek explicit consent, and invest in comprehensive data mapping and inventory tools. By deploying AI algorithms to monitor data handling practices and compliance, organizations can detect potential issues in real time and address them before they escalate.
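By way of illustration, a data-mapping or monitoring tool of the kind described might scan collected text for patterns of regulated data before it reaches storage. The sketch below uses a few deliberately simple regular expressions; the category names and patterns are assumptions for the example, and real tooling would rely on far richer detection (NER models, checksum validation, locale-aware rules).

```python
import re

# Illustrative patterns only; not exhaustive and not production-grade.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phi_keyword": re.compile(r"\b(diagnosis|prescription|biometric)\b", re.I),
}

def scan(text: str) -> dict:
    """Return the categories of potentially regulated data found in text,
    mapped to the matching substrings, so a pipeline can flag or quarantine
    the record before it is stored."""
    return {
        name: pattern.findall(text)
        for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    }

findings = scan("Patient diagnosis sent to jane.doe@example.com, SSN 123-45-6789")
```

A scanner like this would typically run inline on ingestion, emitting alerts or redacting matches so that compliance issues surface as the data arrives rather than at audit time.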






