
AI Models and the OSS They Rely On

            OSS models are the bedrock of the AI revolution. Why? Most organizations today don’t build their own
            AI models from scratch – they increasingly rely on open-source models as foundational components in
            their AI initiatives. This approach expedites development, allowing rapid deployment and customization
            to suit specific needs.
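
To make the pattern concrete, here is a minimal sketch of how a team might pull an open-source model from the Hugging Face Hub with the transformers library. The model name ("gpt2") and the revision value are illustrative placeholders rather than recommendations; pinning the revision to a specific commit hash is one simple way to avoid silently inheriting upstream changes.

# Minimal sketch: loading an open-source model as a foundational component.
# "gpt2" and the revision value are placeholders; pin a real commit hash in practice.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",        # example open-source model
    revision="main",     # ideally a specific commit hash, not a moving branch
)

result = generator("Write a one-line product description:", max_new_tokens=40)
print(result[0]["generated_text"])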



            The tradeoff for this convenience and efficiency, however, is security.

            OSS packages are widely used – and attackers know it. They also know that organizations struggle to
            scrutinize potential vulnerabilities in code written by outside developers. The result: OSS models often
            introduce vulnerabilities that malicious actors can easily exploit to compromise sensitive data, decision-
            making processes, and overall system integrity.


            AI research and solutions are still in the “move fast and break things” phase, so it’s no surprise that
            security protocols for OSS models are still in their infancy. This is why cyber incidents like the recent
            Hugging Face vulnerability – in which more than 1,500 LLM-integrated API tokens used by over 700
            companies, including Google and Microsoft, were compromised – are increasing.
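
Exposed access tokens like those in the Hugging Face incident frequently originate as secrets committed directly into source code. As an illustrative sketch only (a dedicated secret-scanning tool is the better choice), the snippet below walks a source tree and flags strings matching the "hf_" prefix used by Hugging Face user access tokens; the path and pattern here are assumptions.

# Illustrative sketch: crude scan of a source tree for strings that look like
# Hugging Face user access tokens (which begin with "hf_"). A purpose-built
# secret scanner is preferable; the file glob and pattern are assumptions.
import re
from pathlib import Path

TOKEN_PATTERN = re.compile(r"hf_[A-Za-z0-9]{20,}")

def find_suspect_tokens(root: str = "."):
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in TOKEN_PATTERN.finditer(text):
            # Truncate the match so full secrets are never printed or logged.
            hits.append((str(path), match.group(0)[:10] + "..."))
    return hits

for location, preview in find_suspect_tokens():
    print(f"possible token in {location}: {preview}")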



            Ramifications of Not Securing OSS Models

            Hacking into AI infrastructure gives bad actors the keys to the kingdom. Attackers can view and steal
            sensitive data, compromising user privacy on an unprecedented scale. They can also access, pilfer, and
            even delete AI models, compromising an organization's intellectual property. Perhaps most alarmingly,
            they can alter the results produced by an AI model.
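
One hedged defensive layer against tampering with stored models is to verify an artifact's checksum against a known-good value before it is ever loaded or served. The filename and expected digest below are placeholders for values an organization would record and maintain itself.

# Minimal sketch, assuming you keep your own list of known-good digests:
# verify a model artifact's SHA-256 hash before loading it.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "0" * 64  # placeholder digest recorded when the model was approved

if sha256_of("model.safetensors") != EXPECTED:
    raise RuntimeError("Model artifact does not match the recorded checksum; refusing to load.")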

            Imagine a scenario where an individual with a persistent headache asks an AI chatbot or Large Language
            Model (LLM) for basic headache tips. Instead of suggesting rest or Advil, the corrupted AI advises the
            person to combine two OTC medications in a way that, unbeknownst to the headache sufferer, can result
            in toxic or fatal side effects.

            Alternatively, consider an autonomous car that relies on AI for its navigation. If its AI system has been
            tampered with, it might misinterpret a red traffic light as green, triggering a collision. Similarly, if an auto
            manufacturing facility’s AI quality control system is tampered with, it might falsely validate defective welds,
            endangering drivers and triggering recalls.

            Even in seemingly benign situations, compromised AI can be dangerous. Imagine that a user prompts
            an AI assistant to suggest travel tips for Italy. A compromised LLM could embed a malicious URL within
            its response, directing the user to a phishing site laden with malware – all disguised as a helpful travel tip.
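
A sketch of one possible guardrail for this scenario: extract URLs from the model's response and only pass through hosts on an application-specific allowlist before showing the text to the user. The allowlist and example response below are hypothetical.

# Illustrative sketch: strip URLs from an LLM response unless their host is on
# an application-defined allowlist. The allowlist here is hypothetical.
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.italia.it"}   # hypothetical allowlist for a travel assistant
URL_PATTERN = re.compile(r"https?://\S+")

def sanitize(response: str) -> str:
    def check(match: re.Match) -> str:
        host = urlparse(match.group(0)).hostname or ""
        return match.group(0) if host in ALLOWED_HOSTS else "[link removed]"
    return URL_PATTERN.sub(check, response)

print(sanitize("Book tickets at https://evil.example/phish before you go!"))
# -> "Book tickets at [link removed] before you go!"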






