
What We Learned About AI Incident Response

Drafting AI incident response plans has been eye-opening, even for us. In putting to paper for our customers all the various ways AI can fail and the many attack surfaces it presents, we've learned several big lessons.



Neither MRM Nor Conventional IR Is Enough

The basics of our AI incident response plans come from combining model risk management (MRM) practices, which have become fairly mature within the financial industry, with pre-existing computer incident response guidance and other information security best practices. MRM helps protect against AI failures. Conventional incident response provides a framework to prepare for AI attacks. Both are great starts, but as we detail below, simply combining them is still not quite right for AI incident response. This is why our Sample AI Incident Response plan includes guidance on both MRM and traditional computer incident response, plus plans for handling novel AI risks in the context of the burgeoning AI regulation landscape in the US.

MRM practices, illustrated in, among other places, the Federal Reserve's Supervisory Guidance on Model Risk Management, known as SR 11-7, are an excellent start for decreasing risk in AI. (In fact, if your organization is using AI and is not familiar with the SR 11-7 guidance, stop reading this article and start reading the guidance.) Broadly, MRM calls for testing of AI systems, management of AI systems with inventories and documentation, and careful monitoring of AI systems once they are deployed. MRM also relies on the concept of "effective challenge," in which models and processes are questioned and reviewed by humans across multiple lines of defense, including technology, compliance, and audit functions. However, MRM practices do not specifically address AI security or incident response, and they often require resources not available to smaller organizations.
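As a rough illustration of the post-deployment monitoring that MRM calls for, the sketch below compares the distribution of live model scores against a training baseline using a population stability index. The field names, synthetic data, and the 0.25 escalation threshold are assumptions for illustration, not anything prescribed by SR 11-7 or our plan.

# Minimal sketch of MRM-style post-deployment monitoring: compare the
# distribution of a model score (or input feature) in production against
# its training baseline. Thresholds and data are illustrative assumptions.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Rough drift measure between two samples of the same quantity."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid division by zero and log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Stand-ins for training-time and production score samples.
baseline_scores = np.random.beta(2, 5, size=10_000)
production_scores = np.random.beta(2, 3, size=10_000)

psi = population_stability_index(baseline_scores, production_scores)
if psi > 0.25:  # common rule-of-thumb threshold; treat it as an assumption
    print(f"PSI={psi:.3f}: significant drift - escalate for effective challenge")
else:
    print(f"PSI={psi:.3f}: within tolerance")

In practice, a drift alert like this would feed the same escalation path as any other incident trigger: the model goes back through review rather than quietly staying in production.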

We'll address incident response for smaller organizations in the next section, but from an information security perspective, traditional incident response guidance is helpful, though not a perfect fit. For instance, AI attacks can occur without traditional routes of infiltration and exfiltration. They can manifest as unusually high usage of prediction APIs, insider manipulation of AI training data or models, or as specialized trojans buried in complex third-party AI software or artifacts. Standard incident response guidance, say from SANS or NIST, will get you started in preparing for AI incidents, but it wasn't specifically designed for newer attacks against AI and could leave your organization with AI security blind spots.
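To make one of those signals concrete, here is a minimal sketch of how a plan might turn "high usage of prediction APIs" into a detection check for possible model extraction. The log format, the one-hour window, and the 5,000-call threshold are all assumptions chosen for illustration.

# Illustrative sketch only: flag clients whose prediction-API call volume
# in a recent window exceeds a threshold, one possible indicator of model
# extraction. Log schema, window, and threshold are assumptions.
from collections import Counter

def flag_suspect_clients(request_log, window_minutes=60, max_queries=5_000):
    """request_log: iterable of (client_id, timestamp_in_minutes) tuples."""
    latest = max(ts for _, ts in request_log)
    recent = [cid for cid, ts in request_log if latest - ts <= window_minutes]
    counts = Counter(recent)
    return {cid: n for cid, n in counts.items() if n > max_queries}

# Example usage with synthetic log entries.
log = [("client-a", t) for t in range(0, 60)] * 10        # ~600 calls/hour
log += [("client-b", t % 60) for t in range(0, 9_000)]    # ~9,000 calls/hour
for client, n in flag_suspect_clients(log).items():
    print(f"{client}: {n} prediction calls in the last hour - review per IR plan")

The point is not the specific threshold but that the incident response plan names the signal, who reviews it, and what happens next.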



            When Going Fast and Breaking Things Goes Wrong

MRM practices require serious resources: lots of people, time, and technology. Standard MRM may not be feasible for early-stage or small organizations under commercial pressure to "go fast and break things." Common sense indicates that when organizations go fast and break things without conventional MRM, AI incidents are even more likely. With AI incident response, smaller organizations that lack the capacity for heavy-handed supervision on the build side of AI can spend their limited resources in a way that keeps them agile while still confronting the reality of AI incidents.



