
increasingly interested in enforcing best practices for AI. And numerous damaging attacks against AI systems have already been published in machine learning and security research journals.

Our bet is you’ll be hearing more about AI incidents in the coming years. Below, we’ll go over why AI is (and is not) different from more traditional software systems, share some of the primary lessons we’ve learned writing AI incident response plans, and introduce the free and open bnh.ai Sample AI Incident Response Plan to help your organization better prepare for AI incidents.

            How AI Is (and Is Not) Different


            What’s so different about AI? Basically, it’s much more complex than traditional software, it has a nasty
            tendency to drift toward failure, and it’s often based on statistical modeling. What does that really mean?

More complexity: For starters, AI systems can have millions or billions of rules or parameters that consider combinations of thousands of inputs to make a decision. That’s a lot to debug, and it’s hard to tell if an AI system has been manipulated by an adversary.

Drift toward failure: Most AI systems are trained on static snapshots of the world encapsulated in training datasets. And just in case you haven’t noticed, the world is not a particularly static place. As the world changes, the AI system’s understanding of reality becomes less and less valid, leading to degrading quality of decisions or predictions over time. This is known as “model decay” or “concept drift,” and it applies to nearly all current AI systems.
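To make drift monitoring concrete, here is a minimal sketch in Python, assuming NumPy and SciPy are available; the feature data, sample sizes, and p-value cutoff are illustrative assumptions, not recommended settings. It flags drift by comparing a live input feature against the same feature in the training data with a two-sample Kolmogorov-Smirnov test.

import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_feature: np.ndarray, live_feature: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Flag drift if live data differs significantly from training data."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < p_threshold

# Illustrative data: the live distribution has shifted away from training.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.7, scale=1.0, size=1000)
print(drift_alert(train, live))  # True -> investigate, consider retraining

In practice, teams run checks like this on a schedule for each important input feature (and for the model’s outputs), and a triggered alert is exactly the kind of event an AI incident response plan should anticipate.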



            Probabilistic outcomes: Most AI systems today are inherently probabilistic, which means that their
            decisions and predictions are guaranteed to be wrong at least some of the time. In standard software,
            wrong outcomes are bugs. In AI, they are features. This makes testing and establishing tolerances for
            failure more difficult.
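To illustrate what a failure tolerance can look like in code, here is a minimal sketch, assuming a model that outputs probabilities for a binary decision; the decision threshold and the error budget are made-up values for the example, not recommended defaults. Instead of asserting that every prediction is correct, the test asserts that the observed error rate stays within an agreed budget.

import numpy as np

def within_error_budget(predicted_probs: np.ndarray, labels: np.ndarray,
                        threshold: float = 0.5,
                        max_error_rate: float = 0.05) -> bool:
    """Pass if the observed error rate stays inside the agreed budget."""
    predictions = (predicted_probs >= threshold).astype(int)
    error_rate = float(np.mean(predictions != labels))
    return error_rate <= max_error_rate

# Even a good probabilistic model gets some cases wrong by design.
probs = np.array([0.9, 0.2, 0.7, 0.4, 0.85, 0.1])
truth = np.array([1, 0, 1, 0, 1, 1])  # the last case will be misclassified
print(within_error_budget(probs, truth, max_error_rate=0.2))  # True: 1 error in 6

Agreeing on that budget up front, and monitoring whether production performance stays inside it, is part of what makes testing probabilistic systems harder than testing conventional code.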

The combination of these three characteristics presents a number of testing difficulties, potential attack surfaces, and failure modes for AI-based systems that are often absent from more traditional software applications.



            If that’s what’s different, then what’s the same?

In the end, AI is still just software. It’s not magically exempt from the bugs and attacks that plague other software, and it should be documented, tested, managed, and monitored just like any other valuable enterprise software asset. This means that AI incident response plans, and AI security plans more generally, needn’t reinvent the wheel. Frequently, they can piggyback on existing plans and processes.











