Page 139 - CDM-CYBER-DEFENSE-eMAGAZINE-December-2018

Challenge #2: Generic Models

Though AI is touted as being able to detect complex new attack patterns, most AI systems in practice offer only a modest extension of earlier rule- and signature-based approaches. AI is only as powerful as the data it is given, and most implementations ship generic models that have no understanding of the networks they are deployed to and are easy for adversaries to evade. When pattern detection is static across time and networks, adversaries can profile the detections and simply update their tools and tactics to slip past the defenses in place.
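The contrast can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the threshold, function names, and traffic numbers are all invented. A generic rule shipped identically to every deployment can be profiled and stayed under, while even a simple model fit to the local network's own baseline flags what is abnormal *for that environment*.

```python
from statistics import mean, stdev

# Invented, one-size-fits-all threshold: identical on every network,
# so an adversary who learns it simply stays just below it.
GENERIC_THRESHOLD_BYTES = 10_000_000

def generic_rule(outbound_bytes: int) -> bool:
    """Static rule distributed to every deployment."""
    return outbound_bytes > GENERIC_THRESHOLD_BYTES

def local_baseline_rule(outbound_bytes: int, history: list) -> bool:
    """Flags traffic anomalous for THIS network: more than three
    standard deviations above what this environment normally sends."""
    mu, sigma = mean(history), stdev(history)
    return outbound_bytes > mu + 3 * sigma

# A quiet network's recent outbound transfers: roughly 50 KB each.
history = [48_000, 52_000, 50_000, 47_000, 51_000, 49_000, 53_000, 50_000]

exfil = 5_000_000  # huge for this network, yet under the generic threshold
print(generic_rule(exfil))                  # False: evades the generic rule
print(local_baseline_rule(exfil, history))  # True: clearly anomalous here
```

The point of the sketch is not the arithmetic but the asymmetry: the generic threshold is knowable in advance, while the local baseline is something the adversary cannot see without already being inside the environment.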



            Challenge #3: Human-AI Breakdown


The majority of AI systems currently deployed serve up seemingly arbitrary scores without explaining them. This leads to a breakdown in trust and understanding with the human analysts who must consume and act on the results. When AI cannot support its "sophisticated" detections with explanations that security analysts understand, it adds to the analysts' cognitive load rather than making them more efficient and effective.
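One way to close that gap is to pair every score with the evidence behind it. The sketch below is purely illustrative, assuming a simple additive scoring model; the feature names and weights are invented. Instead of emitting a bare number, the system returns the top contributing features, so the analyst can judge the detection rather than take it on faith.

```python
# Hypothetical additive scoring model: each detected feature contributes
# some weight to the overall anomaly score. Real systems differ; this only
# illustrates attaching an explanation to a score.

def explain_score(contributions: dict, top_n: int = 3):
    """Return the overall score plus the top contributing features,
    turning an opaque number into something an analyst can act on."""
    score = sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    return score, top

# Invented feature contributions for one alert.
alert = {
    "rare_external_destination": 0.45,
    "unusual_transfer_volume": 0.30,
    "off_hours_activity": 0.15,
    "new_user_agent": 0.05,
}

score, reasons = explain_score(alert)
print(f"score={score:.2f}")
for feature, weight in reasons:
    print(f"  {feature}: {weight:.2f}")
```

A score of 0.95 backed by "rare external destination" and "unusual transfer volume" gives the analyst a starting hypothesis; the same 0.95 with no context is just another number in the queue.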



            Using AI to Your Advantage

That’s not to say AI and machine learning are useless in the fight against cybercrime. They can be powerful tools for improving enterprise defenses, but success requires a strategic approach that avoids the weaknesses inherent in most of today’s implementations. Three key tactics will amplify security teams’ ability to work with AI rather than adding to their problems.




            Tactic #1: Aim for Fewer False Positives

An effective system requires an ambitious goal: reduce the security team’s workload and automate investigation with a focus on the adversary’s full objective. Rather than detecting secondary aspects of adversary activity, such as the tool used or the tactic employed, AI systems that uncover the core behaviors an adversary cannot easily avoid will give security teams a small number of true business risks to investigate. Effective solutions should produce very few false positives, generating fewer than 10 high-priority investigations per week, not the hundreds or thousands of events produced by current approaches.
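The triage idea above can be sketched as a filter. This is an invented example, not a real product's logic: the behavior categories and the "at least two core behaviors" rule are assumptions chosen to show how requiring evidence of the adversary's core objective, rather than alerting on every tool or tactic signature, collapses raw detections into a handful of investigations.

```python
# Hypothetical core adversary objectives a system might anchor on.
CORE_BEHAVIORS = {"data_staging", "exfiltration", "credential_abuse"}

def triage(alerts):
    """Promote only alerts whose evidence spans at least two core
    adversary behaviors; tool/tactic noise stays as low-priority context."""
    return [a for a in alerts
            if len(set(a["behaviors"]) & CORE_BEHAVIORS) >= 2]

raw_alerts = [
    {"id": 1, "behaviors": ["port_scan"]},                     # tool noise
    {"id": 2, "behaviors": ["data_staging", "exfiltration"]},  # core risk
    {"id": 3, "behaviors": ["new_user_agent"]},                # tactic noise
    {"id": 4, "behaviors": ["credential_abuse", "exfiltration"]},
]

high_priority = triage(raw_alerts)
print([a["id"] for a in high_priority])  # [2, 4]
```

Four raw detections become two investigations; scaled up, the same shape of filter is what turns thousands of weekly events into the single-digit queue described above.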



            Tactic #2: Understand the Environment

When security teams home in on the adversary’s objectives, the bad actors are forced to shift their approach to better hide in the environments they attack. Adversaries traditionally have the advantage because they can profile an environment and avoid the detections in place. AI systems can gain the




