
            legitimate activity may be blocked, disrupting business operations, or malicious activity may be
            mistakenly identified as benign and no alert generated, leaving the organization open to risk.



            Hurdles AI/ML detection tools must overcome


            Despite their potential, AI/ML detection tools are not a panacea. They can miss threats, and they can
            flag activities as malicious when they’re not. For these tools to be trusted, the alerts they raise and
            the decisions they make need to be continually validated for accuracy.

            If we train our AI and machine learning algorithms the wrong way, they risk creating even more noise,
            alerting on activity that poses no real threat (“false positives”) and wasting analysts’ time chasing
            phantoms. Alternatively, they may miss threats entirely that should have been alerted on (“false
            negatives”).
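            As a rough illustration, both failure modes can be tracked with standard detection metrics once a batch
            of the tool’s verdicts has been checked against analyst-validated ground truth. The sketch below is
            purely illustrative; the labels and variable names are hypothetical.

# A minimal sketch: scoring a detector's verdicts against
# analyst-validated ground truth (all data here is hypothetical).
def detection_metrics(ground_truth, verdicts):
    """Both arguments are lists of booleans, True = malicious."""
    tp = sum(1 for t, v in zip(ground_truth, verdicts) if t and v)
    fp = sum(1 for t, v in zip(ground_truth, verdicts) if not t and v)
    fn = sum(1 for t, v in zip(ground_truth, verdicts) if t and not v)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"false_positives": fp, "false_negatives": fn,
            "precision": precision, "recall": recall}

# Example: one phantom alert and one missed threat in five events.
print(detection_metrics([True, False, True, False, False],
                        [True, True, False, False, False]))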



            How do we guard against the pitfalls?

            In the case of false positives, we need to validate that a detected event is not malicious and train the
            tool to ignore such situations in future - while ensuring this doesn’t stop the tool alerting on similar
            issues that are in fact malicious. The key to doing this effectively is having access to evidence that
            enables accurate and timely investigation of detected threats. Recorded packet history is an
            indispensable resource in this process - allowing analysts to determine precisely what happened and
            to accurately validate whether an identified threat is real or not.
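            For instance, an analyst validating an alert might pull the recorded packets from around the time it
            fired. The sketch below assumes the Scapy library; the capture file, alert timestamp and flagged
            address are all hypothetical.

# A minimal sketch of checking recorded packet history around an
# alert (Scapy assumed; file, timestamp and address hypothetical).
from scapy.all import rdpcap, IP

ALERT_TIME = 1618000000.0    # epoch timestamp of the alert
SUSPECT_IP = "203.0.113.42"  # host the tool flagged
WINDOW = 60                  # seconds of history either side

packets = rdpcap("recorded_history.pcap")
for pkt in packets:
    if IP in pkt and abs(float(pkt.time) - ALERT_TIME) <= WINDOW:
        if SUSPECT_IP in (pkt[IP].src, pkt[IP].dst):
            # Show exactly what the flagged host actually did.
            print(pkt.summary())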

            Dealing with false negatives is more difficult. How can we determine whether a threat was missed that
            should have been detected? There are two main approaches to this. The first is to implement regular,
            proactive threat hunting to identify whether there are real threats that your detection tools - including
            AI/ML tools - are not detecting. Ultimately, threat hunting is a good habit to get into anyway, and if
            something is found that your AI/ML tool missed the first time around, it provides an opportunity to train
            it to correctly identify similar threats the next time.
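            One way to fold hunt findings back into the tool is to add them as labeled examples and refit the
            model. The sketch below assumes a scikit-learn classifier over numeric event features; the features
            and values are hypothetical stand-ins.

# A minimal sketch of retraining a detector with threats found by a
# hunt (scikit-learn assumed; features and values are hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Existing labeled events: numeric feature rows, label 1 = malicious.
X_train = np.array([[0.1, 120, 3], [0.9, 4800, 41], [0.2, 200, 5]])
y_train = np.array([0, 1, 0])

# An event the hunt confirmed as malicious but the model had missed.
X_hunt = np.array([[0.8, 3900, 37]])
y_hunt = np.array([1])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(np.vstack([X_train, X_hunt]), np.append(y_train, y_hunt))

# The refit model should now flag similar activity as a threat.
print(model.predict([[0.85, 4100, 39]]))  # expected: [1]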

            The second approach is using simulation testing - of which one example is penetration testing. By
            creating simulated threats, companies can clearly see whether their AI/ML threat detection tools are
            identifying them correctly. If they’re not, it’s once again an opportunity to train the tool to identify
            similar activity as a threat in future.
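            That check can itself be automated: replay a set of simulated threat events through the detection
            pipeline and verify that each one is flagged. The detector interface in the sketch below is a
            hypothetical stand-in for the real tool.

# A minimal sketch of simulation testing: replay known-malicious
# events and confirm the detector flags them (interface hypothetical).
simulated_threats = [
    {"name": "lateral-movement", "event": {"dst_port": 445, "bytes": 9000}},
    {"name": "dns-tunnel", "event": {"dst_port": 53, "bytes": 70000}},
]

def detector_flags(event):
    # Stand-in for the real AI/ML tool's verdict on a single event.
    return event["bytes"] > 50000

for case in simulated_threats:
    if detector_flags(case["event"]):
        print(f"{case['name']}: detected")
    else:
        print(f"{case['name']}: MISSED - candidate for retraining")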
