
The Rising Cost of Failure

            How much is cybersecurity costing companies? In a recent Ponemon-Deep Instinct survey of IT and IT
            security practitioners, only 40% of respondents believed their budgets were sufficient for achieving a
            robust cybersecurity posture.

These budgets are predominantly funneled into containing and remediating threats rather than preventing them, in large part because cyber staff are overwhelmed by the amount of data they need to monitor. Yet this "assume a breach and then contain" approach comes at a steep cost, with the time and money spent remediating attacks running well into the hundreds of thousands of dollars. The value of preventing a cyber-attack ranges from $400,000 to $1.4 million, depending on the nature of the attack. If an attack is the first of its kind, it is virtually guaranteed to succeed absent strong preventive capabilities, and organizations stand to lose upwards of $1 million per successful attack.


            Subpar Solutions, Subpar Results


Why are current approaches to cybersecurity proving so inadequate? Because they over-rely on human intervention.

Specifically, most AI-based cybersecurity solutions are powered by traditional machine learning (ML), which is constrained by limitations that have become substantial problems in recent years. Chief among them is data: ML models are trained on only a fraction of the available raw data, and only on features identified in advance by human experts.
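To make that concrete, here is a minimal sketch in Python of what expert feature engineering looks like in practice. The specific features and their names are hypothetical illustrations, not drawn from any particular vendor's product: the point is simply that an entire raw file gets reduced to a handful of hand-picked numbers before any model sees it.

    # Illustration only: a traditional ML pipeline classifies a file from a few
    # hand-picked features, discarding most of the raw bytes in the process.
    # The feature choices below are hypothetical, not from any real product.
    import math
    from collections import Counter

    def expert_features(raw_bytes: bytes) -> list[float]:
        """Reduce a raw file to a few expert-chosen summary numbers."""
        counts = Counter(raw_bytes)
        total = len(raw_bytes) or 1
        # Shannon entropy: a classic hand-engineered signal for packed or encrypted payloads.
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        return [
            float(total),                                                 # file size
            entropy,                                                      # byte entropy
            float(raw_bytes.count(b"http://") + raw_bytes.count(b"https://")),  # embedded URLs
            float(raw_bytes[:2] == b"MZ"),                                # Windows PE header present?
        ]

    # A deep learning system, by contrast, can consume the raw bytes directly
    # and learn its own representations instead of these four summary numbers.
    sample = open(__file__, "rb").read()
    print(expert_features(sample))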

Human error, of course, also comes into play, even when highly specialized computer scientists with cybersecurity expertise carry out ML feature engineering. These professionals excel at training ML models on known threats, but even seasoned cybersecurity professionals cannot anticipate emerging, first-seen attacks that are designed to be evasive. Hackers, of course, understand this, which is why they are now building malware capable of fooling ML models into classifying it as benign.


Finally, there is a limit to how much training data an ML system can benefit from before reaching learning-curve saturation, the point past which additional data no longer improves its accuracy.
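The effect is easy to see in a rough sketch. The example below is hypothetical: a synthetic dataset and a generic scikit-learn classifier stand in for a real malware corpus, but the shape of the curve is the point, with validation accuracy climbing quickly at first and then flattening out.

    # Illustration only: plot validation accuracy against training-set size to
    # make learning-curve saturation visible. The data and model are stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import learning_curve

    X, y = make_classification(n_samples=10_000, n_features=30,
                               n_informative=10, random_state=0)

    sizes, _, val_scores = learning_curve(
        RandomForestClassifier(n_estimators=30, random_state=0),
        X, y,
        train_sizes=np.linspace(0.05, 1.0, 8),  # 5% .. 100% of the training split
        cv=3,
    )

    for n, score in zip(sizes, val_scores.mean(axis=1)):
        print(f"{n:>6} training samples -> validation accuracy {score:.3f}")
    # Accuracy rises quickly at first, then flattens: the saturation point.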


Given these limitations, ML systems struggle to detect new, previously unseen malware while generating high rates of false positives. Just as the cost of an unprevented attack can deliver a real blow to the bottom line, the time and resources required to investigate false positives also strain security teams. This, in turn, breeds "alert fatigue," making teams more prone to error when genuine threats emerge.


            Simply put, AI trade-offs – not understaffed cybersecurity teams – may be one of the biggest inhibitors to
            achieving a resilient cybersecurity posture.







