By Dave Thompson, Senior Director, Product Management, LightCyber
“Operational efficiency” and “security” are words not commonly used together. After 20 years in the networking and network security industry, I am still amazed that vendors get away with hyperbolic messaging in place of substantive, objective evidence of value. A fundamental obstacle to measuring an organization’s operational security effectiveness is the industry’s lack of metrics defining what “success” looks like. Simply put: how much time and how many resources should be spent on security, and how do you measure operational success?
Although IT security product spending has grown rapidly to nearly $30 billion per year, vendors have not been held accountable for justifying the cost of new security solutions. Security practitioners are left with expensive products that are not proven to help them achieve their mandate: to protect critical infrastructure effectively and to rapidly detect and respond to the intruders that get in. As a result, security practitioners generally feel overwhelmed and underappreciated, and that has to change.
One of the primary challenges for the security industry is the overwhelming and still-growing volume of alerts generated by incumbent security solutions (IDS, sandbox, SIEM, and others). That staggering volume wastes time and resources on triaging and researching what are predominantly false positives. Hiring out of the problem is not realistic: there is a worldwide shortage of roughly a million security professionals, and organizations with limited budgets cannot keep increasing staff linearly to match the growth in alerts.
How does a security operator even know where to start with that volume of alerts, especially when the large majority of them are false positives? According to the Ponemon study, two-thirds of security staff time is wasted on the gross inefficiency of their tools, and generally only 4% of all alerts can be investigated. It is likely that some of the 96% that go uninvestigated convey something important. Statistics like these would be completely unacceptable in other parts of the IT industry; even Major League Baseball would be appalled by such averages!
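The arithmetic behind that kind of coverage gap is easy to reproduce. The sketch below is purely illustrative; the alert volume, headcount, and triage times are hypothetical assumptions, not figures from the Ponemon study:

```python
# Hypothetical SOC triage math -- every input figure below is an
# illustrative assumption, not drawn from any study.
ALERTS_PER_DAY = 10_000        # assumed daily alert volume from IDS/SIEM/sandbox
ANALYSTS = 5                   # assumed triage headcount
MINUTES_PER_ALERT = 12         # assumed average time to triage one alert
SHIFT_MINUTES = 8 * 60         # one analyst shift

capacity = ANALYSTS * SHIFT_MINUTES // MINUTES_PER_ALERT  # alerts/day the team can touch
coverage = capacity / ALERTS_PER_DAY

print(f"Team can investigate {capacity} of {ALERTS_PER_DAY} alerts "
      f"({coverage:.0%}); {1 - coverage:.0%} go uninvestigated.")
```

With these assumed numbers, five analysts touch about 200 alerts a day, or 2% of the queue. No realistic hiring plan closes that gap linearly, which is precisely the point.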
The flood of alerts overwhelms security organizations, making it nearly impossible to spot anything that represents a real network attack. In short, the overwhelming majority of security tools purchased today focus on detecting evidence of malware based on some static definition of an attack: a signature, hash, domain, predetermined list of software behaviors, and so on. These systems have obvious operational benefits, but also serious shortcomings that must be addressed to achieve acceptable operational efficiency.
First, since the overwhelming majority of malware that is seen (in email, at the perimeter, and elsewhere) never actually “detonates” on a vulnerable host, it is not operationally relevant to the security practitioner. This enormous false positive problem creates “analysis paralysis” for the average security team and consumes cycles in triage and research.
Second, since these systems can inherently detect only “known” malware, they are unable to catch new attacks, new malware variants, and the infamous “zero-day” attacks. Given the growing volume of malware variants targeting individual organizations, this is an enormous security gap that exposes significant “false negative” risk.
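A minimal sketch of why exact-match detection is so brittle (the payload bytes and blocklist here are invented for illustration): flip a single byte of a known-bad payload and its hash changes completely, so a “known bad” lookup no longer fires even though the behavior may be identical.

```python
import hashlib

# Illustrative only: stand-in bytes for a known-bad payload.
known_payload = b"MZ\x90\x00...original malicious payload..."
blocklist = {hashlib.sha256(known_payload).hexdigest()}

def is_known_bad(payload: bytes) -> bool:
    """Exact-match 'known bad' check -- the model many signature tools rely on."""
    return hashlib.sha256(payload).hexdigest() in blocklist

variant = bytearray(known_payload)
variant[10] ^= 0xFF                  # attacker flips one byte; behavior can be unchanged

print(is_known_bad(known_payload))   # True  -- the original sample is caught
print(is_known_bad(bytes(variant)))  # False -- the trivial variant sails through
```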
Last, since these systems are built to identify malware (hashes, signatures, and so on) and its manifestations (file activity, C&C domains, and the like), they are fundamentally incapable of detecting attacks that don’t employ malware at all, such as insider attacks, credential-based attacks, or the stages of external attacks that operate without it. This is an enormous blind spot for security teams.
To realize operational efficiency in security operations that is meaningful and measurable, we as an industry must deliver tools that overcome these serious shortcomings. We need to focus on the two operational metrics that matter most to security operators: efficiency (the volume of alerts) and accuracy (the usefulness of alerts). We need systems that solve both the false positive and false negative problems and eliminate the blind spot around credential-based attacks. We need new systems that employ machine learning to complement the “known bad” models with “learned good” models that are not susceptible to the same accuracy and efficiency problems. Security vendors must step up, take responsibility for delivering products that demonstrate operational success, and publish operational metrics that substantiate those claims. The industry can no longer afford to hide behind marketing fluff and hyperbolic claims. As one CISO recently put it, “We need tools that can slap us across the face and tell us what’s going on. We don’t have time to go looking for security events.”
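To make the “learned good” idea concrete, here is a deliberately simplified sketch; the feature choice, thresholds, and data are my own assumptions, not any vendor’s actual model. Instead of enumerating bad artifacts, the system learns each user’s normal behavior from logs and alerts only on statistically unusual deviations, malware or not:

```python
import statistics

# Illustrative training data: hosts each user accessed per day over a
# learning period. In practice such features come from network/auth logs.
history = {
    "alice": [2, 3, 2, 2, 3, 2, 3],
    "bob":   [1, 1, 2, 1, 1, 1, 2],
}

# Learn a per-user baseline of "good" behavior (mean and spread).
baseline = {
    user: (statistics.mean(days), statistics.pstdev(days) or 1.0)
    for user, days in history.items()
}

def is_anomalous(user: str, hosts_today: int, sigma: float = 3.0) -> bool:
    """Flag behavior far outside the learned norm -- no malware signature needed."""
    mean, std = baseline[user]
    return abs(hosts_today - mean) > sigma * std

print(is_anomalous("alice", 3))   # False: within the learned normal range
print(is_anomalous("alice", 40))  # True: looks like credential misuse or lateral movement
```

The point is not the particular statistic but the inversion of the model: learning what normal looks like sidesteps both the false-positive flood of static definitions and the blind spot around malware-less, credential-based attacks.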
About the Author
David Thompson serves as Senior Director of Product Management for LightCyber, responsible for assessing customer and market requirements, sales and channel training and enablement, market education, and overall solution definition. He has been with LightCyber since late 2014. Mr. Thompson has over 15 years of experience focused on information security. Prior to joining LightCyber, he held product management leadership positions at OpenDNS, iPass, Websense, and Voltage Security (now HP). Before running product management at Voltage Security, he was a Program Director at META Group (now Gartner), responsible for security research topics including encryption, PKI, remote access, and secure network design. Mr. Thompson holds a Bachelor of Science in Physics from Yale University.