Page 139 - Cyber Defense eMagazine December 2023
research into continuous monitoring and verification tools, which will rely on further development
of artificial intelligence (AI) tools that can be deployed at partially observable and often vulnerable
CPHS interfaces. These tools must be engineered to learn across changing threat
landscapes and to help trigger automated responses in highly dynamic and unpredictable
situations. Additionally, insights derived from systematic study of human behavior and incentives
for engineered systems will be crucial to better understand the human oversight aspect of security
monitoring, adaptation, and verification.
3. Future Approaches to Autonomous Security: The sheer size and scale of cyber threats will require
much greater use and deployment of AI and machine learning capabilities to monitor and quickly
synthesize massive amounts of data and help determine when CPHS are at risk. R&D must
continually emphasize integration of the most cutting-edge AI into safety and security processes,
with a special focus on developing AI with contextual awareness in as humanlike a way as possible,
while ensuring trustworthiness of automated decisions and response capabilities. Crucially,
research is needed to develop effective mechanisms for human operators and decision-making
loops (or feedback processes) to interact with and derive the most value from AI, again tapping
into the CPHS framework to integrate knowledge about human behavior in threat modeling and
coordinated, risk-aware, response mechanisms that satisfy physical constraints.
4. New Approaches to Resilience in Interdependent Infrastructures: CPHS tightly couple continuous
physical dynamics with networked computer processes, which means adversaries can exploit a
weakness in one area to wreak far wider damage. Strategic coordination among different
systems, organizations, and industries is therefore critical for addressing insecurities arising due
to correlated software bugs, hardware malfunctions, and network interdependencies. In addition
to technical research, mechanisms for coordination between government and for-profit agents
must be established, since the latter control access to the critical industries that are
integrated into common CPHS. There is a clear need for a limited-liability framework (i.e., a due-care
standard) and compliance mechanisms for processes such as data sharing and analysis as well
as the knowledge base for security tactics and active defense strategies.
5. Architecting Trustworthy Systems: Systems and security processes must be, above all,
trustworthy. But what does this mean? In the context of engineered infrastructure, trustworthy
refers to system correctness and security according to a well-defined design specification. Hence
R&D in this space should focus first on design specification and defining correct behavior in
complex infrastructures (which include many interconnected sub-infrastructures and processes
that can range from centralized command to fully decentralized operations). The goal is to design
trustworthy systems that can withstand attacks (or unanticipated uses of technology) that fall
within a system's specifications. These systems have untrusted inputs and interfaces that can be
tackled by confidential computing techniques and trustworthy architectures. Engineering research
should tackle these key issues to address the vulnerabilities we see today, which arise from
ill-defined specifications, brittle control loops, and poorly understood interdependencies.
The ERVA report elucidates proactive research directions and areas of focus for those who oversee cybersecurity
within each of these five research areas. But I would emphasize two key points above all: (1) collaboration
across industries and sectors (including academia and government agencies) is essential, as the
Copyright © 2023, Cyber Defense Magazine. All rights reserved worldwide.