Page 46 - Cyber Defense eMagazine July 2024
What should a CISO do?
Virtually every conference, webinar, and local cybersecurity chapter meeting that I’ve personally attended
this year has echoed the same themes. CISOs are struggling with enabling the business and end users
to benefit from AI tools, while mitigating potential risks. Shadow IT is nothing new. However, preventing
end users from accessing AI is particularly challenging due to the ubiquitous nature of the technology. It
seems like every vendor is rushing to put AI into every software product, platform, and hardware device. Other
CISOs are approaching the problem from a data classification point of view: if they can identify
their most critical data and ensure it is never processed by an AI model, they can prevent most of the
harm. I'm skeptical that this is achievable at scale.
So, what then should a CISO do? I return to the fundamentals. The back-to-basics approach is to begin
with end-user security awareness and training specifically tailored to AI. Many end users do not truly
understand what AI is or how it works, yet they use it daily. Educating and empowering your
front-line workers to exercise caution around AI will likely yield the best results. Creating a culture where
security is enabling innovation, rather than outright blocking tools via policy mandate, is essential. I’ve
seen great success with my partners who institute weekly lunch-and-learn sessions specifically for AI use
cases. One CISO told me that attendance has dramatically increased since AI became a
regular topic. There is a real demand for learning about these tools, because as I mentioned earlier, no
one wants to be left behind.
Below, I revisit the core aspects of the AI dilemma discussed earlier, with recommendations and
best practices to mitigate the potential harm.
Privacy and Security – Revisited
Incorporating privacy and security principles right from the design phase is imperative to minimize
potential compromise of data and systems with the goal of increasing trust, transparency, and safety.
Consider these practices:
1. Security controls: implement strong security principles (such as encryption, incident response,
access control, network and endpoint security) to prevent unauthorized access, misuse of data,
and tampering of data and systems.
2. Audits and transparency: regular audits of AI systems for bias, safety and effectiveness, and clear
privacy controls must be publicly shared for true accountability and transparency.
3. Data minimization: only collect data that is necessary for functionality.
4. Purpose limitation: use data only for the specified purposes.
5. Data anonymization: remove identifiable information from data; note, however, that some AI
systems are able to re-identify de-identified data.
6. Informed consent: clearly explain how data is used, whether it is shared with third parties, and
provide updates when changes in data use and collection occur.
7. Individual rights: allow users the ability to select and modify their preferences, opt-out of certain
processes, and delete their data.
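As one concrete illustration of practices 3 and 5 (data minimization and anonymization), a simple pre-processing step can redact obvious identifiers before a prompt ever leaves the organization for an external AI service. The sketch below is a minimal, illustrative example only; the regex patterns and placeholder tags are my own assumptions, and a production system should rely on a vetted PII-detection tool rather than hand-rolled expressions:

```python
import re

# Illustrative patterns only; real deployments should use a vetted
# PII-detection library, not hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags so the
    redacted text, not the original, is sent to the AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(redact(prompt))  # Contact Jane at [EMAIL] or [PHONE].
```

A gateway like this enforces data minimization by default: end users keep the productivity benefits of the tool, while the organization's identifiers never reach the model, which also limits the re-identification risk noted in practice 5.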
Copyright © 2024, Cyber Defense Magazine. All rights reserved worldwide.