Page 127 - Cyber Defense eMagazine August 2023
Why compliance standards are important for AI
When people discuss their concerns about artificial intelligence, most cite the loss of jobs or the spread
of false information. However, more attention should be paid to how the use of AI can endanger
cybersecurity and privacy. After all, AI models can rapidly process, store, and, perhaps more frighteningly,
learn from massive amounts of information. This means that if hackers gain access, they have enormous
amounts of data available to exploit for their own gain.
Compliance standards in the AI industry ensure that AI developers put the right protections in place to
minimize or eliminate the risk to the data the algorithm is processing and storing. Some measures that
should be standard include a legal obligation not to sell, rent, or share data with third parties, as well as
ensuring that all regulatory requirements for data protection are met or exceeded.
What compliance standards are needed for AI
One of the most important considerations in the use of AI is user consent. From the user's end, it is
important to read the terms of use and understand what is being consented to. From the operator's end,
consent must be made clear and manageable: users should be able to track what they have consented to
with intuitive tools and to completely delete their data. This is necessary not only for accountability, but
also for protection, ensuring that users are informed of potential risks. It is especially vital for financial
companies, whose user data is particularly sensitive.
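As a rough illustration, a consent store along these lines would let users inspect exactly what they have agreed to and request full deletion. This is a minimal sketch, not any particular vendor's implementation; the class and method names (`ConsentRegistry`, `grant`, `view`, `delete_all`) are assumptions chosen for clarity.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    purpose: str       # what the user agreed to, e.g. "fraud screening"
    granted_at: str    # ISO-8601 timestamp of the grant


class ConsentRegistry:
    """Hypothetical consent store: tracks per-user consent and
    supports complete deletion of consent records and user data."""

    def __init__(self):
        self._consents = {}   # user_id -> list[ConsentRecord]
        self._user_data = {}  # user_id -> whatever data the AI system holds

    def grant(self, user_id: str, purpose: str) -> None:
        # Record the consent with a timestamp for accountability.
        record = ConsentRecord(purpose, datetime.now(timezone.utc).isoformat())
        self._consents.setdefault(user_id, []).append(record)

    def view(self, user_id: str) -> list:
        # Let users track their own consent history.
        return list(self._consents.get(user_id, []))

    def delete_all(self, user_id: str) -> None:
        # Honor a full deletion request: remove consent records AND data.
        self._consents.pop(user_id, None)
        self._user_data.pop(user_id, None)
```

In practice a deletion request would also need to propagate to backups and any downstream systems, but the core contract is the same: what the user can see, the user can remove.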
Companies that incorporate AI into their practices while handling financial data should also implement
stringent cybersecurity standards. Bank-level security can ensure that systems and data are fully
encrypted and protected, and any sensitive data stored in the system should have restricted access,
granted only to authorized and verified users with a legitimate reason to view or use it.
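The access rule described above can be captured in a single check: the requester must hold an authorized role and must state a legitimate reason, which is then logged. This is a simplified sketch under those assumptions; the role names and function are hypothetical, and a production system would verify identity against a real directory service.

```python
# Hypothetical set of roles allowed to touch sensitive financial data.
AUTHORIZED_ROLES = {"fraud_analyst", "compliance_officer"}

ACCESS_LOG = []  # audit trail of granted requests


def can_access_sensitive_data(user_role: str, reason: str) -> bool:
    """Grant access only to verified roles that state a non-empty reason.

    Both conditions must hold: an authorized role alone is not enough,
    and a reason from an unauthorized role is not enough.
    """
    allowed = user_role in AUTHORIZED_ROLES and bool(reason.strip())
    if allowed:
        # Record who accessed the data and why, for later audit.
        ACCESS_LOG.append((user_role, reason.strip()))
    return allowed
```

Requiring a stated reason at request time is what makes the audit trail useful: reviewers can later match each access against a business justification.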
Additionally, it’s important to remember that cybersecurity is about being proactive. Entities employing AI
that want to protect their data should pursue penetration and vulnerability testing from a professional
service. Penetration testing exposes the weaknesses of a program and its cybersecurity measures before
wrongdoers can exploit them, so fixes can be implemented to protect the data.
Still, there are certain types of data that users should avoid inputting into AI programs, and that the entities
behind AI programs should avoid collecting and storing, regardless of how strong the system might seem.
If an AI program contains user data that is typically valuable to wrongdoers, such as card payment
information or usernames and passwords for banking accounts, it is more likely to be targeted and
therefore far more susceptible to data breaches. After all, the best protection against an attack is to
prevent it from ever happening in the first place.
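One practical way to apply this principle is to screen text for likely payment-card numbers before it ever reaches an AI system. The sketch below is a deliberately simple illustration, not a complete data-loss-prevention solution: the regular expression and the `redact` function are assumptions, and real deployments typically combine pattern matching with checksum validation and broader detectors.

```python
import re

# Rough pattern for 13-16 digit card numbers, allowing spaces or hyphens
# between digits (e.g. "4111 1111 1111 1111").
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def redact(text: str) -> str:
    """Replace likely card numbers so they are never stored or learned."""
    return CARD_RE.sub("[REDACTED]", text)
```

Filtering at the point of input means the sensitive value never enters the model's storage or training pipeline at all, which is exactly the kind of prevention the paragraph above describes.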
The truth is that, like any other tool they use, businesses will be held accountable for the risks created by
their use of artificial intelligence. That isn’t to say that businesses should not implement AI, as it is a
powerful tool with numerous exciting implications, but it is vital that companies use this technology
responsibly.
Copyright © 2023, Cyber Defense Magazine. All rights reserved worldwide.