Page 97 - Cyber Defense Magazine for August 2020
Attitude Adjustments
There is a lot of hype surrounding AI and the profession of data science. This hype, coupled with lax regulatory oversight, has led to a wild west of AI implementations that favor the kitchen sink over the scientific method.
A hype-driven sense of entitlement can sometimes lead to friction and resistance from front line AI
practitioners. We’ve found that some practitioners are unwilling or unable to understand that, despite their
best intentions, their AI systems can fail, discriminate, get hacked, or even worse. There’s not much to
say about this except that it’s time for the commercial practice of AI to mature and accept that with
increasing privilege comes increased responsibility. AI can, and is already starting to, cause serious
harm. As of today, compliance, legal, security, and risk functions in large organizations may have to make
manual attitude adjustments and insist that AI groups are subject to the same level of oversight as other
IT groups, including incident response planning for AI attacks and failures.
Don’t Deploy AI Without an Incident Response Plan
The final takeaway? AI is not magic, which means organizations can and should govern it. If AI is the
transformative technology it is hyped to be (and we do believe it is), then deploying AI with no incident
response plans is a recipe for disaster. After all, we don’t fly commercial jetliners without detailed plans
for systems failures; we don’t run nuclear reactors without emergency plans; if the activity is important to
us, we think and plan in advance about its risks.
And that means we also need to be prepared for AI to fail. Having an AI incident response plan in place
can be the difference between an easily manageable deviation in AI system behavior and serious AI-
driven harm and liabilities.
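As a purely illustrative sketch of how a plan might turn "deviation in AI system behavior" into a concrete, monitorable trigger, consider a simple drift check on model output. The function name, scores, and threshold below are hypothetical assumptions, not a prescribed method; real plans would use richer statistics and team-specific escalation paths.

```python
# Illustrative sketch only: codifying "deviation in AI system behavior"
# as a monitorable incident trigger. All names and the 0.15 threshold
# are hypothetical assumptions for this example.

from statistics import mean

def deviation_alert(baseline_scores, live_scores, threshold=0.15):
    """Flag an incident when mean model output drifts from baseline.

    baseline_scores: scores observed during validation (trusted behavior)
    live_scores:     scores observed in production
    threshold:       maximum tolerated absolute drift (assumed value)
    Returns (alert, drift): whether to trigger the plan, and the drift size.
    """
    drift = abs(mean(live_scores) - mean(baseline_scores))
    return drift > threshold, drift

# A stable system stays below the threshold; a drifting one trips the
# alert, which would page the responders named in the incident plan.
alert, drift = deviation_alert([0.50, 0.52, 0.48], [0.51, 0.49, 0.50])
print(alert)   # False: behavior within tolerance
alert, drift = deviation_alert([0.50, 0.52, 0.48], [0.90, 0.88, 0.92])
print(alert)   # True: drift exceeds threshold, invoke the response plan
```

The point of even a toy check like this is that the response plan names a measurable condition in advance, rather than leaving "something looks wrong" to ad-hoc judgment during a live incident.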
Copyright © 2020, Cyber Defense Magazine. All rights reserved worldwide.