What is AI addiction?
The risk of AI addiction stems from the technology’s ability to discern what users want and serve it up in
more compelling ways. For example, AI is being used in the world of e-commerce to provide hyper-
personalized shopping experiences. AI can quickly determine what shoppers are willing to spend money on and deliver a virtually endless stream of deals tied to their past purchases.
The same AI-powered personalization leveraged in the world of e-commerce can be applied to other
platforms as well. Streaming platforms can use it to keep viewers locked into certain types of content. Gaming platforms can use AI to nudge players into deeper engagement through adaptive difficulty and other personalization strategies. While visions of humanity enslaved by AI may be a bit dystopian, experts are clearly warning that AI can lead us into unhealthy patterns of behavior.
One of the chief concerns is that we are becoming too dependent on AI. We count on it to do an ever-
growing number of tasks, from driving our decision-making to driving our cars. As over-reliance moves
toward total reliance, the potential grows that we will lose perspective, wisdom, and the ability to exercise valuable and essential skills ourselves.
AI growth and cybersecurity concerns
As the use of AI grows, the potential for abuse also grows. AI is essentially built on data collection and
processing, which means it can be compromised by cyberattacks. If even properly functioning AI has come to be seen as dangerous, AI corrupted by bad actors is more dangerous still.
For example, bad actors who gain access to AI training data can corrupt it, leading AI to behave in
unintended ways. Imagine AI designed to direct a self-driving car being fed poisoned data that causes it
to misidentify traffic signs and signals. The consequences could be devastating.
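To make the threat concrete, here is a minimal sketch of a label-flipping attack on a hypothetical traffic-sign training set. The dataset, class names, and flip rate are all illustrative assumptions, not drawn from any real incident; the point is simply how little tampering it takes to corrupt what a model learns.

```python
import random

# Hypothetical labeled training set of (image_id, label) pairs; in a real
# pipeline these would be image tensors, so IDs stand in for illustration.
clean_data = (
    [(f"img_{i:04d}", "stop_sign") for i in range(500)]
    + [(f"img_{i:04d}", "speed_limit_80") for i in range(500, 1000)]
)

def poison_labels(dataset, source_label, target_label, flip_rate=0.05, seed=0):
    """Flip a small fraction of source_label examples to target_label.

    Even a low flip rate can teach a model to confuse the two classes,
    which is the core of a label-flipping poisoning attack.
    """
    rng = random.Random(seed)
    poisoned = []
    for image_id, label in dataset:
        if label == source_label and rng.random() < flip_rate:
            poisoned.append((image_id, target_label))  # silently corrupted example
        else:
            poisoned.append((image_id, label))
    return poisoned

poisoned_data = poison_labels(clean_data, "stop_sign", "speed_limit_80")
flipped = sum(1 for a, b in zip(clean_data, poisoned_data) if a[1] != b[1])
print(f"{flipped} of {len(clean_data)} labels corrupted")
```

A model trained on the corrupted set would still score well on most examples, which is exactly why attacks like this are easy to miss without deliberate data auditing.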
Data breaches that lead to AI bias are another danger. AI is already being used to guide decision-making
in sensitive areas, such as medical diagnosis. Bias that leads to misdiagnosis or misguided treatment
recommendations puts lives at risk.
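One way such bias can be surfaced is by comparing a model's accuracy across patient groups. The sketch below assumes a hypothetical diagnostic model whose predictions and group labels have already been collected; the groups and records are invented purely for illustration.

```python
from collections import defaultdict

# (patient_group, true_diagnosis, model_prediction) -- illustrative values only
records = [
    ("group_a", "positive", "positive"),
    ("group_a", "negative", "negative"),
    ("group_b", "positive", "negative"),
    ("group_b", "positive", "negative"),
    ("group_b", "negative", "negative"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy {accuracy:.0%} over {total[group]} cases")
# A large gap between groups signals that the model, or the data it was
# trained on, may be biased and needs review before clinical use.
```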
Promoting responsible AI use
Protecting against the misuse of AI begins with weaving ethical considerations into its development and
use. Developers must think through potential dangers and design AI tools in ways that address them. Because data is the foundation of AI learning, training data should be carefully sourced and protected.
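One simple protection along these lines is an integrity check before training. The sketch below, with hypothetical file names and digests, recomputes SHA-256 hashes of training files and compares them to the digests recorded when the data was originally vetted, flagging anything that has changed since approval.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of training files and the SHA-256 digests recorded
# when the data was vetted; the paths and digest values are illustrative.
TRUSTED_DIGESTS = {
    "training/images_batch_01.tar": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    "training/labels_batch_01.csv": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_training_data(manifest):
    """Recompute each file's SHA-256 and compare it to the vetted digest.

    Any mismatch means the data changed after it was approved and should be
    quarantined before it reaches the training pipeline.
    """
    tampered = []
    for relative_path, expected in manifest.items():
        path = Path(relative_path)
        if not path.exists():
            tampered.append((relative_path, "missing"))
            continue
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        if actual != expected:
            tampered.append((relative_path, "digest mismatch"))
    return tampered

if __name__ == "__main__":
    for name, reason in verify_training_data(TRUSTED_DIGESTS):
        print(f"Do not train on {name}: {reason}")
```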
Bringing a multitude of perspectives to the development of AI is also important. AI is a tech tool, but its
impact is felt far beyond the tech world. Development teams that include people from diverse backgrounds and disciplines bring a deeper understanding of the dangers that must be addressed.