OpenAI Exec Warns AI is “Extremely Addictive,” Humanity Could Become “Enslaved”

By Sai Mattapalli and Rohan Kalahasty, Co-Founders — Vytal.ai

The idea of technology going wrong and turning on its creators is not new. More than 200 years ago, Mary Shelley teased out what that could look like when she created Frankenstein’s monster.

Recent reports suggest that artificial intelligence — which is weaving its way into virtually every aspect of our culture — may be the next technological monster society must contend with. According to Mira Murati, CTO at generative AI giant OpenAI, those reports may not be far off.

During a recent interview, Murati warned that the misapplication of AI could have tragic consequences. She pointed to the possibility that AI could be designed “in the wrong way,” leading to it becoming “extremely addictive” and users becoming “enslaved.”

Coming from Murati, the comment is significant. She has been dubbed “the most powerful woman in AI.” The company she works for, OpenAI, is a global leader in the field of generative AI. If she points to risks that must be addressed, the tech world should take notice.

What is AI addiction?

The risk of AI addiction stems from the technology’s ability to discern what users want and serve it up in ever more compelling ways. For example, AI is being used in e-commerce to provide hyper-personalized shopping experiences: it can quickly determine what shoppers are willing to spend on and deliver a virtually endless stream of deals tied to their past purchases.

The same AI-powered personalization leveraged in e-commerce can be applied to other platforms as well. Streaming services can use it to keep viewers locked into certain types of content. Gaming platforms can use AI to nudge players into deeper engagement through adaptive difficulty and other personalization strategies. While visions of humanity enslaved by AI may be a bit dystopian, the idea that AI can lead us into unhealthy patterns of behavior is one experts are clearly communicating.
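
To make the mechanism concrete, here is a minimal sketch in Python of the kind of personalization loop described above. The items, tags, and scoring rule are all hypothetical, and real recommender systems are far more sophisticated, but the engagement pattern is the same: rank whatever matches past behavior highest and keep refilling the feed.

```python
# Minimal sketch (hypothetical data and scoring) of a personalization loop:
# rank candidate deals by how closely they match a shopper's past purchases,
# then keep refilling the feed so there is always one more relevant offer.
from collections import Counter

past_purchases = ["running shoes", "fitness tracker", "protein powder"]
candidate_deals = [
    {"item": "trail shoes", "tags": ["running shoes", "outdoor"]},
    {"item": "smart scale", "tags": ["fitness tracker", "health"]},
    {"item": "office chair", "tags": ["furniture"]},
    {"item": "energy gels", "tags": ["protein powder", "running shoes"]},
]

def personalization_score(deal, history):
    """Count how many of the deal's tags overlap with past purchases."""
    history_counts = Counter(history)
    return sum(history_counts[tag] for tag in deal["tags"])

def next_batch(deals, history, batch_size=2):
    """Return the highest-scoring deals; the 'virtually endless' feed simply
    calls this again whenever the user scrolls to the bottom."""
    ranked = sorted(deals, key=lambda d: personalization_score(d, history), reverse=True)
    return ranked[:batch_size]

print(next_batch(candidate_deals, past_purchases))
# Energy gels and trail shoes surface first; the office chair never does.
```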

One of the chief concerns is that we are becoming too dependent on AI. We count on it for an ever-growing number of tasks, from driving our decision-making to driving our cars. As over-reliance edges toward total reliance, the potential grows for losing perspective, wisdom, and the ability to perform valuable and essential skills.

AI growth and cybersecurity concerns

As the use of AI grows, so does the potential for abuse. AI is built on data collection and processing, which means it can be compromised by cyberattacks. If even well-designed AI is now seen as carrying risks, AI that has been corrupted by bad actors is more dangerous still.

For example, bad actors who gain access to AI training data can corrupt it, leading AI to behave in unintended ways. Imagine AI designed to direct a self-driving car being fed poisoned data that causes it to misidentify traffic signs and signals. The consequences could be devastating.
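
A toy example helps show why poisoned training data is so dangerous. The sketch below uses entirely made-up feature values and a deliberately simple nearest-centroid classifier, nothing resembling a production perception system; it only illustrates how flipping labels in the training set can cause a model to misread a stop sign.

```python
# Toy illustration (hypothetical data) of training-data poisoning:
# flipping labels shifts a simple nearest-centroid classifier so that
# it starts misreading stop signs as speed-limit signs.

def centroid(points):
    """Average each feature across a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def predict(x, centroids):
    """Assign x to the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Features: (redness, octagon-ness), both on a 0-1 scale.
clean_data = {
    "stop":        [[0.90, 0.95], [0.85, 0.90], [0.92, 0.88], [0.88, 0.93]],
    "speed_limit": [[0.10, 0.05], [0.15, 0.10], [0.12, 0.08], [0.08, 0.12]],
}

# An attacker with access to the training set flips the labels, so
# stop-sign examples are trained as "speed_limit" and vice versa.
poisoned_data = {
    "stop": clean_data["speed_limit"],
    "speed_limit": clean_data["stop"],
}

new_stop_sign = [0.90, 0.90]
for name, data in [("clean", clean_data), ("poisoned", poisoned_data)]:
    centroids = {label: centroid(points) for label, points in data.items()}
    print(f"{name} model sees a fresh stop sign as: {predict(new_stop_sign, centroids)}")
# clean model sees a fresh stop sign as: stop
# poisoned model sees a fresh stop sign as: speed_limit
```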

Data breaches that lead to AI bias are another danger. AI is already being used to guide decision-making in sensitive areas, such as medical diagnosis. Bias that leads to misdiagnosis or misguided treatment recommendations puts lives at risk.

Promoting responsible AI use

Protecting against the misuse of AI begins with weaving ethical considerations into its development and use. Developers must think through potential dangers and design AI tools in ways that address them. Because data is the foundation of AI learning, training data should be carefully sourced and protected.
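
One small, concrete safeguard in that spirit is to fingerprint the approved training data and refuse to train if any file has changed. The Python sketch below assumes a hypothetical training_data directory and a manifest.json file; it is a starting point for data integrity, not a complete data-governance program.

```python
# Sketch of a training-data integrity check (hypothetical paths): record a
# SHA-256 fingerprint for every file when the dataset is approved, and refuse
# to train if any file has changed since then.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a hash for every file in the approved training set."""
    manifest = {str(p): fingerprint(p) for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the files whose contents no longer match the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [
        path for path, digest in manifest.items()
        if not Path(path).exists() or fingerprint(Path(path)) != digest
    ]

if __name__ == "__main__":
    # Hypothetical layout: ./training_data holds the approved dataset.
    build_manifest(Path("training_data"), Path("manifest.json"))
    tampered = verify_manifest(Path("manifest.json"))
    if tampered:
        raise SystemExit(f"Refusing to train; files changed: {tampered}")
```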

Bringing a multitude of perspectives to the development of AI is also important. AI is a tech tool, but its impact is felt far beyond the tech world. Development teams that include those with diverse backgrounds and disciplines contribute to a deeper understanding of dangers that must be addressed.

Ethics also come into play for those who use AI. Businesses should develop policies to govern AI use. All employees who engage with AI should receive training on how to use it ethically and how to identify situations in which misuse is occurring.

As the potential for AI abuse becomes more evident, the need for accountability grows. Who is responsible for defining what is acceptable when it comes to AI? Who will blow the whistle when development gets off track? Who will keep us from becoming enslaved? Those are just some of the big questions that must be answered, as we consider the new wave of warnings experts are issuing regarding AI.

About the Authors

Rohan Kalahasty is a co-founder of Vytal.ai. He has been a researcher at the Harvard Ophthalmology AI Lab for three years, where he led and incubated NeurOS and worked on projects that apply mathematical, statistical, and artificial intelligence modeling to improve eye disease diagnosis. He also served as an incubation lead at Roivant Sciences, leading the early stages of a potential new vant, and conducted research at the Center for Brains, Minds, and Machines at MIT, where he studied human intelligence and memory using artificial intelligence. Through this work he developed deep insight into the intersection of AI and medicine and the creation of digital biomarkers, insight that proved crucial to the development of Vytal. He is a senior at Thomas Jefferson High School for Science and Technology.

Sai Mattapalli is a co-founder of Vytal.ai. With a deep background in neuroscience, Sai was a research intern at the Neurophysiology Lab at Georgetown University, where he built behavioral paradigms to test auditory learning in zebrafish. He also serves as an intern at the Center for Brain Circuit Therapeutics at Harvard Medical School, where he works on mapping the brain networks involved when seizure patients experience eye versions. Later pivoting to entrepreneurship and finance, Sai served as a business growth intern at Quantbase (YC W23) and was part of DMV’s Finest, the winning team of the Wharton Investment Competition.

Sai Mattapalli and Rohan Kalahasty can be reached through their company website https://vytal.ai/
