Promised Rewards of AI
Large language models (LLMs) like ChatGPT have introduced AI to the masses and have been rapidly
adopted by employees, students, and consumers alike to ease everyday tasks. ChatGPT and other
generative AI-powered chatbots let people ask questions in plain language, so they are simple to use
and require little technical skill.
Beyond ease of use, the technology promises:
• Increased productivity: With LLMs helping workers do everything from writing code faster to
digesting large volumes of research in minutes, employees are free to focus on more strategic
work instead of time-intensive tasks.
• Operational efficiencies: By embedding generative AI tools like chatbots into software or
processes, organizations can quickly surface the information they need to execute tasks or
automate processes altogether – significantly streamlining workflows.
• Cost savings: When time is saved and employees are more productive, organizations can
reduce spend while also increasing profit.
Potential Risks of AI
While glamorous at first glance, AI is not without pitfalls. Understanding these pitfalls – and the
risks they pose to businesses – can help prevent misuse, bias, security breaches, and more. A few risks
organizations should be aware of:
• Data sharing: To get answers relevant to a specific business or function, an organization first
needs to give the LLM its data. For example, if you want ChatGPT to write a summary of a
meeting, you have to share the meeting transcript (see the sketch after this list). If proprietary
information is shared with an LLM, the provider may retain it and, depending on its terms, use it
to train future models – meaning it could surface for other users. These terms are typically
outlined in an End User License Agreement, which many users accept without much thought.
• Bias or inaccurate answers: Answers provided by LLMs can be biased, inaccurate, or fabricated
outright. Understanding where the model gets its information and reviewing its output with a
critical eye before using it can help catch these problems before they cause harm.
• Copyright infringement: It is also important to remember that content produced entirely by
generative AI is generally not eligible for copyright protection – effectively placing it in the public
domain – and generated output can also echo copyrighted material from the model's training data.
Putting your name on something produced by ChatGPT could therefore lead to copyright troubles
down the line.
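To make the data-sharing risk concrete, below is a minimal sketch of the meeting-summary example in Python. It assumes the OpenAI Python SDK (openai>=1.0) and uses an illustrative model name and prompt; the point is that everything in the transcript leaves the organization the moment the request is sent.

```python
# Minimal sketch: sending a meeting transcript to a hosted LLM for summarization.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_meeting(transcript: str) -> str:
    """Return a short summary of a meeting transcript.

    Note: the entire transcript is transmitted to the LLM provider. Depending
    on the provider's terms and settings, it may be retained or used to improve
    future models, so redact proprietary or personal data before calling this.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarize the meeting transcript in five bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_meeting("Alice: Q3 revenue is up 12%. Bob: Keep the pricing details confidential."))
```

In this sketch, any confidential detail in the transcript (such as the pricing remark) is sent to a third party in full, which is exactly why reviewing the provider's data-retention terms matters before wiring an LLM into a workflow.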