Page 54 - Cyber Defense eMagazine February 2024
standards and practices relating to the authentication, labelling, testing and detection of synthetic content, and developing guidance around techniques such as watermarking, are good first steps towards mitigating problems like bias, data hallucination and misuse of data.
As a result of the order’s many provisions, testing of language models against multiple frameworks to
ensure compliance will see a boost. Typically, software integration and algorithm testing are outsourced
to system integrators (SIs) such as TCS, Infosys and Wipro. These players are therefore likely to come up with dedicated solutions and toolkits for such workloads.
Another area that can see a surge is LM-Ops (language model operations) tooling within generative AI. Prompts sent to tools like ChatGPT must adhere to content safety regulations and need to be flagged when an issue such as bias or harmful language is detected. Prompt optimization is therefore a critical area, and with generative AI's rapid development, the new role of prompt engineer is gaining importance day by day.
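As a minimal sketch of the flagging step described above: a hypothetical blocklist-based screen that checks a prompt before it reaches the model. Real LM-Ops tooling uses trained classifiers and policy engines rather than regex patterns; the patterns and function name here are illustrative assumptions.

```python
import re

# Hypothetical blocklist a prompt filter might check; production
# content-safety systems use trained classifiers, not regexes.
FLAGGED_PATTERNS = [
    r"\bhate\b",
    r"\bviolence\b",
    r"\bself[- ]harm\b",
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a user prompt.

    The prompt is flagged when any blocklisted pattern appears,
    mirroring the 'flag on discrepancy' step described above.
    """
    matches = [p for p in FLAGGED_PATTERNS
               if re.search(p, prompt, re.IGNORECASE)]
    return (len(matches) == 0, matches)

allowed, hits = screen_prompt("Summarize this article on violence in media")
print(allowed, hits)
```

In practice a flagged prompt would be routed to a human reviewer or rejected with an explanation, rather than silently dropped.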
Similarly, data annotation and data labelling are also likely to get a boost. Transparency in the development and use of AI requires clean data sets - the quality of the output is only as good as the data the model is trained on. Hence, technical capabilities that are precursors to developing an AI model are key. For
example, Google used Snorkel AI to replace 100K+ hand-annotated labels in critical ML pipelines for text
classification, leading to a 52% performance improvement.
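The programmatic-labeling idea behind tools like Snorkel can be sketched as a set of heuristic labeling functions combined by majority vote. This toy version is an illustration of the concept only, not Snorkel's actual API; the labels and heuristics are invented for the example.

```python
from collections import Counter

ABSTAIN = -1
HAM, SPAM = 0, 1

# Toy labeling functions: each votes SPAM/HAM or abstains.
# These heuristics are illustrative, not Snorkel's actual API.
def lf_has_link(text: str) -> int:
    return SPAM if "http" in text else ABSTAIN

def lf_shouting(text: str) -> int:
    return SPAM if text.isupper() else ABSTAIN

def lf_greeting(text: str) -> int:
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

def weak_label(text: str) -> int:
    """Majority vote over the non-abstaining labeling functions."""
    votes = [lf(text) for lf in (lf_has_link, lf_shouting, lf_greeting)]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

print(weak_label("CLICK NOW http://x.example"))  # → 1 (SPAM)
```

Systems like Snorkel go further by learning each function's accuracy and weighting votes accordingly, which is what lets noisy heuristics replace large volumes of hand annotation.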
With the EO’s aim to promote the safe, secure, and trustworthy use and development of AI, the role of regulation takes center stage, shaping a future where companies large and small can profit from AI while minimizing its unintended consequences.
Market Dynamics: How the AI Order Affects Players
All businesses that use AI will be impacted by the executive order, but the impact is not binary; it depends on a company's technological investment in AI and the complexity of its workloads.
It’s a no-brainer that AI adoption requires large investments, and large enterprises are well-positioned to
make them. They have the capital to undertake core AI development initiatives like building custom AI
models the way Meta and Google did with LLaMA and Bard. Once the regulations come into effect, their offerings will need to comply with the set standards.
SMBs, on the other hand, might not have the same monetary capacity to commit to complex technology projects. This disadvantage is compounded by the fact that SMBs are a prime target for cyberattacks, and generative AI has a plethora of vulnerabilities that expose them to attack, heightening their cybersecurity concerns. For SMBs, simple workloads, such as deploying a customer support chatbot, are more feasible. Once the regulations are in effect, SMBs can integrate regulation-compliant products and offerings into their workflows and reap the benefits that AI brings. In parallel, they can come up with LM-Ops solutions and dedicated toolkits the way small-scale ISVs do and expand their offerings.
Copyright © 2024, Cyber Defense Magazine. All rights reserved worldwide.