Call us Toll Free (USA): 1-833-844-9468     International: +1-603-280-4451 M-F 8am to 6pm EST
RSA Conference 2024: Exploring our Current Cybersecurity Realities Amidst AI Myths

RSA Conference 2024: Exploring our Current Cybersecurity Realities Amidst AI Myths

AI. Artificial Intelligence. One acronym, two words that seem to have reshaped the landscape of cybersecurity. At the 2024 RSA Conference, it was ubiquitous: stamped on almost every booth’s showcase, invoked in nearly 150 speaker sessions, and echoed in my many interviews with C-level executives. According to Vasu Jakkal, Corporate Vice President at Microsoft Security, it’s estimated that 93% of organizations have some AI usage [1], reflecting the booming market projected to reach $407 billion by 2027 [2]. But what truly constitutes AI, and how does it differ from the already adopted machine learning technology? Are companies genuinely utilizing AI as professed? And if so, how do we, cybersecurity professionals, protect and govern something that is evolving so fast? More concerningly, are attackers leveraging AI like we believe they might be? In both conversations with industry experts and attendance to keynote sessions highlighting current trends in security, I gained intriguing perspectives and observations, shedding light on our current cybersecurity realities amidst the pervasive use of AI.

Artificial intelligence represents the advanced technology that allows computers and machines to mimic human intelligence and solve intricate problems efficiently. This innovation, often integrated with tools like sensors and robotics, enables the performance of tasks traditionally requiring human thinking. From the widespread use of digital assistants to the precision of GPS navigation and the independence of self-driving cars, AI has manifested in numerous domains of our modern life. As AI continues to integrate into various industries, the conversation around ethical AI and responsible usage, and maintaining security becomes increasingly critical. [3].

Despite the widespread claims of AI adoption, many companies may not be utilizing true artificial intelligence but rather relying on machine learning techniques. While these terms are often used interchangeably, they represent different scopes and capabilities within the realm of advanced technology. As AI encompasses a broader scope of capabilities, machine learning operates as a subset within AI, focusing on the autonomous process of enabling machines to learn and improve from experience.

Rather than relying on explicit and hardcoded programming, machine learning utilizes algorithms to analyze lengthy datasets, extract insights, and then make informed decisions. As the machine learning model undergoes training with increasing volumes of data, its proficiency and effectiveness in decision-making progressively improve. While many companies harness the power of ML algorithms to optimize processes and drive insights, the utilization of true artificial intelligence is somewhat limited in adoption. Consequently, the threats and vulnerabilities associated with each differ significantly; machine learning systems are often susceptible to data poisoning and model inversion attacks, whereas AI systems face broader issues like hallucinations and adversarial attacks.

For instance, Jonathan Dambrot, CEO of Cranium, discussed how AI systems can “hallucinate,” generating inaccurate outputs or falling prey to prompt-based threats. He stresses the importance of balancing the drive to adopt AI with a thorough understanding of its security implications. Organizations, fearing obsolescence, rush to implement AI without fully considering these risks, thereby exposing themselves to potential threats.

Brandon Torio, an AI expert and Senior Product Manager at Synack, identifies prompt injection as the most pressing threat to AI today. He distinguishes between security content management and traditional cybersecurity, emphasizing that to mitigate these risks, organizations must adopt a proactive approach. Torio advocates for “shifting left” in the development process, meaning thorough pre-deployment testing to catch vulnerabilities early. He acknowledges AI’s benefits, such as making data more digestible and streamlining mundane tasks like simple script writing. However, he asserts the irreplaceable role of human oversight in contextualizing and interpreting AI-generated results.

In another conversation with John Fokker, Head of Threat Intelligence at Trellix, he noted that attackers are not leveraging AI as extensively as often portrayed or believed. He argues that while AI can assist attackers with tedious tasks like exploit development or creating deepfakes, it is not essential for most cybercriminal activities. “A human is more creative than a machine,” Fokker states, underscoring the continuing superiority of human ingenuity over AI in crafting sophisticated attacks.

After attending and experiencing RSA 2024, and having had the opportunity to interview industry experts, my concluding thoughts are this: to harness AI’s potential effectively, a balanced approach that includes proactive security measures and human oversight is crucial. While AI can, and does, offer unprecedented capabilities, it also introduces new vulnerabilities that require diligent attention. The industry’s leaders agree that a thorough understanding of AI’s limitations, coupled with thorough testing and ethical considerations, is essential to protecting our digital landscape and future digital endeavors. As we continue to integrate AI into our cybersecurity frameworks, it is imperative to remain vigilant and adaptable, ensuring that technological advancement does not outpace our ability to secure it.

About the Author

kylie-amison-authorKylie Amison is a proud alumnus of George Mason University where she obtained her Bachelor of Science degree in Cybersecurity Engineering with a minor in intelligence analysis.

She is working full time at a leading mobile security company as an Application Security Analyst where her main tasking involves pen-testing mobile applications, secure mobile application development, and contributing to exciting projects and important initiatives that are consistently highlighted throughout the security industry.

In addition, Kylie contributed to a startup company as a cybersecurity software developer where she was the lead developer on one of the company’s products; a geopolitical threat intelligence engine that combines a broad assortment of metrics and NLP sentiment analysis to calculate nuanced and real-time threat scores per nation state. Contributing to this initiative has been pivotal in her knowledge of creating secure software and has given her the opportunity to not only develop her first product, but to also start her own startup company, productizing the software and capabilities created in her threat intelligence engine. She is presently co-founder and CTO of Xenophon Analytics.

Throughout all of her experiences and coursework, she has gained essential skills in secure software development, penetration testing, mobile security and a plethora of coding languages. She has further aspirations of going back to school to get a graduate degree in the field of digital forensics and cybersecurity.

Beyond academics and professional life, Kylie enjoys watching anime, reading, and doing anything with nature involved. When asked her ultimate goal in life, she responded with “My goal in life is to learn every single day, and I am proud to be doing just that.”

References:

[1] RSAC 2024 keynote speaker session “Securing AI: What We’ve Learned and What Comes Next”

[2] https://www.forbes.com/advisor/business/ai-statistics/

[3] https://www.ibm.com/topics/artificial-intelligence

 

cyberdefensegenius - ai chatbot

12th Anniversary Top InfoSec Innovator & Black Unicorn Awards for 2024 are now Open! Finalists Notified Before BlackHat USA 2024...

X