Page 166 - Cyber Defense eMagazine October 2023

cybercrime underground, offering to lower the barrier to entry for budding hackers. Their developers claim these and other tools can help write malware, create hacking tools, find vulnerabilities, and craft grammatically convincing phishing emails.

            But developers could also be handing threat actors an open goal, while undermining the reputation and
            bottom line of their employer, if they place too much faith in LLMs and lack the skills to check their output.
            Research backs this up. One study from Stanford University claims that participants who had access to
            an AI assistant were “more likely to introduce security vulnerabilities for the majority of programming
            tasks, yet also more likely to rate their insecure answers as secure compared to those in our control
            group.”

A separate University of Quebec study is similarly unequivocal, reporting that "ChatGPT frequently produces insecure code." Only five of the 21 cases the researchers investigated produced secure code initially. Even when the AI was explicitly asked to correct the code, it did so in only seven further cases. These aren't good odds for any developer team.

There are also concerns that AI could create new risks like "hallucination squatting." This was recently demonstrated when researchers asked ChatGPT to recommend existing open source libraries, and the tool came back with three that didn't exist. Hallucinations of this sort are common with current generative AI models. However, the researchers posited that, if a hacker did the same probing, they could create an actual open-source project with the same name as a hallucinated response, directing unwitting users to malicious code.
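One practical mitigation is to treat AI-suggested dependencies as untrusted until vetted. The sketch below shows the idea in minimal form, assuming a team maintains its own allowlist of approved packages; the allowlist contents and function name are illustrative, not drawn from the research described above.

```python
# Minimal sketch: screen AI-suggested dependency names against a vetted
# allowlist before anyone runs an install command, so a hallucinated (and
# potentially squatted) package name is flagged rather than installed.

# Hypothetical allowlist a team might maintain for its projects.
APPROVED_PACKAGES = {"requests", "numpy", "flask"}

def vet_suggestions(suggested):
    """Split suggested package names into approved and unvetted lists."""
    approved = [name for name in suggested if name in APPROVED_PACKAGES]
    unvetted = [name for name in suggested if name not in APPROVED_PACKAGES]
    return approved, unvetted

# A plausible AI suggestion list: one real package, one invented name.
ok, suspect = vet_suggestions(["requests", "totally-real-http-lib"])
```

Anything landing in the unvetted list would then go through a human review step (checking the registry, the maintainer, and the download history) before being added to a requirements file.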



            Start with your people

Part of the reason these researchers got buggy output back is that the input was poor. In other words, the code and data used to train the model was of poor quality in the first place. That's because LLMs don't generate new content per se, but deliver a kind of contextually appropriate mashup of things they've been trained on. It's proof, if any were needed, that many developers produce vulnerable code.

Better training is required so that teams relying on generative AI are more capable of spotting these kinds of mistakes. Done well, it would also arm them with the knowledge needed to use AI models more effectively. As the researchers at Stanford explained: "we found that participants who invested more in the creation of their queries to the AI assistant, such as providing helper functions or adjusting the parameters, were more likely to eventually provide secure solutions."
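To make that finding concrete, the contrast below shows what "investing more in the query" can look like in practice. Both prompts, the function names, and the helper signature are invented for illustration; they are not taken from the Stanford study itself.

```python
# Illustrative only: a low-effort prompt versus a richer one that supplies
# a helper signature and explicit security constraints, the pattern the
# Stanford researchers associated with more secure AI-assisted solutions.

VAGUE_PROMPT = "Write code to store a user's password."

DETAILED_PROMPT = """Write a Python function store_password(username, password)
that salts and hashes the password with hashlib.scrypt before persisting it.
Use this existing helper for persistence:

    def save_record(username: str, salt: bytes, digest: bytes) -> None: ...

Generate a fresh random salt per user and never store the plaintext password."""
```

The second prompt hands the assistant the surrounding context and the security requirements up front, leaving far less room for it to improvise an insecure default.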


So, what should effective secure coding training programs look like? They need to be continuous, to keep security front of mind and to ensure developers stay equipped as the threat and technology landscapes evolve. Training should reach everyone who has a role to play in the SDLC, including QA, UX and project management teams, as well as developers, but it should also be tailored to each of these groups according to the specific challenges they face. And it should focus on rewarding excellence, so that security champions emerge who can organically influence others.






            Copyright © 2023, Cyber Defense Magazine. All rights reserved worldwide.