
Almost all developers are using AI despite security concerns, survey suggests

While more than half of developers acknowledge that generative AI tools commonly create insecure code, 96% of development teams use the tools anyway, and more than half use them all the time, according to a report released Tuesday by Snyk, maker of a developer-first security platform.

The report, based on a survey of 537 software engineering and security team members and leaders, also revealed that 79.9% of respondents said developers bypass security policies to use AI.

“I knew developers were avoiding policy to make use of generative AI tooling, but what was really surprising was to see that 80% of respondents bypass the security policies of their organization to use AI either all of the time, most of the time or some of the time,” said Snyk Principal Developer Advocate Simon Maple. “It was surprising to me to see that it was that high.”

Without testing, the risk of AI introducing vulnerabilities into production increases

Skirting security policies creates tremendous risk, the report noted, because even as companies are quickly adopting AI, they are not automating security processes to protect their code. Only 9.7% of respondents said their team was automating 75% or more of security scans. This lack of automation leaves a significant security gap.

“Generative AI is an accelerator,” Maple said. “It can increase the speed at which we write code and deliver that code into production. If we’re not testing, the risk of getting vulnerabilities into production increases.”

“Fortunately, we found that one in five survey respondents increased their number of security scans as a direct result of AI tooling,” he added. “That number is still too small, but organizations see that they need to increase the number of security scans based on the use of AI tooling.”
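Closing that automation gap typically means wiring a scanner into the delivery pipeline so every change, AI-generated or not, is checked before merge. A minimal sketch, assuming GitHub Actions and the Snyk CLI; the workflow name and triggers are illustrative, and a real setup would pin versions and tune severity thresholds:

```yaml
# Illustrative only: run a dependency scan on every push and pull
# request so AI-assisted changes are checked before they ship.
name: security-scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Snyk CLI
        run: npm install -g snyk
      - name: Scan dependencies for known vulnerabilities
        run: snyk test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

Running the scan on pull requests rather than only on a schedule is what makes it keep pace with AI-accelerated development: the check happens at the same speed the code is written.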

Developers should be more confident in themselves than in AI

Many developers place far too much trust in the security of code suggestions from generative AI, the report noted, despite clear evidence that these systems consistently make insecure suggestions.

“The way that code is generated by generative AI coding systems like Copilot and others feels like magic,” Maple said. “When code just appears and functionally works, people believe too much in the smoke and mirrors and magic because it appears so good.”

Developers can also value machine output over their own talents, he continued. “There’s almost an imposter syndrome,” he said. “Developers don’t believe they’re as good as they actually are; they assume their code isn’t as secure as something machine-generated.”

Speed gains from AI risk letting unsafe open-source components into code

The report also maintained that widespread use of AI software development tools contributes to open-source security problems.

Only 24.6% of survey respondents said their organizations used software composition analysis to verify the security of code suggestions from AI tools, the report noted. Increased velocity likely accelerates the rate at which unsafe open-source components are accepted into code.
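What a software composition analysis check does can be illustrated with a toy version. The package names and the advisory entry below are invented for the example; real SCA tools query curated vulnerability databases rather than a hard-coded dictionary:

```python
# Toy SCA check: flag dependencies that appear on a known-vulnerable
# list before they are merged. All names and advisories here are
# made up for illustration.

KNOWN_VULNERABLE = {
    ("leftpadx", "1.2.0"): "CVE-XXXX-YYYY (example advisory)",
}

def scan(dependencies):
    """Return (name, version, advisory) for each flagged dependency."""
    findings = []
    for name, version in dependencies:
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

# An AI suggestion might pull in both of these; only one is flagged.
deps = [("requests", "2.31.0"), ("leftpadx", "1.2.0")]
findings = scan(deps)
```

The point of running such a check on every AI suggestion, rather than trusting the tool, is that the scan is only as good as its database, but no scan at all catches nothing.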

Because AI coding systems use reinforcement learning to improve and tune their results, when users accept insecure open-source components embedded in suggestions, the systems become more likely to label those components as secure even when they are not, it continued.

This risks creating a feedback loop: developers accept insecure open-source suggestions from AI tools, those suggestions go unscanned, and the result poisons not only the organization’s application code base but the AI systems’ own recommendation models, it explained.
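That feedback loop can be sketched with a toy model. Nothing here reflects any real AI system’s internals; it is a hypothetical recommender that upweights whatever gets accepted, showing how skipping scans lets an insecure component reinforce itself:

```python
import random

def simulate(scan_enabled, rounds=1000, seed=0):
    """Toy recommender: each accepted suggestion upweights its component."""
    rng = random.Random(seed)
    weights = {"secure-lib": 1.0, "insecure-lib": 1.0}
    insecure = {"insecure-lib"}
    for _ in range(rounds):
        names = list(weights)
        pick = rng.choices(names, weights=[weights[n] for n in names])[0]
        if scan_enabled and pick in insecure:
            continue  # a scan blocks acceptance, so no reinforcement
        weights[pick] += 0.1  # acceptance reinforces future suggestions
    return weights

unscanned = simulate(scan_enabled=False)
scanned = simulate(scan_enabled=True)
```

With scanning on, the insecure component is never reinforced and its weight stays flat; with scanning off, every acceptance makes it more likely to be suggested again, which is the poisoning dynamic the report describes.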

Belief that AI coding tools are highly accurate and less fallible than humans is a danger

The report asserted that there is an obvious contradiction between the developer perception that AI coding suggestions are secure and overwhelming research showing that this is often not the case.

This is a perception and education problem, it continued, caused by groupthink, driven by the principle of social proof and humans’ inherent trust in seemingly authoritative systems. Because the unfounded belief that AI coding tools are highly accurate and less fallible than humans is circulating, it has become accepted as fact by many.

The antidote to this dangerous false perception, it concluded, is for organizations to double down on educating their teams about the technology they adopt while securing their AI-generated code with industry-approved security tools that have an established history in security.

Development Tools, Security, Security Practices, Software Development, Supply Chain

