Generative AI is scaring CISOs – but adoption isn’t slowing down

The march of generative AI isn’t short on negative consequences, and CISOs are particularly concerned about the downsides of an AI-powered world, according to a study released this week by IBM.

Generative AI is expected to create a wide range of new cyberattacks over the next six to 12 months, IBM said, with sophisticated bad actors using the technology to improve the speed, precision, and scale of their attempted intrusions. Experts believe that the biggest threat is from autonomously generated attacks launched on a large scale, followed closely by AI-powered impersonations of trusted users and automated malware creation.

The IBM report included data from four different surveys related to AI, with 200 US-based business executives polled specifically about cybersecurity. Nearly half of those executives (47%) worry that their companies’ own adoption of generative AI will lead to new security pitfalls, while virtually all say that adopting the technology makes a security breach more likely. In response, cybersecurity budgets devoted to AI have risen by an average of 51% over the past two years, with further growth expected over the next two, according to the report.

The contrast between the headlong rush to adopt generative AI and the strongly held concerns over its security risks may not be as large an example of cognitive dissonance as some have argued, according to Chris McCurdy, IBM’s general manager for cybersecurity services.

For one thing, he noted, this isn’t a new pattern — it’s reminiscent of the early days of cloud computing, which saw security concerns hold back adoption to some degree.

“I’d actually argue that there is a distinct difference that is currently getting overlooked when it comes to AI: with the exception perhaps of the internet itself, never before has a technology received this level of attention and scrutiny with regard to security,” McCurdy said.

Global think tanks have sprouted up to study the security implications of generative AI, he pointed out, and although a great deal of education still needs to happen in C-suites, organizations are generally moving in the right direction.

“In other words, we’re seeing that security isn’t an afterthought, but a key consideration in these early days,” McCurdy said.

It’s important to recognize that the positive impact of generative AI on business operations has the potential to be transformative, he added. If security, to say nothing of governance and compliance, is part of the conversation from the beginning, cyber threats don’t need to stand in the way of progress.

“There is a lot of focus on how AI will impact organizations positively, but it’s our responsibility to also consider what guardrails we have to put in place to ensure the AI models we rely on are trustworthy and secure,” McCurdy said.
