Businesses face “silent infiltration” of generative AI as use spirals out of control

Business leaders appear to have lost control over the deployment, oversight, and purpose of generative AI within their organizations, new research from Kaspersky suggests. That's despite just 28% of organizations expressly permitting the use of generative AI, and even fewer (10%) having a formal generative AI use policy in place, according to separate findings from ISACA.

It's perhaps no surprise, then, that a recent survey by Add People found that one in three UK workers are using generative AI tools without their bosses' knowledge.

Executives admit “deep concern” about the security risks of generative AI takeover

Almost all (95%) of the 1,863 UK and EU C-level executives surveyed by Kaspersky believe generative AI is regularly used by employees, with over half (53%) stating that it is now driving certain business departments. The extent of the takeover is such that most executives (59%) express deep concerns about potential security risks that could jeopardize sensitive company information and result in the total loss of control of core business functions.

However, just 22% of respondents have discussed establishing rules and regulations to monitor the use of generative AI, despite 91% stating they need a better understanding of how employees are using internal data to protect against critical security risks or data leaks, Kaspersky found.

Organizations lack sufficient generative AI policies, risk management

ISACA's generative AI survey of 2,300 global digital trust professionals found that while the use of generative AI is ramping up, most organizations do not have sufficient policies or effective risk management in place. The survey indicated that over 40% of employees are using generative AI regardless, a percentage that is likely much higher given that a further 35% aren't sure.

Employees are using generative AI in several ways, including to create written content (65%), increase productivity (44%), automate repetitive tasks (32%), provide customer service (29%), and improve decision-making (27%), according to ISACA.

While 41% of survey respondents believe not enough attention is being paid to ethical standards for AI implementation, fewer than one-third of organizations consider managing AI risk to be an immediate priority. That’s despite those polled noting the following as top risks of the technology:

  • Misinformation/disinformation (77%)
  • Privacy violations (68%)
  • Social engineering (63%)
  • Loss of intellectual property (IP) (58%)
  • Job displacement and widening of the skills gap (tied at 35%)

“While the ISACA survey shows that generative AI is being widely used for various purposes, it also highlights a glaring gap in ethical considerations and security measures,” wrote Raef Meeuwisse, author of Artificial Intelligence for Beginners. “Only 6% of organizations are providing comprehensive AI training to all staff, and a staggering 54% offer no training at all.”

This lack of training, coupled with insufficient attention to ethical standards, can lead to increased risks, including exploitation by bad actors, Meeuwisse said. “Perhaps not surprisingly, 57% of respondents are very or extremely worried about generative AI being exploited by malicious actors.”

One in three UK workers use generative AI in secret

Last month, digital marketing firm Add People surveyed 2,000 UK workers about their use of generative AI. It found that a third are using tools like ChatGPT without their managers knowing, driven by the fact that only 10% of workplaces regulate the use of such AI tools.

The workers most likely to keep AI use from managers are those aged between 25 and 44, while only 13% of respondents over 55 said they kept AI use to themselves, according to Add People.

Women are more likely than men to use AI in secret (57% versus 42% of respondents) and are also more likely to work in businesses with no official AI implementation, the research found.
