AI enters production systems even as ‘trust’ emerges as a growing concern

AI has seen massive adoption across both the public and private sectors, with only a small fraction of organizations in either segment believing they are at least two years away from successfully leveraging it, according to a new report from Foundry Research.

The research, commissioned by Splunk, surveyed senior decision-makers from more than 200 organizations with an average size of 4,255 employees and found that building trust in AI systems is the top obstacle to expanding the use of AI in both the private and public sectors.

“This survey was conducted to understand how public and private sectors are leveraging AI, and contending with obstacles such as government regulations, ethical considerations, and the challenges of protecting AI-enabled systems,” the report said.

Respondents for the survey, 54% of whom held vice president or higher job titles, almost equally represented the private and public sectors (49% vs. 51%).

Automation drives rapid AI adoption

Most organizations (79% of public sector and 83% of private sector organizations) have started to use AI in production, according to the survey. Only a small fraction is still testing the technology (9%), investigating solutions (8%), or planning to investigate it (4%).

Much of this drive is attributed to a push toward automation in both sectors. While 44% of public sector respondents said they are already using, or plan to use, automation to increase productivity, the private sector figure was slightly higher at 53%.

Other top priorities for AI adoption included improving innovation and idea generation (30%), improving goods or services (30%), improving citizen or customer experiences (29%), and detecting and assessing cyber risk (26%).

Additionally, 57% of respondents at public sector organizations and 65% at private sector organizations were confident that their organization will be prepared to leverage AI to advance their missions or business strategies within a year. Just 11% of public sector and 12% of private sector respondents said it would take at least two years for their organizations to do so.

The top obstacle to AI adoption and expansion remained building trust in AI systems, with 48% of public sector and 44% of private sector respondents citing data privacy and security concerns as the source of their reluctance.

Regulatory concerns loom amid growing benefits

Addressing cybersecurity priorities emerged as one of the leading benefits of AI, cited by 80% of respondents. These priorities include AI-enabled monitoring (34%), risk assessment (33%), and analysis of threat data (29%).

The study also revealed that private sector respondents are significantly more likely than those in the public sector to report using AI to analyze threat data (35% versus 22%), improve productivity (30% versus 17%), generate code (22% versus 7%), or analyze OT data (20% versus 9%).

“Some of the areas where I see AI playing an active role include vulnerability identification, threat detection, attack prevention, user behavior analytics, entity analytics, big data security analytics, risk, governance, identity management, and other areas,” said Pankit Desai, co-founder and CEO of Sequretek.

The popularity of large language models (LLMs), a text-based generative AI technology, has grown tremendously in recent years, with nearly seven in ten public sector respondents (69%) reporting that their organizations have already adopted or intend to adopt externally created LLMs, compared with 57% of those in the private sector.

As the scope of LLMs grows, so do concerns over their training, usage, and bias. More than three-quarters (78%) of decision-makers across all industries believe there should be global ethical principles to guide the regulation of AI and LLMs, according to the study.

