
NSA Highlights AI System Security Guidelines

A recent advisory from the NSA outlines how operators of national security systems and Defense Industrial Base companies can securely deploy AI systems designed and developed by third parties.

Last week’s guidance, which comes as companies continue to weigh the security risks inherent both in AI systems themselves and in how they are deployed, gave specific recommendations for securely operating AI in the deployment environment and continuously protecting AI systems from vulnerabilities. The advisory marked the first set of guidelines from the Artificial Intelligence Security Center, which the NSA established in September to help detect and counter AI flaws, develop and promote AI security best practices, and drive industry collaboration on AI.

“The rapid adoption, deployment, and use of AI capabilities can make them highly valuable targets for malicious cyber actors,” according to the NSA’s cybersecurity guidance, released jointly with a number of other Five Eyes agencies, including the National Cyber Security Centre and the Australian Signals Directorate. “Actors, who have historically used data theft of sensitive information and intellectual property to advance their interests, may seek to co-opt deployed AI systems and apply them to malicious ends.”

With organizations typically deploying AI systems within their existing IT infrastructure, the NSA said that existing security best practices and requirements also apply to AI systems. Cybersecurity gaps can arise when teams outside of IT deploy these systems, and the NSA recommended that the person accountable for AI system security be the same person responsible for the organization’s cybersecurity overall. If teams outside of IT operate an AI system, they should work with IT to make sure the system falls “within the organization’s risk level” overall. Organizations should also require AI system developers to provide a threat model for their system that outlines potential threats and the mitigations for them.

The question of data security and privacy for AI is critical. Companies implementing AI systems should map out all data sources the organization will use in AI model training, including the data sources for models trained by others, though notably, those lists aren’t typically made public. Additionally, security teams should apply existing best practices, like encrypting data at rest, implementing strong authentication mechanisms and requiring MFA, in the AI deployment environment.
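As a minimal illustration of the data-at-rest point (this sketch is not from the NSA guidance itself), the Python snippet below uses the widely available cryptography package to encrypt a training dataset before it is written to the deployment environment’s storage. The file names are hypothetical, and a production setup would pull the key from a managed secret store rather than generating it next to the data.

```python
from cryptography.fernet import Fernet

# Assumption: in practice the key comes from a managed secret store (KMS,
# vault, etc.); generating it inline here is for illustration only.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt training data before it lands on shared or persistent storage.
with open("training_data.csv", "rb") as fh:  # hypothetical source file
    ciphertext = fernet.encrypt(fh.read())

with open("training_data.csv.enc", "wb") as fh:
    fh.write(ciphertext)

# Decrypt only inside the training process, keeping plaintext off disk.
plaintext = fernet.decrypt(ciphertext)
```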

“Do not run models right away in the enterprise environment,” according to the NSA. “Carefully inspect models, especially imported pre-trained models, inside a secure development zone prior to considering them for tuning, training, and deployment. Use organization approved AI-specific scanners, if and when available, for the detection of potential malicious code to assure model validity before deployment.”
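The guidance leaves the choice of scanner to the organization. As a rough sketch of what such an inspection can look for, the snippet below uses Python’s standard-library pickletools to flag pickle-serialized model files whose opcode stream imports modules commonly abused for code execution. It is a simplified illustration of the idea, not a substitute for an approved AI-specific scanner, and the module blocklist and file name are assumptions.

```python
import pickletools

# Unpickling can execute arbitrary code via imported callables; these modules
# appearing in a model file's opcode stream are a strong warning sign.
SUSPICIOUS_MODULES = {"os", "subprocess", "sys", "socket", "builtins"}

def scan_pickle(path: str) -> list[str]:
    """Return suspicious global imports found in a pickle-format model file."""
    findings = []
    with open(path, "rb") as fh:
        for opcode, arg, _pos in pickletools.genops(fh):
            # GLOBAL opcodes carry a "module name" string argument. (Protocol 4
            # pickles use STACK_GLOBAL with stack arguments, which a fuller
            # scanner would also need to resolve.)
            if opcode.name == "GLOBAL" and isinstance(arg, str):
                module = arg.split(" ", 1)[0]
                if module.split(".")[0] in SUSPICIOUS_MODULES:
                    findings.append(arg)
    return findings

if __name__ == "__main__":
    hits = scan_pickle("imported_model.pkl")  # hypothetical file name
    if hits:
        raise SystemExit(f"Refusing to deploy: suspicious imports {hits}")
```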

The NSA also outlined steps organizations should take after the initial deployment of AI to continuously confirm that the system and the data flowing through it remain secure: testing the model for accuracy and for potential flaws after modifications are made (a simple automated check of this kind is sketched below), evaluating and securing the supply chain for external AI models and data, and securing potentially exposed APIs. Metin Kortak, CISO at Rhymetec, said that cybersecurity measures around actively monitoring model behavior are particularly significant because “AI can be unpredictable.”

“Prior to deploying AI systems, companies need to acknowledge and tackle data privacy and security concerns,” said Kortak. “AI systems inherently handle extensive datasets, encompassing sensitive personal and organizational data, rendering them enticing targets for cyber threats.”
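On the re-testing point, one concrete way to operationalize it, sketched here under assumed interfaces rather than as a prescribed NSA method, is an automated gate that blocks promotion of a modified model when its accuracy on a held-out evaluation set drops below the baseline recorded for the previously approved version. The model.predict interface and the tolerance value are placeholder assumptions.

```python
def accuracy_gate(model, eval_inputs, eval_labels, baseline, tolerance=0.02):
    """Block deployment if a modified model regresses past the recorded baseline.

    Assumes `model` exposes a predict(x) method and that `baseline` is the
    accuracy recorded for the previously approved model version.
    """
    correct = sum(model.predict(x) == y for x, y in zip(eval_inputs, eval_labels))
    accuracy = correct / len(eval_labels)
    if accuracy < baseline - tolerance:
        raise RuntimeError(
            f"Model accuracy {accuracy:.3f} regressed below baseline {baseline:.3f}"
        )
    return accuracy
```

Run as part of a CI pipeline after any tuning or retraining step, a check like this turns the guidance’s test-after-modification step into an enforced control rather than a manual review.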
