'

Protect AI adds LLM support with open source acquisition

AI and ML security platform Protect AI has integrated a widely used, open source large language model (LLM) security tool — LLM Guard — into existing offerings after acquiring its developer Laiyer AI.

Available as a Python package accessible through a preferred installer program (PIP) package manager, LLM Guard is a security toolkit for LLM interactions designed to detect and block data leakage and prompt-based attacks on LLMs.

“The acquisition of LLM Guard is part of Protect AI’s mission to provide one integrated platform that enables organizations to enforce one policy that they can invoke at an enterprise level that encompasses all forms of AI security,” said Daryan Dehghanpisheh, president and co-founder of Protect AI. “We are enabling enterprises to build, deploy, and manage AI applications that are not only secure and compliant but also operationally efficient.”

Having infused the open source LLM Guard into its existing stack, Protect AI has plans to tool it up for a separate commercial offering by mounting additional features and integrations.

Extended protection against prompt injection and data leaks

Protect AI’s existing offering centers on protecting an organization’s AI and ML workflows, helping security teams defend against unique AI security threats. LLM Guard is an extension to that offering with a specific focus on LLM-based generative AI (GenAI) workflows.

“The acquisition of Laiyer AI and its LLM Guard open source tool extends the Protect AI platform with new capabilities for detecting, redacting, and sanitizing inputs and outputs from LLMs in order to mitigate risks such as prompt injections and personal data leaks,” Dehghanpisheh said. “These features are integral to preserving LLM functionality while safeguarding against malicious attacks and misuse. LLM Guard also integrates seamlessly with existing security workflows and observability SIEM and SOAR tools.”

Additionally, LLM Guard is expected to extend Protect AI Radar’s protection capabilities that can be built with a machine learning bill of materials (MLBOM) for detecting and mitigating security threats in the AI supply chain, according to Dehghanpisheh. Protect AI’s Radar is an AI risk detection and mitigation offering.

“There’s a clear need in the market for a solution that can secure LLM use cases from start to finish, including when they scale into production. By joining forces with Protect AI, we are extending Protect AI’s products with LLM security capabilities to deliver the industry’s most comprehensive end-to-end AI Security platform,” Laiyer AI co-founders Neal Swaelens and Oleksandr Yaremchuk said in a press statement.

LLM Guard to undergo gradual changes

Protect AI has assured that it will not enforce any changes in user interaction on LLM Guard, which is presently available as an open source offering and sees 2.5 million monthly downloads on HuggingFace.

“We remain committed to open source and permissive use licensing to support customers on their journey to implementing MLSecOps and securing their AI/ML deployments,” Dehghanpisheh said.

However, the company plans to scale the tool up with new features and offer a separate version on subscription at a later time.

“There will be a commercial version of Laiyer AI’s open source LLM Guard product which will offer expanded features, capabilities, and integrations as part of the Protect AI platform,” Dehghanpisheh added. “We have received extremely positive feedback from our customers and build partners who have seen these new capabilities. We will be announcing them publicly in the future.” GenAI platforms built on LLMs have been fueling a significant rise in cyberattacks and security risks, leading to existing cybersecurity providers as well as new startups rolling out specialized offerings to address these risks.

Security Software


Go to Source
Author: