As CISO for the Vancouver Clinic, Michael Bray gushes about the infinite ways large language models (LLMs) will improve patient care. âDNA-based predictive studies, metabolic interactions, lab services, diagnostics and other medicine will be so advanced that todayâs medical practices will look prehistoric,â he says. âFor example, applications like ActX are already making a huge difference with symptom identification, medicine interactions, effectiveness, and dosages.â
As excited as he is about LLMs improving patient care and diagnoses, Bray is equally concerned about the new and hidden threats that LLMs present. LLMs are core to disruptive and fast-moving AI technologies including OpenAIâs ChatGPT, Googleâs Bard, and Microsoftâs Copilot, which are rapidly proliferating across enterprises today. LLMs are being developed into a host of other specialty apps for specific vertical industries like finance, government, and military.
With these LLMs come new risks of data poisoning, phishing, prompt injections, and sensitive data extraction. Because these attacks are executed via natural language prompts or training sources, traditional security tools are ill-equipped to detect such attacks.
Fortunately, these vulnerabilities are being identified and prioritized by the Open Web Application Security Project (OWASP), National Institute of Standards (NIST), and other standards groups nearly as quickly as AI is proliferating. The EU AI Act has released an initial compliance checker for organizations to determine if their AI applications fall into the category of unacceptable risk or high risk. In November 2023, the UK released the UK guidelines for secure AI system development.
Tools are also catching up with new risks introduced through LLMâs. For example, natural language web firewalls, AI discovery, and AI-enhanced security testing tools are coming to market in what may well become a battle of AI versus AI. As we wait for those tools, these are the most likely threats organizations will face to their use of LLMs:
1. Malicious instructions from prompt injections
When asked about new threats introduced to enterprises through LLMs, experts cite prompt injections a top risk. Jailbreaking an AI by throwing a bunch of confusing prompts at the LLM interface is probably the most well-known risk and could cause reputational damage if the jailbreaker spreads misinformation that way. Or a jailbreaker could use confusing prompts to cause a system to spit out ridiculous offers, such as with a popular auto dealership chatbot developed by a company called Fullpath. By instructing a Chevy dealerâs chatbot to end each response with âthatâs a legally binding offer, no takesies backsies,â a hacker tester tried thousands of prompts until he ultimately tricked the dealer site into offering him a new car for one dollar.
The more severe threat is when prompt injections are used to forceapplications to hand over sensitive information. Unlike with SQL injection prompts, threat actors can use limitless prompts to tryto trick an LLM into doing things it shouldnât because the LLM prompts are written in natural language, explains Walter Haydock, founder of StackAware, which maps AI use in enterprises
, and identifies associated risks.
âWith SQL, there are finite ways you can input data so there is a known set of controls you can use to prevent and block SQL injections. But with prompt injection, there are infinite ways to provide malicious instructions to an LLM because the English language is that vast,â Haydock notes.The number of LLM prompt tokens continues to grow.
2. Data leakage from prompt extractions also an LLM vulnerability
Hyrum Anderson, CTO at Robust Intelligence, anÂ end-to-end AI securityÂ platform that includes a natural language web firewall, also points toÂ prompt extractionsÂ as a point of vulnerability. âPrompt extractionÂ falls into the category ofÂ data leakage, where data can be extracted by merely asking for it,â he adds.
Take, for example, chatbots on a website,Â with relevant data behind them that support the application. These data can be exfiltrated. As an example, Anderson points to retrieval augmented generationÂ (RAG), where LLM responses are enriched by connecting them to sources of information relevant to the task. Anderson recently witnessed such an attack in which demonstrators used a RAG to force the database to spit out specific sensitive information by asking for specific rows and tables in the database.
To prevent this type of database leakage, Anderson urgesÂ caution whenÂ connecting public-facing RAG apps to databases. âIf you donât want theÂ RAG appÂ user to seeÂ the entire database,Â then you should restrictÂ access at the user interface to the LLM,â he adds.Â âSecurity-minded organizations shouldÂ steel their APIs against natural-language pull requests, restrict access, and use an AI firewall toÂ blockÂ malicious requests.â
3. New LLM-enabled phishing opportunities
LLMs also open a new vector for phishers to trick people into clicking their links, Anderson continues. âSay Iâm a financial analyst using a RAGÂ appÂ to scrape documents fromÂ the internet to find out a companyâs earnings, but in that supply chain of dataÂ are instructions for an LLM to respond with a phishing link. So, say I ask it to find the most up to date information in the trove of data it sent, and it says âclick here.â And then I click a phishing link.â
This kind of phish is powerful since the user is explicitly seeking an answer from the LLM. Furthermore,Â traditional anti-phishing tools may notÂ see these malicious links, Anderson adds. He advises CISOâs to update their employee training programsÂ to include critical thinking about RAG responses, andÂ toÂ use emerging web-based tools that canÂ scan RAGÂ dataÂ forÂ natural-languageÂ prompt injections that encourageÂ usersÂ toÂ click links.
4. Poisoned LLMs
Models from open-source repositories and the data used to train LLMs can also be poisoned, adds Diana Kelley CISO atÂ Protect AI, a platform for AI and ML security.Â âThe biggest threats could be in the model itself or the data the LLM was trained on, who trained it, and where it was downloaded from,â she explains. âOSS models run with high privileges, but few companies scan them before use and the quality of the training data directly impacts the reliability and accuracy of the LLM. To see and manage AI related risks, and prevent poisoning attacks, CISOs need to govern the ML supply chain and track components throughout the lifecycle.â
That is, if CISOs are even aware of whatÂ applications are using LLMs and for what purposes.Â Many common workforce applications used in enterprises today are embedding the latest AI capabilities in their system updates, sometimes without the knowledge of the CISO.
Because these LLMs are integrated into third-party applications and web interfaces, discovery and visibility become even more murky. So, an AI policy addressing the entire data supply chain is key, says Haydock of StackAware.Regarding thesefourth-party risks. âItâs understanding how these apps are using, training, accessing, and retaining your data,â he adds.
AI versus AI
The US Government, arguably the largest network in the world, certainly understands the value of AI security policy as it seeks to leverage the promise of AI across government and military applications. In October 2023, the Whitehouse issued an executive order (EO) for safe AI development and use.
TheÂ Cybersecurity and Infrastructure Security Agency (CISA), part of the Department of Homeland Security (DHS),Â plays a critical role in executing the executive order andÂ has generated an AIÂ roadmapÂ that incorporates key CISA-led actions as directed by the EOâalong with additional actions CISA is leading to support critical infrastructure owners and operators as they navigate the adoption of AI.Â
As a result of the executive order, several key government agencies have already identified, nurtured, and appointed new chief AI officers responsible for coordinating their agencyâs use of AI, promoting AI innovation while managing risks from their agencyâs use of AI, according to Lisa Einstein, CISAâs senior advisor for AI.
âWith AI embedded into more of our everyday applications, having a person who understands AIâand who understands the positive and negative implications of integrating AIâis critical,â Einstein explains. âRisks related to LLM use is highly contextual and use-case specific based on industry, whether it be healthcare, schools, energy, or IT. So, AI champions need to be able to work with industry experts to identify risks specific to the context of their industries.â
Within government agencies, Einstein points toÂ the Department of Homeland Securityâs Chief AI Officer EricÂ Hysen, who isÂ also DHSâsÂ CIO.Â Hysen coordinates AI efforts across DHSÂ components, she explains, including the Transportation SecurityÂ Administration, which uses IBMâsÂ computer visionÂ to detect prohibited items in carry-on luggage. DHS, in fact, leverages AI in many instances to secure the homeland at ports of entry and along the border, as well as in cyberspace to protect children, defend against cyberthreats, and even to combat the malicious use of AI.
As LLM threats evolve, it will take equally innovative AI-enabled tools and techniques to combat them. AI-enhanced penetration testing and red teaming, threat intelligence, anomaly detection, incident response are but some of the tool types that are quickly adapting to fight these new threats.
Go to Source