Nation-state threat actors using LLMs to boost cyber operations

Nation-state groups Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon are using large language models (LLMs) to improve and expand their cyber operations, according to findings from the Microsoft Threat Intelligence Cyber Signals 2024 report, produced in collaboration with OpenAI.

The study did not identify significant attacks employing the LLMs that Microsoft and OpenAI monitor, but it revealed that these groups have been using LLMs to improve their reconnaissance, scripting, research, and other activities in order to gather crucial information before attacks.

The report emphasized that practices such as multi-factor authentication (MFA) and zero trust are essential to defend against possible attacks using LLMs. It also recommended that organizations apply vendor AI controls and continually assess whether those controls remain adequate; implement strict input validation and sanitization for user-provided prompts; mandate transparency across the AI supply chain; and communicate clearly with users about how, and which, AI tools have been vetted for use by the organization.
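As a minimal sketch of the input validation and sanitization recommendation, a prompt gate might look like the following. The length limit and denylist patterns here are assumptions for illustration, not controls named in the report; real deployments need model-aware filtering beyond a regex list.

```python
import re

# Hypothetical limits and patterns -- tune for your own application.
MAX_PROMPT_LEN = 4000
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")
# Naive denylist of common prompt-injection phrasings (illustrative only).
INJECTION_PATTERNS = re.compile(
    r"ignore (all )?previous instructions|disregard the system prompt",
    re.IGNORECASE,
)

def sanitize_prompt(prompt: str) -> str:
    """Validate and sanitize a user-supplied prompt before it reaches an LLM."""
    if len(prompt) > MAX_PROMPT_LEN:
        raise ValueError("prompt exceeds maximum allowed length")
    # Strip non-printable control characters that can hide payloads.
    cleaned = CONTROL_CHARS.sub("", prompt).strip()
    if not cleaned:
        raise ValueError("prompt is empty after sanitization")
    if INJECTION_PATTERNS.search(cleaned):
        raise ValueError("prompt matches a blocked pattern")
    return cleaned
```

A gate like this is only one layer; the report's broader point is that such controls must be reassessed continually as attacker phrasing evolves.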

How attackers use LLMs to gather information

The threat actors profiled in the report are a sample of observed activity that Microsoft and OpenAI believe best represents the tactics, techniques, and procedures (TTPs) the industry will need to track more closely through updates to the MITRE ATLAS knowledge base.

Forest Blizzard (Strontium)

Microsoft has observed Forest Blizzard — a Russian military intelligence actor linked to GRU Unit 26165 — using LLMs to research satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as conducting generic research aimed at supporting its cyber operations. The group used what MITRE ATLAS classifies as LLM-informed reconnaissance to understand satellite communication protocols, radar imaging technologies, and specific technical parameters through interaction with LLMs. These queries suggest an attempt to acquire in-depth knowledge of satellite capabilities.

The group also used LLM-enhanced scripting techniques to seek assistance with basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing, potentially to automate or optimize technical operations.

Emerald Sleet (Thallium)

Emerald Sleet — a North Korean threat actor that relies on spear-phishing emails to compromise and gather intelligence from prominent individuals with expertise on North Korea — has used LLMs to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies.

The report found that Emerald Sleet conducted LLM-assisted vulnerability research, using LLMs to better understand publicly reported vulnerabilities such as CVE-2022-30190, the Microsoft Support Diagnostic Tool (MSDT) vulnerability. It also used LLM-enhanced scripting techniques, but with a different purpose than Forest Blizzard: basic scripting tasks such as programmatically identifying certain user events on a system, plus assistance with troubleshooting and understanding various web technologies.

Emerald Sleet used LLM-supported social engineering for assistance with drafting and generating content that, according to the report, would likely be used in spear-phishing campaigns against individuals with regional expertise. It also used LLM-informed reconnaissance, again with a different focus than Forest Blizzard: it used LLMs to identify think tanks, government organizations, and experts on North Korea that focus on defense issues or North Korea's nuclear weapons program.

Crimson Sandstorm (Curium)

Crimson Sandstorm — an Iranian group assessed to be connected to the Islamic Revolutionary Guard Corps (IRGC) — has used LLMs to request support with social engineering, assistance troubleshooting errors, help with .NET development, and ways in which an attacker might evade detection on a compromised machine. Crimson Sandstorm used LLM-supported social engineering to generate phishing emails. It also used LLM-enhanced scripting techniques to generate code snippets intended to support app and web development, interactions with remote servers, web scraping, executing tasks when users sign in, and sending information from a system via email. The group also used LLM-enhanced anomaly detection evasion: an attempt to use LLMs for assistance in developing code to evade detection, to learn how to disable antivirus via the registry or Windows policies, and to delete files in a directory after an application has been closed.

Charcoal Typhoon (Chromium)

Charcoal Typhoon — a Chinese state-affiliated threat actor with activities predominantly focused on entities within Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal — has used LLMs to support tooling development and scripting, to understand various commodity cybersecurity tools, and to generate content that could be used to socially engineer targets.

More specifically, it used LLM-informed reconnaissance to research and understand specific technologies, platforms, and vulnerabilities, indicative of preliminary information-gathering stages. Charcoal Typhoon used LLM-enhanced scripting techniques to generate and refine scripts, potentially to streamline and automate complex cyber tasks and operations.

It also used LLM-supported social engineering for assistance with translations and communication, likely to establish connections or manipulate targets, according to the report. The group also used LLM-refined operational command techniques for advanced commands, deeper system access, and control representative of post-compromise behavior.

Salmon Typhoon (Sodium)

Salmon Typhoon — a Chinese state-affiliated threat actor with a history of targeting US defense contractors, government agencies, and entities within the cryptographic technology sector — has used LLMs in what appears to be an exploratory way. The report stated that “this threat actor is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”

The report found Salmon Typhoon used LLM-informed reconnaissance, engaging LLMs for queries on a diverse array of subjects, such as global intelligence agencies, domestic concerns, notable individuals, cybersecurity matters, topics of strategic interest, and various threat actors. These interactions mirror the use of a search engine for public domain research.

It also used LLM-enhanced scripting techniques to identify and resolve coding errors, and LLM-refined operational command techniques that demonstrated an interest in specific file types and concealment tactics within operating systems, indicative of an effort to refine operational command execution. It also used LLM-aided technical translation and explanation to translate computing terms and technical papers.

All accounts associated with these activities have been disabled. In a blog post, Microsoft corporate VP of security, compliance, identity, and management Vasu Jakkal said, “Microsoft uses several methods to protect itself from these types of cyberthreats, including AI-enabled threat detection to spot changes in how resources or traffic on the network are used; behavioral analytics to detect risky sign-ins and anomalous behavior; machine learning (ML) models to detect risky sign-ins and malware; zero trust, where every access request has to be fully authenticated, authorized, and encrypted; and device health to be verified before a device can connect to the corporate network.”
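The behavioral analytics Jakkal describes, flagging sign-in activity that deviates from an account's baseline, can be sketched with a simple z-score test. The history data and threshold below are assumptions for illustration; production systems use far richer features than raw counts.

```python
from statistics import mean, stdev

# Hypothetical hourly sign-in counts for one account over two weeks.
baseline = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 2]

def is_anomalous(count: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag a sign-in count whose z-score against history exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # No variation in history: anything different from the mean is anomalous.
        return count != mu
    return abs(count - mu) / sigma > threshold
```

For example, a burst of 40 sign-ins in an hour would be flagged against this baseline, while a typical count of 3 would not; the same z-score idea generalizes to login locations, times of day, and resource-access patterns.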
