Cyber attackers and defenders are racing to up their AI game

Artificial intelligence’s power and fast evolution are rapidly altering the cybersecurity landscape, presenting both opportunities and challenges for defenders. As popular AI tools such as ChatGPT and, more recently, even more robust generative AI systems become mainstays of the digital ecosystem, cybersecurity professionals will increasingly deal with new threats while also turning to AI technologies to identify and ward off those threats.

Security company Axonius today released a survey on the state of IT and security teams, revealing, among other things, how AI is rising to the top of cybersecurity agendas as organizations try to realize the promise and tackle the peril of the AI era. Axonius surveyed IT and security decision-makers at 950 companies with 500 or more employees in the United States, United Kingdom, and Australia.

The survey found that three-quarters (76%) of respondents said their organizations are spending more on AI or machine learning than they were 12 months ago, and more than four in five (85%) said they were interested in applying AI to their organization’s IT and security operations in the coming year. Axonius also found that nearly two in five (39%) IT and security decision-makers whose organizations have cut IT or security headcount in the last 12 months say they have adopted AI-based tools to streamline tasks and keep pace with the workload.

These findings dovetail with another survey result: Nearly three-quarters (72%) of IT and security decision-makers said they are concerned about the potential adverse effects of generative AI on their organization’s cybersecurity.

As the survey illustrates, cyber defenders will increasingly use AI technology to defend against AI threats while simultaneously coping with threat actors continuously upping their game with AI-powered malware and intrusion tools. “If this technology is allowing an attacker to do something a lot faster or a lot cheaper, then that means that defenders also have to think about how can we do something faster and cheaper and effectively do more of it with the resources we have,” Daniel Trauner, senior director of security at Axonius, tells CSO.

A window of asymmetric advantage for attackers?

In these early days of AI technology adoption, one factor that might tip the balance in favor of malicious actors is the relatively slower and more deliberate adoption by defenders of the latest AI defense tools, giving attackers at least a temporary asymmetric advantage. “The advancements in AI are happening so quickly that we can’t possibly hope that all of the people using it will fully understand what’s happening,” Peter Morgan, CSO at Phylum, tells CSO. The “timing difference is a big deal right now.”

“It takes a long time to build [cyber defense systems] relative to building something that an attacker can use that might work, say, 1% of the time,” Morgan adds. “If they go after 1,000 or 10,000 or 100,000 targets, it’s very easy to do the math on how much 1% success will give you. But as a defender, 1% success doesn’t help you much. And really, what you want for something productized and put in front of customers or users in a reliable sense is something in the high nineties. That takes a long time.”
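
Morgan’s back-of-the-envelope math is easy to make concrete. The short sketch below (a hypothetical illustration in Python, using the target counts he cites) shows why a 1% hit rate is lucrative for an attacker operating at scale yet worthless as a reliability bar for a defensive product:

```python
# Hypothetical illustration of the attacker/defender asymmetry Morgan
# describes; the numbers are examples, not data from the Axonius survey.

def expected_successes(targets: int, success_rate: float) -> float:
    """Expected number of compromises for an automated campaign."""
    return targets * success_rate

for targets in (1_000, 10_000, 100_000):
    hits = expected_successes(targets, 0.01)  # attacker: 1% is plenty at scale
    print(f"{targets:>7,} targets at a 1% success rate -> ~{hits:,.0f} compromises")

# A defensive product that caught only 1% of attacks would be useless;
# as Morgan notes, it needs success rates "in the high nineties" before
# it can reliably be put in front of customers, which takes far longer to build.
```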

Some experts think an asymmetric advantage is something cybersecurity professionals are already accustomed to and have dealt with for a long time. “Everyone working in security, we cannot see the good without seeing the bad, right?” Andrea Hervier, head of partnerships at CrowdSec, tells CSO. “We always try to think what could be the consequences of a potential new technology, but when we look at the benefits of AI in cybersecurity at the moment, we can also say that many of the benefits are the flaws at the same time.”

The advent of AI represents two sides of one coin, Hervier says. “What I can say with certainty is that generative AI on one side can automate a lot of day-to-day tasks. It can do them at scale, and it can do them very fast, which can be seen as a good thing. It can also be seen as a bad thing if it’s in the hands of a cybercriminal.”

Even if AI is currently an asymmetric threat, defenders have nothing to fear, says Fayyaz Makhani, who leads professional services, global compliance, and risk at SecureTrust. “AI has been around in various forms for several years now. Decades, even,” he tells CSO. “I think starting last year when it came to the forefront…it became really scary because it’s new.”

“We don’t need to be afraid of AI. We can look at it in a couple of different ways, and if we look at it as tools and as support for whatever it is we do, whether it’s on the white side or the not-so-white side of cybersecurity, I think either way we all have the ability to utilize artificial intelligence in many different ways.”

AI threats defenders will face

Although it’s hard to foresee the types of threats defenders will face as AI technology takes hold, the ability of attackers to generate synthetic content at scale is one top concern. “The ability to synthetically generate new or seemingly real content will be very interesting. If you take an example from something that happens at the nation-state level where if a nation-state attacker is trying to build up an online presence for a series of accounts to make it look like they’re real, there’s a whole process,” Morgan says.

“You create a new account, put a bunch of content in it, and make it interact. And often, [threat actors] used to have people do this for years, building up these identities online. It seemed like real people, and it took a lot of human time and effort to synthetically create those things and age them over these time periods. Well, ChatGPT can do all of that content generation for you. Now, it’s just writing some code to automate the process. It’s changing the landscape for what attackers can do on a volume level.”

When it comes to the most frequent malicious AI use case attackers currently employ, improving the language of phishing emails, cyber defenders are quickly rising to that challenge. “The phishing example is a great example of a technology that works on both sides,” Makhani says. “So, the nefarious users use AI, ChatGPT, or other generative models to create better phishing emails. But on the flip side of that, the tools that we are building incorporate many similar technologies to detect the patterns in these phishing emails or other types of spam.”

“AI will make it somewhat easier for attackers to make better phishing attacks, but it will also inevitably make defenses stronger,” Morgan says.

Careful implementation of AI technologies is needed

Could AI technology, which is fast-moving, complex, and often opaque in its operations, pose liabilities for cyber defenders if not carefully implemented? A growing number of companies, including Samsung, Apple, Spotify, Verizon, and Amazon, are limiting employee use of ChatGPT to avoid one such liability: the disclosure of sensitive customer or corporate data.

Questions surrounding organizations’ cybersecurity practices have only intensified since the SEC in late October charged SolarWinds and its CISO with fraud and internal control failures relating to allegedly known cybersecurity risks and vulnerabilities, a development that has stirred intense debate within the information security community. “There will be new attacks that come out involving AI that I don’t think you can hope to or reasonably prosecute someone for not being defensive because they may be brand new,” Phylum’s Morgan says.

Axonius’ Trauner believes the industry is already proactive in monitoring AI risks and liabilities. “You can tell that the security industry has paid very careful attention to the risk and liability here just by looking at a bunch of frameworks that have come out. There is already the OWASP Top 10 for Large Language Model Applications. NIST published its Artificial Intelligence Risk Management Framework. There was the Biden executive order.”

But the bottom line is that AI is advancing so quickly that all bets are off in terms of how both attackers and defenders will deploy it over the next few years. “The one thing I will say about this technology, which is different from a lot of other ones that I’ve experienced personally, is that the rate of advancement is unique,” Trauner says. “I have not seen anything advance quite this quickly.”
