Black Hat insights: Generative AI begins seeping into the security platforms that will carry us forward

LAS VEGAS – Just when we appeared to be on the verge of materially shrinking the attack surface, along comes an unpredictable, potentially explosive wild card: generative AI.

Related: Can ‘CNAPP’ do it all?

Unsurprisingly, generative AI was in the spotlight at Black Hat USA 2023, which returned to its full pre-Covid grandeur here last week.

Maria Markstedter, founder of Azeria Labs, set the tone in her opening keynote address. Artificial intelligence has been in commercial use for many decades; Markstedter recounted why this potent iteration of AI is causing so much fuss, just now.

Generative AI makes use of a large language model (LLM) – an advanced algorithm that applies deep learning techniques to massive data sets. The popular service, ChatGPT, is based on OpenAI’s LLM, which taps into everything available across the Internet through 2021, plus anything a user cares to feed into it. Generative AI ingests it all, then applies algorithms to understand, generate and predict new content – in text-based summaries that any literate human can grasp.
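To make that next-word prediction idea concrete, here is a toy statistical sketch. Real LLMs use deep neural networks trained over billions of parameters; this bigram counter is only a miniature illustration of the underlying principle, and every name in it is illustrative:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus: str):
    """Count which word follows which -- a toy stand-in for the
    statistical next-token prediction an LLM performs at vast scale."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word: str) -> str:
    """Return the most frequent follower of `word` seen in training."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else ""

corpus = "the model reads text and the model predicts the next word"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "model" follows "the" most often here
```

An LLM does conceptually similar frequency-and-context learning, but over essentially the whole public Internet rather than one sentence.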

I spoke to technologists, hackers, marketers, company founders, researchers, academics, publicists and fellow journalists about the promise and pitfalls of commoditizing AI in this fashion. I came away with a much better understanding of the disruption/transformation that is gaining momentum, with respect to privacy and cybersecurity.

Shadow IT on steroids

In fact, generative AI has, for the moment, dramatically accelerated attack surface expansion. I spoke with Casey Ellis, founder of Bugcrowd, which supplies crowd-sourced vulnerability testing, about this. We discussed how elite hacking collectives already are finding ways to use it as a force multiplier, streamlining repetitive tasks and enabling them to scale up their intricate, multi-staged attacks.


What’s more, generative AI has exacerbated the longstanding problem of well-intentioned employees unwittingly creating dangerous new exposures, especially in hybrid and multi-cloud networks. I spoke with Uy Huynh, vice president of solutions engineering at Island.io, about how generative AI has quickly become like BYOD and Shadow IT on steroids. Island supplies an advanced web browser security solution.

“The days of localized data loss is over,” says Huynh. “With ChatGPT, when you post sensitive content as part of a query, it subsequently makes its way to OpenAI, the underlying LLM. Every piece of information becomes a part of the model’s vast knowledge base. This unintentional leakage can have dire consequences, as sensitive information can thereafter be accessed through the right prompts.”
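Huynh's warning points to a practical mitigation: scrub obviously sensitive tokens before a prompt ever leaves the network. The patterns and function names below are illustrative assumptions, not any vendor's product:

```python
import re

# Hypothetical patterns -- a minimal sketch of catching obvious
# sensitive tokens before a query is posted to an external LLM.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

query = "Summarize the case for 123-45-6789, contact jane.doe@example.com"
print(scrub(query))
# Summarize the case for [SSN REDACTED], contact [EMAIL REDACTED]
```

Regex filters like this only catch well-formed identifiers; context-aware classification of the kind Concentric.ai describes below is a much harder problem.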

Of course, the good guys aren’t asleep at the wheel. Another theme that stood out at Black Hat: security innovators are, at this moment, creating and testing new ways to leverage generative AI – as a force multiplier – for their respective security specialties.

Threat intelligence vendor Cybersixgill, for instance, launched Cybersixgill IQ at Black Hat. This new service feeds vast data sets of threat intel into a customized LLM tuned to generate answers to nuanced security questions.

The idea is to shrink the time analysts spend sifting through data, says Brad Liggett, director of global sales engineering. Cybersixgill’s researchers, for instance, are finding they can quickly gain insights they might have missed or taken much longer to uncover.

Meanwhile, Concentric.ai recently patented technology that uses an LLM to ingest and iterate over a company’s trove of unstructured data. “We can actually read the context,” Cyrus Tehrani, vice president of business development, told me.


“We can tell if there’s a Social Security number in there, we know the type of document it is, we know the type of form it is. We can tell you if something you’re releasing has to do with privacy personas; we can tell because we know the context of that data.”

This all really boils down to intuitive questioning of generative AI by clever human experts. Bugcrowd’s stable of independent white hat hackers, for instance, is probing for the edges of the envelope, striving to determine where usefulness ends and inaccuracy kicks in, Ellis told me.

Defense-in-depth redux

I also spoke just ahead of the conference with Horizon3.ai, Syxsense and Trustle – and we touched on how they are factoring in generative AI; for a deeper dive, please give a listen to my podcast discussions with each. At the conference, I had deep conversations with experts from Bugcrowd, Island.io, JupiterOne, Traceable.ai, Data Theorem, Sonar and Flexxon; stay tuned for upcoming Last Watchdog podcasts with each.

Generative AI is sure to rivet everyone’s attention for some time to come. When it comes to cybersecurity, Markstedter, the keynote presenter, astutely observed how generative AI is on track to match the original iPhone’s adoption trajectory: massive popularity followed by an extended period of companies scrambling to gain security equilibrium.


“Do you remember the first version of the iPhone? It was so insecure — everything was running as root. It was riddled with critical bugs. It lacked exploit mitigations or sandboxing,” she said. “That didn’t stop us from pushing out the functionality and for businesses to become part of that ecosystem.”

Cybersecurity is undergoing a tectonic shift, folks. To get us where we need to be, traditional, perimeter-centric IT defenses need to be reconstituted and security services delivery models need to be reshaped. A new tier of overlapping, interoperable, highly automated security platforms is taking shape. Defense-in-depth remains a mantra, but one that is morphing into something altogether new.

Automation and interoperability must take over and several new security layers must coalesce and interweave to address attack surface expansion. Generative AI has come along as a two-edged sword, accelerating attack surface expansion, but also stirring cybersecurity innovation. In short, the arms race has taken on a critical new dimension.

Cutting against the grain


A few off-the-cuff discussions I had on the exhibits floor and at offsite gatherings at Black Hat resonated. One was with Christopher Budd, director of Sophos X-Ops, which launched in July 2022 as a cross-operational unit linking SophosLabs, Sophos SecOps and Sophos AI. This consolidation of Sophos’ deep, diverse threat detection and analysis assets is helping organizations better defend against constantly changing and increasingly complex cyberattacks.


I also spoke at length with Saryu Nayyar, CEO of Gurucul, supplier of a unified security and risk analysis solution. Gurucul, too, launched a “generative AI assistant” at Black Hat and has been in the vanguard of another major trend: competing to shape the multi-faceted security platforms we’ll need to carry us forward.

“We’ve always had a vision, right from the beginning, of supplying a unified, open platform,” Nayyar told me. “Our data ingestion framework supports more than a thousand integrations. . . Our biggest differentiator is our threat content. We use machine learning, and we have a large research team producing threat content that’s all use-case driven, content that can be used for proactive response and proactive risk reduction.”

I also had a fascinating chat with Jonathan Desrocher and Ian Amit, co-founders of Gomboc.ai, which emerged from stealth at Black Hat with a $5 million seed funding round and a strikingly unique solution. With generative AI all the rage, Gomboc is tapping into what Amit and Desrocher characterized as the polar opposite – “deterministic AI.”

Gomboc’s innovation appears to be a simplified way to drag-and-drop robust security policy onto cloud IT resources, such as AWS processing and storage. Instead of using generative AI to guess, based on information about the feature sets it can see, deterministic AI runs through a series of predetermined checks, then applies reasoning to conclude whether a cloud asset is securely configured; it either is, or it isn’t, Desrocher told me.
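Gomboc’s engine isn’t public, but the deterministic idea Desrocher describes – a fixed battery of pass/fail checks rather than statistical inference – can be sketched roughly like this (the rule names and config shape are hypothetical):

```python
# A hypothetical deterministic checker: each rule is a yes/no predicate
# over a cloud resource's configuration -- no statistical guessing involved.
RULES = [
    ("encryption_at_rest", lambda cfg: cfg.get("encrypted") is True),
    ("no_public_access",   lambda cfg: not cfg.get("public", False)),
    ("versioning_enabled", lambda cfg: cfg.get("versioning") is True),
]

def is_secure(cfg: dict) -> tuple[bool, list[str]]:
    """Run every rule; the verdict is binary -- securely configured or not."""
    failures = [name for name, check in RULES if not check(cfg)]
    return (not failures, failures)

bucket = {"encrypted": True, "public": True, "versioning": True}
ok, failed = is_secure(bucket)
print(ok, failed)  # False ['no_public_access']
```

The same rule set always yields the same verdict for the same configuration – the reproducibility that distinguishes this approach from a generative model’s probabilistic output.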

Baked-in security

“It’s deterministic and it also changes the focus of what you’re modeling,” he says. “Do you model past behavior and try to extract rules to predict the future? Or are you actually modeling the problem domain to understand the physics of how it works, so that you can predict the future based on the laws of nature, if you will?”

Fresh out of stealth mode, Gomboc has a ways to go to prove it can gain traction. Amit and Desrocher, of course, have high hopes to make a big difference.

Here’s what Amit told me: “Over the medium term, we’re going to change the way that security is being managed for cloud infrastructure. And in the long term, we’re going to change the way that cloud infrastructure, in general, is being managed . . . our policy engine can also be applied to performance, cost and resilience so that DevOps won’t need to inundate themselves with those intricacies of finding the correct parameters to make things run correctly. Security is going to be baked into the way you deploy your architecture.”

Along these same lines, I had a deep conversation with Camellia Chan, co-founder and CEO of Flexxon, a Singapore-based hardware vendor that’s also cutting against the grain. Chan walked me through how Flexxon has won partnerships with Lenovo, HP and other OEMs to embed Flexxon solid state memory drives in new laptops. Branded “X-Phy,” these advanced SSDs contain AI-infused mechanisms that provide a last line security check, she told me. A full drill down is coming in my podcast discussion with Chan, so stay tuned.

The transformation progresses. I’ll keep watch and keep reporting.


Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(LW provides consulting services to the vendors we cover.)
