
A year after ChatGPT’s debut, is GenAI a boon or the bane of the CISO’s existence?

It has been a full year since OpenAI’s ChatGPT entered the everyday vernacular, quickly followed by Google’s Bard and other generative AI offerings. Before you could say Rumpelstiltskin, it seemed employees, contractors, customers, and partners were all flexing their newfound shiny object: AI engines built on large language models about which they knew very little.

People were amazed when these tools enhanced knowledge and accuracy, and they marveled at the time the tools could save. They were equally amazed when an engine didn’t have a clue and served up nonsense answers or outright hallucinations, proving to be a waste of time. But the honeymoon didn’t last long.

The unintended consequences of querying AI engines soon reared their ugly head, as evidenced by the early 2023 incident at Samsung, which found that trade secrets had been blithely uploaded into ChatGPT. While the results were apparently quite positive, the trade secrets were no longer secret: they had been shared with ChatGPT maker OpenAI, and anyone making a similar query could (hypothetically) benefit from the engineers’ input.

The rapid rise of shadow AI should come as no surprise

Samsung handled this discovery, in my opinion, in precisely the right manner: acknowledge the big oopsie, make sure it doesn’t happen again, and develop in-house capabilities so that trade secrets remain secret.

To anyone with even a scintilla of experience in providing and supporting information technology, it was obvious that once the AI engines were made available to the masses, the river of risk had a new headwater: the AI query engine. Shadow IT had a newborn sibling, shadow AI.

The arrival of shadow AI shouldn’t really be a surprise, observes Alon Schindel, director of data and threat research at Wiz, who likens it to “where cloud was five to 10 years ago: everyone is using it to some extent, but very few have a process to govern it.”

“In the race to innovate, developers and data scientists often unintentionally create shadow AI by introducing new AI services into their environment without the security team’s oversight,” Schindel tells CSO. “Lack of visibility makes it hard to ensure security in the AI pipeline and to protect against AI misconfigurations and vulnerabilities. Improper AI security controls can lead to critical risks, making it paramount to embed security into every part of the AI pipeline.”
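
To give that visibility a concrete shape, here is a minimal sketch, assuming an AWS environment where data scientists deploy models through Amazon SageMaker. It simply enumerates deployed model endpoints and flags any that are missing from an approved inventory; the APPROVED_ENDPOINTS register is a hypothetical placeholder the security team would maintain, not a feature of any particular product.

import boto3

# Hypothetical register of AI endpoints the security team has already reviewed.
APPROVED_ENDPOINTS = {"prod-fraud-scoring", "prod-support-summarizer"}

def find_unregistered_endpoints(region="us-east-1"):
    """Return SageMaker model endpoints that are not in the approved inventory."""
    sagemaker = boto3.client("sagemaker", region_name=region)
    unregistered = []
    for page in sagemaker.get_paginator("list_endpoints").paginate():
        for endpoint in page["Endpoints"]:
            if endpoint["EndpointName"] not in APPROVED_ENDPOINTS:
                unregistered.append((endpoint["EndpointName"], endpoint["CreationTime"]))
    return unregistered

if __name__ == "__main__":
    for name, created in find_unregistered_endpoints():
        print(f"Shadow AI candidate: {name} (created {created:%Y-%m-%d})")

The same inventory-then-compare pattern applies anywhere else models can appear, from other managed cloud services to containers running open-source LLMs.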

Three things every company should do about generative AI

The solution is common sense. We need only look back to what Code42 CISO Jadee Hanson shared in April 2023, speaking specifically to the Samsung experience: “ChatGPT and AI tools can be incredibly useful and powerful, but employees need to understand what data is appropriate to be put into ChatGPT and what isn’t, and security teams need to have proper visibility to what the organization is sending to ChatGPT.”

I spoke with Terry Ray, SVP data security and field CTO for Imperva, who shared his thoughts on shadow AI, providing three key steps every entity should already be taking:

  • Establish visibility into every data repository, including the “shadow” databases squirrelled away “just in case.”
  • Classify every data asset so you know its value. (Does it make sense to spend $1 million to protect an asset that is obsolete or worth far less?)
  • Monitor and analyze, watching for data moving to where it doesn’t belong (a minimal sketch of what that monitoring might look like follows this list).
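
Here is what that third step might look like in practice: a minimal sketch, assuming outbound proxy logs are available as one JSON object per line, that flags requests sent to GenAI endpoints when the payload matches patterns for sensitive data. The host list and regex patterns are illustrative assumptions, not Imperva’s product or a complete DLP policy.

import json
import re

# Hypothetical list of GenAI services the organization has not sanctioned.
UNSANCTIONED_AI_HOSTS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

# Hypothetical classification patterns; a real deployment would draw these
# from the data-classification inventory built in the first two steps.
SENSITIVE_PATTERNS = {
    "source_code": re.compile(r"(def |class |#include|import )"),
    "credentials": re.compile(r"(api[_-]?key|password|secret)\s*[:=]", re.I),
}

def flag_shadow_ai_events(proxy_log_lines):
    """Yield (user, host, labels) for outbound requests that look risky."""
    for line in proxy_log_lines:
        event = json.loads(line)  # assumes one JSON object per log line
        if event.get("dest_host") not in UNSANCTIONED_AI_HOSTS:
            continue
        body = event.get("request_body", "")
        labels = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(body)]
        if labels:
            yield event.get("user", "unknown"), event["dest_host"], labels

if __name__ == "__main__":
    sample = [json.dumps({
        "user": "engineer01",
        "dest_host": "api.openai.com",
        "request_body": "def decrypt_yield_tables(): ...  # proprietary",
    })]
    for user, host, labels in flag_shadow_ai_events(sample):
        print(f"ALERT: {user} sent {labels} to {host}")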

Know your GenAI risk tolerance

Similarly, Rodman Ramezanian, global cloud threat lead at Skyhigh Security, noted the importance of knowing one’s risk tolerance. He cautioned that those who aren’t watching the outrageously fast-paced spread of large language models (LLMs) are in for a surprise.

He opined that guardrails are not enough: users must be trained and coached to use sanctioned instances of AI and to avoid those that are not approved, and that training and coaching should be delivered dynamically and incrementally. Doing so improves the overall security posture with each increment.

CISOs, charged with protecting the data of the company, be it intellectual property, customer information, financial forecasts, go-to-market plans, etc., can embrace or chase. Should they choose the latter, they may wish to also prepare for an uptick in incident response, as there will be incidents. If they choose the former, they will find heavy lifting ahead as they work across the enterprise in its entirety and determine what can be brought in-house, as Samsung is doing.

GenAI is inevitable, so be prepared to manage its flow

The way to identify and mitigate potential risks from the use of AI tools is to engage fully with the various entities within the business and create policies, procedures, and approved pathways to use AI for every facet of the operation. That must be followed by employee and contractor education and practical exercises. The CISO and their organization can then get behind and support those efforts, identify the security gaps, suggest ways to mitigate what can be mitigated, and buckle up for the risks still flying free.

Ray added some timely food for thought for CISOs, especially important in today’s fiercely competitive landscape for hiring and retaining cyber talent: the new employee in this space must be able to use the tools available and to build their own, using AI to help perfect them.

After all, AI is a force multiplier that should be harnessed for the good it can do. But while you’re embracing the concept, you must also embrace AI certifications, policies, and procedures, and above all maintain vigilance over where and how AI is being used. Entities particularly need to know and guard their “cheese”: where the prize possessions are located.

The choice is in the CISO’s hands — you can lock up tight and outright ban the chatbots, taking what appear to be all the right steps to prevent shadow AI. Yet, like water seeping through bedrock, users will find a way to leverage it going forward. The wise CISO will be well served by ensuring the necessary channels exist for the water to flow safely and securely.


