Israeli cybersecurity firm Aim Security has put together a SaaS offering tailored specifically to the enterprise risks associated with the use of generative AI (GenAI) tools.
The offering is aimed at providing unified visibility, detection, enforcement, and protection against GenAI risks across varied enterprise use cases: public GenAI tools, enterprise GenAI (such as Microsoft Copilot), and homegrown GenAI applications.
"Aim is a one-stop-shop GenAI security platform, whether it's for apps and products built in-house, third-party applications used by enterprises, or apps used directly by employees, that allows businesses to securely use their private data with GenAI," said Matan Getz, CEO and co-founder at Aim Security. "As companies adopt various types of GenAI tools, and as the number of tools grows, Aim is there to scale with them."
Aim was founded by Getz and Adir Gruss, who serves as the company's chief technology officer. Both were part of a veteran cybersecurity team in the Israel Defense Forces' (IDF's) elite Intelligence Unit 8200.
SaaS for all GenAI risks
Aim's GenAI security platform is designed to cover a range of enterprise use cases. It covers public GenAI tools, such as chatbots, whose use within the organization can lead to data leakage and privacy violations. Enterprise GenAI (tailor-made AI tools for organizational use) such as AI copilots, as well as homegrown GenAI applications, are also included within Aim's protection.
"Aim's GenAI security platform is a single pane of glass, securing all enterprise GenAI use cases while driving business productivity," Getz added. "Beyond security, Aim provides in-depth data and analysis into how GenAI is used in organizations, giving business leaders and executives invaluable insights they can use to improve their own goals."
GenAI platforms have been fueling a significant rise in cyberattacks and security risks. This has given rise to a new set of cybersecurity startups that are specifically working to address these risks.
"Powerful GenAI capabilities are now accessible to a wider audience instead of an elite group of AI and deep learning experts, and it is important to consider the security implications and take steps to ensure privacy and security of company, partner, and customer data," said Melinda Marks, senior analyst at ESG. "There are a number of startups addressing this, including Portal26, Prompt Security, CalypsoAI, etc."
The idea is to help organizations assess what GenAI is being used, help them set policies to limit usage or put guardrails in place for safe usage, and then monitor them to ensure the data is protected, according to Marks.
GenAI security built on data protection offerings
Almost all enterprise-centered GenAI-related risks fall under data leakage or bias. Tools designed to protect against the former therefore include data loss prevention (DLP) solutions. GenAI-based leakage, however, can involve the compromise of huge amounts of data, as models are trained on large corpora.
"This does fall into DLP, but usage of GenAI also brings a scalability issue because there can be so much data transferred to and from LLMs between building the models, and then using the data and generating/changing new data in the natural language interactions and prompts," Marks said. "Organizations need to ensure their sensitive data isn't shared or used in other models, which is especially important for regulated industries like healthcare and finance."
Startups like Aim will need to demonstrate better visibility and control in managing the security risks of GenAI use, including visibility into data uploads and identification of out-of-policy data transfers, according to Marks.
"While it's interesting to see new startups solely focused on GenAI, organizations should talk to their cloud security, CASB, or DLP vendors to learn about their capabilities identifying GenAI usage, ability to create and enforce policies, and monitor for risk, threats, and attacks," Marks added.