Shweta Sharma
Senior Writer

New Israeli startup to help organizations deal with GenAI-related risks

News
31 Jan 2024 | 4 mins
Risk Management

The SaaS offering is specifically targeted at giving organizations visibility into, and protection over, third-party and homegrown generative AI tools.


Israeli cybersecurity platform Aim Security has put together a SaaS offering tailored specifically against enterprise risks associated with the use of generative AI (GenAI) tools.

The offering is aimed at providing collective visibility, detection, enforcement, and protection against GenAI risks spanning varied enterprise use cases: public GenAI, enterprise GenAI (like Microsoft Copilot), and homegrown GenAI.

“Aim is a one-stop-shop GenAI security platform, whether it’s for apps and products built in-house, third-party applications used by enterprises, or apps used directly by employees, that allow businesses to securely use their private data with GenAI,” said Matan Getz, CEO and co-founder of Aim Security. “As companies adopt various types of GenAI tools, and as the number of tools grows, Aim is there to scale with them.”

Aim was founded by Getz and Adir Gruss, who serves as the company’s chief technology officer. Both were part of a veteran cybersecurity team in the Israel Defense Forces’ (IDF) elite Intelligence Unit 8200.

“Aim’s proactive approach to security works to educate us on the right way to leverage GenAI, ensures acceptable use, and enhances our company’s decision-making capabilities – so it’s more than another security tool in our stack,” said Drew Robertson, CISO of Finance of America. “Once we deployed Aim’s platform, we gained granular visibility in spots that were previously limited, I was able to see how GenAI is used and what data is shared on it. These insights helped me drive more GenAI adoption rather than inhibit it – helping business scale, securely.”

SaaS for all GenAI risks

Aim’s GenAI security platform is designed to cover a range of enterprise use cases. It covers public GenAI tools, such as chatbots, whose use within an organization can lead to data leakage and privacy violations. Enterprise GenAI (tailor-made AI tools for organizational use), such as AI copilots, and homegrown GenAI applications are also included within Aim’s protection.

“Aim’s GenAI security platform is a single pane of glass, securing all enterprise GenAI use cases while driving business productivity,” Getz added. “Beyond security, Aim provides in-depth data and analysis into how GenAI is used in organizations, giving business leaders and executives invaluable insights they can use to improve their own goals.”

GenAI platforms have been fueling a significant rise in cyberattacks and security risks. This has spawned a new set of cybersecurity startups working specifically to address these risks.

“Powerful GenAI capabilities are now accessible to a wider audience instead of an elite group of AI and deep learning experts and it is important to consider the security implications and take steps to ensure privacy and security of company, partner, and customer data,” said Melinda Marks, senior analyst at ESG. “There are a number of startups addressing this, including Portal26, Prompt Security, CalypsoAI, etc.”

The idea is to help organizations assess what GenAI is being used, help them set policies to limit usage or put guardrails in place for safe usage, and then monitor them to ensure the data is protected, according to Marks.
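The workflow Marks describes — discover what GenAI is in use, set policies or guardrails, then monitor that data stays protected — can be sketched in code. The following is a hypothetical illustration of the guardrail step, not Aim’s actual product or any vendor’s implementation; the policy names and regex patterns are invented for the example:

```python
import re

# Hypothetical policy patterns for sensitive data. Real deployments
# would use far more robust detection than simple regexes.
POLICIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of all policies the outbound prompt violates."""
    return [name for name, pattern in POLICIES.items() if pattern.search(prompt)]

def guard(prompt: str) -> tuple[bool, list[str]]:
    """Allow the prompt only if no policy matches; otherwise block it
    and report which policies were triggered, for monitoring."""
    violations = check_prompt(prompt)
    return (len(violations) == 0, violations)
```

A monitoring layer would log the returned violation names per user and tool, giving security teams the usage visibility the article describes.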

GenAI security built on data protection offerings

Almost all enterprise GenAI risks fall under data leakage or bias. Tools designed to protect against these therefore include data loss prevention (DLP) solutions. GenAI-related leakage, however, can involve the compromise of huge amounts of data, as models are trained on large corpora.

“This does fall into DLP, but usage of GenAI also brings a scalability issue because there can be so much data transferred to and from LLMs between building the models, and then using the data and generating/changing new data in the natural language interactions and prompts,” Marks said. “Organizations need to ensure their sensitive data isn’t shared or used in other models, which is especially important for regulated industries like healthcare and finance.”
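One common answer to the scale problem Marks raises — large volumes of data flowing into LLM prompts — is to redact sensitive fields before the prompt leaves the organization, and count redactions so out-of-policy transfers can be monitored. A minimal sketch, with invented labels and regex patterns (hypothetical, not a specific vendor’s implementation):

```python
import re

# Hypothetical detection patterns; production systems would use
# stronger classifiers than regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> tuple[str, dict[str, int]]:
    """Replace sensitive fields with placeholder labels and return the
    redacted text plus a per-label count for monitoring dashboards."""
    counts: dict[str, int] = {}
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[{label}]", text)
        if n:
            counts[label] = n
    return text, counts
```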

Startups like Aim will need to demonstrate better visibility and control in managing the security risks of GenAI use, including visibility into data uploads and the ability to identify out-of-policy data transfers, according to Marks.

“While it’s interesting to see new startups solely focused on GenAI, organizations should talk to their cloud security, CASB, or DLP vendors to learn about their capabilities identifying GenAI usage, ability to create and enforce policies, and monitor for risk, threats, and attacks,” Marks added.