The SaaS offering is specifically targeted at providing organizations visibility and protection over third-party and homegrown generative AI tools.

Israeli cybersecurity startup Aim Security has put together a SaaS offering tailored to the enterprise risks associated with the use of generative AI (GenAI) tools. The offering is aimed at providing collective visibility, detection, enforcement, and protection against GenAI risks across varied enterprise use cases: public GenAI, enterprise GenAI (such as Microsoft Copilot), and homegrown GenAI.

"Aim is a one-stop-shop GenAI security platform, whether it's for apps and products built in-house, third-party applications used by enterprises, or apps used directly by employees, that allows businesses to securely use their private data with GenAI," said Matan Getz, CEO and co-founder of Aim Security. "As companies adopt various types of GenAI tools, and as the number of tools grows, Aim is there to scale with them."

Aim was founded by Getz and Adir Gruss, who serves as the company's chief technology officer. Both were part of a veteran cybersecurity team in the Israel Defense Forces' (IDF) elite Intelligence Unit 8200.

"Aim's proactive approach to security works to educate us on the right way to leverage GenAI, ensures acceptable use, and enhances our company's decision-making capabilities, so it's more than another security tool in our stack," said Drew Robertson, CISO of Finance of America. "Once we deployed Aim's platform, we gained granular visibility in spots that were previously limited; I was able to see how GenAI is used and what data is shared on it. These insights helped me drive more GenAI adoption rather than inhibit it, helping the business scale securely."

SaaS for all GenAI risks

Aim's GenAI security platform is designed to cover a range of enterprise use cases.
It supports public GenAI tools, such as chatbots, used within the organization, whose misuse can lead to data leakage and privacy violations. Enterprise GenAI (tailor-made AI tools for organizational use), such as AI copilots, and homegrown GenAI applications are also covered by Aim's protection.

"Aim's GenAI security platform is a single pane of glass, securing all enterprise GenAI use cases while driving business productivity," Getz added. "Beyond security, Aim provides in-depth data and analysis into how GenAI is used in organizations, giving business leaders and executives invaluable insights they can use to improve their own goals."

GenAI platforms have been fueling a significant rise in cyberattacks and security risks, which has given rise to a new set of cybersecurity startups specifically working to address them.

"Powerful GenAI capabilities are now accessible to a wider audience instead of an elite group of AI and deep learning experts, and it is important to consider the security implications and take steps to ensure the privacy and security of company, partner, and customer data," said Melinda Marks, senior analyst at ESG. "There are a number of startups addressing this, including Portal26, Prompt Security, CalypsoAI, etc."

The idea is to help organizations assess what GenAI is being used, help them set policies to limit usage or put guardrails in place for safe usage, and then monitor that usage to ensure data is protected, according to Marks.

GenAI security built on data protection offerings

Almost all enterprise-centered GenAI risks fall under data leakage or bias. Tools designed to protect against them therefore include data loss prevention (DLP) solutions. GenAI-based leakage, however, can involve the compromise of a huge amount of data, as models are trained on large corpora.
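To make the guardrail idea concrete, here is a minimal sketch of an outbound-prompt DLP check of the kind such tools perform: scan a prompt for sensitive patterns before it leaves the organization, redact matches, and record which policies fired for monitoring. This is an illustrative example, not Aim's actual implementation; the pattern names and `redact_prompt` function are hypothetical, and a production engine would use far richer, context-aware detection than regexes.

```python
import re

# Hypothetical policy patterns; a real DLP engine would combine
# classifiers and contextual analysis, not just regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive spans before a prompt is sent to an external LLM.

    Returns the redacted prompt and the list of policy labels that fired,
    which a monitoring pipeline could log for visibility reporting.
    """
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings
```

A gateway sitting between employees and public GenAI tools could call such a function on every prompt, blocking or rewriting out-of-policy requests while logging the findings for the visibility dashboards the article describes.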
"This does fall into DLP, but usage of GenAI also brings a scalability issue, because so much data is transferred to and from LLMs between building the models and then using the data and generating or changing new data in the natural-language interactions and prompts," Marks said. "Organizations need to ensure their sensitive data isn't shared or used in other models, which is especially important for regulated industries like healthcare and finance."

Startups like Aim will need to demonstrate better visibility and control in managing the security risks of GenAI use, including visibility into data uploads and identification of out-of-policy data transfers, according to Marks.

"While it's interesting to see new startups solely focused on GenAI, organizations should talk to their cloud security, CASB, or DLP vendors to learn about their capabilities for identifying GenAI usage, creating and enforcing policies, and monitoring for risks, threats, and attacks," Marks added.