Shweta Sharma
Senior Writer

Orca to offer armor against AI adoption risks

News
19 Mar 2024 | 4 mins
Risk Management | Security Software

The company's new AI-security posture management (AI-SPM) offering is designed to secure an organization’s AI projects against sensitive data exposure and risky access.

To help companies scale business operations with AI without having to worry about the technology’s underlying risks, cybersecurity provider Orca Security has rolled out an AI-SPM offering available through its flagship, SaaS-based cloud security platform.

Orca claims the new AI-SPM capabilities, including features such as AI bill of materials (BOM), sensitive data detection, and public access visibility, will help organizations securely access popular AI services including Amazon SageMaker and Bedrock, Azure OpenAI, and Vertex AI.

“Orca revealed through its 2024 State of Cloud Security research that 82% of AWS SageMaker users have exposed notebooks, which can often contain sensitive training data,” Orca said in a press statement. “This makes the cloud resources AI models rely upon a potentially lucrative target for attackers.”

Apart from the AI-SPM offering, Orca’s SaaS platform presently offers a suite of cloud security capabilities including cloud-native application protection platform (CNAPP), cloud security posture management (CSPM), cloud workload protection (CWP), cloud infrastructure entitlement management (CIEM), and cloud detection and response, among others.

Works on existing SideScanning capabilities

According to Orca, the new AI-SPM offering is built on the company’s existing SideScanning capabilities for cloud-based workloads. Orca’s SideScanning is an agentless cloud workload visibility offering that collects data from the workload’s runtime block storage to provide a virtual read-only view.

“With the introduction of AI-SPM, Orca is leveraging its patented agentless SideScanning technology to provide the same visibility, risk insight, and deep data for AI models that it does for other cloud resources,” Orca said. “The tool also addresses use cases unique to AI security, including detecting sensitive data in training sets.”

Experts, however, think the SideScanning technology could use real-time support. “SideScanning is primarily useful for visibility, but the technology is not real-time threat detection and response: it takes data from the cloud APIs and comes with a built-in lag. From a posture and visibility perspective, this is helpful, but for investigation and response, something real-time is required,” said Story Tweedie-Yates, head of product marketing for RAD Security.

This AI visibility enables Orca to create an inventory of all the AI models deployed within organizational environments, allowing organizations to build an AI bill of materials (BOM). At launch, the capability can detect 50 AI models and software packages, including Azure OpenAI, Amazon Bedrock, Google Vertex AI, AWS SageMaker, PyTorch, TensorFlow, OpenAI, Hugging Face, and LangChain.
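Orca has not published how its BOM is assembled. As a rough, hypothetical illustration only, one way an AI BOM can be approximated is by matching a project's declared dependencies against a catalog of known AI packages (the catalog and matching logic below are simplified assumptions, not Orca's method):

```python
# Illustrative sketch only: build a minimal AI bill of materials by
# matching declared dependencies against known AI packages.
# The package catalog and parsing are simplified assumptions.
KNOWN_AI_PACKAGES = {
    "torch": "PyTorch",
    "tensorflow": "TensorFlow",
    "openai": "OpenAI SDK",
    "transformers": "Hugging Face Transformers",
    "langchain": "LangChain",
}

def build_ai_bom(requirements_text: str) -> list[dict]:
    """Return recognized AI packages with their pinned versions."""
    bom = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        key = name.strip().lower()
        if key in KNOWN_AI_PACKAGES:
            bom.append({
                "package": key,
                "label": KNOWN_AI_PACKAGES[key],
                "version": version.strip() or "unpinned",
            })
    return bom

reqs = "torch==2.2.1\nrequests==2.31.0\ntransformers==4.38.0\nlangchain\n"
for entry in build_ai_bom(reqs):
    print(entry["label"], entry["version"])
```

A production inventory would also have to cover cloud-hosted services (Bedrock, Vertex AI) discovered via cloud APIs rather than dependency files, which is where fragmentation becomes the hard part, as Orca's CEO notes below.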

“An AI/ML BOM is extremely difficult to manage because the sources are highly fragmented,” said Gil Geron, CEO and co-founder of Orca. “Organizations use a combination of different services, and unifying all this information into one coherent inventory and security flow is difficult, but Orca is well positioned to do so with its wide array of security capabilities, including CIEM, agentless workload scanning, DSPM, and shift-left repository scanning.”

Additional detection and protection offerings

Apart from the AI/ML BOM, the new offering provides advanced detection capabilities for sensitive data exposed within or adjacent to a company’s existing AI projects. The sensitive data may include telephone numbers, email addresses, Social Security numbers, or personal health information.
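At its simplest, this kind of detection reduces to classifying text against patterns for each sensitive data class. The sketch below is a deliberately simplified assumption of how such a classifier might look; real data security posture management (DSPM) engines use far more robust detectors than these regexes:

```python
import re

# Illustrative sketch: naive regex detectors for a few sensitive data
# classes named in the article. Patterns are rough assumptions for
# demonstration, not a production DSPM classifier.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_for_sensitive_data(text: str) -> dict[str, list[str]]:
    """Return matches per sensitive data class found in the text."""
    results = {}
    for name, pattern in PATTERNS.items():
        hits = pattern.findall(text)
        if hits:
            results[name] = hits
    return results

sample = "Contact jane@example.com or 555-867-5309; SSN 123-45-6789."
print(scan_for_sensitive_data(sample))
```

Note that the SSN and phone patterns are mutually exclusive here (3-2-4 vs. 3-3-4 digit groups), so a value matches at most one class.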

Additionally, the offering provides third-party and public access detection to these AI projects.

Third-party access detection leverages Orca’s code repository scanning to detect when keys and tokens for AI services such as OpenAI and Hugging Face are unsafely exposed in code repositories, and alerts the organization in order to prevent leakage of the models or training data, according to the company.
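The article does not describe Orca's matching rules, but secret scanning of this kind commonly reduces to pattern matching on recognizable token prefixes. The sketch below assumes the commonly observed prefixes `sk-` for OpenAI keys and `hf_` for Hugging Face tokens; treat both the prefixes and the length thresholds as assumptions that providers can change at any time:

```python
import re

# Illustrative secret-scanning sketch, not Orca's implementation.
# Token formats are assumptions based on commonly observed prefixes:
# OpenAI keys often begin with "sk-", Hugging Face tokens with "hf_".
TOKEN_PATTERNS = {
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),
}

def scan_file_for_tokens(path: str, text: str) -> list[tuple[str, str, int]]:
    """Return (path, token_type, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for token_type, pattern in TOKEN_PATTERNS.items():
            if pattern.search(line):
                findings.append((path, token_type, lineno))
    return findings

code = 'API_KEY = "sk-abcdefghijklmnopqrstuv"\nHF = "hf_ABCDEFGHIJKLMNOPQRSTUV"\n'
for finding in scan_file_for_tokens("app/config.py", code):
    print(finding)
```

Prefix matching alone produces false positives (e.g., test fixtures), which is why real scanners typically add entropy checks or validate candidate tokens against the provider before alerting.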

For public access detection, Orca uses its existing insight into AI model settings and network access to alert the organization whenever an AI data source is publicly accessible. Within the AI-SPM offering, Orca has also launched a new “AI best practices” compliance framework, with a set of rules for the proper upkeep of AI in organizations. This will typically cover areas like AI model network security, data protection, access controls, and identity and access management (IAM).
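Conceptually, a public-access check evaluates a resource's access configuration for rules that grant anonymous access. The config schema below is hypothetical, loosely modeled on common cloud storage policy settings, and is not Orca's data model:

```python
# Illustrative sketch: flag AI data sources whose configuration makes
# them publicly reachable. The resource schema is a hypothetical stand-in
# for real cloud policy objects (e.g., bucket policies).
def is_publicly_accessible(resource: dict) -> bool:
    """A resource is flagged if any allow-rule grants anonymous ("*") access
    and no account-level public-access block overrides it."""
    if resource.get("block_public_access", False):
        return False
    return any(
        rule.get("principal") == "*" and rule.get("effect") == "Allow"
        for rule in resource.get("access_rules", [])
    )

bucket = {
    "name": "training-data",
    "block_public_access": False,
    "access_rules": [
        {"principal": "*", "effect": "Allow", "action": "s3:GetObject"},
    ],
}
print(is_publicly_accessible(bucket))  # → True
```

The design point worth noting is precedence: a blanket public-access block wins over any individual allow-rule, mirroring how cloud providers layer account-level blocks over resource policies.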

“Much of app development is happening in cloud environments, so it is particularly important for cloud security providers like Orca to provide AI-specific security measures,” Tweedie-Yates said. “This offering should provide great support for organizations that want to get a head start on implementing AI-specific security in their organizations, and hopefully more CSPM providers will follow this lead.”