
Shweta Sharma
Senior Writer

Orca integrates cloud app security platform with GPT-4

News
05 May 2023 | 5 mins
Application Security | Artificial Intelligence | Cloud Security

Orca’s existing GPT integration with its cloud-native application protection platform (CNAPP) receives a GPT-4 upgrade, along with a few other enhancements.

Credit: Getty Images

Agentless cloud security provider Orca Security has integrated GPT-4, via the Microsoft Azure OpenAI Service, into its cloud-native application protection platform (CNAPP) under the ChatGPT implementation program that the cybersecurity company started earlier this year.

“With our transition to Azure OpenAI, our customers benefit from the security, reliability, and enterprise-level support that Microsoft provides,” said Avi Shua, chief innovation officer and co-founder of Orca Security. “By integrating GPT-4 into Orca Security’s CNAPP platform, security practitioners can instantly generate high-quality remediation instructions for the platform of their choice.”

The integration could help devsecops teams working in cloud environments.

“In cloud native applications, it is ideal to make as many changes as possible early in the lifecycle, e.g. in IaC tools or Terraform, as teams generally struggle to address all the issues that security tools identify in production,” said Jimmy Mesta, co-founder and chief technology officer of KSOC, a Kubernetes security company. “Orca’s intention is to address this reality by trying to help customers reduce the amount of time spent actioning on the alerts from their solution.”

Additionally, Orca has announced a suite of new features that accompany the integration. Both the integration and the enhancements are available immediately.

GPT enables queries about remediation instructions

With a Representational State Transfer (REST) API-based integration to OpenAI’s generative pre-trained transformer (GPT) engine, Orca aims to help security practitioners generate remediation instructions for each alert raised by the Orca CNAPP platform.
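To make the mechanics concrete, here is a minimal sketch of what such a REST-based call could look like using the Azure OpenAI Python client. It is illustrative only, not Orca’s implementation; the endpoint, deployment name, prompt, and alert fields are all invented for the example.

# Hypothetical sketch: generating remediation text for a CNAPP-style
# alert via Azure OpenAI. Not Orca's implementation; the deployment
# name, prompt, and alert payload are invented for illustration.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
    azure_endpoint="https://<resource>.openai.azure.com",
)

alert = {  # simplified, made-up alert payload
    "title": "S3 bucket allows public read access",
    "resource": "arn:aws:s3:::example-logs",
    "platform": "Pulumi",
}

messages = [
    {"role": "system",
     "content": "You are a cloud security assistant. Produce step-by-step "
                "remediation instructions for the requested platform."},
    {"role": "user",
     "content": f"Alert: {alert['title']} on {alert['resource']}. "
                f"Generate remediation for {alert['platform']}."},
]

response = client.chat.completions.create(
    model="gpt-4",  # the Azure deployment name, not the raw model ID
    messages=messages,
)
print(response.choices[0].message.content)

# Follow-up questions reuse the same conversation, mirroring the
# in-platform chat experience described later in this article.
messages.append({"role": "assistant",
                 "content": response.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Can you show the same fix for Terraform instead?"})
follow_up = client.chat.completions.create(model="gpt-4", messages=messages)
print(follow_up.choices[0].message.content)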

“Orca is announcing the use of GPT-4 to generate remediation instructions for the alerts its product creates. Those remediation instructions would be used in different places dependent on the nature of the recommendation; for example, they could apply to an Infrastructure as Code (IaC) tool or a cloud services account like Azure Kubernetes Service (AKS) or Google Kubernetes Engine (GKE),” Mesta said.

The generated remediation instructions can be copied and pasted into platforms such as Terraform, Pulumi, AWS CloudFormation, AWS Cloud Development Kit, Azure Resource Manager, Google Cloud Deployment Manager, and Open Policy Agent.
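As a hypothetical illustration of what such pasted output might look like, the following Pulumi (Python) snippet locks down a publicly readable S3 bucket. It is not actual Orca or GPT-4 output, the resource names are invented, and it is meant to run inside an existing Pulumi program.

# Hypothetical remediation snippet a practitioner might paste into a
# Pulumi (Python) program; not actual Orca/GPT-4 output.
import pulumi_aws as aws

# Define the bucket with a private ACL instead of public-read.
logs_bucket = aws.s3.Bucket("example-logs", acl="private")

# Block all public access at the bucket level as defense in depth.
aws.s3.BucketPublicAccessBlock(
    "example-logs-public-access-block",
    bucket=logs_bucket.id,
    block_public_acls=True,
    block_public_policy=True,
    ignore_public_acls=True,
    restrict_public_buckets=True,
)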

Additionally, developers can ask ChatGPT — a large language model (LLM) based on the GPT architecture — follow-up questions about remediation directly from the Orca Platform.

“Orca shows alerts from cloud misconfigurations in runtime, after deployment, so at the point the alerts are shown, the issue is already present. The integration is useful in the sense of going backwards into the application development lifecycle to fix the issue in code. Kind of like, ‘detect in production, fix early in the lifecycle,’” Mesta said.

GPT-4 automates code-snippet creation

Orca introduced support for GPT-3, an earlier version of the model, in the Orca Platform in January and has since claimed a dramatic reduction in customers’ mean time to remediation (MTTR). The GPT-4 integration is expected to build on that momentum, as the model upgrade brings improved accuracy along with the ability to generate code snippets.

Other enhancements that accompany GPT-4 integration for Orca include “prompt optimization to produce even more accurate remediation responses, inclusion of remediation instructions in assigned Jira tickets, support for Open Policy Agent (OPA) remediation, and new cloud provider specific remediation methods including AWS, Azure, and Google Cloud,” according to Shua.

Open Policy Agent (OPA) is an open-source, general-purpose policy engine that enables the implementation of policy as code. It provides a declarative language called Rego that allows users to specify policies as rules that evaluate whether a request should be allowed or denied.
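In practice, OPA typically runs as a server and is queried over its REST Data API. The sketch below shows the kind of allow/deny evaluation the engine performs; it assumes a locally running OPA instance preloaded with a hypothetical Rego package named “authz” that defines an “allow” rule, so the package path and input fields are assumptions for the example.

# Minimal sketch: querying a locally running OPA server for a decision.
# Assumes OPA was started with `opa run --server` and loaded with a
# hypothetical Rego package "authz" defining an "allow" rule.
import json
import urllib.request

decision_input = {"input": {"user": "alice", "action": "delete-bucket"}}

req = urllib.request.Request(
    "http://localhost:8181/v1/data/authz/allow",  # OPA Data API endpoint
    data=json.dumps(decision_input).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    decision = json.load(resp)

# For a boolean rule, OPA returns {"result": true} or {"result": false}.
print("allowed" if decision.get("result") else "denied")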

Additionally, the GPT-4 integration brings Microsoft’s security and enterprise support, including privacy, compliance, a 99.9% uptime SLA, and regional availability.

“Even though Orca already ensures privacy by anonymizing requests and masking any sensitive information before submitting to GPT, Azure OpenAI provides further privacy assurances and is fully regulatory compliant (HIPAA, SOC 2, etc.),” Shua said.

GPT integration raises data security questions

Despite his appreciation for Orca’s integration effort, Mesta has some reservations about the risks of using GPT to process any kind of customer data.

“The first issue is the fact that, as AI models go, GPT is trained using other people’s data, and that is the information the model draws from. They don’t use your data to train the model, which is why, on several occasions, the model is known to have simply made up answers based on arbitrary references. If that happened here, false remediation advice could create more harm than good,” he said.

Mesta’s second concern is the security of the data uploaded to GPT systems, which Orca and Microsoft’s joint efforts are claimed to largely address. He cites a recent Samsung incident in which employees put confidential information into ChatGPT, and points out that “such human error is always a possibility when another system opens up, but it is especially an issue with the conversational appeal of GPT.”

“What happens if you need to describe a location for secret stores and source code in the remediation guidelines and someone accidentally puts in confidential information? The intention might not be malicious, but the action could be quite damaging,” Mesta added.

Several companies and countries are introducing some form of restrictions on the use of GPT-based models for privacy reasons. “These decisions validate the real risk involved, whether you are a government body or a security vendor,” he said.