Three defining concerns associated with the security of AI include trust in AI, ethical application of AI, and cybersecurity of AI, according to the SIA research on 2024 security megatrends.

AI has topped the list of emerging trends likely to impact the enterprise security segment in 2024, according to a study by the Security Industry Association (SIA). The research, which surveyed hundreds of security industry business leaders, including several volunteers and speakers from the 2023 Securing New Ground (SNG) conference, indicated a multifaceted penetration of AI into the security segment.

“Ninety-three percent said they expected to see generative artificial intelligence (AI) like ChatGPT make an impact upon their business strategies within the next 5 years, and over 89% said that they had AI projects active in their research and development (R&D) pipelines,” the SIA report said.

Other key trends outlined in the study include the expansion and evolution of security’s return on investment, the evolution of the integration business model, security as a service (SaaS), real estate re-optimization, and IT-OT convergence.

AI to make a multifaceted impact

The first four entries on the list are AI-related trends that SIA expects to make a substantial impact on the segment in the coming year. Topping the list was AI security, which refers to the cybersecurity practices that protect data, IP, and corporate integrity as businesses of all sizes adopt AI.

“AI has become more accessible over the past few years, being used for both good and bad,” said Pankit Desai, co-founder and chief executive officer at cybersecurity firm Sequretek. “From an attacker’s perspective, AI-based attacks are much more efficient and difficult to spot.
For example, a social engineering attack carried out with the help of AI technologies will have more convincing language, representation, and deepfakes.”

Three defining concerns associated with the security of AI include trust in AI, ethical application of AI, and cybersecurity of AI, according to the research.

The second trend observed in the study is the adoption of AI-infused digital cameras, which are rapidly becoming an “everything tool” in the security industry, using AI to permanently change the value proposition of video surveillance to “video intelligence.”

“The Internet of Things (IoT) was once perceived to be a vast network of sensors. As it turns out, many of those sensors will be cameras,” said the report. “Seventy-eight million security cameras were shipped globally in 2022.”

AI regulation is expected to catch up

Generative AI, which produces content from text prompts and is trained on a huge corpus of public data, emerged as another leading trend. The study revealed that this technology is likely to change the security industry. “LLM applications (like ChatGPT) will be applied to security systems data,” the report said. “Generative AI will be used for content creation and solving business operational challenges.”

Forty-eight percent of security solution developers expect generative AI to have a strong impact on their strategy within the next 5 years, the study revealed. Additionally, 74% characterized their firm’s R&D investments as fully, heavily, or somewhat focused on AI.

However, because of generative AI’s instant popularity and easy accessibility, it has become extremely important that regulatory and legal frameworks step in and establish guidelines around it, according to Desai.

“Building a regulatory framework around Gen AI technology requires careful consideration of various factors to ensure ethical, responsible, and safe development and deployment,” said Kumar Ritesh, founder and CEO at cybersecurity company Cyfirma.
“We would say transparency, fairness, accountability, privacy, and standardization would be very important.”

AI regulation has thus become a trending topic in security, with several countries looking to bring forth their own AI guidebooks. It settles at the fourth spot on the list, as the study reveals the likelihood that several limitations will be incorporated into some of the data sets that leading AI systems use.