Cybersecurity professionals expressed a wide range of opinions on the pros and cons of generative AI in a new survey from a prominent certification group.

The wildfire spread of generative AI has already had noticeable effects, both good and bad, on the day-to-day lives of cybersecurity professionals, a study released this week by the non-profit ISC2 group has found.

The study – which surveyed more than 1,120 cybersecurity pros, mostly with CISSP certification and working in managerial roles – found a considerable degree of optimism about the role of generative AI in the security realm. More than four in five (82%) said they at least “somewhat agree” that AI is likely to improve the efficiency with which they can do their jobs.

Respondents also saw wide-ranging potential applications for generative AI in cybersecurity work, the study found. Everything from actively detecting and blocking threats to identifying potential weak points in security to user behavioral analysis was cited as a potential use case. Automating repetitive tasks was also seen as a potentially valuable application of the technology.

Will generative AI help hackers more than security pros?

There was less consensus, however, on whether the overall impact of generative AI will be positive from a cybersecurity point of view. Serious concerns around social engineering, deepfakes, and disinformation – along with a slight majority who said AI could make some parts of their work obsolete – mean that more respondents believe AI could benefit bad actors more than security professionals.

“The fact that cybersecurity professionals are pointing to these types of information and deception attacks as the biggest concern is understandably a great worry for organizations, governments and citizens alike in this highly political year,” the study’s authors wrote.
Some of the biggest issues cited by respondents, in fact, are less concrete cybersecurity problems than general regulatory and ethical concerns. Fifty-nine percent said that the current lack of regulation around generative AI is a real issue, 55% cited privacy issues, and 52% said data poisoning (accidental or otherwise) was a concern.

Because of those worries, substantial minorities said they were blocking employee access to generative AI tools – 12% said their ban was total and 32% said it was partial. Just 29% said they were allowing generative AI tool access, while a further 27% said they either hadn’t discussed the issue or weren’t sure of their organization’s policy on the matter.