Upgraded features designed to tackle novel email attacks and increasingly complex malicious communication powered by generative AI, including ChatGPT and other large language models.

Credit: Oatawa / Shutterstock

Darktrace has announced an upgrade to its Darktrace/Email product with enhanced features that defend organizations from evolving cyberthreats, including generative AI business email compromise (BEC) and novel social engineering attacks. Among the new capabilities are an AI-employee feedback loop; account takeover protection; insights from endpoint, network, and cloud; and behavioral detections of misdirected emails, the vendor said.

The upgrade comes amid growing concern about the ability of generative AI – such as ChatGPT and other large language models (LLMs) – to enhance phishing email attacks and provide an avenue for threat actors to craft more sophisticated and targeted campaigns at speed and scale.

“Normal” pattern knowledge key to tackling novel, generative AI email attacks

As part of the Darktrace Cyber AI Loop, Darktrace/Email’s new capabilities help it detect attacks as soon as they are launched, the firm said in a press release. That’s because it is not trained on what “bad” historically looks like based on past attacks; instead, it learns the normal patterns of life for each unique organization, according to Darktrace. This is key to tackling novel email attacks and linguistically complex malicious communication driven by AI technologies such as ChatGPT and other LLMs. It also enables Darktrace/Email to detect novel email attacks 13 days earlier, on average, than email security tools trained on knowledge of past threats, Darktrace claimed.

With this upgrade, Darktrace Cyber AI Analyst combines anomalous email activity with other data sources – including endpoint, network, cloud, apps, and OT – to automate investigations and incident reporting, Darktrace said.
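The "learn normal, flag deviation" approach Darktrace describes can be illustrated with a minimal sketch. This is a generic z-score check for illustration only, not Darktrace's proprietary models; the feature (message length per sender) and thresholds are assumptions chosen for the example:

```python
from statistics import mean, stdev

# Illustrative only: a toy per-sender baseline built from one email
# feature (body length in characters). Real systems combine many
# behavioral signals; this just shows the anomaly-detection idea of
# learning "normal" rather than matching known-bad signatures.

def z_score(history, observed):
    """Return how many standard deviations `observed` lies from the
    sender's historical mean. A large score suggests an anomaly."""
    if len(history) < 2:
        return 0.0  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma

# Baseline: this sender usually writes short messages (~200 chars).
history = [180, 210, 195, 205, 190]

# An unusually long message scores far above a typical alert
# threshold (e.g., 3 standard deviations), while a normal one does not.
print(z_score(history, 1500) > 3)
print(z_score(history, 200) > 3)
```

Because the baseline is per organization (or per sender), an email that would look benign in one environment can still stand out in another, which is the property the article attributes to this approach.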
Through greater context around its discoveries, Darktrace’s AI is now capable of more informed decision-making, with algorithms providing a detailed picture of “normal” based on multiple perspectives to produce high-fidelity conclusions that are contextualized and actionable, according to the vendor.

Darktrace/Email’s new capabilities include:

- Account takeover and email protection in a single product
- Behavioral detections of misdirected emails, preventing intellectual property or confidential information from being sent to the wrong recipient
- An employee-AI loop that leverages insights from individual employees to inform Darktrace’s AI and provide real-time, in-context insights and security awareness
- Intelligent mail management for improved productivity against graymail, spam, and newsletters that clutter inboxes
- Optimized workflows and integrations for security teams, including the Darktrace mobile app
- Automated investigations of email incidents across other coverage areas with Darktrace’s Cyber AI Analyst

Widespread concern over ChatGPT-enhanced email attacks, malicious activity

Since the launch of ChatGPT by OpenAI last year, there has been widespread debate and concern over the chatbot’s ability to make social engineering/phishing attacks more sophisticated, easier to carry out, and more likely to succeed. Darktrace data revealed a 135% increase in novel social engineering attacks across thousands of its active email customers from January to February 2023, corresponding with the mass adoption of ChatGPT. These attacks involved the use of sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, the firm said. Furthermore, 82% of 6,711 global employees surveyed by Darktrace said they feared attackers could use generative AI to create scam emails indistinguishable from genuine communication.
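The linguistic signals the article mentions (text volume, punctuation, sentence length) are straightforward to compute. The sketch below is a hypothetical illustration of extracting such features, not Darktrace's detection logic; real systems would combine many more signals and baselines:

```python
import re

# Illustrative only: compute the simple linguistic features named in
# the article. The sentence splitter is deliberately naive (splits on
# ., !, ?) and would need refinement for real text.

def linguistic_features(text):
    """Return text volume (word count), punctuation count, and
    average sentence length in words for a message body."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "text_volume": len(words),
        "punctuation_count": sum(1 for ch in text if ch in ".,;:!?"),
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }

print(linguistic_features("Please review the attached invoice. Reply today!"))
```

Tracking how such features shift against a sender's historical baseline is one generic way to surface the "increased text volume, punctuation, and sentence length" pattern the article describes.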
Last week, Europol warned that ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing, while the capability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. “This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors,” Europol said.

In February, a BlackBerry study of 500 UK IT decision-makers revealed that 72% are concerned by ChatGPT’s potential to be used for malicious purposes, with most believing that foreign states are already using the chatbot against other nations. Furthermore, 48% of respondents predicted that a successful cyberattack will be credited to ChatGPT within the next 12 months, and 88% said that governments have a responsibility to regulate advanced technologies such as ChatGPT.