September 15, 2025
Today’s cyber defenders are overwhelmed by a deluge of security alerts, threat intelligence signals, and shifting business context, creating an urgent need for AI systems that can enhance operational security work. Despite the potential of Large Language Models (LLMs) to automate and scale Security Operations Center (SOC) operations, existing evaluations fall short of assessing the scenarios that matter most to real-world cyber defenders. This lack of informed evaluation has significant implications for both AI developers and those seeking to apply LLMs to SOC automation. Without a clear understanding of how LLMs perform in real-world security scenarios, AI system developers lack a north star to guide their development efforts, and users are left without a reliable way to select the most effective models. Furthermore, malicious actors have begun using AI to scale cyber attacks, emphasizing the need for open-source benchmarks to drive adoption and community-driven improvement among defenders and AI model developers. To address this gap, we introduce CyberSOCEval, a new suite of open-source benchmarks that are part of CyberSecEval 4. CyberSOCEval consists of benchmarks tailored to evaluate LLMs on two tasks: Malware Analysis and Threat Intelligence Reasoning, core defensive domains with inadequate coverage in current security benchmarks. Our evaluations reveal that larger, more modern LLMs tend to perform better, confirming the training scaling laws paradigm. We also find that reasoning models leveraging test-time scaling do not achieve the boost they do in areas like coding and math, suggesting that these models have not been trained to reason about cybersecurity analysis, and pointing to a key opportunity for improvement. Finally, we find that current LLMs are far from saturating our evaluations, demonstrating that CyberSOCEval presents a significant hill to climb for AI developers to improve AI cyber defense capabilities.
Written by
Lauren Deason
Adam Bali
Ciprian Bejean
Diana Bolocan
James Crnkovich
Ioana Croitoru
Krishna Durai
Chase Midler
Calin Miron
David Molnar
Brad Moon
Bruno Ostarcevic
Alberto Peltea
Matt Rosenberg
Catalin Sandu
Arthur Saputkin
Sagar Shah
Daniel Stan
Ernest Szocs
Shengye Wan
Spencer Whitman
Sven Krasser
Joshua Saxe
Publisher
arXiv