September 15, 2025
Today’s cyber defenders are overwhelmed by a deluge of security alerts, threat intelligence signals, and shifting business context, creating an urgent need for AI systems that can enhance operational security work. Despite the potential of Large Language Models (LLMs) to automate and scale Security Operations Center (SOC) operations, existing evaluations fall short of assessing the scenarios that matter most to real-world cyber defenders. This gap has significant implications for both AI developers and those seeking to apply LLMs to SOC automation: without a clear understanding of how LLMs perform in real-world security scenarios, AI system developers lack a north star to guide their development efforts, and users are left without a reliable way to select the most effective models. Furthermore, malicious actors have begun using AI to scale cyber attacks, underscoring the need for open-source benchmarks that drive adoption and community-driven improvement among defenders and AI model developers. To address this gap, we introduce CyberSOCEval, a new suite of open-source benchmarks released as part of CyberSecEval 4. CyberSOCEval consists of benchmarks tailored to evaluate LLMs on two tasks, Malware Analysis and Threat Intelligence Reasoning, core defensive domains with inadequate coverage in current security benchmarks. Our evaluations reveal that larger, more modern LLMs tend to perform better, consistent with the training scaling laws paradigm. We also find that reasoning models leveraging test-time scaling do not achieve the same boost they do in areas like coding and math, suggesting that these models have not been trained to reason about cybersecurity analysis and pointing to a key opportunity for improvement. Finally, we find that current LLMs are far from saturating our evaluations, demonstrating that CyberSOCEval presents a significant hill for AI developers to climb in improving AI cyber defense capabilities.
Written by
Lauren Deason
Adam Bali
Ciprian Bejean
Diana Bolocan
James Crnkovich
Ioana Croitoru
Krishna Durai
Chase Midler
Calin Miron
David Molnar
Brad Moon
Bruno Ostarcevic
Alberto Peltea
Matt Rosenberg
Catalin Sandu
Arthur Saputkin
Sagar Shah
Daniel Stan
Ernest Szocs
Shengye Wan
Spencer Whitman
Sven Krasser
Joshua Saxe
Publisher
arXiv