
Benchmarking Attacks on Learning with Errors

August 09, 2024

Abstract

Lattice cryptography schemes based on the learning with errors (LWE) hardness assumption have been standardized by NIST for use as post-quantum cryptosystems, and by HomomorphicEncryption.org for performing encrypted computations on sensitive data. Thus, understanding their concrete security is critical. Most work on LWE security focuses on theoretical estimates of attack performance, which is important but may overlook attack nuances arising in real-world implementations. The sole existing concrete benchmarking effort, the Darmstadt Lattice Challenge, does not include benchmarks relevant to the standardized LWE parameter choices, such as small secret and small error distributions, or the Ring-LWE (RLWE) and Module-LWE (MLWE) variants. To improve our understanding of concrete LWE security, we provide the first benchmarks for LWE secret recovery on standardized parameters, for small and low-weight (sparse) secrets. We evaluate four LWE attacks in these settings to serve as a baseline: the Search-LWE attacks uSVP, SALSA, and Cool&Cruel, and the Decision-LWE attack Dual Hybrid Meet-in-the-Middle (MitM). We extend the SALSA and Cool&Cruel attacks in significant ways, and implement and scale up MitM attacks for the first time. For example, we recover Hamming weight 9-11 binomial secrets for Kyber (κ = 2) parameters in 28-36 hours with SALSA and Cool&Cruel, and we find that MitM can solve Decision-LWE instances for Hamming weights up to 4 in under an hour for Kyber parameters, while uSVP attacks do not recover any secrets after running for more than 1100 hours. We also compare concrete attack performance against theoretical estimates. Finally, we open source the code to enable future research: https://github.com/facebookresearch/LWE-benchmarking
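
To make the benchmark setting concrete, the minimal Python sketch below (illustrative only, not the paper's benchmarking code; the values n = 256, q = 3329, and eta = 2 are Kyber-style choices assumed here for illustration) generates a toy LWE instance b = A·s + e mod q with a secret of a fixed, small Hamming weight:

import numpy as np

def sample_lwe_instance(n=256, q=3329, m=512, hamming_weight=10, eta=2, seed=0):
    # Sample a toy LWE instance (A, b) with a sparse centered-binomial secret.
    rng = np.random.default_rng(seed)

    def centered_binomial(size):
        # B(eta): difference of two Binomial(eta, 1/2) draws, values in [-eta, eta].
        return rng.binomial(eta, 0.5, size) - rng.binomial(eta, 0.5, size)

    # Sparse secret: pick the support, then nonzero centered-binomial values.
    s = np.zeros(n, dtype=np.int64)
    support = rng.choice(n, size=hamming_weight, replace=False)
    vals = centered_binomial(hamming_weight)
    while np.any(vals == 0):  # resample zeros so the Hamming weight is exact
        vals[vals == 0] = centered_binomial(int(np.sum(vals == 0)))
    s[support] = vals

    A = rng.integers(0, q, size=(m, n), dtype=np.int64)  # uniform public matrix
    e = centered_binomial(m)                              # small error vector
    b = (A @ s + e) % q
    return A, b, s, e

A, b, s, e = sample_lwe_instance()
print("secret Hamming weight:", int(np.count_nonzero(s)))

Search-LWE attacks such as uSVP, SALSA, and Cool&Cruel aim to recover s from (A, b); Decision-LWE attacks such as Dual Hybrid MitM only need to distinguish (A, b) from a uniformly random pair.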


AUTHORS

Emily Wenger

Eshika Saxena

Mohamed Malhou

Ellie Thieu

Kristin Lauter

Publisher

arXiv and the IEEE Symposium on Security and Privacy 2025

Research Topics

Core Machine Learning
