February 12, 2024
Learning With Errors (LWE) is a hard mathematical problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Encapsulation Mechanism (KEM) standardized by NIST [13] is based on module LWE, and current publicly available PQ Homomorphic Encryption (HE) libraries are based on ring LWE [2]. The security of LWE-based PQ cryptosystems is critical, but certain implementation choices could weaken them. One such choice is sparse binary secrets, desirable for PQ HE schemes for efficiency reasons. Prior work, Salsa [51], demonstrated a machine-learning-based attack on LWE with sparse binary secrets in small dimensions (𝑛 ≤ 128) and low Hamming weights (ℎ ≤ 4). However, this attack assumes access to millions of eavesdropped LWE samples and fails at higher Hamming weights or dimensions. We present Picante, an enhanced machine-learning attack on LWE with sparse binary secrets, which recovers secrets in much larger dimensions (up to 𝑛 = 350) and with larger Hamming weights (roughly 𝑛/10, and up to ℎ = 60 for 𝑛 = 350). We achieve this dramatic improvement via a novel preprocessing step, which allows us to generate training data from a linear number of eavesdropped LWE samples (4𝑛) and changes the distribution of the data to improve transformer training. We also improve the secret recovery methods of Salsa and introduce a novel cross-attention recovery mechanism, allowing us to read off the secret directly from the trained models. While Picante does not threaten NIST's proposed LWE standards, it demonstrates significant improvement over Salsa and could scale further, highlighting the need for future investigation into machine learning attacks on LWE with sparse binary secrets.
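To make the setting concrete, the following is a minimal numpy sketch of LWE sample generation with a sparse binary secret. The parameters here (n = 16, h = 3, modulus q = 3329, error width) are illustrative toy values chosen for the sketch, not the parameters attacked in the paper.

```python
import numpy as np

# Toy LWE instance with a sparse binary secret (illustrative parameters,
# far smaller than the n = 350, h = 60 settings the paper attacks).
n, q, h, m = 16, 3329, 3, 64   # dimension, modulus, Hamming weight, #samples
rng = np.random.default_rng(0)

# Sparse binary secret: exactly h of the n coordinates are 1.
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=h, replace=False)] = 1

# LWE samples: b = A @ s + e (mod q), with small rounded-Gaussian error e.
# An attacker sees the pairs (A, b) and must recover s.
A = rng.integers(0, q, size=(m, n))
e = np.rint(rng.normal(0.0, 3.0, size=m)).astype(np.int64)
b = (A @ s + e) % q
```

Because s is binary and sparse, b − A·s (mod q) is just the small error e, which is the structure that makes these instances a tempting target for the attacks described above.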
Written by
Cathy Li
Jana Sotakova
François Charton
Emily Wenger
Evrard Garcelon
Mohamed Mahlou
Publisher
arXiv
Research Topics
Core Machine Learning