CORE MACHINE LEARNING

Salsa Picante: A Machine Learning Attack On LWE with Binary Secrets

February 12, 2024

Abstract

Learning With Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Exchange Mechanism (KEM) standardized by NIST [13] is based on module LWE, and current publicly available PQ Homomorphic Encryption (HE) libraries are based on ring LWE [2]. The security of LWE-based PQ cryptosystems is critical, but certain implementation choices could weaken them. One such choice is sparse binary secrets, desirable for PQ HE schemes for efficiency reasons. The prior work Salsa [51] demonstrated a machine learning-based attack on LWE with sparse binary secrets in small dimensions (n ≤ 128) and low Hamming weights (h ≤ 4). However, this attack assumes access to millions of eavesdropped LWE samples and fails at higher Hamming weights or dimensions. We present Picante, an enhanced machine learning attack on LWE with sparse binary secrets, which recovers secrets in much larger dimensions (up to n = 350) and with larger Hamming weights (roughly n/10, and up to h = 60 for n = 350). We achieve this dramatic improvement via a novel preprocessing step, which allows us to generate training data from a linear number of eavesdropped LWE samples (4n) and changes the distribution of the data to improve transformer training. We also improve the secret recovery methods of Salsa and introduce a novel cross-attention recovery mechanism that allows us to read off the secret directly from the trained models. While Picante does not threaten NIST's proposed LWE standards, it demonstrates significant improvement over Salsa and could scale further, highlighting the need for future investigation into machine learning attacks on LWE with sparse binary secrets.
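
To make the setting concrete, the sketch below (illustrative only, not the paper's code) generates LWE samples b = A·s + e mod q for a sparse binary secret, and applies a standard residual check of the kind used to validate a candidate secret guess: with the correct secret, b − A·s mod q is just the small noise e, while a wrong guess leaves residuals close to uniform. The parameters n and h follow the abstract; the modulus q and noise width sigma are toy values assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters follow the abstract: dimension n = 350, Hamming weight h = 60.
# The modulus q and noise width sigma are toy values chosen for illustration.
n, q, h, sigma = 350, 2**20, 60, 3.0

# Sparse binary secret with exactly h ones.
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=h, replace=False)] = 1

def lwe_samples(m):
    """Return m eavesdropped LWE pairs (A, b) with b = A.s + e mod q."""
    A = rng.integers(0, q, size=(m, n), dtype=np.int64)
    e = np.rint(rng.normal(0, sigma, size=m)).astype(np.int64)
    return A, (A @ s + e) % q

def looks_correct(A, b, guess, bound=20 * sigma):
    """Residual test for a candidate secret: with the true secret,
    b - A.guess mod q is just the small noise e; with a wrong guess
    it is close to uniform on [0, q)."""
    r = (b - A @ guess) % q
    r = np.minimum(r, q - r)  # centered absolute value in [0, q/2]
    return r.mean() < bound

A, b = lwe_samples(4 * n)  # a linear number of samples, as Picante assumes

print(looks_correct(A, b, s))                       # True
print(looks_correct(A, b, np.zeros(n, np.int64)))   # False (w.h.p.)
```

Note that the 4n sample budget above mirrors the linear number of eavesdropped samples Picante's preprocessing starts from; the preprocessing and transformer-based recovery themselves are described in the paper and are not reproduced here.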


AUTHORS

Written by

Cathy Li

Jana Sotakova

François Charton

Kristin Lauter

Emily Wenger

Evrard Garcelon

Mohamed Malhou

Publisher

arXiv

Research Topics

Core Machine Learning

Related Publications

December 18, 2024

CORE MACHINE LEARNING

UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling

Haider Al-Tahan, Quentin Garrido, Randall Balestriero, Diane Bouchacourt, Caner Hazirbas, Mark Ibrahim

December 12, 2024

NLP

CORE MACHINE LEARNING

Memory Layers at Scale

Vincent-Pierre Berges, Barlas Oguz

December 12, 2024

CORE MACHINE LEARNING

SYSTEMS RESEARCH

Croissant: A Metadata Format for ML-Ready Datasets

Mubashara Akhtar, Omar Benjelloun, Costanza Conforti, Luca Foschini, Pieter Gijsbers, Joan Giner-Miguelez, Sujata Goswami, Nitisha Jain, Michalis Karamousadakis, Satyapriya Krishna, Michael Kuchnik, Sylvain Lesage, Quentin Lhoest, Pierre Marcenac, Manil Maskey, Peter Mattson, Luis Oala, Hamidah Oderinwale, Pierre Ruyssen, Tim Santos, Rajat Shinde, Elena Simperl, Arjun Suresh, Geoffry Thomas, Slava Tykhonov, Joaquin Vanschoren, Susheel Varma, Jos van der Velde, Steffen Vogler, Carole-Jean Wu, Luyao Zhang

December 10, 2024

CORE MACHINE LEARNING

Flow Matching Guide and Code

Yaron Lipman, Marton Havasi, Peter Holderrieth, Neta Shaul, Matt Le, Brian Karrer, Ricky Chen, David Lopez-Paz, Heli Ben Hamu, Itai Gat
