December 18, 2025
Text watermarking embeds statistical signals into text that can later be detected, enabling traceability of AI-generated content. We explore post-hoc text watermarking through LLM rephrasing, where watermarks are embedded while rewriting existing text, for instance to protect copyrighted documents or to detect their use during training or RAG. Unlike generation-time watermarking, which is constrained by how LLMs are served in practice, the post-hoc setting offers control over many generation and detection parameters. We test whether we can exploit this freedom by allocating more compute for a better quality-detectability trade-off, from using larger models for rephrasing, applying beam search, or generating multiple candidates, to using an auxiliary model for entropy filtering at detection time. Our results show that these strategies achieve strong detectability and high semantic fidelity on open-ended text such as Wikipedia articles and books, while verifiable text such as code remains more challenging due to stricter correctness constraints.
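The detection side of this pipeline can be illustrated with a minimal sketch. The snippet below assumes a green-list style watermark scored with a one-sided z-test, plus the entropy-filtering idea from the abstract: positions where an auxiliary model reports low entropy are skipped because they carry little watermark signal. The hash-based green list, function names, and thresholds are illustrative assumptions, not the paper's actual implementation.

```python
import hashlib
import math

def is_green(prev_token: str, token: str, gamma: float = 0.5) -> bool:
    # Pseudo-random green-list membership, seeded on the previous token.
    # (Illustrative hash scheme, not the scheme used in the paper.)
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] / 255.0 < gamma

def detect(tokens, gamma=0.5, entropy=None, entropy_threshold=0.0):
    """Score a token sequence for the watermark with a one-sided z-test.

    `entropy` is an optional per-token list from an auxiliary model
    (assumption: higher entropy means more freedom to embed the signal).
    """
    scored = 0
    green = 0
    for i in range(1, len(tokens)):
        if entropy is not None and entropy[i] < entropy_threshold:
            continue  # low-entropy positions are filtered out at detection
        scored += 1
        green += is_green(tokens[i - 1], tokens[i], gamma)
    if scored == 0:
        return 0.0
    # z-score under the null hypothesis that green tokens occur at rate gamma
    return (green - gamma * scored) / math.sqrt(scored * gamma * (1 - gamma))
```

A high z-score indicates watermarked text; filtering low-entropy tokens (e.g. forced syntax in code) reduces noise from positions the rephraser could not freely choose.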
Written by
Pierre Fernandez
Tom Sander
Hady Elsahar
Hongyan Chang
Tomáš Souček
Sylvestre Rebuffi
Valeriu Lacatusu
Tuan Tran
Alexandre Mourachko
Publisher
arXiv