April 24, 2017
The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to rigorously studying and proving the problems, including instability and saturation, that arise when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them.
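The saturation problem mentioned above is easiest to see concretely in code. The sketch below is a minimal, hypothetical PyTorch example, not taken from the paper: a toy 1-D GAN training step that contrasts the original saturating generator loss log(1 - D(G(z))), whose gradient vanishes once the discriminator confidently rejects fake samples, with the commonly used non-saturating alternative -log D(G(z)). The networks, data, and hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch only: one GAN training step on toy 1-D data,
# contrasting the saturating and non-saturating generator losses.
# All architectures and hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny discriminator and generator over 1-D samples (toy setup).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_D = torch.optim.SGD(D.parameters(), lr=1e-2)
opt_G = torch.optim.SGD(G.parameters(), lr=1e-2)
bce = nn.BCELoss()

real = torch.randn(64, 1) + 3.0   # "real" data drawn from N(3, 1)
noise = torch.randn(64, 1)        # latent noise z

# Discriminator step: maximize log D(x) + log(1 - D(G(z))).
fake = G(noise).detach()
loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_D.zero_grad()
loss_D.backward()
opt_D.step()

# Generator step, saturating form: minimize log(1 - D(G(z))).
# When D confidently rejects fakes, this loss is nearly flat and its
# gradient vanishes -- the saturation phenomenon discussed in the paper.
fake = G(noise)
loss_G_saturating = torch.log(1.0 - D(fake) + 1e-8).mean()

# Non-saturating alternative: minimize -log D(G(z)), which keeps
# gradients alive early in training but has problems of its own.
loss_G_nonsaturating = -torch.log(D(fake) + 1e-8).mean()

opt_G.zero_grad()
loss_G_saturating.backward()
opt_G.step()
```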