Revisiting Classifier Two-Sample Tests for GAN Evaluation and Causal Discovery

April 24, 2017

Abstract

The goal of two-sample tests is to decide whether two probability distributions, denoted by P and Q, are equal. One flexible way to construct two-sample tests is to use binary classifiers. More specifically, pair n random samples drawn from P with a positive label, and pair n random samples drawn from Q with a negative label. Then, the held-out test accuracy of a binary classifier trained on these data should remain near chance level if the null hypothesis “P = Q” is true. Moreover, since this test accuracy is an average of independent random variables, it approaches a Gaussian null distribution. Finally, the prediction uncertainty of the binary classifier can be used to interpret the particular differences between P and Q: one can analyze which samples were labeled correctly or incorrectly by the classifier, and with the least or most confidence.
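To make this recipe concrete, here is a minimal sketch of a classifier two-sample test, assuming NumPy, SciPy, and scikit-learn are available; the small neural network, the function name classifier_two_sample_test, and the toy Gaussian data are illustrative choices rather than the paper's reference implementation.

# A minimal sketch of a classifier two-sample test (C2ST); names and data are
# illustrative, not the paper's reference implementation.
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def classifier_two_sample_test(samples_p, samples_q, random_state=0):
    """Return the held-out accuracy and an approximate p-value for H0: P = Q."""
    # Label samples from P as 1 and samples from Q as 0.
    X = np.vstack([samples_p, samples_q])
    y = np.concatenate([np.ones(len(samples_p)), np.zeros(len(samples_q))])

    # Hold out half of the data to estimate the test accuracy.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=random_state)

    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                        random_state=random_state)
    clf.fit(X_train, y_train)
    accuracy = clf.score(X_test, y_test)

    # Under H0, the accuracy averages independent Bernoulli(1/2) outcomes,
    # so it is approximately N(1/2, 1 / (4 * n_test)).
    n_test = len(y_test)
    z = (accuracy - 0.5) / np.sqrt(0.25 / n_test)
    p_value = 1.0 - norm.cdf(z)  # one-sided: reject H0 for large accuracy
    return accuracy, p_value

# Toy usage: two Gaussians whose means differ slightly.
rng = np.random.RandomState(0)
accuracy, p_value = classifier_two_sample_test(rng.randn(500, 2),
                                               rng.randn(500, 2) + 0.3)
print(f"test accuracy = {accuracy:.3f}, p-value = {p_value:.4f}")

Under the null hypothesis the accuracy should hover around 0.5, while any detectable difference between P and Q pushes it higher, which is what the Gaussian approximation turns into a p-value.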

In this paper, we aim to revive interest in the use of binary classifiers for two-sample testing. To this end, we review their fundamentals, survey previous literature on their use, compare their performance against state-of-the-art alternative two-sample tests, and propose them for evaluating generative adversarial network models applied to image synthesis.

As a by-product of our research, we propose the combination of conditional generative adversarial networks and classifier two-sample tests as an alternative way to achieve state-of-the-art causal discovery.
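The sketch below illustrates how such a pairing could work for deciding a causal direction between two variables: fit a conditional model of the candidate effect given the candidate cause, generate synthetic effect samples, and score each direction with the classifier two-sample test from the previous sketch, preferring the direction whose synthetic pairs are harder to tell apart from the real ones. A gradient-boosted regressor with residual resampling stands in for the conditional GAN used in the paper, and the function names and toy cubic data are illustrative assumptions.

# A minimal sketch of combining a conditional model with a classifier
# two-sample test to pick a causal direction; a gradient-boosted regressor
# with residual resampling stands in for the paper's conditional GAN, and
# classifier_two_sample_test is the sketch defined above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def c2st_causal_score(cause, effect, rng):
    """Fit effect given cause, resample residuals into synthetic effects, and
    measure how well a classifier separates real from synthetic pairs."""
    model = GradientBoostingRegressor().fit(cause.reshape(-1, 1), effect)
    predicted = model.predict(cause.reshape(-1, 1))
    fake_effect = predicted + rng.permutation(effect - predicted)
    real_pairs = np.column_stack([cause, effect])
    fake_pairs = np.column_stack([cause, fake_effect])
    accuracy, _ = classifier_two_sample_test(real_pairs, fake_pairs)
    return accuracy  # near 0.5 when synthetic pairs look like real ones

rng = np.random.RandomState(1)
x = rng.randn(500)
y = x ** 3 + 0.3 * rng.randn(500)  # ground truth: x causes y

# Prefer the direction whose conditional model fools the classifier more.
print("x -> y" if c2st_causal_score(x, y, rng) < c2st_causal_score(y, x, rng)
      else "y -> x")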

Related Publications

November 27, 2022

Core Machine Learning

Neural Attentive Circuits

Nicolas Ballas, Bernhard Schölkopf, Chris Pal, Francesco Locatello, Li Erran, Martin Weiss, Nasim Rahaman, Yoshua Bengio

November 27, 2022

Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs

Andrea Tirinzoni, Aymen Al Marjani, Emilie Kaufmann

November 16, 2022

NLP

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

Kushal Tirumala, Aram H. Markosyan, Armen Aghajanyan, Luke Zettlemoyer

November 10, 2022

Computer Vision

Learning State-Aware Visual Representations from Audible Interactions

Unnat Jain, Abhinav Gupta, Himangi Mittal, Pedro Morgado

April 08, 2021

Responsible AI

Integrity

Towards measuring fairness in AI: the Casual Conversations dataset

Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacqueline Pan, Albert Gordo, Cristian Canton Ferrer

April 30, 2018

The Role of Minimal Complexity Functions in Unsupervised Learning of Semantic Mappings

Tomer Galanti, Lior Wolf, Sagie Benaim

April 30, 2018

Computer Vision

NAM – Unsupervised Cross-Domain Image Mapping without Cycles or GANs

Yedid Hoshen, Lior Wolf

December 11, 2019

Speech & Audio

Computer Vision

Hyper-Graph-Network Decoders for Block Codes

Eliya Nachmani, Lior Wolf
