
What is my math transformer doing? Three results on interpretability and generalization

December 02, 2022

Abstract

This paper investigates the failure cases and out-of-distribution behavior of transformers trained on matrix inversion and eigenvalue decomposition. I show that incorrect model predictions still retain deep mathematical properties of the solution (e.g. correct eigenvalues, unit norm of eigenvectors), and that almost all model failures can be attributed to, and predicted from, properties of the problem or solution. This demonstrates that, when in doubt, math transformers do not hallucinate absurd solutions (as was sometimes proposed) but remain “roughly right”. I also show that the careful choice of a training dataset can accelerate training, while allowing the model to generalize out of its training distribution, invalidating the idea that transformers “merely interpolate” from memorized examples.
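The “roughly right” claim above can be made concrete with simple numerical checks on a predicted solution. The sketch below is illustrative only and not code from the paper; the function name, tolerance, and test setup are my own. It checks whether a predicted eigendecomposition of a symmetric matrix preserves the properties mentioned in the abstract: eigenvalues close to the true spectrum, unit-norm eigenvectors, and an approximate reconstruction of the input matrix.

```python
import numpy as np

def check_prediction(A, pred_values, pred_vectors, tol=1e-2):
    """Diagnostics for a predicted eigendecomposition of a symmetric matrix A.

    Hypothetical helper for illustration: pred_values is the predicted
    spectrum, pred_vectors holds predicted eigenvectors as columns.
    """
    # Are the predicted eigenvalues close to the true spectrum?
    true_values = np.linalg.eigvalsh(A)  # ascending order
    eigenvalue_error = np.max(np.abs(np.sort(pred_values) - true_values))
    # Are the predicted eigenvectors of unit norm?
    norms = np.linalg.norm(pred_vectors, axis=0)
    unit_norm_error = np.max(np.abs(norms - 1.0))
    # Does the reconstruction A ~ V diag(lambda) V^T approximately hold?
    reconstruction = pred_vectors @ np.diag(pred_values) @ pred_vectors.T
    reconstruction_error = np.max(np.abs(reconstruction - A))
    return {
        "eigenvalues_ok": eigenvalue_error < tol,
        "unit_norm_ok": unit_norm_error < tol,
        "reconstruction_ok": reconstruction_error < tol,
    }

# Example: a random 5x5 symmetric matrix and a slightly perturbed "prediction".
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = (A + A.T) / 2
values, vectors = np.linalg.eigh(A)
noisy_vectors = vectors + 1e-3 * rng.normal(size=vectors.shape)
print(check_prediction(A, values, noisy_vectors))
```

A model output that fails the exact-match criterion used at evaluation time can still pass some or all of these structural checks, which is the sense in which an incorrect prediction remains mathematically meaningful rather than absurd.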

AUTHORS

François Charton

Publisher

NeurIPS MATH-AI Workshop

Research Topics

Natural Language Processing (NLP)

Core Machine Learning

