December 02, 2022
This paper investigates the failure cases and out-of-distribution behavior of transformers trained on matrix inversion and eigenvalue decomposition. I show that incorrect model predictions still retain deep mathematical properties of the solution (e.g., correct eigenvalues, unit norm of eigenvectors), and that almost all model failures can be attributed to, and predicted from, properties of the problem or solution. This demonstrates that, when in doubt, math transformers do not hallucinate absurd solutions (as was sometimes proposed) but remain “roughly right”. I also show that careful choice of the training dataset can accelerate training while allowing the model to generalize out of its training distribution, invalidating the idea that transformers “merely interpolate” from memorized examples.
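For concreteness, the solution properties mentioned in the abstract are easy to test numerically. The sketch below is mine, not from the paper; the function name `check_prediction_properties` and the tolerance are hypothetical. It uses NumPy to check whether a (possibly incorrect) predicted eigendecomposition of a symmetric matrix still has the correct eigenvalues and unit-norm eigenvectors, and measures how “roughly right” it is via reconstruction error.

```python
import numpy as np

def check_prediction_properties(A, pred_vals, pred_vecs, tol=1e-2):
    """Check whether a (possibly wrong) predicted eigendecomposition of a
    symmetric matrix A still satisfies the properties discussed above:
    correct eigenvalues and unit-norm eigenvectors."""
    true_vals = np.linalg.eigvalsh(A)  # ground-truth eigenvalues, ascending
    eigvals_ok = np.allclose(np.sort(pred_vals), true_vals, atol=tol)
    # Each predicted eigenvector (a column of pred_vecs) should have unit L2 norm.
    norms_ok = np.allclose(np.linalg.norm(pred_vecs, axis=0), 1.0, atol=tol)
    # Relative reconstruction error measures how "roughly right" the prediction is.
    recon = pred_vecs @ np.diag(pred_vals) @ pred_vecs.T
    recon_err = np.linalg.norm(recon - A) / np.linalg.norm(A)
    return eigvals_ok, norms_ok, recon_err

# Example: perturb an exact decomposition to mimic a near-miss model prediction.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                       # symmetric test matrix
vals, vecs = np.linalg.eigh(A)          # exact decomposition
noisy_vecs = vecs + 1e-3 * rng.standard_normal(vecs.shape)
noisy_vecs /= np.linalg.norm(noisy_vecs, axis=0)  # renormalize columns
print(check_prediction_properties(A, vals, noisy_vecs))
```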
Written by
François Charton
Publisher
NeurIPS MATH-AI Workshop