April 24, 2017
In this paper, we explore theoretical properties of training a two-layered ReLU network g(x; w) = Σ_{j=1}^{K} σ(w_j^⊤ x) with centered d-dimensional spherical Gaussian input x (σ = ReLU). We train the network with gradient descent on w to mimic the output of a teacher network with the same architecture and fixed parameters w*. We show that its population gradient has an analytical formula, which enables theoretical analysis of critical points and convergence behavior. First, we prove that critical points outside the hyperplane spanned by the teacher parameters ("out-of-plane") are not isolated and form manifolds, and we characterize in-plane critical-point-free regions for the two-ReLU case. On the other hand, convergence to w* for a single ReLU node is guaranteed with probability at least (1 − ε)/2 if the weights are initialized randomly with standard deviation upper-bounded by O(ε/√d), consistent with empirical practice. For networks with many ReLU nodes, we prove that an infinitesimal perturbation of the weight initialization results in convergence towards w* (or a permutation of it), a phenomenon known in physics as spontaneous symmetry breaking (SSB). We assume no independence of ReLU activations. Simulations verify our findings.
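As a rough illustration of the student-teacher setting described above (not the paper's analytical population-gradient formula), the sketch below trains a K-node ReLU student to mimic a fixed teacher on spherical Gaussian inputs, using mini-batch gradients as a finite-sample surrogate for the population gradient. The dimensions, learning rate, batch size, and step count are hypothetical choices for this example.

```python
# Illustrative sketch only: student-teacher training of g(x; w) = sum_j ReLU(w_j^T x)
# with x ~ N(0, I_d). Mini-batch gradients stand in for the population gradient.
import numpy as np

rng = np.random.default_rng(0)
d, K = 20, 3                                   # hypothetical input dim and node count
w_star = rng.standard_normal((K, d))           # fixed teacher weights w*
w = 0.01 * rng.standard_normal((K, d))         # small random student initialization

lr, n_steps, batch = 0.05, 2000, 256           # hypothetical hyperparameters
for t in range(n_steps):
    X = rng.standard_normal((batch, d))        # centered spherical Gaussian inputs
    s_student = np.maximum(X @ w.T, 0.0)       # ReLU activations of student, (batch, K)
    s_teacher = np.maximum(X @ w_star.T, 0.0)  # ReLU activations of teacher
    err = s_student.sum(axis=1) - s_teacher.sum(axis=1)   # g(x; w) - g(x; w*)
    # gradient of 0.5 * E[err^2] w.r.t. w_j: E[ err * 1{w_j^T x > 0} * x ]
    grad = ((err[:, None] * (s_student > 0)).T @ X) / batch
    w -= lr * grad

# Evaluate on fresh samples; note the student may recover a permutation of the
# teacher nodes, so we compare outputs rather than weights directly.
X_test = rng.standard_normal((4096, d))
test_err = np.maximum(X_test @ w.T, 0).sum(1) - np.maximum(X_test @ w_star.T, 0).sum(1)
print("test MSE:", float(np.mean(test_err ** 2)))
```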