April 30, 2018
We consider the problem of learning a one-hidden-layer neural network: we assume the input x ∈ R^d is drawn from a Gaussian distribution and the label is y = a^T σ(Bx) + ξ, where a is a nonnegative vector in R^m with m ≤ d, B ∈ R^{m×d} is a full-rank weight matrix, and ξ is noise. We first give an analytic formula for the population risk under the standard squared loss and show that minimizing it implicitly attempts to decompose a sequence of low-rank tensors simultaneously. Inspired by this formula, we design a non-convex objective function G(·) whose landscape is guaranteed to have the following properties:
1. All local minima of G are also global minima.
2. All global minima of G correspond to the ground-truth parameters.
3. The value and gradient of G can be estimated using samples.
With these properties, stochastic gradient descent on G provably converges to the global minimum and learns the ground-truth parameters. We also prove finite-sample complexity results and validate them with simulations.
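To make the setup concrete, below is a minimal sketch in Python/NumPy of the data-generating model and plain SGD. It assumes σ is the ReLU and uses the empirical squared loss as a stand-in for the designed objective G, which the abstract does not spell out; the dimensions, noise level, and hyperparameters are all illustrative.

    import numpy as np

    # Illustrative dimensions; the paper assumes m <= d.
    d, m, n = 10, 5, 5000
    rng = np.random.default_rng(0)

    # Ground-truth parameters: nonnegative a in R^m, full-rank B in R^{m x d}.
    a_true = np.abs(rng.normal(size=m))
    B_true = rng.normal(size=(m, d))

    def relu(z):
        return np.maximum(z, 0.0)

    # Inputs from a standard Gaussian; labels y = a^T sigma(Bx) + xi,
    # with sigma = ReLU and small Gaussian noise xi.
    X = rng.normal(size=(n, d))
    y = relu(X @ B_true.T) @ a_true + 0.1 * rng.normal(size=n)

    # Plain minibatch SGD on the empirical squared loss
    # (a stand-in for the paper's designed objective G).
    a = np.abs(rng.normal(size=m))
    B = rng.normal(size=(m, d))
    lr, batch = 1e-3, 64
    for step in range(5000):
        idx = rng.integers(0, n, size=batch)
        Xb, yb = X[idx], y[idx]
        H = Xb @ B.T                    # pre-activations, shape (batch, m)
        S = relu(H)                     # hidden activations
        r = S @ a - yb                  # residuals
        grad_a = S.T @ r / batch
        grad_B = ((r[:, None] * a[None, :] * (H > 0)).T @ Xb) / batch
        a -= lr * grad_a
        B -= lr * grad_B

    print("final squared loss:", np.mean((relu(X @ B.T) @ a - y) ** 2))

The abstract's point is that this vanilla squared loss can have bad local minima, whereas the designed objective G provably does not; the sketch only illustrates the generative model and the SGD loop, not the paper's guarantees.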