Open Source Evolutionary Structured Optimization

May 11, 2020

Abstract

Nevergrad is a derivative-free optimization platform that gathers both a wide range of optimization methods and a wide range of test functions on which to evaluate them. Some of these functions have very particular structures that standard methods cannot exploit. The most recent feature of Nevergrad is the ability to conveniently define a search domain, so that many algorithms in Nevergrad can automatically rescale variables and/or take into account their possibly logarithmic or discrete nature, and can also take into account any user-defined mutation or recombination operator. Since many problems are efficiently solved using specific operators, Nevergrad now makes it possible to use specific operators within generic algorithms: the underlying structure of the problem is user-defined information that several families of optimization methods can use and benefit from. We explain how this API can help analyze optimization methods and how to use it for the optimization of a structured photonics physical testbed, and show that this can produce significant improvements.
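
As a minimal sketch of what such a structured search domain looks like (the objective function and its parameter names are hypothetical stand-ins, not from the paper), Nevergrad's parametrization API lets each variable declare its own bounds, scale, and nature, and any optimizer in the platform can then consume the result:

```python
import nevergrad as ng

# Hypothetical structured search domain: a log-distributed learning rate,
# an integer-valued depth, and a categorical activation choice.
parametrization = ng.p.Instrumentation(
    lr=ng.p.Log(lower=1e-5, upper=1.0),                         # logarithmic nature
    depth=ng.p.Scalar(lower=1, upper=8).set_integer_casting(),  # discrete nature
    activation=ng.p.Choice(["relu", "tanh", "sigmoid"]),        # categorical
)

def objective(lr: float, depth: int, activation: str) -> float:
    # Stand-in objective; a real use case would run an experiment here.
    return (lr - 1e-2) ** 2 + abs(depth - 4) + float(activation != "relu")

# Rescaling and discretization are handled by the parametrization, so the
# same domain can be passed unchanged to any optimizer in the platform.
optimizer = ng.optimizers.OnePlusOne(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(objective)
print(recommendation.kwargs)
```

Here OnePlusOne is just one choice; the point of the API is that the same parametrization can be handed to other optimizers for comparison, and user-supplied mutation operators can likewise be attached to a parameter (e.g. via its set_mutation method) so that generic algorithms exploit problem-specific structure.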

AUTHORS

Jérémy Rapin

Daniel Haziza

Olivier Teytaud

Antoine Moreau

Emmanuel Centeno

Pauline Bennet

Publisher

Evolutionary Computation Software Systems Workshop at GECCO
