Gradient Descent optimisation using Neural Network proxies

Presentation

Many practical continuous minimisation problems are not amenable to standard optimisation methods because gradients cannot be computed directly. Recent results obtained in our lab suggest that it is possible to train a Neural Network regressor as a proxy for the original function, and then to optimise this proxy via Gradient Descent. However, during the optimisation process, the optimiser may explore regions of the space where the proxy model is inaccurate. It is then necessary to resample the space around the current optimisation point and retrain the Neural Network on these new samples. The goal of this project is to determine the best way to do this, taking the cost of acquiring a new sample into account. A minimal sketch of the overall loop is given below.
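To make the loop concrete, here is a minimal sketch, in Python with TensorFlow, of proxy-based Gradient Descent with periodic resampling. The toy objective black_box, the proxy architecture, and the fixed resampling schedule and radius are illustrative assumptions, not part of the project description; deciding when and where to resample, given the cost of each new sample, is precisely the open question the project addresses.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

def black_box(x):
    # Hypothetical stand-in for the expensive objective whose gradients
    # are unavailable; only point evaluations are allowed.
    return np.sum(x ** 2, axis=-1) + 0.5 * np.sin(5.0 * x[..., 0])

def make_proxy(dim):
    # Small MLP regressor used as a differentiable proxy of the objective.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(dim,)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

dim = 2
X = rng.uniform(-3.0, 3.0, size=(200, dim))
y = black_box(X).reshape(-1, 1)

proxy = make_proxy(dim)
proxy.fit(X, y, epochs=200, verbose=0)

# Gradient Descent on the proxy, starting from a random point.
x = tf.Variable(rng.uniform(-3.0, 3.0, size=(1, dim)), dtype=tf.float32)
opt = tf.keras.optimizers.SGD(learning_rate=0.05)

for step in range(100):
    with tf.GradientTape() as tape:
        pred = proxy(x)              # the proxy value is differentiable in x
    grad = tape.gradient(pred, x)
    opt.apply_gradients([(grad, x)])

    if step % 10 == 0:
        # Naive resampling rule (an assumption, not the project's answer):
        # draw a fixed number of fresh samples around the current iterate
        # and retrain, ignoring the acquisition cost the project studies.
        X_new = x.numpy() + 0.5 * rng.standard_normal((20, dim))
        y_new = black_box(X_new).reshape(-1, 1)
        X = np.concatenate([X, X_new])
        y = np.concatenate([y, y_new])
        proxy.fit(X, y, epochs=50, verbose=0)

print('final point:', x.numpy(), 'true value:', black_box(x.numpy()))

Here the proxy is simply retrained on the full accumulated dataset after each resampling round; a principled approach would instead decide adaptively how many samples to draw, where to place them, and how to weight them against their acquisition cost.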

Outcomes of the project

A principled and efficient approach to the problem of resampling during optimisation would have an important impact from both a theoretical and an industrial perspective.

Desired profile
  • Knowledge of Machine Learning and Optimisation Theory
  • Experience using Python
  • Taste for experimentation and/or theory in CS
Knowledge gained
  • Learn how to use a Deep-Learning framework (TensorFlow) correctly
  • Gain expertise in the field of Active Learning
  • Understand how to use Machine Learning beyond the boundaries of its usual applications
Dates

At least 3 months, starting anytime.

Contact

Please contact me at pierre.baque(at)epfl.ch for any further information.