Doctoral research project number: 7487


Submission date: 21 September 2020
Titre: Advanced Gradient Descent Methods in Deep Reinforcement Learning
Thesis supervisor: Olivier SIGAUD (ISIR (EDITE))
Scientific domain: Information and communication sciences and technologies
CNRS theme: Not defined

Abstract: In this PhD thesis, we propose to investigate variants of the gradient descent method based on first-order or second-order Taylor approximations of the functions involved in the computations. Many such approximations can be found in the literature, giving rise to approaches such as the natural gradient, Gauss-Newton, and Newton's methods. The first task we want to address in this PhD is to provide a unifying view of these methods. Since they can be derived and justified in many different ways, presenting them and their relationships within a single canonical framework could help clarify the landscape of iterative optimization methods in deep learning.
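As a brief illustration of the kind of derivation the abstract refers to (a standard textbook sketch, not part of the proposal itself): a second-order Taylor expansion of the objective around the current parameters directly yields a Newton-type update.

```latex
% Second-order Taylor approximation of the objective f around \theta,
% with gradient \nabla f(\theta) and Hessian H(\theta):
f(\theta + \Delta\theta) \;\approx\; f(\theta)
  + \nabla f(\theta)^{\top} \Delta\theta
  + \tfrac{1}{2}\, \Delta\theta^{\top} H(\theta)\, \Delta\theta

% Minimizing the right-hand side in \Delta\theta (when H is positive
% definite) gives Newton's step:
\Delta\theta^{*} \;=\; -\,H(\theta)^{-1} \nabla f(\theta)
```

The related methods mentioned in the abstract follow the same pattern with a different curvature matrix: the natural gradient replaces H with the Fisher information matrix, and Gauss-Newton replaces it with a positive semi-definite approximation built from first-order Jacobians.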

Doctoral candidate: Pierrot Thomas