Doctoral research project number: 5105

Description

Submission date: 1 April 2018
Titre: Benefits of Adaptive Parameter Choices in Discrete Black-Box Optimization
Thesis supervisor: Carola DOERR (LIP6)
Scientific domain: Information and communication sciences and technologies
CNRS theme: Not defined

Abstract: Black-box optimization algorithms such as local search variants, simulated annealing, and evolutionary algorithms are general-purpose optimization techniques which, on par with mathematical programming, are among the most frequently applied approaches for large-scale, complex, or dynamically changing real-world optimization problems. In contrast to white-box solvers, black-box optimization algorithms neither require nor make use of instance-specific problem formulations or data. They operate instead in an iterative "trial-and-error" manner: in every iteration, a set of solution candidates is sampled from some random distribution, their function values are evaluated, and the information obtained through these evaluations is used to update the sampling distribution for the next iteration. This process is repeated until some stopping criterion is met (a minimal sketch of such a loop is given below).

Black-box optimization algorithms are parametrized algorithms. The parameters govern, for example, how many solutions are evaluated per iteration, the distribution from which they are sampled, the update mechanisms that adapt the sampling distribution for the next iteration, and the amount of memory that the heuristic has access to. One of the key challenges in applying black-box optimization algorithms to a given problem lies in the selection of suitable parameter values; this is probably the most prominent research question in black-box optimization. Among the main difficulties are the hard-to-foresee interdependencies between the involved parameters and their dependence on the problem and the instance, but also the fact that the best parameter values can change drastically during the optimization process.

The objective of this PhD thesis is to investigate parameter control mechanisms that automatically identify good parameter values and track their evolution throughout the optimization process. We build on recent work that has shown significant performance gains for such dynamic parameter selection techniques: see, for example, [Benjamin Doerr and Carola Doerr: Optimal Static and Self-Adjusting Parameter Choices for the $(1+(\lambda,\lambda))$ Genetic Algorithm. Algorithmica, volume 80, pages 1658-1709, Springer, 2018] for a theoretical result rigorously proving that the so-called $(1+(\lambda,\lambda))$ GA with success-based parameter choice achieves a linear expected optimization time, a bound that is provably not attainable with static parameter values (a sketch of this self-adjusting rule is also given below). We will analyze, by mathematical and empirical means, which methods are most suitable under which circumstances. Our main focus will be on comparison-based parameter selection techniques that are inspired by machine learning. Most notably, we will extend the multi-armed bandit (MAB) setting with dynamic rewards to the context of parameter selection for black-box optimization algorithms, as illustrated in the last sketch below.
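To make the sample-evaluate-update loop from the first paragraph concrete, the following minimal Python sketch implements a (1+1) evolutionary algorithm on the OneMax benchmark (the number of one-bits in a bit string). The benchmark choice, the budget, and all identifiers are illustrative assumptions, not part of the project description.

import random

def one_max(x):
    # Toy benchmark: the number of one-bits in the bit string.
    return sum(x)

def one_plus_one_ea(n, budget):
    # Minimal black-box loop: sample a candidate, evaluate it, and update
    # the "sampling distribution" (here simply the current parent).
    parent = [random.randint(0, 1) for _ in range(n)]
    f_parent = one_max(parent)
    for _ in range(budget):
        # Standard bit mutation: flip each bit independently with prob. 1/n.
        child = [1 - b if random.random() < 1.0 / n else b for b in parent]
        f_child = one_max(child)
        if f_child >= f_parent:   # elitist selection
            parent, f_parent = child, f_child
        if f_parent == n:         # stopping criterion: optimum reached
            break
    return parent, f_parent

best, value = one_plus_one_ea(n=50, budget=10_000)

The mutation probability 1/n is itself one of the static parameter choices whose dynamic counterparts this thesis studies.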
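The success-based parameter choice analyzed in the cited Algorithmica paper adjusts the offspring population size lambda after every iteration: lambda shrinks on a strict fitness improvement and grows slightly otherwise. The following Python sketch of the self-adjusting $(1+(\lambda,\lambda))$ GA on OneMax follows this rule; the update strength F = 1.5 and all other concrete values are assumptions chosen for illustration.

import random

def one_max(x):
    return sum(x)

def self_adjusting_ga(n, budget, F=1.5):
    # Sketch of the (1+(lambda,lambda)) GA with success-based choice of lam.
    x = [random.randint(0, 1) for _ in range(n)]
    fx, lam, evals = one_max(x), 1.0, 0
    while fx < n and evals < budget:
        k = max(1, round(lam))
        # Mutation phase: flip ell random bits, with ell ~ Bin(n, lam/n).
        ell = sum(random.random() < lam / n for _ in range(n))
        mutants = []
        for _ in range(k):
            y = list(x)
            for i in random.sample(range(n), ell):
                y[i] = 1 - y[i]
            mutants.append((one_max(y), y))
        _, xp = max(mutants, key=lambda t: t[0])
        # Crossover phase: take each bit from the best mutant with prob. 1/lam.
        offspring = []
        for _ in range(k):
            y = [xp[i] if random.random() < 1.0 / lam else x[i]
                 for i in range(n)]
            offspring.append((one_max(y), y))
        fy, y = max(offspring, key=lambda t: t[0])
        evals += 2 * k
        # Success-based update: shrink lam on improvement, grow otherwise.
        lam = max(lam / F, 1.0) if fy > fx else min(lam * F ** 0.25, float(n))
        if fy >= fx:
            x, fx = y, fy
    return fx, evals

As stated above, this self-adjusting rule achieves linear expected optimization time on OneMax, a bound that no static choice of lambda can attain.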
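Finally, the planned MAB extension can be illustrated as follows: each arm is a candidate parameter value, and the reward of an arm is the fitness gain it produces. Because arm qualities drift during the run (the "dynamic rewards" setting), the sketch below discounts old observations in a UCB-style rule. The class name, discount factor, and exploration constant are assumptions for illustration, not the concrete method the thesis will develop.

import math

class DiscountedUCB:
    # UCB-style bandit whose statistics decay geometrically, so the value
    # estimates can track rewards that change during the optimization run.
    def __init__(self, n_arms, gamma=0.99, c=0.5):
        self.counts = [0.0] * n_arms   # discounted number of pulls per arm
        self.sums = [0.0] * n_arms     # discounted sum of rewards per arm
        self.gamma = gamma             # forgetting factor for old rewards
        self.c = c                     # exploration strength

    def select(self):
        # Play every arm once, then pick the arm maximizing the
        # discounted mean reward plus an exploration bonus.
        total = sum(self.counts)
        best_arm, best_score = 0, float("-inf")
        for a, cnt in enumerate(self.counts):
            if cnt == 0:
                return a
            score = (self.sums[a] / cnt
                     + self.c * math.sqrt(math.log(total + 1) / cnt))
            if score > best_score:
                best_arm, best_score = a, score
        return best_arm

    def update(self, arm, reward):
        # Decay all statistics, then credit the arm that was just played.
        for a in range(len(self.counts)):
            self.counts[a] *= self.gamma
            self.sums[a] *= self.gamma
        self.counts[arm] += 1.0
        self.sums[arm] += reward

Inside a black-box optimizer, select() would be called once per iteration to pick, for example, the mutation strength from a hypothetical arm set such as {1, 2, 4, 8} bit flips, and update() would be fed the observed fitness gain as the reward.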



PhD student: Anja Jankovic