Doctoral research project number: 4713

Description

Submission date: 1 January 1900
Title: Governing Automated Vehicle Behavior
Thesis supervisor: Raja CHATILA (ISIR (EDITE))
Scientific field: Information and communication sciences and technologies
CNRS theme: Artificial intelligence

Abstract: The objective of this thesis proposal is to conduct research in Robotics and Artificial Intelligence to design and validate a decision-making system for an automated vehicle equipped with real-time perception and motion control. Given the difficulty of fully predicting the behavior of dynamic objects in a traffic environment, there may be situations where a collision becomes inevitable. The aim of the decision-making system is then to minimize risk and damage (Goodall, 2014). A risk that cannot be avoided can at best be redistributed, and the decision thus becomes an ethical issue: there will be no "good" solution, and the decision will involve a trade-off between the interests of the different parties involved. An automated vehicle (AV) has no sense of ethics; nonetheless, it would have to make real-time decisions about risk distribution in ethical dilemmas involving high uncertainty. Several moral theories attempt to explain or to guide human decisions (deontological, utilitarian, casuistic, ...) and could be applied as such to machine decisions. One of the first tasks of the candidate will be to study these moral theories and become acquainted with their philosophical foundations. Simultaneously, the literature on machine ethics will also be studied (Allen 2006, Chauvier 2014), specifically in the context of autonomous vehicles (Lin 2015, Bonnefon 2015).

The candidate will develop and implement an artificial moral agent based on different moral theories. This decision-making system will have to cope with evolving conditions and take perception and action uncertainties into account. Research will therefore explore and implement uncertain representations, e.g., Bayesian representations and Bayesian networks (Ferreira 2013), and decision-making approaches, e.g., Markov processes or game theory, in the context of multi-agent systems in which each agent has partial information and non-identical, possibly conflicting, goals and interests.
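To make the contrast between moral theories concrete, the following is a minimal sketch, with an entirely hypothetical scenario and made-up harm values, of how two theories can be encoded as decision rules over uncertain outcomes. Each candidate maneuver is mapped to a distribution over outcomes (probability, harm to the passenger, harm to a pedestrian), reflecting prediction uncertainty; a utilitarian rule minimizes total expected harm, while a deontological rule imposes a hard constraint before optimizing.

```python
# Hypothetical maneuvers; each outcome is (probability, passenger harm, pedestrian harm).
maneuvers = {
    "brake_straight": [(0.8, 0.4, 0.1), (0.2, 0.9, 0.1)],
    "swerve_left":    [(0.9, 0.0, 0.4), (0.1, 0.3, 0.5)],
}

def expected_harms(outcomes):
    """Expected harm to each party under the outcome distribution."""
    ep = sum(p * hp for p, hp, _ in outcomes)   # passenger
    eq = sum(p * hq for p, _, hq in outcomes)   # pedestrian
    return ep, eq

def utilitarian(maneuvers):
    """Choose the maneuver minimizing total expected harm."""
    return min(maneuvers, key=lambda m: sum(expected_harms(maneuvers[m])))

def deontological(maneuvers, pedestrian_limit=0.3):
    """Forbid maneuvers whose expected pedestrian harm exceeds a limit,
    then minimize passenger harm among the admissible ones."""
    admissible = {m: o for m, o in maneuvers.items()
                  if expected_harms(o)[1] <= pedestrian_limit}
    pool = admissible or maneuvers  # fall back if nothing is admissible
    return min(pool, key=lambda m: expected_harms(pool[m])[0])
```

With these illustrative numbers the two rules disagree: the utilitarian agent swerves (lower total expected harm), while the deontological agent keeps braking because swerving exceeds the pedestrian-harm constraint. The thesis's actual models would of course be far richer (Bayesian networks over outcomes, Markov decision processes over sequences of actions), but the structural point survives: the moral theory determines the shape of the objective, not just its parameters.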
This is a major objective of the thesis. The work will also include the development of a simulation environment to study scenarios and situations that characterize the different ethical approaches, and to study human reactions as well. Furthermore, the work will investigate how the system could adapt its behavior by learning from user decisions, instead of merely applying its own policy; the simulation environment will have to embed this capacity. Research will be performed to propose an appropriate approach for learning from data acquired on the situation and on driver decisions. This will also enable a comparative study of human and artificial ethics.
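One simple way to frame learning from driver decisions is preference recovery: assume, purely for illustration, that the driver implicitly minimizes a weighted harm w * harm_passenger + (1 - w) * harm_pedestrian, and recover w from recorded binary choices. The sketch below uses synthetic observations and a grid search; a real system would use richer features and a probabilistic choice model.

```python
# Synthetic records: (harms of option A, harms of option B, index of chosen option).
# Harms are (passenger, pedestrian) pairs; values are made up for illustration.
observations = [
    ((0.5, 0.1), (0.0, 0.4), 0),   # driver accepted passenger risk to spare the pedestrian
    ((0.6, 0.2), (0.1, 0.5), 0),
    ((0.2, 0.3), (0.4, 0.1), 1),
]

def cost(harms, w):
    """Weighted harm under trade-off weight w (w = weight on the passenger)."""
    hp, hq = harms
    return w * hp + (1 - w) * hq

def agreement(w, observations):
    """Fraction of recorded choices the weighted-cost model reproduces."""
    hits = 0
    for a, b, chosen in observations:
        predicted = 0 if cost(a, w) <= cost(b, w) else 1
        hits += (predicted == chosen)
    return hits / len(observations)

# Grid search over candidate weights; ties resolve to the first maximizer.
best_w = max((i / 100 for i in range(101)),
             key=lambda w: agreement(w, observations))
```

On these synthetic choices the model reaches full agreement only for small w, i.e., the recovered preference weights the pedestrian more heavily than the passenger. The same machinery supports the proposed comparative study: fitting such a weight separately to human choices and to each artificial moral agent makes their implicit trade-offs directly comparable.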

Doctoral candidate: De Moura Martins Gomes Nelson