Description
Submission date: April 5, 2024
Title: Is Your AI Model Secure? A Security Audit of AI Models
Thesis supervisor:
Hassine MOUNGLA (LIPADE)
Co-supervisor:
Saad EL JAOUHARI (LISITE)
Co-supervisor:
Nouredine TAMANI (LISITE)
Scientific domain: Information and communication sciences and technologies
CNRS theme: Systems and networks
Abstract: Artificial Intelligence (AI) is a multidisciplinary field of computer science focused on creating intelligent machines capable of performing tasks that typically require human intelligence, including learning, reasoning, problem-solving, perception, natural language understanding, and aspects of speech and visual recognition. AI is a broad umbrella term encompassing various subfields and approaches, from classical rule-based systems to advanced machine learning algorithms. Its goal is to build systems that simulate or replicate human-like cognitive processes, enabling them to adapt, improve, and perform tasks autonomously. AI is currently applied across a broad range of domains, such as healthcare, finance, autonomous vehicles, and smart homes and cities. However, its full adoption, particularly for critical use cases and applications, still faces numerous challenges. Among these, the challenges related to security, risk, ethics, and privacy are of utmost importance for building trust in AI models. They include adversarial attacks [1][2], data poisoning [3][4], model inference attacks [5], lack of explainability, and software-related vulnerabilities (CVEs) [6][7]. The security of AI systems is a complex and evolving topic; a first step toward secure AI is therefore to audit AI models. Auditing AI involves the systematic examination and evaluation of various aspects of an AI system to ensure that it meets desired standards, complies with regulations, and operates ethically. This auditing process is essential for identifying and addressing potential biases, ensuring transparency, and verifying the overall reliability of AI systems.
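To make the adversarial-attack challenge concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), one of the best-known adversarial attacks, on a toy logistic-regression model. The weights and inputs are invented for illustration only; real audits would target trained deep networks, not this hypothetical model.

```python
import numpy as np

# Hypothetical toy model: logistic regression with fixed weights.
# (Illustration only; not from any real, trained system.)
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, epsilon):
    """FGSM: nudge x by epsilon in the direction that increases the loss,
    i.e. the sign of the loss gradient w.r.t. the input.
    For cross-entropy on logistic regression, dL/dx = (p - y) * w."""
    p = predict_proba(x)
    grad = (p - y) * w
    return x + epsilon * np.sign(grad)

x = np.array([1.0, 1.0, 1.0])   # clean input, true label y = 1
y = 1.0
x_adv = fgsm_perturb(x, y, epsilon=0.9)

print(predict_proba(x))      # confident, correct prediction on clean input
print(predict_proba(x_adv))  # confidence collapses after a small perturbation
```

The key point for a security audit is that the perturbation is bounded (each feature moves by at most epsilon) yet flips the model's decision, which is exactly the kind of fragility an adversarial-robustness assessment would measure.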