Doctoral research project number: 8724

Description

Submission date: 10 April 2024
Title: Neuromorphic algorithms and their hardware implementation
Thesis supervisor: Haralampos STRATIGOPOULOS (LIP6)
Scientific field: Information and communication sciences and technologies
CNRS theme: Artificial intelligence

Abstract: Neuromorphic computing is an emerging computing paradigm rooted in mimicking the spike-based operation of neurons in the biological brain. A neuromorphic processor essentially maps a Spiking Neural Network (SNN). SNNs can offer orders-of-magnitude gains in energy efficiency and inference speed over conventional Artificial Neural Networks (ANNs). For these reasons, SNNs open exciting new possibilities for realizing next-generation Artificial Intelligence (AI) systems, and they show great potential for powering intelligent, autonomous edge devices with local AI processing. A major leap forward in recent years has been the development of several large-scale neuromorphic processors supported by software frameworks. In this thesis we will investigate neuromorphic algorithms and their efficient hardware implementation. The focus will be on three directions:

1) Fault-tolerant computing. Neuromorphic processors running SNN applications are subject to hardware-level faults, i.e., manufacturing defects, device variability, ageing effects, radiation-induced soft errors, etc., just like any other hardware component or general-purpose processor. There is a belief that SNNs inherit the remarkable fault-tolerance capabilities of the biological brain, and some inherent fault tolerance does exist thanks to the distributed, highly parallel processing and the overprovisioning. However, while most faults are benign, some end up being critical for the application, resulting in a drastic loss of performance, i.e., misclassification. Recent fault injection experiments on SNNs showcased this, demonstrating that saturated neuron faults in particular, i.e., neurons that constantly fire spikes even in the absence of any activity at their input, can be detrimental to the application (illustrated in the sketch below). The goal will be to develop fault-tolerant training algorithms and on-chip fault-tolerance techniques to improve the reliability of SNN applications.
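To make the fault model of direction 1 concrete, here is a minimal, self-contained sketch in plain NumPy (all names and parameters are hypothetical, not the project's actual fault-injection framework): a layer of leaky integrate-and-fire (LIF) neurons, the basic unit of an SNN, with one injected saturated fault that forces the faulty neuron to spike at every time step regardless of its input.

    import numpy as np

    rng = np.random.default_rng(0)

    def lif_layer_step(v, in_spikes, w, stuck=None, leak=0.9, threshold=1.0):
        """One discrete time step for a layer of leaky integrate-and-fire neurons.
        v: (n,) membrane potentials; in_spikes: (m,) input spikes; w: (n, m) weights.
        stuck: optional boolean mask marking saturated neurons that fire every step."""
        v = leak * v + w @ in_spikes          # leaky integration of weighted input spikes
        out = (v >= threshold).astype(float)  # fire where the threshold is crossed
        v = np.where(out > 0, 0.0, v)         # reset the neurons that fired
        if stuck is not None:
            out = np.where(stuck, 1.0, out)   # saturated fault: spike regardless of input
        return v, out

    n_in, n_out, T = 16, 4, 200
    w = rng.normal(0.0, 0.3, size=(n_out, n_in))
    stuck = np.array([True, False, False, False])  # inject a saturated fault in neuron 0

    for mask, label in [(None, "fault-free"), (stuck, "saturated neuron 0")]:
        v, counts = np.zeros(n_out), np.zeros(n_out)
        for _ in range(T):
            spikes_in = (rng.random(n_in) < 0.05).astype(float)  # sparse random input
            v, out = lif_layer_step(v, spikes_in, w, stuck=mask)
            counts += out
        print(f"{label}: spikes per neuron = {counts}")

Run over 200 steps, the faulty neuron emits 200 spikes irrespective of its input, flooding any downstream layer with spurious activity; this is the benign-versus-critical distinction that the fault-tolerance work targets.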
2) Federated learning with neuromorphic nodes. Nowadays we observe an ever-increasing number of AI applications using sensitive user data. Leakage of sensitive data, such as medical records, raises security and privacy concerns for users and legal exposure for providers. Recently, Federated Learning (FL) emerged as a promising distributed learning approach that enables learning from data belonging to multiple participants without compromising privacy, since user data is never directly exchanged. While ANNs are the de facto architectures for FL, neuromorphic architectures could be an attractive alternative, achieving faster on-line training, better energy efficiency for edge devices, stronger security against attacks such as data poisoning and backdoor attacks, and stronger privacy-preserving characteristics. Recent preliminary studies demonstrate such properties at the software level, but a demonstration on hardware, bridging the gap between theory and practice, is yet to be accomplished. The goal is to develop a hardware-aware framework for FL on neuromorphic nodes (a minimal aggregation sketch follows this list). In particular, we will study on-line training in the context of FL and the optimal information exchange between independent neuromorphic nodes for efficient collective learning while preserving data privacy. We will also investigate security and privacy threats for neuromorphic nodes, characterize their inherent security and privacy-preserving properties, and propose appropriate defenses to enhance their trustworthiness.

3) Neural architecture search. Most SNN models use ANN-like architectures; however, these are not necessarily the optimal architectures for the targeted cognitive tasks. To address this, Neural Architecture Search (NAS) approaches for finding better SNN architectures are required. NAS aims at automating the design of neural networks, deriving networks that are on par with or outperform hand-designed and conventional architectures (a search-loop sketch follows this list). The goal here is to develop efficient NAS algorithms specifically for SNNs that are hardware-friendly and, thereafter, to develop a programmable SNN hardware accelerator that can execute the generated SNN models.
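As a point of reference for direction 2, the sketch below shows federated averaging (FedAvg), a standard FL aggregation rule, applied to per-node synaptic weight matrices; the node count, matrix shapes, and local data sizes are hypothetical. Only weights cross node boundaries, never raw user data, which is the privacy property the project builds on.

    import numpy as np

    rng = np.random.default_rng(1)

    def fedavg(node_weights, node_sizes):
        """Federated averaging: combine per-node weight matrices, weighting each
        node by the amount of local data it trained on. Raw data never leaves a node."""
        total = sum(node_sizes)
        return sum((n / total) * w for w, n in zip(node_weights, node_sizes))

    # Three hypothetical neuromorphic nodes, each holding a locally trained
    # (4, 16) synaptic weight matrix after one round of on-line training.
    local_w = [rng.normal(size=(4, 16)) for _ in range(3)]
    sizes = [1000, 400, 600]  # number of local training samples per node
    global_w = fedavg(local_w, sizes)
    print(global_w.shape)  # (4, 16): broadcast back to all nodes for the next round

What is exchanged per round, and how often, is exactly the design space the thesis will study; FedAvg is only the simplest baseline.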
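For direction 3, the following sketch shows the shape of a hardware-aware NAS loop in its simplest form, random search: candidates are drawn from a hypothetical SNN search space and scored by an objective that trades accuracy against a proxy for hardware cost. The evaluate() function is a stand-in; a real loop would train and validate each candidate SNN on the target task.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical search space over SNN architecture choices.
    SPACE = {
        "n_layers":  [2, 3, 4],
        "n_neurons": [64, 128, 256],
        "threshold": [0.5, 1.0, 1.5],
        "timesteps": [8, 16, 32],
    }

    def sample_arch():
        """Draw one random candidate architecture from the search space."""
        return {k: rng.choice(v) for k, v in SPACE.items()}

    def hw_cost(arch):
        """Proxy for hardware cost: more neurons and time steps mean more
        synaptic operations, memory traffic, and energy on the accelerator."""
        return arch["n_layers"] * arch["n_neurons"] * arch["timesteps"]

    def evaluate(arch):
        """Stand-in objective: random 'accuracy' penalized by hardware cost."""
        acc = rng.random()  # a real NAS loop trains the SNN and measures accuracy
        return acc - 1e-6 * hw_cost(arch)

    best = max((sample_arch() for _ in range(50)), key=evaluate)
    print("best candidate:", best)

The hardware-cost term in the objective is what makes the search hardware-friendly: architectures that the accelerator cannot execute efficiently are penalized during the search itself rather than filtered out afterwards.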



Doctoral candidate: Malogiannis Christos