Organisation/Company: Inria, the French national research institute for the digital sciences
Research Field: Computer science
Researcher Profile: First Stage Researcher (R1)
Country: France
Application Deadline: 23 Dec 2024 - 23:00 (UTC)
Type of Contract: Temporary
Job Status: Full-time
Hours Per Week: 38.5
Offer Starting Date: 1 Oct 2024
Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
Reference Number: 2023-05930
Is the Job related to staff position within a Research Infrastructure? No

Offer Description

Context and background: Nowadays, there is a growing and irreversible need to distribute Artificial Intelligence (AI) applications from the cloud to edge devices, where computation is largely or completely performed on distributed Internet of Things (IoT) devices. This trend aims to address data privacy, bandwidth limitations, power consumption, and low-latency requirements, especially for real-time, mission- and safety-critical applications (e.g., autonomous driving, gesture recognition, medical diagnosis, smart power grids, or preventive maintenance).

The direct consequence is the intense activity in designing custom and embedded Artificial Intelligence HardWare architectures (AI-HW) to support the energy-intensive data movement, the computation speed, and the large memory resources that AI requires to achieve its full potential. Moreover, explaining AI decisions, referred to as eXplainable AI (XAI), is highly desirable in order to increase trust in and transparency of AI, to use AI safely in the context of critical applications, and to further expand AI application areas. XAI has thus become an area of intense interest.

AI-HW, like traditional computing hardware, is subject to faults that can have several sources: variability in fabrication process parameters, latent defects, or environmental stress. One overlooked aspect is the role that hardware faults can play in AI decisions. Indeed, there is a common belief that AI applications have an intrinsic high level of resilience w.r.t. errors and noise. However, recent studies in the scientific literature have shown that AI-HW is not always immune to hardware errors. This can jeopardize the entire effort of building explainable AI, rendering any attempt at explainability either inconclusive or misleading. In other words, AI algorithms retain their accuracy and explainability only under the condition that the hardware on which they are executed is fault-free.

Therefore, before explaining the decision of an AI algorithm in order to gain confidence and trust in it, the reliability of the hardware executing the AI algorithm must first be guaranteed, even in the presence of hardware faults. In this way, the trust and transparency of an implemented AI model can be ensured, not only in the context of mission- and safety-critical applications, but also in our everyday life.
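
As a purely illustrative sketch (not part of the official offer text), the phenomenon at the heart of this topic can be shown in a few lines of Python: a single bit flip in one stored weight of a toy linear classifier, as a hardware fault could cause, is enough to change its decision and thus invalidate any explanation of that decision. The model, weights, and inputs below are hypothetical.

    # Illustrative only: a single bit flip in a stored weight (as a hardware
    # fault could cause) changes the decision of a toy linear classifier.
    import numpy as np

    def flip_bit(value, bit):
        """Flip one bit of the IEEE-754 encoding of a 32-bit float."""
        as_int = np.frombuffer(np.float32(value).tobytes(), dtype=np.uint32)[0]
        flipped = as_int ^ np.uint32(1 << bit)
        return np.frombuffer(np.uint32(flipped).tobytes(), dtype=np.float32)[0]

    # Toy classifier: decision = 1 if w.x + b > 0 else 0 (hypothetical values).
    w = np.array([0.8, 0.5, 0.3], dtype=np.float32)
    b = np.float32(0.1)
    x = np.array([1.0, -1.0, 0.5], dtype=np.float32)

    clean_score = float(w @ x + b)            # ~ +0.55 -> class 1

    w_faulty = w.copy()
    w_faulty[0] = flip_bit(w[0], 31)          # flip the sign bit of the first weight
    faulty_score = float(w_faulty @ x + b)    # ~ -1.05 -> class 0

    print("clean :", clean_score, "-> class", int(clean_score > 0))
    print("faulty:", faulty_score, "-> class", int(faulty_score > 0))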