Organisation/Company: CNRS
Department: Laboratoire d'Informatique de l'Ecole Polytechnique
Research Field: Computer science; Mathematics » Algorithms
Researcher Profile: First Stage Researcher (R1)
Country: France
Application Deadline: 24 Jan 2025 - 00:00 (UTC)
Type of Contract: Temporary
Job Status: Full-time
Hours Per Week: 35
Offer Starting Date: 6 Jan 2025
Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
Is the Job related to staff position within a Research Infrastructure? No
Offer Description
LIX (Laboratoire d'Informatique de l'Ecole Polytechnique) is a joint research unit with two supervisory institutions, École Polytechnique (a member of the Institut Polytechnique de Paris) and the Centre National de la Recherche Scientifique (CNRS), and one partner, Inria Saclay, with shared buildings and mixed teams.
LIX is organized in five poles: "Computer Mathematics", "Data Analytics and Machine Learning", "Efficient and Secure Communications", "Modeling, Simulation and Learning", and "Proofs and Algorithms". The Ph.D. student will be part of the Cosynus team in the "Proofs and Algorithms" pole. The members of the Cosynus team work on the semantics and static analysis of software systems (sequential, concurrent, or distributed), as well as hybrid/control systems and cyber-physical systems.
The Ph.D. student will benefit from the exciting environment of LIX, in particular the Computer Science Department of Ecole Polytechnique (DIX), where they will be able to give courses, and the Department of Computer Science, Data and Artificial Intelligence of Institut Polytechnique de Paris (IDIA). They will also interact with the members of the SAIF project (Safe Artificial Intelligence through Formal Methods), part of the French National Research Programme on Artificial Intelligence, PEPR IA.
Abstraction-based safety verification for neural networks has received considerable attention recently, particularly reachability analysis of neural networks using polyhedral abstractions. The context of this work is to develop sound abstractions that address more general robustness properties. More specifically, the objective is to propose provably correct explanations of neural network behavior, whereas most existing techniques are heuristic.
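To fix ideas, the following minimal sketch (not part of the project's codebase) illustrates the kind of sound outer-approximation that underlies such reachability analyses: it propagates interval (box) bounds through a toy ReLU network, where boxes are one of the simplest instances of the polyhedral abstractions mentioned above. The weights, biases, and input bounds are made-up values chosen only for illustration.

    # Illustrative sketch: sound outer-approximation of a small ReLU network's
    # output range via interval bound propagation (toy weights and bounds).
    import numpy as np

    def affine_bounds(lo, hi, W, b):
        """Propagate an input box [lo, hi] through x -> W @ x + b, soundly."""
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        return new_lo, new_hi

    def relu_bounds(lo, hi):
        """ReLU is monotone, so applying it to the bounds stays sound."""
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    def network_output_box(lo, hi, layers):
        """Outer-approximate the output range of a stack of (W, b) ReLU layers."""
        for i, (W, b) in enumerate(layers):
            lo, hi = affine_bounds(lo, hi, W, b)
            if i < len(layers) - 1:      # no ReLU on the last (output) layer
                lo, hi = relu_bounds(lo, hi)
        return lo, hi

    if __name__ == "__main__":
        layers = [
            (np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])),
            (np.array([[0.7, 1.2]]),              np.array([0.05])),
        ]
        lo, hi = network_output_box(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), layers)
        print("output is guaranteed to lie in", list(zip(lo, hi)))

Every concrete input in the box is guaranteed to produce an output inside the returned interval, which is exactly the soundness property the thesis aims to exploit; the project itself targets tighter polyhedral domains and, symmetrically, inner-approximations.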
Tractable inner- and outer-approximations of the ranges of functions are a building block for proving very general quantified reachability problems. They constitute a basis from which the student is expected to design new set-based methods to tackle properties of neural networks that can be expressed as quantified reachability problems. The objectives are to identify properties of interest that can be expressed in this framework, and to design and experiment with reachability analyses, inspired by previous techniques, to rigorously assess these properties. As a starting point, we can explore fairness properties. Another axis consists in providing rigorous explainability properties of neural networks, such as abductive explanations: a minimal subset of input features that, by itself, determines the classification produced by the DNN. We can also imagine using such approaches to guide the sparsification of neural networks.
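As a purely illustrative sketch of how a sound range analysis could back such explanation queries, the following hypothetical greedy procedure searches for an abductive explanation of a binary, single-logit classifier. It reuses a box outer-approximation such as network_output_box from the previous sketch (passed in as verify_box) to check that fixing a candidate subset of features provably determines the sign of the output; all helper names are assumptions made for this example.

    # Illustrative sketch (hypothetical helpers): greedy search for an abductive
    # explanation, i.e. a subset of input features that, once fixed to their values
    # in a given input x, provably determines the classifier's decision. Each check
    # is delegated to a sound box outer-approximation (verify_box).
    import numpy as np

    def decision_is_fixed(x, fixed, domain_lo, domain_hi, layers, verify_box):
        """Free the non-fixed features over their whole domain and check, soundly,
        that the (binary, single-logit) decision cannot change."""
        lo = np.where(fixed, x, domain_lo)
        hi = np.where(fixed, x, domain_hi)
        out_lo, out_hi = verify_box(lo, hi, layers)
        return bool(out_lo[0] > 0.0) or bool(out_hi[0] < 0.0)  # sign is determined

    def abductive_explanation(x, domain_lo, domain_hi, layers, verify_box):
        """Start from all features fixed, then greedily drop those not needed.
        The result is subset-minimal for this drop order, not globally minimum."""
        fixed = np.ones_like(x, dtype=bool)
        for i in range(len(x)):
            fixed[i] = False
            if not decision_is_fixed(x, fixed, domain_lo, domain_hi, layers, verify_box):
                fixed[i] = True   # feature i is needed to pin down the decision
        return np.flatnonzero(fixed)

Because the outer-approximation may be loose, such a procedure can only over-report the features that are needed; tighter set-based analyses, as targeted in this thesis, directly translate into smaller provably correct explanations.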