Organisation/Company: CNRS
Department: Laboratoire d'Informatique de l'Ecole Polytechnique
Research Field: Computer science Mathematics » Algorithms
Researcher Profile: First Stage Researcher (R1)
Country: France
Application Deadline: 24 Jan 2025 - 00:00 (UTC)
Type of Contract: Temporary
Job Status: Full-time
Hours Per Week: 35
Offer Starting Date: 6 Jan 2025
Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
Is the Job related to staff position within a Research Infrastructure? No
Offer Description
LIX (Laboratoire d'Informatique de l'Ecole Polytechnique) is a joint research unit with two supervisory institutions, École Polytechnique (a member of the Institut Polytechnique de Paris) and the Centre National de la Recherche Scientifique (CNRS), and one partner, Inria Saclay, with shared buildings and mixed teams.
LIX is organized in five poles: "Computer Mathematics", "Data Analytics and Machine Learning", "Efficient and Secure Communications", "Modeling, Simulation and Learning", and "Proofs and Algorithms". The Ph.D. student will join the Cosynus team in the "Proofs and Algorithms" pole, which works on the semantics and static analysis of software systems, including sequential, concurrent, and distributed programs as well as hybrid/control and cyber-physical systems.
The Ph.D. student will benefit from the exciting environment of LIX, in particular the Computer Science Department of École Polytechnique (DIX), where they will have opportunities to teach, and the Department of Computer Science, Data and Artificial Intelligence of Institut Polytechnique de Paris (IDIA). They will also interact with members of the SAIF project (Safe Artificial Intelligence through Formal Methods) of the French National Research Programme on Artificial Intelligence (PEPR IA).
AI is increasingly embedded in everyday applications, making it crucial to verify the correct behavior of neural networks in critical situations such as control and motion planning for autonomous cars. For applications like perception in autonomous systems, only probabilistic safety guarantees can be hoped for. Moreover, precise models of real-world systems may not be available, so both probabilistic information and epistemic uncertainty must be taken into account. Recent approaches based on imprecise probabilities or probability boxes capture both kinds of uncertainty by defining sets of probability distributions, enabling quantitative verification of neural networks.
This approach has proven more general and computationally efficient than the state of the art, but challenges remain for real-world applicability. Potential extensions to study include:
- Investigating abstractions beyond the current constant-stepsize staircase discretization;
- Handling multivariate input distributions using copulas.
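To make the notion of a probability box concrete, here is a minimal illustrative sketch (not the project's actual implementation): a p-box is represented as pointwise lower and upper bounds on the cumulative distribution function over a family of distributions, evaluated on a constant-stepsize grid; the function names and the sample-based construction are assumptions for illustration only.

```python
import numpy as np

def pbox_from_samples(sample_sets, grid):
    """Build a p-box: pointwise lower/upper CDF bounds over a family of
    distributions, each given here by an array of samples (illustrative)."""
    cdfs = np.array([[np.mean(s <= x) for x in grid] for s in sample_sets])
    return cdfs.min(axis=0), cdfs.max(axis=0)  # lower and upper CDF bounds

def contains(lower, upper, samples, grid):
    """Check whether a distribution's empirical CDF lies inside the p-box."""
    cdf = np.array([np.mean(samples <= x) for x in grid])
    return bool(np.all((lower <= cdf) & (cdf <= upper)))

# Constant-stepsize "staircase" discretization of the real line.
grid = np.linspace(-3.0, 3.0, 25)
rng = np.random.default_rng(0)
# An imprecise model: a set of Gaussians with uncertain (epistemic) mean.
family = [rng.normal(mu, 1.0, 1000) for mu in (-0.2, 0.0, 0.2)]
low, up = pbox_from_samples(family, grid)
print(contains(low, up, family[1], grid))  # a member distribution fits the box
```

Any single distribution in the family lies between the bounds by construction; verification techniques in this style then propagate such bounds through the network rather than a single distribution.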
A longer-term objective is to extend this approach to the analysis of Bayesian neural networks, where network weights and biases are defined by multivariate (imprecise) probability distributions. The project will also explore applications to:
- The safety of autonomous systems, particularly the robustness of perception and decision-making in drones with imprecise probabilistic information on trajectories;
- Fairness analysis and explainability of network behavior.