Organisation/Company: Inria, the French national research institute for the digital sciences
Research Field: Computer science
Researcher Profile: First Stage Researcher (R1)
Country: France
Application Deadline: 29 Nov 2024 - 23:00 (UTC)
Type of Contract: Temporary
Job Status: Full-time
Hours Per Week: 38.5
Offer Starting Date: 1 Mar 2025
Is the job funded through the EU Research Framework Programme? Horizon Europe
Reference Number: 2024-08305
Marie Curie Grant Agreement Number: 101168816
Is the Job related to staff position within a Research Infrastructure? No
Offer Description: The position is part of a new Marie Curie Training Network called FINALITY, in which Inria joins forces with top universities and industrial partners, including IMDEA, KTH, TU Delft, the University of Avignon (Project Leader), the Cyprus Institute, Nokia, Telefonica, Ericsson, Orange, and others. The PhD students will have opportunities for internships with other academic and industrial partners and will be able to participate in thematic summer schools and workshops organized by the project.
Only candidates who have spent less than one year in France during the last three years are eligible (MSCA mobility rule).
The candidate will receive a monthly living allowance of about €2,735, a mobility allowance of €414, and, if applicable, a family allowance of €458 (gross amounts).
This thesis focuses on advancing online learning algorithms that offer theoretical guarantees against an adversary who selects the sequence of inputs with the goal of degrading system performance. Such adversarially robust algorithms are particularly beneficial in scenarios characterized by highly dynamic user demands and/or rapidly evolving network conditions.
A key metric for evaluating the robustness of these algorithms is regret, which measures how much the algorithm's accumulated cost exceeds that of the optimal static policy in hindsight (i.e., one that has prior knowledge of the entire input sequence). The objective is to develop algorithms whose regret grows sublinearly in the length of the input sequence, ensuring that their average per-input cost asymptotically approaches that of the best static policy.
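As an illustrative formalization (the notation below is introduced here and does not appear in the offer text): if the algorithm plays decisions x_t from a feasible set X and the adversary reveals cost functions f_t, the static regret after T rounds can be written as

    R_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} f_t(x),

and sublinear regret means R_T = o(T), i.e., the time-averaged excess cost R_T / T vanishes as T grows.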
Online gradient descent, follow-the-perturbed-leader, and follow-the-regularized-leader are examples of algorithms that achieve sublinear regret in practical applications. However, their computational and memory requirements often exceed the capacities of edge devices and/or are incompatible with tight latency constraints, largely due to large state storage and/or projection operations over the feasible state space.
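For illustration only (this sketch and its parameter choices are assumptions, not part of the offer): a minimal projected online gradient descent in Python, with the feasible set taken, hypothetically, to be the probability simplex. It shows why keeping the full iterate in memory and projecting at every round can dominate the cost on constrained devices.

    import numpy as np

    def project_simplex(v):
        # Euclidean projection onto the probability simplex
        # (sort-based method, O(d log d) per call).
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
        theta = (css[rho] - 1.0) / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    def online_gradient_descent(gradients, d, eta=0.1):
        # gradients: iterable of gradient vectors g_t, each revealed
        # only after the decision x_t has been played.
        x = np.ones(d) / d                    # start at the uniform point
        decisions = []
        for g in gradients:
            decisions.append(x.copy())
            x = project_simplex(x - eta * g)  # gradient step + projection
        return decisions

In this sketch, both the state kept between rounds and the per-round projection grow with the problem dimension, which is precisely the kind of overhead the thesis seeks to reduce.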
This thesis aims to design online learning algorithms optimized for reduced memory and computational overhead, making them more suitable for resource-constrained and latency-sensitive environments. Initial strategies for complexity reduction include batch processing of inputs, input sampling, and constraint relaxation. Building on these approaches, this work will explore novel methods to further streamline these algorithms while preserving robust performance.
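As a purely illustrative example of the first strategy above (the batching scheme shown is an assumption, not the thesis method): performing one projected update per batch of inputs reduces the number of projections from T to roughly T divided by the batch size.

    import numpy as np

    def batched_ogd(gradients, d, project, batch_size=10, eta=0.1):
        # Online gradient descent with one projected update per batch:
        # about T / batch_size projections for an input sequence of length T.
        x = np.ones(d) / d                  # assumes the uniform point is feasible
        decisions, acc, count = [], np.zeros(d), 0
        for g in gradients:
            decisions.append(x.copy())      # the same decision is replayed within a batch
            acc += g
            count += 1
            if count == batch_size:
                x = project(x - eta * acc)  # single projection for the whole batch
                acc, count = np.zeros(d), 0
        return decisions

For the simplex example sketched earlier, project_simplex can be passed as the project argument; the trade-off is a coarser update schedule, which typically worsens the regret bound by a factor depending on the batch size.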
Minimum Requirements:
The candidate should have a solid mathematical background (in particular in optimization) and, more generally, be keen on using mathematics to model real problems and gain insights. They should also be knowledgeable about machine learning and have good programming skills. We expect the candidate to be fluent in English.
Languages:
FRENCH Level: Basic
ENGLISH Level: Good
Additional Information: Partial reimbursement of public transport costs
Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
Possibility of teleworking and flexible organization of working hours
Social, cultural and sports events and activities
Access to vocational training
Contribution to mutual insurance (subject to conditions)
Selection process: Applications must be submitted online via the Inria website. Processing of applications sent through other channels is not guaranteed.