This post-doctoral fellowship is part of the AI-NRGY research project, which aims to propose an AI-based distributed architecture for intelligent energy systems composed of a large number of dynamic components (e.g. smart grids, electric vehicles, renewable energy sources). More specifically, the aim of this post-doc is to protect AI-based services against malicious disruptions that could affect the essential functionality of energy systems.

Given the ubiquity of AI in modern digitised systems, its potential corruption poses a major threat to critical infrastructures. Two types of threats can be investigated: privacy threats (such as model inversion or data mining) and security threats (such as evasion attacks or data poisoning). Privacy threats have been widely addressed by the scientific community, and the CEA has conducted extensive work on integrating and optimising robust cybersecurity primitives. However, emerging security threats such as model poisoning (which arises from data poisoning) and adversarial attacks now require additional attention. Data poisoning is a cyber attack that can simply compromise the convergence of the learning phase and yield underperforming models, but it can also be used to embed a 'backdoor' into the learned model, allowing its output to be manipulated at will.

This post-doctoral position will enable the candidate to carry out theoretical and applied research on privacy and security in distributed machine learning, particularly in the context of intelligent energy systems. More specifically, the candidate will study the potential threats to distributed/federated learning and propose solutions to defend against the attacks identified as most relevant.
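To make the backdoor variant of data poisoning concrete, the following is a minimal, self-contained sketch: a toy logistic-regression classifier (plain NumPy, no project code) trained on data where an attacker has injected a handful of points carrying a "trigger" feature and a forced label. All names, dataset sizes, and hyperparameters below are illustrative assumptions, not part of the AI-NRGY project itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): 2 informative features plus a third
# "trigger" feature, which is 0 on all legitimate samples.
def make_data(n):
    X0 = rng.normal(-2.0, 1.0, size=(n, 2))   # class 0 cluster
    X1 = rng.normal(+2.0, 1.0, size=(n, 2))   # class 1 cluster
    X = np.hstack([np.vstack([X0, X1]), np.zeros((2 * n, 1))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train_logreg(X, y, lr=0.5, steps=2000):
    # Plain batch gradient descent on the logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

# Backdoor poisoning: the attacker duplicates some class-0 points,
# switches the trigger feature on, and relabels them as class 1.
n_poison = 60
poison = X_train[y_train == 0][:n_poison].copy()
poison[:, 2] = 1.0
X_poisoned = np.vstack([X_train, poison])
y_poisoned = np.concatenate([y_train, np.ones(n_poison)])

w, b = train_logreg(X_poisoned, y_poisoned)

# The backdoored model still looks accurate on clean test data...
acc_clean = (predict(w, b, X_test) == y_test).mean()

# ...but setting the trigger flips class-0 inputs to the attacker's class.
triggered = X_test[y_test == 0].copy()
triggered[:, 2] = 1.0
attack_success = (predict(w, b, triggered) == 1).mean()

print(f"clean accuracy: {acc_clean:.2f}, attack success: {attack_success:.2f}")
```

The point of the sketch is that such an attack is hard to detect from held-out accuracy alone: the model behaves normally on clean inputs, and the manipulation only surfaces when the trigger is present. Defences studied in federated settings typically act on the aggregation step (e.g. robust aggregation or anomaly detection on client updates) rather than on test accuracy.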
PhD in a field related to artificial intelligence, applied mathematics, or computer science
Talent impulse, the scientific and technical job board of CEA's Technology Research Division