Integrity, availability and confidentiality of embedded AI in post-training stages

  • Cyber security: hardware and software
  • PhD
  • Grenoble
  • Level 7
  • 2025-10-01
  • MOELLIC Pierre-Alain (DRT/DSYS/LSES)

In the context of strong European-scale regulation of AI, several requirements have been proposed for the "cybersecurity of AI", and more particularly to increase the security of complex modern AI systems. Indeed, we are witnessing an impressive development of large models (so-called "Foundation" models) that are deployed at large scale and adapted to specific tasks on a wide variety of platforms and devices. Today, models are optimized to be deployed, and even fine-tuned, on constrained platforms (memory, energy, latency) such as smartphones and many connected devices (home, health, industry…). However, securing such AI systems is a complex process, with multiple attack vectors against their integrity (fooling predictions), availability (degrading performance, adding latency) and confidentiality (reverse engineering, privacy leakage). In the past decade, the adversarial machine learning and privacy-preserving machine learning communities have reached important milestones by characterizing attacks and proposing defense schemes. Essentially, these threats concern the training and inference stages. However, new threats are surfacing related to the use of pre-trained models, their insecure deployment, and their adaptation (fine-tuning). Moreover, additional security issues arise because deployment and adaptation may be "on-device" processes, for instance with cross-device federated learning. In that context, models are compressed and optimized with state-of-the-art techniques (e.g., quantization, pruning, Low-Rank Adaptation) whose influence on security needs to be assessed; the sketches after this description illustrate two of the mechanisms involved.

The objectives are to:

  1. Propose threat models and risk analyses for the critical steps, typically model deployment and continuous training, involved in deploying and adapting large foundation models on embedded systems (e.g., advanced microcontrollers with hardware accelerators, SoCs).
  2. Demonstrate and characterize attacks, with a focus on model-based poisoning.
  3. Propose and develop protection schemes and sound evaluation protocols.
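To make the adaptation setting concrete, here is a minimal PyTorch sketch of Low-Rank Adaptation applied to a single linear layer. The class name LoRALinear and all hyperparameters are illustrative choices, not part of the proposal; the point is that on-device fine-tuning only updates the small A and B matrices, which therefore become part of the attack surface the thesis would study.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen pre-trained linear layer plus a trainable low-rank update."""
        def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pre-trained weights stay frozen
            # Only A and B are trained on-device: a small, exposed surface.
            self.A = nn.Parameter(0.01 * torch.randn(rank, base.in_features))
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # y = W x + (alpha / rank) * B A x  (the low-rank correction)
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    layer = LoRALinear(nn.Linear(128, 64))
    y = layer(torch.randn(2, 128))  # output shape: (2, 64)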
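Model-based poisoning can likewise be sketched in a few lines. Assuming a plain federated-averaging server (the fedavg helper below is hypothetical), a single malicious client in a cross-device round can boost its update so that the aggregate equals an attacker-chosen delta, a "model replacement" style attack; real deployments and defenses are considerably more involved.

    import torch

    def fedavg(updates: list[torch.Tensor]) -> torch.Tensor:
        # Server-side aggregation of one round of client weight deltas.
        return torch.stack(updates).mean(dim=0)

    n_clients, dim = 10, 100
    honest = [0.01 * torch.randn(dim) for _ in range(n_clients - 1)]

    # The attacker picks a target delta and scales its own contribution
    # so the averaging step yields exactly that delta.
    target_delta = torch.full((dim,), 0.5)
    malicious = n_clients * target_delta - torch.stack(honest).sum(dim=0)

    aggregate = fedavg(honest + [malicious])
    assert torch.allclose(aggregate, target_delta, atol=1e-4)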

Artificial Intelligence, Security
