Security blind spots in Machine Learning systems: modeling and securing complex ML pipelines and lifecycles

  • Cybersecurity: hardware and software
  • PhD
  • CEA-Leti
  • Grenoble
  • Level 8
  • 2024-11-01
  • MOELLIC Pierre-Alain (DRT/DSYS//LSES)

In the context of strong AI regulation at the European scale, several requirements have been proposed for the "cybersecurity of AI", and more particularly to increase the security of AI systems as a whole, not only of the core ML models. This matters all the more as we are experiencing an impressive development of large models that are deployed, then adapted to specific tasks, on a wide variety of platforms and devices. Securing the overall lifecycle of an AI system is, however, far more complex than securing the constrained, unrealistic traditional ML pipeline composed of a static training step followed by inference. In that context, there is an urgent need to focus on core operations of an ML system that are poorly studied and are real blind spots for the security of AI systems, with potentially many vulnerabilities. For that purpose, we need to model the overall complexity of an AI system through MLOps (Machine Learning Operations), which aims to encapsulate all the processes and components, including data management, deployment and inference steps, as well as the dynamicity of an AI system (regular data and model updates).

Two major "blind spots" are model deployment and system dynamicity. Regarding deployment, recent works highlight critical security issues related to model-based backdoor attacks carried out after training time by replacing small parts of a deep neural network. Other works have focused on security issues in model compression steps (quantization, pruning), which are classical operations performed to deploy a model on constrained inference devices; for example, a dormant poisoned model may become active only after pruning and/or quantization (see the sketch after the objectives below). For system dynamicity, several open questions remain concerning potential security regressions that may occur when the core models of an AI system are dynamically retrained and redeployed (e.g., because of new training data or regular fine-tuning operations).

The objectives are to:

1. Model the security of modern AI system lifecycles within an MLOps framework, and propose threat models and risk analyses for critical steps, typically model deployment and continuous training.
2. Demonstrate and characterize attacks, e.g., attacks targeting model optimization processes, fine-tuning or model updating.
3. Propose and develop protection schemes and sound evaluation protocols.
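To make the deployment compression steps concrete, here is a minimal sketch, assuming PyTorch as the deployment toolchain and a hypothetical toy model (TinyNet, layer sizes and the 30% pruning ratio are illustrative, not taken from the offer). It only shows the pruning and post-training quantization operations mentioned above; a security evaluation would compare the model's behaviour on trigger inputs before and after these steps, since a dormant backdoor may only surface in the compressed model.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical toy classifier standing in for a model about to be deployed
# on a constrained inference device.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyNet().eval()

# Deployment step 1: unstructured L1 magnitude pruning (30% of fc1 weights zeroed),
# then made permanent so the compressed weights are what actually ships.
prune.l1_unstructured(model.fc1, name="weight", amount=0.3)
prune.remove(model.fc1, "weight")

# Deployment step 2: post-training dynamic quantization of the linear layers to int8.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# A security evaluation would compare the full-precision and compressed models
# on benign and (suspected) trigger inputs to detect behaviour that only
# appears after compression.
x = torch.randn(1, 16)
print(model(x), quantized(x))
```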

Artificial Intelligence, Security
