Multimodal continual learning under constraints

  • Artificial intelligence & Data intelligence
  • PhD
  • CEA-List
  • Paris-Saclay
  • Level 7
  • 2024-10-01
  • AUDIGIER Romaric (DRT/DIASI//LVIC/SAC)

Standard deep learning methods are designed for static data, which is a significant practical limitation when they are deployed in dynamic environments and confronted with previously unseen data. Continual learning offers a solution to this problem, in particular when combined with large pre-trained models. However, deploying such models in stand-alone mode is currently impossible in many frugal applications that impose tight computational and/or memory constraints. Furthermore, most current methods are developed for a single modality (text or vision), whereas captured data are often multimodal. This thesis addresses several objectives enabling the practical deployment of agents capable of updating their representations under constraints:

(1) the collection of domain-oriented corpora and their augmentation with generative multimodal models;
(2) the compression of foundation models to adapt them to the target domain and make them usable under computational and/or memory constraints;
(3) the design of efficient continual learning methods to handle new multimodal data (see the sketch below);
(4) the management of realistic data streams to account for the specificities of different application contexts.
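To make objective (3) more concrete, the sketch below illustrates one common family of continual learning methods: rehearsal with a fixed-size replay buffer, which bounds memory use while learning from a data stream. It is a minimal, hypothetical Python/PyTorch example; the linear model, buffer capacity, loss combination, and synthetic stream are assumptions made for illustration and are not taken from the thesis description.

# Minimal, hypothetical sketch of rehearsal-based continual learning with a
# fixed-size replay buffer. Model, dimensions, and hyperparameters are
# illustrative assumptions, not part of the thesis description.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-sampling buffer that caps memory at `capacity` samples."""
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.data = []      # stored (x, y) pairs
        self.seen = 0       # number of samples observed so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)   # replace a random old sample

    def sample(self, k):
        k = min(k, len(self.data))
        xs, ys = zip(*random.sample(self.data, k))
        return torch.stack(xs), torch.stack(ys)

def continual_step(model, optimizer, buffer, x_new, y_new, replay_size=16):
    """One update on incoming data, regularized by replayed past samples."""
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.data:
        x_old, y_old = buffer.sample(replay_size)
        loss = loss + F.cross_entropy(model(x_old), y_old)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for x, y in zip(x_new, y_new):
        buffer.add(x.detach(), y.detach())
    return loss.item()

# Toy usage on a synthetic stream of 10-dimensional inputs and 5 classes.
model = nn.Linear(10, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
buffer = ReplayBuffer(capacity=200)
for _ in range(100):                      # simulated data stream
    x_new = torch.randn(8, 10)            # batch of new observations
    y_new = torch.randint(0, 5, (8,))     # their labels
    continual_step(model, optimizer, buffer, x_new, y_new)

The fixed buffer capacity is what makes the memory footprint constant regardless of stream length; more elaborate methods in the same spirit trade buffer size against forgetting.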

Master's degree in computer science or artificial intelligence

