Defense of scene analysis models against adversarial attacks

  • Cyber security: hardware and software
  • PhD
  • Paris-Saclay
  • Level 7
  • 2024-10-01
  • AUDIGIER Romaric (DRT/DIASI//LVA)

Many applications rely on scene analysis modules such as object detection and recognition or pose recognition. Deep neural networks are currently among the most effective models for a large number of vision tasks, sometimes handling several at once through multitask learning. However, they have been shown to be vulnerable to adversarial attacks: perturbations imperceptible to the human eye can be added to the input data and undermine the results produced by the network at inference time. Yet a guarantee of reliable results is essential for applications such as autonomous vehicles or person search in video surveillance, where security is critical.

Different types of adversarial attacks and defenses have been proposed, most often for the classification problem (of images, in particular). Some works have addressed attacks on embeddings optimized by metric learning, which are used especially for open-set tasks such as object re-identification, face recognition or content-based image retrieval. The types of attacks have multiplied: some are universal, others are optimized for a particular instance. The proposed defenses must cope with new threats without sacrificing too much of the model's initial performance.

Protecting input data from adversarial attacks is essential for decision systems where security vulnerabilities are critical, and one way to achieve this is to develop defenses against these attacks. The objective of this thesis will therefore be to study and propose attacks and defenses applicable to scene analysis modules, especially those for object detection and object instance search in images.
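The kind of perturbation mentioned above can be illustrated with a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. This is only an assumed, illustrative example of an imperceptible input perturbation against a generic image classifier; `model`, `images` and `labels` are placeholder names, and this is not presented as the specific method to be studied in the thesis.

```python
# Minimal FGSM sketch (illustrative assumption, not the thesis method).
# Assumes `model` is a PyTorch image classifier and `images`/`labels`
# form a labelled batch with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """Return images shifted by one signed-gradient step of size epsilon."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a small epsilon, the perturbed images are visually indistinguishable from the originals, yet they can already change the network's predictions, which is the vulnerability the proposed defenses must address.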

Master 2 or engineering degree with solid experience in computer vision and deep learning

