Thesis

Security-by-design for embedded deep neural network models on RISC-V

In the context of European-scale regulation of Artificial Intelligence (AI), several requirements have been proposed for the "cybersecurity of AI". Among the most important concepts related to the security of machine learning models and AI-based systems, "security-by-design" is mostly associated with model hardening approaches (e.g., adversarial training against evasion attacks, differential privacy against confidentiality attacks). We propose to cover a wider panorama of "security-by-design" by studying software (SW) and hardware (HW) mechanisms to strengthen the intrinsic robustness of embedded AI-based systems on RISC-V platforms. The objectives are: (1) define and model SW and HW vulnerabilities of embedded models, (2) develop and evaluate protections, and (3) demonstrate the impact of SW and HW protections - and their combination - against state-of-the-art attacks such as weight-based adversarial attacks and model extraction.
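As a point of reference for the model hardening approaches mentioned above, the sketch below illustrates adversarial training with FGSM perturbations in PyTorch. It is a minimal, illustrative example only, not part of the thesis proposal: the toy model, random data, and epsilon value are placeholders standing in for an embedded DNN and its dataset.

```python
# Minimal sketch of FGSM-based adversarial training (illustrative only).
# All names, shapes, and hyperparameters here are assumptions, not taken
# from the thesis text.
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy classifier and random data stand in for an embedded DNN and its dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.rand(32, 1, 28, 28)      # batch of fake images
y = torch.randint(0, 10, (32,))    # fake labels

for step in range(5):
    x_adv = fgsm_perturb(model, loss_fn, x, y)  # generate adversarial inputs
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)             # train on the perturbed batch
    loss.backward()
    optimizer.step()
```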

