
AI security: a game of weak signal design and detection

February 5, 2025

1:15 p.m.

Campus de Beaulieu, Salle Jersey, building 12D

Talk by Eva Giboulot, researcher at the Inria centre of Université de Rennes, given as part of the Computer Science department's seminar series.


The power of machine learning as a tool for automating inference and generation tasks is now well understood by both the public and governing bodies. Clear proof of this is the provisional agreement reached between the European Parliament and the Council on the AI Act, which aims to assess risks and regulate AI applications. Such regulation can only be proposed on the basis of a rigorous definition of what securing these applications entails.

This seminar aims to present the field of AI security, its current successes and open problems.

The first part of the seminar provides a comprehensive overview of the main aspects of AI security through three pillars: (1) Authenticity/Identification, (2) Integrity, and (3) Confidentiality. Securing an AI application means ensuring these properties for every element of the inference/generation pipeline: the training data, the model, and the test/generated data.

The second part of the seminar is dedicated to a thorough presentation of important current problems: the traceability of generated content, adversarial perturbations, and data poisoning attacks.

We frame these problems from the perspective of designing and detecting weak signals, in the context of a dynamic game between an attacker and a defender.
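As a purely illustrative aside (not part of the seminar material), the Python/NumPy sketch below shows the attacker's side of this framing on a toy linear classifier: an FGSM-style perturbation is a deliberately weak signal, small in norm, yet sufficient to flip the model's decision. The classifier weights, the input, and the budget eps are assumptions chosen only for this example.

import numpy as np

# Toy linear classifier: p(y = 1 | x) = sigmoid(w.x + b)
# (the weights, bias, input and epsilon below are illustrative choices)
w = np.array([1.5, -2.0])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

# A correctly classified input with true label y = 1
x = np.array([0.6, 0.1])
y = 1.0
print("clean score:", predict(x))            # ~0.69, classified as 1

# Gradient of the cross-entropy loss with respect to the *input*:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (predict(x) - y) * w

# FGSM-style step: a small, sign-based perturbation -- the attacker's
# "weak signal" -- crafted to push the decision across the boundary.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)
print("adversarial score:", predict(x_adv))  # ~0.44, now classified as 0
print("L_inf perturbation:", np.max(np.abs(x_adv - x)))  # equals eps

In this game, the defender's task is the symmetric one: detecting such low-amplitude signals, for instance by flagging inputs whose scores sit unusually close to the decision boundary.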
Theme(s)
Training, Research - Technology transfer
Contact
David Pichardie

Updated on May 13, 2025