The wall of safety for AI: approaches in the confiance.ai program - IRT SystemX
Conference paper, 2022

The wall of safety for AI: approaches in the confiance.ai program

Bertrand Braunschweig
  • Role: Author
  • PersonId: 1133686
Rodolphe Gelin
  • Role: Author
  • PersonId: 1133687

Abstract

AI faces several "walls" towards which it is advancing at high pace. Beyond social and ethical considerations, these walls concern several interdependent subjects, each gathering attention from the AI community in use, design, and research: trust, safety, security, energy, human-machine cooperation, and "inhumanity". Safety questions are particularly important for all of them. The Confiance.ai industrial program aims at solving some of these issues through seven interrelated projects that address these aspects from different viewpoints and integrate them into an engineering environment for AI-based systems. We present the concrete approach taken by Confiance.ai and a validation strategy based on real-world industrial use cases provided by the members.

The walls of AI and their relation with safety

Artificial intelligence is advancing at a very fast pace, both in research and in applications, and is raising societal questions that are far from being answered. But as it moves forward rapidly, it runs into what we call the five walls of AI, walls that it is likely to crash into if we do not take precautions. Any one of these five walls is capable of halting its progress, which is why it is essential to know what they are and to seek answers, in order to avoid a so-called third winter of AI. Such a winter would follow the first two, in the 1970s and 1990s, during which AI research and development came to a virtual standstill for lack of budget and community interest. The five walls are those of trust, energy, safety, human interaction, and inhumanity. Each contains a number of ramifications, and they obviously interact.
Main file

SafeAI_2022_paper_49.pdf (243.56 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03657293, version 1 (02-05-2022)

Identifiers

  • HAL Id: hal-03657293, version 1

Cite

Bertrand Braunschweig, Rodolphe Gelin, François Terrier. The wall of safety for AI: approaches in the confiance.ai program. Workshop on Artificial Intelligence Safety (SAFEAI), Feb 2022, virtual, Canada. ⟨hal-03657293⟩
