
The wall of safety for AI: approaches in the confiance.ai program

Abstract : AI is advancing at high pace towards several "walls". Apart from social and ethical considerations, these walls concern several closely interdependent subjects, each drawing attention from the AI community in use, design and research alike: trust, safety, security, energy, human-machine cooperation, and "inhumanity". Safety is a particularly important question for all of them. The Confiance.ai industrial program aims at solving some of these issues by developing seven interrelated projects that address these aspects from different viewpoints and integrate them into an engineering environment for AI-based systems. We present the concrete approach taken by confiance.ai and the validation strategy, based on real-world industrial use cases provided by the members.

The walls of AI and their relation with safety

Artificial intelligence is advancing at a very fast pace, both in research and in applications, and is raising societal questions that are far from being answered. But as it moves forward rapidly, it runs into what we call the five walls of AI, walls it is likely to crash into if precautions are not taken. Any one of these five walls could halt its progress, which is why it is essential to know what they are and to seek answers, in order to avoid a so-called third winter of AI: a winter that would follow the first two, in the 1970s and 1990s, during which AI research and development came to a virtual standstill for lack of budget and community interest. The five walls are those of trust, energy, safety, human interaction and inhumanity. Each contains a number of ramifications, and they obviously interact.
Document type :
Conference papers

https://hal.archives-ouvertes.fr/hal-03657293
Contributor : Patrice Aknin
Submitted on : Monday, May 2, 2022 - 6:33:13 PM
Last modification on : Thursday, May 5, 2022 - 3:41:08 AM

File

SafeAI_2022_paper_49.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03657293, version 1

Citation

Bertrand Braunschweig, Rodolphe Gelin, François Terrier. The wall of safety for AI: approaches in the confiance.ai program. Workshop on Artificial Intelligence Safety (SAFEAI), Feb 2022, virtual, Canada. ⟨hal-03657293⟩
