Artificial intelligence, a newcomer among the big risks – Le Monde

Artificial intelligence is learning how to detect wildfires from professionals and is now coming online to help. A Cal Fire fire captain monitors screens at Cal Fire San Diego County headquarters in El Cajon, California, U.S., August 4, 2023. MIKE BLAKE / Portal

Cybersecurity risks have long ranked among the threats of greatest concern to both experts in the field and the general public. Yet the annual report of the Axa insurance group on the “major risks of tomorrow,” published on Sunday, October 29, brings a new concern to the forefront: artificial intelligence (AI).


The study, based on a dual survey of 3,226 risk and insurance professionals in fifty countries on the one hand and 19,000 members of the general public in fifteen major countries on the other, still ranks climate change as the top cause of concern, ahead of cybersecurity risks and geopolitical instability.

However, AI and big data jump from fourteenth to fourth place among experts and from nineteenth to eleventh among the general public. In both categories, this growing fear is leading to strong distrust: 64% of experts and 70% of the general public believe a “pause” in research on AI and other so-called “disruptive” technologies is necessary.

“The feeling that new technologies create more risks than they solve has increased extremely sharply since 2020. Half of the world’s population distrusts technological progress,” explains Etienne Mercier, head of opinion at the Ipsos Institute, which conducted the surveys. This new wariness is driving an “explosion” in feelings of vulnerability in the face of these risks and their impacts, all the more so as both experts and the public see the authorities as ill-prepared.

Reducing the sense of geopolitical instability

The concern raised by artificial intelligence argues for the authorities to swiftly implement the European regulation of these technologies, even though Axa itself insists that this must be done “responsibly and justifiably.”

“We should not impose overly strict regulations on this issue, because we are competing with the United States and China, and our companies need to develop applications and have data with which to develop them,” says Frédéric de Courtois, deputy general manager of the insurer. The call for a “pause” in AI is indeed somewhat less pronounced in France than elsewhere, with only four in ten experts and two-thirds of the general population supporting it.
