This Year, AI Accidents Are Expected to Soar

As the world pushes the boundaries and broadens the use of AI systems, there is a corresponding rise in AI-related accidents, near misses and even deaths. Such incidents, ranging from self-driving car crashes to chat systems producing racist content, are set to increase rapidly, according to experts tracking AI problems.

The development and rollout of AI systems in 2023 has been remarkable: since OpenAI released ChatGPT, there has been a rush to build machine-learning models across many fields, including image generation, task automation and finance. But with the exponential rise in AI deployment comes a corresponding trail of what might best be described as unfortunate events, some with tragic consequences.

The AI Incident Database tracks mistakes, near misses and critical incidents caused by AI systems, and it includes some shocking events. Overall, it contains more than 500 incidents, and they are increasing rapidly. For 2022, the database records 90 incidents. For just the first three months of 2023, there are already 45, meaning that at the current rate we are on track for around 180 in 2023, and that is if the use of AI were to stay constant, which it clearly is not.

Sean McGregor, founder of the AI Incident Database project, who holds a Ph.D. in machine learning, told Newsweek that “we expect AI incidents to far more than double in 2023 and are preparing for a world where AI incidents are likely to follow some version of Moore’s law.”

Moore’s law was the prediction made in 1965 by Intel co-founder Gordon Moore that the number of transistors on an integrated circuit would double around every two years, and hence that computing speed and capability would grow in the same way. If AI incidents follow a similar exponential path, their numbers will climb steeply.
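
For readers who want to check the arithmetic, the sketch below reproduces the projection in Python. It is a back-of-the-envelope illustration, not code from the AI Incident Database; the only inputs are the two figures quoted above (90 incidents in 2022 and 45 in the first quarter of 2023), and all names in it are our own.

```python
# Back-of-the-envelope projection using only the two figures quoted in the
# article; everything else here is illustrative.

INCIDENTS_2022 = 90      # full-year 2022 count in the database
INCIDENTS_Q1_2023 = 45   # count for January through March 2023

# Constant-pace projection: assume the first-quarter rate holds all year.
run_rate_2023 = INCIDENTS_Q1_2023 * 4           # 45 * 4 = 180

# The flat projection is already exactly double the 2022 total, so even with
# no acceleration at all, 2023 implies a year-over-year doubling, consistent
# with McGregor's expectation that incidents will "far more than double."
growth_factor = run_rate_2023 / INCIDENTS_2022  # 180 / 90 = 2.0

print(f"2023 at a constant Q1 pace: {run_rate_2023} incidents")
print(f"Implied year-over-year growth: {growth_factor:.1f}x")
```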

More widespread use of AI will create more opportunities for error, but as McGregor explained, there is no way to measure how much AI is being deployed in the way you can in other business sectors: “On a per mile basis, aviation is safer than ever. The problem with AI incident trend analysis is that nobody has a great measure of the ‘distance’ AI is traveling. We know that the number of intelligent systems in the world is exploding, but nobody can say, ‘AI is 30 percent safer than the year before,’ because we only observe the failures and not the successes.”

Some of those failures are socially unacceptable but non-fatal, such as when Google’s Photos software labeled Black people as “gorillas” in 2015, or the recruiting tool that Amazon had to shut down in 2018 after it proved sexist, marking down female candidates.

Others are fatal: a robot at a Volkswagen plant killed a worker by pinning him to a metal plate, and in 2019 a Tesla Model S driver in Autopilot mode reportedly went through a red light and crashed into another car in Gardena, California, killing two people.

Also included in the list is the fatal Lion Air flight of 2018 that killed all 189 people on board. The Boeing 737 crashed into the sea after faulty sensor data caused an automated maneuvering system to repeatedly push the plane’s nose downward.

The database opens with a close shave whose consequences would have been graver than anything humanity has ever faced: the closest the world has come to all-out nuclear war. In September 1983, a Soviet lieutenant colonel in the Air Defense Forces named Stanislav Petrov decided that the missile-warning system which had detected five U.S. missiles heading for the Soviet Union was raising a false alarm.

If he had reported the warning as an attack, the system in place at the time would have sparked immediate nuclear missile retaliation. There was no time to check, let alone to try diplomatic channels, and nuclear war would have commenced between the U.S. and the Soviet Union, causing many millions of deaths and the end of much of civilization. It’s not hyperbole to say that Petrov “saved the world” as we know it.

The cause of the error? The system had mistaken sunlight glinting off clouds for missile launches. Petrov was reprimanded by his superiors and reassigned to a less important post.

McGregor said: “A big part of what we are doing at the moment is preparing for a world where AI incidents are incredibly common… It is incumbent upon policy makers, commercial actors, and humanity in general that we build the social infrastructure necessary to bend that curve downward.”

The AI Incident Database is run by the Responsible AI Collaborative, a not-for-profit organization set up to help make AI safer. The database is open source and is maintained by a network of volunteers around the world.

Organizations with the most entries in the AI Incident Database:

Facebook: 46
Tesla: 35
Google: 28
Amazon: 18
OpenAI: 18
YouTube: 13

Source: www.newsweek.com

Josephine Poot

Contributor
