Who is Liable for Faulty AI?

Kenna Castleberry

Globally, few countries have adapted their laws to account for advancing technology. The rest have left obvious legal gray areas when it comes to faulty technology such as artificial intelligence (AI), which can cause serious harm. AI works through machine learning (ML), which uses large amounts of data to refine its algorithms so that they analyze incoming data more accurately. Because of this, an AI system can adapt and improve its own behavior over time, which makes it valuable in fields like medicine or police work, where data analysis can lead to significant results. Most AI systems are still human-operated, with humans supplying the data, and this can lead to interesting and impactful problems, especially from a legal standpoint.
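To make that point concrete, here is a minimal sketch, using scikit-learn and synthetic data (all names and numbers are illustrative, not drawn from any real system), of a model whose predictions improve as it is fed more data over time:

```python
# Minimal sketch (synthetic data): a model that improves as more data arrives,
# illustrating how ML refines its parameters from incoming examples.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(random_state=0)
classes = np.unique(y_train)

# Feed the training data in batches, as if it were arriving over time.
for batch in np.array_split(np.arange(len(X_train)), 10):
    model.partial_fit(X_train[batch], y_train[batch], classes=classes)
    print(f"accuracy so far: {model.score(X_test, y_test):.3f}")
```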

AI is built from many complex components, so it can fail in a wide range of ways. Previous issues have ranged from biased data analysis to lethally faulty programs: the news has reported faulty AI causing autonomous vehicle (AV) accidents, as well as facial recognition systems biased against certain ethnicities. Because AI can evolve and develop its algorithms on its own, these issues raise the question: who is really at fault for defective AI?

While this article won’t cover every potential legal scenario, there have already been cases revolving around this issue. According to Brookings, a nonprofit organization focused on independent research, some AI cases have centered on product liability. Product liability arises when injuries or property damage result from faulty products, and a claim can allege design defects, manufacturing defects, failure to warn, or negligence. These claims have already been applied to certain AI products, and in such cases the company that created and produced the AI is at fault. In its defense, a company may try to blame the AI, the data, the supply chain, or even the users, but these make for poor defenses, as the company itself used those components to build a faulty product.

Product liability has already come up in AV accident cases involving insurance companies. Because insurers are central to how liability is allocated, they must be considered in cases of faulty AI, and providers covering these products face new risks that must be factored into their policies. Hoping to modernize its laws, the UK is proposing rules under which the insurer would bear primary liability in AV accidents, which can help insurance companies adapt their policies to AI technology.

Other cases involving AI revolve around bias in AI data. In 2015, Amazon discovered that an AI it used to help recruit new employees was biased against women. In another incident, an AI used to predict the healthcare risks of over 200 million Americans was found to be biased against certain races. And in 2019, Facebook apologized after its AI led to discriminatory ad targeting affecting women and minorities. These are just a few of the many cases of AI’s biases affecting society. In such cases, three groups may be at fault: the sample data supplier, the AI operator, and the algorithm creator. All three could check for potential biases before harm occurs, whether by examining the data, keeping the input process as objective as possible, or verifying that the algorithms do not encode biases. A simple pre-deployment check of this kind is sketched below.
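As one illustration of what such a check could look like, here is a minimal sketch, with hypothetical column names and toy data, that compares a model’s selection rate across demographic groups before deployment:

```python
# Minimal sketch (hypothetical data and column names): compare a model's
# positive-prediction rate across demographic groups before deployment.
import pandas as pd

# In practice, df would hold the model's decisions plus a group attribute;
# the names and values here are purely illustrative.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "selected": [1,   1,   0,   0,   0,   1],
})

rates = df.groupby("group")["selected"].mean()
print(rates)

# A large gap between groups is a flag to inspect the data and the model further.
disparity = rates.max() - rates.min()
print(f"selection-rate gap: {disparity:.2f}")
```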

The main cause of AI bias is biased data. Because AI learns from large amounts of sample data, data that is already flawed or biased can be magnified by the AI, leading to problems further down the line; the sketch below illustrates how a skew in training data can carry through to a model’s decisions. With no legal regulation in place requiring bias checks, there is little accountability for AI companies to perform them. As a result, a growing initiative has been established to mitigate this problem. Called the Open AI Initiative, the group works to make the creation and development of AI products more transparent, encouraging accountability. It is supported by companies like Apple, Amazon, Google, and Microsoft, as well as individuals like Elon Musk, Greg Brockman, and Jessica Livingston.
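Below is a minimal sketch, using synthetic data and an off-the-shelf scikit-learn classifier (the setup is purely illustrative), of how a skew baked into historical training labels reappears in a model trained on them:

```python
# Minimal sketch (synthetic data): a skew in the training labels reappears in
# the trained model's decisions, which is why auditing the data matters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)    # 0 or 1, a stand-in demographic attribute
score = rng.normal(0, 1, n)      # a legitimate qualification signal

# Historical labels: group 1 was approved less often at the same score (biased data).
label = (score + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: data approval {label[group == g].mean():.2f}, "
          f"model approval {pred[group == g].mean():.2f}")
```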

Hopefully, with the help of the Open AI Initiative, accountability for these companies will be adopted into legal systems, allowing companies to take greater responsibility for faulty AI and to work harder to make AI more beneficial and objective for society.

References:

Amerongen, Robbrecht van. 2017. “Who Is Responsible When Artificial Intelligence Fails?” LinkedIn. January 30, 2017.

Dilmegani, Cem. 2020. “Bias in AI: What It Is, Types & Examples, How & Tools to Fix It.” AppliedAI. September 12, 2020.

Gluyas, Lee, and Stefanie Day. 2021. “Artificial Intelligence – Who Is Liable When AI Fails to Perform?” CMS.law. 2021.

Villasenor, John. 2019. “Products Liability Law as a Way to Address AI Harms.” Brookings. October 31, 2019.

Wesson, Vivian D. 2020. “Who (or What) Is Liable for AI Risks?” New York State Bar Association. September 17, 2020.

