Is it Possible to Have Unbiased AI?

Kenna Castleberry

As AI is implemented by more businesses across various industries, the issue of AI bias becomes more prevalent. Previous incidents of AI bias have involved facial and speech recognition, as the algorithms have a more difficult time recognizing women and minorities. One reported study found a 34% error rate in identifying dark-skinned women, compared to light-skinned men. Biases within AI have led to many public apologies by big technology companies, but have also produced more sinister results. For example, the investigative news site ProPublica found that COMPAS, a criminal justice algorithm implemented by Broward County, Florida, was twice as likely to mislabel African-American defendants as “high risk” compared to white defendants. Mislabeling and other errors in AI learning have caused women and minorities in particular to face higher rejection rates from organizations such as hiring committees and loan underwriters. But where do these biases come from, and why haven’t we figured out a way to fix them yet?

Biases in AI

The biggest cause of bias within AI (and the one that gets blamed the most) is the training data. AI systems process training data through their algorithms to produce predictions or detect patterns. Training data may start out biased, intentionally or not, if certain groups are underrepresented in the data, or if the data suffers from oversampling or under-sampling. If the algorithms process flawed data, their outputs will logically be flawed. The most obviously biased training data is user-generated: users add data that is clearly encoded with their own biases. An example of this is Microsoft’s Twitter bot, which in 2016 shifted its language to output sexist and racist tweets after processing a sample of clearly biased tweets as training data. The incident showed the errors that flawed data can produce in an AI system.
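This kind of skew can be made concrete with a simple data audit. Below is a minimal sketch in Python of counting how each demographic group is represented in a training set before any model is trained; the record fields and values are hypothetical, invented purely for illustration.

```python
from collections import Counter

# Hypothetical training records for a face-recognition model; the
# field names and values are illustrative, not from any real dataset.
training_records = [
    {"skin_tone": "light", "gender": "male"},
    {"skin_tone": "light", "gender": "male"},
    {"skin_tone": "light", "gender": "female"},
    {"skin_tone": "dark", "gender": "male"},
    {"skin_tone": "dark", "gender": "female"},
    # ...in practice, thousands or millions of records
]

def representation_report(records, attribute):
    """Count how often each value of a demographic attribute appears,
    so under-represented groups are visible before training begins."""
    counts = Counter(r[attribute] for r in records)
    total = len(records)
    for value, count in counts.most_common():
        print(f"{attribute}={value}: {count} records ({count / total:.0%})")

representation_report(training_records, "skin_tone")
representation_report(training_records, "gender")
```

An audit like this will not catch every bias (labels and features can be skewed even when group counts are balanced), but it makes the most basic form of under-representation visible at a glance.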

Other biases in AI are much subtler and rooted in human error. These can include biases in data input, in coding the algorithms, or in designing the AI itself. All of these can affect how the AI works and skew its outputs. Of course, no one develops AI technology to be biased, but all humans are error-prone, which results in accidental biases. If you implement AI in your business or are developing AI technology, it’s important to understand where these biases arise, so that a strategy can be put in place to fix them.

Repairing AI Biases

There are many strategies for repairing AI biases. The simplest is to use more diverse data, or to double-check all training data for obvious biases. This is easier said than done but can be extremely useful for improving the product. It’s important to understand which steps of the process may be introducing bias and to develop protocols to fix those issues. Many big technology companies use software frameworks to sniff out possible bias and then mitigate it with fairness toolkits. Other businesses are investing in “pre-processed” data, which has been analyzed for biases before being used in AI technology. From surveying user bases to beta testing, AI biases can be rooted out through various methods, allowing a multi-disciplinary approach.
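As a rough illustration of what those frameworks automate, the following sketch compares a model’s approval rate across two groups and applies the “four-fifths rule” that U.S. employment audits commonly use as a red flag. The decisions and group labels below are invented for the example; a real audit would use held-out predictions from the actual model.

```python
# Invented model decisions (1 = approved) and the group label of the
# person each decision applies to.
approvals = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def selection_rates(decisions, group_labels):
    """Return the fraction of positive decisions for each group."""
    rates = {}
    for g in set(group_labels):
        member_decisions = [d for d, lbl in zip(decisions, group_labels) if lbl == g]
        rates[g] = sum(member_decisions) / len(member_decisions)
    return rates

rates = selection_rates(approvals, groups)
print(rates)  # e.g. {'a': 0.6, 'b': 0.4}

# The four-fifths rule flags a ratio of selection rates below 0.8
# as potential adverse impact.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67, below the 0.8 threshold
```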

The biggest topic in resolving AI bias is “fairness.” Experts have long debated how to program “fairness” into AI technology. There is no single definition of fairness, but rather several competing definitions that can be used. Yet current AI systems are limited in the number of inputs they can weigh, which makes it difficult to program fairness into the software directly. Many technology developers question when a system is “fair” enough to be released, or what sort of evaluation should be used to test a system’s fairness. Since current AI systems are not programmed with fairness first, it’s essential for businesses to develop ethical processes that ensure as much “fairness” as their staff is capable of, while also having a plan in place to improve the product if biases do appear.
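To see why a single definition is elusive, consider two common formalizations: demographic parity (each group receives positive decisions at the same rate) and equal opportunity (each group’s genuinely qualified members are approved at the same rate). The toy numbers in the sketch below are invented to show that one set of predictions can satisfy the first definition while violating the second.

```python
# Invented outcomes and predictions: y_true marks who was actually
# qualified, y_pred marks who the model approved, group marks membership.
y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

def rate(values):
    return sum(values) / len(values) if values else 0.0

for g in ("a", "b"):
    idx = [i for i, lbl in enumerate(group) if lbl == g]
    # Demographic parity: rate of positive decisions per group.
    positive_rate = rate([y_pred[i] for i in idx])
    # Equal opportunity: approval rate among the truly qualified (TPR).
    tpr = rate([y_pred[i] for i in idx if y_true[i] == 1])
    print(f"group {g}: positive rate={positive_rate:.2f}, TPR={tpr:.2f}")

# Both groups are approved at the same 0.50 rate (demographic parity
# holds), yet qualified members of group a are always approved
# (TPR 1.00) while the qualified member of group b never is (TPR 0.00).
```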

AI biases are not just concerning; they can also be public safety issues, as in criminal justice or AI-powered cars. A 2018 Gartner report predicted that through 2022, 85% of AI projects would deliver erroneous outcomes due to bias in data, algorithms, or the teams managing them. That percentage is worrying, which is why understanding the biases within AI is so important to rectifying the problem. Current bias research is providing businesses with more methods to eliminate biases, helping to improve the technology of our future.

 

References:

Tennery, Amy, and Gina Cherelus. 2016. “Microsoft’s AI Twitter Bot Goes Dark after Racist, Sexist Tweets.” Reuters, March 24, 2016.

“How to Reduce Bias in AI with a Focus on Training Data.” 2020. Appen, July 2, 2020.

Knight, Will. 2019. “AI Is Biased. Here’s How Scientists Are Trying to Fix It.” Wired, December 19, 2019.

Manyika, James, Jake Silberg, and Brittany Presten. 2019. “What Do We Do about the Biases in AI?” Harvard Business Review, October 25, 2019.

McKendrick, Joe. 2019. “Artificial Intelligence May Amplify Bias, but Also Can Help Eliminate It.” Forbes, June 28, 2019.

Shomron, Jacob. 2021. “AI Bias Is Prevalent but Preventable — Here’s How to Root It Out.” VentureBeat, August 8, 2021.

Silberg, Jake, and James Manyika. 2019. “Tackling Bias in Artificial Intelligence (and in Humans).” McKinsey & Company, 2019.
