Machine Learning Bias—An Excuse To Avoid Accountability

Racist.  Discriminatory.  These are labels that get slapped onto artificial intelligence programs whose outcomes are found to be biased against different groups.  In the tech world, the term for these outcomes is machine learning bias, defined as a phenomenon that occurs when an algorithm produces results that are systematically prejudiced due to faulty assumptions in the machine learning process.  But once we start talking about algorithms and code, we are talking about human design: artificial intelligence is built by people and functions as its developers programmed it to.  The reality is that machine learning bias is a convenient excuse for tech teams to avoid accountability for their lack of foresight.

What’s the starting point for Artificial Intelligence and Machine Learning?

When the idea of artificial intelligence programs that could learn and improve over time became reality, it was called machine learning.  The ability of artificial intelligence systems to learn on their own was seen as a game changer, and machine learning was designed to take place with minimal human interaction.  Machine learning algorithms are programmed to absorb enormous amounts of information and filter out anything not coded as important.  Over time, the artificial intelligence system updates itself and consistently gets better, or, if machine learning bias creeps in, worse.
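To make that self-updating loop concrete, here is a minimal sketch in Python of an online learner.  The features, data stream, and numbers are invented for illustration, and no particular product works exactly this way, but the shape is the same: the model adjusts its own weights from whatever stream it is fed, with no human in the loop, and ends up reflecting whatever pattern that stream contained.

```python
# Minimal sketch of a self-updating (online) learner -- hypothetical data and features.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(3)          # one weight per feature the developers coded as important
learning_rate = 0.1

def predict(x):
    # Simple logistic model: any signal not represented in `weights` is simply ignored.
    return 1 / (1 + np.exp(-x @ weights))

for step in range(1000):
    x = rng.normal(size=3)                     # next example from the incoming data stream
    y = 1.0 if x[0] + 0.5 * x[1] > 0 else 0.0  # whatever "truth" the stream happens to carry
    error = predict(x) - y
    weights -= learning_rate * error * x       # the system updates itself; no human in the loop

print(weights)  # the weights now mirror whatever pattern the stream contained, good or bad
```

If the stream were skewed, the same loop would absorb the skew just as faithfully, which is the whole point: the learning step is indifferent to whether the pattern it picks up is one anyone intended.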

The origin story for Machine Learning Bias

Among the leading tech giants, research and development is one of the costliest departments, and the results it produces have long-term implications.  The challenge of running a successful tech business is that companies often need to secure patents and protections for their innovations to stay profitable.

And with how competitive the tech space is, there have been occasions where competing companies have applied for patents on similar products.  The company that could argue it was first to market with its product would likely have a better chance of being awarded the patent.  Companies that were all-in on a single patent and missed out on it were likely to go under.  The competitive nature of research and development in tech, and the amounts of money poured into these departments, mean that companies may feel they have no choice but to be first to market, even if that results in a faulty product.

Another major challenge with designing a comprehensive artificial intelligence system is that there is always more to it than a developer-only team can plan for.  The biggest barrier to bringing on diverse schools of thought is that tech companies are often cash-strapped and lack the financial means to hire a variety of professionals, including those with backgrounds in sociology, health, or finance.

One of the most cited examples of machine learning bias and how it can ruin lives is that of risk assessment software used in the criminal justice system.  The biased algorithms were built by developers on superficial data, which resulted in black individuals being flagged as more dangerous and having stricter conditions imposed on them.  While the machine learning bias was not fatal, it had life-changing implications for the individuals whose freedoms were taken away.  The effects of this machine learning bias could have been avoided if the developer-only team had had access to professionals with backgrounds in socioeconomics who would have been better positioned to identify potential areas of bias.
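A small, hedged sketch with entirely synthetic data illustrates the mechanism; the column names (`prior_contact`, `group`) and numbers are invented and are not taken from any actual risk assessment product.  When the historical labels used for training already reflect harsher treatment of one group, a model fit on them reproduces that disparity as a "risk score."

```python
# Hedged illustration: biased training labels produce biased risk scores (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, size=n)        # 0 / 1 stands in for a demographic attribute
prior_contact = rng.poisson(1 + group)    # proxy feature correlated with group membership
                                          # (e.g. policing intensity, not behaviour)

# Historical "high risk" labels driven partly by the proxy, i.e. by past bias.
label = (prior_contact + rng.normal(0, 1, n) > 2).astype(float)

# Plain logistic regression fit by gradient descent on the skewed labels.
X = np.column_stack([prior_contact, group])
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - label) / n)
    b -= 0.5 * (p - label).mean()

scores = 1 / (1 + np.exp(-(X @ w + b)))
print("mean risk score, group 0:", scores[group == 0].mean())
print("mean risk score, group 1:", scores[group == 1].mean())
# The gap in scores is inherited from the skewed training data, not discovered by the model.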

Our society is heading into a future in which we will interact with artificial intelligence systems ranging from facial detection systems all the way to robots.  As our capacity to innovate and build becomes more powerful, so does our ability to do unplanned damage.  Using machine learning bias to explain away unexpected results that harmed people is equivalent to calling car crashes caused by impaired driving accidents.  Both outcomes are entirely avoidable, and the excuses should not be accepted, because they minimize the life-changing consequences these failures can have.