When It Comes to Ethics and AI, One Must Come First

In this digital world, we're privileged to have access to data in an abundance the mind can scarcely comprehend. With so much data available, marketers, merchandisers, and others have become reliant on automation and algorithms to assist with certain functions. An algorithm can often accomplish in mere minutes a task that would take an individual a day or more. However, when not properly accounted for, reliance on machine learning and artificial intelligence can further reinforce stereotypes, hatred, and other biases. Worse, those biases are often overlooked because of the perception that computers don't operate on bias.

The Code Working Against Diversity and Inclusion

A computer will only do what it's programmed to do. There are no mistakes: the computer executes code exactly as instructed. Unexpected outcomes are merely the result of flawed code, but the computer's response is still the precise response to the human's flawed input. In this sense, it would seem that relying on machine learning and algorithms would solve for some of the biases in our world. However, unless you subscribe to Simulation Theory, all code starts with a human. Enter bias, and a need for ethics.

Human Bias

The humans crafting the initial algorithms inherently insert their own biases into the code we rely on to guide our businesses' planning, our public health responses, and much more in the world around us. These algorithms, built with parameters assigned according to the developers' inherent biases (both good and bad), are then accepted by many as fact because of the perception that computers don't operate on bias or make mistakes.

In many cases, this won't pose much of an issue. But could it? Suppose you have a team of developers creating an algorithm that sifts through volumes of data to select the likely best-selling products. It's a very useful tool, and a real-life scenario employed by many of the world's biggest companies to curate product assortments. How do these algorithms define what makes a best seller? Typically, data from countless sources is introduced to the learning environment, with a script defining what is important and what is not, often weighting certain variables more or less heavily based on their significance.
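To make that concrete, here is a minimal sketch of what such weighted scoring might look like. Every name, feature, and weight below is an illustrative assumption, not any company's actual model; the point is that a developer chooses all of them.

```python
# A minimal, hypothetical sketch of weighted best-seller scoring.
# The features and weights are illustrative assumptions chosen by
# a developer, not any company's real model.

products = [
    {"name": "Sneaker A", "units_sold": 1200, "review_score": 4.1, "repeat_buyers": 310},
    {"name": "Sneaker B", "units_sold": 950, "review_score": 4.8, "repeat_buyers": 540},
]

# The developer decides which variables matter and by how much.
# (A real system would normalize these scales first; the point here is
# that the weights encode human judgment, and therefore human bias.)
WEIGHTS = {
    "units_sold": 0.5,
    "review_score": 0.3,
    "repeat_buyers": 0.2,
}

def best_seller_score(product: dict) -> float:
    """Combine the weighted features into a single ranking score."""
    return sum(weight * product[feature] for feature, weight in WEIGHTS.items())

# Rank the catalog. Any attribute absent from WEIGHTS never counts at all.
for p in sorted(products, key=best_seller_score, reverse=True):
    print(p["name"], round(best_seller_score(p), 1))
```

Notice that the ranking is fully determined by which attributes appear in the weight table and what values they're given; nothing outside that table can influence the result.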

Greatest Opportunity for Bias

These inputs are arguably the greatest opportunity for bias to enter the (literal) equation. Developers usually rely on mathematics and data science to identify importance, but these do not remove all bias. Most dangerously, they don’t remove the bias against that which the developer does not know.

Take that product curation code. If the team is composed exclusively of white, cis-male developers, they may make a good-faith effort to be inclusive of diversity needs in a product assortment. (After all, there are many who recognize the strength in diversity.) However, they're now trying to define significance for attributes they do not fully understand. They may even overlook certain attributes entirely, because those data points never existed in the initial datasets to begin with; nobody on the homogeneous team gave them consideration.
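Continuing the hypothetical sketch above, this is what an overlooked attribute looks like in practice: if it was never collected, it isn't a field in the data, so no weight can ever reach it. The attribute name here is, again, purely illustrative.

```python
# Continuing the hypothetical sketch above: suppose some shoppers care
# about an attribute the team never thought to record, such as
# adaptive sizing. It was never collected, so it isn't in the data.

product = {"name": "Sneaker A", "units_sold": 1200, "review_score": 4.1, "repeat_buyers": 310}

overlooked = "adaptive_sizing_rating"  # hypothetical, never-collected attribute
print(overlooked in product)  # False: the model is blind to it, not because
                              # the math failed, but because nobody on the
                              # team thought the data point worth gathering.
```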

Marrying Ethics and Algorithms to Solve for Bias

The easiest way to solve for bias is with a diverse team of developers. While the developer world is increasingly diversifying, it remains a white, cis-male dominated field. Diversifying the people writing the code in the first place introduces the opportunity for biases to be identified and solved for early on. It will not eliminate bias, but it will provide a greater opportunity to combat negative biases while incorporating greater diversity and inclusion into scripts.

Solving for Bias

Okay, but what about now? We obviously can't solve the workforce diversity dilemma overnight, though there are companies and organizations attempting to accelerate the process. In the meantime, ethics in data processing automation takes on even greater significance.

Data processing automation provides great efficiencies. Indeed, in many cases a human couldn't comprehend the volumes of data behind the decisions being made. However, when gaps are identified, human intervention becomes essential.

It is unethical for a business to hide behind an algorithm as a reason to perpetuate a discriminatory decision, action, or inaction. If the code is flawed, it is incumbent upon the business to correct the situation with human intervention.

It's possible for good people and good businesses to be led astray by an algorithm. That's why their character must be measured by their response. Human intervention in a computer's faulty decision, a decision ultimately based on flawed human inputs, is a critical skill that must become a training focus in any modern diversity and inclusion program. Without it, we sacrifice diversity for efficiency.
