How to Manage — and Remove — Bias in AI

We have a tremendous opportunity to design fairness and transparency into our decision-making systems.

Snafus around AI-powered automation are driving concern, for good reason. Recently, the technical recruiting team at Amazon discovered that the machine learning app it was using to screen resumes was discriminating against female applicants. A few years ago, Google Photos’ image recognition program inaccurately labeled black people as gorillas. These are often touted as egregious examples of machine learning’s limitations. But in reality, they reflect limitations of our current processes around building AI.

Machine learning models are only as good as the data you feed them. The algorithms work by divining statistical patterns from historical data. In the Amazon example — and probably at other companies, too — years of male-dominated recruiting practices likely rendered past recruiting data more favorable toward men. The machines simply took the cue. As we use these models to automate tasks, biases can propagate from data to decisions at unprecedented scale.

But there are tangible things we can do to manage — and remove — bias in AI. Here are some of them.

1. Cultivate your values

Identify your organization’s values. Talk about them openly and intentionally, and establish a culture that empowers employees to make decisions aligned with them — even at the cost of short-term profit. Proactively think through potential ways in which the use of new technology might threaten your organization’s values. These values should be baked into every step of the AI implementation process — from technology procurement to deployment and beyond.

2. Hire and empower diverse teams

It’s much harder to notice and avoid bias when the people responsible for building or deploying your products are homogeneous. Teams that lack diversity are likely to make flawed choices that reflect their limited perspective. History is filled with examples. The National Highway Traffic Safety Administration used only male dummies in crash tests for more than three decades, starting in 1978, likely reflecting a male-dominated workforce. When the agency began incorporating female dummies — starting only in 2011! — safety ratings dropped significantly for many car models. This is just one example where a lack of diversity in the workforce resulted in poor design and deadly outcomes. The people building or deploying AI at your company should reflect your company’s customer base. Diversity — in terms of ethnicity, age, gender, sexual identity, and other factors — is a vital asset in helping you recognize and remove bias in data and models.

3. Detect and correct for bias

Bias in data can creep in either through biased data collection practices or through biased human behavior in the data generation process. Some of this is certainly fixable through a diverse, aware, and vigilant workforce. But a full treatment is a more nuanced topic. In fact, some human-generated bias is actually good and exactly the thing you want your machine learning algorithms to pick up on! An apparel store may want to algorithmically learn the clothing choices of different age and gender groups. On the other hand, a financial organization would not want to build a credit risk assessment model that systematically discriminates against particular races. So the first thing you need to identify is the kinds of bias you seek to avoid versus the kinds you might actually want to leverage for your use case.

Identifying desirable and undesirable bias exhaustively can be challenging. But in general, legally protected classes are a good place to start. For instance, a financial services organization may be legally bound to avoid discriminating against customers, or even targeting them, on the basis of race, ethnicity, gender, or age. Conventional wisdom would suggest that we simply drop any such factors from the data. But we have to be very careful when doing this, because the data may well contain proxies for these factors. Zip codes, for example, often stand in for race. Fortunately, there are a variety of techniques that can help detect, and remove, these proxies. Matt Kusner of the University of Oxford, and others, have developed mathematical definitions of fairness, and methods to instill such fairness in machine learning algorithms. With examples of desirable and undesirable bias, it even becomes possible to train machines to be less sexist.
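
To make the proxy problem concrete, here is a minimal sketch of one possible first-pass check, written in Python with pandas and scikit-learn: if an ostensibly neutral feature can predict a protected attribute well above chance, it is acting as a proxy and deserves scrutiny. The DataFrame and column names (zip_code, race) are hypothetical stand-ins, and this screen is no substitute for the formal fairness methods mentioned above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def proxy_score(df: pd.DataFrame, candidate: str, protected: str) -> float:
    """Cross-validated accuracy of predicting `protected` from `candidate` alone.

    Accuracy far above the majority-class baseline suggests the candidate
    feature is leaking the protected attribute.
    """
    X = pd.get_dummies(df[[candidate]].astype(str))   # one-hot encode the single candidate feature
    y = df[protected]
    baseline = y.value_counts(normalize=True).max()    # accuracy of always guessing the majority class
    score = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5).mean()
    print(f"{candidate}: accuracy={score:.2f} vs. majority baseline={baseline:.2f}")
    return score

# Hypothetical usage:
# proxy_score(loan_applications, candidate="zip_code", protected="race")
```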

4. Be able to explain automated decisions

With automation, explainability of machine learning models is critical. In regulated sectors like finance and insurance, explainability is actually a legal requirement. This means ensuring transparency both at the macro level (What are the most important factors in the model overall?) and at the individual level (Which factors most influenced a specific outcome?). Certain types of machine learning algorithms lend themselves to human interpretation more easily than others. But even for the more complex algorithms, local interpretability techniques can bring transparency to individual decisions. These techniques compute the sensitivity of a particular outcome to a perturbation in the input data and can help pin down which factors most influenced the result.
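
As a rough illustration of that perturbation idea (not any particular vendor's implementation), the sketch below nudges each feature of a single input and records how much the model's score moves. The model, feature names, and data are hypothetical stand-ins for whatever classifier you have deployed.

```python
import numpy as np

def local_sensitivity(model, x: np.ndarray, feature_names, eps: float = 0.1):
    """Rank the features of one instance `x` by how much nudging them moves the score.

    Assumes a binary classifier exposing predict_proba, scikit-learn style.
    """
    base = model.predict_proba(x.reshape(1, -1))[0, 1]   # probability of the positive class
    impacts = {}
    for i, name in enumerate(feature_names):
        x_perturbed = x.copy()
        # Nudge feature i by a small relative amount (or by eps if the value is zero).
        x_perturbed[i] += eps * (abs(x[i]) if x[i] != 0 else 1.0)
        impacts[name] = model.predict_proba(x_perturbed.reshape(1, -1))[0, 1] - base
    # The largest absolute changes point to the factors that most influenced this decision.
    return sorted(impacts.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical usage:
# local_sensitivity(credit_model, applicant_features, ["income", "debt", "tenure"])
```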

Ensuring that the technology you build or deploy provides some form of explanation allows your organization to then audit automated decisions and detect potential bias earlier. 

5. Know your numbers

One of an organization’s most important efforts is compiling metrics, publishing them, and reviewing them regularly. If an organization values fairness across certain categories — like race, gender, or age — it should maintain metrics on the decisions it makes (automated or not), broken down by those categories. This is the only way to identify unequal outcomes, investigate further, and fix lingering biases in your organizational structure, processes, and algorithms. Ultimately, an organization cannot fix what it does not measure.
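
A minimal sketch of the kind of number worth publishing: the rate of favorable decisions per group, and the gap between the best- and worst-treated groups. The DataFrame and column names below are hypothetical; the point is simply that these figures are easy to compute once you decide to track them.

```python
import pandas as pd

def decision_rates(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Favorable-outcome rate per group, plus the gap between the extremes."""
    rates = decisions.groupby(group_col)[outcome_col].mean().sort_values()
    gap = rates.max() - rates.min()
    print(rates.to_string())
    print(f"Gap between highest and lowest group: {gap:.1%}")
    return rates

# Hypothetical usage:
# decision_rates(loan_decisions, group_col="gender", outcome_col="approved")
```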

As technology advances, more AI-based automation will have built-in tools to combat bias. We have a tremendous opportunity to design fairness and transparency into our decision-making systems. Our businesses, and people, can’t afford to wait.

Shubha Nabar is senior director of data science at Salesforce Einstein, which embeds AI capabilities in the company’s platform.
