This piece originally appeared in FIN, James Ledbetter’s fintech newsletter, on October 25, 2020.
The history of racism in the banking and lending industries is long, deep, and far from behind us. Despite decades of reform and regulation, racial discrimination in the financial sector remains a stubborn stain on modern American life. In December 2019, the New York Times reported that a former NFL player worth millions of dollars couldn’t figure out why he was having trouble becoming a “private client” of JPMorgan. He went into an Arizona branch and was told: “You’re bigger than the average person, period. And you’re also an African-American. We’re in Arizona. I don’t have to tell you about what the demographics are in Arizona. They don’t see people like you a lot.”
The story got the attention of five U.S. Senators, who noted that Chase was a repeat offender. “In 2013, JPMorgan Chase agreed to pay $13 billion as part of a settlement with the Department of Justice (DOJ) and several states over predatory mortgages. In 2014, JPMorgan Chase agreed to pay another $614 million for ‘fraudulent lending practices,’” they wrote in a letter to CEO Jamie Dimon. “In 2017, DOJ fined the bank another $55 million for charging at least 53,000 Black and Latino borrowers higher rates and fees on mortgage loans than similarly-situated white borrowers. As the DOJ explained, ‘Chase’s pattern of discrimination has been intentional and willful, and has been implemented with reckless disregard of the rights of African American and Hispanic borrowers.’ Numerous other cities, states, and private plaintiffs also sued the bank for predatory or discriminatory lending practices.”
Needless to say, the phenomenon is hardly limited to Chase.
This sorry legacy represents, in theory, a profound opportunity for fintech startups. At a very basic level, making fintech lending easily available to minority and underserved populations would be a win-win. And at a more detailed level, fintech should be able to replace racially tainted methods of measuring risk with shiny, new algorithms that don’t discriminate.
That is the promise of Zest AI, a Los Angeles-based machine-learning company that announced a $15 million fundraising round late last year, led by Insight Partners. Zest isn’t a lender (although it started as one); instead it sells software that it claims can help traditional lenders measure risk in a more inclusionary way.
CEO Mike de Vere explained in a FIN interview that the methods that most traditional lenders use to measure risk date to the 1950s, and can use only a limited number of variables. Machine learning makes it possible to use hundreds of variables, and to enact tiny tweaks to models, making them more inclusive and more reliable.
He likens the adversarial de-biasing process that Zest uses to “a game of Pong,” in which one “Pong racket” optimizes for economics—trying to predict default rates and maximize profit—while the other optimizes for fairness. He cites a customer that makes car loans and earns an average of $1,600 in profit per loan. Zest was able to show the customer that by giving up $2 of profit per loan, it could gain 4 percentage points in inclusivity. Another example is Akbank, Turkey’s third-largest private bank. Zest’s machine-learning models allowed Akbank to reduce its nonperforming loans by 45% and double its approval rate for borrowers with no credit history.
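The Pong analogy maps onto a standard adversarial de-biasing setup: one model learns to score credit risk, a second model tries to infer a protected attribute from those scores, and the first is penalized whenever the second succeeds. Here is a minimal sketch of that general technique in plain NumPy, using synthetic data and hypothetical variable names; it illustrates the idea, not Zest’s actual software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (all names illustrative): x_proxy leaks a hypothetical
# protected attribute; x_signal is a legitimate risk signal.
n = 2000
protected = rng.integers(0, 2, n).astype(float)
x_proxy = protected + rng.normal(0.0, 0.5, n)
x_signal = rng.normal(0.0, 1.0, n)
default = (x_signal + rng.normal(0.0, 0.5, n) > 0).astype(float)
X = np.column_stack([x_proxy, x_signal, np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)        # predictor: the "economics racket"
a = np.zeros(2)        # adversary: the "fairness racket"
lr, lam = 0.5, 1.0     # learning rate, strength of the fairness penalty

for _ in range(1000):
    p = sigmoid(X @ w)                       # predicted default probability
    s = np.column_stack([p, np.ones(n)])
    q = sigmoid(s @ a)                       # adversary's guess at the group
    # Adversary step: get better at recovering the protected attribute
    # from the credit score alone.
    a += lr * s.T @ (protected - q) / n
    # Predictor step: fit defaults, minus the direction that would help
    # the adversary (gradient of loss_pred - lam * loss_adv).
    grad_pred = X.T @ (p - default) / n
    dp = p * (1.0 - p)                       # sigmoid derivative
    grad_adv = X.T @ ((q - protected) * a[0] * dp) / n
    w -= lr * (grad_pred - lam * grad_adv)

scores = sigmoid(X @ w)
# With the penalty active, scores should track real risk while the
# average score gap between the two groups stays small.
gap = abs(scores[protected == 1].mean() - scores[protected == 0].mean())
```

Raising `lam` trades a little predictive fit for a smaller between-group score gap, which is the $2-of-profit-per-loan trade-off de Vere describes, in miniature.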
De Vere is explicit about his company’s social mission: “We have a duty to correct this, and we have the IP needed to do it.” There are several other companies with similar concepts—the so-called inclusive fintech group—and if you listen to their pitches you might well conclude that big lenders will be rushing to eliminate bias because it’s good for their bottom line. However…
The Case for Skepticism
Of course it’s not that simple, as de Vere acknowledges. I take de Vere at his word that his company will use only white-hat machine learning. But you have to figure that for every Zest out there, there is at least one company that will be tempted to use artificial intelligence in ways that perpetuate the lending status quo or even make it worse.
The traditional (and highly regulated) ways of determining credit risk involve factors that are fairly intuitive: income, past record in paying bills, etc. Artificial intelligence and machine learning make it possible to build customer profiles from a much wider set of variables, including ones which few humans would, at first glance, identify as especially relevant to credit risk. De Vere referred to these as “creepy variables,” and they might include things like your social media profiles or what kind of clothes you buy.
This is not just theoretical. A paper published in 2018 analyzed transactions at a German e-commerce company, and concluded that several aspects of a customer’s digital footprint are actually better predictors of credit default than credit bureau scores. These include things like type of operating system (Android users are much likelier to default than iOS users); time of day of order (customers purchasing between noon and 6 pm are half as likely to default as those purchasing between midnight and 6 am); and whether or not a customer uses an e-mail address with a real name in it.
It doesn’t take much imagination to see how a machine-learning algorithm could apply such data to discriminate against people of color, without ever having to mention race as a factor. In the U.S., such efforts might well violate the Equal Credit Opportunity Act of 1974, but as discussed above, big lenders seem to have little hesitation about that, treating whatever fines emerge as a cost of doing business.
A second reason for skepticism is a sort of last-mile problem: there are major structural impediments to bringing even the best-intentioned fintech to minority and underserved communities. Last October, the Federal Deposit Insurance Corporation (FDIC) issued its biennial survey of how American households use banking and financial services. One somewhat bright note is that the proportion of “unbanked” households continues to decline, and now stands at 5.4 percent (it’s worth pointing out that this survey was taken pre-COVID). For Black households, however, the figure is 13.8 percent, and for Hispanic households, 12.2 percent.
When the FDIC asked unbanked Americans why they don’t have bank accounts, the two answers given most often were not having enough money to meet minimum deposit requirements, and simply “don’t trust banks.” So at a minimum, until lenders find a way to deliver products that millions of Americans can use and trust, all the algorithmic changes in the world are not going to reach this group as consumers.
And the problem is broader than that. The FDIC’s advisory committee on economic inclusion met in October 2020 to discuss the survey’s findings. In the meeting, Kevin Klowden, executive director of the Milken Institute’s Center for Regional Economics and California Center, offered some sobering facts and arguments. Klowden pointed out that when COVID hit China, 57% of the population was already set up to use mobile banking, and thus could deal with physical bank closings fairly easily. The comparable number in the U.S. is only 43%, and many of those users do little more than check bank balances on their phones; just 34% of American households use mobile technology as their primary banking method.
His point is that the most likely beneficiaries of fintech innovation are customers with broadband access and smartphones with large data plans, and these are far less common among unbanked and other underserved populations. Many places where lower-income people live still don’t have these digital tools, and it’s not only a problem in poor urban areas—a Wall Street Journal story really brought this home, with photos of people parked outside the public library in Hudson, New York, to soak up its free wi-fi. We’re in a familiar situation where private companies are offering viable solutions to problems of inequality, but without public leadership and investment in services like universal broadband, those solutions won’t reach the people they’re supposed to help.
Ledbetter is Chief Content Officer of Clarim Media, which owns Techonomy.