“This is a human problem,” said Vivienne Ming, a longtime computer scientist and entrepreneur who calls herself a “professional mad scientist.” She was talking about the problem of keeping artificial intelligence ethical. Software that learns or evolves over time as it performs tasks–a rough definition of this complex category of technology–is poised to play an ever-greater role in modern society. For a recent Tech+ session, Techonomy brought together a trio of leaders and technologists who are excited by AI’s potential but committed to pursuing it carefully.
These experts all agree that thus far AI hasn’t fulfilled its promise, because it hasn’t been designed with sufficient ethical intention. In addition to Ming, we heard from leaders of two major companies that take AI ethics seriously: software giant Salesforce, represented by Paula Goldman, its chief ethical and humane use officer, and global technology services firm Wipro, whose chief digital and information officer, Rohit Adlakha, also heads Wipro Holmes, the company’s artificial intelligence business. Wipro co-hosted the session.
“Why is it after 60 years of AI existing, and 20 years of it being a practical economic engine, we are still making the same mistakes?” Ming asked, noting that “we have these misfires and racial and gender bias in hiring” and other areas. The very first time she saw a facial recognition program, two decades ago, it successfully identified every member of the cast of Star Trek except one–Uhura. Today many facial recognition programs still misidentify non-white people.
But this is why Ming calls it a human problem. “We could fix all these problems if we really cared to fix them,” she said. (For facial recognition, for example, it’s a matter of training the algorithms on data that includes people of diverse races.) But she added that it is also a “balance of power problem,” because only a very small group of companies–including Alibaba, Amazon, and Google–“control virtually all the architecture behind modern artificial intelligence, and have a pretty good lock on a lot of talent.” So, she said, “a lot of what we’re doing is kind of, you know, at their behest.”
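One simple version of that fix is rebalancing training data so that underrepresented groups appear as often as the best-represented one, for example by oversampling. The sketch below is a generic illustration of that idea, not a description of any particular company’s pipeline; the data layout and function name are assumptions.

```python
# Generic illustration of oversampling to balance a training set by group.
# Not any specific vendor's pipeline; assumes a list of (image, group) pairs.
import random
from collections import defaultdict

def oversample_balanced(samples, seed=0):
    """Duplicate examples from underrepresented groups until every
    group appears as often as the largest one."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for image, group in samples:
        by_group[group].append((image, group))
    target = max(len(group_samples) for group_samples in by_group.values())
    balanced = []
    for group_samples in by_group.values():
        balanced.extend(group_samples)
        # Top up with random duplicates until this group reaches the target.
        balanced.extend(rng.choices(group_samples, k=target - len(group_samples)))
    rng.shuffle(balanced)
    return balanced
```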
“The basic issue,” said Wipro’s Adlakha, “is we’re dealing with what I’d call a trust deficit.” We have trouble trusting software the way we trust humans, he explained. The answer, he said, is “keeping societal values intact” when we create software, and “reproducing ethical values” inside the AI. Since its founding in 1945, Wipro has emphasized human values, he said. In creating products, it follows a framework it calls “ETHICA,” which stands for software that is “explainable” and “transparent,” keeps “humans in the loop,” is “interpretable,” based upon “common sense,” and “auditable.” “So whenever we deploy AI,” Adlakha said, “we can actually give a weighted score saying how ethical it is across the six parameters. We publish that score to customers.”
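Wipro hasn’t published the mechanics of that scoring, but a weighted score across six parameters might look something like the following sketch. The weights, the 0–10 rating scale, and the example numbers are illustrative assumptions, not Wipro’s actual method.

```python
# Illustrative sketch only: Wipro has not published how its ETHICA score
# is computed. The weights and the 0-10 rating scale are assumptions.

ETHICA_WEIGHTS = {
    "explainable": 0.20,
    "transparent": 0.20,
    "humans_in_the_loop": 0.15,
    "interpretable": 0.15,
    "common_sense": 0.15,
    "auditable": 0.15,
}

def ethica_score(ratings):
    """Combine per-parameter ratings (0-10) into one weighted score."""
    assert set(ratings) == set(ETHICA_WEIGHTS), "rate all six parameters"
    return sum(ETHICA_WEIGHTS[p] * ratings[p] for p in ETHICA_WEIGHTS)

# Example: a deployment rated by reviewers on each of the six parameters.
print(ethica_score({
    "explainable": 8, "transparent": 7, "humans_in_the_loop": 9,
    "interpretable": 6, "common_sense": 7, "auditable": 8,
}))  # ~7.5 out of 10
```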
Salesforce, the software-as-a-service giant, has instituted numerous mechanisms to ensure its tools, including AI, are designed with ethical intention. There is a board-level committee for privacy, Goldman said, as well as a “data science review board” and an “ethical use team.” And, she added, “we do something called ‘consequences scanning’ that enables teams to think through the unintended consequences of the work they’re doing.” The company employs full-time AI ethicists, and abides by an “AI charter” defining how it will and will not deploy artificial intelligence.
Techonomy asked the audience a few poll questions at the outset of the session, and one result stood out. Asked whether AI needs governance at the board level, a stunning 92% of respondents said yes. Sadly, such oversight remains rare.
But Goldman emphasized something essential that both other speakers echoed–what matters is “who is at the table.” The ethical problems so often encountered with narrowly conceived artificial intelligence are problems of inclusion. Salesforce has a group focused specifically on equality, reporting to a chief equality officer (Tony Prophet, who has spoken at Techonomy). Goldman’s group is part of his team. “That is deliberate,” she explained, “because so much of the choice around AI ethics, and tech ethics in general, is ‘who’s at the table?’ Who are you building with, and for? Those questions are inseparable.” Ming said something similar: “To have inclusive software, you have to have inclusion.”
One way Salesforce keeps its software ethical is by deliberately keeping its workforce diverse. Executives have scorecards that track, for example, how many people of color or women have been promoted in their groups, and how that number compares to a year earlier.
Salesforce also takes pains to consider the sources of the data any algorithm is applied against. It aims to measure bias in its software statistically, comparing how the software treats various protected groups and vulnerable populations with how it treats “majority populations.” And in the tools it makes available to customers, it exposes such choices. “We want to build technology that’s as easy as possible to use responsibly,” Goldman said. Its Einstein data science product, for example, gives users the option to remove race as a variable. But once a user does that, the software pops up a dialog that says something like “I notice you’ve hidden race. You may also want to consider hiding zip codes, because they can be correlated with race.”
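Salesforce hasn’t detailed its statistical methods, but one common way to quantify this kind of bias is the disparate-impact ratio, which compares favorable-outcome rates between a protected group and a reference group. The sketch below, with hypothetical column names, illustrates that metric, plus the kind of proxy check behind the zip-code warning.

```python
# Illustrative sketch, not Salesforce's implementation. Column names
# (group_col, outcome_col, feature) are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def disparate_impact(df, group_col, protected, reference, outcome_col):
    """Ratio of favorable-outcome rates: protected group / reference group.

    Values well below 1.0 (e.g., under the common 0.8 rule of thumb)
    suggest the model disadvantages the protected group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[protected] / rates[reference]

def proxy_strength(df, feature, group_col):
    """Rough proxy check: how strongly a feature (e.g., zip code) is
    associated with group membership, via Cramer's V on a contingency
    table. 0 means no association; 1 means the feature is a perfect proxy."""
    table = pd.crosstab(df[feature], df[group_col])
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return (chi2 / (n * k)) ** 0.5
```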
The reason all this caution and corporate cultural intervention is necessary is that AI will be so important. Ming explained how it can fundamentally change the economics of things that otherwise sound “ludicrous.” Her example: hiring a personal doctor to follow each person around and test them for retinal degeneration every day when they wake up. That would obviously be impractically expensive. But an AI tool can turn a glance at your smartphone each morning into exactly such a test, at trivial cost. She said one group she knows is working on diagnosing Covid-19 simply by listening to the sound of a person’s breathing.
The ongoing challenge for ethics, Adlakha explained, is that “AI is like a child” who you have to train. “Whatever you teach the AI will come back to you later on.” He concluded: “The responsibility for us as good corporate citizens is that we use that power of AI for good.”