Amazon recently abandoned a recruiting tool that used artificial intelligence to rate candidates. The tool studied the resumes of previously hired engineers to create algorithmic associations with certain schools, experiences, and keywords that were supposedly indicators of work success. But the company belatedly recognized that the tool was, in fact, teaching itself that men were preferable candidates to women.
Chalk it up to the unintended consequences of technological advancement.
When Facebook’s ad targeting system was used to spread misinformation, the unrest it created around the world was another unintended consequence. When the company tried to fix the problem by clamping down on political advertising, it created even more unintended consequences by blocking dozens of non-political, LGBT-related ads.
And now, despite a rash of hate speech spread on Facebook that incited ethnic violence in Myanmar and Sri Lanka, and propaganda in the Philippines manipulated by its violent dictator, Facebook is accelerating its plans to expand Wi-Fi access in India, Indonesia, Kenya, Nigeria, and Tanzania. Unintended consequences, anyone?
As technology races ahead of society’s ability to absorb its impact at work, at home, and in world affairs, unintended consequences are becoming more ubiquitous than e-scooters. The so-called unintended consequences of social media have left consumers vulnerable to misuse of their data and resulted in just about every part of the world being plagued by misinformation and disinformation.
LinkedIn CEO Jeff Weiner put it in perspective during a recent talk: “It’s far less about the technology these days, and far more about the implications of technology on society,” he said. “We need to proactively ask ourselves about the potential unintended consequences of these technologies.”
Unintended consequences are one thing; predictable outcomes are another. Let’s not be glib about root causes. Let’s not conflate errors, poor judgment, and shortsightedness with unintended consequences in order to shirk responsibility. And let’s not ignore the lessons of today’s unintended consequences. They are a warning about the potential impact of the next set of emerging technologies.
AI will soon be upon us; 5G promises to accelerate the speed of consequences, both good and bad. The combination is poised to change everything — our companies, our products, our transportation, and even our governments.
Two of the world’s leading AI experts will be at Techonomy 2018 next month for a conversation about how AI will change innovation, policy, the nature of companies and work, and even national competition. Kai-Fu Lee, an alumnus of Apple, Silicon Graphics, Microsoft, and Google, is based in Beijing and is the author of a new book that explores the global fight for AI dominance. Paul Daugherty is chief technology officer at Accenture and a longtime student of how AI is transforming business. He is bullish, writing here for Techonomy that “artificial intelligence could double annual economic growth rates of many developed countries by 2035, transforming work and fostering a new relationship between humans and machines.”
Given its transformational power and the global stakes, now is the time to assign responsibility for AI’s thoughtful development and deployment. A recent paper from EY contends that AI and machine learning are outpacing our ability to oversee their use, and points out that it’s risky to use AI without a well-thought-out governance and ethical framework. The paper suggests four conditions that should be considered before starting an AI initiative:
- Ethics: An AI system must comply with ethical and social norms. Ethics must inform how people design, develop and operate the AI, as well as how the AI behaves. This approach, more than any other, introduces considerations that have historically not been mainstream for the development of technology, including moral behavior, respect, fairness, avoidance of bias, and transparency.
- Social Responsibility: The potential societal impact of the AI system should be carefully considered, including its impact on the financial, physical and mental well-being of humans and our natural environment. Potential impacts include workforce disruption, skills retraining, discrimination and environmental effects.
- Accountability and Explainability: The AI system should have a clear line of accountability to an individual. Also, the AI operator should be able to explain the AI system’s decision framework and how it works. This is more than simply being transparent; this is about demonstrating a clear grasp of how AI will use and interpret data, what decisions it will make, how it may evolve, and the consistency of its decisions across subgroups.
- Reliability: This involves testing the functionality and decision framework of the AI system to detect unintended outcomes, system degradation, or operational shifts — not just during the initial training or modelling but also throughout its ongoing “learning” and evolution.
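To make the accountability and reliability conditions above concrete, here is a minimal sketch of one such check: comparing a system’s positive-decision rates across subgroups. The data, group labels, and the 0.8 “four-fifths” threshold are assumptions for illustration, not a standard prescribed by the EY paper.

```python
def selection_rates(decisions):
    """Compute the positive-decision rate for each subgroup.

    decisions: list of (group_label, decision) pairs, decision in {0, 1}.
    """
    totals, positives = {}, {}
    for group, decision in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest subgroup selection rate.

    The "four-fifths rule" heuristic flags ratios below 0.8 as worth
    reviewing -- a threshold assumed here for illustration only.
    """
    return min(rates.values()) / max(rates.values())


# Hypothetical audit of a hiring model's past decisions.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 3 of 4 selected
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 1 of 4 selected
rates = selection_rates(decisions)
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates) < 0.8)    # True -- flag for human review
```

A check like this is only one facet of reliability; in practice it would be rerun throughout the system’s ongoing “learning,” since a model that passes at launch can drift into biased behavior later.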
EY frames these considerations as risk management, but it is really a guide to building trust when developing and deploying any emerging technology. It requires a new way of thinking, one that acknowledges that while consequences can be unintended, they are not necessarily unpredictable.
Paula Goldman, global lead of the Tech and Society Solutions Lab at Omidyar Network, is championing programs to bring this kind of thinking to the development process and to computer science education. She and Terah Lyons, who leads the Partnership on AI, will lead a discussion about how to build an ethical operating system at next month’s Techonomy 2018.
The three-day retreat will be a very intentional conversation about consequences — how to anticipate them, how to guard against them, and how to react to them when they do inevitably come. It is a crucial step forward on the path toward a more trustworthy future.