Will We Own AIs, or Will They Own Us?

(Originally from Techonomy Magazine)

Illustration courtesy of Emmanuel Polanco for Techonomy Magazine

The rise of artificial intelligence (AI) promises a technology revolution, but like most major innovations these days it is misunderstood. Some of its real risks to society and to the privacy and autonomy of our daily lives are practically undiscussed. Not that there isn’t extensive fretting on other fronts. We hear endless talk about the threat AI could pose to jobs, and Elon Musk has warned that future AIs could develop their own autonomous mean streak; he tweeted that they pose “vastly more risk than North Korea.”

Meanwhile, AI is widely overhyped. Look: IBM’s Watson has won Jeopardy; industry executives blithely call AI “freakishly good” at voice recognition; Google’s AlphaGo program has beaten a renowned Go master!

What gets lost amidst all this, however, is a fundamental procedural, social, and policy challenge: Who will own—and control—artificial intelligence?

In his seminal science fiction novel Neuromancer, William Gibson not only popularized the term “cyberspace”, but also sketched a future where ownership of an AI is the ultimate hallmark of wealth; one big extended family in the novel owns not one, but two AIs, dubbed Neuromancer and Wintermute. But in reality, few if any of us will own our own AIs. Instead, how we monitor those who do own them may determine whether we as humans have freedom of choice across our entire lives.

Technology rarely turns out quite the way we expect. Take online search: We were promised a tool for discovery—news, encyclopedias, libraries, and the phone book all rolled into one. Instead, search transformed how we live far more fundamentally. It has morphed into a decision-making mechanism that is no longer about finding things; the results guide and shape our decisions. You may get scores of pages of results when you search for a product, service provider, or holiday destination, but in all likelihood you will select one of the top five “choices” presented to you by Google’s algorithm. Search does not deliver the serendipity of discovery. Instead it has become a filter, fine-tuned by your web habits and fraught with highly targeted advertising.

It is similar with free web services—Flickr, Instagram, Netflix, LinkedIn, or Facebook. As is widely said, when you don’t pay for the product, you are the product: the companies mine the rich data we all produce in order to show us ads.

The AI revolution will be even more fundamental, yet we may not even notice it. Already, we are happily using the rudimentary precursors of true artificial intelligence: the digital assistants in our smartphones and smart speakers, and the chatbots answering the basic queries we put to retailers, banks, and other service providers. The more delicate the subject, the more happily we “talk” to the machines. Research suggests we would rather discuss a financial problem with a computer than with a human, who after all might be judgmental.

And what’s not to like? Do we really have to make every routine decision ourselves? “In which folder should I file this email?” “When do I need to order more washing powder?” Many say that AI could give us the time and power to focus on truly creative work.

But here’s the rub: All this support comes at a cost.

At work, our employers may pay for an AI and try to ensure it has the company’s best interests in mind. But what about the AIs that will augment our private lives? Many of them will be free, as are today’s digital assistants like Alexa, Cortana, Google Assistant, and Siri. We will talk to computers all day long.

AIs will come into their own once they truly understand our natural-language queries and respond in kind. When that happens, they will not bore us by reading out pages upon pages of search results; instead, we will expect them to give us clear directions, or at most a choice between a couple of options. At that point, however, they will have stopped augmenting us; they will have become our opaque gatekeepers. If today’s search reduces our range of choices to five or maybe 10 options (since hardly any of us scroll beyond that), then voice-powered AIs will narrow that down to just one or two.

Which AI will we choose? The one that promises to be good? The one that speaks with a celebrity voice? Or simply the one that comes preinstalled with our digital world’s operating system?

How will we know whether our “personal” AI does evil? If we ask for the best restaurant nearby, will it give us the choice that we might have picked from a list of search options, or the one that paid to be first? More significantly, if we ask our AI to give us the background on a political issue, will it skip the parts that interested parties have worked hard to suppress? Will today’s relatively benign search engine optimization morph into a more pernicious gaming of AI results? Will someone be able to pay to “optimize” what the AI tells us so it is in their interest?

The dynamics of value exchange will have shifted dramatically once again. We will have evolved from our small filter bubble into a filter prison that could, in practice, offer us at best binary choices. The product at that point will not be the data trail of our behavior, but our behavior itself. To put it bluntly, it won’t be the AI that is the ownable service, but our own actions, as we obediently follow the advice of “our” AI.

Today’s smartphones and the voice-recognition services of our connected speakers and other Internet of Things devices are already the harmless-seeming beachheads of AI in our homes and lives. As such devices and services proliferate and appear in more and more places, they may seem so harmless (and cheap) that we will barely notice until well after this revolution has triumphed.

Still, it’s not too late for societies to make the right choices so that we as consumers can swerve before we enter this one-way street to dystopia. It doesn’t help that the technology industry today remains the most trusted of all industries, according to the Edelman Trust Barometer, an annual survey of more than 33,000 people. But the 2017 survey also shows that the more cutting-edge the technology, the more this trust is likely to evaporate. When it comes to the hot contemporary topic of autonomous cars, for instance, consumers are decidedly queasy about whether they should get on board.

Car companies that want to entice us to buy an autonomous car will thus need to build trust first, by clearly conveying to us the opportunities and limitations of the technology. All of us will have to demand similar levels of transparency before the AI revolution seeps into our lives. Companies will need to answer some key questions: What underpins AI decision-making? Who sets the parameters of their decision filters? Can third parties influence the options “our” AI offers? Both Microsoft and Google have already said explicitly that they realize they must make AI “people-friendly,” openly address questions of system transparency, and proceed slowly in order to develop public trust.

Ultimately, we will need clear laws and well-accepted procedures to regulate robots and AIs and deploy them effectively. Because as we go forward, whoever owns the AIs could, if we let them, own us.

Tim Weber is an SVP with a communications marketing firm in the UK and the former business and technology editor of the BBC News website.

 
