Why is artificial intelligence, or AI, so much in the news these days? How big an impact will it have on business? Should we fear it? What will it mean for jobs? What do ordinary people need to understand? These were some of the questions that motivated a recent salon dinner discussion that Techonomy hosted on AI. Our partner for the dinner was Accenture, which has put this subject high on its own corporate agenda as it sees AI-related challenges and opportunities looming ever larger for its clients.
We gathered an eminent group of tech and business leaders. The conversation was unusually animated, yet there wasn’t even agreement on how to define AI. Sridhar Sudarsan, a top technologist with IBM’s Watson group, gave it a good shot when he said that at least one of four characteristics of human thinking must be involved for a computing process to be considered “cognitive computing,” the term IBM uses for AI. The four characteristics: understanding information, correlating it with other information, reasoning about it, and learning in a way that affects future decision-making.
AI has waxed and waned as a theme in the computer industry for more than 30 years. Minerva Tantoco, New York City’s chief technology officer, noted that she had started a company in 1985 that used what was then called AI to create a spreadsheet that “wrote its own formulas on the fly.” She explained that three highly significant developments since the 1980s have altered what’s possible today: massive computing, massive data, and massive connectivity.
Paul Daugherty, Accenture’s chief technology officer, agreed with Tantoco that the environment has fundamentally changed, saying that those vast increases in computer and network capability essentially “demanded” AI. But he noted that so-called “general-purpose AI,” or machines that act in a manner resembling people, remains “way off.” “The advantage now,” he said, “is with special purpose AI.” He explained Accenture’s view that “a lot of the applications of AI now are at the edge of the enterprise. It needs to move to the core. But we have to take a people-centered approach. There’s too much focus on getting rid of people instead of thinking about how to enable people to do better work.”
There was much discussion of how even mentioning AI raises popular fears of job replacement. Joao Barros, CEO of urban mobility and connectivity startup Veniam, noted that “when people hear the term ‘AI’ they think ‘something that used to be done by a human is now going to be done by a machine.’” Some at the dinner argued that the fear of job loss accompanies every phase of computerization and automation, and that this time is not much different.
Jon Stein, CEO of Betterment, a fast-growing “robo-advisor” for personal finance and investment decision-making, said, “When Betterment started, there was a lot of fear that the human role [of investment advisor] would disappear. But that did not happen. Each person instead can now serve more customers better.” Longtime New York venture capitalist Alan Patricof chimed in to say, “We’re going into a phase where we’re teaching ourselves to get better and better” through the use of computing.
AI continually calls into question the proper division of labor between people and machines. Sudarsan of IBM said that when IBM began applying Watson to cancer research problems, “we very quickly realized how little we know.” He continued: “We need to shift humans to working on undiscovered problems, and let machines work on the already discovered ones.” Edwin van Bommel, chief cognitive officer of IPsoft, which builds AI-related applications for businesses including a digital “cognitive agent” called Amelia, put it a different way: “We need to shift machines to what humans aren’t good at, and shift humans to what machines can’t do.”
Bill Ruh, CEO of GE Digital, said he believed that in a transition to a more automated society “at most 5% of jobs in the US will disappear. For the 95% remaining people will apply insights to optimize their jobs, resulting in a new wave of productivity in the workplace.”
Several mentioned the vast number of Americans who work as truck drivers, whose jobs may be threatened, or at a minimum vastly altered, by the dawn of self-driving vehicles. “The drivers will now be answering phones,” opined one person, “and doing other jobs to help run the business, because they will not be fully occupied with driving the truck. But they’ll still need to be there.”
Many offered examples of situations where AI, whatever its limitations, is aiding people in doing important work. “We have failing schools,” noted Kathleen Warner of the New York City Economic Development Corporation. “Shifting to ‘bots’ is magical. The AI empowers teachers to figure out an individualized, unique path for each kid.” Mark Bartolomeo, who oversees Internet of Things efforts for Verizon, called that a compelling example. “We need to look at problems worth solving,” he said, “and use AI to address them. Education has an outcome worth investing in. Same with vehicles: the outcome is to reduce fatalities and accidents. That’s worth investing in.”
Deepak Krishnamurthy, SAP’s chief strategy officer, weighed in that good AI will make software “invisible and seamless.” But he also said that in the current AI frenzy there is a lot of what he called “AI-washing,” making it seem like AI is involved in just about every advance happening in technology. Several others complained about the amount of AI-related hype. Many in the room professed to be confused about the difference between AI and other sophisticated computing processes that involve data and analytics. A fair number left the dinner still confused about that.
But an air of caution hovered over the room. Said Barros of Veniam: “The speed at which we create context for these new processes is so much slower than the speed at which the technology is evolving. AI can be dangerous. That’s why you have politics, which is multiple people weighing information and then making decisions jointly about how to respond to such things.”