Our homes, industries, livelihood, toys, robots, and even our wars are being shaped by rapid advances in AI. How do we ensure that the resulting systems connect these advances equitably to the benefit not just of business, but also of society?

Sherman: Good morning all. Welcome. We’re about to have an hour together to talk about AI. We have a panel here at this end of the table—I’ll introduce them in just a moment. We’re going to have plenty of time to turn this into a general conversation that includes all of you. I would definitely encourage you, if you have questions along the way, just let us know and we’ll include you in the conversation.

My name is Strat Sherman. I’m going to be the moderator here. And I’d like you to meet our panel. So next to me, to my right, is Shivon Zilis. She’s a venture capitalist doing early-stage investing in companies for Bloomberg Beta and has a variety of machine learning and AI-related investments already, so an interesting view of what is being developed right now.

Next, we have Francesca Rossi, who is at the IBM Watson Research Center, so she is an active AI researcher herself, an academic background in—how would you describe it, the academic field?

Rossi: Collective decision making.

Sherman: Collective decision making. And is also IBM’s representative to the—is it called Partners in AI? Along with Amazon, Facebook, Google, and so forth. And I always imagine meetings of that group being very much like this, at sort of a Treaty of Versailles type table. So perhaps we can induce Francesca to tell us a little bit about the diplomacy of artificial intelligence.

Next over, Paul Daugherty, CTO, if I’m correct, at Accenture, and long career, actively involved right now in a lot of the practical implementation of AI at a very large company.

Next we will go to Babak Hodjat. Babak has an interesting resume that includes precursor work for Siri. He’s now CEO of Sentient Technologies, which is—

Hodjat: I’m not the CEO.

Sherman: You’re not CEO, you’re—

Hodjat: Cofounder and chief scientist.

Sherman: There you go. So the error rate of human beings is exceedingly high, I’ve been told, and I’m the living example of it. [LAUGHTER] This will continue throughout the morning.

And Sentient—I’ll leave you to describe it, but it’s a very interesting company in terms of the AI applications happening right now.

And finally, we have Mark Patel from McKinsey. He’s the practice leader in Internet of Things.

So here’s our panel. And what I’d like to start off with is what is AI? I mean, let’s just get a framework here so we all know that we’re talking about the same thing. Anyone on the panel want to grab that?

Hodjat: It’s very brave of me to attempt to define AI. [LAUGHTER] I believe AI is any algorithm or collection of algorithms inspired by human or natural intelligence. Typically, it’s a meta-algorithm, so we make use of an algorithm to write us an algorithm, either for finding patterns in data or—our belief at Sentient—to actually have it make decisions, so incorporating the decision-making loop and automating that based on data. So that’s my definition, but I’m sure everyone on this panel has an opinion on what AI is.
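[Editor’s note: a minimal sketch, not the panel’s code, of the “algorithm that writes an algorithm” idea Hodjat describes. The toy data and the threshold learner below are hypothetical; the only point is that a learning procedure takes data and returns a new decision procedure.]

```python
def learn_threshold_classifier(examples):
    """examples: list of (value, label) pairs, labels 0 or 1.
    Returns a new function that classifies unseen values."""
    # Try each observed value as a cutoff and keep the one with the fewest mistakes.
    best_threshold, best_errors = None, float("inf")
    for t in sorted(v for v, _ in examples):
        errors = sum((v >= t) != bool(label) for v, label in examples)
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    # The returned closure is itself a small "algorithm" written from data.
    return lambda value: int(value >= best_threshold)

# Learn from examples, then decide with the learned algorithm.
classify = learn_threshold_classifier([(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)])
print(classify(0.3), classify(0.8))  # -> 0 1
```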

Zilis: I don’t even use the term, because you end up getting in so many fights when you use the term artificial intelligence. We publish a now-annual report on the current state of basically what we’d consider these learning algorithms and AI, and I would just get in constant, constant fights, because I’d say, “Oh yeah, well this company is using machine learning techniques or AI to do this,” and they’re like, “Really? They’re just using text data and they’re making a smarter output out of text data. Is that really AI? That’s something I could do.” So it’s this inherently ephemeral term. We’ve just landed on machine intelligence to describe the entirety of this world, which is that of data and learning algorithms.

Daugherty: I guess we’ve taken a view of—I agree, it’s a hard thing to define and the problem is the definition shifts, because we sometimes stop thinking of things as AI once they become more mainstream. So it’s a shifting bar. But I guess we just try to describe the outcomes or use cases around it as where we’re building systems that can sense, comprehend, act, and then learn. You know, if it exhibits those kind of characteristics, it’ll typically be powered by artificial intelligence and that’s the next generation of solutions that we’re looking to power.

Rossi: I agree, it is a very ephemeral term, and I completely agree that once we know how to do something, it’s not AI anymore but just another computer problem. But if we want to define the discipline, the original idea of course was to build systems and programs that would possess some form of what we usually associate with human intelligence. And over the years, I think the discipline has worked a lot on problem-solving capabilities, because that’s one big part of how we demonstrate our intelligence. And then more recently, we realized that these machines that were able to solve problems very efficiently and optimally also needed a lot of perception capabilities to be able to solve problems not in a controlled environment but in the real world. So that’s why in recent years, fortunately, there were many techniques that succeeded in providing these perception capabilities to intelligent machines.

Sherman: Francesca, you said something and I’d like to know whether you were being frivolous or serious when you said it, that once a problem is solved, it’s not AI anymore.

Rossi: No, that’s what people—you know, Peter Norvig from Google sometimes says AI tackles problems that we don’t know how to solve. So then once we tackle it and we solve it, then it’s just an algorithm to solve a problem and then let’s go to the next thing that we don’t know how to do, because of uncertainty and real world vagueness and whatever. So yeah, it’s a common perception that the definition of what is AI shifts a lot.

Patel: I mean, I love the idea that once you solve the problem, it’s no longer AI. But what does it mean to solve the problem? I think at some level what we’re doing with AI today is solving some very specific problems that we might be claiming victory around, but I would argue that if you extrapolate to the broader world, or to a broader set of definitions of the same problem, we’re far from solving it—and yet there’s still plenty of opportunity for application of these technologies.

Daugherty: I think one of the reasons this discussion is even important is, we did a survey recently of global 1,000 corporations and 75% of the executives said they’re doing AI, which I think is scary, because they’re not. [LAUGHTER]

Zilis: It’d be scarier if they were actually doing AI. [LAUGHTER]

Daugherty: Yeah, it’d be scarier. But that’s what I get concerned about, because there is so much potential for this to be really transformative, and you need people to really understand it and understand how it’s changing their organizations and how they can do things differently. But what’s happening is that as companies hear a lot about it, they rush to embrace something, they solve a simple machine learning problem in some area, and they say, “We understand this, we got it,” and they kind of stop there. And I believe this is a bigger transformation that requires people to really understand not just where AI is now but what the potential is and where it’s heading, and to start to think about how they’d do things differently.

Sherman: Full disclosure, I studied comparative literature, French and Russian novels in the nineteenth century. [LAUGHTER] So I think I’m qualified to represent those who don’t really understand what’s going on.

A: What’s interesting is that everybody had a different definition, so the experts don’t have the right definition—or the same viewpoint. And that’s to me disturbing.

Zilis: Well, so how about this, let me offer a counter. I appreciate that. The way we look at this—and again, I’m coming from a position where we’re trying to invest in anything and everything machine intelligence that’s solving real problems. And that gets to the point of problem first. We’re actually fairly agnostic to the method for solving that problem. Sometimes it’s going to be a machine learning algorithm that was pioneered 15 years ago, sometimes it’s going to be one of these new and novel deep learning techniques. The purpose for us is not about the method you’re using. It’s just that it’s an appropriate method to solve a given problem.

Sherman: So if I may, I was about to propose a hypothetical definition of AI—probably wrong, but we’ve got capable people here to correct me. It seems to me there are two primary pieces here. One of them is the ability to absorb and process the information in datasets larger than human beings would be able to do in a given amount of time, combined with some kind of machine learning that has a continuous improvement element to it that makes it smarter through its own activity. And to me, those two pieces seem to be the fundamentals of AI—I see that I’m wrong, so correct me, please. [LAUGHTER]

Rossi: Yeah, because I think there is danger of confusing AI with machine learning. And that’s something that you read everywhere, and I don’t think that the two things are the same thing.

Zilis: When you say AI, do you mean AGI or—

Rossi: AI. And I think that they’re not the same thing. You know, machine learning—the ability to learn—is one big part of our intelligence, and it should be a big part of machine intelligence. But there are also many other capabilities that you need for any machine to be, quote, “intelligent.” So in fact, you know—I come from an academic background—if you look at the scientific discipline, we have machine learning, but we also have many other things like knowledge representation: how do you represent the world around you, how do you reason with that, how do you solve problems, how do you plan, how do you schedule activities. All of these other disciplines should be combined with the ability to learn. And this is also a big issue, because the kinds of techniques used in machine learning and in these other disciplines are sometimes not easily combinable. But I think that all of these should be put together to get a real AI.

Hodjat: There’s absolutely a disconnect between what the general public thinks is AI and what practitioners think is AI. And, yeah, machine learning is an AI algorithm, as are a number of other ones. And of course, for applied AI, you’re thinking about how you put together these various different algorithms where they are useful to solve a problem. But I’ll give you an example. With Siri, do you know which part of Siri gives it this aura of being intelligent? It’s the least intelligent piece, which is randomly picking from canned responses to predefined inputs. You know, if somebody asks you what is the meaning of life, there are four or five or six different responses you pick from, and that differentiates Siri. When you talk to people, they’ll tell you Siri is smarter than, say, Cortana, because Siri is funny or because Siri is—as an AI practitioner, do I consider that AI? No. It’s looking up something in a table, right?
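[Editor’s note: a minimal sketch, not Apple’s implementation, of the canned-response lookup Hodjat describes—predefined inputs map to a handful of prewritten replies, and one is picked at random. The table contents here are hypothetical.]

```python
import random

CANNED_RESPONSES = {
    "what is the meaning of life": [
        "42, obviously.",
        "To ask better questions.",
        "I find that question very human of you.",
    ],
    "tell me a joke": [
        "I would, but my humor module is still in beta.",
        "Two chatbots walk into a bar...",
    ],
}

def reply(utterance: str) -> str:
    # No learning, no reasoning: just a table lookup and a random choice.
    options = CANNED_RESPONSES.get(utterance.strip().lower().rstrip("?"))
    if options is None:
        return "Sorry, I don't have a canned answer for that."
    return random.choice(options)

print(reply("What is the meaning of life?"))
```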

A: [off mike 0:13:57.8] I mean, you’ve just described the Turing interaction.

Hodjat: Absolutely. So Turing’s definition is exactly that. Turing also says if we think it’s AI then it’s AI. But I think that’s a cop-out, in a way.

A: Well, I’m curious, considering we’re spending the first few minutes trying to define artificial intelligence, whether the practitioners up here feel that the original sin was actually in 1956, calling it intelligence. Would you be happier if that term had never come up and we were just talking about some digital way of solving problems and having machines do things for us, or do you think that maybe “intelligence” enables the general public to guard against something that might be disruptive in their lives?

Hodjat: I personally think that the term artificial intelligence creates a lot of the confusion—the “intelligence” in the term—because we’re talking about a lot of techniques there. When you get to certain things like knowledge representation, and when you get to AGI types of concepts, yeah, intelligence becomes a real consideration. But if you look at statistical machine learning algorithms—would you call that intelligence, or just a better way of solving a problem? So I think the term artificial intelligence does cause a problem. But I don’t get that disturbed by the fact that we can’t define it, because that’s just endemic to our industry. I mean, think about cloud computing in 2007—how many of us could have agreed on what cloud computing was? It took a few years to get to IaaS and PaaS and a classification that we could all live with and agree on that allowed us to talk about it. And I just think that’s where we are. This has been around a long time, but I think that’s where we are in understanding the application of it.

Patel: To me it’s the artificial piece that I think actually gets the emotions going, as opposed to the intelligence piece. And we’re actually getting—I think as individuals and as organizations, companies, we’re getting very comfortable with the concepts of anticipation, of prediction. I mean, it’s happening every ten seconds on your phone. And so if you think about it in terms of anticipation, prediction, and then intelligence, it’s just a continuum that we’re on that’s actually enabling so many better experiences in our lives.

Sherman: But if I may, the reason why we have a full house at 7:45 in the morning is not because there’s an interesting semantic question here.

Patel: No.

Sherman: I mean, we can debate for hours about artificiality and about intelligence, and I think to some degree we certainly will continue to. But there is a thing there that we are preparing to have a world-historical influence, I believe. I assume. Am I wrong? It’s not just an idea. It’s a reality.

Rossi: I like the term artificial intelligence. I’ve used this for many, many years. But at IBM, I think there is an interesting tweak on it that we call it AI but actually, we mean augmented intelligence. So the fact is that these AI systems, whether they are autonomous or not, they are going to live and function with us in our world. They don’t have their own world separate from ours. And so we are going to be changed by the use of these systems and our intelligence is going to be changed, and hopefully, you know, we hope to be augmented, you know, enhanced. So the goal is not just to replicate our intelligence. To do what? To augment our intelligence, to make us live better, to solve our problems. So I think that the term that we use, augmented intelligence, I think is more—it tells you the overall goal of these AI systems.

A: Robert Klitzman from Columbia University. I’m wondering if trying to define and label it is in some ways a mistake, because it is evolving, whatever it is. The term artificial, the term intelligence, both these are contested broadly in social sciences. I trained as a psychiatrist. You know, in many areas, it’s not clear what these things mean. Artificial as opposed to natural, if the machine is doing something that’s sort of like humans, is that natural, etcetera. So it seems to me just calling it a spectrum of activities might be sufficient. I don’t know if there’s a need to really try to kind of label it and categorize it as such.

Sherman: I definitely think that we would be wasting our time if all we do is talk about that. So let’s move on. What is it about AI—we’re going to call it that for the next 40 minutes. What is it about AI that is worth thinking about? What’s the big deal?

Zilis: So I’m going to give a very unsexy answer to this question, which is just my personal story of how I became interested in it. I’ve been involved in data products for the last decade and, again, as an early-stage VC, my job probably looks a lot like a lot of your jobs—trying to stay on top of many different flows of information. I remember doing a lot of research when we first started being able to capture and store all these different data types economically. There was this huge promise in that, and I realized my job was getting kind of worse and worse and worse: I had to stay on top of more and more things, and my ability to do that had not increased very significantly.

And so I got lucky in being an early-stage VC. We’re constantly testing out all these different software and productivity tools, and there was this subset of them that was making my life much, much better. And it got to the point—you know, to your point about augmented intelligence—where I didn’t want to let go of this new appendage, and I wondered—and this is, you know, circa three and a half, four years ago—what is it about this subset of tools that’s so interesting, that’s so transformative? And it was the fact that, again, they were incorporating this machine learning element. And they were helping me—for example, I don’t use Google for research anymore. I use tools that basically 80/20 a lot of that research task for me. I use these tools to stay on top of my email. And so I just wondered: if this is changing my life as an individual knowledge worker to the point where I don’t want to do my life without these things, what’s happening in the rest of the world? And so that’s the launching-off point—why does this matter.

Sherman: So if I can summarize, what’s important about AI is extreme utility.

Zilis: Yes.

Sherman: Thank you. Anyone else?

Zilis: If only I were as concise as you. [LAUGHS]

Daugherty: Maybe just a different take on that, but I think it transforms human machine interaction.

Sherman: How so?

Daugherty: Well, in the way that you just heard. We spend an immense amount of our effort now trying to figure out how to use things like thumbs to interact with awkward keyboards, and use QWERTY keyboards that were designed to slow down our speed of thought, and we’re embedding humans in processes in ways that I think are completely suboptimal. I gave some examples on the panel I was on yesterday. Think about, you know, checkout clerks in stores who are really robots moving things past the OCR scanner and taking credit cards and swiping them. The opportunity to have systems that operate at a human level, so that humans can engage naturally with them, I think transforms the way people interact with things. Think of how you interact with your Echo, if you have that product at home, versus the old way of looking things up and searching. That’s just a small example. I think the tremendous power of this is augmented intelligence that allows humans to operate at a level where we don’t have the impedance mismatch between the speed of human thought—the way we think—and the way machines think. The way consumers and workers use technology now, it’s like we’re not working for robot overlords; we have machine minions that are very unintelligent and subject us to working in very inefficient ways. More intelligent machines allow us, as consumers, to operate more effectively, and allow workers to be more productive.

Sherman: Just a question and then I’ll go to you, Mark. And then I’ll go to you, sir. You’re not forgotten, believe me.

When we went from what used to be called foils at IBM, which were pieces of clear plastic with like a wax marker we used with an overhead projector in order to run meetings better back in the olden days, and then Microsoft gave us PowerPoint. Now, that made the world a worse place, I would argue. [LAUGHTER] When I was a kid, telephone service was good. Today we’ve got supercomputers in our pockets and telephone service is bad. So how do we think about progress, one step forward, one step back, five steps forward, one step—what’s the ratio of making things better to making things worse in AI?

Patel: So I’ll try to answer that, and also get to where I was going with your meta-question, which was what’s important about AI, at least from our perspective. It’s a tool, like any other tool. So what’s important is what we aim it at and how we apply it to that problem or to that opportunity. And I think that also characterizes the answer to your question: it’s as good as what we apply it to and the outcome that we get when we apply it. A lot of the work that I’ve been doing with clients, and as we’ve been looking beyond what clients currently do, is to try to say what are the outcomes that we can get to faster or better with AI, or which address problems where whatever we’re doing today doesn’t get to the same quality or the same clarity of outcome. And we think there’s a set of problems characterized in such a way that they make great candidates for addressing with AI.

Now, some of them are, to your point, Paul, around the man-machine interface and further automation of things that we already automate, so that we reduce waste and improve safety. Those are all good things that we can get better at with more advanced application of machine learning, of further intelligence. There’s also a set of problems that I would argue go beyond that, which we haven’t yet got good tools to solve but which AI has some promise around. And those, I think, fall into two categories. One is around eliminating waste, where you’re optimizing for resource outcomes. Humans are like the worst optimizers for resource outcomes, because we let our emotions and our own biases get in the middle of it. Any negotiated contract, any negotiated outcome, any forecasting or planning activity turns out to be relatively suboptimal, often because we’re the fault in the mix. Those are great application areas, I think, in the real world for AI.

Sherman: Mark, but before you move on there, what about the problem that our light bulbs are all going to unite and wreck everything? [LAUGHTER]

Patel: I think that talks to a different presumption, which is that it’s relatively straightforward for the machine to take control. And I don’t know about you, but in my connected home, I actually can’t get the light bulbs to talk to each other. [LAUGHS] And that’s with me in the mix. So if it can do it for me, I’m a fan. But I think that’s a microcosm of the same issue, which is actually, when you hit the real world, there’s complex challenges to overcome, not least the fact that, you know, at one level we haven’t set up the interoperability in what would appear to be relatively common operating systems. But then that extrapolates to much harder challenges around real world operating systems, where incentives don’t align, people have different motivations as to whether they want to get the same outcome. Those are the real challenges that are going to get in the way of applying AI to get to better outcomes in the real world.

Ken Washington: I wanted to—well, I’ve been holding the mic long enough that I’m still stuck on the first question. But I do want to comment on what is it good for. I think the words matter. So the word ‘artificial’ and the word ‘intelligence’ matter. Because if you think about it, artificial is really about being made by humans. It’s not real. It’s intended to mimic something that’s natural. And when you apply that to—

Sherman: Excuse me, sir. [knocking on surface] Is this not real?

Washington: Okay, so not ‘natural,’ not—

Sherman: Thank you.

Washington: So when you apply it to intelligence, it becomes more controversial because it’s about us making something that is intended to solve a problem that otherwise would be solved by humans.

Sherman: Or not solved.

Washington: So it’s not controversial at all when you apply it to things that are pretty mundane like artificial hair and artificial turf and artificial flowers. And so artificial is something that we’re pretty comfortable with because we know how to make things that make our lives better or that save resources or that solve problems that otherwise can’t be solved. And that’s exactly what’s happening with intelligence. We have learned how to artificially make something, which in this case is a set of complex computational tools and data stores that can help us solve problems. Because without it, we’ve got to solve all the problems ourselves, or using methods that are less capable.

And so what is it good for? It’s good because it’s helping us solve problems in society that otherwise we couldn’t solve, like saving lives in an autonomous car.

Rossi: I agree completely. So that’s what AI is good for. It’s good for helping us solve problems that we cannot otherwise solve, because with our brains we are not able to digest and assimilate and find patterns in all the data around us. Doctors cannot read all the papers that have been published about a disease, so what do they do? They don’t know what’s the best therapy for a patient, and the AI system can help.

And I think this is an obvious definition: it’s good because it helps us solve problems that we couldn’t otherwise solve. But this changed over time. I’m old enough to remember when AI was something that didn’t have many applications in real life because, as I said before, it didn’t have the capabilities to perceive—to read, to see, to sense what is around it in the real world. Those capabilities came about recently with machine learning algorithms—which are innovative, but not that innovative with respect to what existed many years ago—plus the exponential growth in computing power and the great amount of data available for training these algorithms. All of that came about and converged in a way that allowed us to give machines these real-world perception capabilities. And with that, together with the other techniques, AI started having a real impact in solving real-world problems. And without that, we couldn’t solve them. So that’s exactly what I wanted to say about what AI is good for.

Sherman: So one comment here and then I’d like to actually turn to a different subtopic, if I may.

Anderson: Mark Anderson, Strategic News. And I’m also CEO of Coventry Computer. I’d like to follow up on what Paul said and what Mark said and move us a little bit away from what is AI and move toward what it does. And following up what Paul said, I think we’re not quite there but we’re almost at the point of defining something that is truly a revolution—I hate that word—in the human machine story. Until this week, or last year, the whole story could be described by saying we tell these machines what to do and they do a pretty bad job of it, but increasingly better. And after this next thing I’m going to describe happens, that’ll be flipped.

And that next thing is, as we use patterns, not to find the monkey in the Google pictures but to actually find patterns of things that we never suspected existed, that changes human life and that changes the relationship between humans and machines. So currently it’s called unsupervised learning, which I think is kind of an incremental thing. But if you actually look at it raw and you say when we can actually find things out about the world that we didn’t have any previous idea of, then the machine is telling us something we didn’t know.

Sherman: Thank you. And you actually have succeeded in turning the conversation just where I had dreamed that you would, so thank you. [LAUGHTER]

What I’d like to talk about just for a minute is decisions. Machines, intelligent machines can help us make decisions. But there’s a distinction that I think becomes critical between decision support and decision making. And I’m talking about the machine as the actor in either supporting or making the decision. So could we turn to that topic quite narrowly just for a few minutes please?

Rossi: Yes. So I think that decision making is when you delegate the decision to the machine, and decision support is when the human is in the loop, the human is the final decision maker, and the machine is helping the human make the decision. Of course there will be—there already are—both of these aspects in AI, and I think there should be both. But I think the focus should be on decision support. Of course, there are also tasks that you want to delegate to machines because they’re not typically human tasks—we do them because nobody else was doing them for us, but repeatedly doing things that a machine can do is not what defines us as humans.

But I think that really the decision support aspect is very important. You know, we want to be able to be helped in making our own decisions, you know, deciding what are our decisions and also deciding what to delegate to machines.

One thing I wanted to comment on is what you said about how we used to give a goal to the machine, the machine would go and solve the problem, and then come back with a solution. This is also something that is changing, and it’s something being considered by the scientific community and everybody, because as machines can perceive the real world by themselves, and not through our own specification of the real world—which is what was happening before—we have to be careful how the machines are going to interpret the goal that we communicate. If I tell my self-driving car, “Bring me home as fast as possible,” that’s the goal. Of course the machine is going to sense the other pedestrians, the other cars, and so on. But I want the machine to achieve that goal in the best way without running over anybody, without making me carsick, without going over the speed limit. So the goals that we’re going to give to machines are not as simple to give as before, when machines were working in a very controlled environment. That’s an issue that also involves giving machines some sort of common sense reasoning, which right now we still don’t know how to do.
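[Editor’s note: a minimal sketch, not anyone’s production planner, of the goal-specification point Rossi makes—a literal objective (“as fast as possible”) can pick a route a human would never want, while the same goal plus the unstated constraints picks a sensible one. The route options, penalties, and weights below are hypothetical.]

```python
# Hypothetical route options: (name, minutes, exceeds_speed_limit, discomfort)
routes = [
    ("highway at 90 mph", 18, True,  0),
    ("back streets",      25, False, 3),
    ("main road",         22, False, 0),
]

def naive_cost(route):
    # "Bring me home as fast as possible," taken literally: minutes only.
    _, minutes, _, _ = route
    return minutes

def constrained_cost(route):
    # The same goal plus the constraints a human assumes without saying them.
    _, minutes, speeding, discomfort = route
    if speeding:
        return float("inf")          # hard constraint: never break the speed limit
    return minutes + 2 * discomfort  # soft constraint: don't make me carsick

print(min(routes, key=naive_cost)[0])        # -> highway at 90 mph
print(min(routes, key=constrained_cost)[0])  # -> main road
```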

Hodjat: I think it does give us a little bit of a warm and fuzzy to think that AI should be used mainly to augment human decision making. But the reality is that by delegating more and more of the decision making, as I think we should, to the machines, we are removing more and more of the decision making from humans. In the case of that self-driving car, remember that we just removed the driver. Yeah, we are giving the intentionality to the car, so that’s where we are in the loop. We are the users. We are setting what is being optimized for. But the driver’s not there anymore.

And you can expand that to any kind of decision making. There are areas where humans simply cannot operate—too much data, as was mentioned, or the frequency of decision making has to be so rapid that humans simply cannot operate. And in those cases, in fact, we might be shooting ourselves in the foot by having a human sitting there and just pressing, you know, okay, okay, okay, every time there’s some sort of decision recommendation, just because we need someone to blame if something goes wrong.

So I think we have to be very realistic about what this means. Also, the intentionality itself, let’s not think that intentionality is also the exclusive realm of humans. I mean, if you look at some of the algorithms that are coming out, for example in novelty search and so forth, niche-filling algorithms, even there you can see that we’re gradually moving into pushing intentionality out to the machines.

Sherman: That was a very big idea, if you don’t mind my saying so. Do machines today, algorithms today, whatever we’re calling this AI, do they today have intentionality?

Hodjat: At a certain level, I would say yes. Because what we’re saying is we’re defining a high-level goal, which is, for example, try to get yourself closer and closer to this optimization goal. But at the smaller level, we can delegate how those sub-goals are defined by the system. So in that sense we keep moving ourselves out and we’re allowing more autonomy for—

Zilis: So it depends what you mean by intentionality. You can split that into two things, right? Actually creating an initial intent is not a thing that any of these systems has yet. A vivid example is DeepMind playing Atari games, right? They were told to maximize their score within the game; they didn’t have some deep desire to play a game and get the highest score. They were told to do this. And once they were given an intention to go fulfill, yes, of course they fulfilled it. But they don’t have intentions independent of us.

Sherman: Yet.
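[Editor’s note: a minimal sketch, not DeepMind’s system, of the point Zilis makes—the designer supplies the objective as a reward signal, and the agent only learns to maximize it. Here a toy tabular Q-learner walks a five-cell corridor; the reward it was handed is +1 for reaching the rightmost cell. The environment, names, and parameters are hypothetical.]

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)   # five cells; move left or right
GOAL = N_STATES - 1

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0   # the objective we chose for it
    return next_state, reward, next_state == GOAL

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current value estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy heads toward the reward it was told to maximize.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)])
```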

Daugherty: Well, the other thing I’d add is that once you think about intentionality, you then have to think about accountability. And sometimes we get confused by thinking about tasks rather than processes and the bigger outcomes we’re trying to achieve. There are certainly decisions that we’re going to be very comfortable with machines making—we already are. But there are things we’re not going to be comfortable with, and I think that’s what we need to think through better. An example is risk and fraud detection: very advanced machine learning, and we’re very comfortable with automated decision making triggering actions for certain customers. No problems there. But there’s a corporation somewhere that’s taking accountability for a credit policy, the collective actions, and the interplay of the automated decisions with that policy—that’s something somebody needs to take accountability for and understand. And the same with the self-driving car: you can detect a lane change, and do I nudge the customer back into the lane or not, but at some point the human has control and the company has some accountability for how those things work.

Sherman: So Francesca mentioned to me when we were conversing earlier that the European Union I believe has passed a law that will be implemented in 2018 which requires that decisions made by machines that affect human beings must be explained. And those of you who were at Techonomy last year might remember a panel on which I sat about AI in which everybody agreed that nobody knows how decisions are made. It’s just once you’re in there, there’s just too many ‘if thens’ and too much data to actually unpack how it happens. Is this wrong?

I mean, I understand that we have to—if we’re complying with a law, we are giving an explanation pursuant to the law. But is the explanation actually meaningful?

Rossi: So that law says you have to give an explanation. But of course we don’t want just any explanation. We want an explanation that is meaningful and understandable by the target user. In the healthcare domain, for example, you may want to give an explanation to the doctor, or to the patient, or to the guy who wrote the algorithm and doesn’t understand why a certain thing came out, or to the nurse. And all these people need different kinds of explanation, with different symbols and terminologies. So you want a meaningful explanation.

But definitely, that’s a big research challenge to achieve in a short time—and not just because there is a law, but because these explanation capabilities are really needed in order for the human using an AI machine to develop the right level of trust in that machine. When it was mentioned that with a decision support system the user is just there pressing a button—that’s not my idea of a decision support system. It’s really that the user is heavily involved in the decision process, with interaction and explanations about why certain suggestions are made. And there are studies that show that if you use the machine alone you get a certain error rate, if you use the human alone you get another error rate, but if you use them together you get much, much lower error rates. So that’s where we should go, I think.

Sherman: Mark, you’re making faces.

Patel: Yeah. [LAUGHTER] I give my emotions away too quickly. I think as a race and as individuals, we get much more excited about outcomes, and once we start to achieve or perceive the benefits of outcomes, we care much less about the rationalization of how we got there. So this concept that we have to be able to understand how the decision was achieved—I don’t think we will be able to keep pace, simply in trying to articulate how the decision was achieved, with the pace at which we want to get the decisions done and get the benefits.

And I’ll use two analogies outside of AI. When it comes to privacy online: if we were all sitting here having a debate about privacy online 12 years ago, I wouldn’t want to give personally identifiable information, I wouldn’t want to have a profile online, I wouldn’t want it to be accessible. Now that I get all the benefits, I really have given up my privacy. I know we’re not having a privacy panel.

The same in pharma. If I’m told I’ve got a chronic disease and there’s a treatment for it, we don’t know all of the side effects. We all do it every day when we’re sick: we sign up to take something without having characterized all of the outcomes. And in this case, the rationalization of how we got there, and what the decisions were to get there, is always going to fall far behind the desire to get to the outcome and experience the outcome.

Zilis: I just think having a blanket policy on explainability is going to hamper us. And exactly to your point, you know, if there were a system that could diagnose cancer with 5x more accuracy than a doctor but couldn’t explain it because it was using a deep neural network-based approach, I think we’d be okay with it. But if it was 1.2x better, perhaps we wouldn’t be, right?

And this gets to another point, which is that I’m not saying explainability isn’t important, but in some ways it’s a luxury. One of my predictions is that a lot of these technologies, especially within the realm of healthcare, are going to be deployed in emerging-market countries first. And the reason is, you know, here we have a great medical system that’s supported by experts, so you’re getting this incremental gain, but it comes at a cost. But if we can create automated diagnostics for MRIs and blood tests and various other things in places where nothing exists, it’s so much better than nothing. And that’s the framework that I want people to keep in mind. We’re not really good at thinking about these systems as alternatives to ourselves. I was at a dinner on AI and ethics last night and everyone was like, “What is the universal ethical framework we’re going to put in these systems? There is no ethical framework that’s universal, therefore we can never build smart systems.” And it’s like, well, no—what if the design goal is actually just to create a system that works more efficiently than our current one does?

Daugherty: I just need to add one point there. I think this issue of transparency and explainability is really important, and it’s not black or white, where you can have it all or you can’t have it at all. In a case like that, where it’s statistical improvements in quality of healthcare and things like that, I agree, explainability is less important. But there are issues of absolutes—is somebody put on a list or not, did somebody die or not because of a certain action that a piece of machinery took. And I think from a societal perspective, we’re going to be in a very dangerous place if there are certain things that aren’t explainable. We also have an issue in this whole domain where we talk about the importance of data, and a lot of these things are trained by data. Any dataset is biased, and the world is a constantly changing place, and if we don’t have ways to assure that the data isn’t leading to the wrong conclusions, we’re going to exacerbate some of the societal issues we already have.

So I think the issue of transparency is really important. I think it’s dangerous to say we can’t do it and it’s not going to be a priority, because there are areas that are going to be very important, where we, as leaders of this movement, are going to have to figure out and have answers to those issues.

Sherman: So we’re going to take a comment here and then we’re going to decisively change the conversation once again. Sir?

A: Thanks for a great conversation. I’m Lex from Autonomous Research, and I guess the thought I’m having in response to what you’re saying is that people want transparency because they think if we understand what’s going on we can correct the process and we can make a good process out of a bad process and therefore we’re not going to have a bad outcome. And I think that’s completely beside the point, how good the process is, because—this is my favorite data point on the topic. Eight percent of Twitter is automated. Eight percent of Twitter is robots talking to robots: “Here’s an article.” “Thank you. I’ll follow you,” retweet. That’s 20 million—

Sherman: Not 80%?

A: Not 80. Eight percent right now.

Sherman: I thought it was 80. [LAUGHTER]

A: It feels like it’s 80. And all of those things—we know how they work, right? We know how the agents work. There are deterministic rules. But already they’re having this sort of weird, fuzzy effect on top of a contained system. And so if you go into a world where you have thousands of different agents all chasing completely different outcomes, you have this chaos-theory situation where you just have no idea how all these things are going to interact. So it’s beside the point whether the process to get to an outcome is good. It’s what happens in the interaction of all the different outcomes that we don’t have any capacity to deal with, or to clear up, or to prevent.

Sherman: Thanks a lot, that was great. So there are two more things I hope we can cover during our remaining time. The first one is jobs. I saw a little piece of a panel that, Paul, you were on, in which there was a lot of reassurance that we’re all going to have just tons of jobs for, you know, wetware going forward. Is that really true? Has automation of even very primitive kinds not destroyed jobs absolutely consistently throughout human history?

Daugherty: Yeah, I mean we kind of got into the topic but didn’t get deeply into it yesterday. I think there is a big jobs issue that we have to confront because the effect of everything we’re talking about is automation at an—we can call it augmentation. There will be augmentation. A lot of people will do their jobs more effectively but that will result in jobs being eliminated as well. And I think in the current societal environment we have and populism, everything that we’re seeing in the world, that’s kind of another ingredient to add to some very big challenges that we’ve got.

So I think the short answer is, it’s a real issue. On the panel yesterday, we talked about individual company responses—how individual companies like GE, who was on the panel, are responding, and I was talking about Accenture and what we’re doing in that area. I think you can solve for that in some cases, depending on the market you’re in, the industry you’re in, whether your company is growing. But collectively, there’s no way around the fact that there’s going to be a lot of displacement, in my view.

I disagree with some people though, and I’m not sure we’re headed for the age of universal unemployment or universal leisure. I do think that—

Sherman: You don’t buy that?

Daugherty: I don’t buy that—yet. I could be wrong. Who knows? It’s early. But I—

Sherman: Explain it again, just in case they’re not up to speed, what universal leisure really is.

Daugherty: Well, I’m not sure I could explain it.

Sherman: The Dalai Lama actually had an op-ed piece in “The New York Times” addressing that question just a couple of weeks ago.

Daugherty: I’m beginning to regret I went down this path. [LAUGHTER] But the idea is that we’re at a level where we’re going to automate things to a certain extent where work as we know it by masses isn’t required anymore because machines will be able to do so much of the work.

Sherman: So just to be clear, the idea is that there are no jobs for human beings because we’re not needed, we’re not competitive.

Daugherty: Yeah. And my view on that personally—you know, it’s hard to know who’s right and wrong on this—is we do not yet know the new things that this will empower, just like we did not know when we went to the steam engine that there was going to be electricity and appliances and etcetera, etcetera, etcetera that evolved. I do believe there’s—I don’t think we’ve fully realized every human potential for output. We understand that in the industrial age. We’re moving to a different age and there’s lots of different potential exchanges of value between human beings that I don’t think we’ve begun to understand.

Sherman: Okay. Babak?

Hodjat: Yeah, just to put a very controversial statement out there and then just leave it lingering [LAUGHTER]: I think jobs are a function of population. I’ll just put it out there for a second.

Sherman: What’s the implication of that, if I may ask?

Hodjat: I think just like 30 years ago, people—I mean, I had just started into computers and software and everybody was like, “Oh my God, computers are taking my job. What’s going to happen?” And today, every single one of us is carrying multiple computers and we’re—we operate computers. It’s just this natural thing. It wasn’t, you know, even 30 years ago.

I think the same is going to happen with AI. I think everything is an AI problem. I think as AI becomes more and more dominant, we’re all going to be AI users. And better off because of that—

Sherman: We already are.

Hodjat: Yeah.

Zilis: So let’s keep in mind that technology has rapidly eliminated jobs throughout all of modern history, right? Half this room would have been farmers or farm laborers 150 years ago, and we’re not today. One of the questions I always try to ask is, did your job exist 50 years ago? And in general, for most of the room—minus maybe the journalists—their jobs didn’t exist. So we’re really good at recreating these types of jobs.

The thing that is fundamentally different here though, and this gets back to what has changed, the pace of skill disruption has increased very significantly and our educational system has not—you know, we basically go through this educational system where we’re very formally educated and then we go off into the workforce. And that worked in an era where you had the same skill or roughly the same skill for the 30, 40, 50 years that you were productive. And now we’re seeing, you know, HTML developers made tons of money historically. That skill got rapidly kind of thrown away. And so how can we as society rapidly retrain our workforce in a continuous way?

Sherman: What you’re saying is so important I have to interrupt you. [LAUGHTER] I mean, seriously. This is just a gigantic point, because we’ve just had an election, in the most powerful country in the world, in which people who have had their skills disrupted have grabbed pitchforks and stormed the Bastille successfully. So this is important. I worked at FORTUNE magazine in the mid-1980s, when lifetime job security was destroyed. Anyone who thought about it in 1985 knew that manufacturing jobs were going to decline, that people with low education were going to have vastly diminished, ever-diminishing opportunity. We knew all that. And if I may note, we didn’t reskill those people. We didn’t help them pursue college educations and high-tech jobs. We left them to fester and to have the outcome that we’ve just had.

And so as we accelerate technological progress and we accelerate the rate at which we eject people from utility in the world, we may be buying ourselves problems that we actually aren’t capable of solving.

Daugherty: Well, I’m optimistic on the longer term, because I think the transformation of the K-12 system in preparing the next generation for better success is one thing. The issue we’ve got—and you used the word retraining—I think that’s the exact issue. We have people already displaced, and more people who will be displaced, and I think we need new solutions for retraining the currently displaced people—

Sherman: Are you optimistic that we can keep pace? Is anyone on this panel optimistic that we can keep pace?

Zilis: Ten years pessimistic, 20 years optimistic. And one framing that’s been really helpful—it’s actually kind of a scary framing—is that a lot of the technologies that are coming to bear are amplifications of whatever cognitive skills you already have. That means wonderful things for a lot of us in the room, because it means we’re going to find these other appendages that let us outsource the things we don’t want to do and focus on the things we do want to do. But if you don’t have that cognitive skill base, that part of the population is at extreme risk right now.

Charlie Oliver: Hi, I’m Charlie. Thank you for this great conversation. This is wonderful. But here’s the thing, I’m a little discouraged and I want to know what you guys think—and I mean everyone in the room at some point, if you can come up to me and tell me your thoughts on this. I just launched an initiative a couple of months ago called Tech 2025. I’m extremely passionate about the idea of helping average, ordinary people understand what the hell is going on with this technology. Now, we have to do that on the policy level because we have enormous hurdles that we have to get over—

Sherman: What’s your point?

Oliver: Okay, I’m going to get there. And we have to do that on a sort of personal level. What do you think—I’m working with NYU and I’m getting students in the room and I’m getting people of all generations in the room. What’s the commonality, what’s the common thread by which we can educate the young—meaning the college kids that I have—as well as the 50-year-olds, the baby boomers who are also coming to these workshops? Where do we begin there? What’s the plan? What are your ideas?

Sherman: Well, that’s a different panel. That is—and I would like to be at that panel because it’s an absolutely gigantic and relevant question, which we’re not going to be able to do anything with in the time that remains.

Zilis: There’s a two-sentence answer, though. As it relates to this category of technology, we’re actually very lucky, right? If you look at a lot of the innovations that happened before—okay, semiconductors, databases got faster—how do you understand that? The interface of a lot of these technologies is very human, because it presents itself in terms of intelligence. And so actually exposing people to real applications of this is a really good way to start to wrap your head around it.

Sherman: Okay, one more question and then we’re probably turning into toast.

A: So I’d like to agree with the assessment: 10-year problem, 20-year solution. And I think part of the solution set, in terms of creating new jobs, has to do with the ability of VR and AR to accelerate the learning process, and with humans becoming enabled to apply AI through more rapid training—embedding the dyad of AI and humans into new jobs. And I think there are going to be plenty of new jobs in 20 years. It’s the 10-year time horizon that’s scary.

Rossi: Can I make my point? [LAUGHTER] I just wanted to say that I think AI can also help in that process—in accelerating it and in giving people the right skills. Not just giving these nanodegrees, if you want to use the Udacity kind of terminology, but helping people understand what is the right skill at the right moment during their life, to be reskilled.

Sherman: Does anyone else on the panel have anything pertinent?

Hodjat: Just real quick, I think it’s unfair to exclusively lay the blame on technologists.

Sherman: Why not?

Hodjat: Let’s do some soul searching here. If you guys in your industries had this technology that could do the job of five people better than those five people, let’s not kid ourselves. You would use that technology. Right? So I just want to put that out there and say that, you know, we’re not to blame here. It’s something we have to as a society collectively look into.

Sherman: It’s kind of like heroin, right?

Hodjat: Absolutely. It’s very addictive. It’s like Facebook. [LAUGHTER]

Patel: I actually think there are so many problems to solve. We’re worried about the first-order impact of very straightforward replacement of relatively simple and repeatable tasks. I think almost everyone would like to be freed of those and given the opportunity to actually go work on the next problem, and that’s what we should be encouraging.

Sherman: Okay, Paul wants the final word.

Daugherty: Yeah, the only thing I’d say is I think we have to be careful of optimism about technology leading to a backlash and the wrong societal outcome. So it’s great to be optimistic, but we have to do more to get hands-on—and not just the technology industry but the business community—in embracing the technology to get at the root of some of these issues. Because talking about optimism isn’t going to solve the problems if we don’t bring the solutions in.

Zilis: In conclusion, AI is like heroin, is that where you left us? [LAUGHTER]

Sherman: So in conclusion, our species has a long track record of creating whatever it is possible to create. We have a long track record as a species of consuming whatever seems good to consume. And our track record of appropriately balancing the long term against the short term in that consumption, which I believe is your point, is frankly terrible. And I would urge everyone who thinks about AI to ask themselves when was the last time a species of inferior intelligence prospered in the company of a species with superior intelligence. And the answer is, well, cats do really well with us, you know? [LAUGHTER] But lions, not so much. You know, elephants aren’t doing so well.

So I would just consider the possibility that there’s something inevitable going on here. And I’d recommend three things to you, if anyone’s interested in pursuing this line of thought. The best book ever written on technology that I know of is Richard Rhodes’s “The Making of the Atomic Bomb.” I regret that it’s about 1,000 pages long, but you’ll love it. It’s the best journalism about technology ever done. And one of the points that Rhodes makes is that, you know, if we can, we will. And I urge everyone to think about everything, from Facebook to heroin to AI, in the context of the nuclear bomb. It’s a measure of humans’ actual capability to create versus our ability to manage and control. The two are not in alignment.

I’d also recommend, for those of you interested in the Turing thing, a book called “The Most Human Human,” about a human competitor in a Turing test.

And the last thing, there’s a lovely Errol Morris movie called, I believe it’s called “Fast, Cheap, and Out of Control.” It’s sort of an amalgam of three or four different things, one of which is robotics. And it’s just super interesting and if you follow that line of thought, it’ll take you to great places.

So I would like to thank the panel. I hope we’re all sobered [LAUGHTER] and ready for an exciting day at Techonomy. Thank you much.

[APPLAUSE]

Transcription by RA Fisher Ink