Session Description: Unmanned aircraft of all sizes, self-driving cars, ships, and robots—all are moving quickly toward autonomous mobility. But it’s not just about smart systems and sensors. What policies, changes in social attitude, and other advances are needed so autonomous machines can seamlessly navigate the same landscape we do?

Ross: We’ve got some great speakers to talk about how we get to a system where there is true, seamless, autonomous mobility. So I’m going to introduce Steven Levy, the editor-in-chief of Backchannel. He is going to moderate this for us and he will introduce the panelists.

Levy: Great, thank you, Simone. I’m excited about this, too. I’ve been looking a lot at the space, particularly the autonomous cars, the self-driving cars, but I think it’s a great idea to talk about the larger picture of autonomous mobility that includes other things besides cars that operate on their own and get around in our world. We’re going to have a lot of that in the world.

Before I go to the panelists and introduce them, I just want a little poll here. How many people here in the audience have actually been in—as a passenger, not a driver, certainly—a car that drives itself? And for the purposes of this poll, I’ll include the Tesla Autopilot. And we’ve got a bunch there. And how many don’t ever want to? Okay, so you’re all into it there. And my panel, I’m sure, is going to offer you a lot of insights about what we need to do to get to that future, and then a little bit about what happens when we do get to the future there.

The panel, from my far right: Mark Bartolomeo. He is the vice president of the Internet of Things at Verizon.

Melissa Cefkin is an anthropologist at Nissan.

Doug Davis of Intel; he is the vice president and general manager of the Automated Driving Group at Intel, which has a very big interest in this.

Dyan Gibbens is the CEO and founder of Trumbull Unmanned. She’ll tell us what that is; it has a lot to do with drones.

And finally and certainly not least, Chris Urmson, who is the founder and CEO of Aurora Innovation. I met him when he was involved with X or Google X or Alphabet X—

Urmson: Something like that.

Levy: Yes, or as Apple would say, ten, in their self-driving car project there. So I want to ask each of you two things here. One, just say maybe a couple sentences about what you do at the place where you do it. And then also to answer the question, when does this happen?

Bartolomeo: When does what happen?

Levy: So when do we achieve the singularity of autonomous mobility, when our cars drive themselves and the boats and the planes and the trucks?

Bartolomeo: So my role at Verizon is I’m responsible for all of our IoT solutions, which are primarily addressing the industrial sectors and businesses—healthcare, fleet management, telematics, a lot of the transportation and energy management types of businesses. And you know, we see transportation as probably the place where we’ll have the biggest impact on autonomous mobility, although I know we’re going to talk about robotics and things like that today.

We got involved initially in telematics back in 1997 with the 1997 Cadillac, when OnStar was being launched by General Motors, and we’ve stayed very involved in the automotive industry. We have about 6 million vehicles on the network today and about a million fleet vehicles that we manage. From a timing standpoint, we think this will be a very serial, evolutionary type of approach. We’ll see different types of autonomy taking place, we’ll see certain industries and markets moving faster. We think we’ll see UAVs [unmanned aerial vehicles] lead a lot of that. We think there are some really well-defined business cases that support UAVs.

But in terms of full autonomous vehicles, by 2030 or 2035, maybe 30 percent of the vehicles being sold will be fully autonomous. They probably would only account for at most 10 to 20 percent of miles driven. So we’re going to see this 3 percent adoption, then 5 and 10 and 20 percent adoption, over a very long period of time.

Levy: Interesting, okay, so keeping that in mind, you were talking really mid-century for the singularity? Okay. Melissa?

Cefkin: I’m happy to be here and to be a part of this conversation. I lead a small group of social scientists focused on how the world will engage with autonomous vehicles from the standpoint of the everyday, average person who might encounter these cars on the road. So you asked the question about how many people had been inside of them, or some degree of that, with the Tesla or something else.

What we look at is all the people who don’t even know, necessarily, that they might be interacting with them, engaged with them, as a starting point, to unpack the question of what will it mean to have more of these increasingly autonomous systems moving about and occupying our world, and how does it change or address how we live and what we’re trying to accomplish. And currently, we use that work at Nissan to inform the system design that can help us shape how these vehicles, the cars themselves, should move and maybe adapt as they move around the world. Nissan is a global car company in the mass market and we’re really committed to making vehicles that are going to be meaningful and adaptable in the variety of markets that we serve, which is many, many places. And then we also imagine how the vehicles themselves might need to change, with additional signaling, and what possibilities for services and all there will be.

Like Mark said, I think that we also see a sort of gradual projection in terms of the shift and it’s not going to be all or nothing. It’s not right away, tomorrow, that we’re going to see the more fully autonomous [transportation]. So we will see some pockets of use for fully autonomous vehicles by about 2026 and it will be a gradual development to get to that, and then on from there.

Levy: Okay.

Davis: So from my standpoint, my team has been developing primarily the computing that’s necessary for these kinds of autonomous vehicles. But we recently also acquired a company called Mobileye that gives us the ability to both understand more about the environment, how to do that very efficiently and combine that with the computing or the decision-making that is necessary within the vehicle.

But the other part of my role has been connecting the other parts of Intel to support the ecosystem for these kinds of devices. They need to be connected, right? And so we provide that connectivity; the edge of the network is going to play a really important role in deploying things like maps or providing us content when we’re just passengers in these vehicles and we can use that time more productively. And then of course, everything that happens in the data center to do calculations, to refine models, to be able to support the fleets of vehicles out on the road. So from an Intel perspective, we really think about that full breadth, from the vehicle to the data center.

Levy: Why was buying Mobileye important to this since, obviously, there was momentum in the field on its own? That was a substantial investment you made in that company. Why did you feel that you guys had to be in that one?

Davis: Well, certainly they bring a significant amount of expertise in this space. They’ve been doing driver-assist technologies for many, many years. They’ve been in business for 19 years now. They’re deployed in millions of vehicles. And so it gives us the ability to put together both the capabilities to understand the environment around the vehicle as well as to be able to calculate trajectory.

I tend to agree with Mark’s and Melissa’s comments in terms of when we will see the majority of vehicles. I think we’re going to be talking in the 2025 to 2030 range. But I’m also one of those that’s advocating for the industry to move fast. We can’t do what we did with airbags. It took 50 years for that technology to become pervasive in cars and to me, this has got to happen much, much faster because it will save lives. And so I think the imperative for all of us is to make it move faster than the automotive industry’s ever moved before.

Levy: Great. Okay, Dyan?

Gibbens: Good morning, I’m Dyan Gibbens. I lead Trumbull Unmanned; we’re a Forbes Top 25 veteran-founded company and we fly drones in challenging and austere environments to collect and analyze data for the energy sector. We fly and collect data and we analyze that data both onshore and offshore, upstream, midstream, and downstream, and we focus on environmental and industrial sustainability. So that’s what we do as a company. We’re Houston-based. And with respect to autonomy, I think it depends on how we define autonomy. I think that was brought up a couple times. And how we move forward with that. So who here thinks our usage of autonomy is going to decrease over time? Okay, no one’s hand is raised.

[LAUGHTER]

And then who thinks we’re going to use fewer unmanned systems over time? No. And so it’s that evolution of how we ensure that we’re progressing autonomy with digital ethics in mind, how we’re progressing autonomy with the greater good in mind, and that we don’t let certain tools and technology fall into the hands of nefarious actors. I’m just going to throw out some big things early on, but I look forward to this conversation.

Levy: Great.

Urmson: So I lead Aurora Innovation; we’re a company building an automated driver, effectively. So we want to focus on doing what we think we can do really well, which is the software and the systems to drive the car, and then work with partners who know how to build the hardware, build the vehicles, and know how to bring it to market and help see the safety benefits and the other consumer benefits that matter. And this is why my team and I are so passionate about this.

When’s it going to happen? I think you can start to see it today. So many people have been in cars and you can buy a production Tesla today, which has a driver-assistance capability. You can also go to fenced-off places where you can ride in a driverless shuttle at low speed and do that safely. And what’s going to happen is the application domains, where you can get in a vehicle that drives itself, are going to increase over time as we get safer and better. And I agree with Doug that we have to move this as quickly as possible because, unlike airbags—where really the only benefit is saving you in a horrible event so you don’t really get to experience the upside of it on a daily basis—with self-driving technology you’re going to see that upside every day. And so we get all the safety benefits along with the fact that you can sit in the car and do what it is you want to do on your drive in the morning.

Levy: So Chris, you have the disadvantage of having worked with me, so I know the questions to pin you down with. When I did my last dive into this, it was almost two years ago, and I asked you when the cars would be able to handle the full range of tasks within driving—level four, it’s called—and you said when your son was going to get a driver’s license.

Urmson: Two years from now.

Levy: Yes. Do you still think that?

Urmson: Yes, I think that—and again, I don’t mean to parse words too much—but I think that you’ll be able to go places in a driverless car within the next two years. Meaningful places. I don’t think it’ll be ubiquitous. The U.S. automotive fleet turns over about every fifteen years, so even if every car sold starting this year were fully self-driving, it would take fifteen years to get to full penetration. And we’re nowhere near that. But I think there will be places in the U.S. [and] in the rest of the world within the next two years where you’ll be able to call a driverless car. It’ll come pick you up and take you where you want to go.

Levy: So between now and that time, there’s a number of obstacles that have to be surmounted. One of them—though certainly not in this audience—is the acceptance of that. And Melissa’s done a lot of looking into that. There are a lot of people who basically say you can pry my steering wheel from my cold, dead hands and whatever.

[LAUGHTER]

But Melissa, I want you specifically—and other people can talk about this as well—to talk about the acceptance of this. The people who aren’t plugged into that, who aren’t believers in technology, who hear about this and might be intimidated: what do they think?

Cefkin: Yes, personally, I’m pleased that there are people out there that are reticent and uncertain because I think it exercises all of us to remind ourselves to ask, what are we really up to? Why are we doing this? It’s not good enough to just say, because we can, it’d be really cool to have these vehicles that can move by themselves and do these things. There needs to be real value. We’ve already brought up safety. We have the opportunity to improve environmental things on many fronts, both in the day-to-day driving task, the values that come when you have a car that behaves a little better than most of us as drivers, and at the larger scale, as the shift in the fleet begins to merge with electrification and things like that. But the fact that there are people who are reticent is what exercises us and reminds us that, I think, at the end of the day the real goal is not about those technologies themselves. They are a vehicle to our lives and it’s really about how we’re making our own lives more autonomous or more capable.

But I would say, you guys probably see the same press. There’s been some recent press about declining interest in autonomous vehicles and declining trust. People are saying, why should I ever do that, what’s the problem with driving? And some of the figures that we’ve seen suggest that by 2022—or 2025, I hope I have these years right—we would maybe have only about 600,000 autonomous vehicles in the fleet. But it will increase exponentially—

Levy: And they will all be tested by these other companies that are riding around Mountain View and Phoenix now?

Cefkin: Yes, exactly, the testing. What we focus on is, what is that daily experience to make it something comfortable and adaptable? I actually like to think of the acceptance problem not as “what do we do to get them to want it?” but “what do we need to do so that it makes sense in the worlds that people live in?” We look at acceptance from the standpoint of, as these vehicles are moving about in worlds, how do they integrate seamlessly into the variety of different kinds of worlds that we live in? And in the studies that I lead, we do a lot of empirical studies of what happens on the road today. We learn that how traffic moves and what people do on the streets and expect varies considerably from place to place. It can be the difference between being on a college campus and being two blocks away in a downtown area. How fast is the traffic moving? What is the etiquette? Who yields to whom? So these are the kinds of things that people will have that very mundane experience about. So I think from the acceptance side, again, it’s not only what can we do to convince people, but it’s also what do we need to do to design appropriate behavior for the vehicles.

Levy: Chris, you’re nodding. You’ve had a lot of experience with testing out there. What have you felt about that? What’s the reaction that you get from the millions of miles of projects that you’ve been involved with and experienced?

Urmson: I actually think it’s a very subtle question because I think there’s the question of, as a society, how do we want to bring this technology into existence and how do we make sure we realize the benefits of it and not end up with more empty vehicles driving around, clogging up the roads and more pollution and whatnot as a result of that. There’s a bunch of social qualities. And then there’s how do we get people to accept it? Should they accept it and do they accept it?

And the second one is actually the one that I don’t worry about at all. So many times, I’ve taken people that have come in incredibly skeptical, whether they be elderly, whether they be just technophobic, and they come in and say, “Geez, you’ve got some smoke and mirrors here; this is terrible, I’m never going to do it.” And we get them into the car and then go take it on the freeway and five minutes in, they’re like, “Oh, it just drives itself?” It’s a self-driving car, what exactly did you expect?

[LAUGHTER]

And you know, ten minutes in, they’re like, “Oh, it’s actually pretty good.” And you leave them for an hour with it, they’re like, “Oh, it’s better than me, I want this.” And I’ve talked about this from my days at Google. We had one guy who came in; he drives a Porsche and so he’s a driver’s driver. And he told us upfront that this was just dumb, right? Why would we be wanting to take driving away from people? And he took the vehicle for a week, came back at the end of it, and said we couldn’t have it back. He said, I get it. For one, people are terrible drivers, and he observed that watching the display of what other vehicles are doing. And the other was he realized this gave driving back to people, that for most of the time, even if you like driving, you actually don’t enjoy it, because you’ve sat on the 101 [freeway] here in the Bay Area, or pick your local commute, and it’s not actually fun. What you want to be doing is checking your phone, or reading a book, or having a nap.

Bartolomeo: I think that’s what a lot of people are doing.

[LAUGHTER]

Urmson: Having seen people on the 101, it’s a serious problem, yes. And so if you can let them just do what they want to do and not have to enforce another prohibition, this time around technology, then they’re going to accept the technology. And then as a society, we’ll get from that the safety benefits and we’ll get from that the reductions in congestion and the environmental benefits. This is what I think is really compelling about it as a product.

Bartolomeo: Steve, I think Doug brought up the point that I think is really critical about getting acceptance. It’s not just about the vehicle; it’s about all of these other assets and things that we’re involved in and have to do to drive a vehicle. Don’t we need to see the autonomous vehicle come together with more of a sharing environment? And could we communicate to people that you don’t actually need to own a vehicle any longer, because you can have an autonomous shared vehicle? It’s a better performing asset. You don’t necessarily have to have insurance except by the mile. You could have immersive media experiences.

[LAUGHTER]

You could live farther away from where you work because it’s a better place to live, and you don’t have to worry about the commute and the drive. So how do we get all of this integrated in with this autonomous experience? Because I think the autonomous vehicle just being talked about by itself is probably part of why this isn’t real compelling to some of the people who might say, “I want one.” We have to bring all those things together.

Levy: You bring up an interesting point. I heard some pushback about autonomous vehicles—you mentioned some of it; your commute could be longer because you could spend the time elsewhere. Some people are worried that because of this there’ll be more cars, there’ll be more people driving around because people will be freed from worrying about how long they spend in the car and, environmentally, it might be a disaster. Anyone have an answer to that?

Bartolomeo: So one of the things that we’ve been looking at is, will the miles driven increase or decrease, will we save fuel? And I think the facts point—although Melissa may have more data on this—to the miles driven actually increasing, because we’ll have people become passengers or users of autonomous vehicles who aren’t today, [like] the elderly, the blind, disabled, lower-income people who will start using those vehicles. But also with autonomy, we’ll have better management of the vehicle and things like platooning. We’re going to get more efficiency per lane, so we will get less congestion even though, potentially, we’ll have more miles driven.

Levy: Let’s talk about the benefits later on. Melissa, did you want to say something?

Cefkin: I don’t have any figures right here, but one projection is that we would see an increase of congestion in the early years. But again, as that optimization and efficiencies work out more and people adapt to the best ways of using them, it could shift. But I think the other thing to keep in mind, of course, is that when people are traveling, they’re usually not traveling just to travel; they’re trying to get from one place to another for a purpose. And that doesn’t necessarily change. You still have to get to your meeting on time and those sorts of things. So yes, you could move farther away and have to get up that much earlier and sleep in the car and those sorts of things, but the goal isn’t about the time in the vehicle only. Although we can do more with that, potentially, and hopefully when we do more with that, we don’t have a failure of imagination and get rid of all the wonderful things about mindless travel through time and space that we get by being in a car. But people are still living their lives and they’ll need to get to the places they need to get to and meet with people and be running late and all those things anyway.

Levy: All right. You’re nodding, Dyan, you agree with that?

Gibbens: I think it’s a challenge that is not going to go away. And I think we just need to do more in this sense, and so once we have more autonomous cars, we’ll understand what the data is, and we can model these things very well and, I think, prepare for those. And so, for me, I want to be part of this progression and I think that we need to work together on that. Whether it’s autonomous cars or, for me, unmanned systems and drones, we want to have engineers, we want to have operators, we want to have customers, because if we don’t, we’re going to have a suboptimal solution every time.

Davis: Well, if I can add too, I think there’s always this desire to pick the absolute. At what point in time will every car be autonomous? At what point in time will we achieve this particular point? And I think Melissa’s comments were well said. We’re going to see a range of autonomous things—that’s at least how we’ve been describing it—and there will be periods of adaptation, right? There may be more miles driven in the short term but I keep reminding people, your average car’s used 4 percent of the time. And so over time, I think we’re going to see much higher utilization of vehicles and fewer miles driven but there’s going to be transition periods. We’ll see the same thing in unmanned aerial vehicles, right? There’s a lot of consternation around how you manage these things, but over time we’re going to see some amazing new services and capabilities that come from that. But I think where we’re a little uncomfortable is that transition period.

Levy: Another thing we’re uncomfortable about is injuries, accidents, fatalities that would be attributed to technology as opposed to a driver—even a drunk driver, right? Obviously there’s a huge number of serious accidents and fatalities that are due to driver error—most of them, the vast majority—yet a single one attributed to an autonomous vehicle draws enormous attention. You saw what happened with the Tesla thing in Florida. It wasn’t even clear what went wrong there, and it caused a huge outcry. And there’s this thing called the trolley problem, which is sort of the bugaboo there, and I know, Doug, you have some thoughts about that. Tell us what the trolley problem is, for those who don’t know it, and your thoughts on that.

Davis: Well, the trolley problem—and there are others of these online if you go find them—is kind of the test that says, if you have a runaway trolley, it has two choices, two tracks it can take. Does it run over this person or that person, and which do you choose?

Levy: Well, one could be like a bunch of Boy Scouts and another could be a Silicon Valley venture capitalist, right?

[LAUGHTER]

Davis: Yes. Or there are others, you know, there’s a little girl crossing the street and there’s a grandmother that steps out.

Do you have a bias against age? Those kinds of tests will all come back. From my standpoint, and I’ve had this conversation with some of the OEMs in the industry as well, there’s a whole domain around functional safety. And the idea around functional safety is, yes, something can happen to the system that would cause it to have a failure, but you design the system in such a way that it can fail operationally. In other words, the vehicle can get to the side of the road. The trolley does have another backup capability to be able to stop before it runs over that venture capitalist. And so it’s much more about designing the system to be able to deal with those random kinds of situations as opposed to saying we’re going to have these runaway vehicles. I think that’s really important.
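
To make the fail-operational idea concrete, here is a minimal sketch of a degraded-mode supervisor. The states, checks, and transitions are invented for illustration; this is not Intel’s or any OEM’s actual design:

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()       # full autonomy available
    DEGRADED = auto()      # a subsystem failed; backup channel drives a pull-over
    MINIMAL_RISK = auto()  # stopped at the roadside, hazards on

def supervise(mode: Mode, primary_ok: bool, backup_ok: bool) -> Mode:
    """One step of a fail-operational policy: a single failure never leaves
    the vehicle without a controlled path to a safe stop."""
    if mode is Mode.NOMINAL and not primary_ok:
        # Primary compute or sensing failed: hand control to the backup
        # channel and limp to the roadside instead of stopping dead in a lane.
        return Mode.DEGRADED if backup_ok else Mode.MINIMAL_RISK
    if mode is Mode.DEGRADED and not backup_ok:
        # Backup is failing too: execute the minimal-risk maneuver now.
        return Mode.MINIMAL_RISK
    return mode

# Example: primary fails while driving; the backup carries the pull-over.
assert supervise(Mode.NOMINAL, primary_ok=False, backup_ok=True) is Mode.DEGRADED
```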

Levy: Well, I think the issue with the trolley problem isn’t so much saying that the car or whatever autonomous vehicle isn’t safe enough, but that it made a decision that a human would be responsible for. And that’s the thing that I think people can’t abide. I mean, obviously, if you flipped it around, if autonomous vehicles were our standard and you proposed handing driving over to people, you would never let them take over, because people couldn’t make the right choice either, right? Go ahead, Melissa.

Cefkin: To add to that, though, I think you’re exactly right that the focus starts to be about the moment that decision was made and what it was based on. And again, it makes me happy that even the engineers and the developers are forced to sort of encounter that: the decisions they’re making in building the systems could have an impact on what happens there. But you know, decision-making for an autonomous vehicle is—Chris, you can help me if I get it wrong—something like 100 times every second. It’s updating constantly, constantly, constantly, so at what point in time was that decision? And then these instances that we use to tell this story, with the trolley problem: think about the categories you just used. Those are social categories. The machine will see small moving objects. And somewhere in the annotation of the computer vision, it will have been told to call those small moving objects kids. It won’t call them Boy Scouts, unless it’s a really good computer vision program that’s been annotated in that kind of way.

[LAUGHTER]

But these are how we see the world. This is what makes it for me, as a human scientist, such a compelling and interesting problem because we can’t help but see the world through the social categories we’ve learned. But we’re teaching the computers that and it’s only ever going to be a very partial education. So they won’t have the category of grandma and kid; it’ll be a slower, taller moving object that we’ve now called grandmother or woman. But it won’t have the soul to have a feeling about that. And I think that’s where it’s so easy to conflate and get excited about the trolley problem because we can’t help but see it in those terms. But to the computer, it’s just not.

Davis: Well, I think the other challenge with the trolley car problem, in particular, is that it is a profound ethical question that, as a human society, we don’t have an answer to. This is not something that was invented three years ago as a thought experiment about self-driving cars; it has been a fundamental ethical problem for hundreds of years. It probably wasn’t trolley cars when it was first thought of. And so from my point of view as a lowly engineer trying to make this technology work, I think it’s not really the right answer to put that problem on the vehicle, right? I think we want to make vehicles that are safer than people, because we’ll see a benefit from that, and we want to see all the other cultural benefits and social benefits of this technology. This is one of the classic cases of almost inventing a problem: the likelihood that you end up in a no-win situation in driving is incredibly low.

I imagine there’s probably nobody in this room who’s had to make the choice between driving over the Boy Scout or the grandmother in their lifetime, and probably doesn’t know anyone who’s ever had to make that choice. And so having a vehicle that is inherently a better defensive driver and avoids those problems, and then brings a more reliable assessment of the underlying physics of what’s going on to the problem, means that you reduce the likelihood of a negative outcome anyway. So it’s one of these things to get wrapped up about, and I think it is valuable in that it causes people who are working on this technology to really think about the implications of what they’re doing, bear that in mind, and carry that responsibility with them. But as a specific question, I think it’s not actually super useful.

Urmson: And I’ll add that Professor Amnon Shashua and Shai Shalev-Shwartz recently published a paper that proposes a mathematical model to define blame for an autonomous vehicle. In other words, this model says that we understand enough about the vehicle, its capabilities, and the physics around it that the model would prevent the driving policy software from ever issuing a command that would cause an accident with a human driver. It doesn’t mean a human driver couldn’t hit an autonomous vehicle, but mathematically we know we can prevent the reverse from happening, and therefore the vehicle would not be to blame for an accident. And so I think there’s an opportunity with the understanding of the technology to be able to create that kind of situation.
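
For reference, the paper Urmson mentions is “On a Formal Model of Safe and Scalable Self-driving Cars” by Shalev-Shwartz, Shammah, and Shashua, which introduces what it calls Responsibility-Sensitive Safety (RSS). Its flavor shows in the safe following distance it derives (notation lightly simplified here): a rear car at speed \(v_r\) behind a front car at speed \(v_f\) stays blameless in a rear-end collision if it always keeps at least

\[
d_{\min} = \left[\, v_r\,\rho + \tfrac{1}{2}\,a_{\max}\,\rho^{2} + \frac{(v_r + \rho\,a_{\max})^{2}}{2\,b_{\min}} - \frac{v_f^{2}}{2\,b_{\max}} \,\right]_{+}
\]

where \(\rho\) is the response time, \(a_{\max}\) the worst-case acceleration during that response, \(b_{\min}\) the rear car’s guaranteed braking, \(b_{\max}\) the front car’s strongest possible braking, and \([x]_{+} = \max(x, 0)\). A planner that never lets the gap fall below such bounds cannot, within the model, be the party that causes the collision.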

Levy: So I want to move a little bit to another thing that we have to deal with, which is regulation and how that works. Dyan, I’m going to start with you because I think in the air, where you are, we’re probably a little ahead of where we are on the road, in boats, and in other things. You deal with regulators all the time, I assume, right? So tell me how much we’ve solved that problem in terms of these aircraft that we’re going to see a lot more of, and what lessons extend from that to the category in general.

Gibbens: Thank you for that question. I’ll start about a year ago, when we were making some of these regulations. Last year, under President Obama, they had the first-ever White House drone workshop and I was part of that, representing the energy sector, and I discussed with all of our clients their top three concerns. The first was inspection, protection, and monitoring of critical infrastructure. The second was safe and cost-effective beyond-line-of-sight flight, so pipelines, corridors. And the third was rapid and immediate response for emergencies. And so at the time, those were each very big, substantial challenges. Fast-forward a year and we’ve had a lot of progress in each of those areas. We still have quite a way to go. And I’ll give you a couple of examples of why I think it’s improving. One is we just had several hurricanes hit our country. My company, Trumbull Unmanned, is based in Houston, and we worked with the FAA, as did several other companies, and I think they granted a couple hundred emergency COA [Certificate of Waiver or Authorization] flights. And we flew several hundred flights during that time frame. We were able to use technology, including autonomous technology, to keep people out of harm’s way immediately and to make operations better, faster, safer, and cheaper.

Another thing I’m excited about is what was just announced last week: the U.S. Department of Transportation unmanned aircraft systems integration pilot program. That is going to focus on the key areas I just mentioned—you know, beyond line of sight, autonomy, and emergency response—working with public and private companies and government on how to move forward with those challenges and opportunities. And one thing I’ll just add there is that we’re talking about autonomy and progression, and I’m seeing some familiar faces in the audience with defense backgrounds, and we need to always consider that with every opportunity, there’s also a threat. And so we need to plan around those with respect to cybersecurity, cyber-resilience, and also data management and user agreements.

Levy: So in the drone space, it’s interesting. You talk about the White House and drones. I guess people have speculated, well, maybe the way to attack the White House would be with a drone there. Where are we on security and those issues?

Gibbens: We’re making progress, but we still have a long way to go. I think each company up here is dealing with that in some respect. Counter-UAS, counter-drone is a big initiative; they just put about half a billion dollars into the Department of Defense for that. It’s not only an individual security issue, it’s a national security issue. And so these are getting, I think, the proper attention from the administration and from the Department of Defense, because these challenges are not going away. As we continue to have more autonomy, we’re going to continue to have more security threats as well.

Levy: So it sounds to me actually that in terms of the air, in terms of drones, we have a fairly enlightened approach. And in different states, we’re seeing things licensed, I think probably more because of the lobbying acumen of some of these companies working with state legislators than any particular enlightenment. Though there was a thing last week from national regulators that seemed to go in the opposite direction. Will the regulatory agencies, the places that put the rules on that, be a hindrance? Mark, you probably know about this.

Bartolomeo: So one of the things that we believe is that through education, with the federal agencies, we can have a very effective, participatory regulatory process. And really what the regulators, the FAA, the FCC, want to know is, can we do this safely? And so they’re looking for participation from the UAV companies, the OEM companies, the network companies to really lay out how we can identify those UAVs: what is the unique serial number being broadcast, how are they being managed and controlled, what type of latency is involved in the interaction with them, how do we do collision avoidance? The issues are very similar across all types of autonomous vehicles. But I think the FAA is just moving a little bit faster, and they’ve put some regulations in place, with Part 107, that are actually facilitating the use of commercial unmanned aircraft. So they’re getting some experience and they’re understanding that it is being done safely. That’s the key thing.
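
As a sketch of the kind of identification Bartolomeo lists, a UAV might periodically broadcast a message along these lines. Every field name here is hypothetical, invented for illustration rather than taken from any FAA specification:

```python
import time
from dataclasses import dataclass

@dataclass
class UAVBroadcastID:
    """Hypothetical identity beacon a UAV broadcasts periodically so that
    regulators and other airspace users can identify and track it."""
    serial_number: str   # unique airframe ID, akin to a tail number
    operator_id: str     # who is accountable for the flight
    lat_deg: float       # current position, degrees
    lon_deg: float
    altitude_m: float    # height above ground level, meters
    sent_at: float       # UTC timestamp; lets receivers bound message latency

beacon = UAVBroadcastID("TRU-000123", "OP-4567", 29.76, -95.37, 120.0, time.time())
print(beacon)
```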

Levy: And I wanted to touch on whether there are currently what seem to be—at least in the short term—intractable obstacles to full autonomy. Are there things that we can’t do yet, where we have to make either scientific breakthroughs or AI breakthroughs in order to achieve, say, full autonomy on the ground? Chris, what do you think about that?

Urmson: Yes, there’s hard work to be done. I don’t think there’s any fundamental science missing at this point—I could be wrong; I’ve been wrong before—but I think the hardest problem that we’re facing is really predicting the next five to ten seconds of what’s going to happen in the world. So when you drive along the road, if you really drove based on where everybody was in this moment, you’d hit an awful lot of things and that would be a bad outcome. And so we have to look at where the other actors are, whether they be pedestrians or cyclists or other vehicles, look at what they’re doing, and then try to understand a little bit about their intent. Is that vehicle about to make a left turn in front of us, or a lane change? Is that pedestrian about to step into the road? And so that’s really the hardest problem left in automated vehicles: getting that so that we don’t just get it right most of the time, which is kind of the canonical machine learning problem. It’s pulling out the rare events to be able to identify those scenarios as well.
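
As a toy illustration of the prediction problem Urmson describes (this is not Aurora’s system; the cues, maneuvers, and weights below are all invented for the sketch), short-horizon intent can be framed as scoring candidate maneuvers from observed cues:

```python
import math

MANEUVERS = ["keep_lane", "left_turn", "lane_change"]

# Hand-set weights per maneuver over three normalized cues:
# (bias, turn_signal_on, lateral_drift, deceleration)
WEIGHTS = {
    "keep_lane":   (2.0, -1.5, -2.0, -0.5),
    "left_turn":   (-1.0, 2.5, 1.0, 1.5),
    "lane_change": (-1.0, 1.0, 2.5, 0.0),
}

def intent_probabilities(turn_signal: float, lateral_drift: float, decel: float) -> dict:
    """Softmax over linear maneuver scores; cues are values in [0, 1]."""
    scores = {}
    for m in MANEUVERS:
        bias, w_sig, w_drift, w_dec = WEIGHTS[m]
        scores[m] = bias + w_sig * turn_signal + w_drift * lateral_drift + w_dec * decel
    top = max(scores.values())  # subtract the max to keep exp() stable
    exps = {m: math.exp(s - top) for m, s in scores.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

# A car signaling, drifting slightly, and braking: left turn dominates.
print(intent_probabilities(turn_signal=1.0, lateral_drift=0.2, decel=0.8))
```

The hard part he points to is the tail: a model like this is easy to make right most of the time, while the rare maneuvers that matter most are exactly the ones underrepresented in the data, so training and evaluation have to deliberately seek them out.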

Levy: It seems to me that the problem in that is like some of the problems we’ve been hearing about in previous panels, about getting chatbots to do a long conversation and things like that. When a self-driving car or really any kind of autonomous vehicle is navigating its environment, it doesn’t understand its environment. It knows what to do, as you say, in the next five or ten seconds, but it doesn’t know what a pedestrian really is, right?

Urmson: Well, and I think that’s actually why automated vehicles are one of the places where you’re seeing progress. We’ve seen robotics go into manufacturing for very repetitive tasks and, again, they don’t really have that kind of fundamental question of what is this thing I’m making and what’s its inherent value. Driving and the UAS space are the next realm of that, where the problems are kind of understandable. We built roads for people, but people were never really meant to move at 70 miles per hour, and so we structured roads in a way that gives drivers lots of signal about what’s coming up and what the world is about to do to them. And so that semi-structured nature of the problem is what makes it the area that we’re going after with technology now.

Cefkin: Just to add to that, I think, as Chris said, the very interaction-rich environments of cities and urban areas are where the challenge remains very serious for ground autonomous vehicles. And on those kinds of things of what it actually sees and knows, again, this is where we bring in some more of the social learning. Think about it: you don’t even pay attention, but you see some pedestrians on the side of the road, and if there are two of them that are taller and two of them that are shorter and they’re clustered, you would probably think family, and they’re going to operate and move in some ways [that are] family-like, right? So in those kinds of predictions you’re already using a category of social understanding. Those are the kinds of things that we can maybe increasingly teach the vehicles, but they won’t always know.

To compound how hard a problem it actually is, and this is where I think there really is some time left in the time frames that we’ve been projecting, what we see when we look really closely and empirically at what’s happening is that these are very complex, multi-agent kinds of problems, which of course the computer systems ultimately should be even better than we are at managing. But whether that car’s going to actually turn, its intent: it’s meanwhile judging somebody else that might be blocking it, and that person is judging somebody else that had something to do with the environment. So it’s this whole chain reaction. Getting those things right, like what matters and what doesn’t in that picture, and working out the models appropriately to be able to accommodate them, might not be fundamental science, but there surely are some areas left to develop out much further.

Davis: The one part that gives me some confidence here is that there is this kind of exponential-reasoning tree that happens, but people are really bad at that stuff.

Cefkin: Yes.

Davis: And somehow we’re able to drive down roads anyway.

Cefkin: We can get better.

Davis: Which means that there’s a hope that we can kind of truncate that space—

Cefkin: But we do that in part by interacting on the road in very subtle ways.

Davis: Yes, absolutely.

Cefkin: There’s the obvious hand waves and those kinds of things, which happen on occasion. That’s not the total thing, but the car becomes kind of our prosthetic device, right? So we communicate things whether we mean to or not. If I’m at a stop and I let up on the brake because I’m going to start to go, I might just think I’m starting to go, but the fact that my car starts to roll a little bit gives other people that message. And so part of what we do is, we’re very good at accommodating in these microinteractions, and there’s a lot of that going on.

Davis: Absolutely.

Levy: Yes, it seems to me we’re going to be in this valley, where the autonomous vehicles, which eventually will be able to communicate with each other and handle those tricky situations better than we do, will also be interacting with human beings, and that kind of communication is going to be the toughest, right? When everything or the vast majority is autonomous, no problem, you’re going to know, and maybe there will even be an algorithm, maybe you’ll even be charged money to dive in and cut some guy out, right? There’ll be some account you could use, that’s possible, right? But in the meantime, when humans are interacting with it, those little, subtle signals, you can’t give those to an autonomous car. Or can you? I don’t know.

Bartolomeo: So Steve, one of the areas—and I’d like to get Melissa’s comment on this also—it seems likely that autonomous mobility will take off where the infrastructure is more prepared to accept it. And I think that’s one of the things we’ve seen with UAS: the infrastructure in the air doesn’t have a lot of these subtleties, and the FAA does have guidelines for temporary flight restrictions and restricted airspace, and that’s well-known and managed. One of the things we haven’t talked about is maritime, harbors and places like that, where we’re seeing it actually taking off right now because they have the ability to put those ships into specific ports without pilots on board. And most of the testing that I’ve seen has been in closed environments like universities and places like that.

Cefkin: I would just add that, yes, certainly where it is now, those sorts of more contained, somewhat more hygienic-y kinds of settings make it more possible. And then the environment in terms of the sensors, but also even just the way that the physical infrastructure disciplines us and guides us and tells us where and how to be; as that shifts too, that will play a role. But again, with the pedestrians, or the favorite case of the bicycles—I mean, they’re shapeshifters, they do everything crazy, they act like pedestrians one moment and cars the next and all that—if we’re going to try to wire all these things up, how readily will they be able to interact in a kind of wired manner? Are they going to change their path based on a moment-to-moment sort of interaction? So I think it improves as it increases, but it doesn’t necessarily solve everything.

Levy: Before I take the questions, I just want to spend a minute or two on what happens when we get to that point that we’re all talking about. I’m wondering, are there going to be effects that are transformative as we saw, say, from the interstate highway system, where it really rewrote the geography there? What are some of the amazing or potentially amazing secondary effects that will come when this future that we’re talking about arrives?

Bartolomeo: So one of the things that I think about is do the automotive OEMs become publishers? Do they own the media experience for the consumer at that point? Do we see this big shift that occurs because now that I’m actually in this vehicle and, assuming the vehicle is fully safe and getting me to where I need to go very efficiently, what is it that I will be doing in that vehicle? And it seems to me that’s the opportunity for that immersive media experience and there will be big battles between the Silicon Valley platforms and the automotive OEMs to sort of win that space.

Levy: So will a car be like a walled garden or an open system?

Davis: Well, we recently worked with Strategy Analytics to look at exactly that thing, you know, ignore the economics of the vehicle itself, right? We could talk a lot about that, but look at everything else that happens around these vehicles over time. They looked at a window further out, when we have a large percentage of these vehicles on the road, more like 2030 to 2050, and they said the positive economic impact is about $7 trillion. And you might think, really, that’s pretty far out in time. But step back and think about what the PC has done. Again, ignore the PC itself and think about everything that’s changed around it and the positive economic impact that has resulted. To me, I think the same is going to happen as we move into an autonomous world as well.

Levy: Dyan, what about millions of autonomous vehicles in the air? How does that change things and bring effects we haven’t seen before?

Gibbens: I think whether it’s an autonomous ground vehicle or air vehicle, two of the biggest factors are safety and efficiency. And I know those are what some consider boring metrics, but I want to save lives and we want to save time. And when we have that, it enables us to work on second- and third-order problems for the next generation. So I think for me, that’s what I get most excited about. We haven’t talked a lot about the next generation, so I think autonomous systems and vehicles, air, ground, and everywhere else, are really exciting the next generation to become data scientists, computer scientists, engineers, and they view this with a clean landscape, without constraints. You know, we have a drone camp every year where they pitch an idea, and I just want people to view these problems and challenges unlike an engineer—like I am, with constraints—you know, where size, weight, cost, and power don’t matter, and they view and solve those problems. So I’m excited to see how these inspire and empower the next generation.

Levy: Great. So let’s take some questions here. I see one over here.

Q: I’m a real estate developer and I’m rooting for you because I think the changes to land use and the quality of our cities is going to be amazing. It would be interesting for you to comment on this issue that I’m dealing with at the Urban Land Institute, the first and last mile, which is essentially we’re building high-density mixed-use projects, maybe a mile and a half from transit and then an employer says, okay, I’m another mile and a half from transit. Our hope is that autonomous vehicles become a means to connect people to transit systems because I don’t know if you saw the article in The Economist a couple of months ago, 83 percent of people that work in Los Angeles County drive to work all alone in one car. And until we change that dynamic, the quality of life in our cities is not going to change. So the impacts on land use are going to be significant and positive and I’m rooting for you to succeed.

Levy: Does anyone here want to comment on that?

Bartolomeo: So the way that I’ve been looking at it, and I think most people have been, is that autonomy really brings us to the forefront of mobility as a service. It’s not just about an autonomous car. Why would I ever want to own an autonomous car? What I really want is fully integrated autonomous mobility as a service that takes care of that last mile and is integrated with my life, that knows exactly where I need to go and when I need to go there. You know, it can take me to the doctor’s office and then come back later and pick me up and do some other things in between. And on that last mile, we are starting to see companies like Deutsche Bahn integrate electric bicycles into the reservation system, so that when you take the train into Munich, you can at the same time reserve that electric bicycle for the last mile as part of the fare. Santa Clara is doing some testing in that same space. So I think that’s a critical piece.

Levy: Great. Okay. Here in the front row?

Abadir: Hi, Sam Abadir from Aspire Ventures. We’re invested in healthcare AI, partnered with Penn Medicine. So I have a kind of health care question for you. In healthcare, arguably, the technology is just as advanced relative to autonomy, you know, in vision, in neural nets but in driving, you’re far advanced in terms of legislation, lexicon, et cetera. I’m kind of curious, what things, looking backwards, should you never have done as an industry? You know, we should never have called something X because that held us back, for example. And what things worked out unexpectedly well as we try and bring your experiences into the health care market?

Urmson: I guess, thinking as an outsider looking at the unmanned aerial vehicle space, I think the consumer things should not have been branded as drones. Drones are things that fly over foreign countries and drop bombs and spy on them. And I think that is probably one of the biggest marketing hits against that space, right? It’s just a very loaded word. Maybe you would know much better than me, but when I think drone, I think Predator, and that’s scary.

Levy: What about you, Dyan, do you use that word?

Gibbens: Sure, I think any technology can be used for good or bad; you know, it’s how we define good and bad, where digital ethics come into play. With respect to the word drone, that word has just sort of taken over the industry. You’ll notice I said both; I typically say unmanned aircraft systems because that’s what the FAA says and that’s what our clients say.

But as far as semantics, that is important. And I think with respect to healthcare, whether it’s autonomous anything, we have to solve the problem for the government and take it to them, and then say here’s what we can do and not only how it benefits my company but how it benefits the industry. That’s been our approach as a company. You know, we requested flight across the country and that was then granted for everyone. And that helped everyone. I do think that we need to be collaborative in order to solve these problems, and it has to enable the whole industry.

Levy: I’d have to say, just as an observer, that I think the people involved in this effort have done a surprisingly good job of taking what literally six or seven years ago most people thought was an insane idea and bringing it into the mainstream, where even skeptics are considering it without having the experience of getting in the car. So I’m surprised at that. I would have thought it would have taken a harder sell or been a more gradual thing.

Davis: Well, the one thing I’d add is a good thing I see in the industry right now, which is a very high sense of concern around things like privacy and security and safety. And what I’ve been watching from a regulatory standpoint is early engagement with the regulatory bodies, to say here’s where we are, here’s what’s possible, here’s how we should work together. In the U.S., at least recently, Department of Transportation Secretary Chao’s AV guidelines are helpful at this stage of development, and so is the House and Senate legislation, again helpful without putting unnecessary regulation in place too early. I think we’re seeing the same thing in your world as well. And so that education and partnership with policymakers, to recognize these are the things that are important and here’s what we’re doing about them as an industry, is really important.

Levy: Okay. Was there a question over there?

Nair: Sanjay, from Edelman. Wondering if you’re also thinking about the opportunity costs of introducing autonomous driving in terms of the impact it has on the ecosystem that depends on driving for its livelihood. Because, I mean, everything you said is perfect and great and makes sense and will be the future, but looking at everything that is happening in the country right now in terms of the divide between haves and have-nots and the unrest that is creating, is the industry thinking about that, and what exactly are we doing to help address it? Because just saying that we’ll upskill everyone is probably not going to be the case. So what’s your point of view? Just curious to know.

Bartolomeo: So we look at this as, is there a dystopian view or a utopian view tied to this, and there’s a little bit of both. When we think about autonomous vehicles, particularly autonomous fleets, we see the opportunity to perform functions where people can’t hire enough drivers today. It’s well known that, you know, truck drivers are difficult to hire; there aren’t enough. So we see certain industries, the ones with those better-defined financial business cases, moving first, and I think at a high level, that’s really how we’re looking at it. And then there’s the safety factor on jobs. Someone mentioned it on an earlier panel today, I think it was John Chambers, when he was talking about the UAVs doing the work in the mines, very dangerous jobs that put people at risk. Can we make those jobs safer, or stop putting human lives at risk, by using UAVs in these very dangerous situations?

Cefkin: I can’t speak for my company fully in terms of the point of view and all the different ways of thinking through those issues, but what I can do, and what I try to do at the very local level with the people that I work with, is, as we think of examples or use cases or proofs of concept and things like that, to always push on this question of why we are thinking autonomy is a good solution or direction here. And if the answer comes back that the only thing is it’s removing a piece of labor, that we could have a driver instead but there’s no other value gained by autonomy, then I at least try to say maybe that’s not the best case for us to be thinking about and trying to work out. Because if you could pay somebody to drive the doctor for a house call—that was one idea; it would be great because doctors could go make house calls—it’s like, well, they could today; doctors can drive themselves. Those are economic decisions; there’s nothing more gained in that. So pushing it further is the kind of thing that I try to exercise among my colleagues.

Levy: Okay, time for one more quick question. Who’s got it?

Audience Member: So I want to come back to this trolley problem and it seems like this technology’s kind of ripe for anticipatory governance. What has been going on in this field to understand the need for transparency in the algorithms and the societal buy-in for those algorithms? And what is the conflict between that and having competition between the companies wanting to keep intellectual property to themselves?

Urmson: So I’ve answered this question a few times in the past, and I think this is one where it is about transparency. At Google—and this may be the case now, I don’t know—when I worked there, we were pretty explicit that the car was going to work hardest to avoid unprotected road users or vulnerable road users, and then after that it would work to avoid moving vehicles, and after that it would work to avoid things that didn’t move through the world. And to your point, that’s kind of the lens it sees the world through. And at that point, as somebody who’s about to use this technology, you can say, hmm, am I okay with that? And if I am, then I’m willing to ride in this vehicle, and if I’m not, then maybe there’s a competitor or some alternative solution that’s going to say, I’m always going to put the safety of the occupant of the vehicle first. And that might be the right choice for them.

And I think the best we can do as engineers and companies is be transparent about some of these very high-level decisions. Then at some point, maybe there’s a role for government to indicate which of these they believe is the right prioritization, but I would be careful about calling that in too early, because the technology doesn’t really exist today. The potential unintended consequences of that, I think, are fairly significant. Let’s let the market speak, and then if we feel like it’s going the wrong way, do what government does and guide it as appropriate.

Levy: Maybe this could be a role model for transparency in algorithms in general that we hold to. My apologies to those who didn’t get an answer to their question; maybe you can find these folks later on. But I want to thank all these panelists for doing a tremendous job of explaining an amazing subject. Thank you all.

[APPLAUSE]