
2018 Conference Report #techonomy2018

Ethics and Responsibility in Modern Technology


Paula Goldman, Terah Lyons, and Catherine Cheney at Techonomy 2018, Monday, November 12, 2018. (Photography by Paul Sakuma Photography)

Speaker

Paula Goldman
Global Lead, Tech and Society Solutions Lab, Omidyar Network

Terah Lyons
Founding Executive Director, Partnership on AI

Moderator

Catherine Cheney
West Coast Correspondent, Devex


Description: Tech tools have become too powerful not to be conceived within an ethical framework and thoughtful awareness of their consequences. We need a new attitude of social responsibility from technologists and companies.

The following transcript has been lightly edited and condensed for ease of reading. 

Speakers: Paula Goldman, Omidyar Network | Terah Lyons, Partnership on AI

Moderator: Catherine Cheney, Devex

(Transcription by RA Fisher Ink)

Cheney: So we wanted to open this session up with a question, and hopefully this is not a new question to most of you in the room. But if it is and you’re a bit stumped, the purpose of this session is to help you answer it. So the question is: is the technology you’re building, or the technology that your company is behind, potentially going to produce some unintended consequences or be used in unexpected ways? And everyone on this session agrees that questions like these need to be asked by more people, in more places, more often. And both Paula, who’s to my far right, and Terah, who’s just beside me, are working on that very question at their organizations. Just to kind of tee up the conversation, first I want to introduce our panelists. I know David mentioned them earlier, but Paula Goldman is the global lead of the Tech and Society Solutions Lab at the Omidyar Network here in Silicon Valley. And Terah Lyons, to my right, is the founding executive director of the Partnership on AI, which brings together big AI companies as well as civil society organizations. And both of them are focused in different ways on how to maximize the benefits and minimize the risks of technology. I know questions like that have come up earlier today and yesterday, and this session is devoted to that very topic.

But just to tee up the stakes of what we’re talking about, I actually want to read a brief excerpt from something David wrote in the issue of Techonomy that you’ve all seen out and about at this conference. So he’s writing about Facebook, and he said, “Never before has one company’s failure had such a devastating impact on the world. Facebook does engender connection, friendship, community building, and user empowerment for billions. But that does not reduce the gravity of the disastrous epidemic of misuse.” So I’m a journalist focused on global development. I’m working on a story right now about how Facebook was really used as a tool for evil in Myanmar. And these are the kinds of consequences that follow when tech companies are reactive versus proactive. So that’s my first question. Can each of you talk a little bit about the risks that you’re most concerned about, and how you’re working to move the industry toward being proactive rather than reactive when it comes to some of those risks? Do you want to start?

Lyons: Sure, happy to. Thanks for having us here today. As was mentioned, I am the executive director of an organization called the Partnership on AI, and we’re a multi-stakeholder initiative that was started roughly a year ago by a group of big tech companies, one of which was Facebook, in addition to several others, including Google, Apple, Microsoft, and IBM, and some civil society organizations, including the MacArthur Foundation and the ACLU. And today, we’re a global consortium of over 80 institutions working on responsible and ethical AI development and deployment. So this question impacts all of the work we do. It’s an expansive question vis-à-vis the work that we’re grappling with, but I think for the Partnership, at the center of everything we do is a theory of change focused on the idea that there’s a special kind of empowerment, and power, frankly, that especially senior-level technologists but also the average engineer, particularly in large tech companies, have in shaping responsible technology outcomes. And they cannot effectively do that alone, without being in conversation with other disciplines and other types of institutions, especially those that represent affected communities, which are traditionally quite missing from the conversation around technology development. So that’s the heart and center of the problem that we’re attempting to solve, and that crosses issues ranging from safety, to fairness and bias questions, to questions associated with the future of the labor market and how automation might impact it. And that’s a global set of questions for our organization, given the expansiveness of all these issues and who they touch all over the world. That’s a high-level summary, but I’m happy to talk more about what we’re up to later, too.

Cheney: We’ll dive into more detail. I mean, one of the questions that I’d love to return to is how do we make this more of a global conversation? And you know, I think all of us would agree that that’s good to do, but that’s also hard to do. So exactly, how have you done it? But we’ll get back to that. For Paula, I actually would love for you to talk about some of the risks you’re focused on. I know at the Partnership on AI, machine intelligence is the main focus and at the Omidyar Network what you’re working on is AI, as well as other threats. In fact, there was a really helpful visual around some of the risks you’re focused on, including bad actors, which we talked about earlier with the Myanmar example. But also user understanding, algorithmic bias, data control. So can you talk about what risks keep you up at night and how are you working to tackle them?

Goldman: Well, I’ll tell you, my almost-three-year-old daughter is very worried about the risk of addiction, because she keeps telling me, “Mommy, no phone. Put down your phone.” But maybe just backing up for a second, for those of you who are not familiar with the Omidyar Network, it was founded by Pierre Omidyar, the founder of eBay. It was very much born in the utopianism of Internet 1.0: use technology to connect people, and amazing things are going to happen. Amazing things have happened, right? But after deploying $1 billion to $1.3 billion into mission-driven startups, we ourselves started having humbling moments in the last few years, and we were like, oh my God, there are all these unintended consequences that really well-meaning people did not foresee. I don’t know how many of you were here for the debate last night, but I think the key difference, in my analysis, is that technology is no longer the underdog, right? For a decade or two, we got away with a sort of, “We’re going to disrupt everything, it’s going to be great.” All of a sudden, we’re among the top companies in the economy, and that comes with a ton of responsibility. And all of a sudden, elections are being disrupted, and there’s fear of disrupting children’s neurological development. And what is slow to catch up is the sense of responsibility in the culture of ethics within companies and the way that they operationalize it. So to your point, one of the things we did this year is we partnered with the Institute for the Future after hearing lots of founders say, “We couldn’t have predicted all the things that happened with our companies.” Well actually, it’s not that hard, you know? And so we produced a—here’s a toolkit. Here are eight risk zones to think about; probably one or two might apply to your company. Here’s where you can put this into the product requirement docs, here’s how you can think about it when you’re shipping stuff.
And the idea is that this becomes standard, just like 20 or 30 years ago it wasn’t standard to red team based on security concerns, now it is. We need to red team for ethics concerns, or responsibility concerns.

Cheney: One of the things I think both of you can comment on is who needs to take responsibility for this, because it’s not just technologists. You’re working on an individual level, you’re working on an institutional level, but you’re not just thinking about tech companies. So Paula, I know you’ve talked about the role investors play here, and I’m sure we have investors in the audience. Terah, drawing on some of your White House experience, I know you believe governments also need to play a role. And I’m sure we have government representation here as well. So can each of you comment on, in terms of what action is needed, who beyond technologists needs to be doing something about these threats?

Lyons: Sure. Well, I think at the heart of any multi-stakeholder initiative is the principle that all sorts of different sectors and voices ought to be involved in deliberations on whatever topic is being approached, and that’s certainly the case at the Partnership on AI. We work with civil society organizations. As you mentioned, we work with industry, and we work closely with academia as well, because the research sector, especially as it relates to AI, is really, really important to a lot of these discussions. And certainly, it’s also a big part of our mission to advance public understanding of the work that we’re doing, but also the work that’s happening in the field writ large. A big part of that conversation needs to be with the policymaking ecosystem. In my former work on some of these topics in the Obama Administration, situated in the Office of Science and Technology Policy, it was an understanding of that administration, certainly over almost all eight years of it, from the very beginning, that one of the biggest deficiencies of government policymaking today is that there are not people with technical capacity in the rooms where decisions are being made about the future of a lot of policy issues, which implicitly bleed into questions associated with technology, if they don’t confront them explicitly. So we worked on building infrastructure to try to increase that capacity in government. We also worked on bridge-building between Silicon Valley and Washington, for example, at least here in the US, in an attempt to alleviate some of those challenges as well. And it’s interesting: I actually moved to San Francisco about a year and a half ago to take this job, and when I was working in the White House I felt like there was this sort of cavernous gap between tech and government, insofar as the way in which they culturally understood each other.
And looking from the opposite direction, sort of westward-looking east, I find that that gap is even wider, which is really fascinating.

Cheney: And problematic.

Lyons: Right. Very problematic. So there’s a long way to go, I think, to get the technology ecosystem to better understand the constraints and opportunities of government, and it’s certainly acknowledged in all the work an organization like the Partnership is doing that there’s a role for government to play in policymaking and also in education. There’s a role for standards bodies to play in generating practice and promulgating it effectively. And there’s this sort of mushy middle, which may be situated amongst or above or around that, which is a center of best practice and deliberation, as an industry and as a multisector community, in and around how we determine which practices will feed the implementation that then becomes later questions around policy and standards generation. So that’s really where our work is focused.

Cheney: And I’m curious to hear—it’s really helpful to hear how both of your backgrounds play into your perspective. So for you Paula, I know previously you worked in startups and emerging markets largely. And that kind of informs your perspective that investors need to step up. So can you expand on that a little bit?

Goldman: Yes. So having now spent the last year, year and a half, talking to a lot of disparate folks within tech who are concerned about the problem, but not necessarily finding a bigger umbrella, I think one of the places where I haven’t seen people step up as much is on the investing side, curiously. Actually, I think from the VC firms there’s a sort of bubbling up of concern, but it’s actually their LPs that are more concerned, which is super interesting. It hasn’t yet translated into action. I think people are still seeing this as, well, maybe a PR risk, something that won’t matter until these companies are at a later stage, or something like that. And they’re not yet seeing it as business risk, and it is business risk. It’s about retaining talent. There’s regulatory risk obviously on the table, right? And I think that shift is something that we dramatically need to accelerate.

Cheney: I love that. That’s a good takeaway, that this needs to be framed as business risk. In fact, I’ll ask one more quick question and then I want to bring in the audience. So if you have questions in mind, please get them ready. When it comes to the biggest challenges you’re having, we were talking a little bit before we took the stage about how, in some ways, it’s more of a branding and marketing challenge. People hear “ethics” and they go, “That’s not my focus.” Can you just talk about your biggest challenge? And I want to say that before we go to questions because I’d encourage not just questions but comments and thought partnership, if other people are working through some similar challenges. So what are your big challenges? And then we’ll go to some questions or comments.

Goldman: Well, maybe I’ll build on what you were saying. I think marketing this topic is a really big challenge. I’m a movement builder by trade. I think about how to build aspirational identities, and I’ll tell you, you mention the word ethics and people are like, “I’d like to walk out of this room right now.” But luckily you’re all still in this room, so thank you. Or they think it’s someone else’s job. And so, I don’t know how many of you know DJ Patil, the former US chief data scientist. He recently coined the phrase, “Move purposely and fix things,” and he printed stickers around it. I was like, yes, that’s really perfect, and we started promoting that. But I would ask for all of your help. What is the aspirational identity? What would we call ourselves? We wouldn’t call ourselves ethical technologists. What’s the cool thing to be that would enable this to become aspirational and mainstream?

Lyons: I agree, that’s a tough question. Plus one to that. I think evolving an understanding that ethics isn’t a plugin or a set of audits you can run at the end of a process is really important. It needs to be considered from the very outset of a technology development process. So I think that’s a mindset shift that definitely needs to happen. I also just think, especially as it relates to my work, that trust building is probably among our biggest challenges. That’s really, again, at the center of what PAI is trying to do right now. It is not necessarily the operating mode of any corporation, regardless of whether or not they’re focused on traditional tech, to be permissive of vulnerability when they encounter a problem and need to bring it to a larger community to grapple with. And that especially needs to happen in technology, where the issues are so asymmetric, in that one company can build a platform that affects millions, if not billions, of people, and there’s a lot of sociology that has to go into thinking about that. And some organizations may find themselves ill-equipped to really have a considered understanding of some of those impacts that you talked about at the very beginning of the session. So I think it’s about making sure that there’s a culture built in the technology community, and in places like the Partnership and otherwise, where organizations feel like they can say, “Oh man, this is a really tough problem we’re grappling with. We really need some help with it before it becomes explosively problematic.” And making sure that you’re bringing other voices into those conversations.

Cheney: And it does sound like it’s not just about, as we were discussing earlier, changing practice. It’s about changing professional identity, which is no small task. I want to bring in some questions. Can we bring the lights up, just so we can see who’s in the room? I’m having a hard time seeing some of you, but if you could just say who you are and what brings you here, and then join the conversation.

Audience 1: Hi, my name’s Jack and I’m from Riveted Labs. And I wanted to get your perspective on the idea of fiduciary responsibilities. Facebook, as we heard yesterday, was able to make some changes in pursuit of this goal, and that’s partly because Mark Zuckerberg controls so much of the company. Other companies may have the best intentions and want to move in this direction, but they might be afraid of activist investors, they might be afraid for their share price or their jobs. So what can be done to give those companies the space to make those right decisions?

Cheney: Great question.

Lyons: I’ll maybe just start by saying, and you probably have thoughts on this too, Paula, that I think fiduciary obligations as a change lever are a really high-potential area for impact. Optimizing constantly for profit generation is, in some senses, the fundament of what has borne a lot of the business models that we’ve seen evolve in the technology industry and in other places. Thinking creatively about that fundamental layer, I think, is really, really important to incentivizing the kind of obligations and engagement that was discussed earlier around investors and so on. So I think that’s a really key piece of the puzzle.

Goldman: I would say two things. One, I would harken back to an earlier comment I made about reframing this as business risk. That shift hasn’t happened yet, but there’s a lot of evidence behind it. And then second, I would say, having come from the world of investing, and particularly having kind of helped with the impact investing and mission-driven investing world: you talk about activist investors, but there’s also a counterweight of investors who really, really care about this stuff. And I know some of them are working on indexes around best practice in tech companies and trying to throw their weight around that, and also becoming activists in a positive sense. So that’s a trend I would watch for in this coming year.

Cheney: Yes, great. That was a great question, thank you. I see one right there.

Audience 2: Hi, Jody Westby. So to me, I see this as Silicon Valley missed Corporate Governance 101. And if you look at Facebook, if you look at Uber, if you look at some of these instances, they flat out don’t get corporate governance and what that means. They just grew up and became big companies but still act like little kids. And the second point is on the investor side. So I launched In-Q-Tel for the CIA. I worked with venture capitalists for several years and said, “You know, you need me. You’re investing in these companies and you don’t know if they’re going to have illegal consequences and [INDISCERNIBLE 0:18:18.7] consequences. You need to think about that.” Do you know what they all said, uniformly? Every single one of them: “By the time that occurs, we’ll have our money and we’ll be out.” And so, until that changes on the investor side, and until Silicon Valley realizes they can’t just do an IPO, take people’s money, have shareholders, and not have corporate governance, nothing’s going to change.

Cheney: I think you both agree and you’re working on the change, but what more is needed?

Goldman: I’m a little more optimistic. I do think the investment lever is a hard one. But look, in the last year, one of the trends that has really surprised us is the extent to which employees have been really active on this, right? And in a hugely tight labor market, companies are really worried, and rightly so, and have to develop very sophisticated ways of involving employees in these decisions. And that’s been a force for change. I think that starts to bubble up to board decisions, and those board decisions start to bubble up on investor calls, right? We’ve seen companies talking about these crises on a number of quarterly earnings calls. So I don’t disagree with your structural analysis: yes, if the money at the top doesn’t care, it’s not going to change. But I’m more sanguine that there are forces starting to affect the money at the top.

Cheney: We only have about a minute left, and I wish we could bring in more questions, but hopefully we can continue the conversation here at Techonomy and beyond. But I want to give both of you the chance to share any final words, something we didn’t get to, or a call to action. I think—I love the point Paula made earlier about this new mantra for Silicon Valley. Any final words about how we get there, or calls to action for this group?

Lyons: I mean, I would just springboard off of Paula’s most recent comment in saying that it’s really easy to underestimate the impact that a single individual can have on the tone of public and industry-wide conversation on these issues. But we have seen it proven over and over again in the past 18 months or so, and it really has started with a very small group of individual change makers. That’s sort of the thesis behind PAI as well. It started with individual voices saying something like this was important. So I just think that’s really important to remember.

Goldman: Yes, and I would echo that. And I would say to everyone in this room: all of us are in positions where we can influence this today, and we have a real responsibility to do so. I’m harkening back to sometime late last year, when there was a conference at Harvard where an engineer presented an algorithm that was about helping police detect gang violence. And he got a question from the audience: couldn’t that lead to civil rights concerns? And his answer was, “I’m just an engineer.” And that, I think, is the core of the problem for all of us. Whether you’re an investor, you’re an executive, you’re an engineer—this is all of our problem. It’s all of our legacy as an industry, and so we all need to own this as ours, along with our ability to do positive things in the world.

Cheney: That’s a really powerful way to end. I mean, just in talking with the two of you, I think of you as two experts when it comes to ethics and responsibility in modern technology, that’s why you’re here. But your point is, we all need to be talking about this and not just here, but globally. Because the stakes are too high not to. Please join me in thanking our panelists.

[APPLAUSE]
