Lanier: So I might start by noting that I dispute the summary of this talk that I read in the program. Although I’m not sure that I read the same summary that you did, because I’m using a Windows phone these days. Not because of Microsoft, I couldn’t care less, but because I have a bit of a nonconformist streak and the Windows phone is the only way you can express that in your phone choice these days. But since the world has rejected the openness and universal access of the Internet for the app economy, that means I don’t get the app. So I’m not sure—but anyway, what I think you read is something about how I’m saying there hasn’t been economic growth and it’s been this economic disaster, and that’s not exactly right. So I want to correct that and try to present my, I hope, somewhat more nuanced sensibility about these things.
So we have digital networking, we have computation, we have all the sensors and actuators to connect those things to the real world. And what that ought to do is give us an ability to make a world that makes more sense, that’s more optimized, more efficient, less annoying, less dangerous and all that. And I mean that’s been a vision for a very long time.
Now, the thing is though, when you start to optimize the world with digital technologies, there’s kind of this dangerous pothole you can fall into, or this sort of mistake you can make, which is to turn yourself into a sort of a Communist central planner, where whoever’s closest to the most central and most influential computer gets to sort of run everything. And when you do that, you can reap incredible, incredible near-term benefits, but you also undermine the project that got you into it in the first place. So I’m a little concerned we’ve gone down that false road.
So I want to give you a bit of context on how we got there and where the mistake is and what the consequences are, and also a little bit about how to get out of it. I really want to emphasize—this is a tricky thing. I feel like somehow because there’s so much internet boosterism and because it’s so central to the conversation of the world today, a lot of people have some sort of internal desire to hear somebody be this real curmudgeon and dark critic and skeptic and pessimist. And because I’m a technologist who’s critical of some of the things we’re up to and questioning, a lot of people want to put that on me and want to perceive more pessimism and more just general negativity from me than I actually am able to muster to support their desires. [LAUGHTER] And so for those of you looking for that, maybe some of the later speakers today can help you. I’ll do my best to be a little curmudgeonly, but actually I’m just, I love the big tech companies. I’ve sold a company to Google. I love working with Microsoft. We’ve got a little product out called Kinect that tens of millions of people bought, and I love—I love our world. I love Silicon Valley. So I can’t be that person for you. So it’s better I tell you now than disappoint you later. [LAUGHTER] Honesty in relationships, you know.
Okay, so here’s like my little pocket, tiny capsule view of the history of these things. Right at the start of digital networking, the first proposal for a design came from Ted Nelson, who’s still with us and lives on a boat in Sausalito. And he did it while he was a student at Harvard. And so the initial digital architecture proposed a universal micropayment structure where two things would happen at once. One is we’d gradually let go of a lot of the traditional regulation of information, particularly things like copyright. Those sorts of all-or-nothing decisions—this person owns this, these other people don’t and have to pay for it—would fade away, and instead there’d be a sort of universal ability for people to use information from each other, but at the same time, people would get paid when their information was referenced, even indirectly or very indirectly. There’d be this sort of rolling wave of payments where if you did something that referenced somebody else and that referenced somebody else, all the people down the line would get a little bit.
And so why did he do this? It occurred to him—see, I think sometimes the very first person on the scene has the clearest view. Not always, but sometimes. And I think in this case that happened with Ted. He was the very first person to consider digital architecture, and the reason he did it was incredibly simple. He was the kid of show business parents from Hollywood and what he saw is that, in Hollywood, it wasn’t just the stars who made money. Instead there was this whole middle-class economy of all kinds of other people who made money. And the reason they did was because of this kind of awkward, sometimes unfair, very approximate, kind of kludgy system of things like unions and copyrights and all these mechanisms that people had fought for so they’d get a bit of a piece of the pie. So it’s not just the top stars, but it’s this whole world of agents and lawyers and the people doing the lighting and the grips and the best boys and the dah-dah-dah—there’s this whole middle class. In the movie business, they’re called the below-the-line people, rather disparagingly.
And so what he knew, and what all of us knew from the start, was that with digital efficiencies you could end up automating a lot of that stuff and having only the stars left. And so the problem isn’t a lack of wealth. The problem is a loss of a bell curve in economic outcomes. And this is a really key point and it’s very hard to make because the vocabulary we use creates this blind spot around something that should be incredibly obvious.
So I’m sure—this is a conference with a lot of technically educated people, so if I talk about a bell curve, you know what I mean. If you make a measurement of any natural thing in the world, like how tall everybody is at this conference, you tend to get a normal distribution of outcomes. Now, if you think of the economy as a measuring device—which how else can you think about it?—then when it measures what people do, the economic outcomes for people ought to look something like a bell curve. And what’s nice about that is if it looks like a bell curve, you have a setting for a stable society, an honest and a fair democracy, and many other good things.
The contrary of a bell curve, for our purposes, would be what we call a Zipf distribution, or winner-take-all, or a long tail. Those are all roughly words for the same thing, where you have sort of a prize, where there’s a few people who do really well and then it falls off, and then everybody else is a wannabe. And so prizes are cool. I think it’s cool that there’s a Nobel Prize and all of that stuff, but the basic economic outcomes ought to be a bell curve, or else you have a concentration of power in an elite. And you can talk about that elite with different terminology. In the founding papers of the U.S. it was the aristocracy. Whatever you want to call it, if there gets to be too much concentration, you have a failure of democracy. But I think you also have a failure of economy, because you no longer really have a competitive market. So you want that bell curve of outcomes. And the interesting thing—and this is just a slightly technical point here—is that if you have micropayments over a network, the topology of the network has a huge influence on the distribution of economic outcomes. So if you look, right now, social networks like Facebook are almost always not monetized. There are just a very few that have been, like Second Life, but mostly they’re not. But if you look at what information people use from each other on those, you see this very broad distribution. And if it were monetized you wouldn’t see a perfect bell curve on Facebook, but you wouldn’t see a Zipf distribution either. You’d see a kind of biased bell curve, something in between the two, which is at least survivable. But if you have a hub-and-spoke network—and there are lots of hub-and-spoke networks these days; the app stores are like that, where there’s a central authority and everybody goes through it, whether it’s Apple or Amazon or whatever—with those, you do tend to get Zipf distribution outcomes. So you do have people who are successful on them.
You have a few people who do really well with their YouTube videos or whatever it is, but then it falls off to a sea of wannabes.
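The difference between these two distributions can be sketched numerically. Here is a minimal simulation, with invented, illustrative numbers rather than real income data, comparing how much of the total ends up with the top 1 percent under a bell curve versus a Zipf curve:

```python
import random

random.seed(0)
N = 100_000

# Bell-curve economy: incomes clustered around a middle (truncated at zero).
# The mean and spread are invented for illustration.
bell = [max(0.0, random.gauss(50_000, 15_000)) for _ in range(N)]

# Zipf / winner-take-all economy: the k-th ranked person earns ~ 1/k,
# scaled so total wealth matches the bell-curve economy.
total = sum(bell)
weights = [1.0 / k for k in range(1, N + 1)]
scale = total / sum(weights)
zipf = [w * scale for w in weights]

def top_share(incomes, frac=0.01):
    """Fraction of total income held by the top `frac` of earners."""
    ranked = sorted(incomes, reverse=True)
    k = int(len(ranked) * frac)
    return sum(ranked[:k]) / sum(ranked)

print(f"top 1% share, bell curve: {top_share(bell):.1%}")
print(f"top 1% share, Zipf:       {top_share(zipf):.1%}")
```

Even though both economies contain exactly the same total wealth, under the bell curve the top 1 percent hold only a small multiple of their population share, while under the 1/k Zipf curve they hold well over half of everything.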
So anyway, way, way back in 1960, so many years ago, Ted was thinking, wow, if we have this richly connected graph instead of a hub-and-spoke, and there’s a micropayment system, and then we automate the world so that people don’t have to do a lot of jobs by hand anymore, instead of everybody starving, the way Karl Marx feared, we’ll get this nice bell curve distribution because people will be using each other’s information to run this thing somehow. He didn’t know how, but information doesn’t just come from angels or from some transcendental realm. It has to come from people at some point. And I’ll get back to that point in a second—well, actually, I’ll do it now. Let me try to be very concrete about an example of how this works. My favorite example for explaining this point has to do with language translation. Now, I know a lot of people who translate between languages professionally because I write books, so I talk to the people who translate my books into various languages. And they are facing the same pattern of difficulties that are facing recording musicians, investigative journalists, professional photographers, and others whose work can really be captured digitally. And that is, it’s not that there’s no opportunity, not that it’s absolutely hopeless, but that where there used to be a bell curve—with only a few people at the top, but a lot of people in the middle—there’s now a Zipf distribution where all there is, is the top, and everything else has gone away. So the middle class of their world has just vanished. And so what that means overall is that the volume under the curve, if you remember your basic calculus, has gone down. We can use the term decimated, which means broken down to a tenth. That’s about what’s happened with everything that gets digitized overall.
Now, but here’s the interesting thing: a very typical reaction that you’ll hear from people in the tech world is, well, if those translators don’t have work anymore, that’s creative destruction, they’re buggy whips. They’ve been made obsolete. That’s sad but it happens. They have to find new things, they have to be flexible in learning to do new jobs. But the thing is that that response is absolutely at variance with the truth, with the facts. And this has to do with another interesting stream here, which is the fantasy of artificial intelligence. So way back when I was a teenager, I had the extraordinary good fortune to find a lot of people who at that time were already kind of senior and elder in the field who wanted to mentor me. I don’t know why I was that lucky, but I was. And among them were some gentlemen named Doug Engelbart and Marvin Minsky. So you probably know these names, but just in case you don’t, I’ll mention Marvin first. Marvin Minsky is one of the founders of the field of artificial intelligence. And he’s one of the people who really articulated the whole way that we think about it. So way, way, way back, in the mid-’50s—there’s a story that happens to be true—there was a conference at Dartmouth where the term and the sort of idea of artificial intelligence was really well-articulated for the first time. And then there was this thought: well, you know what? We should be able to input two dictionaries into an algorithm and have a little crystalline algorithm that can combine them and make a perfect translator between languages, which would make all those human translators into buggy whips. It would make them obsolete right away. And at first, people thought this might be easy. And it wasn’t a terrible scientific hypothesis at the time. Also at MIT, where Marvin was, was Noam Chomsky, who at that time was promoting a theory—well, still is—of this very crystalline little algorithmic core to language that exists within us.
And so initially, Marvin assigned it as a summer project for some students: over the summer, could you please make a BabelFish, a universal translator that would just obsolete all the translators. And now it sounds silly, but at the time it was a perfectly reasonable hypothesis. Decade after decade that never worked. And finally, it was some researchers at IBM in the late ’90s who said, “Screw this, it’s not going to work. What we need is a very large sample set from real translators. Then we’ll do statistics to correlate little bits of a new document to be translated with old bits that had been translated before, and we’ll make a mashup of those and see how that reads.” And guess what? That’s readable. It’s not perfect, but it’s readable. So what you do is you take a lot of real examples from real people and you correlate them and you mash them up. This is what these days we call big data in cloud computing. I’m sure in five years we’ll have to come up with new terminology to make it all seem fresh again. More power to us. But whatever it is, the terminology changes but the ideas are approximately constant. So that works. And you know what? I think it’s fantastic. I think it’s just a real improvement in the world that if you need to translate some little thing like somebody’s memo or product description from another language, you can just do it instantly and without any barriers. So I love this stuff. It’s a great advance, it makes the world more efficient, it makes the world more transparent. All of those things that we always wanted.
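As an illustration of the IBM-style approach, here is a toy sketch: no grammar rules at all, just greedy matching of a new sentence against phrases human translators have already produced. The tiny phrase table is invented for the example; real systems correlate statistics over millions of sentence pairs.

```python
# Toy corpus-based translation: a lookup of phrases that human translators
# have already translated, mashed together. The "corpus" below is invented.
phrase_table = {
    ("good", "morning"): "buenos días",
    ("thank", "you"): "gracias",
    ("the", "report"): "el informe",
    ("is", "ready"): "está listo",
}

def translate(sentence):
    """Greedy longest-match against phrases seen in the parallel corpus."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest remembered phrase first, then shorter ones.
        for n in range(len(words) - i, 0, -1):
            chunk = tuple(words[i:i + n])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i += n
                break
        else:
            # Never seen in the corpus: the system needs fresh human data.
            out.append(f"[{words[i]}?]")
            i += 1
    return " ".join(out)

print(translate("the report is ready"))
print(translate("the selfie is ready"))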
But notice something really interesting that’s going on. If you just used the same corpora—I think the first one was like ’97 or ’98, something like that—if we used the same dataset that they gathered initially—which was, if I’m not mistaken, the body of UN translations—it would be completely hopeless for anything written today, because the whole world of references, slang, current events, all sorts of things changes incredibly rapidly. So for instance, if you’re translating the term midterm today, you might translate it differently in context than you would have a week and a half ago, because now all of a sudden midterm has a kind of resonance in the U.S. that it didn’t have. Selfie wasn’t a term that was around in the ’90s. All kinds of things are different. And so what we do—and by we, I mean Bing has one of these translation services, and then our friends at Google do as well—what we do is we go around the world and we scrape literally millions of fresh translations every single day from real translators to add to the process so that translation will still work. So what this means is that these people are not buggy whips. They’re still needed. It’s just that they’re needed in a new way. So we haven’t made translation obsolete with some kind of brain implant or something. What we’ve done is we’ve made translation better. But somewhere along there we snuck in this disempowerment of the people who do the work. But the new way of doing work is adding data to the cloud. The new way of doing work is adding valuable data to a big data statistical system.
So one of the things I started to be interested in some time ago—when Ted originally designed the first digital network architecture—and this was really early, it wouldn’t have been practical. This was before packet switching, just to be very clear. This was the very, very first view. But when he did that, he was thinking in terms of artists and reference, the sort of copyright problem. But what I became interested in is let’s apply this to big data. Let’s say that your translation of the day is used in Google and Bing and perhaps in IBM or whoever else a bunch of times to do things. Shouldn’t you get some pennies from that? So if we start to think of a world where things are more and more automated, where perhaps vehicles are driving themselves and factories are highly automated or manufacturing’s highly distributed with 3D printers or whatever—whatever scenario you like. You can think of any industry and you can imagine the version where there’s a lot fewer people directly doing whatever it is and instead they’re mediated by some kind of cloud software. Well, why not pay them through the cloud for their contributions? It’s honest, it’s true, it’s fair. But even more interesting to me is that the topology of that arrangement would be a thickly connected graph, and so I think it might create a bell curve. And I say I think it might because if there’s one thing that’s certain about economics, it’s that it’s a highly empirical art in which most people say things that turn out not to be useful. Even—you know, it’s a tricky thing. But I think it’s a worthy hypothesis that you’d get a bell curve outcome from such a system. So what that would mean is that you could resolve this very longstanding problem, which is as technology gets better, do you make people obsolete and do they sort of wallow?
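One way to make the payment idea concrete: if every automated result keeps a provenance record of whose contributions it drew on, settling accounts is a simple pro-rata split. This is a minimal sketch with invented names, fees, and weights, not a real accounting scheme:

```python
from collections import defaultdict

# Each automated result records provenance: whose contributions it drew on,
# and with what weight (e.g. how often their translations were matched).
# People, fees, and weights here are invented for illustration.
results = [
    {"fee": 0.10, "sources": {"alice": 3, "bob": 1}},
    {"fee": 0.10, "sources": {"alice": 1, "carol": 4}},
    {"fee": 0.05, "sources": {"bob": 2, "carol": 2}},
]

def settle(results):
    """Split each result's fee among its sources, pro rata by weight."""
    payouts = defaultdict(float)
    for r in results:
        total = sum(r["sources"].values())
        for person, weight in r["sources"].items():
            payouts[person] += r["fee"] * weight / total
    return dict(payouts)

for person, amount in sorted(settle(results).items()):
    print(f"{person}: ${amount:.4f}")
```

Because payouts follow the actual graph of who contributed to what, rather than routing everything through one hub, many small contributors each accumulate many small payments, which is the topology argument for why the outcome might look more like a bell curve.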
Now, here we’re a little insulated from this problem because, you know, we’re close to Silicon Valley. Every single person in this room is only a few steps away from the biggest computers on the planet. We all benefit from it. I certainly have. But what’s interesting is that out in the world, something very strange has happened since about the turn of the century, when the Internet became a dominant force, which is that we’re starting to lose bell curves and get Zipf distributions in general. So the economic recovery in the U.S. is progressing, we’re seeing a bit more employment, and yet it’s mostly benefited, as we always hear, the top 1 percent, blah, blah, blah. Now, I want to say something here that infuriates me, and it’s a little hard to talk about because it has to do with vocabulary. We tend to hear, in the very words that are available for us to use to speak about it, this conflict between kind of extremes of left and right, both of which seem just preposterously hopeless to me. On the one hand, there’s this idea of income inequality, as if income equality would be a good thing. Because to really make incomes equal, there’d have to be some sort of force that would pound people down and say, “We will all be equal.” Surely, any world of freedom is going to create a variation of outcomes. That’s just by definition. So if you believe in freedom, if you believe in agency in people, you have to accept a distribution of outcomes. On the other hand, the Zipf distribution, where there’s an incredible concentration in a tiny minority, is just not sustainable. Totally aside from whether you like it, whether it’s fair, whatever you might think about it, it isn’t sustainable. And I want to get into a little bit about why. But just the approximate reason is that eventually there aren’t any more customers to buy your stuff. So what we really should not be talking about is income inequality versus the 1 percent.
What we should be talking about is how can we get the economy to be a fair measurement device, a true measurement device, a well-calibrated measurement device, which by definition ought to be giving us a bell curve. And the bell curve acknowledges that there’s a minority of people who are exceptional, most people will be in the middle of whatever it is, and then there’ll be a minority who are below that. And nothing but a bell curve can be an indicator of freedom in a system with a market economy. And this to me just seems a priori. I mean I just can’t—to argue against that is to argue against math. [LAUGHTER]
So when I say the Zipf curves are unsustainable, let me talk a little bit about what I mean by that. The very typical pattern, not absolutely universal, but the typical pattern you see with digital optimization done the wrong way, is that somebody optimizes something, they get super, super wealthy super fast—in fact, it’s the fastest and biggest concentrations of wealth in history—and then whatever it is they optimized overall shrinks in volume, but there’s a super concentration that creates this appearance of economic growth. But then the whole thing crashes because it’s not sustainable, because some systemic problem comes up. So there’s a couple of different ways I can put this. I’ll tell it in terms of stories and then I’ll tell it in terms of math. They’re two windows onto the same idea. In terms of stories, the very first example of this kind of automation with big data was actually automated trading on markets. So the first flash crash was in the late ‘80s. And then we had a sequence of institutions that got better and better at it. I was close to some of the people at Long-Term Capital Management, who might even be here today. I don’t know if there’s anybody who was associated with that. But does anybody not know what I’m talking about? Okay, so Long-Term Capital, in my view, was a well-intended and innocent operation. And many people disagree with me, but my view is that there were some very smart people. A lot of them had Nobels in economics, who thought this is going to make the world better by having a computationally optimized trading system. They concentrated a ton of wealth really fast, and then it crashed. Now, why did it crash? Here I want to go to the math side of it. Once again, basic statistics. Somewhere in this room—well, there’s a number of cameras following my fingertip, right?
And so because of all the hackers and whatnot in the world, there’s some algorithm somewhere probably automatically tracking my fingertip. I’m sure that’s valuable information to somebody somewhere. So my fingertip’s being tracked. And the interesting thing is that, because of physics and because of the validity of statistics, there will be, within a window, a predictability. If my finger’s going like this, it will probably be over here. So statistically you can kind of follow it. But the other thing about our world is that fundamentally it’s structured, so it means eventually my finger will hit an obstacle or it will reach the extension my arm is capable of or something. Some structural thing will cause the statistical prediction to be suddenly violated. So that’s just the nature of our world. And it’s a very interesting topic to go into in more depth, actually. It gets to the heart of math and physics. But just for the present purpose, what that means is that if you’re doing cloud-based algorithms to optimize the world, you have to do them statistically, because you’re sampling the world in the first place. And so whatever it is, whether it’s some numerical value that influences how you’re going to place an automated trade, or whether it’s how you’re going to pitch somebody for a loan, or whatever the hell it is, whatever your scheme is, initially you’re going to get this feedback that you have it just right and it’ll feel like you’re Midas and like, “Oh my God, I have the keys to the kingdom. I have the keys to reality. I’m predicting the future, I’m optimizing the future,” and then you’ll become a super-billionaire and all this stuff. But then suddenly, bam, you’ll hit this wall and it’ll fail. And it just happens again and again and again. And it’s not because anybody’s not doing their job. It’s just because that’s the nature of statistics and reality.
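The fingertip point can be sketched as a toy simulation: a naive statistical predictor tracks smooth motion perfectly until the world's structure, here a wall, suddenly breaks the pattern. All numbers are invented for illustration.

```python
# A predictor fit to smooth motion works until the world's structure
# (a wall at x = 10) violates the statistics it learned.
def predict_next(history):
    """Naive linear extrapolation from the last two observations."""
    return history[-1] + (history[-1] - history[-2])

WALL = 10.0

def step(x, v):
    """The real world: constant velocity until the fingertip hits the wall."""
    return min(x + v, WALL)

# Observe the fingertip moving at unit speed.
xs = [0.0, 1.0]
for _ in range(12):
    xs.append(step(xs[-1], 1.0))

for t in range(2, len(xs)):
    err = abs(predict_next(xs[:t]) - xs[t])
    status = "ok" if err < 1e-9 else "PREDICTION FAILS"
    print(f"t={t:2d} actual={xs[t]:5.1f} {status}")
```

The extrapolation is exactly right step after step, then fails precisely at the structural boundary, which is the shape of the story Lanier tells about automated trading: the statistics look like the keys to the kingdom right up until they don't.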
So Long-Term Capital failed and there was this huge bailout, as you all know. The next instances of it were no longer innocent. They knew exactly what they were doing. So a big one that came along was Enron, our favorite company. They tried to buy the little company that we ended up selling to Google instead, and, boy, were those guys sleazy. They knew exactly what they were doing. They knew they were going to crash. But they made a lot of money and not too many people were prosecuted, a few. And then there was the bundled mortgage-backed derivatives of—where are we now?—about six or seven years ago, something like that. And those things, it’s really interesting, what you did in that case is you practically created an encrypted financial mechanism that nobody could understand. And furthermore, the data that you’re pulling in when you reach that point, it’s no longer just numerical data as it had been for Long-Term Capital. You’re starting to pull in text data from the Internet and all kinds of things. You’re doing behavior models, you’re doing all kinds of stuff. And it worked for a little while and then it crashed.
But let me give you another example which I think is really interesting. Amazon was playing the same game out in the consumer world before anybody in Silicon Valley understood it. So really kind of an extraordinary story, so what Amazon did is they optimized their supply chain, they sampled what information they could about everybody in their supply chain so they could make predictions about the absolute bottom line for anyone they’d negotiate with. So it was kind of like counting cards. They could enter into a negotiation knowing more than the people that they were negotiating with. They had information superiority. And I know this scheme pretty well because I consulted to them at the time and helped them do it. And so what they did is they concentrated an astounding amount of wealth, astounding rapid growth, very much like a Silicon Valley scheme, but now, once again, they’re hitting the table—it’s like when my finger hit the table. The structural issue is that at a certain point, if you optimize that too much, you don’t have customers any more. And they’re starting to see limits to their growth and real problems, because they’ve impoverished their customer base too much. And this is the problem with the Zipf distribution. It’s not economically stable. Exactly the same thing will happen to Google eventually, it will happen to Facebook eventually. All of these Zipf-creating schemes are not sustainable.
So what I’ve been advocating from an economic point of view is really very simple, that for us, in Silicon Valley, for those of us who start these schemes, and for the world at large, and in particular for democratic process, we’ll all be better off if we can find our way back to the original idea of digital networking, where there’d be a thickly connected graph of micropayments that’s universal. Because that can create a stable outcome no matter how advanced and automated technology becomes. Automation should not be an enemy of employment. It never was before. The only difference between now and the past is that now we’re pretending the people who do the real work are actually not doing the real work. We’re putting all of those human translators behind a curtain and we’re pretending they’re not there so we can say, oh, it’s this AI. But it’s not an AI. It’s an algorithm plus millions of people who aren’t getting paid. So if we accept true transparency, open the curtain, acknowledge that people are actually doing the work, but just in a new way, in a better way—in a better way. If we can acknowledge the improvements that we’re actually creating, we could create a kind of sustainable and democratic very, very high tech future. So that’s the thing I’m advocating for.
Does that all make sense? Okay, it’s a little hard to get the idea across because, I know, it sounds radical now. I really believe that, just through a process of elimination, someday the ideas I’ve just presented will seem a little wrong, but I think they’ll seem mostly right. Because there just aren’t that many alternatives. If you look at just the logic of how this can play out, our options are somewhat limited. So something along these lines is the path to a sustainable high-tech future.
So I mean there’s so many more things I could talk about, obviously, this opens up into a whole universe. But I was asked to stop just a little shy of my time to take questions if there are any. And I’m getting close. So are there any questions?
Elkehag: Hello, my name is Elin Elkehag from Vinna Ventures. I was wondering what your view is on bitcoins or digital currency.
Lanier: Sure. Well, you know, I think the idea of using encryption as a way of creating a currency is interesting, and the whole blockchain idea is interesting. There’s a lot of really interesting technology in Bitcoin. But I want to express a few concerns about Bitcoin. The whole architecture of it was explicitly designed to emulate Ayn Rand’s ideas of what gold should be, and I view that as an unviable and kind of hopeless way to design a currency. It allows for no fiscal policy—so let me explain what I mean by that. Basically, in its current incarnation, it’s a plutocracy-generating machine. Right now, whoever hoards a bunch of bitcoins early can just wait for other people to innovate to make bitcoins more and more valuable, and they and their descendants automatically become super rich. What you need is the ability to print a little bit of money, so to speak, a little bit of inflation, in order for the overall wealth of the society to be acknowledged as people get better and better at doing things. A fixed money supply would only work in a society absolutely devoid of innovation, where everything was static. So to me, at the macroeconomic level, I think Bitcoin is really bad, bad, bad design. Really, really retrograde and really cruel, ultimately. But the individual technologies are actually really interesting. So I hope that answers—
Elkehag: Yes, it was more about the distribution of having like a global currency or the possibility to trade.
Lanier: Yeah, let’s go to another question.
Vogelaar: Hello, yes, Marleen Vogelaar. Is there an example you could give of a current company that is doing digital automation in a long-term sustainable way?
Lanier: Not really. You know, I think right now the problem is that the temptations of doing it in the way that’s not sustainable are so incredible, because I mean you make so much money, it’s just unbelievable. I’ve gotten involved in a couple of attempts to do it. Second Life was a little bit like that. Second Life had a thickly connected micropayment scheme. And it’s like a little tiny bubble, and we learned a lot from it. And it wasn’t prepared for the mobile revolution, so I think it lost a bit of the relevance that it had before that. But on a large scale, no, not really. Nobody’s doing it right now. The schemes where people can make money from their participation online are mostly hub-and-spoke, and I think that’s a huge mistake. By the way, I’ve only touched on the theme. There are other huge issues that have to do with whether links online are one-way or two-way, and some other things like that. There are some architectural issues that got baked in that really have to eventually be reversed.
Banerjee: So I’m Prith Banerjee, I’m from Accenture. Fascinating discussion. I’ve read your book. So I had a question. Each year, at Accenture Tech Labs, we publish an Accenture technology vision. This year’s vision talks about a trend called a data supply chain, where we posit that you have lots and lots of data—think of it as a supply chain taking raw data that you collect, and ultimately it results in outcomes and sort of insight or whatever outcome, right? So taking that data supply chain concept of the tech vision, I am sort of asking you—I know what you’re trying to do, right? Here is the value of translation. Do you have an algorithm by which you really get value to the micropayment—because it’s related to the Internet of things, also, right?
Lanier: Ah, yes.
Banerjee: As with the Internet of things, you are collecting so much data that data translates into value. So whatever you talked about for a micropayment also applies to data from IoT. Thank you.
Lanier: Yes. All right, that's a fantastic question. So in order for the project I'm talking about to proceed, such algorithms have to come into existence. So what I've been doing is torturing a very pleasant series of Stanford grad students and post docs to work on versions of those algorithms. I would say we're closer than we were a year or so ago, but there's still a lot of research to be done. Please understand, I'm not a utopian saying, "Oh, I have the answer." I'm a technologist and scientist saying, "Here's a research program that has a long way to go." We have a first pass at an algorithm that I know is inadequate, but is definitely teaching us a lot. What we're doing is we're building these algorithms and then testing them in agent-based simulations. So I can't promise that the problem is solvable in the direction I'm talking about, but I'm feeling encouraged by the progress so far.
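Lanier does not describe his group's actual algorithms, but the testing approach he names, agent-based simulation of a micropayment economy, can be sketched in miniature. Everything below is a hypothetical illustration under invented assumptions (the skill distribution, the payout rule, the parameters), not his research code: agents contribute data of varying quality each round, and the value created is split proportionally among contributors rather than captured by a central hub.

```python
import random
import statistics


def simulate_micropayment_economy(n_agents=100, rounds=50, seed=0):
    """Toy agent-based simulation: each round, every agent contributes
    data whose value depends on a fixed skill plus noise, and the total
    value created is paid out in micropayments proportional to each
    agent's contribution."""
    rng = random.Random(seed)
    # Each agent's skill is drawn from a bell curve (floored to stay positive).
    skills = [max(0.1, rng.gauss(1.0, 0.3)) for _ in range(n_agents)]
    wealth = [0.0] * n_agents
    for _ in range(rounds):
        # Contribution this round: skill scaled by random effort/luck.
        contributions = [s * rng.uniform(0.5, 1.5) for s in skills]
        total = sum(contributions)
        value_created = total  # assume value scales with total contribution
        for i, c in enumerate(contributions):
            wealth[i] += value_created * (c / total)  # proportional payout
    return wealth


wealth = simulate_micropayment_economy()
# With proportional payouts, outcomes track the bell-curved skills:
# the median agent ends up with a substantial fraction of the mean,
# rather than a winner-take-all split.
print(statistics.median(wealth) / statistics.mean(wealth))
```

A real research program would vary the payout rule (winner-take-all, hub-and-spoke fees, proportional) and compare the resulting wealth distributions, which is presumably the kind of question such simulations are built to probe.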
Mulholland: I’m Greg Mulholland of a company called Citrine Informatics. We do data mining for materials science. My question is if you have people performing on a bell curve like you mention, how do you prevent the people that are standard deviations above from running away and creating—you know, taking advantage of a Zipf distribution and actually allowing the people in the middle to benefit from whatever they’re doing and not just manipulate the system?
Lanier: Well, I think in this audience I'm looking at the people who are at the top of that distribution, and I'm appealing to your good will and sensibility. I mean I think it's really that simple. If society can't be based on basic good will and sensibility from people all across the distribution, for the most part, it's going to fail anyway. So what I hope is that that appeal lands. I think it's in our interest, despite all my friends who have immortality startups. I think we all know—we all know we're going to die and this whole thing will be inherited by our children and their children. And so the question is, do we want a world that's distributed like a Zipf distribution for them, or like a bell curve? Do you want to gamble on where in that distribution your kid will be? Or their kid, or their kid? I think when you think it through, you start to like bell curves more. And so I think it's just really appealing to good will and decency. And also self-interest. Ultimately, large-scale, long-term self-interest.
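The contrast Lanier draws between a Zipf (winner-take-all) distribution and a bell curve of outcomes can be made numerically concrete. The following is an illustrative sketch, not from the talk, with parameters chosen arbitrarily: it compares how much of the total the top 1% of participants capture under each distribution.

```python
import random


def zipf_shares(n, s=1.0):
    """Share of the total going to each rank under a Zipf law: weight ∝ 1/k^s."""
    weights = [1.0 / (k ** s) for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]


def bell_shares(n, seed=0):
    """Shares drawn from a (floored) normal distribution, normalized and
    sorted largest-first for comparison with the Zipf ranks."""
    rng = random.Random(seed)
    draws = [max(0.01, rng.gauss(1.0, 0.25)) for _ in range(n)]
    total = sum(draws)
    return sorted((d / total for d in draws), reverse=True)


n = 1000
zipf = zipf_shares(n)
bell = bell_shares(n)
# Under Zipf, the top 1% of participants capture a large slice of the
# whole economy; under the bell curve, outcomes cluster near the middle.
top_1pct_zipf = sum(zipf[: n // 100])
top_1pct_bell = sum(bell[: n // 100])
print(round(top_1pct_zipf, 3), round(top_1pct_bell, 3))
```

Running this shows the top 1% taking well over a third of the total under the Zipf law but only a percent or two under the bell curve, which is the gamble Lanier is asking the audience to consider on behalf of their descendants.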
Jurvetson: I’m Steve Jurvetson, fascinating talk—
Lanier: Oh hey, how’s it going?
Jurvetson: It’s going great. So the fantasy of AI, very interesting. If I understand your argument, the current machine learning harness is basically leveraging human intelligence distributed in time and space to do what it does.
Jurvetson: Would you also argue that AI is inherently impossible for some reason and cannot be evolved, and if it were to ever show up, would it be a plutocracy-enhancing feature where someone who has it can do great things?
Lanier: Well, you know, my feeling about AI is that it's one of these ideas that just grabs people, because it has this like metaphysical and philosophical and religious sort of resonance to it. So everybody's fascinated by this, "Could there be a real AI? Might there be one soon? Would it be different from a person?" My feeling is that these discussions about personhood grab at us. They grab at us like crazy. And yet, I think it's a bit of a distraction in this case. Because like let's suppose there was or there wasn't. In a funny way it wouldn't affect anything I've said. Some AI wouldn't be able to do anything in the world without getting all the data. Like some AI wouldn't be able to translate without getting all the data from those people, versus some program we think of as stupid. Now as it happens, I've been on the inside of those algorithms. I know the machine learning world at Google pretty well, I know the one at Microsoft very well, and I know the one at IBM fairly well. They're all distinct. And, look, anyone who looks at it honestly will say we're kind of flailing around a little bit. What tends to happen is all the different approaches to AI hit a certain plateau of capability. Like in the late '90s we learned we could recognize faces and we could track facial features. Wonderful thing. It's kind of plateaued a little bit since then. We still can't interpret a scene, for instance. There are a lot of things we still can't do—I'm sure we will eventually. I'm a real optimist about that stuff. I love working on the math, I think it's a wonderful area. There've been a few good ideas since then in learning networks, the so-called deep idea and whatnot. But you know it's a bit slow, it's a bit kludgy, and what I'd rather do is separate the philosophical and religious question about ultimate AI from the very concrete engineering questions. In fact, I'll even say it stronger. I've known a lot of AI researchers.
The more interested they are in the ultimate question, the less productive they are as actual mathematicians or engineers. It’s like this thing that just grabs people and sucks them into a toilet. [LAUGHTER]
The better thing—and I'm always telling this to students—is I'd rather see somebody who's a gifted mathematician or engineer focus on a problem where they can actually define the terms. Because I don't know what a person is, I don't know what intelligence is, despite our standardized testing obsession. The truth is, we don't quite know what these things are and we can argue about the definitions. But if I say, "I want this thing to recognize a face," hey, that's something concrete. I can measure whether it worked or not. So what I would advocate is for people to try to save those discussions for dorm rooms or science fiction plots or whatever. There's a lot of wonderful places where you can explore those ideas. But in terms of engineering, and I also want to say in terms of how we think about economics, it's probably better to not bring that framework into the discussion. I think it just creates this excess of fascination and a kind of a confusion.