Parthasarathy: Thank you very much for inviting me to participate in this session at such an exciting time in brain research, and also in artificial intelligence and neural networks. I’m very pleased to introduce Terry Sejnowski, who is Professor of Computational Neurobiology at the Salk Institute for Biological Studies, where he holds the Francis Crick Chair. Terry is a pioneer in the field of computational neuroscience, and also an experimentalist. And I actually can’t think of an area in neural systems, at any scale, that you haven’t had some impact on.
I was very surprised to learn that you are one of only 10 individuals who have been elected to all three National Academies: Sciences, Engineering, and the Institute of Medicine, which I think really speaks to your transdisciplinary impact. And more recently, I understand you’ve been involved in the Brain Initiative: you’ve been an architect of President Obama’s Brain Initiative, and an advisor to the NIH director on this multi-million-, if not billion-, dollar project to decode the brain. So I’d love to learn a little bit more about that as we go along.
I’m also pleased to introduce Richard Socher. Richard is the CTO and co-founder of MetaMind, a startup here in Silicon Valley, I believe, which just launched in December with $8 million in funding from Marc Benioff and Khosla Ventures. Richard also just completed his PhD in Computer Science with Andrew Ng and Charles—
Socher: Chris Manning.
Parthasarathy: Chris Manning at Stanford University, and has taken some of the insights that he gained in neural networks and artificial intelligence into this new company. And the company’s mission is really to take cutting-edge AI advances and bring them to nonexperts. And I believe you’re involved in everything from social media to medical imaging.
So, Richard, I’d love to just start with you. Could you explain a little bit about these neural networks, this deep learning that seems to be quite popular and quite exciting? What is it, in not-too-technical terms? [LAUGHS]
Socher: Right. So, deep learning is a type of artificial intelligence, a set of tools or a set of algorithms, that really goes beyond machine learning, in that less human expert knowledge is required to make it work. To give you an idea, these systems can do anything from classifying CT scans of the brain to classifying social media messages. And all you need to give them is the raw input, an image or the text, and an output that you would like to predict. Then they will learn how to represent this input in successively more abstract representations, such that they can then classify it.
So, to give you an idea of what I mean: in computer vision, for instance, it used to be that when experts tried to classify dogs versus cats, or this car make versus that car make, they looked at a bunch of hand-designed features, like, “Oh, it’s these edges, and maybe you want a circle here for the tire.” And then, once we knew there should be this kind of circle for the tire, and this kind of straight edge for this particular trunk of the car, we gave those features to a machine-learning algorithm to just weight them automatically. And now, what you can do is just give it a couple of images, and it will automatically learn how to represent the image to the algorithm, so that it can then distinguish very well between any classes.
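The shift Socher describes, from hand-designed features to learned ones, can be sketched in a few lines. The toy network below is purely illustrative, not MetaMind’s code; the data and hyperparameters are invented for this example. A single hidden layer, trained by gradient descent on raw inputs, invents its own intermediate features for XOR, a problem that no single linear feature can solve.

```python
import numpy as np

# Four toy "raw inputs" with XOR labels: no single linear feature separates
# them, so the network must learn its own intermediate representation.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden (learned features)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)          # forward: hidden layer is the learned representation
    p = sigmoid(h @ W2 + b2)
    dp = (p - y) / len(X)             # backward: cross-entropy error signal
    dh = (dp @ W2.T) * (1 - h**2)     # propagate the error through the hidden layer
    W2 -= 0.5 * h.T @ dp;  b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh;  b1 -= 0.5 * dh.sum(axis=0)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(preds)   # should recover the XOR labels 0, 1, 1, 0
```

No feature was hand-designed here: the hidden units discover the combinations of inputs that make the classes separable, which is the point of the dogs-versus-cats story above.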
So, this kind of technology enabled us as a company to really go and classify satellite images, and classify standard car images, or one thing I’m very excited about, medical images. So, retinopathy can be classified, CT scans of the brain can be classified. And what’s fascinating is, it’s the same kind of algorithm that classifies these medical images that we can also use to classify social media messages and sentiment analysis. So, is this a positive tweet about your company or organization, or a negative?
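Socher’s point that the same kind of algorithm handles medical images and tweets comes down to the input being a vector of numbers either way. Here is a minimal, hedged sketch of sentiment classification: the four labeled “tweets,” the vocabulary, and the logistic-regression setup are invented for illustration, and a production system would use far more data and a deep network rather than this linear model.

```python
import numpy as np

# Hypothetical toy data; a real system trains on many thousands of labeled messages.
texts = ["great product love it", "terrible service never again",
         "love the support great team", "awful product terrible experience"]
labels = np.array([1, 0, 1, 0])   # 1 = positive, 0 = negative

vocab = sorted({w for t in texts for w in t.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    # Bag-of-words counts: the same numeric-vector interface an image
    # classifier uses, just with word counts instead of pixels.
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1
    return v

X = np.array([featurize(t) for t in texts])
w, b = np.zeros(len(vocab)), 0.0
for _ in range(500):                       # logistic-regression gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - labels
    w -= 0.5 * X.T @ g / len(texts)
    b -= 0.5 * g.mean()

score = 1.0 / (1.0 + np.exp(-(featurize("love this great phone") @ w + b)))
print("positive" if score > 0.5 else "negative")
```

Swap the word-count vectors for pixel vectors and the same training loop classifies images, which is why one toolbox covers both domains.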
Parthasarathy: So, I feel like I’ve heard about neural-network technologies since I was a graduate student in the late ‘80s, and I know, Terry, that you did some very seminal work in this area. There was a lot of hype then, it died down, and there seems to be a lot of excitement now. Is this real, or is this another part of the hype cycle? And if it’s real, why now?
Sejnowski: Well, it was apparent to everyone back in the ‘50s, when Artificial Intelligence just got off the ground, that there are two different styles. One is programming, where an engineer sits down and figures out how to put together a modular program that can, you know, identify an object in an image, or move a robot. And the other approach is the one that biology takes, which is: you learn it. You learn it through experience, experience with the world.
And in the ‘50s, unfortunately, computers were in the kilohertz range, and programming was much more efficient, so it was possible to make more progress that way. Learning is very computation-intensive, but it’s not labor-intensive. And eventually, what’s happened over a period of, you know, 60, 70 years, is that the two have crossed. Now it’s actually cheaper to learn: computation is so cheap, and data are so abundant, that it’s much more efficient to have a learning system solve the problem than to hire an engineer to write a program, which is very labor-intensive.
Now, you mentioned what happened in the ‘80s. So Geoff Hinton, who’s here with Google, and I were the first to come out with a learning algorithm: a way of changing the weights on the different layers of a neural network in order to create the representations you need to solve a particular problem. It was called the Boltzmann Machine; that was back in 1983. Unfortunately, computers in the 1980s were megahertz machines, like the VAX series, and the size of the network was very limited: a few hundred units. I, for example, had a network called NETtalk that could take the letters of English words and pronounce them, which was at the time a linguistic task that was very difficult to program.
Well, that was a little bit of 21st-century technology in the 20th century, because in fact we could only do one layer of what we call hidden units. Now, with computers in the gigahertz range and servers that have hundreds of thousands of cores, it’s possible to do networks that are 12 layers deep. And that’s about the depth of the hierarchy of cortical areas that we have in our brains. And what we’ve shown is that it scales. This is the big secret of AI: there are some algorithms that scale very poorly, because as the problem gets more difficult, they take exponential time to complete. And there are other problems—and apparently a lot of the problems in vision, motor control, and, you know, classification are among them—that scale very, very well on these architectures. So that’s really the breakthrough that’s occurred: we’ve reached the point where there’s a threshold amount of computational power to be able to take advantage of the same algorithms that the brain uses.
Now there’s one other thing that you have to take into account, okay? Just to calibrate things here, right? The biggest deep-learning networks that I know about—I mean, there may be some secret ones that Google has—but at least the ones I know about have a billion weights; those are the connection strengths. A billion parameters—that’s a lot of parameters. Well, in one cubic millimeter of your cerebral cortex, there are a billion synapses. And those are the weights in the artificial networks. One cubic millimeter, right? So, the cortex is vastly greater in terms of the amount of computational power that it has. Not only that, but the cortex is only one out of a few hundred processors that you have in the brain. There’s another one called the basal ganglia, which is important for reinforcement learning—that’s learning through practice. And recently, DeepMind coupled the deep learning hierarchy with reinforcement learning to learn how to play Atari games. And so we’re gradually building bits of the brain, incorporating them, and making the systems more powerful.
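The reinforcement learning Sejnowski mentions, learning through practice as in DeepMind’s Atari work, pairs trial and error with a value-update rule. The sketch below is a deliberately tiny tabular version of Q-learning on an invented five-cell corridor task; DeepMind’s system replaced the table with a deep network, but the update rule belongs to the same family.

```python
import random

# Tiny tabular Q-learning on an invented 5-cell corridor: start at cell 0,
# reward +1 only for reaching cell 4. Actions: 0 = left, 1 = right.
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]        # Q[state][action]: estimated value
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = random.Random(0)

def greedy(s):
    # Break ties randomly so untrained states still get explored.
    if Q[s][0] == Q[s][1]:
        return rng.randrange(2)
    return max((0, 1), key=lambda a: Q[s][a])

for _ in range(200):                      # episodes of trial-and-error practice
    s = 0
    while s != GOAL:
        a = rng.randrange(2) if rng.random() < eps else greedy(s)
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N)]
print(policy[:4])   # after training, every non-goal cell should head right: [1, 1, 1, 1]
```

The “deep” part of DeepMind’s system was only in how the values were represented; the practice loop above, acting, observing reward, and nudging value estimates, is the reinforcement-learning idea itself.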
Parthasarathy: So, do you agree with that, is that the technology that you’re using now in MetaMind?
Socher: Yeah, so, I guess in general we’re only very loosely inspired by the brain. In some ways, computer science is an interesting area; it’s somewhat of a science, but there’s also a lot of engineering. And sometimes it’s a question of, do we want to fly like birds, or do we want to fly with airplanes that don’t flap their wings? In some ways, we just need to understand aerodynamics, and then we can try to abstract away from the specifics we see in the natural world and just try to build certain algorithms that work incredibly well.
I do agree that the learning versus specifically programming for each task—that is a very fundamental idea that we can now use. But I wouldn’t say some of the algorithms that we’re building—you know, I’m not a brain expert, so my hunch is that it’s unlikely that they’re exactly doing what the brain does.
Parthasarathy: Yeah, I think that was something Steve Jurvetson referred to in the first panel today, this idea that we’re now able to evolve software, in a way: rather than having to specify everything up front, we can parent this technology as it grows and does things that we may not be able to specify exactly.
So, can you give me examples of the things you talked about: things that this technology does particularly well, and where it will impact the economy? And then examples of things that we still can’t do very well.
Socher: That’s a good question. I think the impact on the economy will be quite profound, especially on the job market, in that whenever you have a task that is very repetitive: you keep looking at pictures and then writing down a pathology report, or you keep driving along a road that, especially in the US, is quite straight for very, very long stretches. Those kinds of jobs will, in the next decade or two, I think, first be made a lot more efficient—so radiologists can look at a scan, see the report that is automatically generated, and check to make sure it’s correct. But eventually, a lot of these things can be automated. With several of them, we also need to be careful about the ethics. As soon as there are human lives involved, we need to be careful about what kinds of things we’re comfortable with—you know, we really want to show that, statistically, the algorithm makes many fewer mistakes than the human before we let it loose on very important tasks. So I think there will be a lot of tasks that can be automated with deep learning technology.
The more education you require to do your job, the harder it will be, I think, to automate that job. So I think education is something very crucial, and obviously, creativity is something that I don’t think we’ve tackled in any convincing way yet with deep learning. So very creative, artistic kinds of jobs, and those that require incorporating a lot of different kinds of information, will be harder to automate.
Parthasarathy: Do you have any thoughts on what it can’t do well yet?
Sejnowski: We don’t understand how the brain works. [LAUGHTER] Everybody has one, and I’m sure you’ve all wondered, you know, why did I make that decision? And then there are things that you do absolutely without thinking, like reaching out for a cup, or recognizing somebody. Those have turned out—I mean, our intuition about them was completely wrong. We thought those were going to be the easy, trivial problems, because they’re so easy for us. They turn out to be the most difficult computational problems, still unsolved. Despite the fact that deep learning is making progress, we’re far, far away from human capability in any of those domains, except in very narrow ones where we have a huge amount of data.
So, what’s really at stake here is understanding some principles that will allow us to create appliances, devices, that will help us, as human beings, solve difficult problems that we spend a lot of time doing now. Like driving is a good example. But there’s so many other expert tasks that take a tremendous amount of education and learning, and at the end of the day, if we could automate that, we could free up minds for doing creative tasks that are much, much more satisfying.
So, I’m the president of the NIPS Foundation, the Neural Information Processing Systems Foundation, which was founded 27 years ago by Ed Posner, who was a professor at Caltech. And this meeting is now the premier meeting in machine learning. And I just learned that, in fact, you’ve been there ever since you were a graduate student, which is very nice, because it’s grown enormously. Last year in Montréal, we had 2,500 attendees. And I could only recognize a small fraction, because they were all much younger than I was. But what’s exciting is that this community has a tremendous amount of energy. If you could only have been there and seen how much raw intellectual power there is, and the new principles that are coming out of this. I mean, there are mathematicians who are actually analyzing these networks and coming up with a deep mathematical understanding of why they’re so powerful. We didn’t know this; it was empirical up to this point. We now know it’s because there are saddle points up there, rather than local minima.
So, this is an exciting time. This is unprecedented in terms of the potential applications, and my favorite application is the brain. It turns out that, as part of this advisory group to Francis Collins, the director of NIH, we were asked to come up with technologies and get engineers involved in helping neuroscientists. And one of the biggest applications of machine learning is going to be in analyzing large data sets of recordings from the brain. Right now we can record from about one hundred neurons at the same time. The goal of the Brain Initiative is to be able to record from a million neurons at the same time within 10 years. We’re never going to be able to do that with the old-fashioned techniques of plotting graphs. We’re going to need deep learning just to get off the ground.
And not just for recordings. It’s also going to be for connectomics—figuring out the wiring diagram of the brain. It’s going to be for analyzing behavior. And that’s probably going to be one of the big breakthroughs: we can now actually analyze, in animals from flies up to humans, what they are actually doing. Not just whether they’re sitting or walking, but how they’re walking. You can tell a lot about a person’s health just by looking at them. A really good diagnostician can walk in to see a patient and, within two minutes, just by looking and observing, figure out what their state of health is. And that comes from literally years and years of experience with other patients. That’s learning, and that’s what we’re trying to do with these networks.
Parthasarathy: It sounds like there’s a really great interplay here: advances in brain science feed the technologies you’re able to build, which in turn feed back into understanding the brain. Do you think that advances in brain science are going to inform what you do at MetaMind?
Socher: In a very loose way, I think. One of the things that I’m most excited about, and that we’re actually not good at right now, but are on the brink of getting much, much better at, is natural language understanding. Really being able to give an algorithm a lot of legal documents, a lot of medical documents, and having it just understand what’s going on. And then being able to query it, to really ask complex questions that aren’t just keyword search. Not just “here’s a paragraph where you may find the answer,” which is what Google does right now, for instance, but “here is the answer.” And just learn that. And so, I think there, with neural networks in particular, we’ve recently started working with memory. And that is certainly a concept from the brain: you have certain parts that deal with temporal information, and you have certain parts that deal with memory retrieval and things like that. And so, in those kinds of abstract ways, we’re definitely inspired.
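The memory-and-retrieval idea Socher describes can be illustrated with a toy attention step: score each stored fact against the question and retrieve the best match. Everything below is invented for illustration, and raw word counts stand in for the learned embeddings a real memory network would use.

```python
import numpy as np

# Hypothetical mini "memory": one vector per stored fact.
facts = ["john went to the kitchen",
         "mary picked up the ball",
         "john grabbed the apple"]
question = "who picked up the ball"

vocab = sorted({w for s in facts + [question] for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

def bow(sentence):
    # Bag-of-words vector; a real memory network uses learned embeddings instead.
    v = np.zeros(len(vocab))
    for w in sentence.split():
        v[idx[w]] += 1
    return v

M = np.array([bow(f) for f in facts])     # memory slots
q = bow(question)
scores = M @ q                            # attention scores: question vs. each slot
attention = np.exp(scores) / np.exp(scores).sum()
retrieved = facts[int(np.argmax(attention))]
print(retrieved)
```

Trained end to end, a system like this learns both the representations and the retrieval step, which is the difference between pointing at a paragraph and producing an answer.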
Parthasarathy: There’s totally more to ask, but I want to see if anybody in the audience has questions, because I’m conscious of the time. I think there’s a hand back there? There’s a mike.
Sejnowski: No, no, you need the mike.
Audience1: My name is Dr. George [INDISCERNIBLE 16.55]. I’m a physician, and I was thrilled to hear Terry’s comment that we don’t understand the brain; that’s great. I wonder if you can reconcile that with Vinod Khosla’s comment that we as physicians will shortly be replaced by machine learning. And I’m just wondering when I can introduce them to my patients. [LAUGHTER]
Sejnowski: So, I don’t think you have to worry about being replaced. In fact, what I hope will happen is that your time will be freed up to actually explore, you know, what the implications are of the symptoms that appear. In this sense, you’d have an intelligent assistant. And really, what’s happening is that this will create appliances that are going to amplify your intelligence. We already have an example of that: we have chess-playing programs that can now play as well as or better than the world’s best chess players. What’s happened? Have people stopped playing chess? No. It turns out that everybody’s game has improved, because they can play the best players on their computer. And it also means that somebody in a village in Norway can become the world’s chess champion, like Magnus Carlsen. So this is going to happen in medicine. You’re going to have ordinary docs becoming superdocs.
Socher: And I think one thing that’s important to add here is that, certainly in the US, people have access to really great doctors, at least in most of the cities. But if you go into the developing world, people don’t have access to any doctor, and there are literally people dying because the surgeons don’t have radiologists to tell them where to go. And so, if the world’s best doctors can instill their knowledge in algorithms by training them, by giving them lots of examples of, you know, “Here are the symptoms that I see; here is what I think is happening,” then you could help a lot of people right now who don’t have any access to a doctor.
Audience1: Well, thank you very much, I’ll now have more time to fill out the electronic medical record after your visit. [LAUGHTER]
Socher: Thank you! That’ll be great.
Dave: Hi, I’m sorry to intervene. But, you know, I think Terry’s already more or less answered this: the Kurzweil idea of the singularity. Is that something you buy into, either one of you? And how do you factor that into this discussion?
Sejnowski: Where are you?
Dave: I’m here, sorry.
Sejnowski: Oh, I’m sorry. You know, I think too many people have fallen into the singularity; it has consumed a lot of people’s thought. I actually think that we’re far, far, far away. This is just the beginning. We’re many e-folds away. Just take the raw numbers: I already told you that we can only simulate, in a very crude way, what’s happening in a tiny little pea of the brain. And to get to the point where we can solve the really difficult problems might take, you know, 10, 20 years, or 100 years. We don’t know. In fact, the reason why we have a Brain Initiative is to accelerate progress in understanding the brain: to accomplish in the next 10 years what otherwise would have required 100 years. So, I think that predicting the future is very difficult. And I applaud those who are thinking about it, but the track record of humans predicting the future is really poor, especially experts. Experts are really good at telling you what’s not possible. It’s the young people who actually make it happen.
Dave: And that’s—
Socher: I think it often distracts from the real issues. In medicine, you know, having certain ethical requirements for algorithms to make interventions. Especially in military applications, making sure that there are ethical standards for when these kinds of computer vision or robotics techniques can be used and should be used. And so I think it distracts a lot, in both a positive and a negative way, right? You can say, with the singularity, now you have an algorithm that will solve cancer and all these other things; and then people get worried and think, “Oh, the Terminator will happen, and we’ll all get killed by Skynet,” and all that. And I think both of those are quite distracting from actual issues that we have right now: eventually job issues, and right now, sort of, ethical issues.
Sejnowski: And I should mention that human mental disorders are among the biggest problems on the planet, especially as our population ages, as we all know. We heard a talk about Alzheimer’s. Well, because the average age is going up, if you live to 80 or 90: your probability of getting Alzheimer’s goes up rapidly after 70. And so half the people in this room could have Alzheimer’s someday.
Socher: And I guess one last comment. When you’re worried about the singularity: once you actually see what we’re doing on very specific tasks, you know, classifying this type of cancer, benign versus malignant and so on, you kind of feel like, when people worry about the singularity, it’s as if kids playing with Lego were worried about building a nuclear bomb. [LAUGHTER] It’s so unlikely that it’s just distracting.
Veejay: Hi, I’m Veejay [ph 0:22:01.8]. I had a question: can an immersive experience open up neural pathways so that you could learn faster?
Sejnowski: What a great question. I’ll tell you, we’ve struggled with this, okay? Yes, we’d like to cure diseases, but as we learn more about the brain, we should be able to develop mental prostheses. And I don’t want to say drugs, because very likely it’s going to turn out to be a neural technology that is quite different from big pharma. In fact, we really think there will be a neural technology industry growing out of the Brain Initiative that is as important as the biotech industry that grew out of recombinant DNA, which came from the war on cancer. But the point is that this really will allow us to greatly enhance the way the brain stores and processes information. Right now we have ways, we think, just behavioral ways, of improving memory. You know, extreme memory. But with neural prostheses and neural technology, this could open up a whole new era.
Parthasarathy: Fantastic. I think we’re out of time—do we have time for one more question? Are we—one more question.
[INDISCERNIBLE 23.15]: Hi, my name is [INDISCERNIBLE 23.15]. I actually went to Singularity University, so the last question really titillated me. But my question is more about optogenetics: how far are we from mainstream adoption of that into our current-day practices?
Sejnowski: Okay, so, that’s my business. And optogenetics, for those who don’t know about it, is the revolution that occurred about 10 years ago, in fact, up at Stanford, with Karl Deisseroth and Ed Boyden. It’s basically using optics: what they did was genetically engineer an algal protein that, when you shine a light on it, will stimulate a neuron. So, by imaging a neuron and simply shining a little beam of laser light on it, you can activate a whole population of neurons in any pattern that you want. Now, the reason why this is incredibly powerful is that, up until recently, the best we could do was record passively, or scoop out the material, lesion it. But now, we can actually intervene.
And this goes to the issue of improving memory. If we could intervene by putting in patterns that are coming from the outside, we could actually open up two-way communication between your brain and information sources on the outside, bypassing the senses. Just think about this. Now, it’s science fiction today, but I can tell you that DARPA has two programs whose goal is to lead to technologies like this. So this is something that will happen; but when, we don’t know.
Parthasarathy: Fantastic. Well, thank you both. It’s been a great panel, and really enjoyed it, and looking forward to the rest of the conference.