Description: Veteran hardware engineer Mary Lou Jepsen is a harbinger of a new generation of imaging technologies. High-resolution, low-power LEDs can look inside the body for medical diagnosis and treatment, and ultimately even enable brain-to-brain communication.

The following transcript has been lightly edited and condensed for ease of reading. 

Speaker: Mary Lou Jepsen, Openwater

Introduction: David Kirkpatrick, Techonomy

(Transcription by RA Fisher Ink)

Kirkpatrick: You know, when you have the biggest company in an industry, sometimes that’s why they’re the biggest. Mary Lou Jepsen is a great member of the Techonomy community. I’m super pleased that she’s going to be on our stage. She’s done things here before, but she’s got a really great presentation today from her company Openwater. Mary Lou.

Jepsen: Hello. So medical imaging has been used over the last 30 years to diagnose millions, even billions, of cancers and many other diseases in our bodies. Those are the medical imaging suites, the multimillion-dollar machines many of you have probably lain in. They haven’t improved substantially in three decades. They’re expensive and hard to use. And so what I’m trying to do is massively lower the cost and the size of medical imaging systems so that we can see inside our bodies and brains. And I think that can have a transformative impact on our health.

And the question is, why now? How can we do it now? I think the tools of our time are ready for this: big data, machine learning, and the often overlooked trillion-dollar manufacturing infrastructure, which is where I’ve lived and breathed, shipping products on the hairy edge of imaging for about three decades. So I’m taking all of my imaging knowledge, from shipping all kinds of new display systems, laptops, HDTVs, projectors, virtual reality and augmented reality headsets, and using that work in displays and camera chips to see inside our bodies in high resolution. Which sounds a little nutty, so let me explain a bit of how this works.

But first, I’m talking about literally replacing this big iron machine with a ski hat or a bandage that can go around your body, at about a thousand times lower cost, about a million times smaller size, and potentially a billion times higher resolution. So that’s a lot of zeros. It’s the classic innovator’s dilemma, and I think the manufacturing supply chain that makes our smartphones, TVs, and laptops is ready to take it on, because of manufacturing process improvements that have been put in place literally in the last couple of years to enable next-generation, high-fidelity virtual reality and augmented reality.

And so how does this work? Can we turn down the lights a little bit? So our bodies are translucent to red light. You can see red light goes right through my hand here. Gamma rays and X-rays also go through my body. So do two ton magnetic fields, but guess which one’s cheaper?

[LAUGHTER]

By a lot. And here’s the thing, though. If I take this laser and point it at the table, you can see it makes a spot. But when it goes through my hand, it scatters. So we have to do something about the scattering to be able to see deep inside our bodies at high resolution. And so what I’ve figured out how to do—oh, wow, I thought you were seeing the slides I was on; I’m trying to figure this out—so, our body is translucent. We de-scatter the light using holography, which records the actual waves of light, at the scale of their wavelength, and the interference between them. I spent the first ten years of my career making holograms, holographic video, and electro-holography. To record a hologram, you need a very fine pixel size, a pixel size on the order of the wavelength of light. And that’s just been enabled in the trillion-dollar manufacturing infrastructure for next-generation, high-fidelity VR and AR, because when you put on your VR and AR headsets, the pixels look kind of chunky. So billions of dollars have been spent that now enable us to record holograms in camera chips.
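To make concrete why hologram recording needs pixels that fine, here is a small back-of-the-envelope sketch in Python. It uses the standard two-beam interference formula for fringe spacing; the 850 nm wavelength and the crossing angles are illustrative assumptions, not Openwater’s actual design parameters.

import math

# Two coherent beams crossing at angle theta interfere with fringe spacing
# lambda / (2 * sin(theta / 2)). To resolve those fringes, the camera's pixel
# pitch has to be comparable to that spacing, which approaches the wavelength
# of the light itself as the crossing angle grows.
wavelength = 850e-9  # assumed near-infrared wavelength, in meters

for theta_deg in (10, 30, 60, 120):
    theta = math.radians(theta_deg)
    fringe_spacing = wavelength / (2 * math.sin(theta / 2))
    print(f"beams crossing at {theta_deg:3d} deg -> fringes every {fringe_spacing * 1e6:.2f} microns")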

So that’s the second key. The first key is that our body is translucent to red light but scatters it; the second is that we can de-scatter the light with holography. And the third key is leveraging ultrasonic scanning. Ultrasonic chips are now being made in every single silicon fab in the world using a process called MEMS, microelectromechanical systems, that was developed for projection displays by Texas Instruments in the 1990s. Fast forward 30 years, and we can take an ultrasonic ping, focus it down to a spot in the brain, and then bring in red light. And the light that goes through that ultrasonic focus changes color ever so slightly, just like the pitch of a police car siren changes as it speeds past you: a Doppler shift.
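For a sense of just how slight that color change is, here is a rough Python calculation. The 5 MHz ping frequency and 850 nm wavelength are illustrative assumptions, not Openwater’s actual parameters; the point is only the relative size of the shift.

# Light passing through the ultrasonic focus picks up a frequency shift equal to
# the ultrasound frequency. Compared with the optical frequency, that shift is
# tiny, which is why it is detected as interference fringes against a reference
# beam rather than seen directly as a color change.
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 850e-9   # assumed near-infrared wavelength, m
F_ULTRASOUND = 5e6    # assumed ultrasound ping frequency, Hz

f_light = C / WAVELENGTH
relative_shift = F_ULTRASOUND / f_light

print(f"optical frequency: {f_light:.2e} Hz")
print(f"tagged-light shift: {F_ULTRASOUND:.1e} Hz ({relative_shift:.1e} of the optical frequency)")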

So we can then change the color of the light in a light guide plate underneath. These black discs represent ultrasonic and camera chips in an array, with a network of fiber optics illuminating it. If I turn on my smartphone, you see the light comes straight out, even though the light comes in from the side. That’s called a light guide plate. That’s what we use to create what’s called a reference beam in a hologram. We beat the two beams against each other to get information about that ultrasonic spot where the wavelength shifted.

Because we have this fine pixel structure, we can see the interference, what we call fringes. We couldn’t see that if the pixel size were bigger. And then we decode that, much like Rosalind Franklin decoded this iconic X-ray diffraction image for the first time to work out the structure of DNA. It took her a long time, and she didn’t get credit for it, but—

[LAUGHTER]

But we can now do that a million times a second using double-stack camera chips, where the first layer senses the light and converts it to digital, and the second layer does image processing. For the engineers in the audience, we do a 2D discrete Fourier transform super fast on that logic chip and then scan out the next spot. In doing this, we can scan out the body and brain spot by spot. We can go in a line, or, with software-reconfigurable ultrasonic pings, we can change the frequency and thus the focus size. We can look at areas of interest rather than scanning the whole thing over and over.
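Here is a minimal sketch of that decoding step in Python with NumPy. It simulates an idealized interference pattern and pulls the fringe amplitude out of the 2D discrete Fourier transform; the sensor size, fringe frequency, and signal model are illustrative assumptions, not Openwater’s actual chip parameters or algorithm.

import numpy as np

N = 512                                    # assumed N x N pixel sensor
y, x = np.mgrid[0:N, 0:N]

# Idealized camera frame: a uniform background plus faint fringes whose amplitude
# encodes how much ultrasound-tagged light reached the sensor from that spot.
fringe_cycles = 100                        # assumed number of fringe cycles across the sensor
tagged_amplitude = 0.05                    # the quantity we want to recover
frame = 1.0 + tagged_amplitude * np.cos(2 * np.pi * fringe_cycles * x / N)
frame += 0.01 * np.random.randn(N, N)      # a little sensor noise

# The "2D discrete Fourier transform" step: the fringes show up as a sideband
# peak away from the DC term, and its height is proportional to the tagged light.
spectrum = np.fft.fftshift(np.fft.fft2(frame))
power = np.abs(spectrum)
power[N // 2, N // 2] = 0                  # ignore the unmodulated background (DC)

estimated_amplitude = 2 * power.max() / (N * N)
print(f"recovered fringe amplitude: {estimated_amplitude:.3f}")   # ~0.05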

And you’re probably thinking, what about the skull? Well, light goes right through the skull. It’s white, so it scatters. That’s a real human skull, ordered from skullsunlimited.com.

[LAUGHTER]

Yes, the speed of sound changes in the skull, but we can compensate for that change. And there’s a thing we’re really good at seeing: blood, and blood is pretty important. Blood absorbs red light and flesh scatters it, and we can see the difference—you can see the difference in this slide. That’s really a big deal, because any tumor that metastasizes steals blood from your body. Cancerous tumors have five times the amount of blood as normal tissue; a tumor steals the blood so it can grow and try to kill you. So you can see that. We can see that.

So using that simple system, with camera chips and lasers and ultrasonic pings, we’ve been able to see the vasculature of tumors we embedded in optically mimicking phantoms that we created. And we’ve gone further. We can see the color change in blood depending on whether it’s carrying oxygen or not. It’s actually a different color; the absorption properties change. So we use a laser with two colors coming out of it to see whether the blood is carrying oxygen or not, which is exactly what multimillion-dollar fMRI does. fMRI is a video form of MRI that is very useful for many different forms of brain disease. So we can do that too. And we think we can see it faster than fMRI, because there’s a faster, sub-second response. It usually takes four seconds to capture an fMRI scan of the brain, but we see a sub-second response optically that we’re capturing.
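The two-color measurement works the way pulse oximetry does: oxygenated and deoxygenated hemoglobin absorb the two wavelengths differently, so two absorbance readings give two equations in two unknowns. Here is a minimal Beer-Lambert sketch in Python; the extinction coefficients are made-up illustrative numbers, not real values for any particular pair of wavelengths, and this is not Openwater’s actual reconstruction code.

import numpy as np

# Rows: the two laser colors; columns: oxygenated (HbO2) and deoxygenated (Hb)
# hemoglobin. These coefficients are illustrative placeholders.
E = np.array([[0.3, 1.0],
              [0.9, 0.7]])

def oxygen_saturation(absorbance):
    """Solve E @ [c_HbO2, c_Hb] = absorbance, then return HbO2 / (HbO2 + Hb)."""
    c_hbo2, c_hb = np.linalg.solve(E, absorbance)
    return c_hbo2 / (c_hbo2 + c_hb)

# Example: the absorbances that 80%-saturated blood would produce in this model.
measured = E @ np.array([0.8, 0.2])
print(f"estimated oxygen saturation: {oxygen_saturation(measured):.2f}")   # -> 0.80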

And for the first year and a half of the company, I didn’t focus on product. I just wanted to know what the limits of the physics were. So we hit fMRI resolution, which is 10 cubic millimeters, then MRI resolution, which is about a cubic millimeter, and kept going, and we got to a couple of cubic microns. A micron is a millionth of a meter, a thousand times finer than a millimeter, but that’s in X, Y, and Z, so it’s a thousand times a thousand times a thousand, or potentially a billion times higher resolution. So that means we can focus down to neurons themselves, noninvasively, through skull and brain, as I showed live on stage at TED this year in Vancouver. And that’s just mind-blowing. It was mind-blowing to us, because if you couple that with the work that’s been done using near-infrared light to both read the state of neurons and write the state of neurons, that’s pretty transformative for our ability to understand the brain with this tool we’re creating. But, back to life—
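Her arithmetic there can be checked directly; the short Python snippet below just restates the voxel-volume comparison between a 1-millimeter and a 1-micron linear resolution.

# Going from a 1 mm voxel to a ~1 micron voxel is a factor of 1,000 in each of
# X, Y, and Z, so the voxel volume shrinks by 1,000 * 1,000 * 1,000.
mm = 1e-3       # meters
micron = 1e-6   # meters

linear_gain = mm / micron        # 1,000x finer in each dimension
volume_gain = linear_gain ** 3   # ~1,000,000,000x smaller voxel volume
print(f"linear gain: {linear_gain:,.0f}x, volume gain: {volume_gain:,.0f}x")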

So I did all of that with this huge lab-sized prototype, because any chip designer worth their salt has gotten thrown out of fabs because they didn’t ship enough product. So what you do is jerry-rig systems in prototype with existing chips while you architect the chips you need to shrink it down. We spent a long time doing that with lots of different big iron setups like this, and now we’ve architected these new components: camera chips, ultrasonic chips, and a new kind of laser, a strobe laser, or a so-called pulsed laser, but with a slow pulse, on the order of a microsecond. And that’s necessary because we’re alive, we move. It’s good to be alive, but, you know, we have to strobe it.

So this is the system we have now. We’re scanning rats in a small animal imaging facility in San Francisco; we MRI the rats and then scan them with our system. We also get whatever is on sale at the meat market and scan that.

[LAUGHTER]

We recently—yes, pork chops and chicken, and we got some pig brain a couple of weeks ago. So, ten times less—these are the recent improvements we’ve made in the last six months, and they’re order-of-magnitude improvements that keep coming. We’re working on alpha kits for next summer, but we’re waiting to freeze the architectures until we stop getting these big improvements, both in signal-to-noise ratio, which is important, because we want more signal and less noise to see deep, and in lowering the amount of light needed.

So last night, around one of the fires, I was talking with Wael, who will be up here later and who had a big role in the Arab Spring, and a bunch of the Techonomy crew, about a lot of the ethical questions. We skipped the music, sorry, but it was lovely. And they gave me a hard time, in a way I could really hear, about bringing this kind of brain-computer communication into the world. And the question is, do we want to be able to communicate this way, and what are the ethical rules around it? What we have agreed to do is not do that until we can define what it means to be responsible there. But nonetheless, we’re building systems that can see inside of our bodies and brains.

And so, just quickly: using existing MRIs, students were put into MRI machines for hundreds of hours while recordings were made of their brains. This was work by Professor Jack Gallant at UC Berkeley. The computer is then able—it stopped, sorry. Usually it runs through. Hopefully, if I go back and then forward—yes, there we go! Thank you, back there. The computers are able to infer a grainy image of what the students are actually thinking, by drawing on that data store, with some low accuracy. But you can up the resolution and make it cheaper than MRI, and putting on a ski hat is a lot more comfortable.

In Japan, a group did this with dreams, waking students up three minutes after they fell asleep and asking them what they were dreaming about to create the data store, then using machine learning with backpropagation to infer what they were dreaming the next time they fell asleep. As for accuracy: this word cloud, for example, back at UC Berkeley, uses a 5% false-positive threshold, and it even covers the sex and violence areas. So if we do develop this, and we didn’t wish to communicate thoughts of sex and violence to each other, we could turn that part off.

And in terms of accuracy, here’s a study from a decade ago: a thousand images were double-flashed, shown twice, to students lying in MRI machines. And with 80% accuracy, the machine learning algorithm could infer which image the student was looking at or thinking of, because when you think of an image, the same area lights up in your head. And by the way, a random guess would get that right 0.1% of the time, so that’s a really big difference.

So the bet here is that with massively more data, higher resolution, increased temporal response, and better algorithms, we can get to a level of resolution that we think can have a profound impact on people who suffer from brain disease, locked-in syndrome, stroke, and so forth.

But the really big thing, the elephant in the room, is that two-thirds of humanity lacks access to medical imaging. And that actually affects all of us. If you look at mammography, for example, that’s screening for breast cancer in women. We know MRI is ten times better, but we don’t use it in the US for routine screening. And not a single country in the world uses it for routine screening, for one reason: it’s too expensive.

I loved how David Kirkpatrick introduced the health panel yesterday by talking about susceptibility to innovation in healthcare. He was talking about sectors in the US, but the question is, is the US the country most susceptible to healthcare innovation in the world? I believe it’s other countries that are more susceptible, and so we’re really reaching out to them.

And there’s a biological basis for brain disease. Two billion people suffer from brain disease if you add in mental illness and neurodegenerative disease, but they’re not given fMRIs. They’re just asked questions like, “Are you sleeping all the time? Have you gained weight? Do you have thoughts of suicide?” If you answer yes to those and a litany of other questions, you’re clinically depressed. And then: “How is your therapy going? When you take a pill, does it change?” It’s subjective. I mean, I was talking to the people who try to figure out whether schizophrenics taking their meds are getting better or worse, and it’s confusing. We can see the signatures of different brain diseases with an fMRI scan, but we don’t do that. So precision psychology could be enabled.

We can also focus the ultrasound for longer than a microsecond. If we focus it for 15 seconds, we can do surgery without the knife: ablate tissue, open the blood-brain barrier, deliver microdoses of drugs, with the right intensity at the right place. And we’re working with the Focused Ultrasound Foundation on that; they’re really shepherding this through for lots of different indications.

The FDA path is rather straightforward. The predicate products using near-infrared light and ultrasound are decades old, the safety limits are well established, and we’re building up our image stores by comparing our rats, and then humans, against MRIs.

So I think this is inevitable. It’s thrilling to talk to you about it here. I’d love to talk to you all about how we might be able to shape it. It’s far bigger than me or my company, Openwater. Thank you.

[APPLAUSE]