In 1960, my mother noticed she was feeling queasy. She started smelling something in the “icebox” (as it was called then) that she couldn’t eradicate, despite repeatedly cleaning it and (to my father’s dismay) throwing everything out. After missing her period and confiding in a friend who had several children, she decided to investigate. So one morning she put on a nice dress and drove several miles to the local family doctor’s office. She then stood in line for a half hour to speak with a receptionist. After relating her detailed story, she got an appointment, though not right away.
Two weeks later, she provided the doctor a urine specimen for a “Friedman pregnancy test” and underwent a thorough interview and physical examination. The test involved adjusting the specimen’s pH (acid/base balance) to 7.4, then injecting the urine into a female rabbit’s ear vein on two consecutive days. The rabbit was then anesthetized, and a laboratory expert and a doctor performed an exploratory operation to see whether there was pathological evidence of pregnancy hormones acting on its ovaries. If the answer was yes, my mother was pregnant.
It was. (That explains why I’m able to write this.)
Fast forward to the present. If a woman suspects she is pregnant and it’s a month or less from likely conception, she takes a paper-based pregnancy test strip she ordered and received from Amazon, and pees on it.
We’ve seen this story before, and repeatedly, in the history of humanity’s interaction with technology: a useful technology is initially only available to people via the intercession of an expert, but later can be used easily and independently by individuals. The full cycle goes like this:
- There is no technology available.
- A technology capable of helping us be better at something is created.
- For a time, only experts can operate it.
- Eventually we learn to use it without needing an expert, or even expertise.
- Sometimes finally the machine no longer requires any human competence to operate – it operates itself.
Some areas where such technological evolution has occurred include transportation, cooking, reading, and writing. More recent, more complex examples include making music, creating art, and investing.
Though long considered “too difficult” or “too different,” the practice of medicine is now experiencing this cycle.
Three things are required to power the cycle: democratized machines that are user-friendly, democratized information that is digestible, and social acceptance that an expert is no longer necessary.
When it comes to democratized machines, the list in medicine is long and getting longer. Some examples from our smartphones and watches include apps that evaluate our vital signs, conduct electrocardiograms, and measure sleep quality, blood count, or bone density. There are attachments we can point into our own eyes and ears that suggest various diagnoses.
The poster child for democratized information is obviously the internet, where more information can be found than any medical school could ever offer, or than any human being could ever absorb. More than one and a half billion individuals around the globe have sought health-related information online, and more than 900 million have searched for information about a specific medical condition or disease.
Now the new sciences of genomics and proteomics promise the ability to access information directly from our own bodies, expanding individuals’ access to impactful medical information even more dramatically.
What about social acceptance? I would posit this has been rendered a fait accompli by two developments: the cost and inconvenience associated with the delivery of modern healthcare, and the way individuals presume they will be empowered by technology in other areas of their lives. If you can transfer money from your phone to anyone, call a car, order a meal, or find a date, of course you should also be able to learn key information about your health.
For the foreseeable future, more complex medical challenges like brain surgery or heart valve replacement will continue to require expert intervention. But we will increasingly use information and technology to deal with simple medical problems ourselves – what most of us seek expert care for now.
Still, many people don’t seek care for simple problems because they lack access, or because it’s inconvenient or expensive. As a result, simple medical issues can snowball into complex ones. As technology becomes more intelligent and user-friendly, individuals and healthcare providers will benefit in many ways – including improved access to care, and lower overall healthcare costs as people avoid unnecessary expert visits and interventions. That will leave more time and energy for experts to focus on those who really need the attention.
Early Homo sapiens had no piano, but when one was created, Wolfgang Amadeus Mozart was somehow able to compose for and play upon it with such mastery that the American composer Aaron Copland said he “tapped the source from which all music flows.” When rockets and flying machines capable of carrying a man were created, Neil Alden Armstrong was ready to pilot them to the moon. When the automobile was invented, Danica Patrick was prepared to slip behind the wheel and drive at speeds eighty times faster than a human can walk. When the average person receives online video instruction to test herself for strep throat – with a swab that changes color if the test is positive, followed by instructions for printing antibiotics on her 3D printer – she will be prepared to do that as well.
We’re prepared to use technology in ways we cannot yet conceptualize, or even imagine in our wildest dreams. Medical technology is no exception. The co-evolution of humans and machines is inexorable – and now, finally, it encompasses healthcare as well.