Algorithms are step-by-step procedures that break a job into a series of operations a machine can carry out. They are the plankton of AI (Artificial Intelligence). They have been around, in some form, since the dawn of humanity, but until now we needed our own carbon-based units to make them: they were, in effect, abstract machines in our brains that kept track of the steps needed to solve a problem. In the 21st century, brace yourself for algorithms made by algorithms.
The big breakthrough arrived like a whisper: AI suddenly became able to equal, and will soon surpass, Homo sapiens in what we thought were uniquely human faculties requiring consciousness: vision and language.
We have eyes because our planet is bathed in sunlight. Our eyes are encoders and decoders of that light, and natural selection turned them into sophisticated cameras. Eyes evolved independently many times, strong evidence of their survival advantage. Yet a typical human eye responds only to wavelengths of roughly 390-700 nanometers, a tiny sliver (about 0.0035%, by one common estimate) of the electromagnetic spectrum. It is, however, roughly the range in which the sun emits most of its energy. The rest of the spectrum, gamma rays, X-rays, ultraviolet, infrared, microwave, and radio waves, is visible only to machines. Driverless cars use LIDAR (infrared) and drones and planes use RADAR (radio waves), both outside our visible range.
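The division of the spectrum into bands can be made concrete with a few lines of code. This is a toy sketch, not a standard library function: the band boundaries below are approximate conventional values (the real bands shade into one another), and the function name is my own.

```python
# Toy sketch: label electromagnetic radiation by wavelength.
# Boundaries are approximate, conventional values in nanometers.

def classify_wavelength(nm: float) -> str:
    """Map a wavelength in nanometers to a named spectral band."""
    if nm < 0.01:
        return "gamma ray"
    elif nm < 10:
        return "X-ray"
    elif nm < 390:
        return "ultraviolet"
    elif nm <= 700:
        return "visible"           # the only band our eyes respond to
    elif nm < 1_000_000:           # up to 1 mm
        return "infrared"
    elif nm < 1_000_000_000:       # up to 1 m
        return "microwave"
    else:
        return "radio"

print(classify_wavelength(550))             # a green we can see: visible
print(classify_wavelength(905))             # a typical LIDAR laser: infrared
print(classify_wavelength(10_000_000_000))  # a 10 m long-wave radar: radio
```

The LIDAR and radar wavelengths used here both land outside the 390-700 nm window, which is the point of the paragraph above: machines see where we cannot.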
In this decade, algorithms will be able to respond to close to 100% of the electromagnetic spectrum: everything our two built-in cameras can see, plus the entire machine-viewable range. They will also match or exceed the valuable built-in algorithms that we carbon-based units run, the ones that continuously scan the horizon to anticipate traumatic encounters.
In 2016 the two schools of computer vision, the geometric and the cognitive, converged and produced powerful neural networks such as Google's Inception-v3 (released with TensorFlow), which can extract just about anything directly from images. The hard core of the vision problem has been solved; from now on it will only get better.
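Inception-v3 itself is a deep network trained on millions of images, but the underlying idea, classify an input by which learned pattern it most resembles, can be shown in a runnable toy. Everything below is illustrative: the tiny hand-made "feature vectors" and prototype labels are my own stand-ins, not anything from TensorFlow.

```python
# Toy illustration of classification as pattern recognition.
# A real network learns high-dimensional features; here we fake
# 3-number feature vectors and classify by nearest class prototype.

import math

# Hand-made "prototype" features for two hypothetical classes.
PROTOTYPES = {
    "cat": [0.9, 0.1, 0.2],
    "car": [0.1, 0.8, 0.7],
}

def classify(features):
    """Return the label whose prototype is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROTOTYPES, key=lambda label: dist(features, PROTOTYPES[label]))

print(classify([0.85, 0.15, 0.25]))  # near the "cat" prototype: cat
```

A network like Inception-v3 replaces the hand-made prototypes with features learned from data, but the final step, matching an input against stored patterns, is the same in spirit.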
Vision is present in some 96% of animal species, but there is one faculty that only Homo sapiens seems to possess: language, whose evolution is still a mystery. Did natural selection favor those who could speak? Or did language emerge as a by-product of a larger brain and expanding cognitive functions? Whatever the answer, ML (Machine Learning) may soon crack it: 2018 is expected to bring a paradigm shift in machine comprehension of natural language, as the field's two schools, connectionism (brain-inspired networks) and symbolism (hand-written rules), begin to merge. In late 2016 Google announced that its Neural Machine Translation system (GNMT) could translate between many language pairs, even pairs it was never trained on, at the flick of a switch. The researchers reported that the system had developed its own internal "interlingua," a shared representation bridging languages: a protolanguage of sorts, the language of language.
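The "flick of a switch" in Google's multilingual system is a token prepended to the input (for example, a tag like <2es> meaning "translate into Spanish") that tells one shared model which target language to produce. The sketch below illustrates only that switching mechanism; the hypothetical word-for-word dictionary stands in for the neural network, which is where all the real difficulty lives.

```python
# Toy illustration of the target-language token behind multilingual NMT.
# A tag like "<2es>" selects the output language; a tiny hand-made
# dictionary (hypothetical) stands in for the trained neural model.

LEXICON = {
    "es": {"hello": "hola", "world": "mundo"},
    "fr": {"hello": "bonjour", "world": "monde"},
}

def translate(tagged_sentence: str) -> str:
    """Expect input like '<2fr> hello world'; the tag picks the target."""
    tag, *words = tagged_sentence.split()
    lang = tag.strip("<>2")          # '<2fr>' -> 'fr'
    table = LEXICON[lang]
    return " ".join(table.get(w, w) for w in words)

print(translate("<2es> hello world"))  # hola mundo
print(translate("<2fr> hello world"))  # bonjour monde
```

In the real system a single network serves every language pair, which is what lets a shared internal representation, the reported interlingua, emerge.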
What about our Consciousness?
In the first decade of this third millennium, many neurobiologists and brain scientists abandoned the belief that consciousness is the wellspring of our intelligence. They now hold that the core algorithm underlying human intelligence is pattern recognition, built on the vision and language faculties hardwired by natural selection over millions of years of evolution. Surprisingly, 99% of human qualities and abilities are simply redundant for the performance of most modern jobs. In other words, for machines to acquire human intelligence, they need only detect and classify patterns of light, replicating or surpassing our vision, and manipulate the basic elements of spoken and written language.
The history of humanity is the quest to replace the philosophy of need with the philosophy of want. We may soon have everything we need, with a whole range of things waiting for us to want. We could go on permanent vacation, or devote ourselves to creating things society needs in the long term but that offer no short-term payoff. We could choose to live physically or virtually, immersed in digital worlds, in the driver's seat or on autopilot. The problem will be choice. First we shape the tool; then the tool shapes us.