Even though we don’t know whether to call it Echo or Alexa, Amazon’s amazing voice-controlled home device is clearly the breakthrough in computing interfaces we’ve been waiting for. Techonomy has covered this already, with articles about CES, Amazon’s big deal with Ford to put Alexa controls in cars, and a great John Hagel article here about the future of “assistance in an instrumented world.” But this week Tim O’Reilly (who is joining us at Techonomy 2016 in November) wrote the best, most passionate, and most nuanced piece about Alexa yet.
O’Reilly doesn’t just rave about the machine’s astonishing capabilities; he breaks them down so we begin to understand why it so charms users from 2 to 90 in homes all across the U.S. The appeal comes not just from the information behind it but from how it prioritizes tasks. O’Reilly walks us, for example, through a sequence of commands and actions as he plays music, runs a timer, and pauses the music to take a phone call (even while complaining that he can’t yet answer his phone on the Echo). In this deconstruction he makes critical points that pertain to anyone designing any kind of computer interface in any device, from a car to a smoke alarm to a restaurant ordering system.
“You should be imagining a future in which the devices used to interact with your software are increasingly conversational, and asking yourself ‘What would Alexa do?’” he writes. It’s a must-read for anyone who wants to understand where technology is going.