Published on November 16th, 2016 | by Emergent Enterprise
3 Factors Limiting AI Adoption
E-E says: Although today’s author tends to sell AI a little short, there is truth in these three points. Just like AR & VR, artificial intelligence is seeing a lot of “cute” and not enough “useful.” But that will change in a hurry, and there will be both enterprise and consumer success stories that unlock the vast potential of AI. What’s your best experience with AI so far? Share below.
It seems like we’ve been perennially on the edge of major breakthroughs in AI (artificial intelligence), virtual reality, personal robots, and other such cool tech for the past two decades. The first set of science fiction imaginings came true rather rapidly — think transcontinental air travel, space stations, even drone warfare — but it appears that the emergence of next-gen tech wizardry has stalled.
But while we still can’t chat with the on-board computer on our personal spaceship, artificial intelligence is far more pervasive in our daily lives today than most of us realize. As anyone who has trained their mobile phone assistant can attest, years of painstaking research and investment in artificial intelligence technologies are starting to yield impressive results. Siri can predict our commute patterns, Microsoft Cortana warns us of bad weather, and the Google Assistant diligently sets calendar reminders with the impassive demeanor of an English butler of yore.
At the other extreme, machine learning and deep learning play a key role in the rapid emergence of visual search, giving us automatically tagged and sorted photographs with uncomfortably accurate markers for names, places, and groupings; the same technology also allows governments to tap into real-time facial, biometric, and pattern recognition across millions of traveler profiles.
The hardware front is chock-full of exciting possibilities too. Drones are going mainstream, and robots have graduated from factory shop floors to our living rooms. Computing has become miniaturized enough to produce powerhouse computers that fit into our pockets and purses. Yet that je ne sais quoi is still missing. It’s all good, but not breakthrough enough for AI to explode into the mainstream and take care of our various picayune needs. One wonders then: Why are we not further along? If my computer can’t beam me extra-terrestrially, why can’t my watch at least book my next flight? If our airplanes have used auto-pilot technology safely for ages, why can’t my car benefit from it?
I posit that there are three simple reasons that we haven’t seen AI take a more dramatic leap forward.
1. It’s the people, of course
The agricultural revolution had its farmers, the industrial revolution its factory workers. The AI revolution has a few thousand specialists, best case. Very few universities today offer training in AI, neural networks, machine learning, and similar fields, and there is a well-acknowledged shortage of graduates even from existing programs in computer science, cybersecurity, and data analytics. We simply do not have the talent pool to ideate, experiment, innovate, and push the boundaries of all that we can do with AI.
2. Moving beyond cute
While virtual assistants that give sassy responses are cute, and using the power of AI to remind you to pack an umbrella is handy, there is still a lack of strong use cases that yield powerful and tangible returns on AI investments. Identifying the best bang for the research buck will mean fewer scattershot projects and better yield from the limited resources applied to harnessing AI. Health care has always flirted with the potential of AI, but only now have practitioners moved away from the lofty computer-as-a-physician ideal to the more pragmatic (and dare I say, overwhelmingly useful) approach of AI as an expert assistant to the physician — one that ingests, processes, and analyzes data from millions of research papers and clinical trials. No human could come close to synthesizing this quantity of information, and AI makes it possible to apply those learnings to more accurate diagnoses and treatments.
3. The only thing we have to fear
Let’s face it, we generally don’t deal well with change, even the “good for us” variety. Think of how long it took to fully adopt ATMs, and then online banking, even when the 24×7 convenience was a clear advantage from the beginning. AI often evokes a special kind of scare factor — the oft-touted assertion that once machines can think for themselves, they will become self-learning and inevitably reduce humans to an unemployed, cowering mass forced into hiding. Fear of change always impedes adoption of new technology, and AI is no exception.
With time, not only health care but also industries such as travel will apply AI to many exciting possibilities. Think of trips booked automatically, accounting for preferences and prior experiences, or deep learning applied to optimally balance supply and demand. Travel suppliers like airlines and hotels could readily offer contextually aware, relevant experiences through AI-powered operations.
If history is any guide, progress rarely rides a linear curve. At some point, a series of incremental changes coalesces into a step function, lifting us to new altitudes of capability and convenience. And humans have always found a way to move up the value chain. After electricity was discovered and harnessed to power the shop floors of the industrial revolution, we turned our attention to making lava lamps and three dozen kinds of toasters.
Jokes aside, AI will not make us feckless; our latent ingenuity and creativity will use the opportunity to escape the demands of mundane and repetitive work and instead create a future that our limited intelligence today — artificial or biological — cannot yet fathom.