Published on July 17th, 2019 | by Emergent Enterprise
What AI-Driven Decision Making Looks Like
AI will continue to have a growing influence on business decisions as humans learn more about its capabilities. Eric Colson at Harvard Business Review takes a close look at this relationship and where it may be headed in the near future. As objective and unemotional as AI may seem, human bias will always be part of the equation: the AI must be developed with variables and algorithms that have human fingerprints all over them.
Many companies have adopted a “data-driven” approach for operational decision-making. Data can improve decisions, but it requires the right processor to get the most from it. Many people assume that processor is human. The term “data-driven” even implies that data is curated by — and summarized for — people to process.
But to fully leverage the value contained in data, companies need to bring artificial intelligence (AI) into their workflows and, sometimes, get us humans out of the way. We need to evolve from data-driven to AI-driven workflows.
Distinguishing between “data-driven” and “AI-driven” isn’t just semantics. Each term reflects a different asset: the former focuses on data, the latter on processing ability. Data holds the insights that can enable better decisions; processing is the way to extract those insights and take action. Humans and AI are both processors, with very different abilities. To understand how best to leverage each, it’s helpful to review our own biological evolution and how decision-making has evolved in industry.
Just fifty to seventy-five years ago, human judgment was the central processor of business decision-making. Professionals relied on their highly tuned intuitions, developed from years of experience (and a relatively tiny bit of data) in their domain, to, say, pick the right creative for an ad campaign, determine the right inventory levels to stock, or approve the right financial investments. Experience and gut instinct were most of what was available to discern good from bad, high from low, and risky from safe.
It was, perhaps, all too human. Our intuitions are far from ideal decision-making instruments. Our brains are afflicted by many cognitive biases that impair our judgment in predictable ways. This is the result of hundreds of thousands of years of evolution during which, as early hunter-gatherers, we developed a system of reasoning that relies on simple heuristics — shortcuts or rules of thumb that circumvent the high cost of processing a lot of information. This enabled quick, almost unconscious decisions to get us out of potentially perilous situations. However, ‘quick and almost unconscious’ didn’t always mean optimal or even accurate.
Imagine a group of our hunter-gatherer ancestors huddled around a campfire when a nearby bush suddenly rustles. A decision of the ‘quick and almost unconscious’ type needs to be made: conclude that the rustling is a dangerous predator and flee, or gather more information to see if it is potential prey — say, a rabbit that can provide rich nutrients. Our more impulsive ancestors, those who decided to flee, survived at a higher rate than their more inquisitive peers. The cost of fleeing and losing out on a rabbit was far lower than the cost of sticking around and risking death from a predator. With such asymmetry in outcomes, evolution favors the trait that leads to less costly consequences, even at the sacrifice of accuracy. Therefore, the trait for more impulsive decision-making and less information processing became prevalent in the descendant population.
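The asymmetry described above can be made concrete with a small expected-cost calculation. This is a minimal sketch with purely illustrative numbers (the probability of a predator and the relative costs are assumptions, not figures from the article): even when predators are rare, the catastrophic cost of meeting one makes the "always flee" heuristic cheaper on average than the more accurate "investigate" strategy.

```python
# Illustrative expected-cost comparison of two decision strategies.
# All numbers are assumed for the sake of the example.

def expected_cost(p_predator: float, cost_if_predator: float,
                  cost_if_rabbit: float) -> float:
    """Expected cost of a strategy, given the chance the rustle is a predator."""
    return p_predator * cost_if_predator + (1 - p_predator) * cost_if_rabbit

P_PREDATOR = 0.05  # assume predators are rare: 1 rustle in 20

# Strategy 1: always flee. Either way you forgo the rabbit's nutrients
# (a small cost of 1), but you never risk your life.
flee = expected_cost(P_PREDATOR, cost_if_predator=1, cost_if_rabbit=1)

# Strategy 2: investigate. You catch the rabbit when it's safe (cost 0),
# but risk death, modelled as a cost of 1000, when it's a predator.
investigate = expected_cost(P_PREDATOR, cost_if_predator=1000, cost_if_rabbit=0)

print(f"always flee:  expected cost {flee:.2f}")        # 1.00
print(f"investigate:  expected cost {investigate:.2f}")  # 50.00
```

Even though investigating is the "accurate" choice 95% of the time, its expected cost is fifty times higher under these assumed numbers, which is the sense in which evolution can favor the impulsive, less accurate heuristic.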