Back in 2011, IBM amazed the world when its AI-enabled computer system, Watson, won $1 million and took first place on the game show Jeopardy—beating two of the best human contestants. Seven years later, another AI-enabled computer system is in the news—this one completely taking the chess world by storm. Google’s AlphaZero took a mere four hours to learn the game and obliterate the world champion chess program, Stockfish. The event makes it clear that AI—and, more specifically, deep learning—has officially reached transformative proportions. But why has it taken so long—and what has now changed?
The transformative evolution of machine intelligence
According to an article in the Harvard Business Review titled “The Simple Economics of Machine Intelligence”, the recent surge in the transformative power of AI can best be explained through an economic lens: machine intelligence has finally evolved to the point where it can dramatically lower the cost of prediction.
While prediction has always been part of business—whether we’re talking about forecasting or algorithmic risk models—recent advances in AI are making it easier to deploy. Today, for instance, banks have access to increasingly accurate risk models—allowing them to make better decisions than ever about market, credit, and liquidity risk. In addition to the obvious applications, this growing accessibility is also changing how we approach prediction—allowing us to apply it to completely new, and previously unimaginable, tasks.
Take self-driving cars, as an example. While autonomous vehicles have existed for more than a decade, they were traditionally programmed with “if-then-else” decision algorithms (e.g., if an object approaches the vehicle, then stop) and restricted to extremely controlled settings, like manufacturing plant floors. However, as the cost of predictive technology dropped, car makers started to view driving as a prediction problem. Suddenly, instead of programming “if-then-else” commands, they asked a simple question: How can vehicles predict what a human driver would do?
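The contrast between the two approaches described above can be sketched in a few lines of code. This is a purely illustrative toy, not real vehicle software: the controllers, the sensor inputs, and the trivial nearest-neighbour “model” standing in for a system trained on logged human driving are all hypothetical names invented for this example.

```python
def rule_based_controller(distance_to_object_m: float) -> str:
    """Traditional approach: explicit, hand-written 'if-then-else' rules."""
    if distance_to_object_m < 5.0:
        return "stop"
    elif distance_to_object_m < 20.0:
        return "slow"
    else:
        return "proceed"


class NearestNeighbourDriver:
    """Stand-in for a model trained on logged human driving.

    Here it simply looks up the logged (distance, action) example closest
    to the current situation; a real system would learn from vastly more
    features and data.
    """

    def __init__(self, logged_examples):
        self.examples = logged_examples  # list of (distance_m, action) pairs

    def predict(self, distance_m: float) -> str:
        closest = min(self.examples, key=lambda ex: abs(ex[0] - distance_m))
        return closest[1]


def predictive_controller(distance_to_object_m: float, model) -> str:
    """Prediction framing: ask the model what a human driver would do."""
    return model.predict(distance_to_object_m)


# "Training data": observed human reactions at various distances.
model = NearestNeighbourDriver([(3.0, "stop"), (15.0, "slow"), (50.0, "proceed")])

print(rule_based_controller(4.0))
print(predictive_controller(4.0, model))
```

The rule-based controller behaves only as its authors anticipated, while the predictive controller's behaviour is determined by the examples it was given, which is exactly why the quality and coverage of that data matters so much in the discussion that follows.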
“It is hard to imagine the limit to which AI will be used to transform industries,” says Ofer Shai, Director of AI in Deloitte’s Strategic Analytics and Modeling practice within Financial Advisory Services and our resident deep learning expert. “What used to be strictly in the domain of science fiction is quickly becoming a reality. We are seeing huge strides towards driverless cars, doctor-in-a-box – devices that will diagnose and even treat medical conditions, and true digital personal assistants, which not only address your needs, but anticipate them.”
Despite this evolution, however, plenty of wrinkles still need to be ironed out before we can truly realize the transformative potential of predictive technologies. When AI operates as expected, it will deliver great improvements: safer driving, faster reactions to the stock market, improved customer interactions, and greater personalization of marketing campaigns. However, there will be rare instances where things go wrong and, when that happens, the failure may be more extreme than it would have been without AI.
Take, for example, the “flash crash” of May 2010, when algorithmic trading was involved in a sudden, trillion-dollar stock market plunge. The crash was triggered by a single large-volume trade and propagated by automated high-frequency trading (HFT) systems reacting to the original event; it lasted approximately 36 minutes before prices rebounded to pre-crash levels. Consider a similar, hypothetical situation with autonomous vehicles: a highway traffic-optimizing AI causes a significant interstate shutdown in response to a benign situation its designers hadn’t thought to train it on. Or, in retail, consider the case of Target correctly identifying a teenage girl as pregnant based only on her shopping habits. In these cases, the humans deploying the systems hadn’t considered all of their potential implications.
Deep learning raises ethical concerns about privacy as well. AI might piece together an identity from ever-higher volumes of ever-more granular data—even anonymized data—drawn from a wide variety of sources across the Internet of Things, and may be able to trace individuals and their actions from the data collected. Organizations will need to establish robust ethical frameworks and controls over how AI systems gain access to data.
At the same time, data is essential to the success of predictive technologies—the first self-driving cars “studied” hundreds of miles of human driving behaviours before taking to the roads autonomously. But how much data is enough? Coming up with that number is a bit of an art form, and definitely introduces new forms of strategic risk. For example, while many professions may benefit from the lower cost of prediction, specialized forms of surgery may be another story. If only 1,000 people across the world can do a certain job, is that enough data for an AI program to successfully make life-saving predictions?
This example is precisely why, as the value of human predictive abilities decreases in the face of AI, the value of human judgment skills will inevitably rise. Because while AI may be able to better detect certain illnesses and treatable conditions, for example, humans will still need to discuss the pros and cons of each treatment option with the patient, administer the treatment, provide emotional support, and assist in the recovery process. Clearly, the “human touch” remains necessary.
We’re too early in our transformative journey to fully understand how—and to what extent—AI and deep learning will change business as we know it. That said, it’s helpful to recognize the impact of something like increasingly accessible prediction, and to start thinking outside the box in terms of how we leverage it.
Join in on the conversation with Paul Skippen when you subscribe to Exponentials.