Dec 5, 2017

If organizations hope to successfully leverage transformative technologies to solve complex business challenges, those technologies can’t remain confined to the IT department. At some point, they must pass into the hands of the less tech-savvy, and often error-prone, end user. That handoff, of course, gives rise to a whole new world of risk.

Navigating the cognitive risk landscape

Every organization is familiar with the danger of “fat finger errors”: mistyped entries that can throw off the accuracy of critical reports and trigger serious operational risk. The saving grace of these manual errors is that they’re typically confined to a single mistake by an individual employee, and they can often be caught before causing serious damage, provided the right control mechanisms have been designed and implemented.

But what happens in a cognitive computing world where robots are responsible for completing the routine tasks humans used to do? If an end user makes a mistake when configuring a robot, or if a fat finger error is inadvertently coded into an algorithm responsible for high-volume, high-speed data processing, system performance issues can multiply exponentially. After all, today’s systems are tuned to the pace of humans, not robots, which means a mistake that might once have affected a handful of reports can now cascade across multiple reports and systems, potentially turning one wrong number into hundreds or even thousands and causing material reporting errors.
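
To make that cascade concrete, here is a minimal, purely hypothetical Python sketch (all names and figures are invented for illustration): a single mis-keyed constant, applied by an unattended bot at machine speed, taints every record it touches before anyone can intervene.

    # Hypothetical illustration only: a fat-fingered constant consumed by a bot.
    FX_RATE = 13.1  # mistyped: the intended rate was 1.31

    def convert(amount_usd: float) -> float:
        # The bot applies the bad rate at machine speed, with no human review.
        return amount_usd * FX_RATE

    transactions = [100.0, 250.0, 975.50]  # a real feed would hold thousands more
    report = [convert(t) for t in transactions]
    # Every downstream report built on `report` now inherits the same error.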

This, of course, is just one example of how high-speed bots and algorithms can allow errors to proliferate, but the risks don’t end there. According to Baskaran Rajamani, a Partner in Deloitte’s Risk Intelligent Robotics and Cognitive Automation practice, while automation initiatives can reduce process risks by eliminating manual tasks and automating certain controls, they can also introduce a wide variety of new operational, financial, regulatory, and organizational risks.

“Incorrectly programmed bots with extensive privileges could cause undetectable regulatory breaches, a loss of integrity in financial reports, expensive disaster recovery, and unthinkable damage if the bots ever came under the control of malicious hackers,” he explains.

Given the unique risk profile of cognitive technology, it makes sense to apply a higher level of oversight to these implementations than to traditional IT projects. Beyond the standard controls governing the configurability, design, and functionality of a new application, robotic deployments need to be risk intelligent, and that requires an entirely new set of policies, models, and controls.

To determine what these should look like, we first need to view robotic implementation in a somewhat different light than we’re used to—one of semi-automation rather than full automation. In a Star Trek future—where humans and AI work in harmony—robots are not hyper-intelligent entities designed to eradicate human jobs. Rather, they’re more like manual labourers created to enhance human intelligence. As such, they require supervision.

In the real world, this involves repurposing existing IT controls to fit a robotic reality. For instance, when does it make sense to turn on a bot? Can bots access corporate systems using the IDs of their users or should they be assigned unique IDs? How can you swiftly detect and address performance issues related to cognitive technology?
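
To make these questions tangible, here is one way such a control might be sketched in Python. This is an illustrative outline under assumed requirements, not a reference to any real RPA platform’s API; the class, method, and parameter names are all hypothetical.

    import logging
    import time

    logging.basicConfig(level=logging.INFO)

    class BotController:
        """Hypothetical control wrapper: a unique bot identity, an explicit
        on/off switch, and a simple throughput check that trips a kill switch."""

        def __init__(self, bot_id: str, max_tasks_per_sec: float):
            self.bot_id = bot_id              # unique ID, never a human user's
            self.enabled = False              # bots start disabled until approved
            self.max_tasks_per_sec = max_tasks_per_sec
            self._count = 0
            self._window = time.monotonic()

        def run(self, task):
            if not self.enabled:
                raise RuntimeError(f"{self.bot_id} is disabled")
            self._count += 1
            elapsed = time.monotonic() - self._window
            if elapsed >= 1.0:
                if self._count / elapsed > self.max_tasks_per_sec:
                    self.enabled = False      # trip the kill switch
                    logging.warning("%s exceeded %s tasks/sec; halted for review",
                                    self.bot_id, self.max_tasks_per_sec)
                self._count = 0
                self._window = time.monotonic()
            return task()

    bot = BotController("BOT-INVOICE-01", max_tasks_per_sec=50)
    bot.enabled = True  # an explicit, auditable decision to turn the bot on
    bot.run(lambda: "invoice posted")

The design point is that the bot carries its own identity and its own switch, so a runaway process can be halted, and its actions attributed, without touching any human user’s account.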

In addition to significant policy and control changes, organizations will also have to reevaluate the effectiveness of their model management. In fact, when it comes to cognitive technologies, AI, and deep learning, even today’s advanced model management controls may not be sufficient.

Consider, for example, a case from earlier this year in which a Facebook algorithm independently created three anti-Semitic advertising categories, allowing advertisers to target anti-Semitic audiences. The categories went completely undetected by Facebook staff until ProPublica journalists, posing as advertisers, brought them to the company’s attention.[1]

This event illustrates that it’s no longer enough for companies to ensure their models are properly validated, subject to appropriate access controls, effectively documented, and delivering predictable, consistent outcomes. “When deploying robotics, AI, or cognitive solutions, organizations must now also take steps to add an ethical double-check into the mix,” says Julie Calla, a Senior Manager in Deloitte’s Operational Risk practice who focuses on robotics, cognitive, and AI. “Additionally, when building deep learning models, it’s important to understand that the improper use of demographic data can lead to biased results. To ensure your systems are properly configured for new scenarios, AI must be understood in context.”
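
As a thought experiment, one piece of that ethical double-check could be as simple as screening a model’s input features, or its auto-generated categories, against a list of protected attributes before anything ships. The Python sketch below is hypothetical and deliberately simplistic; a real review would be far broader and would end with human judgment.

    # Hypothetical pre-deployment screen: flag protected attributes for review.
    PROTECTED_ATTRIBUTES = {"religion", "ethnicity", "gender", "age"}

    def review_features(features):
        """Return any features that warrant human review before deployment."""
        return sorted(set(features) & PROTECTED_ATTRIBUTES)

    flagged = review_features(["postal_code", "religion", "purchase_history"])
    if flagged:
        print("Escalate for human review before release:", flagged)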

Admittedly, these new technologies raise daunting new risks, but that shouldn’t deter companies from embracing them. As in the early days of eCommerce, organizations with mature risk management will enjoy a first-mover advantage in this new technological era. The key, then, is to begin your transformation journey with your eyes open. And that starts with designing and implementing a strong foundational risk framework: one capable of identifying control gaps and mitigating incremental risk, both now and for years to come.

To discuss how the emerging risks of cognitive technologies might affect your organization, please reach out to Baskaran Rajamani at [email protected] or Julie Calla at [email protected], or contact me directly.

Paul Skippen

[email protected]

[1] “Facebook Enabled Advertisers to Reach ‘Jew Haters’,” ProPublica, September 2017. https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters