
In 2017, two Stanford researchers hypothesized that faces contain more information about sexual orientation than the human brain can interpret. To test their theory, they built a machine learning algorithm and ran it against 35,000 pictures from online dating sites; it distinguished between gay and heterosexual men in 81% of cases, and between gay and heterosexual women in 71%[1]—accuracy rates that far exceed those of human judges. However, while their intentions may have been earnest, the study was not well received, inspiring an intense social backlash, an ethical review by the American Psychological Association, and an ongoing debate over the ethical repercussions of AI.

New ethical frontiers: when robots don't know how to behave

Regardless of where you stand on what has been dubbed the “gaydar” study—whether you believe it’s based on “junk science”[2] or view it as a form of “AI phrenology”[2]—it’s hard not to notice the larger issue it raises. AI is opening a whole new can of ethical worms—one that neither the research community nor the business community is currently equipped to deal with.

Just as Stanford’s review board approved the “gaydar” study because, under the university’s 40-year-old regulations, it didn’t break any ethical rules, businesses around the world are using outdated systems and processes to govern AI—and finding themselves on the wrong side of public opinion as a result. Last year, for example, one US retailer opted to lock up frequently shoplifted products, not realizing those products were used primarily by a specific ethnic minority. In a separate case, a retailer launched a direct mail campaign based on customers’ buying habits—and ended up sending baby-related coupons to a teenage girl’s home, revealing her pregnancy to her parents before she had a chance to tell them herself.

For organizations intending to leverage AI and other emerging technologies, these stories serve as cautionary tales. They not only highlight the potential dangers associated with the reckless adoption of AI, but they also underline the important role human intelligence (HI) plays in a successful tech rollout.

“Data, on its own, can’t make business decisions. Companies need to leverage human skills to examine the root causes behind their analyses, and ensure all decisions not only make business sense—but ethical sense as well,” says Matt Denis, Manager in Deloitte’s Risk Advisory practice.

An ethical minefield

This is understandably easier said than done. As with many things related to emerging technologies like AI, machine learning, cognitive computing, and robotic process automation, when it comes to ethics we’re venturing into uncharted terrain. Right now, there is very little guidance—and quite a bit of debate—surrounding what AI-related ethical constructs should look like. And because these technologies are advancing faster than regulators can keep up, the challenge likely won’t be resolved anytime soon.

“It’s possible for an organization to leverage AI in a way that doesn’t contravene any laws, but nevertheless could land them on the negative side of public opinion,” says Matt. “Given the fact that public opinion varies from culture to culture, and demographic to demographic, it’s difficult to determine what customers will be okay with—and what will inspire outrage.”

Admittedly, this could be changing. Earlier this year, the European Union implemented the General Data Protection Regulation, which places restrictions on “automated individual decision-making which ‘significantly’ affects users”[3]. The legislation essentially allows consumers to ask for an explanation if an algorithmic decision significantly affects them in some way. Many believe this legislation will force organizations operating in Europe (including North American companies with European operations) to put more thought into how they build their machine learning algorithms—and strive to make them both easier to explain and less likely to discriminate.

Paving an ethical path forward

Yet, experience shows that technological advancements are still likely to outpace regulatory trends. That’s why organizations have a responsibility to build their own ethical constructs—something the financial services sector has long understood.

With the swift evolution of criminal banking activity, financial institutions have been using data to identify fraudulent transaction patterns for quite some time—and, through trial and error, have learned about the dangers of drawing conclusions from raw data without further analysis. Turning a grandmother into an unbankable customer because she unwittingly moved into a former grow house—or reporting an innocent middleman who is unknowingly being exploited in a well-orchestrated insurance scheme—doesn’t help your brand. That’s why—whether you’re dealing with fraud-related data or customer data acquired through AI algorithms—it’s important to analyze it carefully on a case-by-case basis and weigh the pros and cons of every potential outcome.

“It’s about baking corporate ethics into your AI and data analysis practices,” says Matt. “This means taking strides to articulate your organization’s ethical principles, beliefs, and values, and building your technology policies around them.”

This could involve designating a person—or a team of people—to review data usage through an ethical lens and make sure every decision makes sense from a corporate conduct perspective. You may even want to take a few steps back, re-examine how you’re using your existing data, and work backwards to ensure the right controls are in place to support ethically sound decisions.

However you approach it, the ultimate goal should be to marry HI with AI—and never leave AI unsupervised. Because while they’re good at a lot of things, robots haven’t yet mastered the nuances of human ethics.

To learn more, please reach out to Matt at [email protected], or contact me directly.

Paul Skippen

[email protected]



[3] Bryce Goodman and Seth Flaxman, “European Union regulations on algorithmic decision-making and a ‘right to explanation’”