@blebelle
Apr 9, 2018

Democracy and the threat of AI-powered media control

Photo by Kayla Velasquez on Unsplash

It is hard to keep pace with the advances of AI: almost every week brings fresh news of skills we have either taught machines to perform or that they have learned on their own. There are quite a few controversies today over how much of a threat AI poses to humanity, the opposing views of Elon Musk and Mark Zuckerberg being just one of many.

AI is a tool, albeit a powerful one, and we must use it with caution in certain cases. What we can learn from Humanity’s journey so far is that History is always written by conquerors, winners and survivors of battles.

When we look at historical records and stories, we know that we could be looking at potentially biased or incomplete information. What is now frightening is that we are seeing both attempts and new capabilities to redefine reality and reported information in near real-time.

AI-powered fake news toolbox is already here

During the Cold War era, if you wanted to change the perception of History, you had to physically delete people from printed photos. The Soviet Union was renowned for its pre-digital Photoshop habit of erasing unwanted figures from official pictures. It required skill, craftsmanship and time.

Today, thanks to new AI capabilities, this can be done overnight without a problem. Recent advances from chipmaker Nvidia demonstrate the capability of AI to create pictures of artificial celebrities that look truly human (without the associated physical DNA).

The New York Times ran a very interesting article describing these new features, but what got me really worried was the last part, which described how, with the help of AI, you could redub a video interview with another soundbite. Since we already know that technology such as Baidu’s ‘Deep Voice’ can recreate the voice of a given public figure from just under 4 seconds of audio, this means we can completely alter any audio or video recording…

We have to understand just how many components of a piece of media can now be altered:

* AI can now simulate different weather conditions on top of a given photo, paving the way to the temporal displacement of a story (an image can be made to look as if it were taken at another time or season).

* Adobe’s AI-enabled video-editing features allow you to delete people or objects from movie shots, paving the way to a full redesign of what we call reporting on reality.

Last but not least, as demonstrated by the recent Cambridge Analytica / Facebook scandal, technology and data-science methods allow efficient behavioral profiling. This means that any population can be segmented very accurately into highly detailed clusters that can then be used for targeted communication or fake news / disinformation campaigns.

And this profiling will become even more efficient when AI-powered software is embedded in the connected devices around us and can read our facial expressions. Knowing someone’s pain points enables highly effective fake news, which has the power to influence how they vote in an election.
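To make the segmentation idea concrete, here is a minimal sketch of behavioral clustering using scikit-learn’s KMeans on synthetic data. The engagement features, cluster count and numbers are my own illustrative assumptions, not the actual methods used in the Cambridge Analytica case.

```python
# Minimal sketch: clustering users into behavioral segments.
# The features and data are synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical engagement features per user:
# [likes_per_day, shares_per_day, political_page_follows, hours_online]
users = rng.normal(loc=[5, 1, 3, 2], scale=[2, 0.5, 2, 1], size=(1000, 4))

# Standardise so each feature contributes comparably to the distance metric.
X = StandardScaler().fit_transform(users)

# Group the population into a handful of behavioral clusters.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Each cluster centre describes an "average" member of that segment,
# which is what makes narrowly targeted messaging possible.
for i, centre in enumerate(kmeans.cluster_centers_):
    size = int((kmeans.labels_ == i).sum())
    print(f"segment {i}: {size} users, centre (standardised) = {np.round(centre, 2)}")
```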

Democracy can be threatened by AI media control

Publication of content promoting acts of terrorism, or glorifying war and hatred, has to be condemned and fought. This is vital to protect citizens across the world. On the bright side, we can train AI to detect “inappropriate video content” such as ISIS propaganda or other abhorrent material (think child pornography).

This capability is essential: the more audio and video content is created every minute, the more Herculean the task of sifting through it becomes for moderators. AI assistants trained for content detection are needed to hunt down such offensive content and take it offline. The UK Home Office recently developed software of this kind with ASI Data Science to tackle ISIS propaganda.
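To illustrate what a content-detection assistant does at its core, here is a toy text-classifier sketch. It is not the UK Home Office / ASI Data Science tool, whose internals are not public; the tiny dataset, labels and threshold are invented purely to show the flag-for-review pattern.

```python
# Toy sketch of a content classifier: train on labelled examples,
# then flag new text that scores above a moderation threshold.
# The tiny dataset and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join our fight and spread the glorious propaganda",  # flagged
    "recruitment video calling for violent attacks",      # flagged
    "local bakery opens a second shop downtown",          # benign
    "city council debates the new cycling lane budget",   # benign
]
labels = [1, 1, 0, 0]  # 1 = should be reviewed / taken down, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_item = "propaganda video urging recruits to attack"
score = model.predict_proba([new_item])[0][1]

# Real systems combine many signals (audio, video frames, metadata);
# this only shows the flag-above-a-threshold pattern.
if score > 0.5:
    print(f"flag for human review (score={score:.2f})")
else:
    print(f"looks benign (score={score:.2f})")
```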

But let’s imagine, for a second, that a dictatorial government wants to preserve the status quo of its grip on the population. It could restrict access to the Internet and put in place a highly monitored and filtered gateway. Such a state could now easily create artificial newscasts to present to its citizens, creating a dystopian, country-wide Truman Show.

A futuristic techno-threat dreamed up by aspiring thriller writers? Not quite. A recent publication from the European Data Protection Supervisor actually warns that online political manipulation is already feasible.

On top of this, we know that the curation algorithms of current social media sites and search engines create user-centric bubbles that prevent regular exposure to other points of view. Unless we are mindful of this, we run the risk of a dangerous feedback loop in which we only ever see a narrow slice of content, ultimately feeding the polarisation trend visible on any social site.
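As a rough illustration of this feedback loop, here is a toy simulation, my own simplification rather than any platform’s real algorithm, in which always recommending the closest-matching items gradually limits the range of viewpoints a user is ever shown.

```python
# Toy simulation of a curation feedback loop (not any real platform's algorithm):
# always recommending the closest-matching items narrows what the user ever sees.
import random

random.seed(1)

# Content items sit on a 1-D "viewpoint" axis from -1.0 to +1.0.
catalogue = [random.uniform(-1, 1) for _ in range(500)]
user_interest = 0.1  # the user's starting position on that axis
seen = []            # viewpoints of items actually shown

for _ in range(50):
    # Recommend the 5 items closest to the user's current interest.
    shown = sorted(catalogue, key=lambda v: abs(v - user_interest))[:5]
    seen.extend(shown)
    # The user's interest drifts toward what they were just shown.
    user_interest = 0.9 * user_interest + 0.1 * (sum(shown) / len(shown))

print(f"full catalogue spans {min(catalogue):.2f} to {max(catalogue):.2f}")
print(f"user was only ever shown {min(seen):.2f} to {max(seen):.2f}")
```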

Democracy requires open discussions and debates built on clear, unbiased news and information. Productive debate requires the ability to gather information from a multitude of perspectives. If we are not careful, AI-powered fake news could jeopardize any democratic election.

Harnessing the power of AI

As said before, AI is just another tool; we only need to learn to use it and leverage it to advance Humanity. At the same time, we must ensure we remain able to analyze the news on our own, in spite of AI.

We need to ensure that kids, from a very early age, are made aware of the existence of fake news and are trained in how to spot it. Teaching the essential habits of fact-checking and seeking alternative confirming sources will prevent them from being tricked. A much needed survival skill for their future.

Media companies need to look at ways of leveraging and embedding AI in Journalism. Radio-Canada recently ran an AI hackathon that we supported, where innovative solutions were proposed. The winning idea, called Panorama, will, when fully developed, help journalists analyze published articles, segment them and look for missing perspectives. There is a growing need to equip journalists with an AI-powered toolbox to navigate today’s murky waters of content and biased news.

As a society, we must ensure that technology’s ultra-personalized curation features do not create more bias or more polarization bubbles. This would help prevent a snowballing effect when fake news is pushed out at critical moments of an election. We must also find better ways to detect when AI is being used maliciously to create alternate-reality content (audio / video) at critical moments of any Democratic process.

I strongly believe that AI can be used for the greater good if we are both conscious of its potential pitfalls and malicious uses and willing to collectively implement the appropriate safeguard mechanisms. As our world becomes more digital, it is essential that we prevent any distortion of reality when people look for information and news.

@blebelle

Join in on the conversation with Bernard Lebelle when you subscribe to Exponentials.