Weaponizing Artificial Intelligence
As a strategist, my worst nightmare is the disappearance of facts. Note that I don't say truth. I consider truth a matter of interpretation. Facts, on the other hand, are immutable. They happened, and there is proof.
But what if that proof wasn't so? What if everything you rely on suddenly became untrustworthy?
As I read the newest Artificial Intelligence news, my skin crawls. The more I read, the more scared I become. I've always been pro-AI. One thing, though, is talking about the future; living it is a different matter. I'm staring into the beast's eyes, feeling its rancid breath, and I'm wetting my pants.
The more I piece the little AI crumbs together, the more frightening the picture becomes. Weeks ago, I wrote about how fake news is a massive problem. Not because of the propaganda, which has existed forever, but because of the scale of distribution.
I never said anything about the content itself because, to date, fake news is rather elementary. Detecting lies and uncovering them is a matter of basic investigation. However, doing this at scale is a real problem. And it's only getting worse. Twitter has, finally, started cracking down on botnets and automation. It remains to be seen how effective the new measures are.
The scary part about fake news is how much it undermines trust. People are being influenced at a massive scale, and this has brutal ramifications for every sector. We're now capable of swinging people's opinions with a single click.
As consumers, we lack the necessary tools to protect ourselves. We have to trust the platforms, and, oh the irony, they're the first ones with an incentive not to protect their users from manipulation attempts.
Worse still, brands and organizations are even more exposed than individuals. Their attack surface is larger. Their response time, slower. Their trust, already undermined by the impersonal nature of the brand.
Few believe that a well-orchestrated fake news attack on an organization can make it crumble. Most think their company isn't a big enough entity to be a worthwhile target. Others believe that fake news means simple propaganda articles. Both accounts are wrong. Anyone can be targeted, and the tactics will be brutal.
But if fake news undermines trust, the new crop of AI models obliterates it. These technologies focus on automated content generation: not only automated image generation, but video and audio manipulation at a level never seen before.
Yes, you could fake a picture before. But it required a semi-expert to make it believable. New AI models (Generative Adversarial Networks) not only edit images at leisure but create entirely new, realistic images of people, in one click and at scale.
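For the curious, here is a minimal sketch of the adversarial idea behind GANs: a generator learns to produce samples that a discriminator can no longer tell apart from real ones. The toy data, network sizes, and hyperparameters below are illustrative assumptions, not any published model.

```python
# Minimal GAN sketch (illustrative): a generator learns to mimic "real" data
# so well that a discriminator can no longer tell the difference.
# Toy 1-D data stands in for images; architecture and sizes are assumptions.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0  # stand-in for real images
noise = lambda n: torch.randn(n, 8)                   # latent input

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator: real samples -> 1, generated samples -> 0
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator: fool the discriminator into outputting 1
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(5)))  # samples that should now resemble the "real" distribution
```

Scale the same loop up to convolutional networks and photographs of faces, and you get the one-click, at-scale image generation described above.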
Yes, you could fake audio before, but it required an expert. Now, given 15 minutes of audio, you can commercially create a new voice and use it to impersonate anyone.
Current audio manipulation techniques are becoming mainstream. In 2016, Adobe announced VoCo, a text editor for audio. The same year, Google's DeepMind released WaveNet, one of the most impressive audio waveform models to date. In 2017, DeepMind announced near-parity of voice recognition. The same year, Andrew Mason, Groupon's founder, launched Descript, another audio text editor for the masses.
Yes, you could fake video before, but it required a super-expert. An expert with costly hardware. Furthermore, the results weren't perfect; experts could spot something amiss with the naked eye. In 2017, Adobe announced Cloak, a one-click tool to make objects disappear from a video. You can't see the difference.
Off-the-shelf open-source software is being used to swap faces in videos automatically. You can still spot them as fakes, but there are critical differences from the past: the software is available to everyone through a point-and-click program, and those using it are amateurs. Professional tuning of the models will rapidly lead to a massive jump in quality. When that happens, and it will be soon, fakes will be indistinguishable to humans.
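The open-source face-swap tools are commonly described as relying on a simple trick: a shared encoder learns a compact representation of faces, and one decoder per identity learns to reconstruct that person. Below is a minimal sketch of that idea; the layer sizes, flattened inputs, and training loop are assumptions for illustration, not the code of any actual tool.

```python
# Sketch of the shared-encoder / per-identity-decoder idea often attributed to
# open-source face-swap tools (sizes and training details are illustrative).
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop

encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 64))    # shared
decoder_a = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))  # person A
decoder_b = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))  # person B

opt = torch.optim.Adam([*encoder.parameters(),
                        *decoder_a.parameters(),
                        *decoder_b.parameters()], lr=1e-3)

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person from the shared code."""
    loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
           nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def swap(face_a):
    """The swap: encode person A's expression, decode it as person B's face."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))

# Dummy batches stand in for aligned face crops extracted from video frames.
for _ in range(100):
    train_step(torch.rand(16, IMG), torch.rand(16, IMG))
print(swap(torch.rand(1, IMG)).shape)  # a frame of A re-rendered as B
```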
Recent AI models, like Face2Face, are capable of making any face in a video say anything they want. Their demo video is impressive. Even more remarkable is that they did it with off-the-shelf webcams.
But if you prefer, we can synthesize a virtual Obama, indistinguishable from the real one, saying anything we want.
The death of trust
Fake news already frightens me. New AI-based content manipulation techniques terrify me.
Being able to reprogram any video or audio and pass it off as fact is bone-chilling. It essentially heralds the end of "seeing is believing."
"Currently, the existence of high-quality recorded video or audio evidence is usually enough to settle a debate about what happened in a given dispute, and has been used to document war crimes in the Syrian Civil War."
The Malicious Use of Artificial Intelligence
Not only does it destroy the capacity to rely on particular facts; it throws a shadow of doubt on accurate ones. If I can't believe a video, a call, or a transcript, then I won't accept anything, even when you provide trustworthy proof.
All this might seem far-fetched, but two elements drive the point home. First, automating the creation of multimedia manipulations lowers its cost. When you reduce the cost, you increase the scale and scope.
"The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence, and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets."
The Malicious Use of Artificial Intelligence
Second, AI systems create a sense of anonymity. The actual manipulator isn't me but the AI. This creates a distance from the target that increases my willingness to do it again.
"AI systems can allow the actors who would otherwise be performing the tasks to retain their anonymity and experience a greater degree of psychological distance from the people they impact."
The Malicious Use of Artificial Intelligence
So while this future isn't here yet, it's not only feasible but inevitable. My prediction is that during the next year we'll see an increase in the amount of fake multimedia content being distributed. At a certain point, this content will seem to drop off the face of the Internet. That's the moment when we should be scared: it will mean fake content has reached peak performance, and we can no longer tell it apart from the real thing.
Can AI also protect us?
It's worth noting that we can also use AI systems for defensive capabilities. I argued as much in the fake news article. We'll eventually see more companies investing in active cyberdefenses.
The problem with fake content, though, is human fallibility. We will build systems that can detect fakes, but people will be too lazy to use them. I find it naïve to suggest that we can solve the problem by educating the user. Yes, education is a big part of it, but thousands of years of history have proven that people are, and will remain, lazy.
"It is likely to prove much harder to secure humans from manipulation attacks than it will be to secure digital and cyber-physical systems from cyber attacks, and in some scenarios, all three attack vectors may be combined."
The Malicious Use of Artificial Intelligence
The truth, though, is that beyond trying to promote a culture of responsibility, there are no realistic detection tools. Several academic papers hint at models that can detect manipulated images or videos; nonetheless, they don't exist commercially or at scale.
"As yet, however, the detection of misleading news and images is an unsolved problem, and the pace of innovation in generating apparently authentic multimedia and text is rapid."
The Malicious Use of Artificial Intelligence
The absence of such detection platforms is a massive opportunity to innovate in this space. On the one hand, we need to build more robust, trustworthy proofs-of-fact: proofs that go beyond "seeing is believing." Decentralized systems like blockchains are one of the key elements for building higher-trust environments.
Right now, decentralized platforms aren't deemed critical. They will be shortly.
"Centralization has also created broader societal tensions, which we see in the debates over subjects like fake news, state-sponsored bots, “no platforming” of users, EU privacy laws, and algorithmic biases. These debates will only intensify in the coming years."
Why Decentralization Matters – @ Chris Dixon
On the other hand, such proof-of-fact systems need to be integrated into users' existing behavior flows. Having the technology to prove that no one has tampered with our content isn't enough; it has existed for decades. The challenge is getting everyone to use it transparently.
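The decades-old technology in question is ordinary cryptographic hashing and signing. Here is a minimal sketch, using only Python's standard library and an assumed publisher-held key, of how published content could be fingerprinted so that any later tampering is detectable:

```python
# Minimal sketch of tamper-evidence for published content, using only the
# Python standard library. The key, sample data, and workflow are assumptions;
# a real deployment would use proper digital signatures and key management.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def fingerprint(content: bytes) -> str:
    """Content fingerprint: changes if a single byte of the content changes."""
    return hashlib.sha256(content).hexdigest()

def sign(content: bytes) -> str:
    """Keyed tag proving the fingerprint was issued by the key holder."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Anyone holding the key can check that the content was not altered."""
    return hmac.compare_digest(sign(content), tag)

original = b"frame data of the original interview video"
tag = sign(original)

print(verify(original, tag))                 # True: untouched
print(verify(original + b" edited", tag))    # False: tampered
```

Publishing that fingerprint somewhere the publisher cannot silently rewrite, such as the decentralized ledgers mentioned above, is what turns this into a proof-of-fact. The hard part, as noted, is making the verification step invisible to users.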
Apart from proof-of-fact, we need real-time fake detection systems. The challenge is absurd: they need to flag questionable fake content without marking legitimate artificial content. That's the crux of the problem. AIs will author more and more content. Some of it will be legitimate; some malicious. Detection systems need to be trained to spot the difference, something the current technology giants are wrestling with.
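At its core, such a detection system is a classifier trained on labeled examples of manipulated and legitimate media. The sketch below shows only the shape of the problem; the features, labels, threshold, and model are assumptions, and real detectors fight a constant arms race against the generators described earlier.

```python
# Sketch of a fake-content detector: a binary classifier over media features.
# Features, labels, and model are illustrative assumptions, not a real detector.
import torch
import torch.nn as nn

N_FEATURES = 128  # e.g. compression artifacts, blending statistics, etc.

detector = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),  # probability the sample is manipulated
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train(features: torch.Tensor, is_fake: torch.Tensor) -> float:
    """One step over a labeled batch: 1 = manipulated, 0 = legitimate."""
    loss = loss_fn(detector(features), is_fake)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def flag(features: torch.Tensor, threshold: float = 0.9) -> bool:
    """Flag only when confidence is high, to avoid marking legit AI content."""
    with torch.no_grad():
        return detector(features).item() > threshold

# Dummy batch stands in for features extracted from real and manipulated clips.
train(torch.rand(32, N_FEATURES), torch.randint(0, 2, (32, 1)).float())
print(flag(torch.rand(1, N_FEATURES)))
```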
Final thoughts
It's easy to undervalue the threat that fake news and fake content pose to society. We always focus on the now, on what affects us tomorrow.
What does this have to do with my company? With my industry? Everything! I feel like I'm preaching in the desert. It reminds me of when, decades ago, I used to tell people how critical their cybersecurity was. No one listened. In a post-Snowden, post-WannaCry world, it seems companies are finally waking up to cybersecurity.
On the fake content and fake news front, though, most companies aren't taking the threat seriously, or addressing it intelligently. It's a massive opportunity to pivot a company into detecting deceptive content or validating content for your industry.
In a world where content is the lifeblood of the Internet, you want to ensure that the content you produce is trustworthy. Trust will become a prime currency. Those who build products and services to serve the trust economy will thrive.
Moreover, as trust erodes, citizens will start pressuring their governments to adopt new regulations. Those regulations will have unintended consequences.
"Therefore, we need to buy time for democratic institutions to evolve and adapt to the new reality imposed by technology. This requires aggressive and effective responses from individuals, governments, NGOs, the private sector, academia, and other organizations to address the risks from MADCOMs."
The MADCOM Future: How Artificial Intelligence Will Enhance Computational Propaganda, Reprogram Human Culture, and Threaten Democracy. @ Matt Chessen