Fake news is the new black. Read any publication, and you'll encounter one or two accounts of fake news. The term is a neologism used to refer to fabricated news, propaganda or information warfare.
Despite its tiring ring, despite it not being new, and despite its imprecise, all-encompassing definition, it has never posed as great a threat to society as it does today.
Information manipulation has been the silent weapon in many recent geopolitical conflicts. And while there has always been propaganda, its sophistication reached new heights in the 2016 US presidential election.
The problem, though, is that fake news isn't only manipulating political events. Its influence extends to governments, to strategic decisions like Net Neutrality and, indeed, to industry-wide warfare.
Anatomy of fake news
Several aspects of fake news have changed in recent years, and these changes have turned it into a terrifying threat to society. Fake news needs three components to work.
Content: The poisoned apple
At its core, interested parties create content designed to manipulate the audience's opinion. The more inflammatory, crude, visceral or outrageous the better.
Thanks to the Internet, it's never been easier to create content. Anyone can launch an online publication and start dumping their ruminations.
Fake content isn't limited to political agendas. It can touch virtually anything: human-trafficking claims against an enemy, or lies about how a company treats its employees.
Distribution: The propaganda machine
Fake news content needs to be widely circulated, and it's important to deliver it to the right people at the right moment. Although Facebook has been getting all the press lately, there is another suspect present at every single fake news incident. Yes, Twitter.
Although propaganda efforts engage many different delivery channels, Twitter's core design is the perfect distribution engine. It's anonymous, it's in real-time, it allows for easy targeting and, for the most part, it's unpoliced.
Facebook, on the other hand, restricts distribution through an algorithm (the news feed), is focused on identifiable profiles and is slightly better policed than Twitter.
Although the latter is harder to exploit, both are extensively used for information manipulation.
Email, forums, websites, chats, you name it. Wherever there is a strategic audience, interested parties will target it.
It's important to mention that the target audiences don't need to be large. Political misinformation, for example, tends to target sizeable audiences. Other forms of manipulation, like attacks on brands, products or business deals, might not call for a massive audience, just the right one.
Scale: The unstoppable tsunami
The previous two elements of fake news have existed for decades. There have always been hidden agendas, and there have always been ways to reach an audience, be it in a forum, a book or a newspaper.
“Nothing can now be believed which is seen in a newspaper. Truth itself becomes suspicious by being put into that polluted vehicle. The real extent of this state of misinformation is known only to those who are in situations to confront facts within their knowledge with the lies of the day […] I will add, that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods & errors.”
(Thomas Jefferson, letter to John Norvell, 1807)
Nonetheless, the capacity to produce fabrications and distribute them to a broad audience has never been as great as it is today.
In other words, the scale of misinformation we can attain today is orders of magnitude greater. Automation enables us to create more content than ever before: Deep Learning systems can copy, summarise or even create content at scale.
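As a rough illustration of how cheap generated content has become, here is a minimal sketch using the open-source Hugging Face transformers library. The gpt2 model and the prompt are arbitrary choices for the example, not tools any known campaign is confirmed to use:

```python
# Minimal sketch of automated content generation, assuming the
# Hugging Face `transformers` library; gpt2 is an arbitrary example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# One prompt, many variations: each output is a candidate "story" to post.
outputs = generator(
    "BREAKING: newly leaked documents reveal that",
    max_length=60,
    num_return_sequences=5,
    do_sample=True,
)
for out in outputs:
    print(out["generated_text"])
```

A few lines like these, run in a loop, produce an endless stream of plausible-looking variations on the same narrative.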
Hand in hand with this scale go the distribution capabilities of networks like Facebook and Twitter. Never in the history of humankind have we experienced such colossal aggregation platforms.
The combination of these two facts makes the scale of information manipulation tremendous.
Fake news in the age of Artificial Intelligence
To accomplish scale, there needs to be a certain degree of automation. Automatic content creation is one part. Autonomous distribution and amplification, though, is the cornerstone.
The way propaganda automates distribution is through bots: computer scripts, posing as human users, that automatically distribute fake news on social networks. Sometimes a human operates the bot; at other times it runs autonomously. The human-assisted hybrids are called cyborgs.
Twitter is, by far, the largest breeding ground for bots. The social network's design is perfect for them: it exposes an API, which enables the automation of basic operations, and it allows the creation of anonymous accounts at scale.
Bots, though, don't operate in isolation. Bot owners cluster their creations into swarms that run in a coordinated way. These hives are called botnets.
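To appreciate how little engineering the distribution side takes, here is a hypothetical sketch of a coordinated swarm using the tweepy client library. The credentials, account list and message are placeholders; this illustrates the mechanics only, not anyone's actual tooling:

```python
# Hypothetical sketch of coordinated posting across a bot swarm,
# assuming the `tweepy` library and a list of pre-registered accounts.
import random
import time

import tweepy

# Placeholder credentials; a real botnet would hold hundreds of these.
ACCOUNTS = [
    {"consumer_key": "...", "consumer_secret": "...",
     "access_token": "...", "access_token_secret": "..."},
    # ... one entry per bot account
]

MESSAGE = "Example amplified message #SomeHashtag"

for creds in ACCOUNTS:
    client = tweepy.Client(**creds)
    client.create_tweet(text=MESSAGE)    # the amplification step
    time.sleep(random.uniform(30, 300))  # jitter to look less robotic
```

The hard part isn't the code; it's registering and maintaining the accounts.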
"Twitter bots can pose a series of threats to cyberspace security. For example, they can send spam tweets to other users; they can create fake treading topics; they can manipulate public opinion; they can launch a so-called astroturfing attack where they orchestrate false ‘’ campaigns to create a fake sense of agreement among Twitter users; and they can contaminate the data from Twitter’s streaming API that so many research works have been based on; they have even been linked to election disruption."
Smaller botnets are composed of 30-40 bots. Bigger botnets might be as extensive as 350,000 bots (Jan. 2017). The latest discovered botnet, called Bursty, implicates 500,000 fake Twitter accounts (Sep. 2017). Depending on how conspicuous and aggressive these bots are, they can tweet between 72 and 300 times a day. At the lower range, that gives a throughput of 36 million tweets per day.
In early 2017, researchers estimated the Twitter bot population at between 9% and 15% of all users. That translates to around 49.2 million bots. And this is a conservative figure based on current detection methods; the recent discovery of the Bursty botnet is a good example of their flaws.
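These figures check out on the back of an envelope; the roughly 328 million monthly active users Twitter reported in early 2017 is my added assumption for the user base:

```python
# Back-of-the-envelope check of the figures above.
bursty_bots = 500_000
low_rate = 72                            # tweets per bot per day, low end

print(bursty_bots * low_rate)            # 36,000,000 tweets/day

monthly_active_users = 328_000_000       # assumed: Twitter's early-2017 MAU
print(0.09 * monthly_active_users)       # ~29.5 million bots at 9%
print(0.15 * monthly_active_users)       # ~49.2 million bots at 15%
```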
The primary problem is that bot-creation and bot-detection have entirely different timelines. Twitter makes it nearly frictionless to create new accounts; detecting a fake one, though, isn't easy at all.
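Detection usually begins with cheap profile heuristics. The sketch below is illustrative only: the features are commonly cited in the bot-detection literature, but every threshold is an assumption, not Twitter's or any published system's actual rule:

```python
# Illustrative profile-based bot heuristics; every threshold here is an
# assumption for the example, not a published detection rule.
def bot_score(account: dict) -> float:
    """Return a rough 0..1 suspicion score from profile metadata."""
    score = 0.0
    if account["has_default_profile_image"]:
        score += 0.25
    if account["followers"] < 0.05 * account["following"]:
        score += 0.25                  # follows many, followed by few
    if account["tweets_per_day"] > 72:
        score += 0.25                  # at the low end of bot activity
    if account["account_age_days"] < 30:
        score += 0.25                  # freshly minted account
    return score

suspect = {"has_default_profile_image": True, "followers": 3,
           "following": 900, "tweets_per_day": 140, "account_age_days": 12}
print(bot_score(suspect))              # 1.0 -> worth a closer look
```

Heuristics like these are exactly what sophisticated bots are built to evade, which is why account creation stays cheap while detection stays slow.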
The problem is so critical that in 2015 DARPA (the US Defense Advanced Research Projects Agency) organized the first-ever Twitter bot challenge, aimed at improving the detection of influence bots on the platform.
Since then, bot detection technology has improved, but it's not enough. The recent discoveries of the Star Wars and Bursty botnets attest to this.
The most worrisome aspect is that, despite all the efforts to detect bots and fake news amplification nodes, there isn't an easy way to stop them.
Botnet neutralization starts with detecting their activity, and this first step is already complicated. Tweetstorms will be identified quickly, but other techniques are far subtler. Once fraudulent news activity is detected, the second step is to uncover the botnet behind it. Some bots might be obvious, but others are very sophisticated.
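Uncovering a botnet usually means hunting for coordination signals. One simple sketch, assuming a feed of (user, text, timestamp) tuples: flag messages that many distinct accounts push within a short window. The data format and thresholds are assumptions for the example:

```python
# Sketch of one coordination signal: many accounts posting near-identical
# text within a short window. Data format and thresholds are assumptions.
from collections import defaultdict

def coordinated_clusters(tweets, window_secs=600, min_accounts=20):
    """tweets: iterable of (user_id, text, unix_timestamp) tuples."""
    buckets = defaultdict(set)
    for user, text, ts in tweets:
        normalized = " ".join(text.lower().split())
        buckets[(normalized, ts // window_secs)].add(user)
    # Flag groups where many distinct accounts pushed the same message.
    return [users for users in buckets.values() if len(users) >= min_accounts]
```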
"Social bots can search the Web for information and media to fill their profiles, and post collected material at predetermined times, emulating the human temporal signature of content production and consumption—including circadian patterns of daily activity and temporal spikes of information generation."
The last step is maybe the hardest. If we manage to identify part of the botnet, we then have to disrupt it. As most detections happen outside of Twitter, the only way to stop the botnet is to report it to the company. According to their policy,
"Some of the factors that we take into account when determining what conduct is considered to be spamming include:
[…]
If a large number of people have blocked you in response to high volumes of untargeted, unsolicited, or duplicative content or engagements from your account;
If a large number of spam complaints have been filed against you;"
Therefore, it's up to Twitter to decide when to take such accounts down. The important fact, though, is that the time between fake news activity detection and botnet disruption might be quite long. Each phase might take days or even weeks. In a week, a regular, not-too-aggressive botnet (~100 bots) might push somewhere between 50,000 and 100,000 tweets. That's enough to take over and disrupt a conversation, a hashtag or a Twitter trend.
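That weekly estimate checks out: 100 bots posting at the lower half of the 72-300 tweets-per-day range produce roughly the stated volume.

```python
# Quick check of the weekly output of a modest, not-too-aggressive botnet.
bots, days = 100, 7
low, moderate = 72, 140          # tweets per bot per day

print(bots * low * days)         # 50,400 -> the ~50,000 lower bound
print(bots * moderate * days)    # 98,000 -> the ~100,000 upper bound
```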
Future of fake news
Although the press has devoted much time to discussing the impact of fake news on politics, I feel that is also a diversion.
Information warfare is being used, as we speak, to influence major strategic decisions. Such powerful botnets can attack a country, but they can and will subvert any organization that lies in their wake.
It's not farfetched to think that the current backlash against the big technology companies is, to an extent, amplified by the fake news echo chamber.
It's easy to plant disinformation as long as it's what the audience wants to believe. Giants like Facebook are the enemy now, so any content bashing them will go massively viral.
Today it's Facebook; tomorrow it could be Bayer, Unilever, Maersk or your organization.
Cybersecurity investment is on the rise, but still, I don't know of any company that's deploying bot hunters and botnet disruptors. The only way to fight scale and automation is with scale and automation.
Cyberwarfare isn't just for governments anymore. Companies need to invest in cyber defenses and be able to disrupt fake news attacks in real time.
If you like this article, please share it and invite others to follow the newsletter. It really helps us grow!