Twitter Bots: A New Era of Digital Warfare
For decades, China’s communist government has disseminated propaganda through state-controlled and heavily censored media, such as radio, print, and television. Currently, there is little research on the government’s involvement online. There have been a few scattered reports of fake accounts targeting Chinese actors and politicians who have spoken out about corruption, but no substantial evidence has been found (Bolsover and Howard 2). It has been proven, however, that the Chinese government censors the internet for its citizens and tries to maintain a positive reputation on foreign websites. Two Oxford researchers conducted a study not unlike those done for Italian and Russian bots to determine whether the Chinese government uses Twitter bots. While Twitter is banned in China, a growing number of users on the mainland access the site through illegal means (Abdollah). The team searched hashtags related to hot topics in Chinese politics during the first half of 2017 to find bots, then picked out the most active accounts for study.
The researchers found that “seventy-one of the top-100 highest posting accounts posted all or almost all of their posts using known automation platforms… Additionally, many of these accounts appeared to be using custom automation scripts” (Bolsover and Howard 6). The top 38 highest posting accounts tweeted 70 times or more per day, and half of the top 100 tweeted anti-state content. None tweeted pro-state content (Bolsover and Howard 6). It makes sense that there is such a high concentration of anti-state content on Chinese Twitter because the website is banned; it is used primarily by anti-state activists and those looking to hear opinions other than those of the state (Bolsover and Howard 6). The anti-state accounts fall into two main groups, one behaving like the Italian bots and the other like the Russian ones. Both, however, use defamatory propaganda against the Chinese government.
The group that behaved like the Italian bots was the “1989 Group.” These bots promote content about the Tiananmen Square massacre and the need to remember it, along with human rights in China (Bolsover and Howard 6). Most of these accounts share profile names related to human rights or democracy, and their profile pictures are of attractive Chinese women or scenes from the Tiananmen Square protests. They behave like the Italian bots because all of their retweets come from one hub, an expatriate leader of the 1989 movement (@wurenhua) now living in America (Bolsover and Howard 6). They also use hashtags common in China to spread his opinions. They dominate a few hashtags because “eleven accounts in this group posted more than 1000 times each using the hashtag 人权 [human rights] during the data collection period, with the next highest poster posting 98 times. Almost 90% of the tweets that used the hashtag 人权 during the data collection period were posted by these 11 accounts” (Bolsover and Howard 6). The 1989 bots are aggressive, drowning out any other content under certain hashtags. Their posts are written in simplified Chinese characters, increasing possible viewership. For a group focused on grabbing the attention of those brave enough to log on to Twitter from inside China, mainly retweeting content from a former anti-state leader is very effective propaganda. Other expatriates who remember the movement, and Chinese students abroad who have never heard of it, might also be intrigued after seeing a message from @wurenhua.
The 1989 group is particularly interesting because its bots promote human rights and democracy in a country where traditional media is censored. While the bots themselves are grey, their source (@wurenhua) is producing white propaganda. It may seem odd to compare them with Russian and Middle Eastern bots that have negative purposes, but their behavior is similar. These comparisons show that bots are not always anti-democratic. While they might operate like their peers, the bots deployed by the 1989 group are working for a noble cause.
The second group of anti-state propagandists on Chinese Twitter is concerned with protesting an investment scheme that collapsed in 2015. “This group disseminated information about the victims of the pan-Asia ‘Ponzi scheme.’ Approximately 220,000 people lost the money they have invested in the Kunming Pan-Asia Nonferrous Metals Exchange when it collapsed in late 2015” (Bolsover and Howard 6). They believe the government encouraged people to invest while fully knowing that the company was failing. While these bots do not tweet nearly as much as the 1989 group, they still retweet and use hashtags to disseminate their propaganda. Unlike the 1989 group, the Ponzi bots are a cohesive unit of equals without a hub. They use similar hashtags of cities and universities in their tweets and profile descriptions, and they retweet each other rather than one or two hubs (Bolsover and Howard 6). What makes them similar to the Russian bots is that they lack large hubs and they utilize news outlets to their advantage. Rather than simply tweeting links to news stories, the Ponzi bots pose as news outlets or educational organizations in an attempt to appear more credible to their audience.
In summary, current evidence suggests that anti-state agents in China are using bots to their advantage more than the state itself. This contrasts with the Russian sphere, where both government and anti-government forces were present, and the Italian sphere, in which one popular side of the spectrum dominated discourse online. The anti-state minority in China is also the left wing of its political spectrum, as the 1989 group advocates free speech and human rights, and the Ponzi group protests government corruption. Additionally, the Italian bots and the 1989 group both use hubs to organize themselves and their messages. All three groups of bots, however, use hashtags to display their content and retweets to amplify it.
Middle Eastern Bots
The final group of bots that I will be examining are bots from the Middle East, specifically the Gulf region. In May of 2017, a Qatari state-run news outlet released statements from Qatar’s head of state that praised the nation’s relationship with Iran, Hamas, and the Muslim Brotherhood (Jones 1390). The news agency then claimed it had been hacked, but Saudi Arabia, the UAE, Egypt, and Bahrain quickly mobilized against Qatar, in more ways than just economic sanctions or military action. The bots used in this campaign against Qatar and its leadership were mostly from Saudi Arabia (Jones 1397). Although there is no direct evidence of a connection with the government, the bot groups are implied to have some government ties. These bots used both defamatory and subversive propaganda in an attempt to degrade and possibly force the removal of the Qatari government. The bots operate in squads, each sharing its own specific messages or pieces of propaganda (Jones 1397). Interestingly, one of these squads was constructed a month prior to the incident, in April of 2017 (Jones 1397). The incident in Qatar was only the catalyst for deploying a social media strategy Saudi Arabia already had in the works; the initial squad was deployed immediately, with nine more joining in the following days (Jones 1400).
Similar to the Chinese accounts, these accounts used hashtags the way parasites use a host: they latch on and slowly drown out whatever other opinions might be on the hashtag. Oftentimes, they would not even look for an existing hashtag. Their numbers and tenacity allowed them to create their own hashtags, which often reflected imaginary stories and statements, causing Qatari citizens to discuss fake news and possibly even change their opinions of their government (Jones 1396). These bots were so aggressive that “on some hashtags, at least 71% of the active accounts were found to be bots,” and “bot-generated trends were sometimes picked up by other news sources, thus increasing the impact and reach of the propaganda” (Jones 1409). Qatari Twitter was so flooded by Saudi bots and their propaganda that by August of 2017, the hashtag “don’t participate in suspicious-looking hashtags” trended (Jones 1392). To bolster these false and biased hashtags, the bots frequently tweeted visual media, such as pictures of fake news articles and political infographics. They relied on visuals accompanied by hashtags and brief text rather than on retweets, as the 1989 and Italian bots did. Similar to the Russian accounts, the Saudi bots referenced well-known media outlets like Reuters in their screenshots of fake headlines.
The Middle Eastern hubs were topics rather than a political figure or media outlet. The ten bot squads deployed by Saudi Arabia each had a general theme, and each was further subdivided into groups that focused on certain hashtags and visuals. Such organization reflects the purpose bots have in the digital age. These bot squads function like battalions in a military, designed to conduct digital warfare by capturing certain objectives. Saudi Arabia was able to cause unrest and chaos in a nation already under economic sanctions by adding a strong social media campaign. The success of the Saudi bots in Qatar only emphasizes the need for more studies of bots and of how social media can act as a weapon of war.
Overall, both ends of the political spectrum and all types of groups use Twitter bots. The comparisons between bots serve to emphasize how varied bots are in both nature and origin. While not explicitly stated, pro-Kremlin and pro-Saudi bots are implied to have government ties in some form. Whether it be far-right Italian political parties, the Russian and Saudi governments, or anti-state Chinese groups, bots have become essential in spreading opinions online. The right is by far the more vocal side, with bots tweeting at rates significantly higher than their opponents. Surprisingly, governments are using bots alongside clandestine groups, although the government bots seem to be more organized and powerful given the difference in resources a government can provide as opposed to a small activist group. The Saudi bots show that automated accounts are being deployed in structures not so different from military units. The Russian and Saudi bots also prove that bots active internationally can cause discord, changing the opinions of Twitter users and even fooling news outlets. If left unbridled, Twitter will only become more and more infected by these virus-like accounts.
As explained, Twitter bots have become a destructive tool of propaganda dissemination that both governments and activist groups can use to their advantage. In an article about bot regulation, Sergey Sanovich and his colleagues from NYU write that bots are difficult to regulate and ban for many reasons. One is that in certain instances bots are based in one country but operate in another, like the Saudi bots in Qatar; the Qatari government has no authority over users in Saudi Arabia. Secondly, Twitter’s lack of a real-name policy allows bots to impersonate people and organizations or invent entirely made-up ones (9). Despite these difficulties, in recent years governments have taken action to minimize the effects of bots online. However, the governments with the most success in bot control are autocratic or not far from it. For example, as of 2016 no Russian media company can have more than 20% foreign ownership (7). With foreign presence in Russian media restricted, the pro-Kiev and pro-opposition bots discussed in the context of the Ukraine conflict might not have as many news stories to retweet. This supports the idea of the Russian government’s involvement in bot activities, because such a law seems to leave pro-Kremlin bots unscathed, as they typically shared content associated with or run by the state itself. The Russian government has also negotiated the purchase of shares of Russian social media sites, sometimes causing their executives to flee the country or resign (9). On a more user-based level, Russia passed a law in 2014 that required bloggers with a certain level of popularity to register with the government (7). Again, this system supports the Kremlin. Suppose there are two popular bloggers on the Russian internet whose content serves as a hub for bots, one pro-opposition and the other pro-Kremlin. If both are on government file, which blogger is more likely to be targeted?
Even more autocratic nations, such as China and North Korea, have found success in making their own social media sites, firewalls, or national internets. China effectively blocks certain websites (such as Twitter), applications, words, and content, while North Korea is entirely isolated (10). Remember, Chinese bots target those in China accessing Twitter illegally. For democratic nations like the United States, which has experienced bot activity on Twitter, these methods of fighting bots are entirely unrealistic. The government cannot buy shares of Twitter through state-run banks. Nor could the government register bloggers, as many would see that as a violation of privacy. Censoring content as China does would be seen as a violation of free speech. Finally, given the NSA, who in the United States would join a government-made social media platform? The onus is fully on Twitter.
On October 30, 2019, the New York Times reported Twitter’s announcement that, ahead of the 2020 presidential election, it would ban all political advertising on the platform (AP). This policy shift reflects ongoing efforts by Twitter to filter its political dialogue in the hopes of eliminating propaganda and targeting. In 2015, Twitter began using an algorithm that scanned for posts that resembled propaganda, such as those with hate speech, threats, and offensive language (Lieberman 116). By 2017, it was encouraging users to report accounts they thought were bots (Lieberman 116). As of August, Twitter itself reported banning hundreds of thousands of accounts on both sides of the Hong Kong protests (Abdollah). While these may seem like adequate measures, they are minimally effective at best. Ariel Lieberman, a national security lawyer, writes that “closing down Twitter accounts can temporarily hinder users from spreading…propaganda, but most of them come back under new account names within a day or so of having their accounts removed” (117). These bot systems are so automated that spawning new accounts is simple, and the lack of a real-name policy only exacerbates the issue. Implementing a real-name policy on Twitter would be costly, and making an account might become difficult for entities like companies and organizations, which are not a single person. Further, the studies of bots analyzed in this paper were possible only because the researchers did not report the bots they identified; this let them observe the accounts and their activity over a set period. If even researchers who knew exactly which accounts were automated did not report them, how can Twitter expect the average user to report something they do not recognize or even know exists? As of now, there seem to be no feasible solutions for bot control in democratic countries like the United States.
The differences in bot methods also prove a challenge for their elimination. If Twitter were to end retweeting, it would stop a majority of bot activity; however, the bots that disseminate their own tweets would still thrive. The point is that eliminating bots would require a myriad of measures. There is no “bot destruction switch” that could be flipped. Destroying one niche would only mean an increase in the popularity of another.
Slowly but surely, more information about bots and their behavior is being gathered, often by the same researchers mentioned in this paper. But how long will this information be relevant? As technology advances, it is inevitable that bots advance with it. The success that autocratic countries have had in minimizing bot activity may be dashed in the near future by breakthroughs from those who build bot systems. The understanding of bots is not limited to their behavior, either. This paper is not concerned with the way average users interact with bots, another integral part of understanding automated accounts. Such interaction clearly occurs, as these bot accounts sometimes have non-bot followers, opening another line of inquiry: how effective is the propaganda disseminated by bots? A negative caricature of Winston Churchill on a Berlin bulletin in 1939 seems no different from the pictures Saudis saw of the Qatari head of state online, but how effective are fake images and headlines? Propaganda has for the most part not changed over time, the delivery device being the only major difference. Only time and further research will show whether Twitter as a delivery device is more effective in swaying the opinions of the masses than traditional mass media.
While there is much work to be done in the future, the present is equally important. American democracy is threatened every day by the activity of foreign bots, whose goal is to cause discord and feed fake information to American Twitter users. A new information war is being fought constantly, and everyone with a Twitter account is on the front lines. Simply knowing that bots exist, let alone understanding their behavior, makes those on the platform better fighters and more informed citizens. Knowing the different ways in which bots have been used in the past is more beneficial still. Without such information, Americans risk consuming false information and propaganda, whether that means contracting preventable disease after forgoing vaccination or abandoning a candidate because of slanderous tweets from foreign agents. Bastions of democracy across the world can only hope that viable solutions for bot control or elimination are developed and implemented so that the sanctity of their political systems is maintained.