Meta, Twitter, and Google against Russia. Social networks opposing the Kremlin propaganda
Social networks have long been an important dimension of armed conflicts and mass protests around the world. Just remember the Revolution of Dignity, when Facebook became the engine of public mobilization, or the role Telegram played in the protests in Belarus as well as in the current war. The managers of social networks are aware of their role in modern information and social processes, so they are increasingly involved in the struggle to keep their platforms a "clean and healthy space". The statements of companies such as Meta, Twitter, and Google have one thing in common: they all emphasize how important it is to combat disinformation. In recent years, these companies have often reported blocking accounts or deleting bot networks, particularly those related to Russia.
Facebook reacted to the beginning of the full-scale Russian invasion of Ukraine almost immediately: Ukrainians got the opportunity to lock their accounts from outsiders in one click. Twitter said that it had begun preparations a few weeks before the full-scale war started, at the first signs of a potential crisis, with a cross-functional team monitoring the situation in Ukraine and evaluating risks. That is, they took preventive steps.
Moreover, social platforms listen to the Ukrainian government and civil society: for example, Detector Media and some other relevant Ukrainian NGOs communicate with Google, Microsoft, Twitter, Meta to counter Russian disinformation on their platforms. "In times of war we face complex challenges that are changing constantly. Twitter cannot manage it all on its own, and our industry is not unanimous in handling the situation. Each platform has its own services and different business models, our approaches and principles are different when it comes to the challenges of the crisis, although they often complement each other. Twitter welcomes the opportunity to cooperate with every stakeholder, and will gladly work with the Detector Media team," wrote Ronan Costello, Twitter's head of public policy for Europe, Turkey, and Israel.
Russia's war against Ukraine made social networks take unusual steps. Despite well-established content moderation rules, on March 10 Meta surprised everyone with a decision unusual for such corporations: it temporarily allowed Facebook and Instagram users in some countries to call for the deaths of Russian President Vladimir Putin and Belarusian dictator Alexander Lukashenko, as well as for violence against the Russian military in the context of their invasion of Ukraine. We might assume this was done in order not to block the messages of Ukrainians and citizens of other countries who were critical of the Russian and Belarusian authorities because of the war in Ukraine. Although Meta changed its mind rather quickly and banned wishing Putin and Lukashenko dead on its platforms on March 14, Russia still declared the American corporation an "extremist organization" and banned Facebook and Instagram on its territory.
Meta and Twitter told MediaSapiens about the measures they were taking to combat Russian propaganda and disinformation on their platforms during the war.
Meta has been labeling Facebook pages and Instagram accounts of Russian state-controlled media since 2020. This was done so that users would understand that "the news they read is coming from a publication that may be under the influence of a government."
After the new phase of the war in Ukraine began, Meta continued to label Russian media content, as well as rank it lower in Feed and make it harder to find on Facebook and Instagram around the world. The same applies to content containing links to Russian state-controlled media outlets. "We will label these links and provide more information to people before they share them or click on them to let them know that they lead to state-controlled media websites," the company informed.
At the same time, Russian state-controlled media have been banned from advertising or monetizing their content on the company's platforms around the world. Meta does not provide a list of Russian media outlets affected by the restrictions but makes it clear that the list will be expanded to include new media outlets controlled by the Russian government. "We do not maintain a public database of publishers globally or in regions that are or will be labeled and we are not currently sharing a full list. Our list is dynamic and will evolve and change over time as we label more entities and as media environments change," the company said.
Facebook pages and Instagram accounts of Russian state-controlled media are blocked or deleted not on the company's own initiative, but at the request of various governments. In particular, the Ukrainian government requested restrictions on access to the Facebook page of blogger and politician Anatoliy Shariy; the Instagram accounts of the head of the Russian propaganda channel RT Margarita Simonyan and the director of RT's Russian-language service Anton Krasovsky; Russian propagandists and TV presenters Tina Kandelaki and Vladimir Solovyov; Russian singers Timati, Nikolai Baskov, and Oleg Gazmanov; and others. These people support Vladimir Putin and the war in Ukraine, and some of them are under sanctions. Meanwhile, access to Russia Today and Sputnik was limited in the EU and UK in response to government requests.
That is, the willingness of governments to counter Russian disinformation encourages tech giants to respond quickly and effectively. "We have also received requests from a number of Governments, the EU and UK to take further steps in relation to Russian state-controlled media. Given the exceptional nature of the current situation we restricted access to RT and Sputnik across the EU and UK," Meta stated.
The company has a separate scope of work in fighting propaganda and misinformation. It:
- consults outside experts on fighting misinformation spread on its platforms;
- expands the third-party fact-checking capacity in Russian and Ukrainian languages across the region and works to provide additional financial support to Ukrainian fact-checking partners;
- removes content that violates the platform's policies and works with third-party fact-checkers in the region to debunk false claims. If a post is found to be false, it is labeled and shown lower in Feed, as happened with a post by Baskov;
- warns users when they try to share certain war-related images, in addition to the labels from fact-checking partners. Meta's systems detect images that are more than one year old, so people have more information about outdated or misleading images that could be taken out of context;
- limits message forwarding on Messenger, Instagram, and WhatsApp and labels messages that did not originate with the sender. Incidentally, WhatsApp continues working in Russia, even though its parent company has been declared "extremist";
- notifies people who have previously shared or try to share unchecked content so they can decide for themselves if they want to continue sharing it;
- imposes additional penalties on Facebook Pages, Groups, accounts, and domains that repeatedly share false information: for example, they are removed from recommendations and all of their posts are shown lower in Feed;
- shows a pop-up notification to users connecting with a Facebook or Instagram account that has repeatedly shared false content.
Twitter's efforts are similar to Meta's. The company also has a team monitoring the situation in Ukraine and evaluating potential risks. Ronan Costello, Twitter's public policy chief for Europe, Turkey, and Israel, said that this team was formed many weeks ago, at the first signs of a potential crisis. "We are actively working with a cross-functional team within the company to assess the level and the extent of Twitter's response to these threats. The team consists of experts from different teams who deal with user security and service transparency in particular. They monitor the situation in Ukraine, identifying potential risks associated with the conflict, such as the need to identify and neutralize the spread of false information, and increasing the speed and effectiveness of our response to these risks," he said.
Twitter started fighting Russian media propagandists back in 2017, when the United States began to record and speak publicly about the US presidential election being influenced on behalf of the Russian government. That year, the social network decided to ban the promotion of content on all accounts owned by Russia Today and Sputnik, on the grounds of Twitter's own research and the findings of US intelligence. The company took the next step in August 2020, when it began labeling accounts controlled by the Russian government, along with those of twenty other states, and reducing their reach. In 2021, the social network expanded the list of labeled and restricted countries and accounts. Today it contains about 100 media accounts marked as connected with the Russian authorities. Recently, Twitter started labeling the accounts of Belarusian state-controlled media. The company does not share the list but says it is being reviewed and updated with newly created Russian accounts.
"We want people on Twitter to have as much context of the messages they see as possible. In particular, we label accounts of the media owned by or associated with the government. If the editorial policy of the media is controlled by the government through direct or indirect political pressure and financial leverage, and / or if the government controls the production and distribution of its content, we believe that this media is associated with the government. Twitter will not suggest such media to other users and will reduce their reach," Costello explained. At the same time, Twitter complies with the requirements of the sanctions imposed by European Union and blocks certain content in EU member states. Similar work is conducted outside the European Union as well.
The management has already reported the first results of these actions: as of March 16, more than 50,000 messages containing false information about Russia's war against Ukraine had been removed from Twitter or labeled as inaccurate. More than 75,000 profiles have been deleted for violating the platform's spam and manipulation policy. Deleting accounts has some interesting consequences: the audience of pro-Russian politicians gets much "thinner" after mass purges of Twitter bot accounts, as representatives of Ukraine's public sector have repeatedly pointed out. These cases show that pro-Kremlin posts spread artificially. The labeling of Russian-controlled media accounts also produced measurable results: as of March 11, tweets marked under the expanded policy saw a 30% drop in reach. However, in the early days of the full-scale Russian invasion of Ukraine, the social network recorded more than 45,000 tweets a day containing links to Russian state-controlled media, posted by ordinary users.
"It proves that most of the Russian state-controlled media content is spread on Twitter via people's accounts, and not via labeled official accounts of these media. So, recently we decided to change our policy and label those tweets that contain links to Russian state-controlled media. The reach of such tweets will be reduced: they will not appear in the top search or in suggested tweets," the company said.
With similar motivation, Twitter banned advertising in Ukraine and Russia. It also banned the following: political advertising (since 2019); monetization of misleading or false content related to the Russian-Ukrainian war; monetization of search queries related to the Russian-Ukrainian war; and promotion of content created by Russian government-related media.
"By the way, while the Russian state media is spreading false narratives, our teams have recorded rather few signals of inauthentic coordinated behavior related to the crisis. Automatic early detection methods combined with manual moderation have allowed us to stop these attempts before they reach a wide audience," said Ronan Costello. "We use both manual and automatic methods to identify and remove coordinated inauthentic behavior related to the war in Ukraine. We have also reinforced our monitoring for any violations of the rules on the platform, including the Spaces live video broadcast format."
Among other things, Twitter recorded a significant increase in forged and manipulated content, such as footage from video games presented as real video, or footage from other conflicts or military maneuvers presented as footage from Ukraine. Fraudsters trying to trick people out of their money (especially in cryptocurrency) or posing as experts on the conflict also became more active after the beginning of the new phase of Russia's war against Ukraine. Such content is either labeled or deleted by the social network.
Twitter also takes additional actions to protect "the health of the service". The company:
- publishes additional tips on maintaining digital security and protecting one's account in English, Ukrainian, and Russian, and regularly reports on the measures taken in those same languages;
- analyzes tweets to identify manipulations or other inauthentic behavior, and removes or restricts the spreading of messages that distort the picture of events;
- adds some context to crisis-related content, in particular through the Moments and Events features;
- monitors vulnerable accounts, including those of journalists, activists, government officials and bodies, to repel any attempted hacking;
- works on improving the Topics, Lists, and Spaces features;
- launched a donation campaign among Twitter employees for reliable organizations that help Ukrainian refugees seeking asylum and safety; the corporation will double their donations and make its own donation to a partner organization.
YouTube, like other social networks, takes measures to counter disinformation on its platform and also controls the distribution of content about Russia's war against Ukraine. Neither YouTube nor Google commented on their efforts in this fight for MediaSapiens. However, the company has gone public about some of its decisions. In particular, it announced the disabling of monetization for residents of Russia. This decision alone will cost Russian bloggers and media companies that earn money from integrated advertising on YouTube about $100 million a month. In addition, bloggers from other countries with audiences in Russia will no longer be able to earn from the Russian market, which can significantly affect their content, especially in Ukraine.
YouTube Premium, Music Premium, sponsorships, Super Chat, Super Stickers, and merchandise will not be available to any Russian viewers. The video hosting service has also announced that Russia's state-controlled media channels are blocked globally for violating its anti-violence rules. In particular, YouTube will remove advertising and content about Russia's war in Ukraine that violates the video service's policies. For example, on February 25, Google announced that hundreds of channels and thousands of videos related to the war in Ukraine would be removed.
YouTube, like other social platforms, has blocked the Russian propaganda channels RT and Sputnik across Europe, as well as Pervyi, Rossiya 24 and Rossiya 1, TASS, RIA Novosti, RBK, and Telekanal Zvezda in Ukraine. Besides media channels, the video hosting service also blocks the accounts of some pro-Kremlin propagandists. For example, three channels of Russian presenter Vladimir Solovyov, the channels of propagandist Anatoliy Shariy and his wife Olga Shariy, and their Dubl channel were blocked in Ukraine.
In addition, Google has assembled a team that monitors the war in Ukraine: it searches for and disrupts disinformation campaigns, hacking attacks, and fraud. "We have automatically increased the level of account protection for people in the region and will continue to do so as cyber threats develop," the company informed, adding that it has also joined fundraising efforts to support Ukrainians.
The actions of Meta, Twitter, and Google aimed at combating disinformation have a significant impact on the spread of Russian propaganda and fakes among Ukrainians. However, Russia spreads its propaganda not only through state-controlled media or pro-Kremlin bloggers, but also through the pages of diplomatic missions, government agencies, and so on. For example, the Russian Embassy in Great Britain shared fake news claiming that the photo of the victims of the maternity hospital shelling in Mariupol had been staged and that the pregnant woman in the photo was a model. These posts, incidentally, were deleted on Facebook and Twitter.
At the same time, although the companies are open to dialogue, some problems remain unresolved: a verified account of the Russian Foreign Ministry's "representative office" in occupied Crimea (verified as early as February 2021), the blocking of Ukrainian volunteers, the deletion of media reports about Ukrainian fighters or about the March of Vyshyvanky, shadow bans for the Ukrainian flag, and a strike for poems by Taras Shevchenko. Those blocks and strikes result from complaints by Russian users, often even bots. These examples show that Russian disinformation and its consequences cannot be removed from the platforms in one day. The sheer number of ways disinformation spreads is a challenge that social networks cannot always cope with; fakes are shared before the platforms block them.