AI and Disinformation in the Russian-Ukrainian War

Opening her Facebook account on March 10, one of the first things Aleksandra Przegalinska saw on her News Feed was a post from a Russian troll spreading misinformation and praising Russian President Vladimir Putin.

The message claimed that Putin was doing a good job in the Russian-Ukrainian war.

As someone who has followed the conflict between Russia and Ukraine, the Polish artificial intelligence expert and university administrator was caught off guard by what she believed to be an inaccurate message.

Although she realized the post was created by a friend of one of her Facebook friends and not someone she knew directly, Przegalinska said it shows that recommendation systems prioritize content that is controversial and can potentially generate conflict.

“Recommender systems are still very rudimentary,” said Przegalinska, who is also a research fellow at Harvard University and a visiting fellow at the American Institute for Economic Research in Great Barrington, Mass. “If they see that I’m interested in a conflict and Ukraine — which is clear when you analyze my social media content — they may just be trying to analyze and promote content linked to it.”
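To make the mechanism concrete, here is a deliberately simplified Python sketch of an engagement-driven recommender of the kind Przegalinska describes. It is an illustrative toy, not any platform's actual ranking code; the Post fields and the scoring rule are assumptions made for the example.

```python
# Toy engagement-driven feed ranking (illustrative only, not a real platform's code).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: set[str]              # topics detected in the post
    predicted_engagement: float   # model's guess at clicks, comments, shares

def rank_feed(posts: list[Post], user_interests: set[str]) -> list[Post]:
    """Order posts by interest overlap multiplied by predicted engagement."""
    def score(post: Post) -> float:
        overlap = len(post.topics & user_interests)
        return overlap * post.predicted_engagement
    return sorted(posts, key=score, reverse=True)

feed = rank_feed(
    [Post("Putin is doing a good job", {"ukraine", "conflict"}, 0.9),
     Post("Local weather update", {"weather"}, 0.2)],
    user_interests={"ukraine", "conflict"},
)
print(feed[0].text)  # the controversial post ranks first
```

Note that truth never enters the score: a post that matches a user's inferred interests and is predicted to provoke reactions rises to the top regardless of its accuracy.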

Recommendation algorithms, disinformation and TikTok

Recently, recommendation algorithms have led to the promotion of misinformation on social media about the Russian-Ukrainian war.

Particularly on TikTok, misinformation and disinformation are rife. Some users – wanting to go viral, make money or spread Putin’s agenda – mix war footage with old audio to fabricate false accounts of what is happening.

While some war posts offer real accounts of what is happening, many others seem unverifiable.

For example, many TikTok videos during the war included an audio clip of a Russian military unit telling 13 Ukrainian soldiers on Snake Island, a small island off the coast of Ukraine, to surrender. Some of these videos claimed that the men had been killed.

Their deaths were initially confirmed by Ukrainian President Volodymyr Zelenskyy, but Russian state media later showed the soldiers arriving in Crimea as prisoners of war. Ukrainian officials subsequently confirmed that the soldiers were alive but being held captive.

TikTok has also become a platform for Russia to promote Putin’s agenda to invade Ukraine. Although the platform recently suspended all live streams and new content from Russia, it did so days after videos of influencers supporting the war had already circulated.

Using exactly the same words, Russian TikTok users repeated false Russian claims about a “genocide” committed by Ukrainians against other Ukrainians in the separatist Russian-speaking regions of Donetsk and Lugansk. The messages condemned Ukraine for killing innocent children, but there is no evidence to support this claim.

On March 6, TikTok suspended videos from Russia after Putin signed a law providing for prison terms of up to 15 years for anyone posting what the state considers ‘fake news’ about the Russian military.

Disinformation, AI and war

The spread of misinformation by both sides of a war is nothing new, said Forrester analyst Mike Gualtieri.

However, using AI and trained machine learning models as sources of misinformation is novel, he said.

“Machine learning is exceptionally effective in learning to harness human psychology because the Internet provides a large and rapid feedback loop for learning what will strengthen and/or break the beliefs of demographic cohorts,” Gualtieri continued.

Since these machine learning capabilities underpin social media platforms, government entities and individuals alike can use them to try to sway mass opinion.

Transformer networks such as GPT-3 are also new, Gualtieri said. They can be used to generate messages, completely eliminating the human from the process.

“Now you have an AI engine that can generate messages and immediately test whether the message is effective,” he continued. “Iterate rapidly 1,000 times a day, and you have an AI that quickly learns how to influence targeted demographic cohorts. It’s scary.”
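A minimal sketch of that generate-and-test loop is below, using the open GPT-2 model through the Hugging Face transformers library as a stand-in for larger systems like GPT-3. The engagement_score function is a hypothetical placeholder; in the scenario Gualtieri describes, it would be driven by live platform metrics such as likes and shares.

```python
# Sketch of an automated generate-and-test messaging loop (assumptions noted in comments).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def engagement_score(message: str) -> float:
    """Hypothetical stand-in for measured audience reaction (likes, shares, replies)."""
    return (len(message) % 7) / 7.0  # dummy value; a real loop would use platform feedback

# Generate several candidate messages from one prompt ...
candidates = generator(
    "Breaking news from the front:",
    max_new_tokens=30,
    num_return_sequences=5,
    do_sample=True,
)

# ... keep the variant that "performs" best, then seed the next round with it.
best = max((c["generated_text"] for c in candidates), key=engagement_score)
print(best)
```

Repeated hundreds of times a day, this loop needs no human in it, which is exactly the dynamic Gualtieri warns about.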

What seems even scarier is how easy it is for social media users to create these kinds of AI engines and machine learning models.

Deepfakes and the spread of misinformation

One type of machine learning output that circulated during the war is the AI-generated human, or deepfake.

Twitter and Facebook have taken down two fake AI-generated human profiles claiming to be from Ukraine. One was a blogger ostensibly named Vladimir Bondarenko, from Kyiv, who was spreading anti-Ukrainian rhetoric. The other was Kharkiv-based Irina Kerimova, supposedly a teacher-turned-editor of “Ukraine Today.”

Unless you examine both very closely, it’s almost impossible to tell they’re not real. This confirms the findings of a recent report in the Proceedings of the National Academy of Sciences that AI-synthesized faces are hard to distinguish from real faces and even look more trustworthy.

Generative adversarial networks (GANs) are used to create such AI-generated images and deepfakes. Two neural networks are trained in competition: a generator synthesizes candidate images, while a discriminator tries to tell them apart from real ones; the generator improves until its fakes reliably fool the discriminator.
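The adversarial training loop itself is compact. A toy PyTorch sketch follows; the layer sizes and data are placeholder values chosen for illustration, and production face generators such as StyleGAN are vastly larger models trained on millions of photos.

```python
# Minimal GAN training step in PyTorch (toy sizes, placeholder data).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

generator = nn.Sequential(          # maps random noise to a fake "image"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # maps an image to the probability it is real
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # 1) Train the discriminator to separate real images from generated ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator label its fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

training_step(torch.randn(16, image_dim))  # placeholder batch of "real" images
```

As the two networks compete, the generator's output drifts toward images that the discriminator, and eventually a human viewer, cannot reliably flag as fake.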

Creating deepfakes used to be complicated and required a complex skill set, Przegalinska said.

“Currently, what is worrying is that many deepfakes can be created without coding knowledge,” she said, adding that tools that can be used to create deepfakes are now easily found online.

Also concerning is that there are few limits to how neural networks can be used to create deepfakes, such as a video showing the surrender of Zelenskyy, Przegalinska said.

“We don’t really know how widespread the use of deepfakes will be in this particular conflict or war, but what we do know is that we already have a few documented cases of synthetic characters,” she said.

And since Russia banned many social media platforms, including Facebook and Twitter, many citizens of the country only know what Russian state television shows. It would be easy to use deepfake technology on Russian television to present Putin’s agenda that Russia is Ukraine’s savior, Przegalinska said.

It’s important for social media consumers to pay close attention to news because it’s hard to know what’s real and what’s fake.

“There’s this alarmist aspect that says, ‘Listen, you have to be careful what you see because it can be a fake,’” she continued.

“Russia is very good at the disinformation game,” Przegalinska said. “Even though the tools they use may be very sophisticated…it’s an obvious weapon.”

Meanwhile, the West is not as prepared for the disinformation game, and right now there are two wars going on, she said.

“It’s a parallel war taking place in the physical world and obviously the physical war is the most important because there are people dying in this world, including little children,” Przegalinska said. “However, information warfare is just as important in terms of the impact it has.”
