Are 5G networks to blame for the coronavirus? Facebook deploys bots to fight fake news

The American company Facebook has decided to fight the misinformation and fake news spreading in connection with the coronavirus with the help of artificial intelligence. The human moderators it has relied on so far are expensive, and they miss much of the fake news or flag it only when it is too late.

Truth be told, the 2006 American film Idiocracy (released in Czech as Absurdistán) is not a blockbuster. Still, it deserves some attention. It tells the story of a young man who wakes up from a long hibernation in the year 2505 to find that the population has completely degenerated. The height of entertainment is farting on TV shows, and people are so naive that they fall for almost any piece of misinformation.

This year, however, anyone who uses social networks must feel roughly like the hero of that film. A lot of fake news is thrown at us, each item often more absurd than the last, and the ones about the coronavirus are clearly leading the way. Among the most common are claims that the contagion is spread by 5G networks, that covid-19 does not exist and the pandemic is just a show to deceive the public, or that the coronavirus can be "killed" with alcohol. As amusing as the last claim may seem, it was taken seriously in Iran, where hundreds of people died from ingesting methanol. The consequences of fake news have also stirred state and international authorities into action, among them the European Commission, which began pressuring social networks such as Facebook to hire people to verify information and stop false reports. "People can die during the coronavirus pandemic as a result of disinformation," warned the head of EU diplomacy, Josep Borrell.

Artificial networks

Facebook promised to honor the call. Some time before that, however, it had already announced that it would deploy artificial intelligence against fake news about the coronavirus, for example to compare graphics and images with known disinformation. "Once independent reviewers identify an image as containing misleading or false claims about the coronavirus, SimSearchNet (a neural network-based convolutional model for detecting near-exact duplicates), as part of our end-to-end image indexing and matching system, is able to spot near-duplicate matches so we can apply warning labels," the company said on its website ai.facebook.com. According to the company, SimSearchNet is capable of checking a billion images a day on Facebook and Instagram.
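SimSearchNet's internals are not public; the real system uses a learned convolutional embedding. As a rough illustration of the underlying idea of near-duplicate matching, though, a toy "average hash" can stand in: each image is reduced to a compact fingerprint, and two images whose fingerprints differ in only a few bits are treated as copies of each other. Everything below (the tiny 4x4 "images" included) is invented for the sketch.

```python
# Toy near-duplicate detector. An "image" here is just a list of rows of
# grayscale pixel values; the real SimSearchNet works on full photos with
# a deep CNN, not this simple hash.

def average_hash(pixels):
    """Fingerprint an image: 1 bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of bits in which two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

def is_near_duplicate(img_a, img_b, max_distance=2):
    """Treat img_b as a near-copy if its fingerprint is close to img_a's."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= max_distance

# A fact-checked image, a slightly re-compressed copy, and an unrelated image:
flagged   = [[10, 200, 30, 220], [15, 195, 35, 210],
             [12, 205, 28, 215], [11, 198, 33, 212]]
copy      = [[12, 198, 31, 219], [14, 196, 34, 209],
             [13, 204, 29, 216], [10, 199, 32, 211]]
unrelated = [[200, 10, 220, 30], [195, 15, 210, 35],
             [205, 12, 215, 28], [198, 11, 212, 33]]

print(is_near_duplicate(flagged, copy))       # True  - the copy matches
print(is_near_duplicate(flagged, unrelated))  # False - different fingerprint
```

The point of a scheme like this is scale: comparing a new upload against a billion known images pixel-by-pixel is infeasible, but comparing short fingerprints (or embeddings) is cheap and tolerates small edits such as re-compression or cropping.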

To fight the misinformation, according to the technology magazine The Verge, Facebook will use the same system that is designed to detect hateful posts. "Facebook says its new discoveries — notably a neural network it calls XLM-R that was announced last November — are helping its automated moderation systems better understand text in multiple languages," The Verge notes.

The fight against arms trafficking

But Facebook wants to go further. According to another report in The Verge, its engineers have developed a new method to help identify and prevent harmful behavior such as spamming, fraud, or arms and drug trafficking. "They can simulate the actions of rogue users with AI-powered bots by releasing them on a parallel version of Facebook. Scientists can then study the bots' behavior in the simulation and experiment with new ways to stop them," the magazine said.

In other words, the programmers are testing how to track antisocial activity in a WES, or Web-Enabled Simulation, environment. "It is a giant, very realistic virtual replica of Facebook that uses methods of machine learning, artificial intelligence, game theory and so-called multi-agent systems. The computer engineers behind the system hope to better understand, and thereby suppress, the explosive growth of anything negative associated with Facebook — political misinformation, crazy conspiracy theories, or hate speech," Czech Television described on its website.

The bots interact with one another, for example by sending friend requests. To model this behavior, Facebook created a group of "innocent" bots to act as targets and trained a number of "bad" bots to try to make contact with them. Engineers then tried different ways to stop the bad bots: they implemented various restrictions, such as limiting the number of private messages and posts a bot can send each minute, and watched how this affected its behavior. We can compare this to the work of urban planners who try to limit speed on busy roads by placing speed bumps and traffic lights. If the experiments succeed, Facebook could change how often posts can be added or automatically label the veracity of posts on certain topics. We can only hope that Mark Zuckerberg and his people see it through; after all, Facebook first promised to use artificial intelligence against fake news several years ago.
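The kind of experiment described above can be sketched in a few lines: a "bad" bot fires off messages at a steady pace, and a sliding-window rate limiter caps how many land per minute. The class names, numbers, and timing are all invented for illustration; Facebook's WES internals are not public.

```python
# Hypothetical sketch of a WES-style throttling experiment: how many messages
# does a spam bot deliver with and without a per-minute rate limit?

class RateLimiter:
    """Allow at most `limit` actions within any sliding 60-second window."""
    def __init__(self, limit):
        self.limit = limit
        self.sent_at = []  # timestamps of recently allowed actions

    def allow(self, now):
        # Forget actions older than the 60-second window, then decide.
        self.sent_at = [t for t in self.sent_at if now - t < 60]
        if len(self.sent_at) < self.limit:
            self.sent_at.append(now)
            return True
        return False

def simulate(messages_attempted, limit_per_minute, seconds_between=5):
    """One bad bot attempting a message every few seconds against the limiter."""
    limiter = RateLimiter(limit_per_minute)
    delivered = 0
    for i in range(messages_attempted):
        if limiter.allow(now=i * seconds_between):
            delivered += 1
    return delivered

# With a generous limit the bot lands all 30 messages; with a cap of
# 5 per minute, most attempts are blocked.
print(simulate(30, limit_per_minute=30))  # 30
print(simulate(30, limit_per_minute=5))   # far fewer delivered
```

In a real WES run the interesting question is the one the article raises: not just how many messages are blocked, but how the bad bots adapt their behavior once the restriction is in place.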