In China, these protesters trick social media AI to fight censorship

SOCIAL NETWORKS – While people in China have been taking to the streets for a week, in an unprecedented and massive movement, to voice their disagreement with the government's "zero-Covid" policy, the demonstrators are playing cat and mouse with social networks to spread their images.

While Chinese platforms were quickly censored by government authorities, it did not take the protesters long to turn to Western platforms. Filters, background noise: anything goes to fool the algorithms of the Chinese networks and, above all, to avoid being spotted. And against censorship, the last-resort solution is Twitter, where the already-altered content ends up.

If these images manage to travel around the world, it is mainly because Twitter has lost far too many staff to filter all of this content at scale, but not only that. Users also rely on well-thought-out strategies to keep their content from being deleted.

“A computer is not very smart”

All social networks, whether Twitter, Weibo or Facebook, use algorithms and artificial intelligence to recognize the images shared on their platforms and, if need be, to filter them according to their respective rules. To better understand the role they play when content touches on political issues of this magnitude, Victor Louis Pouchet, a hacker at BZhunt, answered questions from HuffPost.

If, in the streets of China, the protesters get around the censorship of Xi Jinping's government with blank sheets of paper, on social networks they rely on other tricks. In the tweet above, the images show a protest; to spread them, the user added a filter, an element that disturbs the original image, as well as music that has little to do with the content.

"When the algorithm analyzes video content, it inspects each pixel and tries to associate it with something it already knows. In the case of the demonstrations in China, the artificial intelligence will therefore try to pick out a keyword on a sign, for example, or the language spoken and written on it," explains the hacker.
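To give a concrete picture of the keyword spotting the hacker describes, here is a minimal sketch in Python, assuming OpenCV and the pytesseract OCR wrapper are installed; the keyword list, function names, file name and sampling rate are illustrative assumptions, not any platform's real moderation pipeline.

```python
# Minimal sketch of sign/keyword detection on video frames.
# Assumptions: OpenCV (cv2) and pytesseract are installed; the watchlist
# and the frame-sampling rate are purely illustrative.
import cv2
import pytesseract

BLOCKED_KEYWORDS = {"protest", "freedom"}  # hypothetical watchlist

def frame_contains_keyword(frame) -> bool:
    """Run OCR on a single frame and check the text against the watchlist."""
    text = pytesseract.image_to_string(frame).lower()
    return any(word in text for word in BLOCKED_KEYWORDS)

def scan_video(path: str, every_n_frames: int = 30) -> bool:
    """Sample frames from the video and flag it if any sampled frame matches."""
    capture = cv2.VideoCapture(path)
    index, flagged = 0, False
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0 and frame_contains_keyword(frame):
            flagged = True
            break
        index += 1
    capture.release()
    return flagged

# Usage (hypothetical file name):
# print(scan_video("street_scene.mp4"))
```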

Preventing the AI from contextualizing the image

But recognizing elements in video content is not enough, since associating them with the real context of that content can completely change the situation. "If we take the example of a crowd, the AI will ask itself whether it is a crowd at a concert, at a demonstration or in a shopping center; it will therefore try to contextualize it," he says.

This is where the addition of "parasitic" and "superfluous" elements complicates the AI's task: "Adding filters or clutter, such as text or music, outside of the base context will make the AI much less sure of what it is seeing. The algorithm then has to go even further to make sense of what it sees, in this case opposition to the government, and to contextualize it."
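As a rough illustration of that loss of certainty, here is a sketch assuming PyTorch, torchvision and Pillow: it compares a pretrained classifier's top confidence on a clean image and on the same image with an overlaid caption and random noise. The model, file name and perturbation are assumptions for illustration; real moderation models differ, but the effect of clutter on confidence is the point.

```python
# Sketch: measure how overlays and noise lower a classifier's confidence.
# Assumptions: torch, torchvision and Pillow installed; ResNet-18 stands in
# for whatever model a platform actually uses.
import torch
from torchvision import models, transforms
from PIL import Image, ImageDraw

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def top_confidence(img: Image.Image) -> float:
    """Highest softmax probability the classifier assigns to the image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return torch.softmax(logits, dim=1).max().item()

def add_clutter(img: Image.Image) -> Image.Image:
    """Overlay a caption and random pixel noise, mimicking filters and text."""
    noisy = img.copy()
    ImageDraw.Draw(noisy).text((10, 10), "unrelated caption", fill="white")
    tensor = transforms.ToTensor()(noisy)
    tensor = (tensor + 0.2 * torch.rand_like(tensor)).clamp(0, 1)
    return transforms.ToPILImage()(tensor)

# Usage (hypothetical file name):
# img = Image.open("crowd.jpg").convert("RGB")
# print(top_confidence(img), top_confidence(add_clutter(img)))
```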

In this other example, a user films the content he wanted to share, but uses another phone as a frame. "In this case, the algorithm is not intelligent enough: it will say that this is a video of a phone, without paying attention to what is being shown on that phone," notes Victor Louis Pouchet. The more unexpected scenarios the user adds to their content, the longer the AI takes to understand the user's real intention.

"The concept of filters is super interesting and lets you bypass an algorithm. Take YouTube: an AI automatically detects copyrighted content, but to get around it, some people will flip the image, zoom in or add filters so that it slips through. The more noise we add, the more the statistics get scrambled," the hacker adds. "These algorithms remain very specific learning models; a computer is not very smart, it does what we tell it to do, but not always what we want it to do," he observes.
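To see why flipping or zooming can defeat a fingerprint match, here is a sketch using Pillow and the imagehash library; the file name is an assumption, and YouTube's actual Content ID is far more robust than a simple perceptual hash, but the principle of scrambling the statistics is the same.

```python
# Sketch: simple edits (flip, crop-and-zoom) shift an image's perceptual
# hash, the kind of fingerprint a naive copyright matcher compares against
# a reference copy. Assumptions: Pillow and imagehash installed.
import imagehash
from PIL import Image, ImageOps

def fingerprint_distance(original: Image.Image, edited: Image.Image) -> int:
    """Hamming distance between the perceptual hashes of two images."""
    return imagehash.phash(original) - imagehash.phash(edited)

# Usage (hypothetical file name):
# img = Image.open("clip_frame.jpg").convert("RGB")
# flipped = ImageOps.mirror(img)  # horizontal flip
# w, h = img.size
# zoomed = img.crop((w // 10, h // 10, 9 * w // 10, 9 * h // 10)).resize((w, h))
# print(fingerprint_distance(img, flipped), fingerprint_distance(img, zoomed))
```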

A reaction from the Chinese government?

And on Twitter, failing to censor, the government reportedly preferred to muddy the waters by drowning protest content in a mass of pornographic content.

According to this Chinese analyst, many fake accounts are being used to spread illicit content in order to skew Twitter search results about China. Indeed, in many posts, the geolocation of the places where protest gatherings were largest has been attached to pornographic content.

But the hacker points out that "moderation remains a problem inherent to every social network, especially as long as there are still a few humans behind it, and there will inevitably always be gaps in the net. And the techniques we use at a given moment will be obsolete tomorrow, so it is an endless restart."
