“Mitigate the toxic influence of the internet with artificial intelligence”, Charles Cohen (Bodyguard.ai) – Strategies

The Internet, and social networks in particular, have become essential tools for brands and influencers to interact with their communities and forge new links with potential customers or subscribers. This is where customers and subscribers ask their questions, raise their problems, debate their opinions in real time, and exchange with the brand as well as with one another. This engagement, combined with the follow-up and response options available, represents a tremendous opportunity for brands, provided that the relationships woven there encourage dialogue rather than conflict.

However, the chronic, year-after-year increase in toxicity on the Internet works against this opportunity. More than 5% of the content created on the platforms and social networks of brands or influencers is considered toxic: 3.3% is hateful in nature, comprising insults, misogyny, racism, homophobia, or mockery of physical appearance, and 2% consists of unwanted or polluting messages such as spam. The problem has become very real, since 40% of users leave a platform after a first exposure to harmful comments. Not to mention that these same users are also likely to tell other Internet users about their poor experience, which often risks irreparably tarnishing the brand image of the company in question.

Artificial intelligence to the rescue of moderators

The solution? Intelligent moderation, to shield users from the toxic influence of the Internet. Manual moderation, however, is particularly time-consuming and relies on human beings who may be fatigued, desensitized, or overworked. These issues are further exacerbated by the time sensitivity of the fight against "cyberhate": given the ephemeral nature of social media comments, if moderators respond too late or miss something, the damage is already done.

It takes about ten seconds for a qualified moderator to analyze and moderate a simple comment. If 100,000 comments are posted at once, it becomes impossible to manage this flow and process hate speech in real time. This is where artificial intelligence comes to the rescue: algorithmic processing makes it possible to review and moderate huge volumes of online comments instantly. However, even this solution is not free of limitations.
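To see the scale problem concretely, here is a back-of-the-envelope comparison based on the figures above. The ten-seconds-per-comment estimate and the 100,000-comment burst come from the article; the automated throughput figure is purely an assumption for illustration.

```python
SECONDS_PER_COMMENT_HUMAN = 10   # the article's estimate for a qualified moderator
COMMENTS = 100_000               # burst size cited in the article

human_hours = COMMENTS * SECONDS_PER_COMMENT_HUMAN / 3600
print(f"One moderator: ~{human_hours:.0f} hours to clear the backlog")

# A hypothetical automated pipeline handling 1,000 comments per second
# (an assumed rate, not a measured one):
MACHINE_RATE = 1_000
print(f"Automated pipeline: ~{COMMENTS / MACHINE_RATE:.0f} seconds")
```

At roughly 278 hours of single-moderator work versus under two minutes for the hypothetical pipeline, the gap is several orders of magnitude, regardless of the exact machine throughput assumed.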

The challenge of contextualization and the risk of censorship

Most major social platforms, such as Facebook or Instagram, rely on machine-learning systems to moderate online content. Unfortunately, these systems are unable to detect all the subtleties of language and struggle with sentiment analysis and the complexities of meaning in context. As a result, their error rate on the detection of toxic content can reach 40%. The results speak for themselves: according to the European Commission, only 62.5% of hateful content is removed from social networks.
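The failure mode described above can be illustrated with a deliberately naive, context-blind filter. This toy keyword matcher is a stand-in for any moderation that ignores context; the word list and example comments are invented, not drawn from any real platform's system.

```python
# A context-blind keyword filter: flags a comment if it contains any
# blacklisted word, with no understanding of intent or phrasing.
TOXIC_KEYWORDS = {"kill", "hate"}

def naive_flag(comment: str) -> bool:
    """Return True if the comment contains a blacklisted keyword."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & TOXIC_KEYWORDS)

print(naive_flag("Go kill yourself"))         # True  — correctly flagged
print(naive_flag("You killed it on stage!"))  # False — missed: inflected form, benign idiom
print(naive_flag("I hate waiting in line"))   # True  — false positive: harmless venting
```

The second example slips through while the third is wrongly censored, which is exactly the dual problem the article describes: missed hate on one side, over-removal on the other.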

This substantial margin of error also cuts the other way: when algorithms overreach and remove content that is not truly toxic, protecting communities tips over into censoring free speech. For brands and influencers, this is another headache. On the one hand, they are responsible for protecting their community and everyone who ventures onto their channels, buys their products, and so on. On the other hand, they risk infringing on freedom of expression, which can likewise harm the community's experience on these channels and, ultimately, damage their reputation and drive users away.

Adopting a technological solution based on a fair balance between the machine and the human, between algorithms and linguistics, makes it possible to analyze and understand the context of online discussions in real time. Contextual and autonomous moderation is, indeed, the surest way to offer brands and online communities a successful experience, in a secure place where freedom of expression is protected.
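One common way to implement such a balance between machine and human is confidence-based routing: the algorithm acts alone only when it is nearly certain, and hands the ambiguous middle to a human moderator. The sketch below is a generic illustration of this pattern, with invented thresholds; it does not describe Bodyguard.ai's actual system.

```python
# Assumed thresholds on a model's toxicity score in [0, 1]; tuning them
# trades automation volume against the risk of wrongful removal.
REMOVE_THRESHOLD = 0.9    # near-certain toxicity  -> remove automatically
PUBLISH_THRESHOLD = 0.1   # near-certain benign    -> publish automatically

def route(toxicity_score: float) -> str:
    """Decide what to do with a comment given its toxicity score."""
    if toxicity_score >= REMOVE_THRESHOLD:
        return "auto-remove"
    if toxicity_score <= PUBLISH_THRESHOLD:
        return "auto-publish"
    return "human-review"  # the ambiguous middle goes to a moderator

print(route(0.97))  # auto-remove
print(route(0.02))  # auto-publish
print(route(0.55))  # human-review
```

Only uncertain cases consume moderator time, so the human budget is spent exactly where context and judgment matter most.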
