In 2019, Microsoft’s Digital Civility Index revealed that the civility of online conversation had reached a four-year low. Social platforms and online forums were rife with toxic interactions, dominated by vicious political arguments and attacks on people’s physical appearance.
What should be a positive social force – the ability to discuss and debate freely online – had apparently degenerated into a mire of hostility. And the sites hosting those conversations were paying the price. Facebook saw brands pulling ads from its platform, and the largely unmoderated “free speech” platform Parler was shut down entirely by its hosting companies.
In the time since, OpenWeb has built advanced AI and Machine Learning-powered technology that pushes back against toxicity and incentivizes positive conversations. Across the more than 1,000 publishers with whom we partner, high-quality conversations are on the rise. And there’s a major reason for that: toxic online comments aren’t just hurtful for users, they’re also bad for business. So how can publishers stop them from polluting comments sections intended for quality discussion?
What do toxic conversations look like?
First, it’s worth defining what we mean by toxic comments. They largely fall into two categories: hate speech and online harassment (aka trolling).
There’s vigorous debate over what constitutes hate speech, but the United Nations defines it as “any kind of communication that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, color, descent, gender or other identity factor”.
The UN Human Rights Office of the High Commissioner reported in March 2021 that hate speech is on the rise worldwide, and that three-quarters of it is directed at minority groups.
Online harassment is another umbrella term that can be applied to many different kinds of comments. Pew Research Center categorizes it into six types: purposeful embarrassment, offensive name-calling, physical threats, stalking, sustained harassment, and sexual harassment. It says that 41% of Americans experienced at least one of these online in 2020.
Why do toxic conversations happen?
Unpleasant or threatening comments aren’t just the work of malicious trolls and bots. People who are normally perfectly polite can easily tip over into toxicity in certain situations. Let’s look at three of the biggest drivers of toxic conversations online.
The online disinhibition effect: Psychologists have observed that we lose some of our usual inhibitions when we communicate online, especially when our identity is unknown (either to the publisher hosting the platform or to the other people using it). That can work well to open up debate, but it also removes the social barriers to saying hurtful things.
Mood and ambience: Researchers from Stanford and Cornell studying trolling behavior noticed two key trigger mechanisms: the individual’s own mood, and seeing other people posting troll posts. They found that if we’re in a bad mood and we see other people trolling, we’re twice as likely to post trolling comments ourselves.
Platform design: Some social platforms are designed to provoke polarized arguments and reward those who participate in them. In 2020, for example, New Zealand’s Digital Cultures Institute found that Facebook’s Feed “privileges incendiary content, setting up a stimulus–response loop that promotes outrage expression.” The result may be strong user engagement, but it can come at the expense of quality discourse.
How OpenWeb helps publishers push back against toxicity and build a healthy online community
The Parler example mentioned earlier shows how an online community can rapidly degenerate when comments are left unmoderated. Publishers who want to cultivate a healthy online community around their content need a reliable and automated way to drive out toxicity and instead encourage respectful debate.
The multi-layered moderation in OpenWebOS is designed to do just that, with a three-pronged approach to identifying toxic comments and stopping them in their tracks:
Understand individual users: OpenWeb incentivizes casual readers to become registered users, whose behavior, reputation, and community influence can then be better understood. Those insights feed into our moderation tech, allowing engaged, respected, and respectful users to be recognized for their contribution to debates and discussions.
Identify and filter out toxicity: Our AI and ML-driven moderation software monitors millions of conversations to filter out toxic comments and behavior, using its deep understanding of conversational nuances including language ambiguity, contextual toxicity, and local slang. At the same time, community members can manually flag questionable comments for review by our human moderators, who can then take appropriate action.
Encourage and reward quality conversation: Our moderation tech identifies quality comments and brings them to the top of the conversation, leading to healthier discussions and greater engagement. Commenters who advance thoughtful arguments or share expert knowledge are rewarded with higher exposure, increasing their influence in the community.
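To make the two automated steps above more concrete – filtering out toxic comments, then surfacing the highest-quality ones – here is a deliberately simplified sketch in Python. OpenWeb’s production systems use proprietary ML models; the keyword lexicon, the scoring formulas, and the `Comment` fields below are all illustrative stand-ins, not the real implementation.

```python
from dataclasses import dataclass

# Toy stand-in for a learned toxicity model. In practice this would be
# an ML classifier that handles ambiguity, context, and slang.
TOXIC_TERMS = {"idiot", "stupid", "hate"}


@dataclass
class Comment:
    author: str
    text: str
    author_reputation: float  # 0.0-1.0, e.g. derived from registered-user history


def toxicity_score(text: str) -> float:
    """Fraction of words matching the toxic lexicon (toy heuristic)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TOXIC_TERMS for w in words) / len(words)


def quality_score(comment: Comment) -> float:
    """Reward substantive, non-toxic comments from reputable users."""
    length_bonus = min(len(comment.text.split()) / 50, 1.0)
    return (1 - toxicity_score(comment.text)) * (
        0.5 * comment.author_reputation + 0.5 * length_bonus
    )


def moderate(comments: list[Comment], toxicity_threshold: float = 0.2) -> list[Comment]:
    """Drop comments over the toxicity threshold, then rank the rest
    so higher-quality contributions appear at the top of the thread."""
    kept = [c for c in comments if toxicity_score(c.text) < toxicity_threshold]
    return sorted(kept, key=quality_score, reverse=True)
```

Even in this toy form, the design mirrors the approach described above: identity signals (reputation) and content signals (toxicity, substance) combine so that respectful, engaged commenters earn more visibility, while toxic comments never reach the thread at all.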
A more civil online world is better for business, too
More and more publishers are recognizing that hosting quality conversations is great for business as well as for society. Quality conversations attract engaged users who register as community members, stay longer, and return more often – and a registered user delivers 218% more revenue than an unregistered one. Learn more about how you can use OpenWeb moderation to drive out toxicity and host quality conversations on your own site.