After the recent riots at the U.S. Capitol, the role that social media played in creating this unprecedented moment of division is top of mind.
In the aftermath, entire social networks have been taken offline, and the President (along with thousands of accounts promoting the QAnon conspiracy) has been banned from nearly every major social media platform.
With our divisions clearer than ever, there has never been a greater need for civil, productive communication.
But how, on the internet, can we allow users to speak freely while maintaining the level of respect and civility required for productive, enriching conversations? Effective moderation.
And we at OpenWeb know that publishers who get this right grow engaged, loyal communities that keep coming back for quality conversations.
Today, we’re taking a look at how, during and after the attacks on the U.S. Capitol, our moderation technology aided in mitigating toxicity and creating healthier interactions.
Our AI-driven moderation platform adapts in real-time—stopping toxic comments in their tracks
In the aftermath of the Capitol attacks, we saw more people spending time in the comment section reading about it, seeing what others were saying about it, and discussing it.
- Across our network, time spent in the comment section rose 36.6% as the events of January 6th unfolded.
- On January 7th, we saw notable spikes in the number of people commenting (84.4%) and replying to comments (77.4%).
But news like this can spark toxicity. During this period (from January 6-7), our Conversation Survey Score—which tracks the ebb and flow of toxicity based on feedback from users across our network—dropped by 23% compared to the previous week’s average.
At the same time, the percentage of comments approved for publication (our comment approval rate) dropped by 9%.*
What does this all mean?
The approval rate dropped as toxicity increased. If the approval rate stayed the same while toxicity increased, that would mean toxic comments were falling through the cracks.
Instead, our technology adapted and prevented offensive comments from seeing the light of day.
On January 8th, right after we saw the initial dip, the Conversation Survey Score began rising again, indicating that the quality of conversations had improved, which is exactly what we want to see.
How does our moderation platform work?
Our AI-driven moderation allows us to keep improving our platform to support healthy conversations. We integrate with Jigsaw’s Perspective API, a tool that predicts whether a comment is likely to be toxic, to simplify the moderation process. Building on these predictions, a “nudge” feature encourages users to edit potentially harmful comments in real time, before they are ever published.
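To make this concrete, here is a minimal sketch of how a Perspective-API-based toxicity check with a nudge threshold might look. The endpoint URL and request shape follow Google's public Perspective API documentation; the threshold value, function names, and mocked response are illustrative assumptions, not OpenWeb's actual implementation.

```python
# Hypothetical nudge logic built on the Perspective API's TOXICITY attribute.
# A real integration would POST build_request() to ANALYZE_URL with an API key.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
NUDGE_THRESHOLD = 0.7  # assumed cutoff; a real system would tune this per community


def build_request(comment_text: str) -> dict:
    """Build the JSON body for a Perspective API TOXICITY request."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }


def should_nudge(response: dict, threshold: float = NUDGE_THRESHOLD) -> bool:
    """Return True if the toxicity score warrants nudging the user to edit."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold


# Example using a mocked API response instead of a live network call:
mock_response = {
    "attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.92}}}
}
print(should_nudge(mock_response))  # True -> prompt the user to rephrase
```

The key design point is that the nudge happens client-side and in real time: the comment is scored before it is submitted, so the user has a chance to self-correct rather than having the comment silently rejected later.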
Additionally, we use a proprietary moderation algorithm that identifies and removes behavior that puts a community at risk.
This multi-layered platform—along with the continuous feedback loop we have in our Conversation Survey—allows us to improve and adapt our moderation to any situation.
Publishers can rest assured that toxicity is kept in check no matter what topic is dominating the news—and in this environment, quality conversations can flourish.
The path forward: publishers can be hosts of civil discourse
We believe that publishers have the power to create environments where readers engage in quality conversations—even when a polarizing topic dominates the news.
Moderation lets publishers overcome toxicity and host the quality conversations absent from so much of the web.
With great power comes great responsibility.
Since we know that more users spend time engaging in the comment section as major news breaks, it’s important for publishers to be prepared for a surge in activity—and have a solution to manage any potentially harmful comments.
In addition to using moderation technology, here’s how publishers can encourage healthy conversations.
- Journalists and writers should participate in the conversation. Interact with readers by addressing their questions and concerns—and let them know you’re listening.
- Consider posting a featured question to your community to guide the conversation.
- Finally, don’t hide from toxicity. When polarizing news breaks, there is an even greater need to be a part of the conversation.
*Compared to our baseline approval rate.