Humans vs Algorithms

By Ben Lang

Moderation. We have heard quite a bit about this term lately. With social media giants Facebook and Twitter making headlines for all the wrong reasons, moderation seems to have gained a bad reputation. But it doesn't have to be that way. Moderation is not synonymous with censorship. It is possible to build a holistic approach that creates a healthier experience for the community and, in turn, a brand-safe environment for publishers.

When it comes to content moderation at OpenWeb, we firmly believe that encouraging healthy behavior is just as important as removing bad actors. Yes, we have the AI and machine learning algorithms, we work with trusted API partners such as Google's Perspective, and we block toxic, offensive, and spammy content. But it doesn't stop there. A recent academic study on civility made a significant observation about online behavior: "when people are exposed to civil content, they are much more likely to create civil content themselves." This research demonstrates that we can combat incivility with a framework that considers the relationship between role-model behaviors and observer responses.

The path to breaking the downward spiral and creating open forums where everyone's voice matters is straightforward.

Here are the key product features and tactics publishers should keep in mind as they encourage diverse voices to enter the public conversation.

Quality starts with knowing the user

As a leading audience engagement and conversation platform, OpenWeb's core value begins with millions of individual users. We provide them a forum to hold healthy debates and share diverse opinions across hundreds of sites. In turn, this activity generates data that lets us observe and analyze user behavior over time and within specific environments: how users interact, what they like, how they influence the community, and how they respond to nudges and to others' behavior. These insights feed high-level user models that guide the moderation and conversation experience.
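
To make that concrete, here is a minimal Python sketch of the idea. The event names and the scoring rule are hypothetical stand-ins for the far richer signals a real platform would track:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical event types; a real platform tracks far richer signals.
POSITIVE_EVENTS = {"comment_approved", "like_received", "reply_received"}
NEGATIVE_EVENTS = {"comment_rejected", "flag_received"}

@dataclass
class UserProfile:
    """Rolling per-user behavior summary used to guide moderation."""
    positive: int = 0
    negative: int = 0

    @property
    def civility_score(self) -> float:
        total = self.positive + self.negative
        return self.positive / total if total else 0.5  # neutral prior

def build_profiles(events):
    """Aggregate (user_id, event_type) pairs into per-user profiles."""
    profiles = defaultdict(UserProfile)
    for user_id, event_type in events:
        if event_type in POSITIVE_EVENTS:
            profiles[user_id].positive += 1
        elif event_type in NEGATIVE_EVENTS:
            profiles[user_id].negative += 1
    return profiles

events = [("alice", "like_received"), ("bob", "flag_received"),
          ("alice", "reply_received")]
for user, profile in build_profiles(events).items():
    print(user, round(profile.civility_score, 2))
```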

Multi-layered automatic moderation

The scale of the open web presents challenges, from uncivil conversations to trolls and bullies. We need best-in-class tools to ensure the right checks and balances are in place, and to reduce behaviors that put the publisher and community at risk. As the hosts of the discussion, our partners enforce their own unique rules and standards, creating a contextual, controlled environment. These guidelines are layered on top of our automatic moderation technology, which is powered by proprietary AI and machine learning algorithms that strive to think like a human. To get this right, it's essential that we build a system that understands the nuances of a conversation: language ambiguity, contextual toxicity, local slang, user sentiment, threat detection, and more, so it can make the right decisions at scale. We also work with several third-party APIs to bolster our core tech.
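
As a rough illustration of the layered approach, the sketch below chains publisher rules with a stand-in toxicity check. The function names, rule set, and threshold are hypothetical, not OpenWeb's actual pipeline:

```python
from typing import Callable, Optional

# Each layer returns a rejection reason, or None to pass the comment along.
Layer = Callable[[str], Optional[str]]

def publisher_rules(text: str) -> Optional[str]:
    """Publisher-specific guidelines; this blocked-term list is a stand-in."""
    blocked_terms = {"spamlink.example"}
    if any(term in text.lower() for term in blocked_terms):
        return "violates publisher guidelines"
    return None

def toy_toxicity_score(text: str) -> float:
    # Toy heuristic purely for this sketch; a real system uses trained models.
    return 0.9 if "idiot" in text.lower() else 0.1

def toxicity_model(text: str) -> Optional[str]:
    """Stand-in for the ML layer and third-party API checks."""
    return "toxic content" if toy_toxicity_score(text) > 0.8 else None

def moderate(text: str, layers: list[Layer]) -> str:
    """Run each layer in order; the first rejection wins."""
    for layer in layers:
        reason = layer(text)
        if reason:
            return f"rejected: {reason}"
    return "approved"

print(moderate("You are an idiot", [publisher_rules, toxicity_model]))
print(moderate("Great article, thanks!", [publisher_rules, toxicity_model]))
```

Running the publisher's rules first keeps the host's standards authoritative, while the machine learning layers handle nuance at scale.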

Quality enhancement and curation

To create a domino effect of quality breeding more quality, we need to identify the best content and find avenues to boost it. 86% of users only see the discussions at the top, so it is crucial that the highest-quality content and creators are surfaced there, where they can inspire more positive contributions. Our unique sorting and exposure metering identifies and promotes civil and thoughtful comments, providing another layer of protection to users and publishers alike.
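
In spirit, sorting and exposure metering might look like the toy ranking below. The weight and normalization are illustrative assumptions; a real system would tune them per site and per community:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    civility: float   # 0..1, e.g. produced by the moderation layers above
    engagement: int   # likes, replies, and similar signals

def quality_rank(comments, civility_weight=0.7):
    """Sort comments so civil, engaging contributions surface at the top."""
    def score(c: Comment) -> float:
        engagement_norm = min(c.engagement / 10, 1.0)  # crude normalization
        return civility_weight * c.civility + (1 - civility_weight) * engagement_norm
    return sorted(comments, key=score, reverse=True)

ranked = quality_rank([
    Comment("hot take, no sources", civility=0.4, engagement=9),
    Comment("Thoughtful response citing the study.", civility=0.95, engagement=4),
])
print([c.text for c in ranked])  # the civil comment ranks first
```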

What happens when someone attempts to spread toxicity or hate in one of OpenWeb's communities? We step in and encourage better behavior. We recently collaborated with Jigsaw and its Perspective API to showcase how gentle nudges can steer the conversation in the right direction, reinforcing our belief that quality spaces and civility cues drive a better, healthier conversation experience.
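
A minimal sketch of the nudge flow, assuming a placeholder scoring function in place of a real toxicity scorer such as Perspective:

```python
NUDGE_THRESHOLD = 0.7  # illustrative; real thresholds are tuned per community

def pre_submit_check(draft: str, score_fn) -> str:
    """Nudge the author before posting instead of silently rejecting.

    `score_fn` is a placeholder for a toxicity scorer such as Perspective.
    """
    if score_fn(draft) >= NUDGE_THRESHOLD:
        return ("Your comment may go against community guidelines. "
                "Consider rephrasing before you post.")
    return "posted"

print(pre_submit_check("You are an idiot", lambda text: 0.9))
print(pre_submit_check("I disagree, and here's why...", lambda text: 0.1))
```

The key design choice is to intervene before publication, giving the author a chance to self-correct rather than silently removing the comment afterward.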

Driving healthy conversations with Reputation

We believe in the power of autonomous communities. Instead of dictating the ideal standards for quality, we encourage communities to promote civility from within. Our user Reputation mechanism operates on feedback from fellow community members. We empower the community to recognize and reward quality content with reputation points, likes, upvotes, and follows. The more positive impact users have on the community, the more standing they gain; the more civil they are, the larger their reach and influence, creating a virtuous cycle of positive engagement.
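
The mechanics can be sketched roughly as below. The point values and reach cap are hypothetical stand-ins for what is, in practice, an internal product decision:

```python
from collections import Counter

# Hypothetical point values; the actual weights are a product decision.
POINTS = {"like": 1, "upvote": 2, "follow": 5}

def reputation(feedback_events) -> Counter:
    """Tally community feedback into per-user reputation points."""
    scores: Counter = Counter()
    for user_id, kind in feedback_events:
        scores[user_id] += POINTS.get(kind, 0)
    return scores

def reach_multiplier(points: int) -> float:
    """More reputation means more exposure, capped to avoid runaway influence."""
    return min(1.0 + points / 100, 2.0)

scores = reputation([("alice", "upvote"), ("alice", "follow"), ("bob", "like")])
for user, pts in scores.items():
    print(user, pts, round(reach_multiplier(pts), 2))
```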

Publishers and social platforms have the opportunity, as well as the tools, to create quality environments that consistently spur civil conversations, improve user experiences, and reduce churn. It's time to take a step back and look at the bigger picture of how a humanistic approach to moderation can deliver better, more meaningful outcomes.

Let’s have a conversation.

Right now, OpenWeb works with a limited number of partners in order to provide the highest-quality service to each and every one. Let us know you're interested, and stay informed about how OpenWeb is empowering publishers and advertisers to change online conversations for good.