News

How OpenWeb provides a safer platform for brands.

By Nadav Shoval

# Stop Hate for Profit

We started OpenWeb for two reasons: to provide safe places for people to converse online, and to support the media industry. We saw what was happening to publishers as they created content, found audiences, and then lost them to big social platforms who knew how to monetize user attention and data. We wanted to provide publishers a platform to truly engage their audiences and we knew that conversations were the way to do it. 

Reducing toxicity is critical to building a healthy society, online and off. Freedom of expression does not mean platforms should allow the spread of hate, racism, discrimination, or violence. Technology should be part of the solution to the polarization of our society, not the problem. In all our study of technology, communication, and social media, one thing is clear: moderation does not equal censorship. The findings of the civil rights audit of Facebook released this week affirm that position.

“Elevating free expression is a good thing, but it should apply to everyone,” the report says. “The prioritisation of free expression over all other values, such as equality and non-discrimination, is deeply troubling to the auditors.” 

Individuals are responsible for the things that they say — but when a platform hosts these ideas and then actively promotes the spread of hateful and harmful content in order to monetize it, the platform is responsible too. We need to demand more from the hosts of society’s conversations.

Here’s a look at some of the ways we actively address issues of racism and hate on our platform and how we are fundamentally different from other big social platforms: 

  1. Layers of moderation. Independent sources. Real humans.

Having multiple sources for moderation is critical. OpenWeb has its own AI, trained on the millions of conversations that occur on our platform each month; it helps us determine what might be considered hateful or toxic for each specific community we serve. But we also work with several third parties, including Google and IBM Watson, to add additional layers of moderation from independent sources that might have caught or learned something ours hasn’t. A diversity of voices needs to be examined by a diversity of tools.

And if this multi-layered approach weren’t enough, we also empower users to evaluate posts they find offensive and to take action on them. Community moderators add yet another layer to creating healthy communities.

Last but not least, if your comment is flagged as toxic, but not captured by our AI, a human reviews it. Yes, a real person. To ensure transparency for the user, we also explain why something is removed. No black boxes or unknowns about why something is flagged or removed.
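The layers above can be sketched as a simple pipeline: independent classifiers each vote on a comment, every reason is recorded so nothing is a black box, and anything flagged goes to a human reviewer. This is a purely illustrative sketch under assumed mechanics — the classifier functions and data shapes here are hypothetical stand-ins, not OpenWeb’s actual implementation or the real Google/IBM Watson APIs.

```python
# Illustrative multi-layer moderation pipeline (assumed mechanics, not
# OpenWeb's real system): each independent classifier votes, and all
# reasons are collected so removals can be explained to the user.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class ModerationResult:
    flagged: bool
    reasons: List[str] = field(default_factory=list)  # shown to the user for transparency

def moderate(comment: str,
             classifiers: List[Callable[[str], Optional[str]]]) -> ModerationResult:
    """Run every classifier; a comment is flagged if any layer objects."""
    reasons = [r for clf in classifiers if (r := clf(comment)) is not None]
    return ModerationResult(flagged=bool(reasons), reasons=reasons)

# Hypothetical stand-ins for the in-house model and a third-party service.
def in_house_model(text: str) -> Optional[str]:
    return "toxic language" if "hate" in text.lower() else None

def third_party_api(text: str) -> Optional[str]:
    return "harassment" if "stupid" in text.lower() else None

result = moderate("I hate this take", [in_house_model, third_party_api])
print(result.flagged, result.reasons)  # a flagged comment would then queue for human review
```

Keeping each layer as an independent callable mirrors the post’s point: new sources of moderation can be added without rewriting the pipeline.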

  2. We incentivize quality, not extremes.

Throughout our 8-year history, we’ve focused on using technology to encourage quality discourse. Every decision we make as a company is guided by this north star. It begins with simple and beautiful UX, continues through how we encourage and incentivize thoughtfulness in dialogue, and ends with transparency to our users and our publishing partners.

Something I’m particularly proud of at OpenWeb is the development of our own Quality Score™. This score evaluates comments’ exposure, the number of removed or flagged comments, and overall engagement and time spent in conversations to give publishers feedback on how their communities are doing. Over the last three months alone, we’ve seen quality increase nearly 30% across the board.
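One way to picture combining the signals named above (exposure, removals/flags, engagement, time spent) is a weighted aggregate. The weights, normalization, and formula below are assumptions made purely for illustration — the real Quality Score™ methodology is not public.

```python
# Illustrative aggregation of community-health signals into a 0-100 score.
# The weights and the 10-minute time cap are invented for this sketch;
# they are NOT the actual Quality Score formula.
def quality_score(exposure: float, removed_ratio: float,
                  engagement: float, minutes_in_conversation: float) -> float:
    """exposure, removed_ratio, engagement are normalized to 0..1."""
    time_signal = min(minutes_in_conversation / 10.0, 1.0)  # cap credit at 10 minutes
    raw = (0.3 * exposure            # how widely comments are seen
           + 0.3 * engagement        # replies, likes, return visits
           + 0.2 * time_signal       # time spent in conversations
           + 0.2 * (1.0 - removed_ratio))  # fewer removals -> higher quality
    return round(100 * raw, 1)

print(quality_score(exposure=0.8, removed_ratio=0.05,
                    engagement=0.6, minutes_in_conversation=5))  # 71.0
```

The useful property of any formulation like this is that it gives publishers one trackable number while still penalizing the removal rate directly.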

Active users on our platform earn Reputation points as they engage with the community. The more positive impact a user has on the community through thoughtful and valuable comments, the more points they gain. Should a user post something that’s ultimately flagged or removed, they lose points. Reputation points give users extended reach, exposure, and access to privileges like the ability to create their own conversation threads. In other words, access and reach are earned.
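The earn/lose mechanics described above can be sketched as a small ledger. The point values and the thread-creation threshold here are hypothetical, chosen only to illustrate the idea that privileges unlock as reputation accumulates.

```python
# Illustrative reputation ledger (assumed mechanics, not OpenWeb's real
# system): valuable comments earn points, removed comments cost points,
# and privileges unlock at a point threshold.
class Reputation:
    THREAD_CREATION_THRESHOLD = 100  # hypothetical privilege cutoff

    def __init__(self) -> None:
        self.points = 0

    def reward_comment(self, quality_points: int) -> None:
        """Thoughtful, valuable comments add to the user's standing."""
        self.points += quality_points

    def penalize_removal(self, penalty: int = 10) -> None:
        """A flagged-and-removed comment costs points, floored at zero."""
        self.points = max(0, self.points - penalty)

    def can_create_threads(self) -> bool:
        """Reach and privileges are earned, never bought."""
        return self.points >= self.THREAD_CREATION_THRESHOLD

user = Reputation()
user.reward_comment(120)
user.penalize_removal()
print(user.points, user.can_create_threads())  # 110 True
```

The floor at zero and the threshold gate capture the post’s core claim: access and reach are a function of contribution, not payment.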

  3. Reach can only be earned organically.

Unlike other platforms, you cannot pay to have your account promoted on OpenWeb or within any of our 700 communities. You cannot buy influence within the community you are commenting and engaging in. If you have something to say, the community will decide how valuable it is, and that drives exposure for your message. Whether you are a student or the leader of a country, your comments are subject to the specific community guidelines. This is very different from allowing the loudest voices or largest external followings to be magnified.

How does this fight hate speech? Because everyone is subject to community feedback. Toxic ideas can’t spread the way they do on big social networks, and no one is entitled to reach, regardless of who they are.

  4. We are decentralized.

OpenWeb is not one single community. We are hundreds of communities across different publishers and content sites. From a reader’s or commenter’s perspective, this means that no single idea or comment can spread rampantly or easily without multiple layers of review. A toxic comment or racist remark cannot reach the masses. It’s that simple.

On Facebook, toxic ideas and hate can spread like wildfire because billions of people are hosted on a single platform.

  5. Conversations are publisher-led and publisher-hosted.

OpenWeb inspires and enables conversations within independent publisher communities, not on a stand-alone site where one set of rules is applied to all. Commentary is hosted by publishers with their own journalistic standards and integrity, and these publishers set their own guidelines for community behavior. These guidelines are layered on top of our own rigorous moderation process (see #1) to ensure the safest possible platform for brands to target audiences. 

The movement to Stop Hate for Profit is a long and serious one that starts and ends with each of us. We can boycott and sign petitions, and we can make changes in how we use and activate the web. But to function well as a society we need more choices on where to communicate. And we need to support the many places and platforms that host diverse voices and groups to keep our democracy alive. Let’s prioritize conversations and openness, but not at the expense of spreading hate. 

Together we can create a healthier, safer open web.

Nadav
