OpenWeb Moderation Standards
Last Updated July 22, 2021
OpenWeb has a vision for a healthier web. We empower publishers to build communities and host vibrant discussions between readers. We host, and will continue to host, a wide range of thought and discussion.
We believe that toxicity, hostility, hate, violence, personal attacks, and the release of personal information have no place on the web. We take pride in playing our part in providing publishers and their communities with a safer environment where higher quality dialogue can take place.
We’re serious about our vision and our values. Bringing these to life means that we must develop, and continually improve, global standards that harmonize our actions with our vision and values. To that end, we are making public two evolving policies—sets of global rules affecting our business relationships and practices—regarding:
- The journalistic integrity of publishers with which we partner—called OpenWeb Publisher Standards (available here); and
- The moderation of user-generated content—called OpenWeb Moderation Policy.
As laws, regulations, and widely accepted norms continue to evolve, so will these guidelines, informed by our product experience and research, consultation with third-party experts, reflection on industry best practices, and feedback from the communities we support around the web. We have enforced and will continue to enforce these guidelines with the intention of creating a more open, healthier web.
OpenWeb Moderation Standards and User-Generated Content (UGC) Guidelines
OpenWeb hosts millions of conversations across thousands of sites. We endeavor to improve the quality of conversation online, and we take two distinct approaches to produce this outcome.
First, our technology strives to incentivize and promote healthy conversations. We continuously improve our technology to promote constructive, reciprocal conversations over toxic (even if extremely engaging) user contributions.
Second, we endeavor to demote and remove toxic conversations. We build and deploy tools such as language detection, user reputation scores, community signals, and sorting algorithms to help detect toxic content and remove it or reduce its prominence.
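To make the demote-and-remove approach concrete, here is a minimal, hypothetical sketch of how such signals might combine. The thresholds, field names, and scoring are illustrative assumptions for this document, not OpenWeb's actual technology.

```python
from dataclasses import dataclass

# Assumed, illustrative thresholds -- not OpenWeb's real values.
TOXICITY_REMOVE = 0.9   # above this, the message is removed outright
TOXICITY_DEMOTE = 0.6   # above this, the message sinks in the ranking

@dataclass
class Comment:
    text: str
    toxicity: float      # e.g. output of a language-detection model, 0..1
    reputation: float    # author's reputation score, 0..1
    reports: int         # community signals (user reports)

def moderate(comments):
    """Remove clearly toxic comments, then sort the remainder so that
    constructive contributions rank above borderline ones."""
    kept = [c for c in comments if c.toxicity < TOXICITY_REMOVE]
    # Borderline comments sort last; among peers, higher reputation and
    # fewer reports rank higher.
    return sorted(
        kept,
        key=lambda c: (c.toxicity >= TOXICITY_DEMOTE, -c.reputation, c.reports),
    )
```

In this sketch, removal and demotion are two distinct outcomes of the same scoring pass, mirroring the "remove or reduce the prominence" language above.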
In doing so, we differentiate between global and community user-generated content (UGC) Standards.
“Global Standards” are norms of behavior defined and maintained by OpenWeb and applied to every conversation that OpenWeb hosts, no matter the publisher.
“Community Standards” are community-specific norms of behavior, defined by publishers and/or by the community. These are additive rules and guidelines, applied only to that specific community.
Community Standards are “downstream” of Global Standards, meaning that all Community Standards across the OpenWeb network must not violate the letter or spirit of the Global Standards.
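The layering described above can be sketched in a few lines. The predicate names and the stand-in detection logic below are hypothetical illustrations, not OpenWeb's API: the point is only that Global Standards are checked everywhere first, and Community Standards add checks on top without ever loosening them.

```python
# Illustrative sketch only; the detection here is a stand-in.
def violates_global_standards(text: str) -> bool:
    banned = ("doxx", "threat")  # placeholder for real classifiers
    return any(term in text.lower() for term in banned)

def allowed(text: str, community_rules) -> bool:
    """Global Standards apply to every community; Community Standards
    are additive checks layered on top for one specific community."""
    if violates_global_standards(text):
        return False  # no community rule can override the global baseline
    return all(rule(text) for rule in community_rules)

# Example: a hypothetical family-friendly community additionally bans profanity.
family_rules = [lambda t: "damn" not in t.lower()]
```

Because community rules run only after the global check passes, a community can tighten standards but never relax them, which is what "downstream" means here.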
1. OpenWeb’s Global Standards
- No Personal Attacks
This includes any attack on the identity or personal characteristics of an individual.
- No Violent Behavior
This includes direct threats and incitement to violence.
- No “Doxxing”
This prohibits publishing private or otherwise personally identifying information about a particular individual.
- No Hate Speech
This prohibits any abusive or threatening speech that expresses prejudice against a particular group, especially on the basis of race, gender, religion, or sexual orientation.
- No Spam
This prohibits spam, defined as irrelevant contributions made for the purpose of promoting a website, business, or other venture.
Exception for Public Figures
Discussions of a “public figure” (a person of great public interest or familiarity) are subject to eased standards, depending on the type and severity of the violation and on relevance to the topic of conversation. We believe public figures can be, to a greater extent than private individuals, subject to behavior that would otherwise violate our policy. This is because we believe critical discussion of, and the public airing of opinions about, public figures and their actions are an important part of free expression.
2. Community Standards
Norms of behavior differ depending on the culture, environment, and/or the context of the conversation. We acknowledge that language and topics that are acceptable in one setting may not be acceptable in another.
OpenWeb’s tools empower communities to leverage advanced moderation to suit their needs. As such, we allow each publisher to create additional, additive standards that apply to their community. As stated above, these additional standards cannot violate the letter or spirit of the Global Standards.
Enforcement of the Global Standards is new and just beginning. Previously, OpenWeb allowed publishers to opt out of baseline moderation; with this policy, we begin enforcing the approach described in this document.
OpenWeb develops proprietary technology, and partners with other technology companies, to improve the health of conversations online.
If certain actions by an existing partner constitute participation in, or encouragement of, content that violates the Global Standards, we will reach out and attempt to resolve the matter amicably, including by assisting in establishing internal guidelines and rules for content creators, community managers, or other parties.
Additional enforcement of these standards is ongoing. Through our technology:
- We remove more than 2M unsafe messages every month;
- More than 95% of messages viewed are deemed “safe”; and
- OpenWeb users have reported a 20% increase in perceived conversation health over the last 12 months.
As with all policies, we seek to constantly improve our enforcement and our standards. In the future, we will continue to incorporate new academic and private resources, as well as new product research and partner feedback into our policies and guidelines.