
The Moderation Debate: Defining Toxicity and Creating Safer Environments Online

By OpenWeb

We talk a lot about a “crisis of toxicity” online, and about our commitment to improve this state of affairs. But what do we mean? In trying to work out a functional definition—one that might guide us towards a “less toxic” future online—there are some crucial distinctions to be made, ones that are often missed in the ongoing toxicity debate.

Understanding “toxicity”

In theory, everyone is against “online toxicity.” The term is vague enough, in general usage, that almost anyone can earn goodwill by pledging to combat it. Further complicating things, the term is used so often—in op-eds, press releases, Congressional committee hearings—that it can mean different things in different contexts.

So, first: there are some clearly toxic behaviors online that nearly everyone agrees should not be tolerated. These constitute “global norms,” and include personal attacks, incitements to violence, doxxing, hate speech, knowingly spreading disinformation, and more.

At OpenWeb, our Global Moderation Standards and Publisher Standards clearly lay these out, drawing a line around what content is acceptable online—whether it comes from a user or a publisher—and what isn’t. Any definition of toxicity needs to center these kinds of outright violations, and must take seriously finding ways to curb them.

These sorts of global norms are needed to maintain a strong baseline of healthy, lively, and open dialogue on the internet, norms that respect the right to freedom of speech. While all publishers in the OpenWeb network agree to adhere to these global standards, individual communities are empowered to layer additional guidelines on top of their content moderation protocol, ones that reflect their “localized” norms. On traditional social media platforms, by contrast, centralized enforcement of global norms means that communities—each undoubtedly with its own particular standards for permissible content—are squeezed into a one-size-fits-all approach to moderation.
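To make that layering concrete, here is a minimal sketch of how a community policy could sit on top of a global baseline. The rule names, the set-based model, and the helper function are assumptions made for illustration; they are not OpenWeb’s actual configuration.

```python
# Minimal sketch of layered moderation rules: a global baseline that every
# community inherits, plus additive, community-specific ("localized") rules.
# All rule names here are hypothetical, not OpenWeb's real rule set.

GLOBAL_STANDARDS = {
    "personal_attacks",
    "incitement_to_violence",
    "doxxing",
    "hate_speech",
    "knowing_disinformation",
}

def community_policy(local_rules: set) -> set:
    """Return a community's full policy: the global baseline plus its own additions."""
    return GLOBAL_STANDARDS | local_rules

if __name__ == "__main__":
    # A hypothetical sports community also disallows spoilers and betting spam.
    sports_policy = community_policy({"spoilers", "betting_spam"})
    print(sorted(sports_policy))
```

The union captures the “additive” idea in the paragraph above: local rules can only add restrictions on top of the baseline, never subtract from it.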

The importance of open dialogue

The toxic behaviors outlined above—these global norms—define “online toxicity” for us. That is what we mean when we talk about the crisis of toxicity.

However, there is something else that often comes up, and it is much trickier. Call it what you want: spin, opinion, slant, bias, twist, angle, belief. We’ll call it “opinion” for our purposes here, and this is where the fight against toxicity online can get more confusing.

The fact is, many in the debate around online toxicity will rush to label opinions they do not like as “toxic.” Whether those opinions are correct is, for the purposes of this discussion, largely irrelevant. It is not the place of the moderator—the one enforcing the norms outlined above—to evaluate the relative merit of a given argument or to “moderate out” opinions they determine to be incorrect.

The job of online conversation platforms is to intelligently surface potentially problematic user content to the moderator. The job of a moderator is to use such a tool to root out actual toxicity (the kind described above) and to let reason rise to the top. This often means letting repellent or ill-informed views be expressed, so long as the baseline requirements for healthy discussion online (those global and/or community norms) are met. It is the duty of other participants in the discussion—not the moderator—to argue against or defeat these views. 
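As a rough illustration of that division of labor, here is a hypothetical sketch of the surfacing step: a simple heuristic flags comments that may cross a global or community norm and queues them for a human moderator, who makes the final call. The keyword heuristic, data structures, and field names are assumptions for illustration, standing in for a real toxicity model; they do not describe OpenWeb’s actual tooling.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical keyword heuristic standing in for a real toxicity model.
FLAGGED_TERMS = {
    "personal_attack": ("idiot", "moron"),
    "incitement": ("attack them", "hurt them"),
}

@dataclass
class ReviewItem:
    comment_id: str
    category: str
    text: str

def surface_for_review(comment_id: str, text: str) -> Optional[ReviewItem]:
    """Flag a comment that may violate a norm; the tool never removes opinions."""
    lowered = text.lower()
    for category, terms in FLAGGED_TERMS.items():
        if any(term in lowered for term in terms):
            return ReviewItem(comment_id, category, text)
    return None  # Unflagged comments, including unpopular opinions, stay up.

if __name__ == "__main__":
    candidates = (
        surface_for_review("c-1", "I disagree completely with this column."),
        surface_for_review("c-2", "You are an idiot and should be silenced."),
    )
    for item in (c for c in candidates if c is not None):
        # A human moderator reviews each flagged item against the stated norms.
        print(f"review {item.comment_id}: possible {item.category!r}")
```

The design point mirrors the prose: the tool only surfaces candidates for review against the stated norms; it does not score the merit of an opinion.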

Of course, those with whom one disagrees can seem unreasonable, even hateful, and many arguments online will go unresolved. Does this—the lack of a rational, shared conclusion at the end of an argument—mean that the experiment has failed?

No: there is great value in allowing disagreement to stand as a record of two battling, incompatible opinions. The host need only provide an environment where ideas can be expressed freely, so that an open, rational exchange of views can occur.

The limits of “deplatforming”

There is, of course, another option: simply removing the voices with which we disagree. This may be tempting, but it’s proving to be a controversial strategy. One need look no further than a paper published earlier this year that follows groups banned from Reddit as they migrated to less-moderated (or completely unmoderated) platforms. On those platforms, group growth did slow, but those who migrated became “more toxic, negative, and hostile” toward their perceived enemies.

This makes sense. When groups are pushed from mainstream platforms to less-moderated environments, they find themselves side-by-side with more radical actors—and, in the case of platforms like Telegram, encrypted out of the view of law enforcement.

And, to build on this reasoning, a wider point: while many believe we can “deplatform” or moderate our way out of these issues and toward a more unified future, there is reason to believe we cannot solve our problems this way.

That is because, simply put, these online problems are only a reflection of real-world issues. While social media platforms may add fuel to the fire, they are not the fire itself. Specifically, they reflect what Francis Fukuyama recently described as a “growing epistemic relativism,” a condition driven by myriad social factors.

That said, something must be done about truly malicious actors. A set of standards must be upheld, and those in violation—publishers and users alike—must be held accountable. But there is reason to be wary of calls to remove those opinions with which we may disagree.

A diversity of views and opinions is essential. If you disagree, we’d love to hear why. Just, you know, don’t be toxic about it.

Let’s have a conversation.

Right now, OpenWeb can work with only a limited number of partners in order to provide the highest quality of service to each and every one. Let us know you’re interested, and stay informed about how OpenWeb is empowering publishers and advertisers to change online conversations for good.