
Even Truth Social Knows the Importance of Moderation

By OpenWeb

Last weekend, Truth Social, the Trump-backed ‘free speech safe zone,’ hit the Apple App Store. Despite some technical glitches, the app’s downloads and registrations skyrocketed, and so did the valuation of the SPAC taking its parent company public.

That’s right: a business whose central promise to consumers is about how user-generated content gets moderated is topping the charts and capturing headlines again (we’ve seen this before, if you remember Gab and Parler).

We’ve detailed on this blog how these kinds of moderation-free online social environments tend to implode rather quickly, and it seems Truth Social has taken note of the trend. Even this ‘free speech safe zone’ promises to enforce moderation standards of some kind, however publicly undefined they remain.

This is proof that our discourse around moderation has officially boiled over. Few subjects in American life are now more contentious, or more apt to suddenly take over the news cycle for two or three weeks straight.

Even a quick glance at the news reveals a country consumed with questions about what should and shouldn’t be allowed in our digital spaces. Before Truth Social, there were more than a few recent examples.

For instance: TikTok, which last week made headlines for poaching content moderators from companies like Accenture and Covalen (which Facebook has turned to for moderation services). Or Microsoft, which, after some controversy, is taking steps to moderate its VR spaces. Or the US Senate, where two senators recently introduced bipartisan legislation designed to hold social platforms accountable for content that might harm children. All of these news stories emerged in the course of only twenty-four hours earlier this month. 

I think we’ve proven our point: moderation is a hot topic.

Given all this discourse, it makes sense that some publishers are wary of featuring user-generated content on their sites. Comments sections are absolutely beneficial, but, as this line of thinking goes, are they really worth playing Whack-a-Mole with trolls, or risking the proliferation of abusive or hateful content on one’s own site? If large social media platforms can’t rein in toxicity, what chance do publishers stand?

There’s a fallacy at play in this kind of thinking—namely, the idea that social platforms actually intend to reduce toxicity. As the Facebook Papers and other leaks have made clear, a certain level of viral toxicity is simply seen as part of doing business.

A tale of two comments

So how does OpenWeb spare publishers the headache of content moderation?

To illustrate, let’s follow two very different comments on their journey to getting posted.

Comment A is, let’s say, inflammatory; the kind of comment that can derail a conversation. OpenWeb’s AI/machine learning-enhanced filters (which can scan a comment for incivility, white supremacy, abuse, and more) instantly pick up on this: that’s strike one against the comment.

As it turns out, Comment A’s author has something of a history of posting inflammatory comments: OpenWeb’s software may already know this, because it tracks behavior at the user level, building Civility Profiles for commenters. So Comment A is already at a disadvantage, for the simple reason that its author has demonstrated an inability to play by the rules of civil discourse.

Comment A, if deemed unsafe after an AI-powered analysis of the content and context, will be de-emphasized in the conversation. And if it offends the readers who still manage to find it, they’ll have the ability to flag it. If it’s flagged enough times, it’ll be sent over to a staff of human moderators, who work to keep our partners safe 24/7, 365 days a year. 
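
OpenWeb hasn’t published the internals of this pipeline, so here is only a minimal sketch in Python of the flow described above. The `toxicity_score` classifier, the `CivilityProfile` record, the scoring weights, and the flag threshold of 3 are all hypothetical stand-ins for illustration, not OpenWeb’s actual API.

```python
from dataclasses import dataclass

FLAG_THRESHOLD = 3  # hypothetical: reader flags needed before human review


@dataclass
class CivilityProfile:
    """Illustrative per-user record of past behavior (a stand-in for Civility Profiles)."""
    user_id: str
    past_violations: int = 0


def toxicity_score(text: str) -> float:
    """Placeholder for an ML classifier scanning for incivility, abuse, hate, and so on."""
    return 1.0 if "idiot" in text.lower() else 0.0


def initial_placement(text: str, author: CivilityProfile) -> str:
    """Decide where a new comment lands the instant it is posted."""
    score = toxicity_score(text)            # strike one: the content itself
    score += 0.2 * author.past_violations   # strike two: the author's track record
    return "de-emphasize" if score >= 1.0 else "publish"


def on_reader_flag(flag_count: int) -> str:
    """Escalate a comment to human moderators once reader flags pile up."""
    return "send-to-human-review" if flag_count >= FLAG_THRESHOLD else "keep-de-emphasized"
```

The particular thresholds aren’t the point; what matters is that the decision happens automatically, at post time, before anyone on the publisher’s team has to look at the comment.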

Now let’s examine another comment, Comment B. Comment B was posted at the exact same second as Comment A, and is everything a contribution should be: thoughtful, illuminating, and relevant. It furthers the dialogue started by the article, and provides an opening for other users to contribute their own opinions. Better yet, its author has a history of similar comments: our system has recognized them as the kind of user who gets healthy conversation flowing. And so our system takes action, moving Comment B to the top of the conversation.
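
The flip side, promoting Comment B, can be sketched the same way: rank the conversation by a combined quality-and-civility signal so constructive contributions surface first. Again, the weights and the `quality_score` function below are illustrative guesses, not OpenWeb’s actual ranking logic.

```python
def quality_score(text: str) -> float:
    """Placeholder for an ML model estimating how constructive a comment is."""
    return min(len(text) / 200, 1.0)  # crude proxy: substantive comments tend to be longer


def rank_conversation(comments: list[dict]) -> list[dict]:
    """Sort comments so healthy contributions rise to the top of the thread."""
    def sort_key(comment: dict) -> float:
        # Reward constructive text; penalize authors with a history of civility violations.
        return quality_score(comment["text"]) - 0.2 * comment["author_violations"]
    return sorted(comments, key=sort_key, reverse=True)


# Example: Comment B (thoughtful, clean history) outranks Comment A (inflammatory, repeat offender).
thread = [
    {"text": "Anyone who believes this is an idiot.", "author_violations": 4},
    {"text": "The article underplays how moderation costs scale with comment volume; "
             "pairing automated filters with human review is one way to keep that manageable.",
     "author_violations": 0},
]
print([c["text"][:40] for c in rank_conversation(thread)])
```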

So in the span of just a few seconds (without any intervention on the part of the publisher’s team), a disruptive contribution is minimized and a generative one is highlighted. The conversation flows unimpeded, while your team stays focused on creating content.

The endless benefits of community

This kind of multi-layered moderation doesn’t just save time; it also generates insights that can make content even better and keep readers coming back. When productive contributions are prioritized, other readers are far more likely to join the conversation.

Collectively, these comments can teach publishers what their readers want to see (not to mention what they don’t). These insights can’t exist without quality moderation: they’d get lost in the noise, or simply wouldn’t surface at all, with would-be commenters repelled by the trolls.

Point being: don’t let the moderation discourse psych you out. Yes, it’s a complicated issue; yes, there are people online who, for their own unknowable reasons, are devoted to sowing chaos. But there are real, tangible solutions out there, ones that balance the need for free speech with the imperative to keep readers safe.

At OpenWeb, we’re proud to help publishers navigate these complexities, opening up the space for quality content—and dialogue—to flourish.

Let’s have a conversation.

Right now, OpenWeb works with a limited number of partners so we can provide the highest quality service to each and every one. Let us know you’re interested and stay informed about how OpenWeb is empowering publishers and advertisers to change online conversations for good.