
A (Brief) History of Content Moderation

Vocal's Head of Content takes a look at the content moderation industry, its history, and how Vocal handles the daily flood of polarizing user-generated content.

By Joshua Luke Johnson · Published 3 years ago · Updated about a year ago · 8 min read
Top Story - September 2021

*Author's Note 4/27/2022: Given the events of the last few days, and as Elon Musk fights to bring his limitless resources and late-stage capitalist ideals into the conversations surrounding online discourse, I believe the contents of this story are more relevant than ever.

In 2008, a dispute over YouTube videos mocking Mustafa Kemal Atatürk, Turkey's founding president, kicked off a chain of events that included a then-shocking instance of I.P. blocking on a national scale. For a brief period, Google restricted the entirety of Turkey from streaming a parody broadcast announcing "Today’s news: Kamal Ataturk is gay!"

Nicole Wong, then-deputy general counsel at Google, would go on to earn the nickname "The Decider" among colleagues as debate over the obligations of user-generated content (UGC) platforms came to a head. Over the course of the decade that followed, modern content moderation practices would take shape, and online service providers, or OSPs (host platforms for UGC such as YouTube and Vocal), would realize the enormous freedom they have to self-regulate their networks.

While the incident in Turkey happened in 2008, conversations around free speech and the "safe harboring" of OSPs date back to the late 1990s, when the Digital Millennium Copyright Act (DMCA) was codified as part of United States copyright law. That law opened a conversation that has only intensified as new media and technologies push the boundaries of free speech to heights unforeseen in the late '90s, when Windows 98 was da bomb and only 41% of adults even knew what it meant to "be online."

Now, content moderation takes place everywhere: discussion forums, comment threads, chat rooms, and anywhere else a community can gather on the internet. Under Section 230 of the Communications Decency Act of 1996, individual platforms decide internally what their standards are, but they often end up converging on broadly shared moral understandings, as in the serial de-platforming of Alex Jones in 2018 and Donald Trump in 2021. Few if any other countries have a comparable law in place, which makes U.S.-based platforms something of a refuge for free speech, where networks are empowered to host conversations that are controversial, or even extremist.

With this "safe harbor" protection comes a huge responsibility for these U.S.-based ISPs. The voices willing to speak up are numerous, but not all free speech is safe speech, which makes content moderation guidelines a priority for any internet-based hub. However thorough they may be, these guidelines often cause unrest among users. (As it turns out, finding a large political and social community aligned on every polarizing issue is unheard of...) Therefore platforms must labor over the delicate balance of enforcing rules that will repel bad actors without silencing the voices of those brave enough to speak up. There is a long list of ISPs that failed to scale their content moderation practices, including now-defunct outlets like Fling, Secret, and others. I was surprised to learn recently that Yik Yak, the happy-go-lucky anonymous gossiping network that quickly became a haven for cyberbullying and harassment, has returned from the dead with a fresh vibe and a fresh set of content moderation standards.

Content Moderation on Vocal

Voluntary self-regulation means that, for the foreseeable future, [Google] will continue to exercise extraordinary power over global speech online. Which raises a perennial but increasingly urgent question: Can we trust a corporation to be good — even a corporation whose informal motto is “Don’t be evil”?

George Washington University law professor Jeffrey Rosen, writing for the New York Times, asked this question in 2008, highlighting the tension between the ideal of a search engine that contains all the world's information and the reality that the company running it decides what can and cannot be searched.

Of course, it's not just Google that enforces guidelines and content moderation standards. Twitter and Facebook have both come under fire in recent months for what critics see as inconsistent and controversial interpretations of their community protocols. Writing platforms like Medium and Wattpad have content regulations in place as well. Even the now-infamous Parler, the ultra-conservative, self-proclaimed bastion of free speech, has outlined community standards since being reinstated to the Apple App Store following a questionably laissez-faire introduction to live networking.

Content moderation is an imperfect practice, and that's putting it mildly. Frankly, it's an impossible task, because those in the hot seat are all but guaranteed to please some users and anger others at the same time. Personally, I have been referred to as both "The Savior" and "The Bad Guy" since being appointed head of Content Moderation at Vocal in 2020. Because the community's feelings toward me span such a wide range, it's fitting that my weekly catch-up with my team is denoted in iCal as "Vocal Highs and Lows." That's right: every week we meet and discuss your content on Vocal. We share our favorite stories, laugh at your jokes, and yes, we discuss stories that skirt the lines on various aspects of our Community Guidelines.

Almost every day, someone writes a story about how we can improve Vocal. We love these stories, because more than anything we want this platform to be intuitive; we want it to be supportive of all creative initiatives; we want to make it valuable to those who choose to trust us with their work. That means reading every word of every story outlining what creators want from us: the team behind Vocal. Some of these stories criticize our executive leaders. Some of these stories criticize the moderation team. A few have criticized me directly.

I don't mind at all. I get it. You want consistency. So do I. But with certain guidelines, context and interpretation are crucial, and the gorgeous thing about humanity is that no two people think alike, which effectively builds the entire moderation industry on a foundation of subjectivity. It's safe to say that all human-led content moderation is an inherently subjective practice.

Naturally, this raises some questions...

FAQs

Why is Vocal sticking with a human-led model for content moderation?

In an October 2020 issue of Internet Policy Review, Aram Sinnreich, professor of copyright, culture, and media at American University, refers to the increased reliance on technology in the content moderation space as "quantisation of culture," i.e., "the delegation of nuanced and contextualised cultural decision-making (and meaning-making) processes to an algorithm." In other words, the more we come to rely on the learning processes of algorithms, Sinnreich argues, the more we risk solidifying algorithmic logic as the arbiter of safe and legitimate content online. For the time being, here at Vocal, we concur. While technology's capabilities may one day surpass organic decision-making, it is not this day.

We do use technology to filter and aid our processes, as many other platforms do, but ultimately we are a human-led operation. For now, we trust the system we've built.
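To give a concrete (and purely illustrative) sense of what "technology to filter and aid our processes" can mean, here is a minimal sketch of an automation-assisted, human-led review queue. Vocal has not published its internal tooling, so every name, term list, and threshold below is a hypothetical placeholder; the only point is the pattern: software attaches flags, and a person always makes the final call.

```python
# Hypothetical sketch of a technology-assisted, human-led moderation queue.
# None of these names, terms, or thresholds reflect Vocal's actual tooling.
from dataclasses import dataclass, field

FLAGGED_TERMS = {"slur_example", "threat_example"}  # placeholder watchlist

@dataclass
class Story:
    title: str
    body: str
    flags: list = field(default_factory=list)

def prefilter(story: Story) -> Story:
    """Cheap automated pass: it only attaches flags, never publishes or rejects."""
    words = set(story.body.lower().split())
    if words & FLAGGED_TERMS:
        story.flags.append("possible prohibited language")
    if len(story.body) < 100:
        story.flags.append("very short submission")
    return story

def route(story: Story) -> str:
    """Every story still ends with a human decision; flags only set priority."""
    return "priority human review" if story.flags else "standard human review"

if __name__ == "__main__":
    draft = prefilter(Story("Example", "A short draft containing slur_example."))
    print(draft.flags, "->", route(draft))
```

In a real pipeline the keyword pass would likely be supplemented or replaced by trained classifiers, but the routing step, where a moderator reviews every story, is what keeps an operation human-led rather than algorithm-led.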

Why don't you just get rid of the Community Guidelines and let everything be published?

We like to imagine a completely uninhibited take on free speech, right? Well, as we saw on Parler, it was only a matter of months before nudes, slurs, and criminal activity forced administrators to rethink their entire content model. Without community guidelines, nobody feels truly safe.

So why don't you just have rules against nudes, slurs, and criminal activity?

Easy, right? Effectively, that's what all platforms have: rules against nudes, slurs, and criminal activity. But what constitutes a nude? Is an artistic boudoir shot considered a nude? If not, why? What constitutes a slur, and if one is used as a catalyst for storytelling purposes, is it still offensive? If not, why? What constitutes criminal activity, and if someone writes a first-person story detailing how they would go about murdering a real-life public figure, is that acceptable if it's categorized as fiction?

If not, why?

You want consistency, and I do too, but these examples are just a small sampling of the eyebrow-raising content that moderators encounter every day. Ultimately, we do our best to craft moderation protocols that are as thorough and adaptable as possible, so they can be applied to whatever situation arises. We are constantly amending and updating those protocols to enable Vocal moderators to make the least subjective decision possible. In fact, we just released a new version of our Community Guidelines to offer more insight into our decision-making processes.

As we welcome a larger and more genre-diverse group of storytellers to the Vocal platform, we are committed to continuing to update these guidelines to best support everyone who wants to create with us.

You want consistency from us, but you also want transparency, and so do I. That's why I'm writing this story — to be transparent with you about who we are and how we operate.

Content moderation is controversial, and on some level it's an impossible task, but I guess you could say we're up to the Challenge? (Vocal pun intended.) As always, thank you for your feedback. Thank you for your forgiveness when we mess up. Thank you for your continued support as we grow and learn together. Thank you for helping us build the best damn storytelling community on the Web.

We couldn't do it without you.

---

Joshua Luke Johnson is a senior content manager at Creatd and head of Content Moderation and Curation at Vocal.
