Stomping Out Toxic Voices and Humanizing Technology
The old model is due for a significant change. I’m convinced we need a reset button.
All of us have noticed the rise of toxicity in our social feeds and, unfortunately, in life in general. I have been part of the industry that created, supported, and grew today’s major social networks and the advertising economy that surrounds them.
Like many of my contemporaries and peers, I believe that the old model is due for a significant change. I’m convinced we need a reset button.
I remember an article I read a few years ago in Scientific American, “The Technology of Kindness,” that described the central issue as follows:
“Technology quietly poisoned the connections that keep us human.”
Technology fostered a sense of comfort through isolation and distance, creating echo chambers that eroded our ability and desire to empathize with a different opinion. What this boils down to is that a virtually unlimited amount of content proliferates on social platforms, including what you’re reading right now, created specifically for your consumption. The history and intent of the original social platforms were to expose the best of humanity to the world. They still do. But, in retrospect and unwittingly, they have exposed the worst of humanity as well.
Unfortunately, in a world filled with diverse voices, and an equally important passion-driven economy, we cannot seem to agree on how to use this great advancement for the collective good. While we generally concede that it was not intended to cause damage, the ambiguity between one person’s damage and another’s gain has only further confused the issue.
Who is it really that defines “toxic”?
The answer is different people with different views.
Who sets those boundaries and limits?
Anonymous parties whose boundaries are determined by what harms them.
Does intent matter?
Yes, and people with different intentions can find ways to express them with respect, civility, and kindness.
It is these types of questions that have led to the dilemma all technology companies will face in an increasingly regulated and litigious environment. In response, companies like Facebook, Twitter, and others have tried to better define toxicity and harmful discourse, and to improve the machine learning algorithms that identify them. But at the end of the day, such efforts may only be a band-aid, and ironically may cause more harm in the future than good in the present.
Why? Because at its heart, the conundrum is inherently human in nature. It seems we’re looking too far from the root of the problem for solutions, when in fact they’re right in front of us. It’s a human lack of empathy, a human boldness, a human inclination to cause others pain. Technology can offer tremendous help, but at the end of the day, we as a society, and even more so as humankind, must decide to make the internet a safer and more humane place for the global community.
One of the many reasons I was so excited to join Creatd (Nasdaq: CRTD), the parent company of the Vocal platform, as its COO is that our company is at the forefront of the movement toward creating a safe and rewarding experience for creators and audiences of all shapes, sizes, and opinions. In my first story on Vocal, I began to discuss how our approach differs from other platforms for that very reason.
Let me outline some of the central questions that often arise with respect to platforms, creators, readers, and the quality of their trust and safety:
- What moderation services and regulatory compliance, if any, does the platform offer?
- How clear and transparent are those guidelines and boundaries?
Safety and Content Moderation
Here at Vocal, we offer what we call a CFX (creator-first experience). We take great pride in creating a thriving environment that creators can consider their home base. A great deal of our growth has come from organic word of mouth in the creative community. How did we create this? The answer is by striving to be better humans.
There is a fine line between free speech and hate speech. Free speech encourages debate whereas hate speech incites violence.
We block hate speech and stories that contain NSFW content. But importantly, we recognize that the definitions of hate speech and NSFW content are constantly evolving. We recognize that context matters, and that a single world event can make something that was acceptable yesterday unacceptable tomorrow. That is the logic behind our human-led, technology-assisted moderation system, in which humans read through every story and approve or deny pieces based on our strict community guidelines and protocols.
Many companies tout their completely automated processes as badges of honor and validation that they are positioned for scale. Some more questionable platforms claim to “have some of the highest security ever, space age stuff.”
Rather than prematurely leaping toward automation, we at Creatd preferred to learn from our creators directly, growing our boundaries and safety guidelines with them rather than trusting that code alone could do it better. To this day, with four years of input from nearly 1,000,000 creators, we still maintain a hybrid approach to moderation and compliance, always partnering humans with technology.
Not a day goes by without me thinking about our excellent “super-human” moderation and curation team at Vocal. They excel at their work and truly care about the greater Vocal community. They are the real humans who shield our Vocal platform, and they serve as the perfect example for other companies looking to hit the reset button.