
[Censored]

How Algorithms and Auto-Censorship Affect You

By Teyana Jackson · Published 7 years ago · 7 min read
Top Story - September 2017
When you think of power, what is it you're picturing? Armies? Politicians? Hordes of genetically modified scorpions intent on decimating the world's population so they can rule over the tasty, tasty ashes?

Okay, so maybe that last one's just me, but I digress.

Chances are, when you think of power, the first thing to spring to mind won't be the websites you and most of your peers frequent on a daily basis. But they should be.

Did you know that as of last year, Facebook's algorithm controlled 40 percent of all web traffic to publisher sites? Or that the site itself is the main source of news for 62 percent of all American adults? More likely than not, it's where you found this article in the first place. Mark Zuckerberg's nosy baby has grown into a multi-billion-dollar networking giant, and it's got its chunky fingers firmly wrapped around the tubes responsible for funneling a hefty portion of the information you, and millions of other Americans, consume every day.

Now I'm sure you've heard the term "fake news" in the past, particularly in recent months, as it's one of the most popular phrases circulating both in serious news media and in the steady flow of satire aimed at said media. But not all the talk about "fake news" is, well... fake news.

There's an inherent level of trust put into sites like Facebook: an assumption that the information you're receiving is accurate and fairly unadulterated, and that even if there is some inaccuracy or mistaken censorship here and there, it's surely harmless and for the greater good. Unfortunately, this isn't strictly true. The social networking site doesn't consider itself a "media outlet," so it isn't required to abide by the same rules or guidelines as a traditional media platform would. Essentially, over half of the online American populace gets the brunt of its information from a source that isn't required to ensure that what spreads via its site is factual.

Have you ever noticed that Facebook tends to auto-populate ads based on things you've searched for lately? Or that page and event suggestions have become more and more tailored and specific in the years since you first stumbled over from MySpace to set up your account? It's not a coincidence. Facebook's automated algorithm is meant to create the most personalized experience possible for each of its users; something that, while great in theory, results in some unfortunate side effects, one of which is filter bubbles.

Filter bubbles form when algorithms use information about an individual (gathered from their search history, past click behavior, and location) to selectively guess what they want to see next. While this might provide you with exactly the sort of things you'd like to see or hear, it also means you end up surrounded only by information similar to what you already have a history of preferring. (The Wall Street Journal's interactive visualization "Blue Feed, Red Feed" illustrates the difference in what Facebook shows users based on their political affiliations.) The result is essentially a closed loop of information in which fake news is easily created and repeatedly circulated as fact, slowly stirring in more and more factually incorrect, and often incendiary, information.
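To make the mechanics concrete, here's a toy sketch of how that kind of loop forms. This is entirely hypothetical (the stories, tags, and scoring are invented for illustration, not anything Facebook actually runs): a ranker scores each candidate story by how many of its topic tags the user has already engaged with, so the feed drifts toward more of the same.

```python
from collections import Counter

def recommend(history, candidates, k=2):
    """Toy feed ranker: score each candidate story by how many of its
    topic tags the user has already engaged with, then return the top k.
    Each round of clicks narrows the next round of recommendations."""
    prefs = Counter(tag for story in history for tag in story["tags"])
    scored = sorted(
        candidates,
        key=lambda s: sum(prefs[t] for t in s["tags"]),
        reverse=True,
    )
    return scored[:k]

# Hypothetical user who has only clicked on politics/economy stories.
history = [
    {"title": "Tax cuts explained", "tags": ["politics", "economy"]},
    {"title": "Budget debate recap", "tags": ["politics"]},
]
candidates = [
    {"title": "More politics", "tags": ["politics"]},
    {"title": "Gardening tips", "tags": ["lifestyle"]},
    {"title": "Economy update", "tags": ["economy", "politics"]},
]
top = recommend(history, candidates)
# The lifestyle story never makes the cut; the bubble tightens.
```

Note that nothing in the ranker checks whether a story is accurate, only whether it resembles what the user already liked; that indifference is exactly what lets fake news circulate inside the loop.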

BuzzFeed published an analysis last November showing that the top 20 fake news stories on Facebook about the ongoing election were shared, reacted to, and commented on 1.4 million more times than the top 20 factual news stories. That is an awful lot of misinformation spreading in a relatively short period of time, to an alarming number of largely unwitting individuals. This is particularly concerning because previously, the Facebook CEO had claimed that fake news accounted for less than one percent of all content on Facebook.

Facebook isn't the only corporation with the power to control what news and opinions you receive, though.

4,464,000,000 search queries are conducted using Google's search engine every day. I don't know if you noticed, but that's a lot of questions. And all of them rely on one source to provide them with an answer. Google decides which articles, sites, and companies appear (and in what order) in relation to the terms entered in the search bar, as well as which links don't appear at all.

As a company, they retain the right to censor the content available through their service, usually to comply with their own policies, legal demands, or with government censorship laws. While this isn't in itself overly alarming, and is in fact necessary for an application of Google's scale to operate globally, it can also result in some questionable practices.

For example, in 2014, although Google had accepted ads from NARAL (a pro-choice lobbying group), it opted to remove ads for many anti-abortion pregnancy centers. Google cited an investigation conducted by NARAL itself, which found that the ads violated Google's policy against "deceptive advertising": people using the search engine to find abortion clinics were instead shown ads for anti-abortion crisis pregnancy centers. A related issue occurred in 2008, when Google refused to run ads for a UK Christian group opposed to abortion, saying that its policy at the time didn't permit advertising for sites that combined "abortion and religion." While Google issued a statement claiming it had followed normal company procedures, the issue still sparked a slew of debates, not the least of which was whether Google's censorship policy was suppressing free speech.

The search giant has come under fire for this many times over the years since its creation, prompting some publications, like 2600: The Hacker Quarterly, to compile lists of the words and terms restricted by Google Instant. While most of the words are derogatory terms we'd hope people would refrain from searching for in the first place, many argue it should still be the individual's right to choose whether to view them.

Google's subsidiary, YouTube, has also become the subject of speculation and debate in recent months, due to the use of its automated system to flag potentially "offensive or controversial" material. Many YouTube creators found themselves floundering when a content crackdown resulted in hundreds of previously monetized videos being demonetized in what was labeled the "Adpocalypse."

The aim of the crackdown was to prevent the further spread and monetization of extremist propaganda on the site, and any videos featuring "controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies, even if graphic imagery is not shown" were demonetized and considered to be in violation of YouTube's content policy. While the effect this had on content creators was the primary focus of most protests and reporting, it should also be noted that YouTube's automated monitoring system and mass flagging had a hugely detrimental effect on smaller news agencies and reporters.

Shortly after the conflict in Syria began, Syrian mass media broke down, forcing many Syrians to take to YouTube to post news. But because the site's automated system searches for keywords in titles and descriptions, and many of those keywords related to the conflict, thousands of videos documenting war crimes and current events in the area were inadvertently removed.
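As an illustration only (the term list and logic here are invented; YouTube's real criteria are not public), a naive keyword filter of the kind described above cannot tell propaganda apart from journalism documenting the same events:

```python
# Hypothetical restricted-term list, purely for illustration.
FLAGGED_TERMS = {"war", "conflict", "attack"}

def is_demonetized(title: str, description: str) -> bool:
    """Naive keyword matcher: flag a video if any restricted term
    appears in its title or description, regardless of context or
    whether the video is reporting on events rather than promoting them."""
    words = set(f"{title} {description}".lower().split())
    return not FLAGGED_TERMS.isdisjoint(words)

# A citizen journalist's documentation trips the same filter
# that extremist content would.
flagged = is_demonetized("Documenting war crimes in Aleppo", "Citizen footage")
```

The filter sees only vocabulary, not intent, which is exactly how videos preserving evidence of war crimes ended up swept away alongside the material the system was built to catch.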

It's easy to look at sites like Facebook, Google, or YouTube as too big to fumble. They're such a familiar presence in our homes and lives that we tend to take the information they give us at face value, because, after all, with that many employees and algorithms reviewing and monitoring their content at all times, the end result must be reliable. Right?

But no system is perfect. No site, or business, or search engine is without flaws, and it's imperative to remember that all of the sources we've mentioned so far operate to make money. Facebook provides content that you enjoy, because after all, wouldn't you rather frequent a site that gives you what you want than one dotted with opinions you don't agree with? Google and YouTube profit from the ads they feature, and if controversial content can't be monetized, why keep it around at all?

Surrounding ourselves with a padded world of properly censored, auto-filtered information is becoming a disturbingly easy task. It's much more appealing to simply cover over the opinions we disagree with, using a comforting layer of thick black tape, until they're cut into pleasant, manageable squares that support our own views. But just because something makes you uncomfortable doesn't mean it's wrong, and even if it IS wrong, that doesn't mean it should simply be ignored.

If anything can be learned from the progressively alarming tactics of our favorite sites, it's that even the largest institutions should be questioned. Now, if you'll excuse me, I have some pressing scorpion-related research to conduct.


About the Creator

Teyana Jackson

An aspiring writer and poet currently living on the East Coast. More work can be found on allpoetry.com, thebluenib.com, and in the poetry anthologies "Circular Whispers" and "Seasonal Perspective"
