
Social Media Censorship


A fire needs only a small amount of heat and fuel to start, and even a tiny spark can ignite one. If the wind is strong enough and the humidity low enough, a small, local fire can grow into a large, uncontained wildfire that burns until the fuel is gone or the wind dies down.

Similarly, societies need very few conditions to ignite unrest that can lead to upheaval—and today social media posts can fan the flames of discontent into a conflagration of outrage. The Arab Spring uprisings, the George Floyd protests and the storming of the U.S. Capitol are just a few examples of people using social media to organize large, sometimes violent, demonstrations.

James 3:5-6 describes this situation perfectly: “Consider how small a spark sets a great forest ablaze. The tongue also is a fire” (Berean Study Bible translation). Nowhere is this truer than on social media.

Sometimes these social media fires have real-world impacts involving physical injury and death. Other times, they target a single person online, leading to emotional and financial injury.

The Atlantic summarized it this way: “Throngs on social media violate fundamental notions of fairness and due process: People may be targeted because of a misunderstanding or an out-of-context video. The punishment online mobs can mete out is often disproportionate. Being attacked and ridiculed by perhaps millions of people whom you have never met, and against whom you have no defenses, can be devastating and lead to real trauma.”

So, should you really be able to say anything you want online? Should companies like Facebook, Twitter or Google’s YouTube step in and censor inflammatory speech? Should national governments hold them accountable?

Where does freedom of expression fit in all of this?

From Virtual to Violence

A Pew Research poll conducted August 31 to September 7, 2020, found that 86 percent of Americans get their news from a digital source. While Americans ages 30 and above most often use news websites or apps, those under 30 show a dramatic shift to preferring social media.

The rise of citizen journalists—those in the general public helping to distribute and analyze the news—has helped people understand the world around them. Social media is usually the first place that editorially unfiltered information is available when news events happen.

But this lack of oversight means that anything can be, and often is, posted online—including personal insults, threats of harm, and complete lies that can have far-reaching impacts.

Posts circulating on social media about the coronavirus are just one example of how inaccurate information can escalate fear into bloodshed. Healthcare workers fighting COVID-19 in dozens of countries have faced violence from fearful communities that have attacked doctors and burned down clinics, aid agencies said.

Some of the most common beliefs are that the coronavirus is man-made, that it is not real, or that new testing facilities or health centers will bring it to communities. The International Committee of the Red Cross recorded 611 incidents targeting health workers, patients and facilities from February to July 2020, including a town in Colombia where residents blocked ambulances from entering to screen for COVID-19 cases.

People in South Africa who did not want responders in their neighborhood burned down a testing station and a clinic, according to the medical charity Médecins Sans Frontières (MSF).

“It’s a byproduct of the new, novel, infectious disease—there’s a lot of fear,” said Sean Christie, a spokesman for MSF in South Africa. “Even for areas that have experienced very big HIV and T.B. epidemics, this was just different. The media response was so overwhelming, and [there is] so much social media misinformation.”

Sometimes a single post can ignite conflict from existing tensions. When a Buddhist woman in Mandalay, Myanmar, reported to the police that two Muslim brothers raped her in 2014, a local Buddhist monk posted the story on Facebook. The brutal clashes between the two religious groups over the next several days left 19 people injured and two dead. Calm only returned once government authorities temporarily shut down local access to Facebook.

But the woman had lied. She later admitted that a rival shop owner trying to damage the Muslim brothers’ reputation had paid her to file the police report. She recanted, but that could not undo the damage already done.

The old saying, “Sticks and stones may break my bones, but words will never hurt me,” is simply untrue in the digital age.

Content Moderation

With the potential for horrific outcomes, people are demanding that social media companies begin implementing some level of “content moderation,” a polite term for censorship.

Under pressure to clean up its site, Twitter started labeling or removing misleading tweets last year. The move intensified debates about the role major social media platforms play in public discourse and fueled allegations from lawmakers that tech companies are promoting specific political agendas.

In January, Twitter started asking U.S. users to help identify and fact-check posts in a new pilot program called Birdwatch. Participants can flag misleading tweets and annotate them with “notes” to give more information, which other users can rate as helpful.

The experiment, running with about 2,000 participants in a separate section of the site, faces many of the same challenges as Twitter itself—discerning facts from opinion and dealing with the potential for harassment or people trying to manipulate the system.

Public Birdwatch data shows notes ranging from balanced fact-checks to partisan criticism. Many notes simply gave opinions—one attached to a tweet from SpaceX and Tesla CEO Elon Musk said that he should “go to Mars. And stay there”—while others added comments to tweets that were themselves opinions.

People are “fact-checking things that professional fact-checkers never would,” said Alex Mahadevan, a reporter with the Poynter Institute’s MediaWise project, who analyzed Birdwatch’s data.

Crowd-sourced knowledge and community moderation are not new models; they underpin platforms like the social network Reddit. Facebook runs a “community review” program that pays users to identify suspect content for vetting by professional fact-checkers.

Katherine Maher, CEO of the Wikimedia Foundation, which runs Wikipedia, said Twitter would need to develop standards for Birdwatch and a way to enforce them, as well as decide how people could appeal annotations. Twitter needs to solve the issue, she said, of “who watches the watchers?”

Who Should Moderate?

Facebook has tried to address this with multiple levels of content moderation. In its three-tier system, content moderators employed by companies contracted to Facebook provide the first level of oversight, capped with periodic review by Facebook’s own employees.

But this process does not address the impact on the moderators themselves. Facebook and its contractors employ people to watch the worst that Facebook users have to offer, from hate speech to graphic pornography to violent attacks, including murder.

Prolonged exposure to these images has resulted in PTSD symptoms in former employees. Some employees admitted to using alcohol and marijuana while on the job to numb themselves to the stress.

Viewing this type of content also leads moderators to question reality and pushes them to the fringes. At one Facebook contractor in Phoenix, one employee has come to believe the Earth is flat, a former employee now questions whether the Holocaust happened, and another former employee has begun to believe that terrorists did not commit the 9/11 attacks.

Twitter’s experiment with Birdwatch shows that people cannot be impartial enough to decide what should be allowed on the platform. Regular and prolonged exposure to raw human nature on Facebook leads to severe mental health issues. Perhaps the solution to content moderation is an impartial artificial intelligence, trained to spot problem content and automate removal before people ever see it. That experiment has already begun with the biggest social media companies.

YouTube, Facebook and Twitter warned in March that videos and other content might be erroneously removed for policy violations, as the coronavirus pandemic forced them to empty offices and rely on automated takedown software.

But digital rights activists warn those AI-enabled tools risk confusing human rights and historical documentation with inappropriate material like terrorist content—particularly in war-torn countries like Syria and Yemen.

“A.I. is notoriously context-blind,” said Jeff Deutch, a researcher for Syrian Archive, a nonprofit that archives video from conflict zones in the Middle East. “It is often unable to gauge the historical, political, or linguistic settings of posts…human rights documentation and violent extremist propaganda are too often indistinguishable,” he said in a phone interview.

Erroneous takedowns threaten content like videos that could become formal evidence of rights violations by international bodies such as the International Criminal Court and the United Nations, said Dia Kayyali of digital rights group Witness.

Social media companies try to police themselves, but the issues with human and computer moderators limit their effectiveness. People’s biases hamstring human-powered attempts and subject moderators to the kind of emotional abuse that results in long-term health issues. Artificial intelligence is not capable of distinguishing between content that violates standards and eyewitness reporting. And yet, the need for censoring what gets posted remains.

Legislative Proposals

Many people turn to the government for the solution. The 2014 violence in Myanmar was quashed after the local government limited access to Facebook, suggesting that such intervention can help defuse violence that online rhetoric incites.

Put simply, though, such intervention is problematic.

Despite this, United States politicians are coming under increasing pressure to do something. Social media companies in the U.S. are protected from prosecution for user-generated content by a provision of the Communications Decency Act of 1996 often referred to as “Section 230.” This protection allowed companies like Facebook and Twitter to become internet giants by crowd-sourcing their content instead of developing it themselves. It also makes it nearly impossible to force those companies to address these issues on their platforms.

In March, U.S. lawmakers asked chief executives of Facebook, Google and Twitter whether their platforms bore some responsibility for the riot and storming of the U.S. Capitol building.

“We fled as a mob desecrated the Capitol, the House floor, and our democratic process,” said Democratic Representative Mike Doyle, who asked the CEOs about their responsibility. “That attack, and the movement that motivated it, started and was nourished on your platforms,” he added.

In the joint hearing, held by two subcommittees of the House Energy and Commerce Committee, lawmakers also questioned the executives on the proliferation of COVID-19 misinformation and raised concerns about the impact of social media on children—including asking questions about Facebook’s plan to create a version of Instagram for kids.

“Your business model itself has become the problem, and the time for self-regulation is over. It’s time we legislate to hold you accountable,” said Democratic Representative Frank Pallone, chair of the Energy and Commerce Committee.

Some lawmakers are calling for Section 230 of the Communications Decency Act to be altered. Democrats have introduced several bills to reform Section 230 that are slowly making their way through Congress, while some Republican lawmakers have pushed to repeal the law entirely.

Countries that do not have similar protections for online businesses already censor social media. However, this leads to governmental suppression of free, independent voices. A study by the African Digital Rights Network (ADRN) focusing on 10 countries found governments used a plethora of measures over the last two decades to stifle people’s ability to organize, voice opinions and participate in governance online.

“Our research shows online civic spaces are being closed through various repressive actions, including unwarranted arrests, unwarranted surveillance, and various forms of intimidation,” said digital rights researcher Juliet Nanfuka from the Collaboration on International ICT Policy for East and Southern Africa and member of the ADRN. “Self-censorship online is being fueled by financial restrictions and online content regulation. All of these actions inhibit freedom of expression and access to information, which are fundamental to a flourishing civic space.”

The new research covered South Africa, Cameroon, Zimbabwe, Uganda, Nigeria, Zambia, Sudan, Kenya, Ethiopia and Egypt. It documented 115 examples of technologies, tactics and techniques used to control or censor the internet.

The study found that governments’ most common methods were digital surveillance, disinformation, internet shutdowns, the introduction of laws reducing digital rights, and arrests for online speech.

Government shutdowns of the entire internet or mobile phone system have become increasingly common. The number of intentional internet shutdowns by African governments rose to 25 in 2020 from 21 in 2019, with Algeria, Ethiopia and Sudan the worst-affected countries, said the study.

Government-mandated or controlled online censorship leads to problems that can be worse than the lies and abuse spread through social media.

There Is a Solution

Social media’s ability to influence people toward fear, abuse and violence is only a symptom of a more significant problem. The book of Proverbs states that “the curse causeless shall not come” (26:2). This principle lays out the starting point: all adverse effects must have a cause.

The inflammatory things people say online are a curse on all of society, which means there must be a cause for such harmful rhetoric. The damage to people’s reputations, health and lives is only the last and most obvious link in a chain of cause and effect.

American philosopher Henry David Thoreau said, “There are a thousand hacking at the branches of evil to one who is striking at the root.” Most people see the impacts of a problem but do not take the time to understand the source—the root cause.

Social media companies will never come up with a workable solution, even with government legislation, because they cannot address the real issue—what is in the hearts of the users who generate and consume social media content.

The only solution to this problem requires addressing that. Anything else is just putting a bandage on a bullet wound. It may mitigate the immediate effects, but it never addresses the core cause.

People have always said terrible things about others—yet social media allows them to speak louder and have more people hear them than ever before. God taught that people speak out “of the abundance of the heart” (Luke 6:45), meaning that people say the things they believe deep in their hearts. And “the heart is deceitful above all things, and desperately wicked” (Jer. 17:9). The words “desperately wicked” can also be translated “incurably sick.”

The awful things people post on social media come from a sick, wicked heart. Just think of the disturbing content moderators have had to face so we do not have to see it. And since the source of the problem is the individual, so too is the solution. The Bible explains that anyone who would “love life, and see good days” should “refrain his tongue from evil, and his lips that they speak no guile” (I Pet. 3:10).

Instead of the government stepping in, individuals need to refrain their tongues from evil—they need to moderate themselves.

The apostle James wrote, “If any man offend not in word, the same is a perfect man, and able also to bridle the whole body” (Jms. 3:2). He then likened controlling powerful horses with a small bridle and mighty ships with only a small rudder to how “the tongue is a little member, and boasts great things” (vs. 3-5).

People must recognize the far-reaching effects their words can have—particularly on social media where readers around the world can wrongfully act upon what is said.

King Solomon, the wisest man who ever lived, understood that “death and life are in the power of the tongue” and realized that those who use their voice publicly will “eat the fruit thereof” (Prov. 18:21). The words people use on social media have consequences.

People may not see an immediate impact in their lives, but over time the result becomes obvious. The Berean Study Bible renders James 3:6 as, “The tongue also is a fire” that “sets the course of his life on fire.”

A person who routinely lies, insults, divides, harasses, stalks, trolls or is otherwise abusive and hateful burns down his own life and happiness. Such people become so difficult to be around that friends and family eventually cut them out of their lives.

Social media makes the situation worse by giving such individuals a place to go. They can easily find others who say and believe the same things. As they go deeper and deeper into the online echo chamber of similar ideas, they lose touch with reality, as seen with several content moderators for Facebook.

And no wonder, because “a lying tongue hates those that are afflicted by it” (Prov. 26:28). Social media is full of inaccuracies, shaded truth, and outright lies. Solomon explained thousands of years ago that every one of these is an act of hate against those affected by it. Those who post harmful content do not care enough to consider the damage it could bring to others.

Social media spreads lies, misinformation, abuse and hate because people post from the malice in their hearts and minds that comes from being cut off from God. No amount of social censorship will ever be able to deal with that. Yet individuals can turn to God—and this involves coming to grips with their own human nature. Read Did God Create Human Nature? to understand how to curb what is in your own heart—as well as the words that come out of your mouth.

