Social media is such a substantial and important part of our day-to-day lives that it’s practically impossible to keep misinformation out of the discussion, especially when the algorithms all but promote it to us. Because of the harm the spread of misinformation can cause, it’s important that both people and technology companies play an active role in responding to it. The larger, more popular platforms have gone to great lengths to clean up their content. Today we’ll take a look at some of the policies YouTube and Twitter have developed to help diminish the spread of misinformation.
YouTube Misinformation Policies
According to YouTube, they do not allow misleading or deceptive content that poses a serious risk of harm. Their website states that “Our policies are developed in partnership with a wide range of external experts as well as YouTube Creators. We enforce our policies consistently using a combination of content reviewers and machine learning to remove content that violates our policies as quickly as possible.” They address misinformation on their platform through a set of principles known as the “4 R’s”: REMOVE content that violates their policies, REDUCE recommendations of borderline content, RAISE up authoritative sources for news and information, and REWARD trusted creators.
They also list several policies in their Community Guidelines, such as the general Misinformation Policy, the Elections Misinformation Policies, the COVID-19 Medical Misinformation Policy, and the Vaccine Misinformation Policy.
If your content violates one of these policies, YouTube will remove it and send you an email to let you know. If this is your first time violating the Community Guidelines, you’ll get a warning with no penalty to your channel. Otherwise, they will issue a strike against your channel. Three strikes and you’re out: if you receive three strikes, your channel will be terminated.
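To picture how that escalation works, here is a minimal, purely illustrative sketch of the warning-and-strike flow described above. The function and field names are hypothetical; YouTube does not publish anything like this code.

```python
# Purely illustrative sketch of the warning/strike escalation described above.
# All names are hypothetical; this is not an API that YouTube provides.

def apply_violation(channel: dict) -> str:
    """Apply one Community Guidelines violation to a simple channel record."""
    if not channel.get("has_warning"):
        # First-ever violation: a warning, with no penalty to the channel.
        channel["has_warning"] = True
        return "warning issued"
    channel["strikes"] = channel.get("strikes", 0) + 1
    if channel["strikes"] >= 3:
        # Three strikes and you're out.
        channel["terminated"] = True
        return "channel terminated"
    return f"strike {channel['strikes']} issued"

channel = {}
for _ in range(4):
    print(apply_violation(channel))
# warning issued, strike 1 issued, strike 2 issued, channel terminated
```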
Let’s take a closer look at what each of these policies covers:
Misinformation Policies
- Suppression of census participation
- Manipulated content
- Misattributed content
- Harmful remedies, cures, and substances
- Contradicting expert consensus on safe medical practices
- Content that contradicts local health authorities’ or the WHO’s guidance on vaccines deemed safe by health authorities, or on chemical and surgical abortion methods deemed safe by health authorities
- Promotion of alternative health practices deemed unsafe by local health authorities or the WHO, such as promoting alternative formulas for infants in place of breast milk or commercial formula
- Promotion of alternative abortion methods in place of chemical or surgical methods deemed safe by health authorities
For more detailed information check out YouTube’s Misinformation Policies page.
Election Misinformation
- Voter suppression
- Candidate eligibility
- Incitement to interfere with democratic processes
- Distribution of hacked materials
- Election integrity
For more detailed information check out YouTube’s Election Misinformation Policies page.
COVID-19 Misinformation
- Treatment misinformation
- Prevention misinformation
- Diagnostic misinformation
- Transmission misinformation
- Content that denies the existence of COVID-19
For more detailed information check out YouTube’s COVID-19 Misinformation Policies page.
Vaccination Misinformation
- Vaccine safety
- Efficacy of vaccines: content claiming that vaccines do not reduce transmission or contraction of disease
- Ingredients in vaccines: content misrepresenting the substances contained in vaccines
For more detailed information check out YouTube’s Vaccine Misinformation Policies page.
Twitter Misinformation Policies
Twitter states on their website that their goal is to “create a better informed world so people can engage in healthy public conversation.” They say that they work to mitigate detected threats and also empower customers with credible context on important issues. To help enable free expression and conversations, they only intervene if content breaks their rules. Otherwise, they lean on providing additional context.
Twitter defines misinformation as claims that have been “confirmed to be false by external, subject-matter experts or include information that is shared in a deceptive or confusing manner. The content is identified through a combination of human review and technology, and through partnerships with global third-party experts.” They have a few programs in place, such as Birdwatch and Safety Mode, which are aimed at reducing disruptive interactions and creating a better informed world by empowering people on Twitter to collaboratively add helpful notes to Tweets that might be misleading. They also have several policies in place regarding misinformation, which we’ll take a look at in a sec. The action Twitter takes depends on the level of potential harm the information could cause: it ranges from limiting the amplification of misleading content, or removing it from Twitter outright if offline consequences are immediate and severe, to informing and contextualizing by sharing timely information or credible content from third-party sources. This could look like any of the following: labeling content, prompting you when you engage with a misleading Tweet, creating Twitter Moments, and launching pre-bunks.
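To give a rough feel for that harm-based tiering, here is a small, purely illustrative sketch. The harm levels and actions are my paraphrase of the description above; Twitter does not expose any such function.

```python
# Purely illustrative sketch of the harm-based tiering described above.
# Harm levels and actions are paraphrased; this is not anything Twitter exposes.

def choose_response(harm_level: str) -> list:
    """Map an assessed level of potential harm to the kinds of actions described above."""
    if harm_level == "severe_immediate_offline_harm":
        # The most severe cases are removed from Twitter entirely.
        return ["remove from Twitter"]
    if harm_level == "moderate":
        # Less severe cases get reduced amplification and added context.
        return ["limit amplification", "label content", "prompt on engagement"]
    # Lower-harm or unclear cases lean on informing and contextualizing.
    return ["share credible third-party content", "Twitter Moments", "pre-bunks"]

print(choose_response("severe_immediate_offline_harm"))  # ['remove from Twitter']
```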
Now let’s take a look at the different misinformation policies Twitter has in place.
Crisis misinformation policy
Twitter defines a crisis as any situation in which there is a widespread threat to life, physical safety, health, or basic subsistence that is beyond the coping capacity of individuals and the communities in which they reside. They focus on misleading information with the capacity to:
- Serve as a pretext for further aggression by armed actors, belligerents, or combatants,
- Trigger forced or anticipatory displacement of vulnerable populations, or lead to increased humanitarian needs,
- Negatively impact the ability of humanitarian protection, human rights, or relief organizations to provide assistance or access affected populations,
- Incite the targeting or surveillance of groups that can be identified based on their political, religious, ethnic or ideological affiliation or membership, or organizations and actors protected by international humanitarian law;
- Disrupt potential ceasefire agreements, peacekeeping operations, or diplomatic solutions to conflict or insecurity, among other matters.
For more detailed information check out Twitter’s Crisis Misinformation Policies page.
COVID-19 Misleading Information
You may not use Twitter’s services to share false or misleading information about COVID-19 that may lead to harm. There has been a surge of persistent conspiracy theories, alarmist rhetoric unfounded in research or credible reporting, and a wide range of false narratives and unsubstantiated rumors, which, left uncontextualized, can prevent the public from making informed decisions regarding their health and put individuals, families, and communities at risk. Content that is demonstrably false or misleading and may lead to a significant risk of harm may not be shared on Twitter. This includes false or misleading claims about the efficacy and/or safety of preventative measures, treatments, or other precautions to mitigate or treat the disease; official regulations, restrictions, or exemptions pertaining to health advisories; or the prevalence of the virus or the risk of infection or death associated with COVID-19.
For more detailed information check out Twitter’s COVID-19 misleading information policy page.
Synthetic and manipulated media Policies
You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”). In addition, Twitter may label Tweets containing misleading media to help people understand their authenticity and to provide additional context. In order for content with misleading media (including images, videos, audio, GIFs, and URLs hosting relevant content) to be labeled or removed under this policy, it must (as the sketch after this list illustrates):
- Include media that is significantly and deceptively altered, manipulated, or fabricated, or
- Include media that is shared in a deceptive manner or with false context, and
- Include media likely to result in widespread confusion on public issues, impact public safety, or cause serious harm
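Reading those three bullets as one combined condition, here is a tiny, purely illustrative sketch of how they appear to fit together; the names are hypothetical and this is just one way to read the criteria.

```python
# Purely illustrative: one reading of how the three criteria above combine,
# i.e. (altered/fabricated OR deceptive/false context) AND likely to cause harm.

def qualifies_as_misleading_media(altered_or_fabricated: bool,
                                  deceptive_or_false_context: bool,
                                  likely_to_cause_harm: bool) -> bool:
    """Return True if the media meets the combined labeling/removal criteria as read above."""
    return (altered_or_fabricated or deceptive_or_false_context) and likely_to_cause_harm

# Example: a doctored clip shared with accurate framing and little risk of harm would not qualify.
print(qualifies_as_misleading_media(True, False, False))  # False
```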
For more detailed information check out Twitter’s Synthetic and manipulated media policy page.
Civic Integrity Policies
You may not use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process. In addition, Twitter may label and reduce the visibility of Tweets containing false or misleading information about civic processes in order to provide additional context. This policy addresses four categories of misleading behavior and content:
- Misleading information about how to participate
- Suppression and intimidation
- Misleading information about outcomes
- False or misleading affiliation
For more detailed information check out Twitter’s Civic integrity policy page.