
We all have access to different social media accounts. With this access comes plenty of freedom in what we choose to share with others and how we share it. We can write posts and share memes, GIFs, or even videos. But how do these social media platforms ensure that everything we share stays within their guidelines and protects us from misinformation?
Today I will be taking a look at two platforms, Facebook and Discord, and their policies for stopping the spread of misinformation.
After researching Facebook’s parent company, Meta, I found a page where they discuss their policies for stopping misinformation from being shared on the platform.
As they describe it, some content is easier to identify as misinformation than others. For instance, they hire experts to determine whether the content in question could cause imminent harm, and they apply different rules depending on the type of misinformation they are dealing with.
For example, they remove misinformation when it is dangerous and can inflict physical harm or when it is clear that the post contains manipulated media that can affect political processes.
Otherwise, the page does not say that they remove the misinformation; instead, they try to create an environment that leads to productive dialogue.
The process for addressing misinformation on Facebook is as follows:
Fact-checkers review the content and assign one of several ratings: False, Altered, Partly False, Missing Context, Satire, or True.
Once the content has been labeled, they add a notice to it so the audience is aware and can do additional research.
They make sure that the post appears lower in the audience’s Feed so that fewer people see the misinformation.
Repeat offenders face restrictions such as reduced distribution of their posts, limits on their advertising, or the loss of their ability to register as a Page.
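To make this process easier to picture, here is a minimal Python sketch of how a rating-then-demote pipeline like the one described above could look. It is purely illustrative: the rating names come from Facebook’s published labels, but the `Post` fields, demotion weight, and three-strike threshold are my own assumptions, not Meta’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Rating(Enum):
    """Fact-check ratings listed on Meta's policy page."""
    FALSE = auto()
    ALTERED = auto()
    PARTLY_FALSE = auto()
    MISSING_CONTEXT = auto()
    SATIRE = auto()
    TRUE = auto()


@dataclass
class Post:
    author_id: str
    text: str
    rating: Rating | None = None
    notice: str | None = None
    feed_rank_penalty: float = 0.0  # higher value = shown lower in the Feed


# Ratings that trigger a notice and reduced distribution; the penalty value
# and the three-strike threshold below are invented for illustration.
DEMOTED_RATINGS = {Rating.FALSE, Rating.ALTERED, Rating.PARTLY_FALSE}
strike_counts: dict[str, int] = {}  # hypothetical repeat-offender tracking


def apply_fact_check(post: Post, rating: Rating) -> Post:
    """Label the post, attach a notice, demote it, and count strikes."""
    post.rating = rating
    if rating in DEMOTED_RATINGS:
        post.notice = f"Independent fact-checkers rated this post: {rating.name}."
        post.feed_rank_penalty = 0.5  # placeholder demotion weight
        strike_counts[post.author_id] = strike_counts.get(post.author_id, 0) + 1
    return post


def author_restrictions(author_id: str) -> list[str]:
    """Repeat offenders face distribution, advertising, and Page restrictions."""
    if strike_counts.get(author_id, 0) >= 3:  # threshold is an assumption
        return ["reduced distribution", "advertising limits", "cannot register as a Page"]
    return []
```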
When something does not follow the Community Standards, this is the message the user gets:

I believe that what Facebook is doing helps minimize the spread of misinformation. Warning the user before posting, as they do, that the post might violate guidelines still gives the user an opportunity to stop and think about how their post could affect others. This moment of pause can easily change someone’s mind and stop them from sharing a post that might contain misinformation. Those who decide to post anyway are not automatically blocked; others can still see their post, but the posters are tracked as potential repeat offenders.
I have seen posts from friends labeled with a warning that the information presented may not be entirely accurate, which pushes me to look further into the post and form my own opinion, and I like that.
This approach works well with some types of misinformation. However, I have also seen posts where the information is ridiculously false, and we get the same warning. I think Facebook should be somewhat tougher on posts where the information is entirely erroneous, even if it does not pose a physical threat, and auto-delete the post to prevent others from sharing it.
Discord
The guidelines for Discord are not as clear-cut as Facebook’s. However, we have to keep in mind that Discord is a newer platform than Facebook, one that is still growing in popularity and may not have found its footing yet.
In February 2022, Discord released a statement updating its Terms of Service, Privacy Policy, and Community Guidelines. These changes went into effect in March 2022. As part of the new Community Guidelines, they added a section focusing on misinformation.
Discord describes misinformation as false or misleading content that can lead to physical harm, and the guidelines state that this content is not to be shared on Discord. However, the guidelines do not include a step-by-step description of how they plan to identify these posts or the process for stopping them from being shared.
Discord provided more information on its policy to prevent the spread of false information related to COVID-19, including a list of content not allowed to be shared on the platform. However, once again, they did not explain how they plan to identify and stop these posts.
The guidelines do describe the possible punishments: warning or temporarily suspending an account, removing the content, or permanently suspending an account.
I think this is a great start to handling the spread of misinformation. However, I believe the platform needs clearer guidelines on how it plans to enforce this new policy. Would they have fact-checkers? Use an algorithm? Would there be different levels of “flags” based on the content? These are all questions the guidelines should answer to help users feel more comfortable about the content they are exposed to while using the platform.