I have created two infographics to educate college students about mis/disinformation.
I chose to use infographics because, as students, we already read plenty of long articles and book chapters to get the information we need. When the opportunity to explain something differently arose, I picked a format that pairs graphics with short, to-the-point information that is easier for students to consume.
When it comes to mis- and disinformation, college students constantly deal with unfamiliar websites while trying to find the information they need for research. Therefore, I thought it was important for other students to know that there are easy ways to identify what type of information we are seeing (misinformation or disinformation) and tools to dig further into its origin.
Using these tips, students can quickly evaluate information and form their own opinions on whether to trust it.
On September 15, 2022, the world celebrated Democracy Day. Different organizations held events to celebrate this day and bring awareness to the importance of democracy. One such event was held by Collaborative Journalism.
For their event, they called upon newsrooms across the U.S. to become media partners in the collaborative. Participants would get a commitment from their organization and sign up as media partners.
Once they became media partners, they would agree to have their name and logo on the media partner page, produce one story or piece of content on Democracy Day, and share links with the team.
Participants were given a content menu with three levels they could work from for their piece: lower effort, medium effort, and higher effort. The higher the level, the more depth and complexity was expected of the story.
I believe this type of event can directly help stop the spread of misinformation. Suppose newsrooms around the U.S. come together to write articles on topics such as ways to support democracy in the community, stories that warn readers about what to expect on election day, and transparency stories about democracy. Stories on how to vote, changes in laws, and polling locations, among others, also help spread correct information.
Election time tends to be especially rife with misinformation because the public constantly shares information based on opinion and tries to pass it off as real or accurate. However, having access to a well-researched source on how things work during election time and how to protect our democracy can help reduce the spread of misinformation.
With these stories published on September 15, the public would have access to well-researched information that can easily debunk myths when they appear on our social media pages.
We all have access to different social media accounts. With this access comes plenty of freedom in what we choose to share with others and how we share it. We can write posts, share memes, GIFs, or even videos. How do these social media platforms ensure that the information we share stays within guidelines and protects us from misinformation?
Today I will be taking a look at two platforms, Facebook and Discord, and their policies regarding stopping the spread of misinformation.
After researching Facebook’s parent company, Meta, I found a site where they discuss their policies regarding stopping misinformation from being shared on the platform.
As they describe it, some content is easier to detect as misinformation than other content. They hire experts who determine whether the content in question can cause imminent harm, and they apply different rules depending on the type of misinformation they are dealing with.
For example, they remove misinformation when it is dangerous and can inflict physical harm or when it is clear that the post contains manipulated media that can affect political processes.
For other misinformation, the page does not say that they remove it; instead, they attempt to create an environment that fosters productive dialogue.
The fact-checkers will review the content and provide ratings such as False, Altered, Partly False, Missing Context, Satire, and True.
Once the content has been labeled, they add a notice to it so the audience is aware and can do additional research.
They also make the post appear lower in the audience’s Feed so that fewer people see the misinformation.
Repeat offenders face restrictions such as reduced distribution, limits on advertising, or losing the ability to register as a Page.
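To make the workflow described above concrete, here is a minimal sketch of a Facebook-style fact-checking pipeline. The rating names come from the policy described in this post; everything else (the data model, the strike threshold, the function names) is my own illustrative assumption, not Meta's actual implementation.

```python
# Hypothetical sketch of a fact-checking moderation pipeline.
# Ratings follow Meta's published labels; thresholds and structure are assumptions.
from dataclasses import dataclass

RATINGS = {"False", "Altered", "Partly False", "Missing Context", "Satire", "True"}
DEMOTED = {"False", "Altered", "Partly False", "Missing Context"}
STRIKE_LIMIT = 3  # assumed cutoff for "repeat offender" restrictions


@dataclass
class Account:
    name: str
    strikes: int = 0

    @property
    def restricted(self) -> bool:
        # Repeat offenders face reduced distribution and advertising limits.
        return self.strikes >= STRIKE_LIMIT


def moderate(post: dict, rating: str, author: Account) -> dict:
    """Apply a fact-checker rating: label the post, demote it if needed,
    and count a strike against the author for rated misinformation."""
    if rating not in RATINGS:
        raise ValueError(f"unknown rating: {rating}")
    post["label"] = rating                # notice shown so the audience can research further
    post["demoted"] = rating in DEMOTED   # appears lower in the Feed
    if post["demoted"]:
        author.strikes += 1
    return post


user = Account("example_page")
post = moderate({"text": "Miracle cure!"}, "False", user)
print(post["label"], post["demoted"], user.restricted)
```

Note that "Satire" and "True" are labeled but not demoted in this sketch, mirroring the idea that only misleading content is pushed lower in the Feed.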
When something does not follow the Community Standards, this is the message the user gets:
I believe that what Facebook is doing helps minimize the spread of misinformation. Warning users before they post, as Facebook does, that the post might violate guidelines gives them an opportunity to stop and think about how the post could affect others. This moment of pause can easily change someone’s mind and stop them from sharing a post that might contain misinformation. Those who still decide to post are not automatically blocked; others can still see their post, but they will be monitored as potential repeat offenders.
I have seen posts from friends accompanied by warnings that the information presented may not be entirely accurate, prompting me to look further into the post and form my own opinion, which I like.
This approach works well with some types of misinformation. However, I have also seen posts where the information is ridiculously false, and we get the same warning. I think Facebook should be somewhat tougher on posts where the information is entirely erroneous, even if it does not pose a physical threat, and auto-delete the post so others cannot share it.
The guidelines for Discord are not as clear-cut as Facebook’s. However, we have to keep in mind that Discord is a newer platform than Facebook, one that is still growing in popularity and may not have found its footing yet.
Discord describes misinformation as false or misleading content that can lead to physical harm. The guidelines state that this content is not to be shared on Discord. However, the guidelines do not include a step-by-step list of how they plan to identify these posts and the process to stop them from being shared.
Discord provided more information on its policy to prevent the spread of false information related to COVID-19. They include a list of content not allowed to be shared on the platform. However, once again, they did not provide information on how they plan on identifying and stopping these posts.
The guidelines do list the possible punishments: warning or temporarily suspending an account, removing the content, or permanently suspending an account.
I think this is a great start to handling the spread of misinformation. However, I believe the platform needs clearer guidelines on how they plan to enforce this new policy. Would they have fact-checkers? Use an algorithm? Would there be different levels of “flags” based on the content? These are all great questions that need to be included in their guidelines to help users feel more comfortable about the content they are exposed to while using the platform.
Today I have taken on the challenge of using Trust Indicators to determine whether a site is trustworthy and whether we can share its news. There are eight indicators we can use when looking at a news site to determine how much we can trust it to report accurate news. The two sites I will be evaluating are AZ Free News and Arizona Silver Belt.
AZ Free News
The Trust Indicators I found on this website are:
Journalist Info- I found the Twitter account for one of the journalists, Terri Jo Neff, and the LinkedIn account for the other journalist, Corrine Murdock. The claims listed on the site about their previous experience can be verified by their presence on these two other social sites.
References- One thing I liked about their articles is that the reporters’ quoted material is usually accompanied by a link to the original source of the information.
Actionable Feedback- At the bottom of the page are links to their social media sites where the audience can easily comment and engage with the site.
Best Practices- A section of the page called Code of Ethics lists what guidelines they follow to ensure they can be trustworthy sources of information.
I could not find evidence of the other indicators. Although the site is separated into News and Opinions, some “news” articles used language that leaned one way or another instead of remaining impartial. Not all of the reporters are local: according to Corrine Murdock’s LinkedIn account, she is based in Texas, not Arizona. I also noticed a lack of diversity among the reporters and the news they reported, which is connected to why some of the news stories felt like they belonged in the Opinions section rather than News.
Another interesting thing I found while using lateral reading to learn more about the staff was a page very similar to this one. I Googled the Managing Editor’s name, Eric Porteous, and could not find a valid social media page belonging to him. His name did come up, however, as the Executive Director of the Arizona Freedom Foundation. That website uses the same language in its mission as AZ Free News but lists only Porteous as its staff.
AZ Free News states that it is a project of the Arizona Freedom Foundation, but it seems somewhat suspicious that both pages would be described so similarly. For these reasons, we should not completely trust the news presented by AZ Free News. It can be a great source to spark curiosity, as long as we continue to double-check and cross-reference its information with other reputable sources.
Arizona Silver Belt
The Trust Indicators I found on this website are:
Journalist Info- Searching for the names of the reporters who wrote the articles on the website confirms that they exist. However, they are not reporters who work directly for Arizona Silver Belt; they work for the Associated Press.
The page is full of advertisements and pop-ups that take away from the experience of getting accurate news. Because the articles do not originate from the page itself, there is no way to verify whether the page is diverse or uses a local perspective. Although the page provides a “Subscribe” button, there is no other way for readers to interact with the page and participate. Additionally, there is no section of the site that discusses their ethics.
Overall, I would not trust this site for my information. It is best to directly follow the Associated Press journalists who initially wrote the articles to stay up to date on accurate news.
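The comparison above can be summed up as a simple checklist score. Below is a minimal sketch that tallies how many of the eight Trust Indicators a site exhibits; the indicator names follow The Trust Project's published list, while the scoring itself is my own illustrative assumption, not an official methodology.

```python
# Hypothetical Trust Indicator checklist scorer (illustrative only).
TRUST_INDICATORS = [
    "Best Practices", "Journalist Info", "Labels", "References",
    "Methods", "Locally Sourced", "Diverse Voices", "Actionable Feedback",
]


def trust_score(found: set[str]) -> float:
    """Return the fraction of the eight indicators found on a site."""
    unknown = found - set(TRUST_INDICATORS)
    if unknown:
        raise ValueError(f"not a Trust Indicator: {unknown}")
    return len(found) / len(TRUST_INDICATORS)


# AZ Free News, per the notes above: four of the eight indicators found.
az_free = {"Journalist Info", "References", "Actionable Feedback", "Best Practices"}
print(f"AZ Free News: {trust_score(az_free):.0%}")  # 50%

# Arizona Silver Belt: only the journalists' names were verifiable.
silver_belt = {"Journalist Info"}
print(f"Arizona Silver Belt: {trust_score(silver_belt):.0%}")
```

A raw fraction like this is only a starting point; as the reviews above show, a present indicator (e.g., named journalists who actually work elsewhere) can still warrant skepticism.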
Gone are the days of sitting in front of the TV for the news. I remember growing up in Puerto Rico and buying the paper every morning on our way to school. No one in my family sits down to read the paper anymore, and based on my conversations with my friends, I doubt they still read the paper or watch the news either.
Almost everyone in my generation can agree that we get our news from social media sites. There are many pros to this approach. The news is right at our fingertips; we do not have to purchase a separate paper or sit through ads waiting for the live news.
However, the problem with getting our news via social media is the high amounts of misinformation and disinformation that cloud the real news. While scrolling through Facebook, I have noticed that when a piece shared by someone seems suspicious, Facebook will do a fact-check and include a link to verified information so the user can decide whether to believe the original post or not.
I believe Facebook is doing a great job by adding that feature, but is it Facebook’s responsibility to flag posts and give us correct information? An article from Cornell University found that Americans want media corporations to be responsible for sorting through what is shared and ensuring we receive accurate information.
This finding then raises the question of freedom of speech. If media corporations were to stop every post they deemed “fake,” that could go against freedom of speech and impede some from sharing their opinions.
I believe a compromise could be reached. Because we all have access to the internet, our opinions should have a space out there. However, if a site detects that misinformation is being spread, it could attach a warning and let the user decide whether to look up more information.
This solution remains difficult to achieve because social media users cannot agree on who should stop the spread of misinformation. As presented in an article by Poynter, some believe we should all be on the lookout for false news; others believe it is the government’s job, and others place the responsibility on media corporations.
Because false information affects us all, I believe the government can create laws that allow social media sites to keep sharing the information we post while offering additional assistance when false information is detected. This way, we can protect our free speech and stay aware of the information we come across online.
There are different ways for us to learn about any topic. Some learn via audio, visuals, or hands-on practice, and some like incorporating games into their learning techniques to make things interesting. I find that games can stimulate different parts of our brains and allow us to retain information in a fun manner. This week, I played two games based on mis/disinformation and fake news. The games were fun and, to a point, addictive.
This game has various levels in which the player gets to read a news article and decide whether the article is true or fake based on the source and content. Once the player reads the article, they can click on “show source” and determine whether the source used for the article is reputable or not. An article by Stevenson University provides good information on whether an article comes from a reliable source.
Once you have decided, you can select the red “X” for fake news or the green check mark for real news. The game then pops up an explanation of why you got it correct or incorrect, along with tips on telling fake news from real news.
This game teaches players to be wary of the information they see online. Just because something sounds credible does not mean it is. Apart from the source, another way players can tell an article might be fake is its grammar. Some fake news sites do not pay enough attention to editing or double-checking their grammar. Therefore, as readers, we will notice many mistakes, which should be a red flag and cause us to question further what we are reading.
I have to say I spent a lot of time playing this game, not because it was time-consuming, but because it was fun to have a goal to accomplish and to see how easy it is to spread false information for personal gain. The premise of the game is that, as a player, you need to generate some quick money for a goal. This goal can be a used car, a down payment for an apartment, or even a new music system. To make things easier on myself, I chose the new music system, which had the lowest monetary goal at $200.
To earn this money, I had to create a website and make it as believable as possible. The game gives you goals to achieve with your website as you copy “news” from other sites and plant them in groups to generate ad revenue. Although the game moves quickly, it does show how long it would typically take to complete different actions.
The game teaches players that, with a little patience, it is relatively easy to copy, paste, and plant fake news online to generate revenue. It opens our eyes to how any regular person with access to a computer and the internet can create whatever “news” they want and pass it off as accurate. It is up to us, the readers, to scrutinize everything we read.
Sometimes the copied stories are real and simply plagiarized for profit. An article by CNBC explains how this happened to a journalist, and how she then created a fake site of her own to see how easy it was to accomplish.
Games offer a way for learners of all types to engage with materials and understand where they come from and how they affect them. When it comes to mis/disinformation, it is important to understand the implications of falling prey to these articles, what we can do to stay aware of what we read, and how to properly accept or reject information. Other online games also touch on the topic of fake news and can help us explore it further.
Misinformation has become so prevalent in today’s society that many well-established personal relationships suffer when one person can separate what is real from what is not while the other is easily persuaded.
Living in Orlando, FL, I am surrounded by theme parks, and I love them all for their unique features and offerings. I have also worked for two of them in the past, and so my friend circle is primarily comprised of past and current theme park employees.
The misinformation topic has become increasingly controversial on my social media, especially surrounding a particular theme park and its practices. This theme park is SeaWorld. Recently, a disagreement over misinformation caused a heated discussion between one of my friends and me. While working at Universal, you get free admission to SeaWorld, and I decided to take a day and visit the park with my husband. When I told my friend I was going to SeaWorld, she said she could not believe I would visit that park and how I should be aware of their animal abuse and not support them in any shape or form.
Here is the thing, she was quoting information from the “documentary” Blackfish.
The SeaWorld controversy is not news to me. I see it constantly all over social media. My stance in this situation is not entirely black and white. Whereas I disagree with animals performing, I also see the benefits of the rescue side of SeaWorld. My issue comes in when people blindly quote material from a film that should be taken with a grain of salt.
I gave myself the homework of watching Blackfish, cross-referencing their claims, and questioning everything they mention.
Blackfish focuses on the killer whale Tilikum and his story, from the moment he was captured to the incidents that occurred over approximately 30 years. For starters, SeaWorld did not capture Tilikum; this is shown in the film itself. SeaWorld purchased Tilikum after he had been trained with negative punishment at another park. Because he was used to living in captivity, there was no option but to go to a park like SeaWorld that could handle large animals; being released into the wild would have been a death sentence for Tilikum.
Throughout the film, they pose multiple claims, but I was able to find a website that debunked them using actual references from doctors and from SeaWorld itself. That website cites its sources at the end of the article, something this so-called “documentary” does not. The documentary attempts to establish credibility by interviewing former SeaWorld trainers. However, the same trainers interviewed mention that they know nothing about killer whales; they only know how to train them. Here is my issue: if you know nothing about the animal itself, why are you making claims about it? Why are you fighting for these animals to be released into the wild? Do you not know that, after being raised in captivity, being released into the wild is a death sentence?
Again, the documentary should be taken with a grain of salt. They have a point in some of their claims, but we cannot believe everything mentioned because there are many holes in their claims.
However, it is so well-made that many people, like my friend, blindly believe the whole thing. They let themselves be persuaded by the sad music and the heartbreaking stories and continue to spread the wrong information.
I have seen examples of what blindly believing such misinformation can do to a harmless social media post. I am a member of a few passholder groups on Facebook for the theme parks around Orlando. Someone posted some great photos captured during their visit to SeaWorld. This person was attacked in multiple comments by people who, like my friend, disagreed with SeaWorld. Once again, their “evidence” was Blackfish.
Blindly believing everything we see out there can be harmful to our relationships. It can ruin our friendships and make someone’s great day seem grim just because of a Facebook post gone wrong.
I hope we can learn to question everything we see and look for accurate references based on facts and those with real credentials. This is the only way we can stay afloat in this sea of misinformation.
It is interesting to realize one’s habits when you put your mind to counting how much media you access in a day. This week I chose a random, typical day to keep track of my media consumption, observe my habits, and see what type of news, memes, and social content I am bombarded with daily. The results were impressive. My biggest realization was that I spend too much time on Facebook. By too much time, I mean I spend hours upon hours of my day mindlessly scrolling through a never-ending feed of news, activities, memes, and friends’ photos without paying much attention to what I am seeing.
Here is a breakdown of how my media-conscious day went:
9:00 am- Wake up. The first thing I did was grab my phone and scroll through Facebook. The first news article that caught my eye was about China censoring the new Illumination movie, Minions: The Rise of Gru. At first, this one surprised me, but I chose to believe the article given that it came from NBC News, which I consider a reputable source of information.
After “liking” a few friends’ photos and seeing what everyone had been up to the day before, I decided it was time to study.
10:00 am- 12:00 pm- Put my phone away and focused on studying.
12:00 pm- I took a break and checked my email. One email that caught my attention was an invitation to join the waitlist for the newly revamped MoviePass. When I first saw it, I was sure I had received Spam mail. However, the email included a link to an article by Time Magazine in which the co-founder and CEO talked about the plans for reopening MoviePass. I followed the Time Magazine article link and concluded that this was accurate information. I had been a MoviePass user before it stopped its services, so I am happy to join the waitlist and see what is in store for the new version.
Around this time, I also went back to scrolling through Facebook and saw a post that caught my attention, made by a travel company from Puerto Rico called Mochileando. The post stated that Universal Orlando had confirmed some of the attractions coming to the new Epic Universe. This caught my attention because, as a theme park enthusiast, I follow Universal Orlando on both Twitter and Facebook and regularly get their notifications. I also used to work for them and still have friends who work there. Before this post, I had not seen anything about confirmed attractions at Epic Universe. I went to the official Universal website to verify the information. As I suspected, the official universalorlando.com website lists no “confirmed attractions.” Many sites show up on Google with speculation about attractions, but the theme park giant has officially confirmed nothing.
I continued to mindlessly scroll through Facebook, when I came upon an article about a Disney guest finding a scorpion in their room.
The site is not what I would consider a reputable source, so I tried to follow the trail of the guest’s post. It seems the post was originally made on Reddit, and there is no way to confirm that there was in fact a scorpion in this guest’s room. Additionally, the post has since been deleted, further raising the question of whether it was real. Although if it is true, that is one scary situation, and I would have run home.
At this point I decided to have some lunch and forget about social media for a while.
1:00 pm- 3:00 pm- More studying.
3:00 pm- After some additional mindless scrolling, I noticed a post by a page called Crafty Morning. This page claims to have found a home remedy for unwanted ants around your home. I am not sure how to properly verify these “home remedies.” Apart from the post’s website, wikiHow and other DIY sites all state that a combination of Borax and sugar will kill unwanted ants. I guess I will have to give it a try next time ants decide to take over my home.
4:00pm- 6:00pm- During this time I decided to take a break from social media and school and focus on something a little more relaxing. I grabbed my yarn and started crocheting while listening to the audiobook I’m Glad my Mom Died by Jennette McCurdy. This is a heartbreaking yet insightful read, especially for someone who grew up watching iCarly.
6:00pm- 7pm- Prepping dinner while listening to Taylor Swift on shuffle.
7:00pm- 9:00pm – During this time, I watch a TV show that airs daily on Telemundo called Top Chef VIP. The show features famous Hispanic singers, actors, content creators and Miss Universe cooking for a chance to win $100,000.
9:00pm- Because I am extremely pregnant, my body just falls asleep too early on most nights. As soon as the show was over, I fell asleep.
I could not believe how quickly a day goes by when you constantly focus on your media consumption. It is sad to realize that I am very likely addicted to mindlessly scrolling through Facebook with no goal. I enjoyed looking at the different articles and questioning them beyond what was in front of me. You know that not everything you see online is true, but to confirm it after verifying the information is truly mind-blowing. I found a roughly 50-50 split between fake and real information during my media-tracking day. It seems that most sites sharing information that may not be real are pages focused on gaining followers and attention. From my experience, real news tends to come from reputable sources such as CNN, NBC, and Time Magazine. I want to keep double-checking where I get my information and cross-referencing attention-grabbing sites with reputable sources before coming to any conclusions.
I look forward to learning more about my media consumption and seeing how I can improve daily.
Welcome to my blog! Here you will find insightful posts about communications, misinformation, and our society. I hope we can enjoy and learn together throughout this semester and beyond. Stay tuned for more!