
Misinformation, Social Media, and Free Speech
The internet was once seen as a force for positive social change, enabling movements such as the Arab Spring and Black Lives Matter and promoting democracy around the world. However, it has also played a role in the decline of democracy through the spread of fake news, misinformation, and conspiracy theories on social media. This apparent contradiction can be explained by examining the economic and cognitive factors at play.
Online platforms, including social media sites, serve the interests of advertisers rather than consumers. Users pay for free services by giving up their data, which is then used to target them with ads tailored to their preferences and personal characteristics. This “surveillance capitalism” incentivizes platforms to prioritize the needs of advertisers over those of their users, even when doing so works to users’ detriment.
The economic model of these platforms has also led to the exploitation of cognitive biases and vulnerabilities. For example, humans are naturally drawn to emotionally charged or surprising information, which was useful in early human societies but can be manipulated by platforms seeking to capture users’ attention. YouTube’s recommendation algorithm, for instance, amplifies sensational content to keep people engaged, and a study by Mozilla found that YouTube recommends videos that violate its own policies on misinformation, hate speech, and inappropriate content.
Misinformation and fake news are particularly effective at capturing attention because they often provoke outrage or awe. Digital platforms are flooded with this type of content because they prioritize information that captures users’ attention and keeps them engaged. Facebook’s newsfeed algorithm, for example, gave more weight to content that elicited anger than to content that evoked happiness. Algorithms can also filter out harmful or illegal content, but until recently, platforms largely prioritized free speech over truth. During the COVID-19 pandemic, however, many platforms, including Facebook, Google, and Twitter, took a more active role in moderating content, sparking debates about censorship and the role of these platforms in society.
The internet has thus affected democracy both positively and negatively: the economic and cognitive factors described above drive the spread of misinformation and the manipulation of users’ attention. While moderation is necessary to address harmful content, its impact on free speech and the broader role of these platforms in society must also be considered.
How to combat disinformation
To better assess online information, consider the following practices:
Be a critical thinker. Don’t believe everything you see online. Verify information from multiple sources before accepting it as true.
Look for reliable sources. Check the credibility of the websites and social media accounts where you get your information. Avoid sources that are known to spread disinformation or have a history of publishing false information.
Check the date of the information. Disinformation spreads quickly online, and outdated stories are often recirculated as if they were new, so make sure you’re getting the most current information.
Consider the source. Who is the author or creator of the information? Are they an expert in the field? Do they have a track record of publishing accurate information?
Be skeptical of sensational claims. Headlines and stories that are designed to grab your attention and get you to click on a link are often used to spread disinformation. Take the time to read the full article before you decide to share it.
Don’t spread misinformation yourself. If you come across information that you think might be false, don’t share it with others.