Freedom of opinion and critical thinking are, in principle, good things to be cherished and encouraged online. However, to quash harmful social movements, some public entities and social-media platforms have adopted unpopular censorship measures, either reducing the visibility of certain content or removing the offending content altogether. But such top-down interventions often spur backlashes and thus risk reinforcing the very conspiracy theories they aim to stave off.
Other, more constructive approaches to addressing misinformation and disinformation online include providing users with a means of understanding whether certain information is trustworthy. [2] However, there has been little study of the relationship between UX design and the development of reliable means of distinguishing fact from opinion or outright fiction.
In this column, I’ll highlight some findings from research that can guide UX designers in creating what I’ll refer to as counter-misinformation features. Arguably, our need for such competencies could grow in the future, in parallel with the rise of necessary interventions by governments and content regulators who are trying to tackle extremist and misleading narratives.
Now, let’s consider some effective approaches to helping users assess the veracity of the information they consume online.
Minimizing User Effort
In dealing with misinformation, the first thing that comes to mind is obviously users’ access to reliable resources for fact-checking. Spotting and debunking misinformation online is by no means always easy, particularly when it comes from people we know or from sources we consider authoritative.
However, plenty of fact-checking resources are available online, such as Google Fact Check Explorer and Factcheck.org. Plus, some platforms and industries are themselves making fact-checking resources available that are specific to a particular domain. Examples include Facebook’s COVID-19 misinformation center, as well as the practice of labeling posts that contain misinformation or disinformation.
One factor worth considering is so-called emotional labor, a phenomenon that affects users who are actively trying to identify fake news.