Mitigating Misinformation on Social Media

A User-Centered Design Approach

01. Overview

This case study, from my Master's thesis, explores how I reframed misinformation as a product and interaction design problem. Using user-centered research, I designed and validated six interface interventions that reduced impulsive sharing and improved credibility judgment on social media.

02. CONTEXT

Information Without Guardrails

Social media platforms are now one of the primary sources of news for millions of people. Yet their interfaces are optimized for speed, engagement, and emotional reaction rather than reflection or accuracy.

Content that is emotionally charged often spreads faster than content that is factual. Users are expected to verify information on their own, while the interface actively encourages instant sharing.

This creates a fundamental mismatch between how humans process information and how platforms are designed.

03. KEY CHALLENGES

Complexity Without Clarity

Challenges

As the sole researcher and designer on the project, I had to address multiple interconnected problems at once:

  • Posts appear visually identical regardless of credibility or intent

  • Likes & shares are often mistaken for trustworthiness

  • Context about sources, timelines, or corrections is hidden or absent

  • Sharing actions are frictionless, encouraging impulsive behavior

Solution

I addressed these design failures by introducing interface-level support for critical evaluation:

  • Added clear distinctions to separate news, opinion, and factual claims

  • Added context with timelines and source comparisons available in the feed

  • Reframed misinformation labels to explain why content is questionable

  • Introduced lightweight friction before sharing high-risk content

POST TONE LABEL

Identify The Intent Behind Every Post You See

Satire often looks like breaking news. These labels act as a "tone indicator," ensuring that sarcastic or humorous posts aren't misremembered as hard news when you check your feed.

WHY IT WORKS

These labels disrupt the Truth Bias by providing a visual reason to scrutinize content. They act as a manual override for fast, emotionally-driven mental shortcuts.

MY NOTES

Users often believe false claims because they look serious. Labelling posts as "Opinion" vs "News" shifted participants from uncertainty to critical skepticism, stopping impulsive, misinformed shares.

Participants showed improvement in recognising the tone of posts and in accurately judging whether a post was misinformed.

Participants found the feature helpful and useful.

SOLUTIONS

Multiple Sources View

Displays coverage of the same claim from multiple sources directly within the interface, allowing users to compare perspectives side-by-side without leaving the platform.

  • Reduces the effort required to verify information

  • Encourages comparison rather than passive acceptance

  • Supports informed judgment without restricting choice

Users reported higher confidence in assessing credibility when multiple perspectives were visible, indicating improved judgment quality without enforcing a “correct” interpretation.
Source: post-task questionnaires following exposure to multi-source views.

Users preferred comparing multiple sources over relying on a single credibility label, showing clear demand for contextual, side-by-side verification rather than top-down judgments.

TIMELINE VIEW

Placing information in time

Adds a chronological timeline to posts discussing evolving events, showing key moments such as when a claim first appeared, how it changed, and when corrections or new information emerged.

WHY IT WORKS

  • Helps users understand whether information is current, outdated, or already corrected

  • Reduces misinterpretation of old or recycled claims

  • Provides temporal context without requiring external research

  • Supports more cautious judgment in fast-scrolling environments

END RESULT

By making the evolution of a story visible, the timeline helps users interpret claims more accurately and reduces the likelihood of reacting to information that is incomplete, outdated, or taken out of context.

Participants were better able to identify whether a claim was outdated or evolving.

Participants reported feeling more confident interpreting posts about ongoing or breaking events in historical context.

Sharing Delay

Creating space to pause

Introduces a short delay before users can share posts flagged as potentially misleading, prompting them to pause briefly before amplifying the content.

  • Interrupts impulsive, emotion-driven sharing

  • Encourages users to reconsider the accuracy and impact of a post

  • Preserves user agency by allowing sharing after the delay
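
The interaction logic above can be sketched as a simple gate. Everything in this sketch is hypothetical — the type names, the five-second window, and the helper functions illustrate the mechanic, not the thesis prototype's actual code.

```typescript
// Hypothetical sketch of the sharing-delay gate (not the prototype's code).
// Flagged posts require a short reflection window before a share completes;
// unflagged posts share immediately, preserving normal behavior.

type Post = { id: string; flagged: boolean };

const REFLECTION_MS = 5000; // assumed delay for flagged content

// How long the UI should hold the share action for this post.
function shareDelayFor(post: Post): number {
  return post.flagged ? REFLECTION_MS : 0;
}

// Resolves the share once the pause has elapsed; while "waiting",
// the user can still cancel or open the context views instead.
function shareState(
  post: Post,
  nowMs: number,
  requestedAtMs: number
): "shared" | "waiting" {
  return nowMs - requestedAtMs >= shareDelayFor(post) ? "shared" : "waiting";
}
```

The point of the sketch is that the primary action is delayed, never removed: once the window passes, sharing proceeds as normal, which is how the design preserves agency.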

SOLUTIONS

Using AI to support judgment, not replace it

Rather than positioning AI as an authority or fact-checker, I explored how AI could act as a contextual assistant, helping users reason through complex or misleading information while preserving autonomy.

AI Contextual Explanation

Provides an AI-generated explanation beneath posts that are likely to be misleading, summarizing missing context, factual corrections, and why the claim may be incomplete.

  • Reduces the effort required to understand complex or misleading claims

  • Explains why a post may be problematic instead of simply labeling it

  • Supports comprehension without interrupting the feed experience

Because of this, users grasped missing context faster and reported feeling better informed, without feeling corrected or talked down to.

Debate-Style AI View

Presents opposing perspectives side-by-side, framing issues as structured arguments rather than definitive conclusions.

  • Encourages critical thinking over passive acceptance

  • Makes value judgments and trade-offs explicit

  • Reduces polarization by exposing users to multiple viewpoints

This approach shifted AI from “answer engine” to “thinking partner,” supporting reflection rather than persuasion.

4.6/5

Users rated AI-assisted multiple-source views at an average usefulness score of 4.6 out of 5.
This indicates strong perceived value in source aggregation and contextual comparison.

Users found debate-style AI interactions more effective than standard chatbot explanations when evaluating misinformation.
This shows that dialogic, reasoning-focused AI supports deeper engagement than one-way fact delivery.

04. RESEARCH

Grounding Design in Evidence

To understand how interface design contributes to misinformation, I combined quantitative surveys, usability testing, and observational analysis. This mixed-methods approach helped uncover behavioral patterns, not just opinions, and informed every design decision.

Users rarely verify before sharing posts

Participants reported sharing without checking sources. Most shared news impulsively, driven by emotional reactions rather than evaluation, even when they expressed concern about misinformation in principle.

Design opportunity:

Adding friction to sharing could reduce reshares of posts suspected of misinformation.

Post tone is frequently misinterpreted

Participants struggled to correctly predict the tone of posts in their feed and to identify whether posts were factual, opinionated, or satirical, often relying on assumptions rather than intent cues provided by the interface.

Design opportunity:

Introduce explicit post-type and tone indicators to support correct interpretation before users form judgments or share content.

Source verification is external and frustrating

Participants admitted checking sources externally and described the process as annoying or effortful. They reported manually Googling claims to verify credibility, often finding the process tedious and fragmented because they had to search for and compare multiple sources themselves.

Design opportunity:

Surface multiple, comparable sources directly within the interface to reduce verification friction and support informed judgment without forcing users to leave the platform.

Engagement is mistaken for credibility

Participants associated higher engagement with higher credibility, consistently interpreting likes, shares, and comments as signals of trustworthiness, despite knowing these metrics reflect popularity rather than accuracy.

05. SCENARIO-BASED DESIGN

Designing for Real Moments

Insights from surveys and observations revealed recurring patterns in how users encounter, interpret, and share information on social media. Rather than designing abstract solutions, I translated these patterns into realistic usage scenarios that reflected everyday behavior.

06. UX PRINCIPLES USED

Designing for how people actually behave

The interventions in this project were guided by a small set of UX principles grounded in real user behavior. These principles helped ensure the designs supported people in fast, emotional, and imperfect decision-making environments.

Design Principles

This phase focused on designing interventions that help users slow down, think critically, and regain control while consuming information on social media, without adding cognitive overload or restricting agency.


The UX outcomes were shaped around three core goals:

  • Enable users to interpret content more accurately in fast-scrolling environments

  • Reduce impulsive reactions and sharing behavior driven by emotion or urgency

  • Support informed judgment without positioning the system as an authority


Rather than optimizing for engagement, the experience prioritizes clarity, context, and user confidence at key decision moments.

Content Framework

To guide the design, I compiled a toolkit of UX psychology principles grounded in how people actually process information online. While not every principle was used directly, this toolkit informed decisions throughout the design process.

Solving Impulsive Sharing

Applied through sharing delays and lightweight friction before high-risk shares.

Cognitive load

When overwhelmed, users rely on shortcuts rather than reasoning

Temporal discounting

Immediate emotional rewards outweigh long-term consequences

Pause effects

Even small delays can shift behavior toward reflection

Solving Emotional Interpretation

Applied through tone indicators, context explanations, and multi-source comparisons.

Affect heuristic

Emotions strongly influence perceived truth

Framing effects

How information is presented shapes belief more than content itself

Confirmation bias

Users favor content aligned with existing views

Solving Verification Fatigue

Applied through embedded sources, AI explanations, and side-by-side perspectives.

Effort avoidance

Users avoid tasks that require leaving the platform

Information foraging

People prefer nearby, comparable information

Trust through transparency

Visible reasoning increases confidence

Design Principles

Embracing UX Anti-Patterns

While UX anti-patterns are usually seen as 'design don'ts,' I believe they can be harnessed as innovative solutions when used thoughtfully.

Add Friction to Entry
Friction is usually treated as something to eliminate. In this project, it was used intentionally.


Instead of optimizing every action for speed, I introduced small moments of resistance at critical points, especially around sharing. These pauses are designed to interrupt habitual behavior without blocking user intent.

Use Action to Dissuade

Rather than relying on warnings or passive labels, the interface uses secondary actions to gently redirect behavior.


The primary action remains available, but a more reflective alternative is surfaced alongside it, such as View Timeline or Compare Sources.

Content Framework

The iACT Model for Information Clarity

To reduce reading effort while increasing understanding, I developed a lightweight content framework for AI explanations and contextual prompts.


Insight: Surface the key issue or missing context, e.g. “This claim removes key details from a longer timeline.”

Action: Offer a simple next step or reflection, e.g. “View how this event unfolded over time.”

Consequence: Explain why it matters to the user, e.g. “Earlier context changes how this claim is interpreted.”

Twist: Add a human, motivating nudge, e.g. “Truth rarely fits in one post.”
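
The four parts above can be expressed as a small data shape that the interface assembles in a fixed order. The type and function names in this sketch are hypothetical — it illustrates how iACT explanations could be structured, not the framework's actual implementation.

```typescript
// Hypothetical sketch of an iACT explanation as structured content.
type IactExplanation = {
  insight: string;     // the key issue or missing context
  action: string;      // a simple next step or reflection
  consequence: string; // why it matters to the user
  twist: string;       // a human, motivating nudge
};

// Renders the four parts in their fixed order as the short
// explanation shown beneath a flagged post.
function renderIact(e: IactExplanation): string {
  return [e.insight, e.action, e.consequence, e.twist].join(" ");
}

// Example built from the iACT samples above.
const example: IactExplanation = {
  insight: "This claim removes key details from a longer timeline.",
  action: "View how this event unfolded over time.",
  consequence: "Earlier context changes how this claim is interpreted.",
  twist: "Truth rarely fits in one post.",
};
```

Keeping the four slots separate means each explanation stays short and scannable, and the order (problem, step, stakes, nudge) never has to be rewritten per post.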

07. USER TESTING & EXPERIMENT DESIGN
07. USER TESTING & EXPERIMENT DESIGN

Testing Behavior, Not Opinions

Testing Behavior, Not Opinions

Designing the Experiment

The user testing phase was designed to evaluate whether the interventions meaningfully changed how people interpreted and acted on information, not whether they simply noticed new features.

Instead of testing individual components in isolation, the experience was structured to mirror real social media use as closely as possible.

Test Structure

The experiment was built around realistic, scenario-based tasks derived from earlier research and observation.

Participants were asked to:

  • Scroll through a simulated social media feed

  • Encounter posts containing misleading or ambiguous claims

  • Make natural decisions such as reading, ignoring, or sharing content

The interface was intentionally minimal, avoiding explicit instructions that could bias behavior.

Before and After Comparison

To understand the impact of the designs, the study compared behavior across two conditions:

  • Baseline condition: Posts presented without additional context or interventions

  • Intervention condition: The same posts presented with design interventions applied

This structure allowed for direct comparison of interpretation, confidence, and decision-making under identical content conditions.
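
The two-condition comparison can be sketched as a per-condition aggregate. The data shape and field names here are hypothetical, illustrating the analysis structure rather than the study's real instruments or data.

```typescript
// Hypothetical sketch of the baseline-vs-intervention comparison.
type Trial = {
  participant: string;
  condition: "baseline" | "intervention";
  credibilityScore: number; // e.g. a 1-5 rating of perceived credibility
};

// Mean score for one condition; the difference between the two means
// estimates the interventions' effect under identical content.
function meanScore(trials: Trial[], condition: Trial["condition"]): number {
  const scores = trials
    .filter(t => t.condition === condition)
    .map(t => t.credibilityScore);
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```

Because each participant saw the same posts in both conditions, the per-condition means can be compared directly without correcting for differences in content.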

What was measured?

The study focused on signals that reflect real-world behavior:

  • Changes in perceived credibility and confidence

  • Willingness to seek additional context

  • Hesitation or reflection before sharing

  • Ability to articulate why a post felt credible or misleading

These measures were chosen to capture cognitive and behavioral shifts rather than surface-level preferences.
