01. Overview
This case study, drawn from my Master's thesis, explores how I reframed misinformation as a product and interaction design problem, using user-centered research to design and validate six interface interventions that reduced impulsive sharing and improved credibility judgment on social media.




02. Context
Social media platforms are now one of the primary sources of news for millions of people. Yet their interfaces are optimized for speed, engagement, and emotional reaction rather than reflection or accuracy.
Content that is emotionally charged often spreads faster than content that is factual. Users are expected to verify information on their own, while the interface actively encourages instant sharing.
This creates a fundamental mismatch between how humans process information and how platforms are designed.
Challenges
As the sole researcher and designer on the project, I had to address multiple interconnected problems at once:
Posts appear visually identical regardless of credibility or intent
Likes & shares are often mistaken for trustworthiness
Context about sources, timelines, or corrections is hidden or absent
Sharing actions are frictionless, encouraging impulsive behavior
Solution
I addressed these design failures by introducing interface-level support for critical evaluation:
Added clear distinctions to separate news, opinion, and factual claims
Added context with timelines and source comparisons available in the feed
Reframed misinformation labels to explain why content is questionable
Introduced lightweight friction before sharing high-risk content
These labels disrupt truth bias by giving users a visible reason to scrutinize content. They act as a manual override for fast, emotionally driven mental shortcuts.
Users often believe false claims simply because they look serious. Labeling posts as "Opinion" vs. "News" shifted participants from uncertainty to critical skepticism, stopping impulsive, misinformed shares.
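A minimal sketch of how such a label could be attached to a post card follows; the type names, label copy, and explanations are illustrative assumptions rather than the prototype's actual code.

```typescript
// Illustrative sketch: mapping a post's classification to a visible label.
// Type names and copy are assumptions, not the thesis prototype's code.

type ContentType = "news" | "opinion" | "unverified-claim";

interface LabelConfig {
  text: string;        // short label shown on the post card
  explanation: string; // one-line "why" shown on tap or hover
}

const LABELS: Record<ContentType, LabelConfig> = {
  news: {
    text: "News",
    explanation: "Reported by an identifiable outlet with named sources.",
  },
  opinion: {
    text: "Opinion",
    explanation: "Expresses a viewpoint; not presented as verified fact.",
  },
  "unverified-claim": {
    text: "Unverified claim",
    explanation: "No corroborating source found yet. Treat with caution.",
  },
};

// Used by the post card to render the label and its explanation.
function labelFor(type: ContentType): LabelConfig {
  return LABELS[type];
}
```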
Participants improved at recognizing the tone of a post and at accurately judging whether it was misinformed.
Participants found the feature helpful.
Reduces the effort required to verify information
Encourages comparison rather than passive acceptance
Supports informed judgment without restricting choice
of users reported higher confidence in assessing credibility when multiple perspectives were visible
Indicates improved judgment quality without enforcing a “correct” interpretation.
Source: post-task questionnaires following exposure to multi-source views.
of users preferred comparing multiple sources over relying on a single credibility label
This shows clear user demand for contextual, side-by-side verification rather than top-down judgments.
Helps users understand whether information is current, outdated, or already corrected
Reduces misinterpretation of old or recycled claims
Provides temporal context without requiring external research
Supports more cautious judgment in fast-scrolling environments
By making the evolution of a story visible, the timeline helps users interpret claims more accurately and reduces the likelihood of reacting to information that is incomplete, outdated, or taken out of context.
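One way to represent that visible evolution is a small timeline structure attached to each claim; the field names, statuses, and the outdated check below are assumptions made for illustration, not the system's actual data model.

```typescript
// Illustrative sketch of the timeline data a post could carry.

type TimelineStatus = "original" | "update" | "correction" | "retraction";

interface TimelineEvent {
  date: string;          // ISO date, e.g. "2024-03-02"
  status: TimelineStatus;
  summary: string;       // one-line description shown in the feed
  sourceUrl?: string;    // optional link for users who want to dig deeper
}

// A claim is flagged as outdated when a later correction or retraction exists.
function isOutdated(events: TimelineEvent[], claimDate: string): boolean {
  return events.some(
    (e) =>
      (e.status === "correction" || e.status === "retraction") &&
      e.date > claimDate
  );
}
```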
were better able to identify whether a claim was outdated or evolving
reported feeling more confident interpreting posts about ongoing or breaking events when historical context was visible.
Interrupts impulsive, emotion-driven sharing
Encourages users to reconsider the accuracy and impact of a post
Preserves user agency by allowing sharing after the delay




AI Contextual Explanation
Provides an AI-generated explanation beneath posts that are likely to be misleading, summarizing missing context, factual corrections, and why the claim may be incomplete.
Reduces the effort required to understand complex or misleading claims
Explains why a post may be problematic instead of simply labeling it
Supports comprehension without interrupting the feed experience
Because of this, users were able to grasp missing context faster and reported feeling better informed without feeling corrected or talked down to.
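A rough sketch of the explanation payload such a feature might render beneath a post is shown below; the schema, field names, and wording are assumptions for illustration, not the system's actual output format.

```typescript
// Hypothetical shape of the explanation rendered under a flagged post.

interface ContextualExplanation {
  missingContext: string[];   // facts the post leaves out
  corrections: string[];      // points already corrected elsewhere
  whyIncomplete: string;      // plain-language reason the claim may mislead
  confidence: "low" | "medium" | "high";
}

// Renders the explanation as short, scannable lines under the post,
// explaining why the content is questionable rather than just labeling it.
function renderExplanation(e: ContextualExplanation): string {
  return [
    ...e.missingContext.map((m) => `Missing context: ${m}`),
    ...e.corrections.map((c) => `Already corrected: ${c}`),
    `Why this matters: ${e.whyIncomplete}`,
  ].join("\n");
}
```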
Debate-Style AI View
Presents opposing perspectives side-by-side, framing issues as structured arguments rather than definitive conclusions.
Encourages critical thinking over passive acceptance
Makes value judgments and trade-offs explicit
Reduces polarization by exposing users to multiple viewpoints
This approach shifted AI from “answer engine” to “thinking partner,” supporting reflection rather than persuasion.
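Sketched below is one possible data shape for that side-by-side framing; the type and field names are hypothetical and exist only to make the structure concrete.

```typescript
// Hypothetical data shape for the debate-style view; names are illustrative only.

interface Perspective {
  stance: string;          // e.g. "The policy reduced costs"
  keyPoints: string[];     // strongest arguments for this stance
  openQuestions: string[]; // what this side cannot yet answer
}

interface DebateView {
  topic: string;
  left: Perspective;     // shown side by side rather than as a single verdict
  right: Perspective;
  sharedFacts: string[]; // points both perspectives agree on
}
```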
4.6/5
Users rated AI-assisted multiple-source views at an average usefulness score of 4.6 out of 5.
This indicates strong perceived value in source aggregation and contextual comparison.
of users found debate-style AI interactions more effective than standard chatbot explanations when evaluating misinformation.
This shows that dialogic, reasoning-focused AI supports deeper engagement than one-way fact delivery.
reported sharing without checking sources.
people struggled to correctly predict the tone of posts in their feed.
admitted checking sources externally and described the process as annoying or effortful.
associated higher engagement with higher credibility.
Design Principles
This phase focused on designing interventions that help users slow down, think critically, and regain control while consuming information on social media—without adding cognitive overload or restricting agency.
The UX outcomes were shaped around three core goals:
Enable users to interpret content more accurately in fast-scrolling environments
Reduce impulsive reactions and sharing behavior driven by emotion or urgency
Support informed judgment without positioning the system as an authority
Rather than optimizing for engagement, the experience prioritizes clarity, context, and user confidence at key decision moments.
Content Framework
To guide the design, I compiled a toolkit of UX psychology principles grounded in how people actually process information online. While not every principle was used directly, this toolkit informed decisions throughout the design process.
Solving Impulsive Sharing
Applied through sharing friction, reflective secondary actions, and short delays before sharing.
Solving Emotional Interpretation
Applied through tone indicators, context explanations, and multi-source comparisons.
Solving Verification Fatigue
Applied through embedded sources, AI explanations, and side-by-side perspectives.
Design Principles
Embracing UX Anti-Patterns
While UX anti-patterns are usually seen as 'design don'ts,' I believe they can be harnessed as innovative solutions when used thoughtfully.
Add Friction to Entry
Friction is usually treated as something to eliminate. In this project, it was used intentionally.
Instead of optimizing every action for speed, I introduced small moments of resistance at critical points, especially around sharing. These pauses are designed to interrupt habitual behavior without blocking user intent.
Use Action to Dissuade
Rather than relying on warnings or passive labels, the interface uses secondary actions to gently redirect behavior.
The primary action remains available, but a more reflective alternative is surfaced alongside it, such as viewing the timeline or comparing sources.
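The sketch below illustrates this idea under stated assumptions: a hypothetical highRisk flag set upstream, a dialog helper passed in by the UI, and an arbitrary three-second pause. It is not the prototype's implementation.

```typescript
// Minimal sketch of "friction before sharing" with a reflective secondary action.

interface Post {
  id: string;
  highRisk: boolean; // assumed to be set upstream by misinformation signals
}

type ShareChoice = "share" | "viewTimeline" | "compareSources" | "cancel";

async function onShareTapped(
  post: Post,
  showDialog: (options: ShareChoice[]) => Promise<ShareChoice>,
  share: (id: string) => Promise<void>
): Promise<void> {
  // Low-risk posts share immediately: friction is applied only where it matters.
  if (!post.highRisk) {
    await share(post.id);
    return;
  }

  // Surface reflective alternatives alongside the primary action.
  const choice = await showDialog(["viewTimeline", "compareSources", "share", "cancel"]);

  if (choice === "share") {
    // Brief pause before completing the share; the user can still proceed.
    await new Promise((resolve) => setTimeout(resolve, 3000));
    await share(post.id);
  }
  // "viewTimeline" / "compareSources" would route to the context views instead.
}
```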
Content Framework
The iACT Model for Information Clarity
To reduce reading effort while increasing understanding, I developed a lightweight content framework for AI explanations and contextual prompts, sketched after the breakdown below.
Insight: Surface the key issue or missing context, e.g. "This claim removes key details from a longer timeline."
Action: Offer a simple next step or reflection, e.g. "View how this event unfolded over time."
Consequence: Explain why it matters to the user, e.g. "Earlier context changes how this claim is interpreted."
Twist: Add a human, motivating nudge, e.g. "Truth rarely fits in one post."
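One way to make the model concrete is a simple typed structure; the example copy comes from the breakdown above, while the type and field names are illustrative assumptions.

```typescript
// Sketch of the iACT structure as a typed message.

interface IACTMessage {
  insight: string;     // surface the key issue or missing context
  action: string;      // offer a simple next step or reflection
  consequence: string; // explain why it matters to the user
  twist: string;       // a short, human nudge
}

const example: IACTMessage = {
  insight: "This claim removes key details from a longer timeline.",
  action: "View how this event unfolded over time.",
  consequence: "Earlier context changes how this claim is interpreted.",
  twist: "Truth rarely fits in one post.",
};

// Contextual prompts render the four parts in order, keeping each to one line.
const promptText = [example.insight, example.action, example.consequence, example.twist].join(" ");
```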
Designing the Experiment
The user testing phase was designed to evaluate whether the interventions meaningfully changed how people interpreted and acted on information, not whether they simply noticed new features.
Instead of testing individual components in isolation, the experience was structured to mirror real social media use as closely as possible.
Test Structure
The experiment was built around realistic, scenario-based tasks derived from earlier research and observation.
Participants were asked to:
Scroll through a simulated social media feed
Encounter posts containing misleading or ambiguous claims
Make natural decisions such as reading, ignoring, or sharing content
The interface was intentionally minimal, avoiding explicit instructions that could bias behavior.
Before and After Comparison
To understand the impact of the designs, the study compared behavior across two conditions:
Baseline condition: Posts presented without additional context or interventions
Intervention condition: The same posts presented with design interventions applied
This structure allowed for direct comparison of interpretation, confidence, and decision-making under identical content conditions.
What was measured?
The study focused on signals that reflect real-world behavior:
Changes in perceived credibility and confidence
Willingness to seek additional context
Hesitation or reflection before sharing
Ability to articulate why a post felt credible or misleading
These measures were chosen to capture cognitive and behavioral shifts rather than surface-level preferences.
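As an illustration of how those paired observations could be compared, the sketch below computes within-participant change on a numeric measure; the measure names, the 1-5 scales, and the boolean signals are assumptions rather than the study's actual instruments.

```typescript
// Illustrative sketch of comparing baseline vs. intervention responses per participant.

interface ConditionScores {
  credibilityAccuracy: number;     // 1-5, judged against the post's actual status
  confidence: number;              // 1-5, self-reported
  soughtContext: boolean;          // opened timeline or source comparison
  hesitatedBeforeSharing: boolean; // paused or reconsidered before sharing
}

interface ParticipantResult {
  id: string;
  baseline: ConditionScores;
  intervention: ConditionScores;
}

// Average within-participant change on a numeric measure across the sample.
function meanChange(
  results: ParticipantResult[],
  pick: (s: ConditionScores) => number
): number {
  const deltas = results.map((r) => pick(r.intervention) - pick(r.baseline));
  return deltas.reduce((sum, d) => sum + d, 0) / deltas.length;
}
```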









