Thesis Project

Twitter Home Interface
Role: UX Designer & Researcher
Duration: 6 Months
Tools: Figma, Adobe Illustrator
Team: Azuna (Solo)

Project Overview

This thesis explores the use of both an AI moderation tool and a human moderation support system on X to detect and filter harmful content, aiming to reduce cyberbullying and create a safer environment for youth. The AI tool is designed to improve the speed and accuracy of content review, while the human moderation tool enhances the moderator experience by making workflows more intuitive and effective within X’s user interface. A key challenge was identifying the right features for each tool—striking a balance between moderation effectiveness, user control, and overall user experience.

The Problem

Goals

  • Improve mental health and well-being for youth.
  • Create a safer, more peaceful online environment that fosters trust and encourages positive, responsible engagement among youth users.
  • Develop effective moderation tools to accurately detect harmful language targeted specifically at youth.

User Pain Points

  • Youth often feel unprotected when harmful messages slip through automated moderation—or frustrated when their harmless posts are wrongly flagged or removed, making them feel misunderstood or silenced.
  • Users frequently report that their concerns are ignored or resolved too slowly, leaving them exposed to ongoing bullying or offensive content with little sense of safety or resolution.
  • Young users may lack consistent parental support or digital guidance, and despite being aware of online risks, they still feel powerless to stop harmful behavior when systems fail to intervene effectively.

Research

Methods Used

  • Secondary Research
  • Competitor Analysis
  • Wireframing
  • Personas
  • Mind Mapping
  • User Journey Maps
  • User Flow
  • Affinity Mapping

Key Findings

  • Traditional Safety Tools Are Inadequate: Parental controls, content filters, and educational campaigns often fail to adapt to rapidly evolving online threats, leaving youth vulnerable to harmful content and behaviors.
  • Gaps in Enforcement and Human Oversight: Inconsistent policy enforcement and insufficient human moderation allow harmful behaviors to persist, highlighting the need for more proactive and scalable solutions like AI-driven moderation tools.
  • Reducing Harmful Content Benefits All: Tackling harmful content not only protects vulnerable users and promotes healthier online communities but also lowers liability and healthcare costs for platforms and society at large.

Desktop Version

[High-fidelity wireframes: Twitter home interface and group interfaces 2–5]

This desktop version presents a sequence of high-fidelity wireframes that demonstrate the AI Moderation Tool in action. Red bars highlight harmful words that have been detected, while black bars indicate that these words have been removed. The design also showcases the Human Moderation Tool. Represented by a shield icon labeled "Trust & Safety Team," this feature allows users to initiate contact with a human moderator. After clicking the shield icon, the user is taken to a page where an audio call is automatically started, with the option to switch to a video call. Users can also send messages to the moderator directly from this page.
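To make the detect-and-mask behavior in the wireframes concrete, here is a minimal, purely illustrative sketch of the flow: detected words are flagged (the red-bar state) and then masked out of the post (the black-bar state). The word list, function name, and masking style are hypothetical stand-ins; a real moderation system would rely on a trained classifier rather than a static list.

```python
# Illustrative sketch only: a toy version of the detect-and-mask flow
# shown in the wireframes. HARMFUL_WORDS and moderate() are hypothetical.

HARMFUL_WORDS = {"loser", "idiot"}  # sample list for demonstration

def moderate(post: str) -> tuple[str, list[str]]:
    """Return the post with harmful words masked, plus the words flagged."""
    flagged = []
    cleaned_tokens = []
    for token in post.split():
        # Compare the bare word, ignoring case and trailing punctuation
        bare = token.strip(".,!?").lower()
        if bare in HARMFUL_WORDS:
            flagged.append(bare)                     # "red bar": detected
            cleaned_tokens.append("█" * len(token))  # "black bar": removed
        else:
            cleaned_tokens.append(token)
    return " ".join(cleaned_tokens), flagged

cleaned, flagged = moderate("You are such a loser!")
```

In the interface, the two states are shown sequentially: the red bar marks a detection awaiting action, and the black bar confirms removal, mirroring the flag-then-mask steps above.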

Mobile Version

[High-fidelity wireframes: mobile interfaces 1–5]

Similar to the desktop version, I designed a mobile version that presents a sequence of high-fidelity wireframes demonstrating the AI Moderation Tool in action. Red bars indicate harmful words that have been detected, while black bars show that these words have been removed. The mobile design also features the Human Moderation Tool, represented by a shield icon. Tapping the shield directs the user to a page where an audio call is automatically initiated, with the option to switch to a video call. Users can also send messages to the human moderator directly from this screen.

The Solution

Design Decisions

For my thesis project, I chose to redesign the X interface (formerly Twitter) by mimicking its existing visual style while exploring improvements in layout and interaction. To maintain consistency with the platform’s established identity, I retained both the original color palette and typography. I used Chirp, X’s official typeface, to ensure visual cohesion and preserve the familiar tone of the platform.

Results

Key Achievements

  • Created a duplicate page that mirrors X's layout and design.
  • Integrated an AI moderation tool and enhanced the human moderation experience within X’s user interface.
  • Implemented both the AI and human moderation tools across the desktop and mobile versions of X.

Key Takeaways

What Worked Well

I developed an AI moderation tool and strengthened the human moderation system by incorporating intuitive safety features to combat cyberbullying and detect harmful content targeting youth. These solutions were successfully integrated into X, enhancing content review efficiency and improving the moderator experience.

Lessons Learned

I learned the value of conducting thorough research and how the resulting insights can strengthen ideas and enhance the quality of the final solution.
