Case study

Making Writer Experience measurable across tools

Writers worked across multiple tools, but the overall experience was not formally measured or understood. I designed a structured system to turn fragmented feedback into a measurable view of usability, making it possible to establish a baseline, track improvement over time, and connect feedback to action.

Focus
UX measurement, evaluation design, cross-tool experience
Role
System designer and program contributor
Core method
System Usability Scale (SUS) + user feedback + repeated cycles
Outcome
A repeatable model for measuring and improving Writer Experience

The problem

Writers were working across multiple tools, but the overall experience was understood only through scattered signals. Issues appeared in tickets, questions surfaced in Slack, and frustrations were known anecdotally, but there was no baseline, no shared measurement method, and no reliable way to track improvement over time.

This made it difficult to answer simple but important questions: Was the experience improving? Which changes had the biggest effect? Where should teams focus next?

What was missing

  • No formal baseline for usability
  • No consistent way to evaluate experience
  • No system for tracking change over time
  • No structured way to turn feedback into insight

The need

We needed a way to turn fragmented feedback into a clear, measurable view of the Writer Experience. That meant creating a structured approach that could establish a usability baseline, evaluate experience consistently, capture context behind the scores, and track progress across multiple cycles.

The goal was not simply to gather more feedback. The goal was to create a system that made experience visible enough to guide prioritization and support iterative improvement.

Success looked like this

  • Usability could be measured consistently
  • Feedback could explain score changes
  • Improvement could be tracked over time
  • Teams could make decisions using evidence, not guesswork

The system

I helped shape a structured approach to measuring Writer Experience by combining standardized usability measurement, direct user feedback, and repeated evaluation cycles. This turned scattered signals into a repeatable evaluation loop.

1. Establish a baseline. Use the System Usability Scale (SUS) to create a consistent starting point for measurement.
2. Capture context. Collect direct user feedback to understand why users scored the experience the way they did.
3. Repeat across cycles. Run multiple evaluation cycles to observe change over time instead of relying on one-time impressions.
4. Connect signals to action. Use quantitative scores and qualitative feedback together to identify patterns, prioritize improvements, and validate changes.

How it worked in practice

Standardized measurement

SUS was used to establish a baseline and provide a consistent framework for comparison across cycles.
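The SUS scoring rule itself is standard: each of the ten items is answered on a 1–5 scale, odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the summed contributions are multiplied by 2.5 to yield a 0–100 score. A minimal sketch of that calculation (the response values below are illustrative, not data from this study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-indexed: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: a fairly positive (hypothetical) response set
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```

Because every respondent is scored the same way, per-cycle averages become directly comparable, which is what makes the baseline meaningful.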

User feedback collection

Direct user input added explanatory detail, helping reveal what was influencing the scores.

Cycle-based evaluation

Repeated measurement made it possible to track trends and improvement over time.
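In practice, tracking a trend amounts to comparing the mean SUS score of each cycle against the previous one. A sketch of that comparison, with hypothetical cycle names and scores (none of these figures come from the study itself):

```python
from statistics import mean

def cycle_summary(cycles):
    """Return (cycle name, mean SUS, delta vs. previous cycle) for each cycle."""
    rows, previous = [], None
    for name, scores in cycles.items():
        avg = mean(scores)
        rows.append((name, avg, None if previous is None else avg - previous))
        previous = avg
    return rows

# Hypothetical per-respondent SUS scores, grouped by evaluation cycle.
trend = cycle_summary({
    "cycle 1": [55.0, 62.5, 60.0, 57.5],
    "cycle 2": [65.0, 70.0, 62.5, 67.5],
    "cycle 3": [72.5, 75.0, 70.0, 77.5],
})
for name, avg, delta in trend:
    change = "" if delta is None else f" ({delta:+.1f} vs. previous)"
    print(f"{name}: mean SUS {avg:.1f}{change}")
```

The deltas are what the qualitative feedback then has to explain: a rising mean with complaints about speed, for example, points at which improvement to validate next.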

Focused application

The method was applied to a newly developed system to validate the approach and observe change across three cycles.

Findings

Once experience was measured consistently, patterns became visible. Usability could be tracked across cycles instead of being inferred from anecdotal complaints. System performance, including speed and reliability, had a clear effect on user perception. Qualitative feedback helped explain why scores shifted from one cycle to the next.

This changed the conversation. Instead of focusing only on isolated issues, teams could start looking at measurable trends over time.

What became visible

  • Usability trends across cycles
  • The relationship between system performance and perception
  • The reasons behind score movement
  • A repeatable basis for future UX evaluation

Results

SUS scores improved across three cycles, demonstrating that usability could be measured consistently and that changes had a visible effect over time. Although the broader cross-tool initiative was later paused, the work proved its value as a model for measuring and improving Writer Experience.

Baseline established

A structured starting point replaced anecdotal understanding.

Improvement tracked

Scores moved across three cycles, making change measurable.

Feedback explained movement

Qualitative input clarified why scores improved or stalled.

Approach validated

The system proved that Writer Experience could be measured and improved systematically.

Experience is not just feedback. When structured correctly, it becomes a signal system that can be measured, tracked, and improved over time.