Writers were working across multiple tools, but the overall experience was understood only through scattered signals. Issues appeared in tickets, questions surfaced in Slack, and frustrations were known anecdotally, but there was no baseline, no shared measurement method, and no reliable way to track improvement over time.
This made it difficult to answer simple but important questions: Was the experience improving? Which changes had the biggest effect? Where should teams focus next?
We needed a way to turn fragmented feedback into a clear, measurable view of the Writer Experience. That meant creating a structured approach that could establish a usability baseline, evaluate experience consistently, capture context behind the scores, and track progress across multiple cycles.
The goal was not simply to gather more feedback. The goal was to create a system that made experience visible enough to guide prioritization and support iterative improvement.
I helped shape a structured approach to measuring Writer Experience that combined standardized usability measurement, direct user feedback, and repeated evaluation cycles, turning scattered signals into a repeatable evaluation loop.
The System Usability Scale (SUS) was used to establish a baseline and provide a consistent framework for comparison across cycles; its standard scoring is sketched below.
Direct user input added explanatory detail, helping reveal what was influencing the scores.
Repeated measurement made it possible to track trends and improvement over time.
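For reference, a SUS score is derived from ten 1–5 Likert items using a fixed rule: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the resulting 0–40 sum is multiplied by 2.5 to land on a 0–100 scale. A minimal Python sketch of that standard calculation (the function name and sample responses are illustrative, not part of the project):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses, each an integer from 1 (strongly disagree)
    to 5 (strongly agree)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        # Odd-numbered (positively worded) items contribute r - 1;
        # even-numbered (negatively worded) items contribute 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

# One respondent's answers to the ten SUS items:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```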
The method was applied to a newly developed system to validate the approach and observe change across three cycles.
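To make cross-cycle tracking concrete, here is a minimal sketch of how per-cycle mean SUS scores and cycle-over-cycle change could be summarized. The data shape and the numbers are invented for illustration and are not the project's actual results:

```python
from statistics import mean

def cycle_summary(scores_by_cycle):
    """Report the mean SUS score per evaluation cycle and the
    change relative to the previous cycle. `scores_by_cycle`
    maps a cycle label to that cycle's individual SUS scores
    (a hypothetical data shape; adapt to how responses are stored)."""
    previous = None
    for cycle, scores in scores_by_cycle.items():
        avg = mean(scores)
        delta = f"{avg - previous:+.1f}" if previous is not None else "baseline"
        print(f"{cycle}: mean SUS {avg:.1f} (n={len(scores)}, {delta})")
        previous = avg

# Invented sample data -- three cycles, four respondents each.
cycle_summary({
    "Cycle 1": [62.5, 70.0, 57.5, 65.0],
    "Cycle 2": [70.0, 72.5, 67.5, 75.0],
    "Cycle 3": [77.5, 80.0, 72.5, 82.5],
})
```

Summarizing per cycle rather than per complaint is what makes the trend visible: each cycle produces one comparable number plus a delta, so improvement (or stagnation) can be read directly.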
Once experience was measured consistently, patterns became visible. Usability could be tracked across cycles instead of being inferred from anecdotal complaints. System performance, including speed and reliability, had a clear effect on user perception. Qualitative feedback helped explain why scores shifted from one cycle to the next.
This changed the conversation. Instead of focusing only on isolated issues, teams could start looking at measurable trends over time.
SUS scores improved across three cycles, showing that usability could be measured consistently and that changes had a visible effect over time. Even though the broader cross-tool initiative was later paused, the work demonstrated clear value as a model for measuring and improving Writer Experience.
A structured starting point replaced anecdotal understanding.
Scores moved across three cycles, making change measurable.
Qualitative input clarified why scores improved or stalled.
The system proved that Writer Experience could be measured and improved systematically.
Experience is not just feedback. When structured correctly, it becomes a signal system that can be measured, tracked, and improved over time.