My work sits at the intersection of content, evaluation, and workflow design. I build frameworks, tools, and measurement approaches that help teams understand what’s working, what isn’t, and where to improve.
I design systems that help teams evaluate and improve complex content and AI-driven experiences.
My background spans technical content, analytics, process design, and internal tools. I’ve built evaluation frameworks, reporting systems, and content-quality approaches that turn messy, ambiguous problems into structured, usable systems.
Earlier in my career, I focused on DITA XML, publishing workflows, and business intelligence. That foundation now informs my work in AI evaluation, content operations, and measurement design.
About Me

I work at the intersection of content, systems, and teams to turn ambiguous problem spaces into structured, repeatable solutions.
My experience includes designing workflows, defining standards, building internal tools, and creating measurement approaches that help teams improve quality and performance over time.
I’m especially drawn to problems where user experience, structured information, and system behavior all intersect.
A selection of work focused on AI evaluation, content systems, and measurement design. These projects show how I turn scattered signals and ambiguous problems into structured approaches that teams can use.
Designed a repeatable approach for measuring Writer Experience using the System Usability Scale (SUS), qualitative feedback, and cycle-based evaluation to track improvement over time.
Built an internal framework and application for evaluating AI-generated responses through structured scoring, test cycles, and reporting.
Defined practical content guidelines and evaluation thinking to improve how technical documentation is retrieved, interpreted, and reused by AI systems.