I led a pilot to evaluate Oxygen Positron — an AI-assisted authoring add-on for Oxygen XML Editor — within a structured DITA content environment. The work went beyond testing a tool: I designed prompt-based workflows, built an evaluation framework, identified real operational constraints, and shaped a strategic decision about where AI fit in our authoring ecosystem.
Technical writers were under mounting pressure from frequent release cycles while working in structured DITA environments with strict content requirements. The industry conversation around AI was loud, but very little of it translated into something practical for writers working at the document level.
The question wasn't whether AI could generate text. It was whether AI could produce structured, valid, insertable content that actually fit into an enterprise authoring workflow — and whether that was sustainable at scale.
Rather than simply testing the tool, I built a small system around it — designing repeatable prompt workflows, establishing evaluation criteria, and collecting structured feedback from pilot participants.
The pilot produced real findings — not just about Positron, but about what AI-assisted authoring requires at an organizational level.
Positron required Oxygen Enterprise licenses, creating a cost barrier before any writer could participate.
Underlying model updates broke existing prompts, introducing silent regressions with no clear ownership path.
Routing Positron through our internal AI proxy broke parts of its built-in functionality, surfacing 404 errors and disabling features.
Every prompt required design, testing, and ongoing maintenance. No governance model existed for managing a prompt library at scale.
The pilot revealed that AI-assisted authoring at enterprise scale requires sustained investment: in prompt governance, tooling maintenance, evaluation infrastructure, and organizational readiness. Given the pace of change in AI and competing priorities, the decision was made to defer full Positron adoption in favor of a lighter-weight approach — a custom AI chat interface built directly into our CMS, with lower overhead and tighter control.
Defer Positron rollout. Build lightweight AI tooling inside the CMS with lower overhead and clearer ownership.
Each prompt requires design, testing, and ongoing maintenance. Scaling prompt-driven workflows means accepting a real operational cost — one that needs ownership and governance to survive.
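What "ownership and governance" means in practice can be made concrete. The sketch below is illustrative only, assuming Python; the field names and the helper are hypothetical, not part of any tool we used. The idea is that every prompt in a library carries an owner, a version, and regression samples that can be re-run after a model update to catch the silent breakage described above.

```python
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    """One governed entry in a prompt library (illustrative; fields are hypothetical)."""
    prompt_id: str
    owner: str                  # who fixes it when a model update breaks it
    version: str                # bumped on every prompt or model change
    template: str               # prompt text with {placeholders}
    regression_inputs: list = field(default_factory=list)  # sample inputs to re-run
    expected_markers: list = field(default_factory=list)   # strings output must contain

def check_regression(entry: PromptEntry, run_model) -> list:
    """Re-run sample inputs and report which expected markers are missing.

    `run_model` is an injected callable (no model client is assumed here).
    Returns a list of (input, missing_marker) failures; an empty list means
    the prompt still behaves as expected after a model update.
    """
    failures = []
    for sample in entry.regression_inputs:
        output = run_model(entry.template.format(**sample))
        for marker in entry.expected_markers:
            if marker not in output:
                failures.append((sample, marker))
    return failures
```

Run nightly or after any model change, a check like this turns "the model update broke our prompts" from an anecdote a writer reports into a failure a named owner is paged for.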
In DITA, "good text" is not enough. Output must be valid, insertable, and context-aware. AI tools built for general writing struggle at this boundary without significant scaffolding.
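That boundary can be enforced mechanically. As an illustrative sketch (standard-library Python only; the element list and helper are my own, not part of Positron or Oxygen), a pre-insertion gate can at minimum reject output that is not well-formed XML or whose root element is not allowed at the insertion point:

```python
import xml.etree.ElementTree as ET

# Elements we might allow at a given insertion point (illustrative subset of DITA).
ALLOWED_ROOTS = {"p", "note", "step", "shortdesc", "codeblock"}

def insertable(ai_output: str, allowed=ALLOWED_ROOTS):
    """Gate AI output before it is inserted into a DITA topic.

    Returns (ok, reason). This checks only well-formedness and the root
    element; full DITA validation (DTD/RNG, specialization-aware) would
    require a real validator such as the one built into Oxygen.
    """
    try:
        root = ET.fromstring(ai_output)
    except ET.ParseError as e:
        return False, f"not well-formed XML: {e}"
    if root.tag not in allowed:
        return False, f"<{root.tag}> is not insertable here"
    return True, "ok"
```

Even this thin gate catches the two most common failure modes we saw: fluent prose with unbalanced tags, and structurally valid markup that the target context simply does not permit.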
The capability existed. The system around it didn't. Without evaluation infrastructure, governance, and tooling stability, even strong AI output cannot be reliably operationalized.
The question was never whether AI could write. It was whether the organization was ready to maintain the system that makes AI output trustworthy and repeatable.