What I Learned Building an EHR Pathway Template with AI
I built a 7-layer vendor-agnostic EHR pathway template for acute heart failure. It has SNOMED CT codes, NICOR audit mapping, HL7v2/FHIR integration specs, a DCB0160 clinical safety case, and 8 synthetic test patients. It took about 3 hours of my time across two sessions. Without AI tools, this would have been 2-4 weeks of consulting work.
Here's what I learned about what AI can and can't do in clinical informatics.
What the template actually is
An acute heart failure pathway isn't a simple flowchart. It's a multi-layered specification that tells an NHS Trust exactly how to configure their EHR system to support the clinical pathway from admission to discharge and beyond.
The 7 layers:
- Clinical Pathway - the clinician-facing route map, with 12 decision points that need local governance sign-off
- Governance Decision Log - those 12 decisions, structured for sign-off by clinical director, medical director, and HF lead
- Clinical Informatics Spec - data model, SNOMED coding strategy, NICOR quality indicator calculations, GP transfer coding
- Technical Build Spec - 15 EHR forms, 13 decision rules in pseudocode, 12 alerts, 10 auto-calculations
- Integration Spec - 8 system-to-system data flows: ambulance, lab, echo, pharmacy, NICOR, GP, virtual ward, point-of-care testing
- Testing & Validation - 8 synthetic patients covering all phenotypes, 30 validation queries, clinical safety hazard log
- Patient Materials Spec - 9 patient-facing documents with EHR triggers
Each layer references the others. The SNOMED codes in Layer 3 feed the decision rules in Layer 4. The integration flows in Layer 5 depend on the data model in Layer 3. The test patients in Layer 6 exercise the decision rules in Layer 4. It's not a collection of documents: it's an interlocking system.
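Those cross-layer dependencies can be checked mechanically. Here is a minimal sketch of the idea in Python — note that the layer contents, rule names, and SNOMED codes below are illustrative placeholders, not the template's actual data model:

```python
# Sketch: mechanical cross-layer consistency check.
# All codes and rule names are illustrative placeholders.

# Layer 3: data model - SNOMED codes the pathway records
data_model_codes = {"84114007", "10001000", "10002000"}

# Layer 4: decision rules, each referencing SNOMED codes
decision_rules = {
    "rule_bnp_triage": {"84114007"},
    "rule_echo_order": {"10001000", "10002000"},
}

# Layer 6: test patients, each listing the rules they exercise
test_patients = {
    "patient_01": {"rule_bnp_triage"},
    "patient_02": {"rule_echo_order"},
}

def check_cross_references():
    errors = []
    # Every code a decision rule references must exist in the data model
    for rule, codes in decision_rules.items():
        for code in codes - data_model_codes:
            errors.append(f"{rule} references unknown code {code}")
    # Every decision rule must be exercised by at least one test patient
    exercised = set().union(*test_patients.values())
    for rule in decision_rules.keys() - exercised:
        errors.append(f"{rule} has no test patient coverage")
    return errors

print(check_cross_references())  # empty list when the layers are consistent
```

The point isn't this particular script: it's that an interlocking specification is amenable to automated consistency checks, which is exactly the kind of tedium AI and simple tooling handle well.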
What AI did well
Structure and cross-referencing
Clinical pathway specifications are structurally complex. You're mapping between clinical guidelines (NICE, ESC, GIRFT), coding standards (SNOMED CT, ICD-10, Read codes), audit requirements (NICOR NHFA), quality frameworks (QOF), and technical standards (HL7v2, FHIR). Keeping all of those aligned across 7 layers is where most human error creeps in.
AI is genuinely good at this. Give it the right source material and it will maintain cross-references that would take a human hours of checking and rechecking.
Boilerplate with precision
A lot of clinical informatics documentation is structured repetition. Hazard log entries follow a pattern. Integration specs follow a pattern. Test patients follow a pattern. AI turns these from hours of template-filling into minutes.
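Treating that repetition as data plus a template, rather than hand-typed prose, is what makes it fast. A rough sketch — the field names follow a generic hazard-log shape, not the DCB0160 form itself, and the hazards are invented examples:

```python
# Sketch: hazard log entries as data + template instead of hand-typing.
# Hazards are invented examples; fields are a generic shape, not DCB0160's.

hazards = [
    ("H01", "Alert fatigue from duplicate BNP alerts",
     "Suppress repeat alerts within 24h"),
    ("H02", "Wrong-patient order via integration feed",
     "Positive patient ID check on inbound messages"),
]

ENTRY = """Hazard {hid}: {description}
  Mitigation: {mitigation}
  Residual risk: to be assessed at governance review
"""

log = "\n".join(
    ENTRY.format(hid=h, description=d, mitigation=m) for h, d, m in hazards
)
print(log)
```

Whether the template is filled by a script or by an AI, the structure is the same: the human supplies the hazards and mitigations, the pattern does the rest.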
Research synthesis
I pointed the AI at NICE CG187 and NG106, ESC 2021 and 2023 guidelines, GIRFT benchmarks, and the NICOR NHFA v5.0 dataset specification. It synthesised the areas of agreement, flagged the 12 places where guidelines conflict, and identified 15 gaps that need local governance decisions.
A human would find these too, but it would take days of careful reading and comparison. The AI did it in one pass.
What AI can't do
Clinical judgement
The template has 12 "DECISION REQUIRED" boxes. These are places where the evidence is ambiguous, guidelines disagree, or local factors matter. The AI identified these correctly but couldn't resolve them, because they require clinical governance decisions made by real clinicians who know their Trust's context.
For example: when to initiate sacubitril/valsartan in-hospital versus waiting for outpatient titration. NICE and ESC disagree. Your local pharmacy formulary has an opinion. Your heart failure nurses have a workflow. No AI can make that call.
Verification
After building the template, I ran 5 separate verification checks against the source guidelines. The AI got most things right, but it made confident-sounding errors in a few places: wrong NICOR field numbers, a QOF indicator that had been retired, a SNOMED code that was close but not quite right.
The uncomfortable truth
AI doesn't know when it's wrong. It presents incorrect SNOMED codes with the same confidence as correct ones. Clinical informatics work must be verified against primary sources. The AI accelerates the work, but someone with domain knowledge must check it.
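Part of that verification can itself be mechanised: AI-generated codes can be checked against locally held copies of the primary reference sets before a human reviews what's left. A minimal sketch, assuming the reference data has been loaded from the official SNOMED CT release and the current QOF indicator list (the values below are stand-ins, including one deliberately wrong SNOMED code and one deliberately inactive QOF indicator):

```python
# Sketch: screen AI output against primary reference sets.
# Reference data here is a stand-in; in practice it would be loaded
# from the official SNOMED CT release and the current QOF indicator list.

snomed_reference = {"84114007": "Heart failure (disorder)"}
active_qof_indicators = {"QOF_A", "QOF_B"}  # placeholder indicator IDs

ai_output = [
    {"type": "snomed", "value": "84114007"},  # valid
    {"type": "snomed", "value": "84114008"},  # near-miss: one digit off
    {"type": "qof", "value": "QOF_X"},        # not in the active set
]

def verify(items):
    findings = []
    for item in items:
        if item["type"] == "snomed" and item["value"] not in snomed_reference:
            findings.append(f"Unknown SNOMED code: {item['value']}")
        if item["type"] == "qof" and item["value"] not in active_qof_indicators:
            findings.append(f"QOF indicator not active: {item['value']}")
    return findings

for finding in verify(ai_output):
    print(finding)
```

A check like this catches exactly the error classes I found — near-miss codes and retired indicators — but it only narrows the list; the clinical review still has to happen.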
Context and politics
Every NHS Trust has workarounds, local policies, and historical decisions baked into their systems. A pathway template is a starting point. The real work is adapting it to the mess of reality: the EPR that doesn't support conditional alerting, the lab system that sends results in a non-standard format, the consultant who refuses to use electronic prescribing.
No template covers this. No AI can either. This is where consulting expertise lives.
The value question
Billed by the hour, 3 hours of work is a trivial invoice. But the output is equivalent to 2-4 weeks of specialist consulting time. The value isn't in the hours: it's in the output.
This is the shift that AI tools create. A single person with domain expertise and AI tools can produce work that previously required a team. The bottleneck moves from production to verification and decision-making.
For clinical informatics specifically, that means:
- The 90% of pathway documentation that is structured, cross-referenced, and evidence-based can be AI-assisted
- The 10% that requires clinical judgement, local context, and governance sign-off cannot
- The verification step is non-negotiable, and currently requires someone who can read both the clinical guidelines and the technical specifications
What I'd do differently
I'd start with verification. I built the full 7-layer template first, then verified it. Next time I'd verify each layer as it's built, catching errors before they propagate into downstream layers.
I'd also involve a second clinical reviewer earlier. Solo work with AI is fast, but clinical safety documentation specifically benefits from a second pair of expert eyes, not because the AI makes obvious mistakes, but because the subtle ones are the dangerous ones.
The bottom line
AI tools make clinical informatics consulting dramatically faster. They don't make it easier, because the hard parts were never the typing. The hard parts are knowing what to build, knowing what the guidelines actually say, knowing where they disagree, and making defensible decisions in the gaps.
If you don't have that domain knowledge, AI tools won't give it to you. If you do, they'll let you work at a pace that would have been impossible three years ago.