How Medical Necessity Documentation Differs From Clinical Notes
There's a common assumption in clinical practices: good chart notes equal good prior authorization (PA) support. If the documentation is thorough, the payer should be able to see the case. That assumption gets biologic PAs denied every day.
Clinical notes and medical necessity documentation serve completely different purposes: they're written for different audiences, evaluated against different criteria, and structured around different goals. Understanding that gap is the first step to closing it.
What Clinical Notes Are Actually For
Clinical notes exist for care continuity. They're a record of what happened — what the clinician observed, what the patient reported, what was assessed, what was planned. The SOAP format (Subjective, Objective, Assessment, Plan) was designed for the next clinician who picks up that chart, not for an insurance reviewer.
Good clinical notes are specific about findings, appropriately detailed about history, and written in the language of clinical medicine. They document the patient's story chronologically. They're honest about uncertainty. They capture the full picture because that's what safe care requires.
Payers don't need the full picture. They need an argument.
What Medical Necessity Documentation Is Actually For
A prior authorization submission is a persuasive document. Its job is to demonstrate — in language that maps directly to a payer's coverage criteria — that this specific treatment is medically necessary for this specific patient under this payer's specific policy.
The audience is a utilization reviewer who may spend four to six minutes on your submission. That reviewer has a checklist. Each checkbox corresponds to a criterion in the payer's Local Coverage Determination (LCD) or coverage policy. Your documentation either checks those boxes or it doesn't.
Clinical completeness doesn't help if the relevant clinical facts aren't surfaced clearly. A ten-page chart note with a buried DAS28 score fails the same way as a chart note with no score at all — the reviewer can't reconstruct your argument from raw clinical data.
Why Copying Clinical Notes Into PA Submissions Fails
The copy-paste approach is the most common shortcut practices take, and it fails for three structural reasons.
Wrong narrative structure. Clinical notes tell a story chronologically. PA submissions need to build a logical argument: here's the diagnosis, here's the severity, here's what failed before, here's why this treatment is appropriate now. The structure isn't a timeline — it's a case.
Wrong level of specificity on the wrong things. A clinical note might spend three paragraphs on a detailed symptom history that a payer doesn't require, while having one line about step therapy that a payer requires in explicit detail. Copying the note imports that imbalance directly into the submission.
Missing payer-specific framing. Every payer publishes coverage criteria that use specific language, reference specific scoring tools, and require specific evidence thresholds. A clinical note wasn't written against those criteria. A medical necessity letter needs to be.
The AMA's 2023 prior authorization survey found that 94% of physicians reported PA delays that led to care delays, and 80% said denials were due to administrative rather than clinical reasons. That's the copy-paste tax.
The Structure of Effective Medical Necessity Documentation
High-approval PA submissions have a recognizable architecture. It's not universal — payers vary, drugs vary, indications vary — but the bones are consistent.
Diagnosis statement with specific ICD-10 codes. Not M06.9 (rheumatoid arthritis, unspecified) — the specific seropositive or seronegative code with joint involvement detail that the payer's policy actually references. The diagnosis section should use payer-recognized codes and leave no ambiguity about what's being treated.
Disease severity with validated scoring. Payers don't accept "moderate-to-severe" as a subjective clinical impression. They want DAS28 for RA, the Harvey-Bradshaw Index for Crohn's disease, PASI for plaque psoriasis, and HAQ-DI for psoriatic arthritis. The score needs to be current — most payers want data within 30 to 90 days — and it needs to meet the threshold the policy specifies.
Step therapy history as an explicit timeline. Drug name, dose, start date, end date, documented reason for discontinuation. Not "patient failed multiple prior treatments." Each prior treatment gets its own entry with specific failure criteria: inadequate response documented with objective data where possible, and adverse events documented with the specific event, not just the conclusion that one occurred.
Direct address of coverage criteria. The most effective submissions structure their content around the payer's own policy language. If the policy says the patient must have tried and failed two TNF inhibitors before a JAK inhibitor is approved, the submission should explicitly state which two TNF inhibitors were tried, when, for how long, and how they failed.
Prescriber attestation in appropriate language. Many payers require specific physician statements about clinical judgment. These need to be written to match the payer's template or policy language, not whatever the prescriber's default attestation template says.
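For practices that track submissions in software, the architecture above can be modeled as a structured record with a pre-submission completeness check. This is an illustrative sketch only: the field names, the `.9`-suffix heuristic for "unspecified" codes, and the two-treatment threshold are hypothetical assumptions, not any payer's actual schema or policy.

```python
from dataclasses import dataclass, field

@dataclass
class PriorTreatment:
    drug: str
    start_date: str      # ISO dates, e.g. "2024-01-15"
    end_date: str
    failure_reason: str  # documented reason for discontinuation

@dataclass
class PASubmission:
    icd10_code: str                 # specific code, not an "unspecified" one
    severity_tool: str              # e.g. "DAS28"
    severity_score: float
    score_date: str                 # most payers want data within 30-90 days
    step_therapy: list[PriorTreatment] = field(default_factory=list)
    attestation: str = ""

def missing_elements(sub: PASubmission, min_prior_treatments: int = 2) -> list[str]:
    """Flag structural gaps before submission. Mirrors the shape of a
    reviewer's checklist, not any specific payer's policy."""
    gaps = []
    if sub.icd10_code.endswith(".9"):   # crude proxy for "unspecified" codes
        gaps.append("unspecified ICD-10 code")
    if len(sub.step_therapy) < min_prior_treatments:
        gaps.append("step therapy history incomplete")
    for t in sub.step_therapy:
        if not t.failure_reason:
            gaps.append(f"no documented failure reason for {t.drug}")
    if not sub.attestation:
        gaps.append("prescriber attestation missing")
    return gaps
```

Run against a submission that still uses M06.9 with no prior treatments logged, and the check surfaces exactly the gaps a reviewer would deny on.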
What Payers Are Actually Evaluating
Utilization reviewers at most major commercial payers work from structured review tools. CMS publishes its LCD criteria publicly. Commercial payers publish coverage policies on their provider portals. These documents are not secret — but most practices don't pull them before writing a submission.
The review process is more algorithmic than most clinicians expect. Automated systems screen diagnosis codes before a human ever looks at the submission. The wrong ICD-10 code specificity can trigger an automated denial before a reviewer opens the file. Once it reaches a human reviewer, they're working through a checklist against the policy, not making an independent clinical judgment about whether the treatment makes sense.
This is worth sitting with for a moment. The clinical appropriateness of the treatment isn't what the reviewer is evaluating. They're evaluating whether your documentation satisfies their policy criteria. Those are different questions, and a lot of practices conflate them.
The KFF's 2023 Medicare Advantage analysis found that 7% of PA requests were denied at the initial review stage, with many reversals on appeal — suggesting the documentation quality at submission is doing more work than the clinical merits in a significant share of cases.
The Translation Problem
Every clinician who's spent hours on a PA appeal already knows intuitively what this post is spelling out: there's a translation gap between clinical language and payer language. The clinical case exists. It's in the chart. It's legitimate. But getting it into a form the payer can act on requires a different kind of document than what clinical documentation produces.
Solving that translation problem manually takes time most practices don't have. Pulling the payer policy, identifying the coverage criteria, mapping your clinical data to those criteria, drafting a submission that addresses each criterion explicitly — for a single biologic PA, that process can take 30 to 60 minutes when done properly.
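The manual workflow above — pull the policy, extract the criteria, map clinical data to each one — reduces to a matching loop. A minimal sketch, assuming hypothetical criteria and an invented patient record; a real policy would come from the payer's published coverage document:

```python
# Hypothetical coverage criteria as (label, predicate) pairs.
# These thresholds are illustrative, not taken from any real policy.
criteria = [
    ("Validated severity score within 90 days",
     lambda p: p["das28"] >= 3.2 and p["das28_age_days"] <= 90),
    ("Failed at least 2 TNF inhibitors",
     lambda p: len(p["failed_tnf_inhibitors"]) >= 2),
    ("Specific (not unspecified) ICD-10 diagnosis",
     lambda p: not p["icd10"].endswith(".9")),
]

# Illustrative clinical data already captured in the chart.
patient = {
    "das28": 5.1,
    "das28_age_days": 42,
    "failed_tnf_inhibitors": ["adalimumab", "etanercept"],
    "icd10": "M05.79",
}

# Each criterion is addressed explicitly -- the structure of the output
# mirrors the structure of the submission, one section per criterion.
for label, met in criteria:
    status = "MET" if met(patient) else "NOT MET - address explicitly"
    print(f"{label}: {status}")
```

The point of the sketch is the shape, not the predicates: the submission is organized around the payer's criteria, with the clinical data slotted in as evidence for each one.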
That's what AI-assisted PA documentation tools were built to address. Not to replace clinical judgment, but to automate the research and translation layer — pulling the current payer policy, identifying what criteria need to be documented, and generating a submission that addresses those criteria with the clinical data you've already captured.
Luma doesn't reformat your clinical notes. It builds payer-focused documentation from scratch, structured around the coverage criteria for the specific payer and drug, using the clinical information you provide as inputs. That's a different product category than a clinical documentation tool — because it's solving a different problem.
The administrative burden of prior authorization is documented extensively in the research literature. Practices spend an average of 13 hours per physician per week on PA work. A meaningful portion of that time is the translation gap — taking solid clinical cases and reformatting them into payer-ready documentation.
Better documentation processes close that gap at the source. The clinical case doesn't get easier to make — but the submission gets built to the right standard from the start.
Sources:
1. AMA 2023 Prior Authorization Physician Survey — ama-assn.org
2. CMS Medicare Coverage Database — cms.gov/medicare-coverage-database
3. KFF Medicare Advantage Prior Authorization Analysis 2023 — kff.org
4. Health Affairs — Administrative Burden of Prior Authorization — healthaffairs.org
5. ACR — Clinical Practice Guidelines for RA Disease Activity Measures — rheumatology.org