What “complete documentation” actually means
A nine-element checklist for documents that hold up to use.
Walk into any engineering organization and ask the team whether their documentation is complete. The answer is rarely “yes.” More often, it’s a shrug, or “complete enough,” or a vague acknowledgment that the wiki has drifted.
Here’s the uncomfortable truth: most documentation isn’t incomplete because people are lazy or careless, or because teams are understaffed. It’s incomplete because “complete” is never defined. Teams write docs to a vague standard, readers consume them expecting a vague standard, and the gap between what the document says and what the reader needs to know never gets named.
A document is complete not when it covers everything — no document ever does — but when it is honest about what it covers, what it doesn’t, and what the reader needs to do with the gaps. That honesty can be decomposed into a concrete checklist. Nine elements, each of which either appears or doesn’t. If any element is missing, the document is incomplete, regardless of how many pages it has or how much effort went into it.
This checklist applies to any professional document: training manuals, security audits, architecture docs, API references, business rules inventories, incident runbooks. The names change; the underlying discipline doesn’t.
Constraint disclosure
The document states, up front, what it was based on — and what it did not have access to.
A security audit written from source code without production logs is a fundamentally different artifact than one written with runtime telemetry. A training manual written from screen observations is different from one written with end-user interviews. The reader cannot calibrate the document unless the document tells them what it was working from.
One sentence at the top of an assessment — "This assessment is based on the application source code. It does not include runtime performance data, user research, or operational incident history" — tells a reader more about the document's trustworthiness than the next thirty pages.
The failure mode: documents that read as if the author had access to everything, when they had access to only some things. The reader trusts conclusions that were drawn from incomplete inputs.
Anti-fabrication commitment
The document commits, explicitly, to not inventing content.
This sounds like it shouldn't need to be stated. It absolutely does — especially as more documentation is assembled by tools rather than written by humans. Plausible-sounding content that isn't supported by the source is worse than a missing section. A missing section tells the reader to go find this elsewhere. Fabricated content tells them nothing is wrong.
An anti-fabrication commitment is concrete: the document says that no workflow, rule, feature, or behavior is described unless it is evidenced in the source material. A reader who encounters a suspicious claim can then trust that it came from somewhere specific — and ask where.
The failure mode: documents that "fill in the gaps" with general knowledge rather than source evidence. Often invisible until someone tries to follow a workflow that doesn't actually exist.
The three-tier marker system
Every claim falls into one of three tiers: documented, inferred, or not documented.
"Documented" means the source material states it directly. No marker needed — this is the default.
"Inferred" means the document is reasoning from patterns, conventions, or indirect evidence. Marked with something like: "(inferred from the framework's convention — verify against the actual route configuration)." The reader knows to check.
"Not documented" means the reader needs this information but the source doesn't provide it. Marked with something like: "[Not documented — ask the DBA team: what is the SLA for this endpoint?]" The reader knows to act.
Without markers, everything in the document looks equally authoritative. The paragraph that describes a real workflow sits next to the paragraph that guessed at one, and the reader has no way to tell them apart.
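The markers also make a document auditable. As a minimal sketch, assuming the two marker phrasings shown above, a few lines of Python can tally how much of a document is inferred or missing rather than documented:

    import re
    import sys

    # Marker conventions assumed from the examples above; adjust the
    # patterns to whatever phrasing your team standardizes on.
    INFERRED = re.compile(r"\(inferred\b[^)]*\)", re.IGNORECASE)
    NOT_DOCUMENTED = re.compile(r"\[not documented\b[^\]]*\]", re.IGNORECASE)

    def tally(text):
        """Count marked claims; everything unmarked is presumed documented."""
        return {
            "inferred": len(INFERRED.findall(text)),
            "not documented": len(NOT_DOCUMENTED.findall(text)),
        }

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8") as f:
            counts = tally(f.read())
        for tier, n in counts.items():
            print(f"{tier}: {n}")

The exact patterns are assumptions, not a standard. The point is that once the markers are consistent, the ratio of documented to inferred to missing claims becomes a number you can track from draft to draft.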
The failure mode: uniform authority. Strong claims and weak claims presented with the same confidence.
Scope disclosure
The document states what it covers — and, equally importantly, what it does not.
A codebase has thirty screens. The document covers five of them. That is not an incomplete document — it's a legitimate scope decision. But it must be stated: "This assessment covers the Dashboard, Settings, Billing, Admin, and User Profile screens. Screens and features outside this list are not in scope."
Scope disclosure prevents the most common documentation failure: the reader assuming the document is comprehensive when it is partial. Every missing section then becomes an unstated question rather than an acknowledged boundary.
The failure mode: partial documents formatted as if they were comprehensive. The reader reads to the end and assumes they have the full picture.
Visual placeholders
Where images would clarify the content, the document marks their absence explicitly.
Training manuals need screenshots. Architecture documents need diagrams. Process maps need flowcharts. When a document cannot produce these — because the author doesn't have rendering access, or because the images need to be created later — the gap should be labeled, not ignored.
"[Screenshot: the billing settings page showing the credit balance and payment method]" is better than silence. It tells the person finalizing the document what needs to be captured, and it tells the reader what visual would have helped if the document had one.
The failure mode: documents that describe a UI in words alone, leaving the reader unsure whether the author saw the actual interface.
Source attribution
Every claim is tied to a specific source.
In a technical document, this is inline: "Source: Dashboard screen (/dashboard)." In a user-facing document, it may live in an appendix mapping each chapter to the screens it was derived from. Either approach works. What doesn't work is unsourced assertion.
Attribution does two things. It forces discipline on the author — you cannot write what you cannot source. And it gives the reader a way to verify. A claim that says the export button is on the Reports screen is testable when the source is named; it is an article of faith when it isn't.
The failure mode: unsourced claims. The reader has no way to check, and the author had no mechanism to notice when they drifted from evidence.
Provenance disclosure
The document identifies how it was produced and what review happened before distribution.
A document written by a senior engineer who knows the system cold is different from a document assembled from a checklist by a contractor unfamiliar with the codebase. A document generated by a tool is different from either. These are all legitimate production methods — but they are different, and readers calibrate their trust differently once they know.
Provenance disclosure is not confessional. It is a single sentence: "This document was produced by [method]. Subject matter expert review is recommended before distribution." That sentence does not weaken the document. It sets correct expectations.
The failure mode: documents that obscure how they were made, leading readers to over- or under-trust them.
Remediation specificity
Every gap is actionable.
"Not documented — check with the team" is not actionable. "Not documented — ask the DevOps lead: what is the uptime SLA for this service? Insert the answer in the Availability Requirements section below" is actionable.
A gap marker should contain three pieces: who to ask, what specifically to ask them, and where the answer should go. Without these, the document accumulates a wall of unresolved TODOs that nobody can close because nobody knows where to start.
Specificity turns documentation from a static artifact into a living punch list. The person reviewing the document can pick up each gap, know exactly who to contact, know exactly what information to request, and know exactly where to put the answer. That is the difference between a document that gets finished and one that doesn't.
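Whether a marker carries all three pieces is itself checkable. A deliberately crude heuristic, written against the example phrasings above (the keywords are assumptions, not a standard):

    def is_actionable(marker):
        """Rough check that a gap marker names who to ask, poses a
        concrete question, and says where the answer belongs."""
        lowered = marker.lower()
        has_who = "ask the" in lowered
        has_what = "?" in marker
        has_where = "insert the answer" in lowered or "section" in lowered
        return has_who and has_what and has_where

    # The two markers from the paragraph above:
    assert not is_actionable("[Not documented -- check with the team]")
    assert is_actionable(
        "[Not documented -- ask the DevOps lead: what is the uptime SLA "
        "for this service? Insert the answer in the Availability "
        "Requirements section below]"
    )

A check like this belongs in whatever review step gates the document, so that vague markers bounce back to the author while the context is still fresh.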
The failure mode: vague gap markers. Every TODO becomes a research project.
Cross-boundary warnings
The document flags points where the workflow or scope extends beyond what's covered.
A user flow starts on the Dashboard (documented) and continues on the Checkout page (not documented). A security review covers the application layer but not the infrastructure layer. An architecture document describes the service but not the dependencies it calls into.
At every such boundary, the document should warn: "This workflow continues beyond the documented screens. Verify the complete flow with the engineering team before treating this as comprehensive." Without the warning, the reader assumes the document ended because the workflow ended.
The failure mode: documents that end mid-flow with no indication that more exists.
The underlying discipline
These nine elements are not decorations. They are load-bearing — each one prevents a specific failure mode in how the document gets read, used, and trusted. Remove any one and the document gets subtly but meaningfully worse. Without constraint disclosure, the reader misjudges trustworthiness. Without the anti-fabrication commitment, invented content passes as real. Without the three-tier marker system, strong and weak claims look identical. Without scope disclosure, partial coverage passes as complete. Without visual placeholders, readers can’t tell whether the author ever saw the interface they describe. Without source attribution, claims can’t be verified. Without provenance disclosure, readers can’t calibrate how much to trust the document. Without remediation specificity, gaps never get closed. Without cross-boundary warnings, truncated workflows pass as whole ones.
What they have in common is honesty. Each element forces the document to be honest about a specific dimension — what it knows, what it doesn’t, where it came from, how much of the territory it covers, how reliable any given claim is.
Documentation is hard because truth-telling at this level of rigor is hard. It requires the author to separate what they know from what they’re inferring from what they’re guessing — a distinction that is almost never native to how we write. But the distinction is what separates documents that hold up to use from documents that look good on the shelf.
At Inkwell Forge, we apply this checklist to every deliverable we generate. It’s the single biggest difference between automated documentation that’s useful and automated documentation that’s a liability. But the checklist itself is independent of any tool. Apply it to anything — a runbook you’re writing by hand, a wiki page someone else updated last year, a vendor-supplied API reference.
If any of the nine elements is missing, the document is incomplete.
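A first pass over that rule can even be scripted. The signatures below are hypothetical phrasings, one per element, meant to be replaced with whatever wording your documents actually standardize on:

    import re
    import sys

    # Heuristic signatures for each element. Every phrasing here is an
    # assumption; tune the patterns to your own conventions.
    ELEMENTS = {
        "constraint disclosure": r"is based on|did not (have|include)",
        "anti-fabrication commitment": r"unless it is evidenced",
        "three-tier markers": r"\(inferred|\[not documented",
        "scope disclosure": r"not in scope|outside this list",
        "visual placeholders": r"\[(screenshot|diagram|flowchart):",
        "source attribution": r"source:",
        "provenance disclosure": r"was produced by|review is recommended",
        "remediation specificity": r"ask the [^:]+:",
        "cross-boundary warnings": r"continues beyond|verify the complete flow",
    }

    def audit(text):
        """Return the checklist elements with no detectable trace."""
        lowered = text.lower()
        return [name for name, pattern in ELEMENTS.items()
                if not re.search(pattern, lowered)]

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8") as f:
            missing = audit(f.read())
        for name in missing:
            print(f"missing: {name}")
        sys.exit(1 if missing else 0)

A passing audit doesn't prove completeness; it only shows that each element left a recognizable trace. A failing one, though, is a reliable signal that something on the checklist was skipped, and human review remains the backstop either way.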
See the checklist applied to your codebase
Inkwell Forge generates professional documentation with all nine elements enforced. Start free, no credit card required.
