Verification Before Completion
Run verification commands and confirm output before claiming success
Category: development
Source: mrgoonie/claudekit-skills
What Is Verification Before Completion?
The "Verification Before Completion" skill is a rigorous discipline for software development workflows that mandates explicit evidence before any claim of success. Whether fixing a bug, passing a test suite, or completing a feature, this approach requires developers to run verification commands and present their outputs before asserting that a task is done. The core principle is simple but uncompromising: no task can be marked as complete without fresh, direct verification evidence. This skill is applicable to all programming languages and is particularly relevant before creating pull requests, committing code, or claiming that an issue has been resolved.
Why Use Verification Before Completion?
In software development, premature or unsupported claims of completion can lead to unreliable code, undetected bugs, and a loss of trust within teams. Relying on memory, assumptions, or outdated results is not only risky but can be considered dishonest in a professional context. Verification Before Completion eliminates these pitfalls by institutionalizing evidence-driven practices.
The benefits of this approach include:
- Increased reliability: Every claim is based on recent, verifiable evidence, reducing the likelihood of undetected errors.
- Greater transparency: The verification process is explicit and shareable, enabling team members and reviewers to quickly validate work.
- Faster debugging: When issues arise, fresh verification artifacts provide immediate context, accelerating diagnosis and resolution.
- Cultural consistency: Enforcing evidence before claims helps build a culture of accountability and technical rigor.
How to Get Started
Implementing Verification Before Completion requires integrating its "Gate Function" into your development workflow. The process is as follows:
- Identify: Determine the command or process that will verify your claim. For example, if claiming "all tests pass," the command might be pytest or npm test.
- Run: Execute the identified verification command just before making the claim. This must always be a fresh run, not a cached or remembered result.
- Read: Carefully inspect the full output, paying attention to exit codes, failure counts, and error messages.
- Verify: Assess whether the output genuinely supports your claim. If it does not, clearly state the actual status with the supporting evidence.
- Only Then Claim: Make your claim, but only if the evidence is conclusive and up-to-date.
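To make the gate concrete, here is a minimal sketch in Python. The verify_claim helper and the pytest command are illustrative assumptions, not part of the skill itself; the point is that the claim is printed only after a fresh run whose output and exit code have been read:

```python
import subprocess

def verify_claim(claim: str, command: list[str]) -> bool:
    """Run a fresh verification command; report the claim only if the evidence supports it."""
    # Run: always a fresh execution, never a cached or remembered result.
    result = subprocess.run(command, capture_output=True, text=True)

    # Read: inspect the full output and the exit code.
    print(result.stdout)
    print(result.stderr)

    # Verify: the exit code must actually support the claim.
    if result.returncode != 0:
        print(f"NOT verified: {claim} (exit code {result.returncode})")
        return False

    # Only then claim, with the evidence attached.
    print(f"Verified: {claim}")
    print(f"Evidence: $ {' '.join(command)} -> exit 0")
    return True

if __name__ == "__main__":
    # Identify: the command that proves "all tests pass" is the test runner itself.
    verify_claim("all tests pass", ["pytest", "tests/"])
```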
Example:
Suppose you are fixing a bug and want to claim it is resolved. Instead of simply stating, "Bug is fixed," you should:
## Step 1: Identify
## The symptom was a failing test: test_bug_repro
## Step 2: Run
pytest tests/test_bug_repro.py
## Output:
## =================== test session starts ===================
## collected 1 item
#
## tests/test_bug_repro.py . [100%]
#
## ==================== 1 passed in 0.03s ====================
## Step 3: Read
## Output shows 1 passed, 0 failed.
## Step 4: Verify
## The original symptom was a failing test. Now it passes.
## Step 5: Only Then Claim
Bug fixed. Evidence:
$ pytest tests/test_bug_repro.py
=================== 1 passed in 0.03s ====================
Key Features
The Verification Before Completion skill enforces the following key features:
Universal Applicability: Works with any language or stack, as the core requirement is command-based evidence.
Output-Centric: Focuses on actual command outputs, not intentions, assumptions, or prior runs.
Explicit Process: Outlines a clear, step-by-step gate function that must be followed before making any claim.
Table of Common Failures: Helps clarify what counts as sufficient evidence for different claims. For example:
| Claim | Requires | Not Sufficient |
| --- | --- | --- |
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test of original symptom: passes | Code changes only |

Strict Enforcement: Skipping any step in the verification process is treated as a violation of the rule, undermining both its letter and spirit.
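One way to put this table to work, not prescribed by the skill itself, is to keep an explicit mapping from each claim to the command whose output substantiates it, so the required evidence always comes from a fresh run:

```python
import subprocess

# Illustrative mapping from claims to the commands that substantiate them.
# The specific tools (pytest, ruff, python -m build) are assumptions;
# substitute your project's own test, lint, and build commands.
CLAIM_COMMANDS = {
    "Tests pass": ["pytest"],
    "Linter clean": ["ruff", "check", "."],
    "Build succeeds": ["python", "-m", "build"],
}

def verify_all() -> bool:
    """Run every verification command and report which claims are actually supported."""
    all_ok = True
    for claim, command in CLAIM_COMMANDS.items():
        result = subprocess.run(command, capture_output=True, text=True)
        supported = result.returncode == 0
        status = "VERIFIED" if supported else f"FAILED (exit {result.returncode})"
        print(f"{claim}: {status}")
        all_ok = all_ok and supported
    return all_ok
```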
Best Practices
- Always run verification commands immediately before making a claim. Never rely on memory or previous runs.
- Include full command output and relevant exit codes in communications, commit messages, and pull requests.
- Be explicit: If verification fails, state the result and attach the evidence. Never hide or omit failures.
- Automate where possible: Integrate verification steps into CI pipelines or pre-commit hooks to reduce manual effort and increase consistency; a minimal hook sketch follows this list.
- Review evidence, not just claims: When evaluating a colleague’s work, require and check the verification evidence before accepting a task as complete.
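As a sketch of the automation point above, a git pre-commit hook can refuse the commit unless a fresh verification run passes. It is written here in Python under the assumption that pytest is the project's verification command; save it as .git/hooks/pre-commit and make it executable:

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/pre-commit hook: the commit is aborted unless the
# verification command passes right now, so stale or assumed results cannot slip in.
import subprocess
import sys

result = subprocess.run(["pytest", "--quiet"], capture_output=True, text=True)
print(result.stdout)
print(result.stderr, file=sys.stderr)

if result.returncode != 0:
    print("Commit blocked: verification failed (see output above).", file=sys.stderr)
    sys.exit(result.returncode)

print("Verification passed; allowing commit.")
```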
Important Notes
- Partial checks are insufficient. Running a subset of tests or skipping slow verifications undermines the goal of comprehensive validation.
- Verification must be fresh. Cached, remembered, or assumed results are not acceptable. Rerun the commands every time.
- Output must be unambiguous. If output is unclear or incomplete, your claim cannot be considered verified.
- Honesty is paramount. Any deviation from the process—whether by omission or shortcut—is considered a breach of professional integrity.
- Documentation matters: Keep a record of verification commands and their outputs as part of your project’s workflow to support audits and postmortems.
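In the spirit of the documentation note above, a small helper can append each verification run, its command, its output, and its exit code to a project log. This is a sketch under the assumption that a plain-text log is acceptable; the verification_log.txt name is illustrative:

```python
import subprocess
from datetime import datetime, timezone

def log_verification(command: list[str], log_path: str = "verification_log.txt") -> int:
    """Run a verification command and append its evidence (timestamp, command, output, exit code) to a log file."""
    result = subprocess.run(command, capture_output=True, text=True)
    with open(log_path, "a") as log:
        log.write(f"--- {datetime.now(timezone.utc).isoformat()} ---\n")
        log.write(f"$ {' '.join(command)}\n")
        log.write(result.stdout)
        log.write(result.stderr)
        log.write(f"exit code: {result.returncode}\n\n")
    return result.returncode
```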
Verification Before Completion is more than a technical rule—it is a professional standard. By demanding evidence before claims, it raises the bar for reliability, trust, and software quality across the development lifecycle.