A feature can pass every quality gate and ship completely unmeasurable.
The standard CI/CD pipeline answers one question: is the code correct and safe to deploy? Unit tests verify logic. Integration tests verify system behavior. Security scans verify compliance. Health checks verify the deployment succeeded.
None of these gates ask whether the deployment is observable against the outcome it was built to achieve. The three outcome gates address this. They run after the quality gates and before (or at) the final deployment job.
These gates do not replace existing quality gates. They add a new layer on top of them.
The standard pipeline asks: is the code correct? The outcome gates ask: is this deployment observable against the outcome it was built to achieve?
Gate Overview
| Gate | Question | Blocks deploy? |
|---|---|---|
| 1. Instrumentation Gate | Are the required telemetry events present in the codebase? | Yes |
| 2. Validation Gate | Does a pre-ship validation record exist with a signed conclusion? | Yes, for P0/P1 bets |
| 3. Confirmation Integration Gate | Is the bet-to-deployment linkage recorded in the confirmation system? | No (it writes a record) |
The Outcome Definition Registry
All three gates depend on a registry that maps deployments to bets. Before building the gates, build the registry. The minimum viable registry is a JSON file in the repository.
```json
{
  "bets": [
    {
      "id": "bet-2026-q1-onboarding",
      "name": "Enterprise onboarding time reduction",
      "classification": "P1",
      "required_events": [
        "enterprise_account_checklist_item_completed",
        "enterprise_account_first_integration_completed",
        "enterprise_account_90_day_active"
      ],
      "measurement_window_open": "2026-02-14",
      "measurement_window_close": "2026-03-28",
      "success_threshold": "90-day retention from 41% to 47%",
      "confirmation_owner": "sarah.kim@company.com",
      "validation_record_ref": "validation/2026-q1-onboarding-validation.md"
    }
  ]
}
```
Keep the registry in the repository at a consistent path. Version control it. Treat it with the same discipline as your CI/CD config. The platform team owns the schema. The hypothesis owner (PM or equivalent) owns each entry. The entry must exist and be committed before the bet enters development.
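Because the platform team owns the schema, a lightweight schema check in CI keeps entries well-formed before any gate runs. A minimal sketch (field names are taken from the registry example above; the exact required-field list is an assumption about your schema):

```python
# Fields every registry entry must carry (assumed schema; adjust to your own)
REQUIRED_FIELDS = [
    "id", "name", "classification", "required_events",
    "measurement_window_open", "measurement_window_close",
    "success_threshold", "confirmation_owner",
]

def check_registry(registry):
    """Return (bet_id, missing_field) pairs for malformed entries."""
    problems = []
    for bet in registry.get("bets", []):
        for field in REQUIRED_FIELDS:
            if field not in bet:
                problems.append((bet.get("id", "<no id>"), field))
    return problems

# A truncated entry fails the check; in CI you would exit non-zero instead
incomplete = {"bets": [{"id": "bet-2026-q1-onboarding",
                        "name": "Enterprise onboarding time reduction"}]}
print(check_registry(incomplete)[:2])
```

Run it as a pre-merge step so a half-filled entry is rejected at review time, not discovered at deploy time.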
Gate 1: The Instrumentation Gate
Checks whether the telemetry events listed in the bet's registry entry are present in the codebase. Runs after all quality gates, before the deploy job. Does not check whether events fire correctly - that is a correctness concern handled by tests. The gate checks existence.
Implementation
```yaml
jobs:
  instrumentation-gate:
    docker:
      - image: cimg/python:3.11
    steps:
      - checkout
      - run:
          name: Check instrumentation for active bets
          command: |
            python3 scripts/check_instrumentation.py \
              --registry .outcomes/registry.json \
              --src src/
```
The check script, `scripts/check_instrumentation.py`:
```python
import argparse
import json
import subprocess
import sys
from datetime import date

def load_registry(path):
    with open(path) as f:
        return json.load(f)

def is_active(bet):
    # ISO-formatted dates compare correctly as strings
    today = date.today().isoformat()
    return bet["measurement_window_open"] <= today <= bet["measurement_window_close"]

def check_event_in_src(event_name, src_path):
    # Existence check only; whether events fire correctly is covered by tests
    result = subprocess.run(
        ["grep", "-r", event_name, src_path],
        capture_output=True, text=True
    )
    return result.returncode == 0

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--registry", required=True)
    parser.add_argument("--src", required=True)
    args = parser.parse_args()
    registry = load_registry(args.registry)
    active_bets = [b for b in registry["bets"] if is_active(b)]
    failures = []
    for bet in active_bets:
        for event in bet["required_events"]:
            if not check_event_in_src(event, args.src):
                failures.append({"bet": bet["id"], "missing_event": event})
    if failures:
        print("INSTRUMENTATION GATE FAILED")
        print("The following events are required by active bets but not found in the codebase:")
        for f in failures:
            print(f"  Bet: {f['bet']} | Missing event: {f['missing_event']}")
        print("")
        print("This deployment cannot proceed until these events are wired.")
        print("If this bet is no longer active, update the registry.")
        sys.exit(1)
    print(f"Instrumentation gate passed. {len(active_bets)} active bet(s) checked.")

if __name__ == "__main__":
    main()
```
What the gate produces
On pass: a log line confirming the number of active bets checked and all events found. On fail: a list of missing events, the bet IDs they belong to, and a hard exit code that stops the pipeline. The deployment does not proceed.
Common questions
What if a developer adds events but does not update the registry?
The gate will pass. The problem is in the registry, not the gate. Registry discipline is enforced by the process: a bet cannot enter development without a registry entry. If a developer ships events not in a registry entry, those events are not measured against anything.
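If unregistered events are a recurring problem, a non-blocking advisory report can surface them. A sketch, assuming your event names share a greppable prefix (the `enterprise_account_` prefix here comes from the example registry; your naming convention will differ):

```python
import re

def find_orphan_events(source_text, registry, prefix="enterprise_account_"):
    """Report events present in source but not claimed by any registry entry.

    Advisory only: print the result, never exit non-zero.
    The prefix convention is an assumption -- adapt to your own event naming.
    """
    registered = {e for bet in registry["bets"] for e in bet["required_events"]}
    found = set(re.findall(rf"{re.escape(prefix)}\w+", source_text))
    return sorted(found - registered)

registry = {"bets": [{"required_events": ["enterprise_account_90_day_active"]}]}
src = 'track("enterprise_account_90_day_active"); track("enterprise_account_trial_started")'
print(find_orphan_events(src, registry))
```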
What if the event name is in a constants file, not the source directly?
Extend the grep to include constants and configuration files, or require that event names be defined in a specific location the gate can check.
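Widening the check is a small change to `check_event_in_src`: grep several locations and accept a hit in any of them. A sketch (the path list is a placeholder for wherever your constants actually live):

```python
import subprocess

def check_event_anywhere(event_name, paths):
    """True if the event name appears in any of the given paths.

    Pass e.g. ["src/", "config/", "constants/"] -- placeholders for
    the locations where event names are defined in your codebase.
    """
    for path in paths:
        # grep exits 0 on a match, 1 when nothing is found
        result = subprocess.run(
            ["grep", "-r", event_name, path],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True
    return False
```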
What about monorepos with multiple services?
Scope the check to the service being deployed. Pass the service path as a parameter and filter the registry by service tag.
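Filtering by service tag assumes each registry entry carries a `service` field, which the example schema above does not yet include. A sketch of the filter:

```python
def bets_for_service(registry, service):
    """Select registry entries tagged for the service being deployed.

    Assumes a "service" field on each bet -- an addition to the
    example schema, needed once a monorepo splits by service.
    """
    return [b for b in registry["bets"] if b.get("service") == service]

registry = {"bets": [
    {"id": "bet-a", "service": "onboarding-api"},
    {"id": "bet-b", "service": "billing"},
]}
print([b["id"] for b in bets_for_service(registry, "onboarding-api")])
```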
Gate 2: The Validation Gate
Checks whether a validation record exists for this bet and contains a signed conclusion. Runs after the instrumentation gate, before the final deploy job. Required for P0 and P1 bets. For P2 bets, the gate runs as a warning, not a blocker.
What a validation record is
A validation record is a document produced before the feature ships to production users, confirming that pre-ship evidence supports (or does not support) the directional hypothesis. It can be a staged-environment test result, a prototype user test with a defined protocol and conclusion, a shadow-mode run result, or a small-cohort A/B test result.
It does not need to prove the hypothesis. It needs to provide a plausible path. The record must contain:
- A description of the validation method
- The evidence produced
- A directional conclusion: proceed, do not proceed, or proceed with modifications
- A signature - the name of the person who reviewed the evidence and signed off
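The gate implementation below checks only the signature line. A stricter variant could require all four elements, using the section markers from the record format in this chapter. A sketch:

```python
# Section markers from the validation record format used in this chapter
REQUIRED_MARKERS = [
    "**Validation method:**",   # the method
    "## Evidence",              # the evidence produced
    "## Conclusion",            # the directional conclusion
    "Signed-off-by:",           # the sign-off
]

def record_is_complete(content):
    """True only if every required section marker appears in the record."""
    return all(marker in content for marker in REQUIRED_MARKERS)

# A record with evidence and a signature but no stated method still fails
partial = "## Evidence\nStaged run looked fine.\nSigned-off-by: A. Reviewer"
print(record_is_complete(partial))
```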
Implementation
```yaml
jobs:
  validation-gate:
    docker:
      - image: cimg/python:3.11
    steps:
      - checkout
      - run:
          name: Check validation records for active P0/P1 bets
          command: |
            python3 scripts/check_validation.py \
              --registry .outcomes/registry.json
```
The check script, `scripts/check_validation.py`:
```python
import argparse
import json
import os
import sys
from datetime import date

def load_registry(path):
    with open(path) as f:
        return json.load(f)

def is_active(bet):
    today = date.today().isoformat()
    return bet["measurement_window_open"] <= today <= bet["measurement_window_close"]

def requires_validation(bet):
    return bet.get("classification") in ("P0", "P1")

def validation_record_exists(bet):
    ref = bet.get("validation_record_ref")
    return bool(ref) and os.path.exists(ref)

def validation_record_is_signed(bet):
    ref = bet.get("validation_record_ref")
    if not ref or not os.path.exists(ref):
        return False
    with open(ref) as f:
        content = f.read()
    return "Signed-off-by:" in content

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--registry", required=True)
    args = parser.parse_args()
    registry = load_registry(args.registry)
    failures = []
    for bet in registry["bets"]:
        if not is_active(bet):
            continue
        if not requires_validation(bet):
            # P2 bets: warn only, never block
            if not validation_record_exists(bet):
                print(f"WARNING: no validation record for P2 bet {bet['id']} (not blocking)")
            continue
        if not validation_record_exists(bet):
            failures.append({"bet": bet["id"], "reason": "No validation record found at: " + str(bet.get("validation_record_ref"))})
        elif not validation_record_is_signed(bet):
            failures.append({"bet": bet["id"], "reason": "Validation record exists but has no Signed-off-by line"})
    if failures:
        print("VALIDATION GATE FAILED")
        for f in failures:
            print(f"  Bet: {f['bet']} | {f['reason']}")
        sys.exit(1)
    print("Validation gate passed.")

if __name__ == "__main__":
    main()
```
Validation record format
Store validation records in .outcomes/validation/. Each record is a markdown file with the following structure:
```markdown
# Validation Record: [Bet ID]

**Bet:** [Bet name]
**Validation method:** [Prototype test / shadow mode / staged cohort / other]
**Date:** [YYYY-MM-DD]

## Evidence

[Description of what was tested, how, and what was observed]

## Conclusion

[Proceed / Do not proceed / Proceed with modifications]

[One to three sentences explaining the conclusion]

**Signed-off-by:** [Name, role]
**Date signed:** [YYYY-MM-DD]
```
Gate 3: The Confirmation Integration Gate
At deploy time, writes a record linking the deployment to its associated bets in the confirmation system. It does not block. It writes. Without this linkage, the confirmation evaluation becomes an archaeology project - six weeks after deployment, someone reconstructing which deploy corresponds to which bet loses a day before any analysis starts.
Implementation
```yaml
jobs:
  confirmation-integration:
    docker:
      - image: cimg/python:3.11
    steps:
      - checkout
      - run:
          name: Write confirmation integration record
          command: |
            python3 scripts/write_confirmation_record.py \
              --registry .outcomes/registry.json \
              --deploy-sha $CIRCLE_SHA1 \
              --deploy-time $(date -u +"%Y-%m-%dT%H:%M:%SZ") \
              --output .outcomes/deployments/
```
The script, `scripts/write_confirmation_record.py`:
```python
import argparse
import json
import os
from datetime import date

def load_registry(path):
    with open(path) as f:
        return json.load(f)

def is_active(bet):
    today = date.today().isoformat()
    return bet["measurement_window_open"] <= today <= bet["measurement_window_close"]

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--registry", required=True)
    parser.add_argument("--deploy-sha", required=True)
    parser.add_argument("--deploy-time", required=True)
    parser.add_argument("--output", required=True)
    args = parser.parse_args()
    registry = load_registry(args.registry)
    active_bets = [b for b in registry["bets"] if is_active(b)]
    if not active_bets:
        print("No active bets. No confirmation integration record written.")
        return
    os.makedirs(args.output, exist_ok=True)
    record = {
        "deploy_sha": args.deploy_sha,
        "deploy_time": args.deploy_time,
        "active_bets": [
            {
                "id": bet["id"],
                "name": bet["name"],
                "classification": bet["classification"],
                "required_events": bet["required_events"],
                "success_threshold": bet["success_threshold"],
                "measurement_window_open": bet["measurement_window_open"],
                "measurement_window_close": bet["measurement_window_close"],
                "confirmation_owner": bet["confirmation_owner"],
                "validation_record_ref": bet.get("validation_record_ref")
            }
            for bet in active_bets
        ]
    }
    filename = f"{args.deploy_sha[:8]}-{args.deploy_time[:10]}.json"
    output_path = os.path.join(args.output, filename)
    with open(output_path, "w") as f:
        json.dump(record, f, indent=2)
    print(f"Confirmation integration record written to {output_path}")
    print(f"Linked {len(active_bets)} active bet(s) to deploy {args.deploy_sha[:8]}")

if __name__ == "__main__":
    main()
```
The output files in .outcomes/deployments/ are committed back to the repository as part of the deploy job, or posted to an external confirmation system. The simplest implementation uses the repository. More mature implementations post to a shared system accessible to the confirmation owner without requiring repository access.
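Whichever storage you choose, the payoff comes at measurement time: the confirmation owner lists every deploy linked to a bet instead of reconstructing the mapping by hand. A sketch that scans committed records (directory layout as produced by the script above):

```python
import json
import os

def deploys_for_bet(records_dir, bet_id):
    """Collect (deploy_sha, deploy_time) for every record linking the bet."""
    hits = []
    for name in sorted(os.listdir(records_dir)):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(records_dir, name)) as f:
            record = json.load(f)
        # A deploy is relevant if the bet was active when it shipped
        if any(b["id"] == bet_id for b in record["active_bets"]):
            hits.append((record["deploy_sha"], record["deploy_time"]))
    return hits
```

Called with `deploys_for_bet(".outcomes/deployments/", "bet-2026-q1-onboarding")`, this answers in seconds the question that otherwise costs a day of archaeology.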
Pipeline Assembly
Gates 1 and 2 block deployment. Gate 3 runs after deployment. If Gate 3 fails (for example, the output directory is not writable), do not roll back the deployment. Log the failure and alert the confirmation owner. The deployment succeeded. The record-keeping failed. Those are different problems with different fixes.
```yaml
workflows:
  build-test-deploy:
    jobs:
      - build
      - unit-tests:
          requires: [build]
      - integration-tests:
          requires: [build]
      - security-scan:
          requires: [build]
      - instrumentation-gate:        # Gate 1
          requires: [unit-tests, integration-tests, security-scan]
      - validation-gate:             # Gate 2
          requires: [instrumentation-gate]
      - deploy:
          requires: [validation-gate]
      - confirmation-integration:    # Gate 3
          requires: [deploy]
```
Common failure modes
Making the gates advisory instead of blocking
A gate that emits a warning but does not fail the build is not a gate. It is a notification. If the instrumentation gate can be bypassed, it will be bypassed. The cost of bypassing it must be higher than the cost of wiring the events.
Putting the registry in a location the platform team owns
The registry must be owned by the people writing the bets. If only platform engineers can update it, it becomes a bottleneck and teams route around it. Store it in the application repository. Treat updates to it as part of the bet-writing process.
Scoping the instrumentation check too narrowly
If your events are defined in constants files, configuration files, or generated code, the grep-based implementation will miss them. Audit where events are defined in your codebase before writing the check script.
Running the validation gate for P2 bets
P2 bets are exploratory. Requiring a full validation record for low-stakes bets adds overhead without proportional benefit. The classification system exists for this reason. Use it.
Treating Gate 3 failures as deploy failures
Gate 3 is record-keeping. A failure in Gate 3 does not mean the feature is broken. Alert the confirmation owner, log the event, and fix the record-keeping separately.
Start writing your first bet
Copy the template as Markdown and paste it into your team's documentation tool. Fill it out before the next sprint begins.
Based on the framework in The Output Trap by JP LeBlanc
Free to use. No attribution required.