Brandedflow prompt-contracts
```shell
git clone https://github.com/JenCW/brandedflow
```

To install only this skill into your global skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/JenCW/brandedflow "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/prompt-contracts" ~/.claude/skills/jencw-brandedflow-prompt-contracts && rm -rf "$T"
```
.claude/skills/prompt-contracts/SKILL.md

Prompt Contracts
Define a 4-part contract before implementation: GOAL (quantifiable success), CONSTRAINTS (hard limits), FORMAT (exact output shape), FAILURE (explicit conditions that mean "not done"). The agent treats this as an engineering spec with zero ambiguity about what "done" means.
Why this works: Agents hallucinate and over-engineer when success is undefined. They silently cut corners when failure is undefined. A prompt contract front-loads all the reasoning about scope and edge cases. The FAILURE clause is the key innovation — it prevents the agent from taking shortcuts it would otherwise rationalize as acceptable.
Execution
1. Receive the task
Read the user's request. Determine whether they've already provided a contract or need help writing one.
If the user provides a contract: Parse it into the 4 sections and proceed to validation (step 3).
If the user provides a plain task: Help them convert it into a contract (step 2).
2. Generate the contract
If the user gave a plain task description, convert it into a structured contract. Use what you know from the codebase, the task description, and reasonable defaults.
Present the draft contract to the user for approval before proceeding.
Contract template:
```json
{
  "goal": "What does done look like — measurable, specific",
  "workspace": "path to the workspace this task runs in",
  "directive": "which directive governs this task (or null)",
  "skill": "which skill defines the method (or null)",
  "constraints": [
    "Hard limit 1 — what is NOT negotiable",
    "Hard limit 2"
  ],
  "allowed_paths": [
    "files/dirs the executor can write to — relative to repo root"
  ],
  "allowed_tools": ["Write", "Edit", "Bash"],
  "success_criteria": [
    "Measurable condition 1 that means done",
    "Measurable condition 2"
  ],
  "memory_to_update": [
    "which memory nodes to write after completion (or empty)"
  ]
}
```
The contract is saved to `.session-contract.json` at repo root. The pre-tool hook (P-01) blocks all Write/Edit/Delete calls until this file exists with a valid goal. `allowed_paths` is enforced by P-06; `allowed_tools` is enforced by P-05.
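The hook checks above can be approximated in a few lines. This is an illustrative sketch only: the real enforcement lives in the repo's hooks (pre-tool-hook.js), and the function name and messages here are assumptions, not the actual implementation.

```python
import json
from pathlib import Path

def check_tool_call(tool, target_path, contract_file=".session-contract.json"):
    """Approximate P-01 / P-05 / P-06: block writes until a valid contract
    exists, then enforce allowed_tools and allowed_paths. Illustrative only."""
    contract_path = Path(contract_file)
    if not contract_path.exists():
        return False, "P-01: no contract file; all writes blocked"
    contract = json.loads(contract_path.read_text())
    if not str(contract.get("goal", "")).strip():
        return False, "P-01: contract has no valid goal; blocked"
    if tool not in contract.get("allowed_tools", []):
        return False, f"P-05: tool '{tool}' not in allowed_tools"
    if tool in ("Write", "Edit") and not any(
        target_path.startswith(prefix) for prefix in contract.get("allowed_paths", [])
    ):
        return False, f"P-06: path '{target_path}' not under allowed_paths"
    return True, "allowed"
```

Note the fail-closed shape: a missing or goalless contract blocks everything, rather than defaulting to allow.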
Writing good GOAL statements:
- Include a number: "handles 50K req/sec" not "handles high traffic"
- Be specific: "returns results in <200ms p95" not "is fast"
- Define the user-visible outcome: "user can filter by date, status, and assignee" not "add filtering"
Writing good CONSTRAINTS:
- Only hard limits — things that are NOT negotiable
- Technology constraints: "no external dependencies", "must use existing ORM"
- Scope constraints: "under 200 lines", "single file", "no new database tables"
- Compatibility: "must work with Node 18+", "backwards compatible with v2 API"
Writing good FORMAT:
- Exact file structure: "single file: rate_limiter.py" not "a Python file"
- What to include: "type hints on all public methods", "5+ pytest tests"
- What to exclude: "no comments explaining obvious code", "no README"
Writing good FAILURE clauses:
This is the most important section. Think about how the task could "technically work" but actually be wrong:
- Missing edge case: "no test for empty input"
- Performance miss: "latency exceeds 1ms on synthetic load"
- Silent failure: "swallows errors without logging"
- Incomplete: "doesn't handle the concurrent access case"
- Over-engineered: "adds abstraction layers not required by GOAL"
3. Validate the contract
Before implementing, check that the contract is:
- Complete — all 4 sections filled out
- Consistent — CONSTRAINTS don't contradict GOAL
- Testable — every FAILURE condition can be mechanically verified
- Scoped — GOAL is achievable within the CONSTRAINTS
If anything is ambiguous, ask the user to clarify before proceeding.
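The completeness and testability checks can also be run mechanically. A minimal sketch: the field names follow the contract template above, but the heuristics (a number in the goal, minimum word counts) are illustrative assumptions, not rules the skill actually enforces.

```python
def validate_contract(contract):
    """Return a list of problems; an empty list means the step-3 checks pass."""
    problems = []
    # Complete: the core sections must be present and non-empty.
    for key in ("goal", "constraints", "success_criteria"):
        if not contract.get(key):
            problems.append(f"incomplete: missing or empty '{key}'")
    # Measurable: a good goal usually contains a number ("<200ms", "50K req/sec").
    goal = contract.get("goal", "")
    if goal and not any(ch.isdigit() for ch in goal):
        problems.append("goal has no number; is it measurable?")
    # Testable: a one-word criterion is rarely mechanically verifiable.
    for i, criterion in enumerate(contract.get("success_criteria", [])):
        if len(criterion.split()) < 3:
            problems.append(f"success_criteria[{i}] may be too vague to verify")
    return problems
```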
4. Implement against the contract
Build the solution. Treat the contract as a hard requirement:
- goal: The thing you're optimizing for
- constraints: Boundaries you cannot cross
- allowed_paths: Only write to these locations
- allowed_tools: Only use these tools
- success_criteria: Conditions you must achieve
- memory_to_update: What gets smarter after this task
As you implement, mentally check each success criterion. If you catch yourself about to violate a constraint, stop and fix it before moving on.
5. Self-verify against success criteria
Before delivering, verify every success criterion:
## Contract Verification
- [ ] success_criteria[0]: {criterion} → VERIFIED: {evidence}
- [ ] success_criteria[1]: {criterion} → VERIFIED: {evidence}
- [ ] goal met: {evidence}
- [ ] All constraints respected: {confirmation}
- [ ] Only allowed_paths written: {confirmation}
- [ ] Only allowed_tools used: {confirmation}
If any criterion is not met, fix it before delivering.
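One way to keep this honest is to generate the checklist from the contract itself, so no criterion can be silently skipped. A sketch, where `evidence` (criterion to proof string) is a hypothetical input shape, not something the skill defines:

```python
def verification_report(contract, evidence):
    """Render the per-criterion checklist; unproven criteria show NOT MET."""
    lines = ["## Contract Verification"]
    for i, criterion in enumerate(contract.get("success_criteria", [])):
        proof = evidence.get(criterion)
        status = f"VERIFIED: {proof}" if proof else "NOT MET"
        box = "x" if proof else " "
        lines.append(f"- [{box}] success_criteria[{i}]: {criterion} → {status}")
    return "\n".join(lines)
```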
6. Write contract file + deliver
Save the approved contract to `.session-contract.json` at repo root:

```shell
echo '{...}' > .session-contract.json
```
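If the file is generated programmatically, serializing with `json.dump` sidesteps shell-quoting mistakes that can leave the file as invalid JSON. A sketch; the function name is illustrative:

```python
import json

def write_contract(contract, path=".session-contract.json"):
    """Write the approved contract as valid, pretty-printed JSON."""
    with open(path, "w") as f:
        json.dump(contract, f, indent=2)
        f.write("\n")
```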
Then present the verification:
```
Contract status: ALL PASS
goal: ✓ {evidence}
constraints: ✓ {all respected}
success_criteria: ✓ {all verified}
allowed_paths: ✓ {only these written}
memory_to_update: {what will be written after completion}
```
If any criterion failed:
```
Contract status: 1 FAILED
goal: ✓
constraints: ✓
success_criteria: 1 of 3 failed
- FAILED: "{criterion}" — {what happened}
- Options: {what could fix it}
```
Example contracts
API endpoint:
```json
{
  "goal": "GET /users endpoint returning paginated results, handling 10K concurrent connections",
  "workspace": "5-build/portals/hub-api/",
  "directive": null,
  "skill": null,
  "constraints": ["Express.js + existing Prisma ORM", "No new dependencies", "Under 80 lines"],
  "allowed_paths": ["5-build/portals/hub-api/routes/", "5-build/portals/hub-api/controllers/", "5-build/portals/hub-api/__tests__/"],
  "allowed_tools": ["Write", "Edit", "Bash"],
  "success_criteria": ["Pagination via limit/offset", "400 on invalid input (not 500)", "Test for empty results", "Test for invalid page"],
  "memory_to_update": []
}
```
Brandedflow workspace task:
```json
{
  "goal": "Wire Cursor preToolUse hook so P-01 through P-08 fire in Cursor",
  "workspace": ".cursor/hooks/",
  "directive": "_archive/docs/audits/repo-governance-audit-triaged.md",
  "skill": null,
  "constraints": ["Cursor protocol: JSON stdout, not exit codes", "Same logic as pre-tool-hook.js", "failClosed: true"],
  "allowed_paths": [".cursor/hooks/", ".cursor/hooks.json"],
  "allowed_tools": ["Write", "Bash"],
  "success_criteria": ["P-01 blocks without contract", "P-05 blocks disallowed tools", "P-06 blocks disallowed paths", "Compliance log entries written"],
  "memory_to_update": ["CP-01 status → closed"]
}
```
When to use
- Infrastructure code (rate limiters, caches, queues)
- API endpoints and services
- Anything that will be hard to fix later
- Tasks where quality matters more than speed
- When you want to prevent "it technically works but..." outcomes
When NOT to use
- Quick prototypes ("just get something working")
- Exploratory tasks where requirements are still forming
- Trivial changes where a contract would be more text than the code itself
Configuration
| Parameter | Default | Description |
|---|---|---|
| verify | true | Run self-verification before delivering |
| strict | true | Fail the task if any FAILURE condition is triggered |
| template | standard | Contract template (standard, minimal, detailed) |
Edge cases
- User provides incomplete contract: Fill in missing sections with reasonable defaults and confirm with the user.
- FAILURE conditions conflict with GOAL: Flag the contradiction. Ask which takes priority.
- Can't verify a FAILURE condition: Note it as "UNVERIFIABLE" and explain why. Suggest how the user can verify manually.
- Contract is overkill for the task: If the task is trivial, say so. Suggest skipping the contract or using a minimal version (GOAL + FAILURE only).