git clone https://github.com/LightbridgeLab/CodeCannon
T=$(mktemp -d) && git clone --depth=1 https://github.com/LightbridgeLab/CodeCannon "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.gemini/skills/qa" ~/.claude/skills/lightbridgelab-codecannon-qa-e75496 && rm -rf "$T"
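If the copy succeeded, the skill manifest should now exist at the destination used in the command above. A quick sanity check (path taken from that command, nothing else assumed):

```bash
# Sanity check: the copied skill should contain its SKILL.md manifest.
ls ~/.claude/skills/lightbridgelab-codecannon-qa-e75496/SKILL.md
```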
.gemini/skills/qa/SKILL.md

Gemini CLI: This skill is triggered by description matching. State any arguments in your message. Sub-agent spawning is not supported — the automated review step in /submit-for-review must be done manually using the review-agent prompt in a separate session.
What /qa does

/qa has two modes:
- No argument — show all issues awaiting QA.
- Numeric argument — walk through QA review for a specific issue.
Mode A — No argument: QA queue view
Run:
gh issue list --label "ready-for-qa" --state open \
  --json number,title,url,labels,milestone,assignees
Display as a numbered list. For each issue show: issue number, title, milestone (if set), and URL.
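A minimal sketch of one way to render that queue, reusing the --json fields above; the exact output format is up to the assistant, and the --jq expression is only illustrative:

```bash
# Illustrative: number each ready-for-qa issue and show title, milestone, and URL.
gh issue list --label "ready-for-qa" --state open \
  --json number,title,url,milestone \
  --jq 'to_entries[] | "\(.key + 1). #\(.value.number) \(.value.title) (\(.value.milestone.title // "no milestone")) \(.value.url)"'
```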
If the list is empty, say:
"No issues are currently waiting for QA."
Do not take any other action.
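One hedged way to detect the empty case before printing anything, assuming the same label filter:

```bash
# Illustrative: count open ready-for-qa issues; print the stock message when there are none.
COUNT=$(gh issue list --label "ready-for-qa" --state open --json number --jq 'length')
if [ "$COUNT" -eq 0 ]; then
  echo "No issues are currently waiting for QA."
fi
```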
Mode B — Numeric argument: Review a specific issue
The argument is an issue number. Work through the following steps.
Step 1 — Load context
gh issue view <number> --json number,title,body,labels,url,comments
Display the issue title and body.
Check whether ready-for-qa is currently applied to the issue. If it is not present, warn:
"Issue #N does not have the ready-for-qa label. It may not be deployed to preview yet. Continue anyway? (yes/no)"
Wait. If the user says no, stop.
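A small sketch of that label check, assuming the issue number is held in a shell variable $NUMBER (a placeholder, not part of the skill):

```bash
# Illustrative: report whether the ready-for-qa label is currently applied.
gh issue view "$NUMBER" --json labels \
  --jq 'if ([.labels[].name] | index("ready-for-qa")) then "ready-for-qa present" else "ready-for-qa missing" end'
```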
Step 2 — Review prompt
Say:
"Test this feature on the preview environment. When you're done, describe what you tested, what you found, and your verdict (pass or fail). Include screenshots if you have them — paste image links or drag-and-drop into GitHub directly."
Stop. Wait for the QA person's input. Do not proceed until they provide a verdict.
Step 3 — Post findings as issue comment
Build a structured comment from their input:
## QA Review

**Verdict:** PASS (or FAIL)
**What was tested:** <their description>
**Findings:** <their findings, or "None — feature works as expected.">
**Screenshots:** <image links if provided, or "None attached.">
Show the comment to the user before posting. Ask:
"Post this to issue #N? (yes/no)"
Wait for confirmation. On yes:
gh issue comment <number> --body "<comment>"
On no, stop — do not post and do not apply labels.
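Because the comment body is multi-line Markdown, one way to post it without quoting headaches is to write it to a temporary file and use --body-file. The file path, $NUMBER, and the filled-in values below are placeholders, not part of the skill:

```bash
# Illustrative: build the confirmed comment in a file, then post it verbatim.
cat > /tmp/qa-review.md <<'EOF'
## QA Review

**Verdict:** PASS
**What was tested:** <their description>
**Findings:** None — feature works as expected.
**Screenshots:** None attached.
EOF
gh issue comment "$NUMBER" --body-file /tmp/qa-review.md
```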
Step 4 — Apply verdict label
Based on the verdict:
PASS:
gh issue edit <number> --add-label "qa-passed" --remove-label "ready-for-qa"
FAIL:
Before applying labels, fetch the issue's current assignees:
gh issue view <number> --json assignees --jq '.assignees[].login' 2>/dev/null
If one or more assignees are found, prepend the following line to the body of the comment that was already posted, above the ## QA Review heading:

cc @<assignee> — this issue needs attention.

If there are multiple assignees, include one cc @<login> line per assignee. If the command errors or returns no results, omit the line silently.
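A hedged sketch of this cc step follows. It assumes $NUMBER holds the issue number, that the Step 3 comment is the most recent comment on the issue, that each cc line carries the same suffix, and that the posted comment is edited through the GitHub REST API via gh api; none of these details come from the skill itself:

```bash
# Illustrative only: build one cc line per assignee; stay silent on error or no assignees.
CC_LINES=$(gh issue view "$NUMBER" --json assignees \
  --jq '.assignees[].login | "cc @\(.) — this issue needs attention."' 2>/dev/null || true)

if [ -n "$CC_LINES" ]; then
  # Assume the Step 3 comment is the most recent comment on the issue.
  COMMENT_ID=$(gh api "repos/{owner}/{repo}/issues/$NUMBER/comments" --jq '.[-1].id')
  OLD_BODY=$(gh api "repos/{owner}/{repo}/issues/$NUMBER/comments" --jq '.[-1].body')
  # Prepend the cc lines above the existing "## QA Review" heading.
  NEW_BODY="${CC_LINES}
${OLD_BODY}"
  gh api --method PATCH "repos/{owner}/{repo}/issues/comments/$COMMENT_ID" \
    -f body="$NEW_BODY" >/dev/null
fi
```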
gh issue edit <number> --add-label "qa-failed" --remove-label "ready-for-qa"
If the gh issue edit command fails because a label does not exist in the repo, report the error and say:

"The label does not exist in this repository. Create it in GitHub Settings → Labels, then re-run /qa <number> from Step 4."
Do not retry automatically.
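If the labels genuinely do not exist yet, a maintainer can also create them from the CLI instead of the web UI. The colors and descriptions below are arbitrary examples, not values the skill requires:

```bash
# Illustrative: create the labels this workflow relies on.
gh label create "ready-for-qa" --color "fbca04" --description "Deployed to preview; awaiting QA"
gh label create "qa-passed" --color "0e8a16" --description "Feature passed QA review"
gh label create "qa-failed" --color "d73a4a" --description "Feature failed QA review"
```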
After applying labels, tell the user what was done:
- On PASS: "Issue #N marked as QA passed. The developer can now close it or promote to production."
- On FAIL: "Issue #N marked as QA failed. The developer will see the findings in the issue comments."
Hard rules
- Never close an issue — verdict recording and closure are separate concerns.
- Never post the comment without showing it to the user first.
- Never apply labels without user confirmation (the confirmation in Step 3 is the single gate for both posting the comment and applying labels).
- Never merge, deploy, or take any action beyond labeling and commenting.