Awesome-omni-skills iterate-pr

Workflow skill for iterating on a PR until CI passes. Use this skill when the user needs to fix CI failures, address review feedback, or continuously push fixes until all checks are green; it automates the feedback-fix-push-wait cycle. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

Install

Source · Clone the upstream repo:

git clone https://github.com/diegosouzapw/awesome-omni-skills

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/iterate-pr" ~/.claude/skills/diegosouzapw-awesome-omni-skills-iterate-pr && rm -rf "$T"

Manifest: skills/iterate-pr/SKILL.md

Source Content

Iterate on PR Until CI Passes

Overview

This public intake copy packages `plugins/antigravity-awesome-skills-claude/skills/iterate-pr` from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.

Continuously iterate on the current branch until all CI checks pass and review feedback is addressed. Requires: GitHub CLI (`gh`) authenticated. Important: all scripts must be run from the repository root directory (where `.git` is located), not from the skill directory; use the full path to each script via `${CLAUDE_SKILL_ROOT}`.

Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Bundled Scripts, Exit Conditions, Fallback, Limitations.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • Use this skill for tasks in its primary domain: iterating on a pull request until CI passes.
  • Use when the request clearly matches the imported source intent: fixing CI failures, addressing review feedback, or continuously pushing fixes until all checks are green via the automated feedback-fix-push-wait cycle.
  • Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
  • Use when provenance needs to stay visible in the answer, PR, or review packet.
  • Use when copied upstream references, examples, or scripts materially improve the answer.
  • Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.

Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | metadata.json | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | ORIGIN.md | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | SKILL.md | Starts with the smallest copied file that materially changes execution |
| Supporting context | SKILL.md | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | Related Skills section | Helps the operator switch to a stronger native skill when the task drifts |

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Identify the PR for the current branch; stop if none exists.
  2. Gather categorized review feedback from the PR.
  3. Handle feedback by LOGAF priority: auto-fix high and medium, prompt for low.
  4. Check CI status, waiting for actionable review bots that are still running.
  5. Fix CI failures at the root cause, not just the surface symptom.
  6. Verify fixes locally, then commit and push.
  7. Monitor CI and address new feedback in a polling loop.
  8. Repeat until all checks are green and all feedback is addressed.

Imported Workflow Notes

Imported: Workflow

1. Identify PR

gh pr view --json number,url,headRefName

Stop if no PR exists for the current branch.
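
A minimal sketch of this check; it relies on `gh pr view` exiting non-zero when the current branch has no open PR:

```python
import json
import subprocess

def current_pr() -> dict | None:
    """Return number, url, and headRefName for the current branch's PR, or None."""
    result = subprocess.run(
        ["gh", "pr", "view", "--json", "number,url,headRefName"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None  # no PR for this branch: stop the workflow here
    return json.loads(result.stdout)
```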

2. Gather Review Feedback

Run `${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py` to get categorized feedback already posted on the PR.

3. Handle Feedback by LOGAF Priority

Auto-fix (no prompt):

  • `high` - must address (blockers, security, changes requested)
  • `medium` - should address (standard feedback)

When fixing feedback:

  • Understand the root cause, not just the surface symptom
  • Check for similar issues in nearby code or related files
  • Fix all instances, not just the one mentioned

This includes review bot feedback (items with `review_bot: true`). Treat it the same as human feedback:

  • Real issue found → fix it
  • False positive → skip, but explain why in a brief comment
  • Never silently ignore review bot feedback — always verify the finding

Prompt user for selection:

  • `low` - present a numbered list and ask which to address:
Found 3 low-priority suggestions:
1. [l] "Consider renaming this variable" - @reviewer in api.py:42
2. [nit] "Could use a list comprehension" - @reviewer in utils.py:18
3. [style] "Add a docstring" - @reviewer in models.py:55

Which would you like to address? (e.g., "1,3" or "all" or "none")

Skip silently (a routing sketch covering these categories follows this list):

  • `resolved` threads
  • `bot` comments (informational only — Codecov, Dependabot, etc.)
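
A minimal routing sketch over the feedback script's output, assuming the JSON is keyed by the category names above; `fix_root_cause` and `prompt_user` are hypothetical helpers standing in for the actual fix and selection steps:

```python
def route_feedback(feedback: dict) -> None:
    # Auto-fix high and medium without prompting; review-bot items
    # (review_bot: true) ride along in these same buckets.
    for item in feedback.get("high", []) + feedback.get("medium", []):
        fix_root_cause(item)

    # Low-priority items are offered to the user as a numbered list.
    low = feedback.get("low", [])
    if low:
        for item in prompt_user(low):  # returns the subset the user picked
            fix_root_cause(item)

    # `resolved` threads and informational `bot` comments are skipped silently.
```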

Replying to Comments

After processing each inline review comment, reply on the PR thread to acknowledge the action taken. Only reply to items with a

thread_id
(inline review comments).

When to reply:

  • `high` and `medium` items — whether fixed or determined to be false positives
  • `low` items — whether fixed or declined by the user

How to reply: use the `addPullRequestReviewThreadReply` GraphQL mutation with `pullRequestReviewThreadId` and `body` inputs (a minimal sketch follows the format notes below).

Reply format:

  • 1-2 sentences: what was changed, why it's not an issue, or acknowledgment of declined items
  • End every reply with `\n\n*— Claude Code*`
  • Before replying, check whether the thread already has a reply ending with `*- Claude Code*` or `*— Claude Code*` to avoid duplicates on re-loops
  • If the `gh api` call fails, log and continue — do not block the workflow
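
A minimal sketch of one reply sent through `gh api graphql`; the mutation name and inputs come from the instructions above, while the error handling and dedupe note mirror the reply rules:

```python
import subprocess

REPLY_MUTATION = """
mutation($threadId: ID!, $body: String!) {
  addPullRequestReviewThreadReply(
    input: {pullRequestReviewThreadId: $threadId, body: $body}
  ) { comment { id } }
}
"""

def reply_to_thread(thread_id: str, message: str) -> None:
    # Caller should first confirm the thread has no existing reply ending
    # with "*- Claude Code*" or "*— Claude Code*" (re-loop dedupe).
    body = f"{message}\n\n*— Claude Code*"
    result = subprocess.run(
        ["gh", "api", "graphql",
         "-f", f"query={REPLY_MUTATION}",
         "-f", f"threadId={thread_id}",
         "-f", f"body={body}"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Per the reply rules: log and continue, never block the workflow.
        print(f"reply failed for thread {thread_id}: {result.stderr.strip()}")
```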

4. Check CI Status

Run `${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py` to get structured failure data.

Wait if pending: If review bot checks (sentry, warden, cursor, bugbot, seer, codeql) are still running, wait before proceeding—they post actionable feedback that must be evaluated. Informational bots (codecov) are not worth waiting for.
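
A minimal sketch of that wait decision, assuming pending checks are reported with `"status": "pending"` in the script's JSON (the schema below only shows pass/fail explicitly):

```python
REVIEW_BOTS = ("sentry", "warden", "cursor", "bugbot", "seer", "codeql")

def actionable_bots_pending(checks: list[dict]) -> bool:
    # True while any review bot that posts actionable feedback is still running;
    # informational bots like codecov are deliberately not matched here.
    return any(
        check["status"] == "pending"
        and any(bot in check["name"].lower() for bot in REVIEW_BOTS)
        for check in checks
    )
```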

5. Fix CI Failures

For each failure in the script output:

  1. Read the `log_snippet` and trace backwards from the error to understand WHY it failed — not just what failed
  2. Read the relevant code and check for related issues (e.g., if a type error in one call site, check other call sites)
  3. Fix the root cause with minimal, targeted changes
  4. Find existing tests for the affected code and run them. If the fix introduces behavior not covered by existing tests, extend them to cover it (add a test case, not a whole new test file)

Do NOT assume what failed based on check name alone—always read the logs. Do NOT "quick fix and hope" — understand the failure thoroughly before changing code.

6. Verify Locally, Then Commit and Push

Before committing, verify your fixes locally:

  • If you fixed a test failure: re-run that specific test locally
  • If you fixed a lint/type error: re-run the linter or type checker on affected files
  • For any code fix: run existing tests covering the changed code

If local verification fails, fix before proceeding — do not push known-broken code.

git add <files>
git commit -m "fix: <descriptive message>"
git push

7. Monitor CI and Address Feedback

Poll CI status and review feedback in a loop instead of blocking (a condensed sketch of this loop follows the list):

  1. Run `uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py` to get current CI status
  2. If all checks passed → proceed to exit conditions
  3. If any checks failed (none pending) → return to step 5
  4. If checks are still pending:
     a. Run `uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py` for new review feedback
     b. Address any new high/medium feedback immediately (same as step 3)
     c. If changes were needed, commit and push (this restarts CI), then continue polling
     d. Sleep 30 seconds, then repeat from sub-step 1
  5. After all checks pass, do a final feedback check: `sleep 10`, then run `uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py`. Address any new high/medium feedback — if changes are needed, return to step 6.
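
A condensed sketch of the loop, assuming both scripts print JSON to stdout and the feedback JSON is keyed by category; `fix_and_push` is a hypothetical stand-in for steps 5-6:

```python
import json
import subprocess
import time

def run_script(path: str) -> dict:
    result = subprocess.run(["uv", "run", path], capture_output=True,
                            text=True, check=True)
    return json.loads(result.stdout)

def poll(checks_script: str, feedback_script: str) -> None:
    while True:
        summary = run_script(checks_script)["summary"]
        if summary["pending"] == 0:
            if summary["failed"]:
                return                 # failures, nothing pending: back to step 5
            break                      # all green: final feedback re-check below
        feedback = run_script(feedback_script)
        if feedback.get("high") or feedback.get("medium"):
            fix_and_push(feedback)     # address, commit, push (restarts CI)
        time.sleep(30)                 # then repeat from the status check
    time.sleep(10)                     # let late reviews land
    final = run_script(feedback_script)
    if final.get("high") or final.get("medium"):
        fix_and_push(final)            # return to step 6 (verify, commit, push)
```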

8. Repeat

If step 7 required code changes (from new feedback after CI passed), return to step 2 for a fresh cycle. CI failures during monitoring are already handled within step 7's polling loop.

Imported: Bundled Scripts

scripts/fetch_pr_checks.py

Fetches CI check status and extracts failure snippets from logs.

uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py [--pr NUMBER]

Returns JSON:

{
  "pr": {"number": 123, "branch": "feat/foo"},
  "summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
  "checks": [
    {"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
    {"name": "lint", "status": "pass"}
  ]
}
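
A small consumer sketch over that JSON shape, pulling out the failures that step 5 works through; this assumes `CLAUDE_SKILL_ROOT` is exported in the environment so `expandvars` can resolve it:

```python
import json
import os
import subprocess

script = os.path.expandvars("${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py")
result = subprocess.run(["uv", "run", script],
                        capture_output=True, text=True, check=True)
data = json.loads(result.stdout)
for check in data["checks"]:
    if check["status"] == "fail":
        # run_id links back to the CI run; the snippet seeds the diagnosis
        print(check["name"], check.get("run_id"), check["log_snippet"])
```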

scripts/fetch_pr_feedback.py

Fetches and categorizes PR review feedback using the LOGAF scale.

uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py [--pr NUMBER]

Returns JSON with feedback categorized as:

  • `high` - Must address before merge (`h:`, blocker, changes requested)
  • `medium` - Should address (`m:`, standard feedback)
  • `low` - Optional (`l:`, nit, style, suggestion)
  • `bot` - Informational automated comments (Codecov, Dependabot, etc.)
  • `resolved` - Already resolved threads

Review bot feedback (from Sentry, Warden, Cursor, Bugbot, CodeQL, etc.) appears in `high`/`medium`/`low` with `review_bot: true` — it is NOT placed in the `bot` bucket.

Each feedback item may also include:

  • `thread_id` - GraphQL node ID for inline review comments (used for replies)

Examples

Example 1: Ask for the upstream workflow directly

Use @iterate-pr to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @iterate-pr against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @iterate-pr for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @iterate-pr using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

  • Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
  • Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
  • Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
  • Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
  • Treat generated examples as scaffolding; adapt them to the concrete task before execution.
  • Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in `plugins/antigravity-awesome-skills-claude/skills/iterate-pr`, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open `metadata.json`, `ORIGIN.md`, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated `SKILL.md`, but they cannot quickly tell which references, examples, or scripts matter for the current task.

Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.

Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Related Skills

  • @base - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @calc - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @draw - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @image-studio - Use when the work is better handled by that native specialization after this imported skill establishes context.

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| references | Copied reference notes, guides, or background material from upstream | references/n/a |
| examples | Worked examples or reusable prompts copied from upstream | examples/n/a |
| scripts | Upstream helper scripts that change execution or validation | scripts/n/a |
| agents | Routing or delegation notes that are genuinely part of the imported package | agents/n/a |
| assets | Supporting assets or schemas copied from the source package | assets/n/a |

Imported Reference Notes

Imported: Exit Conditions

Success: All checks pass, post-CI feedback re-check is clean (no new unaddressed high/medium feedback including review bot findings), user has decided on low-priority items.

Ask for help: Same failure after 2 attempts, feedback needs clarification, infrastructure issues.

Stop: No PR exists, branch needs rebase.

Imported: Fallback

If the scripts fail, use the `gh` CLI directly:

  • gh pr checks --json name,state,bucket,link
  • gh run view <run-id> --log-failed
  • gh api repos/{owner}/{repo}/pulls/{number}/comments

Imported: Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.