Skillshub anth-ci-integration
Install

source · Clone the upstream repo

```shell
git clone https://github.com/ComeOnOliver/skillshub
```

Claude Code · Install into ~/.claude/skills/

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/jeremylongshore/claude-code-plugins-plus-skills/anth-ci-integration" \
       ~/.claude/skills/comeonoliver-skillshub-anth-ci-integration \
  && rm -rf "$T"
```

Manifest: skills/jeremylongshore/claude-code-plugins-plus-skills/anth-ci-integration/SKILL.md

Source content
Anthropic CI Integration
Overview
Set up CI/CD pipelines that validate Claude API integrations with mock-based unit tests (free, fast) and prompt regression tests (live API, gated to main).
GitHub Actions Workflow
```yaml
# .github/workflows/claude-tests.yml
name: Claude API Tests
on: [push, pull_request]
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install anthropic pytest
      - run: pytest tests/unit/ -v  # No API key needed
  prompt-regression:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      # pytest-timeout is required for the --timeout flag below
      - run: pip install anthropic pytest pytest-timeout
      - run: pytest tests/prompt_regression/ -v --timeout=60
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```
Mock-Based Unit Tests
```python
# tests/unit/test_tool_routing.py
from unittest.mock import MagicMock, patch


def make_mock_message(text="Hello", stop_reason="end_turn"):
    """Build a MagicMock shaped like a Messages API response."""
    msg = MagicMock()
    msg.id = "msg_mock_123"
    msg.model = "claude-sonnet-4-20250514"
    msg.stop_reason = stop_reason
    block = MagicMock()
    block.type = "text"
    block.text = text
    msg.content = [block]
    msg.usage = MagicMock(input_tokens=100, output_tokens=50)
    return msg


@patch("anthropic.Anthropic")
def test_service_returns_text(MockClient):
    MockClient.return_value.messages.create.return_value = make_mock_message("42")
    from myapp.service import ask_claude
    assert ask_claude("What is 6*7?") == "42"
```
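The unit test above patches a hypothetical `myapp.service` module. A minimal sketch of what that service might look like, assuming the module path, model name, and `ask_claude` signature (none of which are fixed by the skill):

```python
# myapp/service.py -- hypothetical service module matching the unit test above.
def ask_claude(question: str) -> str:
    # Import at call time so a test-time mock of anthropic.Anthropic takes
    # effect regardless of import order.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=256,
        messages=[{"role": "user", "content": question}],
    )
    return msg.content[0].text
```

Keeping client construction inside the function (rather than at module import) is what lets the mock-based test run without any API key.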
Prompt Regression Tests
```python
# tests/prompt_regression/test_prompts.py
import json
import os

import anthropic
import pytest

pytestmark = pytest.mark.skipif(
    not os.getenv("ANTHROPIC_API_KEY"), reason="No API key"
)
# Guard construction so collection doesn't fail when the key is absent
client = anthropic.Anthropic() if os.getenv("ANTHROPIC_API_KEY") else None


def test_json_output_format():
    msg = client.messages.create(
        model="claude-haiku-4-20250514",
        max_tokens=256,
        messages=[
            {"role": "user", "content": "Extract: 'Alice, 30, NYC'. Return JSON: {name, age, city}"},
            {"role": "assistant", "content": "{"},  # prefill forces JSON output
        ],
    )
    data = json.loads("{" + msg.content[0].text)
    assert "name" in data and "age" in data


def test_system_prompt_boundary():
    msg = client.messages.create(
        model="claude-haiku-4-20250514",
        max_tokens=128,
        system="You only discuss cooking recipes. For other topics say: 'I only help with cooking.'",
        messages=[{"role": "user", "content": "Write me Python code"}],
    )
    reply = msg.content[0].text.lower()
    assert "cooking" in reply or "recipe" in reply
```
CI Cost Guard
```python
# conftest.py
import pytest

MAX_CI_COST = 1.00  # USD budget per CI run
_tokens = {"input": 0, "output": 0}  # tests add each response's usage counts here


@pytest.hookimpl(hookwrapper=True)  # hookwrapper lets the yield-based hook run
def pytest_runtest_call(item):
    yield
    # Haiku rates: $0.80 / 1M input tokens, $4.00 / 1M output tokens
    cost = (_tokens["input"] * 0.80 + _tokens["output"] * 4.0) / 1_000_000
    if cost > MAX_CI_COST:
        pytest.exit(f"CI cost guard: ${cost:.4f} exceeds ${MAX_CI_COST}")
```
Error Handling
| CI Issue | Cause | Fix |
|---|---|---|
| Flaky prompt tests | Non-deterministic output | Set `temperature=0`; check patterns, not exact strings |
| 429 in CI | Parallel jobs sharing key | Use separate CI key |
| Secret not found | Missing GitHub secret | Add in repo Settings > Secrets |
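The "check patterns, not exact strings" advice from the table can be factored into a small helper (`assert_matches_any` is an illustrative name, not a pytest builtin):

```python
# tests/helpers.py -- hypothetical pattern-based assertion helper.
import re


def assert_matches_any(text: str, patterns: list[str]) -> None:
    """Pass if any regex matches case-insensitively; fail with a useful message."""
    if not any(re.search(p, text, re.IGNORECASE) for p in patterns):
        raise AssertionError(f"none of {patterns!r} matched: {text[:200]!r}")


# Example: tolerate either refusal phrasing from the cooking-only system prompt.
# assert_matches_any(msg.content[0].text, [r"cooking", r"recipe"])
```

Regex patterns survive minor rewording between model responses, which is what makes prompt regression tests stable enough for CI.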
Next Steps
For deployment automation, see anth-deploy-integration.