Awesome-omni-skill live-tests
Writes live integration tests that hit the real Copilot API and record responses as replayable fixtures. Use this skill when adding new agent behaviors, provider integrations, or tool interactions that need real-world API coverage.
install
source · Clone the upstream repo

```bash
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Claude Code · Install into `~/.claude/skills/`

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/tools/live-tests" ~/.claude/skills/diegosouzapw-awesome-omni-skill-live-tests && rm -rf "$T"
```
manifest: `skills/tools/live-tests/SKILL.md`
Live Test & Fixture Recording Skill
You write live integration tests that exercise the real GitHub Copilot API and capture responses as JSON fixtures for deterministic replay. This is the project's VCR-like system for ensuring tests stay grounded in real API behavior.
Architecture overview
```
Live test (mix test --include live --include save_fixtures)
│
├─ RecordingProvider ← wraps real Copilot provider, captures SSE events
│  │
│  └─ persistent_term storage ← events buffered here during stream
│
└─ FixtureHelper.save_fixture/2 ← writes events to JSON file
   │
   └─ test/support/fixtures/<name>.json

Fixture-based test (mix test)
│
├─ FixtureProvider ← reads fixture JSON, replays events via Req.Response.Async
│  │
│  └─ FixtureHelper.build_fixture_response/1 ← spawns process to send SSE events
│
└─ Assertions on agent behavior, events, tool calls, etc.
```
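The `persistent_term` buffer is the only hidden state in the recording path. A minimal sketch of that buffering idea, with internals assumed from the names in the diagram (the real `RecordingProvider` additionally wraps the Copilot provider):

```elixir
# Sketch only: event buffering via :persistent_term, as the diagram describes.
# The real RecordingProvider also forwards requests to the Copilot provider.
defmodule RecordingSketch do
  @key {__MODULE__, :events}

  def start_recording, do: :persistent_term.put(@key, [])

  # Invoked for each SSE event observed while the stream is running.
  def record_event(event) do
    :persistent_term.put(@key, [event | :persistent_term.get(@key)])
  end

  # Returns events in arrival order and clears the buffer.
  def stop_recording do
    events = @key |> :persistent_term.get() |> Enum.reverse()
    :persistent_term.erase(@key)
    events
  end
end
```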
When to act
- Adding a new agent behavior that needs a new fixture (e.g., new tool call pattern, multi-turn conversation, error handling).
- Adding or changing provider integration logic.
- When the user asks to "record a fixture", "add a live test", or "capture API responses".
- When an existing fixture is stale and needs re-recording.
Writing a live test
Live tests go in `test/opal/live_test.exs`. They require `@moduletag :live` (excluded by default) and use the `RecordingProvider` to capture real SSE events.
Template
describe "live API — <scenario description>" do @tag :save_fixtures @tag timeout: 30_000 test "records <what this captures>" do RecordingProvider.start_recording() {:ok, pid} = Opal.start_session(%{ model: {:copilot, "claude-sonnet-4"}, system_prompt: "<constrained prompt that produces deterministic output>", tools: [<tool modules if needed>], working_dir: System.tmp_dir!(), provider: RecordingProvider }) {:ok, response} = Opal.prompt_sync(pid, "<user message>", 25_000) Opal.stop_session(pid) events = RecordingProvider.stop_recording() assert length(events) > 0 # Save as a fixture for replay path = FixtureHelper.save_fixture("<descriptive_name>.json", events) assert File.exists?(path) # Assertions on the live response assert is_binary(response) assert String.contains?(response, "<expected content>") end end
Running
```bash
# Run all live tests (requires valid Copilot auth)
mix test --include live

# Run live tests AND save recorded fixtures to disk
mix test --include live --include save_fixtures

# Run a specific live test
mix test --include live test/opal/live_test.exs:<line_number>
```
Writing a fixture-based integration test
Once you have a fixture, write a deterministic integration test that replays it. These go in `test/opal/integration_test.exs` or a new file under `test/opal/`.
Template
describe "<feature under test>" do test "<what it verifies>" do # Configure the FixtureProvider with your fixture :persistent_term.put({FixtureProvider, :fixture}, "<your_fixture>.json") # For multi-turn tests (tool call → second response): # :persistent_term.put({FixtureProvider, :second_fixture}, "<second_turn>.json") session_id = "test-#{System.unique_integer([:positive])}" {:ok, tool_sup} = Task.Supervisor.start_link() agent_opts = [ session_id: session_id, model: Model.new(:copilot, "test-model"), system_prompt: "<system prompt>", tools: [<tool modules>], working_dir: System.tmp_dir!(), provider: FixtureProvider, tool_supervisor: tool_sup ] {:ok, pid} = Agent.start_link(agent_opts) Events.subscribe(session_id) Agent.prompt(pid, "<user message>") # Assert on events received assert_receive {:event, %{type: :response_start}}, 5_000 assert_receive {:event, %{type: :text_delta, data: %{delta: delta}}}, 5_000 assert is_binary(delta) assert_receive {:event, %{type: :response_end}}, 5_000 end end
Fixture file format
Fixtures live in `test/support/fixtures/` as JSON:
{ "description": "Recorded live fixture: responses_api_text.json", "recorded_at": "2025-02-13T...", "events": [ {"data": "{\"type\":\"response.created\",\"response\":{...}}"}, {"data": "{\"type\":\"response.output_item.added\",...}"}, {"data": "{\"type\":\"response.completed\",...}"} ] }
Each `data` field contains one SSE event payload as a JSON string.
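To inspect a recording by hand (for debugging only, since fixtures must never be hand-edited), the payloads decode in two steps: first the fixture envelope, then each `data` string. A sketch assuming the Jason library is available:

```elixir
# Decode the fixture envelope, then each embedded SSE payload.
fixture =
  "test/support/fixtures/responses_api_text.json"
  |> File.read!()
  |> Jason.decode!()

for %{"data" => data} <- fixture["events"] do
  # Each "data" value is itself a JSON-encoded SSE event.
  event = Jason.decode!(data)
  IO.inspect(event["type"], label: "sse event")
end
```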
Rules
- System prompts in live tests must be highly constrained to produce deterministic output. Use prompts like "Respond with exactly the word 'pong'" rather than open-ended prompts (see the filled-in example after this list).
- Name fixtures descriptively: `responses_api_tool_call.json`, not `test1.json`.
- Never hand-edit fixture JSON. Always re-record from a live test.
- Tag live tests correctly: `@moduletag :live` on the module, `@tag :save_fixtures` on recording tests, `@tag timeout: 30_000` for API calls.
- Clean up after recording tests if the fixture is only meant for one-time capture, or keep it permanently if it will be used by replay tests.
- Fixture-based tests should be fast and async-safe — they replay from disk, no network needed.
- When adding a fixture for a new scenario (e.g., error response, high token usage), also add a corresponding integration test that replays it.
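As referenced in the rules above, here is a hypothetical filled-in version of the live-test template showing how constrained a deterministic prompt should be. The fixture name and prompt wording are illustrative, not taken from the repo:

```elixir
# Hypothetical concrete version of the live-test template above.
describe "live API — plain text response" do
  @tag :save_fixtures
  @tag timeout: 30_000
  test "records a single deterministic text turn" do
    RecordingProvider.start_recording()

    {:ok, pid} =
      Opal.start_session(%{
        model: {:copilot, "claude-sonnet-4"},
        # Constrained prompt: one exact word, so re-recordings stay stable.
        system_prompt: "Respond with exactly the word 'pong'. No punctuation.",
        tools: [],
        working_dir: System.tmp_dir!(),
        provider: RecordingProvider
      })

    {:ok, response} = Opal.prompt_sync(pid, "ping", 25_000)
    Opal.stop_session(pid)

    events = RecordingProvider.stop_recording()
    assert events != []

    path = FixtureHelper.save_fixture("responses_api_text.json", events)
    assert File.exists?(path)
    assert response =~ "pong"
  end
end
```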
Key files
- `test/opal/live_test.exs` — Live API tests with recording
- `test/opal/integration_test.exs` — Fixture-based integration tests
- `test/support/fixture_helper.ex` — Fixture load/save/replay helpers
- `test/support/fixtures/` — Recorded fixture JSON files
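For orientation only, a hypothetical sketch of what `FixtureHelper.save_fixture/2` could boil down to, matching the fixture format shown above; the real helper lives in `test/support/fixture_helper.ex` and may differ:

```elixir
# Hypothetical sketch; consult test/support/fixture_helper.ex for the real code.
def save_fixture(name, events) do
  path = Path.join("test/support/fixtures", name)

  body = %{
    "description" => "Recorded live fixture: #{name}",
    "recorded_at" => DateTime.to_iso8601(DateTime.utc_now()),
    "events" => events
  }

  File.write!(path, Jason.encode!(body, pretty: true))
  path
end
```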