# Awesome-omni-skill code-test

Run targeted tests to validate changes. Prefer the smallest relevant scope; broaden only when necessary.

Install by cloning the repository:

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Or copy only this skill into your local skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/development/code-test" ~/.claude/skills/diegosouzapw-awesome-omni-skill-code-test && rm -rf "$T"
```

`skills/development/code-test/SKILL.md`

# Code Testing Skill
This skill provides testing operations for the project codebase.
All test commands use `cargo-nextest` exclusively. If it is not available, ask the user to run the `just install-cargo-nextest` task to install it.
## Table of Contents
- When to Use This Skill
- Test Scope Selection
- Available Commands
- Important Guidelines
- Common Mistakes to Avoid
- Validation Loop Pattern
- Debugging
- Next Steps
- Project Context
## When to Use This Skill
Use this skill when you need to run tests and have decided testing is warranted:
- Validate behavior changes or bug fixes
- Confirm localized changes with targeted test suites (unit, integration)
- Test specific packages or areas
- Respond to a user request to run tests
## Test Scope Selection (Default: Minimal)
Start with the smallest scope that covers the change. Only broaden if you need more confidence.
- Docs/comments-only changes: skip tests and state why
- Localized code change in 1-2 crates: run unit tests or targeted package tests
- End-to-end/external dependency changes: run `just test`, `just test-integration`, or `just test-e2e` in CI
- If unsure, ask the user which scope they want
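The mapping above can be sketched as a tiny helper. The category names and `just` tasks come from this skill; the `suggest_scope` function itself is hypothetical:

```shell
# suggest_scope CATEGORY - print the suggested test command for a change
# category (docs | unit | integration | e2e), following the rules above.
suggest_scope() {
  case "$1" in
    docs)        echo "skip tests (docs/comments-only change)" ;;
    unit)        echo "just test-unit -p <package>" ;;
    integration) echo "just test-integration (CI only)" ;;
    e2e)         echo "just test-e2e (CI only)" ;;
    *)           echo "unknown change type: ask the user which scope they want" ;;
  esac
}

# Example: suggest_scope unit
```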
## Available Commands

### Run All Tests (REQUIRES EXTERNAL DEPENDENCIES)

```shell
just test [EXTRA_FLAGS]
```

Runs all tests (unit and integration) in the workspace. Uses `cargo nextest run --workspace`.

⚠️ WARNING: This command requires external dependencies (PostgreSQL, Firehose services, etc.) that may not be available locally.

Use this when: Running in CI.

Examples:
- Run all tests: `just test`
- Run with output capture disabled: `just test -- --nocapture`
- Run a specific test by name: `just test my_test_name`
### Run Unit Tests Only

```shell
just test-unit [EXTRA_FLAGS]
```

Runs only unit tests using the `unit` nextest profile. Excludes integration tests (`it_*`) and the top-level `tests` package.

Use this when: You want fast feedback on pure logic changes. Unit tests have no external dependencies.

Examples:
- Run all unit tests: `just test-unit`
- Run unit tests for the metadata-db crate: `just test-unit -p metadata-db`
### Run Integration Tests (REQUIRES EXTERNAL DEPENDENCIES)

```shell
just test-integration [EXTRA_FLAGS]
```

Runs integration tests (`it_*` tests across all crates) using the `integration` nextest profile. Excludes the top-level `tests` package.

⚠️ WARNING: Integration tests require external dependencies (databases, Firehose endpoints, etc.).

Use this when: Running in CI or when you have external services available locally.

Examples:
- Run all integration tests: `just test-integration`
- Run integration tests for a specific crate: `just test-integration -p metadata-db`
### Run E2E Tests (REQUIRES EXTERNAL DEPENDENCIES)

```shell
just test-e2e [EXTRA_FLAGS]
```

Runs end-to-end tests from the top-level `tests/` workspace package using the `e2e` nextest profile.

⚠️ WARNING: E2E tests require external dependencies (databases, Firehose endpoints, etc.).

Use this when: Running in CI for end-to-end validation.

Examples:
- Run all e2e tests: `just test-e2e`
- Run a specific e2e test: `just test-e2e test_name`
### Per-Crate Targeted Testing

For targeted testing within a single crate, use `cargo nextest run` directly:

```shell
# Unit tests for a specific crate (skip in-tree integration tests)
cargo nextest run -p metadata-db -E 'not test(/::it_/)'

# Specific module's unit tests
cargo nextest run -p metadata-db -E 'test(/workers::tests::/)'

# In-tree integration tests for a crate
cargo nextest run -p metadata-db -E 'test(/::it_/)'

# Specific in-tree integration test suite
cargo nextest run -p metadata-db -E 'test(/::it_workers/)'

# Public API integration tests for a crate
cargo nextest run -p metadata-db -E 'kind(test)'

# Specific public API integration test file
cargo nextest run -p metadata-db -E 'test(it_api_workers)'

# Run a single test by name
cargo nextest run -p metadata-db -E 'test(=my_exact_test_name)'
```
## Important Guidelines

### Cargo Nextest

This project uses cargo-nextest exclusively for all test execution:
- Faster parallel test execution
- Better output formatting and filtering
- Filter expressions (`-E`) for precise test selection
- Install with: `just install-cargo-nextest`
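Availability can be checked before running anything. This is a generic sketch: the `require_tool` helper is hypothetical, and only the `just install-cargo-nextest` task name comes from this skill:

```shell
# require_tool NAME HINT - succeed if NAME is on PATH, otherwise print
# the install hint to stderr and return non-zero.
require_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    return 0
  fi
  echo "$1 not found - please run: $2" >&2
  return 1
}

# Usage:
# require_tool cargo-nextest "just install-cargo-nextest" || exit 1
```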
### Pre-approved Commands

These test commands are pre-approved and can be run without user permission:
- Run all tests: `just test`
- Run unit tests: `just test-unit`
- Run integration tests: `just test-integration`
- Run e2e tests: `just test-e2e`
- Per-crate targeted testing: `cargo nextest run` with targeted filters
### Test Workflow Recommendations

- During local development: Prefer targeted unit tests first; broaden only if the change is risky or cross-cutting
- Before commits (local): Re-run the smallest test scope that covers everything you touched
- In CI environments: The CI system will run `just test`, `just test-integration`, and `just test-e2e`
- Local development: Never run `just test`, `just test-integration`, or `just test-e2e` locally — those require external dependencies
### External Dependencies Required by Non-Local Tests
The following tests require external services that are typically not available in local development:
- PostgreSQL database: Required for metadata-db tests
- Firehose endpoints: Required for Firehose dataset tests
- EVM RPC endpoints: Required for EVM RPC dataset tests
- Other services: As configured in docker-compose or CI environment
Use `just test-unit` or per-crate targeted testing to avoid these dependencies during local development.
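When unsure whether a service is reachable locally, a quick TCP probe avoids a slow failing test run. This sketch uses bash's `/dev/tcp` feature; the host and port in the usage comment are illustrative (5432 is PostgreSQL's conventional port):

```shell
# port_open HOST PORT - return 0 if a TCP connection succeeds.
# Relies on bash's /dev/tcp pseudo-device; not available in plain sh.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Usage:
# port_open 127.0.0.1 5432 || echo "PostgreSQL unreachable - stick to just test-unit"
```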
### Common Test Flags

You can pass extra flags through the EXTRA_FLAGS parameter:
- `-p <package>` or `--package <package>` - test a specific package
- `-E '<filter>'` - nextest filter expression for precise test selection
- `test_name` - run tests matching a name
- `-- --show-output` - show output from passing tests
## Common Mistakes to Avoid

### ❌ Anti-patterns

- Never use `cargo test` - Always use `cargo nextest run` or justfile tasks (which use nextest profiles). See Per-Crate Targeted Testing
- Never run `just test` locally - It requires external dependencies
- Never skip tests when behavior changes - Skipping is OK for docs/comments-only changes, but not for runtime changes
- Never ignore failing tests - Fix them or document why they fail
- Never run integration/e2e tests locally - Use `just test-unit` or targeted unit tests instead
### ✅ Best Practices
- Prefer the smallest relevant test scope
- Run tests for behavior changes or bug fixes
- Fix failing tests immediately
- If nextest is not installed, install it for better performance
- Run broader tests only when necessary
## Validation Loop Pattern

```
Code Change → Format → Check → Clippy → Targeted Tests (when needed)
     ↑                                                    ↓
     └──────────────────── Fix failures ←─────────────────┘
```
If tests fail:
- Read error messages carefully
- Fix the issue
- Format the fix (`just fmt-file`)
- Check compilation (`just check-crate`)
- Re-run the relevant tests (same scope as before)
- Repeat until all pass
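The loop can be sketched as a small driver that runs each step in order and stops at the first failure. The `just` task names in the usage comment come from this skill; the driver itself is a generic illustration:

```shell
# run_steps CMD... - run each command in order; stop at the first failure
# so the fix-and-retry loop resumes from a known-good step.
run_steps() {
  for cmd in "$@"; do
    echo "-> $cmd"
    if ! sh -c "$cmd"; then
      echo "step failed: $cmd (fix it, then re-run the same scope)" >&2
      return 1
    fi
  done
  echo "all steps passed"
}

# Usage (crate and file names are illustrative):
# run_steps "just fmt-file src/lib.rs" \
#           "just check-crate metadata-db" \
#           "cargo nextest run -p metadata-db -E 'not test(/::it_/)'"
```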
## Debugging

### Using Logs

Tests use the `monitoring` crate's logging system. Enable structured logs via the `AGENTFLOW_LOG` environment variable to diagnose failures.
Environment variables:
| Variable | Default | Values | Purpose |
|---|---|---|---|
| `AGENTFLOW_LOG` | | `error`, `warn`, `info`, `debug`, `trace` | Log level for all agentflow workspace crates |
| `AGENTFLOW_LOG_SPAN_EVENT` | (none) | e.g. `full` | Log tracing span lifecycle events |
| `RUST_LOG` | (none) | Standard `tracing` directives | Fine-grained per-crate overrides (takes precedence over `AGENTFLOW_LOG`) |
Examples with nextest:
```shell
# Debug logging for a failing test
AGENTFLOW_LOG=debug cargo nextest run -p metadata-db -E 'test(my_failing_test)'

# Trace logging (very verbose)
AGENTFLOW_LOG=trace cargo nextest run -p worker -E 'test(my_test)'

# Debug a specific crate while keeping others quiet
RUST_LOG="metadata_db=trace,sqlx=warn" cargo nextest run -p metadata-db

# Include span open/close events for async debugging
AGENTFLOW_LOG=debug AGENTFLOW_LOG_SPAN_EVENT=full cargo nextest run -E 'test(my_test)'
```
How it works:
- `AGENTFLOW_LOG` sets the log level for all agentflow workspace crates; external dependencies default to `error`
- `RUST_LOG` directives override `AGENTFLOW_LOG` for specific crates (useful for noisy dependencies)
- Logging is initialized via `monitoring::logging::init()`, which is idempotent and already called by the test context builder
- Output goes to stderr, which nextest captures by default — use `-- --show-output` to see logs from passing tests
See also: docs/code/logging.md for full logging configuration details.
## Next Steps
After required tests pass:
- Review changes → Ensure quality before commits
- Commit → All checks and tests must be green
## Project Context
- This is a Rust workspace with multiple crates
- E2E tests are in the top-level `tests/` package
- Some tests require external dependencies (databases, services)
- Test configurations are defined in `.config/nextest.toml`
- Nextest profiles: `default`, `unit`, `integration`, `e2e`
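For orientation, the profiles above would be declared in `.config/nextest.toml` along these lines. This is a hypothetical sketch, not the project's actual file: the filter expressions are assumptions based on the `it_*` naming convention and top-level `tests` package described earlier.

```toml
# Hypothetical sketch of .config/nextest.toml; profile names match this
# document, filters and settings are illustrative assumptions.
[profile.default]
fail-fast = false

[profile.unit]
# Everything except in-tree integration tests (it_*) and the tests package
default-filter = "not (test(/::it_/) | package(tests))"

[profile.integration]
default-filter = "test(/::it_/) and not package(tests)"

[profile.e2e]
default-filter = "package(tests)"
```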