# claude-skill-registry: bats
Bash Automated Testing System (BATS) for TDD-style testing of shell scripts. Use when: (1) Writing unit or integration tests for Bash scripts, (2) Testing CLI tools or shell functions, (3) Setting up test infrastructure with setup/teardown hooks, (4) Mocking external commands (curl, git, docker), (5) Generating JUnit reports for CI/CD, (6) Debugging test failures or flaky tests, (7) Implementing test-driven development for shell scripts.
## Install

Clone the upstream repo:

```sh
git clone https://github.com/majiayu000/claude-skill-registry
```

Install into `~/.claude/skills/` (Claude Code):

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/bats" ~/.claude/skills/majiayu000-claude-skill-registry-bats && rm -rf "$T"
```

Manifest: `skills/data/bats/SKILL.md`
# BATS Testing Framework

BATS (Bash Automated Testing System) is a TAP-compliant testing framework for Bash 3.2+. Think of it as JUnit for Bash: structured, repeatable testing for shell scripts.
## Workflow Decision Tree

### Creating a New Test Suite

- Initialize project structure (see "Project Setup" below)
- Create test files with the `.bats` extension
- Load helper libraries in `setup()`
- Write tests using `@test` blocks
### Writing Tests

- Testing script output? → Use `run` + `assert_output`
- Testing exit codes? → Use `run` + `assert_success`/`assert_failure`
- Testing file operations? → Use `bats-file` assertions
- Mocking external commands? → See gotchas.md
### Debugging Failures

- Test hangs? → Check for background tasks holding FD 3
- Pipes don't work? → Use a `bash -c` wrapper or `bats_pipe`
- Negation doesn't fail? → Use `run !` (BATS 1.5+)
- Variables disappear? → Don't use `run` for assignments
- See gotchas.md for complete troubleshooting
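The pipe pitfall above can be sketched in plain bash. Inside a `.bats` file a trailing pipe applies to the `run` helper itself, not to the command under test; wrapping the pipeline in `bash -c` hands `run` a single command (newer BATS releases also ship `bats_pipe` for this):

```bash
#!/usr/bin/env bash
# In a test, `run printf "a\nb\n" | wc -l` pipes run's own output, not the
# command's. Wrap the pipeline so run sees one command:
#   run bash -c 'printf "a\nb\n" | wc -l'
# The wrapper behaves identically in plain bash:
line_count="$(bash -c 'printf "a\nb\n" | wc -l' | tr -d ' ')"
echo "line_count=$line_count"
```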
## Project Setup

### Recommended Structure

```
project/
├── src/
│   └── my_script.sh
├── test/
│   ├── bats/                 # bats-core submodule
│   ├── test_helper/
│   │   ├── bats-support/     # Output formatting
│   │   ├── bats-assert/      # Assertions
│   │   ├── bats-file/        # Filesystem assertions
│   │   └── common-setup.bash # Shared setup logic
│   ├── unit/
│   │   └── parser.bats
│   └── integration/
│       └── api.bats
└── .gitmodules
```
### Initialize Submodules

```sh
git submodule add https://github.com/bats-core/bats-core.git test/bats
git submodule add https://github.com/bats-core/bats-support.git test/test_helper/bats-support
git submodule add https://github.com/bats-core/bats-assert.git test/test_helper/bats-assert
git submodule add https://github.com/bats-core/bats-file.git test/test_helper/bats-file
```
### Common Setup Helper

Create `test/test_helper/common-setup.bash`:

```bash
_common_setup() {
    load "$BATS_TEST_DIRNAME/test_helper/bats-support/load"
    load "$BATS_TEST_DIRNAME/test_helper/bats-assert/load"
    load "$BATS_TEST_DIRNAME/test_helper/bats-file/load"
    PROJECT_ROOT="$(cd "$BATS_TEST_DIRNAME/.." && pwd)"
    export PATH="$PROJECT_ROOT/src:$PATH"
}
```
### Test File Template

```bash
#!/usr/bin/env bats

setup_file() {
    # Runs ONCE before all tests in file (expensive setup)
    export SHARED_RESOURCE="initialized"
}

setup() {
    # Runs before EACH test
    load 'test_helper/common-setup'
    _common_setup
    TEST_DIR="$BATS_TEST_TMPDIR"
}

teardown() {
    # Runs after EACH test (cleanup)
    rm -rf "$TEST_DIR" 2>/dev/null || true
}

teardown_file() {
    # Runs ONCE after all tests (final cleanup)
    unset SHARED_RESOURCE
}

@test "describe expected behavior" {
    run my_command arg1 arg2
    assert_success
    assert_output --partial "expected substring"
}
```
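The hook lifecycle in the template can be simulated in plain bash. This is a sketch of the order BATS runs the hooks in, not BATS itself: `setup_file` once, `setup`/`teardown` around each test, `teardown_file` once at the end.

```bash
#!/usr/bin/env bash
# Simulated BATS hook ordering for a file with two tests.
setup_file()    { echo "setup_file"; }
setup()         { echo "  setup"; }
teardown()      { echo "  teardown"; }
teardown_file() { echo "teardown_file"; }
test_one()      { echo "    test: one"; }
test_two()      { echo "    test: two"; }

setup_file
for t in test_one test_two; do
  setup; "$t"; teardown        # each test is bracketed by setup/teardown
done
teardown_file
```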
## The `run` Helper

`run` captures exit status and output in a subshell:

```bash
run command arg1 arg2

# Available after run:
$status          # Exit code
$output          # Combined stdout+stderr
${lines[@]}      # Array of output lines
${lines[0]}      # First line

# Implicit status checks (BATS 1.5+)
run -1 failing_command      # Expect exit code 1
run ! command               # Expect non-zero exit
run --separate-stderr cmd   # Separate $output and $stderr
```

**Critical:** plain `run` always returns 0 to BATS, even when the command fails. Always check `$status` explicitly or use assertions.
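A plain-bash sketch of why the explicit `$status` check matters. `run_like` here is a hypothetical stand-in for `run`, not the real helper: the capture itself succeeds even when the command fails, so the test must inspect `$status` (or use `assert_failure`).

```bash
#!/usr/bin/env bash
# run_like mimics `run`: capture combined output and exit status without
# aborting the caller. Its own return value is 0 even on failure.
run_like() {
  output="$("$@" 2>&1)"   # assignment's exit status is the command's
  status=$?
  return 0                # like plain `run`: never fails the test itself
}

run_like bash -c 'echo "boom" >&2; exit 3' && echo "run_like returned 0"
echo "status=$status output=$output"
```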
## Core Assertions (bats-assert)

```bash
# Exit status
assert_success            # $status == 0
assert_failure            # $status != 0
assert_failure 1          # $status == 1

# Output
assert_output "exact match"
assert_output --partial "substring"
assert_output --regexp "^[0-9]+$"

# Lines
assert_line "any line matches"
assert_line --index 0 "first line"
assert_line --partial "substring"

# Negations
refute_output "not this"
refute_line "not in output"
```
## File Assertions (bats-file)

```bash
assert_file_exists "/path/to/file"
assert_dir_exists "/path/to/dir"
assert_file_executable "/path/to/script"
assert_file_not_empty "/path/to/file"
assert_file_contains "/path/to/file" "search text"
```
## Temporary Directories

| Variable | Scope | Use Case |
|---|---|---|
| `$BATS_TEST_TMPDIR` | Per test | Always use for isolation |
| `$BATS_FILE_TMPDIR` | Per file | Shared fixtures in `setup_file()` |
| `$BATS_RUN_TMPDIR` | Per run | Rarely needed |
```bash
@test "file operations" {
    echo "data" > "$BATS_TEST_TMPDIR/file.txt"
    run process_file "$BATS_TEST_TMPDIR/file.txt"
    assert_success
    # Automatically cleaned up
}
```
## Mocking External Commands

Mock via PATH manipulation:

```bash
@test "mock curl" {
    mkdir -p "$BATS_TEST_TMPDIR/bin"
    cat > "$BATS_TEST_TMPDIR/bin/curl" <<'EOF'
#!/bin/bash
echo '{"status":"ok"}'
EOF
    chmod +x "$BATS_TEST_TMPDIR/bin/curl"
    export PATH="$BATS_TEST_TMPDIR/bin:$PATH"

    run script_using_curl
    assert_output --partial "status"
}
```
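A hedged extension of the same PATH trick, in plain bash: the mock appends its arguments to a log file so a later assertion can verify how the command was invoked. `MOCK_LOG` and the `git` mock are illustrative names, not part of bats.

```bash
#!/usr/bin/env bash
# PATH mock that records every invocation's arguments for later assertion.
TMP="$(mktemp -d)"
mkdir -p "$TMP/bin"
export MOCK_LOG="$TMP/calls.log"
cat > "$TMP/bin/git" <<'EOF'
#!/bin/bash
echo "$@" >> "$MOCK_LOG"    # record how we were called
echo "mocked-git"           # canned output for the code under test
EOF
chmod +x "$TMP/bin/git"
export PATH="$TMP/bin:$PATH"

git clone https://example.com/repo.git >/dev/null   # code under test
CALLS="$(cat "$MOCK_LOG")"
echo "recorded: $CALLS"
rm -rf "$TMP"
```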
## Running Tests

```sh
# Basic execution
bats test/                    # All tests
bats -r test/                 # Recursive
bats --jobs 4 test/           # Parallel

# Filtering
bats --filter "login" test/          # By name regex
bats --filter-tags api,!slow test/   # By tags
bats --filter-status failed test/    # Re-run failures

# Output formats
bats --formatter junit --output ./reports test/   # JUnit for CI
bats --timing test/                               # Show durations
```
## Tagging Tests

```bash
# bats test_tags=api,smoke
@test "user login" { }
```

```sh
# Run tagged tests
bats --filter-tags api test/         # Has 'api'
bats --filter-tags api,!slow test/   # Has 'api' but not 'slow'
```
## Skip Tests

```bash
@test "not ready" {
    skip "Feature not implemented"
}

@test "requires docker" {
    command -v docker || skip "Docker not installed"
    run docker ps
}
```
## CI/CD Integration

### GitHub Actions

```yaml
- name: Run tests
  run: ./test/bats/bin/bats --formatter junit --output ./reports test/
- name: Publish results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    files: reports/report.xml
```
### GitLab CI

```yaml
test:
  script:
    - bats --formatter junit --output reports/ test/
  artifacts:
    reports:
      junit: reports/report.xml
```
## Reference Documentation
- Common pitfalls and debugging: See references/gotchas.md
- Complete assertion reference: See references/assertions.md
- Real-world project examples: See references/projects.md
- CI/CD integration patterns: See references/ci-integration.md
## Quick Troubleshooting

| Problem | Solution |
|---|---|
| Test passes but should fail | `run` always returns 0; check `$status` or use assertions |
| Pipes don't work with `run` | Use a `bash -c` wrapper or `bats_pipe` |
| Negation doesn't fail test | Use `run !` (BATS 1.5+) |
| Variables lost after `run` | Don't use `run` for assignments |
| Test hangs indefinitely | Close FD 3 for background tasks: `cmd 3>&- &` |
| Output has ANSI colors | Disable colors in the command under test or strip ANSI codes before asserting |
## Code Style

- Use `run` for capturing output, direct execution for state changes
- Always check `$status` or use assertions
- Prefer `$BATS_TEST_TMPDIR` over hardcoded paths
- Mock external dependencies, not internal logic
- Name tests to describe expected behavior
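The first two style rules can be demonstrated in plain bash. `run` captures the command in a subshell-like context, so assignments made there never reach the test; state-changing commands should run directly:

```bash
#!/usr/bin/env bash
# Assignments inside a subshell (loosely how `run` captures commands) are
# discarded; direct execution or command substitution persists.
VALUE="old"
( VALUE="new" )              # subshell: change is discarded
echo "after subshell: $VALUE"
VALUE="$(echo "new")"        # direct capture: change persists
echo "after direct:   $VALUE"
```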