Agent-almanac: test-shiny-app

Clone the repository:

```bash
git clone https://github.com/pjt222/agent-almanac
```

Or copy the skill directly into your skills directory:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/pjt222/agent-almanac "$T" && \
  mkdir -p ~/.claude/skills && \
  cp -r "$T/i18n/wenyan/skills/test-shiny-app" ~/.claude/skills/pjt222-agent-almanac-test-shiny-app-fa37e9 && \
  rm -rf "$T"
```

Source: `i18n/wenyan/skills/test-shiny-app/SKILL.md`

Test Shiny App
Set up comprehensive testing for Shiny applications using shinytest2 (end-to-end) and testServer() (unit tests).
When to Use
- Adding tests to an existing Shiny application
- Setting up a testing strategy for a new Shiny project
- Writing regression tests before refactoring Shiny code
- Integrating Shiny app tests into CI/CD pipelines
Inputs
- Required: Path to the Shiny application
- Required: Test scope (unit tests, end-to-end, or both)
- Optional: Whether to use snapshot testing (default: yes for e2e)
- Optional: CI platform (GitHub Actions, GitLab CI)
- Optional: Modules to test in isolation
Procedure
Step 1: Install Testing Dependencies
```r
# Install shinytest2 for end-to-end testing
install.packages("shinytest2")

# For golem apps, add it as a Suggests dependency
usethis::use_package("shinytest2", type = "Suggests")

# Set up testthat infrastructure if not present
usethis::use_testthat(edition = 3)
```
Expected: shinytest2 installed and testthat directory structure in place.
On failure: shinytest2 requires chromote (headless Chrome). Install Chrome/Chromium on the system. On WSL: `sudo apt install -y chromium-browser`. Verify with `chromote::find_chrome()`.
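To confirm chromote can actually see a browser, a quick check from R (the chromium path below is an assumption for Debian/Ubuntu-style systems; adjust for yours):

```r
# Path to the Chrome/Chromium binary chromote will use, or NULL if none found
chromote::find_chrome()

# If NULL, point chromote at a binary explicitly for the current session
# (path is an assumption; adjust for your system)
Sys.setenv(CHROMOTE_CHROME = "/usr/bin/chromium-browser")
```

Setting `CHROMOTE_CHROME` in `.Renviron` makes the override persistent across sessions.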
Step 2: Write testServer() Unit Tests for Modules
Create tests/testthat/test-mod_dashboard.R:

```r
test_that("dashboard module filters data correctly", {
  testServer(dataFilterServer, args = list(
    data = reactive(iris),
    columns = c("Species", "Sepal.Length")
  ), {
    # Set inputs
    session$setInputs(column = "Species")
    session$setInputs(value_select = "setosa")
    session$setInputs(apply = 1)

    # Check output
    result <- filtered()
    expect_equal(nrow(result), 50)
    expect_true(all(result$Species == "setosa"))
  })
})

test_that("dashboard module handles empty data", {
  testServer(dataFilterServer, args = list(
    data = reactive(iris[0, ]),
    columns = c("Species")
  ), {
    # Module should not error on empty data
    expect_no_error(session$setInputs(column = "Species"))
  })
})
```
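The tests above exercise a `dataFilterServer` module that the source doesn't show. A hypothetical sketch of the interface those tests assume (all names and the filtering logic are assumptions, not the actual module):

```r
library(shiny)

# Hypothetical module server matching the tests above: takes a reactive
# data frame and a vector of filterable columns, and exposes the filtered
# data as a reactive named `filtered`
dataFilterServer <- function(id, data, columns) {
  moduleServer(id, function(input, output, session) {
    # Recompute only when the user presses "apply"
    filtered <- eventReactive(input$apply, {
      df <- data()
      if (nrow(df) == 0) return(df)
      df[df[[input$column]] %in% input$value_select, , drop = FALSE]
    })
    filtered
  })
}
```

Because testServer() evaluates the test expression inside the module server function's environment, the locally defined `filtered` reactive is accessible directly as `filtered()` in the tests.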
Key patterns:
- `testServer()` tests module server logic without a browser
- Pass reactive arguments via the `args` list
- Use `session$setInputs()` to simulate user interactions
- Access reactive return values directly by name
- Test edge cases: empty data, NULL inputs, invalid values
Expected: Module tests pass with `devtools::test()`.

On failure: If `testServer()` errors with "not a module server function", ensure the function uses `moduleServer()` internally. If `session$setInputs()` doesn't trigger reactives, add `session$flushReact()` after setting inputs.
Step 3: Write shinytest2 End-to-End Tests
Create tests/testthat/test-app-e2e.R:

```r
test_that("app loads and displays initial state", {
  # For golem apps
  app <- AppDriver$new(
    app_dir = system.file(package = "myapp"),
    name = "initial-load",
    height = 800,
    width = 1200
  )
  on.exit(app$stop(), add = TRUE)

  # Wait for app to load
  app$wait_for_idle(timeout = 10000)

  # Check that key elements exist
  app$expect_values()
})

test_that("filter interaction updates the table", {
  app <- AppDriver$new(
    app_dir = system.file(package = "myapp"),
    name = "filter-interaction"
  )
  on.exit(app$stop(), add = TRUE)

  # Interact with the app
  app$set_inputs(`filter1-column` = "cyl")
  app$wait_for_idle()
  app$set_inputs(`filter1-apply` = "click")
  app$wait_for_idle()

  # Snapshot the output values
  app$expect_values(output = "table")
})
```
Key patterns:
- `AppDriver$new()` launches the app in headless Chrome
- Always use `on.exit(app$stop())` to clean up
- Module input IDs use the format `"moduleId-inputId"`
- `app$expect_values()` creates/compares snapshot files
- `app$wait_for_idle()` ensures reactive updates complete
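For outputs that take a while to compute, AppDriver also provides `wait_for_value()`, which blocks until a given input/output produces a non-ignored value. A sketch reusing the hypothetical IDs from the example above:

```r
# Trigger a recomputation, then wait until the "table" output holds a
# real value (NULL is treated as "not ready") before snapshotting it
app$set_inputs(`filter1-column` = "cyl")
app$wait_for_value(output = "table", ignore = list(NULL))
app$expect_values(output = "table")
```

This is more targeted than `wait_for_idle()` when only one output matters and other reactives keep the app busy.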
Expected: End-to-end tests create snapshot files in tests/testthat/_snaps/.

On failure: If Chrome isn't found, set the CHROMOTE_CHROME environment variable to the Chrome binary path. If snapshots fail on CI but pass locally, check for platform-dependent rendering differences — use `app$expect_values()` for data snapshots rather than `app$expect_screenshot()` for visual ones.
Step 4: Record a Test Interactively (Optional)
```r
shinytest2::record_test("path/to/app")
```
This opens the app in a browser with a recording panel. Interact with the app, then click "Save test" to auto-generate test code.
Expected: A test file is generated in tests/testthat/ with recorded interactions.

On failure: If the recorder doesn't open, check that the app runs successfully with `shiny::runApp()` first. The recorder requires a working app.
Step 5: Set Up Snapshot Management
For snapshot-based tests, manage expected values:
```r
# Accept new/changed snapshots after review
testthat::snapshot_accept("test-app-e2e")

# Review snapshot differences
testthat::snapshot_review("test-app-e2e")
```
Add snapshot directories to version control:
```
tests/testthat/_snaps/   # Committed — contains expected values
```
Expected: Snapshot files tracked in git for regression detection.
On failure: If snapshots change unexpectedly, run `testthat::snapshot_review()` to see the diffs. Accept intentional changes with `testthat::snapshot_accept()`.
Step 6: Integrate with CI
Add to .github/workflows/R-CMD-check.yaml or create a dedicated workflow:

```yaml
- name: Install system dependencies
  run: |
    sudo apt-get update
    sudo apt-get install -y chromium-browser

- name: Set Chrome path
  run: echo "CHROMOTE_CHROME=$(which chromium-browser)" >> $GITHUB_ENV

- name: Run tests
  run: |
    Rscript -e 'devtools::test()'
```
For golem apps, ensure the app package is installed before testing:
```yaml
- name: Install app package
  run: Rscript -e 'devtools::install()'
```
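Putting the fragments together, a minimal complete workflow might look like this (a sketch assuming the r-lib/actions setup actions and an ubuntu-latest runner; adjust package names to your runner image):

```yaml
name: shiny-tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: r-lib/actions/setup-r@v2

      - uses: r-lib/actions/setup-r-dependencies@v2
        with:
          extra-packages: any::devtools, any::shinytest2

      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y chromium-browser

      - name: Set Chrome path
        run: echo "CHROMOTE_CHROME=$(which chromium-browser)" >> $GITHUB_ENV

      - name: Install app package
        run: Rscript -e 'devtools::install()'

      - name: Run tests
        run: Rscript -e 'devtools::test(stop_on_failure = TRUE)'
```

The `stop_on_failure = TRUE` argument makes `devtools::test()` exit non-zero on test failures so the CI job actually fails.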
Expected: Tests pass in CI with headless Chrome.
On failure: Common CI issues: Chrome not installed (add the apt-get step), display server missing (shinytest2 uses headless mode by default, so this usually isn't an issue), or timeouts on slow runners (increase `timeout` in `AppDriver$new()`).
Validation
- `devtools::test()` runs all tests without errors
- testServer() tests cover module server logic
- shinytest2 tests cover key user workflows
- Snapshot files are committed to version control
- Tests pass in CI environment
- Edge cases tested (empty data, NULL inputs, error states)
Common Pitfalls
- Testing UI rendering instead of logic: Prefer `testServer()` for logic and `app$expect_values()` for data. Only use `app$expect_screenshot()` when visual appearance matters — screenshots are brittle across platforms.
- Module ID format in e2e tests: When setting module inputs via AppDriver, use the `"moduleId-inputId"` format (hyphen-separated), not `"moduleId.inputId"`.
- Flaky timing: Always call `app$wait_for_idle()` after `app$set_inputs()`. Without it, assertions may run before reactive updates complete.
- Snapshot drift: Don't commit snapshots generated on different platforms (Mac vs Linux). Standardize on the CI platform for snapshot generation.
- Missing Chrome on CI: shinytest2 requires Chrome/Chromium. Always include the installation step in CI workflows.
Related Skills
- build-shiny-module — create testable modules with clear interfaces
- scaffold-shiny-app — set up app structure with testing infrastructure
- write-testthat-tests — general testthat patterns for R packages
- setup-github-actions-ci — CI/CD setup for R packages (golem apps)