# Auto-GPT write-frontend-tests

Analyze the current branch diff against dev, plan integration tests for changed frontend pages/components, and write them. TRIGGER when the user asks to write frontend tests, add test coverage, or "write tests for my changes".

Source repository:

```bash
git clone https://github.com/Significant-Gravitas/AutoGPT
```

Install the skill locally:

```bash
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/Significant-Gravitas/AutoGPT "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/.claude/skills/write-frontend-tests" ~/.claude/skills/significant-gravitas-auto-gpt-write-frontend-tests \
  && rm -rf "$T"
```
---

`.claude/skills/write-frontend-tests/SKILL.md`:

# Write Frontend Tests

Analyze the current branch's frontend changes, plan integration tests, and write them.
## References

Before writing any tests, read the testing rules and conventions:

- `autogpt_platform/frontend/TESTING.md` — testing strategy, file locations, examples
- `autogpt_platform/frontend/src/tests/AGENTS.md` — detailed testing rules, MSW patterns, decision flowchart
- `autogpt_platform/frontend/src/tests/integrations/test-utils.tsx` — custom render with providers
- `autogpt_platform/frontend/src/tests/integrations/vitest.setup.tsx` — MSW server setup
## Step 1: Identify changed frontend files

```bash
BASE_BRANCH="${ARGUMENTS:-dev}"
cd autogpt_platform/frontend

# Get changed frontend files (excluding generated, config, and test files)
git diff "$BASE_BRANCH"...HEAD --name-only -- src/ \
  | grep -v '__generated__' \
  | grep -v '__tests__' \
  | grep -v '\.test\.' \
  | grep -v '\.stories\.' \
  | grep -v '\.spec\.'
```
Also read the diff to understand what changed:
```bash
git diff "$BASE_BRANCH"...HEAD --stat -- src/
git diff "$BASE_BRANCH"...HEAD -- src/ | head -500
```
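If you script this step rather than piping through `grep`, the same exclusions can be expressed as a single predicate. This is an illustrative sketch only; `isTestTarget` is a hypothetical name, not a helper that exists in the repo:

```typescript
// Mirrors the grep -v chain above: generated code, existing tests,
// stories, and specs are never direct test targets.
function isTestTarget(file: string): boolean {
  return !/__generated__|__tests__|\.test\.|\.stories\.|\.spec\./.test(file);
}

console.log(isTestTarget("src/app/(platform)/library/page.tsx")); // true
console.log(isTestTarget("src/app/(platform)/library/__tests__/main.test.tsx")); // false
```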
## Step 2: Categorize changes and find test targets

For each changed file, determine:

- Is it a page? (`page.tsx`) — these are the primary test targets
- Is it a hook? (`use*.ts`) — test via the page/component that uses it; avoid direct `renderHook()` tests unless it is a shared reusable hook with standalone business logic
- Is it a component? (`.tsx` in `components/`) — test via the parent page unless it's complex enough to warrant isolation
- Is it a helper? (`helpers.ts`, `utils.ts`) — unit test directly if pure logic
Priority order:

1. Pages with new/changed data fetching or user interactions
2. Components with complex internal logic (modals, forms, wizards)
3. Shared hooks with standalone business logic when UI-level coverage is impractical
4. Pure helper functions
Skip: styling-only changes, type-only changes, config changes.
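The decision list above can be applied mechanically to a diff. The helper below is hypothetical (it does not exist in the repo) and only sketches the categorization under the file-naming conventions described:

```typescript
type Target = "page" | "hook" | "component" | "helper" | "skip";

// Classify a changed file per the decision list above.
function categorize(file: string): Target {
  const name = file.split("/").pop() ?? "";
  if (name === "page.tsx") return "page"; // primary test target
  if (/^use[A-Z].*\.tsx?$/.test(name)) return "hook"; // test via consuming page
  if (/^(helpers|utils)\.ts$/.test(name)) return "helper"; // unit test if pure logic
  if (file.includes("/components/") && name.endsWith(".tsx")) return "component";
  return "skip"; // styling-only, type-only, config changes
}

console.log(categorize("src/app/(platform)/library/page.tsx")); // "page"
console.log(categorize("src/app/(platform)/library/useAgents.ts")); // "hook"
console.log(categorize("src/app/(platform)/library/helpers.ts")); // "helper"
```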
## Step 3: Check for existing tests
For each test target, check if tests already exist:
```bash
# For a page at src/app/(platform)/library/page.tsx
ls src/app/\(platform\)/library/__tests__/ 2>/dev/null

# For a component at src/app/(platform)/library/components/AgentCard/AgentCard.tsx
ls src/app/\(platform\)/library/components/AgentCard/__tests__/ 2>/dev/null
```
Note which targets have no tests (need new files) vs which have tests that need updating.
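The path convention is mechanical: tests live in a `__tests__/` directory next to the source file. A minimal sketch of that mapping; `testDirFor` is a hypothetical name used only to make it concrete:

```typescript
// Derive the conventional __tests__/ location for a source file.
function testDirFor(sourceFile: string): string {
  const dir = sourceFile.slice(0, sourceFile.lastIndexOf("/"));
  return `${dir}/__tests__/`;
}

console.log(testDirFor("src/app/(platform)/library/page.tsx"));
// "src/app/(platform)/library/__tests__/"
```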
## Step 4: Identify API endpoints used
For each test target, find which API hooks are used:
```bash
# Find generated API hook imports in the changed files
grep -rn 'from.*__generated__/endpoints' src/app/\(platform\)/library/
grep -rn 'use[A-Z].*V[12]' src/app/\(platform\)/library/
```
For each API hook found, locate the corresponding MSW handler:
```bash
# If the page uses useGetV2ListLibraryAgents, find its MSW handlers
grep -rn 'getGetV2ListLibraryAgents.*Handler' src/app/api/__generated__/endpoints/library/library.msw.ts
```
List every MSW handler you will need (200 for happy path, 4xx for error paths).
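The hook-to-handler mapping follows Orval's naming scheme, judging from the example above: strip the `use` prefix from the hook and wrap it as `get<Operation>MockHandler<status>`. A sketch of that assumption (`mockHandlerNameFor` is hypothetical, shown only to make the convention explicit):

```typescript
// Assumed Orval convention: useGetV2ListLibraryAgents has mock handlers
// named getGetV2ListLibraryAgentsMockHandler200, ...422, etc.
function mockHandlerNameFor(hookName: string, status?: number): string {
  const op = hookName.replace(/^use/, ""); // e.g. "GetV2ListLibraryAgents"
  return `get${op}MockHandler${status ?? ""}`;
}

console.log(mockHandlerNameFor("useGetV2ListLibraryAgents", 200));
// "getGetV2ListLibraryAgentsMockHandler200"
```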
## Step 5: Write the test plan
Before writing code, output a plan as a numbered list:
```text
Test plan for [branch name]:

1. src/app/(platform)/library/__tests__/main.test.tsx (NEW)
   - Renders page with agent list (MSW 200)
   - Shows loading state
   - Shows error state (MSW 422)
   - Handles empty agent list
2. src/app/(platform)/library/__tests__/search.test.tsx (NEW)
   - Filters agents by search query
   - Shows no results message
   - Clears search
3. src/app/(platform)/library/components/AgentCard/__tests__/AgentCard.test.tsx (UPDATE)
   - Add test for new "duplicate" action
```
Present this plan to the user. Wait for confirmation before proceeding. If the user has feedback, adjust the plan.
## Step 6: Write the tests
For each test file in the plan, follow these conventions:
### File structure

```tsx
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { server } from "@/mocks/mock-server";

// Import MSW handlers for endpoints the page uses
import {
  getGetV2ListLibraryAgentsMockHandler200,
  getGetV2ListLibraryAgentsMockHandler422,
} from "@/app/api/__generated__/endpoints/library/library.msw";

// Import the component under test
import LibraryPage from "../page";

describe("LibraryPage", () => {
  test("renders agent list from API", async () => {
    server.use(getGetV2ListLibraryAgentsMockHandler200());
    render(<LibraryPage />);
    expect(await screen.findByText(/my agents/i)).toBeDefined();
  });

  test("shows error state on API failure", async () => {
    server.use(getGetV2ListLibraryAgentsMockHandler422());
    render(<LibraryPage />);
    expect(await screen.findByText(/error/i)).toBeDefined();
  });
});
```
### Rules

- Use `render()` from `@/tests/integrations/test-utils` (NOT from `@testing-library/react` directly)
- Use `server.use()` to set up MSW handlers BEFORE rendering
- Use `findBy*` (async) for elements that appear after data fetching — NOT `getBy*`
- Use `getBy*` only for elements that are immediately present in the DOM
- Use `screen` queries — do NOT destructure from `render()`
- Use `waitFor` when asserting side effects or state changes after interactions
- Import `fireEvent` or `userEvent` from the test-utils for interactions
- Do NOT mock internal hooks or functions — mock at the API boundary via MSW
- Prefer Orval-generated MSW handlers and response builders over hand-built API response objects
- Do NOT use `act()` manually — `render` and `fireEvent` handle it
- Keep tests focused: one behavior per test
- Use descriptive test names that read like sentences
### Test location

```text
# For pages: __tests__/ next to page.tsx
src/app/(platform)/library/__tests__/main.test.tsx

# For complex standalone components: __tests__/ inside component folder
src/app/(platform)/library/components/AgentCard/__tests__/AgentCard.test.tsx

# For pure helpers: co-located .test.ts
src/app/(platform)/library/helpers.test.ts
```
### Custom MSW overrides
When the auto-generated faker data is not enough, override with specific data:
```ts
import { http, HttpResponse } from "msw";

server.use(
  http.get("http://localhost:3000/api/proxy/api/v2/library/agents", () => {
    return HttpResponse.json({
      agents: [{ id: "1", name: "Test Agent", description: "A test agent" }],
      pagination: { total_items: 1, total_pages: 1, page: 1, page_size: 10 },
    });
  }),
);
```
Use the proxy URL pattern `http://localhost:3000/api/proxy/api/v{version}/{path}` — this matches the MSW base URL configured in `orval.config.ts`.
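As a sketch of that pattern (the base URL here is taken from the text above, not read from `orval.config.ts`, and `proxyUrl` is a hypothetical helper, not part of the repo):

```typescript
// Build the MSW-matched proxy URL for a versioned API path.
function proxyUrl(version: 1 | 2, path: string): string {
  return `http://localhost:3000/api/proxy/api/v${version}/${path}`;
}

console.log(proxyUrl(2, "library/agents"));
// "http://localhost:3000/api/proxy/api/v2/library/agents"
```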
## Step 7: Run and verify
After writing all tests:
```bash
cd autogpt_platform/frontend
pnpm test:unit --reporter=verbose
```
If tests fail:
- Read the error output carefully
- Fix the test (not the source code, unless there is a genuine bug)
- Re-run until all pass
Then run the full checks:
```bash
pnpm format
pnpm lint
pnpm types
```