Mastra smoke-test
Create a Mastra project with create-mastra and smoke-test Mastra Studio in Chrome using the Chrome MCP server.

To install the skill locally, clone the repository:

```bash
git clone https://github.com/mastra-ai/mastra
```

or use a one-liner that copies only the skill into your skills directory:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/mastra-ai/mastra "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/smoke-test" ~/.claude/skills/mastra-ai-mastra-smoke-test && rm -rf "$T"
```

.claude/skills/smoke-test/SKILL.md

Smoke Test Skill
Creates a new Mastra project using `create-mastra@<tag>` and performs smoke testing of Mastra Studio in Chrome.
This skill is for Claude Code with the Chrome MCP server. For MastraCode with built-in browser tools, use `mastracode-smoke-test` instead.
Usage
```
/smoke-test --directory <path> --name <project-name> --tag <version> [--pm <package-manager>] [--llm <provider>]
/smoke-test -d <path> -n <project-name> -t <version> [-p <package-manager>] [-l <provider>]
```
Parameters
| Parameter | Short | Description | Required | Default |
|---|---|---|---|---|
| `--directory` | `-d` | Parent directory where the project will be created | Yes | - |
| `--name` | `-n` | Project name (created as a subdirectory) | Yes | - |
| `--tag` | `-t` | Version tag for create-mastra (e.g., `latest`, `alpha`, `0.10.6`) | Yes | - |
| `--pm` | `-p` | Package manager: `npm`, `yarn`, `pnpm`, or `bun` | No | `npm` |
| `--llm` | `-l` | LLM provider: `openai`, `anthropic`, `groq`, `google`, `cerebras`, `mistral` | No | `openai` |
Examples
```bash
# Minimal (required params only)
/smoke-test -d ~/projects -n my-test-app -t latest

# Full specification
/smoke-test --directory ~/projects --name my-test-app --tag alpha --pm pnpm --llm anthropic

# Using short flags
/smoke-test -d ./projects -n smoke-test-app -t 0.10.6 -p bun -l openai
```
Step 0: Parameter Validation (MUST RUN FIRST)
CRITICAL: Before proceeding, parse the ARGUMENTS and validate:
- Parse arguments from the ARGUMENTS string provided above
- Check required parameters:
  - `--directory` or `-d`: REQUIRED - fail if missing
  - `--name` or `-n`: REQUIRED - fail if missing
  - `--tag` or `-t`: REQUIRED - fail if missing
- Apply defaults for optional parameters:
  - `--pm` or `-p`: Default to `npm` if not provided
  - `--llm` or `-l`: Default to `openai` if not provided
- Validate values:
  - `pm` must be one of: `npm`, `yarn`, `pnpm`, `bun`
  - `llm` must be one of: `openai`, `anthropic`, `groq`, `google`, `cerebras`, `mistral`
  - `directory` must exist (or will be created)
  - `name` should be a valid directory name (no spaces or special characters)
If validation fails: Stop and show usage help with the missing/invalid parameters.
If `-h` or `--help` is passed: Show this usage information and stop.
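The validation rules above can be sketched as a small POSIX shell routine. This is an illustrative helper, not part of the skill itself; the flag names and allowed values come from the usage section, while `parse_args` and its variable names are made up for this sketch:

```bash
#!/usr/bin/env sh
# Hypothetical sketch of Step 0: parse flags, apply defaults, validate values.
parse_args() {
  pm=npm; llm=openai; directory=""; name=""; tag=""
  while [ $# -gt 0 ]; do
    case "$1" in
      -d|--directory) directory="$2"; shift 2 ;;
      -n|--name)      name="$2"; shift 2 ;;
      -t|--tag)       tag="$2"; shift 2 ;;
      -p|--pm)        pm="$2"; shift 2 ;;
      -l|--llm)       llm="$2"; shift 2 ;;
      -h|--help)
        echo "usage: /smoke-test -d <path> -n <name> -t <tag> [-p <pm>] [-l <llm>]"
        return 1 ;;
      *) echo "unknown argument: $1" >&2; return 1 ;;
    esac
  done
  # Required parameters: fail if any is missing
  [ -n "$directory" ] && [ -n "$name" ] && [ -n "$tag" ] || {
    echo "missing required parameter (-d, -n, and -t are required)" >&2; return 1; }
  # Allowed values
  case "$pm" in npm|yarn|pnpm|bun) ;; *) echo "invalid --pm: $pm" >&2; return 1 ;; esac
  case "$llm" in openai|anthropic|groq|google|cerebras|mistral) ;;
    *) echo "invalid --llm: $llm" >&2; return 1 ;; esac
  # Project name: no spaces or special characters
  case "$name" in *[!A-Za-z0-9._-]*) echo "invalid --name: $name" >&2; return 1 ;; esac
}

parse_args -d ~/projects -n my-test-app -t latest && echo "pm=$pm llm=$llm"
```

Running it with only the required flags shows the defaults being applied (`pm=npm llm=openai`); any missing required flag or out-of-range value makes the function return non-zero.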
Prerequisites
This skill requires the Chrome MCP server (Claude-in-Chrome) for browser automation. Ensure it's configured and running.
The Chrome MCP server provides tools such as `tabs_create_mcp`, `tabs_context_mcp`, `navigate_mcp`, `click_mcp`, `type_mcp`, and `screenshot_mcp`.
Execution Steps
Step 1: Create the Mastra Project
Run the create-mastra command with explicit parameters to avoid interactive prompts:
```bash
# For npm
npx create-mastra@<tag> <project-name> -c agents,tools,workflows,scorers -l <llmProvider> -e

# For yarn
yarn create mastra@<tag> <project-name> -c agents,tools,workflows,scorers -l <llmProvider> -e

# For pnpm
pnpm create mastra@<tag> <project-name> -c agents,tools,workflows,scorers -l <llmProvider> -e

# For bun
bunx create-mastra@<tag> <project-name> -c agents,tools,workflows,scorers -l <llmProvider> -e
```
Flags explained:
- `-c agents,tools,workflows,scorers` - Include all components
- `-l <provider>` - Set the LLM provider
- `-e` - Include example code
Being explicit with all parameters ensures the CLI runs non-interactively.
Wait for the installation to complete. This may take 1-2 minutes depending on network speed.
Step 2: Verify Project Structure
After creation, verify the project has:
- `package.json` with mastra dependencies
- `src/mastra/index.ts` exporting a Mastra instance
- `.env` file (may need to be created)
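The structure check above can be automated with a short script. This is a sketch: the `PROJECT` path is a placeholder for `<directory>/<project-name>`, and the `check` helper is invented here, not part of the skill:

```bash
# Minimal post-creation check; point PROJECT at <directory>/<project-name>.
PROJECT=${PROJECT:-"$HOME/projects/my-test-app"}

check() { [ -e "$PROJECT/$1" ] && echo "ok: $1" || echo "MISSING: $1"; }

check package.json
check src/mastra/index.ts
# .env may legitimately be absent at this point; Step 3 creates it if needed
[ -f "$PROJECT/.env" ] || echo "note: .env not found (Step 3 will handle it)"
# Confirm the mastra dependency landed in package.json
grep -q '@mastra/core' "$PROJECT/package.json" 2>/dev/null \
  && echo "ok: @mastra/core dependency" \
  || echo "MISSING: @mastra/core in package.json"
```

Any `MISSING:` line means project creation did not complete cleanly and is worth investigating before starting the dev server.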
Step 2.5: Add Browser Agent for Browser Testing
To test browser functionality, add a browser-enabled agent:
- Install browser packages:
```bash
<pm> add @mastra/stagehand
# or for deterministic browser automation:
<pm> add @mastra/agent-browser
```
- Create `browser-agent.ts` in `src/mastra/agents/`:

```ts
import { Agent } from '@mastra/core/agent';
import { Memory } from '@mastra/memory';
import { StagehandBrowser } from '@mastra/stagehand';

export const browserAgent = new Agent({
  id: 'browser-agent',
  name: 'Browser Agent',
  instructions: `You are a helpful assistant that can browse the web to find information.`,
  model: '<provider>/<model>', // e.g., 'openai/gpt-4o'
  memory: new Memory(),
  browser: new StagehandBrowser({
    headless: false,
  }),
});
```
- Update `index.ts` to register the browser agent:

```ts
import { browserAgent } from './agents/browser-agent';

// In the Mastra config:
agents: { weatherAgent, browserAgent },
```
Step 3: Configure Environment Variables
Based on the selected LLM provider, check for the required API key:
| Provider | Required Environment Variable |
|---|---|
| openai | `OPENAI_API_KEY` |
| anthropic | `ANTHROPIC_API_KEY` |
| groq | `GROQ_API_KEY` |
| google | `GOOGLE_GENERATIVE_AI_API_KEY` |
| cerebras | `CEREBRAS_API_KEY` |
| mistral | `MISTRAL_API_KEY` |
Check in this order:
1. Check global environment first: Run `echo $<ENV_VAR_NAME>` to see if the key is already set globally.
   - If set globally, the project will inherit it - no `.env` file needed
   - Skip to Step 4
2. Check project `.env` file: If not set globally, check if `.env` exists in the project and contains the key.
3. Ask user only if needed: If the key is not available globally or in `.env`:
   - Ask the user for the API key
   - Create the `.env` file with the provided key
Only check for the ONE key matching the selected provider - don't check for all providers.
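The three-step lookup can be sketched as follows. `OPENAI_API_KEY` is just the example for the default provider; substitute the variable for the chosen one. The `lookup_key` helper is invented for this sketch:

```bash
# Sketch of the Step 3 lookup order for a single provider key.
KEY_NAME=OPENAI_API_KEY
ENV_FILE=.env

lookup_key() {
  # 1. Global environment
  eval "val=\${$KEY_NAME:-}"
  if [ -n "$val" ]; then
    echo "found in environment"
    return 0
  fi
  # 2. Project .env file
  if [ -f "$ENV_FILE" ] && grep -q "^${KEY_NAME}=" "$ENV_FILE"; then
    echo "found in $ENV_FILE"
    return 0
  fi
  # 3. Neither: the skill should now prompt the user and write .env
  echo "not found: ask the user, then write ${KEY_NAME}=<value> to $ENV_FILE"
  return 1
}

lookup_key || true
```

Only the third branch requires interrupting the user, which is why the global environment is checked first.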
Step 4: Start the Development Server
Navigate to the project directory and start the dev server:
```bash
cd <directory>/<project-name>
<packageManager> run dev
```
The server typically starts on `http://localhost:4111`. Wait for the server to be ready before proceeding.
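Rather than sleeping a fixed time, you can poll the port until it answers. This is a hypothetical helper (it assumes `curl` is available), not part of the skill or Mastra:

```bash
# Poll a URL until it responds, with a timeout in seconds.
wait_for_url() {
  url=$1
  tries=${2:-60}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf -o /dev/null "$url"; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# After starting the dev server in the background:
# wait_for_url http://localhost:4111 60
```

Proceeding to the browser tests only after `wait_for_url` succeeds avoids the flaky "Browser can't connect" failures described in Troubleshooting.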
Step 5: Smoke Test the Studio
Use the Chrome browser automation tools to test the Mastra Studio.
5.1 Initial Setup
- Get browser context using `tabs_context_mcp`
- Create a new tab using `tabs_create_mcp`
- Navigate to `http://localhost:4111`
5.2 Test Checklist
Perform the following smoke tests using the Chrome automation tools:
Navigation & Basic Loading
- Studio loads successfully (page contains "Mastra Studio" or shows agents list)
- Take a screenshot of the home page
Agents Page (`/agents`)
- Navigate to agents page
- Verify at least one agent is listed (the example agent from `--default`)
- Take a screenshot
Agent Detail (`/agents/<agentId>/chat`)
- Click on an agent to view details
- Verify the agent overview panel loads
- Verify model settings panel is visible
- Take a screenshot
Agent Chat
- Send a test message to the agent (e.g., "What's the weather in Tokyo?")
- Wait for response
- Verify response appears in the chat
- Take a screenshot of the conversation
Browser Agent (`/agents/browser-agent/chat`) - if browser agent was added
- Navigate to the browser-agent
- Send a message: "Go to example.com and tell me what you see"
- Verify the agent launches a browser and extracts content
- Verify response includes page content
- Take a screenshot
Tools Page (`/tools`)
- Navigate to tools page
- Verify tools list loads (should show get-weather tool)
- Take a screenshot
Tool Execution (`/tools/get-weather`)
- Click on the get-weather tool to open detail page
- Find the city input field and enter a test city (e.g., "Tokyo")
- Click Submit button
- Wait for execution to complete
- Verify JSON output appears with weather data (temp, condition, etc.)
- Take a screenshot
Workflows Page (`/workflows`)
- Navigate to workflows page
- Verify workflows list loads (should show weather-workflow)
- Take a screenshot
Workflow Execution (`/workflows/weather-workflow`)
- Click on the weather-workflow to open detail page
- Verify visual graph displays (shows workflow steps)
- Find the city input field and enter a test city (e.g., "London")
- Click Run button
- Wait for execution to complete
- Verify steps show success (green checkmarks)
- Click to view JSON output modal
- Verify execution details with timing appear
- Take a screenshot
Settings Page (`/settings`)
- Navigate to settings page
- Verify settings page loads
- Take a screenshot
Observability Page (`/observability`)
- Navigate to observability page
- Verify traces list shows recent activity (from previous tests)
- Click on a trace to view details
- Verify timeline view shows steps and timing
- Take a screenshot
Scorers Page (`/evaluation?tab=scorers`)
- Navigate to `/evaluation?tab=scorers` (NOT `/scorers` - that route doesn't exist)
- Verify scorers list loads (shows 3 example scorers)
- Take a screenshot
Additional Pages (verify load only)
- Templates page (`/templates`) - Gallery of starter templates
- Request Context page (`/request-context`) - JSON editor
- Processors page (`/processors`) - Empty state OK
- MCP Servers page (`/mcps`) - Empty state OK
5.3 Report Results
After completing all tests, provide a summary:
- Total tests passed/failed
- Any errors encountered
- Screenshots captured
- Recommendations for issues found
Quick Reference
| Step | Action |
|---|---|
| Create Project | `npx create-mastra@<tag> <project-name> -c agents,tools,workflows,scorers -l <provider> -e` |
| Install Deps | Automatic during creation |
| Set Env Vars | Check global env first, then `.env`, ask user only if needed |
| Start Server | `<pm> run dev` |
| Studio URL | `http://localhost:4111` |
Troubleshooting
Server won't start
- Verify `.env` has the required API key
- Check if port 4111 is available
- Try `<pm> install` to reinstall dependencies
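The port-availability check can be done with `lsof` on macOS/Linux. The `port_in_use` helper is invented for this sketch and assumes `lsof` is installed:

```bash
# Check whether something is already listening on the Studio port.
port_in_use() { lsof -nP -iTCP:"$1" -sTCP:LISTEN >/dev/null 2>&1; }

if port_in_use 4111; then
  echo "port 4111 is busy - stop the other process or run the dev server on another port"
else
  echo "port 4111 is free"
fi
```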
Browser can't connect
- Wait a few seconds for server to fully start
- Check terminal for server ready message
- Verify no firewall blocking localhost
Agent chat fails
- Verify API key is valid
- Check server logs for errors
- Ensure LLM provider API is accessible
Browser agent fails
- Ensure Playwright browsers are installed: `pnpm exec playwright install chromium`
- Check that no other browser instance is blocking
Studio Routes
| Feature | Route |
|---|---|
| Agents | `/agents` |
| Workflows | `/workflows` |
| Tools | `/tools` |
| Evaluation | `/evaluation` |
| Scorers | `/evaluation?tab=scorers` |
| Observability | `/observability` |
| Logs | `/logs` |
| MCP Servers | `/mcps` |
| Processors | `/processors` |
| Templates | `/templates` |
| Request Context | `/request-context` |
| Settings | `/settings` |
Notes
- The `-e` flag includes example agents, making smoke testing meaningful
- If the user doesn't specify an LLM provider, default to OpenAI as it's most common
- Take screenshots at each major step for documentation/debugging
- Keep the dev server running in the background during testing
- Always use explicit flags (`-c`, `-l`, `-e`) to ensure non-interactive execution
- Browser agent testing validates the new browser automation features
- Observability traces appear automatically after running agents or workflows