# Vizro dashboard-build

Use this skill to build, implement, and test Vizro dashboards (Phase 2). Activate when the user wants to create a working app, says "just build it", or has data ready for implementation. Requires spec files from the dashboard-design skill (Phase 1), or user confirmation to skip design.
Install the skill (source: `vizro-e2e-flow/skills/dashboard-build/SKILL.md` in https://github.com/mckinsey/vizro):

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/mckinsey/vizro "$T" && mkdir -p ~/.claude/skills && cp -r "$T/vizro-e2e-flow/skills/dashboard-build" ~/.claude/skills/mckinsey-vizro-dashboard-build && rm -rf "$T"
```

## Prerequisites

Requires Phase 1 spec files from the dashboard-design skill: `spec/1_information_architecture.yaml`, `spec/2_interaction_ux.yaml`, and `spec/3_visual_design.yaml`. If these do not exist, ask the user whether to run Phase 1 first or proceed without specs.
## Guidelines
- Use your native tools to understand the data well, especially if you build custom charts or when you use specific selectors.
- If the user asks for an example, simply copy the example app and run it. Do not include your own data or change the example.
- When executing any script mentioned below for the first time, it may take a while to install dependencies. Plan accordingly before taking any rash actions.
- When iterating on the dashboard after completing all steps, do not forget the key points below, especially spec compliance and terminal handling: always keep all specs up to date, and always check that the terminal output is clean after each iteration.
- Execute all scripts from this skill, and the `app.py` you will create, with `uv run <script_name>.py` or `uv run app.py` - this will ensure you use the correct dependencies and versions.
- ABSOLUTELY NEVER type ANY commands (including `sleep`, `echo`, or anything else) in the terminal where the dashboard app is running, even if you started it with `isBackground=true`. This WILL kill the dashboard process. The dashboard startup takes time - be patient and let it run undisturbed.
- Step 2 (Testing) is critical — do not skip it. Use Playwright MCP if available, otherwise use any browser automation tool in your environment.
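If no managed background terminal is available, one way to respect the "never touch the running terminal" rule is to launch the app with its output redirected to a log file, then read the log instead of the terminal. This is a sketch, not one of the skill's scripts; the command and log path are illustrative:

```python
import subprocess

def launch_app(cmd, log_path="dashboard.log"):
    """Start the dashboard with stdout/stderr redirected to a log file,
    so the launch terminal never needs to be touched again."""
    log_file = open(log_path, "w")
    return subprocess.Popen(cmd, stdout=log_file, stderr=subprocess.STDOUT)

# proc = launch_app(["uv", "run", "app.py"])  # then poll dashboard.log for startup errors
```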
## Spec Files: Documenting Decisions

IMPORTANT: Each step produces a spec file in the `spec/` directory to document reasoning, enable collaboration, and allow resumption in future sessions. Create the `spec/` directory if it is not already present at the root of the project.
## Step 1: Build dashboard
- You MUST ALWAYS copy the example app over and modify it - this results in fewer errors!
- Learn about the Vizro models by executing the schema fetching script. ALWAYS DO this for all models that you need - do NOT assume you know them. Execute the script like so: `uv run ./scripts/get_model_json_schema.py <model_name> <model_name2> ...` where `<model_name>` is the name of the model you want to get the schema for (prints the full JSON schema for each model to stdout). You can get an overview of what is available by calling the overview script like so: `uv run ./scripts/get_overview_vizro_models.py` (prints all available model names with one-line descriptions to stdout).
- Build the dashboard config by changing the copied example app. Important: Very often normal plotly express charts will not suffice as they are too simple. In that case, refer to the custom charts guide to create more complex charts. These MUST be added to the correct section in the python app. Call the custom chart function from the `Graph` model in your dashboard app.
- Run your dashboard app with `uv run <your_dashboard_app>.py`. CRITICAL: After running this command, DO NOT run ANY other commands in that terminal. The dashboard takes time to start up (sometimes 10-30 seconds).
- You MUST read the terminal to check for any errors, but do not put commands like `sleep` in it. Fix any warnings and, even more importantly, any errors you encounter. ONLY once you see the dashboard running, inform the user. NEVER run any commands in that terminal after starting the dashboard.
- When you iterate, there is no need to kill the dashboard, as we are using debug mode. Just save the file and it will reload automatically. Check the terminal occasionally for any failures. If it fails, you need to restart the dashboard.
### Optimizations and common errors
- Colors: For Plotly charts and KPI cards, do not add colors in code — Vizro template defaults apply automatically. Only add chart colors if `spec/3_visual_design.yaml` defines `color_decisions`. For AG Grid cell styling (conditional formatting, heatmaps), use `from vizro.themes import palettes, colors` — never invent hex values. See the selecting-vizro-charts skill.
- Data loading: For dashboards needing data refresh (databases, APIs) or performance optimization, see the data management guide for static vs dynamic data, caching, and best practices.
- KPI cards: Use the built-in `kpi_card`/`kpi_card_reference` in the `Figure` model only. Never rebuild them as custom charts (exception: dynamic text). See the selecting-vizro-charts skill.
### REQUIRED OUTPUT: spec/4_implementation.yaml

Save this file BEFORE proceeding to Step 2:

```yaml
# spec/4_implementation.yaml
implementation:
  app_file: <name>.py
  data_files:
    - [list of data files used]
  data_type: static/dynamic  # static for DataFrames, dynamic for data_manager functions
  data_sources:
    - name: [data source name]
      type: csv/database/api/function
      caching: true/false
      refresh_strategy: [if dynamic: cache timeout or refresh trigger]
  spec_compliance:
    followed_specs: true/false
    deviations:
      - spec_item: [What was specified]
        actual: [What was implemented]
        reason: [Why the deviation was necessary]
  custom_charts:
    - name: [Function name]
      purpose: [What it does]
```
### Validation Checklist

Before proceeding to Step 2, verify against spec files:

- All specs from `spec/1_information_architecture.yaml`, `spec/2_interaction_ux.yaml`, and `spec/3_visual_design.yaml` are implemented, if specs exist
- You have read the terminal output of the dashboard app for errors and warnings, and you have not put any commands in the terminal after starting the app
- Any deviations are documented in `spec/4_implementation.yaml`
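The first checklist item can be automated with a small helper that reports which Phase 1 spec files are absent. This is a sketch, not one of the skill's scripts:

```python
from pathlib import Path

PHASE_1_SPECS = [
    "spec/1_information_architecture.yaml",
    "spec/2_interaction_ux.yaml",
    "spec/3_visual_design.yaml",
]

def missing_specs(root=".", specs=PHASE_1_SPECS):
    """Return the spec files absent under `root`, to decide whether
    spec compliance can be verified or must be skipped."""
    return [s for s in specs if not (Path(root) / s).exists()]
```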
## Step 2: Testing
This step is critical for producing bug-free dashboards. Do NOT skip it.
While conducting the tests below, go back to Step 1 to fix any issues you find, then return here.
### Automated Code Validation
Run these validation scripts against your `app.py` to catch common issues:

- Color validation: `uv run ./scripts/validate_colors.py .` — Checks for hardcoded colors (color_discrete_map, hex codes, plot_bgcolor) that bypass Vizro theming. Fix any FAIL. If the user explicitly asked for custom colors, add `--custom-colors-requested` to skip color checks.
- Aggregation validation: `uv run ./scripts/validate_aggregation.py .` — Checks that bar/line charts use pre-aggregated data via `@capture("graph")`, not raw detail rows passed to inline `px.bar`/`px.line`. Fix any FAIL.
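What the aggregation check expects, as a minimal pandas sketch (the data and column names are illustrative):

```python
import pandas as pd

# Raw detail rows - one row per transaction (illustrative data)
df = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb", "Feb"],
    "sales": [100, 50, 80, 20, 10],
})

# Pre-aggregate so the chart receives one row per bar/point
agg = df.groupby("month", as_index=False)["sales"].sum()

# fig = px.bar(agg, x="month", y="sales")  # pass `agg` to the chart, never `df`
```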
### Browser Testing
Navigate the running dashboard to catch two types of errors that code review alone cannot find: (1) console errors on launch, and (2) callback errors when navigating between pages.
- Determine which browser automation tool is available:
  - Playwright MCP tools available? → Use them directly to navigate, click pages, and check the console
  - No Playwright MCP? → Install the Python package with `uv pip install playwright && uv run playwright install chromium`, then write a test script
- Using your chosen tool, perform these checks:
  - Navigate to the dashboard URL (e.g., `http://localhost:8050`)
  - Click through all pages
  - Check the browser console for errors
  - Fix any errors found, then retest
### Advanced Testing flow
- Take a screenshot of each page, compare to specs and especially wireframes
- Document any discrepancies
Important things to check:
- Line charts are readable, and not a mess due to lack of aggregation
- Graphs are legible and not squashed due to the layout
### REQUIRED OUTPUT: spec/5_test_report.yaml

Save this file to complete the project:

```yaml
# spec/5_test_report.yaml
testing:
  launch:
    successful: true/false
    url: http://localhost:8050
    errors: []
  navigation:
    all_pages_work: true/false
    issues: []
  console:
    no_errors: true/false
    errors_found: []
  screenshot_tests:
    performed: true/false
    pages_tested: []
    discrepancies:
      - page: [Page name]
        issue: [Description of visual issue]
        fixed: true/false
        notes: [Fix details or reason not fixed]
  requirements_met: true/false
  dashboard_ready: true/false
```
## Done When

- Validation scripts (`validate_colors.py`, `validate_aggregation.py`) both PASS
- Dashboard launches without errors, no console errors, no callback errors on page navigation
- User confirms requirements are met
- All spec files from this Phase 2 are saved in the `spec/` directory
## Reference Files
| Reference | When to Load |
|---|---|
| selecting-vizro-charts skill | Colors, KPI cards, custom charts, Plotly conventions |
| writing-vizro-yaml skill | YAML syntax, component patterns, data_manager, pitfalls |
| data_management.md | Static vs dynamic data, caching, databases, APIs |
| custom_charts_guide.md | Implementing custom charts |
| example_app.py | Starting template for dashboard implementation |
| validate_colors.py | Automated check for hardcoded colors in app.py |
| validate_aggregation.py | Automated check for pre-aggregation in bar/line charts |