# matlab-testing

Generate and run MATLAB unit tests using matlab.unittest and matlab.uitest. Parameterized tests, fixtures, mocking, coverage analysis, CI/CD with buildtool, app testing with gestures. Use when creating tests, writing test classes, running test suites, checking coverage, testing apps, or validating MATLAB code.

Install by cloning the full toolkit:

```shell
git clone https://github.com/matlab/matlab-agentic-toolkit
```

Or copy only this skill into your local skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/matlab/matlab-agentic-toolkit "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills-catalog/matlab-core/matlab-testing" ~/.claude/skills/matlab-matlab-agentic-toolkit-matlab-testing && rm -rf "$T"
```

Skill file: skills-catalog/matlab-core/matlab-testing/SKILL.md

## Testing
Generate, structure, and run MATLAB unit tests using the
matlab.unittest framework. Covers class-based tests, parameterized testing, fixtures, mocking, coverage analysis, CI/CD integration, and app testing via MCP.
## When to Use
- User asks to write tests for a MATLAB function or class
- User wants to run an existing test suite
- User needs coverage analysis or CI/CD configuration
- Test-driven development — writing tests before implementation
- Testing App Designer apps with programmatic gestures (see reference/app-testing-guidance.md)
## When NOT to Use
- Testing Simulink models — use Simulink test skills
- Performance benchmarking — use profiling workflows
## Must-Follow Rules

- Present a test plan first — For non-trivial test suites, propose test methods and edge cases for user approval before writing code
- Always use class-based tests — Every test file must inherit from `matlab.unittest.TestCase`. Never use script-based tests
- No logic in test methods — No `if`, `switch`, `for`, or `try/catch`. Follow Arrange-Act-Assert. If a test needs conditionals, split into separate methods
- Test public interfaces, not implementation — Never test private methods directly
- Execute via MCP — Use `run_matlab_test_file` or `evaluate_matlab_code` to run tests
## Workflow

### Simple tests (clear behavior, limited scope)

- Briefly state what you'll test (methods + key edge cases)
- Write the test file after user confirms

### Standard tests (large codebase, multiple files)

- Gather requirements — Code to test, expected behaviors, error conditions, scope, dependencies
- Present test plan — List test methods, edge cases, parameterization strategy for approval
- Implement — Write tests following the patterns below
- Run — Execute via the `run_matlab_test_file` MCP tool
- Check coverage — Identify untested paths, add tests
## Key Functions

| Category | Functions | Purpose |
|---|---|---|
| Equality | `verifyEqual`, `verifyNotEqual` | Compare values (use `AbsTol` for floats) |
| Boolean | `verifyTrue`, `verifyFalse` | Check logical conditions |
| Size/type | `verifySize`, `verifyLength`, `verifyClass` | Structural checks |
| Errors | `verifyError` | Confirm error is thrown with correct ID |
| Warnings | `verifyWarning`, `verifyWarningFree` | Check warning behavior |
| Infra | `runtests`, `testsuite`, `TestRunner` | Run and organize tests |
| Coverage | `CodeCoveragePlugin`, `CoverageResult` | Measure test coverage |
## Qualification Levels

| Level | On failure | When to use |
|---|---|---|
| `verify*` | Continues test | Default — most assertions |
| `assert*` | Stops current test | Setup validation |
| `fatalAssert*` | Stops entire suite | Environment preconditions |
| `assume*` | Skips test | Conditional execution (e.g., toolbox check) |
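The four levels correspond to method prefixes on the `TestCase`. A minimal sketch of how they combine within one test (the toolbox license check is illustrative only):

```matlab
methods (Test)
    function testStatsHelper(testCase)
        % assume* -- skip the test entirely when a precondition is unmet
        testCase.assumeTrue(license('test', 'Statistics_Toolbox') == 1, ...
            'Statistics Toolbox not available');

        % assert* -- abort this test if the arranged data is unusable
        data = rand(100, 1);
        testCase.assertNotEmpty(data);

        % verify* -- record failures but keep running remaining checks
        testCase.verifyGreaterThanOrEqual(min(data), 0);
        testCase.verifyLessThanOrEqual(max(data), 1);
    end
end
```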
## Patterns

### Basic Test Class

```matlab
classdef computeAreaTest < matlab.unittest.TestCase
    %computeAreaTest Tests for the computeArea function.

    methods (Test)
        function testSquare(testCase)
            result = computeArea(5, 5);
            testCase.verifyEqual(result, 25);
        end

        function testFloatingPoint(testCase)
            result = computeArea(1/3, 3);
            testCase.verifyEqual(result, 1, AbsTol=1e-12);
        end

        function testNegativeInputErrors(testCase)
            testCase.verifyError( ...
                @() computeArea(-1, 5), 'computeArea:negativeInput');
        end
    end
end
```
### Parameterized Tests

Parameterize only when assertion logic is identical across all cases — only the data varies. Use a struct for readable test names:

```matlab
classdef unitConverterTest < matlab.unittest.TestCase
    properties (TestParameter)
        conversionCase = struct( ...
            'freezing', struct('input', 0, 'expected', 32), ...
            'boiling', struct('input', 100, 'expected', 212), ...
            'bodyTemp', struct('input', 37, 'expected', 98.6));
    end

    methods (Test)
        function testCelsiusToFahrenheit(testCase, conversionCase)
            result = celsiusToFahrenheit(conversionCase.input);
            testCase.verifyEqual(result, conversionCase.expected, AbsTol=1e-10);
        end
    end
end
```
For advanced parameterization (combinations, dynamic parameters, `ClassSetupParameter`), see reference/parameterized-tests-guidance.md.
### Setup, Teardown, and Fixtures

Prefer `addTeardown` over `TestMethodTeardown` blocks. Use `PathFixture` to add source folders:

```matlab
classdef fileProcessorTest < matlab.unittest.TestCase
    methods (TestClassSetup)
        function addSourceToPath(testCase)
            srcFolder = fullfile(fileparts(fileparts(mfilename('fullpath'))), 'src');
            testCase.applyFixture(matlab.unittest.fixtures.PathFixture(srcFolder, ...
                IncludingSubfolders=true));
        end
    end

    methods (Test)
        function testProcessFile(testCase)
            tmpDir = string(tempname);
            mkdir(tmpDir);
            testCase.addTeardown(@() rmdir(tmpDir, 's'));
            testFile = fullfile(tmpDir, "data.csv");
            writematrix(rand(10, 3), testFile);

            result = processFile(testFile);

            testCase.verifySize(result, [10 3]);
        end
    end
end
```
For built-in fixtures, custom fixtures, and shared fixtures, see reference/fixtures-guidance.md.
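As an alternative to a manual `tempname`/`addTeardown` pattern, the built-in `TemporaryFolderFixture` creates a scratch folder and deletes it on teardown. A sketch, assuming the same hypothetical `processFile` function:

```matlab
methods (Test)
    function testProcessFileWithFixture(testCase)
        import matlab.unittest.fixtures.TemporaryFolderFixture
        % Fixture teardown removes the folder, even if the test fails
        fixture = testCase.applyFixture(TemporaryFolderFixture);
        testFile = fullfile(fixture.Folder, "data.csv");
        writematrix(rand(10, 3), testFile);

        testCase.verifySize(processFile(testFile), [10 3]);
    end
end
```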
### Determinism

Seed the RNG and restore it in teardown for reproducible tests:

```matlab
methods (TestMethodSetup)
    function resetRandomSeed(testCase)
        originalRng = rng;
        testCase.addTeardown(@() rng(originalRng));
        rng(42, "twister");
    end
end
```
### Test Tags

Use `TestTags` for selective execution:

```matlab
methods (Test, TestTags = {'Unit'})
    function testFastCalculation(testCase)
        % ...
    end
end

methods (Test, TestTags = {'Integration', 'Slow'})
    function testFullPipeline(testCase)
        % ...
    end
end
```

Run by tag: `runtests('tests', Tag='Unit')` or `runtests('tests', ExcludeTag='Slow')`.
## Running Tests

### Via MCP

Use the `run_matlab_test_file` MCP tool for test files. For inline runs with filtering:

```matlab
results = runtests('tests');                       % all tests in folder
results = runtests('tests', Tag='Unit');           % by tag
results = runtests('tests', Name='*Calculator*');  % by name pattern
results = runtests('tests', UseParallel=true);     % parallel execution
results = runtests('tests', Strict=true);          % warnings = failures
```
### Analyzing Results

```matlab
disp(results);
for r = results([results.Failed])
    fprintf('\nFAILED: %s\n', r.Name);
    disp(r.Details.DiagnosticRecord.Report);
end
```
## Coverage Analysis

```matlab
import matlab.unittest.TestRunner
import matlab.unittest.plugins.CodeCoveragePlugin
import matlab.unittest.plugins.codecoverage.CoverageResult
import matlab.unittest.plugins.codecoverage.CoverageReport

runner = TestRunner.withTextOutput;
covFormat = CoverageResult;
runner.addPlugin(CodeCoveragePlugin.forFolder('src', ...
    Producing=[covFormat, CoverageReport('coverage-report')]));
results = runner.run(testsuite('tests'));

covResults = covFormat.Result;
disp(covResults);
```

For coverage gap analysis, use the `printCoverageGaps` script in reference/test-execution-guidance.md.
## CI/CD Integration

Use `buildtool` with a `buildfile.m` for CI pipelines. See reference/test-execution-guidance.md for buildfile.m templates and CI configs (GitHub Actions, Azure DevOps, GitLab CI).
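A minimal `buildfile.m` might look like the sketch below (the `src`/`tests` folder layout and result file paths are assumptions; see the reference for complete templates):

```matlab
function plan = buildfile
% Build plan with a single "test" task that runs the suite with coverage
import matlab.buildtool.tasks.TestTask

plan = buildplan(localfunctions);
plan("test") = TestTask("tests", SourceFiles="src", ...
    TestResults="results/junit.xml", ...
    CodeCoverageResults="results/coverage.xml");
plan.DefaultTasks = "test";
end
```

In CI, invoke it with `buildtool test` (or plain `buildtool` to run the default tasks).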
## App Designer Testing

For testing apps with programmatic UI gestures (`press`, `choose`, `type`, `drag`), see reference/app-testing-guidance.md.

Key points:

- Inherit from `matlab.uitest.TestCase` (not `matlab.unittest.TestCase`)
- Call `drawnow` after app creation, before the first gesture
- Compare `uilabel.Text` with char (`'text'`), not string (`"text"`)
- Compare `.Enable` with `matlab.lang.OnOffSwitchState.on`/`.off`
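The key points above combine into a gesture test like this sketch (the app `myCalculatorApp` and its component names are hypothetical):

```matlab
classdef myCalculatorAppTest < matlab.uitest.TestCase
    methods (Test)
        function testDoubleButton(testCase)
            app = myCalculatorApp;                  % launch the app under test
            testCase.addTeardown(@() delete(app));  % close it even on failure
            drawnow;                                % let components finish rendering

            testCase.type(app.InputEditField, '21');
            testCase.press(app.DoubleButton);

            % uilabel Text is char, so compare against 'char', not "string"
            testCase.verifyEqual(app.ResultLabel.Text, '42');
        end
    end
end
```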
## References
Load these on demand — most tests only need what's in this file.
| Load when... | Reference |
|---|---|
| Tests need setup/teardown, temp dirs, path management, shared state | reference/fixtures-guidance.md |
| Floating-point tolerance selection, constraint objects, custom constraints | reference/constraints-guidance.md |
| Multiple parameters, dynamic parameters, combination strategies | reference/parameterized-tests-guidance.md |
| Code depends on external services, needs mock objects or dependency injection | reference/mocking-guidance.md |
| Running tests in CI, buildtool config, coverage gap analysis | reference/test-execution-guidance.md |
| Testing App Designer apps with gestures, dialogs, async callbacks | reference/app-testing-guidance.md |
## Conventions

- Always use class-based tests inheriting from `matlab.unittest.TestCase`
- Name test files `<functionName>Test.m` and place them in the `tests/` directory
- Use `verify` qualifications by default — they let all tests run even if one fails
- Use `AbsTol` for every floating-point comparison — never rely on exact equality
- No logic in test methods — follow Arrange-Act-Assert
- Use `addTeardown` for cleanup — it runs even if the test fails
- Use struct-based `TestParameter` for readable parameterized test names
- Keep test methods focused — test one behavior per method
- Tests must be independent and compatible with parallel execution
- Run tests via the `run_matlab_test_file` MCP tool for automatic result capture
Copyright 2026 The MathWorks, Inc.