Awesome-omni-skill · powershell-pester-5

PowerShell Pester testing best practices based on Pester v5 conventions. Triggers on: `**/*.Tests.ps1`

Install

Source · Clone the upstream repo

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Claude Code · Install into ~/.claude/skills/

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/cli-automation/powershell-pester-5" ~/.claude/skills/diegosouzapw-awesome-omni-skill-powershell-pester-5 && rm -rf "$T"
```

Manifest: skills/cli-automation/powershell-pester-5/SKILL.md
PowerShell Pester v5 Testing Guidelines
This guide provides PowerShell-specific instructions for creating automated tests using the PowerShell Pester v5 module. Follow the PowerShell cmdlet development guidelines in powershell.instructions.md for general PowerShell scripting best practices.
File Naming and Structure
- File Convention: Use the `*.Tests.ps1` naming pattern
- Placement: Place test files next to the tested code or in dedicated test directories
- Import Pattern: Use `BeforeAll { . $PSScriptRoot/FunctionName.ps1 }` to import tested functions
- No Direct Code: Put ALL code inside Pester blocks (`BeforeAll`, `Describe`, `Context`, `It`, etc.)
Test Structure Hierarchy
```powershell
BeforeAll {
    # Import tested functions
}

Describe 'FunctionName' {
    Context 'When condition' {
        BeforeAll {
            # Setup for context
        }

        It 'Should behavior' {
            # Individual test
        }

        AfterAll {
            # Cleanup for context
        }
    }
}
```
Core Keywords
- `Describe`: Top-level grouping, typically named after the function being tested
- `Context`: Sub-grouping within Describe for specific scenarios
- `It`: Individual test cases; use descriptive names
- `Should`: Assertion keyword for test validation
- `BeforeAll`/`AfterAll`: Setup/teardown once per block
- `BeforeEach`/`AfterEach`: Setup/teardown before/after each test
Setup and Teardown
- `BeforeAll`: Runs once at the start of the containing block; use for expensive operations
- `BeforeEach`: Runs before every `It` in the block; use for test-specific setup
- `AfterEach`: Runs after every `It`, guaranteed even if the test fails
- `AfterAll`: Runs once at the end of the block; use for cleanup
- Variable Scoping: `BeforeAll` variables are available to child blocks (read-only); `BeforeEach`/`It`/`AfterEach` share the same scope
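The scoping rules above can be sketched as follows; `Get-Data` and the variable names are illustrative, not from the source:

```powershell
# Sketch of BeforeAll/BeforeEach scoping, assuming Pester v5 is loaded.
Describe 'Get-Data' {
    BeforeAll {
        # Runs once; $connection is visible (read-only) in child blocks
        $connection = 'server01'
    }

    BeforeEach {
        # Runs before every It; shares scope with It and AfterEach
        $buffer = [System.Collections.Generic.List[string]]::new()
    }

    It 'sees BeforeAll and BeforeEach variables' {
        $connection | Should -Be 'server01'
        $buffer.Add('item')
        $buffer | Should -HaveCount 1
    }

    AfterEach {
        # Runs even when the test fails; same scope as the It above
        $buffer.Clear()
    }
}
```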
Assertions (Should)
- Basic Comparisons: `-Be`, `-BeExactly`, `-Not -Be`
- Collections: `-Contain`, `-BeIn`, `-HaveCount`
- Numeric: `-BeGreaterThan`, `-BeLessThan`, `-BeGreaterOrEqual`
- Strings: `-Match`, `-Like`, `-BeNullOrEmpty`
- Types: `-BeOfType`, `-BeTrue`, `-BeFalse`
- Files: `-Exist`, `-FileContentMatch`
- Exceptions: `-Throw`, `-Not -Throw`
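The operators above can be shown in one test; the values are illustrative only:

```powershell
# Sketch of common Should operators, assuming Pester v5 is loaded.
It 'demonstrates common assertions' {
    'Hello' | Should -Be 'hello'            # -Be is case-insensitive by default
    'Hello' | Should -BeExactly 'Hello'     # -BeExactly is case-sensitive
    @(1, 2, 3) | Should -Contain 2
    @(1, 2, 3) | Should -HaveCount 3
    10 | Should -BeGreaterThan 5
    'error: disk full' | Should -Match 'disk'
    42 | Should -BeOfType [int]
    { throw 'boom' } | Should -Throw 'boom'
}
```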
Mocking
- `Mock CommandName { ScriptBlock }`: Replace command behavior
- `-ParameterFilter`: Mock only when parameters match the condition
- `-Verifiable`: Mark a mock as requiring verification
- `Should -Invoke`: Verify a mock was called a specific number of times
- `Should -InvokeVerifiable`: Verify all verifiable mocks were called
- Scope: Mocks default to the containing block's scope
```powershell
Mock Get-Service { @{ Status = 'Running' } } -ParameterFilter { $Name -eq 'TestService' }

Should -Invoke Get-Service -Exactly 1 -ParameterFilter { $Name -eq 'TestService' }
```
Test Cases (Data-Driven Tests)
Use `-TestCases` or `-ForEach` for parameterized tests:

```powershell
It 'Should return <Expected> for <Input>' -TestCases @(
    @{ Input = 'value1'; Expected = 'result1' }
    @{ Input = 'value2'; Expected = 'result2' }
) {
    Get-Function $Input | Should -Be $Expected
}
```
Data-Driven Tests
- `-ForEach`: Available on `Describe`, `Context`, and `It` for generating multiple tests from data
- `-TestCases`: Alias for `-ForEach` on `It` blocks (backwards compatibility)
- Hashtable Data: Each item defines variables available in the test (e.g., `@{ Name = 'value'; Expected = 'result' }`)
- Array Data: Uses the `$_` variable for the current item
- Templates: Use `<variablename>` in test names for dynamic expansion
```powershell
# Hashtable approach
It 'Returns <Expected> for <Name>' -ForEach @(
    @{ Name = 'test1'; Expected = 'result1' }
    @{ Name = 'test2'; Expected = 'result2' }
) {
    Get-Function $Name | Should -Be $Expected
}

# Array approach
It 'Contains <_>' -ForEach 'item1', 'item2' {
    Get-Collection | Should -Contain $_
}
```
Tags
- Available on: `Describe`, `Context`, and `It` blocks
- Filtering: Use `-TagFilter` and `-ExcludeTagFilter` with `Invoke-Pester`
- Wildcards: Tags support `-like` wildcards for flexible filtering
```powershell
Describe 'Function' -Tag 'Unit' {
    It 'Should work' -Tag 'Fast', 'Stable' { }
    It 'Should be slow' -Tag 'Slow', 'Integration' { }
}

# Run only fast unit tests
Invoke-Pester -TagFilter 'Unit' -ExcludeTagFilter 'Slow'
```
Skip
- `-Skip`: Available on `Describe`, `Context`, and `It` to skip tests
- Conditional: Use `-Skip:$condition` for dynamic skipping
- Runtime Skip: Use `Set-ItResult -Skipped` during test execution (setup/teardown still run)
```powershell
It 'Should work on Windows' -Skip:(-not $IsWindows) { }

Context 'Integration tests' -Skip { }
```
Error Handling
- Continue on Failure: Use `Should.ErrorAction = 'Continue'` to collect multiple failures
- Stop on Critical: Use `-ErrorAction Stop` for pre-conditions
- Test Exceptions: Use `{ Code } | Should -Throw` for exception testing
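The exception-testing pattern above looks like this in practice; `Remove-Widget` is a hypothetical function used only for illustration:

```powershell
# Sketch of exception testing, assuming Pester v5 is loaded.
Describe 'Remove-Widget' {
    It 'throws when the widget does not exist' {
        # Wrap the call in a script block so Should -Throw can catch it
        { Remove-Widget -Name 'missing' } | Should -Throw '*not found*'
    }

    It 'does not throw for an existing widget' {
        { Remove-Widget -Name 'existing' } | Should -Not -Throw
    }
}
```

Note the script-block wrapper `{ ... }`: passing the call directly (without braces) would execute it before `Should` sees it, so the exception would escape the assertion.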
Best Practices
- Descriptive Names: Use clear test descriptions that explain behavior
- AAA Pattern: Arrange (setup), Act (execute), Assert (verify)
- Isolated Tests: Each test should be independent
- Avoid Aliases: Use full cmdlet names (`Where-Object`, not `?`)
- Test File Organization: Group related tests in Context blocks; Context blocks can be nested
Example Test Pattern
```powershell
BeforeAll {
    . $PSScriptRoot/Get-UserInfo.ps1
}

Describe 'Get-UserInfo' {
    Context 'When user exists' {
        BeforeAll {
            Mock Get-ADUser { @{ Name = 'TestUser'; Enabled = $true } }
        }

        It 'Should return user object' {
            $result = Get-UserInfo -Username 'TestUser'
            $result | Should -Not -BeNullOrEmpty
            $result.Name | Should -Be 'TestUser'
        }

        It 'Should call Get-ADUser once' {
            Get-UserInfo -Username 'TestUser'
            Should -Invoke Get-ADUser -Exactly 1
        }
    }

    Context 'When user does not exist' {
        BeforeAll {
            Mock Get-ADUser { throw "User not found" }
        }

        It 'Should throw exception' {
            { Get-UserInfo -Username 'NonExistent' } | Should -Throw "*not found*"
        }
    }
}
```
Configuration
Configuration is defined outside the test files and passed to `Invoke-Pester` to control execution behavior.
```powershell
# Create configuration (Pester 5.2+)
$config = New-PesterConfiguration
$config.Run.Path = './Tests'
$config.Output.Verbosity = 'Detailed'
$config.TestResult.Enabled = $true
$config.TestResult.OutputFormat = 'NUnitXml'
$config.Should.ErrorAction = 'Continue'

Invoke-Pester -Configuration $config
```
Key Sections: `Run` (Path, Exit), `Filter` (Tag, ExcludeTag), `Output` (Verbosity), `TestResult` (Enabled, OutputFormat), `CodeCoverage` (Enabled, Path), `Should` (ErrorAction), `Debug`