# Skillshub terraform-patterns
Terraform infrastructure-as-code agent skill and plugin for Claude Code, Codex, Gemini CLI, Cursor, OpenClaw. Covers module design patterns, state management strategies, provider configuration, security hardening, policy-as-code with Sentinel/OPA, and CI/CD plan/apply workflows. Use when: user wants to design Terraform modules, manage state backends, review Terraform security, implement multi-region deployments, or follow IaC best practices.
```sh
git clone https://github.com/ComeOnOliver/skillshub
```

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/alirezarezvani/claude-skills/terraform-patterns" ~/.claude/skills/comeonoliver-skillshub-terraform-patterns && rm -rf "$T"
```
`skills/alirezarezvani/claude-skills/terraform-patterns/SKILL.md`

# Terraform Patterns
Predictable infrastructure. Secure state. Modules that compose. No drift.
Opinionated Terraform workflow that turns sprawling HCL into well-structured, secure, production-grade infrastructure code. Covers module design, state management, provider patterns, security hardening, and CI/CD integration.
Not a Terraform tutorial — a set of concrete decisions about how to write infrastructure code that doesn't break at 3 AM.
## Slash Commands

| Command | What it does |
|---|---|
| `/terraform:review` | Analyze Terraform code for anti-patterns, security issues, and structure problems |
| `/terraform:module` | Design or refactor a Terraform module with proper inputs, outputs, and composition |
| `/terraform:security` | Audit Terraform code for security vulnerabilities, secrets exposure, and IAM misconfigurations |
## When This Skill Activates
Recognize these patterns from the user:
- "Review this Terraform code"
- "Design a Terraform module for..."
- "My Terraform state is..."
- "Set up remote state backend"
- "Multi-region Terraform deployment"
- "Terraform security review"
- "Module structure best practices"
- "Terraform CI/CD pipeline"
- Any request involving: `.tf` files, HCL, Terraform modules, state management, provider configuration, infrastructure-as-code

If the user has `.tf` files or wants to provision infrastructure with Terraform → this skill applies.
## Workflow
### `/terraform:review` — Terraform Code Review

1. **Analyze current state**
   - Read all `.tf` files in the target directory
   - Identify module structure (flat vs nested)
   - Count resources, data sources, variables, outputs
   - Check naming conventions
2. **Apply review checklist**
```
MODULE STRUCTURE
├── Variables have descriptions and type constraints
├── Outputs expose only what consumers need
├── Resources use consistent naming: {provider}_{type}_{purpose}
├── Locals used for computed values and DRY expressions
└── No hardcoded values — everything parameterized or in locals

STATE & BACKEND
├── Remote backend configured (S3, GCS, Azure Blob, Terraform Cloud)
├── State locking enabled (DynamoDB for S3, native for others)
├── State encryption at rest enabled
├── No secrets stored in state (or state access is restricted)
└── Workspaces or directory isolation for environments

PROVIDERS
├── Version constraints use pessimistic operator: ~> 5.0
├── Required providers block in terraform {} block
├── Provider aliases for multi-region or multi-account
└── No provider configuration in child modules

SECURITY
├── No hardcoded secrets, keys, or passwords
├── IAM follows least-privilege principle
├── Encryption enabled for storage, databases, secrets
├── Security groups are not overly permissive (no 0.0.0.0/0 ingress on sensitive ports)
└── Sensitive variables marked with sensitive = true
```
3. **Generate report**

   ```sh
   python3 scripts/tf_module_analyzer.py ./terraform
   ```

4. **Run security scan**

   ```sh
   python3 scripts/tf_security_scanner.py ./terraform
   ```
### `/terraform:module` — Module Design

1. **Identify module scope**
   - Single responsibility: one module = one logical grouping
   - Determine inputs (variables), outputs, and resource boundaries
   - Decide: flat module (single directory) vs nested (calling child modules)
2. **Apply module design checklist**
```
STRUCTURE
├── main.tf      — Primary resources
├── variables.tf — All input variables with descriptions and types
├── outputs.tf   — All outputs with descriptions
├── versions.tf  — terraform {} block with required_providers
├── locals.tf    — Computed values and naming conventions
├── data.tf      — Data sources (if any)
└── README.md    — Usage examples and variable documentation

VARIABLES
├── Every variable has: description, type, validation (where applicable)
├── Sensitive values marked: sensitive = true
├── Defaults provided for optional settings
├── Use object types for related settings: variable "config" { type = object({...}) }
└── Validate with: validation { condition = ... }

OUTPUTS
├── Output IDs, ARNs, endpoints — things consumers need
├── Include description on every output
├── Mark sensitive outputs: sensitive = true
└── Don't output entire resources — only specific attributes

COMPOSITION
├── Root module calls child modules
├── Child modules never call other child modules
├── Pass values explicitly — no hidden data source lookups in child modules
├── Provider configuration only in root module
└── Use module "name" { source = "./modules/name" }
```
3. **Generate module scaffold**
   - Output file structure with boilerplate
   - Include variable validation blocks
   - Add lifecycle rules where appropriate
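A minimal `variables.tf` following the checklist above might look like this; the variable names, types, and defaults are illustrative, not prescribed by the skill:

```hcl
# variables.tf — illustrative inputs for a hypothetical "app" module

variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be dev, staging, or prod."
  }
}

variable "db_config" {
  description = "Related database settings grouped into one object"
  type = object({
    instance_class    = string
    allocated_storage = number
  })
  default = {
    instance_class    = "db.t3.micro"
    allocated_storage = 20
  }
}

variable "db_password" {
  description = "Master password (never hardcode; pass via TF_VAR_db_password)"
  type        = string
  sensitive   = true
}
```

Grouping related settings into one `object` variable keeps the module's call sites short, and `sensitive = true` keeps the password out of plan output.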
### `/terraform:security` — Security Audit

1. **Code-level audit**
| Check | Severity | Fix |
|---|---|---|
| Hardcoded secrets in `.tf` files | Critical | Use variables with `sensitive = true` or a vault |
| IAM policy with `*` actions | Critical | Scope to specific actions and resources |
| Security group with 0.0.0.0/0 on port 22/3389 | Critical | Restrict to known CIDR blocks or use SSM/bastion |
| S3 bucket without encryption | High | Add `server_side_encryption_configuration` block |
| S3 bucket with public access | High | Add `aws_s3_bucket_public_access_block` |
| RDS without encryption | High | Set `storage_encrypted = true` |
| RDS publicly accessible | High | Set `publicly_accessible = false` |
| CloudTrail not enabled | Medium | Add `aws_cloudtrail` resource |
| Missing `prevent_destroy` on stateful resources | Medium | Add `lifecycle { prevent_destroy = true }` |
| Variables without `sensitive = true` for secrets | Medium | Add `sensitive = true` to secret variables |
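The Critical security-group finding can be remediated in place; the CIDR block and resource names below are hypothetical:

```hcl
# BAD: SSH ingress open to the entire internet
resource "aws_security_group_rule" "ssh_bad" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.app.id
}

# BETTER: restrict ingress to a known bastion/VPN subnet
resource "aws_security_group_rule" "ssh_restricted" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["10.0.100.0/24"] # hypothetical bastion subnet
  security_group_id = aws_security_group.app.id
}
```

Better still, remove the SSH rule entirely and use SSM Session Manager, which needs no inbound ports.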
2. **State security audit**
| Check | Severity | Fix |
|---|---|---|
| Local state file | Critical | Migrate to remote backend with encryption |
| Remote state without encryption | High | Enable encryption on backend (SSE-S3, KMS) |
| No state locking | High | Enable DynamoDB for S3, native for TF Cloud |
| State accessible to all team members | Medium | Restrict via IAM policies or TF Cloud teams |
3. **Generate security report**

   ```sh
   python3 scripts/tf_security_scanner.py ./terraform
   python3 scripts/tf_security_scanner.py ./terraform --output json
   ```
## Tooling
### `scripts/tf_module_analyzer.py`

CLI utility for analyzing Terraform directory structure and module quality.
Features:
- Resource and data source counting
- Variable and output analysis (missing descriptions, types, validation)
- Naming convention checks
- Module composition detection
- File structure validation
- JSON and text output
Usage:
```sh
# Analyze a Terraform directory
python3 scripts/tf_module_analyzer.py ./terraform

# JSON output
python3 scripts/tf_module_analyzer.py ./terraform --output json

# Analyze a specific module
python3 scripts/tf_module_analyzer.py ./modules/vpc
```
### `scripts/tf_security_scanner.py`

CLI utility for scanning `.tf` files for common security issues.
Features:
- Hardcoded secret detection (AWS keys, passwords, tokens)
- Overly permissive IAM policy detection
- Open security group detection (0.0.0.0/0 on sensitive ports)
- Missing encryption checks (S3, RDS, EBS)
- Public access detection (S3, RDS, EC2)
- Sensitive variable audit
- JSON and text output
Usage:
```sh
# Scan a Terraform directory
python3 scripts/tf_security_scanner.py ./terraform

# JSON output
python3 scripts/tf_security_scanner.py ./terraform --output json

# Strict mode (elevate warnings)
python3 scripts/tf_security_scanner.py ./terraform --strict
```
## Module Design Patterns

### Pattern 1: Flat Module (Small/Medium Projects)

```
infrastructure/
├── main.tf            # All resources
├── variables.tf       # All inputs
├── outputs.tf         # All outputs
├── versions.tf        # Provider requirements
├── terraform.tfvars   # Environment values (not committed)
└── backend.tf         # Remote state configuration
```
Best for: Single application, < 20 resources, one team owns everything.
### Pattern 2: Nested Modules (Medium/Large Projects)

```
infrastructure/
├── environments/
│   ├── dev/
│   │   ├── main.tf           # Calls modules with dev params
│   │   ├── backend.tf        # Dev state backend
│   │   └── terraform.tfvars
│   ├── staging/
│   │   └── ...
│   └── prod/
│       └── ...
├── modules/
│   ├── networking/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── compute/
│   │   └── ...
│   └── database/
│       └── ...
└── versions.tf
```
Best for: Multiple environments, shared infrastructure patterns, team collaboration.
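Under Pattern 2, each environment's root module wires child modules together with explicit values; the module names, CIDR, and output names below are illustrative:

```hcl
# environments/dev/main.tf — hypothetical composition

module "networking" {
  source     = "../../modules/networking"
  cidr_block = "10.10.0.0/16"
  azs        = ["us-east-1a", "us-east-1b"]
}

module "database" {
  source = "../../modules/database"

  # Pass values explicitly: no hidden data-source lookups inside the child module
  vpc_id     = module.networking.vpc_id
  subnet_ids = module.networking.private_subnet_ids
}
```

Because the database module receives `vpc_id` and `subnet_ids` as inputs rather than discovering them, it stays reusable across environments and its dependencies are visible at the call site.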
### Pattern 3: Mono-Repo with Terragrunt

```
infrastructure/
├── terragrunt.hcl        # Root config
├── modules/              # Reusable modules
│   ├── vpc/
│   ├── eks/
│   └── rds/
├── dev/
│   ├── terragrunt.hcl    # Dev overrides
│   ├── vpc/
│   │   └── terragrunt.hcl   # Module invocation
│   └── eks/
│       └── terragrunt.hcl
└── prod/
    ├── terragrunt.hcl
    └── ...
```
Best for: Large-scale, many environments, DRY configuration, team-level isolation.
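In Pattern 3, each leaf directory is a thin `terragrunt.hcl` that points at a module and supplies inputs; a minimal sketch, with paths and values assumed for illustration:

```hcl
# dev/vpc/terragrunt.hcl — hypothetical module invocation

# Pull in the shared root configuration (backend, provider generation)
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../modules/vpc"
}

# Environment-specific inputs; everything else stays DRY in the root config
inputs = {
  cidr_block = "10.10.0.0/16"
  name       = "dev-vpc"
}
```

The root `terragrunt.hcl` typically generates the backend and provider blocks once, so each environment only declares what differs.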
## Provider Configuration Patterns

### Version Pinning

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # Allow 5.x, block 6.0
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}
```
### Multi-Region with Aliases

```hcl
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_s3_bucket" "primary" {
  bucket = "my-app-primary"
}

resource "aws_s3_bucket" "replica" {
  provider = aws.west
  bucket   = "my-app-replica"
}
```
### Multi-Account with Assume Role

```hcl
provider "aws" {
  alias  = "production"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::PROD_ACCOUNT_ID:role/TerraformRole"
  }
}
```
## State Management Decision Tree

```
Single developer, small project?
├── Yes → Local state (but migrate to remote ASAP)
└── No
    ├── Using Terraform Cloud/Enterprise?
    │   └── Yes → TF Cloud native backend (built-in locking, encryption, RBAC)
    └── No
        ├── AWS?
        │   └── S3 + DynamoDB (encryption, locking, versioning)
        ├── GCP?
        │   └── GCS bucket (native locking, encryption)
        ├── Azure?
        │   └── Azure Blob Storage (native locking, encryption)
        └── Other?
            └── Consul or PostgreSQL backend

Environment isolation strategy:
├── Separate state files per environment (recommended)
│   ├── Option A: Separate directories (dev/, staging/, prod/)
│   └── Option B: Terraform workspaces (simpler but less isolation)
└── Single state file for all environments (never do this)
```
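For the AWS branch, a typical backend block looks like the sketch below; the bucket and table names are placeholders, and both must exist (with versioning on the bucket and a `LockID` string hash key on the table) before `terraform init`:

```hcl
# backend.tf — hypothetical S3 backend with locking and encryption
terraform {
  backend "s3" {
    bucket         = "my-org-tfstate"             # placeholder: versioned S3 bucket
    key            = "app/prod/terraform.tfstate" # one key per environment
    region         = "us-east-1"
    encrypt        = true                         # server-side encryption at rest
    dynamodb_table = "terraform-locks"            # placeholder: lock table
  }
}
```

If you are moving off local state, running `terraform init -migrate-state` after adding this block copies the existing `terraform.tfstate` into the new backend.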
## CI/CD Integration Patterns

### GitHub Actions Plan/Apply

```yaml
# .github/workflows/terraform.yml
name: Terraform

on:
  pull_request:
    paths: ['terraform/**']
  push:
    branches: [main]
    paths: ['terraform/**']

jobs:
  plan:
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform validate
      - run: terraform plan -out=tfplan
      - run: terraform show -json tfplan > plan.json
      # Post plan as PR comment

  apply:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform apply -auto-approve
```
### Drift Detection

Note that `terraform plan -detailed-exitcode` returns 2 when changes are pending, and piping through `tee` would otherwise swallow that code, so the step sets `pipefail` and disables `errexit` before checking it:

```yaml
# Run on schedule to detect drift
name: Drift Detection

on:
  schedule:
    - cron: '0 6 * * 1-5' # Weekdays at 6 AM

jobs:
  detect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: |
          set +e -o pipefail  # propagate terraform's exit code through the tee pipe
          terraform plan -detailed-exitcode -out=drift.tfplan 2>&1 | tee drift.log
          EXIT_CODE=$?
          if [ $EXIT_CODE -eq 2 ]; then
            echo "DRIFT DETECTED — review drift.log"
            # Send alert (Slack, PagerDuty, etc.)
          fi
```
## Proactive Triggers
Flag these without being asked:
- No remote backend configured → Migrate to S3/GCS/Azure Blob with locking and encryption.
- Provider without version constraint → Add `version = "~> X.0"` to prevent breaking upgrades.
- Hardcoded secrets in `.tf` files → Use variables with `sensitive = true`, or integrate Vault/SSM.
- IAM policy with `"Action": "*"` → Scope to specific actions. No wildcard actions in production.
- Security group open to 0.0.0.0/0 on SSH/RDP → Restrict to bastion CIDR or use SSM Session Manager.
- No state locking → Enable DynamoDB table for S3 backend, or use TF Cloud.
- Resources without tags → Add `default_tags` in provider block. Tags are mandatory for cost tracking.
- Missing `prevent_destroy` on databases/storage → Add lifecycle block to prevent accidental deletion.
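Two of the triggers above, default tags and `prevent_destroy`, take only a few lines each; the tag values and the database resource below are illustrative:

```hcl
# Provider-level default_tags are applied to every taggable resource
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "prod"      # illustrative values
      ManagedBy   = "terraform"
      CostCenter  = "platform"
    }
  }
}

# Guard stateful resources against accidental deletion
resource "aws_db_instance" "main" {
  identifier     = "app-db"     # hypothetical database (other required args omitted)
  engine         = "postgres"
  instance_class = "db.t3.micro"

  lifecycle {
    prevent_destroy = true
  }
}
```

With `prevent_destroy = true`, any plan that would delete the resource fails instead, forcing a deliberate removal of the lifecycle block first.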
## Installation

### One-liner (any tool)

```sh
git clone https://github.com/alirezarezvani/claude-skills.git
cp -r claude-skills/engineering/terraform-patterns ~/.claude/skills/
```

### Multi-tool install

```sh
./scripts/convert.sh --skill terraform-patterns --tool codex|gemini|cursor|windsurf|openclaw
```

### OpenClaw

```sh
clawhub install terraform-patterns
```
## Related Skills
- senior-devops — Broader DevOps scope (CI/CD, monitoring, containerization). Complementary — use terraform-patterns for IaC-specific work, senior-devops for pipeline and infrastructure operations.
- aws-solution-architect — AWS architecture design. Complementary — terraform-patterns implements the infrastructure, aws-solution-architect designs it.
- senior-security — Application security. Complementary — terraform-patterns covers infrastructure security posture, senior-security covers application-level threats.
- ci-cd-pipeline-builder — Pipeline construction. Complementary — terraform-patterns defines infrastructure, ci-cd-pipeline-builder automates deployment.