Awesome-omni-skills aws-cost-cleanup
AWS Cost Cleanup workflow skill. Use this skill when the user needs automated cleanup of unused AWS resources to reduce costs, and when the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
```bash
# Clone the full repository
git clone https://github.com/diegosouzapw/awesome-omni-skills

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && \
  git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && \
  mkdir -p ~/.claude/skills && \
  cp -r "$T/skills/aws-cost-cleanup" ~/.claude/skills/diegosouzapw-awesome-omni-skills-aws-cost-cleanup && \
  rm -rf "$T"
```
skills/aws-cost-cleanup/SKILL.md
AWS Cost Cleanup
Overview
This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/aws-cost-cleanup from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
AWS Cost Cleanup: automate the identification and removal of unused AWS resources to eliminate waste.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Automated Cleanup Targets, Cleanup Scripts, Cost Impact Calculator, Automated Cleanup Lambda, Safety Checklist, Integration with AWS Organizations.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Use this skill when you need to automatically clean up unused AWS resources to reduce costs and eliminate waste.
- Use when the request clearly matches the imported source intent: Automated cleanup of unused AWS resources to reduce costs.
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.
- Use when copied upstream references, examples, or scripts materially improve the answer.
- Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Discovery Phase (Read-only)
  - Run all describe commands
  - Generate cost impact report
  - Review with team
- Validation Phase
  - Verify resources are truly unused
  - Check for dependencies
Imported Workflow Notes
Imported: Cleanup Workflow
- Discovery Phase (Read-only)
  - Run all describe commands
  - Generate cost impact report
  - Review with team
- Validation Phase
  - Verify resources are truly unused
  - Check for dependencies
  - Notify resource owners
- Execution Phase (Dry-run first)
  - Run cleanup scripts with dry-run (see the gate sketch after this list)
  - Review proposed changes
  - Execute actual cleanup
- Verification Phase
  - Confirm deletions
  - Monitor for issues
  - Document savings
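The execution phase above calls for a dry-run pass before any deletion. One way to enforce that is a gate variable that downgrades every destructive call to an echo; a minimal sketch, assuming a configured AWS CLI (the DRY_RUN convention, script name, and volume ID are illustrative, not part of the upstream package):

```bash
#!/bin/bash
# gated-cleanup.sh (illustrative): default to dry-run and require an
# explicit opt-in before any delete actually executes.
DRY_RUN="${DRY_RUN:-true}"

run() {
  if [ "$DRY_RUN" = "true" ]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

# Gate a single volume deletion behind the flag (placeholder volume ID)
run aws ec2 delete-volume --volume-id vol-0123456789abcdef0

# Execute for real only after the dry-run output has been reviewed:
#   DRY_RUN=false ./gated-cleanup.sh
```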
Imported: Automated Cleanup Targets
Storage
- Unattached EBS volumes
- Old EBS snapshots (>90 days)
- Incomplete multipart S3 uploads
- Old S3 versions in versioned buckets
Compute
- Stopped EC2 instances (>30 days)
- Unused AMIs and associated snapshots
- Unused Elastic IPs
Networking
- Unused Elastic Load Balancers
- Unused NAT Gateways
- Orphaned ENIs
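Each target above maps to a read-only describe call, which keeps the discovery phase safe by construction. A few illustrative queries, assuming default AWS CLI credentials and region (the 30-day age check for stopped instances has to happen client-side; the CLI cannot filter on stop time directly):

```bash
# Storage: unattached EBS volumes
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].[VolumeId,Size,CreateTime]' --output table

# Compute: stopped instances (StateTransitionReason includes the stop timestamp)
aws ec2 describe-instances \
  --filters Name=instance-state-name,Values=stopped \
  --query 'Reservations[*].Instances[*].[InstanceId,StateTransitionReason]' \
  --output table

# Networking: orphaned ENIs
aws ec2 describe-network-interfaces \
  --filters Name=status,Values=available \
  --query 'NetworkInterfaces[*].[NetworkInterfaceId,Description]' --output table
```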
Examples
Example 1: Ask for the upstream workflow directly
Use @aws-cost-cleanup to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @aws-cost-cleanup against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @aws-cost-cleanup for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @aws-cost-cleanup using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Imported Usage Notes
Imported: Example Prompts
Discovery
- "Find all unused resources and calculate potential savings"
- "Generate a cleanup report for my AWS account"
- "What resources can I safely delete?"
Execution
- "Create a script to cleanup unattached EBS volumes"
- "Delete all snapshots older than 90 days"
- "Release unused Elastic IPs"
Automation
- "Set up automated cleanup for old snapshots"
- "Create a Lambda function for weekly cleanup"
- "Schedule monthly resource cleanup"
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Schedule cleanup during maintenance windows
- Always create final snapshots before deletion
- Use resource tags to identify cleanup candidates
- Implement approval workflow for production
- Log all cleanup actions for audit
- Set up cost anomaly detection
- Review cleanup results weekly
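The tagging practice above can be enforced at discovery time by filtering on an opt-in tag; a minimal sketch (the cleanup-candidate=true tag convention is an assumption, not an upstream standard):

```bash
# List only volumes explicitly opted in to cleanup via a tag
# (the tag key/value pair is a hypothetical convention)
aws ec2 describe-volumes \
  --filters Name=status,Values=available Name=tag:cleanup-candidate,Values=true \
  --query 'Volumes[*].[VolumeId,Size]' --output table
```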
Imported Operating Notes
Imported: Best Practices
- Schedule cleanup during maintenance windows
- Always create final snapshots before deletion
- Use resource tags to identify cleanup candidates
- Implement approval workflow for production
- Log all cleanup actions for audit
- Set up cost anomaly detection
- Review cleanup results weekly
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/aws-cost-cleanup, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Related Skills
- @ai-dev-jobs-mcp: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @arm-cortex-expert: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @asana-automation: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @ask-questions-if-underspecified: Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| References | Copied reference notes, guides, or background material from upstream | |
| Examples | Worked examples or reusable prompts copied from upstream | |
| Scripts | Upstream helper scripts that change execution or validation | |
| Routing | Routing or delegation notes that are genuinely part of the imported package | |
| Assets | Supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Additional Resources
Imported: Cleanup Scripts
Safe Cleanup (Dry-Run First)
```bash
#!/bin/bash
# cleanup-unused-ebs.sh
echo "Finding unattached EBS volumes..."
VOLUMES=$(aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].VolumeId' \
  --output text)

for vol in $VOLUMES; do
  echo "Would delete: $vol"
  # Uncomment to actually delete:
  # aws ec2 delete-volume --volume-id $vol
done
```
```bash
#!/bin/bash
# cleanup-old-snapshots.sh
CUTOFF_DATE=$(date -d '90 days ago' --iso-8601)

aws ec2 describe-snapshots --owner-ids self \
  --query "Snapshots[?StartTime<='$CUTOFF_DATE'].[SnapshotId,StartTime,VolumeSize]" \
  --output text | while read snap_id start_time size; do
    echo "Snapshot: $snap_id (Created: $start_time, Size: ${size}GB)"
    # Uncomment to delete:
    # aws ec2 delete-snapshot --snapshot-id $snap_id
done
```
```bash
#!/bin/bash
# release-unused-eips.sh
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].[AllocationId,PublicIp]' \
  --output text | while read alloc_id public_ip; do
    echo "Would release: $public_ip ($alloc_id)"
    # Uncomment to release:
    # aws ec2 release-address --allocation-id $alloc_id
done
```
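All three scripts print what they would delete and leave the destructive calls commented out. One way to run a dry-run and keep its output as the artifact for the team review step (the output file name is illustrative):

```bash
# Capture the dry-run output as the review artifact
chmod +x cleanup-unused-ebs.sh
./cleanup-unused-ebs.sh | tee ebs-cleanup-dryrun.txt
```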
S3 Lifecycle Automation
```bash
# Apply lifecycle policy to transition old objects to cheaper storage
# (a Filter is required by the API; an empty prefix applies the rule to all objects)
cat > lifecycle-policy.json <<EOF
{
  "Rules": [
    {
      "Id": "Archive old objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 90, "StorageClass": "STANDARD_IA" },
        { "Days": 180, "StorageClass": "GLACIER" }
      ],
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle-policy.json
```
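To confirm the rule was applied, the configuration can be read back with the corresponding get call (my-bucket is the same placeholder used above):

```bash
# Verify the lifecycle rule landed on the bucket
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket
```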
Imported: Cost Impact Calculator
```python
#!/usr/bin/env python3
# calculate-savings.py
import boto3

ec2 = boto3.client('ec2')

# Calculate EBS volume savings
volumes = ec2.describe_volumes(
    Filters=[{'Name': 'status', 'Values': ['available']}]
)
total_size = sum(v['Size'] for v in volumes['Volumes'])
monthly_cost = total_size * 0.10  # $0.10/GB-month for gp3

print(f"Unattached EBS Volumes: {len(volumes['Volumes'])}")
print(f"Total Size: {total_size} GB")
print(f"Monthly Savings: ${monthly_cost:.2f}")

# Calculate Elastic IP savings
addresses = ec2.describe_addresses()
unused = [a for a in addresses['Addresses'] if 'AssociationId' not in a]
eip_cost = len(unused) * 3.65  # $0.005/hour * 730 hours

print(f"\nUnused Elastic IPs: {len(unused)}")
print(f"Monthly Savings: ${eip_cost:.2f}")

print(f"\nTotal Monthly Savings: ${monthly_cost + eip_cost:.2f}")
print(f"Annual Savings: ${(monthly_cost + eip_cost) * 12:.2f}")
```
Imported: Automated Cleanup Lambda
```python
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Delete unattached volumes older than 7 days
    volumes = ec2.describe_volumes(
        Filters=[{'Name': 'status', 'Values': ['available']}]
    )
    cutoff = datetime.now() - timedelta(days=7)
    deleted = 0

    for vol in volumes['Volumes']:
        create_time = vol['CreateTime'].replace(tzinfo=None)
        if create_time < cutoff:
            try:
                ec2.delete_volume(VolumeId=vol['VolumeId'])
                deleted += 1
                print(f"Deleted volume: {vol['VolumeId']}")
            except Exception as e:
                print(f"Error deleting {vol['VolumeId']}: {e}")

    return {
        'statusCode': 200,
        'body': f'Deleted {deleted} volumes'
    }
```
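The Lambda above only becomes weekly cleanup once something triggers it. A minimal EventBridge schedule sketch, assuming the function is already deployed; the function name, rule name, region, and account ID are placeholders:

```bash
# Create a weekly schedule rule
aws events put-rule \
  --name weekly-ebs-cleanup \
  --schedule-expression "rate(7 days)"

# Allow EventBridge to invoke the function
aws lambda add-permission \
  --function-name ebs-cleanup \
  --statement-id weekly-ebs-cleanup \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/weekly-ebs-cleanup

# Point the rule at the function
aws events put-targets \
  --rule weekly-ebs-cleanup \
  --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:ebs-cleanup'
```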
Imported: Safety Checklist
- Run in dry-run mode first
- Verify resources have no dependencies
- Check resource tags for ownership
- Notify stakeholders before deletion
- Create snapshots of critical data
- Test in non-production first
- Have rollback plan ready
- Document all deletions
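The ownership item above is checkable before any delete; a quick sketch (the volume ID is a placeholder, and the tag key depends on your tagging policy):

```bash
# Inspect tags on a candidate volume before deleting it
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
  --query 'Volumes[0].Tags' --output table
```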
Imported: Integration with AWS Organizations
```bash
# Run cleanup across multiple accounts
for account in $(aws organizations list-accounts \
  --query 'Accounts[*].Id' --output text); do
  echo "Checking account: $account"
  aws ec2 describe-volumes \
    --filters Name=status,Values=available \
    --profile account-$account
done
```
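Note that the loop assumes a named CLI profile (account-<id>) already exists for every member account; in practice that usually means pre-generating profiles or replacing the --profile flag with an sts assume-role step against a cross-account role such as OrganizationAccountAccessRole.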
Imported: Monitoring and Alerts
```bash
# Create CloudWatch alarm for cost anomalies
aws cloudwatch put-metric-alarm \
  --alarm-name high-cost-alert \
  --alarm-description "Alert when daily cost exceeds threshold" \
  --metric-name EstimatedCharges \
  --namespace AWS/Billing \
  --statistic Maximum \
  --period 86400 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold
```
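Two caveats worth knowing: the EstimatedCharges metric is published only in us-east-1 and only after "Receive Billing Alerts" is enabled in the billing preferences, and the alarm does nothing visible without an action. The same alarm with those details filled in (the SNS topic ARN and account ID are placeholders):

```bash
# Same alarm, pinned to us-east-1 with the USD dimension and an SNS action
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name high-cost-alert \
  --alarm-description "Alert when daily cost exceeds threshold" \
  --metric-name EstimatedCharges \
  --namespace AWS/Billing \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 86400 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:cost-alerts
```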
Imported: Risk Mitigation
Medium Risk Actions:
- Deleting unattached volumes (ensure no planned reattachment)
- Removing old snapshots (verify no compliance requirements)
- Releasing Elastic IPs (check DNS records)
Always:
- Maintain 30-day backup retention
- Use AWS Backup for critical resources
- Test restore procedures
- Document cleanup decisions
Imported: Kiro CLI Integration
```bash
# Analyze and cleanup in one command
kiro-cli chat "Use aws-cost-cleanup to find and remove unused resources"

# Generate cleanup script
kiro-cli chat "Create a safe cleanup script for my AWS account"

# Schedule automated cleanup
kiro-cli chat "Set up weekly automated cleanup using aws-cost-cleanup"
```
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.