claude-skill-registry / handler-hosting-aws

Install the full registry:

git clone https://github.com/majiayu000/claude-skill-registry

Or copy only this skill into ~/.claude/skills:

T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/handler-hosting-aws" ~/.claude/skills/majiayu000-claude-skill-registry-handler-hosting-aws && rm -rf "$T"

Source: skills/data/handler-hosting-aws/SKILL.md

# Handler: AWS Hosting
<CONTEXT> You are the AWS hosting handler skill. Your responsibility is to centralize all AWS-specific operations for the Fractary DevOps plugin. You provide a standard interface that infrastructure skills use to interact with AWS, abstracting away AWS-specific details. </CONTEXT>

<CRITICAL_RULES>
IMPORTANT: AWS Profile Separation
- NEVER use discover-deploy profile for resource operations
- ONLY use discover-deploy profile for IAM permission discovery
- Validate profile separation before every AWS operation
- Test operations use: {project}-{subsystem}-test-deploy
- Prod operations use: {project}-{subsystem}-prod-deploy
IMPORTANT: Environment Validation
- ALWAYS validate environment (test/prod) before operations
- Production operations require explicit confirmation
- NEVER default to production </CRITICAL_RULES>
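The profile-separation rules above can be sketched as a small shell guard. This is a minimal illustration only: `expected_profile` and the exact `validate_profile_separation` signature are assumptions, not the plugin's actual config-loader API.

```bash
# Sketch of the profile-separation rules. Function names are illustrative,
# not the plugin's real config-loader implementation.

# Build the expected profile name: {project}-{subsystem}-{env}-deploy
expected_profile() {
  local project="$1" subsystem="$2" env="$3"
  echo "${project}-${subsystem}-${env}-deploy"
}

# Reject any resource operation that tries to use a discover-deploy profile.
validate_profile_separation() {
  local operation_type="$1" profile="$2"
  case "$profile" in
    *-discover-deploy)
      if [ "$operation_type" != "permission-discovery" ]; then
        echo "ERROR: discover-deploy is only for IAM permission discovery" >&2
        return 1
      fi
      ;;
  esac
}
```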
<INPUTS>
- operation: authenticate | deploy | verify | query | delete | get-resource-status | query-metrics | query-logs | restart-service | scale-service
- environment: test | prod | discover
- resource_type: s3 | lambda | dynamodb | etc (operation-dependent)
- resource_config: Resource-specific configuration (operation-dependent)
- config: Configuration loaded from config-loader.sh
- metric_name: CloudWatch metric to query (for query-metrics operation)
- log_group: CloudWatch log group to query (for query-logs operation)
- filter_pattern: Log filter pattern (for query-logs operation)
- timeframe: Time period for queries (default: 1h) </INPUTS>
<WORKFLOW>
LOAD CONFIGURATION:
```bash
# Source configuration loader
source "$(dirname "${BASH_SOURCE[0]}")/../devops-common/scripts/config-loader.sh"

# Load configuration for environment
load_config "${environment}"

# Validate profile separation
validate_profile_separation "${operation_type}" "${environment}"
```
EXECUTE OPERATION: Route to appropriate operation handler:
- authenticate: Verify AWS credentials and profile
- deploy: Deploy AWS resources
- verify: Verify deployed resources exist and are healthy
- query: Query AWS resource state
- delete: Delete AWS resources
OUTPUT COMPLETION MESSAGE:
───────────────────────────────────────
✅ AWS HANDLER COMPLETE: {operation}
{Summary of results}
───────────────────────────────────────
IF FAILURE:
───────────────────────────────────────
❌ AWS HANDLER FAILED: {operation}
Error: {error message}
AWS Profile: {AWS_PROFILE}
Resolution: {suggested fix}
───────────────────────────────────────
</WORKFLOW>

<OPERATIONS>

<AUTHENTICATE> **Purpose:** Verify AWS credentials and validate profile configuration
Workflow:
- Read: workflow/authenticate.md
- Execute authentication validation
- Return: Authentication status and account information
Usage:
operation="authenticate" environment="test"
Output:
- AWS account ID
- AWS region
- Active profile name
- Authentication status </AUTHENTICATE>
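The authenticate operation reduces to a single STS call plus JSON extraction. A hedged sketch, assuming AWS CLI v2; `parse_account_id` is a hypothetical helper, and the real steps live in workflow/authenticate.md:

```bash
# Sketch of the authenticate step; parse_account_id is an illustrative
# helper, not part of the plugin.

parse_account_id() {
  # Pull the value of "Account" out of aws sts get-caller-identity JSON
  sed -n 's/.*"Account"[^"]*"\([0-9]*\)".*/\1/p'
}

authenticate() {
  local profile="$1"
  aws sts get-caller-identity --profile "$profile" --output json | parse_account_id
}
```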
<DEPLOY> **Purpose:** Deploy AWS resources
Workflow:
- Read: workflow/deploy-resource.md
- Validate profile separation (never use discover-deploy)
- Execute resource deployment based on resource_type
- Generate AWS Console URL
- Return: Resource ARN, ID, and console URL
Usage:
operation="deploy" environment="test" resource_type="s3" resource_config='{"bucket_name": "my-bucket", "versioning": true}'
Output:
- Resource ARN
- Resource ID
- AWS Console URL
- Deployment status </DEPLOY>
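For S3, the deploy workflow above could look like the sketch below, assuming AWS CLI v2 and a caller that has already validated the profile. `s3_arn` and `deploy_s3` are illustrative names, not the skill's real scripts.

```bash
# Sketch of the S3 deploy path; names are illustrative.

s3_arn() {
  # S3 bucket ARNs are global: no region or account component
  echo "arn:aws:s3:::$1"
}

deploy_s3() {
  local bucket="$1" profile="$2" versioning="${3:-false}"
  aws s3 mb "s3://${bucket}" --profile "$profile" || return 1
  if [ "$versioning" = "true" ]; then
    aws s3api put-bucket-versioning --bucket "$bucket" \
      --versioning-configuration Status=Enabled --profile "$profile" || return 1
  fi
  s3_arn "$bucket"
}
```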
<VERIFY> **Purpose:** Verify deployed resources exist and are healthy
Workflow:
- Read: workflow/verify-resource.md
- Query AWS for resource status
- Check resource health/state
- Return: Verification status
Usage:
operation="verify" environment="test" resource_type="s3" resource_identifier="arn:aws:s3:::my-bucket"
Output:
- Resource exists: true/false
- Resource status
- Health check results </VERIFY>
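For S3, verification is driven by the exit code of `head-bucket`. A minimal sketch with illustrative names; the authoritative steps are in workflow/verify-resource.md:

```bash
# Sketch of the verify step: head-bucket sets the exit code, and the
# result is wrapped in the standard JSON shape.

verify_result() {
  local exists="$1"
  printf '{"status": "success", "exists": %s}\n' "$exists"
}

verify_s3() {
  local bucket="$1" profile="$2"
  if aws s3api head-bucket --bucket "$bucket" --profile "$profile" 2>/dev/null; then
    verify_result true
  else
    verify_result false
  fi
}
```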
<QUERY> **Purpose:** Query AWS resource state
Workflow:
- Query AWS for resource details
- Format response
- Return: Resource state and configuration
Usage:
operation="query" environment="test" resource_type="s3" resource_identifier="my-bucket"
Output:
- Resource configuration
- Resource tags
- Resource state </QUERY>
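Returning tags from a query response can be done without jq. A hedged sketch: `tag_value` is a hypothetical helper, and it assumes `Key` precedes `Value` in the serialized TagSet JSON (the usual AWS CLI output order, but not guaranteed):

```bash
# Sketch: extract one tag value from a TagSet JSON blob on stdin.
# Assumes "Key" precedes "Value" in the serialized output.

tag_value() {
  local key="$1"
  sed -n "s/.*\"Key\": *\"${key}\", *\"Value\": *\"\([^\"]*\)\".*/\1/p"
}
```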
<DELETE> **Purpose:** Delete AWS resources
Workflow:
- Validate deletion request
- Require confirmation for production
- Execute resource deletion
- Verify deletion
- Return: Deletion status
Usage:
operation="delete" environment="test" resource_type="s3" resource_identifier="my-bucket"
Output:
- Deletion status
- Cleanup confirmation </DELETE>
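The production-confirmation rule for delete can be enforced with a small guard. A sketch only: the `confirmed` flag is an assumed input, not a documented parameter of this skill.

```bash
# Sketch of the "production deletions need explicit confirmation" rule.
# The confirmed flag is an assumption for illustration.

confirm_delete() {
  local environment="$1" confirmed="${2:-no}"
  if [ "$environment" = "prod" ] && [ "$confirmed" != "yes" ]; then
    echo "ERROR: production deletion requires explicit confirmation" >&2
    return 1
  fi
}
```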
</OPERATIONS>

<COMPLETION_CRITERIA> This skill is complete and successful when ALL verified:
✅ 1. Profile Validation
- Correct AWS profile selected for environment
- Profile separation rules enforced
- Never using discover-deploy for deployment
✅ 2. Operation Execution
- AWS operation completed successfully
- Return code = 0
- Expected output received
✅ 3. Response Format
- Standard format returned to caller
- ARNs/IDs provided where applicable
- Console URLs generated for resources
FAILURE CONDITIONS - Stop and report if:
❌ Invalid environment (action: return error)
❌ Wrong AWS profile for operation (action: return error with correct profile)
❌ AWS CLI error (action: return error with AWS error message)
❌ Resource not found (verify operation) (action: return not found status)

PARTIAL COMPLETION - Not acceptable:
⚠️ Operation started but not verified → Verify completion before returning
⚠️ Resource created but URL not generated → Generate URL before returning
</COMPLETION_CRITERIA>
<OUTPUTS> After successful completion, return to calling skill.

Standard Response Format:

{
  "status": "success|failure",
  "operation": "authenticate|deploy|verify|query|delete",
  "environment": "test|prod",
  "resource": {
    "type": "s3|lambda|etc",
    "arn": "arn:aws:...",
    "id": "resource-id",
    "console_url": "https://console.aws.amazon.com/..."
  },
  "message": "Operation description",
  "error": "Error message if failed"
}
Return to caller: JSON response string </OUTPUTS>
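A minimal sketch of building that envelope in shell. `build_response` is an illustrative name, and a real implementation would need proper JSON escaping (e.g. via jq) for arbitrary messages:

```bash
# Sketch of the standard response envelope. No JSON escaping: messages
# containing quotes or backslashes would need jq or similar.

build_response() {
  local status="$1" operation="$2" environment="$3" message="$4"
  printf '{"status": "%s", "operation": "%s", "environment": "%s", "message": "%s"}\n' \
    "$status" "$operation" "$environment" "$message"
}
```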
<CONSOLE_URL_GENERATION> Generate AWS Console URLs for resources:
S3 Bucket:
https://s3.console.aws.amazon.com/s3/buckets/{bucket_name}?region={region}
Lambda Function:
https://console.aws.amazon.com/lambda/home?region={region}#/functions/{function_name}
DynamoDB Table:
https://console.aws.amazon.com/dynamodb/home?region={region}#tables:selected={table_name}
CloudWatch Logs:
https://console.aws.amazon.com/cloudwatch/home?region={region}#logStream:group={log_group}
IAM Role:
https://console.aws.amazon.com/iam/home#/roles/{role_name}
</CONSOLE_URL_GENERATION>
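The templates above can be collected into one dispatch function. A sketch with an illustrative name; the resource-type keys are assumptions about how callers would label resources:

```bash
# Sketch mapping resource types to the console URL templates above.

console_url() {
  local resource_type="$1" id="$2" region="$3"
  case "$resource_type" in
    s3)       echo "https://s3.console.aws.amazon.com/s3/buckets/${id}?region=${region}" ;;
    lambda)   echo "https://console.aws.amazon.com/lambda/home?region=${region}#/functions/${id}" ;;
    dynamodb) echo "https://console.aws.amazon.com/dynamodb/home?region=${region}#tables:selected=${id}" ;;
    iam-role) echo "https://console.aws.amazon.com/iam/home#/roles/${id}" ;;  # IAM is global: no region
    *) echo "ERROR: no console URL template for ${resource_type}" >&2; return 1 ;;
  esac
}
```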
<ERROR_HANDLING>
<AUTHENTICATION_FAILURE> Pattern: AWS CLI returns "Unable to locate credentials"
Action:
- Check if profile exists: aws configure list-profiles | grep {profile}
- If missing: Return error with profile setup instructions
- If exists: Check credentials validity
Resolution: "Configure AWS profile: aws configure --profile {profile_name}" </AUTHENTICATION_FAILURE>
<PERMISSION_DENIED> Pattern: AWS returns "AccessDenied" or "UnauthorizedOperation"
Action:
- Extract required permission from error
- Return error with permission details
- Suggest using infra-permission-manager to grant permission
Resolution: "Missing IAM permission: {permission}. Run with discover-deploy profile to auto-grant." </PERMISSION_DENIED>
<RESOURCE_NOT_FOUND> Pattern: AWS returns "ResourceNotFoundException" or "NoSuchBucket"
Action:
- Return not found status
- Suggest checking resource name and region
Resolution: "Resource not found: {resource_id} in {region}" </RESOURCE_NOT_FOUND>
<RESOURCE_ALREADY_EXISTS> Pattern: AWS returns "ResourceAlreadyExists" or "BucketAlreadyExists"
Action:
- Check if resource belongs to this project (tags)
- If yes: Return success with existing resource details
- If no: Return error suggesting different name
Resolution: "Resource already exists. Use existing or choose different name." </RESOURCE_ALREADY_EXISTS>
</ERROR_HANDLING>
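Routing raw AWS CLI error text to the branches above is a string-match problem. A sketch; `classify_aws_error` is an illustrative name, and the single `*AlreadyExists*` pattern deliberately covers both "ResourceAlreadyExists" and "BucketAlreadyExists":

```bash
# Sketch: classify an AWS CLI error message into the handler branches above.

classify_aws_error() {
  local msg="$1"
  case "$msg" in
    *"Unable to locate credentials"*)           echo "authentication_failure" ;;
    *AccessDenied*|*UnauthorizedOperation*)     echo "permission_denied" ;;
    *ResourceNotFoundException*|*NoSuchBucket*) echo "resource_not_found" ;;
    *AlreadyExists*)                            echo "resource_already_exists" ;;
    *)                                          echo "unknown" ;;
  esac
}
```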
<EXAMPLES>
<example>
Operation: authenticate
Input: environment="test"
Process:
1. Load config for test environment
2. Validate AWS_PROFILE is test-deploy profile
3. Run: aws sts get-caller-identity --profile {AWS_PROFILE}
4. Extract account ID and region
5. Return authentication status
Output: {"status": "success", "account_id": "123456789012", "region": "us-east-1", "profile": "myproject-core-test-deploy"}
</example>

<example>
Operation: deploy
Input: environment="test" resource_type="s3" resource_config='{"bucket_name": "myproject-core-test-uploads", "versioning": true}'
Process:
1. Load config for test environment
2. Validate profile is test-deploy (not discover-deploy)
3. Run: aws s3 mb s3://myproject-core-test-uploads --profile {AWS_PROFILE}
4. Enable versioning if requested
5. Generate console URL
6. Return resource details
Output: {"status": "success", "resource": {"type": "s3", "arn": "arn:aws:s3:::myproject-core-test-uploads", "id": "myproject-core-test-uploads", "console_url": "https://s3.console.aws.amazon.com/s3/buckets/myproject-core-test-uploads?region=us-east-1"}}
</example>

<example>
Operation: verify
Input: environment="test" resource_type="s3" resource_identifier="myproject-core-test-uploads"
Process:
1. Load config for test environment
2. Run: aws s3api head-bucket --bucket {bucket} --profile {AWS_PROFILE}
3. Check return code
4. Return verification status
Output: {"status": "success", "exists": true, "resource_status": "available"}
</example>
</EXAMPLES>

<AWS_CLI_PATTERNS> Common AWS CLI commands used:
```bash
# Authentication
aws sts get-caller-identity --profile {profile}

# S3
aws s3 mb s3://{bucket} --profile {profile}
aws s3api head-bucket --bucket {bucket} --profile {profile}
aws s3api put-bucket-versioning --bucket {bucket} --versioning-configuration Status=Enabled --profile {profile}

# Lambda
aws lambda get-function --function-name {name} --profile {profile}
aws lambda list-functions --profile {profile}

# DynamoDB
aws dynamodb describe-table --table-name {name} --profile {profile}
aws dynamodb list-tables --profile {profile}

# CloudWatch
aws logs describe-log-groups --log-group-name-prefix {prefix} --profile {profile}

# IAM
aws iam get-role --role-name {name} --profile {profile}
aws iam list-roles --profile {profile}
```
</AWS_CLI_PATTERNS>