Skills aws-bedrock
install
source · Clone the upstream repo
git clone https://github.com/TerminalSkills/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/TerminalSkills/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aws-bedrock" ~/.claude/skills/terminalskills-skills-aws-bedrock && rm -rf "$T"
manifest:
skills/aws-bedrock/SKILL.md
safety · automated scan (medium risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- pip install
- references AWS credentials
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
AWS Bedrock
Overview
Amazon Bedrock is a fully managed service that provides access to foundation models from multiple providers (Anthropic, Meta, Amazon, Mistral, Cohere) through a unified AWS API. It integrates natively with AWS IAM, VPC, CloudWatch, and S3, making it ideal for enterprise workloads requiring compliance, security controls, and AWS-native data pipelines.
Setup
```bash
pip install boto3

# Configure AWS credentials
aws configure

# Or set environment variables:
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1
```
Enable model access in the AWS Console: Bedrock → Model Access → Enable models
Available Models
| Model ID | Provider | Best For |
|---|---|---|
| anthropic.claude-3-5-sonnet-20241022-v2:0 | Anthropic | Best overall quality |
| anthropic.claude-3-5-haiku-20241022-v1:0 | Anthropic | Fast, cost-efficient |
| anthropic.claude-3-opus-20240229-v1:0 | Anthropic | Most capable reasoning |
| meta.llama3-70b-instruct-v1:0 | Meta | Open-weight, Llama 3 70B |
| meta.llama3-8b-instruct-v1:0 | Meta | Fast, smaller Llama |
| amazon.titan-text-express-v1 | Amazon | AWS-native text generation |
| mistral.mistral-large-2402-v1:0 | Mistral | Code + reasoning |
| cohere.command-r-plus-v1:0 | Cohere | RAG, tool use |
Instructions
Converse API (Recommended)
The Converse API is the unified chat interface for all Bedrock models:
```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=[
        {"role": "user", "content": [{"text": "Explain AWS Lambda in simple terms."}]}
    ],
    system=[{"text": "You are a helpful AWS solutions architect."}],
    inferenceConfig={
        "maxTokens": 1024,
        "temperature": 0.7,
    },
)

print(response["output"]["message"]["content"][0]["text"])
print(f"Input tokens: {response['usage']['inputTokens']}")
print(f"Output tokens: {response['usage']['outputTokens']}")
```
Streaming with Converse
```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse_stream(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=[{"role": "user", "content": [{"text": "Write a Python quicksort implementation."}]}],
)

for event in response["stream"]:
    if "contentBlockDelta" in event:
        delta = event["contentBlockDelta"]["delta"]
        if "text" in delta:
            print(delta["text"], end="", flush=True)
print()
```
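When you also need the final text and token counts, the accumulation logic above can be factored into a pure helper that works on any iterable shaped like the `"stream"` field of a `converse_stream` response (text arrives in `contentBlockDelta` events, token usage in a trailing `metadata` event). A minimal sketch; the fake events below are only for illustration:

```python
def collect_stream(events):
    """Accumulate the full text and the usage block from Converse stream events."""
    chunks, usage = [], {}
    for event in events:
        if "contentBlockDelta" in event:
            delta = event["contentBlockDelta"]["delta"]
            if "text" in delta:
                chunks.append(delta["text"])
        elif "metadata" in event:
            usage = event["metadata"].get("usage", {})
    return "".join(chunks), usage


# Usage with fake events in the same shape (no AWS call needed):
fake = [
    {"contentBlockDelta": {"delta": {"text": "Hello, "}}},
    {"contentBlockDelta": {"delta": {"text": "world."}}},
    {"metadata": {"usage": {"inputTokens": 12, "outputTokens": 4}}},
]
text, usage = collect_stream(fake)
print(text)  # Hello, world.
```

With a real client, pass `response["stream"]` instead of `fake`.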
InvokeModel API (Raw)
For models not yet supported by Converse, or for direct access:
```python
import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude via InvokeModel
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```
Multi-Modal — Image Analysis
```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Converse accepts raw image bytes; no base64 encoding is needed
with open("diagram.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "image": {
                        "format": "png",
                        "source": {"bytes": image_bytes},
                    }
                },
                {"text": "Describe this architecture diagram and identify potential issues."},
            ],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```
Tool Use (Function Calling)
```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

tools = [
    {
        "toolSpec": {
            "name": "query_database",
            "description": "Execute a SQL query against the production database",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "sql": {"type": "string", "description": "SQL query to execute"},
                        "database": {"type": "string", "description": "Database name"},
                    },
                    "required": ["sql"],
                }
            },
        }
    }
]

messages = [{"role": "user", "content": [{"text": "How many active users do we have?"}]}]

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=messages,
    toolConfig={"tools": tools},
)

# Handle tool use
if response["stopReason"] == "tool_use":
    tool_use = next(b for b in response["output"]["message"]["content"] if "toolUse" in b)
    print(f"Tool: {tool_use['toolUse']['name']}")
    print(f"Input: {tool_use['toolUse']['input']}")

    # Return tool result
    messages.append(response["output"]["message"])
    messages.append({
        "role": "user",
        "content": [
            {
                "toolResult": {
                    "toolUseId": tool_use["toolUse"]["toolUseId"],
                    "content": [{"json": {"count": 12483, "active_last_30d": 8921}}],
                }
            }
        ],
    })

    final = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
        messages=messages,
        toolConfig={"tools": tools},
    )
    print(final["output"]["message"]["content"][0]["text"])
```
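The request/dispatch/result cycle above generalizes into a loop. A minimal sketch, generic over any callable with the `converse` keyword signature so it can be exercised without AWS access; the `handlers` mapping and the `max_turns` cap are assumptions of this sketch, not part of the Bedrock API:

```python
def run_tool_loop(converse, model_id, messages, tools, handlers, max_turns=5):
    """Drive a Converse tool-use loop: call the model, execute any requested
    tools via `handlers` (tool name -> callable returning a JSON-able dict),
    feed results back, and repeat until the model stops asking for tools.
    """
    for _ in range(max_turns):
        resp = converse(modelId=model_id, messages=messages, toolConfig={"tools": tools})
        msg = resp["output"]["message"]
        messages.append(msg)
        if resp["stopReason"] != "tool_use":
            return msg["content"][0]["text"]
        results = []
        for block in msg["content"]:
            if "toolUse" in block:
                use = block["toolUse"]
                results.append({
                    "toolResult": {
                        "toolUseId": use["toolUseId"],
                        "content": [{"json": handlers[use["name"]](**use["input"])}],
                    }
                })
        messages.append({"role": "user", "content": results})
    raise RuntimeError("tool loop did not converge")
```

With a real client, pass `bedrock.converse` as `converse`; in tests, a stub that returns response-shaped dicts works the same way.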
Knowledge Bases for RAG
```python
import boto3

# Knowledge Base RAG: Bedrock manages embedding and retrieval
bedrock_agent = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Retrieve relevant documents
retrieve_response = bedrock_agent.retrieve(
    knowledgeBaseId="KB123456789",
    retrievalQuery={"text": "What is our refund policy?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {"numberOfResults": 5}
    },
)

# Extract text from results
contexts = [r["content"]["text"] for r in retrieve_response["retrievalResults"]]

# Generate answer grounded in retrieved docs
retrieve_and_generate = bedrock_agent.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB123456789",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0",
        },
    },
)
print(retrieve_and_generate["output"]["text"])
```
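The `contexts` extracted by `retrieve` can also be stitched into a Converse prompt manually when you want control over the grounding instructions instead of using `retrieve_and_generate`. A minimal sketch; the prompt wording is illustrative, not an official Bedrock template:

```python
def build_grounded_message(question, contexts):
    """Pack retrieved passages and the user question into one Converse
    user message that instructs the model to answer only from context.
    """
    numbered = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    text = (
        "Answer using only the context below. Cite passages as [n]; "
        "say you don't know if the context is insufficient.\n\n"
        f"Context:\n{numbered}\n\nQuestion: {question}"
    )
    return {"role": "user", "content": [{"text": text}]}


# Usage: pass the result to bedrock.converse as messages=[...]
msg = build_grounded_message("Refund policy?", ["30-day refunds.", "Store credit after 30 days."])
print(msg["content"][0]["text"])
```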
Guardrails for Content Safety
```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Apply a guardrail to filter content
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=[{"role": "user", "content": [{"text": "User's message here"}]}],
    guardrailConfig={
        "guardrailIdentifier": "my-guardrail-id",
        "guardrailVersion": "DRAFT",  # or "1", "2", etc.
        "trace": "enabled",
    },
)

# Check if content was blocked
if response.get("trace", {}).get("guardrail", {}).get("inputAssessment"):
    print("Guardrail assessment:", response["trace"]["guardrail"])
```
IAM Policy for Bedrock
The Converse and ConverseStream calls are authorized by the same `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` actions; there are no separate IAM actions for them.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0"
      ]
    }
  ]
}
```
Guidelines
- Use the Converse API for new integrations — it's model-agnostic and handles message formatting.
- Enable models in the AWS Console before first use — they are not enabled by default.
- Bedrock processes data in the selected AWS region; choose a region that satisfies your data residency requirements.
- Knowledge Bases handle chunking, embedding, and retrieval automatically with OpenSearch Serverless.
- Guardrails can block harmful content, PII, and off-topic queries before they reach the model.
- Use `converse_stream` for user-facing features to reduce perceived latency.
- Cross-region inference profiles let you automatically fall back to other regions if capacity is unavailable.
- Monitor costs with AWS Cost Explorer; tag Bedrock calls with application-specific tags.
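For the cost-monitoring point above, per-call cost can be estimated directly from the `usage` block every Converse response returns. A sketch with placeholder prices; real rates vary by model and region, so take them from the current Bedrock pricing page:

```python
def estimate_cost(usage, price_per_1k_input, price_per_1k_output):
    """Estimate the USD cost of one call from a Converse usage block.
    The prices passed in are the caller's responsibility to keep current.
    """
    return (
        usage["inputTokens"] / 1000 * price_per_1k_input
        + usage["outputTokens"] / 1000 * price_per_1k_output
    )


# Usage with placeholder prices (NOT real Bedrock rates):
usage = {"inputTokens": 2000, "outputTokens": 500}
print(f"${estimate_cost(usage, 0.003, 0.015):.4f}")  # $0.0135
```

With a real response, pass `response["usage"]` instead of the literal dict.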