Awesome-omni-skills datadog-automation-v2
Datadog Automation via Rube MCP workflow skill. Use this skill when the user needs to automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, and create events and downtimes. Always search tools first for current schemas, and preserve the upstream workflow, copied support files, and provenance before merging or handing off.
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/datadog-automation-v2" ~/.claude/skills/diegosouzapw-awesome-omni-skills-datadog-automation-v2 && rm -rf "$T"
skills/datadog-automation-v2/SKILL.md
Datadog Automation via Rube MCP
Overview
This public intake copy packages `plugins/antigravity-awesome-skills/skills/datadog-automation` from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.
Datadog Automation via Rube MCP: Automate Datadog monitoring and observability operations through Composio's Datadog toolkit via Rube MCP.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Prerequisites, Common Patterns, Known Pitfalls, Limitations.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Use it to execute the workflow or actions described in the overview.
- Use when the request clearly matches the imported source intent: Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes. Always search tools first for current schemas.
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.
- Use when copied upstream references, examples, or scripts materially improve the answer.
- Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
- Call `RUBE_MANAGE_CONNECTIONS` with toolkit `datadog`
- If the connection is not ACTIVE, follow the returned auth link to complete Datadog authentication
- Confirm connection status shows ACTIVE before running any workflows
- `DATADOG_LIST_METRICS`: List available metric names [Optional]
- `DATADOG_QUERY_METRICS`: Query metric time series data [Required]
- `query`: Datadog metric query string (e.g., `avg:system.cpu.user{host:web01}`)
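As an illustration of the connection check above, the sketch below shows one possible call shape. The `call_tool` helper and the `auth_link` field are hypothetical stand-ins; the authoritative argument schema always comes from `RUBE_SEARCH_TOOLS`.

```python
# Hedged sketch of the connection check; not an authoritative client API.
def call_tool(slug: str, arguments: dict) -> dict:
    """Hypothetical stand-in for your MCP client's tool invocation."""
    raise NotImplementedError("Provided by your MCP client")

def ensure_datadog_connection() -> None:
    # Check the Datadog toolkit connection before running any workflow.
    conn = call_tool("RUBE_MANAGE_CONNECTIONS", {"toolkit": "datadog"})
    if conn.get("status") != "ACTIVE":
        # "auth_link" is an assumed field name for the returned auth link.
        raise RuntimeError(f"Authenticate first: {conn.get('auth_link')}")
```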
Imported Workflow Notes
Imported: Setup
Get Rube MCP: Add https://rube.app/mcp as an MCP server in your client configuration. No API keys are needed; just add the endpoint and it works.
- Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
- Call `RUBE_MANAGE_CONNECTIONS` with toolkit `datadog`
- If the connection is not ACTIVE, follow the returned auth link to complete Datadog authentication
- Confirm connection status shows ACTIVE before running any workflows
Imported: Core Workflows
1. Query and Explore Metrics
When to use: User wants to query metric data or list available metrics
Tool sequence:
- `DATADOG_LIST_METRICS`: List available metric names [Optional]
- `DATADOG_QUERY_METRICS`: Query metric time series data [Required]
Key parameters:
- `query`: Datadog metric query string (e.g., `avg:system.cpu.user{host:web01}`)
- `from`: Start timestamp (Unix epoch seconds)
- `to`: End timestamp (Unix epoch seconds)
- `q`: Search string for listing metrics
Pitfalls:
- Query syntax follows Datadog's metric query format: `aggregation:metric_name{tag_filters}`
- `from` and `to` are Unix epoch timestamps in seconds, not milliseconds
- Valid aggregations: `avg`, `sum`, `min`, `max`, `count`
- Tag filters use curly braces: `{host:web01,env:prod}`
- Time range should not exceed Datadog's retention limits for the metric type
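A minimal worked example of these parameters, assuming the field names above survive schema verification:

```python
import time

# Build a metric query in the aggregation:metric_name{tag_filters} format,
# with from/to as Unix epoch *seconds* (not milliseconds).
to_ts = int(time.time())
from_ts = to_ts - 3600  # last hour

metrics_args = {
    "query": "avg:system.cpu.user{host:web01,env:prod}",
    "from": from_ts,
    "to": to_ts,
}
# Pass metrics_args to DATADOG_QUERY_METRICS via your MCP client.
```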
2. Search and Analyze Logs
When to use: User wants to search log entries or list log indexes
Tool sequence:
- `DATADOG_LIST_LOG_INDEXES`: List available log indexes [Optional]
- `DATADOG_SEARCH_LOGS`: Search logs with query and filters [Required]
Key parameters:
- `query`: Log search query using Datadog log query syntax
- `from`: Start time (ISO 8601 or Unix timestamp)
- `to`: End time (ISO 8601 or Unix timestamp)
- `sort`: Sort order ('asc' or 'desc')
- `limit`: Number of log entries to return
Pitfalls:
- Log queries use Datadog's log search syntax: `service:web status:error`
- Search is limited to retained logs within the configured retention period
- Large result sets require pagination; check for cursor/page tokens
- Log indexes control routing and retention; filter by index if known
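A hedged sketch of a log search payload built from the parameters above; field names and accepted timestamp formats should be confirmed against the live schema from `RUBE_SEARCH_TOOLS`:

```python
# Illustrative DATADOG_SEARCH_LOGS arguments; ISO 8601 timestamps are
# used here since the notes above say the endpoint accepts them.
log_search_args = {
    "query": "service:web status:error",  # Datadog log search syntax
    "from": "2024-01-01T00:00:00Z",
    "to": "2024-01-01T06:00:00Z",
    "sort": "desc",  # newest first
    "limit": 50,     # watch for cursor/page tokens on large result sets
}
```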
3. Manage Monitors
When to use: User wants to create, update, mute, or inspect monitors
Tool sequence:
- `DATADOG_LIST_MONITORS`: List all monitors with filters [Required]
- `DATADOG_GET_MONITOR`: Get specific monitor details [Optional]
- `DATADOG_CREATE_MONITOR`: Create a new monitor [Optional]
- `DATADOG_UPDATE_MONITOR`: Update monitor configuration [Optional]
- `DATADOG_MUTE_MONITOR`: Silence a monitor temporarily [Optional]
- `DATADOG_UNMUTE_MONITOR`: Re-enable a muted monitor [Optional]
Key parameters:
- `monitor_id`: Numeric monitor ID
- `name`: Monitor display name
- `type`: Monitor type ('metric alert', 'service check', 'log alert', 'query alert', etc.)
- `query`: Monitor query defining the alert condition
- `message`: Notification message with @mentions
- `tags`: Array of tag strings
- `thresholds`: Alert threshold values (`critical`, `warning`, `ok`)
Pitfalls:
- Monitor `type` must match the query type; mismatches cause creation failures
- `message` supports @mentions for notifications (e.g., `@slack-channel`, `@pagerduty`)
- Thresholds vary by monitor type; metric monitors need `critical` at minimum
- Muting a monitor suppresses notifications but the monitor still evaluates
- Monitor IDs are numeric integers
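To make the parameter interplay concrete, here is a hedged sketch of a metric-alert creation payload; the Slack handle and threshold values are placeholders, and the real schema comes from `RUBE_SEARCH_TOOLS`:

```python
# Illustrative DATADOG_CREATE_MONITOR payload. The "type" must match
# the query style: this query is a metric alert, so type is "metric alert".
monitor_args = {
    "name": "High CPU on prod web hosts",
    "type": "metric alert",
    "query": "avg(last_5m):avg:system.cpu.user{env:prod} > 90",
    "message": "CPU above 90% for 5 minutes. @slack-ops-alerts",  # placeholder handle
    "tags": ["env:prod", "team:platform"],
    "thresholds": {"critical": 90, "warning": 80},  # critical is the minimum required
}
```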
4. Manage Dashboards
When to use: User wants to list, view, update, or delete dashboards
Tool sequence:
- `DATADOG_LIST_DASHBOARDS`: List all dashboards [Required]
- `DATADOG_GET_DASHBOARD`: Get full dashboard definition [Optional]
- `DATADOG_UPDATE_DASHBOARD`: Update dashboard layout or widgets [Optional]
- `DATADOG_DELETE_DASHBOARD`: Remove a dashboard (irreversible) [Optional]
Key parameters:
- `dashboard_id`: Dashboard identifier string
- `title`: Dashboard title
- `layout_type`: 'ordered' (grid) or 'free' (freeform positioning)
- `widgets`: Array of widget definition objects
- `description`: Dashboard description
Pitfalls:
- Dashboard IDs are alphanumeric strings (e.g., 'abc-def-ghi'), not numeric
- `layout_type` cannot be changed after creation; must recreate the dashboard
- Widget definitions are complex nested objects; get the existing dashboard first to understand its structure
- DELETE is permanent; there is no undo
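Because widget definitions are deeply nested, a get-then-update flow is the safest pattern. The sketch below assumes a hypothetical `call_tool` helper and that the response exposes `title` and `widgets` keys:

```python
def call_tool(slug: str, arguments: dict) -> dict:
    """Hypothetical stand-in for your MCP client's tool invocation."""
    raise NotImplementedError("Provided by your MCP client")

def rename_dashboard(dashboard_id: str, new_title: str) -> None:
    # Fetch the full definition first so the nested widgets survive the update.
    current = call_tool("DATADOG_GET_DASHBOARD", {"dashboard_id": dashboard_id})
    call_tool("DATADOG_UPDATE_DASHBOARD", {
        "dashboard_id": dashboard_id,   # alphanumeric string, e.g. "abc-def-ghi"
        "title": new_title,
        "widgets": current["widgets"],  # reuse the existing widget objects unchanged
    })
```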
5. Create Events and Manage Downtimes
When to use: User wants to post events or schedule maintenance downtimes
Tool sequence:
- `DATADOG_LIST_EVENTS`: List existing events [Optional]
- `DATADOG_CREATE_EVENT`: Post a new event [Required]
- `DATADOG_CREATE_DOWNTIME`: Schedule a maintenance downtime [Optional]
Key parameters for events:
- `title`: Event title
- `text`: Event body text (supports markdown)
- `alert_type`: Event severity ('error', 'warning', 'info', 'success')
- `tags`: Array of tag strings
Key parameters for downtimes:
- `scope`: Tag scope for the downtime (e.g., `host:web01`)
- `start`: Start time (Unix epoch)
- `end`: End time (Unix epoch; omit for indefinite)
- `message`: Downtime description
- `monitor_id`: Specific monitor to downtime (optional; omit for scope-based)
Pitfalls:
- Event `text` supports Datadog's markdown format including @mentions
- Downtime scope uses tag syntax: `host:web01`, `env:staging`
- Omitting `end` creates an indefinite downtime; always set an end time for maintenance
- Downtime `monitor_id` narrows to a single monitor; scope applies to all matching monitors
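A hedged sketch of a bounded maintenance downtime, following the pitfall above about always setting an end time:

```python
import time

# Illustrative DATADOG_CREATE_DOWNTIME payload for a two-hour window.
start = int(time.time())  # Unix epoch seconds
downtime_args = {
    "scope": "host:web01",    # tag syntax; applies to all matching monitors
    "start": start,
    "end": start + 2 * 3600,  # omit only if you truly want an indefinite downtime
    "message": "Planned maintenance: web01 kernel upgrade",
}
```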
6. Manage Hosts and Traces
When to use: User wants to list infrastructure hosts or inspect distributed traces
Tool sequence:
- `DATADOG_LIST_HOSTS`: List all reporting hosts [Required]
- `DATADOG_GET_TRACE_BY_ID`: Get a specific distributed trace [Optional]
Key parameters:
- `filter`: Host search filter string
- `sort_field`: Sort hosts by field (e.g., 'name', 'apps', 'cpu')
- `sort_dir`: Sort direction ('asc' or 'desc')
- `trace_id`: Distributed trace ID for trace lookup
Pitfalls:
- Host list includes all hosts reporting to Datadog within the retention window
- Trace IDs are long numeric strings; ensure exact match
- Hosts that stop reporting are retained for a configured period before removal
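For completeness, hedged argument sketches for the host and trace lookups above; the parameter names mirror the list above but should still be verified against the live schema:

```python
# Illustrative DATADOG_LIST_HOSTS arguments: prod hosts, busiest CPU first.
host_args = {
    "filter": "env:prod",
    "sort_field": "cpu",
    "sort_dir": "desc",
}

# Illustrative DATADOG_GET_TRACE_BY_ID arguments; trace IDs are long
# numeric strings and must match exactly (this one is made up).
trace_args = {"trace_id": "1234567890123456789"}
```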
Imported: Prerequisites
- Rube MCP must be connected (`RUBE_SEARCH_TOOLS` available)
- Active Datadog connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `datadog`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas
Examples
Example 1: Ask for the upstream workflow directly
Use @datadog-automation-v2 to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @datadog-automation-v2 against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @datadog-automation-v2 for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @datadog-automation-v2 using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
- Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
- Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
- Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
- Treat generated examples as scaffolding; adapt them to the concrete task before execution.
- Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in `plugins/antigravity-awesome-skills/skills/datadog-automation`, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated `SKILL.md`, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Related Skills
- @customer-support-v2: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @customs-trade-compliance-v2: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @daily-gift-v2: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @daily-news-report-v2: Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| References | copied reference notes, guides, or background material from upstream | |
| Examples | worked examples or reusable prompts copied from upstream | |
| Scripts | upstream helper scripts that change execution or validation | |
| Routing | routing or delegation notes that are genuinely part of the imported package | |
| Assets | supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Quick Reference
| Task | Tool Slug | Key Params |
|---|---|---|
| Query metrics | DATADOG_QUERY_METRICS | query, from, to |
| List metrics | DATADOG_LIST_METRICS | q |
| Search logs | DATADOG_SEARCH_LOGS | query, from, to, limit |
| List log indexes | DATADOG_LIST_LOG_INDEXES | (none) |
| List monitors | DATADOG_LIST_MONITORS | tags |
| Get monitor | DATADOG_GET_MONITOR | monitor_id |
| Create monitor | DATADOG_CREATE_MONITOR | name, type, query, message |
| Update monitor | DATADOG_UPDATE_MONITOR | monitor_id |
| Mute monitor | DATADOG_MUTE_MONITOR | monitor_id |
| Unmute monitor | DATADOG_UNMUTE_MONITOR | monitor_id |
| List dashboards | DATADOG_LIST_DASHBOARDS | (none) |
| Get dashboard | DATADOG_GET_DASHBOARD | dashboard_id |
| Update dashboard | DATADOG_UPDATE_DASHBOARD | dashboard_id, title, widgets |
| Delete dashboard | DATADOG_DELETE_DASHBOARD | dashboard_id |
| List events | DATADOG_LIST_EVENTS | start, end |
| Create event | DATADOG_CREATE_EVENT | title, text, alert_type |
| Create downtime | DATADOG_CREATE_DOWNTIME | scope, start, end |
| List hosts | DATADOG_LIST_HOSTS | filter, sort_field |
| Get trace | DATADOG_GET_TRACE_BY_ID | trace_id |
Imported: Common Patterns
Monitor Query Syntax
Metric alerts:
avg(last_5m):avg:system.cpu.user{env:prod} > 90
Log alerts:
logs("service:web status:error").index("main").rollup("count").last("5m") > 10
Tag Filtering
- Tags use `key:value` format: `host:web01`, `env:prod`, `service:api`
- Multiple tags: `{host:web01,env:prod}` (AND logic)
- Wildcard: `host:web*`
Pagination
- Use `page` and `page_size` or offset-based pagination depending on the endpoint
- Check the response for a total count to determine if more pages exist
- Continue until all results are retrieved
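A minimal pagination loop under those assumptions (page/page_size parameters and a response with a `results` list); the real field names depend on the endpoint schema:

```python
from typing import Callable

def fetch_all(call_tool: Callable[[str, dict], dict], slug: str,
              base_args: dict, page_size: int = 100) -> list:
    # Keep requesting pages until a short (or empty) page signals the end.
    results: list = []
    page = 0
    while True:
        resp = call_tool(slug, {**base_args, "page": page, "page_size": page_size})
        batch = resp.get("results", [])  # assumed response key
        results.extend(batch)
        if len(batch) < page_size:
            break
        page += 1
    return results
```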
Imported: Known Pitfalls
Timestamps:
- Most endpoints use Unix epoch seconds (not milliseconds)
- Some endpoints accept ISO 8601; check tool schema
- Time ranges should be reasonable (not years of data)
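A small sketch of building both timestamp flavors safely; `int()` guards against accidentally sending floats or milliseconds:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
to_epoch = int(now.timestamp())                              # epoch seconds
from_epoch = int((now - timedelta(minutes=15)).timestamp())  # last 15 minutes
iso_from = (now - timedelta(minutes=15)).isoformat()         # for ISO 8601 endpoints
```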
Query Syntax:
- Metric queries: `aggregation:metric{tags}`
- Log queries: `field:value` pairs
- Monitor queries vary by type; check Datadog documentation
Rate Limits:
- Datadog API has per-endpoint rate limits
- Implement backoff on 429 responses
- Batch operations where possible
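A hedged backoff sketch, assuming the client surfaces the HTTP status in the response dict (adapt to how your MCP client actually reports 429s):

```python
import random
import time
from typing import Callable

def call_with_backoff(call_tool: Callable[[str, dict], dict], slug: str,
                      args: dict, max_retries: int = 5) -> dict:
    for attempt in range(max_retries):
        resp = call_tool(slug, args)
        if resp.get("status") != 429:  # assumed status field
            return resp
        # Exponential backoff with jitter: ~1s, 2s, 4s, ...
        time.sleep((2 ** attempt) + random.random())
    raise RuntimeError(f"{slug} still rate-limited after {max_retries} retries")
```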
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.