# measure

Quantify values with uncertainty bounds. Use when estimating metrics, calculating risk scores, assessing magnitude, or measuring any quantifiable property.

Install the registry:

```shell
git clone https://github.com/majiayu000/claude-skill-registry-data
```

Or copy just this skill into `~/.claude/skills`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/measure" ~/.claude/skills/majiayu000-claude-skill-registry-data-measure && rm -rf "$T"
```

Skill definition: `data/measure/SKILL.md`

## Intent
Quantify a specific metric for a target, providing a numerical value with explicit uncertainty bounds. This capability consolidates all estimation tasks (risk, impact, effort, etc.) into a single parameterized operation.
Success criteria:
- Numerical value provided for requested metric
- Uncertainty bounds explicitly stated
- Measurement method documented
- Units clearly specified
Compatible schemas:
- `schemas/output_schema.yaml`
## Inputs

| Parameter | Required | Type | Description |
|---|---|---|---|
| `target` | Yes | any | What to measure (system, code, entity, process) |
| `metric` | Yes | string | The metric to quantify (risk, complexity, effort, size, etc.) |
| `unit` | No | string | Unit of measurement (optional, inferred if not provided) |
| `method` | No | string | Measurement approach (heuristic, statistical, model-based) |
## Procedure

1. **Define the metric**: Clarify exactly what is being measured
   - Establish a clear definition of the metric
   - Identify the appropriate unit of measurement
   - Determine the measurement methodology
2. **Gather measurement inputs**: Collect the data needed for measurement
   - Read relevant files, logs, or data sources
   - Identify quantifiable indicators
   - Note data quality and completeness
3. **Calculate the measurement**: Apply the measurement method to the inputs
   - Use the appropriate calculation for the metric type
   - For risk: probability * impact assessment
   - For complexity: cyclomatic, cognitive, or structural metrics
   - For effort: decomposition and estimation techniques
4. **Establish uncertainty bounds**: Quantify measurement confidence
   - Calculate or estimate lower and upper bounds
   - Consider data quality and method limitations
   - Express as a confidence interval or range
5. **Ground the measurement**: Document evidence and method
   - Reference the specific data points used
   - Note any assumptions made in the calculation
   - Document the measurement methodology
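As an illustration of steps 3 and 4, here is a minimal sketch of a probability * impact risk calculation with a fixed relative-error band for the bounds. The helper names (`measure_risk`, `bounds`) and the 20% band are illustrative assumptions, not part of the skill definition:

```python
def bounds(value: float, rel_err: float = 0.2) -> tuple[float, float]:
    """Symmetric relative-error band, clamped to the metric's [0, 1] range."""
    return max(0.0, value * (1 - rel_err)), min(1.0, value * (1 + rel_err))

def measure_risk(probability: float, impact: float) -> dict:
    # Step 3: probability * impact assessment
    value = probability * impact
    # Step 4: explicit lower/upper bounds instead of a bare point estimate
    lower, upper = bounds(value)
    return {
        "value": round(value, 2),
        "uncertainty": {"lower": round(lower, 2), "upper": round(upper, 2), "type": "range"},
        "unit": "risk score (0-1)",
        "metric": "risk",
        "method": f"probability ({probability}) * impact ({impact})",
    }

result = measure_risk(probability=0.5, impact=0.7)
```

A real measurement would derive the band from data quality and method limitations rather than a fixed percentage; the point is that the bounds always accompany the value.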
## Output Contract

Return a structured object:

```yaml
value: number            # The measured value
uncertainty:
  lower: number          # Lower bound (confidence interval)
  upper: number          # Upper bound (confidence interval)
  type: string           # Type of uncertainty (confidence_interval, range, std_dev)
unit: string             # Unit of measurement
metric: string           # What was measured
method: string           # How it was measured
breakdown:               # Optional component breakdown
  - component: string
    value: number
confidence: 0..1         # Confidence in measurement accuracy
evidence_anchors: ["file:line", "data:source"]
assumptions: []
```
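For consumers in code, the contract can be mirrored as simple dataclasses. This is a sketch under the assumption that the YAML above is authoritative; the class names (`Measurement`, `Uncertainty`) are illustrative, not part of the schema:

```python
from dataclasses import dataclass, field

@dataclass
class Uncertainty:
    lower: float
    upper: float
    type: str  # "confidence_interval", "range", or "std_dev"

@dataclass
class Measurement:
    value: float
    uncertainty: Uncertainty
    unit: str
    metric: str
    method: str
    confidence: float  # 0..1
    breakdown: list[dict] = field(default_factory=list)
    evidence_anchors: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)

m = Measurement(
    value=24, uncertainty=Uncertainty(22, 26, "range"),
    unit="cyclomatic complexity", metric="complexity",
    method="counted decision points", confidence=0.9,
)
```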
## Field Definitions

| Field | Type | Description |
|---|---|---|
| `value` | number | Primary measurement value |
| `uncertainty` | object | Bounds and type of uncertainty |
| `unit` | string | Unit of measurement |
| `metric` | string | Name of measured property |
| `method` | string | Measurement methodology used |
| `breakdown` | array | Optional decomposition of measurement |
## Examples

### Example 1: Measure Code Complexity

Input:

```yaml
target: "src/services/order_processor.rb"
metric: "complexity"
unit: "cyclomatic"
```
Output:

```yaml
value: 24
uncertainty:
  lower: 22
  upper: 26
  type: "range"
unit: "cyclomatic complexity"
metric: "complexity"
method: "counted decision points (if, case, loops, &&, ||)"
breakdown:
  - component: "process_order method"
    value: 12
  - component: "validate_items method"
    value: 8
  - component: "calculate_totals method"
    value: 4
confidence: 0.9
evidence_anchors:
  - "src/services/order_processor.rb:15-120"
assumptions:
  - "Ruby control flow statements counted as decision points"
  - "Exception handling counted as branches"
```
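The "counted decision points" method in this example can be approximated with a keyword scan. This sketch matches Ruby branch tokens textually; the token list and sample snippet are illustrative, and real tooling would parse the AST instead, so treat the result as an estimate only:

```python
import re

# Ruby branching keywords plus short-circuit operators, matched as text.
DECISION_TOKENS = re.compile(
    r"\b(if|elsif|unless|case|when|while|until|for|rescue)\b|&&|\|\|"
)

def cyclomatic_estimate(source: str) -> int:
    # 1 for the single entry path, plus one per decision point found.
    return 1 + len(DECISION_TOKENS.findall(source))

snippet = """
def process_order(order)
  return unless order.valid?
  if order.items.any? && order.paid?
    order.items.each { |i| apply_discount(i) if i.eligible? }
  end
end
"""
count = cyclomatic_estimate(snippet)  # unless, if, &&, if -> 1 + 4
```

Because a textual scan can miscount (keywords in strings or comments, modifier forms), the output's `range` uncertainty and the stated assumptions are what keep such an estimate honest.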
### Example 2: Measure Risk

Input:

```yaml
target: "database migration: add user_preferences column"
metric: "risk"
method: "probability_impact"
```
Output:

```yaml
value: 0.35
uncertainty:
  lower: 0.25
  upper: 0.50
  type: "confidence_interval_90"
unit: "risk score (0-1)"
metric: "risk"
method: "probability (0.5) * impact (0.7) normalized"
breakdown:
  - component: "probability_of_failure"
    value: 0.5
  - component: "impact_if_failed"
    value: 0.7
  - component: "data_loss_risk"
    value: 0.2
  - component: "downtime_risk"
    value: 0.4
confidence: 0.7
evidence_anchors:
  - "migrations/20240115_add_preferences.rb:1-25"
  - "tool:grep:similar_migrations"
assumptions:
  - "Table has ~1M rows based on user count"
  - "Migration will lock table during ALTER"
  - "No concurrent deployments during migration"
```
## Verification
- Numerical value provided
- Uncertainty bounds are reasonable (lower < value < upper)
- Unit of measurement specified
- Method documented
- Evidence anchors reference measurement inputs
Verification tools: `Read` (to verify measurement inputs)
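The checklist above can be automated with a small validator. The field names follow the output contract, but the function itself (`verify_measurement`) is an illustrative sketch, not part of the skill:

```python
REQUIRED = ("value", "uncertainty", "unit", "metric", "method")

def verify_measurement(m: dict) -> list[str]:
    """Return a list of problems; an empty list means the checks pass."""
    problems = [f"missing field: {f}" for f in REQUIRED if f not in m]
    u = m.get("uncertainty", {})
    if all(k in u for k in ("lower", "upper")) and "value" in m:
        # Bounds must bracket the value (lower <= value <= upper).
        if not (u["lower"] <= m["value"] <= u["upper"]):
            problems.append("bounds do not bracket the value")
    return problems

ok = verify_measurement({
    "value": 24,
    "uncertainty": {"lower": 22, "upper": 26, "type": "range"},
    "unit": "cyclomatic complexity",
    "metric": "complexity",
    "method": "counted decision points",
})
```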
## Safety Constraints

```yaml
mutation: false
requires_checkpoint: false
requires_approval: false
risk: low
```
Capability-specific rules:
- Always provide uncertainty bounds, never claim false precision
- Document measurement methodology for reproducibility
- Flag when data is insufficient for reliable measurement
- Do not extrapolate beyond available data without noting assumptions
## Composition Patterns

Commonly follows:
- `observe`: measure properties of observed state
- `detect`: measure characteristics of detected items
- `retrieve`: measure retrieved data

Commonly precedes:
- `predict`: measurements feed into predictions
- `compare`: measurements enable quantitative comparison
- `plan`: measurements inform risk-aware planning

Anti-patterns:
- Never use `measure` for binary detection (use `detect`)
- Avoid `measure` for categorical assessment (use `classify`)

Workflow references:
- See `reference/workflow_catalog.yaml#digital_twin_sync_loop` for risk measurement