Claude-skill-registry creative-learnings

Document learnings from creative tests: capture patterns of what worked and what didn't, update the angle/hook performance database, and identify new hypotheses to test. Use after test cycles to capture institutional knowledge and inform future creative strategy.

install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/creative-learnings" ~/.claude/skills/majiayu000-claude-skill-registry-creative-learnings && rm -rf "$T"
manifest: skills/data/creative-learnings/SKILL.md
source content

Creative Learnings

Document and systematize learnings from creative tests.

Process

Step 1: Analyze Recent Test Results

Gather Test Data:

  • All creatives tested in period
  • Performance metrics (CPA, CTR, CVR)
  • Spend and volume
  • Test duration

Categorize Results:

  • Clear winners (scale)
  • Promising (iterate)
  • Clear losers (kill)
  • Inconclusive (retest)
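The four buckets above can be made repeatable with a small rule-based sort. A minimal sketch in Python; the thresholds (30-conversion minimum, 1.5x target CPA for "promising") are illustrative assumptions, not part of the skill:

```python
def categorize(creative, target_cpa, min_conversions=30):
    """Sort one tested creative into scale / iterate / kill / retest.

    Thresholds (30 conversions, 1.5x target CPA) are illustrative only.
    """
    if creative["conversions"] < min_conversions:
        return "retest"                      # inconclusive: not enough volume
    cpa = creative["spend"] / creative["conversions"]
    if cpa <= target_cpa:
        return "scale"                       # clear winner
    if cpa <= 1.5 * target_cpa:
        return "iterate"                     # promising: close to target
    return "kill"                            # clear loser

results = [
    {"name": "hook-A", "spend": 900.0, "conversions": 45},   # CPA $20 -> scale
    {"name": "hook-B", "spend": 500.0, "conversions": 10},   # low volume -> retest
    {"name": "hook-C", "spend": 1200.0, "conversions": 40},  # CPA $30 -> iterate
]
for r in results:
    print(r["name"], categorize(r, target_cpa=25.0))
```

Tune both thresholds to the account's economics; the point is that the same rule gets applied to every creative, every cycle.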

Step 2: Extract Patterns

What Worked - Analyze:

  • Common elements in winners
  • Hook types that performed
  • Body structures that won
  • CTA formats that converted
  • Visual styles that succeeded
  • Avatars that responded

What Didn't Work - Analyze:

  • Common failure points
  • Hook types that failed
  • Angles that didn't resonate
  • Visual styles that flopped
  • Audiences that didn't respond
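Pattern extraction is mostly frequency counting: tag each creative with the elements it used, then count which tags cluster among winners (and, symmetrically, among losers). A minimal sketch; the field names and tag values are hypothetical:

```python
from collections import Counter

# Hypothetical tagging: each winning creative is labeled with the
# elements it used (hook type, angle, format, ...).
winners = [
    {"hook": "greed", "angle": "save-time", "format": "UGC"},
    {"hook": "greed", "angle": "status",    "format": "UGC"},
    {"hook": "fear",  "angle": "save-time", "format": "static"},
]

def element_counts(creatives, field):
    """Count how often each value of `field` appears across creatives."""
    return Counter(c[field] for c in creatives)

print(element_counts(winners, "hook"))    # e.g. Counter({'greed': 2, 'fear': 1})
print(element_counts(winners, "format"))  # which formats cluster in winners
```

Running the same counts over the loser set and comparing the two distributions surfaces the "common elements" and "common failure points" the checklists above ask for.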

Step 3: Update Performance Database

Angle Tracker:

| Angle | Tests | Wins | Win Rate | Best CPA | Notes |
|-------|-------|------|----------|----------|-------|
| [Angle 1] | X | X | X% | $X | [Learning] |

Hook Type Tracker:

| Hook Type | Tests | Wins | Win Rate | Notes |
|-----------|-------|------|----------|-------|
| Greed | X | X | X% | [Learning] |
| Emotion | X | X | X% | [Learning] |

Framework Tracker:

| Framework | Tests | Wins | Win Rate | Notes |
|-----------|-------|------|----------|-------|
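Every tracker row reduces to the same arithmetic: win rate = wins / tests, recomputed whenever a new result lands. A minimal sketch of folding one result into a row; the field names and starting numbers are hypothetical:

```python
def update_tracker(row, won):
    """Fold one new test result into a tracker row and recompute win rate."""
    row["tests"] += 1
    row["wins"] += 1 if won else 0
    row["win_rate"] = row["wins"] / row["tests"]
    return row

# Hypothetical starting state for the "Greed" hook-type row: 3 wins in 9 tests.
greed = {"element": "Greed", "tests": 9, "wins": 3, "win_rate": 3 / 9}
update_tracker(greed, won=True)
print(f"{greed['element']}: {greed['win_rate']:.0%} over {greed['tests']} tests")
# prints "Greed: 40% over 10 tests"
```

Keeping raw tests and wins (not just the percentage) matters: a 50% win rate over 2 tests and over 40 tests warrant very different confidence.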

Step 4: Identify New Hypotheses

From Winners:

  • What can we double down on?
  • What variations should we test?
  • What audiences should we expand to?

From Losers:

  • What should we stop doing?
  • What assumptions were wrong?
  • What variables need isolation?

From Market:

  • What are competitors doing?
  • What trends are emerging?
  • What gaps exist?

Step 5: Output Learnings Document

## CREATIVE LEARNINGS: [Date Range]

### TEST SUMMARY

**Tests Conducted:**
- Total creatives tested: [#]
- Winners identified: [#]
- Win rate: [X%]
- Total test spend: $[X]

**By Category:**
| Type | Tested | Winners | Win Rate |
|------|--------|---------|----------|
| New angles | X | X | X% |
| Hook variations | X | X | X% |
| Body iterations | X | X | X% |
| CTA tests | X | X | X% |

---

### KEY LEARNINGS

**LEARNING 1: [Title]**
- What we tested: [Description]
- Result: [Outcome]
- Why it worked/failed: [Analysis]
- Application: [How to use this]
- Confidence: [High/Medium/Low]

**LEARNING 2: [Title]**
...

---

### WHAT'S WORKING

**Winning Angles:**
1. [Angle] - Why: [Explanation]
2. [Angle] - Why: [Explanation]

**Winning Hook Types:**
1. [Type] - Performance: [Metrics]
2. [Type] - Performance: [Metrics]

**Winning Formats:**
- [Format description and why]

**Winning Visual Styles:**
- [Style description and why]

**Winning Avatars:**
- [Avatar responding best]

---

### WHAT'S NOT WORKING

**Failed Angles:**
1. [Angle] - Why failed: [Analysis]
   - Action: [Stop/Revise/Retest]

**Failed Hook Types:**
1. [Type] - Why failed: [Analysis]

**Failed Formats:**
- [What and why]

**Avoid:**
- [Thing to stop doing]
- [Thing to stop doing]

---

### PATTERN ANALYSIS

**Successful Patterns:**
- [Pattern 1]: Seen in X winners
- [Pattern 2]: Seen in X winners

**Failure Patterns:**
- [Pattern 1]: Seen in X losers
- [Pattern 2]: Seen in X losers

**Correlations Found:**
- [Variable A] + [Variable B] = [Outcome]

---

### ANGLE/HOOK DATABASE UPDATE

**New Additions:**
| Element | Type | Status | Win Rate | Notes |
|---------|------|--------|----------|-------|
| [New angle] | Angle | Proven | X% | [Note] |
| [New hook] | Hook | Testing | - | [Note] |

**Status Changes:**
- [Element]: [Old status] → [New status]

**Retired:**
- [Element]: Reason: [Why removed]

---

### HYPOTHESES FOR NEXT CYCLE

**High Priority Tests:**
1. **Hypothesis:** [Statement]
   - Based on: [Learning that inspired this]
   - Test: [What to create]
   - Expected outcome: [Prediction]

2. **Hypothesis:** [Statement]
   ...

**Medium Priority Tests:**
1. [Hypothesis and test plan]

**Experimental:**
1. [Wild card ideas worth trying]

---

### COMPETITIVE INSIGHTS

**What competitors are doing:**
- [Observation 1]
- [Observation 2]

**Opportunities identified:**
- [Gap we can exploit]

---

### RECOMMENDATIONS

**Creative Strategy Adjustments:**
1. [Recommendation]
2. [Recommendation]

**Process Improvements:**
1. [Recommendation]

**Resource Allocation:**
- More focus on: [Area]
- Less focus on: [Area]

---

### NEXT STEPS

**Immediate (This Week):**
1. [ ] [Action item]
2. [ ] [Action item]

**Short-term (This Month):**
1. [ ] [Action item]

**Share With Team:**
- Key insight to communicate: [Summary]

Building Institutional Knowledge

Document Everything:

  • Even "obvious" learnings
  • Capture the "why" not just "what"
  • Include context and conditions

Make It Searchable:

  • Consistent naming conventions
  • Tags/categories
  • Regular updates

Share and Apply:

  • Team access to learnings
  • Reference in creative briefs
  • Update SOPs based on learnings

Source: General creative optimization best practices