Awesome-omni-skill experiment-loop

Weekly experiment tracking loop for MD Home Care. Scans content changes, measures traffic impact via PostHog and GSC, and makes keep/iterate/revert decisions with lag-adjusted attribution.

Install

Source: clone the upstream repo

git clone https://github.com/diegosouzapw/awesome-omni-skill

Claude Code: install into ~/.claude/skills/

T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/tools/experiment-loop-majiayu000" ~/.claude/skills/diegosouzapw-awesome-omni-skill-experiment-loop && rm -rf "$T"

Manifest: skills/tools/experiment-loop-majiayu000/SKILL.md

Source content

Experiment Loop for MD Home Care

Tracks content changes, measures their impact on traffic and rankings, and decides whether to keep, iterate, or revert. Runs weekly.

CRITICAL: Lag Times for YMYL Content

YMYL content (aged care, disability services) has longer lag times than SaaS content. Do not evaluate changes too early.

| Change Type | SEO Lag | AEO Lag | Evaluation Window |
|---|---|---|---|
| Service page optimization | 10-21 days | 3-7 days | 3 weeks minimum |
| Location page creation | 14-21 days | 7-14 days | 3 weeks minimum |
| Blog post publishing | 7-14 days | 3-7 days | 2 weeks minimum |
| Provider comparison addition | 7-14 days | 3-7 days | 2 weeks minimum |
| Trust signal enhancement | 10-21 days | 7-14 days | 3 weeks minimum |
| FAQ addition | 7-14 days | 3-7 days | 2 weeks minimum |
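
Before measuring anything, confirm the change has cleared its window. A minimal check against the commit timestamp; the commit hash and 21-day window here are placeholders:

COMMIT=abc1234      # placeholder: the commit that shipped the change
WINDOW_DAYS=21      # from the table above (e.g. service page optimization)

CHANGE_TS=$(git show -s --format=%ct "$COMMIT")
DAYS_ELAPSED=$(( ($(date +%s) - CHANGE_TS) / 86400 ))

if [ "$DAYS_ELAPSED" -ge "$WINDOW_DAYS" ]; then
  echo "Window elapsed (${DAYS_ELAPSED}d): evaluate now"
else
  echo "WAIT: ${DAYS_ELAPSED}d of ${WINDOW_DAYS}d elapsed"
fi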

Step 1: Weekly Git Scan

Identify all content changes from the past week:

cd ~/Projects/mdhomecarebuild

# All content changes in last 7 days
git log --since="7 days ago" --name-only --pretty=format:"%h %s" -- "src/content/**/*.md" "src/content/**/*.mdx"

# Summarize by type
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/blog/*.md" | sort -u | head -20
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/services/*.md" | sort -u | head -20
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/providers/*.md" | sort -u | head -20

Categorize each change (a git filter that separates new pages from edits is sketched after this list):

  • New page: Completely new content file
  • Major edit: Structural changes (new sections, comparison tables, rewritten H1/H2)
  • Minor edit: Small fixes (typos, link updates, frontmatter changes)
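
git can split new pages from edits automatically; judging major vs minor still means reading the diff. A sketch of the split:

# New pages (files added in the last 7 days)
git log --since="7 days ago" --diff-filter=A --name-only --pretty=format: -- "src/content/**/*.md" "src/content/**/*.mdx" | sort -u

# Edited pages (inspect each diff to call major vs minor)
git log --since="7 days ago" --diff-filter=M --name-only --pretty=format: -- "src/content/**/*.md" "src/content/**/*.mdx" | sort -u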

Step 2: Baseline Measurement

For each changed page, capture the pre-change baseline. If no baseline was captured before the change, use data from the preceding period as a proxy.

GSC Baseline

cd ~/Projects/mdhomecarebuild

# For each changed page, get keyword data
python3 src/scripts/advanced_gsc_analyzer.py --page "/services/[slug]"
python3 src/scripts/advanced_gsc_analyzer.py --page "/blog/[slug]"

Record:

  • Top 10 keywords by clicks
  • Average position for primary keyword
  • Total impressions and clicks (last 7 days)

PostHog Baseline

# Page traffic
python3 src/scripts/posthog_analytics.py --page "/services/[slug]" --days 7

# AI referral traffic
python3 src/scripts/posthog_analytics.py --ai-referrals --days 7

Record:

  • Total pageviews (last 7 days)
  • AI referral visits to that page
  • Traffic sources breakdown
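
To give Step 3 a fixed reference, one way to persist both baselines to dated files; the experiments/baselines directory and the slug are assumptions, not part of the existing scripts:

SLUG="sil-services"                          # placeholder page slug
DIR="experiments/baselines/$(date +%F)"
mkdir -p "$DIR"

python3 src/scripts/advanced_gsc_analyzer.py --page "/services/$SLUG" > "$DIR/$SLUG-gsc.txt"
python3 src/scripts/posthog_analytics.py --page "/services/$SLUG" --days 7 > "$DIR/$SLUG-posthog.txt"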

Step 3: Post-Change Measurement

After the evaluation window has passed (see lag times table), measure again.

# GSC: same page analysis
python3 src/scripts/advanced_gsc_analyzer.py --page "/services/[slug]"

# PostHog: same page traffic
python3 src/scripts/posthog_analytics.py --page "/services/[slug]" --days 7
python3 src/scripts/posthog_analytics.py --ai-referrals --days 7
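
If baselines were written to files in Step 2, a quick side-by-side; <baseline-date> stands for whichever capture directory Step 2 produced:

SLUG="sil-services"
python3 src/scripts/advanced_gsc_analyzer.py --page "/services/$SLUG" > "/tmp/$SLUG-gsc-now.txt"
diff "experiments/baselines/<baseline-date>/$SLUG-gsc.txt" "/tmp/$SLUG-gsc-now.txt"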

Step 4: Attribution and Decision

Compare Metrics

For each experiment, calculate:

| Metric | Before | After | Change |
|---|---|---|---|
| Organic clicks (7d) | X | Y | +/- % |
| Impressions (7d) | X | Y | +/- % |
| Avg position (primary KW) | X | Y | +/- positions |
| AI referral visits (7d) | X | Y | +/- % |
| Total pageviews (7d) | X | Y | +/- % |

Decision Framework

KEEP if:

  • Organic clicks increased >10%
  • OR average position improved by 2+ positions
  • OR AI referral visits increased >20%
  • OR impressions increased >15% (leading indicator)
  • AND no negative impact on other pages (cannibalization check)

ITERATE if:

  • Mixed signals (some metrics up, some flat)
  • OR small positive movement (<10% clicks) that suggests potential
  • OR evaluation window has not fully elapsed
  • Action: Make targeted refinements and re-evaluate after another cycle

REVERT if:

  • Organic clicks decreased >15%
  • AND average position dropped by 3+ positions
  • AND no compensating AI referral increase
  • Action: Restore previous version via git, document what went wrong
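
One way to revert a single page without rewriting history, assuming abc1234 is the last known-good commit for that file and the path is an example:

git checkout abc1234 -- "src/content/services/sil-services.md"
git commit -m "Revert SIL services page to pre-experiment version (see PLAYBOOK.md)"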

WAIT if:

  • Change is too recent (within lag window)
  • Action: Re-evaluate next week
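
The clicks thresholds above as a runnable sketch; the before/after counts are examples, and the position, impression, and AI-referral checks follow the same pattern:

BEFORE=40; AFTER=46   # example 7-day organic click counts

DELTA=$(awk -v b="$BEFORE" -v a="$AFTER" 'BEGIN { printf "%.1f", (a - b) / b * 100 }')

if awk -v d="$DELTA" 'BEGIN { exit !(d > 10) }'; then
  echo "KEEP (clicks ${DELTA}%) - still run the cannibalization check"
elif awk -v d="$DELTA" 'BEGIN { exit !(d < -15) }'; then
  echo "REVERT candidate (clicks ${DELTA}%) - confirm position drop and AI referrals first"
else
  echo "ITERATE (clicks ${DELTA}%)"
fi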

Step 5: Log to Playbook

Record every experiment result in PLAYBOOK.md:

## [Date] - [Experiment Name]

**Category:** [Service page optimization / Location page / Blog post / Comparison / Trust signal / FAQ]
**Page:** [URL path]
**Change:** [Brief description of what was changed]
**Hypothesis:** [What we expected to happen]

**Baseline (pre-change):**
- Organic clicks (7d): X
- Avg position (primary KW): X
- AI referrals (7d): X

**Result (post-change, measured [date]):**
- Organic clicks (7d): Y (+/- %)
- Avg position (primary KW): Y (+/- positions)
- AI referrals (7d): Y (+/- %)

**Decision:** KEEP / ITERATE / REVERT / WAIT
**Lesson:** [What we learned]
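
A heredoc can scaffold the entry so dates stay consistent; the bracketed fields are left to fill in by hand:

cat >> PLAYBOOK.md <<EOF

## $(date +%F) - [Experiment Name]

**Category:** [Category]
**Page:** [URL path]
**Change:** [Brief description]
EOF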

Experiment Categories

Service Page Optimizations

  • Adding comparison tables
  • Rewriting H1/byline
  • Adding trust signal sections
  • Expanding FAQ sections
  • Adding AI differentiation paragraphs

Location Page Creation

  • New suburb-specific service pages
  • Measure: local keyword rankings, location-specific traffic

Blog Post Publishing

  • New informational content
  • Template/download posts
  • Provider comparison posts
  • Measure: organic clicks, keyword coverage expansion

Provider Comparison Additions

  • New comparison tables on existing pages
  • New "vs" blog posts
  • Measure: comparison keyword rankings, AI referral traffic

Trust Signal Enhancements

  • Adding registration numbers
  • Adding testimonials
  • Adding clinical governance sections
  • Measure: overall page authority signals, position changes

FAQ Additions

  • New FAQ sections
  • Expanding existing FAQs with PAA questions
  • Measure: featured snippet captures, PAA appearances

Weekly Routine

Every week:

  1. Run git scan (Step 1)
  2. For changes past their evaluation window, measure results (Step 3)
  3. Make keep/iterate/revert decisions (Step 4)
  4. Log results to PLAYBOOK.md (Step 5)
  5. Capture baselines for new changes (Step 2)

Usage

/experiment-loop

Runs the full weekly cycle: scan, measure, decide, log.

/experiment-loop --check "/services/sil-services"

Check status of a specific page experiment.