Awesome-omni-skill Conversion Optimization
install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skill
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/backend/conversion-optimization" ~/.claude/skills/diegosouzapw-awesome-omni-skill-conversion-optimization && rm -rf "$T"
manifest:
skills/backend/conversion-optimization/SKILL.md
safety · automated scan (low risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- references .env files
- references API keys
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
Conversion Optimization
Skill Profile
(Select at least one profile to enable specific modules)
- DevOps
- Backend
- Frontend
- AI-RAG
- Security Critical
Overview
Conversion Rate Optimization (CRO) is the systematic process of increasing the percentage of website or app visitors who complete a desired action (a conversion) through data-driven experimentation and continuous improvement. Effective CRO combines A/B testing, user research, analytics, and iterative changes to maximize conversions, increase revenue, reduce acquisition costs, improve user experience, and sustain a competitive advantage.
Why This Matters
- Increase Revenue: More conversions directly translate to more revenue
- Reduce Acquisition Cost: Better conversion rates lower Customer Acquisition Cost (CAC)
- Improve User Experience: Smoother user journeys lead to happier users
- Data-Driven Decisions: Test assumptions instead of relying on opinions
- Competitive Advantage: Continuous improvement keeps you ahead of competitors
- Maximize ROI: Get more value from existing traffic without spending more on acquisition
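The CAC point above is simple arithmetic: with spend and traffic held constant, more conversions mean a lower cost per customer. A minimal sketch, with hypothetical numbers:

```python
# Hypothetical numbers: same ad spend and traffic, only the conversion rate changes.
ad_spend = 10_000.0  # monthly acquisition budget (USD)
visitors = 50_000    # monthly visitors

def cac(conversion_rate: float) -> float:
    """Customer Acquisition Cost = spend / number of conversions."""
    conversions = visitors * conversion_rate
    return ad_spend / conversions

baseline = cac(0.020)  # 2.0% conversion rate -> 1,000 conversions -> $10.00 CAC
improved = cac(0.025)  # 2.5% after CRO      -> 1,250 conversions ->  $8.00 CAC

print(f"CAC at 2.0%: ${baseline:.2f}")
print(f"CAC at 2.5%: ${improved:.2f}")
```

A 25% relative lift in conversion rate cuts CAC by 20% with zero extra acquisition spend.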
Core Concepts & Rules
1. Core Principles
- Follow established patterns and conventions
- Maintain consistency across codebase
- Document decisions and trade-offs
2. Implementation Guidelines
- Start with the simplest viable solution
- Iterate based on feedback and requirements
- Test thoroughly before deployment
Inputs / Outputs / Contracts
- Inputs:
- Web/app analytics data (visitors, sessions, events)
- Funnel stage data (drop-off points)
- User behavior data (heatmaps, session recordings)
- User feedback (surveys, interviews)
- Current conversion metrics
- Entry Conditions:
- Analytics tracking implemented
- Conversion events defined and tracked
- Sufficient traffic volume for statistical significance
- Baseline conversion rate established
- Outputs:
- Funnel analysis with drop-off identification
- Hypotheses prioritized by ICE/PIE score
- A/B test configuration
- Test results with statistical significance
- Optimization recommendations
- Artifacts Required (Deliverables):
- Funnel analysis report
- Hypothesis document with ICE/PIE scores
- A/B test setup (variants, traffic split)
- Test results report (conversion rates, statistical significance)
- Implementation recommendations
- Acceptance Evidence:
- Funnel bottlenecks identified and documented
- Hypotheses formulated and prioritized
- A/B test configured and running
- Statistical significance achieved
- Winning variant identified and implemented
- Success Criteria:
- Conversion rate improvement > 5% (statistically significant)
- Funnel drop-off reduced at bottleneck stage
- User experience improved (measured by satisfaction metrics)
- ROI positive (revenue gain > implementation cost)
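The "hypotheses prioritized by ICE score" output above can be sketched with a minimal scorer. ICE is the standard Impact × Confidence × Ease product; the hypothesis names and ratings here are invented for illustration:

```python
# ICE = Impact x Confidence x Ease, each rated 1-10; higher score = higher priority.
hypotheses = [
    {"name": "Shorten checkout form", "impact": 8, "confidence": 7, "ease": 6},
    {"name": "Add trust badges",      "impact": 5, "confidence": 6, "ease": 9},
    {"name": "Rewrite hero headline", "impact": 7, "confidence": 4, "ease": 8},
]

def ice_score(h: dict) -> int:
    return h["impact"] * h["confidence"] * h["ease"]

# Highest-scoring hypothesis gets tested first.
for h in sorted(hypotheses, key=ice_score, reverse=True):
    print(f"{ice_score(h):4d}  {h['name']}")
```

PIE (Potential, Importance, Ease) works the same way with different axes, so the same ranking code applies.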
Skill Composition
- Depends on: A/B Testing Analysis, Funnel Analysis
- Compatible with: Dashboard Design, KPI Metrics, User Research
- Conflicts with: None
- Related Skills: ab-testing-analysis, funnel-analysis, dashboard-design
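As a rough sketch of what the funnel-analysis dependency above produces, the bottleneck is the stage transition with the worst drop-off. Stage names and visitor counts here are invented:

```python
# Hypothetical funnel: visitor count at each stage, in order.
funnel = [
    ("Landing page", 10_000),
    ("Product page",  4_000),
    ("Add to cart",   1_200),
    ("Checkout",        600),
    ("Purchase",        480),
]

# Drop-off rate between consecutive stages; the worst one is the bottleneck.
drop_offs = []
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rate = 1 - n / prev_n
    drop_offs.append((f"{prev_name} -> {name}", rate))
    print(f"{prev_name} -> {name}: {rate:.0%} drop-off")

bottleneck = max(drop_offs, key=lambda x: x[1])
print("Bottleneck:", bottleneck[0])  # Product page -> Add to cart (70% drop-off)
```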
Quick Start / Implementation Example
- Review requirements and constraints
- Set up development environment
- Implement core functionality following patterns
- Write tests for critical paths
- Run tests and fix issues
- Document any deviations or decisions
```python
# Example implementation following best practices
def example_function():
    # Your implementation here
    pass
```
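The template above is generic; for CRO specifically, the "statistical significance" step could be sketched as a two-proportion z-test using only the standard library (visitor and conversion counts are illustrative, not real data):

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rates between A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative experiment: 5,000 visitors per variant.
p_value = two_proportion_z_test(conv_a=500, n_a=5000, conv_b=570, n_b=5000)
print(f"p-value: {p_value:.4f}")  # below 0.05 -> significant at the usual threshold
```

`statistics.NormalDist` requires Python 3.8+, which matches the prerequisites below; for small samples or sequential testing, a dedicated stats library would be a better fit.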
Assumptions / Constraints / Non-goals
- Assumptions:
- Development environment is properly configured
- Required dependencies are available
- Team has basic understanding of domain
- Constraints:
- Must follow existing codebase conventions
- Time and resource limitations
- Compatibility requirements
- Non-goals:
- This skill does not cover edge cases outside scope
- Not a replacement for formal training
Compatibility & Prerequisites
- Supported Versions:
- Python 3.8+
- Node.js 16+
- Modern browsers (Chrome, Firefox, Safari, Edge)
- Required AI Tools:
- Code editor (VS Code recommended)
- Testing framework appropriate for language
- Version control (Git)
- Dependencies:
- Language-specific package manager
- Build tools
- Testing libraries
- Environment Setup:
- .env.example with keys only (no values): API_KEY, DATABASE_URL
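The environment keys above would typically live in a committed .env.example template, with real values kept out of version control:

```shell
# .env.example — committed placeholder template; copy to .env and fill in locally.
# Never commit the real .env file.
API_KEY=
DATABASE_URL=
```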
Test Scenario Matrix (QA Strategy)
| Type | Focus Area | Required Scenarios / Mocks |
|---|---|---|
| Unit | Core Logic | Must cover primary logic and at least 3 edge/error cases. Target minimum 80% coverage |
| Integration | DB / API | All external API calls or database connections must be mocked during unit tests |
| E2E | User Journey | Critical user flows to test |
| Performance | Latency / Load | Benchmark requirements |
| Security | Vuln / Auth | SAST/DAST or dependency audit |
| Frontend | UX / A11y | Accessibility checklist (WCAG), Performance Budget (Lighthouse score) |
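One way to satisfy the "mock all external API calls" row above, sketched with `unittest.mock` (the function, client, and endpoint path are hypothetical, not part of any real API):

```python
from unittest.mock import Mock

def fetch_conversion_rate(client, experiment_id: str) -> float:
    """Fetch conversions/visitors from an (assumed) analytics API client."""
    data = client.get(f"/experiments/{experiment_id}/stats")
    return data["conversions"] / data["visitors"]

# Unit test: the external analytics API is replaced by a Mock and never called for real.
client = Mock()
client.get.return_value = {"conversions": 120, "visitors": 4000}

assert fetch_conversion_rate(client, "exp-42") == 0.03
client.get.assert_called_once_with("/experiments/exp-42/stats")
```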
Technical Guardrails & Security Threat Model
1. Security & Privacy (Threat Model)
- Top Threats: Injection attacks, authentication bypass, data exposure
- Data Handling: Sanitize all user inputs to prevent Injection attacks. Never log raw PII
- Secrets Management: No hardcoded API keys. Use Env Vars/Secrets Manager
- Authorization: Validate user permissions before state changes
2. Performance & Resources
- Execution Efficiency: Consider time complexity for algorithms
- Memory Management: Use streams/pagination for large data
- Resource Cleanup: Close DB connections/file handlers in finally blocks
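The resource-cleanup rule above, sketched two ways with an in-memory SQLite connection so it runs anywhere:

```python
import sqlite3
from contextlib import closing

# 1) Explicit cleanup in a finally block:
conn = sqlite3.connect(":memory:")
try:
    conn.execute("CREATE TABLE events (name TEXT)")
    conn.execute("INSERT INTO events VALUES ('signup')")
    count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
finally:
    conn.close()  # released even if a query above raises

# 2) More idiomatically, contextlib.closing closes for us on exit.
# (Note: a bare sqlite3 connection used as a context manager only manages
# transactions; it does NOT close the connection — closing() does.)
with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE events (name TEXT)")

print(count)
```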
3. Architecture & Scalability
- Design Pattern: Follow SOLID principles, use Dependency Injection
- Modularity: Decouple logic from UI/Frameworks
4. Observability & Reliability
- Logging Standards: Structured JSON, include trace IDs (request_id)
- Metrics: Track error_rate, latency, queue_depth
- Error Handling: Standardized error codes, no bare except
- Observability Artifacts:
- Log Fields: timestamp, level, message, request_id
- Metrics: request_count, error_count, response_time
- Dashboards/Alerts: High Error Rate > 5%
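A minimal structured-logging sketch using only the standard library, emitting exactly the log fields listed above (the logger name and formatter class are my own naming, not prescribed by the skill):

```python
import json
import logging
import sys
import time
import uuid

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line with the required fields."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        })

logger = logging.getLogger("cro")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# request_id is attached per-call via `extra`, so it lands on the record.
logger.info("variant assigned", extra={"request_id": str(uuid.uuid4())})
```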
Agent Directives & Error Recovery
(Directives for how the AI agent should reason and recover when errors occur)
- Thinking Process: Analyze root cause before fixing. Do not brute-force.
- Fallback Strategy: Stop after 3 failed test attempts. Output root cause and ask for human intervention/clarification.
- Self-Review: Check against Guardrails & Anti-patterns before finalizing.
- Output Constraints: Output ONLY the modified code block. Do not explain unless asked.
Definition of Done (DoD) Checklist
- Tests passed + coverage met
- Lint/Typecheck passed
- Logging/Metrics/Trace implemented
- Security checks passed
- Documentation/Changelog updated
- Accessibility/Performance requirements met (if frontend)
Anti-patterns / Pitfalls
- ⛔ Don't: log PII, swallow errors with catch-all exceptions, or ship N+1 queries
- ⚠️ Watch out for: stopping A/B tests at the first significant-looking result ("peeking"), or running tests on traffic too low to ever reach significance
- 💡 Instead: Use proper error handling, pagination, and structured logging
Reference Links & Examples
- Internal documentation and examples
- Official documentation and best practices
- Community resources and discussions
Versioning & Changelog
- Version: 1.0.0
- Changelog:
- 2026-02-22: Initial version with complete template structure