Claude-Skills tech-stack-evaluator
Clone the full repository:

```shell
git clone https://github.com/borghei/Claude-Skills
```

Or copy just this skill into your Claude skills directory:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/borghei/Claude-Skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/engineering/tech-stack-evaluator" ~/.claude/skills/borghei-claude-skills-tech-stack-evaluator \
  && rm -rf "$T"
```
engineering/tech-stack-evaluator/SKILL.md

Technology Stack Evaluator
Evaluate and compare technologies, frameworks, and cloud providers with data-driven analysis and actionable recommendations.
Capabilities
| Capability | Description |
|---|---|
| Technology Comparison | Compare frameworks and libraries with weighted scoring |
| TCO Analysis | Calculate 5-year total cost including hidden costs |
| Ecosystem Health | Assess GitHub metrics, npm adoption, community strength |
| Security Assessment | Evaluate vulnerabilities and compliance readiness |
| Migration Analysis | Estimate effort, risks, and timeline for migrations |
| Cloud Comparison | Compare AWS, Azure, GCP for specific workloads |
Quick Start
Compare Two Technologies
Compare React vs Vue for a SaaS dashboard. Priorities: developer productivity (40%), ecosystem (30%), performance (30%).
Calculate TCO
Calculate 5-year TCO for Next.js on Vercel. Team: 8 developers. Hosting: $2500/month. Growth: 40%/year.
Assess Migration
Evaluate migrating from Angular.js to React. Codebase: 50,000 lines, 200 components. Team: 6 developers.
Input Formats
The evaluator accepts three input formats:
Text - Natural language queries
Compare PostgreSQL vs MongoDB for our e-commerce platform.
YAML - Structured input for automation
```yaml
comparison:
  technologies: ["React", "Vue"]
  use_case: "SaaS dashboard"
  weights:
    ecosystem: 30
    performance: 25
    developer_experience: 45
```
JSON - Programmatic integration
```json
{ "technologies": ["React", "Vue"], "use_case": "SaaS dashboard" }
```
Analysis Types
Quick Comparison (200-300 tokens)
- Weighted scores and recommendation
- Top 3 decision factors
- Confidence level
Standard Analysis (500-800 tokens)
- Comparison matrix
- TCO overview
- Security summary
Full Report (1200-1500 tokens)
- All metrics and calculations
- Migration analysis
- Detailed recommendations
Scripts
stack_comparator.py
Compare technologies with customizable weighted criteria.
```shell
python scripts/stack_comparator.py --help
```
tco_calculator.py
Calculate total cost of ownership over multi-year projections.
```shell
python scripts/tco_calculator.py --input assets/sample_input_tco.json
```
ecosystem_analyzer.py
Analyze ecosystem health from GitHub, npm, and community metrics.
```shell
python scripts/ecosystem_analyzer.py --technology react
```
security_assessor.py
Evaluate security posture and compliance readiness.
```shell
python scripts/security_assessor.py --technology express --compliance soc2,gdpr
```
migration_analyzer.py
Estimate migration complexity, effort, and risks.
```shell
python scripts/migration_analyzer.py --from angular-1.x --to react
```
References
| Document | Content |
|---|---|
| | Detailed scoring algorithms and calculation formulas |
| | Input/output examples for all analysis types |
| | Step-by-step evaluation workflows |
Confidence Levels
| Level | Score | Interpretation |
|---|---|---|
| High | 80-100% | Clear winner, strong data |
| Medium | 50-79% | Trade-offs present, moderate uncertainty |
| Low | < 50% | Close call, limited data |
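A minimal sketch of how a score gap might map to these bands (an illustrative formula of my own, not the evaluator's actual calculation):

```python
def confidence_from_scores(scores):
    """Map the gap between the top two weighted scores (0-100 scale)
    to a rough confidence percentage and band.

    Illustrative only: the real evaluator may also factor in data quality.
    """
    ranked = sorted(scores.values(), reverse=True)
    gap = ranked[0] - ranked[1] if len(ranked) > 1 else ranked[0]
    confidence = min(100.0, max(0.0, 40.0 + gap * 3.0))  # no gap -> 40%, 20-point gap -> 100%
    if confidence >= 80:
        band = "High"
    elif confidence >= 50:
        band = "Medium"
    else:
        band = "Low"
    return confidence, band

print(confidence_from_scores({"React": 78.5, "Vue": 69.5}))  # (67.0, 'Medium')
```

The intuition matches the table: a wide gap means a clear winner, while near-identical totals signal a close call regardless of how high the absolute scores are.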
When to Use
- Comparing frontend/backend frameworks for new projects
- Evaluating cloud providers for specific workloads
- Planning technology migrations with risk assessment
- Calculating build vs. buy decisions with TCO
- Assessing open-source library viability
When NOT to Use
- Trivial decisions between similar tools (use team preference)
- Mandated technology choices (decision already made)
- Emergency production issues (use monitoring tools)
Troubleshooting
| Problem | Cause | Solution |
|---|---|---|
| Weighted scores all return 50.0 | Technology data dictionaries are missing keys under each category | Ensure each category dict contains keys scored on a 0-100 scale |
| TCO projections look unrealistically low | Default cost parameters are used when cost inputs are empty | Populate the initial, operational, and scaling cost inputs with real figures |
| Ecosystem health score stuck at 50 for npm | The npm metrics dict is empty or not provided | Pass npm metrics (e.g., weekly_downloads); 50 is the neutral fallback when npm data is absent |
| Security compliance returns "Unknown standard" | An unsupported standard name was passed to the compliance assessor | Use one of the supported standards: GDPR, SOC2, HIPAA, PCI-DSS (case-sensitive) |
| Migration complexity always shows moderate | architecture_change_level defaults to a moderate value when not specified | Set architecture_change_level explicitly (e.g., "significant") in codebase_stats |
| Report renders ASCII tables instead of markdown | The report generator auto-detects CLI context when stdout is a TTY | Pass output_context="desktop" to force rich markdown output |
| Format detector misclassifies YAML as text | Fewer than 50% of lines match YAML key-value patterns | Ensure input uses standard YAML syntax with key-value pairs and proper indentation |
Success Criteria
- TCO variance under 15%: Calculated TCO deviates less than 15% from actual costs when validated against real-world spending data over the projection period.
- Security score above 80/100: Technologies recommended for production use achieve a minimum overall security score of 80, corresponding to grade B or higher.
- Ecosystem health score above 65/100: Recommended technologies demonstrate viable long-term ecosystem health with a risk level no worse than "Low-Medium."
- Migration effort estimate within 20%: Person-hours and timeline estimates land within 20% of actual migration effort when measured post-completion.
- Comparison confidence above 70%: Final technology recommendations carry a confidence score of 70% or higher, indicating a meaningful score gap between top candidates.
- Compliance readiness at "Mostly Ready" or above: Technologies targeting regulated environments achieve at least 70% feature coverage against required compliance standards (GDPR, SOC2, HIPAA, PCI-DSS).
- Report generation under 5 seconds: All report types (executive summary, full report) render within 5 seconds for evaluations comparing up to 5 technologies.
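The compliance-coverage criterion above can be illustrated with a short sketch; the requirement lists here are hypothetical stand-ins, not the assessor's real feature sets:

```python
# Hypothetical requirement sets -- the real assessor defines its own.
REQUIREMENTS = {
    "GDPR": ["encryption_in_transit", "data_deletion", "audit_logging", "consent_management"],
    "SOC2": ["encryption_in_transit", "audit_logging", "access_control", "monitoring"],
}

def compliance_readiness(standard, features):
    """Return (coverage ratio, readiness label) for one standard."""
    required = REQUIREMENTS[standard]
    covered = sum(1 for feature in required if features.get(feature))
    coverage = covered / len(required)
    label = "Mostly Ready" if coverage >= 0.7 else "Gaps Remain"
    return coverage, label

# 3 of 4 SOC2 features present -> 75% coverage, above the 70% bar
coverage, label = compliance_readiness(
    "SOC2",
    {"encryption_in_transit": True, "audit_logging": True, "access_control": True},
)
```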
Scope & Limitations
Covers:
- Weighted multi-criteria comparison of frameworks, libraries, and cloud providers
- Multi-year TCO projections including hidden costs (technical debt, vendor lock-in, turnover)
- Ecosystem viability assessment using GitHub, npm, and community metrics
- Security posture scoring and compliance readiness for GDPR, SOC2, HIPAA, PCI-DSS
Does NOT cover:
- Live data fetching from GitHub API, npm registry, or vulnerability databases (all data must be provided as input dictionaries)
- Performance benchmarking or load testing (use engineering/senior-qa for test execution)
- Licensing legal review or contract negotiation (use ra-qm-team compliance skills for regulatory guidance)
- Team hiring or organizational design decisions (use hr-operations/talent-acquisition for staffing analysis)
Integration Points
| Skill | Integration | Data Flow |
|---|---|---|
| | Feed security assessor output into deeper vulnerability analysis | results → security review input |
| | Use TCO hosting projections to inform infrastructure planning | hosting/scaling data → DevOps capacity models |
| | Migration test coverage scores inform QA test planning | testing_requirements → QA test strategy |
| | Compliance readiness gaps feed into formal audit preparation | missing features → audit checklist |
| | Executive summaries and TCO reports support CTO decision-making | executive summary → strategic technology decisions |
| | Ecosystem viability and migration timelines inform product roadmaps | → roadmap planning |
Tool Reference
stack_comparator.py
Purpose: Compare technologies with customizable weighted criteria across 8 evaluation categories: performance, scalability, developer experience, ecosystem, learning curve, documentation, community support, and enterprise readiness.
Usage:
```python
from stack_comparator import StackComparator

comparator = StackComparator({
    "technologies": ["React", "Vue"],
    "use_case": "SaaS dashboard",
    "weights": {"developer_experience": 40, "performance": 30, "ecosystem": 30}
})
results = comparator.compare_technologies(tech_data_list)
```
Constructor Parameters (comparison_data dict):
| Key | Type | Default | Description |
|---|---|---|---|
| technologies | list | | Names of technologies to compare |
| use_case | str | | Use case context (certain use cases grant scoring bonuses) |
| priorities | dict | | Priority overrides |
| weights | dict | | Category weights (auto-normalized to 100) |
Default Weights: performance 15, scalability 15, developer_experience 20, ecosystem 15, learning_curve 10, documentation 10, community_support 10, enterprise_readiness 5.
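Since arbitrary weights are auto-normalized to 100, the normalization step presumably looks something like this sketch:

```python
def normalize_weights(weights):
    """Scale arbitrary category weights so they sum to 100."""
    total = sum(weights.values())
    if total == 0:
        raise ValueError("at least one weight must be non-zero")
    return {category: weight * 100.0 / total for category, weight in weights.items()}

# Weights summing to 90 are rescaled onto a 100-point budget:
scaled = normalize_weights({"developer_experience": 40, "performance": 30, "ecosystem": 20})
```

This lets callers pass weights in any convenient scale (percentages, points out of 10, raw counts) without changing the relative priorities.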
Key Methods:
- compare_technologies(tech_data_list) -- Full comparison with scores, recommendation, confidence, and decision factors
- score_technology(tech_name, tech_data) -- Score a single technology across all categories
- calculate_weighted_score(category_scores) -- Calculate weighted total from category scores
- generate_pros_cons(tech_name, tech_scores) -- Generate pros/cons lists from scores
Example Output:
```json
{
  "technologies": {"React": {"weighted_total": 78.5, "strengths": [...], "weaknesses": [...]}},
  "recommendation": "React",
  "confidence": 72.0,
  "decision_factors": [{"category": "developer_experience", "importance": "40.0%", "best_performer": "React"}],
  "comparison_matrix": [{"category": "performance", "weight": "15.0%", "scores": {"React": "82.0", "Vue": "79.0"}}]
}
```
Output Format: Python dictionary (serialize with json.dumps() for JSON output).
tco_calculator.py
Purpose: Calculate comprehensive Total Cost of Ownership over multi-year projections, including initial costs, operational costs, scaling costs, hidden costs (technical debt, vendor lock-in, security incidents, downtime, turnover), and developer productivity impact.
Usage:
```python
from tco_calculator import TCOCalculator

calculator = TCOCalculator({
    "technology": "Next.js",
    "team_size": 8,
    "timeline_years": 5,
    "initial_costs": {"licensing": 0, "migration": 15000, "developer_hourly_rate": 100},
    "operational_costs": {"monthly_hosting": 2500, "annual_licensing": 0, "maintenance_hours_per_dev_monthly": 20},
    "scaling_params": {"annual_growth_rate": 0.40, "initial_users": 5000}
})
tco = calculator.calculate_total_tco()
summary = calculator.generate_tco_summary()
```
Constructor Parameters (tco_data dict):
| Key | Type | Default | Description |
|---|---|---|---|
| technology | str | | Technology name |
| team_size | int | | Number of developers |
| timeline_years | int | | Projection period in years |
| initial_costs | dict | | One-time costs (e.g., licensing, migration, developer_hourly_rate) |
| operational_costs | dict | | Recurring costs (e.g., monthly_hosting, annual_licensing, maintenance_hours_per_dev_monthly) |
| scaling_params | dict | | Growth params (e.g., annual_growth_rate, initial_users) |
| | dict | | Productivity parameters |
Key Methods:
- calculate_total_tco() -- Complete TCO with all cost components
- generate_tco_summary() -- Executive summary with formatted dollar amounts
- calculate_initial_costs() -- One-time cost breakdown
- calculate_operational_costs() -- Year-by-year operational costs
- calculate_scaling_costs() -- User projections and cost-per-user analysis
- calculate_hidden_costs() -- Technical debt, vendor lock-in, security, downtime, turnover
- calculate_productivity_impact() -- Productivity gains and feature velocity
Output Format: Python dictionary (all monetary values as floats; generate_tco_summary() returns pre-formatted dollar strings).
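The scaling component of a projection can be approximated with compound growth. This sketch assumes hosting cost scales directly with user growth, a simplification the real calculator may refine:

```python
def project_hosting_costs(monthly_hosting, annual_growth_rate, years):
    """Year-by-year hosting spend, assuming cost grows with the user base."""
    costs = []
    for year in range(years):
        growth_factor = (1 + annual_growth_rate) ** year  # compound growth
        costs.append(monthly_hosting * 12 * growth_factor)
    return costs

# $2,500/month at 40% annual growth over 5 years:
yearly = project_hosting_costs(2500, 0.40, 5)
total_hosting = sum(yearly)  # the hosting component of the 5-year TCO
```

At 40% growth the final year alone costs nearly four times year one, which is why multi-year TCO often dwarfs the first-year estimate.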
ecosystem_analyzer.py
Purpose: Analyze technology ecosystem health and long-term viability by scoring GitHub activity, npm adoption, community strength, corporate backing, and maintenance responsiveness on a 0-100 scale.
Usage:
```python
from ecosystem_analyzer import EcosystemAnalyzer

analyzer = EcosystemAnalyzer({
    "technology": "React",
    "github": {"stars": 220000, "forks": 45000, "contributors": 1500, "commits_last_month": 120},
    "npm": {"weekly_downloads": 20000000, "version": "18.2.0", "dependencies_count": 3},
    "community": {"stackoverflow_questions": 400000, "job_postings": 15000},
    "corporate_backing": {"type": "major_tech_company"}
})
report = analyzer.generate_ecosystem_report()
viability = analyzer.assess_viability()
```
Constructor Parameters (ecosystem_data dict):
| Key | Type | Default | Description |
|---|---|---|---|
| technology | str | | Technology name |
| github | dict | | GitHub metrics (e.g., stars, forks, contributors, commits_last_month) |
| npm | dict | | npm metrics (e.g., weekly_downloads, version, dependencies_count) |
| community | dict | | Community metrics (e.g., stackoverflow_questions, job_postings) |
| corporate_backing | dict | | Backing info (e.g., type: "major_tech_company") |
Health Score Weights: github_health 25%, npm_health 20%, community_health 20%, corporate_backing 15%, maintenance_health 20%.
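Given those weights, the overall score is a weighted average of component scores. A sketch that also mirrors the neutral-50 fallback described in Troubleshooting:

```python
HEALTH_WEIGHTS = {
    "github_health": 0.25,
    "npm_health": 0.20,
    "community_health": 0.20,
    "corporate_backing": 0.15,
    "maintenance_health": 0.20,
}

def overall_health(component_scores):
    """Weighted 0-100 health score; a missing component falls back to a
    neutral 50 rather than dragging the total to zero."""
    return sum(
        component_scores.get(name, 50.0) * weight
        for name, weight in HEALTH_WEIGHTS.items()
    )

score = overall_health({
    "github_health": 90, "npm_health": 85, "community_health": 80,
    "corporate_backing": 95, "maintenance_health": 70,
})
```

The neutral fallback is a deliberate choice: absent data should not be scored as bad data.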
Key Methods:
- generate_ecosystem_report() -- Complete report with health scores, viability, and formatted metrics
- calculate_health_score() -- Component scores and weighted overall score
- assess_viability() -- Viability level, risk assessment, strengths, and recommendation
Output Format: Python dictionary with nested health scores, viability assessment, and formatted metrics.
security_assessor.py
Purpose: Evaluate security posture and compliance readiness for technology stacks. Scores vulnerabilities, patch responsiveness, built-in security features, and track record. Assesses compliance against GDPR, SOC2, HIPAA, and PCI-DSS.
Usage:
```python
from security_assessor import SecurityAssessor

assessor = SecurityAssessor({
    "technology": "Express",
    "vulnerabilities": {
        "critical_last_12m": 0,
        "high_last_12m": 2,
        "avg_critical_patch_days": 7,
        "has_security_team": True
    },
    "security_features": {
        "encryption_in_transit": True,
        "authentication": True,
        "input_validation": True,
        "csrf_protection": True
    },
    "compliance_requirements": ["SOC2", "GDPR"]
})
report = assessor.generate_security_report()
compliance = assessor.assess_compliance(["SOC2", "GDPR"])
```
Constructor Parameters (security_data dict):
| Key | Type | Default | Description |
|---|---|---|---|
| technology | str | | Technology name |
| vulnerabilities | dict | | Vulnerability data (e.g., critical_last_12m, high_last_12m, avg_critical_patch_days, has_security_team) |
| security_features | dict | | Boolean feature flags (e.g., encryption_in_transit, authentication, input_validation, csrf_protection) |
| compliance_requirements | list | | Standards to assess: GDPR, SOC2, HIPAA, PCI-DSS |
Security Score Weights: vulnerability_score 30%, patch_responsiveness 25%, security_features 30%, track_record 15%.
Key Methods:
- generate_security_report() -- Full report with score, compliance, vulnerabilities, and recommendations
- calculate_security_score() -- Component scores with letter grade (A-F)
- assess_compliance(standards) -- Per-standard readiness with missing features list
- identify_vulnerabilities() -- Categorized vulnerability report with trend analysis
Output Format: Python dictionary with security scores, compliance assessments, and risk level.
migration_analyzer.py
Purpose: Analyze migration complexity, estimate effort in person-hours and calendar months, assess technical/business/team risks, and recommend a migration approach (direct, phased, or strangler pattern) based on complexity scoring.
Usage:
```python
from migration_analyzer import MigrationAnalyzer

analyzer = MigrationAnalyzer({
    "source_technology": "Angular 1.x",
    "target_technology": "React",
    "codebase_stats": {
        "lines_of_code": 50000,
        "num_components": 200,
        "architecture_change_level": "significant",
        "current_test_coverage": 0.6
    },
    "team": {"team_size": 6, "target_tech_experience": "low"},
    "constraints": {"downtime_tolerance": "low"}
})
plan = analyzer.generate_migration_plan()
effort = analyzer.estimate_effort()
risks = analyzer.assess_risks()
```
Constructor Parameters (migration_data dict):
| Key | Type | Default | Description |
|---|---|---|---|
| source_technology | str | | Current technology |
| target_technology | str | | Target technology |
| codebase_stats | dict | | Codebase metrics (e.g., lines_of_code, num_components, architecture_change_level, current_test_coverage on a 0-1 scale) |
| team | dict | | Team info (e.g., team_size, target_tech_experience) |
| constraints | dict | | Constraints (e.g., downtime_tolerance) |
Complexity Score Weights: code_volume 20%, architecture_changes 25%, data_migration 20%, api_compatibility 15%, dependency_changes 10%, testing_requirements 10%.
Key Methods:
- generate_migration_plan() -- Complete plan with complexity, effort, risks, approach, and success criteria
- calculate_complexity_score() -- Per-factor complexity scores (1-10 scale)
- estimate_effort() -- Person-hours, person-months, phase breakdown, and calendar timeline
- assess_risks() -- Technical, business, and team risks with severity and mitigation strategies
Output Format: Python dictionary with nested complexity analysis, effort estimation, risk assessment, and recommended approach.
report_generator.py
Purpose: Generate context-aware evaluation reports with progressive disclosure. Auto-detects output context (Claude Desktop vs CLI) and renders rich markdown tables or ASCII-formatted output accordingly. Supports selective section generation.
Usage:
```python
from report_generator import ReportGenerator

generator = ReportGenerator(report_data, output_context="desktop")
executive_summary = generator.generate_executive_summary(max_tokens=300)
full_report = generator.generate_full_report(sections=["executive_summary", "comparison_matrix", "tco_analysis"])
generator.export_to_file("evaluation_report.md")
```
Constructor Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| report_data | dict | (required) | Complete evaluation data containing any combination of the section keys listed under Available Report Sections |
| output_context | str | (auto-detect) | Output format: "desktop" for rich markdown; CLI context renders ASCII tables. Auto-detects via an environment variable and a TTY check |
Available Report Sections:
executive_summary, comparison_matrix, tco_analysis, ecosystem_health, security_assessment, migration_analysis, performance_benchmarks.
Key Methods:
- generate_executive_summary(max_tokens=300) -- Concise summary with recommendation, strengths, concerns, and decision factors
- generate_full_report(sections=None) -- Complete report with selected sections (all if None)
- export_to_file(filename, sections=None) -- Write report to file, returns file path
Output Format: Markdown string (desktop context) or ASCII-formatted string (CLI context).
format_detector.py
Purpose: Automatically detect and parse input format (JSON, YAML, URL, or natural language text) for technology evaluation requests. Extracts technology names, use cases, priorities, and analysis types from unstructured text input.
Usage:
```python
from format_detector import FormatDetector

detector = FormatDetector('Compare React vs Vue for a SaaS dashboard. Priorities: performance, ecosystem.')
format_type = detector.detect_format()  # Returns: "text"
parsed = detector.parse()               # Returns normalized dict
info = detector.get_format_info()       # Returns detection metadata
```
Constructor Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| | str | (required) | Raw input string in any supported format |
Supported Formats:
- JSON -- Valid JSON objects are detected and parsed directly
- YAML -- Detected when >50% of lines match key-value or list patterns (simplified parser, no PyYAML dependency)
- URL -- Detected when input contains http:// or https:// URLs; categorizes GitHub, npm, and other URLs
- Text -- Natural language fallback; extracts technologies from 30+ known keywords, identifies use cases, priorities, and analysis types
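The >50% line-matching heuristic for YAML could be sketched like this (illustrative patterns; the actual detector's regexes may differ):

```python
import re

# Matches "key: value" pairs and "- item" list entries, with optional indent.
_YAML_LINE = re.compile(r"^\s*(-\s+\S|[\w.]+\s*:\s*\S?)")

def looks_like_yaml(text):
    """Return True when more than half of the non-empty lines match
    YAML key-value or list patterns."""
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return False
    matches = sum(1 for line in lines if _YAML_LINE.match(line))
    return matches / len(lines) > 0.5

print(looks_like_yaml("comparison:\n  use_case: dashboard"))     # True
print(looks_like_yaml("Compare React vs Vue for a dashboard."))  # False
```

A majority-vote heuristic like this tolerates a stray prose line inside otherwise structured input, which a strict parser would reject outright.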
Key Methods:
- detect_format() -- Returns format string: "json", "yaml", "url", or "text"
- parse() -- Parse input and return a normalized dictionary with standard keys: technologies, use_case, priorities, analysis_type, format
- get_format_info() -- Detection metadata: detected_format, input_length, line_count, parsing_successful
Output Format: Python dictionary normalized to a standard structure with keys: technologies (list), use_case (str), priorities (list), analysis_type (str), format (str).