Awesome-omni-skills scikit-learn
Scikit-learn workflow skill. Use this skill when the user needs machine learning in Python with scikit-learn: classification, regression, clustering, model evaluation, and ML pipelines. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
```bash
# Clone the full repository
git clone https://github.com/diegosouzapw/awesome-omni-skills

# Or copy only this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/scikit-learn" ~/.claude/skills/diegosouzapw-awesome-omni-skills-scikit-learn && rm -rf "$T"
```
skills/scikit-learn/SKILL.md
Scikit-learn
Overview
This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/scikit-learn from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
Scikit-learn
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Core Capabilities, Limitations.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Building classification or regression models
- Performing clustering or dimensionality reduction
- Preprocessing and transforming data for machine learning
- Evaluating model performance with cross-validation
- Tuning hyperparameters with grid or random search
- Creating ML pipelines for production workflows
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Load and explore data
- Split data with stratification
- Create preprocessing pipeline
- Build complete pipeline
- Tune hyperparameters
- Evaluate on test set
- Preprocess data
Imported Workflow Notes
Imported: Installation
```bash
# Install scikit-learn using uv
uv pip install scikit-learn

# Optional: install visualization dependencies
uv pip install matplotlib seaborn

# Commonly used with
uv pip install pandas numpy
```
Imported: Common Workflows
Building a Classification Model
- Load and explore data

```python
import pandas as pd

df = pd.read_csv('data.csv')
X = df.drop('target', axis=1)
y = df['target']
```

- Split data with stratification

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```

- Create preprocessing pipeline

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer

# Handle numeric and categorical features separately
# (numeric_features and categorical_features are column lists you define)
preprocessor = ColumnTransformer([
    ('num', StandardScaler(), numeric_features),
    ('cat', OneHotEncoder(), categorical_features)
])
```

- Build complete pipeline

```python
from sklearn.ensemble import RandomForestClassifier

model = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', RandomForestClassifier(random_state=42))
])
```

- Tune hyperparameters

```python
from sklearn.model_selection import GridSearchCV

param_grid = {
    'classifier__n_estimators': [100, 200],
    'classifier__max_depth': [10, 20, None]
}
grid_search = GridSearchCV(model, param_grid, cv=5)
grid_search.fit(X_train, y_train)
```

- Evaluate on test set

```python
from sklearn.metrics import classification_report

best_model = grid_search.best_estimator_
y_pred = best_model.predict(X_test)
print(classification_report(y_test, y_pred))
```
Performing Clustering Analysis
- Preprocess data

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
```

- Find optimal number of clusters

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

scores = []
for k in range(2, 11):
    kmeans = KMeans(n_clusters=k, random_state=42)
    labels = kmeans.fit_predict(X_scaled)
    scores.append(silhouette_score(X_scaled, labels))
optimal_k = range(2, 11)[np.argmax(scores)]
```

- Apply clustering

```python
model = KMeans(n_clusters=optimal_k, random_state=42)
labels = model.fit_predict(X_scaled)
```

- Visualize with dimensionality reduction

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels, cmap='viridis')
```
Imported: Overview
This skill provides comprehensive guidance for machine learning tasks using scikit-learn, the industry-standard Python library for classical machine learning. Use this skill for classification, regression, clustering, dimensionality reduction, preprocessing, model evaluation, and building production-ready ML pipelines.
Imported: Core Capabilities
1. Supervised Learning
Comprehensive algorithms for classification and regression tasks.
Key algorithms:
- Linear models: Logistic Regression, Linear Regression, Ridge, Lasso, ElasticNet
- Tree-based: Decision Trees, Random Forest, Gradient Boosting
- Support Vector Machines: SVC, SVR with various kernels
- Ensemble methods: AdaBoost, Voting, Stacking
- Neural Networks: MLPClassifier, MLPRegressor
- Others: Naive Bayes, K-Nearest Neighbors
When to use:
- Classification: Predicting discrete categories (spam detection, image classification, fraud detection)
- Regression: Predicting continuous values (price prediction, demand forecasting)
See: references/supervised_learning.md for detailed algorithm documentation, parameters, and usage examples.
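The copied package keeps its worked examples in the reference files; as a quick, hedged illustration of the shared estimator API across these algorithm families (synthetic data, not copied from upstream), any two of the listed classifiers can be swapped interchangeably:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data; swap in your own X, y
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Every listed estimator exposes the same fit/predict/score surface
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=42)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
```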
2. Unsupervised Learning
Discover patterns in unlabeled data through clustering and dimensionality reduction.
Clustering algorithms:
- Partition-based: K-Means, MiniBatchKMeans
- Density-based: DBSCAN, HDBSCAN, OPTICS
- Hierarchical: AgglomerativeClustering
- Probabilistic: Gaussian Mixture Models
- Others: MeanShift, SpectralClustering, BIRCH
Dimensionality reduction:
- Linear: PCA, TruncatedSVD, NMF
- Manifold learning: t-SNE, UMAP, Isomap, LLE
- Feature extraction: FastICA, LatentDirichletAllocation
When to use:
- Customer segmentation, anomaly detection, data visualization
- Reducing feature dimensions, exploratory data analysis
- Topic modeling, image compression
See: references/unsupervised_learning.md for detailed documentation.
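The density-based family is listed above but not demonstrated inline in the copied files, so here is a minimal DBSCAN sketch on synthetic two-moon data (illustrative only; the eps and min_samples values below are arbitrary choices, not upstream defaults):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Synthetic non-convex data where K-Means struggles but DBSCAN works
X, _ = make_moons(n_samples=300, noise=0.05, random_state=42)
X_scaled = StandardScaler().fit_transform(X)

# eps and min_samples control neighborhood density; tune per dataset
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X_scaled)
print('clusters found:', len(set(labels)) - (1 if -1 in labels else 0))
print('noise points:', np.sum(labels == -1))  # DBSCAN marks noise as -1
```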
3. Model Evaluation and Selection
Tools for robust model evaluation, cross-validation, and hyperparameter tuning.
Cross-validation strategies:
- KFold, StratifiedKFold (classification)
- TimeSeriesSplit (temporal data)
- GroupKFold (grouped samples)
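As a minimal sketch (synthetic indices, not upstream code) of why TimeSeriesSplit matters for temporal data: every training fold strictly precedes its test fold, so the model never sees the future.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Twenty ordered samples standing in for a time series
X = np.arange(20).reshape(-1, 1)

tscv = TimeSeriesSplit(n_splits=4)
for train_idx, test_idx in tscv.split(X):
    # Print the first/last index of each fold to show the forward-only split
    print('train:', train_idx[[0, -1]], 'test:', test_idx[[0, -1]])
```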
Hyperparameter tuning:
- GridSearchCV (exhaustive search)
- RandomizedSearchCV (random sampling)
- HalvingGridSearchCV (successive halving)
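A minimal RandomizedSearchCV sketch on synthetic data follows; the param_distributions below are illustrative, not tuned recommendations from the upstream package.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, random_state=42)

# Sample 10 candidates from the distributions instead of an exhaustive grid
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={
        'n_estimators': randint(50, 300),
        'max_depth': [5, 10, None],
    },
    n_iter=10, cv=5, random_state=42,
)
search.fit(X, y)
print(search.best_params_)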
Metrics:
- Classification: accuracy, precision, recall, F1-score, ROC AUC, confusion matrix
- Regression: MSE, RMSE, MAE, R², MAPE
- Clustering: silhouette score, Calinski-Harabasz, Davies-Bouldin
When to use:
- Comparing model performance objectively
- Finding optimal hyperparameters
- Preventing overfitting through cross-validation
- Understanding model behavior with learning curves
See: references/model_evaluation.md for comprehensive metrics and tuning strategies.
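As a minimal sketch of the classification metrics listed above (synthetic imbalanced data; the class weights are illustrative), note that ROC AUC needs probability scores rather than hard labels:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic imbalanced problem (80/20 class split)
X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Confusion matrix works on predicted labels; ROC AUC on probabilities
print(confusion_matrix(y_test, clf.predict(X_test)))
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```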
4. Data Preprocessing
Transform raw data into formats suitable for machine learning.
Scaling and normalization:
- StandardScaler (zero mean, unit variance)
- MinMaxScaler (bounded range)
- RobustScaler (robust to outliers)
- Normalizer (sample-wise normalization)
Encoding categorical variables:
- OneHotEncoder (nominal categories)
- OrdinalEncoder (ordered categories)
- LabelEncoder (target encoding)
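A minimal sketch contrasting the first two encoders on toy arrays (illustrative values; assumes scikit-learn 1.2+ for the sparse_output argument):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Hypothetical nominal and ordinal columns
colors = np.array([['red'], ['blue'], ['red']])
sizes = np.array([['small'], ['large'], ['medium']])

# Nominal: no order, one output column per category
# (sparse_output requires scikit-learn >= 1.2)
print(OneHotEncoder(sparse_output=False).fit_transform(colors))

# Ordinal: an explicit category order maps to 0, 1, 2
enc = OrdinalEncoder(categories=[['small', 'medium', 'large']])
print(enc.fit_transform(sizes))
```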
Handling missing values:
- SimpleImputer (mean, median, most frequent)
- KNNImputer (k-nearest neighbors)
- IterativeImputer (multivariate imputation)
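A minimal sketch contrasting SimpleImputer and KNNImputer on a toy matrix (the values are illustrative, not from the upstream examples):

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan],
              [4.0, 5.0]])

# Column-wise median fill
print(SimpleImputer(strategy='median').fit_transform(X))

# Fill each gap from the 2 nearest complete rows instead of a global statistic
print(KNNImputer(n_neighbors=2).fit_transform(X))
```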
Feature engineering:
- PolynomialFeatures (interaction terms)
- KBinsDiscretizer (binning)
- Feature selection (RFE, SelectKBest, SelectFromModel)
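For the feature-selection entry, a minimal SelectKBest sketch on synthetic data (k=5 is an arbitrary illustrative choice):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 20 features, only 5 of which carry signal
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=42)

# Keep the 5 features with the strongest ANOVA F-score against the target
selector = SelectKBest(score_func=f_classif, k=5)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                      # (200, 5)
print(selector.get_support(indices=True))  # indices of the kept columns
```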
When to use:
- Before training any algorithm that requires scaled features (SVM, KNN, Neural Networks)
- Converting categorical variables to numeric format
- Handling missing data systematically
- Creating non-linear features for linear models
See: references/preprocessing.md for detailed preprocessing techniques.
5. Pipelines and Composition
Build reproducible, production-ready ML workflows.
Key components:
- Pipeline: Chain transformers and estimators sequentially
- ColumnTransformer: Apply different preprocessing to different columns
- FeatureUnion: Combine multiple transformers in parallel
- TransformedTargetRegressor: Transform target variable
Benefits:
- Prevents data leakage in cross-validation
- Simplifies code and improves maintainability
- Enables joint hyperparameter tuning
- Ensures consistency between training and prediction
When to use:
- Always use Pipelines for production workflows
- When mixing numerical and categorical features (use ColumnTransformer)
- When performing cross-validation with preprocessing steps
- When hyperparameter tuning includes preprocessing parameters
See: references/pipelines_and_composition.md for comprehensive pipeline patterns.
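TransformedTargetRegressor is listed above but not demonstrated in the copied examples; here is a minimal sketch under the assumption of a positive, skewed target such as prices (synthetic data with illustrative weights):

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

# Hypothetical skewed target: model in log space, predict in original units
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = np.exp(X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.1, size=200))

reg = TransformedTargetRegressor(
    regressor=LinearRegression(), func=np.log, inverse_func=np.exp
)
reg.fit(X, y)
print(reg.predict(X[:3]))  # predictions come back in the original scale
```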
Examples
Example 1: Ask for the upstream workflow directly
Use @scikit-learn to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @scikit-learn against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @scikit-learn for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @scikit-learn using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Imported Usage Notes
Imported: Quick Start
Classification Example
```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Preprocess
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Train model
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train_scaled, y_train)

# Evaluate
y_pred = model.predict(X_test_scaled)
print(classification_report(y_test, y_pred))
```
Complete Pipeline with Mixed Data
```python
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingClassifier

# Define feature types
numeric_features = ['age', 'income']
categorical_features = ['gender', 'occupation']

# Create preprocessing pipelines
numeric_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])
categorical_transformer = Pipeline([
    ('imputer', SimpleImputer(strategy='most_frequent')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))
])

# Combine transformers
preprocessor = ColumnTransformer([
    ('num', numeric_transformer, numeric_features),
    ('cat', categorical_transformer, categorical_features)
])

# Full pipeline
model = Pipeline([
    ('preprocessor', preprocessor),
    ('classifier', GradientBoostingClassifier(random_state=42))
])

# Fit and predict
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
```
Imported: Example Scripts
Classification Pipeline
Run a complete classification workflow with preprocessing, model comparison, hyperparameter tuning, and evaluation:
```bash
python scripts/classification_pipeline.py
```
This script demonstrates:
- Handling mixed data types (numeric and categorical)
- Model comparison using cross-validation
- Hyperparameter tuning with GridSearchCV
- Comprehensive evaluation with multiple metrics
- Feature importance analysis
Clustering Analysis
Perform clustering analysis with algorithm comparison and visualization:
```bash
python scripts/clustering_analysis.py
```
This script demonstrates:
- Finding optimal number of clusters (elbow method, silhouette analysis)
- Comparing multiple clustering algorithms (K-Means, DBSCAN, Agglomerative, Gaussian Mixture)
- Evaluating clustering quality without ground truth
- Visualizing results with PCA projection
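If the copied script is unavailable, the elbow heuristic it implements looks roughly like this minimal sketch (synthetic blobs; the actual script may differ): inertia drops quickly until the true cluster count, then flattens.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Synthetic data with a known number of clusters
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

# Watch for the "elbow" where inertia stops dropping sharply (here, k=4)
for k in range(1, 8):
    inertia = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
    print(k, round(inertia, 1))
```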
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
Imported Operating Notes
Imported: Best Practices
Always Use Pipelines
Pipelines prevent data leakage and ensure consistency:
```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Good: preprocessing in pipeline
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('model', LogisticRegression())
])

# Bad: preprocessing outside (can leak information)
X_scaled = StandardScaler().fit_transform(X)
```
Fit on Training Data Only
Never fit on test data:
```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Good
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # only transform

# Bad
scaler = StandardScaler()
X_all_scaled = scaler.fit_transform(np.vstack([X_train, X_test]))
```
Use Stratified Splitting for Classification
Preserve class distribution:
```python
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```
Set Random State for Reproducibility
```python
model = RandomForestClassifier(n_estimators=100, random_state=42)
```
Choose Appropriate Metrics
- Balanced data: Accuracy, F1-score
- Imbalanced data: Precision, Recall, ROC AUC, Balanced Accuracy
- Cost-sensitive: Define custom scorer
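The custom-scorer bullet has no snippet in the copied files; one hedged way to encode a cost profile is make_scorer wrapping an F-beta metric (beta=2 below is an illustrative choice, not an upstream recommendation).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_val_score

# Synthetic imbalanced problem where missing positives is costly
X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=42)

# Hypothetical cost profile: recall matters twice as much as precision
cost_scorer = make_scorer(fbeta_score, beta=2)
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         X, y, cv=5, scoring=cost_scorer)
print(scores.mean())
```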
Scale Features When Required
Algorithms requiring feature scaling:
- SVM, KNN, Neural Networks
- PCA, Linear/Logistic Regression with regularization
- K-Means clustering
Algorithms not requiring scaling:
- Tree-based models (Decision Trees, Random Forest, Gradient Boosting)
- Naive Bayes
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/scikit-learn, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Imported Troubleshooting Notes
Imported: Troubleshooting Common Issues
ConvergenceWarning
Issue: Model didn't converge.
Solution: Increase max_iter or scale features.

```python
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(max_iter=1000)
```
Poor Performance on Test Set
Issue: Overfitting.
Solution: Use regularization, cross-validation, or a simpler model.

```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Add regularization
model = Ridge(alpha=1.0)

# Use cross-validation
scores = cross_val_score(model, X, y, cv=5)
```
Memory Error with Large Datasets
Solution: Use algorithms designed for large data
```python
# Use SGD for large datasets
from sklearn.linear_model import SGDClassifier
model = SGDClassifier()

# Or MiniBatchKMeans for clustering
from sklearn.cluster import MiniBatchKMeans
model = MiniBatchKMeans(n_clusters=8, batch_size=100)
```
Related Skills
Hand off to one of these native skills when the work is better handled by that specialization after this imported skill establishes context:
- @00-andruia-consultant-v2
- @10-andruia-skill-smith-v2
- @20-andruia-niche-intelligence-v2
- @2d-games
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| references | copied reference notes, guides, or background material from upstream | |
| examples | worked examples or reusable prompts copied from upstream | |
| scripts | upstream helper scripts that change execution or validation | |
| | routing or delegation notes that are genuinely part of the imported package | |
| assets | supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Reference Documentation
This skill includes comprehensive reference files for deep dives into specific topics:
Quick Reference
File: references/quick_reference.md
- Common import patterns and installation instructions
- Quick workflow templates for common tasks
- Algorithm selection cheat sheets
- Common patterns and gotchas
- Performance optimization tips
Supervised Learning
File: references/supervised_learning.md
- Linear models (regression and classification)
- Support Vector Machines
- Decision Trees and ensemble methods
- K-Nearest Neighbors, Naive Bayes, Neural Networks
- Algorithm selection guide
Unsupervised Learning
File: references/unsupervised_learning.md
- All clustering algorithms with parameters and use cases
- Dimensionality reduction techniques
- Outlier and novelty detection
- Gaussian Mixture Models
- Method selection guide
Model Evaluation
File: references/model_evaluation.md
- Cross-validation strategies
- Hyperparameter tuning methods
- Classification, regression, and clustering metrics
- Learning and validation curves
- Best practices for model selection
Preprocessing
File: references/preprocessing.md
- Feature scaling and normalization
- Encoding categorical variables
- Missing value imputation
- Feature engineering techniques
- Custom transformers
Pipelines and Composition
File: references/pipelines_and_composition.md
- Pipeline construction and usage
- ColumnTransformer for mixed data types
- FeatureUnion for parallel transformations
- Complete end-to-end examples
- Best practices
Imported: Additional Resources
- Official Documentation: https://scikit-learn.org/stable/
- User Guide: https://scikit-learn.org/stable/user_guide.html
- API Reference: https://scikit-learn.org/stable/api/index.html
- Examples Gallery: https://scikit-learn.org/stable/auto_examples/index.html
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.