Awesome-omni-skills transformers-js
Transformers.js - Machine Learning for JavaScript workflow skill. Use this skill when the user needs to run Hugging Face models in JavaScript or TypeScript with Transformers.js in Node.js or the browser, and when the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
```bash
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/transformers-js" ~/.claude/skills/diegosouzapw-awesome-omni-skills-transformers-js && rm -rf "$T"
```
skills/transformers-js/SKILL.md: Transformers.js - Machine Learning for JavaScript
Overview
This public intake copy packages `plugins/antigravity-awesome-skills-claude/skills/transformers-js` from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.
Transformers.js - Machine Learning for JavaScript
Transformers.js enables running state-of-the-art machine learning models directly in JavaScript, both in browsers and Node.js environments, with no server required.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Core Concepts, Supported Tasks, Finding and Choosing Models, Advanced Configuration, Browser-Specific Considerations, Error Handling.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Run ML models for text analysis, generation, or translation in JavaScript
- Perform image classification, object detection, or segmentation
- Implement speech recognition or audio processing
- Build multimodal AI applications (text-to-image, image-to-text, etc.)
- Run models client-side in the browser without a backend
- Use when the request clearly matches the imported source intent: run Hugging Face models in JavaScript or TypeScript with Transformers.js in Node.js or the browser.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | metadata.json | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | ORIGIN.md | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
- Read the overview and provenance files before loading any copied upstream support files.
- Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
- Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
- Validate the result against the upstream expectations and the evidence you can point to in the copied files.
- Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
Imported Workflow Notes
Imported: Installation
NPM Installation
```bash
npm install @huggingface/transformers
```
Browser Usage (CDN)
```html
<script type="module">
  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers';
</script>
```
Imported: Core Concepts
1. Pipeline API
The pipeline API is the easiest way to use models. It groups together preprocessing, model inference, and postprocessing:
```javascript
import { pipeline } from '@huggingface/transformers';

// Create a pipeline for a specific task
const pipe = await pipeline('sentiment-analysis');

// Use the pipeline
const result = await pipe('I love transformers!');
// Output: [{ label: 'POSITIVE', score: 0.999817686 }]

// IMPORTANT: Always dispose when done to free memory
await pipe.dispose();
```
⚠️ Memory Management: All pipelines must be disposed with `pipe.dispose()` when finished to prevent memory leaks. See examples in Code Examples for cleanup patterns across different environments.
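One disposal shape that works in both Node.js and the browser is a try/finally wrapper. This is a minimal sketch assuming the v3 `dispose()` API described above, not a copied upstream example:

```javascript
import { pipeline } from '@huggingface/transformers';

// Dispose even when inference throws, so model memory is always released.
async function classifyOnce(text) {
  const pipe = await pipeline('sentiment-analysis');
  try {
    return await pipe(text);
  } finally {
    await pipe.dispose();
  }
}
```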
2. Model Selection
You can specify a custom model as the second argument:
```javascript
const pipe = await pipeline(
  'sentiment-analysis',
  'Xenova/bert-base-multilingual-uncased-sentiment'
);
```
Finding Models:
Browse available Transformers.js models on Hugging Face Hub:
- All models: https://huggingface.co/models?library=transformers.js&sort=trending
- By task: add the `pipeline_tag` parameter:
  - Text generation: https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending
  - Image classification: https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&sort=trending
  - Speech recognition: https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js&sort=trending
Tip: Filter by task type, sort by trending/downloads, and check model cards for performance metrics and usage examples.
3. Device Selection
Choose where to run the model:
```javascript
// Run on CPU (default for WASM)
const pipe = await pipeline('sentiment-analysis', 'model-id');

// Run on GPU (WebGPU - experimental)
const gpuPipe = await pipeline('sentiment-analysis', 'model-id', {
  device: 'webgpu',
});
```
4. Quantization Options
Control model precision vs. performance:
```javascript
// Use quantized model (faster, smaller)
const pipe = await pipeline('sentiment-analysis', 'model-id', {
  dtype: 'q4', // Options: 'fp32', 'fp16', 'q8', 'q4'
});
```
Examples
Example 1: Ask for the upstream workflow directly
Use @transformers-js to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @transformers-js against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @transformers-js for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @transformers-js using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Always Dispose Pipelines: Call pipe.dispose() when done - critical for preventing memory leaks
- Start with Pipelines: Use the pipeline API unless you need fine-grained control
- Test Locally First: Test models with small inputs before deploying
- Monitor Model Sizes: Be aware of model download sizes for web applications
- Handle Loading States: Show progress indicators for better UX
- Version Pin: Pin specific model versions for production stability
- Error Boundaries: Always wrap pipeline calls in try-catch blocks
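To make the "Reuse Models" note from the imported list below concrete, here is a minimal sketch (not copied from upstream) that caches the pipeline promise at module scope so concurrent callers share a single model load:

```javascript
import { pipeline } from '@huggingface/transformers';

// Cache the pipeline promise so the model is loaded exactly once,
// even when several callers request it concurrently.
let classifierPromise = null;

function getClassifier() {
  classifierPromise ??= pipeline('sentiment-analysis');
  return classifierPromise;
}

async function classify(text) {
  const classifier = await getClassifier();
  return classifier(text);
}
```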
Imported Operating Notes
Imported: Best Practices
- Always Dispose Pipelines: Call `pipe.dispose()` when done - critical for preventing memory leaks
- Start with Pipelines: Use the pipeline API unless you need fine-grained control
- Test Locally First: Test models with small inputs before deploying
- Monitor Model Sizes: Be aware of model download sizes for web applications
- Handle Loading States: Show progress indicators for better UX
- Version Pin: Pin specific model versions for production stability
- Error Boundaries: Always wrap pipeline calls in try-catch blocks
- Progressive Enhancement: Provide fallbacks for unsupported browsers
- Reuse Models: Load once, use many times - don't recreate pipelines unnecessarily
- Graceful Shutdown: Dispose models on SIGTERM/SIGINT in servers
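A minimal sketch of the graceful-shutdown item for a Node.js server; it assumes a long-lived `pipe` created at startup and is illustrative rather than copied from upstream:

```javascript
// Assumes a long-lived pipeline created at startup, e.g.:
// const pipe = await pipeline('sentiment-analysis');
async function shutdown(signal) {
  console.log(`${signal} received, disposing model...`);
  await pipe.dispose(); // release model memory before the process exits
  process.exit(0);
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
```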
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in `plugins/antigravity-awesome-skills-claude/skills/transformers-js`, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Imported Troubleshooting Notes
Imported: Troubleshooting
Model Not Found
- Verify model exists on Hugging Face Hub
- Check model name spelling
- Ensure model has ONNX files (look for an `onnx` folder in the model repo)
Memory Issues
- Use smaller models or quantized versions (`dtype: 'q4'`)
- Reduce batch size
- Limit sequence length with `max_length`
WebGPU Errors
- Check browser compatibility (Chrome 113+, Edge 113+)
- Try `dtype: 'fp16'` if `fp32` fails
- Fall back to WASM if WebGPU unavailable (sketched below)
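A minimal fallback sketch (not from upstream): feature-detect WebGPU via `navigator.gpu` and otherwise request the WASM backend. The `'wasm'` device value follows the Transformers.js v3 device options; verify against the Pipeline Options reference.

```javascript
import { pipeline } from '@huggingface/transformers';

// Prefer WebGPU when the browser exposes it; otherwise use the WASM backend.
const device = typeof navigator !== 'undefined' && 'gpu' in navigator ? 'webgpu' : 'wasm';

const pipe = await pipeline('sentiment-analysis', 'model-id', { device });
```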
Related Skills
- @supply-chain-risk-auditor - Use when the work is better handled by that native specialization after this imported skill establishes context.
- @sveltekit - Use when the work is better handled by that native specialization after this imported skill establishes context.
- @swift-concurrency-expert - Use when the work is better handled by that native specialization after this imported skill establishes context.
- @swiftui-expert-skill - Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| References | copied reference notes, guides, or background material from upstream | |
| Examples | worked examples or reusable prompts copied from upstream | |
| Scripts | upstream helper scripts that change execution or validation | |
| Routing notes | routing or delegation notes that are genuinely part of the imported package | |
| Assets | supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Reference Documentation
This Skill
- Pipeline Options - Configure `pipeline()` with `progress_callback`, `device`, `dtype`, etc.
- Configuration Reference - Global `env` configuration for caching and model loading
- Caching Reference - Browser Cache API, Node.js filesystem cache, and custom cache implementations
- Text Generation Guide - Streaming, chat format, and generation parameters
- Model Architectures - Supported models and selection tips
- Code Examples - Real-world implementations for different runtimes
Official Transformers.js
- Official docs: https://huggingface.co/docs/transformers.js
- API reference: https://huggingface.co/docs/transformers.js/api/pipelines
- Model hub: https://huggingface.co/models?library=transformers.js
- GitHub: https://github.com/huggingface/transformers.js
- Examples: https://github.com/huggingface/transformers.js/tree/main/examples
Imported: Quick Reference: Task IDs
| Task | Task ID |
|---|---|
| Text classification | `text-classification` or `sentiment-analysis` |
| Token classification | `token-classification` or `ner` |
| Question answering | `question-answering` |
| Fill mask | `fill-mask` |
| Summarization | `summarization` |
| Translation | `translation` |
| Text generation | `text-generation` |
| Text-to-text generation | `text2text-generation` |
| Zero-shot classification | `zero-shot-classification` |
| Image classification | `image-classification` |
| Image segmentation | `image-segmentation` |
| Object detection | `object-detection` |
| Depth estimation | `depth-estimation` |
| Image-to-image | `image-to-image` |
| Zero-shot image classification | `zero-shot-image-classification` |
| Zero-shot object detection | `zero-shot-object-detection` |
| Automatic speech recognition | `automatic-speech-recognition` |
| Audio classification | `audio-classification` |
| Text-to-speech | `text-to-speech` or `text-to-audio` |
| Image-to-text | `image-to-text` |
| Document question answering | `document-question-answering` |
| Feature extraction | `feature-extraction` |
| Sentence similarity | `feature-extraction` (with `pooling: 'mean'`, `normalize: true`) |
This skill enables you to integrate state-of-the-art machine learning capabilities directly into JavaScript applications without requiring separate ML servers or Python environments.
Imported: Supported Tasks
Note: All examples below show basic usage.
Natural Language Processing
Text Classification
```javascript
const classifier = await pipeline('text-classification');
const result = await classifier('This movie was amazing!');
```
Named Entity Recognition (NER)
```javascript
const ner = await pipeline('token-classification');
const entities = await ner('My name is John and I live in New York.');
```
Question Answering
```javascript
const qa = await pipeline('question-answering');
// The question-answering pipeline takes (question, context) as positional arguments
const answer = await qa(
  'What is the capital of France?',
  'Paris is the capital and largest city of France.'
);
```
Text Generation
```javascript
const generator = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX');
const text = await generator('Once upon a time', {
  max_new_tokens: 100,
  temperature: 0.7
});
```
For streaming and chat: See Text Generation Guide for:
- Streaming token-by-token output with `TextStreamer` (see the sketch after this list)
- Chat/conversation format with system/user/assistant roles
- Generation parameters (temperature, top_k, top_p)
- Browser and Node.js examples
- React components and API endpoints
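As a taste of the streaming item above, here is a hedged sketch using `TextStreamer`; the option names (`skip_prompt`, `callback_function`) follow the upstream v3 examples, but confirm them against the Text Generation Guide:

```javascript
import { pipeline, TextStreamer } from '@huggingface/transformers';

const generator = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX');

// skip_prompt avoids re-emitting the prompt; callback_function receives decoded chunks.
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (text) => process.stdout.write(text), // Node.js; swap for DOM updates in browsers
});

const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Tell me a short story.' },
];

await generator(messages, { max_new_tokens: 100, streamer });
await generator.dispose();
```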
Translation
```javascript
const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
const output = await translator('Hello, how are you?', {
  src_lang: 'eng_Latn',
  tgt_lang: 'fra_Latn'
});
```
Summarization
```javascript
const summarizer = await pipeline('summarization');
const summary = await summarizer(longText, {
  max_length: 100,
  min_length: 30
});
```
Zero-Shot Classification
```javascript
const classifier = await pipeline('zero-shot-classification');
const result = await classifier('This is a story about sports.', ['politics', 'sports', 'technology']);
```
Computer Vision
Image Classification
```javascript
const classifier = await pipeline('image-classification');
const result = await classifier('https://example.com/image.jpg');

// Or with a local file / object URL
const localResult = await classifier(imageUrl);
```
Object Detection
```javascript
const detector = await pipeline('object-detection');
const objects = await detector('https://example.com/image.jpg');
// Returns: [{ label: 'person', score: 0.95, box: { xmin, ymin, xmax, ymax } }, ...]
```
Image Segmentation
```javascript
const segmenter = await pipeline('image-segmentation');
const segments = await segmenter('https://example.com/image.jpg');
```
Depth Estimation
```javascript
const depthEstimator = await pipeline('depth-estimation');
const depth = await depthEstimator('https://example.com/image.jpg');
```
Zero-Shot Image Classification
```javascript
const classifier = await pipeline('zero-shot-image-classification');
const result = await classifier('image.jpg', ['cat', 'dog', 'bird']);
```
Audio Processing
Automatic Speech Recognition
```javascript
const transcriber = await pipeline('automatic-speech-recognition');
const result = await transcriber('audio.wav');
// Returns: { text: 'transcribed text here' }
```
Audio Classification
```javascript
const classifier = await pipeline('audio-classification');
const result = await classifier('audio.wav');
```
Text-to-Speech
```javascript
const synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts');
const audio = await synthesizer('Hello, this is a test.', {
  speaker_embeddings: speakerEmbeddings // speaker embedding vector or URL (see the model card)
});
```
Multimodal
Image-to-Text (Image Captioning)
```javascript
const captioner = await pipeline('image-to-text');
const caption = await captioner('image.jpg');
```
Document Question Answering
```javascript
const docQA = await pipeline('document-question-answering');
const answer = await docQA('document-image.jpg', 'What is the total amount?');
```
Zero-Shot Object Detection
```javascript
const detector = await pipeline('zero-shot-object-detection');
const objects = await detector('image.jpg', ['person', 'car', 'tree']);
```
Feature Extraction (Embeddings)
```javascript
const extractor = await pipeline('feature-extraction');
const embeddings = await extractor('This is a sentence to embed.');
// Returns: tensor of shape [1, sequence_length, hidden_size]

// For sentence embeddings (mean pooling)
const sentenceExtractor = await pipeline('feature-extraction', 'onnx-community/all-MiniLM-L6-v2-ONNX');
const sentenceEmbedding = await sentenceExtractor('Text to embed', {
  pooling: 'mean',
  normalize: true
});
```
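To connect this to the sentence-similarity row in the task table: with `normalize: true`, the dot product of two pooled embeddings is their cosine similarity. A minimal sketch; the `.data` typed-array access assumes the returned Tensor has shape `[1, hidden_size]`:

```javascript
import { pipeline } from '@huggingface/transformers';

const embedder = await pipeline('feature-extraction', 'onnx-community/all-MiniLM-L6-v2-ONNX');

const [a, b] = await Promise.all([
  embedder('The weather is lovely today.', { pooling: 'mean', normalize: true }),
  embedder('It is sunny outside.', { pooling: 'mean', normalize: true }),
]);

// Unit-length vectors, so the dot product over the tensors' data is the cosine similarity.
let dot = 0;
for (let i = 0; i < a.data.length; i++) dot += a.data[i] * b.data[i];
console.log('cosine similarity:', dot);

await embedder.dispose();
```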
Imported: Finding and Choosing Models
Browsing the Hugging Face Hub
Discover compatible Transformers.js models on Hugging Face Hub:
Base URL (all models):
https://huggingface.co/models?library=transformers.js&sort=trending
Filter by task using the `pipeline_tag` parameter:
Sort options:
- `&sort=trending` - Most popular recently
- `&sort=downloads` - Most downloaded overall
- `&sort=likes` - Most liked by community
- `&sort=modified` - Recently updated
Choosing the Right Model
Consider these factors when selecting a model:
1. Model Size
- Small (< 100MB): Fast, suitable for browsers, limited accuracy
- Medium (100MB - 500MB): Balanced performance, good for most use cases
- Large (> 500MB): High accuracy, slower, better for Node.js or powerful devices
2. Quantization
Models are often available in different quantization levels:
- `fp32` - Full precision (largest, most accurate)
- `fp16` - Half precision (smaller, still accurate)
- `q8` - 8-bit quantized (much smaller, slight accuracy loss)
- `q4` - 4-bit quantized (smallest, noticeable accuracy loss)
3. Task Compatibility
Check the model card for:
- Supported tasks (some models support multiple tasks)
- Input/output formats
- Language support (multilingual vs. English-only)
- License restrictions
4. Performance Metrics
Model cards typically show:
- Accuracy scores
- Benchmark results
- Inference speed
- Memory requirements
Example: Finding a Text Generation Model
```javascript
// 1. Visit: https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending
// 2. Browse and select a model (e.g., onnx-community/gemma-3-270m-it-ONNX)
// 3. Check model card for:
//    - Model size: ~270M parameters
//    - Quantization: q4 available
//    - Language: English
//    - Use case: Instruction-following chat
// 4. Use the model:
import { pipeline } from '@huggingface/transformers';

const generator = await pipeline(
  'text-generation',
  'onnx-community/gemma-3-270m-it-ONNX',
  { dtype: 'q4' } // Use quantized version for faster inference
);

const output = await generator('Explain quantum computing in simple terms.', {
  max_new_tokens: 100
});

await generator.dispose();
```
Tips for Model Selection
- Start Small: Test with a smaller model first, then upgrade if needed
- Check ONNX Support: Ensure the model has ONNX files (look for an `onnx` folder in the model repo)
- Read Model Cards: Model cards contain usage examples, limitations, and benchmarks
- Test Locally: Benchmark inference speed and memory usage in your environment
- Community Models: Look for models by `Xenova` (Transformers.js maintainer) or `onnx-community`
- Version Pin: Use specific git commits in production for stability:

```javascript
const pipe = await pipeline('task', 'model-id', { revision: 'abc123' });
```
Imported: Advanced Configuration
Environment Configuration (`env`)
The `env` object provides comprehensive control over Transformers.js execution, caching, and model loading.
Quick Overview:
```javascript
import { env } from '@huggingface/transformers';

// View version
console.log(env.version); // e.g., '3.8.1'

// Common settings
env.allowRemoteModels = true;    // Load from Hugging Face Hub
env.allowLocalModels = false;    // Load from file system
env.localModelPath = '/models/'; // Local model directory
env.useFSCache = true;           // Cache models on disk (Node.js)
env.useBrowserCache = true;      // Cache models in browser
env.cacheDir = './.cache';       // Cache directory location
```
Configuration Patterns:
```javascript
// Development: Fast iteration with remote models
env.allowRemoteModels = true;
env.useFSCache = true;

// Production: Local models only
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.localModelPath = '/app/models/';

// Custom CDN
env.remoteHost = 'https://cdn.example.com/models';

// Disable caching (testing)
env.useFSCache = false;
env.useBrowserCache = false;
```
For complete documentation on all configuration options, caching strategies, cache management, pre-downloading models, and more, see the Configuration Reference and Caching Reference.
Working with Tensors
```javascript
import { AutoTokenizer, AutoModel } from '@huggingface/transformers';

// Load tokenizer and model separately for more control
const tokenizer = await AutoTokenizer.from_pretrained('bert-base-uncased');
const model = await AutoModel.from_pretrained('bert-base-uncased');

// Tokenize input
const inputs = await tokenizer('Hello world!');

// Run model
const outputs = await model(inputs);
```
Batch Processing
```javascript
const classifier = await pipeline('sentiment-analysis');

// Process multiple texts
const results = await classifier([
  'I love this!',
  'This is terrible.',
  'It was okay.'
]);
```
Imported: Browser-Specific Considerations
WebGPU Usage
WebGPU provides GPU acceleration in browsers:
```javascript
const pipe = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX', {
  device: 'webgpu',
  dtype: 'fp32'
});
```
Note: WebGPU is experimental. Check browser compatibility and file issues if problems occur.
WASM Performance
Default browser execution uses WASM:
```javascript
// Optimized for browsers with quantization
const pipe = await pipeline('sentiment-analysis', 'model-id', {
  dtype: 'q8' // or 'q4' for even smaller size
});
```
Progress Tracking & Loading Indicators
Models can be large (ranging from a few MB to several GB) and consist of multiple files. Track download progress by passing a callback to the `pipeline()` function:
```javascript
import { pipeline } from '@huggingface/transformers';

// Track progress for each file
const fileProgress = {};

function onProgress(info) {
  console.log(`${info.status}: ${info.file}`);
  if (info.status === 'progress') {
    fileProgress[info.file] = info.progress;
    console.log(`${info.file}: ${info.progress.toFixed(1)}%`);
  }
  if (info.status === 'done') {
    console.log(`✓ ${info.file} complete`);
  }
}

// Pass callback to pipeline
const classifier = await pipeline('sentiment-analysis', null, {
  progress_callback: onProgress
});
```
Progress Info Properties:
```typescript
interface ProgressInfo {
  status: 'initiate' | 'download' | 'progress' | 'done' | 'ready';
  name: string;      // Model id or path
  file: string;      // File being processed
  progress?: number; // Percentage (0-100, only for 'progress' status)
  loaded?: number;   // Bytes downloaded (only for 'progress' status)
  total?: number;    // Total bytes (only for 'progress' status)
}
```
For complete examples including browser UIs, React components, CLI progress bars, and retry logic, see:
→ Pipeline Options - Progress Callback
Imported: Error Handling
```javascript
try {
  const pipe = await pipeline('sentiment-analysis', 'model-id');
  const result = await pipe('text to analyze');
} catch (error) {
  if (error.message.includes('fetch')) {
    console.error('Model download failed. Check internet connection.');
  } else if (error.message.includes('ONNX')) {
    console.error('Model execution failed. Check model compatibility.');
  } else {
    console.error('Unknown error:', error);
  }
}
```
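The reference notes mention retry logic; here is a hedged sketch of one possible helper (illustrative, not a library API) that retries only the download-style failures distinguished above:

```javascript
import { pipeline } from '@huggingface/transformers';

// Illustrative helper: retry transient download failures with exponential
// backoff, rethrowing anything that is not fetch-related.
async function loadPipelineWithRetry(task, model, options = {}, retries = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await pipeline(task, model, options);
    } catch (error) {
      if (attempt >= retries || !error.message.includes('fetch')) throw error;
      const delayMs = 1000 * 2 ** (attempt - 1); // 1s, 2s, 4s...
      console.warn(`Download failed (attempt ${attempt}), retrying in ${delayMs}ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```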
Imported: Performance Tips
- Reuse Pipelines: Create pipeline once, reuse for multiple inferences
- Use Quantization: Start with `q8` or `q4` for faster inference
- Batch Processing: Process multiple inputs together when possible
- Cache Models: Models are cached automatically (see Caching Reference for details on browser Cache API, Node.js filesystem cache, and custom implementations)
- WebGPU for Large Models: Use WebGPU for models that benefit from GPU acceleration
- Prune Context: For text generation, limit `max_new_tokens` to avoid memory issues
- Clean Up Resources: Call `pipe.dispose()` when done to free memory
Imported: Memory Management
IMPORTANT: Always call `pipe.dispose()` when finished to prevent memory leaks.
```javascript
const pipe = await pipeline('sentiment-analysis');
const result = await pipe('Great product!');
await pipe.dispose(); // ✓ Free memory (100MB - several GB per model)
```
When to dispose:
- Application shutdown or component unmount
- Before loading a different model
- After batch processing in long-running apps
Models consume significant memory and hold GPU/CPU resources. Disposal is critical for browser memory limits and server stability.
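A minimal sketch of the "before loading a different model" case (model ids are placeholders): dispose the old pipeline before replacing it so only one model stays resident.

```javascript
import { pipeline } from '@huggingface/transformers';

let current = await pipeline('sentiment-analysis', 'model-a');

async function swapModel(newModelId) {
  await current.dispose(); // release the old model first
  current = await pipeline('sentiment-analysis', newModelId);
  return current;
}
```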
For detailed patterns (React cleanup, servers, browser), see Code Examples
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.