Skillshub bun-ffi-native-binding
Build high-performance native modules for JavaScript using Bun's FFI (Foreign Function Interface) with Zig or C. Use when optimizing hot paths, integrating system libraries, or requiring native performance for compute-intensive operations.
install
source · Clone the upstream repo
git clone https://github.com/ComeOnOliver/skillshub
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/Harmeet10000/skills/bun-ffi-native-binding" ~/.claude/skills/comeonoliver-skillshub-bun-ffi-native-binding && rm -rf "$T"
manifest: skills/Harmeet10000/skills/bun-ffi-native-binding/SKILL.md
Bun FFI Native Binding Skill
Build native extensions for JavaScript using Bun's tight integration with Zig and C via FFI.
When to Use
- Hot paths: Compute-intensive operations (crypto, compression, math)
- System integration: Direct OS/hardware access
- Large data processing: Batch operations on arrays/buffers
- Legacy libraries: Wrap existing C/Zig libraries
Two Approaches
1. Zig Bindgen (Recommended)
Zig functions compiled directly into Bun with zero-overhead bindings.
Setup:

```sh
bun add -d @zig/build
```

Zig function (`src/math.zig`):

```zig
const std = @import("std");
const jsc = @import("jsc");

pub fn add(global: *jsc.JSGlobalObject, a: i32, b: i32) !i32 {
    return std.math.add(i32, a, b) catch {
        return global.throwPretty("Integer overflow", .{});
    };
}
```
Binding declaration (`src/bindings.ts`):

```ts
import { t, fn } from "bindgen";

export const add = fn({
  args: {
    global: t.globalObject,
    a: t.i32,
    b: t.i32,
  },
  ret: t.i32,
});
```
Usage (`index.ts`):

```ts
import { add } from "bun:math";

console.log(add(2, 3)); // 5
```
2. C FFI (Dynamic Loading)
Load C libraries at runtime without compilation.
C function (`lib.c`):

```c
int add(int a, int b) {
  return a + b;
}
```
Compile:

```sh
gcc -shared -fPIC -o lib.so lib.c
```
Load in Bun (`index.ts`):

```ts
import { dlopen, FFIType } from "bun:ffi";

const lib = dlopen("./lib.so", {
  add: {
    args: [FFIType.i32, FFIType.i32],
    returns: FFIType.i32,
  },
});

console.log(lib.symbols.add(2, 3)); // 5
```
Performance Considerations
Bridge Cost
- Overhead: 10-100 nanoseconds per call
- Dominates: Tiny functions called repeatedly
- Solution: Batch operations
Data Conversion
- Overhead: Proportional to payload size
- Dominates: Complex object marshaling
- Solution: Use typed arrays, avoid JSON
Rule of Thumb
If work per call > bridge cost → native wins
Critical Edge Cases
See references/EDGE_CASES.md for:
- Exception boundaries (panics crash runtime)
- Memory ownership (who frees allocations?)
- Struct alignment (layout assumptions)
- GC interaction (pinning references)
- Thread safety (event loop constraints)
- ABI compatibility (C calling convention)
Best Practices
- Minimize boundary crossings — batch processing in native code
- Use typed arrays — zero-copy buffer mapping
- Avoid per-call allocation — reuse buffers
- Binary formats — faster than JSON serialization
- Stable APIs — version struct layouts
- Error handling — convert panics to JS exceptions
Optimization Checklist
- Minimize JS → native calls
- Avoid JSON across boundary
- Use typed arrays/buffers
- Batch processing in native
- Convert errors to JS exceptions
- No Zig panics escape to JS
- No global mutable state
- Benchmark boundary latency
- No per-call memory allocation
- Thread safety verified
Example: Batch Array Processing
Zig (`src/process.zig`):

```zig
const jsc = @import("jsc");

pub fn processArray(global: *jsc.JSGlobalObject, ptr: [*]const u32, len: usize) !u32 {
    var sum: u32 = 0;
    for (0..len) |i| {
        sum +|= ptr[i]; // saturating add: overflow clamps instead of panicking
    }
    return sum;
}
```
JS (`index.ts`):

```ts
const data = new Uint32Array([1, 2, 3, 4, 5]);
// Pass the typed array itself, not data.buffer — the binding layer maps it
// to a pointer, and the global object argument is injected automatically.
const sum = processArray(data, data.length); // 15
```
This avoids 5 separate JS→native calls and marshals data once.