
SIGGRAPH Skill

Install

Source · Clone the upstream repo:
git clone https://github.com/plurigrid/asi

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/plurigrid/asi "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/siggraph" ~/.claude/skills/plurigrid-asi-siggraph && rm -rf "$T"

Manifest: skills/siggraph/SKILL.md

Source Content

SIGGRAPH Skill

Trit: 0 (ERGODIC/Coordinator)
Domain: computer-graphics, research, rendering, animation, simulation
Conference: ACM SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques)


Overview

SIGGRAPH is the premier venue for computer graphics research. This skill indexes papers, repos, and techniques from SIGGRAPH 2023-2025.

┌─────────────────────────────────────────────────────────────────────────┐
│                      SIGGRAPH RESEARCH DOMAINS                          │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌────────────┐  │
│  │   RENDERING  │  │  ANIMATION   │  │  GEOMETRY    │  │    AI/ML   │  │
│  │              │  │              │  │              │  │            │  │
│  │ • NeRF       │  │ • Motion     │  │ • Meshes     │  │ • Diffusion│  │
│  │ • Gaussians  │  │ • Rigging    │  │ • B-rep      │  │ • GAN      │  │
│  │ • Ray trace  │  │ • Characters │  │ • Splatting  │  │ • ControlN │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  └────────────┘  │
│                                                                         │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌────────────┐  │
│  │  SIMULATION  │  │   IMAGING    │  │   HUMAN      │  │   ACCEL    │  │
│  │              │  │              │  │              │  │            │  │
│  │ • Physics    │  │ • HDR        │  │ • Faces      │  │ • WebGPU   │  │
│  │ • Fluids     │  │ • Colorize   │  │ • Bodies     │  │ • Neural   │  │
│  │ • MPM        │  │ • Edit       │  │ • Motion cap │  │ • Shaders  │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  └────────────┘  │
└─────────────────────────────────────────────────────────────────────────┘

SIGGRAPH 2025 Top Papers

| Repo | Stars | Topic | Description |
|---|---|---|---|
| VAST-AI-Research/UniRig | 1274 | Rigging | One Model to Rig Them All |
| XPixelGroup/HYPIR | 1023 | Restoration | Diffusion Score Priors for Image Restoration |
| aigc3d/LAM | 891 | Avatars | Large Avatar Model for One-shot Gaussian Head |
| microsoft/renderformer | 886 | Rendering | Transformer-based Neural Rendering with GI |
| IGL-HKUST/DiffusionAsShader | 796 | Video | 3D-aware Video Diffusion |
| NYU-ICL/image-gs | 422 | 2D Gaussians | Content-Adaptive Image Representation |
| PrimitiveAnything | 377 | 3D Gen | Human-Crafted Primitive Assembly |
| 3DTopia/LayerPano3D | 305 | Panorama | Layered 3D Panorama Generation |

SIGGRAPH 2024 Top Papers

| Repo | Stars | Topic | Description |
|---|---|---|---|
| TencentARC/MotionCtrl | 1478 | Motion | Motion Control for Video Generation |
| graphdeco-inria/hierarchical-3d-gaussians | 1351 | Gaussians | Hierarchical 3DGS for Large Datasets |
| hbb1/2d-gaussian-splatting | 2962 | 2DGS | Geometrically Accurate Radiance Fields |
| bytedance/X-Portrait | 532 | Portraits | Expressive Portrait Animation |
| MisEty/RTG-SLAM | 468 | SLAM | Real-time 3D Reconstruction with Gaussians |
| samxuxiang/BrepGen | 378 | CAD | B-rep Generative Diffusion Model |
| AIGAnimation/CAMDM | 286 | Animation | Taming Diffusion for Character Control |
| electronicarts/pbmpm | 232 | Physics | WebGPU Position Based MPM |

SIGGRAPH 2023 Classics

| Repo | Stars | Topic | Description |
|---|---|---|---|
| XingangPan/DragGAN | 36005 | GAN | Interactive Point-based Image Manipulation |
| Doubiiu/ToonCrafter | 5927 | Animation | Generative Cartoon Interpolation |
| williamyang1991/Rerender_A_Video | 3004 | Video | Zero-Shot Video-to-Video Translation |
| pix2pixzero | 1143 | Image | Zero-shot Image-to-Image Translation |

Key Techniques

Gaussian Splatting

# 3D Gaussian Splatting fundamentals
# Each Gaussian: position (μ), covariance (Σ), color (SH), opacity (α)

import numpy as np

class Gaussian3D:
    def __init__(self):
        self.position = np.zeros(3)      # μ ∈ R³
        self.covariance = np.eye(3)      # Σ ∈ R³ˣ³ (positive semi-definite)
        self.sh_coeffs = np.zeros(48)    # Spherical harmonics (16 coeffs × RGB)
        self.opacity = 1.0               # α ∈ [0, 1]

    def splat(self, camera):
        # Project to 2D: μ through the camera, Σ via the EWA approximation
        μ_2d = camera.project(self.position)
        Σ_2d = camera.project_cov(self.covariance)
        return μ_2d, Σ_2d
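The projected Gaussians are then blended per pixel by front-to-back alpha compositing. A minimal NumPy sketch of that blend follows; `composite` and its argument layout are illustrative, not the 3DGS reference API (the actual rasterizer runs this math tile-by-tile on the GPU):

```python
import numpy as np

def composite(gaussians_2d, pixel):
    """Front-to-back alpha compositing of depth-sorted 2D Gaussians.

    gaussians_2d: list of (mu_2d, sigma_2d, color, alpha), sorted near-to-far.
    """
    color = np.zeros(3)
    transmittance = 1.0
    for mu, sigma, c, alpha in gaussians_2d:
        d = pixel - mu
        # Evaluate the (unnormalized) 2D Gaussian at this pixel.
        g = np.exp(-0.5 * d @ np.linalg.inv(sigma) @ d)
        a = alpha * g
        color += transmittance * a * np.asarray(c)
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early termination, as in the paper
            break
    return color
```

A single opaque red Gaussian centered on the pixel yields pure red: `composite([(np.zeros(2), np.eye(2), (1, 0, 0), 1.0)], np.zeros(2))` → `[1, 0, 0]`.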

Neural Radiance Fields (NeRF)

# NeRF: F(x, d) → (c, σ)
# x = 3D position, d = viewing direction
# c = RGB color, σ = volume density

import torch

def nerf_forward(model, rays_o, rays_d, near, far, n_samples):
    # Sample points along each ray: [n_rays, n_samples, 3]
    t = torch.linspace(near, far, n_samples)
    points = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]

    # Query MLP for per-sample color and density
    rgb, density = model(points, rays_d)

    # Volume rendering: weight each sample by transmittance × opacity
    weights = compute_transmittance(density, t)     # [n_rays, n_samples]
    color = (weights[..., None] * rgb).sum(dim=-2)  # [n_rays, 3]
    return color
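`compute_transmittance` is left undefined above. The standard NeRF quadrature can be sketched as follows, shown in NumPy for self-containment (the torch version is line-for-line analogous; real implementations also append an "infinite" final interval and add density noise during training):

```python
import numpy as np

def compute_transmittance(density, t):
    """NeRF quadrature weights: w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    # Interval lengths; pad the last interval with the previous width.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-density * delta)  # per-sample opacity
    # Accumulated transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return trans * alpha
```

The weights always sum to at most 1; for a dense medium they approach 1, which is why the rendered color is a convex-like combination of the sampled colors.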

Material Point Method (MPM)

// WebGPU PB-MPM from EA SIGGRAPH 2024
// Position Based Material Point Method

struct Particle {
    position: vec3<f32>,
    velocity: vec3<f32>,
    mass: f32,
    volume: f32,
    deformation_grad: mat3x3<f32>,
}

@compute @workgroup_size(256)
fn p2g(@builtin(global_invocation_id) id: vec3<u32>) {
    // Particle-to-grid transfer: scatter mass and momentum onto the
    // 27 grid nodes covered by the quadratic B-spline kernel.
    let p = particles[id.x];
    let base = vec3<i32>(floor(p.position / dx));

    for (var i = 0; i < 27; i++) {
        let node = base + neighbor_offsets[i];
        let weight = bspline_weight(p.position, node);
        // WGSL atomics are integer-only, so grid values are stored in
        // fixed point (encode_fixed / grid_index are illustrative helpers).
        atomicAdd(&grid[grid_index(node)].mass, encode_fixed(p.mass * weight));
        atomicAdd(&grid[grid_index(node)].momentum_x, encode_fixed(p.mass * p.velocity.x * weight));
        // ...and likewise for momentum_y, momentum_z.
    }
}
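The `bspline_weight` helper is assumed by the kernel above. MPM transfers conventionally use the quadratic B-spline kernel, which spreads each particle over a 3×3×3 node neighborhood and whose per-axis weights sum to 1. A Python sketch (function names and the `dx` convention are assumptions, not pbmpm's actual API):

```python
def quadratic_bspline(x):
    """1D quadratic B-spline kernel; x is the signed distance to a
    grid node in cell units. Support is |x| < 1.5."""
    ax = abs(x)
    if ax < 0.5:
        return 0.75 - ax * ax
    if ax < 1.5:
        return 0.5 * (1.5 - ax) ** 2
    return 0.0

def bspline_weight(p, node, dx=1.0):
    """Tensor-product weight between 3D particle position p and grid node."""
    w = 1.0
    for d in range(3):
        w *= quadratic_bspline((p[d] - node[d] * dx) / dx)
    return w
```

The partition-of-unity property (per-axis weights over the covered nodes summing to exactly 1) is what makes the P2G scatter conserve mass and momentum.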

GF(3) Research Classification

MINUS (-1): Analysis/Measurement Papers
  - Perceptual studies
  - Benchmarks
  - Quality metrics

ERGODIC (0): Method/Algorithm Papers  
  - Novel techniques
  - Hybrid approaches
  - Framework design

PLUS (+1): Generation/Synthesis Papers
  - Generative models
  - Neural rendering
  - Content creation

Balanced Research Pipeline

;; catp verification for research workflow
[:literature-review :method-design :implementation]  ; -1 + 0 + 1 = 0 ✓
[:dataset-creation :training :evaluation]             ; -1 + 0 + 1 = 0 ✓
[:problem-analysis :algorithm :results]               ; -1 + 0 + 1 = 0 ✓
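The balance check itself is a single sum over GF(3). A Python restatement (the trit assignments follow the classification above; `TRITS` and `balanced` are illustrative names, not the catp API):

```python
# Trit per pipeline stage, per the GF(3) classification above (assumed mapping).
TRITS = {
    ":literature-review": -1, ":method-design": 0, ":implementation": 1,
    ":dataset-creation": -1, ":training": 0, ":evaluation": 1,
    ":problem-analysis": -1, ":algorithm": 0, ":results": 1,
}

def balanced(pipeline):
    """A pipeline is balanced when its trits sum to 0 in GF(3)."""
    return sum(TRITS[step] for step in pipeline) % 3 == 0
```

All three pipelines listed above pass this check; dropping a MINUS stage from any of them breaks the balance.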

Statistics (SIGGRAPH 2025)

  • Total Accepted: 710
  • Technical Papers: 306
  • TOG Papers: 24
  • Posters: 380
  • Location: Vancouver, Canada

Commands

# Search SIGGRAPH repos
gh search repos "siggraph 2025" --sort stars --limit 20

# Clone top paper implementations
gh repo clone VAST-AI-Research/UniRig
gh repo clone microsoft/renderformer
gh repo clone hbb1/2d-gaussian-splatting

# Track new SIGGRAPH papers
gh api -X GET search/repositories -f q="siggraph 2025" --jq '.items[:10] | .[].full_name'

Related Skills

| Skill | Trit | Bridge |
|---|---|---|
| algorithmic-art | +1 | Procedural generation |
| gay-mcp | +1 | Color theory for rendering |
| xogot | +1 | Game engine integration |
| mlx-apple-silicon | 0 | Neural inference on Metal |
| iroh-p2p | +1 | Distributed rendering |

SIGGRAPH Asia

| Year | Location | Notable Papers |
|---|---|---|
| 2024 | Tokyo | ToonCrafter, GVHMR, GaussianObject |
| 2023 | Sydney | EasyVolcap, Rerender_A_Video |
| 2022 | Daegu | VideoReTalking, VToonify |

Skill Name: siggraph
Type: Research / Computer Graphics
Trit: 0 (ERGODIC)
GF(3): Coordinator role - bridges analysis and synthesis


Autopoietic Marginalia

The interaction IS the skill improving itself.

Every use of this skill is an opportunity for worlding:

  • MEMORY (-1): Record what was learned
  • REMEMBERING (0): Connect patterns to other skills
  • WORLDING (+1): Evolve the skill based on use

Add Interaction Exemplars here as the skill is used.