# Babysitter sensor-fusion

Multi-sensor fusion algorithms for perception in autonomous driving.
## Install

Clone the upstream repo:

```shell
git clone https://github.com/a5c-ai/babysitter
```

Claude Code: install the skill into `~/.claude/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/a5c-ai/babysitter "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/library/specializations/domains/science/automotive-engineering/skills/sensor-fusion" \
       ~/.claude/skills/a5c-ai-babysitter-sensor-fusion \
  && rm -rf "$T"
```

Manifest: `library/specializations/domains/science/automotive-engineering/skills/sensor-fusion/SKILL.md`
## Sensor Fusion Skill

### Purpose

Enable development of multi-sensor fusion algorithms for autonomous-driving perception, including object detection, tracking, and environmental modeling.
### Capabilities
- Camera, radar, lidar data preprocessing
- Object detection fusion algorithms
- Tracking filter implementation (Kalman, EKF, UKF)
- Association algorithms (Hungarian, GNN, JPDA)
- Occupancy grid fusion
- Confidence estimation and sensor weighting
- Time synchronization handling
- Ground truth comparison and metrics
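Of the tracking filters listed above, the linear Kalman filter is the usual starting point for objects with near-constant-velocity dynamics. The following is a minimal sketch of a 2-D constant-velocity tracker; the cycle time and noise covariances are illustrative assumptions, not values defined by this skill.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for 2-D object tracking.
# State x = [px, py, vx, vy]; the measurement is position only.
# dt, Q, and R below are assumed example values.

dt = 0.1  # sensor cycle time [s] (assumed)

F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)  # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only measurement model
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.1 * np.eye(2)                         # measurement noise (assumed)

def predict(x, P):
    """Propagate state and covariance one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a position measurement z."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Track an object moving at 1 m/s along x, with noise-free position reports.
x = np.zeros(4)
P = np.eye(4)
for k in range(1, 20):
    x, P = predict(x, P)
    x, P = update(x, P, np.array([0.1 * k, 0.0]))

print(np.round(x, 2))  # estimated [px, py, vx, vy]
```

For maneuvering targets or nonlinear measurement models (e.g. radar range/bearing), the same predict/update structure carries over to the EKF and UKF variants named above.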
### Usage Guidelines
- Preprocess sensor data for consistent coordinate frames
- Select appropriate tracking filters based on object dynamics
- Implement robust association for multi-target scenarios
- Fuse sensor confidence for reliable perception
- Handle time delays and synchronization issues
- Validate fusion against ground truth data
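"Robust association for multi-target scenarios" can be sketched with a gated nearest-neighbor pass: greedily pair each track with its closest detection and reject pairs beyond a gate. The gate value and Euclidean cost here are illustrative assumptions; a production system would more likely use Mahalanobis distance from the track covariance and an optimal assignment such as the Hungarian algorithm or JPDA listed in Capabilities.

```python
import numpy as np

def associate(tracks, detections, gate=2.0):
    """Greedy gated nearest-neighbor association.

    Returns (track_idx, det_idx) pairs whose distance is within `gate`.
    """
    tracks = np.asarray(tracks, dtype=float)
    detections = np.asarray(detections, dtype=float)
    # cost[i, j] = Euclidean distance between track i and detection j
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    pairs, used_tracks, used_dets = [], set(), set()
    # visit candidate pairs cheapest-first
    for i, j in sorted(np.ndindex(cost.shape), key=lambda ij: cost[ij]):
        if cost[i, j] > gate:
            break  # remaining pairs are even farther apart
        if i in used_tracks or j in used_dets:
            continue
        pairs.append((i, j))
        used_tracks.add(i)
        used_dets.add(j)
    return pairs

tracks = [[0.0, 0.0], [5.0, 5.0]]
detections = [[5.2, 4.9], [0.1, -0.1]]
print(associate(tracks, detections))  # [(0, 1), (1, 0)]
```

Unassociated detections then seed new tracks, and tracks that miss several association cycles are retired; both policies should be validated against ground-truth data as the guideline above suggests.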
### Dependencies
- ROS/ROS2
- TensorFlow
- PyTorch
- NVIDIA DriveWorks
### Process Integration
- ADA-001: Perception System Development
- ADA-002: Path Planning and Motion Control
- ADA-003: ADAS Feature Development
- ADA-004: Simulation and Virtual Validation