# claude-skill-registry · face-recognition
Face recognition system patterns for attendance. Use when working with face detection, verification, enrollment, liveness detection, or any biometric authentication features.
## Install

Source · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/data/face-recognition" ~/.claude/skills/majiayu000-claude-skill-registry-face-recognition \
  && rm -rf "$T"
```

Manifest: `skills/data/face-recognition/SKILL.md` (source content follows)
# Face Recognition Skill
This skill provides guidance for implementing and maintaining the face recognition system used for attendance tracking.
## Architecture Overview
```
Frontend (Vue 3 + TypeScript)
├── components/                    # UI components
│   ├── FaceRecognition.vue
│   ├── FaceEnrollment.vue
│   └── FaceAnalytics.vue
├── services/                      # Detection libraries
│   ├── FaceDetectionService.js    # Face-API.js wrapper
│   └── MediaPipeFaceService.js    # MediaPipe wrapper
├── composables/
│   └── useFaceDetection.js        # Composable logic
├── stores/
│   └── faceDetection.ts           # Pinia state
└── types/
    └── face-recognition.ts        # TypeScript interfaces

Backend (Laravel)
├── Services/
│   └── FaceRecognitionService.php # Core business logic
├── Controllers/
│   └── FaceDetectionController.php
└── routes/
    └── api_face_recognition.php
```
## Face Descriptor Format
Face descriptors are 128-dimensional float arrays generated by the face recognition neural network:
```typescript
// TypeScript
type FaceDescriptor = number[] // Length: 128, Range: -1 to 1
```

```php
// PHP
/** @var float[] $descriptor 128 elements */
```
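Descriptors are compared with cosine similarity (the service layer below implements this in PHP). The same metric can be sketched in TypeScript for a client-side pre-check; this helper is hypothetical and not part of the skill's files:

```typescript
// Hypothetical client-side mirror of the backend's cosine similarity.
// Both inputs must be 128-dimensional descriptor arrays.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== 128 || b.length !== 128) {
    throw new Error("Descriptors must be 128-dimensional")
  }
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < 128; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB)
  // Guard against zero vectors, as the backend does.
  return denom > 0 ? dot / denom : 0
}
```

Identical descriptors score 1.0, unrelated ones drift toward 0, which is why the backend accepts matches at 0.6 and above.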
## Frontend Implementation

### Using Face-API.js (Primary)
```typescript
// resources/js/services/FaceDetectionService.js
import * as faceapi from 'face-api.js'

// Load models (call once at app init)
async function loadModels() {
  const modelPath = '/models'
  await Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri(modelPath),
    faceapi.nets.faceLandmark68Net.loadFromUri(modelPath),
    faceapi.nets.faceRecognitionNet.loadFromUri(modelPath),
    faceapi.nets.faceExpressionNet.loadFromUri(modelPath),
  ])
}

// Detect a face and get its descriptor
async function detectFace(video: HTMLVideoElement): Promise<FaceDetectionResult | null> {
  const detection = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions({
      inputSize: 416,
      scoreThreshold: 0.5
    }))
    .withFaceLandmarks()
    .withFaceDescriptor()
    .withFaceExpressions()

  if (!detection) return null

  return {
    confidence: detection.detection.score,
    boundingBox: detection.detection.box,
    landmarks: detection.landmarks.positions,
    descriptor: Array.from(detection.descriptor), // Float32Array → number[]
    expressions: detection.expressions
  }
}
```
### Using MediaPipe (Lightweight Alternative)
```typescript
// resources/js/services/MediaPipeFaceService.js
import { FaceDetection } from '@mediapipe/face_detection'

const faceDetection = new FaceDetection({
  locateFile: (file) => `/mediapipe/${file}`
})

faceDetection.setOptions({
  model: 'short',              // 'short' (fast) or 'full' (accurate)
  minDetectionConfidence: 0.5,
  maxNumFaces: 1
})

// Note: MediaPipe doesn't generate 128-dim descriptors natively.
// Use Face-API.js for descriptor generation when matching is needed.
```
### Composable Pattern
```typescript
// resources/js/composables/useFaceDetection.js
import { ref, computed, onUnmounted } from 'vue'
import { useFaceDetectionStore } from '@/stores/faceDetection'
import { detectFace } from '@/services/FaceDetectionService'
import { faceRecognitionAPI } from '@/services/api' // adjust to your API client module

export function useFaceDetection() {
  const store = useFaceDetectionStore()
  const videoRef = ref<HTMLVideoElement | null>(null)
  const canvasRef = ref<HTMLCanvasElement | null>(null)

  async function startCamera() {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: {
        width: { ideal: 640 },
        height: { ideal: 480 },
        facingMode: 'user'
      }
    })
    if (videoRef.value) {
      videoRef.value.srcObject = stream
    }
    store.cameraActive = true // setup store exposes state as writable refs
  }

  function stopCamera() {
    const stream = videoRef.value?.srcObject as MediaStream
    stream?.getTracks().forEach(track => track.stop())
    store.cameraActive = false
  }

  async function captureAndVerify() {
    if (!videoRef.value) return null

    const result = await detectFace(videoRef.value)
    if (!result || result.confidence < 0.7) {
      return { success: false, error: 'No face detected or low confidence' }
    }

    // Send to backend for verification
    const response = await faceRecognitionAPI.verifyFace({
      descriptor: result.descriptor,
      confidence: result.confidence,
      liveness: result.liveness
    })
    return response
  }

  onUnmounted(() => stopCamera())

  return {
    videoRef,
    canvasRef,
    startCamera,
    stopCamera,
    captureAndVerify,
    isReady: computed(() => store.isInitialized),
    isProcessing: computed(() => store.processing)
  }
}
```
### Pinia Store Structure
```typescript
// resources/js/stores/faceDetection.ts
import { reactive, computed, toRefs } from 'vue'
import { defineStore } from 'pinia'
import type { FaceDetectionResult } from '@/types/face-recognition'

interface FaceDetectionState {
  cameraActive: boolean
  isInitialized: boolean
  processing: boolean
  currentDetection: FaceDetectionResult | null
  settings: {
    minConfidence: number                      // Default: 0.7
    enableLiveness: boolean                    // Default: true
    detectionMethod: 'face-api' | 'mediapipe'
  }
}

export const useFaceDetectionStore = defineStore('faceDetection', () => {
  const state = reactive<FaceDetectionState>({
    cameraActive: false,
    isInitialized: false,
    processing: false,
    currentDetection: null,
    settings: {
      minConfidence: 0.7,
      enableLiveness: true,
      detectionMethod: 'face-api'
    }
  })

  // Getters
  const canCapture = computed(() =>
    state.cameraActive && state.isInitialized && !state.processing
  )

  const detectionQuality = computed(() => {
    const conf = state.currentDetection?.confidence ?? 0
    if (conf >= 0.9) return 'excellent'
    if (conf >= 0.7) return 'good'
    if (conf >= 0.5) return 'fair'
    return 'poor'
  })

  return { ...toRefs(state), canCapture, detectionQuality }
})
```
## Backend Implementation

### Service Layer
```php
// app/Services/FaceRecognitionService.php
class FaceRecognitionService
{
    // Configuration constants
    const SIMILARITY_THRESHOLD = 0.6; // Minimum match score
    const MIN_CONFIDENCE = 0.7;       // Minimum detection confidence
    const QUALITY_THRESHOLD = 0.7;    // Minimum quality for registration
    const CACHE_TTL = 3600;           // Face data cache (1 hour)

    /**
     * Register a new face for an employee.
     */
    public function registerFace(int $employeeId, array $data): array
    {
        // Validate descriptor
        if (count($data['descriptor']) !== 128) {
            throw new InvalidArgumentException('Descriptor must be 128-dimensional');
        }

        if ($data['confidence'] < self::MIN_CONFIDENCE) {
            return [
                'success' => false,
                'message' => 'Detection confidence too low'
            ];
        }

        $employee = Employee::findOrFail($employeeId);

        // Store face image securely
        $imagePath = $this->storeFaceImage($employeeId, $data['image']);

        // Calculate quality score
        $quality = $this->calculateQualityScore($data);

        // Save to employee metadata
        $employee->update([
            'face_descriptor' => json_encode($data['descriptor']),
            'face_image_path' => $imagePath,
            'face_quality_score' => $quality,
            'face_registered_at' => now(),
        ]);

        // Clear per-employee cache and the cached match list,
        // so verification sees the new descriptor immediately
        Cache::forget("face_data_{$employeeId}");
        Cache::forget('registered_faces');

        return [
            'success' => true,
            'quality' => $quality,
            'message' => 'Face registered successfully'
        ];
    }

    /**
     * Verify a face against registered employees.
     */
    public function verifyFace(array $data): array
    {
        $descriptor = $data['descriptor'];
        $registeredFaces = $this->getRegisteredFaces();

        $bestMatch = null;
        $highestSimilarity = 0;

        foreach ($registeredFaces as $face) {
            $similarity = $this->cosineSimilarity($descriptor, $face['descriptor']);

            if ($similarity > $highestSimilarity && $similarity >= self::SIMILARITY_THRESHOLD) {
                $highestSimilarity = $similarity;
                $bestMatch = $face;
            }
        }

        if (!$bestMatch) {
            return [
                'success' => false,
                'message' => 'No matching face found'
            ];
        }

        return [
            'success' => true,
            'employee_id' => $bestMatch['employee_id'],
            'employee_name' => $bestMatch['employee_name'],
            'similarity' => $highestSimilarity,
            'confidence' => $data['confidence']
        ];
    }

    /**
     * Calculate cosine similarity between two descriptors.
     */
    private function cosineSimilarity(array $a, array $b): float
    {
        $dotProduct = 0;
        $normA = 0;
        $normB = 0;

        for ($i = 0; $i < 128; $i++) {
            $dotProduct += $a[$i] * $b[$i];
            $normA += $a[$i] * $a[$i];
            $normB += $b[$i] * $b[$i];
        }

        $denominator = sqrt($normA) * sqrt($normB);

        return $denominator > 0 ? $dotProduct / $denominator : 0;
    }

    /**
     * Calculate face quality score (multi-factor).
     */
    private function calculateQualityScore(array $data): float
    {
        $weights = [
            'confidence' => 0.30,
            'face_size' => 0.20,
            'pose' => 0.20,
            'lighting' => 0.15,
            'blur' => 0.15,
        ];

        $scores = [
            'confidence' => $data['confidence'] ?? 0,
            'face_size' => $this->calculateFaceSizeScore($data['boundingBox'] ?? null),
            'pose' => $data['pose_score'] ?? 0.8,
            'lighting' => $data['lighting_score'] ?? 0.8,
            'blur' => $data['blur_score'] ?? 0.8,
        ];

        $totalScore = 0;
        foreach ($weights as $factor => $weight) {
            $totalScore += ($scores[$factor] ?? 0) * $weight;
        }

        return round($totalScore, 4);
    }

    /**
     * Get all registered faces (cached).
     */
    private function getRegisteredFaces(): array
    {
        return Cache::remember('registered_faces', self::CACHE_TTL, function () {
            return Employee::whereNotNull('face_descriptor')
                ->where('status', 'active')
                ->get()
                ->map(fn ($e) => [
                    'employee_id' => $e->id,
                    'employee_name' => $e->name,
                    'descriptor' => json_decode($e->face_descriptor, true),
                ])
                ->toArray();
        });
    }
}
```
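The weighted quality score can be mirrored on the frontend so users get instant feedback before any upload. This is a hypothetical TypeScript sketch using the same weights and fallback defaults as `calculateQualityScore`; the `QualityInputs` shape is an assumption, not part of the skill's files:

```typescript
// Hypothetical frontend mirror of the backend's weighted quality score.
// Missing optional factors fall back to the same defaults the PHP
// service uses (0.8 for pose/lighting/blur, 0 for face size).
interface QualityInputs {
  confidence: number   // 0-1 detection confidence
  faceSize?: number    // 0-1 score derived from bounding-box area
  pose?: number        // 0-1
  lighting?: number    // 0-1
  blur?: number        // 0-1
}

export function qualityScore(input: QualityInputs): number {
  const scores = {
    confidence: input.confidence,
    faceSize: input.faceSize ?? 0,
    pose: input.pose ?? 0.8,
    lighting: input.lighting ?? 0.8,
    blur: input.blur ?? 0.8,
  }
  // Weights match the backend: 0.30 / 0.20 / 0.20 / 0.15 / 0.15.
  const weighted =
    scores.confidence * 0.3 +
    scores.faceSize * 0.2 +
    scores.pose * 0.2 +
    scores.lighting * 0.15 +
    scores.blur * 0.15
  // Round to 4 decimals, like the PHP round($totalScore, 4).
  return Math.round(weighted * 10000) / 10000
}
```

Keeping the two implementations in lockstep avoids the case where the client predicts "good" and the server rejects the same capture.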
### Controller
```php
// app/Http/Controllers/FaceDetectionController.php
class FaceDetectionController extends Controller
{
    public function __construct(
        private readonly FaceRecognitionService $faceService
    ) {}

    public function register(Request $request): JsonResponse
    {
        $validated = $request->validate([
            'employee_id' => 'required|exists:employees,id',
            'descriptor' => 'required|array|size:128',
            'descriptor.*' => 'numeric',
            'confidence' => 'required|numeric|min:0.7|max:1',
            'image' => 'required|string', // Base64
        ]);

        $result = $this->faceService->registerFace(
            $validated['employee_id'],
            $validated
        );

        return response()->json($result, $result['success'] ? 200 : 400);
    }

    public function verify(Request $request): JsonResponse
    {
        $validated = $request->validate([
            'descriptor' => 'required|array|size:128',
            'descriptor.*' => 'numeric',
            'confidence' => 'required|numeric|min:0|max:1',
            'liveness' => 'nullable|numeric|min:0|max:1',
        ]);

        $result = $this->faceService->verifyFace($validated);

        return response()->json($result);
    }
}
```
## TypeScript Interfaces
```typescript
// resources/js/types/face-recognition.ts
export interface FaceDetectionResult {
  confidence: number            // 0-1 detection score
  liveness: number              // 0-1 liveness score
  boundingBox: BoundingBox | null
  landmarks?: FaceLandmark[]
  descriptor?: number[]         // 128-dim array
  expressions?: FaceExpressions
}

export interface BoundingBox {
  x: number
  y: number
  width: number
  height: number
}

export interface FaceLandmark {
  x: number
  y: number
  z?: number
}

export interface FaceExpressions {
  neutral: number
  happy: number
  sad: number
  angry: number
  fearful: number
  disgusted: number
  surprised: number
}

export interface VerificationResult {
  success: boolean
  employee_id?: number          // numeric ID, matching the backend response
  employee_name?: string
  similarity?: number
  confidence?: number
  message?: string
}

export interface RegistrationResult {
  success: boolean
  quality?: number
  message: string
}

export type DetectionMethod = 'face-api' | 'mediapipe'
```
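Because most fields on `VerificationResult` are optional, callers end up sprinkling null checks. A type guard can centralize the narrowing; this is a hypothetical helper (the interface is re-declared locally so the sketch runs standalone):

```typescript
// Local re-declaration of VerificationResult so this sketch is
// self-contained; in the app it would be imported from the types file.
export interface VerificationResult {
  success: boolean
  employee_id?: number
  employee_name?: string
  similarity?: number
  confidence?: number
  message?: string
}

// Hypothetical type guard: narrows a result to the success shape,
// guaranteeing employee_id and similarity are present numbers.
export function isVerifiedMatch(
  r: VerificationResult
): r is VerificationResult & { employee_id: number; similarity: number } {
  return r.success &&
    typeof r.employee_id === "number" &&
    typeof r.similarity === "number"
}
```

After `if (isVerifiedMatch(result))`, TypeScript lets you use `result.employee_id` without a non-null assertion.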
## Liveness Detection
Basic liveness checks to prevent photo spoofing:
```typescript
// Frontend liveness detection over a short sequence of frames.
// detectBlink, detectHeadMovement, detectExpressionChange and
// analyzeTexture are project-specific helpers.
async function checkLiveness(detections: FaceDetectionResult[]): Promise<number> {
  let score = 0.5 // Base score

  // Check for blink (eye aspect ratio change)
  if (detectBlink(detections)) score += 0.15

  // Check for head movement
  if (detectHeadMovement(detections)) score += 0.15

  // Check for expression variation
  if (detectExpressionChange(detections)) score += 0.1

  // Texture analysis (photo vs. real face)
  if (analyzeTexture(detections)) score += 0.1

  return Math.min(score, 1.0)
}
```
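The blink check above is left abstract. One common way to implement it is the eye aspect ratio (EAR) over the 68-point landmark model that face-api.js provides (each eye contributes six points). The sketch below works on a per-frame EAR series derived from those landmarks; the 0.2 threshold is an assumption to tune per camera setup:

```typescript
interface Point { x: number; y: number }

const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y)

// Eye aspect ratio over one eye's six landmarks p0..p5:
// (|p1-p5| + |p2-p4|) / (2 * |p0-p3|). Open eyes score roughly
// 0.25-0.35; a closed eye drops well below that.
export function eyeAspectRatio(eye: Point[]): number {
  if (eye.length !== 6) throw new Error("Expected 6 eye landmarks")
  return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2 * dist(eye[0], eye[3]))
}

// A blink is an EAR dip below the threshold followed by recovery,
// so a static photo held up to the camera never triggers it.
export function detectBlink(earPerFrame: number[], threshold = 0.2): boolean {
  let closed = false
  for (const ear of earPerFrame) {
    if (ear < threshold) closed = true
    else if (closed) return true // eye reopened after closing
  }
  return false
}
```

Requiring the dip-and-recover pattern, rather than a single low frame, also filters out momentary landmark jitter.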
## API Routes
```php
// routes/api_face_recognition.php
Route::middleware('auth:sanctum')->prefix('face-recognition')->group(function () {
    Route::post('/register', [FaceDetectionController::class, 'register']);
    Route::post('/verify', [FaceDetectionController::class, 'verify']);
    Route::post('/update', [FaceDetectionController::class, 'update']);
    Route::delete('/delete/{employee}', [FaceDetectionController::class, 'delete']);
    Route::get('/statistics', [FaceDetectionController::class, 'statistics']);
});
```
## Quality Thresholds
| Score | Rating | Action |
|---|---|---|
| >= 0.9 | Excellent | Accept |
| 0.7 - <0.9 | Good | Accept |
| 0.5 - <0.7 | Fair | Warn user, suggest retry |
| < 0.5 | Poor | Reject, require retry |
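The table above can be encoded as a small helper mirroring the store's `detectionQuality` getter, so the tiers live in one place instead of being duplicated per component (the function names here are illustrative):

```typescript
export type QualityRating = "excellent" | "good" | "fair" | "poor"

// Maps a 0-1 quality score to the rating tiers in the table above.
// Boundaries are inclusive on the lower edge, matching the store's
// detectionQuality getter (>= 0.9 excellent, >= 0.7 good, ...).
export function rateQuality(score: number): QualityRating {
  if (score >= 0.9) return "excellent"
  if (score >= 0.7) return "good"
  if (score >= 0.5) return "fair"
  return "poor"
}

// Only "excellent" and "good" are accepted outright; "fair" warns
// and "poor" is rejected.
export function isAcceptable(score: number): boolean {
  return score >= 0.7
}
```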
## Best Practices
- Always validate descriptor length (must be exactly 128)
- Use cosine similarity for descriptor comparison (not Euclidean distance)
- Cache registered faces to improve verification speed
- Require minimum confidence of 0.7 for registration
- Store face images privately in non-public storage
- Clear cache when face data is updated or deleted
- Use transactions when updating face data with related records
- Log face operations for audit trail
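The first practice (validate descriptor length) can be enforced client-side before any API call; a hypothetical pre-flight guard, with the -1..1 range check added as an assumption based on the descriptor format above:

```typescript
// Hypothetical pre-flight check before sending a descriptor to the
// API: exactly 128 finite numbers, each within the expected -1..1
// range. Acts as a TypeScript type guard for number[].
export function validateDescriptor(d: unknown): d is number[] {
  return (
    Array.isArray(d) &&
    d.length === 128 &&
    d.every((v) => typeof v === "number" && Number.isFinite(v) && v >= -1 && v <= 1)
  )
}
```

Rejecting malformed descriptors in the browser saves a round trip; the backend still re-validates with its `size:128` rule, so this is defense in depth, not a replacement.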
## Common Issues

### Low Detection Confidence
- Ensure adequate lighting
- Face should be centered and fully visible
- Avoid extreme angles (> 30 degrees)
### Verification Failures
- Check similarity threshold (default 0.6)
- Verify descriptor format (128 floats)
- Confirm employee has registered face
### Performance
- Load models once at app initialization
- Use `tinyFaceDetector` for faster detection
- Cache registered faces (1 hour TTL)