Awesome-omni-skills django-perf-review
Django Performance Review workflow skill. Use this skill when the user needs a Django performance code review: when asked to "review Django performance", "find N+1 queries", "optimize Django", "check queryset performance", "database performance", or "Django ORM issues", or to audit Django code for performance problems. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/django-perf-review" ~/.claude/skills/diegosouzapw-awesome-omni-skills-django-perf-review && rm -rf "$T"
skills/django-perf-review/SKILL.md

Django Performance Review
Overview
This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/django-perf-review from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
Django Performance Review: Review Django code for validated performance issues. Research the codebase to confirm issues before reporting. Report only what you can prove.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Review Approach, Impact Categories, Priority 1: N+1 Queries (CRITICAL), Priority 2: Unbounded Querysets (CRITICAL), Priority 4: Write Loops (HIGH), Priority 5: Inefficient Patterns (LOW).
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- You need a Django performance review focused on verified ORM and query issues.
- The code likely has N+1 queries, unbounded querysets, missing indexes, or other database-driven bottlenecks.
- You want only provable performance findings, not speculative optimization advice.
- Use when the request clearly matches the imported source intent: Django performance code review. Use when asked to "review Django performance", "find N+1 queries", "optimize Django", "check queryset performance", "database performance", "Django ORM issues", or audit Django code for....
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
- Read the overview and provenance files before loading any copied upstream support files.
- Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
- Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
- Validate the result against the upstream expectations and the evidence you can point to in the copied files.
- Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
- Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.
Imported Workflow Notes
Imported: Review Approach
- Research first - Trace data flow, check for existing optimizations, verify data volume
- Validate before reporting - Pattern matching is not validation
- Zero findings is acceptable - Don't manufacture issues to appear thorough
- Severity must match impact - If you catch yourself writing "minor" in a CRITICAL finding, it's not critical. Downgrade or skip it.
Examples
Example 1: Ask for the upstream workflow directly
Use @django-perf-review to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @django-perf-review against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @django-perf-review for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @django-perf-review using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
- Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
- Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
- Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
- Treat generated examples as scaffolding; adapt them to the concrete task before execution.
- Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/django-perf-review, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Related Skills
- @devops-deploy: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @devops-troubleshooter: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @differential-review: Use when the work is better handled by that native specialization after this imported skill establishes context.
- @discord-automation: Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| copied reference notes, guides, or background material from upstream | | |
| worked examples or reusable prompts copied from upstream | | |
| upstream helper scripts that change execution or validation | | |
| routing or delegation notes that are genuinely part of the imported package | | |
| supporting assets or schemas copied from the source package | | |
Imported Reference Notes
Imported: Priority 3: Missing Indexes (HIGH)
Impact: Full table scans. Negligible on small tables, catastrophic on large ones.
Rule: Index fields used in WHERE clauses on large tables
```python
# PROBLEM: Filtering on unindexed field
# User.objects.filter(email=email)  # full scan if no index
class User(models.Model):
    email = models.EmailField()  # ← no db_index

# SOLUTION: Add index
class User(models.Model):
    email = models.EmailField(db_index=True)
```
Rule: Index fields used in ORDER BY on large tables
```python
# PROBLEM: Sorting requires full scan without index
Order.objects.order_by('-created')

# SOLUTION: Index the sort field
class Order(models.Model):
    created = models.DateTimeField(db_index=True)
```
Rule: Use composite indexes for common query patterns
```python
class Order(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    status = models.CharField(max_length=20)
    created = models.DateTimeField()

    class Meta:
        indexes = [
            models.Index(fields=['user', 'status']),      # for filter(user=x, status=y)
            models.Index(fields=['status', '-created']),  # for filter(status=x).order_by('-created')
        ]
```
Validation Checklist for Missing Indexes
- Table has 10k+ rows
- Field is used in filter() or order_by() on hot path
- Checked model - no db_index=True or Meta.indexes entry
- Not a foreign key (already indexed automatically)
Imported: Impact Categories
Issues are organized by impact. Focus on CRITICAL and HIGH - these cause real problems at scale.
| Priority | Category | Impact |
|---|---|---|
| 1 | N+1 Queries | CRITICAL - Multiplies with data, causes timeouts |
| 2 | Unbounded Querysets | CRITICAL - Memory exhaustion, OOM kills |
| 3 | Missing Indexes | HIGH - Full table scans on large tables |
| 4 | Write Loops | HIGH - Lock contention, slow requests |
| 5 | Inefficient Patterns | LOW - Rarely worth reporting |
Imported: Priority 1: N+1 Queries (CRITICAL)
Impact: Each N+1 adds O(n) database round trips. 100 rows = 100 extra queries. 10,000 rows = timeout.
Rule: Prefetch related data accessed in loops
Validate by tracing: View → Queryset → Template/Serializer → Loop access
```python
# PROBLEM: N+1 - each iteration queries profile
def user_list(request):
    users = User.objects.all()
    return render(request, 'users.html', {'users': users})

# Template:
# {% for user in users %}
#   {{ user.profile.bio }}  ← triggers query per user
# {% endfor %}

# SOLUTION: Prefetch in view
def user_list(request):
    users = User.objects.select_related('profile')
    return render(request, 'users.html', {'users': users})
```
Rule: Prefetch in serializers, not just views
DRF serializers accessing related fields cause N+1 if queryset isn't optimized.
```python
# PROBLEM: SerializerMethodField queries per object
class UserSerializer(serializers.ModelSerializer):
    order_count = serializers.SerializerMethodField()

    def get_order_count(self, obj):
        return obj.orders.count()  # ← query per user

# SOLUTION: Annotate in viewset, access in serializer
class UserViewSet(viewsets.ModelViewSet):
    def get_queryset(self):
        return User.objects.annotate(order_count=Count('orders'))

class UserSerializer(serializers.ModelSerializer):
    order_count = serializers.IntegerField(read_only=True)
```
Rule: Model properties that query are dangerous in loops
```python
# PROBLEM: Property triggers query when accessed
class User(models.Model):
    @property
    def recent_orders(self):
        return self.orders.filter(created__gte=last_week)[:5]

# Used in a template loop = N+1
# SOLUTION: Use Prefetch with a custom queryset, or annotate
```
Validation Checklist for N+1
- Traced data flow from view to template/serializer
- Confirmed related field is accessed inside a loop
- Searched codebase for existing select_related/prefetch_related
- Verified table has significant row count (1000+)
- Confirmed this is a hot path (not admin, not rare action)
Imported: Priority 2: Unbounded Querysets (CRITICAL)
Impact: Loading entire tables exhausts memory. Large tables cause OOM kills and worker restarts.
Rule: Always paginate list endpoints
```python
# PROBLEM: No pagination - loads all rows
class UserListView(ListView):
    model = User
    template_name = 'users.html'

# SOLUTION: Add pagination
class UserListView(ListView):
    model = User
    template_name = 'users.html'
    paginate_by = 25
```
Rule: Use iterator() for large batch processing
```python
# PROBLEM: Loads all objects into memory at once
for user in User.objects.all():
    process(user)

# SOLUTION: Stream with iterator()
for user in User.objects.iterator(chunk_size=1000):
    process(user)
```
Rule: Never call list() on unbounded querysets
```python
# PROBLEM: Forces full evaluation into memory
all_users = list(User.objects.all())

# SOLUTION: Keep as queryset, slice if needed
users = User.objects.all()[:100]
```
Validation Checklist for Unbounded Querysets
- Table is large (10k+ rows) or will grow unbounded
- No pagination class, paginate_by, or slicing
- This runs on user-facing request (not background job with chunking)
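The pagination examples above cover Django's `ListView`; for DRF APIs, pagination is usually enabled once in settings. A hedged sketch, assuming DRF is in use; the page size is illustrative:

```python
# settings.py fragment: project-wide default pagination for DRF viewsets.
REST_FRAMEWORK = {
    "DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.PageNumberPagination",
    "PAGE_SIZE": 25,
}
```

Individual viewsets can still override this with a `pagination_class` attribute when one endpoint needs a different size.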
Imported: Priority 4: Write Loops (HIGH)
Impact: N database writes instead of 1. Lock contention. Slow requests.
Rule: Use bulk_create instead of create() in loops
```python
# PROBLEM: N inserts, N round trips
for item in items:
    Model.objects.create(name=item['name'])

# SOLUTION: Single bulk insert
Model.objects.bulk_create([
    Model(name=item['name']) for item in items
])
```
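Note that `bulk_create` still builds the full object list in memory. When the source is a large stream, combine `bulk_create`'s own `batch_size` argument with a chunking helper. `chunked` below is a hypothetical helper, not a Django API:

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

# Usage sketch (Model and rows are placeholders):
# for batch in chunked(rows, 1000):
#     Model.objects.bulk_create(
#         [Model(name=r["name"]) for r in batch], batch_size=1000
#     )
```

This keeps both Python memory and the size of each INSERT statement bounded.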
Rule: Use update() or bulk_update instead of save() in loops
```python
# PROBLEM: N updates
for obj in queryset:
    obj.status = 'done'
    obj.save()

# SOLUTION A: Single UPDATE statement (same value for all)
queryset.update(status='done')

# SOLUTION B: bulk_update (different values)
for obj in objects:
    obj.status = compute_status(obj)
Model.objects.bulk_update(objects, ['status'], batch_size=500)
```
Rule: Use delete() on queryset, not in loops
```python
# PROBLEM: N deletes
for obj in queryset:
    obj.delete()

# SOLUTION: Single DELETE
queryset.delete()
```
Validation Checklist for Write Loops
- Loop iterates over 100+ items (or unbounded)
- Each iteration calls create(), save(), or delete()
- This runs on user-facing request (not one-time migration script)
Imported: Priority 5: Inefficient Patterns (LOW)
Rarely worth reporting. Include only as minor notes if you're already reporting real issues.
Pattern: count() vs exists()
```python
# Slightly suboptimal
if queryset.count() > 0:
    do_thing()

# Marginally better
if queryset.exists():
    do_thing()
```
Usually skip - difference is <1ms in most cases.
Pattern: len(queryset) vs count()
```python
# Fetches all rows to count
if len(queryset) > 0:  # bad if queryset not yet evaluated
    ...

# Single COUNT query
if queryset.count() > 0:
    ...
```
Only flag if queryset is large and not already evaluated.
Pattern: get() in small loops
```python
# N queries, but if N is small (< 20), often fine
for id in ids:
    obj = Model.objects.get(id=id)
```
Only flag if loop is large or this is in a very hot path.
Imported: Validation Requirements
Before reporting ANY issue:
- Trace the data flow - Follow queryset from creation to consumption
- Search for existing optimizations - Grep for select_related, prefetch_related, pagination
- Verify data volume - Check if table is actually large
- Confirm hot path - Trace call sites, verify this runs frequently
- Rule out mitigations - Check for caching, rate limiting
If you cannot validate all steps, do not report.
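The "search for existing optimizations" step can be scripted instead of eyeballed. A minimal sketch; the marker patterns are assumptions, adapt them to the project:

```python
import re
from pathlib import Path

# Markers that usually indicate an N+1 or pagination mitigation already exists.
OPTIMIZATION_MARKERS = re.compile(
    r"select_related|prefetch_related|paginate_by|Prefetch\("
)

def find_optimizations(root):
    """Return (path, line number, line) for each optimization marker under root."""
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if OPTIMIZATION_MARKERS.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

An empty result for the models and views in question is the evidence the validation checklist asks for before reporting a missing prefetch.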
Imported: Output Format
#### Imported: Django Performance Review: [File/Component Name]

### Summary

Validated issues: X (Y Critical, Z High)

### Findings

#### [PERF-001] N+1 Query in UserListView (CRITICAL)

**Location:** `views.py:45`

**Issue:** Related field `profile` accessed in template loop without prefetch.

**Validation:**
- Traced: UserListView → users queryset → user_list.html → `{{ user.profile.bio }}` in loop
- Searched codebase: no select_related('profile') found
- User table: 50k+ rows (verified in admin)
- Hot path: linked from homepage navigation

**Evidence:**
```python
def get_queryset(self):
    return User.objects.filter(active=True)  # no select_related
```

**Fix:**
```python
def get_queryset(self):
    return User.objects.filter(active=True).select_related('profile')
```

If no issues found: "No performance issues identified after reviewing [files] and validating [what you checked]."

**Before submitting, sanity check each finding:**
- Does the severity match the actual impact? ("Minor inefficiency" ≠ CRITICAL)
- Is this a real performance issue or just a style preference?
- Would fixing this measurably improve performance?

If the answer to any is "no" - remove the finding.

---

#### Imported: What NOT to Report

- Test files
- Admin-only views
- Management commands
- Migration files
- One-time scripts
- Code behind disabled feature flags
- Tables with <1000 rows that won't grow
- Patterns in cold paths (rarely executed code)
- Micro-optimizations (exists vs count, only/defer without evidence)

### False Positives to Avoid

**Queryset variable assignment is not an issue:**
```python
# This is FINE - no performance difference
projects_qs = Project.objects.filter(org=org)
projects = list(projects_qs)

# vs this - identical performance
projects = list(Project.objects.filter(org=org))
```
Querysets are lazy. Assigning to a variable doesn't execute anything.
**Single query patterns are not N+1:**
```python
# This is ONE query, not N+1
projects = list(Project.objects.filter(org=org))
```
N+1 requires a loop that triggers additional queries. A single `list()` call is fine.
**Missing select_related on single object fetch is not N+1:**
```python
# This is 2 queries, not N+1 - report as LOW at most
state = AutofixState.objects.filter(pr_id=pr_id).first()
project_id = state.request.project_id  # second query
```
N+1 requires a loop. A single object doing 2 queries instead of 1 can be reported as LOW if relevant, but never as CRITICAL/HIGH.
**Style preferences are not performance issues:** If your only suggestion is "combine these two lines" or "rename this variable", that's style, not performance. Don't report it.
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.