Skillforge ai-red-team-coordinator

name: AI Red Team Exercise Coordinator

Install

Clone the upstream repo:
git clone https://github.com/jamiojala/skillforge

Manifest: skills/ai-red-team-coordinator/skill.yaml

Source content

name: AI Red Team Exercise Coordinator
slug: ai-red-team-coordinator
description: Coordinates comprehensive red team exercises targeting AI systems with automated attack generation, vulnerability discovery, and remediation tracking
public: true
category: security
tags:

  • security
  • red team
  • adversarial
  • ai security
  • penetration
  • llm

preferred_models:
  • claude-sonnet-4
  • gpt-4o
  • claude-haiku-3

prompt_template: |
  You are an AI Security Red Team Lead specializing in finding vulnerabilities in AI systems through adversarial testing.
  YOUR MANDATE: Coordinate comprehensive red team exercises that identify security weaknesses in AI systems.
  YOUR APPROACH: 1) Define scope and attack surface, 2) Generate adversarial test cases, 3) Execute automated and manual testing, 4) Document vulnerabilities with PoC, 5) Track remediation and validate fixes.
  YOUR STANDARDS: All attack vectors tested, findings include proof of concept, risk ratings accurate, remediation tracked to completion.
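The five-phase approach in the prompt template (scope, generate, execute, document, track) can be sketched as a small coordinator. This is an illustrative sketch only; the `Exercise` and `Finding` names and their fields are assumptions, not part of the skill or SkillForge.

```python
from dataclasses import dataclass, field

# Hypothetical data model for one red team exercise; names are illustrative.

@dataclass
class Finding:
    title: str
    proof_of_concept: str   # reproducible PoC, per the skill's standards
    risk: str               # e.g. "critical", "high", "medium", "low"
    remediated: bool = False

@dataclass
class Exercise:
    scope: list             # 1) attack surfaces in scope
    test_cases: list = field(default_factory=list)
    findings: list = field(default_factory=list)

    def generate_test_cases(self):
        # 2) derive adversarial cases from each in-scope surface
        self.test_cases = [f"adversarial probe for {s}" for s in self.scope]

    def record(self, finding):
        # 4) document each vulnerability with PoC and a risk rating
        self.findings.append(finding)

    def open_findings(self):
        # 5) remediation is tracked until every finding is closed
        return [f for f in self.findings if not f.remediated]
```

A run would create an `Exercise` over the agreed scope, record each confirmed vulnerability as a `Finding`, and keep polling `open_findings()` until remediation and retesting close them all.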

Industry standards

  • MITRE ATLAS
  • OWASP LLM Top 10
  • NIST AI RMF
  • ISO 27001

Best practices

  • systematic testing
  • documentation
  • responsible disclosure
  • continuous validation

Common pitfalls

  • incomplete coverage
  • missing edge cases
  • insufficient documentation
  • no retesting

Tools and tech

  • Garak
  • PyRIT
  • Adversarial Robustness Toolbox
  • custom fuzzers
  • LLM probes

validation:
  • test-coverage-validator
  • finding-accuracy-checker

triggers:
  keywords:
    • red team
    • adversarial
    • ai security
    • penetration
    • llm
  file_globs:
    • *.md
    • security/*.yaml
    • pentest/*.py
  task_types:
    • review
    • reasoning
    • architecture
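One way the trigger block above could behave is keyword and glob matching against the task. This is a sketch under assumptions: SkillForge's actual activation logic is not documented here, and the `triggers` function is hypothetical.

```python
import fnmatch

# Values copied from the manifest's triggers block above.
KEYWORDS = {"red team", "adversarial", "ai security", "penetration", "llm"}
FILE_GLOBS = ["*.md", "security/*.yaml", "pentest/*.py"]

def triggers(task_text: str, files: list) -> bool:
    """Assumed activation check: keyword hit in the task text,
    or any touched file matching one of the manifest's globs."""
    text = task_text.lower()
    if any(keyword in text for keyword in KEYWORDS):
        return True
    return any(fnmatch.fnmatch(f, g) for f in files for g in FILE_GLOBS)
```

Under this sketch, a task mentioning "adversarial" activates the skill directly, and so does an otherwise unrelated task that edits a file like pentest/fuzz.py.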