
Vulnerability Disclosure

install
source · Clone the upstream repo
git clone https://github.com/Intense-Visions/harness-engineering
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/Intense-Visions/harness-engineering "$T" && mkdir -p ~/.claude/skills && cp -r "$T/agents/skills/claude-code/security-vulnerability-disclosure" ~/.claude/skills/intense-visions-harness-engineering-security-vulnerability-disclosure && rm -rf "$T"
manifest: agents/skills/claude-code/security-vulnerability-disclosure/SKILL.md
source content

Vulnerability Disclosure

A vulnerability without a disclosure process is a vulnerability that gets sold to exploit brokers, dropped as a zero-day, or posted on Twitter. Coordinated disclosure turns discovered vulnerabilities into patches instead of breaches.

When to Use

  • Setting up a vulnerability disclosure program for your organization or product
  • Receiving a vulnerability report from an external researcher and deciding how to respond
  • Discovering a vulnerability in a third-party product and deciding how to report it
  • Writing a security advisory for a vulnerability in your product
  • Requesting a CVE identifier and coordinating public disclosure
  • Evaluating whether to establish a bug bounty program

Threat Context

The absence of a clear disclosure process creates perverse incentives that make vulnerabilities more dangerous, not less:

  • The 2017 Shadow Brokers leak: NSA-developed exploits (EternalBlue, DoublePulsar) were leaked publicly without coordination with Microsoft. Microsoft had patched EternalBlue (MS17-010) one month before the leak -- likely after being notified by NSA -- but many organizations had not applied the patch. The uncoordinated disclosure of the full exploit toolkit enabled WannaCry and NotPetya, two of the most destructive cyberattacks in history, causing billions of dollars in damage. Coordinated disclosure would have provided more time for patching before exploit availability.
  • Google Project Zero's 90-day policy: Google's vulnerability research team notifies vendors and gives them 90 days to release a patch. After 90 days, the vulnerability is disclosed publicly regardless of patch status. This policy has been controversial but effective -- it creates a hard deadline that prevents vendors from ignoring or indefinitely delaying fixes. Since its inception, most major vendors have improved their patch timelines.
  • Researchers prosecuted under CFAA: Security researchers have faced legal threats and prosecution under the Computer Fraud and Abuse Act for good-faith vulnerability research. In 2022, the US Department of Justice updated its CFAA policy to state that good-faith security research should not be prosecuted. Organizations that threaten researchers with legal action create a chilling effect that suppresses vulnerability reports, leaving vulnerabilities unfixed and users at risk.
  • Zerodium and the exploit market: Zerodium and similar brokers pay up to $2.5 million for iOS zero-days and $1 million for Chrome zero-days. When vendors do not have disclosure programs, or when they respond to reports with indifference or hostility, researchers have a financial incentive to sell to exploit brokers instead of reporting to the vendor. The vulnerability remains unpatched while the broker sells it to government clients.

Instructions

  1. Establish a vulnerability disclosure program. Every organization that produces software should have a clear, public process for receiving vulnerability reports:

    • security.txt (RFC 9116): Place a /.well-known/security.txt file on your web domain. At minimum, include: Contact: (security email or web form URL), Expires: (date when the file should be considered stale), and Preferred-Languages:. Optionally add Encryption: (PGP key for encrypted reports) and Policy: (link to your full disclosure policy). This standardized file allows researchers to find your reporting channel quickly.
    • security@ email: Establish security@yourdomain.com as the standard contact point. Monitor it 24/7 or within a defined SLA. Configure PGP/GPG encryption so researchers can submit reports confidentially.
    • Disclosure policy: Publish a clear policy that states: what is in scope (your products, your infrastructure, third-party components), what is out of scope (social engineering of employees, physical attacks, denial of service testing), safe harbor language (you will not pursue legal action against researchers acting in good faith), expected response timeline (acknowledge within 48 hours, initial assessment within 7 days, target fix timeline within 90 days), and recognition (how you credit researchers).
    • Response SLA: Acknowledge receipt within 24-48 hours. Provide an initial severity assessment within 7 business days. Provide a timeline for the fix. Communicate status updates at least every 2 weeks for high-severity issues. Coordinate public disclosure with the researcher.
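Assembled from the fields above, a minimal security.txt per RFC 9116 might look like the following (the domain, expiry date, and URLs are placeholders):

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Policy: https://example.com/security-policy
```

Serve it over HTTPS at /.well-known/security.txt, and refresh the Expires date before it lapses so researchers can trust the contact information is current.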
  2. Follow the coordinated disclosure process. When you discover a vulnerability in someone else's product, or when a researcher reports one to you:

    • Researcher reports to vendor: The researcher contacts the vendor through the designated channel with a detailed report including: description of the vulnerability, affected versions, reproduction steps, proof of concept (non-destructive), and suggested severity (CVSS score if possible).
    • Vendor acknowledges and assesses: The vendor confirms receipt, assigns a tracking number, and evaluates severity. The vendor communicates the assessment and expected fix timeline to the researcher.
    • Vendor develops and tests the fix: The vendor develops a patch, tests it across affected versions, and prepares the advisory. The researcher may verify the fix if the vendor provides a pre-release.
    • Coordinated public disclosure: Vendor and researcher agree on a disclosure date. The vendor releases the patch and advisory simultaneously. The researcher publishes their write-up. Users can apply the patch immediately upon learning of the vulnerability.
    • Standard disclosure timeline: 90 days is the de facto industry standard (per Google Project Zero). Some organizations use 45 days for critical vulnerabilities or 120 days for complex issues. The timeline should balance giving the vendor time to fix the issue with protecting users from indefinite exposure.
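The acknowledgment, assessment, and 90-day milestones above can be sketched as a small helper. This is a Python illustration; the exact offsets are policy choices, not fixed requirements:

```python
from datetime import date, timedelta

def disclosure_milestones(reported: date, deadline_days: int = 90) -> dict:
    """Compute coordinated-disclosure milestone dates from the report date.

    The 48-hour acknowledgment, 7-day initial assessment, and 90-day
    public-disclosure deadline mirror the timelines described above;
    adjust deadline_days (e.g. 45 for critical, 120 for complex issues)
    to match your own policy.
    """
    return {
        "acknowledge_by": reported + timedelta(days=2),
        "assess_by": reported + timedelta(days=7),
        "disclose_by": reported + timedelta(days=deadline_days),
    }

milestones = disclosure_milestones(date(2024, 1, 15))
print(milestones["disclose_by"])  # 90 days after the report date
```

Tracking these dates per report (and surfacing the ones at risk of slipping) is a simple way for a PSIRT to keep its SLA honest.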
  3. Request and manage CVE identifiers. CVEs (Common Vulnerabilities and Exposures) provide a standardized way to identify vulnerabilities:

    • When to request a CVE: For any vulnerability in a product or library used by others. Internal-only systems do not need CVEs. CVEs help users track vulnerabilities across vendors, tools, and advisories.
    • CVE Numbering Authorities (CNAs): If your organization is a CNA (major software vendors, open-source projects, cloud providers), you can assign CVE IDs directly. Otherwise, request a CVE from MITRE (cveform.mitre.org) or through a CNA that covers your domain.
    • CVE record contents: CVE ID, affected product and versions, vulnerability type (CWE ID), description, severity (CVSS score), references (advisory URL, patch URL, write-up URL), and credit to the reporter.
    • NVD publication: After CVE assignment, the record is published to the National Vulnerability Database (NVD), where it is enriched with CVSS scores and CPE (Common Platform Enumeration) identifiers. This makes the vulnerability searchable and trackable by vulnerability management tools.
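As a rough illustration, the record contents listed above can be collected into a simple structure. This is a simplified internal representation, not the official CVE JSON schema that a CNA submission must follow; all concrete values are placeholders:

```python
import json

# Simplified sketch of the CVE record contents listed above -- not the
# official CVE JSON record format. Every concrete value here is an
# illustrative placeholder; validate against the real schema before
# submitting to MITRE or a CNA.
cve_record = {
    "cve_id": "CVE-2024-XXXXX",            # assigned by MITRE or a CNA
    "affected": {"product": "widget-parser", "versions": "2.0.0 through 2.4.3"},
    "cwe": "CWE-502",                      # vulnerability type: insecure deserialization
    "description": "Insecure deserialization allows remote code execution.",
    "cvss_v3_1": {
        "score": 9.8,
        "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    },
    "references": [
        "https://example.com/advisory",     # advisory URL
        "https://example.com/patch",        # patch URL
    ],
    "credit": "Reported by a hypothetical external researcher",
}
print(json.dumps(cve_record, indent=2))
```

Keeping these fields together from triage onward makes the eventual CNA submission and advisory largely a formatting exercise.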
  4. Write effective security advisories. The advisory is the primary communication to users about the vulnerability and its fix:

    • Title: Short, specific. "Remote Code Execution in Widget Parser" not "Security Update."
    • CVE ID: Include the assigned CVE identifier.
    • Severity: CVSS v3.1 base score and vector string. Provide a qualitative rating (Critical, High, Medium, Low) alongside the numeric score.
    • Affected versions: Explicit version ranges. "Versions 2.0.0 through 2.4.3 are affected. Version 2.4.4 contains the fix."
    • Description: What the vulnerability is (buffer overflow, SQL injection, insecure deserialization), what an attacker can achieve (RCE, data disclosure, DoS), and what conditions are required for exploitation (authentication required, network-accessible, default configuration).
    • Remediation: Specific upgrade instructions. If a patch is not available, provide a workaround (configuration change, WAF rule, feature disable) with a timeline for the permanent fix.
    • Credit: Name the researcher who reported the vulnerability (with their consent). This incentivizes future reports and demonstrates good-faith participation in the security community.
    • Timeline: Optionally include the disclosure timeline (reported date, fix date, advisory date) for transparency.
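The qualitative ratings map onto CVSS v3.1 base scores in fixed bands defined by the FIRST CVSS v3.1 specification; a small helper makes the mapping explicit:

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating,
    per the severity bands in the FIRST CVSS v3.1 specification."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"       # 9.0 - 10.0

print(cvss_rating(9.8))  # Critical
```

Including both the numeric score and this qualitative label in the advisory lets readers triage at a glance while still having the precise vector for their own scoring.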
  5. Consider a bug bounty program for mature organizations. Bug bounties incentivize external security research with financial rewards:

    • Prerequisites: Before launching a bug bounty, ensure you have: a functioning vulnerability disclosure program, the ability to receive, triage, and fix reports within the SLA, and engineering capacity to handle the volume of reports.
    • Scope: Define what is in scope (production systems, staging environments, specific applications) and what is out of scope (denial of service, social engineering, third-party services you do not control).
    • Reward structure: Base rewards on severity (CVSS score) and impact. Critical RCE: $5,000-$50,000+. High: $1,000-$10,000. Medium: $500-$2,000. Low: $100-$500. These ranges vary dramatically by organization and industry.
    • Platforms: HackerOne, Bugcrowd, and Intigriti provide managed bug bounty platforms with triage support, researcher vetting, and legal frameworks. These reduce operational overhead compared to running a program independently.
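A sketch of how the example reward tiers above might be encoded; these figures are only the illustrative ranges given earlier, and real programs tune them to their own risk profile and budget:

```python
# Illustrative reward tiers matching the example ranges above. Actual
# bounty amounts vary dramatically by organization and industry.
REWARD_RANGES_USD = {
    "Critical": (5_000, 50_000),
    "High": (1_000, 10_000),
    "Medium": (500, 2_000),
    "Low": (100, 500),
}

def reward_range(severity: str) -> tuple[int, int]:
    """Return the (minimum, maximum) payout for a qualitative severity."""
    return REWARD_RANGES_USD[severity]

lo, hi = reward_range("High")
print(f"${lo:,}-${hi:,}")  # $1,000-$10,000
```

Publishing the tier table in the program rules, and paying at the top of a band for exceptional reports, keeps payouts predictable for researchers while leaving room to reward impact.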

Details

  • The economics of disclosure: A researcher who discovers a zero-day has several options: report to the vendor (free or bug bounty reward), sell to an exploit broker ($10,000 to $2,500,000 depending on the target), use it for penetration testing engagements, or publish it for reputation. The vendor's response determines which option is most attractive. A vendor with no disclosure program, no acknowledgment, and no bounty makes the broker option more attractive. A vendor with a responsive program, fair bounties, and public credit makes responsible disclosure the rational choice.

  • PSIRT (Product Security Incident Response Team) operations: Large organizations establish a PSIRT to manage the vulnerability lifecycle: receive reports, triage, coordinate with engineering for fixes, manage CVE assignment, write advisories, coordinate disclosure timing, and track metrics (time to acknowledgment, time to fix, time to advisory). The PSIRT is the organization's interface with the external security research community. ISO/IEC 29147 (Vulnerability Disclosure) and ISO/IEC 30111 (Vulnerability Handling Processes) provide frameworks for PSIRT operations.

  • Legal landscape for security research: The Computer Fraud and Abuse Act (CFAA) in the US, the Computer Misuse Act in the UK, and similar laws in other jurisdictions can criminalize unauthorized access, even when performed with good intentions. Safe harbor language in disclosure policies provides a commitment not to pursue legal action. The DOJ's 2022 CFAA policy update states that good-faith security research should not be charged, but this is prosecutorial guidance, not law. The EU's proposed Cyber Resilience Act includes provisions for coordinated vulnerability disclosure. Organizations should consult legal counsel when drafting disclosure policies and safe harbor language.

  • Disclosure timeline negotiation: The 90-day standard is not rigid. If the vendor demonstrates active progress (regular status updates, a confirmed fix date within a reasonable window), researchers typically grant extensions. If the vendor is unresponsive or denies the vulnerability, researchers may shorten the timeline. If the vulnerability is being actively exploited in the wild (a zero-day), immediate disclosure may be warranted because users need to know to apply mitigations even before a patch exists. The goal is always to minimize the total exposure of users.

Anti-Patterns

  1. No disclosure policy. Researchers find a vulnerability but have no way to report it. There is no security.txt, no security@ email, and no contact information in the product's documentation. The researcher has three options: give up, disclose publicly (creating a zero-day), or sell to a broker. None of these options result in a coordinated patch. Publish a security.txt and a disclosure policy.

  2. Threatening legal action against researchers. A researcher reports a vulnerability and the organization's legal department sends a cease-and-desist or threatens prosecution. This chills future reports -- other researchers see the threat and decide not to report. The vulnerability remains unfixed, but now the organization has also damaged its reputation in the security community. Include safe harbor language in the disclosure policy and honor it.

  3. Disclosing without a patch available. Publishing a vulnerability advisory before a fix exists, giving attackers a roadmap without giving users a defense. Coordinate disclosure timing so that the patch and the advisory are released simultaneously. If a workaround exists, publish the workaround immediately and the full details after the patch.

  4. Ignoring vulnerability reports. A researcher reports a critical vulnerability and receives no response for weeks or months. The researcher follows up; still no response. Eventually the researcher discloses publicly out of obligation to affected users, and the organization is caught unprepared with no patch. Acknowledge reports within 48 hours even if the full assessment takes longer.

  5. No CVE assignment. Fixing a vulnerability silently without assigning a CVE or publishing an advisory. Users who track vulnerabilities through CVEs and vulnerability databases do not know the fix exists and do not prioritize the update. Silent fixes leave users exposed because they do not know they need to update. Assign CVEs for all vulnerabilities in products used by others and publish advisories.