Awesome-omni-skills doc-coauthoring

Doc Co-Authoring Workflow skill. Use this skill when the user needs a structured workflow for collaborative document creation. Act as an active guide, walking users through three stages: Context Gathering, Refinement & Structure, and Reader Testing. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

Install

Source · Clone the upstream repo:

git clone https://github.com/diegosouzapw/awesome-omni-skills

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/doc-coauthoring" ~/.claude/skills/diegosouzapw-awesome-omni-skills-doc-coauthoring && rm -rf "$T"

Manifest: skills/doc-coauthoring/SKILL.md

Source Content
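After copying, a quick sanity check confirms the expected files landed. This is a minimal sketch, not part of the upstream package: the assumption that the skill directory carries SKILL.md, metadata.json, and ORIGIN.md reflects this intake's layout, and the check is demonstrated here against a temporary stand-in directory rather than the real install path.

```shell
# Stand-in for the installed skill directory; the real path would be
# ~/.claude/skills/diegosouzapw-awesome-omni-skills-doc-coauthoring
SKILL_DIR=$(mktemp -d)
touch "$SKILL_DIR/SKILL.md" "$SKILL_DIR/metadata.json" "$SKILL_DIR/ORIGIN.md"

# Confirm the manifest and provenance anchors exist before using the skill
for f in SKILL.md metadata.json ORIGIN.md; do
  if [ -f "$SKILL_DIR/$f" ]; then
    echo "ok: $f"
  else
    echo "missing: $f"
  fi
done
rm -rf "$SKILL_DIR"
```

Point SKILL_DIR at the real install location to run the same check after a fresh clone.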

Doc Co-Authoring Workflow

Overview

This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/doc-coauthoring from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.
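A first-time reviewer can surface the provenance fields from metadata.json before opening anything else. The sketch below is hedged: the field names (repository, branch, commit, imported_path) are assumptions about the intake schema, not a documented contract, and the file contents here are a hypothetical stand-in written into a temporary directory.

```shell
cd "$(mktemp -d)"

# Hypothetical metadata.json -- these field names are assumptions,
# not a documented schema for this intake package
cat > metadata.json <<'EOF'
{
  "repository": "https://github.com/sickn33/antigravity-awesome-skills",
  "branch": "main",
  "commit": "abc1234",
  "imported_path": "plugins/antigravity-awesome-skills-claude/skills/doc-coauthoring"
}
EOF

# Print the provenance fields a reviewer should confirm first
python3 - <<'EOF'
import json

with open("metadata.json") as fh:
    meta = json.load(fh)
for key in ("repository", "branch", "commit", "imported_path"):
    print(key + ": " + str(meta.get(key, "<missing>")))
EOF
```

Run the same extraction against the real metadata.json in the skill directory and compare the printed values with ORIGIN.md.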

This skill provides a structured workflow for guiding users through collaborative document creation. Act as an active guide, walking users through three stages: Context Gathering, Refinement & Structure, and Reader Testing.

Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Stage 1: Context Gathering, Stage 2: Refinement & Structure, Stage 3: Reader Testing, Final Review, Tips for Effective Guidance, Limitations.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • User mentions writing documentation: "write a doc", "draft a proposal", "create a spec", "write up"
  • User mentions specific doc types: "PRD", "design doc", "decision doc", "RFC"
  • User seems to be starting a substantial writing task
The workflow then moves through three stages:

  • Context Gathering: User provides all relevant context while Claude asks clarifying questions
  • Refinement & Structure: Iteratively build each section through brainstorming and editing
  • Reader Testing: Test the doc with a fresh Claude (no context) to catch blind spots before others read it

Operating Table

Situation | Start here | Why it matters
First-time use | metadata.json | Confirms repository, branch, commit, and imported path before touching the copied workflow
Provenance review | ORIGIN.md | Gives reviewers a plain-language audit trail for the imported source
Workflow execution | SKILL.md | Starts with the smallest copied file that materially changes execution
Supporting context | SKILL.md | Adds the next most relevant copied source file without loading the entire package
Handoff decision | Related Skills section | Helps the operator switch to a stronger native skill when the task drifts

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
  2. Read the overview and provenance files before loading any copied upstream support files.
  3. Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
  4. Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
  5. Validate the result against the upstream expectations and the evidence you can point to in the copied files.
  6. Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
  7. Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.

Imported Workflow Notes

Imported: Stage 1: Context Gathering

Goal: Close the gap between what the user knows and what Claude knows, enabling smart guidance later.

Initial Questions

Start by asking the user for meta-context about the document:

  1. What type of document is this? (e.g., technical spec, decision doc, proposal)
  2. Who's the primary audience?
  3. What's the desired impact when someone reads this?
  4. Is there a template or specific format to follow?
  5. Any other constraints or context to know?

Inform them they can answer in shorthand or dump information however works best for them.

If user provides a template or mentions a doc type:

  • Ask if they have a template document to share
  • If they provide a link to a shared document, use the appropriate integration to fetch it
  • If they provide a file, read it

If user mentions editing an existing shared document:

  • Use the appropriate integration to read the current state
  • Check for images without alt-text
  • If images exist without alt-text, explain that when others use Claude to understand the doc, Claude won't be able to see them. Ask if they want alt-text generated. If so, request they paste each image into chat for descriptive alt-text generation.

Info Dumping

Once initial questions are answered, encourage the user to dump all the context they have. Request information such as:

  • Background on the project/problem
  • Related team discussions or shared documents
  • Why alternative solutions aren't being used
  • Organizational context (team dynamics, past incidents, politics)
  • Timeline pressures or constraints
  • Technical architecture or dependencies
  • Stakeholder concerns

Advise them not to worry about organizing it - just get it all out. Offer multiple ways to provide context:

  • Info dump stream-of-consciousness
  • Point to team channels or threads to read
  • Link to shared documents

If integrations are available (e.g., Slack, Teams, Google Drive, SharePoint, or other MCP servers), mention that these can be used to pull in context directly.

If no integrations are detected and in Claude.ai or Claude app: Suggest they can enable connectors in their Claude settings to allow pulling context from messaging apps and document storage directly.

Inform them clarifying questions will be asked once they've done their initial dump.

During context gathering:

  • If user mentions team channels or shared documents:

    • If integrations available: Inform them the content will be read now, then use the appropriate integration
    • If integrations not available: Explain lack of access. Suggest they enable connectors in Claude settings, or paste the relevant content directly.
  • If user mentions entities/projects that are unknown:

    • Ask if connected tools should be searched to learn more
    • Wait for user confirmation before searching
  • As user provides context, track what's being learned and what's still unclear

Asking clarifying questions:

When user signals they've done their initial dump (or after substantial context provided), ask clarifying questions to ensure understanding:

Generate 5-10 numbered questions based on gaps in the context.

Inform them they can use shorthand to answer (e.g., "1: yes, 2: see #channel, 3: no because backwards compat"), link to more docs, point to channels to read, or just keep info-dumping. Whatever's most efficient for them.

Exit condition: Sufficient context has been gathered when questions show understanding - when edge cases and trade-offs can be asked about without needing basics explained.

Transition: Ask if there's any more context they want to provide at this stage, or if it's time to move on to drafting the document.

If user wants to add more, let them. When ready, proceed to Stage 2.

Examples

Example 1: Ask for the upstream workflow directly

Use @doc-coauthoring to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @doc-coauthoring against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @doc-coauthoring for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @doc-coauthoring using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

  • Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
  • Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
  • Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
  • Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
  • Treat generated examples as scaffolding; adapt them to the concrete task before execution.
  • Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/doc-coauthoring, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.

Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.

Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Related Skills

  • @devops-deploy
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @devops-troubleshooter
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @differential-review
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @discord-automation
    - Use when the work is better handled by that native specialization after this imported skill establishes context.

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

Resource family | What it gives the reviewer | Example path
references | copied reference notes, guides, or background material from upstream | references/n/a
examples | worked examples or reusable prompts copied from upstream | examples/n/a
scripts | upstream helper scripts that change execution or validation | scripts/n/a
agents | routing or delegation notes that are genuinely part of the imported package | agents/n/a
assets | supporting assets or schemas copied from the source package | assets/n/a

Imported Reference Notes

Imported: Stage 2: Refinement & Structure

Goal: Build the document section by section through brainstorming, curation, and iterative refinement.

Instructions to user: Explain that the document will be built section by section. For each section:

  1. Clarifying questions will be asked about what to include
  2. 5-20 options will be brainstormed
  3. User will indicate what to keep/remove/combine
  4. The section will be drafted
  5. It will be refined through surgical edits

Start with whichever section has the most unknowns (usually the core decision/proposal), then work through the rest.

Section ordering:

If the document structure is clear: Ask which section they'd like to start with.

Suggest starting with whichever section has the most unknowns. For decision docs, that's usually the core proposal. For specs, it's typically the technical approach. Summary sections are best left for last.

If user doesn't know what sections they need: Based on the type of document and template, suggest 3-5 sections appropriate for the doc type.

Ask if this structure works, or if they want to adjust it.

Once structure is agreed:

Create the initial document structure with placeholder text for all sections.

If access to artifacts is available: Use create_file to create an artifact. This gives both Claude and the user a scaffold to work from.

Inform them that the initial structure with placeholders for all sections will be created.

Create artifact with all section headers and brief placeholder text like "[To be written]" or "[Content here]".

Provide the scaffold link and indicate it's time to fill in each section.

If no access to artifacts: Create a markdown file in the working directory. Name it appropriately (e.g., decision-doc.md, technical-spec.md).

Inform them that the initial structure with placeholders for all sections will be created.

Create file with all section headers and placeholder text.

Confirm the filename has been created and indicate it's time to fill in each section.
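The file-based scaffold step above can be sketched as a shell one-off. The filename, section list, and placeholder text are hypothetical examples to adapt to the agreed structure, and the sketch runs in a temporary directory:

```shell
cd "$(mktemp -d)"

# Hypothetical section list -- adapt to the structure agreed with the user
cat > decision-doc.md <<'EOF'
# Decision Doc: <title>

## Context
[To be written]

## Proposal
[To be written]

## Alternatives Considered
[To be written]

## Risks
[To be written]

## Summary
[To be written]
EOF

grep -c '\[To be written\]' decision-doc.md   # prints 5: one placeholder per section
```

Counting the remaining placeholders later is also a cheap way to see how many sections are still unwritten.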

For each section:

Step 1: Clarifying Questions

Announce work will begin on the [SECTION NAME] section. Ask 5-10 clarifying questions about what should be included:

Generate 5-10 specific questions based on context and section purpose.

Inform them they can answer in shorthand or just indicate what's important to cover.

Step 2: Brainstorming

For the [SECTION NAME] section, brainstorm [5-20] things that might be included, depending on the section's complexity. Look for:

  • Context shared that might have been forgotten
  • Angles or considerations not yet mentioned

Generate 5-20 numbered options based on section complexity. At the end, offer to brainstorm more if they want additional options.

Step 3: Curation

Ask which points should be kept, removed, or combined. Request brief justifications to help learn priorities for the next sections.

Provide examples:

  • "Keep 1,4,7,9"
  • "Remove 3 (duplicates 1)"
  • "Remove 6 (audience already knows this)"
  • "Combine 11 and 12"

If user gives freeform feedback (e.g., "looks good" or "I like most of it but...") instead of numbered selections, extract their preferences and proceed. Parse what they want kept/removed/changed and apply it.

Step 4: Gap Check

Based on what they've selected, ask if there's anything important missing for the [SECTION NAME] section.

Step 5: Drafting

Use str_replace to replace the placeholder text for this section with the actual drafted content.

Announce the [SECTION NAME] section will be drafted now based on what they've selected.

If using artifacts: After drafting, provide a link to the artifact.

Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.

If using a file (no artifacts): After drafting, confirm completion.

Inform them the [SECTION NAME] section has been drafted in [filename]. Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.

Key instruction for user (include when drafting the first section): Provide a note: Instead of editing the doc directly, ask them to indicate what to change. This helps learning of their style for future sections. For example: "Remove the X bullet - already covered by Y" or "Make the third paragraph more concise".
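When no artifact tooling is available, the Step 5 placeholder swap can be sketched as a file edit that mirrors a surgical str_replace: only the matched placeholder changes and the rest of the file is untouched. The filename and drafted sentence are hypothetical, and the sketch runs in a temporary directory:

```shell
cd "$(mktemp -d)"
printf '## Proposal\n[To be written]\n' > decision-doc.md

# Replace a single occurrence of the placeholder, leaving everything else intact
python3 - <<'EOF'
from pathlib import Path

doc = Path("decision-doc.md")
old = "[To be written]"
new = "We will adopt option B because it preserves backwards compatibility."
text = doc.read_text()
assert text.count(old) >= 1, "placeholder not found"
# replace(old, new, 1) mirrors a surgical str_replace-style edit
doc.write_text(text.replace(old, new, 1))
EOF

cat decision-doc.md
```

The assert guards against silently "editing" a section whose placeholder was already replaced.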

Step 6: Iterative Refinement

As user provides feedback:

  • Use str_replace to make edits (never reprint the whole doc)
  • If using artifacts: Provide link to artifact after each edit
  • If using files: Just confirm edits are complete
  • If user edits doc directly and asks to read it: mentally note the changes they made and keep them in mind for future sections (this shows their preferences)

Continue iterating until user is satisfied with the section.

Quality Checking

After 3 consecutive iterations with no substantial changes, ask if anything can be removed without losing important information.

When section is done, confirm [SECTION NAME] is complete. Ask if ready to move to the next section.

Repeat for all sections.

Near Completion

As approaching completion (80%+ of sections done), announce intention to re-read the entire document and check for:

  • Flow and consistency across sections
  • Redundancy or contradictions
  • Anything that feels like "slop" or generic filler
  • Whether every sentence carries weight

Read entire document and provide feedback.

When all sections are drafted and refined: Announce all sections are drafted. Indicate intention to review the complete document one more time.

Review for overall coherence, flow, completeness.

Provide any final suggestions.

Ask if ready to move to Reader Testing, or if they want to refine anything else.

Imported: Stage 3: Reader Testing

Goal: Test the document with a fresh Claude (no context bleed) to verify it works for readers.

Instructions to user: Explain that testing will now occur to see if the document actually works for readers. This catches blind spots - things that make sense to the authors but might confuse others.

Testing Approach

If access to sub-agents is available (e.g., in Claude Code):

Perform the testing directly without user involvement.

Step 1: Predict Reader Questions

Announce intention to predict what questions readers might ask when trying to discover this document.

Generate 5-10 questions that readers would realistically ask.

Step 2: Test with Sub-Agent

Announce that these questions will be tested with a fresh Claude instance (no context from this conversation).

For each question, invoke a sub-agent with just the document content and the question.

Summarize what Reader Claude got right/wrong for each question.

Step 3: Run Additional Checks

Announce additional checks will be performed.

Invoke sub-agent to check for ambiguity, false assumptions, contradictions.

Summarize any issues found.

Step 4: Report and Fix

If issues found: Report that Reader Claude struggled with specific issues.

List the specific issues.

Indicate intention to fix these gaps.

Loop back to refinement for problematic sections.


If no access to sub-agents (e.g., claude.ai web interface):

The user will need to do the testing manually.

Step 1: Predict Reader Questions

Ask what questions people might ask when trying to discover this document. What would they type into Claude.ai?

Generate 5-10 questions that readers would realistically ask.

Step 2: Setup Testing

Provide testing instructions:

  1. Open a fresh Claude conversation: https://claude.ai
  2. Paste or share the document content (if using a shared doc platform with connectors enabled, provide the link)
  3. Ask Reader Claude the generated questions

For each question, instruct Reader Claude to provide:

  • The answer
  • Whether anything was ambiguous or unclear
  • What knowledge/context the doc assumes is already known

Check if Reader Claude gives correct answers or misinterprets anything.
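The manual setup above can be made less error-prone by bundling each predicted question with the document into a per-question prompt file to paste into a fresh conversation. A minimal sketch; the questions, document body, and filenames are hypothetical, and everything runs in a temporary directory:

```shell
cd "$(mktemp -d)"
printf 'Example document body.\n' > doc.md

# Hypothetical predicted reader questions -- use the ones generated in Step 1
questions=(
  "What problem does this doc solve?"
  "Who is the intended audience?"
  "What context does the doc assume I already have?"
)

i=1
for q in "${questions[@]}"; do
  {
    echo "Question: $q"
    echo "Please answer using only the document below, and note anything ambiguous."
    echo "---"
    cat doc.md
  } > "reader-test-$i.txt"
  i=$((i + 1))
done

ls reader-test-*.txt
```

Each file is a self-contained prompt, so every question gets tested against the document with no context bleed between conversations.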

Step 3: Additional Checks

Also ask Reader Claude:

  • "What in this doc might be ambiguous or unclear to readers?"
  • "What knowledge or context does this doc assume readers already have?"
  • "Are there any internal contradictions or inconsistencies?"

Step 4: Iterate Based on Results

Ask what Reader Claude got wrong or struggled with. Indicate intention to fix those gaps.

Loop back to refinement for any problematic sections.


Exit Condition (Both Approaches)

When Reader Claude consistently answers questions correctly and doesn't surface new gaps or ambiguities, the doc is ready.

Imported: Final Review

When Reader Testing passes: Announce the doc has passed Reader Claude testing. Before completion:

  1. Recommend they do a final read-through themselves - they own this document and are responsible for its quality
  2. Suggest double-checking any facts, links, or technical details
  3. Ask them to verify it achieves the impact they wanted

Ask if they want one more review, or if the work is done.

If user wants final review, provide it. Otherwise: Announce document completion. Provide a few final tips:

  • Consider linking this conversation in an appendix so readers can see how the doc was developed
  • Use appendices to provide depth without bloating the main doc
  • Update the doc as feedback is received from real readers

Imported: Tips for Effective Guidance

Tone:

  • Be direct and procedural
  • Explain rationale briefly when it affects user behavior
  • Don't try to "sell" the approach - just execute it

Handling Deviations:

  • If user wants to skip a stage: Ask if they want to skip this and write freeform
  • If user seems frustrated: Acknowledge this is taking longer than expected. Suggest ways to move faster
  • Always give user agency to adjust the process

Context Management:

  • Throughout, if context is missing on something mentioned, proactively ask
  • Don't let gaps accumulate - address them as they come up

Artifact Management:

  • Use create_file for drafting full sections
  • Use str_replace for all edits
  • Provide artifact link after every change
  • Never use artifacts for brainstorming lists - that's just conversation

Quality over Speed:

  • Don't rush through stages
  • Each iteration should make meaningful improvements
  • The goal is a document that actually works for readers

Imported: Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.