# Agentic-brownfield-coding standalone

## Install

Source · clone the upstream repo:

```sh
git clone https://github.com/ralfstrobel/agentic-brownfield-coding
```

Claude Code · install into `~/.claude/skills/`:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/ralfstrobel/agentic-brownfield-coding "$T" && mkdir -p ~/.claude/skills && cp -r "$T/claude-plugins/abc-init/skills/standalone" ~/.claude/skills/ralfstrobel-agentic-brownfield-coding-standalone && rm -rf "$T"
```

Manifest: `claude-plugins/abc-init/skills/standalone/SKILL.md`

## Source content

# Claude Code Standalone Project Scaffolding

Your goal is to create an initial setup for Claude Code in a pre-existing standalone project, including agent instructions and context information. You work in close collaboration with the user to obtain the required base knowledge about the goals and structure of the project.

Additional user arguments: $ARGUMENTS

Language hint: Always create all generated document content in English, while continuing to speak to the user in the language of their choice.

## Agent Content Principles

When generating content for the `.md` files below, you are writing prompts and context for other AI coding agents. Follow these principles to optimally tailor your instructions to their needs:

- **Concise** — Minimize token usage. Prefer keywords and terse bullet points over prose.
- **Structured** — Use compact Markdown to delineate connected aspects.
- **Actionable** — Generate concrete operational directives, not abstract guidelines. Avoid aspirational quality statements, general engineering practices, and blanket prohibitions.
- **Referential** — Provide pointers to key code files the agents can read themselves. Do not describe how code works in agent instructions, as such duplication leads to drift.
- **Scoped** — Context is hierarchical. The CLAUDE.md must only contain core project identity and semantics. Rules and agent instructions progressively disclose domain- and task-specific knowledge.

## Workflow

1. Begin execution by creating a formal task list for progress tracking using the `TaskCreate` tool. Create a task for each of the following phases (##) and sub-phases (###). Do not duplicate the contents in the description; only reference this skill (`abc-init:standalone`) and the workflow item.
2. Create a dependency chain between all tasks using `TaskUpdate`, setting `addBlockedBy` to the predecessor task.
3. Work through the `TaskList` using `TaskUpdate` to mark tasks as `in_progress` and `completed` as you go.

## Phase 1: Reconnaissance

1. Use the `Explore` agent to scan the repository and build an initial understanding of its structure:
   - Top-level directory content that hints at used technologies (e.g. `package.json`, `composer.json`, `Cargo.toml`, `go.mod`, `Makefile`, `Dockerfile`)
   - Existing documentation (e.g. `README.md`, `CONTRIBUTING.md`, or `docs/`)
2. Read any discovered documentation and technology manifest files.
3. Check for an existing `CLAUDE.md` or `.claude/` directory — if found, establish whether the user wants to amend or replace these.
4. Summarize your findings and conclusions briefly for the user and ask if they want to comment or add information.
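The surface scan in step 1 can be sketched as a plain shell pass. This is only an illustration of what the `Explore` agent looks for; the file list is an assumption taken from the examples above:

```shell
# Minimal sketch of the Phase 1 surface scan: print top-level technology
# manifests and documentation entry points that exist in the repository.
scan_repo() {
  repo="$1"
  for f in package.json composer.json Cargo.toml go.mod Makefile Dockerfile \
           README.md CONTRIBUTING.md; do
    [ -e "$repo/$f" ] && echo "$f"
  done
  [ -d "$repo/docs" ] && echo "docs/"
  return 0
}
```

In a typical Rust project, `scan_repo .` would print `Cargo.toml` and `README.md`, which step 2 then reads in full.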

## Phase 2: User Interview

Interview the user to establish the project's base details. Use `AskUserQuestion` where appropriate to keep the conversation structured. Offer pre-defined choice options if likely answers to a question are already known from context.

### Question Catalogue

1. What is the name of the project?
2. Who is the project creator and/or maintainer (company/organization)?
3. What is the overall purpose of the project (one-sentence summary)?
4. What are the main technologies used (programming language, framework, deployment...)?
5. What are key concepts or vocabulary that every developer needs to learn on their first day?
6. What are the key source directories?
7. How are automated tests organized and run?
8. Are there tools for linting or other automated code quality control?

## Phase 3: Generate Artifacts

### 3a — Claude Code Settings

1. Copy the settings template to `<project-dir>/.claude/settings.json`.
2. Copy the statusline template to `<project-dir>/.claude/statusline.sh` and make it executable (`chmod +x`).
3. Replace `{{PLACEHOLDERS}}` with answers from the user interview.
4. Inject `{{GITIGNORE-EXCLUSIONS}}` into the sandbox config, limiting write access to version-controlled files only.
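Step 4 effectively translates `.gitignore` entries into sandbox write-deny paths. A minimal sketch, assuming the exclusions are derived one path per line (`to_exclusions` is a hypothetical helper, not part of the template):

```shell
# Hypothetical helper: emit .gitignore entries as paths the sandbox
# should deny write access to. Comments and blank lines are dropped,
# leading slashes normalized away.
to_exclusions() {
  grep -v -e '^[[:space:]]*#' -e '^[[:space:]]*$' "$1" | sed 's|^/||'
}
```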

### 3b — Central CLAUDE.md

1. Copy the template to `<project-dir>/CLAUDE.md`.
2. Fill in the `{{PLACEHOLDERS}}` with answers from the user interview.
3. For placeholders that do not have corresponding answers, ask the user whether they want to provide an answer, have one generated from code exploration, or omit the section.

The content is written for AI, not humans, so verbose introductions and explanations are unnecessary. Keep this file as brief as possible to preserve tokens. Prefer keywords and enumeration over continuous text. Use clear section headers and other Markdown formatting to demarcate connected aspects.
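Filling the `{{PLACEHOLDERS}}` in steps 2–3 can be as simple as a `sed` substitution; the placeholder name used below is a hypothetical example, not necessarily a key the real template defines:

```shell
# Substitute one interview answer into a copied template and print the
# result. Note: a value containing "/" would break this simple sed form.
fill_placeholder() {
  sed "s/{{$2}}/$3/g" "$1"
}
```

For example, `fill_placeholder CLAUDE.md PROJECT-NAME Acme` would print the file with every `{{PROJECT-NAME}}` replaced by `Acme`.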

### 3c — Local Override Files

If a `.gitignore` file exists in the project root, append the following entries (if not already present):

```
/CLAUDE.local.md
/.claude/settings.local.json
```
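The "if not already present" guard can be sketched with `grep -qxF`, which matches a whole line as a fixed string:

```shell
# Append an entry to the given ignore file only when no identical line
# already exists, keeping repeated runs idempotent.
append_once() {
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# append_once .gitignore '/CLAUDE.local.md'
# append_once .gitignore '/.claude/settings.local.json'
```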

### 3d — Explorer Agent

1. Copy the template to `<project-dir>/.claude/agents/<project-slug>-explorer.md`.
2. Fill in the `{{PLACEHOLDERS}}` with known answers from the interview.
3. Use a general-purpose `Explore` agent to perform a more thorough exploration of the project's code, then add context information and instructions that help agents navigate the code structure, common conventions, and nomenclature.
4. Modify `.claude/settings.json`: add `"Agent(Explore)"` to the `permissions.deny` array (create it if it does not exist).
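Step 4 can be performed with `jq` (assumed to be available here); assigning through a missing `permissions` object creates it, and `unique` keeps the edit idempotent:

```shell
# Add "Agent(Explore)" to permissions.deny in a settings file, creating
# the permissions object and deny array if they do not exist yet.
# Writes via a temp file because jq cannot edit in place.
deny_explore() {
  jq '.permissions.deny = ((.permissions.deny // []) + ["Agent(Explore)"] | unique)' \
    "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}
```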

### 3e — Rules

1. Create a `.claude/rules/` directory in the project root.
2. For each programming language used in the project, create a code style rule from the template at `<project-dir>/.claude/rules/<language>-code-style.md`.
   - Fill in `{{PLACEHOLDERS}}` according to the aspects of the programming language.
   - Populate the style rules from linting tool configuration if discovered in Phase 1, or from conventions observed during code exploration.
3. For each testing framework used in the project, create a testing rule from the template at `<project-dir>/.claude/rules/testing.md`.
   - Determine a glob pattern matching only existing test files (e.g. `**/*.test.ts`, `**/*Test.php`, `**/test_*.py`, `**/*_test.go`).
   - Populate with concrete test conventions (file placement, naming, assertion style, setup patterns) derived from test files discovered in Phase 1 or the interview answers about test organization.

If any of these steps seem inapplicable to the given project, skip them and note this during the summary.
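Before writing the glob from step 3 into the testing rule, it is worth checking that it actually matches files in this repository. A rough sketch (the pruned directory and `find`-style basename pattern are assumptions):

```shell
# Count files whose basename matches a candidate test pattern, skipping
# dependency directories. A count of zero means the glob is wrong for
# this project and should be revised.
count_matches() {
  find "$1" -path '*/node_modules' -prune -o -name "$2" -print | wc -l | tr -d ' '
}
```

`count_matches . '*.test.ts'` approximates the `**/*.test.ts` glob from the examples above.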

### 3f — Post-Edit Hook

1. Copy the template to `<project-dir>/.claude/hooks/post-edit.sh`.
2. Make it executable (`chmod +x`).
3. Replace the `{{FILE-TYPE-CASES}}` placeholder with concrete dispatching logic using the linting tools (Q8) and test framework (Q7) established during the interview. Follow the pattern from the commented example in the template:
   - Match test files first (most specific glob); run the formatter/linter, then execute the test directly.
   - Match source files; run the formatter/linter, then derive and run the associated test file.
   - Fall through to `exit 0` for unrecognized file types.
4. If test file discovery requires mapping source paths to test paths, derive the convention from the directory structure observed during Phase 1 (e.g. `src/Foo.ts` → `src/__tests__/Foo.test.ts`, or `src/Foo.php` → `tests/FooTest.php`). If the project's quality tooling is unclear or not yet set up, leave the `{{FILE-TYPE-CASES}}` placeholder as-is with only the commented example; the project owner can fill it in later.
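The dispatch shape described in steps 3–4, sketched for a hypothetical TypeScript project using prettier and jest with a `src/__tests__/` convention. The tool names and path mapping are illustrative assumptions, and this version only echoes the commands it would run so the mapping is easy to verify:

```shell
# Skeleton for {{FILE-TYPE-CASES}}: most specific glob first, then source
# files with a derived test path, then a silent fall-through. In the real
# hook script the echoed commands would be executed and the last branch
# would `exit 0`.
dispatch() {
  case "$1" in
    */__tests__/*.test.ts)   # test file: format, then run it directly
      echo "prettier --write $1 && jest $1" ;;
    *.ts)                    # source file: format, then run derived test
      dir=$(dirname "$1"); base=$(basename "$1" .ts)
      echo "prettier --write $1 && jest $dir/__tests__/$base.test.ts" ;;
    *)                       # unrecognized file type: succeed silently
      return 0 ;;
  esac
}
```

For example, `dispatch src/Foo.ts` maps the source file to `src/__tests__/Foo.test.ts` before running it, matching the mapping convention from step 4.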

## Phase 4: Debriefing & Disclaimers

- Present a summary table of everything created (file path, artifact type, purpose).
- Explain that this was a long agentic workflow and that agents can be prone to skipping steps, so the user should carefully test everything that was created and compare it against this skill document.
- Explain that this is an initial scaffold, not a turnkey setup. Specifically:
  - **Sandboxing:** The sandbox config in the settings is untested. Call `/sandbox` to review. If the user is executing Claude Code in an isolated environment such as a container, sandboxing may not be required.
  - **Status Line:** The `statusline.sh` script runs automatically every time Claude Code renders a prompt. It should therefore be treated as particularly sensitive and protected from unwanted modification.
  - **Explorer Agents:** The generated agent contains only minimal structural knowledge. Developers should refine known directories and output format until it reliably returns useful context.
  - **Post-Edit Hook:** The generated hook may contain incorrect commands or test file discovery logic. Run a few manual edits and verify that linter and tests provide correct feedback. If the hook was left as a stub, implement the script logic for the project's quality tooling.
  - **Silent Git Staging:** The post-edit hook runs `git add` automatically, without confirmation, on any file created via the `Write` tool. This ensures new files are tracked by git, but it also includes them in the next commit. Ensure this behavior is acceptable for the intended workflow before operating the hook.
  - **Rules:** The generated rules contain minimal conventions. Developers should expand them with the project's implicit conventions over time.
- Promote the `/abc-init:bashless` skill, which can replace the `Bash` tool with structured MCP tools to steer the agent away from unstructured shell access.
- Promote the `/abc:build` workflow example command by explaining that agent context files alone do not guarantee reliable agent behavior and are unsuitable as enforceable constraints. They should be paired with concrete workflow protocol commands with explicit steps and deterministic hooks that enforce quality gates automatically.
- Promote the `/abc:learn` workflow command, which generates additional agent context rules to capture implicit tribal knowledge.