# Atomic-agents new-app

Scaffold a new Atomic Agents project from scratch — create the directory, `pyproject.toml`, env file, first agent, and a runnable entry point. Use when the user asks to start a new atomic-agents project from scratch, says "scaffold" / "new project" / "start from zero", or runs `/atomic-agents:new-app`.

Install by cloning the repo:

```bash
git clone https://github.com/BrainBlend-AI/atomic-agents
```

Or copy just this skill into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/BrainBlend-AI/atomic-agents "$T" && mkdir -p ~/.claude/skills && cp -r "$T/claude-plugin/atomic-agents/skills/new-app" ~/.claude/skills/brainblend-ai-atomic-agents-new-app && rm -rf "$T"
```

Skill source: `claude-plugin/atomic-agents/skills/new-app/SKILL.md`

# New Atomic Agents Project
Scaffold a fresh Atomic Agents project. The result is a single-package Python project with one working agent, one schema pair, a provider-wrapped client, and a runnable `main.py`.
This skill is opinionated. Produce a complete, tested skeleton the user can run immediately.
## Phase 1 — Interrogate
Ask these questions in one message, not one-at-a-time. Skip any the user already answered (including via `$ARGUMENTS`).

- Project name — used as both directory name and package name. Default from `$ARGUMENTS` if provided. Normalize to `kebab-case` for the directory and `snake_case` for the package.
- LLM provider — OpenAI / Anthropic / Groq / Ollama / Gemini / OpenRouter / MiniMax. Default: OpenAI.
- Agent type — a rough one-liner. Shapes the default `SystemPromptGenerator` content and the starter schema pair. Defaults to a generic chat agent.
- Tooling — `uv` (default, because the repo uses uv) or `pip` + `venv`.
Do not ask about project layout, Python version, or dependency list. Pick them.
## Phase 2 — Confirm the plan
State the plan in one short block and wait for a yes. Include:
- Directory: `<project-name>/`
- Package: `<project_name>/`
- Python: `>=3.12` (Atomic Agents uses PEP 695 generics)
- Dependencies: `atomic-agents>=2.7`, `instructor[<provider-extra>]>=1.14`, `python-dotenv`, `rich`
- Dev dependencies: `pytest`, `pytest-asyncio`, `ruff`
- First agent: `<agent-type>` — uses `BasicChatInputSchema` / `BasicChatOutputSchema` unless the agent type calls for custom schemas
- Default model for the chosen provider (see `framework/references/providers.md`)
- Entry point: `main.py` with a REPL
## Phase 3 — Scaffold
Create files in this order. Verify each step before proceeding.
### Directory and package

```
<project-name>/
├── pyproject.toml
├── .env.example
├── .gitignore
├── README.md
└── <project_name>/
    ├── __init__.py
    └── main.py
```
### pyproject.toml

Use the template from `framework/references/project-structure.md`, substituting the chosen provider extra and project name.
### .env.example

Include the provider's API-key variable with a placeholder. Never the real key.
### .gitignore

Use the template from `framework/references/project-structure.md`.
### <project_name>/main.py

Produce a runnable REPL. Load `.env`, instantiate the provider client per `framework/references/providers.md`, build an agent, wire a `ChatHistory` with a seed assistant message, loop on `console.input(...)`.
When a custom agent type was requested, build custom `InputSchema` / `OutputSchema` subclasses with field `description=` populated. Otherwise use `BasicChatInputSchema` / `BasicChatOutputSchema`.
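As a concrete illustration, here is a hypothetical schema pair for a "recipe suggester" agent type. The fields are invented for the example, and it assumes `atomic_agents` exports a `BaseIOSchema` base class for custom schemas — confirm the canonical pattern in `framework/references/schemas.md` before copying it.

```python
# Hypothetical schemas for a "recipe suggester" agent; illustrative only.
# Assumes atomic_agents exports BaseIOSchema (see framework/references/schemas.md).
from pydantic import Field

from atomic_agents import BaseIOSchema


class RecipeInputSchema(BaseIOSchema):
    """Ingredients and constraints supplied by the user."""

    ingredients: list[str] = Field(..., description="Ingredients available to cook with.")
    dietary_constraints: list[str] = Field(
        default_factory=list,
        description="Constraints such as 'vegetarian' or 'gluten-free'.",
    )


class RecipeOutputSchema(BaseIOSchema):
    """A single recipe suggestion returned by the agent."""

    title: str = Field(..., description="Name of the suggested recipe.")
    steps: list[str] = Field(..., description="Ordered preparation steps.")
```

The agent is then parameterized as `AtomicAgent[RecipeInputSchema, RecipeOutputSchema](...)` instead of the basic pair.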
Always use the canonical imports:

```python
from atomic_agents import (
    AtomicAgent,
    AgentConfig,
    BasicChatInputSchema,
    BasicChatOutputSchema,
)
from atomic_agents.context import ChatHistory, SystemPromptGenerator
from instructor import Mode
```
Per-provider `AgentConfig` knobs — match the Instructor factory mode on `AgentConfig.mode`:

- OpenAI: defaults work. Omit `mode` (or set `Mode.TOOLS`).
- Anthropic: `mode=Mode.TOOLS`; include `max_tokens` in `model_api_parameters`.
- Groq / Ollama / MiniMax: `mode=Mode.JSON` (Instructor factory also uses `Mode.JSON`).
- Gemini: `assistant_role="model"` and `mode=Mode.GENAI_TOOLS` (Instructor factory uses `Mode.GENAI_TOOLS`).
- OpenRouter: `mode=Mode.TOOLS`.
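Put together, a minimal `main.py` for the OpenAI default could look like the sketch below. It is a sketch, not the canonical template: the exact `AgentConfig` fields and the `ChatHistory.add_message(...)` / `SystemPromptGenerator(background=...)` call shapes should be checked against `framework/references/providers.md`. Other providers swap in their Instructor factory, add `from instructor import Mode`, and set the knobs listed above.

```python
# <project_name>/main.py  (minimal sketch; verify call shapes against the
# framework references before shipping)
import os

import instructor
import openai
from dotenv import load_dotenv
from rich.console import Console

from atomic_agents import (
    AtomicAgent,
    AgentConfig,
    BasicChatInputSchema,
    BasicChatOutputSchema,
)
from atomic_agents.context import ChatHistory, SystemPromptGenerator

load_dotenv()  # pulls OPENAI_API_KEY (or the chosen provider's key) from .env

console = Console()

# Provider-wrapped client: OpenAI shown here. Other providers use the matching
# Instructor factory plus the AgentConfig knobs listed above.
client = instructor.from_openai(openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY")))

# Seed the history so the agent opens the conversation.
history = ChatHistory()
history.add_message(
    "assistant",
    BasicChatOutputSchema(chat_message="Hello! What can I help you with today?"),
)

# Module-level `agent` so the Phase 4 import check can find it.
agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
    AgentConfig(
        client=client,
        model="gpt-5-mini",  # default model for the chosen provider
        history=history,
        system_prompt_generator=SystemPromptGenerator(
            background=["You are a helpful, concise assistant."],
        ),
    )
)


def main() -> None:
    console.print("[bold]Agent ready. Type 'exit' to quit.[/bold]")
    while True:
        user_text = console.input("[bold blue]You:[/bold blue] ")
        if user_text.strip().lower() in {"exit", "quit"}:
            break
        response = agent.run(BasicChatInputSchema(chat_message=user_text))
        console.print(f"[bold green]Agent:[/bold green] {response.chat_message}")


if __name__ == "__main__":
    main()
```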
### README.md

Short. Include: what the project is, how to install (`uv sync` or `pip install -e .[dev]`), how to set the API key (`cp .env.example .env` and edit), how to run (`uv run python -m <project_name>.main` or equivalent).
## Phase 4 — Install and smoke-test
Execute the install step:

- uv: `uv sync`
- pip: `python -m venv .venv && .venv/bin/pip install -e ".[dev]"` (Windows: `.venv\Scripts\pip`).
Verify imports without a live API key:

```bash
uv run python -c "from <project_name>.main import agent; print('ok')"
```
If that works, the scaffold is sound. Tell the user to drop their key into `.env` and run the REPL.
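Optionally, the same check can be captured as a tiny pytest so `uv run pytest` keeps guarding the scaffold. This `tests/test_smoke.py` is a suggestion, not part of the templated layout above:

```python
# tests/test_smoke.py  (optional; mirrors the manual import check)
def test_agent_is_importable():
    # Importing main builds the client and agent but makes no API calls,
    # so this passes without a live key.
    from <project_name>.main import agent

    assert agent is not None
```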
## Phase 5 — Hand off
After scaffolding, tell the user:

- How to set their key (`cp .env.example .env`).
- How to run (`uv run python -m <project_name>.main`).
- Next steps, picked from:
  - Replace the starter schemas with domain-specific ones — see `framework/references/schemas.md`.
  - Add a tool — see `framework/references/tools.md`.
  - Add a context provider — see `framework/references/context-providers.md`.
  - Split into multiple agents — see `framework/references/orchestration.md`.
- A pointer to `framework` (auto-triggered) and `review` (auto-triggered before commit).
## Constraints
- Never commit `.env`. Only `.env.example`.
- Never install anything globally. Use the project venv.
- Never pick an old model. Default to current generation: OpenAI `gpt-5-mini`, Anthropic `claude-haiku-4-5`, Groq `llama-3.3-70b-versatile`, Ollama `llama3.1`, Gemini `gemini-2.5-flash`.
- Never hand-roll what `framework/references/project-structure.md` already templates.