# Agent-almanac analyze-codebase-workflow

```shell
git clone https://github.com/pjt222/agent-almanac
```

Install skill directly:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/pjt222/agent-almanac "$T" && mkdir -p ~/.claude/skills && cp -r "$T/i18n/caveman/skills/analyze-codebase-workflow" ~/.claude/skills/pjt222-agent-almanac-analyze-codebase-workflow-512f72 && rm -rf "$T"
```

`i18n/caveman/skills/analyze-codebase-workflow/SKILL.md`

# Analyze Codebase Workflow
Survey arbitrary repository. Auto-detect data flows, file I/O, script dependencies. Produce structured annotation plan for manual refinement.
## When Use
- Onboarding onto unfamiliar codebase, need to understand data flow
- Starting putior integration in project with no PUT annotations yet
- Auditing existing project's data pipeline before documentation
- Preparing annotation plan before running `annotate-source-files`
## Inputs
- Required: Path to repository or source directory to analyze
- Optional: Specific subdirectories to focus on (default: entire repo)
- Optional: Languages to include or exclude (default: all detected)
- Optional: Detection scope: inputs only, outputs only, or both (default: both + dependencies)
## Steps

### Step 1: Survey Repository Structure
Identify source files and their languages. Understand what putior can analyze.
```r
library(putior)

# List all supported languages and their extensions
list_supported_languages()
list_supported_languages(detection_only = TRUE)  # Only languages with auto-detection

# Get supported extensions
exts <- get_supported_extensions()
```
Use file listing to understand repo composition:
```shell
# Count files by extension in the target directory
find /path/to/repo -type f | sed 's/.*\.//' | sort | uniq -c | sort -rn | head -20
```
Got: List of file extensions present in repo, with counts. Map against `get_supported_extensions()` to know coverage.
If fail: Repo has no files matching supported extensions? Putior cannot auto-detect workflows. Consider whether language is supported but files use non-standard extensions.
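To spot non-standard extensions quickly, a minimal shell sketch (the `KNOWN_EXTS` list here is an assumption; cross-check it against `get_supported_extensions()`):

```shell
# Sketch: list files whose extensions probably lack auto-detection support.
# KNOWN_EXTS is an illustrative subset, not putior's real list.
KNOWN_EXTS='R|r|py|js|ts|sh|sql'
find /path/to/repo -type f \
  | grep -Ev "\.($KNOWN_EXTS)$" \
  | grep -Ev '/(Dockerfile|Makefile)$' \
  | head -20
```

Files this prints either need renaming to a standard extension or manual annotation.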
### Step 2: Check Language Detection Coverage
For each detected language, verify auto-detection pattern availability.
```r
# Check which languages have auto-detection patterns (18 languages, 902 patterns)
detection_langs <- list_supported_languages(detection_only = TRUE)
cat("Languages with auto-detection:\n")
print(detection_langs)

# Get pattern counts for specific languages found in the repo
for (lang in c("r", "python", "javascript", "sql", "dockerfile", "makefile")) {
  patterns <- get_detection_patterns(lang)
  cat(sprintf("%s: %d input, %d output, %d dependency patterns\n",
    lang,
    length(patterns$input), length(patterns$output), length(patterns$dependency)
  ))
}
```
Got: Pattern counts printed for each language. R has 124 patterns, Python 159, JavaScript 71, etc.
If fail: Language returns no patterns? Supports manual annotations but not auto-detection. Plan to annotate those files manually.
### Step 3: Run Auto-Detection

Execute `put_auto()` on target directory to discover workflow elements.
```r
# Full auto-detection
workflow <- put_auto("./src/",
  detect_inputs = TRUE,
  detect_outputs = TRUE,
  detect_dependencies = TRUE
)

# Exclude build scripts and test helpers from scanning
workflow <- put_auto("./src/",
  detect_inputs = TRUE,
  detect_outputs = TRUE,
  detect_dependencies = TRUE,
  exclude = c("build-", "test_helper")
)

# View detected workflow nodes
print(workflow)

# Check node count
cat(sprintf("Detected %d workflow nodes\n", nrow(workflow)))
```
For large repos, analyze subdirectories incrementally:
```r
# Analyze specific subdirectories
etl_workflow <- put_auto("./src/etl/")
api_workflow <- put_auto("./src/api/")
```
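Per-subdirectory results can be stitched back together; a sketch, assuming the data frames returned by `put_auto()` share identical columns:

```r
# Sketch: combine per-directory results into one workflow table.
# Assumes both calls returned data frames with the same columns.
combined <- rbind(etl_workflow, api_workflow)
cat(put_diagram(combined, theme = "github"))
```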
Got: Data frame with columns including `id`, `label`, `input`, `output`, `source_file`. Each row represents detected workflow step.
If fail: Result empty? Source files may not contain recognizable I/O patterns. Try enabling debug logging: `workflow <- put_auto("./src/", log_level = "DEBUG")` to see which files scanned and which patterns matched.
### Step 4: Generate Initial Diagram
Visualize auto-detected workflow. Assess coverage and identify gaps.
```r
# Generate diagram from auto-detected workflow
cat(put_diagram(workflow, theme = "github"))

# With source file info for traceability
cat(put_diagram(workflow, show_source_info = TRUE))

# Save to file for review
writeLines(put_diagram(workflow, theme = "github"), "workflow-auto.md")
```
Got: Mermaid flowchart showing detected nodes connected by data flow edges. Nodes labeled with meaningful function/file names.
If fail: Diagram shows disconnected nodes? Auto-detection found I/O patterns but couldn't infer connections. Normal — connections derived from matching output filenames to input filenames. Annotation plan (next step) addresses gaps.
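Manual fix for gaps: make one step's output filename agree with the next step's input filename. Hypothetical sketch of a PUT annotation pair (annotation keys assumed; verify exact syntax in `annotate-source-files` before relying on it):

```r
# step1.R -- hypothetical annotation; writes the file step2 reads
#put label:"Extract", output:"data/raw.csv"

# step2.R -- input filename matches step1's output, so nodes link
#put label:"Transform", input:"data/raw.csv", output:"data/clean.csv"
```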
### Step 5: Produce Annotation Plan
Generate structured plan documenting what found and what needs manual annotation.
```r
# Generate annotation suggestions
put_generate("./src/", style = "single")

# For multiline style (more readable for complex workflows)
put_generate("./src/", style = "multiline")

# Copy suggestions to clipboard for easy pasting
put_generate("./src/", output = "clipboard")
```
Document plan with coverage assessment:
```markdown
## Annotation Plan

### Auto-Detected (no manual work needed)
- `src/etl/extract.R` — 3 inputs, 2 outputs detected
- `src/etl/transform.py` — 1 input, 1 output detected

### Needs Manual Annotation
- `src/api/handler.js` — Language supported but no I/O patterns matched
- `src/config/setup.sh` — Only 12 shell patterns; complex logic missed

### Not Supported
- `src/legacy/process.f90` — Fortran not in detection languages

### Recommended Connections
- extract.R output `data.csv` → transform.py input `data.csv` (auto-linked)
- transform.py output `clean.parquet` → load.R input (needs annotation)
```
Got: Clear plan separating auto-detected files from those needing manual annotation. Specific recommendations for each file.
If fail: `put_generate()` produces no output? Ensure directory path correct and contains source files in supported languages.
## Checks

- `put_auto()` executes without errors on target directory
- Detected workflow has at least one node (unless repo has no recognizable I/O)
- `put_diagram()` produces valid Mermaid code from auto-detected workflow
- `put_generate()` produces annotation suggestions for files with detected patterns
- Annotation plan document created with coverage assessment
## Pitfalls

- Scanning too broadly: Running `put_auto(".")` on repo root may include `node_modules/`, `.git/`, `venv/`, etc. Target specific source directories.
- Expecting full coverage: Auto-detection finds file I/O and library calls, not business logic. 40-60% coverage rate typical; rest needs manual annotation.
- Ignoring dependencies: `detect_dependencies = TRUE` flag catches `source()`, `import`, `require()` calls that link scripts together. Disabling it loses cross-file connections.
- Language mismatch: Files with non-standard extensions (e.g., `.R` vs `.r`, `.jsx` vs `.js`) may not be detected. Use `get_comment_prefix()` to check if extension recognized. Note extensionless files like `Dockerfile` and `Makefile` supported via exact filename matching.
- Large repos: For repos with 100+ source files, analyze by module/directory to keep diagrams readable.
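The analyze-by-module advice can be scoped with a quick per-directory file count (extension list is an assumption; adjust to what the repo actually contains):

```shell
# Count likely-supported source files per top-level src/ subdirectory,
# largest first -- analyze the biggest modules separately
for d in src/*/; do
  n=$(find "$d" -type f \( -name '*.R' -o -name '*.py' -o -name '*.js' \) | wc -l)
  printf '%4d %s\n' "$n" "$d"
done | sort -rn
```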
## See Also

- `install-putior` — prerequisite: putior must be installed first
- `annotate-source-files` — next step: add manual annotations based on plan
- `generate-workflow-diagram` — generate final diagram after annotation complete
- `configure-putior-mcp` — use MCP tools for interactive analysis sessions