# Agent-almanac: analyze-codebase-workflow

Clone the repository:

```bash
git clone https://github.com/pjt222/agent-almanac
```

Or copy just this skill into `~/.claude/skills`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/pjt222/agent-almanac "$T" && mkdir -p ~/.claude/skills && cp -r "$T/i18n/caveman-lite/skills/analyze-codebase-workflow" ~/.claude/skills/pjt222-agent-almanac-analyze-codebase-workflow && rm -rf "$T"
```

Skill source: `i18n/caveman-lite/skills/analyze-codebase-workflow/SKILL.md`

## Analyze Codebase Workflow

Survey an arbitrary repository to auto-detect data flows, file I/O, and script dependencies, then produce a structured annotation plan for manual refinement.
## When to Use
- Onboarding onto an unfamiliar codebase and need to understand data flow
- Starting putior integration in a project that has no PUT annotations yet
- Auditing an existing project's data pipeline before documentation
- Preparing an annotation plan before running `annotate-source-files`
## Inputs
- Required: Path to the repository or source directory to analyze
- Optional: Specific subdirectories to focus on (default: entire repo)
- Optional: Languages to include or exclude (default: all detected)
- Optional: Detection scope: inputs only, outputs only, or both (default: both + dependencies)
## Procedure

### Step 1: Survey Repository Structure
Identify source files and their languages to understand what putior can analyze.
```r
library(putior)

# List all supported languages and their extensions
list_supported_languages()
list_supported_languages(detection_only = TRUE)  # Only languages with auto-detection

# Get supported extensions
exts <- get_supported_extensions()
```
Use file listing to understand repo composition:
```bash
# Count files by extension in the target directory
find /path/to/repo -type f | sed 's/.*\.//' | sort | uniq -c | sort -rn | head -20
```
Got: A list of file extensions present in the repo, with counts. Map these against `get_supported_extensions()` to know coverage.
If fail: If the repo has no files matching supported extensions, putior cannot auto-detect workflows. Consider whether the language is supported but files use non-standard extensions.
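To turn the extension counts into a coverage estimate, the survey can be cross-checked against the supported extensions directly in R. A minimal sketch, assuming `get_supported_extensions()` returns a character vector of extensions; the repo path is illustrative:

```r
library(putior)

# Illustrative path; reuse the directory surveyed above
repo <- "/path/to/repo"

# Extensions actually present in the repo (lower-cased, files without an extension dropped)
files <- list.files(repo, recursive = TRUE)
exts  <- tolower(tools::file_ext(files))
exts  <- exts[nzchar(exts)]

# Assumption: get_supported_extensions() returns extensions like "R", "py" (leading dots stripped just in case)
supported <- tolower(sub("^\\.", "", get_supported_extensions()))

covered <- exts %in% supported
cat(sprintf("Files with a supported extension: %d of %d (%.0f%%)\n",
            sum(covered), length(covered), 100 * mean(covered)))

# Extensions putior will not pick up, by frequency
sort(table(exts[!covered]), decreasing = TRUE)
```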
### Step 2: Check Language Detection Coverage
For each detected language, verify auto-detection pattern availability.
```r
# Check which languages have auto-detection patterns (18 languages, 902 patterns)
detection_langs <- list_supported_languages(detection_only = TRUE)
cat("Languages with auto-detection:\n")
print(detection_langs)

# Get pattern counts for specific languages found in the repo
for (lang in c("r", "python", "javascript", "sql", "dockerfile", "makefile")) {
  patterns <- get_detection_patterns(lang)
  cat(sprintf("%s: %d input, %d output, %d dependency patterns\n",
    lang,
    length(patterns$input), length(patterns$output), length(patterns$dependency)
  ))
}
```
Got: Pattern counts printed for each language. R has 124 patterns, Python 159, JavaScript 71, etc.
If fail: If a language returns no patterns, it supports manual annotations but not auto-detection. Plan to annotate those files manually.
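The same check can be scripted against whatever languages the Step 1 survey actually turned up, flagging those that will need manual annotation. A small sketch; the language list is illustrative, and `tryCatch()` guards against names `get_detection_patterns()` does not recognize:

```r
library(putior)

# Illustrative list: replace with the languages found in your repo
repo_langs <- c("r", "python", "bash", "fortran")

for (lang in repo_langs) {
  patterns <- tryCatch(get_detection_patterns(lang), error = function(e) NULL)
  n <- if (is.null(patterns)) 0L else length(unlist(patterns))
  status <- if (n > 0) "auto-detection available" else "manual annotation only"
  cat(sprintf("%-10s %4d patterns (%s)\n", lang, n, status))
}
```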
### Step 3: Run Auto-Detection

Execute `put_auto()` on the target directory to discover workflow elements.
```r
# Full auto-detection
workflow <- put_auto("./src/",
  detect_inputs = TRUE,
  detect_outputs = TRUE,
  detect_dependencies = TRUE
)

# Exclude build scripts and test helpers from scanning
workflow <- put_auto("./src/",
  detect_inputs = TRUE,
  detect_outputs = TRUE,
  detect_dependencies = TRUE,
  exclude = c("build-", "test_helper")
)

# View detected workflow nodes
print(workflow)

# Check node count
cat(sprintf("Detected %d workflow nodes\n", nrow(workflow)))
```
For large repos, analyze subdirectories incrementally:
```r
# Analyze specific subdirectories
etl_workflow <- put_auto("./src/etl/")
api_workflow <- put_auto("./src/api/")
```
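Pieces scanned separately like this can usually be stacked back into a single workflow for one combined diagram. A minimal sketch, assuming `put_auto()` returns a plain data frame that `rbind()` can stack:

```r
# Combine per-directory results into one workflow (assumes plain data frames)
workflow <- rbind(etl_workflow, api_workflow)
cat(sprintf("Combined workflow has %d nodes\n", nrow(workflow)))
```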
Got: A data frame with columns including `id`, `label`, `input`, `output`, `source_file`. Each row represents a detected workflow step.
If fail: If the result is empty, the source files may not contain recognizable I/O patterns. Try enabling debug logging: `workflow <- put_auto("./src/", log_level = "DEBUG")` to see which files are scanned and which patterns match.
### Step 4: Generate Initial Diagram
Visualize the auto-detected workflow to assess coverage and identify gaps.
```r
# Generate diagram from auto-detected workflow
cat(put_diagram(workflow, theme = "github"))

# With source file info for traceability
cat(put_diagram(workflow, show_source_info = TRUE))

# Save to file for review
writeLines(put_diagram(workflow, theme = "github"), "workflow-auto.md")
```
Got: A Mermaid flowchart showing detected nodes connected by data flow edges. Nodes should be labeled with meaningful function/file names.
If fail: If the diagram shows disconnected nodes, the auto-detection found I/O patterns but couldn't infer connections. This is normal — connections are derived from matching output filenames to input filenames. The annotation plan (next step) will address gaps.
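To see where those gaps come from, it can help to list file names that appear only as inputs or only as outputs; an input with no producer or an output with no consumer is a candidate for manual annotation. A rough sketch, assuming the `input` and `output` columns hold comma-separated file names:

```r
# Split comma-separated file lists, dropping missing entries (the column format is an assumption)
split_files <- function(x) unique(trimws(unlist(strsplit(x[!is.na(x)], ","))))

all_inputs  <- split_files(workflow$input)
all_outputs <- split_files(workflow$output)

# Inputs no detected step produces: external data or missed producers
setdiff(all_inputs, all_outputs)

# Outputs no detected step consumes: terminal artifacts or missed consumers
setdiff(all_outputs, all_inputs)
```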
### Step 5: Produce Annotation Plan
Generate a structured plan documenting what was found and what needs manual annotation.
```r
# Generate annotation suggestions
put_generate("./src/", style = "single")

# For multiline style (more readable for complex workflows)
put_generate("./src/", style = "multiline")

# Copy suggestions to clipboard for easy pasting
put_generate("./src/", output = "clipboard")
```
Document the plan with coverage assessment:
```markdown
## Annotation Plan

### Auto-Detected (no manual work needed)

- `src/etl/extract.R` — 3 inputs, 2 outputs detected
- `src/etl/transform.py` — 1 input, 1 output detected

### Needs Manual Annotation

- `src/api/handler.js` — Language supported but no I/O patterns matched
- `src/config/setup.sh` — Only 12 shell patterns; complex logic missed

### Not Supported

- `src/legacy/process.f90` — Fortran not in detection languages

### Recommended Connections

- extract.R output `data.csv` → transform.py input `data.csv` (auto-linked)
- transform.py output `clean.parquet` → load.R input (needs annotation)
```
Got: A clear plan separating auto-detected files from those needing manual annotation, with specific recommendations for each file.
If fail: If `put_generate()` produces no output, ensure the directory path is correct and contains source files in supported languages.
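The plan skeleton can also be seeded from the detected workflow itself, so every scanned file starts in one of the buckets. A rough sketch; the output path and bucket headings mirror the template above and are not putior output:

```r
# Files that produced at least one detected node
detected_files <- unique(workflow$source_file)

plan <- c(
  "## Annotation Plan",
  "",
  "### Auto-Detected (no manual work needed)",
  sprintf("- `%s`", detected_files),
  "",
  "### Needs Manual Annotation",
  "- (fill in from the Step 1 extension survey)"
)
writeLines(plan, "annotation-plan.md")
```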
## Validation

- `put_auto()` executes without errors on the target directory
- Detected workflow has at least one node (unless repo has no recognizable I/O)
- `put_diagram()` produces valid Mermaid code from the auto-detected workflow
- `put_generate()` produces annotation suggestions for files with detected patterns
- Annotation plan document created with coverage assessment
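These checks can be bundled into a quick smoke test. A sketch along those lines, assuming `put_diagram()` returns the Mermaid source as character data containing a `flowchart` declaration; paths are illustrative:

```r
library(putior)

workflow <- put_auto("./src/")
stopifnot(nrow(workflow) >= 1)  # at least one node detected

diagram <- put_diagram(workflow, theme = "github")
stopifnot(any(grepl("flowchart", diagram, fixed = TRUE)))  # assumption: Mermaid flowchart output

writeLines(diagram, "workflow-auto.md")
cat("Validation passed:", nrow(workflow), "nodes diagrammed\n")
```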
## Pitfalls

- Scanning too broadly: Running `put_auto(".")` on a repo root may include `node_modules/`, `.git/`, `venv/`, etc. Target specific source directories (see the sketch after this list).
- Expecting full coverage: Auto-detection finds file I/O and library calls, not business logic. A 40-60% coverage rate is typical; the rest needs manual annotation.
- Ignoring dependencies: The `detect_dependencies = TRUE` flag catches `source()`, `import`, and `require()` calls that link scripts together. Disabling it loses cross-file connections.
- Language mismatch: Files with non-standard extensions (e.g., `.R` vs `.r`, `.jsx` vs `.js`) may not be detected. Use `get_comment_prefix()` to check if an extension is recognized. Note that extensionless files like `Dockerfile` and `Makefile` are supported via exact filename matching.
- Large repos: For repos with 100+ source files, analyze by module/directory to keep diagrams readable.
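If scanning from the repo root is unavoidable, the `exclude` argument shown in Step 3 can keep vendored and VCS directories out of the scan. A minimal sketch; the exclude values are illustrative and their exact matching semantics are an assumption:

```r
# Scan from the repo root but skip vendored, VCS, and virtualenv directories
# (exclude patterns are illustrative; see the Step 3 example for documented usage)
workflow <- put_auto(".",
  detect_inputs = TRUE,
  detect_outputs = TRUE,
  detect_dependencies = TRUE,
  exclude = c("node_modules", ".git", "venv")
)
```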
## Related Skills

- `install-putior` — prerequisite: putior must be installed first
- `annotate-source-files` — next step: add manual annotations based on the plan
- `generate-workflow-diagram` — generate final diagram after annotation is complete
- `configure-putior-mcp` — use MCP tools for interactive analysis sessions