Pydantic-deepagents performant-code
Writing efficient code that handles large data and tight constraints
install
source · Clone the upstream repo
git clone https://github.com/vstorm-co/pydantic-deepagents
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/vstorm-co/pydantic-deepagents "$T" && mkdir -p ~/.claude/skills && cp -r "$T/pydantic_deep/bundled_skills/performant-code" ~/.claude/skills/vstorm-co-pydantic-deepagents-performant-code-e6b38b && rm -rf "$T"
manifest:
pydantic_deep/bundled_skills/performant-code/SKILL.md
Performant Code
How to write code that won't time out on large inputs.
Think About Scale First
Before writing code, ask: how big is the data?
| Data size | Approach |
|---|---|
| < 1 MB | Load into memory, any approach works |
| 1-100 MB | Load into memory, but use efficient algorithms |
| 100 MB - 1 GB | Stream/mmap, avoid loading entirely into memory |
| > 1 GB | Streaming only, chunk-based processing |
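For the largest tier in the table, a chunked read keeps memory flat no matter how big the file is. A minimal sketch, assuming a hashing workload and a 1 MB chunk size purely for illustration:

```python
import hashlib

def hash_large_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file in 1 MB chunks so memory use stays constant,
    even for multi-gigabyte inputs."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # empty bytes at EOF ends the loop
            digest.update(chunk)
    return digest.hexdigest()
```

The same loop shape works for any per-chunk aggregation: parse, count, or forward each chunk, and never hold the whole file.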
I/O Optimization
Large files
- mmap (C: `mmap()`, Python: `mmap.mmap()`) — map the file into memory, OS handles paging (see the sketch after this list)
- Buffered binary reads — `fread()` in C, `open(f, 'rb').read(chunk)` in Python
- NEVER read a 500 MB file line-by-line with `fgets()` when you need random access
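As referenced above, a minimal Python mmap sketch; the path and the byte pattern being searched are placeholders:

```python
import mmap

def find_offset(path: str, needle: bytes) -> int:
    """Random access into a large file without reading it all:
    the OS pages bytes in only as they are touched."""
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        return mm.find(needle)  # byte offset, or -1 if absent
```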
Writing output
- Buffer writes — don't call `write()` for every byte
- Use `fwrite()` or `sys.stdout.buffer.write()` for binary output (sketched below)
- Flush only when needed
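A small sketch of the buffered-write advice, assuming the rows are already bytes:

```python
import sys

def emit(rows):
    """Accumulate output and hand it over in one buffered binary write
    instead of one write() call per row."""
    buf = bytearray()
    for row in rows:
        buf += row + b"\n"
    sys.stdout.buffer.write(buf)
    sys.stdout.buffer.flush()  # flush once, at the end
```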
Algorithm Complexity
- O(n) beats O(n log n) beats O(n²) — always
- Nested loops on large data = timeout. Restructure to single pass + hash map (sketched below)
- Sorting is O(n log n) — only sort if you need to
- Use hash maps/sets for lookup instead of linear search
- Pre-compute what you can outside loops
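To make the nested-loop point concrete, here is the classic restructuring from O(n²) to O(n) with a set; the pair-sum task is only an illustration:

```python
def has_pair_with_sum(values, target):
    """Single pass with a hash set: O(n) time, O(n) extra space.
    The nested-loop equivalent is O(n^2) and times out on large inputs."""
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False
```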
Language-Specific Tips
C
- Use `mmap()` for large file access
- `-O2` or `-O3` for compiler optimizations
- Avoid `malloc()`/`free()` in tight loops — pre-allocate
- Use `memcpy()` instead of byte-by-byte copying
- Integer arithmetic > floating point when possible
Python
- Use `numpy` for numerical work (100x faster than pure Python loops)
- `collections.Counter`, `defaultdict` — avoid manual counting (example below)
- List comprehensions > explicit loops
- `struct.unpack()` for binary parsing
- `subprocess.run()` > `os.system()`
- For heavy computation: consider writing a small C program instead
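Two of the bullets above, sketched. The word-counting part is generic; the binary record layout (a 4-byte unsigned id followed by an 8-byte float) is an assumption invented for the example:

```python
import struct
from collections import Counter

def top_words(lines, n=10):
    """Counter replaces a hand-rolled dict-of-counts loop."""
    counts = Counter(word for line in lines for word in line.split())
    return counts.most_common(n)

RECORD = struct.Struct("<Id")  # hypothetical layout: uint32 id + float64 value

def read_records(blob: bytes):
    """Parse fixed-size binary records with struct instead of manual slicing.
    Assumes len(blob) is a multiple of RECORD.size."""
    return [RECORD.unpack_from(blob, off) for off in range(0, len(blob), RECORD.size)]
```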
General
- Profile before optimizing — find the actual bottleneck
- If a program hangs, it's likely: infinite loop, deadlock, or I/O bound on huge data
- If a program is slow, check: algorithm complexity, I/O pattern, memory allocation
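A minimal way to act on the "profile before optimizing" bullet, using the standard-library profiler (the context-manager form needs Python 3.8+):

```python
import cProfile
import pstats

def profile_call(func, *args, **kwargs):
    """Run one call under cProfile and print the ten most expensive
    functions by cumulative time, so effort goes to the real bottleneck."""
    with cProfile.Profile() as prof:
        result = func(*args, **kwargs)
    pstats.Stats(prof).sort_stats("cumulative").print_stats(10)
    return result
```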
Constraints Awareness
- If the task says "< 5000 bytes" — count your bytes, use `wc -c` (byte-count sketch below)
- If there's a time limit — test with actual data, not toy inputs
- If there's a memory limit — don't load everything into RAM
- Always verify constraints BEFORE declaring done
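A sketch of the byte-budget check in Python, equivalent in spirit to `wc -c`; the 5000-byte limit is just the example figure from the list above:

```python
import os
import sys

LIMIT = 5000  # byte budget from the task statement (example value)

def check_size(path: str) -> None:
    """Verify the constraint before declaring done: fail loudly if over budget."""
    size = os.path.getsize(path)
    print(f"{path}: {size} bytes ({LIMIT - size} remaining)")
    if size > LIMIT:
        sys.exit(f"over the {LIMIT}-byte limit by {size - LIMIT} bytes")
```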