Awesome-omni-skills kaizen

Kaizen: Continuous Improvement workflow skill. Use this skill when the user needs a guide for continuous improvement, error proofing, and standardization; when the user wants to improve code quality, refactor, or discuss process improvements; and when the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

install

source · Clone the upstream repo

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skills
```

Claude Code · Install into ~/.claude/skills/

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/kaizen" ~/.claude/skills/diegosouzapw-awesome-omni-skills-kaizen && rm -rf "$T"
```

manifest: skills/kaizen/SKILL.md
source content

Kaizen: Continuous Improvement

Overview

This public intake copy packages `plugins/antigravity-awesome-skills-claude/skills/kaizen` from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses `metadata.json` plus `ORIGIN.md` as the provenance anchor for review.

Kaizen: Continuous Improvement

Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: The Four Pillars, Red Flags, Remember, Limitations.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • Code implementation and refactoring
  • Architecture and design decisions
  • Process and workflow improvements
  • Error handling and validation
  • Use when the request clearly matches the imported source intent: a guide for continuous improvement, error proofing, and standardization, including improving code quality, refactoring, or discussing process improvements.
  • Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.

Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | `metadata.json` | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | `ORIGIN.md` | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | `SKILL.md` | Starts with the smallest copied file that materially changes execution |
| Supporting context | `SKILL.md` | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | the Related Skills section | Helps the operator switch to a stronger native skill when the task drifts |

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
  2. Read the overview and provenance files before loading any copied upstream support files.
  3. Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
  4. Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
  5. Validate the result against the upstream expectations and the evidence you can point to in the copied files.
  6. Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
  7. Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.

Imported Workflow Notes

Imported: Overview

Small improvements, continuously. Error-proof by design. Follow what works. Build only what's needed.

Core principle: Many small improvements beat one big change. Prevent errors at design time, not with fixes.

Imported: The Four Pillars

1. Continuous Improvement (Kaizen)

Small, frequent improvements compound into major gains.

Principles

Incremental over revolutionary:

  • Make smallest viable change that improves quality
  • One improvement at a time
  • Verify each change before next
  • Build momentum through small wins

Always leave code better:

  • Fix small issues as you encounter them
  • Refactor while you work (within scope)
  • Update outdated comments
  • Remove dead code when you see it
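The habit above can be sketched in miniature; everything here is invented for illustration: the bug fix you came for, plus one small cleanup in the same spot, and nothing more:

```typescript
// Kaizen in passing: while fixing the rounding bug in this function,
// an outdated comment and an unreachable branch were also removed.
const lastPage = (totalItems: number, pageSize: number): number => {
  if (pageSize <= 0) throw new Error('pageSize must be positive');
  // Ceiling division, with at least one page even when there are no items
  return Math.max(1, Math.ceil(totalItems / pageSize));
};

console.log(lastPage(10, 4)); // 3 pages for 10 items at 4 per page
```

The cleanup stays inside the code already being touched; wider refactors wait for their own change.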

Iterative refinement:

  • First version: make it work
  • Second pass: make it clear
  • Third pass: make it efficient
  • Don't try all three at once
<Good>
```typescript
// Iteration 1: Make it work
const calculateTotal = (items: Item[]) => {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i].price * items[i].quantity;
  }
  return total;
};

// Iteration 2: Make it clear (refactor)
const calculateTotal = (items: Item[]): number => {
  return items.reduce((total, item) => {
    return total + (item.price * item.quantity);
  }, 0);
};

// Iteration 3: Make it robust (add validation)
const calculateTotal = (items: Item[]): number => {
  if (!items?.length) return 0;

  return items.reduce((total, item) => {
    if (item.price < 0 || item.quantity < 0) {
      throw new Error('Price and quantity must be non-negative');
    }
    return total + (item.price * item.quantity);
  }, 0);
};
```

Each step is complete, tested, and working
</Good>

<Bad>
```typescript
// Trying to do everything at once
const calculateTotal = (items: Item[]): number => {
  // Validate, optimize, add features, handle edge cases all together
  if (!items?.length) return 0;
  const validItems = items.filter(item => {
    if (item.price < 0) throw new Error('Negative price');
    if (item.quantity < 0) throw new Error('Negative quantity');
    return item.quantity > 0; // Also filtering zero quantities
  });
  // Plus caching, plus logging, plus currency conversion...
  return validItems.reduce(...); // Too many concerns at once
};

```

Overwhelming, error-prone, hard to verify
</Bad>

In Practice

When implementing features:

  1. Start with simplest version that works
  2. Add one improvement (error handling, validation, etc.)
  3. Test and verify
  4. Repeat if time permits
  5. Don't try to make it perfect immediately

When refactoring:

  • Fix one smell at a time
  • Commit after each improvement
  • Keep tests passing throughout
  • Stop when "good enough" (diminishing returns)

When reviewing code:

  • Suggest incremental improvements (not rewrites)
  • Prioritize: critical → important → nice-to-have
  • Focus on highest-impact changes first
  • Accept "better than before" even if not perfect

2. Poka-Yoke (Error Proofing)

Design systems that prevent errors at compile/design time, not runtime.

Principles

Make errors impossible:

  • Type system catches mistakes
  • Compiler enforces contracts
  • Invalid states unrepresentable
  • Errors caught early (left of production)

Design for safety:

  • Fail fast and loudly
  • Provide helpful error messages
  • Make correct path obvious
  • Make incorrect path difficult

Defense in layers:

  1. Type system (compile time)
  2. Validation (runtime, early)
  3. Guards (preconditions)
  4. Error boundaries (graceful degradation)
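The four layers above can be sketched end to end; the names (`Quantity`, `parseQuantity`, `reserveStock`, `handleOrder`) are hypothetical, not part of the upstream material:

```typescript
// Layer 1: the type system rules out invalid shapes at compile time
type Quantity = number & { readonly __brand: 'Quantity' };

// Layer 2: validation at the boundary, as early as possible
const parseQuantity = (raw: unknown): Quantity => {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 0) {
    throw new Error(`Invalid quantity: ${String(raw)}`);
  }
  return n as Quantity;
};

// Layer 3: a guard states the precondition explicitly
const reserveStock = (available: number, requested: Quantity): number => {
  if (requested > available) {
    throw new Error('Requested quantity exceeds available stock');
  }
  return available - requested;
};

// Layer 4: an error boundary degrades gracefully instead of crashing
const handleOrder = (raw: unknown, available: number): number | null => {
  try {
    return reserveStock(available, parseQuantity(raw));
  } catch (err) {
    console.error('Order rejected:', (err as Error).message);
    return null;
  }
};
```

Each layer catches what the previous one cannot: the brand blocks unvalidated numbers at compile time, the boundary check rejects bad input early, the guard states the precondition, and the handler degrades gracefully.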

Type System Error Proofing

<Good>
```typescript
// Error: string status can be any value
type OrderBad = {
  status: string; // Can be "pending", "PENDING", "pnding", anything!
  total: number;
};

// Good: Only valid states possible
type OrderStatus = 'pending' | 'processing' | 'shipped' | 'delivered';
type Order = {
  status: OrderStatus;
  total: number;
};

// Better: States with associated data
type Order =
  | { status: 'pending'; createdAt: Date }
  | { status: 'processing'; startedAt: Date; estimatedCompletion: Date }
  | { status: 'shipped'; trackingNumber: string; shippedAt: Date }
  | { status: 'delivered'; deliveredAt: Date; signature: string };

// Now impossible to have shipped without trackingNumber
```

Type system prevents entire classes of errors
</Good>

<Good>
```typescript
// Make invalid states unrepresentable
type NonEmptyArray<T> = [T, ...T[]];

const firstItem = <T>(items: NonEmptyArray<T>): T => {
  return items[0]; // Always safe, never undefined!
};

// Caller must prove array is non-empty
const items: number[] = [1, 2, 3];
if (items.length > 0) {
  firstItem(items as NonEmptyArray<number>); // Safe
}

```

Function signature guarantees safety
</Good>

Validation Error Proofing

<Good>
```typescript
// Error: Validation after use
const processPayment = (amount: number) => {
  const fee = amount * 0.03; // Used before validation!
  if (amount <= 0) throw new Error('Invalid amount');
  // ...
};

// Good: Validate immediately
const processPayment = (amount: number) => {
  if (amount <= 0) {
    throw new Error('Payment amount must be positive');
  }
  if (amount > 10000) {
    throw new Error('Payment exceeds maximum allowed');
  }

  const fee = amount * 0.03;
  // ... now safe to use
};

// Better: Validation at boundary with branded type
type PositiveNumber = number & { readonly __brand: 'PositiveNumber' };

const validatePositive = (n: number): PositiveNumber => {
  if (n <= 0) throw new Error('Must be positive');
  return n as PositiveNumber;
};

const processPayment = (amount: PositiveNumber) => {
  // amount is guaranteed positive, no need to check
  const fee = amount * 0.03;
};

// Validate at system boundary
const handlePaymentRequest = (req: Request) => {
  const amount = validatePositive(req.body.amount); // Validate once
  processPayment(amount); // Use everywhere safely
};
```

Validate once at boundary, safe everywhere else
</Good>

Guards and Preconditions

<Good>
```typescript
// Early returns prevent deeply nested code
const processUser = (user: User | null) => {
  if (!user) {
    logger.error('User not found');
    return;
  }

  if (!user.email) {
    logger.error('User email missing');
    return;
  }

  if (!user.isActive) {
    logger.info('User inactive, skipping');
    return;
  }

  // Main logic here, guaranteed user is valid and active
  sendEmail(user.email, 'Welcome!');
};

```

Guards make assumptions explicit and enforced
</Good>

Configuration Error Proofing

<Good>
```typescript
// Error: Optional config with unsafe defaults
type ConfigBad = {
  apiKey?: string;
  timeout?: number;
};

const client = new APIClient({ timeout: 5000 }); // apiKey missing!

// Good: Required config, fails early
type Config = {
  apiKey: string;
  timeout: number;
};

const loadConfig = (): Config => {
  const apiKey = process.env.API_KEY;
  if (!apiKey) {
    throw new Error('API_KEY environment variable required');
  }

  return {
    apiKey,
    timeout: 5000,
  };
};

// App fails at startup if config invalid, not during request
const config = loadConfig();
const client = new APIClient(config);
```

Fail at startup, not in production
</Good>

In Practice

When designing APIs:

  • Use types to constrain inputs
  • Make invalid states unrepresentable
  • Return Result<T, E> instead of throwing
  • Document preconditions in types

When handling errors:

  • Validate at system boundaries
  • Use guards for preconditions
  • Fail fast with clear messages
  • Log context for debugging

When configuring:

  • Required over optional with defaults
  • Validate all config at startup
  • Fail deployment if config invalid
  • Don't allow partial configurations

3. Standardized Work

Follow established patterns. Document what works. Make good practices easy to follow.

Principles

Consistency over cleverness:

  • Follow existing codebase patterns
  • Don't reinvent solved problems
  • New pattern only if significantly better
  • Team agreement on new patterns

Documentation lives with code:

  • README for setup and architecture
  • CLAUDE.md for AI coding conventions
  • Comments for "why", not "what"
  • Examples for complex patterns

Automate standards:

  • Linters enforce style
  • Type checks enforce contracts
  • Tests verify behavior
  • CI/CD enforces quality gates
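As one hedged sketch of automating a standard, a small script can turn a team convention into a checkable gate; the convention here (every TODO names an owner) and the file name are invented for illustration:

```typescript
// check-todos.ts (hypothetical): flag TODO comments that lack an "(owner)"
const violations = (source: string): string[] =>
  source
    .split('\n')
    .filter(line => /\/\/\s*TODO(?!\()/.test(line)); // "TODO" not followed by "("

const sample = [
  '// TODO(alice): tighten validation',
  '// TODO: fix later',
].join('\n');

console.log(violations(sample).length); // 1: only the ownerless TODO is flagged
```

A CI wrapper would exit non-zero when the list is non-empty, so the convention is enforced rather than remembered.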

Following Patterns

<Good>
```typescript
// Existing codebase pattern for API clients
class UserAPIClient {
  async getUser(id: string): Promise<User> {
    return this.fetch(`/users/${id}`);
  }
}

// New code follows the same pattern
class OrderAPIClient {
  async getOrder(id: string): Promise<Order> {
    return this.fetch(`/orders/${id}`);
  }
}

```

Consistency makes codebase predictable
</Good>

<Bad>
```typescript
// Existing pattern uses classes
class UserAPIClient { /* ... */ }

// New code introduces different pattern without discussion
const getOrder = async (id: string): Promise<Order> => {
  // Breaking consistency "because I prefer functions"
};
```

Inconsistency creates confusion
</Bad>

Error Handling Patterns

<Good>
```typescript
// Project standard: Result type for recoverable errors
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// All services follow this pattern
const fetchUser = async (id: string): Promise<Result<User, Error>> => {
  try {
    const user = await db.users.findById(id);
    if (!user) {
      return { ok: false, error: new Error('User not found') };
    }
    return { ok: true, value: user };
  } catch (err) {
    return { ok: false, error: err as Error };
  }
};

// Callers use consistent pattern
const result = await fetchUser('123');
if (!result.ok) {
  logger.error('Failed to fetch user', result.error);
  return;
}
const user = result.value; // Type-safe!

```

Standard pattern across codebase
</Good>

Documentation Standards

<Good>
```typescript
/**
 * Retries an async operation with exponential backoff.
 *
 * Why: Network requests fail temporarily; retrying improves reliability
 * When to use: External API calls, database operations
 * When not to use: User input validation, internal function calls
 *
 * @example
 * const result = await retry(
 *   () => fetch('https://api.example.com/data'),
 *   { maxAttempts: 3, baseDelay: 1000 }
 * );
 */
const retry = async <T>(
  operation: () => Promise<T>,
  options: RetryOptions
): Promise<T> => {
  // Implementation...
};
```

Documents why, when, and how
</Good>

In Practice

Before adding new patterns:

  • Search codebase for similar problems solved
  • Check CLAUDE.md for project conventions
  • Discuss with team if breaking from pattern
  • Update docs when introducing new pattern

When writing code:

  • Match existing file structure
  • Use same naming conventions
  • Follow same error handling approach
  • Import from same locations

When reviewing:

  • Check consistency with existing code
  • Point to examples in codebase
  • Suggest aligning with standards
  • Update CLAUDE.md if new standard emerges

4. Just-In-Time (JIT)

Build what's needed now. No more, no less. Avoid premature optimization and over-engineering.

Principles

YAGNI (You Aren't Gonna Need It):

  • Implement only current requirements
  • No "just in case" features
  • No "we might need this later" code
  • Delete speculation

Simplest thing that works:

  • Start with straightforward solution
  • Add complexity only when needed
  • Refactor when requirements change
  • Don't anticipate future needs

Optimize when measured:

  • No premature optimization
  • Profile before optimizing
  • Measure impact of changes
  • Accept "good enough" performance

YAGNI in Action

<Good>
```typescript
// Current requirement: Log errors to console
const logError = (error: Error) => {
  console.error(error.message);
};
```

Simple, meets current need
</Good>

<Bad>
```typescript
// Over-engineered for "future needs"
interface LogTransport {
  write(level: LogLevel, message: string, meta?: LogMetadata): Promise<void>;
}

class ConsoleTransport implements LogTransport { /* ... */ }
class FileTransport implements LogTransport { /* ... */ }
class RemoteTransport implements LogTransport { /* ... */ }

class Logger {
  private transports: LogTransport[] = [];
  private queue: LogEntry[] = [];
  private rateLimiter: RateLimiter;
  private formatter: LogFormatter;

  // 200 lines of code for "maybe we'll need it"
}

const logError = (error: Error) => {
  Logger.getInstance().log('error', error.message);
};
```

Building for imaginary future requirements
</Bad>

When to add complexity:

  • Current requirement demands it
  • Pain points identified through use
  • Measured performance issues
  • Multiple use cases emerged

<Good>
```typescript
// Start simple
const formatCurrency = (amount: number): string => {
  return `$${amount.toFixed(2)}`;
};

// Requirement evolves: support multiple currencies
const formatCurrency = (amount: number, currency: string): string => {
  const symbols: Record<string, string> = { USD: '$', EUR: '€', GBP: '£' };
  return `${symbols[currency]}${amount.toFixed(2)}`;
};

// Requirement evolves: support localization
const formatCurrency = (amount: number, locale: string): string => {
  return new Intl.NumberFormat(locale, {
    style: 'currency',
    currency: locale === 'en-US' ? 'USD' : 'EUR',
  }).format(amount);
};

```

Complexity added only when needed
</Good>

Premature Abstraction

<Bad>
```typescript
// One use case, but building generic framework
abstract class BaseCRUDService<T> {
  abstract getAll(): Promise<T[]>;
  abstract getById(id: string): Promise<T>;
  abstract create(data: Partial<T>): Promise<T>;
  abstract update(id: string, data: Partial<T>): Promise<T>;
  abstract delete(id: string): Promise<void>;
}

class GenericRepository<T> { /* 300 lines */ }
class QueryBuilder<T> { /* 200 lines */ }
// ... building entire ORM for single table
```

Massive abstraction for uncertain future
</Bad>

<Good>
```typescript
// Simple functions for current needs
const getUsers = async (): Promise<User[]> => {
  return db.query('SELECT * FROM users');
};

const getUserById = async (id: string): Promise<User | null> => {
  return db.query('SELECT * FROM users WHERE id = $1', [id]);
};

// When pattern emerges across multiple entities, then abstract
```

Abstract only when pattern proven across 3+ cases
</Good>

Performance Optimization

<Good>
```typescript
// Current: Simple approach
const filterActiveUsers = (users: User[]): User[] => {
  return users.filter(user => user.isActive);
};

// Benchmark shows: 50ms for 1000 users (acceptable)
// ✓ Ship it, no optimization needed

// Later: After profiling shows this is bottleneck
// Then optimize with indexed lookup or caching
```

Optimize based on measurement, not assumptions
</Good>

<Bad>
```typescript
// Premature optimization
const filterActiveUsers = (users: User[]): User[] => {
  // "This might be slow, so let's cache and index"
  const cache = new WeakMap();
  const indexed = buildBTreeIndex(users, 'isActive');
  // 100 lines of optimization code
  // Adds complexity, harder to maintain
  // No evidence it was needed
};

```

Complex solution for unmeasured problem
</Bad>

In Practice

When implementing:

  • Solve the immediate problem
  • Use straightforward approach
  • Resist "what if" thinking
  • Delete speculative code

When optimizing:

  • Profile first, optimize second
  • Measure before and after
  • Document why optimization needed
  • Keep simple version in tests

When abstracting:

  • Wait for 3+ similar cases (Rule of Three)
  • Make abstraction as simple as possible
  • Prefer duplication over wrong abstraction
  • Refactor when pattern clear
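The Rule of Three above can be sketched as follows; the `slugify` helpers are hypothetical: the first two near-duplicates are tolerated, and the shared helper is extracted only when a third case proves the pattern:

```typescript
// Case 1 and 2: duplication tolerated, the pattern is not yet proven
const slugifyTitle = (title: string): string =>
  title.trim().toLowerCase().replace(/\s+/g, '-');

const slugifyTag = (tag: string): string =>
  tag.trim().toLowerCase().replace(/\s+/g, '-');

// Case 3 arrives: the pattern is proven, so extract the shared helper
const slugify = (text: string): string =>
  text.trim().toLowerCase().replace(/\s+/g, '-');

const slugifyCategory = (category: string): string => slugify(category);

console.log(slugify('  Continuous Improvement ')); // "continuous-improvement"
```

Extracting at case one risks the wrong abstraction; waiting for case three lets the real shared shape emerge.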

Examples

Example 1: Ask for the upstream workflow directly

Use @kaizen to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @kaizen against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @kaizen for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @kaizen using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Imported Usage Notes

Imported: Integration with Commands

The Kaizen skill guides how you work. The commands provide structured analysis:

  • /why: Root cause analysis (5 Whys)
  • /cause-and-effect: Multi-factor analysis (Fishbone)
  • /plan-do-check-act: Iterative improvement cycles
  • /analyse-problem: Comprehensive documentation (A3)
  • /analyse: Smart method selection (Gemba/VSM/Muda)

Use commands for structured problem-solving. Apply skill for day-to-day development.

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.

  • Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
  • Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
  • Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
  • Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
  • Treat generated examples as scaffolding; adapt them to the concrete task before execution.
  • Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in `plugins/antigravity-awesome-skills-claude/skills/kaizen`, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open `metadata.json`, `ORIGIN.md`, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated `SKILL.md`, but they cannot quickly tell which references, examples, or scripts matter for the current task.

Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.

Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Related Skills

  • @base: Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @calc: Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @draw: Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @image-studio: Use when the work is better handled by that native specialization after this imported skill establishes context.

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| references | copied reference notes, guides, or background material from upstream | references/n/a |
| examples | worked examples or reusable prompts copied from upstream | examples/n/a |
| scripts | upstream helper scripts that change execution or validation | scripts/n/a |
| agents | routing or delegation notes that are genuinely part of the imported package | agents/n/a |
| assets | supporting assets or schemas copied from the source package | assets/n/a |

Imported Reference Notes

Imported: Red Flags

Violating Continuous Improvement:

  • "I'll refactor it later" (never happens)
  • Leaving code worse than you found it
  • Big bang rewrites instead of incremental

Violating Poka-Yoke:

  • "Users should just be careful"
  • Validation after use instead of before
  • Optional config with no validation

Violating Standardized Work:

  • "I prefer to do it my way"
  • Not checking existing patterns
  • Ignoring project conventions

Violating Just-In-Time:

  • "We might need this someday"
  • Building frameworks before using them
  • Optimizing without measuring

Imported: Remember

Kaizen is about:

  • Small improvements continuously
  • Preventing errors by design
  • Following proven patterns
  • Building only what's needed

Not about:

  • Perfection on first try
  • Massive refactoring projects
  • Clever abstractions
  • Premature optimization

Mindset: Good enough today, better tomorrow. Repeat.

Imported: Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.