---
name: self-improvement
description: Captures learnings, errors, and corrections to enable continuous improvement
---

# Self-Improvement Skill

Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.

## Quick Reference

| Situation | Action |
|---|---|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (clawdbot workspace) |
| Tool gotchas | Promote to TOOLS.md (clawdbot workspace) |
| Behavioral patterns | Promote to SOUL.md (clawdbot workspace) |

## Setup

Create the `.learnings/` directory in the project root if it doesn't exist:

```bash
mkdir -p .learnings
```

Copy templates from assets/ or create files with headers.
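
If the templates aren't handy, a sketch like the following bootstraps the files. The header wording here is illustrative; the templates in `assets/` are the canonical versions.

```bash
# Create the three log files with minimal headers if they don't exist yet.
# Header text is illustrative - prefer the templates from assets/.
mkdir -p .learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  [ -f ".learnings/$f.md" ] || printf '# %s\n\n' "$f" > ".learnings/$f.md"
done
```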

## Logging Format

### Learning Entry

Append to `.learnings/LEARNINGS.md`:

```markdown
## [LRN-YYYYMMDD-XXX] category

**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)

---
```

### Error Entry

Append to `.learnings/ERRORS.md`:

````markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name

**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
```
Actual error message or output
```

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)

---
````

### Feature Request Entry

Append to `.learnings/FEATURE_REQUESTS.md`:

```markdown
## [FEAT-YYYYMMDD-XXX] capability_name

**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name

---
```

## ID Generation

Format: `TYPE-YYYYMMDD-XXX`

- `TYPE`: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- `YYYYMMDD`: current date
- `XXX`: sequential number or random 3 chars (e.g., `001`, `A7B`)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
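
Since the format is mechanical, ID generation can be scripted. A minimal sketch for the sequential variant; it assumes entry headers follow the `## [TYPE-YYYYMMDD-XXX]` pattern shown above, and `next_id` is a hypothetical helper, not part of this skill:

```bash
# Sketch: compute the next sequential ID of a given type for today's date.
next_id() {
  local type="$1" file="$2" today count
  today=$(date +%Y%m%d)
  # Count today's entries of this type already in the file (empty/0 if none).
  count=$(grep -c "^## \[${type}-${today}-" "$file" 2>/dev/null || true)
  printf '%s-%s-%03d\n' "$type" "$today" "$((count + 1))"
}

next_id LRN .learnings/LEARNINGS.md   # e.g. LRN-20250115-001
```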

## Resolving Entries

When an issue is fixed, update the entry:

1. Change `**Status**: pending` → `**Status**: resolved`
2. Add a resolution block after Metadata:

   ```markdown
   ### Resolution
   - **Resolved**: 2025-01-16T09:00:00Z
   - **Commit/PR**: abc123 or #42
   - **Notes**: Brief description of what was done
   ```
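
Updating the status can also be scripted; a sketch using awk, where the entry ID is a placeholder and only that entry's status line is touched:

```bash
# Mark a single entry resolved in place (the ID is a placeholder).
awk '
  /^## \[LRN-20250115-001\]/ { in_entry = 1 }
  in_entry && /^\*\*Status\*\*: pending/ { sub(/pending/, "resolved"); in_entry = 0 }
  { print }
' .learnings/LEARNINGS.md > tmp.md && mv tmp.md .learnings/LEARNINGS.md
```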

Other status values:

- `in_progress` - actively being worked on
- `wont_fix` - decided not to address (add the reason in Resolution notes)
- `promoted` - elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md

## Promoting to Project Memory

When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.

### When to Promote

- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions

### Promotion Targets

| Target | What Belongs There |
|---|---|
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (clawdbot) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (clawdbot) |

### How to Promote

1. Distill the learning into a concise rule or fact
2. Add it to the appropriate section of the target file (create the file if needed)
3. Update the original entry:
   - Change `**Status**: pending` → `**Status**: promoted`
   - Add `**Promoted**:` naming the target (CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md)

### Promotion Examples

**Learning (verbose):**

```
Project uses pnpm workspaces. Attempted npm install but failed. Lock file is pnpm-lock.yaml. Must use pnpm install.
```

**In CLAUDE.md (concise):**

```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```

**Learning (verbose):**

```
When modifying API endpoints, must regenerate TypeScript client. Forgetting this causes type mismatches at runtime.
```

**In AGENTS.md (actionable):**

```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```

## Recurring Pattern Detection

If logging something similar to an existing entry:

1. **Search first** (see the sketch after this list): `grep -r "keyword" .learnings/`
2. **Link entries**: add `**See Also**: ERR-20250110-001` in Metadata
3. **Bump priority** if the issue keeps recurring
4. **Consider a systemic fix** - recurring issues often indicate:
   - Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
   - Missing automation (→ add to AGENTS.md)
   - An architectural problem (→ create a tech debt ticket)
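
A pre-logging check along these lines can catch duplicates before they accumulate; the keyword and entry ID are placeholders:

```bash
# Search existing entries for the topic before logging a new one.
grep -rn "pnpm install" .learnings/ || echo "No prior entries - log a new one"

# Rough recurrence signal: how often is a given entry referenced?
grep -o "ERR-20250110-001" .learnings/*.md | wc -l
```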

## Periodic Review

Review `.learnings/` at natural breakpoints:

### When to Review

- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development

### Quick Status Check

```bash
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l

# List high-priority entry headers (-h drops filename prefixes so the second grep matches)
grep -h -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

# Find files containing learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```

### Review Actions

- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues

## Detection Triggers

Automatically log when you notice:

**Corrections** (→ learning with `correction` category):

- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."

**Feature Requests** (→ feature request):

- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."

**Knowledge Gaps** (→ learning with `knowledge_gap` category):

- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding

**Errors** (→ error entry):

- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure

## Priority Guidelines

| Priority | When to Use |
|---|---|
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |

## Area Tags

Use to filter learnings by codebase region:

| Area | Scope |
|---|---|
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |

## Best Practices

  1. Log immediately - context is freshest right after the issue
  2. Be specific - future agents need to understand quickly
  3. Include reproduction steps - especially for errors
  4. Link related files - makes fixes easier
  5. Suggest concrete fixes - not just "investigate"
  6. Use consistent categories - enables filtering
  7. Promote aggressively - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
  8. Review regularly - stale learnings lose value

## Gitignore Options

**Keep learnings local** (per-developer):

```gitignore
.learnings/
```

**Track learnings in repo** (team-wide): don't add anything to .gitignore - learnings become shared knowledge.

**Hybrid** (track templates, ignore entries):

```gitignore
.learnings/*.md
!.learnings/.gitkeep
```

## Hook Integration

Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.

### Quick Setup (Claude Code / Codex)

Create `.claude/settings.json` in your project:

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }]
  }
}
```

This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
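
The bundled script is the canonical implementation. As an illustration of the mechanism only, a minimal activator could be as simple as printing the reminder, since Claude Code adds a UserPromptSubmit hook's stdout to the model's context:

```bash
#!/usr/bin/env bash
# Illustrative stand-in for scripts/activator.sh (the shipped script is authoritative).
# Whatever a UserPromptSubmit hook prints to stdout is injected as context.
echo "Reminder: if this task surfaced an error, correction, or non-obvious fix, log it to .learnings/."
```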

### Full Setup (With Error Detection)

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/error-detector.sh"
      }]
    }]
  }
}
```

### Available Hook Scripts

| Script | Hook Type | Purpose |
|---|---|---|
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |

See `references/hooks-setup.md` for detailed configuration and troubleshooting.

## Automatic Skill Extraction

When a learning is valuable enough to become a reusable skill, extract it using the provided helper.

### Skill Extraction Criteria

A learning qualifies for skill extraction when ANY of these apply:

| Criterion | Description |
|---|---|
| Recurring | Has **See Also** links to 2+ similar issues |
| Verified | Status is `resolved` with a working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |

### Extraction Workflow

1. **Identify a candidate**: the learning meets the extraction criteria
2. **Run the helper** (or create manually):

   ```bash
   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name
   ```

3. **Customize SKILL.md**: fill in the template with the learning content
4. **Update the learning**: set status to `promoted_to_skill`, add `Skill-Path`
5. **Verify**: read the skill in a fresh session to ensure it's self-contained

### Manual Extraction

If you prefer manual creation:

1. Create `skills/<skill-name>/SKILL.md`
2. Use the template from `assets/SKILL-TEMPLATE.md`
3. Follow the Agent Skills spec (a minimal skeleton follows this list):
   - YAML frontmatter with `name` and `description`
   - Name must match the folder name
   - No README.md inside the skill folder
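
A minimal scaffold under those constraints might look like this sketch, where `my-skill` and the description text are placeholders:

```bash
# Scaffold a skill folder matching the spec above ("my-skill" is a placeholder).
mkdir -p skills/my-skill
cat > skills/my-skill/SKILL.md <<'EOF'
---
name: my-skill
description: One-line summary of what this skill does and when to use it.
---

## Instructions
Steps distilled from the originating learning entry.
EOF
```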

### Extraction Detection Triggers

Watch for these signals that a learning should become a skill:

**In conversation:**

- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"

**In learning entries:**

- Multiple See Also links (recurring issue)
- High priority + resolved status
- Category `best_practice` with broad applicability
- User feedback praising the solution

### Skill Quality Gates

Before extraction, verify:

- Solution is tested and working
- Description is clear without the original context
- Code examples are self-contained
- No project-specific hardcoded values
- Follows skill naming conventions (lowercase, hyphens)

## Multi-Agent Support

This skill works across different AI coding agents with agent-specific activation.

### Claude Code

- **Activation**: Hooks (UserPromptSubmit, PostToolUse)
- **Setup**: `.claude/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### Codex CLI

- **Activation**: Hooks (same pattern as Claude Code)
- **Setup**: `.codex/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### GitHub Copilot

- **Activation**: Manual (no hook support)
- **Setup**: Add to `.github/copilot-instructions.md`:

```markdown
## Self-Improvement

After solving non-obvious issues, consider logging to `.learnings/`:
1. Use format from self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills

Ask in chat: "Should I log this as a learning?"
```

- **Detection**: Manual review at session end

### Clawdbot

- **Activation**: Workspace injection + inter-agent messaging
- **Setup**: Configure the workspace path in `~/.clawdbot/clawdbot.json`
- **Detection**: Via session tools and workspace files (AGENTS.md, SOUL.md, TOOLS.md)

Clawdbot uses a workspace-based model with injected prompt files. See `references/clawdbot-integration.md` for detailed setup.

### Agent-Agnostic Guidance

Regardless of agent, apply self-improvement when you:

  1. Discover something non-obvious - solution wasn't immediate
  2. Correct yourself - initial approach was wrong
  3. Learn project conventions - discovered undocumented patterns
  4. Hit unexpected errors - especially if diagnosis was difficult
  5. Find better approaches - improved on your original solution

### Copilot Chat Integration

For Copilot users, add this to your prompts when relevant:

```
After completing this task, evaluate if any learnings should be logged to .learnings/ using the self-improvement skill format.
```

Or use quick prompts:

  • "Log this to learnings"
  • "Create a skill from this solution"
  • "Check .learnings/ for related issues"

## Clawdbot Integration

Clawdbot uses workspace-based prompt injection with specialized files for different concerns.

### Workspace Structure

```
~/clawd/                   # Default workspace (configurable)
├── AGENTS.md              # Multi-agent workflows, delegation patterns
├── SOUL.md                # Behavioral guidelines, communication style
├── TOOLS.md               # Tool capabilities, MCP integrations
└── sessions/              # Session transcripts (auto-managed)
```

### Clawdbot Promotion Targets

| Learning Type | Promote To | Example |
|---|---|---|
| Agent coordination | AGENTS.md | "Delegate file searches to explore agent" |
| Communication style | SOUL.md | "Be concise, avoid disclaimers" |
| Tool gotchas | TOOLS.md | "MCP server X requires auth refresh" |
| Project facts | CLAUDE.md | Standard project conventions |

### Inter-Agent Learning

Clawdbot supports session-based communication:

- `sessions_list` - see active/recent sessions
- `sessions_history` - read the transcript from another session
- `sessions_send` - send a message to another session

### Hybrid Setup (Claude Code + Clawdbot)

When using both:

1. Keep `.learnings/` for project-specific learnings
2. Use clawdbot workspace files for cross-project patterns
3. Sync high-value learnings to both systems

See `references/clawdbot-integration.md` for complete setup, promotion formats, and troubleshooting.