Part 2 of a 5-part series on building production-grade skills for Claude

Previous: Part 1: What Are Claude Skills | Next: Part 3: Building Your First Skill

A skill is simply a folder with no special tooling, no compilation step, and no package manager required, yet the structure of that folder and the precision of its metadata determine whether Claude uses your skill effectively or ignores it entirely.

## File Structure

```
your-skill-name/
├── SKILL.md          # Required: main instruction file
├── scripts/          # Optional: executable code
│   ├── process_data.py
│   └── validate.sh
├── references/       # Optional: documentation loaded as needed
│   ├── api-guide.md
│   └── examples/
└── assets/           # Optional: templates, fonts, icons
    └── report-template.md
```
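
If you create skills often, scaffolding this layout takes only a few lines of code. The sketch below is illustrative rather than official tooling: `scaffold_skill` and its stub template are hypothetical names, and it simply creates the tree shown above.

```python
# Hypothetical helper that scaffolds the folder layout shown above.
from pathlib import Path

SKILL_MD_STUB = """\
---
name: {name}
description: {description}
---

# {title}
"""

def scaffold_skill(name: str, description: str, root: Path = Path(".")) -> Path:
    """Create <root>/<name> with the optional subfolders and a stub SKILL.md."""
    skill_dir = root / name
    for sub in ("scripts", "references", "assets"):
        (skill_dir / sub).mkdir(parents=True, exist_ok=True)
    title = name.replace("-", " ").title()
    stub = SKILL_MD_STUB.format(name=name, description=description, title=title)
    (skill_dir / "SKILL.md").write_text(stub, encoding="utf-8")
    return skill_dir

if __name__ == "__main__":
    scaffold_skill(
        "notion-project-setup",
        "Sets up Notion project workspaces. Use when user asks to 'create a Notion project'.",
    )
```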

**Critical naming rules:**

  • The instruction file must be named exactly SKILL.md (case-sensitive); SKILL.MD and skill.md won’t work
  • Folder name must be kebab-case: notion-project-setup ✅ | Notion Project Setup ❌ | notion_project_setup ❌
  • No README.md inside the skill folder, since all documentation goes in SKILL.md or references/
  • Names containing "claude" or "anthropic" are reserved and will be rejected
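
These rules are mechanical, so they are easy to check with a short script before you package a skill. The sketch below is illustrative only; `check_skill_naming` is a hypothetical helper, not part of any official tooling.

```python
# Illustrative checks for the naming rules above.
import re
from pathlib import Path

KEBAB_CASE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")
RESERVED_WORDS = ("claude", "anthropic")

def check_skill_naming(skill_dir: Path) -> list[str]:
    """Return a list of naming problems; an empty list means the folder passes."""
    problems = []
    name = skill_dir.name
    if not KEBAB_CASE.match(name):
        problems.append(f"folder name '{name}' is not kebab-case")
    if any(word in name for word in RESERVED_WORDS):
        problems.append("names containing 'claude' or 'anthropic' are reserved")
    entries = {p.name for p in skill_dir.iterdir()}  # on-disk names, exact casing
    if "SKILL.md" not in entries:
        problems.append("SKILL.md is missing or has the wrong casing (e.g. skill.md)")
    if "README.md" in entries:
        problems.append("README.md found; move its content into SKILL.md or references/")
    return problems
```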

## YAML Frontmatter: The Most Important Part

The frontmatter is how Claude decides whether to load your skill. It is the first level of progressive disclosure, the part that is always present in Claude’s system prompt, so it must be concise and precise.

**Minimal required format:**

```yaml
---
name: your-skill-name
description: What it does. Use when user asks to [specific phrases].
---
```

That’s the minimum viable skill definition, requiring only two fields.
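
Because the frontmatter is plain YAML, it is also easy to read programmatically when you want to sanity-check a skill. Here is a minimal sketch, assuming PyYAML is installed and treating everything between the first two `---` delimiters as frontmatter; `read_frontmatter` is a hypothetical helper, not the loader Claude itself uses.

```python
# Minimal sketch: pull the two required fields out of a SKILL.md.
from pathlib import Path
import yaml  # assumes PyYAML is installed

def read_frontmatter(skill_md: Path) -> dict:
    """Return the YAML frontmatter of SKILL.md as a dict."""
    text = skill_md.read_text(encoding="utf-8")
    # The frontmatter sits between the first two '---' delimiters.
    _, frontmatter, _body = text.split("---", 2)
    return yaml.safe_load(frontmatter)

meta = read_frontmatter(Path("your-skill-name/SKILL.md"))
print(meta["name"], meta["description"])  # the two required fields
```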

**Field breakdown:**

| Field | Required | Rules |
| --- | --- | --- |
| name | Yes | kebab-case, must match folder name |
| description | Yes | What + When, under 1024 chars, no XML tags |
| license | No | e.g., MIT, Apache-2.0 |
| compatibility | No | Environment requirements (1-500 chars) |
| metadata | No | Custom key-value pairs (author, version, mcp-server) |
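
The required fields and length limits in that table are easy to enforce with a small check before you ship a skill. The sketch below simply encodes the rules listed above; `validate_frontmatter` is a hypothetical helper, not official tooling.

```python
# Illustrative validation of the frontmatter rules from the table above.
def validate_frontmatter(meta: dict, folder_name: str) -> list[str]:
    """Return a list of rule violations; an empty list means the metadata passes."""
    problems = []
    name = meta.get("name", "")
    description = meta.get("description", "")
    if not name or not description:
        problems.append("name and description are both required")
    if name != folder_name:
        problems.append(f"name '{name}' must match the folder name '{folder_name}'")
    if len(description) > 1024:
        problems.append("description exceeds 1024 characters")
    if "<" in description or ">" in description:
        problems.append("description must not contain XML angle brackets")
    compatibility = meta.get("compatibility")
    if compatibility is not None and not 1 <= len(compatibility) <= 500:
        problems.append("compatibility must be 1-500 characters")
    return problems

example = {
    "name": "notion-project-setup",
    "description": "Sets up Notion projects. Use when user asks to 'create a Notion project'.",
}
print(validate_frontmatter(example, folder_name="notion-project-setup"))  # -> []
```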

The description field is the most critical element because it determines when Claude loads your skill. Structure it as:

[What it does] + [When to use it] + [Key capabilities]

Here’s what good vs. bad looks like:

```yaml
# ✅ Good: specific, actionable, includes trigger phrases
description: >
  Analyzes Figma design files and generates developer handoff
  documentation. Use when user uploads .fig files, asks for
  "design specs", "component documentation", or "design-to-code handoff".

# ✅ Good: clear scope and triggers
description: >
  Manages Linear project workflows including sprint planning,
  task creation, and status tracking. Use when user mentions
  "sprint", "Linear tasks", "project planning", or asks to
  "create tickets".

# ❌ Bad: too vague
description: Helps with projects.

# ❌ Bad: missing triggers
description: Creates sophisticated multi-page documentation systems.

# ❌ Bad: too technical, no user-facing triggers
description: Implements the Project entity model with hierarchical relationships.
```

**Security restrictions on frontmatter:**

  • No XML angle brackets (< or >), since frontmatter appears in Claude’s system prompt and malicious XML could inject instructions
  • No claude or anthropic in the skill name

## Writing the Main Instructions

After the frontmatter delimiter (---), write your instructions in Markdown. Here’s the recommended structure:

````markdown
---
name: your-skill
description: [What + When]
---

# Your Skill Name

## Instructions

### Step 1: [First Major Step]
Clear explanation of what happens.

Example:
```bash
python scripts/fetch_data.py --project-id PROJECT_ID
```
Expected output: [describe what success looks like]

## Examples

### Example 1: [Common Scenario]

User says: “Set up a new marketing campaign”
Actions:
1. Fetch existing campaigns via MCP (Model Context Protocol)
2. Create new campaign with provided parameters
Result: Campaign created with confirmation link

## Troubleshooting

### Error: [Common error message]

Cause: [Why it happens]
Solution: [How to fix]
````


### Best Practices for Instructions

**Be specific and actionable:**

```markdown
# ✅ Good
Run `python scripts/validate.py --input {filename}` to check data format.
If validation fails, common issues include:
- Missing required fields (add them to the CSV)
- Invalid date formats (use YYYY-MM-DD)

# ❌ Bad
Validate the data before proceeding.
```

**Include error handling** by anticipating what goes wrong and telling Claude how to recover:

```markdown
# MCP Connection Failed
If you see "Connection refused":
1. Verify MCP server is running: Check Settings > Extensions
2. Confirm API key is valid
3. Try reconnecting: Settings > Extensions > [Your Service] > Reconnect
```

**Reference bundled resources clearly:**

```markdown
Before writing queries, consult `references/api-patterns.md` for:
- Rate limiting guidance
- Pagination patterns
- Error codes and handling
```

Use progressive disclosure in practice by keeping SKILL.md focused on core instructions and moving detailed documentation, API references, and extended examples to references/ where you can link to them. This keeps your main instruction file lean while making deep knowledge available when Claude needs it.

## Size Matters

Keep SKILL.md under 5,000 words. Larger skills degrade performance: when too much context is loaded at once, Claude’s responses slow down and quality drops. If you’re exceeding this limit, you’re probably not leveraging the references/ directory enough.
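
A quick word count is enough to keep yourself honest here. The snippet below is just a convenience check against the guideline above, not something the platform enforces this way.

```python
# Convenience check against the ~5,000-word guideline for SKILL.md.
from pathlib import Path

skill_md = Path("your-skill-name/SKILL.md")
word_count = len(skill_md.read_text(encoding="utf-8").split())
if word_count > 5000:
    print(f"SKILL.md is {word_count} words; move detail into references/")
```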

Also consider how many skills run concurrently. If users have more than 20 to 50 skills enabled simultaneously, performance will suffer, so recommend selective enablement or package related capabilities into “skill packs.”

## A Note on Determinism

For critical validations, consider bundling a Python or Bash script that performs checks programmatically rather than relying on natural language instructions, since code is deterministic while language interpretation is not. The built-in Office skills (docx, pptx, xlsx) use this pattern extensively: their scripts/ directories contain validation and processing logic that removes ambiguity from critical operations.
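
For example, instead of instructing Claude to "make sure the CSV looks right", a bundled script can perform the check and return an unambiguous result. Here is a sketch of what such a scripts/validate.py might look like; the required columns and date format are placeholders, not anything prescribed by the built-in skills.

```python
# Sketch of a bundled scripts/validate.py: deterministic checks that would be
# ambiguous if left to natural-language interpretation.
import csv
import sys
from datetime import datetime

REQUIRED_COLUMNS = ["id", "name", "date"]  # placeholder schema
DATE_FORMAT = "%Y-%m-%d"                   # i.e. YYYY-MM-DD

def validate(path: str) -> list[str]:
    """Return a list of problems found in the CSV; an empty list means it passes."""
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
        if missing:
            return [f"missing required columns: {', '.join(missing)}"]
        for line_no, row in enumerate(reader, start=2):  # the header is line 1
            try:
                datetime.strptime(row["date"], DATE_FORMAT)
            except ValueError:
                problems.append(f"line {line_no}: invalid date {row['date']!r} (use YYYY-MM-DD)")
    return problems

if __name__ == "__main__":
    issues = validate(sys.argv[1])
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)
```

Claude can run the script and act on its output and exit code instead of re-deriving the rules from prose each time.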