Boutique Authoring

This guide covers writing a boutique from scratch — the persona file, worker selection, and the fetch-then-reason pattern.

The Persona File (CLAUDE.md)

Every boutique has a CLAUDE.md in its group folder. This is the system prompt the LLM receives on every task.

Structure

```markdown
# {Boutique Name} — NanoClaw Group

You are {Name}, a {one-sentence identity}.

## Objectives

1. First thing this worker does
2. Second thing
3. Third thing

## Hard Rules

1. Never do X
2. Always do Y
3. Output format constraint

## Input Format (optional)

What the worker sends to the LLM. Document the shape
so the LLM knows what to expect.

## Output Format (optional)

What the LLM should return. Be explicit — JSON schema,
markdown template, or plain text with structure.

## Domain-Specific Sections

Whatever the worker needs to know about the domain.
API reference, data dictionary, business rules, etc.
```

Tips for Good Personas

Be specific about what the worker is NOT. "You are a grocery deal hunter" is less useful than "You are a grocery deal hunter. You do not give nutrition advice, meal plans, or recipe suggestions."

Constrain the output format. If you want JSON, say "Output structured JSON only. No prose, no markdown, no code fences." LLMs love to wrap JSON in markdown code blocks unless you tell them not to.

Include hard numbers. "Budget ceiling: $225/week" is better than "be mindful of budget." The LLM will use the specific number.

Document negations. If a persona should never match certain content, add a "What X is NOT" section. Workers can parse this to enforce exclusions in code.
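As a sketch of that enforcement, a worker might pull the exclusion list out of the persona file. The parsing below is hypothetical — it assumes the section is a markdown heading followed by bullet points, so adjust the patterns to your CLAUDE.md's actual layout:

```typescript
// Hypothetical sketch: extract bullet items from a "What X is NOT" section
// so the worker can enforce exclusions in code. Assumes markdown headings
// and "-"/"*" bullets.
function parseNegations(personaMd: string): string[] {
  const lines = personaMd.split("\n");
  const start = lines.findIndex((l) => /^#{1,3}\s+What .+ is NOT\b/i.test(l));
  if (start === -1) return [];

  const negations: string[] = [];
  for (const line of lines.slice(start + 1)) {
    if (/^#{1,3}\s/.test(line)) break;        // Next heading ends the section
    const item = line.match(/^[-*]\s+(.+)/);  // Collect bullet items
    if (item) negations.push(item[1].trim());
  }
  return negations;
}
```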

Choosing a Worker Type

Generic Worker

Use the built-in ique/worker.ts. It reads your CLAUDE.md, sends the user's raw message to the LLM, and returns the response.

```bash
BOUTIQUE_ID=my-boutique npx tsx ique/worker.ts
```

Good for:

  • General knowledge Q&A
  • Text transformation tasks
  • Anything where the LLM's training data is sufficient
  • Prototyping a boutique before building a custom worker

Bad for:

  • Tasks that need live data (prices, filings, calendar events)
  • Tasks where hallucination is a risk
  • Tasks that need structured output validation

Custom Worker (Fetch-then-Reason)

Write groups/my-boutique/worker.ts. This gives you full control over the execute step.

Good for:

  • Any task that needs real data
  • Structured output that needs post-processing
  • Intent classification with different fetch strategies

The Fetch-then-Reason Pattern

This is the core pattern for custom workers. The LLM never touches raw data sources. You fetch the data, clean it, and hand it to the LLM in a controlled format.

Step 1: Classify Intent

Use regex or keyword matching to figure out what the user wants. This is free — no LLM call.

```typescript
type Intent = "stats" | "triage" | "default";

function classifyIntent(rawContent: string): Intent {
  if (/\bstats?\b/i.test(rawContent)) return "stats";
  if (/\btriage\b/i.test(rawContent)) return "triage";
  return "default";
}
```

Step 2: Fetch Data

Based on the intent, fetch the relevant data. This could be an API call, a file read, a database query.

```typescript
async function fetchForIntent(intent: Intent): Promise<DomainData> {
  if (intent === "stats") {
    return computeStats();  // No API call needed
  }
  if (intent === "triage") {
    const tracks = loadTracks(vaultDir);
    const personas = loadPersonas(vaultDir);
    return { tracks, personas };
  }
  return { tracks: [], personas: [] };  // "default": nothing to fetch
}
```

Step 3: Build Context Document

Serialize the fetched data into a compact format the LLM can reason about. Cap the size to keep context windows small.

```typescript
function buildContext(userMessage: string, data: DomainData, maxChars = 8000): string {
  const payload = {
    query: userMessage,
    candidates: data.tracks.slice(0, 20),  // Batch cap
    personas: data.personas,
  };

  let json = JSON.stringify(payload);

  // Trim candidates until the serialized payload fits
  while (json.length > maxChars && payload.candidates.length > 1) {
    payload.candidates.pop();
    json = JSON.stringify(payload);
  }

  return json;
}
```

Step 4: Call the LLM

Send the context + system prompt to the LLM. One call, structured input.

```typescript
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const { text } = await generateText({
  model: anthropic("claude-haiku-4-5-20251001"),
  system: systemPrompt,  // Your CLAUDE.md
  messages: [{
    role: "user",
    content: `Process this data. Return JSON only.\n\n${context}`
  }],
  maxTokens: 1200,
});
```

Step 5: Post-Process

Validate the LLM's response. Strip code fences. Parse JSON. Enforce constraints the LLM might have ignored.

```typescript
function sanitize(raw: string): ValidatedResponse {
  // Strip markdown code fences ("```" or "```json")
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/i, "");
  const parsed = JSON.parse(cleaned);

  // Enforce constraints the LLM may have ignored
  for (const match of parsed.matches) {
    if (violatesNegation(match)) {
      moveToNoMatch(match);
    }
  }

  return parsed;
}
```
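Put together, the five steps compose into a single execute function. The sketch below stubs the fetch step and injects the LLM call so the control flow runs on its own; every name here is illustrative, not NanoClaw's actual API:

```typescript
type LlmCall = (system: string, user: string) => Promise<string>;

// Illustrative glue for Steps 1–5. Fetch is stubbed and the LLM call is
// injected, so only the pattern itself is shown.
async function executeTask(
  rawContent: string,
  systemPrompt: string,
  llm: LlmCall,
): Promise<unknown> {
  // Step 1: classify intent (regex, no LLM call)
  const intent = /\btriage\b/i.test(rawContent) ? "triage" : "default";

  // Step 2: fetch data (stubbed here)
  const data = intent === "triage" ? { candidates: ["a", "b"] } : { candidates: [] };

  // Step 3: build a compact context document
  const context = JSON.stringify({ query: rawContent, ...data });

  // Step 4: one structured LLM call
  const raw = await llm(systemPrompt, `Process this data. Return JSON only.\n\n${context}`);

  // Step 5: strip fences and parse
  const cleaned = raw.replace(/^```(?:json)?\s*/i, "").replace(/\s*```$/i, "");
  return JSON.parse(cleaned);
}
```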

Short-Circuit Intents

Some intents don't need the LLM at all. If the user asks for "stats" or "counts", compute the answer from the data directly and skip the LLM call.

```typescript
if (intent === "stats") {
  return formatStats(computeStats(tracks));
  // No LLM call. $0.00 cost. Instant response.
}
```

This saves money and reduces latency for queries that are pure data aggregation.
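A stats intent like this often reduces to a pure aggregation. A minimal sketch, with a hypothetical `Track` shape (your data's field names will differ):

```typescript
// Hypothetical short-circuit: count tracks by status, no LLM in the loop.
interface Track {
  status: "matched" | "no_match" | "pending";
}

function computeStats(tracks: Track[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const t of tracks) {
    counts[t.status] = (counts[t.status] ?? 0) + 1;
  }
  return counts;
}
```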

Testing

Write tests for:

  1. Frontmatter parsing — can your loader handle real files from the data source?
  2. Intent classification — does every expected phrase route to the right intent?
  3. Constraint enforcement — do negations, batch caps, and empty-data guards work?
  4. LLM response parsing — can the sanitizer handle code fences, malformed JSON, missing fields?

Mock the LLM call in tests. You're testing your logic, not the LLM's.

```typescript
import { expect, vi } from "vitest";

const llm = vi.fn().mockResolvedValue({
  text: JSON.stringify({ matches: [], no_match: [], needs_metadata: [] }),
});

await executeTask({ raw_content: "triage" }, { llm });

expect(llm).toHaveBeenCalledOnce();
const payload = JSON.parse(llm.mock.calls[0][0].messages[0].content);
expect(payload.candidates.length).toBeLessThanOrEqual(20);
```
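For item 4, exercise the sanitizer directly against fenced, bare, and malformed payloads. One defensive shape (mirroring the Step 5 fence-stripping regex; the fallback-to-null behavior is a suggestion, not NanoClaw's required contract):

```typescript
// Parse an LLM reply defensively: strip optional code fences, return null
// on malformed JSON so the caller can retry or fall back.
function parseOrNull(raw: string): unknown {
  const cleaned = raw.replace(/^```(?:json)?\s*/i, "").replace(/\s*```$/i, "");
  try {
    return JSON.parse(cleaned);
  } catch {
    return null;
  }
}
```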

File Structure

A complete custom boutique looks like:

```
groups/my-boutique/
├── CLAUDE.md          # System prompt (required)
├── worker.ts          # Custom worker (optional — generic worker works too)
└── worker.test.ts     # Tests (recommended)
```