# Queue & Workers

## The Queue
Every message becomes a row in the `ique_queue` table. Workers poll this table for tasks assigned to their `boutique_id`.

The queue is a SQLite table, not a message broker. There's no pub/sub, no acknowledgment protocol, no dead-letter queue. Workers poll, claim, execute, and complete. If a worker crashes mid-task, the task stays in `processing` until someone investigates.
## Worker Lifecycle
Every worker follows the same loop:
```
while running:
  1. POLL     — SELECT oldest task WHERE boutique=mine AND status='queued'
  2. CLAIM    — UPDATE status='processing' WHERE task_id=? AND status='queued'
                (atomic — prevents double-execution)
  3. EXECUTE  — Run domain logic (fetch data, call LLM, etc.)
  4. COMPLETE — UPDATE status='completed', response_content=?
     or FAIL  — UPDATE status='failed', error_message=?
  5. SLEEP    — Wait POLL_INTERVAL_MS (default: 3000ms)
```

The claim step is critical. The `WHERE status='queued'` in the UPDATE means only one worker can claim a task, even if multiple workers poll simultaneously. SQLite's single-writer lock guarantees atomicity.
## Two Worker Types
### Generic Worker (`ique/worker.ts`)

Loads the boutique's CLAUDE.md as a system prompt and passes the user's message straight to the LLM. No data fetching, no post-processing.

```bash
BOUTIQUE_ID=my-boutique npx tsx ique/worker.ts
```

Good for: simple Q&A, conversational assistants, anything where the LLM's training data is sufficient.
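The shape of that execute step, sketched with stand-ins. Both `loadPrompt` and `callLLM` are hypothetical: the real worker reads CLAUDE.md from disk and calls the Anthropic SDK.

```typescript
// Hypothetical sketch: the generic worker is just system prompt + message.
async function executeGeneric(
  message: string,
  loadPrompt: () => string, // stands in for reading the boutique's CLAUDE.md
  callLLM: (system: string, user: string) => Promise<string> // SDK stand-in
): Promise<string> {
  // No data fetching, no post-processing: one call, straight through.
  return callLLM(loadPrompt(), message);
}

// Wired up with stubs to show the flow:
const reply = await executeGeneric(
  "What is a dead-letter queue?",
  () => "You are a concise assistant.",
  async (system, user) => `[system: ${system.length} chars] ${user}`
);
```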
### Custom Worker (`groups/{boutique}/worker.ts`)

Overrides the execute step with domain-specific logic. The typical pattern is Fetch-then-Reason:

1. Parse the user's intent (regex, keyword match — no LLM cost)
2. Fetch domain data (API call, file read, database query)
3. Build a compact context document (JSON or markdown)
4. Call the LLM with the context + CLAUDE.md system prompt
5. Post-process the response (validate JSON, enforce constraints)

Good for: anything that needs real data. The LLM reasons over facts you provide, not facts it imagines.
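The five steps above can be sketched as a single execute function. A sketch only: `fetchDomainData` and `callLLM` are hypothetical stand-ins for your API client and the Anthropic SDK, and the 4000-character cap is an arbitrary example budget:

```typescript
type Task = { task_id: number; message: string };

async function execute(
  task: Task,
  fetchDomainData: (query: string) => Promise<unknown>,
  callLLM: (system: string, user: string) => Promise<string>
): Promise<string> {
  // 1. Parse intent cheaply: regex/keywords, no LLM call.
  const wantsSummary = /summar(y|ize|ise)/i.test(task.message);

  // 2. Fetch real data from your own systems.
  const data = await fetchDomainData(task.message);

  // 3. Build a compact, size-capped context document.
  const context = JSON.stringify(data).slice(0, 4000);

  // 4. One LLM call: CLAUDE.md-style system prompt + context + message.
  const system = wantsSummary
    ? "Summarize the provided data. Use only facts in the context."
    : "Answer using only facts in the context.";
  const raw = await callLLM(
    system,
    `Context:\n${context}\n\nUser: ${task.message}`
  );

  // 5. Post-process before it becomes response_content.
  return raw.trim();
}
```

Because the context is built before the call, the worker can log it verbatim: exactly what the LLM saw is exactly what was fetched.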
## Why Fetch-then-Reason?
The LLM never touches your API directly. It never sees raw HTML. It never guesses what the data might look like. You fetch the data, clean it, cap it to a size budget, and hand it to the LLM in a structured format.
This means:
- No hallucinations about data that doesn't exist
- Small context windows — you control exactly how much data the LLM sees
- Auditability — you can log exactly what the LLM received
- Cost control — one LLM call per task, not a chain of tool-use calls
## Environment Variables
| Variable | Default | Description |
|---|---|---|
| `BOUTIQUE_ID` | (required) | Which boutique this worker serves |
| `IQUE_DB_PATH` | `./ique/ique.db` | Path to the SQLite database |
| `IQUE_POLL_INTERVAL` | `3000` | Poll frequency in milliseconds |
| `BOUTIQUE_MODEL` | `claude-haiku-4-5-20251001` | Which Claude model to use |
| `ANTHROPIC_API_KEY` | (required) | API key for LLM calls |
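A worker's startup might read these as follows. The variable names match the table above; the `loadConfig` helper and the config shape are hypothetical:

```typescript
type Env = Record<string, string | undefined>;

function loadConfig(env: Env) {
  const mustHave = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`Missing required env var: ${name}`);
    return value;
  };
  return {
    boutiqueId: mustHave("BOUTIQUE_ID"),
    dbPath: env.IQUE_DB_PATH ?? "./ique/ique.db",
    pollIntervalMs: Number(env.IQUE_POLL_INTERVAL ?? "3000"),
    model: env.BOUTIQUE_MODEL ?? "claude-haiku-4-5-20251001",
    apiKey: mustHave("ANTHROPIC_API_KEY"),
  };
}

// In the worker this would be: const config = loadConfig(process.env);
const config = loadConfig({ BOUTIQUE_ID: "radar", ANTHROPIC_API_KEY: "sk-test" });
```

Failing fast on the two required variables keeps a misconfigured worker from silently polling the wrong boutique.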
## Running Multiple Workers
Each boutique needs its own worker process. They all share the same database.
```bash
npx tsx groups/radar/worker.ts &
npx tsx groups/track-triage/worker.ts &
npx tsx groups/meal-planner/worker.ts &
```

If a worker isn't running, tasks assigned to its boutique sit in `queued` until it starts. No data is lost.