Architecture

ique.bot has four moving parts: the messaging channel, a channel adapter, the ique server, and per-boutique workers. They communicate over HTTP and share a single SQLite database.

System Diagram

┌─────────────────────┐
│  Telegram / Slack /  │  Channel (user-facing)
│  Discord / WhatsApp  │
└──────────┬──────────┘
           │ messages
           ▼
┌─────────────────────┐
│  Adapter            │  adapters/telegram.ts
│  (channel-specific) │  Converts channel messages to a standard payload.
│                     │  POST /ique/ingest → ique server
│                     │  GET  /ique/delivery → send responses back
└──────────┬──────────┘
           │ HTTP
           ▼
┌─────────────────────┐
│  ique Server        │  ique/server.ts (port 3900)
│  (router + queue)   │  Routes messages to boutiques.
│                     │  Manages task lifecycle in SQLite.
└──────────┬──────────┘
           │ SQLite (ique.db)
           ▼
┌─────────────────────┐
│  Workers            │  groups/{boutique}/worker.ts
│  (one per boutique) │  Poll for tasks. Fetch data. Call LLM. Complete task.
└─────────────────────┘
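The adapter's "converts channel messages to a standard payload" step might look like the sketch below. The field names and types are assumptions for illustration, not the actual ique.bot contract:

```typescript
// Hypothetical shape of the payload an adapter POSTs to /ique/ingest.
interface IngestPayload {
  source_channel: string;
  channel_user_id: string;
  raw_content: string;
  channel_metadata: { chat_id: string; message_id: number };
}

// Minimal slice of a Telegram Bot API "message" update.
interface TelegramUpdate {
  message: {
    message_id: number;
    text?: string;
    chat: { id: number };
    from: { id: number };
  };
}

// Flatten a channel-specific update into the channel-agnostic payload.
// chat_id and message_id are kept so the adapter can reply in-thread later.
export function toIngestPayload(update: TelegramUpdate): IngestPayload {
  const m = update.message;
  return {
    source_channel: "telegram",
    channel_user_id: String(m.from.id),
    raw_content: m.text ?? "",
    channel_metadata: { chat_id: String(m.chat.id), message_id: m.message_id },
  };
}
```

Because the payload is channel-agnostic, the server never needs to know which messenger a task came from until delivery time.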

Process Model

start.sh launches all processes in a single terminal:

Process                  File                     Role
ique server              ique/server.ts           HTTP API. Routes inbound messages. Manages the SQLite queue.
Worker (per boutique)    groups/{id}/worker.ts    Polls the queue for assigned tasks. Executes domain logic.
Adapter (per channel)    adapters/telegram.ts     Bridges a messaging channel to the ique server via HTTP.

All processes share one SQLite database (ique/ique.db) using WAL mode for concurrent access.

Task Lifecycle

Every message becomes a task. Tasks move through a state machine:

pending_routing → queued → processing → completed → delivered
                               │
                               ▼
                             failed

State            What happened
pending_routing  Adapter submitted the message; the router hasn't classified it yet.
queued           Router assigned it to a boutique. Waiting for a worker to pick it up.
processing       Worker claimed it (atomic UPDATE). Executing domain logic.
completed        Worker wrote response_content. Waiting for the adapter to deliver.
delivered        Adapter sent the response to the user on the original channel.
failed           Worker threw an error. The error_message field has details.
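The transitions above can be encoded as a small lookup table. This is a sketch of the state machine as documented here, not code from the project; it assumes the listed states are exhaustive and that failed is only reachable from processing:

```typescript
// Legal next-states for each task status.
const TRANSITIONS: Record<string, string[]> = {
  pending_routing: ["queued"],
  queued: ["processing"],
  processing: ["completed", "failed"],
  completed: ["delivered"],
  delivered: [], // terminal
  failed: [],    // terminal
};

// Guard a status change before writing it to the queue.
export function canTransition(from: string, to: string): boolean {
  return TRANSITIONS[from]?.includes(to) ?? false;
}
```

Centralizing the table makes illegal writes (e.g. delivering a task that never completed) a one-line check.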

Database Schema

Three tables. That's it.

boutiques — Registry of available workers

```sql
boutique_id   TEXT PRIMARY KEY   -- "radar", "track-triage", etc.
display_name  TEXT               -- Human-readable name
status        TEXT               -- "active", "maintenance", "disabled"
description   TEXT               -- Fed to the semantic router for classification
api_port      INTEGER            -- Optional: backing service port
```

users — Multi-channel identity mapping

```sql
internal_user_id  TEXT PRIMARY KEY
telegram_id       TEXT UNIQUE
discord_id        TEXT UNIQUE
slack_id          TEXT UNIQUE
allowed_boutiques TEXT           -- JSON array of boutique IDs
```

New users are auto-created on their first message, with access to all active boutiques.
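That auto-provisioning step might look like the following sketch. The column names follow the users schema above, but the helper itself, its signature, and the use of randomUUID are assumptions:

```typescript
import { randomUUID } from "node:crypto";

type Channel = "telegram" | "discord" | "slack";

// Mirrors the users table above.
interface UserRow {
  internal_user_id: string;
  telegram_id: string | null;
  discord_id: string | null;
  slack_id: string | null;
  allowed_boutiques: string; // JSON array of boutique IDs
}

// Build the row for a user seen for the first time on some channel.
// Only the originating channel's ID column is filled in; the others stay
// NULL until the same person is linked from another channel.
export function newUser(
  channel: Channel,
  channelUserId: string,
  activeBoutiques: string[],
): UserRow {
  return {
    internal_user_id: randomUUID(),
    telegram_id: channel === "telegram" ? channelUserId : null,
    discord_id: channel === "discord" ? channelUserId : null,
    slack_id: channel === "slack" ? channelUserId : null,
    // First-time users get every currently active boutique.
    allowed_boutiques: JSON.stringify(activeBoutiques),
  };
}
```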

ique_queue — The task ledger

```sql
task_id            TEXT PRIMARY KEY
internal_user_id   TEXT REFERENCES users
source_channel     TEXT             -- "telegram", "discord", etc.
assigned_boutique  TEXT REFERENCES boutiques
raw_content        TEXT             -- User's original message
channel_metadata   TEXT             -- JSON: chat_id, message_id (for replies)
status             TEXT             -- pending_routing → queued → ... → delivered
response_content   TEXT             -- Worker's answer
error_message      TEXT             -- If failed
created_at         TEXT
completed_at       TEXT
```

Key Design Decisions

SQLite, not Postgres/Redis. One file, zero ops. WAL mode handles concurrent reads. This is a personal tool, not a distributed system.

Separate processes, not threads. If one worker crashes, the others keep running. The queue holds tasks until the worker restarts. No shared memory, no mutex, no race conditions.

HTTP between adapter and server. The adapter doesn't import any ique code. It talks to the server over HTTP. This means you can run the adapter on a different machine, or swap it for a different channel without touching the server.

Stateless tasks. No conversation history, no session state. Each message is routed and processed independently. This keeps workers simple and context windows small.