How KiwiStack is built — from LXC containers to API conventions. Every component isolated, every boundary clear.
One component, one container. Upstream hidden, wrapper exposed.
How containers are structured, versioned, and managed throughout their lifecycle.
Each kiwi component runs in its own unprivileged LXC container. Inside: the upstream binary + the kiwi wrapper process. Nothing else.
Alpine (musl) where upstream supports it, Ubuntu (glibc) where it doesn't. No desktop environment, no GUI, no unnecessary packages.
Each container is semantically versioned (e.g. kiwi-mail:0.3.1). ZFS snapshots before every update for instant rollback.
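A hedged sketch of how an update script might derive the pre-update snapshot name from the component and the version being replaced (the dataset layout and `pre-` naming scheme are illustrative assumptions, not KiwiStack's documented convention):

```rust
// Build a ZFS snapshot name like "tank/kiwi-mail@pre-0.3.1" before an update.
// Dataset root and naming scheme are illustrative assumptions.
fn pre_update_snapshot(dataset: &str, component: &str, version: &str) -> String {
    format!("{dataset}/{component}@pre-{version}")
}

fn main() {
    let snap = pre_update_snapshot("tank", "kiwi-mail", "0.3.1");
    // A real script would then run `zfs snapshot <snap>`,
    // and `zfs rollback <snap>` for the instant-rollback path.
    println!("{snap}");
}
```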
Memory and CPU cgroup limits per container. Storage quotas. Each container gets its own network namespace on the private bridge.
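In LXC terms, those limits might appear in the container config roughly as follows (a sketch with illustrative values; the bridge name `kiwibr0` is an assumption):

```
# cgroup v2 limits (illustrative values)
lxc.cgroup2.memory.max = 512M
lxc.cgroup2.cpu.max = 100000 100000   # ~1 CPU: quota/period in microseconds

# own network namespace, attached to the private bridge
lxc.net.0.type = veth
lxc.net.0.link = kiwibr0
```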
Single-host: CLI/script manages lifecycle. Multi-host: the Vine orchestrates.
Alpine Linux (musl libc) gives smaller images and faster boots. But not every upstream runs on musl. We pick the lightest base that works.
| Component | Upstream | Lang | Base | Why |
|---|---|---|---|---|
| kiwi-id | Kanidm | Rust | Ubuntu | OpenSSL + SQLite dynamically linked to glibc. No musl builds published, not tested on Alpine. |
| kiwi-store | RustFS | Rust | Alpine | Pure Rust, Apache 2.0. Static binary compiles cleanly against musl. |
| kiwi-mail | Stalwart | Rust | Alpine pkg | x86_64 release binary is glibc-linked, but Alpine testing repo has a musl-compiled package. |
| kiwi-chat | Conduit / Tuwunel | Rust | Alpine | Publishes fully static musl binaries with jemalloc. Alpine package available. Best musl support on this list. |
| kiwi-meet | LiveKit | Go | Alpine | Go server compiles to static CGO-less binary. Official image uses Alpine 3.x. (Agents SDK needs glibc, but we don't run agents.) |
| kiwi-work | None (custom) | Rust | Alpine | Axum + libsql embedded. Pure Rust, no C dependencies. Static musl binary. |
| kiwi-docs | None (custom) | Rust | Alpine | Axum + Loro CRDT. Pure Rust. Static musl binary. |
| kiwi-search | Meilisearch | Rust | Ubuntu | Official Docker image migrated from Alpine to Debian. musl may introduce search latency. Release binaries are glibc-only. |
| kiwi-sync | Loro (embedded) | Rust | Alpine | Loro is a linked Rust library, not a separate process. Pure Rust, no C deps. |
| kiwi-gate | Pingora | Rust | Ubuntu | BoringSSL C build system doesn't target musl. Alpine Docker PR was rejected by Cloudflare. No musl builds. |
| kiwi-mcp | None (rmcp SDK) | Rust | Alpine | Pure Rust, custom service with rmcp. Static musl binary. |
| kiwi-ui | CopilotKit / AG-UI | TypeScript | Alpine | Node.js runs on Alpine (node:alpine). Only non-Rust component in the stack. |
Every LXC container follows the same directory structure. Upstream and Kiwi code are clearly separated.
```
/
├── opt/
│   ├── upstream/              # upstream binary & deps
│   │   ├── bin/stalwart-mail  # unmodified binary
│   │   ├── etc/config.toml    # upstream config
│   │   └── lib/               # upstream shared libs
│   └── kiwi/                  # kiwi wrapper
│       ├── bin/kiwi-mail      # wrapper binary
│       └── version            # e.g. "0.3.1"
├── etc/
│   └── kiwi/
│       ├── config.toml        # wrapper config (bridge IP, ports)
│       └── jwt-public.pem     # Kiwi ID public key
├── var/
│   ├── lib/
│   │   ├── upstream/          # upstream data (DB, indices)
│   │   └── kiwi/              # wrapper state (if any)
│   └── log/
│       └── kiwi/              # structured logs (JSON)
└── run/
    └── kiwi.pid               # PID file
```
```
/
├── opt/
│   └── kiwi/                  # no upstream directory
│       ├── bin/kiwi-work      # the service IS the binary
│       └── version            # e.g. "0.1.0"
├── etc/
│   └── kiwi/
│       ├── config.toml        # service config
│       └── jwt-public.pem     # Kiwi ID public key
├── var/
│   ├── lib/
│   │   └── kiwi/
│   │       └── data.db        # libsql embedded DB
│   └── log/
│       └── kiwi/              # structured logs (JSON)
└── run/
    └── kiwi.pid               # PID file
```
Upstream binary and its dependencies. Never modified. Replaced atomically on upgrade. Absent in custom services.
Kiwi wrapper binary. Apache 2.0 code. Starts upstream, proxies requests, authenticates. Single static binary.
Persistent data. Backed up by ZFS snapshots. Upstream and kiwi data are separated for clean upgrades.
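A wrapper config at `/etc/kiwi/config.toml` might look like this (a sketch only; the key names and addresses are assumptions, not the actual schema):

```toml
# /etc/kiwi/config.toml — hypothetical wrapper config sketch
[network]
bridge_ip = "10.0.3.12"   # address on the private bridge
listen_port = 8080        # port the wrapper proxies on
upstream_port = 8025      # upstream is bound to 127.0.0.1 only

[auth]
jwt_public_key = "/etc/kiwi/jwt-public.pem"  # Kiwi ID public key
```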
Private bridge connects all containers. Three ports face the internet — everything else stays internal.
Every Kiwi wrapper follows the same four-step lifecycle.
Spawn the upstream process. Bind it to 127.0.0.1 only.
Poll upstream with exponential backoff. Exit if it fails to start.
Bind Axum to the private bridge. Proxy, translate, authenticate.
Forward SIGTERM. Drain connections. Graceful shutdown.
GET /healthz for the orchestrator
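The four lifecycle steps can be sketched with only the standard library; paths, ports, and timings here are illustrative assumptions, not KiwiStack's actual values:

```rust
// Sketch of the wrapper lifecycle (steps 1 and 2 shown; 3 and 4 commented).
use std::net::TcpStream;
use std::process::{Child, Command};
use std::thread::sleep;
use std::time::Duration;

/// Step 1: spawn the upstream process (it must bind to 127.0.0.1 only).
fn spawn_upstream(bin: &str, config: &str) -> std::io::Result<Child> {
    Command::new(bin).arg("--config").arg(config).spawn()
}

/// Step 2: poll the upstream port with exponential backoff
/// (100 ms base, doubling, capped at 5 s). False means it never came up.
fn wait_ready(port: u16, attempts: u32) -> bool {
    for i in 0..attempts {
        if TcpStream::connect(("127.0.0.1", port)).is_ok() {
            return true;
        }
        sleep(Duration::from_millis((100u64 << i).min(5_000)));
    }
    false
}

fn main() {
    // Steps 3 and 4 — serving GET /healthz and the proxy on the private
    // bridge, then forwarding SIGTERM and draining connections — need an
    // HTTP and a signal crate, so they are omitted from this std-only sketch.
    let _ = spawn_upstream; // referenced so the sketch compiles cleanly
    println!("upstream ready: {}", wait_ready(1, 1)); // port 1 is closed here
}
```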
How requests flow from AI agents through the stack.
Every service in one browser tab. The AI proposes, the human decides.
Five named elements. Tell it what you need. It drafts the action. You approve.
Vertical icon bar on the left edge. Each service (Mail, Chat, Meet, Docs, Work, Store, Search) has its own color. One click swaps the entire workspace: menubar, list, content, and proposal context.
Full-width contextual navigation for the active service (Inbox/Drafts/Sent for Mail, Channels/Direct for Chat, Board/List/Timeline for Work...). The AI can open the right menu item when navigating.
Contextual list for the active service — inbox items, channels, meetings, documents, tasks, files, or search results. Selecting an item populates the Content Pane.
Full item view (email body, chat thread, document, kanban board). AI actions appear as a proposal card overlay — each AI-touched field gets a colored left border. Apply and Edit buttons live here; only the human can confirm.
Chat interface with conversation bubbles and CTA action buttons colored per target service. Can suggest actions, navigate between services, open a specific view — but never applies changes itself.
Kiwi code is Apache-2.0. Upstream licenses are respected through clear boundaries.
Use freely. Link, embed, modify. No restrictions. Most of KiwiStack's dependencies fall here.
File-level copyleft. Use unmodified binaries freely. If you modify Kanidm source files, those files must be shared under MPL-2.0.
Network boundary only. Never link. Never embed. Communicate exclusively over HTTP. The network boundary keeps Kiwi code Apache-2.0.
Never `use` or `extern crate` an AGPL crate.
`cargo deny check licenses` in CI
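A minimal `deny.toml` sketch for that gate (the allow-list is illustrative; because no AGPL license appears in it, `cargo deny check licenses` fails CI if an AGPL crate shows up anywhere in the dependency graph):

```toml
# deny.toml — license gate (illustrative allow-list)
[licenses]
allow = ["Apache-2.0", "MIT", "MPL-2.0", "BSD-3-Clause"]
```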
Internal HTTP/JSON between containers. MCP for AI agents. JWT for auth.
/api/v1/{resource} base path
`{service}.{action}` MCP tool names
```json
{
  "data": { ... },
  "meta": {
    "request_id": "req_abc123",
    "timestamp": "2026-02-28T12:00:00Z"
  }
}
```
```json
{
  "error": {
    "code": "NOT_FOUND",
    "message": "Resource not found"
  },
  "meta": {
    "request_id": "req_abc123"
  }
}
```
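A std-only sketch of producing the error envelope; a real service would use serde, and the helper name here is hypothetical:

```rust
// Format the standard error envelope as a JSON string.
// Field set matches the convention above; helper name is an assumption.
fn error_envelope(code: &str, message: &str, request_id: &str) -> String {
    format!(
        "{{\"error\":{{\"code\":\"{code}\",\"message\":\"{message}\"}},\"meta\":{{\"request_id\":\"{request_id}\"}}}}"
    )
}

fn main() {
    println!("{}", error_envelope("NOT_FOUND", "Resource not found", "req_abc123"));
}
```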
Seven phases from foundation to commercial platform. Each unlocks the next.