Architecture

Inner Warden Architecture: A 30-Minute Tour

April 25, 2026 · 12 min read

Three binaries, fourteen crates, one repo

The Inner Warden workspace ships three binaries and fourteen crates. The binaries are sensor, the deterministic kernel-side collector; agent, the interpretive layer that does triage and runs the dashboard; and ctl, the CLI you use to install, configure, and inspect the other two.

The split is intentional. The sensor never calls an LLM, never makes an HTTP request, never imports an AI dependency. You can audit it without holding the whole world in your head. The agent does the interpretive work and is allowed to be more opinionated about it.

The sensor

Lives at crates/sensor/. Its job is to collect events from the kernel and from userland (proc, sysctl, journald), run them through 49 detectors and 40 eBPF hooks, and emit findings to a JSONL stream. That is it. No network, no AI, no dashboard.

Three sub-paths matter when you are getting oriented: src/collectors/ for the things that produce events, src/detectors/ for the rules that fire on them, and src/sinks/ for the JSONL writer and a few smaller exporters. Everything else in the crate is plumbing for those three.
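
The collectors → detectors → sinks flow can be sketched as three traits wired together. This is an illustrative sketch only: the trait names and the Event/Finding types are assumptions, since the post only fixes the directory layout, not the actual interfaces.

```rust
// Hypothetical sketch of the collectors -> detectors -> sinks pipeline.
// Trait names and the Event/Finding types are illustrative, not the real API.

#[derive(Debug, Clone)]
struct Event {
    source: &'static str,
    payload: String,
}

#[derive(Debug)]
struct Finding {
    detector: &'static str,
    severity: &'static str,
    detail: String,
}

trait Collector {
    fn poll(&mut self) -> Vec<Event>;
}

trait Detector {
    fn inspect(&self, event: &Event) -> Option<Finding>;
}

trait Sink {
    fn emit(&mut self, finding: Finding);
}

// A toy collector standing in for a proc/sysctl/journald reader.
struct StaticCollector(Vec<Event>);
impl Collector for StaticCollector {
    fn poll(&mut self) -> Vec<Event> {
        std::mem::take(&mut self.0)
    }
}

// A toy rule that fires on a substring match.
struct SubstringDetector { needle: &'static str }
impl Detector for SubstringDetector {
    fn inspect(&self, event: &Event) -> Option<Finding> {
        event.payload.contains(self.needle).then(|| Finding {
            detector: "substring",
            severity: "low",
            detail: format!("{}: {}", event.source, event.payload),
        })
    }
}

// The JSONL writer in src/sinks/ would serialize here; this one just counts.
struct CountingSink { emitted: usize }
impl Sink for CountingSink {
    fn emit(&mut self, _finding: Finding) { self.emitted += 1; }
}

fn run_pipeline(c: &mut dyn Collector, ds: &[Box<dyn Detector>], s: &mut dyn Sink) {
    for event in c.poll() {
        for d in ds {
            if let Some(f) = d.inspect(&event) {
                s.emit(f);
            }
        }
    }
}
```

The point of the shape is that detectors are pure functions of events, so they stay auditable in isolation.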

The eBPF programs themselves live under crates/sensor-ebpf/. That is a separate #![no_std] crate that builds with the BPF target. The sensor binary embeds the compiled object via include_bytes! at build time.

The agent

Lives at crates/agent/. It tails the sensor's JSONL stream, runs 40 correlation rules, builds incidents, optionally calls a local LLM for triage, persists state to SQLite, and serves a small web dashboard. Killchain and DNA both run inline here as modules; they do not have their own daemons in the default build.

The interesting top-level files when you are exploring: src/correlate/ for the rule engine, src/triage/ for the AI router and the local classifier wiring, src/dashboard/ for the HTTP routes that the browser hits, and src/state/ for everything that touches SQLite.

The CTL

Lives at crates/ctl/. It is the operator-facing CLI. Its main.rs dispatches to small modules under src/commands/. Two adjacent modules also live here: src/scan.rs for security audits and src/harden.rs for the hardening pipeline. They are CLI features, not daemon features.
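
A minimal sketch of that dispatch shape, with invented command names and return values (the real ctl surface is not spelled out in this post):

```rust
// Hypothetical dispatcher in the spirit of ctl's main.rs: match on the
// subcommand and hand off to a module. Commands and messages are invented.
fn dispatch(argv: &[&str]) -> Result<&'static str, String> {
    match argv {
        ["scan", ..] => Ok("ran security audit"),       // would call src/scan.rs
        ["harden", ..] => Ok("ran hardening pipeline"), // would call src/harden.rs
        [cmd, ..] => Err(format!("unknown command: {cmd}")),
        [] => Err("usage: ctl <command>".into()),
    }
}
```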

The JSONL contract

The sensor writes findings to /var/lib/innerwarden/findings.jsonl and rotates that file on a size threshold. The agent tails it. That is the entire contract between the two binaries. You can replace the sensor with a script that writes the same shape and the agent will not notice.

The shape is documented in the wiki under Configuration, but the short version is: one JSON object per line, each line has ts, detector, severity, and a fields object whose schema depends on the detector. The format is forward-compatible by convention; the agent ignores fields it does not know.
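
One line of findings.jsonl might look like this. The detector name and the contents of fields are invented for illustration; only the ts, detector, severity, and fields keys are fixed by the contract:

```json
{"ts":"2026-04-25T09:14:02Z","detector":"proc_exec_anomaly","severity":"medium","fields":{"pid":4312,"exe":"/usr/bin/nc"}}
```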

Why SQLite is the source of truth (post spec 037)

For most of the project's life, the agent kept incident state in a mix of in-memory structures and small JSON files under /var/lib/innerwarden/state/. That was simple, but it meant a crash could lose minutes of correlation work, and adding a new piece of state meant inventing a new file format.

Spec 037 consolidated everything into one SQLite database at /var/lib/innerwarden/state.db. Incidents, kill chain progress, mesh trust scores, sink dedupe windows, and the dashboard's auth tokens all live there. WAL mode is on. Writes go through spawn_blocking from the async loop. If something is not in SQLite after spec 037, treat it as a bug.
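
As a rough mental model, the database might be shaped like this. The table names and columns below are hypothetical; the post only fixes which domains moved into state.db, not their schemas:

```sql
-- Hypothetical sketch of state.db after spec 037. Real table names and
-- columns are not documented here; only the domains are.
PRAGMA journal_mode = WAL;

CREATE TABLE IF NOT EXISTS incidents (
    id        INTEGER PRIMARY KEY,
    opened_ts TEXT NOT NULL,
    severity  TEXT NOT NULL,
    summary   TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS killchain_progress (
    incident_id INTEGER NOT NULL REFERENCES incidents(id),
    stage       TEXT NOT NULL,
    reached_ts  TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS auth_tokens (
    token_hash TEXT PRIMARY KEY,
    expires_ts TEXT NOT NULL
);
```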

The slow_loop tick

The agent has one master scheduler called slow_loop. It ticks once per second by default. Every periodic task (correlation flush, dashboard cache refresh, mesh gossip, autoencoder retraining check, sink retry) registers itself with a cadence and the slow_loop calls it on the right schedule.

The reason this exists is operational. One loop, one log prefix, one place to add a new periodic. If you find yourself wanting to spawn a fresh tokio task with its own tokio::time::interval, think twice and probably register with the slow_loop instead.
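
The registration pattern can be sketched in a few lines of plain Rust. This is a simplified model, not the agent's code: names are invented, and the real slow_loop is driven from the async runtime rather than a bare loop.

```rust
// Minimal sketch of a slow_loop-style scheduler: one tick function, and
// periodic tasks registered with a cadence measured in ticks (one second
// per tick in the agent). Names are illustrative.
struct SlowLoop {
    now: u64,
    tasks: Vec<(&'static str, u64, Box<dyn FnMut()>)>,
}

impl SlowLoop {
    fn new() -> Self {
        SlowLoop { now: 0, tasks: Vec::new() }
    }

    // cadence_ticks: run this task every N ticks.
    fn register(&mut self, name: &'static str, cadence_ticks: u64, task: Box<dyn FnMut()>) {
        self.tasks.push((name, cadence_ticks, task));
    }

    // Called once per second: run every task whose cadence divides the tick.
    fn tick(&mut self) {
        self.now += 1;
        let now = self.now;
        for (_name, cadence, task) in self.tasks.iter_mut() {
            if now % *cadence == 0 {
                task();
            }
        }
    }
}
```

One registry means one place to see every periodic and one log prefix to grep, which is exactly the operational argument the post makes.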

How the dashboard reads state

The dashboard is a small server-rendered HTML app served by the agent on a local port. The routes live at crates/agent/src/dashboard/. They read SQLite directly using a read-only connection pool. They never block the slow_loop, and they never write to the database from a request handler; user actions go through a small command queue that the slow_loop drains.
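
The "handlers enqueue, slow_loop drains" pattern can be sketched with a standard channel. The Command variants below are invented for illustration; the real queue and its commands live inside the agent.

```rust
use std::sync::mpsc;

// Sketch of a command queue between request handlers and the slow_loop.
// Variants are hypothetical, not the agent's actual command set.
#[derive(Debug, PartialEq)]
enum Command {
    AckIncident(u64),
    MuteDetector(String),
}

// A request handler never writes to SQLite directly; it just enqueues.
fn handle_ack(tx: &mpsc::Sender<Command>, incident_id: u64) {
    tx.send(Command::AckIncident(incident_id)).expect("queue alive");
}

// Called from a slow_loop tick: drain whatever accumulated, then apply the
// writes in one place, on one thread's schedule.
fn drain_commands(rx: &mpsc::Receiver<Command>) -> Vec<Command> {
    rx.try_iter().collect()
}
```

Keeping all writes on the drain side is what lets the read path stay a read-only connection pool.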

The 18 frontend modules under crates/agent/src/dashboard/static/ are vanilla JS, no build step. The HTML is generated by the Rust handler. There is no React, no bundler, no node dependency. That is on purpose. The dashboard has to keep working when nothing else does.

Where the satellites fit

The shield, hypervisor, and SMM daemons live in their own repos. They consume the same JSONL stream and write into the same SQLite database via a small writer crate. They are optional. A clean install of Inner Warden runs sensor and agent only, and that is the configuration we test against in CI.

Reading order if you only have an hour

Start with crates/sensor/src/main.rs, follow it into the collector registry, then jump into one detector under crates/sensor/src/detectors/. Then open crates/agent/src/main.rs, find the slow_loop registration, and walk one correlation rule under crates/agent/src/correlate/. An hour gets you most of the picture.

Read more: Your first detector in 50 lines · The cross-layer correlation engine