The K8s observability story so far
The control plane side of Kubernetes security is in good shape. The audit log records every API request. OPA Gatekeeper and Kyverno block bad pod specs at admission. Network policies fence pods from each other. Service mesh handles mTLS. RBAC papers over the rest. Run a CIS scan and you can usually get to a respectable score with off-the-shelf tooling.
Once the pod is running, the visibility picture changes. The kubelet does not see what is happening inside containers. The audit log does not record that cron inside a pod just spawned a Python script that opened a shell. Network policies see that traffic flows; they cannot see that the binary inside the container did not exist five minutes ago.
The node layer is the gap
eBPF on the node is the single best vantage point for everything happening inside containers. Every syscall a containerised process makes is visible to the kernel, which means it is visible to a well-instrumented eBPF program on the host. Container boundaries, namespaces, cgroups: none of that hides anything from the kernel's perspective.
Falco has been the canonical answer here and it is genuinely good. We are not going to claim Inner Warden replaces it outright. The honest framing: Falco gives you eBPF-based event detection with a strong rule language and an active community. Inner Warden gives you eBPF-based event detection plus AI triage scoring, plus autonomous response policy, plus correlation across layers, plus mesh broadcast for fleet-wide blocking. Different shape of tool with real overlap.
Running as a DaemonSet
The deployment model is a DaemonSet that schedules one pod per node, with host PID namespace, host network, and the relevant capabilities. The container is the same static binary you would run directly, just packaged in a minimal image.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: innerwarden
  namespace: kube-system
spec:
  selector:
    matchLabels: { app: innerwarden }
  template:
    metadata:
      labels: { app: innerwarden }
    spec:
      hostPID: true
      hostNetwork: true
      containers:
        - name: innerwarden
          image: innerwarden/sensor:0.9.2
          securityContext:
            privileged: true
          volumeMounts:
            - { name: sys, mountPath: /sys }
            - { name: lib-modules, mountPath: /lib/modules }
            - { name: state, mountPath: /var/lib/innerwarden }
      volumes:
        - { name: sys, hostPath: { path: /sys } }
        - { name: lib-modules, hostPath: { path: /lib/modules } }
        - { name: state, hostPath: { path: /var/lib/innerwarden } }

Privileged mode is needed for eBPF program loading on most kernels that predate CO-RE and the BPF capability split. On modern kernels you can run with CAP_BPF, CAP_PERFMON, and CAP_SYS_RESOURCE instead of full privileged. The defaults are documented in the chart.
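As a sketch of that capability-split variant, assuming a kernel at 5.8 or newer (where CAP_BPF and CAP_PERFMON exist), the container's securityContext can drop full privilege; the exact set your distribution needs may differ:

    securityContext:
      privileged: false
      capabilities:
        drop: ["ALL"]
        add:
          - BPF           # load eBPF programs and create maps (kernel 5.8+)
          - PERFMON       # attach to perf events and tracepoints
          - SYS_RESOURCE  # raise the locked-memory limit for eBPF maps on pre-5.11 kernels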
Container escape detection
The classic patterns: a process inside a container ends up with a different cgroup or namespace than its parent, a process accesses /proc/1/root when it should not see the host fs, a container with CAP_SYS_ADMIN calls unshare or setns with kernel-namespace flags. These are detector-grade signals, not just warnings, and the agent ships with detectors for each.
The triage layer scores them in the context of which workload is in which namespace. A privileged daemon calling setns is fine; an application pod doing it is not.
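To make that concrete, here is what such a detector plus triage rule could look like as configuration. The schema below is purely illustrative; it is not Inner Warden's actual rule format, and every field name is an assumption:

    # Hypothetical rule schema, for illustration only.
    detectors:
      - id: container-setns
        event: syscall.setns
        when:
          in_container: true                   # caller lives in a container cgroup
          flags_any: [CLONE_NEWNS, CLONE_NEWPID, CLONE_NEWNET]
        unless:
          workload_in: privileged-allowlist    # CNI, CSI, agents declared elsewhere
        severity: high
        triage: ai                             # score against workload context before paging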
Privileged pod activity
You usually have a few legitimate privileged DaemonSets: CNI, CSI, log shippers, observability agents, the Inner Warden DaemonSet itself. The detector takes an allow-list of privileged workloads (by service account or label selector) and flags everything else.
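One plausible way to express that allow-list, sketched here as hypothetical agent configuration (the workload names are examples, not a shipped default):

    # Hypothetical privileged-workload allow-list, for illustration.
    privileged_allowlist:
      - namespace: kube-system
        serviceAccount: cilium            # CNI
      - namespace: kube-system
        labelSelector: app=ebs-csi-node   # CSI driver
      - namespace: kube-system
        labelSelector: app=innerwarden    # the agent itself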
Combined with admission-time blocks from OPA Gatekeeper, this gives a complete picture: Gatekeeper stops the privileged deploy from going through, and the host agent catches the case where one somehow ran anyway (a container escape, an RBAC gap, a supply chain compromise).
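For the admission-time half, a Constraint built on the K8sPSPPrivilegedContainer template from the Gatekeeper policy library is enough, assuming that template is installed in the cluster; the excluded namespace here is just an example of where the allow-listed DaemonSets live:

    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sPSPPrivilegedContainer
    metadata:
      name: deny-privileged-containers
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Pod"]
        excludedNamespaces: ["kube-system"]   # where the allow-listed DaemonSets run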
Mesh broadcast for fleet-wide blocking
When one node detects an outbound connection to a known bad IP at high confidence and triggers an autonomous block, the other nodes should not have to learn the same lesson independently. The mesh module signs the event with Ed25519, broadcasts to peer nodes, and applies the block fleet-wide with a trust-scored game-theoretic policy that limits the blast radius of a single compromised node.
Background: mesh network game theory.
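The knobs that govern how far a broadcast block propagates might look something like the sketch below. This is hypothetical configuration meant to show the shape of the policy, not the module's actual schema:

    # Hypothetical mesh policy, for illustration only.
    mesh:
      signing: ed25519                    # every broadcast event carries a signature
      peer_discovery: daemonset-endpoints # find sibling sensor pods
      accept:
        min_confidence: 0.9               # only high-confidence detections propagate
        min_peer_trust: 0.7               # trust score of the originating node
        max_blocks_per_peer_per_hour: 20  # cap the blast radius of one compromised node
      action: block-ip
      block_ttl: 24h                      # blocks expire unless re-confirmed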
Resource footprint
On a stock Kubernetes node with a modest workload (around 30 pods), the agent uses 60 to 100 MB of RSS and one to two percent of one vCPU at steady state. Loading the eBPF programs at startup is the only meaningful spike. Compared to a full Falco plus Falcosidekick plus log forwarder stack, the footprint is roughly comparable for the sensor portion and smaller for everything else, because the triage and notification logic runs in the same process.
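If you want to bound that explicitly, a requests/limits block on the DaemonSet container works; the values below simply mirror the numbers above and are not a tuned recommendation:

    resources:
      requests:
        cpu: 20m
        memory: 64Mi
      limits:
        cpu: 200m        # headroom for the eBPF load spike at startup
        memory: 256Mi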
Honest overlap with Falco
Both use eBPF. Both have a rule language for syscall events. Both can detect the canonical container escape patterns. Both ship with a default ruleset.
Where Inner Warden adds: AI triage scoring before alerts page you, autonomous response policy tied to confidence, correlation across host events that Falco treats as separate (Falco is event-by-event by design), the same binary running on non-Kubernetes hosts in the same fleet, and a host-side dashboard that does not require a separate backend.
Where Falco still wins: a much larger community, richer third-party rule packs from organisations like Sysdig, deeper integration with the CNCF-aligned tooling people already have. If you are heavily invested in Falco's ecosystem, run them side by side and decide based on the comparison.
Read more: Docker container security · Self-defending server