Cross-Layer Correlation: Connecting Firmware to Userspace
A sophisticated attacker does not operate in one layer. They modify firmware to persist across reboots, inject into kernel memory to hide processes, tamper with userspace logs to cover tracks, and communicate through encrypted network channels that look benign in isolation. No single security tool sees the full picture. CrowdStrike sees the kernel. Falco sees containers. Suricata sees the network. Nobody connects firmware to userspace.
Inner Warden does. Its cross-layer correlation engine applies 23 correlation rules to events from five layers, detecting attack chains that are invisible to any single-layer product.
The five layers
Inner Warden collects events from five distinct layers of the system stack. Each layer has its own collectors, detectors, and event types:
- Firmware: MSR write monitoring (LSTAR, SMRR, FEATURE_CONTROL), I/O port access (SPI controller probing), ACPI method execution, and UEFI variable integrity. Collected via eBPF kprobes on native_write_msr and acpi_evaluate_object, plus the SMM firmware audit module.
- Kernel: 30 eBPF programs spanning tracepoints (execve, connect, openat, ptrace, setuid, mount, memfd_create, init_module, mprotect, clone, kill, io_uring), kprobes (commit_creds for privilege escalation), and LSM hooks (exec blocking, file write protection, eBPF program loading). Container-aware via cgroup_id.
- Userspace: auth logs, journald, exec audit, Docker events, integrity monitoring (file hashes), syslog firewall, and process tree analysis. 43 stateful detectors covering SSH bruteforce, credential stuffing, rootkit detection, ransomware, and more.
- Network: Nginx access/error logs, Suricata EVE JSON, TLS fingerprinting (JA3/JA4), DNS tunneling detection, C2 callback patterns, and CloudTrail for cloud network events. XDP wire-speed blocking.
- Honeypot: SSH honeypot with an interactive fake shell, HTTP honeypot with credential capture, attacker command recording, and tool identification. Honeypot data reveals attacker intent before they reach real systems.
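Before events from such different sources can be correlated, they need a common shape. As a rough sketch, a normalized cross-layer event might look like the following; the `Event` type, its field names, and the detector strings are illustrative assumptions, not Inner Warden's actual schema:

```rust
// Hypothetical normalized event shape; field and type names are
// illustrative assumptions, not Inner Warden's real schema.
#[derive(Debug, Clone, PartialEq)]
enum Layer {
    Firmware,
    Kernel,
    Userspace,
    Network,
    Honeypot,
}

#[derive(Debug, Clone)]
struct Event {
    layer: Layer,
    detector: String, // e.g. "native_write_msr" or "ssh_bruteforce"
    timestamp_ns: u64,
    // Candidate pivot entities; each collector fills in what it knows.
    src_ip: Option<String>,
    user: Option<String>,
    pid: Option<u32>,
    container_id: Option<String>,
}
```

A kernel-layer ptrace event, for instance, would carry a `pid` but likely no `src_ip`, while a Suricata event would carry the reverse.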
CL-004: The chain nobody else sees
This is the correlation rule that demonstrates why cross-layer detection matters. CL-004 connects three events across three layers:
Firmware layer. The LSTAR MSR holds the syscall entry point address. Writing to it redirects all system calls through attacker-controlled code. Detected by the native_write_msr kprobe.
Kernel layer. The attacker uses PTRACE_POKETEXT to inject code into a running process. Detected by the ptrace tracepoint.
Userspace layer. Auth logs or syslog files are truncated or modified to hide the intrusion. Detected by the integrity monitoring collector.
Each event individually might be explainable. A kernel update could write MSR registers. A debugger uses ptrace. Log rotation truncates files. But all three happening on the same host within 10 minutes, connected by the same process lineage? That is a compromised system. CL-004 fires a Critical severity incident.
Entity-pivoted matching
Correlation rules do not just match event types. They match events that share a common entity. An entity is an IP address, a username, a process ID, or a container ID. The stages of a correlation chain must be connected by at least one shared entity.
struct CorrelationRule {
id: &'static str, // "CL-004"
name: &'static str, // "Firmware-to-Userspace Compromise"
stages: Vec<Stage>, // ordered sequence of event matchers
time_window: Duration, // max time between first and last stage
entity_pivot: EntityType, // IP | User | PID | ContainerID
severity: Severity, // Critical | High | Medium
description: &'static str,
}
struct Stage {
layer: Layer, // Firmware | Kernel | Userspace | Network | Honeypot
detector: &'static str, // which detector must fire
required: bool, // must this stage match, or is it optional?
}

The entity pivot is what makes this work. Without it, you would correlate an MSR write on one host with a ptrace on another. The pivot ensures all stages belong to the same attack, not just the same time window.
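Using the shapes above, CL-004 itself might be declared as follows. This is a sketch: the detector strings and the `cl_004` constructor are assumptions, while the three stages, the 10-minute window, the process-lineage pivot, and the Critical severity come from the description earlier.

```rust
use std::time::Duration;

// Illustrative declaration of CL-004; detector names are assumptions.
#[derive(Debug, PartialEq)]
enum Layer { Firmware, Kernel, Userspace, Network, Honeypot }

#[derive(Debug, PartialEq)]
enum EntityType { Ip, User, Pid, ContainerId }

#[derive(Debug, PartialEq)]
enum Severity { Critical, High, Medium }

struct Stage {
    layer: Layer,
    detector: &'static str,
    required: bool,
}

struct CorrelationRule {
    id: &'static str,
    name: &'static str,
    stages: Vec<Stage>,
    time_window: Duration,
    entity_pivot: EntityType,
    severity: Severity,
}

fn cl_004() -> CorrelationRule {
    CorrelationRule {
        id: "CL-004",
        name: "Firmware-to-Userspace Compromise",
        stages: vec![
            Stage { layer: Layer::Firmware,  detector: "msr_write_lstar",  required: true },
            Stage { layer: Layer::Kernel,    detector: "ptrace_injection", required: true },
            Stage { layer: Layer::Userspace, detector: "log_tampering",    required: true },
        ],
        time_window: Duration::from_secs(600), // the 10-minute window from the text
        entity_pivot: EntityType::Pid,         // "same process lineage"
        severity: Severity::Critical,
    }
}
```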
More correlation rules
CL-004 is the flagship example, but Inner Warden ships with 23 correlation rules. Here are four more that demonstrate the power of cross-layer detection:
- Network scan (port_scan detector) followed by successful exploit (web_shell or reverse_shell detector) followed by a persistence mechanism (crontab_persistence or systemd_persistence). Entity pivot: source IP. Time window: 30 minutes.
- Attacker interacts with the SSH honeypot (credential attempt plus command execution), then the same IP appears in auth_log with a successful login on a real host. The credentials or techniques tried in the honeypot worked on production.
- Container drift detected (overlayfs upper-layer write) followed by privilege escalation (commit_creds kprobe) followed by host-level process execution outside the container cgroup. Entity pivot: container ID, then PID lineage.
- TLS fingerprint matches a known C2 framework (JA3/JA4), followed by archive creation (tar, zip) of sensitive directories, followed by a large outbound transfer to the same C2 IP. Entity pivot: destination IP. Time window: 60 minutes.
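Each of these rules names the entity it pivots on. A minimal sketch of how the engine might pull the pivot entity out of an event is shown below; the `Event` fields and the `pivot_entity` helper are assumed names for illustration:

```rust
// Hypothetical pivot extraction; the Event fields are illustrative.
enum EntityType { Ip, User, Pid, ContainerId }

struct Event {
    src_ip: Option<String>,
    user: Option<String>,
    pid: Option<u32>,
    container_id: Option<String>,
}

// Returns the entity this rule pivots on, if the event carries it.
fn pivot_entity(event: &Event, pivot: &EntityType) -> Option<String> {
    match pivot {
        EntityType::Ip => event.src_ip.clone(),
        EntityType::User => event.user.clone(),
        EntityType::Pid => event.pid.map(|p| format!("pid:{p}")),
        EntityType::ContainerId => event.container_id.clone(),
    }
}
```

An event that does not carry the pivot entity simply returns `None` and cannot advance a chain, which is why an MSR write on one host never pairs with a ptrace on another.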
Why single-layer products miss these attacks
CrowdStrike Falcon is excellent at kernel-level detection. Falco is excellent at container runtime security. Suricata is excellent at network intrusion detection. But none of them can connect an event in one layer to an event in another:
- CrowdStrike sees the ptrace injection but not the MSR write that preceded it or the log tampering that followed. It does not operate at the firmware level.
- Falco sees container syscalls but cannot correlate them with network-layer TLS fingerprints or honeypot interaction data.
- Suricata sees the C2 beacon's TLS fingerprint but has no visibility into the kernel events that led to the compromise or the data staging that follows.
- SIEM solutions (Splunk, Elastic) can theoretically correlate across sources, but require manual rule writing, suffer from log shipping delays, and lack real-time entity pivoting.
Inner Warden runs on the host. It collects from all five layers simultaneously. Events flow through the correlation engine in real time with sub-second latency. There is no log shipping delay. There is no schema normalization step. The entity pivot happens in memory.
How the engine works
The correlation engine maintains a set of pending chains. When an event matches the first stage of a correlation rule, a new pending chain is created with the entity extracted from the event. Subsequent events are checked against all pending chains. If an event matches the next expected stage and shares the entity pivot, the chain advances.
Event arrives (any layer)
→ For each correlation rule:
→ Does this event match any stage?
→ Yes: check pending chains for this rule
→ Any chain with matching entity waiting for this stage?
→ Yes: advance the chain
→ Chain complete? → Emit Critical/High incident
→ Chain incomplete? → Update pending, reset timeout
→ No matching chain: is this Stage 1?
→ Yes: create new pending chain
→ No: discard (no chain to advance)
→ No: skip rule
→ Expire pending chains older than time_window
  → Garbage collect completed chains

Pending chains are bounded in memory. Each rule can have at most 1,000 pending chains; when the limit is reached, the oldest chain is evicted. This prevents memory exhaustion in noisy environments while keeping detection effective for real attacks, which complete their chains quickly.
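The loop above can be sketched as a small state machine per rule. `PendingChain`, `RuleState`, and `on_event` are assumed names; the stage matching and pivot extraction are presumed to have happened already, and error handling is omitted:

```rust
use std::time::{Duration, Instant};

// Sketch of the pending-chain logic; names are illustrative assumptions.
struct PendingChain {
    entity: String,
    next_stage: usize, // index of the stage this chain is waiting for
    started: Instant,
}

struct RuleState {
    num_stages: usize,
    time_window: Duration,
    max_pending: usize, // 1,000 per rule, per the text
    pending: Vec<PendingChain>,
}

impl RuleState {
    /// Feed one event already matched to `stage` with its pivot `entity`.
    /// Returns true when a chain completes and an incident should fire.
    fn on_event(&mut self, stage: usize, entity: &str, now: Instant) -> bool {
        // Expire chains older than the rule's time window.
        self.pending
            .retain(|c| now.duration_since(c.started) <= self.time_window);

        // Advance an existing chain with the same pivot entity.
        if let Some(i) = self
            .pending
            .iter()
            .position(|c| c.entity == entity && c.next_stage == stage)
        {
            self.pending[i].next_stage += 1;
            if self.pending[i].next_stage == self.num_stages {
                self.pending.remove(i); // garbage-collect the completed chain
                return true;            // emit Critical/High incident
            }
            return false; // chain advanced but incomplete
        }

        // No chain to advance: only a stage-1 match starts a new chain.
        if stage == 0 {
            if self.pending.len() >= self.max_pending {
                self.pending.remove(0); // evict the oldest chain at the cap
            }
            self.pending.push(PendingChain {
                entity: entity.to_string(),
                next_stage: 1,
                started: now,
            });
        }
        false
    }
}
```

Note that a non-first-stage event with no matching chain is simply discarded, so background noise like routine debugger ptrace calls never allocates state unless the earlier stages were also seen for the same entity.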
What to do next
- eBPF kernel security - deep dive into the 30 eBPF programs that feed kernel and firmware events into the correlation engine.
- Behavioral DNA - how correlated attack chains feed into attacker fingerprinting for campaign detection.
- TLS fingerprinting - how JA3/JA4 fingerprints provide the network layer signals that trigger correlation rules.
- Firmware integrity monitoring - the firmware layer collectors that detect UEFI and BIOS tampering.