The wrong question
"Should we go agent or agentless?" is the wrong question. The two architectures see different things. A serious security stack uses both, with each doing what it is good at. The right question is which problem you are solving today, and which architecture is cheaper for that problem.
This post is a clean comparison: what each sees, what each misses, where each costs more, and the specific scenarios where one obviously wins.
What "agentless" actually means
Agentless covers three different things. Network IDS, like Suricata or Zeek watching a tap or running inline, sees flows and packets. Log scrapers, like Filebeat or Promtail pointed at remote files, see whatever the host already wrote down. And cloud APIs, like AWS Config or GCP Security Command Center, see configuration and posture. They have different strengths but share the property that they do not run code on the protected host.
That property is the entire pitch. No new process, no kernel attack surface, no agent to update, no install on every node. For some teams that is decisive. For others it is a ceiling on what is detectable.
What "agent" actually means
Agent means a process running on the host that taps the kernel, file system, and network from inside. In 2026, the dominant flavor is eBPF, which lets the agent attach to the kernel via verified bytecode without loading a custom module. The agent sees execve, openat, connect, module loads, and capability changes.
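To make that concrete, here is a minimal sketch of the event stream such a sensor surfaces. The `KernelEvent` record and its field names are hypothetical, not any particular agent's schema; the point is that each record corresponds to a syscall observed from inside the kernel.

```python
from dataclasses import dataclass

# Hypothetical record an eBPF sensor might emit per traced syscall.
# Field names are illustrative, not a real agent's schema.
@dataclass
class KernelEvent:
    pid: int
    comm: str      # process name
    syscall: str   # e.g. "execve", "openat", "connect"
    arg: str       # primary argument: path, address, module name

# The syscalls named above: process execution, file opens,
# outbound connects, module loads, capability changes.
TRACED = {"execve", "openat", "connect", "init_module", "capset"}

def should_emit(ev: KernelEvent) -> bool:
    """Keep only syscalls the sensor attaches probes to."""
    return ev.syscall in TRACED

events = [
    KernelEvent(4211, "bash", "execve", "/usr/bin/curl"),
    KernelEvent(4211, "curl", "connect", "203.0.113.7:443"),
    KernelEvent(4211, "curl", "write", "/dev/null"),  # not traced
]
kept = [e for e in events if should_emit(e)]
```

No flow record or cloud API produces events at this granularity; that is the visibility the install cost buys.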
The cost is straightforward. You install something on every host. You update it. You trust it not to crash production. In return you get visibility no flow record can give you.
The encrypted traffic problem
In 2026 the share of TLS-encrypted traffic on the open internet is well past 95 percent. A network IDS at the edge sees TLS flows: source, destination, SNI, JA3 fingerprint, byte counts. That is enough for some detection (beaconing patterns, suspicious destinations) and not enough for the rest. You cannot inspect the request body, the response, the file uploaded, or the command sent.
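Beaconing is the clearest example of what flow metadata alone can still catch. A C2 implant that checks in on a fixed interval produces metronome-regular flows; a simple sketch, assuming you already have per-destination flow timestamps, is to score the regularity of inter-arrival times:

```python
from statistics import mean, pstdev

def beacon_score(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-arrival gaps between flows
    to one destination. Near zero means metronome-regular traffic,
    a classic beacon shape visible even when payloads are TLS-encrypted."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too few flows to judge
    return pstdev(gaps) / mean(gaps)

# A flow firing every ~60s with tiny jitter vs. ordinary browsing.
beacon = [0.0, 60.1, 120.0, 179.9, 240.2]
browsing = [0.0, 5.0, 90.0, 95.0, 400.0]
```

Note what this detector does not tell you: which process beaconed, what it sent, or what came back. That is where the metadata-only view ends.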
An agent on the host sees the request after TLS termination. For HTTP-based C2, that is the difference between flagging a flow as "this destination is suspicious" and seeing the actual command issued. For most post-exploitation, the agent's view is the only useful one.
The runtime versus config split
Cloud APIs are powerful for a class of question: "is this S3 bucket public", "is this IAM role over-privileged", "is this security group open to the world". You answer those at fleet scale by querying the cloud. No agent needed. For a thousand accounts, this is the only sane architecture.
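The "is this bucket public" question reduces to a pure function over configuration, which is why it scales to a thousand accounts with no agent. A deliberately simplified sketch (real CSPM tooling such as AWS Config or IAM Access Analyzer evaluates many more conditions, like ACLs and condition keys):

```python
def is_bucket_public(policy: dict) -> bool:
    """Simplified posture check: flag any Allow statement whose
    Principal is the wildcard. Illustrative only; a real public-access
    determination considers ACLs, conditions, and account settings."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        ):
            return True
    return False

open_policy = {"Statement": [
    {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject"},
]}
scoped_policy = {"Statement": [
    {"Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
     "Action": "s3:GetObject"},
]}
```

The input is a document the cloud API hands you. Nothing about the host is involved, which is exactly why this architecture is the sane one for posture.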
They are useless for runtime detection. The cloud API does not know that PID 12993 just read /etc/shadow and curled to cdn-static-9f2a.tld. That is T1003 (OS Credential Dumping) and T1071 (Application Layer Protocol C2). The cloud API never sees it because none of it is configuration. Only the host sees it.
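The host-side view of that same sequence is a correlation over per-process events. A minimal sketch, assuming a hypothetical event feed of `(pid, kind, detail)` tuples like the one an agent would provide:

```python
def flag_cred_exfil(events):
    """Flag PIDs that open /etc/shadow and later make an outbound
    connection: the T1003 -> T1071 sequence described above.
    Illustrative only; the event feed is a hypothetical agent's output."""
    read_shadow = set()
    flagged = set()
    for pid, kind, detail in events:
        if kind == "open" and detail == "/etc/shadow":
            read_shadow.add(pid)
        elif kind == "connect" and pid in read_shadow:
            flagged.add(pid)
    return flagged

timeline = [
    (12993, "open", "/etc/shadow"),
    (12993, "connect", "cdn-static-9f2a.tld:443"),
    (700,   "connect", "repo.internal:443"),  # benign: no shadow read
]
```

The join key is the PID, a concept that simply does not exist in flow records or configuration snapshots.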
When agentless wins
Agentless is the right call when you have many hosts, you do not control all of them, and the question you are answering is "what is the configuration". CSPM and posture management are the canonical use cases. So is east-west flow analysis on a big network where deploying agents is politically impossible. And so is auditing a partner environment where you are not allowed to install anything.
For these problems, an agent buys you nothing extra and costs you a real ongoing operational burden.
When agent wins
Agent wins for endpoint detection and response (EDR). The questions that need agent visibility include: did a process touch a sensitive file, did a binary load a kernel module, did a container break out, did a credential get read by a process that should not read it, and did a parent-child chain look like post-exploitation. None of those are answerable from flows or cloud APIs.
Behavioral baselines also need agents. The autoencoder cannot learn a useful per-host normal from network flows alone. It needs the process tree.
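A stripped-down sketch of what "needs the process tree" means: learn the set of (parent, child) process pairs seen during a quiet period, then flag pairs never observed before. This is a stand-in for the richer model the post alludes to, but even this trivial version consumes a feature that flows cannot express.

```python
class ProcessTreeBaseline:
    """Minimal per-host baseline over (parent, child) process pairs.
    Hypothetical sketch; a real model would use richer features
    (arguments, file access, timing), but all of them come from
    the process tree, not from network flows."""

    def __init__(self) -> None:
        self.known: set[tuple[str, str]] = set()

    def learn(self, parent: str, child: str) -> None:
        self.known.add((parent, child))

    def is_anomalous(self, parent: str, child: str) -> bool:
        return (parent, child) not in self.known

b = ProcessTreeBaseline()
for pair in [("systemd", "nginx"), ("nginx", "nginx"), ("sshd", "bash")]:
    b.learn(*pair)
```

A web server spawning a shell (`nginx` -> `sh`) is anomalous under this baseline; an admin's SSH session (`sshd` -> `bash`) is not. Neither distinction is visible in a flow record.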
Comparison table

                         Agentless (network + cloud API)     Agent (eBPF on host)
Encrypted traffic        Metadata only (SNI, JA3, bytes)     Content after TLS termination
Config and posture       Fleet-scale, the right tool         Not its job
Runtime events           Invisible                           execve, openat, connect, module loads
Deployment cost          Nothing to install on hosts         Install, update, and trust on every host
Hosts you cannot touch   Works                               Not an option
The hybrid that actually ships
Mature security programs run both. Cloud APIs and CSPM for configuration. Network sensors for east-west flows and ingress. Agents on the workloads that matter most: the database, the auth server, the build runner, the bastion. The agent does not need to be everywhere. It needs to be where compromise is expensive.
For solo operators and small teams, the calculus tips toward agent. You probably have under twenty hosts. The fleet-scale argument disappears. The visibility argument does not.
Inner Warden's choice
Inner Warden is an agent. Specifically an eBPF sensor plus a triage agent that runs on each host. The choice is deliberate: the detection problems we care about are runtime post-exploitation problems, and those are the problems agentless tooling cannot solve. We do not pretend we replace CSPM. We do replace the host-side EDR that would otherwise be a line item.
For the kernel-level details, see eBPF kernel security. For why a process tree matters more than a flow, see Behavioral DNA fingerprinting.