Why Your Server Gets 4000+ SSH Attacks Per Day (And What To Do About It)
The reality of a public server
Every server with port 22 open to the internet is under attack right now. Not maybe. Not sometimes. Right now, as you read this sentence.
Automated bots scan the entire IPv4 address space continuously, looking for SSH services. When they find one, they start guessing credentials. They do not rest. They do not take weekends off. A freshly provisioned VPS will receive its first SSH brute-force attempt within minutes of going online, often before you have even finished configuring it.
Most server operators know this in theory but are still surprised when they see the actual numbers. The volume is not dozens or hundreds. It is thousands of attempts every single day.
What the data shows
From the Inner Warden live feed, here is what a typical 24-hour window looks like on a single production server: more than 4,000 failed SSH login attempts, arriving around the clock.
The top source countries are consistently China, the Netherlands, Russia, Indonesia, and the United States. Many of these IPs belong to compromised cloud instances and rented VPS infrastructure, not individual attackers sitting at keyboards. The Netherlands shows up heavily because of cheap VPS providers that are slow to respond to abuse reports.
The pattern is predictable: waves of rapid-fire attempts from a single IP, followed by a rotation to a new IP. The usernames tell the story: root, admin, ubuntu, postgres, deploy, git. They try the obvious ones first, then move through dictionary lists.
Three types of SSH attacks
Not all SSH attacks look the same. Understanding the differences matters because each type requires a different detection strategy.
- Brute force (single IP) - one IP hammers your server with hundreds or thousands of login attempts in rapid succession. This is the simplest attack and the easiest to detect. The IP tries the same username with different passwords, or rotates through a short list of common usernames. fail2ban catches this reliably because the pattern is obvious in the logs.
- Credential stuffing (many usernames) - the attacker has a list of username/password pairs, usually from a data breach, and tests them against your server. Each pair is tried only once or twice per IP. The rate is low enough to stay under fail2ban's threshold, but the intent is the same: find a working credential. This pattern requires tracking unique usernames per IP, not just raw failure counts.
- Distributed botnet (coordinated, low-and-slow) - a botnet coordinates across dozens or hundreds of IPs. Each IP sends only 2-3 attempts, staying well below any per-IP threshold. But in aggregate, your server is receiving hundreds of attempts per hour from different sources. No single IP looks suspicious. Detecting this requires correlating across IPs and looking at the global rate of failed logins, not just per-source counts.
Most tools only catch the first type. The second and third are where real breaches happen.
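The credential stuffing pattern can be surfaced with a short pipeline that counts distinct usernames per source IP rather than raw failures. The sketch below uses synthetic log lines and an illustrative threshold of 2; on a real host, point the same awk at /var/log/auth.log:

```shell
# Sample log lines stand in for /var/log/auth.log so this runs anywhere.
cat <<'EOF' > /tmp/auth-sample.log
Mar  1 10:00:01 host sshd[100]: Failed password for invalid user admin from 203.0.113.5 port 40000 ssh2
Mar  1 10:01:12 host sshd[101]: Failed password for invalid user deploy from 203.0.113.5 port 40001 ssh2
Mar  1 10:02:30 host sshd[102]: Failed password for invalid user git from 203.0.113.5 port 40002 ssh2
Mar  1 10:03:44 host sshd[103]: Failed password for root from 198.51.100.9 port 50000 ssh2
EOF

# Count distinct usernames per source IP; the username sits five fields
# from the end in both the "invalid user" and valid-user log variants.
awk '/Failed password/ {
    ip = $(NF-3); user = $(NF-5)
    if (!seen[ip, user]++) users[ip]++
}
END { for (ip in users) if (users[ip] > 2) print ip, users[ip] " distinct usernames" }' /tmp/auth-sample.log
# → 203.0.113.5 3 distinct usernames
```

An IP trying three different usernames at one attempt each never trips a failure-count threshold, but it stands out immediately in this view.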
What attackers actually want
Once an attacker gets SSH access, the clock starts. They typically have automated scripts ready to execute within seconds of a successful login. The objectives depend on who is behind the attack:
- Cryptomining - the most common outcome. The attacker installs a cryptocurrency miner (usually XMRig for Monero) and runs it quietly. Your CPU spikes, your cloud bill goes up, and the attacker earns money on your hardware. Some botnets mine thousands of dollars per month across compromised servers.
- Botnet recruitment - your server becomes part of a larger network used for DDoS attacks, spam campaigns, or further scanning. The malware persists across reboots and phones home to a command-and-control server for instructions.
- Lateral movement - your server is not the final target. The attacker uses it as a jumping-off point to reach other machines on your network. They harvest SSH keys, scan internal subnets, and pivot deeper into your infrastructure.
- Data exfiltration - if the server has access to databases, backups, or credentials, those get copied out. Customer data, API keys, database dumps. The attacker might sell the data or use it for further attacks.
None of these require a sophisticated attacker. The tools are freely available, and the entire chain from scan to compromise to exploitation is fully automated.
Why fail2ban is not enough
fail2ban is a good first step. It watches log files and bans IPs that exceed a failure threshold. But against 4000+ daily attacks, the limitations become real:
- Regex-based detection - fail2ban matches log lines with regular expressions. It does not understand context. A credential stuffing attack that uses one attempt per IP per minute will never trigger a regex-based threshold.
- No cross-source correlation - an IP scanning your SSH port and probing your Nginx server at the same time? fail2ban sees two unrelated events in two separate jails. There is no correlation engine to connect them.
- No distributed attack detection - when 200 botnet IPs each send 2 attempts, no individual IP crosses the threshold. fail2ban bans zero of them. Meanwhile, your server has received 400 brute-force attempts in an hour.
- No kernel-level visibility - fail2ban operates entirely in userspace, reading log files after events have already happened. It cannot see network connections at the kernel level or block packets before they reach the SSH daemon.
- No threat intelligence - fail2ban bans an IP on your server but does nothing to report it. The attacker moves to the next target. No AbuseIPDB reporting, no Cloudflare WAF updates, no collective defense.
fail2ban is not broken. It was designed for a simpler threat landscape. The attacks have evolved, and the defenses need to evolve with them.
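The distributed gap is easy to demonstrate with a small simulation (synthetic log lines, illustrative numbers): 100 IPs each send 2 attempts, so a per-IP counter never fires, while the aggregate hourly rate is unmistakable.

```shell
# Simulate one hour of botnet traffic: 200 failures spread over 100 IPs.
for i in $(seq 1 200); do
    echo "Failed password for root from 192.0.2.$((i % 100)) port 22 ssh2"
done > /tmp/hour.log

# Per-IP view (what a fail2ban-style counter sees): no IP exceeds 2 attempts.
echo "busiest single IP:"
awk '{print $(NF-3)}' /tmp/hour.log | sort | uniq -c | sort -rn | head -1

# Aggregate view: the hour's total makes the attack obvious.
echo "total failed logins this hour: $(wc -l < /tmp/hour.log)"
# → total failed logins this hour: 200
```

With a per-IP ban threshold of 5, every one of these 100 IPs stays invisible; only the global count reveals the coordination.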
What actually works
Stopping 4000+ daily attacks requires defense at multiple layers, from the kernel up to AI-assisted decision making. Here is the stack that works:
- eBPF kernel monitoring - instead of reading log files after the fact, eBPF tracepoints observe SSH connections at the kernel level. Inner Warden's sensor runs 6 eBPF programs: tracepoints for execve, connect, and openat; a kprobe for commit_creds (privilege escalation); an LSM hook for blocking execution from /tmp; and an XDP program for wire-speed IP blocking. This is not log parsing. This is direct kernel visibility.
- XDP wire-speed blocking - when an IP is identified as an attacker, the XDP program drops packets at the network driver level before they reach the kernel's network stack. This is orders of magnitude faster than iptables rules and scales to millions of blocked IPs with zero CPU overhead per blocked packet.
- Distributed attack detection - Inner Warden's distributed SSH detector tracks the global rate of failed logins across all source IPs. When the aggregate rate exceeds normal baselines, even if no single IP is suspicious, the system flags it as a coordinated attack and identifies the participating IPs.
- AI confidence scoring - not every failed login is an attack. A developer mistyping a password should not trigger the same response as a botnet. Inner Warden's agent runs each incident through an AI provider (OpenAI, Anthropic, Groq, or Ollama) to score confidence from 0.0 to 1.0. Only high-confidence incidents trigger automated blocking. Everything else is logged and flagged for review.
The result: attacks are detected in the kernel, scored by AI, blocked at wire speed, reported to threat intelligence networks, and visible in real time on a dashboard. Every step is recorded in a structured JSONL audit trail.
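As a rough illustration of how a structured JSONL audit trail can be queried, here is a sketch with made-up records. The field names (src_ip, confidence, action) and the 0.8 threshold are assumptions for the example, not Inner Warden's actual schema:

```shell
# Hypothetical audit records; the real schema may differ.
cat <<'EOF' > /tmp/audit.jsonl
{"ts": "2024-03-01T10:00:01Z", "src_ip": "203.0.113.5", "confidence": 0.95, "action": "blocked"}
{"ts": "2024-03-01T10:02:11Z", "src_ip": "198.51.100.9", "confidence": 0.40, "action": "logged"}
EOF

# List incidents the AI scored above an assumed 0.8 blocking threshold.
python3 - <<'EOF'
import json
for line in open("/tmp/audit.jsonl"):
    event = json.loads(line)
    if event["confidence"] > 0.8:
        print(event["src_ip"], event["action"])
EOF
# → 203.0.113.5 blocked
```

One line of JSON per event keeps the trail greppable and trivially machine-readable, which is the point of JSONL for audit data.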
See it for yourself
We publish a live feed of real attacks hitting our production server. No synthetic data. No simulations. These are actual SSH brute-force attempts, port scans, and web scanner probes happening in real time.
Visit innerwarden.com/live to watch attacks arrive, get detected, scored, and blocked. The feed updates every few seconds. Most people are surprised by how constant the traffic is. There is never a quiet moment.
Quick commands to check your server
Want to see your own numbers? Open a terminal on your server and run these commands.
Count total failed SSH login attempts:
```shell
grep "Failed password" /var/log/auth.log | wc -l
```

See the top 10 attacking IPs:

```shell
grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head -10
```

On systemd hosts, use journalctl instead:

```shell
journalctl -u sshd --since "24 hours ago" | grep "Failed password" | wc -l
```

If the number is in the hundreds or thousands, your server is a target. It is not a question of whether someone will get in. It is a question of whether your defenses will hold long enough.
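One more pipeline worth running: which usernames attackers are guessing. The sample lines below stand in for /var/log/auth.log so the command runs anywhere; on a real host, point the same pipeline at your actual log.

```shell
# Sample lines (swap the path for /var/log/auth.log on a real host).
cat <<'EOF' > /tmp/auth-users.log
Mar  1 10:00:01 host sshd[100]: Failed password for invalid user admin from 203.0.113.5 port 40000 ssh2
Mar  1 10:00:05 host sshd[101]: Failed password for invalid user admin from 198.51.100.9 port 40001 ssh2
Mar  1 10:00:09 host sshd[102]: Failed password for root from 192.0.2.7 port 40002 ssh2
EOF

# Most-guessed usernames, most frequent first. The username sits five
# fields from the end in both the "invalid user" and valid-user variants.
grep "Failed password" /tmp/auth-users.log | awk '{print $(NF-5)}' | sort | uniq -c | sort -rn
# prints admin (count 2) first, then root (count 1)
```

If root, admin, ubuntu, postgres, deploy, and git dominate your list, you are seeing the same dictionary sweeps described above.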
What to do next
- Detect SSH brute-force attacks - step-by-step guide to setting up real-time detection with automated blocking.
- Fail2ban vs Inner Warden - a detailed side-by-side comparison of both tools and when to use each.
- Set up an SSH honeypot - capture what attackers actually do after they get in, using a realistic fake shell powered by an LLM.
- Threat intelligence sharing - report attackers to AbuseIPDB and push blocks to Cloudflare WAF automatically.