Running an NTP Pool Server: Operator Guide & Retrospective

By Richard DEMONGEOT | 20 April 2026 | Reading time: 14 min

The NTP Pool (pool.ntp.org) is the time-synchronisation infrastructure that, without most of its users knowing it, underpins much of the consumer Internet: home routers, smartphones, IoT devices, and a large share of Linux distributions by default. This network of over 4,000 servers is run entirely by volunteers, launched in 2003 and coordinated since 2005 by Ask Bjørn Hansen and a handful of contributors.

Our Why Contribute to the NTP Pool page covers the reasons to join the project. This one is the practical follow-up: how to run a pool server in 2026, the pitfalls we encountered, and how to size for load. RDEM Systems has operated pool-registered servers for several years, a significant share under its own AS206014 and the rest on third-party transits.

1. Technical prerequisites before registering

Before any registration, the server must tick four boxes: a static public IP that is not shared or NATed, a permanent and stable Internet connection, a correctly synchronised local clock (at least 3–4 diversified upstream sources), and UDP port 123 reachable from the whole Internet.

Don't do: register on a NAT-shared IP, a residential dynamic IP, or a VM with kernel-to-host sync enabled. These configurations produce chronically negative scores and degrade pool user experience.

2. Registering on manage.ntppool.org

Registration is free and takes five minutes:

  1. Create an account on the NTP Pool admin panel (manage.ntppool.org).
  2. Add a server by its public IP (no hostname — the pool queries the IP directly). IPv4 and IPv6 are managed separately: two registrations for one dual-stack server.
  3. Choose geographic zones the server will serve. By default, the pool assigns the country zone of the IP. A European operator can extend to europe and global.
  4. Set the net speed — bandwidth you offer. Do not overestimate: 100 Mbit/s actually delivered beats 1 Gbit/s announced but unserved.
  5. Allow 24–48 hours for the monitors to establish an initial score. While the score remains below the inclusion threshold, the server does not yet receive real pool traffic.
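Before adding the IP, confirm the daemon is actually synchronised; registering an unsynchronised server only earns negative scores. A minimal sketch assuming chrony (live, you would pipe `chronyc tracking` directly; a sample output is embedded here so the parsing is visible):

```shell
# Sample `chronyc tracking` output (illustrative values)
tracking='Reference ID    : C0A1B2C3 (ntp1.ptb.de)
Stratum         : 2
Last offset     : +0.000034 seconds
Root dispersion : 0.001234 seconds
Leap status     : Normal'

stratum=$(printf '%s\n' "$tracking" | awk -F' *: ' '/^Stratum/ {print $2}')
leap=$(printf '%s\n' "$tracking" | awk -F' *: ' '/^Leap status/ {print $2}')

# A credible pool candidate: low stratum, normal leap status
if [ "$stratum" -le 4 ] && [ "$leap" = "Normal" ]; then
  echo "OK to register (stratum $stratum)"
else
  echo "Fix local sync first"
fi
```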

3. Understanding the scoring system

Several geographically distributed monitors (Europe, Americas, Asia, Oceania) query each registered server every 10–20 minutes. Each query produces a score increment based on the response:

Result                | Score impact                | Typical cause
OK, offset < 75 ms    | +1 (progressive toward +20) | Normal operation
OK, offset 75–400 ms  | Slight penalty              | Server under load or degraded WAN
Offset > 400 ms       | −4                          | Broken local sync, mishandled leap second
Timeout / no response | −5                          | Firewall, server down, net speed exceeded
Stratum 16 announced  | −5                          | Unsynchronised daemon at the operator

Global score ranges from −100 (banned) to +20 (excellent). A server with score above +10 is included in pool DNS responses proportionally to its net speed. Score is public at ntppool.org/scores/<IP> and helpful for diagnosing regressions.

The pool tolerates temporary dips (maintenance, reboot) without immediate exclusion. But a score drop below +10 removes the server from DNS responses until recovery.
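Score regressions are easier to catch when polled automatically rather than eyeballed on the web page. The scores page can be scraped or exported per monitor; the CSV shape below is an assumption for illustration, with the parsing shown against an embedded sample:

```shell
# Embedded sample of a score history export (columns assumed)
csv='ts,score,monitor
2026-04-19T10:00:00,18.7,recentmedian
2026-04-19T10:15:00,12.3,recentmedian
2026-04-19T10:30:00,9.4,recentmedian'

latest=$(printf '%s\n' "$csv" | tail -n 1 | cut -d, -f2)

# Shell [ ] only compares integers, so compare floats with awk
if awk -v s="$latest" 'BEGIN { exit !(s < 10) }'; then
  echo "ALERT: score $latest is below +10, server out of DNS rotation"
fi
```

Wired into cron, this kind of check catches a slide below +10 hours before users notice anything.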

4. Sizing bandwidth and the "net speed"

The declared net speed drives the volume of requests the pool directs to the server proportionally. Five levels:

Net speed    | Typical req/s | Average outbound bandwidth | Operator profile
1 Mbit/s     | < 50          | < 4 kbit/s                 | Individual, residential connection
10 Mbit/s    | 50–200        | 4–16 kbit/s                | Small hosted VPS
100 Mbit/s   | 500–2,000     | 40–160 kbit/s              | Pro infrastructure, comfortable VPS
500 Mbit/s   | 2,500–10,000  | 200 kbit/s–1 Mbit/s        | Datacentre, dedicated bandwidth
1,000 Mbit/s | 5,000–20,000  | 500 kbit/s–2 Mbit/s        | Operator with dedicated link + serious CPU

An NTP packet is 76 bytes at IP level (48-byte NTP payload + 8-byte UDP header + 20-byte IPv4 header), roughly 90 bytes on the wire with Ethernet framing. In practice, CPU load remains negligible even at 20,000 req/s on one modern core. The real sizing constraint is bandwidth and link quality: a network with stable latency helps the pool more than a beefy server behind a jittery connection.

Peaks to anticipate: the pool can redirect up to 5–10× average load when a large neighbouring contributor fails (Cloudflare or Google cluster temporarily excluded, for example). Plan margin in your net speed accordingly.
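As a cross-check on the table, response traffic at a sustained rate is simple arithmetic (a sketch; 90 bytes per response at Ethernet level, and measured pool averages come out lower because traffic is bursty):

```shell
# Worst-case outbound estimate for a sustained request rate
rate=2000   # requests per second (hypothetical sustained load)
bytes=90    # per response on the wire (Ethernet level)

kbits=$(awk -v r="$rate" -v b="$bytes" 'BEGIN { printf "%.0f", r*b*8/1000 }')
echo "${rate} req/s = ${kbits} kbit/s outbound"
```

This worst-case figure sits above the table's averages by design: size your net speed so that even a 5–10× redirection peak at this rate stays within your link.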

5. chrony configuration in public-server mode

The choice between chrony and ntpd for a public server is structural — see which daemon to pick for a public server on check-ntp.net for the detailed decision. Minimum recommended configuration on chrony 4.x:

# /etc/chrony/chrony.conf — public NTP pool server

# === Upstream sources (at least 4, diversified) ===
server time.cloudflare.com  iburst nts
server nts.netnod.se        iburst nts
server ntp1.ptb.de          iburst
server time.nist.gov        iburst
# Do not use *.pool.ntp.org zones as upstreams: pool guidance asks
# pool servers not to take time from the pool itself (sync-loop risk
# with other members).

# === Accept public traffic ===
# Accept NTP requests from the whole Internet
allow 0.0.0.0/0
allow ::/0

# === Rate limiting to protect the service ===
# Drop clients averaging more than ~1 request per 2 s (interval 1 =
# 2^1 s minimum interval; bursts of up to 16 are tolerated). On
# chrony >= 4.0, adding the "kod N" option also answers a fraction
# of limited clients with a Kiss-o'-Death RATE packet.
ratelimit interval 1 burst 16 leak 2

# === Security ===
# Keep client logging enabled: ratelimit relies on per-client records
# to measure request intervals, so noclientlog must NOT be set here.
# Cap the memory used for those records instead:
clientloglimit 10000000
# Disable control commands externally (chronyc stays local)
bindcmdaddress 127.0.0.1
bindcmdaddress ::1

# === Stability ===
makestep 1.0 3
rtcsync
leapsecmode system

# === NTS server-side (optional but recommended) ===
ntsserverkey /etc/chrony/nts.key
ntsservercert /etc/chrony/nts.crt

The ratelimit directive is critical: a client looping at one request per second (a typical symptom of a broken IoT device) wastes bandwidth on its own, and multiplied across a fleet of such devices it crowds out legitimate clients. KoD signals it to slow down; compliant implementations obey, while broken ones keep hammering at a pace chrony simply stops answering.
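To see what the limiter is actually doing, chrony exposes counters via `chronyc serverstats` (the dropped counter covers rate-limited and access-refused packets). A sketch with an embedded sample output so the parsing is reproducible:

```shell
# Sample `chronyc serverstats` output (illustrative values)
stats='NTP packets received       : 250000
NTP packets dropped        : 12500
Command packets received   : 42
Command packets dropped    : 0'

recv=$(printf '%s\n' "$stats" | awk -F' *: *' '/NTP packets received/ {print $2}')
drop=$(printf '%s\n' "$stats" | awk -F' *: *' '/NTP packets dropped/  {print $2}')

pct=$(awk -v r="$recv" -v d="$drop" 'BEGIN { printf "%.1f", 100*d/r }')
echo "rate-limited or refused: ${pct}% of NTP requests"
```

A few percent of drops is normal on a busy pool server; a sudden jump usually means one abuser worth banning at the firewall.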

6. ntpd / ntpsec alternative configuration

# /etc/ntp.conf — ntpd/ntpsec public server

# Named upstream servers only: pool guidance asks pool members not
# to use *.pool.ntp.org zones as their own time source
server ptbtime1.ptb.de iburst
server time.nist.gov iburst
server ntp1.ptb.de iburst
# The nts option requires ntpsec; classic ntpd has no NTS support
server time.cloudflare.com iburst nts

# Restrict world actions
restrict default kod limited nomodify notrap nopeer noquery
restrict -6 default kod limited nomodify notrap nopeer noquery

# Localhost has all rights
restrict 127.0.0.1
restrict ::1

# Leap second history
leapfile /usr/share/zoneinfo/leap-seconds.list

# NO monlist (DDoS amplification vector)
# ntpsec disables mode 6/7 by default; on classic ntpd:
# disable monitor

The limited flag enables native rate-limiting; kod sends Kiss-o'-Death to abusers; nomodify notrap nopeer noquery blocks any remote reconfiguration or status-query attempt. Do not weaken any of these restrictions on a public server.

7. Abuse, DDoS and misconfigured clients

Once in the pool, the server becomes a target for three kinds of parasitic behaviour:

7.1 Vendor lock on a hardcoded IP

Consumer device manufacturers (routers, IP cameras, smart TVs) have historically hardcoded a specific NTP server IP or hostname in their firmware, sometimes theirs, sometimes a third-party operator picked at random. Millions of devices then query that address forever. Documented cases: Netgear routers flooding the University of Wisconsin-Madison's server (2003), D-Link firmware hammering Poul-Henning Kamp's stratum-1 server gps.dix.dk (2006). There is no clean solution: filter the source IPs in iptables, or accept the load and send KoD.

7.2 DDoS amplification via monlist (historical)

In 2014, ntpd versions before 4.2.7p26 still exposed a monlist command (mode 7) returning the list of the last 600 clients. Used in a reflection attack, a 234-byte request could produce a ~49 KB response, a roughly 200× amplification factor. The February 2014 attacks (CloudFlare, Hearthstone) consumed hundreds of Gbit/s. Mitigations: disable modes 6 and 7 server-side (the default on chrony 4.x and ntpsec), or migrate to one of these modern implementations.
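The quoted amplification factor is worth sanity-checking as plain arithmetic (the exact byte counts vary with how many monlist entries the server returns):

```shell
# Reflection amplification: response size vs. request size
req_bytes=234     # one monlist request
resp_bytes=49000  # ~49 KB of response, split across many UDP packets

factor=$(awk -v q="$req_bytes" -v a="$resp_bytes" 'BEGIN { printf "%.0f", a/q }')
echo "amplification factor: ${factor}x"
```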

7.3 Clients that ignore KoD

Kiss-o'-Death is an NTP packet with stratum 0 and a refid code (RATE, DENY…) that tells the client to slow down or change source. RFC-compliant clients obey. Bad ones ignore and keep going. For those:

# Temporarily ban an abusive IP (iptables)
iptables -I INPUT -s 203.0.113.42 -p udp --dport 123 -j DROP

# Or dynamically via ipset + fail2ban
# jail chrony_kod in /etc/fail2ban/jail.d/chrony.conf
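
For the ipset route, a self-expiring set avoids accumulating stale rules. A sketch (requires root; the set name and the one-hour timeout are arbitrary choices):

```shell
# One-time setup: a hash set whose entries expire after 3600 s
ipset create ntp_abusers hash:ip timeout 3600
iptables -I INPUT -p udp --dport 123 \
  -m set --match-set ntp_abusers src -j DROP

# Ban an abuser for one hour; re-adding an IP refreshes its timer
ipset add ntp_abusers 203.0.113.42
```

The timeout means a fixed vendor-firmware device that eventually behaves gets unbanned automatically, with no rule cleanup to remember.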

8. Operator-side monitoring

Three axes to watch continuously on a pool server: the public pool score (ntppool.org/scores), local synchronisation quality (offset, jitter, and the reach of upstream sources), and traffic volume (requests per second, outbound bandwidth, rate-limit drops).
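One of the quickest health signals is the reach of upstream sources. A sketch that flags any source whose reach octal is below 377, meaning missed polls (live, you would pipe `chronyc -N sources`; the sample is embedded here and the column positions are taken from chrony 4.x output):

```shell
# Sample `chronyc sources` data lines (header rows stripped)
sources='^* ntp1.ptb.de                  1   6   377    34   +12us[  +15us] +/-  8.5ms
^- time.cloudflare.com           3   6   357    33  -100us[ -100us] +/-   25ms'

# Column 5 is Reach: an octal bitmap of the last eight polls
bad=$(printf '%s\n' "$sources" | awk '$5 != "377" {print $2}')

[ -n "$bad" ] && echo "degraded upstream(s): $bad"
```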

9. What RDEM learned in production

Operator retrospective. Our servers have been running for several years, mostly under AS206014, with additional deployments on third-party transits, and contribute to the NTP pool at ntppool.org/a/rdem-systems. A few lessons apply to any serious operator.

Before going live, benchmark your infrastructure's latency and jitter with ntp-tester.eu. To decode the reach column of your upstream peers (essential for keeping your score up), see check-ntp.net. For operators subject to compliance requirements, validate conformity with the audit checklist before pool admission.

10. De-registering a server cleanly

Abrupt removal produces monitor timeouts that tank the server's score, and leaves clients with cached DNS answers querying a dead address. Clean procedure:

  1. Uncheck geographic zones on manage.ntppool.org. The server leaves DNS responses within a few hours.
  2. Wait 48 hours before actually cutting the NTP service. Client DNS caches expire during this window.
  3. Remove the final registration via the interface; note that this erases the historical scores.
  4. If you were a visible contributor, announce the removal at least two weeks in advance.

Further reading. For contribution motivation, see Why Contribute to the NTP Pool. For the infrastructure foundation, see our NTP infrastructure since 2005. For handling leap events, see the leap second page.

Resources and references