Summary
Public speed tests are not designed to saturate 25–100 Gbps ports. Results are limited by the test server’s capacity, not your port. For high-capacity circuits, public endpoints typically top out around 1–10 Gbps, sometimes ~25 Gbps, and they are shared with other users—so you’ll see an artificially low ceiling even if your FDC port is perfectly fine.
What’s going on?
- Public iPerf3 servers are small and shared. Most listed public iPerf3 endpoints advertise 1–10 Gbps of capacity (occasionally more), and they are shared resources, so another user's concurrent test can bottleneck yours.
- Speedtest-style platforms vary by operator. Operators host their own servers with widely varying capacity; there is no guarantee a nearby server can drive multi-10G or 100G single-client tests. Academic analysis confirms that server deployment and network paths can bottleneck results.
Practical implication
If you run a public speed test from a server with a 25/40/100 Gbps port, you are nearly always measuring the public endpoint's ceiling, not the service you purchased.
How to test correctly
- Use a private iPerf3 peer under your control (or one FDC provides) with equal/greater capacity and a high-speed NIC on both ends.
- Use multiple parallel streams and adequate test duration (see Article 3).
- Keep tests on-net / intra-DC when possible to avoid third-party bottlenecks.
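The parallel-stream recommendation above maps to an invocation like `iperf3 -c <peer> -P 8 -t 30 --json`, which emits a JSON report whose `end.sum_received.bits_per_second` field carries the aggregate receiver-side throughput. A minimal sketch of reading that field (the embedded JSON is illustrative sample data, not a real measurement):

```python
import json

# Illustrative stand-in for `iperf3 --json` output; real reports carry
# many more fields (per-stream stats, retransmits, CPU usage, etc.).
sample_report = json.loads("""
{
  "end": {
    "sum_sent":     {"bits_per_second": 96.1e9},
    "sum_received": {"bits_per_second": 95.4e9}
  }
}
""")

def aggregate_gbps(report: dict) -> float:
    """Return receiver-side aggregate throughput in Gbps."""
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"{aggregate_gbps(sample_report):.1f} Gbps")
```

In practice you would feed the script real output, e.g. `iperf3 -c <peer> -P 8 -t 30 --json > report.json` and load that file instead of the sample; the receiver-side sum is the number to compare against your port capacity.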
References
- Public iPerf3 server directory (many 10 Gbps entries).
- CAIDA (2024): Empirical Characterization of Ookla — server deployment and capabilities [PDF].
- ESnet / FasterData: 100G tuning and benchmarking hosts; example showing ~30 Gbps on a single stream vs. >95 Gbps with 8 parallel streams.