Summary

Default OS network settings prioritize compatibility over maximum throughput. On 25–100 Gbps links, untuned hosts routinely plateau far below line rate. Proper tuning of the TCP stack, NIC driver, interrupt/NUMA placement, and buffers is required.

Key tuning areas (Linux examples)

  1. TCP/Socket buffers
# /etc/sysctl.d/99-highspeed.conf
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728

Once the sysctl values are in place, apply them with sudo sysctl --system
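The 134217728-byte (128 MB) maxima above are sized to cover the bandwidth-delay product (BDP) of a fast, long path. A quick sketch of the arithmetic (the link rate and RTT below are illustrative, not prescriptions):

```shell
# Bandwidth-delay product: the buffer needed to keep a path full.
# BDP (bytes) = rate (bits/s) * RTT (s) / 8
RATE_GBPS=100   # illustrative link rate
RTT_MS=10       # illustrative round-trip time
BDP_BYTES=$(( RATE_GBPS * 1000000000 / 8 * RTT_MS / 1000 ))
echo "$BDP_BYTES"   # 125000000 bytes (~119 MB), just under the 128 MB cap above
```

If your paths have longer RTTs, scale the rmem/wmem maxima accordingly.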

  2. Congestion control

Check whether BBR congestion control is available on your host, then load it and verify that it is active:

sysctl net.ipv4.tcp_available_congestion_control    # list available algorithms
sudo modprobe tcp_bbr                               # load the BBR module
lsmod | grep bbr                                    # confirm it is loaded
sysctl net.ipv4.tcp_congestion_control              # show the current algorithm
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr  # switch to BBR (until reboot)
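sysctl -w does not survive a reboot. To persist the choice, the same sysctl drop-in can carry it; the fq qdisc line reflects the common pairing recommendation for BBR, and the file path matches the example above:

```shell
# Persist BBR across reboots; fq is the qdisc commonly paired with BBR.
cat <<'EOF' | sudo tee -a /etc/sysctl.d/99-highspeed.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
sudo sysctl --system
```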
  3. NIC & driver offloads / interrupt moderation

Replace <interface-name> with the name of the interface you are configuring.

ethtool -k <interface-name>                               # list current offload settings
sudo ethtool -C <interface-name> rx-usecs 16 rx-frames 0  # tune RX interrupt coalescing
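For bulk throughput, the large-segment offloads are normally left enabled so the CPU handles fewer, larger chunks. A sketch of checking and enabling them (the interface name is illustrative):

```shell
IFACE=eth0   # illustrative; substitute your interface name
# Keep segmentation/receive offloads on for bulk transfers
sudo ethtool -K "$IFACE" tso on gso on gro on
# Verify the result
ethtool -k "$IFACE" | grep -E 'tcp-segmentation|generic-(segmentation|receive)'
```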
  4. NUMA/IRQ pinning

Pin NIC IRQs and the sending/receiving processes to the same NUMA node/socket.
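A minimal sketch of that workflow, assuming an interface named eth0 that sits on NUMA node 0 (interface name, node, and CPU mask are all illustrative):

```shell
IFACE=eth0                                    # illustrative interface name
cat /sys/class/net/"$IFACE"/device/numa_node  # NIC's NUMA node (-1 = single node)
sudo systemctl stop irqbalance                # keep irqbalance from undoing pins
# Pin each of the NIC's IRQs to CPU 0 (pick CPUs on the NIC's NUMA node)
for irq in $(grep "$IFACE" /proc/interrupts | awk -F: '{print $1}'); do
  echo 1 | sudo tee /proc/irq/"$irq"/smp_affinity >/dev/null
done
# Run the traffic source/sink on the same node so data stays local
numactl --cpunodebind=0 --membind=0 iperf3 -s
```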

  5. 100 Gbps-specific notes

Enable the IOMMU, install the latest NIC drivers, and confirm pause-frame/flow-control behavior on the path.
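Driver versions and pause-frame behavior can both be inspected with ethtool. A sketch (interface name is illustrative; whether pause frames should be on or off depends on your switch fabric):

```shell
IFACE=eth0                            # illustrative; substitute your interface
ethtool -i "$IFACE"                   # driver name/version, to confirm current drivers
ethtool -a "$IFACE"                   # show pause-frame (flow-control) settings
sudo ethtool -A "$IFACE" rx on tx on  # example: enable pause frames in both directions
```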

References

  • ESnet / FasterData, Linux Host Tuning: general guide (1 Gbps and above) covering buffer sizes, congestion control, and related settings.
  • ESnet / FasterData, 100G Benchmarking: IRQ binding, CPU governors, NUMA, parallel streams, etc.
  • ESnet / FasterData, Other Tuning Options: interrupt coalescence, offloads, txqueuelen, netdev backlog, etc.
  • Red Hat, RHEL Network Performance Tuning guide.
  • NVIDIA/Mellanox adapter performance-tuning notes.