Poor network throughput, but double under high CPU load



I'm trying to understand a counter-intuitive phenomenon: I get only ~400 Mbit/s through a gigabit connection when the CPU is idle, but ~850 Mbit/s when the CPU is under full load (generated with a stress test utility). I suspect this is a symptom of some misconfiguration; in the end I would like to be able to saturate the link regardless of CPU usage. The question is: what could cause the poor network performance, and why does it depend on CPU usage in this strange way?



I'd appreciate any hints; thanks for your time!



Systems and hardware




  • Lynx SENYO Mini PC (GE5 PCF55) running up-to-date Arch Linux with kernel 5.0.8 (referred to as the client below).


  • NIC is a Realtek RTL8168:



    # lspci -nn | grep Ethernet
    08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 03)


  • Connection: CAT 6 patch cable <-> CAT 7 in-wall cable <-> CAT 6 patch panel <-> HPE OfficeConnect Switch <-> CAT 6 patch cable <-> Supermicro home server (referred to as the server below). Another PC gets ~950 Mbit/s over the exact same connection to the server, with the exact same cables, so I think this isolates the problem to the client.



Test setup





  • On the server:



    # iperf3 -D -s --bind 192.168.128.1



  • On the client while CPUs are idle:



    # iperf3 -c 192.168.128.1
    Connecting to host 192.168.128.1, port 5201
    [ 5] local 192.168.128.100 port 60738 connected to 192.168.128.1 port 5201
    [ ID] Interval Transfer Bitrate Retr Cwnd
    [ 5] 0.00-1.00 sec 39.8 MBytes 333 Mbits/sec 0 310 KBytes
    [ 5] 1.00-2.00 sec 40.5 MBytes 339 Mbits/sec 0 322 KBytes
    [ 5] 2.00-3.00 sec 40.3 MBytes 338 Mbits/sec 0 322 KBytes
    [ 5] 3.00-4.00 sec 41.0 MBytes 344 Mbits/sec 0 322 KBytes
    [ 5] 4.00-5.00 sec 36.9 MBytes 310 Mbits/sec 0 322 KBytes
    [ 5] 5.00-6.00 sec 53.1 MBytes 445 Mbits/sec 0 322 KBytes
    [ 5] 6.00-7.00 sec 53.7 MBytes 450 Mbits/sec 0 322 KBytes
    [ 5] 7.00-8.00 sec 54.7 MBytes 459 Mbits/sec 0 338 KBytes
    [ 5] 8.00-9.00 sec 54.0 MBytes 453 Mbits/sec 0 338 KBytes
    [ 5] 9.00-10.00 sec 54.0 MBytes 453 Mbits/sec 0 338 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval Transfer Bitrate Retr
    [ 5] 0.00-10.00 sec 468 MBytes 393 Mbits/sec 0 sender
    [ 5] 0.00-10.04 sec 467 MBytes 390 Mbits/sec receiver



  • On the client while running stress -c 4 (all four CPUs busy at 100%; the load-generation commands are sketched after these results):



    # iperf3 -c 192.168.128.1
    Connecting to host 192.168.128.1, port 5201
    [ 5] local 192.168.128.100 port 60742 connected to 192.168.128.1 port 5201
    [ ID] Interval Transfer Bitrate Retr Cwnd
    [ 5] 0.00-1.00 sec 102 MBytes 854 Mbits/sec 0 356 KBytes
    [ 5] 1.00-2.00 sec 101 MBytes 845 Mbits/sec 0 375 KBytes
    [ 5] 2.00-3.00 sec 101 MBytes 846 Mbits/sec 0 392 KBytes
    [ 5] 3.00-4.00 sec 101 MBytes 844 Mbits/sec 0 409 KBytes
    [ 5] 4.00-5.00 sec 100 MBytes 843 Mbits/sec 0 409 KBytes
    [ 5] 5.00-6.00 sec 101 MBytes 843 Mbits/sec 0 409 KBytes
    [ 5] 6.00-7.00 sec 101 MBytes 845 Mbits/sec 0 409 KBytes
    [ 5] 7.00-8.00 sec 100 MBytes 841 Mbits/sec 0 451 KBytes
    [ 5] 8.00-9.00 sec 101 MBytes 843 Mbits/sec 0 451 KBytes
    [ 5] 9.00-10.00 sec 101 MBytes 851 Mbits/sec 0 510 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval Transfer Bitrate Retr
    [ 5] 0.00-10.00 sec 1008 MBytes 846 Mbits/sec 0 sender
    [ 5] 0.00-10.04 sec 1005 MBytes 840 Mbits/sec receiver
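
For completeness, the busy-CPU case was produced roughly as sketched below. Only stress -c 4 is taken verbatim from the actual test; the mpstat check is just an illustration of how the full load on all four cores could be verified (it needs the sysstat package):

    stress -c 4 &               # load all four CPU cores (stress package)
    mpstat -P ALL 1 3           # optional: confirm every core is near 100% busy
    iperf3 -c 192.168.128.1     # run the throughput test while the load is active
    kill %1                     # stop the load generator afterwards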



Unsuccessful attempts




  • I thought it might be a frequency scaling issue, but cpupower frequency-set -g performance makes no difference (the commands behind these attempts are sketched after this list).

  • Having read about issues with this specific NIC, I tried both drivers: the in-kernel one (r8169) and the one from Realtek (r8168). The general picture is the same; only the numbers differ a bit. With the Realtek driver I get 556 Mbit/s (idle CPU) vs 788 Mbit/s (busy CPU).

  • Cross-checked with a live Ubuntu 18.04 LTS system (kernel 4.18.0), with the same results.

  • Running irqbalance makes no difference.

  • Disabling IPv6 makes no difference.
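
For reference, the attempts above correspond roughly to the commands below. I am reconstructing them from memory, so treat the exact invocations (file names, package source for r8168) as approximate rather than verbatim:

    cpupower frequency-set -g performance        # force the performance governor (no effect)

    # switching drivers: blacklist the in-kernel r8169, install the out-of-tree
    # r8168 module (from the AUR), then reboot and verify which driver is active
    echo "blacklist r8169" > /etc/modprobe.d/blacklist-r8169.conf
    ethtool -i enp8s0 | grep driver              # should then report r8168

    systemctl start irqbalance                   # spread IRQs across CPUs (no effect)

    sysctl -w net.ipv6.conf.all.disable_ipv6=1   # disable IPv6 for the test (no effect)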


Additional information (glad to add more on request)



Output from ip:



# ip addr show dev enp8s0
2: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:14:0b:81:3f:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.100/24 brd 192.168.128.255 scope global noprefixroute enp8s0
       valid_lft forever preferred_lft forever
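
Should it help, I can also post the interface's packet and error counters; I would collect them with the standard statistics view of ip (not shown above):

    ip -s link show dev enp8s0   # RX/TX byte, packet, error and drop counters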


Output from ethtool:



# ethtool enp8s0
Settings for enp8s0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
                                             1000baseT/Full
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes

# ethtool -i enp8s0
driver: r8169
version:
firmware-version: rtl_nic/rtl8168d-1.fw
expansion-rom-version:
bus-info: 0000:08:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no


Interrupts:



# cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
...
 26:      145689          0          0     394154   PCI-MSI 4194304-edge   enp8s0
...

# cat /proc/softirqs
                CPU0       CPU1       CPU2       CPU3
      HI:          1         42         46          1
   TIMER:      11550      32469      10688      11494
  NET_TX:      85221          4          7     189091
  NET_RX:     145692          0         14     394372
   BLOCK:         21        450         39      22765
IRQ_POLL:          0          0          0          0
 TASKLET:     136092         93         97     384278
   SCHED:      10554      29735       8720       9726
 HRTIMER:          0          0          0          0
     RCU:       6154       8303       5838       6506
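
Beyond the output above, I can provide further NIC details on request. These are the standard ethtool queries I would use; they are listed only as a pointer, and their output is not included above:

    ethtool -k enp8s0   # offload features (TSO, GSO, checksum offload, ...)
    ethtool -c enp8s0   # interrupt coalescing settings
    ethtool -S enp8s0   # driver/NIC statistics counters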









Tags: linux networking cpu-usage gigabit-ethernet





