[RESOLVED] Seem to have drastic disk I/O throughput slowdowns on Liquorix 5.17 and 5.18 kernels
stevenpusser
Status: Contributor
Joined: 14 Jan 2017
Posts: 62
Disk I/O throughput seems drastically slower on the Liquorix 5.17 and 5.18 kernels as compared to the backported Debian 5.17 and 5.18 kernels. Do you have any suggested benchmarks I can run to get a true idea of the degree of slowdown?


:: Code ::

System:    Kernel: 5.18.0-2mx-amd64 x86_64 bits: 64 compiler: gcc v: 10.2.1
           parameters: BOOT_IMAGE=/boot/vmlinuz-5.18.0-2mx-amd64
           root=UUID=<filter> ro splash quiet
           init=/lib/systemd/systemd
           Desktop: Xfce 4.16.0 tk: Gtk 3.24.24 info: xfce4-panel wm: xfwm 4.16.1 vt: 7
           dm: LightDM 1.26.0 Distro: MX-21.1_ahs_x64 wildflower Oct 20  2021
           base: Debian GNU/Linux 11 (bullseye)
Machine:   Type: Laptop System: Micro-Star product: GP63 Leopard 8RD v: REV:1.0 serial: <filter>
           Chassis: type: 10 serial: <filter>
           Mobo: Micro-Star model: MS-16P6 v: REV:1.0 serial: <filter> UEFI: American Megatrends
           v: E16P6IMS.107 date: 09/05/2018
CPU:       Info: 6-Core model: Intel Core i7-8750H bits: 64 type: MT MCP arch: Kaby Lake
           note: check family: 6 model-id: 9E (158) stepping: A (10) microcode: EC cache:
           L2: 9 MiB
           flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 52799
           Speed: 2866 MHz min/max: 800/4100 MHz Core speeds (MHz): 1: 2866 2: 3883 3: 3457
           4: 3101 5: 3448 6: 3381 7: 2383 8: 2659 9: 3712 10: 3277 11: 2992 12: 3520
Graphics:  Device-1: Intel CoffeeLake-H GT2 [UHD Graphics 630] vendor: Micro-Star MSI
           driver: i915 v: kernel bus-ID: 00:02.0 chip-ID: 8086:3e9b class-ID: 0300
           Device-2: NVIDIA GP107M [GeForce GTX 1050 Ti Mobile] driver: N/A alternate: nvidia
           bus-ID: 01:00.0 chip-ID: 10de:1c8c class-ID: 0302
           Device-3: Acer HD Webcam type: USB driver: uvcvideo bus-ID: 1-13:5 chip-ID: 5986:1140
           class-ID: 0e02 serial: <filter>
           Display: x11 server: X.Org 1.20.13 compositor: compton v: 1 driver: loaded: intel
           display-ID: :0.0 screens: 1
           Screen-1: 0 s-res: 1920x1080 s-dpi: 96 s-size: 508x285mm (20.0x11.2")
           s-diag: 582mm (22.9")
           Monitor-1: eDP1 res: 1920x1080 hz: 60 dpi: 143 size: 340x190mm (13.4x7.5")
           diag: 389mm (15.3")
           OpenGL: renderer: Mesa DRI Intel UHD Graphics 630 (CFL GT2) v: 4.6 Mesa 21.2.5
           compat-v: 3.0 direct render: Yes
Drives:    Local Storage: total: 1.14 TiB used: 676.23 GiB (57.8%)
           SMART Message: Unable to run smartctl. Root privileges required.
           ID-1: /dev/sda maj-min: 8:0 vendor: Micron model: 1100 MTFDDAV256TBN size: 238.47 GiB
           block-size: physical: 512 B logical: 512 B speed: 6.0 Gb/s type: SSD serial: <filter>
           rev: A020 temp: 40 C scheme: GPT
           ID-2: /dev/sdb maj-min: 8:16 vendor: HGST (Hitachi) model: HTS721010A9E630
           size: 931.51 GiB block-size: physical: 4096 B logical: 512 B speed: 6.0 Gb/s
           type: HDD rpm: 7200 serial: <filter> rev: A3U0 temp: 37 C scheme: GPT
Partition: ID-1: / raw-size: 132 GiB size: 128.87 GiB (97.63%) used: 62.91 GiB (48.8%) fs: ext4
           dev: /dev/sda5 maj-min: 8:5
           ID-2: /boot/efi raw-size: 300 MiB size: 296 MiB (98.67%) used: 25.5 MiB (8.6%)
           fs: vfat dev: /dev/sda1 maj-min: 8:1
           ID-3: /home raw-size: 846.64 GiB size: 832.28 GiB (98.30%) used: 613.28 GiB (73.7%)
           fs: ext4 dev: /dev/sdb1 maj-min: 8:17
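A quick, portable way to get a rough throughput number is dd; the path and sizes below are arbitrary examples, and since this runs without O_DIRECT it also exercises the page cache, so the result is only an upper bound. fio (which KDiskMark wraps) is the more rigorous tool.

```shell
# Rough sequential-throughput sketch with dd; treat the numbers as an
# upper bound because the page cache is involved.
TESTFILE=/tmp/ddtest.bin              # pick a path on the disk under test
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
sync
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$TESTFILE"
# For a proper benchmark, fio covers the same cases KDiskMark reports, e.g.:
#   fio --name=seqread --filename=/tmp/fio-test --size=1G --rw=read --bs=1M \
#       --ioengine=libaio --iodepth=8 --direct=1 --runtime=30 --time_based
```

Run the same commands on each kernel and compare the reported MB/s.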

damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 975
Do you have any data to describe what you observed? As an experiment, I ran KDiskMark (fio with a GUI) with bfq, mq-deadline-nodefault, and none. Below are the results, in that order:

BFQ
:: Code ::
                     KDiskMark (2.3.0): https://github.com/JonMagon/KDiskMark
                 Flexible I/O Tester (fio-3.29): https://github.com/axboe/fio
-----------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
Sequential 1 MiB (Q= 8, T= 1):  2977.230 MB/s [   2907.5 IOPS] <  2921.80 us>
Sequential 1 MiB (Q= 1, T= 1):  2162.250 MB/s [   2111.6 IOPS] <   472.54 us>
    Random 4 KiB (Q=32, T= 1):   872.758 MB/s [ 218189.7 IOPS] <   148.30 us>
    Random 4 KiB (Q= 1, T= 1):    43.699 MB/s [  10924.9 IOPS] <    91.13 us>

[Write]
Sequential 1 MiB (Q= 8, T= 1):  2896.654 MB/s [   2828.8 IOPS] <  2689.45 us>
Sequential 1 MiB (Q= 1, T= 1):  2200.896 MB/s [   2149.3 IOPS] <   350.20 us>
    Random 4 KiB (Q=32, T= 1):  1215.137 MB/s [ 303784.4 IOPS] <   104.22 us>
    Random 4 KiB (Q= 1, T= 1):   281.031 MB/s [  70257.7 IOPS] <    13.47 us>

Profile: Default
   Test: 1 GiB (x5) [Interval: 5 sec]
   Date: 2022-06-25 15:03:45
     OS: arch unknown [linux 5.18.7-lqx1-1-lqx]


MQ-DEADLINE-NODEFAULT
:: Code ::
                     KDiskMark (2.3.0): https://github.com/JonMagon/KDiskMark
                 Flexible I/O Tester (fio-3.29): https://github.com/axboe/fio
-----------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
Sequential 1 MiB (Q= 8, T= 1):  2893.786 MB/s [   2826.0 IOPS] <  3357.90 us>
Sequential 1 MiB (Q= 1, T= 1):  1776.854 MB/s [   1735.2 IOPS] <   576.95 us>
    Random 4 KiB (Q=32, T= 1):   971.456 MB/s [ 242864.0 IOPS] <   131.39 us>
    Random 4 KiB (Q= 1, T= 1):    44.001 MB/s [  11000.4 IOPS] <    90.47 us>

[Write]
Sequential 1 MiB (Q= 8, T= 1):  1846.910 MB/s [   1803.6 IOPS] <  5493.11 us>
Sequential 1 MiB (Q= 1, T= 1):  2004.788 MB/s [   1957.8 IOPS] <   423.08 us>
    Random 4 KiB (Q=32, T= 1):  1725.242 MB/s [ 431310.7 IOPS] <    73.28 us>
    Random 4 KiB (Q= 1, T= 1):   300.038 MB/s [  75009.6 IOPS] <    12.54 us>

Profile: Default
   Test: 1 GiB (x5) [Interval: 5 sec]
   Date: 2022-06-25 14:58:51
     OS: arch unknown [linux 5.18.7-lqx1-1-lqx]


NONE (no scheduler)
:: Code ::
                     KDiskMark (2.3.0): https://github.com/JonMagon/KDiskMark
                 Flexible I/O Tester (fio-3.29): https://github.com/axboe/fio
-----------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
Sequential 1 MiB (Q= 8, T= 1):  2793.644 MB/s [   2728.2 IOPS] <  4429.30 us>
Sequential 1 MiB (Q= 1, T= 1):  2196.463 MB/s [   2145.0 IOPS] <   465.02 us>
    Random 4 KiB (Q=32, T= 1):   970.904 MB/s [ 242726.1 IOPS] <   131.44 us>
    Random 4 KiB (Q= 1, T= 1):    44.591 MB/s [  11147.8 IOPS] <    89.28 us>

[Write]
Sequential 1 MiB (Q= 8, T= 1):  2754.365 MB/s [   2689.8 IOPS] <  2841.97 us>
Sequential 1 MiB (Q= 1, T= 1):  2224.409 MB/s [   2172.3 IOPS] <   344.62 us>
    Random 4 KiB (Q=32, T= 1):  1918.783 MB/s [ 479695.9 IOPS] <    65.96 us>
    Random 4 KiB (Q= 1, T= 1):   301.501 MB/s [  75375.3 IOPS] <    12.48 us>

Profile: Default
   Test: 1 GiB (x5) [Interval: 5 sec]
   Date: 2022-06-25 15:10:11
     OS: arch unknown [linux 5.18.7-lqx1-1-lqx]


Storage device from inxi -Dxxx
:: Code ::
  ID-1: /dev/nvme0n1 vendor: Corsair model: Force MP510 size: 1.75 TiB
    speed: 31.6 Gb/s lanes: 4 type: SSD serial: 19138208000127720014
    rev: ECFM12.3 temp: 37.9 C scheme: GPT


As far as I can tell, disk performance is better than ever. Or maybe my storage should be performing faster?
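For anyone reproducing the comparison above, the active scheduler can be inspected and switched at runtime through sysfs; the device name in the commented line is just an example.

```shell
# List each block device's available schedulers; the active one appears
# in brackets, e.g. "none [bfq] kyber mq-deadline".
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue
    dev=${f#/sys/block/}; dev=${dev%/queue/scheduler}
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done
# Switch at runtime (needs root; not persistent across reboots):
# echo kyber | sudo tee /sys/block/nvme0n1/queue/scheduler
```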
stevenpusser
Status: Contributor
Joined: 14 Jan 2017
Posts: 62
For example, without anything else stressing the disk, extracting a ~100MB Liquorix kernel orig.tar.xz file takes at least five times longer. Simply copying files also suffers the same slowdown.

I'm running a backported Debian 5.18.5 kernel right now, which doesn't suffer from that, but I'm in the middle of a lengthy backport of mame 0.244 from Sid. I'll try some of your commands as soon as that finishes.
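A repeatable way to put a number on the extraction slowdown is to time the same archive on each kernel. This is a self-contained sketch with throwaway paths and a small generated archive for illustration; the real comparison used a ~100MB kernel tarball.

```shell
# Build a small throwaway .tar.xz once, then time its extraction under
# each kernel being compared.
mkdir -p /tmp/xztest/src /tmp/xztest/out
dd if=/dev/urandom of=/tmp/xztest/src/blob bs=1M count=16 2>/dev/null
tar -cJf /tmp/xztest/test.tar.xz -C /tmp/xztest src
time tar -xJf /tmp/xztest/test.tar.xz -C /tmp/xztest/out
rm -rf /tmp/xztest
```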
damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 975
Were you able to confirm whether changing the IO scheduler affects performance? I'm assuming that 5.18 regressed with BFQ somehow and it's having an outsized impact on your system.
damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 975
Just as an update, the next Liquorix release will drastically change how we do IO scheduling. Instead of effectively forcing BFQ for all drives, we'll prefer kyber for fast drives and make "mq-deadline" available as an override again.

BFQ will be reserved for slower drives, and for any drives that udev doesn't override with one of the schedulers above.
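The thread doesn't show the actual Liquorix udev rule, but purely as an illustrative sketch of the mechanism (the rule file name and match conditions here are assumptions, not the shipped rule), a per-device scheduler override could look like this:

```
# /etc/udev/rules.d/60-iosched.rules -- hypothetical example, not the
# actual Liquorix rule.
# Fast multiqueue NVMe devices: prefer kyber
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="kyber"
# Rotational disks: keep bfq
ACTION=="add|change", SUBSYSTEM=="block", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
```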
damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 975
OK, I'll go ahead and preemptively mark this as resolved.

The latest Liquorix release (5.18-11, 5.18.0-11.1) defaults to kyber as the IO scheduler for multiqueue devices. BFQ is still selected for slower single-queue devices. Also, mq-deadline is now available again under its default name rather than mq-deadline-nodefault.