i got 3 identical new servers from supermicro – SYS-121C-TN10R. all three have the same disk setup – 2+4 NVMe drives – but Debian 12 only sees 5 of them instead of 6.
when i run dmesg on the freshly installed system i see these errors:
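a quick way to confirm the count from the OS side (lsblk ships with Debian; the nvme command needs the nvme-cli package, so that part is optional):
# count the NVMe block devices the kernel actually exposes
lsblk -d -o NAME,MODEL,SIZE | grep -i nvme
# or, if nvme-cli is installed:
nvme list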
[ 6.662992] pci 10000:80:01.0: BAR 13: no space for [io size 0x4000]
[ 6.662994] pci 10000:80:01.0: BAR 13: failed to assign [io size 0x4000]
[ 6.662995] pci 10000:80:03.0: BAR 13: no space for [io size 0x4000]
[ 6.662996] pci 10000:80:03.0: BAR 13: failed to assign [io size 0x4000]
[ 6.662999] pci 10000:80:01.0: BAR 13: no space for [io size 0x4000]
[ 6.663000] pci 10000:80:01.0: BAR 13: failed to assign [io size 0x4000]
[ 6.663001] pci 10000:80:03.0: BAR 13: no space for [io size 0x4000]
[ 6.663002] pci 10000:80:03.0: BAR 13: failed to assign [io size 0x4000]
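as far as i can tell, the 10000: prefix in those lines is the extra PCI domain that the Linux VMD driver creates, so the failing bridges are root ports sitting behind Intel VMD rather than in the regular 0000: domain. a couple of commands to check (lspci comes from the pciutils package):
# any BAR allocation failures logged at boot
dmesg | grep -i 'BAR 13'
# show PCI domains: NVMe controllers hidden behind VMD appear under 10000:,
# the rest under the normal 0000: domain
lspci -D | grep -iE 'volume management device|non-volatile'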
to solve it i had to change the PCIe bifurcation in the BIOS:
- go to Advanced > Chipset Configuration > North Bridge > IIO Configuration:
- CPU1 Configuration > IIOU3 and change it from Auto to x4x4x4x4
- CPU2 Configuration > IIOU2 and change it from Auto to x4x4x4x4
- since i was not using Intel’s not-exactly-hardware-RAID, i also went to Intel VMD Technology > NVMe Mode Switch and set it to Manual + Disable for both CPUs
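after changing those settings and rebooting, a quick sanity check that all 6 drives enumerate normally and nothing is left hidden behind a VMD domain (these are just my own verification steps, not from the manual):
# all six NVMe controllers should now sit in the regular 0000: domain
lspci -D | grep -i 'non-volatile'
# and all six namespaces should show up as block devices
ls /dev/nvme*n1
lsblk -d -o NAME,MODEL,SIZE | grep -i nvme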
i’ll admit – it’s far from obvious, and i got help from the datacenter to figure it out. in the user manual for that motherboard i found a ‘BIOS IIO Mapping’ table showing how the motherboard’s NVMe slots and PCIe lanes are connected.
perhaps it explains why those specific IOUs had to be reconfigured, but it’s still far from obvious to me.
oh well – this time Dell wins: there, regardless of which slot i plugged the NVMes into, they were visible and working without any additional fiddling.