From: Stefan Fritsch <sf@sfritsch.de>
Subject: Re: vio: Enable multiqueue
To: Hrvoje Popovski <hrvoje@srce.hr>, tech@openbsd.org
Date: Tue, 21 Jan 2025 19:26:39 +0100

Hi,

On 16.01.25 at 19:19, Hrvoje Popovski wrote:
>>>> this diff finally enables multiqueue for vio(4). It goes on top of the
>>>> "virtio: Support unused virtqueues" diff from my previous mail.
>>>>
>>>> The distribution of packets to the enabled queues is not optimal. To
>>>> improve this, one would need the optional RSS (receive-side scaling)
>>>> feature which is difficult to configure with libvirt/qemu and therefore
>>>> usually not available on hypervisors. Things may improve with future
>>>> libvirt versions. RSS support is not included in this diff. But even
>>>> without RSS, we have seen some nice performance gains.
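
(For reference, the qemu side of this looks roughly as follows; the
netdev name and queue count are just examples, and vectors should be
2*queues+2. With libvirt, the equivalent is a <driver queues='4'/>
element on the interface.)

    qemu-system-x86_64 ... \
        -netdev tap,id=net0,queues=4,vhost=on \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=10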
>>
>>
>>>
>>> I've been hitting this diff with a forwarding setup over ipsec for
>>> two days, doing ifconfig up/down, and the hosts seem stable.
>>> Forwarding performance is the same as without this diff.
>>>
>>> I'm sending traffic from a host connected to obsd1 vio2; that traffic
>>> goes over the ipsec link between obsd1 vio1 and obsd2 vio1 and exits
>>> from obsd2 vio3 to another host.
>>
>>
>> Thanks for testing. Since the traffic distribution is done heuristically
>> by the hypervisor, it is often not optimal. I think it is particularly
>> bad in your case because the hypervisor will think that all ipsec
>> traffic belongs to one flow and put it into the same queue.
>>
>> I will try to improve it a bit, but in general things get better if you
>> communicate with many peers.
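
(To make that concrete: the host typically picks a queue as something
like hash(5-tuple) mod nqueues, and every ESP packet between the two
gateways presents the same tuple. A toy sketch, with made-up addresses
and sha256 standing in for whatever hash the hypervisor really uses:

    tuple="192.168.100.1 192.168.100.2 50"   # src, dst, proto 50 = ESP
    h=$(echo $tuple | sha256 | cut -c1-8)    # same tuple -> same hash value
    echo $((0x$h % 4))                       # -> always the same one of 4 queues

TCP or UDP flows differ in addresses and ports, so they spread out.)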
> 
> Hi,
> 
> it seems that even with plain forwarding, all traffic on the egress
> interface goes to one queue. On the ingress interface, interrupts are
> spread nicely. Maybe that's why forwarding performance is the same as
> without multiqueue vio.

Thanks again for the testing.

I don't see this on my test setup. Could you please check the per-queue 
packet stats in

kstat vio*::txq:

and, while the benchmark is running, the CPU load distribution of the 
softnet tasks in systat -> 5?
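
If it helps, a quick way to sample those counters while traffic is
flowing is a plain shell loop (vio3 here is just a stand-in for
whichever egress interface you are testing):

    while sleep 1; do kstat vio3::txq:; done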

Cheers,
Stefan

> 
> 
> I'm sending traffic with random source addresses in 10.111/24 from the
> host connected to vio2 to random destinations in 10.222/24 on the host
> connected to vio3.
> 
> 
> netstat:
> Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
> 10.111/16          192.168.111.11     UGS        0        0     -     8 vio2
> 10.222/16          192.168.222.11     UGS        1 1957400238     -     8 vio3
> 
> 
> obsd1# vmstat -iz
> interrupt                       total     rate
> irq144/com0                       484        0
> irq48/acpi0                         0        0
> irq49/uhci0                         0        0
> irq50/uhci1                         0        0
> irq51/uhci2                         0        0
> irq52/ehci0                         0        0
> irq49/ppb0                          0        0
> irq49/ppb1                          0        0
> irq49/ppb2                          0        0
> irq49/ppb3                          0        0
> irq49/uhci3                         0        0
> irq50/uhci4                         0        0
> irq51/uhci5                         0        0
> irq52/ehci1                        20        0
> irq64/vio0:0                        0        0
> irq65/vio0:1                        9        0
> irq66/vio0:2                      286        0
> irq67/vio0:3                     1467        0
> irq68/vio0:4                      543        0
> irq69/vio0:5                      841        0
> irq70/vio1:0                        0        0
> irq71/vio1:1                        9        0
> irq72/vio1:2                        1        0
> irq73/vio1:3                        1        0
> irq74/vio1:4                        0        0
> irq75/vio1:5                        1        0
> irq76/vio2:0                        0        0
> irq77/vio2:1                        9        0
> irq78/vio2:2                 91031269    37170
> irq79/vio2:3                154334709    63019
> irq80/vio2:4                152971269    62462
> irq81/vio2:5                155102163    63332
> irq82/vio3:0                        0        0
> irq83/vio3:1                        9        0
> irq84/vio3:2                  1928679      787
> irq85/vio3:3                        4        0
> irq86/vio3:4                        1        0
> irq87/vio3:5                        0        0
> irq53/vioscsi0:0                    0        0
> irq54/vioscsi0:1                    0        0
> irq55/vioscsi0:2                    0        0
> irq56/vioscsi0:3                45856       18
> irq57/ahci0                         0        0
> irq49/ichiic0                       0        0
> irq145/pckbc0                       0        0
> irq146/pckbc0                       0        0
> irq0/clock                    1358939      554
> irq0/ipi                     10597761     4327
> Total                       567374330   231675
> 
> 
> Thank you for mq vio ....