vio: Enable multiqueue
On 21.01.25 20:03, Hrvoje Popovski wrote:
> On 21.1.2025. 19:26, Stefan Fritsch wrote:
>> Hi,
>>
>> On 16.01.25 19:19, Hrvoje Popovski wrote:
>>>>>> this diff finally enables multiqueue for vio(4). It goes on top of
>>>>>> the "virtio: Support unused virtqueues" diff from my previous mail.
>>>>>>
>>>>>> The distribution of packets to the enabled queues is not optimal.
>>>>>> To improve this, one would need the optional RSS (receive-side
>>>>>> scaling) feature, which is difficult to configure with libvirt/qemu
>>>>>> and therefore usually not available on hypervisors. Things may
>>>>>> improve with future libvirt versions. RSS support is not included
>>>>>> in this diff. But even without RSS, we have seen some nice
>>>>>> performance gains.
>>>>>
>>>>> I'm hitting this diff with a forwarding setup over ipsec for two
>>>>> days and doing ifconfig up/down, and the hosts seem stable.
>>>>> Forwarding performance is the same as without this diff.
>>>>>
>>>>> I'm sending traffic from a host connected to obsd1 vio2, then that
>>>>> traffic goes over the ipsec link between obsd1 vio1 - obsd2 vio1 and
>>>>> exits from obsd2 vio3 to the other host.
>>>>
>>>> Thanks for testing. Since the traffic distribution is done
>>>> heuristically by the hypervisor, it is often not optimal. I think it
>>>> is particularly bad for your case because the hypervisor will think
>>>> that all ipsec traffic belongs to one flow and put it into the same
>>>> queue.
>>>>
>>>> I will try to improve it a bit, but in general things get better if
>>>> you communicate with many peers.
>>>
>>> Hi,
>>>
>>> it seems that even with plain forwarding, all traffic on the egress
>>> interfaces goes to one queue. On the ingress interface, interrupts are
>>> spread nicely. Maybe because of that, forwarding performance is the
>>> same as without multiqueue vio.
>>
>> Thanks again for the testing.
>>
>> I don't see this on my test setup. Could you please check the packet
>> stats in

One thing that is different in my setup is that I have pf enabled. If I
disable pf, all packets go out on queue 0, too. This is due to the fact
that we don't get a hash from the NIC that we can put into
m_pkthdr.ph_flowid. pf will fill that in. If you enable pf on your
setup, all outgoing queues are used. However, the pktgen script with 254
source and 254 destination addresses requires around 129000 states.
Increasing the limit in pf.conf leads to forwarding getting a bit faster
(20%?) than with a single queue, though the rate varies quite a bit.

I need to think a bit more about this and how we could improve the
situation.

Stefan
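For illustration, here is a minimal sketch of the queue-selection
behaviour described above: without a flow id (no pf, no hash from the
NIC) every packet maps to queue 0, while a flow id set by pf spreads
flows across the transmit queues. This is not the actual vio(4)/ifq
code; the struct, field, and function names are made up for the example.

/*
 * Illustrative only. "has_flowid" stands in for the stack's flow-id
 * valid flag and "flowid" for m_pkthdr.ph_flowid.
 */
#include <stdint.h>

struct pkt {
	int		has_flowid;	/* was a flow id assigned (e.g. by pf)? */
	uint32_t	flowid;		/* per-flow hash value */
};

unsigned int
select_txq(const struct pkt *p, unsigned int nqueues)
{
	/* No flow id available: everything lands on queue 0. */
	if (nqueues <= 1 || !p->has_flowid)
		return 0;

	/* Flow id present: spread flows across all tx queues. */
	return p->flowid % nqueues;
}

For reference, the pf state limit mentioned above can be raised in
pf.conf with a line like "set limit states 200000" (the number here is
just an example, not the value used in the test).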