From: Stefan Fritsch <sf@openbsd.org>
Subject: Re: vio: Enable multiqueue
To: Hrvoje Popovski <hrvoje@srce.hr>, tech@openbsd.org
Date: Sun, 26 Jan 2025 19:47:04 +0100

On Sun, 26 Jan 2025, Stefan Fritsch wrote:

> > On 21.01.25 at 20:03, Hrvoje Popovski wrote:
> > On 21.1.2025. 19:26, Stefan Fritsch wrote:
> > > Hi,
> > > 
> > > On 16.01.25 at 19:19, Hrvoje Popovski wrote:
> > > > > > > this diff finally enables multiqueue for vio(4). It goes on top of the
> > > > > > > "virtio: Support unused virtqueues" diff from my previous mail.
> > > > > > > 
> > > > > > > The distribution of packets to the enabled queues is not optimal.
> > > > > > > To improve this, one would need the optional RSS (receive-side
> > > > > > > scaling) feature, which is difficult to configure with libvirt/qemu
> > > > > > > and therefore usually not available on hypervisors. Things may
> > > > > > > improve with future libvirt versions. RSS support is not included
> > > > > > > in this diff. But even without RSS, we have seen some nice
> > > > > > > performance gains.
> > > > > > 
> > > > > > I've been hitting this diff with a forwarding setup over ipsec for
> > > > > > two days, doing ifconfig up/down, and the hosts seem stable.
> > > > > > Forwarding performance is the same as without this diff.
> > > > > > 
> > > > > > I'm sending traffic from a host connected to obsd1 vio2; that
> > > > > > traffic goes over the ipsec link between obsd1 vio1 and obsd2 vio1
> > > > > > and exits from obsd2 vio3 to the other host.
> > > > > 
> > > > > Thanks for testing. Since the traffic distribution is done
> > > > > heuristically by the hypervisor, it is often not optimal. I think it
> > > > > is particularly bad in your case because the hypervisor will treat
> > > > > all ipsec traffic as one flow and put it into the same queue.
> > > > > 
> > > > > I will try to improve it a bit, but in general things get better if you
> > > > > communicate with many peers.
> > > > 
> > > > Hi,
> > > > 
> > > > it seems that even with plain forwarding, all traffic on the egress
> > > > interface goes to one queue. On the ingress interface, interrupts are
> > > > spread nicely. Maybe that is why forwarding performance is the same
> > > > as without multiqueue vio.
> > > 
> > > Thanks again for testing.
> > > 
> > > I don't see this on my test setup. Could you please check the packet
> > > stats in
> 
> One thing that is different in my setup is that I have pf enabled. If I
> disable pf, all packets go out on queue 0, too. This is because we don't
> get a hash from the NIC that we can put into m_pkthdr.ph_flowid; pf will
> fill that in. If I enable pf on your setup, all outgoing queues are used.
> However, the pktgen script with 254 source and 254 dst addresses requires
> around 129000 states. Increasing the state limit in pf.conf makes
> forwarding a bit faster (20%?) than with a single queue, though the rate
> varies quite a bit.
> 
> I need to think a bit more about this and how we could improve the situation.
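
A minimal sketch of the mechanism described above, using the real mbuf
packet header fields named in the quote (M_FLOWID in csum_flags, and
ph_flowid), but with a made-up helper name and hash argument:

	/*
	 * Illustrative only, not actual pf or driver code: stamp a
	 * flow id onto a packet so the transmit queue choice made in
	 * priq_idx() stays consistent for all packets of the flow.
	 */
	static void
	example_set_flowid(struct mbuf *m, unsigned int hash)
	{
		SET(m->m_pkthdr.csum_flags, M_FLOWID);
		m->m_pkthdr.ph_flowid = hash;
	}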

With this diff and pf off, I get about twice the forwarding speed on your 
test setup (4 vcpus/4 queues).

To everyone: Is this something that could possibly be committed? In cases 
where the NIC gives us RSS hashes it should not change anything.

diff --git a/sys/net/ifq.c b/sys/net/ifq.c
index 7368aa50a57..547cfb26d84 100644
--- a/sys/net/ifq.c
+++ b/sys/net/ifq.c
@@ -903,6 +903,8 @@ priq_idx(unsigned int nqueues, const struct mbuf *m)
 
 	if (ISSET(m->m_pkthdr.csum_flags, M_FLOWID))
 		flow = m->m_pkthdr.ph_flowid;
+	else
+		flow = cpu_number();
 
 	return (flow % nqueues);
 }
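
Applied in context, priq_idx() would read roughly as below. The lines
outside the hunk are reconstructed here for illustration and may not
match sys/net/ifq.c verbatim:

	unsigned int
	priq_idx(unsigned int nqueues, const struct mbuf *m)
	{
		unsigned int flow = 0;

		/* prefer a flow id set from the NIC's RSS hash or by pf */
		if (ISSET(m->m_pkthdr.csum_flags, M_FLOWID))
			flow = m->m_pkthdr.ph_flowid;
		else
			/* no flow id: spread load by the submitting CPU */
			flow = cpu_number();

		return (flow % nqueues);
	}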

From: Chris Cappuccio <chris@nmedia.net>
Subject: Re: vio: Enable multiqueue
To: Stefan Fritsch <sf@openbsd.org>
Cc: Hrvoje Popovski <hrvoje@srce.hr>, tech@openbsd.org
Date: Sun, 26 Jan 2025 11:26:37 -0800

Stefan Fritsch [sf@openbsd.org] wrote:
> 
> With this diff and pf off, I get about twice the forwarding speed on your 
> test setup (4 vcpus/4 queues).
> 
> To everyone: Is this something that could possibly be committed? In cases 
> where the NIC gives us RSS hashes it should not change anything.
> 
> diff --git a/sys/net/ifq.c b/sys/net/ifq.c
> index 7368aa50a57..547cfb26d84 100644
> --- a/sys/net/ifq.c
> +++ b/sys/net/ifq.c
> @@ -903,6 +903,8 @@ priq_idx(unsigned int nqueues, const struct mbuf *m)
>  
>  	if (ISSET(m->m_pkthdr.csum_flags, M_FLOWID))
>  		flow = m->m_pkthdr.ph_flowid;
> +	else
> +		flow = cpu_number();
>  
>  	return (flow % nqueues);
>  }

There's already sys/net/toeplitz.c.
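
A sketch of how that could be used for a software flow hash on IPv4,
assuming the stoeplitz_hash_ip4port() interface and the global
stoeplitz_cache from sys/net/toeplitz.h; the helper name and the way
the header fields arrive here are made up for illustration:

	#include <net/toeplitz.h>

	/*
	 * Symmetric Toeplitz hash over addresses and ports, stored as
	 * the packet's flow id.  Symmetric means (src, dst) and
	 * (dst, src) hash to the same value, so both directions of a
	 * connection land on the same queue.
	 */
	static void
	example_flowid_ip4(struct mbuf *m, in_addr_t src, in_addr_t dst,
	    uint16_t sport, uint16_t dport)
	{
		uint16_t hash;

		hash = stoeplitz_hash_ip4port(stoeplitz_cache, src, dst,
		    sport, dport);
		SET(m->m_pkthdr.csum_flags, M_FLOWID);
		m->m_pkthdr.ph_flowid = hash;
	}

Note that ESP has no ports, so such a hash degenerates to the address
pair, which is exactly the one-flow behaviour described for the ipsec
test earlier in the thread.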