
From: Claudio Jeker <cjeker@diehard.n-r-g.com>
Subject: Re: bounce buffer mbuf defrag
To: Alexander Bluhm <bluhm@openbsd.org>
Cc: Mark Kettenis <mark.kettenis@xs4all.nl>, tech@openbsd.org
Date: Thu, 29 Aug 2024 17:35:51 +0200

On Thu, Aug 29, 2024 at 05:28:58PM +0200, Alexander Bluhm wrote:
> On Thu, Aug 29, 2024 at 04:57:41PM +0200, Mark Kettenis wrote:
> > > Date: Wed, 28 Aug 2024 14:37:45 +0200
> > > From: Alexander Bluhm <bluhm@openbsd.org>
> > > 
> > > Hi,
> > > 
> > > In my tests I have seen packet drops in vio_encap() when using bounce
> > > buffers.  Some mbufs seem to be too fragmented for the pre-allocated
> > > bounce buffer pages.  Calling m_defrag() fixes the problem.
> > 
> > Hmm.  How does this happen?  Is this a case for which the function
> > would have returned EFBIG at the end in the non-bounce-buffer case?
> 
> The number of bounce buffers we allocate up front is a rough estimate.
> In the case of an mbuf chain the virtual addresses are fragmented, so
> we need more segments.  With TSO the TCP layer sends large 64K packets.
> So all bounce pages are used up with real data, but we need two
> pages when we split segments.
> 
> I am trying the diff below.  Allocate enough pages for the size plus
> nsegments extra pages for when we split the mbuf chain into segments.
> If the TCP socket send buffer is more fragmented than nsegments,
> m_defrag() will rescue us.
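
For reference, the fallback drivers typically use when the map load
fails with EFBIG looks roughly like this (just a sketch; sc->sc_dmat,
map and m are placeholder names, not the actual vio(4) code):

	error = bus_dmamap_load_mbuf(sc->sc_dmat, map, m, BUS_DMA_NOWAIT);
	if (error == EFBIG) {
		/* chain too fragmented for the map, compact it and retry */
		if (m_defrag(m, M_DONTWAIT) == 0)
			error = bus_dmamap_load_mbuf(sc->sc_dmat, map, m,
			    BUS_DMA_NOWAIT);
	}
	if (error != 0) {
		/* still too fragmented or out of resources, drop it */
		m_freem(m);
		return (error);
	}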

Btw. m_defrag() was built in a way that would allow it to be called
inside bus_dmamap_load_mbuf(), so that this function would fail less
often and drivers could handle the failures a lot better.
Also, m_defrag() could do the bounce itself (by specifying the
allocator).  With that we could remove the DMA limit from mbufs and
clusters for the rest of the network stack.
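
To illustrate, such a retry inside the API could look roughly like this
(purely a sketch, not a finished interface; _bus_dmamap_load_mbuf()
stands in for whatever MD code does the actual loading today):

	int
	bus_dmamap_load_mbuf(bus_dma_tag_t t, bus_dmamap_t map,
	    struct mbuf *m, int flags)
	{
		int error;

		error = _bus_dmamap_load_mbuf(t, map, m, flags);
		if (error != EFBIG)
			return (error);

		/* too many segments for the map, compact the chain, retry */
		error = m_defrag(m, M_DONTWAIT);
		if (error != 0)
			return (error);

		return (_bus_dmamap_load_mbuf(t, map, m, flags));
	}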

-- 
:wq Claudio