bounce buffer mbuf defrag
On Thu, Aug 29, 2024 at 04:57:41PM +0200, Mark Kettenis wrote:
> > Date: Wed, 28 Aug 2024 14:37:45 +0200
> > From: Alexander Bluhm <bluhm@openbsd.org>
> >
> > Hi,
> >
> > In my tests I have seen packet drops in vio_encap() when using bounce
> > buffers. Some mbufs seem to be too fragmented for the pre-allocated
> > bounce buffer pages. Calling m_defrag() fixes the problem.
>
> Hmm. How does this happen? Is this a case for which the function
> would have returned EFBIG at the end in the non-bounce-buffer case?
The number of bounce buffers we allocate up front is a rough estimate.
In case of an mbuf chain the virtual addresses are fragmented, so we
need more segments.  With TSO the TCP layer sends large 64K packets.
So all bounce pages are used up with real data, but we need two
pages whenever a segment gets split.
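To put rough numbers on it (assuming PAGE_SIZE 4096 and a map with 16
segments; the segment count is just an example, not necessarily what
vio(4) uses), a 64K TSO packet already needs 16 bounce pages for the
data alone, so the old "+ 1" leaves room for only a single split:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define round_page(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int
main(void)
{
	unsigned long size = 64 * 1024;		/* TSO packet from TCP */
	unsigned long nsegments = 16;		/* example segment count */

	/* old: one spare page, a single split segment uses it up */
	printf("old npages: %lu\n", round_page(size) / PAGE_SIZE + 1);
	/* new: one spare page per segment that may get split */
	printf("new npages: %lu\n", round_page(size) / PAGE_SIZE + nsegments);
	return 0;
}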
I am trying the diff below.  It allocates enough pages for the size
plus nsegments extra pages for when we split the mbuf chain into
segments.  If the TCP socket send buffer is more fragmented than
nsegments, m_defrag() will rescue us.
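For context, the load/defrag pattern on the driver side looks roughly
like this; the function name and parameters are placeholders, not the
actual vio_encap() code:

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/mbuf.h>
#include <machine/bus.h>

int
example_encap(bus_dma_tag_t dmat, bus_dmamap_t map, struct mbuf *m)
{
	int error;

	error = bus_dmamap_load_mbuf(dmat, map, m, BUS_DMA_NOWAIT);
	if (error == EFBIG) {
		/* chain too fragmented for the map: linearize and retry */
		if (m_defrag(m, M_DONTWAIT) == 0)
			error = bus_dmamap_load_mbuf(dmat, map, m,
			    BUS_DMA_NOWAIT);
	}
	return (error);
}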
Without bouncing I see no problems in my tests.
bluhm
Index: arch/amd64/amd64/bus_dma.c
===================================================================
RCS file: /data/mirror/openbsd/cvs/src/sys/arch/amd64/amd64/bus_dma.c,v
diff -u -p -r1.58 bus_dma.c
--- arch/amd64/amd64/bus_dma.c 28 Aug 2024 18:21:15 -0000 1.58
+++ arch/amd64/amd64/bus_dma.c 29 Aug 2024 14:36:07 -0000
@@ -140,7 +140,7 @@ _bus_dmamap_create(bus_dma_tag_t t, bus_
 
 	if (use_bounce_buffer) {
 		/* this many pages plus one in case we get split */
-		npages = round_page(size) / PAGE_SIZE + 1;
+		npages = round_page(size) / PAGE_SIZE + nsegments;
 		if (npages < nsegments)
 			npages = nsegments;
 		mapsize += sizeof(struct vm_page *) * npages;