increase the 9k mbuf clusters to 9k + 128 bytes
On Wed, Apr 22, 2026 at 02:46:49PM +1000, David Gwynne wrote:
> tl;dr: we can make the 9k clusters a bit bigger for free, which
> will give me some wiggle room in places that like to work in kilobyte
> chunks but don't make allowances for payload alignment. iykyk.
>
> long version:
>
> pools try to amortise the cost of items against the underlying
> kernel memory allocator by rounding the "page" size up to fit at
> least 8 items, and then rounding that up to the next power of 2.
> the 9k clusters are 9 * 1024 bytes, which is 72k after being
> multiplied by 8, which becomes 128k cos it's the next power of 2.
> if you divide 128k by 9k, you get 14 and some change. there's enough
> change that we can raise the cluster size by 128 bytes without
> affecting the page size or the number of items on the page. ie,
> it's still going to use 128k "pages" and fit 14 clusters.
>
> i can take advantage of this in some drivers for stupid hardware,
> so given the above it seems like a plan without any drawbacks apart
> from the pool name getting a bit bigger.
>
> im going to commit this in the next day or so unless anyone can
> find a good reason for me not to.
>
> Index: sys/kern/uipc_mbuf.c
> ===================================================================
> RCS file: /cvs/src/sys/kern/uipc_mbuf.c,v
> diff -u -p -r1.304 uipc_mbuf.c
> --- sys/kern/uipc_mbuf.c 5 Feb 2026 03:26:00 -0000 1.304
> +++ sys/kern/uipc_mbuf.c 22 Apr 2026 04:19:35 -0000
> @@ -109,12 +109,12 @@ u_int mclsizes[MCLPOOLS] = {
> MCLBYTES + 2, /* ETHER_ALIGNED 2k mbufs */
> 4 * 1024,
> 8 * 1024,
> - 9 * 1024,
> + (9 * 1024) + 128, /* use more of the pool page for ETHER_ALIGNED etc */
> 12 * 1024,
> 16 * 1024,
> 64 * 1024
> };
> -static char mclnames[MCLPOOLS][8];
> +static char mclnames[MCLPOOLS][16];
> struct pool mclpools[MCLPOOLS];
>
> struct pool *m_clpool(u_int);
> Index: usr.bin/netstat/mbuf.c
> ===================================================================
> RCS file: /cvs/src/usr.bin/netstat/mbuf.c,v
> diff -u -p -r1.47 mbuf.c
> --- usr.bin/netstat/mbuf.c 22 Jun 2025 11:34:40 -0000 1.47
> +++ usr.bin/netstat/mbuf.c 22 Apr 2026 04:19:35 -0000
> @@ -57,7 +57,7 @@ char *mclnames[] = {
> "mcl2k2",
> "mcl4k",
> "mcl8k",
> - "mcl9k",
> + "mcl9k128",
> "mcl12k",
> "mcl16k",
> "mcl64k"
>
OK claudio@
somewhat unrelated note:
We will still have an issue with such big mbuf clusters: depending on the
pmap mode you need to find a DMA-reachable, physically consecutive 128k
range that is also aligned on a 128k boundary. On machines with limited
memory this can become very hard.
--
:wq Claudio