From: Vitaliy Makkoveev
Subject: Re: Do not load `so_qlen' within soreadable()
To: Alexander Bluhm
Cc: tech@openbsd.org
Date: Thu, 24 Jul 2025 20:51:08 +0300

On Thu, Jul 24, 2025 at 07:23:31PM +0200, Alexander Bluhm wrote:
> On Thu, Jul 24, 2025 at 03:45:28PM +0300, Vitaliy Makkoveev wrote:
> > While calling soreadable() from filt_soread(), `so_qlen' is already
> > loaded into the local `qlen' and used as "rv = qlen || soreadable(so)"
> > for the return value. Currently this path of filt_soread() is locked
> > with solock(), so if `so_qlen' is zero the second load within
> > soreadable() only consumes CPU cycles. But in a hypothetically
> > unlocked filt_soread() the "rv = qlen || soreadable(so)" is
> > inconsistent. We have already made some moves toward calling
> > filt_soread() without the socket lock, so adjust soreadable() for
> > that.
> >
> > Apart from filt_soread(), the only caller of soreadable() is
> > sounsplice(). Since the socket could become a listening socket and
> > have some pending connections, load `so_qlen' together with
> > soreadable().
>
> sosplice() does not allow SO_ACCEPTCONN sockets.
> solisten() does not allow isspliced(so) || issplicedback(so) sockets.

The first hunk is in sounsplice(). After we clear SB_SPLICE and set
`ssp_socket' to NULL, `so' is not spliced anymore. It can't be spliced
again because sblock() is still held on `so', but the socket lock is
not held, so nothing prevents `so' from becoming a listening socket and
receiving some pending connections before sounsplice() takes the socket
lock.
void
sounsplice(struct socket *so, struct socket *sosp, int freeing)
{
	sbassertlocked(&so->so_rcv);

	mtx_enter(&so->so_rcv.sb_mtx);
	mtx_enter(&sosp->so_snd.sb_mtx);
	so->so_rcv.sb_flags &= ~SB_SPLICE;
	sosp->so_snd.sb_flags &= ~SB_SPLICE;
	KASSERT(so->so_sp->ssp_socket == sosp);
	KASSERT(sosp->so_sp->ssp_soback == so);
	so->so_sp->ssp_socket = sosp->so_sp->ssp_soback = NULL;
	mtx_leave(&sosp->so_snd.sb_mtx);
	mtx_leave(&so->so_rcv.sb_mtx);

	task_del(sosplice_taskq, &so->so_splicetask);
	timeout_del(&so->so_idleto);

	/* Do not wakeup a socket that is about to be freed. */
	if ((freeing & SOSP_FREEING_READ) == 0) {
		int readable;

		solock_shared(so);
		mtx_enter(&so->so_rcv.sb_mtx);
		readable = soreadable(so);
		mtx_leave(&so->so_rcv.sb_mtx);
		if (readable)
			sorwakeup(so);
		sounlock_shared(so);
	}

> The change in sounsplice() is not necessary and confusing, so->so_qlen
> cannot happen.
>
> > Note, for this release cycle I want to keep filt_soread() locked.
> > Also, soreadable() loads `so_error', which is already loaded at the
> > beginning of filt_soread(). We don't use this value in the
> > SO_ACCEPTCONN path, so this load could be moved to the SS_CANTRCVMORE
> > path where the value is actually used, but that is not related to
> > this diff.
> >
> > ok?
>
> OK bluhm@ for the second chunk in soreadable().
>
> > Index: sys/kern/uipc_socket.c
> > ===================================================================
> > RCS file: /cvs/src/sys/kern/uipc_socket.c,v
> > diff -u -p -r1.383 uipc_socket.c
> > --- sys/kern/uipc_socket.c	23 Jul 2025 20:18:04 -0000	1.383
> > +++ sys/kern/uipc_socket.c	24 Jul 2025 12:02:00 -0000
> > @@ -1458,7 +1458,7 @@ sounsplice(struct socket *so, struct soc
> >  
> >  		solock_shared(so);
> >  		mtx_enter(&so->so_rcv.sb_mtx);
> > -		readable = soreadable(so);
> > +		readable = so->so_qlen || soreadable(so);
> >  		mtx_leave(&so->so_rcv.sb_mtx);
> >  		if (readable)
> >  			sorwakeup(so);
> > Index: sys/sys/socketvar.h
> > ===================================================================
> > RCS file: /cvs/src/sys/sys/socketvar.h,v
> > diff -u -p -r1.158 socketvar.h
> > --- sys/sys/socketvar.h	8 Apr 2025 15:31:22 -0000	1.158
> > +++ sys/sys/socketvar.h	24 Jul 2025 12:02:00 -0000
> > @@ -387,7 +387,7 @@ soreadable(struct socket *so)
> >  	soassertlocked_readonly(so);
> >  	if (isspliced(so))
> >  		return 0;
> > -	return (so->so_rcv.sb_state & SS_CANTRCVMORE) || so->so_qlen ||
> > +	return (so->so_rcv.sb_state & SS_CANTRCVMORE) ||
> >  	    so->so_error || so->so_rcv.sb_cc >= so->so_rcv.sb_lowat;
> >