Switch `so_snd' of udp(4) sockets to the new locking scheme
On Wed, Jul 10, 2024 at 10:39:14PM +0300, Vitaliy Makkoveev wrote:
> udp_send() and the following udp{,6}_output() do not append packets to
> the `so_snd' socket buffer. This means that for the sosend() and
> sosplice() sending paths pru_send() is a dummy, so there is no problem
> in running them simultaneously on the same socket.
>
> This diff leaves udp(4) somove() under the exclusive solock() and pushes
> the shared solock() deep down into sosend(), so that it is taken only
> around pru_send(). Since sosend() doesn't modify `so_snd', the unlocked
> `so_snd' space checks within somove() are safe. The corresponding
> `sb_state' and `sb_flags' modifications are protected by the `sb_mtx'
> mutex(9).
>
> As a non-obvious bonus, this diff removes the last place where sbwait()
> is called with the shared netlock held. tcp(4) sockets call it with the
> exclusive netlock; the rest rely on the `sb_mtx' mutex(9). This allows
> removing the shared netlock handling from sosleep_nsec() and reworking
> solock_shared() to take `so_lock' before the netlock.
>
> ok? No new witness asserts during sosplice and ffs/nfs regress runs.
tested and OK bluhm@
Don't add a double empty line below:
> @@ -246,6 +246,19 @@ sbspace(struct socket *so, struct sockbu
>
> return lmin(sb->sb_hiwat - sb->sb_cc, sb->sb_mbmax - sb->sb_mbcnt);
> }
> +
> +static inline long
> +sbspace(struct socket *so, struct sockbuf *sb)
> +{
> + long ret;
> +
> + sb_mtx_lock(sb);
> + ret = sbspace_locked(so, sb);
> + sb_mtx_unlock(sb);
> +
> + return ret;
> +}
> +
>
> /* do we have to send all at once on a socket? */
> #define sosendallatonce(so) \