UDP send in parallel
On Fri, Jan 05, 2024 at 11:06:16PM +0100, Alexander Bluhm wrote:
> Hi,
>
> Sending UDP packets via datagram socket is MP safe now. Same applies
> to raw IPv4 and IPv6, and divert sockets. Switch sosend() from
> exclusive net lock to shared net lock in combination with per socket
> lock. TCP and GRE still use exclusive net lock.
>
> Please test this diff if you have applications that run multithreaded
> and send lots of UDP packets.
This looks good. I bombarded a pdns_recursor with lots of UDP work
using multiple threads, and that worked fine.
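Roughly the kind of sender I mean, as a sketch (target address, port,
thread and packet counts are made up here for illustration, not what I
actually ran):

	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <arpa/inet.h>
	#include <pthread.h>
	#include <string.h>
	#include <err.h>

	#define NTHREADS	8

	static void *
	sender(void *arg)
	{
		struct sockaddr_in sin;
		char buf[512];
		int s, i;

		if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
			err(1, "socket");
		memset(&sin, 0, sizeof(sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(5300);
		sin.sin_addr.s_addr = inet_addr("127.0.0.1");
		memset(buf, 'x', sizeof(buf));
		/* each thread hammers sendto() on its own socket */
		for (i = 0; i < 1000000; i++)
			if (sendto(s, buf, sizeof(buf), 0,
			    (struct sockaddr *)&sin, sizeof(sin)) == -1)
				err(1, "sendto");
		return NULL;
	}

	int
	main(void)
	{
		pthread_t t[NTHREADS];
		int i;

		for (i = 0; i < NTHREADS; i++)
			if (pthread_create(&t[i], NULL, sender, NULL) != 0)
				err(1, "pthread_create");
		for (i = 0; i < NTHREADS; i++)
			pthread_join(t[i], NULL);
		return 0;
	}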
On a side note: when using multiple threads, with each thread having
its own SO_REUSEPORT socket open for the same address/port, a single
thread (socket) gets all the incoming packets. That is not a
regression, but it would be really nice if some day the packets got
distributed across the threads/sockets in that case. A rough sketch of
the setup I mean follows below.
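Something like this (port number and thread count made up); as things
stand, only one of these sockets ends up receiving the traffic:

	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <arpa/inet.h>
	#include <pthread.h>
	#include <string.h>
	#include <err.h>

	#define NTHREADS	4

	static void *
	worker(void *arg)
	{
		struct sockaddr_in sin;
		char buf[1500];
		int s, on = 1;

		if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
			err(1, "socket");
		if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &on,
		    sizeof(on)) == -1)
			err(1, "setsockopt");
		memset(&sin, 0, sizeof(sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(5300);
		sin.sin_addr.s_addr = htonl(INADDR_ANY);
		/* every thread binds its own socket to the same port */
		if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
			err(1, "bind");
		/* in practice one of these sockets sees all the packets */
		for (;;)
			if (recv(s, buf, sizeof(buf), 0) == -1)
				err(1, "recv");
		return NULL;
	}

	int
	main(void)
	{
		pthread_t t[NTHREADS];
		int i;

		for (i = 0; i < NTHREADS; i++)
			if (pthread_create(&t[i], NULL, worker, NULL) != 0)
				err(1, "pthread_create");
		for (i = 0; i < NTHREADS; i++)
			pthread_join(t[i], NULL);
		return 0;
	}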
-Otto
>
> ok?
>
> bluhm
>
> Index: kern/uipc_socket.c
> ===================================================================
> RCS file: /data/mirror/openbsd/cvs/src/sys/kern/uipc_socket.c,v
> diff -u -p -r1.312 uipc_socket.c
> --- kern/uipc_socket.c 19 Dec 2023 21:34:22 -0000 1.312
> +++ kern/uipc_socket.c 3 Jan 2024 21:34:53 -0000
> @@ -582,7 +582,7 @@ sosend(struct socket *so, struct mbuf *a
>
> #define snderr(errno) { error = errno; goto release; }
>
> - solock(so);
> + solock_shared(so);
> restart:
> if ((error = sblock(so, &so->so_snd, SBLOCKWAIT(flags))) != 0)
> goto out;
> @@ -635,9 +635,9 @@ restart:
> if (flags & MSG_EOR)
> top->m_flags |= M_EOR;
> } else {
> - sounlock(so);
> + sounlock_shared(so);
> error = m_getuio(&top, atomic, space, uio);
> - solock(so);
> + solock_shared(so);
> if (error)
> goto release;
> space -= top->m_pkthdr.len;
> @@ -665,7 +665,7 @@ release:
> so->so_snd.sb_state &= ~SS_ISSENDING;
> sbunlock(so, &so->so_snd);
> out:
> - sounlock(so);
> + sounlock_shared(so);
> m_freem(top);
> m_freem(control);
> return (error);
>