[PATCH] convert mpl ticket lock to anderson's lock
Theo de Raadt <deraadt@openbsd.org> wrote:
> > Per that post, the primary problem concerns page allocation and the
> > way mutexes are implemented
>
> While looking at things in the pagedaemon, I've become concerned
> about how uvm_lock_pageq() is used
>
> Look at how uvm_pageactivate() and uvm_pagedeactivate() must be called
> unlocked
>
> So locked-callers unlock temporarily to call them, resulting in
> very strange unlock/lock/tiny-operation/unlock/lock sequences.
>
> Without the locks the tiny-operation would be instantaneous, but this
> design means there is a tremendous bubble if someone else wanted the lock.
>
> I think these low-level changes in locking and mutexes will have little
> effect if we are using them extremely poorly, which we are.
This is why.
revision 1.184
date: 2025/12/10 08:38:18; author: mpi; state: Exp; lines: +44 -37; commitid: ACX7PGcPcpfRHo20;
Push `pageqlock' dances inside uvm_page{de,}activate() & uvm_pagewire().
Tested during multiple bulks on amd64, i386, arm64 and sparc64 by jca@,
phessler@ and sthen@.
Pushing the lock into the lower-level functions in turn requires that
many higher-level callers unlock, call a function which now locks itself,
and upon return relock for their own needs.
I don't think that reduces contention; it only hides it. But it
definitely increases latency for the specific caller, because someone else
can win the mutex during the two release windows. Parallelism may have
increased, but the latency of critical functions calling these will suffer.
If those critical functions didn't do this dance, they would finish quicker
and the system would be less contended.
To me this looks like deck chairs successfully rearranged.