
From: Jeremie Courreges-Anglas <jca@wxcvbn.org>
Subject: Re: Newest uvm_purge() to try
To: tech@openbsd.org
Date: Mon, 2 Jun 2025 21:18:59 +0200

On Mon, Jun 02, 2025 at 10:55:33AM +0200, Martin Pieuchot wrote:
> On 28/05/25(Wed) 22:24, Mark Kettenis wrote:
> > > Date: Sat, 24 May 2025 13:35:16 +0200
> > > From: Mark Kettenis <mark.kettenis@xs4all.nl>
> > > 
> > > > Date: Sat, 24 May 2025 12:52:42 +0200
> > > > From: Martin Pieuchot <mpi@grenadille.net>
> > > > 
> > > > On 23/05/25(Fri) 15:10, Mark Kettenis wrote:
> > > > > > Date: Fri, 23 May 2025 13:15:01 +0200
> > > > > > From: Claudio Jeker <cjeker@diehard.n-r-g.com>
> > > > > > 
> > > > > > On Fri, May 23, 2025 at 10:03:11AM +0200, Martin Pieuchot wrote:
> > > > > > > On 22/05/25(Thu) 14:45, Mark Kettenis wrote:
> > > > > > > > > Date: Thu, 22 May 2025 11:58:38 +0200
> > > > > > > > > From: Martin Pieuchot <mpi@grenadille.net>
> > > > > > > > > 
> > > > > > > > > Here's the latest version incorporating kettenis@'s pmap_purge() and
> > > > > > > > > claudio@'s feedbacks.  I observed a performance improvement of 10-15%
> > > > > > > > > on workloads using 24 to 48 CPUs.  %sys time obviously increases now
> > > > > > > > > that tearing down the VM space is accounted for.
> > > > > > > > > 
> > > > > > > > > Please test and report back.
> > > > > > > > 
> > > > > > > > Going to play with this!
> > > > > > > > 
> > > > > > > > I have a question I wanted to ask about the diff though...
> > > > > > > > 
> > > > > > > > > Index: kern/kern_exit.c
> > > > > > > > > ===================================================================
> > > > > > > > > RCS file: /cvs/src/sys/kern/kern_exit.c,v
> > > > > > > > > diff -u -p -r1.248 kern_exit.c
> > > > > > > > > --- kern/kern_exit.c	21 May 2025 09:42:59 -0000	1.248
> > > > > > > > > +++ kern/kern_exit.c	22 May 2025 09:30:43 -0000
> > > > > > > > > @@ -242,9 +242,14 @@ exit1(struct proc *p, int xexit, int xsi
> > > > > > > > >  		if (pr->ps_pptr->ps_sigacts->ps_sigflags & SAS_NOCLDWAIT)
> > > > > > > > >  			atomic_setbits_int(&pr->ps_flags, PS_NOZOMBIE);
> > > > > > > > >  
> > > > > > > > > -#ifdef __HAVE_PMAP_PURGE
> > > > > > > > > -		pmap_purge(p);
> > > > > > > > > +		/* Teardown the virtual address space. */
> > > > > > > > > +		if ((p->p_flag & P_SYSTEM) == 0) {
> > > > > > > > > +#ifdef MULTIPROCESSOR
> > > > > > > > > +			__mp_release_all(&kernel_lock);
> > > > > > > > 
> > > > > > > > Why do we need an __mp_release_all() here?  Is that because we have
> > > > > > > > multiple paths into exit1() that have different recurse counts for the
> > > > > > > > kernel lock?
> > > > > > > 
> > > > > > > Exactly.  At least the one that comes from the single-thread
> > > > > > > suspend code, where the count might be > 1.
> > > > > > 
> > > > > > Right, single_thread_check_locked() calls exit1() and takes
> > > > > > KERNEL_LOCK() beforehand, since it may run without KERNEL_LOCK().
> > > > > 
> > > > > Hopefully we can fix that some day.  But for now, maybe this needs a
> > > > > comment saying that we might be called recursively locked and that
> > > > > releasing all locks here is ok since exit1() won't return?
> > > > 
> > > > Sure, updated diff below does that.
> > > 
> > > Excellent!  Once this has gone through sufficient testing, I think
> > > this should go in.
> > >
> > 
> > powerpc64 seems to be happy; did some kernel, libc and clang builds
> > without issues.
> 
> So far this version has been tested on amd64, arm64, riscv64 and powerpc64.  
> 
> Any more tests?  Any oks?

Survives a ports build workload on an amd64 AMD VM, an arm64 Mini M2
Pro, a riscv64 HiFive Unmatched host, and also a sparc64 T4-2 LDOM with
pmap_collect() neutered.  ok jca@

-- 
jca