Newest uvm_purge() to try
On 22/05/25(Thu) 14:45, Mark Kettenis wrote:
> > Date: Thu, 22 May 2025 11:58:38 +0200
> > From: Martin Pieuchot <mpi@grenadille.net>
> >
> > Here's the latest version incorporating kettenis@'s pmap_purge() and
> > claudio@'s feedbacks. I observed a performance improvement of 10-15%
> > on workloads using 24 to 48 CPUs. %sys time obviously increases now
> > that tearing down the VM space is accounted for.
> >
> > Please test and report back.
>
> Going to play with this!
>
> I have a question I wanted to ask about the diff though...
>
> > Index: kern/kern_exit.c
> > ===================================================================
> > RCS file: /cvs/src/sys/kern/kern_exit.c,v
> > diff -u -p -r1.248 kern_exit.c
> > --- kern/kern_exit.c 21 May 2025 09:42:59 -0000 1.248
> > +++ kern/kern_exit.c 22 May 2025 09:30:43 -0000
> > @@ -242,9 +242,14 @@ exit1(struct proc *p, int xexit, int xsi
> > if (pr->ps_pptr->ps_sigacts->ps_sigflags & SAS_NOCLDWAIT)
> > atomic_setbits_int(&pr->ps_flags, PS_NOZOMBIE);
> >
> > -#ifdef __HAVE_PMAP_PURGE
> > - pmap_purge(p);
> > + /* Teardown the virtual address space. */
> > + if ((p->p_flag & P_SYSTEM) == 0) {
> > +#ifdef MULTIPROCESSOR
> > + __mp_release_all(&kernel_lock);
>
> Why do we need an __mp_release_all() here? Is that because we have
> multiple paths into exit1() that have different recurse counts for the
> kernel lock?
Exactly.  At least the path that comes from the single-thread suspend
code, where the recurse count might be > 1.