From: Claudio Jeker <cjeker@diehard.n-r-g.com>
Subject: Re: sched_unpeg_proc()
To: tech@openbsd.org
Date: Mon, 8 Jul 2024 14:22:51 +0200

On Mon, Jul 08, 2024 at 01:21:47PM +0200, Martin Pieuchot wrote:
> I'd like to start simplifying the existing scheduler.  Diff below
> introduces a function to unpeg curproc from its current CPU.  This
> will be required as soon as we no longer rely on per-CPU runqueues
> to implement thread pegging.
> 
> ok?
> 
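For reference, the pairing the new helper is meant to wrap looks roughly
like this (sketch of a hypothetical caller, not taken from the diff):

	sched_peg_curproc(ci);	/* migrate to ci and stay there */
	/* ... do work that must run on ci ... */
	sched_unpeg_curproc();	/* let the scheduler move us again */
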
> Index: arch/amd64/amd64/identcpu.c
> ===================================================================
> RCS file: /cvs/src/sys/arch/amd64/amd64/identcpu.c,v
> diff -u -p -r1.145 identcpu.c
> --- arch/amd64/amd64/identcpu.c	24 Jun 2024 21:22:14 -0000	1.145
> +++ arch/amd64/amd64/identcpu.c	8 Jul 2024 09:51:03 -0000
> @@ -169,7 +169,7 @@ cpu_hz_update_sensor(void *args)
>  		ci->ci_hz_sensor.value = val;
>  	}
>  
> -	atomic_clearbits_int(&curproc->p_flag, P_CPUPEG);
> +	sched_unpeg_curproc();
>  }
>  #endif
>  
> Index: dev/kstat.c
> ===================================================================
> RCS file: /cvs/src/sys/dev/kstat.c,v
> diff -u -p -r1.2 kstat.c
> --- dev/kstat.c	31 Jan 2022 05:09:17 -0000	1.2
> +++ dev/kstat.c	8 Jul 2024 09:55:45 -0000
> @@ -593,7 +593,7 @@ kstat_cpu_enter(void *p)
>  void
>  kstat_cpu_leave(void *p)
>  {
> -	atomic_clearbits_int(&curproc->p_flag, P_CPUPEG);
> +	sched_unpeg_curproc();
>  }
>  
>  void
> Index: kern/kern_sched.c
> ===================================================================
> RCS file: /cvs/src/sys/kern/kern_sched.c,v
> diff -u -p -r1.96 kern_sched.c
> --- kern/kern_sched.c	3 Jun 2024 12:48:25 -0000	1.96
> +++ kern/kern_sched.c	8 Jul 2024 10:05:00 -0000
> @@ -625,6 +625,8 @@ sched_peg_curproc(struct cpu_info *ci)
>  {
>  	struct proc *p = curproc;
>  
> +	KASSERT(!ISSET(p->p_flag, P_CPUPEG));
> +
>  	SCHED_LOCK();
>  	atomic_setbits_int(&p->p_flag, P_CPUPEG);
>  	setrunqueue(ci, p, p->p_usrpri);
> @@ -633,6 +635,16 @@ sched_peg_curproc(struct cpu_info *ci)
>  	SCHED_UNLOCK();
>  }
>  
> +void
> +sched_unpeg_curproc(void)
> +{
> +	struct proc *p = curproc;
> +
> +	KASSERT(ISSET(p->p_flag, P_CPUPEG));
> +
> +	atomic_clearbits_int(&p->p_flag, P_CPUPEG);
> +}
> +
>  #ifdef MULTIPROCESSOR
>  
>  void
> @@ -699,7 +711,7 @@ sched_barrier_task(void *arg)
>  
>  	sched_peg_curproc(ci);
>  	cond_signal(&sb->cond);
> -	atomic_clearbits_int(&curproc->p_flag, P_CPUPEG);
> +	sched_unpeg_curproc();
>  }
>  
>  void
> Index: kern/kern_smr.c
> ===================================================================
> RCS file: /cvs/src/sys/kern/kern_smr.c,v
> diff -u -p -r1.16 kern_smr.c
> --- kern/kern_smr.c	14 Aug 2022 01:58:27 -0000	1.16
> +++ kern/kern_smr.c	8 Jul 2024 10:00:15 -0000
> @@ -162,8 +162,8 @@ smr_grace_wait(void)
>  			continue;
>  		sched_peg_curproc(ci);
>  		KASSERT(ci->ci_schedstate.spc_smrgp == smrgp);
> +		sched_unpeg_curproc();
>  	}
> -	atomic_clearbits_int(&curproc->p_flag, P_CPUPEG);

Can we move the sched_unpeg_curproc() call out of the loop here (to where
the atomic op is now)?

In other words, will the new API allow calls to sched_peg_curproc() to
switch CPUs, or do we need to unpeg first?
e.g.
	sched_peg_curproc(cpu1);
	/* do something on cpu1 */
	sched_peg_curproc(cpu2);
	/* do something else on cpu2 */

Or do we really need to unpeg before repegging?
If that is the case, then I think this diff is not correct.
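
For reference, with the KASSERTs from the diff the two variants would
look like this (rough sketch, hypothetical callers):

	/* (a) re-peg directly: the second call would now trip
	 *     KASSERT(!ISSET(p->p_flag, P_CPUPEG)) in sched_peg_curproc() */
	sched_peg_curproc(cpu1);
	sched_peg_curproc(cpu2);

	/* (b) unpeg in between: satisfies the asserts in both functions */
	sched_peg_curproc(cpu1);
	sched_unpeg_curproc();
	sched_peg_curproc(cpu2);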

>  #endif /* MULTIPROCESSOR */
>  }
>  
> Index: sys/sched.h
> ===================================================================
> RCS file: /cvs/src/sys/sys/sched.h,v
> diff -u -p -r1.72 sched.h
> --- sys/sched.h	3 Jun 2024 12:48:25 -0000	1.72
> +++ sys/sched.h	8 Jul 2024 10:01:54 -0000
> @@ -169,6 +169,7 @@ void cpu_idle_enter(void);
>  void cpu_idle_cycle(void);
>  void cpu_idle_leave(void);
>  void sched_peg_curproc(struct cpu_info *ci);
> +void sched_unpeg_curproc(void);
>  void sched_barrier(struct cpu_info *ci);
>  
>  int sysctl_hwsetperf(void *, size_t *, void *, size_t);
> 

-- 
:wq Claudio