From:
Martin Pieuchot <mpi@grenadille.net>
Subject:
Re: Improve uvm_pageout() logic for segmented memory space
To:
tech@openbsd.org
Date:
Thu, 7 Nov 2024 10:08:30 +0100

On 06/11/24(Wed) 15:58, mpi@grenadille.net wrote:
> Diff below greatly improves the responsiveness of the page daemon for
> 64bit archs with a low/high memory split.  The improvement comes from
> a more precise calculation of how many low pages have to be freed.  As
> a result the amount of pages written to swap is decreased by ~50% in my
> tests and my arm64 machine becomes responsive during heavy swapping.
> 
> The diff includes:
> 
> - Use a global "struct uvm_pmalloc" to notify failed nowait allocations
>   in order to look at the managed lists.  The current algorithm does not
>   call uvmpd_scan() if there have been only nowait allocations.
> 
> - Skip calling the shrinkers and grabbing some locks if the page daemon
>   is awoken to rebalance the active/inactive lists.
> 
> - Do not bother releasing high pages if all we are interested in are low
>   pages
> 
> - Try to deactivate low pages first only if we are not short on swap
>   slots

Second diff:

Add a helper to check whether memory has been freed for an allocation that
failed.

Also, do not scan the active/inactive lists if the shrinkers have already
released enough pages to fulfill the allocation.
---
 sys/uvm/uvm_pdaemon.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/sys/uvm/uvm_pdaemon.c b/sys/uvm/uvm_pdaemon.c
index 46d3efae820..1070fd8d82b 100644
--- a/sys/uvm/uvm_pdaemon.c
+++ b/sys/uvm/uvm_pdaemon.c
@@ -198,6 +198,14 @@ uvmpd_tune(void)
  */
 volatile int uvm_nowait_failed;
 
+static inline int
+uvmpd_pma_done(struct uvm_pmalloc *pma)
+{
+	if (pma == NULL || (pma->pm_flags & UVM_PMA_FREED))
+		return 1;
+	return 0;
+}
+
 /*
  * uvm_pageout: the main loop for the pagedaemon
  */
@@ -273,7 +281,8 @@ uvm_pageout(void *arg)
 		 * scan if needed
 		 */
 		uvm_lock_pageq();
-		if (pma != NULL || (shortage > 0) || (inactive_shortage > 0)) {
+		if (!uvmpd_pma_done(pma) ||
+		    (shortage > 0) || (inactive_shortage > 0)) {
 			uvmpd_scan(pma, shortage, inactive_shortage,
 			    &constraint);
 		}
@@ -485,7 +494,7 @@ uvmpd_scan_inactive(struct uvm_pmalloc *pma, int shortage,
 			/*
 			 * see if we've met our target
 			 */
-			if (((pma == NULL || (pma->pm_flags & UVM_PMA_FREED)) &&
+			if ((uvmpd_pma_done(pma) &&
 			    (uvmexp.paging >= (shortage - freed))) ||
 			    dirtyreacts == UVMPD_NUMDIRTYREACTS) {
 				if (swslot == 0) {
@@ -582,7 +591,7 @@ uvmpd_scan_inactive(struct uvm_pmalloc *pma, int shortage,
 			 * this page is dirty, skip it if we'll have met our
 			 * free target when all the current pageouts complete.
 			 */
-			if ((pma == NULL || (pma->pm_flags & UVM_PMA_FREED)) &&
+			if (uvmpd_pma_done(pma) &&
 			    (uvmexp.paging > (shortage - freed))) {
 				rw_exit(slock);
 				continue;
-- 
2.46.1