
From: "Martin Pieuchot" <mpi@grenadille.net>
Subject: Re: Improve uvm_pageout() logic for segmented memory space
To: tech@openbsd.org
Date: Thu, 07 Nov 2024 10:14:22 +0100


On 06/11/24(Wed) 15:58, mpi@grenadille.net wrote:
> Diff below greatly improves the responsiveness of the page daemon for
> 64bit archs with a low/high memory split.  The improvement comes from
> a more precise calculation of how many low pages have to be freed.  As
> a result the amount of pages written to swap is decreased by ~50% in my
> tests and my arm64 machine becomes responsive during heavy swapping.
> 
> The diff includes:
> 
> - Use a global "struct uvm_pmalloc" to notify failed nowait allocations
>   in order to look at the managed lists.  The current algorithm does not
>   call uvmpd_scan() if there have been only nowait allocations.
> 
> - Skip calling the shrinkers and grabbing some locks if the page daemon
>   is awoken to rebalance the active/inactive lists.
> 
> - Do not bother releasing high pages if all we are interested in are low
>   pages
> 
> - Try to deactivate low pages first only if we are not short on swap
>   slots

5th diff:

Optimize active & inactive list traversals when looking only for low pages.

In the inactive list, if we are not OOM, do not bother releasing high pages.

In the active list, if we couldn't release enough low pages and are not out of swap, deactivate only low pages.
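Both hunks below lean on uvmpd_match_constraint(), which is not part of this diff. As a rough illustration only, with simplified stand-ins for struct vm_page and struct uvm_constraint_range (the real check works on the page's physical address via atop(VM_PAGE_TO_PHYS(p))), the shape of the test is a half-open page-frame range check:

```c
#include <stdint.h>

/* Simplified stand-ins; the real structures live under sys/uvm/. */
struct constraint_range {
	uint64_t ucr_low;	/* first page frame number in the range */
	uint64_t ucr_high;	/* one past the last page frame number */
};

struct page {
	uint64_t pfn;		/* page frame number (stand-in for the
				   page's physical address) */
};

/*
 * Return nonzero when the page falls inside the allocation
 * constraint, i.e. freeing it could satisfy the failed
 * allocation that woke the page daemon.
 */
int
match_constraint(const struct page *p, const struct constraint_range *c)
{
	return (p->pfn >= c->ucr_low && p->pfn < c->ucr_high);
}
```

With a low/high memory split the constraint covers only the low range, so pages outside it can be skipped during the list scans instead of being needlessly flushed to swap.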
---
 sys/uvm/uvm_pdaemon.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/sys/uvm/uvm_pdaemon.c b/sys/uvm/uvm_pdaemon.c
index 614209393e2..4916a94ca80 100644
--- a/sys/uvm/uvm_pdaemon.c
+++ b/sys/uvm/uvm_pdaemon.c
@@ -514,6 +514,16 @@ uvmpd_scan_inactive(struct uvm_pmalloc *pma, int shortage)
 			uvmexp.pdscans++;
 			nextpg = TAILQ_NEXT(p, pageq);
 
+			/*
+			 * If we are not short on memory and only interested
+			 * in releasing pages from a given memory range do not
+			 * bother with other pages.
+			 */
+			if (uvmexp.paging >= (shortage - freed) &&
+			    !uvmpd_pma_done(pma) &&
+			    !uvmpd_match_constraint(p, &pma->pm_constraint))
+				continue;
+
 			anon = p->uanon;
 			uobj = p->uobject;
 
@@ -938,9 +948,15 @@ uvmpd_scan(struct uvm_pmalloc *pma, int shortage, int inactive_shortage)
 		}
 
 		/*
-		 * skip this page if it doesn't match the constraint.
+		 * If we couldn't release enough pages from a given memory
+		 * range try to deactivate them first...
+		 *
+		 * ...unless we are low on swap slots, in such case we are
+		 * probably OOM and want to release swap resources as quickly
+		 * as possible.
 		 */
-		if (!uvmpd_pma_done(pma) &&
+		if (inactive_shortage > 0 && swap_shortage == 0 &&
+		    !uvmpd_pma_done(pma) &&
 		    !uvmpd_match_constraint(p, &pma->pm_constraint))
 			continue;
 
-- 
2.46.1