From: "Martin Pieuchot" <mpi@grenadille.net>
Subject: Re: Improve uvm_pageout() logic for segmented memory space
To: tech@openbsd.org
Date: Thu, 07 Nov 2024 10:14:29 +0100

On 06/11/24(Wed) 15:58, mpi@grenadille.net wrote:
> Diff below greatly improves the responsiveness of the page daemon for
> 64bit archs with a low/high memory split.  The improvement comes from
> a more precise calculation of how many low pages have to be freed.  As
> a result the amount of pages written to swap is decreased by ~50% in my
> tests and my arm64 machine becomes responsive during heavy swapping.
> 
> The diff includes:
> 
> - Use a global "struct uvm_pmalloc" to notify failed nowait allocations
>   in order to look at the managed lists.  The current algorithm does not
>   call uvmpd_scan() if there have been only nowait allocations.
> 
> - Skip calling the shrinkers and grabbing some locks if the page daemon
>   is awoken to rebalance the active/inactive lists.
> 
> - Do not bother releasing high pages if all we are interested in are low
>   pages.
> 
> - Try to deactivate low pages first only if we are not short on swap
>   slots.

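To illustrate the first item quoted above: the idea is that a failed
UVM_PLA_NOWAIT allocation is recorded in a global "struct uvm_pmalloc", so
the page daemon sees a pending request and scans the managed lists for it
instead of ignoring nowait failures.  Rough sketch only; the helper name
and bookkeeping below are hypothetical, not the ones from the earlier
diffs in this series:

/*
 * Hypothetical sketch, not the actual diff: remember a failed
 * UVM_PLA_NOWAIT request in a global uvm_pmalloc so uvm_pageout()
 * sees a non-NULL pma with a non-zero pm_size and calls uvmpd_scan().
 */
struct uvm_pmalloc nowait_pma;

void
uvm_pmr_nowait_failed(psize_t npages)		/* hypothetical helper */
{
	nowait_pma.pm_size = ptoa(npages);	/* bytes; cf. pm_size >> PAGE_SHIFT below */
	wakeup(&uvm.pagedaemon);		/* kick the page daemon */
}
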
Last (6th) diff:

Do not bother releasing memory if all we need to do is rebalance the page lists.

---
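For context, a simplified sketch of the uvm_pageout() main loop after this
change (declarations, wakeup handling and the drm/buffer cache details are
elided; it only mirrors the hunks below and is not the exact code):

	/* Sketch of the main loop after this diff; see the hunks below. */
	for (;;) {
		/* ... sleep, then recompute shortage/inactive_shortage ... */

		size = 0;
		if (pma != NULL)
			size += pma->pm_size >> PAGE_SHIFT;	/* pages wanted by a failed allocation */
		if (shortage > 0)
			size += shortage;			/* pages missing to reach the free target */

		if (size == 0) {
			/* Nothing to free: only rebalance the page lists. */
			if (inactive_shortage) {
				uvm_lock_pageq();
				uvmpd_scan_active(NULL, 0, inactive_shortage);
				uvm_unlock_pageq();
			}
			continue;
		}

		/* Otherwise run the shrinkers and uvmpd_scan(). */
		shortage -= bufbackoff(&constraint, size * 2);
		/* ... */
	}
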
 sys/uvm/uvm_pdaemon.c | 33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/sys/uvm/uvm_pdaemon.c b/sys/uvm/uvm_pdaemon.c
index 4916a94ca80..3effa10fece 100644
--- a/sys/uvm/uvm_pdaemon.c
+++ b/sys/uvm/uvm_pdaemon.c
@@ -104,6 +104,7 @@ extern unsigned long drmbackoff(long);
 struct rwlock	*uvmpd_trylockowner(struct vm_page *);
 void		uvmpd_scan(struct uvm_pmalloc *, int, int);
 int		uvmpd_scan_inactive(struct uvm_pmalloc *, int);
+void		uvmpd_scan_active(struct uvm_pmalloc *, int, int);
 void		uvmpd_tune(void);
 void		uvmpd_drop(struct pglist *);
 int		uvmpd_dropswap(struct vm_page *);
@@ -259,15 +260,27 @@ uvm_pageout(void *arg)
 			uvmexp.inactarg - uvmexp.inactive - BUFPAGES_INACT;
 		uvm_unlock_pageq();
 
-		/* Reclaim pages from the buffer cache if possible. */
 		size = 0;
 		if (pma != NULL)
 			size += pma->pm_size >> PAGE_SHIFT;
 		if (shortage > 0)
 			size += shortage;
-		if (size == 0)
-			size = 16; /* XXX */
 
+		if (size == 0) {
+			/*
+			 * Since the inactive target just got updated
+			 * above both `size' and `inactive_shortage' can
+			 * be 0.
+			 */
+			if (inactive_shortage) {
+				uvm_lock_pageq();
+				uvmpd_scan_active(NULL, 0, inactive_shortage);
+				uvm_unlock_pageq();
+			}
+			continue;
+		}
+
+		/* Reclaim pages from the buffer cache if possible. */
 		shortage -= bufbackoff(&constraint, size * 2);
 #if NDRM > 0
 		shortage -= drmbackoff(size * 2);
@@ -895,8 +908,6 @@ void
 uvmpd_scan(struct uvm_pmalloc *pma, int shortage, int inactive_shortage)
 {
 	int swap_shortage, pages_freed;
-	struct vm_page *p, *nextpg;
-	struct rwlock *slock;
 
 	MUTEX_ASSERT_LOCKED(&uvm.pageqlock);
 
@@ -939,6 +950,18 @@ uvmpd_scan(struct uvm_pmalloc *pma, int shortage, int inactive_shortage)
 		swap_shortage = shortage;
 	}
 
+	uvmpd_scan_active(pma, swap_shortage, inactive_shortage);
+}
+
+void
+uvmpd_scan_active(struct uvm_pmalloc *pma, int swap_shortage,
+    int inactive_shortage)
+{
+	struct vm_page *p, *nextpg;
+	struct rwlock *slock;
+
+	MUTEX_ASSERT_LOCKED(&uvm.pageqlock);
+
 	for (p = TAILQ_FIRST(&uvm.page_active);
 	     p != NULL && (inactive_shortage > 0 || swap_shortage > 0);
 	     p = nextpg) {
-- 
2.46.1