
From: Hans-Jörg Höxer <hshoexer@genua.de>
Subject: SEV-ES guest: install final #VC trap handler 2/3
To: <tech@openbsd.org>
Date: Thu, 26 Jun 2025 14:20:10 +0200


Hi,

this is change 2/3.
    
Install the actual #VC entry stub.

As out instructions are used to ack, mask and unmask interrupts in
the PIC, we will raise #VC while handling interrupts.  Most exceptions
re-enable interrupts as soon as the stack frame is set up, and
disable them again before unwinding the stack and returning from
the exception.  However, in the case of #VC we want to have interrupts
disabled during the exception handling; otherwise we would get
nested IRQs of the same PSL.

Therefore, do not use the TRAP() macro; instead, rewrite the entry
stub without enabling interrupts and branch to the "common" code as
soon as possible.

Take care,
Hans-Joerg

-- 
commit a0a262eed3313983857fb2f1c5d750ff09967564
Author: Hans-Joerg Hoexer <hshoexer@genua.de>
Date:   Wed Mar 19 18:20:48 2025 +0100

    SEV-ES guest: install final #VC trap handler
    
    Install the actual #VC entry stub.
    
    As out instructions are used to ack, mask and unmask interrupts
    in the PIC, we will raise #VC while handling interrupts.  Most
    exceptions re-enable interrupts as soon as the stack frame is set
    up, and disable them again before unwinding the stack and returning
    from the exception.  However, in the case of #VC we want to have
    interrupts disabled during the exception handling; otherwise we
    would get nested IRQs of the same PSL.
    
    Therefore, do not use the TRAP() macro; instead, rewrite the entry
    stub without enabling interrupts and branch to the "common" code
    as soon as possible.

diff --git a/sys/arch/amd64/amd64/vector.S b/sys/arch/amd64/amd64/vector.S
index 4a106b0e8e7..d2fe9ed3ee3 100644
--- a/sys/arch/amd64/amd64/vector.S
+++ b/sys/arch/amd64/amd64/vector.S
@@ -373,6 +373,41 @@ IDTVEC(trap14)
 	ZTRAP(T_VE)
 IDTVEC(trap15)
 	TRAP(T_CP)
+
+IDTVEC(trap1d)
+	/*
+	 * #VC is AMD CPU specific, thus we don't use any Intel Meltdown
+	 * workarounds.
+	 *
+	 * We handle #VC differently from other traps, as we do not
+	 * want to re-enable interrupts.  #VC might happen during IRQ
+	 * handling before a specific hardware interrupt gets masked.
+	 * Re-enabling interrupts in the trap handler might cause nested
+	 * IRQs of the same level.  Thus keep interrupts disabled.
+	 */
+	pushq	$T_VC
+	testb	$SEL_RPL,24(%rsp)
+	je	vctrap_kern
+	swapgs
+	FENCE_SWAPGS_MIS_TAKEN
+	movq	%rax,CPUVAR(SCRATCH)
+
+	/* #VC from userspace */
+	TRAP_ENTRY_USER
+	cld
+	SMAP_CLAC
+	/* shortcut to regular path, but with interrupts disabled */
+	jmp	recall_trap
+
+	/* #VC from kernel space */
+vctrap_kern:
+	FENCE_NO_SAFE_SMAP
+	TRAP_ENTRY_KERN
+	cld
+	SMAP_CLAC
+	/* shortcut to regular path, but with interrupts disabled */
+	jmp	.Lreal_kern_trap
+
 IDTVEC(trap1f)
 IDTVEC_ALIAS(trap16, trap1f)
 IDTVEC_ALIAS(trap17, trap1f)
@@ -381,7 +416,6 @@ IDTVEC_ALIAS(trap19, trap1f)
 IDTVEC_ALIAS(trap1a, trap1f)
 IDTVEC_ALIAS(trap1b, trap1f)
 IDTVEC_ALIAS(trap1c, trap1f)
-IDTVEC_ALIAS(trap1d, trap1f)
 IDTVEC_ALIAS(trap1e, trap1f)
 	/* 22 - 31 reserved for future exp */
 	ZTRAP(T_RESERVED)