From: Hans-Jörg Höxer
Subject: amd64 efiboot: Do not change page table permission booting on QEMU
To:
Date: Tue, 1 Oct 2024 13:53:31 +0200

Hi,

when efibooting amd64 we rewrite the page table built by the EFI
firmware to ensure we do not have read-only mappings.  The rewrite is
needed for some HP EFI firmware that maps sections read-only.

When efibooting on SEV-enabled QEMU we would have to ensure the crypt
bit is set when changing page tables.  However, there's no need for the
HP workaround when booting on QEMU (or any other VM), so just do not
modify the page table when SEV is detected.

From Sebastian Sturm.

Take care,
HJ.

------------------------------------------------------------------------

commit 13972b899fc1c8042ae8d9fd4bc0ad1006ec4b2f
Author: Hans-Joerg Hoexer
Date:   Thu Sep 12 13:11:46 2024 +0200

    stand: Enable boot with SEV and EFI on QEMU

    efiboot changes page permissions as a workaround for the HP EFI
    BIOS and therefore needs to walk the page table.  SEV is currently
    only supported on server systems, so skip the workaround if SEV is
    enabled.
    From Sebastian Sturm

diff --git a/sys/arch/amd64/stand/efiboot/exec_i386.c b/sys/arch/amd64/stand/efiboot/exec_i386.c
index b84476a2288..8c3f1fc8a0b 100644
--- a/sys/arch/amd64/stand/efiboot/exec_i386.c
+++ b/sys/arch/amd64/stand/efiboot/exec_i386.c
@@ -239,6 +239,33 @@ ucode_load(void)
 }
 
 #ifdef __amd64__
+int
+detect_sev(void)
+{
+	uint32_t max_ex_leaf, sev_feat;
+	uint32_t vendor[4];
+	uint32_t sev_status, dummy;
+
+	/* check whether we have SEV feature cpuid leaf */
+	CPUID(0x80000000, max_ex_leaf, vendor[0], vendor[2], vendor[1]);
+	vendor[3] = 0;	/* NULL-terminate */
+	if (strcmp((char *)vendor, "AuthenticAMD") != 0 ||
+	    max_ex_leaf < 0x8000001F)
+		return -ENODEV;
+
+	CPUID(0x8000001F, sev_feat, dummy, dummy, dummy);
+	/* check that SEV is supported */
+	if ((sev_feat & CPUIDEAX_SEV) == 0)
+		return -ENODEV;
+
+	__asm volatile ("rdmsr" : "=a" (sev_status), "=d"(dummy) : "c"(MSR_SEV_STATUS));
+	/* check whether SEV is enabled */
+	if ((sev_status & SEV_STAT_ENABLED) == 0)
+		return -ENODEV;
+
+	return 0;
+}
+
 void
 protect_writeable(uint64_t addr, size_t len)
 {
@@ -247,6 +274,9 @@ protect_writeable(uint64_t addr, size_t len)
 	uint64_t cr0;
 	size_t idx;
 
+	if (detect_sev() == 0)
+		return;
+
 	__asm volatile("movq %%cr0, %0;" : "=r"(cr0) : :);
 	if ((cr0 & CR0_PG) == 0)
 		return;