From: Hans-Jörg Höxer <Hans-Joerg_Hoexer@genua.de>
Subject: vmd(8), vmm(4): Experimental support for AMD SEV
To: <tech@openbsd.org>
Cc: <Hans-Joerg_Hoexer@genua.de>
Date: Wed, 21 Feb 2024 20:16:50 +0100

Hi everyone,

I spent some time exploring and experimenting with AMD's SEV (VMs with
encrypted memory), and I'd like to share my current results:

o I implemented basic proof-of-concept SEV support for both the host
  (generic kernel and vmd(8)) and the guest (generic kernel).

o DMA and virtio(4) still have some issues.

o I'm able to boot bsd.rd and start downloading and installing a
  snapshot; however, the installation fails to complete due to DMA issues.

o I can boot a pre-installed system multi-user with a generic kernel as
  an SEV guest.  The system is stable enough to log in and "look around",
  but I guess it'll show the same DMA issues as bsd.rd as soon as there
  is some load.

This is all proof-of-concept and far from complete.  I just crammed
things in and hacked code all over the place to get things to come to
life quickly.

Nonetheless, I think this is good enough to share and to discuss how to
do things the right way.  Then ditch everything and rewrite.

To get this started, see the attached diff.

diff
====

The diff provides the following pieces:

 o Support to talk to the AMD Platform Security Processor (PSP).  This is
   needed for vmd(8) to be able to provide the guest with encrypted pages.
   Only the guest can then decide whether to keep pages private (encrypted)
   or shared (plaintext) with vmd(8), e.g. for virtio.  Initially everything
   is encrypted (see below).

 o As ccp(4) (AMD Cryptographic Co-processor) is part of the PSP, I hacked
   the support into ccp(4).  Right now I'm using the simple "mailbox"
   protocol; this means transfers to/from the PSP are size limited.
   This is good enough to launch an SEV guest.  For everything else
   (e.g. signing certificate requests, downloading certificates from the
   PSP, doing DH with the PSP, proper attestation, etc.) a ring buffer
   protocol needs to be implemented.

 o vmd(8) uses the ccp(4) ioctl(2) interface to run SEV guests.  For this
   I've added a "sev" option to vm.conf(5).

 o pspctl(8):  A simple tool to view PSP state and clean up leftovers of
   crashed VMs/vmd(8) (e.g. release keys and ASIDs (identifiers) of
   dead VMs); see the example after this list.  I think in the long run
   this tool will not be needed; things should be done by vmd(8) and
   vmctl(8).

 o host side kernel support:
   - detect SEV features (identcpu.c)
   - extend vmm(4) cpuid emulation to expose SEV to guest
   - let vmm(4) launch VMs with SEV enabled
   - detect and expose the C-bit position (see below)

 o guest side kernel support:
   - detect SEV features early in locore
   - determine C bit position (again, see below) in locore
   - detect guestmode in locore
   - adjust page frame masks for pmap accordingly
   - for DMA use bounce buffers; map those in plaintext; map everything
     else encrypted
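
For illustration, typical pspctl(8) usage during testing looks roughly
like this (the guest handle, 3, is just an example value):

	# pspctl status			# global PSP/platform state
	# pspctl status 3		# details for the guest with handle 3
	# pspctl deactivate 3		# deactivate a leftover guest
	# pspctl decommission 3		# drop its key and ASID on the PSP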

the C bit
=========

The critical part of SEV is the Crypt bit (C bit) in page table entries.
This bit is used by the guest to specify whether a mapping of a page shall
be plaintext or ciphertext.  The position of the C bit is hardware
implementation dependent.  AMD seems to re-purpose the highest bit of the
physical address bus as the C bit; on my test machine this is bit 51.
The position can be determined with the CPUID instruction.
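
As a C sketch of what the locore assembly in the diff does (cpuid() is a
stand-in helper here; CPUIDEAX_SEV, pg_crypt, pg_frame and pg_lgframe come
from the diff; the real code additionally checks the SEV_STATUS MSR to see
whether we actually run as an SEV guest):

	uint32_t eax, ebx, ecx, edx;

	cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);	/* encryption features */
	if (eax & CPUIDEAX_SEV) {
		int cbit = ebx & 0x3f;		/* EBX[5:0]: C bit position */
		pg_crypt = 1ULL << cbit;	/* bit 51 on my test machine */
		pg_frame &= ~pg_crypt;		/* C bit is not part of the PA */
		pg_lgframe &= ~pg_crypt;
	}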

This means we have to do this at boot time in locore.  Therefore, I
modified pmap.c to use an approach similar to the one used for the NX
bit, and adjusted the use of PG_FRAME and PG_LGFRAME accordingly.
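
In effect, the pmap.c changes below OR pg_crypt into newly created PTEs
and use the pg_frame variable instead of the PG_FRAME constant when
extracting physical addresses, along the lines of:

	pte = pa | PG_RW | PG_V | pg_nx | pg_crypt;	/* encrypted mapping */
	pa  = pte & pg_frame;				/* C bit masked off */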

Note that instruction fetches of an SEV-enabled guest always go through
the decryption path.  This means code needs to reside in encrypted RAM.
The same holds for guest page tables.

But the good news is, we can use GENERIC for both guest and host.

Similarly, bus_dma.c now experimentally implements bounce buffering to
share pages for virtio(4) with vmd(8).  If the machine is not running
as an SEV guest (i.e. as a "normal" guest or as the host), unbounced
DMA is used.

We might have some bugs in virtio(4) that are now exposed by using bounce
buffers:  I'm not sure bus_dmamap_sync(9) is called correctly everywhere.
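
For reference, the bounce buffers rely on drivers doing the standard
bus_dmamap_sync(9) dance; with SEV the PREWRITE sync is what copies data
into the plaintext bounce page and POSTREAD copies it back out, so a
missing sync now loses data instead of just skipping a cache operation.
A generic (not virtio-specific) sketch of the write direction:

	bus_dmamap_load(sc->sc_dmat, map, buf, len, NULL, BUS_DMA_NOWAIT);
	bus_dmamap_sync(sc->sc_dmat, map, 0, len, BUS_DMASYNC_PREWRITE);
	/* ... hand map->dm_segs[] to the device, wait for completion ... */
	bus_dmamap_sync(sc->sc_dmat, map, 0, len, BUS_DMASYNC_POSTWRITE);
	bus_dmamap_unload(sc->sc_dmat, map);

For the read direction it is PREREAD before and POSTREAD after the
transfer.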

vmd(8)
======

For testing I use the following vm.conf:

	vm "vm1" {
		disable
		sev
		memory 128M
		interface { switch "uplink" }
		boot "/bsd.rd"
		disk /home/vm/disk1
	}
	vm "vm1a" {
		disable
		#sev
		memory 128M
		interface { switch "uplink" }
		boot "/bsd.rd"
		disk /home/vm/disk1
	}
	vm "vm2" {
		disable
		sev
		memory 128M
		interface { switch "uplink" }
		boot "/bsd"
		disk /home/vm/disk2
	}
	vm "vm2a" {
		disable
		#sev
		memory 128M
		interface { switch "uplink" }
		boot "/bsd"
		disk /home/vm/disk2
	}
	switch "uplink" {
		interface bridge0
	}

The VMs vm1 and vm2 run with SEV enabled, using bsd.rd and bsd
respectively.  vm1a and vm2a use the same disk images but start with
SEV disabled.

Note that I am always running vmd with -d in the foreground and always
booting the kernel directly.  Right now, BIOS booting will not work with
SEV enabled.

When the VM is started, vmd(8) mmap(2)s the guest physical memory segments
into its address space.  It then loads the kernel image into that address
space and sets up the initial page tables.  These need to have the C bit
set, as the guest will start encrypted right from the get-go.  To announce
the actual pages to the PSP, I forcefully page in all pages by touching
them consecutively.  With ioctl(2) I send the _virtual_ address of the
current page to ccp(4).  ccp(4) then extracts the actual _physical_ address
of the page and announces it to the PSP.  The PSP encrypts the page with a
secret key only known(tm) to the PSP and the memory controller.
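
A rough sketch of that announcement loop; the request structure, the
ioctl name and psp_fd (an open fd on /dev/psp) are placeholders here,
the real interface lives in the ccp(4) and vmd(8) parts of the diff:

	uint8_t *p = vm_mem;			/* mmap(2)'ed guest memory */
	volatile uint8_t t;
	size_t off;

	for (off = 0; off < vm_size; off += PAGE_SIZE) {
		struct psp_launch_update plu;	/* placeholder name */

		t = p[off];			/* touch: force the page in */
		memset(&plu, 0, sizeof(plu));
		plu.uaddr = (uint64_t)(p + off);	/* _virtual_ address */
		plu.len = PAGE_SIZE;
		if (ioctl(psp_fd, PSP_IOC_LAUNCH_UPDATE, &plu) == -1)
			err(1, "launch update");	/* placeholder ioctl */
	}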

Right now, I depend on the mmap(2)'ed pages not disappearing (e.g. being
paged out) before I announce them to the PSP.  This will need proper
pinning or memory locking (e.g. mlock(2)) to be correct.  My test machine
has enough memory and no load, so at the moment this is not an issue
for me.

DIFF
====

So, enough said.  See the diff below and let me know what you think.

Have fun and take care,
Hans-Joerg

----------------------------------------------------------------------------
diff --git a/sbin/Makefile b/sbin/Makefile
index bd65cf73568..2318e24fcf4 100644
--- a/sbin/Makefile
+++ b/sbin/Makefile
@@ -12,4 +12,6 @@ SUBDIR=	atactl badsect bioctl clri dhclient dhcpleased \
 	scsi slaacd shutdown swapctl sysctl ttyflags tunefs vnconfig \
 	umount unwind wsconsctl
 
+SUBDIR+= pspctl
+
 .include <bsd.subdir.mk>
diff --git a/sbin/pspctl/Makefile b/sbin/pspctl/Makefile
new file mode 100644
index 00000000000..5aa43973e08
--- /dev/null
+++ b/sbin/pspctl/Makefile
@@ -0,0 +1,9 @@
+#	$OpenBSD: $
+
+PROG=	pspctl
+MAN=	pspctl.8
+SRCS=	pspctl.c
+
+CFLAGS+=	-Wall
+
+.include <bsd.prog.mk>
diff --git a/sbin/pspctl/pspctl.8 b/sbin/pspctl/pspctl.8
new file mode 100644
index 00000000000..94d2ddf7db1
--- /dev/null
+++ b/sbin/pspctl/pspctl.8
@@ -0,0 +1,54 @@
+.\"     $OpenBSD: $
+.\"
+.\" Copyright (c) 2024 Hans-Joerg Hoexer <hshoexer@genua.de>
+.\"
+.\" Permission to use, copy, modify, and distribute this software for any
+.\" purpose with or without fee is hereby granted, provided that the above
+.\" copyright notice and this permission notice appear in all copies.
+.\"
+.\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+.\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+.\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+.\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+.\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+.\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+.\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+.\"
+.Dd $Mdocdate: February 8 2024 $
+.Dt PSPCTL 8
+.Os
+.Sh NAME
+.Nm pspctl
+.Nd an experimental program to manipulate the AMD PSP
+.Sh SYNOPSIS
+.Nm pspctl
+.Op Ar command Op Ar arg ...
+.Sh DESCRIPTION
+.Nm
+allows the system administrator to issue commands to the Platform
+Security Processor (PSP).
+.Pp
+The main purpose of
+.Nm
+is to show the status of SEV guests and clean up orphaned guests.
+.Pp
+The commands are as follows:
+.Bl -tag -width Ds
+.It Cm status Op Ar id
+Shows the global status of the PSP.
+With the optional
+.Ar id
+provided, details on a particular SEV guest are shown.
+.It Cm deactivate Ar id
+Deactivates the specified SEV guest.
+.It Cm decommission Ar id
+Clears all information and keys of the specified guest from the PSP.
+.It Cm attestation Ar id
+Requests an attestation report for the specified guest from the PSP.
+A hex dump will show the raw report.
+.El
+.Sh SEE ALSO
+.Xr ioctl 2 ,
+.Xr vmm 4 ,
+.Xr vmctl 8 ,
+.Xr vmd 8
diff --git a/sbin/pspctl/pspctl.c b/sbin/pspctl/pspctl.c
new file mode 100644
index 00000000000..0c014945142
--- /dev/null
+++ b/sbin/pspctl/pspctl.c
@@ -0,0 +1,323 @@
+/*	$OpenBSD: $	*/
+
+/*
+ * Copyright (c) 2015 Reyk Floeter <reyk@openbsd.org>
+ * Copyright (c) 2023, 2024 Hans-Joerg Hoexer <hshoexer@genua.de>
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <sys/types.h>
+#include <sys/device.h>
+#include <sys/ioctl.h>
+
+#include <machine/bus.h>
+#include <dev/ic/ccpvar.h>
+
+#include <err.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <strings.h>
+#include <unistd.h>
+
+#include "pspctl.h"
+
+__dead void	usage(void);
+__dead void	ctl_usage(struct ctl_command *);
+int		parse(int, char *[]);
+int		ctl_status(struct parse_result *, int , char *[]);
+int		ctl_deactivate(struct parse_result *, int , char *[]);
+int		ctl_decommission(struct parse_result *, int , char *[]);
+int		ctl_attest(struct parse_result *, int , char *[]);
+
+struct ctl_command ctl_commands[] = {
+	{ "status",		CMD_STATUS,	ctl_status,		"[id]" },
+	{ "deactivate",		CMD_DEACTIVATE,	ctl_deactivate,		"id" },
+	{ "decommission",	CMD_DECOMM,	ctl_decommission,	"id" },
+	{ "attestation",	CMD_ATTEST,	ctl_attest, 		"" },
+	{ NULL }
+};
+
+__dead void
+usage(void)
+{
+	extern char	*__progname;
+
+	fprintf(stderr, "usage:\t%s [-h] command [argv ...]\n", __progname);
+
+	exit(1);
+}
+
+__dead void
+ctl_usage(struct ctl_command *ctl)
+{
+	extern	char	*__progname;
+
+	fprintf(stderr, "usage:\t%s [-h] %s %s\n", __progname, ctl->name,
+	    ctl->usage);
+
+	exit(1);
+}
+
+int
+main(int argc, char *argv[])
+{
+	int	ch;
+
+	while ((ch = getopt(argc, argv, "h")) != -1) {
+		switch (ch) {
+		case 'h':
+			/* FALLTHROUGH */
+		default:
+			usage();
+			/* NOTREACHED */
+		}
+	}
+	argc -= optind;
+	argv += optind;
+	optreset = 1;
+	optind = 1;
+
+	if (argc < 1)
+		usage();
+
+	return (parse(argc, argv));
+}
+
+int
+parse(int argc, char *argv[])
+{
+	struct ctl_command	*ctl = NULL;
+	struct parse_result	 res;
+	int			 i;
+
+	memset(&res, 0, sizeof(res));
+
+	for (i = 0; ctl_commands[i].name != NULL; i++) {
+		if (strncmp(ctl_commands[i].name,
+		    argv[0], strlen(argv[0])) == 0) {
+			if (ctl != NULL) {
+				fprintf(stderr,
+				    "ambiguous argument: %s\n", argv[0]);
+				usage();
+			}
+			ctl = &ctl_commands[i];
+		}
+	}
+
+	if (ctl == NULL) {
+		fprintf(stderr, "unknown argument: %s\n", argv[0]);
+		usage();
+	}
+
+	res.action = ctl->action;
+	res.ctl = ctl;
+
+	if (ctl->main(&res, argc, argv) != 0)
+		exit(1);
+
+	return (0);
+}
+
+void
+hexdump(void *data, size_t len)
+{
+	int		 i;
+	unsigned char	*p;
+
+	for (i = 0, p = (unsigned char *)data; i < len; i++) {
+		if ((i % 16) == 0)
+			printf("\n");
+		if ((i % 8) == 0)
+			printf(" ");
+		printf("%02x", p[i]);
+	}
+	printf("\n");
+}
+
+static const char *
+pspctl_state(struct psp_platform_status *pst)
+{
+	switch (pst->state) {
+	case PSP_PSTATE_UNINIT:
+		return "UNINIT";
+	case PSP_PSTATE_INIT:
+		return "INIT";
+	case PSP_PSTATE_WORKING:
+		return "WORKING";
+	default:
+		return "unknown";
+	}
+}
+
+static const char *
+pspctl_gstate(struct psp_guest_status *gst)
+{
+	switch (gst->state) {
+	case PSP_GSTATE_UNINIT:
+		return "UNINIT";
+	case PSP_GSTATE_LUPDATE:
+		return "LUPDATE";
+	case PSP_GSTATE_LSECRET:
+		return "LSECRET";
+	case PSP_GSTATE_RUNNING:
+		return "RUNNING";
+	case PSP_GSTATE_SUPDATE:
+		return "SUPDATE";
+	case PSP_GSTATE_RUPDATE:
+		return "RUPDATE";
+	case PSP_GSTATE_SENT:
+		return "SENT";
+	default:
+		return "unknown";
+	}
+}
+
+int
+ctl_status(struct parse_result *res, int argc, char *argv[])
+{
+	struct psp_platform_status	 pst;
+	struct psp_guest_status		 gst;
+	const char			*errstr;
+	int				 fd, id = -1;
+
+	if (argc < 1 || argc > 2)
+		ctl_usage(res->ctl);
+
+	if (argc == 2) {
+		id = strtonum(argv[1], 1, 256, &errstr);	/* XXX 256? */
+		if (errstr != NULL)
+			ctl_usage(res->ctl);
+	}
+
+	/* get platform state */
+	memset(&pst, 0, sizeof(pst));
+
+	if ((fd = open("/dev/psp", O_RDWR)) < 0)
+		err(1, "open");
+	if (ioctl(fd, PSP_IOC_GET_PSTATUS, &pst) < 0)
+		err(1, "ioctl");
+
+	printf("platform status:\nmajor\t0x%hhx\nminor\t0x%hhx\nbuild\t0x%x\n",
+	    pst.api_major, pst.api_minor, (pst.cfges_build >> 24) & 0xff);
+	printf("state\t%s\nowner\t%d\nSEV-ES\t%d\nguests\t%d\n",
+	    pspctl_state(&pst), (pst.owner & 0x1), (pst.cfges_build & 0x1),
+	    pst.guest_count);
+
+	if (id < 0)
+		goto out;
+
+	/* if requested, also get guest state */
+	memset(&gst, 0, sizeof(gst));
+	gst.handle = id;
+
+	if (ioctl(fd, PSP_IOC_GET_GSTATUS, &gst) < 0)
+		err(1, "ioctl");
+
+	printf("\nguest status:\nhandle\t0x%x\npolicy\t0x%x\nasid\t0x%x\n"
+	    "state\t%s\n", gst.handle, gst.policy, gst.asid,
+	    pspctl_gstate(&gst));
+
+out:
+	if (close(fd) < 0)
+		err(1, "close");
+
+	return (0);
+}
+
+int
+ctl_decommission(struct parse_result *res, int argc, char *argv[])
+{
+	struct psp_decommission	 pdecomm;
+	const char		*errstr;
+	int			 fd, id;
+
+	if (argc != 2)
+		ctl_usage(res->ctl);
+
+	id = strtonum(argv[1], 1, 256, &errstr);	/* XXX 256? */
+	if (errstr != NULL)
+		ctl_usage(res->ctl);
+
+	memset(&pdecomm, 0, sizeof(pdecomm));
+	pdecomm.handle = id;
+
+	if ((fd = open("/dev/psp", O_RDWR)) < 0)
+		err(1, "open");
+	if (ioctl(fd, PSP_IOC_DECOMMISSION, &pdecomm) < 0)
+		err(1, "ioctl");
+
+	return (0);
+}
+
+
+int
+ctl_deactivate(struct parse_result *res, int argc, char *argv[])
+{
+	struct psp_deactivate	 pdeact;
+	const char		*errstr;
+	int			 fd, id;
+
+	if (argc != 2)
+		ctl_usage(res->ctl);
+
+	id = strtonum(argv[1], 1, 256, &errstr);	/* XXX 256? */
+	if (errstr != NULL)
+		ctl_usage(res->ctl);
+
+	memset(&pdeact, 0, sizeof(pdeact));
+	pdeact.handle = id;
+
+	if ((fd = open("/dev/psp", O_RDWR)) < 0)
+		err(1, "open");
+	if (ioctl(fd, PSP_IOC_DEACTIVATE, &pdeact) < 0)
+		err(1, "ioctl");
+
+	return (0);
+}
+
+int
+ctl_attest(struct parse_result *res, int argc, char *argv[])
+{
+	struct psp_attestation	 pattest;
+	const char		*errstr;
+	int			 fd, id;
+
+	if (argc != 2)
+		ctl_usage(res->ctl);
+
+	id = strtonum(argv[1], 1, 256, &errstr);	/* XXX 256? */
+	if (errstr != NULL)
+		ctl_usage(res->ctl);
+
+	memset(&pattest, 0, sizeof(pattest));
+	pattest.handle = id;
+	arc4random_buf(&pattest.attest_nonce, sizeof(pattest.attest_nonce));
+	pattest.attest_len = sizeof(pattest.psp_report);
+
+	if ((fd = open("/dev/psp", O_RDWR)) < 0)
+		err(1, "open");
+	if (ioctl(fd, PSP_IOC_ATTESTATION, &pattest) < 0)
+		err(1, "ioctl");
+
+	if (memcmp(&pattest.attest_nonce, &pattest.report_nonce,
+	    sizeof(pattest.attest_nonce)) != 0)
+		errx(1, "nonce mismatch");
+
+	printf("attestation report:\n");
+	hexdump(&pattest, sizeof(pattest));
+
+	return (0);
+}
diff --git a/sbin/pspctl/pspctl.h b/sbin/pspctl/pspctl.h
new file mode 100644
index 00000000000..fb5bbe8ea5f
--- /dev/null
+++ b/sbin/pspctl/pspctl.h
@@ -0,0 +1,40 @@
+/*	$OpenBSD: $	*/
+
+/*
+ * Copyright (c) 2015 Reyk Floeter <reyk@openbsd.org>
+ * Copyright (c) 2024 Hans-Joerg Hoexer <hshoexer@genua.de>
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+enum actions {
+	NONE,
+	CMD_STATUS,
+	CMD_DEACTIVATE,
+	CMD_DECOMM,
+	CMD_ATTEST
+};
+
+struct ctl_command;
+
+struct parse_result {
+	enum actions		 action;
+	struct ctl_command	*ctl;
+};
+
+struct ctl_command {
+	const char	*name;
+	enum actions	 action;
+	int		(*main)(struct parse_result *, int, char *[]);
+	const char	*usage;
+};
diff --git a/sys/arch/amd64/amd64/bus_dma.c b/sys/arch/amd64/amd64/bus_dma.c
index 19a366115f9..2add7ad4030 100644
--- a/sys/arch/amd64/amd64/bus_dma.c
+++ b/sys/arch/amd64/amd64/bus_dma.c
@@ -108,8 +108,13 @@ _bus_dmamap_create(bus_dma_tag_t t, bus_size_t size, int nsegments,
     bus_size_t maxsegsz, bus_size_t boundary, int flags, bus_dmamap_t *dmamp)
 {
 	struct bus_dmamap *map;
+	struct pglist mlist;
+	struct vm_page **pg, *pgnext;
+	size_t mapsize, sz, ssize;
+	vaddr_t va, sva;
 	void *mapstore;
-	size_t mapsize;
+	int npages, error;
+	const struct kmem_dyn_mode *kd;
 
 	/*
 	 * Allocate and initialize the DMA map.  The end of the map
@@ -125,6 +130,16 @@ _bus_dmamap_create(bus_dma_tag_t t, bus_size_t size, int nsegments,
 	 */
 	mapsize = sizeof(struct bus_dmamap) +
 	    (sizeof(bus_dma_segment_t) * (nsegments - 1));
+
+	/* allocate and use bounce buffers when running as SEV guest */
+	if (cpu_sev_guestmode) {
+		/* this many pages plus one in case we get split */
+		npages = round_page(size) / PAGE_SIZE + 1;
+		if (npages < nsegments)	/* looks stupid, but possible XXX */
+			npages = nsegments;
+		mapsize += sizeof(struct vm_page *) * npages;
+	}
+
 	if ((mapstore = malloc(mapsize, M_DEVBUF,
 	    (flags & BUS_DMA_NOWAIT) ?
 	        (M_NOWAIT|M_ZERO) : (M_WAITOK|M_ZERO))) == NULL)
@@ -135,8 +150,59 @@ _bus_dmamap_create(bus_dma_tag_t t, bus_size_t size, int nsegments,
 	map->_dm_segcnt = nsegments;
 	map->_dm_maxsegsz = maxsegsz;
 	map->_dm_boundary = boundary;
+	if (cpu_sev_guestmode) {
+		map->_dm_pages = (void *)&map->dm_segs[nsegments];
+		map->_dm_npages = npages;
+	}
 	map->_dm_flags = flags & ~(BUS_DMA_WAITOK|BUS_DMA_NOWAIT);
 
+	if (!cpu_sev_guestmode) {
+		*dmamp = map;
+		return (0);
+	}
+
+	sz = npages << PGSHIFT;
+	kd = flags & BUS_DMA_NOWAIT ? &kd_trylock : &kd_waitok;
+	va = (vaddr_t)km_alloc(sz, &kv_any, &kp_none, kd);
+	if (va == 0) {
+		map->_dm_npages = 0;
+		free(map, M_DEVBUF, mapsize);
+		return (ENOMEM);
+	}
+
+	TAILQ_INIT(&mlist);
+	error = uvm_pglistalloc(sz, 0, -1, PAGE_SIZE, 0, &mlist, nsegments,
+	    (flags & BUS_DMA_NOWAIT) ? UVM_PLA_WAITOK : UVM_PLA_NOWAIT);
+	if (error) {
+		map->_dm_npages = 0;
+		km_free((void *)va, sz, &kv_any, &kp_none);
+		free(map, M_DEVBUF, mapsize);
+		return (ENOMEM);
+	}
+
+	sva = va;
+	ssize = sz;
+	pgnext = TAILQ_FIRST(&mlist);
+	for (pg = map->_dm_pages; npages--; va += PAGE_SIZE, pg++) {
+		*pg = pgnext;
+		error = pmap_enter(pmap_kernel(), va, VM_PAGE_TO_PHYS(*pg),
+		    PROT_READ | PROT_WRITE,
+		    PROT_READ | PROT_WRITE | PMAP_WIRED |
+		    PMAP_CANFAIL | PMAP_NOCRYPT);
+		if (error) {
+			pmap_update(pmap_kernel());
+			map->_dm_npages = 0;
+			km_free((void *)sva, ssize, &kv_any, &kp_none);
+			free(map, M_DEVBUF, mapsize);
+			uvm_pglistfree(&mlist);
+			return (ENOMEM);
+		}
+		pgnext = TAILQ_NEXT(*pg, pageq);
+		bzero((void *)va, PAGE_SIZE);
+	}
+	pmap_update(pmap_kernel());
+	map->_dm_pgva = sva;
+
 	*dmamp = map;
 	return (0);
 }
@@ -149,7 +215,25 @@ void
 _bus_dmamap_destroy(bus_dma_tag_t t, bus_dmamap_t map)
 {
 	size_t mapsize;
-	
+	struct vm_page **pg;
+	struct pglist mlist;
+	vaddr_t va;
+
+	if (map->_dm_pgva) {
+		km_free((void *)map->_dm_pgva, map->_dm_npages << PGSHIFT,
+		    &kv_any, &kp_none);
+	}
+
+	if (map->_dm_pages) {
+		TAILQ_INIT(&mlist);
+		for (pg = map->_dm_pages, va = map->_dm_pgva;
+		    map->_dm_npages--; va += PAGE_SIZE, pg++) {
+			pmap_remove(pmap_kernel(), va, va + PAGE_SIZE);
+			TAILQ_INSERT_TAIL(&mlist, *pg, pageq);
+		}
+		uvm_pglistfree(&mlist);
+	}
+
 	mapsize = sizeof(struct bus_dmamap) +
 		(sizeof(bus_dma_segment_t) * (map->_dm_segcnt - 1));
 	free(map, M_DEVBUF, mapsize);
@@ -383,6 +467,7 @@ _bus_dmamap_unload(bus_dma_tag_t t, bus_dmamap_t map)
 	 */
 	map->dm_mapsize = 0;
 	map->dm_nsegs = 0;
+	map->_dm_nused = 0;
 }
 
 /*
@@ -393,7 +478,46 @@ void
 _bus_dmamap_sync(bus_dma_tag_t t, bus_dmamap_t map, bus_addr_t addr,
     bus_size_t size, int op)
 {
-	/* Nothing to do here. */
+	bus_dma_segment_t *sg;
+	int i, off = addr;
+	bus_size_t l;
+
+	if (!cpu_sev_guestmode)
+		return;
+
+	for (i = map->_dm_segcnt, sg = map->dm_segs; size && i--; sg++) {
+		if (off >= sg->ds_len) {
+			off -= sg->ds_len;
+			continue;
+		}
+
+		l = sg->ds_len - off;
+		if (l > size)
+			l = size;
+		size -= l;
+
+		/* READ device -> memory */
+		if (op & BUS_DMASYNC_PREREAD) {
+			/*
+			 * XXX hshoexer: clear bounce buffer?
+                         * This _must not_ be needed, still some
+                         * bugs somewhere (eg. virtio)?
+			 */
+			memset((void *)(sg->ds_va2 + off), 0, l);
+		}
+		if (op & BUS_DMASYNC_POSTREAD) {
+			bcopy((void *)(sg->ds_va2 + off),
+			    (void *)(sg->ds_va + off), l);
+		}
+
+		/* WRITE memory -> device */
+		if (op & BUS_DMASYNC_PREWRITE) {
+			bcopy((void *)(sg->ds_va + off),
+			    (void *)(sg->ds_va2 + off), l);
+		}
+
+		off = 0;
+	}
 }
 
 /*
@@ -504,7 +628,6 @@ _bus_dmamem_map(bus_dma_tag_t t, bus_dma_segment_t *segs, int nsegs,
 void
 _bus_dmamem_unmap(bus_dma_tag_t t, caddr_t kva, size_t size)
 {
-
 #ifdef DIAGNOSTIC
 	if ((u_long)kva & PGOFSET)
 		panic("_bus_dmamem_unmap");
@@ -565,10 +688,11 @@ _bus_dmamap_load_buffer(bus_dma_tag_t t, bus_dmamap_t map, void *buf,
     int first)
 {
 	bus_size_t sgsize;
-	bus_addr_t curaddr, lastaddr, baddr, bmask;
-	vaddr_t vaddr = (vaddr_t)buf;
-	int seg;
+	bus_addr_t curaddr, lastaddr, baddr, bmask, oaddr = -1;
+	vaddr_t pgva = -1, vaddr = (vaddr_t)buf;
+	int seg, page, off;
 	pmap_t pmap;
+	struct vm_page *pg;
 
 	if (p != NULL)
 		pmap = p->p_vmspace->vm_map.pmap;
@@ -589,6 +713,19 @@ _bus_dmamap_load_buffer(bus_dma_tag_t t, bus_dmamap_t map, void *buf,
 			panic("Non dma-reachable buffer at curaddr %#lx(raw)",
 			    curaddr);
 
+		if (cpu_sev_guestmode) {
+			/* use bounce buffer */
+			if (map->_dm_nused + 1 >= map->_dm_npages)
+				return (ENOMEM);
+
+			off = vaddr & PAGE_MASK;
+			pg = map->_dm_pages[page = map->_dm_nused++];
+			oaddr = curaddr;
+			curaddr = VM_PAGE_TO_PHYS(pg) + off;
+
+			pgva = map->_dm_pgva + (page << PGSHIFT) + off;
+		}
+
 		/*
 		 * Compute the segment size, and adjust counts.
 		 */
@@ -611,7 +748,10 @@ _bus_dmamap_load_buffer(bus_dma_tag_t t, bus_dmamap_t map, void *buf,
 		 */
 		if (first) {
 			map->dm_segs[seg].ds_addr = curaddr;
+			map->dm_segs[seg].ds_addr2 = oaddr;
 			map->dm_segs[seg].ds_len = sgsize;
+			map->dm_segs[seg].ds_va = vaddr;
+			map->dm_segs[seg].ds_va2 = pgva;
 			first = 0;
 		} else {
 			if (curaddr == lastaddr &&
@@ -625,7 +765,10 @@ _bus_dmamap_load_buffer(bus_dma_tag_t t, bus_dmamap_t map, void *buf,
 				if (++seg >= map->_dm_segcnt)
 					break;
 				map->dm_segs[seg].ds_addr = curaddr;
+				map->dm_segs[seg].ds_addr2 = oaddr;
 				map->dm_segs[seg].ds_len = sgsize;
+				map->dm_segs[seg].ds_va = vaddr;
+				map->dm_segs[seg].ds_va2 = pgva;
 			}
 		}
 
diff --git a/sys/arch/amd64/amd64/conf.c b/sys/arch/amd64/amd64/conf.c
index f87df421880..e8b52718757 100644
--- a/sys/arch/amd64/amd64/conf.c
+++ b/sys/arch/amd64/amd64/conf.c
@@ -98,6 +98,15 @@ int	nblkdev = nitems(bdevsw);
 	(dev_type_stop((*))) enodev, 0, \
 	(dev_type_mmap((*))) enodev, 0, 0, seltrue_kqfilter }
 
+/* open, close, ioctl */
+#define cdev_psp_init(c,n) { \
+	dev_init(c,n,open), dev_init(c,n,close), \
+	(dev_type_read((*))) enodev, \
+	(dev_type_write((*))) enodev, \
+	 dev_init(c,n,ioctl), \
+	(dev_type_stop((*))) enodev, 0, \
+	(dev_type_mmap((*))) enodev, 0, 0, seltrue_kqfilter }
+
 #define	mmread	mmrw
 #define	mmwrite	mmrw
 cdev_decl(mm);
@@ -152,6 +161,8 @@ cdev_decl(nvram);
 cdev_decl(drm);
 #include "viocon.h"
 cdev_decl(viocon);
+#include "ccp.h"
+cdev_decl(psp);
 
 #include "wsdisplay.h"
 #include "wskbd.h"
@@ -290,6 +301,7 @@ struct cdevsw	cdevsw[] =
 	cdev_fido_init(NFIDO,fido),	/* 98: FIDO/U2F security keys */
 	cdev_pppx_init(NPPPX,pppac),	/* 99: PPP Access Concentrator */
 	cdev_ujoy_init(NUJOY,ujoy),	/* 100: USB joystick/gamecontroller */
+	cdev_psp_init(NCCP,psp),		/* 101: PSP */
 };
 int	nchrdev = nitems(cdevsw);
 
diff --git a/sys/arch/amd64/amd64/cpu.c b/sys/arch/amd64/amd64/cpu.c
index 9e05abf3f16..a410bf9284b 100644
--- a/sys/arch/amd64/amd64/cpu.c
+++ b/sys/arch/amd64/amd64/cpu.c
@@ -161,6 +161,13 @@ int cpu_perf_ebx = 0;		/* cpuid(0xa).ebx */
 int cpu_perf_edx = 0;		/* cpuid(0xa).edx */
 int cpu_apmi_edx = 0;		/* cpuid(0x80000007).edx */
 int ecpu_ecxfeature = 0;	/* cpuid(0x80000001).ecx */
+int cpu_enc_eax = 0;		/* cpuid(0x8000001f).eax */
+int cpu_enc_ebx = 0;		/* cpuid(0x8000001f).ebx */
+int cpu_enc_ecx = 0;		/* cpuid(0x8000001f).ecx */
+int cpu_enc_edx = 0;		/* cpuid(0x8000001f).edx */
+int cpu_sev_stat_lo = 0;	/* MSR SEV_STATUS */
+int cpu_sev_stat_hi = 0;
+int cpu_sev_guestmode = 0;
 int cpu_meltdown = 0;
 int cpu_use_xsaves = 0;
 int need_retpoline = 1;		/* most systems need retpoline */
diff --git a/sys/arch/amd64/amd64/identcpu.c b/sys/arch/amd64/amd64/identcpu.c
index 0d113e732b8..60afa7fc793 100644
--- a/sys/arch/amd64/amd64/identcpu.c
+++ b/sys/arch/amd64/amd64/identcpu.c
@@ -70,6 +70,13 @@ int amd64_has_xcrypt;
 int amd64_has_pclmul;
 int amd64_has_aesni;
 #endif
+int amd64_has_sme;
+uint32_t amd64_sme_psize;
+int amd64_pos_cbit;
+int amd64_nvmpl;
+int amd64_nencguests;
+int amd64_has_sev;
+int amd64_has_seves;
 int has_rdrand;
 int has_rdseed;
 
@@ -242,6 +249,35 @@ const struct {
 	{ CPUIDEBX_SSBD,		"SSBD" },
 	{ CPUIDEBX_VIRT_SSBD,		"VIRTSSBD" },
 	{ CPUIDEBX_SSBD_NOTREQ,		"SSBDNR" },
+}, cpu_amdsme_eaxfeatures[] = {
+	{ CPUIDEAX_SME,			"SME" },
+	{ CPUIDEAX_SEV,			"SEV" },
+	{ CPUIDEAX_PFLUSH_MSR,		"PFLUSH-MSR" },
+	{ CPUIDEAX_SEVES,		"SEV-ES" },
+	{ CPUIDEAX_SEVSNP,		"SEV-SNP" },
+	{ CPUIDEAX_VMPL,		"VMPL" },
+#if 0
+	{ CPUIDEAX_RMPQUERY,		"RMPQUERY" },
+	{ CPUIDEAX_VMPLSSS,		"VMPLSSS" },
+	{ CPUIDEAX_SECTSC,		"SECTSC" },
+	{ CPUIDEAX_TSCAUXVIRT,		"TSCAUXVIRT" },
+	{ CPUIDEAX_HWECACHECOH,		"HWECACHECOH" },
+	{ CPUIDEAX_64BITHOST,		"64BITHOST" },
+	{ CPUIDEAX_RESTINJ,		"RESTINJ" },
+	{ CPUIDEAX_ALTINJ,		"ALTINJ" },
+	{ CPUIDEAX_DBGSTSW,		"DBGSTSW" },
+	{ CPUIDEAX_IBSDISALLOW,		"IBSDISALLOW" },
+#endif
+	{ CPUIDEAX_VTE,			"VTE" },
+#if 0
+	{ CPUIDEAX_VMGEXITPARAM,	"VMGEXITPARAM" },
+	{ CPUIDEAX_VTOMMSR,		"VTOMMSR" },
+	{ CPUIDEAX_IBSVIRT,		"IBSVIRT" },
+	{ CPUIDEAX_VMSARPROT,		"VMSAPROT" },
+	{ CPUIDEAX_SMTPROT,		"SMTPROT" },
+	{ CPUIDEAX_SVSMPAGEMSR,		"SVSMPAGEMSR" },
+	{ CPUIDEAX_NVSMSR,		"NVSMSR" },
+#endif
 }, cpu_xsave_extfeatures[] = {
 	{ XSAVE_XSAVEOPT,	"XSAVEOPT" },
 	{ XSAVE_XSAVEC,		"XSAVEC" },
@@ -749,6 +785,26 @@ identifycpu(struct cpu_info *ci)
 				printf(",%s", cpu_xsave_extfeatures[i].str);
 	}
 
+	/* AMD secure memory encryption features */
+	if (!strcmp(cpu_vendor, "AuthenticAMD") &&
+	    ci->ci_pnfeatset >= CPUID_AMD_SME_CAP) {
+		for (i = 0; i < nitems(cpu_amdsme_eaxfeatures); i++)
+			if (cpu_enc_eax & cpu_amdsme_eaxfeatures[i].bit)
+				printf(",%s", cpu_amdsme_eaxfeatures[i].str);
+		if (cpu_enc_eax & AMD_SEV_CAP)
+			amd64_has_sev = 1;
+		if (cpu_enc_eax & AMD_SME_CAP) {
+			amd64_has_sme = 1;
+			amd64_sme_psize = ((cpu_enc_ebx >> 6) & 0x3f);
+			amd64_pos_cbit = (cpu_enc_ebx & 0x3f);
+			amd64_nvmpl = ((cpu_enc_ebx >> 12) & 0x0f);
+		}
+		if (cpu_enc_eax & AMD_SEVES_CAP)
+			amd64_has_seves = 1;
+		if (cpu_sev_guestmode)
+			printf(",SEV-GUESTMODE");
+	}
+
 	if (cpu_meltdown)
 		printf(",MELTDOWN");
 
diff --git a/sys/arch/amd64/amd64/locore0.S b/sys/arch/amd64/amd64/locore0.S
index e989e37a89a..e2293e72d52 100644
--- a/sys/arch/amd64/amd64/locore0.S
+++ b/sys/arch/amd64/amd64/locore0.S
@@ -278,6 +278,59 @@ cont:
 	cpuid
 	movl	%edx,RELOC(cpu_apmi_edx)
 
+	/*
+	 * Determine AMD SME and SEV capabilities.
+	 */
+	movl	$RELOC(cpu_vendor),%ebp
+	cmpl $0x68747541, (%ebp)	/* "Auth" */
+	jne	.Lno_semsev
+	cmpl $0x69746e65, 4(%ebp)	/* "enti" */
+	jne	.Lno_semsev
+	cmpl $0x444d4163, 8(%ebp)	/* "cAMD" */
+	jne	.Lno_semsev
+
+	/* AMD CPU, check for SME and SEV. */
+	movl	$0x8000001f, %eax
+	cpuid
+	movl	%eax, RELOC(cpu_enc_eax)
+	movl	%ebx, RELOC(cpu_enc_ebx)
+	movl	%ecx, RELOC(cpu_enc_ecx)
+	movl	%edx, RELOC(cpu_enc_edx)
+	andl	$CPUIDEAX_SME, %eax	/* SME */
+	jz	.Lno_semsev
+	movl	RELOC(cpu_enc_eax), %eax
+	andl	$CPUIDEAX_SEV, %eax	/* SEV */
+	jz	.Lno_semsev
+
+	/* Are we in guest mode with SEV enabled? */
+	movl	$MSR_SEV_STATUS, %ecx
+	rdmsr
+	movl	%eax, RELOC(cpu_sev_stat_lo)
+	movl	%edx, RELOC(cpu_sev_stat_hi)
+	andl	$SEV_STAT_ENABLED, %eax
+	jz	.Lno_semsev
+	movl	$0x1, RELOC(cpu_sev_guestmode)
+
+	/* Determine C bit position, adjust pg_frame and pg_lgframe. */
+	movl	RELOC(cpu_enc_ebx), %ecx
+	cmpl	$0x20, %ecx	/* must be at least bit 32 (counting from 0) */
+	jl	.Lno_semsev
+	xorl	%eax, %eax
+	movl	%eax, RELOC(pg_crypt)
+	andl	$0x3f, %ecx
+	movl	%ecx, RELOC(amd64_pos_cbit)
+	subl	$0x20, %ecx
+	movl	$0x1, %eax
+	shll	%cl, %eax
+	movl	%eax, RELOC((pg_crypt + 4))
+
+	/* mask off C bit */
+	notl	%eax
+	andl	%eax, RELOC(pg_frame + 4)
+	andl	%eax, RELOC(pg_lgframe + 4)
+
+.Lno_semsev:
+
 	/*
 	 * Finished with old stack; load new %esp now instead of later so we
 	 * can trace this code without having to worry about the trace trap
@@ -324,11 +377,15 @@ cont:
 	NDML3_ENTRIES + NDML2_ENTRIES + 3) * NBPG)
 
 #define fillkpt \
-1:	movl	%eax,(%ebx)	;	/* store phys addr */ \
-	movl	$0,4(%ebx)	;	/* upper 32 bits 0 */ \
-	addl	$8,%ebx		;	/* next pte/pde */ \
-	addl	$NBPG,%eax	;	/* next phys page */ \
-	loop	1b		;	/* till finished */
+	pushl	%ebp				;	/* save */ \
+1:	movl	%eax,(%ebx)			;	/* store phys addr */ \
+	movl	$0,4(%ebx)			;	/* upper 32 bits 0 */ \
+	movl	RELOC((pg_crypt + 4)), %ebp	;	/* C bit? */ \
+	orl	%ebp,4(%ebx)			;	/* apply */ \
+	addl	$8,%ebx				;	/* next pte/pde */ \
+	addl	$NBPG,%eax			;	/* next phys page */ \
+	loop	1b				;	/* till finished */ \
+	popl	%ebp				;	/* restore */
 
 
 #define fillkpt_nx \
@@ -336,6 +393,8 @@ cont:
 1:	movl	%eax,(%ebx)			;	/* store phys addr */ \
 	movl	RELOC((pg_nx + 4)), %ebp	;	/* NX bit? */ \
 	movl	%ebp,4(%ebx)			;	/* upper 32 bits */ \
+	movl	RELOC((pg_crypt + 4)), %ebp	;	/* C bit? */ \
+	orl	%ebp,4(%ebx)			;	/* apply */ \
 	addl	$8,%ebx				;	/* next pte/pde */ \
 	addl	$NBPG,%eax			;	/* next phys page */ \
 	loop	1b				;	/* till finished */ \
@@ -521,6 +580,8 @@ store_pte:
 	pushl	%ebp
 	movl	RELOC((pg_nx + 4)), %ebp
 	movl	%ebp, 4(%ebx)
+	movl	RELOC((pg_crypt + 4)), %ebp
+	orl	%ebp, 4(%ebx)
 	popl	%ebp
 	addl	$8, %ebx
 	addl	$NBPD_L2, %eax
@@ -546,6 +607,8 @@ store_pte:
 	pushl	%ebp
 	movl	RELOC((pg_nx + 4)), %ebp
 	movl	%ebp, 4(%ebx)
+	movl	RELOC((pg_crypt + 4)), %ebp
+	orl	%ebp, 4(%ebx)
 	popl	%ebp
 
 	/*
diff --git a/sys/arch/amd64/amd64/machdep.c b/sys/arch/amd64/amd64/machdep.c
index 9fa994bdceb..3280aedcab5 100644
--- a/sys/arch/amd64/amd64/machdep.c
+++ b/sys/arch/amd64/amd64/machdep.c
@@ -496,6 +496,7 @@ const struct sysctl_bounded_args cpuctl_vars[] = {
 	{ CPU_XCRYPT, &amd64_has_xcrypt, SYSCTL_INT_READONLY },
 	{ CPU_INVARIANTTSC, &tsc_is_invariant, SYSCTL_INT_READONLY },
 	{ CPU_RETPOLINE, &need_retpoline, SYSCTL_INT_READONLY },
+	{ CPU_SEVGUESTMODE, &cpu_sev_guestmode, SYSCTL_INT_READONLY },
 };
 
 /*
diff --git a/sys/arch/amd64/amd64/pmap.c b/sys/arch/amd64/amd64/pmap.c
index 1886ef87322..5ea4a5b44d2 100644
--- a/sys/arch/amd64/amd64/pmap.c
+++ b/sys/arch/amd64/amd64/pmap.c
@@ -235,6 +235,11 @@ pt_entry_t pg_g_kern = 0;
 /* pg_xo: XO PTE bits, set to PKU key1 (if cpu supports PKU) */
 pt_entry_t pg_xo;
 
+/* pg_crypt, pg_frame, pg_lgframe: will be derived from CPUID */
+pt_entry_t pg_crypt = 0;
+pt_entry_t pg_frame = PG_FRAME;
+pt_entry_t pg_lgframe = PG_LGFRAME;
+
 /*
  * pmap_pg_wc: if our processor supports PAT then we set this
  * to be the pte bits for Write Combining. Else we fall back to
@@ -465,7 +470,7 @@ pmap_find_pte_direct(struct pmap *pm, vaddr_t va, pt_entry_t **pd, int *offs)
 		if ((pde & (PG_PS|PG_V)) != PG_V)
 			return (lev - 1);
 
-		pdpa = ((*pd)[*offs] & PG_FRAME);
+		pdpa = ((*pd)[*offs] & pg_frame);
 		/* 4096/8 == 512 == 2^9 entries per level */
 		shift -= 9;
 		mask >>= 9;
@@ -498,7 +503,8 @@ pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot)
 
 	npte = (pa & PMAP_PA_MASK) | ((prot & PROT_WRITE) ? PG_RW : PG_RO) |
 	    ((pa & PMAP_NOCACHE) ? PG_N : 0) |
-	    ((pa & PMAP_WC) ? pmap_pg_wc : 0) | PG_V;
+	    ((pa & PMAP_WC) ? pmap_pg_wc : 0) | PG_V |
+	    ((pa & PMAP_NOCRYPT) ? 0 : pg_crypt);
 
 	/* special 1:1 mappings in the first 2MB must not be global */
 	if (va >= (vaddr_t)NBPD_L2)
@@ -513,7 +519,8 @@ pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot)
 		panic("%s: PG_PS", __func__);
 #endif
 	if (pmap_valid_entry(opte)) {
-		if (pa & PMAP_NOCACHE && (opte & PG_N) == 0)
+		if ((pa & PMAP_NOCACHE && (opte & PG_N) == 0) ||
+		    (pa & PMAP_NOCRYPT))
 			wbinvd_on_all_cpus();
 		/* This shouldn't happen */
 		pmap_tlb_shootpage(pmap_kernel(), va, 1);
@@ -582,7 +589,8 @@ pmap_set_pml4_early(paddr_t pa)
 	vaddr_t va;
 
 	pml4e = (pt_entry_t *)(proc0.p_addr->u_pcb.pcb_cr3 + KERNBASE);
-	pml4e[PDIR_SLOT_EARLY] = (pd_entry_t)early_pte_pages | PG_V | PG_RW;
+	pml4e[PDIR_SLOT_EARLY] = (pd_entry_t)early_pte_pages | PG_V | PG_RW |
+	    pg_crypt;
 
 	off = pa & PAGE_MASK_L2;
 	curpa = pa & L2_FRAME;
@@ -590,15 +598,16 @@ pmap_set_pml4_early(paddr_t pa)
 	pte = (pt_entry_t *)PMAP_DIRECT_MAP(early_pte_pages);
 	memset(pte, 0, 3 * NBPG);
 
-	pte[0] = (early_pte_pages + NBPG) | PG_V | PG_RW;
-	pte[1] = (early_pte_pages + 2 * NBPG) | PG_V | PG_RW;
+	pte[0] = (early_pte_pages + NBPG) | PG_V | PG_RW | pg_crypt;
+	pte[1] = (early_pte_pages + 2 * NBPG) | PG_V | PG_RW | pg_crypt;
 
 	pte = (pt_entry_t *)PMAP_DIRECT_MAP(early_pte_pages + NBPG);
 	for (i = 0; i < 2; i++) {
 		/* 2 early pages of mappings */
 		for (j = 0; j < 512; j++) {
 			/* j[0..511] : 2MB mappings per page */
-			pte[(i * 512) + j] = curpa | PG_V | PG_RW | PG_PS;
+			pte[(i * 512) + j] = curpa | PG_V | PG_RW | PG_PS |
+			    pg_crypt;
 			curpa += (2 * 1024 * 1024);
 		}
 	}
@@ -777,7 +786,7 @@ pmap_bootstrap(paddr_t first_avail, paddr_t max_pa)
 	if (ndmpdp > 512)
 		ndmpdp = 512;			/* At most 512GB */
 
-	dmpdp = kpm->pm_pdir[PDIR_SLOT_DIRECT] & PG_FRAME;
+	dmpdp = kpm->pm_pdir[PDIR_SLOT_DIRECT] & pg_frame;
 
 	dmpd = first_avail; first_avail += ndmpdp * PAGE_SIZE;
 
@@ -790,7 +799,7 @@ pmap_bootstrap(paddr_t first_avail, paddr_t max_pa)
 
 		*((pd_entry_t *)va) = ((paddr_t)i << L2_SHIFT);
 		*((pd_entry_t *)va) |= PG_RW | PG_V | PG_PS | pg_g_kern | PG_U |
-		    PG_M | pg_nx;
+		    PG_M | pg_nx | pg_crypt;
 	}
 
 	for (i = NDML2_ENTRIES; i < ndmpdp; i++) {
@@ -801,11 +810,12 @@ pmap_bootstrap(paddr_t first_avail, paddr_t max_pa)
 		va = PMAP_DIRECT_MAP(pdp);
 
 		*((pd_entry_t *)va) = dmpd + (i << PAGE_SHIFT);
-		*((pd_entry_t *)va) |= PG_RW | PG_V | PG_U | PG_M | pg_nx;
+		*((pd_entry_t *)va) |= PG_RW | PG_V | PG_U | PG_M | pg_nx |
+		    pg_crypt;
 	}
 
 	kpm->pm_pdir[PDIR_SLOT_DIRECT] = dmpdp | PG_V | PG_KW | PG_U |
-	    PG_M | pg_nx;
+	    PG_M | pg_nx | pg_crypt;
 
 	/* Map any remaining physical memory > 512GB */
 	for (curslot = 1 ; curslot < NUM_L4_SLOT_DIRECT ; curslot++) {
@@ -818,7 +828,7 @@ pmap_bootstrap(paddr_t first_avail, paddr_t max_pa)
 			dmpd = first_avail; first_avail += PAGE_SIZE;
 			pml3 = (pt_entry_t *)PMAP_DIRECT_MAP(dmpd);
 			kpm->pm_pdir[PDIR_SLOT_DIRECT + curslot] = dmpd |
-			    PG_KW | PG_V | PG_U | PG_M | pg_nx;
+			    PG_KW | PG_V | PG_U | PG_M | pg_nx | pg_crypt;
 
 			/* Calculate full 1GB pages in this 512GB region */
 			p = ((max_pa - start_cur) >> L3_SHIFT);
@@ -839,7 +849,8 @@ pmap_bootstrap(paddr_t first_avail, paddr_t max_pa)
 				dmpd = first_avail; first_avail += PAGE_SIZE;
 				pml2 = (pt_entry_t *)PMAP_DIRECT_MAP(dmpd);
 				pml3[i] = dmpd |
-				    PG_RW | PG_V | PG_U | PG_M | pg_nx;
+				    PG_RW | PG_V | PG_U | PG_M | pg_nx |
+				    pg_crypt;
 
 				cur_pa = start_cur + (i << L3_SHIFT);
 				j = 0;
@@ -849,7 +860,8 @@ pmap_bootstrap(paddr_t first_avail, paddr_t max_pa)
 					    (uint64_t)i * NBPD_L3 +
 					    (uint64_t)j * NBPD_L2;
 					pml2[j] |= PG_RW | PG_V | pg_g_kern |
-					    PG_U | PG_M | pg_nx | PG_PS;
+					    PG_U | PG_M | pg_nx | PG_PS |
+					    pg_crypt;
 					cur_pa += NBPD_L2;
 					j++;
 				}
@@ -949,14 +961,14 @@ pmap_randomize(void)
 	proc0.p_addr->u_pcb.pcb_cr3 = pml4pa;
 
 	/* Fixup recursive PTE PML4E slot. We are only changing the PA */
-	pml4va[PDIR_SLOT_PTE] = pml4pa | (pml4va[PDIR_SLOT_PTE] & ~PG_FRAME);
+	pml4va[PDIR_SLOT_PTE] = pml4pa | (pml4va[PDIR_SLOT_PTE] & ~pg_frame);
 
 	for (i = 0; i < NPDPG; i++) {
 		/* PTE slot already handled earlier */
 		if (i == PDIR_SLOT_PTE)
 			continue;
 
-		if (pml4va[i] & PG_FRAME)
+		if (pml4va[i] & pg_frame)
 			pmap_randomize_level(&pml4va[i], 3);
 	}
 
@@ -985,11 +997,11 @@ pmap_randomize_level(pd_entry_t *pde, int level)
 		panic("%s: cannot allocate page for L%d page directory",
 		    __func__, level);
 
-	old_pd_pa = *pde & PG_FRAME;
+	old_pd_pa = *pde & pg_frame;
 	old_pd_va = PMAP_DIRECT_MAP(old_pd_pa);
 	pmap_extract(pmap_kernel(), (vaddr_t)new_pd_va, &new_pd_pa);
 	memcpy(new_pd_va, (void *)old_pd_va, PAGE_SIZE);
-	*pde = new_pd_pa | (*pde & ~PG_FRAME);
+	*pde = new_pd_pa | (*pde & ~pg_frame);
 
 	tlbflush();
 	memset((void *)old_pd_va, 0, PAGE_SIZE);
@@ -1003,7 +1015,7 @@ pmap_randomize_level(pd_entry_t *pde, int level)
 	}
 
 	for (i = 0; i < NPDPG; i++)
-		if (new_pd_va[i] & PG_FRAME)
+		if (new_pd_va[i] & pg_frame)
 			pmap_randomize_level(&new_pd_va[i], level - 1);
 }
 
@@ -1023,7 +1035,8 @@ pmap_prealloc_lowmem_ptps(paddr_t first_avail)
 	for (;;) {
 		newp = first_avail; first_avail += PAGE_SIZE;
 		memset((void *)PMAP_DIRECT_MAP(newp), 0, PAGE_SIZE);
-		pdes[pl_i(0, level)] = (newp & PG_FRAME) | PG_V | PG_RW;
+		pdes[pl_i(0, level)] =
+		    (newp & pg_frame) | PG_V | PG_RW | pg_crypt;
 		level--;
 		if (level <= 1)
 			break;
@@ -1203,7 +1216,7 @@ pmap_get_ptp(struct pmap *pmap, vaddr_t va)
 		pva = normal_pdes[i - 2];
 
 		if (pmap_valid_entry(pva[index])) {
-			ppa = pva[index] & PG_FRAME;
+			ppa = pva[index] & pg_frame;
 			ptp = NULL;
 			continue;
 		}
@@ -1219,7 +1232,7 @@ pmap_get_ptp(struct pmap *pmap, vaddr_t va)
 		ptp->wire_count = 1;
 		pmap->pm_ptphint[i - 2] = ptp;
 		pa = VM_PAGE_TO_PHYS(ptp);
-		pva[index] = (pd_entry_t) (pa | PG_u | PG_RW | PG_V);
+		pva[index] = (pd_entry_t) (pa | PG_u | PG_RW | PG_V | pg_crypt);
 
 		/*
 		 * Meltdown Special case - if we are adding a new PML4e for
@@ -1292,7 +1305,7 @@ pmap_pdp_ctor(pd_entry_t *pdir)
 	memset(pdir, 0, PDIR_SLOT_PTE * sizeof(pd_entry_t));
 
 	/* put in recursive PDE to map the PTEs */
-	pdir[PDIR_SLOT_PTE] = pdirpa | PG_V | PG_KW | pg_nx;
+	pdir[PDIR_SLOT_PTE] = pdirpa | PG_V | PG_KW | pg_nx | pg_crypt;
 
 	npde = nkptp[PTP_LEVELS - 1];
 
@@ -1359,7 +1372,7 @@ pmap_create(void)
 	pmap->pm_pdir = pool_get(&pmap_pdp_pool, PR_WAITOK);
 	pmap_pdp_ctor(pmap->pm_pdir);
 
-	pmap->pm_pdirpa = pmap->pm_pdir[PDIR_SLOT_PTE] & PG_FRAME;
+	pmap->pm_pdirpa = pmap->pm_pdir[PDIR_SLOT_PTE] & pg_frame;
 
 	/*
 	 * Intel CPUs need a special page table to be used during usermode
@@ -1557,7 +1570,7 @@ pmap_extract(struct pmap *pmap, vaddr_t va, paddr_t *pap)
 
 	if (__predict_true(level == 0 && pmap_valid_entry(pte))) {
 		if (pap != NULL)
-			*pap = (pte & PG_FRAME) | (va & PAGE_MASK);
+			*pap = (pte & pg_frame) | (va & PAGE_MASK);
 		return 1;
 	}
 	if (level == 1 && (pte & (PG_PS|PG_V)) == (PG_PS|PG_V)) {
@@ -1661,7 +1674,7 @@ pmap_remove_ptes(struct pmap *pmap, struct vm_page *ptp, vaddr_t ptpva,
 		if (ptp != NULL)
 			ptp->wire_count--;		/* dropping a PTE */
 
-		pg = PHYS_TO_VM_PAGE(opte & PG_FRAME);
+		pg = PHYS_TO_VM_PAGE(opte & pg_frame);
 
 		/*
 		 * if we are not on a pv list we are done.
@@ -1728,7 +1741,7 @@ pmap_remove_pte(struct pmap *pmap, struct vm_page *ptp, pt_entry_t *pte,
 	if (ptp != NULL)
 		ptp->wire_count--;		/* dropping a PTE */
 
-	pg = PHYS_TO_VM_PAGE(opte & PG_FRAME);
+	pg = PHYS_TO_VM_PAGE(opte & pg_frame);
 
 	/*
 	 * if we are not on a pv list we are done.
@@ -1808,7 +1821,7 @@ pmap_do_remove(struct pmap *pmap, vaddr_t sva, vaddr_t eva, int flags)
 		if (pmap_pdes_valid(sva, &pde)) {
 
 			/* PA of the PTP */
-			ptppa = pde & PG_FRAME;
+			ptppa = pde & pg_frame;
 
 			/* get PTP if non-kernel mapping */
 
@@ -1876,7 +1889,7 @@ pmap_do_remove(struct pmap *pmap, vaddr_t sva, vaddr_t eva, int flags)
 			continue;
 
 		/* PA of the PTP */
-		ptppa = pde & PG_FRAME;
+		ptppa = pde & pg_frame;
 
 		/* get PTP if non-kernel mapping */
 		if (pmap == pmap_kernel()) {
@@ -1974,12 +1987,12 @@ pmap_page_remove(struct vm_page *pg)
 
 #ifdef DIAGNOSTIC
 		if (pve->pv_ptp != NULL && pmap_pdes_valid(pve->pv_va, &pde) &&
-		   (pde & PG_FRAME) != VM_PAGE_TO_PHYS(pve->pv_ptp)) {
+		   (pde & pg_frame) != VM_PAGE_TO_PHYS(pve->pv_ptp)) {
 			printf("%s: pg=%p: va=%lx, pv_ptp=%p\n", __func__,
 			       pg, pve->pv_va, pve->pv_ptp);
 			printf("%s: PTP's phys addr: "
 			       "actual=%lx, recorded=%lx\n", __func__,
-			       (unsigned long)(pde & PG_FRAME),
+			       (unsigned long)(pde & pg_frame),
 				VM_PAGE_TO_PHYS(pve->pv_ptp));
 			panic("%s: mapped managed page has "
 			      "invalid pv_ptp field", __func__);
@@ -2140,8 +2153,8 @@ pmap_write_protect(struct pmap *pmap, vaddr_t sva, vaddr_t eva, vm_prot_t prot)
 	shootself = (scr3 == 0);
 
 	/* should be ok, but just in case ... */
-	sva &= PG_FRAME;
-	eva &= PG_FRAME;
+	sva &= pg_frame;
+	eva &= pg_frame;
 
 	if (!(prot & PROT_READ))
 		set |= pg_xo;
@@ -2314,7 +2327,7 @@ pmap_enter_special(vaddr_t va, paddr_t pa, vm_prot_t prot)
 		if (!pmap_extract(pmap, (vaddr_t)ptp, &npa))
 			panic("%s: can't locate PDPT page", __func__);
 
-		pd[l4idx] = (npa | PG_RW | PG_V);
+		pd[l4idx] = (npa | PG_RW | PG_V | pg_crypt);
 
 		DPRINTF("%s: allocated new PDPT page at phys 0x%llx, "
 		    "setting PML4e[%lld] = 0x%llx\n", __func__,
@@ -2338,7 +2351,7 @@ pmap_enter_special(vaddr_t va, paddr_t pa, vm_prot_t prot)
 		if (!pmap_extract(pmap, (vaddr_t)ptp, &npa))
 			panic("%s: can't locate PD page", __func__);
 
-		pd[l3idx] = (npa | PG_RW | PG_V);
+		pd[l3idx] = (npa | PG_RW | PG_V | pg_crypt);
 
 		DPRINTF("%s: allocated new PD page at phys 0x%llx, "
 		    "setting PDPTe[%lld] = 0x%llx\n", __func__,
@@ -2362,7 +2375,7 @@ pmap_enter_special(vaddr_t va, paddr_t pa, vm_prot_t prot)
 		if (!pmap_extract(pmap, (vaddr_t)ptp, &npa))
 			panic("%s: can't locate PT page", __func__);
 
-		pd[l2idx] = (npa | PG_RW | PG_V);
+		pd[l2idx] = (npa | PG_RW | PG_V | pg_crypt);
 
 		DPRINTF("%s: allocated new PT page at phys 0x%llx, "
 		    "setting PDE[%lld] = 0x%llx\n", __func__,
@@ -2378,7 +2391,7 @@ pmap_enter_special(vaddr_t va, paddr_t pa, vm_prot_t prot)
 	    "0x%llx was 0x%llx\n", __func__, (uint64_t)npa, (uint64_t)pd,
 	    (uint64_t)prot, (uint64_t)pd[l1idx]);
 
-	pd[l1idx] = pa | protection_codes[prot] | PG_V | PG_W;
+	pd[l1idx] = pa | protection_codes[prot] | PG_V | PG_W | pg_crypt;
 
 	/*
 	 * Look up the corresponding U+K entry.  If we're installing the
@@ -2387,7 +2400,7 @@ pmap_enter_special(vaddr_t va, paddr_t pa, vm_prot_t prot)
 	 */
 	level = pmap_find_pte_direct(pmap, va, &ptes, &offs);
 	if (__predict_true(level == 0 && pmap_valid_entry(ptes[offs]))) {
-		if (((pd[l1idx] ^ ptes[offs]) & PG_FRAME) == 0) {
+		if (((pd[l1idx] ^ ptes[offs]) & pg_frame) == 0) {
 			pd[l1idx] |= PG_G | (ptes[offs] & (PG_N | PG_WT));
 			ptes[offs] |= PG_G;
 		} else {
@@ -2678,6 +2691,7 @@ pmap_enter(struct pmap *pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, int flags)
 	struct pv_entry *pve, *opve = NULL;
 	int ptpdelta, wireddelta, resdelta;
 	int wired = (flags & PMAP_WIRED) != 0;
+	int crypt = (flags & PMAP_NOCRYPT) == 0;
 	int nocache = (pa & PMAP_NOCACHE) != 0;
 	int wc = (pa & PMAP_WC) != 0;
 	int error, shootself;
@@ -2755,7 +2769,7 @@ pmap_enter(struct pmap *pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, int flags)
 		 * want to map?
 		 */
 
-		if ((opte & PG_FRAME) == pa) {
+		if ((opte & pg_frame) == pa) {
 
 			/* if this is on the PVLIST, sync R/M bit */
 			if (opte & PG_PVLIST) {
@@ -2790,7 +2804,7 @@ pmap_enter(struct pmap *pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, int flags)
 		 */
 
 		if (opte & PG_PVLIST) {
-			pg = PHYS_TO_VM_PAGE(opte & PG_FRAME);
+			pg = PHYS_TO_VM_PAGE(opte & pg_frame);
 #ifdef DIAGNOSTIC
 			if (pg == NULL)
 				panic("%s: PG_PVLIST mapping with unmanaged "
@@ -2864,6 +2878,8 @@ enter_now:
 		npte |= (PG_u | PG_RW);	/* XXXCDC: no longer needed? */
 	if (pmap == pmap_kernel())
 		npte |= pg_g_kern;
+	if (crypt)
+		npte |= pg_crypt;
 
 	/*
 	 * If the old entry wasn't valid, we can just update it and
@@ -2975,7 +2991,7 @@ pmap_alloc_level(vaddr_t kva, int lvl, long *needed_ptps)
 
 		for (i = index; i <= endindex; i++) {
 			pmap_get_physpage(va, level - 1, &pa);
-			pdep[i] = pa | PG_RW | PG_V | pg_nx;
+			pdep[i] = pa | PG_RW | PG_V | pg_nx | pg_crypt;
 			nkptp[level - 1]++;
 			va += nbpd[level - 1];
 		}
diff --git a/sys/arch/amd64/amd64/vmm_machdep.c b/sys/arch/amd64/amd64/vmm_machdep.c
index 7cc3759171c..bb0019ccb74 100644
--- a/sys/arch/amd64/amd64/vmm_machdep.c
+++ b/sys/arch/amd64/amd64/vmm_machdep.c
@@ -283,6 +283,7 @@ vmm_attach_machdep(struct device *parent, struct device *self, void *aux)
 	struct vmm_softc *sc = (struct vmm_softc *)self;
 	struct cpu_info *ci;
 	CPU_INFO_ITERATOR cii;
+	extern int amd64_pos_cbit;
 
 	sc->sc_md.nr_rvi_cpus = 0;
 	sc->sc_md.nr_ept_cpus = 0;
@@ -327,6 +328,7 @@ vmm_attach_machdep(struct device *parent, struct device *self, void *aux)
 
 	if (sc->mode == VMM_MODE_RVI) {
 		sc->max_vpid = curcpu()->ci_vmm_cap.vcc_svm.svm_max_asid;
+		sc->poscbit = amd64_pos_cbit;
 	} else {
 		sc->max_vpid = 0xFFF;
 	}
@@ -1055,6 +1057,7 @@ start_vmm_on_cpu(struct cpu_info *ci)
 	uint64_t msr;
 	uint32_t cr4;
 	struct vmx_invept_descriptor vid;
+	extern int amd64_has_sev;
 
 	/* No VMM mode? exit. */
 	if ((ci->ci_vmm_flags & CI_VMM_VMX) == 0 &&
@@ -1068,6 +1071,18 @@ start_vmm_on_cpu(struct cpu_info *ci)
 		msr = rdmsr(MSR_EFER);
 		msr |= EFER_SVME;
 		wrmsr(MSR_EFER, msr);
+
+		if (amd64_has_sev) {
+			msr = rdmsr(MSR_SYS_CFG);
+			msr |= SYS_MEMENCRYPTIONMODEEN;
+			wrmsr(MSR_SYS_CFG, msr);
+			msr = rdmsr_locked(MSR_SYS_CFG, OPTERON_MSR_PASSCODE);
+			if (!(msr & SYS_MEMENCRYPTIONMODEEN)) {
+				printf("%s: failed to set "
+				    "SYS_MEMENCRYPTIONMODEEN: 0x%llx\n",
+				    __func__, msr);
+			}
+		}
 	}
 
 	/*
@@ -2056,7 +2071,9 @@ vcpu_reset_regs_svm(struct vcpu *vcpu, struct vcpu_reg_state *vrs)
 
 	/* NPT */
 	if (vmm_softc->mode == VMM_MODE_RVI) {
-		vmcb->v_np_enable = 1;
+		vmcb->v_np_enable = 0x1;	/* NP always required */
+		if (vcpu->vc_sev)
+			vmcb->v_np_enable |= 0x2;	/* add SEV */
 		vmcb->v_n_cr3 = vcpu->vc_parent->vm_map->pmap->pm_pdirpa;
 	}
 
@@ -6292,7 +6309,7 @@ vmm_handle_cpuid(struct vcpu *vcpu)
 		*rdx = 0;
 		break;
 	case 0x80000000:	/* Extended function level */
-		*rax = 0x80000008; /* curcpu()->ci_pnfeatset */
+		*rax = 0x8000001f; /* curcpu()->ci_pnfeatset */
 		*rbx = 0;
 		*rcx = 0;
 		*rdx = 0;
@@ -6352,6 +6369,12 @@ vmm_handle_cpuid(struct vcpu *vcpu)
 		*rcx = ecx;
 		*rdx = edx;
 		break;
+	case 0x8000001f:	/* encryption features (AMD) */
+		*rax = eax;
+		*rbx = ebx;
+		*rcx = ecx;
+		*rdx = edx;
+		break;
 	default:
 		DPRINTF("%s: unsupported rax=0x%llx\n", __func__, *rax);
 		*rax = 0;
diff --git a/sys/arch/amd64/include/bus.h b/sys/arch/amd64/include/bus.h
index 33d6cd6eaeb..ca8f2cd177e 100644
--- a/sys/arch/amd64/include/bus.h
+++ b/sys/arch/amd64/include/bus.h
@@ -551,7 +551,11 @@ typedef struct bus_dmamap		*bus_dmamap_t;
  */
 struct bus_dma_segment {
 	bus_addr_t	ds_addr;	/* DMA address */
+	bus_addr_t	ds_addr2;	/* replacement store */
 	bus_size_t	ds_len;		/* length of transfer */
+	vaddr_t		ds_va;		/* mapped loaded data */
+	vaddr_t		ds_va2;		/* mapped replacement data */
+
 	/*
 	 * Ugh. need this so can pass alignment down from bus_dmamem_alloc
 	 * to scatter gather maps. only the first one is used so the rest is
@@ -655,6 +659,11 @@ struct bus_dmamap {
 
 	void		*_dm_cookie;	/* cookie for bus-specific functions */
 
+	struct vm_page **_dm_pages;	/* replacement pages */
+	vaddr_t		_dm_pgva;	/* those above -- mapped */
+	int		_dm_npages;	/* number of pages allocated */
+	int		_dm_nused;	/* number of pages replaced */
+
 	/*
 	 * PUBLIC MEMBERS: these are used by machine-independent code.
 	 */
diff --git a/sys/arch/amd64/include/cpu.h b/sys/arch/amd64/include/cpu.h
index dd0537cb164..ba179cb3e8b 100644
--- a/sys/arch/amd64/include/cpu.h
+++ b/sys/arch/amd64/include/cpu.h
@@ -386,6 +386,13 @@ extern int cpu_perf_ebx;
 extern int cpu_perf_edx;
 extern int cpu_apmi_edx;
 extern int ecpu_ecxfeature;
+extern int cpu_enc_eax;
+extern int cpu_enc_ebx;
+extern int cpu_enc_ecx;
+extern int cpu_enc_edx;
+extern int cpu_sev_stat_lo;
+extern int cpu_sev_stat_hi;
+extern int cpu_sev_guestmode;
 extern int cpu_id;
 extern char cpu_vendor[];
 extern int cpuid_level;
@@ -485,7 +492,8 @@ void mp_setperf_init(void);
 #define CPU_INVARIANTTSC	17	/* has invariant TSC */
 #define CPU_PWRACTION		18	/* action caused by power button */
 #define CPU_RETPOLINE		19	/* cpu requires retpoline pattern */
-#define CPU_MAXID		20	/* number of valid machdep ids */
+#define CPU_SEVGUESTMODE	20	/* running as SEV guest */
+#define CPU_MAXID		21	/* number of valid machdep ids */
 
 #define	CTL_MACHDEP_NAMES { \
 	{ 0, 0 }, \
@@ -508,6 +516,7 @@ void mp_setperf_init(void);
 	{ "invarianttsc", CTLTYPE_INT }, \
 	{ "pwraction", CTLTYPE_INT }, \
 	{ "retpoline", CTLTYPE_INT }, \
+	{ "sevguestmode", CTLTYPE_INT}, \
 }
 
 #endif /* !_MACHINE_CPU_H_ */
diff --git a/sys/arch/amd64/include/pmap.h b/sys/arch/amd64/include/pmap.h
index 326050f4642..457891d0be7 100644
--- a/sys/arch/amd64/include/pmap.h
+++ b/sys/arch/amd64/include/pmap.h
@@ -320,6 +320,7 @@ struct pmap {
 };
 
 #define PMAP_EFI	PMAP_MD0
+#define PMAP_NOCRYPT	PMAP_MD1
 
 /*
  * MD flags that we use for pmap_enter (in the pa):
diff --git a/sys/arch/amd64/include/pte.h b/sys/arch/amd64/include/pte.h
index c2bd8793c7d..a4d26130eeb 100644
--- a/sys/arch/amd64/include/pte.h
+++ b/sys/arch/amd64/include/pte.h
@@ -164,6 +164,7 @@ typedef u_int64_t pt_entry_t;		/* PTE */
 #ifdef _KERNEL
 extern pt_entry_t pg_xo;	/* XO pte bits using PKU key1 */
 extern pt_entry_t pg_nx;	/* NX pte bit */
+extern pt_entry_t pg_crypt;	/* C pte bit */
 extern pt_entry_t pg_g_kern;	/* PG_G if glbl mappings can be used in kern */
 #endif /* _KERNEL */
 
diff --git a/sys/arch/amd64/include/specialreg.h b/sys/arch/amd64/include/specialreg.h
index 38edcca6148..25f54daab50 100644
--- a/sys/arch/amd64/include/specialreg.h
+++ b/sys/arch/amd64/include/specialreg.h
@@ -347,6 +347,34 @@
 #define CPUIDEBX_VIRT_SSBD	(1ULL << 25)	/* Virt Spec Control SSBD */
 #define CPUIDEBX_SSBD_NOTREQ	(1ULL << 26)	/* SSBD not required */
 
+/*
+ * AMD CPUID function 0x8000001F EAX bits
+ */
+#define CPUIDEAX_SME		(1ULL << 0)  /* SME */
+#define CPUIDEAX_SEV		(1ULL << 1)  /* SEV */
+#define CPUIDEAX_PFLUSH_MSR	(1ULL << 2)  /* Page Flush MSR */
+#define CPUIDEAX_SEVES		(1ULL << 3)  /* SEV-ES */
+#define CPUIDEAX_SEVSNP		(1ULL << 4)  /* SEV-SNP */
+#define CPUIDEAX_VMPL		(1ULL << 5)  /* VM Permission Levels */
+#define CPUIDEAX_RMPQUERY	(1ULL << 6)  /* RMPQUERY */
+#define CPUIDEAX_VMPLSSS	(1ULL << 7)  /* VMPL Supervisor Shadow Stack */
+#define CPUIDEAX_SECTSC		(1ULL << 8)  /* Secure TSC */
+#define CPUIDEAX_TSCAUXVIRT	(1ULL << 9)  /* TSC Aux Virtualization */
+#define CPUIDEAX_HWECACHECOH	(1ULL << 10) /* Coherency Across Encryption Domains*/
+#define CPUIDEAX_64BITHOST	(1ULL << 11) /* SEV guest execution only from a 64-bit host */
+#define CPUIDEAX_RESTINJ	(1ULL << 12) /* Restricted Injection */
+#define CPUIDEAX_ALTINJ		(1ULL << 13) /* Alternate Injection */
+#define CPUIDEAX_DBGSTSW	(1ULL << 14) /* Full debug state swap */
+#define CPUIDEAX_IBSDISALLOW	(1ULL << 15) /* Disallowing IBS use by host */
+#define CPUIDEAX_VTE		(1ULL << 16) /* Virt. Transparent Encryption */
+#define CPUIDEAX_VMGEXITPARAM	(1ULL << 17) /* VMGEXIT Parameter */
+#define CPUIDEAX_VTOMMSR	(1ULL << 18) /* Virtual TOM MSR */
+#define CPUIDEAX_IBSVIRT	(1ULL << 19) /* IBS Virtualization for SEV-ES */
+#define CPUIDEAX_VMSARPROT	(1ULL << 24) /* VMSA Register Protection */
+#define CPUIDEAX_SMTPROT	(1ULL << 25) /* SMT Protection */
+#define CPUIDEAX_SVSMPAGEMSR	(1ULL << 28) /* SVSM Communication Page MSR */
+#define CPUIDEAX_NVSMSR		(1ULL << 29) /* NestedVirtSnpMsr */
+
 #define	CPUID2FAMILY(cpuid)	(((cpuid) >> 8) & 15)
 #define	CPUID2MODEL(cpuid)	(((cpuid) >> 4) & 15)
 #define	CPUID2STEPPING(cpuid)	((cpuid) & 15)
@@ -592,6 +620,9 @@
 #define MSR_PATCH_LOADER	0xc0010020
 #define MSR_INT_PEN_MSG	0xc0010055	/* Interrupt pending message */
 
+#define MSR_SYS_CFG	0xc0010010	/* System Configuration */
+#define		SYS_MEMENCRYPTIONMODEEN	0x00800000	/* SEV */
+
 #define MSR_DE_CFG	0xc0011029	/* Decode Configuration */
 #define	DE_CFG_721	0x00000001	/* errata 721 */
 #define DE_CFG_SERIALIZE_LFENCE	(1 << 1)	/* Enable serializing lfence */
@@ -604,6 +635,7 @@
  * These require a 'passcode' for access.  See cpufunc.h.
  */
 #define	MSR_HWCR	0xc0010015
+#define		HWCR_SMMLOCK		0x00000001
 #define		HWCR_FFDIS		0x00000040
 #define		HWCR_TSCFREQSEL		0x01000000
 
@@ -614,6 +646,9 @@
 #define		NB_CFG_DISIOREQLOCK	0x0000000000000004ULL
 #define		NB_CFG_DISDATMSK	0x0000001000000000ULL
 
+#define MSR_SEV_STATUS	0xc0010131
+#define		SEV_STAT_ENABLED	0x00000001
+
 #define	MSR_LS_CFG	0xc0011020
 #define		LS_CFG_DIS_LS2_SQUISH	0x02000000
 
@@ -1489,6 +1524,16 @@
 #define SVM_INTERCEPT_CR14_WRITE_POST	(1UL << 30)
 #define SVM_INTERCEPT_CR15_WRITE_POST	(1UL << 31)
 
+/*
+ * SME
+ */
+#define CPUID_AMD_SME_CAP		0x8000001F
+#define AMD_SME_CAP			(1 << 0)
+#define AMD_SEV_CAP			(1 << 1)
+#define AMD_PGFLUSH_MSR_CAP		(1 << 2)
+#define AMD_SEVES_CAP			(1 << 3)
+#define AMD_SEVSNP_CAP			(1 << 4)
+
 /*
  * PAT
  */
diff --git a/sys/arch/amd64/include/vmmvar.h b/sys/arch/amd64/include/vmmvar.h
index e6a35211b0f..f8b280210dd 100644
--- a/sys/arch/amd64/include/vmmvar.h
+++ b/sys/arch/amd64/include/vmmvar.h
@@ -879,6 +879,8 @@ struct vcpu {
 	/* Userland Protection Keys */
 	uint32_t vc_pkru;			/* [v] */
 
+	int vc_sev;				/* [I] */
+
 	/* VMX only (all requiring [v]) */
 	uint64_t vc_vmx_basic;
 	uint64_t vc_vmx_entry_ctls;
diff --git a/sys/conf/files b/sys/conf/files
index fd76e9934e9..250ba579114 100644
--- a/sys/conf/files
+++ b/sys/conf/files
@@ -467,7 +467,7 @@ file	dev/usb/xhci.c			xhci	needs-flag
 
 # AMD Cryptographic Co-processor
 device	ccp
-file	dev/ic/ccp.c			ccp
+file	dev/ic/ccp.c			ccp	needs-flag
 
 # SDHC SD/MMC controller
 define	sdhc
diff --git a/sys/dev/ic/ccp.c b/sys/dev/ic/ccp.c
index 5a04b73938f..5234dbaae90 100644
--- a/sys/dev/ic/ccp.c
+++ b/sys/dev/ic/ccp.c
@@ -2,6 +2,7 @@
 
 /*
  * Copyright (c) 2018 David Gwynne <dlg@openbsd.org>
+ * Copyright (c) 2023, 2024 Hans-Joerg Hoexer <hshoexer@genua.de>
  *
  * Permission to use, copy, modify, and distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -23,6 +24,9 @@
 #include <sys/malloc.h>
 #include <sys/kernel.h>
 #include <sys/timeout.h>
+#include <sys/proc.h>
+
+#include <uvm/uvm.h>
 
 #include <machine/bus.h>
 
@@ -38,13 +42,19 @@ struct cfdriver ccp_cd = {
 	DV_DULL
 };
 
+struct ccp_softc *ccp_softc;
+
+int	psp_get_pstatus(struct psp_platform_status *);
+int	psp_launch_start(struct psp_launch_start *);
+int	psp_init(struct psp_init *);
+
 void
 ccp_attach(struct ccp_softc *sc)
 {
 	timeout_set(&sc->sc_tick, ccp_rng, sc);
 	ccp_rng(sc);
 
-	printf("\n");
+	printf(", RNG");
 }
 
 static void
@@ -59,3 +69,441 @@ ccp_rng(void *arg)
 
 	timeout_add_msec(&sc->sc_tick, 100);
 }
+
+int
+psp_sev_intr(struct ccp_softc *sc, uint32_t status)
+{
+	if (!(status & PSP_CMDRESP_COMPLETE))
+		return (0);
+
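+	/* Wake up the process sleeping in ccp_pci_wait(). */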
+	wakeup(sc);
+
+	return (1);
+}
+
+int
+psp_attach(struct ccp_softc *sc)
+{
+	struct psp_platform_status	pst;
+	struct psp_init			init;
+	size_t				size;
+	int				nsegs;
+
+	extern int amd64_has_sev;
+	extern int amd64_has_sme;
+	extern int amd64_has_seves;
+#ifdef AMDCCP_DEBUG
+	extern uint32_t amd64_sme_psize;
+	extern int amd64_pos_cbit;
+	extern int amd64_nvmpl;
+	extern int amd64_nencguests;
+
+	DPRINTF(("%s: %d %u %d %d %d %d %b\n", __func__, amd64_has_sme,
+	    amd64_sme_psize, amd64_pos_cbit, amd64_nvmpl, amd64_has_sev,
+	    amd64_nencguests, sc->sc_capabilities, PSP_CAP_BITS));
+#endif
+
+	if (!(amd64_has_sev && sc->sc_capabilities & PSP_CAP_SEV))
+		return (0);
+
+	rw_init(&sc->sc_lock, "ccp_lock");
+
+	/* create and map SEV command buffer */
+	sc->sc_cmd_size = size = PAGE_SIZE;
+	if (bus_dmamap_create(sc->sc_dmat, size, 1, size, 0,
+	    BUS_DMA_WAITOK | BUS_DMA_ALLOCNOW | BUS_DMA_64BIT,
+	    &sc->sc_cmd_map) != 0)
+		return (0);
+
+	if (bus_dmamem_alloc(sc->sc_dmat, size, 0, 0, &sc->sc_cmd_seg, 1,
+	    &nsegs, BUS_DMA_WAITOK | BUS_DMA_ZERO) != 0)
+		goto destroy;
+
+	if (bus_dmamem_map(sc->sc_dmat, &sc->sc_cmd_seg, nsegs, size,
+	    &sc->sc_cmd_kva, BUS_DMA_WAITOK) != 0)
+		goto free;
+
+	if (bus_dmamap_load(sc->sc_dmat, sc->sc_cmd_map, sc->sc_cmd_kva,
+	    size, NULL, BUS_DMA_WAITOK) != 0)
+		goto unmap;
+
+	sc->sc_sev_intr = psp_sev_intr;
+	ccp_softc = sc;
+
+	printf(", SEV");
+
+	if (!amd64_has_sme)
+		return (1);
+
+	printf(", SME");
+
+	if (!amd64_has_seves)
+		return (1);
+
+	if (psp_get_pstatus(&pst) || pst.state != 0) {
+		printf("%s: 1\n", __func__);
+		goto unmap;
+	}
+
+	/*
+	 * create and map Trusted Memory Region (TMR); size 1 Mbyte,
+	 * needs to be aligned to 1 Mbyte.
+	 */
+	sc->sc_tmr_size = size = PSP_TMR_SIZE;
+	if (bus_dmamap_create(sc->sc_dmat, size, 1, size, 0,
+	    BUS_DMA_WAITOK | BUS_DMA_ALLOCNOW | BUS_DMA_64BIT,
+	    &sc->sc_tmr_map) != 0)
+		return (0);
+
+	if (bus_dmamem_alloc(sc->sc_dmat, size, size, 0, &sc->sc_tmr_seg, 1,
+	    &nsegs, BUS_DMA_WAITOK | BUS_DMA_ZERO) != 0) {
+		printf("%s: 2\n", __func__);
+		goto destroy;
+	}
+
+	if (bus_dmamem_map(sc->sc_dmat, &sc->sc_tmr_seg, nsegs, size,
+	    &sc->sc_tmr_kva, BUS_DMA_WAITOK) != 0) {
+		printf("%s: 3\n", __func__);
+		goto free;
+	}
+
+	if (bus_dmamap_load(sc->sc_dmat, sc->sc_tmr_map, sc->sc_tmr_kva,
+	    size, NULL, BUS_DMA_WAITOK) != 0) {
+		printf("%s: 4\n", __func__);
+		goto unmap;
+	}
+
+	memset(&init, 0, sizeof(init));
+	init.enable_es = 1;	/* XXX disable? */
+	init.tmr_length = PSP_TMR_SIZE;
+	init.tmr_paddr = sc->sc_tmr_map->dm_segs[0].ds_addr;
+	if (psp_init(&init)) {
+		printf("%s: 5\n", __func__);
+		goto unmap;
+	}
+
+	psp_get_pstatus(&pst);
+	if (pst.state == PSP_PSTATE_INIT && (pst.cfges_build & 0x1))
+		printf(", SEV-ES");
+
+	return (1);
+
+	/* XXX hshoexer: is this the right "clean up" path? */
+unmap:
+	bus_dmamem_unmap(sc->sc_dmat, sc->sc_cmd_kva, size);
+	bus_dmamem_unmap(sc->sc_dmat, sc->sc_tmr_kva, size);
+free:
+	bus_dmamem_free(sc->sc_dmat, &sc->sc_cmd_seg, 1);
+	bus_dmamem_free(sc->sc_dmat, &sc->sc_tmr_seg, 1);
+destroy:
+	bus_dmamap_destroy(sc->sc_dmat, sc->sc_cmd_map);
+	bus_dmamap_destroy(sc->sc_dmat, sc->sc_tmr_map);
+
+	return (0);
+}
+
+int
+psp_get_pstatus(struct psp_platform_status *status)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	uint64_t		 paddr;
+	int			 ret;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_PLATFORMSTATUS, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	*status = *((struct psp_platform_status *)sc->sc_cmd_kva);
+
+	return (0);
+}
+
+int
+psp_df_flush(void)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+
+	wbinvd_on_all_cpus();
+
+	ret = ccp_pci_docmd(sc, PSP_CMD_DF_FLUSH, 0x0);
+
+	if (ret != 0)
+		return (EIO);
+
+	return (0);
+}
+
+int
+psp_decommission(struct psp_decommission *decom)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	*((struct psp_decommission *)sc->sc_cmd_kva) = *decom;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_DECOMMISSION, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	return (0);
+}
+
+
+int
+psp_get_gstatus(struct psp_guest_status *status)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	uint64_t		 paddr;
+	int			 ret;
+
+	*((struct psp_guest_status *)sc->sc_cmd_kva) = *status;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_GUESTSTATUS, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	*status = *((struct psp_guest_status *)sc->sc_cmd_kva);
+
+	return (0);
+}
+
+int
+psp_launch_start(struct psp_launch_start *start)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	*((struct psp_launch_start *)sc->sc_cmd_kva) = *start;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_LAUNCH_START, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	*start = *((struct psp_launch_start *)sc->sc_cmd_kva);
+
+	return (0);
+}
+
+int
+psp_launch_update_data(struct psp_launch_update_data *lud, struct proc *p)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	pmap_t			 pmap;
+	paddr_t			 lupaddr = -1;
+	uint64_t		 paddr;
+	int			 ret;
+
+	/* Convert vaddr to paddr */
+	pmap = vm_map_pmap(&p->p_vmspace->vm_map);
+	if (!pmap_extract(pmap, lud->paddr, &lupaddr))
+		return (EINVAL);
+	lud->paddr = lupaddr;
+
+	*((struct psp_launch_update_data *)sc->sc_cmd_kva) = *lud;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_LAUNCH_UPDATE_DATA, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	return (0);
+}
+
+int
+psp_launch_measure(struct psp_launch_measure *lm)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	lm->measure_paddr =
+	    paddr + offsetof(struct psp_launch_measure, psp_measure);
+	*((struct psp_launch_measure *)sc->sc_cmd_kva) = *lm;
+
+	ret = ccp_pci_docmd(sc, PSP_CMD_LAUNCH_MEASURE, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	*lm = *((struct psp_launch_measure *)sc->sc_cmd_kva);
+	lm->measure_paddr = 0;
+
+	return (0);
+}
+
+int
+psp_launch_finish(struct psp_launch_finish *lf)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	*((struct psp_launch_finish *)sc->sc_cmd_kva) = *lf;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_LAUNCH_FINISH, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	return (0);
+}
+
+int
+psp_attestation(struct psp_attestation *at)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	at->attest_paddr =
+	    paddr + offsetof(struct psp_attestation, psp_report);
+	*((struct psp_attestation *)sc->sc_cmd_kva) = *at;
+
+	ret = ccp_pci_docmd(sc, PSP_CMD_ATTESTATION, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	*at = *((struct psp_attestation *)sc->sc_cmd_kva);
+	at->attest_paddr = 0;
+
+	return (0);
+}
+
+int
+psp_activate(struct psp_activate *act)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	*((struct psp_activate *)sc->sc_cmd_kva) = *act;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_ACTIVATE, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	return (0);
+}
+
+int
+psp_deactivate(struct psp_deactivate *deact)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	*((struct psp_deactivate *)sc->sc_cmd_kva) = *deact;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_DEACTIVATE, paddr);
+
+	if (ret != 0)
+		return (EIO);
+
+	return (0);
+}
+
+int
+psp_init(struct psp_init *init)
+{
+	struct ccp_softc	*sc = ccp_softc;
+	int			 ret;
+	uint64_t		 paddr;
+
+	*((struct psp_init *)sc->sc_cmd_kva) = *init;
+
+	paddr = sc->sc_cmd_map->dm_segs[0].ds_addr;
+	ret = ccp_pci_docmd(sc, PSP_CMD_INIT, paddr);
+
+	wbinvd_on_all_cpus();
+
+	if (ret != 0)
+		return (EIO);
+
+	return (0);
+}
+
+int
+pspopen(dev_t dev, int flag, int mode, struct proc *p)
+{
+	if (ccp_softc == NULL)
+		return (ENODEV);
+
+	return (0);
+}
+
+int
+pspclose(dev_t dev, int flag, int mode, struct proc *p)
+{
+	return (0);
+}
+
+int
+pspioctl(dev_t dev, u_long cmd, caddr_t data, int flag, struct proc *p)
+{
+	int	ret;
+
+	rw_enter_write(&ccp_softc->sc_lock);
+
+	switch (cmd) {
+	case PSP_IOC_GET_PSTATUS:
+		ret = psp_get_pstatus((struct psp_platform_status *)data);
+		break;
+	case PSP_IOC_DF_FLUSH:
+		ret = psp_df_flush();
+		break;
+	case PSP_IOC_DECOMMISSION:
+		ret = psp_decommission((struct psp_decommission *)data);
+		break;
+	case PSP_IOC_GET_GSTATUS:
+		ret = psp_get_gstatus((struct psp_guest_status *)data);
+		break;
+	case PSP_IOC_LAUNCH_START:
+		ret = psp_launch_start((struct psp_launch_start *)data);
+		break;
+	case PSP_IOC_LAUNCH_UPDATE_DATA:
+		ret = psp_launch_update_data(
+		    (struct psp_launch_update_data *)data, p);
+		break;
+	case PSP_IOC_LAUNCH_MEASURE:
+		ret = psp_launch_measure((struct psp_launch_measure *)data);
+		break;
+	case PSP_IOC_LAUNCH_FINISH:
+		ret = psp_launch_finish((struct psp_launch_finish *)data);
+		break;
+	case PSP_IOC_ATTESTATION:
+		ret = psp_attestation((struct psp_attestation *)data);
+		break;
+	case PSP_IOC_ACTIVATE:
+		ret = psp_activate((struct psp_activate *)data);
+		break;
+	case PSP_IOC_DEACTIVATE:
+		ret = psp_deactivate((struct psp_deactivate *)data);
+		break;
+	default:
+		printf("%s: unknown ioctl code 0x%lx\n", __func__, cmd);
+		ret = ENOTTY;
+	}
+
+	rw_exit_write(&ccp_softc->sc_lock);
+
+	return (ret);
+}
diff --git a/sys/dev/ic/ccpvar.h b/sys/dev/ic/ccpvar.h
index 237a5d45f5e..4d1f5baf6fa 100644
--- a/sys/dev/ic/ccpvar.h
+++ b/sys/dev/ic/ccpvar.h
@@ -2,6 +2,7 @@
 
 /*
  * Copyright (c) 2018 David Gwynne <dlg@openbsd.org>
+ * Copyright (c) 2023, 2024 Hans-Joerg Hoexer <hshoexer@genua.de>
  *
  * Permission to use, copy, modify, and distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -17,13 +18,220 @@
  */
 
 #include <sys/timeout.h>
+#include <sys/ioctl.h>
+
+#define PSP_PSTATE_UNINIT	0x0
+#define PSP_PSTATE_INIT		0x1
+#define PSP_PSTATE_WORKING	0x2
+
+#define PSP_GSTATE_UNINIT	0x0
+#define PSP_GSTATE_LUPDATE	0x1
+#define PSP_GSTATE_LSECRET	0x2
+#define PSP_GSTATE_RUNNING	0x3
+#define PSP_GSTATE_SUPDATE	0x4
+#define PSP_GSTATE_RUPDATE	0x5
+#define PSP_GSTATE_SENT		0x6
+
+#define PSP_CAP_SEV					(1 << 0)
+#define PSP_CAP_TEE					(1 << 1)
+#define PSP_CAP_DBC_THRU_EXT				(1 << 2)
+#define PSP_CAP_SECURITY_REPORTING			(1 << 7)
+#define PSP_CAP_SECURITY_FUSED_PART			(1 << 8)
+#define PSP_CAP_SECURITY_DEBUG_LOCK_ON			(1 << 10)
+#define PSP_CAP_SECURITY_TSME_STATUS			(1 << 13)
+#define PSP_CAP_SECURITY_ANTI_ROLLBACK_STATUS		(1 << 15)
+#define PSP_CAP_SECURITY_RPMC_PRODUCTION_ENABLED	(1 << 16)
+#define PSP_CAP_SECURITY_RPMC_SPIROM_AVAILABLE		(1 << 17)
+#define PSP_CAP_SECURITY_HSP_TPM_AVAILABLE		(1 << 18)
+#define PSP_CAP_SECURITY_ROM_ARMOR_ENFORCED		(1 << 19)
+
+#define PSP_CAP_BITS	"\20\001SEV\002TEE\003DBC_THRU_EXT\010REPORTING\011FUSED_PART\013DEBUG_LOCK_ON\016TSME_STATUS\020ANTI_ROLLBACK_STATUS\021RPMC_PRODUCTION_ENABLED\022RPMC_SPIROM_AVAILABLE\023HSP_TPM_AVAILABLE\024ROM_ARMOR_ENFORCED"
+
+#define PSP_CMDRESP_IOC		(1 << 0)
+#define PSP_CMDRESP_COMPLETE	(1 << 1)
+#define PSP_CMDRESP_RESPONSE	(1 << 31)
+
+#define PSP_STATUS_MASK				0xffff
+#define PSP_STATUS_SUCCESS			0x0000
+#define PSP_STATUS_INVALID_PLATFORM_STATE	0x0001
+
+#define PSP_CMD_INIT			0x1
+#define PSP_CMD_PLATFORMSTATUS		0x4
+#define PSP_CMD_DF_FLUSH		0xa
+#define PSP_CMD_DECOMMISSION		0x20
+#define PSP_CMD_ACTIVATE		0x21
+#define PSP_CMD_DEACTIVATE		0x22
+#define PSP_CMD_GUESTSTATUS		0x23
+#define PSP_CMD_LAUNCH_START		0x30
+#define PSP_CMD_LAUNCH_UPDATE_DATA	0x31
+#define PSP_CMD_LAUNCH_MEASURE		0x33
+#define PSP_CMD_LAUNCH_FINISH		0x35
+#define PSP_CMD_ATTESTATION		0x36
 
 struct ccp_softc {
 	struct device		sc_dev;
 	bus_space_tag_t		sc_iot;
 	bus_space_handle_t	sc_ioh;
+	bus_size_t		sc_size;
+	bus_dma_tag_t		sc_dmat;
+	void *			sc_ih;
+
+	uint32_t		sc_capabilities;
+	int			(*sc_sev_intr)(struct ccp_softc *, uint32_t);
+
+	bus_dmamap_t		sc_cmd_map;
+	bus_dma_segment_t	sc_cmd_seg;
+	size_t			sc_cmd_size;
+	caddr_t			sc_cmd_kva;
+
+	bus_dmamap_t		sc_tmr_map;
+	bus_dma_segment_t	sc_tmr_seg;
+	size_t			sc_tmr_size;
+	caddr_t			sc_tmr_kva;
 
 	struct timeout		sc_tick;
+
+	struct rwlock		sc_lock;
 };
 
+#define PSP_TMR_SIZE		(1024*1024)
+
+#define PSP_SUCCESS		0x0000
+#define PSP_INVALID_ADDRESS	0x0009
+
+struct psp_platform_status {
+	uint8_t			api_major;
+	uint8_t			api_minor;
+	uint8_t			state;
+	uint8_t			owner;
+	uint32_t		cfges_build;
+	uint32_t		guest_count;
+} __packed;
+
+struct psp_guest_status {
+	uint32_t		handle;
+	uint32_t		policy;
+	uint32_t		asid;
+	uint8_t			state;
+} __packed;
+
+struct psp_launch_start {
+	uint32_t		handle;
+	uint32_t		policy;
+	uint64_t		dh_cert_paddr;
+	uint32_t		dh_cert_len;
+	uint32_t		reserved;
+	uint64_t		session_paddr;
+	uint32_t		session_len;
+} __packed;
+
+struct psp_launch_update_data {
+	uint32_t		handle;
+	uint32_t		reserved;
+	uint64_t		paddr;
+	uint32_t		length;
+} __packed;
+
+struct psp_measure {
+	uint8_t			measure[32];
+	uint8_t			measure_nonce[16];
+} __packed;
+
+struct psp_launch_measure {
+	uint32_t		handle;
+	uint32_t		reserved;
+	uint64_t		measure_paddr;
+	uint32_t		measure_len;
+	uint32_t		padding;
+	struct psp_measure	psp_measure;	/* 64bit aligned */
+#define measure		psp_measure.measure
+#define measure_nonce	psp_measure.measure_nonce
+} __packed;
+
+struct psp_launch_finish {
+	uint32_t		handle;
+} __packed;
+
+struct psp_report {
+	uint8_t			report_nonce[16];
+	uint8_t			report_launch_digest[32];
+	uint32_t		report_policy;
+	uint32_t		report_sig_usage;
+	uint32_t		report_sig_algo;
+	uint32_t		reserved2;
+	uint8_t			report_sig1[144];
+} __packed;
+
+struct psp_attestation {
+	uint32_t		handle;
+	uint32_t		reserved;
+	uint64_t		attest_paddr;
+	uint8_t			attest_nonce[16];
+	uint32_t		attest_len;
+	uint32_t		padding;
+	struct psp_report	psp_report;	/* 64bit aligned */
+#define report_nonce		psp_report.report_nonce
+#define report_launch_digest	psp_report.report_launch_digest
+#define report_policy		psp_report.report_policy
+#define report_sig_usage	psp_report.report_sig_usage
+#define report_report_sig_alg	psp_report.report_sig_algo
+#define report_report_sig1	psp_report.report_sig1
+} __packed;
+
+struct psp_activate {
+	uint32_t		handle;
+	uint32_t		asid;
+} __packed;
+
+struct psp_deactivate {
+	uint32_t		handle;
+} __packed;
+
+struct psp_decommission {
+	uint32_t		handle;
+} __packed;
+
+struct psp_init {
+	uint32_t		enable_es;
+	uint32_t		reserved;
+	uint64_t		tmr_paddr;
+	uint32_t		tmr_length;
+} __packed;
+
+#define PSP_IOC_GET_PSTATUS	_IOR('P', 0, struct psp_platform_status)
+#define PSP_IOC_DF_FLUSH	_IO('P', 1)
+#define PSP_IOC_DECOMMISSION	_IOW('P', 2, struct psp_decommission)
+#define PSP_IOC_GET_GSTATUS	_IOWR('P', 3, struct psp_guest_status)
+#define PSP_IOC_LAUNCH_START	_IOWR('P', 4, struct psp_launch_start)
+#define PSP_IOC_LAUNCH_UPDATE_DATA \
+				_IOW('P', 5, struct psp_launch_update_data)
+#define PSP_IOC_LAUNCH_MEASURE	_IOWR('P', 6, struct psp_launch_measure)
+#define PSP_IOC_LAUNCH_FINISH	_IOW('P', 7, struct psp_launch_finish)
+#define PSP_IOC_ATTESTATION	_IOWR('P', 8, struct psp_attestation)
+#define PSP_IOC_ACTIVATE	_IOW('P', 9, struct psp_activate)
+#define PSP_IOC_DEACTIVATE	_IOW('P', 10, struct psp_deactivate)
+#if 0
+#define PSP_IOC_INIT		_IOW('P', 255, struct psp_init)
+#endif
+
+#ifdef _KERNEL
+
+/* #define AMDCCP_DEBUG */
+#ifdef AMDCCP_DEBUG
+#define DPRINTF(x) printf x
+#else
+#define DPRINTF(x)
+#endif
+
 void	ccp_attach(struct ccp_softc *);
+
+int	psp_attach(struct ccp_softc *);
+int	psp_sev_intr(struct ccp_softc *, uint32_t);
+
+int	pspclose(dev_t, int, int, struct proc *);
+int	pspopen(dev_t, int, int, struct proc *);
+int	pspioctl(dev_t, u_long, caddr_t, int, struct proc *);
+
+int	ccp_pci_docmd(struct ccp_softc *, int, uint64_t);
+
+#endif	/* _KERNEL */
diff --git a/sys/dev/pci/ccp_pci.c b/sys/dev/pci/ccp_pci.c
index 18407281d6c..0603794f5b0 100644
--- a/sys/dev/pci/ccp_pci.c
+++ b/sys/dev/pci/ccp_pci.c
@@ -33,9 +33,19 @@
 
 #define CCP_PCI_BAR	0x18
 
+/* AMD 17h */
+#define PSP_REG_INTEN		0x10690
+#define PSP_REG_INTSTS		0x10694
+#define PSP_REG_CMDRESP		0x10980
+#define PSP_REG_ADDRLO		0x109e0
+#define PSP_REG_ADDRHI		0x109e4
+#define PSP_REG_CAPABILITIES	0x109fc
+
 int	ccp_pci_match(struct device *, void *, void *);
 void	ccp_pci_attach(struct device *, struct device *, void *);
 
+int	ccp_pci_intr(void *);
+
 const struct cfattach ccp_pci_ca = {
 	sizeof(struct ccp_softc),
 	ccp_pci_match,
@@ -64,6 +74,10 @@ ccp_pci_attach(struct device *parent, struct device *self, void *aux)
 	struct ccp_softc *sc = (struct ccp_softc *)self;
 	struct pci_attach_args *pa = aux;
 	pcireg_t memtype;
+	pci_intr_handle_t ih;
+	const char *intrstr = NULL;
+
+	sc->sc_dmat = pa->pa_dmat;
 
 	memtype = pci_mapreg_type(pa->pa_pc, pa->pa_tag, CCP_PCI_BAR);
 	if (PCI_MAPREG_TYPE(memtype) != PCI_MAPREG_TYPE_MEM) {
@@ -72,10 +86,118 @@ ccp_pci_attach(struct device *parent, struct device *self, void *aux)
 	}
 
 	if (pci_mapreg_map(pa, CCP_PCI_BAR, memtype, 0,
-	    &sc->sc_iot, &sc->sc_ioh, NULL, NULL, 0) != 0) {
+	    &sc->sc_iot, &sc->sc_ioh, NULL, &sc->sc_size, 0) != 0) {
 		printf(": cannot map registers\n");
 		return;
 	}
 
+	sc->sc_capabilities = bus_space_read_4(sc->sc_iot, sc->sc_ioh,
+	    PSP_REG_CAPABILITIES);
+	DPRINTF(("\n%s: %b\n", __func__, sc->sc_capabilities, PSP_CAP_BITS));
+
+	/* clear and disable interrupts */
+	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PSP_REG_INTEN, 0);
+	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PSP_REG_INTSTS, -1);
+
+	if (pci_intr_map_msix(pa, 0, &ih) != 0 &&
+	    pci_intr_map_msi(pa, &ih) != 0 && pci_intr_map(pa, &ih) != 0) {
+		printf(": couldn't map interrupt\n");
+		goto unmap_ret;
+	}
+
+	intrstr = pci_intr_string(pa->pa_pc, ih);
+	sc->sc_ih = pci_intr_establish(pa->pa_pc, ih, IPL_BIO, ccp_pci_intr,
+	    sc, sc->sc_dev.dv_xname);
+	if (sc->sc_ih != NULL)
+		printf(": %s", intrstr);
+
 	ccp_attach(sc);
+	if (psp_attach(sc)) {
+		/* enable interrupts */
+		bus_space_write_4(sc->sc_iot, sc->sc_ioh, PSP_REG_INTEN, -1);
+	}
+
+	printf("\n");
+
+	return;
+
+unmap_ret:
+	bus_space_unmap(sc->sc_iot, sc->sc_ioh, sc->sc_size);
+}
+
+int
+ccp_pci_intr(void *arg)
+{
+	struct ccp_softc *sc = arg;
+	uint32_t status;
+
+	status = bus_space_read_4(sc->sc_iot, sc->sc_ioh, PSP_REG_INTSTS);
+	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PSP_REG_INTSTS, status);
+
+	if (sc->sc_sev_intr)
+		return (sc->sc_sev_intr(sc, status));
+
+	return (1);
+}
+
+int
+ccp_pci_wait(struct ccp_softc *sc, uint32_t *status, int poll)
+{
+	uint32_t	cmdword;
+	int		count;
+
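+	/*
+	 * When polling (during autoconf), spin on the command/response
+	 * register; otherwise sleep until psp_sev_intr() wakes us up.
+	 */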
+	if (poll) {
+		count = 0;
+		while (count++ < 10) {
+			cmdword = bus_space_read_4(sc->sc_iot, sc->sc_ioh,
+			    PSP_REG_CMDRESP);
+			if (cmdword & PSP_CMDRESP_RESPONSE)
+				goto done;
+			delay(5000);
+		}
+
+		/* timeout */
+		return (1);
+	}
+
+	if (tsleep_nsec(sc, PWAIT, "psp", SEC_TO_NSEC(1)) == EWOULDBLOCK)
+		return (1);
+
+done:
+	if (status) {
+		*status = bus_space_read_4(sc->sc_iot, sc->sc_ioh,
+		    PSP_REG_CMDRESP);
+	}
+
+	return (0);
+}
+
+int
+ccp_pci_docmd(struct ccp_softc *sc, int cmd, uint64_t paddr)
+{
+	uint32_t	plo, phi, cmdword, status;
+
+	plo = ((paddr >> 0) & 0xffffffff);
+	phi = ((paddr >> 32) & 0xffffffff);
+	cmdword = (cmd & 0x3f) << 16;
+	if (!cold)
+		cmdword |= PSP_CMDRESP_IOC;
+
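+	/* Mailbox protocol: write the argument address, then the command. */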
+	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PSP_REG_ADDRLO, plo);
+	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PSP_REG_ADDRHI, phi);
+	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PSP_REG_CMDRESP, cmdword);
+
+	if (ccp_pci_wait(sc, &status, cold))
+		return (1);
+
+	/* Did the PSP send a response code? */
+	if (status & PSP_CMDRESP_RESPONSE) {
+		if ((status & PSP_STATUS_MASK) != PSP_STATUS_SUCCESS) {
+			printf("%s: command failed: 0x%x\n", __func__,
+			    (status & PSP_STATUS_MASK));
+			return (1);
+		}
+	}
+
+	return (0);
 }
diff --git a/sys/dev/pv/virtio.c b/sys/dev/pv/virtio.c
index 53e27e27211..9ddf34767fb 100644
--- a/sys/dev/pv/virtio.c
+++ b/sys/dev/pv/virtio.c
@@ -665,10 +665,18 @@ virtio_enqueue_p(struct virtqueue *vq, int slot, bus_dmamap_t dmamap,
 static void
 publish_avail_idx(struct virtio_softc *sc, struct virtqueue *vq)
 {
+#if 1
+	vq->vq_avail->idx = vq->vq_avail_idx;
+#endif
 	vq_sync_aring(sc, vq, BUS_DMASYNC_PREWRITE);
 
 	virtio_membar_producer();
+#if 0	/*
+	 * XXX hshoexer:  Needs to be pre write, otherwise vmd(8)
+	 * won't see the update on aring idx in the bounced buffer.
+	 */
 	vq->vq_avail->idx = vq->vq_avail_idx;
+#endif
 	vq_sync_aring(sc, vq, BUS_DMASYNC_POSTWRITE);
 	vq->vq_queued = 1;
 }
diff --git a/sys/dev/vmm/vmm.c b/sys/dev/vmm/vmm.c
index 4d4866f70dc..70dd315d075 100644
--- a/sys/dev/vmm/vmm.c
+++ b/sys/dev/vmm/vmm.c
@@ -405,6 +405,8 @@ vm_create(struct vm_create_params *vcp, struct proc *p)
 			return (ret);
 		}
 		vcpu->vc_id = vm->vm_vcpu_ct;
+		vcpu->vc_sev = vcp->vcp_sev;
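+		/* Report the host's C-bit position back to userland. */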
+		vcp->vcp_poscbit = vmm_softc->poscbit;
 		vm->vm_vcpu_ct++;
 		/* Publish vcpu to list, inheriting the reference. */
 		SLIST_INSERT_HEAD(&vm->vm_vcpu_list, vcpu, vc_vcpu_link);
@@ -750,6 +752,7 @@ vm_resetcpu(struct vm_resetcpu_params *vrp)
 #endif /* VMM_DEBUG */
 			ret = EIO;
 		}
+		vrp->vrp_asid = vcpu->vc_vpid;
 	}
 	rw_exit_write(&vcpu->vc_lock);
 out:
diff --git a/sys/dev/vmm/vmm.h b/sys/dev/vmm/vmm.h
index 47f5e12cf4f..e2a158fb9db 100644
--- a/sys/dev/vmm/vmm.h
+++ b/sys/dev/vmm/vmm.h
@@ -39,9 +39,11 @@ struct vm_create_params {
 	size_t			vcp_ncpus;
 	struct vm_mem_range	vcp_memranges[VMM_MAX_MEM_RANGES];
 	char			vcp_name[VMM_MAX_NAME_LEN];
+	int			vcp_sev;
 
         /* Output parameter from VMM_IOC_CREATE */
         uint32_t		vcp_id;
+	uint32_t		vcp_poscbit;
 };
 
 struct vm_info_result {
@@ -74,6 +76,9 @@ struct vm_resetcpu_params {
 	uint32_t		vrp_vm_id;
 	uint32_t		vrp_vcpu_id;
 	struct vcpu_reg_state	vrp_init_state;
+
+	/* Output parameter from VMM_IOC_RESETCPU */
+	uint32_t		vrp_asid;
 };
 
 struct vm_sharemem_params {
@@ -88,7 +93,7 @@ struct vm_sharemem_params {
 #define VMM_IOC_RUN _IOWR('V', 2, struct vm_run_params) /* Run VCPU */
 #define VMM_IOC_INFO _IOWR('V', 3, struct vm_info_params) /* Get VM Info */
 #define VMM_IOC_TERM _IOW('V', 4, struct vm_terminate_params) /* Terminate VM */
-#define VMM_IOC_RESETCPU _IOW('V', 5, struct vm_resetcpu_params) /* Reset */
+#define VMM_IOC_RESETCPU _IOWR('V', 5, struct vm_resetcpu_params) /* Reset */
 #define VMM_IOC_READREGS _IOWR('V', 7, struct vm_rwregs_params) /* Get regs */
 #define VMM_IOC_WRITEREGS _IOW('V', 8, struct vm_rwregs_params) /* Set regs */
 /* Get VM params */
@@ -173,6 +178,8 @@ struct vmm_softc {
 
 	int			mode;		/* [I] */
 
+	int			poscbit;	/* [I] */
+
 	size_t			vcpu_ct;	/* [v] */
 	size_t			vcpu_max;	/* [I] */
 
diff --git a/sys/kern/kern_pledge.c b/sys/kern/kern_pledge.c
index deb254245d9..4172cf7680b 100644
--- a/sys/kern/kern_pledge.c
+++ b/sys/kern/kern_pledge.c
@@ -76,6 +76,7 @@
 #if NVMM > 0
 #include <machine/conf.h>
 #endif
+#include "ccp.h"
 #endif
 
 #include "drm.h"
@@ -1349,6 +1350,20 @@ pledge_ioctl(struct proc *p, long com, struct file *fp)
 	}
 #endif
 
+#if NCCP > 0
+#if NVMM > 0
+	if ((pledge & PLEDGE_VMM)) {
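+		/* Allow a process pledged "vmm" ioctl access to /dev/psp. */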
+		extern int pspopen(dev_t, int, int, struct proc *);
+
+		if ((fp->f_type == DTYPE_VNODE) &&
+		    (vp->v_type == VCHR) &&
+		    (cdevsw[major(vp->v_rdev)].d_open == pspopen)) {
+			return (0);
+		}
+	}
+#endif
+#endif
+
 	return pledge_fail(p, error, PLEDGE_TTY);
 }
 
diff --git a/sys/sys/mman.h b/sys/sys/mman.h
index c36687f5d5e..6caf9476570 100644
--- a/sys/sys/mman.h
+++ b/sys/sys/mman.h
@@ -43,6 +43,8 @@
 #define	PROT_READ	0x01	/* pages can be read */
 #define	PROT_WRITE	0x02	/* pages can be written */
 #define	PROT_EXEC	0x04	/* pages can be executed */
+#define	PROT_CRYPT	0x08	/* pages are encrypted */
+#define	PROT_NOCRYPT	0x10	/* pages are not encrypted */
 
 /*
  * Flags contain sharing type and options.
diff --git a/usr.sbin/vmd/Makefile b/usr.sbin/vmd/Makefile
index 3fbb9d086b1..683cd817ed8 100644
--- a/usr.sbin/vmd/Makefile
+++ b/usr.sbin/vmd/Makefile
@@ -8,6 +8,7 @@ SRCS+=		vm.c loadfile_elf.c pci.c virtio.c i8259.c mc146818.c
 SRCS+=		ns8250.c i8253.c dhcp.c packet.c mmio.c
 SRCS+=		parse.y atomicio.c vioscsi.c vioraw.c vioqcow2.c fw_cfg.c
 SRCS+=		vm_agentx.c vioblk.c vionet.c
+SRCS+=		psp.c sme.c
 
 CFLAGS+=	-Wall -I${.CURDIR}
 CFLAGS+=	-Wstrict-prototypes -Wmissing-prototypes
diff --git a/usr.sbin/vmd/loadfile_elf.c b/usr.sbin/vmd/loadfile_elf.c
index 864344c88e0..2e545932685 100644
--- a/usr.sbin/vmd/loadfile_elf.c
+++ b/usr.sbin/vmd/loadfile_elf.c
@@ -129,6 +129,8 @@ static void mbcopy(void *, paddr_t, int);
 extern char *__progname;
 extern int vm_id;
 
+uint64_t pg_crypt = 0;
+
 /*
  * setsegment
  *
@@ -239,7 +241,8 @@ push_pt_64(void)
 	/* First 1GB (in 2MB pages) */
 	memset(ptes, 0, sizeof(ptes));
 	for (i = 0 ; i < 512; i++) {
-		ptes[i] = PG_V | PG_RW | PG_u | PG_PS | ((2048 * 1024) * i);
+		ptes[i] = pg_crypt | PG_V | PG_RW | PG_u | PG_PS |
+		    ((2048 * 1024) * i);
 	}
 	write_mem(PML2_PAGE, ptes, PAGE_SIZE);
 }
@@ -299,8 +302,14 @@ loadfile_elf(gzFile fp, struct vmd_vm *vm, struct vcpu_reg_state *vrs,
 		vrs->vrs_crs[VCPU_REGS_CR4] = CR4_PSE;
 		vrs->vrs_msrs[VCPU_REGS_EFER] = 0ULL;
 	}
-	else
+	else {
+		if (vcp->vcp_sev && vcp->vcp_poscbit > 0) {
+			pg_crypt = (uint64_t)1 << vcp->vcp_poscbit;
+			log_info("%s: poscbit %d pg_crypt 0x%016llx", __func__,
+			    vcp->vcp_poscbit, pg_crypt);
+		}
 		push_pt_64();
+	}
 
 	if (bootdevice == VMBOOTDEV_NET) {
 		bootmac = &bm;
diff --git a/usr.sbin/vmd/parse.y b/usr.sbin/vmd/parse.y
index 2ee98897290..d0ca8489d09 100644
--- a/usr.sbin/vmd/parse.y
+++ b/usr.sbin/vmd/parse.y
@@ -126,7 +126,7 @@ typedef struct {
 %token	FORMAT GROUP
 %token	INET6 INSTANCE INTERFACE LLADDR LOCAL LOCKED MEMORY NET NIFS OWNER
 %token	PATH PREFIX RDOMAIN SIZE SOCKET SWITCH UP VM VMID STAGGERED START
-%token  PARALLEL DELAY
+%token  PARALLEL DELAY SEV
 %token	<v.number>	NUMBER
 %token	<v.string>	STRING
 %type	<v.lladdr>	lladdr
@@ -140,6 +140,7 @@ typedef struct {
 %type	<v.string>	optstring
 %type	<v.string>	string
 %type	<v.string>	vm_instance
+%type	<v.number>	sev
 
 %%
 
@@ -414,6 +415,9 @@ vm_opts_l	: vm_opts_l vm_opts nl
 vm_opts		: disable			{
 			vmc_disable = $1;
 		}
+		| sev				{
+			vcp->vcp_sev = 1;
+		}
 		| DISK string image_format	{
 			if (parse_disk($2, $3) != 0) {
 				yyerror("failed to parse disks: %s", $2);
@@ -757,6 +761,9 @@ disable		: ENABLE			{ $$ = 0; }
 		| DISABLE			{ $$ = 1; }
 		;
 
+sev		: SEV				{ $$ = 1; }
+		;
+
 bootdevice	: CDROM				{ $$ = VMBOOTDEV_CDROM; }
 		| DISK				{ $$ = VMBOOTDEV_DISK; }
 		| NET				{ $$ = VMBOOTDEV_NET; }
@@ -841,6 +848,7 @@ lookup(char *s)
 		{ "path",		PATH },
 		{ "prefix",		PREFIX },
 		{ "rdomain",		RDOMAIN },
+		{ "sev",		SEV },
 		{ "size",		SIZE },
 		{ "socket",		SOCKET },
 		{ "staggered",		STAGGERED },
diff --git a/usr.sbin/vmd/psp.c b/usr.sbin/vmd/psp.c
new file mode 100644
index 00000000000..9b18c3aafe1
--- /dev/null
+++ b/usr.sbin/vmd/psp.c
@@ -0,0 +1,222 @@
+/*	$OpenBSD: $	*/
+
+/*
+ * Copyright (c) 2023, 2024 Hans-Joerg Hoexer <hshoexer@genua.de>
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <sys/types.h>
+#include <sys/device.h>
+#include <sys/ioctl.h>
+
+#include <machine/bus.h>
+#include <dev/ic/ccpvar.h>
+
+#include <string.h>
+
+#include "vmd.h"
+
+extern struct vmd	*env;
+
+int
+psp_get_pstate(uint16_t *state)
+{
+	struct psp_platform_status pst;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_GET_PSTATUS, &pst) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	if (state)
+		*state = pst.state;
+
+	return (0);
+}
+
+int
+psp_df_flush(void)
+{
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_DF_FLUSH) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+psp_get_gstate(uint32_t handle, uint32_t *policy, uint32_t *asid,
+    uint8_t *state)
+{
+	struct psp_guest_status gst;
+
+	memset(&gst, 0, sizeof(gst));
+	gst.handle = handle;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_GET_GSTATUS, &gst) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	if (policy)
+		*policy = gst.policy;
+	if (asid)
+		*asid = gst.asid;
+	if (state)
+		*state = gst.state;
+
+	return (0);
+}
+
+int
+psp_launch_start(uint32_t *handle)
+{
+	struct psp_launch_start pls;
+
+	memset(&pls, 0, sizeof(pls));
+
+	/* XXX NODBG | NOKS | !ES | NOSEND | DOMAIN | SEV */
+	pls.policy = 0x3b;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_LAUNCH_START, &pls) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	if (handle)
+		*handle = pls.handle;
+
+	return (0);
+}
+
+int
+psp_launch_update(uint32_t handle, vaddr_t v, size_t len)
+{
+	struct psp_launch_update_data plud;
+
+	memset(&plud, 0, sizeof(plud));
+	plud.handle = handle;
+	plud.paddr = v;	/* will be converted to paddr */
+	plud.length = len;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_LAUNCH_UPDATE_DATA, &plud) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+psp_launch_measure(uint32_t handle)
+{
+	struct psp_launch_measure lm;
+	char *p, buf[256];
+	size_t len;
+	unsigned int i;
+
+	memset(&lm, 0, sizeof(lm));
+	lm.handle = handle;
+	lm.measure_len = sizeof(lm.psp_measure);
+	memset(lm.measure, 0xa5, sizeof(lm.measure));
+	memset(lm.measure_nonce, 0x5a, sizeof(lm.measure_nonce));
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_LAUNCH_MEASURE, &lm) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
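+	/* Hex-encode measurement and nonce for the debug log. */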
+	len = sizeof(buf);
+	memset(buf, 0, len);
+	p = buf;
+	for (i = 0; i < sizeof(lm.measure) && len >= 2;
+	    i++, p += 2, len -= 2) {
+		snprintf(p, len, "%02x", lm.measure[i]);
+	}
+	log_debug("%s: measure\t0x%s", __func__, buf);
+
+	len = sizeof(buf);
+	memset(buf, 0, len);
+	p = buf;
+	for (i = 0; i < sizeof(lm.measure_nonce) && len >= 2;
+	    i++, p += 2, len -= 2) {
+		snprintf(p, len, "%02x", lm.measure_nonce[i]);
+	}
+	log_debug("%s: nonce\t0x%s", __func__, buf);
+
+	return (0);
+}
+
+
+int
+psp_launch_finish(uint32_t handle)
+{
+	struct psp_launch_finish lf;
+
+	lf.handle = handle;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_LAUNCH_FINISH, &lf) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+psp_activate(uint32_t handle, uint32_t asid)
+{
+	struct psp_activate act;
+
+	act.handle = handle;
+	act.asid = asid;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_ACTIVATE, &act) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+psp_deactivate(uint32_t handle)
+{
+	struct psp_deactivate deact;
+
+	deact.handle = handle;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_DEACTIVATE, &deact) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+psp_decommission(uint32_t handle)
+{
+	struct psp_decommission decom;
+
+	decom.handle = handle;
+
+	if (ioctl(env->vmd_psp_fd, PSP_IOC_DECOMMISSION, &decom) < 0) {
+		log_warn("%s: ioctl", __func__);
+		return (-1);
+	}
+
+	return (0);
+}
diff --git a/usr.sbin/vmd/sme.c b/usr.sbin/vmd/sme.c
new file mode 100644
index 00000000000..80a1a221a2e
--- /dev/null
+++ b/usr.sbin/vmd/sme.c
@@ -0,0 +1,202 @@
+/*	$OpenBSD: $	*/
+
+/*
+ * Copyright (c) 2023, 2024 Hans-Joerg Hoexer <hshoexer@genua.de>
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <sys/types.h>
+#include <sys/device.h>
+
+#include <machine/bus.h>
+#include <dev/ic/ccpvar.h>
+
+#include <string.h>
+
+#include "vmd.h"
+
+int
+sme_init(struct vmd_vm *vm)
+{
+	struct vmop_create_params *vmc = &vm->vm_params;
+	struct vm_create_params *vcp = &vmc->vmc_params;
+	uint32_t	handle, asid;
+	uint16_t	pstate;
+	uint8_t		gstate;
+
+	if (!vcp->vcp_sev)
+		return(0);
+
+	if (psp_get_pstate(&pstate)) {
+		log_warnx("%s: failed to get platform state", __func__);
+		return (-1);
+	}
+	if (pstate == PSP_PSTATE_UNINIT) {
+		log_warnx("%s: platform uninitialized", __func__);
+		return (-1);
+	}
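+	/* LAUNCH_START creates a guest context and returns its handle. */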
+	if (psp_launch_start(&handle) < 0) {
+		log_warnx("%s: launch failed", __func__);
+		return (-1);
+	}
+	vm->vm_sme_handle = handle;
+
+	if (psp_get_gstate(vm->vm_sme_handle, NULL, &asid, &gstate)) {
+		log_warnx("%s: failed to get guest state", __func__);
+		/* XXX leak */
+		return (-1);
+	}
+	if (gstate != PSP_GSTATE_LUPDATE) {
+		log_warnx("%s: invalid guest state: 0x%hx", __func__, gstate);
+		/* XXX leak */
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+sme_encrypt_memory(struct vmd_vm *vm)
+{
+	struct vmop_create_params *vmc = &vm->vm_params;
+	struct vm_create_params *vcp = &vmc->vmc_params;
+	struct vm_mem_range *vmr;
+	size_t		i;
+	vaddr_t		v;
+	uint64_t	value;
+	uint8_t		gstate;
+
+	if (!vcp->vcp_sev)
+		return (0);
+
+#define PAGE_SIZE 4096
+	for (i = 0; i < vcp->vcp_nmemranges; i++) {
+		vmr = &vcp->vcp_memranges[i];
+
+		if (vmr->vmr_type != VM_MEM_RAM)
+			continue;
+
+		value = 0;
+		/* XXX assumes size is multiple of PAGE_SIZE */
+		for (v = vmr->vmr_va; v < vmr->vmr_va + vmr->vmr_size;
+		    v += PAGE_SIZE) {
+
+			/* fault in page */
+			value += *(uint64_t *)v;
+			/* tell PSP to encrypt this page */
+			if (psp_launch_update(vm->vm_sme_handle, v,
+			    PAGE_SIZE)) {
+				log_warnx("%s: failed to launch update page "
+				    "%zu:0x%lx", __func__, i, v);
+				return (-1);
+			}
+		}
+		log_debug("%s: encrypted %zu:0x%lx size %zu (0x%llx)",
+		    __func__, i, vmr->vmr_va, vmr->vmr_size, value);
+	}
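+	/* Measure and finish the launch; the guest moves to RUNNING. */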
+	if (psp_launch_measure(vm->vm_sme_handle)) {
+		log_warnx("%s: failed to launch measure", __func__);
+		return (-1);
+	}
+	if (psp_launch_finish(vm->vm_sme_handle)) {
+		log_warnx("%s: failed to launch finish", __func__);
+		return (-1);
+	}
+
+	if (psp_get_gstate(vm->vm_sme_handle, NULL, NULL, &gstate)) {
+		log_warnx("%s: failed to get guest state", __func__);
+		/* XXX leak */
+		return (-1);
+	}
+	if (gstate != PSP_GSTATE_RUNNING) {
+		log_warnx("%s: invalid guest state: 0x%hx", __func__, gstate);
+		/* XXX leak */
+		return (-1);
+	}
+
+	return (0);
+}
+
+int
+sme_activate(struct vmd_vm *vm)
+{
+	struct vmop_create_params *vmc = &vm->vm_params;
+	struct vm_create_params *vcp = &vmc->vmc_params;
+	uint32_t	asid;
+	uint8_t		gstate;
+
+	if (!vcp->vcp_sev)
+		return (0);
+
+	if (psp_df_flush() ||
+	    psp_activate(vm->vm_sme_handle, vm->vm_sme_asid)) {
+		log_warnx("%s: failed to activate guest: 0x%x:0x%x", __func__,
+		    vm->vm_sme_handle, vm->vm_sme_asid);
+		/* XXX leak */
+		return (-1);
+	}
+
+	if (psp_get_gstate(vm->vm_sme_handle, NULL, &asid, &gstate)) {
+		log_warnx("%s: failed to get guest state", __func__);
+		/* XXX leak */
+		return (-1);
+	}
+	log_debug("%s: handle 0x%x asid 0x%x gstate 0x%hhx", __func__,
+	    vm->vm_sme_handle, asid, gstate);
+
+	return (0);
+}
+
+int
+sme_shutdown(struct vmd_vm *vm)
+{
+	struct vmop_create_params *vmc = &vm->vm_params;
+	struct vm_create_params *vcp = &vmc->vmc_params;
+	uint32_t	asid;
+	uint8_t		gstate;
+
+	log_info("%s: vcp_sev %d", __func__, vcp->vcp_sev);
+	if (!vcp->vcp_sev)
+		return (0);
+
+	vcp->vcp_sev = 0;	/* XXX */
+
+	if (psp_get_gstate(vm->vm_sme_handle, NULL, &asid, &gstate)) {
+		log_warnx("%s: failed to get guest state", __func__);
+		/* XXX leak */
+		return (-1);
+	}
+	log_debug("%s: handle 0x%x asid 0x%x/0x%x gstate 0x%hhx", __func__,
+	    vm->vm_sme_handle, vm->vm_sme_asid, asid, gstate);
+
+	if (asid != vm->vm_sme_asid) {
+		log_warnx("%s: handle/asid mismatch: 0x%x 0x%x:0x%x", __func__,
+		    vm->vm_sme_handle, vm->vm_sme_asid, asid);
+		/* XXX leak */
+		return (-1);
+	}
+
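+	/* Release the guest's ASID and handle on the PSP. */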
+	if (psp_deactivate(vm->vm_sme_handle) ||
+	    psp_df_flush() ||
+	    psp_decommission(vm->vm_sme_handle)) {
+		log_warnx("%s: failed to deactivate guest: 0x%x:0x%x",
+		    __func__, vm->vm_sme_handle, vm->vm_sme_asid);
+		/* XXX leak */
+		return (-1);
+	}
+	vm->vm_sme_handle = -1;
+	vm->vm_sme_asid = -1;
+
+	return (0);
+}
diff --git a/usr.sbin/vmd/vm.c b/usr.sbin/vmd/vm.c
index 86d57693474..21f961d97ca 100644
--- a/usr.sbin/vmd/vm.c
+++ b/usr.sbin/vmd/vm.c
@@ -405,6 +405,13 @@ start_vm(struct vmd_vm *vm, int fd)
 		return (ret);
 	}
 
+	/* Setup SME. */
+	ret = sme_init(vm);
+	if (ret) {
+		log_warn("could not initialize SME");
+		return (ret);
+	}
+
 	/*
 	 * Some of vmd currently relies on global state (current_vm, con_fd).
 	 */
@@ -511,6 +518,11 @@ start_vm(struct vmd_vm *vm, int fd)
 	 */
 	ret = run_vm(&vm->vm_params, &vrs);
 
+	/* Shutdown SME. */
+	log_info("%s: calling sme_shutdown", __func__);
+	if (sme_shutdown(vm))
+		log_warn("%s: could not shutdown SME", __func__);
+
 	/* Ensure that any in-flight data is written back */
 	virtio_shutdown(vm);
 
@@ -649,6 +661,9 @@ vm_shutdown(unsigned int cmd)
 	}
 	imsg_flush(&current_vm->vm_iev.ibuf);
 
+	log_info("%s: calling sme_shutdown", __func__);
+	sme_shutdown(current_vm);
+
 	_exit(0);
 }
 
@@ -980,6 +995,7 @@ vcpu_reset(uint32_t vmid, uint32_t vcpu_id, struct vcpu_reg_state *vrs)
 
 	if (ioctl(env->vmd_fd, VMM_IOC_RESETCPU, &vrp) == -1)
 		return (errno);
+	current_vm->vm_sme_asid = vrp.vrp_asid;
 
 	return (0);
 }
@@ -1128,6 +1144,9 @@ alloc_guest_mem(struct vmd_vm *vm)
 			return (ret);
 		}
 		vmr->vmr_va = (vaddr_t)p;
+
+		/* XXX fault in */
+		memset((void *)vmr->vmr_va, 0, vmr->vmr_size);
 	}
 
 	return (ret);
@@ -1385,6 +1404,18 @@ run_vm(struct vmop_create_params *vmc, struct vcpu_reg_state *vrs)
 			return (EIO);
 		}
 
+		if (sme_activate(current_vm)) {
+			log_warnx("%s: SME activation failed for VCPU "
+			    "%zu - exiting.", __progname, i);
+			return (EIO);
+		}
+
+		if (sme_encrypt_memory(current_vm)) {
+			log_warnx("%s: memory encryption failed for VCPU "
+			    "%zu - exiting.", __progname, i);
+			return (EIO);
+		}
+
 		/* once more because reset_cpu changes regs */
 		if (current_vm->vm_state & VM_STATE_RECEIVED) {
 			vregsp.vrwp_vm_id = vcp->vcp_id;
@@ -1911,6 +1942,8 @@ vcpu_exit(struct vm_run_params *vrp)
 		break;
 	case VMX_EXIT_TRIPLE_FAULT:
 	case SVM_VMEXIT_SHUTDOWN:
+		log_debug("%s: vrp_exit_reason 0x%hx", __func__,
+		    vrp->vrp_exit_reason);
 		/* reset VM */
 		return (EAGAIN);
 	default:
@@ -2143,7 +2176,11 @@ vcpu_assert_pic_irq(uint32_t vm_id, uint32_t vcpu_id, int irq)
 
 	if (i8259_is_pending()) {
 		if (vcpu_pic_intr(vm_id, vcpu_id, 1))
+#if 1			/* XXX hshoexer */
+			log_warnx("%s: can't assert INTR", __func__);
+#else
 			fatalx("%s: can't assert INTR", __func__);
+#endif
 		mutex_lock(&vcpu_run_mtx[vcpu_id]);
 		vcpu_hlt[vcpu_id] = 0;
 		ret = pthread_cond_signal(&vcpu_run_cond[vcpu_id]);
diff --git a/usr.sbin/vmd/vm.conf.5 b/usr.sbin/vmd/vm.conf.5
index ed6cd41df64..e07ba35103b 100644
--- a/usr.sbin/vmd/vm.conf.5
+++ b/usr.sbin/vmd/vm.conf.5
@@ -323,6 +323,8 @@ If only
 .Pf : Ar group
 is given,
 only the group is set.
+.It Ic sev
+Enable SEV for the guest.
 .El
 .Sh VM INSTANCES
 It is possible to use configured or running VMs as a template for
diff --git a/usr.sbin/vmd/vmd.c b/usr.sbin/vmd/vmd.c
index 887e1cc9bf8..f849876f302 100644
--- a/usr.sbin/vmd/vmd.c
+++ b/usr.sbin/vmd/vmd.c
@@ -791,7 +791,7 @@ main(int argc, char **argv)
 	int			 ch;
 	enum privsep_procid	 proc_id = PROC_PARENT;
 	int			 proc_instance = 0, vm_launch = 0;
-	int			 vmm_fd = -1, vm_fd = -1;
+	int			 vmm_fd = -1, vm_fd = -1, psp_fd = -1;
 	const char		*errp, *title = NULL;
 	int			 argc0 = argc;
 	char			 dev_type = '\0';
@@ -803,7 +803,7 @@ main(int argc, char **argv)
 	env->vmd_fd = -1;
 	env->vmd_fd6 = -1;
 
-	while ((ch = getopt(argc, argv, "D:P:I:V:X:df:i:nt:vp:")) != -1) {
+	while ((ch = getopt(argc, argv, "D:P:I:V:X:df:i:j:nt:vp:")) != -1) {
 		switch (ch) {
 		case 'D':
 			if (cmdline_symset(optarg) < 0)
@@ -865,6 +865,12 @@ main(int argc, char **argv)
 			if (errp)
 				fatalx("invalid vmm fd");
 			break;
+		case 'j':
+			psp_fd = strtonum(optarg, 0, 128, &errp);
+			if (errp)
+				fatalx("invalid psp fd");
+			log_debug("%s: psp_fd %d", __func__, psp_fd);
+			break;
 		default:
 			usage();
 		}
@@ -893,6 +899,8 @@ main(int argc, char **argv)
 
 	ps = &env->vmd_ps;
 	ps->ps_env = env;
+	env->vmd_psp_fd = psp_fd;
+	env->vmd_sme = 1;
 
 	if (config_init(env) == -1)
 		fatal("failed to initialize configuration");
@@ -970,6 +978,12 @@ main(int argc, char **argv)
 	if (!env->vmd_noaction)
 		proc_connect(ps);
 
+	if (env->vmd_noaction == 0 && env->vmd_sme && proc_id == PROC_PARENT) {
+		env->vmd_psp_fd = open(PSP_NODE, O_RDWR);
+		if (env->vmd_psp_fd == -1)
+			fatal("%s", PSP_NODE);
+	}
+
 	if (vmd_configure() == -1)
 		fatalx("configuration failed");
 
@@ -1050,6 +1064,12 @@ vmd_configure(void)
 	proc_compose_imsg(&env->vmd_ps, PROC_VMM, -1,
 	    IMSG_VMDOP_RECEIVE_VMM_FD, -1, env->vmd_fd, NULL, 0);
 
+	/* Send PSP device fd to vmm proc. */
+	if (env->vmd_psp_fd != -1) {
+		proc_compose_imsg(&env->vmd_ps, PROC_VMM, -1,
+		    IMSG_VMDOP_RECEIVE_PSP_FD, -1, env->vmd_psp_fd, NULL, 0);
+	}
+
 	/* Send shared global configuration to all children */
 	if (config_setconfig(env) == -1)
 		return (-1);
diff --git a/usr.sbin/vmd/vmd.h b/usr.sbin/vmd/vmd.h
index b6294548327..a38aa634f49 100644
--- a/usr.sbin/vmd/vmd.h
+++ b/usr.sbin/vmd/vmd.h
@@ -49,6 +49,7 @@
 #define VMD_CONF		"/etc/vm.conf"
 #define SOCKET_NAME		"/var/run/vmd.sock"
 #define VMM_NODE		"/dev/vmm"
+#define PSP_NODE		"/dev/psp"
 #define VM_DEFAULT_BIOS		"/etc/firmware/vmm-bios"
 #define VM_DEFAULT_KERNEL	"/bsd"
 #define VM_DEFAULT_DEVICE	"hd0a"
@@ -150,6 +151,7 @@ enum imsg_type {
 	IMSG_DEVOP_HOSTMAC,
 	IMSG_DEVOP_MSG,
 	IMSG_DEVOP_VIONET_MSG,
+	IMSG_VMDOP_RECEIVE_PSP_FD,
 };
 
 struct vmop_result {
@@ -304,6 +306,8 @@ struct vmd_vm {
 	struct vmop_create_params vm_params;
 	pid_t			 vm_pid;
 	uint32_t		 vm_vmid;
+	uint32_t		 vm_sme_handle;
+	uint32_t		 vm_sme_asid;	/* XXX actually per VCPU */
 
 	int			 vm_kernel;
 	char			*vm_kernel_path; /* Used by vm.conf. */
@@ -387,6 +391,7 @@ struct vmd {
 	int			 vmd_debug;
 	int			 vmd_verbose;
 	int			 vmd_noaction;
+	int			 vmd_sme;
 
 	uint32_t		 vmd_nvm;
 	struct vmlist		*vmd_vms;
@@ -397,6 +402,7 @@ struct vmd {
 	int			 vmd_fd;
 	int			 vmd_fd6;
 	int			 vmd_ptmfd;
+	int			 vmd_psp_fd;
 };
 
 struct vm_dev_pipe {
@@ -542,4 +548,22 @@ __dead void vionet_main(int, int);
 /* vioblk.c */
 __dead void vioblk_main(int, int);
 
+/* psp.c */
+int	 psp_get_pstate(uint16_t *);
+int	 psp_df_flush(void);
+int	 psp_get_gstate(uint32_t, uint32_t *, uint32_t *, uint8_t *);
+int	 psp_launch_start(uint32_t *);
+int	 psp_launch_update(uint32_t, vaddr_t, size_t);
+int	 psp_launch_measure(uint32_t);
+int	 psp_launch_finish(uint32_t);
+int	 psp_activate(uint32_t, uint32_t);
+int	 psp_deactivate(uint32_t);
+int	 psp_decommission(uint32_t);
+
+/* sme.c */
+int	sme_init(struct vmd_vm *);
+int	sme_encrypt_memory(struct vmd_vm *);
+int	sme_activate(struct vmd_vm *);
+int	sme_shutdown(struct vmd_vm *);
+
 #endif /* VMD_H */
diff --git a/usr.sbin/vmd/vmm.c b/usr.sbin/vmd/vmm.c
index dcd9a91fe4f..c47da25de0f 100644
--- a/usr.sbin/vmd/vmm.c
+++ b/usr.sbin/vmd/vmm.c
@@ -329,6 +329,11 @@ vmm_dispatch_parent(int fd, struct privsep_proc *p, struct imsg *imsg)
 		/* Get and terminate all running VMs */
 		get_info_vm(ps, NULL, 1);
 		break;
+	case IMSG_VMDOP_RECEIVE_PSP_FD:
+		if (env->vmd_psp_fd > -1)
+			fatalx("already received psp fd");
+		env->vmd_psp_fd = imsg->fd;
+		break;
 	default:
 		return (-1);
 	}
@@ -649,7 +654,7 @@ vmm_start_vm(struct imsg *imsg, uint32_t *id, pid_t *pid)
 {
 	struct vm_create_params	*vcp;
 	struct vmd_vm		*vm;
-	char			*nargv[8], num[32], vmm_fd[32];
+	char			*nargv[10], num[32], vmm_fd[32], psp_fd[32];
 	int			 fd, ret = EINVAL;
 	int			 fds[2];
 	pid_t			 vm_pid;
@@ -764,6 +769,9 @@ vmm_start_vm(struct imsg *imsg, uint32_t *id, pid_t *pid)
 				close(fd);
 		}
 
+		if (env->vmd_psp_fd > 0)
+			fcntl(env->vmd_psp_fd, F_SETFD, 0); /* psp device fd */
+
 		/*
 		 * Prepare our new argv for execvp(2) with the fd of our open
 		 * pipe to the parent/vmm process as an argument.
@@ -773,6 +781,8 @@ vmm_start_vm(struct imsg *imsg, uint32_t *id, pid_t *pid)
 		snprintf(num, sizeof(num), "%d", fds[1]);
 		memset(vmm_fd, 0, sizeof(vmm_fd));
 		snprintf(vmm_fd, sizeof(vmm_fd), "%d", env->vmd_fd);
+		memset(psp_fd, 0, sizeof(psp_fd));
+		snprintf(psp_fd, sizeof(psp_fd), "%d", env->vmd_psp_fd);
 
 		nargv[0] = env->argv0;
 		nargv[1] = "-V";
@@ -780,14 +790,16 @@ vmm_start_vm(struct imsg *imsg, uint32_t *id, pid_t *pid)
 		nargv[3] = "-n";
 		nargv[4] = "-i";
 		nargv[5] = vmm_fd;
-		nargv[6] = NULL;
+		nargv[6] = "-j";
+		nargv[7] = psp_fd;
+		nargv[8] = NULL;
 
 		if (env->vmd_verbose == 1) {
-			nargv[6] = VMD_VERBOSE_1;
-			nargv[7] = NULL;
+			nargv[8] = VMD_VERBOSE_1;
+			nargv[9] = NULL;
 		} else if (env->vmd_verbose > 1) {
-			nargv[6] = VMD_VERBOSE_2;
-			nargv[7] = NULL;
+			nargv[8] = VMD_VERBOSE_2;
+			nargv[9] = NULL;
 		}
 
 		/* Control resumes in vmd main(). */