
From: Mark Kettenis <mark.kettenis@xs4all.nl>
Subject: Re: virtio_pci change for SEV
To: Stefan Fritsch <sf@openbsd.org>
Cc: tech@openbsd.org
Date: Thu, 31 Oct 2024 15:22:19 +0100

> Date: Tue, 29 Oct 2024 09:47:46 +0100 (CET)
> From: Stefan Fritsch <sf@openbsd.org>
> 
> On Mon, 28 Oct 2024, Mark Kettenis wrote:
> > Sorry.  But the comments still make no sense to me.  The different
> > bus_dma(9) implementations do wildly different things on different
> > architectures.
> 
> The VIRTIO_F_ACCESS_PLATFORM bit means wildly different things on 
> different architectures, too. I totally agree that better-defined 
> wording in the standard would have been nice.
> 
> > 
> > Maybe instead of making a statement about our bus_dma(9)
> > implementation here that I'm not sure is true, just write something
> > like:
> > 
> > /* XXX Without this SEV doesn't work with a KVM/qemu hypervisor on amd64. */
> > 
> > for the VIRTIO_F_ACCESS_PLATFORM bit?
> > 
> > And can we defer the VIRTIO_F_ORDER_PLATFORM bit for now?  I think
> > that should be done in a separate diff and mixing the two in a single
> > discussion isn't helping in understanding what's going on.
> 
> OK, done that now.

Thanks.  I still haven't managed to wrap my head around what
VIRTIO_F_ACCESS_PLATFORM really means.  But it does seem that setting
it (which really means accepting it when the device offers it) is
fairly harmless.
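
To make that concrete, here is a rough sketch of what accepting the
bit during feature negotiation could look like.  The helpers
virtio_host_features() and virtio_ack_features() are made up for the
example and don't match the actual virtio(4) API, and I'm assuming
VIRTIO_F_ACCESS_PLATFORM here is a 64-bit mask (bit 33 in the spec):

    /*
     * Sketch only: accept VIRTIO_F_ACCESS_PLATFORM if and only if
     * the device offers it.  Hypothetical helper names.
     */
    uint64_t host = virtio_host_features(sc);
    uint64_t guest = DRIVER_FEATURES;

    /* XXX Without this SEV doesn't work with a KVM/qemu
     * hypervisor on amd64. */
    if (host & VIRTIO_F_ACCESS_PLATFORM)
        guest |= VIRTIO_F_ACCESS_PLATFORM;

    virtio_ack_features(sc, guest & host);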

Meanwhile I looked at the Linux kernel code and I think I have a
better understanding of VIRTIO_F_ORDER_PLATFORM.  If we set this bit
(which again really means accepting it when the device offers it), we
have to use the "strong" memory barriers that we would normally use
for real hardware devices.

Currently our virtio drivers use the "weak" memory barriers that just
guarantee coherency between multiple CPUs on an SMP system.  That is
good enough for "software" devices, which really is all we currently
support.  So I think we're fine with dropping VIRTIO_F_ORDER_PLATFORM
support for now.
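
From memory, the Linux side boils down to something like this
(paraphrased, not verbatim):

    /*
     * Paraphrased from Linux's virtio_ring barrier helpers;
     * weak_barriers ends up false when VIRTIO_F_ORDER_PLATFORM
     * is negotiated.
     */
    static inline void virtio_mb(bool weak_barriers)
    {
        if (weak_barriers)
            virt_mb();  /* SMP-only barrier; enough for software devices */
        else
            mb();       /* full barrier; what real hardware needs */
    }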

Actually, now that we've added the bus_dmamap_sync(9) calls to support
bounce buffers, we may already be issuing the stronger barriers on
some platforms.  For example, on arm64 bus_dmamap_sync(9) will call
membar_sync(), which issues a stronger barrier, although I'm not
entirely confident it issues the right one...
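
For what it's worth, the ordering in the ring-publish path looks
roughly like this; names and offsets are approximate and the actual
virtio(4) code differs in detail:

    /* Make the descriptor contents visible first. */
    vq->vq_desc[slot] = desc;
    virtio_membar_producer();       /* "weak": SMP ordering only */

    /* Then publish the slot and bump the avail index. */
    vq->vq_avail->ring[idx % vq->vq_num] = slot;
    virtio_membar_producer();
    vq->vq_avail->idx = idx + 1;

    /*
     * With bounce buffers, a bus_dmamap_sync(9) call sits in this
     * path as well; on arm64 that ends up in membar_sync(), which
     * is stronger than the producer barrier above.
     */
    bus_dmamap_sync(vsc->sc_dmat, vq->vq_dmamap, 0, vq->vq_bytesize,
        BUS_DMASYNC_PREWRITE);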