From: Mike Larkin <mlarkin@nested.page>
Subject: Re: cpu_xcall glue for arm64
To: David Gwynne <david@gwynne.id.au>
Cc: Theo de Raadt <deraadt@openbsd.org>, Mark Kettenis <mark.kettenis@xs4all.nl>, tech@openbsd.org
Date: Wed, 23 Jul 2025 00:34:11 -0700

On Wed, Jul 23, 2025 at 05:22:15PM +1000, David Gwynne wrote:
> On Tue, Jul 22, 2025 at 02:36:12PM -0600, Theo de Raadt wrote:
> > Mark Kettenis <mark.kettenis@xs4all.nl> wrote:
> >
> > > > From: "Theo de Raadt" <deraadt@openbsd.org>
> > > > Date: Tue, 22 Jul 2025 10:19:51 -0600
> > > >
> > > > Mark Kettenis <mark.kettenis@xs4all.nl> wrote:
> > > >
> > > > > > That is my thought also.  If this is impossible to use without setting
> > > > > > an option, then no one will use it.  If no one is using it, then why have
> > > > > > the code at all?  I think we want it, because we know we need it (soon).
> > > > >
> > > > > Actually xcall isn't an option; it is an attribute.  Drivers that need
> > > > > the functionality ask for it by adding it as a dependency.  It is no
> > > > > different from framebuffer drivers depending on rasops, for example.
> > > >
> > > > > So the question really is whether we intend to use this functionality
> > > > > in generic code or not.  For now all the places where we intend to use
> > > > > this are MD drivers.
> > > >
> > > > That is surprising.
> > > >
> > > > I was pretty sure the uses would either be in MD code only, or in
> > > > MI /sys/kern, /sys/net*, and /sys/uvm.
> > > >
> > > > I would be very surprised to see it used in MI drivers.
> > >
> > > I said *MD* drivers, not MI drivers.
> >
> > Sigh, we are talking past each other.
> >
> > I believe it will be used in MI drivers and MI kernel.
> >
> > David, do you really intend for this to be only used in MD drivers?
>
> i've been kicking versions of this around for a few years, and the
> only thing i've really, truly, hand on heart needed it for is reading
> special registers on individual CPUs, which is very MD.
>
> the MI stuff we could do would be improving the implementation of
> things like intr_barrier. there may be some benefit to quickly pushing
> work to other parts of a system for numa or device topology reasons,
> but i don't think we have enough other stuff ready for that.
>
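for anyone who hasn't seen the diff, the special register case david
describes ends up looking something like the sketch below. this is
illustrative only: i'm assuming the task-based cpu_xcall(ci, &task)
interface from the diff, and read_midr_xc()/cpu_read_midr() plus the
spin-on-done glue are names I made up for this mail.

/*
 * sketch: read MIDR_EL1 on a specific CPU. cpu_xcall() is assumed
 * from the diff under discussion; everything else here is
 * illustrative glue.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/atomic.h>
#include <sys/task.h>

#include <machine/cpu.h>

struct midr_xc {
	uint64_t	mx_val;
	volatile int	mx_done;
};

void
read_midr_xc(void *arg)
{
	struct midr_xc *mx = arg;

	/* this part runs on the target CPU */
	__asm volatile("mrs %0, midr_el1" : "=r" (mx->mx_val));
	membar_producer();
	mx->mx_done = 1;
}

uint64_t
cpu_read_midr(struct cpu_info *ci)
{
	struct midr_xc mx = { 0, 0 };
	struct task t = TASK_INITIALIZER(read_midr_xc, &mx);

	cpu_xcall(ci, &t);
	while (!mx.mx_done)
		CPU_BUSY_CYCLE();
	membar_consumer();

	return (mx.mx_val);
}

a synchronous wrapper around that same pattern is also what would let
intr_barrier actually wait out the CPU an interrupt is pinned to
instead of going through the scheduler.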

as dv@ pointed out previously, vmm(4) could definitely make use of this for
remote EPT invalidation. the first way I did it sucked (but worked); dave
made it better, but TBH without crosscall it's all sorta kludgey.
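to make that concrete: when vmm tears down or changes EPT mappings,
every CPU that may have those translations cached needs an invept
executed locally, and a crosscall turns that into pushing a small task
at each CPU. rough sketch below, again assuming the cpu_xcall() API
from the diff; vmx_invept_xc() and skipping CPUs that never ran the
guest are stand-ins for the real vmm(4) plumbing.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/task.h>

#include <machine/cpu.h>

/* stand-in for a helper that executes invept on the local CPU */
void	vmx_invept_xc(void *);

void
vmm_ept_invalidate(void *eptp)
{
	static struct task xc_tasks[MAXCPUS];
	struct cpu_info *ci;
	CPU_INFO_ITERATOR cii;

	CPU_INFO_FOREACH(cii, ci) {
		struct task *t = &xc_tasks[CPU_INFO_UNIT(ci)];

		/* could skip CPUs that never ran this guest */
		task_set(t, vmx_invept_xc, eptp);
		cpu_xcall(ci, t);
	}

	/*
	 * sketch only: the real thing needs completion tracking so
	 * we don't return while remote CPUs still hold stale
	 * translations, and the static tasks aren't safe for
	 * concurrent callers.
	 */
}

that's the shape of it, and it's a lot less kludgey than bouncing
threads around to get the same effect.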

so +1 from the vmm guys on this.