From: enh <enh@google.com>
Subject: Re: AI-Driven Security Enhancements for OpenBSD Kernel
To: Alfredo Ortega <ortegaalfredo@gmail.com>
Cc: tech@openbsd.org
Date: Tue, 11 Jun 2024 09:19:06 -0400

On Tue, Jun 11, 2024 at 9:07 AM Alfredo Ortega <ortegaalfredo@gmail.com> wrote:
>
> The AI tries to follow the style of the existing checks in the code,
> but I can easily tell it to panic in case of a security failure.
> And I do not plan to submit this particular batch of checks, and they
> will become obsolete in about a month when the next gen AIs are made
> public.
> Most of the checks in this refactor are being done with GPT-4, which
> is not even the best current coding AI. And the mechanism of patching
> is crude at best. Yet, it works.
> I may be wrong, but I believe by this time next year the AI will be so
> good that I doubt I will even need human reviewers.

in that case, i know a guy who can get you in on a great deal to buy a
bridge --- https://en.wikipedia.org/wiki/George_C._Parker

> On Tue, Jun 11, 2024 at 9:54, Stuart Henderson
> (<stu@spacehopper.org>) wrote:
> >
> > On 2024/06/11 09:28, Alfredo Ortega wrote:
> > > I added 10000+ checks so far, in about 4 or 5 hours. The final
> > > count will likely be close to a million.
> > > It's true that many are useless, perhaps up to 50% of them.  Most
> > > stack protections put into place by the compiler are also useless.
> > > But the question is, how many are not useless? and how many checks
> > > humans missed, but the AI correctly put in place?
> >
> > Seems that many of the checks are adding return/continue when things
> > don't match conditions which aren't handled in the code. But who is to
> > say that's a safe thing to do in any given case? It might often be
> > better to let the kernel crash so the problems are more obvious.
> >
>