
From: Solène Rapenne <solene@perso.pw>
Subject: Re: AI-Driven Security Enhancements for OpenBSD Kernel
To: tech@openbsd.org
Date: Wed, 12 Jun 2024 15:23:15 +0200

On 12/06/2024 at 13:59, Stuart Henderson wrote:
> On 2024/06/12 13:37, Otto Moerbeek wrote:
>> On Wed, Jun 12, 2024 at 04:28:05AM -0300, Alfredo Ortega wrote:
>>
>>> The 10000 patches number is just for the IPv4/IPv6 stack. I also don't
>>> think you should review or integrate them, because in a couple of months
>>> when more advanced LLMs are made available I can regenerate all the
>>> patches in less than a morning with much better quality, and again
>>> every time a new LLM is released.
>>>
>>> That's why I think of the patches as a post-processing step, i.e. you
>>> keep the regular process of development, and I or other people can
>>> refactor and release secure versions of the kernel/userland.
>>
>> You have *not* demonstrated that your patches will produce a more secure
>> version of the code. That's just a big assumption you made with zero
>> evidence.
> 
> And even if you do, that will need re-doing "when more advanced LLMs are
> made available" and the patches are updated "in less than a morning".
> 
>>> It's great that you want to keep the development process human, but my
>>> opinion is that if you have AI adversaries (like we have now), you
>>> need AI protections.
>>
>> Again, you assume AI will provide protection.
> 
> The threat model needs to include "deliberately poisoning LLM source
> material so that it introduces vulnerabilities". (And of course it
> would be super easy to sneak something looking like some of the
> changes involved in the xz backdoor amongst a bunch of
> machine-generated modifications...)
> 
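
To make the xz comparison concrete, here is a hypothetical sketch (not
taken from any real tree; parse_option, PKT_MAX and buf are made-up names)
of the kind of one-line "type cleanup" that could hide among thousands of
machine-generated diffs:

#include <string.h>

#define PKT_MAX 512

/*
 * Hypothetical example: a generated patch changes the type of "len"
 * from size_t to int, presented as a harmless cleanup.  A negative
 * len now slips past the bounds check, and memcpy() converts it to
 * a huge size_t, overflowing buf.
 */
int
parse_option(const char *pkt, int len)	/* was: size_t len */
{
	char buf[PKT_MAX];

	if (len > PKT_MAX)		/* -1 > 512 is false, check passes */
		return (-1);
	memcpy(buf, pkt, len);		/* len converts to SIZE_MAX here */
	return (0);
}

Nothing in such a diff looks suspicious on its own; it's the combination
of the type change and the unchanged comparison that becomes exploitable,
and nobody will spot it in a 10000-patch batch.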

As it's so fast, you could easily fork OpenBSD to produce an AI-generated
derivative with all vulnerabilities and bugs fixed; extra points if it adds
new useful features (like a new filesystem) without breaking compatibility.