AI-Driven Security Enhancements for OpenBSD Kernel
As LLM outputs are probabilistic, each iteration needs some kind of test to check the correctness of the generated code. In this case, the only test I have is "compiles and boots correctly", but code with extensive test suites will benefit the most, and will likely avoid those kinds of mistakes.

On Tue, 11 Jun 2024 at 15:37, Kirill A. Korinsky (<kirill@korins.ky>) wrote:
>
> On Tue, 11 Jun 2024 16:52:02 +0100,
> Alfredo Ortega <ortegaalfredo@gmail.com> wrote:
> >
> > Another thing that you can deduce is that a system that writes
> > patches can also find vulnerabilities.
> > I already reported some and don't have time to report them all. But I
> > imagine I'm not the only one working on these systems.
> >
>
> Here you are the operator who verifies the output of these tools.
>
> This can be a nice and useful tool, like valgrind or a static analyzer.
> But it won't be a silver bullet.
>
> OK, it might be the silver bullet that writes the code, but then you have
> to accept the extremely bad quality of that code at the end.
>
> Sometimes people follow a tool's suggestions without thinking about what
> they are doing, and that can lead to disaster.
>
> A good example of blind trust in a tool leading to a kind of disaster is
> Debian, where OpenSSL was "fixed" in response to a valgrind warning.
> Here is an article about that story:
> https://blogs.fsfe.org/tonnerre/archives/24
>
>
> --
> wbr, Kirill
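For what it's worth, the generate-and-test loop described at the top of this message can be sketched roughly as follows. This is only an illustration: generate() and validate() are hypothetical placeholders, where in the actual workflow generate() would ask the LLM for a patch and validate() would apply it, rebuild the kernel, and boot-test the result in a VM.

```python
def iterate_until_valid(generate, validate, max_attempts=5):
    """Retry probabilistic generation until the output passes its test.

    generate: callable returning a candidate (e.g. a patch) -- hypothetical.
    validate: callable returning True if the candidate passes the test
              (here: "compiles and boots correctly") -- hypothetical.
    Returns (candidate, attempts) on success, (None, max_attempts) otherwise.
    """
    for attempt in range(1, max_attempts + 1):
        candidate = generate()
        if validate(candidate):
            return candidate, attempt
    return None, max_attempts
```

The point of the sketch is that the loop is only as good as validate(): with "compiles and boots" as the sole check, bad-but-bootable code passes; with an extensive test suite, far more mistakes are caught before a human ever reviews the patch.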