
From: Hrvoje Popovski <hrvoje@srce.hr>
Subject: Re: vmx(4): TCP Large Receive Offload
To: Jan Klemkow <j.klemkow@wemelug.de>
Cc: tech@openbsd.org
Date: Fri, 31 May 2024 23:36:04 +0200


On 31.5.2024. 21:07, Jan Klemkow wrote:
> On Wed, May 29, 2024 at 12:34:24AM +0200, jan@openbsd.org wrote:
>> On Thu, May 23, 2024 at 12:10:57PM GMT, Hrvoje Popovski wrote:
>>> On 22.5.2024. 22:47, jan@openbsd.org wrote:
>>>> This diff introduces TCP Large Receive Offload (LRO) for vmx(4).
>>>>
>>>> The virtual device annotates LRO packets with receive descriptors of
>>>> type 4.  We need this additional information to calculate a valid MSS
>>>> for these packets, so we are able to route them.  But we only get
>>>> type 4 descriptors if we pretend to support vmxnet3 revision 2.
>>>>
>>>> I tested it on ESXi 8 with Linux guests and external hosts.  It
>>>> increases single TCP stream performance to up to 20 Gbit/s in my setup.
>>>>
>>>> Tests are welcome, especially with different ESXi versions and IP
>>>> forwarding.
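
For anyone following the descriptor discussion above: as I read it, the
type 4 completion descriptor carries the extra data needed to derive an
MSS for the coalesced packet, and the driver records that on the mbuf so
the stack can re-segment the packet when it gets forwarded.  A minimal
sketch of the idea, not the actual diff; rxcd_is_lro and vmx_lro_mss()
are placeholder names, while ph_mss and M_TCP_TSO are the existing mbuf
field/flag used for software re-segmentation:

	/*
	 * Sketch only: when the rx completion descriptor marks the
	 * packet as LRO (type 4), derive an MSS from the descriptor
	 * data and note it on the mbuf so the stack can chop the
	 * packet again on the forwarding path.
	 */
	if (rxcd_is_lro) {				/* placeholder test */
		m->m_pkthdr.ph_mss = vmx_lro_mss(rxcd);	/* placeholder helper */
		SET(m->m_pkthdr.csum_flags, M_TCP_TSO);
	}
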
>>> I'm running this diff with forwarding and an iperf setup at the same
>>> time, and the box seems fine.  The VM is on ESXi 8.
>> Thanks for testing!
>>
>> Here is an updated version of the diff with suggestions from bluhm.
>>
>>  - We negotiate LRO only on vmxnet3 rev 2 devices
>>  - Remove the mbuf for loop
>>  - Track last mbuf via lastmp variable
>>  - Remove some useless lines
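
On the lastmp bullet above, my understanding of the technique (variable
names are illustrative): keep a pointer to the tail of the packet being
assembled, so each new rx segment is appended in constant time instead
of re-walking the mbuf chain in a for loop:

	if (sendmp == NULL) {
		sendmp = m;		/* first segment starts the packet */
	} else {
		lastmp->m_next = m;	/* append at the tail, no loop */
		sendmp->m_pkthdr.len += m->m_len;
	}
	lastmp = m;			/* remember the new tail */
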
> Hrvoje found some panics in my last diff.  I just forgot to set
> send/lastmp to NULL after m_freem().
> 
> Fixed diff below.
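
And the fix described above, as I read it: once the partially assembled
packet is freed, both pointers have to be cleared, otherwise the next
descriptor appends to (or re-frees) an mbuf chain that is already gone:

	m_freem(sendmp);
	sendmp = NULL;		/* forget the freed chain ... */
	lastmp = NULL;		/* ... and its stale tail pointer */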


With this diff I can't trigger the panic I saw before.

Thank you ...