From: Jan Klemkow <jan@openbsd.org>
Subject: Re: igc0 going offline w/default perfpolicy
To: Greg Schaefer <gsgs7878@proton.me>
Cc: "tech@openbsd.org" <tech@openbsd.org>
Date: Fri, 23 Jan 2026 11:23:10 +0100

On Mon, Jan 19, 2026 at 08:41:39PM +0000, Greg Schaefer wrote:
> Have an odd situation with an Intel 125H-based NUC with I226-LM+I226-V.
> Typically 30s-60m post-boot with a light load, igc0/i226-LM stops
> receiving packets while igc1/i226-V operates fine. igc0 can still send
> but its rx-err/rx-crc counters tick up. After down/up link, the issue
> reoccurs after another 30s-60m period.
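
By "down/up link" I assume the usual ifconfig toggle, i.e. something like:

# ifconfig igc0 down
# ifconfig igc0 up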

Could you provide some debugging information?

# dmesg
# kstat igc0:::
# netstat -s
# netstat -i

What is your scenario?  Do you use this interface on a desktop with client
traffic, on a server providing services like HTTP, or as a router doing
forwarding?

Thanks,
Jan

> Initially thought it was one of the numerous i225/i226 issues, but
> these are V4 chips with later V2.22 firmware. Disabling C-states >C1
> (whether via the BIOS or hacking acpicpu_x86.c) avoids the issue at the
> cost of high power consumption. Other purported i226 workarounds like
> disabling ASPM had no effect.
> 
> The problem does not occur with basic Linux or FreeBSD test installs.
> 
> While experimenting, found that changing hw.perfpolicy from high to
> auto and back to high fixed the issue. Feels like the BIOS or some 
> other embedded power management firmware is triggering power savings
> because it thinks OpenBSD cannot. Active frequency management via
> MSR_PERF_CTL / hw.perfpolicy=auto (even for a short time) seems to 
> disable the mystery power saver.
> 
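If I understand correctly, the toggle that clears the condition is just the
following (a minimal sketch of what you describe, run as root):

# sysctl hw.perfpolicy=auto   # let setperf_auto() take over for a moment
# sysctl hw.perfpolicy=high   # then pin the frequency again
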
> Modifying amd64/est.c/est_init() and amd64/cpu.c/cpu_hatch() to each
> call est_setperf(perflevel=99) (setting an initial frequency for each
> core as they come online) combined with kern/sched_bsd.c/setperf_auto()
> firing (and refiring...) 200ms later also seems to do the trick.
> 
> Am certain there is a more precise explanation and would welcome any
> thoughts. Thanks.