bgpd: split up processing of updates
On Sun, Dec 28, 2025 at 05:34:23PM +0100, Claudio Jeker wrote:
> All done.
>
> New diff below.
Thanks, I like this better. I think there's one logic error introduced
by the conversion; other than that, this is ok.
> @@ -290,8 +302,66 @@ rde_generate_updates(struct rib_entry *r
> {
> struct rde_peer *peer;
>
> - RB_FOREACH(peer, peer_tree, &peertable)
> - peer_generate_update(peer, re, newpath, old_pathid_tx, mode);
> + switch (mode) {
> + case EVAL_RECONF:
> + /* skip peers which don't need to reconfigure */
> + RB_FOREACH(peer, peer_tree, &peertable) {
> + if (peer->reconf_out == 0)
> + continue;
> + peer_generate_update(peer, re, NULL, 0, EVAL_RECONF);
> + }
> + return;
> + case EVAL_DEFAULT:
> + break;
> + case EVAL_ALL:
> + /*
> + * EVAL_DEFAULT is triggered when a new best path is selected.
> + * EVAL_ALL is sent for any other update (needed for peers with
> + * addpath or evaluate all set).
> + * There can be only one EVAL_DEFAULT queued, it replaces the
> + * previous one. A flag is enough.
> + * A path can only exist once in the queue (old or new).
> + */
> + if (re->pq_mode == EVAL_DEFAULT)
> + /* already a best path update pending, nothing to do */
> + return;
> +
> + break;
> + case EVAL_NONE:
> + fatalx("bad eval mode in %s", __func__);
> + }
> +
> + if (re->pq_mode == EVAL_NONE) {
Should this not be != EVAL_NONE? That would match the condition you had
before: only an entry that is already queued (pq_mode set) needs to be
unlinked before it is requeued. My understanding is that entries with
EVAL_NONE should never be present in the queue, so with == EVAL_NONE this
would call TAILQ_REMOVE on an entry that was never inserted, and queued
entries would get inserted a second time.
> + peer = peer_get(re->pq_peer_id);
> + TAILQ_REMOVE(&peer->rib_pq_head, re, rib_queue);
> + }
> + if (newpath != NULL)
> + peer = prefix_peer(newpath);
> + else
> + peer = peerself;
> + re->pq_mode = mode;
> + re->pq_peer_id = peer->conf.id;
> + TAILQ_INSERT_TAIL(&peer->rib_pq_head, re, rib_queue);
> +}
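
For reference, with the inverted check the tail of the function would
read like this (an untested sketch on top of your diff, using the same
identifiers as above):

	/*
	 * If the entry is already on a peer's queue (pq_mode was set by
	 * a previous call), unlink it from that queue first; otherwise
	 * the TAILQ_INSERT_TAIL below would corrupt the list.
	 */
	if (re->pq_mode != EVAL_NONE) {
		peer = peer_get(re->pq_peer_id);
		TAILQ_REMOVE(&peer->rib_pq_head, re, rib_queue);
	}
	if (newpath != NULL)
		peer = prefix_peer(newpath);
	else
		peer = peerself;
	re->pq_mode = mode;
	re->pq_peer_id = peer->conf.id;
	TAILQ_INSERT_TAIL(&peer->rib_pq_head, re, rib_queue);
}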