
@kalash-nexthop kalash-nexthop commented Nov 15, 2025

What I did

In the port channel state reactor code (onMsg --> addLag), when a port channel's operstate changes, trigger the code that keeps the member interfaces' status up to date in APPL_DB (onChange), and set each member interface's state to disabled there if the port channel is oper-down. This prevents member ports from continuing to forward traffic while the port channel is not up.

This fixes: sonic-net/sonic-buildimage#2066
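The flow described above can be sketched roughly as follows. This is a minimal, illustrative model only: the stand-in types (`ApplDbLagMemberTable`, `Lag`, `onLagOperStateChange`) are hypothetical, not the real swss-common / teamsyncd API.

```cpp
#include <map>
#include <string>
#include <vector>

// Stand-in for the APPL_DB LAG member table:
// key "<lag>:<member>", field "status" -> "enabled" / "disabled".
struct ApplDbLagMemberTable {
    std::map<std::string, std::string> status;
};

struct Lag {
    std::string name;
    bool operUp;
    std::vector<std::string> members;
};

// Invoked from the state reactor (onMsg -> addLag in the real code)
// whenever the port channel's operstate changes: re-sync every member's
// status, disabling members while the LAG is oper-down.
void onLagOperStateChange(const Lag &lag, ApplDbLagMemberTable &table) {
    const std::string status = lag.operUp ? "enabled" : "disabled";
    for (const auto &member : lag.members)
        table.status[lag.name + ":" + member] = status;
}
```

When the LAG comes back oper-up, the same path re-enables the members, matching the "enabled again" behavior shown in the verification below.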

Why I did it

In our testing we found that when min_link is set to 2 and only one member interface in a port channel is UP, teamd correctly puts the port channel into the oper-down state. However, the remaining UP interface stays in the selected state and keeps forwarding traffic. The fix addresses this by deselecting all remaining UP interfaces in a port channel when it goes down.
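The min_link behavior above can be modeled as follows. This is a sketch under stated assumptions, not teamd's actual code: each member is a (name, linkUp) pair, and the LAG is oper-up only when at least `min_links` members are up; with the fix, an oper-down LAG leaves no member forwarding.

```cpp
#include <string>
#include <utility>
#include <vector>

// Returns the members that actually forward traffic, given the member
// link states and the configured min_links threshold (illustrative model).
std::vector<std::string> forwardingMembers(
        const std::vector<std::pair<std::string, bool>> &members,
        int minLinks) {
    std::vector<std::string> up;
    for (const auto &m : members)
        if (m.second)
            up.push_back(m.first);
    if (static_cast<int>(up.size()) < minLinks)
        return {};  // LAG oper-down: no member may keep forwarding
    return up;      // LAG oper-up: the up members forward
}
```

Before the fix, the single up member in the oper-down case would have kept forwarding; the empty result models the corrected behavior.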

How I verified it

Before my fix, when the min_link==2 requirement wasn't met:

root@gold226:~# show inter port
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
  No.  Team Dev      Protocol     Ports
-----  ------------  -----------  --------------------------------------------
    1  PortChannel1  LACP(A)(Dw)  Ethernet312(D) Ethernet352(D) Ethernet360(S)

As shown above, Ethernet360 is still in the selected state and thus continues to forward L2 traffic broadcast into the VLAN domain that Po1 is part of, despite Po1 being in the Dw state.

With my fix, when the min_link==2 requirement wasn't met:

root@gold226:/home/admin# show inter port
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
  No.  Team Dev      Protocol     Ports
-----  ------------  -----------  ---------------------------------------------
    1  PortChannel1  LACP(A)(Dw)  Ethernet312(D) Ethernet352(D) Ethernet360(S*)

As shown above, Ethernet360 is now in the not-synced state because we disable the member port in the LAG table in APPL_DB. The interface no longer forwards traffic, since it is effectively "disabled" in hardware.

And when the min_link==2 requirement is met again, it gets "enabled":

root@gold226:/home/admin# show inter port
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
  No.  Team Dev      Protocol     Ports
-----  ------------  -----------  --------------------------------------------
    1  PortChannel1  LACP(A)(Up)  Ethernet312(S) Ethernet352(D) Ethernet360(S)

Details if related

@mssonicbld (Collaborator)

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@kalash-nexthop changed the title from "[teamsyncd]: Whenever a portchannel's operstate goes sown, disble all member ports that are still up otherwise" to "[teamsyncd]: Whenever a portchannel's operstate goes sown, disable all member ports that are still up otherwise" on Nov 15, 2025


Development

Successfully merging this pull request may close these issues.

The member ports of portchannel are still in selected state and can still forward traffic when the portchannel is configured to down
