After running tcpdump.py on RHEL 8.2 with Avocado (version 80.0), it hangs. #1863
Comments
Hi,
From the different screenshots attached, this is my explanation. I would recommend retrying the tests with nping_count and seeing whether the issue is recreated.
I am also seeing similar issues; this started after the recent enhancement to tcpdump. The test never completed: it hung after around 80 of 333 scenarios, at this point:
[36611.768878] bnx2x 0010:01:00.0 enP16p1s0f0: NIC Link is Up, 1000 Mbps full duplex, Flow control: none
Activate the web console with: systemctl enable --now cockpit.socket
Dear @vaishnavibhat,
After editing the yaml file and running tcpdump.py again on RHEL 8.2 with Avocado (version 82.0), it no longer hangs. However, some items show "ERROR: Could not get operational link state".
【Test Step】
Step 2. Edit the yaml file: /root/tests/tests/avocado-misc-tests/io/net/tcpdump.py.data/tcpdump.yaml
Step 3. Run tcpdump.py via command: avocado run tcpdump.py -m tcpdump.py.data/tcpdump.yaml
【Test log】: job log & Manual-Test-log
【Configuration】
[FW config] [HW config]
《SUT8》
[FW config] [HW config]
【Note】
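For reference, the operational link state of an interface is commonly read from sysfs, so an error like the one above would surface when that read fails or never reports "up". A minimal sketch of such a check (the helper name is illustrative, not the actual tcpdump.py code; the interface name is taken from the bnx2x log line quoted earlier in the thread):

```python
def get_operstate(interface):
    """Read the operational link state from sysfs.

    Typical values are 'up', 'down' or 'unknown'; the read fails if the
    interface itself has disappeared.
    """
    with open("/sys/class/net/%s/operstate" % interface) as state_file:
        return state_file.read().strip()


# Example, using the interface name from the log excerpt above.
print(get_operstate("enP16p1s0f0"))
```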
Hi, this mostly looks like a timing issue during the MTU change. When an MTU change occurs, a link down followed by a link up happens. Currently there is a 30 s timeout for this process, and here it looks like it is taking longer than that. I have posted a patch to increase the timeout; I am addressing the review comments and waiting for it to be accepted.
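The timing issue can be pictured as follows: changing the MTU bounces the link, and the test then polls until the link comes back up or the timeout expires. A minimal sketch, assuming a sysfs-based poll (the function name is illustrative; the real logic lives in the avocado-misc-tests network utilities, not here):

```python
import subprocess
import time


def set_mtu_and_wait(interface, mtu, timeout=30):
    """Set the MTU, then wait for the link to come back up.

    An MTU change triggers a link down / link up cycle. If the adapter
    takes longer than `timeout` seconds to renegotiate, the caller sees
    a failure even though the link would eventually recover.
    """
    subprocess.check_call(
        ["ip", "link", "set", "dev", interface, "mtu", str(mtu)])
    deadline = time.time() + timeout
    while time.time() < deadline:
        with open("/sys/class/net/%s/operstate" % interface) as state_file:
            if state_file.read().strip() == "up":
                return True
        time.sleep(1)
    raise RuntimeError("link on %s not up within %ss" % (interface, timeout))
```

A slow adapter simply needs a larger `timeout` here, which is exactly what the proposed patch and the yaml parameter below provide.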
Tag [email protected]
Use the yaml parameter mtu_timeout to extend the wait in cases where we see a timing issue with the adapter. Currently there is no driver-specific wait time; each driver is designed differently and behaves differently.
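For context, Avocado tests read yaml-supplied values through the params API, so overriding mtu_timeout needs no code change. A minimal sketch of how such a parameter could be consumed (the default of 30 matches the timeout discussed above; the exact key path used by tcpdump.py may differ):

```python
from avocado import Test


class MtuTimeoutDemo(Test):
    """Illustrates consuming a tunable timeout from the yaml parameters."""

    def test(self):
        # Falls back to 30 seconds when the yaml file sets no value.
        mtu_timeout = int(self.params.get("mtu_timeout", default=30))
        self.log.info("Waiting up to %s seconds for link after MTU change",
                      mtu_timeout)
```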
Dears,
1. Set mtu_timeout to 10 【job.log & Manual-Test-log】
2. Set mtu_timeout to 10000 【job.log & Manual-Test-log】
3. Set mtu_timeout to 10000000 【job.log & Manual-Test-log】
4. Set mtu_timeout to 1000000000000000 【job.log】
5. Set mtu_timeout to 10000000000000000000000000000000000000000 【job.log】
6. Set mtu_timeout to 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 【job.log】
From the above results, mtu_timeout should not be the root cause of the issue. Please kindly check. Many thanks!!
In addition, in the past, when we used the same network card (Broadcom 5719 QP 1G (1G/100M/10M) Network Interface Card, PCIe x4 LP) to run tcpdump.py on RHEL 8.1 with Avocado (version 75.1), the results were all PASS. 【job.log】【tcpdump.yaml】 Please kindly check. Many thanks!!
After running tcpdump.py on RHEL 8.2 with Avocado (version 80.0), it hangs. Take the results below as examples:
【Example 1】
※Network Card: Marvell_QUAD E'NET (2X1 + 2X10 10Gb), PCIe Gen 2 X8/SHORT LP CAPABLE (SHINER SFP+ SR COPPER)
※SOL log:
※job-log:
20200816-Network9-tcpdump-job.log
※Configuration:
20200816-Network9-Configuration.zip
【Example 2】
※Network Card: Marvell_2-PORT E'NET (2X10 10Gb), PCIe Gen 2 X8/SHORT LP CAPABLE (SHINER 10GBase-T)
※SOL log:
※job-log:
20200817-Network7-tcpdump-job.log
※Configuration:
20200817-Network7-Configuration.zip
【Example 3】
※Network Card: Mellanox_2-PORT 25/10Gb NIC&ROCE SR/Cu PCIe 3.0 (25/10Gb EVERGLADES EN)
※SOL log:
※job-log:
20200818-Network6-tcdump-job.log
※Configuration:
20200818-Network6-Configuration.zip
In the past, we also used tcpdump.py to test these network cards on RHEL 7.6 and RHEL 8.1, and all results were PASS. Please kindly check whether there is a script error in the latest version of tcpdump.py. Many thanks!!