
Conversation

ben-grande
Contributor

@ben-grande ben-grande commented Aug 22, 2025

@codecov

codecov bot commented Aug 22, 2025

Codecov Report

❌ Patch coverage is 61.64384% with 28 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.32%. Comparing base (fce8bad) to head (fe20264).
⚠️ Report is 3 commits behind head on main.

Files with missing lines   Patch %   Lines
qubes/vm/mix/net.py        67.30%    17 Missing ⚠️
qubes/vm/dispvm.py         33.33%     4 Missing ⚠️
qubes/vm/qubesvm.py        20.00%     4 Missing ⚠️
qubes/app.py                0.00%     3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #722      +/-   ##
==========================================
- Coverage   70.40%   70.32%   -0.09%     
==========================================
  Files          61       61              
  Lines       13682    13739      +57     
==========================================
+ Hits         9633     9662      +29     
- Misses       4049     4077      +28     
Flag        Coverage Δ
unittests   70.32% <61.64%> (-0.09%) ⬇️

Flags with carried forward coverage won't be shown.

@ben-grande ben-grande force-pushed the preload-netvm branch 8 times, most recently from e3c1b74 to 9d232cd Compare August 25, 2025 15:18
@ben-grande ben-grande marked this pull request as ready for review August 25, 2025 15:22
@qubesos-bot

qubesos-bot commented Aug 26, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025102219-4.3&flavor=pull-requests

Test run included the following:

New failures, excluding unstable

Compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025081011-4.3&flavor=update

  • system_tests_gui_tools

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...
  • system_tests_guivm_vnc_gui_interactive

    • gui_filecopy: unnamed test (unknown)
    • gui_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'files-test-file' matc...
  • system_tests_qwt_win10@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win10_seamless@hw13

    • windows_clipboard_and_filecopy: unnamed test (unknown)
    • windows_clipboard_and_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'windows-Edge-address-...
  • system_tests_qwt_win11@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/iDVvW-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_gui_tools@hw7

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...

Failed tests

12 failures
  • system_tests_gui_tools

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...
  • system_tests_extra

    • TC_00_QVCTest_whonix-workstation-17: test_010_screenshare (failure)
      AssertionError: 1 != 0 : Timeout waiting for /dev/video0 in test-in...
  • system_tests_guivm_vnc_gui_interactive

    • gui_filecopy: unnamed test (unknown)
    • gui_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'files-test-file' matc...
  • system_tests_qwt_win10@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win10_seamless@hw13

    • windows_clipboard_and_filecopy: unnamed test (unknown)
    • windows_clipboard_and_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'windows-Edge-address-...
  • system_tests_qwt_win11@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/iDVvW-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_gui_tools@hw7

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...

Fixed failures

Compared to: https://openqa.qubes-os.org/tests/149225#dependencies

84 fixed
  • system_tests_kde_gui_interactive

    • gui_keyboard_layout: wait_serial (wait serial expected)
      # wait_serial expected: "echo -e '[Layout]\nLayoutList=us,de' | sud...

    • gui_keyboard_layout: Failed (test died)
      # Test died: command 'test "$(cd ~user;ls e1*)" = "$(qvm-run -p wor...

  • system_tests_audio

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.audio failed (exit code 1), details reporte...

    • system_tests: Failed (test died)
      # Test died: Some tests failed at qubesos/tests/system_tests.pm lin...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_223_audio_play_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_224_audio_rec_muted_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_225_audio_rec_unmuted_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_252_audio_playback_audiovm_switch_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

  • system_tests_dispvm_perf@hw7

  • system_tests_guivm_gpu_gui_interactive@hw13

    • guivm_startup: wait_serial (wait serial expected)
      # wait_serial expected: qr/lEcbc-\d+-/...

    • guivm_startup: Failed (test died + timed out)
      # Test died: command '! qvm-check sys-whonix || time qvm-start sys-...

  • system_tests_basic_vm_qrexec_gui_ext4

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.vm_qrexec_gui failed (exit code 1), details...

    • system_tests: Failed (test died)
      # Test died: Some tests failed at qubesos/tests/system_tests.pm lin...

    • TC_20_NonAudio_whonix-gateway-17-pool: test_012_qubes_desktop_run (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError

  • system_tests_audio@hw1

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.audio failed (exit code 1), details reporte...

    • system_tests: Failed (test died)
      # Test died: Some tests failed at qubesos/tests/system_tests.pm lin...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_223_audio_play_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 60 secon...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_224_audio_rec_muted_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 60 secon...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_252_audio_playback_audiovm_switch_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 60 secon...

  • system_tests_dispvm

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.dispvm failed (exit code 1), details report...

    • TC_20_DispVM_debian-13-xfce: test_012_preload_low_mem (failure)
      ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^... AssertionError: 1 != 0

    • TC_20_DispVM_debian-13-xfce: test_013_preload_gui (error)
      raise KeyError(key)... KeyError: 'disp3723'

    • TC_20_DispVM_debian-13-xfce: test_014_preload_nogui (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError

    • TC_20_DispVM_debian-13-xfce: test_015_preload_race_more (error + cleanup)
      raise KeyError(key)... KeyError: 'disp1187'

    • TC_20_DispVM_debian-13-xfce: test_016_preload_race_less (failure + cleanup)
      ^^^^^^^^^^^^^^^^^^^^^^... AssertionError

    • TC_20_DispVM_debian-13-xfce: test_017_preload_autostart (error)
      raise KeyError(key)... KeyError: 'disp7317'

    • TC_20_DispVM_debian-13-xfce: test_018_preload_global (error)
      raise KeyError(key)... KeyError: 'disp8572'

    • TC_20_DispVM_debian-13-xfce: test_019_preload_refresh (error)
      raise KeyError(key)... KeyError: 'disp6425'

    • TC_20_DispVM_fedora-42-xfce: test_012_preload_low_mem (failure)
      ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^... AssertionError: 1 != 0

    • TC_20_DispVM_whonix-workstation-17: test_012_preload_low_mem (failure)
      ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^... AssertionError: 1 != 0

Unstable tests

Performance Tests

Performance degradation:

17 performance degradations
  • fedora-42-xfce_exec-data-duplex: 74.80 🔻 ( previous job: 67.92, degradation: 110.14%)
  • whonix-workstation-17_exec: 8.59 🔻 ( previous job: 7.57, degradation: 113.52%)
  • dom0_root_seq1m_q8t1_read 3:read_bandwidth_kb: 177215.00 🔻 ( previous job: 497426.00, degradation: 35.63%)
  • dom0_root_seq1m_q8t1_write 3:write_bandwidth_kb: 188190.00 🔻 ( previous job: 265260.00, degradation: 70.95%)
  • dom0_root_seq1m_q1t1_read 3:read_bandwidth_kb: 120482.00 🔻 ( previous job: 431512.00, degradation: 27.92%)
  • dom0_root_seq1m_q1t1_write 3:write_bandwidth_kb: 74418.00 🔻 ( previous job: 196254.00, degradation: 37.92%)
  • dom0_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 5803.00 🔻 ( previous job: 23940.00, degradation: 24.24%)
  • fedora-42-xfce_root_seq1m_q8t1_write 3:write_bandwidth_kb: 99799.00 🔻 ( previous job: 140215.00, degradation: 71.18%)
  • fedora-42-xfce_root_seq1m_q1t1_write 3:write_bandwidth_kb: 37850.00 🔻 ( previous job: 47575.00, degradation: 79.56%)
  • fedora-42-xfce_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 1399.00 🔻 ( previous job: 3020.00, degradation: 46.32%)
  • fedora-42-xfce_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 558.00 🔻 ( previous job: 1368.00, degradation: 40.79%)
  • fedora-42-xfce_private_seq1m_q1t1_write 3:write_bandwidth_kb: 48450.00 🔻 ( previous job: 79539.00, degradation: 60.91%)
  • fedora-42-xfce_private_rnd4k_q32t1_write 3:write_bandwidth_kb: 1529.00 🔻 ( previous job: 3765.00, degradation: 40.61%)
  • fedora-42-xfce_private_rnd4k_q1t1_write 3:write_bandwidth_kb: 340.00 🔻 ( previous job: 1251.00, degradation: 27.18%)
  • fedora-42-xfce_volatile_seq1m_q8t1_write 3:write_bandwidth_kb: 86646.00 🔻 ( previous job: 157382.00, degradation: 55.05%)
  • fedora-42-xfce_volatile_rnd4k_q32t1_write 3:write_bandwidth_kb: 3635.00 🔻 ( previous job: 4098.00, degradation: 88.70%)
  • fedora-42-xfce_volatile_rnd4k_q1t1_write 3:write_bandwidth_kb: 1438.00 🔻 ( previous job: 2384.00, degradation: 60.32%)

Remaining performance tests:

163 tests
  • debian-13-xfce_vm-dispvm (mean:6.69): 80.28
  • debian-13-xfce_vm-dispvm-gui (mean:7.675): 92.10
  • debian-13-xfce_vm-dispvm-concurrent (mean:3.181): 38.18
  • debian-13-xfce_vm-dispvm-gui-concurrent (mean:3.958): 47.49
  • debian-13-xfce_dom0-dispvm (mean:7.074): 84.89
  • debian-13-xfce_dom0-dispvm-gui (mean:8.196): 98.35
  • debian-13-xfce_dom0-dispvm-concurrent (mean:3.225): 38.69
  • debian-13-xfce_dom0-dispvm-gui-concurrent (mean:4.077): 48.93
  • debian-13-xfce_vm-dispvm-preload (mean:2.776): 33.31
  • debian-13-xfce_vm-dispvm-preload-gui (mean:4.061): 48.73
  • debian-13-xfce_vm-dispvm-preload-concurrent (mean:2.631): 31.58
  • debian-13-xfce_vm-dispvm-preload-gui-concurrent (mean:3.44): 41.28
  • debian-13-xfce_dom0-dispvm-preload (mean:3.502): 42.02
  • debian-13-xfce_dom0-dispvm-preload-gui (mean:10.669): 128.03
  • debian-13-xfce_dom0-dispvm-preload-concurrent (mean:3.159): 37.90
  • debian-13-xfce_dom0-dispvm-preload-gui-concurrent (mean:3.804): 45.65
  • debian-13-xfce_dom0-dispvm-api (mean:7.148): 85.78
  • debian-13-xfce_dom0-dispvm-gui-api (mean:8.267): 99.20
  • debian-13-xfce_dom0-dispvm-concurrent-api (mean:3.455): 41.46
  • debian-13-xfce_dom0-dispvm-gui-concurrent-api (mean:4.106): 49.28
  • debian-13-xfce_dom0-dispvm-preload-less-less-api (mean:3.827): 45.93
  • debian-13-xfce_dom0-dispvm-preload-less-api (mean:3.882): 46.58
  • debian-13-xfce_dom0-dispvm-preload-api (mean:3.482): 41.78
  • debian-13-xfce_dom0-dispvm-preload-more-api (mean:3.415): 40.98
  • debian-13-xfce_dom0-dispvm-preload-more-more-api (mean:3.749): 44.99
  • debian-13-xfce_dom0-dispvm-preload-gui-api (mean:4.484): 53.81
  • debian-13-xfce_dom0-dispvm-preload-concurrent-api (mean:3.11): 37.33
  • debian-13-xfce_dom0-dispvm-preload-gui-concurrent-api (mean:3.873): 46.47
  • debian-13-xfce_vm-vm (mean:0.039): 0.47
  • debian-13-xfce_vm-vm-gui (mean:0.031): 0.37
  • debian-13-xfce_vm-vm-concurrent (mean:0.019): 0.23
  • debian-13-xfce_vm-vm-gui-concurrent (mean:0.02): 0.23
  • debian-13-xfce_dom0-vm-api (mean:0.042): 0.51
  • debian-13-xfce_dom0-vm-gui-api (mean:0.052): 0.63
  • debian-13-xfce_dom0-vm-concurrent-api (mean:0.024): 0.29
  • debian-13-xfce_dom0-vm-gui-concurrent-api (mean:0.029): 0.34
  • fedora-42-xfce_vm-dispvm (mean:7.203): 86.43
  • fedora-42-xfce_vm-dispvm-gui (mean:8.162): 97.94
  • fedora-42-xfce_vm-dispvm-concurrent (mean:3.589): 43.06
  • fedora-42-xfce_vm-dispvm-gui-concurrent (mean:4.287): 51.44
  • fedora-42-xfce_dom0-dispvm (mean:7.685): 92.22
  • fedora-42-xfce_dom0-dispvm-gui (mean:8.868): 106.42
  • fedora-42-xfce_dom0-dispvm-concurrent (mean:3.908): 46.90
  • fedora-42-xfce_dom0-dispvm-gui-concurrent (mean:4.365): 52.38
  • fedora-42-xfce_vm-dispvm-preload (mean:3.246): 38.95
  • fedora-42-xfce_vm-dispvm-preload-gui (mean:6.702): 80.42
  • fedora-42-xfce_vm-dispvm-preload-concurrent (mean:2.953): 35.44
  • fedora-42-xfce_vm-dispvm-preload-gui-concurrent (mean:3.821): 45.85
  • fedora-42-xfce_dom0-dispvm-preload (mean:3.88): 46.55
  • fedora-42-xfce_dom0-dispvm-preload-gui (mean:4.923): 59.07
  • fedora-42-xfce_dom0-dispvm-preload-concurrent (mean:3.477): 41.73
  • fedora-42-xfce_dom0-dispvm-preload-gui-concurrent (mean:4.074): 48.88
  • fedora-42-xfce_dom0-dispvm-api (mean:7.801): 93.61
  • fedora-42-xfce_dom0-dispvm-gui-api (mean:9.029): 108.34
  • fedora-42-xfce_dom0-dispvm-concurrent-api (mean:3.771): 45.25
  • fedora-42-xfce_dom0-dispvm-gui-concurrent-api (mean:4.464): 53.56
  • fedora-42-xfce_dom0-dispvm-preload-less-less-api (mean:4.292): 51.50
  • fedora-42-xfce_dom0-dispvm-preload-less-api (mean:4.247): 50.96
  • fedora-42-xfce_dom0-dispvm-preload-api (mean:4.119): 49.43
  • fedora-42-xfce_dom0-dispvm-preload-more-api (mean:3.896): 46.75
  • fedora-42-xfce_dom0-dispvm-preload-more-more-api (mean:4.037): 48.44
  • fedora-42-xfce_dom0-dispvm-preload-gui-api (mean:5.069): 60.83
  • fedora-42-xfce_dom0-dispvm-preload-concurrent-api (mean:3.419): 41.03
  • fedora-42-xfce_dom0-dispvm-preload-gui-concurrent-api (mean:4.36): 52.32
  • fedora-42-xfce_vm-vm (mean:0.033): 0.39
  • fedora-42-xfce_vm-vm-gui (mean:0.025): 0.30
  • fedora-42-xfce_vm-vm-concurrent (mean:0.024): 0.29
  • fedora-42-xfce_vm-vm-gui-concurrent (mean:0.02): 0.24
  • fedora-42-xfce_dom0-vm-api (mean:0.043): 0.52
  • fedora-42-xfce_dom0-vm-gui-api (mean:0.045): 0.54
  • fedora-42-xfce_dom0-vm-concurrent-api (mean:0.028): 0.34
  • fedora-42-xfce_dom0-vm-gui-concurrent-api (mean:0.03): 0.36
  • whonix-workstation-17_vm-dispvm (mean:7.797): 93.57
  • whonix-workstation-17_vm-dispvm-gui (mean:8.998): 107.97
  • whonix-workstation-17_vm-dispvm-concurrent (mean:4.547): 54.56
  • whonix-workstation-17_vm-dispvm-gui-concurrent (mean:4.902): 58.83
  • whonix-workstation-17_dom0-dispvm (mean:8.357): 100.28
  • whonix-workstation-17_dom0-dispvm-gui (mean:9.374): 112.49
  • whonix-workstation-17_dom0-dispvm-concurrent (mean:4.38): 52.56
  • whonix-workstation-17_dom0-dispvm-gui-concurrent (mean:5.32): 63.84
  • whonix-workstation-17_vm-dispvm-preload (mean:3.393): 40.72
  • whonix-workstation-17_vm-dispvm-preload-gui (mean:4.762): 57.15
  • whonix-workstation-17_vm-dispvm-preload-concurrent (mean:3.339): 40.07
  • whonix-workstation-17_vm-dispvm-preload-gui-concurrent (mean:4.216): 50.59
  • whonix-workstation-17_dom0-dispvm-preload (mean:4.447): 53.36
  • whonix-workstation-17_dom0-dispvm-preload-gui (mean:5.444): 65.33
  • whonix-workstation-17_dom0-dispvm-preload-concurrent (mean:3.746): 44.96
  • whonix-workstation-17_dom0-dispvm-preload-gui-concurrent (mean:4.483): 53.80
  • whonix-workstation-17_dom0-dispvm-api (mean:8.51): 102.12
  • whonix-workstation-17_dom0-dispvm-gui-api (mean:9.737): 116.85
  • whonix-workstation-17_dom0-dispvm-concurrent-api (mean:4.062): 48.74
  • whonix-workstation-17_dom0-dispvm-gui-concurrent-api (mean:4.597): 55.17
  • whonix-workstation-17_dom0-dispvm-preload-less-less-api (mean:4.57): 54.84
  • whonix-workstation-17_dom0-dispvm-preload-less-api (mean:5.064): 60.77
  • whonix-workstation-17_dom0-dispvm-preload-api (mean:4.264): 51.16
  • whonix-workstation-17_dom0-dispvm-preload-more-api (mean:4.408): 52.89
  • whonix-workstation-17_dom0-dispvm-preload-more-more-api (mean:4.262): 51.15
  • whonix-workstation-17_dom0-dispvm-preload-gui-api (mean:5.348): 64.17
  • whonix-workstation-17_dom0-dispvm-preload-concurrent-api (mean:3.733): 44.79
  • whonix-workstation-17_dom0-dispvm-preload-gui-concurrent-api (mean:4.5): 54.00
  • whonix-workstation-17_vm-vm (mean:0.024): 0.29
  • whonix-workstation-17_vm-vm-gui (mean:0.048): 0.58
  • whonix-workstation-17_vm-vm-concurrent (mean:0.015): 0.18
  • whonix-workstation-17_vm-vm-gui-concurrent (mean:0.03): 0.37
  • whonix-workstation-17_dom0-vm-api (mean:0.037): 0.45
  • whonix-workstation-17_dom0-vm-gui-api (mean:0.039): 0.47
  • whonix-workstation-17_dom0-vm-concurrent-api (mean:0.031): 0.37
  • whonix-workstation-17_dom0-vm-gui-concurrent-api (mean:0.025): 0.31
  • debian-13-xfce_exec: 8.04 🟢 ( previous job: 8.36, improvement: 96.18%)
  • debian-13-xfce_exec-root: 27.04 🟢 ( previous job: 27.36, improvement: 98.82%)
  • debian-13-xfce_socket: 8.08 🟢 ( previous job: 8.57, improvement: 94.21%)
  • debian-13-xfce_socket-root: 8.71 🔻 ( previous job: 8.26, degradation: 105.53%)
  • debian-13-xfce_exec-data-simplex: 67.47 🟢 ( previous job: 72.43, improvement: 93.15%)
  • debian-13-xfce_exec-data-duplex: 67.40 🟢 ( previous job: 76.65, improvement: 87.93%)
  • debian-13-xfce_exec-data-duplex-root: 80.77 🟢 ( previous job: 91.79, improvement: 88.00%)
  • debian-13-xfce_socket-data-duplex: 131.73 🟢 ( previous job: 133.45, improvement: 98.71%)
  • fedora-42-xfce_exec: 9.16 🔻 ( previous job: 9.06, degradation: 101.16%)
  • fedora-42-xfce_exec-root: 59.71 🔻 ( previous job: 58.19, degradation: 102.62%)
  • fedora-42-xfce_socket: 8.33 🟢 ( previous job: 8.48, improvement: 98.22%)
  • fedora-42-xfce_socket-root: 8.01 🟢 ( previous job: 8.18, improvement: 97.88%)
  • fedora-42-xfce_exec-data-simplex: 68.24 🟢 ( previous job: 78.48, improvement: 86.94%)
  • fedora-42-xfce_exec-data-duplex-root: 104.92 🔻 ( previous job: 96.36, degradation: 108.88%)
  • fedora-42-xfce_socket-data-duplex: 143.26 🔻 ( previous job: 142.58, degradation: 100.48%)
  • whonix-gateway-17_exec: 7.48 🟢 ( previous job: 8.12, improvement: 92.19%)
  • whonix-gateway-17_exec-root: 39.06 🟢 ( previous job: 41.05, improvement: 95.15%)
  • whonix-gateway-17_socket: 8.02 🟢 ( previous job: 8.52, improvement: 94.03%)
  • whonix-gateway-17_socket-root: 7.13 🟢 ( previous job: 8.12, improvement: 87.84%)
  • whonix-gateway-17_exec-data-simplex: 69.19 🟢 ( previous job: 83.60, improvement: 82.77%)
  • whonix-gateway-17_exec-data-duplex: 73.35 🔻 ( previous job: 68.38, degradation: 107.26%)
  • whonix-gateway-17_exec-data-duplex-root: 89.69 🟢 ( previous job: 99.37, improvement: 90.25%)
  • whonix-gateway-17_socket-data-duplex: 150.59 🟢 ( previous job: 167.12, improvement: 90.11%)
  • whonix-workstation-17_exec-root: 54.57 🟢 ( previous job: 56.76, improvement: 96.15%)
  • whonix-workstation-17_socket: 8.72 🔻 ( previous job: 8.59, degradation: 101.56%)
  • whonix-workstation-17_socket-root: 8.78 🟢 ( previous job: 8.89, improvement: 98.79%)
  • whonix-workstation-17_exec-data-simplex: 72.49 🔻 ( previous job: 66.80, degradation: 108.51%)
  • whonix-workstation-17_exec-data-duplex: 72.97 🟢 ( previous job: 74.50, improvement: 97.94%)
  • whonix-workstation-17_exec-data-duplex-root: 92.74 🟢 ( previous job: 102.34, improvement: 90.62%)
  • whonix-workstation-17_socket-data-duplex: 146.49 🟢 ( previous job: 147.97, improvement: 99.00%)
  • dom0_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 6286.00 🟢 ( previous job: 2446.00, improvement: 256.99%)
  • dom0_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 11909.00 🟢 ( previous job: 5874.00, improvement: 202.74%)
  • dom0_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 1062.00 🟢 ( previous job: 29.00, improvement: 3662.07%)
  • dom0_varlibqubes_seq1m_q8t1_read 3:read_bandwidth_kb: 284939.00 🔻 ( previous job: 292489.00, degradation: 97.42%)
  • dom0_varlibqubes_seq1m_q8t1_write 3:write_bandwidth_kb: 107727.00 🔻 ( previous job: 110817.00, degradation: 97.21%)
  • dom0_varlibqubes_seq1m_q1t1_read 3:read_bandwidth_kb: 418760.00 🟢 ( previous job: 137802.00, improvement: 303.89%)
  • dom0_varlibqubes_seq1m_q1t1_write 3:write_bandwidth_kb: 198582.00 🟢 ( previous job: 121719.00, improvement: 163.15%)
  • dom0_varlibqubes_rnd4k_q32t1_read 3:read_bandwidth_kb: 106661.00 🟢 ( previous job: 103932.00, improvement: 102.63%)
  • dom0_varlibqubes_rnd4k_q32t1_write 3:write_bandwidth_kb: 6531.00 🟢 ( previous job: 6356.00, improvement: 102.75%)
  • dom0_varlibqubes_rnd4k_q1t1_read 3:read_bandwidth_kb: 7570.00 🔻 ( previous job: 7695.00, degradation: 98.38%)
  • dom0_varlibqubes_rnd4k_q1t1_write 3:write_bandwidth_kb: 4089.00 🟢 ( previous job: 3925.00, improvement: 104.18%)
  • fedora-42-xfce_root_seq1m_q8t1_read 3:read_bandwidth_kb: 403608.00 🟢 ( previous job: 366891.00, improvement: 110.01%)
  • fedora-42-xfce_root_seq1m_q1t1_read 3:read_bandwidth_kb: 308404.00 🟢 ( previous job: 299764.00, improvement: 102.88%)
  • fedora-42-xfce_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 87506.00 🟢 ( previous job: 86001.00, improvement: 101.75%)
  • fedora-42-xfce_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 8721.00 🔻 ( previous job: 9042.00, degradation: 96.45%)
  • fedora-42-xfce_private_seq1m_q8t1_read 3:read_bandwidth_kb: 367019.00 🔻 ( previous job: 387500.00, degradation: 94.71%)
  • fedora-42-xfce_private_seq1m_q8t1_write 3:write_bandwidth_kb: 129084.00 🔻 ( previous job: 136640.00, degradation: 94.47%)
  • fedora-42-xfce_private_seq1m_q1t1_read 3:read_bandwidth_kb: 320469.00 🔻 ( previous job: 325139.00, degradation: 98.56%)
  • fedora-42-xfce_private_rnd4k_q32t1_read 3:read_bandwidth_kb: 97952.00 🟢 ( previous job: 87396.00, improvement: 112.08%)
  • fedora-42-xfce_private_rnd4k_q1t1_read 3:read_bandwidth_kb: 8383.00 🔻 ( previous job: 8992.00, degradation: 93.23%)
  • fedora-42-xfce_volatile_seq1m_q8t1_read 3:read_bandwidth_kb: 359717.00 🔻 ( previous job: 383531.00, degradation: 93.79%)
  • fedora-42-xfce_volatile_seq1m_q1t1_read 3:read_bandwidth_kb: 297721.00 🟢 ( previous job: 293225.00, improvement: 101.53%)
  • fedora-42-xfce_volatile_seq1m_q1t1_write 3:write_bandwidth_kb: 89150.00 🟢 ( previous job: 64217.00, improvement: 138.83%)
  • fedora-42-xfce_volatile_rnd4k_q32t1_read 3:read_bandwidth_kb: 88228.00 🟢 ( previous job: 87141.00, improvement: 101.25%)
  • fedora-42-xfce_volatile_rnd4k_q1t1_read 3:read_bandwidth_kb: 8967.00 🟢 ( previous job: 8804.00, improvement: 101.85%)

Comment on lines 219 to 275
self.assertEqual(self.run_cmd(self.testvm1, self.ping_ip), 0)
self.assertEqual(self.run_cmd(self.testvm1, self.ping_name), 0)
Member

Those two should fail, since you changed netvm to None, no?

Contributor Author

Yes... as to why it didn't fail, I believe there is a race between:

  • the domain-unpaused event
  • the ping

Contributor Author

There is no race: detach_network is not called if netvm is None. If I call it anyway, detachDevice fails on

File "/usr/share/qubes/templates/libvirt/devices/net.xml", line 7, in top-level template code
    <backenddomain name="{{ vm.netvm.name }}" />
    ^^^^^^^^^^^^^^^^^^^^^
jinja2.exceptions.UndefinedError: 'None' has no attribute 'name'

So I have to pass the vm.netvm somehow.
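
For illustration, a minimal sketch of one way to pass it explicitly (hypothetical old_netvm variable; assumes the net.xml template is changed to read a separate netvm argument rather than vm.netvm):

# Hypothetical sketch: render the device XML with an explicitly passed
# backend domain, so detach still works after vm.netvm has been cleared.
# Assumes net.xml is adjusted to use a "netvm" template variable, e.g.:
#     <backenddomain name="{{ netvm.name }}" />
interface_xml = self.app.env.get_template(
    "libvirt/devices/net.xml"
).render(vm=self, netvm=old_netvm)
self.libvirt_domain.detachDevice(interface_xml)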

Contributor Author

I did manage to write the Jinja, but it failed because vm.ip is None. The IP needs to be saved so it can be removed with vif-route-qubes. A sketch of one way to pass the old netvm explicitly is shown above the traceback discussion.

Member

Maybe detach can be changed to retrieve the info from libvirt instead of from VM properties? If the XML is needed, maybe you can take it from libvirt_domain.XMLDesc()?

Contributor Author

It seems viable, yes:

    <interface type='ethernet'>
      <mac address='00:16:3e:5e:6c:00'/>
      <ip address='10.137.0.21' family='ipv4'/>
      <script path='vif-route-qubes'/>
      <backenddomain name='test-inst-netvm1'/>
    </interface>
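
For illustration, a minimal sketch of reading that element back from libvirt_domain.XMLDesc() instead of re-rendering it from VM properties (hypothetical helper name and standalone imports; the real detach code may look different):

import xml.etree.ElementTree as ET

def interface_from_libvirt(vm):
    """Hypothetical helper: recover the network interface details from the
    running domain's libvirt XML instead of from VM properties."""
    xml = vm.libvirt_domain.XMLDesc(0)
    interface = ET.fromstring(xml).find("devices/interface")
    if interface is None:
        return None
    ip_elem = interface.find("ip")
    backend_elem = interface.find("backenddomain")
    return {
        "xml": ET.tostring(interface, encoding="unicode"),
        "ip": ip_elem.get("address") if ip_elem is not None else None,
        "backend": backend_elem.get("name") if backend_elem is not None else None,
    }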

            except (qubes.exc.QubesException, libvirt.libvirtError):
                vm.log.warning("Cannot attach network", exc_info=1)

    def reset_deferred_netvm(self):
Member

"reset" is a bit unfortunate here (means more "clear" than re-set). Maybe "apply"?

Contributor Author

Done.

    ):  # pylint: disable=unused-argument
        """Check for deferred netvm changes in case qube was paused while
        changes happened."""
        if getattr(self, "is_preload", False):
Member

This may be problematic - self.use_preload() is also called in a domain-unpaused handler, so depending on execution order, is_preload may already be false here. See the fire_event docstring and check whether the order is correct according to that description (and add the reasoning for why it's correct to the commit message). But if the order is wrong, or undefined, this will need some other solution.

Contributor Author

# qubes/vm/dispvm.py

@qubes.events.handler("domain-unpaused")
def on_domain_unpaused_dispvm(...):
    if self.is_preload:
        self.use_preload()

def use_preload(...):
    ...
    self.apply_deferred_netvm()
    self.preload_requested = None
    ...


# qubes/vm/mix/net.py

@qubes.events.handler("domain-unpaused")
def on_domain_unpaused_net(...):
    if getattr(self, "is_preload", False):
        return
    self.apply_deferred_netvm()

def apply_deferred_netvm(...):
    deferred_from = self.features.get("deferred-netvm-original", None)
    if deferred_from is None:
        return

So is_preload is only False after apply_deferred_netvm() has already been run from DispVM.use_preload(). If it is False by the time on_domain_unpaused_net() runs, apply_deferred_netvm() is still called, but it returns early because deferred_from is None. I will think about how to include this in the commit message.

@ben-grande ben-grande force-pushed the preload-netvm branch 3 times, most recently from c093f2b to 38cc8bb Compare August 28, 2025 13:10
vm.ip6 = ip_elem.get("address")
self.libvirt_domain.detachDevice(
self.app.env.get_template("libvirt/devices/net.xml").render(vm=self)
self.app.env.get_template("libvirt/devices/net.xml").render(vm=vm)
Member

You could simply use the XML you already got (the interface variable), instead of re-building it.

Contributor Author

Yes, that is much simpler, thanks for the correction.
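
A minimal sketch of that simpler variant, assuming an interface element has already been parsed out of libvirt_domain.XMLDesc() as above (names are illustrative):

import xml.etree.ElementTree as ET

# Detach using the XML already obtained from libvirt, instead of
# re-rendering libvirt/devices/net.xml from VM properties.
interface_xml = ET.tostring(interface, encoding="unicode")
vm.libvirt_domain.detachDevice(interface_xml)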

        with self.subTest("resetting netvm to default"):
            original_netvm = vm.netvm.name if vm.netvm else ""
            del vm.netvm
            mock_detach.assert_called()
Member

@marmarek marmarek Aug 28, 2025

I don't think it should be called on a paused vm. It looks like you are simply missing reset_mock() calls after the previous sub-test.
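
For reference, a minimal sketch of where such a reset could go (hypothetical test structure, not the actual test code):

# Forget calls recorded by earlier sub-tests so the next assertion only
# reflects this sub-test.
mock_detach.reset_mock()
with self.subTest("resetting netvm to default"):
    del vm.netvm
    # If detach is indeed deferred while the qube is paused, as suggested
    # above, this sub-test would then assert that no detach happened:
    mock_detach.assert_not_called()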

@ben-grande
Contributor Author

> or maybe it will start automatically?

Starting a client starts the netvm. Also, the netvm is already started in configure_netvm; the start_vm is just there to wait for the GUI session.

@marmarek
Member

marmarek commented Oct 7, 2025

Oh no, now the test failed in the network_ipv6 job :(
But it looks a bit different now. First of all, test-inst-vm1 does have eth0 and it has an IP address set, etc. But the backend side (vif15.0) reports NO-CARRIER. And the frontend side has state=1 (weird; at this state eth0 shouldn't be created yet).

I'll look at it a bit...

@marmarek
Member

FYI, an update on debugging this: I think it's a kernel issue. I reported it to xen-devel and in parallel came up with QubesOS/qubes-linux-kernel#1193, but based on test results it doesn't fully fix it yet (for some reason, tests in network_ipv6 still fail...).

@marmarek
Member

So, at this point, I'd like to merge this, even if the new test sometimes (often) fails. I'll do one more test run and, depending on the outcome, maybe mark it as an expected failure or something. After all, this PR fixes an actual issue, and the failure seems to be caused by a bug elsewhere, not here.

Applying deferred netvm for preloaded disposables is handled separately
to ensure that, before a disposable is returned to the user, the
networking is already set up. If the domain-unpaused event of the
NetVMMixin kicks in before the preload is used, it is ignored via the
"is_preload" attribute; if it kicks in after, it is ignored because the
"deferred-netvm-original" feature is absent.

Fixes: QubesOS/qubes-issues#10173
For: QubesOS/qubes-issues#1512

Concatenating an empty ('') value is lost on translation.

For: QubesOS/qubes-issues#1512
@ben-grande
Contributor Author

  • ipv4 failed
  • shutdown_old didn't fail this time, but nothing changed in the test, so it could fail in the future

Therefore, I will set shutdown_old and purge_old to be skipped for ipv4 and ipv6.
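
For reference, a minimal sketch of such a skip with unittest (class and test names are illustrative only):

import unittest

class TC_10_NetworkIsolation(unittest.TestCase):
    @unittest.skip("unstable until the NO-CARRIER kernel issue "
                   "(QubesOS/qubes-linux-kernel#1193) is resolved")
    def test_shutdown_old(self):
        ...

    @unittest.skip("unstable until the NO-CARRIER kernel issue "
                   "(QubesOS/qubes-linux-kernel#1193) is resolved")
    def test_purge_old(self):
        ...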

Successfully merging this pull request may close these issues.

Preloaded disposable doesn't handle netvm changes when it is paused