Implement immediate VM cleanup in plan fixture #246
base: main
Conversation
Walkthrough

Session-level VM/Pod teardown was removed and VM deletion moved into the test fixture: after yielding the plan, the fixture now immediately attempts to instantiate and delete each VM.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @conftest.py:
- Around line 739-740: Replace the LOGGER.error call in the VM cleanup except
block so that the full traceback is preserved: change the except block that
catches Exception (current code using LOGGER.error(f"Failed to delete VM
{vm['name']}: {exc}")) to call LOGGER.exception with %-style formatting (e.g.,
LOGGER.exception("Failed to delete VM %s", vm["name"])) and remove the unused
exc variable; this ensures the stack trace is logged and avoids f-string lint
warnings.
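The suggested change can be sketched in isolation (a minimal illustration, not the project's actual code; `cleanup_vm` and the injected `delete` callable are hypothetical stand-ins for the fixture's real deletion call):

```python
import logging

LOGGER = logging.getLogger("conftest")

def cleanup_vm(vm, delete):
    """Hypothetical sketch of the suggested except block.

    `delete` stands in for the real VM deletion call in the fixture.
    """
    try:
        delete(vm)
        return True
    except Exception:
        # logging.exception logs at ERROR level and appends the active
        # traceback automatically, so no `as exc` binding is needed;
        # lazy %-style formatting also avoids Ruff's G004 finding.
        LOGGER.exception("Failed to delete VM %s", vm["name"])
        return False
```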
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- conftest.py
- utilities/pytest_utils.py
💤 Files with no reviewable changes (1)
- utilities/pytest_utils.py
🧰 Additional context used
🪛 Ruff (0.14.10)
conftest.py
737-737: Logging statement uses f-string
(G004)
739-739: Do not catch blind exception: Exception
(BLE001)
740-740: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
740-740: Logging statement uses f-string
(G004)
🔇 Additional comments (1)
conftest.py (1)
727-738: LGTM! Immediate VM cleanup aligns with PR objectives

The cleanup logic correctly:
- Targets destination VMs in `target_namespace` (not source VMs)
- Uses an `exists` check to avoid `NotFoundError` on already-deleted VMs
- Employs `wait=True` for synchronous deletion, preventing resource contention between sequential tests
- Handles the case where `vm["name"]` was updated to the migrated name on line 711

This directly addresses the resource exhaustion issue on smaller clusters.
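The cleanup behavior described in the review can be sketched with stub objects (`FakeVM` and `cleanup_vms` below are illustrative stand-ins, not the project's actual `VirtualMachine` wrapper or fixture code):

```python
class FakeVM:
    """Stand-in for the resource wrapper used in conftest.py."""
    store = {"migrated-vm"}  # VMs currently present in the target namespace

    def __init__(self, name, namespace):
        self.name = name
        self.namespace = namespace

    def exists(self):
        return self.name in FakeVM.store

    def delete(self, wait=True):
        # wait=True models synchronous deletion: the call returns only
        # once the VM (and its virt-launcher pod) is actually gone.
        FakeVM.store.discard(self.name)

def cleanup_vms(vms, namespace):
    """Delete each tracked VM right after the test, skipping absent ones."""
    deleted = []
    for vm in vms:
        vm_obj = FakeVM(name=vm["name"], namespace=namespace)
        if vm_obj.exists():  # avoids NotFoundError on already-deleted VMs
            vm_obj.delete(wait=True)
            deleted.append(vm_obj.name)
    return deleted
```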
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @conftest.py:
- Around line 739-740: Remove the unused exception variable by changing the
except clause from "except Exception as exc:" to "except Exception:" around the
block that calls LOGGER.exception("Failed to delete VM %s", vm["name"]);
LOGGER.exception already logs the current exception context so drop the "as exc"
binding and leave the LOGGER.exception call unchanged.
- Line 737: Replace the eager f-string in the LOGGER.info call with lazy logging
interpolation: change the LOGGER.info invocation that currently uses f"Deleting
VM {vm_obj.name} after test" to use a format string and argument(s) so the
message is constructed only if the log level is enabled (i.e.,
LOGGER.info("Deleting VM %s after test", vm_obj.name)); update the specific
LOGGER.info call that references vm_obj.name accordingly.
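Why the lazy form matters can be demonstrated in isolation: with %-style arguments, the logging module skips message construction entirely when the level is disabled (the `Name` helper below is purely illustrative):

```python
import logging

logging.basicConfig(level=logging.WARNING)
LOGGER = logging.getLogger("conftest")

class Name:
    """Records whether the log argument was ever rendered to a string."""
    rendered = False

    def __str__(self):
        Name.rendered = True
        return "migrated-vm"

# INFO is below the configured WARNING level, so with lazy %-style
# arguments the logger returns early and never calls str() on Name().
# An f-string would have built the full message unconditionally.
LOGGER.info("Deleting VM %s after test", Name())
```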
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
conftest.py
🧰 Additional context used
🪛 Ruff (0.14.10)
conftest.py
737-737: Logging statement uses f-string
(G004)
739-739: Local variable exc is assigned to but never used
Remove assignment to unused variable exc
(F841)
🔇 Additional comments (2)
conftest.py (2)
727-729: LGTM on the cleanup strategy. Moving VM cleanup from session-level to function-level addresses the resource exhaustion issue on smaller clusters. The comment about Pods being garbage collected via `ownerReferences` is accurate; Kubernetes will handle pod cleanup automatically.

730-740: No issues found. The `vm["name"]` mutation at line 711 is intentional and correct. The `source_provider.vm_dict()` method returns the actual cloned/renamed VM name in `source_vm_details["name"]`. Updating `vm["name"]` with this value ensures cleanup uses the correct actual VM name in the target namespace, which is the proper design. The updated name is consistently used throughout (line 717 waits for the updated name, line 733 cleans up the updated name).
- Delete VMs immediately after each test instead of deferring to session end
- Remove Pod tracking as Kubernetes garbage collects them via ownerReferences
- Remove redundant VM cleanup loop from session teardown (delete_all_vms serves as safety net)
- Prevents resource accumulation during long test runs
@coderabbitai review

✅ Actions performed: Review triggered.

@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @conftest.py:
- Around line 727-740: The LOGGER.info call uses an f-string while other logs
use lazy % formatting; change the info call inside the VM cleanup loop to use
lazy formatting by replacing the f-string with LOGGER.info("Deleting VM %s after
test", vm_obj.name) so logging is consistent with LOGGER.exception("Failed to
delete VM %s", vm["name"]) and avoids eager interpolation.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- conftest.py
- utilities/pytest_utils.py
💤 Files with no reviewable changes (1)
- utilities/pytest_utils.py
🧰 Additional context used
🪛 Ruff (0.14.10)
conftest.py
737-737: Logging statement uses f-string
(G004)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: can-be-merged
- GitHub Check: can-be-merged
- GitHub Check: can-be-merged
- GitHub Check: tox
- GitHub Check: pre-commit
- GitHub Check: build-container
🔇 Additional comments (1)
conftest.py (1)
729-740: Immediate per-test VM cleanup is a solid improvement.

This change directly addresses the resource exhaustion issue described in the PR objectives. Deleting VMs immediately after each test rather than deferring to session end prevents OCP clusters from running out of CPU/memory when running sequential warm migrations or migrations with target power state "on".

The assumption that Kubernetes will garbage-collect pods via `ownerReferences` is correct: VMs own their virt-launcher pods, so pod cleanup is automatic.
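The garbage-collection claim can be illustrated with the metadata shape involved (the pod metadata below is a made-up example, not taken from the cluster): a virt-launcher pod carries an `ownerReferences` entry pointing at its VM/VMI, and deleting the owner cascades to the pod.

```python
# Hypothetical pod metadata, shaped like what the Kubernetes API returns.
pod_metadata = {
    "name": "virt-launcher-migrated-vm-abcde",
    "ownerReferences": [
        {
            "apiVersion": "kubevirt.io/v1",
            "kind": "VirtualMachineInstance",
            "name": "migrated-vm",
            "controller": True,
        }
    ],
}

def is_owned_by(metadata, kind, name):
    """Check whether an object lists the given owner, as the Kubernetes
    garbage collector does when deciding what to cascade-delete."""
    return any(
        ref.get("kind") == kind and ref.get("name") == name
        for ref in metadata.get("ownerReferences", [])
    )
```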
```python
        "module": pod.__module__,
    })
try:
    vm_obj = VirtualMachine(
```
This change defeats the whole purpose of having a functional environment for debugging if someone needs it.
If the issue is resources, then I would prefer to stop the VMs here rather than delete them.
And do not remove the existing code; we also store the VMs in order to track leftovers for the current run.
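The reviewer's alternative, stopping rather than deleting, could look roughly like this (`FakeVM` and `stop_plan_vms` are hypothetical stand-ins; the real wrapper class and fixture differ):

```python
class FakeVM:
    """Stand-in for the project's VM wrapper, for illustration only."""
    def __init__(self, name, running=True):
        self.name = name
        self.running = running

    def stop(self, wait=True):
        # Stopping tears down the virt-launcher pod, releasing node
        # CPU/memory, but keeps the VM object for post-run debugging.
        self.running = False

def stop_plan_vms(vms):
    """Stop every still-running VM created during the Plan execution."""
    stopped = []
    for vm in vms:
        if vm.running:
            vm.stop(wait=True)
            stopped.append(vm.name)
    return stopped
```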
The VMs created on the destination OCP cluster by the migration process are deleted at the end of the test session, once all tests/migration plans are executed, and not at the end of each finished migration plan. If more tests with warm migration type or target power state "on" are sequentially executed within one session, a smaller OCP cluster may quickly end up without sufficient resources:
0/6 nodes are available: 3 Insufficient cpu, 3 Insufficient memory.
To mitigate the problem, we may delete the VMs in the cleanup phase of the Plan fixture, right after each test finishes.
(Or we can alternatively make sure that in the cleanup phase of the Plan fixture we stop all the running VMs created within the Plan execution.)
Summary by CodeRabbit