Add test with secondary datastore on the same storage system supporting copyoffload #237
Conversation
Walkthrough

Adds multi-datastore copy-offload support: new config fields for a secondary datastore and RDM testing, storage-mapping changes to produce per-datastore entries, provider changes to validate capacity and place disks per datastore, and tests for multi-datastore migration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as Test Runner
    participant MTV as mtv_migration.get_storage_migration_map
    participant Orchestrator as Migration Orchestrator
    participant Provider as libs.providers.vmware
    participant vCenter as vCenter (datastores)
    Test->>MTV: request storage map (datastore_id, secondary_datastore_id...)
    MTV-->>Test: return per-datastore storage map entries
    Test->>Orchestrator: start migration with storage map
    Orchestrator->>Provider: prepare clone + disks (per-disk datastore)
    Provider->>vCenter: resolve datastores, query free space per datastore
    vCenter-->>Provider: datastore objects & free space
    Provider->>Provider: group disks by datastore, validate capacity
    Provider->>vCenter: create folders / allocate VMDKs on chosen datastores
    vCenter-->>Provider: confirm creation
    Provider-->>Orchestrator: clone + attach disks completed
    Orchestrator-->>Test: migration finished
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
libs/providers/vmware.py (1)
742-750: SCSI unit allocation can incorrectly use reserved unit 7 (and can overflow).
`available_unit_number` is computed once with `i != 7`, but then incremented with `available_unit_number += 1`, which can assign unit 7 to a disk (reserved for the controller) and can also exceed 15 without a guard.

Proposed fix:

```diff
-available_unit_number = next((i for i in range(16) if i != 7 and i not in used_unit_numbers), None)
-if available_unit_number is None:
+available_unit_numbers = [i for i in range(16) if i != 7 and i not in used_unit_numbers]
+if not available_unit_numbers or len(disks_to_add) > len(available_unit_numbers):
     raise RuntimeError(f"No available unit number on SCSI controller for VM '{source_vm.name}'.")
 new_disk_key_counter = -101
-for disk in disks_to_add:
+for disk, unit_number in zip(disks_to_add, available_unit_numbers, strict=False):
     # Determine which datastore to use for this disk
     disk_datastore_id = disk.get("datastore_id")
-    LOGGER.info(f"Processing disk {available_unit_number}: datastore_id from config = '{disk_datastore_id}'")
+    LOGGER.info(f"Processing disk {unit_number}: datastore_id from config = '{disk_datastore_id}'")
     ...
-    new_disk_spec.device.unitNumber = available_unit_number
+    new_disk_spec.device.unitNumber = unit_number
     ...
-    full_path = (
-        f"[{disk_datastore.name}] {datastore_path}/{clone_vm_name}_disk_{available_unit_number}.vmdk"
-    )
+    full_path = f"[{disk_datastore.name}] {datastore_path}/{clone_vm_name}_disk_{unit_number}.vmdk"
     ...
-    LOGGER.info(f"Disk {available_unit_number}: fileName set to custom path: {full_path}")
+    LOGGER.info(f"Disk {unit_number}: fileName set to custom path: {full_path}")
     ...
-        f"Disk {available_unit_number}: fileName set to [{disk_datastore.name}] "
+        f"Disk {unit_number}: fileName set to [{disk_datastore.name}] "
         f"to force creation on this datastore"
     )
     ...
-    LOGGER.info(f"Disk {available_unit_number}: provisioning type: {disk.get('provision_type', 'thin')}")
+    LOGGER.info(f"Disk {unit_number}: provisioning type: {disk.get('provision_type', 'thin')}")
     new_disk_spec.device.backing = backing_info
     device_changes.append(new_disk_spec)
-    available_unit_number += 1
     new_disk_key_counter -= 1
```

Also applies to: 796-883
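The core idea of the fix — precompute every valid unit number up front and pair disks with units via `zip` — can be shown in isolation. This is a hypothetical standalone sketch (function and variable names mirror the review, not the project code): units 0-15 are allowed, the controller-reserved unit 7 is excluded, and over-allocation fails before any disk spec is built.

```python
# Sketch: precompute valid SCSI unit numbers, then pair disks with them.
# Unit 7 is reserved for the controller and can never be assigned; asking
# for more disks than available units fails up front instead of overflowing.
def allocate_unit_numbers(disks_to_add: list[str], used_unit_numbers: set[int]) -> list[tuple[str, int]]:
    available = [i for i in range(16) if i != 7 and i not in used_unit_numbers]
    if len(disks_to_add) > len(available):
        raise RuntimeError("No available unit number on SCSI controller.")
    return list(zip(disks_to_add, available))


pairs = allocate_unit_numbers(["disk_a", "disk_b"], used_unit_numbers={0, 1, 2, 3, 4, 5, 6})
print(pairs)  # [('disk_a', 8), ('disk_b', 9)] - unit 7 is skipped
```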
🤖 Fix all issues with AI agents
In @libs/providers/vmware.py:
- Around line 751-795: The code treats the literal sentinel
"secondary_datastore_id" as a datastore id even when secondary_datastore is
None, which later makes self.get_obj([vim.Datastore], ds_id) raise a ValueError;
update the logic in the capacity calculation loop (the block iterating
disks_to_add and the subsequent datastore_capacity_requirements check) to detect
when disk.get("datastore_id") == "secondary_datastore_id" but
secondary_datastore is None and either (a) fall back to target_datastore._moId
or (b) raise a VmCloneError with a clear message about missing configured
secondary_datastore_id; apply the same fix to the equivalent validation block
later (the one that also resolves ds_id via target_datastore._moId,
secondary_datastore._moId or self.get_obj) so no raw ValueError can be raised
when the sentinel is present without a configured secondary datastore.
In @tests/test_copyoffload_migration.py:
- Around line 723-737: The test function
test_copyoffload_multi_datastore_migration currently declares an unused
parameter copyoffload_config (Ruff ARG001) and its docstring/example shows
"datastore_ids" which doesn't match the actual config keys datastore_id and
secondary_datastore_id; to fix, either remove copyoffload_config from the
function signature or reference it (e.g., assert it's not None or call a
validate helper) so it's used, and update the docstring/example to show the
correct keys "datastore_id" and "secondary_datastore_id" (also apply the same
change to the other affected test block around lines 738-870).
🧹 Nitpick comments (3)
utilities/mtv_migration.py (2)
279-356: De-dupe/validate datastore list in multi-datastore copy-offload mode.

If `secondary_datastore_id` equals `datastore_id` (or appears twice), you'll generate duplicate `StorageMap.mapping` entries, which may be rejected or cause confusing behavior.

Proposed tweak:

```diff
-datastores_to_map = [datastore_id]
-if secondary_datastore_id:
-    datastores_to_map.append(secondary_datastore_id)
+datastores_to_map = [datastore_id]
+if secondary_datastore_id and secondary_datastore_id != datastore_id:
+    datastores_to_map.append(secondary_datastore_id)
+elif secondary_datastore_id == datastore_id:
+    LOGGER.warning("secondary_datastore_id matches datastore_id; ignoring duplicate datastore mapping")
```
328-356: Consider failing fast when only one of (datastore_id, offload_plugin_config) is set.

Right now, if someone passes `datastore_id` without `offload_plugin_config` (or vice versa), the function quietly goes to the "standard storage map" path. If that's not intended, raise `ValueError` with a clear message.

libs/providers/vmware.py (1)

787-794: Ruff TRY003: consider raising a shorter `VmCloneError` and logging details separately.

If your CI enforces Ruff's TRY003, this will keep failing lint. If not enforced, feel free to ignore.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
`.providers.json.example`, `README.md`, `libs/providers/vmware.py`, `tests/test_copyoffload_migration.py`, `tests/tests_config/config.py`, `utilities/mtv_migration.py`
🧰 Additional context used
🧬 Code graph analysis (1)
tests/test_copyoffload_migration.py (2)
libs/forklift_inventory.py (2)
- `vms_names` (130-135)
- `vms` (59-60)

utilities/mtv_migration.py (4)
- `get_network_migration_map` (379-412)
- `get_storage_migration_map` (269-376)
- `migrate_vms` (32-98)
- `verify_vm_disk_count` (449-474)
🪛 Ruff (0.14.10)
tests/test_copyoffload_migration.py
735-735: Unused function argument: copyoffload_config
(ARG001)
libs/providers/vmware.py
787-790: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (4)
libs/providers/vmware.py (1)
836-867: Verify `backing_info.fileName = f"[{disk_datastore.name}]"` reliably forces datastore placement.

This is pyVmomi/vSphere behavior-sensitive; if vSphere requires a full path (folder + filename) to avoid defaulting to the VM's folder/datastore, you could still end up on the wrong datastore in some environments. Please validate against the vSphere versions you test against.
README.md (1)
281-305: Docs match the new config shape and intended usage.

The new `secondary_datastore_id` and per-disk `"datastore_id": "secondary_datastore_id"` guidance is clear and aligns with the implementation.

.providers.json.example (1)

29-41: Example config additions look good (primary/secondary/RDM/default VM name).

tests/tests_config/config.py (1)
137-157: Test config entry is consistent with the new multi-datastore flow.
Force-pushed from 76cd1cc to 0ed0784 (Compare)
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @libs/providers/vmware.py:
- Line 756: Change logging calls that use f-strings to use %-formatting so
formatting is deferred by the logging module; for example replace
LOGGER.info(f"Secondary datastore available: {secondary_datastore.name}
({secondary_datastore_id})") with LOGGER.info("Secondary datastore available: %s
(%s)", secondary_datastore.name, secondary_datastore_id) and apply the same
pattern to the other LOGGER calls noted (around the blocks referencing
secondary_datastore, datastore_id, cluster, host, and task/results in the vmware
provider functions) so all messages use the logging module’s %-style with
separate args instead of f-strings.
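The requested pattern can be illustrated with a minimal sketch (logger name and values are placeholders): the f-string version builds the message eagerly, while the `%`-style version defers interpolation to the logging module, which only formats if the record is actually emitted.

```python
import logging

LOGGER = logging.getLogger(__name__)
secondary_name = "datastore2"    # stand-in value for illustration
secondary_id = "datastore-123"   # stand-in value for illustration

# Eager: the f-string is built even when INFO logging is disabled.
LOGGER.info(f"Secondary datastore available: {secondary_name} ({secondary_id})")

# Lazy: the logging module interpolates %s only if the record is emitted.
LOGGER.info("Secondary datastore available: %s (%s)", secondary_name, secondary_id)
```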
- Around line 751-798: Extract the long inline exception messages into reusable
constants or a helper function to improve maintainability: create a constant
(e.g., ERR_SECONDARY_DS_NOT_CONFIGURED) for the message used in the VmCloneError
raised when secondary_datastore is missing, and implement a small helper like
format_insufficient_capacity_message(datastore, required_gb, available_gb) to
return the formatted string used in the VmCloneError and the LOGGER.info call in
the capacity validation loop (referencing VmCloneError, secondary_datastore,
target_datastore, datastore_capacity_requirements, and the validation loop that
logs/raises for required_gb vs available_space_gb).
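A sketch of the suggested refactor follows. `ERR_SECONDARY_DS_NOT_CONFIGURED` and `format_insufficient_capacity_message` are the names proposed in the review; the exact message wording below is illustrative, not the project's:

```python
# Module-level constant for the fixed message (single place to edit),
# plus a helper that both LOGGER calls and VmCloneError can share.
ERR_SECONDARY_DS_NOT_CONFIGURED = (
    "Disk requests 'secondary_datastore_id' but no secondary datastore is configured."
)


def format_insufficient_capacity_message(datastore_name: str, required_gb: float, available_gb: float) -> str:
    """Single source of truth for the capacity-failure message."""
    return (
        f"Insufficient capacity on datastore '{datastore_name}': "
        f"required {required_gb:.1f} GB, available {available_gb:.1f} GB"
    )


print(format_insufficient_capacity_message("ds1", 30, 12.5))
```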
In @tests/test_copyoffload_migration.py:
- Around line 820-821: Replace the f-string logging with lazy `%`-style
formatting: change the two LOGGER.info calls that currently use f-strings with
datastore_id and secondary_datastore_id to use format strings and pass the
variables as arguments (e.g., LOGGER.info("Multi-datastore migration using
primary datastore: %s", datastore_id) and similarly for secondary_datastore_id)
so logging formatting is deferred when the log level is disabled.
- Around line 711-722: The parametrize call uses a comma-separated string for
parameter names; change the first argument to an explicit tuple of names so
pytest.mark.parametrize("plan,multus_network_name", ...) becomes
pytest.mark.parametrize(("plan", "multus_network_name"), ...). Locate the
pytest.mark.parametrize invocation in tests/test_copyoffload_migration.py (the
call that currently lists "plan,multus_network_name") and replace the string
parameter list with the tuple form, preserving the existing values,
indirect=True, and ids argument.
In @utilities/mtv_migration.py:
- Line 329: Add explicit validation around copy-offload parameters: instead of
the current implicit check `if datastore_id and offload_plugin_config:`,
validate the combinations and raise/log an explicit error when
misconfigured—specifically, if `datastore_id` is provided but
`offload_plugin_config` is missing, or if `secondary_datastore_id` is provided
but `datastore_id` is missing. Implement this in the same scope where
`datastore_id`, `offload_plugin_config`, and `secondary_datastore_id` are
checked (the block containing the `if datastore_id and offload_plugin_config:`
condition), returning/raising a clear error message describing the missing
parameter(s) so the code does not silently fall back to standard migration.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
`.providers.json.example`, `README.md`, `libs/providers/vmware.py`, `tests/test_copyoffload_migration.py`, `tests/tests_config/config.py`, `utilities/mtv_migration.py`
🧰 Additional context used
🪛 Ruff (0.14.10)
libs/providers/vmware.py
756-756: Logging statement uses f-string
(G004)
767-769: Avoid specifying long messages outside the exception class
(TRY003)
791-794: Avoid specifying long messages outside the exception class
(TRY003)
796-797: Logging statement uses f-string
(G004)
804-804: Logging statement uses f-string
(G004)
810-811: Logging statement uses f-string
(G004)
817-818: Logging statement uses f-string
(G004)
824-825: Logging statement uses f-string
(G004)
849-849: Logging statement uses f-string
(G004)
864-864: Logging statement uses f-string
(G004)
869-870: Logging statement uses f-string
(G004)
879-879: Logging statement uses f-string
(G004)
utilities/mtv_migration.py
334-334: Logging statement uses f-string
(G004)
336-336: Logging statement uses f-string
(G004)
355-355: Logging statement uses f-string
(G004)
tests/test_copyoffload_migration.py
713-713: Wrong type passed to first argument of pytest.mark.parametrize; expected tuple
Use a tuple for the first argument
(PT006)
820-820: Logging statement uses f-string
(G004)
821-821: Logging statement uses f-string
(G004)
🔇 Additional comments (10)
README.md (1)
285-285: LGTM: Clear documentation of multi-datastore feature.

The documentation accurately describes the new multi-datastore capability, explaining the relationship between primary and secondary datastores and how disks can reference them. The use case is well articulated.
Also applies to: 299-305
.providers.json.example (1)
29-30: LGTM: Configuration example accurately reflects new fields.

The example clearly distinguishes between primary and secondary datastores with helpful inline comments. The optional fields (`rdm_lun_uuid`, `default_vm_name`) are appropriately documented alongside the new `secondary_datastore_id` field.
Also applies to: 39-42
tests/test_copyoffload_migration.py (2)
802-822: LGTM: Thorough validation of multi-datastore configuration.

The test properly validates that both primary and secondary datastores are configured before proceeding, with clear error messages. The logging statements provide useful debugging information about which datastores are being used.
823-878: LGTM: Test correctly exercises multi-datastore migration flow.

The test follows the established pattern:
- Creates network migration map
- Builds offload plugin configuration
- Passes both `datastore_id` and `secondary_datastore_id` to storage map generation
- Executes migration
- Verifies disk count post-migration

The integration with the broader copy-offload infrastructure appears sound.
tests/tests_config/config.py (1)
137-157: LGTM: Test configuration appropriately defines multi-datastore scenario.

The configuration correctly sets up a VM with:
- Base disk(s) from template on primary datastore
- One additional 30GB thin-provisioned disk explicitly assigned to the secondary datastore via `"datastore_id": "secondary_datastore_id"`

This validates the core multi-datastore capability where disks are distributed across two datastores on the same storage system.
libs/providers/vmware.py (3)
800-826: LGTM: Clear per-disk datastore resolution logic.

The three-tier resolution hierarchy is well-implemented:
- Secondary datastore reference - if disk specifies `"datastore_id": "secondary_datastore_id"`
- Custom datastore ID - if disk specifies an explicit datastore MoRef
- Default/primary datastore - fallback to `target_datastore`

Each decision is logged with the datastore name and MoRef ID, which will greatly aid debugging in production.
840-871: LGTM: Correct VMware API usage for disk placement.

Setting both `backing_info.datastore` (line 842) and `backing_info.fileName` (lines 863, 867) ensures the disk is created on the correct datastore:
- `backing_info.datastore`: Associates the backing with the target datastore object
- `backing_info.fileName`: When set to `[datastore_name]` without a full path, vSphere places the disk on that datastore

The custom `datastore_path` handling (lines 844-864) correctly constructs the full VMDK path with the selected datastore name.
978-978: Approved: Minor formatting addition.

The blank line addition improves code readability by separating the disk addition logic from the MAC regeneration logic.
utilities/mtv_migration.py (2)
280-280: LGTM: Clean parameter addition with clear documentation.

The new `secondary_datastore_id` parameter is properly typed, maintains backward compatibility, and is well-documented. The docstring updates clearly explain the multi-datastore capability.

Also applies to: 291-293, 304-305
331-356: LGTM: Multi-datastore logic is correct and well-structured.

The implementation properly:
- Builds a list of datastores (primary + optional secondary)
- Creates separate storage map entries for each datastore
- Constructs destination configs with appropriate optional fields
- Provides clear logging for debugging

The storage map entry structure correctly includes `destination`, `source` (with datastore id), and `offloadPlugin` for each datastore.

Note on static analysis hints (lines 334, 336, 355): Ruff flags f-strings in logging statements, preferring lazy `%` formatting. However, this is consistent with the existing codebase style (see lines 59, 192, 200, 254, etc.). Changing only these lines would create inconsistency. If f-string logging is a concern, it should be addressed codebase-wide in a separate refactoring effort.
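A hypothetical sketch of what the per-datastore mapping entries might look like, based only on the field names the review mentions (`destination`, `source` with datastore id, `offloadPlugin`); all ids, names, and the plugin payload shape are placeholders, not the project's actual values:

```python
# Shared offload plugin config: both datastores sit on one storage system,
# so every mapping entry references the same (placeholder) payload.
offload_plugin = {"vsphereXcopyConfig": {"storageVendorProduct": "vendor-x"}}

datastores_to_map = ["datastore-111", "datastore-222"]  # primary + optional secondary
mapping = [
    {
        "destination": {"storageClass": "offload-sc", "accessMode": "ReadWriteOnce"},
        "source": {"id": ds_id},
        "offloadPlugin": offload_plugin,
    }
    for ds_id in datastores_to_map
]
print(len(mapping))  # one entry per datastore: 2
```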
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @tests/test_copyoffload_migration.py:
- Around line 820-821: Replace the eager f-strings in the two logging calls so
they use the logger's lazy formatting API: update the LOGGER.info calls that
reference datastore_id and secondary_datastore_id (the lines reading
"Multi-datastore migration using primary datastore: {datastore_id}" and the
secondary equivalent) to use format placeholders and pass the variables as
arguments (e.g., "%s" placeholders with datastore_id / secondary_datastore_id as
parameters) so interpolation happens only if the message is actually logged.
- Around line 809-814: The validation currently uses
all([storage_vendor_product, datastore_id]) and emits a generic failure; change
it to check each variable individually (storage_vendor_product and datastore_id)
and build a specific error message listing which parameter(s) are missing, then
call pytest.fail with that message so the test output indicates exactly which
config key is absent.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
tests/test_copyoffload_migration.py
🧰 Additional context used
🪛 Ruff (0.14.10)
tests/test_copyoffload_migration.py
820-820: Logging statement uses f-string
(G004)
821-821: Logging statement uses f-string
(G004)
🔇 Additional comments (2)
tests/test_copyoffload_migration.py (2)
723-737: Verify if SSH fixtures are intentionally omitted.

This test function doesn't include the `setup_copyoffload_ssh` and `vm_ssh_connections` fixtures that are present in other copy-offload tests (e.g., `test_copyoffload_thin_migration` at lines 53-54, `test_copyoffload_thick_lazy_migration` at lines 199-200).

If post-migration VM verification via SSH is needed - such as confirming the VM is accessible and operational - these fixtures should be added. If the verification at line 878 (`verify_vm_disk_count`) is sufficient for this test scenario, the omission is acceptable.

SEVERITY: LOW (potential oversight vs. intentional design)

Please confirm whether SSH-based verification is required for the multi-datastore migration test scenario. If yes, add the missing fixtures to the function signature.
711-878: Well-structured test for multi-datastore copy-offload migration.

The new test function effectively validates the multi-datastore copy-offload scenario with:
- Clear parametrization and test identification
- Comprehensive documentation covering requirements, configuration, and workflow
- Proper validation of required configuration fields (datastore_id and secondary_datastore_id)
- Correct integration with existing migration utilities
- Post-migration verification of disk counts
The test follows the established pattern of other copy-offload tests in the file and appropriately extends the functionality to handle the secondary datastore scenario.
myakove left a comment
Please make sure existing tests are not broken with these changes (vmware tests)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Force-pushed from 0d74f15 to e938071 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/test_copyoffload_migration.py (1)
12-12: LOW: Unused import `time`.

The `time` module is imported but not used anywhere in this file. This should be removed to keep imports clean.

Suggested fix:

```diff
 import pytest
 from pytest_testconfig import config as py_config
 from simple_logger.logger import get_logger
-import time
 from urllib.parse import urlparse
```
🤖 Fix all issues with AI agents
In @tests/test_copyoffload_migration.py:
- Around line 711-722: Change the pytest.mark.parametrize call to use the
comma-separated string parameter format for consistency with the rest of the
file: replace the tuple ("plan", "multus_network_name") in the
pytest.mark.parametrize decorator with the string "plan,multus_network_name"
(keeping the same values, indirect=True and ids unchanged) so the test
definition for the copyoffload multi-datastore migration matches the file's
style conventions.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (5)
`.providers.json.example`, `libs/providers/vmware.py`, `tests/test_copyoffload_migration.py`, `tests/tests_config/config.py`, `utilities/mtv_migration.py`
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
- Always use built-in Python typing (dict, list, tuple, etc.) for type annotations in Python code
- Use `openshift-python-wrapper` for all OpenShift/Kubernetes cluster interactions; import resource classes from `ocp_resources.*` and utilities from `ocp_utilities.*`
- Never import directly from the `kubernetes` package at runtime; never instantiate `DynamicClient` directly - use `ocp_utilities.infra.get_client()` instead
- Import `DynamicClient` from `kubernetes.dynamic` only within `TYPE_CHECKING` blocks or as string annotations for type hints
- Every OpenShift resource must be created using the `create_and_store_resource()` function from `utilities/resources.py` only
- Keep functions small and focused - maximum ~50 lines per function with single responsibility; extract helpers with underscore prefix for sub-tasks
- Keep code simple and readable; avoid over-engineering
- Run `ruff check` and `ruff format` for linting and formatting of Python code
- Run `mypy` with strict configuration for type checking of Python code

Files: `utilities/mtv_migration.py`, `tests/tests_config/config.py`, `libs/providers/vmware.py`, `tests/test_copyoffload_migration.py`
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
- Never use default values or `.get()` with fallbacks for configurations defined in the codebase (py_config, plan, tests_params); use direct dictionary access to fail fast on missing keys
- Use `.get()` with explicit validation for external/provider configurations to provide context-rich error messages
- Use `indirect=True` in the `@pytest.mark.parametrize` decorator when passing parameters to fixtures
- Apply pytest markers (tier0, warm, remote, copyoffload) to categorize tests; markers must be defined in pytest.ini

Files: `tests/tests_config/config.py`, `tests/test_copyoffload_migration.py`
tests/tests_config/config.py
📄 CodeRabbit inference engine (CLAUDE.md)
- Test configuration must be centralized in `tests/tests_config/config.py` with descriptive test parameter names matching test function names
- All test parameters must specify `warm_migration` explicitly (True/False) with no defaults

Files: `tests/tests_config/config.py`
{conftest.py,tests/**/*.py}
📄 CodeRabbit inference engine (CLAUDE.md)
Track all created resources in the `fixture_store['teardown']` dictionary for automatic cleanup

Files: `tests/tests_config/config.py`, `tests/test_copyoffload_migration.py`
tests/test_*_migration.py
📄 CodeRabbit inference engine (CLAUDE.md)
- Test structure must follow the standard pattern: parametrize with config, use standard fixtures, create migration maps, execute migration
- New test files should follow the naming convention `test_<feature>_migration.py` and be placed in the `tests/` directory

Files: `tests/test_copyoffload_migration.py`
🧠 Learnings (5)
📓 Common learnings
Learnt from: shellymiron
Repo: RedHatQE/mtv-api-tests PR: 239
File: README.md:270-331
Timestamp: 2026-01-12T16:24:31.016Z
Learning: Repository RedHatQE/mtv-api-tests: The OpenShift Job templates in README.md and docs/how-to-run-copyoffload-tests.md are intentionally duplicated with identical sections because the files are meant to be different but share common Job template structure; do not suggest consolidating them into a single YAML manifest.
📚 Learning: 2026-01-13T08:11:43.776Z
Learnt from: CR
Repo: RedHatQE/mtv-api-tests PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-13T08:11:43.776Z
Learning: Applies to tests/tests_config/config.py : Test configuration must be centralized in `tests/tests_config/config.py` with descriptive test parameter names matching test function names
Applied to files:
tests/tests_config/config.py
📚 Learning: 2026-01-13T08:11:43.776Z
Learnt from: CR
Repo: RedHatQE/mtv-api-tests PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-13T08:11:43.776Z
Learning: Applies to tests/tests_config/config.py : All test parameters must specify `warm_migration` explicitly (True/False) with no defaults
Applied to files:
tests/tests_config/config.py
📚 Learning: 2026-01-13T08:11:43.776Z
Learnt from: CR
Repo: RedHatQE/mtv-api-tests PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-13T08:11:43.776Z
Learning: Applies to tests/**/*.py : Use `indirect=True` in `pytest.mark.parametrize` decorator when passing parameters to fixtures
Applied to files:
tests/test_copyoffload_migration.py
📚 Learning: 2026-01-13T08:11:43.776Z
Learnt from: CR
Repo: RedHatQE/mtv-api-tests PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-13T08:11:43.776Z
Learning: Applies to tests/**/*.py : Use `.get()` with explicit validation for external/provider configurations to provide context-rich error messages
Applied to files:
tests/test_copyoffload_migration.py
🧬 Code graph analysis (2)
libs/providers/vmware.py (2)
conftest.py (1)
- `copyoffload_config` (856-899)

exceptions/exceptions.py (1)
- `VmCloneError` (53-54)
tests/test_copyoffload_migration.py (1)
libs/forklift_inventory.py (2)
- `vms_names` (130-135)
- `vms` (59-60)
🪛 Ruff (0.14.10)
utilities/mtv_migration.py
330-330: Avoid specifying long messages outside the exception class
(TRY003)
333-333: Avoid specifying long messages outside the exception class
(TRY003)
340-340: Logging statement uses f-string
(G004)
342-342: Logging statement uses f-string
(G004)
361-361: Logging statement uses f-string
(G004)
libs/providers/vmware.py
839-839: Logging statement uses f-string
(G004)
845-846: Logging statement uses f-string
(G004)
852-853: Logging statement uses f-string
(G004)
859-860: Logging statement uses f-string
(G004)
884-884: Logging statement uses f-string
(G004)
899-899: Logging statement uses f-string
(G004)
904-905: Logging statement uses f-string
(G004)
914-914: Logging statement uses f-string
(G004)
🔇 Additional comments (12)
.providers.json.example (1)
27-42: LGTM - Documentation updates for new copy-offload configuration options.

The new configuration fields (`secondary_datastore_id`, `rdm_lun_uuid`, `default_vm_name`) are well-documented with inline comments explaining their purpose. These align with the multi-datastore copy-offload feature being added.

tests/test_copyoffload_migration.py (2)
802-821: LGTM - Proper configuration access pattern with explicit validation.

This correctly follows the coding guidelines: using `.get()` for external/provider configurations (lines 804-806) with explicit validation and context-rich error messages (lines 810-820), while using direct access for codebase configurations (line 807). This pattern is actually more robust than some earlier tests in this file.
825-880: LGTM - Test logic follows established patterns.

The test correctly:
- Creates network migration map
- Builds offload plugin configuration
- Creates storage migration map with both primary and secondary datastores
- Executes migration
- Verifies disk count on migrated VM
This follows the same structure as other copy-offload tests in this file.

tests/tests_config/config.py (1)
tests/tests_config/config.py (1)
137-157: LGTM - Test configuration follows project conventions.The configuration entry:
- Uses a descriptive name matching the test function (
test_copyoffload_multi_datastore_migration) ✓- Explicitly sets
warm_migration: False(per coding guidelines) ✓- Sets
copyoffload: Trueto enable copy-offload features ✓- Uses
datastore_id: "secondary_datastore_id"as a string reference that gets resolved invmware.pyduring disk creationThis pattern allows tests to specify secondary datastore placement without hardcoding actual datastore IDs in the test config.
utilities/mtv_migration.py (2)
329-333: LGTM - Proper parameter validation with clear error messages.

The validation logic correctly enforces the dependency chain:
- `secondary_datastore_id` requires `datastore_id` to be set
- `datastore_id` requires `offload_plugin_config` to be set

This prevents invalid configurations early with descriptive `ValueError` messages. The static analysis hint (TRY003) about message placement is a minor style preference that doesn't impact functionality here.
335-361: LGTM - Multi-datastore storage map generation.

The implementation correctly:
- Builds a list of datastores (primary + optional secondary)
- Creates a storage map entry for each datastore with consistent destination configuration
- Shares the same `offloadPlugin` config across entries (correct since both datastores are on the same storage system)
- Provides clear logging for debugging
libs/providers/vmware.py (6)
29-32: LGTM - Error message constant.

Defining error messages as constants is good practice - it ensures consistency and makes them easier to find/update.
35-68: LGTM - Well-structured helper functions.

These functions provide consistent message formatting for capacity-related logging and errors. Good use of the DRY principle - centralized formatting logic that's reused in capacity validation.
799-833: LGTM - Per-datastore capacity validation.

MEDIUM: Good defensive implementation that:
- Groups disks by their target datastore (lines 800-818)
- Only validates capacity for thick-provisioned disks (line 815); thin disks don't need upfront space
- Validates each datastore independently (lines 820-833)
- Provides clear error messages on failure
The magic string `"secondary_datastore_id"` on line 806 matches the test configuration pattern - this is intentional indirection allowing test configs to reference the secondary datastore symbolically.
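A rough sketch of the grouping-and-validation idea, assuming a simplified disk dict with `size_gb`, an optional `datastore_id`, and an optional `thin_provisioned` flag (the real code in `libs/providers/vmware.py` works against vCenter datastore objects, not plain dicts):

```python
from collections import defaultdict


def validate_capacity_per_datastore(
    disks: list[dict],
    free_space_bytes: dict[str, int],
    default_datastore: str,
) -> None:
    """Group thick-provisioned disks by target datastore, then check each independently."""
    required: dict[str, int] = defaultdict(int)
    for disk in disks:
        if disk.get("thin_provisioned", False):
            continue  # thin disks don't reserve space upfront
        target = disk.get("datastore_id") or default_datastore
        required[target] += disk["size_gb"] * 1024**3
    for datastore, needed in required.items():
        available = free_space_bytes[datastore]
        if needed > available:
            raise ValueError(
                f"datastore {datastore}: need {needed} bytes, only {available} free"
            )
```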
836-861: LGTM - Per-disk datastore resolution logic.

The three-way branching correctly handles all cases:
- Secondary datastore (line 842): when `datastore_id == "secondary_datastore_id"` (symbolic reference)
- Custom datastore (line 848): when a specific datastore ID is provided
- Default datastore (line 855): falls back to the primary/target datastore

Each branch includes appropriate logging for debugging datastore placement issues.
875-906: LGTM - Critical datastore backing configuration.

HIGH: This is a critical fix. Setting both `backing_info.datastore` AND `backing_info.fileName` (lines 876-906) is essential for forcing disk creation on the correct datastore. Without both, vSphere may silently create the disk on a different datastore.

The comment on lines 875-876 correctly explains WHY this is necessary - this kind of documentation is valuable for preventing future regressions.
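For illustration, here is the shape of that backing configuration using a plain placeholder object instead of pyVmomi's `vim.vm.device.VirtualDisk.FlatVer2BackingInfo`; the `[datastore] folder/file.vmdk` form of `fileName` is standard vSphere datastore-path syntax, but the VMDK naming scheme here is an assumption:

```python
from types import SimpleNamespace


def configure_disk_backing(datastore_ref, datastore_name: str, vm_name: str, disk_index: int):
    """Set BOTH the datastore reference and an explicit fileName so vSphere
    cannot silently place the disk on a different datastore."""
    # Stand-in for vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing = SimpleNamespace()
    backing.datastore = datastore_ref
    # The bracketed datastore prefix in fileName pins the VMDK path.
    backing.fileName = f"[{datastore_name}] {vm_name}/{vm_name}_{disk_index}.vmdk"
    backing.diskMode = "persistent"
    backing.thinProvisioned = False
    return backing
```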
1013-1013: Formatting change only.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @tests/test_copyoffload_migration.py:
- Around line 711-722: The linter flags the param names in the
pytest.mark.parametrize call here: change the first argument from the
comma-separated string "plan,multus_network_name" to an explicit tuple ("plan",
"multus_network_name") for pytest.mark.parametrize and apply the same
replacement consistently across this test file to avoid style mismatch; update
all other parametrize usages that currently use the string pattern so they use
tuple syntax (e.g., replace any "a,b" occurrences with ("a","b")) while keeping
the existing param lists, indirect/ids args and param values unchanged.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
tests/test_copyoffload_migration.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Always use built-in Python typing (dict, list, tuple, etc.) for type annotations in Python code
Use `openshift-python-wrapper` for all OpenShift/Kubernetes cluster interactions; import resource classes from `ocp_resources.*` and utilities from `ocp_utilities.*`
Never import directly from the `kubernetes` package at runtime; never instantiate `DynamicClient` directly - use `ocp_utilities.infra.get_client()` instead
Import `DynamicClient` from `kubernetes.dynamic` only within `TYPE_CHECKING` blocks or as string annotations for type hints
Every OpenShift resource must be created using the `create_and_store_resource()` function from `utilities/resources.py` only
Keep functions small and focused - maximum ~50 lines per function with single responsibility; extract helpers with underscore prefix for sub-tasks
Keep code simple and readable; avoid over-engineering
Run `ruff check` and `ruff format` for linting and formatting of Python code
Run `mypy` with strict configuration for type checking of Python code
Files:
tests/test_copyoffload_migration.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/**/*.py: Never use default values or `.get()` with fallbacks for configurations defined in the codebase (py_config, plan, tests_params); use direct dictionary access to fail fast on missing keys
Use `.get()` with explicit validation for external/provider configurations to provide context-rich error messages
Use `indirect=True` in the `@pytest.mark.parametrize` decorator when passing parameters to fixtures
Apply pytest markers (tier0, warm, remote, copyoffload) to categorize tests; markers must be defined in pytest.ini
Files:
tests/test_copyoffload_migration.py
{conftest.py,tests/**/*.py}
📄 CodeRabbit inference engine (CLAUDE.md)
Track all created resources in the `fixture_store['teardown']` dictionary for automatic cleanup
Files:
tests/test_copyoffload_migration.py
tests/test_*_migration.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/test_*_migration.py: Test structure must follow standard pattern: parametrize with config, use standard fixtures, create migration maps, execute migration
New test files should follow the naming convention `test_<feature>_migration.py` and be placed in the `tests/` directory
Files:
tests/test_copyoffload_migration.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: shellymiron
Repo: RedHatQE/mtv-api-tests PR: 239
File: README.md:270-331
Timestamp: 2026-01-12T16:24:31.016Z
Learning: Repository RedHatQE/mtv-api-tests: The OpenShift Job templates in README.md and docs/how-to-run-copyoffload-tests.md are intentionally duplicated with identical sections because the files are meant to be different but share common Job template structure; do not suggest consolidating them into a single YAML manifest.
📚 Learning: 2026-01-12T16:24:31.016Z
Learnt from: shellymiron
Repo: RedHatQE/mtv-api-tests PR: 239
File: README.md:270-331
Timestamp: 2026-01-12T16:24:31.016Z
Learning: Repository RedHatQE/mtv-api-tests: The OpenShift Job templates in README.md and docs/how-to-run-copyoffload-tests.md are intentionally duplicated with identical sections because the files are meant to be different but share common Job template structure; do not suggest consolidating them into a single YAML manifest.
Applied to files:
tests/test_copyoffload_migration.py
📚 Learning: 2026-01-13T08:11:43.776Z
Learnt from: CR
Repo: RedHatQE/mtv-api-tests PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-13T08:11:43.776Z
Learning: Applies to tests/**/*.py : Use `.get()` with explicit validation for external/provider configurations to provide context-rich error messages
Applied to files:
tests/test_copyoffload_migration.py
📚 Learning: 2026-01-13T08:11:43.776Z
Learnt from: CR
Repo: RedHatQE/mtv-api-tests PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-13T08:11:43.776Z
Learning: Applies to tests/test_*_migration.py : Test structure must follow standard pattern: parametrize with config, use standard fixtures, create migration maps, execute migration
Applied to files:
tests/test_copyoffload_migration.py
🧬 Code graph analysis (1)
tests/test_copyoffload_migration.py (1)
utilities/mtv_migration.py (4)
- `get_network_migration_map` (385-418)
- `get_storage_migration_map` (269-382)
- `migrate_vms` (32-98)
- `verify_vm_disk_count` (455-480)
🪛 Ruff (0.14.10)
tests/test_copyoffload_migration.py
713-713: Wrong type passed to first argument of pytest.mark.parametrize; expected tuple
Use a tuple for the first argument
(PT006)
🔇 Additional comments (5)
tests/test_copyoffload_migration.py (5)
802-807: LGTM: Correct use of `.get()` with explicit validation for external config

Using `.get()` for provider configuration fields (`storage_vendor_product`, `datastore_id`, `secondary_datastore_id`) followed by explicit validation at lines 810-820 aligns with the coding guideline: "Use `.get()` with explicit validation for external/provider configurations to provide context-rich error messages."
809-820: LGTM: Validation provides context-rich error messages

The validation logic correctly:
- Collects all missing required parameters before failing
- Provides a clear error message listing what's missing
- Has a separate, specific check for `secondary_datastore_id` since it's unique to this multi-datastore test

This fail-fast approach with descriptive errors will help with debugging configuration issues.
846-861: LGTM: Correct usage of the `secondary_datastore_id` parameter

The storage migration map correctly passes both `datastore_id` and `secondary_datastore_id` to `get_storage_migration_map()`. Based on the relevant code snippet from `utilities/mtv_migration.py` (lines 268-381), this will create storage map entries for both datastores with the copy-offload plugin configuration.
863-880: LGTM: Migration execution and verification follow standard pattern

The test correctly:
- Executes migration via `migrate_vms()` with all required parameters
- Verifies disk count post-migration using `verify_vm_disk_count()`

The comment on line 879 accurately describes the expected outcome (1 base + 1 added = 2 total disks).
723-737: Clarify if SSH checks should be skipped for the multi-datastore scenario

The omission of `vm_ssh_connections` is handled gracefully by the code - `check_vms()` accepts it as an optional parameter (`SSHConnectionManager | None = None`) and skips SSH connectivity checks when it's `None` (lines 1129-1158 in `utilities/post_migration.py`). However, this test differs from similar copyoffload tests (`test_copyoffload_multi_disk_migration`, `test_copyoffload_thick_lazy_migration`, `test_copyoffload_rdm_virtual_disk_migration`), which all provide `vm_ssh_connections`.

Verify whether skipping SSH connectivity and static IP preservation checks is intentional for the multi-datastore scenario, or if this should follow the pattern of other copyoffload tests to ensure comprehensive post-migration validation.
/approve
Add Two-Datastore Support for Copy-Offload Migrations
Summary
Adds support for copy-offload migrations with VMs that have disks distributed across two different datastores on the same storage system.
This PR introduces a `secondary_datastore_id` configuration field for copyoffload, allowing disks to be explicitly placed on either the primary or secondary datastore.

Configuration

```json
"copyoffload": {
    "datastore_id": "datastore-123",            // Primary/default
    "secondary_datastore_id": "datastore-456",  // Secondary (same storage system)
    ...
}
```

### Usage in Tests
```json
"add_disks": [{
    "size_gb": 30,
    "datastore_id": "secondary_datastore_id"  // Places disk on secondary datastore
}]
```

## Changes
Implementation

- `tests/test_copyoffload_migration.py`: New `test_copyoffload_multi_datastore_migration` test
- `libs/providers/vmware.py`: Enhanced disk creation to support per-disk datastore assignment with proper `backing_info.fileName` enforcement
- `utilities/mtv_migration.py`: Updated storage map generation to handle both primary and secondary datastores
- `conftest.py`: Updated SSH fixture for backward compatibility
- `tests/tests_config/config.py`: Added two-datastore test configuration
- `README.md` & `.providers.json.example`: Updated with two-datastore examples

Technical Details

- Sets both `backing_info.datastore` AND `backing_info.fileName` to force correct datastore placement

Summary by CodeRabbit
New Features
Tests