
[VMware] Sync the disk path or datastore changes for IDE disks, and before any volume resize during start vm (for the volumes on datastore cluster) #10748

Open

sureshanaparti wants to merge 2 commits into 4.20 from vmware-sync-disks-during-start-for-datastorecluster

Conversation

@sureshanaparti (Contributor) commented Apr 17, 2025

Description

In VMware, this PR syncs disk path or datastore changes for IDE disks as well, and performs that sync before any volume resize during VM start (for volumes on a datastore cluster pool).

Fixes #10626

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI
  • test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

Tested VM (with ISO attached) start after storage DRS was triggered in the datastore cluster for the ROOT volume.

How did you try to break this feature and the system with this change?

@sureshanaparti (Contributor, Author)

@blueorangutan package

@blueorangutan

@sureshanaparti a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.


codecov bot commented Apr 17, 2025

Codecov Report

Attention: Patch coverage is 0% with 33 lines in your changes missing coverage. Please review.

Project coverage is 16.13%. Comparing base (0785ba0) to head (b552d90).

Files with missing lines | Patch % | Lines
...oud/hypervisor/vmware/resource/VmwareResource.java | 0.00% | 33 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##               4.20   #10748   +/-   ##
=========================================
  Coverage     16.13%   16.13%           
  Complexity    13216    13216           
=========================================
  Files          5649     5649           
  Lines        496683   496684    +1     
  Branches      60176    60176           
=========================================
+ Hits          80135    80139    +4     
+ Misses       407625   407622    -3     
  Partials       8923     8923           
Flag | Coverage Δ
uitests | 4.01% <ø> (ø)
unittests | 16.98% <0.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13097

@Pearl1594 (Contributor)

@blueorangutan test ol8 vmware-7u3

@Pearl1594 added this to the 4.20.1 milestone Apr 21, 2025
@blueorangutan

@Pearl1594 [SL] unsupported parameters provided. Supported mgmt server os are: ol8, ol9, debian12, rocky8, alma9, suse15, centos7, centos6, alma8, ubuntu18, ubuntu22, ubuntu20, ubuntu24. Supported hypervisors are: kvm-centos6, kvm-centos7, kvm-rocky8, kvm-ol8, kvm-ol9, kvm-alma8, kvm-alma9, kvm-ubuntu18, kvm-ubuntu20, kvm-ubuntu22, kvm-ubuntu24, kvm-debian12, kvm-suse15, vmware-55u3, vmware-60u2, vmware-65u2, vmware-67u3, vmware-70u1, vmware-70u2, vmware-70u3, vmware-80, vmware-80u1, vmware-80u2, vmware-80u3, xenserver-65sp1, xenserver-71, xenserver-74, xenserver-84, xcpng74, xcpng76, xcpng80, xcpng81, xcpng82, xcpng83

@Pearl1594 (Contributor)

@blueorangutan test ol8 vmware-70u3

@blueorangutan

@Pearl1594 a [SL] Trillian-Jenkins test job (ol8 mgmt + vmware-70u3) has been kicked to run smoke tests

@sureshanaparti moved this to In Progress in ACS 4.20.1 Apr 21, 2025
@kiranchavala (Contributor) left a comment


@sureshanaparti Tested the PR; the issue is still present.

@kiranchavala self-assigned this Apr 21, 2025
@blueorangutan

[SF] Trillian test result (tid-13070)
Environment: vmware-70u3 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 68100 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10748-t13070-vmware-70u3.zip
Smoke tests completed. 135 look OK, 6 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test | Result | Time (s) | Test File
test_01_events_resource | Error | 345.74 | test_events_resource.py
test_04_deploy_vm_for_other_user_and_test_vm_operations | Error | 124.08 | test_network_permissions.py
test_01_deployVMInSharedNetwork | Error | 162.58 | test_network.py
test_02_restore_vm_with_disk_offering | Error | 60.19 | test_restore_vm.py
test_03_restore_vm_with_disk_offering_custom_size | Error | 57.20 | test_restore_vm.py
test_02_list_cpvm_vm | Failure | 0.04 | test_ssvm.py
test_04_cpvm_internals | Failure | 0.05 | test_ssvm.py
test_02_restore_vm_strict_tags_failure | Error | 61.63 | test_vm_strict_host_tags.py

…nd before any resize during start vm (for the volumes on datastore cluster)
@sureshanaparti force-pushed the vmware-sync-disks-during-start-for-datastorecluster branch from 65bf768 to b552d90 on April 28, 2025 08:45
@sureshanaparti (Contributor, Author)

@blueorangutan package

@sureshanaparti changed the title from "[WIP][VMware] Sync the disk path or datastore changes for IDE disks, and before any volume resize during start vm (for the volumes on datastore cluster)" to "[VMware] Sync the disk path or datastore changes for IDE disks, and before any volume resize during start vm (for the volumes on datastore cluster)" Apr 28, 2025
@blueorangutan

@sureshanaparti a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@sureshanaparti marked this pull request as ready for review April 28, 2025 08:48
@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13199

@kiranchavala (Contributor)

@blueorangutan test

@blueorangutan

@kiranchavala a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@kiranchavala (Contributor) left a comment


LGTM

  1. Have a CloudStack environment with VMware 8.0, a datastore cluster, and vSphere Storage DRS enabled

  2. Deploy a VM with an ISO image

  3. Stop the VM

  4. Trigger vSphere Storage DRS on one of the datastores in the cluster (this can be done by filling up a datastore)

  5. Start the VM > the VM is able to start (a cloudmonkey sketch of the CloudStack-side commands follows these steps)
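
For reference, a minimal cloudmonkey (cmk) sketch of the CloudStack-side commands for the steps above; the UUIDs are placeholders, and the Storage DRS trigger itself is done on the vSphere side:

  # IDs below are placeholders; adjust to your environment
  cmk deploy virtualmachine zoneid=<zone-uuid> serviceofferingid=<offering-uuid> templateid=<iso-uuid> diskofferingid=<disk-offering-uuid> hypervisor=VMware
  cmk stop virtualmachine id=<vm-uuid>
  # trigger Storage DRS in vSphere (e.g. by filling up the source datastore), then:
  cmk start virtualmachine id=<vm-uuid>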

Logs

 2025-04-29 07:33:14,967 INFO  [c.c.h.v.m.BaseMO] (DirectAgent-240:[ctx-d7f4865e, 10.0.35.221, job-51/job-52, cmd: StartCommand]) (logid:976681f9) Looking for disk device info for volume [i-2-6-VM.vmdk] with base name [i-2-6-VM].
2025-04-29 07:33:14,967 INFO  [c.c.h.v.m.BaseMO] (DirectAgent-240:[ctx-d7f4865e, 10.0.35.221, job-51/job-52, cmd: StartCommand]) (logid:976681f9) Testing if disk device with controller key [200] and unit number [1] has backing of type VirtualDiskFlatVer2BackingInfo.
2025-04-29 07:33:14,967 INFO  [c.c.h.v.m.BaseMO] (DirectAgent-240:[ctx-d7f4865e, 10.0.35.221, job-51/job-52, cmd: StartCommand]) (logid:976681f9) Testing if backing datastore name [ds2] from backing [[ds2] i-2-6-VM/i-2-6-VM.vmdk] matches source datastore name [].
2025-04-29 07:33:14,967 INFO  [c.c.h.v.m.BaseMO] (DirectAgent-240:[ctx-d7f4865e, 10.0.35.221, job-51/job-52, cmd: StartCommand]) (logid:976681f9) Disk backing [[ds2] i-2-6-VM/i-2-6-VM.vmdk] matches device bus name [ide0:1].
2025-04-29 07:33:15,179 INFO  [c.c.h.v.r.VmwareResource] (DirectAgent-240:[ctx-d7f4865e, 10.0.35.221, job-51/job-52, cmd: StartCommand]) (logid:976681f9) Found existing disk info from volume path: i-2-6-VM
2025-04-29 07:33:18,997 DEBUG [c.c.h.v.r.VmwareResource] (DirectAgent-240:[ctx-d7f4865e, 10.0.35.221, job-51/job-52, cmd: StartCommand]) (logid:976681f9) VM i-2-6-VM has been started successfully with hostname i-2-6-VM.
2025-04-29 07:33:19,001 DEBUG [c.c.a.t.Request] (Work-Job-Executor-12:[ctx-c071f476, job-51/job-52, ctx-46c70ab3]) (logid:976681f9) Seq 1-6852789782997631226: Received:  { Ans: , MgmtId: 32987496317188, via: 1(10.0.35.221), Ver: v1, Flags: 10, { StartAnswer } }
2025-04-29 07:33:19,025 INFO  [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-12:[ctx-c071f476, job-51/job-52, ctx-46c70ab3]) (logid:976681f9) Updating volume's disk chain info. Volume: [{"name":"ROOT-6","uuid":"281e0a25-11fa-456e-b61b-aaeb2d086a80"}]. Path: [90e32def4fad4217a9590ac3434484e3] -> [i-2-6-VM], Disk Chain Info: [{"diskDeviceBusName":"ide0:1","diskChain":["[ds1] i-2-6-VM/90e32def4fad4217a9590ac3434484e3.vmdk"]}] -> [{"diskDeviceBusName":"ide0:1","diskChain":["[ds2] i-2-6-VM/i-2-6-VM.vmdk"]}].
2025-04-29 07:33:19,057 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-12:[ctx-c071f476, job-51/job-52, ctx-46c70ab3]) (logid:976681f9) Service SecurityGroup is not supported in the network Network {"id": 204, "name": "test", "uuid": "f5c3bb0a-cf73-42a2-803b-fdd745cffc03", "networkofferingid": 10}
2025-04-29 07:33:19,060 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-12:[ctx-c071f476, job-51/job-52, ctx-46c70ab3]) (logid:976681f9) Service SecurityGroup is not supported in the network Network {"id": 204, "name": "test", "uuid": "f5c3bb0a-cf73-42a2-803b-fdd745cffc03", "networkofferingid": 10}
2025-04-29 07:33:19,063 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-12:[ctx-c071f476, job-51/job-52, ctx-46c70ab3]) (logid:976681f9) VM instance {"id":6,"instanceName":"i-2-6-VM","state":"Running","type":"User","uuid":"3ee2497d-2ace-4c1e-9892-eb3f5a814490"} state transited from [Starting] to [Running] with event [OperationSucceeded]. VM's original host: Host {"id":1,"name":"10.0.35.221","type":"Routing","uuid":"33452dc5-9409-4124-80bf-1b409cfbdc7e"}, new host: Host {"id":1,"name":"10.0.35.221","type":"Routing","uuid":"33452dc5-9409-4124-80bf-1b409cfbdc7e"}, host before state transition: Host {"id":1,"name":"10.0.35.221","type":"Routing","uuid":"33452dc5-9409-4124-80bf-1b409cfbdc7e"}
2025-04-29 07:33:19,066 DEBUG [c.c.v.ClusteredVirtualMachineManagerImpl] (Work-Job-Executor-12:[ctx-c071f476, job-51/job-52, ctx-46c70ab3]) (logid:976681f9) Start completed for VM VM instance {"id":6,"instanceName":"i-2-6-VM","state":"Running","type":"User","uuid":"3ee2497d-2ace-4c1e-9892-eb3f5a814490"}

Comment on lines +2353 to +2398
VirtualMachineDiskInfo matchingExistingDisk = getMatchingExistingDisk(diskInfoBuilder, vol, hyperHost, context);
VolumeObjectTO volumeTO = (VolumeObjectTO) vol.getData();
DataStoreTO primaryStore = volumeTO.getDataStore();
Map<String, String> details = vol.getDetails();
boolean managed = false;
String iScsiName = null;

if (details != null) {
    managed = Boolean.parseBoolean(details.get(DiskTO.MANAGED));
    iScsiName = details.get(DiskTO.IQN);
}

String primaryStoreUuid = primaryStore.getUuid();
// if the storage is managed, iScsiName should not be null
String datastoreName = managed ? VmwareResource.getDatastoreName(iScsiName) : primaryStoreUuid;
Pair<ManagedObjectReference, DatastoreMO> volumeDsDetails = dataStoresDetails.get(datastoreName);

assert (volumeDsDetails != null);
if (volumeDsDetails == null) {
    throw new Exception("Primary datastore " + primaryStore.getUuid() + " is not mounted on host.");
}

if (vol.getDetails().get(DiskTO.PROTOCOL_TYPE) != null && vol.getDetails().get(DiskTO.PROTOCOL_TYPE).equalsIgnoreCase("DatastoreCluster")) {
    if (diskInfoBuilder != null && matchingExistingDisk != null) {
        String[] diskChain = matchingExistingDisk.getDiskChain();
        if (diskChain != null && diskChain.length > 0) {
            DatastoreFile file = new DatastoreFile(diskChain[0]);
            if (!file.getFileBaseName().equalsIgnoreCase(volumeTO.getPath())) {
                if (logger.isInfoEnabled())
                    logger.info("Detected disk-chain top file change on volume: " + volumeTO.getId() + " " + volumeTO.getPath() + " -> " + file.getFileBaseName());
                volumeTO.setPath(file.getFileBaseName());
                vol.setPath(file.getFileBaseName());
            }
        }
        DatastoreMO diskDatastoreMofromVM = getDataStoreWhereDiskExists(hyperHost, context, diskInfoBuilder, vol, diskDatastores);
        if (diskDatastoreMofromVM != null) {
            String actualPoolUuid = diskDatastoreMofromVM.getCustomFieldValue(CustomFieldConstants.CLOUD_UUID);
            if (actualPoolUuid != null && !actualPoolUuid.equalsIgnoreCase(primaryStore.getUuid())) {
                volumeDsDetails = new Pair<>(diskDatastoreMofromVM.getMor(), diskDatastoreMofromVM);
                if (logger.isInfoEnabled())
                    logger.info("Detected datastore uuid change on volume: " + volumeTO.getId() + " " + primaryStore.getUuid() + " -> " + actualPoolUuid);
                ((PrimaryDataStoreTO)primaryStore).setUuid(actualPoolUuid);
            }
        }
    }
}
Contributor

@sureshanaparti, can you extract these lines into extra methods?
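
One possible shape for that extraction (a sketch only; the helper names below are suggestions, not existing VmwareResource methods, and the bodies reuse the code from the quoted block):

// Hypothetical helpers extracted from the datastore-cluster sync block above.
private void syncVolumePathFromDiskChain(DiskTO vol, VolumeObjectTO volumeTO, VirtualMachineDiskInfo matchingExistingDisk) {
    String[] diskChain = matchingExistingDisk.getDiskChain();
    if (diskChain == null || diskChain.length == 0) {
        return;
    }
    DatastoreFile file = new DatastoreFile(diskChain[0]);
    if (!file.getFileBaseName().equalsIgnoreCase(volumeTO.getPath())) {
        logger.info("Detected disk-chain top file change on volume: " + volumeTO.getId() + " " + volumeTO.getPath() + " -> " + file.getFileBaseName());
        // keep the recorded volume path in sync with the top file of the disk chain reported by vCenter
        volumeTO.setPath(file.getFileBaseName());
        vol.setPath(file.getFileBaseName());
    }
}

private Pair<ManagedObjectReference, DatastoreMO> syncVolumeDatastore(VolumeObjectTO volumeTO, DataStoreTO primaryStore,
        DatastoreMO diskDatastoreMofromVM, Pair<ManagedObjectReference, DatastoreMO> volumeDsDetails) throws Exception {
    if (diskDatastoreMofromVM == null) {
        return volumeDsDetails;
    }
    String actualPoolUuid = diskDatastoreMofromVM.getCustomFieldValue(CustomFieldConstants.CLOUD_UUID);
    if (actualPoolUuid != null && !actualPoolUuid.equalsIgnoreCase(primaryStore.getUuid())) {
        logger.info("Detected datastore uuid change on volume: " + volumeTO.getId() + " " + primaryStore.getUuid() + " -> " + actualPoolUuid);
        // point the primary store and datastore details at the datastore where the disk actually lives now
        ((PrimaryDataStoreTO) primaryStore).setUuid(actualPoolUuid);
        return new Pair<>(diskDatastoreMofromVM.getMor(), diskDatastoreMofromVM);
    }
    return volumeDsDetails;
}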

@blueorangutan

[SF] Trillian test result (tid-13178)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 54709 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10748-t13178-kvm-ol8.zip
Smoke tests completed. 140 look OK, 1 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test | Result | Time (s) | Test File
test_02_restore_vm_strict_tags_failure | Failure | 61.72 | test_vm_strict_host_tags.py
test_02_scale_vm_strict_tags_failure | Failure | 63.78 | test_vm_strict_host_tags.py
test_06_deploy_vm_on_any_host_with_strict_tags_failure | Failure | 5.76 | test_vm_strict_host_tags.py

@rohityadavcloud requested a review from nvazquez May 1, 2025 06:08
Projects
Status: In Progress
Development

Successfully merging this pull request may close these issues.

Unable to start a vm which has a iso after vmware storage drs get's triggered in datastore cluster
5 participants