am-agrawa changed the title from "test_change_cluster_resource_profile[balanced] fails with ResourceLeftoversException" to "test_change_cluster_resource_profile[balanced] fails with ResourceLeftoversException during teardown" on Oct 30, 2024
Since the test changes the cluster resource profile, the CPU and memory values of the pods change and they get re-created, so the teardown environment check reports a ResourceLeftoversException. I will explore whether there is a better way to handle this; a rough idea is sketched after the failure log below.
ODF 4.17
2024-10-30 11:19:19 leftover_detected = False
2024-10-30 11:19:19
2024-10-30 11:19:19 leftovers = {"Leftovers added": [], "Leftovers removed": []}
2024-10-30 11:19:19 for kind, kind_diff in diffs_dict.items():
2024-10-30 11:19:19 if not kind_diff:
2024-10-30 11:19:19 continue
2024-10-30 11:19:19 if kind_diff[0]:
2024-10-30 11:19:19 leftovers["Leftovers added"].append({f"{kind}": kind_diff[0]})
2024-10-30 11:19:19 leftover_detected = True
2024-10-30 11:19:19 if kind_diff[1]:
2024-10-30 11:19:19 leftovers["Leftovers removed"].append({f"{kind}": kind_diff[1]})
2024-10-30 11:19:19 leftover_detected = True
2024-10-30 11:19:19 if leftover_detected:
2024-10-30 11:19:19 > raise exceptions.ResourceLeftoversException(
2024-10-30 11:19:19 f"\nThere are leftovers in the environment after test case:"
2024-10-30 11:19:19 f"\nResources added:\n{yaml.dump(leftovers['Leftovers added'])}"
2024-10-30 11:19:19 f"\nResources "
2024-10-30 11:19:19 f"removed:\n {yaml.dump(leftovers['Leftovers removed'])}"
2024-10-30 11:19:19 )
2024-10-30 11:19:19 E ocs_ci.ocs.exceptions.ResourceLeftoversException:
2024-10-30 11:19:19 E There are leftovers in the environment after test case:
2024-10-30 11:19:19 E Resources added:
2024-10-30 11:19:19 E - 'pods':
2024-10-30 11:19:19 E - apiVersion: v1
2024-10-30 11:19:19 E kind: Pod
2024-10-30 11:19:19 E metadata:
2024-10-30 11:19:19 E annotations:
2024-10-30 11:19:19 E k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.129.2.76/23"],"mac_address":"0a:58:0a:81:02:4c","gateway_ips":["10.129.2.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.129.2.1"},{"dest":"172.30.0.0/16","nextHop":"10.129.2.1"},{"dest":"100.64.0.0/16","nextHop":"10.129.2.1"}],"ip_address":"10.129.2.76/23","gateway_ip":"10.129.2.1","role":"primary"}}'
2024-10-30 11:19:19 E k8s.v1.cni.cncf.io/network-status: "[{\n "name": "ovn-kubernetes",\n
2024-10-30 11:19:19 E \ "interface": "eth0",\n "ips": [\n "10.129.2.76"\n
2024-10-30 11:19:19 E \ ],\n "mac": "0a:58:0a:81:02:4c",\n "default": true,\n
2024-10-30 11:19:19 E \ "dns": {}\n}]"
2024-10-30 11:19:19 E openshift.io/scc: rook-ceph
2024-10-30 11:19:19 E creationTimestamp: '2024-10-30T05:28:23Z'
2024-10-30 11:19:19 E generateName: rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-59d44f4bb5-
2024-10-30 11:19:19 E labels:
2024-10-30 11:19:19 E app: rook-ceph-mds
2024-10-30 11:19:19 E app.kubernetes.io/component: cephfilesystems.ceph.rook.io
2024-10-30 11:19:19 E app.kubernetes.io/created-by: rook-ceph-operator
2024-10-30 11:19:19 E app.kubernetes.io/instance: ocs-storagecluster-cephfilesystem-a
2024-10-30 11:19:19 E app.kubernetes.io/managed-by: rook-ceph-operator
2024-10-30 11:19:19 E app.kubernetes.io/name: ceph-mds
2024-10-30 11:19:19 E app.kubernetes.io/part-of: ocs-storagecluster-cephfilesystem
2024-10-30 11:19:19 E ceph_daemon_id: ocs-storagecluster-cephfilesystem-a
2024-10-30 11:19:19 E ceph_daemon_type: mds
2024-10-30 11:19:19 E mds: ocs-storagecluster-cephfilesystem-a
2024-10-30 11:19:19 E odf-resource-profile: balanced
2024-10-30 11:19:19 E pod-template-hash: 59d44f4bb5
2024-10-30 11:19:19 E rook.io/operator-namespace: openshift-storage
2024-10-30 11:19:19 E rook_cluster: openshift-storage
2024-10-30 11:19:19 E rook_file_system: ocs-storagecluster-cephfilesystem
2024-10-30 11:19:19 E name: rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-59d44f4bmwm7r
2024-10-30 11:19:19 E namespace: openshift-storage
Run: https://url.corp.redhat.com/915ce15
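One possible direction for handling this (a minimal sketch only, in plain Python rather than any existing ocs-ci helper; the label choice and the function names are assumptions): pair up "added" and "removed" pods that carry the same stable identity labels, so a daemon that was merely re-created with new CPU/memory requests after the profile change is not counted as a leftover.

# Hypothetical sketch, not the ocs-ci API: treat an "added" pod and a
# "removed" pod as the same daemon if they share stable identity labels,
# so a rollout triggered by a resource-profile change is not a leftover.

# Assumed to stay the same across a rollout (see the leftover pod above:
# app=rook-ceph-mds, ceph_daemon_type=mds, ceph_daemon_id=...-cephfilesystem-a).
STABLE_LABELS = ("app", "ceph_daemon_type", "ceph_daemon_id")


def stable_pod_key(pod):
    """Build an identity key from labels that do not change when a pod is re-created."""
    labels = pod.get("metadata", {}).get("labels", {})
    return tuple(labels.get(name) for name in STABLE_LABELS)


def drop_rolled_out_pods(added, removed):
    """Filter pod diffs, dropping added/removed pairs that are the same daemon re-created."""
    added_keys = {stable_pod_key(p) for p in added}
    removed_keys = {stable_pod_key(p) for p in removed}
    # Ignore pods that carry none of the identity labels so they never pair up by accident.
    rolled_out = (added_keys & removed_keys) - {(None,) * len(STABLE_LABELS)}
    return (
        [p for p in added if stable_pod_key(p) not in rolled_out],
        [p for p in removed if stable_pod_key(p) not in rolled_out],
    )

Alternatively, the teardown could re-apply the original resource profile and wait for the rollout to settle before running the leftover check, so the pre-test and post-test snapshots describe the same set of pods.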