
Z3 support - Artifact test #11373


Open · wants to merge 4 commits into master

Conversation

@roydahan (Contributor) commented Jul 10, 2025

Testing

PR pre-checks (self review)

  • I added the relevant backport labels
  • I didn't leave commented-out/debugging code

Reminders

  • Add new configuration options and document them (in sdcm/sct_config.py)
  • Add unit tests to cover my changes (under unit-test/ folder)
  • Update the Readme/doc folder relevant to this change (if needed)

@@ -3,7 +3,7 @@ backtrace_decoding: false
gce_instance_type_db: 'n2-standard-2'
gce_root_disk_type_db: 'pd-ssd'
root_disk_size_db: 50
-gce_n_local_ssd_disk_db: 16
+gce_n_local_ssd_disk_db: 0
roydahan (Contributor Author):

Since I want to add the z3 instance types to the same job as other instance types (n2), I need to think of a way to change this one.

roydahan (Contributor Author):

@soyacz maybe I can add it to the "if" in gce_utils?

Contributor:

In general, we could think of overrides per instance type, if we still want the upfront configuration to be correct.

Another option is to clear this option in the configuration by default and have code that selects it automatically based on known types; other tests would set it as needed.

Contributor:

People will make mistakes (or cause conflicts like this), and each new test that uses z3's will fail because it tries to attach disks.
We could think of overrides per instance type (as we might want to add fewer disks to smaller n2 instances), but I think that's beyond the scope of this PR and should be added as a follow-up.
For this commit, I'd extend the special handling already added for Z3's: inside the if "z3-highmem" in machine_type: clause I'd add instance.disks = [disk for disk in disks if "-data-local-ssd-" not in disk.device_name] - this should remove local SSDs from these types and work around the issue.
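
A minimal, runnable sketch of that suggested filtering; the Disk stand-in and the helper name are illustrative, not the actual gce_utils objects, and only the list comprehension comes from the comment above:

from dataclasses import dataclass
from typing import List


@dataclass
class Disk:
    # stand-in for the GCE attached-disk object; only device_name matters here
    device_name: str


def filter_disks_for_machine_type(machine_type: str, disks: List[Disk]) -> List[Disk]:
    """Z3 types ship with fixed local SSDs, so drop any explicitly attached local-SSD entries."""
    if "z3-highmem" in machine_type:
        return [disk for disk in disks if "-data-local-ssd-" not in disk.device_name]
    return disks


disks = [Disk("node-1-root"), Disk("node-1-data-local-ssd-0")]
# local-SSD entries are filtered out only for z3-highmem types
assert [d.device_name for d in filter_disks_for_machine_type("z3-highmem-88", disks)] == ["node-1-root"]
assert len(filter_disks_for_machine_type("n2-highmem-16", disks)) == 2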

Contributor:

So I would suggest dropping the default for it and putting it in the code like this:

import re

# mapping between GCE instance types and the number of local SSD disks we can use for data storage
gce_instances = {
    "n2-highmem-8": 2,
    "n2-highmem-16": 4,
    "n2-highmem-32": 8,
    "n2-highmem-48": 16,
    "n2-highmem-64": 24,
    "n2-highmem-80": 24,
    "n2-highmem-96": 24,
    "n2-highmem-128": 24,
    "z3-.*": 0,
    "c3-.*": 0,
    "c4-.*": 0,
}


def test_01():
    list_gcp_instances = [
        "n2-highmem-8",
        "n2-highmem-16",
        "n2-highmem-32",
        "n2-highmem-48",
        "n2-highmem-64",
        "n2-highmem-80",
        "n2-highmem-96",
        "n2-highmem-128",
        "z3-highssd-16",
        "c3-highmem-16",
        "c4-highmem-16",
    ]
    for i in list_gcp_instances:
        for instance_type_regex, local_ssd_count in gce_instances.items():
            # fullmatch avoids prefix matches, e.g. "n2-highmem-8" also matching "n2-highmem-80"
            if re.fullmatch(instance_type_regex, i):
                print(f"Instance Type: {i}, Local SSD Count: {local_ssd_count}")

Tests should still be able to set it based on their need for disks, but if they don't, each type would get a sensible default.
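
If this mapping approach were adopted, the lookup could look roughly like the sketch below; the helper name and the configured override parameter are assumptions, not existing SCT code, and it reuses the gce_instances mapping from the snippet above:

import re
from typing import Optional


def default_local_ssd_count(machine_type: str, configured: Optional[int] = None) -> int:
    """Use an explicit per-test value when given, otherwise fall back to the
    per-instance-type default from gce_instances (the regex -> count mapping above)."""
    if configured is not None:
        return configured
    for instance_type_regex, local_ssd_count in gce_instances.items():
        if re.fullmatch(instance_type_regex, machine_type):
            return local_ssd_count
    raise ValueError(f"no local-SSD default known for machine type {machine_type!r}")


# a test can still override; otherwise the instance type's default wins
assert default_local_ssd_count("z3-highssd-16") == 0
assert default_local_ssd_count("n2-highmem-32") == 8
assert default_local_ssd_count("n2-highmem-32", configured=4) == 4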

roydahan (Contributor Author):

I prefer @soyacz's solution, since I don't want to start explicitly listing all types and SSD configurations, and we already have special treatment for the Z3 machine type.

Contributor:

You don't have to list them all; you can still have a default. But since this default isn't going to be correct for some of the types, I just suggest organizing it in a way that's clearer, rather than building specific code for each new supported type that needs a different value.

Either way, you'll need to adapt the code for each new type that doesn't fit the defaults.

roydahan added 3 commits July 16, 2025 15:50
Z3 instances come with local_ssd attached by default, without the option to attach more or fewer local_ssd disks.
This commit sets the count automatically to 0 for Z3 instances and reduces the default we have from 16 to 8.
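
A minimal sketch of the behaviour the commit describes; the function name, the z3 prefix check, and the way the configured value is passed in are assumptions, not the actual sdcm code:

def effective_gce_local_ssd_count(machine_type: str, gce_n_local_ssd_disk_db: int = 8) -> int:
    """Z3 instances come with local SSDs pre-attached, so the attach count is forced to 0;
    other instance types use the configured default, which this PR lowers from 16 to 8."""
    if machine_type.startswith("z3-"):
        return 0
    return gce_n_local_ssd_disk_db


assert effective_gce_local_ssd_count("z3-highssd-16") == 0
assert effective_gce_local_ssd_count("n2-highmem-32") == 8
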
@roydahan roydahan marked this pull request as ready for review July 16, 2025 12:56
@roydahan (Contributor Author):
Ready for review; testing has passed.

@fruch (Contributor) left a comment:

LGTM

@@ -3,7 +3,7 @@ backtrace_decoding: false
gce_instance_type_db: 'n2-standard-2'
gce_root_disk_type_db: 'pd-ssd'
root_disk_size_db: 50
-gce_n_local_ssd_disk_db: 16
+gce_n_local_ssd_disk_db: 8
Contributor:

I hope you read the docs carefully :)

@soyacz (Contributor) left a comment:

LGTM
