
Conversation

RamLavi (Collaborator) commented Apr 1, 2025

What this PR does / why we need it:
This PR introduces a GitHub Actions workflow that runs the kubevirtci bumper every 3 months, in order to keep CNAO up to date.

Special notes for your reviewer:

Release note:

NONE

kubevirt-bot added the release-note-none and dco-signoff: yes labels Apr 1, 2025
RamLavi (Author) commented Apr 1, 2025

@oshoval wdyt?

oshoval (Collaborator) commented Apr 2, 2025

Nice, thanks.
Let's change to 3M?

One thing to consider, please, is the k8s version we use; sometimes the old one is dropped.
We could introduce auto-detection of the latest version, wdyt?
OTOH, sometimes kubevirtci has a newer version than the one we aim for (and sometimes we use an older one than the one we aim for).
The best would be if we could query to detect the desired current k8s version automatically.

oshoval (Collaborator) commented Apr 2, 2025

We might want to make sure this change is robust:
https://github.com/kubevirt/cluster-network-addons-operator/pull/2131/files#diff-030affb1cecb4332ce886f9c3e14ed4740eff8868bfa708e5ae8f83f0bb133aaR4

I just changed it to make it work, not sure if it is consistent;
if it isn't, it means the job will fail and we won't see it.

RamLavi (Author) commented Apr 2, 2025

Nice, thanks. Let's change to 3M?

np

One thing to consider, please, is the k8s version we use; sometimes the old one is dropped. We could introduce auto-detection of the latest version, wdyt? OTOH, sometimes kubevirtci has a newer version than the one we aim for (and sometimes we use an older one than the one we aim for). The best would be if we could query to detect the desired current k8s version automatically.

Do you mean the KUBEVIRT_PROVIDER?

oshoval (Collaborator) commented Apr 2, 2025

Do you mean the KUBEVIRT_PROVIDER?

yep

RamLavi (Author) commented Apr 2, 2025

Do you mean the KUBEVIRT_PROVIDER?

yep

We can add another workflow to bump to the maximum KUBEVIRT_PROVIDER (with format of k8s-x.y). wdyt?

RamLavi (Author) commented Apr 2, 2025

We might want to make sure this change is robust: https://github.com/kubevirt/cluster-network-addons-operator/pull/2131/files#diff-030affb1cecb4332ce886f9c3e14ed4740eff8868bfa708e5ae8f83f0bb133aaR4

I just changed it to make it work, not sure if it is consistent; if it isn't, it means the job will fail and we won't see it.

If we see the GitHub Action failing, we'll tackle it then, I think.

oshoval (Collaborator) commented Apr 2, 2025

We can add another workflow to bump to the maximum KUBEVIRT_PROVIDER (with format of k8s-x.y). wdyt?

It is problematic, because sometimes the latest kubevirtci is more advanced than the one we are working on, since the upstream CI team adds it before it's needed.

For sure we can discuss it in follow-ups with Brian and Daniel.

thanks

ashokpariya0 (Member) commented:

Is it possible to check whether a tag supports multi-architecture? For example, the latest tag (2504011456-29ba6f0e) from KubeVirtCI GitHub tags does not support multi-architecture (you can verify this by checking the corresponding Go-CLI tag on Quay.io), while the previous tag (2503281142-a291de27) does. Therefore, always using the latest tag might not be reliable

oshoval (Collaborator) commented Apr 2, 2025

Is it possible to check whether a tag supports multi-architecture? For example, the latest tag (2504011456-29ba6f0e) from KubeVirtCI GitHub tags does not support multi-architecture (you can verify this by checking the corresponding Go-CLI tag on Quay.io), while the previous tag (2503281142-a291de27) does. Therefore, always using the latest tag might not be reliable

Either by podman inspect, or the quay.io web page shows it
(also with docker there is a trick, but with podman it's easier).

We don't need to add such validation to the git actions, OTOH;
for that we have the lanes you and we create, and if it fails we will see it there.
There was a little hiccup, that's all; it doesn't mean we need to be extra safe from now on in this aspect.
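
For reference, a minimal command-line sketch of that check (the image path quay.io/kubevirtci/gocli is an assumption here, adjust if it differs):

# hedged sketch: list the architectures published for a given gocli tag
TAG=2503281142-a291de27
podman manifest inspect quay.io/kubevirtci/gocli:${TAG} | grep '"architecture"'
# a multi-arch tag prints several entries (amd64, arm64, s390x, ...);
# a single-arch tag prints one entry or fails because it is not a manifest list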

RamLavi (Author) commented Apr 2, 2025

Is it possible to check whether a tag supports multi-architecture? For example, the latest tag (2504011456-29ba6f0e) from KubeVirtCI GitHub tags does not support multi-architecture (you can verify this by checking the corresponding Go-CLI tag on Quay.io), while the previous tag (2503281142-a291de27) does. Therefore, always using the latest tag might not be reliable

Either by podman inspect, or the quay.io web page shows it (also with docker there is a trick, but with podman it's easier).

We don't need to add such validation to the git actions, OTOH; for that we have the lanes you and we create, and if it fails we will see it there. There was a little hiccup, that's all; it doesn't mean we need to be extra safe from now on in this aspect.

I agree with @oshoval. If anything, a check like this should be inside the bump-kubevirt.sh script itself, and not in this workflow.

oshoval (Collaborator) commented Apr 2, 2025

I agree with @oshoval. If anything, a check like this should be inside the bump-kubevirt.sh script itself, and not in this workflow.

I would say no check at all; the lanes are already good enough.
We can't add validation based on every problem we see; it is expensive to run and maintain.
But thanks a lot for caring and contributing.

ashokpariya0 (Member) commented:

Okay, thanks. Yes, I agree with you guys; there's no need to add such checks. We'll see if anything fails.

RamLavi (Author) commented Apr 2, 2025

We can add another workflow to bump to the maximum KUBEVIRT_PROVIDER (with format of k8s-x.y). wdyt?

It is problematic, because sometimes the latest kubevirtci is more advanced than the one we are working on, since the upstream CI team adds it before it's needed.

For sure we can discuss it in follow-ups with Brian and Daniel.

thanks

Then I'll try to add a step that uses the chosen tag to figure out the latest provider, and use that.

oshoval (Collaborator) commented Apr 2, 2025

We can add another workflow to bump to the maximum KUBEVIRT_PROVIDER (with format of k8s-x.y). wdyt?

It is problematic, because sometimes the latest kubevirtci is more advanced than the one we are working on, since the upstream CI team adds it before it's needed.
For sure we can discuss it in follow-ups with Brian and Daniel.
thanks

Then I'll try to add a step that uses the chosen tag to figure out the latest provider, and use that.

It isn't good, because the latest might be newer than the one we use, since the CI team adds the cutting-edge, not-yet-used version.
If you mean the latest that we do need to use, this needs some kind of query that will know how to answer that, and to maintain it.


- name: Check for changes
  id: changes
  run: |
Collaborator:

please consider
run: echo "changed=$(! git diff --quiet cluster/cluster.sh && echo true || echo false)" >> "$GITHUB_OUTPUT"
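
For readability, an equivalent multi-line form of that suggested one-liner (a minimal sketch) would be:

# `git diff --quiet` exits 0 when cluster/cluster.sh is unchanged, so the
# negation marks "changed" as true only when the bump modified the file
if ! git diff --quiet cluster/cluster.sh; then
  echo "changed=true" >> "$GITHUB_OUTPUT"
else
  echo "changed=false" >> "$GITHUB_OUTPUT"
fi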

Collaborator (Author):

I would prefer to keep it as is.


on:
  schedule:
    - cron: '0 0 1 1,5,9 *' # 00:00 on the 1st of Jan, May, Sep
Collaborator:

Please change to 3M; it's worth changing the comment to say it is 3M.
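
(For reference, a cron expression that fires every three months on the 1st of the month would be something like '0 0 1 */3 *', i.e. Jan, Apr, Jul, Oct.)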

Collaborator (Author):

DONE

kubevirt-bot added the lgtm label Apr 2, 2025
Run it every 3 months to keep things up to date.

Signed-off-by: Ram Lavi <[email protected]>
RamLavi force-pushed the add_bump_kubevirtci_gitactions branch from 6ebf3c5 to 5c2b9b8 on April 3, 2025 08:24
kubevirt-bot removed the lgtm label Apr 3, 2025
RamLavi (Author) commented Apr 3, 2025

@oshoval added a step to also bump kubevirt-provider. PTAL

RamLavi changed the title from "gitActions: Add workflow to bump kubevirtci tag" to "gitActions: Add workflow to bump kubevirtci" Apr 3, 2025
| grep '"name": "k8s-[0-9]\+\.[0-9]\+"' \
| sed -E 's/.*"k8s-([0-9]+\.[0-9]+)".*/\1/' \
| sort -V \
| tail -n1)
Collaborator:

Thanks,
I didn't look at it all deeply yet, but:

We can't take tail -n1.
Even if we wanted to be on the bleeding edge (i.e. a k8s version higher than the one that is recommended at the very moment),
we can't, because when we freeze and create a branch, the branch needs to use the recommended one.

I.e. release-0.97 uses k8s version X, not X+1, even if kubevirtci already has it.

What you can do is count, and always take the 3rd starting from the oldest.
In most cases it will work; it would stop working if the providers suddenly had different naming,
as happened in the past, where we had
k8s-x-centos9 and such, which might break this formula, but we can do it meanwhile and be proactive according to needs.

Collaborator (Author):

Ah, I see.

What you can do is count, and always take the 3rd starting from the oldest.

Why third? I don't get the logic.

Collaborator:

We always support 3 versions.
Let's say we have

1.30
1.31
1.32 - the one we officially need to support

or

1.30
1.31
1.32 - the one we officially need to support
1.33 - too new, just because kubevirtci is on the bleeding edge sometimes

So if we take the 3rd (counting from the oldest), it is the most recent one that we do want to use,
same as what we run on kubevirt phase 1.

This assumes there aren't ad hoc surprises like 1.31-special that break the logic,
but we can assume most of the time there aren't:

1.30
1.31
1.31-special
1.32
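
A minimal sketch of this "3rd from oldest" heuristic, reusing the grep/sed/sort pipeline quoted above (PROVIDERS_JSON is a placeholder for whatever source the workflow already queries for the k8s-x.y provider names):

third_oldest=$(echo "${PROVIDERS_JSON}" \
  | grep '"name": "k8s-[0-9]\+\.[0-9]\+"' \
  | sed -E 's/.*"k8s-([0-9]+\.[0-9]+)".*/\1/' \
  | sort -V \
  | head -n3 \
  | tail -n1)
echo "KUBEVIRT_PROVIDER=k8s-${third_oldest}"
# with 1.30 1.31 1.32 this picks 1.32; with an extra bleeding-edge 1.33 it still picks 1.32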

Collaborator (Author):

Sorry, I don't get it. How do we know which version is "the one we officially need to support"? Can't we just use that?

Collaborator (Author):

In the end we need to agree on the logic of "which of the 3 will it choose".

Collaborator:

Let's discuss offline and split into 2 PRs?

Collaborator:

#2216 (comment)

Removing the KUBEVIRT_PROVIDER setting. The nightly should use the default.

Food for thought:
This can be an idea for how to know the suggested provider to use:
we can suggest on kubevirtci that it will know to use the latest by default; it will know the formula
and will also handle rare corner cases like the ones we discussed.

RamLavi force-pushed the add_bump_kubevirtci_gitactions branch from 5c2b9b8 to 574191a on April 3, 2025 11:31
sonarqubecloud bot commented Apr 3, 2025

RamLavi (Author) commented Apr 6, 2025

@oshoval PTAL

oshoval (Collaborator) commented Apr 6, 2025

/lgtm
Would you be able to trigger it to test it, please?

Unless we are already on the latest.

kubevirt-bot added the lgtm label Apr 6, 2025
RamLavi (Author) commented Apr 7, 2025

@oshoval once it's merged, I'll run it.
/approve

kubevirt-bot commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RamLavi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

kubevirt-bot added the approved label Apr 7, 2025
kubevirt-bot merged commit 7af808f into kubevirt:main Apr 7, 2025
15 checks passed