gitActions: Add workflow to bump kubevirtci #2217
Conversation
@oshoval wdyt?
Nice, thanks. One thing to consider, please: the k8s version we use; sometimes the old one is dropped.
We might want to make sure this change is robust; I just changed it to make it work, and I'm not sure it is consistent.
np
Do you mean the KUBEVIRT_PROVIDER?
Yep.
We can add another workflow to bump to the maximum KUBEVIRT_PROVIDER (with the format k8s-x.y). wdyt?
If we see the gitAction failing, we'll tackle it then, I think.
It is problematic, because sometimes the latest kubevirtci is more advanced than the one we are working on, since the upstream CI team adds it before it's needed. For sure, we can discuss it in follow-ups with Brian and Daniel. Thanks.
Is it possible to check whether a tag supports multi-architecture? For example, the latest tag (2504011456-29ba6f0e) from KubeVirtCI GitHub tags does not support multi-architecture (you can verify this by checking the corresponding Go-CLI tag on Quay.io), while the previous tag (2503281142-a291de27) does. Therefore, always using the latest tag might not be reliable |
Either by podman inspect, or the quay.io web page shows it. OTOH, we don't need to add such validation to the git actions.
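For illustration only (the thread agrees below not to add this check), a hedged sketch of how one could verify a tag's architectures with podman; the gocli image path is an assumption based on the Quay.io reference above, not taken from this PR:

# Inspect the manifest of an assumed kubevirtci gocli tag (image path is an assumption).
# A multi-arch tag lists several entries under .manifests[].platform.architecture.
podman manifest inspect quay.io/kubevirtci/gocli:2503281142-a291de27 \
  | jq -r '.manifests[].platform.architecture'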
I agree with @oshoval. If anything, a check like this should be inside the bump-kubevirt.sh script itself, not in this workflow.
I would say no check at all; the lanes are already good enough.
Okay, thanks. Yes, I agree with you guys; there's no need to add such checks. We'll see if anything fails.
Then I'll try to add a step that uses the chosen tag to figure out the latest provider, and use that.
That isn't good, because the latest might be newer than the one we use; the CI team adds the cutting-edge version before it is actually used.
- name: Check for changes
  id: changes
  run: |
Please consider:
run: echo "changed=$(! git diff --quiet cluster/cluster.sh && echo true || echo false)" >> "$GITHUB_OUTPUT"
I would prefer to keep it as is.
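For context, the "changed" output set by such a step would typically gate a later step; the follow-up step below is only an illustrative sketch, not part of this PR:

- name: Create bump PR
  # Illustrative follow-up step (assumed, not from this PR): run only when the
  # check-for-changes step above reported a modification to cluster/cluster.sh.
  if: steps.changes.outputs.changed == 'true'
  run: echo "cluster/cluster.sh changed, a bump PR would be opened here"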
on:
  schedule:
    - cron: '0 0 1 1,5,9 *' # 00:00 on the 1st of Jan, May, Sep
Please change it to every 3 months; it's also worth changing the comment to say it is every 3 months.
DONE
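The final expression isn't quoted in the thread; as an assumed example, a schedule that fires every 3 months could look like this:

on:
  schedule:
    - cron: '0 0 1 */3 *' # 00:00 on the 1st of Jan, Apr, Jul, Oct (every 3 months)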
Run it every 3 months to keep things up to date. Signed-off-by: Ram Lavi <[email protected]>
Force-pushed 6ebf3c5 to 5c2b9b8.
@oshoval added a step to also bump kubevirt-provider. PTAL
hack/bump-kubevirt-provider.sh (Outdated)
  | grep '"name": "k8s-[0-9]\+\.[0-9]\+"' \
  | sed -E 's/.*"k8s-([0-9]+\.[0-9]+)".*/\1/' \
  | sort -V \
  | tail -n1)
Thanks. I didn't look at everything deeply yet, but:
We can't take tail -n1. Even if we wanted to be on the bleeding edge (i.e. a k8s version higher than the one recommended at the moment), we can't, because when we freeze and create a branch, the branch needs to use the recommended one; i.e. release-0.97 uses k8s version X, not X+1, even if kubevirtci already has it.
What you can do is count, and always take the 3rd starting from the oldest. In most cases it will work; it would stop working if the providers suddenly get different naming, as happened in the past when we had k8s-x-centos9 and such, which might break this formula. But we can do it meanwhile and be proactive according to needs.
Ah, I see. Regarding "count, and always take the 3rd starting from oldest": why the third? I don't get the logic.
We always support 3 versions. Let's say we have:
1.30
1.31
1.32 - the one we officially need to support
or:
1.30
1.31
1.32 - the one we officially need to support
1.33 - too new, present only because kubevirtci is sometimes on the bleeding edge
So if we take the 3rd (counting from the oldest), it is the most recent one that we do want to use, the same as what we run on kubevirt phase 1, assuming there aren't ad hoc surprises that break the logic, like 1.31-special here:
1.30
1.31
1.31-special
1.32
But we can assume that most of the time there aren't.
Sorry, I don't get it. How do we know which version is "the one we officially need to support"? Can't we just use that?
In the end we need to agree on the logic of "which of the 3 will it choose".
Let's discuss offline and split into 2 PRs?
Removing the KUBEVIRT_PROVIDER bump step; the nightly should use the default.
Food for thought: this could be an idea for how to know the suggested provider to use. We can suggest on kubevirtci that it knows to use the latest by default; it would know the formula and also handle the rare corner cases we discussed.
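As an illustration of the "take the 3rd counting from the oldest" formula discussed above, a minimal sketch; the input file and variable names are assumptions, and the real script may differ:

# providers.json stands in for the kubevirtci provider/tag listing (assumed input).
latest_supported=$(grep '"name": "k8s-[0-9]\+\.[0-9]\+"' providers.json \
  | sed -E 's/.*"k8s-([0-9]+\.[0-9]+)".*/\1/' \
  | sort -V \
  | sed -n '3p') # take the 3rd counting from the oldest, instead of tail -n1
echo "KUBEVIRT_PROVIDER=k8s-${latest_supported}"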
Force-pushed 5c2b9b8 to 574191a.
@oshoval PTAL
/lgtm unless we are already on latest
@oshoval once it's merged, I'll run it.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: RamLavi
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing



What this PR does / why we need it:
This PR introduces a gitActions workflow that runs the kubevirtci bumper every 3 months, in order to keep CNAO up to date.
Special notes for your reviewer:
Release note: