feat: add configmap with mobster task git revision #6961

Open · wants to merge 6 commits into base: main
Conversation

@jedinym jedinym commented Jul 2, 2025

The release-service operator needs access to a ConfigMap containing the git revision of the Mobster tasks to use when processing SBOMs. This mimics the process already used by Conforma to inject the git revision into release pipelines.

https://issues.redhat.com/browse/ISV-5876
https://issues.redhat.com/browse/ISV-6051
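
For illustration, the ConfigMap could look like the sketch below. The name mobster-defaults matches the resource referenced later in the review; the key, value, and namespace are assumptions.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mobster-defaults       # name taken from the review discussion below
  namespace: release-service   # assumed namespace
data:
  git-revision: main           # assumed key: git revision of the Mobster tasks to use
```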


openshift-ci bot commented Jul 2, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jedinym
Once this PR has been reviewed and has the lgtm label, please assign skabashnyuk for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Jul 2, 2025

Hi @jedinym. Thanks for your PR.

I'm waiting for a redhat-appstudio member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@jedinym
Author

jedinym commented Jul 8, 2025

/verify-owners

@jedinym
Author

jedinym commented Jul 9, 2025

/verify-owners

@jedinym jedinym marked this pull request as ready for review July 9, 2025 07:13
@openshift-ci openshift-ci bot requested review from lcarva and manish-jangra July 9, 2025 07:13
@jedinym
Author

jedinym commented Jul 9, 2025

/ok-to-test

Comment on lines 5 to 6:

  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
Member


I'm not sure this is required. Do we have any specific need for it?

Author


I was following this: #1511
We also need to use this CM in the e2e tests.

Do you know of a better way of achieving that?

Member


AFAICT, KubeSaw was used to bind each user to the view ClusterRole in each namespace the user has access to. That's not supported by the new backend. As I understand it, adding the ConfigMap to the view ClusterRole should have no effect nowadays. Sharing with the system:authenticated group should work instead.

However, this means that every user in Konflux will be able to read the ConfigMap. Is this what we want to achieve here? From the PR description it seems to me we want to enable the release-service operator only, right?

The release-service operator needs access to a configmap with the git revision of Mobster tasks to use, to process SBOMs.

In this case, I think we should restrict to the release-service operator's ServiceAccount.

Author


The ConfigMap does not contain any sensitive information, so I would be fine with exposing it to everyone. Do you still think it's a bad idea?

Member


If there is no need for users to have visibility into it and no particular reason for choosing this approach, I find scoping to the interested ServiceAccount only a cleaner solution.
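
A sketch of that scoping with a RoleBinding; the ServiceAccount name and namespace here are hypothetical, not taken from the release-service deployment.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-mobster-defaults
  namespace: release-service                   # assumed namespace of the ConfigMap
subjects:
  - kind: ServiceAccount
    name: release-service-controller-manager   # assumed ServiceAccount name
    namespace: release-service
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-mobster-defaults                  # a Role granting get on the ConfigMap
```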

Contributor

@sadlerap sadlerap left a comment


You're going to need to add a mobster ApplicationSet underneath argo-cd-apps/base/member/infra-deployments and register it with the development overlay if you want to have these in e2e tests.
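
As a rough skeleton, such an ApplicationSet might look like the following. Everything here is illustrative (component name, repo URL, path, sync policy); the real one should mirror the existing ApplicationSets under argo-cd-apps/base/member/infra-deployments.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: mobster                    # hypothetical component name
spec:
  generators:
    - clusters: {}                 # one Application per registered member cluster
  template:
    metadata:
      name: mobster-{{name}}
    spec:
      project: default
      source:
        repoURL: https://github.com/redhat-appstudio/infra-deployments.git
        targetRevision: main
        path: components/mobster   # assumed path to the component's manifests
      destination:
        server: "{{server}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```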

Comment on lines 11 to 12:

  resourceNames:
    - mobster-defaults
Contributor


If we're scoping this to a specific resource, why do we need a ClusterRole? I presume this configmap isn't going to exist in all namespaces, so a Role is probably better here.

Author


Changed to Role.
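
The resulting Role might look like this sketch (namespace assumed). Note that resourceNames cannot restrict list requests, so only get is granted here.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-mobster-defaults
  namespace: release-service            # assumed namespace
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["mobster-defaults"]
    verbs: ["get"]                      # resourceNames does not apply to list
```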

@jedinym
Author

jedinym commented Jul 14, 2025

@sadlerap I added the ApplicationSet. Could you please clarify what you mean by "register it with the development overlay"?

@sadlerap
Contributor

> @sadlerap I added the ApplicationSet. Could you please clarify what you mean by "register it with the development overlay"?

You need to ensure the applicationset for this component shows up in the overlays for the clusters you want to deploy it on. For instance, the internal production clusters are largely deployed by rendering the kustomize template at argo-cd-apps/overlays/production-downstream/.

@filariow
Member

> @sadlerap I added the ApplicationSet. Could you please clarify what you mean by "register it with the development overlay"?
>
> You need to ensure the applicationset for this component shows up in the overlays for the clusters you want to deploy it on. For instance, the internal production clusters are largely deployed by rendering the kustomize template at argo-cd-apps/overlays/production-downstream/.

You can take inspiration from #5163, #5203, and similar PRs. In your case, you also need to target the development overlay. Please scope this PR to development and staging clusters only and address production in a follow-up PR.
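
Registering the ApplicationSet with an overlay then amounts to listing it in that overlay's kustomization.yaml; the paths below are illustrative, modeled on the directories mentioned above.

```yaml
# argo-cd-apps/overlays/development/kustomization.yaml (illustrative path)
resources:
  - ../../base/member/infra-deployments/mobster
```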


openshift-ci bot commented Jul 15, 2025

@jedinym: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/appstudio-e2e-tests | Commit: bff63cc | Required: true | Rerun command: /test appstudio-e2e-tests

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
