feat(backend): Source ObjStore Creds from Env in Tekton Template #1259
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
@@ -146,6 +146,14 @@ func GetArtifactImage() string {
    return GetStringConfigWithDefault(ArtifactImage, DefaultArtifactImage)
}

func GetObjectStoreAccessKey() string {
    return GetStringConfig(ObjectStoreAccessKey)
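For context, GetStringConfig is the apiserver's viper-backed config lookup mentioned in the review below. A rough sketch of how such a getter typically works in the KFP apiserver config package (reconstructed from memory, not copied from this diff):

```go
package common

import (
	"github.com/golang/glog"
	"github.com/spf13/viper"
)

// GetStringConfig returns a required config value and aborts if it is unset.
// With viper's automatic env binding enabled during apiserver startup, a key
// such as "OBJECTSTORECONFIG_ACCESSKEY" resolves from the environment
// variable of the same name, which is how the credentials reach this getter.
func GetStringConfig(configName string) string {
	if !viper.IsSet(configName) {
		glog.Fatalf("Please specify flag %s", configName)
	}
	return viper.GetString(configName)
}
```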
By getting the S3 credentials from viper, the whole cluster has to share the same S3 credentials for all users. By getting them from the secret instead, users, who have full control of their own namespaces, can update the mlpipeline-minio-artifact secret with their own S3 credentials without impacting other users.
(This and the following review feedback items are related, so I'm centralizing the discussion here.)

This makes sense, and it would be good to parameterize these somehow. As you mentioned below, the easiest path is probably to create some new env vars for the S3 Secret name, accesskey key, and secretkey key. It is a little clunky, but it does remove the hardcoded secret name. However, it would mean every namespace has to use identical secret names (i.e. namespace1 and namespace2 can each have a foo-s3-secret with different contents, but the secret must always be named foo-s3-secret and use the same accesskey and secretkey keys).

I think that's a step in the right direction, but it would be better if a namespace could individually reference an arbitrary secret name in a vacuum (namespace1 references foo-secret but namespace2 references bar-secret). I'm not sure of a clean way to incorporate that, though. You wouldn't happen to know of a better vector for retrieving these parameters than apiserver env vars, would you? If not, I can push a changeset that implements the parameterized secret name as outlined above.
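To make that concrete, here is a minimal sketch of the parameterization being discussed, written as additions to the same config file shown in the diff above and reusing its GetStringConfigWithDefault helper. The env var names, defaults, and function names below are hypothetical, not part of this PR:

```go
// Hypothetical config keys; the defaults fall back to the current hard-coded
// values so existing deployments keep working unchanged.
const (
	ObjectStoreSecretName    = "OBJECTSTORECONFIG_SECRETNAME"
	ObjectStoreAccessKeyKey  = "OBJECTSTORECONFIG_ACCESSKEYKEY"
	ObjectStoreSecretKeyKey  = "OBJECTSTORECONFIG_SECRETKEYKEY"
	DefaultObjectStoreSecret = "mlpipeline-minio-artifact"
	DefaultAccessKeyKey      = "accesskey"
	DefaultSecretKeyKey      = "secretkey"
)

// GetObjectStoreSecretName returns the Secret name the archival step should
// reference; every namespace would need to provide a Secret with this name.
func GetObjectStoreSecretName() string {
	return GetStringConfigWithDefault(ObjectStoreSecretName, DefaultObjectStoreSecret)
}

// GetObjectStoreAccessKeyKey and GetObjectStoreSecretKeyKey return the keys
// inside that Secret that hold the access key and secret key, respectively.
func GetObjectStoreAccessKeyKey() string {
	return GetStringConfigWithDefault(ObjectStoreAccessKeyKey, DefaultAccessKeyKey)
}

func GetObjectStoreSecretKeyKey() string {
	return GetStringConfigWithDefault(ObjectStoreSecretKeyKey, DefaultSecretKeyKey)
}
```

The template code would then call t.getSecretKeySelector with these values instead of the literals, which keeps the credentials out of the rendered pod spec while still removing the hard-coded secret name.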
@@ -354,8 +356,8 @@ func (t *Tekton) injectArchivalStep(workflow util.Workflow, artifactItemsJSON ma
    t.getObjectFieldSelector("PIPELINERUN", "metadata.labels['tekton.dev/pipelineRun']"),
    t.getObjectFieldSelector("PODNAME", "metadata.name"),
    t.getObjectFieldSelector("NAMESPACE", "metadata.namespace"),
    t.getSecretKeySelector("AWS_ACCESS_KEY_ID", "mlpipeline-minio-artifact", "accesskey"),
Maybe instead of hard-coding the mlpipeline-minio-artifact secret name, we can parameterize it so users can point to another secret name for S3 credentials.
@@ -354,8 +356,8 @@ func (t *Tekton) injectArchivalStep(workflow util.Workflow, artifactItemsJSON ma
    t.getObjectFieldSelector("PIPELINERUN", "metadata.labels['tekton.dev/pipelineRun']"),
    t.getObjectFieldSelector("PODNAME", "metadata.name"),
    t.getObjectFieldSelector("NAMESPACE", "metadata.namespace"),
    t.getSecretKeySelector("AWS_ACCESS_KEY_ID", "mlpipeline-minio-artifact", "accesskey"),
    t.getSecretKeySelector("AWS_SECRET_ACCESS_KEY", "mlpipeline-minio-artifact", "secretkey"),
    t.getEnvVar("AWS_ACCESS_KEY_ID", objectStoreAccessKey),
Using getEnvVar will expose the S3 credentials as raw strings in the pod spec. We should get the env variables from the secret instead of passing the raw strings into the env variables.
Yikes, I didn't realize this was getting templated into the pod spec directly, sorry about that. The upcoming changeset will revert to getSecretKeySelector.
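For reference, a minimal sketch of the difference being discussed, using the standard corev1 types (illustrative only; the helper names rawEnv and secretRefEnv are not from this PR):

```go
package example

import corev1 "k8s.io/api/core/v1"

// rawEnv inlines the credential as a plain string, so it is readable by anyone
// who can view the rendered pod spec (the problem with using getEnvVar here).
func rawEnv(accessKey string) corev1.EnvVar {
	return corev1.EnvVar{Name: "AWS_ACCESS_KEY_ID", Value: accessKey}
}

// secretRefEnv stores only a reference to the Secret name and key; the kubelet
// resolves the value at container start, so the credential never appears in
// the spec. This is roughly what getSecretKeySelector produces.
func secretRefEnv(secretName, key string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "AWS_ACCESS_KEY_ID",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				Key:                  key,
			},
		},
	}
}
```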
Force-pushed from 5ccfa0a to 3823331 (compare)
Thanks @gmfrasca, could you rebase this PR with the latest commit so we can test it against the new Tekton v1 API?
    ObjectStoreAccessKey string = "OBJECTSTORECONFIG_ACCESSKEY"
    ObjectStoreSecretKey string = "OBJECTSTORECONFIG_SECRETKEY"
    ObjectStoreSecretKey string = "OBJECTSTORECONFIG_SECRETACCESSKEY"
@rimolive I remember you introduced this variable. I don't think we are using it anywhere in open source KFP-Tekton. Are you okay with this change?
FWIW, I believe this was just a typo that I've corrected here. Vanilla KFP has instances of what I've updated it to (OBJECTSTORECONFIG_SECRETACCESSKEY) in its manifests, but as @Tomcli noted it doesn't appear to be used anywhere here in KFP-Tekton, so it never broke anything.
Force-pushed from 3823331 to a126ab4 (compare)
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: gmfrasca, Tomcli The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Which issue is resolved by this Pull Request:
Resolves #
Description of your changes:
There is a hard-coded reference to a Secret named mlpipeline-minio-artifact that contains credentials metadata for the Object Store. This is the only instance of this, so sourcing the metadata from the apiserver's env vars (where it is already provided) makes deployments more flexible and is better practice in general.

This PR also updates a viper config variable to match what is used across the rest of the project and manifests. It was also the only instance of this (_SECRETKEY rather than _SECRETACCESSKEY), so it was likely a typo.

Environment tested:
- Python version (python --version): 3.11.1
- Tekton version (tkn version): v0.31.4
- Kubernetes version (kubectl version): v1.22
- OS (/etc/os-release): Fedora 37

Checklist: