Releases: saritasa-nest/saritasa-devops-helm-charts
eol-prometheus-exporter-0.1.0-dev-12
End of life prometheus exporter. A Kubernetes Helm chart for an exporter that collects end-of-life/support information about products so it can be scraped by Prometheus. You must supply a valid ConfigMap with a list of products and their versions:

```yaml
# Get available products from:
# https://endoflife.date/api/all.json
# and find available cycles in:
# https://endoflife.date/api/{product}.json
eks:
  current: '1.30'
  comment: EKS
django:
  current: '5.1'
  comment: backend
```
Check https://github.com/saritasa-nest/saritasa-devops-tools-eol-exporter/blob/main/config.yaml.example for more example values. Each product must have a `current` field with a valid version as defined in https://endoflife.date/api/{product}.json. A `comment` field is optional; it will be added as a label on the metrics. A Prometheus extra scrape config must be configured in order to watch the metrics in Prometheus. The service name is defined as `$CHART_NAME.$NAMESPACE:$PORT`; by default this is `eol-exporter.prometheus:8080`:

```yaml
extraScrapeConfigs: |
  - job_name: prometheus-eol-exporter
    metrics_path: /metrics
    scrape_interval: 5m
    scrape_timeout: 30s
    static_configs:
      - targets:
          - eol-exporter.prometheus:8080
```
Check https://github.com/saritasa-nest/saritasa-devops-tools-eol-exporter/blob/main/README.md#prometheus-server-config for more information. The exporter provides two metrics:

- `endoflife_expiration_timestamp_seconds`: information about the end of life (EOL) of products. The metric value is the UNIX timestamp of the `eolDate` label.
- `endoflife_expired`: information about the end of life (EOL) of products. Boolean value of 1 for expired products.

Sample query to check whether the EKS EOL is between 10 and 30 days away:

```sh
(endoflife_expiration_timestamp_seconds{name="eks"} - time()) > ((60*60*24) * 10) and (endoflife_expiration_timestamp_seconds{name="eks"} - time()) <= ((60*60*24) * 30)
```

Sample query to check whether the EKS EOL has already happened:

```sh
endoflife_expired{name="eks"} == 1
```
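These queries plug naturally into Prometheus alerting. Below is a minimal sketch of an alerting rule built on `endoflife_expired`; the rule group name and severity label are illustrative, and it assumes the product name is exposed via the `name` label as in the sample queries above:

```yaml
groups:
  - name: eol-exporter  # hypothetical group name
    rules:
      - alert: ProductEndOfLife
        expr: endoflife_expired == 1
        for: 1h
        labels:
          severity: warning  # assumed severity convention
        annotations:
          summary: "Product {{ $labels.name }} has reached its end of life"
```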
saritasa-tekton-apps-0.2.23-dev.4
A Helm chart for tekton apps (rbac, eventlistener). Implements:

- dynamic records for eventlistener
- PVCs
- RBAC
- configmaps for each app
- triggerbindings for each app
- kubernetes job to make sure the PVCs are bound and argocd marks the app as healthy
- argocd project for each app
- argocd application for each app component
- argocd notifications for each app project

## example usage with argocd
Install the chart:

```sh
helm repo add saritasa https://saritasa-nest.github.io/saritasa-devops-helm-charts/
```
then declare a dynamic list of projects (and the associated components of each project, like backend, api, frontend, etc.) to be dynamically added into the tekton eventlistener manifest. Each component should be a separate git repository.

```yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tekton-apps
  namespace: argo-cd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    argocd.argoproj.io/sync-wave: "41"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: ci
  project: default
  source:
    chart: saritasa-tekton-apps
    helm:
      values: |
        environment: staging
        gitBranchPrefixes:
          - staging
        storageClassName: gp3
        nodeSelector:
          ops: 'true'
        aws:
          region: "us-west-2"
        dns: staging.site.com
        defaultRegistry: xxx.dkr.ecr.us-west-2.amazonaws.com
        argocd:
          server: deploy.staging.site.com
        eventlistener:
          enableWebhookSecret: true
        apps:
          - project: vp
            enabled: true
            argocd:
              labels:
                created-by: xxx
                ops-main: xxx
                ops-secondary: xxx
                pm: xxx
                tm: xxx
              namespace: prod
              notifications:
                annotations:
                  # In rocks/cloud cluster use slack-token integration:
                  notifications.argoproj.io/subscribe.on-health-degraded.slack: project-vp; project-vp-alarms
                  notifications.argoproj.io/subscribe.on-sync-failed.slack: project-vp-ci; project-vp-alarms
                  notifications.argoproj.io/subscribe.on-sync-status-unknown.slack: project-vp; project-vp-alarms
                  notifications.argoproj.io/subscribe.on-deployed.slack: project-vp-ci
                  # In staging/prod client cluster use webhook integration:
                  notifications.argoproj.io/subscribe.on-health-degraded.project-webhook: enabled
            mailList: [email protected]
            devopsMailList: [email protected]
            jiraURL: https://site.atlassian.net/browse/vp
            tektonURL: https://tekton.staging.site.com/#/namespaces/ci/pipelineruns
            slack: client-vp-ci
            kubernetesRepository:
              name: vp-kubernetes-aws
              branch: main
              url: [email protected]:org-name/vp-kubernetes-aws.git
            components:
              - name: backend
                repository: vp-backend
                pipeline: buildpack-django-build-pipeline
                applicationURL: https://api.staging.site.com
                argocd:
                  syncWave: 220
                tekton:
                  workspacePVC: 15Gi
                  buildpacksPVC: 25Gi
                eventlistener:
                  template: buildpack-django-build-pipeline-trigger-template
                  triggerBinding:
                    - name: docker_registry_repository
                      value: xxx.dkr.ecr.us-west-2.amazonaws.com/vp/staging/backend
                    - name: buildpack_builder_image
                      value: xxx.dkr.ecr.us-west-2.amazonaws.com/vp/staging/buildpacks/google/builder:v1
                    - name: buildpack_runner_image
                      value: xxx.dkr.ecr.us-west-2.amazonaws.com/vp/staging/buildpacks/google/runner:v1
              - name: frontend
                repository: vp-frontend
                pipeline: buildpack-frontend-build-pipeline
                applicationURL: https://staging.site.com
                argocd:
                  syncWave: 220
                tekton:
                  workspacePVC: 15Gi
                  buildpacksPVC: 25Gi
                eventlistener:
                  template: buildpack-frontend-build-pipeline-trigger-template
                  triggerBinding:
                    - name: docker_registry_repository
                      value: xxx.dkr.ecr.us-west-2.amazonaws.com/vp/staging/frontend
                    - name: buildpack_builder_image
                      value: xxx.dkr.ecr.us-west-2.amazonaws.com/vp/staging/buildpacks/paketo/builder:full
                    - name: buildpack_runner_image
                      value: xxx.dkr.ecr.us-west-2.amazonaws.com/vp/staging/buildpacks/paketo/runner:full
                    - name: source_subpath
                      value: dist/web
        # make sure PVCs are bound after the chart is synced
        # by temporarily mounting them into a short-lived job.
        runPostInstallMountPvcJob: false
    repoURL: https://saritasa-nest.github.io/saritasa-devops-helm-charts/
    targetRevision: "0.1.16"
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
The above helm chart creates a new ArgoCD project for each project in the values; for each component in a project's components, a separate ArgoCD application is created, along with the resources required for Tekton ci/cd (triggerbindings, roles, configmaps, jobs, serviceaccounts, pvcs, etc.). For each ArgoCD project, notifications to multiple Slack channels with different trigger types are added. The example above defines, for each subscription, the Slack channels (project-xx, project-xx-ci, project-xx-alarms) that should be added by default. This can be modified to add or remove a channel if a custom config is needed. There are two ways of activating notifications: the slack-token integration and the project-webhook integration. The slack-token integration allows sending to any Slack channel where the app is installed, which is why it should only be used in the rocks/cloud cluster and not in client clusters. The project-webhook integration can only send to the channel where it was created in the Slack app 'client deployments' (https://api.slack.com/apps/A01LM626QTZ/incoming-webhooks?) and should be used in staging/prod client clusters. The on-sync-status-unknown subscription is only relevant for Wordpress applications (it creates redundant notifications for non-Wordpress apps).
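For instance, a project that only wants failure alerts in its alarms channel could trim the default subscription set down. This is a hypothetical variation on the annotations fragment from the values above (channel names are placeholders):

```yaml
argocd:
  notifications:
    annotations:
      # only alert the alarms channel on degraded health
      notifications.argoproj.io/subscribe.on-health-degraded.slack: project-vp-alarms
      # keep the CI channel informed about failed syncs
      notifications.argoproj.io/subscribe.on-sync-failed.slack: project-vp-ci
      # on-deployed / on-sync-status-unknown are dropped by omitting their annotations
```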
# fill below parameters for each `project` block

- `apps[PROJECT].environment` - possibility to define a custom project environment, needed when `dev` and `prod` envs have to be deployed to the same cluster, e.g. `xxx` dev and prod both deployed in the rocks EKS (not required)
- `apps[PROJECT].enabled` - boolean value defining whether the project is enabled or not (required)
- `apps[PROJECT].argocd.labels` - labels which are added to the ArgoCD project (required)
- `apps[PROJECT].argocd.namespace` - allowed namespace for the ArgoCD project (required)
- `apps[PROJECT].argocd.notifications.annotations[]` - list of Slack channel subscriptions, each with a different trigger
- `apps[PROJECT].argocd.syncWave` - ArgoCD project sync wave, i.e. the sequence in which the project should be synced (not required, default: "200")
- `apps[PROJECT].argocd.sourceRepos[]` - source repositories added to the ArgoCD project (not required, default: [<apps[PROJECT].kubernetesRepository.url>])
- `apps[PROJECT].argocd.extraDestinationNamespaces[]` - adds extra destination namespaces to the ArgoCD project so that custom apps can be created within the project's kubernetes repo (not required, default: null)
- `apps[PROJECT].mailList` - project team's email address (required)
- `apps[PROJECT].devopsMailList` - project devops team's email address (required)
- `apps[PROJECT].jiraURL` - project's JIRA URL (required)
- `apps[PROJECT].tektonURL` - link to Tekton pipelineruns, used in the Tekton ConfigMap as `TEKTON_URL` when sending Slack notifications (required)
- `apps[PROJECT].slack` - project's Slack channel name (required)
- `apps[PROJECT].kubernetesRepository.name` - project's kubernetes repository name, used in the ArgoCD application and Tekton TriggerBinding (may be absent and replaced with `apps[PROJECT].components[NAME].argocd` and `apps[PROJECT].argocd.sourceRepos[]` blocks if the project has no kubernetes repo)
- `apps[PROJECT].kubernetesRepository.branch` - project's kubernetes repository branch, used in the ArgoCD application and Tekton TriggerBinding (may be absent and replaced with `apps[PROJECT].components[NAME].argocd` and `apps[PROJECT].argocd.sourceRepos[]` blocks if the project has no kubernetes repo)
- `apps[PROJECT].kubernetesRepository.url` - project's kubernetes repository url, used in the ArgoCD application and Tekton TriggerBinding (may be absent and replaced with `apps[PROJECT].components[NAME].argocd` and `apps[PROJECT].argocd.sourceRepos[]` blocks if the project has no kubernetes repo)

Basically we have 2 different types of ci/cd - basic (buildpacks, kaniko) and wordpress ones, so depending on the project component's type you will need to fill in different parameters.

# fill below parameters for each `component` block

- `apps[PROJECT].components[NAME].repository` - the name of the repository containing the code (may be absent in case of a wordpress application without deployment, i.e. bolrdswp, taco, saritasa-wordpress-demo)
- `apps[PROJECT].components[NAME].pipeline` - the name of the pipeline building the code from the repository above
- `apps[PROJECT].components[NAME].namespace` - the name of the namespace for the component (optional)
- `apps[PROJECT].components[NAME].argocd.source.syncWave` - custom component ArgoCD application sync wave (default: "210")
- ap...
saritasa-tekton-apps-0.2.23-dev.3
A Helm chart for tekton apps (rbac, eventlistener).
saritasa-tekton-pipelines-0.1.48
A Helm chart for Tekton Pipelines. Implements:

- common tekton tasks
- common tekton pipelines
- common tekton trigger templates
- common tekton trigger bindings

Implemented pipelines include:

- buildpack based pipelines generated from a template (php, python, frontend, nodejs, ruby, go)
- kaniko pipeline
- wordpress pipeline

## example usage with argocd
Install the chart:

```sh
helm repo add saritasa https://saritasa-nest.github.io/saritasa-devops-helm-charts/
```
then, if you want to support only the frontend and django pipelines based on buildpacks, without any script modifications:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tekton-pipelines
  namespace: argo-cd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
    argocd.argoproj.io/sync-wave: "60"
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: ci
  project: default
  source:
    chart: saritasa-tekton-pipelines
    helm:
      values: |
        buildpacks:
          enabled: true
          generate:
            buildpackFrontendBuildPipeline:
              enabled: true
            buildpackDjangoBuildPipeline:
              enabled: true
    repoURL: https://saritasa-nest.github.io/saritasa-devops-helm-charts/
    targetRevision: "0.1.4"
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
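The other generated buildpack pipelines (php, python, nodejs, ruby, go) are enabled the same way. A sketch, assuming the generator keys follow the same `buildpack<Language>BuildPipeline` naming pattern as the two keys shown above — the exact key names should be checked against the chart's values.yaml:

```yaml
buildpacks:
  enabled: true
  generate:
    # hypothetical keys, assuming the naming pattern above
    buildpackNodejsBuildPipeline:
      enabled: true
    buildpackPythonBuildPipeline:
      enabled: true
```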
If you want to modify the behavior of the `build` step, you can easily do that by redefining the steps you want to run prior to the `build` step of the associated buildpack pipeline. You can create multiple versions of pipelines as a result; just make sure to give them different names. An example:

```yaml
buildpacks:
  enabled: true
  generate:
    buildpackFrontendBuildPipelineNew:
      name: buildpack-frontend-build-pipeline-new
      enabled: false
      buildTaskName: buildpack-frontend-new
      buildTaskSteps:
        - name: build-hello-world
          image: node:22
          imagePullPolicy: IfNotPresent
          workingDir: $(resources.inputs.app.path)
          script: |
            #!/bin/bash
            echo "hello world"
      preDeployTaskSteps:
        - name: pre-deploy-hello-world
          image: node:22
          imagePullPolicy: IfNotPresent
          workingDir: $(resources.inputs.app.path)
          script: |
            #!/bin/bash
            echo "hello world"
      extraPostDeployTaskSteps:
        - name: post-deploy-hello-world
          image: node:22
          imagePullPolicy: IfNotPresent
          workingDir: $(resources.inputs.app.path)
          script: |
            #!/bin/bash
            echo "hello world"
```
If you want to modify the `build` step of buildpack's default `build` Task, you just need to add a new `overrideBuildStep` key with the new step content in values.yaml for the required pipeline, and the helm chart will provision a custom `build` step:

```yaml
buildpacks:
  enabled: true
  generate:
    buildpackFrontendBuildPipelineNew:
      name: buildpack-frontend-build-pipeline-new
      enabled: false
      buildTaskName: buildpack-frontend-new
      overrideBuildStep:
        name: build
        image: node:22
        imagePullPolicy: IfNotPresent
        workingDir: $(resources.inputs.app.path)
        script: |
          #!/bin/bash
          az login --identity --username <managed-identity>
          az acr login --name <container-registry>
          /cnb/lifecycle/creator \
            -app=$(params.source_subpath) \
            -project-metadata=project.toml \
            -cache-dir=/cache \
            -layers=/layers \
            -platform=$(workspaces.source.path)/$(params.platform_dir) \
            -report=/layers/report.toml \
            -cache-image=$(params.cache_image) \
            -uid=$(params.user_id) \
            -gid=$(params.group_id) \
            -process-type=$(params.process_type) \
            -skip-restore=$(params.skip_restore) \
            -previous-image=$(resources.outputs.image.url) \
            -run-image=$(params.run_image) \
            $(resources.outputs.image.url)
      buildTaskSteps:
        - name: build-hello-world
          image: node:22
          imagePullPolicy: IfNotPresent
          workingDir: $(resources.inputs.app.path)
          script: |
            #!/bin/bash
            echo "hello world"
```
If you want to modify Kaniko build arguments, you can pass the `kaniko_extra_args` parameter to `kaniko-pipeline`. For example, if you want to pass a `BASE_IMAGE` build argument value to be used in the Dockerfile, you can add the following line to the specific project's trigger-binding:

```yaml
- name: kaniko_extra_args
  value: --build-arg=BASE_IMAGE=965067289393.dkr.ecr.us-west-2.amazonaws.com/saritasa/legacy/php:php71-smart-screen-base
```
The chart can also perform Sentry releases if needed; you can configure this by updating the settings below in values.yaml:

```yaml
sentry:
  enabled: true
  authTokenSecret: "sentry-auth-token"  # auth token to connect to the Sentry API (change it if you have a custom value)
  authTokenSecretKey: "auth-token"      # key for the auth token in the `authTokenSecret` secret (change it if you have a custom value)
  org: "saritasa"                       # name of your Sentry organization (change it if you have a custom value)
  url: https://sentry.saritasa.rocks/   # Sentry url (change it if you have a custom value)
```

After configuring these values, you will have an extra `sentry-release` step after the `argocd-deploy` one for buildpacks and kaniko builds.
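Assuming the referenced secret is provisioned separately, a minimal sketch of what it could look like — the `ci` namespace mirrors the examples above and the token value is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sentry-auth-token  # must match sentry.authTokenSecret
  namespace: ci            # assumed namespace; use wherever the pipelines run
type: Opaque
stringData:
  auth-token: <your-sentry-auth-token>  # key must match sentry.authTokenSecretKey
```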
saritasa-tekton-apps-0.2.23-dev.2
A Helm chart for tekton apps (rbac, eventlistener).
eol-prometheus-exporter-0.1.0-dev-11
End of life prometheus exporter. A Kubernetes Helm chart for an exporter that collects end-of-life/support information about products so it can be scraped by Prometheus.
eol-exporter-0.1.0-dev-11
End of life exporter. A Kubernetes Helm chart for an exporter that collects end-of-life/support information about products so it can be scraped by Prometheus. You must supply a valid ConfigMap with a list of products and their versions. Check https://github.com/saritasa-nest/saritasa-devops-tools-eol-exporter/blob/main/config.yaml.example for example values. Each product must have a `current` field with a valid version as defined in https://endoflife.date/api/{product}.json. A `comment` field is optional; it will be added as a label on the metrics. A Prometheus extra scrape config must be configured in order to watch the metrics in Prometheus. The service name is defined as `$CHART_NAME.$NAMESPACE:$PORT`; by default this is `eol-exporter.prometheus:8080`. An example extraScrapeConfigs is available at https://github.com/saritasa-nest/saritasa-devops-tools-eol-exporter/blob/main/README.md#prometheus-server-config. The exporter provides two metrics:

- `endoflife_expiration_timestamp_seconds`: information about the end of life (EOL) of products. The metric value is the UNIX timestamp of the `eolDate` label.
- `endoflife_expired`: information about the end of life (EOL) of products. Boolean value of 1 for expired products.
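For reference, the product list follows the format shown in config.yaml.example (and in the eol-prometheus-exporter release notes above); a minimal sketch:

```yaml
# Products and cycles come from https://endoflife.date/api/all.json
# and https://endoflife.date/api/{product}.json
eks:
  current: '1.30'
  comment: EKS     # optional, added as a metric label
django:
  current: '5.1'
  comment: backend
```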
terraform-pod-0.0.30
A Helm chart for running infra-dev-aws solutions

## Install the chart

Install the chart:

```sh
helm repo add saritasa https://saritasa-nest.github.io/saritasa-devops-helm-charts/
```
## Use

### Simple case (infra-dev-aws)

```sh
helm upgrade --install CLIENT saritasa/terraform-pod \
  --namespace terraform \
  --set terraform.client=CLIENT \
  --set image.tag=1.9.7 \
  --set github.repository=saritasa-nest/CLIENT-infra-dev-aws \
  --set github.branch=feature/branch \
  --set github.username=YOUR-GITHUB-USERNAME \
  --set github.email=YOUR-GITHUB-EMAIL \
  --set gitCryptKey=$(base64 -w 0 git-crypt-key) \
  --wait
```
### Passing aws-vault short-term credentials (infra-aws)

For infra-aws repos you may want to pass short-term (TTL-limited) AWS credentials from aws-vault:

```sh
( unset AWS_VAULT && creds=$(aws-vault exec saritasa/v2/administrators --json) && \
  helm upgrade --install CLIENT saritasa/terraform-pod \
    --namespace terraform \
    --set terraform.client=CLIENT \
    --set image.tag=1.9.7 \
    --set github.repository=saritasa-nest/CLIENT-infra-aws \
    --set github.branch=feature/branch \
    --set github.username=YOUR-GITHUB-USERNAME \
    --set github.email=YOUR-GITHUB-EMAIL \
    --set gitCryptKey=$(base64 -w 0 path/to/git-crypt-key) \
    --set terraform.token=xxx \
    --set aws.accessKeyId=$(echo $creds | jq -r ".AccessKeyId") \
    --set aws.secretAccessKey=$(echo $creds | jq -r ".SecretAccessKey") \
    --set aws.sessionToken="$(echo $creds | jq -r ".SessionToken")" \
    --set infracost.enabled=true \
    --set terraform.initCommand="make _staging init" \
    --wait && \
  unset creds )
```
Run the command inside `( )` as shown, so that the creds are not exported into your local shell.

## Terminate

```sh
helm delete CLIENT
```

## Debug

If you want to debug the helm chart (after making improvements) you can do the following:
```sh
( unset AWS_VAULT && creds=$(aws-vault exec saritasa/v2/administrators --json) && \
  helm template --release-name debug-tfpod \
    --namespace terraform \
    --set terraform.client=saritasa \
    --set image.tag=1.9.7 \
    --set github.repository=saritasa-nest/some-repo-infra-aws \
    --set github.branch=feature/branch-name \
    --set github.username=your-username \
    --set github.email=your-email \
    --set gitCryptKey=$(base64 -w 0 git-crypt-key) \
    --set aws.accessKeyId="$(echo
```
eol-exporter-0.1.0-dev-9
End of life exporter. A Kubernetes Helm chart for an exporter that collects end-of-life/support information about products so it can be scraped by Prometheus.
eol-exporter-0.1.0-dev-10
End of life exporter. A Kubernetes Helm chart for an exporter that collects end-of-life/support information about products so it can be scraped by Prometheus.