From 2dbdfab441520b878bb6b1b92613f7f22a28387c Mon Sep 17 00:00:00 2001 From: thisisobate Date: Fri, 28 Jun 2024 19:59:59 +0100 Subject: [PATCH 1/2] Feat: version faq docs Signed-off-by: thisisobate --- CONTRIBUTING.md | 113 ++++++++++----- content/docs/2.0/faq.md | 2 +- content/docs/2.1/faq.md | 2 +- content/docs/2.10/faq.md | 2 +- content/docs/2.11/faq.md | 2 +- content/docs/2.12/faq.md | 2 +- content/docs/2.13/faq.md | 2 +- content/docs/2.14/faq.md | 2 +- content/docs/2.14/reference/faq.md | 2 +- content/docs/2.15/reference/faq.md | 2 +- content/docs/2.2/faq.md | 2 +- content/docs/2.3/faq.md | 2 +- content/docs/2.4/faq.md | 2 +- content/docs/2.5/faq.md | 2 +- content/docs/2.6/faq.md | 2 +- content/docs/2.7/faq.md | 2 +- content/docs/2.8/faq.md | 2 +- content/docs/2.9/faq.md | 2 +- data/faq2_14.toml | 217 +++++++++++++++++++++++++++++ data/faq2_15.toml | 217 +++++++++++++++++++++++++++++ layouts/shortcodes/faq20.html | 8 +- 21 files changed, 534 insertions(+), 55 deletions(-) create mode 100644 data/faq2_14.toml create mode 100644 data/faq2_15.toml diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index d909a47ea..46261e8e5 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -8,20 +8,32 @@ Our documentation is versioned so it's important to make the changes for the cor ## Getting Help -If you have a question about KEDA or how best to contribute, the [#KEDA](https://kubernetes.slack.com/archives/CKZJ36A5D) channel on the Kubernetes slack channel ([get an invite if you don't have one already](https://slack.k8s.io/)) is a good place to start. We also have regular [community stand-ups](https://github.com/kedacore/keda#community) to track ongoing work and discuss areas of contribution. For any issues with the product you can [create an issue](https://github.com/kedacore/keda/issues/new) in this repo. 
+If you have a question about KEDA or how best to contribute, the [#KEDA](https://kubernetes.slack.com/archives/CKZJ36A5D) channel in the Kubernetes Slack workspace ([get an invite if you don't have one already](https://slack.k8s.io/)) is a good place to start. We also have regular [community stand-ups](https://github.com/kedacore/keda#community) to track ongoing work and discuss areas of contribution. For any issues with the product, you can [create an issue](https://github.com/kedacore/keda/issues/new) in this repo. ## Contributing New Documentation We provide easy ways to introduce new content: -- [Adding new blog post](#adding-blog-post) -- [Adding new Frequently Asked Question (FAQ)](#add-new-frequently-asked-question-faq) -- [Adding new scaler documentation](#adding-scaler-documentation) -- [Adding new troubleshooting guidance](#add-new-troubleshooting-guidance) -- [Become a listed KEDA user!](#become-a-listed-KEDA-user) -- [Become a listed KEDA commercial offering!](#become-a-listed-KEDA-commercial-offering) -- [Writing documentation for a scaler](#writing-documentation-for-a-new-authentication-provider) -- [Writing documentation for a scaler](#writing-documentation-for-a-scaler) +- [Contributing to KEDA](#contributing-to-keda) + - [Getting Help](#getting-help) + - [Contributing New Documentation](#contributing-new-documentation) + - [Become a listed KEDA user!](#become-a-listed-keda-user) + - [Become a listed KEDA commercial offering!](#become-a-listed-keda-commercial-offering) + - [Adding blog post](#adding-blog-post) + - [Adding scaler documentation](#adding-scaler-documentation) + - [Writing documentation for a new authentication provider](#writing-documentation-for-a-new-authentication-provider) + - [Add new Frequently Asked Question (FAQ)](#add-new-frequently-asked-question-faq) + - [Add new troubleshooting guidance](#add-new-troubleshooting-guidance) + - [Writing documentation for a scaler](#writing-documentation-for-a-scaler) + - [Working with
documentation versions](#working-with-documentation-versions) + - [Preparing a new version](#preparing-a-new-version) + - [Publishing a new version](#publishing-a-new-version) + - [Developer Certificate of Origin: Signing your work](#developer-certificate-of-origin-signing-your-work) + - [Every commit needs to be signed](#every-commit-needs-to-be-signed) + - [I didn't sign my commit, now what?!](#i-didnt-sign-my-commit-now-what) + - [Changing the website](#changing-the-website) + - [Creating and building a local environment](#creating-and-building-a-local-environment) + - [Adding a new filter option](#adding-a-new-filter-option) Learn more about how to [create and build a local environment](#creating-and-building-a-local-environment). @@ -30,6 +42,7 @@ Learn more how to [create and build a local environment](#creating-and-building- Are you using KEDA in production? Do you want to become a [listed user](https://keda.sh/community/#users)? Say no more! You can easily get listed by following these steps: + 1. Upload your logo to `static/img/logos/` _(350x180)_ 2. Configure your company as a new user in `config.toml` _(sorted alphabetically)_ @@ -46,6 +59,7 @@ Here's a good example of [Coralogix becoming a listed user](https://github.com/k Do you offer commercial support for KEDA and want to become a [listed commercial offering](https://keda.sh/support/#commercial-support)? Say no more! You can easily get listed by following these steps: + 1. Upload your logo to `static/img/logos/` _(350x180)_ 2. Configure your company as a new user in `config.toml` _(sorted alphabetically)_ @@ -66,9 +80,9 @@ $ hugo new blog/my-new-post.md This creates a boilerplate Markdown file in `content/blog/my-new-post.md` whose contents you can modify.
The following fields are required: -* `title` -* `date` (in `YYYY-MM-DD` format) -* `author` +- `title` +- `date` (in `YYYY-MM-DD` format) +- `author` ### Adding scaler documentation @@ -82,10 +96,10 @@ This creates a boilerplate Markdown file in `content/docs/<version>/scalers/my-new-scaler.md` whose contents you can modify. Make sure to update the following metadata fields: -* `title` -* `availability` -* `maintainer` -* `description` +- `title` +- `availability` +- `maintainer` +- `description` ### Writing documentation for a new authentication provider @@ -99,7 +113,7 @@ This creates a boilerplate Markdown file in `content/docs/<version>/providers/my-new-provider.md` whose contents you can modify. Make sure to update the following metadata fields: -* `title` +- `title` ### Add new Frequently Asked Question (FAQ) @@ -143,13 +157,34 @@ Here are a few examples: ## Working with documentation versions The KEDA documentation is versioned. Each version has its own subdirectory under -[content/docs](content/docs). To add a new version, copy the directory for -the most recent version. Here's an example: +[content/docs](content/docs). To add a new version, follow these steps: + +1. Copy the directory for the most recent version. Here's an example: ```console $ cp -rf content/docs/<current-version> content/docs/<new-version> ``` +2. Copy the FAQ data file for the most recent version in the `data` directory. Here's an example: + +```console +$ cp data/faq<current-version>.toml data/faq<new-version>.toml +``` + +3. Open the new FAQ file for editing: + +```console +$ $EDITOR content/docs/<new-version>/reference/faq.md +``` + +4. Update the `versionData` option: + +``` +{{< faq20 versionData="NEW_FAQ_FILE_NAME" >}} +``` + +Replace `NEW_FAQ_FILE_NAME` with the file name of the FAQ data for the new version (e.g. `faq2_15`). + By default, new documentation versions are not listed as an available version, so it's safe to make changes to them. After every release, the version will be published as a new version.
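The four versioning steps above boil down to a copy-and-rename pass. Here is a minimal, self-contained sketch: the 2.15 → 2.16 version numbers, the throwaway `mktemp` layout, and the `sed` rewrite of the shortcode are purely illustrative assumptions, not part of the repo's tooling.

```shell
set -e

# Build a throwaway layout mimicking the repo structure (versions are hypothetical).
tmp=$(mktemp -d)
mkdir -p "$tmp/content/docs/2.15/reference" "$tmp/data"
printf '{{< faq20 versionData="faq2_15" >}}\n' > "$tmp/content/docs/2.15/reference/faq.md"
printf 'placeholder\n' > "$tmp/data/faq2_15.toml"

# 1. Copy the docs tree for the most recent version.
cp -rf "$tmp/content/docs/2.15" "$tmp/content/docs/2.16"
# 2. Copy the FAQ data file for the most recent version.
cp "$tmp/data/faq2_15.toml" "$tmp/data/faq2_16.toml"
# 3./4. Point the new version's FAQ page at the new data file.
sed -i 's/faq2_15/faq2_16/' "$tmp/content/docs/2.16/reference/faq.md"

cat "$tmp/content/docs/2.16/reference/faq.md"
```

The old version's tree and data file are left untouched, which is what keeps earlier docs versions rendering their own FAQ content.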
@@ -169,7 +204,7 @@ Ensure that compatibility matrix on `content/docs/{next-version}/operate/cluster Once a version is ready to be published, we must add the version to the `params.versions.docs` list in [config.toml](config.toml). -More recent versions should be placed first in the list (ordering *does* matter +More recent versions should be placed first in the list (ordering _does_ matter because the first element in that list is considered the latest version). > Note: Remember to [prepare the next version](#preparing-a-new-version). @@ -179,6 +214,7 @@ because the first element in that list is considered the latest version). ### Every commit needs to be signed The Developer Certificate of Origin (DCO) is a lightweight way for contributors to certify that they wrote or otherwise have the right to submit the code they are contributing to the project. Here is the full text of the DCO, reformatted for readability: + ``` By making a contribution to this project, I certify that: @@ -198,12 +234,14 @@ This is my commit message Signed-off-by: Random J Developer ``` + Git even has a `-s` command line option to append this automatically to your commit message: + ``` $ git commit -s -m 'This is my commit message' ``` -Each Pull Request is checked whether or not commits in a Pull Request do contain a valid Signed-off-by line. +Each pull request is checked to verify that all of its commits contain a valid Signed-off-by line. ### I didn't sign my commit, now what?! @@ -259,7 +297,7 @@ FILTER_NAME = "filter_value" Replace FILTER_NAME with a name of your choice; the same applies to the value. 3. Navigate to the `list.lunr.json` file to edit: `cd layouts/_default/list.lunr.json`. 4. Open the file and go down to line 3.
You will notice that the data is represented as key/value pairs. Just before the closing parenthesis, append your new option like this: `"FILTER_NAME" $scalers.Params.FILTER_NAME`. Replace FILTER_NAME with the same name represented in the frontmatter (see step 2 above for reference). @@ -276,7 +314,7 @@ params = ["availability", "maintainer", "category", "type", "FILTER_NAME"] this.field("FILTER_NAME", { boost: 5, }); -``` +``` Replace FILTER_NAME with the same name represented in the frontmatter (see step 2 above for reference). @@ -291,7 +329,7 @@ parse[doc.title] = { availability: doc.availability, category: doc.category, type: doc.type, - FILTER_NAME: doc.FILTER_NAME + FILTER_NAME: doc.FILTER_NAME, }; ``` @@ -300,21 +338,22 @@ ```html
FILTER_NAME
- {{ $FILTER_NAME := slice }} - {{ range $scalers := where site.RegularPages ".CurrentSection.Title" "Scalers" }} - {{ with $scalers.Params.FILTER_NAME }} - {{ $FILTER_NAME = $categories | append ($scalers.Params.FILTER_NAME) }} - {{ $FILTER_NAME = uniq $FILTER_NAME }} - {{ end }} - {{ end }} - {{ range $FILTER_NAME }} - {{ $item := . }} + {{ $FILTER_NAME := slice }} {{ range $scalers := where site.RegularPages + ".CurrentSection.Title" "Scalers" }} {{ with $scalers.Params.FILTER_NAME }} {{ + $FILTER_NAME = $FILTER_NAME | append ($scalers.Params.FILTER_NAME) }} {{ + $FILTER_NAME = uniq $FILTER_NAME }} {{ end }} {{ end }} {{ range $FILTER_NAME + }} {{ $item := . }}
- - + +
{{ end }} -
+ ``` Replace FILTER_NAME with the same name represented in the frontmatter (see step 2 above for reference). @@ -324,4 +363,4 @@ Replace FILTER_NAME with the same name represented in the frontmatter (see step [localhost:8888]: http://localhost:8888 [LTS release]: https://nodejs.org/en/about/releases/ [Netlify]: https://netlify.com -[nvm]: https://github.com/nvm-sh/nvm/blob/master/README.md#installing-and-updating \ No newline at end of file +[nvm]: https://github.com/nvm-sh/nvm/blob/master/README.md#installing-and-updating diff --git a/content/docs/2.0/faq.md b/content/docs/2.0/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.0/faq.md +++ b/content/docs/2.0/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.1/faq.md b/content/docs/2.1/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.1/faq.md +++ b/content/docs/2.1/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.10/faq.md b/content/docs/2.10/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.10/faq.md +++ b/content/docs/2.10/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.11/faq.md b/content/docs/2.11/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.11/faq.md +++ b/content/docs/2.11/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.12/faq.md b/content/docs/2.12/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.12/faq.md +++ b/content/docs/2.12/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.13/faq.md b/content/docs/2.13/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.13/faq.md +++ b/content/docs/2.13/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff 
--git a/content/docs/2.14/faq.md b/content/docs/2.14/faq.md index d012d6767..cffb0bd53 100644 --- a/content/docs/2.14/faq.md +++ b/content/docs/2.14/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq2_14" >}} diff --git a/content/docs/2.14/reference/faq.md b/content/docs/2.14/reference/faq.md index df75bbe78..995bfb24d 100644 --- a/content/docs/2.14/reference/faq.md +++ b/content/docs/2.14/reference/faq.md @@ -3,4 +3,4 @@ title = "FAQ" weight = 2000 +++ -{{< faq20 >}} +{{< faq20 versionData="faq2_14" >}} diff --git a/content/docs/2.15/reference/faq.md b/content/docs/2.15/reference/faq.md index df75bbe78..95365417f 100644 --- a/content/docs/2.15/reference/faq.md +++ b/content/docs/2.15/reference/faq.md @@ -3,4 +3,4 @@ title = "FAQ" weight = 2000 +++ -{{< faq20 >}} +{{< faq20 versionData="faq2_15" >}} diff --git a/content/docs/2.2/faq.md b/content/docs/2.2/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.2/faq.md +++ b/content/docs/2.2/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.3/faq.md b/content/docs/2.3/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.3/faq.md +++ b/content/docs/2.3/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.4/faq.md b/content/docs/2.4/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.4/faq.md +++ b/content/docs/2.4/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.5/faq.md b/content/docs/2.5/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.5/faq.md +++ b/content/docs/2.5/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.6/faq.md b/content/docs/2.6/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.6/faq.md +++ b/content/docs/2.6/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ 
-{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.7/faq.md b/content/docs/2.7/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.7/faq.md +++ b/content/docs/2.7/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.8/faq.md b/content/docs/2.8/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.8/faq.md +++ b/content/docs/2.8/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/content/docs/2.9/faq.md b/content/docs/2.9/faq.md index d012d6767..df84af22c 100644 --- a/content/docs/2.9/faq.md +++ b/content/docs/2.9/faq.md @@ -2,4 +2,4 @@ title = "FAQ" +++ -{{< faq20 >}} +{{< faq20 versionData="faq20" >}} diff --git a/data/faq2_14.toml b/data/faq2_14.toml new file mode 100644 index 000000000..95d1089e0 --- /dev/null +++ b/data/faq2_14.toml @@ -0,0 +1,217 @@ +[[qna]] +q = "What is KEDA and why is it useful?" +a = "KEDA stands for Kubernetes Event-driven Autoscaler. It can activate a Kubernetes deployment (i.e. scale it from no pods to a single pod) and subsequently scale it to more pods based on events from various event sources." +type = "General" + +[[qna]] +q = "What are the prerequisites for using KEDA?" +a = """ +KEDA is designed, tested and supported to run on any Kubernetes cluster that runs Kubernetes v1.17.0 or above. + +It uses a CRD (custom resource definition) and the Kubernetes metric server so you will have to use a Kubernetes version which supports these. + +> 💡 Kubernetes v1.16 is supported with KEDA v2.4.0 or below +""" +type = "General" + +[[qna]] +q = "Can KEDA be used in production?" +a = "Yes! KEDA v2.0 is suited for production workloads, but we still support v1.5 if you are running that as well." +type = "General" + +[[qna]] +q = "What does it cost?"
+a = "There is no charge for using KEDA itself; we just ask people to [become a listed user](https://github.com/kedacore/keda-docs#become-a-listed-keda-user) when possible." +type = "General" + +[[qna]] +q = "Can I scale HTTP workloads with KEDA and Kubernetes?" +a = """ +KEDA will scale a container using metrics from a scaler, but unfortunately there is no scaler today for HTTP workloads out-of-the-box. + +We do, however, provide some alternative approaches: +- Use our HTTP add-on scaler which is currently in an experimental stage ([GitHub](https://github.com/kedacore/http-add-on)) +- Use [Prometheus scaler](/docs/latest/scalers/prometheus/) to create a scale rule based on metrics around HTTP events + - Read [Anirudh Garg's blog post](https://dev.to/anirudhgarg_99/scale-up-and-down-a-http-triggered-function-app-in-kubernetes-using-keda-4m42) to learn more. +""" +type = "Features" + +[[qna]] +q = "Are short polling intervals a problem?" +a = "The polling interval really only impacts the time-to-activation (scaling from 0 to 1) but once scaled to one it's really up to the HPA (horizontal pod autoscaler) which polls KEDA." +type = "Features" + +[[qna]] +q = "Using multiple triggers for the same scale target" +a = """ +KEDA allows you to use multiple triggers as part of the same `ScaledObject` or `ScaledJob`. + +By doing this, your autoscaling becomes better: +- All your autoscaling rules are in one place +- You will not have multiple `ScaledObject`s or `ScaledJob`s interfering with each other + +KEDA will start scaling as soon as one of the triggers meets the criteria. Horizontal Pod Autoscaler (HPA) will calculate metrics for every scaler and use the highest desired replica count to scale the workload to. +""" +type = "Best Practices" + +[[qna]] +q = "Don't combine `ScaledObject` with Horizontal Pod Autoscaler (HPA)" +a = """ +We recommend not combining KEDA's `ScaledObject` with a Horizontal Pod Autoscaler (HPA) to scale the same workload.
+ +They will compete with each other, given that KEDA uses the Horizontal Pod Autoscaler (HPA) under the hood, and will result in odd scaling behavior. + +If you are using a Horizontal Pod Autoscaler (HPA) to scale on CPU and/or memory, we recommend using the [CPU scaler](/docs/latest/scalers/cpu/) & [Memory scaler](/docs/latest/scalers/memory/) instead. +""" +type = "Best Practices" + +[[qna]] +q = "What does the target metric value in the Horizontal Pod Autoscaler (HPA) represent?" +a = """ +The target metric value is used by the Horizontal Pod Autoscaler (HPA) to make scaling decisions. + +The current target value on the Horizontal Pod Autoscaler (HPA) often does not match the metrics on the system you are scaling on. This is because of how the Horizontal Pod Autoscaler's (HPA) [scaling algorithm](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details) works. + +By default, KEDA scalers use average metrics (the `AverageValue` metric type). This means that the HPA will use the value of the metric averaged across the total number of pods. As of KEDA v2.7, ScaledObjects also support the `Value` metric type. You can learn more about it [here](https://keda.sh/docs/latest/concepts/scaling-deployments/#triggers). +""" +type = "Kubernetes" + +[[qna]] +q = "Why does KEDA use external metrics and not custom metrics instead?" +a = """ +Kubernetes allows you to autoscale based on custom & external metrics, which are fundamentally different: +- **Custom metrics** are metrics that come from applications solely running on the Kubernetes cluster (Prometheus) +- **External metrics** are metrics that represent the state of an application/service that is running outside of the Kubernetes cluster (AWS, Azure, GCP, Datadog, etc.) + +Because KEDA primarily serves metrics for metric sources outside of the Kubernetes cluster, it uses external metrics and not custom metrics.
+ +This is why KEDA registers the `v1beta1.external.metrics.k8s.io` namespace in the API service. However, this is just an implementation detail as both offer the same functionality. + +Read [about the different metric APIs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis) or [this article](https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics) by Google Cloud to learn more. +""" +type = "Kubernetes" + +[[qna]] +q = "Can I run multiple metric servers serving external metrics in the same cluster?" +a = """ +Unfortunately, you cannot do that. + +Kubernetes currently only supports one metric server serving `external.metrics.k8s.io` metrics per cluster. This is because only one API Service can be registered to handle external metrics. + +If you want to know what external metric server is currently registered, you can use the following command: + +```shell +~ kubectl get APIService/v1beta1.external.metrics.k8s.io +NAME SERVICE AVAILABLE AGE +v1beta1.external.metrics.k8s.io keda-system/keda-operator-metrics-apiserver True 457d +``` + +Once a new metric server is installed, it will overwrite the existing API Server registration and take over the `v1beta1.external.metrics.k8s.io` namespace. This will cause the previously installed metric server to be ignored. + +There is an [open proposal](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) to allow multiple metric servers in the same cluster, but it's not implemented yet. +""" +type = "Kubernetes" + +[[qna]] +q = "Can I run multiple installations of KEDA in the same cluster?" +a = """ +Unfortunately, you cannot do that. + +This limitation exists because Kubernetes does not allow you to run multiple metric servers serving external metrics in the same cluster. + +Also, KEDA does not allow you to share a single metric server across multiple operator installations.
+ +Learn more in the "Can I run multiple metric servers serving external metrics in the same cluster?" FAQ entry. +""" +type = "Kubernetes" + +[[qna]] +q = "How can I get involved?" +a = """ +There are several ways to get involved. + +* Pick up an issue to work on. A good place to start might be issues which are marked as [Good First Issue](https://github.com/kedacore/keda/labels/good%20first%20issue) or [Help Wanted](https://github.com/kedacore/keda/labels/help%20wanted) +* We are always looking to add more scalers. +* We are always looking for more samples, documentation, etc. +* Please join us in our [weekly standup](https://github.com/kedacore/keda#community). +""" +type = "Community" + +[[qna]] +q = "Where can I get to the code for the Scalers?" +a = "All scalers have their code [here](https://github.com/kedacore/keda/tree/main/pkg/scalers)." +type = "Community" + +[[qna]] +q = "How do I access KEDA resources using `client-go`?" +a = """KEDA client-go is exported as part of the KEDA repository.""" + +[[qna]] +q = "How do I run KEDA with `readOnlyRootFilesystem=true`?" +a = """ +By default, KEDA v2.10 or above sets `readOnlyRootFilesystem=true` without any manual intervention. + +If you are running KEDA v2.9 or below, you can't run KEDA with `readOnlyRootFilesystem=true` by default because the Metrics adapter generates self-signed certificates during deployment and stores them on the root file system. +To overcome this, you can create a secret/configmap with a valid CA, cert and key and then mount it to the Metrics Deployment. +To use your certificate, you need to reference it in the container `args` section, e.g.: +``` +args: + - '--client-ca-file=/cabundle/service-ca.crt' + - '--tls-cert-file=/certs/tls.crt' + - '--tls-private-key-file=/certs/tls.key' +``` +It is also possible to run KEDA with `readOnlyRootFilesystem=true` by creating an emptyDir volume and mounting it to the path where, +by default, the metrics server writes its generated cert.
The corresponding helm command is: +``` +helm install keda kedacore/keda --namespace keda \ + --set 'volumes.metricsApiServer.extraVolumes[0].name=keda-volume' \ + --set 'volumes.metricsApiServer.extraVolumeMounts[0].name=keda-volume' \ + --set 'volumes.metricsApiServer.extraVolumeMounts[0].mountPath=/apiserver.local.config/certificates/' \ + --set 'securityContext.metricServer.readOnlyRootFilesystem=true' +``` +""" +type = "Features" + +[[qna]] +q = "How do I run KEDA with TLS v1.3 only?" +a = """ +By default, Keda listens on TLS v1.1 and TLSv1.2, with the default Golang ciphersuites. +In some environments, these ciphers may be considered less secure, for example CBC ciphers. + +As an alternative, you can configure the minimum TLS version to be v1.3 to increase security. +Since all modern clients support this version, there should be no impact in most scenarios. + +You can set this with args - e.g.: +``` +args: + - '--tls-min-version=VersionTLS13' +``` +""" +type = "Features" + +[[qna]] +q = "Does KEDA depend on any Azure service?" +a = "No, KEDA only takes a dependency on standard Kubernetes constructs and can run on any Kubernetes cluster whether in OpenShift, AKS, GKE, EKS or your own infrastructure." +type = "Azure" + +[[qna]] +q = "Does KEDA only work with Azure Functions?" +a = "No, KEDA can scale up/down any container that you specify in your deployment. There has been work done in the Azure Functions tooling to make it easy to scale an Azure Function container." +type = "Azure" + +[[qna]] +q = "Why should we use KEDA if we are already using Azure Functions in Azure?" +a = """ +There are a few reasons for this: + +* Run functions on-premises (potentially in something like an 'intelligent edge' architecture) +* Run functions alongside other Kubernetes apps (maybe in a restricted network, app mesh, custom environment, etc.) +* Run functions outside of Azure (no vendor lock-in) +* Specific need for more control (GPU enabled compute clusters, policies, etc.) 
+""" +type = "Azure" + +[[qna]] +q = "Does scaler search support wildcard search?" +a = "Yes. The search actually supports wildcard search. We've made our search automatically perform wildcard filtering on the fly so you don't have to append special symbols within your search query." +type = "Website" diff --git a/data/faq2_15.toml b/data/faq2_15.toml new file mode 100644 index 000000000..95d1089e0 --- /dev/null +++ b/data/faq2_15.toml @@ -0,0 +1,217 @@ +[[qna]] +q = "What is KEDA and why is it useful?" +a = "KEDA stands for Kubernetes Event-driven Autoscaler. It can activate a Kubernetes deployment (i.e. scale it from no pods to a single pod) and subsequently scale it to more pods based on events from various event sources." +type = "General" + +[[qna]] +q = "What are the prerequisites for using KEDA?" +a = """ +KEDA is designed, tested and supported to run on any Kubernetes cluster that runs Kubernetes v1.17.0 or above. + +It uses a CRD (custom resource definition) and the Kubernetes metric server so you will have to use a Kubernetes version which supports these. + +> 💡 Kubernetes v1.16 is supported with KEDA v2.4.0 or below +""" +type = "General" + +[[qna]] +q = "Can KEDA be used in production?" +a = "Yes! KEDA v2.0 is suited for production workloads, but we still support v1.5 if you are running that as well." +type = "General" + +[[qna]] +q = "What does it cost?" +a = "There is no charge for using KEDA itself; we just ask people to [become a listed user](https://github.com/kedacore/keda-docs#become-a-listed-keda-user) when possible." +type = "General" + +[[qna]] +q = "Can I scale HTTP workloads with KEDA and Kubernetes?" +a = """ +KEDA will scale a container using metrics from a scaler, but unfortunately there is no scaler today for HTTP workloads out-of-the-box.
+ +We do, however, provide some alternative approaches: +- Use our HTTP add-on scaler which is currently in an experimental stage ([GitHub](https://github.com/kedacore/http-add-on)) +- Use [Prometheus scaler](/docs/latest/scalers/prometheus/) to create a scale rule based on metrics around HTTP events + - Read [Anirudh Garg's blog post](https://dev.to/anirudhgarg_99/scale-up-and-down-a-http-triggered-function-app-in-kubernetes-using-keda-4m42) to learn more. +""" +type = "Features" + +[[qna]] +q = "Are short polling intervals a problem?" +a = "The polling interval really only impacts the time-to-activation (scaling from 0 to 1) but once scaled to one it's really up to the HPA (horizontal pod autoscaler) which polls KEDA." +type = "Features" + +[[qna]] +q = "Using multiple triggers for the same scale target" +a = """ +KEDA allows you to use multiple triggers as part of the same `ScaledObject` or `ScaledJob`. + +By doing this, your autoscaling becomes better: +- All your autoscaling rules are in one place +- You will not have multiple `ScaledObject`s or `ScaledJob`s interfering with each other + +KEDA will start scaling as soon as one of the triggers meets the criteria. Horizontal Pod Autoscaler (HPA) will calculate metrics for every scaler and use the highest desired replica count to scale the workload to. +""" +type = "Best Practices" + +[[qna]] +q = "Don't combine `ScaledObject` with Horizontal Pod Autoscaler (HPA)" +a = """ +We recommend not combining KEDA's `ScaledObject` with a Horizontal Pod Autoscaler (HPA) to scale the same workload. + +They will compete with each other, given that KEDA uses the Horizontal Pod Autoscaler (HPA) under the hood, and will result in odd scaling behavior. + +If you are using a Horizontal Pod Autoscaler (HPA) to scale on CPU and/or memory, we recommend using the [CPU scaler](/docs/latest/scalers/cpu/) & [Memory scaler](/docs/latest/scalers/memory/) instead.
+""" +type = "Best Practices" + +[[qna]] +q = "What does the target metric value in the Horizontal Pod Autoscaler (HPA) represent?" +a = """ +The target metric value is used by the Horizontal Pod Autoscaler (HPA) to make scaling decisions. + +The current target value on the Horizontal Pod Autoscaler (HPA) often does not match the metrics on the system you are scaling on. This is because of how the Horizontal Pod Autoscaler's (HPA) [scaling algorithm](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details) works. + +By default, KEDA scalers use average metrics (the `AverageValue` metric type). This means that the HPA will use the value of the metric averaged across the total number of pods. As of KEDA v2.7, ScaledObjects also support the `Value` metric type. You can learn more about it [here](https://keda.sh/docs/latest/concepts/scaling-deployments/#triggers). +""" +type = "Kubernetes" + +[[qna]] +q = "Why does KEDA use external metrics and not custom metrics instead?" +a = """ +Kubernetes allows you to autoscale based on custom & external metrics, which are fundamentally different: +- **Custom metrics** are metrics that come from applications solely running on the Kubernetes cluster (Prometheus) +- **External metrics** are metrics that represent the state of an application/service that is running outside of the Kubernetes cluster (AWS, Azure, GCP, Datadog, etc.) + +Because KEDA primarily serves metrics for metric sources outside of the Kubernetes cluster, it uses external metrics and not custom metrics. + +This is why KEDA registers the `v1beta1.external.metrics.k8s.io` namespace in the API service. However, this is just an implementation detail as both offer the same functionality.
+
+Read [about the different metric APIs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis) or [this article](https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics) by Google Cloud to learn more.
+"""
+type = "Kubernetes"
+
+[[qna]]
+q = "Can I run multiple metric servers serving external metrics in the same cluster?"
+a = """
+Unfortunately, you cannot do that.
+
+Kubernetes currently only supports one metric server serving `external.metrics.k8s.io` metrics per cluster. This is because only one API Service can be registered to handle external metrics.
+
+If you want to know which external metric server is currently registered, you can use the following command:
+
+```shell
+~ kubectl get APIService/v1beta1.external.metrics.k8s.io
+NAME                              SERVICE                                       AVAILABLE   AGE
+v1beta1.external.metrics.k8s.io   keda-system/keda-operator-metrics-apiserver   True        457d
+```
+
+Once a new metric server is installed, it will overwrite the existing API Service registration and take over the `v1beta1.external.metrics.k8s.io` namespace. This will cause the previously installed metric server to be ignored.
+
+There is an [open proposal](https://github.com/kubernetes-sigs/custom-metrics-apiserver/issues/70) to allow multiple metric servers in the same cluster, but it's not implemented yet.
+"""
+type = "Kubernetes"
+
+[[qna]]
+q = "Can I run multiple installations of KEDA in the same cluster?"
+a = """
+Unfortunately, you cannot do that.
+
+This limitation exists because Kubernetes does not allow you to run multiple metric servers serving external metrics in the same cluster.
+
+Also, KEDA does not allow you to share a single metric server across multiple operator installations.
+
+Learn more in the "Can I run multiple metric servers serving external metrics in the same cluster?" FAQ entry.
+"""
+type = "Kubernetes"
+
+[[qna]]
+q = "How can I get involved?"
+a = """
+There are several ways to get involved.
+
+* Pick up an issue to work on. A good place to start might be issues marked as [Good First Issue](https://github.com/kedacore/keda/labels/good%20first%20issue) or [Help Wanted](https://github.com/kedacore/keda/labels/help%20wanted).
+* We are always looking to add more scalers.
+* We are always looking for more samples, documentation, etc.
+* Please join us in our [weekly standup](https://github.com/kedacore/keda#community).
+"""
+type = "Community"
+
+[[qna]]
+q = "Where can I find the code for the scalers?"
+a = "All scalers have their code [here](https://github.com/kedacore/keda/tree/main/pkg/scalers)."
+type = "Community"
+
+[[qna]]
+q = "How do I access KEDA resources using `client-go`?"
+a = """The KEDA `client-go` library is exported as part of the KEDA repository."""
+
+[[qna]]
+q = "How do I run KEDA with `readOnlyRootFilesystem=true`?"
+a = """
+By default, KEDA v2.10 or above sets `readOnlyRootFilesystem=true` without any manual intervention.
+
+If you are running KEDA v2.9 or below, you can't run KEDA with `readOnlyRootFilesystem=true` by default, because the Metrics Adapter generates self-signed certificates during deployment and stores them on the root file system.
+To overcome this, you can create a secret/configmap with a valid CA, cert and key and then mount it into the Metrics Adapter Deployment.
+To use your certificate, you need to reference it in the container `args` section, e.g.:
+```
+args:
+ - '--client-ca-file=/cabundle/service-ca.crt'
+ - '--tls-cert-file=/certs/tls.crt'
+ - '--tls-private-key-file=/certs/tls.key'
+```
+It is also possible to run KEDA with `readOnlyRootFilesystem=true` by creating an emptyDir volume and mounting it at the path where,
+by default, the metrics server writes its generated cert.
The corresponding helm command is:
+```
+helm install keda kedacore/keda --namespace keda \
+ --set 'volumes.metricsApiServer.extraVolumes[0].name=keda-volume' \
+ --set 'volumes.metricsApiServer.extraVolumeMounts[0].name=keda-volume' \
+ --set 'volumes.metricsApiServer.extraVolumeMounts[0].mountPath=/apiserver.local.config/certificates/' \
+ --set 'securityContext.metricServer.readOnlyRootFilesystem=true'
+```
+"""
+type = "Features"
+
+[[qna]]
+q = "How do I run KEDA with TLS v1.3 only?"
+a = """
+By default, KEDA listens on TLS v1.1 and TLS v1.2, with the default Go cipher suites.
+In some environments, these ciphers may be considered less secure, for example, CBC ciphers.
+
+As an alternative, you can configure the minimum TLS version to be v1.3 to increase security.
+Since all modern clients support this version, there should be no impact in most scenarios.
+
+You can set this in the container `args` section, e.g.:
+```
+args:
+ - '--tls-min-version=VersionTLS13'
+```
+"""
+type = "Features"
+
+[[qna]]
+q = "Does KEDA depend on any Azure service?"
+a = "No, KEDA only takes a dependency on standard Kubernetes constructs and can run on any Kubernetes cluster, whether in OpenShift, AKS, GKE, EKS, or your own infrastructure."
+type = "Azure"
+
+[[qna]]
+q = "Does KEDA only work with Azure Functions?"
+a = "No, KEDA can scale up/down any container that you specify in your deployment. There has been work done in the Azure Functions tooling to make it easy to scale an Azure Function container."
+type = "Azure"
+
+[[qna]]
+q = "Why should we use KEDA if we are already using Azure Functions in Azure?"
+a = """
+There are a few reasons for this:
+
+* Run functions on-premises (potentially in something like an 'intelligent edge' architecture)
+* Run functions alongside other Kubernetes apps (maybe in a restricted network, app mesh, custom environment, etc.)
+* Run functions outside of Azure (no vendor lock-in)
+* Specific need for more control (GPU-enabled compute clusters, policies, etc.)
+""" +type = "Azure" + +[[qna]] +q = "Does scaler search support wildcard search?" +a = "Yes. The search actually supports wildcard search. We've made our search to automatically perform wildcard filtering on the fly so you don't have to append special symbols within your search query." +type = "Website" diff --git a/layouts/shortcodes/faq20.html b/layouts/shortcodes/faq20.html index 75c294338..c26bdcd69 100644 --- a/layouts/shortcodes/faq20.html +++ b/layouts/shortcodes/faq20.html @@ -1,4 +1,10 @@ -{{ $faq20 := site.Data.faq20.qna }} +{{ $filename := .Get "versionData" }} +{{ range index .Site.Data $filename }} + {{ $.Scratch.Add "faq20" . }} +{{ end }} + +{{ $faq20 := $.Scratch.Get "faq20" }} +

General

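
With the `faq20` shortcode change above, a versioned FAQ page selects its data file by passing the file's base name (without the `.toml` extension) through `versionData`. For example, using the `faq2_15` data file added in this PR:

```
{{< faq20 versionData="faq2_15" >}}
```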
From 43cabd8675eb1737c7be85098d77ea58b44d3cab Mon Sep 17 00:00:00 2001
From: thisisobate
Date: Wed, 3 Jul 2024 13:57:55 +0100
Subject: [PATCH 2/2] chore: start sentence with capital letter

Signed-off-by: thisisobate
---
 CONTRIBUTING.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 46261e8e5..8068d73d1 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -159,25 +159,25 @@ Here are a few examples:
 The KEDA documentation is versioned. Each version has its own subdirectory under [content/docs](content/docs). To add a new version, follow these steps:
 
-1. copy the directory for the most recent version. Here's an example:
+1. Copy the directory for the most recent version. Here's an example:
 
 ```console
 $ cp -rf content/docs/ content/docs/
 ```
 
-2. copy the file for the most recent faq data in the `data` directory. Here's an example:
+2. Copy the file for the most recent FAQ data in the `data` directory. Here's an example:
 
 ```console
 $ cp -rf data/faq data/faq
 ```
 
-3. navigate to the new faq file:
+3. Navigate to the new FAQ file:
 
 ```console
 $ cd content/docs//reference/faq.md
 ```
 
-4. update the versionData option
+4. Update the `versionData` option:
 
 ```
 {{< faq20 versionData="NEW_FAQ_FILE_NAME" >}}
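Taken together, the copy steps above might look like this for a hypothetical 2.15 → 2.16 version bump (the `mkdir`/`touch` lines only simulate an existing checkout so the sketch is self-contained; in a real repo you would run just the two `cp` commands from the repo root):

```shell
# Stand-ins for files that would already exist in a real checkout:
mkdir -p content/docs/2.15 data
touch data/faq2_15.toml

# Step 1: copy the most recent docs version to the new version directory
cp -rf content/docs/2.15 content/docs/2.16

# Step 2: copy the most recent FAQ data file for the new version
cp -f data/faq2_15.toml data/faq2_16.toml
```

After this, the new FAQ page at `content/docs/2.16/reference/faq.md` would reference `faq2_16` in its `versionData` option.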