
Could not find schema for CustomResourceDefinition #51

Open
dewe opened this issue Jun 16, 2021 · 15 comments
dewe commented Jun 16, 2021

When validating apiextensions.k8s.io/v1 CustomResourceDefinition resources I get this error:

<file> - CustomResourceDefinition servicemonitors.monitoring.coreos.com failed validation: could not find schema for CustomResourceDefinition

Why is that? I thought this API was part of the Kubernetes API?

yannh (Owner) commented Jun 16, 2021

Hi Dewe, no it is not: servicemonitors.monitoring.coreos.com is one of the Prometheus Operator's CRDs! The workflow to validate custom resources is described in the README, https://github.com/yannh/kubeconform#converting-an-openapi-file-to-a-json-schema - could you go through it and let me know whether it's understandable?
It does require a little bit of manual work, unfortunately.
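
Roughly, the workflow looks like this (a sketch, not verbatim from the README; the exact output filename depends on the script's FILENAME_FORMAT setting):

URL=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml

# 1. Convert the CRD's embedded openAPIV3Schema into a JSON Schema file
curl -sLO https://raw.githubusercontent.com/yannh/kubeconform/master/scripts/openapi2jsonschema.py
pip install pyyaml
python openapi2jsonschema.py "$URL"   # writes e.g. servicemonitor_v1.json

# 2. Point kubeconform at the generated schemas in addition to the defaults
kubeconform -summary \
  -schema-location default \
  -schema-location '{{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json' \
  my-servicemonitor.yaml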

dewe (Author) commented Jun 16, 2021

Hi, thanks for the quick answer.

I tried that workflow out earlier today; it wasn't entirely clear how to do it, but eventually I got it working. One thing that could be improved is to emphasize the correct FILENAME_FORMAT (i.e. not the default).
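
Concretely, the shape of it is roughly this (a sketch; the {kind}/{group}/{version} placeholders are my reading of what the script supports, check the script if in doubt):

# Include the group in generated filenames so CRDs from different groups can't collide
FILENAME_FORMAT='{kind}_{group}_{version}' \
  python openapi2jsonschema.py monitoring.coreos.com_servicemonitors.yaml

# Match the same pattern in the kubeconform schema-location template
kubeconform -summary \
  -schema-location default \
  -schema-location '{{ .ResourceKind }}_{{ .Group }}_{{ .ResourceAPIVersion }}.json' \
  my-manifests/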

But this issue is about the actual CustomResourceDefinition that fails validation, not the ServiceMonitor resource.

To reproduce, taking the ServiceMonitor definition as an example:

$ URL=https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
$ curl -s $URL | kubeconform
stdin - CustomResourceDefinition servicemonitors.monitoring.coreos.com failed validation: could not find schema for CustomResourceDefinition
Summary: 1 resource found parsing stdin - Valid: 0, Invalid: 0, Errors: 1, Skipped: 0

If I skip the CustomResourceDefinition schema, it passes, obviously:

$ curl -s $URL | kubeconform -summary -skip CustomResourceDefinition
Summary: 1 resource found parsing stdin - Valid: 0, Invalid: 0, Errors: 0, Skipped: 1

To me, this indicates that the schema for CustomResourceDefinition cannot be found. And I think I've seen something similar with kubeval previously.

@paulfantom

I can confirm this is the case. It is why we are skipping validation of CRDs in kube-prometheus (https://github.com/prometheus-operator/kube-prometheus/blob/main/Makefile#L43).

yannh (Owner) commented Jun 22, 2021

I understand. The problem is that the schemas for CustomResourceDefinitions are not stored in the schema repository, most likely because of this

While I am quite interested in getting this to work, it is likely to take me some time to figure out why this limitation is in there. If anyone wants to give this a shot before me 👍


arizonaherbaltea commented Jul 14, 2021

I ran into a similar issue, but after converting the YAML version of the CustomResourceDefinition using yannh's openapi2jsonschema.py, I'm now able to validate. Similarly, if I use kubectl to add the custom resource and then kubectl get -o json to read it back, the resulting format validates without this weird issue cropping up.
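
The kubectl round-trip I mean is roughly this (a sketch; assumes access to a cluster, using the ServiceMonitor CRD as the example):

# Apply the CRD, then read it back; the API server returns a normalized JSON form
kubectl apply -f monitoring.coreos.com_servicemonitors.yaml
kubectl get crd servicemonitors.monitoring.coreos.com -o json > servicemonitors.crd.json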

Here's an example of the conversion setup:
convert.Dockerfile

FROM python:3.8-alpine

RUN apk update \
    && apk add --no-cache \
      curl \
      bash \
      git \
      jq \
      yq

ARG APP_DIR=/apps/convert
ENV APP_DIR=${APP_DIR}
WORKDIR ${APP_DIR}/
RUN curl -s -L "https://raw.githubusercontent.com/yannh/kubeconform/master/scripts/openapi2jsonschema.py" \
         -o "${APP_DIR}/openapi2jsonschema.py" \
    && pip install pyyaml

ENTRYPOINT [ "/bin/bash", "-c" ]
# Convert every non-blank URL listed in $CONVERT_PATH into a JSON Schema file
CMD [ "cd \"${MOUNT_PATH}\" && while read -r line; do python \"${APP_DIR}/openapi2jsonschema.py\" \"${line}\"; done < <( echo \"$CONVERT_PATH\" | grep -v -e '^[[:space:]]*$')" ]

docker-compose.yml

version: '3.8'
services:
  convert-crd:
    build:
      context: .
      dockerfile: convert.Dockerfile
    image: convert-crd
    environment:
      CONVERT_PATH: |
        https://raw.githubusercontent.com/external-secrets/kubernetes-external-secrets/master/charts/kubernetes-external-secrets/crds/kubernetes-client.io_externalsecrets_crd.yaml
        https://raw.githubusercontent.com/istio/istio/1.10.2/manifests/charts/base/crds/crd-all.gen.yaml
      MOUNT_PATH: '/apps/mount'
    volumes:
      - ${PWD}/converted:/apps/mount

Add the URLs of the raw YAML files under CONVERT_PATH in docker-compose.yml, then run:

mkdir -p ./converted
docker-compose up --build --remove-orphans --force-recreate -- convert-crd

The schemas in the converted folder can now be referenced with the -schema-location parameter of the kubeconform CLI.

reset_color="\\e[0m"; color_red="\\e[31m"; color_green="\\e[32m"; color_blue="\\e[36m";
function echo_fail { echo -e "${color_red}$*${reset_color}"; }
function echo_success { echo -e "${color_green}$*${reset_color}"; }
function echo_info { echo -e "${color_blue}info: $*${reset_color}"; }

chart="production/default/helm/"
echo_info "Validating Chart '$chart'"
helm template "${FLAGS[@]}" -- "$chart" | \
kubeconform -strict \
-schema-location default \
-schema-location "converted/{{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json" \
-summary \
&& echo_success "Kubeconform succeeded!" || echo_fail "Kubeconform failed!!"

You should see output like this (note: in the output below I'm running in a Docker container):

helm-kubeconform_1  | info: Validating Chart 'prod/default/helm/'
helm-kubeconform_1  | Summary: 7 resources found parsing stdin - Valid: 7, Invalid: 0, Errors: 0, Skipped: 0
helm-kubeconform_1  | ✔ Kubeconform succeeded!

My Helm chart has these custom resources:

  • Gateway ( istio )
  • VirtualService ( istio )
  • ExternalSecret

Let me know if this helps. Perhaps kubeconform is supposed to convert these YAMLs into JSON automatically, but I don't think so...
Without conversion I was getting this: /apps/helm/crds/ExternalSecret.yaml - CustomResourceDefinition externalsecrets.kubernetes-client.io failed validation: could not find schema for CustomResourceDefinition


Glitchm commented Dec 29, 2021

I'm very new to this, so someone feel free to educate me if there's a reason I shouldn't do this.

The project I'm currently working on uses Flux and has a lot of apiextensions.k8s.io/v1 CustomResourceDefinitions. I probably don't need to validate those, since Flux creates those files, but they're part of a folder I'm validating. To get around this I pull the file from here and add it as another schema location, and I no longer get failed validation: could not find schema for CustomResourceDefinition.

Hope this helps anyone that comes across the same problem.


adriananeci commented Feb 7, 2022

@Glitchm While your approach suppresses the error, it does not catch real issues.
For example, given the following CRD spec:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: test.crd.com
spec:

kubeconform returns a success message when it shouldn't:

❯ kubeconform -schema-location https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/v3/apis__apiextensions.k8s.io__v1_openapi.json aa.yaml

❯ echo $?
0

Based on the schema, the spec section of a CustomResourceDefinition object is required: https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/v3/apis__apiextensions.k8s.io__v1_openapi.json#L88

Also, the group, names, scope, and versions sections of spec are required, per https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/v3/apis__apiextensions.k8s.io__v1_openapi.json#L249

None of them are defined in the above example, yet the validation succeeded.

The K8s API server does complain when trying to apply the same spec:

❯ kubectl apply -f aa.yaml
error: error validating "aa.yaml": error validating data: ValidationError(CustomResourceDefinition): missing required field "spec" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinition; if you choose to ignore these errors, turn validation off with --validate=false
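
For contrast, a minimal CRD that does carry the required fields looks roughly like this (a sketch; the group and names are placeholders):

cat <<'EOF' > minimal-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: tests.crd.com
spec:
  group: crd.com          # required
  scope: Namespaced       # required
  names:                  # required; at least plural and kind
    plural: tests
    singular: test
    kind: Test
  versions:               # required; at least one version with served/storage
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
EOF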

@FarnazBGH

I solved it by adding Datree's CRDs-catalog as an additional -schema-location, as mentioned in the README.

kubeconform -summary -output pretty  -schema-location default -schema-location "https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json"

Now, as you can see, everything is validated:

helm template -f apps/kube-prometheus-stack/values.yaml apps/kube-prometheus-stack| kubeconform -summary -output pretty  -schema-location default -schema-location "https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json" 
                                    
Summary: 105 resources found parsing stdin - Valid: 105, Invalid: 0, Errors: 0, Skipped: 0

@emmeowzing

Even for your own CRDs, it's possible to convert them to JSON and just point at them in GitHub as I've done here ~

https://github.com/premiscale/pass-operator/blob/master/helm/operator-crds/_json/PassSecret.json

https://github.com/premiscale/pass-operator/blob/master/.circleci/helm.yml#L16


Skaronator commented Mar 12, 2024

This issue thread is perplexing because everyone is talking about a different thing.

When you have issues validating a custom resource like ServiceMonitor or Certificate, add this to your command line:

-schema-location "https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json"

When you have issues validating a CustomResourceDefinition, which is a NATIVE Kubernetes resource, add this to your command line:

-schema-location "https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/{{.NormalizedKubernetesVersion}}/{{.ResourceKind}}.json"

Your final command line should look like this:

kubeconform -output pretty -strict \
  -schema-location default \
  -schema-location "https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json" \
  -schema-location "https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/{{.NormalizedKubernetesVersion}}/{{.ResourceKind}}.json"

Edit: fixed as per the comment below.

yannh (Owner) commented Mar 12, 2024

Hi @Skaronator, unfortunately that's not correct: -schema-location should not point to a single file. I obviously need to improve the documentation. In the order you passed the parameters, apis__apiextensions.k8s.io__v1_openapi.json would be used to validate all files that are not found in default or datree.
Kubeconform goes through all the schema locations in the order they were passed, and tries to complete the template string to find a schema in each location. If it finds a schema (i.e. can successfully download one), it uses it to validate the manifest. If not, it tries the next schema location. Is this a bit clearer?
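
To spell out the intended shape (URLs as in the comments above):

# For each manifest, kubeconform renders every -schema-location template in order
# and validates with the first schema it can actually fetch; the rest are ignored.
kubeconform -summary \
  -schema-location default \
  -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
  manifests/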


CyDickey-msr commented Mar 12, 2024

yannh/kubernetes-json-schema#26 seemed to fix the issue for me.

Although I do run into what yannh mentioned above when I use multiple single-file schema locations:

  -schema-location default
  -schema-location https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/\{\{.Group\}\}/\{\{.ResourceKind\}\}_\{\{.ResourceAPIVersion\}\}.json
  -schema-location https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/v1.28.5/customresourcedefinition.json
  -schema-location https://json.schemastore.org/kustomization.json

It acts as if https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/v1.28.5/customresourcedefinition.json is the only one that exists, as opposed to leveraging both single-file schema locations.

However, this worked as expected:

-schema-location https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json
-schema-location https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/v${{ inputs.kubernetes_version }}/{{.ResourceKind}}.json
-schema-location https://json.schemastore.org/{{.ResourceKind}}.json

@Skaronator

@CyDickey-msr you can also use {{.NormalizedKubernetesVersion}} in the template and define the version in kubeconform using -kubernetes-version.

The final arguments should look like this:

-kubernetes-version="your-version"
-schema-location default
-schema-location https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json
-schema-location https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/{{.NormalizedKubernetesVersion}}/{{.ResourceKind}}.json
-schema-location https://json.schemastore.org/{{.ResourceKind}}.json

@rjshrjndrn

Is it possible to include the CRD schemas in the main repo by default, e.g. via a GitHub Action? If so, I can work on that.
