
[Bug]: TLS config for Elasticsearch backend in Jaeger v2 Helm deployment #7485


Description

@anhtaw

What happened?

v2.0.0-rc2
git-commit=4b7446248e087edffd15508e760e8e5da044f4b4
build-date=2024-10-07T09:16:33Z
Problem
When configuring the jaeger_storage extension with an Elasticsearch backend that requires TLS, the configuration schema only accepts a restricted set of keys under elasticsearch.tls; insecure and insecure_skip_verify are not among them.

Steps to reproduce

Attempting to use insecure or insecure_skip_verify results in:

error decoding 'backends[primary_store]': decoding failed due to the following error(s):
'elasticsearch.tls' has invalid keys: insecure_skip_verify
extensions:
  jaeger_storage:
    backends:
      primary_store:
        elasticsearch:
          index_prefix: jaeger
          server_urls: ["https://k8s-alert-helm-elasticsearch.k8s-alert.svc.cluster.local:9200"]
          username: elastic
          password: Pvcb@123
          tls:
            insecure_skip_verify: true # <- causes error
      archive_store:
        elasticsearch:
          index_prefix: jaeger-archive
          server_urls: ["https://k8s-alert-helm-elasticsearch.k8s-alert.svc.cluster.local:9200"]
          username: elastic
          password: Pvcb@123

Expected behavior

Support for insecure_skip_verify: true in elasticsearch.tls, consistent with configtls.ClientConfig.

OR at least clear documentation that Jaeger v2 requires a CA file and does not allow skipping TLS verification.
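
For reference, the configtls.ClientConfig used by other OTel Collector components accepts keys such as ca_file, cert_file, key_file, insecure, and insecure_skip_verify. A minimal sketch of what I would expect to work, next to the CA-file variant that appears to be the only supported path today (this assumes ca_file is among the keys the Elasticsearch backend does accept, and that the CA cert is mounted into the pod at the path shown):

extensions:
  jaeger_storage:
    backends:
      primary_store:
        elasticsearch:
          server_urls: ["https://k8s-alert-helm-elasticsearch.k8s-alert.svc.cluster.local:9200"]
          username: elastic
          password: Pvcb@123
          tls:
            # expected per configtls.ClientConfig, rejected by v2.0.0-rc2:
            # insecure_skip_verify: true
            # apparent workaround: trust the cluster CA explicitly
            # (assumes the CA cert is mounted at this path):
            ca_file: /es-tls/ca-cert.pem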

Relevant log output

userconfig: |
  service:
    telemetry:
      logs:
        level: debug
    extensions: [jaeger_storage, jaeger_query, healthcheckv2]
    pipelines:
      traces:
        receivers: [kafka]
        processors: [batch]
        exporters: [jaeger_storage_exporter]
      traces/kafka:
        receivers: [otlp]
        processors: [batch]
        exporters: [kafka]
      traces/processed:
        receivers:
          - kafka/traces-processed
        processors:
          - memory_limiter
        exporters:
          - jaeger_storage_exporter
      traces/default:
        receivers: [kafka/traces-raw]
        processors: [memory_limiter, batch]
        exporters: [kafka/to-staging,jaeger_storage_exporter]
  extensions:
    healthcheckv2:
      use_v2: true
      http:
        endpoint: 0.0.0.0:13133
    # pprof:
    #   endpoint: 0.0.0.0:1777
    # zpages:
    #   endpoint: 0.0.0.0:55679
    jaeger_query:
      storage:
        traces: primary_store
        traces_archive: archive_store
    jaeger_storage:
      backends:
        primary_store:
          elasticsearch:
            index_prefix: jaeger
            server_urls: ["https://k8s-alert-helm-elasticsearch.k8s-alert.svc.cluster.local:9200"]
            username: elastic
            password: Pvcb@123
          # memory:
          #   max_traces: 100000
        archive_store:
          elasticsearch:
            index_prefix: jaeger-archive
            server_urls: ["https://k8s-alert-helm-elasticsearch.k8s-alert.svc.cluster.local:9200"]
            username: elastic
            password: Pvcb@123
          # memory:
          #   max_traces: 100000
  receivers:
    kafka:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      topic: traces.jaeger_proto
      initial_offset: earliest
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
    kafka/traces-processed:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      group_id: traces-02-stream-otlp-proto-processed
      protocol_version: 3.7.0
      topic: traces.otlp_proto.processed
    kafka/traces-raw:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      group_id: traces-02-stream-otlp-proto-raw
      protocol_version: 3.7.0
      topic: traces.otlp_proto.raw
  processors:
    memory_limiter:
      check_interval: 1s
      limit_percentage: 90
      spike_limit_percentage: 15

    batch:
      send_batch_size: 100
      timeout: 10s
  exporters:
    kafka:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      topic: traces.jaeger_proto
    kafka/to-staging:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      protocol_version: 3.7.0
      sending_queue:
        num_consumers: 200
        queue_size: 20000
      topic: traces.otlp_proto.staging
    jaeger_storage_exporter:
      trace_storage: primary_store
config:
  service:
    extensions: [jaeger_storage, jaeger_query, healthcheckv2]
    pipelines:
      traces:
        receivers: [kafka]
        processors: [batch]
        exporters: [jaeger_storage_exporter]
      traces/kafka:
        receivers: [otlp]
        processors: [batch]
        exporters: [kafka]
  extensions:
    healthcheckv2:
      use_v2: true
      http:
        endpoint: 0.0.0.0:13133
    # pprof:
    #   endpoint: 0.0.0.0:1777
    # zpages:
    #   endpoint: 0.0.0.0:55679
    jaeger_query:
      storage:
        traces: primary_store
        traces_archive: archive_store
    jaeger_storage:
      backends:
        primary_store:
          elasticsearch:
            index_prefix: jaeger
            server_urls: ["https://k8s-alert-helm-elasticsearch.k8s-alert:9200"]
            username: elastic
            password: Pvcb@123
          memory:
            max_traces: 100000
        archive_store:
          elasticsearch:
            index_prefix: jaeger-archive
            server_urls: ["https://k8s-alert-helm-elasticsearch.k8s-alert:9200"]
            username: elastic
            password: Pvcb@123
          memory:
            max_traces: 100000
  receivers:
    kafka:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      topic: traces.jaeger_proto
      initial_offset: earliest
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
    kafka/traces-processed:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      group_id: traces-02-stream-otlp-proto-processed
      protocol_version: 3.7.0
      topic: traces.otlp_proto.processed
    kafka/traces-raw:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      group_id: traces-02-stream-otlp-proto-raw
      protocol_version: 3.7.0
      topic: traces.otlp_proto.raw
  processors:
    batch: null
  exporters:
    kafka:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka-v1:9092
      topic: traces.jaeger_proto
    jaeger_storage_exporter:
      trace_storage: primary_store
# The following settings apply to Jaeger v1 and partially to Jaeger v2
global:
  imageRegistry:
provisionDataStore:
  cassandra: false
  elasticsearch: false
  kafka: false
networkPolicy:
  enabled: false
# Overrides the image tag where default is the chart appVersion.
tag: ""
nameOverride: ""
fullnameOverride: "traces-03-jaeger-ingester-new"
allInOne:
  enabled: false
  replicas: 1
  image:
    registry: ""
    repository: jaegertracing/jaeger
    tag: ""
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
  extraEnv: []
  extraSecretMounts:
    []
    # - name: jaeger-tls
    #   mountPath: /tls
    #   subPath: ""
    #   secretName: jaeger-tls
    #   readOnly: true
  # command line arguments / CLI flags
  # See https://www.jaegertracing.io/docs/cli/
  args: []
  # samplingConfig: |-
  #   {
  #     "default_strategy": {
  #       "type": "probabilistic",
  #       "param": 1
  #     }
  #   }
  serviceAccount:
    annotations: {}
    automountServiceAccountToken: true
  service:
    headless: true
    collector:
      otlp:
        grpc:
          name: otlp-grpc
        http:
          name: otlp-http
  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    annotations: {}
    labels: {}
    # Used to create an Ingress record.
    # hosts:
    #   - chart-example.local
    # annotations:
    #   kubernetes.io/ingress.class: nginx
    #   kubernetes.io/tls-acme: "true"
    # labels:
    #   app: jaeger
    # tls:
    #   # Secrets must be manually created in the namespace.
    #   - secretName: chart-example-tls
    #     hosts:
    #       - chart-example.local
    pathType:
  resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 256m
        memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  podSecurityContext:
    runAsUser: 10001
    runAsGroup: 10001
    fsGroup: 10001
  securityContext: {}
storage:
    # allowed values (cassandra, elasticsearch, grpc-plugin, badger, memory)
    type: elasticsearch
    elasticsearch:
      scheme: https
      host: k8s-alert-helm-elasticsearch.k8s-alert
      port: 9200
      anonymous: false
      user: elastic
      usePassword: true
      password: Pvcb@123
      # indexPrefix: test
      ## Use existing secret (ignores previous password)
      # existingSecret:
      # existingSecretKey:
      nodesWanOnly: false
      extraEnv:
        - name: OTEL_LOG_LEVEL
          value: debug
        - name: LOG_LEVEL
          value: debug
        - name: ES_TLS_ENABLED
          value: "true"
        - name: ES_TLS_SKIP_HOST_VERIFY
          value: "true"
        - name: ES_NUM_SHARDS
          value: "10"
        - name: ES_NUM_REPLICAS
          value: "1"  
      ## ES related env vars to be configured on the concerned components
      # - name: ES_SERVER_URLS
      #   value: http://elasticsearch-master:9200
      # - name: ES_USERNAME
      #   value: elastic
      # - name: ES_INDEX_PREFIX
      #   value: test
      ## ES related cmd line opts to be configured on the concerned components
      cmdlineParams:
        {}
        # es.server-urls: http://elasticsearch-master:9200
        # es.username: elastic
        # es.index-prefix: test
      tls:
        enabled: false
        secretName: es-tls-secret
        # The mount properties of the secret
        mountPath: /es-tls/ca-cert.pem
        subPath: ca-cert.pem
        # How ES_TLS_CA variable will be set in the various components
        ca: /es-tls/ca-cert.pem
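      ## Note: the ES_TLS_* env vars set under extraEnv above configure the v1
      ## binaries only; the v2 binary reads its YAML config file instead. Assuming
      ## this secret is still mounted at /es-tls/ca-cert.pem, the same CA could be
      ## referenced from userconfig, e.g.:
      ##   jaeger_storage:
      ##     backends:
      ##       primary_store:
      ##         elasticsearch:
      ##           tls:
      ##             ca_file: /es-tls/ca-cert.pem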

    kafka:
      brokers:
        - observability-kafka-bootstrap.k8s-kafka:9092
      topic: traces.jaeger_proto
      authentication: none
      extraEnv: []
    grpcPlugin:
      extraEnv: []
    badger:
      ephemeral: true
      persistence:
        mountPath: /mnt/data
        useExistingPvcName: ""
      extraEnv: []
    memory:
      extraEnv: []
schema:
    annotations: {}
    tolerations: []
    image:
      registry: ""
      repository: jaegertracing/jaeger-cassandra-schema
      tag: ""
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
    resources:
      {}
      # limits:
      #   cpu: 500m
      #   memory: 512Mi
      # requests:
      #   cpu: 256m
      #   memory: 128Mi
    serviceAccount:
      create: true
      # Explicitly mounts the API credentials for the Service Account
      automountServiceAccountToken: true
      name:
    podAnnotations: {}
    podLabels: {}
    securityContext: {}
    podSecurityContext: {}
    ## Deadline for cassandra schema creation job
    activeDeadlineSeconds: 300
    extraEnv:
      []
      # - name: MODE
      #   value: prod
      # - name: TRACE_TTL
      #   value: "172800"
      # - name: DEPENDENCIES_TTL
      #   value: "0"
# For configurable values of the elasticsearch if provisioned, please see:
# https://github.com/bitnami/charts/tree/main/bitnami/elasticsearch
ingester:
  enabled: false
  podSecurityContext: {}
  securityContext: {}
  annotations: {}
  image:
    registry: ""
    repository: jaegertracing/jaeger-ingester
    tag: ""
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
  dnsPolicy: ClusterFirst
  cmdlineParams: {}
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 10
    behavior: {}
    # targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  service:
    annotations: {}
    # List of IP ranges that are allowed to access the load balancer (if supported)
    loadBalancerSourceRanges: []
    type: ClusterIP
  resources:
    {}
    # limits:
    #   cpu: 1
    #   memory: 1Gi
    # requests:
    #   cpu: 500m
    #   memory: 512Mi
  serviceAccount:
    create: true
    # Explicitly mounts the API credentials for the Service Account
    automountServiceAccountToken: false
    annotations: {}
    name:
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  ## Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  podLabels: {}
  extraSecretMounts: []
  extraConfigmapMounts: []
  extraEnv: []
  envFrom: []
  initContainers: []
  serviceMonitor:
    enabled: true
    additionalLabels: {}
    # https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
    relabelings: []
    # -- ServiceMonitor metric relabel configs to apply to samples before ingestion
    # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#endpoint
    metricRelabelings: []
agent:
  podSecurityContext: {}
  securityContext: {}
  enabled: false
  annotations: {}
  image:
    registry: ""
    repository: jaegertracing/jaeger-agent
    tag: ""
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
  cmdlineParams: {}
  extraEnv: []
  daemonset:
    useHostPort: false
    updateStrategy:
      {}
      # type: RollingUpdate
      # rollingUpdate:
      #   maxUnavailable: 1
  service:
    annotations: {}
    # List of IP ranges that are allowed to access the load balancer (if supported)
    loadBalancerSourceRanges: []
    type: ClusterIP
    # zipkinThriftPort: accept zipkin.thrift over compact thrift protocol
    zipkinThriftPort: 5775
    # compactPort: accept jaeger.thrift over compact thrift protocol
    compactPort: 6831
    # binaryPort: accept jaeger.thrift over binary thrift protocol
    binaryPort: 6832
    # samplingPort: (HTTP) serve configs, sampling strategies
    samplingPort: 5778
  resources:
    {}
    # limits:
    #   cpu: 500m
    #   memory: 512Mi
    # requests:
    #   cpu: 256m
    #   memory: 128Mi
  serviceAccount:
    create: true
    # Explicitly mounts the API credentials for the Service Account
    automountServiceAccountToken: false
    name:
    annotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  ## Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  podLabels: {}
  extraSecretMounts: []
  # - name: jaeger-tls
  #   mountPath: /tls
  #   subPath: ""
  #   secretName: jaeger-tls
  #   readOnly: true
  extraConfigmapMounts: []
  # - name: jaeger-config
  #   mountPath: /config
  #   subPath: ""
  #   configMap: jaeger-config
  #   readOnly: true
  envFrom: []
  useHostNetwork: false
  dnsPolicy: ClusterFirst
  priorityClassName: ""
  initContainers: []
  serviceMonitor:
    enabled: true
    additionalLabels: {}
    # https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
    relabelings: []
    # -- ServiceMonitor metric relabel configs to apply to samples before ingestion
    # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#endpoint
    metricRelabelings: []
collector:
  enabled: true
  replicaCount: 2
  cmdlineParams: {}
  podSecurityContext: {}
  securityContext: {}
  annotations: {}
  image:
    registry: ""
    repository: jaegertracing/jaeger
    tag: ""
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
  dnsPolicy: ClusterFirst
  extraEnv:
    - name: OTEL_LOG_LEVEL
      value: debug
    - name: LOG_LEVEL
      value: debug
    - name: ES_TLS_ENABLED
      value: "true"
    - name: ES_TLS_SKIP_HOST_VERIFY
      value: "true"
    - name: ES_NUM_SHARDS
      value: "10"
    - name: ES_NUM_REPLICAS
      value: "1"      
  envFrom: []
  basePath: /
  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 10
    behavior: {}
    # targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  service:
    annotations: {}
    # The IP to be used by the load balancer (if supported)
    loadBalancerIP: ""
    # List of IP ranges that are allowed to access the load balancer (if supported)
    loadBalancerSourceRanges: []
    type: ClusterIP
    # Cluster IP address to assign to service. Set to None to make service headless
    clusterIP: ""
    grpc:
      port: 14250
      # nodePort:
    # httpPort: can accept spans directly from clients in jaeger.thrift format
    http:
      port: 14268
      # nodePort:
    # can accept Zipkin spans in JSON or Thrift
    zipkin:
      {}
      # port: 9411
      # nodePort:
    otlp:
      grpc:
        name: otlp-grpc
        port: 4317
      http:
        name: otlp-http
        port: 4318
    healthCheck:
      name: healthcheck
      targetPort: healthcheck
  ingress:
    enabled: false
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    # ingressClassName: nginx
    annotations: {}
    labels: {}
    # Used to create an Ingress record.
    # The 'hosts' variable accepts two formats:
    # hosts:
    #   - chart-example.local
    # or:
    # hosts:
    #   - host: chart-example.local
    #     servicePort: grpc
    # annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # labels:
    # app: jaeger-collector
    # tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
    pathType:
  resources:
    {}
    # limits:
    #   cpu: 1
    #   memory: 1Gi
    # requests:
    #   cpu: 500m
    #   memory: 512Mi
  serviceAccount:
    create: true
    # Explicitly mounts the API credentials for the Service Account
    automountServiceAccountToken: false
    name:
    annotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  ## Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  podLabels: {}
  extraSecretMounts: []
  # - name: jaeger-tls
  #   mountPath: /tls
  #   subPath: ""
  #   secretName: jaeger-tls
  #   readOnly: true
  extraConfigmapMounts: []
  # - name: jaeger-config
  #   mountPath: /config
  #   subPath: ""
  #   configMap: jaeger-config
  #   readOnly: true
  # samplingConfig: |-
  #   {
  #     "service_strategies": [
  #       {
  #         "service": "foo",
  #         "type": "probabilistic",
  #         "param": 0.8,
  #         "operation_strategies": [
  #           {
  #             "operation": "op1",
  #             "type": "probabilistic",
  #             "param": 0.2
  #           },
  #           {
  #             "operation": "op2",
  #             "type": "probabilistic",
  #             "param": 0.4
  #           }
  #         ]
  #       },
  #       {
  #         "service": "bar",
  #         "type": "ratelimiting",
  #         "param": 5
  #       }
  #     ],
  #     "default_strategy": {
  #       "type": "probabilistic",
  #       "param": 1
  #     }
  #   }
  priorityClassName: ""
  serviceMonitor:
    enabled: true
    additionalLabels: {}
    # https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
    relabelings: []
    # -- ServiceMonitor metric relabel configs to apply to samples before ingestion
    # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#endpoint
    metricRelabelings: []
  initContainers: []
  networkPolicy:
    enabled: false
    # ingressRules:
    #   namespaceSelector: {}
    #   podSelector: {}
    #   customRules: []
    # egressRules:
    #   namespaceSelector: {}
    #   podSelector: {}
    #   customRules: []
query:
  enabled: true
  basePath: /
  initContainers: []
  imagePullSecrets: []
  oAuthSidecar:
    enabled: false
    resources:
      # {}
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 256m
        memory: 128Mi
    image:
      registry: quay.io
      repository: oauth2-proxy/oauth2-proxy
      tag: v7.6.0
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
    containerPort: 4180
    args: []
    extraEnv:
      - name: ES_TLS_ENABLED
        value: "true"
      - name: ES_TLS_SKIP_HOST_VERIFY
        value: "true"
      - name: ES_NUM_SHARDS
        value: "10"
      - name: ES_NUM_REPLICAS
        value: "1"    
    extraConfigmapMounts: []
    extraSecretMounts: []
  config: |-
    provider = "oidc"
    https_address = ":4180"
    upstreams = ["http://localhost:16686"]
    redirect_url = "https://jaeger-svc-domain/oauth2/callback"
    client_id = "jaeger-query"
    oidc_issuer_url = "https://keycloak-svc-domain/auth/realms/Default"
    cookie_secure = "true"
    email_domains = "*"
    oidc_groups_claim = "groups"
    user_id_claim = "preferred_username"
    skip_provider_button = "true"
  podSecurityContext: {}
  securityContext: {}
  agentSidecar:
    enabled: false
  #    resources:
  #      limits:
  #        cpu: 500m
  #        memory: 512Mi
  #      requests:
  #        cpu: 256m
  #        memory: 128Mi
  annotations: {}
  image:
    registry: ""
    repository: jaegertracing/jaeger
    tag: ""
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
  dnsPolicy: ClusterFirst
  cmdlineParams: {}
  extraEnv: []
  envFrom: []
  replicaCount: 1
  service:
    annotations: {}
    type: ClusterIP
    # List of IP ranges that are allowed to access the load balancer (if supported)
    loadBalancerSourceRanges: []
    port: 80
    # Specify a custom target port (e.g. port of auth proxy)
    # targetPort: 8080
    # Specify a specific node port when type is NodePort
    # nodePort: 32500
    healthCheck:
      name: healthcheck
      targetPort: healthcheck
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: cncf-cert-manager-signer-issuer-from-le
      kubernetes.io/ingress.class: nginx-k8s
    hosts:
      - jaeger-query-v1.k8s.monitoring.nonprod.pvcombank.io
    tls:
      - hosts:
          - jaeger-query-v1.k8s.monitoring.nonprod.pvcombank.io
        secretName: jaeger-query-v1-ingress-tls
    # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
    # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
    ingressClassName: "nginx-k8s"
    labels: {}
    # pathType: ImplementationSpecific
    health:
      exposed: false
  resources:
    {}
    # limits:
    #   cpu: 500m
    #   memory: 512Mi
    # requests:
    #    cpu: 256m
    #    memory: 128Mi
  serviceAccount:
    create: true
    # Explicitly mounts the API credentials for the Service Account
    automountServiceAccountToken: false
    name:
    annotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}
  ## Additional pod labels
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 
What is the difference between config and userconfig in the Jaeger Helm chart for v2?

I noticed that if I leave userconfig empty (using the default value), the deployment fails with an error. Why is userconfig mandatory, while config can be omitted?

Screenshot

No response

Additional context

No response

Jaeger backend version

v2.0.0-rc2

SDK

No response

Pipeline

old v1: OTel SDK -> OTel Collector -> Kafka -> OTel Collector -> Jaeger Collector -> Kafka -> Jaeger Ingester -> Elasticsearch
v2: OTel SDK -> OTel Collector -> Kafka -> Jaeger OTel Collector -> Elasticsearch

Storage backend

Elasticsearch 9

Operating system

Linux

Deployment model

EKS

Deployment configs

helm
