[Feature]: Custom Headers on Elasticsearch Storage Requests #7517

@kking124

Description

What happened?

I am attempting to have a jaegertracing/jaeger 2.10.0 container connect to an AWS-hosted OpenSearch 3.1 storage backend through aws-sigv4-proxy version 1.10.

Despite my server URL being set to http, Jaeger keeps making HTTPS requests. I have been unable to disable this forced HTTPS.

I have also tried both going directly to OpenSearch and going through the proxy; neither works, even though the correct task role is assumed.
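
To illustrate the feature in the title: the ask is for a way to attach custom headers to the OpenSearch/Elasticsearch storage requests themselves. The snippet below is purely hypothetical; the additional_headers field does not exist in Jaeger v2.10.0 and the name is only illustrative:

  jaeger_storage:
    backends:
      some_storage:
        opensearch:
          server_urls:
            - http://${opensearch_host}:${opensearch_port}
          # Hypothetical option, not in Jaeger v2.10.0: shown only to
          # illustrate the "custom headers on storage requests" feature
          # this issue is asking for.
          additional_headers:
            X-Example-Auth: "example-value"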

Steps to reproduce

  1. Run AWS-hosted OpenSearch in a private subnet of a VPC.
  2. Run the aws-sigv4-proxy in the same private subnet.
  3. Run Jaeger in the private subnet with essentially the default OpenSearch configuration from the docs: https://github.com/jaegertracing/jaeger/blob/v2.10.0/cmd/jaeger/config-elasticsearch.yaml
  4. Jaeger crash-loops.

Expected behavior

Jaeger would start successfully and have access to OpenSearch as a backend.

Relevant log output

jaeger log:

2025-09-23T17:26:55.797Z	error	extensions/extensions.go:58	Failed to start extension	{
    "resource": {
        "service.instance.id": "2e547c81-8131-44d1-9d04-e6a5e88f244c",
        "service.name": "jaeger",
        "service.version": "v2.10.0"
    },
    "otelcol.component.id": "jaeger_storage",
    "otelcol.component.kind": "extension",
    "error": "failed to initialize storage 'some_storage': failed to create Elasticsearch client: health check timeout: no Elasticsearch node available"
}


sigv4-proxy log:

time="2025-09-23T17:26:54Z" level=error msg="unable to proxy request" error="Head \"https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:000/\": http: server gave HTTP response to HTTPS client"

Screenshot

N/A

Additional context

I have tried both using the proxy and going directly to OpenSearch. Neither works.

The sigv4 proxy itself does work for sending requests to OpenSearch, as it is already in use for a user access route that goes:

user -> CloudFront -> ALB -> authn/authz proxy (private subnet, Fargate) -> sigv4 proxy (private subnet, Fargate) -> OpenSearch

I've tried setting any and all of the following, and none seem to affect the forced HTTPS for OpenSearch/Elasticsearch, despite the explicit http scheme in the server URL:

  jaeger_storage:
    backends:
      some_storage: &opensearch_config
        opensearch:
          sniffing:
            enabled: false
            use_https: false
          version: 7
          disable_health_check: true

Disabling all sniffing and health checks and pinning the version to 7 finally surfaces the error in Jaeger itself: it is sending an HTTPS request to an HTTP endpoint.
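
What I would expect to be able to do is force plain HTTP via an explicit TLS block. The sketch below is an assumption on my part: the tls/insecure field names mirror the collector-style TLS client settings and may not match what the opensearch backend actually accepts in v2.10.0:

  jaeger_storage:
    backends:
      some_storage: &opensearch_config
        opensearch:
          server_urls:
            - http://${opensearch_host}:${opensearch_port}
          # Assumed collector-style TLS client block; "insecure: true" is
          # meant to disable TLS entirely so requests go out as plain HTTP.
          # Field names are a guess, not confirmed against v2.10.0.
          tls:
            insecure: true
          sniffing:
            enabled: false
            use_https: false
          disable_health_check: true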

Jaeger backend version

2.10.0

SDK

N/A

Pipeline

N/A

Storage backend

OpenSearch 3.1

Operating system

Linux

Deployment model

ECS Fargate

Deployment configs

config-opensearch.yaml.tpl (see: https://github.com/jaegertracing/jaeger/blob/v2.10.0/cmd/jaeger/config-elasticsearch.yaml)

service:
  extensions: [jaeger_storage, jaeger_query, healthcheckv2]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger_storage_exporter]
  telemetry:
    resource:
      service.name: jaeger
    metrics:
      level: basic
      readers:
        - pull:
            exporter:
              prometheus:
                host: 0.0.0.0
                port: 8888
    logs:
      level: debug

extensions:
  healthcheckv2:
    use_v2: true
    http:

  jaeger_query:
    base_path: /traces
    storage:
      traces: some_storage
      metrics: some_storage
      traces_archive: another_storage
    ui:
      config_file: ./config-ui.json

  jaeger_storage:
    backends:
      some_storage: &opensearch_config
        opensearch:
          server_urls:
            - http://${opensearch_host}:${opensearch_port}
          indices:
            index_prefix: "${index_prefix}"
            spans:
              date_layout: "2006-01-02"
              rollover_frequency: "day"
              shards: 5
              replicas: 1
            services:
              date_layout: "2006-01-02"
              rollover_frequency: "day"
              shards: 5
              replicas: 1
            dependencies:
              date_layout: "2006-01-02"
              rollover_frequency: "day"
              shards: 5
              replicas: 1
            sampling:
              date_layout: "2006-01-02"
              rollover_frequency: "day"
              shards: 5
              replicas: 1
      another_storage:
        opensearch:
          server_urls:
            - http://${opensearch_host}:${opensearch_port}
          indices:
            index_prefix: "${archive_index}"

    # Optional, enable metrics backend to use Monitor tab
    metric_backends:
      some_storage: *opensearch_config

receivers:
  otlp:
    protocols:
      grpc:
      http:
        endpoint: "0.0.0.0:4318"

processors:
  batch:

exporters:
  jaeger_storage_exporter:
    trace_storage: some_storage


sigv4 container definition

    {
      name      = "sigv4-proxy"
      image     = "public.ecr.aws/aws-observability/aws-sigv4-proxy:1.10"
      essential = true
      portMappings = [
        {
          containerPort = local.opensearch_proxy_port
          hostPort      = local.opensearch_proxy_port
          protocol      = "tcp"
        }
      ]
      command = [
        "--verbose",
        "--log-failed-requests", 
        "--log-signing-process", 
        "--no-verify-ssl",
        "--name", 
        "es",
        "--region",
        "${var.aws_region}",
        "--sign-host",
        "${var.aws_region}.es.amazonaws.com",
        "--strip",
        "Cloudfront-Forwarded-Proto Cloudfront-Is-Desktop-Viewer Cloudfront-Is-Mobile-Viewer Cloudfront-Is-Smarttv-Viewer Cloudfront-Is-Tablet-Viewer Cloudfront-Viewer-Asn Cloudfront-Viewer-Country Cookie"
      ]
      environment = [
        {
            name  = "AWS_SDK_LOAD_CONFIG"
            value = "true"
        }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.opensearch_proxy.name
          "awslogs-region"        = var.aws_region
          "awslogs-stream-prefix" = "sigv4-proxy"
        }
      }
    }


jaeger container definition

    {
      name  = "jaeger-ui"
      image = "jaegertracing/jaeger:latest"
      
      essential = true
      
      portMappings = [
        {
          containerPort = local.jaeger_port
          hostPort      = local.jaeger_port
          protocol      = "tcp"
        }
      ]

      command = [
        "--config",
        "/conf/config-opensearch.yaml"
      ]
      healthCheck = {
        command = [
          "CMD-SHELL",
          "wget --no-verbose --tries=1 --spider http://localhost:${local.jaeger_port}/ || exit 1"
        ]
        interval    = 30
        timeout     = 5
        retries     = 3
        startPeriod = 60
      }
      mountPoints = [
        {
          sourceVolume  = "config"
          containerPath = "/conf"
        }
      ]
      dependsOn = [
        {
          condition     = "HEALTHY"
          containerName = "aws-appconfig-agent"
        }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.jaeger_ui.name
          "awslogs-region"        = var.aws_region
          "awslogs-stream-prefix" = "jaeger-ui"
        }
      }
    },
    {
      name      = "aws-appconfig-agent"
      image     = "public.ecr.aws/aws-appconfig/aws-appconfig-agent:latest"
      essential = true
      environment = [
        {
          name  = "MANIFEST"
          value = module.jaeger_appconfig_manifest.configuration_identifier
        },
        {
          name  = "PREFETCH_LIST"
          value = "${module.jaeger_config_opensearch.configuration_identifier},${module.jaeger_config_ui.configuration_identifier}"
        }
      ]
      healthCheck = {
        command = [
          "CMD-SHELL",
          "curl -f ${module.jaeger_config_opensearch.appconfig_agent_https_endpoint} && chmod -R 644 /data/* || exit 1"
        ]
        interval    = 60
        timeout     = 5
        retries     = 3
        startPeriod = 10
      }
      mountPoints = [
        {
          sourceVolume  = "config"
          containerPath = "/data"
        }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.ui_proxy.name
          "awslogs-region"        = var.aws_region
          "awslogs-stream-prefix" = "appconfig-agent"
        }
      }
    }
