This repository has been archived by the owner on Feb 15, 2022. It is now read-only.

otel-v1-apm-service-map is always empty #720

Open
lgarvey opened this issue Jul 12, 2021 · 4 comments
Labels
bug Something isn't working

Comments


lgarvey commented Jul 12, 2021

Describe the bug
The otel-v1-apm-service-map ES index is always empty, despite span data being received.

Meanwhile, data-prepper reports that it has processed incoming data: "INFO com.amazon.dataprepper.pipeline.ProcessWorker - service-map-pipeline Worker: Processing 1 records from buffer"
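
For reference, a quick way to compare document counts between the raw span index and the service-map index (the endpoint and credentials below are placeholders; the index names are the Data Prepper defaults):

import requests

ES_HOST = "https://localhost:9200"   # placeholder endpoint
AUTH = ("admin", "admin")            # placeholder credentials

# default index names created by the trace_analytics_raw / trace_analytics_service_map sinks
for index in ("otel-v1-apm-span-*", "otel-v1-apm-service-map"):
    resp = requests.get(f"{ES_HOST}/{index}/_count", auth=AUTH, verify=False)
    print(index, resp.json().get("count"))

In our case the service-map count is always zero.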

Expected behavior
I'd expect the otel-v1-apm-service-map index to contain data, or at least an error in either the data-prepper logs or the ES logs.

Environment (please complete the following information):
AES - 7.10
data-prepper 1.0 (docker)
collector v0.29.0 (docker)

Additional context

#!/usr/bin/env python3

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import (
    ConsoleSpanExporter,
    BatchSpanProcessor,
)

from opentelemetry.propagate import set_global_textmap
from opentelemetry.propagators.b3 import B3Format
from opentelemetry.sdk.resources import Resource
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor

TRACER_NAME = "open-telemetry"

if __name__ == "__main__":

    # enable B3 propagation
    #set_global_textmap(B3Format())

    # build resource/tags
    resource = Resource(attributes={
        "service.name": "my-local-service",
        "space": "platform",
        "org": "local",
    })

    # initiate trace provider
    # TODO: add logic to switch between console or otlp
    trace.set_tracer_provider(
        TracerProvider(resource=resource)
    )

    #processor = BatchSpanProcessor(ConsoleSpanExporter())
    processor = BatchSpanProcessor(
        OTLPSpanExporter(endpoint="[redacted url]", insecure=False)
    )

    trace.get_tracer_provider().add_span_processor(processor)

    # this inserts the open-tracing middleware into settings.MIDDLEWARE
    # DjangoInstrumentor().instrument()

    # psycopg2 instrumentation
    #Psycopg2Instrumentor().instrument()
    # redis instrumentation

    # elastic instrumentation

    # celery instrumentation

    # set the service name, space and org from VCAP

    tracer = trace.get_tracer(__name__)

    with tracer.start_as_current_span("foo"):
        with tracer.start_as_current_span("bar"):
            with tracer.start_as_current_span("baz"):
                print("Hello world from OpenTelemetry Python!")

data-prepper pipeline config:

entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - elasticsearch:
        hosts: ["${ELASTICSEARCH_HOST}" ]
        username: "${ELASTICSEARCH_USERNAME}"
        password: "${ELASTICSEARCH_PASSWORD}"
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - elasticsearch:
        hosts: ["${ELASTICSEARCH_HOST}" ]
        username: "${ELASTICSEARCH_USERNAME}"
        password: "${ELASTICSEARCH_PASSWORD}"
        trace_analytics_service_map: true

collector config:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:

processors:
  batch:

extensions:
  health_check:

exporters:
  logging:
    logLevel: debug
  otlp:
    endpoint: localhost:21890
    insecure: true

service:
  extensions: [health_check]
  pipelines:

    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp]
@bryanhonof

Facing the same issue with OpenSearch Dashboards 1.2.0 and OpenSearch 1.2.3. It worked in OpenSearch 1.1, where the tracing part was still a plugin. We only started seeing this issue after upgrading. My config and pipelines look pretty much identical to the above.

@dlvenable
Contributor

@bryanhonof , What version and distribution of Data Prepper are you using?

This GitHub project is the OpenDistro distribution of Data Prepper. It is no longer maintained and work has moved to the OpenSearch distribution of Data Prepper.

The latest version of OpenSearch Data Prepper is 1.2.1. You can read about migrating in the Migrating from OpenDistro page.

@dlvenable
Contributor

@bryanhonof , Do your services have interactions with each other? There is a ticket in OpenSearch Data Prepper to support the scenario where services do not interact with each other.

opensearch-project/data-prepper#628
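
As I understand it, the service map is built from traces in which spans from two different service.name resources are linked as parent and child. A rough, untested sketch of such an interaction with the Python SDK (service names and endpoint are illustrative):

from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter


def make_provider(service_name):
    # one provider, and therefore one service.name, per simulated service
    provider = TracerProvider(resource=Resource(attributes={"service.name": service_name}))
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
    )
    return provider


frontend = make_provider("frontend")  # illustrative service names
backend = make_provider("backend")

# the inner span picks up the outer span from the current context, so both
# spans share one trace and form a cross-service parent/child relationship
with frontend.get_tracer(__name__).start_as_current_span("GET /checkout"):
    with backend.get_tracer(__name__).start_as_current_span("process-order"):
        pass

# flush the batch processors before the process exits
frontend.shutdown()
backend.shutdown()

With only a single service.name, as in the reproduction script above, there is no cross-service relationship for the service map to record.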

Does this sound like the issue you are experiencing?


bryanhonof commented Jan 18, 2022

What version and distribution of Data Prepper are you using?

@dlvenable We're currently using v1.2.1 from Docker Hub.

Do your services have interactions with each other?

They do, yes.

The weird thing is that it only "disappeared" recently, when we upgraded from OpenSearch 1.1 to OpenSearch Dashboards 1.2.0 and OpenSearch 1.2.3. Is there perhaps a migration step we missed? Because in 1.1 the tracing part of OpenSearch, if I'm not mistaken, was still a plugin.
