Component(s)
exporter/opensearch
What happened?
Description
When using a dynamic logs_index with a custom placeholder, e.g. "%{k8s.container.name}", log records do not always end up in the index that matches their attribute value.
Steps to Reproduce
- Set up the OpenTelemetry Collector (or the Operator) inside a Kubernetes cluster
- Configure an OpenSearch logs exporter with a custom logs_index: "logs-otel-%{k8s.container.name}"
Expected Result
Each log entry should be written to the index derived from the name of the container it originates from, e.g. an index beginning with logs-otel-nginx for a container named nginx.
Actual Result
Many log entries end up in seemingly arbitrary indices.
It looks like, somewhere in the process of bulk-writing log entries, whole sets of entries are sent to OpenSearch with the dynamic logs_index evaluated only for the first entry of the set (see the sketch below).
Enabling or disabling batching does not seem to have any effect on this issue.
See the screen capture of the Kibana output below: the value of the field "resource.k8s.container.name" should be reflected in "_index", but in many cases it is not.
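To illustrate the suspected behaviour, here is a minimal, self-contained Go sketch. It is not the exporter's actual code; logRecord and resolveIndex are hypothetical stand-ins. It contrasts per-record index resolution (the behaviour I would expect) with per-batch resolution (which would explain what I am seeing).

package main

import (
	"fmt"
	"strings"
)

// logRecord is a stand-in for a log entry with its resource attributes;
// the real exporter works on pdata.Logs, this is only an illustration.
type logRecord struct {
	attrs map[string]string
	body  string
}

// resolveIndex substitutes %{attr} placeholders in the configured logs_index
// pattern with the record's attribute values and falls back to the configured
// fallback index when a placeholder cannot be resolved (hypothetical helper,
// not the exporter's actual function).
func resolveIndex(pattern, fallback string, rec logRecord) string {
	out := pattern
	for k, v := range rec.attrs {
		out = strings.ReplaceAll(out, "%{"+k+"}", v)
	}
	if strings.Contains(out, "%{") {
		return fallback
	}
	return out
}

func main() {
	records := []logRecord{
		{attrs: map[string]string{"k8s.container.name": "nginx"}, body: "GET /"},
		{attrs: map[string]string{"k8s.container.name": "redis"}, body: "PING"},
	}

	// Expected behaviour: the index is resolved once per record, so every
	// bulk action carries the index derived from its own attributes
	// (logs-otel-nginx, logs-otel-redis, ...).
	for _, rec := range records {
		idx := resolveIndex("logs-otel-%{k8s.container.name}", "default", rec)
		fmt.Printf("{\"create\": {\"_index\": %q}}\n%s\n", idx, rec.body)
	}

	// Suspected behaviour: the index is resolved only for the first record
	// of a set and reused for the rest, so every record in the bulk request
	// lands in logs-otel-nginx, regardless of its own container name.
	batchIdx := resolveIndex("logs-otel-%{k8s.container.name}", "default", records[0])
	for _, rec := range records {
		fmt.Printf("{\"create\": {\"_index\": %q}}\n%s\n", batchIdx, rec.body)
	}
}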
Collector version
v0.129.0 and higher
Environment information
Environment
Kubernetes: 1.33
Docker image: otel/opentelemetry-collector-contrib:0.129.0 (up to 0.136.0)
OpenTelemetry Collector configuration
exporters:
  opensearch/logs:
    http:
      endpoint: <my opensearch endpoint>
    logs_index: "logs-otel-%{k8s.container.name}"
    logs_index_fallback: "default"
    logs_index_time_format: "yyyy.MM.dd"
...
service:
  pipelines:
    logs:
      exporters:
        - opensearch/logs
...
Log output
Additional context
No response