Summary
Experiencing connection errors between Tempo components due to protocol mismatches and service registration issues. The primary error is:

```
rpc error: code = Unimplemented desc = unknown service tempopb.StreamingQuerier
```
Previously, the error was:

```
rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: http2: failed reading the frame payload: %!w(<nil>), note that the frame header looked like an HTTP/1.1 header"
```
Environment
- Kubernetes deployment using Helm chart
- Tempo version: 2.8.0
- Helm chart version: 1.23.1
Investigation Details
Port Configuration Issues
- The Kubernetes Service exposes ports 16686/16687 for the Jaeger UI and metrics
- netstat inside the pod shows port 7777 listening (no owning PID reported)
- Port 9095 is also listening (gRPC)
- The Service definition maps to ports 16686/16687, not to 7777
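For reference, a Service fragment that routed the exposed ports through to what the container actually listens on might look like the following. Only the port numbers (16686/16687, 7777, 9095) come from the observations above; the Service name and the mapping itself are a hypothetical sketch of the expected fix, not the chart's actual manifest:

```yaml
# Hypothetical Service fragment: routes the externally exposed
# Jaeger UI/metrics ports to the port observed listening in the pod.
apiVersion: v1
kind: Service
metadata:
  name: tempo-query-frontend   # assumed name; adjust to your Helm release
spec:
  ports:
    - name: jaeger-ui
      port: 16686
      targetPort: 7777         # port observed via netstat inside the pod
    - name: jaeger-metrics
      port: 16687
      targetPort: 16687
    - name: grpc
      port: 9095
      targetPort: 9095         # Tempo inter-component gRPC
```

Comparing the rendered chart manifest (`kubectl get svc -o yaml`) against the in-pod listeners is the quickest way to confirm whether the mismatch is in the chart's `targetPort` values.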
Protocol Mismatch
- The initial error indicated an HTTP/1.1 vs HTTP/2 protocol mismatch
- Added `stream_over_http_enabled: true` at the root level of the config
- Now getting the "unknown service tempopb.StreamingQuerier" error instead
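For context, the setting was placed at the top level of `tempo.yaml`, alongside the `server` block. A minimal sketch; only `stream_over_http_enabled` is taken from this issue, and the listen ports shown are Tempo's defaults, not necessarily this deployment's values:

```yaml
# Illustrative tempo.yaml fragment.
stream_over_http_enabled: true   # the setting added in this issue

server:
  http_listen_port: 3200   # Tempo default HTTP port (assumed)
  grpc_listen_port: 9095   # matches the gRPC listener observed in the pod
```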
Connection Tests
- A direct curl to port 7777 returns "Received HTTP/0.9 when not allowed"
- This suggests a gRPC (HTTP/2) endpoint is being probed with an HTTP/1.x client
Questions
- Why does the Service map to ports 16686/16687 when the application listens on 7777?
- Why isn't the StreamingQuerier service registered on the expected endpoint?
- Is there a configuration issue with the query frontend component?
Attempted Solutions
- Added `stream_over_http_enabled: true` to the root config
- Verified port configurations and service mappings
- Checked that components register their gRPC services as expected
Impact
Unable to perform trace searches, resulting in errors in the Grafana logs and potential service disruption.
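As a possible stopgap until the StreamingQuerier registration issue is resolved, streaming can be disabled on the Grafana side so trace searches fall back to plain HTTP queries. A hedged provisioning sketch: the data source name and URL are placeholders, and the `streamingEnabled` option is assumed to be available in the Grafana version in use:

```yaml
# Hypothetical Grafana data source provisioning fragment.
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    url: http://tempo-query-frontend:3200   # placeholder URL
    jsonData:
      streamingEnabled:
        search: false   # avoid the gRPC streaming path for searches
```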
Additional Context
The issue appears to be related to how the Tempo components are configured to communicate with each other, specifically around the query frontend and querier components.