Add span attributes to tracing module #7269
```bash
$ go test ./...
$ go test ./modules/caddyhttp/tracing/
```
Also, I thought that for newbies like me (if you want to attract them 😅) this is a helpful first pointer after the build.
Thanks for the contribution! I'll ask if @hairyhenderson, who I know is very busy, might have a chance to look this over, since I don't know much about the metrics/tracing stuff.
Thank you! That would be much appreciated. But I also think the tracing part is the simple one: based on the test, I believe the span attributes are already recorded correctly. The hard part is the replacements. I didn't find a good example of how response placeholders are generated and used. I pushed a test expecting a default span attribute from OTEL, but you can see it fail there. Additionally, I couldn't use the other response placeholders out of the box either. So I wonder whether I'm doing something wrong, either in setting up the test to be realistic, or in the middleware code that makes them unavailable.
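For anyone following along, here's a minimal stdlib-only sketch of the general idea being discussed: resolving placeholder strings like `{http.request.method}` into concrete span attribute values. This is an illustration only, not Caddy's actual replacer API (Caddy's real implementation lives in its `caddy.Replacer` type); the placeholder names and the pass-through policy for unknown keys are assumptions.

```go
package main

import (
	"fmt"
	"regexp"
)

// placeholderRe matches {key} tokens, mimicking Caddy-style placeholders.
// Illustrative stand-in only, not Caddy's real replacer implementation.
var placeholderRe = regexp.MustCompile(`\{([^}]+)\}`)

// resolve substitutes known placeholder values into s, leaving unknown
// placeholders untouched (one possible policy; Caddy's may differ).
func resolve(s string, vals map[string]string) string {
	return placeholderRe.ReplaceAllStringFunc(s, func(m string) string {
		if v, ok := vals[m[1:len(m)-1]]; ok {
			return v
		}
		return m
	})
}

func main() {
	vals := map[string]string{"http.request.method": "GET"}
	fmt.Println(resolve("method={http.request.method}", vals)) // prints "method=GET"
}
```

The per-request nature of placeholders is exactly why the middleware has to resolve them at request time rather than at provisioning time.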
Hi! Gently pinging this: is there anyone else that might be able to lend a hand with a review, @mholt? Perhaps implementers of other systems that have used placeholders, etc. 🙏
Well, the use of placeholders / replacer looks good IMO. I'm just not qualified to approve the rest of the code changes 😅
Alright! I found the Caddy documentation for placeholders, so I fixed the failing test, which was assuming response placeholders that actually aren't even supposed to be there. Then I decided to try the setup live before spending more of your time. So here's a trial of the functionality using the binary built from the branch:

**Setup**

To test trace output, I'm spinning up an OpenTelemetry collector with some debug output.

**Collector configuration**

```yaml
# collector.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch:
exporters:
  debug:
    verbosity: detailed
  debug/ignore:
    verbosity: basic
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug/ignore]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug/ignore]
```

**Current Caddy span name**

To establish a baseline, here's what the latest Caddy outputs when tracing the builtin metrics.

**Caddyfile**

**Docker compose**

```yaml
# docker-compose.yaml
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    volumes:
      - ./mycaddy:/etc/caddy
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://collector:4317
      OTEL_SERVICE_NAME: caddy
    depends_on:
      - collector
    ports:
      - "80:80"
  collector:
    restart: unless-stopped
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config", "/etc/otelcol/config.yaml"]
    volumes:
      - ./collector.yaml:/etc/otelcol/config.yaml:ro
```

When running, the custom metrics span name is written out as expected.

**Span attributes addition**

Next, I tested setting static span attributes with and without placeholders.

**Caddyfile**

**Docker compose**

```yaml
services:
  collector:
    restart: unless-stopped
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config", "/etc/otelcol/config.yaml"]
    volumes:
      - ./collector.yaml:/etc/otelcol/config.yaml:ro
```

**Output**

When separately running:

**Conclusion**

I would consider this done as far as I'm concerned, given that you've signed off on the placeholder implementation and the integration with OpenTelemetry works as expected 🙏 But of course, if you need another signoff, let's wait for that! Is there anyone besides Dave that we could ask to provide that? Or if you have any questions that would make you feel more comfortable with merging it yourself, I'm happy to answer as well!
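For completeness, a rough sequence that could reproduce the trial above. This is a sketch with assumptions: it presumes the collector and compose files shown earlier are in the working directory, and the request URL is a guess since the Caddyfile contents aren't shown here.

```shell
# Start Caddy and the collector (compose files from the sections above assumed present)
docker compose up -d

# Send any request through Caddy to produce a span (URL is an assumption)
curl -s http://localhost/ > /dev/null

# Inspect the collector's detailed debug output for the span and its attributes
docker compose logs collector
```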
Merry late Christmas and a happy almost new year! Just bumping this up to see if you had any thoughts on my analysis above, @mholt. Do you feel like more evidence or an OTEL expert review is needed to get this in? I hope the real-world test is conclusive enough, but let me know and I can try to look for others as well in my circle (but not tied to this project, unfortunately). Also, if you're still on holiday, no rush to respond! Please enjoy 😄
@felix-hilden I'd prefer it, but are you at least willing to maintain it if there's an issue? We could use help maintaining the tracing functionality, as you can tell :) If so, we can probably merge this and give it a try.
Understandable! It's a lot to promise that I'll actively maintain it, since this is a totally new project to me, but I'm certainly willing to resolve any bugs with this particular implementation. Outside of that, I'm also happy to chime in on related things and review other possible tracing work, so ping away! Happy new year!
Thank you -- would much appreciate it! Happy new year to you as well!
Cheers! Two questions related to this merge:
Not really -- if there's anything that needs to be added to the docs, then yes, but you've already covered that :) Thanks |
Fixes #7261
Hi @mholt, if I may ping you, thanks for labeling the associated issue! I decided to try to contribute a fix, as the scope is quite well defined.
First, I think my initial suggestion in the ticket of using a flat directive would probably
be improved by using a map instead, more consistent with what the Caddyfile already does:
```
tracing {
	span my-span-name
	span_attributes {
		attr1 value1
		attr2 {placeholder}
	}
}
```

I'm disclosing that I have used Claude to generate most of the code in this PR, as this is my first time touching Go in general, though naturally I reviewed the output and tried to understand it. So if holding my hand through the PR is too much, feel free to discard it altogether in favor of a better solution!
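To make the proposed shape concrete, here's a rough stdlib-only sketch of reading such a nested block into a map, where each line is an attribute name followed by its value. This is not Caddy's actual caddyfile parser (the real implementation would use Caddy's `caddyfile.Dispenser`); the function name and the line-splitting policy are assumptions for illustration.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSpanAttributes reads "name value" lines, mimicking the proposed
// span_attributes block; values may contain {placeholders} that would be
// resolved per request. Illustrative only, not Caddy's real parser.
func parseSpanAttributes(body string) map[string]string {
	attrs := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		name, value, ok := strings.Cut(line, " ")
		if !ok {
			continue // skip malformed lines in this sketch
		}
		attrs[name] = strings.TrimSpace(value)
	}
	return attrs
}

func main() {
	body := "attr1 value1\nattr2 {placeholder}"
	fmt.Println(parseSpanAttributes(body)["attr2"]) // prints "{placeholder}"
}
```

A map keeps attribute names unique by construction, which matches how OTEL span attributes behave (setting the same key twice overwrites).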
If not, there are a couple of things I wanted to ask about. I think, given the tests, it's already in a reasonable place. Building from that, on a practical level, if you have quick pointers:
Many thanks! I'm very excited to try out Caddy for real and hopefully switch to it for good.