Go kit: the road ahead #843
Comments
With OpenCensus and OpenTelemetry the abstraction is kinda moved to the platform layer. I would say the same is true for tracing.
I think the implemented auth methods are useful, but need a bit of work. They are also great reference implementations for how auth in a go-kit environment should work. Since auth (sadly) is rarely as simple as basic auth or JWT, chances are people will use customized versions (for example I extended the jwt package with additional token validation).
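For illustration, here is a minimal sketch of that kind of customization: extra token checks layered on top of the stock parser as an ordinary endpoint middleware. The validateClaims function is hypothetical, and the sketch assumes the standard go-kit JWT middleware has already stored parsed claims in the request context under kitjwt.JWTClaimsContextKey.

```go
package customauth

import (
	"context"
	"errors"

	kitjwt "github.com/go-kit/kit/auth/jwt"
	"github.com/go-kit/kit/endpoint"
)

// validateClaims stands in for project-specific checks beyond signature
// verification (audience, tenant, scopes, ...). Hypothetical.
func validateClaims(claims interface{}) error {
	if claims == nil {
		return errors.New("missing JWT claims in context")
	}
	// project-specific validation would go here
	return nil
}

// ExtraTokenValidation returns an endpoint middleware intended to run after
// the stock JWT parsing middleware has populated the request context.
func ExtraTokenValidation() endpoint.Middleware {
	return func(next endpoint.Endpoint) endpoint.Endpoint {
		return func(ctx context.Context, request interface{}) (interface{}, error) {
			// Assumption: the stock parser stored claims under this key.
			claims := ctx.Value(kitjwt.JWTClaimsContextKey)
			if err := validateClaims(claims); err != nil {
				return nil, err
			}
			return next(ctx, request)
		}
	}
}
```

The point is mostly that the abstractions compose: project-specific auth rules slot in as just another endpoint.Middleware wrapped around the standard one.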
This is a huge pain point for me. Not the separation of transport- and service-level errors, but the fact that the way they are handled is quite inconsistent. Service-level errors are supposed to be encoded by the transport's response encoder, while transport-level errors are usually encoded by a dedicated error encoder (at least in the case of the HTTP server).

There is another dimension of errors as well: errors that are "internal" and errors returned to the caller. Both sources (service, transport) can produce both types of errors, but they are "handled" (logged or returned to the caller) separately. Furthermore, server-level error handlers (a third way of error handling, where I usually want to log fatal, internal errors only) receive both encoding/decoding errors (which are still transport errors, but not originating from endpoints, so I guess they are a third type of error?) and transport-level errors returned from endpoints.

The way I see it, errors are either returned to the caller (usually not logged, because they are business/client errors) or logged (and the caller receives a generic "internal" error). Both the service and transport levels can produce both types of errors. I opened a separate issue with an example: #923
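To make the two paths concrete, here is a rough sketch using the standard HTTP transport, with hypothetical names (createUserResponse, encodeError): business errors travel inside the response value and are handled by the response encoder, while decode failures and errors returned directly from the endpoint go to the ServerErrorEncoder.

```go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"net/http"

	"github.com/go-kit/kit/endpoint"
	httptransport "github.com/go-kit/kit/transport/http"
)

// createUserResponse carries a service-level (business) error alongside the payload.
type createUserResponse struct {
	ID  string `json:"id,omitempty"`
	Err error  `json:"-"`
}

// Failed implements endpoint.Failer so middlewares can inspect the business error.
func (r createUserResponse) Failed() error { return r.Err }

// encodeResponse handles service-level errors: they reach the caller through
// the normal response encoder.
func encodeResponse(_ context.Context, w http.ResponseWriter, response interface{}) error {
	if f, ok := response.(endpoint.Failer); ok && f.Failed() != nil {
		w.WriteHeader(http.StatusUnprocessableEntity)
		return json.NewEncoder(w).Encode(map[string]string{"error": f.Failed().Error()})
	}
	return json.NewEncoder(w).Encode(response)
}

// encodeError handles transport-level errors (decode failures, errors returned
// directly from the endpoint), a separate path from encodeResponse.
func encodeError(_ context.Context, err error, w http.ResponseWriter) {
	w.WriteHeader(http.StatusInternalServerError)
	json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
}

func main() {
	ep := func(ctx context.Context, request interface{}) (interface{}, error) {
		// Returning the error inside the response = service-level error.
		return createUserResponse{Err: errors.New("email already taken")}, nil
	}
	decode := func(_ context.Context, r *http.Request) (interface{}, error) {
		return struct{}{}, nil // a failure here would go to encodeError instead
	}
	http.Handle("/users", httptransport.NewServer(
		ep, decode, encodeResponse,
		httptransport.ServerErrorEncoder(encodeError),
	))
	_ = http.ListenAndServe(":8080", nil)
}
```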
Disagree with punting native tracing pluggability for standard OTel middleware. OTel is a giant shitshow so far.
TBH I've seen more marketing than actually working features so far, so moving towards OTel might indeed be a bad idea at this point. Just wanted to point out that the tracing and metrics packages might both get deprecated over time in favor of solutions like OpenCensus or OpenTelemetry.
They won't, because the abstractions made are often political compromises and not based on actual value. As an example, the Zipkin-native ecosystem is very extensive, well tested, and continues to be under active development. Having Go kit native tracing abstractions allows for better granularity and consistency across different transports within Go kit, as well as better interop in brownfield environments. Even if using OTel, there is a place for Go kit native hooks and logic.
Okay, I think I've just understood that the tracing abstraction layer is actually just integrations for different tracing frameworks, while the metrics package actually provides a separate abstraction layer. I was confused why those two would be treated differently, but I guess having integrations with different systems makes sense.
How should we understand "I expect the endpoint layer will no longer be necessary"?
Go modules support multiple modules per repository. Can we consider making examples a submodule that depends on the go-kit main module? A pain point when doing this is adding versions to individual modules separately, but for the examples package that could be fine.
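Purely as a sketch of what that could look like (the module path and version are made up), the examples directory would get its own go.mod alongside the root one:

```
kit/
  go.mod          # module github.com/go-kit/kit
  examples/
    go.mod        # separate nested module
    stringsvc/
    addsvc/

# examples/go.mod (hypothetical)
module github.com/go-kit/kit/examples

go 1.18

require github.com/go-kit/kit v0.12.0

// for local development against the checkout:
// replace github.com/go-kit/kit => ../
```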
Here is a Go kit example with the latest generics proposal: https://go2goplay.golang.org/p/vpUyP0j6cwH
Update: new version reflecting the latest proposal: https://go2goplay.golang.org/p/3wpdReO7leW
I fiddled a little bit with the example in order to use a service interface and to highlight the code that can be generated by a tool:
I think one of the reasons is that in practice, with microservices, per-node fine-grained rate limiting is rarely used (as with https://godoc.org/golang.org/x/time/rate). Additionally, depending on one's infrastructure, it can be challenging to implement a token bucket (or leaky bucket) with its fixed-rate token addition requirement. When doing global rate limiting (needed for pretty much any real-world use case), latency and storage overhead per request are important constraints at scale.

In our org we use this implementation, https://github.com/monmohan/rate-limiting, and it has worked really well for us. We handle ~400K rpm of traffic consistently with this rate limiter, with extremely low overhead. It's based on a report by Cloudflare, where they use this approach to handle a tonne of traffic (way higher than ours).

IMHO, one option is that the go-kit package should contain a real-world, production-ready rate limiter. Another option is to have a separate project where some of these real-world "third-party" packages are kept; users of go-kit can pick and choose them as bolt-on middleware.
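As a small illustration of the bolt-on idea with today's API: go-kit's ratelimit middleware only needs something with an Allow() bool method, so a per-node x/time/rate limiter (used here) or a global/sliding-window implementation can plug in the same way. A minimal sketch:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-kit/kit/endpoint"
	"github.com/go-kit/kit/ratelimit"
	"golang.org/x/time/rate"
)

func main() {
	var ep endpoint.Endpoint = func(ctx context.Context, request interface{}) (interface{}, error) {
		return "ok", nil
	}

	// ~100 requests per second with a burst of 100; any type with an
	// Allow() bool method (e.g. a sliding-window or globally coordinated
	// limiter) could be swapped in here.
	limiter := rate.NewLimiter(rate.Every(time.Second/100), 100)
	limited := ratelimit.NewErroringLimiter(limiter)(ep)

	if _, err := limited(context.Background(), nil); err != nil {
		fmt.Println("rate limited:", err)
	}
}
```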
My 5 cents here: I believe the log package should be separated into its own repo. It's imported by external projects much more than the other packages from go-kit. But right now, importing just the log package adds lots of unwanted entries to go.sum, which is quite bothersome for everyone who cares about "dependency hygiene". E.g. see the related issue from Prometheus (which is a user of the kit/log package): prometheus/common#255
We will probably extract log to its own package, but two notes about your comment:
True, but there are two major issues with Go module dependency downloads:
These two can dramatically increase build times and affect a lot of users.
What should be the rule for defining a "module boundary" going forward?
Dependency isolation sounds like the most compelling argument from Go's perspective. Maintenance is certainly another issue with everything in a single core: do you deprecate certain components while they're in the main repo? Can you selectively apply a maintenance policy without confusing users? It might be a wild idea, but if we manage to eliminate the endpoint layer (which I'm still not sure we can), the core module might disappear entirely: from the Keep section above:
From this perspective, it might actually make sense to move out the code that makes sense to keep and leave the current kit repo as-is for backwards compatibility (unless you want to do sub-go-modules there, but I guess not).
Now that Golang 1.18rc1 is out, is there a roadmap for using generics here? Last time I used go-kit it was incredibly verbose, and I felt it required either generics or code generation to be productive.
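Not the playground code from the earlier comments, just a rough sketch of how Go 1.18 generics could absorb some of the interface{} boilerplate around endpoint constructors; the MakeEndpoint helper is hypothetical, not part of go-kit:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"github.com/go-kit/kit/endpoint"
)

// MakeEndpoint adapts a strongly typed service method into a classic go-kit
// endpoint, pushing the type assertion into one generic helper (hypothetical).
func MakeEndpoint[Req, Resp any](f func(context.Context, Req) (Resp, error)) endpoint.Endpoint {
	return func(ctx context.Context, request interface{}) (interface{}, error) {
		req, ok := request.(Req)
		if !ok {
			return nil, fmt.Errorf("unexpected request type %T", request)
		}
		return f(ctx, req)
	}
}

type uppercaseRequest struct{ S string }
type uppercaseResponse struct{ V string }

func main() {
	uppercase := func(_ context.Context, r uppercaseRequest) (uppercaseResponse, error) {
		return uppercaseResponse{V: strings.ToUpper(r.S)}, nil
	}

	ep := MakeEndpoint(uppercase) // Req and Resp are inferred
	resp, err := ep(context.Background(), uppercaseRequest{S: "kit"})
	fmt.Println(resp, err) // {KIT} <nil>
}
```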
Go 1.18 is released!
generics is coming
@peterbourgon: the last release was last year; do we plan to keep up regular releases for this? Another option is to enable Dependabot, to ensure dependencies are regularly updated.
There's no reason to automatically upgrade dependencies in a project like Go kit, as versions are controlled by downstream consumers. Those security issues are generally bogus.