OpAMP Agent heartbeats #183
Comments
I think it is reasonable for the agent to actively report a healthy heartbeat; this helps the OpAMP server keep its information up to date. But there's a question: what's the difference with #28?
I think the main difference is that my proposal relates to heartbeats as a means of solving #28, allowing clients to automatically report health on an interval. A goal of this is definitely to keep the connection alive, but it also means that the client has a responsibility to report its health on an interval. I can understand closing this issue in favor of that though...
Want to give a +1 to this. We recently encountered an issue with a misbehaving proxy that kept the OpAMP connection alive even though the agent pod didn't exist anymore, and I believe that could have been detected with a heartbeat mechanism like this. I do like the idea of having it be independent of transport, since it seems to be very similar conceptually to the poll interval. There's also this PR that's been open for a while proposing a heartbeat mechanism, which seems similar to what's proposed here: #176
A crashed or misbehaving client may cause connection/goroutine leaks in the OpAMP server (open-telemetry/opamp-go#271). A heartbeat mechanism can help the server detect unresponsive peers and also defend against intermediaries (load balancers, proxies, network equipment) that may time out and terminate idle connections (#28).
I also wanted to give a big +1 to this. The regular status reports from the opamp-bridge have been really useful in
I agree with @BinaryFissionGames that for the sake of the OpAMP protocol, this behavior should be transport agnostic (i.e. it should work for both HTTP and WebSockets, given both are valid transport protocols for OpAMP).
Just chiming in that this does sound like a worthwhile addition and that it should be the same for both transports. My 2c would be that it is indeed useful for clients to either respect an interval negotiated with the server (or e.g. a Retry-After header), or at least set a reasonable maximum polling frequency, to avoid overloading the server by accident.
@tpaschalis I have the PR up here if you can take a look!
Currently, if an agent is functioning successfully and without any changes to its health or component status, no messages are sent after the initial handshake. For WebSockets, because no messages are sent from the agent to the server, most of these connections will go idle after a certain period and be shut down, leaving a broken socket. I propose that we add support for a heartbeat interval as part of the specification. I have already implemented this capability in the opamp-bridge, where a user is able to configure a heartbeat interval at which the agent's health message is sent periodically. HTTP theoretically gets around this with its polling interval.
This heartbeat implementation is important for the bridge, where many of the events in Kubernetes happen asynchronously. The bridge is not informed directly and therefore must poll state in order to send this message. The collector's opamp extension, however, watches for some changes but is otherwise mostly quiet. For the extension, the connection will go idle after some time, preventing the server from being able to send any more messages to the extension.
I think a heartbeat interval could be optional; however, if used, it must be communicated as part of the initial AgentToServer message. This would allow the server to know when to mark the agent as unhealthy.
Open Questions
I'm happy to write the spec change for this issue, but would love everyone's thoughts here.