Allow configuration of BoundedChannelOptions.Capacity in SocketFrameHandler #1826

Open
mtapkanov opened this issue Apr 25, 2025 · 1 comment

@mtapkanov

Is your feature request related to a problem? Please describe.

Problem

In the current implementation of SocketFrameHandler (RabbitMQ.Client 7.0.0), the bounded channel used to buffer outgoing frames is created with a hardcoded capacity of 128:

var channel = Channel.CreateBounded<RentedMemory>(
    new BoundedChannelOptions(128) { ... });

This buffer lets outgoing frames accumulate in memory whenever the writer loop is delayed (due to backpressure, network issues, etc.).
If the process shuts down or crashes before those frames are flushed to the socket, they are silently lost.
This is particularly critical in high-throughput systems where reliability matters more than latency.
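
For illustration, a minimal standalone sketch (not the library's actual code) of how a bounded channel with the default Wait full mode buffers writes in memory while its reader stalls:

using System.Threading.Channels;

var channel = Channel.CreateBounded<byte[]>(new BoundedChannelOptions(128)
{
    SingleReader = true,
    FullMode = BoundedChannelFullMode.Wait // writers wait once 128 items are queued
});

// If the reader (here, standing in for the socket write loop) stalls, up to
// 128 payloads sit only in this in-process buffer; a crash before they drain
// loses them all.
for (int i = 0; i < 128; i++)
{
    channel.Writer.TryWrite(new byte[16]); // succeeds until the buffer is full
}
System.Console.WriteLine(!channel.Writer.TryWrite(new byte[16])); // True: capacity reached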

Describe the solution you'd like

I’d like the channel capacity (currently hardcoded as 128) to be configurable.
For example, this could be done by adding a new optional property to AmqpTcpEndpoint, such as:

public int? SocketWriteBufferCapacity { get; set; }

This value could then be passed to BoundedChannelOptions when the internal channel is created; the default would remain 128 to preserve backward compatibility.
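
A minimal sketch of how the pieces could fit together; SocketWriteBufferCapacity is the proposed (not yet existing) property, and the second half paraphrases library-internal code:

var endpoint = new AmqpTcpEndpoint("rabbitmq.example.com")
{
    // Proposed optional knob; null would keep today's default of 128.
    SocketWriteBufferCapacity = 1024
};

// Inside SocketFrameHandler, the hardcoded 128 would become:
int capacity = endpoint.SocketWriteBufferCapacity ?? 128;
var channel = Channel.CreateBounded<RentedMemory>(
    new BoundedChannelOptions(capacity) { /* existing options unchanged */ });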

Describe alternatives you've considered

We used the Harmony library to patch the hardcoded BoundedChannelOptions capacity. While this allowed us to adjust the value at runtime, it is not an ideal solution: it relies on modifying the library's internal behavior, which is error-prone and can break with any future version of RabbitMQ.Client.

This alternative, while functional in certain contexts, does not provide a clean, officially supported solution, which is why a configurable option would be the preferred approach.
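
For reference, this kind of Harmony patch looks roughly like the sketch below; the internal type name it targets and the assumption that the constant lives in the first constructor are guesses about RabbitMQ.Client internals, which is exactly what makes the approach fragile:

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Reflection.Emit;
using HarmonyLib;

static class CapacityPatch
{
    public static void Apply()
    {
        var harmony = new Harmony("example.frame-handler-capacity");
        // Assumed internal type; it may move or be renamed in any release.
        Type type = typeof(RabbitMQ.Client.ConnectionFactory).Assembly
            .GetType("RabbitMQ.Client.Impl.SocketFrameHandler", throwOnError: true);
        ConstructorInfo ctor = type.GetConstructors(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)[0];
        harmony.Patch(ctor, transpiler: new HarmonyMethod(
            typeof(CapacityPatch).GetMethod(nameof(Transpiler))));
    }

    // Replaces the IL constant 128 (the hardcoded capacity) with 1024.
    public static IEnumerable<CodeInstruction> Transpiler(IEnumerable<CodeInstruction> instructions)
    {
        foreach (CodeInstruction ins in instructions)
        {
            yield return ins.opcode == OpCodes.Ldc_I4 && ins.operand is int v && v == 128
                ? new CodeInstruction(OpCodes.Ldc_I4, 1024)
                : ins;
        }
    }
}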

Additional context

We observed this issue during a controlled RabbitMQ server restart in a production-like environment.
The application had written several messages into the buffer (up to the hardcoded 128 limit), and upon restart, they were not delivered.
This resulted in data loss and inconsistent state across services.

I’m happy to contribute a PR for this change if it’s accepted. Thanks!

@lukebakken lukebakken self-assigned this Apr 25, 2025
@lukebakken lukebakken added this to the 7.2.0 milestone Apr 25, 2025
@lukebakken
Collaborator

Thanks for investigating this issue.

> The application had written several messages into the buffer (up to the hardcoded 128 limit), and upon restart, they were not delivered. This resulted in data loss and inconsistent state across services.

Your application should be using publisher confirmations, which would have revealed this scenario: you would not have received a confirmation for the lost messages. If you're not using publisher confirmations (together with mandatory: true), you can never be 100% sure your messages are actually routed to a queue in RabbitMQ.

Right now I don't see how adjusting the bounded channel's size would mitigate this issue. Even if set to 1, you could lose that one message if you're not using publisher confirmations.
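
For context, a minimal sketch of publishing with confirmations enabled (CreateChannelOptions is shown with the constructor shape from recent 7.x releases; the queue name is illustrative):

using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
await using IConnection connection = await factory.CreateConnectionAsync();

// Enable confirmations and let the client track one per publish.
var options = new CreateChannelOptions(
    publisherConfirmationsEnabled: true,
    publisherConfirmationTrackingEnabled: true);
await using IChannel channel = await connection.CreateChannelAsync(options);

byte[] body = System.Text.Encoding.UTF8.GetBytes("hello");
// With tracking enabled, this await completes only once the broker confirms
// the message; a lost message surfaces as an exception instead of silence.
await channel.BasicPublishAsync(
    exchange: "", routingKey: "my-queue", mandatory: true, body: body);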

We may wish to improve this scenario when a connection is lost. If messages are in this channel when the associated connection is lost, one of the following should happen:

  • An exception is thrown.
  • Those messages are published on the recovered channel (probably not possible without major changes).

@lukebakken lukebakken removed this from the 7.2.0 milestone Apr 25, 2025