
Add support for multiple addresses same as in golang client #172

Open
kobi-co opened this issue Jul 5, 2023 · 7 comments
Labels
enhancement New feature or request

Comments

@kobi-co

kobi-co commented Jul 5, 2023

Hi,
It would be great if you could support multiple addresses, the same as in the golang client library, for better support when working with ClickHouse clusters.

Thank you!

@slvrtrn slvrtrn added the enhancement New feature or request label Jul 5, 2023
@slvrtrn
Contributor

slvrtrn commented Jul 11, 2023

@kobi-co, it should be similar to https://github.com/ClickHouse/clickhouse-go#http-support-experimental, right?

http://host1:8123,host2:8123/database?dial_timeout=200ms&max_execution_time=60
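For reference, a connection string in that format could be split into per-host endpoints roughly like this. This is only an illustrative sketch; `parseMultiHostUrl` and the `Endpoint` shape are hypothetical names, not part of the actual clickhouse-js API:

```typescript
// Hypothetical sketch: splitting a clickhouse-go style multi-host URL
// ("http://host1:8123,host2:8123/db?x=y") into one endpoint per host.
interface Endpoint {
  protocol: string
  host: string
  database: string
  params: URLSearchParams
}

function parseMultiHostUrl(url: string): Endpoint[] {
  // Capture protocol, the comma-separated host list, database, and query.
  const match = url.match(/^(https?):\/\/([^/]+)\/([^?]*)\??(.*)$/)
  if (!match) throw new Error(`Unparsable URL: ${url}`)
  const [, protocol, hosts, database, query] = match
  return hosts.split(',').map((host) => ({
    protocol,
    host,
    database,
    params: new URLSearchParams(query),
  }))
}
```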

@kobi-co
Author

kobi-co commented Jul 11, 2023

I actually meant the option to pass multiple addresses to the Open method, but if we can get the same result by passing a single address with multiple hosts, that's good as well.

Thanks!

@zibellon

+1

Hello,

I have a ClickHouse cluster of 4 servers (4 shards, no replicas). Currently, this client can connect to only one instance (for example, shard 2). If shard 2 is down, nothing works.

  1. Accept multiple connection configs as an array: { host, port, username, password, dbName }[].
  2. Handle connection switching and failover inside the library.

@ayZagen

ayZagen commented Sep 11, 2024

Even though we can manage this ourselves on the application side, having this feature in the client would be a lot better.

A generic connection string could be used.

I can work on this if you are open to a PR.

@slvrtrn
Contributor

slvrtrn commented Sep 12, 2024

The tricky part is probably the internal implementation with the Node.js client, as it looks like it will require multiple HTTP agents (due to KeepAlive and different hosts). Then, what is the strategy for choosing one of the available underlying agents—random/round-robin? Also, how will it work with a custom HTTP agent instance that can be provided in the client config? What if a particular node is down? And many other corner cases...

Another option is to keep the entire client implementation as-is and provide an auxiliary client builder for multiple addresses with multiple internal client instances inside. But this is also fairly convoluted.
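To make the second option concrete, the "auxiliary builder" could be a thin wrapper that owns one client instance per host and rotates between them. The sketch below is purely illustrative: `MinimalClient` and `MultiHostPool` are made-up names standing in for real clickhouse-js types, and it deliberately ignores the hard parts (down-node tracking, custom HTTP agents, KeepAlive):

```typescript
// Illustrative sketch of a round-robin pool over multiple client
// instances. Not part of the actual clickhouse-js API.
interface MinimalClient {
  url: string
  ping(): Promise<boolean>
}

class MultiHostPool<T extends MinimalClient> {
  private next = 0

  constructor(private readonly clients: T[]) {
    if (clients.length === 0) throw new Error('At least one client is required')
  }

  // Round-robin selection; a real implementation would also need to
  // skip nodes that are known to be down.
  acquire(): T {
    const client = this.clients[this.next]
    this.next = (this.next + 1) % this.clients.length
    return client
  }
}
```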

Anyway, for a smooth client usage experience, it will be required to track the nodes that are down and so on, and that looks like an LB responsibility, which, in my opinion, should be used with clusters that expose the HTTP interface anyway (like nginx with a very simple config that we use in the client tests with multiple nodes).

CC @mshustov WDYT?

@mshustov
Member

mshustov commented Sep 12, 2024

Anyway, for a smooth client usage experience, it will be required to track the nodes that are down and so on, and that looks like an LB responsibility, which, in my opinion, should be used with clusters that expose the HTTP interface anyway (like nginx with a very simple config that we use in the client tests with multiple nodes).

Agreed, IMO it's the correct design: use a reverse proxy for load balancing, health monitoring, ssl termination, etc.

@ayZagen

ayZagen commented Sep 12, 2024

By no means am I a ClickHouse expert, but isn't the ClickHouse cluster topology active-active? If so, it wouldn't matter which instance receives the connection. For load balancing or query splitting, end users can still use nginx-like proxies; this feature wouldn't prevent them from doing so.

For the scope of this request, I believe ensuring one stable connection is enough. The client could select nodes randomly or one by one; it really doesn't matter. There is no need to track each node, just iterate until one connection succeeds.
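The "iterate until one connection succeeds" idea could be sketched as below. `firstHealthy` and its `connect` callback are hypothetical; the callback stands in for whatever health check (e.g. a ping) the client would actually perform:

```typescript
// Illustrative sketch: try each candidate address in order and return
// the first one that connects successfully.
async function firstHealthy<T>(
  candidates: T[],
  connect: (c: T) => Promise<boolean>,
): Promise<T> {
  for (const candidate of candidates) {
    try {
      if (await connect(candidate)) return candidate
    } catch {
      // Node unreachable: fall through to the next candidate.
    }
  }
  throw new Error('No reachable node among the provided addresses')
}
```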
