How to resolve defragmentation under large data volumes #20961
Black-max12138 asked this question in Q&A (unanswered).

Replies: 1 comment
https://github.com/ahrtr/etcd-defrag is the best solution the community can provide for now. It supports rule-based automatic compaction + defragmentation. Please read the project's README for the configuration details.
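For reference, here is a minimal sketch of the kind of rule-based check such a tool automates, written against `go.etcd.io/etcd/client/v3`. The endpoints and threshold values are illustrative assumptions, not etcd-defrag's actual flags or defaults; the idea is simply to skip defragmentation when little space would be reclaimed, since the operation blocks the member while it runs.

```go
// Sketch: decide per endpoint whether a defragmentation is worthwhile,
// based on how much of the on-disk database file is actually in use.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// needsDefrag is an illustrative rule: defragment when more than half of
// the database file is reclaimable and at least 256 MiB would be freed.
// (These thresholds are examples, not etcd-defrag's defaults.)
func needsDefrag(dbSize, dbSizeInUse int64) bool {
	reclaimable := dbSize - dbSizeInUse
	return reclaimable > dbSize/2 && reclaimable > 256*1024*1024
}

func main() {
	// Hypothetical endpoints; replace with your cluster's client URLs.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd-1:2379", "https://etcd-2:2379", "https://etcd-3:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	for _, ep := range cli.Endpoints() {
		st, err := cli.Status(ctx, ep)
		if err != nil {
			fmt.Println("status failed for", ep, ":", err)
			continue
		}
		fmt.Printf("%s: dbSize=%d inUse=%d defrag=%v\n",
			ep, st.DbSize, st.DbSizeInUse, needsDefrag(st.DbSize, st.DbSizeInUse))
	}
}
```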
Bug report criteria
What happened?
As the etcd data grows, the size of the etcd database increases accordingly. When the etcd data reaches 4.1GB, a single defragmentation operation takes about 15 seconds. If this defragmentation occurs on the primary node, the cluster cannot write any data during this time. Is there any good way to handle this situation?
We thought of one solution: setting a size threshold for the database. If the primary (leader) node reaches this threshold, we could transfer leadership to another node and then perform disk defragmentation on the old leader. Would this approach be feasible, or is there a better solution?
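That "move leadership away first, then defragment" approach can be scripted with the etcd client. Below is a hedged sketch using `go.etcd.io/etcd/client/v3` with hypothetical endpoint names: it walks the members one at a time, and if the member about to be defragmented is currently the leader, it asks that member to hand leadership to another voting member before defragmenting it, so writes are never blocked on the leader.

```go
// Sketch: defragment members one at a time, transferring leadership
// away from the current leader before defragmenting it.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Hypothetical endpoints; replace with your cluster's client URLs.
	endpoints := []string{"https://etcd-1:2379", "https://etcd-2:2379", "https://etcd-3:2379"}
	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints, DialTimeout: 5 * time.Second})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	members, err := cli.MemberList(ctx)
	if err != nil {
		panic(err)
	}

	for _, ep := range endpoints {
		st, err := cli.Status(ctx, ep)
		if err != nil {
			panic(err)
		}
		if st.Header.MemberId == st.Leader {
			// This endpoint is the leader: hand leadership to another voting
			// member first. MoveLeader must be sent to the leader itself,
			// so use a client bound to this endpoint only.
			for _, m := range members.Members {
				if m.ID != st.Leader && !m.IsLearner {
					leaderCli, err := clientv3.New(clientv3.Config{Endpoints: []string{ep}, DialTimeout: 5 * time.Second})
					if err != nil {
						panic(err)
					}
					_, err = leaderCli.MoveLeader(ctx, m.ID)
					leaderCli.Close()
					if err != nil {
						panic(err)
					}
					break
				}
			}
		}
		// Defragment only this one endpoint; the other members keep serving.
		if _, err := cli.Defragment(ctx, ep); err != nil {
			panic(err)
		}
		fmt.Println("defragmented", ep)
	}
}
```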
What did you expect to happen?
Defragmentation should not affect cluster operations.
How can we reproduce it (as minimally and precisely as possible)?
Grow the etcd database to a relatively large size (for example, around 4 GB as above) and run a defragmentation on the leader.
Anything else we need to know?
No response
Etcd version (please run commands below)
3.5.11
Etcd configuration (command line flags or environment variables)
paste your configuration here
Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)
Relevant log output