[BUG] Clusters can become inconsistent if one side times out the handshake during cluster meets #1251
Was also able to get the inverse state, where the new node knows about the meeting nodes but the old nodes do not.
Note: this requires a special patch for dropping all packets. We have to dodge the ping, pong, and meet packets that the new node we're adding may send out. I'll submit a PR in a second so it's easy to reproduce. Produces:
EDIT: The PR: #1252.
A minor debugging change that helped in the investigation of #1251. Basically there are some edge cases where we want to fully isolate a node from receiving packets, but can't suspend the process because we need it to continue sending outbound traffic. So, added a filter for that. Signed-off-by: Madelyn Olson <[email protected]>
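To make the idea concrete, here is a minimal sketch of how such a drop filter could be consulted before any inbound cluster packet processing. The identifiers and the drop-all sentinel value are assumptions made for this sketch, not necessarily what the actual patch uses; the test below only relies on passing a packet type to drop (1, i.e. PONG) and -1 to clear the filter.

```c
#include <stdio.h>

/* Cluster bus message types as commonly numbered (PING = 0, PONG = 1,
 * MEET = 2); the real definitions live in cluster.h. */
#define MSG_PING 0
#define MSG_PONG 1
#define MSG_MEET 2

/* Assumed sentinels: -1 lets everything through; a second sentinel drops
 * every inbound packet while the node keeps sending its own traffic. */
#define FILTER_NONE (-1)
#define FILTER_DROP_ALL (-2)

/* Value that would be controlled by the DEBUG DROP-CLUSTER-PACKET-FILTER command. */
static int drop_packet_filter = FILTER_NONE;

/* Returns 1 when an inbound packet of the given type should be discarded
 * before any processing, 0 otherwise. */
static int shouldDropPacket(int type) {
    if (drop_packet_filter == FILTER_DROP_ALL) return 1;
    return drop_packet_filter == type;
}

int main(void) {
    drop_packet_filter = MSG_PONG; /* like DEBUG DROP-CLUSTER-PACKET-FILTER 1 */
    printf("drop PONG: %d\n", shouldDropPacket(MSG_PONG)); /* 1 */
    printf("drop PING: %d\n", shouldDropPacket(MSG_PING)); /* 0 */
    drop_packet_filter = FILTER_NONE;  /* like DEBUG DROP-CLUSTER-PACKET-FILTER -1 */
    printf("drop PONG: %d\n", shouldDropPacket(MSG_PONG)); /* 0 */
    return 0;
}
```

The key property, as described above, is that only inbound processing is skipped; the isolated node keeps sending its own MEET/PING/PONG traffic, which suspending the process would not allow.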
Another, simpler way to trigger this inconsistent view of the cluster is by preventing the node being met from receiving the final PONG message of the handshake protocol.

```tcl
proc cluster_get_first_node_in_handshake id {
    set nodes [get_cluster_nodes $id]
    foreach n $nodes {
        if {[cluster_has_flag $n handshake]} {
            return [dict get $n id]
        }
    }
    return {}
}
start_cluster 2 0 {tags {external:skip cluster} overrides {cluster-node-timeout 4000 cluster-replica-no-failover yes}} {
    test "Partial node timeout with two shards" {
        set cluster_port [find_available_port $::baseport $::portcount]
        start_server [list overrides [list cluster-enabled yes cluster-node-timeout 4000 cluster-port $cluster_port]] {
            # In this test we will trigger a handshake timeout on one side of the handshake.
            # Node 1 and 2 already know each other, then we make node 0 meet node 1:
            #
            # Node 0 -- MEET -> Node 1
            # Node 0 <- PONG -- Node 1
            # Node 0 <- PING -- Node 1 [Node 0 will mark the handshake as successful]
            # Node 0 -- PONG -> Node 1 [we drop this message, so node 1 will eventually mark the handshake as timed out]

            # Drop PONG msgs
            R 1 DEBUG DROP-CLUSTER-PACKET-FILTER 1

            # Node 0 meets node 1
            R 0 CLUSTER MEET [srv -1 host] [srv -1 port]

            # Wait for node 0 to know about the other nodes in the cluster
            wait_for_condition 50 100 {
                [llength [R 0 CLUSTER NODES]] == 26
            } else {
                fail "Node 0 never learned about node 1 and 2"
            }
            # At this point, node 0 learned about the other nodes in the cluster from meeting node 1.
            wait_for_condition 50 100 {
                [cluster_get_first_node_in_handshake 0] eq {}
            } else {
                fail "Node 0 never exited handshake state"
            }
            # At this point, from node 0's point of view, the handshake with node 1 succeeded.
            puts "====== cluster nodes"
            puts [R 0 CLUSTER NODES]
            puts [R 1 CLUSTER NODES]
            puts [R 2 CLUSTER NODES]
            puts "======"

            wait_for_condition 50 100 {
                [cluster_get_first_node_in_handshake 1] eq {}
            } else {
                fail "Node 1 never exited handshake state"
            }
            assert {[llength [R 1 CLUSTER NODES]] == 18}
            # At this point, from node 1's point of view, the handshake with node 0 timed out.

            # Allow all msgs
            R 1 DEBUG DROP-CLUSTER-PACKET-FILTER -1

            after 1000
            puts "Nodes after 1 second"
            puts [R 0 CLUSTER NODES]
            puts [R 1 CLUSTER NODES]
            puts [R 2 CLUSTER NODES]

            after 10000
            puts "Nodes after 10 seconds"
            puts [R 0 CLUSTER NODES]
            puts [R 1 CLUSTER NODES]
            puts [R 2 CLUSTER NODES]
        }
    }
}
```

Results of this test run:
The following TCL test will reproduce the issue:
You'll get something like the following output:
Node `4534af6491d293645e32f4000f958590a19c61be` in this case timed out its handshake with the other nodes that put it online. The other two nodes will send it normal PING messages, not MEETs, because they only send a MEET message the first time a node is put online. Node `4534af6491d293645e32f4000f958590a19c61be` will ignore those PING messages.

I think the larger the cluster, the less likely this is to happen, since as long as one node connects and knows about the rest of the cluster, it will spread gossip, which will retry the connections.
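As a rough illustration of why those PINGs go nowhere, here is a simplified sketch of the usual inbound rule that only a MEET can introduce an unknown sender. The helper names (`lookupNodeByName`, `startHandshake`) are placeholders for this sketch, not the actual Valkey functions.

```c
#include <stdbool.h>
#include <stddef.h>

typedef enum { MSG_PING, MSG_PONG, MSG_MEET } MsgType;
typedef struct ClusterNode ClusterNode;

/* Placeholder helpers standing in for the real node table / handshake code. */
extern ClusterNode *lookupNodeByName(const char *name);
extern void startHandshake(const char *ip, int port);

/* Returns false when the packet is ignored. A PING or PONG from a sender we
 * do not know is dropped, while a MEET from an unknown sender starts a
 * handshake. A node that timed out its handshake therefore keeps ignoring
 * the plain PINGs from peers that already consider it part of the cluster. */
bool processInboundPacket(MsgType type, const char *sender_name, const char *ip, int port) {
    ClusterNode *sender = lookupNodeByName(sender_name);

    if (sender == NULL) {
        if (type != MSG_MEET) return false; /* unknown sender, not a MEET: ignore */
        startHandshake(ip, port);           /* only a MEET introduces a new node */
    }
    /* ... normal processing for known senders continues here ... */
    return true;
}
```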
Not sure this is a large issue at the moment, but wanted to document the finding.