# RFC: Return TiKV to a certain version

## Summary

This proposal introduces a method to return TiKV to a certain version, which is useful when doing lossy recovery.

## Motivation

The initial motivation is to do lossy recovery on TiKV when a TiDB cluster is deployed across data centers (DCs) with the DR Auto-Sync method.

Suppose the DR Auto-Sync method is used to deploy a TiDB cluster across two DCs, and:

- the network between the primary DC and the DR DC is partitioned
- after a while, PD switches the replication state of the primary DC to Async
- the primary DC then goes down

In this situation, the sync state of the cluster (as seen from the DR DC's PD) is not dependable: data may have been only partially replicated to the TiKV nodes in the DR DC, breaking ACID consistency.

Though lossless recovery is impossible in this situation, we can recover the data on the TiKV nodes to a certain version (`resolved_ts` in this case) that is ACID-consistent.

## Detailed design

Basically, we need a method to return TiKV to a certain version (timestamp). To do so, we add a new RPC to kvproto:

```protobuf
message FlashBackRequest {
    Context context = 1;
    uint64 checkpoint_ts = 2;
}

message FlashBackResponse {
}

service Tikv {
    // …
    rpc FlashBack(kvrpcpb.FlashBackRequest) returns (kvrpcpb.FlashBackResponse) {}
}
```

When a TiKV node receives a `FlashBackRequest`, it creates a `FlashBackWorker` instance and starts the work.
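
A rough skeleton of the proposed worker is sketched below. `FlashBackWorker` is the new component this RFC introduces; the three stage-method names are hypothetical placeholders for the steps described next:

```rust
/// Proposed worker that drives the flashback procedure on one TiKV node.
/// The stage methods are hypothetical names for the steps below.
struct FlashBackWorker {
    checkpoint_ts: u64,
}

impl FlashBackWorker {
    fn new(checkpoint_ts: u64) -> Self {
        FlashBackWorker { checkpoint_ts }
    }

    /// Runs the three flashback stages in order.
    fn run(&mut self) {
        self.wait_apply();    // step 1: wait until committed Raft logs are applied
        self.clean_cfs();     // step 2: drop writes committed after checkpoint_ts
        self.resolve_locks(); // step 3: clear any remaining locks
    }

    fn wait_apply(&mut self) { /* see step 1 */ }
    fn clean_cfs(&mut self) { /* see step 2 */ }
    fn resolve_locks(&mut self) { /* see step 3 */ }
}
```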

There are several steps to follow when doing the flashback:

### 1. Wait until all committed Raft logs are applied
We can poll each region's `RegionInfo`, in a way similar to what `src/server/debug.rs#190` does, and wait for `commit_index` to become equal to `apply_index`.
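
A minimal sketch of this waiting loop, assuming a hypothetical `region_info` accessor that exposes a region's Raft `commit_index` and `apply_index` (in the real implementation these would come from the debug service mentioned above):

```rust
use std::thread;
use std::time::Duration;

/// Hypothetical view of a region's Raft progress; the real fields would
/// come from the `RegionInfo` exposed by the debug service.
struct RaftProgress {
    commit_index: u64,
    apply_index: u64,
}

/// Block until every region has applied all of its committed Raft log.
fn wait_all_applied(region_ids: &[u64], region_info: impl Fn(u64) -> RaftProgress) {
    for &id in region_ids {
        loop {
            let progress = region_info(id);
            if progress.apply_index >= progress.commit_index {
                break;
            }
            // Poll again after a short pause.
            thread::sleep(Duration::from_millis(100));
        }
    }
}
```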

### 2. Clean WriteCF and DefaultCF

Then we can remove all data created by transactions committed after `checkpoint_ts`. This can be done by removing every `Write` record whose `commit_ts > checkpoint_ts`, together with the corresponding data in `CF_DEFAULT`, from the KV engine, i.e.:

```rust
// A sketch of the batched scan-and-delete over the write CF. These methods
// are assumed to live on the proposed `FlashBackWorker`, with `self.ts`
// holding the flashback `checkpoint_ts` and `next_write` being an assumed
// iterator-style helper over the write CF.
fn scan_next_batch(&mut self, batch_size: usize) -> Option<Vec<(Vec<u8>, Write)>> {
    let mut writes = None;
    for _ in 0..batch_size {
        if let Some((key, write)) = self.next_write() {
            let commit_ts = Key::decode_ts_from(keys::origin_key(&key)).unwrap();
            let batch = writes.get_or_insert_with(Vec::new);
            // Only records committed after the checkpoint are flashed back.
            if commit_ts > self.ts {
                batch.push((key, write));
            }
        } else {
            // The scan is exhausted; return whatever has been collected.
            return writes;
        }
    }
    writes
}

// Returns Ok(false) when there is nothing left to process. (The return type
// is Result<bool> rather than bool so that `box_try!` can propagate errors.)
pub fn process_next_batch(
    &mut self,
    batch_size: usize,
    wb: &mut RocksWriteBatch,
) -> Result<bool> {
    let writes = if let Some(writes) = self.scan_next_batch(batch_size) {
        writes
    } else {
        return Ok(false);
    };
    for (key, write) in writes {
        // `key` is the full data key from the write CF: user key + commit_ts.
        // The matching default CF key is the same user key with the
        // transaction's start_ts appended instead.
        let user_key = Key::from_encoded_slice(keys::origin_key(&key))
            .truncate_ts()
            .unwrap();
        let default_key = keys::data_key(user_key.append_ts(write.start_ts).as_encoded());
        box_try!(wb.delete_cf(CF_WRITE, &key));
        box_try!(wb.delete_cf(CF_DEFAULT, &default_key));
    }
    Ok(true)
}
```

Note that we scan and process the data in batches, so we can report progress to the client.
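
A sketch of the driving loop, under the same assumptions as above; `report_progress` is a hypothetical callback standing in for whatever progress-reporting channel the RPC ends up using, and the batch size is an assumed value:

```rust
impl FlashBackWorker {
    /// Hypothetical driver: process batches until the scan is exhausted,
    /// persisting each batch of deletions and reporting progress as we go.
    fn clean_cfs_in_batches(&mut self, wb: &mut RocksWriteBatch) -> Result<()> {
        const BATCH_SIZE: usize = 256; // assumed batch size
        let mut batches_done = 0u64;
        while self.process_next_batch(BATCH_SIZE, wb)? {
            wb.write()?; // persist this batch's deletions
            wb.clear();
            batches_done += 1;
            self.report_progress(batches_done); // assumed callback
        }
        Ok(())
    }
}
```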

### 3. Resolve locks once

After removing all these writes, we can guarantee that reads will never return ACID-inconsistent data. However, there may still be locks left in the lock CF that block some user operations.

So we should resolve locks on all the data once.

This lock-resolving process is just like the one used by GC, with the safe point set to the current timestamp.
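
A minimal sketch of that pass, assuming a hypothetical `LockStore` interface; the real implementation would reuse the GC worker's resolve-locks machinery rather than this simplified shape:

```rust
/// A lock left in the lock CF by an unfinished transaction.
struct Lock {
    key: Vec<u8>,
    start_ts: u64,
}

/// Hypothetical interface standing in for the real lock-resolving machinery.
trait LockStore {
    /// Return all locks whose start_ts is at or below the safe point.
    fn scan_locks(&mut self, safe_point: u64) -> Vec<Lock>;
    /// Commit or roll back the lock's transaction and remove the lock,
    /// depending on the transaction's final status.
    fn resolve(&mut self, lock: Lock);
}

/// Resolve every lock once, with the safe point set to the current
/// timestamp so that all locks are covered.
fn resolve_all_locks(store: &mut impl LockStore, current_ts: u64) {
    for lock in store.scan_locks(current_ts) {
        store.resolve(lock);
    }
}
```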

## Drawbacks

This is a **very** unsafe operation; we should prevent users from calling it on a normal cluster.

## Alternatives

We can also clean the lock CF in step 2 instead of resolving locks.
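
A sketch of this alternative, assuming the engine's range-deletion support (`delete_range_cf` here stands for RocksDB's `DeleteRange` as exposed through the write batch, and the key bounds are the data key space constants):

```rust
/// Hypothetical cleanup: wipe the whole lock CF with a single range
/// deletion instead of resolving locks one by one.
fn clean_lock_cf(wb: &mut RocksWriteBatch) -> Result<()> {
    // DATA_MIN_KEY..DATA_MAX_KEY covers the entire data key space.
    box_try!(wb.delete_range_cf(CF_LOCK, keys::DATA_MIN_KEY, keys::DATA_MAX_KEY));
    Ok(())
}
```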