[ENH]: Refactor compactor into three chained orchestrators #5831
base: main
Conversation
Reviewer Checklist
Please leverage this checklist to ensure your code review is thorough before approving:
Testing, Bugs, Errors, Logs, Documentation
System Compatibility
Quality
This stack of pull requests is managed by Graphite.
Split Monolithic Compactor into Three Dedicated Orchestrators

This PR performs a deep internal refactor of the compaction subsystem: the former single-class CompactOrchestrator is split into three chained, stage-specific orchestrators. The change is entirely behind internal service boundaries, but any code that interacted directly with the old orchestrator or scheduler now needs to target the stage-specific interfaces. The refactor increases maintainability, unlocks easier extension of compaction strategies, and prepares the worker service for upcoming background-work architecture changes.

Key Changes
• Introduced DataFetchOrchestrator, ApplyDataOrchestrator, and RegisterOrchestrator

Affected Areas
• execution/orchestration/*

This summary was automatically generated by @propel-code-bot
num_materialized_logs: 0,
segment_spans: HashMap::new(),
materialized_log_data,
[**BestPractice**]
Resource leak risk: HNSW index cleanup only happens in success case:
```rust
async fn try_purge_hnsw(path: &Path, hnsw_index_uuid: Option<IndexUuid>) {
    if let Some(hnsw_index_uuid) = hnsw_index_uuid {
        let _ = HnswIndexProvider::purge_one_id(path, hnsw_index_uuid).await;
    }
}
```
This cleanup method ignores all errors (`let _ =`). If purging fails due to file system errors or permissions, temporary HNSW indexes will accumulate on disk. Add error logging and potentially retry logic for cleanup failures.
File: rust/worker/src/execution/orchestration/apply_data_orchestrator.rs
Line: 190
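Where retry logic is wanted in addition to logging, a minimal sketch of the direction the comment above suggests could look like the following. It assumes the same `HnswIndexProvider::purge_one_id` signature shown in the quoted snippet and that `IndexUuid` is `Copy`, so it is illustrative rather than the PR's actual fix:
```rust
// Hypothetical sketch only: log purge failures and retry a bounded number of times.
// Assumes `HnswIndexProvider::purge_one_id(path, uuid)` is fallible as quoted above,
// that `IndexUuid` is `Copy`, and that imports match the surrounding file.
async fn try_purge_hnsw(path: &Path, hnsw_index_uuid: Option<IndexUuid>) {
    let Some(uuid) = hnsw_index_uuid else { return };
    for attempt in 1..=3u32 {
        match HnswIndexProvider::purge_one_id(path, uuid).await {
            Ok(_) => return,
            Err(e) => tracing::warn!(
                "Failed to purge HNSW index {:?} (attempt {}/3): {}",
                uuid,
                attempt,
                e
            ),
        }
    }
}
```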
self.context
    .orchestrator_context
    .task_cancellation_token
    .clone(),
[**BestPractice**]
Resource leak: The `try_purge_hnsw` function is called in cleanup methods but errors are silently ignored:
```rust
let _ = HnswIndexProvider::purge_one_id(path, hnsw_index_uuid).await;
```
If the HNSW index cleanup fails, it could leave dangling resources on disk. Consider logging errors:
```rust
if let Err(e) = HnswIndexProvider::purge_one_id(path, hnsw_index_uuid).await {
    tracing::warn!("Failed to purge HNSW index {}: {}", hnsw_index_uuid, e);
}
```
File: rust/worker/src/execution/orchestration/data_fetch_orchestrator.rs
Line: 207

self.terminate_with_result(Err(e), ctx).await;
return;
#[allow(clippy::too_many_arguments)]
pub async fn compact(
nit: consider making this an Orchestrator with different stages for better code organization
or maybe consider CollectionCompactionContext::data_fetch,apply_data,...
pub fn get_segment_writer_by_id(
    &self,
[**BestPractice**]
Schema field mutation without proper validation: `collection_info.schema = apply_data_response.schema;` directly assigns the schema without checking if the assignment conflicts with existing collection constraints. If `apply_data_response.schema` is `None` when the collection requires a schema, this could create an invalid state.
```rust
// Add validation:
let updated_schema = apply_data_response.schema;
if collection_info.collection.dimension.is_some() && updated_schema.is_none() {
    return Err(CompactionError::InvariantViolation(
        "Collection with dimension must have a schema"
    ));
}
collection_info.schema = updated_schema;
```
File: rust/worker/src/execution/orchestration/compact.rs
Line: 204

None => return,
}
[**BestPractice**]
Error handling gap: `self.ok_or_terminate(segment_writer, ctx).await` pattern doesn't handle the case where `get_segment_writer_by_id()` returns `Ok(writer)` but the writer is in an invalid state (e.g., already consumed/moved). This could lead to runtime panics when trying to use the writer.
```rust
// Add state validation:
let segment_writer = match self.context.get_segment_writer_by_id(message.segment_id) {
    Ok(writer) => {
        // Validate writer is still usable
        if !writer.is_valid() {
            return self.terminate_with_result(Err(...), ctx).await;
        }
        writer
    },
    Err(e) => {
        return self.terminate_with_result(Err(e.into()), ctx).await;
    }
};
```
File: rust/worker/src/execution/orchestration/apply_data_orchestrator.rs
Line: 568
impl ExecuteAttachedFunctionOperator {
    /// Create a new ExecuteAttachedFunctionOperator from an AttachedFunction.
    /// The executor is selected based on the function_id in the attached function.
    #[allow(dead_code)]
[**BestPractice**]
The PR description mentions removing all function-related code. Since this function is now unused, it seems it should be removed completely instead of being marked with `#[allow(dead_code)]`. This would make the codebase cleaner and more aligned with the PR's goal.
File: rust/worker/src/execution/operators/execute_task.rs
Line: 87
ctx,
)
.await;
return;
[**BestPractice**]
Potential integer overflow: The record count calculation could overflow with very large datasets:
```rust
collection_info.collection.total_records_post_compaction = output.len() as u64;
```
If `output.len()` exceeds `u64::MAX`, this will wrap around. Use checked arithmetic:
```rust
collection_info.collection.total_records_post_compaction =
    u64::try_from(output.len())
        .map_err(|_| DataFetchOrchestratorError::InvariantViolation(
            "Record count exceeds u64 maximum"
        ))?;
```
File: rust/worker/src/execution/orchestration/data_fetch_orchestrator.rs
Line: 717
Ok(ref compaction_response) => match compaction_response {
CompactionResponse::Success { job_id } => {
if job_id != &resp.job_id.0 {
CompactionResponse::Success { job_id, .. } => {
[**BestPractice**]
The `Success` variant of `CompactionResponse` only has the `job_id` field. The `..` is unnecessary and suggests there are other fields being ignored, which is not the case. Removing it makes the code clearer and more accurate.
File: rust/worker/src/compactor/compaction_manager.rs
Line: 339
};

let result = log
    .update_collection_log_offset(&input.tenant, input.collection_id, input.log_position)
[**BestPractice**]
Potential race condition in log offset updates: The `update_collection_log_offset` method in `InMemoryLog` can fail (returns `Result<(), Box<dyn ChromaError>>`) but there's no atomic guarantee that the SysDB flush and log offset update happen together. If SysDB flush succeeds but log offset update fails, the system could be in an inconsistent state where SysDB thinks compaction completed but the log service doesn't.
```rust
// In register.rs, this could fail after SysDB flush succeeds:
let result = log
    .update_collection_log_offset(&input.tenant, input.collection_id, input.log_position)
    .await;
```
Consider using a distributed transaction or compensation logic to handle partial failures.
File: rust/worker/src/execution/operators/register.rs
Line: 148
This partial failure is ok by our protocol! The sysdb is the source of truth. The log can be behind, in which case on a subsequent compaction, the log will be repaired and advanced to match the sysdb. The only implication of the log being behind the sysdb is we will unnecessarily launch a compaction since the rollup can't occur.
Some(outputs) => outputs,
None => {
    self.terminate_with_result(
        Err(ApplyLogsOrchestratorError::InvariantViolation(
[**BestPractice**]
Missing error handling for empty materialized outputs: The code checks `materialized_output.result.is_empty()` and terminates with an invariant violation, but this could be a legitimate case when there are no logs to process. This will cause the orchestrator to fail unnecessarily.
```rust
// This should handle empty results gracefully, not as an error
if materialized_output.result.is_empty() {
    self.terminate_with_result(
        Err(ApplyLogsOrchestratorError::InvariantViolation(
            "Attempting to apply an empty materialized output",
        )),
        ctx,
    )
    .await;
    return Vec::new();
}
```
Consider handling empty results as a valid case and return early with success.
File: rust/worker/src/execution/orchestration/apply_logs_orchestrator.rs
Line: 454

let collection_info = match self.context.get_collection_info_mut() {
    Ok(info) => info,
    Err(err) => {
        tracing::info!("We're failing right here");
[**BestPractice**]
This `tracing::info!` call appears to be a leftover debug statement and should probably be removed.
File: rust/worker/src/execution/orchestration/log_fetch_orchestrator.rs
Line: 650

while let Some(entry) = entries.next_entry().await.expect("Failed to read next dir") {
    let path = entry.path();
    let metadata = entry.metadata().await.expect("Failed to read metadata");
    println!("Path: {}", path.display());
[**BestPractice**]
This appears to be a leftover debug print statement. It should probably be removed.
File: rust/worker/src/compactor/compaction_manager.rs
Line: 1062
&self.context.blockfile_provider,
))
.await
{
    Ok(reader) => Ok(Some(reader)),
    Err(err) => match *err {
        RecordSegmentReaderCreationError::UninitializedSegment => Ok(None),
        _ => Err(*err),
    },
},
ctx,
)
[**CriticalError**]
Missing error handling for record reader creation failure:
```rust
let record_reader = match self
    .ok_or_terminate(
        match Box::pin(RecordSegmentReader::from_segment(
            &output.record_segment,
            &self.context.blockfile_provider,
        ))
        .await
        {
            Ok(reader) => Ok(Some(reader)),
            Err(err) => match *err {
                RecordSegmentReaderCreationError::UninitializedSegment => Ok(None),
                _ => Err(*err),
            },
        },
        ctx,
    )
```
The code dereferences `*err` which moves the boxed error, but then tries to use `Err(*err)` which attempts to move it again. This will cause a compilation error or runtime panic.
Fix:
```rust
Err(err) => Err(err),
```
File: rust/worker/src/execution/orchestration/log_fetch_orchestrator.rs
Line: 401

pub fn set_fail_update_offset(&mut self, fail: bool) {
    self.fail_update_offset = fail;
[**BestPractice**]
Race condition in test log offset update:
```rust
pub fn set_fail_update_offset(&mut self, fail: bool) {
    self.fail_update_offset = fail;
}

pub async fn update_collection_log_offset(
    &mut self,
    collection_id: CollectionUuid,
    new_offset: i64,
) -> Result<(), Box<dyn ChromaError>> {
    if self.fail_update_offset {
        return Err(Box::new(InMemoryLogError::UpdateOffsetFailed));
    }
    self.offsets.insert(collection_id, new_offset);
    Ok(())
}
```
This test utility is not thread-safe. If multiple tests run concurrently and access the same `InMemoryLog` instance, the `fail_update_offset` flag and `offsets` HashMap could be corrupted. Use `Arc<Mutex<>>` or ensure single-threaded test execution.
File: rust/log/src/in_memory_log.rs
Line: 61
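For the `Arc<Mutex<>>` direction suggested above, here is a minimal, self-contained sketch of a shared test-log handle. The field set, error type, and `CollectionUuid` alias are simplified stand-ins for illustration, not the real `InMemoryLog` API:
```rust
use std::{collections::HashMap, sync::Arc};
use tokio::sync::Mutex;
use uuid::Uuid;

type CollectionUuid = Uuid; // stand-in for the real newtype

#[derive(Default)]
struct InMemoryLogState {
    fail_update_offset: bool,
    offsets: HashMap<CollectionUuid, i64>,
}

/// Cloneable, thread-safe handle that concurrent tests can share.
#[derive(Clone, Default)]
struct SharedInMemoryLog {
    state: Arc<Mutex<InMemoryLogState>>,
}

impl SharedInMemoryLog {
    async fn set_fail_update_offset(&self, fail: bool) {
        self.state.lock().await.fail_update_offset = fail;
    }

    async fn update_collection_log_offset(
        &self,
        collection_id: CollectionUuid,
        new_offset: i64,
    ) -> Result<(), String> {
        let mut state = self.state.lock().await;
        if state.fail_update_offset {
            return Err("injected update-offset failure".to_string());
        }
        state.offsets.insert(collection_id, new_offset);
        Ok(())
    }
}
```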
}
} else {
    collection
        .size_bytes_post_compaction
        .saturating_add_signed(self.collection_logical_size_delta_bytes)
};

let flush_results = std::mem::take(&mut self.flush_results);
let total_records_post_compaction = collection.total_records_post_compaction;
[**BestPractice**]
Integer overflow risk in collection size calculation:
```rust
let collection_logical_size_bytes = if self.context.is_rebuild {
    match u64::try_from(self.collection_logical_size_delta_bytes) {
        Ok(size_bytes) => size_bytes,
        _ => {
            // error handling
        }
    }
} else {
    collection
        .size_bytes_post_compaction
        .saturating_add_signed(self.collection_logical_size_delta_bytes)
};
```
While `saturating_add_signed` prevents overflow, it silently caps at `u64::MAX` which could lead to incorrect size reporting. For a database system, this could cause storage quota miscalculations. Consider returning an error instead:
```rust
collection.size_bytes_post_compaction
    .checked_add_signed(self.collection_logical_size_delta_bytes)
    .ok_or(ApplyLogsOrchestratorError::InvariantViolation(
        "Collection size overflow detected"
    ))?
```
File: rust/worker/src/execution/orchestration/apply_logs_orchestrator.rs
Line: 388

// Check for stale version (optimistic concurrency control)
if collection.version > collection_version {
[**BestPractice**]
Missing validation for stale collection version check:
```rust
// Check for stale version (optimistic concurrency control)
if collection.version > collection_version {
    return Err(FlushCompactionError::FailedToFlushCompaction(
        tonic::Status::failed_precondition(format!(
            "Collection version is stale: expected {}, but collection is at version {}",
            collection_version, collection.version
        )),
    ));
}
```
This only checks `>` but doesn't handle the case where `collection.version < collection_version`, which could indicate data corruption or a serious consistency issue. The check should be `!=` for exact version matching:
```rust
if collection.version != collection_version {
    return Err(FlushCompactionError::FailedToFlushCompaction(
        tonic::Status::failed_precondition(format!(
            "Collection version mismatch: expected {}, but collection is at version {}",
            collection_version, collection.version
        )),
    ));
}
```
File: rust/sysdb/src/test_sysdb.rs
Line: 480
)
.await;
return Vec::new();
}
};

for materialized_output in materialized_outputs {
[**CriticalError**]
**Resource Leak: Span Not Dropped on Early Termination**
When tasks fail and `terminate_with_result` is called, spans stored in `self.segment_spans` are never removed:
```rust
let result = self.create_apply_log_to_segment_writer_tasks(/*...*/).await;
let mut new_tasks = match result {
    Ok(tasks) => tasks,
    Err(err) => {
        self.terminate_with_result(Err(err.into()), ctx).await; // Early return
        return Vec::new(); // Spans in self.segment_spans never dropped
    }
};
```
Spans remain in memory until the orchestrator is dropped, causing:
1. Memory leak for span data
2. Incorrect trace timing (spans appear active when work stopped)
3. Open telemetry connections held longer than necessary
**Fix:**
```rust
Err(err) => {
    self.segment_spans.clear(); // Drop all spans before terminating
    self.terminate_with_result(Err(err.into()), ctx).await;
    return Vec::new();
}
```
File: rust/worker/src/execution/orchestration/apply_logs_orchestrator.rs
Line: 464
let result = sysdb
    .flush_compaction(
        input.tenant.clone(),
        input.collection_id,
        input.log_position,
        input.collection_version,
        input.segment_flush_info.clone(),
        input.total_records_post_compaction,
        input.collection_logical_size_bytes,
        input.schema.clone(),
    )
    .await;
[**BestPractice**]
**Idempotency violation**: The `flush_compaction` call has no unique transaction/request ID to prevent duplicate execution:
```rust
let result = sysdb.flush_compaction(
    input.tenant.clone(),
    input.collection_id,
    input.log_position,
    input.collection_version,
    input.segment_flush_info.clone(),
    // ... no idempotency key
).await;
```
If the operator crashes after `flush_compaction` succeeds but before `update_collection_log_offset`, a retry will re-execute `flush_compaction` with the same parameters, potentially:
- Incrementing counters twice
- Creating duplicate segment records
- Corrupting collection state
**Fix**: Add an idempotency key (e.g., `job_id` or request UUID) to `flush_compaction` to detect retries:
```rust
struct FlushCompactionRequest {
    idempotency_key: Uuid, // Deduplicate retries
    // ... existing fields
}
```
File: rust/worker/src/execution/operators/register.rs
Line: 138
Description of changes
Summarize the changes made by this PR.
This change removes all function-related code from the compaction path, including the scheduler and the compaction orchestrator. This is done to make way for a refactor of the preexisting compaction orchestrator.
This refactor entails breaking the CompactOrchestrator into three chained orchestrators:
The DataFetchOrchestrator, which runs GetCollectionAndSegments -> FetchLog/SourceRecordSegments -> Partition -> Materialized Logs. Its main task is to source data.
The ApplyDataOrchestrator, which takes the materialized log records from the previous orchestrator and applies them to segments via Apply operators, Commit operators, and Flush operators.
The RegisterOrchestrator, which takes the flushed segment paths from the previous step and invokes the Register operator.
Any common code across these three orchestrators remains in compact.rs. A rough sketch of the chaining is included below.
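To make the handoff concrete, here is an illustrative, non-actor sketch of how the three stages chain. The stage functions and output type names are placeholders for explanation only, not the actual orchestrator APIs in this PR:
```rust
// Illustrative only: a simplified view of the three-stage chain.
// The real orchestrators run on the worker's dispatcher; these names are placeholders.
struct MaterializedLogs;      // output of the data-fetch stage
struct FlushedSegmentPaths;   // output of the apply-data stage
struct RegistrationDone;      // output of the register stage

async fn data_fetch_stage() -> MaterializedLogs {
    // GetCollectionAndSegments -> FetchLog/SourceRecordSegments -> Partition -> materialize logs
    MaterializedLogs
}

async fn apply_data_stage(_logs: MaterializedLogs) -> FlushedSegmentPaths {
    // Apply materialized records to segment writers, then commit and flush the segments
    FlushedSegmentPaths
}

async fn register_stage(_paths: FlushedSegmentPaths) -> RegistrationDone {
    // Invoke the Register operator: flush compaction to the sysdb, then advance the log offset
    RegistrationDone
}

async fn compact_pipeline() -> RegistrationDone {
    let logs = data_fetch_stage().await;
    let paths = apply_data_stage(logs).await;
    register_stage(paths).await
}
```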
Test plan
How are these changes tested?
pytest for python, yarn test for js, cargo test for rust

Migration plan
Are there any migrations, or any forwards/backwards compatibility changes needed in order to make sure this change deploys reliably?
Observability plan
What is the plan to instrument and monitor this change?
Documentation Changes
Are all docstrings for user-facing APIs updated if required? Do we need to make documentation changes in the docs section?