
Conversation


@tanujnay112 tanujnay112 commented Dec 9, 2025

Description of changes

Summarize the changes made by this PR.

  • Improvements & Bug fixes
    • Before this change, the compactor kept a separate DLQ per node. This PR adds a new compaction_failure_count column on the collections table to implement a global DLQ.
    • The in-memory failing_jobs map has been removed.
    • The get_dead_jobs() endpoint has been removed; this information is now derived from the SysDB.
  • New functionality
    • Restarting compactor nodes no longer resets DLQ state.
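The gating behaviour described above can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the constant, struct, and function names are assumptions, and the real scheduler reads the count from the SysDB rather than holding records locally.

```rust
// Hypothetical sketch of global-DLQ gating: a collection whose persisted
// failure count has reached the threshold is skipped by the scheduler.
// Names are illustrative, not the PR's actual identifiers.

const MAX_COMPACTION_FAILURE_COUNT: u64 = 5;

#[derive(Debug, Clone)]
struct CollectionRecord {
    id: u64,
    // Persisted in the SysDB collections table, so it survives compactor
    // restarts (unlike the removed in-memory failing_jobs map).
    compaction_failure_count: u64,
}

fn eligible_for_compaction(c: &CollectionRecord) -> bool {
    c.compaction_failure_count < MAX_COMPACTION_FAILURE_COUNT
}

fn schedule(collections: &[CollectionRecord]) -> Vec<u64> {
    collections
        .iter()
        .filter(|c| eligible_for_compaction(c))
        .map(|c| c.id)
        .collect()
}
```

Because the count lives in the database rather than in process memory, every compactor node sees the same DLQ state, and a node restart cannot accidentally resurrect a repeatedly failing job.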

Test plan

How are these changes tested?

test_k8s_integration_scheduler already tests this feature.

  • Tests pass locally with pytest for python, yarn test for js, cargo test for rust

Migration plan

Are there any migrations, or any forwards/backwards compatibility changes needed in order to make sure this change deploys reliably?

The new column in collections is defaulted to 0 in the database, the Go model, and the proto definitions.
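A migration along these lines would add the column with the stated default. The table and column names come from the PR description; the exact migration file and SQL dialect are assumptions.

```sql
-- Hypothetical migration sketch; the actual artifact in the PR may differ
-- in location and dialect.
ALTER TABLE collections
    ADD COLUMN compaction_failure_count BIGINT NOT NULL DEFAULT 0;
```

Defaulting to 0 keeps the change backward compatible: existing rows, and older writers that never set the column, both behave as if no failures have been recorded.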

Observability plan

What is the plan to instrument and monitor this change?

The DLQ is more easily observable from the SysDB now.

compactor_job_failure_count replaces the compactor_dead_jobs_count metric; it is incremented on every failure rather than only when a job is "killed".
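A minimal sketch of the new accounting, using a plain atomic counter as a stand-in for the project's actual metrics plumbing (the static's name mirrors the metric but is otherwise an assumption):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical stand-in for the compactor_job_failure_count metric; the
// real code goes through the compactor's metrics layer.
static COMPACTOR_JOB_FAILURE_COUNT: AtomicU64 = AtomicU64::new(0);

fn record_compaction_failure() {
    // Incremented on *every* failure, not only when a job crosses the DLQ
    // threshold and is "killed" (the old compactor_dead_jobs_count
    // semantics), so the metric now tracks failure volume directly.
    COMPACTOR_JOB_FAILURE_COUNT.fetch_add(1, Ordering::Relaxed);
}
```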

Documentation Changes

Are all docstrings for user-facing APIs updated if required? Do we need to make documentation changes in the docs section?


github-actions bot commented Dec 9, 2025

Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving

Testing, Bugs, Errors, Logs, Documentation

  • Can you think of any use case in which the code does not behave as intended? Have they been tested?
  • Can you think of any inputs or external events that could break the code? Is user input validated and safe? Have they been tested?
  • If appropriate, are there adequate property based tests?
  • If appropriate, are there adequate unit tests?
  • Should any logging, debugging, tracing information be added or removed?
  • Are error messages user-friendly?
  • Have all documentation changes needed been made?
  • Have all non-obvious changes been commented?

System Compatibility

  • Are there any potential impacts on other parts of the system or backward compatibility?
  • Does this change intersect with any items on our roadmap, and if so, is there a plan for fitting them together?

Quality

  • Is this code of an unexpectedly high quality (readability, modularity, intuitiveness)?


@tanujnay112 tanujnay112 marked this pull request as ready for review December 9, 2025 23:39

propel-code-bot bot commented Dec 9, 2025

Implement Global DLQ persistence via SysDB compaction failure tracking

Introduces a globally persistent compaction dead-letter queue by recording per-collection failure counts in SysDB, removing node-local in-memory tracking, and wiring compactor logic to consult the persisted state. Adds the new compaction_failure_count column and associated CRUD plumbing across Rust, Go, Proto, and SQL layers, while updating scheduler metrics and workflows to increment the counter on failures and respect configured thresholds. Includes migration and API changes, updates to scheduler tests to validate skip-after-threshold behaviour, and replaces the compactor_dead_jobs_count metric with the new failure counter.

Key Changes

• Extend collections schema with compaction_failure_count, propagate through ORM models (Collection, CollectionToGc), proto messages, and migration artifacts.
• Add IncrementCompactionFailureCount RPC to SysDB gRPC service and wire coordinator/catalog/DAO layers to atomically increment the counter, updating compaction failure logic.
• Modify Rust scheduler and compaction manager to remove in-memory DLQ tracking, consult persisted counts from SysDB, and convert fail_job/is_job_in_progress flows to async with metrics updates.
• Update configuration structures, tests (test_k8s_integration_scheduler), and compactor metrics to support the new global DLQ semantics.
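The new RPC named in the summary might look roughly like this in the proto definition. Everything beyond the RPC name and the per-collection scoping stated above is an assumption.

```protobuf
// Hypothetical sketch; the actual message shapes in idl/chromadb/proto
// may differ.
rpc IncrementCompactionFailureCount(IncrementCompactionFailureCountRequest)
    returns (IncrementCompactionFailureCountResponse) {}

message IncrementCompactionFailureCountRequest {
  string collection_id = 1;
}

message IncrementCompactionFailureCountResponse {}
```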

Affected Areas

rust/worker/src/compactor scheduler and manager logic
rust/sysdb/src/sysdb.rs and SQLite/Test SysDB implementations
go/pkg/sysdb coordinator, DAO, GRPC layers, and ORM models
idl/chromadb/proto definitions and migrations
rust/types collection model and downstream usages

This summary was automatically generated by @propel-code-bot

Comment on lines +1140 to +1147
pub async fn increment_compaction_failure_count(
    &self,
    _collection_id: CollectionUuid,
) -> Result<(), sqlx::Error> {
    Err(sqlx::Error::Protocol(
        "increment_compaction_failure_count not implemented for SQLite".into(),
    ))
}

Recommended

[Requirements] This function is currently a stub that returns an error. Since the implementation for SQLite seems straightforward, it would be beneficial to implement it now to ensure consistent behavior across different environments (local/testing vs. production) and to make testing this feature easier. Here's a suggested implementation that performs an atomic update.

File: rust/sysdb/src/sqlite.rs
Line: 1147
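The suggested implementation itself is not shown in this excerpt. The core of an atomic increment for SQLite would be a single UPDATE of the following shape; the statement is an assumption, not the bot's actual suggestion.

```sql
-- Atomic in-database increment: no read-modify-write race, even with
-- concurrent callers.
UPDATE collections
SET compaction_failure_count = compaction_failure_count + 1
WHERE id = ?;
```

Doing the addition inside the database, rather than reading the count and writing it back, is what makes the operation safe under concurrency.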
