make flush processor per pod #196
Walkthrough

The changes update the RDB import flush instance workflow to support multiple pod IDs instead of a single pod ID. This involves schema adjustments, code modifications to iterate over arrays of pod IDs, and updates to related tests and job data structures. Additional error handling is introduced in frontend handlers for organization context setting.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant JobProcessor as RdbImportFlushInstanceProcessor
    participant K8sRepo as K8sRepository
    JobProcessor->>JobProcessor: Receive job with podIds[]
    loop For each podId in podIds
        JobProcessor->>K8sRepo: flushInstance(cloudProvider, clusterId, region, instanceId, podId, ...)
        K8sRepo-->>JobProcessor: (Result)
    end
```

```mermaid
sequenceDiagram
    participant Handler as instanceCreated/instanceDeleted Handler
    participant Client as client
    Handler->>Client: userSetUsingOrg(orgId)
    alt Success
        Handler->>Handler: Log success, continue
    else Failure
        Handler->>Handler: Log error, send HTTP 500 response
    end
```
Actionable comments posted: 0
🧹 Nitpick comments (1)
backend/services/db-importer-worker/src/processors/RdbImportFlushInstanceProcessor.ts (1)

Lines 22-32: Sequential pod processing implemented correctly, but consider error handling implications.

The implementation correctly iterates over multiple pods and processes each individually. However, the current approach has some considerations:

- Fail-fast behavior: if one pod fails, subsequent pods won't be processed.
- No partial-success tracking: the job will fail entirely even if some pods were successfully flushed.

Consider whether partial success should be tracked, or whether parallel processing might be more efficient for large pod arrays.
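One way to address the partial-success concern is sketched below. This is not the repo's actual code: `flushPod` is a hypothetical stand-in for the processor's call into `K8sRepository.flushInstance`, and the result shape is an assumption.

```typescript
// Sketch: iterate pods and record per-pod outcomes instead of failing fast.
// `flushPod` is a hypothetical stand-in for k8sRepository.flushInstance(...).
type PodFlushResult = { podId: string; ok: boolean; error?: string };

async function flushPods(
  podIds: string[],
  flushPod: (podId: string) => Promise<void>,
): Promise<PodFlushResult[]> {
  const results: PodFlushResult[] = [];
  for (const podId of podIds) {
    try {
      await flushPod(podId);
      results.push({ podId, ok: true });
    } catch (e) {
      // Record the failure and keep going, so one bad pod
      // does not prevent the remaining pods from being flushed.
      results.push({ podId, ok: false, error: String(e) });
    }
  }
  return results;
}
```

The caller can then decide whether any failed pod should fail the whole job, or whether partial success is acceptable and only the failed pods need a retry.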
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)

- backend/libs/schemas/src/services/db-importer-worker/v1/processors/rdb-import.ts (1 hunks)
- backend/services/db-importer-worker/src/processors/RdbImportFlushInstanceProcessor.ts (1 hunks)
- backend/services/db-importer-worker/src/repositories/k8s/K8sRepository.ts (1 hunks)
- backend/services/db-importer-worker/src/tests/import-rdb-cluster.spec.ts (1 hunks)
- backend/services/db-importer-worker/src/tests/import-rdb-replication.spec.ts (1 hunks)
- backend/services/db-importer-worker/src/tests/import-rdb-standalone.spec.ts (1 hunks)
- backend/services/db-importer/src/repositories/tasksQueue/TaskQueueBullMQRepository.ts (1 hunks)
- frontend/apps/auth-proxy/lib/handlers/instanceCreated.ts (1 hunks)
- frontend/apps/auth-proxy/lib/handlers/instanceDeleted.ts (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: Plan stack observability_stack_ctrl_plane_infra
- GitHub Check: build-and-push-docker (db-importer)
- GitHub Check: build-and-push-docker (db-importer-worker)
- GitHub Check: build
- GitHub Check: build
- GitHub Check: build-and-push-docker (db-importer-worker)
- GitHub Check: build-and-push-docker (db-importer)
🔇 Additional comments (8)
frontend/apps/auth-proxy/lib/handlers/instanceCreated.ts (1)

Lines 61-72: Code changes look good, but they don't match the PR description.

The implementation correctly adds organization context setting with proper error handling. However, these changes are about Grafana user organization context, not "flush processor per pod" as mentioned in the PR title and objectives.

Likely an incorrect or invalid review comment.
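The organization-context pattern the handlers use can be sketched as follows. The client interface and response shape here are assumptions for illustration; the real Grafana client API in the repo may differ.

```typescript
// Sketch: set the Grafana user's active organization, logging and
// returning an HTTP 500 on failure instead of continuing silently.
// `GrafanaClient` and the response shape are assumptions, not repo code.
interface GrafanaClient {
  userSetUsingOrg(orgId: number): Promise<void>;
}

async function setOrgContext(
  client: GrafanaClient,
  orgId: number,
): Promise<{ status: number; body: string }> {
  try {
    await client.userSetUsingOrg(orgId);
    return { status: 200, body: "org context set" };
  } catch (e) {
    // Surface the failure to the caller rather than proceeding
    // with requests that would target the wrong organization.
    console.error(`failed to set org ${orgId}:`, e);
    return { status: 500, body: "failed to set organization context" };
  }
}
```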
frontend/apps/auth-proxy/lib/handlers/instanceDeleted.ts (1)

Lines 56-67: Consistent implementation with proper error handling.

The organization context setting follows the same pattern as instanceCreated.ts, which maintains consistency across both handlers. The error handling and logging are appropriate.

backend/services/db-importer-worker/src/tests/import-rdb-replication.spec.ts (1)
Line 129: LGTM: Test updated correctly for multi-pod flush processor.

The change from passing a single podId to the entire podIds array aligns with the schema update for RdbImportFlushInstanceProcessor.

backend/libs/schemas/src/services/db-importer-worker/v1/processors/rdb-import.ts (1)
.backend/libs/schemas/src/services/db-importer-worker/v1/processors/rdb-import.ts (1)
98-98
:✅ Verification successful
Breaking schema change requires verification of all consumers.
The change from
podId: Type.String()
topodIds: Type.Array(Type.String())
is a breaking change in the data contract. Ensure that all consumers ofRdbImportFlushInstanceProcessorDataSchema
have been updated accordingly.
🏁 Script executed:

```shell
#!/bin/bash
# Description: Verify that RdbImportFlushInstanceProcessor has been updated to handle podIds array
# Expected: The processor should iterate over podIds array instead of handling a single podId

# Search for the processor implementation
ast-grep --pattern 'class RdbImportFlushInstanceProcessor {
  $$$
}'

# Search for any remaining usage of single podId in flush processor
rg -A 10 -B 5 "RdbImportFlushInstanceProcessor" --type ts
```

Length of output: 26581
Verified: RdbImportFlushInstanceProcessor now correctly handles the podIds array.

- Schema (RdbImportFlushInstanceProcessorDataSchema) updated to podIds: Type.Array(Type.String()).
- Processor (backend/services/db-importer-worker/src/processors/RdbImportFlushInstanceProcessor.ts) iterates over job.data.podIds.
- All related tests (import-rdb-replication.spec.ts, import-rdb-standalone.spec.ts, import-rdb-cluster.spec.ts) have been updated to pass podIds.
- Other processors/schemas remain unchanged (still use podId) as intended.

No further code changes are required here. Ensure any downstream consumers are aware of this breaking change and bump the package version accordingly.
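The shape of the breaking contract change can be sketched without the TypeBox dependency. The migration helper below is purely illustrative (the PR made a hard breaking change rather than accepting both shapes):

```typescript
// Sketch of the data contract change:
//   before: { podId: string }   →   after: { podIds: string[] }
interface RdbImportFlushInstanceDataV1 {
  podId: string;
}
interface RdbImportFlushInstanceDataV2 {
  podIds: string[];
}

// Hypothetical consumer-side guard that tolerates both shapes during a
// rollout window; the repo instead updated all producers and consumers
// together in one change.
function toPodIds(data: { podId?: string; podIds?: string[] }): string[] {
  if (Array.isArray(data.podIds)) return data.podIds;
  if (typeof data.podId === "string") return [data.podId];
  throw new Error("job data has neither podId nor podIds");
}
```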
backend/services/db-importer-worker/src/tests/import-rdb-cluster.spec.ts (1)

Line 129: LGTM: Consistent update for cluster test scenario.

The change correctly updates the test to pass the entire podIds array, which is particularly relevant for cluster scenarios with multiple pods (6 pods in this test case).

backend/services/db-importer/src/repositories/tasksQueue/TaskQueueBullMQRepository.ts (1)
Line 329: LGTM: Job data correctly updated for multi-pod flush processor.

The change from podId: task.payload.podIds[0] to podIds: task.payload.podIds correctly implements the new data structure for the flush instance processor while maintaining single-pod handling for other processors in the flow.

backend/services/db-importer-worker/src/tests/import-rdb-standalone.spec.ts (1)
Line 129: LGTM: Test data correctly updated for multi-pod support.

The change from passing a single podId to the entire podIds array aligns with the updated processor schema and logic.

backend/services/db-importer-worker/src/repositories/k8s/K8sRepository.ts (1)
Lines 867-899: ❓ Verification inconclusive

Verify cluster functionality after removing the isCluster parameter.

The removal of the isCluster parameter and the associated --cluster call localhost:6379 argument from the redis-cli command represents a significant change that could impact cluster deployments. Please ensure this change doesn't break existing cluster functionality and that all callers have been updated accordingly.
🏁 Script executed:

```shell
#!/bin/bash
# Description: Verify all calls to flushInstance method have been updated to match new signature

# Search for flushInstance method calls to ensure they don't pass isCluster parameter
rg -A 10 "flushInstance\(" --type ts
```

Length of output: 2351

🏁 Script executed:

```shell
#!/bin/bash
# Description: Check if there are any references to cluster-specific flush logic that might be affected

# Search for cluster-related flush operations
rg -B 5 -A 5 "cluster.*flush|flush.*cluster" --type ts
```

Length of output: 56
Verify cluster functionality after the flushInstance signature change

- Confirmed all calls to flushInstance have been updated to the new signature (no isCluster parameter).
- No references to cluster-specific flush flags (e.g., --cluster call) were found in the repository.

Please validate in a staging or QA environment that flush, AOF rewrite, and save operations still behave correctly on Redis cluster deployments.
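The per-pod flush that replaced the cluster-wide variant can be sketched as below. `execInPod` is a hypothetical stand-in for the Kubernetes exec call that K8sRepository actually wires up; only the redis-cli FLUSHALL command comes from the review above.

```typescript
// Sketch: flush one Redis pod via an exec-in-pod callback.
// `execInPod` is a hypothetical stand-in for the K8s exec API used by
// K8sRepository (e.g. @kubernetes/client-node's Exec), not repo code.
async function flushPod(
  execInPod: (podId: string, cmd: string[]) => Promise<string>,
  podId: string,
): Promise<string> {
  // Per the review, the `--cluster call localhost:6379` variant was
  // dropped: each pod is flushed directly, so FLUSHALL runs on the pod
  // itself rather than being fanned out by redis-cli's cluster mode.
  return execInPod(podId, ["redis-cli", "FLUSHALL"]);
}
```

Since the caller now iterates over every pod ID, each cluster node is flushed individually, which is why the cluster-mode flag became unnecessary.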
fix #172