feat(core): Deploy v0.2.1 core services with KEDA and PV data source #5
## Summary
This pull request deploys the core application services for version 0.2.1, activating the complete asynchronous data processing pipeline. Building on the new chart architecture, it introduces the Go-based `worker`, enables event-driven autoscaling with KEDA, and implements a dynamic data loading mechanism for the `simulator` using PersistentVolumes. This marks a major milestone, bringing the `write-pipeline` from simulation to a fully functional, observable, and scalable deployment within the Kubernetes environment.

## Key Changes
This feature branch implements the following key functionalities:
### 1. Upgraded Core Service Interfaces to v0.2.1
All core services (`simulator`, `ingestor`, `analytics-api`) have been updated to use the v0.2.1 container images and the corresponding environment variables and configuration.
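As a minimal sketch of what this looks like in the chart values (the key layout and repository names here are illustrative assumptions, not the chart's actual structure):

```yaml
# Illustrative only: key layout and repository names are assumptions.
simulator:
  image:
    repository: safezone/simulator
    tag: "0.2.1"
ingestor:
  image:
    repository: safezone/ingestor
    tag: "0.2.1"
analytics-api:
  image:
    repository: safezone/analytics-api
    tag: "0.2.1"
```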
### 2. Introduced the Go Worker and KEDA Autoscaling

- **`worker` Deployment:** A new `Deployment` for the Go-based Kafka consumer (`worker`) has been added to the `write-pipeline`.
- **`ScaledObject` Integration:** A `ScaledObject` is now deployed alongside the worker, configured to monitor the Kafka topic's consumer lag. This enables the worker deployment to scale automatically from `minReplicas` to `maxReplicas` based on real-time load, making the data pipeline resilient to traffic bursts.
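For concreteness, a minimal sketch of such a `ScaledObject` (the topic, consumer group, broker address, and replica bounds are placeholders, not the chart's real values):

```yaml
# Hypothetical KEDA ScaledObject for the worker. Kafka connection
# details and names are illustrative placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker            # the worker Deployment introduced above
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092
        consumerGroup: worker-group
        topic: events
        lagThreshold: "50"  # scale out when consumer lag exceeds this
```

With this in place, KEDA manages an HPA for the worker Deployment behind the scenes, so replica counts follow consumer lag rather than manual tuning.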
### 3. Adopted the Modernized `values.yaml` Interface

The `safezone-core` chart and its subcharts have been updated to consume variables from the new, centralized `values.yaml` structure (e.g., `global.kafka`, `global.database`).
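A rough sketch of the centralized structure being referenced (only the `global.kafka` and `global.database` keys come from this PR; the nested fields are assumptions):

```yaml
# Hypothetical global block in values.yaml. Exact fields are
# assumptions; subcharts read these via .Values.global.*.
global:
  kafka:
    bootstrapServers: kafka:9092
    topic: events
  database:
    host: postgres
    port: 5432
    name: safezone
```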
### 4. Implemented Dynamic Data Loading via PV/PVC

The `simulator`'s data source is no longer static. It now mounts a `PersistentVolumeClaim` whose `selector` binds to different `PersistentVolumes` based on a `type` parameter passed from the environment-specific `values.yaml` (e.g., `type: "smoke-test"` or `type: "full-data"`). This allows for easy switching of simulation scenarios between environments.
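A minimal sketch of the selector-based binding, assuming hypothetical resource names, a hostPath backing store, and a templated label value:

```yaml
# Hypothetical PV/PVC pair. The PVC's selector binds it to whichever
# PV carries the matching `type` label, driven by values.yaml.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: simulator-data-smoke-test
  labels:
    type: smoke-test
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadOnlyMany"]
  storageClassName: ""       # static binding, no dynamic provisioner
  hostPath:
    path: /data/smoke-test   # placeholder backing store
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: simulator-data
spec:
  accessModes: ["ReadOnlyMany"]
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: smoke-test       # e.g. {{ .Values.simulator.dataSource.type }}
```

Setting `storageClassName: ""` on both objects keeps any dynamic provisioner out of the way, so the label selector alone decides which volume the claim binds to.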
### 5. Renamed Subcharts for Clarity

The subcharts of `safezone-core` have been renamed from the generic `producer`/`consumer` to the more descriptive `write-pipeline` and `read-pipeline`.

## Overall Impact
With this change, the entire `write-pipeline` is now fully deployed and operational. The system is no longer just a collection of individual services but a cohesive, event-driven architecture that can autonomously respond to load. This lays the final foundation for end-to-end testing and validation of the SafeZone application.