Problem
The current rate limiting implementation may not work correctly in distributed environments where multiple Honua Server instances run behind a load balancer. Each node maintains its own rate limiting counters, so a client can exceed the intended limits simply by spreading its requests across nodes.
Impact
- Security Risk: Rate limiting bypass could enable abuse or DoS attacks
- Resource Protection: Unable to properly protect backend resources in scaled deployments
- SLA Compliance: Cannot guarantee rate limit enforcement across the application tier
Solution Requirements
Distributed Rate Limiting Store
- Implement Redis-based distributed rate limiting using sliding window counters (see the sketch after this list)
- Support for multiple rate limiting dimensions (per-user, per-IP, per-endpoint, global)
- Configurable time windows (per-second, per-minute, per-hour, per-day)
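
A minimal sketch of the sliding window counter using redis-py, just to make the intended shape concrete; the key naming, the `is_allowed` helper, and the example limits are illustrative assumptions, not the actual Honua Server API.

```python
import time
import uuid

import redis

# Assumed connection settings; a real deployment would read these from configuration.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def is_allowed(dimension: str, identity: str, limit: int, window_seconds: int) -> bool:
    """Sliding window log: one sorted-set entry per request, scored by timestamp.

    `dimension` is e.g. "user", "ip", "endpoint" or "global"; `identity` is the
    concrete user id, client address, route, etc.
    """
    key = f"ratelimit:{dimension}:{identity}:{window_seconds}"
    now = time.time()
    window_start = now - window_seconds

    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, window_start)         # drop entries outside the window
    pipe.zadd(key, {f"{now}:{uuid.uuid4().hex}": now})  # record this request
    pipe.zcard(key)                                     # count requests inside the window
    pipe.expire(key, window_seconds)                    # let idle keys expire
    _, _, count, _ = pipe.execute()

    return count <= limit


# Example: at most 100 requests per minute for a single user.
if is_allowed("user", "user-123", limit=100, window_seconds=60):
    pass  # handle the request
else:
    pass  # reject with HTTP 429
```

One trade-off of this pattern: the count is taken after adding the request, so rejected requests still occupy a slot until they age out, and the check-then-count pipeline is not fully atomic. Moving the decision into a Lua script would close both gaps.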
Multi-Tier Fallback Strategy
- Primary: Redis distributed counters with sliding window algorithm
- Fallback: Local in-memory counters when Redis unavailable
- Circuit Breaker: Automatic failover and recovery detection (a combined sketch follows this list)
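
A rough sketch of how the fallback and circuit breaker could compose. The `RedisCircuitBreaker` class, the failure threshold, the cooldown, and the purely local fallback are illustrative assumptions; the distributed check is passed in as a callable (e.g. the `is_allowed` helper from the sketch above).

```python
import time
from typing import Callable

import redis


class RedisCircuitBreaker:
    """Trips to the local fallback after consecutive Redis failures and
    probes Redis again once the cooldown has elapsed."""

    def __init__(self, failure_threshold: int = 3, cooldown_seconds: float = 30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failures = 0
        self.opened_at: float | None = None

    def redis_available(self) -> bool:
        if self.opened_at is None:
            return True
        # Half-open: allow a probe once the cooldown has passed.
        return time.time() - self.opened_at >= self.cooldown_seconds

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.time()


breaker = RedisCircuitBreaker()
local_hits: dict[str, list[float]] = {}  # per-node fallback state


def allow_request(
    key: str,
    limit: int,
    window_seconds: int,
    distributed_check: Callable[[], bool],  # e.g. the is_allowed() helper above
) -> bool:
    if breaker.redis_available():
        try:
            allowed = distributed_check()
            breaker.record_success()
            return allowed
        except redis.RedisError:
            breaker.record_failure()
    # Fallback: purely local sliding window, accurate only for this node.
    now = time.time()
    hits = [t for t in local_hits.get(key, []) if t > now - window_seconds]
    hits.append(now)
    local_hits[key] = hits
    return len(hits) <= limit
```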
Configuration Options
- Rate limits per protocol (FeatureServer, OData, OGC APIs)
- Per-user vs anonymous user limits
- Burst allowance and sustained rate differentiation
- Redis connection settings and retry policies (an illustrative configuration sketch follows this list)
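
One possible shape for the configuration surface, written as Python dataclasses purely to make the options concrete; every field name and default here is an assumption, not the server's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class RateLimitRule:
    """Limits for one protocol or endpoint."""
    requests_per_minute: int            # sustained rate
    burst_allowance: int                # extra requests tolerated in short spikes
    anonymous_requests_per_minute: int  # limit for unauthenticated clients


@dataclass
class RedisSettings:
    host: str = "localhost"
    port: int = 6379
    connect_timeout_seconds: float = 0.25
    max_retries: int = 2


@dataclass
class RateLimitConfig:
    redis: RedisSettings = field(default_factory=RedisSettings)
    rules: dict[str, RateLimitRule] = field(default_factory=dict)


# Example: stricter limits for OData than for FeatureServer and the OGC APIs.
config = RateLimitConfig(
    rules={
        "FeatureServer": RateLimitRule(600, 100, 120),
        "OData": RateLimitRule(300, 50, 60),
        "OGC": RateLimitRule(600, 100, 120),
    }
)
```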
Monitoring & Observability
- Rate limiting metrics (requests blocked, limits exceeded, Redis health); see the metrics sketch after this list
- Distributed tracing integration for rate limiting decisions
- Alerting when rate limits are frequently exceeded
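
A sketch of the metrics that could back these requirements, using prometheus_client; the metric names, labels, and port are assumptions.

```python
from prometheus_client import Counter, Gauge, start_http_server

# Counter labelled by the dimension that triggered the block and by protocol.
REQUESTS_BLOCKED = Counter(
    "ratelimit_requests_blocked_total",
    "Requests rejected by the rate limiter",
    ["dimension", "protocol"],
)
REDIS_HEALTHY = Gauge(
    "ratelimit_redis_healthy",
    "1 when the distributed store is reachable, 0 when the local fallback is active",
)

start_http_server(9102)  # expose /metrics for scraping; port is an assumption

# Inside the request path:
# REQUESTS_BLOCKED.labels(dimension="user", protocol="OData").inc()
# REDIS_HEALTHY.set(0)
```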
Technical Considerations
- Performance: Sub-millisecond rate limiting decisions
- Consistency: Eventual consistency acceptable for rate limiting use case
- Resilience: Graceful degradation when distributed store unavailable
- AOT Compatibility: All implementations must support Native AOT compilation
Priority
High - Essential for production multi-node deployments
Acceptance Criteria
- Redis-based distributed rate limiting implementation
- Configurable rate limiting rules per protocol/endpoint
- Fallback to local counters when Redis unavailable
- Performance metrics and monitoring integration
- Integration tests with multiple server instances (see the test sketch below)
- Documentation for deployment and configuration
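
A sketch of what the multi-instance integration test could look like, pytest-style with requests; the node URLs, endpoint path, auth header, and the shared limit of 100 are assumptions about the test environment.

```python
import requests

# Two Honua Server instances assumed to be running behind the same Redis,
# e.g. started by docker-compose in the test environment.
NODE_A = "http://localhost:8081"
NODE_B = "http://localhost:8082"
SHARED_LIMIT = 100  # assumed per-user requests per minute in the test configuration


def test_rate_limit_is_shared_across_nodes():
    headers = {"Authorization": "Bearer test-user-token"}

    # Exhaust the shared budget by alternating between the two nodes.
    statuses = []
    for i in range(SHARED_LIMIT + 10):
        node = NODE_A if i % 2 == 0 else NODE_B
        resp = requests.get(f"{node}/featureserver/0/query", headers=headers)
        statuses.append(resp.status_code)

    # If the counters were per node, neither node would have rejected anything.
    assert statuses.count(429) >= 10
```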