An in-memory L1 cache for Go that targets eventual consistency across instances via asynchronous invalidation with Redis Streams.
```bash
cd tests/
docker compose up -d

# Verify services are healthy
curl http://localhost:8080/health   # instance-a
curl http://localhost:8081/health   # instance-b

# Run integration tests
go test ./integration/
```
```bash
# Set a value in instance-a
curl -X POST http://localhost:8080/set \
  -H "Content-Type: application/json" \
  -d '{"key":"user:123","value":"John Doe"}'

# Get the value from instance-b
curl "http://localhost:8081/get?key=user:123"

# Update the value in instance-a
curl -X POST http://localhost:8080/set \
  -H "Content-Type: application/json" \
  -d '{"key":"user:123","value":"Alice Doe"}'

# Get the synchronized value from instance-b
curl "http://localhost:8081/get?key=user:123"
```
Different Go caching solutions fit different scenarios. For strong cross-instance consistency where network latency is acceptable, a direct Redis connection can be used. For simple, single-machine applications, go-cache is suitable. big-cache is a better choice for single-node contexts that cache massive datasets and need to avoid GC pauses. groupcache is specifically designed for caching immutable data in a distributed environment.
A common architectural gap exists for modern microservices. When a service is deployed across multiple instances and needs to cache mutable data, it faces a conflict: it requires both the low-latency reads of an in-process cache and near real-time data consistency. Single-node caches cannot ensure consistency across instances, while groupcache's design for immutable data lacks the required invalidation mechanism.
This project, sync-cache, proposes a two-tier (L1/L2) architecture. The L1 cache is a high-performance, in-process store (like Ristretto) in each service instance, ensuring ultra-fast reads. The L2 is a shared invalidation bus (like Redis Streams) that does not store business data; its sole purpose is to broadcast invalidation messages. When data is updated on one instance, it notifies the bus, and all other instances receive a message to evict the item from their local L1 cache.
Each cache instance requires a stable, unique identifier to maintain consistent consumer group membership in Redis Streams, which tracks message consumption progress across restarts. Without stable identity, restarted instances would create new consumer groups, potentially missing invalidation messages or losing track of consumption state.
Although the L1 caches rely on the L2 (Redis) for invalidation messages and synchronization, most reads are served from the local L1, so adopting sync-cache reduces the query pressure on Redis. This benefit holds even before the L2 deployment has been made highly available.