Releases: olric-data/olric
v0.5.0
Olric v0.5.0 is here.
- Olric Binary Protocol has been replaced with the Redis Serialization Protocol 2 (RESP2),
- Redefined DMap API,
- Improved storage engine,
- Cluster events,
- A drop-in replacement for Redis' Publish-Subscribe messaging; the DTopic data structure has been removed,
- `olric-cli`, `olric-benchmark`, and `olric-stats` have been removed.
Important: Please note that this version of Olric is incompatible with the previous versions.
See the README.md file and the documents on pkg.go.dev.
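Since Olric now speaks RESP2, any client that can frame Redis commands can talk to it. As a rough sketch of what that means on the wire, the snippet below frames a command as a RESP2 array of bulk strings; the command name and arguments are illustrative examples, not taken from the release notes:

```go
package main

import (
	"fmt"
	"strings"
)

// encodeRESP2 frames a command as a RESP2 array of bulk strings:
// "*<argc>\r\n" followed by "$<len>\r\n<arg>\r\n" for each argument.
func encodeRESP2(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args))
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a)
	}
	return b.String()
}

func main() {
	// An illustrative DMap read; the actual command set is
	// documented on pkg.go.dev.
	fmt.Printf("%q\n", encodeRESP2("DM.GET", "mydmap", "key-1"))
}
```

This is also why the dedicated `olric-cli` tooling could be dropped: generic Redis tooling produces exactly these frames.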
v0.4.10
v0.4.9
This release includes the following fixes and improvements:
- buraksezer/consistent upgraded to v0.10.0, which includes the following fixes:
  - AverageLoad() panics with "divide by zero" when there are no members in the hash ring (buraksezer/consistent#19),
  - RLock called twice in GetClosestN, causing a deadlock (buraksezer/consistent#23),
- Improve documentation,
- Validate the configuration and add default values to the configuration variables.
v0.5.0-rc.1
Here is the first release candidate of v0.5.x tree. It includes the following improvements:
- Increase the verbosity level of some log messages.
- Upgrade hashicorp/memberlist to v0.5.0.
v0.4.8
This release includes the following fixes and improvements:
- Prevent data race while reading storage engine statistics.
v0.4.7
This release includes the following fixes and improvements:
- Factory failure drains available connection max from pool #4
- Backported a fix from fluxninja/olric. See #172
Thank you for your contributions:
v0.4.6
v0.5.0-beta.8
v0.5.0-beta.7
What is Olric?
Olric is a distributed, in-memory data structure store. It's designed from the ground up to be distributed, and it can be used both as an embedded Go library and as a language-independent service.
With Olric, you can instantly create a fast, scalable, shared pool of RAM across a cluster of computers.
Olric is implemented in Go and speaks the Redis protocol, which means client implementations exist in every major programming language.
Olric is highly scalable and available. Distributed applications can use it for distributed caching, clustering, and Publish-Subscribe messaging.
It is designed to scale out to hundreds of members and thousands of clients. When you add new members, they automatically discover the cluster and linearly increase the memory capacity. Olric offers simple scalability, partitioning (sharding), and re-balancing out-of-the-box. It does not require any extra coordination processes. With Olric, when you start another process to add more capacity, data and backups are automatically and evenly balanced.
See Samples section, and API docs on pkg.go.dev to get started!
Here is the seventh beta of the v0.5.x tree. It includes the following improvements:
- Bring back pipelining feature to Golang client #174,
- Add RefreshMetadata method to the Client interface,
- Add ErrConnRefused error type,
- Delete returns the number of deleted keys,
- Smart routing: ClusterClient can calculate the partition owner for a given key,
- Improve documentation.
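The smart-routing idea can be sketched as follows: Olric maps each key to one of a fixed number of partitions by hashing it, so a client that knows the partition table can send a request directly to the partition owner instead of relying on a random member to forward it. This sketch uses FNV-1a from the standard library as a stand-in hash; Olric's client uses its own configured hash function, so treat this as an illustration of the routing principle only:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionID maps a key to a partition the way hash-based routing
// does: hash the key, then reduce modulo the partition count.
// FNV-1a here is a stand-in for Olric's configured hash function.
func partitionID(key string, partitionCount uint64) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64() % partitionCount
}

func main() {
	// 271 is Olric's default partition count.
	for _, key := range []string{"key-1", "key-2"} {
		fmt.Printf("%s -> partition %d\n", key, partitionID(key, 271))
	}
}
```

Because the mapping is deterministic, every client computes the same owner for a given key, which saves one network hop per request.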
Sample pipelining
func ExamplePipeline() {
	// Connect to a running Olric cluster member.
	c, err := NewClusterClient([]string{"127.0.0.1:3320"})
	if err != nil {
		// Handle this error
	}
	dm, err := c.NewDMap("mydmap")
	if err != nil {
		// Handle this error
	}
	ctx := context.Background()
	pipe, err := dm.Pipeline()
	if err != nil {
		// Handle this error
	}
	// Queue commands; nothing is sent until Exec is called.
	futurePut, err := pipe.Put(ctx, "key-1", "value-1")
	if err != nil {
		// Handle this error
	}
	futureGet := pipe.Get(ctx, "key-1")
	// Flush the queued commands to the cluster.
	err = pipe.Exec(ctx)
	if err != nil {
		// Handle this error
	}
	// Read the results back from the futures.
	err = futurePut.Result()
	if err != nil {
		// Handle this error
	}
	gr, err := futureGet.Result()
	if err != nil {
		// Handle this error
	}
	value, err := gr.String()
	if err != nil {
		// Handle this error
	}
	fmt.Println(value)
}
v0.5.0-beta.6
Here is the sixth beta of the v0.5.x tree. It includes the following improvements:
- Add DM.INCRBYFLOAT command,
- Fix corrupt cursor problems in the DM.SCAN implementation in the clients.