Description
I can't find any user forum, so sorry for misusing a GitHub issue. Please feel free to tell me where I should ask questions instead of creating GitHub issues.
I'm evaluating Apicurio Registry, primarily for use in a streaming system with Kafka and Avro schemas, but also for its ability to handle AsyncAPI, Protobuf, and JSON schemas.
As I already have a Kafka setup, I would like to use Kafka as the storage backend. This opens up some questions that I can't really find answers to in the documentation:
- I would like to run multiple instances of the Registry, primarily for resilience against failure in the underlying infrastructure (i.e. node failure in Kubernetes). Is there any special configuration needed for this, or is it just a matter of starting multiple instances and letting a load balancer (Kubernetes service) spread the load between them? What about the Kafka consumer group id: should it be set, or left unconfigured? (See the consumer group sketch after this list.)
- What are the main pros and cons of the Kafka vs. the Kafka Streams storage backend?
- When using the Kafka Streams storage backend, I assume the instances (when there are multiple) need to be able to talk to each other on the application server port? (See the Streams config sketch after this list.)
- I also need to run the schema registry in multiple Kubernetes clusters, running in different datacenters. It's OK if only one of the datacenters has update abilities; the rest may be read-only. What is the recommended approach for this? My main use case for now is to support one production cluster and one development cluster, so I'm considering running an instance of the schema registry in the development cluster that talks to the production Kafka as its backend. Will this work? Will it work only with the Kafka backend, or can it also be made to work with the Streams backend?
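
To make the consumer group question concrete, here is a minimal sketch using the plain Kafka consumer API, not the registry's own configuration (which I haven't found documented). It only illustrates the semantics I'm asking about: with a shared `group.id`, the partitions of the storage topic would be split across registry instances, whereas a unique group per instance lets every instance read the whole log. The broker address and topic name below are hypothetical.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupIdSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // hypothetical broker address
        // A unique group.id per instance means every instance consumes every partition
        // of the (hypothetical) storage topic and can rebuild the full state locally.
        // A shared group.id would instead split the partitions across instances.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "registry-" + UUID.randomUUID());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("storage-topic"));                   // hypothetical topic name
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d key=%s%n",
                        record.partition(), record.offset(), record.key());
            }
        }
    }
}
```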
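
And for the application-server question, a sketch of what I understand the standard Kafka Streams mechanism to be: each instance advertises a host:port via `application.server` so that other instances can look up keys held in its local state stores (interactive queries). Whether the registry's Streams backend works exactly like this is an assumption on my part; the application id, topic, store name, and addresses below are hypothetical.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class ApplicationServerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "registry-streams");   // hypothetical application id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");      // hypothetical broker address
        // host:port this instance advertises so other instances can reach it for
        // interactive queries, e.g. the pod IP and the application server port.
        props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, "10.0.0.12:9000"); // hypothetical pod address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Materialize the (hypothetical) storage topic into a local state store;
        // keys that live on another instance are fetched via its application.server address.
        KTable<String, String> table = builder.table(
                "storage-topic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("registry-store"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```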