Commit 61f3580

cnb0 committed

adding self balancing properties

1 parent 97c857f · commit 61f3580

File tree

10 files changed: +1441 -34 lines changed

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+1

WS/00.kfk.confluent.docker.setup/zk-single-kafka-single.yml

Lines changed: 0 additions & 33 deletions
This file was deleted.

WS/00.kfk.gettingstarted/1-kafka-cli/0-kafka-topics.sh

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 # Replace "kafka-topics"
 # by "kafka-topics.sh" or "kafka-topics.bat" based on your system (or bin/kafka-topics.sh or bin\windows\kafka-topics.bat if you didn't set up PATH / environment variables)
 
-kafka-topics.sh
+/home/nobleprog/kafka_2.12-2.4.0/bin/kafka-topics.sh
 
 kafka-topics --zookeeper 127.0.0.1:2181 --list
 
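The change above swaps the bare script name for the full install path. An alternative, sketched below, is to put the Kafka bin directory on PATH once so the bare names keep working; the install path comes from the diff, while ~/.bashrc is an assumption about which profile file this machine uses.

# Sketch: add the Kafka bin directory (taken from the diff above) to PATH.
# ~/.bashrc is an assumption about the shell profile in use on this machine.
echo 'export PATH="$PATH:/home/nobleprog/kafka_2.12-2.4.0/bin"' >> ~/.bashrc
source ~/.bashrc

# The bare script name now resolves:
kafka-topics.sh --zookeeper 127.0.0.1:2181 --list
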
Lines changed: 86 additions & 0 deletions

@@ -0,0 +1,86 @@
##
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended
# to be used with the examples, and some settings may differ from those used in a production system, especially
# the `bootstrap.servers` and those specifying replication factors.

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
bootstrap.servers=localhost:9092

# Unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs.
group.id=connect-cluster

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the converter's setting with the converter we want to apply it to.
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, this example
# targets a small multi-broker cluster and sets the replication factor to 2.
offset.storage.topic=connect-offsets
offset.storage.replication.factor=2
#offset.storage.partitions=25

# Topic to use for storing connector and task configurations; note that this should be a single-partition, highly replicated,
# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, this example
# targets a small multi-broker cluster and sets the replication factor to 2.
config.storage.topic=connect-configs
config.storage.replication.factor=2

# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, this example
# targets a small multi-broker cluster and sets the replication factor to 2.
status.storage.topic=connect-status
status.storage.replication.factor=2
#status.storage.partitions=5

# Flush much faster than normal, which is useful for testing/debugging.
offset.flush.interval.ms=10000

# These are provided to inform the user about the presence of the REST host and port configs.
# Hostname & port for the REST API to listen on. If this is set, it will bind to the interface used to listen to requests.
#rest.host.name=
#rest.port=8083

# The hostname & port that will be given out to other workers to connect to, i.e. URLs that are routable from other servers.
#rest.advertised.host.name=
#rest.advertised.port=

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top-level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java,/home/nobleprog/confluent-6.0.0/share/confluent-hub-components
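
The comments above note that Connect can create its three internal topics on startup, or that they can be created manually when a specific configuration is needed. Below is a minimal sketch of both steps; the topic names, partition counts (commented defaults: offsets 25, configs 1, status 5), and replication factor 2 come from the properties above, while the filename connect-distributed.properties, the broker address, and the Confluent 6.0.0 install path are assumptions.

# Minimal sketch, assuming the properties above are saved as connect-distributed.properties
# and a Confluent 6.0.0 install at /home/nobleprog/confluent-6.0.0 (both assumptions).
CONFLUENT_HOME=/home/nobleprog/confluent-6.0.0

# Optionally pre-create the three internal topics with the replication factor (2)
# and partition counts given in the properties; Connect creates them otherwise.
for t in connect-offsets:25 connect-configs:1 connect-status:5; do
  "$CONFLUENT_HOME/bin/kafka-topics" --bootstrap-server localhost:9092 \
    --create --topic "${t%%:*}" --partitions "${t##*:}" \
    --replication-factor 2 --config cleanup.policy=compact
done

# Start the distributed worker with this configuration.
"$CONFLUENT_HOME/bin/connect-distributed" connect-distributed.properties

Once the worker is up, its REST API listens on port 8083 by default, so curl http://localhost:8083/connectors should return an empty JSON array.
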
Lines changed: 133 additions & 0 deletions

@@ -0,0 +1,133 @@
# (Copyright) Confluent, Inc.

############################# Server Basics #############################

# A comma separated list of Apache Kafka cluster host names (required)
# NOTE: should not be localhost
#bootstrap.servers=kafka1:9092

# A comma separated list of ZooKeeper host names (for ACLs)
#zookeeper.connect=zookeeper1:2181

############################# Control Center Settings #############################

# Unique identifier for the Control Center
#confluent.controlcenter.id=1

# Directory for Control Center to store data
# NOTE: this should be changed to point to a reliable directory
confluent.controlcenter.data.dir=/tmp/confluent/control-center

# License string for the Control Center
#confluent.license=XyZ

# A comma separated list of Connect host names
#confluent.controlcenter.connect.cluster=http://localhost:8083

# KSQL cluster URL
#confluent.controlcenter.ksql.ksqlDB.url=http://localhost:8088

# Schema Registry cluster URL
#confluent.controlcenter.schema.registry.url=http://localhost:8081

# Kafka REST endpoint URL
#confluent.controlcenter.streams.cprest.url=http://localhost:8090
confluent.controlcenter.streams.cprest.url=http://localhost:8090,http://localhost:8091,http://localhost:8092,http://localhost:8093,http://localhost:8094

# Settings to enable email alerts
#confluent.controlcenter.mail.enabled=true
#confluent.controlcenter.mail.host.name=smtp1
#confluent.controlcenter.mail.port=587

# Replication for internal Control Center topics.
# Only lower them for testing.
# WARNING: replication factor of 1 risks data loss.
#confluent.controlcenter.internal.topics.replication=3

# Number of partitions for Control Center internal topics
# Increase for better throughput on monitored data (CPU bound)
# NOTE: changing requires running `bin/control-center-reset` prior to restart
#confluent.controlcenter.internal.topics.partitions=4

# Topic used to store Control Center configuration
# WARNING: replication factor of 1 risks data loss.
#confluent.controlcenter.command.topic.replication=3

# Enable automatic UI updates
confluent.controlcenter.ui.autoupdate.enable=true

# Enable usage data collection
confluent.controlcenter.usage.data.collection.enable=true

# Enable Controller Chart in Broker page
#confluent.controlcenter.ui.controller.chart.enable=true

############################# Control Center RBAC Settings #############################

# Enable RBAC authorization in Control Center by providing a comma-separated list of Metadata Service (MDS) URLs
#confluent.metadata.bootstrap.server.urls=http://localhost:8090

# MDS credentials of an RBAC user for Control Center to act on behalf of
# NOTE: This user must be a SystemAdmin on each Apache Kafka cluster
#confluent.metadata.basic.auth.user.info=username:password

# Enable SASL-based authentication for each Apache Kafka cluster (SASL_PLAINTEXT or SASL_SSL required)
#confluent.controlcenter.streams.security.protocol=SASL_PLAINTEXT
#confluent.controlcenter.kafka.<name>.security.protocol=SASL_PLAINTEXT

# Enable authentication using a bearer token for Control Center's REST endpoints
#confluent.controlcenter.rest.authentication.method=BEARER

# Public key used to verify bearer tokens
# NOTE: Must match the MDS public key
#public.key.path=/path/to/publickey.pem

############################# Broker (Metrics reporter) Monitoring #############################

# Set how far back in time metrics reporter data should be processed
#confluent.metrics.topic.skip.backlog.minutes=15

############################# Stream (Interceptor) Monitoring #############################

# Keep these settings default unless using non-Confluent interceptors

# Override topic name for intercepted data (should match custom interceptor settings)
#confluent.monitoring.interceptor.topic=_confluent-monitoring

# Number of partitions for the intercepted topic
#confluent.monitoring.interceptor.topic.partitions=12

# Amount of replication for intercepted topics
# WARNING: replication factor of 1 risks data loss.
#confluent.monitoring.interceptor.topic.replication=3

# Set how far back in time interceptor data should be processed
#confluent.monitoring.interceptor.topic.skip.backlog.minutes=15

############################# System Health (Broker) Monitoring #############################

# Number of partitions for the metrics topic
#confluent.metrics.topic.partitions=12

# Replication factor for broker monitoring data
# WARNING: replication factor of 1 risks data loss.
#confluent.metrics.topic.replication=3

############################# Streams (state store) settings #############################

# Increase for better throughput on data processing (CPU bound)
#confluent.controlcenter.streams.num.stream.threads=8

################################## Confluent Telemetry Settings ##################################

# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
# be using the '--resource cloud' flag.
#
# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
# in your API key/secret.
#
#confluent.telemetry.enabled=true
#confluent.telemetry.api.key=<CLOUD_API_KEY>
#confluent.telemetry.api.secret=<CLOUD_API_SECRET>
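
The commit title mentions self-balancing: the uncommented confluent.controlcenter.streams.cprest.url line above lists the Admin REST endpoint of five brokers, which lets Control Center query per-broker state such as Self-Balancing status (Self-Balancing itself is switched on broker-side via confluent.balancer.enable=true in each broker's server.properties). A minimal sketch of starting Control Center with this file follows, assuming it is saved as control-center.properties under the same Confluent 6.0.0 install path used earlier (both assumptions).

# Minimal sketch, assuming the properties above are saved as control-center.properties
# and a Confluent 6.0.0 install at /home/nobleprog/confluent-6.0.0 (both assumptions).
CONFLUENT_HOME=/home/nobleprog/confluent-6.0.0

# Start Control Center in the foreground with the configuration above.
"$CONFLUENT_HOME/bin/control-center-start" control-center.properties

# The web UI listens on port 9021 by default; every URL listed in
# confluent.controlcenter.streams.cprest.url must be reachable from this host.
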

0 commit comments