**docs/en/cloud/security/gcp-private-service-connect.md** (+7 −7)

````diff
@@ -172,7 +172,7 @@ output "psc_connection_id" {
 ```
 
 :::note
-TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ## Setting up DNS
@@ -228,7 +228,7 @@ gcloud dns \
   --rrdatas="10.128.0.2"
 ```
 :::note
-DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
@@ … @@
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ## Verify DNS setup
 
-DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
@@ … @@
 ## Accessing instance using Private Service Connect
 
-Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 :::note
 Private DNS hostname is only available from your GCP VPC. Do not try to resolve the DNS host from a machine that resides outside of GCP VPC.
@@ -421,7 +421,7 @@ In this example, connection to the `xxxxxxx.yy-xxxxN.p.gcp.clickhouse.cloud` hos
 
 ### Test DNS setup
 
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 ```bash
 nslookup $DNS_NAME
@@ -443,7 +443,7 @@ If you have problems with connecting using PSC link, check your connectivity usi
 
 OpenSSL should be able to connect (see CONNECTED in the output). `errno=104` is expected.
 
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
````
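The verification steps this diff touches (nslookup, then an OpenSSL handshake against the private hostname) can be sketched as below. This is a hedged sketch: the hostname is a hypothetical placeholder in the shape of the doc's `xxxxxxx.yy-xxxxN.p.gcp.clickhouse.cloud` example, and the network commands are shown as comments since they only work from inside the GCP VPC.

```shell
# Hypothetical placeholder for the privateDnsHostname value from the
# "Obtain GCP service attachment and DNS name" step:
DNS_NAME="abc1234.us-east1.p.gcp.clickhouse.cloud"

# The doc's actual checks (network-dependent; run from inside your GCP VPC):
#   nslookup "$DNS_NAME"
#   openssl s_client -connect "$DNS_NAME":9440   # expect CONNECTED; errno=104 is normal

# Offline sanity check: private endpoints end in .p.gcp.clickhouse.cloud,
# public ones do not (suffix inferred from the doc's example hostname).
is_private_endpoint() {
  case "$1" in
    *.p.gcp.clickhouse.cloud) return 0 ;;
    *) return 1 ;;
  esac
}

is_private_endpoint "$DNS_NAME" && echo "looks like a private endpoint"
```

Checking the suffix before creating DNS records catches the common mistake of pasting the public hostname where the private one is required.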
**docs/en/integrations/data-ingestion/clickpipes/kafka.md** (+1 −1)

```diff
@@ -271,7 +271,7 @@ Batches are inserted when one of the following criteria has been met:
 
 ### Latency
 
-Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
+Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
 
 ClickPipes does not provide any guarantees concerning latency. If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).
```
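The one-character fix above (and the matching ones in the kinesis and connect-sink files) works because markdown renderers derive heading anchors by lowercasing the heading text, so `#Batching` never matches the anchor generated for `### Batching`. A minimal sketch of the slug rule assumed here (real slug generators also strip punctuation):

```shell
# Approximate the anchor a renderer generates for a heading:
# lowercase everything, turn spaces into hyphens.
slug() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}

slug "Batching"         # -> batching
slug "Setting up DNS"   # -> setting-up-dns
```

This is why every in-page link target in the diff is all-lowercase after the change.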
**docs/en/integrations/data-ingestion/clickpipes/kinesis.md** (+1 −1)

```diff
@@ -122,7 +122,7 @@ Batches are inserted when one of the following criteria has been met:
 
 ### Latency
 
-Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
+Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
 
 If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).
```
**docs/en/integrations/data-ingestion/kafka/kafka-clickhouse-connect-sink.md** (+1 −1)

```diff
@@ -106,7 +106,7 @@ The full table of configuration options:
 |`value.converter` (Required* - See Description) | Set based on the type of data on your topic. Supported: - JSON, String, Avro or Protobuf formats. Required here if not defined in worker config. |`"org.apache.kafka.connect.json.JsonConverter"`|
 |`value.converter.schemas.enable`| Connector Value Converter Schema Support |`"false"`|
 |`errors.tolerance`| Connector Error Tolerance. Supported: none, all |`"none"`|
-|`errors.deadletterqueue.topic.name`| If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#Troubleshooting)) |`""`|
+|`errors.deadletterqueue.topic.name`| If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#troubleshooting)) |`""`|
 |`errors.deadletterqueue.context.headers.enable`| Adds additional headers for the DLQ |`""`|
 |`clickhouseSettings`| Comma-separated list of ClickHouse settings (e.g. "insert_quorum=2, etc...") |`""`|
 |`topic2TableMap`| Comma-separated list that maps topic names to table names (e.g. "topic1=table1, topic2=table2, etc...") |`""`|
```
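The table row being edited only takes effect together with `errors.tolerance=all`. A hedged sketch of how these options might combine in a connector config; the connector class name, DLQ topic, and topic/table names here are illustrative assumptions, not values from the diff:

```json
{
  "name": "clickhouse-sink",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "clickhouse-sink-dlq",
    "errors.deadletterqueue.context.headers.enable": "true",
    "topic2TableMap": "topic1=table1, topic2=table2"
  }
}
```

With `errors.tolerance` left at its `"none"` default, the DLQ settings are ignored and a failed batch fails the task instead.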
**docs/en/integrations/data-ingestion/s3/performance.md** (+1 −1)

```diff
@@ -296,7 +296,7 @@ Individual nodes can also be bottlenecked by network and S3 GET requests, preven
 
 Eventually, horizontal scaling is often necessary due to hardware availability and cost-efficiency. In ClickHouse Cloud, production clusters have at least 3 nodes. Users may also wish to therefore utilize all nodes for an insert.
 
-Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](./index.md#utilizing-clusters). This allows reads to be distributed across nodes.
+Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](/docs/en/integrations/s3#utilizing-clusters). This allows reads to be distributed across nodes.
 
 The server that initially receives the insert query first resolves the glob pattern and then dispatches the processing of each matching file dynamically to itself and the other servers.
```
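The `s3Cluster` usage the changed line links to can be sketched as below. The cluster name, bucket URL, and target table are hypothetical placeholders; the glob in the URL is what the initiating server resolves before fanning files out to the other nodes:

```sql
-- Hedged sketch: distribute S3 reads across all nodes of a cluster.
-- 'default' is a placeholder cluster name; the bucket path is illustrative.
INSERT INTO my_table
SELECT *
FROM s3Cluster(
    'default',
    'https://my-bucket.s3.amazonaws.com/data/*.parquet',
    'Parquet');
```

Compared with the plain `s3` function, only the initiator resolves the glob; each matching file is then processed by whichever node is dispatched to it.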