From e6578038f4e1f17b94c2ad69d996c75ba5bd5643 Mon Sep 17 00:00:00 2001
From: Blargian
Date: Wed, 8 Jan 2025 22:55:22 +0100
Subject: [PATCH] fix missed broken anchors

---
 .../cloud/security/gcp-private-service-connect.md  | 14 +++++++-------
 .../data-ingestion/clickpipes/kafka.md             |  2 +-
 .../data-ingestion/clickpipes/kinesis.md           |  2 +-
 .../kafka/kafka-clickhouse-connect-sink.md         |  2 +-
 .../integrations/data-ingestion/s3/performance.md  |  2 +-
 .../data-visualization/powerbi-and-clickhouse.md   |  2 +-
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/docs/en/cloud/security/gcp-private-service-connect.md b/docs/en/cloud/security/gcp-private-service-connect.md
index c46638c4c14..6fe62f66f89 100644
--- a/docs/en/cloud/security/gcp-private-service-connect.md
+++ b/docs/en/cloud/security/gcp-private-service-connect.md
@@ -172,7 +172,7 @@ output "psc_connection_id" {
 ```
 
 :::note
-TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ## Setting up DNS
@@ -228,7 +228,7 @@ gcloud dns \
   --rrdatas="10.128.0.2"
 ```
 :::note
-DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ### Option 3: Using Terraform
@@ -256,12 +256,12 @@ resource "google_dns_record_set" "psc_dns_record" {
 ```
 
 :::note
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ## Verify DNS setup
 
-DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 ```bash
 ping $DNS_RECORD
@@ -387,7 +387,7 @@ curl --silent --user ${KEY_ID:?}:${KEY_SECRET:?} -X PATCH -H "Content-Type: appl
 
 ## Accessing instance using Private Service Connect
 
-Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 :::note
 Private DNS hostname is only available from your GCP VPC. Do not try to resolve the DNS host from a machine that resides outside of GCP VPC.
@@ -421,7 +421,7 @@ In this example, connection to the `xxxxxxx.yy-xxxxN.p.gcp.clickhouse.cloud` hos
 
 ### Test DNS setup
 
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 ```bash
 nslookup $DNS_NAME
@@ -443,7 +443,7 @@ If you have problems with connecting using PSC link, check your connectivity usi
 
 OpenSSL should be able to connect (see CONNECTED in the output). `errno=104` is expected.
 
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 ```bash
 openssl s_client -connect ${DNS_NAME}:9440
diff --git a/docs/en/integrations/data-ingestion/clickpipes/kafka.md b/docs/en/integrations/data-ingestion/clickpipes/kafka.md
index 00e718c5783..b015a5eb47d 100644
--- a/docs/en/integrations/data-ingestion/clickpipes/kafka.md
+++ b/docs/en/integrations/data-ingestion/clickpipes/kafka.md
@@ -271,7 +271,7 @@ Batches are inserted when one of the following criteria has been met:
 
 ### Latency
 
-Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
+Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
 
 ClickPipes does not provide any guarantees concerning latency. If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).
 
diff --git a/docs/en/integrations/data-ingestion/clickpipes/kinesis.md b/docs/en/integrations/data-ingestion/clickpipes/kinesis.md
index 68d9d5b1357..a1dbaefbf70 100644
--- a/docs/en/integrations/data-ingestion/clickpipes/kinesis.md
+++ b/docs/en/integrations/data-ingestion/clickpipes/kinesis.md
@@ -122,7 +122,7 @@ Batches are inserted when one of the following criteria has been met:
 
 ### Latency
 
-Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
+Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
 
 If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).
 
diff --git a/docs/en/integrations/data-ingestion/kafka/kafka-clickhouse-connect-sink.md b/docs/en/integrations/data-ingestion/kafka/kafka-clickhouse-connect-sink.md
index 6f353562076..f5a9ebb736d 100644
--- a/docs/en/integrations/data-ingestion/kafka/kafka-clickhouse-connect-sink.md
+++ b/docs/en/integrations/data-ingestion/kafka/kafka-clickhouse-connect-sink.md
@@ -106,7 +106,7 @@ The full table of configuration options:
 | `value.converter` (Required* - See Description) | Set based on the type of data on your topic. Supported: - JSON, String, Avro or Protobuf formats. Required here if not defined in worker config. | `"org.apache.kafka.connect.json.JsonConverter"` |
 | `value.converter.schemas.enable` | Connector Value Converter Schema Support | `"false"` |
 | `errors.tolerance` | Connector Error Tolerance. Supported: none, all | `"none"` |
-| `errors.deadletterqueue.topic.name` | If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#Troubleshooting)) | `""` |
+| `errors.deadletterqueue.topic.name` | If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#troubleshooting)) | `""` |
 | `errors.deadletterqueue.context.headers.enable` | Adds additional headers for the DLQ | `""` |
 | `clickhouseSettings` | Comma-separated list of ClickHouse settings (e.g. "insert_quorum=2, etc...") | `""` |
 | `topic2TableMap` | Comma-separated list that maps topic names to table names (e.g. "topic1=table1, topic2=table2, etc...") | `""` |
diff --git a/docs/en/integrations/data-ingestion/s3/performance.md b/docs/en/integrations/data-ingestion/s3/performance.md
index 9481c50d6d7..a5d273b0f7a 100644
--- a/docs/en/integrations/data-ingestion/s3/performance.md
+++ b/docs/en/integrations/data-ingestion/s3/performance.md
@@ -296,7 +296,7 @@ Individual nodes can also be bottlenecked by network and S3 GET requests, preven
 
 Eventually, horizontal scaling is often necessary due to hardware availability and cost-efficiency. In ClickHouse Cloud, production clusters have at least 3 nodes. Users may also wish to therefore utilize all nodes for an insert.
 
-Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](./index.md#utilizing-clusters). This allows reads to be distributed across nodes.
+Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](/docs/en/integrations/s3#utilizing-clusters). This allows reads to be distributed across nodes.
 
 The server that initially receives the insert query first resolves the glob pattern and then dispatches the processing of each matching file dynamically to itself and the other servers.
 
diff --git a/docs/en/integrations/data-visualization/powerbi-and-clickhouse.md b/docs/en/integrations/data-visualization/powerbi-and-clickhouse.md
index 299f37fff4c..795ed098687 100644
--- a/docs/en/integrations/data-visualization/powerbi-and-clickhouse.md
+++ b/docs/en/integrations/data-visualization/powerbi-and-clickhouse.md
@@ -20,7 +20,7 @@ Power BI requires you to create your dashboards within the Desktop version and p
 This tutorial will guide you through the process of:
 
 * [Installing the ClickHouse ODBC Driver](#install-the-odbc-driver)
-* [Installing the ClickHouse Power BI Connector into Power BI Desktop](#install-clickhouse-connector)
+* [Installing the ClickHouse Power BI Connector into Power BI Desktop](#power-bi-installation)
 * [Querying data from ClickHouse for visualistion in Power BI Desktop](#query-and-visualise-data)
 * [Setting up an on-premise data gateway for Power BI Service](#power-bi-service)
 