Commit eb7dddb

Merge pull request #3055 from Blargian/fix_remaining_hashes

Fix broken anchor links

2 parents: a39ee1f + e657803

File tree

6 files changed (+12, -12 lines)


docs/en/cloud/security/gcp-private-service-connect.md (+7, -7)

@@ -172,7 +172,7 @@ output "psc_connection_id" {
 ```
 
 :::note
-TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ## Setting up DNS
@@ -228,7 +228,7 @@ gcloud dns \
 --rrdatas="10.128.0.2"
 ```
 :::note
-DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ### Option 3: Using Terraform
@@ -256,12 +256,12 @@ resource "google_dns_record_set" "psc_dns_record" {
 ```
 
 :::note
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 :::
 
 ## Verify DNS setup
 
-DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 ```bash
 ping $DNS_RECORD
@@ -387,7 +387,7 @@ curl --silent --user ${KEY_ID:?}:${KEY_SECRET:?} -X PATCH -H "Content-Type: appl
 
 ## Accessing instance using Private Service Connect
 
-Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 :::note
 Private DNS hostname is only available from your GCP VPC. Do not try to resolve the DNS host from a machine that resides outside of GCP VPC.
@@ -421,7 +421,7 @@ In this example, connection to the `xxxxxxx.yy-xxxxN.p.gcp.clickhouse.cloud` hos
 
 ### Test DNS setup
 
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 ```bash
 nslookup $DNS_NAME
@@ -443,7 +443,7 @@ If you have problems with connecting using PSC link, check your connectivity usi
 
 OpenSSL should be able to connect (see CONNECTED in the output). `errno=104` is expected.
 
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
 
 ```bash
 openssl s_client -connect ${DNS_NAME}:9440
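The verification steps in the diff above (`ping`/`nslookup` against `DNS_RECORD`) can also be sanity-checked programmatically. A minimal Python sketch of the same resolver check; the ClickHouse hostname and the `10.128.0.2` endpoint IP are placeholders taken from the doc and only resolve from inside the GCP VPC:

```python
import socket

def resolve_ipv4(hostname: str) -> set[str]:
    # Mirrors `nslookup $DNS_NAME`: return the IPv4 addresses this
    # machine's resolver maps the hostname to.
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {info[4][0] for info in infos}

# In-VPC you would pass privateDnsHostname and expect the IP configured
# in the DNS record, e.g.:
#   resolve_ipv4("xxxxxxx.yy-xxxxN.p.gcp.clickhouse.cloud")  # placeholder
# Outside the VPC that name will not resolve, by design.
print(resolve_ipv4("localhost"))
```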

docs/en/integrations/data-ingestion/clickpipes/kafka.md (+1, -1)

@@ -271,7 +271,7 @@ Batches are inserted when one of the following criteria has been met:
 
 ### Latency
 
-Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
+Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
 
 ClickPipes does not provide any guarantees concerning latency. If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).
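Every hunk in this commit makes the same fix: Markdown heading anchors are generated lowercase, so links like `#Batching` or `#Troubleshooting` never match. A minimal sketch approximating a GitHub-style heading slugger (the real Docusaurus slugger handles more edge cases such as duplicate headings):

```python
import re

def slugify(heading: str) -> str:
    # Approximate GitHub/Docusaurus anchor generation: lowercase,
    # drop punctuation, collapse whitespace runs into single hyphens.
    slug = heading.lower().strip()
    slug = re.sub(r"[^\w\s-]", "", slug)
    return re.sub(r"\s+", "-", slug)

# Headings slugify to lowercase, which is why "#Batching" was broken:
print(slugify("Batching"))         # not "Batching"
print(slugify("Obtain GCP service attachment and DNS name for Private Service Connect"))
```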

docs/en/integrations/data-ingestion/clickpipes/kinesis.md (+1, -1)

@@ -122,7 +122,7 @@ Batches are inserted when one of the following criteria has been met:
 
 ### Latency
 
-Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
+Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
 
 If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).

docs/en/integrations/data-ingestion/kafka/kafka-clickhouse-connect-sink.md (+1, -1)

@@ -106,7 +106,7 @@ The full table of configuration options:
 | `value.converter` (Required* - See Description) | Set based on the type of data on your topic. Supported: - JSON, String, Avro or Protobuf formats. Required here if not defined in worker config. | `"org.apache.kafka.connect.json.JsonConverter"` |
 | `value.converter.schemas.enable` | Connector Value Converter Schema Support | `"false"` |
 | `errors.tolerance` | Connector Error Tolerance. Supported: none, all | `"none"` |
-| `errors.deadletterqueue.topic.name` | If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#Troubleshooting)) | `""` |
+| `errors.deadletterqueue.topic.name` | If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#troubleshooting)) | `""` |
 | `errors.deadletterqueue.context.headers.enable` | Adds additional headers for the DLQ | `""` |
 | `clickhouseSettings` | Comma-separated list of ClickHouse settings (e.g. "insert_quorum=2, etc...") | `""` |
 | `topic2TableMap` | Comma-separated list that maps topic names to table names (e.g. "topic1=table1, topic2=table2, etc...") | `""` |
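For context on the row touched above: the `errors.deadletterqueue.*` options only take effect together with `errors.tolerance=all`. A partial connector config sketch showing how they combine; the topic name is a placeholder, not from the source:

```json
{
  "errors.tolerance": "all",
  "errors.deadletterqueue.topic.name": "clickhouse-sink-dlq",
  "errors.deadletterqueue.context.headers.enable": "true"
}
```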

docs/en/integrations/data-ingestion/s3/performance.md (+1, -1)

@@ -296,7 +296,7 @@ Individual nodes can also be bottlenecked by network and S3 GET requests, preven
 
 Eventually, horizontal scaling is often necessary due to hardware availability and cost-efficiency. In ClickHouse Cloud, production clusters have at least 3 nodes. Users may also wish to therefore utilize all nodes for an insert.
 
-Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](./index.md#utilizing-clusters). This allows reads to be distributed across nodes.
+Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](/docs/en/integrations/s3#utilizing-clusters). This allows reads to be distributed across nodes.
 
 The server that initially receives the insert query first resolves the glob pattern and then dispatches the processing of each matching file dynamically to itself and the other servers.
docs/en/integrations/data-visualization/powerbi-and-clickhouse.md (+1, -1)

@@ -20,7 +20,7 @@ Power BI requires you to create your dashboards within the Desktop version and p
 This tutorial will guide you through the process of:
 
 * [Installing the ClickHouse ODBC Driver](#install-the-odbc-driver)
-* [Installing the ClickHouse Power BI Connector into Power BI Desktop](#install-clickhouse-connector)
+* [Installing the ClickHouse Power BI Connector into Power BI Desktop](#power-bi-installation)
 * [Querying data from ClickHouse for visualistion in Power BI Desktop](#query-and-visualise-data)
 * [Setting up an on-premise data gateway for Power BI Service](#power-bi-service)
