Merge branch 'main' of github.com:ClickHouse/clickhouse-docs into mw-fix-search-crash
gjones committed Jan 8, 2025
2 parents 160cfc4 + eb7dddb commit 4aea657
Showing 6 changed files with 12 additions and 12 deletions.
14 changes: 7 additions & 7 deletions docs/en/cloud/security/gcp-private-service-connect.md
@@ -172,7 +172,7 @@ output "psc_connection_id" {
```

:::note
-TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+TARGET - Use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
:::

## Setting up DNS
@@ -228,7 +228,7 @@ gcloud dns \
--rrdatas="10.128.0.2"
```
:::note
-DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
:::

### Option 3: Using Terraform
@@ -256,12 +256,12 @@ resource "google_dns_record_set" "psc_dns_record" {
```

:::note
-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step
:::
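For reference, the truncated Terraform block above amounts to a single `google_dns_record_set` resource. A minimal sketch, assuming placeholder values for the managed zone and endpoint IP (substitute **privateDnsHostname** and the IP of your PSC endpoint):

```
resource "google_dns_record_set" "psc_dns_record" {
  managed_zone = "clickhouse-cloud-private"  # placeholder zone name
  name         = "${var.dns_name}."          # privateDnsHostname, with trailing dot
  type         = "A"
  ttl          = 300
  rrdatas      = ["10.128.0.2"]              # IP of your PSC endpoint
}
```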

## Verify DNS setup

-DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_RECORD - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step

```bash
ping $DNS_RECORD
@@ -387,7 +387,7 @@ curl --silent --user ${KEY_ID:?}:${KEY_SECRET:?} -X PATCH -H "Content-Type: appl

## Accessing instance using Private Service Connect

-Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+Each instance with configured Private Service Connect filters has two endpoints: public and private. In order to connect using Private Service Connect, you need to use a private endpoint, see use **endpointServiceId** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step

:::note
Private DNS hostname is only available from your GCP VPC. Do not try to resolve the DNS host from a machine that resides outside of GCP VPC.
@@ -421,7 +421,7 @@ In this example, connection to the `xxxxxxx.yy-xxxxN.p.gcp.clickhouse.cloud` hos

### Test DNS setup

-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step

```bash
nslookup $DNS_NAME
@@ -443,7 +443,7 @@ If you have problems with connecting using PSC link, check your connectivity usi

OpenSSL should be able to connect (see CONNECTED in the output). `errno=104` is expected.

-DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-for-private-service-connect) step
+DNS_NAME - Use **privateDnsHostname** from [Obtain GCP service attachment for Private Service Connect](#obtain-gcp-service-attachment-and-dns-name-for-private-service-connect) step

```bash
openssl s_client -connect ${DNS_NAME}:9440
2 changes: 1 addition & 1 deletion docs/en/integrations/data-ingestion/clickpipes/kafka.md
@@ -271,7 +271,7 @@ Batches are inserted when one of the following criteria has been met:

### Latency

-Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
+Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.

ClickPipes does not provide any guarantees concerning latency. If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).
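When testing, one rough way to quantify latency is to stamp each message with a produce timestamp and diff it against the time the row becomes visible in ClickHouse; the aggregation itself is a one-liner (the millisecond timestamps below are made-up sample values):

```bash
# Each input line: produce_ts insert_ts (ms since epoch) — sample values.
printf '1700000000000 1700000000450\n1700000001000 1700000001300\n' \
  | awk '{ total += $2 - $1; n++ } END { printf "avg latency: %d ms\n", total / n }'
# → avg latency: 375 ms
```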

2 changes: 1 addition & 1 deletion docs/en/integrations/data-ingestion/clickpipes/kinesis.md
@@ -122,7 +122,7 @@ Batches are inserted when one of the following criteria has been met:

### Latency

-Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#Batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
+Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will be dependent on a number of factors (i.e. kinesis latency, network latency, message size/format). The [batching](#batching) described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.

If you have specific low-latency requirements, please [contact us](https://clickhouse.com/company/contact?loc=clickpipes).

@@ -106,7 +106,7 @@ The full table of configuration options:
| `value.converter` (Required* - See Description) | Set based on the type of data on your topic. Supported: - JSON, String, Avro or Protobuf formats. Required here if not defined in worker config. | `"org.apache.kafka.connect.json.JsonConverter"` |
| `value.converter.schemas.enable` | Connector Value Converter Schema Support | `"false"` |
| `errors.tolerance` | Connector Error Tolerance. Supported: none, all | `"none"` |
-| `errors.deadletterqueue.topic.name` | If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#Troubleshooting)) | `""` |
+| `errors.deadletterqueue.topic.name` | If set (with errors.tolerance=all), a DLQ will be used for failed batches (see [Troubleshooting](#troubleshooting)) | `""` |
| `errors.deadletterqueue.context.headers.enable` | Adds additional headers for the DLQ | `""` |
| `clickhouseSettings` | Comma-separated list of ClickHouse settings (e.g. "insert_quorum=2, etc...") | `""` |
| `topic2TableMap` | Comma-separated list that maps topic names to table names (e.g. "topic1=table1, topic2=table2, etc...") | `""` |
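Assembled from the rows above, a minimal sink configuration might look like the following sketch (connector name, topics, and DLQ topic are placeholders; `connector.class` is assumed to be the ClickHouse Kafka Connect sink class, and connection settings such as the ClickHouse hostname and credentials are omitted):

```json
{
  "name": "clickhouse-sink",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    "topics": "topic1,topic2",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "clickhouse-dlq",
    "errors.deadletterqueue.context.headers.enable": "true",
    "topic2TableMap": "topic1=table1, topic2=table2"
  }
}
```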
2 changes: 1 addition & 1 deletion docs/en/integrations/data-ingestion/s3/performance.md
@@ -296,7 +296,7 @@ Individual nodes can also be bottlenecked by network and S3 GET requests, preven

Eventually, horizontal scaling is often necessary due to hardware availability and cost-efficiency. In ClickHouse Cloud, production clusters have at least 3 nodes. Users may also wish to therefore utilize all nodes for an insert.

-Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](./index.md#utilizing-clusters). This allows reads to be distributed across nodes.
+Utilizing a cluster for S3 reads requires using the `s3Cluster` function as described in [Utilizing Clusters](/docs/en/integrations/s3#utilizing-clusters). This allows reads to be distributed across nodes.

The server that initially receives the insert query first resolves the glob pattern and then dispatches the processing of each matching file dynamically to itself and the other servers.
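As a sketch, an insert that fans S3 reads out across the cluster looks like this (the table and bucket names are placeholders; in ClickHouse Cloud the cluster name is `default`):

```sql
INSERT INTO my_table
SELECT *
FROM s3Cluster('default', 'https://my-bucket.s3.amazonaws.com/data/*.parquet')
```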

@@ -20,7 +20,7 @@ Power BI requires you to create your dashboards within the Desktop version and p
This tutorial will guide you through the process of:

* [Installing the ClickHouse ODBC Driver](#install-the-odbc-driver)
-* [Installing the ClickHouse Power BI Connector into Power BI Desktop](#install-clickhouse-connector)
+* [Installing the ClickHouse Power BI Connector into Power BI Desktop](#power-bi-installation)
* [Querying data from ClickHouse for visualistion in Power BI Desktop](#query-and-visualise-data)
* [Setting up an on-premise data gateway for Power BI Service](#power-bi-service)

