Commit 54d1775

fix invalid mdx

1 parent dc6c1cc · commit 54d1775

32 files changed: +109 -106 lines changed
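Nearly all of the changes below follow the same pattern: the docs are compiled as MDX (hence the commit message), and MDX treats a bare `{...}` in prose as a JavaScript expression and an unescaped `<` as the start of JSX, so literal placeholders such as `{orgid}`, `{{this}}`, or `<=v16` can fail the build. Wrapping them in backticks (or escaping `<` as `&lt;`) makes MDX keep them as literal text. A minimal sketch of the pattern, using the `postman.md` line changed below (the exact error depends on the MDX version in use):

Before — MDX tries to evaluate the braces as an expression and the page fails to compile:

```md
* Edit this value with "orgid" in curly braces "{{orgid}}"
```

After — the backticks turn the placeholder into inline code that MDX leaves alone:

```md
* Edit this value with "orgid" in curly braces `{{orgid}}`
```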

contrib-writing-guide.md

Lines changed: 5 additions & 3 deletions
@@ -43,7 +43,7 @@ sudo apt-get install npm
 sudo npm install --global yarn
 ```
 
-note: if the Node version available in your distro is old (<=v16), you can use [nvm](https://github.com/nvm-sh/nvm#installing-and-updating) to pick a specific one.
+note: if the Node version available in your distro is old (`<=v16`), you can use [nvm](https://github.com/nvm-sh/nvm#installing-and-updating) to pick a specific one.
 
 for example to use node 18:
 
@@ -477,10 +477,12 @@ cd $DOCS/ClickHouse/tests/integration/
 
 Code highlighting is based on the language chosen for your code blocks. Specify the language when you start the code block:
 
-<pre lang="no-highlight"><code>```sql
+<pre lang="no-highlight"><code>
+```sql
 SELECT firstname from imdb.actors;
 ```
-</code></pre>
+</code>
+</pre>
 
 ```sql
 SELECT firstname from imdb.actors;

docs/en/_snippets/_GCS_authentication_and_bucket.md

Lines changed: 2 additions & 1 deletion
@@ -1,5 +1,6 @@
 
-<details><summary>Create GCS buckets and an HMAC key</summary>
+<details>
+<summary>Create GCS buckets and an HMAC key</summary>
 
 ### ch_bucket_us_east1

docs/en/_snippets/_S3_authentication_and_bucket.md

Lines changed: 2 additions & 1 deletion
@@ -1,5 +1,6 @@
 
-<details><summary>Create S3 buckets and an IAM user</summary>
+<details>
+<summary>Create S3 buckets and an IAM user</summary>
 
 This article demonstrates the basics of how to configure an AWS IAM user, create an S3 bucket and configure ClickHouse to use the bucket as an S3 disk. You should work with your security team to determine the permissions to be used, and consider these as a starting point.

docs/en/_snippets/_add_remote_ip_access_list_detail.md

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
-<details><summary>Manage your IP Access List</summary>
+<details>
+<summary>Manage your IP Access List</summary>
 
 From your ClickHouse Cloud services list choose the service that you will work with and switch to **Security**. If the IP Access List does not contain the IP Address or range of the remote system that needs to connect to your ClickHouse Cloud service, then you can resolve the problem with **Add entry**:

docs/en/_snippets/_add_superset_detail.md

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
-<details><summary>Launch Apache Superset in Docker</summary>
+<details>
+<summary>Launch Apache Superset in Docker</summary>
 
 Superset provides [installing Superset locally using Docker Compose](https://superset.apache.org/docs/installation/installing-superset-using-docker-compose/) instructions. After checking out the Apache Superset repo from GitHub you can run the latest development code, or a specific tag. We recommend release 2.0.0 as it is the latest release not marked as `pre-release`.

docs/en/_snippets/_check_ip_access_list_detail.md

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
-<details><summary>Manage your IP Access List</summary>
+<details>
+<summary>Manage your IP Access List</summary>
 
 From your ClickHouse Cloud services list choose the service that you will work with and switch to **Settings**.

docs/en/_snippets/_launch_sql_console.md

Lines changed: 2 additions & 1 deletion
@@ -2,7 +2,8 @@
 If you need a SQL client connection, your ClickHouse Cloud service has an associated web based SQL console; expand **Connect to SQL console** below for details.
 :::
 
-<details><summary>Connect to SQL console</summary>
+<details>
+<summary>Connect to SQL console</summary>
 
 From your ClickHouse Cloud services list, choose the service that you will work with and click **Connect**. From here you can **Open SQL console**:
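Several of the snippet files above get the same `<details>` treatment: the opening tag, the `<summary>`, and the markdown body end up on separate lines. A plausible reading is that MDX handles block-level JSX more reliably when each tag sits on its own line and the markdown content is separated from the tags, rather than sharing a line with them. The shape these snippets end up with looks roughly like this (the closing `</details>` is outside the changed hunks and is shown only for completeness):

```md
<details>
<summary>Connect to SQL console</summary>

From your ClickHouse Cloud services list, choose the service that you will work with and click **Connect**. From here you can **Open SQL console**:

</details>
```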

docs/en/cloud/manage/postman.md

Lines changed: 2 additions & 2 deletions
@@ -61,15 +61,15 @@ The Postman Application is available for use within a web browser or can be down
 * Under the organizationid folder, navigate to "GET organizational details":
 * In the middle frame menu under Params an organizationid is required.
 ![Test retrieval of organization details](@site/docs/en/cloud/manage/images/postman/postman14.png)
-* Edit this value with "orgid" in curly braces "{{orgid}}" (From setting this value earlier a menu will appear with the value):
+* Edit this value with "orgid" in curly braces `{{orgid}}` (From setting this value earlier a menu will appear with the value):
 ![Submit test](@site/docs/en/cloud/manage/images/postman/postman15.png)
 * After pressing the "Save" button, press the blue "Send" button at the top right of the screen.
 ![Return value](@site/docs/en/cloud/manage/images/postman/postman16.png)
 * The returned results should deliver your organization details with "status": 200. (If you receive a "status" 400 with no organization information your configuration is not correct).
 
 ### Test "GET service details"
 * Click "GET service details"
-* Edit the Values for organizationid and serviceid with {{orgid}} and {{serviceid}} respectively.
+* Edit the Values for organizationid and serviceid with `{{orgid}}` and `{{serviceid}}` respectively.
 * Press "Save" and then the blue "Send" button on the right.
 ![List of services](@site/docs/en/cloud/manage/images/postman/postman17.png)
 * The returned results should deliver a list of your services and their details with "status": 200. (If you receive a "status" 400 with no service(s) information your configuration is not correct).

docs/en/cloud/security/accessing-s3-data-securely.md

Lines changed: 2 additions & 2 deletions
@@ -73,7 +73,7 @@ This approach allows customers to manage all access to their S3 buckets in a sin
 
 3 - Create a new IAM role with the following IAM & Trust policy.
 
-Trust policy (Please replace {ClickHouse_IAM_ARN} with the IAM Role arn belong to your ClickHouse instance):
+Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belong to your ClickHouse instance):
 
 ```json
 {
@@ -90,7 +90,7 @@ Trust policy (Please replace {ClickHouse_IAM_ARN} with the IAM Role arn belong
 }
 ```
 
-IAM policy (Please replace {BUCKET_NAME} with your bucket name):
+IAM policy (Please replace `{BUCKET_NAME}` with your bucket name):
 
 ```
 {

docs/en/cloud/security/saml-sso-setup.md

Lines changed: 7 additions & 7 deletions
@@ -89,8 +89,8 @@ You will configure two App Integrations in Okta for each ClickHouse organization
 
 | Field | Value |
 |--------------------------------|-------|
-| Single Sign On URL | https://auth.clickhouse.cloud/login/callback?connection={organizationid} |
-| Audience URI (SP Entity ID) | urn:auth0:ch-production:{organizationid} |
+| Single Sign On URL | `https://auth.clickhouse.cloud/login/callback?connection={organizationid}` |
+| Audience URI (SP Entity ID) | `urn:auth0:ch-production:{organizationid}` |
 | Default RelayState | Leave blank |
 | Name ID format | Unspecified |
 | Application username | Email |
@@ -147,8 +147,8 @@ You will configure one SAML app in Google for each organization and must provide
 
 | Field | Value |
 |-----------|-------|
-| ACS URL | https://auth.clickhouse.cloud/login/callback?connection={organizationid} |
-| Entity ID | urn:auth0:ch-production:{organizationid} |
+| ACS URL | `https://auth.clickhouse.cloud/login/callback?connection={organizationid}` |
+| Entity ID | `urn:auth0:ch-production:{organizationid}` |
 
 8. Check the box for **Signed response**.
 
@@ -198,9 +198,9 @@ You will set up one application integration with a separate sign-on URL for each
 
 | Field | Value |
 |---------------------------|-------|
-| Identifier (Entity ID) | urn:auth0:ch-production:{organizationid} |
-| Reply URL (Assertion Consumer Service URL) | https://auth.clickhouse.cloud/login/callback?connection={organizationid} |
-| Sign on URL | https://console.clickhouse.cloud?connection={organizationid} |
+| Identifier (Entity ID) | `urn:auth0:ch-production:{organizationid}` |
+| Reply URL (Assertion Consumer Service URL) | `https://auth.clickhouse.cloud/login/callback?connection={organizationid}` |
+| Sign on URL | `https://console.clickhouse.cloud?connection={organizationid}` |
 | Relay State | Blank |
 | Logout URL | Blank |

docs/en/guides/best-practices/sparse-primary-indexes.md

Lines changed: 28 additions & 19 deletions
@@ -174,27 +174,36 @@ SETTINGS index_granularity = 8192, index_granularity_bytes = 0, compress_primary
 </summary>
 <p>
 
-In order to simplify the discussions later on in this guide, as well as make the diagrams and results reproducible, the DDL statement
-<ul>
-<li>specifies a compound sorting key for the table via an `ORDER BY` clause</li>
-<br/>
-<li>explicitly controls how many index entries the primary index will have through the settings:</li>
-<br/>
-<ul>
-<li>`index_granularity: explicitly set to its default value of 8192. This means that for each group of 8192 rows, the primary index will have one index entry, e.g. if the table contains 16384 rows then the index will have two index entries.
-</li>
-<br/>
-<li>`index_granularity_bytes`: set to 0 in order to disable <a href="https://clickhouse.com/docs/en/whats-new/changelog/2019/#experimental-features-1" target="_blank">adaptive index granularity</a>. Adaptive index granularity means that ClickHouse automatically creates one index entry for a group of n rows if either of these are true:
+In order to simplify the discussions later on in this guide, as well as make the diagrams and results reproducible, the DDL statement:
+
 <ul>
-<li>if n is less than 8192 and the size of the combined row data for that n rows is larger than or equal to 10 MB (the default value for index_granularity_bytes) or</li>
-<li>if the combined row data size for n rows is less than 10 MB but n is 8192.</li>
-</ul>
-</li>
-<br/>
-<li>`compress_primary_key`: set to 0 to disable <a href="https://github.com/ClickHouse/ClickHouse/issues/34437" target="_blank">compression of the primary index</a>. This will allow us to optionally inspect its contents later.
-</li>
-</ul>
+<li>
+Specifies a compound sorting key for the table via an <code>ORDER BY</code> clause.
+</li>
+<li>
+Explicitly controls how many index entries the primary index will have through the settings:
+<ul>
+<li>
+<code>index_granularity</code>: explicitly set to its default value of 8192. This means that for each group of 8192 rows, the primary index will have one index entry. For example, if the table contains 16384 rows, the index will have two index entries.
+</li>
+<li>
+<code>index_granularity_bytes</code>: set to 0 in order to disable <a href="https://clickhouse.com/docs/en/whats-new/changelog/2019/#experimental-features-1" target="_blank">adaptive index granularity</a>. Adaptive index granularity means that ClickHouse automatically creates one index entry for a group of n rows if either of these are true:
+<ul>
+<li>
+If <code>n</code> is less than 8192 and the size of the combined row data for that <code>n</code> rows is larger than or equal to 10 MB (the default value for <code>index_granularity_bytes</code>).
+</li>
+<li>
+If the combined row data size for <code>n</code> rows is less than 10 MB but <code>n</code> is 8192.
+</li>
+</ul>
+</li>
+<li>
+<code>compress_primary_key</code>: set to 0 to disable <a href="https://github.com/ClickHouse/ClickHouse/issues/34437" target="_blank">compression of the primary index</a>. This will allow us to optionally inspect its contents later.
+</li>
+</ul>
+</li>
 </ul>
+
 </p>
 </details>
docs/en/guides/sre/keeper/index.md

Lines changed: 4 additions & 1 deletion
@@ -984,7 +984,8 @@ Example config for cluster:
 </remote_servers>
 ```
 
-### Procedures to set up tables to use {uuid}
+### Procedures to set up tables to use `{uuid}`
+
 1. Configure Macros on each server
 example for server 1:
 ```xml
@@ -1018,6 +1019,7 @@ Query id: 07fb7e65-beb4-4c30-b3ef-bd303e5c42b5
 ```
 
 3. Create a table on the cluster using the macros and `{uuid}`
+
 ```sql
 CREATE TABLE db_uuid.uuid_table1 ON CLUSTER 'cluster_1S_2R'
 (
@@ -1046,6 +1048,7 @@ Query id: 8f542664-4548-4a02-bd2a-6f2c973d0dc4
 ```
 
 4. Create a distributed table
+
 ```sql
 create table db_uuid.dist_uuid_table1 on cluster 'cluster_1S_2R'

docs/en/guides/sre/user-management/configuring-ldap.md

Lines changed: 2 additions & 2 deletions
@@ -61,7 +61,7 @@ ClickHouse can be configured to use LDAP to authenticate ClickHouse database use
 |----------|------------------------------|---------------------|
 |host |hostname or IP of LDAP server |ldap.forumsys.com |
 |port |directory port for LDAP server|389 |
-|bind_dn |template path to users |uid={user_name},dc=example,dc=com|
+|bind_dn |template path to users |`uid={user_name},dc=example,dc=com`|
 |enable_tls|whether to use secure ldap |no |
 |tls_require_cert |whether to require certificate for connection|never|
 
@@ -103,7 +103,7 @@ ClickHouse can be configured to use LDAP to authenticate ClickHouse database use
 |server |label defined in the prior ldap_servers section|test_ldap_server|
 |roles |name of the roles defined in ClickHouse the users will be mapped to|scientists_role|
 |base_dn |base path to start search for groups with user |dc=example,dc=com|
-|search_filter|ldap search filter to identify groups to select for mapping users |(&amp;(objectClass=groupOfUniqueNames)(uniqueMember={bind_dn}))|
+|search_filter|ldap search filter to identify groups to select for mapping users |`(&(objectClass=groupOfUniqueNames)(uniqueMember={bind_dn}))`|
 |attribute |which attribute name should value be returned from|cn|
 

docs/en/integrations/data-ingestion/clickpipes/secure-kinesis.md

Lines changed: 2 additions & 2 deletions
@@ -40,7 +40,7 @@ Using this approach, customers can manage all access to their Kinesis data strea
 
 3 - Create a new IAM role with the following IAM & Trust policy. Note that the name of the IAM role **must start with** `ClickHouseAccessRole-` for this to work.
 
-Trust policy (Please replace {ClickHouse_IAM_ARN} with the IAM Role arn belong to your ClickHouse instance):
+Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belong to your ClickHouse instance):
 
 ```json
 {
@@ -57,7 +57,7 @@ Trust policy (Please replace {ClickHouse_IAM_ARN} with the IAM Role arn belong t
 }
 ```
 
-IAM policy (Please replace {STREAM_NAME} with your kinesis stream name):
+IAM policy (Please replace `{STREAM_NAME}` with your kinesis stream name):
 
 ```
 {

docs/en/integrations/data-ingestion/dbms/postgresql/connecting-to-postgresql.md

Lines changed: 1 addition & 1 deletion
@@ -343,7 +343,7 @@ Query id: b0729816-3917-44d3-8d1a-fed912fb59ce
 This integration guide focused on a simple example on how to replicate a database with a table, however, there exist more advanced options which include replicating the whole database or adding new tables and schemas to the existing replications. Although DDL commands are not supported for this replication, the engine can be set to detect changes and reload the tables when there are structural changes made.
 
 :::info
-For more features available for advanced options, please see the reference documentation: <https://clickhouse.com/docs/en/engines/database-engines/materialized-postgresql/>
+For more features available for advanced options, please see the [reference documentation](/docs/en/engines/database-engines/materialized-postgresql).
 :::
 

docs/en/integrations/data-ingestion/dbms/postgresql/data-type-mappings.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ The following table shows the equivalent ClickHouse data types for Postgres.
 | CIDR | [String](/en/sql-reference/data-types/string) |
 | HSTORE | [Map(K, V)](/en/sql-reference/data-types/map), [Map](/en/sql-reference/data-types/map)(K,[Variant](/en/sql-reference/data-types/variant)) |
 | UUID | [UUID](/en/sql-reference/data-types/uuid) |
-| ARRAY<T\> | [ARRAY(T)](/en/sql-reference/data-types/array) |
+| ARRAY&lt;T\> | [ARRAY(T)](/en/sql-reference/data-types/array) |
 | JSON* | [String](/en/sql-reference/data-types/string), [Variant](/en/sql-reference/data-types/variant), [Nested](/en/sql-reference/data-types/nested-data-structures/nested#nestedname1-type1-name2-type2-), [Tuple](/en/sql-reference/data-types/tuple) |
 | JSONB | [String](/en/sql-reference/data-types/string) |

docs/en/integrations/data-ingestion/etl-tools/dbt/index.md

Lines changed: 2 additions & 2 deletions
@@ -510,7 +510,7 @@ To illustrate this example, we will add the actor "Clicky McClickHouse", who wil
 1. First, we modify our model to be of type incremental. This addition requires:
 
 1. **unique_key** - To ensure the plugin can uniquely identify rows, we must provide a unique_key - in this case, the `id` field from our query will suffice. This ensures we will have no row duplicates in our materialized table. For more details on uniqueness constraints, see[ here](https://docs.getdbt.com/docs/building-a-dbt-project/building-models/configuring-incremental-models#defining-a-uniqueness-constraint-optional).
-2. **Incremental filter** - We also need to tell dbt how it should identify which rows have changed on an incremental run. This is achieved by providing a delta expression. Typically this involves a timestamp for event data; hence our updated_at timestamp field. This column, which defaults to the value of now() when rows are inserted, allows new roles to be identified. Additionally, we need to identify the alternative case where new actors are added. Using the {{this}} variable, to denote the existing materialized table, this gives us the expression `where id > (select max(id) from {{ this }}) or updated_at > (select max(updated_at) from {{this}})`. We embed this inside the `{% if is_incremental() %}` condition, ensuring it is only used on incremental runs and not when the table is first constructed. For more details on filtering rows for incremental models, see [this discussion in the dbt docs](https://docs.getdbt.com/docs/building-a-dbt-project/building-models/configuring-incremental-models#filtering-rows-on-an-incremental-run).
+2. **Incremental filter** - We also need to tell dbt how it should identify which rows have changed on an incremental run. This is achieved by providing a delta expression. Typically this involves a timestamp for event data; hence our updated_at timestamp field. This column, which defaults to the value of now() when rows are inserted, allows new roles to be identified. Additionally, we need to identify the alternative case where new actors are added. Using the `{{this}}` variable, to denote the existing materialized table, this gives us the expression `where id > (select max(id) from {{ this }}) or updated_at > (select max(updated_at) from {{this}})`. We embed this inside the `{% if is_incremental() %}` condition, ensuring it is only used on incremental runs and not when the table is first constructed. For more details on filtering rows for incremental models, see [this discussion in the dbt docs](https://docs.getdbt.com/docs/building-a-dbt-project/building-models/configuring-incremental-models#filtering-rows-on-an-incremental-run).
 
 Update the file `actor_summary.sql` as follows:
 
@@ -826,7 +826,7 @@ This process is shown below:
 ### insert_overwrite mode (Experimental)
 Performs the following steps:
 
-1. Create a staging (temporary) table with the same structure as the incremental model relation: CREATE TABLE {staging} AS {target}.
+1. Create a staging (temporary) table with the same structure as the incremental model relation: `CREATE TABLE {staging} AS {target}`.
 2. Insert only new records (produced by SELECT) into the staging table.
 3. Replace only new partitions (present in the staging table) into the target table.