fix invalid mdx
gingerwizard committed Jan 9, 2025
1 parent dc6c1cc commit 54d1775
Showing 32 changed files with 109 additions and 106 deletions.
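The common thread in the diffs below: MDX parses a bare `{...}` as a JSX expression and an unescaped `<tag>` as a JSX component, so literal placeholders such as `{{orgid}}` are wrapped in backticks and same-line `<details><summary>` pairs are split onto separate lines. As a hedged illustration (not part of the commit), similar spots can be located with grep before they break the docs build:

```bash
# Not from the commit: rough greps for the two MDX hazards these diffs fix.
# Matches will also include occurrences that are already safely wrapped in backticks.
grep -rn  --include='*.md' '<details><summary>' docs/en
grep -rnE --include='*.md' '\{\{?[A-Za-z_]+\}\}?' docs/en | head
```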
8 changes: 5 additions & 3 deletions contrib-writing-guide.md
@@ -43,7 +43,7 @@ sudo apt-get install npm
sudo npm install --global yarn
```

note: if the Node version available in your distro is old (<=v16), you can use [nvm](https://github.com/nvm-sh/nvm#installing-and-updating) to pick a specific one.
note: if the Node version available in your distro is old (`<=v16`), you can use [nvm](https://github.com/nvm-sh/nvm#installing-and-updating) to pick a specific one.

for example to use node 18:
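The nvm commands themselves are collapsed in this hunk; a minimal sketch, assuming nvm is already installed and sourced in the shell:

```bash
# Install and activate Node.js 18 with nvm (assumes nvm is installed and sourced).
nvm install 18    # download the latest 18.x release
nvm use 18        # switch the current shell to it
node --version    # confirm the active version
```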

@@ -477,10 +477,12 @@ cd $DOCS/ClickHouse/tests/integration/

Code highlighting is based on the language chosen for your code blocks. Specify the language when you start the code block:

<pre lang="no-highlight"><code>```sql
<pre lang="no-highlight"><code>
```sql
SELECT firstname from imdb.actors;
```
</code></pre>
</code>
</pre>

```sql
SELECT firstname from imdb.actors;
3 changes: 2 additions & 1 deletion docs/en/_snippets/_GCS_authentication_and_bucket.md
@@ -1,5 +1,6 @@

<details><summary>Create GCS buckets and an HMAC key</summary>
<details>
<summary>Create GCS buckets and an HMAC key</summary>

### ch_bucket_us_east1
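The console walkthrough for this bucket is collapsed in the diff; a hedged CLI sketch of the same idea, assuming `gsutil` is authenticated and using a placeholder service-account email for the HMAC key:

```bash
# Not from the snippet: create the bucket and an HMAC key from the CLI.
gsutil mb -l us-east1 gs://ch_bucket_us_east1
# The service-account email below is a placeholder; HMAC keys are issued per service account.
gsutil hmac create ch-gcs-access@my-project.iam.gserviceaccount.com
```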

3 changes: 2 additions & 1 deletion docs/en/_snippets/_S3_authentication_and_bucket.md
@@ -1,5 +1,6 @@

<details><summary>Create S3 buckets and an IAM user</summary>
<details>
<summary>Create S3 buckets and an IAM user</summary>

This article demonstrates the basics of how to configure an AWS IAM user, create an S3 bucket and configure ClickHouse to use the bucket as an S3 disk. You should work with your security team to determine the permissions to be used, and consider these as a starting point.
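A hedged CLI sketch of those basics (the console steps in this snippet are collapsed in the diff), assuming the AWS CLI is configured and using placeholder names:

```bash
# Not from the snippet: bucket, IAM user, and access key via the AWS CLI (names are placeholders).
aws s3 mb s3://ch-example-bucket --region us-east-1
aws iam create-user --user-name clickhouse-s3-user
aws iam create-access-key --user-name clickhouse-s3-user   # returns the key pair for the ClickHouse S3 disk config
```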

3 changes: 2 additions & 1 deletion docs/en/_snippets/_add_remote_ip_access_list_detail.md
@@ -1,4 +1,5 @@
<details><summary>Manage your IP Access List</summary>
<details>
<summary>Manage your IP Access List</summary>

From your ClickHouse Cloud services list choose the service that you will work with and switch to **Security**. If the IP Access List does not contain the IP Address or range of the remote system that needs to connect to your ClickHouse Cloud service, then you can resolve the problem with **Add entry**:

3 changes: 2 additions & 1 deletion docs/en/_snippets/_add_superset_detail.md
@@ -1,4 +1,5 @@
<details><summary>Launch Apache Superset in Docker</summary>
<details>
<summary>Launch Apache Superset in Docker</summary>

Superset provides [installing Superset locally using Docker Compose](https://superset.apache.org/docs/installation/installing-superset-using-docker-compose/) instructions. After checking out the Apache Superset repo from GitHub you can run the latest development code, or a specific tag. We recommend release 2.0.0 as it is the latest release not marked as `pre-release`.
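A hedged sketch of that flow, assuming Docker and Docker Compose are installed; the compose file name follows the upstream instructions and may differ between releases:

```bash
# Not from the snippet: check out the recommended tag and start Superset via Docker Compose.
git clone https://github.com/apache/superset.git
cd superset
git checkout 2.0.0
docker compose -f docker-compose-non-dev.yml up
```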

3 changes: 2 additions & 1 deletion docs/en/_snippets/_check_ip_access_list_detail.md
@@ -1,4 +1,5 @@
<details><summary>Manage your IP Access List</summary>
<details>
<summary>Manage your IP Access List</summary>

From your ClickHouse Cloud services list choose the service that you will work with and switch to **Settings**.

3 changes: 2 additions & 1 deletion docs/en/_snippets/_launch_sql_console.md
@@ -2,7 +2,8 @@
If you need a SQL client connection, your ClickHouse Cloud service has an associated web based SQL console; expand **Connect to SQL console** below for details.
:::

<details><summary>Connect to SQL console</summary>
<details>
<summary>Connect to SQL console</summary>

From your ClickHouse Cloud services list, choose the service that you will work with and click **Connect**. From here you can **Open SQL console**:

4 changes: 2 additions & 2 deletions docs/en/cloud/manage/postman.md
@@ -61,15 +61,15 @@ The Postman Application is available for use within a web browser or can be down
* Under the organizationid folder, navigate to "GET organizational details":
* In the middle frame menu under Params an organizationid is required.
![Test retrieval of organization details](@site/docs/en/cloud/manage/images/postman/postman14.png)
* Edit this value with "orgid" in curly braces "{{orgid}}" (From setting this value earlier a menu will appear with the value):
* Edit this value with "orgid" in curly braces `{{orgid}}` (From setting this value earlier a menu will appear with the value):
![Submit test](@site/docs/en/cloud/manage/images/postman/postman15.png)
* After pressing the "Save" button, press the blue "Send" button at the top right of the screen.
![Return value](@site/docs/en/cloud/manage/images/postman/postman16.png)
* The returned results should deliver your organization details with "status": 200. (If you receive a "status" 400 with no organization information your configuration is not correct).

### Test "GET service details"
* Click "GET service details"
* Edit the Values for organizationid and serviceid with {{orgid}} and {{serviceid}} respectively.
* Edit the Values for organizationid and serviceid with `{{orgid}}` and `{{serviceid}}` respectively.
* Press "Save" and then the blue "Send" button on the right.
![List of services](@site/docs/en/cloud/manage/images/postman/postman17.png)
* The returned results should deliver a list of your services and their details with "status": 200. (If you receive a "status" 400 with no service(s) information your configuration is not correct).
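Outside Postman, the same two requests can be exercised with curl. This is a hedged sketch, not part of the commit: it assumes the API key ID/secret pair created earlier and that the endpoints mirror the request paths used above on `api.clickhouse.cloud`:

```bash
# Placeholders: substitute the IDs and API key pair configured in Postman.
ORG_ID="<organizationid>"
SERVICE_ID="<serviceid>"
KEY_ID="<key-id>"
KEY_SECRET="<key-secret>"

# "GET organizational details": expect HTTP 200 with the organization JSON.
curl --silent --user "$KEY_ID:$KEY_SECRET" \
  "https://api.clickhouse.cloud/v1/organizations/$ORG_ID"

# "GET service details": expect HTTP 200 with the service JSON.
curl --silent --user "$KEY_ID:$KEY_SECRET" \
  "https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID"
```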
4 changes: 2 additions & 2 deletions docs/en/cloud/security/accessing-s3-data-securely.md
@@ -73,7 +73,7 @@ This approach allows customers to manage all access to their S3 buckets in a sin

3 - Create a new IAM role with the following IAM & Trust policy.

Trust policy (Please replace {ClickHouse_IAM_ARN} with the IAM Role arn belong to your ClickHouse instance):
Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belong to your ClickHouse instance):

```json
{
@@ -90,7 +90,7 @@
}
```

IAM policy (Please replace {BUCKET_NAME} with your bucket name):
IAM policy (Please replace `{BUCKET_NAME}` with your bucket name):

```
{
14 changes: 7 additions & 7 deletions docs/en/cloud/security/saml-sso-setup.md
@@ -89,8 +89,8 @@ You will configure two App Integrations in Okta for each ClickHouse organization

| Field | Value |
|--------------------------------|-------|
| Single Sign On URL | https://auth.clickhouse.cloud/login/callback?connection={organizationid} |
| Audience URI (SP Entity ID) | urn:auth0:ch-production:{organizationid} |
| Single Sign On URL | `https://auth.clickhouse.cloud/login/callback?connection={organizationid}` |
| Audience URI (SP Entity ID) | `urn:auth0:ch-production:{organizationid}` |
| Default RelayState | Leave blank |
| Name ID format | Unspecified |
| Application username | Email |
@@ -147,8 +147,8 @@ You will configure one SAML app in Google for each organization and must provide

| Field | Value |
|-----------|-------|
| ACS URL | https://auth.clickhouse.cloud/login/callback?connection={organizationid} |
| Entity ID | urn:auth0:ch-production:{organizationid} |
| ACS URL | `https://auth.clickhouse.cloud/login/callback?connection={organizationid}` |
| Entity ID | `urn:auth0:ch-production:{organizationid}` |

8. Check the box for **Signed response**.

@@ -198,9 +198,9 @@ You will set up one application integration with a separate sign-on URL for each

| Field | Value |
|---------------------------|-------|
| Identifier (Entity ID) | urn:auth0:ch-production:{organizationid} |
| Reply URL (Assertion Consumer Service URL) | https://auth.clickhouse.cloud/login/callback?connection={organizationid} |
| Sign on URL | https://console.clickhouse.cloud?connection={organizationid} |
| Identifier (Entity ID) | `urn:auth0:ch-production:{organizationid}` |
| Reply URL (Assertion Consumer Service URL) | `https://auth.clickhouse.cloud/login/callback?connection={organizationid}` |
| Sign on URL | `https://console.clickhouse.cloud?connection={organizationid}` |
| Relay State | Blank |
| Logout URL | Blank |

47 changes: 28 additions & 19 deletions docs/en/guides/best-practices/sparse-primary-indexes.md
@@ -174,27 +174,36 @@ SETTINGS index_granularity = 8192, index_granularity_bytes = 0, compress_primary
</summary>
<p>

In order to simplify the discussions later on in this guide, as well as make the diagrams and results reproducible, the DDL statement
<ul>
<li>specifies a compound sorting key for the table via an `ORDER BY` clause</li>
<br/>
<li>explicitly controls how many index entries the primary index will have through the settings:</li>
<br/>
<ul>
<li>`index_granularity: explicitly set to its default value of 8192. This means that for each group of 8192 rows, the primary index will have one index entry, e.g. if the table contains 16384 rows then the index will have two index entries.
</li>
<br/>
<li>`index_granularity_bytes`: set to 0 in order to disable <a href="https://clickhouse.com/docs/en/whats-new/changelog/2019/#experimental-features-1" target="_blank">adaptive index granularity</a>. Adaptive index granularity means that ClickHouse automatically creates one index entry for a group of n rows if either of these are true:
In order to simplify the discussions later on in this guide, as well as make the diagrams and results reproducible, the DDL statement:

<ul>
<li>if n is less than 8192 and the size of the combined row data for that n rows is larger than or equal to 10 MB (the default value for index_granularity_bytes) or</li>
<li>if the combined row data size for n rows is less than 10 MB but n is 8192.</li>
</ul>
</li>
<br/>
<li>`compress_primary_key`: set to 0 to disable <a href="https://github.com/ClickHouse/ClickHouse/issues/34437" target="_blank">compression of the primary index</a>. This will allow us to optionally inspect its contents later.
</li>
</ul>
<li>
Specifies a compound sorting key for the table via an <code>ORDER BY</code> clause.
</li>
<li>
Explicitly controls how many index entries the primary index will have through the settings:
<ul>
<li>
<code>index_granularity</code>: explicitly set to its default value of 8192. This means that for each group of 8192 rows, the primary index will have one index entry. For example, if the table contains 16384 rows, the index will have two index entries.
</li>
<li>
<code>index_granularity_bytes</code>: set to 0 in order to disable <a href="https://clickhouse.com/docs/en/whats-new/changelog/2019/#experimental-features-1" target="_blank">adaptive index granularity</a>. Adaptive index granularity means that ClickHouse automatically creates one index entry for a group of n rows if either of these are true:
<ul>
<li>
If <code>n</code> is less than 8192 and the size of the combined row data for that <code>n</code> rows is larger than or equal to 10 MB (the default value for <code>index_granularity_bytes</code>).
</li>
<li>
If the combined row data size for <code>n</code> rows is less than 10 MB but <code>n</code> is 8192.
</li>
</ul>
</li>
<li>
<code>compress_primary_key</code>: set to 0 to disable <a href="https://github.com/ClickHouse/ClickHouse/issues/34437" target="_blank">compression of the primary index</a>. This will allow us to optionally inspect its contents later.
</li>
</ul>
</li>
</ul>

</p>
</details>

5 changes: 4 additions & 1 deletion docs/en/guides/sre/keeper/index.md
@@ -984,7 +984,8 @@ Example config for cluster:
</remote_servers>
```

### Procedures to set up tables to use {uuid}
### Procedures to set up tables to use `{uuid}`

1. Configure Macros on each server
example for server 1:
```xml
@@ -1018,6 +1019,7 @@ Query id: 07fb7e65-beb4-4c30-b3ef-bd303e5c42b5
```

3. Create a table on the cluster using the macros and `{uuid}`

```sql
CREATE TABLE db_uuid.uuid_table1 ON CLUSTER 'cluster_1S_2R'
(
@@ -1046,6 +1048,7 @@ Query id: 8f542664-4548-4a02-bd2a-6f2c973d0dc4
```

4. Create a distributed table

```sql
create table db_uuid.dist_uuid_table1 on cluster 'cluster_1S_2R'
(
4 changes: 2 additions & 2 deletions docs/en/guides/sre/user-management/configuring-ldap.md
@@ -61,7 +61,7 @@ ClickHouse can be configured to use LDAP to authenticate ClickHouse database use
|----------|------------------------------|---------------------|
|host |hostname or IP of LDAP server |ldap.forumsys.com |
|port |directory port for LDAP server|389 |
|bind_dn |template path to users |uid={user_name},dc=example,dc=com|
|bind_dn |template path to users |`uid={user_name},dc=example,dc=com`|
|enable_tls|whether to use secure ldap |no |
|tls_require_cert |whether to require certificate for connection|never|

@@ -103,7 +103,7 @@ ClickHouse can be configured to use LDAP to authenticate ClickHouse database use
|server |label defined in the prior ldap_servers section|test_ldap_server|
|roles |name of the roles defined in ClickHouse the users will be mapped to|scientists_role|
|base_dn |base path to start search for groups with user |dc=example,dc=com|
|search_filter|ldap search filter to identify groups to select for mapping users |(&amp;(objectClass=groupOfUniqueNames)(uniqueMember={bind_dn}))|
|search_filter|ldap search filter to identify groups to select for mapping users |`(&(objectClass=groupOfUniqueNames)(uniqueMember={bind_dn}))`|
|attribute |which attribute name should value be returned from|cn|
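
Before wiring these values into ClickHouse, the `bind_dn` template and `search_filter` can be sanity-checked from a shell. A hedged sketch using the OpenLDAP `ldapsearch` client against the public test directory referenced above; the `einstein`/`password` test account is an assumption, not part of this commit:

```bash
# Bind as a test user and list the groups the search_filter would match for them.
# The uid=einstein / "password" account on the public test server is assumed; adjust for your directory.
ldapsearch -x \
  -H ldap://ldap.forumsys.com:389 \
  -D "uid=einstein,dc=example,dc=com" -w password \
  -b "dc=example,dc=com" \
  "(&(objectClass=groupOfUniqueNames)(uniqueMember=uid=einstein,dc=example,dc=com))" cn
```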


@@ -40,7 +40,7 @@ Using this approach, customers can manage all access to their Kinesis data strea

3 - Create a new IAM role with the following IAM & Trust policy. Note that the name of the IAM role **must start with** `ClickHouseAccessRole-` for this to work.

Trust policy (Please replace {ClickHouse_IAM_ARN} with the IAM Role arn belong to your ClickHouse instance):
Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belong to your ClickHouse instance):

```json
{
@@ -57,7 +57,7 @@
}
```

IAM policy (Please replace {STREAM_NAME} with your kinesis stream name):
IAM policy (Please replace `{STREAM_NAME}` with your kinesis stream name):

```
{
@@ -343,7 +343,7 @@ Query id: b0729816-3917-44d3-8d1a-fed912fb59ce
This integration guide focused on a simple example on how to replicate a database with a table, however, there exist more advanced options which include replicating the whole database or adding new tables and schemas to the existing replications. Although DDL commands are not supported for this replication, the engine can be set to detect changes and reload the tables when there are structural changes made.

:::info
For more features available for advanced options, please see the reference documentation: <https://clickhouse.com/docs/en/engines/database-engines/materialized-postgresql/>
For more features available for advanced options, please see the [reference documentation](/docs/en/engines/database-engines/materialized-postgresql).
:::


@@ -35,7 +35,7 @@ The following table shows the equivalent ClickHouse data types for Postgres.
| CIDR | [String](/en/sql-reference/data-types/string) |
| HSTORE | [Map(K, V)](/en/sql-reference/data-types/map), [Map](/en/sql-reference/data-types/map)(K,[Variant](/en/sql-reference/data-types/variant)) |
| UUID | [UUID](/en/sql-reference/data-types/uuid) |
| ARRAY<T\> | [ARRAY(T)](/en/sql-reference/data-types/array) |
| ARRAY&lt;T\> | [ARRAY(T)](/en/sql-reference/data-types/array) |
| JSON* | [String](/en/sql-reference/data-types/string), [Variant](/en/sql-reference/data-types/variant), [Nested](/en/sql-reference/data-types/nested-data-structures/nested#nestedname1-type1-name2-type2-), [Tuple](/en/sql-reference/data-types/tuple) |
| JSONB | [String](/en/sql-reference/data-types/string) |

4 changes: 2 additions & 2 deletions docs/en/integrations/data-ingestion/etl-tools/dbt/index.md
@@ -510,7 +510,7 @@ To illustrate this example, we will add the actor "Clicky McClickHouse", who wil
1. First, we modify our model to be of type incremental. This addition requires:
1. **unique_key** - To ensure the plugin can uniquely identify rows, we must provide a unique_key - in this case, the `id` field from our query will suffice. This ensures we will have no row duplicates in our materialized table. For more details on uniqueness constraints, see[ here](https://docs.getdbt.com/docs/building-a-dbt-project/building-models/configuring-incremental-models#defining-a-uniqueness-constraint-optional).
2. **Incremental filter** - We also need to tell dbt how it should identify which rows have changed on an incremental run. This is achieved by providing a delta expression. Typically this involves a timestamp for event data; hence our updated_at timestamp field. This column, which defaults to the value of now() when rows are inserted, allows new roles to be identified. Additionally, we need to identify the alternative case where new actors are added. Using the {{this}} variable, to denote the existing materialized table, this gives us the expression `where id > (select max(id) from {{ this }}) or updated_at > (select max(updated_at) from {{this}})`. We embed this inside the `{% if is_incremental() %}` condition, ensuring it is only used on incremental runs and not when the table is first constructed. For more details on filtering rows for incremental models, see [this discussion in the dbt docs](https://docs.getdbt.com/docs/building-a-dbt-project/building-models/configuring-incremental-models#filtering-rows-on-an-incremental-run).
2. **Incremental filter** - We also need to tell dbt how it should identify which rows have changed on an incremental run. This is achieved by providing a delta expression. Typically this involves a timestamp for event data; hence our updated_at timestamp field. This column, which defaults to the value of now() when rows are inserted, allows new roles to be identified. Additionally, we need to identify the alternative case where new actors are added. Using the `{{this}}` variable, to denote the existing materialized table, this gives us the expression `where id > (select max(id) from {{ this }}) or updated_at > (select max(updated_at) from {{this}})`. We embed this inside the `{% if is_incremental() %}` condition, ensuring it is only used on incremental runs and not when the table is first constructed. For more details on filtering rows for incremental models, see [this discussion in the dbt docs](https://docs.getdbt.com/docs/building-a-dbt-project/building-models/configuring-incremental-models#filtering-rows-on-an-incremental-run).
Update the file `actor_summary.sql` as follows:
@@ -826,7 +826,7 @@ This process is shown below:
### insert_overwrite mode (Experimental)
Performs the following steps:
1. Create a staging (temporary) table with the same structure as the incremental model relation: CREATE TABLE {staging} AS {target}.
1. Create a staging (temporary) table with the same structure as the incremental model relation: `CREATE TABLE {staging} AS {target}`.
2. Insert only new records (produced by SELECT) into the staging table.
3. Replace only new partitions (present in the staging table) into the target table.