Merge pull request #2881 from ClickHouse/issue_fix_quotes
move to " quotes
gingerwizard authored Dec 10, 2024
2 parents 630cbcb + d080e10 commit b8e296e
Showing 50 changed files with 159 additions and 159 deletions.
4 changes: 2 additions & 2 deletions docs/en/about-us/distinctive-features.md
@@ -9,7 +9,7 @@ description: Understand what makes ClickHouse stand apart from other database ma

## True Column-Oriented Database Management System

- In a real column-oriented DBMS, no extra data is stored with the values. This means that constant-length values must be supported to avoid storing their length number next to the values. For example, a billion UInt8-type values should consume around 1 GB uncompressed, or this strongly affects the CPU use. It is essential to store data compactly (without any garbage) even when uncompressed since the speed of decompression (CPU usage) depends mainly on the volume of uncompressed data.
+ In a real column-oriented DBMS, no extra data is stored with the values. This means that constant-length values must be supported to avoid storing their length "number" next to the values. For example, a billion UInt8-type values should consume around 1 GB uncompressed, or this strongly affects the CPU use. It is essential to store data compactly (without any "garbage") even when uncompressed since the speed of decompression (CPU usage) depends mainly on the volume of uncompressed data.
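The sizing claim above is simple arithmetic, sketched here (treating 1 GB as 10^9 bytes):

```python
# Each UInt8 value occupies exactly 1 byte; a true column-oriented
# layout stores no per-value length prefix or other overhead.
values = 1_000_000_000
bytes_per_value = 1  # UInt8
total_bytes = values * bytes_per_value
print(total_bytes / 10**9)  # → 1.0 (GB, decimal)
```

Any per-value overhead (even one extra length byte) would double this footprint and, with it, the volume of data the CPU must decompress and scan.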

This is in contrast to systems that can store values of different columns separately, but that cannot effectively process analytical queries due to their optimization for other scenarios, such as HBase, BigTable, Cassandra, and HyperTable. You would get throughput around a hundred thousand rows per second in these systems, but not hundreds of millions of rows per second.

@@ -63,7 +63,7 @@ Unlike other database management systems, secondary indexes in ClickHouse do not

## Suitable for Online Queries {#suitable-for-online-queries}

- Most OLAP database management systems do not aim for online queries with sub-second latencies. In alternative systems, report building time of tens of seconds or even minutes is often considered acceptable. Sometimes it takes even more time, which forces systems to prepare reports offline (in advance or by responding with come back later).
+ Most OLAP database management systems do not aim for online queries with sub-second latencies. In alternative systems, report building time of tens of seconds or even minutes is often considered acceptable. Sometimes it takes even more time, which forces systems to prepare reports offline (in advance or by responding with "come back later").

In ClickHouse "low latency" means that queries can be processed without delay and without trying to prepare an answer in advance, right at the same moment as the user interface page is loading. In other words, online.

2 changes: 1 addition & 1 deletion docs/en/architecture/cluster-deployment.md
@@ -18,7 +18,7 @@ This ClickHouse cluster will be a homogenous cluster. Here are the steps:
3. Create local tables on each instance
4. Create a [Distributed table](../engines/table-engines/special/distributed.md)

- A [distributed table](../engines/table-engines/special/distributed.md) is a kind of view to the local tables in a ClickHouse cluster. A SELECT query from a distributed table executes using resources of all cluster’s shards. You may specify configs for multiple clusters and create multiple distributed tables to provide views for different clusters.
+ A [distributed table](../engines/table-engines/special/distributed.md) is a kind of "view" to the local tables in a ClickHouse cluster. A SELECT query from a distributed table executes using resources of all cluster’s shards. You may specify configs for multiple clusters and create multiple distributed tables to provide views for different clusters.
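The "view over local tables" idea can be sketched conceptually (this is an illustration of the fan-out/merge pattern, not ClickHouse internals):

```python
# Conceptual sketch: a distributed table holds no data itself. A SELECT
# against it runs on every shard's local table and merges the partial
# results, so each shard scans only its own rows.
local_shards = [
    [("alice", 3)],             # shard 1's local table
    [("bob", 5), ("carol", 2)]  # shard 2's local table
]

def select_from_distributed(shards):
    # Fan the query out to each shard, then combine the partial results.
    results = []
    for shard in shards:
        results.extend(shard)
    return results

print(select_from_distributed(local_shards))
# → [('alice', 3), ('bob', 5), ('carol', 2)]
```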

Here is an example config for a cluster with three shards, with one replica each:

2 changes: 1 addition & 1 deletion docs/en/cloud/manage/backups.md
@@ -15,7 +15,7 @@ Database backups provide a safety net by ensuring that if data is lost for any u

## How backups work in ClickHouse Cloud

- ClickHouse Cloud backups are a combination of full and incremental backups that constitute a backup chain. The chain starts with a full backup, and incremental backups are then taken over the next several scheduled time periods to create a sequence of backups. Once a backup chain reaches a certain length, a new chain is started. This entire chain of backups can then be utilized to restore data to a new service if needed. Once all backups included in a specific chain are past the retention timeframe set for the service (more on retention below), the chain is discarded.
+ ClickHouse Cloud backups are a combination of "full" and "incremental" backups that constitute a backup chain. The chain starts with a full backup, and incremental backups are then taken over the next several scheduled time periods to create a sequence of backups. Once a backup chain reaches a certain length, a new chain is started. This entire chain of backups can then be utilized to restore data to a new service if needed. Once all backups included in a specific chain are past the retention timeframe set for the service (more on retention below), the chain is discarded.
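The chain mechanics described above can be modeled in a few lines (a toy illustration under assumed chain-length and retention values, not ClickHouse Cloud's actual implementation):

```python
# Toy model of backup chains: a full backup starts each chain, followed
# by incrementals until the chain reaches its maximum length; backups
# outside the retention window are no longer restorable.
CHAIN_LENGTH = 4       # assumed: backups per chain before a new chain starts
RETENTION_PERIODS = 2  # assumed: how many periods a backup is retained

def backup_schedule(periods):
    """Return (period, kind, chain_id) for each scheduled backup."""
    schedule = []
    for t in range(periods):
        chain_id = t // CHAIN_LENGTH
        kind = "full" if t % CHAIN_LENGTH == 0 else "incremental"
        schedule.append((t, kind, chain_id))
    return schedule

def visible_backups(schedule, now):
    # A backup is visible (restorable) while inside the retention window.
    return [b for b in schedule if now - b[0] < RETENTION_PERIODS]

for backup in backup_schedule(6):
    print(backup)
```

Once every backup in a chain has aged past `RETENTION_PERIODS`, nothing in that chain is visible any longer, which is the point at which the whole chain can be discarded.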

In the screenshot below, the solid line squares show full backups and the dotted line squares show incremental backups. The solid line rectangle around the squares denotes the retention period and the backups that are visible to the end user, which can be used for a backup restore. In the scenario below, backups are being taken every 24 hours and are retained for 2 days.

32 changes: 16 additions & 16 deletions docs/en/cloud/manage/postman.md
@@ -16,27 +16,27 @@ The Postman Application is available for use within a web browser or can be down
![Create workspace](@site/docs/en/cloud/manage/images/postman/postman2.png)

### Create a Collection
- * Below Explore on the top left Menu click Import:
+ * Below "Explore" on the top left Menu click "Import":
![Explore > Import](@site/docs/en/cloud/manage/images/postman/postman3.png)

* A modal will appear:
![API URL entry](@site/docs/en/cloud/manage/images/postman/postman4.png)

- * Enter the API address: https://api.clickhouse.cloud/v1 and press 'Enter':
+ * Enter the API address: "https://api.clickhouse.cloud/v1" and press 'Enter':
![Import](@site/docs/en/cloud/manage/images/postman/postman5.png)

- * Select Postman Collection by clicking on the Import button:
+ * Select "Postman Collection" by clicking on the "Import" button:
![Collection > Import](@site/docs/en/cloud/manage/images/postman/postman6.png)

### Interface with the ClickHouse Cloud API spec
- * The API spec for ClickHouse Cloud will now appear within Collections (Left Navigation).
+ * The "API spec for ClickHouse Cloud" will now appear within "Collections" (Left Navigation).
![Import your API](@site/docs/en/cloud/manage/images/postman/postman7.png)

- * Click on API spec for ClickHouse Cloud. From the middle pane select the ‘Authorization’ tab:
+ * Click on "API spec for ClickHouse Cloud." From the middle pane select the ‘Authorization’ tab:
![Import complete](@site/docs/en/cloud/manage/images/postman/postman8.png)

### Set Authorization
- * Toggle the dropdown menu to select Basic Auth:
+ * Toggle the dropdown menu to select "Basic Auth":
![Basic auth](@site/docs/en/cloud/manage/images/postman/postman9.png)

* Enter the Username and Password received when you set up your ClickHouse Cloud API keys:
@@ -45,32 +45,32 @@ The Postman Application is available for use within a web browser or can be down
### Enable Variables
* [Variables](https://learning.postman.com/docs/sending-requests/variables/) enable the storage and reuse of values in Postman allowing for easier API testing.
#### Set the Organization ID and Service ID
- * Within the Collection, click the Variable tab in the middle pane (The Base URL will have been set by the earlier API import):
- * Below baseURL click the open field Add new value, and Substitute your organization ID and service ID:
+ * Within the "Collection", click the "Variable" tab in the middle pane (The Base URL will have been set by the earlier API import):
+ * Below "baseURL" click the open field "Add new value", and Substitute your organization ID and service ID:
![Organization ID and Service ID](@site/docs/en/cloud/manage/images/postman/postman11.png)
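What Postman does here with variables and Basic auth can be sketched in plain Python (the IDs and key values below are placeholders, not values from this document):

```python
import base64

# Postman-style {{name}} variable substitution plus a Basic auth header.
variables = {
    "baseUrl": "https://api.clickhouse.cloud/v1",
    "orgid": "my-org-id",  # placeholder for your organization ID
}

def resolve(template, variables):
    # Replace each {{name}} placeholder with its stored value.
    for name, value in variables.items():
        template = template.replace("{{%s}}" % name, value)
    return template

url = resolve("{{baseUrl}}/organizations/{{orgid}}", variables)
print(url)  # https://api.clickhouse.cloud/v1/organizations/my-org-id

# Basic auth is the API key ID and secret, colon-joined and base64-encoded.
key_id, key_secret = "KEY_ID", "KEY_SECRET"  # from your Cloud API key
token = base64.b64encode(f"{key_id}:{key_secret}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
```

A 200 response to a GET on this URL returns your organization details; a 400 with no organization data indicates the variables or credentials are misconfigured, mirroring the checks described below.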

## Test the ClickHouse Cloud API functionalities
### Test "GET list of available organizations"
- * Under the OpenAPI spec for ClickHouse Cloud, expand the folder > V1 > organizations
- * Click GET list of available organizations and press the blue "Send" button on the right:
+ * Under the "OpenAPI spec for ClickHouse Cloud", expand the folder > V1 > organizations
+ * Click "GET list of available organizations" and press the blue "Send" button on the right:
![Test retrieval of organizations](@site/docs/en/cloud/manage/images/postman/postman12.png)
- * The returned results should deliver your organization details with status: 200. (If you receive a status 400 with no organization information your configuration is not correct).
+ * The returned results should deliver your organization details with "status": 200. (If you receive a "status" 400 with no organization information your configuration is not correct).
![Status](@site/docs/en/cloud/manage/images/postman/postman13.png)

### Test "GET organizational details"
- * Under the organizationid folder, navigate to GET organizational details:
+ * Under the organizationid folder, navigate to "GET organizational details":
* In the middle frame menu under Params an organizationid is required.
![Test retrieval of organization details](@site/docs/en/cloud/manage/images/postman/postman14.png)
* Edit this value with "orgid" in curly braces "{{orgid}}" (From setting this value earlier a menu will appear with the value):
![Submit test](@site/docs/en/cloud/manage/images/postman/postman15.png)
* After pressing the "Save" button, press the blue "Send" button at the top right of the screen.
![Return value](@site/docs/en/cloud/manage/images/postman/postman16.png)
- * The returned results should deliver your organization details with status: 200. (If you receive a status 400 with no organization information your configuration is not correct).
+ * The returned results should deliver your organization details with "status": 200. (If you receive a "status" 400 with no organization information your configuration is not correct).

### Test "GET service details"
- * Click GET service details
+ * Click "GET service details"
* Edit the Values for organizationid and serviceid with {{orgid}} and {{serviceid}} respectively.
- * Press Save and then the blue Send button on the right.
+ * Press "Save" and then the blue "Send" button on the right.
![List of services](@site/docs/en/cloud/manage/images/postman/postman17.png)
- * The returned results should deliver a list of your services and their details with status: 200. (If you receive a status 400 with no service(s) information your configuration is not correct).
+ * The returned results should deliver a list of your services and their details with "status": 200. (If you receive a "status" 400 with no service(s) information your configuration is not correct).

16 changes: 8 additions & 8 deletions docs/en/cloud/reference/changelog.md
@@ -323,7 +323,7 @@ Backups are important for every database (no matter how reliable), and we've tak

### Create APIs from your SQL queries (Beta)

- When you write a SQL query for ClickHouse, you still need to connect to ClickHouse via a driver to expose your query to your application. Now with our new **Query Endpoints** feature, you can execute SQL queries directly from an API without any configuration. You can specify the query endpoints to return JSON, CSV, or TSVs. Click the Share button in the cloud console to try this new feature with your queries. Read more about Query Endpoints [here](https://clickhouse.com/blog/automatic-query-endpoints).
+ When you write a SQL query for ClickHouse, you still need to connect to ClickHouse via a driver to expose your query to your application. Now with our new **Query Endpoints** feature, you can execute SQL queries directly from an API without any configuration. You can specify the query endpoints to return JSON, CSV, or TSVs. Click the "Share" button in the cloud console to try this new feature with your queries. Read more about Query Endpoints [here](https://clickhouse.com/blog/automatic-query-endpoints).

<img alt="Configure query endpoints" style={{width: '450px', marginLeft: 0}} src={require('./images/may-17-query-endpoints.png').default} />

@@ -335,7 +335,7 @@ There are 12 free training modules in ClickHouse Develop training course. Prior

### Load data from S3 and GCS using ClickPipes

- You may have noticed in our newly released cloud console that there’s a new section called Data sources. The Data sources page is powered by ClickPipes, a native ClickHouse Cloud feature which lets you easily insert data from a variety of sources into ClickHouse Cloud.
+ You may have noticed in our newly released cloud console that there’s a new section called "Data sources". The "Data sources" page is powered by ClickPipes, a native ClickHouse Cloud feature which lets you easily insert data from a variety of sources into ClickHouse Cloud.

Our most recent ClickPipes update features the ability to directly upload data directly from Amazon S3 and Google Cloud Storage. While you can still use our built-in table functions, ClickPipes is a fully-managed service via our UI that will let you ingest data from S3 and GCS in just a few clicks. This feature is still in Private Preview, but you can try it out today via the cloud console.

@@ -399,7 +399,7 @@ This release introduces support for Microsoft Azure, Horizontal Scaling via API,

### General updates
- Introduced support for Microsoft Azure in Private Preview. To gain access, please reach out to account management or support, or join the [waitlist](https://clickhouse.com/cloud/azure-waitlist).
- - Introduced Release Channels – the ability to specify the timing of upgrades based on environment type. In this release, we added the fast release channel, which enables you to upgrade your non-production environments ahead of production (please contact support to enable).
+ - Introduced Release Channels – the ability to specify the timing of upgrades based on environment type. In this release, we added the "fast" release channel, which enables you to upgrade your non-production environments ahead of production (please contact support to enable).

### Administration changes
- Added support for horizontal scaling configuration via API (private preview, please contact support to enable)
@@ -421,7 +421,7 @@ This release introduces support for Microsoft Azure, Horizontal Scaling via API,
- ClickHouse Python Client: [Added support](https://github.com/ClickHouse/clickhouse-connect/issues/155) for query streaming via PyArrow (community contribution)

### Security updates
- - Updated ClickHouse Cloud to prevent [Role-based Access Control is bypassed when query caching is enabled](https://github.com/ClickHouse/ClickHouse/security/advisories/GHSA-45h5-f7g3-gr8r) (CVE-2024-22412)
+ - Updated ClickHouse Cloud to prevent ["Role-based Access Control is bypassed when query caching is enabled"](https://github.com/ClickHouse/ClickHouse/security/advisories/GHSA-45h5-f7g3-gr8r) (CVE-2024-22412)

## March 14, 2024

@@ -439,7 +439,7 @@ This release makes available in early access the new Cloud Console experience, C

### Integrations changes
- Grafana: Fixed dashboard migration for v4, ad-hoc filtering logic
- - Tableau Connector: Fixed DATENAME function and rounding for real arguments
+ - Tableau Connector: Fixed DATENAME function and rounding for "real" arguments
- Kafka Connector: Fixed NPE in connection initialization, added ability to specify JDBC driver options
- Golang client: Reduced the memory footprint for handling responses, fixed Date32 extreme values, fixed error reporting when compression is enabled
- Python client: Improved timezone support in datetime parameters, improved performance for Pandas DataFrame
@@ -673,14 +673,14 @@ This release brings the beta release of the PowerBI Desktop official connector,

## Aug 24, 2023

- This release adds support for the MySQL interface to the ClickHouse database, introduces a new official PowerBI connector, adds a new Running Queries view in the cloud console, and updates the ClickHouse version to 23.7.
+ This release adds support for the MySQL interface to the ClickHouse database, introduces a new official PowerBI connector, adds a new "Running Queries" view in the cloud console, and updates the ClickHouse version to 23.7.

### General updates
- Added support for the [MySQL wire protocol](https://clickhouse.com/docs/en/interfaces/mysql), which (among other use cases) enables compatibility with many existing BI tools. Please reach out to support to enable this feature for your organization.
- Introduced a new official PowerBI connector

### Console changes
- - Added support for Running Queries view in SQL Console
+ - Added support for "Running Queries" view in SQL Console

### ClickHouse 23.7 version upgrade
- Added support for Azure Table function, promoted geo datatypes to production-ready, and improved join performance - see 23.5 release [blog](https://clickhouse.com/blog/clickhouse-release-23-05) for details
@@ -753,7 +753,7 @@ This release makes ClickHouse Cloud on GCP generally available, brings a Terrafo
- Improved caching while processing large inserts

### Administration changes
- - Expanded local dictionary creation for non default users
+ - Expanded local dictionary creation for non "default" users

## May 30, 2023

