@@ -331,7 +331,7 @@ WHERE id = 'abc123-def456-789'
```

It is also possible to take incremental backups.
- For more detail on backups in general, the reader is referred to the documentation for [backup and restore](/operations/backup).
+ For more detail on backups in general, the reader is referred to the documentation for [backup and restore](/operations/backup/overview).
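
As an illustration, here is a minimal sketch of taking a full backup and then an incremental one on top of it, assuming a backup destination disk named `backups` is configured; the database, table, and file names are hypothetical:

```sql
-- Full (base) backup of a table to the assumed 'backups' disk.
BACKUP TABLE db.events TO Disk('backups', 'events_base.zip');

-- Later: an incremental backup that stores only changes made since
-- the base backup, referenced via the base_backup setting.
BACKUP TABLE db.events TO Disk('backups', 'events_incr.zip')
    SETTINGS base_backup = Disk('backups', 'events_base.zip');
```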

## Restore to ClickHouse Cloud {#restore-to-clickhouse-cloud}

@@ -45,7 +45,7 @@ This approach works well for cases where inserts contain different data. However
For `INSERT ... VALUES` queries, splitting the inserted data into blocks is deterministic and is determined by settings. Therefore, you should retry insertions with the same settings values as the initial operation.
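
For example, here is a sketch of a retried `INSERT ... VALUES` under these rules; the table, columns, and setting value are hypothetical:

```sql
-- First attempt: the acknowledgement is lost, so the client retries.
INSERT INTO events SETTINGS max_insert_block_size = 1048576
VALUES (1, 'login'), (2, 'click');

-- Retry with the same settings and the same data: block splitting is
-- deterministic, so blocks that were already written are deduplicated.
INSERT INTO events SETTINGS max_insert_block_size = 1048576
VALUES (1, 'login'), (2, 'click');
```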

For `INSERT ... SELECT` queries, it's important that the `SELECT` part of the query returns the same data in the same order for each operation. Note that this is hard to achieve in practice. To ensure a stable data order on retries, add an `ORDER BY ALL` clause to the `SELECT` part of the query. Right now you must use exactly `ORDER BY ALL`: support for arbitrary `ORDER BY` clauses is not implemented yet, and with them the `SELECT` part of the query is not considered stable. Keep in mind that the selected table could be updated between retries; in that case the result data could change and deduplication will not occur. Additionally, when you insert large amounts of data, the number of blocks after inserts can overflow the deduplication log window, and ClickHouse won't know to deduplicate the blocks.
- Right now, the behavior for `INSERT ... SELECT` is controlled by the [`insert_select_deduplicate`](/operations/settings/settings/#insert_select_deduplicate) setting. This setting determines whether deduplication is applied to data inserted using `INSERT ... SELECT` queries. See the linked documentation for details and usage examples.
+ Right now, the behavior for `INSERT ... SELECT` is controlled by the `insert_select_deduplicate` setting. This setting determines whether deduplication is applied to data inserted using `INSERT ... SELECT` queries. See the linked documentation for details and usage examples.
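
Putting this together, here is a sketch of a retry-safe `INSERT ... SELECT`, assuming `src` is not modified between attempts; the table and column names are hypothetical:

```sql
-- Enable deduplication for INSERT ... SELECT (the setting described above).
SET insert_select_deduplicate = 1;

-- ORDER BY ALL fixes the row order, so a retry forms the same blocks
-- and blocks that were already written are deduplicated.
INSERT INTO dst
SELECT id, value
FROM src
ORDER BY ALL;
```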

## Insert deduplication with materialized views {#insert-deduplication-with-materialized-views}

6 changes: 3 additions & 3 deletions docs/integrations/data-ingestion/clickpipes/kafka/index.md
@@ -15,9 +15,9 @@ integration:
<!--AUTOGENERATED_START-->
| Page | Description |
|-----|-----|
- | [Creating your first Kafka ClickPipe](/integrations/clickpipes/kafka/create-your-first-kafka-clickpipe) | Step-by-step guide to creating your first Kafka ClickPipe. |
- | [Schema registries for Kafka ClickPipe](/integrations/clickpipes/kafka/schema-registries) | How to integrate ClickPipes with a schema registry for schema management |
| [Reference](/integrations/clickpipes/kafka/reference) | Details supported formats, sources, delivery semantics, authentication and experimental features supported by Kafka ClickPipes |
- | [Best practices](/integrations/clickpipes/kafka/best-practices) | Details best practices to follow when working with Kafka ClickPipes |
+ | [Schema registries for Kafka ClickPipe](/integrations/clickpipes/kafka/schema-registries) | How to integrate ClickPipes with a schema registry for schema management |
+ | [Creating your first Kafka ClickPipe](/integrations/clickpipes/kafka/create-your-first-kafka-clickpipe) | Step-by-step guide to creating your first Kafka ClickPipe. |
| [Kafka ClickPipes FAQ](/integrations/clickpipes/kafka/faq) | Frequently asked questions about ClickPipes for Kafka |
+ | [Best practices](/integrations/clickpipes/kafka/best-practices) | Details best practices to follow when working with Kafka ClickPipes |
<!--AUTOGENERATED_END-->
@@ -47,7 +47,7 @@
## Configure network access {#configure-network-access}

:::note
- ClickPipes does not support Azure Private Link connections. If you do not allow public access to your Azure Flexible Server for MySQL instance, you can [use an SSH tunnel](#configure-network-security) to connect securely. Azure Private Link will be supported in the future.
+ ClickPipes does not support Azure Private Link connections. If you do not allow public access to your Azure Flexible Server for MySQL instance, you can [use an SSH tunnel](/integrations/clickpipes/mysql/source/azure-flexible-server-mysql#configure-network-access) to connect securely. Azure Private Link will be supported in the future.

:::
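
The SSH tunnel mentioned in the note above is standard local port forwarding. As a hedged sketch, with all host, user, and port values hypothetical, this is how you could verify from your own machine that a bastion host can reach the private instance:

```bash
# Forward local port 3306 through a bastion host that can reach the
# private MySQL instance (all names here are placeholders).
ssh -f -N -L 3306:your-server.mysql.database.azure.com:3306 \
    azureuser@bastion.example.com

# Test connectivity through the tunnel.
mysql --host 127.0.0.1 --port 3306 --user clickpipes_user --password
```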

Next, you must allow connections to your Azure Flexible Server for MySQL instance from ClickPipes.
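
One way to allow those connections is with the Azure CLI. This is a sketch with hypothetical resource names; replace the IP range with the ClickPipes egress addresses for your ClickHouse Cloud region:

```bash
# Hypothetical resource names; substitute the documented ClickPipes
# egress IP addresses for your region.
az mysql flexible-server firewall-rule create \
  --resource-group my-resource-group \
  --name my-flexible-server \
  --rule-name allow-clickpipes \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```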
@@ -43,9 +43,9 @@ ClickHouse has a huge number of functions that can be used for data analysis —
- **`HEX([my_string])`** *(added in v0.2.1)* — Returns a string containing the argument's hexadecimal representation. Equivalent of [`hex()`](/sql-reference/functions/encoding-functions/#hex).
- **`KURTOSIS([my_number])`** — Computes the sample kurtosis of a sequence. Equivalent of [`kurtSamp()`](/sql-reference/aggregate-functions/reference/kurtsamp).
- **`KURTOSISP([my_number])`** — Computes the kurtosis of a sequence. The equivalent of [`kurtPop()`](/sql-reference/aggregate-functions/reference/kurtpop).
- - **`MEDIAN_EXACT([my_number])`** *(added in v0.1.3)* — Exactly computes the median of a numeric data sequence. Equivalent of [`quantileExact(0.5)(...)`](/sql-reference/aggregate-functions/reference/quantileexact/#quantileexact).
+ - **`MEDIAN_EXACT([my_number])`** *(added in v0.1.3)* — Exactly computes the median of a numeric data sequence. Equivalent of [`quantileExact(0.5)(...)`](/sql-reference/aggregate-functions/reference/quantileexact).
- **`MOD([my_number_1], [my_number_2])`** — Calculates the remainder after division. If arguments are floating-point numbers, they are pre-converted to integers by dropping the decimal portion. Equivalent of [`modulo()`](/sql-reference/functions/arithmetic-functions/#modulo).
- - **`PERCENTILE_EXACT([my_number], [level_float])`** *(added in v0.1.3)* — Exactly computes the percentile of a numeric data sequence. The recommended level range is [0.01, 0.99]. Equivalent of [`quantileExact()()`](/sql-reference/aggregate-functions/reference/quantileexact/#quantileexact).
+ - **`PERCENTILE_EXACT([my_number], [level_float])`** *(added in v0.1.3)* — Exactly computes the percentile of a numeric data sequence. The recommended level range is [0.01, 0.99]. Equivalent of [`quantileExact()()`](/sql-reference/aggregate-functions/reference/quantileexact).
- **`PROPER([my_string])`** *(added in v0.2.5)* - Converts a text string so the first letter of each word is capitalized and the remaining letters are in lowercase. Spaces and non-alphanumeric characters such as punctuation also act as separators. For example:
```text
PROPER("PRODUCT name") => "Product Name"
2 changes: 1 addition & 1 deletion scripts/settings/autogenerate-settings.sh
@@ -11,7 +11,7 @@ tmp_dir="$target_dir/scripts/tmp"
GENERATE_SETTINGS=true # Set to true to enable settings documentation generation
GENERATE_FUNCTIONS=true # Set to true to enable regular function documentation generation
GENERATE_SYSTEM_TABLES=true # Set to true to enable system tables documentation generation
- GENERATE_AGGREGATE_FUNCTIONS=false # Set to true to enable aggregate function generation
+ GENERATE_AGGREGATE_FUNCTIONS=true # Set to true to enable aggregate function generation
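
# A sketch, not part of this script, of how such a flag is typically
# consumed further down, gating the corresponding generation step:
#
#   if [ "$GENERATE_AGGREGATE_FUNCTIONS" = true ]; then
#       generate_aggregate_function_docs  # hypothetical helper
#   fi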

# --- Parse Command Line Arguments ---
CUSTOM_BINARY=""