Commit 53a6af8

refine syntax consistency (#522)

1 parent b312eea

13 files changed: +79 / -79 lines changed

docs/alert.md
Lines changed: 5 additions & 5 deletions

@@ -11,11 +11,11 @@ Alerts are often used in combination with [Scheduled Tasks](/task) and [Material
 ## Create Alert

 ```sql
-CREATE ALERT [IF NOT EXISTS] <db.alert-name>
+CREATE ALERT [IF NOT EXISTS] <db.alert_name>
 BATCH <N> EVENTS WITH TIMEOUT <interval>
 LIMIT <M> ALERTS PER <interval>
-CALL <python-udf-name>
-AS <streaming-select-query>;
+CALL <python_udf_name>
+AS <streaming_select_query>;
 ```

 :::info
@@ -126,11 +126,11 @@ SHOW ALERTS [FROM db] [SETTINGS verbose=true]
 ## Show Alert

 ```sql
-SHOW CREATE ALERT <db.alert-name> [SETTINGS show_multi_versions=true]
+SHOW CREATE ALERT <db.alert_name> [SETTINGS show_multi_versions=true]
 ```

 ## Drop Alert

 ```sql
-DROP ALERT <db.alert-name>
+DROP ALERT <db.alert_name>
 ```
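
For readers skimming the diff, here is a filled-in instance of the updated template. The database, stream, and UDF names (`default.cpu_alert`, `metrics`, `notify_ops`) are illustrative, not taken from the changed files:

```sql
-- Hypothetical alert: batch up to 10 matching events (or flush after 5s),
-- rate-limit to 1 alert per minute, and hand batches to a Python UDF.
CREATE ALERT IF NOT EXISTS default.cpu_alert
BATCH 10 EVENTS WITH TIMEOUT 5s
LIMIT 1 ALERTS PER 1m
CALL notify_ops
AS SELECT host, cpu FROM metrics WHERE cpu > 90;
```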

docs/append-stream-codecs.md
Lines changed: 1 addition & 1 deletion

@@ -101,7 +101,7 @@ CREATE STREAM mystream
 ORDER BY x;
 ```

-:::note
+:::info
 If compression needs to be applied, it must be explicitly specified. Otherwise, only encryption will be applied to data.
 :::
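
As a reminder of what explicitly specified compression looks like, a minimal sketch follows; it assumes the ClickHouse-style per-column `compression_codec` clause shown in the Append Stream template, and the stream and column names are made up:

```sql
-- Illustrative only: request ZSTD compression explicitly on the payload column.
CREATE STREAM IF NOT EXISTS default.compressed_events
(
  id uint64,
  payload string CODEC(ZSTD(1))
)
ORDER BY id;
```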

docs/append-stream-ttl.md
Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ Expired data is removed only during the merge process, which runs asynchronously
 To accelerate this cleanup, you can manually trigger a merge by running the `OPTIMIZE` command. This attempts to start an unscheduled merge of data parts for a stream:

 ```sql
-OPTIMIZE STREAM <db.stream-name>;
+OPTIMIZE STREAM <db.stream_name>;
 ```

 ## Examples
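
Filling in the renamed placeholder with an illustrative stream name:

```sql
-- Manually trigger an unscheduled merge so expired rows are cleaned up sooner.
OPTIMIZE STREAM default.web_events;
```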

docs/append-stream.md
Lines changed: 19 additions & 19 deletions

@@ -5,13 +5,13 @@ An **Append Stream** in Timeplus is best understood as a **streaming ClickHouse
 ## Create Append Stream

 ```sql
-CREATE STREAM [IF NOT EXISTS] <db.stream-name>
+CREATE STREAM [IF NOT EXISTS] <db.stream_name>
 (
-name1 [type1] [DEFAULT | ALIAS expr1] [COMMENT 'column-comment'] [compression_codec],
-name2 [type2] [DEFAULT | ALIAS expr2] [COMMENT 'column-comment'] [compression_codec],
+name1 [type1] [DEFAULT | ALIAS expr1] [COMMENT 'column_comment'] [compression_codec],
+name2 [type2] [DEFAULT | ALIAS expr2] [COMMENT 'column_comment'] [compression_codec],
 ...
-INDEX index-name1 expr1 TYPE type1(...) [GRANULARITY value1],
-INDEX index-name2 expr2 TYPE type2(...) [GRANULARITY value1],
+INDEX index_name1 expr1 TYPE type1(...) [GRANULARITY value1],
+INDEX index_name2 expr2 TYPE type2(...) [GRANULARITY value1],
 ...
 )
 ORDER BY <expression>
@@ -23,25 +23,25 @@ ORDER BY <expression>
 ]
 COMMENT '<stream-comment>'
 SETTINGS
-shards=<num-of-shards>,
-replication_factor=<replication-factor>,
+shards=<num_of_shards>,
+replication_factor=<replication_factor>,
 mode=['append'|'changelog_kv'|'versioned_kv'],
-version_column=<version-column>,
+version_column=<version_column>,
 storage_type=['hybrid'|'streaming'|'inmemory'],
 logstore_codec=['lz4'|'zstd'|'none'],
-logstore_retention_bytes=<retention-bytes>,
-logstore_retention_ms=<retention-ms>,
-placement_policies='<placement-policies>',
-shared_disk='<shared-disk>',
+logstore_retention_bytes=<retention_bytes>,
+logstore_retention_ms=<retention_ms>,
+placement_policies='<placement_policies>',
+shared_disk='<shared_disk>',
 ingest_mode=['async'|'sync'],
 ack=['quorum'|'local'|'none'],
-ingest_batch_max_bytes=<batch-bytes>,
-ingest_batch_timeout_ms=<batch-timeout>,
-fetch_threads=<remote-fetch-threads>,
-flush_threshold_count=<batch-flush-rows>,
-flush_threshold_ms=<batch-flush-timeout>,
-flush_threshold_bytes=<batch-flush-size>,
-merge_with_ttl_timeout=<timeout-in-seconds>;
+ingest_batch_max_bytes=<batch_bytes>,
+ingest_batch_timeout_ms=<batch_timeout>,
+fetch_threads=<remote_fetch_threads>,
+flush_threshold_count=<batch_flush_rows>,
+flush_threshold_ms=<batch_flush_timeout>,
+flush_threshold_bytes=<batch_flush_size>,
+merge_with_ttl_timeout=<timeout_in_seconds>;
 ```

 ### Storage Architecture
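
To make the renamed placeholders concrete, here is a small hypothetical instance of the template above; only a handful of the available settings are shown, and every name is invented for illustration:

```sql
CREATE STREAM IF NOT EXISTS default.device_metrics
(
  device_id string,
  temperature float64 COMMENT 'reading in celsius',
  recorded_at datetime64(3)
)
ORDER BY device_id
SETTINGS
shards=1,
replication_factor=1,
logstore_retention_ms=86400000;
```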

docs/materialized-view-checkpoint.md
Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ AS
 SELECT
 window_start AS win_start,
 s,
-SUM(i)
+sum(i)
 FROM tumble(source, 5s)
 GROUP BY window_start, s;
 ```

docs/materialized-view-lifecycle.md
Lines changed: 6 additions & 6 deletions

@@ -29,13 +29,13 @@ Timeplus provides system commands to manage Materialized Views.
 ### Pause Materialized View

 ```sql
-SYSTEM PAUSE MATERIALIZED VIEW <db.mat-view-name> [PERMANENT];
+SYSTEM PAUSE MATERIALIZED VIEW <db.mat_view_name> [PERMANENT];
 ```

 When pausing a Materialized View:

 1. The leader triggers a checkpoint.
-2. The leader stops the query pipeline and marks the state as Paused.
+2. The leader stops the query pipeline and marks the state as `Paused`.
 3. If `PERMANENT` is specified, the system updates `pause_on_start=true` in the DDL metadata and commits it to the metastore.
    - This ensures the view remains paused even after node restarts.
    - Without `PERMANENT`, the view will resume automatically on restart.
@@ -48,7 +48,7 @@ SYSTEM PAUSE MATERIALIZED VIEW tumble_aggr_mv;
 ### Resume Materialized View

 ```sql
-SYSTEM RESUME MATERIALIZED VIEW <db.mat-view-name> [PERMANENT];
+SYSTEM RESUME MATERIALIZED VIEW <db.mat_view_name> [PERMANENT];
 ```

 When resuming a view:
@@ -70,7 +70,7 @@ Aborting is similar to pausing, but:
 - The `pause_on_start` setting is not modified in the DDL metadata.

 ```sql
-SYSTEM ABORT MATERIALIZED VIEW <db.mat-view-name>;
+SYSTEM ABORT MATERIALIZED VIEW <db.mat_view_name>;
 ```

 **Example**:
@@ -87,7 +87,7 @@ Used to recover views in the `Error` state. Recovery involves:
 3. Transitioning to ExecutingPipeline.

 ```sql
-SYSTEM RECOVER MATERIALIZED VIEW <db.mat-view-name>;
+SYSTEM RECOVER MATERIALIZED VIEW <db.mat_view_name>;
 ```

 **Example**:
@@ -100,7 +100,7 @@ SYSTEM RECOVER MATERIALIZED VIEW tumble_aggr_mv;
 For Materialized Views governed by Raft, you can transfer leadership to another replica. This helps balance workload or mitigate temporary issues.

 ```sql
-SYSTEM TRANSFER LEADER <db.mat-view-name> <mat-view-shard-id> FROM <leader-node> TO <follower-node>;
+SYSTEM TRANSFER LEADER <db.mat_view_name> <mat_view_shard_id> FROM <leader_node> TO <follower_node>;
 ```

 :::info
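
Putting the renamed placeholders together with the `tumble_aggr_mv` example already used in this page, a typical pause/resume cycle would look roughly like this:

```sql
-- Pause and keep it paused across restarts (PERMANENT sets pause_on_start=true).
SYSTEM PAUSE MATERIALIZED VIEW tumble_aggr_mv PERMANENT;

-- Later, resume processing.
SYSTEM RESUME MATERIALIZED VIEW tumble_aggr_mv;
```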

docs/materialized-view.md
Lines changed: 5 additions & 5 deletions

@@ -44,8 +44,8 @@ you can build **complex, end-to-end data processing pipelines** with Materialize
 ## Create Materialized View

 ```sql
-CREATE MATERIALIZED VIEW [IF NOT EXISTS] <db.mat-view-name>
-[INTO <db.target-stream-or-table>]
+CREATE MATERIALIZED VIEW [IF NOT EXISTS] <db.mat_view_name>
+[INTO <db.target_stream_or_table>]
 AS
 <SELECT ...>
 [SETTINGS
@@ -320,7 +320,7 @@ DROP VIEW [IF EXISTS] db.<view_name>;
 You can modify the following Materialized View query settings using:

 ```sql
-ALTER VIEW <db.mat-view-name> MODIFY QUERY SETTING <key>=<value>, <key>=<value>, ...;
+ALTER VIEW <db.mat_view_name> MODIFY QUERY SETTING <key>=<value>, <key>=<value>, ...;
 ```

 Supported settings changes include:
@@ -341,7 +341,7 @@ ALTER VIEW tumble_aggr_mv MODIFY QUERY SETTING checkpoint_interval=-1, enable_dl
 You can update the comment associated with a Materialized View:

 ```sql
-ALTER VIEW <db.mat-view-name> MODIFY COMMENT '<new-comments>';
+ALTER VIEW <db.mat_view_name> MODIFY COMMENT '<new-comments>';
 ```

 **Example:**
@@ -420,5 +420,5 @@ GROUP BY
 window_start, s
 SETTINGS
 recovery_policy='best_effort', -- Use 'best_effort' recovery policy
-input_format_ignore_parsing_errors=true; -- Skip parsing errors for better resiliency
+input_format_ignore_parsing_errors=true; -- Skip parsing errors for better resiliency
 ```
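
Continuing the `tumble_aggr_mv` example used above, the two `ALTER VIEW` forms would be filled in along these lines (the setting value and comment text are illustrative):

```sql
-- Adjust a query setting on an existing Materialized View.
ALTER VIEW tumble_aggr_mv MODIFY QUERY SETTING checkpoint_interval=300;

-- Update its comment.
ALTER VIEW tumble_aggr_mv MODIFY COMMENT '5-second tumbling aggregation of source';
```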

docs/mutable-stream-indexes.md
Lines changed: 4 additions & 4 deletions

@@ -121,7 +121,7 @@ When you do this, the system scans all existing data in the background to build
 This process is **asynchronous**, so the index becomes usable only after the build completes.

 ```sql
-ALTER STREAM <db.mutable-stream-name> ADD INDEX <index-name> (<columns>);
+ALTER STREAM <db.mutable_stream_name> ADD INDEX <index_name> (<columns>);
 ```

 **Example**:
@@ -137,7 +137,7 @@ Until the new secondary index is fully built, using it to accelerate a historica
 ## Drop Secondary Index

 ```sql
-ALTER STREAM <db.mutable-stream-name> DROP INDEX <index-name>;
+ALTER STREAM <db.mutable_stream_name> DROP INDEX <index_name>;
 ```

 **Example**:
@@ -157,7 +157,7 @@ When executing a query against a Mutable Stream, **Timeplus automatically select
 You can manually hint which secondary index to use with the query setting:

 ```sql
-SETTINGS use_index='<secondary-idx-name>'
+SETTINGS use_index='<secondary_idx_name>'
 ```

 Timeplus will still validate whether the chosen index is applicable.
@@ -206,7 +206,7 @@ The rebuild process runs **asynchronously** in the background.

 ```sql
 -- Clear the secondary index and then rebuild it
-ALTER STREAM <db.mutable-stream-name> MATERIALIZE INDEX <secondary-index-name> WITH CLEAR;
+ALTER STREAM <db.mutable_stream_name> MATERIALIZE INDEX <secondary_index_name> WITH CLEAR;
 ```

 **Example**:
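
For illustration (the stream `default.orders` and its `status` column are hypothetical), adding an index and then hinting it in a historical query would look like:

```sql
-- Build a secondary index asynchronously in the background.
ALTER STREAM default.orders ADD INDEX idx_status (status);

-- Once built, hint the optimizer to use it.
SELECT count() FROM table(default.orders)
WHERE status = 'shipped'
SETTINGS use_index='idx_status';
```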

docs/mutable-stream-ttl.md
Lines changed: 5 additions & 5 deletions

@@ -20,7 +20,7 @@ If a row has not been updated for longer than `ttl_seconds`, it becomes **eligib
 ```sql
 CREATE MUTABLE STREAM ...
 SETTINGS
-ttl_seconds=<ttl-seconds>, ...
+ttl_seconds=<ttl_seconds>, ...
 ```

 ### Example
@@ -52,8 +52,8 @@ The specified **TTL column** represents the event timestamp of each row. During
 ```sql
 CREATE MUTABLE STREAM ...
 SETTINGS
-ttl_seconds=<ttl-seconds>,
-ttl_column=<ttl-column>, ...
+ttl_seconds=<ttl_seconds>,
+ttl_column=<ttl_column>, ...
 ```

 ### Example
@@ -85,8 +85,8 @@ You can control how frequently these compactions run by setting `periodic_compac
 ```
 CREATE MUTABLE STREAM ...
 SETTINGS
-ttl_seconds = <ttl-seconds>,
-ttl_column = <ttl-column>,
+ttl_seconds = <ttl_seconds>,
+ttl_column = <ttl_column>,
 kvstore_options = 'periodic_compaction_seconds=1800;';
 ```
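
A concrete, invented instance of the TTL-with-column form above; the stream and column names are illustrative only:

```sql
-- Rows not updated for one day, judged by updated_at, become eligible for TTL cleanup.
CREATE MUTABLE STREAM device_state
(
  device_id string,
  temperature float64,
  updated_at datetime64(3)
)
PRIMARY KEY (device_id)
SETTINGS
ttl_seconds=86400,
ttl_column=updated_at;
```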

docs/mutable-stream.md
Lines changed: 23 additions & 23 deletions

@@ -18,43 +18,43 @@ For more details on the motivation behind Mutable Streams, see [this blog post](
 ```sql
 CREATE MUTABLE STREAM [IF NOT EXISTS] <db.mutable-stream-name>
 (
-name1 [type1] [DEFAULT | ALIAS expr1] [COMMENT 'column-comment'],
-name2 [type2] [DEFAULT | ALIAS expr1] [COMMENT 'column-comment'],
+name1 [type1] [DEFAULT | ALIAS expr1] [COMMENT 'column_comment'],
+name2 [type2] [DEFAULT | ALIAS expr1] [COMMENT 'column_comment'],
 ...
 <column definitions>,
-INDEX <secondary-index-name1> (column, ...) [UNIQUE] STORING (column, ...),
-INDEX <secondary-index-name2> (column, ...) [UNIQUE] STORING (column, ...),
+INDEX <secondary_index_name1> (column, ...) [UNIQUE] STORING (column, ...),
+INDEX <secondary_index_name2> (column, ...) [UNIQUE] STORING (column, ...),
 ...
-FAMILY <column-family-name1> (column, ...),
-FAMILY <column-family-name2> (column, ...),
+FAMILY <column_family_name1> (column, ...),
+FAMILY <column_family_name2> (column, ...),
 ...
 )
 PRIMARY KEY (column, ...)
-COMMENT '<stream-comment>'
+COMMENT '<stream_comment>'
 SETTINGS
-shards=<num-of-shards>,
-replication_factor=<replication-factor>,
-version_column=<version-column>,
+shards=<num_of_shards>,
+replication_factor=<replication_factor>,
+version_column=<version_column>,
 coalesced=[true|false],
 logstore_codec=['lz4'|'zstd'|'none'],
-logstore_retention_bytes=<retention-bytes>,
-logstore_retention_ms=<retention-ms>,
-ttl_seconds=<ttl-seconds>,
-ttl_column=<ttl-column>,
+logstore_retention_bytes=<retention_bytes>,
+logstore_retention_ms=<retention_ms>,
+ttl_seconds=<ttl_seconds>,
+ttl_column=<ttl_column>,
 auto_cf=[true|false],
-placement_policies='<placement-policies>',
+placement_policies='<placement_policies>',
 late_insert_overrides=[true|false],
-shared_disk='<shared-disk>',
+shared_disk='<shared_disk>',
 ingest_mode=['async'|'sync'],
 ack=['quorum'|'local'|'none'],
-ingest_batch_max_bytes=<batch-bytes>,
-ingest_batch_timeout_ms=<batch-timeout>,
-fetch_threads=<remote-fetch-threads>,
-flush_rows=<batch-flush-rows>,
-flush_ms=<batch-flush-timeout>,
+ingest_batch_max_bytes=<batch_bytes>,
+ingest_batch_timeout_ms=<batch_timeout>,
+fetch_threads=<remote_fetch_threads>,
+flush_rows=<batch_flush_rows>,
+flush_ms=<batch_flush_timeout>,
 log_kvstore=[true|false],
 kvstore_codec=['snappy'|'lz4'|'zstd'],
-kvstore_options='<kvstore-options>',
+kvstore_options='<kvstore_options>',
 enable_hash_index=[true|false],
 enable_statistics=[true|false];
 ```
@@ -348,7 +348,7 @@ Using column families can slow down ingestion speed, since each family is intern
 You can delete rows from a Mutable Stream using the `DELETE` statement:

 ```sql
-DELETE FROM <db.mutable-stream-name> WHERE <predicates>;
+DELETE FROM <db.mutable_stream_name> WHERE <predicates>;
 ```

 - If the `WHERE` predicates can leverage the primary index or a secondary index, the delete operation will be fast.
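
As a closing illustration of the delete form (the stream and key value are made up), a primary-key predicate keeps the operation fast:

```sql
-- Fast path: the predicate hits the primary key.
DELETE FROM default.device_state WHERE device_id = 'sensor-42';
```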
