**docs/automq-kafka-source.md** (1 addition, 1 deletion)

@@ -28,5 +28,5 @@ Click **Next**. Timeplus will connect to the server and list all topics. Choose
In the next step, confirm the schema of the Timeplus stream and specify a name. At the end of the wizard, an external stream will be created in Timeplus. You can query data or even write data to the AutoMQ topic with SQL.
See also:
-* [Kafka External Stream](/proton-kafka)
+* [Kafka External Stream](/kafka-source)
* [Tutorial: Streaming ETL from Kafka to ClickHouse](/tutorial-sql-etl-kafka-to-ch)
Leveraging the HTTP external stream, you can write or materialize data to BigQuery directly from Timeplus.

## Write to BigQuery {#example-write-to-bigquery}
Assume you have created a table in BigQuery with two columns:

```sql
create table `PROJECT.DATASET.http_sink_t1`(
  num int,
  str string);
```
Follow [the guide](https://cloud.google.com/bigquery/docs/authentication) to choose the proper authentication method for Google Cloud, such as the gcloud CLI command `gcloud auth application-default print-access-token`.
Replace `OAUTH_TOKEN` with the output of `gcloud auth application-default print-access-token`, or obtain an OAuth token in another secure way. Replace `PROJECT`, `DATASET` and `TABLE` to match your BigQuery table path. Also change `format_template_row_format` to match the table schema.
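
The `CREATE EXTERNAL STREAM` DDL that these placeholders belong to is not shown in this diff. Purely as a hypothetical sketch, assuming the BigQuery `tabledata.insertAll` REST endpoint and settings along the lines of the other HTTP examples in this changeset (the `url`, `http_header_Authorization`, and `format_template_*` names here are assumptions, not confirmed syntax):

```sql
-- Hypothetical sketch only; verify setting names against the Timeplus docs.
CREATE EXTERNAL STREAM http_bigquery_t1 (
  num int,
  str string
) SETTINGS
  type = 'http',
  -- BigQuery streaming-insert endpoint; replace PROJECT/DATASET/TABLE
  url = 'https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT/datasets/DATASET/tables/TABLE/insertAll',
  -- token from `gcloud auth application-default print-access-token`
  http_header_Authorization = 'Bearer OAUTH_TOKEN',
  data_format = 'Template',
  -- wrap all rows of one request into the insertAll body
  format_template_resultset_format = '{"rows":[${data}]}',
  format_template_rows_between_delimiter = ',',
  -- one JSON object per row; must match the table schema
  format_template_row_format = '{"json":{"num":${num:JSON},"str":${str:JSON}}}'
```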
Then you can insert data via a materialized view or directly via the `INSERT` command:

```sql
INSERT INTO http_bigquery_t1 VALUES(10,'A'),(11,'B');
```
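
To continuously materialize query results into BigQuery instead, a sketch with a hypothetical source stream `events` (same `num`/`str` columns) might look like:

```sql
-- Hypothetical source stream; every result row is written to the BigQuery table.
CREATE MATERIALIZED VIEW mv_to_bigquery INTO http_bigquery_t1 AS
SELECT num, str FROM events;
```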

**docs/cli-migrate.md** (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ This tool is available in Timeplus Enterprise 2.5. It supports [Timeplus Enterpr
## How It Works
-The migration is done by capturing the SQL DDL from the source deployment and rerunning it in the target deployment. Data is read from the source Timeplus via [Timeplus External Streams](/timeplus-external-stream) and written to the target Timeplus via `INSERT INTO .. SELECT .. FROM table(tp_ext_stream)`. The data files won't be copied between the source and target Timeplus, but you need to ensure the target Timeplus can access the source Timeplus, so that it can read data via Timeplus External Streams.
+The migration is done by capturing the SQL DDL from the source deployment and rerunning it in the target deployment. Data is read from the source Timeplus via [Timeplus External Streams](/timeplus-source) and written to the target Timeplus via `INSERT INTO .. SELECT .. FROM table(tp_ext_stream)`. The data files won't be copied between the source and target Timeplus, but you need to ensure the target Timeplus can access the source Timeplus, so that it can read data via Timeplus External Streams.
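
As an illustrative sketch of that flow, with hypothetical stream names and assumed external stream settings:

```sql
-- Hypothetical: an external stream pointing at the "orders" stream on the source deployment.
CREATE EXTERNAL STREAM tp_ext_stream
SETTINGS type = 'timeplus', hosts = 'source-host:8463', stream = 'orders';

-- After rerunning the captured DDL on the target, backfill historical data:
INSERT INTO orders SELECT * FROM table(tp_ext_stream);
```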

**docs/clickhouse-external-table.md** (3 additions, 1 deletion)

@@ -1,5 +1,7 @@
# ClickHouse External Table
+## Overview
Timeplus can read or write ClickHouse tables directly. This unlocks a set of new use cases, such as
- Use Timeplus to efficiently process real-time data in Kafka/Redpanda, apply flat transformation or stateful aggregation, then write the data to the local or remote ClickHouse for further analysis or visualization.
@@ -41,7 +43,7 @@ The required settings are type and address. For other settings, the default valu
The `config_file` setting is available since Timeplus Enterprise 2.7. You can specify the path to a file that contains the configuration settings. The file should be in the format of `key=value` pairs, one pair per line. You can set the ClickHouse user and password in the file.
-Please follow the example in [Kafka External Stream](/proton-kafka#config_file).
+Please follow the example in [Kafka External Stream](/kafka-source#config_file).
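
As a sketch, such a file could contain entries like the following (the key names here are assumptions; the linked example is authoritative):

```
user=clickhouse_user
password=clickhouse_password
```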
You don't need to specify the columns, since the table schema will be fetched from the ClickHouse server.

**docs/connect-data-in.md** (6 additions, 8 deletions)

@@ -1,9 +1,9 @@
-# Getting Data In
+# Connect Data In
Timeplus supports multiple ways to load data into the system, or to access external data without copying it into Timeplus:
- [External Stream for Apache Kafka](/external-stream), Confluent, Redpanda, and other Kafka API compatible data streaming platforms. This feature is also available in Timeplus Proton.
-- [External Stream for Apache Pulsar](/pulsar-external-stream) is available in Timeplus Enterprise 2.5 and above.
+- [External Stream for Apache Pulsar](/pulsar-source) is available in Timeplus Enterprise 2.5 and above.
- Source, for a wide range of additional data sources. This is only available in Timeplus Enterprise. It integrates with [Redpanda Connect](https://redpanda.com/connect), supporting 200+ connectors.
- On the Timeplus web console, you can also [upload CSV files](#csv) and import them into streams.
- For Timeplus Enterprise, a [REST API](/ingest-api) and SDKs are provided to push data to Timeplus programmatically.
@@ -15,12 +15,12 @@ Timeplus supports multiple ways to load data into the system, or access the exte
Choose "Data Collection" from the navigation menu to set up data access to other systems. There are two categories:
* Timeplus Connect: directly supported by Timeplus Inc., with easy-to-use setup wizards.
* Demo Stream: generate random data for various use cases. [Learn more](#streamgen)
-* Timeplus: read data from another Timeplus deployment. [Learn more](/timeplus-external-stream)
+* Timeplus: read data from another Timeplus deployment. [Learn more](/timeplus-source)
* Apache Kafka: set up external streams to read from Apache Kafka. [Learn more](#kafka)
* Confluent Cloud: set up external streams to read from Confluent Cloud
* Redpanda: set up external streams to read from Redpanda
* Apache Pulsar: set up external streams to read from Apache Pulsar. [Learn more](#pulsar)
-* ClickHouse: set up external tables to read from ClickHouse, without duplicating data in Timeplus. [Learn more](/proton-clickhouse-external-table)
+* ClickHouse: set up external tables to read from ClickHouse, without duplicating data in Timeplus. [Learn more](/clickhouse-external-table)
* NATS: load data from NATS to Timeplus streams
* WebSocket: load data from WebSocket to Timeplus streams
* HTTP Stream: load data from HTTP stream to Timeplus streams
@@ -29,19 +29,17 @@ Choose "Data Collection" from the navigation menu to setup data access to other
* Stream Ingestion: a wizard that guides you through pushing data to Timeplus via the Ingest REST API. [Learn more](/ingest-api)
* Redpanda Connect: available since Timeplus Enterprise 2.5. Set up data access to other systems by editing a YAML file. Powered by Redpanda Connect, supported by Redpanda Data Inc. or the Redpanda Community.
### Load streaming data from Apache Kafka {#kafka}
As of today, Kafka is the primary data integration for Timeplus. Through our strong partnership with Confluent, you can load your real-time data from Confluent Cloud, Confluent Platform, or Apache Kafka into the Timeplus streaming engine. You can also create [external streams](/external-stream) to analyze data in Confluent/Kafka/Redpanda without moving data.
-[Learn more.](/proton-kafka)
+[Learn more.](/kafka-source)
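
For a quick illustration, an external stream like the following (the broker address, topic, and stream name are made up for the example) lets you query a Kafka topic in place:

```sql
-- Reads the Kafka topic directly; no data is copied into Timeplus.
CREATE EXTERNAL STREAM frontend_events (raw string)
SETTINGS type = 'kafka',
         brokers = 'localhost:9092',
         topic = 'frontend_events';

-- Then run streaming SQL over it, e.g.:
-- SELECT raw FROM frontend_events;
```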
### Load streaming data from Apache Pulsar {#pulsar}
Apache® Pulsar™ is a cloud-native, distributed, open source messaging and streaming platform for real-time workloads. Since Timeplus Enterprise 2.5, Pulsar external streams can be created to read data from or write data to Pulsar.
Replace `TOKEN`, `HOST`, and `WAREHOUSE_ID` to match your Databricks settings. Also change `format_template_row_format` to match the table schema.
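
The external stream DDL these placeholders belong to is not shown here. Purely as a hypothetical sketch, assuming the Databricks SQL Statement Execution API and setting names along the lines of the other HTTP examples (none of this is confirmed syntax):

```sql
-- Hypothetical sketch only; verify setting names against the Timeplus docs.
CREATE EXTERNAL STREAM http_databricks_t1 (
  product string,
  quantity int
) SETTINGS
  type = 'http',
  -- Databricks SQL Statement Execution API
  url = 'https://HOST/api/2.0/sql/statements/',
  http_header_Authorization = 'Bearer TOKEN',
  data_format = 'Template',
  -- each row becomes one statement-execution request; Raw skips escaping (illustration only)
  format_template_row_format = '{"warehouse_id": "WAREHOUSE_ID", "statement": "INSERT INTO t1 (product, quantity) VALUES (\'${product:Raw}\', ${quantity:JSON})"}'
```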
Then you can insert data via a materialized view or directly via the `INSERT` command:

```sql
INSERT INTO http_databricks_t1(product, quantity) VALUES('test',95);
```
This will insert one row per request. We plan to support batch inserts and a Databricks-specific format to support different table schemas in the future.
Leveraging the HTTP external stream, you can write data to Elasticsearch or OpenSearch directly from Timeplus.

## Write to OpenSearch / Elasticsearch {#example-write-to-es}
Assuming you have created an index `students` in a deployment of OpenSearch or Elasticsearch, you can create the following external stream to write data to the index.
```sql
CREATE EXTERNAL STREAM opensearch_t1 (
  name string,
  gpa float32,
  grad_year int16
) SETTINGS
  type = 'http',
  data_format = 'OpenSearch', -- can also use the alias "ElasticSearch"
  url = 'https://localhost:9200/students/_bulk' -- assumed example endpoint for the `students` index
```
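
A minimal usage sketch, assuming the stream above, with made-up sample values:

```sql
-- Each inserted row is indexed into "students".
INSERT INTO opensearch_t1(name, gpa, grad_year) VALUES('Alice', 3.9, 2026);
```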