From aaa34241fa54bde362b082c747cfda7e6e861628 Mon Sep 17 00:00:00 2001 From: Leto_b Date: Thu, 27 Feb 2025 17:30:52 +0800 Subject: [PATCH] add chapter number in tree doc-en --- .../Tree/API/Programming-CSharp-Native-API.md | 36 ++++----- .../Tree/API/Programming-Cpp-Native-API.md | 28 +++---- .../Tree/API/Programming-Data-Subscription.md | 6 +- .../Tree/API/Programming-Go-Native-API.md | 4 +- .../Master/Tree/API/Programming-JDBC.md | 8 +- .../Tree/API/Programming-Java-Native-API.md | 6 +- .../Master/Tree/API/Programming-Kafka.md | 8 +- .../Master/Tree/API/Programming-MQTT.md | 8 +- .../Tree/API/Programming-NodeJS-Native-API.md | 8 +- .../Master/Tree/API/Programming-ODBC.md | 8 +- .../Tree/API/Programming-OPC-UA_timecho.md | 28 +++---- .../Tree/API/Programming-Python-Native-API.md | 80 +++++++++---------- .../Tree/API/Programming-Rust-Native-API.md | 8 +- .../Master/Tree/API/RestServiceV1.md | 16 ++-- .../Master/Tree/API/RestServiceV2.md | 18 ++--- .../Cluster-Concept_apache.md | 18 ++--- .../Cluster-Concept_timecho.md | 20 ++--- .../Data-Model-and-Terminology_apache.md | 6 +- .../Data-Model-and-Terminology_timecho.md | 6 +- .../Tree/Background-knowledge/Data-Type.md | 12 +-- .../Navigating_Time_Series_Data.md | 10 +-- .../Basic-Concept/Operate-Metadata_apache.md | 66 +++++++-------- .../Basic-Concept/Operate-Metadata_timecho.md | 70 ++++++++-------- .../Master/Tree/Basic-Concept/Query-Data.md | 72 ++++++++--------- .../Tree/Basic-Concept/Write-Delete-Data.md | 28 +++---- .../AINode_Deployment_apache.md | 48 +++++------ .../AINode_Deployment_timecho.md | 38 ++++----- .../Cluster-Deployment_apache.md | 24 +++--- .../Cluster-Deployment_timecho.md | 34 ++++---- .../Database-Resources.md | 14 ++-- .../Deployment-form_apache.md | 6 +- .../Deployment-form_timecho.md | 8 +- .../Docker-Deployment_apache.md | 36 ++++----- .../Docker-Deployment_timecho.md | 46 +++++------ .../Dual-Active-Deployment_timecho.md | 16 ++-- .../Environment-Requirements.md | 16 ++-- .../IoTDB-Package_apache.md | 4 +- .../IoTDB-Package_timecho.md | 4 +- .../Monitoring-panel-deployment.md | 22 ++--- .../Stand-Alone-Deployment_apache.md | 16 ++-- .../Stand-Alone-Deployment_timecho.md | 24 +++--- .../workbench-deployment_timecho.md | 14 ++-- .../Tree/FAQ/Frequently-asked-questions.md | 30 +++---- .../IoTDB-Introduction_apache.md | 6 +- .../IoTDB-Introduction_timecho.md | 16 ++-- .../Tree/IoTDB-Introduction/Scenario.md | 24 +++--- .../Tree/QuickStart/QuickStart_apache.md | 10 +-- .../Tree/QuickStart/QuickStart_timecho.md | 10 +-- .../Cluster-data-partitioning.md | 14 ++-- .../Encoding-and-Compression.md | 10 +-- .../API/Programming-CSharp-Native-API.md | 36 ++++----- .../latest/API/Programming-Cpp-Native-API.md | 28 +++---- .../API/Programming-Data-Subscription.md | 6 +- .../latest/API/Programming-Go-Native-API.md | 4 +- src/UserGuide/latest/API/Programming-JDBC.md | 8 +- .../latest/API/Programming-Java-Native-API.md | 6 +- src/UserGuide/latest/API/Programming-Kafka.md | 8 +- src/UserGuide/latest/API/Programming-MQTT.md | 8 +- .../API/Programming-NodeJS-Native-API.md | 8 +- src/UserGuide/latest/API/Programming-ODBC.md | 8 +- .../latest/API/Programming-OPC-UA_timecho.md | 28 +++---- .../API/Programming-Python-Native-API.md | 80 +++++++++---------- .../latest/API/Programming-Rust-Native-API.md | 8 +- src/UserGuide/latest/API/RestServiceV1.md | 16 ++-- src/UserGuide/latest/API/RestServiceV2.md | 18 ++--- .../Cluster-Concept_apache.md | 18 ++--- .../Cluster-Concept_timecho.md | 20 ++--- .../Data-Model-and-Terminology_apache.md | 6 
+- .../Data-Model-and-Terminology_timecho.md | 6 +- .../latest/Background-knowledge/Data-Type.md | 12 +-- .../Navigating_Time_Series_Data.md | 10 +-- .../Basic-Concept/Operate-Metadata_apache.md | 66 +++++++-------- .../Basic-Concept/Operate-Metadata_timecho.md | 70 ++++++++-------- .../latest/Basic-Concept/Query-Data.md | 72 ++++++++--------- .../latest/Basic-Concept/Write-Delete-Data.md | 28 +++---- .../AINode_Deployment_apache.md | 48 +++++------ .../AINode_Deployment_timecho.md | 38 ++++----- .../Cluster-Deployment_apache.md | 24 +++--- .../Cluster-Deployment_timecho.md | 34 ++++---- .../Database-Resources.md | 14 ++-- .../Deployment-form_apache.md | 6 +- .../Deployment-form_timecho.md | 8 +- .../Docker-Deployment_apache.md | 36 ++++----- .../Docker-Deployment_timecho.md | 46 +++++------ .../Dual-Active-Deployment_timecho.md | 16 ++-- .../Environment-Requirements.md | 16 ++-- .../IoTDB-Package_apache.md | 4 +- .../IoTDB-Package_timecho.md | 4 +- .../Monitoring-panel-deployment.md | 22 ++--- .../Stand-Alone-Deployment_apache.md | 16 ++-- .../Stand-Alone-Deployment_timecho.md | 24 +++--- .../workbench-deployment_timecho.md | 14 ++-- .../latest/FAQ/Frequently-asked-questions.md | 30 +++---- .../IoTDB-Introduction_apache.md | 6 +- .../IoTDB-Introduction_timecho.md | 16 ++-- .../latest/IoTDB-Introduction/Scenario.md | 24 +++--- .../latest/QuickStart/QuickStart_apache.md | 10 +-- .../latest/QuickStart/QuickStart_timecho.md | 10 +-- .../Cluster-data-partitioning.md | 14 ++-- .../Encoding-and-Compression.md | 10 +-- 100 files changed, 1068 insertions(+), 1068 deletions(-) diff --git a/src/UserGuide/Master/Tree/API/Programming-CSharp-Native-API.md b/src/UserGuide/Master/Tree/API/Programming-CSharp-Native-API.md index a4f208f7c..2a85b3c32 100644 --- a/src/UserGuide/Master/Tree/API/Programming-CSharp-Native-API.md +++ b/src/UserGuide/Master/Tree/API/Programming-CSharp-Native-API.md @@ -21,9 +21,9 @@ # C# Native API -## Installation +## 1. Installation -### Install from NuGet Package +### 1.1 Install from NuGet Package We have prepared Nuget Package for C# users. Users can directly install the client through .NET CLI. [The link of our NuGet Package is here](https://www.nuget.org/packages/Apache.IoTDB/). Run the following command in the command line to complete installation @@ -33,18 +33,18 @@ dotnet add package Apache.IoTDB Note that the `Apache.IoTDB` package only supports versions greater than `.net framework 4.6.1`. -## Prerequisites +## 2. Prerequisites .NET SDK Version >= 5.0 .NET Framework >= 4.6.1 -## How to Use the Client (Quick Start) +## 3. How to Use the Client (Quick Start) Users can quickly get started by referring to the use cases under the Apache-IoTDB-Client-CSharp-UserCase directory. These use cases serve as a useful resource for getting familiar with the client's functionality and capabilities. For those who wish to delve deeper into the client's usage and explore more advanced features, the samples directory contains additional code samples. -## Developer environment requirements for iotdb-client-csharp +## 4. Developer environment requirements for iotdb-client-csharp ``` .NET SDK Version >= 5.0 @@ -53,17 +53,17 @@ ApacheThrift >= 0.14.1 NLog >= 4.7.9 ``` -### OS +### 4.1 OS * Linux, Macos or other unix-like OS * Windows+bash(WSL, cygwin, Git Bash) -### Command Line Tools +### 4.2 Command Line Tools * dotnet CLI * Thrift -## Basic interface description +## 5. 
Basic interface description The Session interface is semantically identical to other language clients @@ -101,7 +101,7 @@ await session_pool.InsertTabletAsync(tablet); await session_pool.Close(); ``` -## **Row Record** +## 6. **Row Record** - Encapsulate and abstract the `record` data in **IoTDB** - e.g. @@ -117,7 +117,7 @@ var rowRecord = new RowRecord(long timestamps, List values, List measurements); ``` -### **Tablet** +### 6.1 **Tablet** - A data structure similar to a table, containing several non empty data blocks of a device's rows。 - e.g. @@ -137,9 +137,9 @@ var tablet = -## **API** +## 7. **API** -### **Basic API** +### 7.1 **Basic API** | api name | parameters | notes | use example | | -------------- | ------------------------- | ------------------------ | ----------------------------- | @@ -151,7 +151,7 @@ var tablet = | SetTimeZone | string | set time zone | session_pool.GetTimeZone() | | GetTimeZone | null | get time zone | session_pool.GetTimeZone() | -### **Record API** +### 7.2 **Record API** | api name | parameters | notes | use example | | ----------------------------------- | ----------------------------- | ----------------------------------- | ------------------------------------------------------------ | @@ -162,7 +162,7 @@ var tablet = | TestInsertRecordAsync | string, RowRecord | test insert record | session_pool.TestInsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", rowRecord) | | TestInsertRecordsAsync | List\, List\ | test insert record | session_pool.TestInsertRecordsAsync(device_id, rowRecords) | -### **Tablet API** +### 7.3 **Tablet API** | api name | parameters | notes | use example | | ---------------------- | ------------ | -------------------- | -------------------------------------------- | @@ -171,14 +171,14 @@ var tablet = | TestInsertTabletAsync | Tablet | test insert tablet | session_pool.TestInsertTabletAsync(tablet) | | TestInsertTabletsAsync | List\ | test insert tablets | session_pool.TestInsertTabletsAsync(tablets) | -### **SQL API** +### 7.4 **SQL API** | api name | parameters | notes | use example | | ----------------------------- | ---------- | ------------------------------ | ------------------------------------------------------------ | | ExecuteQueryStatementAsync | string | execute sql query statement | session_pool.ExecuteQueryStatementAsync("select * from root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE where time<15"); | | ExecuteNonQueryStatementAsync | string | execute sql nonquery statement | session_pool.ExecuteNonQueryStatementAsync( "create timeseries root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE.status with datatype=BOOLEAN,encoding=PLAIN") | -### **Scheam API** +### 7.5 **Scheam API** | api name | parameters | notes | use example | | -------------------------- | ------------------------------------------------------------ | --------------------------- | ------------------------------------------------------------ | @@ -191,7 +191,7 @@ var tablet = | DeleteTimeSeriesAsync | string | delete time series | | | DeleteDataAsync | List\, long, long | delete data | session_pool.DeleteDataAsync(ts_path_lst, 2, 3) | -### **Other API** +### 7.6 **Other API** | api name | parameters | notes | use example | | -------------------------- | ---------- | --------------------------- | ---------------------------------------------------- | @@ -201,7 +201,7 @@ var tablet = [e.g.](https://github.com/apache/iotdb-client-csharp/tree/main/samples/Apache.IoTDB.Samples) -## SessionPool +## 8. 
SessionPool To implement concurrent client requests, we provide a `SessionPool` for the native interface. Since `SessionPool` itself is a superset of `Session`, when `SessionPool` is a When the `pool_size` parameter is set to 1, it reverts to the original `Session` diff --git a/src/UserGuide/Master/Tree/API/Programming-Cpp-Native-API.md b/src/UserGuide/Master/Tree/API/Programming-Cpp-Native-API.md index b462983d2..22c08bc3b 100644 --- a/src/UserGuide/Master/Tree/API/Programming-Cpp-Native-API.md +++ b/src/UserGuide/Master/Tree/API/Programming-Cpp-Native-API.md @@ -21,7 +21,7 @@ # C++ Native API -## Dependencies +## 1. Dependencies - Java 8+ - Flex @@ -30,9 +30,9 @@ - OpenSSL 1.0+ - GCC 5.5.0+ -## Installation +## 2. Installation -### Install Required Dependencies +### 2.1 Install Required Dependencies - **MAC** 1. Install Bison: @@ -89,7 +89,7 @@ - Download and install [OpenSSL](http://slproweb.com/products/Win32OpenSSL.html). - Add the include directory under the installation directory to the PATH environment variable. -### Compilation +### 2.2 Compilation Clone the source code from git: ```shell @@ -131,7 +131,7 @@ Run Maven to compile in the IoTDB root directory: After successful compilation, the packaged library files will be located in `iotdb-client/client-cpp/target`, and you can find the compiled example program under `example/client-cpp-example/target`. -### Compilation Q&A +### 2.3 Compilation Q&A Q: What are the requirements for the environment on Linux? @@ -158,11 +158,11 @@ A: - Go back to the IoTDB code directory and run `.\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 15 2017"`. -## Native APIs +## 3. Native APIs Here we show the commonly used interfaces and their parameters in the Native API: -### Initialization +### 3.1 Initialization - Open a Session ```cpp @@ -180,7 +180,7 @@ Notice: this RPC compression status of client must comply with that of IoTDB ser void close(); ``` -### Data Definition Interface (DDL) +### 3.2 Data Definition Interface (DDL) #### Database Management @@ -302,7 +302,7 @@ std::vector showMeasurementsInTemplate(const std::string &template_ ``` -### Data Manipulation Interface (DML) +### 3.3 Data Manipulation Interface (DML) #### Insert @@ -384,7 +384,7 @@ void deleteData(const std::vector &paths, int64_t endTime); void deleteData(const std::vector &paths, int64_t startTime, int64_t endTime); ``` -### IoTDB-SQL Interface +### 3.4 IoTDB-SQL Interface - Execute query statement ```cpp @@ -397,7 +397,7 @@ void executeNonQueryStatement(const std::string &sql); ``` -## Examples +## 4. Examples The sample code of using these interfaces is in: @@ -406,16 +406,16 @@ The sample code of using these interfaces is in: If the compilation finishes successfully, the example project will be placed under `example/client-cpp-example/target` -## FAQ +## 5. 
FAQ -### on Mac +### 5.1 on Mac If errors occur when compiling thrift source code, try to downgrade your xcode-commandline from 12 to 11.5 see https://stackoverflow.com/questions/63592445/ld-unsupported-tapi-file-type-tapi-tbd-in-yaml-file/65518087#65518087 -### on Windows +### 5.2 on Windows When Building Thrift and downloading packages via "wget", a possible annoying issue may occur with error message looks like: diff --git a/src/UserGuide/Master/Tree/API/Programming-Data-Subscription.md b/src/UserGuide/Master/Tree/API/Programming-Data-Subscription.md index 9391c04a7..281cc9864 100644 --- a/src/UserGuide/Master/Tree/API/Programming-Data-Subscription.md +++ b/src/UserGuide/Master/Tree/API/Programming-Data-Subscription.md @@ -23,7 +23,7 @@ IoTDB provides powerful data subscription functionality, allowing users to access newly added data from IoTDB in real-time through subscription APIs. For detailed functional definitions and introductions:[Data subscription](../User-Manual/Data-subscription.md) -## 1 Core Steps +## 1. Core Steps 1. Create Topic: Create a Topic that includes the measurement points you wish to subscribe to. 2. Subscribe to Topic: Before a consumer subscribes to a topic, the topic must have been created, otherwise the subscription will fail. Consumers under the same consumer group will evenly distribute the data. @@ -31,7 +31,7 @@ IoTDB provides powerful data subscription functionality, allowing users to acces 4. Unsubscribe: When a consumer is closed, it will exit the corresponding consumer group and cancel all existing subscriptions. -## 2 Detailed Steps +## 2. Detailed Steps This section is used to illustrate the core development process and does not demonstrate all parameters and interfaces. For a comprehensive understanding of all features and parameters, please refer to: [Java Native API](../API/Programming-Java-Native-API.md#3-native-interface-description) @@ -182,7 +182,7 @@ public class DataConsumerExample { -## 3 Java Native API Description +## 3. Java Native API Description ### 3.1 Parameter List diff --git a/src/UserGuide/Master/Tree/API/Programming-Go-Native-API.md b/src/UserGuide/Master/Tree/API/Programming-Go-Native-API.md index b227ed672..baad278b4 100644 --- a/src/UserGuide/Master/Tree/API/Programming-Go-Native-API.md +++ b/src/UserGuide/Master/Tree/API/Programming-Go-Native-API.md @@ -23,7 +23,7 @@ The Git repository for the Go Native API client is located [here](https://github.com/apache/iotdb-client-go/) -## Dependencies +## 1. Dependencies * golang >= 1.13 * make >= 3.0 @@ -32,7 +32,7 @@ The Git repository for the Go Native API client is located [here](https://github * Linux、Macos or other unix-like systems * Windows+bash (WSL、cygwin、Git Bash) -## Installation +## 2. Installation * go mod diff --git a/src/UserGuide/Master/Tree/API/Programming-JDBC.md b/src/UserGuide/Master/Tree/API/Programming-JDBC.md index 0251e469c..b599c3d4b 100644 --- a/src/UserGuide/Master/Tree/API/Programming-JDBC.md +++ b/src/UserGuide/Master/Tree/API/Programming-JDBC.md @@ -25,12 +25,12 @@ IT CAN NOT PROVIDE HIGH THROUGHPUT FOR WRITE OPERATIONS. PLEASE USE [Java Native API](./Programming-Java-Native-API.md) INSTEAD* -## Dependencies +## 1. Dependencies * JDK >= 1.8+ * Maven >= 3.9+ -## Installation +## 2. Installation In root directory: @@ -38,7 +38,7 @@ In root directory: mvn clean install -pl iotdb-client/jdbc -am -DskipTests ``` -## Use IoTDB JDBC with Maven +## 3. 
Use IoTDB JDBC with Maven ```xml @@ -50,7 +50,7 @@ mvn clean install -pl iotdb-client/jdbc -am -DskipTests ``` -## Coding Examples +## 4. Coding Examples This chapter provides an example of how to open a database connection, execute an SQL query, and display the results. diff --git a/src/UserGuide/Master/Tree/API/Programming-Java-Native-API.md b/src/UserGuide/Master/Tree/API/Programming-Java-Native-API.md index 2b2b68da3..53b445885 100644 --- a/src/UserGuide/Master/Tree/API/Programming-Java-Native-API.md +++ b/src/UserGuide/Master/Tree/API/Programming-Java-Native-API.md @@ -23,14 +23,14 @@ In the native API of IoTDB, the `Session` is the core interface for interacting `SessionPool` is a connection pool for `Session`, and it is recommended to use `SessionPool` for programming. In scenarios with multi-threaded concurrency, `SessionPool` can manage and allocate connection resources effectively, thereby improving system performance and resource utilization efficiency. -## 1 Overview of Steps +## 1. Overview of Steps 1. Create a Connection Pool Instance: Initialize a SessionPool object to manage multiple Session instances. 2. Perform Operations: Directly obtain a Session instance from the SessionPool and execute database operations, without the need to open and close connections each time. 3. Close Connection Pool Resources: When database operations are no longer needed, close the SessionPool to release all related resources. -## 2 Detailed Steps +## 2. Detailed Steps This section provides an overview of the core development process and does not demonstrate all parameters and interfaces. For a complete list of functionalities and parameters, please refer to:[Java Native API](./Programming-Java-Native-API.md#3-native-interface-description) or check the: [Source Code](https://github.com/apache/iotdb/tree/master/example/session/src/main/java/org/apache/iotdb) @@ -343,7 +343,7 @@ public class SessionPoolExample { } ``` -### 3 Native Interface Description +### 3. Native Interface Description #### 3.1 Parameter List diff --git a/src/UserGuide/Master/Tree/API/Programming-Kafka.md b/src/UserGuide/Master/Tree/API/Programming-Kafka.md index 0a041448f..aab8d3d21 100644 --- a/src/UserGuide/Master/Tree/API/Programming-Kafka.md +++ b/src/UserGuide/Master/Tree/API/Programming-Kafka.md @@ -23,9 +23,9 @@ [Apache Kafka](https://kafka.apache.org/) is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. -## Coding Example +## 1. 
Coding Example -### kafka Producer Producing Data Java Code Example +### 1.1 kafka Producer Producing Data Java Code Example ```java Properties props = new Properties(); @@ -39,7 +39,7 @@ producer.close(); ``` -### kafka Consumer Receiving Data Java Code Example +### 1.2 kafka Consumer Receiving Data Java Code Example ```java Properties props = new Properties(); @@ -53,7 +53,7 @@ ConsumerRecords records = kafkaConsumer.poll(Duration.ofSeconds(1)); ``` -### Example of Java Code Stored in IoTDB Server +### 1.3 Example of Java Code Stored in IoTDB Server ```java SessionPool pool = diff --git a/src/UserGuide/Master/Tree/API/Programming-MQTT.md b/src/UserGuide/Master/Tree/API/Programming-MQTT.md index 98fca63d4..953414c60 100644 --- a/src/UserGuide/Master/Tree/API/Programming-MQTT.md +++ b/src/UserGuide/Master/Tree/API/Programming-MQTT.md @@ -30,7 +30,7 @@ IoTDB server includes a built-in MQTT service that allows remote devices send me -## Built-in MQTT Service +## 1. Built-in MQTT Service The Built-in MQTT Service provide the ability of direct connection to IoTDB through MQTT. It listen the publish messages from MQTT clients and then write the data into storage immediately. The MQTT topic corresponds to IoTDB timeseries. @@ -58,7 +58,7 @@ or json array of the above two. -## MQTT Configurations +## 2. MQTT Configurations The IoTDB MQTT service load configurations from `${IOTDB_HOME}/${IOTDB_CONF}/iotdb-system.properties` by default. Configurations are as follows: @@ -73,7 +73,7 @@ Configurations are as follows: | mqtt_max_message_size | the max mqtt message size in byte| 1048576 | -## Coding Examples +## 3. Coding Examples The following is an example which a mqtt client send messages to IoTDB server. ```java @@ -101,7 +101,7 @@ connection.disconnect(); ``` -## Customize your MQTT Message Format +## 4. Customize your MQTT Message Format If you do not like the above Json format, you can customize your MQTT Message format by just writing several lines of codes. An example can be found in `example/mqtt-customize` project. diff --git a/src/UserGuide/Master/Tree/API/Programming-NodeJS-Native-API.md b/src/UserGuide/Master/Tree/API/Programming-NodeJS-Native-API.md index 35c7964cd..0c75c75ec 100644 --- a/src/UserGuide/Master/Tree/API/Programming-NodeJS-Native-API.md +++ b/src/UserGuide/Master/Tree/API/Programming-NodeJS-Native-API.md @@ -24,14 +24,14 @@ Apache IoTDB uses Thrift as a cross-language RPC-framework so access to IoTDB can be achieved through the interfaces provided by Thrift. This document will introduce how to generate a native Node.js interface that can be used to access IoTDB. -## Dependents +## 1. Dependents * JDK >= 1.8 * Node.js >= 16.0.0 * Linux、Macos or like unix * Windows+bash -## Generate the Node.js native interface +## 2. Generate the Node.js native interface 1. Find the `pom.xml` file in the root directory of the IoTDB source code folder. 2. Open the `pom.xml` file and find the following content: @@ -67,11 +67,11 @@ This document will introduce how to generate a native Node.js interface that can This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. The newly generated JavaScript sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs` in the various modules of the `iotdb-protocol` module. -## Using the Node.js native interface +## 3. 
Using the Node.js native interface Simply copy the files in `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs/` and `iotdb/iotdb-protocol/thrift-commons/target/generated-sources-nodejs/` into your project. -## rpc interface +## 4. rpc interface ``` // open a session diff --git a/src/UserGuide/Master/Tree/API/Programming-ODBC.md b/src/UserGuide/Master/Tree/API/Programming-ODBC.md index 7d2b9bb20..0d5bf67f1 100644 --- a/src/UserGuide/Master/Tree/API/Programming-ODBC.md +++ b/src/UserGuide/Master/Tree/API/Programming-ODBC.md @@ -22,19 +22,19 @@ # ODBC With IoTDB JDBC, IoTDB can be accessed using the ODBC-JDBC bridge. -## Dependencies +## 1. Dependencies * IoTDB-JDBC's jar-with-dependency package * ODBC-JDBC bridge (e.g. ZappySys JDBC Bridge) -## Deployment -### Preparing JDBC package +## 2. Deployment +### 2.1 Preparing JDBC package Download the source code of IoTDB, and execute the following command in root directory: ```shell mvn clean package -pl iotdb-client/jdbc -am -DskipTests -P get-jar-with-dependencies ``` Then, you can see the output `iotdb-jdbc-1.3.2-SNAPSHOT-jar-with-dependencies.jar` under `iotdb-client/jdbc/target` directory. -### Preparing ODBC-JDBC Bridge +### 2.2 Preparing ODBC-JDBC Bridge *Note: Here we only provide one kind of ODBC-JDBC bridge as the instance. Readers can use other ODBC-JDBC bridges to access IoTDB with the IOTDB-JDBC.* 1. **Download Zappy-Sys ODBC-JDBC Bridge**: Enter the https://zappysys.com/products/odbc-powerpack/odbc-jdbc-bridge-driver/ website, and click "download". diff --git a/src/UserGuide/Master/Tree/API/Programming-OPC-UA_timecho.md b/src/UserGuide/Master/Tree/API/Programming-OPC-UA_timecho.md index a1084df2a..e6d675042 100644 --- a/src/UserGuide/Master/Tree/API/Programming-OPC-UA_timecho.md +++ b/src/UserGuide/Master/Tree/API/Programming-OPC-UA_timecho.md @@ -21,11 +21,11 @@ # OPC UA Protocol -## OPC UA +## 1. OPC UA OPC UA is a technical specification used in the automation field for communication between different devices and systems, enabling cross platform, cross language, and cross network operations, providing a reliable and secure data exchange foundation for the Industrial Internet of Things. IoTDB supports OPC UA protocol, and IoTDB OPC Server supports both Client/Server and Pub/Sub communication modes. -### OPC UA Client/Server Mode +### 1.1 OPC UA Client/Server Mode - **Client/Server Mode**:In this mode, IoTDB's stream processing engine establishes a connection with the OPC UA Server via an OPC UA Sink. The OPC UA Server maintains data within its Address Space, from which IoTDB can request and retrieve data. Additionally, other OPC UA Clients can access the data on the server. @@ -40,7 +40,7 @@ OPC UA is a technical specification used in the automation field for communicati - Each measurement point is recorded as a variable node and the latest value in the current database is recorded. -### OPC UA Pub/Sub Mode +### 1.2 OPC UA Pub/Sub Mode - **Pub/Sub Mode**: In this mode, IoTDB's stream processing engine sends data change events to the OPC UA Server through an OPC UA Sink. These events are published to the server's message queue and managed through Event Nodes. Other OPC UA Clients can subscribe to these Event Nodes to receive notifications upon data changes. @@ -65,9 +65,9 @@ OPC UA is a technical specification used in the automation field for communicati - Events are only sent to clients that are already listening; if a client is not connected, the Event will be ignored. -## IoTDB OPC Server Startup method +## 2. 
IoTDB OPC Server Startup method -### Syntax +### 2.1 Syntax The syntax for creating the Sink is as follows: @@ -85,7 +85,7 @@ create pipe p1 ) ``` -### Parameters +### 2.2 Parameters | key | value | value range | required or not | default value | | :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- | @@ -98,7 +98,7 @@ create pipe p1 | sink.user | User for OPC UA, specified in the configuration | String | Optional | root | | sink.password | Password for OPC UA, specified in the configuration | String | Optional | root | -### 示例 +### 2.3 Example ```Bash create pipe p1 @@ -108,7 +108,7 @@ create pipe p1 start pipe p1; ``` -### Usage Limitations +### 2.4 Usage Limitations 1. **DataRegion Requirement**: The OPC UA server will only start if there is a DataRegion in IoTDB. For an empty IoTDB, a data entry is necessary for the OPC UA server to become effective. @@ -122,9 +122,9 @@ start pipe p1; 4. **Does not support deleting data and modifying measurement point types:** In Client Server mode, OPC UA cannot delete data or change data type settings. In Pub Sub mode, if data is deleted, information cannot be pushed to the client. -## IoTDB OPC Server Example +## 3. IoTDB OPC Server Example -### Client / Server Mode +### 3.1 Client / Server Mode #### Preparation Work @@ -174,7 +174,7 @@ insert into root.test.db(time, s2) values(now(), 2) -### Pub / Sub Mode +### 3.2 Pub / Sub Mode #### Preparation Work @@ -187,7 +187,7 @@ The code includes: - Client configuration and startup logic(ClientExampleRunner) - The parent class of ClientTest(ClientExample) -### Quick Start +### 3.3 Quick Start The steps are as follows: @@ -252,9 +252,9 @@ start pipe p1; -### Notes +### 3.4 Notes -1. **stand alone and cluster:**It is recommended to use a 1C1D (one coordinator and one data node) single machine version. If there are multiple DataNodes in the cluster, data may be sent in a scattered manner across various DataNodes, and it may not be possible to listen to all the data. +1. **stand alone and cluster:** It is recommended to use a 1C1D (one coordinator and one data node) single machine version. If there are multiple DataNodes in the cluster, data may be sent in a scattered manner across various DataNodes, and it may not be possible to listen to all the data. 2. **No Need to Operate Root Directory Certificates:** During the certificate operation process, there is no need to operate the `iotdb-server.pfx` certificate under the IoTDB security root directory and the `example-client.pfx` directory under the client security directory. When the Client and Server connect bidirectionally, they will send the root directory certificate to each other. If it is the first time the other party sees this certificate, it will be placed in the reject dir. If the certificate is in the trusted/certs, then the other party can trust it. diff --git a/src/UserGuide/Master/Tree/API/Programming-Python-Native-API.md b/src/UserGuide/Master/Tree/API/Programming-Python-Native-API.md index 8d74b41cf..01ade47f8 100644 --- a/src/UserGuide/Master/Tree/API/Programming-Python-Native-API.md +++ b/src/UserGuide/Master/Tree/API/Programming-Python-Native-API.md @@ -21,13 +21,13 @@ # Python Native API -## Requirements +## 1. Requirements You have to install thrift (>=0.13) before using the package. -## How to use (Example) +## 2. 
How to use (Example) First, download the package: `pip3 install apache-iotdb` @@ -52,7 +52,7 @@ zone = session.get_time_zone() session.close() ``` -## Initialization +## 3. Initialization * Initialize a Session @@ -94,11 +94,11 @@ Notice: this RPC compression status of client must comply with that of IoTDB ser ```python session.close() ``` -## Managing Session through SessionPool +## 4. Managing Session through SessionPool Utilizing SessionPool to manage sessions eliminates the need to worry about session reuse. When the number of session connections reaches the maximum capacity of the pool, requests for acquiring a session will be blocked, and you can set the blocking wait time through parameters. After using a session, it should be returned to the SessionPool using the `putBack` method for proper management. -### Create SessionPool +### 4.1 Create SessionPool ```python pool_config = PoolConfig(host=ip,port=port, user_name=username, @@ -110,7 +110,7 @@ wait_timeout_in_ms = 3000 # # Create the connection pool session_pool = SessionPool(pool_config, max_pool_size, wait_timeout_in_ms) ``` -### Create a SessionPool using distributed nodes. +### 4.2 Create a SessionPool using distributed nodes. ```python pool_config = PoolConfig(node_urls=node_urls=["127.0.0.1:6667", "127.0.0.1:6668", "127.0.0.1:6669"], user_name=username, password=password, fetch_size=1024, @@ -118,7 +118,7 @@ pool_config = PoolConfig(node_urls=node_urls=["127.0.0.1:6667", "127.0.0.1:6668" max_pool_size = 5 wait_timeout_in_ms = 3000 ``` -### Acquiring a session through SessionPool and manually calling PutBack after use +### 4.3 Acquiring a session through SessionPool and manually calling PutBack after use ```python session = session_pool.get_session() @@ -132,9 +132,9 @@ session_pool.put_back(session) session_pool.close() ``` -## Data Definition Interface (DDL Interface) +## 5. Data Definition Interface (DDL Interface) -### Database Management +### 5.1 Database Management * CREATE DATABASE @@ -148,7 +148,7 @@ session.set_storage_group(group_name) session.delete_storage_group(group_name) session.delete_storage_groups(group_name_lst) ``` -### Timeseries Management +### 5.2 Timeseries Management * Create one or multiple timeseries @@ -184,9 +184,9 @@ session.delete_time_series(paths_list) session.check_time_series_exists(path) ``` -## Data Manipulation Interface (DML Interface) +## 6. Data Manipulation Interface (DML Interface) -### Insert +### 6.1 Insert It is recommended to use insertTablet to help improve write efficiency. @@ -310,7 +310,7 @@ session.insert_records( session.insert_records_of_one_device(device_id, time_list, measurements_list, data_types_list, values_list) ``` -### Insert with type inference +### 6.2 Insert with type inference When the data is of String type, we can use the following interface to perform type inference based on the value of the value itself. For example, if value is "true" , it can be automatically inferred to be a boolean type. If value is "3.2" , it can be automatically inferred as a flout type. Without type information, server has to do type inference, which may cost some time. 
@@ -320,7 +320,7 @@ When the data is of String type, we can use the following interface to perform t session.insert_str_record(device_id, timestamp, measurements, string_values) ``` -### Insert of Aligned Timeseries +### 6.3 Insert of Aligned Timeseries The Insert of aligned timeseries uses interfaces like insert_aligned_XXX, and others are similar to the above interfaces: @@ -331,7 +331,7 @@ The Insert of aligned timeseries uses interfaces like insert_aligned_XXX, and ot * insert_aligned_tablets -## IoTDB-SQL Interface +## 7. IoTDB-SQL Interface * Execute query statement @@ -351,8 +351,8 @@ session.execute_non_query_statement(sql) session.execute_statement(sql) ``` -## Schema Template -### Create Schema Template +## 8. Schema Template +### 8.1 Create Schema Template The step for creating a metadata template is as follows 1. Create the template class 2. Adding MeasurementNode @@ -371,7 +371,7 @@ template.add_template(m_node_z) session.create_schema_template(template) ``` -### Modify Schema Template measurements +### 8.2 Modify Schema Template measurements Modify measurements in a template, the template must be already created. These are functions that add or delete some measurement nodes. * add node in template ```python @@ -383,17 +383,17 @@ session.add_measurements_in_template(template_name, measurements_path, data_type session.delete_node_in_template(template_name, path) ``` -### Set Schema Template +### 8.3 Set Schema Template ```python session.set_schema_template(template_name, prefix_path) ``` -### Uset Schema Template +### 8.4 Uset Schema Template ```python session.unset_schema_template(template_name, prefix_path) ``` -### Show Schema Template +### 8.5 Show Schema Template * Show all schema templates ```python session.show_all_templates() @@ -428,14 +428,14 @@ session.show_paths_template_set_on(template_name) session.show_paths_template_using_on(template_name) ``` -### Drop Schema Template +### 8.6 Drop Schema Template Delete an existing metadata template,dropping an already set template is not supported ```python session.drop_schema_template("template_python") ``` -## Pandas Support +## 9. Pandas Support To easily transform a query result to a [Pandas Dataframe](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) the SessionDataSet has a method `.todf()` which consumes the dataset and transforms it to a pandas dataframe. @@ -463,7 +463,7 @@ df = ... ``` -## IoTDB Testcontainer +## 10. IoTDB Testcontainer The Test Support is based on the lib `testcontainers` (https://testcontainers-python.readthedocs.io/en/latest/index.html) which you need to install in your project if you want to use the feature. @@ -482,12 +482,12 @@ class MyTestCase(unittest.TestCase): by default it will load the image `apache/iotdb:latest`, if you want a specific version just pass it like e.g. `IoTDBContainer("apache/iotdb:0.12.0")` to get version `0.12.0` running. -## IoTDB DBAPI +## 11. IoTDB DBAPI IoTDB DBAPI implements the Python DB API 2.0 specification (https://peps.python.org/pep-0249/), which defines a common interface for accessing databases in Python. -### Examples +### 11.1 Examples + Initialization The initialized parameters are consistent with the session part (except for the sqlalchemy_mode). @@ -536,11 +536,11 @@ cursor.close() conn.close() ``` -## IoTDB SQLAlchemy Dialect (Experimental) +## 12. IoTDB SQLAlchemy Dialect (Experimental) The SQLAlchemy dialect of IoTDB is written to adapt to Apache Superset. This part is still being improved. 
Please do not use it in the production environment! -### Mapping of the metadata +### 12.1 Mapping of the metadata The data model used by SQLAlchemy is a relational data model, which describes the relationships between different entities through tables. While the data model of IoTDB is a hierarchical data model, which organizes the data through a tree structure. In order to adapt IoTDB to the dialect of SQLAlchemy, the original data model in IoTDB needs to be reorganized. @@ -570,7 +570,7 @@ The following figure shows the relationship between the two more intuitively: ![sqlalchemy-to-iotdb](/img/UserGuide/API/IoTDB-SQLAlchemy/sqlalchemy-to-iotdb.png?raw=true) -### Data type mapping +### 12.2 Data type mapping | data type in IoTDB | data type in SQLAlchemy | |--------------------|-------------------------| | BOOLEAN | Boolean | @@ -581,7 +581,7 @@ The following figure shows the relationship between the two more intuitively: | TEXT | Text | | LONG | BigInteger | -### Example +### 12.3 Example + execute statement @@ -627,15 +627,15 @@ for row in res: ``` -## Developers +## 13. Developers -### Introduction +### 13.1 Introduction This is an example of how to connect to IoTDB with python, using the thrift rpc interfaces. Things are almost the same on Windows or Linux, but pay attention to the difference like path separator. -### Prerequisites +### 13.2 Prerequisites Python3.7 or later is preferred. @@ -652,7 +652,7 @@ pip install -r requirements_dev.txt -### Compile the thrift library and Debug +### 13.3 Compile the thrift library and Debug In the root of IoTDB's source code folder, run `mvn clean generate-sources -pl iotdb-client/client-py -am`. @@ -664,7 +664,7 @@ This folder is ignored from git and should **never be pushed to git!** -### Session Client & Example +### 13.4 Session Client & Example We packed up the Thrift interface in `client-py/src/iotdb/Session.py` (similar with its Java counterpart), also provided an example file `client-py/src/SessionExample.py` of how to use the session module. please read it carefully. @@ -686,7 +686,7 @@ session.close() -### Tests +### 13.5 Tests Please add your custom tests in `tests` folder. @@ -696,14 +696,14 @@ To run all defined tests just type `pytest .` in the root folder. -### Futher Tools +### 13.6 Futher Tools [black](https://pypi.org/project/black/) and [flake8](https://pypi.org/project/flake8/) are installed for autoformatting and linting. Both can be run by `black .` or `flake8 .` respectively. -## Releasing +## 14. Releasing To do a release just ensure that you have the right set of generated thrift files. Then run linting and auto-formatting. @@ -712,13 +712,13 @@ Then you are good to go to do a release! -### Preparing your environment +### 14.1 Preparing your environment First, install all necessary dev dependencies via `pip install -r requirements_dev.txt`. -### Doing the Release +### 14.2 Doing the Release There is a convenient script `release.sh` to do all steps for a release. Namely, these are diff --git a/src/UserGuide/Master/Tree/API/Programming-Rust-Native-API.md b/src/UserGuide/Master/Tree/API/Programming-Rust-Native-API.md index f58df68fc..d25923e71 100644 --- a/src/UserGuide/Master/Tree/API/Programming-Rust-Native-API.md +++ b/src/UserGuide/Master/Tree/API/Programming-Rust-Native-API.md @@ -24,7 +24,7 @@ IoTDB uses Thrift as a cross language RPC framework, so access to IoTDB can be achieved through the interface provided by Thrift. This document will introduce how to generate a native Rust interface that can access IoTDB. 
-## Dependents +## 1. Dependents * JDK >= 1.8 * Rust >= 1.0.0 @@ -38,7 +38,7 @@ Thrift (0.14.1 or higher) must be installed to compile Thrift files into Rust co http://thrift.apache.org/docs/install/ ``` -## Compile the Thrift library and generate the Rust native interface +## 2. Compile the Thrift library and generate the Rust native interface 1. Find the `pom.xml` file in the root directory of the IoTDB source code folder. 2. Open the `pom.xml` file and find the following content: @@ -74,11 +74,11 @@ http://thrift.apache.org/docs/install/ This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. The newly generated Rust sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-rust` in the various modules of the `iotdb-protocol` module. -## Using the Rust native interface +## 3. Using the Rust native interface Copy `iotdb/iotdb-protocol/thrift/target/generated-sources-rust/` and `iotdb/iotdb-protocol/thrift-commons/target/generated-sources-rust/` into your project。 -## RPC interface +## 4. RPC interface ``` // open a session diff --git a/src/UserGuide/Master/Tree/API/RestServiceV1.md b/src/UserGuide/Master/Tree/API/RestServiceV1.md index 775235fed..4fb834708 100644 --- a/src/UserGuide/Master/Tree/API/RestServiceV1.md +++ b/src/UserGuide/Master/Tree/API/RestServiceV1.md @@ -22,7 +22,7 @@ # RESTful API V1(Not Recommend) IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks. -## Enable RESTful Services +## 1. Enable RESTful Services RESTful services are disabled by default. @@ -32,7 +32,7 @@ RESTful services are disabled by default. enable_rest_service=true ``` -## Authentication +## 2. Authentication Except the liveness probe API `/ping`, RESTful services use the basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`. The username used in the following examples is: `root`, and password is: `root`. @@ -67,9 +67,9 @@ Authorization: Basic cm9vdDpyb290 } ``` -## Interface +## 3. Interface -### ping +### 3.1 ping The `/ping` API can be used for service liveness probing. @@ -119,7 +119,7 @@ Sample response: > `/ping` can be accessed without authorization. -### query +### 3.2 query The query interface can be used to handle data queries and metadata queries. @@ -762,7 +762,7 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X } ``` -### nonQuery +### 3.3 nonQuery Request method: `POST` @@ -798,7 +798,7 @@ Sample response: -### insertTablet +### 3.4 insertTablet Request method: `POST` @@ -837,7 +837,7 @@ Sample response: } ``` -## Configuration +## 4. Configuration The configuration is located in 'iotdb-system.properties'. diff --git a/src/UserGuide/Master/Tree/API/RestServiceV2.md b/src/UserGuide/Master/Tree/API/RestServiceV2.md index 6c6011bf5..186cd1360 100644 --- a/src/UserGuide/Master/Tree/API/RestServiceV2.md +++ b/src/UserGuide/Master/Tree/API/RestServiceV2.md @@ -22,7 +22,7 @@ # RESTful API V2 IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks. -## Enable RESTful Services +## 1. Enable RESTful Services RESTful services are disabled by default. @@ -32,7 +32,7 @@ RESTful services are disabled by default. 
enable_rest_service=true ``` -## Authentication +## 2. Authentication Except the liveness probe API `/ping`, RESTful services use the basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`. The username used in the following examples is: `root`, and password is: `root`. @@ -67,9 +67,9 @@ Authorization: Basic cm9vdDpyb290 } ``` -## Interface +## 3. Interface -### ping +### 3.1 ping The `/ping` API can be used for service liveness probing. @@ -119,7 +119,7 @@ Sample response: > `/ping` can be accessed without authorization. -### query +### 3.2 query The query interface can be used to handle data queries and metadata queries. @@ -762,7 +762,7 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X } ``` -### nonQuery +### 3.3 nonQuery Request method: `POST` @@ -798,7 +798,7 @@ Sample response: -### insertTablet +### 3.4 insertTablet Request method: `POST` @@ -837,7 +837,7 @@ Sample response: } ``` -### insertRecords +### 3.5 insertRecords Request method: `POST` @@ -877,7 +877,7 @@ Sample response: ``` -## Configuration +## 4. Configuration The configuration is located in 'iotdb-system.properties'. diff --git a/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_apache.md b/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_apache.md index 674a74e69..5a2b4652c 100644 --- a/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_apache.md +++ b/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_apache.md @@ -21,7 +21,7 @@ # Common Concepts -## Sql_dialect Related Concepts +## 1. Sql_dialect Related Concepts | Concept | Meaning | | ----------------------- | ------------------------------------------------------------ | @@ -32,7 +32,7 @@ | Encoding | Encoding is a compression technique that represents data in binary form to improve storage efficiency. IoTDB supports various encoding methods for different types of data. For more detailed information, please refer to:[Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | | Compression | After data encoding, IoTDB uses compression technology to further compress binary data to enhance storage efficiency. IoTDB supports multiple compression methods. For more detailed information, please refer to: [Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | -## Distributed Related Concepts +## 2. Distributed Related Concepts The following figure shows a common IoTDB 3C3D (3 ConfigNodes, 3 DataNodes) cluster deployment pattern: @@ -46,7 +46,7 @@ IoTDB's cluster includes the following common concepts: The above concepts will be introduced in the following text. -### Nodes +### 2.1 Nodes IoTDB cluster includes three types of nodes (processes): ConfigNode (management node), DataNode (data node), and AINode (analysis node), as shown below: @@ -54,7 +54,7 @@ IoTDB cluster includes three types of nodes (processes): ConfigNode (management - DataNode: Serves client requests and is responsible for data storage and computation, as shown in DataNode-1, DataNode-2, and DataNode-3 in the figure above. - AINode: Provides machine learning capabilities, supports the registration of trained machine learning models, and allows model inference through SQL calls. It has already built-in self-developed time-series large models and common machine learning algorithms (such as prediction and anomaly detection). 
-### Data Partitioning +### 2.2 Data Partitioning In IoTDB, both metadata and data are divided into small partitions, namely Regions, which are managed by various DataNodes in the cluster. @@ -62,7 +62,7 @@ In IoTDB, both metadata and data are divided into small partitions, namely Regio - DataRegion: Data partition, managing the data of a part of devices for a certain period of time. DataRegions with the same RegionID on different DataNodes are mutual replicas, as shown in DataRegion-2 in the figure above, which has two replicas located on DataNode-1 and DataNode-2. - For specific partitioning algorithms, please refer to: [Data Partitioning](../Technical-Insider/Cluster-data-partitioning.md) -### Replica Groups +### 2.3 Replica Groups The number of replicas for data and metadata can be configured. The recommended configurations for different deployment modes are as follows, where multi-replication can provide high-availability services. @@ -71,11 +71,11 @@ The number of replicas for data and metadata can be configured. The recommended | Schema | schema_replication_factor | 1 | 3 | | Data | data_replication_factor | 1 | 2 | -## Deployment Related Concepts +## 3. Deployment Related Concepts IoTDB has two operating modes: Stand-Alone mode and Cluster mode. -### Stand-Alone Mode +### 3.1 Stand-Alone Mode An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; @@ -84,7 +84,7 @@ An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; - **Applicable Scenarios**:Scenarios with limited resources or low requirements for high availability, such as edge-side servers. - **Deployment Method**:[Stand-Alone-Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md) -### Cluster Mode +### 3.2 Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes, usually 3 DataNodes, i.e., 3C3D; when some nodes fail, the remaining nodes can still provide services, ensuring the high availability of the database service, and the database performance can be improved with the addition of nodes. @@ -92,7 +92,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes - **Applicable Scenarios**:Enterprise-level application scenarios requiring high availability and reliability. - **Deployment Method**:[Cluster-Deployment](../Deployment-and-Maintenance/Cluster-Deployment_apache.md) -### Summary of Features +### 3.3 Summary of Features | Dimension | Stand-Alone Mode | Cluster Mode | | ------------ | ---------------------------- | ------------------------ | diff --git a/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_timecho.md b/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_timecho.md index 42344aa47..2307c4a71 100644 --- a/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_timecho.md +++ b/src/UserGuide/Master/Tree/Background-knowledge/Cluster-Concept_timecho.md @@ -21,7 +21,7 @@ # Common Concepts -## Sql_dialect Related Concepts +## 1. Sql_dialect Related Concepts | Concept | Meaning | | ----------------------- | ------------------------------------------------------------ | @@ -32,7 +32,7 @@ | Encoding | Encoding is a compression technique that represents data in binary form to improve storage efficiency. IoTDB supports various encoding methods for different types of data. 
For more detailed information, please refer to:[Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | | Compression | After data encoding, IoTDB uses compression technology to further compress binary data to enhance storage efficiency. IoTDB supports multiple compression methods. For more detailed information, please refer to: [Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | -## Distributed Related Concepts +## 2. Distributed Related Concepts The following figure shows a common IoTDB 3C3D (3 ConfigNodes, 3 DataNodes) cluster deployment pattern: @@ -47,7 +47,7 @@ IoTDB's cluster includes the following common concepts: The above concepts will be introduced in the following text. -### Nodes +### 2.1 Nodes IoTDB cluster includes three types of nodes (processes): ConfigNode (management node), DataNode (data node), and AINode (analysis node), as shown below: @@ -55,7 +55,7 @@ IoTDB cluster includes three types of nodes (processes): ConfigNode (management - DataNode: Serves client requests and is responsible for data storage and computation, as shown in DataNode-1, DataNode-2, and DataNode-3 in the figure above. - AINode: Provides machine learning capabilities, supports the registration of trained machine learning models, and allows model inference through SQL calls. It has already built-in self-developed time-series large models and common machine learning algorithms (such as prediction and anomaly detection). -### Data Partitioning +### 2.2 Data Partitioning In IoTDB, both metadata and data are divided into small partitions, namely Regions, which are managed by various DataNodes in the cluster. @@ -63,7 +63,7 @@ In IoTDB, both metadata and data are divided into small partitions, namely Regio - DataRegion: Data partition, managing the data of a part of devices for a certain period of time. DataRegions with the same RegionID on different DataNodes are mutual replicas, as shown in DataRegion-2 in the figure above, which has two replicas located on DataNode-1 and DataNode-2. - For specific partitioning algorithms, please refer to: [Data Partitioning](../Technical-Insider/Cluster-data-partitioning.md) -### Replica Groups +### 2.3 Replica Groups The number of replicas for data and metadata can be configured. The recommended configurations for different deployment modes are as follows, where multi-replication can provide high-availability services. @@ -73,11 +73,11 @@ The number of replicas for data and metadata can be configured. The recommended | Data | data_replication_factor | 1 | 2 | -## Deployment Related Concepts +## 3. Deployment Related Concepts IoTDB has three operating modes: Stand-Alone mode, Cluster mode, and Dual-Active mode. -### Stand-Alone Mode +### 3.1 Stand-Alone Mode An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; @@ -86,7 +86,7 @@ An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; - **Applicable Scenarios**:Scenarios with limited resources or low requirements for high availability, such as edge-side servers. - **Deployment Method**:[Stand-Alone-Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md) -### Dual-Active Mode +### 3.2 Dual-Active Mode Dual-active deployment is a feature of TimechoDB Enterprise Edition, which refers to two independent instances performing bidirectional synchronization and can provide services simultaneously. When one instance is restarted after a shutdown, the other instance will resume transmission of the missing data. 
@@ -97,7 +97,7 @@ Dual-active deployment is a feature of TimechoDB Enterprise Edition, which refer - **Applicable Scenarios**:Scenarios with limited resources (only two servers) but requiring high-availability capabilities. - **Deployment Method**:[Dual-Active-Deployment](../Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md) -### Cluster Mode +### 3.3 Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes, usually 3 DataNodes, i.e., 3C3D; when some nodes fail, the remaining nodes can still provide services, ensuring the high availability of the database service, and the database performance can be improved with the addition of nodes. @@ -105,7 +105,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes - **Applicable Scenarios**:Enterprise-level application scenarios requiring high availability and reliability. - **Deployment Method**:[Cluster-Deployment](../Deployment-and-Maintenance/Cluster-Deployment_timecho.md) -### Summary of Features +### 3.4 Summary of Features | Dimension | Stand-Alone Mode | Dual-Active Mode | Cluster Mode | | ------------ | ---------------------------- | ------------------------ | ------------------------ | diff --git a/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_apache.md b/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_apache.md index 2bec26da1..f3b777bf2 100644 --- a/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_apache.md +++ b/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_apache.md @@ -23,11 +23,11 @@ This section introduces how to transform time series data application scenarios into IoTDB time series modeling. -## 1 Time Series Data Model +## 1. Time Series Data Model Before designing an IoTDB data model, it's essential to understand time series data and its underlying structure. For more details, refer to: [Time Series Data Model](../Background-knowledge/Navigating_Time_Series_Data.md) -## 2 Two Time Series Model in IoTDB +## 2. Two Time Series Model in IoTDB IoTDB offers two data modeling syntaxes—tree model and table model, each with its distinct characteristics as follows: @@ -80,7 +80,7 @@ The following table compares the tree model and the table model from various dim - When establishing a database connection via client tools (Cli) or SDKs, specify the model syntax using the `sql_dialect` parameter (Tree syntax is used by default). -## 3 Application Scenarios +## 3. Application Scenarios The application scenarios mainly include two categories: diff --git a/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_timecho.md b/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_timecho.md index 0e843c871..477666573 100644 --- a/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_timecho.md +++ b/src/UserGuide/Master/Tree/Background-knowledge/Data-Model-and-Terminology_timecho.md @@ -23,11 +23,11 @@ This section introduces how to transform time series data application scenarios into IoTDB time series modeling. -## 1 Time Series Data Model +## 1. Time Series Data Model Before designing an IoTDB data model, it's essential to understand time series data and its underlying structure. For more details, refer to: [Time Series Data Model](../Background-knowledge/Navigating_Time_Series_Data.md) -## 2 Two Time Series Model in IoTDB +## 2. 
Two Time Series Model in IoTDB IoTDB offers two data modeling syntaxes—tree model and table model, each with its distinct characteristics as follows: @@ -80,7 +80,7 @@ The following table compares the tree model and the table model from various dim - When establishing a database connection via client tools (Cli) or SDKs, specify the model syntax using the `sql_dialect` parameter (Tree syntax is used by default). -## 3 Application Scenarios +## 3. Application Scenarios The application scenarios mainly include three categories: diff --git a/src/UserGuide/Master/Tree/Background-knowledge/Data-Type.md b/src/UserGuide/Master/Tree/Background-knowledge/Data-Type.md index e33af42eb..2fbb63040 100644 --- a/src/UserGuide/Master/Tree/Background-knowledge/Data-Type.md +++ b/src/UserGuide/Master/Tree/Background-knowledge/Data-Type.md @@ -21,7 +21,7 @@ # Data Type -## Basic Data Type +## 1.Basic Data Type IoTDB supports the following data types: @@ -38,7 +38,7 @@ IoTDB supports the following data types: The difference between STRING and TEXT types is that STRING type has more statistical information and can be used to optimize value filtering queries, while TEXT type is suitable for storing long strings. -### Float Precision +### 1.1 Float Precision The time series of **FLOAT** and **DOUBLE** type can specify (MAX\_POINT\_NUMBER, see [this page](../SQL-Manual/SQL-Manual.md) for more information on how to specify), which is the number of digits after the decimal point of the floating point number, if the encoding method is [RLE](../Technical-Insider/Encoding-and-Compression.md) or [TS\_2DIFF](../Technical-Insider/Encoding-and-Compression.md). If MAX\_POINT\_NUMBER is not specified, the system will use [float\_precision](../Reference/DataNode-Config-Manual.md) in the configuration file `iotdb-system.properties`. @@ -49,7 +49,7 @@ CREATE TIMESERIES root.vehicle.d0.s0 WITH DATATYPE=FLOAT, ENCODING=RLE, 'MAX_POI * For Float data value, The data range is (-Integer.MAX_VALUE, Integer.MAX_VALUE), rather than Float.MAX_VALUE, and the max_point_number is 19, caused by the limition of function Math.round(float) in Java. * For Double data value, The data range is (-Long.MAX_VALUE, Long.MAX_VALUE), rather than Double.MAX_VALUE, and the max_point_number is 19, caused by the limition of function Math.round(double) in Java (Long.MAX_VALUE=9.22E18). -### Data Type Compatibility +### 1.2 Data Type Compatibility When the written data type is inconsistent with the data type of time-series, - If the data type of time-series is not compatible with the written data type, the system will give an error message. @@ -70,11 +70,11 @@ The compatibility of each data type is shown in the following table: | TIMESTAMP | INT32 INT64 TIMESTAMP | | DATE | DATE | -## Timestamp +## 2. Timestamp The timestamp is the time point at which data is produced. It includes absolute timestamps and relative timestamps -### Absolute timestamp +### 2.1 Absolute timestamp Absolute timestamps in IoTDB are divided into two types: LONG and DATETIME (including DATETIME-INPUT and DATETIME-DISPLAY). When a user inputs a timestamp, he can use a LONG type timestamp or a DATETIME-INPUT type timestamp, and the supported formats of the DATETIME-INPUT type timestamp are shown in the table below: @@ -144,7 +144,7 @@ IoTDB can support LONG types and DATETIME-DISPLAY types when displaying timestam -### Relative timestamp +### 2.2 Relative timestamp Relative time refers to the time relative to the server time ```now()``` and ```DATETIME``` time. 
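For example, a relative duration can be combined with `now()` directly in a time filter; the series below is the example series used elsewhere in this guide, so adjust the path to your own data:

```
select temperature from root.ln.wf01.wt01 where time >= now() - 1d
```

This returns the data points written during the last day, evaluated against the server time at query execution.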
diff --git a/src/UserGuide/Master/Tree/Background-knowledge/Navigating_Time_Series_Data.md b/src/UserGuide/Master/Tree/Background-knowledge/Navigating_Time_Series_Data.md index e365acb32..121373d1c 100644 --- a/src/UserGuide/Master/Tree/Background-knowledge/Navigating_Time_Series_Data.md +++ b/src/UserGuide/Master/Tree/Background-knowledge/Navigating_Time_Series_Data.md @@ -20,7 +20,7 @@ --> # Entering Time Series Data -## What Is Time Series Data? +## 1. What Is Time Series Data? In today's era of the Internet of Things, various scenarios such as the Internet of Things and industrial scenarios are undergoing digital transformation. People collect various states of devices by installing sensors on them. If the motor collects voltage and current, the blade speed, angular velocity, and power generation of the fan; Vehicle collection of latitude and longitude, speed, and fuel consumption; The vibration frequency, deflection, displacement, etc. of the bridge. The data collection of sensors has penetrated into various industries. @@ -32,19 +32,19 @@ Generally speaking, we refer to each collection point as a measurement point (al The massive time series data generated by sensors is the foundation of digital transformation in various industries, so our modeling of time series data mainly focuses on equipment and sensors. -## Key Concepts of Time Series Data +## 2. Key Concepts of Time Series Data The main concepts involved in time-series data can be divided from bottom to top: data points, measurement points, and equipment. ![](/img/time-series-data-en-04.png) -### Data Point +### 2.1 Data Point - Definition: Consists of a timestamp and a value, where the timestamp is of type long and the value can be of various types such as BOOLEAN, FLOAT, INT32, etc. - Example: A row of a time series in the form of a table in the above figure, or a point of a time series in the form of a graph, is a data point. ![](/img/time-series-data-en-03.png) -### Measurement Points +### 2.2 Measurement Points - Definition: It is a time series formed by multiple data points arranged in increments according to timestamps. Usually, a measuring point represents a collection point and can regularly collect physical quantities of the environment it is located in. - Also known as: physical quantity, time series, timeline, semaphore, indicator, measurement value, etc @@ -54,7 +54,7 @@ The main concepts involved in time-series data can be divided from bottom to top - Vehicle networking scenarios: fuel consumption, vehicle speed, longitude, dimensions - Factory scenario: temperature, humidity -### Device +### 2.3 Device - Definition: Corresponding to a physical device in an actual scene, usually a collection of measurement points, identified by one to multiple labels - Example: diff --git a/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_apache.md b/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_apache.md index 3b2b6de9d..688211f3e 100644 --- a/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_apache.md +++ b/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_apache.md @@ -21,9 +21,9 @@ # Timeseries Management -## Database Management +## 1. Database Management -### Create Database +### 1.1 Create Database According to the storage model we can set up the corresponding database. 
Two SQL statements are supported for creating databases, as follows: @@ -49,7 +49,7 @@ The LayerName of database can only be chinese or english characters, numbers, un Besides, if deploy on Windows system, the LayerName is case-insensitive, which means it's not allowed to create databases `root.ln` and `root.LN` at the same time. -### Show Databases +### 1.2 Show Databases After creating the database, we can use the [SHOW DATABASES](../SQL-Manual/SQL-Manual.md) statement and [SHOW DATABASES \](../SQL-Manual/SQL-Manual.md) to view the databases. The SQL statements are as follows: @@ -71,7 +71,7 @@ Total line number = 2 It costs 0.060s ``` -### Delete Database +### 1.3 Delete Database User can use the `DELETE DATABASE ` statement to delete all databases matching the pathPattern. Please note the data in the database will also be deleted. @@ -82,7 +82,7 @@ IoTDB > DELETE DATABASE root.sgcc IoTDB > DELETE DATABASE root.** ``` -### Count Databases +### 1.4 Count Databases User can use the `COUNT DATABASE ` statement to count the number of databases. It is allowed to specify `PathPattern` to count the number of databases matching the `PathPattern`. @@ -141,7 +141,7 @@ Total line number = 1 It costs 0.002s ``` -### Setting up heterogeneous databases (Advanced operations) +### 1.5 Setting up heterogeneous databases (Advanced operations) Under the premise of familiar with IoTDB metadata modeling, users can set up heterogeneous databases in IoTDB to cope with different production needs. @@ -236,7 +236,7 @@ The query results in each column are as follows: + The required minimum DataRegionGroup number of the Database + The permitted maximum DataRegionGroup number of the Database -### TTL +### 1.6 TTL IoTDB supports device-level TTL settings, which means it is able to delete old data automatically and periodically. The benefit of using TTL is that hopefully you can control the total disk space usage and prevent the machine from running out of disks. Moreover, the query performance may downgrade as the total number of files goes up and the memory usage also increases as there are more files. Timely removing such files helps to keep at a high query performance level and reduce memory usage. @@ -348,7 +348,7 @@ IoTDB> show devices ``` All devices will definitely have a TTL, meaning it cannot be null. INF represents infinity. -## Device Template +## 2. Device Template IoTDB supports the device template function, enabling different entities of the same type to share metadata, reduce the memory usage of metadata, and simplify the management of numerous entities and measurements. @@ -356,7 +356,7 @@ IoTDB supports the device template function, enabling different entities of the ![img](/img/templateEN.jpg) -### Create Device Template +### 2.1 Create Device Template The SQL syntax for creating a metadata template is as follows: @@ -379,7 +379,7 @@ IoTDB> create device template t2 aligned (lat FLOAT encoding=Gorilla, lon FLOAT The` lat` and `lon` measurements are aligned. -### Set Device Template +### 2.2 Set Device Template After a device template is created, it should be set to specific path before creating related timeseries or insert data. @@ -395,7 +395,7 @@ The SQL Statement for setting device template is as follow: IoTDB> set device template t1 to root.sg1.d1 ``` -### Activate Device Template +### 2.3 Activate Device Template After setting the device template, with the system enabled to auto create schema, you can insert data into the timeseries. 
For example, suppose there's a database root.sg1 and t1 has been set to root.sg1.d1, then timeseries like root.sg1.d1.temperature and root.sg1.d1.status are available and data points can be inserted. @@ -447,7 +447,7 @@ show devices root.sg1.** +---------------+---------+ ```` -### Show Device Template +### 2.4 Show Device Template - Show all device templates @@ -519,7 +519,7 @@ The execution result is as follows: +-----------+ ``` -### Deactivate device Template +### 2.5 Deactivate device Template To delete a group of timeseries represented by device template, namely deactivate the device template, use the following SQL statement: @@ -547,7 +547,7 @@ IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.* If the template name is not provided in sql, all template activation on paths matched by given path pattern will be removed. -### Unset Device Template +### 2.6 Unset Device Template The SQL Statement for unsetting device template is as follow: @@ -557,7 +557,7 @@ IoTDB> unset device template t1 from root.sg1.d1 **Attention**: It should be guaranteed that none of the timeseries represented by the target device template exists, before unset it. It can be achieved by deactivation operation. -### Drop Device Template +### 2.7 Drop Device Template The SQL Statement for dropping device template is as follow: @@ -567,7 +567,7 @@ IoTDB> drop device template t1 **Attention**: Dropping an already set template is not supported. -### Alter Device Template +### 2.8 Alter Device Template In a scenario where measurements need to be added, you can modify the template to add measurements to all devicesdevice using the device template. @@ -579,9 +579,9 @@ IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, FLOAT TEXT encodi **When executing data insertion to devices with device template set on related prefix path and there are measurements not present in this device template, the measurements will be auto added to this device template.** -## Timeseries Management +## 3. Timeseries Management -### Create Timeseries +### 3.1 Create Timeseries According to the storage model selected before, we can create corresponding timeseries in the two databases respectively. The SQL statements for creating timeseries are as follows: @@ -614,7 +614,7 @@ error: encoding TS_2DIFF does not support BOOLEAN Please refer to [Encoding](../Technical-Insider/Encoding-and-Compression.md) for correspondence between data type and encoding. -### Create Aligned Timeseries +### 3.2 Create Aligned Timeseries The SQL statement for creating a group of timeseries are as follows: @@ -626,7 +626,7 @@ You can set different datatype, encoding, and compression for the timeseries in It is also supported to set an alias, tag, and attribute for aligned timeseries. -### Delete Timeseries +### 3.3 Delete Timeseries To delete the timeseries we created before, we are able to use `(DELETE | DROP) TimeSeries ` statement. @@ -639,7 +639,7 @@ IoTDB> delete timeseries root.ln.wf02.* IoTDB> drop timeseries root.ln.wf02.* ``` -### Show Timeseries +### 3.4 Show Timeseries * SHOW LATEST? TIMESERIES pathPattern? whereClause? limitClause? @@ -751,7 +751,7 @@ It costs 0.016s It is worth noting that when the queried path does not exist, the system will return no timeseries. -### Count Timeseries +### 3.5 Count Timeseries IoTDB is able to use `COUNT TIMESERIES ` to count the number of timeseries matching the path. 
SQL statements are as follows: @@ -836,7 +836,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. -### Tag and Attribute Management +### 3.6 Tag and Attribute Management We can also add an alias, extra tag and attribute information while creating one timeseries. @@ -1011,9 +1011,9 @@ IoTDB> show timeseries where TAGS(tag1)='v1' The above operations are supported for timeseries tag, attribute updates, etc. -## Path query +## 4. Path query -### Path +### 4.1 Path A `path` is an expression that conforms to the following constraints: @@ -1033,7 +1033,7 @@ wildcard ; ``` -### NodeName +### 4.2 NodeName - The parts of a path separated by `.` are called node names (`nodeName`). - For example, `root.a.b.c` is a path with a depth of 4 levels, where `root`, `a`, `b`, and `c` are all node names. @@ -1048,11 +1048,11 @@ wildcard - UNICODE Chinese characters (`\u2E80` to `\u9FFF`) - **Case sensitivity**: On Windows systems, path node names in the database are case-insensitive. For example, `root.ln` and `root.LN` are considered the same path. -### Special Characters (Backquote) +### 4.3 Special Characters (Backquote) If special characters (such as spaces or punctuation marks) are needed in a `nodeName`, you can enclose the node name in Backquote (`). For more information on the use of backticks, please refer to [Backquote](../SQL-Manual/Syntax-Rule.md#reverse-quotation-marks). -### Path Pattern +### 4.4 Path Pattern To make it more convenient and efficient to express multiple time series, IoTDB provides paths with wildcards `*` and `**`. Wildcards can appear in any level of a path. @@ -1065,7 +1065,7 @@ To make it more convenient and efficient to express multiple time series, IoTDB **Note**: `*` and `**` cannot be placed at the beginning of a path. -### Show Child Paths +### 4.5 Show Child Paths ``` SHOW CHILD PATHS pathPattern @@ -1093,7 +1093,7 @@ It costs 0.002s > get all paths in form of root.xx.xx.xx:show child paths root.xx.xx -### Show Child Nodes +### 4.6 Show Child Nodes ``` SHOW CHILD NODES pathPattern @@ -1124,7 +1124,7 @@ Example: +------------+ ``` -### Count Nodes +### 4.7 Count Nodes IoTDB is able to use `COUNT NODES LEVEL=` to count the number of nodes at the given level in current Metadata Tree considering a given pattern. IoTDB will find paths that @@ -1177,7 +1177,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. -### Show Devices +### 4.8 Show Devices * SHOW DEVICES pathPattern? (WITH DATABASE)? devicesWhereClause? limitClause? @@ -1258,7 +1258,7 @@ Total line number = 2 It costs 0.001s ``` -### Count Devices +### 4.9 Count Devices * COUNT DEVICES / diff --git a/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_timecho.md b/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_timecho.md index 4380b55a2..b101a9057 100644 --- a/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_timecho.md +++ b/src/UserGuide/Master/Tree/Basic-Concept/Operate-Metadata_timecho.md @@ -21,9 +21,9 @@ # Timeseries Management -## Database Management +## 1. Database Management -### Create Database +### 1.1 Create Database According to the storage model we can set up the corresponding database. 
Two SQL statements are supported for creating databases, as follows: @@ -49,7 +49,7 @@ The LayerName of database can only be chinese or english characters, numbers, un Besides, if deploy on Windows system, the LayerName is case-insensitive, which means it's not allowed to create databases `root.ln` and `root.LN` at the same time. -### Show Databases +### 1.2 Show Databases After creating the database, we can use the [SHOW DATABASES](../SQL-Manual/SQL-Manual.md) statement and [SHOW DATABASES \](../SQL-Manual/SQL-Manual.md) to view the databases. The SQL statements are as follows: @@ -71,7 +71,7 @@ Total line number = 2 It costs 0.060s ``` -### Delete Database +### 1.3 Delete Database User can use the `DELETE DATABASE ` statement to delete all databases matching the pathPattern. Please note the data in the database will also be deleted. @@ -82,7 +82,7 @@ IoTDB > DELETE DATABASE root.sgcc IoTDB > DELETE DATABASE root.** ``` -### Count Databases +### 1.4 Count Databases User can use the `COUNT DATABASE ` statement to count the number of databases. It is allowed to specify `PathPattern` to count the number of databases matching the `PathPattern`. @@ -141,7 +141,7 @@ Total line number = 1 It costs 0.002s ``` -### Setting up heterogeneous databases (Advanced operations) +### 1.5 Setting up heterogeneous databases (Advanced operations) Under the premise of familiar with IoTDB metadata modeling, users can set up heterogeneous databases in IoTDB to cope with different production needs. @@ -236,7 +236,7 @@ The query results in each column are as follows: + The required minimum DataRegionGroup number of the Database + The permitted maximum DataRegionGroup number of the Database -### TTL +### 1.6 TTL IoTDB supports device-level TTL settings, which means it is able to delete old data automatically and periodically. The benefit of using TTL is that hopefully you can control the total disk space usage and prevent the machine from running out of disks. Moreover, the query performance may downgrade as the total number of files goes up and the memory usage also increases as there are more files. Timely removing such files helps to keep at a high query performance level and reduce memory usage. @@ -349,12 +349,12 @@ IoTDB> show devices All devices will definitely have a TTL, meaning it cannot be null. INF represents infinity. -## Device Template +## 2. Device Template IoTDB supports the device template function, enabling different entities of the same type to share metadata, reduce the memory usage of metadata, and simplify the management of numerous entities and measurements. -### Create Device Template +### 2.1 Create Device Template The SQL syntax for creating a metadata template is as follows: @@ -380,7 +380,7 @@ The` lat` and `lon` measurements are aligned. ![img](/img/templateEN.jpg) -### Set Device Template +### 2.2 Set Device Template After a device template is created, it should be set to specific path before creating related timeseries or insert data. @@ -396,7 +396,7 @@ The SQL Statement for setting device template is as follow: IoTDB> set device template t1 to root.sg1.d1 ``` -### Activate Device Template +### 2.3 Activate Device Template After setting the device template, with the system enabled to auto create schema, you can insert data into the timeseries. For example, suppose there's a database root.sg1 and t1 has been set to root.sg1.d1, then timeseries like root.sg1.d1.temperature and root.sg1.d1.status are available and data points can be inserted. 
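As a minimal illustration, with auto schema creation enabled, a single write such as the following is enough to activate `t1` on `root.sg1.d1` and create the corresponding measurements (the values are arbitrary sample data):

```
IoTDB> insert into root.sg1.d1(timestamp, temperature, status) values (1, 25.1, true)
```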
@@ -448,7 +448,7 @@ show devices root.sg1.** +---------------+---------+ ```` -### Show Device Template +### 2.4 Show Device Template - Show all device templates @@ -520,7 +520,7 @@ The execution result is as follows: +-----------+ ``` -### Deactivate device Template +### 2.5 Deactivate device Template To delete a group of timeseries represented by device template, namely deactivate the device template, use the following SQL statement: @@ -548,7 +548,7 @@ IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.* If the template name is not provided in sql, all template activation on paths matched by given path pattern will be removed. -### Unset Device Template +### 2.6 Unset Device Template The SQL Statement for unsetting device template is as follow: @@ -558,7 +558,7 @@ IoTDB> unset device template t1 from root.sg1.d1 **Attention**: It should be guaranteed that none of the timeseries represented by the target device template exists, before unset it. It can be achieved by deactivation operation. -### Drop Device Template +### 2.7 Drop Device Template The SQL Statement for dropping device template is as follow: @@ -568,7 +568,7 @@ IoTDB> drop device template t1 **Attention**: Dropping an already set template is not supported. -### Alter Device Template +### 2.8 Alter Device Template In a scenario where measurements need to be added, you can modify the template to add measurements to all devicesdevice using the device template. @@ -580,9 +580,9 @@ IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, FLOAT TEXT encodi **When executing data insertion to devices with device template set on related prefix path and there are measurements not present in this device template, the measurements will be auto added to this device template.** -## Timeseries Management +## 3. Timeseries Management -### Create Timeseries +### 3.1 Create Timeseries According to the storage model selected before, we can create corresponding timeseries in the two databases respectively. The SQL statements for creating timeseries are as follows: @@ -615,7 +615,7 @@ error: encoding TS_2DIFF does not support BOOLEAN Please refer to [Encoding](../Technical-Insider/Encoding-and-Compression.md) for correspondence between data type and encoding. -### Create Aligned Timeseries +### 3.2 Create Aligned Timeseries The SQL statement for creating a group of timeseries are as follows: @@ -627,7 +627,7 @@ You can set different datatype, encoding, and compression for the timeseries in It is also supported to set an alias, tag, and attribute for aligned timeseries. -### Delete Timeseries +### 3.3 Delete Timeseries To delete the timeseries we created before, we are able to use `(DELETE | DROP) TimeSeries ` statement. @@ -640,7 +640,7 @@ IoTDB> delete timeseries root.ln.wf02.* IoTDB> drop timeseries root.ln.wf02.* ``` -### Show Timeseries +### 3.4 Show Timeseries * SHOW LATEST? TIMESERIES pathPattern? whereClause? limitClause? @@ -752,7 +752,7 @@ It costs 0.016s It is worth noting that when the queried path does not exist, the system will return no timeseries. -### Count Timeseries +### 3.5 Count Timeseries IoTDB is able to use `COUNT TIMESERIES ` to count the number of timeseries matching the path. SQL statements are as follows: @@ -837,7 +837,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. 
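To make the note above concrete, the path pattern in the statement below only restricts which timeseries are counted, while `LEVEL=2` is still interpreted against the full path starting from `root` (the pattern uses the example data of this chapter):

```
IoTDB> count timeseries root.ln.** group by level=2
```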
-### Active Timeseries Query +### 3.6 Active Timeseries Query By adding WHERE time filter conditions to the existing SHOW/COUNT TIMESERIES, we can obtain time series with data within the specified time range. It is important to note that in metadata queries with time filters, views are not considered; only the time series actually stored in the TsFile are taken into account. @@ -877,7 +877,7 @@ IoTDB> count timeseries where time >= 15000 and time < 16000; +-----------------+ ``` Regarding the definition of active time series, data that can be queried normally is considered active, meaning time series that have been inserted but deleted are not included. -### Tag and Attribute Management +### 3.7 Tag and Attribute Management We can also add an alias, extra tag and attribute information while creating one timeseries. @@ -1052,9 +1052,9 @@ IoTDB> show timeseries where TAGS(tag1)='v1' The above operations are supported for timeseries tag, attribute updates, etc. -## Path query +## 4. Path query -### Path +### 4.1 Path A `path` is an expression that conforms to the following constraints: @@ -1074,7 +1074,7 @@ wildcard ; ``` -### NodeName +### 4.2 NodeName - The parts of a path separated by `.` are called node names (`nodeName`). - For example, `root.a.b.c` is a path with a depth of 4 levels, where `root`, `a`, `b`, and `c` are all node names. @@ -1089,11 +1089,11 @@ wildcard - UNICODE Chinese characters (`\u2E80` to `\u9FFF`) - **Case sensitivity**: On Windows systems, path node names in the database are case-insensitive. For example, `root.ln` and `root.LN` are considered the same path. -### Special Characters (Backquote) +### 4.3 Special Characters (Backquote) If special characters (such as spaces or punctuation marks) are needed in a `nodeName`, you can enclose the node name in Backquote (`). For more information on the use of backticks, please refer to [Backquote](../SQL-Manual/Syntax-Rule.md#reverse-quotation-marks). -### Path Pattern +### 4.4 Path Pattern To make it more convenient and efficient to express multiple time series, IoTDB provides paths with wildcards `*` and `**`. Wildcards can appear in any level of a path. @@ -1106,7 +1106,7 @@ To make it more convenient and efficient to express multiple time series, IoTDB **Note**: `*` and `**` cannot be placed at the beginning of a path. -### Show Child Paths +### 4.5 Show Child Paths ``` SHOW CHILD PATHS pathPattern @@ -1134,7 +1134,7 @@ It costs 0.002s > get all paths in form of root.xx.xx.xx:show child paths root.xx.xx -### Show Child Nodes +### 4.6 Show Child Nodes ``` SHOW CHILD NODES pathPattern @@ -1165,7 +1165,7 @@ Example: +------------+ ``` -### Count Nodes +### 4.7 Count Nodes IoTDB is able to use `COUNT NODES LEVEL=` to count the number of nodes at the given level in current Metadata Tree considering a given pattern. IoTDB will find paths that @@ -1218,7 +1218,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. -### Show Devices +### 4.8 Show Devices * SHOW DEVICES pathPattern? (WITH DATABASE)? devicesWhereClause? limitClause? @@ -1299,7 +1299,7 @@ Total line number = 2 It costs 0.001s ``` -### Count Devices +### 4.9 Count Devices * COUNT DEVICES / @@ -1344,7 +1344,7 @@ Total line number = 1 It costs 0.004s ``` -### Active Device Query +### 4.10 Active Device Query Similar to active timeseries query, we can add time filter conditions to device viewing and statistics to query active devices that have data within a certain time range. 
The definition of active here is the same as for active time series. An example usage is as follows: ``` IoTDB> insert into root.sg.data(timestamp, s1,s2) values(15000, 1, 2); diff --git a/src/UserGuide/Master/Tree/Basic-Concept/Query-Data.md b/src/UserGuide/Master/Tree/Basic-Concept/Query-Data.md index d312503c9..f98c5ec37 100644 --- a/src/UserGuide/Master/Tree/Basic-Concept/Query-Data.md +++ b/src/UserGuide/Master/Tree/Basic-Concept/Query-Data.md @@ -19,9 +19,9 @@ --> # Query Data -## OVERVIEW +## 1. OVERVIEW -### Syntax Definition +### 1.1 Syntax Definition In IoTDB, `SELECT` statement is used to retrieve data from one or more selected time series. Here is the syntax definition of `SELECT` statement: @@ -47,7 +47,7 @@ SELECT [LAST] selectExpr [, selectExpr] ... [ALIGN BY {TIME | DEVICE}] ``` -### Syntax Description +### 1.2 Syntax Description #### `SELECT` clause @@ -107,7 +107,7 @@ SELECT [LAST] selectExpr [, selectExpr] ... - The query result set is **ALIGN BY TIME** by default, including a time column and several value columns, and the timestamps of each column of data in each row are the same. - It also supports **ALIGN BY DEVICE**. The query result set contains a time column, a device column, and several value columns. -### Basic Examples +### 1.3 Basic Examples #### Select a Column of Data Based on a Time Interval @@ -264,7 +264,7 @@ Total line number = 10 It costs 0.016s ``` -### Execution Interface +### 1.4 Execution Interface In IoTDB, there are two ways to execute data query: @@ -331,7 +331,7 @@ SessionDataSet executeAggregationQuery( long slidingStep); ``` -## `SELECT` CLAUSE +## 2. `SELECT` CLAUSE The `SELECT` clause specifies the output of the query, consisting of several `selectExpr`. Each `selectExpr` defines one or more columns in the query result. For select expression details, see document [Operator-and-Expression](../SQL-Manual/Operator-and-Expression.md). - Example 1: @@ -346,7 +346,7 @@ select temperature from root.ln.wf01.wt01 select status, temperature from root.ln.wf01.wt01 ``` -### Last Query +### 2.1 Last Query The last query is a special type of query in Apache IoTDB. It returns the data point with the largest timestamp of the specified time series. In other word, it returns the latest state of a time series. This feature is especially important in IoT data analysis scenarios. To meet the performance requirement of real-time device monitoring systems, Apache IoTDB caches the latest values of all time series to achieve microsecond read latency. @@ -427,7 +427,7 @@ Total line number = 2 It costs 0.002s ``` -## `WHERE` CLAUSE +## 3. `WHERE` CLAUSE In IoTDB query statements, two filter conditions, **time filter** and **value filter**, are supported. @@ -438,7 +438,7 @@ The supported operators are as follows: - Range contains operator: contains ( `IN` ). - String matches operator: `LIKE`, `REGEXP`. -### Time Filter +### 3.1 Time Filter Use time filters to filter data for a specific time range. For supported formats of timestamps, please refer to [Timestamp](../Background-knowledge/Data-Type.md) . @@ -464,7 +464,7 @@ An example is as follows: Note: In the above example, `time` can also be written as `timestamp`. -### Value Filter +### 3.2 Value Filter Use value filters to filter data whose data values meet certain criteria. **Allow** to use a time series not selected in the select clause as a value filter. 
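For instance, a series that does not appear in the select clause can still drive the filter; with the example data of this chapter, the following returns `status` values only for rows whose `temperature` exceeds 24.0:

```
select status from root.ln.wf01.wt01 where temperature > 24.0
```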
@@ -516,7 +516,7 @@ An example is as follows: select code from root.sg1.d1 where temperature is not null; ```` -### Fuzzy Query +### 3.3 Fuzzy Query Fuzzy query is divided into Like statement and Regexp statement, both of which can support fuzzy matching of TEXT type data. @@ -599,7 +599,7 @@ Total line number = 2 It costs 0.002s ``` -## `GROUP BY` CLAUSE +## 4. `GROUP BY` CLAUSE IoTDB supports using `GROUP BY` clause to aggregate the time series by segment and group. @@ -607,7 +607,7 @@ Segmented aggregation refers to segmenting data in the row direction according t Group aggregation refers to grouping the potential business attributes of time series for different time series. Each group contains several time series, and each group gets an aggregated value. Support **group by path level** and **group by tag** two grouping methods. -### Aggregate By Segment +### 4.1 Aggregate By Segment #### Aggregate By Time @@ -1252,7 +1252,7 @@ Get the results: +-----------------------------+-----------------------------+--------------------------------------+ ``` -### Aggregate By Group +### 4.2 Aggregate By Group #### Aggregation By Level @@ -1582,7 +1582,7 @@ As this feature is still under development, some queries have not been completed > 5. Temporarily not support expressions as aggregation function parameter,e.g. `count(s+1)`. > 6. Not support the value filter, which stands the same with the `GROUP BY LEVEL` query. -## `HAVING` CLAUSE +## 5. `HAVING` CLAUSE If you want to filter the results of aggregate queries, you can use the `HAVING` clause after the `GROUP BY` clause. @@ -1679,15 +1679,15 @@ Filtering result 2: +-----------------------------+-------------+---------+---------+ ``` -## `FILL` CLAUSE +## 6. `FILL` CLAUSE -### Introduction +### 6.1 Introduction When executing some queries, there may be no data for some columns in some rows, and data in these locations will be null, but this kind of null value is not conducive to data visualization and analysis, and the null value needs to be filled. In IoTDB, users can use the FILL clause to specify the fill mode when data is missing. Fill null value allows the user to fill any query result with null values according to a specific method, such as taking the previous value that is not null, or linear interpolation. The query result after filling the null value can better reflect the data distribution, which is beneficial for users to perform data analysis. -### Syntax Definition +### 6.2 Syntax Definition **The following is the syntax definition of the `FILL` clause:** @@ -1700,7 +1700,7 @@ FILL '(' PREVIOUS | LINEAR | constant ')' - We can specify only one fill method in the `FILL` clause, and this method applies to all columns of the result set. - Null value fill is not compatible with version 0.13 and previous syntax (`FILL(([(, , )?])+)`) is not supported anymore. -### Fill Methods +### 6.3 Fill Methods **IoTDB supports the following three fill methods:** @@ -1994,14 +1994,14 @@ result will be like: Total line number = 4 ``` -## `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) +## 7. `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) When the query result set has a large amount of data, it is not conducive to display on one page. You can use the `LIMIT/SLIMIT` clause and the `OFFSET/SOFFSET` clause to control paging. - The `LIMIT` and `SLIMIT` clauses are used to control the number of rows and columns of query results. - The `OFFSET` and `SOFFSET` clauses are used to control the starting position of the result display. 
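For example, using the example series of this chapter, the first statement below pages through rows (returning the 6th to the 15th row of the result), while the second keeps only the second measurement column; the paths are illustrative:

```
select status, temperature from root.ln.wf01.wt01 limit 10 offset 5
select * from root.ln.wf01.wt01 slimit 1 soffset 1
```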
-### Row Control over Query Results +### 7.1 Row Control over Query Results By using LIMIT and OFFSET clauses, users control the query results in a row-related manner. We demonstrate how to use LIMIT and OFFSET clauses through the following examples. @@ -2121,7 +2121,7 @@ Total line number = 4 It costs 0.016s ``` -### Column Control over Query Results +### 7.2 Column Control over Query Results By using SLIMIT and SOFFSET clauses, users can control the query results in a column-related manner. We will demonstrate how to use SLIMIT and SOFFSET clauses through the following examples. @@ -2209,7 +2209,7 @@ Total line number = 7 It costs 0.000s ``` -### Row and Column Control over Query Results +### 7.3 Row and Column Control over Query Results In addition to row or column control over query results, IoTDB allows users to control both rows and columns of query results. Here is a complete example with both LIMIT clauses and SLIMIT clauses. @@ -2244,7 +2244,7 @@ Total line number = 10 It costs 0.009s ``` -### Error Handling +### 7.4 Error Handling If the parameter N/SN of LIMIT/SLIMIT exceeds the size of the result set, IoTDB returns all the results as expected. For example, the query result of the original SQL statement consists of six rows, and we select the first 100 rows through the LIMIT clause: @@ -2322,9 +2322,9 @@ The SQL statement will not be executed and the corresponding error prompt is giv Msg: 411: Meet error in query process: The value of SOFFSET (2) is equal to or exceeds the number of sequences (2) that can actually be returned. ``` -## `ORDER BY` CLAUSE +## 8. `ORDER BY` CLAUSE -### Order by in ALIGN BY TIME mode +### 8.1 Order by in ALIGN BY TIME mode The result set of IoTDB is in ALIGN BY TIME mode by default and `ORDER BY TIME` clause can also be used to specify the ordering of timestamp. The SQL statement is: @@ -2345,7 +2345,7 @@ Results: +-----------------------------+--------------------------+------------------------+-----------------------------+------------------------+ ``` -### Order by in ALIGN BY DEVICE mode +### 8.2 Order by in ALIGN BY DEVICE mode When querying in ALIGN BY DEVICE mode, `ORDER BY` clause can be used to specify the ordering of result set. @@ -2447,7 +2447,7 @@ The result shows below: +-----------------------------+-----------------+---------------+-------------+------------------+ ``` -### Order by arbitrary expressions +### 8.3 Order by arbitrary expressions In addition to the predefined keywords "Time" and "Device" in IoTDB, `ORDER BY` can also be used to sort by any expressions. @@ -2620,11 +2620,11 @@ This will give you the following results: +-----------------------------+---------+-----+ ``` -## `ALIGN BY` CLAUSE +## 9. `ALIGN BY` CLAUSE In addition, IoTDB supports another result set format: `ALIGN BY DEVICE`. -### Align by Device +### 9.1 Align by Device The `ALIGN BY DEVICE` indicates that the deviceId is considered as a column. Therefore, there are totally limited columns in the dataset. @@ -2657,11 +2657,11 @@ Total line number = 6 It costs 0.012s ``` -### Ordering in ALIGN BY DEVICE +### 9.2 Ordering in ALIGN BY DEVICE ALIGN BY DEVICE mode arranges according to the device first, and sort each device in ascending order according to the timestamp. The ordering and priority can be adjusted through `ORDER BY` clause. -## `INTO` CLAUSE (QUERY WRITE-BACK) +## 10. `INTO` CLAUSE (QUERY WRITE-BACK) The `SELECT INTO` statement copies data from query result set into target time series. 
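A minimal form of the statement, using the naming that also appears in the syntax examples later in this section, copies two queried series into a new device:

```
select s1, s2 into root.sg_copy.d1(t1, t2) from root.sg.d1;
```

Here `root.sg.d1.s1` and `root.sg.d1.s2` are the source series, and `root.sg_copy.d1.t1` and `root.sg_copy.d1.t2` are the target series that receive the copied data.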
@@ -2671,7 +2671,7 @@ The application scenarios are as follows: - **Query result storage**: Persistently store the query results, which acts like a materialized view. - **Non-aligned time series to aligned time series**: Rewrite non-aligned time series into another aligned time series. -### SQL Syntax +### 10.1 SQL Syntax #### Syntax Definition @@ -2938,7 +2938,7 @@ This statement specifies that `root.sg_copy.d1` is an unaligned device and `root - When the target time series does not exist, the system automatically creates it (including the database). - When the queried time series does not exist, or the queried sequence does not have data, the target time series will not be created automatically. -### Application examples +### 10.2 Application examples #### Implement IoTDB internal ETL @@ -2995,7 +2995,7 @@ Total line number = 2 It costs 0.375s ``` -### User Permission Management +### 10.3 User Permission Management The user must have the following permissions to execute a query write-back statement: @@ -3004,6 +3004,6 @@ The user must have the following permissions to execute a query write-back state For more user permissions related content, please refer to [Account Management Statements](../User-Manual/Authority-Management.md). -### Configurable Properties +### 10.4 Configurable Properties * `select_into_insert_tablet_plan_row_limit`: The maximum number of rows can be processed in one insert-tablet-plan when executing select-into statements. 10000 by default. diff --git a/src/UserGuide/Master/Tree/Basic-Concept/Write-Delete-Data.md b/src/UserGuide/Master/Tree/Basic-Concept/Write-Delete-Data.md index 3d8fdb3a0..8c009eac1 100644 --- a/src/UserGuide/Master/Tree/Basic-Concept/Write-Delete-Data.md +++ b/src/UserGuide/Master/Tree/Basic-Concept/Write-Delete-Data.md @@ -21,7 +21,7 @@ # Write & Delete Data -## CLI INSERT +## 1. CLI INSERT IoTDB provides users with a variety of ways to insert real-time data, such as directly inputting [INSERT SQL statement](../SQL-Manual/SQL-Manual.md#insert-data) in [Client/Shell tools](../Tools-System/CLI.md), or using [Java JDBC](../API/Programming-JDBC.md) to perform single or batch execution of [INSERT SQL statement](../SQL-Manual/SQL-Manual.md). @@ -29,7 +29,7 @@ NOTE: This section mainly introduces the use of [INSERT SQL statement](../SQL- Writing a repeat timestamp covers the original timestamp data, which can be regarded as updated data. -### Use of INSERT Statements +### 1.1 Use of INSERT Statements The [INSERT SQL statement](../SQL-Manual/SQL-Manual.md#insert-data) statement is used to insert data into one or more specified timeseries created. For each point of data inserted, it consists of a [timestamp](../Basic-Concept/Operate-Metadata.md) and a sensor acquisition value (see [Data Type](../Background-knowledge/Data-Type.md)). @@ -89,7 +89,7 @@ IoTDB > insert into root.ln.wf02.wt02(status, hardware) values (false, 'v2') **Note:** Timestamps must be specified when inserting multiple rows of data in a SQL. -### Insert Data Into Aligned Timeseries +### 1.2 Insert Data Into Aligned Timeseries To insert data into a group of aligned time series, we only need to add the `ALIGNED` keyword in SQL, and others are similar. @@ -116,11 +116,11 @@ Total line number = 3 It costs 0.004s ``` -## NATIVE API WRITE +## 2. NATIVE API WRITE The Native API ( Session ) is the most widely used series of APIs of IoTDB, including multiple APIs, adapted to different data collection scenarios, with high performance and multi-language support. 
-### Multi-language API write +### 2.1 Multi-language API write #### Java @@ -139,7 +139,7 @@ Refer to [ C++ Data Manipulation Interface (DML) ](../API/Programming-Cpp-Native Refer to [Go Native API](../API/Programming-Go-Native-API.md) -## REST API WRITE +## 3. REST API WRITE Refer to [insertTablet (v1)](../API/RestServiceV1.md#inserttablet) or [insertTablet (v2)](../API/RestServiceV2.md#inserttablet) @@ -177,29 +177,29 @@ Example: } ``` -## MQTT WRITE +## 4. MQTT WRITE Refer to [Built-in MQTT Service](../API/Programming-MQTT.md#built-in-mqtt-service) -## BATCH DATA LOAD +## 5. BATCH DATA LOAD In different scenarios, the IoTDB provides a variety of methods for importing data in batches. This section describes the two most common methods for importing data in CSV format and TsFile format. -### TsFile Batch Load +### 5.1 TsFile Batch Load TsFile is the file format of time series used in IoTDB. You can directly import one or more TsFile files with time series into another running IoTDB instance through tools such as CLI. For details, see [Data Import](../Tools-System/Data-Import-Tool.md). -### CSV Batch Load +### 5.2 CSV Batch Load CSV stores table data in plain text. You can write multiple formatted data into a CSV file and import the data into the IoTDB in batches. Before importing data, you are advised to create the corresponding metadata in the IoTDB. Don't worry if you forget to create one, the IoTDB can automatically infer the data in the CSV to its corresponding data type, as long as you have a unique data type for each column. In addition to a single file, the tool supports importing multiple CSV files as folders and setting optimization parameters such as time precision. For details, see [Data Import](../Tools-System/Data-Import-Tool.md). -## DELETE +## 6. DELETE Users can delete data that meet the deletion condition in the specified timeseries by using the [DELETE statement](../SQL-Manual/SQL-Manual.md#delete-data). When deleting data, users can select one or more timeseries paths, prefix paths, or paths with star to delete data within a certain time interval. In a JAVA programming environment, you can use the [Java JDBC](../API/Programming-JDBC.md) to execute single or batch UPDATE statements. -### Delete Single Timeseries +### 6.1 Delete Single Timeseries Taking ln Group as an example, there exists such a usage scenario: @@ -242,7 +242,7 @@ delete from root.ln.wf02.wt02.status ``` -### Delete Multiple Timeseries +### 6.2 Delete Multiple Timeseries If both the power supply status and hardware version of the ln group wf02 plant wt02 device before 2017-11-01 16:26:00 need to be deleted, [the prefix path with broader meaning or the path with star](../Basic-Concept/Operate-Metadata.md) can be used to delete the data. The SQL statement for this operation is: @@ -263,7 +263,7 @@ IoTDB> delete from root.ln.wf03.wt02.status where time < now() Msg: The statement is executed successfully. 
``` -### Delete Time Partition (experimental) +### 6.3 Delete Time Partition (experimental) You may delete all data in a time partition of a database using the following grammar: diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_apache.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_apache.md index 3e749e161..057d4001d 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_apache.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_apache.md @@ -20,24 +20,24 @@ --> # AINode Deployment -## AINode Introduction +## 1. AINode Introduction -### Capability Introduction +### 1.1 Capability Introduction AINode is the third type of endogenous node provided by IoTDB after the Configurable Node and DataNode. This node extends its ability to perform machine learning analysis on time series by interacting with the DataNode and Configurable Node of the IoTDB cluster. It supports the introduction of existing machine learning models from external sources for registration and the use of registered models to complete time series analysis tasks on specified time series data through simple SQL statements. The creation, management, and inference of models are integrated into the database engine. Currently, machine learning algorithms or self-developed models are available for common time series analysis scenarios, such as prediction and anomaly detection. -### Delivery Method +### 1.2 Delivery Method It is an additional package outside the IoTDB cluster, with independent installation and activation (if you need to try or use it, please contact Timecho Technology Business or Technical Support). -### Deployment mode +### 1.3 Deployment mode
-## Installation preparation +## 2. Installation preparation -### Get installation package +### 2.1 Get installation package Users can download the software installation package for AINode, download and unzip it to complete the installation of AINode. @@ -53,7 +53,7 @@ | README_ZH.md | file | Explanation of the Chinese version of the markdown format | | `README.md` | file | Instructions | -### Environment preparation +### 2.2 Environment preparation - Suggested operating environment:Ubuntu, CentOS, MacOS - Runtime Environment @@ -68,9 +68,9 @@ ../Python-3.8.0/python -m venv `venv` ``` -## Installation steps +## 3. Installation steps -### Install AINode +### 3.1 Install AINode 1. Check the kernel architecture of Linux @@ -140,7 +140,7 @@ ``` > Return to the default environment of the system: conda deactivate - ### Configuration item modification +### 3.2 Configuration item modification AINode supports modifying some necessary parameters. You can find the following parameters in the `conf/iotdb-ainode.properties` file and make persistent modifications to them: : @@ -156,7 +156,7 @@ AINode supports modifying some necessary parameters. You can find the following | ain_logs_dir | The path where AINode stores logs, the starting directory of the relative path is related to the operating system, and it is recommended to use an absolute path | String | logs/AINode | Effective after restart | | ain_thrift_compression_enabled | Does AINode enable Thrift's compression mechanism , 0-Do not start, 1-Start | Boolean | 0 | Effective after restart | -### Start AINode +### 3.3 Start AINode After completing the deployment of Seed Config Node, the registration and inference functions of the model can be supported by adding AINode nodes. After specifying the information of the IoTDB cluster in the configuration file, the corresponding instruction can be executed to start AINode and join the IoTDB cluster。 @@ -214,7 +214,7 @@ AINode supports modifying some necessary parameters. You can find the following After writing the parameter value, uncomment the corresponding line and save it to take effect on the next script execution. 
-#### Example +#### Example ##### Directly start: @@ -251,7 +251,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t # Windows c nohup bash sbin\start-ainode.bat -r > myout.file 2>& 1 & ``` -#### Non networked environment startup +#### Non networked environment startup ##### Start command @@ -282,7 +282,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t sbin\start-ainode.bat -i -r -n ``` -##### Parameter introduction: +##### Parameter introduction: | **Name** | **Label** | **Describe** | **Is it mandatory** | **Type** | **Default value** | **Input method** | | ------------------- | ---- | ------------------------------------------------------------ | -------- | ------ | ---------------- | ---------------------- | @@ -291,7 +291,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t > Attention: When installation fails in a non networked environment, first check if the installation package corresponding to the platform is selected, and then confirm that the Python version is 3.8 (due to the limitations of the downloaded installation package on Python versions, 3.7, 3.9, and others are not allowed) -#### Example +#### Example ##### Directly start: @@ -309,7 +309,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t nohup bash sbin\start-ainode.bat > myout.file 2>& 1 & ``` -### Detecting the status of AINode nodes +### 3.4 Detecting the status of AINode nodes During the startup process of AINode, the new AINode will be automatically added to the IoTDB cluster. After starting AINode, you can enter SQL in the command line to query. If you see an AINode node in the cluster and its running status is Running (as shown below), it indicates successful joining. @@ -325,7 +325,7 @@ IoTDB> show cluster +------+----------+-------+---------------+------------+-------+-----------+ ``` -### Stop AINode +### 3.5 Stop AINode If you need to stop a running AINode node, execute the corresponding shutdown script. @@ -379,7 +379,7 @@ IoTDB> show cluster ``` If you need to restart the node, you need to execute the startup script again. -### Remove AINode +### 3.6 Remove AINode When it is necessary to remove an AINode node from the cluster, a removal script can be executed. The difference between removing and stopping scripts is that stopping retains the AINode node in the cluster but stops the AINode service, while removing removes the AINode node from the cluster. @@ -427,7 +427,7 @@ When it is necessary to remove an AINode node from the cluster, a removal script ``` After writing the parameter value, uncomment the corresponding line and save it to take effect on the next script execution. -#### Example +#### Example ##### Directly remove: @@ -461,9 +461,9 @@ If the user loses files in the data folder, AINode may not be able to actively r sbin\remove-ainode.bat -t /: ``` -## common problem +## 4. common problem -### An error occurs when starting AINode stating that the venv module cannot be found +### 4.1 An error occurs when starting AINode stating that the venv module cannot be found When starting AINode using the default method, a Python virtual environment will be created in the installation package directory and dependencies will be installed, so it is required to install the venv module. Generally speaking, Python 3.8 and above versions come with built-in VenV, but for some systems with built-in Python environments, this requirement may not be met. 
There are two solutions when this error occurs (choose one or the other): @@ -479,7 +479,7 @@ Install version 3.8.0 of venv into AINode in the AINode path. ``` When running the startup script, use ` -i ` to specify an existing Python interpreter path as the running environment for AINode, eliminating the need to create a new virtual environment. - ### The SSL module in Python is not properly installed and configured to handle HTTPS resources + ### 4.2 The SSL module in Python is not properly installed and configured to handle HTTPS resources WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. You can install OpenSSLS and then rebuild Python to solve this problem > Currently Python versions 3.6 to 3.9 are compatible with OpenSSL 1.0.2, 1.1.0, and 1.1.1. @@ -493,7 +493,7 @@ make sudo make install ``` - ### Pip version is lower + ### 4.3 Pip version is lower A compilation issue similar to "error: Microsoft Visual C++14.0 or greater is required..." appears on Windows @@ -505,7 +505,7 @@ The corresponding error occurs during installation and compilation, usually due ``` - ### Install and compile Python + ### 4.4 Install and compile Python Use the following instructions to download the installation package from the official website and extract it: ```shell diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_timecho.md index e82a62556..1bfc0699a 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/AINode_Deployment_timecho.md @@ -20,24 +20,24 @@ --> # AINode Deployment -## AINode Introduction +## 1. AINode Introduction -### Capability Introduction +### 1.1 Capability Introduction AINode is the third type of endogenous node provided by IoTDB after the Configurable Node and DataNode. This node extends its ability to perform machine learning analysis on time series by interacting with the DataNode and Configurable Node of the IoTDB cluster. It supports the introduction of existing machine learning models from external sources for registration and the use of registered models to complete time series analysis tasks on specified time series data through simple SQL statements. The creation, management, and inference of models are integrated into the database engine. Currently, machine learning algorithms or self-developed models are available for common time series analysis scenarios, such as prediction and anomaly detection. -### Delivery Method +### 1.2 Delivery Method It is an additional package outside the IoTDB cluster, with independent installation and activation (if you need to try or use it, please contact Timecho Technology Business or Technical Support). -### Deployment mode +### 1.3 Deployment mode
-## Installation preparation +## 2. Installation preparation -### Get installation package +### 2.1 Get installation package Users can download the software installation package for AINode, download and unzip it to complete the installation of AINode. @@ -53,7 +53,7 @@ | README_ZH.md | file | Explanation of the Chinese version of the markdown format | | `README.md` | file | Instructions | -### Environment preparation +### 2.2 Environment preparation - Suggested operating environment:Ubuntu, CentOS, MacOS - Runtime Environment @@ -68,9 +68,9 @@ ../Python-3.8.0/python -m venv `venv` ``` -## Installation steps +## 3. Installation steps -### Install AINode +### 3.1 Install AINode 1. AINode activation @@ -174,7 +174,7 @@ ``` > Return to the default environment of the system: conda deactivate - ### Configuration item modification + ### 3.2 Configuration item modification AINode supports modifying some necessary parameters. You can find the following parameters in the `conf/iotdb-ainode.properties` file and make persistent modifications to them: : @@ -190,7 +190,7 @@ AINode supports modifying some necessary parameters. You can find the following | ain_logs_dir | The path where AINode stores logs, the starting directory of the relative path is related to the operating system, and it is recommended to use an absolute path | String | logs/AINode | Effective after restart | | ain_thrift_compression_enabled | Does AINode enable Thrift's compression mechanism , 0-Do not start, 1-Start | Boolean | 0 | Effective after restart | -### Start AINode +### 3.3 Start AINode After completing the deployment of Seed Config Node, the registration and inference functions of the model can be supported by adding AINode nodes. After specifying the information of the IoTDB cluster in the configuration file, the corresponding instruction can be executed to start AINode and join the IoTDB cluster。 @@ -343,7 +343,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t nohup bash sbin\start-ainode.bat > myout.file 2>& 1 & ``` -### Detecting the status of AINode nodes +### 3.4 Detecting the status of AINode nodes During the startup process of AINode, the new AINode will be automatically added to the IoTDB cluster. After starting AINode, you can enter SQL in the command line to query. If you see an AINode node in the cluster and its running status is Running (as shown below), it indicates successful joining. @@ -359,7 +359,7 @@ IoTDB> show cluster +------+----------+-------+---------------+------------+-------+-----------+ ``` -### Stop AINode +### 3.5 Stop AINode If you need to stop a running AINode node, execute the corresponding shutdown script. @@ -413,7 +413,7 @@ IoTDB> show cluster ``` If you need to restart the node, you need to execute the startup script again. -### Remove AINode +### 3.6 Remove AINode When it is necessary to remove an AINode node from the cluster, a removal script can be executed. The difference between removing and stopping scripts is that stopping retains the AINode node in the cluster but stops the AINode service, while removing removes the AINode node from the cluster. @@ -495,9 +495,9 @@ If the user loses files in the data folder, AINode may not be able to actively r sbin\remove-ainode.bat -t /: ``` -## common problem +## 4. 
common problem -### An error occurs when starting AINode stating that the venv module cannot be found +### 4.1 An error occurs when starting AINode stating that the venv module cannot be found When starting AINode using the default method, a Python virtual environment will be created in the installation package directory and dependencies will be installed, so it is required to install the venv module. Generally speaking, Python 3.8 and above versions come with built-in VenV, but for some systems with built-in Python environments, this requirement may not be met. There are two solutions when this error occurs (choose one or the other): @@ -513,7 +513,7 @@ Install version 3.8.0 of venv into AINode in the AINode path. ``` When running the startup script, use ` -i ` to specify an existing Python interpreter path as the running environment for AINode, eliminating the need to create a new virtual environment. - ### The SSL module in Python is not properly installed and configured to handle HTTPS resources + ### 4.2 The SSL module in Python is not properly installed and configured to handle HTTPS resources WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. You can install OpenSSLS and then rebuild Python to solve this problem > Currently Python versions 3.6 to 3.9 are compatible with OpenSSL 1.0.2, 1.1.0, and 1.1.1. @@ -527,7 +527,7 @@ make sudo make install ``` - ### Pip version is lower + ### 4.3 Pip version is lower A compilation issue similar to "error: Microsoft Visual C++14.0 or greater is required..." appears on Windows @@ -539,7 +539,7 @@ The corresponding error occurs during installation and compilation, usually due ``` - ### Install and compile Python + ### 4.4 Install and compile Python Use the following instructions to download the installation package from the official website and extract it: ```shell diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_apache.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_apache.md index 4389a704f..568aff270 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_apache.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_apache.md @@ -26,7 +26,7 @@ This section will take the IoTDB classic cluster deployment architecture 3C3D (3 -## Note +## 1. Note 1. Before installation, ensure that the system is complete by referring to [System configuration](./Environment-Requirements.md) @@ -46,13 +46,13 @@ This section will take the IoTDB classic cluster deployment architecture 3C3D (3 - Using the same user operation: Ensure that the same user is used for start, stop and other operations, and do not switch users. - Avoid using sudo: Try to avoid using sudo commands as they execute commands with root privileges, which may cause confusion or security issues. -## Preparation Steps +## 2. Preparation Steps 1. Prepare the IoTDB database installation package::apache-iotdb-{version}-all-bin.zip(Please refer to the installation package for details:[IoTDB-Package](../Deployment-and-Maintenance/IoTDB-Package_apache.md)) 2. Configure the operating system environment according to environmental requirements (system environment configuration can be found in:[Environment Requirement](../Deployment-and-Maintenance/Environment-Requirements.md)) -## Installation Steps +## 3. 
Installation Steps Assuming there are three Linux servers now, the IP addresses and service roles are assigned as follows: @@ -62,7 +62,7 @@ Assuming there are three Linux servers now, the IP addresses and service roles a | 192.168.1.4 | iotdb-2 | ConfigNode、DataNode | | 192.168.1.5 | iotdb-3 | ConfigNode、DataNode | -### Set Host Name +### 3.1 Set Host Name On three machines, configure the host names separately. To set the host names, configure `/etc/hosts` on the target server. Use the following command: @@ -72,7 +72,7 @@ echo "192.168.1.4 iotdb-2" >> /etc/hosts echo "192.168.1.5 iotdb-3" >> /etc/hosts ``` -### Configuration +### 3.2 Configuration Unzip the installation package and enter the installation directory @@ -133,7 +133,7 @@ Open DataNode Configuration File `./conf/iotdb-system.properties`,Set the follow > ❗️Attention: Editors such as VSCode Remote do not have automatic configuration saving function. Please ensure that the modified files are saved persistently, otherwise the configuration items will not take effect -### Start ConfigNode +### 3.3 Start ConfigNode Start the first confignode of IoTDB-1 first, ensuring that the seed confignode node starts first, and then start the second and third confignode nodes in sequence @@ -145,7 +145,7 @@ cd sbin If the startup fails, please refer to [Common Questions](#common-questions). -### Start DataNode +### 3.4 Start DataNode Enter the `sbin` directory of iotdb and start three datanode nodes in sequence: @@ -154,7 +154,7 @@ cd sbin ./start-datanode.sh -d #"- d" parameter will start in the background ``` -### Verify Deployment +### 3.5 Verify Deployment Can be executed directly Cli startup script in `./sbin` directory: @@ -172,9 +172,9 @@ You can use the `show cluster` command to view cluster information: > The appearance of `ACTIVATED (W)` indicates passive activation, which means that this Configurable Node does not have a license file (or has not issued the latest license file with a timestamp), and its activation depends on other Activated Configurable Nodes in the cluster. At this point, it is recommended to check if the license file has been placed in the license folder. If not, please place the license file. If a license file already exists, it may be due to inconsistency between the license file of this node and the information of other nodes. Please contact Timecho staff to reapply. -## Node Maintenance Steps +## 4. Node Maintenance Steps -### ConfigNode Node Maintenance +### 4.1 ConfigNode Node Maintenance ConfigNode node maintenance is divided into two types of operations: adding and removing ConfigNodes, with two common use cases: - Cluster expansion: For example, when there is only one ConfigNode in the cluster, and you want to increase the high availability of ConfigNode nodes, you can add two ConfigNodes, making a total of three ConfigNodes in the cluster. @@ -239,7 +239,7 @@ sbin/remove-confignode.bat [confignode_id] ``` -### DataNode Node Maintenance +### 4.2 DataNode Node Maintenance There are two common scenarios for DataNode node maintenance: @@ -306,7 +306,7 @@ sbin/remove-datanode.sh [datanode_id] #Windows sbin/remove-datanode.bat [datanode_id] ``` -## Common Questions +## 5. Questions 1. 
Confignode failed to start diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_timecho.md index bd7d0aee5..99996d8b7 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Cluster-Deployment_timecho.md @@ -28,7 +28,7 @@ This guide describes how to manually deploy a cluster instance consisting of 3 C -## Prerequisites +## 1. Prerequisites 1. [System configuration](./Environment-Requirements.md):Ensure the system has been configured according to the preparation guidelines. @@ -53,13 +53,13 @@ This guide describes how to manually deploy a cluster instance consisting of 3 C 6. **Monitoring Panel**: Deploy a monitoring panel to track key performance metrics. Contact the Timecho team for access and refer to the "[Monitoring Panel Deployment](./Monitoring-panel-deployment.md)" guide. -## Preparation +## 2. Preparation 1. Obtain the TimechoDB installation package: `timechodb-{version}-bin.zip` following [IoTDB-Package](../Deployment-and-Maintenance/IoTDB-Package_timecho.md)) 2. Configure the operating system environment according to [Environment Requirement](../Deployment-and-Maintenance/Environment-Requirements.md)) -## Installation Steps +## 3. Installation Steps Taking a cluster with three Linux servers with the following information as example: @@ -69,7 +69,7 @@ Taking a cluster with three Linux servers with the following information as exam | 11.101.17.225 | iotdb-2 | ConfigNode、DataNode | | 11.101.17.226 | iotdb-3 | ConfigNode、DataNode | -### 1.Configure Hostnames +### 3.1 Configure Hostnames On all three servers, configure the hostnames by editing the `/etc/hosts` file. Use the following commands: @@ -79,7 +79,7 @@ echo "11.101.17.225 iotdb-2" >> /etc/hosts echo "11.101.17.226 iotdb-3" >> /etc/hosts ``` -### 2. Extract Installation Package +### 3.2 Extract Installation Package Unzip the installation package and enter the installation directory: @@ -88,7 +88,7 @@ unzip timechodb-{version}-bin.zip cd timechodb-{version}-bin ``` -### 3. Parameters Configuration +### 3.3 Parameters Configuration - #### Memory Configuration @@ -137,7 +137,7 @@ Set the following parameters in `./conf/iotdb-system.properties`. Refer to `./co **Note:** Ensure files are saved after editing. Tools like VSCode Remote do not save changes automatically. -### 4. Start ConfigNode Instances +### 3.4 Start ConfigNode Instances 1. Start the first ConfigNode (`iotdb-1`) as the seed node @@ -150,7 +150,7 @@ cd sbin If the startup fails, refer to the [Common Questions](#common-questions) section below for troubleshooting. 
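A minimal sketch of this step (run on `iotdb-1` first as the seed node, then repeat on `iotdb-2` and `iotdb-3`); the log file name `log_confignode_all.log` is an assumed default location and may differ in your package:

```Bash
# Start the ConfigNode in the background, then check its log for a successful start
cd timechodb-{version}-bin/sbin
./start-confignode.sh -d
tail -n 50 ../logs/log_confignode_all.log   # assumed default log file; look for a "successfully started" message
```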
-### 5.Start DataNode Instances +### 3.5 Start DataNode Instances On each server, navigate to the `sbin` directory and start the DataNode: @@ -159,7 +159,7 @@ cd sbin ./start-datanode.sh -d #"- d" parameter will start in the background ``` -### 6.Activate Database +### 3.6 Activate Database #### Option 1: File-Based Activation @@ -217,15 +217,15 @@ cd sbin IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===' ``` -### 7.Verify Activation +### 3.7 Verify Activation Check the `ClusterActivationStatus` field. If it shows `ACTIVATED`, the database has been successfully activated. ![](/img/%E9%9B%86%E7%BE%A4-%E9%AA%8C%E8%AF%81.png) -## Maintenance +## 4. Maintenance -### ConfigNode Maintenance +### 4.1 ConfigNode Maintenance ConfigNode maintenance includes adding and removing ConfigNodes. Common use cases include: @@ -289,7 +289,7 @@ sbin/remove-confignode.bat [confignode_id] sbin/remove-confignode.bat [cn_internal_address:cn_internal_port] ``` -### DataNode Maintenance +### 4.2 DataNode Maintenance DataNode maintenance includes adding and removing DataNodes. Common use cases include: @@ -351,7 +351,7 @@ sbin/remove-datanode.sh [dn_rpc_address:dn_rpc_port] sbin/remove-datanode.bat [dn_rpc_address:dn_rpc_port] ``` -## Common Questions +## 5. Common Questions 1. Activation Fails Repeatedly - Use the `ls -al` command to verify that the ownership of the installation directory matches the current user. @@ -388,15 +388,15 @@ sbin/remove-datanode.bat [dn_rpc_address:dn_rpc_port] rm -rf data logs ``` -## Appendix +## 6. Appendix -### ConfigNode Parameters +### 6.1 ConfigNode Parameters | Parameter | Description | Is it required | | :-------- | :---------------------------------------------------------- | :------------- | | -d | Starts the process in daemon mode (runs in the background). | No | -### DataNode Parameters +### 6.2 DataNode Parameters | Parameter | Description | Required | | :-------- | :----------------------------------------------------------- | :------- | diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Database-Resources.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Database-Resources.md index d6210318a..51cc3a70a 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Database-Resources.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Database-Resources.md @@ -19,7 +19,7 @@ --> # Database Resources -## CPU +## 1. CPU @@ -81,7 +81,7 @@
-## Memory +## 2. Memory @@ -143,8 +143,8 @@
-## Storage (Disk) -### Storage space +## 3. Storage (Disk) +### 3.1 Storage space Calculation formula: Number of measurement points * Sampling frequency (Hz) * Size of each data point (Byte, different data types may vary, see table below) * Storage time (seconds) * Number of copies (usually 1 copy for a single node and 2 copies for a cluster) ÷ Compression ratio (can be estimated at 5-10 times, but may be higher in actual situations) @@ -189,13 +189,13 @@ Example: 1000 devices, each with 100 measurement points, a total of 100000 seque - Complete calculation formula: 1000 devices * 100 measurement points * 12 bytes per data point * 86400 seconds per day * 365 days per year * 3 copies / 10 compression ratio / 1024 / 1024 / 1024 / 1024 =11T - Simplified calculation formula: 1000 * 100 * 12 * 86400 * 365 * 3 / 10 / 1024 / 1024 / 1024 / 1024 =11T -### Storage Configuration +### 3.2 Storage Configuration If the number of nodes is over 10000000 or the query load is high, it is recommended to configure SSD -## Network (Network card) +## 4. Network (Network card) If the write throughput does not exceed 10 million points/second, configure 1Gbps network card. When the write throughput exceeds 10 million points per second, a 10Gbps network card needs to be configured. | **Write throughput (data points per second)** | **NIC rate** | | ------------------- | ------------- | | <10 million | 1Gbps | | >=10 million | 10Gbps | -## Other instructions +## 5. Other instructions IoTDB has the ability to scale up clusters in seconds, and expanding node data does not require migration. Therefore, you do not need to worry about the limited cluster capacity estimated based on existing data. In the future, you can add new nodes to the cluster when you need to scale up. \ No newline at end of file diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_apache.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_apache.md index a934884cb..eec5edf92 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_apache.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_apache.md @@ -22,7 +22,7 @@ IoTDB has two operation modes: standalone mode and cluster mode. -## 1 Standalone Mode +## 1. Standalone Mode An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, referred to as 1C1D. @@ -31,7 +31,7 @@ An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, referred to a - **Deployment method**:[Stand-Alone Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md) -## 2 Cluster Mode +## 2. Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNodes, typically 3 DataNodes, referred to as 3C3D. In the event of partial node failures, the remaining nodes can still provide services, ensuring high availability of the database service, and the database performance can be improved with the addition of nodes. @@ -39,7 +39,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNode - **Applicable scenarios**: Enterprise-level application scenarios that require high availability and reliability. - **Deployment method**: [Cluster Deployment](../Deployment-and-Maintenance/Cluster-Deployment_apache.md) -## 3 Summary of Features +## 3. 
Summary of Features | **Dimension** | **Stand-Alone Mode** | **Cluster Mode** | | :-------------------------- | :----------------------------------------------------- | :----------------------------------------------------------- | diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_timecho.md index c757e9561..b2daee47f 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Deployment-form_timecho.md @@ -22,7 +22,7 @@ IoTDB has two operation modes: standalone mode and cluster mode. -## 1 Standalone Mode +## 1. Standalone Mode An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D. @@ -30,7 +30,7 @@ An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D. - **Use Cases**: Scenarios with limited resources or low high-availability requirements, such as edge servers. - **Deployment Method**: [Stand-Alone Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md) -## 2 Dual-Active Mode +## 2. Dual-Active Mode Dual-Active Deployment is a feature of TimechoDB, where two independent instances synchronize bidirectionally and can provide services simultaneously. If one instance stops and restarts, the other instance will resume data transfer from the breakpoint. @@ -40,7 +40,7 @@ Dual-Active Deployment is a feature of TimechoDB, where two independent instance - **Use Cases**: Scenarios with limited resources (only two servers) but requiring high availability. - **Deployment Method**: [Dual-Active Deployment](../Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md) -## 3 Cluster Mode +## 3. Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNodes, typically 3 DataNodes, i.e., 3C3D. If some nodes fail, the remaining nodes can still provide services, ensuring high availability of the database. Performance can be improved by adding DataNodes. @@ -50,7 +50,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNode -## 4 Feature Summary +## 4. Feature Summary | **Dimension** | **Stand-Alone Mode** | **Dual-Active Mode** | **Cluster Mode** | | :-------------------------- | :------------------------------------------------------- | :------------------------------------------------------ | :------------------------------------------------------ | diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_apache.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_apache.md index 048c3e0d8..2bd990022 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_apache.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_apache.md @@ -20,9 +20,9 @@ --> # Docker Deployment -## Environmental Preparation +## 1. 
Environmental Preparation
-### Docker Installation
+### 1.1 Docker Installation
```SQL
#Taking Ubuntu as an example, other operating systems can search for installation methods themselves
@@ -42,7 +42,7 @@ sudo systemctl enable docker
docker --version #Display version information, indicating successful installation
```
-### Docker-compose Installation
+### 1.2 Docker-compose Installation
```SQL
#Installation command
@@ -53,11 +53,11 @@ ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version #Displaying version information indicates successful installation
```
-## Stand-Alone Deployment
+## 2. Stand-Alone Deployment
This section demonstrates how to deploy a standalone Docker version of 1C1D.
-### Pull Image File
+### 2.1 Pull Image File
The Docker image of Apache IoTDB has been uploaded to https://hub.docker.com/r/apache/iotdb.
@@ -75,13 +75,13 @@ docker images
![](/img/%E5%BC%80%E6%BA%90-%E6%8B%89%E5%8F%96%E9%95%9C%E5%83%8F.png)
-### Create Docker Bridge Network
+### 2.2 Create Docker Bridge Network
```Bash
docker network create --driver=bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 iotdb
```
-### Write The Yml File For Docker-Compose
+### 2.3 Write The Yml File For Docker-Compose
Here we take the example of consolidating the IoTDB installation directory and yml files in the /docker-iotdb folder:
@@ -130,7 +130,7 @@ networks:
external: true
```
-### Start IoTDB
+### 2.4 Start IoTDB
Use the following command to start:
@@ -139,7 +139,7 @@ cd /docker-iotdb
docker-compose -f docker-compose-standalone.yml up -d #Background startup
```
-### Validate Deployment
+### 2.5 Validate Deployment
- Viewing the log, the following words indicate successful startup
@@ -172,7 +172,7 @@ You can see that all services are running and the activation status shows as act
![](/img/%E5%BC%80%E6%BA%90-%E9%AA%8C%E8%AF%81%E9%83%A8%E7%BD%B23.png)
-### Map/conf Directory (optional)
+### 2.6 Map/conf Directory (optional)
If you want to directly modify the configuration file in the physical machine in the future, you can map the /conf folder in the container in three steps:
@@ -197,7 +197,7 @@ Step 3: Restart IoTDB
docker-compose -f docker-compose-standalone.yml up -d
```
-## Cluster Deployment
+## 3. Cluster Deployment
This section describes how to manually deploy an instance that includes 3 Config Nodes and 3 Data Nodes, commonly known as a 3C3D cluster.
@@ -209,7 +209,7 @@ This section describes how to manually deploy an instance that includes 3 Config
Taking the host network as an example, we will demonstrate how to deploy a 3C3D cluster.
-### Set Host Name
+### 3.1 Set Host Name
Assuming there are currently three Linux servers, the IP addresses and service role assignments are as follows:
@@ -227,7 +227,7 @@ echo "192.168.1.4 iotdb-2" >> /etc/hosts
echo "192.168.1.5 iotdb-3" >> /etc/hosts
```
-### Pull Image File
+### 3.2 Pull Image File
The Docker image of Apache IoTDB has been uploaded to https://hub.docker.com/r/apache/iotdb.
@@ -245,7 +245,7 @@ docker images
![](/img/%E5%BC%80%E6%BA%90-%E9%9B%86%E7%BE%A4%E7%89%881.png)
-### Write The Yml File For Docker Compose
+### 3.3 Write The Yml File For Docker Compose
Here we take the example of consolidating the IoTDB installation directory and yml files in the `/docker-iotdb` folder:
@@ -324,7 +324,7 @@ services:
network_mode: "host" #Using the host network
```
-### Starting Confignode For The First Time
+### 3.4 Starting Confignode For The First Time
First, start configNodes on each of the three servers to obtain the machine code. 
Pay attention to the startup order, start the first iotdb-1 first, then start iotdb-2 and iotdb-3. @@ -333,7 +333,7 @@ cd /docker-iotdb docker-compose -f confignode.yml up -d #Background startup ``` -### Start Datanode +### 3.5 Start Datanode Start datanodes on 3 servers separately @@ -344,7 +344,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E5%BC%80%E6%BA%90-%E9%9B%86%E7%BE%A4%E7%89%882.png) -### Validate Deployment +### 3.6 Validate Deployment - Viewing the logs, the following words indicate that the datanode has successfully started @@ -377,7 +377,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E5%BC%80%E6%BA%90-%E9%9B%86%E7%BE%A4%E7%89%885.png) -### Map/conf Directory (optional) +### 3.7 Map/conf Directory (optional) If you want to directly modify the configuration file in the physical machine in the future, you can map the/conf folder in the container in three steps: diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_timecho.md index 4aec6d8ee..ccd071bbb 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Docker-Deployment_timecho.md @@ -20,9 +20,9 @@ --> # Docker Deployment -## Environmental Preparation +## 1. Environmental Preparation -### Docker Installation +### 1.1 Docker Installation ```Bash #Taking Ubuntu as an example, other operating systems can search for installation methods themselves @@ -42,7 +42,7 @@ sudo systemctl enable docker docker --version #Display version information, indicating successful installation ``` -### Docker-compose Installation +### 1.2 Docker-compose Installation ```Bash #Installation command @@ -53,7 +53,7 @@ ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose docker-compose --version #Displaying version information indicates successful installation ``` -### Install The Dmidecode Plugin +### 1.3 Install The Dmidecode Plugin By default, Linux servers should already be installed. If not, you can use the following command to install them. @@ -63,15 +63,15 @@ sudo apt-get install dmidecode After installing dmidecode, search for the installation path: `wherever dmidecode`. Assuming the result is `/usr/sbin/dmidecode`, remember this path as it will be used in the later docker compose yml file. -### Get Container Image Of IoTDB +### 1.4 Get Container Image Of IoTDB You can contact business or technical support to obtain container images for IoTDB Enterprise Edition. -## Stand-Alone Deployment +## 2. Stand-Alone Deployment This section demonstrates how to deploy a standalone Docker version of 1C1D. 
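Before moving on, the environment prepared in the previous section can be sanity-checked with a short sketch like the following; `whereis dmidecode` is used here to look up the binary path mentioned above (version output formats vary by system):

```Bash
# Quick pre-flight check of the Docker environment prepared above
docker --version              # prints the installed Docker version
docker-compose --version      # prints the installed docker-compose version
whereis dmidecode             # note the binary path (e.g. /usr/sbin/dmidecode) for the later yml volume mapping
```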
-### Load Image File +### 2.1 Load Image File For example, the container image file name of IoTDB obtained here is: `iotdb-enterprise-1.3.2-3-standalone-docker.tar.gz` @@ -89,13 +89,13 @@ docker images ![](/img/%E5%8D%95%E6%9C%BA-%E6%9F%A5%E7%9C%8B%E9%95%9C%E5%83%8F.png) -### Create Docker Bridge Network +### 2.2 Create Docker Bridge Network ```Bash docker network create --driver=bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 iotdb ``` -### Write The Yml File For docker-compose +### 2.3 Write The Yml File For docker-compose Here we take the example of consolidating the IoTDB installation directory and yml files in the/docker iotdb folder: @@ -147,7 +147,7 @@ networks: external: true ``` -### First Launch +### 2.4 First Launch Use the following command to start: @@ -160,7 +160,7 @@ Due to lack of activation, it is normal to exit directly upon initial startup. T ![](/img/%E5%8D%95%E6%9C%BA-%E6%BF%80%E6%B4%BB.png) -### Apply For Activation +### 2.5 Apply For Activation - After the first startup, a system_info file will be generated in the physical machine directory `/docker-iotdb/iotdb/activation`, and this file will be copied to the Timecho staff. @@ -170,7 +170,7 @@ Due to lack of activation, it is normal to exit directly upon initial startup. T ![](/img/%E5%8D%95%E6%9C%BA-%E7%94%B3%E8%AF%B7%E6%BF%80%E6%B4%BB2.png) -### Restart IoTDB +### 2.6 Restart IoTDB ```Bash docker-compose -f docker-compose-standalone.yml up -d @@ -178,7 +178,7 @@ docker-compose -f docker-compose-standalone.yml up -d ![](/img/%E5%90%AF%E5%8A%A8iotdb.png) -### Validate Deployment +### 2.7 Validate Deployment - Viewing the log, the following words indicate successful startup @@ -211,7 +211,7 @@ docker-compose -f docker-compose-standalone.yml up -d ![](/img/%E5%8D%95%E6%9C%BA-%E9%AA%8C%E8%AF%81%E9%83%A8%E7%BD%B23.png) -### Map/conf Directory (optional) +### 2.8 Map/conf Directory (optional) If you want to directly modify the configuration file in the physical machine in the future, you can map the/conf folder in the container in three steps: @@ -239,7 +239,7 @@ Step 3: Restart IoTDB docker-compose -f docker-compose-standalone.yml up -d ``` -## Cluster Deployment +## 3. Cluster Deployment This section describes how to manually deploy an instance that includes 3 Config Nodes and 3 Data Nodes, commonly known as a 3C3D cluster. @@ -251,7 +251,7 @@ This section describes how to manually deploy an instance that includes 3 Config Taking the host network as an example, we will demonstrate how to deploy a 3C3D cluster. -### Set Host Name +### 3.1 Set Host Name Assuming there are currently three Linux servers, the IP addresses and service role assignments are as follows: @@ -269,7 +269,7 @@ echo "192.168.1.4 iotdb-2" >> /etc/hosts echo "192.168.1.5 iotdb-3" >> /etc/hosts ``` -### Load Image File +### 3.2 Load Image File For example, the container image file name obtained for IoTDB is: `iotdb-enterprise-1.3.23-standalone-docker.tar.gz` @@ -287,7 +287,7 @@ docker images ![](/img/%E9%95%9C%E5%83%8F%E5%8A%A0%E8%BD%BD.png) -### Write The Yml File For Docker Compose +### 3.3 Write The Yml File For Docker Compose Here we take the example of consolidating the IoTDB installation directory and yml files in the /docker-iotdb folder: @@ -366,7 +366,7 @@ services: network_mode: "host" #Using the host network ``` -### Starting Confignode For The First Time +### 3.4 Starting Confignode For The First Time First, start configNodes on each of the three servers to obtain the machine code. 
Pay attention to the startup order, start the first iotdb-1 first, then start iotdb-2 and iotdb-3. @@ -375,7 +375,7 @@ cd /docker-iotdb docker-compose -f confignode.yml up -d #Background startup ``` -### Apply For Activation +### 3.5 Apply For Activation - After starting three confignodes for the first time, a system_info file will be generated in each physical machine directory `/docker-iotdb/iotdb/activation`, and the system_info files of the three servers will be copied to the Timecho staff; @@ -387,7 +387,7 @@ docker-compose -f confignode.yml up -d #Background startup - After the license is placed in the corresponding activation folder, confignode will be automatically activated without restarting confignode -### Start Datanode +### 3.6 Start Datanode Start datanodes on 3 servers separately @@ -398,7 +398,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E9%9B%86%E7%BE%A4%E7%89%88-dn%E5%90%AF%E5%8A%A8.png) -### Validate Deployment +### 3.7 Validate Deployment - Viewing the logs, the following words indicate that the datanode has successfully started @@ -431,7 +431,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E9%9B%86%E7%BE%A4-%E6%BF%80%E6%B4%BB.png) -### Map/conf Directory (optional) +### 3.8 Map/conf Directory (optional) If you want to directly modify the configuration file in the physical machine in the future, you can map the/conf folder in the container in three steps: diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md index 40c5e1d3d..2865e6da7 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md @@ -20,7 +20,7 @@ --> # Dual Active Deployment -## What is a double active version? +## 1. What is a double active version? Dual active usually refers to two independent machines (or clusters) that perform real-time mirror synchronization. Their configurations are completely independent and can simultaneously receive external writes. Each independent machine (or cluster) can synchronize the data written to itself to another machine (or cluster), and the data of the two machines (or clusters) can achieve final consistency. @@ -30,7 +30,7 @@ Dual active usually refers to two independent machines (or clusters) that perfor ![](/img/20240731104336.png) -## Note +## 2. Note 1. It is recommended to prioritize using `hostname` for IP configuration during deployment to avoid the problem of database failure caused by modifying the host IP in the later stage. To set the hostname, you need to configure `/etc/hosts` on the target server. If the local IP is 192.168.1.3 and the hostname is iotdb-1, you can use the following command to set the server's hostname and configure IoTDB's `cn_internal-address` and` dn_internal-address` using the hostname. @@ -42,7 +42,7 @@ Dual active usually refers to two independent machines (or clusters) that perfor 3. Recommend deploying a monitoring panel, which can monitor important operational indicators and keep track of database operation status at any time. The monitoring panel can be obtained by contacting the business department. 
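For note 1 above, a minimal sketch of setting the hostname and hosts mapping on machine A; it assumes root privileges and a systemd-based system (for `hostnamectl`), and the IPs are the example addresses used in this guide:

```Bash
# On machine A (192.168.1.3): set the hostname referenced by cn_internal_address / dn_internal_address
hostnamectl set-hostname iotdb-1
# Make both machines resolvable by hostname
echo "192.168.1.3 iotdb-1" >> /etc/hosts
echo "192.168.1.4 iotdb-2" >> /etc/hosts
```

Repeat the same steps on machine B with its own hostname (`iotdb-2`) so that each side can resolve the other.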
The steps for deploying the monitoring panel can be referred to [Monitoring Panel Deployment](https://www.timecho.com/docs/UserGuide/latest/Deployment-and-Maintenance/Monitoring-panel-deployment.html) -## Installation Steps +## 3. Installation Steps Taking the dual active version IoTDB built by two single machines A and B as an example, the IP addresses of A and B are 192.168.1.3 and 192.168.1.4, respectively. Here, we use hostname to represent different hosts. The plan is as follows: @@ -51,11 +51,11 @@ Taking the dual active version IoTDB built by two single machines A and B as an | A | 192.168.1.3 | iotdb-1 | | B | 192.168.1.4 | iotdb-2 | -### Step1:Install Two Independent IoTDBs Separately +### 3.1 Install Two Independent IoTDBs Separately Install IoTDB on two machines separately, and refer to the deployment documentation for the standalone version [Stand-Alone Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md),The deployment document for the cluster version can be referred to [Cluster Deployment](../Deployment-and-Maintenance/Cluster-Deployment_timecho.md)。**It is recommended that the configurations of clusters A and B remain consistent to achieve the best dual active effect** -### Step2:Create A Aata Synchronization Task On Machine A To Machine B +### 3.2 Create A Aata Synchronization Task On Machine A To Machine B - Create a data synchronization process on machine A, where the data on machine A is automatically synchronized to machine B. Use the cli tool in the sbin directory to connect to the IoTDB database on machine A: @@ -79,7 +79,7 @@ Install IoTDB on two machines separately, and refer to the deployment documentat - Note: To avoid infinite data loops, it is necessary to set the parameter `source. forwarding pipe questions` on both A and B to `false`, indicating that data transmitted from another pipe will not be forwarded. -### Step3:Create A Data Synchronization Task On Machine B To Machine A +### 3.3 Create A Data Synchronization Task On Machine B To Machine A - Create a data synchronization process on machine B, where the data on machine B is automatically synchronized to machine A. Use the cli tool in the sbin directory to connect to the IoTDB database on machine B @@ -103,7 +103,7 @@ Install IoTDB on two machines separately, and refer to the deployment documentat - Note: To avoid infinite data loops, it is necessary to set the parameter `source. forwarding pipe questions` on both A and B to `false` , indicating that data transmitted from another pipe will not be forwarded. -### Step4:Validate Deployment +### 3.4 Validate Deployment After the above data synchronization process is created, the dual active cluster can be started. @@ -144,7 +144,7 @@ show pipes Ensure that every pipe is in the RUNNING state. -### Step5:Stop Dual Active Version IoTDB +### 3.5 Stop Dual Active Version IoTDB - Execute the following command on machine A: diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Environment-Requirements.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Environment-Requirements.md index e286154e1..72f2e5081 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Environment-Requirements.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Environment-Requirements.md @@ -20,9 +20,9 @@ --> # System Requirements -## Disk Array +## 1. Disk Array -### Configuration Suggestions +### 1.1 Configuration Suggestions IoTDB has no strict operation requirements on disk array configuration. 
It is recommended to use multiple disk arrays to store IoTDB data to achieve the goal of concurrent writing to multiple disk arrays. For configuration, refer to the following suggestions: @@ -35,7 +35,7 @@ IoTDB has no strict operation requirements on disk array configuration. It is re You are advised to mount multiple hard disks (1-6 disks). 3. When deploying IoTDB, it is recommended to avoid using network storage devices such as NAS. -### Configuration Example +### 1.2 Configuration Example - Example 1: Four 3.5-inch hard disks @@ -68,13 +68,13 @@ The recommended configurations are as follows: | data disk | RAID5 | 7 | 1 | 6 | | data disk | NoRaid | 1 | 0 | 1 | -## Operating System +## 2. Operating System -### Version Requirements +### 2.1 Version Requirements IoTDB supports operating systems such as Linux, Windows, and MacOS, while the enterprise version supports domestic CPUs such as Loongson, Phytium, and Kunpeng. It also supports domestic server operating systems such as Neokylin, KylinOS, UOS, and Linx. -### Disk Partition +### 2.2 Disk Partition - The default standard partition mode is recommended. LVM extension and hard disk encryption are not recommended. - The system disk needs only the space used by the operating system, and does not need to reserve space for the IoTDB. @@ -151,7 +151,7 @@ systemctl start sshd # Enable port 22 3. Ensure that servers are connected to each other -### Other Configuration +### 2.3 Other Configuration 1. Reduce the system swap priority to the lowest level @@ -178,7 +178,7 @@ echo "* hard nofile 65535" >> /etc/security/limits.conf # View after exiting the current terminal session, expect to display 65535 ulimit -n ``` -## Software Dependence +## 3. Software Dependence Install the Java runtime environment (Java version >= 1.8). Ensure that jdk environment variables are set. (It is recommended to deploy JDK17 for V1.3.2.2 or later. In some scenarios, the performance of JDK of earlier versions is compromised, and Datanodes cannot be stopped.) diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_apache.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_apache.md index 4bf9b1e0f..e775c431f 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_apache.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_apache.md @@ -20,12 +20,12 @@ --> # Obtain IoTDB -## 1 How to obtain IoTDB +## 1. How to obtain IoTDB The installation package can be directly obtained from the Apache IoTDB official website:https://iotdb.apache.org/Download/ -## 2 Installation Package Structure +## 2. Installation Package Structure Install the package after decompression(`apache-iotdb--all-bin.zip`),After decompressing the installation package, the directory structure is as follows: diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_timecho.md index 261c8a10f..3c1742408 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/IoTDB-Package_timecho.md @@ -20,11 +20,11 @@ --> # Obtain TimechoDB -## 1 How to obtain TimechoDB +## 1. How to obtain TimechoDB The TimechoDB installation package can be obtained through product trial application or by directly contacting the Timecho team. -## 2 Installation Package Structure +## 2. 
Installation Package Structure After unpacking the installation package(`iotdb-enterprise-{version}-bin.zip`),you will see the directory structure is as follows: diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Monitoring-panel-deployment.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Monitoring-panel-deployment.md index 17fced6e9..ec61a2a41 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Monitoring-panel-deployment.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Monitoring-panel-deployment.md @@ -24,14 +24,14 @@ The IoTDB monitoring panel is one of the supporting tools for the IoTDB Enterpri The instructions for using the monitoring panel tool can be found in the [Instructions](../Tools-System/Monitor-Tool.md) section of the document. -## Installation Preparation +## 1. Installation Preparation 1. Installing IoTDB: You need to first install IoTDB V1.0 or above Enterprise Edition. You can contact business or technical support to obtain 2. Obtain the IoTDB monitoring panel installation package: Based on the enterprise version of IoTDB database monitoring panel, you can contact business or technical support to obtain -## Installation Steps +## 2. Installation Steps -### Step 1: IoTDB enables monitoring indicator collection +### 2.1 IoTDB enables monitoring indicator collection 1. Open the monitoring configuration item. The configuration items related to monitoring in IoTDB are disabled by default. Before deploying the monitoring panel, you need to open the relevant configuration items (note that the service needs to be restarted after enabling monitoring configuration). @@ -67,7 +67,7 @@ Taking the 3C3D cluster as an example, the monitoring configuration that needs t ![](/img/%E5%90%AF%E5%8A%A8.png) -### Step 2: Install and configure Prometheus +### 2.2 Install and configure Prometheus > Taking Prometheus installed on server 192.168.1.3 as an example. @@ -118,7 +118,7 @@ scrape_configs: ![](/img/%E8%8A%82%E7%82%B9%E7%9B%91%E6%8E%A7.png) -### Step 3: Install Grafana and configure the data source +### 2.3 Install Grafana and configure the data source > Taking Grafana installed on server 192.168.1.3 as an example. @@ -146,7 +146,7 @@ When configuring the Data Source, pay attention to the URL where Prometheus is l ![](/img/%E9%85%8D%E7%BD%AE%E6%88%90%E5%8A%9F.png) -### Step 4: Import IoTDB Grafana Dashboards +### 2.4 Import IoTDB Grafana Dashboards 1. Enter Grafana and select Dashboards: @@ -184,9 +184,9 @@ When configuring the Data Source, pay attention to the URL where Prometheus is l ![](/img/%E9%9D%A2%E6%9D%BF%E6%B1%87%E6%80%BB.png) -## Appendix, Detailed Explanation of Monitoring Indicators +## 3. Appendix, Detailed Explanation of Monitoring Indicators -### System Dashboard +### 3.1 System Dashboard This panel displays the current usage of system CPU, memory, disk, and network resources, as well as partial status of the JVM. 
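Before walking through the individual dashboards, a quick way to confirm that the collection chain configured in steps 2.1–2.3 is working is to probe the metric endpoints directly. The ports 9091 and 9093 below are assumed defaults for the ConfigNode and DataNode Prometheus reporters and should be replaced with the ports actually set in step 2.1:

```Bash
# Probe the Prometheus-format metric endpoints exposed by IoTDB (ports are assumptions, see step 2.1)
curl -s http://<confignode_ip>:9091/metrics | head
curl -s http://<datanode_ip>:9093/metrics | head
# Prometheus itself (installed on 192.168.1.3 in step 2.2) exposes a readiness endpoint
curl -s http://192.168.1.3:9090/-/ready
```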
@@ -272,7 +272,7 @@ Eno refers to the network card connected to the public network, while lo refers - Packet Speed:The speed at which the network card sends and receives packets, and one RPC request can correspond to one or more packets - Connection Num:The current number of socket connections for the selected process (IoTDB only has TCP) -### Performance Overview Dashboard +### 3.2 Performance Overview Dashboard #### Cluster Overview @@ -350,7 +350,7 @@ Eno refers to the network card connected to the public network, while lo refers - File Size: Node management file size situation - Log Number Per Minute: Different types of logs per minute for nodes -### ConfigNode Dashboard +### 3.3 ConfigNode Dashboard This panel displays the performance of all management nodes in the cluster, including partitioning, node information, and client connection statistics. @@ -408,7 +408,7 @@ This panel displays the performance of all management nodes in the cluster, incl - Remote / Local Write QPS: Remote and local QPS written to node Ratis - RatisConsensus Memory: Memory usage of Node Ratis consensus protocol -### DataNode Dashboard +### 3.4 DataNode Dashboard This panel displays the monitoring status of all data nodes in the cluster, including write time, query time, number of stored files, etc. diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md index 08133222a..90c524236 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md @@ -20,7 +20,7 @@ --> # Stand-Alone Deployment -## Matters Needing Attention +## 1. Matters Needing Attention 1. Before installation, ensure that the system is complete by referring to [System configuration](./Environment-Requirements.md). @@ -40,16 +40,16 @@ - Using the same user operation: Ensure that the same user is used for start, stop and other operations, and do not switch users. - Avoid using sudo: Try to avoid using sudo commands as they execute commands with root privileges, which may cause confusion or security issues. -## Installation Steps +## 2. Installation Steps -### 1、Unzip the installation package and enter the installation directory +### 2.1 Unzip the installation package and enter the installation directory ```Shell unzip apache-iotdb-{version}-all-bin.zip cd apache-iotdb-{version}-all-bin ``` -### 2、Parameter Configuration +### 2.2 Parameter Configuration #### Environment Script Configuration @@ -103,7 +103,7 @@ Open the DataNode configuration file (./conf/iotdb-system. properties file) and > ❗️Attention: Editors such as VSCode Remote do not have automatic configuration saving function. Please ensure that the modified files are saved persistently, otherwise the configuration items will not take effect -### 3、Start ConfigNode +### 2.3 Start ConfigNode Enter the sbin directory of iotdb and start confignode @@ -112,7 +112,7 @@ Enter the sbin directory of iotdb and start confignode ``` If the startup fails, please refer to [Common Questions](#common-questions). 
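Before starting the DataNode, it can help to confirm that the ConfigNode is actually up; a minimal sketch is shown below, where 10710 is the assumed default `cn_internal_port` and should be adjusted if it was changed in the configuration step:

```Bash
# Check that the ConfigNode JVM process is running and listening on its internal port
ps -ef | grep -i [c]onfignode
ss -tlnp | grep 10710    # 10710 is the assumed default cn_internal_port
```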
-### 4、Start DataNode +### 2.4 Start DataNode Enter the sbin directory of iotdb and start datanode: @@ -121,7 +121,7 @@ cd sbin ./start-datanode.sh -d #The "- d" parameter will start in the background ``` -### 5、Verify Deployment +### 2.5 Verify Deployment Can be executed directly/ Cli startup script in sbin directory: @@ -141,7 +141,7 @@ When the status is all running, it indicates that the service has started succes > The appearance of 'Activated (W)' indicates passive activation, indicating that this Config Node does not have a license file (or has not issued the latest license file with a timestamp). At this point, it is recommended to check if the license file has been placed in the license folder. If not, please place the license file. If a license file already exists, it may be due to inconsistency between the license file of this node and the information of other nodes. Please contact Timecho staff to reapply. -## Common Questions +## 3. Common Questions 1. Confignode failed to start diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md index 9a95c038f..4a11206a1 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md @@ -22,7 +22,7 @@ This guide introduces how to set up a standalone TimechoDB instance, which includes one ConfigNode and one DataNode (commonly referred to as 1C1D). -## Prerequisites +## 1. Prerequisites 1. [System configuration](./Environment-Requirements.md): Ensure the system has been configured according to the preparation guidelines. @@ -46,9 +46,9 @@ This guide introduces how to set up a standalone TimechoDB instance, which inclu 6. **Monitoring Panel**: Deploy a monitoring panel to track key performance metrics. Contact the Timecho team for access and refer to the "[Monitoring Board Install and Deploy](./Monitoring-panel-deployment.md)" guide. -## Installation Steps +## 2. Installation Steps -### 1、Extract Installation Package +### 2.1Extract Installation Package Unzip the installation package and navigate to the directory: @@ -57,7 +57,7 @@ unzip timechodb-{version}-bin.zip cd timechodb-{version}-bin ``` -### 2、Parameter Configuration +### 2.2 Parameter Configuration #### Memory Configuration @@ -104,7 +104,7 @@ Set the following parameters in `conf/iotdb-system.properties`. Refer to `conf/i | dn_schema_region_consensus_port | Port used for metadata replica consensus protocol communication | 10760 | 10760 | This parameter cannot be modified after the first startup. | | dn_seed_config_node | Address of the ConfigNode for registering and joining the cluster. (e.g.,`cn_internal_address:cn_internal_port`) | 127.0.0.1:10710 | Use `cn_internal_address:cn_internal_port` | This parameter cannot be modified after the first startup. | -### 3、Start ConfigNode +### 2.3 Start ConfigNode Navigate to the `sbin` directory and start ConfigNode: @@ -114,7 +114,7 @@ Navigate to the `sbin` directory and start ConfigNode: If the startup fails, refer to the [**Common Problem**](#Common Problem) section below for troubleshooting. -### 4、Start DataNode +### 2.4 Start DataNode Navigate to the `sbin` directory of IoTDB and start the DataNode: @@ -122,7 +122,7 @@ Navigate to the `sbin` directory of IoTDB and start the DataNode: ./sbin/start-datanode.sh -d # The "-d" flag starts the process in the background. 
```` -### 5、Activate Database +### 2.5 Activate Database #### Option 1: File-Based Activation @@ -181,13 +181,13 @@ It costs 0.030s IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===' ``` -### 6、Verify Activation +### 2.6 Verify Activation Check the `ClusterActivationStatus` field. If it shows `ACTIVATED`, the database has been successfully activated. ![](/img/%E5%8D%95%E6%9C%BA-%E9%AA%8C%E8%AF%81.png) -## Common Problem +## 3. Common Problem 1. Activation Fails Repeatedly 1. Use the `ls -al` command to verify that the ownership of the installation directory matches the current user. @@ -229,15 +229,15 @@ cd /data/iotdb rm -rf data logs ``` -## Appendix +## 4. Appendix -### ConfigNode Parameters +### 4.1 ConfigNode Parameters | Parameter | Description | **Is it required** | | :-------- | :---------------------------------------------------------- | :----------------- | | -d | Starts the process in daemon mode (runs in the background). | No | -### DataNode Parameters +### 4.2 DataNode Parameters | Parameter | Description | Required | | :-------- | :----------------------------------------------------------- | :------- | diff --git a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/workbench-deployment_timecho.md b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/workbench-deployment_timecho.md index f26ef9229..335ffc4c2 100644 --- a/src/UserGuide/Master/Tree/Deployment-and-Maintenance/workbench-deployment_timecho.md +++ b/src/UserGuide/Master/Tree/Deployment-and-Maintenance/workbench-deployment_timecho.md @@ -29,7 +29,7 @@ The visualization console is one of the supporting tools for IoTDB (similar to N The instructions for using the visualization console tool can be found in the [Instructions](../Tools-System/Monitor-Tool.md) section of the document. -## Installation Preparation +## 1. Installation Preparation | Preparation Content | Name | Version Requirements | Link | | :----------------------: | :-------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | @@ -39,9 +39,9 @@ The instructions for using the visualization console tool can be found in the [I | Database | IoTDB | Requires V1.2.0 Enterprise Edition and above | You can contact business or technical support to obtain | | Console | IoTDB-Workbench-`` | - | You can choose according to the appendix version comparison table and contact business or technical support to obtain it | -## Installation Steps +## 2. Installation Steps -### Step 1: IoTDB enables monitoring indicator collection +### 2.1 IoTDB enables monitoring indicator collection 1. Open the monitoring configuration item. The configuration items related to monitoring in IoTDB are disabled by default. Before deploying the monitoring panel, you need to open the relevant configuration items (note that the service needs to be restarted after enabling monitoring configuration). @@ -111,7 +111,7 @@ The instructions for using the visualization console tool can be found in the [I ![](/img/%E5%90%AF%E5%8A%A8.png) -### Step 2: Install and configure Prometheus +### 2.2 Install and configure Prometheus 1. Download the Prometheus installation package, which requires installation of V2.30.3 and above. 
You can go to the Prometheus official website to download it (https://prometheus.io/docs/introduction/first_steps/) 2. Unzip the installation package and enter the unzipped folder: @@ -157,7 +157,7 @@ The instructions for using the visualization console tool can be found in the [I -### Step 3: Install Workbench +### 2.3 Install Workbench 1. Enter the config directory of iotdb Workbench -`` @@ -190,7 +190,7 @@ The instructions for using the visualization console tool can be found in the [I ![](/img/workbench-en.png) -### Step 4: Configure Instance Information +### 2.4 Configure Instance Information 1. Configure instance information: You only need to fill in the following information to connect to the instance @@ -210,7 +210,7 @@ The instructions for using the visualization console tool can be found in the [I ![](/img/workbench-en-2.png) -## Appendix: IoTDB and Workbench Version Comparison Table +## 3. Appendix: IoTDB and Workbench Version Comparison Table | Workbench Version Number | Release Note | Supports IoTDB Versions | | :------------------------: | :------------------------------------------------------------: | :-------------------------: | diff --git a/src/UserGuide/Master/Tree/FAQ/Frequently-asked-questions.md b/src/UserGuide/Master/Tree/FAQ/Frequently-asked-questions.md index de789a04c..abb80ab95 100644 --- a/src/UserGuide/Master/Tree/FAQ/Frequently-asked-questions.md +++ b/src/UserGuide/Master/Tree/FAQ/Frequently-asked-questions.md @@ -21,9 +21,9 @@ # Frequently Asked Questions -## General FAQ +## 1. General FAQ -### How can I identify my version of IoTDB? +### 1.1 How can I identify my version of IoTDB? There are several ways to identify the version of IoTDB that you are using: @@ -65,7 +65,7 @@ Total line number = 1 It costs 0.241s ``` -### Where can I find IoTDB logs? +### 1.2 Where can I find IoTDB logs? Suppose your root directory is: @@ -87,11 +87,11 @@ Let `$IOTDB_CLI_HOME = /workspace/iotdb/cli/target/iotdb-cli-{project.version}` By default settings, the logs are stored under ```IOTDB_HOME/logs```. You can change log level and storage path by configuring ```logback.xml``` under ```IOTDB_HOME/conf```. -### Where can I find IoTDB data files? +### 1.3 Where can I find IoTDB data files? By default settings, the data files (including tsfile, metadata, and WAL files) are stored under ```IOTDB_HOME/data/datanode```. -### How do I know how many time series are stored in IoTDB? +### 1.4 How do I know how many time series are stored in IoTDB? Use IoTDB's Command Line Interface: @@ -114,15 +114,15 @@ If you are using Linux, you can use the following shell command: > 6 ``` -### Can I use Hadoop and Spark to read TsFile in IoTDB? +### 1.5 Can I use Hadoop and Spark to read TsFile in IoTDB? Yes. IoTDB has intense integration with Open Source Ecosystem. IoTDB supports [Hadoop](https://github.com/apache/iotdb-extras/tree/master/connectors/hadoop), [Spark](https://github.com/apache/iotdb-extras/tree/master/connectors/spark-iotdb-connector) and [Grafana](https://github.com/apache/iotdb-extras/tree/master/connectors/grafana-connector) visualization tool. -### How does IoTDB handle duplicate points? +### 1.6 How does IoTDB handle duplicate points? A data point is uniquely identified by a full time series path (e.g. ```root.vehicle.d0.s0```) and timestamp. If you submit a new point with the same path and timestamp as an existing point, IoTDB updates the value of this point instead of inserting a new point. -### How can I tell what type of the specific timeseries? 
+### 1.7 How can I tell what type of the specific timeseries? Use ```SHOW TIMESERIES ``` SQL in IoTDB's Command Line Interface: @@ -144,7 +144,7 @@ Otherwise, you can also use wildcard in timeseries path: IoTDB> show timeseries root.fit.d1.* ``` -### How can I change IoTDB's Cli time display format? +### 1.8 How can I change IoTDB's Cli time display format? The default IoTDB's Cli time display format is readable (e.g. ```1970-01-01T08:00:00.001```), if you want to display time in timestamp type or other readable format, add parameter ```-disableISO8601``` in start command: @@ -152,12 +152,12 @@ The default IoTDB's Cli time display format is readable (e.g. ```1970-01-01T08:0 > $IOTDB_CLI_HOME/sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root -disableISO8601 ``` -### How to handle error `IndexOutOfBoundsException` from `org.apache.ratis.grpc.server.GrpcLogAppender`? +### 1.9 How to handle error `IndexOutOfBoundsException` from `org.apache.ratis.grpc.server.GrpcLogAppender`? This is an internal error log from Ratis 2.4.1, our dependency, and no impact on data writes or reads is expected. It has been reported to the Ratis community and will be fixed in the future releases. -### How to deal with estimated out of memory errors? +### 1.10 How to deal with estimated out of memory errors? Report an error message: ``` @@ -179,9 +179,9 @@ Some possible improvement items: It is an internal error introduced by Ratis 2.4.1 dependency, and we can safely ignore this exception as it will not affect normal operations. We will fix this message in the incoming releases. -## FAQ for Cluster Setup +## 2. FAQ for Cluster Setup -### Cluster StartUp and Stop +### 2.1 Cluster StartUp and Stop #### Failed to start ConfigNode for the first time, how to find the reason? @@ -222,7 +222,7 @@ not affect normal operations. We will fix this message in the incoming releases. - The default RPC address of 0.13 is `0.0.0.0`, but the default RPC address of 1.0 is `127.0.0.1`. -### Cluster Restart +### 2.2 Cluster Restart #### How to restart any ConfigNode in the cluster? @@ -246,7 +246,7 @@ not affect normal operations. We will fix this message in the incoming releases. - Can't. The running result will be "The port is already occupied". -### Cluster Maintenance +### 2.3 Cluster Maintenance #### How to find the reason when Show cluster failed, and error logs like "please check server status" are shown? diff --git a/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_apache.md b/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_apache.md index 8c93bd738..71f394c8f 100644 --- a/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_apache.md +++ b/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_apache.md @@ -30,7 +30,7 @@ Apache IoTDB is a low-cost, high-performance native temporal database for the In - Installation, deployment, and usage documentation: [QuickStart](../QuickStart/QuickStart_apache.md) -## Product Components +## 1. Product Components IoTDB products consist of several components that help users efficiently manage and analyze the massive amount of time-series data generated by the IoT. @@ -46,7 +46,7 @@ IoTDB products consist of several components that help users efficiently manage 3. Time-series Model Training and Inference Integrated Engine (IoTDB AINode): For intelligent analysis scenarios, IoTDB provides the AINode time-series model training and inference integrated engine, which offers a complete set of time-series data analysis tools. 
The underlying engine supports model training tasks and data management, including machine learning and deep learning. With these tools, users can conduct in-depth analysis of the data stored in IoTDB and extract its value. -## Product Features +## 2. Product Features TimechoDB has the following advantages and characteristics: @@ -66,7 +66,7 @@ TimechoDB has the following advantages and characteristics: - Rich ecological environment docking: Supports docking with big data ecosystem components such as Hadoop, Spark, and supports equipment management and visualization tools such as Grafana, Thingsboard, DataEase. -## Commercial version +## 3. Commercial version Timecho provides the original commercial product TimechoDB based on the open source version of Apache IoTDB, providing enterprise level products and services for enterprises and commercial customers. It can solve various problems encountered by enterprises when building IoT big data platforms to manage time-series data, such as complex application scenarios, large data volumes, high sampling frequencies, high amount of unaligned data, long data processing time, diverse analysis requirements, and high storage and operation costs. diff --git a/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_timecho.md b/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_timecho.md index d798b4e63..7c03f8a40 100644 --- a/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_timecho.md +++ b/src/UserGuide/Master/Tree/IoTDB-Introduction/IoTDB-Introduction_timecho.md @@ -28,7 +28,7 @@ Timecho provides a more diverse range of product features, stronger performance - Download 、Deployment and Usage:[QuickStart](../QuickStart/QuickStart_timecho.md) -## Product Components +## 1. Product Components Timecho products is composed of several components, covering the entire time-series data lifecycle from data collection, data management to data analysis & application, helping users efficiently manage and analyze the massive amount of time-series data generated by the IoT. @@ -45,7 +45,7 @@ Timecho products is composed of several components, covering the entire time-ser 4. **Data collection**: To more conveniently dock with various industrial collection scenarios, Timecho provides data collection access services, supporting multiple protocols and formats, which can access data generated by various sensors and devices, while also supporting features such as breakpoint resumption and network barrier penetration. It is more adapted to the characteristics of difficult configuration, slow transmission, and weak network in the industrial field collection process, making the user's data collection simpler and more efficient. -## Product Features +## 2. Product Features TimechoDB has the following advantages and characteristics: @@ -65,9 +65,9 @@ TimechoDB has the following advantages and characteristics: - Rich ecological environment docking: Supports docking with big data ecosystem components such as Hadoop, Spark, and supports equipment management and visualization tools such as Grafana, Thingsboard, DataEase. -## Enterprise characteristics +## 3. Enterprise characteristics -### Higher level product features +### 3.1 Higher level product features Building on the open-source version, TimechoDB offers a range of advanced product features, with native upgrades and optimizations at the kernel level for industrial production scenarios. 
These include multi-level storage, cloud-edge collaboration, visualization tools, and security enhancements, allowing users to focus on business development without having to manage the underlying details. This simplifies and strengthens industrial production and brings greater economic benefits to enterprises. For example:

@@ -211,11 +211,11 @@ The detailed functional comparison is as follows:

</div>
-### More efficient/stable product performance +### 3.2 More efficient/stable product performance TimechoDB has optimized stability and performance on the basis of the open source version. With technical support from the enterprise version, it can achieve more than 10 times performance improvement and has the performance advantage of timely fault recovery. -### More User-Friendly Tool System +### 3.3 More User-Friendly Tool System TimechoDB will provide users with a simpler and more user-friendly tool system. Through products such as the Cluster Monitoring Panel (Grafana), Database Console (Workbench), and Cluster Management Tool (Deploy Tool, abbreviated as IoTD), it will help users quickly deploy, manage, and monitor database clusters, reduce the work/learning costs of operation and maintenance personnel, simplify database operation and maintenance work, and make the operation and maintenance process more convenient and efficient. @@ -256,12 +256,12 @@ TimechoDB will provide users with a simpler and more user-friendly tool system.  -### More professional enterprise technical services +### 3.4 More professional enterprise technical services TimechoDB customers provide powerful original factory services, including but not limited to on-site installation and training, expert consultant consultation, on-site emergency assistance, software upgrades, online self-service, remote support, and guidance on using the latest development version. At the same time, in order to make TimechoDB more suitable for industrial production scenarios, we will recommend modeling solutions, optimize read-write performance, optimize compression ratios, recommend database configurations, and provide other technical support based on the actual data structure and read-write load of the enterprise. If encountering industrial customization scenarios that are not covered by some products, TimechoDB will provide customized development tools based on user characteristics. Compared to the open source version, TimechoDB provides a faster release frequency every 2-3 months. At the same time, it offers day level exclusive fixes for urgent customer issues to ensure stable production environments. -### More compatible localization adaptation +### 3.5 More compatible localization adaptation The TimechoDB code is self-developed and controllable, and is compatible with most mainstream information and creative products (CPU, operating system, etc.), and has completed compatibility certification with multiple manufacturers to ensure product compliance and security. \ No newline at end of file diff --git a/src/UserGuide/Master/Tree/IoTDB-Introduction/Scenario.md b/src/UserGuide/Master/Tree/IoTDB-Introduction/Scenario.md index b295af566..6a84569aa 100644 --- a/src/UserGuide/Master/Tree/IoTDB-Introduction/Scenario.md +++ b/src/UserGuide/Master/Tree/IoTDB-Introduction/Scenario.md @@ -21,9 +21,9 @@ # Scenario -## Application 1: Internet of Vehicles +## 1. Internet of Vehicles -### Background +### 1.1 Background > - Challenge: a large number of vehicles and time series @@ -31,7 +31,7 @@ A car company has a huge business volume and needs to deal with a large number o In the original architecture, the HBase cluster was used as the storage database. The query delay was high, and the system maintenance was difficult and costly. The HBase cluster cannot meet the demand. On the contrary, IoTDB supports high-frequency data writing with millions of measurement points and millisecond-level query response speed. 
The efficient data processing capability allows users to obtain the required data quickly and accurately. Therefore, IoTDB is chosen as the data storage layer, which has a lightweight architecture, reduces operation and maintenance costs, and supports elastic expansion and contraction and high availability to ensure system stability and availability. -### Architecture +### 1.2 Architecture The data management architecture of the car company using IoTDB as the time-series data storage engine is shown in the figure below. @@ -40,9 +40,9 @@ The data management architecture of the car company using IoTDB as the time-seri The vehicle data is encoded based on TCP and industrial protocols and sent to the edge gateway, and the gateway sends the data to the message queue Kafka cluster, decoupling the two ends of production and consumption. Kafka sends data to Flink for real-time processing, and the processed data is written into IoTDB. Both historical data and latest data are queried in IoTDB, and finally the data flows into the visualization platform through API for application. -## Application 2: Intelligent Operation and Maintenance +## 2. Intelligent Operation and Maintenance -### Background +### 2.1 Background A steel factory aims to build a low-cost, large-scale access-capable remote intelligent operation and maintenance software and hardware platform, access hundreds of production lines, more than one million devices, and tens of millions of time series, to achieve remote coverage of intelligent operation and maintenance. @@ -55,30 +55,30 @@ There are many challenges in this process: After selecting IoTDB as the storage database of the intelligent operation and maintenance platform, it can stably write multi-frequency and high-frequency acquisition data, covering the entire steel process, and use a composite compression algorithm to reduce the data size by more than 10 times, saving costs. IoTDB also effectively supports downsampling query of historical data of more than 10 years, helping enterprises to mine data trends and assist enterprises in long-term strategic analysis. -### Architecture +### 2.2 Architecture The figure below shows the architecture design of the intelligent operation and maintenance platform of the steel plant. ![img](/img/architecture2.jpg) -## Application 3: Smart Factory +## 3. Smart Factory -### Background +### 3.1 Background > - Challenge:Cloud-edge collaboration A cigarette factory hopes to upgrade from a "traditional factory" to a "high-end factory". It uses the Internet of Things and equipment monitoring technology to strengthen information management and services to realize the free flow of data within the enterprise and to help improve productivity and lower operating costs. -### Architecture +### 3.2 Architecture The figure below shows the factory's IoT system architecture. IoTDB runs through the three-level IoT platform of the company, factory, and workshop to realize unified joint debugging and joint control of equipment. The data at the workshop level is collected, processed and stored in real time through the IoTDB at the edge layer, and a series of analysis tasks are realized. The preprocessed data is sent to the IoTDB at the platform layer for data governance at the business level, such as device management, connection management, and service support. Eventually, the data will be integrated into the IoTDB at the group level for comprehensive analysis and decision-making across the organization. 
![img](/img/architecture3.jpg) -## Application 4: Condition monitoring +## 4. Condition monitoring -### Background +### 4.1 Background > - Challenge: Smart heating, cost reduction and efficiency increase @@ -86,7 +86,7 @@ A power plant needs to monitor tens of thousands of measuring points of main and After using IoTDB as the storage and analysis engine, combined with meteorological data, building control data, household control data, heat exchange station data, official website data, heat source side data, etc., all data are time-aligned in IoTDB to provide reliable data basis to realize smart heating. At the same time, it also solves the problem of monitoring the working conditions of various important components in the relevant heating process, such as on-demand billing and pipe network, heating station, etc., to reduce manpower input. -### Architecture +### 4.2 Architecture The figure below shows the data management architecture of the power plant in the heating scene. diff --git a/src/UserGuide/Master/Tree/QuickStart/QuickStart_apache.md b/src/UserGuide/Master/Tree/QuickStart/QuickStart_apache.md index bfb8e98af..b9fe6b362 100644 --- a/src/UserGuide/Master/Tree/QuickStart/QuickStart_apache.md +++ b/src/UserGuide/Master/Tree/QuickStart/QuickStart_apache.md @@ -24,7 +24,7 @@ This document will guide you through methods to get started quickly with IoTDB. -## How to Install and Deploy? +## 1. How to Install and Deploy? This guide will assist you in quickly installing and deploying IoTDB. You can quickly navigate to the content you need to review through the following document links: @@ -43,7 +43,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu > ❗️Note: We currently still recommend direct installation and deployment on physical/virtual machines. For Docker deployment, please refer to [Docker Deployment](../Deployment-and-Maintenance/Docker-Deployment_apache.md) -## How to Use IoTDB? +## 2. How to Use IoTDB? 1. Database Modeling Design: Database modeling is a crucial step in creating a database system, involving the design of data structures and relationships to ensure that the organization of data meets the needs of specific applications. The following documents will help you quickly understand IoTDB's modeling design: @@ -67,7 +67,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu 5. API: IoTDB provides multiple application programming interfaces (API) for developers to interact with IoTDB in their applications, and currently supports [Java Native API](../API/Programming-Java-Native-API.md)、[Python Native API](../API/Programming-Python-Native-API.md)、[C++ Native API](../API/Programming-Cpp-Native-API.md) ,For more API, please refer to the official website 【API】 and other chapters -## What other convenient tools are available? +## 3. What other convenient tools are available? In addition to its rich features, IoTDB also has a comprehensive range of tools in its surrounding system. This document will help you quickly use the peripheral tool system : @@ -77,7 +77,7 @@ In addition to its rich features, IoTDB also has a comprehensive range of tools - Data Export Script: For different scenarios, IoTDB provides users with multiple ways to batch export data. For specific usage instructions, please refer to: [Data Export](../Tools-System/Data-Export-Tool.md) -## Want to Learn More About the Technical Details? +## 4. Want to Learn More About the Technical Details? 
If you are interested in delving deeper into the technical aspects of IoTDB, you can refer to the following documents: @@ -87,6 +87,6 @@ If you are interested in delving deeper into the technical aspects of IoTDB, you - Data Partitioning and Load Balancing: IoTDB has meticulously designed data partitioning strategies and load balancing algorithms based on the characteristics of time series data, enhancing the availability and performance of the cluster. For more information, please refer to: [Data Partitioning and Load Balancing](../Technical-Insider/Cluster-data-partitioning.md) -## Encountering problems during use? +## 5. Encountering problems during use? If you encounter difficulties during installation or use, you can move to [Frequently Asked Questions](../FAQ/Frequently-asked-questions.md) View in the middle diff --git a/src/UserGuide/Master/Tree/QuickStart/QuickStart_timecho.md b/src/UserGuide/Master/Tree/QuickStart/QuickStart_timecho.md index 3728903a5..d0feadb25 100644 --- a/src/UserGuide/Master/Tree/QuickStart/QuickStart_timecho.md +++ b/src/UserGuide/Master/Tree/QuickStart/QuickStart_timecho.md @@ -24,7 +24,7 @@ This document will guide you through methods to get started quickly with IoTDB. -## How to Install and Deploy? +## 1. How to Install and Deploy? This guide will assist you in quickly installing and deploying IoTDB. You can quickly navigate to the content you need to review through the following document links: @@ -50,7 +50,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu - Workbench: It is the visual interface of IoTDB,Support providing through interface interaction Operate Metadata、Query Data、Data Visualization and other functions, help users use the database easily and efficiently, and the installation steps can be viewed [Workbench Deployment](../Deployment-and-Maintenance/workbench-deployment_timecho.md) -## How to Use IoTDB? +## 2. How to Use IoTDB? 1. Database Modeling Design: Database modeling is a crucial step in creating a database system, involving the design of data structures and relationships to ensure that the organization of data meets the needs of specific applications. The following documents will help you quickly understand IoTDB's modeling design: @@ -78,7 +78,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu 5. API: IoTDB provides multiple application programming interfaces (API) for developers to interact with IoTDB in their applications, and currently supports[ Java Native API](../API/Programming-Java-Native-API.md)、[Python Native API](../API/Programming-Python-Native-API.md)、[C++ Native API](../API/Programming-Cpp-Native-API.md)、[Go Native API](../API/Programming-Go-Native-API.md), For more API, please refer to the official website 【API】 and other chapters -## What other convenient tools are available? +## 3. What other convenient tools are available? In addition to its rich features, IoTDB also has a comprehensive range of tools in its surrounding system. This document will help you quickly use the peripheral tool system : @@ -93,7 +93,7 @@ In addition to its rich features, IoTDB also has a comprehensive range of tools - Data Export Script: For different scenarios, IoTDB provides users with multiple ways to batch export data. For specific usage instructions, please refer to: [Data Export](../Tools-System/Data-Export-Tool.md) -## Want to Learn More About the Technical Details? +## 4. Want to Learn More About the Technical Details? 
If you are interested in delving deeper into the technical aspects of IoTDB, you can refer to the following documents: @@ -103,6 +103,6 @@ If you are interested in delving deeper into the technical aspects of IoTDB, you - Data Partitioning and Load Balancing: IoTDB has meticulously designed data partitioning strategies and load balancing algorithms based on the characteristics of time series data, enhancing the availability and performance of the cluster. For more information, please refer to: [Data Partitionin & Load Balancing](../Technical-Insider/Cluster-data-partitioning.md) -## Encountering problems during use? +## 5. Encountering problems during use? If you encounter difficulties during installation or use, you can move to [Frequently Asked Questions](../FAQ/Frequently-asked-questions.md) View in the middle \ No newline at end of file diff --git a/src/UserGuide/Master/Tree/Technical-Insider/Cluster-data-partitioning.md b/src/UserGuide/Master/Tree/Technical-Insider/Cluster-data-partitioning.md index 479f95527..11ee2be4e 100644 --- a/src/UserGuide/Master/Tree/Technical-Insider/Cluster-data-partitioning.md +++ b/src/UserGuide/Master/Tree/Technical-Insider/Cluster-data-partitioning.md @@ -22,10 +22,10 @@ # Load Balance This document introduces the partitioning strategies and load balance strategies in IoTDB. According to the characteristics of time series data, IoTDB partitions them by series and time dimensions. Combining a series partition with a time partition creates a partition, the unit of division. To enhance throughput and reduce management costs, these partitions are evenly allocated to RegionGroups, which serve as the unit of replication. The RegionGroup's Regions then determine the storage location, with the leader Region managing the primary load. During this process, the Region placement strategy determines which nodes will host the replicas, while the leader selection strategy designates which Region will act as the leader. -## Partitioning Strategy & Partition Allocation +## 1. Partitioning Strategy & Partition Allocation IoTDB implements tailored partitioning algorithms for time series data. Building on this foundation, the partition information cached on both ConfigNodes and DataNodes is not only manageable in size but also clearly differentiated between hot and cold. Subsequently, balanced partitions are evenly allocated across the cluster's RegionGroups to achieve storage balance. -### Partitioning Strategy +### 1.1 Partitioning Strategy IoTDB maps each sensor in the production environment to a time series. The time series are then partitioned using the series partitioning algorithm to manage their schema, and combined with the time partitioning algorithm to manage their data. The following figure illustrates how IoTDB partitions time series data. @@ -53,7 +53,7 @@ Since the series partitioning algorithm evenly partitions the time series, each #### Data Partitioning Combining a series partition with a time partition creates a data partition. Since the series partitioning algorithm evenly partitions the time series, the load of data partitions within a specified time partition remains balanced. These data partitions are then evenly allocated across the DataRegionGroups to achieve balanced data distribution. -### Partition Allocation +### 1.2 Partition Allocation IoTDB uses RegionGroups to enable elastic storage of time series, with the number of RegionGroups in the cluster determined by the total resources available across all DataNodes. 
Since the number of RegionGroups is dynamic, IoTDB can easily scale out. Both the SchemaRegionGroup and DataRegionGroup follow the same partition allocation algorithm, which evenly splits all series partitions. The following figure demonstrates the partition allocation process, where the dynamic RegionGroups match the continuously expanding time series and cluster.
 
@@ -70,10 +70,10 @@ Both the SchemaRegionGroup and the DataRegionGroup follow the same allocation al
 
 Notably, IoTDB effectively leverages the characteristics of time series data. When the TTL (Time to Live) is configured, IoTDB enables migration-free elastic storage for time series data. This feature facilitates cluster expansion while minimizing the impact on online operations. The figures above illustrate an instance of this feature: newly created data partitions are evenly allocated to each DataRegion, and expired data are automatically archived. As a result, the cluster's storage will eventually remain balanced.
 
-## Balance Strategy
+## 2. Balance Strategy
 To enhance the cluster's availability and performance, IoTDB employs sophisticated storage load and computing load balance algorithms.
 
-### Storage Load Balance
+### 2.1 Storage Load Balance
 
 The number of Regions held by a DataNode reflects its storage load. If the difference in the number of Regions across DataNodes is relatively large, the DataNode with more Regions is likely to become a storage bottleneck. Although a straightforward Round Robin placement algorithm can achieve storage balance by ensuring that each DataNode hosts an equal number of Regions, it compromises the cluster's fault tolerance, as illustrated below:
 
@@ -88,7 +88,7 @@ In this scenario, if DataNode $n_2$ fails, the load previously handled by DataNo
 
 To address this issue, IoTDB employs a Region placement algorithm that not only evenly distributes Regions across all DataNodes but also ensures that each DataNode can offload its storage to a sufficient number of other DataNodes in the event of a failure. As a result, the cluster achieves balanced storage distribution and a high level of fault tolerance, ensuring its availability.
 
-### Computing Load Balance
+### 2.2 Computing Load Balance
 
 The number of leader Regions held by a DataNode reflects its computing load. If the difference in the number of leaders across DataNodes is relatively large, the DataNode with more leaders is likely to become a computing bottleneck. If the leader selection process is conducted using a straightforward Greedy algorithm, the result may be an unbalanced leader distribution when the Regions are fault-tolerantly placed, as demonstrated below:
 
@@ -103,7 +103,7 @@ Please note that all the above steps strictly follow the Greedy algorithm. Howev
 
 To address this issue, IoTDB employs a leader selection algorithm that can consistently balance the cluster's leader distribution. Consequently, the cluster achieves balanced computing load distribution, ensuring its performance.
 
-## Source Code
+## 3. 
Source Code + [Data Partitioning](https://github.com/apache/iotdb/tree/master/iotdb-core/node-commons/src/main/java/org/apache/iotdb/commons/partition) + [Partition Allocation](https://github.com/apache/iotdb/tree/master/iotdb-core/confignode/src/main/java/org/apache/iotdb/confignode/manager/load/balancer/partition) + [Region Placement](https://github.com/apache/iotdb/tree/master/iotdb-core/confignode/src/main/java/org/apache/iotdb/confignode/manager/load/balancer/region) diff --git a/src/UserGuide/Master/Tree/Technical-Insider/Encoding-and-Compression.md b/src/UserGuide/Master/Tree/Technical-Insider/Encoding-and-Compression.md index d987c47a2..632f29674 100644 --- a/src/UserGuide/Master/Tree/Technical-Insider/Encoding-and-Compression.md +++ b/src/UserGuide/Master/Tree/Technical-Insider/Encoding-and-Compression.md @@ -22,7 +22,7 @@ # Encoding and Compression -## Encoding Methods +## 1. Encoding Methods To improve the efficiency of data storage, it is necessary to encode data during data writing, thereby reducing the amount of disk space used. In the process of writing and reading data, the amount of data involved in the I/O operations can be reduced to improve performance. IoTDB supports the following encoding methods for different data types: @@ -72,7 +72,7 @@ To improve the efficiency of data storage, it is necessary to encode data during RLBE is a lossless encoding that combines the ideas of differential encoding, bit-packing encoding, run-length encoding, Fibonacci encoding and concatenation. RLBE encoding is suitable for time series data with increasing and small increment value, and is not suitable for time series data with large fluctuation. -### Correspondence between data type and encoding +### 1.1 Correspondence between data type and encoding The five encodings described in the previous sections are applicable to different data types. If the correspondence is wrong, the time series cannot be created correctly. @@ -99,11 +99,11 @@ As shown below, the second-order difference encoding does not support the Boolea IoTDB> create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF Msg: 507: encoding TS_2DIFF does not support BOOLEAN ``` -## Compression +## 2. Compression When the time series is written and encoded as binary data according to the specified type, IoTDB compresses the data using compression technology to further improve space storage efficiency. Although both encoding and compression are designed to improve storage efficiency, encoding techniques are usually available only for specific data types (e.g., second-order differential encoding is only suitable for INT32 or INT64 data type, and storing floating-point numbers requires multiplying them by 10m to convert to integers), after which the data is converted to a binary stream. The compression method (SNAPPY) compresses the binary stream, so the use of the compression method is no longer limited by the data type. -### Basic Compression Methods +### 2.1 Basic Compression Methods IoTDB allows you to specify the compression method of the column when creating a time series, and supports the following compression methods: @@ -121,7 +121,7 @@ IoTDB allows you to specify the compression method of the column when creating a The specified syntax for compression is detailed in [Create Timeseries Statement](../SQL-Manual/SQL-Manual.md). 
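As a quick, self-contained illustration of the point above (not taken from the compression documentation itself; it assumes a local single-node deployment with default port and credentials), the compressor is fixed per time series at creation time, here issued through the Java native `Session` API with GORILLA encoding and SNAPPY compression chosen purely as illustrative values:

```java
import org.apache.iotdb.session.Session;

public class CompressorExample {
  public static void main(String[] args) throws Exception {
    // Assumed defaults: local DataNode, default RPC port and credentials.
    Session session = new Session("127.0.0.1", 6667, "root", "root");
    session.open();

    // The compressor is specified together with the data type and encoding
    // when the time series is created; GORILLA/SNAPPY are illustrative choices.
    session.executeNonQueryStatement(
        "CREATE TIMESERIES root.ln.wf02.wt02.temperature "
            + "WITH DATATYPE=FLOAT, ENCODING=GORILLA, COMPRESSOR=SNAPPY");

    session.close();
  }
}
```

The same statement can of course be issued directly from the CLI; the full grammar is in the Create Timeseries Statement page referenced above.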
-### Compression Ratio Statistics +### 2.2 Compression Ratio Statistics Compression ratio statistics file: data/datanode/system/compression_ratio diff --git a/src/UserGuide/latest/API/Programming-CSharp-Native-API.md b/src/UserGuide/latest/API/Programming-CSharp-Native-API.md index a4f208f7c..2a85b3c32 100644 --- a/src/UserGuide/latest/API/Programming-CSharp-Native-API.md +++ b/src/UserGuide/latest/API/Programming-CSharp-Native-API.md @@ -21,9 +21,9 @@ # C# Native API -## Installation +## 1. Installation -### Install from NuGet Package +### 1.1 Install from NuGet Package We have prepared Nuget Package for C# users. Users can directly install the client through .NET CLI. [The link of our NuGet Package is here](https://www.nuget.org/packages/Apache.IoTDB/). Run the following command in the command line to complete installation @@ -33,18 +33,18 @@ dotnet add package Apache.IoTDB Note that the `Apache.IoTDB` package only supports versions greater than `.net framework 4.6.1`. -## Prerequisites +## 2. Prerequisites .NET SDK Version >= 5.0 .NET Framework >= 4.6.1 -## How to Use the Client (Quick Start) +## 3. How to Use the Client (Quick Start) Users can quickly get started by referring to the use cases under the Apache-IoTDB-Client-CSharp-UserCase directory. These use cases serve as a useful resource for getting familiar with the client's functionality and capabilities. For those who wish to delve deeper into the client's usage and explore more advanced features, the samples directory contains additional code samples. -## Developer environment requirements for iotdb-client-csharp +## 4. Developer environment requirements for iotdb-client-csharp ``` .NET SDK Version >= 5.0 @@ -53,17 +53,17 @@ ApacheThrift >= 0.14.1 NLog >= 4.7.9 ``` -### OS +### 4.1 OS * Linux, Macos or other unix-like OS * Windows+bash(WSL, cygwin, Git Bash) -### Command Line Tools +### 4.2 Command Line Tools * dotnet CLI * Thrift -## Basic interface description +## 5. Basic interface description The Session interface is semantically identical to other language clients @@ -101,7 +101,7 @@ await session_pool.InsertTabletAsync(tablet); await session_pool.Close(); ``` -## **Row Record** +## 6. **Row Record** - Encapsulate and abstract the `record` data in **IoTDB** - e.g. @@ -117,7 +117,7 @@ var rowRecord = new RowRecord(long timestamps, List values, List measurements); ``` -### **Tablet** +### 6.1 **Tablet** - A data structure similar to a table, containing several non empty data blocks of a device's rows。 - e.g. @@ -137,9 +137,9 @@ var tablet = -## **API** +## 7. 
**API** -### **Basic API** +### 7.1 **Basic API** | api name | parameters | notes | use example | | -------------- | ------------------------- | ------------------------ | ----------------------------- | @@ -151,7 +151,7 @@ var tablet = | SetTimeZone | string | set time zone | session_pool.GetTimeZone() | | GetTimeZone | null | get time zone | session_pool.GetTimeZone() | -### **Record API** +### 7.2 **Record API** | api name | parameters | notes | use example | | ----------------------------------- | ----------------------------- | ----------------------------------- | ------------------------------------------------------------ | @@ -162,7 +162,7 @@ var tablet = | TestInsertRecordAsync | string, RowRecord | test insert record | session_pool.TestInsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", rowRecord) | | TestInsertRecordsAsync | List\, List\ | test insert record | session_pool.TestInsertRecordsAsync(device_id, rowRecords) | -### **Tablet API** +### 7.3 **Tablet API** | api name | parameters | notes | use example | | ---------------------- | ------------ | -------------------- | -------------------------------------------- | @@ -171,14 +171,14 @@ var tablet = | TestInsertTabletAsync | Tablet | test insert tablet | session_pool.TestInsertTabletAsync(tablet) | | TestInsertTabletsAsync | List\ | test insert tablets | session_pool.TestInsertTabletsAsync(tablets) | -### **SQL API** +### 7.4 **SQL API** | api name | parameters | notes | use example | | ----------------------------- | ---------- | ------------------------------ | ------------------------------------------------------------ | | ExecuteQueryStatementAsync | string | execute sql query statement | session_pool.ExecuteQueryStatementAsync("select * from root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE where time<15"); | | ExecuteNonQueryStatementAsync | string | execute sql nonquery statement | session_pool.ExecuteNonQueryStatementAsync( "create timeseries root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE.status with datatype=BOOLEAN,encoding=PLAIN") | -### **Scheam API** +### 7.5 **Scheam API** | api name | parameters | notes | use example | | -------------------------- | ------------------------------------------------------------ | --------------------------- | ------------------------------------------------------------ | @@ -191,7 +191,7 @@ var tablet = | DeleteTimeSeriesAsync | string | delete time series | | | DeleteDataAsync | List\, long, long | delete data | session_pool.DeleteDataAsync(ts_path_lst, 2, 3) | -### **Other API** +### 7.6 **Other API** | api name | parameters | notes | use example | | -------------------------- | ---------- | --------------------------- | ---------------------------------------------------- | @@ -201,7 +201,7 @@ var tablet = [e.g.](https://github.com/apache/iotdb-client-csharp/tree/main/samples/Apache.IoTDB.Samples) -## SessionPool +## 8. SessionPool To implement concurrent client requests, we provide a `SessionPool` for the native interface. Since `SessionPool` itself is a superset of `Session`, when `SessionPool` is a When the `pool_size` parameter is set to 1, it reverts to the original `Session` diff --git a/src/UserGuide/latest/API/Programming-Cpp-Native-API.md b/src/UserGuide/latest/API/Programming-Cpp-Native-API.md index b462983d2..22c08bc3b 100644 --- a/src/UserGuide/latest/API/Programming-Cpp-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Cpp-Native-API.md @@ -21,7 +21,7 @@ # C++ Native API -## Dependencies +## 1. 
Dependencies - Java 8+ - Flex @@ -30,9 +30,9 @@ - OpenSSL 1.0+ - GCC 5.5.0+ -## Installation +## 2. Installation -### Install Required Dependencies +### 2.1 Install Required Dependencies - **MAC** 1. Install Bison: @@ -89,7 +89,7 @@ - Download and install [OpenSSL](http://slproweb.com/products/Win32OpenSSL.html). - Add the include directory under the installation directory to the PATH environment variable. -### Compilation +### 2.2 Compilation Clone the source code from git: ```shell @@ -131,7 +131,7 @@ Run Maven to compile in the IoTDB root directory: After successful compilation, the packaged library files will be located in `iotdb-client/client-cpp/target`, and you can find the compiled example program under `example/client-cpp-example/target`. -### Compilation Q&A +### 2.3 Compilation Q&A Q: What are the requirements for the environment on Linux? @@ -158,11 +158,11 @@ A: - Go back to the IoTDB code directory and run `.\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 15 2017"`. -## Native APIs +## 3. Native APIs Here we show the commonly used interfaces and their parameters in the Native API: -### Initialization +### 3.1 Initialization - Open a Session ```cpp @@ -180,7 +180,7 @@ Notice: this RPC compression status of client must comply with that of IoTDB ser void close(); ``` -### Data Definition Interface (DDL) +### 3.2 Data Definition Interface (DDL) #### Database Management @@ -302,7 +302,7 @@ std::vector showMeasurementsInTemplate(const std::string &template_ ``` -### Data Manipulation Interface (DML) +### 3.3 Data Manipulation Interface (DML) #### Insert @@ -384,7 +384,7 @@ void deleteData(const std::vector &paths, int64_t endTime); void deleteData(const std::vector &paths, int64_t startTime, int64_t endTime); ``` -### IoTDB-SQL Interface +### 3.4 IoTDB-SQL Interface - Execute query statement ```cpp @@ -397,7 +397,7 @@ void executeNonQueryStatement(const std::string &sql); ``` -## Examples +## 4. Examples The sample code of using these interfaces is in: @@ -406,16 +406,16 @@ The sample code of using these interfaces is in: If the compilation finishes successfully, the example project will be placed under `example/client-cpp-example/target` -## FAQ +## 5. FAQ -### on Mac +### 5.1 on Mac If errors occur when compiling thrift source code, try to downgrade your xcode-commandline from 12 to 11.5 see https://stackoverflow.com/questions/63592445/ld-unsupported-tapi-file-type-tapi-tbd-in-yaml-file/65518087#65518087 -### on Windows +### 5.2 on Windows When Building Thrift and downloading packages via "wget", a possible annoying issue may occur with error message looks like: diff --git a/src/UserGuide/latest/API/Programming-Data-Subscription.md b/src/UserGuide/latest/API/Programming-Data-Subscription.md index 9391c04a7..281cc9864 100644 --- a/src/UserGuide/latest/API/Programming-Data-Subscription.md +++ b/src/UserGuide/latest/API/Programming-Data-Subscription.md @@ -23,7 +23,7 @@ IoTDB provides powerful data subscription functionality, allowing users to access newly added data from IoTDB in real-time through subscription APIs. For detailed functional definitions and introductions:[Data subscription](../User-Manual/Data-subscription.md) -## 1 Core Steps +## 1. Core Steps 1. Create Topic: Create a Topic that includes the measurement points you wish to subscribe to. 2. Subscribe to Topic: Before a consumer subscribes to a topic, the topic must have been created, otherwise the subscription will fail. 
Consumers under the same consumer group will evenly distribute the data. @@ -31,7 +31,7 @@ IoTDB provides powerful data subscription functionality, allowing users to acces 4. Unsubscribe: When a consumer is closed, it will exit the corresponding consumer group and cancel all existing subscriptions. -## 2 Detailed Steps +## 2. Detailed Steps This section is used to illustrate the core development process and does not demonstrate all parameters and interfaces. For a comprehensive understanding of all features and parameters, please refer to: [Java Native API](../API/Programming-Java-Native-API.md#3-native-interface-description) @@ -182,7 +182,7 @@ public class DataConsumerExample { -## 3 Java Native API Description +## 3. Java Native API Description ### 3.1 Parameter List diff --git a/src/UserGuide/latest/API/Programming-Go-Native-API.md b/src/UserGuide/latest/API/Programming-Go-Native-API.md index b227ed672..baad278b4 100644 --- a/src/UserGuide/latest/API/Programming-Go-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Go-Native-API.md @@ -23,7 +23,7 @@ The Git repository for the Go Native API client is located [here](https://github.com/apache/iotdb-client-go/) -## Dependencies +## 1. Dependencies * golang >= 1.13 * make >= 3.0 @@ -32,7 +32,7 @@ The Git repository for the Go Native API client is located [here](https://github * Linux、Macos or other unix-like systems * Windows+bash (WSL、cygwin、Git Bash) -## Installation +## 2. Installation * go mod diff --git a/src/UserGuide/latest/API/Programming-JDBC.md b/src/UserGuide/latest/API/Programming-JDBC.md index 0251e469c..b599c3d4b 100644 --- a/src/UserGuide/latest/API/Programming-JDBC.md +++ b/src/UserGuide/latest/API/Programming-JDBC.md @@ -25,12 +25,12 @@ IT CAN NOT PROVIDE HIGH THROUGHPUT FOR WRITE OPERATIONS. PLEASE USE [Java Native API](./Programming-Java-Native-API.md) INSTEAD* -## Dependencies +## 1. Dependencies * JDK >= 1.8+ * Maven >= 3.9+ -## Installation +## 2. Installation In root directory: @@ -38,7 +38,7 @@ In root directory: mvn clean install -pl iotdb-client/jdbc -am -DskipTests ``` -## Use IoTDB JDBC with Maven +## 3. Use IoTDB JDBC with Maven ```xml @@ -50,7 +50,7 @@ mvn clean install -pl iotdb-client/jdbc -am -DskipTests ``` -## Coding Examples +## 4. Coding Examples This chapter provides an example of how to open a database connection, execute an SQL query, and display the results. diff --git a/src/UserGuide/latest/API/Programming-Java-Native-API.md b/src/UserGuide/latest/API/Programming-Java-Native-API.md index 2b2b68da3..53b445885 100644 --- a/src/UserGuide/latest/API/Programming-Java-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Java-Native-API.md @@ -23,14 +23,14 @@ In the native API of IoTDB, the `Session` is the core interface for interacting `SessionPool` is a connection pool for `Session`, and it is recommended to use `SessionPool` for programming. In scenarios with multi-threaded concurrency, `SessionPool` can manage and allocate connection resources effectively, thereby improving system performance and resource utilization efficiency. -## 1 Overview of Steps +## 1. Overview of Steps 1. Create a Connection Pool Instance: Initialize a SessionPool object to manage multiple Session instances. 2. Perform Operations: Directly obtain a Session instance from the SessionPool and execute database operations, without the need to open and close connections each time. 3. Close Connection Pool Resources: When database operations are no longer needed, close the SessionPool to release all related resources. 
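The sketch below ties these three steps together. It is a minimal example assuming a local single-node instance with default credentials; the package location of the query result wrapper differs slightly across client versions, so `var` is used for it here.

```java
import org.apache.iotdb.session.pool.SessionPool;

public class SessionPoolSketch {
  public static void main(String[] args) throws Exception {
    // Step 1: create one pool and share it across threads (pool size 3 here).
    SessionPool pool = new SessionPool("127.0.0.1", 6667, "root", "root", 3);
    try {
      // Step 2: operate directly on the pool; it borrows and returns
      // underlying sessions internally, so there is no per-call open/close.
      pool.executeNonQueryStatement(
          "INSERT INTO root.sg1.d1(timestamp, s1) VALUES (1, 1.0)");

      var dataSet = pool.executeQueryStatement("SELECT s1 FROM root.sg1.d1");
      try {
        while (dataSet.hasNext()) {
          System.out.println(dataSet.next());
        }
      } finally {
        // A result set holds a session from the pool and must be released.
        pool.closeResultSet(dataSet);
      }
    } finally {
      // Step 3: close the pool to release all underlying connections.
      pool.close();
    }
  }
}
```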
-## 2 Detailed Steps +## 2. Detailed Steps This section provides an overview of the core development process and does not demonstrate all parameters and interfaces. For a complete list of functionalities and parameters, please refer to:[Java Native API](./Programming-Java-Native-API.md#3-native-interface-description) or check the: [Source Code](https://github.com/apache/iotdb/tree/master/example/session/src/main/java/org/apache/iotdb) @@ -343,7 +343,7 @@ public class SessionPoolExample { } ``` -### 3 Native Interface Description +### 3. Native Interface Description #### 3.1 Parameter List diff --git a/src/UserGuide/latest/API/Programming-Kafka.md b/src/UserGuide/latest/API/Programming-Kafka.md index 0a041448f..aab8d3d21 100644 --- a/src/UserGuide/latest/API/Programming-Kafka.md +++ b/src/UserGuide/latest/API/Programming-Kafka.md @@ -23,9 +23,9 @@ [Apache Kafka](https://kafka.apache.org/) is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. -## Coding Example +## 1. Coding Example -### kafka Producer Producing Data Java Code Example +### 1.1 kafka Producer Producing Data Java Code Example ```java Properties props = new Properties(); @@ -39,7 +39,7 @@ producer.close(); ``` -### kafka Consumer Receiving Data Java Code Example +### 1.2 kafka Consumer Receiving Data Java Code Example ```java Properties props = new Properties(); @@ -53,7 +53,7 @@ ConsumerRecords records = kafkaConsumer.poll(Duration.ofSeconds(1)); ``` -### Example of Java Code Stored in IoTDB Server +### 1.3 Example of Java Code Stored in IoTDB Server ```java SessionPool pool = diff --git a/src/UserGuide/latest/API/Programming-MQTT.md b/src/UserGuide/latest/API/Programming-MQTT.md index 98fca63d4..953414c60 100644 --- a/src/UserGuide/latest/API/Programming-MQTT.md +++ b/src/UserGuide/latest/API/Programming-MQTT.md @@ -30,7 +30,7 @@ IoTDB server includes a built-in MQTT service that allows remote devices send me -## Built-in MQTT Service +## 1. Built-in MQTT Service The Built-in MQTT Service provide the ability of direct connection to IoTDB through MQTT. It listen the publish messages from MQTT clients and then write the data into storage immediately. The MQTT topic corresponds to IoTDB timeseries. @@ -58,7 +58,7 @@ or json array of the above two. -## MQTT Configurations +## 2. MQTT Configurations The IoTDB MQTT service load configurations from `${IOTDB_HOME}/${IOTDB_CONF}/iotdb-system.properties` by default. Configurations are as follows: @@ -73,7 +73,7 @@ Configurations are as follows: | mqtt_max_message_size | the max mqtt message size in byte| 1048576 | -## Coding Examples +## 3. Coding Examples The following is an example which a mqtt client send messages to IoTDB server. ```java @@ -101,7 +101,7 @@ connection.disconnect(); ``` -## Customize your MQTT Message Format +## 4. Customize your MQTT Message Format If you do not like the above Json format, you can customize your MQTT Message format by just writing several lines of codes. An example can be found in `example/mqtt-customize` project. 
diff --git a/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md b/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md index 35c7964cd..0c75c75ec 100644 --- a/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md +++ b/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md @@ -24,14 +24,14 @@ Apache IoTDB uses Thrift as a cross-language RPC-framework so access to IoTDB can be achieved through the interfaces provided by Thrift. This document will introduce how to generate a native Node.js interface that can be used to access IoTDB. -## Dependents +## 1. Dependents * JDK >= 1.8 * Node.js >= 16.0.0 * Linux、Macos or like unix * Windows+bash -## Generate the Node.js native interface +## 2. Generate the Node.js native interface 1. Find the `pom.xml` file in the root directory of the IoTDB source code folder. 2. Open the `pom.xml` file and find the following content: @@ -67,11 +67,11 @@ This document will introduce how to generate a native Node.js interface that can This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. The newly generated JavaScript sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs` in the various modules of the `iotdb-protocol` module. -## Using the Node.js native interface +## 3. Using the Node.js native interface Simply copy the files in `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs/` and `iotdb/iotdb-protocol/thrift-commons/target/generated-sources-nodejs/` into your project. -## rpc interface +## 4. rpc interface ``` // open a session diff --git a/src/UserGuide/latest/API/Programming-ODBC.md b/src/UserGuide/latest/API/Programming-ODBC.md index 7d2b9bb20..0d5bf67f1 100644 --- a/src/UserGuide/latest/API/Programming-ODBC.md +++ b/src/UserGuide/latest/API/Programming-ODBC.md @@ -22,19 +22,19 @@ # ODBC With IoTDB JDBC, IoTDB can be accessed using the ODBC-JDBC bridge. -## Dependencies +## 1. Dependencies * IoTDB-JDBC's jar-with-dependency package * ODBC-JDBC bridge (e.g. ZappySys JDBC Bridge) -## Deployment -### Preparing JDBC package +## 2. Deployment +### 2.1 Preparing JDBC package Download the source code of IoTDB, and execute the following command in root directory: ```shell mvn clean package -pl iotdb-client/jdbc -am -DskipTests -P get-jar-with-dependencies ``` Then, you can see the output `iotdb-jdbc-1.3.2-SNAPSHOT-jar-with-dependencies.jar` under `iotdb-client/jdbc/target` directory. -### Preparing ODBC-JDBC Bridge +### 2.2 Preparing ODBC-JDBC Bridge *Note: Here we only provide one kind of ODBC-JDBC bridge as the instance. Readers can use other ODBC-JDBC bridges to access IoTDB with the IOTDB-JDBC.* 1. **Download Zappy-Sys ODBC-JDBC Bridge**: Enter the https://zappysys.com/products/odbc-powerpack/odbc-jdbc-bridge-driver/ website, and click "download". diff --git a/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md b/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md index a1084df2a..e6d675042 100644 --- a/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md +++ b/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md @@ -21,11 +21,11 @@ # OPC UA Protocol -## OPC UA +## 1. 
OPC UA OPC UA is a technical specification used in the automation field for communication between different devices and systems, enabling cross platform, cross language, and cross network operations, providing a reliable and secure data exchange foundation for the Industrial Internet of Things. IoTDB supports OPC UA protocol, and IoTDB OPC Server supports both Client/Server and Pub/Sub communication modes. -### OPC UA Client/Server Mode +### 1.1 OPC UA Client/Server Mode - **Client/Server Mode**:In this mode, IoTDB's stream processing engine establishes a connection with the OPC UA Server via an OPC UA Sink. The OPC UA Server maintains data within its Address Space, from which IoTDB can request and retrieve data. Additionally, other OPC UA Clients can access the data on the server. @@ -40,7 +40,7 @@ OPC UA is a technical specification used in the automation field for communicati - Each measurement point is recorded as a variable node and the latest value in the current database is recorded. -### OPC UA Pub/Sub Mode +### 1.2 OPC UA Pub/Sub Mode - **Pub/Sub Mode**: In this mode, IoTDB's stream processing engine sends data change events to the OPC UA Server through an OPC UA Sink. These events are published to the server's message queue and managed through Event Nodes. Other OPC UA Clients can subscribe to these Event Nodes to receive notifications upon data changes. @@ -65,9 +65,9 @@ OPC UA is a technical specification used in the automation field for communicati - Events are only sent to clients that are already listening; if a client is not connected, the Event will be ignored. -## IoTDB OPC Server Startup method +## 2. IoTDB OPC Server Startup method -### Syntax +### 2.1 Syntax The syntax for creating the Sink is as follows: @@ -85,7 +85,7 @@ create pipe p1 ) ``` -### Parameters +### 2.2 Parameters | key | value | value range | required or not | default value | | :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- | @@ -98,7 +98,7 @@ create pipe p1 | sink.user | User for OPC UA, specified in the configuration | String | Optional | root | | sink.password | Password for OPC UA, specified in the configuration | String | Optional | root | -### 示例 +### 2.3 Example ```Bash create pipe p1 @@ -108,7 +108,7 @@ create pipe p1 start pipe p1; ``` -### Usage Limitations +### 2.4 Usage Limitations 1. **DataRegion Requirement**: The OPC UA server will only start if there is a DataRegion in IoTDB. For an empty IoTDB, a data entry is necessary for the OPC UA server to become effective. @@ -122,9 +122,9 @@ start pipe p1; 4. **Does not support deleting data and modifying measurement point types:** In Client Server mode, OPC UA cannot delete data or change data type settings. In Pub Sub mode, if data is deleted, information cannot be pushed to the client. -## IoTDB OPC Server Example +## 3. IoTDB OPC Server Example -### Client / Server Mode +### 3.1 Client / Server Mode #### Preparation Work @@ -174,7 +174,7 @@ insert into root.test.db(time, s2) values(now(), 2) -### Pub / Sub Mode +### 3.2 Pub / Sub Mode #### Preparation Work @@ -187,7 +187,7 @@ The code includes: - Client configuration and startup logic(ClientExampleRunner) - The parent class of ClientTest(ClientExample) -### Quick Start +### 3.3 Quick Start The steps are as follows: @@ -252,9 +252,9 @@ start pipe p1; -### Notes +### 3.4 Notes -1. 
**stand alone and cluster:**It is recommended to use a 1C1D (one coordinator and one data node) single machine version. If there are multiple DataNodes in the cluster, data may be sent in a scattered manner across various DataNodes, and it may not be possible to listen to all the data. +1. **stand alone and cluster:** It is recommended to use a 1C1D (one coordinator and one data node) single machine version. If there are multiple DataNodes in the cluster, data may be sent in a scattered manner across various DataNodes, and it may not be possible to listen to all the data. 2. **No Need to Operate Root Directory Certificates:** During the certificate operation process, there is no need to operate the `iotdb-server.pfx` certificate under the IoTDB security root directory and the `example-client.pfx` directory under the client security directory. When the Client and Server connect bidirectionally, they will send the root directory certificate to each other. If it is the first time the other party sees this certificate, it will be placed in the reject dir. If the certificate is in the trusted/certs, then the other party can trust it. diff --git a/src/UserGuide/latest/API/Programming-Python-Native-API.md b/src/UserGuide/latest/API/Programming-Python-Native-API.md index 8d74b41cf..01ade47f8 100644 --- a/src/UserGuide/latest/API/Programming-Python-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Python-Native-API.md @@ -21,13 +21,13 @@ # Python Native API -## Requirements +## 1. Requirements You have to install thrift (>=0.13) before using the package. -## How to use (Example) +## 2. How to use (Example) First, download the package: `pip3 install apache-iotdb` @@ -52,7 +52,7 @@ zone = session.get_time_zone() session.close() ``` -## Initialization +## 3. Initialization * Initialize a Session @@ -94,11 +94,11 @@ Notice: this RPC compression status of client must comply with that of IoTDB ser ```python session.close() ``` -## Managing Session through SessionPool +## 4. Managing Session through SessionPool Utilizing SessionPool to manage sessions eliminates the need to worry about session reuse. When the number of session connections reaches the maximum capacity of the pool, requests for acquiring a session will be blocked, and you can set the blocking wait time through parameters. After using a session, it should be returned to the SessionPool using the `putBack` method for proper management. -### Create SessionPool +### 4.1 Create SessionPool ```python pool_config = PoolConfig(host=ip,port=port, user_name=username, @@ -110,7 +110,7 @@ wait_timeout_in_ms = 3000 # # Create the connection pool session_pool = SessionPool(pool_config, max_pool_size, wait_timeout_in_ms) ``` -### Create a SessionPool using distributed nodes. +### 4.2 Create a SessionPool using distributed nodes. ```python pool_config = PoolConfig(node_urls=node_urls=["127.0.0.1:6667", "127.0.0.1:6668", "127.0.0.1:6669"], user_name=username, password=password, fetch_size=1024, @@ -118,7 +118,7 @@ pool_config = PoolConfig(node_urls=node_urls=["127.0.0.1:6667", "127.0.0.1:6668" max_pool_size = 5 wait_timeout_in_ms = 3000 ``` -### Acquiring a session through SessionPool and manually calling PutBack after use +### 4.3 Acquiring a session through SessionPool and manually calling PutBack after use ```python session = session_pool.get_session() @@ -132,9 +132,9 @@ session_pool.put_back(session) session_pool.close() ``` -## Data Definition Interface (DDL Interface) +## 5. 
Data Definition Interface (DDL Interface) -### Database Management +### 5.1 Database Management * CREATE DATABASE @@ -148,7 +148,7 @@ session.set_storage_group(group_name) session.delete_storage_group(group_name) session.delete_storage_groups(group_name_lst) ``` -### Timeseries Management +### 5.2 Timeseries Management * Create one or multiple timeseries @@ -184,9 +184,9 @@ session.delete_time_series(paths_list) session.check_time_series_exists(path) ``` -## Data Manipulation Interface (DML Interface) +## 6. Data Manipulation Interface (DML Interface) -### Insert +### 6.1 Insert It is recommended to use insertTablet to help improve write efficiency. @@ -310,7 +310,7 @@ session.insert_records( session.insert_records_of_one_device(device_id, time_list, measurements_list, data_types_list, values_list) ``` -### Insert with type inference +### 6.2 Insert with type inference When the data is of String type, we can use the following interface to perform type inference based on the value of the value itself. For example, if value is "true" , it can be automatically inferred to be a boolean type. If value is "3.2" , it can be automatically inferred as a flout type. Without type information, server has to do type inference, which may cost some time. @@ -320,7 +320,7 @@ When the data is of String type, we can use the following interface to perform t session.insert_str_record(device_id, timestamp, measurements, string_values) ``` -### Insert of Aligned Timeseries +### 6.3 Insert of Aligned Timeseries The Insert of aligned timeseries uses interfaces like insert_aligned_XXX, and others are similar to the above interfaces: @@ -331,7 +331,7 @@ The Insert of aligned timeseries uses interfaces like insert_aligned_XXX, and ot * insert_aligned_tablets -## IoTDB-SQL Interface +## 7. IoTDB-SQL Interface * Execute query statement @@ -351,8 +351,8 @@ session.execute_non_query_statement(sql) session.execute_statement(sql) ``` -## Schema Template -### Create Schema Template +## 8. Schema Template +### 8.1 Create Schema Template The step for creating a metadata template is as follows 1. Create the template class 2. Adding MeasurementNode @@ -371,7 +371,7 @@ template.add_template(m_node_z) session.create_schema_template(template) ``` -### Modify Schema Template measurements +### 8.2 Modify Schema Template measurements Modify measurements in a template, the template must be already created. These are functions that add or delete some measurement nodes. * add node in template ```python @@ -383,17 +383,17 @@ session.add_measurements_in_template(template_name, measurements_path, data_type session.delete_node_in_template(template_name, path) ``` -### Set Schema Template +### 8.3 Set Schema Template ```python session.set_schema_template(template_name, prefix_path) ``` -### Uset Schema Template +### 8.4 Uset Schema Template ```python session.unset_schema_template(template_name, prefix_path) ``` -### Show Schema Template +### 8.5 Show Schema Template * Show all schema templates ```python session.show_all_templates() @@ -428,14 +428,14 @@ session.show_paths_template_set_on(template_name) session.show_paths_template_using_on(template_name) ``` -### Drop Schema Template +### 8.6 Drop Schema Template Delete an existing metadata template,dropping an already set template is not supported ```python session.drop_schema_template("template_python") ``` -## Pandas Support +## 9. 
Pandas Support To easily transform a query result to a [Pandas Dataframe](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) the SessionDataSet has a method `.todf()` which consumes the dataset and transforms it to a pandas dataframe. @@ -463,7 +463,7 @@ df = ... ``` -## IoTDB Testcontainer +## 10. IoTDB Testcontainer The Test Support is based on the lib `testcontainers` (https://testcontainers-python.readthedocs.io/en/latest/index.html) which you need to install in your project if you want to use the feature. @@ -482,12 +482,12 @@ class MyTestCase(unittest.TestCase): by default it will load the image `apache/iotdb:latest`, if you want a specific version just pass it like e.g. `IoTDBContainer("apache/iotdb:0.12.0")` to get version `0.12.0` running. -## IoTDB DBAPI +## 11. IoTDB DBAPI IoTDB DBAPI implements the Python DB API 2.0 specification (https://peps.python.org/pep-0249/), which defines a common interface for accessing databases in Python. -### Examples +### 11.1 Examples + Initialization The initialized parameters are consistent with the session part (except for the sqlalchemy_mode). @@ -536,11 +536,11 @@ cursor.close() conn.close() ``` -## IoTDB SQLAlchemy Dialect (Experimental) +## 12. IoTDB SQLAlchemy Dialect (Experimental) The SQLAlchemy dialect of IoTDB is written to adapt to Apache Superset. This part is still being improved. Please do not use it in the production environment! -### Mapping of the metadata +### 12.1 Mapping of the metadata The data model used by SQLAlchemy is a relational data model, which describes the relationships between different entities through tables. While the data model of IoTDB is a hierarchical data model, which organizes the data through a tree structure. In order to adapt IoTDB to the dialect of SQLAlchemy, the original data model in IoTDB needs to be reorganized. @@ -570,7 +570,7 @@ The following figure shows the relationship between the two more intuitively: ![sqlalchemy-to-iotdb](/img/UserGuide/API/IoTDB-SQLAlchemy/sqlalchemy-to-iotdb.png?raw=true) -### Data type mapping +### 12.2 Data type mapping | data type in IoTDB | data type in SQLAlchemy | |--------------------|-------------------------| | BOOLEAN | Boolean | @@ -581,7 +581,7 @@ The following figure shows the relationship between the two more intuitively: | TEXT | Text | | LONG | BigInteger | -### Example +### 12.3 Example + execute statement @@ -627,15 +627,15 @@ for row in res: ``` -## Developers +## 13. Developers -### Introduction +### 13.1 Introduction This is an example of how to connect to IoTDB with python, using the thrift rpc interfaces. Things are almost the same on Windows or Linux, but pay attention to the difference like path separator. -### Prerequisites +### 13.2 Prerequisites Python3.7 or later is preferred. @@ -652,7 +652,7 @@ pip install -r requirements_dev.txt -### Compile the thrift library and Debug +### 13.3 Compile the thrift library and Debug In the root of IoTDB's source code folder, run `mvn clean generate-sources -pl iotdb-client/client-py -am`. @@ -664,7 +664,7 @@ This folder is ignored from git and should **never be pushed to git!** -### Session Client & Example +### 13.4 Session Client & Example We packed up the Thrift interface in `client-py/src/iotdb/Session.py` (similar with its Java counterpart), also provided an example file `client-py/src/SessionExample.py` of how to use the session module. please read it carefully. 
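If you only want to sanity-check your environment before reading `SessionExample.py`, a minimal sketch along the following lines exercises a write, a query and the pandas conversion described above. The host, port, credentials and device path are placeholders, and the write assumes schema auto-creation is enabled on the server:

```python
from iotdb.Session import Session

# Placeholder connection parameters for a local 1C1D instance; adjust to your deployment.
session = Session("127.0.0.1", "6667", "root", "root", fetch_size=1024, zone_id="UTC+8")
session.open(False)  # False = no RPC compression; must match the server setting

# Write one row using server-side type inference (needs schema auto-creation on the server)
session.insert_str_record("root.sg_example.d1", 1, ["temperature", "status"], ["25.1", "true"])

# Read it back and convert the result set to a pandas DataFrame via .todf()
df = session.execute_query_statement("select temperature, status from root.sg_example.d1").todf()
print(df)

session.close()
```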
@@ -686,7 +686,7 @@ session.close() -### Tests +### 13.5 Tests Please add your custom tests in `tests` folder. @@ -696,14 +696,14 @@ To run all defined tests just type `pytest .` in the root folder. -### Futher Tools +### 13.6 Futher Tools [black](https://pypi.org/project/black/) and [flake8](https://pypi.org/project/flake8/) are installed for autoformatting and linting. Both can be run by `black .` or `flake8 .` respectively. -## Releasing +## 14. Releasing To do a release just ensure that you have the right set of generated thrift files. Then run linting and auto-formatting. @@ -712,13 +712,13 @@ Then you are good to go to do a release! -### Preparing your environment +### 14.1 Preparing your environment First, install all necessary dev dependencies via `pip install -r requirements_dev.txt`. -### Doing the Release +### 14.2 Doing the Release There is a convenient script `release.sh` to do all steps for a release. Namely, these are diff --git a/src/UserGuide/latest/API/Programming-Rust-Native-API.md b/src/UserGuide/latest/API/Programming-Rust-Native-API.md index f58df68fc..d25923e71 100644 --- a/src/UserGuide/latest/API/Programming-Rust-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Rust-Native-API.md @@ -24,7 +24,7 @@ IoTDB uses Thrift as a cross language RPC framework, so access to IoTDB can be achieved through the interface provided by Thrift. This document will introduce how to generate a native Rust interface that can access IoTDB. -## Dependents +## 1. Dependents * JDK >= 1.8 * Rust >= 1.0.0 @@ -38,7 +38,7 @@ Thrift (0.14.1 or higher) must be installed to compile Thrift files into Rust co http://thrift.apache.org/docs/install/ ``` -## Compile the Thrift library and generate the Rust native interface +## 2. Compile the Thrift library and generate the Rust native interface 1. Find the `pom.xml` file in the root directory of the IoTDB source code folder. 2. Open the `pom.xml` file and find the following content: @@ -74,11 +74,11 @@ http://thrift.apache.org/docs/install/ This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. The newly generated Rust sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-rust` in the various modules of the `iotdb-protocol` module. -## Using the Rust native interface +## 3. Using the Rust native interface Copy `iotdb/iotdb-protocol/thrift/target/generated-sources-rust/` and `iotdb/iotdb-protocol/thrift-commons/target/generated-sources-rust/` into your project。 -## RPC interface +## 4. RPC interface ``` // open a session diff --git a/src/UserGuide/latest/API/RestServiceV1.md b/src/UserGuide/latest/API/RestServiceV1.md index 775235fed..4fb834708 100644 --- a/src/UserGuide/latest/API/RestServiceV1.md +++ b/src/UserGuide/latest/API/RestServiceV1.md @@ -22,7 +22,7 @@ # RESTful API V1(Not Recommend) IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks. -## Enable RESTful Services +## 1. Enable RESTful Services RESTful services are disabled by default. @@ -32,7 +32,7 @@ RESTful services are disabled by default. enable_rest_service=true ``` -## Authentication +## 2. Authentication Except the liveness probe API `/ping`, RESTful services use the basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`. 
The username used in the following examples is: `root`, and password is: `root`. @@ -67,9 +67,9 @@ Authorization: Basic cm9vdDpyb290 } ``` -## Interface +## 3. Interface -### ping +### 3.1 ping The `/ping` API can be used for service liveness probing. @@ -119,7 +119,7 @@ Sample response: > `/ping` can be accessed without authorization. -### query +### 3.2 query The query interface can be used to handle data queries and metadata queries. @@ -762,7 +762,7 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X } ``` -### nonQuery +### 3.3 nonQuery Request method: `POST` @@ -798,7 +798,7 @@ Sample response: -### insertTablet +### 3.4 insertTablet Request method: `POST` @@ -837,7 +837,7 @@ Sample response: } ``` -## Configuration +## 4. Configuration The configuration is located in 'iotdb-system.properties'. diff --git a/src/UserGuide/latest/API/RestServiceV2.md b/src/UserGuide/latest/API/RestServiceV2.md index 6c6011bf5..186cd1360 100644 --- a/src/UserGuide/latest/API/RestServiceV2.md +++ b/src/UserGuide/latest/API/RestServiceV2.md @@ -22,7 +22,7 @@ # RESTful API V2 IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks. -## Enable RESTful Services +## 1. Enable RESTful Services RESTful services are disabled by default. @@ -32,7 +32,7 @@ RESTful services are disabled by default. enable_rest_service=true ``` -## Authentication +## 2. Authentication Except the liveness probe API `/ping`, RESTful services use the basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`. The username used in the following examples is: `root`, and password is: `root`. @@ -67,9 +67,9 @@ Authorization: Basic cm9vdDpyb290 } ``` -## Interface +## 3. Interface -### ping +### 3.1 ping The `/ping` API can be used for service liveness probing. @@ -119,7 +119,7 @@ Sample response: > `/ping` can be accessed without authorization. -### query +### 3.2 query The query interface can be used to handle data queries and metadata queries. @@ -762,7 +762,7 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X } ``` -### nonQuery +### 3.3 nonQuery Request method: `POST` @@ -798,7 +798,7 @@ Sample response: -### insertTablet +### 3.4 insertTablet Request method: `POST` @@ -837,7 +837,7 @@ Sample response: } ``` -### insertRecords +### 3.5 insertRecords Request method: `POST` @@ -877,7 +877,7 @@ Sample response: ``` -## Configuration +## 4. Configuration The configuration is located in 'iotdb-system.properties'. diff --git a/src/UserGuide/latest/Background-knowledge/Cluster-Concept_apache.md b/src/UserGuide/latest/Background-knowledge/Cluster-Concept_apache.md index 674a74e69..5a2b4652c 100644 --- a/src/UserGuide/latest/Background-knowledge/Cluster-Concept_apache.md +++ b/src/UserGuide/latest/Background-knowledge/Cluster-Concept_apache.md @@ -21,7 +21,7 @@ # Common Concepts -## Sql_dialect Related Concepts +## 1. Sql_dialect Related Concepts | Concept | Meaning | | ----------------------- | ------------------------------------------------------------ | @@ -32,7 +32,7 @@ | Encoding | Encoding is a compression technique that represents data in binary form to improve storage efficiency. IoTDB supports various encoding methods for different types of data. 
For more detailed information, please refer to:[Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | | Compression | After data encoding, IoTDB uses compression technology to further compress binary data to enhance storage efficiency. IoTDB supports multiple compression methods. For more detailed information, please refer to: [Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | -## Distributed Related Concepts +## 2. Distributed Related Concepts The following figure shows a common IoTDB 3C3D (3 ConfigNodes, 3 DataNodes) cluster deployment pattern: @@ -46,7 +46,7 @@ IoTDB's cluster includes the following common concepts: The above concepts will be introduced in the following text. -### Nodes +### 2.1 Nodes IoTDB cluster includes three types of nodes (processes): ConfigNode (management node), DataNode (data node), and AINode (analysis node), as shown below: @@ -54,7 +54,7 @@ IoTDB cluster includes three types of nodes (processes): ConfigNode (management - DataNode: Serves client requests and is responsible for data storage and computation, as shown in DataNode-1, DataNode-2, and DataNode-3 in the figure above. - AINode: Provides machine learning capabilities, supports the registration of trained machine learning models, and allows model inference through SQL calls. It has already built-in self-developed time-series large models and common machine learning algorithms (such as prediction and anomaly detection). -### Data Partitioning +### 2.2 Data Partitioning In IoTDB, both metadata and data are divided into small partitions, namely Regions, which are managed by various DataNodes in the cluster. @@ -62,7 +62,7 @@ In IoTDB, both metadata and data are divided into small partitions, namely Regio - DataRegion: Data partition, managing the data of a part of devices for a certain period of time. DataRegions with the same RegionID on different DataNodes are mutual replicas, as shown in DataRegion-2 in the figure above, which has two replicas located on DataNode-1 and DataNode-2. - For specific partitioning algorithms, please refer to: [Data Partitioning](../Technical-Insider/Cluster-data-partitioning.md) -### Replica Groups +### 2.3 Replica Groups The number of replicas for data and metadata can be configured. The recommended configurations for different deployment modes are as follows, where multi-replication can provide high-availability services. @@ -71,11 +71,11 @@ The number of replicas for data and metadata can be configured. The recommended | Schema | schema_replication_factor | 1 | 3 | | Data | data_replication_factor | 1 | 2 | -## Deployment Related Concepts +## 3. Deployment Related Concepts IoTDB has two operating modes: Stand-Alone mode and Cluster mode. -### Stand-Alone Mode +### 3.1 Stand-Alone Mode An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; @@ -84,7 +84,7 @@ An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; - **Applicable Scenarios**:Scenarios with limited resources or low requirements for high availability, such as edge-side servers. 
- **Deployment Method**:[Stand-Alone-Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md) -### Cluster Mode +### 3.2 Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes, usually 3 DataNodes, i.e., 3C3D; when some nodes fail, the remaining nodes can still provide services, ensuring the high availability of the database service, and the database performance can be improved with the addition of nodes. @@ -92,7 +92,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes - **Applicable Scenarios**:Enterprise-level application scenarios requiring high availability and reliability. - **Deployment Method**:[Cluster-Deployment](../Deployment-and-Maintenance/Cluster-Deployment_apache.md) -### Summary of Features +### 3.3 Summary of Features | Dimension | Stand-Alone Mode | Cluster Mode | | ------------ | ---------------------------- | ------------------------ | diff --git a/src/UserGuide/latest/Background-knowledge/Cluster-Concept_timecho.md b/src/UserGuide/latest/Background-knowledge/Cluster-Concept_timecho.md index 42344aa47..2307c4a71 100644 --- a/src/UserGuide/latest/Background-knowledge/Cluster-Concept_timecho.md +++ b/src/UserGuide/latest/Background-knowledge/Cluster-Concept_timecho.md @@ -21,7 +21,7 @@ # Common Concepts -## Sql_dialect Related Concepts +## 1. Sql_dialect Related Concepts | Concept | Meaning | | ----------------------- | ------------------------------------------------------------ | @@ -32,7 +32,7 @@ | Encoding | Encoding is a compression technique that represents data in binary form to improve storage efficiency. IoTDB supports various encoding methods for different types of data. For more detailed information, please refer to:[Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | | Compression | After data encoding, IoTDB uses compression technology to further compress binary data to enhance storage efficiency. IoTDB supports multiple compression methods. For more detailed information, please refer to: [Encoding-and-Compression](../Technical-Insider/Encoding-and-Compression.md) | -## Distributed Related Concepts +## 2. Distributed Related Concepts The following figure shows a common IoTDB 3C3D (3 ConfigNodes, 3 DataNodes) cluster deployment pattern: @@ -47,7 +47,7 @@ IoTDB's cluster includes the following common concepts: The above concepts will be introduced in the following text. -### Nodes +### 2.1 Nodes IoTDB cluster includes three types of nodes (processes): ConfigNode (management node), DataNode (data node), and AINode (analysis node), as shown below: @@ -55,7 +55,7 @@ IoTDB cluster includes three types of nodes (processes): ConfigNode (management - DataNode: Serves client requests and is responsible for data storage and computation, as shown in DataNode-1, DataNode-2, and DataNode-3 in the figure above. - AINode: Provides machine learning capabilities, supports the registration of trained machine learning models, and allows model inference through SQL calls. It has already built-in self-developed time-series large models and common machine learning algorithms (such as prediction and anomaly detection). -### Data Partitioning +### 2.2 Data Partitioning In IoTDB, both metadata and data are divided into small partitions, namely Regions, which are managed by various DataNodes in the cluster. 
@@ -63,7 +63,7 @@ In IoTDB, both metadata and data are divided into small partitions, namely Regio - DataRegion: Data partition, managing the data of a part of devices for a certain period of time. DataRegions with the same RegionID on different DataNodes are mutual replicas, as shown in DataRegion-2 in the figure above, which has two replicas located on DataNode-1 and DataNode-2. - For specific partitioning algorithms, please refer to: [Data Partitioning](../Technical-Insider/Cluster-data-partitioning.md) -### Replica Groups +### 2.3 Replica Groups The number of replicas for data and metadata can be configured. The recommended configurations for different deployment modes are as follows, where multi-replication can provide high-availability services. @@ -73,11 +73,11 @@ The number of replicas for data and metadata can be configured. The recommended | Data | data_replication_factor | 1 | 2 | -## Deployment Related Concepts +## 3. Deployment Related Concepts IoTDB has three operating modes: Stand-Alone mode, Cluster mode, and Dual-Active mode. -### Stand-Alone Mode +### 3.1 Stand-Alone Mode An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; @@ -86,7 +86,7 @@ An IoTDB Stand-Alone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D; - **Applicable Scenarios**:Scenarios with limited resources or low requirements for high availability, such as edge-side servers. - **Deployment Method**:[Stand-Alone-Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md) -### Dual-Active Mode +### 3.2 Dual-Active Mode Dual-active deployment is a feature of TimechoDB Enterprise Edition, which refers to two independent instances performing bidirectional synchronization and can provide services simultaneously. When one instance is restarted after a shutdown, the other instance will resume transmission of the missing data. @@ -97,7 +97,7 @@ Dual-active deployment is a feature of TimechoDB Enterprise Edition, which refer - **Applicable Scenarios**:Scenarios with limited resources (only two servers) but requiring high-availability capabilities. - **Deployment Method**:[Dual-Active-Deployment](../Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md) -### Cluster Mode +### 3.3 Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes, usually 3 DataNodes, i.e., 3C3D; when some nodes fail, the remaining nodes can still provide services, ensuring the high availability of the database service, and the database performance can be improved with the addition of nodes. @@ -105,7 +105,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no less than 3 DataNodes - **Applicable Scenarios**:Enterprise-level application scenarios requiring high availability and reliability. 
- **Deployment Method**:[Cluster-Deployment](../Deployment-and-Maintenance/Cluster-Deployment_timecho.md) -### Summary of Features +### 3.4 Summary of Features | Dimension | Stand-Alone Mode | Dual-Active Mode | Cluster Mode | | ------------ | ---------------------------- | ------------------------ | ------------------------ | diff --git a/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_apache.md b/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_apache.md index 2bec26da1..f3b777bf2 100644 --- a/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_apache.md +++ b/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_apache.md @@ -23,11 +23,11 @@ This section introduces how to transform time series data application scenarios into IoTDB time series modeling. -## 1 Time Series Data Model +## 1. Time Series Data Model Before designing an IoTDB data model, it's essential to understand time series data and its underlying structure. For more details, refer to: [Time Series Data Model](../Background-knowledge/Navigating_Time_Series_Data.md) -## 2 Two Time Series Model in IoTDB +## 2. Two Time Series Model in IoTDB IoTDB offers two data modeling syntaxes—tree model and table model, each with its distinct characteristics as follows: @@ -80,7 +80,7 @@ The following table compares the tree model and the table model from various dim - When establishing a database connection via client tools (Cli) or SDKs, specify the model syntax using the `sql_dialect` parameter (Tree syntax is used by default). -## 3 Application Scenarios +## 3. Application Scenarios The application scenarios mainly include two categories: diff --git a/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_timecho.md b/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_timecho.md index 0e843c871..477666573 100644 --- a/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_timecho.md +++ b/src/UserGuide/latest/Background-knowledge/Data-Model-and-Terminology_timecho.md @@ -23,11 +23,11 @@ This section introduces how to transform time series data application scenarios into IoTDB time series modeling. -## 1 Time Series Data Model +## 1. Time Series Data Model Before designing an IoTDB data model, it's essential to understand time series data and its underlying structure. For more details, refer to: [Time Series Data Model](../Background-knowledge/Navigating_Time_Series_Data.md) -## 2 Two Time Series Model in IoTDB +## 2. Two Time Series Model in IoTDB IoTDB offers two data modeling syntaxes—tree model and table model, each with its distinct characteristics as follows: @@ -80,7 +80,7 @@ The following table compares the tree model and the table model from various dim - When establishing a database connection via client tools (Cli) or SDKs, specify the model syntax using the `sql_dialect` parameter (Tree syntax is used by default). -## 3 Application Scenarios +## 3. 
Application Scenarios The application scenarios mainly include three categories: diff --git a/src/UserGuide/latest/Background-knowledge/Data-Type.md b/src/UserGuide/latest/Background-knowledge/Data-Type.md index e33af42eb..2fbb63040 100644 --- a/src/UserGuide/latest/Background-knowledge/Data-Type.md +++ b/src/UserGuide/latest/Background-knowledge/Data-Type.md @@ -21,7 +21,7 @@ # Data Type -## Basic Data Type +## 1.Basic Data Type IoTDB supports the following data types: @@ -38,7 +38,7 @@ IoTDB supports the following data types: The difference between STRING and TEXT types is that STRING type has more statistical information and can be used to optimize value filtering queries, while TEXT type is suitable for storing long strings. -### Float Precision +### 1.1 Float Precision The time series of **FLOAT** and **DOUBLE** type can specify (MAX\_POINT\_NUMBER, see [this page](../SQL-Manual/SQL-Manual.md) for more information on how to specify), which is the number of digits after the decimal point of the floating point number, if the encoding method is [RLE](../Technical-Insider/Encoding-and-Compression.md) or [TS\_2DIFF](../Technical-Insider/Encoding-and-Compression.md). If MAX\_POINT\_NUMBER is not specified, the system will use [float\_precision](../Reference/DataNode-Config-Manual.md) in the configuration file `iotdb-system.properties`. @@ -49,7 +49,7 @@ CREATE TIMESERIES root.vehicle.d0.s0 WITH DATATYPE=FLOAT, ENCODING=RLE, 'MAX_POI * For Float data value, The data range is (-Integer.MAX_VALUE, Integer.MAX_VALUE), rather than Float.MAX_VALUE, and the max_point_number is 19, caused by the limition of function Math.round(float) in Java. * For Double data value, The data range is (-Long.MAX_VALUE, Long.MAX_VALUE), rather than Double.MAX_VALUE, and the max_point_number is 19, caused by the limition of function Math.round(double) in Java (Long.MAX_VALUE=9.22E18). -### Data Type Compatibility +### 1.2 Data Type Compatibility When the written data type is inconsistent with the data type of time-series, - If the data type of time-series is not compatible with the written data type, the system will give an error message. @@ -70,11 +70,11 @@ The compatibility of each data type is shown in the following table: | TIMESTAMP | INT32 INT64 TIMESTAMP | | DATE | DATE | -## Timestamp +## 2. Timestamp The timestamp is the time point at which data is produced. It includes absolute timestamps and relative timestamps -### Absolute timestamp +### 2.1 Absolute timestamp Absolute timestamps in IoTDB are divided into two types: LONG and DATETIME (including DATETIME-INPUT and DATETIME-DISPLAY). When a user inputs a timestamp, he can use a LONG type timestamp or a DATETIME-INPUT type timestamp, and the supported formats of the DATETIME-INPUT type timestamp are shown in the table below: @@ -144,7 +144,7 @@ IoTDB can support LONG types and DATETIME-DISPLAY types when displaying timestam -### Relative timestamp +### 2.2 Relative timestamp Relative time refers to the time relative to the server time ```now()``` and ```DATETIME``` time. diff --git a/src/UserGuide/latest/Background-knowledge/Navigating_Time_Series_Data.md b/src/UserGuide/latest/Background-knowledge/Navigating_Time_Series_Data.md index e365acb32..121373d1c 100644 --- a/src/UserGuide/latest/Background-knowledge/Navigating_Time_Series_Data.md +++ b/src/UserGuide/latest/Background-knowledge/Navigating_Time_Series_Data.md @@ -20,7 +20,7 @@ --> # Entering Time Series Data -## What Is Time Series Data? +## 1. What Is Time Series Data? 
In today's era of the Internet of Things, various scenarios such as the Internet of Things and industrial scenarios are undergoing digital transformation. People collect various states of devices by installing sensors on them. If the motor collects voltage and current, the blade speed, angular velocity, and power generation of the fan; Vehicle collection of latitude and longitude, speed, and fuel consumption; The vibration frequency, deflection, displacement, etc. of the bridge. The data collection of sensors has penetrated into various industries. @@ -32,19 +32,19 @@ Generally speaking, we refer to each collection point as a measurement point (al The massive time series data generated by sensors is the foundation of digital transformation in various industries, so our modeling of time series data mainly focuses on equipment and sensors. -## Key Concepts of Time Series Data +## 2. Key Concepts of Time Series Data The main concepts involved in time-series data can be divided from bottom to top: data points, measurement points, and equipment. ![](/img/time-series-data-en-04.png) -### Data Point +### 2.1 Data Point - Definition: Consists of a timestamp and a value, where the timestamp is of type long and the value can be of various types such as BOOLEAN, FLOAT, INT32, etc. - Example: A row of a time series in the form of a table in the above figure, or a point of a time series in the form of a graph, is a data point. ![](/img/time-series-data-en-03.png) -### Measurement Points +### 2.2 Measurement Points - Definition: It is a time series formed by multiple data points arranged in increments according to timestamps. Usually, a measuring point represents a collection point and can regularly collect physical quantities of the environment it is located in. - Also known as: physical quantity, time series, timeline, semaphore, indicator, measurement value, etc @@ -54,7 +54,7 @@ The main concepts involved in time-series data can be divided from bottom to top - Vehicle networking scenarios: fuel consumption, vehicle speed, longitude, dimensions - Factory scenario: temperature, humidity -### Device +### 2.3 Device - Definition: Corresponding to a physical device in an actual scene, usually a collection of measurement points, identified by one to multiple labels - Example: diff --git a/src/UserGuide/latest/Basic-Concept/Operate-Metadata_apache.md b/src/UserGuide/latest/Basic-Concept/Operate-Metadata_apache.md index 3b2b6de9d..688211f3e 100644 --- a/src/UserGuide/latest/Basic-Concept/Operate-Metadata_apache.md +++ b/src/UserGuide/latest/Basic-Concept/Operate-Metadata_apache.md @@ -21,9 +21,9 @@ # Timeseries Management -## Database Management +## 1. Database Management -### Create Database +### 1.1 Create Database According to the storage model we can set up the corresponding database. Two SQL statements are supported for creating databases, as follows: @@ -49,7 +49,7 @@ The LayerName of database can only be chinese or english characters, numbers, un Besides, if deploy on Windows system, the LayerName is case-insensitive, which means it's not allowed to create databases `root.ln` and `root.LN` at the same time. -### Show Databases +### 1.2 Show Databases After creating the database, we can use the [SHOW DATABASES](../SQL-Manual/SQL-Manual.md) statement and [SHOW DATABASES \](../SQL-Manual/SQL-Manual.md) to view the databases. 
The SQL statements are as follows: @@ -71,7 +71,7 @@ Total line number = 2 It costs 0.060s ``` -### Delete Database +### 1.3 Delete Database User can use the `DELETE DATABASE ` statement to delete all databases matching the pathPattern. Please note the data in the database will also be deleted. @@ -82,7 +82,7 @@ IoTDB > DELETE DATABASE root.sgcc IoTDB > DELETE DATABASE root.** ``` -### Count Databases +### 1.4 Count Databases User can use the `COUNT DATABASE ` statement to count the number of databases. It is allowed to specify `PathPattern` to count the number of databases matching the `PathPattern`. @@ -141,7 +141,7 @@ Total line number = 1 It costs 0.002s ``` -### Setting up heterogeneous databases (Advanced operations) +### 1.5 Setting up heterogeneous databases (Advanced operations) Under the premise of familiar with IoTDB metadata modeling, users can set up heterogeneous databases in IoTDB to cope with different production needs. @@ -236,7 +236,7 @@ The query results in each column are as follows: + The required minimum DataRegionGroup number of the Database + The permitted maximum DataRegionGroup number of the Database -### TTL +### 1.6 TTL IoTDB supports device-level TTL settings, which means it is able to delete old data automatically and periodically. The benefit of using TTL is that hopefully you can control the total disk space usage and prevent the machine from running out of disks. Moreover, the query performance may downgrade as the total number of files goes up and the memory usage also increases as there are more files. Timely removing such files helps to keep at a high query performance level and reduce memory usage. @@ -348,7 +348,7 @@ IoTDB> show devices ``` All devices will definitely have a TTL, meaning it cannot be null. INF represents infinity. -## Device Template +## 2. Device Template IoTDB supports the device template function, enabling different entities of the same type to share metadata, reduce the memory usage of metadata, and simplify the management of numerous entities and measurements. @@ -356,7 +356,7 @@ IoTDB supports the device template function, enabling different entities of the ![img](/img/templateEN.jpg) -### Create Device Template +### 2.1 Create Device Template The SQL syntax for creating a metadata template is as follows: @@ -379,7 +379,7 @@ IoTDB> create device template t2 aligned (lat FLOAT encoding=Gorilla, lon FLOAT The` lat` and `lon` measurements are aligned. -### Set Device Template +### 2.2 Set Device Template After a device template is created, it should be set to specific path before creating related timeseries or insert data. @@ -395,7 +395,7 @@ The SQL Statement for setting device template is as follow: IoTDB> set device template t1 to root.sg1.d1 ``` -### Activate Device Template +### 2.3 Activate Device Template After setting the device template, with the system enabled to auto create schema, you can insert data into the timeseries. For example, suppose there's a database root.sg1 and t1 has been set to root.sg1.d1, then timeseries like root.sg1.d1.temperature and root.sg1.d1.status are available and data points can be inserted. 
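As a concrete sketch, assuming schema auto-creation is enabled and template `t1` (with `temperature` and `status`) has been set on `root.sg1.d1` as above, the first insert both activates the template and creates the measurements. The example below goes through the Python client described in the API chapter; the plain SQL `INSERT` shown in this manual behaves identically, and the connection parameters are placeholders:

```python
from iotdb.Session import Session

session = Session("127.0.0.1", "6667", "root", "root")
session.open(False)

# The first write to root.sg1.d1 activates template t1 on that device and
# auto-creates root.sg1.d1.temperature and root.sg1.d1.status.
session.execute_non_query_statement(
    "insert into root.sg1.d1(timestamp, temperature, status) values (1, 36.5, true)"
)

session.close()
```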
@@ -447,7 +447,7 @@ show devices root.sg1.** +---------------+---------+ ```` -### Show Device Template +### 2.4 Show Device Template - Show all device templates @@ -519,7 +519,7 @@ The execution result is as follows: +-----------+ ``` -### Deactivate device Template +### 2.5 Deactivate device Template To delete a group of timeseries represented by device template, namely deactivate the device template, use the following SQL statement: @@ -547,7 +547,7 @@ IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.* If the template name is not provided in sql, all template activation on paths matched by given path pattern will be removed. -### Unset Device Template +### 2.6 Unset Device Template The SQL Statement for unsetting device template is as follow: @@ -557,7 +557,7 @@ IoTDB> unset device template t1 from root.sg1.d1 **Attention**: It should be guaranteed that none of the timeseries represented by the target device template exists, before unset it. It can be achieved by deactivation operation. -### Drop Device Template +### 2.7 Drop Device Template The SQL Statement for dropping device template is as follow: @@ -567,7 +567,7 @@ IoTDB> drop device template t1 **Attention**: Dropping an already set template is not supported. -### Alter Device Template +### 2.8 Alter Device Template In a scenario where measurements need to be added, you can modify the template to add measurements to all devicesdevice using the device template. @@ -579,9 +579,9 @@ IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, FLOAT TEXT encodi **When executing data insertion to devices with device template set on related prefix path and there are measurements not present in this device template, the measurements will be auto added to this device template.** -## Timeseries Management +## 3. Timeseries Management -### Create Timeseries +### 3.1 Create Timeseries According to the storage model selected before, we can create corresponding timeseries in the two databases respectively. The SQL statements for creating timeseries are as follows: @@ -614,7 +614,7 @@ error: encoding TS_2DIFF does not support BOOLEAN Please refer to [Encoding](../Technical-Insider/Encoding-and-Compression.md) for correspondence between data type and encoding. -### Create Aligned Timeseries +### 3.2 Create Aligned Timeseries The SQL statement for creating a group of timeseries are as follows: @@ -626,7 +626,7 @@ You can set different datatype, encoding, and compression for the timeseries in It is also supported to set an alias, tag, and attribute for aligned timeseries. -### Delete Timeseries +### 3.3 Delete Timeseries To delete the timeseries we created before, we are able to use `(DELETE | DROP) TimeSeries ` statement. @@ -639,7 +639,7 @@ IoTDB> delete timeseries root.ln.wf02.* IoTDB> drop timeseries root.ln.wf02.* ``` -### Show Timeseries +### 3.4 Show Timeseries * SHOW LATEST? TIMESERIES pathPattern? whereClause? limitClause? @@ -751,7 +751,7 @@ It costs 0.016s It is worth noting that when the queried path does not exist, the system will return no timeseries. -### Count Timeseries +### 3.5 Count Timeseries IoTDB is able to use `COUNT TIMESERIES ` to count the number of timeseries matching the path. SQL statements are as follows: @@ -836,7 +836,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. 
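To make the note concrete: the path pattern only selects which timeseries are counted, while the level decides how the counts are grouped. The following is a hedged sketch issued through the Python client (any SQL client works the same; connection parameters are placeholders):

```python
from iotdb.Session import Session

session = Session("127.0.0.1", "6667", "root", "root")
session.open(False)

# The pattern root.ln.** decides WHICH series are counted;
# GROUP BY LEVEL=2 only decides how the counts are bucketed (root.ln.wf01, root.ln.wf02, ...).
result = session.execute_query_statement("count timeseries root.ln.** group by level=2")
while result.has_next():
    print(result.next())

session.close()
```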
-### Tag and Attribute Management +### 3.6 Tag and Attribute Management We can also add an alias, extra tag and attribute information while creating one timeseries. @@ -1011,9 +1011,9 @@ IoTDB> show timeseries where TAGS(tag1)='v1' The above operations are supported for timeseries tag, attribute updates, etc. -## Path query +## 4. Path query -### Path +### 4.1 Path A `path` is an expression that conforms to the following constraints: @@ -1033,7 +1033,7 @@ wildcard ; ``` -### NodeName +### 4.2 NodeName - The parts of a path separated by `.` are called node names (`nodeName`). - For example, `root.a.b.c` is a path with a depth of 4 levels, where `root`, `a`, `b`, and `c` are all node names. @@ -1048,11 +1048,11 @@ wildcard - UNICODE Chinese characters (`\u2E80` to `\u9FFF`) - **Case sensitivity**: On Windows systems, path node names in the database are case-insensitive. For example, `root.ln` and `root.LN` are considered the same path. -### Special Characters (Backquote) +### 4.3 Special Characters (Backquote) If special characters (such as spaces or punctuation marks) are needed in a `nodeName`, you can enclose the node name in Backquote (`). For more information on the use of backticks, please refer to [Backquote](../SQL-Manual/Syntax-Rule.md#reverse-quotation-marks). -### Path Pattern +### 4.4 Path Pattern To make it more convenient and efficient to express multiple time series, IoTDB provides paths with wildcards `*` and `**`. Wildcards can appear in any level of a path. @@ -1065,7 +1065,7 @@ To make it more convenient and efficient to express multiple time series, IoTDB **Note**: `*` and `**` cannot be placed at the beginning of a path. -### Show Child Paths +### 4.5 Show Child Paths ``` SHOW CHILD PATHS pathPattern @@ -1093,7 +1093,7 @@ It costs 0.002s > get all paths in form of root.xx.xx.xx:show child paths root.xx.xx -### Show Child Nodes +### 4.6 Show Child Nodes ``` SHOW CHILD NODES pathPattern @@ -1124,7 +1124,7 @@ Example: +------------+ ``` -### Count Nodes +### 4.7 Count Nodes IoTDB is able to use `COUNT NODES LEVEL=` to count the number of nodes at the given level in current Metadata Tree considering a given pattern. IoTDB will find paths that @@ -1177,7 +1177,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. -### Show Devices +### 4.8 Show Devices * SHOW DEVICES pathPattern? (WITH DATABASE)? devicesWhereClause? limitClause? @@ -1258,7 +1258,7 @@ Total line number = 2 It costs 0.001s ``` -### Count Devices +### 4.9 Count Devices * COUNT DEVICES / diff --git a/src/UserGuide/latest/Basic-Concept/Operate-Metadata_timecho.md b/src/UserGuide/latest/Basic-Concept/Operate-Metadata_timecho.md index 4380b55a2..b101a9057 100644 --- a/src/UserGuide/latest/Basic-Concept/Operate-Metadata_timecho.md +++ b/src/UserGuide/latest/Basic-Concept/Operate-Metadata_timecho.md @@ -21,9 +21,9 @@ # Timeseries Management -## Database Management +## 1. Database Management -### Create Database +### 1.1 Create Database According to the storage model we can set up the corresponding database. Two SQL statements are supported for creating databases, as follows: @@ -49,7 +49,7 @@ The LayerName of database can only be chinese or english characters, numbers, un Besides, if deploy on Windows system, the LayerName is case-insensitive, which means it's not allowed to create databases `root.ln` and `root.LN` at the same time. 
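The same operation is also available programmatically. In the Python client (see the API chapter), creating a database maps to `set_storage_group`; a minimal sketch with placeholder connection parameters:

```python
from iotdb.Session import Session

session = Session("127.0.0.1", "6667", "root", "root")
session.open(False)

# Equivalent to the SQL statement: CREATE DATABASE root.ln
session.set_storage_group("root.ln")

session.close()
```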
-### Show Databases +### 1.2 Show Databases After creating the database, we can use the [SHOW DATABASES](../SQL-Manual/SQL-Manual.md) statement and [SHOW DATABASES \](../SQL-Manual/SQL-Manual.md) to view the databases. The SQL statements are as follows: @@ -71,7 +71,7 @@ Total line number = 2 It costs 0.060s ``` -### Delete Database +### 1.3 Delete Database User can use the `DELETE DATABASE ` statement to delete all databases matching the pathPattern. Please note the data in the database will also be deleted. @@ -82,7 +82,7 @@ IoTDB > DELETE DATABASE root.sgcc IoTDB > DELETE DATABASE root.** ``` -### Count Databases +### 1.4 Count Databases User can use the `COUNT DATABASE ` statement to count the number of databases. It is allowed to specify `PathPattern` to count the number of databases matching the `PathPattern`. @@ -141,7 +141,7 @@ Total line number = 1 It costs 0.002s ``` -### Setting up heterogeneous databases (Advanced operations) +### 1.5 Setting up heterogeneous databases (Advanced operations) Under the premise of familiar with IoTDB metadata modeling, users can set up heterogeneous databases in IoTDB to cope with different production needs. @@ -236,7 +236,7 @@ The query results in each column are as follows: + The required minimum DataRegionGroup number of the Database + The permitted maximum DataRegionGroup number of the Database -### TTL +### 1.6 TTL IoTDB supports device-level TTL settings, which means it is able to delete old data automatically and periodically. The benefit of using TTL is that hopefully you can control the total disk space usage and prevent the machine from running out of disks. Moreover, the query performance may downgrade as the total number of files goes up and the memory usage also increases as there are more files. Timely removing such files helps to keep at a high query performance level and reduce memory usage. @@ -349,12 +349,12 @@ IoTDB> show devices All devices will definitely have a TTL, meaning it cannot be null. INF represents infinity. -## Device Template +## 2. Device Template IoTDB supports the device template function, enabling different entities of the same type to share metadata, reduce the memory usage of metadata, and simplify the management of numerous entities and measurements. -### Create Device Template +### 2.1 Create Device Template The SQL syntax for creating a metadata template is as follows: @@ -380,7 +380,7 @@ The` lat` and `lon` measurements are aligned. ![img](/img/templateEN.jpg) -### Set Device Template +### 2.2 Set Device Template After a device template is created, it should be set to specific path before creating related timeseries or insert data. @@ -396,7 +396,7 @@ The SQL Statement for setting device template is as follow: IoTDB> set device template t1 to root.sg1.d1 ``` -### Activate Device Template +### 2.3 Activate Device Template After setting the device template, with the system enabled to auto create schema, you can insert data into the timeseries. For example, suppose there's a database root.sg1 and t1 has been set to root.sg1.d1, then timeseries like root.sg1.d1.temperature and root.sg1.d1.status are available and data points can be inserted. 
@@ -448,7 +448,7 @@ show devices root.sg1.** +---------------+---------+ ```` -### Show Device Template +### 2.4 Show Device Template - Show all device templates @@ -520,7 +520,7 @@ The execution result is as follows: +-----------+ ``` -### Deactivate device Template +### 2.5 Deactivate device Template To delete a group of timeseries represented by device template, namely deactivate the device template, use the following SQL statement: @@ -548,7 +548,7 @@ IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.* If the template name is not provided in sql, all template activation on paths matched by given path pattern will be removed. -### Unset Device Template +### 2.6 Unset Device Template The SQL Statement for unsetting device template is as follow: @@ -558,7 +558,7 @@ IoTDB> unset device template t1 from root.sg1.d1 **Attention**: It should be guaranteed that none of the timeseries represented by the target device template exists, before unset it. It can be achieved by deactivation operation. -### Drop Device Template +### 2.7 Drop Device Template The SQL Statement for dropping device template is as follow: @@ -568,7 +568,7 @@ IoTDB> drop device template t1 **Attention**: Dropping an already set template is not supported. -### Alter Device Template +### 2.8 Alter Device Template In a scenario where measurements need to be added, you can modify the template to add measurements to all devicesdevice using the device template. @@ -580,9 +580,9 @@ IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, FLOAT TEXT encodi **When executing data insertion to devices with device template set on related prefix path and there are measurements not present in this device template, the measurements will be auto added to this device template.** -## Timeseries Management +## 3. Timeseries Management -### Create Timeseries +### 3.1 Create Timeseries According to the storage model selected before, we can create corresponding timeseries in the two databases respectively. The SQL statements for creating timeseries are as follows: @@ -615,7 +615,7 @@ error: encoding TS_2DIFF does not support BOOLEAN Please refer to [Encoding](../Technical-Insider/Encoding-and-Compression.md) for correspondence between data type and encoding. -### Create Aligned Timeseries +### 3.2 Create Aligned Timeseries The SQL statement for creating a group of timeseries are as follows: @@ -627,7 +627,7 @@ You can set different datatype, encoding, and compression for the timeseries in It is also supported to set an alias, tag, and attribute for aligned timeseries. -### Delete Timeseries +### 3.3 Delete Timeseries To delete the timeseries we created before, we are able to use `(DELETE | DROP) TimeSeries ` statement. @@ -640,7 +640,7 @@ IoTDB> delete timeseries root.ln.wf02.* IoTDB> drop timeseries root.ln.wf02.* ``` -### Show Timeseries +### 3.4 Show Timeseries * SHOW LATEST? TIMESERIES pathPattern? whereClause? limitClause? @@ -752,7 +752,7 @@ It costs 0.016s It is worth noting that when the queried path does not exist, the system will return no timeseries. -### Count Timeseries +### 3.5 Count Timeseries IoTDB is able to use `COUNT TIMESERIES ` to count the number of timeseries matching the path. SQL statements are as follows: @@ -837,7 +837,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. 
-### Active Timeseries Query +### 3.6 Active Timeseries Query By adding WHERE time filter conditions to the existing SHOW/COUNT TIMESERIES, we can obtain time series with data within the specified time range. It is important to note that in metadata queries with time filters, views are not considered; only the time series actually stored in the TsFile are taken into account. @@ -877,7 +877,7 @@ IoTDB> count timeseries where time >= 15000 and time < 16000; +-----------------+ ``` Regarding the definition of active time series, data that can be queried normally is considered active, meaning time series that have been inserted but deleted are not included. -### Tag and Attribute Management +### 3.7 Tag and Attribute Management We can also add an alias, extra tag and attribute information while creating one timeseries. @@ -1052,9 +1052,9 @@ IoTDB> show timeseries where TAGS(tag1)='v1' The above operations are supported for timeseries tag, attribute updates, etc. -## Path query +## 4. Path query -### Path +### 4.1 Path A `path` is an expression that conforms to the following constraints: @@ -1074,7 +1074,7 @@ wildcard ; ``` -### NodeName +### 4.2 NodeName - The parts of a path separated by `.` are called node names (`nodeName`). - For example, `root.a.b.c` is a path with a depth of 4 levels, where `root`, `a`, `b`, and `c` are all node names. @@ -1089,11 +1089,11 @@ wildcard - UNICODE Chinese characters (`\u2E80` to `\u9FFF`) - **Case sensitivity**: On Windows systems, path node names in the database are case-insensitive. For example, `root.ln` and `root.LN` are considered the same path. -### Special Characters (Backquote) +### 4.3 Special Characters (Backquote) If special characters (such as spaces or punctuation marks) are needed in a `nodeName`, you can enclose the node name in Backquote (`). For more information on the use of backticks, please refer to [Backquote](../SQL-Manual/Syntax-Rule.md#reverse-quotation-marks). -### Path Pattern +### 4.4 Path Pattern To make it more convenient and efficient to express multiple time series, IoTDB provides paths with wildcards `*` and `**`. Wildcards can appear in any level of a path. @@ -1106,7 +1106,7 @@ To make it more convenient and efficient to express multiple time series, IoTDB **Note**: `*` and `**` cannot be placed at the beginning of a path. -### Show Child Paths +### 4.5 Show Child Paths ``` SHOW CHILD PATHS pathPattern @@ -1134,7 +1134,7 @@ It costs 0.002s > get all paths in form of root.xx.xx.xx:show child paths root.xx.xx -### Show Child Nodes +### 4.6 Show Child Nodes ``` SHOW CHILD NODES pathPattern @@ -1165,7 +1165,7 @@ Example: +------------+ ``` -### Count Nodes +### 4.7 Count Nodes IoTDB is able to use `COUNT NODES LEVEL=` to count the number of nodes at the given level in current Metadata Tree considering a given pattern. IoTDB will find paths that @@ -1218,7 +1218,7 @@ It costs 0.002s > Note: The path of timeseries is just a filter condition, which has no relationship with the definition of level. -### Show Devices +### 4.8 Show Devices * SHOW DEVICES pathPattern? (WITH DATABASE)? devicesWhereClause? limitClause? @@ -1299,7 +1299,7 @@ Total line number = 2 It costs 0.001s ``` -### Count Devices +### 4.9 Count Devices * COUNT DEVICES / @@ -1344,7 +1344,7 @@ Total line number = 1 It costs 0.004s ``` -### Active Device Query +### 4.10 Active Device Query Similar to active timeseries query, we can add time filter conditions to device viewing and statistics to query active devices that have data within a certain time range. 
The definition of active here is the same as for active time series. An example usage is as follows: ``` IoTDB> insert into root.sg.data(timestamp, s1,s2) values(15000, 1, 2); diff --git a/src/UserGuide/latest/Basic-Concept/Query-Data.md b/src/UserGuide/latest/Basic-Concept/Query-Data.md index d312503c9..f98c5ec37 100644 --- a/src/UserGuide/latest/Basic-Concept/Query-Data.md +++ b/src/UserGuide/latest/Basic-Concept/Query-Data.md @@ -19,9 +19,9 @@ --> # Query Data -## OVERVIEW +## 1. OVERVIEW -### Syntax Definition +### 1.1 Syntax Definition In IoTDB, `SELECT` statement is used to retrieve data from one or more selected time series. Here is the syntax definition of `SELECT` statement: @@ -47,7 +47,7 @@ SELECT [LAST] selectExpr [, selectExpr] ... [ALIGN BY {TIME | DEVICE}] ``` -### Syntax Description +### 1.2 Syntax Description #### `SELECT` clause @@ -107,7 +107,7 @@ SELECT [LAST] selectExpr [, selectExpr] ... - The query result set is **ALIGN BY TIME** by default, including a time column and several value columns, and the timestamps of each column of data in each row are the same. - It also supports **ALIGN BY DEVICE**. The query result set contains a time column, a device column, and several value columns. -### Basic Examples +### 1.3 Basic Examples #### Select a Column of Data Based on a Time Interval @@ -264,7 +264,7 @@ Total line number = 10 It costs 0.016s ``` -### Execution Interface +### 1.4 Execution Interface In IoTDB, there are two ways to execute data query: @@ -331,7 +331,7 @@ SessionDataSet executeAggregationQuery( long slidingStep); ``` -## `SELECT` CLAUSE +## 2. `SELECT` CLAUSE The `SELECT` clause specifies the output of the query, consisting of several `selectExpr`. Each `selectExpr` defines one or more columns in the query result. For select expression details, see document [Operator-and-Expression](../SQL-Manual/Operator-and-Expression.md). - Example 1: @@ -346,7 +346,7 @@ select temperature from root.ln.wf01.wt01 select status, temperature from root.ln.wf01.wt01 ``` -### Last Query +### 2.1 Last Query The last query is a special type of query in Apache IoTDB. It returns the data point with the largest timestamp of the specified time series. In other word, it returns the latest state of a time series. This feature is especially important in IoT data analysis scenarios. To meet the performance requirement of real-time device monitoring systems, Apache IoTDB caches the latest values of all time series to achieve microsecond read latency. @@ -427,7 +427,7 @@ Total line number = 2 It costs 0.002s ``` -## `WHERE` CLAUSE +## 3. `WHERE` CLAUSE In IoTDB query statements, two filter conditions, **time filter** and **value filter**, are supported. @@ -438,7 +438,7 @@ The supported operators are as follows: - Range contains operator: contains ( `IN` ). - String matches operator: `LIKE`, `REGEXP`. -### Time Filter +### 3.1 Time Filter Use time filters to filter data for a specific time range. For supported formats of timestamps, please refer to [Timestamp](../Background-knowledge/Data-Type.md) . @@ -464,7 +464,7 @@ An example is as follows: Note: In the above example, `time` can also be written as `timestamp`. -### Value Filter +### 3.2 Value Filter Use value filters to filter data whose data values meet certain criteria. **Allow** to use a time series not selected in the select clause as a value filter. 
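Concretely, a series used as a value filter does not have to appear in the projection. The sketch below issues such a query through the Python client (the bare SQL behaves the same from the CLI; connection parameters are placeholders):

```python
from iotdb.Session import Session

session = Session("127.0.0.1", "6667", "root", "root")
session.open(False)

# status appears only in the WHERE clause as a value filter, not in the SELECT list
df = session.execute_query_statement(
    "select temperature from root.ln.wf01.wt01 where status = true and temperature > 24"
).todf()
print(df)

session.close()
```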
@@ -516,7 +516,7 @@ An example is as follows: select code from root.sg1.d1 where temperature is not null; ```` -### Fuzzy Query +### 3.3 Fuzzy Query Fuzzy query is divided into Like statement and Regexp statement, both of which can support fuzzy matching of TEXT type data. @@ -599,7 +599,7 @@ Total line number = 2 It costs 0.002s ``` -## `GROUP BY` CLAUSE +## 4. `GROUP BY` CLAUSE IoTDB supports using `GROUP BY` clause to aggregate the time series by segment and group. @@ -607,7 +607,7 @@ Segmented aggregation refers to segmenting data in the row direction according t Group aggregation refers to grouping the potential business attributes of time series for different time series. Each group contains several time series, and each group gets an aggregated value. Support **group by path level** and **group by tag** two grouping methods. -### Aggregate By Segment +### 4.1 Aggregate By Segment #### Aggregate By Time @@ -1252,7 +1252,7 @@ Get the results: +-----------------------------+-----------------------------+--------------------------------------+ ``` -### Aggregate By Group +### 4.2 Aggregate By Group #### Aggregation By Level @@ -1582,7 +1582,7 @@ As this feature is still under development, some queries have not been completed > 5. Temporarily not support expressions as aggregation function parameter,e.g. `count(s+1)`. > 6. Not support the value filter, which stands the same with the `GROUP BY LEVEL` query. -## `HAVING` CLAUSE +## 5. `HAVING` CLAUSE If you want to filter the results of aggregate queries, you can use the `HAVING` clause after the `GROUP BY` clause. @@ -1679,15 +1679,15 @@ Filtering result 2: +-----------------------------+-------------+---------+---------+ ``` -## `FILL` CLAUSE +## 6. `FILL` CLAUSE -### Introduction +### 6.1 Introduction When executing some queries, there may be no data for some columns in some rows, and data in these locations will be null, but this kind of null value is not conducive to data visualization and analysis, and the null value needs to be filled. In IoTDB, users can use the FILL clause to specify the fill mode when data is missing. Fill null value allows the user to fill any query result with null values according to a specific method, such as taking the previous value that is not null, or linear interpolation. The query result after filling the null value can better reflect the data distribution, which is beneficial for users to perform data analysis. -### Syntax Definition +### 6.2 Syntax Definition **The following is the syntax definition of the `FILL` clause:** @@ -1700,7 +1700,7 @@ FILL '(' PREVIOUS | LINEAR | constant ')' - We can specify only one fill method in the `FILL` clause, and this method applies to all columns of the result set. - Null value fill is not compatible with version 0.13 and previous syntax (`FILL(([(, , )?])+)`) is not supported anymore. -### Fill Methods +### 6.3 Fill Methods **IoTDB supports the following three fill methods:** @@ -1994,14 +1994,14 @@ result will be like: Total line number = 4 ``` -## `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) +## 7. `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) When the query result set has a large amount of data, it is not conducive to display on one page. You can use the `LIMIT/SLIMIT` clause and the `OFFSET/SOFFSET` clause to control paging. - The `LIMIT` and `SLIMIT` clauses are used to control the number of rows and columns of query results. - The `OFFSET` and `SOFFSET` clauses are used to control the starting position of the result display. 
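A combined sketch of these four clauses, issued here via the Python client (the SQL itself is what matters; connection parameters are placeholders), pages through both the rows and the value columns of a result set:

```python
from iotdb.Session import Session

session = Session("127.0.0.1", "6667", "root", "root")
session.open(False)

# SLIMIT 1 SOFFSET 1: keep one value column, skipping the first one;
# LIMIT 5 OFFSET 2:   keep five rows, skipping the first two.
df = session.execute_query_statement(
    "select * from root.ln.wf01.wt01 limit 5 offset 2 slimit 1 soffset 1"
).todf()
print(df)

session.close()
```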
-### Row Control over Query Results +### 7.1 Row Control over Query Results By using LIMIT and OFFSET clauses, users control the query results in a row-related manner. We demonstrate how to use LIMIT and OFFSET clauses through the following examples. @@ -2121,7 +2121,7 @@ Total line number = 4 It costs 0.016s ``` -### Column Control over Query Results +### 7.2 Column Control over Query Results By using SLIMIT and SOFFSET clauses, users can control the query results in a column-related manner. We will demonstrate how to use SLIMIT and SOFFSET clauses through the following examples. @@ -2209,7 +2209,7 @@ Total line number = 7 It costs 0.000s ``` -### Row and Column Control over Query Results +### 7.3 Row and Column Control over Query Results In addition to row or column control over query results, IoTDB allows users to control both rows and columns of query results. Here is a complete example with both LIMIT clauses and SLIMIT clauses. @@ -2244,7 +2244,7 @@ Total line number = 10 It costs 0.009s ``` -### Error Handling +### 7.4 Error Handling If the parameter N/SN of LIMIT/SLIMIT exceeds the size of the result set, IoTDB returns all the results as expected. For example, the query result of the original SQL statement consists of six rows, and we select the first 100 rows through the LIMIT clause: @@ -2322,9 +2322,9 @@ The SQL statement will not be executed and the corresponding error prompt is giv Msg: 411: Meet error in query process: The value of SOFFSET (2) is equal to or exceeds the number of sequences (2) that can actually be returned. ``` -## `ORDER BY` CLAUSE +## 8. `ORDER BY` CLAUSE -### Order by in ALIGN BY TIME mode +### 8.1 Order by in ALIGN BY TIME mode The result set of IoTDB is in ALIGN BY TIME mode by default and `ORDER BY TIME` clause can also be used to specify the ordering of timestamp. The SQL statement is: @@ -2345,7 +2345,7 @@ Results: +-----------------------------+--------------------------+------------------------+-----------------------------+------------------------+ ``` -### Order by in ALIGN BY DEVICE mode +### 8.2 Order by in ALIGN BY DEVICE mode When querying in ALIGN BY DEVICE mode, `ORDER BY` clause can be used to specify the ordering of result set. @@ -2447,7 +2447,7 @@ The result shows below: +-----------------------------+-----------------+---------------+-------------+------------------+ ``` -### Order by arbitrary expressions +### 8.3 Order by arbitrary expressions In addition to the predefined keywords "Time" and "Device" in IoTDB, `ORDER BY` can also be used to sort by any expressions. @@ -2620,11 +2620,11 @@ This will give you the following results: +-----------------------------+---------+-----+ ``` -## `ALIGN BY` CLAUSE +## 9. `ALIGN BY` CLAUSE In addition, IoTDB supports another result set format: `ALIGN BY DEVICE`. -### Align by Device +### 9.1 Align by Device The `ALIGN BY DEVICE` indicates that the deviceId is considered as a column. Therefore, there are totally limited columns in the dataset. @@ -2657,11 +2657,11 @@ Total line number = 6 It costs 0.012s ``` -### Ordering in ALIGN BY DEVICE +### 9.2 Ordering in ALIGN BY DEVICE ALIGN BY DEVICE mode arranges according to the device first, and sort each device in ascending order according to the timestamp. The ordering and priority can be adjusted through `ORDER BY` clause. -## `INTO` CLAUSE (QUERY WRITE-BACK) +## 10. `INTO` CLAUSE (QUERY WRITE-BACK) The `SELECT INTO` statement copies data from query result set into target time series. 
@@ -2671,7 +2671,7 @@ The application scenarios are as follows: - **Query result storage**: Persistently store the query results, which acts like a materialized view. - **Non-aligned time series to aligned time series**: Rewrite non-aligned time series into another aligned time series. -### SQL Syntax +### 10.1 SQL Syntax #### Syntax Definition @@ -2938,7 +2938,7 @@ This statement specifies that `root.sg_copy.d1` is an unaligned device and `root - When the target time series does not exist, the system automatically creates it (including the database). - When the queried time series does not exist, or the queried sequence does not have data, the target time series will not be created automatically. -### Application examples +### 10.2 Application examples #### Implement IoTDB internal ETL @@ -2995,7 +2995,7 @@ Total line number = 2 It costs 0.375s ``` -### User Permission Management +### 10.3 User Permission Management The user must have the following permissions to execute a query write-back statement: @@ -3004,6 +3004,6 @@ The user must have the following permissions to execute a query write-back state For more user permissions related content, please refer to [Account Management Statements](../User-Manual/Authority-Management.md). -### Configurable Properties +### 10.4 Configurable Properties * `select_into_insert_tablet_plan_row_limit`: The maximum number of rows can be processed in one insert-tablet-plan when executing select-into statements. 10000 by default. diff --git a/src/UserGuide/latest/Basic-Concept/Write-Delete-Data.md b/src/UserGuide/latest/Basic-Concept/Write-Delete-Data.md index 3d8fdb3a0..8c009eac1 100644 --- a/src/UserGuide/latest/Basic-Concept/Write-Delete-Data.md +++ b/src/UserGuide/latest/Basic-Concept/Write-Delete-Data.md @@ -21,7 +21,7 @@ # Write & Delete Data -## CLI INSERT +## 1. CLI INSERT IoTDB provides users with a variety of ways to insert real-time data, such as directly inputting [INSERT SQL statement](../SQL-Manual/SQL-Manual.md#insert-data) in [Client/Shell tools](../Tools-System/CLI.md), or using [Java JDBC](../API/Programming-JDBC.md) to perform single or batch execution of [INSERT SQL statement](../SQL-Manual/SQL-Manual.md). @@ -29,7 +29,7 @@ NOTE: This section mainly introduces the use of [INSERT SQL statement](../SQL- Writing a repeat timestamp covers the original timestamp data, which can be regarded as updated data. -### Use of INSERT Statements +### 1.1 Use of INSERT Statements The [INSERT SQL statement](../SQL-Manual/SQL-Manual.md#insert-data) statement is used to insert data into one or more specified timeseries created. For each point of data inserted, it consists of a [timestamp](../Basic-Concept/Operate-Metadata.md) and a sensor acquisition value (see [Data Type](../Background-knowledge/Data-Type.md)). @@ -89,7 +89,7 @@ IoTDB > insert into root.ln.wf02.wt02(status, hardware) values (false, 'v2') **Note:** Timestamps must be specified when inserting multiple rows of data in a SQL. -### Insert Data Into Aligned Timeseries +### 1.2 Insert Data Into Aligned Timeseries To insert data into a group of aligned time series, we only need to add the `ALIGNED` keyword in SQL, and others are similar. @@ -116,11 +116,11 @@ Total line number = 3 It costs 0.004s ``` -## NATIVE API WRITE +## 2. NATIVE API WRITE The Native API ( Session ) is the most widely used series of APIs of IoTDB, including multiple APIs, adapted to different data collection scenarios, with high performance and multi-language support. 
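Before the hunks below turn to the native APIs, a hedged sketch of the `ALIGNED` insert described in section 1.2 of this file may help; the device `root.sg1.d1` and measurements `s1`/`s2` are placeholders, assuming `d1` was created as an aligned device:

```sql
-- Single-row insert into an aligned device: ALIGNED follows the column list.
insert into root.sg1.d1(time, s1, s2) aligned values (1, 1.0, 2.0);

-- Multi-row insert; timestamps must be given explicitly in multi-row statements.
insert into root.sg1.d1(time, s1, s2) aligned values (2, 3.0, 4.0), (3, 5.0, 6.0);
```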
-### Multi-language API write +### 2.1 Multi-language API write #### Java @@ -139,7 +139,7 @@ Refer to [ C++ Data Manipulation Interface (DML) ](../API/Programming-Cpp-Native Refer to [Go Native API](../API/Programming-Go-Native-API.md) -## REST API WRITE +## 3. REST API WRITE Refer to [insertTablet (v1)](../API/RestServiceV1.md#inserttablet) or [insertTablet (v2)](../API/RestServiceV2.md#inserttablet) @@ -177,29 +177,29 @@ Example: } ``` -## MQTT WRITE +## 4. MQTT WRITE Refer to [Built-in MQTT Service](../API/Programming-MQTT.md#built-in-mqtt-service) -## BATCH DATA LOAD +## 5. BATCH DATA LOAD In different scenarios, the IoTDB provides a variety of methods for importing data in batches. This section describes the two most common methods for importing data in CSV format and TsFile format. -### TsFile Batch Load +### 5.1 TsFile Batch Load TsFile is the file format of time series used in IoTDB. You can directly import one or more TsFile files with time series into another running IoTDB instance through tools such as CLI. For details, see [Data Import](../Tools-System/Data-Import-Tool.md). -### CSV Batch Load +### 5.2 CSV Batch Load CSV stores table data in plain text. You can write multiple formatted data into a CSV file and import the data into the IoTDB in batches. Before importing data, you are advised to create the corresponding metadata in the IoTDB. Don't worry if you forget to create one, the IoTDB can automatically infer the data in the CSV to its corresponding data type, as long as you have a unique data type for each column. In addition to a single file, the tool supports importing multiple CSV files as folders and setting optimization parameters such as time precision. For details, see [Data Import](../Tools-System/Data-Import-Tool.md). -## DELETE +## 6. DELETE Users can delete data that meet the deletion condition in the specified timeseries by using the [DELETE statement](../SQL-Manual/SQL-Manual.md#delete-data). When deleting data, users can select one or more timeseries paths, prefix paths, or paths with star to delete data within a certain time interval. In a JAVA programming environment, you can use the [Java JDBC](../API/Programming-JDBC.md) to execute single or batch UPDATE statements. -### Delete Single Timeseries +### 6.1 Delete Single Timeseries Taking ln Group as an example, there exists such a usage scenario: @@ -242,7 +242,7 @@ delete from root.ln.wf02.wt02.status ``` -### Delete Multiple Timeseries +### 6.2 Delete Multiple Timeseries If both the power supply status and hardware version of the ln group wf02 plant wt02 device before 2017-11-01 16:26:00 need to be deleted, [the prefix path with broader meaning or the path with star](../Basic-Concept/Operate-Metadata.md) can be used to delete the data. The SQL statement for this operation is: @@ -263,7 +263,7 @@ IoTDB> delete from root.ln.wf03.wt02.status where time < now() Msg: The statement is executed successfully. 
``` -### Delete Time Partition (experimental) +### 6.3 Delete Time Partition (experimental) You may delete all data in a time partition of a database using the following grammar: diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_apache.md b/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_apache.md index 3e749e161..057d4001d 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_apache.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_apache.md @@ -20,24 +20,24 @@ --> # AINode Deployment -## AINode Introduction +## 1. AINode Introduction -### Capability Introduction +### 1.1 Capability Introduction AINode is the third type of endogenous node provided by IoTDB after the Configurable Node and DataNode. This node extends its ability to perform machine learning analysis on time series by interacting with the DataNode and Configurable Node of the IoTDB cluster. It supports the introduction of existing machine learning models from external sources for registration and the use of registered models to complete time series analysis tasks on specified time series data through simple SQL statements. The creation, management, and inference of models are integrated into the database engine. Currently, machine learning algorithms or self-developed models are available for common time series analysis scenarios, such as prediction and anomaly detection. -### Delivery Method +### 1.2 Delivery Method It is an additional package outside the IoTDB cluster, with independent installation and activation (if you need to try or use it, please contact Timecho Technology Business or Technical Support). -### Deployment mode +### 1.3 Deployment mode
-## Installation preparation +## 2. Installation preparation -### Get installation package +### 2.1 Get installation package Users can download the software installation package for AINode, download and unzip it to complete the installation of AINode. @@ -53,7 +53,7 @@ | README_ZH.md | file | Explanation of the Chinese version of the markdown format | | `README.md` | file | Instructions | -### Environment preparation +### 2.2 Environment preparation - Suggested operating environment:Ubuntu, CentOS, MacOS - Runtime Environment @@ -68,9 +68,9 @@ ../Python-3.8.0/python -m venv `venv` ``` -## Installation steps +## 3. Installation steps -### Install AINode +### 3.1 Install AINode 1. Check the kernel architecture of Linux @@ -140,7 +140,7 @@ ``` > Return to the default environment of the system: conda deactivate - ### Configuration item modification +### 3.2 Configuration item modification AINode supports modifying some necessary parameters. You can find the following parameters in the `conf/iotdb-ainode.properties` file and make persistent modifications to them: : @@ -156,7 +156,7 @@ AINode supports modifying some necessary parameters. You can find the following | ain_logs_dir | The path where AINode stores logs, the starting directory of the relative path is related to the operating system, and it is recommended to use an absolute path | String | logs/AINode | Effective after restart | | ain_thrift_compression_enabled | Does AINode enable Thrift's compression mechanism , 0-Do not start, 1-Start | Boolean | 0 | Effective after restart | -### Start AINode +### 3.3 Start AINode After completing the deployment of Seed Config Node, the registration and inference functions of the model can be supported by adding AINode nodes. After specifying the information of the IoTDB cluster in the configuration file, the corresponding instruction can be executed to start AINode and join the IoTDB cluster。 @@ -214,7 +214,7 @@ AINode supports modifying some necessary parameters. You can find the following After writing the parameter value, uncomment the corresponding line and save it to take effect on the next script execution. 
-#### Example +#### Example ##### Directly start: @@ -251,7 +251,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t # Windows c nohup bash sbin\start-ainode.bat -r > myout.file 2>& 1 & ``` -#### Non networked environment startup +#### Non networked environment startup ##### Start command @@ -282,7 +282,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t sbin\start-ainode.bat -i -r -n ``` -##### Parameter introduction: +##### Parameter introduction: | **Name** | **Label** | **Describe** | **Is it mandatory** | **Type** | **Default value** | **Input method** | | ------------------- | ---- | ------------------------------------------------------------ | -------- | ------ | ---------------- | ---------------------- | @@ -291,7 +291,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t > Attention: When installation fails in a non networked environment, first check if the installation package corresponding to the platform is selected, and then confirm that the Python version is 3.8 (due to the limitations of the downloaded installation package on Python versions, 3.7, 3.9, and others are not allowed) -#### Example +#### Example ##### Directly start: @@ -309,7 +309,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t nohup bash sbin\start-ainode.bat > myout.file 2>& 1 & ``` -### Detecting the status of AINode nodes +### 3.4 Detecting the status of AINode nodes During the startup process of AINode, the new AINode will be automatically added to the IoTDB cluster. After starting AINode, you can enter SQL in the command line to query. If you see an AINode node in the cluster and its running status is Running (as shown below), it indicates successful joining. @@ -325,7 +325,7 @@ IoTDB> show cluster +------+----------+-------+---------------+------------+-------+-----------+ ``` -### Stop AINode +### 3.5 Stop AINode If you need to stop a running AINode node, execute the corresponding shutdown script. @@ -379,7 +379,7 @@ IoTDB> show cluster ``` If you need to restart the node, you need to execute the startup script again. -### Remove AINode +### 3.6 Remove AINode When it is necessary to remove an AINode node from the cluster, a removal script can be executed. The difference between removing and stopping scripts is that stopping retains the AINode node in the cluster but stops the AINode service, while removing removes the AINode node from the cluster. @@ -427,7 +427,7 @@ When it is necessary to remove an AINode node from the cluster, a removal script ``` After writing the parameter value, uncomment the corresponding line and save it to take effect on the next script execution. -#### Example +#### Example ##### Directly remove: @@ -461,9 +461,9 @@ If the user loses files in the data folder, AINode may not be able to actively r sbin\remove-ainode.bat -t /: ``` -## common problem +## 4. common problem -### An error occurs when starting AINode stating that the venv module cannot be found +### 4.1 An error occurs when starting AINode stating that the venv module cannot be found When starting AINode using the default method, a Python virtual environment will be created in the installation package directory and dependencies will be installed, so it is required to install the venv module. Generally speaking, Python 3.8 and above versions come with built-in VenV, but for some systems with built-in Python environments, this requirement may not be met. 
There are two solutions when this error occurs (choose one or the other): @@ -479,7 +479,7 @@ Install version 3.8.0 of venv into AINode in the AINode path. ``` When running the startup script, use ` -i ` to specify an existing Python interpreter path as the running environment for AINode, eliminating the need to create a new virtual environment. - ### The SSL module in Python is not properly installed and configured to handle HTTPS resources + ### 4.2 The SSL module in Python is not properly installed and configured to handle HTTPS resources WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. You can install OpenSSLS and then rebuild Python to solve this problem > Currently Python versions 3.6 to 3.9 are compatible with OpenSSL 1.0.2, 1.1.0, and 1.1.1. @@ -493,7 +493,7 @@ make sudo make install ``` - ### Pip version is lower + ### 4.3 Pip version is lower A compilation issue similar to "error: Microsoft Visual C++14.0 or greater is required..." appears on Windows @@ -505,7 +505,7 @@ The corresponding error occurs during installation and compilation, usually due ``` - ### Install and compile Python + ### 4.4 Install and compile Python Use the following instructions to download the installation package from the official website and extract it: ```shell diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_timecho.md index e82a62556..1bfc0699a 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/AINode_Deployment_timecho.md @@ -20,24 +20,24 @@ --> # AINode Deployment -## AINode Introduction +## 1. AINode Introduction -### Capability Introduction +### 1.1 Capability Introduction AINode is the third type of endogenous node provided by IoTDB after the Configurable Node and DataNode. This node extends its ability to perform machine learning analysis on time series by interacting with the DataNode and Configurable Node of the IoTDB cluster. It supports the introduction of existing machine learning models from external sources for registration and the use of registered models to complete time series analysis tasks on specified time series data through simple SQL statements. The creation, management, and inference of models are integrated into the database engine. Currently, machine learning algorithms or self-developed models are available for common time series analysis scenarios, such as prediction and anomaly detection. -### Delivery Method +### 1.2 Delivery Method It is an additional package outside the IoTDB cluster, with independent installation and activation (if you need to try or use it, please contact Timecho Technology Business or Technical Support). -### Deployment mode +### 1.3 Deployment mode
-## Installation preparation +## 2. Installation preparation -### Get installation package +### 2.1 Get installation package Users can download the software installation package for AINode, download and unzip it to complete the installation of AINode. @@ -53,7 +53,7 @@ | README_ZH.md | file | Explanation of the Chinese version of the markdown format | | `README.md` | file | Instructions | -### Environment preparation +### 2.2 Environment preparation - Suggested operating environment:Ubuntu, CentOS, MacOS - Runtime Environment @@ -68,9 +68,9 @@ ../Python-3.8.0/python -m venv `venv` ``` -## Installation steps +## 3. Installation steps -### Install AINode +### 3.1 Install AINode 1. AINode activation @@ -174,7 +174,7 @@ ``` > Return to the default environment of the system: conda deactivate - ### Configuration item modification + ### 3.2 Configuration item modification AINode supports modifying some necessary parameters. You can find the following parameters in the `conf/iotdb-ainode.properties` file and make persistent modifications to them: : @@ -190,7 +190,7 @@ AINode supports modifying some necessary parameters. You can find the following | ain_logs_dir | The path where AINode stores logs, the starting directory of the relative path is related to the operating system, and it is recommended to use an absolute path | String | logs/AINode | Effective after restart | | ain_thrift_compression_enabled | Does AINode enable Thrift's compression mechanism , 0-Do not start, 1-Start | Boolean | 0 | Effective after restart | -### Start AINode +### 3.3 Start AINode After completing the deployment of Seed Config Node, the registration and inference functions of the model can be supported by adding AINode nodes. After specifying the information of the IoTDB cluster in the configuration file, the corresponding instruction can be executed to start AINode and join the IoTDB cluster。 @@ -343,7 +343,7 @@ If the version of AINode has been updated (such as updating the `lib` folder), t nohup bash sbin\start-ainode.bat > myout.file 2>& 1 & ``` -### Detecting the status of AINode nodes +### 3.4 Detecting the status of AINode nodes During the startup process of AINode, the new AINode will be automatically added to the IoTDB cluster. After starting AINode, you can enter SQL in the command line to query. If you see an AINode node in the cluster and its running status is Running (as shown below), it indicates successful joining. @@ -359,7 +359,7 @@ IoTDB> show cluster +------+----------+-------+---------------+------------+-------+-----------+ ``` -### Stop AINode +### 3.5 Stop AINode If you need to stop a running AINode node, execute the corresponding shutdown script. @@ -413,7 +413,7 @@ IoTDB> show cluster ``` If you need to restart the node, you need to execute the startup script again. -### Remove AINode +### 3.6 Remove AINode When it is necessary to remove an AINode node from the cluster, a removal script can be executed. The difference between removing and stopping scripts is that stopping retains the AINode node in the cluster but stops the AINode service, while removing removes the AINode node from the cluster. @@ -495,9 +495,9 @@ If the user loses files in the data folder, AINode may not be able to actively r sbin\remove-ainode.bat -t /: ``` -## common problem +## 4. 
common problem -### An error occurs when starting AINode stating that the venv module cannot be found +### 4.1 An error occurs when starting AINode stating that the venv module cannot be found When starting AINode using the default method, a Python virtual environment will be created in the installation package directory and dependencies will be installed, so it is required to install the venv module. Generally speaking, Python 3.8 and above versions come with built-in VenV, but for some systems with built-in Python environments, this requirement may not be met. There are two solutions when this error occurs (choose one or the other): @@ -513,7 +513,7 @@ Install version 3.8.0 of venv into AINode in the AINode path. ``` When running the startup script, use ` -i ` to specify an existing Python interpreter path as the running environment for AINode, eliminating the need to create a new virtual environment. - ### The SSL module in Python is not properly installed and configured to handle HTTPS resources + ### 4.2 The SSL module in Python is not properly installed and configured to handle HTTPS resources WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. You can install OpenSSLS and then rebuild Python to solve this problem > Currently Python versions 3.6 to 3.9 are compatible with OpenSSL 1.0.2, 1.1.0, and 1.1.1. @@ -527,7 +527,7 @@ make sudo make install ``` - ### Pip version is lower + ### 4.3 Pip version is lower A compilation issue similar to "error: Microsoft Visual C++14.0 or greater is required..." appears on Windows @@ -539,7 +539,7 @@ The corresponding error occurs during installation and compilation, usually due ``` - ### Install and compile Python + ### 4.4 Install and compile Python Use the following instructions to download the installation package from the official website and extract it: ```shell diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_apache.md b/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_apache.md index 4389a704f..568aff270 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_apache.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_apache.md @@ -26,7 +26,7 @@ This section will take the IoTDB classic cluster deployment architecture 3C3D (3 -## Note +## 1. Note 1. Before installation, ensure that the system is complete by referring to [System configuration](./Environment-Requirements.md) @@ -46,13 +46,13 @@ This section will take the IoTDB classic cluster deployment architecture 3C3D (3 - Using the same user operation: Ensure that the same user is used for start, stop and other operations, and do not switch users. - Avoid using sudo: Try to avoid using sudo commands as they execute commands with root privileges, which may cause confusion or security issues. -## Preparation Steps +## 2. Preparation Steps 1. Prepare the IoTDB database installation package::apache-iotdb-{version}-all-bin.zip(Please refer to the installation package for details:[IoTDB-Package](../Deployment-and-Maintenance/IoTDB-Package_apache.md)) 2. Configure the operating system environment according to environmental requirements (system environment configuration can be found in:[Environment Requirement](../Deployment-and-Maintenance/Environment-Requirements.md)) -## Installation Steps +## 3. 
Installation Steps

Assuming there are three Linux servers now, the IP addresses and service roles are assigned as follows:

@@ -62,7 +62,7 @@ Assuming there are three Linux servers now, the IP addresses and service roles a

| 192.168.1.4 | iotdb-2 | ConfigNode、DataNode |
| 192.168.1.5 | iotdb-3 | ConfigNode、DataNode |

-### Set Host Name
+### 3.1 Set Host Name

On three machines, configure the host names separately. To set the host names, configure `/etc/hosts` on the target server. Use the following command:

@@ -72,7 +72,7 @@ echo "192.168.1.4 iotdb-2" >> /etc/hosts
echo "192.168.1.5 iotdb-3" >> /etc/hosts
```

-### Configuration
+### 3.2 Configuration

Unzip the installation package and enter the installation directory

@@ -133,7 +133,7 @@ Open DataNode Configuration File `./conf/iotdb-system.properties`,Set the follow

> ❗️Attention: Editors such as VSCode Remote do not have automatic configuration saving function. Please ensure that the modified files are saved persistently, otherwise the configuration items will not take effect

-### Start ConfigNode
+### 3.3 Start ConfigNode

Start the first confignode of IoTDB-1 first, ensuring that the seed confignode node starts first, and then start the second and third confignode nodes in sequence

@@ -145,7 +145,7 @@ cd sbin

If the startup fails, please refer to [Common Questions](#common-questions).

-### Start DataNode
+### 3.4 Start DataNode

Enter the `sbin` directory of iotdb and start three datanode nodes in sequence:

@@ -154,7 +154,7 @@ cd sbin
./start-datanode.sh -d #"- d" parameter will start in the background
```

-### Verify Deployment
+### 3.5 Verify Deployment

Can be executed directly Cli startup script in `./sbin` directory:

@@ -172,9 +172,9 @@ You can use the `show cluster` command to view cluster information:

> The appearance of `ACTIVATED (W)` indicates passive activation, which means that this Configurable Node does not have a license file (or has not issued the latest license file with a timestamp), and its activation depends on other Activated Configurable Nodes in the cluster. At this point, it is recommended to check if the license file has been placed in the license folder. If not, please place the license file. If a license file already exists, it may be due to inconsistency between the license file of this node and the information of other nodes. Please contact Timecho staff to reapply.

-## Node Maintenance Steps
+## 4. Node Maintenance Steps

-### ConfigNode Node Maintenance
+### 4.1 ConfigNode Node Maintenance

ConfigNode node maintenance is divided into two types of operations: adding and removing ConfigNodes, with two common use cases:
- Cluster expansion: For example, when there is only one ConfigNode in the cluster, and you want to increase the high availability of ConfigNode nodes, you can add two ConfigNodes, making a total of three ConfigNodes in the cluster.

@@ -239,7 +239,7 @@ sbin/remove-confignode.bat [confignode_id]
```

-### DataNode Node Maintenance
+### 4.2 DataNode Node Maintenance

There are two common scenarios for DataNode node maintenance:

@@ -306,7 +306,7 @@ sbin/remove-datanode.sh [datanode_id]
#Windows
sbin/remove-datanode.bat [datanode_id]
```

-## Common Questions
+## 5. Common Questions

1.
Confignode failed to start diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_timecho.md index bd7d0aee5..99996d8b7 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Cluster-Deployment_timecho.md @@ -28,7 +28,7 @@ This guide describes how to manually deploy a cluster instance consisting of 3 C -## Prerequisites +## 1. Prerequisites 1. [System configuration](./Environment-Requirements.md):Ensure the system has been configured according to the preparation guidelines. @@ -53,13 +53,13 @@ This guide describes how to manually deploy a cluster instance consisting of 3 C 6. **Monitoring Panel**: Deploy a monitoring panel to track key performance metrics. Contact the Timecho team for access and refer to the "[Monitoring Panel Deployment](./Monitoring-panel-deployment.md)" guide. -## Preparation +## 2. Preparation 1. Obtain the TimechoDB installation package: `timechodb-{version}-bin.zip` following [IoTDB-Package](../Deployment-and-Maintenance/IoTDB-Package_timecho.md)) 2. Configure the operating system environment according to [Environment Requirement](../Deployment-and-Maintenance/Environment-Requirements.md)) -## Installation Steps +## 3. Installation Steps Taking a cluster with three Linux servers with the following information as example: @@ -69,7 +69,7 @@ Taking a cluster with three Linux servers with the following information as exam | 11.101.17.225 | iotdb-2 | ConfigNode、DataNode | | 11.101.17.226 | iotdb-3 | ConfigNode、DataNode | -### 1.Configure Hostnames +### 3.1 Configure Hostnames On all three servers, configure the hostnames by editing the `/etc/hosts` file. Use the following commands: @@ -79,7 +79,7 @@ echo "11.101.17.225 iotdb-2" >> /etc/hosts echo "11.101.17.226 iotdb-3" >> /etc/hosts ``` -### 2. Extract Installation Package +### 3.2 Extract Installation Package Unzip the installation package and enter the installation directory: @@ -88,7 +88,7 @@ unzip timechodb-{version}-bin.zip cd timechodb-{version}-bin ``` -### 3. Parameters Configuration +### 3.3 Parameters Configuration - #### Memory Configuration @@ -137,7 +137,7 @@ Set the following parameters in `./conf/iotdb-system.properties`. Refer to `./co **Note:** Ensure files are saved after editing. Tools like VSCode Remote do not save changes automatically. -### 4. Start ConfigNode Instances +### 3.4 Start ConfigNode Instances 1. Start the first ConfigNode (`iotdb-1`) as the seed node @@ -150,7 +150,7 @@ cd sbin If the startup fails, refer to the [Common Questions](#common-questions) section below for troubleshooting. 
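As a quick sanity check at this point in the guide, the cluster state can be inspected from the CLI in `sbin` once the ConfigNodes are up; a minimal sketch, assuming the CLI is connected to one of the started nodes:

```sql
-- All three ConfigNodes should be listed with a Running status
-- before the DataNodes are started in the next step.
show cluster
```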
-### 5.Start DataNode Instances +### 3.5 Start DataNode Instances On each server, navigate to the `sbin` directory and start the DataNode: @@ -159,7 +159,7 @@ cd sbin ./start-datanode.sh -d #"- d" parameter will start in the background ``` -### 6.Activate Database +### 3.6 Activate Database #### Option 1: File-Based Activation @@ -217,15 +217,15 @@ cd sbin IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===' ``` -### 7.Verify Activation +### 3.7 Verify Activation Check the `ClusterActivationStatus` field. If it shows `ACTIVATED`, the database has been successfully activated. ![](/img/%E9%9B%86%E7%BE%A4-%E9%AA%8C%E8%AF%81.png) -## Maintenance +## 4. Maintenance -### ConfigNode Maintenance +### 4.1 ConfigNode Maintenance ConfigNode maintenance includes adding and removing ConfigNodes. Common use cases include: @@ -289,7 +289,7 @@ sbin/remove-confignode.bat [confignode_id] sbin/remove-confignode.bat [cn_internal_address:cn_internal_port] ``` -### DataNode Maintenance +### 4.2 DataNode Maintenance DataNode maintenance includes adding and removing DataNodes. Common use cases include: @@ -351,7 +351,7 @@ sbin/remove-datanode.sh [dn_rpc_address:dn_rpc_port] sbin/remove-datanode.bat [dn_rpc_address:dn_rpc_port] ``` -## Common Questions +## 5. Common Questions 1. Activation Fails Repeatedly - Use the `ls -al` command to verify that the ownership of the installation directory matches the current user. @@ -388,15 +388,15 @@ sbin/remove-datanode.bat [dn_rpc_address:dn_rpc_port] rm -rf data logs ``` -## Appendix +## 6. Appendix -### ConfigNode Parameters +### 6.1 ConfigNode Parameters | Parameter | Description | Is it required | | :-------- | :---------------------------------------------------------- | :------------- | | -d | Starts the process in daemon mode (runs in the background). | No | -### DataNode Parameters +### 6.2 DataNode Parameters | Parameter | Description | Required | | :-------- | :----------------------------------------------------------- | :------- | diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Database-Resources.md b/src/UserGuide/latest/Deployment-and-Maintenance/Database-Resources.md index d6210318a..51cc3a70a 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Database-Resources.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Database-Resources.md @@ -19,7 +19,7 @@ --> # Database Resources -## CPU +## 1. CPU @@ -81,7 +81,7 @@
-## Memory +## 2. Memory @@ -143,8 +143,8 @@
-## Storage (Disk) -### Storage space +## 3. Storage (Disk) +### 3.1 Storage space Calculation formula: Number of measurement points * Sampling frequency (Hz) * Size of each data point (Byte, different data types may vary, see table below) * Storage time (seconds) * Number of copies (usually 1 copy for a single node and 2 copies for a cluster) ÷ Compression ratio (can be estimated at 5-10 times, but may be higher in actual situations) @@ -189,13 +189,13 @@ Example: 1000 devices, each with 100 measurement points, a total of 100000 seque - Complete calculation formula: 1000 devices * 100 measurement points * 12 bytes per data point * 86400 seconds per day * 365 days per year * 3 copies / 10 compression ratio / 1024 / 1024 / 1024 / 1024 =11T - Simplified calculation formula: 1000 * 100 * 12 * 86400 * 365 * 3 / 10 / 1024 / 1024 / 1024 / 1024 =11T -### Storage Configuration +### 3.2 Storage Configuration If the number of nodes is over 10000000 or the query load is high, it is recommended to configure SSD -## Network (Network card) +## 4. Network (Network card) If the write throughput does not exceed 10 million points/second, configure 1Gbps network card. When the write throughput exceeds 10 million points per second, a 10Gbps network card needs to be configured. | **Write throughput (data points per second)** | **NIC rate** | | ------------------- | ------------- | | <10 million | 1Gbps | | >=10 million | 10Gbps | -## Other instructions +## 5. Other instructions IoTDB has the ability to scale up clusters in seconds, and expanding node data does not require migration. Therefore, you do not need to worry about the limited cluster capacity estimated based on existing data. In the future, you can add new nodes to the cluster when you need to scale up. \ No newline at end of file diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_apache.md b/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_apache.md index a934884cb..eec5edf92 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_apache.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_apache.md @@ -22,7 +22,7 @@ IoTDB has two operation modes: standalone mode and cluster mode. -## 1 Standalone Mode +## 1. Standalone Mode An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, referred to as 1C1D. @@ -31,7 +31,7 @@ An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, referred to a - **Deployment method**:[Stand-Alone Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md) -## 2 Cluster Mode +## 2. Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNodes, typically 3 DataNodes, referred to as 3C3D. In the event of partial node failures, the remaining nodes can still provide services, ensuring high availability of the database service, and the database performance can be improved with the addition of nodes. @@ -39,7 +39,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNode - **Applicable scenarios**: Enterprise-level application scenarios that require high availability and reliability. - **Deployment method**: [Cluster Deployment](../Deployment-and-Maintenance/Cluster-Deployment_apache.md) -## 3 Summary of Features +## 3. 
Summary of Features | **Dimension** | **Stand-Alone Mode** | **Cluster Mode** | | :-------------------------- | :----------------------------------------------------- | :----------------------------------------------------------- | diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_timecho.md index c757e9561..b2daee47f 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Deployment-form_timecho.md @@ -22,7 +22,7 @@ IoTDB has two operation modes: standalone mode and cluster mode. -## 1 Standalone Mode +## 1. Standalone Mode An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D. @@ -30,7 +30,7 @@ An IoTDB standalone instance includes 1 ConfigNode and 1 DataNode, i.e., 1C1D. - **Use Cases**: Scenarios with limited resources or low high-availability requirements, such as edge servers. - **Deployment Method**: [Stand-Alone Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md) -## 2 Dual-Active Mode +## 2. Dual-Active Mode Dual-Active Deployment is a feature of TimechoDB, where two independent instances synchronize bidirectionally and can provide services simultaneously. If one instance stops and restarts, the other instance will resume data transfer from the breakpoint. @@ -40,7 +40,7 @@ Dual-Active Deployment is a feature of TimechoDB, where two independent instance - **Use Cases**: Scenarios with limited resources (only two servers) but requiring high availability. - **Deployment Method**: [Dual-Active Deployment](../Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md) -## 3 Cluster Mode +## 3. Cluster Mode An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNodes, typically 3 DataNodes, i.e., 3C3D. If some nodes fail, the remaining nodes can still provide services, ensuring high availability of the database. Performance can be improved by adding DataNodes. @@ -50,7 +50,7 @@ An IoTDB cluster instance consists of 3 ConfigNodes and no fewer than 3 DataNode -## 4 Feature Summary +## 4. Feature Summary | **Dimension** | **Stand-Alone Mode** | **Dual-Active Mode** | **Cluster Mode** | | :-------------------------- | :------------------------------------------------------- | :------------------------------------------------------ | :------------------------------------------------------ | diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_apache.md b/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_apache.md index 048c3e0d8..2bd990022 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_apache.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_apache.md @@ -20,9 +20,9 @@ --> # Docker Deployment -## Environmental Preparation +## 1. 
Environmental Preparation

-### Docker Installation
+### 1.1 Docker Installation

```SQL
#Taking Ubuntu as an example, other operating systems can search for installation methods themselves
@@ -42,7 +42,7 @@ sudo systemctl enable docker
docker --version #Display version information, indicating successful installation
```

-### Docker-compose Installation
+### 1.2 Docker-compose Installation

```SQL
#Installation command
@@ -53,11 +53,11 @@ ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version #Displaying version information indicates successful installation
```

-## Stand-Alone Deployment
+## 2. Stand-Alone Deployment

This section demonstrates how to deploy a standalone Docker version of 1C1D.

-### Pull Image File
+### 2.1 Pull Image File

The Docker image of Apache IoTDB has been uploaded tohttps://hub.docker.com/r/apache/iotdb。

@@ -75,13 +75,13 @@ docker images

![](/img/%E5%BC%80%E6%BA%90-%E6%8B%89%E5%8F%96%E9%95%9C%E5%83%8F.png)

-### Create Docker Bridge Network
+### 2.2 Create Docker Bridge Network

```Bash
docker network create --driver=bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 iotdb
```

-### Write The Yml File For Docker-Compose
+### 2.3 Write The Yml File For Docker-Compose

Here we take the example of consolidating the IoTDB installation directory and yml files in the/docker iotdb folder:

@@ -130,7 +130,7 @@ networks:
external: true
```

-### Start IoTDB
+### 2.4 Start IoTDB

Use the following command to start:

@@ -139,7 +139,7 @@ cd /docker-iotdb
docker-compose -f docker-compose-standalone.yml up -d #Background startup
```

-### Validate Deployment
+### 2.5 Validate Deployment

- Viewing the log, the following words indicate successful startup

@@ -172,7 +172,7 @@ You can see that all services are running and the activation status shows as act

![](/img/%E5%BC%80%E6%BA%90-%E9%AA%8C%E8%AF%81%E9%83%A8%E7%BD%B23.png)

-### Map/conf Directory (optional)
+### 2.6 Map/conf Directory (optional)

If you want to directly modify the configuration file in the physical machine in the future, you can map the/conf folder in the container in three steps:

@@ -197,7 +197,7 @@ Step 3: Restart IoTDB
docker-compose -f docker-compose-standalone.yml up -d
```

-## Cluster Deployment
+## 3. Cluster Deployment

This section describes how to manually deploy an instance that includes 3 Config Nodes and 3 Data Nodes, commonly known as a 3C3D cluster.

@@ -209,7 +209,7 @@ This section describes how to manually deploy an instance that includes 3 Config

Taking the host network as an example, we will demonstrate how to deploy a 3C3D cluster.

-### Set Host Name
+### 3.1 Set Host Name

Assuming there are currently three Linux servers, the IP addresses and service role assignments are as follows:

@@ -227,7 +227,7 @@ echo "192.168.1.4 iotdb-2" >> /etc/hosts
echo "192.168.1.5 iotdb-3" >> /etc/hosts
```

-### Pull Image File
+### 3.2 Pull Image File

The Docker image of Apache IoTDB has been uploaded tohttps://hub.docker.com/r/apache/iotdb。

@@ -245,7 +245,7 @@ docker images

![](/img/%E5%BC%80%E6%BA%90-%E9%9B%86%E7%BE%A4%E7%89%881.png)

-### Write The Yml File For Docker Compose
+### 3.3 Write The Yml File For Docker Compose

Here we take the example of consolidating the IoTDB installation directory and yml files in the `/docker-iotdb` folder:

@@ -324,7 +324,7 @@ services:
network_mode: "host" #Using the host network
```

-### Starting Confignode For The First Time
+### 3.4 Starting Confignode For The First Time

First, start configNodes on each of the three servers to obtain the machine code.
Pay attention to the startup order, start the first iotdb-1 first, then start iotdb-2 and iotdb-3. @@ -333,7 +333,7 @@ cd /docker-iotdb docker-compose -f confignode.yml up -d #Background startup ``` -### Start Datanode +### 3.5 Start Datanode Start datanodes on 3 servers separately @@ -344,7 +344,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E5%BC%80%E6%BA%90-%E9%9B%86%E7%BE%A4%E7%89%882.png) -### Validate Deployment +### 3.6 Validate Deployment - Viewing the logs, the following words indicate that the datanode has successfully started @@ -377,7 +377,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E5%BC%80%E6%BA%90-%E9%9B%86%E7%BE%A4%E7%89%885.png) -### Map/conf Directory (optional) +### 3.7 Map/conf Directory (optional) If you want to directly modify the configuration file in the physical machine in the future, you can map the/conf folder in the container in three steps: diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_timecho.md index 4aec6d8ee..ccd071bbb 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Docker-Deployment_timecho.md @@ -20,9 +20,9 @@ --> # Docker Deployment -## Environmental Preparation +## 1. Environmental Preparation -### Docker Installation +### 1.1 Docker Installation ```Bash #Taking Ubuntu as an example, other operating systems can search for installation methods themselves @@ -42,7 +42,7 @@ sudo systemctl enable docker docker --version #Display version information, indicating successful installation ``` -### Docker-compose Installation +### 1.2 Docker-compose Installation ```Bash #Installation command @@ -53,7 +53,7 @@ ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose docker-compose --version #Displaying version information indicates successful installation ``` -### Install The Dmidecode Plugin +### 1.3 Install The Dmidecode Plugin By default, Linux servers should already be installed. If not, you can use the following command to install them. @@ -63,15 +63,15 @@ sudo apt-get install dmidecode After installing dmidecode, search for the installation path: `wherever dmidecode`. Assuming the result is `/usr/sbin/dmidecode`, remember this path as it will be used in the later docker compose yml file. -### Get Container Image Of IoTDB +### 1.4 Get Container Image Of IoTDB You can contact business or technical support to obtain container images for IoTDB Enterprise Edition. -## Stand-Alone Deployment +## 2. Stand-Alone Deployment This section demonstrates how to deploy a standalone Docker version of 1C1D. 
-### Load Image File +### 2.1 Load Image File For example, the container image file name of IoTDB obtained here is: `iotdb-enterprise-1.3.2-3-standalone-docker.tar.gz` @@ -89,13 +89,13 @@ docker images ![](/img/%E5%8D%95%E6%9C%BA-%E6%9F%A5%E7%9C%8B%E9%95%9C%E5%83%8F.png) -### Create Docker Bridge Network +### 2.2 Create Docker Bridge Network ```Bash docker network create --driver=bridge --subnet=172.18.0.0/16 --gateway=172.18.0.1 iotdb ``` -### Write The Yml File For docker-compose +### 2.3 Write The Yml File For docker-compose Here we take the example of consolidating the IoTDB installation directory and yml files in the/docker iotdb folder: @@ -147,7 +147,7 @@ networks: external: true ``` -### First Launch +### 2.4 First Launch Use the following command to start: @@ -160,7 +160,7 @@ Due to lack of activation, it is normal to exit directly upon initial startup. T ![](/img/%E5%8D%95%E6%9C%BA-%E6%BF%80%E6%B4%BB.png) -### Apply For Activation +### 2.5 Apply For Activation - After the first startup, a system_info file will be generated in the physical machine directory `/docker-iotdb/iotdb/activation`, and this file will be copied to the Timecho staff. @@ -170,7 +170,7 @@ Due to lack of activation, it is normal to exit directly upon initial startup. T ![](/img/%E5%8D%95%E6%9C%BA-%E7%94%B3%E8%AF%B7%E6%BF%80%E6%B4%BB2.png) -### Restart IoTDB +### 2.6 Restart IoTDB ```Bash docker-compose -f docker-compose-standalone.yml up -d @@ -178,7 +178,7 @@ docker-compose -f docker-compose-standalone.yml up -d ![](/img/%E5%90%AF%E5%8A%A8iotdb.png) -### Validate Deployment +### 2.7 Validate Deployment - Viewing the log, the following words indicate successful startup @@ -211,7 +211,7 @@ docker-compose -f docker-compose-standalone.yml up -d ![](/img/%E5%8D%95%E6%9C%BA-%E9%AA%8C%E8%AF%81%E9%83%A8%E7%BD%B23.png) -### Map/conf Directory (optional) +### 2.8 Map/conf Directory (optional) If you want to directly modify the configuration file in the physical machine in the future, you can map the/conf folder in the container in three steps: @@ -239,7 +239,7 @@ Step 3: Restart IoTDB docker-compose -f docker-compose-standalone.yml up -d ``` -## Cluster Deployment +## 3. Cluster Deployment This section describes how to manually deploy an instance that includes 3 Config Nodes and 3 Data Nodes, commonly known as a 3C3D cluster. @@ -251,7 +251,7 @@ This section describes how to manually deploy an instance that includes 3 Config Taking the host network as an example, we will demonstrate how to deploy a 3C3D cluster. -### Set Host Name +### 3.1 Set Host Name Assuming there are currently three Linux servers, the IP addresses and service role assignments are as follows: @@ -269,7 +269,7 @@ echo "192.168.1.4 iotdb-2" >> /etc/hosts echo "192.168.1.5 iotdb-3" >> /etc/hosts ``` -### Load Image File +### 3.2 Load Image File For example, the container image file name obtained for IoTDB is: `iotdb-enterprise-1.3.23-standalone-docker.tar.gz` @@ -287,7 +287,7 @@ docker images ![](/img/%E9%95%9C%E5%83%8F%E5%8A%A0%E8%BD%BD.png) -### Write The Yml File For Docker Compose +### 3.3 Write The Yml File For Docker Compose Here we take the example of consolidating the IoTDB installation directory and yml files in the /docker-iotdb folder: @@ -366,7 +366,7 @@ services: network_mode: "host" #Using the host network ``` -### Starting Confignode For The First Time +### 3.4 Starting Confignode For The First Time First, start configNodes on each of the three servers to obtain the machine code. 
Pay attention to the startup order, start the first iotdb-1 first, then start iotdb-2 and iotdb-3. @@ -375,7 +375,7 @@ cd /docker-iotdb docker-compose -f confignode.yml up -d #Background startup ``` -### Apply For Activation +### 3.5 Apply For Activation - After starting three confignodes for the first time, a system_info file will be generated in each physical machine directory `/docker-iotdb/iotdb/activation`, and the system_info files of the three servers will be copied to the Timecho staff; @@ -387,7 +387,7 @@ docker-compose -f confignode.yml up -d #Background startup - After the license is placed in the corresponding activation folder, confignode will be automatically activated without restarting confignode -### Start Datanode +### 3.6 Start Datanode Start datanodes on 3 servers separately @@ -398,7 +398,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E9%9B%86%E7%BE%A4%E7%89%88-dn%E5%90%AF%E5%8A%A8.png) -### Validate Deployment +### 3.7 Validate Deployment - Viewing the logs, the following words indicate that the datanode has successfully started @@ -431,7 +431,7 @@ docker-compose -f datanode.yml up -d #Background startup ![](/img/%E9%9B%86%E7%BE%A4-%E6%BF%80%E6%B4%BB.png) -### Map/conf Directory (optional) +### 3.8 Map/conf Directory (optional) If you want to directly modify the configuration file in the physical machine in the future, you can map the/conf folder in the container in three steps: diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md index 40c5e1d3d..2865e6da7 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Dual-Active-Deployment_timecho.md @@ -20,7 +20,7 @@ --> # Dual Active Deployment -## What is a double active version? +## 1. What is a double active version? Dual active usually refers to two independent machines (or clusters) that perform real-time mirror synchronization. Their configurations are completely independent and can simultaneously receive external writes. Each independent machine (or cluster) can synchronize the data written to itself to another machine (or cluster), and the data of the two machines (or clusters) can achieve final consistency. @@ -30,7 +30,7 @@ Dual active usually refers to two independent machines (or clusters) that perfor ![](/img/20240731104336.png) -## Note +## 2. Note 1. It is recommended to prioritize using `hostname` for IP configuration during deployment to avoid the problem of database failure caused by modifying the host IP in the later stage. To set the hostname, you need to configure `/etc/hosts` on the target server. If the local IP is 192.168.1.3 and the hostname is iotdb-1, you can use the following command to set the server's hostname and configure IoTDB's `cn_internal-address` and` dn_internal-address` using the hostname. @@ -42,7 +42,7 @@ Dual active usually refers to two independent machines (or clusters) that perfor 3. Recommend deploying a monitoring panel, which can monitor important operational indicators and keep track of database operation status at any time. The monitoring panel can be obtained by contacting the business department. The steps for deploying the monitoring panel can be referred to [Monitoring Panel Deployment](https://www.timecho.com/docs/UserGuide/latest/Deployment-and-Maintenance/Monitoring-panel-deployment.html) -## Installation Steps +## 3. 
Installation Steps Taking the dual active version IoTDB built by two single machines A and B as an example, the IP addresses of A and B are 192.168.1.3 and 192.168.1.4, respectively. Here, we use hostname to represent different hosts. The plan is as follows: @@ -51,11 +51,11 @@ Taking the dual active version IoTDB built by two single machines A and B as an | A | 192.168.1.3 | iotdb-1 | | B | 192.168.1.4 | iotdb-2 | -### Step1:Install Two Independent IoTDBs Separately +### 3.1 Install Two Independent IoTDBs Separately Install IoTDB on two machines separately, and refer to the deployment documentation for the standalone version [Stand-Alone Deployment](../Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md),The deployment document for the cluster version can be referred to [Cluster Deployment](../Deployment-and-Maintenance/Cluster-Deployment_timecho.md)。**It is recommended that the configurations of clusters A and B remain consistent to achieve the best dual active effect** -### Step2:Create A Aata Synchronization Task On Machine A To Machine B +### 3.2 Create A Aata Synchronization Task On Machine A To Machine B - Create a data synchronization process on machine A, where the data on machine A is automatically synchronized to machine B. Use the cli tool in the sbin directory to connect to the IoTDB database on machine A: @@ -79,7 +79,7 @@ Install IoTDB on two machines separately, and refer to the deployment documentat - Note: To avoid infinite data loops, it is necessary to set the parameter `source. forwarding pipe questions` on both A and B to `false`, indicating that data transmitted from another pipe will not be forwarded. -### Step3:Create A Data Synchronization Task On Machine B To Machine A +### 3.3 Create A Data Synchronization Task On Machine B To Machine A - Create a data synchronization process on machine B, where the data on machine B is automatically synchronized to machine A. Use the cli tool in the sbin directory to connect to the IoTDB database on machine B @@ -103,7 +103,7 @@ Install IoTDB on two machines separately, and refer to the deployment documentat - Note: To avoid infinite data loops, it is necessary to set the parameter `source. forwarding pipe questions` on both A and B to `false` , indicating that data transmitted from another pipe will not be forwarded. -### Step4:Validate Deployment +### 3.4 Validate Deployment After the above data synchronization process is created, the dual active cluster can be started. @@ -144,7 +144,7 @@ show pipes Ensure that every pipe is in the RUNNING state. -### Step5:Stop Dual Active Version IoTDB +### 3.5 Stop Dual Active Version IoTDB - Execute the following command on machine A: diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Environment-Requirements.md b/src/UserGuide/latest/Deployment-and-Maintenance/Environment-Requirements.md index e286154e1..72f2e5081 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Environment-Requirements.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Environment-Requirements.md @@ -20,9 +20,9 @@ --> # System Requirements -## Disk Array +## 1. Disk Array -### Configuration Suggestions +### 1.1 Configuration Suggestions IoTDB has no strict operation requirements on disk array configuration. It is recommended to use multiple disk arrays to store IoTDB data to achieve the goal of concurrent writing to multiple disk arrays. For configuration, refer to the following suggestions: @@ -35,7 +35,7 @@ IoTDB has no strict operation requirements on disk array configuration. 
It is re You are advised to mount multiple hard disks (1-6 disks). 3. When deploying IoTDB, it is recommended to avoid using network storage devices such as NAS. -### Configuration Example +### 1.2 Configuration Example - Example 1: Four 3.5-inch hard disks @@ -68,13 +68,13 @@ The recommended configurations are as follows: | data disk | RAID5 | 7 | 1 | 6 | | data disk | NoRaid | 1 | 0 | 1 | -## Operating System +## 2. Operating System -### Version Requirements +### 2.1 Version Requirements IoTDB supports operating systems such as Linux, Windows, and MacOS, while the enterprise version supports domestic CPUs such as Loongson, Phytium, and Kunpeng. It also supports domestic server operating systems such as Neokylin, KylinOS, UOS, and Linx. -### Disk Partition +### 2.2 Disk Partition - The default standard partition mode is recommended. LVM extension and hard disk encryption are not recommended. - The system disk needs only the space used by the operating system, and does not need to reserve space for the IoTDB. @@ -151,7 +151,7 @@ systemctl start sshd # Enable port 22 3. Ensure that servers are connected to each other -### Other Configuration +### 2.3 Other Configuration 1. Reduce the system swap priority to the lowest level @@ -178,7 +178,7 @@ echo "* hard nofile 65535" >> /etc/security/limits.conf # View after exiting the current terminal session, expect to display 65535 ulimit -n ``` -## Software Dependence +## 3. Software Dependence Install the Java runtime environment (Java version >= 1.8). Ensure that jdk environment variables are set. (It is recommended to deploy JDK17 for V1.3.2.2 or later. In some scenarios, the performance of JDK of earlier versions is compromised, and Datanodes cannot be stopped.) diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_apache.md b/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_apache.md index 4bf9b1e0f..e775c431f 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_apache.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_apache.md @@ -20,12 +20,12 @@ --> # Obtain IoTDB -## 1 How to obtain IoTDB +## 1. How to obtain IoTDB The installation package can be directly obtained from the Apache IoTDB official website:https://iotdb.apache.org/Download/ -## 2 Installation Package Structure +## 2. Installation Package Structure Install the package after decompression(`apache-iotdb--all-bin.zip`),After decompressing the installation package, the directory structure is as follows: diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_timecho.md index 261c8a10f..3c1742408 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/IoTDB-Package_timecho.md @@ -20,11 +20,11 @@ --> # Obtain TimechoDB -## 1 How to obtain TimechoDB +## 1. How to obtain TimechoDB The TimechoDB installation package can be obtained through product trial application or by directly contacting the Timecho team. -## 2 Installation Package Structure +## 2. 
Installation Package Structure After unpacking the installation package(`iotdb-enterprise-{version}-bin.zip`),you will see the directory structure is as follows: diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Monitoring-panel-deployment.md b/src/UserGuide/latest/Deployment-and-Maintenance/Monitoring-panel-deployment.md index 17fced6e9..ec61a2a41 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Monitoring-panel-deployment.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Monitoring-panel-deployment.md @@ -24,14 +24,14 @@ The IoTDB monitoring panel is one of the supporting tools for the IoTDB Enterpri The instructions for using the monitoring panel tool can be found in the [Instructions](../Tools-System/Monitor-Tool.md) section of the document. -## Installation Preparation +## 1. Installation Preparation 1. Installing IoTDB: You need to first install IoTDB V1.0 or above Enterprise Edition. You can contact business or technical support to obtain 2. Obtain the IoTDB monitoring panel installation package: Based on the enterprise version of IoTDB database monitoring panel, you can contact business or technical support to obtain -## Installation Steps +## 2. Installation Steps -### Step 1: IoTDB enables monitoring indicator collection +### 2.1 IoTDB enables monitoring indicator collection 1. Open the monitoring configuration item. The configuration items related to monitoring in IoTDB are disabled by default. Before deploying the monitoring panel, you need to open the relevant configuration items (note that the service needs to be restarted after enabling monitoring configuration). @@ -67,7 +67,7 @@ Taking the 3C3D cluster as an example, the monitoring configuration that needs t ![](/img/%E5%90%AF%E5%8A%A8.png) -### Step 2: Install and configure Prometheus +### 2.2 Install and configure Prometheus > Taking Prometheus installed on server 192.168.1.3 as an example. @@ -118,7 +118,7 @@ scrape_configs: ![](/img/%E8%8A%82%E7%82%B9%E7%9B%91%E6%8E%A7.png) -### Step 3: Install Grafana and configure the data source +### 2.3 Install Grafana and configure the data source > Taking Grafana installed on server 192.168.1.3 as an example. @@ -146,7 +146,7 @@ When configuring the Data Source, pay attention to the URL where Prometheus is l ![](/img/%E9%85%8D%E7%BD%AE%E6%88%90%E5%8A%9F.png) -### Step 4: Import IoTDB Grafana Dashboards +### 2.4 Import IoTDB Grafana Dashboards 1. Enter Grafana and select Dashboards: @@ -184,9 +184,9 @@ When configuring the Data Source, pay attention to the URL where Prometheus is l ![](/img/%E9%9D%A2%E6%9D%BF%E6%B1%87%E6%80%BB.png) -## Appendix, Detailed Explanation of Monitoring Indicators +## 3. Appendix, Detailed Explanation of Monitoring Indicators -### System Dashboard +### 3.1 System Dashboard This panel displays the current usage of system CPU, memory, disk, and network resources, as well as partial status of the JVM. 
@@ -272,7 +272,7 @@ Eno refers to the network card connected to the public network, while lo refers - Packet Speed:The speed at which the network card sends and receives packets, and one RPC request can correspond to one or more packets - Connection Num:The current number of socket connections for the selected process (IoTDB only has TCP) -### Performance Overview Dashboard +### 3.2 Performance Overview Dashboard #### Cluster Overview @@ -350,7 +350,7 @@ Eno refers to the network card connected to the public network, while lo refers - File Size: Node management file size situation - Log Number Per Minute: Different types of logs per minute for nodes -### ConfigNode Dashboard +### 3.3 ConfigNode Dashboard This panel displays the performance of all management nodes in the cluster, including partitioning, node information, and client connection statistics. @@ -408,7 +408,7 @@ This panel displays the performance of all management nodes in the cluster, incl - Remote / Local Write QPS: Remote and local QPS written to node Ratis - RatisConsensus Memory: Memory usage of Node Ratis consensus protocol -### DataNode Dashboard +### 3.4 DataNode Dashboard This panel displays the monitoring status of all data nodes in the cluster, including write time, query time, number of stored files, etc. diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md b/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md index 08133222a..90c524236 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_apache.md @@ -20,7 +20,7 @@ --> # Stand-Alone Deployment -## Matters Needing Attention +## 1. Matters Needing Attention 1. Before installation, ensure that the system is complete by referring to [System configuration](./Environment-Requirements.md). @@ -40,16 +40,16 @@ - Using the same user operation: Ensure that the same user is used for start, stop and other operations, and do not switch users. - Avoid using sudo: Try to avoid using sudo commands as they execute commands with root privileges, which may cause confusion or security issues. -## Installation Steps +## 2. Installation Steps -### 1、Unzip the installation package and enter the installation directory +### 2.1 Unzip the installation package and enter the installation directory ```Shell unzip apache-iotdb-{version}-all-bin.zip cd apache-iotdb-{version}-all-bin ``` -### 2、Parameter Configuration +### 2.2 Parameter Configuration #### Environment Script Configuration @@ -103,7 +103,7 @@ Open the DataNode configuration file (./conf/iotdb-system. properties file) and > ❗️Attention: Editors such as VSCode Remote do not have automatic configuration saving function. Please ensure that the modified files are saved persistently, otherwise the configuration items will not take effect -### 3、Start ConfigNode +### 2.3 Start ConfigNode Enter the sbin directory of iotdb and start confignode @@ -112,7 +112,7 @@ Enter the sbin directory of iotdb and start confignode ``` If the startup fails, please refer to [Common Questions](#common-questions). 
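Before starting the DataNode in the next step, it can help to confirm that the ConfigNode actually came up. The following is a minimal sketch, not part of the original instructions: it assumes the default ConfigNode internal port of 10710 and that `jps` reports the `ConfigNode` main class; adjust to your environment.

```Shell
# Check that the ConfigNode process is alive and its internal port is listening
jps | grep ConfigNode
netstat -an | grep 10710    # or: ss -lnt | grep 10710
```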
-### 4、Start DataNode +### 2.4 Start DataNode Enter the sbin directory of iotdb and start datanode: @@ -121,7 +121,7 @@ cd sbin ./start-datanode.sh -d # The "-d" parameter starts the process in the background ``` -### 5、Verify Deployment +### 2.5 Verify Deployment You can directly run the Cli startup script in the sbin directory: @@ -141,7 +141,7 @@ When the status is all running, it indicates that the service has started succes > The appearance of 'Activated (W)' indicates passive activation, meaning that this ConfigNode does not have a license file (or has not been issued the latest license file with a timestamp). At this point, it is recommended to check whether the license file has been placed in the license folder. If not, please place the license file. If a license file already exists, the license file of this node may be inconsistent with the information of other nodes; please contact Timecho staff to reapply. -## Common Questions +## 3. Common Questions 1. Confignode failed to start diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md index 9a95c038f..4a11206a1 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/Stand-Alone-Deployment_timecho.md @@ -22,7 +22,7 @@ This guide introduces how to set up a standalone TimechoDB instance, which includes one ConfigNode and one DataNode (commonly referred to as 1C1D). -## Prerequisites +## 1. Prerequisites 1. [System configuration](./Environment-Requirements.md): Ensure the system has been configured according to the preparation guidelines. @@ -46,9 +46,9 @@ This guide introduces how to set up a standalone TimechoDB instance, which inclu 6. **Monitoring Panel**: Deploy a monitoring panel to track key performance metrics. Contact the Timecho team for access and refer to the "[Monitoring Board Install and Deploy](./Monitoring-panel-deployment.md)" guide. -## Installation Steps +## 2. Installation Steps -### 1、Extract Installation Package +### 2.1 Extract Installation Package Unzip the installation package and navigate to the directory: @@ -57,7 +57,7 @@ unzip timechodb-{version}-bin.zip cd timechodb-{version}-bin ``` -### 2、Parameter Configuration +### 2.2 Parameter Configuration #### Memory Configuration @@ -104,7 +104,7 @@ Set the following parameters in `conf/iotdb-system.properties`. Refer to `conf/i | dn_schema_region_consensus_port | Port used for metadata replica consensus protocol communication | 10760 | 10760 | This parameter cannot be modified after the first startup. | | dn_seed_config_node | Address of the ConfigNode for registering and joining the cluster. (e.g.,`cn_internal_address:cn_internal_port`) | 127.0.0.1:10710 | Use `cn_internal_address:cn_internal_port` | This parameter cannot be modified after the first startup. | -### 3、Start ConfigNode +### 2.3 Start ConfigNode Navigate to the `sbin` directory and start ConfigNode: @@ -114,7 +114,7 @@ Navigate to the `sbin` directory and start ConfigNode: If the startup fails, refer to the [**Common Problem**](#Common Problem) section below for troubleshooting. -### 4、Start DataNode +### 2.4 Start DataNode Navigate to the `sbin` directory of IoTDB and start the DataNode: @@ -122,7 +122,7 @@ Navigate to the `sbin` directory of IoTDB and start the DataNode: ./sbin/start-datanode.sh -d # The "-d" flag starts the process in the background.
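# (Illustrative check, not part of the original steps.) Once the DataNode is up, you can
# confirm that it has registered before moving on to activation, for example:
# ./sbin/start-cli.sh -h 127.0.0.1 -p 6667 -e "show cluster"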
```` -### 5、Activate Database +### 2.5 Activate Database #### Option 1: File-Based Activation @@ -181,13 +181,13 @@ It costs 0.030s IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===' ``` -### 6、Verify Activation +### 2.6 Verify Activation Check the `ClusterActivationStatus` field. If it shows `ACTIVATED`, the database has been successfully activated. ![](/img/%E5%8D%95%E6%9C%BA-%E9%AA%8C%E8%AF%81.png) -## Common Problem +## 3. Common Problem 1. Activation Fails Repeatedly 1. Use the `ls -al` command to verify that the ownership of the installation directory matches the current user. @@ -229,15 +229,15 @@ cd /data/iotdb rm -rf data logs ``` -## Appendix +## 4. Appendix -### ConfigNode Parameters +### 4.1 ConfigNode Parameters | Parameter | Description | **Is it required** | | :-------- | :---------------------------------------------------------- | :----------------- | | -d | Starts the process in daemon mode (runs in the background). | No | -### DataNode Parameters +### 4.2 DataNode Parameters | Parameter | Description | Required | | :-------- | :----------------------------------------------------------- | :------- | diff --git a/src/UserGuide/latest/Deployment-and-Maintenance/workbench-deployment_timecho.md b/src/UserGuide/latest/Deployment-and-Maintenance/workbench-deployment_timecho.md index f26ef9229..335ffc4c2 100644 --- a/src/UserGuide/latest/Deployment-and-Maintenance/workbench-deployment_timecho.md +++ b/src/UserGuide/latest/Deployment-and-Maintenance/workbench-deployment_timecho.md @@ -29,7 +29,7 @@ The visualization console is one of the supporting tools for IoTDB (similar to N The instructions for using the visualization console tool can be found in the [Instructions](../Tools-System/Monitor-Tool.md) section of the document. -## Installation Preparation +## 1. Installation Preparation | Preparation Content | Name | Version Requirements | Link | | :----------------------: | :-------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | @@ -39,9 +39,9 @@ The instructions for using the visualization console tool can be found in the [I | Database | IoTDB | Requires V1.2.0 Enterprise Edition and above | You can contact business or technical support to obtain | | Console | IoTDB-Workbench-`` | - | You can choose according to the appendix version comparison table and contact business or technical support to obtain it | -## Installation Steps +## 2. Installation Steps -### Step 1: IoTDB enables monitoring indicator collection +### 2.1 IoTDB enables monitoring indicator collection 1. Open the monitoring configuration item. The configuration items related to monitoring in IoTDB are disabled by default. Before deploying the monitoring panel, you need to open the relevant configuration items (note that the service needs to be restarted after enabling monitoring configuration). @@ -111,7 +111,7 @@ The instructions for using the visualization console tool can be found in the [I ![](/img/%E5%90%AF%E5%8A%A8.png) -### Step 2: Install and configure Prometheus +### 2.2 Install and configure Prometheus 1. Download the Prometheus installation package, which requires installation of V2.30.3 and above. 
You can go to the Prometheus official website to download it (https://prometheus.io/docs/introduction/first_steps/) 2. Unzip the installation package and enter the unzipped folder: @@ -157,7 +157,7 @@ The instructions for using the visualization console tool can be found in the [I -### Step 3: Install Workbench +### 2.3 Install Workbench 1. Enter the config directory of iotdb Workbench -`` @@ -190,7 +190,7 @@ The instructions for using the visualization console tool can be found in the [I ![](/img/workbench-en.png) -### Step 4: Configure Instance Information +### 2.4 Configure Instance Information 1. Configure instance information: You only need to fill in the following information to connect to the instance @@ -210,7 +210,7 @@ The instructions for using the visualization console tool can be found in the [I ![](/img/workbench-en-2.png) -## Appendix: IoTDB and Workbench Version Comparison Table +## 3. Appendix: IoTDB and Workbench Version Comparison Table | Workbench Version Number | Release Note | Supports IoTDB Versions | | :------------------------: | :------------------------------------------------------------: | :-------------------------: | diff --git a/src/UserGuide/latest/FAQ/Frequently-asked-questions.md b/src/UserGuide/latest/FAQ/Frequently-asked-questions.md index de789a04c..abb80ab95 100644 --- a/src/UserGuide/latest/FAQ/Frequently-asked-questions.md +++ b/src/UserGuide/latest/FAQ/Frequently-asked-questions.md @@ -21,9 +21,9 @@ # Frequently Asked Questions -## General FAQ +## 1. General FAQ -### How can I identify my version of IoTDB? +### 1.1 How can I identify my version of IoTDB? There are several ways to identify the version of IoTDB that you are using: @@ -65,7 +65,7 @@ Total line number = 1 It costs 0.241s ``` -### Where can I find IoTDB logs? +### 1.2 Where can I find IoTDB logs? Suppose your root directory is: @@ -87,11 +87,11 @@ Let `$IOTDB_CLI_HOME = /workspace/iotdb/cli/target/iotdb-cli-{project.version}` By default settings, the logs are stored under ```IOTDB_HOME/logs```. You can change log level and storage path by configuring ```logback.xml``` under ```IOTDB_HOME/conf```. -### Where can I find IoTDB data files? +### 1.3 Where can I find IoTDB data files? By default settings, the data files (including tsfile, metadata, and WAL files) are stored under ```IOTDB_HOME/data/datanode```. -### How do I know how many time series are stored in IoTDB? +### 1.4 How do I know how many time series are stored in IoTDB? Use IoTDB's Command Line Interface: @@ -114,15 +114,15 @@ If you are using Linux, you can use the following shell command: > 6 ``` -### Can I use Hadoop and Spark to read TsFile in IoTDB? +### 1.5 Can I use Hadoop and Spark to read TsFile in IoTDB? Yes. IoTDB has intense integration with Open Source Ecosystem. IoTDB supports [Hadoop](https://github.com/apache/iotdb-extras/tree/master/connectors/hadoop), [Spark](https://github.com/apache/iotdb-extras/tree/master/connectors/spark-iotdb-connector) and [Grafana](https://github.com/apache/iotdb-extras/tree/master/connectors/grafana-connector) visualization tool. -### How does IoTDB handle duplicate points? +### 1.6 How does IoTDB handle duplicate points? A data point is uniquely identified by a full time series path (e.g. ```root.vehicle.d0.s0```) and timestamp. If you submit a new point with the same path and timestamp as an existing point, IoTDB updates the value of this point instead of inserting a new point. -### How can I tell what type of the specific timeseries? 
+### 1.7 How can I tell what type of the specific timeseries? Use ```SHOW TIMESERIES ``` SQL in IoTDB's Command Line Interface: @@ -144,7 +144,7 @@ Otherwise, you can also use wildcard in timeseries path: IoTDB> show timeseries root.fit.d1.* ``` -### How can I change IoTDB's Cli time display format? +### 1.8 How can I change IoTDB's Cli time display format? The default IoTDB's Cli time display format is readable (e.g. ```1970-01-01T08:00:00.001```), if you want to display time in timestamp type or other readable format, add parameter ```-disableISO8601``` in start command: @@ -152,12 +152,12 @@ The default IoTDB's Cli time display format is readable (e.g. ```1970-01-01T08:0 > $IOTDB_CLI_HOME/sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root -disableISO8601 ``` -### How to handle error `IndexOutOfBoundsException` from `org.apache.ratis.grpc.server.GrpcLogAppender`? +### 1.9 How to handle error `IndexOutOfBoundsException` from `org.apache.ratis.grpc.server.GrpcLogAppender`? This is an internal error log from Ratis 2.4.1, our dependency, and no impact on data writes or reads is expected. It has been reported to the Ratis community and will be fixed in the future releases. -### How to deal with estimated out of memory errors? +### 1.10 How to deal with estimated out of memory errors? Report an error message: ``` @@ -179,9 +179,9 @@ Some possible improvement items: It is an internal error introduced by Ratis 2.4.1 dependency, and we can safely ignore this exception as it will not affect normal operations. We will fix this message in the incoming releases. -## FAQ for Cluster Setup +## 2. FAQ for Cluster Setup -### Cluster StartUp and Stop +### 2.1 Cluster StartUp and Stop #### Failed to start ConfigNode for the first time, how to find the reason? @@ -222,7 +222,7 @@ not affect normal operations. We will fix this message in the incoming releases. - The default RPC address of 0.13 is `0.0.0.0`, but the default RPC address of 1.0 is `127.0.0.1`. -### Cluster Restart +### 2.2 Cluster Restart #### How to restart any ConfigNode in the cluster? @@ -246,7 +246,7 @@ not affect normal operations. We will fix this message in the incoming releases. - Can't. The running result will be "The port is already occupied". -### Cluster Maintenance +### 2.3 Cluster Maintenance #### How to find the reason when Show cluster failed, and error logs like "please check server status" are shown? diff --git a/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_apache.md b/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_apache.md index 8c93bd738..71f394c8f 100644 --- a/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_apache.md +++ b/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_apache.md @@ -30,7 +30,7 @@ Apache IoTDB is a low-cost, high-performance native temporal database for the In - Installation, deployment, and usage documentation: [QuickStart](../QuickStart/QuickStart_apache.md) -## Product Components +## 1. Product Components IoTDB products consist of several components that help users efficiently manage and analyze the massive amount of time-series data generated by the IoT. @@ -46,7 +46,7 @@ IoTDB products consist of several components that help users efficiently manage 3. Time-series Model Training and Inference Integrated Engine (IoTDB AINode): For intelligent analysis scenarios, IoTDB provides the AINode time-series model training and inference integrated engine, which offers a complete set of time-series data analysis tools. 
The underlying engine supports model training tasks and data management, including machine learning and deep learning. With these tools, users can conduct in-depth analysis of the data stored in IoTDB and extract its value. -## Product Features +## 2. Product Features TimechoDB has the following advantages and characteristics: @@ -66,7 +66,7 @@ TimechoDB has the following advantages and characteristics: - Rich ecological environment docking: Supports docking with big data ecosystem components such as Hadoop, Spark, and supports equipment management and visualization tools such as Grafana, Thingsboard, DataEase. -## Commercial version +## 3. Commercial version Timecho provides the original commercial product TimechoDB based on the open source version of Apache IoTDB, providing enterprise level products and services for enterprises and commercial customers. It can solve various problems encountered by enterprises when building IoT big data platforms to manage time-series data, such as complex application scenarios, large data volumes, high sampling frequencies, high amount of unaligned data, long data processing time, diverse analysis requirements, and high storage and operation costs. diff --git a/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_timecho.md b/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_timecho.md index d798b4e63..7c03f8a40 100644 --- a/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_timecho.md +++ b/src/UserGuide/latest/IoTDB-Introduction/IoTDB-Introduction_timecho.md @@ -28,7 +28,7 @@ Timecho provides a more diverse range of product features, stronger performance - Download 、Deployment and Usage:[QuickStart](../QuickStart/QuickStart_timecho.md) -## Product Components +## 1. Product Components Timecho products is composed of several components, covering the entire time-series data lifecycle from data collection, data management to data analysis & application, helping users efficiently manage and analyze the massive amount of time-series data generated by the IoT. @@ -45,7 +45,7 @@ Timecho products is composed of several components, covering the entire time-ser 4. **Data collection**: To more conveniently dock with various industrial collection scenarios, Timecho provides data collection access services, supporting multiple protocols and formats, which can access data generated by various sensors and devices, while also supporting features such as breakpoint resumption and network barrier penetration. It is more adapted to the characteristics of difficult configuration, slow transmission, and weak network in the industrial field collection process, making the user's data collection simpler and more efficient. -## Product Features +## 2. Product Features TimechoDB has the following advantages and characteristics: @@ -65,9 +65,9 @@ TimechoDB has the following advantages and characteristics: - Rich ecological environment docking: Supports docking with big data ecosystem components such as Hadoop, Spark, and supports equipment management and visualization tools such as Grafana, Thingsboard, DataEase. -## Enterprise characteristics +## 3. Enterprise characteristics -### Higher level product features +### 3.1 Higher level product features Building on the open-source version, TimechoDB offers a range of advanced product features, with native upgrades and optimizations at the kernel level for industrial production scenarios. 
These include multi-level storage, cloud-edge collaboration, visualization tools, and security enhancements, allowing users to focus more on business development without worrying too much about underlying logic. This simplifies and enhances industrial production, bringing more economic benefits to enterprises. For example: @@ -211,11 +211,11 @@ The detailed functional comparison is as follows:
-### More efficient/stable product performance +### 3.2 More efficient/stable product performance TimechoDB further optimizes stability and performance on the basis of the open-source version. With enterprise-grade technical support, it can deliver more than 10x performance improvement and recover from faults in a timely manner. -### More User-Friendly Tool System +### 3.3 More User-Friendly Tool System TimechoDB will provide users with a simpler and more user-friendly tool system. Through products such as the Cluster Monitoring Panel (Grafana), Database Console (Workbench), and Cluster Management Tool (Deploy Tool, abbreviated as IoTD), it will help users quickly deploy, manage, and monitor database clusters, reduce the work/learning costs of operation and maintenance personnel, simplify database operation and maintenance work, and make the operation and maintenance process more convenient and efficient. @@ -256,12 +256,12 @@ TimechoDB will provide users with a simpler and more user-friendly tool system.  -### More professional enterprise technical services +### 3.4 More professional enterprise technical services TimechoDB provides customers with comprehensive original-factory services, including but not limited to on-site installation and training, expert consulting, on-site emergency assistance, software upgrades, online self-service, remote support, and guidance on using the latest development version. At the same time, to make TimechoDB more suitable for industrial production scenarios, we recommend modeling solutions, optimize read-write performance and compression ratios, recommend database configurations, and provide other technical support based on the actual data structures and read-write loads of the enterprise. For industrial customization scenarios that the standard products do not cover, TimechoDB also provides customized development based on user requirements. Compared with the open-source version, TimechoDB has a faster release cadence, with a new release every 2-3 months. It also offers day-level dedicated fixes for urgent customer issues to keep production environments stable. -### More compatible localization adaptation +### 3.5 More compatible localization adaptation The TimechoDB code is fully self-developed and controllable, is compatible with most mainstream domestic platforms (CPUs, operating systems, etc.), and has completed compatibility certification with multiple manufacturers to ensure product compliance and security. \ No newline at end of file diff --git a/src/UserGuide/latest/IoTDB-Introduction/Scenario.md b/src/UserGuide/latest/IoTDB-Introduction/Scenario.md index b295af566..6a84569aa 100644 --- a/src/UserGuide/latest/IoTDB-Introduction/Scenario.md +++ b/src/UserGuide/latest/IoTDB-Introduction/Scenario.md @@ -21,9 +21,9 @@ # Scenario -## Application 1: Internet of Vehicles +## 1. Internet of Vehicles -### Background +### 1.1 Background > - Challenge: a large number of vehicles and time series @@ -31,7 +31,7 @@ A car company has a huge business volume and needs to deal with a large number o In the original architecture, the HBase cluster was used as the storage database. The query delay was high, and the system maintenance was difficult and costly. The HBase cluster could not meet the demand. In contrast, IoTDB supports high-frequency data writing with millions of measurement points and millisecond-level query response speed.
The efficient data processing capability allows users to obtain the required data quickly and accurately. Therefore, IoTDB is chosen as the data storage layer, which has a lightweight architecture, reduces operation and maintenance costs, and supports elastic expansion and contraction and high availability to ensure system stability and availability. -### Architecture +### 1.2 Architecture The data management architecture of the car company using IoTDB as the time-series data storage engine is shown in the figure below. @@ -40,9 +40,9 @@ The data management architecture of the car company using IoTDB as the time-seri The vehicle data is encoded based on TCP and industrial protocols and sent to the edge gateway, and the gateway sends the data to the message queue Kafka cluster, decoupling the two ends of production and consumption. Kafka sends data to Flink for real-time processing, and the processed data is written into IoTDB. Both historical data and latest data are queried in IoTDB, and finally the data flows into the visualization platform through API for application. -## Application 2: Intelligent Operation and Maintenance +## 2. Intelligent Operation and Maintenance -### Background +### 2.1 Background A steel factory aims to build a low-cost, large-scale access-capable remote intelligent operation and maintenance software and hardware platform, access hundreds of production lines, more than one million devices, and tens of millions of time series, to achieve remote coverage of intelligent operation and maintenance. @@ -55,30 +55,30 @@ There are many challenges in this process: After selecting IoTDB as the storage database of the intelligent operation and maintenance platform, it can stably write multi-frequency and high-frequency acquisition data, covering the entire steel process, and use a composite compression algorithm to reduce the data size by more than 10 times, saving costs. IoTDB also effectively supports downsampling query of historical data of more than 10 years, helping enterprises to mine data trends and assist enterprises in long-term strategic analysis. -### Architecture +### 2.2 Architecture The figure below shows the architecture design of the intelligent operation and maintenance platform of the steel plant. ![img](/img/architecture2.jpg) -## Application 3: Smart Factory +## 3. Smart Factory -### Background +### 3.1 Background > - Challenge:Cloud-edge collaboration A cigarette factory hopes to upgrade from a "traditional factory" to a "high-end factory". It uses the Internet of Things and equipment monitoring technology to strengthen information management and services to realize the free flow of data within the enterprise and to help improve productivity and lower operating costs. -### Architecture +### 3.2 Architecture The figure below shows the factory's IoT system architecture. IoTDB runs through the three-level IoT platform of the company, factory, and workshop to realize unified joint debugging and joint control of equipment. The data at the workshop level is collected, processed and stored in real time through the IoTDB at the edge layer, and a series of analysis tasks are realized. The preprocessed data is sent to the IoTDB at the platform layer for data governance at the business level, such as device management, connection management, and service support. Eventually, the data will be integrated into the IoTDB at the group level for comprehensive analysis and decision-making across the organization. 
![img](/img/architecture3.jpg) -## Application 4: Condition monitoring +## 4. Condition monitoring -### Background +### 4.1 Background > - Challenge: Smart heating, cost reduction and efficiency increase @@ -86,7 +86,7 @@ A power plant needs to monitor tens of thousands of measuring points of main and After using IoTDB as the storage and analysis engine, combined with meteorological data, building control data, household control data, heat exchange station data, official website data, heat source side data, etc., all data are time-aligned in IoTDB to provide reliable data basis to realize smart heating. At the same time, it also solves the problem of monitoring the working conditions of various important components in the relevant heating process, such as on-demand billing and pipe network, heating station, etc., to reduce manpower input. -### Architecture +### 4.2 Architecture The figure below shows the data management architecture of the power plant in the heating scene. diff --git a/src/UserGuide/latest/QuickStart/QuickStart_apache.md b/src/UserGuide/latest/QuickStart/QuickStart_apache.md index bfb8e98af..b9fe6b362 100644 --- a/src/UserGuide/latest/QuickStart/QuickStart_apache.md +++ b/src/UserGuide/latest/QuickStart/QuickStart_apache.md @@ -24,7 +24,7 @@ This document will guide you through methods to get started quickly with IoTDB. -## How to Install and Deploy? +## 1. How to Install and Deploy? This guide will assist you in quickly installing and deploying IoTDB. You can quickly navigate to the content you need to review through the following document links: @@ -43,7 +43,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu > ❗️Note: We currently still recommend direct installation and deployment on physical/virtual machines. For Docker deployment, please refer to [Docker Deployment](../Deployment-and-Maintenance/Docker-Deployment_apache.md) -## How to Use IoTDB? +## 2. How to Use IoTDB? 1. Database Modeling Design: Database modeling is a crucial step in creating a database system, involving the design of data structures and relationships to ensure that the organization of data meets the needs of specific applications. The following documents will help you quickly understand IoTDB's modeling design: @@ -67,7 +67,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu 5. API: IoTDB provides multiple application programming interfaces (API) for developers to interact with IoTDB in their applications, and currently supports [Java Native API](../API/Programming-Java-Native-API.md)、[Python Native API](../API/Programming-Python-Native-API.md)、[C++ Native API](../API/Programming-Cpp-Native-API.md) ,For more API, please refer to the official website 【API】 and other chapters -## What other convenient tools are available? +## 3. What other convenient tools are available? In addition to its rich features, IoTDB also has a comprehensive range of tools in its surrounding system. This document will help you quickly use the peripheral tool system : @@ -77,7 +77,7 @@ In addition to its rich features, IoTDB also has a comprehensive range of tools - Data Export Script: For different scenarios, IoTDB provides users with multiple ways to batch export data. For specific usage instructions, please refer to: [Data Export](../Tools-System/Data-Export-Tool.md) -## Want to Learn More About the Technical Details? +## 4. Want to Learn More About the Technical Details? 
If you are interested in delving deeper into the technical aspects of IoTDB, you can refer to the following documents: @@ -87,6 +87,6 @@ If you are interested in delving deeper into the technical aspects of IoTDB, you - Data Partitioning and Load Balancing: IoTDB has meticulously designed data partitioning strategies and load balancing algorithms based on the characteristics of time series data, enhancing the availability and performance of the cluster. For more information, please refer to: [Data Partitioning and Load Balancing](../Technical-Insider/Cluster-data-partitioning.md) -## Encountering problems during use? +## 5. Encountering problems during use? If you encounter difficulties during installation or use, you can refer to the [Frequently Asked Questions](../FAQ/Frequently-asked-questions.md) for help. diff --git a/src/UserGuide/latest/QuickStart/QuickStart_timecho.md b/src/UserGuide/latest/QuickStart/QuickStart_timecho.md index 3728903a5..d0feadb25 100644 --- a/src/UserGuide/latest/QuickStart/QuickStart_timecho.md +++ b/src/UserGuide/latest/QuickStart/QuickStart_timecho.md @@ -24,7 +24,7 @@ This document will guide you through methods to get started quickly with IoTDB. -## How to Install and Deploy? +## 1. How to Install and Deploy? This guide will assist you in quickly installing and deploying IoTDB. You can quickly navigate to the content you need to review through the following document links: @@ -50,7 +50,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu - Workbench: the visual console of IoTDB. Through its interactive interface it provides metadata management, data querying, data visualization, and other functions, helping users work with the database easily and efficiently. For installation steps, see [Workbench Deployment](../Deployment-and-Maintenance/workbench-deployment_timecho.md) -## How to Use IoTDB? +## 2. How to Use IoTDB? 1. Database Modeling Design: Database modeling is a crucial step in creating a database system, involving the design of data structures and relationships to ensure that the organization of data meets the needs of specific applications. The following documents will help you quickly understand IoTDB's modeling design: @@ -78,7 +78,7 @@ This guide will assist you in quickly installing and deploying IoTDB. You can qu 5. API: IoTDB provides multiple application programming interfaces (APIs) for developers to interact with IoTDB in their applications, and currently supports [Java Native API](../API/Programming-Java-Native-API.md), [Python Native API](../API/Programming-Python-Native-API.md), [C++ Native API](../API/Programming-Cpp-Native-API.md), and [Go Native API](../API/Programming-Go-Native-API.md). For more APIs, please refer to the [API] chapters on the official website. A minimal end-to-end example is sketched below.
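As a concrete taste of those basic operations, a minimal write-and-query session in the CLI could look like the following. This is an illustrative sketch using the tree-model SQL covered in the Basic Concept chapters; the database and time series names are examples only.

```SQL
-- Create a database and a time series, write one data point, then read it back
create database root.demo;
create timeseries root.demo.device1.temperature with datatype=FLOAT, encoding=GORILLA;
insert into root.demo.device1(timestamp, temperature) values (1, 26.5);
select temperature from root.demo.device1;
```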
-## What other convenient tools are available? +## 3. What other convenient tools are available? In addition to its rich features, IoTDB also has a comprehensive range of tools in its surrounding system. This section will help you get started with these peripheral tools: @@ -93,7 +93,7 @@ In addition to its rich features, IoTDB also has a comprehensive range of tools - Data Export Script: For different scenarios, IoTDB provides users with multiple ways to batch export data. For specific usage instructions, please refer to: [Data Export](../Tools-System/Data-Export-Tool.md) -## Want to Learn More About the Technical Details? +## 4. Want to Learn More About the Technical Details? If you are interested in delving deeper into the technical aspects of IoTDB, you can refer to the following documents: @@ -103,6 +103,6 @@ If you are interested in delving deeper into the technical aspects of IoTDB, you - Data Partitioning and Load Balancing: IoTDB has meticulously designed data partitioning strategies and load balancing algorithms based on the characteristics of time series data, enhancing the availability and performance of the cluster. For more information, please refer to: [Data Partitioning & Load Balancing](../Technical-Insider/Cluster-data-partitioning.md) -## Encountering problems during use? +## 5. Encountering problems during use? If you encounter difficulties during installation or use, you can refer to the [Frequently Asked Questions](../FAQ/Frequently-asked-questions.md) for help. \ No newline at end of file diff --git a/src/UserGuide/latest/Technical-Insider/Cluster-data-partitioning.md b/src/UserGuide/latest/Technical-Insider/Cluster-data-partitioning.md index 479f95527..11ee2be4e 100644 --- a/src/UserGuide/latest/Technical-Insider/Cluster-data-partitioning.md +++ b/src/UserGuide/latest/Technical-Insider/Cluster-data-partitioning.md @@ -22,10 +22,10 @@ # Load Balance This document introduces the partitioning strategies and load balance strategies in IoTDB. According to the characteristics of time series data, IoTDB partitions them by series and time dimensions. Combining a series partition with a time partition creates a partition, the unit of division. To enhance throughput and reduce management costs, these partitions are evenly allocated to RegionGroups, which serve as the unit of replication. The RegionGroup's Regions then determine the storage location, with the leader Region managing the primary load. During this process, the Region placement strategy determines which nodes will host the replicas, while the leader selection strategy designates which Region will act as the leader. -## Partitioning Strategy & Partition Allocation +## 1. Partitioning Strategy & Partition Allocation IoTDB implements tailored partitioning algorithms for time series data. Building on this foundation, the partition information cached on both ConfigNodes and DataNodes is not only manageable in size but also clearly differentiated between hot and cold. Subsequently, balanced partitions are evenly allocated across the cluster's RegionGroups to achieve storage balance. -### Partitioning Strategy +### 1.1 Partitioning Strategy IoTDB maps each sensor in the production environment to a time series. The time series are then partitioned using the series partitioning algorithm to manage their schema, and combined with the time partitioning algorithm to manage their data. The following figure illustrates how IoTDB partitions time series data. @@ -53,7 +53,7 @@ Since the series partitioning algorithm evenly partitions the time series, each #### Data Partitioning Combining a series partition with a time partition creates a data partition. Since the series partitioning algorithm evenly partitions the time series, the load of data partitions within a specified time partition remains balanced. These data partitions are then evenly allocated across the DataRegionGroups to achieve balanced data distribution.
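To see how these schema and data partitions are actually laid out on a running cluster, the CLI provides inspection statements; a minimal sketch follows (the exact output columns depend on the IoTDB version):

```SQL
-- List the cluster nodes, then the Regions with their RegionGroups, roles, and host DataNodes
show cluster;
show regions;
```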
-### Partition Allocation +### 1.2 Partition Allocation IoTDB uses RegionGroups to enable elastic storage of time series, with the number of RegionGroups in the cluster determined by the total resources available across all DataNodes. Since the number of RegionGroups is dynamic, IoTDB can easily scale out. Both the SchemaRegionGroup and DataRegionGroup follow the same partition allocation algorithm, which evenly splits all series partitions. The following figure demonstrates the partition allocation process, where the dynamic RegionGroups keep pace with the growing time series and the expanding cluster. @@ -70,10 +70,10 @@ Both the SchemaRegionGroup and the DataRegionGroup follow the same allocation al Notably, IoTDB effectively leverages the characteristics of time series data. When the TTL (Time to Live) is configured, IoTDB enables migration-free elastic storage for time series data. This feature facilitates cluster expansion while minimizing the impact on online operations. The figures above illustrate an instance of this feature: newly created data partitions are evenly allocated to each DataRegion, and expired data are automatically archived. As a result, the cluster's storage will eventually remain balanced. -## Balance Strategy +## 2. Balance Strategy To enhance the cluster's availability and performance, IoTDB employs sophisticated storage load and computing load balance algorithms. -### Storage Load Balance +### 2.1 Storage Load Balance The number of Regions held by a DataNode reflects its storage load. If the difference in the number of Regions across DataNodes is relatively large, the DataNode with more Regions is likely to become a storage bottleneck. Although a straightforward Round Robin placement algorithm can achieve storage balance by ensuring that each DataNode hosts an equal number of Regions, it compromises the cluster's fault tolerance, as illustrated below: @@ -88,7 +88,7 @@ In this scenario, if DataNode $n_2$ fails, the load previously handled by DataNo To address this issue, IoTDB employs a Region placement algorithm that not only evenly distributes Regions across all DataNodes but also ensures that each DataNode can offload its storage to enough other DataNodes in the event of a failure. As a result, the cluster achieves balanced storage distribution and a high level of fault tolerance, ensuring its availability. -### Computing Load Balance +### 2.2 Computing Load Balance The number of leader Regions held by a DataNode reflects its computing load. If the difference in the number of leaders across DataNodes is relatively large, the DataNode with more leaders is likely to become a computing bottleneck. If the leader selection process is conducted with a straightforward Greedy algorithm, the result may be an unbalanced leader distribution when the Regions are fault-tolerantly placed, as demonstrated below: @@ -103,7 +103,7 @@ Please note that all the above steps strictly follow the Greedy algorithm. Howev To address this issue, IoTDB employs a leader selection algorithm that can consistently balance the cluster's leader distribution. Consequently, the cluster achieves balanced computing load distribution, ensuring its performance. -## Source Code +## 3. 
Source Code + [Data Partitioning](https://github.com/apache/iotdb/tree/master/iotdb-core/node-commons/src/main/java/org/apache/iotdb/commons/partition) + [Partition Allocation](https://github.com/apache/iotdb/tree/master/iotdb-core/confignode/src/main/java/org/apache/iotdb/confignode/manager/load/balancer/partition) + [Region Placement](https://github.com/apache/iotdb/tree/master/iotdb-core/confignode/src/main/java/org/apache/iotdb/confignode/manager/load/balancer/region) diff --git a/src/UserGuide/latest/Technical-Insider/Encoding-and-Compression.md b/src/UserGuide/latest/Technical-Insider/Encoding-and-Compression.md index d987c47a2..632f29674 100644 --- a/src/UserGuide/latest/Technical-Insider/Encoding-and-Compression.md +++ b/src/UserGuide/latest/Technical-Insider/Encoding-and-Compression.md @@ -22,7 +22,7 @@ # Encoding and Compression -## Encoding Methods +## 1. Encoding Methods To improve the efficiency of data storage, it is necessary to encode data during data writing, thereby reducing the amount of disk space used. In the process of writing and reading data, the amount of data involved in the I/O operations can be reduced to improve performance. IoTDB supports the following encoding methods for different data types: @@ -72,7 +72,7 @@ To improve the efficiency of data storage, it is necessary to encode data during RLBE is a lossless encoding that combines the ideas of differential encoding, bit-packing encoding, run-length encoding, Fibonacci encoding and concatenation. RLBE encoding is suitable for time series whose values increase with small increments, and is not suitable for time series with large fluctuations. -### Correspondence between data type and encoding +### 1.1 Correspondence between data type and encoding The five encodings described in the previous sections are applicable to different data types. If the correspondence is wrong, the time series cannot be created correctly. @@ -99,11 +99,11 @@ As shown below, the second-order difference encoding does not support the Boolea IoTDB> create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF Msg: 507: encoding TS_2DIFF does not support BOOLEAN ``` -## Compression +## 2. Compression When the time series is written and encoded as binary data according to the specified type, IoTDB compresses the data to further improve storage efficiency. Although both encoding and compression are designed to improve storage efficiency, encoding techniques are usually available only for specific data types (e.g., second-order difference encoding is only suitable for the INT32 or INT64 data types, and storing floating-point numbers requires multiplying them by $10^m$ to convert them to integers), after which the data is converted to a binary stream. A compression method (e.g., SNAPPY) then compresses the binary stream, so the use of compression is no longer limited by the data type. -### Basic Compression Methods +### 2.1 Basic Compression Methods IoTDB allows you to specify the compression method of the column when creating a time series, and supports the following compression methods: @@ -121,7 +121,7 @@ IoTDB allows you to specify the compression method of the column when creating a The specified syntax for compression is detailed in [Create Timeseries Statement](../SQL-Manual/SQL-Manual.md). -### Compression Ratio Statistics +### 2.2 Compression Ratio Statistics Compression ratio statistics file: data/datanode/system/compression_ratio
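Tying together the encoding methods in section 1 and the compressors in section 2.1, both can be declared when a time series is created. A minimal sketch with illustrative paths (the syntax itself is documented in the Create Timeseries statement referenced above):

```SQL
-- A float series with GORILLA encoding and LZ4 compression,
-- and a double series with second-order difference encoding and SNAPPY compression
create timeseries root.ln.wf02.wt02.temperature with datatype=FLOAT, encoding=GORILLA, compressor=LZ4;
create timeseries root.ln.wf02.wt02.pressure with datatype=DOUBLE, encoding=TS_2DIFF, compressor=SNAPPY;
```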