diff --git a/build.gradle b/build.gradle
index ca23fbe..752fb3c 100644
--- a/build.gradle
+++ b/build.gradle
@@ -47,9 +47,6 @@ dependencies {
implementation 'org.springframework.boot:spring-boot-starter-web'
- implementation 'net.logstash.logback:logstash-logback-encoder:8.0'
- implementation 'ch.qos.logback:logback-classic:1.5.6'
-
runtimeOnly 'org.hsqldb:hsqldb'
testImplementation('org.springframework.boot:spring-boot-starter-test') {
diff --git a/doc/modules/application-logging-guide/examples/docker-compose.yml b/doc/modules/application-logging-guide/examples/docker-compose.yml
deleted file mode 120000
index b4d69e1..0000000
--- a/doc/modules/application-logging-guide/examples/docker-compose.yml
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docker-compose.yml
\ No newline at end of file
diff --git a/doc/modules/application-logging-guide/examples/logstash.conf b/doc/modules/application-logging-guide/examples/logstash.conf
deleted file mode 120000
index 159bbe4..0000000
--- a/doc/modules/application-logging-guide/examples/logstash.conf
+++ /dev/null
@@ -1 +0,0 @@
-../../../../logstash.conf
\ No newline at end of file
diff --git a/doc/modules/application-logging-guide/pages/index.adoc b/doc/modules/application-logging-guide/pages/index.adoc
index 1c270f0..5dacb57 100644
--- a/doc/modules/application-logging-guide/pages/index.adoc
+++ b/doc/modules/application-logging-guide/pages/index.adoc
@@ -29,13 +29,7 @@ git checkout {page-origin-branch}
[[what-we-are-going-to-build]]
== What We are Going to Build
-In this guide, we extend the https://github.com/jmix-framework/jmix-petclinic-2[Jmix Petclinic^] application by configuring custom log outputs, adjusting log levels, and adding context-sensitive MDC values to ensure that key information, such as the Pet ID, is consistently included in log entries. We enable SQL logging to gain insight into the database queries being executed, which is crucial for diagnosing performance issues and understanding how data is being retrieved. Finally, we integrate Jmix with a centralized log management solution (Elasticsearch and Kibana), making it easier to monitor and analyze logs from a unified interface.
-
-
-// [[final-application]]
-// === Final Application
-//
-// video::zTYx_KSeMzY[youtube,width=1280,height=600]
+In this guide, we extend the https://github.com/jmix-framework/jmix-petclinic-2[Jmix Petclinic^] application by configuring custom log outputs, adjusting log levels, and adding context-sensitive MDC values to ensure that key information, such as the Pet ID, is consistently included in log entries. We enable SQL logging to gain insight into the database queries being executed, which is crucial for diagnosing performance issues and understanding how data is being retrieved.
[[why-application-loggig-is-essential]]
=== Why Application Logging is Essential
@@ -135,6 +129,7 @@ include::example$/src/main/resources/application.properties[tags=logging-level-c
----
This configuration defines different logging levels for various components of the Jmix application and the underlying libraries like `EclipseLink` and `Liquibase`. For example:
+
- The logging level for `io.jmix` is set to `INFO`, which means only important informational messages, warnings, and errors will be logged.
- The level for `liquibase` is set to `WARN`, so only warning and error messages are logged, reducing verbosity.
@@ -294,109 +289,18 @@ It is important to clear the MDC context after the operation is complete using `
For more details on how MDC works, refer to the official Logback documentation: https://logback.qos.ch/manual/mdc.html[Logback Manual: MDC].
-[[centralized-logging]]
-== Centralized Logging
-
-With centralized logging, all log data is collected and stored in one place, rather than scattered across individual servers or files. This makes it much easier to search, access, and analyze logs, no matter the size of your application. Even for smaller applications, centralized logging can be helpful because it allows you to quickly find specific log entries and troubleshoot issues more efficiently.
-
-Centralized logging provides benefits like:
-
-- **Easy accessibility**: Logs can be accessed through a web interface, making them searchable and easier to explore. This enables real-time troubleshooting and monitoring without requiring direct access to the servers.
-- **Collaboration**: Centralized logging allows team members to share and link logs, which can help in debugging or reviewing incidents together.
-- **Correlating logs**: Logs from multiple services can be aggregated in one place, making it easier to correlate events across different systems or services.
-- **Alerting**: Many centralized logging solutions offer built-in alerting capabilities. This allows you to set up notifications for specific log messages, so you can be immediately notified when critical errors or issues occur.
-- **Enhanced observability**: Centralized logging solutions often integrate with metrics collection systems, combining logs with performance metrics. This ties directly into the concept of observability, where logs, metrics, and other signals are used together to gain a more comprehensive view of your application's performance and health.
-
-There are many providers for centralized logging solutions, such as Datadog, New Relic, or self-hosted options. In this example, we will use the popular ELK Stack (Elasticsearch, Logstash, Kibana) to demonstrate how to integrate Jmix with a centralized logging solution.
-
-[[setting-up-the-elk-stack]]
-=== Setting up the ELK Stack
-
-To set up the ELK Stack, we will use Docker to run Elasticsearch, Logstash, and Kibana. This setup will allow us to collect, store, and visualize logs in real-time. Start by creating a `docker-compose.yml` file in the root of your project and add the following configuration:
-
-[source,yml,indent=0]
-----
-include::example$/docker-compose.yml[]
-----
-
-This configuration starts three containers: Elasticsearch is responsible for storing the log data, Logstash receives logs from the Jmix application and forwards them to Elasticsearch for storage, and Kibana provides a web interface where you can visualize and search through the log data.
-
-The Logstash configuration file is included below:
-
-[source,indent=0]
-.logstash.conf
-----
-include::example$/logstash.conf[]
-----
-
-This configuration is divided into two sections. The `input` block sets up a TCP listener on port 5044 and uses a JSON codec. This ensures that incoming log messages are interpreted as JSON. The `output` block forwards the parsed log events to an Elasticsearch cluster available at http://elasticsearch:9200[^].
-
-For more information on logstash configuration see: https://www.elastic.co/guide/en/logstash/current/configuration.html[Logstash Docs: Creating a Logstash pipeline^].
-
-You can start these services with the following command:
-
-[source,bash,indent=0]
-----
-$ docker compose up
-----
-
-Once the services are running, Kibana will be accessible at http://localhost:5601[^], where you can explore and visualize logs in real-time.
-
-[[configure-logging-to-logstash]]
-=== Configure Logging to Logstash
-
-Next, we will configure the Jmix application to send logs to Logstash, which will forward them to Elasticsearch. This involves two steps: adding the necessary dependencies and modifying the logging configuration.
-
-First, add the following dependencies to your `build.gradle` file:
-
-.build.gradle
-[source,gradle,indent=0]
-----
-dependencies {
-
- // ...
-
- implementation 'net.logstash.logback:logstash-logback-encoder:8.0'
- implementation 'ch.qos.logback:logback-classic:1.5.6'
-}
-----
-
-These dependencies include the Logstash encoder and Logback classic, allowing us to configure Logstash in our logging configuration.
-
-Next, modify the `logback-spring.xml` file to include a Logstash appender, which will send logs to the Logstash service:
-
-.logback-spring.xml
-[source,xml,indent=0]
-----
-include::example$/src/main/resources/logback-spring.xml[]
-----
-
-With this setup, the `LogstashTcpSocketAppender` sends logs from the Jmix application to Logstash. This allows us to centralize and process logs through Elasticsearch and visualize them in Kibana.
-
-[[viewing-logs-in-kibana]]
-=== Viewing Logs in Kibana
-
-Once the ELK Stack is up and running, you can access Kibana at http://localhost:5601/app/logs[^]. This web interface allows you to search, filter, and visualize logs sent from your Jmix application. Kibana provides a powerful interface for exploring log data, enabling you to drill down into specific events, correlate logs across services, and create dashboards for monitoring.
-
-The MDC values are stored in the Elasticsearch index as dedicated fields, which makes it possible to easily search for them and display them as columns in Kibana. This allows you to filter logs by MDC values such as **petId** or **jmixUser** and see them directly in the log view. As shown in the screenshot below, these fields appear alongside standard log data, making it easier to analyze logs based on the custom context from your application:
-
-image::jmix-kibana-logs.png[Kibana Logs Visualization, link="_images/jmix-kibana-logs.png"]
-
-To learn more about using Kibana to search and analyze logs, refer to the official Kibana documentation:
-https://www.elastic.co/guide/en/kibana/current/discover.html[Kibana Discover Documentation^].
-
-With this setup, you can now efficiently monitor and analyze your application's logs in a centralized location, making it easier to troubleshoot, optimize, and collaborate on any issues that arise.
-
[[summary]]
== Summary
This guide demonstrated how effective logging can be implemented in a Jmix application using the Java ecosystem. We explored basic logging concepts, how to use Slf4J and logback to write log messages, and advanced features like MDC (Mapped Diagnostic Context) to include contextual information, such as a Pet ID, across log messages automatically.
-We also looked at how logging levels can be customized for different environments, either through configuration files or dynamic environment variables. Additionally, we touched upon centralized logging solutions, like Elasticsearch, for managing and analyzing logs externally.
+We also looked at how logging levels can be customized for different environments, either through configuration files or dynamic environment variables.
Logging is essential for observability and debugging in production environments. Properly configured logging ensures that administrators can track down issues without direct access to the running application, making it a core aspect of application maintenance and monitoring.
[[further-information]]
=== Further Information
+
+* xref:observability-logging-guide:index.adoc[Advanced Guide on Observability: Centralized Logging]
* https://docs.spring.io/spring-boot/reference/features/logging.html[Spring Boot Logging Documentation^]
-* https://logback.qos.ch/manual/index.html[Logback Manual]
\ No newline at end of file
+* https://logback.qos.ch/manual/index.html[Logback Manual^]
\ No newline at end of file
diff --git a/docker-compose.yml b/docker-compose.yml
deleted file mode 100644
index 6fa4cee..0000000
--- a/docker-compose.yml
+++ /dev/null
@@ -1,39 +0,0 @@
-services:
- elasticsearch:
- # Elasticsearch is the database where log data is stored.
- # Logstash sends processed logs here, and it provides search and analytics capabilities.
- image: docker.elastic.co/elasticsearch/elasticsearch:8.10.2
- environment:
- - discovery.type=single-node # Running in single-node mode for development.
- - xpack.security.enabled=false # Disabling security for easier access in development.
- ports:
- - "9200:9200" # Exposing Elasticsearch's API on port 9200.
- volumes:
- - esdata:/usr/share/elasticsearch/data # Persisting data across container restarts.
-
- logstash:
- # Logstash collects logs from your application and forwards them to Elasticsearch.
- # The application's logging configuration (logback-spring.xml) defines Logstash as the destination for logs.
- # Logstash then ensures the logs are properly structured and stored in Elasticsearch.
- image: docker.elastic.co/logstash/logstash:8.10.2
- volumes:
- - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf # Logstash pipeline configuration.
- ports:
- - "5044:5044" # Listening on port 5044 for incoming logs from the application.
- depends_on:
- - elasticsearch # Ensuring Elasticsearch is ready before Logstash starts.
-
- kibana:
- # Kibana provides a web interface to visualize and explore log data stored in Elasticsearch.
- # You can create dashboards, filter logs, and search for specific data fields.
- image: docker.elastic.co/kibana/kibana:8.10.2
- environment:
- - ELASTICSEARCH_HOSTS=http://elasticsearch:9200 # Kibana connects to Elasticsearch at this address.
- ports:
- - "5601:5601" # Kibana's web interface is accessible on port 5601.
- depends_on:
- - elasticsearch # Ensures Elasticsearch starts before Kibana.
-
-volumes:
- esdata:
- driver: local # Persistent storage for Elasticsearch data.
\ No newline at end of file
diff --git a/logstash.conf b/logstash.conf
deleted file mode 100644
index 8a94345..0000000
--- a/logstash.conf
+++ /dev/null
@@ -1,12 +0,0 @@
-input {
- tcp {
- port => 5044
- codec => json
- }
-}
-
-output {
- elasticsearch {
- hosts => ["http://elasticsearch:9200"]
- }
-}
\ No newline at end of file
diff --git a/src/main/resources/logback-spring.xml b/src/main/resources/logback-spring.xml
index 19d8866..5f01803 100644
--- a/src/main/resources/logback-spring.xml
+++ b/src/main/resources/logback-spring.xml
@@ -3,14 +3,8 @@
-    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
-        <destination>localhost:5044</destination>
-        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
-    </appender>
-
-    <appender-ref ref="logstash"/>