From ea1f83d73973721587974ee0146b331d66fc5bfa Mon Sep 17 00:00:00 2001 From: Tran Ngoc Nhan Date: Sat, 6 Dec 2025 14:22:25 +0700 Subject: [PATCH] Fix plural words rendering in docs Signed-off-by: Tran Ngoc Nhan --- .../ROOT/pages/appendix/native-images.adoc | 2 +- .../pages/kafka/annotation-error-handling.adoc | 16 ++++++++-------- .../ROOT/pages/kafka/configuring-topics.adoc | 4 ++-- .../modules/ROOT/pages/kafka/connecting.adoc | 2 +- .../ROOT/pages/kafka/container-props.adoc | 12 ++++++------ .../antora/modules/ROOT/pages/kafka/events.adoc | 2 +- .../modules/ROOT/pages/kafka/micrometer.adoc | 10 +++++----- .../message-listener-container.adoc | 4 ++-- .../kafka/receiving-messages/sequencing.adoc | 6 +++--- .../receiving-messages/template-receive.adoc | 2 +- .../ROOT/pages/kafka/sending-messages.adoc | 16 ++++++++-------- .../antora/modules/ROOT/pages/kafka/serdes.adoc | 4 ++-- .../modules/ROOT/pages/kafka/transactions.adoc | 2 +- .../retrytopic/accessing-delivery-attempts.adoc | 2 +- .../main/antora/modules/ROOT/pages/testing.adoc | 6 +++--- 15 files changed, 45 insertions(+), 45 deletions(-) diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/appendix/native-images.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/appendix/native-images.adoc index 7d2f5b065c..128c858be1 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/appendix/native-images.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/appendix/native-images.adoc @@ -1,7 +1,7 @@ [[native-images]] = Native Images -{spring-framework-reference-url}/core/aot.html[Spring AOT] native hints are provided to assist in developing native images for Spring applications that use Spring for Apache Kafka, including hints for AVRO generated classes used in `@KafkaListener`+++s+++. +{spring-framework-reference-url}/core/aot.html[Spring AOT] native hints are provided to assist in developing native images for Spring applications that use Spring for Apache Kafka, including hints for AVRO generated classes used in ``@KafkaListener``s. IMPORTANT: `spring-kafka-test` (and, specifically, its `EmbeddedKafkaBroker`) is not supported in native images. diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc index 0e0bcdada0..cf5e1d39ac 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc @@ -50,7 +50,7 @@ It has a sub-interface (`ConsumerAwareListenerErrorHandler`) that has access to Object handleError(Message message, ListenerExecutionFailedException exception, Consumer consumer); ---- -Another sub-interface (`ManualAckListenerErrorHandler`) provides access to the `Acknowledgment` object when using manual `AckMode`+++s+++. +Another sub-interface (`ManualAckListenerErrorHandler`) provides access to the `Acknowledgment` object when using manual ``AckMode``s. [source, java] ---- @@ -250,7 +250,7 @@ Always ensure that exceptions thrown in message processing code explicitly exten In other words, if the application throws an exception, ensure that it is extended from `RuntimeException` and not inadvertently inherited from `Error`. Standard errors like `OutOfMemoryError`, `IllegalAccessError`, and other errors beyond the control of the application are still treated as ``Error``s and not retried. 
-The error handler can be configured with one or more `RetryListener`+++s+++, receiving notifications of retry and recovery progress. +The error handler can be configured with one or more ``RetryListener``s, receiving notifications of retry and recovery progress. Starting with version 2.8.10, methods for batch listeners were added. [source, java] @@ -487,7 +487,7 @@ public void listen(List> records, Acknowledgment a Starting with version 2.8, batch listeners can now properly handle conversion errors, when using a `MessageConverter` with a `ByteArrayDeserializer`, a `BytesDeserializer` or a `StringDeserializer`, as well as a `DefaultErrorHandler`. When a conversion error occurs, the payload is set to null and a deserialization exception is added to the record headers, similar to the `ErrorHandlingDeserializer`. -A list of `ConversionException`+++s+++ is available in the listener so the listener can throw a `BatchListenerFailedException` indicating the first index at which a conversion exception occurred. +A list of ``ConversionException``s is available in the listener so the listener can throw a `BatchListenerFailedException` indicating the first index at which a conversion exception occurred. Example: @@ -756,7 +756,7 @@ Since the event also has a reference to the container, you can restart the conta Starting with version 2.7, while waiting for a `BackOff` interval, the error handler will loop with a short sleep until the desired delay is reached, while checking to see if the container has been stopped, allowing the sleep to exit soon after the `stop()` rather than causing a delay. -Starting with version 2.7, the processor can be configured with one or more `RetryListener`+++s+++, receiving notifications of retry and recovery progress. +Starting with version 2.7, the processor can be configured with one or more ``RetryListener``s, receiving notifications of retry and recovery progress. [source, java] ---- @@ -838,7 +838,7 @@ public void listen(@Payload Thing thing, } ---- -When used in a `RecordInterceptor` or `RecordFilterStrategy` implementation, the header is in the consumer record as a byte array, converted using the `KafkaListenerAnnotationBeanPostProcessor`+++'+++s `charSet` property. +When used in a `RecordInterceptor` or `RecordFilterStrategy` implementation, the header is in the consumer record as a byte array, converted using the ``KafkaListenerAnnotationBeanPostProcessor``'s `charSet` property. The header mappers also convert to `String` when creating `MessageHeaders` from the consumer record and never map this header on an outbound record. @@ -911,7 +911,7 @@ The record sent to the dead-letter topic is enhanced with the following headers: * `KafkaHeaders.DLT_ORIGINAL_TIMESTAMP_TYPE`: The original timestamp type. * `KafkaHeaders.DLT_ORIGINAL_CONSUMER_GROUP`: The original consumer group that failed to process the record (since version 2.8). -Key exceptions are only caused by `DeserializationException`+++s+++ so there is no `DLT_KEY_EXCEPTION_CAUSE_FQCN`. +Key exceptions are only caused by ``DeserializationException``s so there is no `DLT_KEY_EXCEPTION_CAUSE_FQCN`. There are two mechanisms to add more headers. @@ -923,8 +923,8 @@ The second is simpler to implement but the first has more information available, Starting with version 2.3, when used in conjunction with an `ErrorHandlingDeserializer`, the publisher will restore the record `value()`, in the dead-letter producer record, to the original value that failed to be deserialized. 
Previously, the `value()` was null and user code had to decode the `DeserializationException` from the message headers. -In addition, you can provide multiple `KafkaTemplate`+++s+++ to the publisher; this might be needed, for example, if you want to publish the `byte[]` from a `DeserializationException`, as well as values using a different serializer from records that were deserialized successfully. -Here is an example of configuring the publisher with `KafkaTemplate`+++s+++ that use a `String` and `byte[]` serializer: +In addition, you can provide multiple ``KafkaTemplate``s to the publisher; this might be needed, for example, if you want to publish the `byte[]` from a `DeserializationException`, as well as values using a different serializer from records that were deserialized successfully. +Here is an example of configuring the publisher with ``KafkaTemplate``s that use a `String` and `byte[]` serializer: [source, java] ---- diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/configuring-topics.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/configuring-topics.adoc index ab6923cb95..d5ef7e63f0 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/configuring-topics.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/configuring-topics.adoc @@ -43,7 +43,7 @@ include::{kotlin-examples}/topics/Config.kt[tag=brokerProps] ---- ====== -Starting with version 2.7, you can declare multiple `NewTopic`+++s+++ in a single `KafkaAdmin.NewTopics` bean definition: +Starting with version 2.7, you can declare multiple ``NewTopic``s in a single `KafkaAdmin.NewTopics` bean definition: [tabs] ====== @@ -63,7 +63,7 @@ include::{kotlin-examples}/topics/Config.kt[tag=newTopicsBean] ====== -IMPORTANT: When using Spring Boot, a `KafkaAdmin` bean is automatically registered so you only need the `NewTopic` (and/or `NewTopics`) `@Bean`+++s+++. +IMPORTANT: When using Spring Boot, a `KafkaAdmin` bean is automatically registered so you only need the `NewTopic` (and/or `NewTopics`) ``@Bean``s. By default, if the broker is not available, a message is logged, but the context continues to load. You can programmatically invoke the admin's `initialize()` method to try again later. diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/connecting.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/connecting.adoc index 678c5a1f00..c3e614e545 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/connecting.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/connecting.adoc @@ -15,7 +15,7 @@ To close existing Consumers, call `stop()` (and then `start()`) on the `KafkaLis For convenience, the framework also provides an `ABSwitchCluster` which supports two sets of bootstrap servers; one of which is active at any time. Configure the `ABSwitchCluster` and add it to the producer and consumer factories, and the `KafkaAdmin`, by calling `setBootstrapServersSupplier()`. When you want to switch, call `primary()` or `secondary()` and call `reset()` on the producer factory to establish new connection(s); for consumers, `stop()` and `start()` all listener containers. -When using `@KafkaListener`+++s+++, `stop()` and `start()` the `KafkaListenerEndpointRegistry` bean. +When using ``@KafkaListener``s, `stop()` and `start()` the `KafkaListenerEndpointRegistry` bean. See the Javadocs for more information. 
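
The `RetryListener` hook referenced in the error-handling hunks above is easiest to see in a small sketch.
The following is a minimal, hypothetical configuration, not part of this patch; the back off values and the `logger` field are illustrative:

[source, java]
----
@Bean
DefaultErrorHandler errorHandler() {
    // retry three times, one second apart, before recovery is attempted
    DefaultErrorHandler handler = new DefaultErrorHandler(new FixedBackOff(1000L, 3L));
    // RetryListener.failedDelivery(record, ex, deliveryAttempt) is the functional
    // method; recovered() and recoveryFailed() are default methods
    handler.setRetryListeners((record, ex, deliveryAttempt) ->
            logger.warn("Delivery failed for {}, attempt {}", record.key(), deliveryAttempt));
    return handler;
}
----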
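Similarly, the `KafkaAdmin.NewTopics` bean touched in the `configuring-topics.adoc` hunk groups several ``NewTopic``s in a single definition; a minimal sketch (topic names and settings are illustrative):

[source, java]
----
@Bean
KafkaAdmin.NewTopics topics() {
    return new KafkaAdmin.NewTopics(
            TopicBuilder.name("defaultBoth").build(),
            TopicBuilder.name("withPartitions").partitions(10).build(),
            TopicBuilder.name("withReplicas").replicas(3).build());
}
----
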
diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc index e89363e30b..2108984701 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc @@ -112,12 +112,12 @@ The time to process a batch of records plus this value must be less than the `ma |[[idleEventInterval]]<> |`null` -|When set, enables publication of `ListenerContainerIdleEvent`+++s+++, see xref:kafka/events.adoc[Application Events] and xref:kafka/events.adoc#idle-containers[Detecting Idle and Non-Responsive Consumers]. +|When set, enables publication of ``ListenerContainerIdleEvent``s, see xref:kafka/events.adoc[Application Events] and xref:kafka/events.adoc#idle-containers[Detecting Idle and Non-Responsive Consumers]. Also see `idleBeforeDataMultiplier`. |[[idlePartitionEventInterval]]<> |`null` -|When set, enables publication of `ListenerContainerIdlePartitionEvent`+++s+++, see xref:kafka/events.adoc[Application Events] and xref:kafka/events.adoc#idle-containers[Detecting Idle and Non-Responsive Consumers]. +|When set, enables publication of ``ListenerContainerIdlePartitionEvent``s, see xref:kafka/events.adoc[Application Events] and xref:kafka/events.adoc#idle-containers[Detecting Idle and Non-Responsive Consumers]. |[[kafkaConsumerProperties]]<> |None @@ -287,7 +287,7 @@ See xref:kafka/annotation-error-handling.adoc#error-handlers[Container Error Han |[[listenerId]]<> |See desc. -|The bean name for user-configured containers or the `id` attribute of `@KafkaListener`+++s+++. +|The bean name for user-configured containers or the `id` attribute of ``@KafkaListener``s. |[[listenerInfo]]<> |null @@ -342,11 +342,11 @@ Also see `interceptBeforeTx`. |[[assignedPartitions2]]<> |(read only) -|The aggregate of partitions currently assigned to this container's child `KafkaMessageListenerContainer`+++s+++ (explicitly or not). +|The aggregate of partitions currently assigned to this container's child ``KafkaMessageListenerContainer``s (explicitly or not). |[[concurrency]]<> |1 -|The number of child `KafkaMessageListenerContainer`+++s+++ to manage. +|The number of child ``KafkaMessageListenerContainer``s to manage. |[[containerPaused2]]<> |n/a @@ -354,6 +354,6 @@ Also see `interceptBeforeTx`. |[[containers]]<> |n/a -|A reference to all child `KafkaMessageListenerContainer`+++s+++. +|A reference to all child ``KafkaMessageListenerContainer``s. |=== diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/events.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/events.adoc index 81ce8244a4..5702252f8c 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/events.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/events.adoc @@ -176,7 +176,7 @@ You can also use `@EventListener`, introduced in Spring Framework 4.2. The next example combines `@KafkaListener` and `@EventListener` into a single class. You should understand that the application listener gets events for all containers, so you may need to check the listener ID if you want to take specific action based on which container is idle. -You can also use the `@EventListener`+++'+++s `condition` for this purpose. +You can also use the ``@EventListener``'s `condition` for this purpose. See xref:kafka/events.adoc[Application Events] for information about event properties. 
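
The `idleEventInterval` container property and the `@EventListener` `condition` mentioned above work together; a minimal, hypothetical listener (the `qux-` id prefix is illustrative):

[source, java]
----
@EventListener(condition = "event.listenerId.startsWith('qux-')")
public void onIdle(ListenerContainerIdleEvent event) {
    // fires only for containers whose listener id matches the condition;
    // getIdleTime() is how long the container has been idle, in milliseconds
    System.out.println("Container " + event.getListenerId()
            + " idle for " + event.getIdleTime() + " ms");
}
----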
diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc index 9885b29e75..806af1b3ca 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc @@ -4,8 +4,8 @@ [[monitoring-listener-performance]] == Monitoring Listener Performance -Starting with version 2.3, the listener container will automatically create and update Micrometer `Timer`+++s+++ for the listener, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context. -The timers can be disabled by setting the `ContainerProperty`+++'+++s `micrometerEnabled` to `false`. +Starting with version 2.3, the listener container will automatically create and update Micrometer ``Timer``s for the listener, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context. +The timers can be disabled by setting the ``ContainerProperty``'s `micrometerEnabled` to `false`. Two timers are maintained - one for successful calls to the listener and one for failures. @@ -15,16 +15,16 @@ The timers are named `spring.kafka.listener` and have the following tags: * `result` : `success` or `failure` * `exception` : `none` or `ListenerExecutionFailedException` -You can add additional tags using the `ContainerProperties`+++'+++s `micrometerTags` property. +You can add additional tags using the ``ContainerProperties``'s `micrometerTags` property. -Starting with versions 2.9.8, 3.0.6, you can provide a function in `ContainerProperties`+++'+++s `micrometerTagsProvider`; the function receives the `ConsumerRecord` and returns tags which can be based on that record, and merged with any static tags in `micrometerTags`. +Starting with versions 2.9.8, 3.0.6, you can provide a function in ``ContainerProperties``'s `micrometerTagsProvider`; the function receives the `ConsumerRecord` and returns tags which can be based on that record, and merged with any static tags in `micrometerTags`. NOTE: With the concurrent container, timers are created for each thread and the `name` tag is suffixed with `-n` where n is `0` to `concurrency-1`. [[monitoring-kafkatemplate-performance]] == Monitoring KafkaTemplate Performance -Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s+++ for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context. +Starting with version 2.5, the template will automatically create and update Micrometer ``Timer``s for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context. The timers can be disabled by setting the template's `micrometerEnabled` property to `false`. Two timers are maintained - one for successful calls to the listener and one for failures. 
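
The `micrometerTags` and `micrometerTagsProvider` properties described above are set through the container properties; a minimal sketch (the factory bean and tag names are illustrative):

[source, java]
----
@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // static tags added to every listener timer
    factory.getContainerProperties().setMicrometerTags(Map.of("app", "myApp"));
    // per-record tags (versions 2.9.8, 3.0.6 and later), merged with the static tags
    factory.getContainerProperties().setMicrometerTagsProvider(
            record -> Map.of("topic", record.topic()));
    return factory;
}
----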
diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/receiving-messages/message-listener-container.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/receiving-messages/message-listener-container.adoc index 6d3b52cbb1..21d0549aad 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/receiving-messages/message-listener-container.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/receiving-messages/message-listener-container.adoc @@ -52,7 +52,7 @@ public void configureRecordInterceptor(AbstractKafkaListenerContainerFactory receive(Collection requested, Durati As you can see, you need to know the partition and offset of the record(s) you need to retrieve; a new `Consumer` is created (and closed) for each operation. With the last two methods, each record is retrieved individually and the results assembled into a `ConsumerRecords` object. -When creating the `TopicPartitionOffset`+++s+++ for the request, only positive, absolute offsets are supported. +When creating the ``TopicPartitionOffset``s for the request, only positive, absolute offsets are supported. diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/sending-messages.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/sending-messages.adoc index 6380823169..0f24398fa8 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/sending-messages.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/sending-messages.adoc @@ -278,7 +278,7 @@ public class Application { } ---- -The corresponding `@KafkaListener`+++s+++ for this example are shown in xref:kafka/receiving-messages/listener-annotation.adoc#annotation-properties[Annotation Properties]. +The corresponding ``@KafkaListener``s for this example are shown in xref:kafka/receiving-messages/listener-annotation.adoc#annotation-properties[Annotation Properties]. For another technique to achieve similar results, but with the additional capability of sending different types to the same topic, see xref:kafka/serdes.adoc#delegating-serialization[Delegating Serializer and Deserializer]. @@ -299,7 +299,7 @@ Calling `reset()` or `destroy()` will not clean up these producers. Also see xref:kafka/transactions.adoc#tx-template-mixed[`KafkaTemplate` Transactional and non-Transactional Publishing]. When creating a `DefaultKafkaProducerFactory`, key and/or value `Serializer` classes can be picked up from configuration by calling the constructor that only takes in a Map of properties (see example in xref:kafka/sending-messages.adoc#kafka-template[Using `KafkaTemplate`]), or `Serializer` instances may be passed to the `DefaultKafkaProducerFactory` constructor (in which case all ``Producer``s share the same instances). -Alternatively you can provide `Supplier`+++s+++ (starting with version 2.3) that will be used to obtain separate `Serializer` instances for each `Producer`: +Alternatively you can provide ``Supplier``s (starting with version 2.3) that will be used to obtain separate `Serializer` instances for each `Producer`: [source, java] ---- @@ -429,7 +429,7 @@ Note that we can use Boot's auto-configured container factory to create the repl If a non-trivial deserializer is being used for replies, consider using an xref:kafka/serdes.adoc#error-handling-deserializer[`ErrorHandlingDeserializer`] that delegates to your configured deserializer. 
When so configured, the `RequestReplyFuture` will be completed exceptionally and you can catch the `ExecutionException`, with the `DeserializationException` in its `cause` property. -Starting with version 2.6.7, in addition to detecting `DeserializationException`+++s+++, the template will call the `replyErrorChecker` function, if provided. +Starting with version 2.6.7, in addition to detecting ``DeserializationException``s, the template will call the `replyErrorChecker` function, if provided. If it returns an exception, the future will be completed exceptionally. Here is an example: @@ -568,7 +568,7 @@ NOTE: Conversely, if the requesting application is not a spring application and Previously, the listener had to echo custom correlation headers. [[exchanging-messages]] -=== Request/Reply with `Message`+++s+++ +=== Request/Reply with ``Message``s Version 2.7 added methods to the `ReplyingKafkaTemplate` to send and receive ``spring-messaging``'s `Message` abstraction: @@ -672,7 +672,7 @@ The template in xref:kafka/sending-messages.adoc#replying-template[Using `Replyi For cases where multiple receivers of a single message return a reply, you can use the `AggregatingReplyingKafkaTemplate`. This is an implementation of the client-side of the https://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html[Scatter-Gather Enterprise Integration Pattern]. -Like the `ReplyingKafkaTemplate`, the `AggregatingReplyingKafkaTemplate` constructor takes a producer factory and a listener container to receive the replies; it has a third parameter `BiPredicate>, Boolean> releaseStrategy` which is consulted each time a reply is received; when the predicate returns `true`, the collection of `ConsumerRecord`+++s+++ is used to complete the `Future` returned by the `sendAndReceive` method. +Like the `ReplyingKafkaTemplate`, the `AggregatingReplyingKafkaTemplate` constructor takes a producer factory and a listener container to receive the replies; it has a third parameter `BiPredicate>, Boolean> releaseStrategy` which is consulted each time a reply is received; when the predicate returns `true`, the collection of ``ConsumerRecord``s is used to complete the `Future` returned by the `sendAndReceive` method. There is an additional property `returnPartialOnTimeout` (default false). When this is set to `true`, instead of completing the future with a `KafkaReplyTimeoutException`, a partial result completes the future normally (as long as at least one reply record has been received). @@ -694,7 +694,7 @@ ConsumerRecord>> consumerRec future.get(30, TimeUnit.SECONDS); ---- -Notice that the return type is a `ConsumerRecord` with a value that is a collection of `ConsumerRecord`+++s+++. +Notice that the return type is a `ConsumerRecord` with a value that is a collection of ``ConsumerRecord``s. The "outer" `ConsumerRecord` is not a "real" record, it is synthesized by the template, as a holder for the actual reply records received for the request. When a normal release occurs (release strategy returns true), the topic is set to `aggregatedResults`; if `returnPartialOnTimeout` is true, and timeout occurs (and at least one reply record has been received), the topic is set to `partialResultsAfterTimeout`. 
The template provides constant static variables for these "topic" names: @@ -714,13 +714,13 @@ public static final String AGGREGATED_RESULTS_TOPIC = "aggregatedResults"; public static final String PARTIAL_RESULTS_AFTER_TIMEOUT_TOPIC = "partialResultsAfterTimeout"; ---- -The real `ConsumerRecord`+++s+++ in the `Collection` contain the actual topic(s) from which the replies are received. +The real ``ConsumerRecord``s in the `Collection` contain the actual topic(s) from which the replies are received. IMPORTANT: The listener container for the replies **must** be configured with `AckMode.MANUAL` or `AckMode.MANUAL_IMMEDIATE`; the consumer property `enable.auto.commit` must be `false` (the default since version 2.3). To avoid any possibility of losing messages, the template only commits offsets when there are zero requests outstanding, i.e. when the last outstanding request is released by the release strategy. After a rebalance, it is possible for duplicate reply deliveries; these will be ignored for any in-flight requests; you may see error log messages when duplicate replies are received for already released replies. -NOTE: If you use an xref:kafka/serdes.adoc#error-handling-deserializer[`ErrorHandlingDeserializer`] with this aggregating template, the framework will not automatically detect `DeserializationException`+++s+++. +NOTE: If you use an xref:kafka/serdes.adoc#error-handling-deserializer[`ErrorHandlingDeserializer`] with this aggregating template, the framework will not automatically detect ``DeserializationException``s. Instead, the record (with a `null` value) will be returned intact, with the deserialization exception(s) in headers. It is recommended that applications call the utility method `ReplyingKafkaTemplate.checkDeserialization()` method to determine if a deserialization exception occurred. See its JavaDocs for more information. diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/serdes.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/serdes.adoc index 32ac64b0d6..f2b955e4a9 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/serdes.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/serdes.adoc @@ -23,7 +23,7 @@ For more complex or particular cases, the `KafkaConsumer` (and, therefore, `Kafk constructors to accept `Serializer` and `Deserializer` instances for `keys` and `values`, respectively. When you use this API, the `DefaultKafkaProducerFactory` and `DefaultKafkaConsumerFactory` also provide properties (through constructors or setter methods) to inject custom `Serializer` and `Deserializer` instances into the target `Producer` or `Consumer`. -Also, you can pass in `Supplier` or `Supplier` instances through constructors - these `Supplier`+++s+++ are called on creation of each `Producer` or `Consumer`. +Also, you can pass in `Supplier` or `Supplier` instances through constructors - these ``Supplier``s are called on creation of each `Producer` or `Consumer`. [[string-serde]] == String serialization @@ -355,7 +355,7 @@ In this case, if there are ambiguous matches, an ordered `Map`, such as a `Linke === By Topic Starting with version 2.8, the `DelegatingByTopicSerializer` and `DelegatingByTopicDeserializer` allow selection of a serializer/deserializer based on the topic name. -Regex `Pattern`+++s+++ are used to lookup the instance to use. +Regex ``Pattern``s are used to lookup the instance to use. 
The map can be configured using a constructor, or via properties (a comma delimited list of `pattern:serializer`). [source, java] diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/transactions.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/transactions.adoc index aae4b186c4..db2c94565e 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/transactions.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/transactions.adoc @@ -149,7 +149,7 @@ Normally, when a `KafkaTemplate` is transactional (configured with a transaction The transaction can be started by a `TransactionTemplate`, a `@Transactional` method, calling `executeInTransaction`, or by a listener container, when configured with a `KafkaTransactionManager`. Any attempt to use the template outside the scope of a transaction results in the template throwing an `IllegalStateException`. Starting with version 2.4.3, you can set the template's `allowNonTransactional` property to `true`. -In that case, the template will allow the operation to run without a transaction, by calling the `ProducerFactory`+++'+++s `createNonTransactionalProducer()` method; the producer will be cached, or thread-bound, as normal for reuse. +In that case, the template will allow the operation to run without a transaction, by calling the ``ProducerFactory``'s `createNonTransactionalProducer()` method; the producer will be cached, or thread-bound, as normal for reuse. See xref:kafka/sending-messages.adoc#producer-factory[Using `DefaultKafkaProducerFactory`]. [[transactions-batch]] diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/retrytopic/accessing-delivery-attempts.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/retrytopic/accessing-delivery-attempts.adoc index 17bc8fa458..1fe65fe1f0 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/retrytopic/accessing-delivery-attempts.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/retrytopic/accessing-delivery-attempts.adoc @@ -9,7 +9,7 @@ To access blocking and non-blocking delivery attempts, add these headers to your @Header(name = RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS, required = false) Integer nonBlockingAttempts ---- -Blocking delivery attempts are only provided if you set `ContainerProperties`+++'+++s xref:kafka/container-props.adoc#deliveryAttemptHeader[deliveryAttemptHeader] to `true`. +Blocking delivery attempts are only provided if you set ``ContainerProperties``'s xref:kafka/container-props.adoc#deliveryAttemptHeader[deliveryAttemptHeader] to `true`. Note that the non blocking attempts will be `null` for the initial delivery. 
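
A complete, hypothetical listener signature using the two delivery-attempt headers above (the id and topic are illustrative; the `KafkaHeaders.DELIVERY_ATTEMPT` header also requires the `deliveryAttemptHeader` container property):

[source, java]
----
@KafkaListener(id = "attempts", topics = "some-topic")
void listen(String in,
        @Header(name = KafkaHeaders.DELIVERY_ATTEMPT, required = false) Integer blockingAttempts,
        @Header(name = RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS, required = false) Integer nonBlockingAttempts) {

    // nonBlockingAttempts is null on the initial delivery;
    // blockingAttempts is null unless deliveryAttemptHeader is set to true
}
----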
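The `allowNonTransactional` behavior described in the `transactions.adoc` hunk can be sketched as follows (the transactional template wiring is assumed, and the topic is illustrative):

[source, java]
----
// the template is transactional (it has a transaction-id-prefix); opting in
// allows sends outside any transaction via createNonTransactionalProducer()
template.setAllowNonTransactional(true);
template.send("some-topic", "sent without a surrounding transaction");
----
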
diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/testing.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/testing.adoc
index c3a0c42416..c98d81dcb8 100644
--- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/testing.adoc
+++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/testing.adoc
@@ -616,13 +616,13 @@ Here are examples:
 ----
 @Bean
 ProducerFactory<String, String> nonTransFactory() {
-    return new MockProducerFactory<>(() -> 
+    return new MockProducerFactory<>(() ->
             new MockProducer<>(true, new StringSerializer(), new StringSerializer()));
 }
 
 @Bean
 ProducerFactory<String, String> transFactory() {
-    MockProducer<String, String> mockProducer = 
+    MockProducer<String, String> mockProducer =
             new MockProducer<>(true, new StringSerializer(), new StringSerializer());
     mockProducer.initTransactions();
     return new MockProducerFactory((tx, id) -> mockProducer, "defaultTxId");
@@ -635,7 +635,7 @@ The transactional id is provided in case you wish to use a different `MockProduc
 If you are using producers in a multi-threaded environment, the `BiFunction` should return multiple producers (perhaps thread-bound using a `ThreadLocal`).
 
-IMPORTANT: Transactional `MockProducer`+++s+++ must be initialized for transactions by calling `initTransaction()`.
+IMPORTANT: Transactional ``MockProducer``s must be initialized for transactions by calling `initTransactions()`.
 When using the `MockProducer`, if you do not want to close the producer after each send, then you can provide a custom `MockProducer` implementation that overrides the `close` method so that it does not call the `close` method from the super class.
 This is convenient for testing, when verifying multiple publishing on the same producer without closing it.
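
A short usage sketch for the non-transactional mock factory above (assertions use AssertJ; the topic, key, and value are illustrative):

[source, java]
----
MockProducer<String, String> mockProducer =
        new MockProducer<>(true, new StringSerializer(), new StringSerializer());
ProducerFactory<String, String> pf = new MockProducerFactory<>(() -> mockProducer);
KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);

template.send("some-topic", "key", "value");

// with autoComplete = true, the send completes immediately and is recorded
assertThat(mockProducer.history()).hasSize(1);
----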