This repository was archived by the owner on Mar 6, 2026. It is now read-only.

Commit e4dad95 (1 parent: f4a9819)

Initial commit

Signed-off-by: teodordelibasic-db <teodor.delibasic@databricks.com>

56 files changed (+6476, −2946 lines)


.gitignore
Lines changed: 3 additions & 0 deletions

@@ -19,6 +19,9 @@ buildNumber.properties
 .settings/
 .project
 .classpath
+.metals/
+.bsp/
+.bazelbsp/

 # OS
 .DS_Store

CHANGELOG.md
Lines changed: 53 additions & 0 deletions

@@ -1,5 +1,58 @@
 # Version changelog

+## Release v0.2.0
+
+#### Native Rust Backend (JNI Migration)
+- The SDK now uses JNI (the Java Native Interface) to call the Zerobus Rust SDK instead of making pure-Java gRPC calls
+- The native library is loaded automatically from the classpath or the system library path
+
+#### New APIs
+
+**Offset-Based Ingestion API** - Preferred alternative to the CompletableFuture-based API:
+- `ZerobusStream.ingestRecordOffset(IngestableRecord)` - Returns an offset immediately, without allocating a future
+- `ZerobusStream.ingestRecordsOffset(Iterable)` - Batch ingestion returning `Optional<Long>` (empty for an empty batch)
+- `ZerobusStream.waitForOffset(long)` - Blocks until the given offset is acknowledged
+
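The intended call pattern for the offset-based API can be sketched as follows. Only the method names `ingestRecordOffset` and `waitForOffset` come from the changelog; `MockStream` is a hypothetical in-memory stand-in for `ZerobusStream`, not SDK code.

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetIngestSketch {
    /** Illustrative stand-in that assigns monotonically increasing offsets. */
    static class MockStream {
        private final List<String> records = new ArrayList<>();
        private long acked = -1;

        long ingestRecordOffset(String record) {
            records.add(record);
            return records.size() - 1; // offset of the record just ingested
        }

        void waitForOffset(long offset) {
            // The real SDK blocks until the server acknowledges `offset`;
            // this mock acknowledges instantly.
            acked = Math.max(acked, offset);
        }

        long lastAckedOffset() { return acked; }
    }

    public static void main(String[] args) {
        MockStream stream = new MockStream();
        long last = -1;
        for (String payload : new String[] {"r1", "r2", "r3"}) {
            last = stream.ingestRecordOffset(payload); // no future allocated per record
        }
        stream.waitForOffset(last); // block once for the whole sequence
        System.out.println("acked through offset " + last);
    }
}
```

The point of the pattern is that the hot ingest loop does no per-record allocation; the caller blocks once, on the last offset, to confirm everything before it.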
+**JSON Record Support**:
+- `IngestableRecord` interface - Unified interface for all record types
+- `JsonRecord` class - A JSON-string wrapper implementing `IngestableRecord`
+- `ProtoRecord<T>` class - A Protocol Buffers wrapper implementing `IngestableRecord`
+- `RecordType` enum - Specifies the stream serialization format (`PROTO` or `JSON`)
+- `StreamConfigurationOptions.setRecordType(RecordType)` - Configures a stream for JSON or Proto records
+- Both record types work with the `ingestRecord()` and `ingestRecordOffset()` methods
+
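The record hierarchy above can be sketched like this. The names `IngestableRecord` and `JsonRecord` come from the release notes; the single `serialize()` method is an assumption about their shape for illustration, not the SDK's actual API.

```java
import java.nio.charset.StandardCharsets;

public class RecordTypesSketch {
    /** Hypothetical unified record interface (shape assumed, name from the changelog). */
    interface IngestableRecord {
        byte[] serialize(); // assumed: how a record becomes wire bytes
    }

    /** Wraps a JSON string; assumed to serialize as UTF-8 bytes. */
    static class JsonRecord implements IngestableRecord {
        private final String json;

        JsonRecord(String json) { this.json = json; }

        @Override
        public byte[] serialize() {
            return json.getBytes(StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        IngestableRecord rec = new JsonRecord("{\"device_id\":42}");
        System.out.println(rec.serialize().length + " bytes");
    }
}
```

A `ProtoRecord<T>` would presumably delegate `serialize()` to the wrapped message's protobuf encoding; the value of the interface is that the stream's ingest methods accept either type uniformly.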
+**Batch Operations**:
+- `ZerobusStream.ingestRecords(Iterable)` - Ingests multiple records with a single acknowledgment
+- `ZerobusStream.getUnackedBatches()` - Returns unacknowledged records, preserving batch grouping
+- `EncodedBatch` class - Represents a batch of encoded records
+
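The batch variant of the offset API can be sketched with the same mock-stream approach. The method name `ingestRecordsOffset` and its `Optional<Long>` (empty for an empty batch) contract come from the changelog; `MockBatchStream` is purely illustrative.

```java
import java.util.List;
import java.util.Optional;

public class BatchIngestSketch {
    /** Illustrative stand-in for a stream's batch-ingest path. */
    static class MockBatchStream {
        private long nextOffset = 0;

        /** Returns the offset of the batch's last record, or empty for an empty batch. */
        Optional<Long> ingestRecordsOffset(Iterable<String> batch) {
            long last = -1;
            for (String record : batch) {
                last = nextOffset++;
            }
            return last < 0 ? Optional.empty() : Optional.of(last);
        }
    }

    public static void main(String[] args) {
        MockBatchStream stream = new MockBatchStream();
        // One call, one offset: waiting on it acknowledges the whole batch at once.
        Optional<Long> offset = stream.ingestRecordsOffset(List.of("a", "b", "c"));
        offset.ifPresent(o -> System.out.println("last offset in batch: " + o));
        // Empty batch yields an empty Optional rather than a sentinel value.
        System.out.println(stream.ingestRecordsOffset(List.of()).isPresent());
    }
}
```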
+**Arrow Flight Support** (Experimental):
+- `ZerobusArrowStream` class - High-performance columnar data ingestion
+- `ArrowTableProperties` class - Table configuration with an Arrow schema
+- `ArrowStreamConfigurationOptions` class - Arrow stream configuration
+- `ZerobusSdk.createArrowStream()` - Creates Arrow Flight streams
+- `ZerobusSdk.recreateArrowStream()` - Recovers failed Arrow streams
+
+**New Callback Interface**:
+- `AckCallback` interface with `onAck(long offsetId)` and `onError(long offsetId, String message)`
+- Provides more detailed error information than the deprecated Consumer-based callback
+
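The callback shape below matches the two signatures quoted above, `onAck(long offsetId)` and `onError(long offsetId, String message)`; the interface is re-declared here so the sketch is self-contained, and `CountingCallback` is an illustrative implementation, not SDK code.

```java
import java.util.concurrent.atomic.AtomicLong;

public class AckCallbackSketch {
    /** Re-declaration of the callback interface described in the changelog. */
    interface AckCallback {
        void onAck(long offsetId);
        void onError(long offsetId, String message);
    }

    /** Example implementation: tallies outcomes, thread-safe via atomics. */
    static class CountingCallback implements AckCallback {
        final AtomicLong acked = new AtomicLong();
        final AtomicLong failed = new AtomicLong();

        @Override
        public void onAck(long offsetId) {
            acked.incrementAndGet();
        }

        @Override
        public void onError(long offsetId, String message) {
            failed.incrementAndGet();
            System.err.println("record at offset " + offsetId + " failed: " + message);
        }
    }

    public static void main(String[] args) {
        CountingCallback cb = new CountingCallback();
        cb.onAck(0);
        cb.onAck(1);
        cb.onError(2, "stream closed");
        System.out.println(cb.acked.get() + " acked, " + cb.failed.get() + " failed");
    }
}
```

Unlike the deprecated `Consumer<IngestRecordResponse>` callback, this shape carries both the failing offset and a message, so an implementation can retry or log precisely.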
+### Deprecated
+
+- `ZerobusStream.ingestRecord(RecordType)` - Use `ingestRecordOffset()` instead; the offset-based API avoids the CompletableFuture allocation overhead.
+- `ZerobusStream.ingestRecord(IngestableRecord)` - Use `ingestRecordOffset()` instead.
+- `ZerobusStream.ingestRecords(Iterable)` - Use `ingestRecordsOffset()` instead.
+- `ZerobusStream.getState()` - Stream state is no longer exposed by the native backend; returns `OPENED` or `CLOSED` only.
+- `ZerobusStream.getUnackedRecords()` - Returns an empty iterator. Use `getUnackedBatches()` or `getUnackedRecordsRaw()` instead.
+- `StreamConfigurationOptions.Builder.setAckCallback(Consumer<IngestRecordResponse>)` - Use `setAckCallback(AckCallback)` instead.
+- `ZerobusSdk.setStubFactory()` - The gRPC stub factory is no longer used with the native backend; throws `UnsupportedOperationException`.
+
+### Platform Support
+
+- Linux x86_64: Supported
+- Windows x86_64: Supported
+- macOS: Not yet supported (planned for a future release)
+
 ## Release v0.1.0

 Initial release of the Databricks Zerobus Ingest SDK for Java.

NEXT_CHANGELOG.md
Lines changed: 1 addition & 26 deletions

@@ -1,38 +1,13 @@
 # NEXT CHANGELOG

-## Release v0.2.0
+## Release v0.3.0

 ### New Features and Improvements

-- Updated Protocol Buffers from 3.24.0 to 4.33.0 for improved performance and the latest features
-- Updated gRPC dependencies from 1.58.0 to 1.76.0 for improved stability and security
-- Updated the SLF4J logging framework from 1.7.36 to 2.0.17 for modern logging capabilities

 ### Bug Fixes

 ### Documentation

-- Updated README.md with the new dependency versions
-- Updated the protoc compiler version recommendations
-- Updated the Logback version compatibility notes for SLF4J 2.0

 ### Internal Changes

-- Updated maven-compiler-plugin from 3.11.0 to 3.14.1
-- All gRPC artifacts now consistently use version 1.76.0

 ### API Changes
-
-**Breaking Changes**
-
-- **Protocol Buffers 4.x Migration**: If you use the regular JAR (not the fat JAR), you must upgrade to protobuf-java 4.33.0 and regenerate any custom `.proto` files using protoc 4.x
-  - Download protoc 4.33.0 from: https://github.com/protocolbuffers/protobuf/releases/tag/v33.0
-  - Regenerate proto files: `protoc --java_out=src/main/java src/main/proto/record.proto`
-  - Protobuf 4.x is wire-compatible with 3.x, but the generated Java code may differ
-
-- **SLF4J 2.0 Migration**: If you use a logging implementation, you may need to update it:
-  - `slf4j-simple`: Use version 2.0.17 or later
-  - `logback-classic`: Use version 1.4.14 or later (for SLF4J 2.0 compatibility)
-  - `log4j-slf4j-impl`: Use version 2.20.0 or later
-
-**Note**: If you use the fat JAR (`jar-with-dependencies`), all dependencies are bundled and no action is required.
