feat: expose Arrow-native geospatial option (databricks.arrow.native_geospatial) #350

Open

jatorre wants to merge 5 commits into adbc-drivers:main
Conversation
Expose geospatialAsArrow support (SPARK-54232) as an opt-in ADBC connection option. When set to "true", geometry/geography columns arrive as Struct<srid: Int32, wkb: Binary> instead of EWKT strings.

This depends on databricks/databricks-sql-go#328, which adds the WithArrowNativeGeospatial() ConnOption to the underlying Go SQL driver.

Usage via adbc_connect (e.g. from DuckDB adbc_scanner):

```python
adbc_connect({
    'driver': 'libadbc_driver_databricks.dylib',
    'databricks.server_hostname': '...',
    'databricks.arrow.native_geospatial': 'true',
})
```
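For reference, a minimal sketch of the plumbing (not the PR's exact code): `databaseImpl` and `connOptions` are illustrative names, while `OptionArrowNativeGeospatial`, `useArrowNativeGeospatial`, and `WithArrowNativeGeospatial()` come from this PR and databricks/databricks-sql-go#328.

```go
package databricks

// Sketch only: databaseImpl and connOptions are illustrative names.

import dbsql "github.com/databricks/databricks-sql-go"

// OptionArrowNativeGeospatial is the ADBC connection option added in go/driver.go.
const OptionArrowNativeGeospatial = "databricks.arrow.native_geospatial"

type databaseImpl struct {
	useArrowNativeGeospatial bool // set via SetOption, plumbed through to connections
}

func (d *databaseImpl) SetOption(key, value string) error {
	if key == OptionArrowNativeGeospatial {
		d.useArrowNativeGeospatial = value == "true"
	}
	return nil // the real driver would also handle/reject other keys here
}

// connOptions maps the flag onto the databricks-sql-go ConnOption from
// databricks/databricks-sql-go#328.
func (d *databaseImpl) connOptions() []dbsql.ConnOption {
	var opts []dbsql.ConnOption
	if d.useArrowNativeGeospatial {
		opts = append(opts, dbsql.WithArrowNativeGeospatial())
	}
	return opts
}
```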
When databricks.arrow.native_geospatial is enabled, the driver now converts Struct<srid: Int32, wkb: Binary> columns to flat Binary columns with ARROW:extension:name=geoarrow.wkb metadata. This enables downstream consumers (e.g. DuckDB adbc_scanner) to automatically map geometry columns to native GEOMETRY types without any explicit ST_GeomFromWKB conversion.

Pipeline: Databricks -> Struct<srid, wkb> -> geoarrow.wkb -> native GEOMETRY

Benchmarks vs baseline (ST_AsBinary + ST_GeomFromWKB):
- 100k points: 2.05x faster (31k rows/sec vs 15k rows/sec)
- 10k polygons: 1.31x faster (4.5k rows/sec vs 3.4k rows/sec)
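A minimal sketch of the conversion, assuming arrow-go v18 (helper names are illustrative, not the PR's exact code); the key point is that the wkb child array is shared with the struct column, so no geometry bytes are copied:

```go
package geoarrow

import (
	"github.com/apache/arrow-go/v18/arrow"
	"github.com/apache/arrow-go/v18/arrow/array"
)

// geoArrowField rewrites a geometry struct field as a flat Binary field
// tagged with the geoarrow.wkb extension name (CRS metadata is added later).
func geoArrowField(f arrow.Field) arrow.Field {
	return arrow.Field{
		Name:     f.Name,
		Type:     arrow.BinaryTypes.Binary,
		Nullable: f.Nullable,
		Metadata: arrow.NewMetadata(
			[]string{"ARROW:extension:name"},
			[]string{"geoarrow.wkb"},
		),
	}
}

// transformRecord swaps each geometry struct column for its wkb child
// (child 1 in the Databricks Struct<srid, wkb> layout). The child array
// is shared, not copied: O(columns), not O(rows). Propagating struct-level
// nulls into the child is elided in this sketch.
func transformRecord(schema *arrow.Schema, rec arrow.Record, geoCols map[int]bool) arrow.Record {
	cols := make([]arrow.Array, rec.NumCols())
	for i := range cols {
		if geoCols[i] {
			cols[i] = rec.Column(i).(*array.Struct).Field(1)
		} else {
			cols[i] = rec.Column(i)
		}
	}
	return array.NewRecord(schema, cols, rec.NumRows())
}
```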
Defer schema transformation to the first Next() call so the SRID can be read from the first non-null row of each geometry column. The SRID is encoded as a PROJJSON CRS in ARROW:extension:metadata, e.g. EPSG:4326 or EPSG:3857. This ensures CRS information propagates correctly to downstream consumers (DuckDB, pandas, polars, GDAL).

Split transformSchemaForGeoArrow into:
- detectGeometryColumns: finds geometry struct column indices (called in the constructor)
- buildGeoArrowSchema: builds the geoarrow schema with CRS from the first batch (called lazily)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
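A sketch of the two halves under the same arrow-go assumption (names illustrative): read the SRID from the first non-null row, then render the PROJJSON value carried in ARROW:extension:metadata:

```go
package geoarrow

import (
	"fmt"

	"github.com/apache/arrow-go/v18/arrow/array"
)

// sridOf reads the SRID from the first non-null row of a geometry struct
// column (child 0 is the Int32 srid in the Databricks layout).
func sridOf(col *array.Struct) (int32, bool) {
	srids := col.Field(0).(*array.Int32)
	for i := 0; i < col.Len(); i++ {
		if col.IsValid(i) && srids.IsValid(i) {
			return srids.Value(i), true
		}
	}
	return 0, false // all-null column: leave CRS empty
}

// crsMetadata renders the PROJJSON CRS for a non-zero SRID; SRID 0 keeps
// the geoarrow default of empty CRS metadata.
func crsMetadata(srid int32) string {
	if srid == 0 {
		return ""
	}
	return fmt.Sprintf(
		`{"crs":{"type":"projjson","properties":{"name":"EPSG:%d"},"id":{"authority":"EPSG","code":%d}}}`,
		srid, srid)
}
```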
The schema must be available before the first Next() call, since consumers like adbc_scanner read it upfront to create table columns. Build the geoarrow.wkb schema eagerly with empty CRS metadata in the constructor, then enrich it with the actual SRID from the first record batch during the first Next() call.

Verified: DuckDB now correctly recognizes geometry columns as native GEOMETRY via the geoarrow.wkb extension metadata.

Benchmark results (Databricks → DuckDB):
- 100k points: 7x faster than the ST_AsBinary baseline
- 10k polygons: 3.6x faster than the ST_AsBinary baseline

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Dewey Dunnington <dewey@dunnington.ca>
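A sketch of the reader wrapper these two commits describe, reusing sridOf, crsMetadata, and transformRecord from the sketches above (Retain/Release/Err forwarding elided): Schema() answers correctly before the first Next(), and the first batch upgrades the CRS.

```go
package geoarrow

import (
	"github.com/apache/arrow-go/v18/arrow"
	"github.com/apache/arrow-go/v18/arrow/array"
)

type geoReader struct {
	inner    array.RecordReader
	schema   *arrow.Schema // built eagerly with empty CRS metadata
	geoCols  map[int]bool
	enriched bool
	cur      arrow.Record
}

// Schema is safe to call before the first Next(); consumers like
// adbc_scanner read it upfront to create table columns.
func (r *geoReader) Schema() *arrow.Schema { return r.schema }

func (r *geoReader) Next() bool {
	if !r.inner.Next() {
		return false
	}
	rec := r.inner.Record()
	if !r.enriched {
		// First batch: enrich the eager schema with the actual SRID.
		fields := r.schema.Fields()
		for i := range fields {
			if !r.geoCols[i] {
				continue
			}
			if srid, ok := sridOf(rec.Column(i).(*array.Struct)); ok && srid != 0 {
				fields[i].Metadata = arrow.NewMetadata(
					[]string{"ARROW:extension:name", "ARROW:extension:metadata"},
					[]string{"geoarrow.wkb", crsMetadata(srid)},
				)
			}
		}
		r.schema = arrow.NewSchema(fields, nil)
		r.enriched = true
	}
	r.cur = transformRecord(r.schema, rec, r.geoCols)
	return true
}

func (r *geoReader) Record() arrow.Record { return r.cur }
```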
jatorre added a commit to jatorre/snowflake that referenced this pull request on Mar 18, 2026:
Detect GEOGRAPHY/GEOMETRY columns during query execution and tag them with geoarrow.wkb Arrow extension metadata, enabling DuckDB and other Arrow consumers to receive native geometry types with CRS information.

How it works:
1. Set GEOGRAPHY/GEOMETRY_OUTPUT_FORMAT=WKB at connection time so geo columns arrive as binary WKB instead of GeoJSON strings
2. Before executing a query, extract the table name and run DESCRIBE TABLE to identify GEOGRAPHY/GEOMETRY columns (catalog metadata is unaffected by the WKB output format setting)
3. Tag identified columns with geoarrow.wkb extension metadata in the Arrow schema: GEOGRAPHY gets CRS "EPSG:4326", GEOMETRY gets no CRS
4. Data flows as binary WKB with zero conversion overhead

Note: Snowflake's REST API reports geo columns as "binary" in rowtype metadata when the WKB output format is set, losing the original type info. This is why the separate DESCRIBE TABLE query is needed. We've reported this to Snowflake.

Limitations (documented as TODOs):
- GEOMETRY SRID: requires data inspection to determine, the same cross-driver issue as adbc-drivers/redshift#2 and adbc-drivers/databricks#350
- Arbitrary queries: only table scans (SELECT ... FROM table) get geoarrow metadata. Complex queries with joins/subqueries don't trigger geo detection. The data is still correct WKB, just without the metadata.

Tested end-to-end: DuckDB reads Snowflake GEOGRAPHY as native GEOMETRY with CRS EPSG:4326, and GeoParquet export preserves the type.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
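A hedged sketch of the detection step, assuming plain database/sql access to Snowflake; the table-name regex and helpers are illustrative, and DESCRIBE TABLE output is scanned generically since its first two columns are the column name and type:

```go
package geodetect

import (
	"context"
	"database/sql"
	"regexp"
	"strings"
)

// fromTable naively matches a plain table scan; joins and subqueries fail
// the match and skip geo detection (the data is still correct WKB).
var fromTable = regexp.MustCompile(`(?i)\bfrom\s+([\w."]+)\s*$`)

// geoColumns runs DESCRIBE TABLE and returns column name -> GEOGRAPHY/GEOMETRY.
// Catalog metadata keeps the original types even when
// GEOGRAPHY_OUTPUT_FORMAT=WKB makes result metadata report "binary".
func geoColumns(ctx context.Context, db *sql.DB, query string) (map[string]string, error) {
	m := fromTable.FindStringSubmatch(query)
	if m == nil {
		return nil, nil // not a simple table scan
	}
	rows, err := db.QueryContext(ctx, "DESCRIBE TABLE "+m[1])
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		return nil, err
	}
	out := map[string]string{}
	vals := make([]any, len(cols))
	ptrs := make([]any, len(cols))
	for i := range vals {
		ptrs[i] = &vals[i]
	}
	for rows.Next() {
		if err := rows.Scan(ptrs...); err != nil {
			return nil, err
		}
		// DESCRIBE TABLE lists the column name first, the type second.
		name, typ := asString(vals[0]), asString(vals[1])
		if strings.HasPrefix(typ, "GEOGRAPHY") || strings.HasPrefix(typ, "GEOMETRY") {
			out[name] = typ
		}
	}
	return out, rows.Err()
}

func asString(v any) string {
	switch s := v.(type) {
	case string:
		return s
	case []byte:
		return string(s)
	}
	return ""
}
```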
Summary
Adds Arrow-native geospatial support via the `databricks.arrow.native_geospatial` ADBC connection option, implementing the full pipeline from Databricks geometry to standard GeoArrow `geoarrow.wkb`.

Why geoarrow.wkb

Databricks uses a proprietary `Struct<srid: Int32, wkb: Binary>` format for Arrow geometry serialization. While this works, it's not recognized by downstream tools. By converting to `geoarrow.wkb` (the standard Arrow extension type for geometry), we get maximum compatibility: DuckDB, pandas/geopandas, polars, GDAL, and any other GeoArrow-aware consumer can read geometry natively without custom parsing. The conversion is essentially free (zero-copy pointer extraction, no data copying).

Pipeline

Databricks -> Struct<srid, wkb> -> geoarrow.wkb -> native GEOMETRY
End result: `SELECT * FROM adbc_scan(...)` returns DuckDB GEOMETRY directly. No `ST_AsBinary()`, no `ST_GeomFromWKB()`. Zero geometry conversion in user code.

Dependency
Requires databricks/databricks-sql-go#328 for the `WithArrowNativeGeospatial()` ConnOption.

Changes
- go/driver.go: add the `OptionArrowNativeGeospatial` constant
- go/database.go: `useArrowNativeGeospatial` field, GetOption/SetOption, connection passthrough
- go/connection.go: pass the flag to statements
- go/statement.go: pass the flag to the IPC reader adapter
- go/ipc_reader_adapter.go: core geoarrow conversion:
  - `isGeometryStruct()`: detect the Databricks `Struct<srid: Int32, wkb: Binary>` layout
  - `detectGeometryColumns()`: find geometry struct column indices
  - `buildGeoArrowSchemaWithoutCRS()`: eagerly rewrite the schema, Struct -> Binary with `geoarrow.wkb` (needed before the first `Next()` since consumers read the schema upfront)
  - `buildGeoArrowSchema()`: rebuild the schema with the SRID-based CRS from the first record batch, encoded as PROJJSON in `ARROW:extension:metadata`
  - `transformRecordForGeoArrow()`: extract the `wkb` child array from each struct per batch (zero-copy, O(columns) not O(rows))

Usage
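A minimal end-to-end sketch, assuming the arrow-adbc Go driver manager (`drivermgr`); the same option keys work from any ADBC entry point, including the `adbc_connect` call shown above:

```go
package main

import (
	"context"
	"fmt"

	"github.com/apache/arrow-adbc/go/adbc/drivermgr"
)

func main() {
	ctx := context.Background()

	// Load the Databricks ADBC driver and opt in to Arrow-native geospatial;
	// the option is an ordinary string connection option.
	var drv drivermgr.Driver
	db, err := drv.NewDatabase(map[string]string{
		"driver":                             "libadbc_driver_databricks.dylib",
		"databricks.server_hostname":         "...",
		"databricks.arrow.native_geospatial": "true",
	})
	if err != nil {
		panic(err)
	}

	cnxn, err := db.Open(ctx)
	if err != nil {
		panic(err)
	}
	defer cnxn.Close()

	fmt.Println("geometry columns will arrive as geoarrow.wkb")
}
```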
Benchmarks (vs baseline ST_AsBinary + ST_GeomFromWKB)

| Dataset | geoarrow.wkb path | Baseline | Speedup |
| --- | --- | --- | --- |
| 100k points | 31k rows/sec | 15k rows/sec | 2.05x |
| 10k polygons | 4.5k rows/sec | 3.4k rows/sec | 1.31x |
SRID / CRS Handling
Databricks uses a per-row SRID in `Struct<srid, wkb>`. The GeoArrow spec defines CRS per column, not per row. The driver reads the SRID from the first non-null row of each geometry column in the first record batch and encodes it as a PROJJSON CRS in `ARROW:extension:metadata`:

```json
{"crs":{"type":"projjson","properties":{"name":"EPSG:4326"},"id":{"authority":"EPSG","code":4326}}}
```

In practice, Databricks geometry/geography uses a consistent SRID within a column (typically 0 or 4326), so this per-column CRS approach is safe and follows the GeoArrow standard. Non-zero SRIDs (e.g. 4326, 3857) are preserved; SRID 0 produces empty CRS metadata (the geoarrow default).