[VL] Support config velox parquet writer option storeDecimalAsInteger, compatible with spark conf spark.sql.parquet.writeLegacyFormat #11839
Open
lifulong wants to merge 1 commit into apache:main from
Conversation
…compatible with spark conf spark.sql.parquet.writeLegacyFormat
Run Gluten Clickhouse CI on x86
What changes are proposed in this pull request?
Support the spark.sql.parquet.writeLegacyFormat config when using native write, for compatibility with vanilla Spark.
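For reference, in vanilla Spark this behavior is controlled by the existing session config, which this change makes the native (Velox) writer honor as well. It can be set from Spark SQL, for example:

```sql
SET spark.sql.parquet.writeLegacyFormat=true;
```

When enabled, Spark's Parquet writer stores all decimal columns as FIXED_LEN_BYTE_ARRAY instead of choosing INT32/INT64 for small precisions.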
Velox doesn’t expose any config to control how Parquet decimal columns are actually written.
I have added this parameter via PR facebookincubator/velox#16941.
This feature is really useful when Spark or Flink reads Hive tables using ParquetHiveSerDe defined in Hive CREATE TABLE statements, especially with older Hive versions like 2.1.
Velox's current write logic chooses between INT32/INT64 and FIXED_LEN_BYTE_ARRAY based on the decimal's precision.
Writing decimals as INT32/INT64 causes Spark and Flink to throw exceptions when reading those Hive tables.
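The precision-based choice described above can be sketched as follows. This is a minimal illustration following the Parquet spec's DECIMAL encoding rules, not Velox's actual source; the function name is hypothetical:

```python
def decimal_physical_type(precision: int, write_legacy_format: bool) -> str:
    """Return the Parquet physical type used to store a decimal column.

    Sketch of the selection rule: in legacy (Hive-compatible) mode every
    decimal is stored as FIXED_LEN_BYTE_ARRAY, which readers such as
    ParquetHiveSerDe in older Hive versions (e.g. 2.1) expect. Otherwise
    the writer picks the smallest integer type that fits the precision.
    """
    if write_legacy_format:
        return "FIXED_LEN_BYTE_ARRAY"
    if precision <= 9:
        return "INT32"   # values with up to 9 digits fit in 4 bytes
    if precision <= 18:
        return "INT64"   # values with up to 18 digits fit in 8 bytes
    return "FIXED_LEN_BYTE_ARRAY"

print(decimal_physical_type(7, False))  # INT32
print(decimal_physical_type(7, True))   # FIXED_LEN_BYTE_ARRAY
```

With storeDecimalAsInteger disabled (the legacy path), low-precision decimals that would otherwise become INT32/INT64 are written as FIXED_LEN_BYTE_ARRAY, matching what the older Hive readers expect.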
Depends on facebookincubator/velox#16941
How was this patch tested?
Tested in our production environment.
Was this patch authored or co-authored using generative AI tooling?
Co-authored with Cursor.