
Spark engine using HMS does not support INCREMENTAL_BY_TIME_RANGE #5388

@alexlebedevdev

Description


Version of sqlmesh: v0.217.0 and v0.218.0

I'm trying to run this model:

```sql
MODEL (
  name sqlmesh.test_supply_trends_hourly_agg_spark_5,
  kind INCREMENTAL_BY_TIME_RANGE (
    time_column data_timestamp
  ),
  dialect spark,
  gateway iceberg_hive_prod,
  table_format iceberg,
  partitioned_by (data_timestamp),
  physical_properties (
    'write.format.default' = 'parquet',
    'write.parquet.compression-codec' = 'snappy'
  )
);

SELECT
    SUM(total_pvs) AS total_pvs,
    data_timestamp
FROM iceberg.schema.table_name
WHERE
    data_timestamp BETWEEN @start_ts AND @end_ts
GROUP BY
    data_timestamp
```

With this config:

```yaml
project: analytics

default_gateway: iceberg_hive_prod

gateway_managed_virtual_layer: true

gateways:
  iceberg_hive_prod:
    connection:
      type: spark
      config:
        spark.master: local[2]
        spark.app.name: sqlmesh-iceberg-prod
        spark.jars.packages: org.apache.iceberg:iceberg-spark-runtime-4.0_2.13:1.10.0
        spark.sql.catalog.iceberg: org.apache.iceberg.spark.SparkCatalog
        spark.sql.catalog.iceberg.type: hive
        spark.sql.catalog.iceberg.uri: thrift://hive-metastore:port
        spark.sql.catalog.iceberg.warehouse: hdfs://some-warehouse
        spark.sql.defaultCatalog: iceberg
        # HDFS configuration
        spark.hadoop.fs.defaultFS: hdfs://some-cluster
        spark.hadoop.fs.hdfs.impl: org.apache.hadoop.hdfs.DistributedFileSystem
        # Iceberg extensions
        spark.sql.extensions: org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
        spark.sql.execution.arrow.pyspark.enabled: false
        iceberg.catalog.type: hive_metastore
        # Disable problematic Iceberg features that cause conflicts
        spark.sql.catalog.iceberg.properties.commit.retry.num-retries: 3
        spark.sql.catalog.iceberg.properties.commit.retry.min-wait-ms: 100
    # Use MySQL for state storage in this gateway
    state_connection:
      type: mysql
      host: some-mysql
      port: port
      user: login
      password: password
      database: sqlmesh_state

model_defaults:
  dialect: spark
  cron: '@hourly'

physical_schema_mapping:
  dev: sqlmesh__dev
  prod: sqlmesh
  staging: sqlmesh__stg
```
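For what it's worth, the gateway's `connection.config` entries map directly onto Spark session configs, so the session SQLMesh builds should be roughly equivalent to this (a sketch using the same redacted placeholder values as the YAML above):

```python
from pyspark.sql import SparkSession

# Rough PySpark equivalent of the iceberg_hive_prod gateway's connection config;
# hosts, ports, and paths are the same placeholders as in the YAML above.
spark = (
    SparkSession.builder
    .master("local[2]")
    .appName("sqlmesh-iceberg-prod")
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-4.0_2.13:1.10.0")
    .config("spark.sql.catalog.iceberg", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.iceberg.type", "hive")
    .config("spark.sql.catalog.iceberg.uri", "thrift://hive-metastore:port")
    .config("spark.sql.catalog.iceberg.warehouse", "hdfs://some-warehouse")
    .config("spark.sql.defaultCatalog", "iceberg")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .getOrCreate()
)
```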

When I run it as described above, the table is created as expected with all the relevant fields and partitions, but although the data is queried, it is never inserted anywhere.
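For reference, the failing interval in the scheduler log decodes to a single hourly batch; a quick check:

```python
from datetime import datetime, timezone

# The interval from the EvaluateNode log line below, in epoch milliseconds.
start_ms, end_ms = 1758024000000, 1758027600000
print(datetime.fromtimestamp(start_ms / 1000, tz=timezone.utc))  # 2025-09-16 12:00:00+00:00
print(datetime.fromtimestamp(end_ms / 1000, tz=timezone.utc))    # 2025-09-16 13:00:00+00:00
```

The exception: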

```
2025-09-16 13:29:31,726 - MainThread - sqlmesh.core.scheduler - INFO - Execution failed for node EvaluateNode(snapshot_name='"iceberg"."sqlmesh"."test_supply_trends_hourly_agg_spark_5"', interval=(1758024000000, 1758027600000), batch_index=0) (scheduler.py:597)
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/utils/concurrency.py", line 69, in _process_node
    self.fn(node)
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/scheduler.py", line 533, in run_node
    audit_results = self.evaluate(
                    ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/scheduler.py", line 242, in evaluate
    audit_results = self._audit_snapshot(
                    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/scheduler.py", line 861, in _audit_snapshot
    audit_results = self.snapshot_evaluator.audit(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/snapshot/evaluator.py", line 628, in audit
    self.wap_publish_snapshot(snapshot, wap_id, deployability_index)
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/snapshot/evaluator.py", line 890, in wap_publish_snapshot
    adapter.wap_publish(table_name, wap_id)
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/engine_adapter/shared.py", line 312, in internal_wrapper
    return func(*list_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/engine_adapter/spark.py", line 526, in wap_publish
    self.execute(
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/engine_adapter/base.py", line 2443, in execute
    self._execute(sql, track_rows_processed, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/core/engine_adapter/base.py", line 2475, in _execute
    self.cursor.execute(sql, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/sqlmesh/engines/spark/db_api/spark_session.py", line 27, in execute
    self._last_df = self._spark.sql(query)
                    ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pyspark/sql/session.py", line 1810, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery, litArgs), self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/py4j/java_gateway.py", line 1362, in __call__
    return_value = get_return_value(
                   ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/pyspark/errors/exceptions/captured.py", line 282, in deco
    return f(*a, **kw)
           ^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/py4j/protocol.py", line 327, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o79.sql.
: org.apache.iceberg.exceptions.CherrypickAncestorCommitException: Cannot cherrypick snapshot 3130901677534106199: already an ancestor
	at org.apache.iceberg.CherryPickOperation.validateNonAncestor(CherryPickOperation.java:204)
	at org.apache.iceberg.CherryPickOperation.validate(CherryPickOperation.java:162)
	at org.apache.iceberg.SnapshotProducer.apply(SnapshotProducer.java:260)
	at org.apache.iceberg.CherryPickOperation.apply(CherryPickOperation.java:198)
	at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:440)
	at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
	at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
	at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:438)
	at org.apache.iceberg.SnapshotManager.cherrypick(SnapshotManager.java:48)
	at org.apache.iceberg.spark.procedures.CherrypickSnapshotProcedure.lambda$call$0(CherrypickSnapshotProcedure.java:94)
	at org.apache.iceberg.spark.procedures.BaseProcedure.execute(BaseProcedure.java:132)
	at org.apache.iceberg.spark.procedures.BaseProcedure.modifyIcebergTable(BaseProcedure.java:113)
	at org.apache.iceberg.spark.procedures.CherrypickSnapshotProcedure.call(CherrypickSnapshotProcedure.java:91)
	at org.apache.spark.sql.catalyst.analysis.InvokeProcedures.org$apache$spark$sql$catalyst$analysis$InvokeProcedures$$invoke(InvokeProcedures.scala:48)
	at org.apache.spark.sql.catalyst.analysis.InvokeProcedures$$anonfun$apply$1.applyOrElse(InvokeProcedures.scala:40)
	at org.apache.spark.sql.catalyst.analysis.InvokeProcedures$$anonfun$apply$1.applyOrElse(InvokeProcedures.scala:36)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:200)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:86)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:200)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:416)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:198)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:194)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:37)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning(AnalysisHelper.scala:100)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning$(AnalysisHelper.scala:97)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsWithPruning(LogicalPlan.scala:37)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:77)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:76)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:37)
	at org.apache.spark.sql.catalyst.analysis.InvokeProcedures.apply(InvokeProcedures.scala:36)
	at org.apache.spark.sql.catalyst.analysis.InvokeProcedures.apply(InvokeProcedures.scala:34)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:242)
	at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:183)
	at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:179)
	at scala.collection.immutable.List.foldLeft(List.scala:79)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:239)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:231)
	at scala.collection.immutable.List.foreach(List.scala:334)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:231)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:340)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:336)
	at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:234)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:336)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:299)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:201)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:89)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:201)
	at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.resolveInFixedPoint(HybridAnalyzer.scala:190)
	at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.$anonfun$apply$1(HybridAnalyzer.scala:76)
	at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.withTrackedAnalyzerBridgeState(HybridAnalyzer.scala:111)
	at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.apply(HybridAnalyzer.scala:71)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:330)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:423)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:330)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$2(QueryExecution.scala:110)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:148)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:278)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:654)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:278)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:277)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:110)
	at scala.util.Try$.apply(Try.scala:217)
	at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1378)
	at org.apache.spark.util.Utils$.getTryWithCallerStacktrace(Utils.scala:1439)
	at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:121)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:80)
	at org.apache.spark.sql.classic.Dataset$.$anonfun$ofRows$5(Dataset.scala:139)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
	at org.apache.spark.sql.classic.Dataset$.ofRows(Dataset.scala:136)
	at org.apache.spark.sql.classic.SparkSession.$anonfun$sql$1(SparkSession.scala:462)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
	at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:449)
	at org.apache.spark.sql.classic.SparkSession.sql(SparkSession.scala:467)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:184)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:108)
	at java.base/java.lang.Thread.run(Thread.java:840)
	Suppressed: org.apache.spark.util.Utils$OriginalTryStackTraceException: Full stacktrace of original doTryWithCallerStacktrace caller
		at org.apache.iceberg.CherryPickOperation.validateNonAncestor(CherryPickOperation.java:204)
		at org.apache.iceberg.CherryPickOperation.validate(CherryPickOperation.java:162)
		at org.apache.iceberg.SnapshotProducer.apply(SnapshotProducer.java:260)
		at org.apache.iceberg.CherryPickOperation.apply(CherryPickOperation.java:198)
		at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:440)
		at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
		at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
		at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
		at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
		at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:438)
		at org.apache.iceberg.SnapshotManager.cherrypick(SnapshotManager.java:48)
		at org.apache.iceberg.spark.procedures.CherrypickSnapshotProcedure.lambda$call$0(CherrypickSnapshotProcedure.java:94)
		at org.apache.iceberg.spark.procedures.BaseProcedure.execute(BaseProcedure.java:132)
		at org.apache.iceberg.spark.procedures.BaseProcedure.modifyIcebergTable(BaseProcedure.java:113)
		at org.apache.iceberg.spark.procedures.CherrypickSnapshotProcedure.call(CherrypickSnapshotProcedure.java:91)
		at org.apache.spark.sql.catalyst.analysis.InvokeProcedures.org$apache$spark$sql$catalyst$analysis$InvokeProcedures$$invoke(InvokeProcedures.scala:48)
		at org.apache.spark.sql.catalyst.analysis.InvokeProcedures$$anonfun$apply$1.applyOrElse(InvokeProcedures.scala:40)
		at org.apache.spark.sql.catalyst.analysis.InvokeProcedures$$anonfun$apply$1.applyOrElse(InvokeProcedures.scala:36)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:200)
		at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:86)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:200)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:416)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:198)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:194)
		at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:37)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning(AnalysisHelper.scala:100)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning$(AnalysisHelper.scala:97)
		at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsWithPruning(LogicalPlan.scala:37)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:77)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:76)
		at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:37)
		at org.apache.spark.sql.catalyst.analysis.InvokeProcedures.apply(InvokeProcedures.scala:36)
		at org.apache.spark.sql.catalyst.analysis.InvokeProcedures.apply(InvokeProcedures.scala:34)
		at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:242)
		at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:183)
		at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:179)
		at scala.collection.immutable.List.foldLeft(List.scala:79)
		at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:239)
		at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:231)
		at scala.collection.immutable.List.foreach(List.scala:334)
		at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:231)
		at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:340)
		at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:336)
		at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:234)
		at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:336)
		at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:299)
		at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:201)
		at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:89)
		at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:201)
		at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.resolveInFixedPoint(HybridAnalyzer.scala:190)
		at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.$anonfun$apply$1(HybridAnalyzer.scala:76)
		at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.withTrackedAnalyzerBridgeState(HybridAnalyzer.scala:111)
		at org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.apply(HybridAnalyzer.scala:71)
		at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:330)
		at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:423)
		at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:330)
		at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$2(QueryExecution.scala:110)
		at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:148)
		at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:278)
		at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:654)
		at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:278)
		at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
		at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:277)
		at org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:110)
		at scala.util.Try$.apply(Try.scala:217)
		at org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1378)
		at org.apache.spark.util.LazyTry.tryT$lzycompute(LazyTry.scala:46)
		at org.apache.spark.util.LazyTry.tryT(LazyTry.scala:46)
		... 22 more
```
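Judging by the trace, the failure is in SQLMesh's write-audit-publish (WAP) publish step: `wap_publish` issues Iceberg's `cherrypick_snapshot` procedure after the audited write. A minimal sketch of that flow, with the branch name and snapshot id taken from the traces (the exact SQL SQLMesh emits and the physical table name may differ):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes the Iceberg catalog config above

# 1. Stage the write on a WAP branch instead of main. With spark.wap.branch set,
#    Iceberg commits new snapshots only to that branch.
spark.conf.set("spark.wap.branch", "wap_04890d09")  # branch name from the second trace

# ... the model query runs and writes the interval to the branch here ...

# 2. Publish: cherry-pick the branch's snapshot onto main. This is the call that
#    fails above: Iceberg refuses because snapshot 3130901677534106199 is already
#    an ancestor of main, i.e. it has effectively been published already.
spark.sql(
    "CALL iceberg.system.cherrypick_snapshot"
    "('sqlmesh.test_supply_trends_hourly_agg_spark_5', 3130901677534106199)"
)
```

If that reading is right, either the publish ran twice for the same snapshot, or the staged write landed on main directly, leaving nothing for the cherry-pick to do.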

When I run the same model with `virtual_environment_mode: dev_only` added to the config, the table is created in the prod schema, again with no data, and a different error is raised:

```
MainThread - sqlmesh.core.scheduler - INFO - Execution failed for node EvaluateNode(snapshot_name='"iceberg"."sqlmesh"."test_supply_trends_hourly_agg_spark_5"', interval=(1758024000000, 1758027600000), batch_index=0) (scheduler.py:597)
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/sqlglot/parser.py", line 1655, in parse_into
    return self._parse(parser, raw_tokens, sql)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlglot/parser.py", line 1697, in _parse
    self.raise_error("Invalid expression / Unexpected token")
  File "/usr/local/lib/python3.11/site-packages/sqlglot/parser.py", line 1738, in raise_error
    raise error
sqlglot.errors.ParseError: Invalid expression / Unexpected token. Line 1, Col: 9.
  `iceberg`.`sqlmesh`.`test_supply_trends_hourly_agg_spark_5`.branch_wap_04890d09

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/sqlglot/expressions.py", line 8546, in to_table
    table = maybe_parse(sql_path, into=Table, dialect=dialect)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlglot/expressions.py", line 7687, in maybe_parse
    return sqlglot.parse_one(sql, read=dialect, into=into, **opts)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlglot/__init__.py", line 137, in parse_one
    result = dialect.parse_into(into, sql, **opts)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlglot/dialects/dialect.py", line 1088, in parse_into
    return self.parser(**opts).parse_into(expression_type, self.tokenize(sql), sql)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlglot/parser.py", line 1660, in parse_into
    raise ParseError(
sqlglot.errors.ParseError: Failed to parse '`iceberg`.`sqlmesh`.`test_supply_trends_hourly_agg_spark_5`.branch_wap_04890d09' into <class 'sqlglot.expressions.Table'>
```
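This second failure is easier to pin down: sqlglot cannot parse the four-part, branch-qualified Iceberg identifier into a `Table` expression. A minimal reproduction, mirroring the `parse_one` call in the trace (assuming the spark dialect and whichever sqlglot version ships with these sqlmesh releases):

```python
import sqlglot
from sqlglot.errors import ParseError
from sqlglot.expressions import Table

# The branch-qualified name SQLMesh builds for the WAP branch.
name = "`iceberg`.`sqlmesh`.`test_supply_trends_hourly_agg_spark_5`.branch_wap_04890d09"

try:
    sqlglot.parse_one(name, read="spark", into=Table)
except ParseError as err:
    print(err)  # Failed to parse '...' into <class 'sqlglot.expressions.Table'>
```

So on this code path, the WAP branch name is rejected by the parser layer before any SQL ever reaches Spark.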

A model of kind FULL with the same setup does insert data into the Iceberg table, so the problem appears to be specific to this incremental kind.
