Commit 4d57ea7

eakmanrq and claude committed
Limit Spark driver memory to 512m in tests to prevent OOM
The Spark JVM defaults to grabbing ~1GB+ of heap memory. With 2 xdist workers each spawning Spark, this exceeds the 7GB available on GitHub Actions runners. Capping spark.driver.memory at 512m prevents the "Cannot allocate memory" failures in test_pyspark_python_model and the spark db_api tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: eakmanrq <6326532+eakmanrq@users.noreply.github.com>
1 parent 3113988 commit 4d57ea7

File tree: 2 files changed (+2 lines, -0 lines)


tests/core/test_test.py (1 addition, 0 deletions)

```diff
@@ -1722,6 +1722,7 @@ def test_pyspark_python_model(tmp_path: Path) -> None:
     spark_connection_config = SparkConnectionConfig(
         config={
             "spark.master": "local",
+            "spark.driver.memory": "512m",
             "spark.sql.warehouse.dir": f"{tmp_path}/data_dir",
             "spark.driver.extraJavaOptions": f"-Dderby.system.home={tmp_path}/derby_dir",
         },
```

tests/engines/spark/conftest.py (1 addition, 0 deletions)

```diff
@@ -9,6 +9,7 @@ def spark_session() -> t.Generator[SparkSession, None, None]:
     session = (
         SparkSession.builder.master("local")
         .appName("SQLMesh Test")
+        .config("spark.driver.memory", "512m")
         .enableHiveSupport()
         .getOrCreate()
     )
```
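As a minimal sketch of the resulting test configuration, the capped config dict from the first diff can be built like this. The helper name `make_spark_test_config` is hypothetical (not part of SQLMesh); the keys and the `512m` value come from the diff above.

```python
# Sketch: Spark driver config as used in the SQLMesh test, with the
# driver heap capped at 512m so two pytest-xdist workers each spawning
# a Spark JVM stay under the ~7GB available on a GitHub Actions runner.
# `make_spark_test_config` is a hypothetical helper for illustration.
def make_spark_test_config(tmp_path: str) -> dict:
    return {
        "spark.master": "local",
        "spark.driver.memory": "512m",  # cap added by this commit
        "spark.sql.warehouse.dir": f"{tmp_path}/data_dir",
        "spark.driver.extraJavaOptions": f"-Dderby.system.home={tmp_path}/derby_dir",
    }

config = make_spark_test_config("/tmp/sqlmesh-test")
print(config["spark.driver.memory"])  # → 512m
```

Passing the same `"spark.driver.memory"` key via `SparkSession.builder.config(...)`, as the conftest diff does, has the equivalent effect for a locally spawned driver.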
