On GB10 Spark, llama cpp generation fails with "No next state found" #1812

@antheas

Description

Describe the issue as clearly as possible:

On a DGX Spark, generation fails after a few samples when using llama.cpp with outlines and a simple structured parser. The same code does not fail on x86 with an A40.

A sample script is provided below to reproduce the issue directly.

Steps/code to reproduce the bug:

from typing import Literal

from llama_cpp import Llama
from outlines.models import LlamaCpp
from pydantic.main import BaseModel

from outlines import Generator

class EvalOutputType(BaseModel):
    score: Literal[1, 2, 3, 4, 5]

llm = LlamaCpp(
    Llama.from_pretrained(
        repo_id="Qwen/Qwen3-8B-GGUF",
        filename="Qwen3-8B-Q4_K_M.gguf",
        n_ctx=40960,
        n_gpu_layers=-1,
        batch_size=1,
        verbose=False,
    )
)

generator = Generator(llm, output_type=EvalOutputType)

prompt_works = "You are a tax accountant.\n\nYou are given the following 3 real households as a reference:\n{'year': '2016', 'income_rank': 0.2562, 'income_rank_1': 0.0516, 'income_rank_2': 0.0553, 'income_rank_3': 0.0526, 'income_rank_4': 0.0609, 'income_rank_5': 0.054, 'age_ref': 61, 'expenditures': [{'month': '5', 'product_code': '010210', 'cost': 7.98, 'is_training': '1'}, {'month': '5', 'product_code': '020110', 'cost': 2.0816, 'is_training': '1'}, {'month': '5', 'product_code': '020210', 'cost': 1.8983, 'is_training': '1'}, {'month': '5', 'product_code': '020510', 'cost': 0.8899, 'is_training': '1'}, {'month': '5', 'product_code': '040110', 'cost': 4.4899, 'is_training': '1'}, {'month': '5', 'product_code': '040510', 'cost': 10.2799, 'is_training': '1'}, {'month': '5', 'product_code': '040510', 'cost': 4.28, 'is_training': '1'}, {'month': '5', 'product_code': '090110', 'cost': 3.2599, 'is_training': '1'}, {'month': '5', 'product_code': '090110', 'cost': 2.98, 'is_training': '1'}, {'month': '5', 'product_code': '100410', 'cost': 3.69, 'is_training': '1'}, {'month': '5', 'product_code': '100410', 'cost': 3.69, 'is_training': '1'}, {'month': '5', 'product_code': '110210', 'cost': 1.3799, 'is_training': '1'}, {'month': '5', 'product_code': '110410', 'cost': 4.98, 'is_training': '1'}, {'month': '5', 'product_code': '150110', 'cost': 2.49, 'is_training': '1'}, {'month': '5', 'product_code': '190321', 'cost': 23, 'is_training': '1'}, {'month': '5', 'product_code': '200111', 'cost': 7.57, 'is_training': '1'}, {'month': '5', 'product_code': '320902', 'cost': 29.2, 'is_training': '1'}, {'month': '5', 'product_code': '470111', 'cost': 21.92, 'is_training': '1'}, {'month': '5', 'product_code': '610310', 'cost': 22.9699, 'is_training': '1'}, {'month': '5', 'product_code': '630110', 'cost': 5.61, 'is_training': '1'}, {'month': '5', 'product_code': '630110', 'cost': 5.61, 'is_training': '1'}], 'members': []}\n{'year': '2016', 'income_rank': 0.2466, 'income_rank_1': 0.0393, 
'income_rank_2': 0.0412, 'income_rank_3': 0.0414, 'income_rank_4': 0.0491, 'income_rank_5': 0.042, 'age_ref': 79, 'expenditures': [{'month': '5', 'product_code': '190112', 'cost': 9.1499, 'is_training': '1'}, {'month': '5', 'product_code': '560210', 'cost': 133.5899, 'is_training': '1'}], 'members': []}\n{'year': '2016', 'income_rank': 0.2575, 'income_rank_1': 0.0537, 'income_rank_2': 0.0572, 'income_rank_3': 0.0555, 'income_rank_4': 0.0625, 'income_rank_5': 0.0559, 'age_ref': 35, 'expenditures': [{'month': '5', 'product_code': '020210', 'cost': 1.5, 'is_training': '1'}, {'month': '5', 'product_code': '020210', 'cost': 2.99, 'is_training': '1'}, {'month': '5', 'product_code': '020210', 'cost': 2.99, 'is_training': '1'}, {'month': '5', 'product_code': '020510', 'cost': 3, 'is_training': '1'}, {'month': '5', 'product_code': '060210', 'cost': 6.9899, 'is_training': '1'}, {'month': '5', 'product_code': '060210', 'cost': 6.9899, 'is_training': '1'}, {'month': '5', 'product_code': '080110', 'cost': 2.99, 'is_training': '1'}, {'month': '5', 'product_code': '090110', 'cost': 2.2899, 'is_training': '1'}, {'month': '5', 'product_code': '100210', 'cost': 3.99, 'is_training': '1'}, {'month': '5', 'product_code': '100210', 'cost': 3.99, 'is_training': '1'}, {'month': '5', 'product_code': '110210', 'cost': 1.6599, 'is_training': '1'}, {'month': '5', 'product_code': '110210', 'cost': 2.99, 'is_training': '1'}, {'month': '5', 'product_code': '110410', 'cost': 1.4283, 'is_training': '1'}, {'month': '5', 'product_code': '110410', 'cost': 2.99, 'is_training': '1'}, {'month': '5', 'product_code': '110410', 'cost': 2.99, 'is_training': '1'}, {'month': '5', 'product_code': '110410', 'cost': 1.4283, 'is_training': '1'}, {'month': '5', 'product_code': '110510', 'cost': 2.1516, 'is_training': '1'}, {'month': '5', 'product_code': '110510', 'cost': 2.1516, 'is_training': '1'}, {'month': '5', 'product_code': '120410', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '120410', 
'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '120410', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '120410', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '130212', 'cost': 0.99, 'is_training': '1'}, {'month': '5', 'product_code': '130212', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '130310', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '150110', 'cost': 0.99, 'is_training': '1'}, {'month': '5', 'product_code': '170532', 'cost': 1.8899, 'is_training': '1'}, {'month': '5', 'product_code': '170532', 'cost': 1.83, 'is_training': '1'}, {'month': '5', 'product_code': '180611', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '180611', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '180611', 'cost': 1.99, 'is_training': '1'}, {'month': '5', 'product_code': '190211', 'cost': 15.1499, 'is_training': '1'}, {'month': '5', 'product_code': '190311', 'cost': 4.19, 'is_training': '1'}, {'month': '5', 'product_code': '330110', 'cost': 4.86, 'is_training': '1'}, {'month': '5', 'product_code': '380315', 'cost': 20.78, 'is_training': '1'}, {'month': '5', 'product_code': '380315', 'cost': 10.39, 'is_training': '1'}, {'month': '5', 'product_code': '380315', 'cost': 10.39, 'is_training': '1'}, {'month': '5', 'product_code': '530311', 'cost': 33.4799, 'is_training': '1'}, {'month': '5', 'product_code': '640310', 'cost': 8.63, 'is_training': '1'}, {'month': '5', 'product_code': '640310', 'cost': 16.2, 'is_training': '1'}, {'month': '5', 'product_code': '640310', 'cost': 10.8, 'is_training': '1'}, {'month': '5', 'product_code': '650110', 'cost': 20.3365, 'is_training': '1'}, {'month': '5', 'product_code': '650210', 'cost': 18.5434, 'is_training': '1'}], 'members': []}\n\nThen, you are asked to comment on how real the following household is with a rating from 1 to 5 (5 being very real):\n{'year': '2016', 'income_rank': 0.2538, 'income_rank_1': 
0.0479, 'income_rank_2': 0.0512, 'income_rank_3': 0.0486, 'income_rank_4': 0.0565, 'income_rank_5': 0.05, 'age_ref': 73, 'expenditures': [{'month': '5', 'product_code': '009000', 'cost': 926.76, 'is_training': '1'}, {'month': '5', 'product_code': '020110', 'cost': 1.0408, 'is_training': '1'}, {'month': '5', 'product_code': '020210', 'cost': 0.9491, 'is_training': '1'}, {'month': '5', 'product_code': '110110', 'cost': 2.49, 'is_training': '1'}, {'month': '5', 'product_code': '110210', 'cost': 0.7799, 'is_training': '1'}, {'month': '5', 'product_code': '120410', 'cost': 1.0199, 'is_training': '1'}, {'month': '5', 'product_code': '150110', 'cost': 1.3899, 'is_training': '1'}, {'month': '5', 'product_code': '180210', 'cost': 2, 'is_training': '1'}, {'month': '5', 'product_code': '190111', 'cost': 5.7399, 'is_training': '1'}, {'month': '5', 'product_code': '190112', 'cost': 14.92, 'is_training': '1'}, {'month': '5', 'product_code': '190311', 'cost': 3.4911, 'is_training': '1'}, {'month': '5', 'product_code': '220210', 'cost': 1188.8699, 'is_training': '1'}, {'month': '5', 'product_code': '260110', 'cost': 38.1688, 'is_training': '1'}, {'month': '5', 'product_code': '260210', 'cost': 3.4911, 'is_training': '1'}, {'month': '5', 'product_code': '330310', 'cost': 6.1788, 'is_training': '1'}, {'month': '5', 'product_code': '580000', 'cost': 69.61, 'is_training': '1'}, {'month': '5', 'product_code': '580000', 'cost': 50, 'is_training': '1'}, {'month': '5', 'product_code': '620111', 'cost': 118.8, 'is_training': '1'}, {'month': '5', 'product_code': '620121', 'cost': 9.72, 'is_training': '1'}, {'month': '5', 'product_code': '620121', 'cost': 35.6399, 'is_training': '1'}, {'month': '5', 'product_code': '620912', 'cost': 8.63, 'is_training': '1'}], 'members': []}"
prompt_fails = "You are a tax accountant.\n\nYou are given the following 3 real households as a reference:\n{'year': '2016', 'income_rank': 0.2472, 'income_rank_1': 0.0399, 'income_rank_2': 0.0419, 'income_rank_3': 0.042, 'income_rank_4': 0.0497, 'income_rank_5': 0.0427, 'age_ref': 33, 'expenditures': [{'month': '6', 'product_code': '010120', 'cost': 1.19, 'is_training': '1'}, {'month': '6', 'product_code': '100210', 'cost': 1.7999, 'is_training': '1'}, {'month': '6', 'product_code': '160212', 'cost': 2, 'is_training': '1'}, {'month': '6', 'product_code': '190211', 'cost': 17.8199, 'is_training': '1'}, {'month': '6', 'product_code': '640310', 'cost': 4.28, 'is_training': '1'}], 'members': []}\n{'year': '2016', 'income_rank': 0.2105, 'income_rank_1': 0.0482, 'income_rank_2': 0.049, 'income_rank_3': 0.0498, 'income_rank_4': 0.0496, 'income_rank_5': 0.0528, 'age_ref': 43, 'expenditures': [{'month': '4', 'product_code': '170110', 'cost': 0.9776, 'is_training': '1'}, {'month': '4', 'product_code': '170210', 'cost': 10.71, 'is_training': '1'}, {'month': '4', 'product_code': '170210', 'cost': 0.9223, 'is_training': '1'}, {'month': '4', 'product_code': '170510', 'cost': 3.17, 'is_training': '1'}, {'month': '4', 'product_code': '180710', 'cost': 2, 'is_training': '1'}, {'month': '4', 'product_code': '190111', 'cost': 15, 'is_training': '1'}, {'month': '4', 'product_code': '190111', 'cost': 4.5599, 'is_training': '1'}, {'month': '4', 'product_code': '190211', 'cost': 28.09, 'is_training': '1'}, {'month': '4', 'product_code': '320380', 'cost': 9.8208, 'is_training': '1'}, {'month': '4', 'product_code': '470111', 'cost': 10, 'is_training': '1'}, {'month': '4', 'product_code': '620214', 'cost': 12.1791, 'is_training': '1'}], 'members': []}\n{'year': '2016', 'income_rank': 0.2466, 'income_rank_1': 0.0393, 'income_rank_2': 0.0412, 'income_rank_3': 0.0414, 'income_rank_4': 0.0491, 'income_rank_5': 0.042, 'age_ref': 79, 'expenditures': [{'month': '5', 'product_code': '190112', 
'cost': 9.1499, 'is_training': '1'}, {'month': '5', 'product_code': '560210', 'cost': 133.5899, 'is_training': '1'}], 'members': []}\n\nThen, you are asked to comment on how real the following household is with a rating from 1 to 5 (5 being very real):\n{'year': '2016', 'income_rank': 0.2538, 'income_rank_1': 0.0479, 'income_rank_2': 0.0512, 'income_rank_3': 0.0486, 'income_rank_4': 0.0565, 'income_rank_5': 0.05, 'age_ref': 73, 'expenditures': [{'month': '5', 'product_code': '009000', 'cost': 926.76, 'is_training': '1'}, {'month': '5', 'product_code': '020110', 'cost': 1.0408, 'is_training': '1'}, {'month': '5', 'product_code': '020210', 'cost': 0.9491, 'is_training': '1'}, {'month': '5', 'product_code': '110110', 'cost': 2.49, 'is_training': '1'}, {'month': '5', 'product_code': '110210', 'cost': 0.7799, 'is_training': '1'}, {'month': '5', 'product_code': '120410', 'cost': 1.0199, 'is_training': '1'}, {'month': '5', 'product_code': '150110', 'cost': 1.3899, 'is_training': '1'}, {'month': '5', 'product_code': '180210', 'cost': 2, 'is_training': '1'}, {'month': '5', 'product_code': '190111', 'cost': 5.7399, 'is_training': '1'}, {'month': '5', 'product_code': '190112', 'cost': 14.92, 'is_training': '1'}, {'month': '5', 'product_code': '190311', 'cost': 3.4911, 'is_training': '1'}, {'month': '5', 'product_code': '220210', 'cost': 1188.8699, 'is_training': '1'}, {'month': '5', 'product_code': '260110', 'cost': 38.1688, 'is_training': '1'}, {'month': '5', 'product_code': '260210', 'cost': 3.4911, 'is_training': '1'}, {'month': '5', 'product_code': '330310', 'cost': 6.1788, 'is_training': '1'}, {'month': '5', 'product_code': '580000', 'cost': 69.61, 'is_training': '1'}, {'month': '5', 'product_code': '580000', 'cost': 50, 'is_training': '1'}, {'month': '5', 'product_code': '620111', 'cost': 118.8, 'is_training': '1'}, {'month': '5', 'product_code': '620121', 'cost': 9.72, 'is_training': '1'}, {'month': '5', 'product_code': '620121', 'cost': 35.6399, 'is_training': '1'}, 
{'month': '5', 'product_code': '620912', 'cost': 8.63, 'is_training': '1'}], 'members': []}"

prompt = prompt_fails
print(prompt)

while True:
    # Stream repeatedly; the failure shows up after a few generations
    for j in generator.stream(prompt, max_tokens=None):  # type: ignore
        print(j, end="", flush=True)
    print()

Expected result:

We would expect:
`{"score": 5}`

But we get an error.
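As a sanity check (independent of the GPU and of llama.cpp), the expected output does validate cleanly against the schema:

```python
from typing import Literal

from pydantic import BaseModel


class EvalOutputType(BaseModel):
    score: Literal[1, 2, 3, 4, 5]


# The expected well-formed completion parses and validates:
result = EvalOutputType.model_validate_json('{"score": 5}')
print(result.score)  # 5
```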

Error message:

The following output repeats indefinitely (the leading `G` is a faulty generated token that should never have been allowed by the grammar):

GException ignored on calling ctypes callback function: <function CustomSampler.__init__.<locals>.apply_wrapper at 0xe951d391df80>
Traceback (most recent call last):
  File "/home/antheas/Projects/pasteur/venv/lib/python3.12/site-packages/llama_cpp/_internals.py", line 621, in apply_wrapper
    self.apply_func(cur_p)
  File "/home/antheas/Projects/pasteur/venv/lib/python3.12/site-packages/llama_cpp/llama.py", line 709, in apply_func
    recarray.logit[:] = logit_processor(self._input_ids, recarray.logit)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/antheas/Projects/pasteur/venv/lib/python3.12/site-packages/outlines/processors/base_logits_processor.py", line 126, in __call__
    self.process_logits(
  File "/home/antheas/Projects/pasteur/venv/lib/python3.12/site-packages/outlines/backends/outlines_core.py", line 189, in process_logits
    self._guides[i].advance(
ValueError: No next state found for the current state: 128 with token ID: 38
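For context on why this token is clearly invalid: the schema constrains the output to a tiny JSON object, so the guide's state machine should only ever admit tokens that continue a string like `{"score": 5}`. The pattern below is a simplification I wrote for illustration, not the actual regex that outlines-core compiles, but it shows that a completion beginning with `G` has no valid next state:

```python
import re

# Rough approximation of the constraint for {"score": <1-5>}
# (whitespace handling here is an assumption):
pattern = re.compile(r'\{\s*"score"\s*:\s*[1-5]\s*\}')

assert pattern.fullmatch('{"score": 5}')  # valid completion
assert pattern.match('G') is None         # "G" cannot start a match
```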

Outlines/Python version information:

Version information

For more context, I also tried the latest outlines-core and previous versions of numba and outlines. Python is the interpreter provided by Nvidia's environment.

Details ``` 1.2.9 Python 3.12.3 (main, Jan 8 2026, 11:30:50) [GCC 13.3.0] accessible-pygments==0.0.5 aiofiles==25.1.0 alabaster==1.0.0 alembic==1.18.1 annotated-doc==0.0.4 annotated-types==0.7.0 antlr4-python3-runtime==4.9.3 anyio==4.12.1 appdirs==1.4.4 argon2-cffi==25.1.0 argon2-cffi-bindings==25.1.0 arrow==1.4.0 asttokens==3.0.1 async-lru==2.1.0 attrs==25.4.0 babel==2.17.0 beautifulsoup4==4.14.3 binaryornot==0.4.4 black==26.1.0 bleach==6.3.0 blinker==1.9.0 boto3==1.42.30 botocore==1.42.30 build==1.4.0 cachetools==6.2.4 certifi==2026.1.4 cffi==2.0.0 chardet==5.2.0 charset-normalizer==3.4.4 click==8.3.1 click-default-group==1.2.4 cloudpickle==3.1.2 colorama==0.4.6 comm==0.2.3 commonmark==0.9.1 contourpy==1.3.3 cookiecutter==2.6.0 cryptography==46.0.3 cycler==0.12.1 databricks-sdk==0.78.0 debugpy==1.8.19 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.2.18 diskcache==5.6.3 distro==1.9.0 docker==7.1.0 docutils==0.22.4 dynaconf==3.2.12 executing==2.2.1 fastapi==0.128.0 fastjsonschema==2.21.2 filelock==3.20.3 Flask==3.1.2 flask-cors==6.0.2 fonttools==4.61.1 fqdn==1.5.1 fsspec==2026.1.0 genson==1.3.0 gitdb==4.0.12 GitPython==3.1.46 google-auth==2.47.0 graphene==3.4.3 graphql-core==3.2.7 graphql-relay==3.2.0 greenlet==3.3.0 gunicorn==23.0.0 h11==0.16.0 halo==0.0.31 hf-xet==1.2.0 httpcore==1.0.9 httptools==0.7.1 httpx==0.28.1 huey==2.6.0 huggingface_hub==1.3.2 idna==3.11 imagesize==1.4.1 importlib_metadata==8.7.1 importlib_resources==6.5.2 ipykernel==7.1.0 ipython==8.38.0 isoduration==20.11.0 itsdangerous==2.2.0 jedi==0.19.2 Jinja2==3.1.6 jiter==0.12.0 jmespath==1.0.1 joblib==1.5.3 json5==0.13.0 jsonpath-ng==1.7.0 jsonpointer==3.0.0 jsonschema==4.26.0 jsonschema-specifications==2025.9.1 jupyter-events==0.12.0 jupyter-lsp==2.3.0 jupyter_client==8.8.0 jupyter_core==5.9.1 jupyter_server==2.17.0 jupyter_server_terminals==0.5.4 jupyterlab==4.5.2 jupyterlab_pygments==0.3.0 jupyterlab_server==2.28.0 kedro==1.1.1 kedro-datasets==9.1.1 kedro-telemetry==0.7.0 kedro-viz==12.2.0 
kiwisolver==1.4.9 lark==1.3.1 lazy_loader==0.4 llama_cpp_python==0.3.16 llvmlite==0.46.0 log-symbols==0.0.14 Mako==1.3.10 Markdown==3.7 markdown-it-py==4.0.0 MarkupSafe==3.0.3 matplotlib==3.10.8 matplotlib-inline==0.2.1 mdurl==0.1.2 mistune==3.2.0 mlflow==3.8.1 mlflow-skinny==3.8.1 mlflow-tracing==3.8.1 more-itertools==10.8.0 mypy_extensions==1.1.0 narwhals==2.15.0 nbclient==0.10.4 nbconvert==7.16.6 nbformat==5.10.4 nbsphinx==0.9.8 nest-asyncio==1.6.0 networkx==3.6.1 notebook_shim==0.2.4 numba==0.63.1 numpy==2.3.5 nvidia-nccl-cu12==2.29.2 omegaconf==2.3.0 openai==2.15.0 opentelemetry-api==1.39.1 opentelemetry-proto==1.39.1 opentelemetry-sdk==1.39.1 opentelemetry-semantic-conventions==0.60b1 orjson==3.11.5 outlines==1.2.9 outlines_core==0.2.14 overrides==7.7.0 packaging==25.0 pandas==2.3.3 pandocfilters==1.5.1 parquet_tools==0.2.16 parse==1.20.2 parso==0.8.5 -e git+https://github.com/pasteur-dev/pasteur@15dc503b3c96677f7c7030cbfbf384673eb86480#egg=pasteur pathspec==1.0.3 pexpect==4.9.0 pillow==12.1.0 pip-tools==7.5.2 platformdirs==4.5.1 plotly==6.5.2 pluggy==1.6.0 ply==3.11 pre_commit_hooks==5.0.0 prometheus_client==0.24.1 prompt_toolkit==3.0.52 protobuf==6.33.4 psutil==7.2.1 ptyprocess==0.7.0 pure_eval==0.2.3 pyarrow==22.0.0 pyasn1==0.6.2 pyasn1_modules==0.4.2 pycparser==2.23 pydantic==2.12.5 pydantic_core==2.41.5 pydata-sphinx-theme==0.15.4 Pygments==2.19.2 PyMySQL==1.1.2 pyparsing==3.3.1 pyproject_hooks==1.2.0 python-dateutil==2.9.0.post0 python-dotenv==1.2.1 python-json-logger==4.0.0 python-slugify==8.0.4 pytokens==0.4.0 pytoolconfig==1.3.1 pytz==2025.2 PyYAML==6.0.3 pyzmq==27.1.0 recommonmark==0.7.1 referencing==0.37.0 requests==2.32.5 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 rfc3987-syntax==1.1.0 rich==14.2.0 roman-numerals==4.1.0 rope==1.13.0 rpds-py==0.30.0 rsa==4.9.1 ruamel.yaml==0.18.10 ruamel.yaml.clib==0.2.12 s3transfer==0.16.0 scikit-learn==1.8.0 scipy==1.17.0 secure==1.0.1 Send2Trash==2.1.0 setuptools==80.9.0 shellingham==1.5.4 six==1.17.0 
smmap==5.0.2 sniffio==1.3.1 snowballstemmer==3.0.1 soupsieve==2.8.2 Sphinx==9.1.0 sphinx-autobuild==2025.8.25 sphinx-autodoc-typehints==3.6.2 sphinx-book-theme==1.1.4 sphinx-copybutton==0.5.2 sphinxcontrib-applehelp==2.0.0 sphinxcontrib-devhelp==2.0.0 sphinxcontrib-htmlhelp==2.1.0 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==2.0.0 sphinxcontrib-serializinghtml==2.0.0 spinners==0.0.24 SQLAlchemy==2.0.45 sqlparse==0.5.5 stack-data==0.6.3 starlette==0.50.0 strawberry-graphql==0.258.0 tabulate==0.9.0 tenacity==9.0.0 termcolor==3.3.0 terminado==0.18.1 text-unidecode==1.3 threadpoolctl==3.6.0 thrift==0.16.0 tinycss2==1.4.0 toml==0.10.2 tomli_w==1.2.0 tornado==6.5.4 tqdm==4.67.1 traitlets==5.14.3 typer-slim==0.21.1 types-python-dateutil==2.9.0.20241206 typing-inspection==0.4.2 typing_extensions==4.15.0 tzdata==2025.3 uri-template==1.3.0 urllib3==2.6.3 uvicorn==0.40.0 uvloop==0.22.1 watchfiles==1.1.1 wcwidth==0.2.14 webcolors==25.10.0 webencodings==0.5.1 websocket-client==1.9.0 websockets==16.0 Werkzeug==3.1.5 wheel==0.45.1 wrapt==1.17.2 xgboost==3.1.3 zipp==3.23.0 ```

Context for the issue:

I have a deadline coming up and two Sparks that cannot be used for it. I can patch this on my own if necessary.
