### Background & Description

Qwen3.5 support has already been implemented in llama.cpp: https://github.com/ggml-org/llama.cpp/pull/19468

### API & Usage

_No response_

### How to implement

_No response_