Enable self refiner modality guidance#120

Open
Fabio Maia (fabiormoura) wants to merge 4 commits into Lightricks:main from
fabiormoura:enable_self_refiner_modality_guidance
Conversation

@fabiormoura

No description provided.

The LTX-2 fork's X0Model.forward() accepts a single Modality object,
not batched lists like Wan2GP's modified transformer. Use two separate
forward passes (normal + cross-attn-perturbed) instead of batching.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
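The change described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: `DummyModel`, `sg_guided_output`, and the CFG-style combination formula are stand-ins, assumed only to show the "two separate forward passes" shape.

```python
class DummyModel:
    # Stand-in for the fork's X0Model: forward() takes one modality, not a batch.
    def forward(self, modality):
        return [2.0 * x for x in modality]


def sg_guided_output(model, modality, perturbed_modality, scale):
    # Two separate forward passes (normal + cross-attn-perturbed) replace the
    # single batched call used by Wan2GP's modified transformer.
    normal = model.forward(modality)
    perturbed = model.forward(perturbed_modality)
    # Hypothetical CFG-style combination of the two outputs.
    return [p + scale * (n - p) for n, p in zip(normal, perturbed)]
```

The point is only that each modality goes through `forward()` on its own; how the two outputs are combined is guidance-specific.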
The Triton fused_add_round_kernel uses fp8e4nv, which requires Ada
Lovelace (compute capability >= 8.9). On Ampere (sm_86, e.g. A40),
this fails with 'type fp8e4nv not supported in this architecture'.

Detect the GPU compute capability at first call and fall back to pure
PyTorch: add in bfloat16, then deterministically cast to FP8. Slightly
less accurate than stochastic rounding, but equivalent for inference.
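A minimal sketch of that first-call detection and fallback, under stated assumptions: the wrapper name `fused_add_round`, the helper `supports_fp8e4nv`, and the Triton stub are hypothetical; the real kernel is not shown. The capability threshold (8.9, Ada Lovelace) and the A40 being sm_86 Ampere are from the commit message.

```python
_FP8_OK = None  # cached result of the first-call capability check


def supports_fp8e4nv(major, minor):
    # Triton's fp8e4nv type needs compute capability >= 8.9 (Ada Lovelace);
    # Ampere parts such as the A40 (sm_86) must take the fallback path.
    return (major, minor) >= (8, 9)


def _triton_fused_add_round(x, y):
    # Placeholder for the actual Triton stochastic-rounding kernel.
    raise NotImplementedError


def fused_add_round(x, y):
    # Hypothetical dispatcher: Triton kernel on Ada+, pure PyTorch otherwise.
    global _FP8_OK
    import torch  # local import keeps the capability helper torch-free

    if _FP8_OK is None:
        _FP8_OK = torch.cuda.is_available() and supports_fp8e4nv(
            *torch.cuda.get_device_capability()
        )
    if _FP8_OK:
        return _triton_fused_add_round(x, y)
    # Fallback: add in bfloat16, then deterministic cast to FP8 (e4m3).
    # Slightly less accurate than stochastic rounding; fine for inference.
    return (x.to(torch.bfloat16) + y.to(torch.bfloat16)).to(torch.float8_e4m3fn)
```

Caching the check in a module-level flag means the (cheap but not free) `get_device_capability()` call happens only once per process.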