
Thank you for your high-quality work. I'm very interested in the code for training the WAN part — do you have plans to open-source it? Any guidance would be greatly appreciated! #19

@frost-ice0107

Description

@frost-ice0107
  1. We are about to release an updated version of the paper that reports resource usage. Overall, the pre-training stage uses 8 machines with 8 GPUs each (64 GPUs total).
  2. Stage 1 trains only the VGM (WAN 2.2-5B); we have not open-sourced the training code for this part, nor the VAE training code in Stage 2. However, the code for Motus Stage 2 and the subsequent downstream-task fine-tuning is the same: you only need to prepare latent action data and configure Motus/configs/lerobot.yaml.

Originally posted by @HongzheBi in #9
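For readers wondering what "configure Motus/configs/lerobot.yaml" might involve, here is a purely hypothetical sketch of such a config. All keys and values below are illustrative assumptions about a LeRobot-style fine-tuning setup, not the repo's actual schema; consult the file shipped with Motus for the real field names.

```yaml
# Hypothetical sketch only — key names are assumptions, not the actual Motus schema.
dataset:
  repo_id: my_org/my_robot_dataset    # a LeRobot-format dataset (assumed name)
  latent_action_key: latent_actions   # where precomputed latent actions are stored
model:
  checkpoint: path/to/stage2_weights  # Stage 2 weights to fine-tune from
training:
  batch_size: 8
  learning_rate: 1.0e-4
  num_steps: 100000
```

The point of the reply above is that only the data preparation and this config differ between Stage 2 training and downstream fine-tuning; the training entry point itself is shared.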
