Preprocessing of dataset for Self-Supervised Pre-Training of Swin Transformers #162

@marvnmtz

Description

Thank you for releasing the pretraining code. As I try to reproduce it, I stumbled over some questions.

The first question concerns the pre-processing, specifically the voxel spacing of the data. You wrote that a spacing of 1.5 x 1.5 x 2.0 mm is used for the BTCV challenge. Does the same spacing hold for the pretraining data?

You also said that you excluded patches that are entirely air (voxel value = 0), but I cannot find the part of the code where this is done. Could you describe how and where this happens?
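For context on what I mean: a typical way to implement this kind of filtering is rejection sampling during patch extraction, i.e. re-drawing a random crop until it contains at least one non-zero voxel. This is only a sketch of the general technique (the function name, patch size, and retry budget are my own assumptions, not taken from the released code):

```python
import numpy as np

def sample_nonair_patch(volume, patch_size=(96, 96, 96), max_tries=10, rng=None):
    """Randomly crop a patch, rejecting crops that are entirely air (all voxels == 0).

    Hypothetical helper for illustration; not the repository's actual implementation.
    """
    rng = rng or np.random.default_rng()
    d, h, w = volume.shape
    pd, ph, pw = patch_size
    patch = None
    for _ in range(max_tries):
        # Draw a random corner for the crop within valid bounds.
        z = rng.integers(0, d - pd + 1)
        y = rng.integers(0, h - ph + 1)
        x = rng.integers(0, w - pw + 1)
        patch = volume[z:z + pd, y:y + ph, x:x + pw]
        if np.any(patch != 0):  # accept the patch if it contains any non-air voxel
            return patch
    # Fall back to the last crop if every try was all air.
    return patch
```

Is this roughly what happens in the pretraining pipeline, or is the filtering done at a different stage (e.g. when building the dataset file list)?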
