Add device-agnostic process group wrappers for distributed setup initialization #307
AnantGulati wants to merge 2 commits into meta-pytorch:main
Conversation
Hi @AnantGulati! Thank you for your pull request and welcome to our community.

**Action Required**

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

**Process**

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks! |
@d4l3k @H-Huang @tushar00jain Could you please review or assign the appropriate reviewer for this PR? Thanks!
As an extension of RFC #257, the goal of this PR is to enable device-agnostic distributed setup initialization.
I've created two new classes:

- `ProcessGroupAccelerator`
- `ProcessGroupBabyAccelerator`

These classes automatically route to the appropriate device-specific implementation using `torch.accelerator` (a rough sketch of this routing is shown after this list):

- `ProcessGroupNCCL` / `ProcessGroupBabyNCCL` on CUDA
- `ProcessGroupXCCL` / `ProcessGroupBabyXCCL` on XPU

I've added test cases to demonstrate and validate this functionality.
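For illustration only, here is a minimal sketch of how such a wrapper could dispatch on the active accelerator via `torch.accelerator.current_accelerator()`. The class names come from this description, but the import path, the `__new__`-based factory pattern, and the exact dispatch logic are assumptions rather than the implementation in this PR:

```python
# Sketch only: not the actual torchft implementation.
import torch

# Assumed import path for the existing device-specific classes.
from torchft.process_group import ProcessGroupNCCL, ProcessGroupXCCL


def _accelerator_type() -> str:
    """Return the device type of the active accelerator, e.g. "cuda" or "xpu"."""
    acc = torch.accelerator.current_accelerator()
    if acc is None:
        raise RuntimeError("No accelerator is available on this host")
    return acc.type


class ProcessGroupAccelerator:
    """Device-agnostic wrapper that constructs the matching device-specific
    process group: CUDA -> NCCL, XPU -> XCCL."""

    def __new__(cls, *args, **kwargs):
        device_type = _accelerator_type()
        if device_type == "cuda":
            return ProcessGroupNCCL(*args, **kwargs)
        if device_type == "xpu":
            return ProcessGroupXCCL(*args, **kwargs)
        raise RuntimeError(f"Unsupported accelerator type: {device_type}")


# ProcessGroupBabyAccelerator would follow the same pattern, routing to
# ProcessGroupBabyNCCL / ProcessGroupBabyXCCL.
```

With a wrapper along these lines, tests and setup code can construct `ProcessGroupAccelerator(...)` once and run unchanged on CUDA or XPU hosts; constructor arguments pass straight through to the underlying implementation.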
Future PRs will convert more test cases and code files to use this device-agnostic approach, reducing manual overhead and enabling greater flexibility and code reuse.