docs: v0.5 performance results update #1772
base: main
Conversation
📝 Walkthrough

Documentation update to the performance-summary benchmark file. Adds three new benchmark sections for H100 BF16/FP8 and GB200 BF16 configurations, with a restructured table format that includes an Algorithm column and updated metrics for training and generation parameters.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 4 passed
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @docs/about/performance-summary.md:
- Line 46: The section header "## Nemo RL v0.4" is inconsistent with the PR
title referencing v0.5; update the header text in the markdown (replace "Nemo RL
v0.4" with "Nemo RL v0.5") so the documentation version matches the PR and other
references to the release.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
docs/about/performance-summary.md
🧰 Additional context used
📓 Path-based instructions (2)
docs/**/*.md
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Update docs/index.md when a new markdown doc is added under docs/**/*.md or a markdown file is renamed, ensuring the document appears in the most appropriate section
Files:
docs/about/performance-summary.md
!(**/tests/**|**/test_*.py|**/test_*.sh)
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year
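The guideline above does not reproduce the header text itself; the exact wording varies by repository and year. A typical form for an Apache-2.0-licensed NVIDIA project (an illustrative sketch, not the repository's canonical header) looks like:

```python
# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

For shell scripts, the same block would follow the shebang line, using `#` comments as above.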
Files:
docs/about/performance-summary.md
🧠 Learnings (1)
📚 Learning: 2025-11-24T17:24:47.707Z
Learnt from: CR
Repo: NVIDIA-NeMo/RL PR: 0
File: coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt:0-0
Timestamp: 2025-11-24T17:24:47.707Z
Learning: If a change could affect performance, the PR description should include before-and-after performance numbers, as well as the configuration and context in which they apply
Applied to files:
docs/about/performance-summary.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: build-container / main
- GitHub Check: Lint check
- GitHub Check: build-container / main
- GitHub Check: Lint check
- GitHub Check: Post submodule check comment / Comment on PR
- GitHub Check: Post automodel integration comment / Comment on PR
🔇 Additional comments (2)
docs/about/performance-summary.md (2)
48-78: LGTM! Well-structured benchmark sections.

The H100 BF16 and FP8 benchmark sections are well-organized with clear metadata and properly formatted tables. The addition of the "Algorithm" column effectively distinguishes between GRPO and DAPO results, and the dataset references on line 49 appropriately mention both algorithms used.

79-96: LGTM! GB200 benchmark section is well-structured.

The GB200 BF16 benchmark section follows the same clear structure as the H100 sections, with appropriate system metadata and properly formatted tables. The results showcase performance on the new GB200-NVL72 system.
Version mismatch: Section header says "v0.4" but PR title indicates "v0.5".
The section header states "Nemo RL v0.4" while the PR title is "v0.5 performance results update". Please verify and correct the version number to ensure documentation accuracy.
📝 Proposed fix if this should be v0.5

```diff
-## Nemo RL v0.4
+## Nemo RL v0.5
```

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test and benchmark the code to ensure it meets the requirements.
What does this PR do?
As title
Issues
List issues that this PR closes (syntax):
Usage
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"
Pre checks:
Additional Information
Summary by CodeRabbit