refactor: split train and val dataset in response dataset #1649
Conversation
terrykong left a comment:
Some initial thoughts. Since it's a big PR, @ashors1 could you help as a second reviewer?
examples/configs/recipes/llm/sft-llama3.1-8b-1n8g-megatron.yaml
examples/run_grpo.py
```python
assert hasattr(data, "processor"), "Dataset must have a processor attribute"
task_data_processors[task_name] = (task_spec, data.processor)
# setup train dataset
update_single_dataset_config(data_config["train"], data_config)
```
wdyt about just expecting users to populate the train config? Then we don't have duplicate keys.
I think we should have a default value, especially once we support multiple datasets in the next PR; otherwise people need to write the same settings for every dataset, and the data config becomes redundant.
I'm also wondering whether it's better to provide a `default` block alongside `train` and `validation`; that seems more direct than putting the defaults at the top level. wdyt?
```yaml
# now
data:
  train:
    # this dataset will override prompt_key and use the default values for other vars
    - data_path: /path/to/local/train_dataset_1.jsonl
      prompt_key: question
    # this dataset will use all the default values
    - data_path: /path/to/local/train_dataset_2.jsonl
  validation:
    - data_path: /path/to/local/val_dataset.jsonl
  # will use below vars as default values if dataset doesn't specify it
  dataset_name: BinaryPreferenceDataset
  prompt_key: prompt
  chosen_key: chosen
  rejected_key: rejected
  prompt_file: null
  system_prompt_file: null
  env_name: math
```
```yaml
# add `default`
data:
  train:
    # this dataset will override prompt_key and use the default values for other vars
    - data_path: /path/to/local/train_dataset_1.jsonl
      prompt_key: question
    # this dataset will use all the default values
    - data_path: /path/to/local/train_dataset_2.jsonl
  validation:
    - data_path: /path/to/local/val_dataset.jsonl
  default:
    # will use below vars as default values if dataset doesn't specify it
    dataset_name: BinaryPreferenceDataset
    prompt_key: prompt
    chosen_key: chosen
    rejected_key: rejected
    prompt_file: null
    system_prompt_file: null
    env_name: math
```
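The `default`-block idea above amounts to a shallow merge: entry-level keys win, missing keys fall back to the shared defaults. A minimal sketch of that merge (`apply_dataset_defaults` is a hypothetical helper for illustration, not the PR's actual code):

```python
import copy

def apply_dataset_defaults(data_config: dict) -> dict:
    """Merge the shared `default` block into each dataset entry.

    Keys set on an individual dataset entry override the defaults;
    anything the entry omits falls back to the `default` block.
    """
    defaults = data_config.get("default", {})
    merged = copy.deepcopy(data_config)
    for split in ("train", "validation"):
        for i, entry in enumerate(merged.get(split) or []):
            # entry keys (right side) win over defaults (left side)
            merged[split][i] = {**defaults, **entry}
    return merged

data = {
    "train": [
        {"data_path": "train_dataset_1.jsonl", "prompt_key": "question"},
        {"data_path": "train_dataset_2.jsonl"},
    ],
    "default": {"prompt_key": "prompt", "env_name": "math"},
}
merged = apply_dataset_defaults(data)
```

Here the first train entry keeps its own `prompt_key: question` while still inheriting `env_name: math`, and the second entry inherits both defaults.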
I feel like it's better to be explicit rather than rely on fallback, since it's not clear what needs what. To understand the relationship between `default` and each dataset, readers would need to inspect the code.
I agree it's somewhat redundant, but it's more explicit.
Could you get feedback from the research team on which they'd prefer?
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: Rayen <ruit@nvidia.com>
update all run_xxx and recipe of response dataset to use default
fix missing default
Running the nightly test now; it needs a minor fix, which I'll push later.
Related issue: #1050
Refactor the datasets under `nemo_rl/data/datasets/response_datasets/` into a similar format, including `clevr_cogent` and `openmathinstruct2`.

New Param

Add a new param `split_validation_size` to handle the case where one dataset is used for both training and validation (e.g., `OpenMathInstruct-2` in `examples/configs/grpo_math_1B.yaml`).
- If `data.train.split_validation_size > 0` and `data.validation` is None, part of the training dataset will be used as the validation dataset.
- If `data.train.split_validation_size > 0` and `data.validation` is not None, both that part of the training dataset and the provided validation dataset will be used for validation.

Usage
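The `split_validation_size` behavior described above can be sketched as follows (`split_train_val` is a hypothetical helper written for illustration, not the PR's actual implementation):

```python
import random

def split_train_val(train_examples, val_examples, split_validation_size, seed=42):
    """Hold out `split_validation_size` training examples for validation.

    If a validation set is also provided, the held-out examples are
    appended to it, mirroring the behavior described in the PR.
    """
    if split_validation_size <= 0:
        return list(train_examples), list(val_examples or [])
    rng = random.Random(seed)  # deterministic split for reproducibility
    indices = list(range(len(train_examples)))
    rng.shuffle(indices)
    held_out = set(indices[:split_validation_size])
    new_train = [ex for i, ex in enumerate(train_examples) if i not in held_out]
    new_val = [ex for i, ex in enumerate(train_examples) if i in held_out]
    if val_examples:
        new_val = new_val + list(val_examples)
    return new_train, new_val

# case 1: no validation set provided -> carve one out of train
train, val = split_train_val(list(range(100)), None, split_validation_size=10)
# case 2: validation set provided -> held-out examples are added to it
train2, val2 = split_train_val(list(range(100)), [900, 901], split_validation_size=10)
```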
Migration Guide
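A hedged sketch of what migrating an existing config might look like, based on the `default`-block layout discussed in the review thread above (the exact final schema is an assumption, and the paths are placeholders):

```yaml
# before: dataset keys sit directly under `data:`
data:
  dataset_name: ResponseDataset
  data_path: /path/to/local/dataset.jsonl
  prompt_key: prompt

# after: datasets move into `train` / `validation` lists,
# with shared settings under `default:`
data:
  train:
    - data_path: /path/to/local/dataset.jsonl
  validation: null
  default:
    dataset_name: ResponseDataset
    prompt_key: prompt
```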
(`openai_format` and `ResponseDataset`)

Test Result
Summary by CodeRabbit
Release Notes
New Features
- Separate `train` and `validation` blocks in data settings

Documentation
Bug Fixes & Improvements