🌊 Add error for iterable datasets in GRPOTrainer #3216
Conversation
Pull Request Overview
This PR adds a check in GRPOTrainer that raises an error when an iterable dataset is used, ensuring that only standard datasets are accepted.
- Adds a conditional block to detect IterableDataset usage in both the train and evaluation datasets (a hedged sketch of such a guard follows below).
- Provides a clear error message, including a reference to an existing issue for context.
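For orientation, here is a minimal sketch of what such a guard could look like. The helper name, its placement, and the message wording are illustrative assumptions, not the PR's actual diff:

```python
# Hedged sketch, not the PR's exact code: the helper name and the error
# message are illustrative assumptions.
from datasets import IterableDataset


def check_no_iterable_dataset(train_dataset, eval_dataset):
    # eval_dataset may be a single dataset or a dict of named datasets,
    # so collect the dict values as well.
    if isinstance(eval_dataset, dict):
        eval_datasets = list(eval_dataset.values())
    elif eval_dataset is not None:
        eval_datasets = [eval_dataset]
    else:
        eval_datasets = []
    if any(isinstance(d, IterableDataset) for d in [train_dataset, *eval_datasets]):
        raise NotImplementedError(
            "Iterable datasets are not yet supported in GRPOTrainer. Please use a "
            "standard dataset instead. See https://github.com/huggingface/trl/issues/3213."
        )
```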
Comments suppressed due to low confidence (2)
trl/trainer/grpo_trainer.py:410 (on the `if (` line that opens the new check)
- Please add unit tests that verify that providing an IterableDataset (and dicts containing IterableDatasets) to the trainer results in the proper NotImplementedError being raised. A hedged sketch of such a test follows below.
trl/trainer/grpo_trainer.py:418 (on the `raise NotImplementedError(` line)
- Consider enhancing the error message with guidance or a link to the documentation for converting iterable datasets into a supported format, to help users resolve the issue.
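As a concrete starting point for the requested test, here is a hedged pytest-style sketch; it assumes the error is raised at trainer construction time and mirrors the reproduction snippet further down:

```python
# Hedged sketch of the requested unit test; assumes the NotImplementedError
# is raised when the trainer is constructed with an IterableDataset.
import tempfile

import pytest
from datasets import load_dataset

from trl import GRPOConfig, GRPOTrainer


def test_grpo_trainer_rejects_iterable_dataset():
    dataset = load_dataset(
        "trl-internal-testing/zen", "standard_prompt_only", split="train"
    ).to_iterable_dataset()

    def dummy_reward_func(completions, **kwargs):
        return [0.0] * len(completions)

    with tempfile.TemporaryDirectory() as tmp_dir:
        training_args = GRPOConfig(output_dir=tmp_dir, report_to="none")
        with pytest.raises(NotImplementedError):
            GRPOTrainer(
                model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
                reward_funcs=dummy_reward_func,
                args=training_args,
                train_dataset=dataset,
            )
```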
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```python
# Reproduction: passing an IterableDataset to GRPOTrainer (see #3213).
import tempfile

from datasets import load_dataset

from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset(
    "trl-internal-testing/zen", "standard_prompt_only", split="train"
).to_iterable_dataset()


def dummy_reward_func(completions, **kwargs):
    return [0.0] * len(completions)


with tempfile.TemporaryDirectory() as tmp_dir:
    training_args = GRPOConfig(
        output_dir=tmp_dir,
        learning_rate=0.1,  # increase the learning rate to speed up the test
        per_device_train_batch_size=3,  # reduce the batch size to reduce memory usage
        num_generations=3,  # reduce the number of generations to reduce memory usage
        max_completion_length=32,  # reduce the completion length to reduce memory usage
        report_to="none",
    )
    trainer = GRPOTrainer(
        model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
        reward_funcs=dummy_reward_func,
        args=training_args,
        train_dataset=dataset,
    )
```
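With the check from this PR, constructing the trainer above is expected to fail fast with the NotImplementedError rather than proceeding with an unsupported dataset.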
What does this PR do?
Adds an explicit error in GRPOTrainer for iterable datasets, which are not currently supported. See #3213.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.