Columns: user (string, 3–28 chars) · created_at (timestamp[us]) · body (string, 1–173k chars) · issue_number (int64, 1–2.39k)
HuggingFaceDocBuilderDev
2024-11-24T15:17:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2389). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,389
HuggingFaceDocBuilderDev
2024-11-22T18:41:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2386). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,386
qgallouedec
2024-11-22T18:21:38
Thanks for reporting. It likely comes from the chat template. Can you share it?
2,385
qgallouedec
2024-11-22T18:24:26
To further explain the error, we expect a chat template that satisfies:

```python
formatted_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)
formatted_prompt_completion = tokenizer.apply_chat_template(prompt + completion, tokenize=False)
assert formatted_prompt_completion.startswith(formatted_prompt)
```

Example with Qwen:

```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
>>> prompt = [{"role": "user", "content": "Where is Paris?"}]
>>> completion = [{"role": "assistant", "content": "In France."}]
>>> formatted_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)
>>> formatted_prompt_completion = tokenizer.apply_chat_template(prompt + completion, tokenize=False)
>>> formatted_prompt
'<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhere is Paris?<|im_end|>\n<|im_start|>assistant\n'
>>> formatted_prompt_completion
'<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhere is Paris?<|im_end|>\n<|im_start|>assistant\nIn France.<|im_end|>\n<|im_start|>assistant\n'
>>> formatted_prompt_completion.startswith(formatted_prompt)
True
```
2,385
qgallouedec
2024-11-22T18:34:03
It may come from here in your example:

```diff
ds = ds.map(
    lambda x: {
        "system": [{"role": "user", "content": x["system"]}],
        "prompt": [{"role": "user", "content": x["prompt"]}],
        "chosen": [{"role": "assistant", "content": x["chosen"]}],
-       "rejected": [{"role": "user", "content": x["rejected"]}],
+       "rejected": [{"role": "assistant", "content": x["rejected"]}],
    }
)
```
2,385
MohamedAliRashad
2024-11-22T18:47:59
@qgallouedec I am the stupidest person on earth. Thanks a lot!
2,385
HuggingFaceDocBuilderDev
2024-11-22T18:09:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2384). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,384
qgallouedec
2024-11-24T16:07:10
Hi! Thanks for the suggestion. It could be a great addition. I haven't read the paper in detail yet but what you describe sounds closer to KTO than DPO, doesn't it? Do you have an implementation that already works?
2,383
AML14
2024-11-22T12:55:31
Update: DPO doesn't even work with a code completion task (i.e., neither the input nor the output includes FIM special tokens) with the base model. As an example, here is the output generated by `Qwen/Qwen2.5-Coder-0.5B` for the following input:

```java
// Input:
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
// Output:
        @Override
        public void configure() throws Exception {
            from("direct:hello")
                .to("mock:hello");
        }
    };
}<|endoftext|>
```

And here is the output of the same model after having applied DPO with about 3000 instances, where the prompt is the input and the chosen/rejected are correct/wrong completions:

```java
// Input:
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
// Output:
        public void configure() throws Exception {
            <|fim_middle|>
            <|fim_middle|>
            <|fim_middle|><|endoftext|>
```

The model is completely broken after applying DPO.
2,382
yiyepiaoling0715
2024-11-23T10:18:06
> And here is the output of the same model after having applied DPO with about 3000 instances, where the prompt is the input and the chosen/rejected are correct/wrong completions:

Why wouldn't it work with a code completion task? I also do code completion with RL and get some benefit; maybe it doesn't work in your situation because of your training corpus.
2,382
qgallouedec
2024-11-23T16:16:27
Is your dataset public? How do the training curves look?
2,382
qgallouedec
2024-11-23T16:22:27
Can you confirm that your effective batch size is 8?
2,382
HuggingFaceDocBuilderDev
2024-11-21T19:59:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2381). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,381
HuggingFaceDocBuilderDev
2024-11-21T19:57:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2380). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,380
qgallouedec
2024-11-21T19:33:35
Thanks!
2,379
qgallouedec
2024-11-21T19:35:56
Pushed to hub here https://huggingface.co/datasets/trl-lib/hh-rlhf-helpful-base
2,379
HuggingFaceDocBuilderDev
2024-11-21T19:37:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2379). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,379
qgallouedec
2024-11-20T09:44:57
Please don't use images when referring to code next time. Use a [permalink to code](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-a-permanent-link-to-a-code-snippet).

---

> train_dataset should be in the form of dict after processing the map

No, `map` returns a `Dataset` instance (see the [`datasets.map` documentation](https://huggingface.co/docs/datasets/en/process#map)). Unless you remove these columns (prompt, completion) from the dataset, they remain.
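A minimal, self-contained illustration of that behaviour (the toy dataset and column names here are made up for the example):

```python
from datasets import Dataset

ds = Dataset.from_dict({"prompt": ["Hi"], "completion": [" there"]})

# map() adds the new column but keeps the existing ones...
ds_mapped = ds.map(lambda x: {"text": x["prompt"] + x["completion"]})
print(ds_mapped.column_names)  # ['prompt', 'completion', 'text']

# ...unless you explicitly drop them.
ds_clean = ds.map(lambda x: {"text": x["prompt"] + x["completion"]}, remove_columns=["prompt", "completion"])
print(ds_clean.column_names)  # ['text']
```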
2,374
a7217339
2024-11-20T09:50:36
Thank you for your guidance. As a beginner, I am not yet proficient in grammar. Sorry.
2,374
HuggingFaceDocBuilderDev
2024-11-20T09:36:07
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2373). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,373
HuggingFaceDocBuilderDev
2024-11-20T08:39:39
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2372). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,372
kashif
2024-11-20T08:40:44
thanks @qgallouedec
2,372
qgallouedec
2024-11-20T09:11:02
Failing test not related (same as https://github.com/huggingface/trl/pull/2370#issuecomment-2486585773)
2,372
qgallouedec
2024-11-20T07:45:08
Thanks for reporting. Please provide more info, like the training arguments, etc.
2,371
HuggingFaceDocBuilderDev
2024-11-19T18:49:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2370). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,370
qgallouedec
2024-11-19T19:34:33
Failing test not related (fixed in #2373)
2,370
lewtun
2024-11-19T12:30:06
Yes I agree this symlink business was not a great choice for the chat CLI. Let's revisit later
2,369
qgallouedec
2024-11-19T08:15:12
> Can DPOTrainer support inputting encoded token IDs to customize the calculation of different attention masks

No, and it won't be supported unless we are provided with a good reason to support it.

> excluding the prompt part from loss computation?

Actually, that's how DPO works by default. See https://github.com/huggingface/trl/blob/b80c1a6fb8754c578f7178213e56d780abbe96d5/trl/trainer/dpo_trainer.py#L1089-L1092
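For intuition, here is a minimal sketch of the idea (illustrative names, not the exact TRL code): only completion tokens contribute to the summed log-probabilities.

```python
import torch

def completion_logp_sum(per_token_logps: torch.Tensor, prompt_len: int) -> torch.Tensor:
    # per_token_logps: (batch, seq_len) log-probabilities of each target token
    loss_mask = torch.ones_like(per_token_logps)
    loss_mask[:, :prompt_len] = 0  # prompt tokens are excluded from the loss
    return (per_token_logps * loss_mask).sum(dim=-1)
```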
2,368
LBJ6666
2024-11-19T08:55:16
@qgallouedec Thank you for your response
2,368
gmonair
2024-11-20T12:41:37
I think I found the issue. For posterity, it seems that it was caused by setting torch_dtype to "half" instead of "auto". User error.
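For reference, a quick sketch of the setting in question (the model name is just an example):

```python
from transformers import AutoModelForCausalLM

# "auto" lets transformers pick the dtype stored in the checkpoint config,
# instead of forcing half precision everywhere.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", torch_dtype="auto")
```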
2,367
qgallouedec
2024-11-19T05:43:32
Please use English only.
2,366
qgallouedec
2024-11-20T13:05:00
Probably linked to #2127. Closing as the title is not in English and the question isn't clear enough for us to help you. Feel free to open a clearer issue that complies with our guidelines.
2,366
HuggingFaceDocBuilderDev
2024-11-18T16:18:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2365). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,365
qgallouedec
2024-11-18T12:58:40
Thanks!
2,364
HuggingFaceDocBuilderDev
2024-11-18T13:03:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2364). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,364
August-murr
2024-11-24T15:09:48
To get it to work with `ppo_trainer.train`, one idea is to modify the `get_reward` function used by `PPOTrainer`. Clone the repo and check out the `get_reward` function in `utils.py`: https://github.com/huggingface/trl/blob/672c96546d9cae7a6d0afba381b189bb3cb2e8b5/trl/trainer/utils.py#L1069-L1093. Right now, it uses a `torch.nn.Module` to calculate the reward. You can modify it to use your rule-based reward logic instead; just make sure the function still outputs a `torch.Tensor` so `PPOTrainer` doesn't break. You might also need to adjust some references in `ppo_config.py` and `ppo_trainer.py`, for example by removing anything that assumes there's a reward model being used, since in your case there won't be one.
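A minimal sketch of what a rule-based replacement could look like (the function name and the scoring rule are made up for illustration; the real `get_reward` in TRL returns more than a single tensor, so you would need to match its actual return signature):

```python
import torch

def rule_based_reward(response_texts: list[str]) -> torch.Tensor:
    # Example rule: reward responses that end with proper punctuation, penalize the rest.
    scores = [1.0 if text.strip().endswith(".") else -1.0 for text in response_texts]
    return torch.tensor(scores, dtype=torch.float32)
```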
2,363
kashif
2024-11-21T09:34:50
Yes, we would welcome distillation trainers!
2,361
kashif
2024-11-18T08:44:55
Thanks @bartoszzuk, perhaps it's better to set `self.data_collator` to the default one if it is `None`, and then use `self.data_collator` in the data loaders?
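A minimal sketch of that suggestion (simplified, illustrative names rather than the actual trainer code):

```python
from torch.utils.data import DataLoader
from transformers import DataCollatorWithPadding

class SketchTrainer:
    """Simplified sketch, not the real TRL trainer."""

    def __init__(self, train_dataset, tokenizer, data_collator=None, batch_size=8):
        # Fall back to a default collator once, at construction time...
        if data_collator is None:
            data_collator = DataCollatorWithPadding(tokenizer)
        self.data_collator = data_collator
        self.train_dataset = train_dataset
        self.batch_size = batch_size

    def get_train_dataloader(self):
        # ...then always reuse self.data_collator when building the data loaders.
        return DataLoader(
            self.train_dataset,
            batch_size=self.batch_size,
            collate_fn=self.data_collator,
            shuffle=True,
        )
```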
2,360
kashif
2024-11-18T10:17:24
You might need to run `make precommit` in the root of the TRL repo to fix the styling.
2,360
HuggingFaceDocBuilderDev
2024-11-18T10:20:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2360). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,360
HuggingFaceDocBuilderDev
2024-11-15T14:07:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2359). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,359
ccs96307
2024-11-14T12:34:24
Hi, it looks like the error arises because the `PPOTrainer` class expects a `value_model` to be defined and passed in, which appears to be required in the current TRL version. The `disable_dropout_in_model` method is likely encountering `NoneType` because `value_model` wasn’t specified, and thus defaults to `None`. Hope this helps!
2,357
Mrinh212375
2024-11-15T05:33:13
> Hi, it looks like the error arises because the `PPOTrainer` class expects a `value_model` to be defined and passed in, which appears to be required in the current TRL version. The `disable_dropout_in_model` method is likely encountering `NoneType` because `value_model` wasn’t specified, and thus defaults to `None`.
>
> Hope this helps!

Hi, thanks. I have now passed the same model as `value_model` and `policy_model`; I thought it was optional, so I didn't pass anything before. Anyway, the error is gone. Also, I can call the `ppo_trainer.train()` method directly, right? Unlike the older version, there is no need to write a PPO training loop. Can you please clarify this point?
2,357
ccs96307
2024-11-15T13:35:40
Glad to hear the error is resolved! Yes, as far as I know, you can directly call the `ppo_trainer.train()` method without needing to write a training loop.
2,357
qgallouedec
2024-11-14T09:05:42
Does "from scratch" means the opposite of "finetuning" for you? Please precise your question
2,356
kalocide
2024-11-16T00:47:00
why would you pre-train with RL?
2,356
kashif
2024-11-21T10:05:15
Just to debug, can you kindly check whether you get the same issue when you do not pass a validation dataset? Also, can you check what happens when you explicitly pass `num_train_epochs=1` as an option to the `DPOConfig`? Thanks!
2,355
Mrinh212375
2024-11-14T07:21:27
Hi, I think we need to create a copy of the policy model using the `create_reference_model()` function, is that right? I'm also facing another problem with the new `PPOTrainer()`: according to the documentation we need to pass a **module**, unlike the previous version (HF `PreTrainedWrapper`). How do we get the HF `PreTrainedWrapper` models and pass them to `PPOTrainer()` as a module?
2,353
ccs96307
2024-11-14T12:55:25
I'm hopeful that https://github.com/huggingface/trl/pull/2344 will address this issue! :raised_hands:
2,353
qgallouedec
2024-11-23T16:36:38
Thanks for this detailed report. The easiest fix is probably to remove all columns in `dataset.map`:

```python
dataset.map(..., remove_columns=dataset.column_names)
```

What do you think? Would you like to make a PR to fix this?
2,351
HuggingFaceDocBuilderDev
2024-11-11T23:46:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2350). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,350
HuggingFaceDocBuilderDev
2024-11-11T23:17:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2349). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,349
HuggingFaceDocBuilderDev
2024-11-11T21:19:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2348). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,348
muellerzr
2024-11-11T22:33:16
Beautiful! 🔥
2,348
HuggingFaceDocBuilderDev
2024-11-11T13:32:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2347). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,347
qgallouedec
2024-11-11T12:30:12
Can you point to the "previous version" you are referring to?
2,346
qgallouedec
2024-11-11T16:17:38
I think it has been like this from the initial implementation (see #2020)
2,346
Galaxy-Husky
2024-11-11T16:56:32
> I think it has been like this from the initial implementation (see #2020)

Sorry, I didn't say that right. I mean that before v0.11.0 there was no `maybe_apply_chat_template`. For example, the DPO dataset was preprocessed like this: https://github.com/huggingface/trl/blob/55cc4b1076144b74a6ce5d07557b7f664b1de8d9/examples/scripts/dpo.py#L156-L160 Since the code has been refactored, I'm not sure whether there was a generation prompt or not. If so, could you please point out where it was implemented?
2,346
qgallouedec
2024-11-11T17:23:09
Yes the example code was wrong, you need to add a generation prompt at the end of the prompt.
2,346
Galaxy-Husky
2024-11-11T17:24:31
> Yes the example code was wrong, you need to add a generation prompt at the end of the prompt. I see. Thanks a lot!
2,346
HuggingFaceDocBuilderDev
2024-11-11T12:02:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2345). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,345
qgallouedec
2024-11-11T12:21:29
Why do you need the model to be in eval mode? Can we use inference mode in the forward pass instead?
2,345
kashif
2024-11-14T10:29:56
@qgallouedec using inference mode, so there should be no unexpected behaviour.
2,345
qgallouedec
2024-11-11T19:51:09
very nice @ccs96307! looking into details
2,344
HuggingFaceDocBuilderDev
2024-11-11T19:57:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2344). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,344
qgallouedec
2024-11-18T10:54:04
Thanks a lot @ccs96307 for your contribution!
2,344
HuggingFaceDocBuilderDev
2024-11-11T12:52:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2343). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,343
qgallouedec
2024-11-11T13:04:58
It should be fixed by #2325. Could you confirm?
2,342
asparius
2024-11-11T22:58:45
The saving issue is solved, but training duration has increased significantly: 1 million episodes take 300+ hours on an A100. Is this expected? Is there any reference number to compare with?
2,342
qgallouedec
2024-11-14T11:09:48
I can't reproduce:

```
# v0.12.1 (includes the fix); transformers 4.47 dev (blue)
/fsx/qgallouedec/trl/examples/scripts/rloo/rloo_tldr.py \
    --output_dir models/minimal/rloo_tldr \
    --dataset_name trl-internal-testing/tldr-preference-sft-trl-style \
    --dataset_test_split validation \
    --num_ppo_epochs 2 \
    --num_mini_batches 2 \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 16 \
    --total_episodes 1000 \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \
    --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \
    --local_rollout_forward_batch_size 16 \
    --missing_eos_penalty 1.0 \
    --stop_token eos \
    --kl_coef 0.03 \
    --save_strategy steps \
    --save_steps 10000 \
    --eval_strategy steps \
    --eval_steps 1000 \
    --report_to wandb
```

```
# TRL v0.11 (doesn't include the fix); transformers v4.45 (red)
/fsx/qgallouedec/trl/examples/scripts/rloo/rloo_tldr.py \
    --output_dir models/minimal/rloo_tldr \
    --num_ppo_epochs 2 \
    --num_mini_batches 2 \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 16 \
    --total_episodes 1000 \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \
    --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \
    --local_rollout_forward_batch_size 16 \
    --missing_eos_penalty 1.0 \
    --stop_token eos \
    --kl_coef 0.03 \
    --save_strategy steps \
    --save_steps 10000 \
    --eval_strategy steps \
    --eval_steps 1000 \
    --report_to wandb
```

![W B Chart 14_11_2024, 12_08_20](https://github.com/user-attachments/assets/eed3ec12-9b00-4860-b356-f50c68a9e6ee)
2,342
Shreyas-Bhat
2024-11-14T15:52:49
Hi @shashankg7, I have the exact same question. Do you have the answer to this? Thanks!
2,341
shashankg7
2024-11-14T16:01:28
Kind of. To train in mini-batch and multi-epoch mode with samples collected from the current policy, plain REINFORCE/policy-gradient will not work, since the model drifts away from the policy used to collect the data. The importance sampling trick is required to account for the change in action distribution. But that's just my guess; there might be some other reason as well.
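As a quick illustration of that importance-sampling correction, here is a generic PPO-style sketch (illustrative names, not TRL's exact implementation):

```python
import torch

def clipped_surrogate(logp_new: torch.Tensor, logp_old: torch.Tensor,
                      advantages: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    # Importance weight between the current policy and the policy that collected the data.
    ratio = torch.exp(logp_new - logp_old)
    # Clipping the ratio keeps multi-epoch updates on stale samples stable.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```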
2,341
Shreyas-Bhat
2024-11-14T16:11:27
Thanks a lot for your prompt response, @shashankg7! That makes more sense now. I had another question and was wondering if you face the same: during training, do your model logits tend toward large negative values (often -inf)?
2,341
qgallouedec
2024-11-10T03:01:22
We know that a lot of notebooks/docs are outdated. Sorry for the inconvenience. It was a deliberate choice that has allowed us to move faster on the lib's evolution. For more information, see https://github.com/huggingface/trl/pull/2174#issuecomment-2399843454. But you can be sure that it will soon be completely up to date. Most docs and notebooks should work with `trl==0.11`. I agree with you that the notebooks should mention it. Feel free to open a PR in that sense if you want to contribute.
2,340
Debolena7
2024-11-10T11:12:12
Thank you so much for your prompt reply. Changing the package trl version resolved the errors. I have been trying several code examples of rlhf from huggingface and also from youtube for a week now, and all had multiple issues. Was stuck for so many days. Thanks again..
2,340
Mrinh212375
2024-11-14T07:31:27
@Debolena7 @qgallouedec ...

```python
config = PPOConfig(
    #model_name="google/gemma-2-2b-it",
    learning_rate=1.41e-5,
    mini_batch_size=5,
    batch_size=20,
    output_dir='/kaggle/working/'
)

ppo_trainer = PPOTrainer(
    config=config,
    processing_class='PreTrainedTokenizerBase',
    policy=model,
    ref_policy=ref_model,
    reward_model=rm_model,
    #tokenizer=tokenizer,
    train_dataset=ppo_training_dataset,
    data_collator=collator,
)
```

When I try to run the above code snippet, I get the following error:

![image](https://github.com/user-attachments/assets/9d3c0a08-2276-4a58-9c81-e2bf5e52c955)

How do I pass the module from the HF PreTrainedWrapper class?
2,340
ioana-ghiban-arm
2024-11-19T09:55:52
Hi! I'm facing quite a few errors when attempting to run the 'toxicity' example as well. I'm currently stuck on this error: `TypeError: PPOTrainer.__init__() got multiple values for argument 'processing_class'`. I would immensely appreciate an updated end-to-end working demo of this. Thank you in advance.
2,340
Debolena7
2024-11-19T20:28:25
> policy = model,
> ref_policy = ref_model,
> reward_model = rm_model,

@Mrinh212375 I faced the same issue. This error is basically caused by the value model not being passed in the `PPOTrainer` arguments, so `value_model` defaults to `None`, which leads to the error. To solve it, you can either initialize a value model, e.g. `value_model = AutoModelForSequenceClassification.from_pretrained("model_name")`, and pass it into the `PPOTrainer`, or simply use the old `trl==0.11.0`.
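Putting the pieces from this thread together, a rough sketch of the construction (the keyword arguments mirror the snippets above and may differ across TRL versions; `model`, `ref_model`, `rm_model`, `ppo_training_dataset`, and `collator` are placeholders from the earlier snippet):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import PPOConfig, PPOTrainer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
value_model = AutoModelForSequenceClassification.from_pretrained("google/gemma-2-2b-it")

config = PPOConfig(learning_rate=1.41e-5, mini_batch_size=5, batch_size=20, output_dir="/kaggle/working/")

ppo_trainer = PPOTrainer(
    config=config,
    processing_class=tokenizer,   # pass the tokenizer object, not a string
    policy=model,
    ref_policy=ref_model,
    reward_model=rm_model,
    value_model=value_model,      # required in recent TRL versions
    train_dataset=ppo_training_dataset,
    data_collator=collator,
)
ppo_trainer.train()
```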
2,340
Debolena7
2024-11-19T20:38:20
> Hi! I'm facing quite a few errors when attempting to run the 'toxicity' example as well. I'm currently stuck on this error: `TypeError: PPOTrainer.__init__() got multiple values for argument 'processing_class'`. I would immensely appreciate an updated end-to-end working demo of this. Thank you in advance.

@ioana-ghiban-arm You can pass your model's tokenizer into the `processing_class` argument of `PPOTrainer`: `tokenizer = AutoTokenizer.from_pretrained(model_id)`

```
ppo_trainer = PPOTrainer(config=config, processing_class=tokenizer, .................)
```
2,340
ioana-ghiban-arm
2024-11-20T08:59:29
@Debolena7 Thank you for your help! You're right, I tried your suggestion and I think the execution got further. Now I'm getting the error I'd see when running a simplified version of the script. Do you perhaps have some troubleshooting steps for this error: `AttributeError: 'AutoModelForCausalLMWithValueHead' object has no attribute 'generation_config'`? TIA
2,340
Debolena7
2024-11-20T10:39:23
It seems you have used something like `model = AutoModelForCausalLMWithValueHead.from_pretrained(model_id)`, which leads to the error. You can use `from transformers import GenerationConfig` and then `model.generation_config = GenerationConfig()` after initialization. But I would suggest it is best to use the old `trl==0.11.0`; otherwise, you will encounter more errors.
2,340
ioana-ghiban-arm
2024-11-22T14:12:02
Thank you for your help. Indeed, changing to `trl==0.11` does get the training going. However, I'm seeing this warning: `UserWarning: The average ratio of batch (...) exceeds threshold 10.00. Skipping batch.`, which as mentioned [here](https://github.com/huggingface/trl/issues/1031) _suggests that the updates to the policy are too large, which could lead to instability in the training_. The maintainer suggested using [ppo.py](https://github.com/huggingface/trl/blob/main/examples/scripts/ppo/ppo.py) instead, so I tried adapting that script to use the toxicity model and dataset. However, as that is an updated script, I'm assuming it should be run with the latest version of TRL provided by the repo. That leads me back to the error this thread started with. Any suggestion to help me stop going in circles and run a first round of fine-tuning on this model would be greatly appreciated, thank you.
2,340
imrankh46
2024-11-08T06:46:07
@kashif any suggestions?
2,338
Sunrepe
2024-11-11T14:58:38
### I encountered the same problem.

My system info is:

```
- Python version: 3.10.14
- PyTorch version: 2.4.1
- CUDA device(s): NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.46.2
- Accelerate version: 0.34.2
- Accelerate config: not found
- Datasets version: 3.0.1
- HF Hub version: 0.25.1
- TRL version: 0.12.0
- bitsandbytes version: not installed
- DeepSpeed version: 0.15.1
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 0.28.0
- PEFT version: 0.13.0
```

I am using the code in `example/script/sft.py`. I have downloaded the dataset and model locally, so I run the following terminal command:

```bash
python sft.py \
    --model_name_or_path /data1/llm_models/qwen-05B \
    --dataset_name /data1/datasets/trl-lib/Capybara \
    --learning_rate 2.0e-4 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 100 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16 \
    --output_dir Qwen2-0.5B-SFT
```

## However, I am encountering the following issue:

```python
Traceback (most recent call last):
  File "/data1/tmpzxf/research/SwiftSage/df_models/sft.py", line 106, in <module>
    trainer.train()
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
    return inner_training_loop(
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 3579, in training_step
    loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 3633, in compute_loss
    outputs = model(**inputs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 176, in forward
    inputs, module_kwargs = self.scatter(inputs, kwargs, self.device_ids)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 198, in scatter
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 78, in scatter_kwargs
    scattered_kwargs = scatter(kwargs, target_gpus, dim) if kwargs else []
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 64, in scatter
    res = scatter_map(inputs)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in scatter_map
    return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))]
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 51, in scatter_map
    return list(zip(*map(scatter_map, obj)))
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 47, in scatter_map
    return Scatter.apply(target_gpus, None, dim, obj)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/autograd/function.py", line 574, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/_functions.py", line 96, in forward
    outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
  File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/comm.py", line 188, in scatter
    return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: chunk expects at least a 1-dimensional tensor
```
2,338
qGentry
2024-11-11T17:53:28
Looks like "num_items_in_batch" is getting added to the batch dict at some point by trl/tokenizer/collator and it is a 0-dim constant that is getting scattered across data parallel replicas but it can't.
2,338
hua-777
2024-11-12T22:06:44
Isolating my training to 1 GPU fixed this problem for me.

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```
2,338
Leo-T-Zang
2024-11-14T00:49:59
try transformers 4.45.1?
2,338
oscar50513
2024-11-14T10:07:41
I successfully tested Transformers 4.46.0!!!!
2,338
imrankh46
2024-11-14T13:38:12
I had some NaN entries in the dataset. I also changed the code a little bit, so it's working for me now.
2,338
yxdr
2024-11-15T04:22:45
I encountered the same problem when I used the following command to run my training script:

```
CUDA_VISIBLE_DEVICES=0,1 python train.py \
    --seed=1 \
    --model_path=$MODEL_PATH \
    --processed_data_dir=$PROCESSED_DATA_DIR \
    --output_dir=$OUTPUT_DIR \
    --learning_rate=5e-6 \
    --epochs=1 \
    --save_freq=10 \
    --eval_freq=10 \
    --num_warmup_steps=30
```

But when I switched to using Hugging Face Accelerate to run it, the problem disappeared:

```
CUDA_VISIBLE_DEVICES=0,1 accelerate launch --num_processes 2 train.py \
    --seed=1 \
    --model_path=$MODEL_PATH \
    --processed_data_dir=$PROCESSED_DATA_DIR \
    --output_dir=$OUTPUT_DIR \
    --learning_rate=5e-6 \
    --epochs=1 \
    --save_freq=10 \
    --eval_freq=10 \
    --num_warmup_steps=30
```

Additionally, if you use only one GPU, there should be no problem either.
2,338
Suman-punshi
2024-11-15T08:31:11
I tried all the solutions above, reverting to a single GPU and using accelerate, but they still do not solve the problem for me.
2,338
kashif
2024-11-15T08:39:18
@Suman-punshi what is your TRL Env and versions?
2,338
Suman-punshi
2024-11-15T08:41:44
@kashif my TRL version 0.12.0
2,338
qgallouedec
2024-11-10T03:07:29
I agree. I'm not sure what the best way to do that is, though, because it still has to work with the precomputation of ref logprobs (that's why we initially set `"shuffle": False`). Any idea?
2,337
HuggingFaceDocBuilderDev
2024-11-07T13:26:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2336). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,336
littleshutong
2024-11-08T11:59:05
trl/trainer/ppo_trainer.py ![image](https://github.com/user-attachments/assets/4f4ba132-f48a-48e2-8225-2f3c35b4df57) However, it is necessary to consider passing the parameters over.
2,335
ccs96307
2024-11-10T17:23:12
I encountered this issue previously and temporarily worked around it by adjusting the accelerate version to 0.34.2. Here are the versions I used:
- accelerate==0.34.2
- torch==2.5.1
- transformers==4.46.2
- deepspeed==0.15.4
2,335
Galaxy-Husky
2024-11-20T07:00:40
@qgallouedec hi, do you have any suggestions?
2,334
qgallouedec
2024-11-07T21:02:47
As far as I understand, the grad accum thing is only an issue with SFT, right?
2,333
kashif
2024-11-07T21:04:15
Right, I think it's more about the updated kernels.
2,333