user (string, 3-28 chars) | created_at (timestamp[us]) | body (string, 1-173k chars) | issue_number (int64, 1-2.38k)
---|---|---|---|
HuggingFaceDocBuilderDev | 2024-11-21T19:59:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2381). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,381 |
HuggingFaceDocBuilderDev | 2024-11-21T19:57:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2380). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,380 |
qgallouedec | 2024-11-21T19:33:35 | Thanks!
| 2,379 |
qgallouedec | 2024-11-21T19:35:56 | Pushed to hub here https://huggingface.co/datasets/trl-lib/hh-rlhf-helpful-base | 2,379 |
HuggingFaceDocBuilderDev | 2024-11-21T19:37:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2379). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,379 |
qgallouedec | 2024-11-20T09:44:57 | Please don't use images when referring to code next time. Use a [permalink to code](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-a-permanent-link-to-a-code-snippet).
---
> train_dataset should be in the form of dict after processing the map
No, `map` returns a `Dataset` instance (see [`datasets.map` documentation](https://huggingface.co/docs/datasets/en/process#map)). Unless you remove these columns (prompt, completion) from the dataset, they remain. | 2,374 |
a7217339 | 2024-11-20T09:50:36 | Thank you for your guidance. As a beginner, I am not yet proficient in grammar. Sorry. | 2,374 |
HuggingFaceDocBuilderDev | 2024-11-20T09:36:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2373). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,373 |
HuggingFaceDocBuilderDev | 2024-11-20T08:39:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2372). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,372 |
kashif | 2024-11-20T08:40:44 | thanks @qgallouedec | 2,372 |
qgallouedec | 2024-11-20T09:11:02 | Failing test not related (same as https://github.com/huggingface/trl/pull/2370#issuecomment-2486585773) | 2,372 |
qgallouedec | 2024-11-20T07:45:08 | Thanks for reporting. Please provide more info, like the training arguments etc | 2,371 |
HuggingFaceDocBuilderDev | 2024-11-19T18:49:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2370). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,370 |
qgallouedec | 2024-11-19T19:34:33 | Failing test not related (fixed in #2373) | 2,370 |
lewtun | 2024-11-19T12:30:06 | Yes I agree this symlink business was not a great choice for the chat CLI. Let's revisit later | 2,369 |
qgallouedec | 2024-11-19T08:15:12 | > Can DPOTrainer support inputting encoded token IDs to customize the calculation of different attention masks
No, and it won't be supported unless we are provided with a good reason to support it.
> excluding the prompt part from loss computation?
Actually, that's how DPO works by default. See
https://github.com/huggingface/trl/blob/b80c1a6fb8754c578f7178213e56d780abbe96d5/trl/trainer/dpo_trainer.py#L1089-L1092 | 2,368 |
LBJ6666 | 2024-11-19T08:55:16 | @qgallouedec Thank you for your response | 2,368 |
gmonair | 2024-11-20T12:41:37 | I think I found the issue. For posterity, it seems that it was caused by setting torch_dtype to "half" instead of "auto". User error. | 2,367 |
qgallouedec | 2024-11-19T05:43:32 | Please use English only | 2,366 |
qgallouedec | 2024-11-20T13:05:00 | Probably linked to #2127. Closing as the title is not in English and the question isn't clear enough for us to help you. Feel free to open a clearer issue that complies with our guidelines | 2,366 |
HuggingFaceDocBuilderDev | 2024-11-18T16:18:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2365). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,365 |
qgallouedec | 2024-11-18T12:58:40 | Thanks! | 2,364 |
HuggingFaceDocBuilderDev | 2024-11-18T13:03:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2364). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,364 |
kashif | 2024-11-21T09:34:50 | yes would welcome distillation trainers! | 2,361 |
kashif | 2024-11-18T08:44:55 | thanks @bartoszzuk perhaps it's better to set the `self.data_collator` to the default one if it is none and then use `self.data_collator` in the data loaders? | 2,360 |
kashif | 2024-11-18T10:17:24 | you might need to run `make precommit` in the root of the TRL to fix styling | 2,360 |
HuggingFaceDocBuilderDev | 2024-11-18T10:20:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2360). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,360 |
HuggingFaceDocBuilderDev | 2024-11-15T14:07:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2359). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,359 |
ccs96307 | 2024-11-14T12:34:24 | Hi, it looks like the error arises because the `PPOTrainer` class expects a `value_model` to be defined and passed in, which appears to be required in the current TRL version. The `disable_dropout_in_model` method is likely encountering `NoneType` because `value_model` wasn’t specified, and thus defaults to `None`.
Hope this helps! | 2,357 |
Mrinh212375 | 2024-11-15T05:33:13 | > Hi, it looks like the error arises because the `PPOTrainer` class expects a `value_model` to be defined and passed in, which appears to be required in the current TRL version. The `disable_dropout_in_model` method is likely encountering `NoneType` because `value_model` wasn’t specified, and thus defaults to `None`.
>
> Hope this helps!
Hi, thanks... I have passed value_model the same as policy_model; I thought it was optional, so I didn't pass anything... anyway, the error is gone.
Also, I can call the ppo_trainer.train() method directly, right? Unlike the older version, there is no need to write a PPO training loop... Can you please clarify this point? | 2,357 |
ccs96307 | 2024-11-15T13:35:40 | Glad to hear the error is resolved!
Yes, as far as I know, you can directly call the `ppo_trainer.train()` method without needing to write a training loop. | 2,357 |
qgallouedec | 2024-11-14T09:05:42 | Does "from scratch" mean the opposite of "finetuning" for you? Please clarify your question | 2,356 |
kalocide | 2024-11-16T00:47:00 | why would you pre-train with RL? | 2,356 |
kashif | 2024-11-21T10:05:15 | just to debug, can you kindly try to see if you get the same issue when you do not pass a validation dataset?
Also, can you check what happens when you explicitly pass `num_train_epochs=1` as an option to the `DPOConfig`? Thanks!
| 2,355 |
Mrinh212375 | 2024-11-14T07:21:27 | Hi... I think we need to create a copy of the policy model using the create_reference_model() function... is that right?
I'm facing another problem in the new PPOTrainer()... according to the documentation we need to pass a **module**, unlike the previous version (HF PreTrainedWrapper).
How do I get the HF PreTrainedWrapper models and pass them to PPOTrainer() as a module? | 2,353 |
ccs96307 | 2024-11-14T12:55:25 | I'm hopeful that https://github.com/huggingface/trl/pull/2344 will address this issue! :raised_hands: | 2,353 |
HuggingFaceDocBuilderDev | 2024-11-11T23:46:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2350). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,350 |
HuggingFaceDocBuilderDev | 2024-11-11T23:17:46 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2349). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,349 |
HuggingFaceDocBuilderDev | 2024-11-11T21:19:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2348). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,348 |
muellerzr | 2024-11-11T22:33:16 | Beautiful! 🔥 | 2,348 |
HuggingFaceDocBuilderDev | 2024-11-11T13:32:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2347). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,347 |
qgallouedec | 2024-11-11T12:30:12 | Can you point to the "previous version" you are referring to? | 2,346 |
qgallouedec | 2024-11-11T16:17:38 | I think it has been like this from the initial implementation (see #2020) | 2,346 |
Galaxy-Husky | 2024-11-11T16:56:32 | > I think it has been like this from the initial implementation (see #2020)
Sorry, I didn't say that right. I mean before v0.11.0, there was no `maybe_apply_chat_template` back then. For example, the dpo dataset was preprocessed like:
https://github.com/huggingface/trl/blob/55cc4b1076144b74a6ce5d07557b7f664b1de8d9/examples/scripts/dpo.py#L156-L160
Since the code has been refactored, I'm not sure if there was a generation prompt or not. If so, could you please point out where it was implemented? | 2,346 |
qgallouedec | 2024-11-11T17:23:09 | Yes the example code was wrong, you need to add a generation prompt at the end of the prompt. | 2,346 |
Galaxy-Husky | 2024-11-11T17:24:31 | > Yes the example code was wrong, you need to add a generation prompt at the end of the prompt.
I see. Thanks a lot! | 2,346 |
HuggingFaceDocBuilderDev | 2024-11-11T12:02:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2345). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,345 |
qgallouedec | 2024-11-11T12:21:29 | Why do you need the model to be in eval mode?
Can we use the inference mode in forward instead? | 2,345 |
kashif | 2024-11-14T10:29:56 | @qgallouedec using inference mode so there should be no unexpected behaviour | 2,345 |
qgallouedec | 2024-11-11T19:51:09 | very nice @ccs96307! looking into details | 2,344 |
HuggingFaceDocBuilderDev | 2024-11-11T19:57:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2344). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,344 |
qgallouedec | 2024-11-18T10:54:04 | Thanks a lot @ccs96307 for your contribution! | 2,344 |
HuggingFaceDocBuilderDev | 2024-11-11T12:52:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2343). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,343 |
qgallouedec | 2024-11-11T13:04:58 | It should be fixed by #2325. Could you confirm? | 2,342 |
asparius | 2024-11-11T22:58:45 | The saving issue is solved, but training duration has increased significantly: 1 million episodes take 300+ hours on an A100. Is this expected? Is there any reference number to compare with? | 2,342 |
qgallouedec | 2024-11-14T11:09:48 | I can't reproduce:
```
# v0.12.1 (includes the fix); transformers 4.47 dev (blue)
/fsx/qgallouedec/trl/examples/scripts/rloo/rloo_tldr.py --output_dir models/minimal/rloo_tldr --dataset_name trl-internal-testing/tldr-preference-sft-trl-style --dataset_test_split validation --num_ppo_epochs 2 --num_mini_batches 2 --learning_rate 3e-6 --per_device_train_batch_size 4 --gradient_accumulation_steps 16 --total_episodes 1000 --model_name_or_path EleutherAI/pythia-1b-deduped --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr --local_rollout_forward_batch_size 16 --missing_eos_penalty 1.0 --stop_token eos --kl_coef 0.03 --save_strategy steps --save_steps 10000 --eval_strategy steps --eval_steps 1000 --report_to wandb
```
```
# TRL v0.11 (doesn't include the fix); transformers v4.45 (red)
/fsx/qgallouedec/trl/examples/scripts/rloo/rloo_tldr.py --output_dir models/minimal/rloo_tldr --num_ppo_epochs 2 --num_mini_batches 2 --learning_rate 3e-6 --per_device_train_batch_size 4 --gradient_accumulation_steps 16 --total_episodes 1000 --model_name_or_path EleutherAI/pythia-1b-deduped --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr --local_rollout_forward_batch_size 16 --missing_eos_penalty 1.0 --stop_token eos --kl_coef 0.03 --save_strategy steps --save_steps 10000 --eval_strategy steps --eval_steps 1000 --report_to wandb
```
![W B Chart 14_11_2024, 12_08_20](https://github.com/user-attachments/assets/eed3ec12-9b00-4860-b356-f50c68a9e6ee)
| 2,342 |
Shreyas-Bhat | 2024-11-14T15:52:49 | Hi @shashankg7 ,
I have the exact same question. Do you have the answer to this?
Thanks | 2,341 |
shashankg7 | 2024-11-14T16:01:28 | Kind of. To train in a mini-batch and multi-epoch mode with the samples collected from the current policy, plain REINFORCE/policy-gradient will not work, since the model drifts away from the policy used to collect the data. The importance sampling trick is required to account for the change in action distribution. But that's just my guess; there might be some other reason as well. | 2,341 |
Shreyas-Bhat | 2024-11-14T16:11:27 | Thanks a lot for your prompt response, @shashankg7! That makes more sense now. I had another question and was wondering if you face the same issue: during training, do your model logits tend toward high negative values (often -inf)?
| 2,341 |
qgallouedec | 2024-11-10T03:01:22 | We know that a lot of notebooks/docs are outdated. Sorry for the inconvenience.
It was a deliberate choice that has allowed us to move faster on the lib evolution. For more information, see https://github.com/huggingface/trl/pull/2174#issuecomment-2399843454. But you can be sure that it will soon be completely up to date.
Most doc and notebooks should work with `trl==0.11`
I agree with you that the notebooks should mention it. Feel free to open a PR in that sense if you want to contribute | 2,340 |
Debolena7 | 2024-11-10T11:12:12 | Thank you so much for your prompt reply. Changing the package trl version resolved the errors. I have been trying several code examples of rlhf from huggingface and also from youtube for a week now, and all had multiple issues. Was stuck for so many days. Thanks again.. | 2,340 |
Mrinh212375 | 2024-11-14T07:31:27 | @Debolena7 @qgallouedec ...
````
config = PPOConfig(
    # model_name="google/gemma-2-2b-it",
    learning_rate=1.41e-5,
    mini_batch_size=5,
    batch_size=20,
    output_dir='/kaggle/working/',
)

ppo_trainer = PPOTrainer(
    config=config,
    processing_class='PreTrainedTokenizerBase',
    policy=model,
    ref_policy=ref_model,
    reward_model=rm_model,
    # tokenizer=tokenizer,
    train_dataset=ppo_training_dataset,
    data_collator=collator,
)
````
when I'm trying to run the above code snippet, I'm getting the following error -
![image](https://github.com/user-attachments/assets/9d3c0a08-2276-4a58-9c81-e2bf5e52c955)
How to pass the module from the HF preTrainedWrapper class ? | 2,340 |
ioana-ghiban-arm | 2024-11-19T09:55:52 | hi! I'm facing quite a few errors when attempting to run the 'toxicity' example as well. Currently stuck on this error:
`TypeError: PPOTrainer.__init__() got multiple values for argument 'processing_class'`. Would immensely appreciate an updated end-to-end working demo of this. Thank you in advance. | 2,340 |
Debolena7 | 2024-11-19T20:28:25 | > policy = model,
> ref_policy = ref_model,
> reward_model = rm_model,
@Mrinh212375
I faced the same issue. This error is basically caused because the value model is not being passed in the 'PPOTrainer' arguments, so by default the value_model is None, which leads to the error.
To solve it, you can either initialize a value model like:
`value_model = AutoModelForSequenceClassification.from_pretrained("model_name")` and pass the value model into the 'PPOTrainer',
OR simply use the old `trl==0.11.0` | 2,340 |
Debolena7 | 2024-11-19T20:38:20 | > hi! I'm facing quite a few errors when attempting running the 'toxicity' example as well. Currently stuck on this error: `TypeError: PPOTrainer.__init__() got multiple values for argument 'processing_class'`. Would immensely appreciate an updated end-to-end working demo of this. Thank you in advance.
@ioana-ghiban-arm
You can pass your model tokenizer into the 'processing_class' argument of PPOTrainer.
`tokenizer = AutoTokenizer.from_pretrained(model_id)`
```
ppo_trainer = PPOTrainer(config=config,
processing_class = tokenizer, .................)
```
| 2,340 |
ioana-ghiban-arm | 2024-11-20T08:59:29 | @Debolena7 thank you for your help! you're right, I tried your suggestion and I think the execution got further. Now I'm getting the error I'd see when running a simplified version of the script. Do you perhaps have some troubleshooting steps for this error: `AttributeError: 'AutoModelForCausalLMWithValueHead' object has no attribute 'generation_config'`?
TIA | 2,340 |
Debolena7 | 2024-11-20T10:39:23 | It seems you have used something like `model = AutoModelForCausalLMWithValueHead.from_pretrained(model_id)`, which led to the error. You can use:
`from transformers import GenerationConfig`
`model.generation_config = GenerationConfig()`, after initialization.
But I would suggest it is best to use the old trl==0.11.0; otherwise, you will encounter more errors. | 2,340 |
imrankh46 | 2024-11-08T06:46:07 | @kashif any suggestions?
| 2,338 |
Sunrepe | 2024-11-11T14:58:38 | ### I encountered the same problem.
My System Info is:
```
- Python version: 3.10.14
- PyTorch version: 2.4.1
- CUDA device(s): NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A800-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.46.2
- Accelerate version: 0.34.2
- Accelerate config: not found
- Datasets version: 3.0.1
- HF Hub version: 0.25.1
- TRL version: 0.12.0
- bitsandbytes version: not installed
- DeepSpeed version: 0.15.1
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 0.28.0
- PEFT version: 0.13.0
```
I am using the code in `example/script/sft.py`.
I have downloaded the dataset and model locally.
So, I run the following terminal command:
```bash
python sft.py \
--model_name_or_path /data1/llm_models/qwen-05B \
--dataset_name /data1/datasets/trl-lib/Capybara \
--learning_rate 2.0e-4 \
--num_train_epochs 1 \
--packing \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 8 \
--gradient_checkpointing \
--logging_steps 25 \
--eval_strategy steps \
--eval_steps 100 \
--use_peft \
--lora_r 32 \
--lora_alpha 16 \
--output_dir Qwen2-0.5B-SFT
```
## However, I am encountering the following issue:
```python
Traceback (most recent call last):
File "/data1/tmpzxf/research/SwiftSage/df_models/sft.py", line 106, in <module>
trainer.train()
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
return inner_training_loop(
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 3579, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/transformers/trainer.py", line 3633, in compute_loss
outputs = model(**inputs)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 176, in forward
inputs, module_kwargs = self.scatter(inputs, kwargs, self.device_ids)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 198, in scatter
return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 78, in scatter_kwargs
scattered_kwargs = scatter(kwargs, target_gpus, dim) if kwargs else []
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 64, in scatter
res = scatter_map(inputs)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in scatter_map
return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))]
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 51, in scatter_map
return list(zip(*map(scatter_map, obj)))
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 47, in scatter_map
return Scatter.apply(target_gpus, None, dim, obj)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/autograd/function.py", line 574, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/_functions.py", line 96, in forward
outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
File "/data1/envs/miniconda3/envs/tdt/lib/python3.10/site-packages/torch/nn/parallel/comm.py", line 188, in scatter
return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: chunk expects at least a 1-dimensional tensor
```
| 2,338 |
qGentry | 2024-11-11T17:53:28 | Looks like "num_items_in_batch" is getting added to the batch dict at some point by the trl tokenizer/collator, and it is a 0-dim constant that is getting scattered across data-parallel replicas, which fails. | 2,338 |
hua-777 | 2024-11-12T22:06:44 | Isolating my training to 1 GPU fixed this problem for me.
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```
| 2,338 |
Leo-T-Zang | 2024-11-14T00:49:59 | try transformers 4.45.1? | 2,338 |
oscar50513 | 2024-11-14T10:07:41 | I successfully tested Transformers 4.46.0!!!! | 2,338 |
imrankh46 | 2024-11-14T13:38:12 | I have some NaN entries in the dataset.
I also changed the code a little bit, so it is working for me.
| 2,338 |
yxdr | 2024-11-15T04:22:45 | I encountered the same problem when I used the following command to run my training script.
```
CUDA_VISIBLE_DEVICES=0,1 python train.py \
--seed=1 \
--model_path=$MODEL_PATH \
--processed_data_dir=$PROCESSED_DATA_DIR \
--output_dir=$OUTPUT_DIR \
--learning_rate=5e-6 \
--epochs=1 \
--save_freq=10 \
--eval_freq=10 \
--num_warmup_steps=30
```
But when I switched to using Huggingface Accelerate to run it, the problem disappeared.
```
CUDA_VISIBLE_DEVICES=0,1 accelerate launch --num_processes 2 train.py \
--seed=1 \
--model_path=$MODEL_PATH \
--processed_data_dir=$PROCESSED_DATA_DIR \
--output_dir=$OUTPUT_DIR \
--learning_rate=5e-6 \
--epochs=1 \
--save_freq=10 \
--eval_freq=10 \
--num_warmup_steps=30
```
Additionally, if you use only one GPU, there should be no problem either. | 2,338 |
Suman-punshi | 2024-11-15T08:31:11 | I tried all the solutions above, reverting to a single GPU and using accelerate, but it is still not solving the problem for me | 2,338 |
kashif | 2024-11-15T08:39:18 | @Suman-punshi what is your TRL Env and versions? | 2,338 |
Suman-punshi | 2024-11-15T08:41:44 | @kashif my TRL version is 0.12.0
| 2,338 |
qgallouedec | 2024-11-10T03:07:29 | I agree.
Not sure what's the best way to do that though, because it still has to work with the precomputing of ref logprobs. (that's why we initially set `"shuffle": False`). Any idea? | 2,337 |
HuggingFaceDocBuilderDev | 2024-11-07T13:26:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2336). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,336 |
littleshutong | 2024-11-08T11:59:05 | trl/trainer/ppo_trainer.py
![image](https://github.com/user-attachments/assets/4f4ba132-f48a-48e2-8225-2f3c35b4df57)
However, it is necessary to consider passing the parameters over.
| 2,335 |
ccs96307 | 2024-11-10T17:23:12 | I encountered this issue previously and temporarily worked around it by adjusting the accelerate version to 0.34.2. Here are the versions I used:
- accelerate==0.34.2
- torch==2.5.1
- transformers==4.46.2
- deepspeed==0.15.4 | 2,335 |
Galaxy-Husky | 2024-11-20T07:00:40 | @qgallouedec hi, do you have any suggestions? | 2,334 |
qgallouedec | 2024-11-07T21:02:47 | As far as I understand, the grad accum thing is only an issue with SFT right?
| 2,333 |
kashif | 2024-11-07T21:04:15 | right i think its more about the updated kernels | 2,333 |
HuggingFaceDocBuilderDev | 2024-11-07T21:25:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2333). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,333 |
ByronHsu | 2024-11-07T22:01:15 | Yes, grad accum is only used for SFT. Besides grad accum, we also have other improvements | 2,333 |
qgallouedec | 2024-11-08T00:33:46 | I approve, as this is an important issue affecting the most widely used trainer. (Thanks for solving it!)
For the record, generally speaking, I won’t raise the minimum version requirement unless a new feature from the dependency is needed in our codebase. | 2,333 |
kashif | 2024-11-06T12:54:07 | thanks @yanghh2000 would it be possible to add a test? | 2,332 |
yanghh2000 | 2024-11-06T13:03:11 | Hi, I am glad to help, but I am not sure how to add a test for this. Is there any guideline to test a PR? | 2,332 |
yanghh2000 | 2024-11-06T13:15:42 | Oh, I have read the guidelines in trl/CONTRIBUTING.md, so what I need to do is add a test.py and commit it under the test/ dir? | 2,332 |
kashif | 2024-11-06T13:19:41 | yes in the `dpo_trainer` tests file | 2,332 |
qgallouedec | 2024-11-06T13:42:57 | Tbh I'm not sure it is possible to test it considering it's in the middle of the method. | 2,332 |
qgallouedec | 2024-11-06T09:18:31 | Good catch! Thanks! Do you mind opening a PR to fix that? | 2,330 |
naskimed | 2024-11-07T16:25:59 | Hey, I have the same issue using PPOTrainer: "ValueError: Please make sure to properly initialize your accelerator via `accelerator = Accelerator()` before using any functionality from the `accelerate` library".
trl: 0.13.0.dev0
transformers: 4.46.2
accelerate: 1.1.0.dev0
![Screenshot from 2024-11-07 17-24-25](https://github.com/user-attachments/assets/6f6144a3-21a7-4231-adce-1753127a602a)
| 2,329 |
KAKSIS | 2024-11-08T09:19:43 | > Hey, I have the same issue using PPOTrainer: "ValueError: Please make sure to properly initialize your accelerator via `accelerator = Accelerator()` before using any functionality from the `accelerate` library".
>
> trl: 0.13.0.dev0 transformers: 4.46.2 accelerate: 1.1.0.dev0 ![Screenshot from 2024-11-07 17-24-25](https://github.com/user-attachments/assets/6f6144a3-21a7-4231-adce-1753127a602a)
I have the same problem | 2,329 |
kongjiellx | 2024-11-08T12:03:09 | +1
with PPOTrainer | 2,329 |
macheng6 | 2024-11-11T08:23:25 | After using the version configuration below, the code can be run:
trl==0.11.4, accelerate==0.33.0 | 2,329 |
HuggingFaceDocBuilderDev | 2024-11-05T17:41:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2328). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,328 |
HuggingFaceDocBuilderDev | 2024-11-05T11:14:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2327). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,327 |
## Stars
```python
import requests
from datetime import datetime
from datasets import Dataset
import pyarrow as pa
import os


def get_stargazers(owner, repo, token):
    # Initialize the count and the page number
    page = 1
    stargazers = []
    while True:
        # Construct the URL for the stargazers with pagination
        stargazers_url = f"https://api.github.com/repos/{owner}/{repo}/stargazers?page={page}&per_page=100"

        # Send the request to GitHub API with appropriate headers
        headers = {"Accept": "application/vnd.github.v3.star+json", "Authorization": "token " + token}
        response = requests.get(stargazers_url, headers=headers)

        if response.status_code != 200:
            raise Exception(f"Failed to fetch stargazers with status code {response.status_code}: {response.text}")

        stargazers_page = response.json()

        if not stargazers_page:  # Exit the loop if there are no more stargazers to process
            break

        stargazers.extend(stargazers_page)
        page += 1  # Move to the next page

    return stargazers


token = os.environ.get("GITHUB_PAT")
stargazers = get_stargazers("huggingface", "trl", token)
stargazers = {key: [stargazer[key] for stargazer in stargazers] for key in stargazers[0].keys()}
dataset = Dataset.from_dict(stargazers)


def clean(example):
    starred_at = datetime.strptime(example["starred_at"], "%Y-%m-%dT%H:%M:%SZ")
    starred_at = pa.scalar(starred_at, type=pa.timestamp("s", tz="UTC"))
    return {"starred_at": starred_at, "user": example["user"]["login"]}


dataset = dataset.map(clean, remove_columns=dataset.column_names)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="stargazers")
```
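The pushed config can be read back with `datasets.load_dataset`. The snippet below is a minimal usage sketch, not part of the collection script: the config name and the `user`/`starred_at` columns come from the code above, while the `train` split name is an assumption (the default split created by `push_to_hub`).

```python
from datasets import load_dataset

# Assumed usage: load the "stargazers" config pushed above (split name "train" assumed).
stargazers = load_dataset("qgallouedec/trl-metrics", "stargazers", split="train")

# Sort by star date; the row index then gives the cumulative star count over time.
stargazers = stargazers.sort("starred_at")
cumulative = [(row["starred_at"], i + 1) for i, row in enumerate(stargazers)]
print(f"Total stars: {len(stargazers)}; latest starred_at: {cumulative[-1][0]}")
```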
## Pypi downloads
```python
from datasets import Dataset
from google.cloud import bigquery
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "propane-tree-432413-4c3e2b5e6b3c.json"

# Initialize a BigQuery client
client = bigquery.Client()

# Define your query
query = """
#standardSQL
WITH daily_downloads AS (
  SELECT
    DATE(timestamp) AS day,
    COUNT(*) AS num_downloads
  FROM
    `bigquery-public-data.pypi.file_downloads`
  WHERE
    file.project = 'trl'
    -- Filter for the last 12 months
    AND DATE(timestamp) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 54 MONTH) AND CURRENT_DATE()
  GROUP BY
    day
)
SELECT
  day,
  num_downloads
FROM
  daily_downloads
ORDER BY
  day DESC
"""

# Execute the query
query_job = client.query(query)

# Fetch the results
results = query_job.result()

# Convert the results to a pandas DataFrame and then to a Dataset
df = results.to_dataframe()
dataset = Dataset.from_pandas(df)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="pypi_downloads")
```
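A similarly hedged sketch for consuming the `pypi_downloads` config: the `day` and `num_downloads` columns come from the query above, while the `train` split name and the monthly aggregation are assumptions added here for illustration.

```python
import pandas as pd
from datasets import load_dataset

# Assumed usage: load the daily download counts pushed above (split name "train" assumed).
downloads = load_dataset("qgallouedec/trl-metrics", "pypi_downloads", split="train")
df = downloads.to_pandas()

# Aggregate daily counts into monthly totals.
df["day"] = pd.to_datetime(df["day"])
monthly = df.set_index("day")["num_downloads"].resample("MS").sum()
print(monthly.tail(12))  # downloads for the last 12 months
```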
## Models tagged
```python
from huggingface_hub import HfApi
from datasets import Dataset

api = HfApi()
models = api.list_models(tags="trl")
dataset_list = [{"id": model.id, "created_at": model.created_at, "likes": model.likes, "downloads": model.downloads, "tags": model.tags} for model in models]
dataset_dict = {key: [d[key] for d in dataset_list] for key in dataset_list[0].keys()}
dataset = Dataset.from_dict(dataset_dict)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="models")
```
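A short sketch for the `models` config: the fields are the ones collected above, and the `train` split name is again an assumption.

```python
from datasets import load_dataset

# Assumed usage: list the ten most-downloaded models tagged with "trl".
models = load_dataset("qgallouedec/trl-metrics", "models", split="train")
top = sorted(models, key=lambda m: m["downloads"] or 0, reverse=True)[:10]
for model in top:
    print(f'{model["id"]}: {model["downloads"]} downloads, {model["likes"]} likes')
```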
## Issues and comments
```python
import requests
from datetime import datetime
import os
from datasets import Dataset
from tqdm import tqdm

token = os.environ.get("GITHUB_PAT")


def get_full_response(url, headers, params=None):
    page = 1
    output = []
    params = params or {}
    while True:
        params = {**params, "page": page, "per_page": 100}
        response = requests.get(url, headers=headers, params=params)

        if response.status_code != 200:
            raise Exception(f"Failed to fetch issues: {response.text}")

        batch = response.json()
        if len(batch) == 0:
            break

        output.extend(batch)
        page += 1

    return output


# GitHub API URL for issues (closed and open)
issues_url = f"https://api.github.com/repos/huggingface/trl/issues"

# Set up headers for authentication
headers = {"Authorization": f"token {token}", "Accept": "application/vnd.github.v3+json"}

# Make the request
issues = get_full_response(issues_url, headers, params={"state": "all"})

issues_dataset_dict = {
    "number": [],
    "title": [],
    "user": [],
    "state": [],
    "created_at": [],
    "closed_at": [],
    "comments_count": [],
}
comments_dataset_dict = {
    "user": [],
    "created_at": [],
    "body": [],
    "issue_number": [],
}

for issue in tqdm(issues):
    # Extract relevant information
    issue_number = issue["number"]
    title = issue["title"]
    created_at = datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ")
    comments_count = issue["comments"]

    comments_url = issue["comments_url"]
    comments = get_full_response(comments_url, headers=headers)
    for comment in comments:
        comments_dataset_dict["user"].append(comment["user"]["login"])
        comments_dataset_dict["created_at"].append(datetime.strptime(comment["created_at"], "%Y-%m-%dT%H:%M:%SZ"))
        comments_dataset_dict["body"].append(comment["body"])
        comments_dataset_dict["issue_number"].append(issue_number)

    issues_dataset_dict["number"].append(issue_number)
    issues_dataset_dict["title"].append(title)
    issues_dataset_dict["user"].append(issue["user"]["login"])
    issues_dataset_dict["state"].append(issue["state"])
    issues_dataset_dict["created_at"].append(datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ"))
    issues_dataset_dict["closed_at"].append(datetime.strptime(issue["closed_at"], "%Y-%m-%dT%H:%M:%SZ") if issue["closed_at"] else None)
    issues_dataset_dict["comments_count"].append(comments_count)

issues_dataset = Dataset.from_dict(issues_dataset_dict)
comments_dataset = Dataset.from_dict(comments_dataset_dict)

issues_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issues")
comments_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issue_comments")
```
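Finally, a hedged sketch showing one way to combine the two configs pushed above, e.g. to list the most-commented issues. The column names come from the dicts above; the `train` split name is an assumption.

```python
from collections import Counter
from datasets import load_dataset

# Assumed usage: load the issues and issue comments pushed above (split name "train" assumed).
issues = load_dataset("qgallouedec/trl-metrics", "issues", split="train")
comments = load_dataset("qgallouedec/trl-metrics", "issue_comments", split="train")

# Count comments per issue and print the five most-commented issues with their titles.
counts = Counter(comments["issue_number"])
titles = dict(zip(issues["number"], issues["title"]))
for number, count in counts.most_common(5):
    print(f"#{number} ({count} comments): {titles.get(number, 'unknown title')}")
```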
Downloads last month: 253