user (string) | created_at (timestamp[us]) | body (string) | issue_number (int64) | __index_level_0__ (int64) |
---|---|---|---|---|
lvwerra | 2023-03-15T09:00:06 | Unfortunately the GPT-4 model was not released so we can't fine-tune it ourselves. Closing the issue for now as it seems solved :) | 217 | 8,000 |
widyaputeriaulia10 | 2023-03-15T09:08:09 | I see. I just saw on the OpenAI website that GPT-4 is still waitlisted. So it is possible to use the updated GPT version, but I need to fine-tune the model first, isn't it?
On Tue, 14 Mar 2023 at 15:26, Leandro von Werra <***@***.***> wrote:
> Could this be related to this? #183 (comment)
> <https://github.com/lvwerra/trl/issues/183#issuecomment-1451250635>
>
> Without code and the full error message it's a bit hard to know what's
> going on.
>
| 217 | 8,001 |
lvwerra | 2023-03-15T09:11:54 | OpenAI will only give access to an API to use the model, not the actual weights and code to fine-tune it yourself. So no, it won't be possible to fine-tune GPT-4, which is the same situation as with GPT-3. | 217 | 8,002 |
widyaputeriaulia10 | 2023-03-15T09:15:00 | Ok, I see. Thank you
| 217 | 8,003 |
ShiJiawenwen | 2023-03-30T08:22:52 | Hi, can you tell me how to solve this problem? I am facing the same problem. | 217 | 8,004 |
widyaputeriaulia10 | 2023-03-30T08:54:44 | Based on the comment #183 (comment)
<https://github.com/lvwerra/trl/issues/183#issuecomment-1451250635>, I just
followed the instructions and it worked for my case.
Good luck!
| 217 | 8,005 |
ShiJiawenwen | 2023-03-30T09:01:46 | Thank you! | 217 | 8,006 |
natolambert | 2023-03-14T03:25:56 | Closes https://github.com/lvwerra/trl/issues/215 if correct on point 1, @younesbelkada! | 216 | 8,007 |
HuggingFaceDocBuilderDev | 2023-03-14T03:28:40 | _The documentation is not available anymore as the PR was closed or merged._ | 216 | 8,008 |
natolambert | 2023-03-14T03:34:12 | I tested the logging change with my code in H4 (https://github.com/huggingface/h4/pull/73), and it fixed my problem! | 216 | 8,009 |
natolambert | 2023-03-14T16:32:40 | I'll test `tensorboard` today. FYI this is needed for the script in H4, so I'll be motivated to get this working soon.
If `tensorboard` doesn't work, I'll probably add an if statement. | 216 | 8,010 |
natolambert | 2023-03-14T19:22:14 | @younesbelkada I think I ran this with `tensorboard` (I just changed the config as follows and it didn't error). Seems good to me?
The `tracker_kwargs` argument I changed was actually not used anywhere in TRL to date.
```python
config = PPOConfig(
    model_name="ybelkada/gpt-j-6b-sharded-bf16",
    learning_rate=(1.47e-5) * 2,
    # log_with="wandb",
    log_with="tensorboard",
    accelerator_kwargs={"logging_dir": "/home/nathan/logs/"},
    batch_size=32,
    forward_batch_size=1,
)
``` | 216 | 8,011 |
younesbelkada | 2023-03-14T19:47:14 | Thanks a lot for experimenting @natolambert ! LGTM | 216 | 8,012 |
HuggingFaceDocBuilderDev | 2023-03-12T15:38:28 | _The documentation is not available anymore as the PR was closed or merged._ | 214 | 8,013 |
HuggingFaceDocBuilderDev | 2023-03-12T15:36:38 | _The documentation is not available anymore as the PR was closed or merged._ | 213 | 8,014 |
HuggingFaceDocBuilderDev | 2023-03-12T07:45:13 | _The documentation is not available anymore as the PR was closed or merged._ | 212 | 8,015 |
younesbelkada | 2023-03-12T07:44:10 | Hello @TeamDman
Thanks a lot for the report! They should now be fixed with https://github.com/huggingface/blog/pull/927
Thanks! | 211 | 8,016 |
HuggingFaceDocBuilderDev | 2023-03-10T11:13:26 | _The documentation is not available anymore as the PR was closed or merged._ | 210 | 8,017 |
younesbelkada | 2023-03-13T13:42:17 | Experiments of gpt-neo-1b int8 + peft multi-GPU : https://wandb.ai/distill-bloom/trl/runs/x3d6fig6?workspace=user-younesbelkada
Single GPU baseline with peft and int8: https://wandb.ai/distill-bloom/trl/runs/rgcqxtfd?workspace=user-younesbelkada | 210 | 8,018 |
younesbelkada | 2023-03-13T15:50:06 | Ran a DP script with `accelerate launch gpt2-sentiment.py` to make sure nothing is broken in DP, and it seems to work like a charm!
@lvwerra @edbeeching this is ready for review | 210 | 8,019 |
younesbelkada | 2023-03-10T06:05:12 | Hi @prakamya-mishra
Thanks for the issue!
Can you please update `trl` with `pip install --upgrade trl`? This should be solved by https://github.com/lvwerra/trl/pull/190 | 209 | 8,020 |
lvwerra | 2023-03-21T10:09:37 | Closing this for now. Feel free to reopen if issue persists. | 209 | 8,021 |
HuggingFaceDocBuilderDev | 2023-03-09T12:27:34 | _The documentation is not available anymore as the PR was closed or merged._ | 208 | 8,022 |
Kororinpas | 2023-04-08T02:35:30 | Hi there. I am a beginner in the field of NLP and I have been working with the GPT-J model recently. I came across your code for merging adapter layers into the base model's weights in s02_merge_peft_adapter.py, and I have some questions regarding the merging process.
From my understanding, after fine-tuning the model with the LoRA layer and running this merging code, the LoRA layer is replaced with a new randomly initialized linear layer. However, I did not see any indication in the code that the parameters of the LoRA layer were inherited by this new linear layer. If this is the case, then it would mean that my previous training of only the LoRA layer was pointless.
I would be grateful if you could provide me with some clarification on this matter. Thank you very much for your time and help. | 208 | 8,023 |
younesbelkada | 2023-04-08T09:35:11 | Hi @Kororinpas
Please have a look at this nice thread: https://github.com/lvwerra/trl/issues/250 that goes over the details of merging LoRA layers!
Also, you can now directly merge the LoRA layers using `peft` with the `model.merge_and_unload()` utility function, see https://github.com/huggingface/peft/pull/227
Let us know if you have more questions after that. | 208 | 8,024 |
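For reference, a minimal sketch of the `merge_and_unload()` flow mentioned above; the model name and adapter path below are placeholders, not values from this thread.
```python
# Sketch: merge LoRA adapters into the base weights with peft.
# "gpt2" and "path/to/lora-adapter" are placeholder identifiers.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base_model, "path/to/lora-adapter")

# Folds the (scaled) LoRA weight deltas into the base Linear weights and
# returns a plain transformers model with no adapter modules left.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```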
Kororinpas | 2023-04-09T01:02:50 | @younesbelkada Thank you for sharing this information with me. I have already checked out the thread you suggested and the original code. Now, my problem is solved. Thanks again! | 208 | 8,025 |
HuggingFaceDocBuilderDev | 2023-03-09T10:33:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_207). All of your documentation changes will be reflected on that endpoint. | 207 | 8,026 |
github-actions[bot] | 2023-06-20T15:05:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
| 207 | 8,027 |
HuggingFaceDocBuilderDev | 2023-03-09T08:26:20 | _The documentation is not available anymore as the PR was closed or merged._ | 206 | 8,028 |
lvwerra | 2023-03-09T13:54:10 | Lack of time and people - feel free to try it! | 205 | 8,029 |
alexixu | 2023-03-09T03:39:31 | I have a similar question: why is there a peak in "loss/total" and "loss/policy"? | 204 | 8,030 |
Mryangkaitong | 2023-03-09T03:54:13 | > I have a similar question: why is there a peak in "loss/total" and "loss/policy"?
Is your KL becoming more and more negative? As I understand it, objective/kl is computed here (https://github.com/lvwerra/trl/blob/main/trl/trainer/ppo_trainer.py#L848) as (data["logprobs"] - data["ref_logprobs"]). If it becomes more and more negative, it means data["logprobs"] is getting smaller and smaller, and data["logprobs"] comes from log_softmax of the logits, so the logits are getting smaller and smaller, i.e. less and less confident? Looking at the official trl demo, objective/kl is positive from the beginning and then stabilizes instead of going negative. | 204 | 8,031 |
alexixu | 2023-03-09T06:00:26 | @Mryangkaitong No, the KL value is becoming bigger and more positive.
The policy changes the original token distribution, so the KL value increases. | 204 | 8,032 |
lvwerra | 2023-03-09T13:56:26 | Note that the KL-divergence (`objective-kl`) should never be negative. This can happen if you generate text with settings that are not pure sampling (e.g. early stopping strategies, minimum generation length etc.). Have you tried the generation settings used in the T5 example? | 204 | 8,033 |
Mryangkaitong | 2023-03-11T02:12:47 | Thanks a lot, it worked | 204 | 8,034 |
Mryangkaitong | 2023-03-12T03:59:47 | > Note that the KL-divergence (`objective-kl`) should never be negative. This can happen if you generate text with settings that are not pure sampling (e.g. early stopping strategies, minimum generation length etc.). Have you tried the generation settings used in the T5 example.
Hello, the training does converge now, but once convergence reaches a certain level the reward_mean decreases again, as shown in the figures below (roughly 70 steps is one epoch). Is this normal?
`learning_rate=1e-5, batch_size=32, ppo_epochs=3, init_kl_coef=0.1, mini_batch_size=32`
<img width="1071" alt="截屏2023-03-12 上午11 58 36" src="https://user-images.githubusercontent.com/23132307/224523521-96f13582-a3c1-4579-993c-3e1fd897e55a.png">
<img width="1485" alt="截屏2023-03-12 上午11 58 43" src="https://user-images.githubusercontent.com/23132307/224523524-4533c2e0-cbe1-4986-996d-04143a5680f8.png">
| 204 | 8,035 |
PanchenkoYehor | 2023-03-13T17:08:57 | > > Note that the KL-divergence (`objective-kl`) should never be negative. This can happen if you generate text with settings that are not pure sampling (e.g. early stopping strategies, minimum generation length etc.). Have you tried the generation settings used in the T5 example.
>
> Hello author, it is possible to converge now, but when the convergence reaches a certain level, the reward_mean will decrease again, as shown in the figure below (roughly 70 steps is an epoch), is this normal?
>
> `learning_rate=1e-5, batch_size=32, ppo_epochs=3, init_kl_coef=0.1, mini_batch_size=32`
>
Hi @Mryangkaitong, have you managed to resolve the negative KL issue by changing the generation kwargs? What generation kwargs do you use now? | 204 | 8,036 |
lvwerra | 2023-03-13T18:04:12 | Maybe saving the best model would be a good idea? At each step compare average reward and save if it is the best so far. Maybe also a learning rate scheduler could help. | 204 | 8,037 |
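A minimal sketch of that best-checkpoint idea; `ppo_trainer` and `rewards` are assumed to come from an existing TRL training loop, and the save path is a placeholder.
```python
# Keep the policy checkpoint with the highest batch mean reward so far (sketch).
def maybe_save_best(ppo_trainer, rewards, best_mean_reward, path="best_checkpoint"):
    mean_reward = sum(float(r) for r in rewards) / len(rewards)
    if mean_reward > best_mean_reward:
        # Save the wrapped policy model; the tokenizer can be saved alongside it.
        ppo_trainer.model.save_pretrained(path)
        best_mean_reward = mean_reward
    return best_mean_reward

# Usage inside the loop, after each ppo_trainer.step(...):
#     best = maybe_save_best(ppo_trainer, rewards, best)
```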
lvwerra | 2023-03-13T18:33:17 | Also note that the KL divergence is getting smaller (which also counts as a positive reward), so your model's generations might actually become higher quality as they get closer again to the original distribution. | 204 | 8,038 |
Mryangkaitong | 2023-03-14T07:56:55 | > Note that the KL-divergence (`objective-kl`) should never be negative. This can happen if you generate text with settings that are not pure sampling (e.g. early stopping strategies, minimum generation length etc.). Have you tried the generation settings used in the T5 example.
When I use the generation setting
`max_length=512`
objective-kl becomes negative, but without setting max_length my responses become very short (one token). I want to control the maximum length; how can I do that? | 204 | 8,039 |
Mryangkaitong | 2023-03-14T08:00:28 | > > > Note that the KL-divergence (`objective-kl`) should never be negative. This can happen if you generate text with settings that are not pure sampling (e.g. early stopping strategies, minimum generation length etc.). Have you tried the generation settings used in the T5 example.
> >
> >
> > Hello author, it is possible to converge now, but when the convergence reaches a certain level, the reward_mean will decrease again, as shown in the figure below (roughly 70 steps is an epoch), is this normal?
> > `learning_rate=1e-5, batch_size=32, ppo_epochs=3, init_kl_coef=0.1, mini_batch_size=32`
>
> Hi @Mryangkaitong , have you managed to resolve negative kl issue by changing generation kwargs? What generation kwargs do you use for now?
https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/t5-sentiment.py#L90 | 204 | 8,040 |
lvwerra | 2023-03-16T08:49:10 | With the generation kwargs you linked above + `max_new_tokens` the model should always generate as many tokens as you want without negative KL divergence. | 204 | 8,041 |
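A sketch combining those pure-sampling kwargs with a length cap; `ppo_trainer` and `query_tensor` are assumed to exist in the surrounding training loop, and the token budget is arbitrary.
```python
# Pure-sampling generation with a bounded number of new tokens.
generation_kwargs = {
    "top_k": 0.0,
    "top_p": 1.0,
    "do_sample": True,
    "eos_token_id": -1,    # keep generating past EOS instead of suppressing it
    "max_new_tokens": 64,  # cap the response length
}
response_tensor = ppo_trainer.generate(query_tensor, **generation_kwargs)
```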
lvwerra | 2023-03-09T13:59:41 | We have experienced this a few times when the generation is very short (only 1-2 tokens). One way to force the model to always generate tokens is to set `eos_token_id=-1` as done in the T5 example:
```python
generation_kwargs = {"top_k": 0.0, "top_p": 1.0, "do_sample": True, "eos_token_id": -1}
``` | 203 | 8,042 |
alexixu | 2023-03-10T03:02:34 | Thanks! @lvwerra
The generation_kwargs has a min_length parameter:
`generation_kwargs = {"min_length": 40}`
This setting will lead to a wrong KL value, right?
Another question: why does short generated text cause a loss computation anomaly?
| 203 | 8,043 |
lvwerra | 2023-03-13T18:06:37 | Haven't had time to investigate this yet, but it's tracked in #101. Yes, the `min_length` can lead to negative KL. The difference between this and `eos_token_id=-1` is that with `min_length` the eos token is suppressed, while in the latter it can be generated but the generation just continues. | 203 | 8,044 |
lvwerra | 2023-04-14T08:56:26 | Closing for now, feel free to re-open if there's an update. | 203 | 8,045 |
zhangyipin | 2023-03-08T03:42:56 | And second, why does the critic model loss require a clip? I'm not sure what this implementation is for. | 202 | 8,046 |
lvwerra | 2023-03-09T14:10:52 | I think that's the standard way to do it in PPO, but I'll let @natolambert or @edbeeching pitch in who might have a good explanation/intuition :) | 202 | 8,047 |
edbeeching | 2023-03-09T14:42:52 | That is the standard way to do it, see [example from clean-rl](https://github.com/vwxyzjn/cleanrl/blob/2e41da2a3649c50f27121d74896110fe8f69dd52/cleanrl/ppo_atari.py#L241). Empirically, this achieves more stable training, in a standard RL setting. There is probably a theoretical justification, such as the value estimate is from the policy that sampled the trajectory.
As for clipping the value loss, this aims to restrict the size of the updates of the value estimates, in a similar manner as the policy clipping. This is quite standard in most implementations.
I would not be surprised if these details make little difference in an LLM RL fine-tuning setting, as it seems quite robust in comparison to a standard RL setting. You can switch off the value clipping with the `cliprange_value` parameter if you want to disable it.
| 202 | 8,048 |
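For illustration, a minimal sketch of the value clipping described above; this is not TRL's exact code, and the tensor names are generic.
```python
import torch

def clipped_value_loss(values, old_values, returns, cliprange_value=0.2):
    # Keep the new value prediction within cliprange_value of the value recorded
    # when the rollout was sampled, then take the worse of the clipped/unclipped
    # squared errors, which limits the size of value-function updates.
    values_clipped = torch.clamp(
        values, old_values - cliprange_value, old_values + cliprange_value
    )
    loss_unclipped = (values - returns) ** 2
    loss_clipped = (values_clipped - returns) ** 2
    return 0.5 * torch.mean(torch.maximum(loss_unclipped, loss_clipped))
```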
lvwerra | 2023-03-13T18:30:08 | The main reason to write the custom `respond_to_batch` early on was to have full control over the generation as the `generate` method was a big black box to me. Now, using the right generation kwargs the `generate` function should work as well. I suspect the issue you are facing comes more from training rather than sampling. Maybe you need to adapt the learning rate and batch size? Have you tried your generate function with the working GPT-J example in the codebase? Maybe that will help narrow down the issue. | 201 | 0 |
zwb29 | 2023-03-16T05:33:50 | Thanks for your reply!
> I suspect the issue you are facing comes more from training rather than sampling.
Indeed! I have solved my problem, and I found that the reason my training collapsed was mentioned in https://github.com/lvwerra/trl/pull/60.
The Tanh() function limited the state value function to [-1, 1], while the return (value + advantage) is not limited to [-1, 1], so the vf_loss couldn't decrease properly; after 100+ steps my training collapsed. After fixing this, it works.
Because of that, I carefully checked all the updates in the new release versions of TRL and made sure the PPO under my framework is fully up to date.
Thanks for all your work again!
| 201 | 1 |
zwb29 | 2023-03-16T05:46:04 | One more question, still about model.generate().
Do you still recommend using a simple sample function? Have you ever tried something like beam search?
> Now, using the right generation kwargs the generate function should work as well.
What are the right generation kwargs/methods in your experience?
| 201 | 2 |
lvwerra | 2023-03-16T08:55:41 | We usually use quite simple generation settings (see the examples), but I think you could experiment with better strategies; just be aware that if the KL-divergence becomes negative, you are not sampling from the token distribution properly, which usually has a bad impact on training. | 201 | 3 |
zwb29 | 2023-03-16T09:13:05 | OK, I will try it.
I'll close this. Thanks again! | 201 | 4 |
lvwerra | 2023-03-13T18:24:47 | The KL divergence is used as a penalty per token, whereas the score is only given to the sequence as a whole and is thus received at the last generated token. In addition, the PPO loop will discount it so that previous tokens also benefit from it. | 200 | 5 |
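A minimal sketch of that reward shaping, as an assumption about the shape of the computation rather than TRL's exact implementation; all names are generic.
```python
import torch

def shape_rewards(logprobs, ref_logprobs, score, kl_coef=0.1):
    # Per-token KL estimate against the frozen reference model, penalized on
    # every generated token...
    kl = logprobs - ref_logprobs
    rewards = -kl_coef * kl
    # ...while the scalar sequence score lands only on the last token; PPO's
    # discounted returns then propagate it back to earlier tokens.
    rewards[-1] += score
    return rewards
```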
lvwerra | 2023-03-13T18:21:48 | As far as I can tell from the paper, the entropy bonus is optional and not used in the experiments (see section 6.1). To avoid the model generating gibberish, TRL uses the KL penalty approach proposed by OpenAI's follow-up work for tuning language models. This should prevent the model from deviating too far from the original distribution. | 199 | 6 |
philharmonikerzzy | 2023-03-13T22:43:22 | Interestingly, I'm still observing the training process resulting in ever-increasing entropy and therefore gibberish output. Which parameters should I tune/update to discourage the model from increasing the entropy too much? | 199 | 7 |
lvwerra | 2023-03-21T10:11:08 | Hard to know what could be the issue without a minimal example and some logs. Can you share a bit more? | 199 | 8 |
lvwerra | 2023-04-14T09:07:40 | Closing this for now, feel free to re-open if there's an update. | 199 | 9 |
HuggingFaceDocBuilderDev | 2023-03-06T13:17:15 | _The documentation is not available anymore as the PR was closed or merged._ | 198 | 10 |
lvwerra | 2023-03-06T13:56:54 | Hi @SauravMaheshkar, thanks for opening a PR! If I understand it correctly this integration hashes the requirements/setup files to cache dependencies. In that case we would be quite blind to breaking changes from new versions of a library, right? | 198 | 11 |
SauravMaheshkar | 2023-03-06T15:08:51 | @lvwerra as far as I understand, the action reuses the cache if the files listed under the `cache-dependency-path` haven't been changed for a while. If the files have been recently changed then it updates the cache. So we should not be blind to breaking changes from new versions. | 198 | 12 |
lvwerra | 2023-03-13T16:41:16 | I think the issue is that we don't change the files in question often so it is possible that there are breaking changes that we don't notice because we never updated the cached files. Is there an option to set a cache lifetime? | 198 | 13 |
SauravMaheshkar | 2023-03-13T17:14:27 | AFAIK there isn't an option to set a cache lifetime; however, we can use the `cache-hit` flag to check whether a cache hit has occurred. [Source](https://github.com/actions/setup-python/blob/main/docs/advanced-usage.md#cache-hit) | 198 | 14 |
lvwerra | 2023-03-13T18:31:46 | Hi @SauravMaheshkar, not sure I follow how the cache-hit would help. I would like to at least clear the cache once every 1-2 days since libraries might be updated that the we won't install due to caching. | 198 | 15 |
SauravMaheshkar | 2023-03-13T21:59:55 | Hey @lvwerra thanks for the feedback, you do make an interesting case. Upon searching around for a bit, we can create another workflow [similar to the one in the Github Docs](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#force-deleting-cache-entries) which can clear all the caches every `n` days. | 198 | 16 |
SauravMaheshkar | 2023-03-15T04:29:00 | Do you want me to work on that as well in this PR ? | 198 | 17 |
lvwerra | 2023-03-16T08:43:25 | Hi @SauravMaheshkar, yes, if we can delete the cache every 1-2 days adding caching would be a good feature. | 198 | 18 |
SauravMaheshkar | 2023-03-18T13:27:59 | @lvwerra I added a proposal for a workflow in e19ce2072fa0dc17c4dc8b60d009ed08f9d7d8f4 | 198 | 19 |
lvwerra | 2023-03-21T10:28:35 | Awesome work @SauravMaheshkar , that looks great to me! What do you think @younesbelkada ?
| 198 | 20 |
lvwerra | 2023-03-21T11:07:02 | For the first question `- cron: "0 0 * * *"` specifies refresh at midnight (see [here](https://crontab.guru)). Also curious about the second one! | 198 | 21 |
younesbelkada | 2023-03-21T11:13:43 | > For the first question - cron: "0 0 * * *" specifies refresh at midnight (see [here](https://crontab.guru/)).
Awesome thank you! | 198 | 22 |
SauravMaheshkar | 2023-03-21T11:26:00 | > Thanks a lot for this awesome feature! I am totally ok for this PR, I just have few (basic) questions
>
> * Where the duration of the cache refreshing mechanism (you stated each 1 or 2 days) can be changed on the file you have added?
> * Why the changes on `tests.yml` are needed? It seems to be applied only on python 3.8, do we need to change it as well for other python versions?
> Thanks a lot!
We can run the tests on other Python versions as well. The changes proposed by this PR are independent of the Python version. We can test more Python versions as well. | 198 | 23 |
younesbelkada | 2023-03-21T12:23:28 | Thanks a lot for clarifying!
> The changes proposed by this PR is independent of the python version.
I am a bit confused by this, can you elaborate a bit more on that?
Also I can see that the modified block on `tests.yml` only touches the `check-code-quality` workflow. Ultimately my question is if you can point me to the place that ensures the caching refreshing mechanism is triggered on the CI runners?
Again thank you! | 198 | 24 |
lvwerra | 2023-03-23T20:39:18 | Hi @SauravMaheshkar just to double check: it seems like the cache is only a pip download cache and the libraries still need to be installed (which takes a significant amount of time). Is this expected? | 198 | 25 |
SauravMaheshkar | 2023-03-23T22:02:27 | > Hi @SauravMaheshkar just to double check: it seems like the cache is only a pip download cache and the libraries still need to be installed (which takes a significant amount of time). Is this expected?
Yes it's just a download cache AFAIK | 198 | 26 |
HuggingFaceDocBuilderDev | 2023-03-06T08:50:13 | _The documentation is not available anymore as the PR was closed or merged._ | 197 | 27 |
lvwerra | 2023-03-06T08:42:37 | TRL uses `accelerate` as its backend and as such supports multi-GPU training, but via data parallelism. That means the model still needs to be loaded on a single machine. In parallel we are working on a PEFT integration #145 which would allow training much larger models on a single machine. | 196 | 28 |
zhangyipin | 2023-03-07T05:35:31 | > TRL uses `accelerate` as its backend and as such support multi-GPU training but via data parallelism. That means the model still needs to be loaded on a single machine. In parallel we are working on a PEFT integration #145 which would allow to train much larger models on a single machine.
Is using an HTTP server to link the different modules common for larger models like the 175B GPT? Or are there other options for how to set up each module? | 196 | 29 |
lvwerra | 2023-03-13T18:22:49 | You mean having different models just as an endpoint you can ping from the main loop? I think this would be another valid option. | 196 | 30 |
HuggingFaceDocBuilderDev | 2023-03-05T07:43:36 | _The documentation is not available anymore as the PR was closed or merged._ | 195 | 31 |
lvwerra | 2023-03-06T08:43:38 | The reward model has to be trained before the RL/PPO loop. That's why it's not part of the trainer. However, there is a script to train a reward model in examples: https://github.com/lvwerra/trl/blob/main/examples/summarization/scripts/reward_summarization.py
Hope this helps. | 194 | 32 |
weberxie | 2023-03-06T14:37:00 | Thanks for your reply!
So the Reward Model will not be updated in the PPO training loop. Is this the standard process of the PPO algorithm?
Thanks. | 194 | 33 |
lvwerra | 2023-03-06T14:40:28 | The reward model is not part of the PPO definition. PPO assumes an environment that emits a reward based on actions. In RLHF we simulate the human preference with a reward model. In theory it could be updated from time to time to better reflect human preferences with new prompts from the model, but in many instances it is a static model. As noted before, this is not part of the PPO formulation and you can also use a simple rule for the rewards (e.g. how many times the string "the" appears in the generated text). | 194 | 34 |
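As a toy illustration of such a rule-based reward (a hypothetical helper, not code from the repository):
```python
import torch

def rule_based_rewards(response_texts):
    # Reward each generated text by how many times the word "the" appears in it.
    return [torch.tensor(float(text.lower().split().count("the")))
            for text in response_texts]
```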
weberxie | 2023-03-06T15:00:37 | Thanks for your kind explanation!
I understand the reward model is static. Regarding the code implementation of TRLX's ppo_trainer, the policy function and value function are the same model, am I right? | 194 | 35 |
weberxie | 2023-03-06T15:04:46 | From the paper
> Learning to summarize from human feedback
it mentions
> We initialize the value function to the parameters of the reward model. In our experiments, the reward model, policy, and value function are the same size.
but I didn't see the relevant implementation in the TRLX code. Is my understanding wrong? | 194 | 36 |
akk-123 | 2023-03-07T08:01:32 | I have a question about the value_head and the reward model: we train the value_head hoping it can predict a 'return' close to the reward model's output, so why not use the reward model directly and drop the value head? | 194 | 37 |
lvwerra | 2023-04-14T09:03:53 | > We initialize the value function to the parameters of the reward model. In our experiments, the reward model, policy, and value function are the same size.
Indeed, in our implementation we share weights with the policy model (which also saves memory).
>I have a question about value_head and reward model, we train value_head wish it can predict 'return' close to Reward model, why not use reward model directly, maybe not need value head?
The reward model is only trained to evaluate full sequences, while the value head is optimized to evaluate the current sequence state's prospective reward. | 194 | 38 |
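To make the distinction concrete, a minimal sketch of what a value head looks like; this is a simplified stand-in, not TRL's exact `ValueHead` class.
```python
import torch
import torch.nn as nn

class SimpleValueHead(nn.Module):
    """Maps each token's hidden state to a scalar value estimate, so the value
    function shares the transformer body with the policy model."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.summary = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) -> values: (batch, seq_len)
        return self.summary(hidden_states).squeeze(-1)
```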
younesbelkada | 2023-03-06T08:20:27 | Hello @Mryangkaitong
Thanks for the issue!
In fact this can depend on a lot of factors, firstly being the "domain gap" between your target dataset and the pre-trained model.
What is the purpose of your dataset and what is the model that you are using? You might need to first pre-train your model on a dataset that is similar to yours.
Also note that in the examples we have used an LR that is adapted to the model that has been pre-trained on imdb. Hence, you might need to play with that parameter a lot (i.e. use a higher learning rate). You can also play with the KL penalty coefficient: https://github.com/lvwerra/trl/blob/a05ddbdd836d3217c80a4b3e679ba984bfd4fa24/trl/trainer/ppo_config.py#L84 to give the model more degrees of freedom to deviate from its original distribution.
Let us know if this helps! | 193 | 39 |
lvwerra | 2023-03-06T08:45:41 | Hi @Mryangkaitong, note that in RL the loss is less an indicator of model convergence than in normal supervised learning. It is more important to look at the distribution of rewards and how they shift throughout training. | 193 | 40 |
Mryangkaitong | 2023-03-07T02:26:29 | Thank you very much for your reply. I fine-tuned my model first (it is similar to a ChatGPT model), and now I want to use the PPO algorithm to further optimize it. To test whether trl is effective, my reward model is very simple, just a rule: for example, a reply that contains the special character \n (a newline, meaning the response is split into sections) is considered good. Specifically, reward = response.count("\n")*2
Another change is here:
https://github.com/lvwerra/trl/blob/main/trl/trainer/ppo_trainer.py#L610
Because my model's logits and decoder_input_ids correspond one-to-one, my changes here are as follows:
<img width="1335" alt="企业微信截图_319a1b05-d692-4f14-bd17-ff77044f7a3b" src="https://user-images.githubusercontent.com/23132307/223302928-6b3c2e9c-854f-460c-88ae-206039d00b91.png">
I have tried adjusting init_kl_coef (increasing and decreasing), the lr, and other parameters, but it doesn't help. My configuration is as follows:
`config = PPOConfig(model_name="my_model", learning_rate=5e-5, batch_size=16, ppo_epochs=1, init_kl_coef=0.3, log_with="wandb", remove_unused_columns=False, mini_batch_size=8)`
The current result is:
<img width="1183" alt="截屏2023-03-07 上午10 23 10" src="https://user-images.githubusercontent.com/23132307/223303178-38dbe152-f761-4d4d-9a97-6bb601333f35.png">
<img width="1528" alt="截屏2023-03-07 上午10 24 27的副本" src="https://user-images.githubusercontent.com/23132307/223303330-52388be6-3bce-4d94-9d30-3e97040ac8ad.png">
Now, for all prompts, the responses are exactly the same.
At the same time, I also have a question about this line (https://github.com/lvwerra/trl/blob/main/trl/trainer/ppo_trainer.py#L616), concerning input_ids. I understand that the first token is padding and does not need to be computed, but aren't the logits and input_ids in one-to-one correspondence? It seems to me they should be aligned as follows:
`logprobs = logprobs_from_logits(logits[:, 1:, :], input_ids[:, 1:])`
instead of
`logprobs = logprobs_from_logits(logits[:, :-1, :], input_ids[:, 1:])`
How should I understand this? I suspect that my changes here may be the reason the current model does not converge.
In addition, my model structure is similar to a Prefix LM, which is not exactly the same as T5's.
<img width="719" alt="截屏2023-03-07 下午3 18 45" src="https://user-images.githubusercontent.com/23132307/223351770-7f726e8a-9ac8-47c3-b164-5d27c060ed4b.png">
To adapt to trl, the class I use is AutoModelForSeq2SeqLMWithValueHead instead of AutoModelForCausalLMWithValueHead. Should I use AutoModelForCausalLMWithValueHead? In addition, it may be that a Prefix LM model needs to be handled separately.
| 193 | 41 |
Mryangkaitong | 2023-03-11T02:14:21 | I solved it, thanks | 193 | 42 |
parshinsh | 2024-02-04T14:41:24 | @Mryangkaitong I'm facing the same issue. Can you please explain more how did you solve this? | 193 | 43 |
natolambert | 2023-03-03T03:27:05 | `make style` changed some other lines, not sure why. oops. | 192 | 44 |
HuggingFaceDocBuilderDev | 2023-03-03T03:29:51 | _The documentation is not available anymore as the PR was closed or merged._ | 192 | 45 |
natolambert | 2023-03-03T04:17:27 | In digging further, I saw the labels are handled elsewhere so this wasn't needed.
Not sure how to delete the PR. | 192 | 46 |
natolambert | 2023-03-03T03:12:46 | FYI @TristanThrush you may know this :) | 191 | 47 |
lvwerra | 2023-03-06T08:48:18 | Isn't this handled here: https://github.com/lvwerra/trl/blob/f95be7736fcb4eff59964c8857a4fa8e05fe2632/examples/summarization/scripts/reward_summarization.py#L97-L102 | 191 | 48 |
TristanThrush | 2023-03-06T16:50:44 | In this code, it is assumed that the `j` examples are the preferred ones and the `k` examples are not preferred.
https://github.com/lvwerra/trl/blob/a05ddbdd836d3217c80a4b3e679ba984bfd4fa24/examples/summarization/scripts/reward_summarization.py#L90
Let me know if I misunderstood anything | 191 | 49 |
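For reference, a sketch of the pairwise ranking loss that this `j`/`k` convention implies; a simplified version, not the script's exact code.
```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(rewards_j: torch.Tensor, rewards_k: torch.Tensor) -> torch.Tensor:
    # Push the reward of the preferred (j) completion above the rejected (k) one.
    return -F.logsigmoid(rewards_j - rewards_k).mean()
```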
lvwerra | 2023-03-07T10:45:50 | Makes sense to me. @natolambert? | 191 | 50 |