Dataset columns:
repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k, some values null)
created_at: stringlengths (19)
closed_at: stringlengths (19)
comments: stringlengths (0 to 293k)
transformers
24,905
open
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
### System Info torch version: 1.12.0+cu113 CUDA: 11.4 `RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)` I am getting this error when trying to run using CUDA. It works fine when running on CPU. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction CODE: ``` import os import pandas as pd import time from transformers import T5Tokenizer, T5ForConditionalGeneration import torch tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", low_cpu_mem_usage=True).to("cuda:0") def generate(input_text): input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda:0") output = model.generate(input_ids, max_length=512) return tokenizer.decode(output[0], skip_special_tokens=True) input_text = 'Something .... Sonethinggg......' response = generate(input_text) print(response) ``` ### Expected behavior Output
07-19-2023 06:34:05
07-19-2023 06:34:05
This usually means there is something wrong with your setup/install.<|||||>> Like a version mismatch?
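A quick environment sanity check for this kind of CUBLAS failure (an illustrative sketch, not part of the original report; it assumes a CUDA-capable machine):

```python
# Confirm the CUDA runtime bundled with PyTorch is usable and that cuBLAS can be
# initialized independently of any transformers model.
import torch

print(torch.__version__)          # e.g. 1.12.0+cu113
print(torch.version.cuda)         # CUDA runtime PyTorch was built against
print(torch.cuda.is_available())  # is the driver/runtime visible at all?

# A tiny matmul on the GPU forces cuBLAS handle creation; if this already fails
# with CUBLAS_STATUS_NOT_INITIALIZED, the environment (driver/toolkit mismatch,
# or exhausted GPU memory), not the model code, is the likely culprit.
x = torch.randn(2, 2, device="cuda")
print(x @ x)
```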
transformers
24,904
closed
๐ŸŒ [i18n-KO] Translated `tf_xla.md` to Korean
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! --> # What does this PR do? Translated the `tf_xla.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar Check - [x] Review or Add new terms to glossary - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas ## Who can review? (Initial) <!-- 1. Once all the checks above are complete, mention the team members you would like to request a review from below! --> <!-- May you please review this PR? @member1 @member2 ... --> @kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. Only after the review with the team members is finished, uncomment the line below to request a review from the Hugging Face staff! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
07-19-2023 06:22:30
07-19-2023 06:22:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>No review comments from my side!<|||||>I enjoyed reading this thorough translation. Great work!
transformers
24,903
closed
Xformers is not installed correctly.
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` python from transformers import pipeline pipe = pipeline("text-classification", model="roberta-base", device=0) ``` Edit: I know this model isn't trained for the "text-classification" task; I get the same problem with a private model I fine-tuned. This results in the message ``` ... Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers pip install xformers. ``` But I'm using torch==2.0.1 and [memory-efficient-attention](https://huggingface.co/docs/diffusers/optimization/fp16#memory-efficient-attention) states "If you have PyTorch 2.0 installed, you shouldn't use xFormers!" The message is confusing - I have torch 2.0 installed and the pipeline is for inference. This message doesn't occur if I use `AutoModelForSequenceClassification.from_pretrained` ### Expected behavior The documentation and the warning message are inconsistent.
07-19-2023 06:15:15
07-19-2023 06:15:15
It looks like the `pipeline` is back to importing every model (this message comes from trying to access an unrelated model). I'll have a look later this week. You can ignore that warning in the meantime, it's irrelevant.<|||||>Should be fixed by the PR linked above.
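If the spurious warning is disruptive in the meantime, one hedged workaround is to lower the library's log verbosity; this is illustrative only and assumes the message is emitted through the `transformers` logger, which may not be the case:

```python
# Note: set_verbosity_error() hides all library warnings/info globally, so use it
# sparingly; it has no effect if the message is printed outside the logger.
from transformers.utils import logging

logging.set_verbosity_error()

from transformers import pipeline

pipe = pipeline("text-classification", model="roberta-base", device=0)
```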
transformers
24,902
closed
fix typo in BARK_PRETRAINED_MODEL_ARCHIVE_LIST
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> fix typo in BARK_PRETRAINED_MODEL_ARCHIVE_LIST suno/barh should be suno/bark ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-19-2023 05:19:39
07-19-2023 05:19:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,900
closed
๐ŸŒ [i18n-KO] Translated `testing.md` to Korean
# What does this PR do? Translated the `testing.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar Check - [x] Review or Add new terms to glossary - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas ## Who can review? (Initial) @kihoon71, @0525hhgus, @54data, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
07-19-2023 02:19:16
07-19-2023 02:19:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>Apart from what our mentor posted above, I have no review comments!
transformers
24,899
closed
LLAMA 2 HF tokenizer len is 32001
### System Info Installed from source (4.32.0.dev0) ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf") model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf") print(len(tokenizer)) #32001 print(model.config.vocab_size) #32000 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf") model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf") print(len(tokenizer)) #32001 print(model.config.vocab_size) #32000 ``` ### Expected behavior model vocab size and tokenizer len should be 32000. It seems the padding token of the tokenizer is set to '\<unk\>'. Which is not normally the case. It's normally not set.
07-19-2023 01:57:54
07-19-2023 01:57:54
The actual config says that `pad_token_id=0` - so I assume this is correct? What is interesting is that id `32000` maps to a token `'<pad>'` while the original vocab does not contain this token: https://huggingface.co/meta-llama/Llama-2-7b-hf/raw/main/tokenizer.json It seems this is being added somewhere in HF code?<|||||>cc @ArthurZucker <|||||>Hey! Yes this is not entirely expected, we update the slow tokenizer, but the fast version did not get the update. I'll open PRs to fix this! <|||||>@ArthurZucker jfyi I see the same behavior for slow and fast on 4.31.0 edit: correction, indeed it is not set in the tokenizer<|||||>Is there any plan to fix it?<|||||>It is fixed on all model! <|||||>@ArthurZucker Does the fix really resolve this? I pip installed `transformers` from Github, and there is still a mismatch. ```python from transformers import AutoTokenizer, AutoModelForCausalLM # Using fast tokenizer by default tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf") print(len(tokenizer)) # 32001 print(model.config.vocab_size) # 32000 print(tokenizer.get_added_vocab()) # {'<pad>': 32000} ``` Transformers version: `4.32.0.dev0`<|||||>Will check asap! Might be the fast tokenizer that did not get updated.<|||||>Sorry but no, I cannot reproduce your issue. I do not know if you maybe did not update cached files but I have this: ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") >>> len(tokenizer) 32000 >>> print(tokenizer.get_added_vocab()) {} ```<|||||>Thanks for confirming. You're right, it's because of the locally cached files. Downloading them from huggingface and re-running the code now works fine.
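For anyone hitting a similar tokenizer/model vocab-size mismatch, a hedged sketch of the usual checks; the cache refresh and embedding resize are general practice, not steps prescribed in this thread:

```python
# Illustrative only: inspect both sizes, refresh cached files if they disagree
# with what is on the Hub, and resize embeddings only if you intentionally
# added tokens (e.g. a real pad token).
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; requires access approval
tokenizer = AutoTokenizer.from_pretrained(name, force_download=True)
model = AutoModelForCausalLM.from_pretrained(name)

print(len(tokenizer), model.config.vocab_size, tokenizer.get_added_vocab())

if len(tokenizer) != model.config.vocab_size:
    # only appropriate when the extra tokens are ones you actually want to keep
    model.resize_token_embeddings(len(tokenizer))
```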
transformers
24,898
open
NLLB MoE router_state referenced before assignment
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.17 - Python version: 3.8.17 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @youn ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b") model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, output_router_logits=True, return_dict=True, ) ``` ```bash transformers/models/nllb_moe/modeling_nllb_moe.py", line 720, in forward outputs += (router_states,) UnboundLocalError: local variable 'router_states' referenced before assignment ``` ### Expected behavior Return `encoder_router_logits` and `decoder_router_logits` rather than raising an error. The error happens on the dense layers, where no `router_states` value is returned.
07-19-2023 01:07:27
07-19-2023 01:07:27
cc @ArthurZucker <|||||>Hey! Thanks for reporting! I remember working on a bug where NLLB-MoE was not being torch compiled because None values were returned. Will push a fix! Glad to see that NLLB-MoE is being used 🤗
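A minimal sketch of the failure mode and the one-line fix, using hypothetical names rather than the actual `modeling_nllb_moe.py` code:

```python
# Hypothetical, simplified layer: dense blocks never produce router output, so
# the variable must be initialised before the conditional to avoid the
# UnboundLocalError reported above.
def layer_forward(hidden_states, is_sparse, output_router_logits=True):
    router_states = None  # the missing initialisation
    if is_sparse:
        router_states = ("router logits for this layer",)
    outputs = (hidden_states,)
    if output_router_logits:
        outputs += (router_states,)  # now safe for dense layers as well
    return outputs
```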
transformers
24,897
open
Is attention_mask supposed to be added to attention_weights? Based on the function's docstring, mask values are either 0 or 1.
https://github.com/huggingface/transformers/blame/476be08c4aa96f8c1cae4200d2677bbe8f12cf80/src/transformers/models/autoformer/modeling_autoformer.py#L619
07-18-2023 23:37:47
07-18-2023 23:37:47
cc @kashif <|||||>thanks @sgugger and @rpanackal let me check
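For context on why the question matters, here is an illustrative, library-agnostic sketch of the common convention: a 0/1 keep-mask is first converted into an additive mask before being added to the raw attention scores, rather than being added as-is.

```python
# Illustrative only: masked positions receive a very large negative value so the
# softmax assigns them (near) zero probability.
import torch

keep_mask = torch.tensor([[1, 1, 1, 0]])                        # 1 = attend, 0 = pad
additive = (1.0 - keep_mask.float()) * torch.finfo(torch.float32).min
scores = torch.zeros(1, 1, 4)                                    # dummy attention scores
probs = torch.softmax(scores + additive, dim=-1)
print(probs)  # the padded position gets ~0 probability
```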
transformers
24,896
open
๐ŸŒ [i18n-KO] Translated `perf_train_tpu_tf.md` to Korean
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" --> # What does this PR do? Translated the `perf_train_tpu_tf.md` file of the documentation to Korean. Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- This leaves a record on the main issue! If you are practicing on the PseudoLab repo, please remove this line. :smile: --> ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar Check - [x] Review or Add new terms to glossary - [ ] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas ## Who can review? (Initial) <!-- 1. Only expose the comment below asking the PseudoLab team members for a review once all the checks above are complete! --> <!-- Team PseudoLab, may you please review this PR? --> @kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. Only expose the comment below asking the Hugging Face staff for a review after the review with the PseudoLab team members is finished! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
07-18-2023 23:10:58
07-18-2023 23:10:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24896). All of your documentation changes will be reflected on that endpoint.
transformers
24,895
closed
Update tested versions in READMEs
# What does this PR do? This updates the top-level readme files say the project has been tested with Python 3.8+ (instead of 3.7+) and PyTorch 1.10+ (instead of 1.9+). Versions prior to those are no longer supported as of [v4.31.0](https://github.com/huggingface/transformers/releases/tag/v4.31.0). It also updates some other versions in those lists, that had become out of date earlier. The non-English readme files were less up to date than the English readme file `README.md`. I allowed my editor to remove trailing whitespace, since it does not appear to have been intentional. Rendered Markdown does not appear changed. After editing all seven READMEs, running `make fix-copies` (required to pass CI) propagated this whitespace removal to (just) one of the `index.md` files, which is why that is also changed. However, I understand if whitespace removal may be viewed as best done separately; if requested, I'd be pleased to modify this PR to retain the trailing whitespace. ## Rationale My reasoning is similar as the reasoning that was given for the previous update to these versions in #24307. As of [**v4.31.0**](https://github.com/huggingface/transformers/releases/tag/v4.31.0), ๐Ÿค— Transformers has dropped support for Python 3.7 (#24091) and for PyTorch 1.9 (#24080). Because of that: - If the version ranges noted in the readme files are not changed, some users are likely to be mislead into expecting new releases of ๐Ÿค— Transformers to support those versions. - A lesser issue is that the claim that this repository is tested on all those versions will gradually become inaccurate as further contributions are made to the repository, now that those versions are not supported. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. <!-- I'm sure who, if anyone, I should ping for this. I may edit in a ping later. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-18-2023 21:58:02
07-18-2023 21:58:02
@sgugger Thanks for reviewing! I've made it non-draft so it can be merged, as requested. Before I saw your comment, I noticed that the listed TensorFlow versions were also older than the minimum `transformers` currently supports and added a commit to deal with that. I'm not sure if that newest commit was part of what you had looked at. If you'd like me to remove that commit, or otherwise make a change related to it, please let me know!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24895). All of your documentation changes will be reflected on that endpoint.
transformers
24,894
open
add a configuration option in llama architecture
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> I add a configuration option in llama to switch on the bias in output projection in MHA, which is critical for incorporating a new line of alignment research called [inference-time intervention](https://arxiv.org/pdf/2306.03341.pdf) into huggingface hub. Basically the inference-time-intervention found specific vectors to be added into the residual stream of the forwarding process that can significantly boost the truthfulness of the LLaMA family, including Alpaca and Vicuna. However, it requires a slight architecture change to the original architecture, the bias term needs to be activated in the output projection in the MHA. By merging this PR, I can push [honest llama](https://github.com/likenneth/honest_llama) onto huggingface hub and entertain all huggingface users with a more truthful LLaMA model, and its friends, honest Alpaca and honest Vicuna. This PR contains only 3 lines of new code and is completely backward-compatible. Fixes # (issue) N/A ## Before submitting - :x: This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - :white_check_mark: Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - :x: Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - :white_check_mark: Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - :x: Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-18-2023 21:43:42
07-18-2023 21:43:42
Thanks for your PR! I think this is more suited to our code on the Hub API as it doesn't apply to the base LLaMA checkpoints. cc @ArthurZucker if you have a different opinion.<|||||>Hi @sgugger, Appreciate your prompt reply. However I think I missed something. What do you mean by it doesn't apply to the base LLaMA? I can run these code without problems; ``` import llama model_name = 'decapoda-research/llama-7b-hf' tokenizer = llama.LLaMATokenizer.from_pretrained(model_name) model = llama.LLaMAForCausalLM.from_pretrained(model_name, low_cpu_mem_usage = True, torch_dtype=torch.float16) ``` And what is the Hub API? My understanding was, any model on HF Hub has to be an instantiation of the transformers library, then since honest llama requires a bias term and this cannot be achieved by existing flexibility of the configuration, I made this PR that flex up the HF LLaMA model. Thanks!<|||||>You are requesting to make some changes to a model to accommodate your custom versions of it, the original LLaMA checkpoints do not need this config flag. So that's why this should be done via the [code on the Hub API](https://huggingface.co/docs/transformers/custom_models) which allows you to share your code along with the model weights on the Hub.<|||||>I see, let me have a look at the link @sgugger shared to see how to upload customized model to HF!<|||||>Thanks for your help! I baked the inference-time intervention into a LLaMA-2-7B. The process is done offline and the edited model can work independently, as fast as the original LLaMA-2. Link: https://huggingface.co/likenneth/honest_llama2_chat_7B
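A minimal, hypothetical sketch of the kind of change being requested; names such as `o_proj_bias` are illustrative, not the merged configuration API:

```python
# Only the output projection of multi-head attention gains an optional bias; the
# other projections stay bias-free, matching the original architecture.
import torch.nn as nn

class TinyLlamaStyleAttention(nn.Module):
    def __init__(self, hidden_size: int, o_proj_bias: bool = False):
        super().__init__()
        self.q_proj = nn.Linear(hidden_size, hidden_size, bias=False)
        self.k_proj = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v_proj = nn.Linear(hidden_size, hidden_size, bias=False)
        # the new flag: a bias here is where intervention vectors can be baked in
        self.o_proj = nn.Linear(hidden_size, hidden_size, bias=o_proj_bias)
```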
transformers
24,893
closed
Avoid `use_cache=True` for some pipeline tasks
# What does this PR do? Fix #24873 For example, we can pass `use_cache=True` (or set it in the config) to `BartForSequenceClassification` and it will return `past_key_values` (although they are not useful for the task). This responds to #24873, even though the memory issue reported there is not confirmed yet. Still, avoiding the cache so the model does not return `past_key_values` removes overhead such as CPU/GPU communication of huge/many tensors. With the code snippet I provided in #24873, this saves 16.5% of the running time. We could probably extend this PR to other pipeline task classes.
07-18-2023 20:19:49
07-18-2023 20:19:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM! Thanks for diving into this<|||||>Thanks for resolving this so quickly @ydshieh!
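An illustrative example of the behaviour being targeted, using a public checkpoint rather than the PR's own test code: passing `use_cache=False` to a sequence-classification head skips building the decoder cache the task never uses.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

name = "facebook/bart-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tok("A camera crew films a chef.", "Someone is cooking.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, use_cache=False)  # decoder cache is not built for this head
print(out.logits.softmax(-1))  # classification scores are unaffected
```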
transformers
24,892
closed
Add descriptive docstring to TemperatureLogitsWarper
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes one of the cases in https://github.com/huggingface/transformers/issues/24783 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @gante ## Notes My first PR to this library, greatly appreciate your patience :wink: <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-18-2023 17:17:06
07-18-2023 17:17:06
Please note that I force pushed the branch as I had to fix the linter (and took the advantage to sync my fork). In case someone else runs into the same problem (and by some planetary alignment :ringed_planet: :earth_africa: :new_moon: lands in this tiny comment), it took me a while to figure out the issue as `make quality` was failing without fixing anything, and `make style` was fixing the wrong thing. This is what it was failing: ```python >>> some_python_code = 1 Some output of above prompt This kind of text shouldn't be here >>> some_other_python_code = 2 ```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> That is a great example @nablabits! :fire: > > Thank you for iterating on it, I've requested a few minor changes (to further improve information density), and it should be ready to merge after they are addressed :hugs: Hi @gante, thanks for your patience, guidance and support, much appreciated :hugs: . I greatly enjoyed the learning experience. Are you happy for me to pick something else in the list (I'd like to deepen in my knowledge of this tiny bit of the platform) or the protocol suggests that I should leave remaining tasks for other folks? <|||||>@nablabits feel free to pick more tasks from the list, as many as you want (one at a time, of course) -- as long as you confirm that no one is working on a given task and that you share on the issue that you've decided to take it ๐Ÿค—
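For readers of the thread, a short usage example in the spirit of the docstring the PR adds; the values are illustrative, not copied from the PR:

```python
# Temperature below 1 sharpens the next-token distribution; above 1 flattens it.
import torch
from transformers import TemperatureLogitsWarper

logits = torch.tensor([[2.0, 1.0, 0.5]])
dummy_input_ids = torch.tensor([[0]])  # input_ids are not used by this warper

sharp = TemperatureLogitsWarper(0.5)
flat = TemperatureLogitsWarper(2.0)

print(torch.softmax(sharp(dummy_input_ids, logits), dim=-1))  # more peaked
print(torch.softmax(flat(dummy_input_ids, logits), dim=-1))   # more uniform
```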
transformers
24,891
closed
[`Llama2`] Add support for Llama 2
# What does this PR do? Add support for Llama 2! 🔥🔥
07-18-2023 16:14:03
07-18-2023 16:14:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24891). All of your documentation changes will be reflected on that endpoint.
transformers
24,890
closed
Check for accelerate env var when doing CPU only
# What does this PR do? Checks if `ACCELERATE_USE_CPU` is enabled to ensure trainer parts get set properly if so (and if `--use_cpu` isn't used) Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-18-2023 16:04:13
07-18-2023 16:04:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,889
closed
[`Blip`] Fix blip output name
As suggested by @ydshieh offline, and similarly to https://github.com/huggingface/transformers/pull/22893: there is no reason to call the output logits `decoder_logits`, as they always come from the decoder. cc @sgugger @ydshieh
07-18-2023 15:31:41
07-18-2023 15:31:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, thanks for this @younesbelkada ❤️ However, this is not enough; you have to add something like https://github.com/huggingface/transformers/blob/3ec10e6c76362191b61260300fe1d6173a8dd7e1/src/transformers/models/swin/modeling_swin.py#L171-L177 in that PR to keep backward compatibility. (unless @sgugger says this model is still recent and/or not high usage)<|||||>Thanks @ydshieh for double checking, I missed that, it should now be added! :D <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24889). All of your documentation changes will be reflected on that endpoint.
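The backward-compatibility pattern being referenced is, roughly, a deprecated property that aliases the old field name; a hypothetical sketch (not the actual Swin or BLIP code):

```python
# The renamed field stays the source of truth; the old name keeps working but
# emits a FutureWarning so downstream code can migrate.
from dataclasses import dataclass
import warnings
import torch

@dataclass
class TinyBlipOutput:
    logits: torch.FloatTensor = None

    @property
    def decoder_logits(self):
        warnings.warn(
            "`decoder_logits` is deprecated and will be removed; use `logits` instead.",
            FutureWarning,
        )
        return self.logits
```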
transformers
24,888
closed
[`InstructBlip`] Fix int8/fp4 issues
# What does this PR do? FIxes: https://github.com/huggingface/transformers/issues/24884 To reproduce: ```python from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests device = "cuda" if torch.cuda.is_available() else "cpu" MODEL_NAME = "Salesforce/instructblip-flan-t5-xl" # Note: Here we no longer specify `torch.bfloat16`. model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map={"":0}, load_in_4bit=True) processor = InstructBlipProcessor.from_pretrained(MODEL_NAME) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" # Note: Here we no longer specify `torch.bfloat16`, but we use `torch.float16` as shown in the test code for Salesforce/instructlblup-vicuna-7b inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` Strangely I couldn't reproduce the issue with vicuna models but managed to reproduce with flan-t5 models. Also it is very strange that users never reported the same issue with Blip2 cc @sgugger
07-18-2023 14:53:13
07-18-2023 14:53:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,887
closed
๐ŸŒ [i18n-KO] Translated `perf_train_tpu_tf.md` to Korean
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" --> # What does this PR do? Translated the `perf_train_tpu_tf.md` file of the documentation to Korean 😄 Thank you in advance for your review! Part of https://github.com/huggingface/transformers/issues/20179 <!-- This leaves a record on the main issue! If you are practicing on the PseudoLab repo, please remove this line. :smile: --> ## Before reviewing - [x] Check for missing / redundant translations - [x] Grammar Check - [x] Review or Add new terms to glossary - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas ## Who can review? (Initial) <!-- 1. Only expose the comment below asking the PseudoLab team members for a review once all the checks above are complete! --> <!-- Team PseudoLab, may you please review this PR? --> @kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. Only expose the comment below asking the Hugging Face staff for a review after the review with the PseudoLab team members is finished! --> <!-- May you please review this PR? --> <!-- @sgugger, @ArthurZucker, @eunseojo -->
07-18-2023 14:12:41
07-18-2023 14:12:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,886
closed
Separate CircleCI cache between `main` and `pull` or other branches
# What does this PR do? Keep the CircleCI cache used on the `main` branch unaffected by other branches/PRs. Read the following with these facts in mind: - @lhoestq created a branch yesterday using `datasets 2.13.2.dev0` - @sgugger changed `setup.py` yesterday in the commit `4.32.0.dev0` - the cache used by @lhoestq 's branch is used on `main` in/after @sgugger 's commit. Also: CI is triggered on `main` much less frequently, so keeping a cache for it separate from the pull events is fine in terms of cost. Assume - someone changes `setup.py` to use a dev version of a library, say `datasets`, in a PR or a non-main HF branch - CI is triggered + [precise] cache not found + [partial] cache found + cache updated with the `datasets` dev version - shortly after, another person changes `setup.py` (not necessarily involving the same library) in another PR/branch, and it gets merged - CI is triggered + [precise] cache not found + [partial] cache found: - this could be the cache above (depending on the time gap) - the `datasets` version remains `dev` if the merged PR has `datasets>=XXX` in `setup.py` - (as `dev` is the newer version, the requirement is already satisfied) - we get failures on `main` due to the `datasets` dev version, which should be avoided.
07-18-2023 13:50:46
07-18-2023 13:50:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,885
closed
Disable ipex env var if false
# What does this PR do? Properly sets the Accelerate env variable if ipex is set to False (the default in training args) Fixes # (issue) Solves https://github.com/huggingface/transformers/issues/24871 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-18-2023 13:45:12
07-18-2023 13:45:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,884
closed
InstructBLIP - FlanT5-XL model Int4/8 quantization broken
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-4.14.314-238.539.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: no - use_cpu: False - num_processes: 4 - machine_rank: 0 - num_machines: 1 - gpu_ids: all - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ## Problem Specifying `load_in_8bit` or `load_in_4bit` for `Salesforce/instructblip-flan-t5-xl`, I am able to load the model into GPU memory, but calling generate results in an error. ## Steps to Reproduce: ### torch.bfloat16 Working Version: 1. Load model into memory ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests device = "cuda" if torch.cuda.is_available() else "cpu" MODEL_NAME = "Salesforce/instructblip-flan-t5-xl" # load in bfloat16 - this is type t5 models were pretrained using (see https://github.com/salesforce/LAVIS/issues/418) model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map="auto", torch_dtype=torch.bfloat16) processor = InstructBlipProcessor.from_pretrained(MODEL_NAME) ``` 2. Run example VQA ``` url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" # Cast to torch.bfloat16, otherwise we get an error. inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.bfloat16) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` 3. Observe generated text: `The image depicts a man ironing clothes on the back of a yellow van in the middle of a busy city street. The unusual aspect of the image is that the man is not wearing a shirt, which may indicate that he is a homeless person or an immigrant. In addition, there are several other vehicles in the background, including taxis, buses, and motorcycles.` ### `load_in_8bit` Failing Version: 1. Load model into memory ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests device = "cuda" if torch.cuda.is_available() else "cpu" MODEL_NAME = "Salesforce/instructblip-flan-t5-xl" # Note: Here we no longer specify `torch.bfloat16`. 
model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map="auto", load_in_8bit=True) processor = InstructBlipProcessor.from_pretrained(MODEL_NAME) ``` 2. Run example VQA. Note we use the same input type as in [the test code](https://github.com/younesbelkada/transformers/blob/dc9dba7824a949b2a1f89e1f4537da9c8e25dd10/tests/models/instructblip/test_modeling_instructblip.py#L533). ``` url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" # Note: Here we no longer specify `torch.bfloat16`, but we use `torch.float16` as shown in the test code for Salesforce/instructlblup-vicuna-7b inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` 3. Observe error ``` RuntimeError Traceback (most recent call last) Cell In[4], line 14 11 if torch.is_floating_point(v): 12 inputs[k] = v.to(torch.float16) ---> 14 outputs = model.generate( 15 **inputs, 16 do_sample=False, 17 num_beams=5, 18 max_length=256, 19 min_length=1, 20 top_p=0.9, 21 repetition_penalty=1.5, 22 length_penalty=1.0, 23 temperature=1, 24 ) 25 generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() 26 print(generated_text) File /usr/lib/python3/dist-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File /usr/lib/python3/dist-packages/transformers/models/instructblip/modeling_instructblip.py:1522, in InstructBlipForConditionalGeneration.generate(self, pixel_values, qformer_input_ids, qformer_attention_mask, input_ids, attention_mask, **generate_kwargs) 1520 qformer_attention_mask = torch.ones_like(qformer_input_ids) 1521 qformer_attention_mask = torch.cat([query_attention_mask, qformer_attention_mask], dim=1) -> 1522 query_outputs = self.qformer( 1523 input_ids=qformer_input_ids, 1524 attention_mask=qformer_attention_mask, 1525 query_embeds=query_tokens, 1526 encoder_hidden_states=image_embeds, 1527 encoder_attention_mask=image_attention_mask, 1528 return_dict=True, 1529 ) 1530 query_output = query_outputs.last_hidden_state[:, : query_tokens.size(1), :] 1532 language_model_inputs = self.language_projection(query_output) File /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/lib/python3/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /usr/lib/python3/dist-packages/transformers/models/instructblip/modeling_instructblip.py:1169, in InstructBlipQFormerModel.forward(self, input_ids, attention_mask, position_ids, query_embeds, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 1163 past_key_values_length = ( 1164 past_key_values[0][0].shape[2] - self.config.query_length if past_key_values is not None else 0 1165 ) 1167 query_length = query_embeds.shape[1] if query_embeds is not None else 0 -> 1169 embedding_output = self.embeddings( 1170 input_ids=input_ids, 1171 position_ids=position_ids, 1172 query_embeds=query_embeds, 1173 past_key_values_length=past_key_values_length, 1174 ) 1176 input_shape = embedding_output.size()[:-1] 1177 batch_size, seq_length = input_shape File /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/lib/python3/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /usr/lib/python3/dist-packages/transformers/models/instructblip/modeling_instructblip.py:1041, in InstructBlipQFormerEmbeddings.forward(self, input_ids, position_ids, query_embeds, past_key_values_length) 1038 else: 1039 embeddings = query_embeds -> 1041 embeddings = self.layernorm(embeddings) 1042 embeddings = self.dropout(embeddings) 1043 return embeddings File /usr/lib/python3/dist-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/lib/python3/dist-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /usr/lib/python3/dist-packages/torch/nn/modules/normalization.py:190, in LayerNorm.forward(self, input) 189 def forward(self, input: Tensor) -> Tensor: --> 190 return F.layer_norm( 191 input, self.normalized_shape, self.weight, self.bias, self.eps) File /usr/lib/python3/dist-packages/torch/nn/functional.py:2515, in layer_norm(input, normalized_shape, weight, bias, eps) 2511 if has_torch_function_variadic(input, weight, bias): 2512 return handle_torch_function( 2513 layer_norm, (input, weight, bias), input, normalized_shape, weight=weight, bias=bias, eps=eps 2514 ) -> 2515 return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) RuntimeError: expected scalar type Float but found Half ``` I am unable to get `load_in_8bit` or `load_in_4bit` to work, both return these errors. I have also tried changing the dtype casting when putting the input processing to the GPU, but observe different errors. ### Expected behavior Expect quantization to work, as it does when using `Salesforce/instructblip-vicuna-7b` model. I am able to use quantized `google/flan-t5-xl` text generation model with the same setup, and have run `pip uninstall apex` as described in https://github.com/huggingface/transformers/issues/21391
07-18-2023 13:35:33
07-18-2023 13:35:33
Hi @lukealexmiller Thanks for reporting, will look into it ASAP. <|||||>Hi @lukealexmiller Again, thanks for reporting, I made a patch to support 8bit / 4bit correctly for Flan-t5 models in https://github.com/huggingface/transformers/pull/24888 , before it gets merged you can download it with the following: ```bash pip install git+https://github.com/younesbelkada/transformers.git@fix-instructblip ```<|||||>Hi @lukealexmiller, isn't that expected to fail given that you load the model in 8 bit, not providing any `dtype`, and cast the inputs to `torch.float16`? I personally also provide the `torch_dtype` argument to the `from_pretrained` method, which works: ``` from PIL import Image import requests from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl") model = InstructBlipForConditionalGeneration.from_pretrained( "Salesforce/instructblip-flan-t5-xl", load_in_8bit=True, device_map="auto", torch_dtype=torch.bfloat16 ) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) prompt = "How many cats are there?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device="cuda", dtype=torch.bfloat16) generated_ids = model.generate(**inputs) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() print(generated_text) ``` I'm also casting to `bfloat16` rather than `float16` to match the original implementation. Maybe @younesbelkada knows whether or not this should work without providing the `torch_dtype` argument.<|||||>Thanks for the prompt responses and PR @younesbelkada & @NielsRogge. @NielsRogge that does make sense, but if I load the model `from_pretrained` and also specify `torch_dtype`, my notebook kernel dies. I'm running on A10G w/24GB RAM, and as this works without `load_in_8bit=True`, I don't believe this is an OOM error. Although I don't have more detailed error info yet. Any thoughts? <|||||>@lukealexmiller the fix should be now on the main branch, feel free to re-open if the issue persists! <|||||>@younesbelkada using `Resolved https://github.com/huggingface/transformers.git to commit 07360b6c9c9448d619a82798419ed291dfc6ac8f` I am still unable to load the model and successfully call generate using `torch.float16`. I see the same error as before. However, it seems likely that I should be specifying `torch.bfloat16` as proposed by @NielsRogge. I don't have access to more than 24GB at the moment and don't see why the `int8` quantization would counterintuitively need more than `bfloat16` and cause OOM, but @NielsRogge can you confirm the GPU you're using and the memory footprint during/after model loading? Can one of you re-open the issue as the PR doesn't appear to have solved the problem, but it appears that there is another problem that causes the notebook kernel to crash? Thanks<|||||>@younesbelkada / @NielsRogge I am unable to re-open this issue, are either of you able to do that and do you have any ideas on the problem I'm still seeing?<|||||>Hi @lukealexmiller I can confirm this script: ```python from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests device = "cuda" if torch.cuda.is_available() else "cpu" MODEL_NAME = "Salesforce/instructblip-flan-t5-xl" # Note: Here we no longer specify `torch.bfloat16`. 
model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME, device_map={"":0}, load_in_4bit=True) processor = InstructBlipProcessor.from_pretrained(MODEL_NAME) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" # Note: Here we no longer specify `torch.bfloat16`, but we use `torch.float16` as shown in the test code for Salesforce/instructlblup-vicuna-7b inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` Still works on my end, can you try to uninstall transformers and re-install it from source: ```bash pip uninstall transformers pip install git+https://github.com/huggingface/transformers.git ```<|||||>Hi @lukealexmiller the code snippet above is also working for me on an A100 GPU with 80 GB of RAM. I'm currently unable to access the GPU, but if I am I could report GPU memory usage. Normally it's the same as the model size in case you're using int4 quantization (so for Salesforce/instructblip-flan-t5-xl that's around 8GB of GPU RAM).<|||||>Update; I can confirm 9299MiB / 81920MiB of the A100 is being used. Hence around 9 GB, which is in line with int4 quantization (as much memory as you have parameters, i.e. 9 billion parameter model = 9 billion bytes = 9 GB of GPU memory).
transformers
24,883
closed
๐ŸŒ[i18n-KO] Translated performance.md to Korean
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `performance.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @0525hhgus, @Sunmin0520, @54data, @seank021, @kihoon71 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
07-18-2023 13:24:06
07-18-2023 13:24:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>๋ฆฌ๋ทฐ ์‚ฌํ•ญ ์—†์Šต๋‹ˆ๋‹ค!
transformers
24,882
closed
Enable `ZeroShotAudioClassificationPipelineTests::test_small_model_pt`
# What does this PR do? This test was failing due to a dev version of `datasets` (created in another branch) being used in `main`. It's now resolved. Thank you @lhoestq for the investigation.
07-18-2023 12:46:07
07-18-2023 12:46:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,881
closed
๐ŸŒย [i18n-KO] Translatedย `transformers_agents.md` to Korean
# What does this PR do? Translated the `transformers_agents.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) @sgugger, @ArthurZucker, @eunseojo May you please review this PR?
07-18-2023 12:02:51
07-18-2023 12:02:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>@eenzeenee @sronger๋‹˜ ๋ฆฌ๋ทฐ ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค! ์ œ์•ˆํ•ด์ฃผ์‹  ์ˆ˜์ •์‚ฌํ•ญ์„ ๋ชจ๋‘ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค โ˜บ๏ธ<|||||>Could you review this PR? ๐Ÿ˜ƒ @sgugger, @ArthurZucker, @eunseojo
transformers
24,880
closed
Fix CircleCI cache
# What does this PR do? Fix an issue where the site-packages cache was being loaded during the pip cache loading step. See the comment in the change.
07-18-2023 10:50:47
07-18-2023 10:50:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,879
closed
add ascend npu accelerator support
### What does this PR do? Currently, Accelerate has supported ascend npu([see](https://github.com/huggingface/accelerate/pull/1676)). This PR enables users to leverage the ascend npu for training and inference of ๐Ÿค— Transformers models. For example, you can run the official glue text-classification task using ascend npu with below command: ```bash export TASK_NAME=sst2 time python -m torch.distributed.run --nproc_per_node 8 run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir ./output ``` Below are the output logs: ```text WARNING:__main__: ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 2, device: npu:2, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 1, device: npu:1, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 5, device: npu:5, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 3, device: npu:3, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 261.94it/s] 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 7, device: npu:7, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 0%| | 0/3 [00:00<?, ?it/s]07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 224.74it/s] 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 229.06it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 4, device: npu:4, n_gpu: 1distributed training: True, 16-bits training: False 
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 220.52it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 0%| | 0/3 [00:00<?, ?it/s]07/18/2023 22:10:00 - WARNING - __main__ - Process rank: 6, device: npu:6, n_gpu: 1distributed training: True, 16-bits training: False 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 218.07it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 144.02it/s] 07/18/2023 22:10:00 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 220.95it/s] [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,044 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,134 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,225 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,255 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
[WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,349 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,451 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:03,487 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 07/18/2023 22:10:04 - WARNING - __main__ - Process rank: 0, device: npu:0, n_gpu: 1distributed training: True, 16-bits training: False 07/18/2023 22:10:04 - INFO - __main__ - Training/evaluation parameters TrainingArguments( _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=2e-05, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=./output/runs/Jul18_22-09-51_localhost.localdomain, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=steps, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=3.0, optim=adamw_hf, optim_args=None, output_dir=./output, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=32, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=[], resume_from_checkpoint=None, run_name=./output, save_on_each_node=False, save_safetensors=False, save_steps=500, save_strategy=steps, save_total_limit=None, seed=42, sharded_ddp=[], skip_memory_metrics=True, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, 
torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0, xpu_backend=None, ) 07/18/2023 22:10:04 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/glue/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad 07/18/2023 22:10:04 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists. 07/18/2023 22:10:04 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad 07/18/2023 22:10:04 - WARNING - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 07/18/2023 22:10:04 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 279.65it/s] [INFO|configuration_utils.py:710] 2023-07-18 22:10:04,346 >> loading configuration file bert-base-cased/config.json [INFO|configuration_utils.py:768] 2023-07-18 22:10:04,352 >> Model config BertConfig { "_name_or_path": "bert-base-cased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "finetuning_task": "sst2", "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.31.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } [INFO|configuration_utils.py:710] 2023-07-18 22:10:04,353 >> loading configuration file bert-base-cased/config.json [INFO|configuration_utils.py:768] 2023-07-18 22:10:04,354 >> Model config BertConfig { "_name_or_path": "bert-base-cased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.31.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file vocab.txt [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file tokenizer.json [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file added_tokens.json [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file special_tokens_map.json [INFO|tokenization_utils_base.py:1841] 2023-07-18 22:10:04,355 >> loading file tokenizer_config.json [INFO|configuration_utils.py:710] 2023-07-18 22:10:04,355 >> loading configuration file 
bert-base-cased/config.json [INFO|configuration_utils.py:768] 2023-07-18 22:10:04,356 >> Model config BertConfig { "_name_or_path": "bert-base-cased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.31.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } [INFO|modeling_utils.py:2600] 2023-07-18 22:10:04,436 >> loading weights file bert-base-cased/pytorch_model.bin [INFO|modeling_utils.py:3319] 2023-07-18 22:10:06,936 >> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight'] - This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:3331] 2023-07-18 22:10:06,936 >> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
07/18/2023 22:10:06 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:06 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:06 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - INFO - __main__ - Sample 14592 of the training set: {'sentence': 'a great movie ', 'label': 1, 'idx': 14592, 'input_ids': [101, 170, 1632, 2523, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}. 07/18/2023 22:10:11 - INFO - __main__ - Sample 3278 of the training set: {'sentence': 'entertaining , if somewhat standardized , action ', 'label': 1, 'idx': 3278, 'input_ids': [101, 15021, 117, 1191, 4742, 18013, 117, 2168, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}. 
07/18/2023 22:10:11 - INFO - __main__ - Sample 36048 of the training set: {'sentence': 'even when there are lulls , the emotions seem authentic , ', 'label': 1, 'idx': 36048, 'input_ids': [101, 1256, 1165, 1175, 1132, 181, 11781, 1116, 117, 1103, 6288, 3166, 16047, 117, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}. 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6dd95798535d1820.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at 
/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-0f938cc5dd2d8410.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow 07/18/2023 22:10:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-d00cbf8279e5d543.arrow [INFO|trainer.py:763] 2023-07-18 22:10:11,652 >> The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence. If idx, sentence are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. 
For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. 
(function operator()) [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (function operator()) [INFO|trainer.py:1686] 2023-07-18 22:10:14,427 >> ***** Running training ***** [INFO|trainer.py:1687] 2023-07-18 22:10:14,427 >> Num examples = 67,349 [INFO|trainer.py:1688] 2023-07-18 22:10:14,427 >> Num Epochs = 3 [INFO|trainer.py:1689] 2023-07-18 22:10:14,427 >> Instantaneous batch size per device = 32 [INFO|trainer.py:1692] 2023-07-18 22:10:14,427 >> Total train batch size (w. parallel, distributed & accumulation) = 256 [INFO|trainer.py:1693] 2023-07-18 22:10:14,427 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1694] 2023-07-18 22:10:14,427 >> Total optimization steps = 792 [INFO|trainer.py:1695] 2023-07-18 22:10:14,429 >> Number of trainable parameters = 108,311,810 0%| | 0/792 [00:00<?, ?it/s][W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. 
This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1278] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) 0%| | 1/792 [00:24<5:20:41, 24.33s/it] 63%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Ž | 500/792 [02:39<00:57, 5.10it/s]{'loss': 0.2132, 'learning_rate': 7.373737373737374e-06, 'epoch': 1.89} 63%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Ž | 500/792 [02:43<00:57, 5.10it/s][INFO|trainer.py:2807] 2023-07-18 22:13:00,287 >> Saving model checkpoint to ./output/checkpoint-500 [INFO|configuration_utils.py:458] 2023-07-18 22:13:00,289 >> Configuration saved in ./output/checkpoint-500/config.json [INFO|modeling_utils.py:1851] 2023-07-18 22:13:01,488 >> Model weights saved in ./output/checkpoint-500/pytorch_model.bin [INFO|tokenization_utils_base.py:2214] 2023-07-18 22:13:01,489 >> tokenizer config file saved in ./output/checkpoint-500/tokenizer_config.json [INFO|tokenization_utils_base.py:2221] 2023-07-18 22:13:01,489 >> Special tokens file saved in ./output/checkpoint-500/special_tokens_map.json 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 792/792 [03:44<00:00, 5.38it/s][INFO|trainer.py:1934] 2023-07-18 22:13:58,740 >> Training completed. 
Do not forget to share your model on huggingface.co/models =) {'train_runtime': 224.3121, 'train_samples_per_second': 900.741, 'train_steps_per_second': 3.531, 'train_loss': 0.17379718356662327, 'epoch': 3.0} 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 792/792 [03:44<00:00, 3.53it/s] [INFO|trainer.py:2807] 2023-07-18 22:13:58,745 >> Saving model checkpoint to ./output [INFO|configuration_utils.py:458] 2023-07-18 22:13:58,747 >> Configuration saved in ./output/config.json [INFO|modeling_utils.py:1851] 2023-07-18 22:13:59,855 >> Model weights saved in ./output/pytorch_model.bin [INFO|tokenization_utils_base.py:2214] 2023-07-18 22:13:59,857 >> tokenizer config file saved in ./output/tokenizer_config.json [INFO|tokenization_utils_base.py:2221] 2023-07-18 22:13:59,857 >> Special tokens file saved in ./output/special_tokens_map.json ***** train metrics ***** epoch = 3.0 train_loss = 0.1738 train_runtime = 0:03:44.31 train_samples = 67349 train_samples_per_second = 900.741 train_steps_per_second = 3.531 07/18/2023 22:13:59 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:763] 2023-07-18 22:13:59,922 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence. If idx, sentence are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. [INFO|trainer.py:3081] 2023-07-18 22:13:59,926 >> ***** Running Evaluation ***** [INFO|trainer.py:3083] 2023-07-18 22:13:59,926 >> Num examples = 872 [INFO|trainer.py:3086] 2023-07-18 22:13:59,926 >> Batch size = 8 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 14/14 [00:07<00:00, 1.85it/s] ***** eval metrics ***** epoch = 3.0 eval_accuracy = 0.9186 eval_loss = 0.258 eval_runtime = 0:00:09.61 eval_samples = 872 eval_samples_per_second = 90.662 eval_steps_per_second = 1.456 real 4m38.911s user 39m59.583s sys 4m9.578s ``` cc @sgugger
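Beyond the Trainer-based example above, here is a minimal hand-rolled inference sketch on NPU. This is a hedged illustration only: it assumes the `torch_npu` adapter is installed and that device `npu:0` is visible, and the checkpoint name is just an example, not something this PR prescribes.
```python
# Hedged sketch: plain PyTorch inference on an Ascend NPU.
# Assumes the torch_npu adapter is installed; the checkpoint is illustrative.
import torch
import torch_npu  # noqa: F401  (registers the "npu" device type with PyTorch)
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "npu:0" if torch.npu.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased").to(device)
model.eval()

inputs = tokenizer("this movie was great", return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.float().cpu())
```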
07-18-2023 10:45:44
07-18-2023 10:45:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,878
closed
[`Docs`] Clarify 4bit docs
# What does this PR do? As discussed internally with @lewtun, this PR refactors the 4-bit docs a bit by adding more clarification about best practices and giving users relevant pointers for advanced usage. It also fixes the requirements instructions, as 4-bit support is now part of the latest release. cc @sgugger
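For reviewers, a minimal 4-bit loading sketch along the lines of what the refactored docs cover (hedged: the checkpoint name and the exact config values are illustrative choices on my part, not what the docs prescribe):
```python
# Hedged sketch: load a causal LM in 4-bit with bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 + double quantization + bf16 compute is a commonly recommended combination
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model_id = "facebook/opt-350m"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```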
07-18-2023 09:16:57
07-18-2023 09:16:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,877
closed
check if eval dataset is dict
# What does this PR do? Simply checks if `eval_dataset` is a dict and if it is, run a sequential evaluation on each evaluation dataset. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) #24832 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
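For reviewers, a minimal usage sketch of the behaviour this PR targets, i.e. passing a dict of evaluation datasets and evaluating each one in turn (hedged: toy data, and the exact metric key prefixes are my assumption about the reporting format):
```python
# Hedged sketch: sequential evaluation over a dict of eval datasets.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

def encode(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)

train_ds = Dataset.from_dict({"text": ["good", "bad"] * 8, "label": [1, 0] * 8}).map(encode, batched=True)
val_a = Dataset.from_dict({"text": ["great", "awful"] * 4, "label": [1, 0] * 4}).map(encode, batched=True)
val_b = Dataset.from_dict({"text": ["nice", "terrible"] * 4, "label": [1, 0] * 4}).map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./out", per_device_eval_batch_size=4),
    train_dataset=train_ds,
    # A dict of eval datasets: with this change, evaluate() loops over each entry
    eval_dataset={"val_a": val_a, "val_b": val_b},
)

metrics = trainer.evaluate()
print(metrics)  # expected to contain per-dataset keys, e.g. eval_val_a_loss, eval_val_b_loss
```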
07-18-2023 06:29:11
07-18-2023 06:29:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,876
open
Model saving has dimension issues when using DeepSpeed stage 3 with multi-node for larger models
### System Info Python 3.8.10 transformers 4.30.2 accelerate 0.20.3 deepspeed 0.9.5 ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm finetuning LLaMa30B using the command below. `accelerate launch src/train_sft.py --model_name_or_path huggyllama/llama-30b --do_train --dataset dummy_identity --finetuning_type full --output_dir output/30B-sft-identity-v1 --overwrite_cache --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 10 --save_steps 200 --learning_rate 5e-5 --num_train_epochs 3 --plot_loss --fp16 --deepspeed ds_config.json --report_to wandb` I'm running this on 2 nodes with 4 A100 80GB on each. For deepspeed I'm using stage 3. When I tried to load the saved model it's giving the error as below. But this issue is not there for LLaMa 7B. So I assume it has something to do with the stage 3 optimization and gathering for saving. <img width="1156" alt="image" src="https://github.com/huggingface/transformers/assets/12937285/d4f1b234-b67b-457a-84b3-2f64c6101418"> This is my deepspeed config ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` A similar [issue](https://github.com/hiyouga/LLaMA-Efficient-Tuning/issues/70#issuecomment-1626876405) was reported on the open-source repo I'm using. I can see they were able to resolve it without using HF trainer and write a script with accelerate. ### Expected behavior The multinode stage 3 trained model should be saved without any errors.
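For reference, a hedged sketch of rebuilding a full state dict directly from the ZeRO-3 partitions instead of relying on the gathered `pytorch_model.bin` (the checkpoint path is illustrative, and this assumes the `global_step*` partition files were saved alongside the checkpoint):
```python
# Hedged sketch: consolidate a ZeRO-3 partitioned checkpoint on CPU.
# Paths are illustrative; this needs enough host RAM for the full fp32 weights.
from deepspeed.utils.zero_to_fp32 import (
    get_fp32_state_dict_from_zero_checkpoint,
    load_state_dict_from_zero_checkpoint,
)
from transformers import AutoModelForCausalLM

checkpoint_dir = "output/30B-sft-identity-v1/checkpoint-200"  # illustrative

# Option 1: materialize the consolidated fp32 state dict
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)

# Option 2: load the consolidated weights into a freshly built model and re-save it
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-30b")
model = load_state_dict_from_zero_checkpoint(model, checkpoint_dir)
model.save_pretrained("output/30B-sft-identity-v1/consolidated")
```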
07-18-2023 04:53:45
07-18-2023 04:53:45
@dittops To be fair, it was said that it was resolved, but we have no means of validating that, since the actual code was not shared.
transformers
24,875
open
Remove jnp.DeviceArray since it is deprecated.
The latest version of JAX deprecates jax.numpy.DeviceArray. When using this version with Transformers, we get this error when instantiating FlaxBertModel: `module 'jax.numpy' has no attribute 'DeviceArray'`
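A minimal sketch of the breakage and the intended replacement (hedged: `jax.Array` is the unified array type that supersedes `DeviceArray` in recent JAX releases; whether the PR uses exactly this type check everywhere is my assumption):
```python
# Hedged sketch: the old alias vs. the unified array type.
import jax
import jax.numpy as jnp

x = jnp.ones((2, 3))

# On recent JAX releases the old alias is gone, so any module that still
# references jnp.DeviceArray (as the Flax models do today) fails with:
#   AttributeError: module 'jax.numpy' has no attribute 'DeviceArray'

# One replacement type check that works on current JAX:
assert isinstance(x, jax.Array)
print(type(x))
```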
07-18-2023 03:05:23
07-18-2023 03:05:23
cc @sanchit-gandhi <|||||>@sanchit-gandhi This pull request should be prioritised since the affected Flax models are currently not usable at all.<|||||>@mariecwhite Thanks for opening this PR! Running `make style` and pushing the changes will resolve the code quality CI checks<|||||>Hey @mariecwhite - it seems there is an issue with your CircleCI permissions, meaning the tests won't run! Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Let me know if you encounter any issues!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24875). All of your documentation changes will be reflected on that endpoint.<|||||>> Hey @mariecwhite - it seems there is an issue with your CircleCI permissions, meaning the tests won't run! Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Let me know if you encounter any issues! I just updated my CircleCI permissions and rebased.<|||||>CircleCI is still failing for me. Since this is a priority, I'm happy to let https://github.com/huggingface/transformers/pull/25275 get merged instead of this.
transformers
24,874
closed
NotImplementedError: offload_to_cpu=True and NO_SHARD is not supported yet
### System Info I was using fsdp with settings "full_shard auto_wrap" on an A100 GPU. The training went well but was interrupted when saving the checkpoints. The error stated `NotImplementedError: offload_to_cpu=True and NO_SHARD is not supported yet`. I understand that I am using a single GPU, so fsdp defaults to NO_SHARD. However, I don't understand why offload_to_cpu was set to True. Is there anywhere I can reset it to False? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction following https://github.com/lm-sys/FastChat to fine-tune an LLM ### Expected behavior the error as stated.
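For context, a hedged sketch of the full-state-dict save pattern where `offload_to_cpu` shows up (this uses the public PyTorch FSDP API; whether the Trainer takes exactly this path internally is my assumption):
```python
# Hedged illustration: saving a full state dict from an FSDP-wrapped model.
# On a single GPU the sharding strategy effectively becomes NO_SHARD, and
# PyTorch rejects offload_to_cpu=True in that combination, hence the error.
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import FullStateDictConfig, StateDictType


def save_full_state_dict(model: FSDP):
    cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)  # the offending flag
    with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
        return model.state_dict()  # this is where the NotImplementedError surfaces
```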
07-18-2023 03:00:41
07-18-2023 03:00:41
Can you please give us a code reproducer of the issue? cc @pacman100 <|||||>> Can you please give us a code reproducer of the issue? cc @pacman100 Thanks. Sure, here is the code: `torchrun train.py \ --model_name_or_path openlm-research/open_llama_3b \ --data_path /path/to/data \ --bf16 True \ --output_dir /path/to/output \ --num_train_epochs 3 \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \ --gradient_accumulation_steps 4 \ --evaluation_strategy "no" \ --eval_steps 1500 \ --save_strategy "steps" \ --save_steps 2 \ --save_total_limit 3 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.04 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "shard_grad_op auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --lazy_preprocess True ` `train.py` can be found [here](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py) You can use dummy data from [here](https://github.com/lm-sys/FastChat/blob/main/data/dummy_conversation.json)<|||||>Hello, thank you @linkailuo1986, the above PR should fix it. But it doesn't make sense to use FSDP on a single GPU.<|||||>Thanks @pacman100 for the quick fix. I used FSDP because it seemed to reduce VRAM for a larger batch size, which otherwise got an OOM error without it.
transformers
24,873
closed
ZeroShotClassificationPipeline has large memory spikes when using a lot of candidate_labels
### System Info ``` - `transformers` version: 4.27.4 - Platform: Linux-5.4.0-1103-aws-x86_64-with-glibc2.27 - Python version: 3.8.0 - Huggingface_hub version: 0.16.4 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: YES - Using distributed or parallel set-up in script?: NO ``` ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction # Repro Steps Cell ``` from transformers import pipeline tmp_repro_data = ['What a wonderful creation. Art, in our house, comes in many colors Thanks to Crayola! We unfortunately never seem to have enough crayons, though!', 'I purchased this to replace my 7 yr old video baby monitor that had been dropped too many times. I love this monitor. It also works with my old Summer monitor camera.', 'This float is very comfortable (and we LOVE the cup holders) but the headrest is a little too high to be universally comfortable. I think letting some air out of it would solve the problem.', 'I have Marmite on toast every morning. Although I have been told by an Australian friend that Marmite is not up to the standards of Vegemite, I cannot tell the difference. An English friend tells me Marmite is better. Go figure.:) I love it and make certain to never run out of Marmite. This is one of those', "This was the only reason I could get anything done once we got home. My daughter always wanted to be held, but once you would lay her under here (especially the mirror), she was mesmerized. I highly recommend buying the extra set of toys also! Even though she's older now and doesn't play with it as much - she still loves to take all of the toys with her and play with them alone!!!", 'This is the best packaged coconut water that I have ever tasted. It reminds me of the fresh coconut water right from the tree that I used to have in Jamaica. I have tried other brands of coconut water, but none can compare to Vita Coco. That is the only brand that I will buy...other that buying the real green coconut.', "I specifically looked for this product online because I couldn't find it in the local drugstore any longer. This really does what it says it does - it gives you a matte finish which lasts pretty much all day - you will notice a difference when you use it and when you don't. If you tend to get oily skin during the day (T-zone or otherwise) this minimizes that significantly. I hope I can always find this product!", "We got this for my daughter's first birthday and she loves it. She can make the animals or just push the frog and dance to the songs. It's also easy to take with you and at home, she can carry the little farm house to another room if she wants. (It's been thrown all over and hasn't broken, which is another plus.) We may get tired of the same animal facts and songs but she never does.", 'I love Bare Escentuals bare minerals products. I treated myself to 4 of the products. I found the accompanying brushes high in quality. I wash my brushes periodically to prevent break outs. The brushes do well. I have had many compliments on my complection. Even though I am older I still get breakouts. 
The minerals have helped to decrease the flare- ups.**** I can well identify with the comments by some customers about dry skin and looking older.**** I absolutely must use a moisturizer with each application. I wish so much this moisturizing issue would be addressed with TV presentations. Otherwise, without liberal application of a moisturizer, my skin would look extremely dry and chalky no matter how beautiful the glow! Also lines are quite visible if moisure is insufficient. In spite of all of this, I have found minerals to be a great makeup.It is worth the money and time to continue my use of a moisturizer routine I have used for years. Ann HannaI also wish the lids were plastic for easy washing after use. I use an alcohol wipe to clean the inside of the lids periodically.', "My 10 month old son loves this toy! It is one of his favorites. Not only does he like to put the shapes (and any other toy that will fit) in the holes, but he also loves to play with the shapes on the floor, especially the ones that shake and make noise. He also likes this toy a lot because he can open and close the lid of the drum, repeatedly putting in and taking out shapes and toys. The Shape Rattle 'n Roll has definitely been worth the mere five dollars that it cost!", "I have been looking a long time for gum that is not made of materials bad for your health. I'm not worried about taste, but this tastes good and more importantly for me it is healthy and chews well. Some of the healthy chewing gums just fall apart. to me, healthy chewing gum means it doesn't have sugar or the horrible chemicals you find in the sugarless gums sold at grocery stores.", 'We adopted two cats from a rescue shelter, a male first and then a female a couple of days later. They got along okay in the beginning but became more and more jealous of each other. The male had to be the boss of everything...food, toys and attention. The female started getting back at him about two months later by wetting in his favorite hang-out spots on the floor and then on my husband\'s leather recliner. I tried Nature\'s Miracle on the carpet first but it didn\'t work. The smell was still there and she went back and wet on the same spot.After researching online and reading the reviews and tips from other customers, I ordered a gallon of Urine Off For Cats through Amazon, as well as a blacklight from Walmart and a big marinade infuser syringe from Linens N Things. The blacklight found spots we were unaware of, including under the recliner. I took masking tape and marked the area about 6" beyond each spot on the carpet and then marked spots about 4" apart within each circle. I poured about 3 cups of Urine Off into a 4 cup measuring cup to make it easier to draw the solution into the syringe. Then I injected each spot with a full syringe of Urine Off, marking each spot with an X in pen on the masking tape as I went along so I knew where I had already injected. Eventually the tip on the syringe was bending so I found that it was easier to use a big skewer to poke the hole first and then push the syringe into the carpet. When I finished injecting, I then filled a pump sprayer with the solution and saturated the top of the carpet. I covered the spots with plastic garbage bags for 2 days and then allowed them to air dry.For the leather recliner I had to pull the leather covers away from the back of the cushions and spray the leather both inside and out and around the zippers. 
I injected the cushions with the syringe like I did the carpet and put them in garbage bags. I also put a plastic tarp under the chair and sprayed everywhere the urine may have gone on top and underneath of the chair including any padding and all of the metal and springs. Then I covered it all with plastic bags for 2 days before letting it air dry. Check the cushions to be sure mold doesn\'t start growing. I removed them from the plastic early. I used a leather conditioner afterwards to restore the pliability to the leather. The metal underneath began to rust in some places, but it came off with WD 40 when we treated the hinges afterwards.I wish I could say I only had to do all of this work once, but I had to repeat it a second time before all of the smell was gone to my sensitive nose. To be fair, the directions say it may take two or more treatments for the smell to be eliminated. Also, I had used the Nature\'s Miracle on the two small spots I was aware of first which may have made it harder for the Urine Off to work. I\'m sure I didn\'t get down to the subfloor with the Nature\'s Miracle and I didn\'t cover it with plastic. In fact, I used a fan on it to dry it faster which I now know is the opposite of what you should do. But because the Urine Off directions said you had to saturate the carpet, the padding and the subfloor below the padding, and also to widen the area that you treat beyond the original spots, I had to buy a 2nd gallon. To repeat the process, I bought a 3rd gallon. But the end result is that we don\'t smell any urine odor. Only a slight lemony smell of the solution remains. I was able to save our $1800 recliner, but sadly, my husband insisted that our female cat go back to the rescue shelter. I\'m sure she will be much better off as an only cat who is queen of her castle just as our boy enjoys being king.', "We like trying new flours for our home made bread (mostly sourdough). Spelt works very well with sourdough starter. Gives the bread a subtle nut like flavor and fine texture. Plus, it doesn't affect the rise very much (we do add gluten to assist the sourdough rise). Of amaranth, teff, and spelt, we like spelt the best.", 'One of the many things I can do during the day is use smaller amounts of paper to try to reduce my carbon footprint. I know it would be better to not use papertowels at all, but this is surely a good alternative for those of us on the path to global warming enlightenment.', 'The product is wonderful - it leaves the laundry smelling very fresh. It is a pleasure to deal with the folks at [...]. They are very quick with their deliveries and responsive.', "I originally gave this product a 5 star review, but I was forced to edit the review after 3 months of use. I find the major problem with this item is that if you have a very messy diaper and can't tightly bundle it without a mess, the mess gets all over the module that dumps the diaper and the stink is OUTSIDE of the diaper pail.I used this pail with both cloth and disposable diapers (during different time periods). The first month it worked great for both, in my opinion, although it didn't hold near as many cloth diapers and they occasionally needed some help getting into the pail. However, once my baby grew into the next size of cloth diapers, it was IMPOSSIBLE to use the champ with them. 
Now, I understand this pail is not marketed for use with cloth diapers so I won't hold that against it, however just in case you are considering it for such a purpose as I did, DON'T.The last complaint I have with this pail is that after only 2 months of use the inner seal broke and fumes were all over the room. I did NOT call the company for a replacement as the other reviewer did, because I cannot use the pail with my cloth diapers.This pail has become garage sale fare.", "It's a nice replica toy for children. Good work with the full retractable blade, but not very shinny (the similar saber created before shines much more). a little big for kids, but fun to play with. my son loves it.", "This machine has plenty of power. I have only used it to make pizza dough and it worked extremely well. The dough came out great and I can't wait to use the shredding blades next.", 'This pasta has a wonderful natural flavor and is naturally high in fiber and nutrients. I eat it plain sometimes, and use it in place of white rice and other more processed grains.', 'I really recommend this product. The price on Amazon was a lot better than I could find in any store. The product arrived ahead of expected delivery time. It works really well, its quick to heat up and does a really good job of smoothing down my thick hair!'] p = pipeline( 'zero-shot-classification', model='facebook/bart-large-mnli', device=0, ) def _revised_forward(self, inputs): candidate_label = inputs["candidate_label"] sequence = inputs["sequence"] model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names} #type: ignore outputs = self.model(**model_inputs, use_cache=False) model_outputs = { "candidate_label": candidate_label, "sequence": sequence, "is_last": inputs["is_last"], **outputs, } return model_outputs # With this line it works as expected, without it memory spikes. The only difference between `revised_forward` # and the transformers repo is that we pass `use_cache=False` as an extra arg to inference with `self.model` #p._forward = _revised_forward.__get__(p) p( tmp_repro_data, multi_label=True, candidate_labels=list(range(130)), ) ``` ### Expected behavior Letting this script run as is causes memory (CPU memory, not GPU memory) to spike over 10Gi at around 1500 inference calls. This can break a lot of environments, especially anything involving running jobs on resource constrained machines. After some debugging, we traced this to the `past_key_values` object being returned by the Bart model, which was a tuple of some very large tensors. We suspect that these large tensors are causing garbage collection to not be able to catch up when storing all of these model inference requests in a single list. Passing `use_cache=False` to model inference (and therefore not returning the `past_key_values` object) fixes the memory spikes, making us think this was indeed the issue.
07-18-2023 01:58:48
07-18-2023 01:58:48
Hi @rsmith49 Thank you for opening this issue ๐Ÿค— . I will take a look!<|||||>> could you confirm that the issue happens only when the results (of all 1500 inference calls) are saved? (I think it's yes?) I have been running in a jupyter notebook, which I think does save the results from calling the pipeline since it is the final statement in the cell - let me try in a regular python process and see if the memory spikes the same. I should note though that the "1500 inference calls" I mentioned are only over 20 documents - since there are 130 `candidate_labels`, the pipeline calls the model for inference 2600 times (130 * 20). So saving the results here will be 20 dicts with ranked scores for each `candidate_label`. > when you save the model inference results, do you also contain those returned past_key_values? (I guess not ..?) Correct, the result from the pipeline does not contain the `past_key_values`. The "storing in a single list" code occurs [here](https://github.com/huggingface/transformers/blob/dd49404a897f84622d38254fe90cd07d8c1640b0/src/transformers/pipelines/base.py#L1103), and stepping through with `pdb` shows the iterator's internal function creating a reference to the `past_key_values` at each `__next__` call<|||||>Hi, I am not able to reproduce with the following (slightly modified) script (see at the end), running in python directly ```bash iteration: 0 RAM: 4318.6015625 MB timing: 18.248116 sec. ============== iteration: 156 RAM: 4319.5 MB timing: 18.464201 sec ``` It would be great if you can try to see if the issue happens with python script only. However, this is a sequence classification model, and the `past_key_values` is not used by this model. I will try to have a fix anyway. Thank you again for showing this issue to us! ```python from transformers import pipeline tmp_repro_data = ['I purchased this to replace my 7 yr old video baby monitor that had been dropped too many times.'] * 20 ckpt = 'facebook/bart-large-mnli' # ckpt = 'facebook/bart-base' p = pipeline( 'zero-shot-classification', model=ckpt, device="cuda", batch_size=20, ) import pdb; pdb.set_trace() def _revised_forward(self, inputs): candidate_label = inputs["candidate_label"] sequence = inputs["sequence"] model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names} #type: ignore outputs = self.model(**model_inputs, use_cache=False) model_outputs = { "candidate_label": candidate_label, "sequence": sequence, "is_last": inputs["is_last"], **outputs, } return model_outputs # With this line it works as expected, without it memory spikes. The only difference between `revised_forward` # and the transformers repo is that we pass `use_cache=False` as an extra arg to inference with `self.model` import psutil import os process = psutil.Process(os.getpid()) #p._forward = _revised_forward.__get__(p) import datetime for i in range(1000): s = datetime.datetime.now() o = p( tmp_repro_data, multi_label=True, candidate_labels=list(range(130)), ) e = datetime.datetime.now() d = (e-s).total_seconds() mem = process.memory_info()[0] / float(2 ** 20) print(i) print(mem) print(d) print("=" * 80) ```<|||||>Thanks for looking into this! Weirdly, I also did not see memory spikes when using a single text snippet copied 20 times, only when using 20 unique strings (I'm guessing something to do with caching somewhere in either python, torch, or transformers that makes garbage collection more effective). So if you could try using the example list I posted above that may do it. 
Haven't had a chance to run the script in a pure python process but will let you know when I do!<|||||>That would be nice to know! (I am opening a PR soon anyway :-) )<|||||>Ran the script as just `python tmp_script.py` and saw memory go as high as 7.9Gi before I killed the process, somewhere around 1187 samples (NOTE: same environment as above, transformers==4.27.4, python version 3.8.0). So it looks like it occurs not just when saving the result of `p(...)`, and is not just an artifact of notebooks ๐Ÿ‘ <|||||>> go as high as 7.9Gi You use the different 20 text sentences in `tmp_repro_data`, right? (I am running with the same text repeated 20 times, with latest `main` of `transformers`.)<|||||>> You use the different 20 text sentences in tmp_repro_data, right? Yes, not sure why repeating the same text doesn't trigger it, but I get the same result as you when using repeated text
transformers
24,872
closed
`main_input_name` is None if `predict_with_generate` in keras_callbacks.py for encoder-decoder(Bert-Bert) TF models
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @gante , @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` KeyError Traceback (most recent call last) [<ipython-input-49-089aabb58b9b>](https://localhost:8080/#) in <cell line: 1>() ----> 1 history = model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_epochs, callbacks=callbacks) 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/keras_callbacks.py](https://localhost:8080/#) in on_epoch_end(self, epoch, logs) 217 if self.predict_with_generate: 218 if isinstance(batch, dict): --> 219 generation_inputs = batch[main_input_name] 220 attention_mask = batch.get("attention_mask", None) 221 else: KeyError: None ``` [Here's](https://colab.research.google.com/drive/1d75HqymedDSopRDXDBSNIBGT1q7zvCWz?usp=sharing) the colab link to reproduce the error. because of this code in `keras_callbacks.py` (commented with >>>>> .... <<<<<< for better understanding)- ``` #### in tf_keras_callback (in func `on_epoch_end`, ~line 191) main_input_name = None if self.predict_with_generate: # This dense conditional recognizes the case where we have an encoder-decoder model, but # avoids getting tangled up when we just have a model with a layer called 'encoder' if hasattr(self.model, "encoder") and hasattr(self.model.encoder, "main_input_name"): # >>>>>>> If this condition is not satisfied(which is the case currently), `main_input_name` remains None <<<<<<<< if self.model.encoder.main_input_name != self.model.main_input_name: main_input_name = self.model.encoder.main_input_name else: main_input_name = getattr(self.model, "main_input_name", "input_ids") if self.use_xla_generation and self.generation_function is None: def generation_function(inputs, attention_mask): return self.model.generate(inputs, attention_mask=attention_mask, **self.generate_kwargs) self.generation_function = tf.function(generation_function, jit_compile=True) prediction_list = [] label_list = [] # The whole predict/generate loop is handled inside this method for batch in self.eval_dataset: if isinstance(batch, tuple): batch, labels = batch else: labels = None if self.predict_with_generate: if isinstance(batch, dict): generation_inputs = batch[main_input_name] # >>>>>>>>>>>> `main_input_name` remains None here (~line 219) <<<<<<<<<<<< attention_mask = batch.get("attention_mask", None) else: generation_inputs = batch attention_mask = None if self.use_xla_generation: predictions = self.generation_function(generation_inputs, attention_mask=attention_mask) else: predictions = self.model.generate( generation_inputs, attention_mask=attention_mask, **self.generate_kwargs ) ``` ### Expected behavior `main_input_name should` be `input_ids` for that the following function can be modified - ``` if hasattr(self.model, "encoder") and hasattr(self.model.encoder, "main_input_name"): if self.model.encoder.main_input_name != self.model.main_input_name: main_input_name 
= self.model.encoder.main_input_name ``` to something like - ``` if hasattr(self.model, "encoder") and hasattr(self.model.encoder, "main_input_name"): main_input_name = self.model.encoder.main_input_name ```
07-18-2023 01:39:54
07-18-2023 01:39:54
Thank you for reporting @saichandrapandraju 🤗 cc @Rocketknight1 (when he is back, or I can take a look after finishing some other tasks) <|||||>Sure @ydshieh, once this is verified to be valid, I can create a PR 🙂
transformers
24,871
closed
Trainer is always using IPEX, even when use_ipex=False
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. The issue can be reproduced with the [text-classification example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) script (other scripts would have the same issue). I have `intel-extension-for-pytorch==2.0.100` installed in my environment and am running the following command to run_glue.py without `use_ipex` (so it should default to `False`): ``` export MODEL_NAME=distilbert-base-uncased export OUTPUT_DIR=/home/dmsuehir/glue_output export TASK_NAME=mrpc python run_glue.py \ --model_name_or_path $MODEL_NAME \ --task_name $TASK_NAME \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --no_cuda \ --output_dir $OUTPUT_DIR \ --bf16 ``` The train metrics I see with this run are: ``` ***** train metrics ***** epoch = 1.0 train_loss = 0.6083 train_runtime = 0:00:37.35 train_samples = 3668 train_samples_per_second = 98.191 train_steps_per_second = 1.553 ``` Note that we are seeing `98.191` samples/second. 2. Next try running the same command, except adding on `--use_ipex`. Note that I am also deleting my output directory between runs. ``` python run_glue.py \ --model_name_or_path $MODEL_NAME \ --task_name $TASK_NAME \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --no_cuda \ --output_dir $OUTPUT_DIR \ --bf16 \ --use_ipex ``` I see a similar training metric for `train_samples_per_second` as step 1: ``` ***** train metrics ***** epoch = 1.0 train_loss = 0.6083 train_runtime = 0:00:37.94 train_samples = 3668 train_samples_per_second = 96.654 train_steps_per_second = 1.528 ``` 3. Finally, I had debugged this issue to look into how IPEX is being used in the Trainer. I found that it can be called in two places: (1) it can get called from the Trainer [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1310) or (2) it can get called by accelerate [here](https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L1748). The Trainer is properly respecting the `use_ipex` arg, however, it appears that accelerate is always using IPEX if it's installed. Digging deeper into this, I found that accelerate would only not use IPEX if [`ACCELERATE_USE_IPEX` gets set to False/0](https://github.com/huggingface/accelerate/blob/main/src/accelerate/state.py#L765). 
To confirm this, I manually set `ACCELERATE_USE_IPEX=0` and then ran the same script/args from step 1: ``` export ACCELERATE_USE_IPEX=0 python run_glue.py \ --model_name_or_path $MODEL_NAME \ --task_name $TASK_NAME \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --no_cuda \ --output_dir $OUTPUT_DIR \ --bf16 ``` And now I see these training metrics, where we see a drop in `train_samples_per_second`, which indicates that IPEX has actually been turned off now that the env var was used: ``` ***** train metrics ***** epoch = 1.0 train_loss = 0.697 train_runtime = 0:01:07.74 train_samples = 3668 train_samples_per_second = 54.143 train_steps_per_second = 0.856 ``` ### Expected behavior When `use_ipex` is not given or set to `False`, IPEX optimize should not get called. If it's agreed that this is in fact a bug, I would be happy to work on a PR to fix it. I saw that other accelerate env vars are getting set from `training_args.py`.
07-18-2023 00:44:23
07-18-2023 00:44:23
cc @muellerzr (right?)<|||||>This is a problem that should be solved in Accelerate, I'll work on a PR today with this. Thanks for the flag! Edit: actually this can be solved in the training args, PR coming shortly<|||||>@dmsuehir can you try running again with `pip install git+https://github.com/huggingface/transformers@muellerzr-ipex` and set `use_ipex` to `False`? (it's the default)<|||||>@muellerzr Yes, the fix in your branch works. Thanks!<|||||>@muellerzr By the way, I think `no_cuda` and `ACCELERATE_USE_CPU` may have the same issue, but I don't have a GPU on my machine to verify.
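For anyone who needs a stopgap before the fix above is released, here is a minimal sketch (mirroring the manual `ACCELERATE_USE_IPEX=0` workaround from the report) that forces Accelerate to skip IPEX by setting the environment variable before the training arguments are built; the output path is illustrative only.

```python
import os

# Mirror the manual workaround from the report: tell Accelerate not to apply IPEX.
# This must happen before TrainingArguments/Trainer initialize the Accelerate state.
os.environ["ACCELERATE_USE_IPEX"] = "0"

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="glue_output",  # illustrative path
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=1,
    no_cuda=True,
    bf16=True,
    use_ipex=False,  # should be respected once the linked fix lands
)
```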
transformers
24,870
open
Amazon Bedrock model as HfAgent
### Feature request I'd like to be able to use models available through Amazon Bedrock, for example Claude, as the HfAgent model. ### Motivation Expanded model support. ### Your contribution Not sure currently.
07-17-2023 22:20:42
07-17-2023 22:20:42
Hi @austinmw Thank you for this feature request. I am not sure, however; cc my colleague @sgugger, who knows the Agent topic much better!
transformers
24,869
open
🌐 [i18n-KO] Translated `<debugging>.md` to Korean
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค --> # What does this PR do? Translated the `<debugging>.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- ๋ฉ”์ธ ์ด์Šˆ์— ๊ธฐ๋ก์ด ๋‚จ์•„์š”! ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ๋ฆฌํฌ๋ฅผ ์‚ฌ์šฉํ•ด ์—ฐ์Šตํ•˜์‹ค๋•Œ๋Š” ์ œ๊ฑฐํ•ด์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค! :smile: --> ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—๋งŒ ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> Team PseudoLab, may you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
07-17-2023 20:46:14
07-17-2023 20:46:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24869). All of your documentation changes will be reflected on that endpoint.<|||||>Sohyun gave a very thorough review! LGTM 👍
transformers
24,868
closed
Remove `tests/onnx`
# What does this PR do? Remove `tests/onnx`, as discussed in https://github.com/huggingface/transformers/pull/24800#issuecomment-1634822781. Note that some tests, like `TFGPT2ModelTest::test_onnx_runtime_optimize`, are not removed in this PR.
07-17-2023 20:16:26
07-17-2023 20:16:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,867
closed
Skip failing `ZeroShotAudioClassificationPipelineTests::test_small_model_pt` for now
# What does this PR do? Skip the failing `ZeroShotAudioClassificationPipelineTests::test_small_model_pt` for now. See the [failing job](https://app.circleci.com/pipelines/github/huggingface/transformers/68367/workflows/0d616969-381a-4ce2-96f9-ec83b259df75/jobs/856192); this is likely a `datasets` issue.
07-17-2023 19:29:28
07-17-2023 19:29:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,866
open
ValueError: operands could not be broadcast together with shapes (60,4) (24,2,60,16,1024,64)
I am trying to fine-tune a model using the Trainer API but I am getting this error: ![image](https://github.com/huggingface/transformers/assets/138615931/22a5ce44-a0ee-4fba-83f4-9d31c9ce76d4) I have looked online and I can't find anything similar to this, at least related to transformers and NLP. Here is the code that I am using: https://colab.research.google.com/drive/1hdAG3rC1LHp7tJ4DCKbt90IDFsGcubHf?usp=sharing
07-17-2023 19:00:40
07-17-2023 19:00:40
You should use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for feature requests and bugs in the library only. Here it seems your `compute_metrics` function does not take the logits from the result of the model, which contains two arrays at least (the logits and some kind of hidden state).
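To make the hint above concrete, here is a minimal sketch of a `compute_metrics` that guards against the model returning extra arrays next to the logits; the accuracy metric from the `evaluate` library is only an illustration, since the notebook's actual metric is not shown.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    # Some models return a tuple (logits, some kind of hidden state); keep only the logits.
    if isinstance(predictions, tuple):
        predictions = predictions[0]
    predictions = np.argmax(predictions, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```

Passing this function as `compute_metrics=compute_metrics` to the `Trainer` should avoid the broadcast error, since only the logits reach `np.argmax`.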
transformers
24,865
closed
Skip Add model like job
# What does this PR do? The "Add model like" job has been failing for a mysterious reason since this morning. I suggest skipping it for now and re-enabling it once it is fixed.
07-17-2023 18:35:42
07-17-2023 18:35:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,864
closed
Fix the fetch of all example tests
# What does this PR do? I noticed on recent PRs that when all tests are fetched, the example tests are not run. Upon closer inspection, it's because the test for `"all"` in the `test_fetcher` compares a list instead of a string. This PR addresses that.
07-17-2023 17:53:57
07-17-2023 17:53:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24864). All of your documentation changes will be reflected on that endpoint.
transformers
24,863
closed
deprecate no_cuda
# What does this PR do? This PR deprecates the `no_cuda` arg because it is confusing for Mac users, as their models get dispatched to the `mps` device when `no_cuda=False`. If they want to train the model on CPU, they need to set `no_cuda=True`, which is not intuitive. We rename it to `use_cpu` instead. Related issue #24697
07-17-2023 17:13:58
07-17-2023 17:13:58
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,862
closed
Fix token pass
# What does this PR do? The `token` passed to `PreTrainedTokenizerBase.from_pretrained` is forwarded twice at the end: once in the kwargs and once as `use_auth_token`. This caused the speech examples to fail.
07-17-2023 17:02:33
07-17-2023 17:02:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24862). All of your documentation changes will be reflected on that endpoint.
transformers
24,861
closed
fix broken links in READMEs
# What does this PR do? Currently, the foreign-language READMEs have broken links; this PR fixes them. cc @sgugger
07-17-2023 15:56:32
07-17-2023 15:56:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,860
closed
Model parameters don't update with deepspeed integration
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.10.173-154.642.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.10 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @pacman100 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (1) I noticed that performance wasn't improving over training epochs, so I added in the following code to `examples/pytorch/translation/run_translate.py` at line 250 in order to monitor parameter value changes: ``` from transformers import TrainerCallback class MonitorParameterCallback(TrainerCallback): def on_train_begin(self, args, state, control, model, **kwargs): self.original_value = model.encoder.block[0].layer[0].SelfAttention.q.weight.clone() def on_epoch_end(self, args, state, control, model, **kwargs): new_value = model.encoder.block[0].layer[0].SelfAttention.q.weight.clone() change = new_value - self.original_value change_norm = float(change.square().sum()) print('change = ', change_norm) ``` However, any other method of confirming parameter changes would suffice. (2) The following code (no deepspeed) prints out positive values for training loss and change in parameter values: ``` python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small \ --do_train \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 16 ``` (3) The following code (with deepspeed) prints out positive values for training loss but 0 for change in parameter values: ``` deepspeed examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small \ --do_train \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir /tmp/tst-translation \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --max_train_samples 16 \ --deepspeed tests/deepspeed/ds_config_zero3.json ``` ### Expected behavior When training with deepspeed, parameter values of the model should update.
07-17-2023 15:49:06
07-17-2023 15:49:06
I apologize if this is actually a deepspeed issue rather than a deepspeed integration issueโ€”I'm having trouble parsing which option is the case.<|||||>I think it is expected behaviour, because Zero3 partitions the model weights. You should use `deepspeed.zero.GatheredParameters` context manager or you can check the partitioned parameters stored in the `param.ds_tensor` attribute. To prove it, you can check your `model.encoder.block[0].layer[0].SelfAttention.q.weight.data.shape`, it should be empty.<|||||>@1ytic, you're rightโ€”it was empty. It's not clear to me how to use the suggestions you gave meโ€”can you provide a little more detail? <|||||>Try to use this tensor `model.encoder.block[0].layer[0].SelfAttention.q.weight.ds_tensor.clone()` in your `MonitorParameterCallback`.<|||||>Thanks! It turns out, the issue was due to NaN loss from fp16.
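For readers who want to keep the parameter-monitoring callback working under ZeRO-3, here is a minimal sketch of the gathering approach mentioned above; it assumes the same t5-small module path as the original callback.

```python
import deepspeed
from transformers import TrainerCallback


class MonitorParameterCallback(TrainerCallback):
    def _get_weight(self, model):
        param = model.encoder.block[0].layer[0].SelfAttention.q.weight
        # Under ZeRO-3 the weight is partitioned across ranks; gather it before reading.
        with deepspeed.zero.GatheredParameters(param, modifier_rank=None):
            return param.detach().clone()

    def on_train_begin(self, args, state, control, model=None, **kwargs):
        self.original_value = self._get_weight(model)

    def on_epoch_end(self, args, state, control, model=None, **kwargs):
        change = self._get_weight(model) - self.original_value
        print("change =", float(change.square().sum()))
```

Note that, as the thread resolution above shows, an apparent lack of progress can also come from the fp16 loss turning NaN rather than from the partitioning itself.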
transformers
24,859
closed
Add TAPEX to the list of deprecated models
# What does this PR do? TAPEX was not in the list of deprecated models, which meant importing it with the auto API did not work. I'll add a script to check that the content of that constant is in sync with the content of the deprecated folder so this doesn't happen again. Fixes #24852
07-17-2023 15:29:15
07-17-2023 15:29:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24859). All of your documentation changes will be reflected on that endpoint.
transformers
24,858
closed
Ra
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-17-2023 14:28:44
07-17-2023 14:28:44
transformers
24,857
open
Everything CLIP related seems to break starting form transformers 4.28.0
### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.10.107+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It seems to me that there is some regression starting from transformers 4.28.0 that affects the CLIP vision model and everything related to it. In particular, I am having issue with * ClipSeg * the CLIPVisionModel proper. # ClipSeg For ClipSeg, I am able to use it and get the expected masks, essentially by literally following the example [here](https://huggingface.co/docs/transformers/model_doc/clipseg#transformers.CLIPSegForImageSegmentation): ```python from transformers import AutoProcessor, CLIPSegForImageSegmentation from PIL import Image import requests processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a cat", "a remote", "a blanket"] inputs = processor(text=texts, images=[image] * len(texts), padding=True, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits print(logits.shape) ``` Then `logits` contains the logits from which I can obtain a mask by something like ```python mask = torch.exp(logits) mask /= mask.max() ``` I tested this and it works reliably until transformers 4.27.4. But with transformers 4.28.0, I get masks that are completely black regardless of the input image. # ClipVisionModel This is harder to describe, since it relies on an internal model. I have trained a model that makes use of the image embeddings generated by ClipVisionModel for custom subject generation. Everything works well until transformers 4.27.4. If I switch to 4.28.0, the generated image changes completely. The only change is installing 4.28.0. In fact, if I save the embeddings generated by CLIPVisionModel with the two different versions for any random image, I see that they are different. to be sure, this is how I generate image embeddings: ```python clip = CLIPModel.from_pretrained(...) preprocessor = CLIPProcessor.from_pretrained(...) ... encoded_data = preprocessor( text=prompts, images=images, return_tensors="pt", max_length=77, padding="max_length", truncation=True, ) clip_output = clip( input_ids=encoded_data.input_ids, pixel_values=encoded_data.pixel_values, ) image_embeds =clip.visual_projection( clip_output.vision_model_output.last_hidden_state ) ``` For reference, I am using clip-vit-large-patch14 ### Expected behavior I would expect CLIPVisionModel to give the same result on the same image, both in 4.27.4 and in 4.28.0
07-17-2023 13:09:51
07-17-2023 13:09:51
Just to make something reproducible, here we can see that the output of CLIPProcessor changes. I run the script ```python from PIL import Image import requests import transformers from torchvision.transforms.functional import to_tensor from transformers import CLIPProcessor processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) reference = to_tensor(image) encoded_data = processor( text=[""], images=[reference], return_tensors="pt", max_length=77, padding="max_length", truncation=True, ) print(transformers.__version__) print(encoded_data.pixel_values.mean()) ``` With 4.27.4 I get ``` 4.27.4 tensor(0.2463) ``` With 4.28.0 I get ``` 4.28.0 tensor(-1.6673) ```<|||||>I figured out the issue: the CLIPProcessor expects tensors in the range [0, 255], but only starting from transformers 4.28.0. This seems a pretty breaking change to me! If I multiply my tensor by 255, I get the right results<|||||>Hi, Thanks for reporting. This seems related to https://github.com/huggingface/transformers/issues/23096 and may be caused by https://github.com/huggingface/transformers/pull/22458. cc @amyeroberts <|||||>Hi @andreaferretti, thanks for raising this issue! What's being observed, is actually a resolution of inconsistent behaviour of the previous CLIP feature extractors. I'll explain: * to_tensor() doesn't just convert to a pytorch tensor, it also rescales the values to be between 0 - 1 * The deprecated feature extractors and image processors use Pillow for resizing their images. * Pillow requires that for RGB, pixel values are uint8 between 0-255. * Therefore input images with float values are upscaled and cast to uint8 before being converted to a PIL.Image.Image In the previous behaviour, images after resizing kept their upscaled values. Currently, if an image was upscaled during resizing, the pixel values are downscaled back e.g. to between 0-1. This ensures that the user can set `do_resize` to `True` or `False` and the only difference in the output image is its size (and interpolated pixels). Previously, if you set `do_resize=False`, then your image pixel values are never upscaled, they remain between 0-1, would be downscaled again, as is happening now. Rather than try to infer processor behaviour based on inputs, we keep the processing behaviour consistent and let the user explicitly control this. If you wish to input images with pixel values that have been downscaled, then you just need to tell the image processor not to do any additional scaling using the `do_rescale` flag: ```py outputs = image_processor(images, do_rescale=False) ``` Alternatively, you could pass in the images without calling `to_tensor`. In the issues linked by @NielsRogge, this is also explained: https://github.com/huggingface/transformers/issues/23096#issuecomment-1557699476 However, this is the second time a similar issue has been raised, indicating that the behaviour is unexpected. I'll think about how to best address this with documentation or possible warning within the code. <|||||>Yeah, it would be useful to add a warning mentioning `do_rescale`, as well as mention this issue in the documentation of CLIP and related models
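To make the `do_rescale` workaround concrete, here is a minimal sketch using the same checkpoint and image as the repro above; since only the pixel values matter for the comparison, the image processor is called directly.

```python
from PIL import Image
import requests
from torchvision.transforms.functional import to_tensor
from transformers import CLIPImageProcessor

image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Option 1: pass the PIL image directly and let the processor rescale from 0-255.
pixels_from_pil = image_processor(images=image, return_tensors="pt").pixel_values

# Option 2: the tensor from to_tensor() is already in [0, 1], so tell the processor
# not to divide by 255 a second time.
pixels_from_tensor = image_processor(
    images=to_tensor(image), do_rescale=False, return_tensors="pt"
).pixel_values

print((pixels_from_pil - pixels_from_tensor).abs().max())  # expected to be ~0
```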
transformers
24,856
closed
Replace assert statements with exceptions
# What does this PR do? I have replaced the assert statements with appropriate exceptions in the directory `src/transformers/models/` for all models beginning with the letters `a` and `b`. Also, I have corrected error handling at places where except statements were handling AssertionError, even though it was never to be raised. Here is an example: ``` try: if pointer.shape != array.shape: raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") except AssertionError as e: # Incorrect line e.args += (pointer.shape, array.shape) raise ``` I changed the above to: ``` try: if pointer.shape != array.shape: raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") except ValueError as e: # Corrected the line e.args += (pointer.shape, array.shape) raise ``` Fixes #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @sgugger
07-17-2023 12:56:38
07-17-2023 12:56:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,855
closed
Fix comments for `_merge_heads`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-17-2023 12:56:08
07-17-2023 12:56:08
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,854
open
[BLIP-2] Improve conversion script
# What does this PR do? When investigating an issue reported [here](https://github.com/salesforce/LAVIS/issues/418), I reran and improved BLIP-2's conversion script (based on InstructBLIP). It's important to compare apples to apples, so I had to fork the LAVIS repo and make sure the original model is also run in float32.
07-17-2023 12:38:13
07-17-2023 12:38:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24854). All of your documentation changes will be reflected on that endpoint.
transformers
24,853
closed
Fix `is_vision_available`
# What does this PR do? Fix #24845 After #23163, we need an extra check if we want to support the use of `pillow-simd`.
07-17-2023 12:09:33
07-17-2023 12:09:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,852
closed
Loading model microsoft/tapex-base-finetuned-wtq failed with error No module named 'transformers.models.tapex'
### System Info OS: Ubuntu 22.04 Python 3.9 Packages installed: ``` pip install git+https://github.com/huggingface/transformers pip install datasets huggingface_hub torch torchvision tensorflow accelerate librosa ffmpeg ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Code ``` import transformers transformers.pipeline( task="table-question-answering", model="microsoft/tapex-base-finetuned-wtq" ) ``` raises error: ``` ModuleNotFoundError: No module named 'transformers.models.tapex' ``` ### Expected behavior Returns `transformers.pipelines.table_question_answering.TableQuestionAnsweringPipeline` object
07-17-2023 10:36:17
07-17-2023 10:36:17
I'm unable to reproduce this on my side. Are you sure you are using the latest commit from main?<|||||>@sgugger Yes. I installed it by: `pip install git+https://github.com/huggingface/transformers` and then, in a python REPL, ran: ``` import transformers transformers.pipeline( task="table-question-answering", model="microsoft/tapex-base-finetuned-wtq" ) ``` Our MLflow CI (run against transformers master) has the same error: https://github.com/mlflow/mlflow/actions/runs/5567846085/jobs/10170056430#step:15:526 <|||||>@sgugger Did you run it on an Ubuntu system?<|||||>I had some `__pycache__` remaining in `models/tapex` so it didn't error, but after cleaning that folder I can reproduce. Having a look, the fix should come this afternoon.<|||||>It was actually fairly easy to fix. Could you quickly check that the PR above solves the issue for you too?
transformers
24,851
open
save_pretrained 4bits/8bits model
### System Info I have a BitsAndBytes-quantized 4/8-bit model. How do I save it? I invoked the save_pretrained API; however, I get the error: AttributeError: 'str' object has no attribute 'numel'. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction bnb_config = BitsAndBytesConfig() bnb_config.load_in_8bit = True model = AutoModelForCausalLM.from_pretrained( model_path, load_in_8bit=True, device_map="auto", trust_remote_code=True ).eval() model.save_pretrained("BitsAndBytesQuant8/") ### Expected behavior saved successfully
07-17-2023 09:01:07
07-17-2023 09:01:07
cc @younesbelkada and @SunMarc <|||||>hi @jameswu2014 Thanks for the issue, what is the transformers version you are using? Can you try again with the latest version of `transformers`? Also can you share the full traceback and a reproducible handy snippet? Thanks<|||||>having the same issue, believe because there are string values in the state dicts, that are not really tensors<|||||>Hi @psinger, I just checked our daily CI and we do have a CI test to check int8 serialization works correctly here: https://github.com/huggingface/transformers/blob/main/tests/bnb/test_mixed_int8.py#L286-L311 and the test is passing. I believe maybe this PR: https://github.com/huggingface/transformers/pull/24416 fixed your issues, can you try to install transformers from source and use the latest version of `bitsandbytes` ? ```bash pip uninstall transformers bitsandbytes pip install git+https://github.com/huggingface/transformers.git pip install --upgrade bitsandbytes ```<|||||>Thanks, I actually misspoke, my error is: `AttributeError: 'str' object has no attribute 'device' ` Installing transformers from source indeed seems to solve it.<|||||>> Hi @psinger, I just checked our daily CI and we do have a CI test to check int8 serialization works correctly here: https://github.com/huggingface/transformers/blob/main/tests/bnb/test_mixed_int8.py#L286-L311 and the test is passing. I believe maybe this PR: #24416 fixed your issues, can you try to install transformers from source and use the latest version of `bitsandbytes` ? > > ```shell > pip uninstall transformers bitsandbytes > pip install git+https://github.com/huggingface/transformers.git > pip install --upgrade bitsandbytes > ``` Whether 4bits supported? I need both 4bits and 8bits saved.<|||||>Hi @jameswu2014 Thanks for the heads up, as stated above, currently 4bit saving is not supported yet, feel free to raise an issue on bitsandbytes repository to request this feature<|||||>Is there a work-around for 4-bit models? Can I convert the model to something like float16 and then save it? Or is 4-bit fine-tuning not really usable? Thanks.
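Regarding the last question, a commonly used pattern while 4-bit serialization is unsupported (a sketch only, with a placeholder checkpoint path) is to keep the full- or half-precision weights on disk and re-apply the 4-bit quantization every time the model is loaded:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# "path/to/fp16_checkpoint" is a placeholder: the fp16/fp32 weights stay on disk,
# and the 4-bit quantization is recreated at load time instead of being saved.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/fp16_checkpoint",
    quantization_config=bnb_config,
    device_map="auto",
)
```

For 4-bit fine-tuning, the usual approach at the moment is to train LoRA adapters on top of the quantized base model (QLoRA-style) and save only the adapter weights, which do not need 4-bit serialization.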
transformers
24,850
open
I used a Trainer to pretrain a BertForMaskedLM model, but the training loss is always zero
### System Info > transformers==4.28.0 ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I used a Trainer to pretrain a BertForMaskedLM model, but the training loss is always zero ``` config = BertConfig(vocab_size=40000,num_hidden_layers=6,) model = BertForMaskedLM(config) print('Number of parameters: ', model.num_parameters()) pretrained_models_path = "my path" training_args = TrainingArguments( output_dir=pretrained_models_path, overwrite_output_dir=True, per_device_train_batch_size=32, num_train_epochs=10, save_steps=10000, save_total_limit=2, prediction_loss_only = True, fp16=True, ) trainer = Trainer( args=training_args, train_dataset=train_dataset, data_collator=data_collator, model=model, ) trainer.train() ``` When I finish the training, the results show: ``` Step | Training Loss -- | -- 500 | 0.000000 1000 | 0.000000 1500 | 0.000000 2000 | 0.000000 2500 | 0.000000 3000 | 0.000000 TrainOutput(global_step=3130, training_loss=0.0, metrics={'train_runtime': 367.6023, 'train_samples_per_second': 272.033, 'train_steps_per_second': 8.515, 'total_flos': 1779771658266624.0, 'train_loss': 0.0, 'epoch': 10.0}) ``` ### Expected behavior How can I modify the code to resolve this issue?
07-17-2023 08:50:41
07-17-2023 08:50:41
Please use the [forums](https://discuss.huggingface.co/) to debug such issues in your code :-)
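Two things worth ruling out before heading to the forums, offered only as a debugging sketch: (1) make sure the data collator actually masks tokens and emits `labels` (otherwise there is nothing for the masked-LM loss to learn from), and (2) try one run with `fp16=False`, since an overflowing half-precision loss can show up as a flat 0.0 in the logs. The tokenizer below is a stand-in; the original script presumably uses a custom one with vocab_size=40000.

```python
from transformers import BertTokenizerFast, DataCollatorForLanguageModeling

# Stand-in tokenizer; replace with the tokenizer used to build train_dataset.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

encoding = tokenizer("a quick sanity check sentence", return_special_tokens_mask=True)
batch = data_collator([encoding])

print(batch.keys())                           # should include 'labels'
print(int((batch["labels"] != -100).sum()))   # masked positions that contribute to the loss
```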
transformers
24,849
open
unscale_() has already been called on this optimizer since the last update().
Hi all, I'm facing the error in the subject. I saw this problem have been already solved but I still have this. This is how I configured the parameters for the trainer. ``` trainer = transformers.Trainer( model=model, # model is decapoda-research/llama-7b-hf train_dataset=data["train"], args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, # 4 micro batch size gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, # 16 auto_find_batch_size=False, # set True to avoid unscale() problem warmup_steps=100, num_train_epochs=EPOCHS, #2 epochs learning_rate=LEARNING_RATE, # 3e-4 fp16=True, logging_steps=20, optim="adamw_torch", output_dir=NAME, save_total_limit=3, save_strategy="steps", save_steps=200, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) ``` The strange behaviour is that the problem raises after the end of the first epoch. ``` {'loss': 0.8378, 'learning_rate': 0.00016153846153846153, 'epoch': 0.99} 50%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Œ | 831/1660 [15:57<6:52:51, 29.88s/it] Traceback (most recent call last): File "/home/paco/dev/stambecco/train.py", line 138, in <module> trainer.train(resume_from_checkpoint=checkpoint_flag) File "/home/paco/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "/home/paco/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1850, in _inner_training_loop self.accelerator.clip_grad_norm_( File "/home/paco/.local/lib/python3.10/site-packages/accelerate/accelerator.py", line 1893, in clip_grad_norm_ self.unscale_gradients() File "/home/paco/.local/lib/python3.10/site-packages/accelerate/accelerator.py", line 1856, in unscale_gradients self.scaler.unscale_(opt) File "/home/paco/.local/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). 
50%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ | 831/1660 [16:27<16:24, 1.19s/it] ``` ### System Info The environment is WSL `Linux 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ` **pip list** ``` Package Version ------------------------ ------------- accelerate 0.20.3 aiohttp 3.8.4 aiosignal 1.3.1 async-timeout 4.0.2 attrs 23.1.0 bitsandbytes 0.39.1 blinker 1.4 certifi 2022.12.7 charset-normalizer 2.1.1 cmake 3.25.0 command-not-found 0.3 cryptography 3.4.8 datasets 2.13.0 dbus-python 1.2.18 dill 0.3.6 distro 1.7.0 distro-info 1.1build1 filelock 3.9.0 frozenlist 1.3.3 fsspec 2023.6.0 httplib2 0.20.2 huggingface-hub 0.15.1 idna 3.4 importlib-metadata 4.6.4 jeepney 0.7.1 Jinja2 3.1.2 keyring 23.5.0 launchpadlib 1.10.16 lazr.restfulclient 0.14.4 lazr.uri 1.0.6 lit 15.0.7 loralib 0.1.1 MarkupSafe 2.1.2 more-itertools 8.10.0 mpmath 1.2.1 multidict 6.0.4 multiprocess 0.70.14 netifaces 0.11.0 networkx 3.0 numpy 1.24.1 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.0 packaging 23.1 pandas 2.0.2 peft 0.4.0.dev0 Pillow 9.3.0 pip 22.0.2 psutil 5.9.5 pyarrow 12.0.1 PyGObject 3.42.1 PyJWT 2.3.0 pyparsing 2.4.7 python-apt 2.4.0+ubuntu1 python-dateutil 2.8.2 pytz 2023.3 PyYAML 5.4.1 regex 2023.6.3 requests 2.28.1 safetensors 0.3.1 scipy 1.10.1 SecretStorage 3.3.1 sentencepiece 0.1.99 setuptools 59.6.0 six 1.16.0 ssh-import-id 5.11 sympy 1.11.1 systemd-python 234 tokenizers 0.13.3 torch 2.0.1+cu117 torchaudio 2.0.2+cu117 torchvision 0.15.2+cu117 tqdm 4.65.0 transformers 4.31.0.dev0 triton 2.0.0 typing_extensions 4.4.0 tzdata 2023.3 ubuntu-advantage-tools 8001 ufw 0.36.1 unattended-upgrades 0.1 urllib3 1.26.13 wadllib 1.3.6 wheel 0.37.1 xxhash 3.2.0 yarl 1.9.2 zipp 1.0.0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction ``` tokenizer = LlamaTokenizer.from_pretrained( BASE_MODEL, add_eos_token=True ) model = prepare_model_for_int8_training(model) print("Preparing LoRA weights") config = LoraConfig( r=LORA_R, lora_alpha=LORA_ALPHA, target_modules=["q_proj", "v_proj"], lora_dropout=LORA_DROPOUT, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) tokenizer.pad_token_id = 0 # We want this to be different from the eos token if DATA_PATH.endswith(".json") or DATA_PATH.endswith(".jsonl"): data = load_dataset("json", data_files=DATA_PATH) else: data = load_dataset(DATA_PATH) # Functions tokenize() and generate_prompt() read the json file with the following format: # { # "instruction": "", # "input": "", # "output": "" # }, data = data.shuffle().map(lambda x: tokenize(generate_prompt(x))) model.print_trainable_parameters() trainer = transformers.Trainer( model=model, train_dataset=data["train"], args=transformers.TrainingArguments( per_device_train_batch_size=MICRO_BATCH_SIZE, gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS, auto_find_batch_size=False, warmup_steps=100, num_train_epochs=EPOCHS, learning_rate=LEARNING_RATE, fp16=True, logging_steps=20, optim="adamw_torch", output_dir=NAME, save_total_limit=3, save_strategy="steps", save_steps=200, ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) model.config.use_cache = False checkpoint_folder = os.path.join(os.getcwd(), NAME) # check if the checkpoint folder exists and is not empty checkpoint_flag = os.path.isdir(checkpoint_folder) and len(os.listdir(checkpoint_folder))> 0 print(f"Does a checkpoint folder exists? {checkpoint_flag}\n") trainer.train(resume_from_checkpoint=checkpoint_flag) model.save_pretrained(f"models/{NAME}") ``` ### Expected behavior Not raising the error and continue with the epoch #2
07-17-2023 08:01:30
07-17-2023 08:01:30
cc @muellerzr and @pacman100 <|||||>Hello @paxvinci, I am running following example and unable to reproduce the issue: Command: ``` cd transformers python examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir /tmp/test-clm --gradient_accumulation_steps 6 --overwrite_output_dir ``` output logs ``` [INFO|trainer.py:1686] 2023-07-17 15:47:49,578 >> ***** Running training ***** [INFO|trainer.py:1687] 2023-07-17 15:47:49,578 >> Num examples = 2,318 [INFO|trainer.py:1688] 2023-07-17 15:47:49,578 >> Num Epochs = 3 [INFO|trainer.py:1689] 2023-07-17 15:47:49,578 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1692] 2023-07-17 15:47:49,578 >> Total train batch size (w. parallel, distributed & accumulation) = 48 [INFO|trainer.py:1693] 2023-07-17 15:47:49,578 >> Gradient Accumulation steps = 6 [INFO|trainer.py:1694] 2023-07-17 15:47:49,578 >> Total optimization steps = 144 [INFO|trainer.py:1695] 2023-07-17 15:47:49,578 >> Number of trainable parameters = 124,439,808 [INFO|integrations.py:716] 2023-07-17 15:47:49,579 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true" wandb: Currently logged in as: smangrul. Use `wandb login --relogin` to force relogin wandb: Tracking run with wandb version 0.15.5 wandb: Run data is saved locally in /home/sourab/transformers/examples/pytorch/language-modeling/wandb/run-20230717_154750-20eekm9c wandb: Run `wandb offline` to turn off syncing. wandb: Syncing run usual-dragon-27 wandb: โญ๏ธ View project at https://wandb.ai/smangrul/huggingface wandb: ๐Ÿš€ View run at https://wandb.ai/smangrul/huggingface/runs/20eekm9c 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 144/144 [09:01<00:00, 3.76s/it][INFO|trainer.py:1934] 2023-07-17 15:56:56,376 >> Training completed. 
Do not forget to share your model on huggingface.co/models =) {'train_runtime': 546.7981, 'train_samples_per_second': 12.718, 'train_steps_per_second': 0.263, 'train_loss': 3.233305189344618, 'epoch': 2.98} 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 144/144 [09:01<00:00, 3.76s/it] [INFO|trainer.py:2807] 2023-07-17 15:56:56,378 >> Saving model checkpoint to /tmp/test-clm [INFO|configuration_utils.py:458] 2023-07-17 15:56:56,378 >> Configuration saved in /tmp/test-clm/config.json [INFO|configuration_utils.py:375] 2023-07-17 15:56:56,379 >> Configuration saved in /tmp/test-clm/generation_config.json [INFO|modeling_utils.py:1851] 2023-07-17 15:56:57,203 >> Model weights saved in /tmp/test-clm/pytorch_model.bin [INFO|tokenization_utils_base.py:2214] 2023-07-17 15:56:57,203 >> tokenizer config file saved in /tmp/test-clm/tokenizer_config.json [INFO|tokenization_utils_base.py:2221] 2023-07-17 15:56:57,204 >> Special tokens file saved in /tmp/test-clm/special_tokens_map.json ***** train metrics ***** epoch = 2.98 train_loss = 3.2333 train_runtime = 0:09:06.79 train_samples = 2318 train_samples_per_second = 12.718 train_steps_per_second = 0.263 07/17/2023 15:56:57 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:3081] 2023-07-17 15:56:57,284 >> ***** Running Evaluation ***** [INFO|trainer.py:3083] 2023-07-17 15:56:57,284 >> Num examples = 240 [INFO|trainer.py:3086] 2023-07-17 15:56:57,284 >> Batch size = 8 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 30/30 [00:07<00:00, 4.20it/s] ***** eval metrics ***** epoch = 2.98 eval_accuracy = 0.4212 eval_loss = 3.0811 eval_runtime = 0:00:07.36 eval_samples = 240 eval_samples_per_second = 32.588 eval_steps_per_second = 4.074 perplexity = 21.7826 wandb: Waiting for W&B process to finish... (success). wandb: \ 0.015 MB of 0.015 MB uploaded (0.000 MB deduped) wandb: Run history: wandb: eval/accuracy โ– wandb: eval/loss โ– wandb: eval/runtime โ– wandb: eval/samples_per_second โ– wandb: eval/steps_per_second โ– wandb: train/epoch โ–โ– wandb: train/global_step โ–โ– wandb: train/total_flos โ– wandb: train/train_loss โ– wandb: train/train_runtime โ– wandb: train/train_samples_per_second โ– wandb: train/train_steps_per_second โ– wandb: wandb: Run summary: wandb: eval/accuracy 0.42115 wandb: eval/loss 3.08111 wandb: eval/runtime 7.3646 wandb: eval/samples_per_second 32.588 wandb: eval/steps_per_second 4.074 wandb: train/epoch 2.98 wandb: train/global_step 144 wandb: train/total_flos 3610010714112000.0 wandb: train/train_loss 3.23331 wandb: train/train_runtime 546.7981 wandb: train/train_samples_per_second 12.718 wandb: train/train_steps_per_second 0.263 wandb: wandb: ๐Ÿš€ View run usual-dragon-27 at: https://wandb.ai/smangrul/huggingface/runs/20eekm9c wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s) ``` Using latest transformers and accelerate main branch<|||||>Please share a minimal reproducer so that we can deep dive if the issue still persists<|||||>I cannot share the json file due to confidential data. I reinstalled the last transformers and I restarted the train session. 
If I face the error again, I'll send an update.<|||||>Update: I downloaded the latest version of transformers via pip and started the training again. After a couple of problems due to BSOD I restarted the training from checkpoints but I still receive "**Can't find a valid checkpoint at**". There is a warning after the creation of the model: ``` The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'LLaMATokenizer'. The class this function is called from is 'LlamaTokenizer'. LLAMA Tokenizer created LlamaTokenizer(name_or_path='decapoda-research/llama-7b-hf', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("", rstrip=False, lstrip=False, single_word=False, normalized=True)}, clean_up_tokenization_spaces=False) ``` I tried to change from LlamaTokenizer to LLaMATokenizer but the class does not exist.
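Regarding the "Can't find a valid checkpoint at" error mentioned above, a small hedged sketch of how one might resolve the checkpoint path explicitly instead of passing a boolean flag; it assumes `NAME` and `trainer` are the objects from the reproduction script, and uses the `get_last_checkpoint` helper from `transformers.trainer_utils`.

```python
# Hedged sketch: resolve the most recent checkpoint-* folder explicitly before resuming.
# Assumption: NAME (the output_dir) and trainer come from the reproduction script above.
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = get_last_checkpoint(NAME)  # returns None if no checkpoint-* folder exists
print(f"Resuming from: {last_checkpoint}")
trainer.train(resume_from_checkpoint=last_checkpoint)  # None means start training from scratch
```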
transformers
24,848
closed
[DOCS] Example for `LogitsProcessor` class
# What does this PR do? Added a docstring to `RepetitionPenaltyLogitsProcessor` with some examples as well. @gante let me know if there's anything else I should add or remove from the docs. Fixes # (issue) #24783 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
07-17-2023 07:17:33
07-17-2023 07:17:33
You'll probably need to run `make fixup` before you next commit :)<|||||>Previously I was having some problems with `make fixup` but now it's done and reformatted, I really like how the Makefile is structured. <|||||>@sgugger note: the weird part of the diff seems innocuous ๐Ÿค” <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I have addressed all the suggested changes. <|||||>Thanks!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24848). All of your documentation changes will be reflected on that endpoint.
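For context, a short usage sketch of the class this PR documents; the model name and penalty value are illustrative and not taken from the PR itself.

```python
# Illustrative sketch only: applying RepetitionPenaltyLogitsProcessor during generation.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    RepetitionPenaltyLogitsProcessor,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("I like to", return_tensors="pt")
processors = LogitsProcessorList([RepetitionPenaltyLogitsProcessor(penalty=1.2)])  # >1.0 discourages repeats
outputs = model.generate(**inputs, logits_processor=processors, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```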
transformers
24,847
open
Trainer logs to wrong wandb project
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I've written my own `train.py` based on [sequence_classification](https://huggingface.co/docs/transformers/tasks/sequence_classification). I see the same issue with the official scripts. Before running, I initialise wandb logging (`wandb init`). This creates `wandb/settings` containing the project name I chose for my model: ``` [default] entity = my-username project = my-project base_url = https://api.wandb.ai ``` But the Trainer logs everything to the project `huggingface`, i.e. it's ignoring/overriding the project name I've configured. ### Expected behavior Don't override the configured wandb project with a default.
07-17-2023 02:57:45
07-17-2023 02:57:45
cc @muellerzr <|||||>This actually isn't possible currently; you need to set environment variables such as `export WANDB_PROJECT=my-project` <|||||>@david-waterworth You can use the os module to set the environment variable or use what @muellerzr suggested ``` import os os.environ["WANDB_PROJECT"] = "my-project" ```<|||||>Thanks @muellerzr but I'm not sure what you mean by not possible? It is possible in general; the steps I follow are: 1. Create a new project ``` bash mkdir my_project cd my_project python3 -m venv .venv source .venv/bin/activate pip install -U pip ``` 2. Initialise wandb ``` bash pip install wandb wandb init ``` In the last step I create a new project name (say `test`), which creates `wandb/settings` ``` [default] entity = my-user project = test base_url = https://api.wandb.ai ``` 3. Create a script ``` python import wandb wandb.init() # Don't pass a project name! print(wandb.run.project_name()) # correctly picked up setting from wandb/settings ``` I'm assuming that what the trainer does is check the env variable, and if it's not set, explicitly passes "huggingface" as the project - i.e. ``` wandb.init("huggingface") ``` As a workaround, I can parse the settings file myself and set the env variable
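A minimal sketch of the workaround mentioned at the end of the previous comment, assuming `wandb/settings` has the INI layout shown above; it simply copies the configured project into the `WANDB_PROJECT` environment variable that the Trainer's wandb integration reads.

```python
# Hedged workaround sketch: read wandb/settings and export WANDB_PROJECT before the Trainer runs.
import configparser
import os

config = configparser.ConfigParser()
config.read("wandb/settings")
if config.has_option("default", "project"):
    os.environ["WANDB_PROJECT"] = config.get("default", "project")
    print("WANDB_PROJECT set to", os.environ["WANDB_PROJECT"])
```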
transformers
24,846
open
bloom add_prefix_space= True
### System Info Hi, I use BloomTokenizerFast as a tokenizer and here is an issue (version 4.28.0): when I use BloomTokenizerFast, I find that add_prefix_space=True has no effect. Here is the code. `tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom",add_prefix_space = True) print(tokenizer.add_prefix_space) print(tokenizer("Hello world")["input_ids"]) print(transformers.__version__) True [59414, 8876] 4.28.0 ` Here is the other code. `from transformers import BloomTokenizerFast tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom") print(tokenizer("Hello world")["input_ids"]) [59414, 8876] ` I don't know why they encode to the same result. Please have a look! Thanks ### Who can help? @Arth ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction They should encode different results, since add_prefix_space=True. ### Expected behavior They should encode different results, since add_prefix_space=True.
07-17-2023 00:51:21
07-17-2023 00:51:21
cc @ArthurZucker and @younesbelkada <|||||>@younesbelkada @ArthurZucker <|||||>cc @ArthurZucker as he is more familiar than me regarding tokenizers<|||||>Hey! Thanks for opening this issue. This is half a `tokenizers` issue ( even if you save the tokenizer and modify the `tokenizer_config.json` to set `add_prefix_space=True` in the `pre_tokenizer` the outputs are the same) and half a transformers issue (setting `add_prefix_space=False` and then saving does not change the value saved!) Will try to fix it ๐Ÿ‘๐Ÿป
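A hedged inspection sketch related to the comment above: it dumps the backend pre-tokenizer state of the fast tokenizer so one can see whether the `add_prefix_space` flag ever reaches the `tokenizers` layer. The JSON field names are those used by the `tokenizers` library's serialized format.

```python
# Hedged sketch: inspect whether add_prefix_space reached the tokenizers backend.
import json
from transformers import BloomTokenizerFast

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom", add_prefix_space=True)
state = json.loads(tokenizer.backend_tokenizer.to_str())
print(state["pre_tokenizer"])                 # look for an 'add_prefix_space' entry here
print(tokenizer("Hello world")["input_ids"])  # compare against the default tokenizer's output
```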
transformers
24,845
closed
is_vision_available() fails when using pillow-simd
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @apbard - additional checks introduced in https://github.com/huggingface/transformers/pull/23163 are failing on my environment. This is because I'm using [pillow-simd](https://github.com/uploadcare/pillow-simd) and not the vanilla pillow package. Is this drop-in pillow replacement not supported or is it possible to introduce additional logic to support this? ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction First install pillow-simd (`pip install pillow-simd`) Run the following code: ``` import PIL print(PIL.__version__) import transformers print(transformers.utils.import_utils.is_vision_available()) ``` ### Expected behavior Previous versions (before 4.30.*) return `True`
07-16-2023 11:57:43
07-16-2023 11:57:43
Thanks for reporting! This might be because of the new way we test if packages are available. @ydshieh would you have some bandwidth to take a look at this?<|||||>Sure, will take a look today!
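A small sketch illustrating why a metadata-based availability check can miss pillow-simd even though the `PIL` module imports fine; this illustrates the suspected mechanism only, not the exact code used in `transformers`.

```python
# Hedged sketch: module import vs. distribution metadata for pillow-simd.
import importlib.metadata
import importlib.util

print(importlib.util.find_spec("PIL") is not None)  # True for both pillow and pillow-simd

for dist_name in ("Pillow", "pillow-simd"):
    try:
        print(dist_name, importlib.metadata.version(dist_name))
    except importlib.metadata.PackageNotFoundError:
        print(dist_name, "metadata not found")
```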
transformers
24,844
open
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' for multinode training
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: multinode distributed setup - Deepspeed version: 0.9.5 ### Who can help? @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction To reproduce, run the following with your `hostfile` and `deepspeed` config specified. I used the zero3 config [here](https://huggingface.co/docs/transformers/main_classes/deepspeed#zero3-config). ``` deepspeed --hostfile=$PBS_O_WORKDIR/hostfile $ROOT_DIR/transformers/examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --do_train \ --do_eval \ --output_dir ~/scratch/finetuned_models/debug \ --deepspeed "$ROOT_DIR/FastChat/fastchat/ds_config/ds_config_zero3.json" ``` I ran into the error `AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'`. Here's the logs: ``` x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/aarch64/lib/libcray-c++-rts.a')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cti/2.15.10/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/aarch64/lib')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/x86_64/share/nls/En/%N.cat')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/mpich/8.1.15/ofi/@PRGENV@/@PE_MPICH_GENCOMPS@/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('libfabric/1.11.0.4.125'), PosixPath('cray-pals/1.1.6'), PosixPath('craype-x86-rome'), 
PosixPath('craype-network-ofi'), PosixPath('cce/13.0.2'), PosixPath('cray-dsmml/0.2.2'), PosixPath('perftools-base/22.04.0'), PosixPath('cray-mpich/8.1.15'), PosixPath('craype/2.7.15'), PosixPath('PrgEnv-cray/8.3.3')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/cce/13.0.2/cce/aarch64/include/craylibs')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/gcc-cross-aarch64/8.1.0/aarch64')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/fftw/3.3.8.13/@PE_FFTW_DEFAULT_TARGET@/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/libsci/21.08.1.2/@PRGENV@/@PE_LIBSCI_DEFAULT_GENCOMPS@/@PE_LIBSCI_DEFAULT_TARGET@/lib/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/cray/pe/sma/11.5.3.beta/ofi/sma@PE_SMA_DEFAULT_DIR_DEFAULT64@/lib64/pkgconfig')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths... x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/clmgr/man'), PosixPath('/opt/cray/pe/man')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/modulefiles')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('libexec64/opts')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: CUDA SETUP: Highest compute capability among GPUs detected: 8.0 x1000c1s1b0n0: CUDA SETUP: Detected CUDA version 116 x1000c1s1b0n0: CUDA SETUP: Loading binary /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so... 
x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('() { eval `/opt/cray/pe/modules/3.2.11.6/bin/modulecmd bash $*`\n}')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')} x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: /home/users/industry/dso/lannliat/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)! x1000c1s1b0n0: warn(msg) x1000c1s1b0n0: [2023-07-16 11:23:41,585] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) x1000c1s1b0n0: [2023-07-16 11:23:41,586] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) x1000c0s6b0n0: [2023-07-16 11:23:44,841] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c0s6b0n0: [2023-07-16 11:23:44,841] [INFO] [comm.py:594:init_distributed] cdb=None x1000c0s6b0n0: [2023-07-16 11:23:44,841] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl x1000c0s6b0n0: [2023-07-16 11:23:44,842] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c0s6b0n0: [2023-07-16 11:23:44,842] [INFO] [comm.py:594:init_distributed] cdb=None x1000c1s1b0n0: [2023-07-16 11:23:46,769] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c1s1b0n0: [2023-07-16 11:23:46,770] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented x1000c1s1b0n0: [2023-07-16 11:23:46,770] [INFO] [comm.py:594:init_distributed] cdb=None x1000c1s1b0n0: [2023-07-16 11:23:46,770] [INFO] [comm.py:594:init_distributed] cdb=None x1000c0s6b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False x1000c0s6b0n0: 07/16/2023 11:23:47 - INFO - __main__ - Training/evaluation parameters TrainingArguments( x1000c0s6b0n0: _n_gpu=1, x1000c0s6b0n0: adafactor=False, x1000c0s6b0n0: adam_beta1=0.9, x1000c0s6b0n0: adam_beta2=0.999, x1000c0s6b0n0: adam_epsilon=1e-08, x1000c0s6b0n0: auto_find_batch_size=False, x1000c0s6b0n0: bf16=False, x1000c0s6b0n0: bf16_full_eval=False, x1000c0s6b0n0: data_seed=None, x1000c0s6b0n0: dataloader_drop_last=False, x1000c0s6b0n0: dataloader_num_workers=0, x1000c0s6b0n0: dataloader_pin_memory=True, x1000c0s6b0n0: ddp_backend=None, x1000c0s6b0n0: ddp_broadcast_buffers=None, x1000c0s6b0n0: ddp_bucket_cap_mb=None, x1000c0s6b0n0: ddp_find_unused_parameters=None, x1000c0s6b0n0: ddp_timeout=1800, x1000c0s6b0n0: debug=[], x1000c0s6b0n0: deepspeed=/home/users/industry/dso/lannliat/FastChat/fastchat/ds_config/ds_config_zero3.json, x1000c0s6b0n0: disable_tqdm=False, x1000c0s6b0n0: do_eval=True, x1000c0s6b0n0: do_predict=False, x1000c0s6b0n0: do_train=True, x1000c0s6b0n0: eval_accumulation_steps=None, x1000c0s6b0n0: eval_delay=0, x1000c0s6b0n0: eval_steps=None, x1000c0s6b0n0: evaluation_strategy=no, x1000c0s6b0n0: fp16=False, x1000c0s6b0n0: fp16_backend=auto, x1000c0s6b0n0: fp16_full_eval=False, x1000c0s6b0n0: 
fp16_opt_level=O1, x1000c0s6b0n0: fsdp=[], x1000c0s6b0n0: fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, x1000c0s6b0n0: fsdp_min_num_params=0, x1000c0s6b0n0: fsdp_transformer_layer_cls_to_wrap=None, x1000c0s6b0n0: full_determinism=False, x1000c0s6b0n0: gradient_accumulation_steps=1, x1000c0s6b0n0: gradient_checkpointing=False, x1000c0s6b0n0: greater_is_better=None, x1000c0s6b0n0: group_by_length=False, x1000c0s6b0n0: half_precision_backend=auto, x1000c0s6b0n0: hub_model_id=None, x1000c0s6b0n0: hub_private_repo=False, x1000c0s6b0n0: hub_strategy=every_save, x1000c0s6b0n0: hub_token=<HUB_TOKEN>, x1000c0s6b0n0: ignore_data_skip=False, x1000c0s6b0n0: include_inputs_for_metrics=False, x1000c0s6b0n0: jit_mode_eval=False, x1000c0s6b0n0: label_names=None, x1000c0s6b0n0: label_smoothing_factor=0.0, x1000c0s6b0n0: learning_rate=5e-05, x1000c0s6b0n0: length_column_name=length, x1000c0s6b0n0: load_best_model_at_end=False, x1000c0s6b0n0: local_rank=0, x1000c0s6b0n0: log_level=passive, x1000c0s6b0n0: log_level_replica=warning, x1000c0s6b0n0: log_on_each_node=True, x1000c0s6b0n0: logging_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug/runs/Jul16_11-23-44_x1000c0s6b0n0, x1000c0s6b0n0: logging_first_step=False, x1000c0s6b0n0: logging_nan_inf_filter=True, x1000c0s6b0n0: logging_steps=500, x1000c0s6b0n0: logging_strategy=steps, x1000c0s6b0n0: lr_scheduler_type=linear, x1000c0s6b0n0: max_grad_norm=1.0, x1000c0s6b0n0: max_steps=-1, x1000c0s6b0n0: metric_for_best_model=None, x1000c0s6b0n0: mp_parameters=, x1000c0s6b0n0: no_cuda=False, x1000c0s6b0n0: num_train_epochs=3.0, x1000c0s6b0n0: optim=adamw_hf, x1000c0s6b0n0: optim_args=None, x1000c0s6b0n0: output_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c0s6b0n0: overwrite_output_dir=False, x1000c0s6b0n0: past_index=-1, x1000c0s6b0n0: per_device_eval_batch_size=8, x1000c0s6b0n0: per_device_train_batch_size=8, x1000c0s6b0n0: prediction_loss_only=False, x1000c0s6b0n0: push_to_hub=False, x1000c0s6b0n0: push_to_hub_model_id=None, x1000c0s6b0n0: push_to_hub_organization=None, x1000c0s6b0n0: push_to_hub_token=<PUSH_TO_HUB_TOKEN>, x1000c0s6b0n0: ray_scope=last, x1000c0s6b0n0: remove_unused_columns=True, x1000c0s6b0n0: report_to=['wandb'], x1000c0s6b0n0: resume_from_checkpoint=None, x1000c0s6b0n0: run_name=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c0s6b0n0: save_on_each_node=False, x1000c0s6b0n0: save_safetensors=False, x1000c0s6b0n0: save_steps=500, x1000c0s6b0n0: save_strategy=steps, x1000c0s6b0n0: save_total_limit=None, x1000c0s6b0n0: seed=42, x1000c0s6b0n0: sharded_ddp=[], x1000c0s6b0n0: skip_memory_metrics=True, x1000c0s6b0n0: tf32=None, x1000c0s6b0n0: torch_compile=False, x1000c0s6b0n0: torch_compile_backend=None, x1000c0s6b0n0: torch_compile_mode=None, x1000c0s6b0n0: torchdynamo=None, x1000c0s6b0n0: tpu_metrics_debug=False, x1000c0s6b0n0: tpu_num_cores=None, x1000c0s6b0n0: use_ipex=False, x1000c0s6b0n0: use_legacy_prediction_loop=False, x1000c0s6b0n0: use_mps_device=False, x1000c0s6b0n0: warmup_ratio=0.0, x1000c0s6b0n0: warmup_steps=0, x1000c0s6b0n0: weight_decay=0.0, x1000c0s6b0n0: xpu_backend=None, x1000c0s6b0n0: ) x1000c0s6b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1distributed training: True, 16-bits training: False x1000c1s1b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False x1000c1s1b0n0: 07/16/2023 11:23:47 - 
INFO - __main__ - Training/evaluation parameters TrainingArguments( x1000c1s1b0n0: _n_gpu=1, x1000c1s1b0n0: adafactor=False, x1000c1s1b0n0: adam_beta1=0.9, x1000c1s1b0n0: adam_beta2=0.999, x1000c1s1b0n0: adam_epsilon=1e-08, x1000c1s1b0n0: auto_find_batch_size=False, x1000c1s1b0n0: bf16=False, x1000c1s1b0n0: bf16_full_eval=False, x1000c1s1b0n0: data_seed=None, x1000c1s1b0n0: dataloader_drop_last=False, x1000c1s1b0n0: dataloader_num_workers=0, x1000c1s1b0n0: dataloader_pin_memory=True, x1000c1s1b0n0: ddp_backend=None, x1000c1s1b0n0: ddp_broadcast_buffers=None, x1000c1s1b0n0: ddp_bucket_cap_mb=None, x1000c1s1b0n0: ddp_find_unused_parameters=None, x1000c1s1b0n0: ddp_timeout=1800, x1000c1s1b0n0: debug=[], x1000c1s1b0n0: deepspeed=/home/users/industry/dso/lannliat/FastChat/fastchat/ds_config/ds_config_zero3.json, x1000c1s1b0n0: disable_tqdm=False, x1000c1s1b0n0: do_eval=True, x1000c1s1b0n0: do_predict=False, x1000c1s1b0n0: do_train=True, x1000c1s1b0n0: eval_accumulation_steps=None, x1000c1s1b0n0: eval_delay=0, x1000c1s1b0n0: eval_steps=None, x1000c1s1b0n0: evaluation_strategy=no, x1000c1s1b0n0: fp16=False, x1000c1s1b0n0: fp16_backend=auto, x1000c1s1b0n0: fp16_full_eval=False, x1000c1s1b0n0: fp16_opt_level=O1, x1000c1s1b0n0: fsdp=[], x1000c1s1b0n0: fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, x1000c1s1b0n0: fsdp_min_num_params=0, x1000c1s1b0n0: fsdp_transformer_layer_cls_to_wrap=None, x1000c1s1b0n0: full_determinism=False, x1000c1s1b0n0: gradient_accumulation_steps=1, x1000c1s1b0n0: gradient_checkpointing=False, x1000c1s1b0n0: greater_is_better=None, x1000c1s1b0n0: group_by_length=False, x1000c1s1b0n0: half_precision_backend=auto, x1000c1s1b0n0: hub_model_id=None, x1000c1s1b0n0: hub_private_repo=False, x1000c1s1b0n0: hub_strategy=every_save, x1000c1s1b0n0: hub_token=<HUB_TOKEN>, x1000c1s1b0n0: ignore_data_skip=False, x1000c1s1b0n0: include_inputs_for_metrics=False, x1000c1s1b0n0: jit_mode_eval=False, x1000c1s1b0n0: label_names=None, x1000c1s1b0n0: label_smoothing_factor=0.0, x1000c1s1b0n0: learning_rate=5e-05, x1000c1s1b0n0: length_column_name=length, x1000c1s1b0n0: load_best_model_at_end=False, x1000c1s1b0n0: local_rank=0, x1000c1s1b0n0: log_level=passive, x1000c1s1b0n0: log_level_replica=warning, x1000c1s1b0n0: log_on_each_node=True, x1000c1s1b0n0: logging_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug/runs/Jul16_11-23-46_x1000c1s1b0n0, x1000c1s1b0n0: logging_first_step=False, x1000c1s1b0n0: logging_nan_inf_filter=True, x1000c1s1b0n0: logging_steps=500, x1000c1s1b0n0: logging_strategy=steps, x1000c1s1b0n0: lr_scheduler_type=linear, x1000c1s1b0n0: max_grad_norm=1.0, x1000c1s1b0n0: max_steps=-1, x1000c1s1b0n0: metric_for_best_model=None, x1000c1s1b0n0: mp_parameters=, x1000c1s1b0n0: no_cuda=False, x1000c1s1b0n0: num_train_epochs=3.0, x1000c1s1b0n0: optim=adamw_hf, x1000c1s1b0n0: optim_args=None, x1000c1s1b0n0: output_dir=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c1s1b0n0: overwrite_output_dir=False, x1000c1s1b0n0: past_index=-1, x1000c1s1b0n0: per_device_eval_batch_size=8, x1000c1s1b0n0: per_device_train_batch_size=8, x1000c1s1b0n0: prediction_loss_only=False, x1000c1s1b0n0: push_to_hub=False, x1000c1s1b0n0: push_to_hub_model_id=None, x1000c1s1b0n0: push_to_hub_organization=None, x1000c1s1b0n0: push_to_hub_token=<PUSH_TO_HUB_TOKEN>, x1000c1s1b0n0: ray_scope=last, x1000c1s1b0n0: remove_unused_columns=True, x1000c1s1b0n0: report_to=['wandb'], x1000c1s1b0n0: resume_from_checkpoint=None, x1000c1s1b0n0: 
run_name=/home/users/industry/dso/lannliat/scratch/finetuned_models/debug, x1000c1s1b0n0: save_on_each_node=False, x1000c1s1b0n0: save_safetensors=False, x1000c1s1b0n0: save_steps=500, x1000c1s1b0n0: save_strategy=steps, x1000c1s1b0n0: save_total_limit=None, x1000c1s1b0n0: seed=42, x1000c1s1b0n0: sharded_ddp=[], x1000c1s1b0n0: skip_memory_metrics=True, x1000c1s1b0n0: tf32=None, x1000c1s1b0n0: torch_compile=False, x1000c1s1b0n0: torch_compile_backend=None, x1000c1s1b0n0: torch_compile_mode=None, x1000c1s1b0n0: torchdynamo=None, x1000c1s1b0n0: tpu_metrics_debug=False, x1000c1s1b0n0: tpu_num_cores=None, x1000c1s1b0n0: use_ipex=False, x1000c1s1b0n0: use_legacy_prediction_loop=False, x1000c1s1b0n0: use_mps_device=False, x1000c1s1b0n0: warmup_ratio=0.0, x1000c1s1b0n0: warmup_steps=0, x1000c1s1b0n0: weight_decay=0.0, x1000c1s1b0n0: xpu_backend=None, x1000c1s1b0n0: ) x1000c1s1b0n0: 07/16/2023 11:23:47 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1distributed training: True, 16-bits training: False x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.info - Loading Dataset Infos from /home/users/industry/dso/lannliat/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists. x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c1s1b0n0: 07/16/2023 11:23:49 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) x1000c1s1b0n0: 07/16/2023 11:23:49 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 272.09it/s] x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.info - Loading Dataset Infos from /home/users/industry/dso/lannliat/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists. 
x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 x1000c0s6b0n0: 07/16/2023 11:23:50 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 194.61it/s] x1000c0s6b0n0: 07/16/2023 11:23:50 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) x1000c0s6b0n0: 07/16/2023 11:23:50 - INFO - datasets.info - Loading Dataset info from /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 811.12it/s] x1000c1s1b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:50,641 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c1s1b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:50,641 >> Model config GPT2Config { x1000c1s1b0n0: "_name_or_path": "gpt2", x1000c1s1b0n0: "activation_function": "gelu_new", x1000c1s1b0n0: "architectures": [ x1000c1s1b0n0: "GPT2LMHeadModel" x1000c1s1b0n0: ], x1000c1s1b0n0: "attn_pdrop": 0.1, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "embd_pdrop": 0.1, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "initializer_range": 0.02, x1000c1s1b0n0: "layer_norm_epsilon": 1e-05, x1000c1s1b0n0: "model_type": "gpt2", x1000c1s1b0n0: "n_ctx": 1024, x1000c1s1b0n0: "n_embd": 768, x1000c1s1b0n0: "n_head": 12, x1000c1s1b0n0: "n_inner": null, x1000c1s1b0n0: "n_layer": 12, x1000c1s1b0n0: "n_positions": 1024, x1000c1s1b0n0: "reorder_and_upcast_attn": false, x1000c1s1b0n0: "resid_pdrop": 0.1, x1000c1s1b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c1s1b0n0: "scale_attn_weights": true, x1000c1s1b0n0: "summary_activation": null, x1000c1s1b0n0: "summary_first_dropout": 0.1, x1000c1s1b0n0: "summary_proj_to_labels": true, x1000c1s1b0n0: "summary_type": "cls_index", x1000c1s1b0n0: "summary_use_proj": true, x1000c1s1b0n0: "task_specific_params": { x1000c1s1b0n0: "text-generation": { x1000c1s1b0n0: "do_sample": true, x1000c1s1b0n0: "max_length": 50 x1000c1s1b0n0: } x1000c1s1b0n0: }, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0", x1000c1s1b0n0: "use_cache": true, x1000c1s1b0n0: "vocab_size": 50257 x1000c1s1b0n0: } x1000c1s1b0n0: x1000c0s6b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:50,647 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c0s6b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:50,648 >> Model config GPT2Config { x1000c0s6b0n0: "_name_or_path": "gpt2", x1000c0s6b0n0: "activation_function": "gelu_new", x1000c0s6b0n0: "architectures": [ x1000c0s6b0n0: "GPT2LMHeadModel" x1000c0s6b0n0: ], x1000c0s6b0n0: "attn_pdrop": 0.1, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "embd_pdrop": 0.1, x1000c0s6b0n0: "eos_token_id": 50256, 
x1000c0s6b0n0: "initializer_range": 0.02, x1000c0s6b0n0: "layer_norm_epsilon": 1e-05, x1000c0s6b0n0: "model_type": "gpt2", x1000c0s6b0n0: "n_ctx": 1024, x1000c0s6b0n0: "n_embd": 768, x1000c0s6b0n0: "n_head": 12, x1000c0s6b0n0: "n_inner": null, x1000c0s6b0n0: "n_layer": 12, x1000c0s6b0n0: "n_positions": 1024, x1000c0s6b0n0: "reorder_and_upcast_attn": false, x1000c0s6b0n0: "resid_pdrop": 0.1, x1000c0s6b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c0s6b0n0: "scale_attn_weights": true, x1000c0s6b0n0: "summary_activation": null, x1000c0s6b0n0: "summary_first_dropout": 0.1, x1000c0s6b0n0: "summary_proj_to_labels": true, x1000c0s6b0n0: "summary_type": "cls_index", x1000c0s6b0n0: "summary_use_proj": true, x1000c0s6b0n0: "task_specific_params": { x1000c0s6b0n0: "text-generation": { x1000c0s6b0n0: "do_sample": true, x1000c0s6b0n0: "max_length": 50 x1000c0s6b0n0: } x1000c0s6b0n0: }, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0", x1000c0s6b0n0: "use_cache": true, x1000c0s6b0n0: "vocab_size": 50257 x1000c0s6b0n0: } x1000c0s6b0n0: x1000c1s1b0n0: 07/16/2023 11:23:50 - WARNING - datasets.builder - Found cached dataset wikitext (/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126) 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 1171.27it/s] x1000c0s6b0n0: [INFO|tokenization_auto.py:512] 2023-07-16 11:23:50,890 >> Could not locate the tokenizer configuration file, will try to use the model config instead. x1000c0s6b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:51,134 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c0s6b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:51,135 >> Model config GPT2Config { x1000c0s6b0n0: "_name_or_path": "gpt2", x1000c0s6b0n0: "activation_function": "gelu_new", x1000c0s6b0n0: "architectures": [ x1000c0s6b0n0: "GPT2LMHeadModel" x1000c0s6b0n0: ], x1000c0s6b0n0: "attn_pdrop": 0.1, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "embd_pdrop": 0.1, x1000c0s6b0n0: "eos_token_id": 50256, x1000c0s6b0n0: "initializer_range": 0.02, x1000c0s6b0n0: "layer_norm_epsilon": 1e-05, x1000c0s6b0n0: "model_type": "gpt2", x1000c0s6b0n0: "n_ctx": 1024, x1000c0s6b0n0: "n_embd": 768, x1000c0s6b0n0: "n_head": 12, x1000c0s6b0n0: "n_inner": null, x1000c0s6b0n0: "n_layer": 12, x1000c0s6b0n0: "n_positions": 1024, x1000c0s6b0n0: "reorder_and_upcast_attn": false, x1000c0s6b0n0: "resid_pdrop": 0.1, x1000c0s6b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c0s6b0n0: "scale_attn_weights": true, x1000c0s6b0n0: "summary_activation": null, x1000c0s6b0n0: "summary_first_dropout": 0.1, x1000c0s6b0n0: "summary_proj_to_labels": true, x1000c0s6b0n0: "summary_type": "cls_index", x1000c0s6b0n0: "summary_use_proj": true, x1000c0s6b0n0: "task_specific_params": { x1000c0s6b0n0: "text-generation": { x1000c0s6b0n0: "do_sample": true, x1000c0s6b0n0: "max_length": 50 x1000c0s6b0n0: } x1000c0s6b0n0: }, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0", x1000c0s6b0n0: "use_cache": true, x1000c0s6b0n0: "vocab_size": 50257 x1000c0s6b0n0: } x1000c0s6b0n0: x1000c1s1b0n0: [INFO|tokenization_auto.py:512] 2023-07-16 11:23:51,338 >> Could not locate the tokenizer configuration file, will try to use the model config instead. 
x1000c1s1b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:51,581 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c1s1b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:51,582 >> Model config GPT2Config { x1000c1s1b0n0: "_name_or_path": "gpt2", x1000c1s1b0n0: "activation_function": "gelu_new", x1000c1s1b0n0: "architectures": [ x1000c1s1b0n0: "GPT2LMHeadModel" x1000c1s1b0n0: ], x1000c1s1b0n0: "attn_pdrop": 0.1, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "embd_pdrop": 0.1, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "initializer_range": 0.02, x1000c1s1b0n0: "layer_norm_epsilon": 1e-05, x1000c1s1b0n0: "model_type": "gpt2", x1000c1s1b0n0: "n_ctx": 1024, x1000c1s1b0n0: "n_embd": 768, x1000c1s1b0n0: "n_head": 12, x1000c1s1b0n0: "n_inner": null, x1000c1s1b0n0: "n_layer": 12, x1000c1s1b0n0: "n_positions": 1024, x1000c1s1b0n0: "reorder_and_upcast_attn": false, x1000c1s1b0n0: "resid_pdrop": 0.1, x1000c1s1b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c1s1b0n0: "scale_attn_weights": true, x1000c1s1b0n0: "summary_activation": null, x1000c1s1b0n0: "summary_first_dropout": 0.1, x1000c1s1b0n0: "summary_proj_to_labels": true, x1000c1s1b0n0: "summary_type": "cls_index", x1000c1s1b0n0: "summary_use_proj": true, x1000c1s1b0n0: "task_specific_params": { x1000c1s1b0n0: "text-generation": { x1000c1s1b0n0: "do_sample": true, x1000c1s1b0n0: "max_length": 50 x1000c1s1b0n0: } x1000c1s1b0n0: }, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0", x1000c1s1b0n0: "use_cache": true, x1000c1s1b0n0: "vocab_size": 50257 x1000c1s1b0n0: } x1000c1s1b0n0: x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file vocab.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/vocab.json x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file merges.txt from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/merges.txt x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file tokenizer.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/tokenizer.json x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file added_tokens.json from cache at None x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file special_tokens_map.json from cache at None x1000c0s6b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,137 >> loading file tokenizer_config.json from cache at None x1000c0s6b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:52,139 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c0s6b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:52,140 >> Model config GPT2Config { x1000c0s6b0n0: "_name_or_path": "gpt2", x1000c0s6b0n0: "activation_function": "gelu_new", x1000c0s6b0n0: "architectures": [ x1000c0s6b0n0: "GPT2LMHeadModel" x1000c0s6b0n0: ], x1000c0s6b0n0: "attn_pdrop": 0.1, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "embd_pdrop": 0.1, x1000c0s6b0n0: 
"eos_token_id": 50256, x1000c0s6b0n0: "initializer_range": 0.02, x1000c0s6b0n0: "layer_norm_epsilon": 1e-05, x1000c0s6b0n0: "model_type": "gpt2", x1000c0s6b0n0: "n_ctx": 1024, x1000c0s6b0n0: "n_embd": 768, x1000c0s6b0n0: "n_head": 12, x1000c0s6b0n0: "n_inner": null, x1000c0s6b0n0: "n_layer": 12, x1000c0s6b0n0: "n_positions": 1024, x1000c0s6b0n0: "reorder_and_upcast_attn": false, x1000c0s6b0n0: "resid_pdrop": 0.1, x1000c0s6b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c0s6b0n0: "scale_attn_weights": true, x1000c0s6b0n0: "summary_activation": null, x1000c0s6b0n0: "summary_first_dropout": 0.1, x1000c0s6b0n0: "summary_proj_to_labels": true, x1000c0s6b0n0: "summary_type": "cls_index", x1000c0s6b0n0: "summary_use_proj": true, x1000c0s6b0n0: "task_specific_params": { x1000c0s6b0n0: "text-generation": { x1000c0s6b0n0: "do_sample": true, x1000c0s6b0n0: "max_length": 50 x1000c0s6b0n0: } x1000c0s6b0n0: }, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0", x1000c0s6b0n0: "use_cache": true, x1000c0s6b0n0: "vocab_size": 50257 x1000c0s6b0n0: } x1000c0s6b0n0: x1000c0s6b0n0: [INFO|modeling_utils.py:2603] 2023-07-16 11:23:52,218 >> loading weights file model.safetensors from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file vocab.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/vocab.json x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file merges.txt from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/merges.txt x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file tokenizer.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/tokenizer.json x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file added_tokens.json from cache at None x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file special_tokens_map.json from cache at None x1000c1s1b0n0: [INFO|tokenization_utils_base.py:1843] 2023-07-16 11:23:52,571 >> loading file tokenizer_config.json from cache at None x1000c1s1b0n0: [INFO|configuration_utils.py:712] 2023-07-16 11:23:52,573 >> loading configuration file config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json x1000c1s1b0n0: [INFO|configuration_utils.py:768] 2023-07-16 11:23:52,574 >> Model config GPT2Config { x1000c1s1b0n0: "_name_or_path": "gpt2", x1000c1s1b0n0: "activation_function": "gelu_new", x1000c1s1b0n0: "architectures": [ x1000c1s1b0n0: "GPT2LMHeadModel" x1000c1s1b0n0: ], x1000c1s1b0n0: "attn_pdrop": 0.1, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "embd_pdrop": 0.1, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "initializer_range": 0.02, x1000c1s1b0n0: "layer_norm_epsilon": 1e-05, x1000c1s1b0n0: "model_type": "gpt2", x1000c1s1b0n0: "n_ctx": 1024, x1000c1s1b0n0: "n_embd": 768, x1000c1s1b0n0: "n_head": 12, x1000c1s1b0n0: "n_inner": null, x1000c1s1b0n0: "n_layer": 12, x1000c1s1b0n0: "n_positions": 1024, x1000c1s1b0n0: "reorder_and_upcast_attn": false, x1000c1s1b0n0: 
"resid_pdrop": 0.1, x1000c1s1b0n0: "scale_attn_by_inverse_layer_idx": false, x1000c1s1b0n0: "scale_attn_weights": true, x1000c1s1b0n0: "summary_activation": null, x1000c1s1b0n0: "summary_first_dropout": 0.1, x1000c1s1b0n0: "summary_proj_to_labels": true, x1000c1s1b0n0: "summary_type": "cls_index", x1000c1s1b0n0: "summary_use_proj": true, x1000c1s1b0n0: "task_specific_params": { x1000c1s1b0n0: "text-generation": { x1000c1s1b0n0: "do_sample": true, x1000c1s1b0n0: "max_length": 50 x1000c1s1b0n0: } x1000c1s1b0n0: }, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0", x1000c1s1b0n0: "use_cache": true, x1000c1s1b0n0: "vocab_size": 50257 x1000c1s1b0n0: } x1000c1s1b0n0: x1000c1s1b0n0: [INFO|modeling_utils.py:2603] 2023-07-16 11:23:52,649 >> loading weights file model.safetensors from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors x1000c1s1b0n0: [INFO|modeling_utils.py:2694] 2023-07-16 11:23:52,754 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model x1000c0s6b0n0: [INFO|modeling_utils.py:2694] 2023-07-16 11:23:52,754 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model x1000c1s1b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:23:52,757 >> Generate config GenerationConfig { x1000c1s1b0n0: "_from_model_config": true, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0" x1000c1s1b0n0: } x1000c1s1b0n0: x1000c0s6b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:23:52,759 >> Generate config GenerationConfig { x1000c0s6b0n0: "_from_model_config": true, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "eos_token_id": 50256, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0" x1000c0s6b0n0: } x1000c0s6b0n0: x1000c0s6b0n0: [2023-07-16 11:23:58,743] [INFO] [partition_parameters.py:453:__exit__] finished initializing model with 0.16B parameters x1000c1s1b0n0: [INFO|modeling_utils.py:3329] 2023-07-16 11:23:59,764 >> All model checkpoint weights were used when initializing GPT2LMHeadModel. x1000c1s1b0n0: x1000c0s6b0n0: [INFO|modeling_utils.py:3329] 2023-07-16 11:23:59,764 >> All model checkpoint weights were used when initializing GPT2LMHeadModel. x1000c1s1b0n0: [INFO|modeling_utils.py:3337] 2023-07-16 11:23:59,764 >> All the weights of GPT2LMHeadModel were initialized from the model checkpoint at gpt2. x1000c0s6b0n0: x1000c1s1b0n0: If your task is similar to the task the model of the checkpoint was trained on, you can already use GPT2LMHeadModel for predictions without further training. x1000c0s6b0n0: [INFO|modeling_utils.py:3337] 2023-07-16 11:23:59,764 >> All the weights of GPT2LMHeadModel were initialized from the model checkpoint at gpt2. x1000c0s6b0n0: If your task is similar to the task the model of the checkpoint was trained on, you can already use GPT2LMHeadModel for predictions without further training. 
x1000c0s6b0n0: [INFO|configuration_utils.py:561] 2023-07-16 11:24:00,009 >> loading configuration file generation_config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/generation_config.json x1000c0s6b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:24:00,009 >> Generate config GenerationConfig { x1000c0s6b0n0: "_from_model_config": true, x1000c0s6b0n0: "bos_token_id": 50256, x1000c0s6b0n0: "eos_token_id": 50256, x1000c0s6b0n0: "transformers_version": "4.31.0.dev0" x1000c0s6b0n0: } x1000c0s6b0n0: x1000c1s1b0n0: [INFO|configuration_utils.py:561] 2023-07-16 11:24:00,009 >> loading configuration file generation_config.json from cache at /home/users/industry/dso/lannliat/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/generation_config.json x1000c1s1b0n0: [INFO|configuration_utils.py:599] 2023-07-16 11:24:00,010 >> Generate config GenerationConfig { x1000c1s1b0n0: "_from_model_config": true, x1000c1s1b0n0: "bos_token_id": 50256, x1000c1s1b0n0: "eos_token_id": 50256, x1000c1s1b0n0: "transformers_version": "4.31.0.dev0" x1000c1s1b0n0: } x1000c1s1b0n0: x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at 
/home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-44ec8a7bce9ef049.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-721cda7e77511ffc.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-49f826bee8b8a100.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed 
dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c87119ecc69384c8.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-acd38b65189dc44f.arrow x1000c1s1b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c0s6b0n0: 07/16/2023 11:24:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/users/industry/dso/lannliat/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-c1cf316f13c4acf5.arrow x1000c0s6b0n0: [2023-07-16 11:24:01,734] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.5, git-hash=unknown, git-branch=unknown x1000c0s6b0n0: [2023-07-16 11:24:02,277] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False x1000c0s6b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled! x1000c0s6b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root... x1000c1s1b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled! x1000c1s1b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root... x1000c1s1b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled! x1000c1s1b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root... 
x1000c1s1b0n0: Traceback (most recent call last): x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 634, in <module> x1000c1s1b0n0: main() x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 582, in main x1000c1s1b0n0: train_result = trainer.train(resume_from_checkpoint=checkpoint) x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1539, in train x1000c1s1b0n0: return inner_training_loop( x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1659, in _inner_training_loop x1000c1s1b0n0: model, self.optimizer, self.lr_scheduler = self.accelerator.prepare( x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198, in prepare x1000c1s1b0n0: result = self._prepare_deepspeed(*args) x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537, in _prepare_deepspeed x1000c1s1b0n0: engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/__init__.py", line 165, in initialize x1000c1s1b0n0: engine = DeepSpeedEngine(args=args, x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 309, in __init__ x1000c1s1b0n0: self._configure_optimizer(optimizer, model_parameters) x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1173, in _configure_optimizer x1000c1s1b0n0: basic_optimizer = self._configure_basic_optimizer(model_parameters) x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1229, in _configure_basic_optimizer x1000c1s1b0n0: optimizer = DeepSpeedCPUAdam(model_parameters, x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__ x1000c1s1b0n0: self.ds_opt_adam = CPUAdamBuilder().load() x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load x1000c1s1b0n0: return self.jit_load(verbose) x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load x1000c1s1b0n0: op_module = load(name=self.name, x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load x1000c1s1b0n0: return _jit_compile( x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile x1000c1s1b0n0: _write_ninja_file_and_build_library( x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1592, in _write_ninja_file_and_build_library x1000c1s1b0n0: verify_ninja_availability() x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 
1648, in verify_ninja_availability x1000c1s1b0n0: raise RuntimeError("Ninja is required to load C++ extensions") x1000c1s1b0n0: RuntimeError: Ninja is required to load C++ extensions x1000c0s6b0n0: Traceback (most recent call last): x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 634, in <module> x1000c0s6b0n0: main() x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 582, in main x1000c0s6b0n0: train_result = trainer.train(resume_from_checkpoint=checkpoint) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1539, in train x1000c0s6b0n0: return inner_training_loop( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1659, in _inner_training_loop x1000c0s6b0n0: model, self.optimizer, self.lr_scheduler = self.accelerator.prepare( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198, in prepare x1000c0s6b0n0: result = self._prepare_deepspeed(*args) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537, in _prepare_deepspeed x1000c0s6b0n0: engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/__init__.py", line 165, in initialize x1000c0s6b0n0: engine = DeepSpeedEngine(args=args, x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 309, in __init__ x1000c0s6b0n0: self._configure_optimizer(optimizer, model_parameters) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1173, in _configure_optimizer x1000c0s6b0n0: basic_optimizer = self._configure_basic_optimizer(model_parameters) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1229, in _configure_basic_optimizer x1000c0s6b0n0: optimizer = DeepSpeedCPUAdam(model_parameters, x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__ x1000c0s6b0n0: self.ds_opt_adam = CPUAdamBuilder().load() x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load x1000c0s6b0n0: return self.jit_load(verbose) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load x1000c0s6b0n0: op_module = load(name=self.name, x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load x1000c0s6b0n0: return _jit_compile( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile x1000c0s6b0n0: _write_ninja_file_and_build_library( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1592, in 
_write_ninja_file_and_build_library x1000c0s6b0n0: verify_ninja_availability() x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1648, in verify_ninja_availability x1000c0s6b0n0: raise RuntimeError("Ninja is required to load C++ extensions") x1000c0s6b0n0: RuntimeError: Ninja is required to load C++ extensions x1000c1s1b0n0: Loading extension module cpu_adam... x1000c1s1b0n0: Time to load cpu_adam op: 0.502467155456543 seconds x1000c1s1b0n0: Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f42085dca60> x1000c1s1b0n0: Traceback (most recent call last): x1000c1s1b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__ x1000c1s1b0n0: self.ds_opt_adam.destroy_adam(self.opt_id) x1000c1s1b0n0: AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' x1000c0s6b0n0: [WARNING] cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled! x1000c0s6b0n0: Using /home/users/industry/dso/lannliat/.cache/torch_extensions/py310_cu116 as PyTorch extensions root... x1000c0s6b0n0: Traceback (most recent call last): x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 634, in <module> x1000c0s6b0n0: main() x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/examples/pytorch/language-modeling/run_clm.py", line 582, in main x1000c0s6b0n0: train_result = trainer.train(resume_from_checkpoint=checkpoint) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1539, in train x1000c0s6b0n0: return inner_training_loop( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/transformers/src/transformers/trainer.py", line 1659, in _inner_training_loop x1000c0s6b0n0: model, self.optimizer, self.lr_scheduler = self.accelerator.prepare( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198, in prepare x1000c0s6b0n0: result = self._prepare_deepspeed(*args) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537, in _prepare_deepspeed x1000c0s6b0n0: engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/__init__.py", line 165, in initialize x1000c0s6b0n0: engine = DeepSpeedEngine(args=args, x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 309, in __init__ x1000c0s6b0n0: self._configure_optimizer(optimizer, model_parameters) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1173, in _configure_optimizer x1000c0s6b0n0: basic_optimizer = self._configure_basic_optimizer(model_parameters) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1229, in _configure_basic_optimizer x1000c0s6b0n0: optimizer = DeepSpeedCPUAdam(model_parameters, x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in __init__ x1000c0s6b0n0: 
self.ds_opt_adam = CPUAdamBuilder().load() x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load x1000c0s6b0n0: return self.jit_load(verbose) x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load x1000c0s6b0n0: op_module = load(name=self.name, x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load x1000c0s6b0n0: return _jit_compile( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile x1000c0s6b0n0: _write_ninja_file_and_build_library( x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1592, in _write_ninja_file_and_build_library x1000c0s6b0n0: verify_ninja_availability() x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1648, in verify_ninja_availability x1000c0s6b0n0: raise RuntimeError("Ninja is required to load C++ extensions") x1000c0s6b0n0: RuntimeError: Ninja is required to load C++ extensions x1000c0s6b0n0: Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7eff0daaca60> x1000c0s6b0n0: Traceback (most recent call last): x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__ x1000c0s6b0n0: self.ds_opt_adam.destroy_adam(self.opt_id) x1000c0s6b0n0: AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' x1000c0s6b0n0: Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f0145cb8a60> x1000c0s6b0n0: Traceback (most recent call last): x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__ x1000c0s6b0n0: self.ds_opt_adam.destroy_adam(self.opt_id) x1000c0s6b0n0: AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' x1000c1s1b0n0: [2023-07-16 11:24:04,470] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 2431494 x1000c0s6b0n0: [2023-07-16 11:24:04,784] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 665498 x1000c0s6b0n0: [2023-07-16 11:24:04,804] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 665499 ``` When I tried single node training, the training procedes smoothly: ``` deepspeed --num_gpus=2 $ROOT_DIR/transformers/examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --do_train \ --do_eval \ --output_dir ~/scratch/finetuned_models/debug \ --deepspeed "$ROOT_DIR/FastChat/fastchat/ds_config/ds_config_zero3.json" ``` ### Expected behavior Expected training to proceed smoothly as in single node case.
07-16-2023 03:40:03
07-16-2023 03:40:03
For the error log

```bash
x1000c0s6b0n0: File "/home/users/industry/dso/lannliat/.conda/envs/debug/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1229, in _configure_basic_optimizer
x1000c0s6b0n0: optimizer = DeepSpeedCPUAdam(model_parameters,
```

I believe it's best to open an issue on the [DeepSpeed GitHub issues](https://github.com/microsoft/DeepSpeed/issues) page for this. It's likely a DeepSpeed version issue.
transformers
24,843
closed
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
Quantizing HuggingFaceH4/starchat-alpha model using transformers BitsAndBytesConfig and LoRA Config for reducing the memory usage. ``` nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16) model = AutoModelForCausalLM.from_pretrained(model_id, config = config, quantization_config=nf4_config, torch_dtype=torch.bfloat16,) model.resize_token_embeddings(num_additional_token + tokenizer.vocab_size) ``` tried to reduce the model_max_length: ``` tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha", additional_special_tokens = ["[SYSTEM]", "[ASSISTANT]", "[USER]", "[END]"], pad_token = "[PAD]", model_max_length = 1000, return_token_type_ids=False) ``` Used LoRA configuration: ``` from peft import LoraConfig, get_peft_model model.gradient_checkpointing_enable() model = prepare_model_for_kbit_training(model) lora_config = LoraConfig( r=8, lora_alpha=32, target_modules=["c_attn"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) model = get_peft_model(model, lora_config) model.config.use_cache = False ``` Using the LoRA configuration i was able to reduce trainable params to 4,014,080 . Finally encountered the error while training the model using HuggingFace TrainingArguments and Trainer to train the model. ``` training_args = TrainingArguments( output_dir='./NeuralCodeBot_starchat', # output directory num_train_epochs= 2, # total number of training epochs per_device_train_batch_size=1, # batch size per device during training per_device_eval_batch_size=1, # batch size for evaluation warmup_steps=50, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=100, learning_rate=1e-3, max_steps = 10000, fp16= True, push_to_hub=True, ) trainer = Trainer( model= model, args=training_args, train_dataset=tokenized_dataset["train"], eval_dataset=tokenized_dataset["test"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics, ) trainer.train() ``` ## Error Message: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-29-3435b262f1ae>](https://localhost:8080/#) in <cell line: 1>() ----> 1 trainer.train() 30 frames [/usr/local/lib/python3.10/dist-packages/bitsandbytes/autograd/_functions.py](https://localhost:8080/#) in forward(ctx, A, B, out, bias, state) 514 # 1. Dequantize 515 # 2. MatmulnN --> 516 output = torch.nn.functional.linear(A, F.dequantize_4bit(B, state).to(A.dtype).t(), bias) 517 518 # 3. Save state RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ``` When running the above code, I encountered an error during training. Unfortunately, I'm unable to provide the exact error message as I couldn't run the code on CPU due to the limitation of BitsAndBytesConfig requiring GPUs. However, I would like to request assistance from the community in resolving this issue. If anyone has encountered a similar error while quantizing HuggingFace language models, specifically using BitsAndBytesConfig and Trainer, I would greatly appreciate any suggestions or guidance on how to overcome this issue. Any of your suggestions will be highly appreciated. Thank You in advance.
07-15-2023 20:58:16
07-15-2023 20:58:16
I was able to resolve the issue by making the following changes. First, I added the special tokens that were used in my prompt card:

```
special_token_dict = tokenizer.special_tokens_map
tokenizer.add_special_tokens(special_token_dict)
```

Then I changed this line of code

```
model.resize_token_embeddings(num_additional_token + tokenizer.vocab_size)
```

to this:

```
model.resize_token_embeddings(len(tokenizer))
```
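For anyone landing here later, a minimal sketch putting those two pieces together (the 4-bit quantization, LoRA and Trainer parts are left out for brevity, so this is illustrative rather than a drop-in fix):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/starchat-alpha"

# Tokenizer kwargs taken from the original report above
tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    additional_special_tokens=["[SYSTEM]", "[ASSISTANT]", "[USER]", "[END]"],
    pad_token="[PAD]",
)

model = AutoModelForCausalLM.from_pretrained(model_id)

# Resize to len(tokenizer), which counts the base vocabulary plus every added
# token, instead of hand-computing num_additional_token + tokenizer.vocab_size
model.resize_token_embeddings(len(tokenizer))
```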
transformers
24,842
open
Request support for RWKV-4-World model.
### Model description As RWKV-4-World uses a different tokenizer and vocabulary, the current RWKV support in transformers is incompatible with it. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://huggingface.co/StarRing2022/RWKV-4-World-1.5B @StarRing2022
07-15-2023 20:25:15
07-15-2023 20:25:15
@sgugger Is it possible to add support for it? I'd like to use PEFT to fine-tune the model.<|||||>We have implemented fine-tuning of the RWKV-World model using the peft library, and we will publish it on HF shortly.<|||||>The relevant code will be provided under this issue.<|||||>Please refer to: https://github.com/StarRing2022/HF-For-RWKVWorld-LoraAlpaca https://huggingface.co/StarRing2022/RWKV-4-World-7B<|||||>> Please refer to: https://github.com/StarRing2022/HF-For-RWKVWorld-LoraAlpaca https://huggingface.co/StarRing2022/RWKV-4-World-7B @StarRing2022 Thank you for your great work. Will you make a PR for it, or will you keep it as a separate repo? I'd like to see it contributed back to HF transformers via a PR, so that others can find your work easily. https://github.com/StarRing2022/RingRWKV I think this should be mentioned in the example GitHub repo as well.<|||||>cc @younesbelkada <|||||>Thanks, there are indeed some issues with the RWKV in HF format in the official transformers. Recently we have found issues such as CFG and sample_logits; a PR is necessary, and we also hope the maintainers can provide improvements.
transformers
24,841
open
Support for caching prompt hidden states through multiple calls of `generate()`
### Feature request

Hi there, I'd like to be able to re-use the hidden states for a common (potentially long) prompt across multiple calls to `model.generate()` in order to reduce redundant computation. Here is how I envision a final API, though I'm sure there are multiple ways to do it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load stuff
model = AutoModelForCausalLM.from_pretrained('huggyllama/llama-7b')
tokenizer = AutoTokenizer.from_pretrained('huggyllama/llama-7b')

# Common prompt that we'd prepend to every example
prompt = "This is a common prompt in every example."
prompt_ids = tokenizer(prompt, return_tensors='pt')

# Examples to pass to generate
examples = ["Ackbar went to", "Billaba enjoys", "Cody is eating some"]

# Generation loop
outputs = []
prompt_hidden_state = None
for ex in examples:
    # Current way of doing things
    out = model.generate(
        **tokenizer(prompt + ex, return_tensors='pt'),
    )

    # Proposed method to re-use prompt_hidden_state
    out = model.generate(
        **tokenizer(ex, return_tensors='pt'),
        common_prompt_ids=prompt_ids,
        prompt_hidden_state=prompt_hidden_state
    )
    prompt_hidden_state = out.prompt_hidden_state
    outputs.append(out.sequences)
```

Thanks in advance.

### Motivation

A very common pattern for LLM usage is having a common prompt (e.g., instructions and input/output pairs), a sample input, and asking it to generate the sample output. For example:

```
You are a programmer's assistant which converts English descriptions to Python functions.

English: <example 1 description>
Python: <example 1 function>

English: <example 2 description>
Python: <example 2 function>

English: <example 3 description>
Python: <example 3 function>

English: <input description>
Python:
```

I'd like to be able to cache the common part of the prompt across inputs, that is, everything before `<input description>` (which appears in every example), to avoid potentially expensive re-computation.

### Your contribution

The only existing info I could find is the short discussion [here](https://discuss.huggingface.co/t/avoid-recalculating-hidden-states-between-generate-calls/34209). I tried messing around a bit to get this to work but had little luck. I'm not familiar with the inner workings of `transformers` and ran into numerous errors.

One problem is padding: if we're using left padding, it can cause some misalignment with the prompt hidden states, e.g.:

```
<p> <p> <p> common prompt x_1 x_2 x_3
<p> <p> common prompt x_1 x_2 x_3 x_4
<p> <p> <p> <p> common prompt x_1 x_2
```

I don't know the best way to solve this. Do we dynamically pad every tensor in `past_key_values`? That seems slow, but I don't know if it actually is. If someone can suggest a better/easier way or give some more pointers on how to solve the padding problem, I'd be happy to try again myself. Thanks in advance.
07-15-2023 20:12:40
07-15-2023 20:12:40
cc @gante <|||||>Hey @offendo 👋 This is a relevant request, and one that I can't give an exact solution at the moment. In general, `generate()` + prompting is very manual at the moment, and we really want to improve it. As such, the solution to your proposal will depend on our next iteration of prompt-handling! I'm assigning the issue to me, and I'll keep you posted 🤗 (cc @patrickvonplaten -- this is related to our brainstorming session yesterday, about prompting)<|||||>This feature would be useful not only for long prompts, but also for incrementally building dialogs / conversations without recomputing the generated parts. And I think there's a simple solution: just expose `past_key_values` in the various `generate` functions, and allow `past_key_values` to be passed in as the parameter for all decoder-only models.<|||||>@gante can't we already pass `past_key_values` to `generate` through `model_kwargs` ? So we could pre-encode the prompt into `past_key_values` with a single forward pass and then re-use it in generation already I think. See this similar (old) issue maybe: https://github.com/huggingface/transformers/issues/4368 should help<|||||>@patrickvonplaten Yeah I guess this solves the issue for @offendo , but still `past_key_values` are not being returned with the generated sequences. Hopefully there could be a flag (possibly called `return_past_key_values`) that allows them to be returned.<|||||>@lqf96 absolutely. I'll work on adding that, since I've seen others mentioning its usefulness :)<|||||>Looks like this is being addressed by #25086... Hopefully it can be accepted soon!<|||||>Hello, I have received your email!<|||||>> @gante can't we already pass `past_key_values` to `generate` through `model_kwargs` ? So we could pre-encode the prompt into `past_key_values` with a single forward pass and then re-use it in generation already I think. > > See this similar (old) issue maybe: #4368 should help Does this approach work even with the padding issue? If we encode the common prompt (which has no padding) and then call generate on a batch of values with left padding, the hidden states won’t align properly and we’d have to do some dynamic padding of the `past_key_values` or something. Please correct me if I am misunderstanding something! And also, thanks to everyone for their work on this!<|||||>@offendo If the inputs are properly passed, then the position ids can be correctly inferred, which will result in the correct output :) Possibly we may need additional logic to ensure this position id inference can happen seamlessly
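In the meantime, here is a rough manual sketch of the idea: encode the shared prompt once, then reuse its `past_key_values` with plain forward calls and a simple greedy loop instead of `generate()`. It assumes batch size 1, no padding, and a decoder-only model such as GPT-2, so the position ids can be inferred from the cache length.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "This is a common prompt in every example."
examples = ["Ackbar went to", "Billaba enjoys", "Cody is eating some"]

# Encode the shared prompt once and keep its key/value cache.
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    prompt_cache = model(prompt_ids, use_cache=True).past_key_values

for ex in examples:
    # Note: tokenizing the example on its own may split the prompt/example
    # boundary differently than tokenizing prompt + ex as one string.
    ex_ids = tokenizer(ex, return_tensors="pt").input_ids
    past = prompt_cache  # the cached tensors are not modified in place, so reuse is safe
    next_input = ex_ids
    generated = torch.cat([prompt_ids, ex_ids], dim=-1)
    with torch.no_grad():
        for _ in range(20):  # greedy decoding of 20 new tokens
            out = model(next_input, past_key_values=past, use_cache=True)
            past = out.past_key_values
            next_input = out.logits[:, -1:].argmax(dim=-1)
            generated = torch.cat([generated, next_input], dim=-1)
    print(tokenizer.decode(generated[0]))
```

Handling batches with left padding would still need the dynamic re-padding of the cache discussed above, which is exactly the part that is not solved here.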
transformers
24,840
open
Again: RuntimeError: unscale_() has already been called on this optimizer since the last update()
### System Info - `transformers` version: 4.30.2 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.12 - Huggingface_hub version: 0.16.2 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I've seen the PR where this was supposed to be fixed. From my experiments, the issue still happens when gradient_accumulation_steps is larger than dataset.num_rows divided by per_device_train_batch_size. These are obviously not very good input values, and it's fairly obvious that it should blow up - but this could happen to people if the dataset is too small and gradient_accumulation_steps is set arbitrarily, and no relevant info is given, just a RuntimeError. So it's partly a bug and partly user error. On my end the issue can be fixed if I lower the gradient_accumulation_steps so it satisfies the above requirement. I have not looked at the transformers code where this comes into play. This is literally based on my own hunch - if I forgot to safeguard the data, I would make an error there. The debug log: File "\env\lib\site-packages\transformers\trainer.py", line 1645, in train return inner_training_loop( File "\env\lib\site-packages\transformers\trainer.py", line 1987, in _inner_training_loop self.accelerator.clip_grad_norm_( File "\env\lib\site-packages\accelerate\accelerator.py", line 1893, in clip_grad_norm_ self.unscale_gradients() File "\env\lib\site-packages\accelerate\accelerator.py", line 1856, in unscale_gradients self.scaler.unscale_(opt) File "env\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 275, in unscale_ raise RuntimeError("unscale_() has already been called on this optimizer since the last update().") RuntimeError: unscale_() has already been called on this optimizer since the last update(). ### Expected behavior Safeguard against this if it is indeed an issue, or give an error that tells you why it happens (gradient_accumulation_steps too high for the amount of data). If this is the wrong place to bring this up, let me know.
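To make the condition concrete, here is a toy illustration with made-up numbers (not from a real run):

```python
# Toy numbers only, to illustrate the condition described above
num_rows = 32                       # stands in for dataset.num_rows
per_device_train_batch_size = 4
gradient_accumulation_steps = 16

batches_per_epoch = num_rows // per_device_train_batch_size   # 8
print(gradient_accumulation_steps > batches_per_epoch)        # True -> the setup that triggers the error for me
```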
07-15-2023 19:51:15
07-15-2023 19:51:15
I have the same errors (I opened an issue). However, I don't have gradient_accumulation_steps larger than the number of rows divided by per_device_train_batch_size. I mean, I have these parameters: GRADIENT_ACCUMULATION_STEPS = 16 MICRO_BATCH_SIZE = 4 dataset.num_rows = 53131 Could it be that one of the batches is smaller than the micro_batch_size?<|||||>I think this is fixed on the main branch now. cc @muellerzr and @pacman100 <|||||>Hello @FartyPants, please see this: https://github.com/huggingface/transformers/issues/24849#issuecomment-1638272113
transformers
24,839
closed
eval_loss of the same set of data differs when using different batch size
### System Info - `transformers` version: 4.30.0 - Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction `eval_loss` of **the same set of data** from **the same model** (`gpt-neo`, `flan-t5`, `llama`...) **differs** when using different batch size. ```python from transformers import AutoModel, AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m", padding_side="left") # bug also happens on flan-t5 # tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") # model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large") # set pad token if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token # For the following inputs, eval_loss is different when using different batch size samples = [ "Sheldon: So if a photon is directed through a plane with two slits in it and either slit is observed it will not go through both slits. If it's unobserved it will, however, if it's observed after it's left the plane but before it hits its target, it will not have gone through both slits. Leonard: Agreed, what's your point? Sheldon: There's no point, I just think it's a good idea for a tee-shirt. Leonard: Excuse me? Receptionist: Hang on. Leonard: One across is Aegean, eight down is Nabakov, twenty-six across is MCM, fourteen down isโ€ฆ move your fingerโ€ฆ phylum, which makes fourteen across Port-au-Prince. See, Papa Doc's capital idea, that's Port-au-Prince. Haiti. Receptionist: Can I help you? Leonard: Yes. Um, is this the High IQ sperm bank? Receptionist: If you have to ask, maybe you shouldn't be here. Sheldon: I think this is the place. Receptionist: Fill these out. Leonard: Thank-you. We'll be right back. Receptionist: Oh, take your time. I'll just finish my crossword puzzle. Oh wait. (They sit and begin to fill in forms). Sheldon: Leonard, I don't think I can do this. Leonard: What, are you kidding? You're a semi-pro. Sheldon: No. We are committing genetic fraud. There's no guarantee that our sperm is going to generate high IQ offspring, think about that. Sheldon: So if a photon is directed through a plane with two slits in it and either slit is observed it will not go through both slits. If it's unobserved it will, however, if it's observed after it's left the plane but before it hits its target, it will not have gone through both slits. Leonard: Agreed, what's your point? Sheldon: There's no point, I just think it's a good idea for a tee-shirt. Leonard: Excuse me?", "Sheldon: Are you still mad about the sperm bank? Leonard: No. Sheldon: You want to hear an interesting thing about stairs? Leonard: Not really. Sheldon: If the height of a single step is off by as little as two millimetres, most people will trip. Leonard: I don't care. Two millimetres? 
That doesn't seem right. Sheldon: No, it's true, I did a series of experiments when I was twelve, my father broke his clavicle. Leonard: Is that why they sent you to boarding school? Sheldon: No, that was the result of my work with lasers. Leonard: New neighbour? Sheldon: Evidently. Leonard: Significant improvement over the old neighbour. Sheldon: Two hundred pound transvestite with a skin condition, yes she is. Penny: Oh, hi! Leonard: Hi. Sheldon: Hi. Leonard: Hi. Sheldon: Hi. Penny: Hi? Leonard: We don't mean to interrupt, we live across the hall. Penny: Oh, that's nice. Leonard: Ohโ€ฆ uhโ€ฆ noโ€ฆ we don't live togetherโ€ฆ umโ€ฆ we live together but in separate, heterosexual bedrooms. Penny: Oh, okay, well, guess I'm your new neighbour, Penny. Leonard: Leonard, Sheldon. Penny: Hi. Leonard: Hi. Sheldon: Hi. Penny: Hi. Leonard: Hi. Well, uh, oh, welcome to the building. Penny: Thankyou, maybe we can have coffee sometime. Leonard: Oh, great. Penny: Great. Sheldon: Great. Leonard: Great. Well, bye. Penny: Bye. Sheldon: Bye. Leonard: Bye. Leonard: Should we have invited her for lunch? Sheldon: No. We're going to start Season Two of Battlestar Galactica. Leonard: We already watched the Season Two DVDs. Sheldon: Not with commentary. Leonard: I think we should be good neighbours, invite her over, make her feel welcome. Sheldon: We never invited Louis-slash-Louise over. Leonard: Well, then that was wrong of us. Sheldon: Are you still mad about the sperm bank? Leonard: No. Sheldon: You want to hear an interesting thing about stairs? Leonard: Not really.", "Leonard: Okay, well, make yourself at home. Penny: Okay, thankyou. Leonard: You're very welcome. Penny: This looks like some serious stuff, Leonard, did you do this? Sheldon: Actually that's my work. Penny: Wow. Sheldon: Yeah, well, it's just some quantum mechanics, with a little string theory doodling around the edges. That part there, that's just a joke, it's a spoof of the Bourne-Oppenheimer approximation. Penny: So you're like, one of those, beautiful mind genius guys. Sheldon: Yeah. Penny: This is really impressive. Leonard: I have a board. If you like boards, this is my board. Penny: Holy smokes. Sheldon: If by holy smokes you mean a derivative restatement of the kind of stuff you can find scribbled on the wall of any men's room at MIT, sure. Leonard: What? Sheldon: Oh, come on. Who hasn't seen this differential below โ€œhere I sit broken hearted?โ€ Leonard: At least I didn't have to invent twenty-six dimensions just to make the math come out. Sheldon: I didn't invent them, they're there. Leonard: In what universe? Sheldon: In all of them, that is the point. Penny: Uh, do you guys mind if I start? Sheldon: Um, Penny, that's where I sit. Penny: So, sit next to me. Sheldon: No, I sit there. Penny: What's the difference? Sheldon: What's the difference? Leonard: Here we go. Sheldon: In the winter that seat is close enough to the radiator to remain warm, and yet not so close as to cause perspiration. In the summer it's directly in the path of a cross breeze created by open windows there, and there. Leonard: Okay, well, make yourself at home. Penny: Okay, thankyou. Leonard: You're very welcome. Penny: This looks like some serious stuff, Leonard, did you do this?", "Leonard: Uh, there it goes, it sticks, I'm sorry. Penny: Okay. Thanks. Leonard: You're welcome, oh, you're going to step right, okay, I'llโ€ฆ. Penny: Hey, Leonard? Leonard: The hair products are Sheldon's. Penny: Um, okay. Can I ask you a favour. Leonard: A favour? 
Sure, you can ask me a favour, I would do you a favour for you. Penny: It's okay if you say no. Leonard: Oh, I'll probably say yes. Penny: It's just not the kind of thing you ask a guy you've just met. Leonard: Wow. Leonard: Uh, there it goes, it sticks, I'm sorry. Penny: Okay. Thanks. Leonard: You're welcome, oh, you're going to step right, okay, I'llโ€ฆ. Penny: Hey, Leonard?" ] model.eval() with torch.no_grad(): # feed all data in one batch all_batch_samples = tokenizer(samples, return_tensors="pt", padding="max_length", max_length=480, truncation=True) labels = all_batch_samples["input_ids"].clone() labels[labels == tokenizer.pad_token_id] = -100 all_batch_samples["labels"] = labels outputs = model(**all_batch_samples) all_loss = outputs.loss # feed one data sample per batch (batch size is 1) losses = [] for i in range(len(all_batch_samples["input_ids"])): batch_samples = tokenizer(samples[i], return_tensors="pt", padding="max_length", max_length=480, truncation=True) labels = batch_samples["input_ids"].clone() labels[labels == tokenizer.pad_token_id] = -100 batch_samples["labels"] = labels for k, v in batch_samples.items(): # always true assert (all_batch_samples[k][i] == batch_samples[k]).all() losses.append(model(**batch_samples).loss) losses = torch.stack(losses) print(f"BS=1: {losses.mean()}", "*"*5, f"BS=all: {all_loss}", "*"*5, f"Losses: {losses}") # BS=1: 3.6513803005218506 ***** BS=all: 3.6280925273895264 ***** Losses: tensor([3.5703, 3.4178, 3.8621, 3.7554]) ``` ### Expected behavior I think the loss should be exactly the same with different batch sizes. I wonder why the deviation happens.
07-15-2023 17:26:49
07-15-2023 17:26:49
Hi @namespace-Pt Thank you for reporting. This is because this is a causal LM model, where the loss is computed across the non-padding tokens. The loss (returned from the model's `forward`) is the total loss divided by the number of non-padding tokens sent to the model. In your case (4 examples), they have 438, 461, 423 and 183 non-padding tokens, a total of `1505`. For each single example, the (averaged) loss is `2.5674`, `2.7242`, `2.9536` and `2.3945`. Multiplying by the corresponding number of non-padding tokens, we get `1124.5172`, `1255.8704`, `1249.3870` and `438.1949`. Summing them gives the total loss of `4067.9697`. Divided by `1505` (the total number of non-padding tokens in the batch), we get `4067.9697 / 1505 = 2.7031`, which is the loss we get when sending the batch to the model. (There is a slight precision issue above, but it's fine.) This is known and not a real issue. However, if you want to have full control, you can call the model's forward without `labels` and compute the loss in your own code. <|||||>There is a more detailed discussion https://github.com/huggingface/transformers/issues/24725<|||||>Got it. Thank you. So it should not be a macro-average.
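As a follow-up, a small code sketch of that reconstruction, reusing the names from the script in the issue (`losses` is the stacked tensor of batch-size-1 losses, `all_batch_samples["labels"]` the padded labels of the full batch); the counts are taken on the shifted labels because a causal LM predicts the next token:

```python
shift_labels = all_batch_samples["labels"][..., 1:]   # a causal LM predicts the next token
n_label_tokens = (shift_labels != -100).sum(dim=-1)   # label positions that enter the loss
micro_avg = (losses * n_label_tokens).sum() / n_label_tokens.sum()
# micro_avg reproduces the loss of the single big batch, while losses.mean()
# is the (different) unweighted macro average printed in the issue
```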
transformers
24,838
closed
Ko perf infer gpu many
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-15-2023 08:35:32
07-15-2023 08:35:32
I'll create a PR again using the Korean translation team's PR template.
transformers
24,837
closed
Remove deprecated codes
# What does this PR do? This PR removes some deprecated code: - remove `xpu_backend` training argument, for it is deprecated and will be remove in version 4.31 of Transformers. - remove some codes that will never be executed. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-15-2023 06:14:11
07-15-2023 06:14:11
It seems the failing test case is not related to these commits. Please re-trigger the failed workflow. Thanks.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
24,836
open
Pipeline feature request for min_new_tokens
### Feature request Pipeline already supports the option max_new_tokens. I’m asking for the existing “min_new_tokens” option to be usable with pipeline in the same way as “max_new_tokens”. Currently, trying to specify “min_new_tokens” throws an error saying the argument is unrecognised. ### Motivation Maintaining consistency, as the previous way to specify token limits is deprecated. ### Your contribution I can test, but I’m not a developer, so no, I couldn’t do a PR.
07-15-2023 04:22:18
07-15-2023 04:22:18
cc @gante <|||||>Hey @mediocreatmybest -- the flag is already available and operational :) Be mindful that you may need to adjust `max_new_tokens` to an admissible range. An impossible combination of constraints should raise a warning -- we are working on them :) ```py from transformers import pipeline pipe = pipeline(task="text-generation", model="gpt2") # Base example pipe_out = pipe("This is a sequence of numbers: 1 2 3 4", do_sample=False) print(pipe_out) # Add min_new_tokens -> no change because the default maximum number of tokens is smaller # This will likely be an exception in the future. pipe_out = pipe("This is a sequence of numbers: 1 2 3 4", do_sample=False, min_new_tokens=100) print(pipe_out) # If we add max_new_tokens -> it works as expected pipe_out = pipe("This is a sequence of numbers: 1 2 3 4", do_sample=False, min_new_tokens=100, max_new_tokens=100) print(pipe_out) ```<|||||>(I'm closing since the feature already exists -- feel free to continue commenting)<|||||>Awesome thanks! <|||||>> (I'm closing since the feature already exists -- feel free to continue commenting) Thanks @gante, just to confirm which version of transformers has this been included in? Currently transformers 4.31.0 with pipeline task text-to-image produces this error when running with min_new_tokens Simple Caption Pipeline: ``` from transformers import pipeline pipe = pipeline("image-to-text",model="Salesforce/blip-image-captioning-base",min_new_tokens=5, max_new_tokens=20) caption = pipe("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png") print(caption) ``` ``` pipe = pipeline("image-to-text",model="Salesforce/blip-image-captioning-base",min_new_tokens=5, max_new_tokens=20) File "C:\Python310\lib\site-packages\transformers\pipelines\__init__.py", line 988, in pipeline return pipeline_class(model=model, framework=framework, task=task, **kwargs) File "C:\Python310\lib\site-packages\transformers\pipelines\image_to_text.py", line 55, in __init__ super().__init__(*args, **kwargs) File "C:\Python310\lib\site-packages\transformers\pipelines\base.py", line 816, in __init__ self._preprocess_params, self._forward_params, self._postprocess_params = self._sanitize_parameters(**kwargs) TypeError: ImageToTextPipeline._sanitize_parameters() got an unexpected keyword argument 'min_new_tokens' ```<|||||>Just ran through your example and that ran without an issue, I'm guessing my issue might be the ImagetoTextPipeline doesn't have that feature? <|||||>@mediocreatmybest quite possibly `ImagetoTextPipeline` is not correctly accepting text generation arguments -- will check it!<|||||>@mediocreatmybest #24989 fixes it :) (feel free to pip install from that PR)
transformers
24,835
open
Reflexion Agent implementation?
### Feature request Reflexion: Language Agents with Verbal Reinforcement Learning (https://arxiv.org/abs/2303.11366) has code at https://github.com/noahshinn024/reflexion. Could you integrate it into transformers_agents? ### Motivation The Reflexion agent is a very interesting advanced agent, and its code is already open-sourced, so would it be easy to integrate into transformers_agents? ### Your contribution Currently no.
07-15-2023 02:55:50
07-15-2023 02:55:50
transformers
24,834
closed
Pipeline image-to-text task and Bitsandbytes error
### System Info Python 3.10.6 Transformers 4.30.0 Bitsandbytes 0.39.1 Windows / Linux ### Who can help? @nar ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using a 4- or 8-bit quantised model such as: https://huggingface.co/Mediocreatmybest/blip2-opt-2.7b_8bit ### Expected behavior The pipeline image processor should detect that the model is running as a 4- or 8-bit model with bitsandbytes. I apologise if this should be a feature request or if it’s a bug; I couldn’t find any examples of what I was trying to do. When running through the pipeline examples from the Hugging Face website, if I try using an 8-bit model, the model seems to be detected correctly and cast to 8-bit, but the processor doesn’t seem to follow suit and runs at its default, throwing an error that they both should be set to the same floating-point type. I’ve uploaded a few models set to 8-bit to save on size and memory, as BLIP2 is pretty heavy and using it on consumer devices is obviously challenging. The models I’ve uploaded to HuggingFace are: Mediocreatmybest/blip2-opt-2.7b_8bit Mediocreatmybest/blip2-opt-6.7b_8bit Mediocreatmybest/blip2-flan-t5-xxl_8bit I can get them working with regular methods, but as I’m a beginner it’s obviously challenging. Thanks again for all the great work!
07-15-2023 02:30:43
07-15-2023 02:30:43
Based on this document, it should be possible, but maybe this is just an issue with multimodal or image processors with pipeline? https://huggingface.co/docs/transformers/main/pipeline_tutorial _# pip install accelerate bitsandbytes import torch from transformers import pipeline pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True}) output = pipe("This is a cool example!", do_sample=True, top_p=0.95)_<|||||>Also I did create a huggingface.co spaces using pipeline with the ability to try load in 8bit (obviously errors) https://huggingface.co/spaces/Mediocreatmybest/PipelineImageCaption Thanks. <|||||>Adding the stack trace from google colab. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-4-ca06d49da534> in <cell line: 19>() 17 captioner 18 # caption ---> 19 caption = captioner(image)[0]['generated_text'] 20 print(caption) 16 frames /usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias) 457 weight, bias, self.stride, 458 _pair(0), self.dilation, self.groups) --> 459 return F.conv2d(input, weight, bias, self.stride, 460 self.padding, self.dilation, self.groups) 461 RuntimeError: Input type (float) and bias type (c10::Half) should be the same<|||||>cc @younesbelkada <|||||>Hi @mediocreatmybest Thanks for the issue, it seems the input image needs to be converted into half-precision (`torch.float16`), can you share a small handy reproducible snippet that leads to your bug? <|||||>Thanks for the fast response! The snippet I was using to test on google colab and on my personal device was: ``` from transformers import pipeline import torch image = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png" model = "Salesforce/blip-image-captioning-base" model_kwargs = {"load_in_8bit": True, "torch_dtype": torch.float16} captioner = pipeline(task="image-to-text", model=model, max_new_tokens=30, model_kwargs=model_kwargs, use_fast=True ) # load model captioner # caption caption = captioner(image)[0]['generated_text'] print(caption) ``` (Copy and pasted from my mobile device, hopefully this formatted correctly) Thanks ๐Ÿ™ <|||||>I encountered similar errors while using Blip/Blip2/Git models in an image_to_text pipeline. In my case, I was working with float16 instead of 8bit precision, as under my setup I was encountering additional issues with 8bit. I think there's a very good chance that the fix I've made in #24947 might also fix your issue (for the three models I've implemented the fix for). If you're able to give it a try I'd be interested in hearing if it fixes your issue too.<|||||>> I encountered similar errors while using Blip/Blip2/Git models in an image_to_text pipeline. In my case, I was working with float16 instead of 8bit precision, as under my setup I was encountering additional issues with 8bit. I think there's a very good chance that the fix I've made in #24947 might also fix your issue (for the three models I've implemented the fix for). If you're able to give it a try I'd be interested in hearing if it fixes your issue too. Thanks @JimAllanson, happy to try test, but I'm pretty new to Python, what is the best way to test this for you? editing the site-packages with the change?
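While a proper fix lands, a possible workaround is to skip the pipeline and cast the pixel values by hand. The sketch below is untested here and assumes an 8-bit bitsandbytes load where the remaining non-quantized weights are kept in float16:

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(
    model_id, load_in_8bit=True, device_map="auto"
)

url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)

# Cast the floating-point pixel values to float16 so they match the
# dtype of the non-quantized vision weights.
inputs = processor(images=image, return_tensors="pt").to(model.device, torch.float16)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```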
transformers
24,833
closed
Bump cryptography from 41.0.0 to 41.0.2 in /examples/research_projects/decision_transformer
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.0 to 41.0.2. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p> <blockquote> <p>41.0.2 - 2023-07-10</p> <pre><code> * Fixed bugs in creating and parsing SSH certificates where critical options with values were handled incorrectly. Certificates are now created correctly and parsing accepts correct values as well as the previously generated invalid forms with a warning. In the next release, support for parsing these invalid forms will be removed. <p>.. _v41-0-1:</p> <p>41.0.1 - 2023-06-01 </code></pre></p> <ul> <li>Temporarily allow invalid ECDSA signature algorithm parameters in X.509 certificates, which are generated by older versions of Java.</li> <li>Allow null bytes in pass phrases when serializing private keys.</li> </ul> <p>.. _v41-0-0:</p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/pyca/cryptography/commit/7431db737cf0407560fac689d24f1d2e5efc349d"><code>7431db7</code></a> bump for 41.0.2 (<a href="https://redirect.github.com/pyca/cryptography/issues/9215">#9215</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/e190ef190525999d1f599cf8c3aef5cb7f3a8bc4"><code>e190ef1</code></a> Backport ssh cert fix (<a href="https://redirect.github.com/pyca/cryptography/issues/9211">#9211</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/bb204c8ca7bc0df0c24b6f6c1f59ed5f5bee9226"><code>bb204c8</code></a> Backport: Added PyPy 3.10 to CI (<a href="https://redirect.github.com/pyca/cryptography/issues/8933">#8933</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/9210">#9210</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/d02de9f26e9a2353e89427c1cea8b9ed2bae969e"><code>d02de9f</code></a> changelog and version bump (<a href="https://redirect.github.com/pyca/cryptography/issues/9008">#9008</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/53dc686431f59658d892b83383a330d796105843"><code>53dc686</code></a> Backport null fix (<a href="https://redirect.github.com/pyca/cryptography/issues/9007">#9007</a>)</li> <li><a href="https://github.com/pyca/cryptography/commit/b99900596e65f31543d62cf1a52069c709ba7970"><code>b999005</code></a> Backport tolerate (<a href="https://redirect.github.com/pyca/cryptography/issues/9006">#9006</a>)</li> <li>See full diff in <a href="https://github.com/pyca/cryptography/compare/41.0.0...41.0.2">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=41.0.0&new-version=41.0.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
07-15-2023 01:21:46
07-15-2023 01:21:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,832
closed
`trainer.evaluate` throws an error when using multiple evaluation datasets
### System Info ``` - `transformers` version: 4.31.0.dev0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. use transformers [examples code](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) on summarization (or any other) 2. Pass multiple evaluation datasets as follows when running the code. This should be supported as [documented here](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.eval_dataset). ``` python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file "a.txt" "b.txt" \ --validation_file "a_valid.txt" "b_valid.txt" ``` ### Expected behavior The example code should not hit an error when `trainer.evaluate` is called, which happens either 1) intermittently during training and 2) [at the end of the training](https://github.com/huggingface/transformers/blob/5bb4430edc7df9f9950d412d98bbe505cc4d328b/examples/pytorch/summarization/run_summarization.py#L695). During training, Trainer [checks whether the passed `eval_dataset` consists of multiple datasets or not](https://github.com/huggingface/transformers/blob/5bb4430edc7df9f9950d412d98bbe505cc4d328b/src/transformers/trainer.py#L2216). Since this check is missing in 2), the end-of-training evaluation, it hits an error. I'd be happy to make a PR on this :)
07-15-2023 00:01:32
07-15-2023 00:01:32
Yes, please do suggest a PR to fix this, thanks!
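For reference, a sketch of the dict-style multi-dataset evaluation the issue is pointing at; `model`, `training_args`, and the datasets are stand-ins, and the final block mirrors the dict check that the in-training evaluation already performs.

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset={"a_valid": valid_a, "b_valid": valid_b},  # dict form documented for Trainer
)
trainer.train()

# The end-of-training evaluation needs the same dict handling as the in-training one:
if isinstance(trainer.eval_dataset, dict):
    for name, dataset in trainer.eval_dataset.items():
        trainer.evaluate(eval_dataset=dataset, metric_key_prefix=f"eval_{name}")
else:
    trainer.evaluate()
```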
transformers
24,831
closed
RwkvForCausalLM does not support gradient checkpointing.
### System Info Is there some reason why RwkvForCausalLM does not support gradient checkpointing, given that RWKV-LM supports it? @ArthurZucker and @younesbelkada ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `model.gradient_checkpointing_enable()` ``` ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.") ValueError: RwkvForCausalLM does not support gradient checkpointing. ``` ### Expected behavior No errors, since RWKV-LM supports it.
07-14-2023 19:52:06
07-14-2023 19:52:06
Thanks for reporting @jonataslaw https://github.com/huggingface/transformers/pull/24955 has introduced the GC support for RWKV models, can you try that out by installing `transformers` from source and let us know how it goes? ```bash pip uninstall transformers pip install git+https://github.com/huggingface/transformers.git ```<|||||>Thanks for the quick update! I tested your PR, and it works like a charm. However, it stops with an error about 10% of the way into training (during eval): File "/home/jonataslaw/miniconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/jonataslaw/miniconda3/lib/python3.9/site-packages/transformers/models/rwkv/modeling_rwkv.py", line 645, in forward self._rescale_layers() File "/home/jonataslaw/miniconda3/lib/python3.9/site-packages/transformers/models/rwkv/modeling_rwkv.py", line 738, in _rescale_layers block.attention.output.weight.quant_state[0].div_( RuntimeError: result type Float can't be cast to the desired output type Byte Edit: NVM, it is an unrelated problem about inference.<|||||>Ohh, I get it, it is the nested quantization problem reported in https://github.com/huggingface/transformers/issues/23848<|||||>Yes, sadly nested quantization is not supported for RWKV, please use the un-nested one! <|||||>Thanks for the update. I will change my code and stay alert in case it changes in the future. Thanks again for fixing the GC, it helps a lot.<|||||>Thanks very much @jonataslaw !
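For reference, a minimal sketch of the un-nested 4-bit setup mentioned above; the checkpoint id is only illustrative, and gradient checkpointing for RWKV requires the source install with the linked PR.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Plain (non-nested) 4-bit quantisation: `bnb_4bit_use_double_quant=False` is the "un-nested" setting.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "RWKV/rwkv-4-169m-pile",  # illustrative checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
model.gradient_checkpointing_enable()
```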
transformers
24,830
open
Overlapped offset mapping when manually adding special tokens
### System Info - `transformers` version: 4.31.0.dev0 (commit 21946a8cf4a273f35ac2f3a53edafc398699f527) - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I need to manually insert special tokens in a string so I'm using `add_special_tokens=False` and I encounter decoding inconsistencies between tokenizers that are not present with `add_special_tokens=True`. Here are the test cases with the `openlm-research/open_llama_7b` tokenizer. ```python >>> tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", legacy=False) >>> encodings = tok("<s> SYSTEM", add_special_tokens=False, return_offsets_mapping=True) >>> encodings["input_ids"] [1, 31822, 18469, 29767] >>> encodings["offset_mapping"] [(0, 3), (3, 4), (3, 6), (6, 10)] >>> tok.decode(encodings["input_ids"]) '<s> SYSTEM' ``` In the offset mapping, the 2nd and 3rd token are overlapping which is unexpected, and the decoded sequence does not give back the original string, but adds an additional whitespace after the BOS token. When letting the tokenizer handle the special tokens by itself (`add_special_tokens=True`), the issue is not present. ```python >>> tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", legacy=False) >>> encodings = tok("SYSTEM", add_special_tokens=True, return_offsets_mapping=True) >>> encodings["input_ids"] [1, 18469, 29767] >>> encodings["offset_mapping"] [(0, 0), (0, 2), (2, 6)] >>> tok.decode(encodings["input_ids"]) '<s> SYSTEM' ``` ### Expected behavior Here is the test case with the LLaMA tokenizer, which works as expected, even when manually handling special tokens. ```python >>> tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", legacy=False) >>> encodings = tok("<s> SYSTEM", add_special_tokens=False, return_offsets_mapping=True) >>> encodings["input_ids"] [1, 28962, 1254, 12665] >>> encodings["offset_mapping"] [(0, 3), (3, 6), (6, 8), (8, 10)] >>> tok.decode(encodings["input_ids"]) '<s> SYSTEM' ``` The offsets are non-overlapping, and the decoding gives back the original string without additional whitespaces.
07-14-2023 19:47:32
07-14-2023 19:47:32
Hey! The `legacy=False` option is pretty much useless for fast tokenizers, so just an FYI here! (the warning is triggered because a slow instance is initialised!) The `31822` corresponds to `SPIECE_UNDERLINE = "▁"`. The tokenizer on the hub seems to have a few issues already: ```python >>> tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", legacy=False, use_fast = False) >>> tok.encode(" ") [1] >>> tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", legacy=False, use_fast = True) >>> tok.encode(" ") [1, 31822, ..........., 31822] ``` Then, there are no `tokenizer.json` files there, which suggests they are using the default Llama converter. But this might not be intended. As you can see, `huggyllama/llama-7b` is working as expected. I suggest you open an issue on [the original repo](https://huggingface.co/openlm-research/open_llama_7b/discussions)! I am not familiar with this model, but it might have been wrongly converted.
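A small self-contained check for the overlap described above, assuming both repos load as fast tokenizers (which `return_offsets_mapping` requires):

```python
from transformers import AutoTokenizer

def has_overlapping_offsets(offsets):
    # Each (start, end) span should begin at or after the end of the previous span.
    return any(curr[0] < prev[1] for prev, curr in zip(offsets, offsets[1:]))

for repo in ("huggyllama/llama-7b", "openlm-research/open_llama_7b"):
    tok = AutoTokenizer.from_pretrained(repo)
    enc = tok("<s> SYSTEM", add_special_tokens=False, return_offsets_mapping=True)
    print(repo, enc["offset_mapping"], has_overlapping_offsets(enc["offset_mapping"]))
```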
transformers
24,829
closed
[WIP] Add state in segments id calculation
# What does this PR do? At the moment, segments are calculated on an image-per-image basis. This means that when predicting with certain models, e.g. DETR, the segment id that each class corresponds to can be different across each image in a batch and across batches. This PR adds a private attribute to the image processor class to store the class to segment_id mapping as state. /!\ There is a possible breaking change, as `compute_segments` now returns three rather than two objects. Fixes #23461 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-14-2023 18:49:49
07-14-2023 18:49:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,828
closed
๐ŸŒ[i18n-KO] Translated pipeline_webserver.md to Korean
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `pipeline_webserver.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) @0525hhgus, @Sunmin0520, @54data, @seank021, @augustinLib <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @member1 @member2 ... --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-14-2023 18:21:12
07-14-2023 18:21:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>No additional review comments from me. Great work :)
transformers
24,827
open
[`core`] PEFT integration
# What does this PR do? This PR is an attempt to tightly integrate the PEFT library with transformers, by offering users the ability to load PEFT models out of the box from `AutoModelForxxx.from_pretrained()` if the local directory or the Hub model id contains adapter weights and an adapter config. ```python import tempfile from transformers import AutoModelForCausalLM, AutoTokenizer from peft import AutoPeftModelForCausalLM peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id) with tempfile.TemporaryDirectory() as tmpdirname: peft_model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id) peft_model.save_pretrained(tmpdirname) model = AutoModelForCausalLM.from_pretrained(tmpdirname) print(model) ``` Although this is similar to what has been introduced in https://github.com/huggingface/peft/pull/694 this PR offers a direct integration with transformers. ## TODOs: - [x] handle `PeftModel.from_pretrained(xxx)` kwargs - [x] tests - [x] docs (with the help of @stevhliu ) cc @sgugger @pacman100 @BenjaminBossan
07-14-2023 17:06:56
07-14-2023 17:06:56
I would like to have a first review of the draft if possible, to see if we are in line with the approach ๐Ÿ™ @sgugger - Thanks !<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24827). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your review @pacman100 ! I think the canonical way to load PEFT models for inference would still be to use PEFT classes (i.e. either `AutoPeftModelForCausalLM` or `PeftModel` - btw we should encourage users to use more and more `AutoPeftModelForCausalLM` instead of `PeftModel`). This PR is intended to make things even easier for users and for further integrations with the HF ecosystem (`pipeline`, `diffusers`) and it will be clearly documented. I also think we should update the inference widgets after the PEFT release.<|||||>> Mmm that's not the correct way to add a new peft job as it will always run on any PR, even if it shouldn't be run. @younesbelkada You can take a look https://github.com/huggingface/transformers/blob/476be08c4aa96f8c1cae4200d2677bbe8f12cf80/utils/tests_fetcher.py#L720 (check `examples_test_list.txt` and `examples_tests_to_run` in the same file and the 2 CircleCI config files) (you have to check against `peft_integration` in your case)<|||||>The design as is would make it hard for `diffusers` to leverage `transformers` to load PEFT weights for transformers models. In `diffusers` we have the following workflow: ```py from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") # now we have a loaded CLIPTextModel under `pipe.text_encoder` pipe.load_lora(...) # doing this means we would want to call `text_encoder.load_adapter(...)` under the hood ``` This means we necessarily need `transformers` to support the ability to load lora weights into already instantiated models, just like MMS allows it via `load_adapter` - see: https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC.load_adapter.example [Edit] We could work around it by wrapping `pipe.text_encoder` into a `PeftModel` under the hood when doing `pipe.load_lora(...)` to transform `CLIPTextModel` into `PeftModel`, but that: - would break some internal `diffusers` code - would force us to wrap logic around `transformers`, e.g. we could just do `pipe.text_encoder.load_adapter(...)`, but would have to first wrap every transformers model into a PEFT model<|||||>From purely a `transformers` point of view, I would also struggle a bit with the following: 1.) PEFT weights seemingly should only be loaded with `AutoModel`, which is restrictive as there is no need to go over the AutoModel class if one knows the model. It does look like the following would be possible: ```py from transformers import LlamaForCausalLM model = LlamaForCausalLM.from_pretrained("tloen/alpaca-lora-7b") ``` but then `model.__class__` is of type `PeftModel`, which would be confusing to me - I used a class method of `LlamaForCausalLM`. 2.) I don't like that `.from_pretrained(<peft/model/id>)` more or less fully dispatches to the `peft` library instead of staying in Transformers' land. I imagined `peft` being used as a utility library, not Transformers dispatching to peft. => Could we not create a `PeftModelMixin` class so that `peft` operates more under the hood?
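As a point of reference for the discussion above, a minimal sketch of wrapping an already-instantiated transformers model with PEFT today; the adapter repo is the one used in the PR description, and this manual wrapping is exactly the pattern a `load_adapter`-style entry point on transformers models would replace.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
# Attach the adapter to the already-instantiated base model.
model = PeftModel.from_pretrained(base, "ybelkada/opt-350m-lora")
```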
transformers
24,826
closed
Remove unused code in GPT-Neo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fix [#24820](https://github.com/huggingface/transformers/issues/24820#issuecomment-1635744675) by removing `else` statement in `GPTNeoForCausalLM`. ## Who can review? @gante Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-14-2023 16:58:54
07-14-2023 16:58:54
@sgugger context: this change was discussed in https://github.com/huggingface/transformers/issues/24820 -- this model is the only one that deletes `position_ids` in this function, and there is no apparent reason for it. This is a mostly unused code path and has been part of the original GPT-Neo commit.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24826). All of your documentation changes will be reflected on that endpoint.
transformers
24,825
closed
deprecate `sharded_ddp` training argument
# What does this PR do? This PR deprecates the `sharded_ddp` training argument, since ShardedDDP has been [upstreamed to PyTorch](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/), so users can use the `fsdp` training parameter instead. --- ~~According to fairscale ([see](https://github.com/facebookresearch/fairscale)), PyTorch FSDP is the recommended method for scaling to large NN models. I think Sharded-DDP is dead and it's time to say goodbye to this library.~~ ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-14-2023 13:48:06
07-14-2023 13:48:06
> That makes sense thanks! Note that while removing it entirely from the doc is fine, we can't abruptly remove it from the library like this. We will need to properly deprecate it first and in two-tree minor versions we can fully remove it. @sgugger Thanks for pointing this out. I have rolled back the code changes and added a warning that `sharded_ddp` will be deprecated. Could you take a second look at this PR? <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24825). All of your documentation changes will be reflected on that endpoint.
transformers
24,824
closed
Check models used for common tests are small
# What does this PR do? This is the first PR in a series that will aim at reducing the time spent on the tests (and the corresponding cost ๐Ÿ˜… ). A first analysis of the slowest tests shows that the common tests sometimes use real-life models instead of tiny ones. This PR adds a check that the size of the model for common tests is not bigger than 1M parameters (which is a wide bar; BERT has a version with 55k parameters for its common tests, for instance). To avoid making the PR too long, the new test is skipped for most of the failures; only the DETR variants, BridgeTower, Canine and CLAP are treated in this PR. For the DETR variants, a full pretrained ResNet-50 was used as the backbone, which is replaced by a tiny random ResNet. The following models will be treated in follow-up PRs: * CTRL * CVT * DETA * DPT * DPT Hybrid * EfficientNet * Encodec * ESM * Flava * Git * GPTSan-Japanese * Graphormer * LayoutLM * LayoutLMv2 * LeViT * Mask2Former * Maskformer * MobileViT * MobileVit2 * OneFormer * Perceiver * SegFormer * SpeechT5 * SwiftFormer * TableTransformer * TimmBackbone * TVLT * UperNet * VideoMAE * ViT-MAE * ViViT
07-14-2023 13:27:18
07-14-2023 13:27:18
_The documentation is not available anymore as the PR was closed or merged._
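For illustration, a tiny configuration in the spirit of the common-test models described in the PR; the exact sizes are made up, the point is only that the parameter count lands far below the 1M bar.

```python
from transformers import BertConfig, BertModel

tiny_config = BertConfig(
    vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37
)
model = BertModel(tiny_config)
print(sum(p.numel() for p in model.parameters()))  # a few tens of thousands of parameters, well under 1M
```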
transformers
24,823
open
change nn.ReLU to torch.relu in ACT_FNS for OpenAI
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # 24821 ## Before submitting - [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [*] https://github.com/huggingface/transformers/issues/24821 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> --- I ran the following test command after installing pytorch: ```bash python -m pytest -s -v ./tests/models/openai/test_modeling_openai.py # 68 passed, 38 skipped, 15 warnings in 39.26s ``` I figured that's good enough.
07-14-2023 12:40:45
07-14-2023 12:40:45
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24823). All of your documentation changes will be reflected on that endpoint.
transformers
24,822
closed
Generate: sequence bias can handle same terminations
# What does this PR do? Fixes the issue raised by @stas00 [here](https://github.com/huggingface/transformers/pull/24334#issuecomment-1631324670). In a nutshell, when I designed `SequenceBiasLogitsProcessor`, I've committed the fallacy of early optimization -- the resulting solution was not compatible with biasing sequences that had the same termination. This PR opts for a simpler solution that is more inclusive (but probably slower). Existing tests and docstring example passing ๐Ÿ‘
07-14-2023 12:32:55
07-14-2023 12:32:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @gante, Thank you for trying to fix it. The failure is still there with this PR's branch. ``` File "/mnt/nvme0/code/huggingface/m4-master/m4/evaluation/launch.py", line 143, in <module> main(args) File "/mnt/nvme0/code/huggingface/m4-master/m4/evaluation/launch.py", line 97, in main score = evaluator(task, accelerator, model, args) File "/mnt/nvme0/code/huggingface/m4-master/m4/evaluation/evaluators/in_contexter.py", line 263, in in_contexter metric = task.add_batch_metric(metric, **kwargs) File "/mnt/nvme0/code/huggingface/m4-master/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py", line 336, in add_batch_metric generated_tokens = self.generate_tokens(**kwargs) File "/mnt/nvme0/code/huggingface/m4-master/m4/models/vgpt2/evaluation_open_ended_vqa_in_context_vgpt2.py", line 312, in generate_tokens generated_tokens = unwrapped_model.generate( File "/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/utils.py", line 1613, in generate return self.beam_search( File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/utils.py", line 2930, in beam_search next_token_scores_processed = logits_processor(input_ids, next_token_scores) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/logits_process.py", line 92, in __call__ scores = processor(input_ids, scores) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/logits_process.py", line 618, in __call__ self._prepare_bias_variables(scores) File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/generation/logits_process.py", line 674, in _prepare_bias_variables raise ValueError( ValueError: Setting a bias on sequences that share a common token termination is not yet supported. Please open an issue if you see this error message (after checking that it doesn't already exist). ```<|||||>@stas00 that is odd -- the exception was entirely removed ๐Ÿ‘€ On my end, I can't reach the exception as before. Can I kindly request to try again? ๐Ÿค— (and, if it still fails, may I have some way to reproduce it?)<|||||>ah, my bad. somehow I got the wrong branch I think. I have retested with the latest of this PR and the problem is no more. Thank you for the fix, Joao oh, I know. I was on the branch of the original PR #24334 - hence the confusion
transformers
24,821
open
OpenAIGPTModel raises exception with `afn="Relu"`
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers.models.openai import OpenAIGPTConfig, OpenAIGPTModel OpenAIGPTModel(OpenAIGPTConfig(n_vocab=1, n_embed=1, afn="relu"))(torch.eye(1, dtype=int)) # raises: # AttributeError: 'ReLU' object has no attribute 'size' ``` ### Expected behavior I would expect no error to be raised and the calculation to be performed. For example, the following code behaves as expected: ```python OpenAIGPTModel(OpenAIGPTConfig(n_vocab=1, n_embed=1, afn="gelu"))(torch.eye(1, dtype=int)) ```
07-14-2023 12:27:07
07-14-2023 12:27:07
Hey, thanks for reporting and providing a very efficient snippet! Indeed, the ReLU is not initialized, hence the error. Reviewing your PR now
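A tiny, runnable illustration of why the table entry fails: storing the `nn.ReLU` class means that calling it builds a module (the tensor lands in the `inplace` slot) instead of applying ReLU to the tensor, which is where the downstream `'ReLU' object has no attribute 'size'` error comes from.

```python
import torch
from torch import nn

x = torch.randn(2, 3)

broken = nn.ReLU      # the class itself, as the old ACT_FNS entry stored it
out = broken(x)       # constructs a ReLU module rather than returning a tensor
print(type(out))      # <class 'torch.nn.modules.activation.ReLU'>

fixed = torch.relu    # the replacement: a plain tensor -> tensor function
print(fixed(x).shape) # torch.Size([2, 3])
```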
transformers
24,820
closed
Extra else in GPTNeo prepare_inputs_for_generation
https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L704 There shouldn't be an extra `else` here. Sometimes users want to pass in customized position_ids and attention_masks; however, this `else` prevents that.
07-14-2023 08:54:09
07-14-2023 08:54:09
My bad. I think one should override the `prepare_inputs_for_generation` method if he wants to customize position_ids. The current code is okay for general use cases. But according to recent code of LLaMA https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/llama/modeling_llama.py#L740, the `else` should be deleted anyway.<|||||>Hi @namespace-Pt, thanks for raising this issue. Yes, the great thing about open source code is anyone can build upon and adapt it to their needs! Wrt comparing to the code with Llama, these are different models, and so there might be different assumptions about the inputs for generation; sometimes, even if the assumptions are the same, one model was implemented at a different time and the logic differs for backwards compatibility reasons; and sometimes it's just an oversight on our part :) cc @gante for reference :) <|||||>@namespace-Pt ๐Ÿ‘‹ I think the `else` is indeed not needed. Would you like to open a PR to fix it? ๐Ÿค— <|||||>Create pull request in https://github.com/huggingface/transformers/pull/24826
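A sketch of the override suggested in the comments, for anyone who needs caller-supplied position_ids before the fix lands; the handling of the cached-generation step is an assumption and may need adapting.

```python
from transformers import GPTNeoForCausalLM

class GPTNeoWithCustomPositions(GPTNeoForCausalLM):
    def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
        model_inputs = super().prepare_inputs_for_generation(
            input_ids, past_key_values=past_key_values, **kwargs
        )
        # Re-attach caller-supplied position_ids instead of letting them be dropped.
        position_ids = kwargs.get("position_ids")
        if position_ids is not None:
            if past_key_values is not None:
                position_ids = position_ids[:, -1:]
            model_inputs["position_ids"] = position_ids
        return model_inputs
```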
transformers
24,819
closed
Is model training with torch_dtype=torch.bfloat16 possible?
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.10.184-174.730.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction From #23165 and specifically this comment https://github.com/huggingface/transformers/issues/23165#issuecomment-1536439098 of @sgugger , it seems that we should not set `torch_dtype` during training. However I think this is possible. See for example the following script ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_name = "gpt2" input = "Hello world!" tokenizer = AutoTokenizer.from_pretrained(model_name) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device) optimizer = torch.optim.Adam(model.parameters(), lr=5e-5) input_ids = tokenizer.encode(input, return_tensors="pt").to(device) output = model(input_ids, labels=input_ids) output.loss.backward() optimizer.step() print(model.get_input_embeddings().weight.grad) ``` I think gpt2 is trained using fp32 but I can load it in bfloat16 and train it (or at least get a gradient with bf16). Thus I wonder if I misunderstood something. I have another project running to train LLaMA using bfloat16 (essentially using [`run_clm.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) from the official repo with `--torch_dtype=bfloat16 --bf16` command line flag), so if it turns out that I should not use `torch_dtype` for training then it means that I need to stop the experiments and rerun a lot of things :( Thank you. ### Expected behavior I am not sure if we should use `torch_dtype` for training.
07-14-2023 07:39:54
07-14-2023 07:39:54
You can try to train in full bfloat16 but it's not as stable as mixed precision bfloat16 training. When running `run_clm` with `--torch_dtype=bfloat16` you train in full bfloat16 so the flag `--bf16` (mixed precision training) is not really useful.<|||||>@sgugger thank you for the info!
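To make the distinction in the reply above concrete, a minimal sketch contrasting the two routes; the gpt2 checkpoint is just the one from the issue.

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

# Mixed-precision route: weights stay in float32, autocast runs forward/backward in bfloat16.
mixed_args = TrainingArguments(output_dir="out-mixed", bf16=True)

# Full-bf16 route: the weights themselves are loaded in bfloat16, so `bf16=True` adds little on top.
full_bf16_model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
```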
transformers
24,818
closed
Add resume_from_checkpoint support for PEFT models
### Feature request Currently, trainer.train(resume_from_checkpoint=resume_from_checkpoint) cannot load adapter weights to resume training. ### Motivation Currently, trainer.train(resume_from_checkpoint=resume_from_checkpoint) cannot load adapter weights to resume training. ### Your contribution None
07-14-2023 06:45:02
07-14-2023 06:45:02
Hi @506610466, thanks for raising an issue. Could you please follow the issue template and provide all the information requested, including the running environment, a minimal code snippet to reproduce the error, full traceback and expected behaviour?
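Until resuming is supported natively, one workaround pattern is to re-attach the adapter saved in the checkpoint folder and continue training. This is only a sketch: `base_model`, `training_args`, `train_dataset`, and the checkpoint path are stand-ins, and it assumes the checkpoint folder actually contains the saved adapter files.

```python
from peft import PeftModel
from transformers import Trainer

model = PeftModel.from_pretrained(base_model, "output/checkpoint-500", is_trainable=True)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```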
transformers
24,817
open
Fix Dropout Implementation in Graphormer
This commit corrects the dropout implementation in Graphormer, aligning it with the original implementation (https://github.com/microsoft/Graphormer) and improving performance. Specifically: 1. The `attention_dropout` variable, intended for use in GraphormerMultiheadAttention, was defined but not used. This has been corrected to use `attention_dropout` instead of the regular `dropout`. 2. The `activation_dropout` for the activations in the feed-forward layers was missing. Instead, the regular `dropout` was used. This commit adds `activation_dropout` to the feed-forward layers and to the GraphormerConfig including documentation. These changes ensure the dropout implementation matches the original Graphormer and delivers empirically better performance. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @clefourrier
07-14-2023 04:42:11
07-14-2023 04:42:11
Hi @clefourrier and others, this is my first time contributing to Hugging Face. In the course of my thesis I found some improvements/fixes to your current Graphormer implementation, including this one, which is quite simple but has a big impact and should be easy to review. I plan to make some more pull requests with more performance-related changes to speed Graphormer up, and possibly also to add the 3D version, in the following days/weeks. Feel free to reach out. Best wishes, Alexander
transformers
24,816
open
Proper way to monkey patch a customized model not in transformers?
### Feature request Hi, nowadays more and more models are emerging, many of them hard to merge into transformers, but we have more and more customized patches such as condensing rotary embeddings, xformers attention, etc. What is the best way to monkey patch a customized model that is not inside the transformers library but is just loaded with AutoModel? ### Motivation Need guidance. ### Your contribution Currently none.
07-14-2023 03:34:49
07-14-2023 03:34:49
Hi @lucasjinreal, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.<|||||>@amyeroberts Please help me find someone to answer this issue; I posted several on the forum and none of them got a response. Thank you.<|||||>@lucasjinreal Another place to ask questions like this is in [our discord](https://discord.com/invite/hugging-face-879548962464493619).
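Since the question is about the general pattern rather than any specific model, here is a minimal monkey-patching sketch, shown on gpt2 only for concreteness; the patched body is a placeholder for custom attention/rotary/xformers logic.

```python
import types
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

def patched_attn_forward(self, *args, **kwargs):
    # ... customized attention / rotary / xformers logic would go here ...
    return self._orig_forward(*args, **kwargs)

for block in model.transformer.h:
    block.attn._orig_forward = block.attn.forward
    block.attn.forward = types.MethodType(patched_attn_forward, block.attn)
```

The same pattern applies to models loaded with `trust_remote_code=True`; only the module attribute paths change.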
transformers
24,815
closed
A transformers-like library for Prompts or Agents?
### Feature request Add advanced Prompt or Agent methods as models to the transformers library, or build a new advanced Prompt or Agent library with an architecture like the transformers library. ### Motivation There are many Prompt/Agent libraries like: https://langchain.com/ https://github.com/yoheinakajima/babyagi https://github.com/Significant-Gravitas/Auto-GPT, but they all have too many abstractions in code or are just frameworks for automating tasks. The transformers/diffusers libraries have simple abstractions. If there were an advanced Prompt or Agent library like transformers/diffusers, it would be much better for researchers to do research work, and I think Hugging Face is best placed to build this library. ### Your contribution Currently none.
07-14-2023 02:48:08
07-14-2023 02:48:08
Hi @ghosthamlet, Have you checked out Agents and Tools from Hugging Face? https://huggingface.co/docs/transformers/v4.30.0/en/transformers_agents<|||||>@amyeroberts Thanks, it looks great. I found that the code is all in src/transformers/tools; it seems like it is not as flexible as transformers. If someone wants to integrate a new agent like `Reflexion: Language Agents with Verbal Reinforcement Learning` https://arxiv.org/abs/2303.11366, can it be as easy as integrating a transformers model?<|||||>@ghosthamlet Easiness is subjective, so it would depend on what you do and don't find easy with the transformers library :) This is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports. However, if it's a request for the model to be implemented, could you open a separate issue with the feature request?<|||||>@amyeroberts Thanks, I understand now. I will create a separate issue.
transformers
24,814
closed
Is there a way to use Blip2Model for Zero-Shot Classification?
Hi there, I am attempting to adapt the Blip2Model for a zero-shot classification task as follows: - N text sentences/classes --> x = N text embeddings - 1 test image -> y = 1 image embedding - soft-max(dot-product(x, y)) to get the probabilities over classes This is my solution so far: ``` def get_img_embedding(images): """ Turn a list of image inputs into tensor of embedding vectors images should be of shape (batch_size, channels, height, width) """ image_tensors = blip2model.preproc([ Image.open(i.path) # type: ignore for i in images], return_tensors='pt') # Dict with 'pixel_values' entry of size batch_size, C, H, W image_tensors = image_tensors.to(self.device, torch.float16) # type: ignore # pass images through the vision model and then the qformer to get query-conditional image features query_outputs = blip2model.get_qformer_features(**image_tensors) # tuple (last_hidden_state, pooler_output) query_output = query_outputs['pooler_output'] # (batch_size, hidden_size) # project query-conditional image features into language space image_features = blip2model.language_projection(query_output) # shape (batch_size, hidden_size) image_features /= image_features.norm(dim=-1, keepdim=True) return image_features def get_text_embedding(texts): """ Turn a list of text inputs into tensor of embedding vectors. texts is a list of strings to embed. """ text_tokens = blip2model.text_tokenizer(texts, padding=True, return_tensors='pt') text_tokens = text_tokens.to(self.device) text_outputs = blip2model.get_text_features(**text_tokens, output_hidden_states=True) # type: ignore text_features = text_outputs['hidden_states'][-1][:, 0, :] # extract [CLS] embedding from last hidden state, shape (batch_size, hidden_size) text_features /= text_features.norm(dim=-1, keepdim=True) return text_features ``` Then I would take the dot product between the two. Am I on the right track? Thanks
07-13-2023 23:48:47
07-13-2023 23:48:47
Hi @danielamassiceti, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.<|||||>Thanks, will repost it there!
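For the final step sketched in the issue, assuming `image_features` and `text_features` come from the two functions above (both L2-normalised), the zero-shot probabilities are a softmax over the similarities; the temperature of 0.07 is an assumption borrowed from CLIP rather than anything specific to BLIP-2.

```python
import torch

logits = image_features @ text_features.T      # (num_images, num_classes) cosine similarities
probs = torch.softmax(logits / 0.07, dim=-1)   # temperature is an assumed hyperparameter
print(probs.argmax(dim=-1))                    # predicted class index per image
```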
transformers
24,813
closed
Replacing agent image_qa tool with InstructBLIP
### System Info Hi, I'm attempting to use InstructBLIP for image QA: ```python from transformers import HfAgent, load_tool from diffusers.utils import load_image agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png") agent.toolbox["image_qa"] = load_tool(task_or_repo_id="image-question-answering", model_repo_id="Salesforce/instructblip-vicuna-13b") agent.run("what colors are in this image?", image=image) ``` However this gives me the error: > ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForVisualQuestionAnswering. > Model type should be one of ViltConfig. I'm not sure if I'm doing this incorrectly, it's unsupported, or there's a bug. ### Who can help? @amyeroberts @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Code above ### Expected behavior Was hoping that InstructBLIP would answer the question instead of VilT
07-13-2023 22:34:38
07-13-2023 22:34:38
Hi @austinmw, thanks for raising this issue! This is because the model is being loaded [using the AutoModelForVisualQuestionAnswering](https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/tools/image_question_answering.py#L39C2-L39C2) class, [which only has Vilt listed as a compatible model](https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/models/auto/modeling_auto.py#L818). BLIP can be loaded using `AutoModelForVision2Seq` - listed here with [other models like Pix2Struct](https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/models/auto/modeling_auto.py#L543C1-L543C37). The reason for this distinction is that the 'answers' from these models are obtained in two different ways. ViltForQuestionAnswering is really a classifier, and predicts the class most likely to be the answer to the question. You can see an example of the [categories here](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa/blob/d0a1f6ab88522427a7ae76ceb6e1e1e7b68a1d08/config.json#L9). Whereas BLIP generates an answer using causal language modeling. If you wish to use BLIP, you can easily define your own `ImageQuestionAnsweringTool` which you can modify to suit the behaviour (and generation strategy) you desire e.g.: ```python import requests from PIL import Image from transformers import AutoModelForVision2Seq, AutoProcessor from transformers.tools import PipelineTool from transformers.utils import requires_backends class ImageQuestionAnsweringTool(PipelineTool): default_checkpoint = "Salesforce/blip2-opt-2.7b" description = ( "This is a tool that answers a question about an image. It takes an input named `image` which should be the " "image containing the information, as well as a `question` which should be the question in English. It " "returns a text that is the answer to the question." ) name = "image_qa" pre_processor_class = AutoProcessor model_class = AutoModelForVision2Seq inputs = ["image", "text"] outputs = ["text"] def __init__(self, *args, **kwargs): requires_backends(self, ["vision"]) super().__init__(*args, **kwargs) def encode(self, image, question: str): return self.pre_processor(image, question, return_tensors="pt") def forward(self, inputs): outputs = self.model.generate(**inputs, max_new_tokens=50) return outputs def decode(self, outputs): return self.pre_processor.batch_decode(outputs, skip_special_tokens=True)[0] url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) question = "Question: what is the colour of the sofa? Answer: " tool = ImageQuestionAnsweringTool() response = tool(image=image, question=question) print(response) ``` cc @LysandreJik<|||||>Ah I see, thanks, and I really appreciate the example you provided!! ๐Ÿ™๐Ÿป One follow-up question though, how can I set `load_in_4bit=True, torch_dtype=torch.float16` for the model_class?<|||||>@austinmw You can pass in an [instantiated model](https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/agent#transformers.PipelineTool.model) when instantiating the tool.
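On the follow-up about quantisation: reusing the custom `ImageQuestionAnsweringTool` from the comment above, a sketch of passing an already-quantised model into the tool; the checkpoint and kwargs mirror the question and are untested here.

```python
import torch
from transformers import AutoModelForVision2Seq

model = AutoModelForVision2Seq.from_pretrained(
    "Salesforce/blip2-opt-2.7b", load_in_4bit=True, torch_dtype=torch.float16, device_map="auto"
)
tool = ImageQuestionAnsweringTool(model=model)  # the tool accepts an instantiated model
```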
transformers
24,812
closed
Fixing double `use_auth_token.pop` (preventing private models from being visible).
# What does this PR do? Should fix: https://github.com/huggingface/transformers/issues/14334#issuecomment-1634527833 Repro: Have a private repo, with `vocab.json` (spread out files for the tokenizer) and use `AutoTokenizer.from_pretrained(..., use_auth_token="token")`. Not sure if we already have private visibility tests to maybe add/fix some so we can detect this in our suite. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-13-2023 21:02:01
07-13-2023 21:02:01
_The documentation is not available anymore as the PR was closed or merged._
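For anyone reproducing, a minimal sketch of the failing call described above; the repo id and token are placeholders, and the comment paraphrases the PR title rather than quoting the code:

```python
from transformers import AutoTokenizer

# A private repo that stores the tokenizer as individual files (vocab.json, merges.txt, ...)
# rather than a single tokenizer.json; both values below are placeholders.
tokenizer = AutoTokenizer.from_pretrained(
    "my-org/my-private-model",
    use_auth_token="hf_xxx",
)
# Before the fix, `use_auth_token` was popped from the kwargs twice, so later Hub
# requests went out without the token and the private files appeared to be missing.
```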
transformers
24,811
closed
Unable to run compute_transition_scores : facing 'CodeT5pConfig' object has no attribute 'vocab_size' error
### System Info

Hello Team, your help is appreciated in the below issue. I am running the below code snippet on Google Colab.

- `transformers` version: 4.30.2
- Platform: Linux-5.15.109+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help?

@gante

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
import torch  # needed for torch.cuda, torch.float16 and torch.no_grad below
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
beam_size = 5
max_len = 500

model = "Salesforce/codet5p-2b"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForSeq2SeqLM.from_pretrained(model,
                                              trust_remote_code=True,
                                              torch_dtype=torch.float16,
                                              low_cpu_mem_usage=True)
model.eval()
model.to(DEVICE)

prompt = "\n\n\ndef sum_squares(lst):\n \"\"\"\"\n This function will take a list of integers. For all entries in the list, the function shall square the integer entry if its index is a \n multiple of 3 and will cube the integer entry if its index is a multiple of 4 and not a multiple of 3. The function will not \n change the entries in the list whose indexes are not a multiple of 3 or 4. The function shall then return the sum of all entries. \n \n Examples:\n For lst = [1,2,3] the output should be 6\n For lst = [] the output should be 0\n For lst = [-1,-5,2,-1,-5] the output should be -126\n \"\"\"\n"
prompt = prompt.replace(' ', '\t')
prompt_batch_decoder = [prompt]

encoding_decoder = tokenizer(prompt_batch_decoder, return_tensors="pt", truncation=True, max_length=max_len).to(DEVICE)
input_ids = encoding_decoder['input_ids']

with torch.no_grad():
    gen_tokens = model.generate(**encoding_decoder,
                                decoder_input_ids=encoding_decoder['input_ids'],
                                max_length=max_len,
                                num_beams=beam_size,
                                do_sample=False,
                                num_return_sequences=1,
                                return_dict_in_generate=True,
                                output_scores=True,
                                early_stopping=True)

transition_scores = model.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores, gen_tokens.beam_indices, normalize_logits=True)
```

### Expected behavior

transition_scores are computed
07-13-2023 19:38:22
07-13-2023 19:38:22
#### Facing the below error:

```
AttributeError                            Traceback (most recent call last)
[<ipython-input-22-821b8f7c4aac>](https://localhost:8080/#) in <cell line: 1>()
----> 1 transition_scores = model.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores,gen_tokens.beam_indices, normalize_logits=True)

1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py](https://localhost:8080/#) in __getattribute__(self, key)
    259         if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
    260             key = super().__getattribute__("attribute_map")[key]
--> 261         return super().__getattribute__(key)
    262
    263     def __init__(self, **kwargs):

AttributeError: 'CodeT5pConfig' object has no attribute 'vocab_size'
```

I checked the config file of the CodeT5p-2B model at https://huggingface.co/Salesforce/codet5p-2b/blob/main/config.json, and it has a vocab_size attribute; I believe this is the right config file to check. I am not sure what is causing this error. Could you please help?<|||||>Hey @MansiShinde, the issue is that the `CodeT5pConfig` class defined [here](https://huggingface.co/Salesforce/instructcodet5p-16b/blob/70bb08afa3d6f081b347e67752ca8e031a35ac4a/configuration_codet5p.py#L71-L90) does not have a `vocab_size` attribute but rather `encoder` and `decoder` attributes of type `CodeT5pModuleConfig`, which then holds the vocab size attribute. You should be able to calculate transition scores by using the model's decoder like this:
```
transition_scores = model.decoder.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores, gen_tokens.beam_indices, normalize_logits=True)
```<|||||>> Hey @MansiShinde, the issue is that the `CodeT5pConfig` class defined [here](https://huggingface.co/Salesforce/instructcodet5p-16b/blob/70bb08afa3d6f081b347e67752ca8e031a35ac4a/configuration_codet5p.py#L71-L90) does not have a `vocab_size` attribute but rather `encoder` and `decoder` attributes of type `CodeT5pModuleConfig`, which then holds the vocab size attribute. You should be able to calculate transition scores by using the model's decoder like this:
>
> ```
> transition_scores = model.decoder.compute_transition_scores(gen_tokens.sequences, gen_tokens.scores, gen_tokens.beam_indices, normalize_logits=True)
> ```

Ohh okay got it. Thanks for the help @fadynakhla!! <|||||>Thank you for jumping in, @fadynakhla ๐Ÿš€
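As a follow-up, once `transition_scores` has been computed via the decoder as above, a rough sketch of inspecting it token by token. This is only a sketch: the slicing assumes the generated `sequences` start with the decoder prompt passed via `decoder_input_ids`, so adjust it if your setup differs.

```python
import numpy as np

prompt_len = encoding_decoder["input_ids"].shape[1]
generated_tokens = gen_tokens.sequences[:, prompt_len:]

for token, score in zip(generated_tokens[0], transition_scores[0]):
    # `score` is a log-probability because normalize_logits=True was used above
    print(f"{tokenizer.decode(token)!r:>20} | logprob {score.item():6.3f} | p {np.exp(score.item()):.2%}")
```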
transformers
24,810
closed
Use _BaseAutoModelClass's register method
# What does this PR do? Switching `_BaseAutoModelClass`'s `from_pretrained` and `from_config` to use the register classmethod that it defines rather than using the `_LazyAutoMapping` register method directly. This makes use of the additional consistency check within `_BaseAutoModelClass`'s register method. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? Yes - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Discussed briefly in [#24737 ](https://github.com/huggingface/transformers/issues/24737) - [ ] Did you make sure to update the documentation with your changes? No public methods/classes changed - [ ] Did you write any new necessary tests? None necessary ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-13-2023 17:56:11
07-13-2023 17:56:11
> Thanks a lot!

Glad to help! Also, what are your thoughts on adding some type hinting to the `_model_mapping` variable, e.g.
```
class _BaseAutoModelClass:
    # Base class for auto models.
    _model_mapping: Optional["_LazyAutoMapping"] = None
```
instead of
```
class _BaseAutoModelClass:
    # Base class for auto models.
    _model_mapping = None
```
It took quite some time during our discussion yesterday for me to track down what type of object `_model_mapping` was.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This is all internal code, so we don't really document types as rigorously as in public-facing classes :-)<|||||>Sounds good, just thought I'd ask
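For context, a rough sketch of the difference this PR describes, paraphrased from memory rather than quoted from the source:

```python
class _BaseAutoModelClass:
    # Base class for auto models (sketched for illustration).
    _model_mapping = None

    @classmethod
    def register(cls, config_class, model_class):
        # The consistency check that calling `cls._model_mapping.register(...)`
        # directly would skip (error message paraphrased):
        if hasattr(model_class, "config_class") and model_class.config_class != config_class:
            raise ValueError(
                "The model class you are passing has a `config_class` attribute that is not "
                "consistent with the config class you passed."
            )
        cls._model_mapping.register(config_class, model_class)
```

With that in place, `from_config` and `from_pretrained` can call `cls.register(config.__class__, model_class)` instead of reaching into `cls._model_mapping` directly.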
transformers
24,809
closed
Fix typo 'submosules'
# What does this PR do? Fixes a one-character typo in the docs for large-model loading. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
07-13-2023 14:56:26
07-13-2023 14:56:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,808
closed
Remove Falcon docs for the release until TGI is ready
Make sure we're not advertising docs for the model until we're ready to support it! cc @sgugger
07-13-2023 14:40:45
07-13-2023 14:40:45
Fixed, my bad!<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
24,807
closed
Run hub tests
# What does this PR do? It looks like the hub tests are currently not running :grimacing: This is because the env variable for staging tests is not set to `True`, an oversight on my part from when we moved the CircleCI setup to a dynamic config. Hopefully nothing is broken...
07-13-2023 14:29:41
07-13-2023 14:29:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Fixed the tests that did not work, so can merge :-)
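For context, a minimal sketch of the kind of switch such a staging variable typically controls in hub tests; the variable and endpoint names here are assumptions for illustration, not read from the CI config:

```python
import os

# Hypothetical staging gate: when the flag is truthy, hub tests talk to the staging
# endpoint instead of the production Hub.
_staging = os.environ.get("HUGGINGFACE_CO_STAGING", "false").lower() in ("1", "true", "yes")
HUB_ENDPOINT = "https://hub-ci.huggingface.co" if _staging else "https://huggingface.co"
print(HUB_ENDPOINT)
```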
transformers
24,806
closed
Add accelerate version in transformers-cli env
# What does this PR do? What it says on the tin. Now that Trainer uses accelerate, I and others often have to ask for the accelerate version in issues. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-13-2023 14:08:11
07-13-2023 14:08:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I've added the config too, matching the accelerate logic: a specific config file can be passed in, or the default config is read if it exists. LMKWYT :)
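For reference, a rough sketch of what reporting the accelerate version and config could look like inside the env command; the import paths follow accelerate's public modules, but treat this as a sketch rather than the exact PR implementation:

```python
import os

accelerate_version = "not installed"
accelerate_config = None
try:
    import accelerate
    from accelerate.commands.config import default_config_file, load_config_from_file

    accelerate_version = accelerate.__version__
    # Mirror accelerate's own behaviour: read the default config file if it exists.
    if os.path.isfile(default_config_file):
        accelerate_config = load_config_from_file(default_config_file).to_dict()
except ImportError:
    pass

print(f"- Accelerate version: {accelerate_version}")
print(f"- Accelerate config: {accelerate_config}")
```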
transformers
24,805
closed
Fix MobileVitV2 doctest checkpoint
# What does this PR do? The doctests CI currently fails on the MobileVitV2 tests: the doctest was copied from the V1 mobilevit, and the checkpoints didn't match. This PR removes the copied-from marker, since it only covers some very simple and standard model head logic that was copied. Interestingly, I'd expect the [example for MobileVit V1](https://github.com/huggingface/transformers/blob/21946a8cf4a273f35ac2f3a53edafc398699f527/src/transformers/models/mobilevit/modeling_mobilevit.py#L1027) to fail, as it doesn't import torch. [It is included in the doc tests](https://github.com/huggingface/transformers/blob/21946a8cf4a273f35ac2f3a53edafc398699f527/utils/documentation_tests.txt#L300). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-13-2023 11:34:16
07-13-2023 11:34:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>> So in the MobileVitV2 doc example, torch has to be imported?

No, not to have the test pass. I added it because the snippet wouldn't run if you copy-pasted it from the docs. I've added it to MobileVit v1 too now, for the same reason :)
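To illustrate the point about copy-pasteable examples, a minimal self-contained image-classification snippet of the kind the doc example should be; the checkpoint name here is used only as an illustration, not quoted from the actual docstring:

```python
import requests
import torch  # without this line, copy-pasting the snippet fails at torch.no_grad()
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"  # illustrative checkpoint
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```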