url (stringlengths 66-66) | text (stringlengths 141-41.9k) | num_labels (sequencelengths 1-8) | arr_labels (sequencelengths 82-82) | labels (sequencelengths 1-8) |
---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/34715 |
TITLE
The error caused by the missing espeak library
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
**System Info**
- `transformers` version: 4.46.2
- Platform: Linux-5.4.0-198-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: Tesla V100-SXM2-16GB
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. **Docker Image**: Use the Docker image `pytorch/pytorch:2.4.0-cuda12.4-cudnn9-devel`.
2. **Install Requirements**:
```
transformers
other necessary packages
```
3. **Run the First Test Code**:
```python
from transformers.models.auto.tokenization_auto import AutoTokenizer as A
b = A.from_pretrained("facebook/wav2vec2-xlsr-53-espeak-cv-ft", cache_dir=None, force_download=True, local_files_only=False, revision='main')
print("b", b)
# b False
```
4. **After Analysis**, run the following test code:
```python
from transformers.models.wav2vec2_phoneme.tokenization_wav2vec2_phoneme import (
Wav2Vec2PhonemeCTCTokenizer,
)
kwargs = {
"cache_dir": None,
"force_download": True,
"local_files_only": False,
"revision": "main",
"_from_auto": True,
"_commit_hash": None,
}
A = Wav2Vec2PhonemeCTCTokenizer("facebook/wav2vec2-xlsr-53-espeak-cv-ft", **kwargs)
```
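Note: the failure ultimately comes from the phonemizer backend requiring a system-level `espeak`/`espeak-ng` binary. A minimal pre-flight check, written as a sketch (not part of transformers), could look like this:
```python
import shutil

# Sketch of a pre-flight check before loading the phoneme tokenizer; assumes the
# espeak binary is looked up on PATH (installable with e.g. `apt-get install espeak-ng`).
if shutil.which("espeak") is None and shutil.which("espeak-ng") is None:
    raise RuntimeError(
        "No espeak/espeak-ng binary found on PATH; install it before loading "
        "facebook/wav2vec2-xlsr-53-espeak-cv-ft"
    )
```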
### Expected behavior
In step 3, the tokenizer is not loaded correctly (the call returns `False` instead of a tokenizer instance).
Step 4 then throws the following error:
```
Traceback (most recent call last):
File "/workspace/test.py", line 14, in <module>
A = Wav2Vec2PhonemeCTCTokenizer(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py", line 136, in __init__
self.init_backend(self.phonemizer_lang)
File "/opt/conda/lib/python3.11/site-packages/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py", line 185, in init_backend
self.backend = BACKENDS[self.phonemizer_backend](phonemizer_lang, language_switch="remove-flags")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/phonemizer/backend/espeak/espeak.py", line 45, in __init__
super().__init__(
File "/opt/conda/lib/python3.11/site-packages/phonemizer/backend/espeak/base.py", line 39, in __init__
super().__init__(
File "/opt/conda/lib/python3.11/site-packages/phonemizer/backend/base.py", line 77, in __init__
raise RuntimeError( # pragma: nocover
RuntimeError: espeak not installed on your system
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33665 |
TITLE
[Tests] Diverse Whisper fixes
COMMENTS
12
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
There are a lot of pending failing tests for Whisper. This PR addresses some of the issues:
1. #31683 and #31770 mentioned out-of-range word-level timestamps. This happens because `decoder_input_ids` were once `forced_input_ids`, which had an impact on `beam_indices` (a toy illustration follows after this list).
`beam_indices` has the length of `decoder_input_ids + potentially_generated_ids` but doesn't take `decoder_input_ids` into account when keeping track of the indices. In other words, `beam_indices[0]` is really the beam index of the first generated token, not of `decoder_input_ids[0]`.
2. The Flash-Attention 2 attention mask was causing an issue
3. The remaining work is done on the modeling tests. Note that some of these tests were failing for straightforward reasons - e.g. the output was a dict - and are actually still failing, but no longer for those straightforward reasons. Debugging will be easier though.
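To make point 1 concrete, here is a toy illustration of the offset (illustrative only, not the actual fix):
```python
import torch

# Toy illustration of the beam_indices offset described in point 1 above.
decoder_input_ids = torch.tensor([[50258, 50259, 50359]])  # 3 forced/prompt tokens
beam_indices = torch.tensor([[0, 1, 1, 0]])                # one entry per *generated* token

prompt_len = decoder_input_ids.shape[-1]
# beam_indices[0, 0] belongs to the first generated token, i.e. absolute position
# `prompt_len` in the full sequence, not position 0 -- word-level timestamp code
# that forgets this offset indexes out of range.
for gen_pos, beam in enumerate(beam_indices[0].tolist()):
    print(f"absolute position {prompt_len + gen_pos}: beam {beam}")
```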
**Note:** With #33450 and this, we're down from 29 failing tests to 17
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/34622 |
TITLE
Question on OWLv2 Model Input Size Flexibility in Hugging Face
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
I noticed that in Google Research's OWLv2 implementation, the model can accept images of varying sizes, as it allows the input image size to be adjusted. Does Hugging Face’s version of OWLv2 support this same flexibility in input image sizes? | [
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34583 |
TITLE
Add support for Apple's Depth-Pro
COMMENTS
86
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 3
rocket: 0
eyes: 0
BODY
# What does this PR do?
Fixes #34020
This PR adds Apple's Depth Pro model to Hugging Face Transformers. Depth Pro is a foundation model for zero-shot metric monocular depth estimation. It leverages a multi-scale vision transformer optimized for dense predictions. It downsamples an image at several scales. At each scale, it is split into patches, which are processed by a ViT-based (Dinov2) patch encoder, with weights shared across scales. Patches are merged into feature maps, upsampled, and fused via a DPT decoder.
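As a rough illustration of the multi-scale patching idea (a toy sketch in plain PyTorch with made-up sizes and scales, not the Depth Pro implementation):
```python
import torch
import torch.nn.functional as F

# Toy sketch of multi-scale patching: downsample the image at several scales and
# split each scale into fixed-size patches for a (shared) patch encoder.
image = torch.randn(1, 3, 1536, 1536)
patch = 384
scales = [1.0, 0.5, 0.25]

for s in scales:
    scaled = F.interpolate(image, scale_factor=s, mode="bilinear", align_corners=False)
    tiles = scaled.unfold(2, patch, patch).unfold(3, patch, patch)  # non-overlapping tiles
    tiles = tiles.contiguous().view(1, 3, -1, patch, patch)
    print(f"scale {s}: {tiles.shape[2]} patches of {patch}x{patch}")
```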
Relevant Links
- Research Paper: [Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https://arxiv.org/pdf/2410.02073)
- Authors: [Aleksei Bochkovskii](https://arxiv.org/search/cs?searchtype=author&query=Bochkovskii,+A), [Amaël Delaunoy](https://arxiv.org/search/cs?searchtype=author&query=Delaunoy,+A), and others
- Implementation: [apple/ml-depth-pro](https://github.com/apple/ml-depth-pro)
- Models Weights: [apple/DepthPro](https://huggingface.co/apple/DepthPro)
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@amyeroberts, @qubvel
| [
77,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/35349 |
TITLE
A warning message showing that `MultiScaleDeformableAttention.so` is not found in `/root/.cache/torch_extensions` if `ninja` is installed with `transformers`
COMMENTS
12
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
* `transformers`: `4.47.1`
* `torch`: `2.5.1`
* `timm`: `1.0.12`
* `ninja`: `1.11.1.3`
* `python`: `3.10.14`
* `pip`: `23.0.1`
* CUDA runtime installed by `torch`: `nvidia-cuda-runtime-cu12==12.4.127`
* OS (in container): Debian GNU/Linux 12 (bookworm)
* OS (native device): Windows 11 Enterprise 23H2 (`10.0.22631 Build 22631`)
* Docker version: `27.3.1, build ce12230`
* NVIDIA Driver: `565.57.02`
### Who can help?
I am asking help for [`DeformableDetrModel`](https://huggingface.co/docs/transformers/v4.47.1/en/model_doc/deformable_detr#transformers.DeformableDetrModel)
vision models: @amyeroberts, @qubvel
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Start a new docker container by
```sh
docker run --gpus all -it --rm --shm-size=1g python:3.10-slim bash
```
2. Install dependencies
```sh
pip install transformers[torch] requests pillow timm
```
3. Run the following script (copied from [the document](https://huggingface.co/docs/transformers/v4.47.1/en/model_doc/deformable_detr#transformers.DeformableDetrModel.forward.example)), it works fine and does not show any message.
```python
from transformers import AutoImageProcessor, DeformableDetrModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrModel.from_pretrained("SenseTime/deformable-detr")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
```
4. Install ninja:
```sh
pip install ninja
```
5. Run [the same script](https://huggingface.co/docs/transformers/v4.47.1/en/model_doc/deformable_detr#transformers.DeformableDetrModel.forward.example) again, this time, the following warning messages will show
```text
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
warnings.warn(WRONG_COMPILER_WARNING.format(
Could not load the custom kernel for multi-scale deformable attention: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
```
Certainly, `/root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/` is empty.
The issue happens only when both `ninja` and `transformers` are installed. I believe that the following issue may be related to this issue:
https://app.semanticdiff.com/gh/huggingface/transformers/pull/32834/overview
### Expected behavior
It seems that installing `ninja` makes `DeformableDetrModel` emit unexpected error messages (although the script still works). That may be because I am using a container without any compiler or CUDA preinstalled (the CUDA runtime is installed by `pip`).
I think there should be a check that automatically turns off the `ninja`-related functionality, even if `ninja` is installed by `pip`, whenever the requirements (compiler version, CUDA path, and so on) are not fulfilled - something along the lines of the sketch below.
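A minimal sketch of the kind of guard I mean (the exact conditions are my assumption):
```python
import os
import shutil

# Sketch of a pre-check: only attempt to build the custom CUDA kernel when a C++
# compiler and a CUDA toolkit location are actually available; otherwise fall back
# silently to the pure-PyTorch implementation instead of warning repeatedly.
def can_build_cuda_kernel() -> bool:
    has_compiler = shutil.which("g++") is not None or shutil.which("c++") is not None
    has_cuda_home = bool(os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH"))
    return has_compiler and has_cuda_home

print("would try to build custom kernel:", can_build_cuda_kernel())
```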
| [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34699 |
TITLE
TypeError: Accelerator.__init__() got an unexpected keyword argument 'dispatch_batches'
COMMENTS
8
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers: 4.39.3
python: 3.10.12
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
An error may occur at line 580 of [run_ner.py](https://github.com/huggingface/transformers/blob/v4.39.3/examples/pytorch/token-classification/run_ner.py)
```python
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
```
### Expected behavior
```
Traceback (most recent call last):
File "/usr/src/app/llm_model_test/ner_train/run_ner.py", line 666, in <module>
main()
File "/usr/src/app/llm_model_test/ner_train/run_ner.py", line 580, in main
trainer = Trainer(
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 373, in __init__
self.create_accelerator_and_postprocess()
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 4252, in create_accelerator_and_postprocess
self.accelerator = Accelerator(
TypeError: Accelerator.__init__() got an unexpected keyword argument 'dispatch_batches'
```
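For context, a sketch of the API change behind the error (the exact version boundaries are an assumption): newer `accelerate` releases take dataloader options through `DataLoaderConfiguration` instead of a top-level `dispatch_batches` kwarg, which transformers 4.39.x still passes, hence the `TypeError`. Upgrading `transformers` (or pinning an older `accelerate`) avoids the mismatch.
```python
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# New-style accelerate call: dataloader options go through DataLoaderConfiguration
# rather than a `dispatch_batches` keyword on Accelerator itself.
accelerator = Accelerator(dataloader_config=DataLoaderConfiguration(dispatch_batches=None))
```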
Please Help Me... | [
27,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35979 |
TITLE
Fix custom kernel for DeformableDetr, RT-Detr, GroundingDINO, OmDet-Turbo in Pytorch 2.6.0
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Updates:
- `tensor.type().is_cuda()` -> `tensor.is_cuda()`;
- `tensor.data<...>` -> `tensor.data_ptr<...>`
The following message appears in logs:
```
Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor)
is a drop-in replacement. If you were using data from type(), that is now available from Tensor
itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of
tensor.type().backend() use tensor.device().
```
Fixes #35976
Might be relevant:
- https://github.com/pytorch/pytorch/issues/28472
- https://discuss.pytorch.org/t/kernel-launch-deprecated-packed-accessor-arguments-and-tensor-type-alternative/138875
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/36023 |
TITLE
CEP_AI
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
CEP of Subject Ai
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/35767 |
TITLE
Issue: Error with _eos_token_tensor when using Generator with GenerationMixin
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
```
e-readability-summarization/src/inference$ python run_2.py
Traceback (most recent call last):
File "/home/surenoobster/Documents/project/src/inference/run_2.py", line 87, in <module>
output = generator.generate(
^^^^^^^^^^^^^^^^^^^
File "/home/surenoobster/anaconda3/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/surenoobster/Documents/project/src/inference/generation_2.py", line 572, in generate
stopping_criteria = self.model._get_stopping_criteria(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/surenoobster/anaconda3/lib/python3.12/site-packages/transformers/generation/utils.py", line 1126, in _get_stopping_criteria
if generation_config._eos_token_tensor is not None:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'GenerationConfig' object has no attribute '_eos_token_tensor'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm not able to get it working; I keep running into a problem with the device.
### Expected behavior
The Generator should initialize the required token tensors correctly to ensure compatibility with GenerationMixin and avoid errors. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36037 |
TITLE
Fix qwen2-vl generate calls with synced_gpus
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
When using `synced_gpus`, after one peer finishes generating, the cache position in the generation process continues to increase. This leads to the input IDs going out of bounds, resulting in errors. The issue specifically occurs in the following line of code:
[modeling_qwen2_vl.py#L1739](https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L1739).
The root cause seems to be the difference in the implementation of the `prepare_inputs_for_generation` function compared to the default implementation found here:
[utils.py#L388](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L388).
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Happened when running this code
https://github.com/Deep-Agent/R1-V/blob/main/src/open-r1-multimodal/src/open_r1/trainer/grpo_trainer.py#L372
### Expected behavior
Keep consistent with the default implementation | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34024 |
TITLE
HF Trainer do not support Pytorch FSDP with FP8; ValueError: You must pass a model and an optimizer together to `accelerate.prepare()` when using TransformerEngine.
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info

acc_cfg.yml:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
  fsdp_activation_checkpointing: true
  fsdp_auto_wrap_policy: NO_WRAP
  fsdp_backward_prefetch: NO_PREFETCH
  fsdp_cpu_ram_efficient_loading: true
  fsdp_forward_prefetch: true
  fsdp_offload_params: true
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: true
machine_rank: 0
main_process_ip: 0.0.0.0
main_process_port: 0
main_training_function: main
mixed_precision: fp8
fp8_config:
  amax_compute_algorithm: max
  amax_history_length: 1024
  backend: TE
  fp8_format: HYBRID
  interval: 1
  margin: 0
  override_linear_precision: false
  use_autocast_during_eval: true
num_machines: 3
num_processes: 24
rdzv_backend: etcd-v2
same_network: false
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
* accelerate launch --config_file acc_cfg.yml train.py $TRAINING_ARGS
* the train.py is any training script that train using transformers.Trainer
* $TRAINING_ARGS are the TrainingArguments plus some path to data

### Expected behavior
Train Paligemma model with FSDP and FP8. | [
66,
64,
17,
80
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
"trainer",
"bug",
"PyTorch FSDP",
"Accelerate"
] |
https://api.github.com/repos/huggingface/transformers/issues/35507 |
TITLE
Memory Access out of bounds in mra/cuda_kernel.cu::index_max_cuda_kernel()
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
* OS: Linux ubuntu 22.04 LTS
* Device: A100-80GB
* docker: nvidia/pytorch:24.04-py3
* transformers: latest, 4.47.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Reproduction
1. pip install the latest transformers
2. prepare the UT test environments by `pip install -e .[testing]`
3. `pytest tests/models/mra/test_modeling_mra.py`
## Analysis
There might be some memory access out-of-bound behaviours in CUDA kernel `index_max_cuda_kernel()`
https://github.com/huggingface/transformers/blob/main/src/transformers/kernels/mra/cuda_kernel.cu#L6C1-L58C2
Note that `max_buffer` in this kernel is `extern __shared__ float` type, which means `max_buffer` would be stored in shared memory.
According to https://github.com/huggingface/transformers/blob/main/src/transformers/kernels/mra/cuda_launch.cu#L24-L35, CUDA would launch this kernel with
* grid size: `batch_size`
* block size: 256
* shared memory size: `A_num_block * 32 * sizeof(float)`
In case `A_num_block` < 4, the for statement below might accidentally access memory beyond `A_num_block * 32`, since `num_thread` here is 256 and `threadIdx.x` ranges over [0, 255].
```
for (int idx_start = 0; idx_start < 32 * num_block; idx_start = idx_start + num_thread) {
```
Therefore, when blocks of threads access `max_buffer`, it would be safer to guard the accesses with `if` statements to avoid going out of bounds.
So we suggest adding `if` statements in two places:

### Expected behavior
UT tests should all pass! | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36057 |
TITLE
past_key_values type support bug
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
In many `XXXXForCausalLM` classes, the `past_key_values` parameter of `forward` is annotated as `past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None`, but the corresponding `forward` of the `XXXXModel` class is annotated as `past_key_values: Optional[Cache] = None`, and its implementation does not support the `List` type. Passing a `List`-typed `past_key_values` therefore causes an error when `self.model` is called. One way to avoid it is to convert the legacy format first, as in the sketch below.
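A workaround sketch, assuming a recent release where `DynamicCache` is exported from `transformers`:
```python
import torch
from transformers import DynamicCache

# Convert a legacy tuple-of-tensors cache into a Cache object before calling the
# inner model, which only accepts Cache instances. Shapes below are toy values.
legacy_past = (
    (torch.zeros(1, 8, 4, 64), torch.zeros(1, 8, 4, 64)),  # (key, value) for one layer
)
past_key_values = DynamicCache.from_legacy_cache(legacy_past)
print(type(past_key_values))  # a Cache instance usable by model.forward()
```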
`LlamaForCausalLM` and `Qwen2ForCausalLM` are two affected examples. | [
74
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"Documentation"
] |
https://api.github.com/repos/huggingface/transformers/issues/36142 |
TITLE
Bump cryptography from 43.0.1 to 44.0.1 in /examples/research_projects/decision_transformer
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [cryptography](https://github.com/pyca/cryptography) from 43.0.1 to 44.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>44.0.1 - 2025-02-11</p>
<pre><code>
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.4.1.
* We now build ``armv7l`` ``manylinux`` wheels and publish them to PyPI.
* We now build ``manylinux_2_34`` wheels and publish them to PyPI.
<p>.. _v44-0-0:</p>
<p>44.0.0 - 2024-11-27
</code></pre></p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for LibreSSL < 3.9.</li>
<li>Deprecated Python 3.7 support. Python 3.7 is no longer supported by the
Python core team. Support for Python 3.7 will be removed in a future
<code>cryptography</code> release.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.4.0.</li>
<li>macOS wheels are now built against the macOS 10.13 SDK. Users on older
versions of macOS should upgrade, or they will need to build
<code>cryptography</code> themselves.</li>
<li>Enforce the :rfc:<code>5280</code> requirement that extended key usage extensions must
not be empty.</li>
<li>Added support for timestamp extraction to the
:class:<code>~cryptography.fernet.MultiFernet</code> class.</li>
<li>Relax the Authority Key Identifier requirements on root CA certificates
during X.509 verification to allow fields permitted by :rfc:<code>5280</code> but
forbidden by the CA/Browser BRs.</li>
<li>Added support for :class:<code>~cryptography.hazmat.primitives.kdf.argon2.Argon2id</code>
when using OpenSSL 3.2.0+.</li>
<li>Added support for the :class:<code>~cryptography.x509.Admissions</code> certificate extension.</li>
<li>Added basic support for PKCS7 decryption (including S/MIME 3.2) via
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_der</code>,
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_pem</code>, and
:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.pkcs7_decrypt_smime</code>.</li>
</ul>
<p>.. _v43-0-3:</p>
<p>43.0.3 - 2024-10-18</p>
<pre><code>
* Fixed release metadata for ``cryptography-vectors``
<p>.. _v43-0-2:</p>
<p>43.0.2 - 2024-10-18
</code></pre></p>
<ul>
<li>Fixed compilation when using LibreSSL 4.0.0.</li>
</ul>
<p>.. _v43-0-1:</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/adaaaed77db676bbaa9d171175db81dce056e2a7"><code>adaaaed</code></a> Bump for 44.0.1 release (<a href="https://redirect.github.com/pyca/cryptography/issues/12441">#12441</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/ccc61dabe38b86956bf218565cd4e82b918345a1"><code>ccc61da</code></a> [backport] test and build on armv7l (<a href="https://redirect.github.com/pyca/cryptography/issues/12420">#12420</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/12431">#12431</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f299a48153650f2dd87716343f2daa7cd39a1f59"><code>f299a48</code></a> remove deprecated call (<a href="https://redirect.github.com/pyca/cryptography/issues/12052">#12052</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/439eb0594a9ffb7c9adedb2490998d83914d141e"><code>439eb05</code></a> Bump version for 44.0.0 (<a href="https://redirect.github.com/pyca/cryptography/issues/12051">#12051</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/2c5ad4d8dcec1b8f833198bc2f3b4634c4fd9d78"><code>2c5ad4d</code></a> chore(deps): bump maturin from 1.7.4 to 1.7.5 in /.github/requirements (<a href="https://redirect.github.com/pyca/cryptography/issues/12050">#12050</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/d23968adddd79aa8508d7c1f985da09383b3808f"><code>d23968a</code></a> chore(deps): bump libc from 0.2.165 to 0.2.166 (<a href="https://redirect.github.com/pyca/cryptography/issues/12049">#12049</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/133c0e02edf2f172318eb27d8f50525ed64c9ec3"><code>133c0e0</code></a> Bump x509-limbo and/or wycheproof in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/12047">#12047</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f2259d7aa0d134c839ebe298baa8b63de9ead804"><code>f2259d7</code></a> Bump BoringSSL and/or OpenSSL in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/12046">#12046</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/e201c870b89fd2606d67230a97e50c3badb07907"><code>e201c87</code></a> fixed metadata in changelog (<a href="https://redirect.github.com/pyca/cryptography/issues/12044">#12044</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/c6104cc3669585941dc1d2b9c6507621c53d242f"><code>c6104cc</code></a> Prohibit Python 3.9.0, 3.9.1 -- they have a bug that causes errors (<a href="https://redirect.github.com/pyca/cryptography/issues/12045">#12045</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/43.0.1...44.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
27,
60
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"python"
] |
https://api.github.com/repos/huggingface/transformers/issues/34727 |
TITLE
[Idefics3] processing_idefics3 - IndexError: list index out of range for multiple image input
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.4.0-1134-aws-x86_64-with-glibc2.31
- Python version: 3.10.2
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: Tesla T4
### Who can help?
@amyeroberts , @quvb
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```python
from PIL import Image

img1 = Image.open('Image1.JPG')
img2 = Image.open('Image2.JPG')

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[img1, img2], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

# Generate
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts)
```
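`processor`, `model`, `DEVICE`, and `messages` are assumed to be defined earlier and are not shown in the report; a plausible `messages` structure for two images would be:
```python
# Assumed (not shown in the original report): a single user turn that references
# two images, in the chat format Idefics3's processor expects.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What is different between these two images?"},
        ],
    }
]
```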
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[4], line 6
3 img2=Image.open('Image2.JPG')
5 prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
----> 6 inputs = processor(text=[prompt,prompt], images=[img1,img2], return_tensors="pt")
7 inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
9 # Generate
File ~/envs/default/lib/python3.10/site-packages/transformers/models/idefics3/processing_idefics3.py:302, in Idefics3Processor.__call__(self, images, text, audio, videos, image_seq_len, **kwargs)
300 sample = split_sample[0]
301 for i, image_prompt_string in enumerate(image_prompt_strings):
--> 302 sample += image_prompt_string + split_sample[i + 1]
303 prompt_strings.append(sample)
305 text_inputs = self.tokenizer(text=prompt_strings, **output_kwargs["text_kwargs"])
IndexError: list index out of range
```
### Expected behavior
I would expect Model to take 2 images in the input and provide generation using these 2 images as context. | [
64,
62,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/34176 |
TITLE
[Bug] transformers `TPU` support broken on `v4.45.0`
COMMENTS
23
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### System Info
transformers: v4.45.0 and up (any of v4.45.0 / v4.45.1 / v4.45.2)
accelerate: v1.0.1 (same result on v0.34.2)
### Who can help?
trainer experts: @muellerzr @SunMarc
accelerate expert: @muellerzr
text models expert: @ArthurZucker
Thank you guys!
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Minimal working code is [Here](https://gist.github.com/steveepreston/acd125a08214c631ba8389eb61a13798). Code follows [GoogleCloudPlatform example](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/training/tpuv5e_llama2_pytorch_finetuning_and_serving.ipynb)
On a TPU VM, training works like a charm with transformers v4.43.1 through v4.44.2, but after upgrading to any of v4.45.0 / v4.45.1 / v4.45.2 it throws this error: `RuntimeError: There are currently no available devices found, must be one of 'XPU', 'CUDA', or 'NPU'.`
**Error Traceback:**
The general traceback is: calling `SFTTrainer()` > `self.accelerator = Accelerator(**args)` (transformers/trainer.py)
<details>
<summary>Click here to Show Full Error Traceback</summary>
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[48], line 4
1 from trl import SFTTrainer
2 from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
----> 4 trainer = SFTTrainer(
5 model=base_model,
6 train_dataset=data,
7 args=TrainingArguments(
8 per_device_train_batch_size=BATCH_SIZE, # This is actually the global batch size for SPMD.
9 num_train_epochs=1,
10 max_steps=-1,
11 output_dir="/output_dir",
12 optim="adafactor",
13 logging_steps=1,
14 dataloader_drop_last = True, # Required for SPMD.
15 fsdp="full_shard",
16 fsdp_config=fsdp_config,
17 ),
18 peft_config=lora_config,
19 dataset_text_field="quote",
20 max_seq_length=max_seq_length,
21 packing=True,
22 )
File /usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:101, in _deprecate_arguments.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
99 message += "\n\n" + custom_message
100 warnings.warn(message, FutureWarning)
--> 101 return f(*args, **kwargs)
File /usr/local/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:401, in SFTTrainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics, peft_config, dataset_text_field, packing, formatting_func, max_seq_length, infinite, num_of_sequences, chars_per_token, dataset_num_proc, dataset_batch_size, neftune_noise_alpha, model_init_kwargs, dataset_kwargs, eval_packing)
395 if tokenizer.padding_side is not None and tokenizer.padding_side != "right":
396 warnings.warn(
397 "You passed a tokenizer with `padding_side` not equal to `right` to the SFTTrainer. This might lead to some unexpected behaviour due to "
398 "overflow issues when training a model in half-precision. You might consider adding `tokenizer.padding_side = 'right'` to your code."
399 )
--> 401 super().__init__(
402 model=model,
403 args=args,
404 data_collator=data_collator,
405 train_dataset=train_dataset,
406 eval_dataset=eval_dataset,
407 tokenizer=tokenizer,
408 model_init=model_init,
409 compute_metrics=compute_metrics,
410 callbacks=callbacks,
411 optimizers=optimizers,
412 preprocess_logits_for_metrics=preprocess_logits_for_metrics,
413 )
415 # Add tags for models that have been loaded with the correct transformers version
416 if hasattr(self.model, "add_model_tags"):
File /usr/local/lib/python3.10/site-packages/transformers/trainer.py:411, in Trainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
408 self.deepspeed = None
409 self.is_in_train = False
--> 411 self.create_accelerator_and_postprocess()
413 # memory metrics - must set up as early as possible
414 self._memory_tracker = TrainerMemoryTracker(self.args.skip_memory_metrics)
File /usr/local/lib/python3.10/site-packages/transformers/trainer.py:4858, in Trainer.create_accelerator_and_postprocess(self)
4855 args.update(accelerator_config)
4857 # create accelerator object
-> 4858 self.accelerator = Accelerator(**args)
4859 # some Trainer classes need to use `gather` instead of `gather_for_metrics`, thus we store a flag
4860 self.gather_function = self.accelerator.gather_for_metrics
File /usr/local/lib/python3.10/site-packages/accelerate/accelerator.py:349, in Accelerator.__init__(self, device_placement, split_batches, mixed_precision, gradient_accumulation_steps, cpu, dataloader_config, deepspeed_plugin, fsdp_plugin, megatron_lm_plugin, rng_types, log_with, project_dir, project_config, gradient_accumulation_plugin, step_scheduler_with_optimizer, kwargs_handlers, dynamo_backend, deepspeed_plugins)
345 raise ValueError(f"FSDP requires PyTorch >= {FSDP_PYTORCH_VERSION}")
347 if fsdp_plugin is None: # init from env variables
348 fsdp_plugin = (
--> 349 FullyShardedDataParallelPlugin() if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true" else None
350 )
351 else:
352 if not isinstance(fsdp_plugin, FullyShardedDataParallelPlugin):
File <string>:21, in __init__(self, sharding_strategy, backward_prefetch, mixed_precision_policy, auto_wrap_policy, cpu_offload, ignored_modules, state_dict_type, state_dict_config, optim_state_dict_config, limit_all_gathers, use_orig_params, param_init_fn, sync_module_states, forward_prefetch, activation_checkpointing, cpu_ram_efficient_loading, transformer_cls_names_to_wrap, min_num_params)
File /usr/local/lib/python3.10/site-packages/accelerate/utils/dataclasses.py:1684, in FullyShardedDataParallelPlugin.__post_init__(self)
1682 device = torch.xpu.current_device()
1683 else:
-> 1684 raise RuntimeError(
1685 "There are currently no available devices found, must be one of 'XPU', 'CUDA', or 'NPU'."
1686 )
1687 # Create a function that will be used to initialize the parameters of the model
1688 # when using `sync_module_states`
1689 self.param_init_fn = lambda x: x.to_empty(device=device, recurse=False)
RuntimeError: There are currently no available devices found, must be one of 'XPU', 'CUDA', or 'NPU'.
```
</details>
**My observation and guess**
I tested multiple times and can confirm that this error is directly caused by changing only the `transformers` version. The `accelerate` version was fixed across all runs, so my guess is that something changed in `v4.45.0` (maybe in `trainer.py`) that affects the `args` passed to `self.accelerator = Accelerator(**args)`, so that the error is then raised by `accelerate`.
### Expected behavior
My guess: with `args` corrected, `self.accelerator = Accelerator(**args)` is called correctly, so `accelerate` can work on `TPU`. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34162 |
TITLE
requests.exceptions.ReadTimeout on already cached/downloaded model using SentenceTransformers
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers version: 44.2
python version: 3.11.6
system OS: Linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When the Hugging Face servers are down or you have no internet connection, try to initialize an already downloaded/cached model. I was using SentenceTransformers (running `SentenceTransformer(model_name_or_path=model_id, device=my_device)`), but the problem comes from the transformers library, so I'm not sure which library should make the changes.
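As a workaround sketch, the hub's offline mode can be forced via environment variables (this is `huggingface_hub`/`transformers` behavior, not something SentenceTransformers adds):
```python
import os

# Force offline mode so already-cached files are used without contacting the Hub.
os.environ["HF_HUB_OFFLINE"] = "1"  # or: os.environ["TRANSFORMERS_OFFLINE"] = "1"

from sentence_transformers import SentenceTransformer

# Example model id (an assumption; use the id that is already in your local cache).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
```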
### Expected behavior
The model loads properly without requiring any connection to the hub. | [
67,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Usage",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33998 |
TITLE
Is the BOS token id of 128000 **hardcoded** into the llama 3.2 tokenizer?
COMMENTS
13
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.1
- Platform: Linux-5.15.154+-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.23.2
- Safetensors version: 0.4.3
- Accelerate version: 0.30.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.4 (cpu)
- Jax version: 0.4.28
- JaxLib version: 0.4.28
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I trained the llama 3.2 tokenizer using an Amharic language corpus and a vocab size of `28k`, but when I use it to tokenize text, the first token id is still `128000` when it should have been the new tokenizer's **BOS token id** of `0`.
And here's a tokenization of an example text. As can be seen, the first token id is `128000` when it should have been `0`.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("rasyosef/llama-3.2-amharic-tokenizer-28k")
text = "ሁሉም ነገር"
inputs = tokenizer(text, return_tensors="pt")
print(inputs["input_ids"])
```
Output:
```
tensor([[128000, 1704, 802]])
```
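One way to inspect where the stray id comes from (a diagnostic sketch; that the post-processor template is the culprit is my assumption):
```python
from transformers import AutoTokenizer

# The fast tokenizer's post-processor template can keep prepending the *old* BOS id
# even after the vocabulary is retrained, so inspect both the reported BOS and the
# template that actually adds it.
tokenizer = AutoTokenizer.from_pretrained("rasyosef/llama-3.2-amharic-tokenizer-28k")
print(tokenizer.bos_token, tokenizer.bos_token_id)  # what the tokenizer object reports
print(tokenizer.backend_tokenizer.post_processor)   # template that prepends the BOS id
```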
### Expected behavior
The first token id of the tokenized text should be the new tokenizer's **BOS token id** of `0` instead of the original llama 3.2 tokenizer's BOS token id of `128000`. The vocab size is `28000` and the number `128000` should not appear anywhere in the `input_ids` list.
This is causing index out of range errors when indexing the embedding matrix of a newly initialized model. | [
47,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Tokenization",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36168 |
TITLE
Bump transformers from 4.38.0 to 4.48.0 in /examples/research_projects/adversarial
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [transformers](https://github.com/huggingface/transformers) from 4.38.0 to 4.48.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>v4.48.0: ModernBERT, Aria, TimmWrapper, ColPali, Falcon3, Bamba, VitPose, DinoV2 w/ Registers, Emu3, Cohere v2, TextNet, DiffLlama, PixtralLarge, Moonshine</h2>
<h2>New models</h2>
<h3>ModernBERT</h3>
<p>The ModernBert model was proposed in <a href="https://arxiv.org/abs/2412.13663">Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference</a> by Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard and Iacopo Poli.</p>
<p>It is a refresh of the traditional encoder architecture, as used in previous models such as <a href="https://huggingface.co/docs/transformers/en/model_doc/bert">BERT</a> and <a href="https://huggingface.co/docs/transformers/en/model_doc/roberta">RoBERTa</a>.</p>
<p>It builds on BERT and implements many modern architectural improvements which have been developed since its original release, such as:</p>
<ul>
<li><a href="https://huggingface.co/blog/designing-positional-encoding">Rotary Positional Embeddings</a> to support sequences of up to 8192 tokens.</li>
<li><a href="https://arxiv.org/abs/2208.08124">Unpadding</a> to ensure no compute is wasted on padding tokens, speeding up processing time for batches with mixed-length sequences.</li>
<li><a href="https://arxiv.org/abs/2002.05202">GeGLU</a> Replacing the original MLP layers with GeGLU layers, shown to improve performance.</li>
<li><a href="https://arxiv.org/abs/2004.05150v2">Alternating Attention</a> where most attention layers employ a sliding window of 128 tokens, with Global Attention only used every 3 layers.</li>
<li><a href="https://github.com/Dao-AILab/flash-attention">Flash Attention</a> to speed up processing.</li>
<li>A model designed following recent <a href="https://arxiv.org/abs/2401.14489">The Case for Co-Designing Model Architectures with Hardware</a>, ensuring maximum efficiency across inference GPUs.</li>
<li>Modern training data scales (2 trillion tokens) and mixtures (including code and math data)</li>
</ul>
<p><img src="https://github.com/user-attachments/assets/4256c0b1-9b40-4d71-ac42-fc94827d5e9d" alt="image" /></p>
<ul>
<li>Add ModernBERT to Transformers by <a href="https://github.com/warner-benjamin"><code>@warner-benjamin</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/35158">#35158</a></li>
</ul>
<h3>Aria</h3>
<p>The Aria model was proposed in <a href="https://huggingface.co/papers/2410.05993">Aria: An Open Multimodal Native Mixture-of-Experts Model</a> by Li et al. from the Rhymes.AI team.</p>
<p>Aria is an open multimodal-native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. It has a Mixture-of-Experts architecture, with respectively 3.9B and 3.5B activated parameters per visual token and text token.</p>
<ul>
<li>Add Aria by <a href="https://github.com/aymeric-roucher"><code>@aymeric-roucher</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/34157">#34157</a>
<img src="https://github.com/user-attachments/assets/ef41fcc9-2c5f-4a75-ab1a-438f73d3d7e2" alt="image" /></li>
</ul>
<h3>TimmWrapper</h3>
<p>We add a <code>TimmWrapper</code> set of classes such that timm models can be loaded in as transformer models into the library.</p>
<p>Here's a general usage example:</p>
<pre lang="py"><code>import torch
from urllib.request import urlopen
from PIL import Image
from transformers import AutoConfig, AutoModelForImageClassification, AutoImageProcessor
checkpoint = "timm/resnet50.a1_in1k"
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

image_processor = AutoImageProcessor.from_pretrained(checkpoint)
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/6bc0fbcfa7acb6ac4937e7456a76c2f7975fefec"><code>6bc0fbc</code></a> [WIP] Emu3: add model (<a href="https://redirect.github.com/huggingface/transformers/issues/33770">#33770</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/59e28c30fa3a91213f569bccef73f082afa8c656"><code>59e28c3</code></a> Fix flex_attention in training mode (<a href="https://redirect.github.com/huggingface/transformers/issues/35605">#35605</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/7cf6230e25078742b21907ae49d1542747606457"><code>7cf6230</code></a> push a fix for now</li>
<li><a href="https://github.com/huggingface/transformers/commit/d6f446ffa79811d35484d445bc5c7932e8a536d6"><code>d6f446f</code></a> when filtering we can't use the convert script as we removed them</li>
<li><a href="https://github.com/huggingface/transformers/commit/8ce1e9578af6151e4192d59c345e2ad86ee789d4"><code>8ce1e95</code></a> [test-all]</li>
<li><a href="https://github.com/huggingface/transformers/commit/af2d7caff393cf8881396b73d92d0595b6a3b2ae"><code>af2d7ca</code></a> Add Moonshine (<a href="https://redirect.github.com/huggingface/transformers/issues/34784">#34784</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/42b8e7916b6b6dff5cb77252286db1aa07b7b41e"><code>42b8e79</code></a> ModernBert: reuse GemmaRotaryEmbedding via modular + Integration tests (<a href="https://redirect.github.com/huggingface/transformers/issues/35459">#35459</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/e39c9f7a78fa2960a7045e8fc5a2d96b5d7eebf1"><code>e39c9f7</code></a> v4.48-release</li>
<li><a href="https://github.com/huggingface/transformers/commit/8de7b1ba8d126a6fc9f9bcc3173a71b46f0c3601"><code>8de7b1b</code></a> Add flex_attn to diffllama (<a href="https://redirect.github.com/huggingface/transformers/issues/35601">#35601</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/1e3ddcb2d0380d0d909a44edc217dff68956ec5e"><code>1e3ddcb</code></a> ModernBERT bug fixes (<a href="https://redirect.github.com/huggingface/transformers/issues/35404">#35404</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.38.0...v4.48.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
27,
60
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"python"
] |
https://api.github.com/repos/huggingface/transformers/issues/34446 |
TITLE
Beit image classification have different results compared from versions prior to 4.43.0
COMMENTS
10
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.43.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.9
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA GeForce RTX 3060 Ti
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Given the following image:

Running the following pipeline for versions prior to `4.43.0` (4.42.4)
```py
from PIL import Image
from transformers import pipeline
import transformers
pipeline_aesthetic = pipeline(
"image-classification", "cafeai/cafe_aesthetic", device=0
)
with Image.open("F:\\Downloads\\Tower.jpg") as img:
predictions = pipeline_aesthetic(img, top_k=2)
predict_keyed = {}
for p in predictions:
# print(type(p))
if not isinstance(p, dict):
raise Exception("Prediction value is missing?")
predict_keyed[p["label"]] = p["score"]
print(predict_keyed,transformers.__version__)
```
For 4.42.4, it returns:
```
{'aesthetic': 0.651885986328125, 'not_aesthetic': 0.3481140434741974} 4.42.4
```
For 4.43.0:
```
{'aesthetic': 0.43069663643836975, 'not_aesthetic': 0.2877475321292877} 4.43.0
```
### Expected behavior
Expected results from 4.42.4 instead of 4.43.0.
### Additional Notes
I narrowed it down to this commit being the cause: https://github.com/huggingface/transformers/blob/06fd7972acbc6a5e9cd75b4d482583c060ac2ed0/src/transformers/models/beit/modeling_beit.py, but I am unsure where exactly the behavior changed. | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/35664 |
TITLE
RLE of SAM can't handle masks with no change
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.49.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.11.11
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm fine-tuning the SamModel and using the fine-tuned model in a mask-generation pipeline afterward.
After some time in the training, I suddenly get the following error when using the fine-tuned model in the pipeline:
```
Traceback (most recent call last):
File "***.py", line 17, in <module>
outputs = generator(image)
^^^^^^^^^^^^^^^^
File "transformers\pipelines\mask_generation.py", line 166, in __call__
return super().__call__(image, *args, num_workers=num_workers, batch_size=batch_size, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\base.py", line 1354, in __call__
return next(
^^^^^
File "transformers\pipelines\pt_utils.py", line 124, in __next__
item = next(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\pt_utils.py", line 269, in __next__
processed = self.infer(next(self.iterator), **self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\base.py", line 1269, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\mask_generation.py", line 237, in _forward
masks, iou_scores, boxes = self.image_processor.filter_masks(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\models\sam\image_processing_sam.py", line 847, in filter_masks
return self._filter_masks_pt(
^^^^^^^^^^^^^^^^^^^^^^
File "transformers\models\sam\image_processing_sam.py", line 945, in _filter_masks_pt
masks = _mask_to_rle_pytorch(masks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\models\sam\image_processing_sam.py", line 1386, in _mask_to_rle_pytorch
counts += [cur_idxs[0].item()] + btw_idxs.tolist() + [height * width - cur_idxs[-1]]
~~~~~~~~^^^
IndexError: index 0 is out of bounds for dimension 0 with size 0
```
Note: this error doesn't occur on every image, but just on some.
Code used to produce error:
```
image = Image.open("PATH_TO_MY_IMAGE")
model = SamModel.from_pretrained("PATH_TO_MY_CHECKPOINT")
processor = SamImageProcessor.from_pretrained("facebook/sam-vit-huge")
generator = pipeline(
"mask-generation",
model=model,
device="cpu",
points_per_batch=64,
image_processor=processor
) # MaskGenerationPipeline
outputs = generator(image)
```
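For reference, the failure seems isolable to the RLE helper itself. A minimal sketch (my assumption: it imports the private helper directly, and a fully uniform mask — one with no 0/1 transitions — is the trigger for the empty change-index list):
```python
import torch
from transformers.models.sam.image_processing_sam import _mask_to_rle_pytorch

# A mask with no value changes at all, so the computed list of change indices is empty
uniform_mask = torch.zeros((1, 4, 4), dtype=torch.long)
_mask_to_rle_pytorch(uniform_mask)  # IndexError: index 0 is out of bounds for dimension 0 with size 0
```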
### Expected behavior
No error should be thrown and the RLE should be computed correctly. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36068 |
TITLE
cannot import name 'is_timm_config_dict' from 'transformers.utils.generic'
COMMENTS
16
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Transformers version 4.48.2
platform kaggle L4*4 or P40
timm version 1.0.12 or 1.0.14 or None
Python version 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running `from vllm.platforms import current_platform` then gives:
```
ImportError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1792 missing_backends = self._object_missing_backend[name]
-> 1793
1794 class Placeholder(metaclass=DummyObject):
/usr/lib/python3.10/importlib/__init__.py in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py in <module>
42 )
---> 43 from .utils.generic import is_timm_config_dict
44
ImportError: cannot import name 'is_timm_config_dict' from 'transformers.utils.generic' (/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-13-46a92ab71489> in <cell line: 1>()
----> 1 from vllm.platforms import current_platform
2 device_name = current_platform.get_device_name().lower()
3 print(device_name)
/usr/local/lib/python3.10/dist-packages/vllm/__init__.py in <module>
4 import torch
5
----> 6 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
7 from vllm.engine.async_llm_engine import AsyncLLMEngine
8 from vllm.engine.llm_engine import LLMEngine
/usr/local/lib/python3.10/dist-packages/vllm/engine/arg_utils.py in <module>
9
10 import vllm.envs as envs
---> 11 from vllm.config import (CacheConfig, CompilationConfig, ConfigFormat,
12 DecodingConfig, DeviceConfig, HfOverrides,
13 KVTransferConfig, LoadConfig, LoadFormat, LoRAConfig,
/usr/local/lib/python3.10/dist-packages/vllm/config.py in <module>
15 import torch
16 from pydantic import BaseModel, Field, PrivateAttr
---> 17 from transformers import PretrainedConfig
18
19 import vllm.envs as envs
/usr/lib/python3.10/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in __getattr__(self, name)
1779 def __dir__(self):
1780 result = super().__dir__()
-> 1781 # The elements of self.__all__ that are submodules may or may not be in the dir already, depending on whether
1782 # they have been accessed or not. So we only add the elements of self.__all__ that are not already in the dir.
1783 for attr in self.__all__:
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1793
1794 class Placeholder(metaclass=DummyObject):
-> 1795 _backends = missing_backends
1796
1797 def __init__(self, *args, **kwargs):
RuntimeError: Failed to import transformers.configuration_utils because of the following error (look up to see its traceback):
cannot import name 'is_timm_config_dict' from 'transformers.utils.generic' (/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py)
```
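A quick self-consistency check of the installed files (sketch; in the transformers version reported above the attribute is expected to exist, so `False` would point to a mixed or partially upgraded install):
```python
import transformers
import transformers.utils.generic as generic

print(transformers.__version__)
print(hasattr(generic, "is_timm_config_dict"))  # False would indicate stale files on disk
```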
### Expected behavior
No error
@zucchini-nlp | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34690 |
TITLE
Changes required to `save_model` for certain models (e.g., Phi 3.5 Vision)
COMMENTS
4
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
This request proposes one of three changes (see **Motivation** for background, and **Your contribution** for more thoughts on possible solutions) in order to allow saving of a certain class of models, including but not limited to Phi 3.5 Vision.
1. Accept a `state_dict` argument in the `Trainer` class's `save_model()` method (https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3719-L3768). This `state_dict` parameter should then be passed down to the call to the private `_save()` method (https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3842), which _does_ accept a `state_dict` argument.
2. Rather than accepting `state_dict` as an argument to `save_model()`, determine the appropriate heuristic such that we can successfully save Phi 3.5 Vision and other architecturally similar models.
3. Some change to the way `transformers` handles shared tensors...?
### Motivation
I encountered an issue while trying to fine-tune Phi 3.5 Vision using the `Trainer` class from `transformers`. In particular, when trying to call `save()` or `save_pretrained()`, transformers throws the following error:
```
RuntimeError: The weights trying to be saved contained shared tensors [{'model.vision_embed_tokens.wte.weight',
'model.embed_tokens.weight'}] that are mismatching the transformers base configuration.
Try saving using `safe_serialization=False` or remove this tensor sharing.
```
Below are two minimal reproducible examples:
_Example #1_
```
from transformers import AutoModelForCausalLM
model_id = "microsoft/Phi-3.5-vision-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto"
)
model.save_pretrained("out")
```
_Example #2_
```
from transformers import (
Trainer,
TrainingArguments,
)
training_args = TrainingArguments(
save_only_model=True,
output_dir='./out/',
save_strategy='no',
)
trainer = Trainer(
model=model,
args=training_args
)
trainer.save_model()
```
It looks like others have also encountered this issue. See the list of reference issues below in "Issues".
A contributor to the Phi 3 Vision cookbook suggested the following solution, stating "You need to remove the wte weight. It's okay because when the model is loaded from the checkpoint, it will automatically copy the weight from the embedding weight."
```
state_dict = model.state_dict()
state_dict = {k:v for k, v in state_dict.items() if "wte" not in k}
model.save_pretrained(args.save_model_path, state_dict=state_dict, safe_serialization=True)
processor.save_pretrained(args.save_model_path)
```
This does indeed seem to work. However, it doesn't exactly fit into a use case that relies on the `Trainer` abstraction. The call to the `Trainer` class's `save_model()` method doesn't accommodate a state_dict argument (see https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3719-L3768).
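In the meantime, a possible interim workaround is a small `Trainer` subclass (only a sketch — the class name is made up, and it only covers the plain single-process save path, reusing the private `_save()` hook mentioned above):
```python
from transformers import Trainer

class FilteredSaveTrainer(Trainer):
    """Hypothetical subclass that drops the tied 'wte' weights before saving."""

    def save_model(self, output_dir=None, _internal_call=False):
        output_dir = output_dir if output_dir is not None else self.args.output_dir
        filtered_state_dict = {
            k: v for k, v in self.model.state_dict().items() if "wte" not in k
        }
        # _save() already accepts a state_dict argument, unlike save_model()
        self._save(output_dir, state_dict=filtered_state_dict)
```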
**Issues**
1. https://github.com/kazuar/Phi3-Vision-ft/issues/2
2. https://discuss.huggingface.co/t/runtimeerror-when-saving-phi-3-5-vision-due-to-shared-tensors/116457
4. https://github.com/huggingface/transformers/issues/32354
5. https://discuss.huggingface.co/t/using-trainer-to-save-a-bartforsequenceclassification-model/81606
### Your contribution
I'd be glad to submit a PR, but I think some discussion is needed from the appropriate `transformers` stakeholders.
It's not clear to me whether the most appropriate change here is to modify the function signature.
Alternatively, maybe there's a heuristic by which we could determine whether the architecture is such that one needs to save everything but the `wte` weights. I don't know the answer to that off-hand. It may require a deep dive from Phi 3/3.5 Vision SMEs.
Or more broadly, perhaps there's some change to the way `transformers` handles shared tensors in the base configuration that would be most appropriate. | [
66,
76,
4
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"trainer",
"Feature request",
"Safetensors"
] |
https://api.github.com/repos/huggingface/transformers/issues/33409 |
TITLE
Can’t train Mamba2 with FP16 (Mamba(/2)ForCausalLM)
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
`transformers.__version__` = 4.44.2
### Reproduction
1. Run script:
```
from transformers import AutoConfig, MambaForCausalLM, Trainer, TrainingArguments

# `device`, `tokenizer`, `tokenized_train`, `tokenized_eval`, and `args.output_dir`
# are assumed to be defined elsewhere in the training script
config = AutoConfig.from_pretrained('state-spaces/mamba-130m')
model = MambaForCausalLM(config)
model.to(device)
training_args = TrainingArguments(
output_dir=args.output_dir,
logging_dir='./logs',
gradient_accumulation_steps=1,
save_steps=50000,
max_steps=1000000,
eval_strategy="steps",
eval_steps=50000,
logging_strategy="epoch",
logging_steps=2000,
learning_rate=1e-4,
fp16=True,
dataloader_num_workers=4,
per_device_train_batch_size=512,
per_device_eval_batch_size=512,
lr_scheduler_type="constant_with_warmup",
weight_decay=0.1,
warmup_steps=2000,
)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_eval
)
trainer.train()
```
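The traceback below boils down to the cached conv state being kept in fp32 while the new state arrives as fp16 ("got Float for the destination and Half for the source"). A minimal, model-independent illustration of the failing assignment (my assumption: this mirrors the index_put in the last traceback frame):
```python
import torch

dst = torch.zeros(2, 3, 4, dtype=torch.float32)   # stands in for conv_state
src = torch.ones(2, 3, 1, dtype=torch.float16)    # stands in for new_conv_state under fp16 autocast
idx = torch.tensor([0])                           # stands in for cache_position
dst[:, :, idx] = src  # RuntimeError: Index put requires the source and destination dtypes match
```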
### Expected behavior
```
File "/users/PAS2581/kanaka/research/GrokkedTransformersarewang2024/trying_different_archs/mamba/main.py", line 575, in <module>
main()
File "/users/PAS2581/kanaka/research/GrokkedTransformersarewang2024/trying_different_archs/mamba/main.py", line 545, in main
trainer.train()
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in train
return inner_training_loop(
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 2356, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 2804, in _maybe_log_save_evaluate
metrics = self._evaluate(trial, ignore_keys_for_eval)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 2761, in _evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 3666, in evaluate
output = eval_loop(
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 3857, in evaluation_loop
losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 4075, in prediction_step
loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/trainer.py", line 3363, in compute_loss
outputs = model(**inputs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/accelerate/utils/operations.py", line 819, in forward
return model_forward(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/accelerate/utils/operations.py", line 807, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 43, in decorate_autocast
return func(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/models/mamba/modeling_mamba.py", line 738, in forward
mamba_outputs = self.backbone(
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/models/mamba/modeling_mamba.py", line 610, in forward
hidden_states = mixer_block(hidden_states, cache_params=cache_params, cache_position=cache_position)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/models/mamba/modeling_mamba.py", line 354, in forward
hidden_states = self.mixer(hidden_states, cache_params=cache_params, cache_position=cache_position)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/models/mamba/modeling_mamba.py", line 310, in forward
return self.cuda_kernels_forward(hidden_states, cache_params, cache_position)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/models/mamba/modeling_mamba.py", line 178, in cuda_kernels_forward
cache_params.update_conv_state(self.layer_idx, conv_states, cache_position)
File "/users/PAS2581/kanaka/miniconda3/envs/grokk/lib/python3.10/site-packages/transformers/cache_utils.py", line 1644, in update_conv_state
conv_state[:, :, cache_position] = new_conv_state.to(conv_state.device)
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Half for the source.
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33536 |
TITLE
Documentation for HuBERT is Incomplete
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-4.18.0-477.27.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.9
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: 0.32.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: Tesla V100-SXM2-32GB
### Who can help?
@ylacombe @muellerzr
There are two issues.
- One is the missing information in the documentation regarding the parameters of the HuBERT model. The `init` function of `HubertConfig` has `pad_token_id=0, bos_token_id=1, eos_token_id=2`, but this information is missing from the docstring (see the short check after this list).
- This is concerning because if someone is following the ASR tutorial by Von Platen (https://huggingface.co/blog/fine-tune-wav2vec2-english), the token ids for padding, bos, and eos would not correspond to 0, 1, and 2, respectively.
- The other issue is a result of the mismatch between the padding token ids. In the HF `Trainer`, when `compute_metric` is called during evaluation, it bundles the whole dataset together by padding `pred_ids` with a value of 0 to the length of the longest sample in the dataset. However, during decoding, if the pad `token_id` is not 0, the decoding carries one extra letter at the end of the transcription (corresponding to the token with id 0), thereby generating an incorrect transcription and hence an incorrect CER/WER.
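A quick check of the undocumented defaults mentioned above (just a sketch that prints the config values):
```python
from transformers import HubertConfig

cfg = HubertConfig()
# These defaults exist in __init__ but are not mentioned in the docstring
print(cfg.pad_token_id, cfg.bos_token_id, cfg.eos_token_id)  # 0 1 2
```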
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue could be replicated by following Von Platen's tutorial on finetuning `wav2vec 2.0` but instead of `wav2vec 2.0`, use `hubert-base`. Please let me know if you require any further information.
### Expected behavior
There should be a clear mention of the default values of the special `token_ids`, in particular the `pad_token`, and of the potential downstream issues with any other value. And if the behaviour of `compute_metric` is not actually intended, accepting an arbitrary `pad_token_id` value could be considered so that the code is invariant to the choice of token id. | [
74,
64,
43
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"Documentation",
"bug",
"Audio"
] |
https://api.github.com/repos/huggingface/transformers/issues/35559 |
TITLE
iframe
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Test
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
<iframe
src="https://hkchengrex-mmaudio.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
### Expected behavior
Hello, why do some Spaces not function when embedded in an iframe? For example:
<iframe
src="https://hkchengrex-mmaudio.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
When we embed this in a webpage, the application does not work. Is there a way to run it inside an iframe? I can't use Web Components because the Gradio library is not available in my region. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35700 |
TITLE
Uniformize OwlViT and Owlv2 processors
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Adds uniformized processors following https://github.com/huggingface/transformers/issues/31911 for OwlViT and Owlv2.
Split from this PR #32841
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/36150 |
TITLE
SDPA `is_causal=False` has no effect due to `LlamaModel._prepare_4d_causal_attention_mask_with_cache_position`
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 3
BODY
### System Info
- `transformers` version: 4.48.3
- Platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
- Python version: 3.9.21
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@ArthurZucker @Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Observe `is_causal=False` has no effect when using `attn_implementation="sdpa"` with an `attention_mask` with at least one `False` element:
```python
import torch
import transformers
device = torch.device("cuda:0")
input_ids = torch.tensor(
[
[
128000, 128006, 9125, 128007, 271, 34, 7747, 553, 279,
2768, 1495, 439, 1694, 5552, 311, 5557, 11, 17452,
11, 10034, 11, 477, 11759, 13, 128009, 128006, 882,
128007, 271, 791, 502, 77355, 3280, 690, 10536, 1022,
449, 264, 72097, 2489, 1990, 35812, 323, 64921, 13,
128009, 128006, 78191, 128007, 271, 42079, 128009, 128004, 128004,
128004, 128004
]
],
device=device,
)
attention_mask = torch.tensor(
[
[
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, False, False, False, False
]
],
device=device,
)
with device:
model = transformers.AutoModelForCausalLM.from_pretrained(
"/models/meta-llama/Llama-3.2-1B-Instruct", # https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
attn_implementation="sdpa",
torch_dtype=torch.bfloat16,
)
causal_logits = model(input_ids, attention_mask=attention_mask, is_causal=True).logits
noncausal_logits = model(input_ids, attention_mask=attention_mask, is_causal=False).logits
torch.testing.assert_close(causal_logits, noncausal_logits) # shouldn't be true, otherwise what is_causal controlling?
```
Observe that mocking `LlamaModel._prepare_4d_causal_attention_mask_with_cache_position` with an implementation that just replicates the `attention_mask` also has no effect when using `is_causal=True`:
```python
from unittest import mock
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
device: torch.device,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
min_dtype = torch.tensor(torch.finfo(dtype).min, dtype=dtype, device=attention_mask.device)
return ~attention_mask.view(batch_size, 1, 1, sequence_length).expand(batch_size, 1, sequence_length, sequence_length) * min_dtype
with mock.patch.object(model.model, "_prepare_4d_causal_attention_mask_with_cache_position", _prepare_4d_causal_attention_mask_with_cache_position):
sdpa_causal_logits = model(input_ids, attention_mask=attention_mask, is_causal=True).logits
hf_causal_logits = model(input_ids, attention_mask=attention_mask, is_causal=True).logits
torch.testing.assert_close(sdpa_causal_logits, hf_causal_logits) # shouldn't be true, otherwise what is _prepare_4d_causal_attention_mask_with_cache_position doing?
```
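As an aside relevant to point 2 under "Expected behavior" below: PyTorch's own kernel-level flag does change the result on its own. A toy check, independent of transformers (shapes and values are made up):
```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 1, 4, 8)
full_mask = torch.ones(1, 1, 4, 4, dtype=torch.bool)  # every position may attend everywhere

causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)
non_causal = F.scaled_dot_product_attention(q, k, v, attn_mask=full_mask, is_causal=False)
print(torch.allclose(causal, non_causal))  # False — the flag alone masks future positions
```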
### Expected behavior
1. At the very least, `LlamaModel._prepare_4d_causal_attention_mask_with_cache_position` should respect `is_causal=False`. Right now, it always returns a causal mask when using SDPA with sequence_length > 1 and an attention_mask with at least one False element.
2. It is not really clear to me why we aren't purely relying on SDPA's own `is_causal` parameter. My 2nd example demonstrates that the current implementation of `LlamaModel._prepare_4d_causal_attention_mask_with_cache_position` definitely isn't always necessary... so when is it necessary? Or what parts are necessary? Looking at the equivalent implementation that PyTorch describes for [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html), it seems like we are replicating a bit of their handling of `attn_mask`. Also, notably, there are 4 separate CUDA allocations in the current implementation (`torch.full`, `torch.triu`, `torch.arange`, `Tensor.clone`) compared to 1 in my proposal. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35245 |
TITLE
Add Dinov2 with registers
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
This PR is a continuance of #32905 by @NielsRogge.
When running pytest there were two errors:
ERROR examples/research_projects/codeparrot/scripts/tests/test_deduplicate.py
ERROR templates/adding_a_missing_tokenization_test/cookiecutter-template-{{cookiecutter.modelname}}/test_tokenization_{{cookiecutter.lowercase_modelname}}.py
I am not sure what the cause of these errors is. Any guidance would be appreciated.
**Relevant Reviewers**
@ArthurZucker | [
77,
62,
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Vision",
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/35022 |
TITLE
Only Fine-tune the embeddings of the added special tokens
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Hi, I added some new special tokens to the LLMs (specifically I'm using Qwen2-VL) and then I only want to fine-tune the embedding layers of these added tokens while keeping all other parameters (and the embedding layers for other tokens) frozen. I wonder if there is a built-in way to do so instead of fine-tuning the whole embedding matrix?
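For what it's worth, a common pattern (not a built-in API — this is only a sketch using `gpt2` as a stand-in checkpoint and made-up token names; the same idea should apply to Qwen2-VL's text embeddings) is to keep the whole embedding matrix trainable but mask the gradient of the original rows with a hook:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<new_tok_1>", "<new_tok_2>"]}
)
model.resize_token_embeddings(len(tokenizer))

# Freeze everything except the input embedding matrix
for param in model.parameters():
    param.requires_grad = False
embeddings = model.get_input_embeddings()
embeddings.weight.requires_grad = True

# Zero the gradient for every row except the newly added ones
grad_mask = torch.zeros_like(embeddings.weight)
grad_mask[len(tokenizer) - num_added :] = 1.0
embeddings.weight.register_hook(lambda grad: grad * grad_mask.to(grad.device))
```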
### Motivation
If we want to maximumly retain the original capabilities of the model while adding new tokens for certain scenarios, this might be needed, especially when we don't have much data and do not want to alter the pretrained weights.
Another question: if we have a considerable amount of data, is it recommended to fine-tune the whole embedding matrix or only the embeddings for the added tokens?
### Your contribution
If it's a reasonable feature and not implemented yet, I'm happy to submit a PR. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35205 |
TITLE
run_mlm_flax on tpu v5-pods
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Latest update of both transformers and jax
### Who can help?
@ArthurZucker I am trying to use the `run_mlm_flax.py` script to train a RoBERTa model on a v5-256 pod. However, while a single v3-8 is capable of running with `per_device_batch_size=128`, the v5-256 is only able to run with `per_device_batch_size=2`. Any ideas?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using default code.
### Expected behavior
I would expect a v5-256 to run a lot faster here. | [
55,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Flax",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33827 |
TITLE
bug in the token healing
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.1
- Platform: Windows-11-10.0.22631-SP0
- Python version: 3.12.5
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.4
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce GTX 1660 Ti
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the below script to generate outputs. I am using a custom trained GPT2 model.
```python
inputs = self.tokenizer(input, padding=True, return_tensors="pt").to(self.dev)
generated_ids = self.model.generate(
**inputs,
**get_variable_dictionary(args),
pad_token_id=self.tokenizer.eos_token_id,
renormalize_logits=True,
token_healing=True,
tokenizer=self.tokenizer,
)
```
Below code block is filling the **GenerationConfig** parameters:
```python
**get_variable_dictionary(args)
```
The script is run without issues when ```token_healing``` is disabled. When ```token_healing``` is enabled, this error is occured:
```bash
!!! Exception during processing !!! 'ExtensionsTrie' object has no attribute 'values'
Traceback (most recent call last):
File "D:\sd\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\sd\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\sd\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 102, in generate_multiple_texts
generated_ids = self.model.generate(
^^^^^^^^^^^^^^^^^^^^
File "D:\sd\ComfyUI\venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\sd\ComfyUI\venv\Lib\site-packages\transformers\generation\utils.py", line 1882, in generate
input_ids = self.heal_tokens(input_ids, tokenizer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\sd\ComfyUI\venv\Lib\site-packages\transformers\generation\utils.py", line 2295, in heal_tokens
seq_bias = {(alt_tok,): 10.0 for alt_tok in vocab_trie.values(prefix=tail_tok)}
^^^^^^^^^^^^^^^^^
AttributeError: 'ExtensionsTrie' object has no attribute 'values'
```
I did some changes to the code in [src/transformers/generation/utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2295) file at line 2295 and it is working after the below updates:
```python
seq_bias = {(tokenizer.convert_tokens_to_ids(alt_tok),): 10.0 for alt_tok in vocab_trie.extensions(prefix=tail_tok)}
```
As I understand from the exceptions, ```sequence_bias``` needs the keys to be tuples of integer token ids, but the current code passes tuples of strings as keys. Also, **ExtensionsTrie** doesn't have a `values` function, but it does have an `extensions` function.
I can't be sure if the error is a general one because I saw that there are already tests for ```token_healing``` and it passed those tests.
### Expected behavior
When the ```token_healing``` option is enabled, generation should complete without terminating with an error. | [
64,
18
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Generation"
] |
https://api.github.com/repos/huggingface/transformers/issues/34613 |
TITLE
redirect logging output to `stdout` instead of `stderr`
COMMENTS
3
REACTIONS
+1: 2
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Redirect logging output to `stdout` instead of `stderr`. Specifically, add argument `stream=sys.stdout` at: https://github.com/huggingface/transformers/blob/893ad04fad145904ccb71e4e858e4134c32226b6/src/transformers/utils/logging.py#L88.
### Motivation
It is a common practice to redirect logging output to `stdout` in deep learning frameworks.
For example: Detectron2: https://github.com/facebookresearch/detectron2/blob/8d85329aed8506ea3672e3e208971345973ea761/detectron2/utils/logger.py#L84
fairseq: https://github.com/facebookresearch/fairseq/blob/ecbf110e1eb43861214b05fa001eff584954f65a/fairseq_cli/train.py#L22
Deepspeed: https://github.com/microsoft/DeepSpeed/blob/2b41d6212c160a3645691b77b210ba7dd957c23f/deepspeed/utils/logging.py#L69.
Here is my analysis. Traditionally, `stdout` is used for output of the program and `stderr` is used for warning/debugging. That's why the default stream of `logging` is `stderr`. However, the output of deep learning frameworks consists of losses, eval results and checkpoints. It's a common practice to use `logger.info()` to display this information. Therefore, it would be more appropriate to redirect these outputs to `stdout` since they are part of the program's normal output.
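Until/unless the default changes, a user-side redirection is already possible with the module's public helpers (sketch; it assumes this runs before any custom handlers are attached):
```python
import logging
import sys

from transformers.utils import logging as hf_logging

hf_logging.disable_default_handler()                        # drop the default stderr handler
hf_logging.add_handler(logging.StreamHandler(sys.stdout))   # send library logs to stdout instead
```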
### Your contribution
I can submit a PR if this request is confirmed. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35276 |
TITLE
inconsistent generation
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.2
- Python version: 3.8.18
- Huggingface_hub version: 0.26.3
- Safetensors version: 0.4.1
- Accelerate version: 0.32.1
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- GPU type: NVIDIA A10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I used the same input but changed the code logic slightly, and got different results.
Here is the context of the code (mainly loading the model):
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig, DynamicCache
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_path, attn_implementation="flash_attention_2", device_map=device).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
encoded_input = tokenizer("what is your name", return_tensors='pt').to(device)
window_size = 1
front_input = {key: value[:, :-window_size] for key, value in encoded_input.items()}
rear_input = {key: value[:, -window_size:] for key, value in encoded_input.items()}
```
and here is the first generation code
```
past_key_values = DynamicCache()
generation = model.generate(**encoded_input, past_key_values=past_key_values, max_new_tokens=32, do_sample=False)
generation = tokenizer.batch_decode(generation)[0]
print(generation)
```
the generation is as below:
```
what is your name?" and "what is your occupation?" are not necessary. The form is designed to be as simple and easy to fill out as possible, while still gathering the
```
and the seconde generation code is:
```
past_key_values = DynamicCache()
with torch.no_grad():
_ = model(**front_input, past_key_values=past_key_values, use_cache=True)
generation = model.generate(**encoded_input, past_key_values=past_key_values, max_new_tokens=32, do_sample=False)
generation = tokenizer.batch_decode(generation)[0]
```
the generation is as below:
```
what is your name?" and "what is your occupation?" are not necessary. The form is designed to be as simple and easy to fill out as possible, so that you can
```
### Expected behavior
Well, it's weird: I think these two generation processes should be identical since I do not use sampling, so why are the results different? Is there anything wrong with my approach? | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34059 |
TITLE
data load speed is quite slow when dataloader_num_workers=0
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-6.9.10-amd64-x86_64-with-glibc2.39
- Python version: 3.11.9
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.4
- Accelerate version: 1.0.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero3_save_16bit_model': False, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX A6000
### Who can help?
@muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My dataset:
```
import torch
import os
import json
from datetime import datetime  # needed for the debug message below
from utils.video import read_frames_decord
from torchvision.transforms import Compose, Resize, CenterCrop, RandomResizedCrop, RandomHorizontalFlip, ToTensor, Normalize
from PIL import Image
class DatasetForOfflineDistill(torch.utils.data.Dataset):
def __init__(
self,
anno_path: str | os.PathLike,
data_root: str | os.PathLike,
feat_path: str | os.PathLike,
tokenizer: torch.nn.Module | None = None,
tokenize: bool = False,
num_frames: int = 8,
test: bool = False
):
with open(anno_path) as f:
self.anno = json.load(f)
self.data_root = data_root
# keys of each item: idx, text_embeds, video_embeds
self.feat = torch.load(feat_path, weights_only=True)
self.num_frames = num_frames
self.transforms = self.build_transforms(test)
self.tokenizer = tokenizer
self.tokenize = tokenize
def build_transforms(self, test: bool):
image_mean = [
0.48145466,
0.4578275,
0.40821073
]
image_std = [
0.26862954,
0.26130258,
0.27577711
]
size = 224
normalize = (
Normalize(mean=image_mean, std=image_std)
)
train_transforms = Compose(
[
RandomResizedCrop(size),
RandomHorizontalFlip(),
ToTensor(),
normalize,
]
)
val_transforms = Compose(
[
Resize(size),
CenterCrop(size),
ToTensor(),
normalize,
]
)
if test:
return val_transforms
return train_transforms
def __len__(self):
return len(self.anno)
def __getitem__(self, idx):
rank = int(os.environ.get("LOCAL_RANK") or 0)
# HERE IS THE DEBUG MESSAGE
now = datetime.now()
dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
print(f'[{dt_string}] Rank {rank} is loading', idx)
item = self.feat[idx]
anno_idx = item['idx']
# [teacher_dim] -> [1, teacher_dim]
text_embeds = item['text_embeds']
video_embeds = item['video_embeds']
caption = self.anno[anno_idx]['caption']
if self.tokenizer is not None and self.tokenize:
tokenized_caption = self.tokenizer(caption)
caption = {
'input_ids': tokenized_caption['input_ids'],
'attention_mask': tokenized_caption['attention_mask'],
}
video_path = os.path.join(self.data_root, self.anno[anno_idx]['video'])
video = read_frames_decord(video_path, num_frames=self.num_frames).numpy()
frames = [self.transforms(Image.fromarray(frame)) for frame in video]
frames = torch.stack(frames)
return {
'caption': caption,
'video': frames,
'text_embeds': text_embeds,
'video_embeds': video_embeds
}
```
Part of my training script:
```
train_data = DatasetForOfflineDistill(
anno_path=data_config['anno_path'],
data_root=data_config['data_root'],
feat_path=data_config['feat_paths'][teacher_type],
tokenize=False,
num_frames=num_frames,
)
def custom_collate_fn(batch):
# batch is a list of dicts
collated_batch = {}
for key in batch[0].keys():
collated_batch[key] = [b[key] for b in batch]
# collated_batch['video'] is a list of [num_frames, 3, 224, 224]
# collated_batch['caption'] is a list of strings
tokenized_caption = model.student_caller.tokenizer(collated_batch['caption'], padding=True, return_tensors="pt")
collated_batch['input_ids'] = tokenized_caption['input_ids']
collated_batch['attention_mask'] = tokenized_caption['attention_mask']
collated_batch['pixel_values'] = torch.stack(collated_batch['video'])
collated_batch['video_embeds'] = torch.stack(collated_batch['video_embeds'])
collated_batch['text_embeds'] = torch.stack(collated_batch['text_embeds'])
return collated_batch
trainer = Trainer(
model=model,
train_dataset=train_data,
args=transformers.TrainingArguments(
per_device_train_batch_size=micro_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_ratio=warmup_ratio,
num_train_epochs=num_epochs,
learning_rate=learning_rate,
fp16=True if not bf16 else False,
bf16=bf16,
logging_steps=logging_steps,
save_strategy="steps",
eval_steps=None,
save_steps=save_steps,
output_dir=output_dir,
save_total_limit=1,
load_best_model_at_end=False,
ddp_find_unused_parameters=False if ddp else None,
run_name=run_name,
report_to=None,
deepspeed=deepspeed,
gradient_checkpointing=grad_checkpoint,
remove_unused_columns=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
# dataloader_prefetch_factor=10,
# dataloader_persistent_workers=True,
),
data_collator=custom_collate_fn,
)
```
The model is a simple CLIPModel.
If dataloader_num_workers=0 and dataloader_pin_memory=True, the CPU load is around 1000 but the print speed of the debug message (see my code above) is about 1-2/sec. See the image below.
<img width="1010" alt="image" src="https://github.com/user-attachments/assets/6d433ae4-4620-4c2a-a0e7-e852e8e14883">
<img width="1979" alt="image" src="https://github.com/user-attachments/assets/08f68dad-0a6b-4363-a51f-e5d62a965fae">
If dataloader_num_workers=4, dataloader_pin_memory=True, dataloader_prefetch_factor=2 and dataloader_persistent_workers=True, the CPU load is around 100 and the print speed of the debug message (see my code above) is above 20/sec.
<img width="1033" alt="image" src="https://github.com/user-attachments/assets/109540b0-d874-49a5-baa2-450eee5e5609">
<img width="1969" alt="image" src="https://github.com/user-attachments/assets/aa431e28-6237-40b7-9445-d9620bff8e27">
### Expected behavior
1. The speed should be the same whatever the setting (at least, dataloader_num_workers=0 should not be this much slower than dataloader_num_workers=4).
2. The dataloader should prefetch data to avoid the GPU waiting. | [
66,
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"trainer",
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34033 |
TITLE
IDEFICS can't use inputs_embeds in generate function
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: parallel
- Using GPU in script?: yes
- GPU type: NVIDIA RTX A6000
When I use inputs_embeds instead of input_ids, the idefics model's generate function returns an error:
"""
You passed `inputs_embeds` to `.generate()`, but the model class IdeficsForVisionText2Text doesn't have its forwarding implemented. See the GPT2 implementation for an example ([Generate: decoder-only models can generate with `inputs_embeds` by gante · Pull Request #21405 · hug](https://github.com/huggingface/transformers/pull/21405)), and feel free to open a PR with it!
"""
However, in IdeficsForVisionText2Text's definition, I find that forward already has inputs_embeds enabled. The following function is defined at line 1541 of the code:
```python
class IdeficsForVisionText2Text(IdeficsPreTrainedModel):
_keys_to_ignore_on_load_missing = [r"lm_head.weight"]
_tied_weights_keys = ["model.embed_tokens.weight", "lm_head.weight"]
def __init__(self, config, vision_model=None):
super().__init__(config)
self.model = IdeficsModel(config)
self.lm_head = IdeficsDecoupledLinear(
in_features=config.hidden_size,
out_features=config.vocab_size,
out_additional_features=config.additional_vocab_size,
bias=False,
partially_freeze=config.freeze_lm_head,
)
# Initialize weights and apply final processing
self.post_init()
def get_input_embeddings(self):
return self.model.embed_tokens
def set_input_embeddings(self, value):
self.model.embed_tokens = value
def get_output_embeddings(self):
return self.lm_head
def set_output_embeddings(self, new_embeddings):
self.lm_head = new_embeddings
def set_decoder(self, decoder):
self.model = decoder
def get_decoder(self):
return self.model
def tie_weights(self):
"""
Overwrite `transformers.modeling_utils.PreTrainedModel.tie_weights` to handle the case of
IdeficsDecoupledLinear and IdeficsDecoupledEmbedding.
"""
output_embeddings = self.get_output_embeddings()
input_embeddings = self.get_input_embeddings()
if getattr(self.config, "tie_word_embeddings", True):
output_embeddings.weight = input_embeddings.weight
if input_embeddings.num_additional_embeddings > 0:
assert output_embeddings.out_additional_features == input_embeddings.num_additional_embeddings
output_embeddings.additional_fc.weight = input_embeddings.additional_embedding.weight
if hasattr(output_embeddings, "out_features") and hasattr(input_embeddings, "num_embeddings"):
output_embeddings.out_features = input_embeddings.num_embeddings
if hasattr(output_embeddings, "out_additional_features") and hasattr(
input_embeddings, "num_additional_embeddings"
):
output_embeddings.out_additional_features = input_embeddings.num_additional_embeddings
@add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
@replace_return_docstrings(output_type=IdeficsCausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
def forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
image_encoder_embeddings: Optional[torch.FloatTensor] = None,
perceiver_embeddings: Optional[torch.FloatTensor] = None,
image_attention_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: Optional[bool] = False,
return_dict: Optional[bool] = None,
cache_position: Optional[torch.LongTensor] = None,
) -> Union[Tuple, IdeficsCausalLMOutputWithPast]:
```
So why can't this code just use generate? I'd be very grateful if someone could solve this problem 🙏
### Who can help?
@zucchini-nlp @patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor
device = "cuda:1" if torch.cuda.is_available() else "cpu"
# We feed to the model an arbitrary sequence of text strings and images. Images can be either URLs or PIL Images.
prompts = [
[
"https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
"In this picture from Asterix and Obelix, we can see"
],
]
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b")
# --batched mode
inputs = processor(prompts, return_tensors="pt").to(device)
# --single sample mode
# inputs = processor(prompts[0], return_tensors="pt").to(device)
# Generation args
bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
# load the model (the original snippet used an undefined `interface` object here)
model = IdeficsForVisionText2Text.from_pretrained("HuggingFaceM4/idefics-9b").to(device)
inputs_embeds = model.model.embed_tokens(inputs["input_ids"])
inputs["input_ids"] = None
generated_ids = model.generate(inputs_embeds=inputs_embeds, bad_words_ids=bad_words_ids, max_length=100)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
for i, t in enumerate(generated_text):
print(f"{i}:\n{t}\n")
```
### Expected behavior
It shouldn't crash | [
64,
62,
18
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision",
"Generation"
] |
https://api.github.com/repos/huggingface/transformers/issues/33342 |
TITLE
Add "EAT: Self-Supervised Pre-Training with Efficient Audio Transformer"
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
The original authors of the model write:
> EAT is an audio self-supervised learning model with high effectiveness and efficiency during self-supervised pre-training. You can find details in the paper [EAT: Self-Supervised Pre-Training with Efficient Audio Transformer](https://arxiv.org/abs/2401.03497).
A self-supervised learning model can benefit the community greatly, since it requires no labelled data and can be trained on any dataset. In particular, the strength of this approach is that it can be applied to variable-length audio. With enough resources (for example, compute and data), it could have a similar reach as BERT.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- GitHub Repo: https://github.com/cwx-worst-one/EAT
- Links for model checkpoints:
- [EAT-base_epoch30](https://drive.google.com/file/d/19hfzLgHCkyqTOYmHt8dqVa9nm-weBq4f/view?usp=sharing) (pre-training)
- [EAT-base_epoch30](https://drive.google.com/file/d/1aCYiQmoZv_Gh1FxnR-CCWpNAp6DIJzn6/view?usp=sharing) (fine-tuning on AS-2M)
- [EAT-large_epoch20](https://drive.google.com/file/d/1PEgriRvHsqrtLzlA478VemX7Q0ZGl889/view?usp=sharing) (pre-training)
- [EAT-large_epoch20](https://drive.google.com/file/d/1b_f_nQAdjM1B6u72OFUtFiUu-4yM2shd/view?usp=sharing) (fine-tuning on AS-2M)
- Paper: https://www.ijcai.org/proceedings/2024/421 | [
77,
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
1,
0,
0,
0,
0
] | [
"New model",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33661 |
TITLE
Undefined variable in: scripts/check_tokenizers.py
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Python 3.12.4
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
if check_diff(spm_ids[first : first + i], tok_ids[first : first + j], sp, tok) and check_details(
    line,
    spm_ids[first + i : last],
    tok_ids[first + j : last],
    slow,
    fast,
```
### Expected behavior
Undefined Variables: sp and tok are not defined anywhere within the check_details function or its enclosing scopes. This will result in a NameError when the code attempts to execute this line. | [
47,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Tokenization",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34600 |
TITLE
AssertionError for Pytorch PiPPy example
COMMENTS
3
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
```
(zt) root@autodl-container-7071118252-7032359d:~/test/PiPPy/examples/llama# transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.44.0
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.35
- Python version: 3.10.0
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090
```
### Who can help?
pipelines: @Rocketknight1
Big Model Inference: @SunMarc
Hi! I am MD students who interested in pipeline parallelism in LLM inference. I have successfully run a[ llama2 example](https://github.com/pytorch/PiPPy/blob/main/examples/llama/pippy_llama.py) in [PiPPy repo](https://github.com/pytorch/PiPPy). So I want to further modify this code to support Llama3 series models/, especially for **Llama-3.2-3B**. But when I run this code just simple modfy the path of model and tokenizer. But It turned out bug:
```
(zt) root@autodl-container-7071118252-7032359d:~/test/PiPPy/examples/llama# torchrun --nproc-per-node 2 pippy_llama.py
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00, 1.09s/it]
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(128256, 3072)
(layers): ModuleList(
(0-27): 28 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=3072, out_features=3072, bias=False)
(k_proj): Linear(in_features=3072, out_features=1024, bias=False)
(v_proj): Linear(in_features=3072, out_features=1024, bias=False)
(o_proj): Linear(in_features=3072, out_features=3072, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=3072, out_features=8192, bias=False)
(up_proj): Linear(in_features=3072, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=3072, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((3072,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=3072, out_features=128256, bias=False)
)
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00, 1.15s/it]
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(128256, 3072)
(layers): ModuleList(
(0-27): 28 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=3072, out_features=3072, bias=False)
(k_proj): Linear(in_features=3072, out_features=1024, bias=False)
(v_proj): Linear(in_features=3072, out_features=1024, bias=False)
(o_proj): Linear(in_features=3072, out_features=3072, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=3072, out_features=8192, bias=False)
(up_proj): Linear(in_features=3072, out_features=8192, bias=False)
(down_proj): Linear(in_features=8192, out_features=3072, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((3072,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((3072,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((3072,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=3072, out_features=128256, bias=False)
)
layers_per_rank = 14
layers_per_rank = 14
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/test/PiPPy/examples/llama/pippy_llama.py", line 36, in <module>
[rank0]: pipe = pipeline(llama, mb_args=(mb_inputs["input_ids"],))
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 1238, in pipeline
[rank0]: return Pipe.from_tracing(
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 1051, in from_tracing
[rank0]: pipe = Pipe._from_traced(
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 750, in _from_traced
[rank0]: new_submod = _outline_submodules(submodule.graph)
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_unflatten.py", line 24, in _outline_submodules
[rank0]: ).run_outer()
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1014, in run_outer
[rank0]: self.run_from(node_idx)
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1094, in run_from
[rank0]: ).run_from(node_idx)
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1094, in run_from
[rank0]: ).run_from(node_idx)
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1043, in run_from
[rank0]: self.finalize_outputs()
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 993, in finalize_outputs
[rank0]: _verify_graph_equivalence(self.cached_graph_module, self.module)
[rank0]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 655, in _verify_graph_equivalence
[rank0]: assert graph_dump(x.graph) == graph_dump(y.graph)
[rank0]: AssertionError
[rank0]:[W1104 21:21:40.765172753 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
[rank1]: Traceback (most recent call last):
[rank1]: File "/root/test/PiPPy/examples/llama/pippy_llama.py", line 36, in <module>
[rank1]: pipe = pipeline(llama, mb_args=(mb_inputs["input_ids"],))
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 1238, in pipeline
[rank1]: return Pipe.from_tracing(
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 1051, in from_tracing
[rank1]: pipe = Pipe._from_traced(
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 750, in _from_traced
[rank1]: new_submod = _outline_submodules(submodule.graph)
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/pipelining/_unflatten.py", line 24, in _outline_submodules
[rank1]: ).run_outer()
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1014, in run_outer
[rank1]: self.run_from(node_idx)
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1094, in run_from
[rank1]: ).run_from(node_idx)
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1094, in run_from
[rank1]: ).run_from(node_idx)
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 1043, in run_from
[rank1]: self.finalize_outputs()
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 993, in finalize_outputs
[rank1]: _verify_graph_equivalence(self.cached_graph_module, self.module)
[rank1]: File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/export/unflatten.py", line 655, in _verify_graph_equivalence
[rank1]: assert graph_dump(x.graph) == graph_dump(y.graph)
[rank1]: AssertionError
W1104 21:21:41.688867 2513 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 2540 closing signal SIGTERM
E1104 21:21:42.054025 2513 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 2539) of binary: /root/miniconda3/envs/zt/bin/python
Traceback (most recent call last):
File "/root/miniconda3/envs/zt/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/run.py", line 919, in main
run(args)
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
pippy_llama.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-11-04_21:21:41
host : autodl-container-7071118252-7032359d
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2539)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
The same problem occurred when I ran this example for the Llama2 model, but I fixed it by downgrading the version of transformers to **4.36.2**. However, when I use this solution for Llama3, it seems that this dependency doesn't support the newest Llama models.
```
(zt) root@autodl-container-7071118252-7032359d:~/test/PiPPy/examples/llama# torchrun --nproc-per-node 2 pippy_llama.py
Traceback (most recent call last):
File "/root/test/PiPPy/examples/llama/pippy_llama.py", line 8, in <module>
llama = AutoModelForCausalLM.from_pretrained(
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 526, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1124, in from_pretrained
return config_class.from_dict(config_dict, **unused_kwargs)
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 764, in from_dict
config = cls(**config_dict)
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 160, in __init__
self._rope_scaling_validation()
File "/root/miniconda3/envs/zt/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 180, in _rope_scaling_validation
raise ValueError(
ValueError: `rope_scaling` must be a dictionary with with two fields, `type` and `factor`, got {'factor': 32.0, 'high_freq_factor': 4.0, 'low_freq_factor': 1.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
```
So how can I fix it? I am not good at fixing this bug. :(
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Clone the repo and install the related dependencies
```
git clone https://github.com/pytorch/PiPPy.git
pip install -r requirements.txt
```
2. Go to the llama directory and run `pippy_llama.py`
`torchrun --nproc-per-node 2 pippy_llama.py`
**Here is the code I modified**
```python
# $ torchrun --nproc-per-node 4 pippy_llama.py
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.distributed.pipelining import SplitPoint, pipeline, ScheduleGPipe
# Grab the model
llama = AutoModelForCausalLM.from_pretrained(
"/root/autodl-tmp/model/Llama-3.2-3B", local_files_only= True
)
print(llama)
tokenizer = AutoTokenizer.from_pretrained("/root/autodl-tmp/model/Llama-3.2-3B", local_files_only= True)
tokenizer.pad_token = tokenizer.eos_token
mb_prompts = (
"How do you", "I like to",
) # microbatch size = 2
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device(f"cuda:{rank % torch.cuda.device_count()}")
torch.distributed.init_process_group(rank=rank, world_size=world_size)
llama.to(device).eval()
# Cut model by equal number of layers per rank
layers_per_rank = llama.config.num_hidden_layers // world_size
print(f"layers_per_rank = {layers_per_rank}")
split_spec = {
f"model.layers.{i * layers_per_rank}": SplitPoint.BEGINNING
for i in range(1, world_size)
}
# Create a pipeline representation from the model
mb_inputs = tokenizer(mb_prompts, return_tensors="pt", padding=True).to(device)
pipe = pipeline(llama, mb_args=(mb_inputs["input_ids"],))
# Create pipeline stage for each rank
stage = pipe.build_stage(rank, device=device)
# Run time inputs
full_batch_prompts = (
"How do you", "I like to", "Can I help", "You need to",
"The weather is", "I found a", "What is your", "You are so",
) # full batch size = 8
inputs = tokenizer(full_batch_prompts, return_tensors="pt", padding=True).to(device)
# Attach to a schedule
# number of microbatches = 8 // 2 = 4
num_mbs = 4
schedule = ScheduleGPipe(stage, num_mbs)
# Run
if rank == 0:
args = inputs["input_ids"]
else:
args = None
output = schedule.step(args)
# Decode
if output is not None:
next_token_logits = output[0][:, -1, :]
next_token = torch.argmax(next_token_logits, dim=-1)
print(tokenizer.batch_decode(next_token))
```
### Expected behavior
just the output for one decoding iteration of LLM
```
Outputs:
['make', 'think', 'you', 'be', 'getting', 'great', 'favorite', 'right']
``` | [
16,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Pipeline Parallel",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33518 |
TITLE
HQQ
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0.dev20240915+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce MX330
And:
python.exe -m pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124
python.exe -m pip install --upgrade transformers
python.exe -m pip install --upgrade git+https://github.com/mobiusml/hqq.git
python.exe -m pip install --upgrade huggingface_hub
### Who can help?
@SunMarc
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
# from https://huggingface.co/docs/transformers/v4.44.2/quantization/hqq
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig

# Method 1: all linear layers will use the same quantization config
quant_config = HqqConfig(nbits=8, group_size=64, quant_zero=False, quant_scale=False, axis=0)  # axis=0 is used by default
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.float16,
    device_map="cuda",
    quantization_config=quant_config
)
```
Error:
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\hqq\hqq2.py", line 4, in <module>
quant_config = HqqConfig(nbits=8, group_size=64, quant_zero=False, quant_scale=False, axis=0) #axis=0 is used by default
File "C:\Users\Admin\Desktop\hqq\venv\lib\site-packages\transformers\utils\quantization_config.py", line 228, in __init__
from hqq.core.quantize import BaseQuantizeConfig as HQQBaseQuantizeConfig
File "C:\Users\Admin\Desktop\hqq\hqq.py", line 3, in <module>
from hqq.models.hf.base import AutoHQQHFModel
ModuleNotFoundError: No module named 'hqq.models'; 'hqq' is not a package
### Expected behavior
Quantized model. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35402 |
TITLE
AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels' still has no clear guide around
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Python 3.11.10, transformers 4.47.0
### Who can help?
@stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Trying to train by using
```python
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
```
as the feature extractor, I keep getting `AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'`, which still has no clear guide around it.
I found [this issue](https://github.com/huggingface/transformers/issues/25801) that said to repair the docs, but I still haven't found the solution by reading the links and the docs surrounding them. Is this still a supported feature, or should I move to another feature extractor?
### Expected behavior
The solution to `AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'` should be
```python
feature_extractor = AutoFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512", do_reduce_labels=True)
```
according to the [link](https://github.com/huggingface/transformers/issues/25801), but the problem persists.
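For what it's worth, here is a minimal sketch of the workaround I am assuming should work, since the attribute appears to have been renamed to `do_reduce_labels` (this is my assumption, not a confirmed fix):
```python
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained(
    "nvidia/segformer-b0-finetuned-ade-512-512", do_reduce_labels=True
)
# read the renamed attribute, falling back to the old name for older transformers versions
reduce_labels = getattr(feature_extractor, "do_reduce_labels", getattr(feature_extractor, "reduce_labels", False))
```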
Edit2:
Complete error message; by the time I wrote this I had already tried running it again. Here's the complete error output:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[158], line 1
----> 1 trainer.train()
2 trainer.push_to_hub()
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:2155, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
2152 try:
2153 # Disable progress bars when uploading models during checkpoints to avoid polluting stdout
2154 hf_hub_utils.disable_progress_bars()
-> 2155 return inner_training_loop(
2156 args=args,
2157 resume_from_checkpoint=resume_from_checkpoint,
2158 trial=trial,
2159 ignore_keys_for_eval=ignore_keys_for_eval,
2160 )
2161 finally:
2162 hf_hub_utils.enable_progress_bars()
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:2589, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2587 self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
2588 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
-> 2589 self._maybe_log_save_evaluate(
2590 tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time
2591 )
2592 else:
2593 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:3047, in Trainer._maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time)
3045 metrics = None
3046 if self.control.should_evaluate:
-> 3047 metrics = self._evaluate(trial, ignore_keys_for_eval)
3048 is_new_best_metric = self._determine_best_metric(metrics=metrics, trial=trial)
3050 if self.args.save_strategy == SaveStrategy.BEST:
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:3001, in Trainer._evaluate(self, trial, ignore_keys_for_eval, skip_scheduler)
3000 def _evaluate(self, trial, ignore_keys_for_eval, skip_scheduler=False):
-> 3001 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
3002 self._report_to_hp_search(trial, self.state.global_step, metrics)
3004 # Run delayed LR scheduler now that metrics are populated
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:4051, in Trainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
4048 start_time = time.time()
4050 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
-> 4051 output = eval_loop(
4052 eval_dataloader,
4053 description="Evaluation",
4054 # No point gathering the predictions if there are no metrics, otherwise we defer to
4055 # self.args.prediction_loss_only
4056 prediction_loss_only=True if self.compute_metrics is None else None,
4057 ignore_keys=ignore_keys,
4058 metric_key_prefix=metric_key_prefix,
4059 )
4061 total_batch_size = self.args.eval_batch_size * self.args.world_size
4062 if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:4340, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
4338 eval_set_kwargs["losses"] = all_losses if "loss" in args.include_for_metrics else None
4339 eval_set_kwargs["inputs"] = all_inputs if "inputs" in args.include_for_metrics else None
-> 4340 metrics = self.compute_metrics(
4341 EvalPrediction(predictions=all_preds, label_ids=all_labels, **eval_set_kwargs)
4342 )
4343 elif metrics is None:
4344 metrics = {}
Cell In[156], line 27, in compute_metrics(eval_pred)
19 pred_labels = logits_tensor.detach().cpu().numpy()
20 # currently using _compute instead of compute
21 # see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
22 metrics = metric._compute(
23 predictions=pred_labels,
24 references=labels,
25 num_labels=num_labels,
26 ignore_index=0,
---> 27 reduce_labels=feature_extractor.reduce_labels,
28 )
30 # add per category metrics as individual key-value pairs
31 per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'
``` | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/33429 |
TITLE
`Zero-shot object detection` documentation sentence rephrase
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
In [Zero-shot object detection](https://huggingface.co/docs/transformers/tasks/zero_shot_object_detection) documentation, there is an incomplete sentence:
```
...object classification and localization heads. associate images and their corresponding textual descriptions...
```
The sentence beginning with "associate images" needs to be rephrased to improve clarity and complete the thought. | [
74
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"Documentation"
] |
https://api.github.com/repos/huggingface/transformers/issues/33415 |
TITLE
Cannot batch them ({'num_frames', 'input_features', 'is_last'} != {'input_features', 'is_last'})
COMMENTS
11
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
I have the same problem.
When I use the pipeline for inference with batch_size=1, everything is OK. However, the error occurs when inferring with batch_size>1.
transformers: 4.44.0
torch: 2.1.2
model: whisper-large-v3-zh-punct
audio_data: wav data
```python
import time
from transformers import pipeline, WhisperForConditionalGeneration, AutoModelForSpeechSeq2Seq, AutoProcessor
import os
import torch
DATA_DIR = r'C:\Users\chenjq2\Desktop\wav格式录音'
# DATA_DIR = r'./test_data'
LANGUAGE = 'zh'
TASK = 'transcribe'
files = os.listdir(DATA_DIR)
paths = []
for name in files:
paths.append(os.path.join(DATA_DIR, name))
MODEL_ID = r"C:\Users\chenjq2\Desktop\tmp\models--BELLE-2--Belle-whisper-large-v3-zh-punct\snapshots\f81f1ac2f123f118094a7baa69e532eab375600e"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model = AutoModelForSpeechSeq2Seq.from_pretrained(
MODEL_ID, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(MODEL_ID, language=LANGUAGE, task=TASK)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
torch_dtype=torch_dtype,
device=device,
)
t1 = time.time()
print(pipe(paths, batch_size=4))
print(f'time cost:{time.time()-t1}')
```
error msg:
```
E:\program\anaconda3\envs\nlp\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Traceback (most recent call last):
File "E:\program\anaconda3\envs\nlp\lib\site-packages\torch\utils\data\dataloader.py", line 630, in __next__
data = self._next_data()
File "E:\program\anaconda3\envs\nlp\lib\site-packages\torch\utils\data\dataloader.py", line 674, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "E:\program\anaconda3\envs\nlp\lib\site-packages\torch\utils\data\_utils\fetch.py", line 42, in fetch
return self.collate_fn(data)
File "E:\program\anaconda3\envs\nlp\lib\site-packages\transformers\pipelines\base.py", line 175, in inner
raise ValueError(
ValueError: The elements of the batch contain different keys. Cannot batch them ({'num_frames', 'input_features', 'is_last'} != {'input_features', 'is_last'})
```
The difference is due to this:
the output will have an additional `num_frames` field if the code runs to block 2, but not if it runs to block 1.
XXX\transformers\pipelines\automatic_speech_recognition.py

Could anyone tell me how to solve it?
_Originally posted by @minmie in https://github.com/huggingface/transformers/issues/33404#issuecomment-2342510083_
| [
51,
43
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Pipeline",
"Audio"
] |
https://api.github.com/repos/huggingface/transformers/issues/34809 |
TITLE
Flex attention + refactor
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 4
eyes: 0
BODY
Opening this to add support for all models following #34282
Let's bring support for flex attention to more models! 🤗
- [x] Gemma2
It would be great to add the support for more architectures such as
- [ ] Qwen2
- [ ] Llama
- [ ] Gemma
- [ ] QwenVl
- [ ] Mistral
- [ ] Clip
... and many more
For anyone who wants to contribute just open a PR and link it to this issue, and ping me for a review!! 🤗 | [
50,
76,
0
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"PyTorch",
"Feature request",
"Good Difficult Issue"
] |
https://api.github.com/repos/huggingface/transformers/issues/35976 |
TITLE
Deformable DETR custom kernel fails to compile with PyTorch 2.6
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Hello,
I understand this might be expected given the recent release of PyTorch 2.6, but wanted to bring it to your attention for tracking purposes.
I'd like to report a compatibility issue between the Deformable DETR custom CUDA kernel and PyTorch 2.6.
The kernel fails to compile due to what appears to be API changes in PyTorch's type system.
I cut some of the error message out, but the gist of it is:
```
Could not load the custom kernel for multi-scale deformable attention: Error building extension 'MultiScaleDeformableAttention'...
.venv/lib/python3.11/site-packages/transformers/kernels/deformable_detr/cuda/ms_deform_attn_cuda.cu(69): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
; at::ScalarType _st = ::detail::scalar_type(the_type); ; switch (_st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/home/hassonofer/Programming/transformers/.venv/lib/python3.11/site-packages/transformers/kernels/deformable_detr/cuda/ms_deform_attn_cuda.cu", static_cast<uint32_t>(69), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Double>; return
.venv/lib/python3.11/site-packages/transformers/kernels/deformable_detr/cuda/ms_deform_attn_cuda.cu(140): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
; at::ScalarType _st = ::detail::scalar_type(the_type); ; switch (_st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "/home/hassonofer/Programming/transformers/.venv/lib/python3.11/site-packages/transformers/kernels/deformable_detr/cuda/ms_deform_attn_cuda.cu", static_cast<uint32_t>(140), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false. " "(Could this error message be improved? If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Double>; return
```
**Environment:**
- PyTorch 2.6
- CUDA 12.4
- Python 3.11
- transformers 4.48.1
Thank you for your time.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Set up a fresh environment with PyTorch 2.6:
pip3 install torch torchvision torchaudio
pip3 install timm transformers
2. Run the following minimal reproduction code:
```python
from transformers import DeformableDetrForObjectDetection
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")
```
### Expected behavior
Clean compilation :) | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34009 |
TITLE
Enabled Flash Attention for PaliGemma models
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #33963
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@qubvel
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @ylacombe, @eustlb
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @muellerzr
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| [
68,
12,
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Flash Attention",
"Multimodal",
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/33683 |
TITLE
AutoTokenizer for XGLM model not working properly
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (False)
- Tensorflow version (GPU?): 2.17.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.4 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.26
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import XGLMTokenizer, AutoTokenizer
xglm_tok = XGLMTokenizer.from_pretrained("facebook/xglm-2.9B")
auto_tok = AutoTokenizer.from_pretrained("facebook/xglm-2.9B")
print(xglm_tok.encode('a ')) # [2, 11]
print(auto_tok.encode('a ')) # [2, 11, 6]
```
### Expected behavior
Both tokenizers should output the same ids. | [
47,
35,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Tokenization",
"Fast Tokenizers",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34145 |
TITLE
Request more specific info from bug reporters when opening deepspeed issues
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Hi!
I would like bug reporters to be prompted (or have a section to fill in the report template) to provide `ds_report` info and `zero3` config when opening a bug report related to deepspeed integration (maybe it could be more general). Anything to make sure these bits of info are more likely to be included upfront would make some of these issues much more actionable.
### Motivation
I've been looking at some deepspeed integration bugs lately (#28808, #29348, #31867), and I noticed that more deepspeed info often has to be requested. I was wondering if some specific (and maybe **BOLDED**) guidelines about what info to provide would go a long way when opening bug reports. I think a reminder to include `zero configs` and `ds_report` might be helpful. I believe this is particularly a pitfall for things that are often parsed in (configs, etc.).
Something like:
### Reproduction
Please provide a code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
*If you are opening an issue related to one of the following, please ensure this info is included in your reproduction script:
Deepspeed - zero3 config, ds_report output,
Trainer - your trainer config file,
etc.*
@ArthurZucker @amyeroberts | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/36157 |
TITLE
Add functionality to save model when training unexpectedly terminates
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
I'm thinking of implementing it like this:
```python
try:
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
finally:
trainer._save_checkpoint(trainer.model, None)
```
I want to utilize the characteristics of 'finally' to ensure that the model is saved at least once at the end,
even if the training terminates unexpectedly.
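A related sketch (just an idea, not part of the proposed API) that also covers kill signals by turning SIGTERM into an exception, so the `finally` block still runs; `trainer` and `args` are the same objects as in the snippet above:
```python
import signal

def _raise_on_sigterm(signum, frame):
    # e.g. a scheduler kills the job with SIGTERM; convert it into an exception
    raise KeyboardInterrupt("received SIGTERM")

signal.signal(signal.SIGTERM, _raise_on_sigterm)

try:
    trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
finally:
    trainer._save_checkpoint(trainer.model, None)
```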
### Motivation
Sometimes we need to terminate training unintentionally due to scheduling or various other issues.
If the model checkpoint hasn't been saved even after training has progressed to some extent,
all the training resources used until now are wasted.
### Your contribution
Therefore, I want to add functionality to save the model checkpoint unconditionally
even if the process is terminated by an error or kill signal unintentionally.
And I want to control this through train_args. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/33917 |
TITLE
Fix Whisper shortform EOS
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Since the short- and long-form merging, Whisper has removed EOS tokens when doing short-form transcription, which does not happen in the original implementation. This PR fixes the `test_default_multilingual_transcription_short_form` and `test_generate_with_prompt_ids` tests.
A side effect is that the average logprob was miscomputed.
cc @eustlb
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/34789 |
TITLE
Add `Tensor Parallel` support for ALL models
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 4
eyes: 0
BODY
Just opening this to add support for all models following #34184
Let's bring support to all models! 🤗
- [x] Llama
It would be great to add the support for more architectures such as
- [ ] Qwen2
- [ ] QwenVl
- [ ] Mistral
- [ ] Llava
... and many more
For anyone who wants to contribute just open a PR and link it to this issue, and ping me for a review!! 🤗 | [
76,
81,
0
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
1
] | [
"Feature request",
"Tensor Parallel",
"Good Difficult Issue"
] |
https://api.github.com/repos/huggingface/transformers/issues/34977 |
TITLE
Deprecation Warning for `max_size` in `DetrImageProcessor.preprocess`
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.31
- Python version: 3.11.0
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.4
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@amyeroberts, @qubvel
and I think @NielsRogge worked on it too ?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import logging
import numpy as np
from transformers.models.detr.image_processing_detr import DetrImageProcessor
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)
images = [np.ones((512, 512, 3))]
annotations = [{'image_id': [], 'annotations': []}]
size = {'max_height': 600, 'max_width': 600}
image_processor = DetrImageProcessor()
images = image_processor.preprocess(images, do_resize=True, do_rescale=False, size=size, annotations=annotations, format='coco_detection')
```
### Expected behavior
Hello!
I noticed that the `preprocess` method in the `DetrImageProcessor` class always passes `max_size` to the `resize` method,
https://github.com/huggingface/transformers/blob/4120cb257f03b834fb332e0b0ee6570245e85656/src/transformers/models/detr/image_processing_detr.py#L1445-L1447
and that triggers a deprecation warning in the `resize` method,
```bash
The `max_size` parameter is deprecated and will be removed in v4.26. Please specify in `size['longest_edge'] instead`.
```
https://github.com/huggingface/transformers/blob/4120cb257f03b834fb332e0b0ee6570245e85656/src/transformers/models/detr/image_processing_detr.py#L992-L997
I propose removing the unused `max_size` argument from the preprocess method since it is always `None`,
https://github.com/huggingface/transformers/blob/4120cb257f03b834fb332e0b0ee6570245e85656/src/transformers/models/detr/image_processing_detr.py#L1340
Would it be okay if I work on this and submit a pull request? I can try to see if the problem also occurs in other models. | [
1,
62,
65
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP",
"Vision",
"Processing"
] |
https://api.github.com/repos/huggingface/transformers/issues/35814 |
TITLE
[Feature Request] Support registering custom quantization methods out-of-tree
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Support registering custom quantization methods out-of-tree.
The usage would be as follows:
```python
from transformers.quantizers import HfQuantizer
from transformers.quantizers import register_quantization_config, register_quantizer
from transformers.utils.quantization_config import QuantizationConfigMixin
@register_quantization_config("custom")
class CustomFakeQuantizationConfig(QuantizationConfigMixin):
"""The custom fake quantization config."""
@register_quantizer("custom")
class CustomFakeQuantizer(HfQuantizer):
"""The custom fake quantizer."""
```
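For illustration, a minimal sketch of how such registries could work internally (hypothetical, not an existing transformers API; names are illustrative):
```python
_QUANTIZATION_CONFIGS = {}
_QUANTIZERS = {}

def register_quantization_config(name):
    """Register an out-of-tree QuantizationConfigMixin subclass under `name`."""
    def wrap(cls):
        _QUANTIZATION_CONFIGS[name] = cls
        return cls
    return wrap

def register_quantizer(name):
    """Register an out-of-tree HfQuantizer subclass under `name`."""
    def wrap(cls):
        _QUANTIZERS[name] = cls
        return cls
    return wrap
```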
### Motivation
We would greatly appreciate it if HuggingFace could support registering custom quantization schemes externally. This would allow us to integrate the schemes of any LLM quantization tool and evaluate fake-quantization models using the powerful combination of `lm_eval` + `huggingface`. Thank you for considering this!
Similar features have already been supported by vLLM, see:
- https://github.com/vllm-project/vllm/issues/11926
- https://github.com/vllm-project/vllm/pull/11969
### Your contribution
If this feature request is considered, I'd happily submit a PR to implement it. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35706 |
TITLE
autocast() got an unexpected keyword argument 'cache_enabled' when using trainer.torch_jit_model_eval
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.3
- Platform: Linux-4.18.0-147.el8_1.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.10
- Huggingface_hub version: 0.26.5
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@muellerzr
@SunMarc
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the `torch_jit_model_eval()` method in trainer, it prompts
> "failed to use PyTorch jit mode due to: autocast() got an unexpected keyword argument 'cache_enabled'."
Looking at the details, it turns out the error is caused by the `self.accelerator.autocast(cache_enabled=False)` call. The method definition is `def autocast(self, autocast_handler: AutocastKwargs = None)`, which has no `cache_enabled` parameter.
Is this because the code here has not been updated, or because I ignored some settings?
Is there a solution now?
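As a possible workaround (an untested sketch, assuming a recent accelerate version and an existing `accelerator` instance), `cache_enabled` can be passed through `AutocastKwargs` instead of as a keyword argument:
```python
from accelerate.utils import AutocastKwargs

# handler that disables the autocast cache, equivalent in intent to cache_enabled=False
autocast_handler = AutocastKwargs(cache_enabled=False)
with accelerator.autocast(autocast_handler=autocast_handler):
    ...  # run the jit trace / eval here
```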
### Expected behavior
Work normally. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35105 |
TITLE
Fix signatures for processing kwargs
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
as title | [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/34272 |
TITLE
image_transforms preprocessing is quite slow when running large images with qwen2vl
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.12.7
- Huggingface_hub version: 0.25.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Functions in `image_transforms` such as `rescale` and `normalize` are quite slow when preprocessing large images.
https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py
Here is a benchmark:

please refer to https://github.com/vllm-project/vllm/issues/9238
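For what it's worth, a hedged sketch of one possible mitigation: fold the rescale factor into the normalization constants so the image array is only traversed once (illustrative numpy only, not the transformers API):
```python
import numpy as np

def rescale_and_normalize(image, scale, mean, std):
    # (image * scale - mean) / std == (image - mean / scale) / (std / scale)
    mean = np.asarray(mean, dtype=np.float32) / scale
    std = np.asarray(std, dtype=np.float32) / scale
    return (image.astype(np.float32) - mean) / std
```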
### Expected behavior
How can the preprocessing performance be improved? | [
10,
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Performance",
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34474 |
TITLE
Useful Sensors Moonshine Transcription Model
COMMENTS
3
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
Model description can be found in the [Moonshine Whitepaper](https://github.com/usefulsensors/moonshine/blob/main/moonshine_paper.pdf).
I will be porting our [existing torch model](https://github.com/usefulsensors/moonshine/blob/b2a61fff243dd78ee2fa72dd1bceff8ccf656c4c/model.py) to Transformers.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Implementation](https://github.com/usefulsensors/moonshine) Special credit to @keveman for training and @evmaki for data collection and preprocessing.
[Model weights](https://huggingface.co/UsefulSensors/moonshine) | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/35744 |
TITLE
[Doc] Adding blog post to model doc for `TimmWrapper`
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
This PR adds the blog post link to the `TimmWrapper` documentation. | [
74,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"Documentation",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/36147 |
TITLE
Torchao `int4_weight_only` save error when passing layout
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.49.0.dev0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.5
- Accelerate version: 1.4.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 5,6
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.6.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100 80GB PCIe
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
More details refer to [torchao](https://github.com/pytorch/ao/issues/1704)
### Expected behavior
Hi @SunMarc. Do you think we can handle this in transformers? | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35281 |
TITLE
Prism model
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Add a new translation model, `Prism`, initially based on the [fairseq Transformer](https://github.com/facebookresearch/fairseq/tree/main/fairseq/models/transformer).
The Prism model was first described in the paper [Automatic Machine Translation Evaluation in Many Languages via Zero-Shot Paraphrasing](https://aclanthology.org/2020.emnlp-main.8) (Thompson & Post, EMNLP 2020). Based on the code in this PR, I have converted the model to the HF model hub here: https://huggingface.co/dariast/prism
The original code can be found [here](https://github.com/thompsonb/prism/tree/master) and the original documentation is found [here](https://github.com/thompsonb/prism/blob/master/translation/README.md).
## Implementation notes
- The HF adaptation of the model is implemented utilizing existing [M2M100 model](https://github.com/huggingface/transformers/tree/main/src/transformers/models/m2m_100) architecture. The parts of the code that are identical to the HF M2M100 are cited accordingly.
- The integration and performance tests are executed according to the guidelines and identical to the ones provided in the [M2M100 testing](https://github.com/huggingface/transformers/tree/main/tests/models/m2m_100) suite.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
Previously, the model's tests and repository were reviewed by @jvamvas [here](https://github.com/jvamvas/transformers/pull/1).
| [
77,
5
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Text"
] |
https://api.github.com/repos/huggingface/transformers/issues/34938 |
TITLE
<spam>
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
test
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
test | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/34579 |
TITLE
Does per_device_train_batch_size have a loss error similar to that of GA?
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers 4.46.1
### Who can help?
@muellerzr
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The GA fix averages the loss over multiple training steps. If `per_device_train_batch_size` is set to 2 on 1 GPU, versus 1 on 2 GPUs, will the result be different?
I looked at the code for `ForCausalLMLoss`:
```python
def fixed_cross_entropy(source, target, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs):
reduction = "sum" if num_items_in_batch is not None else "mean"
loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
if reduction == "sum":
loss = loss / num_items_in_batch
return loss
def ForCausalLMLoss(
logits, labels, vocab_size: int, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs
):
# Upcast to float if we need to compute the loss to avoid potential precision issues
logits = logits.float()
# Shift so that tokens < n predict n
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
shift_logits = shift_logits.view(-1, vocab_size)
shift_labels = shift_labels.view(-1)
# Enable model parallelism
shift_labels = shift_labels.to(shift_logits.device)
loss = fixed_cross_entropy(shift_logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
return loss
```
To repeat the question: with the GA fix averaging the loss over multiple training steps, is `per_device_train_batch_size=2` on 1 GPU different from `per_device_train_batch_size=1` on 2 GPUs?
If `per_device_train_batch_size` is 2, then because of `shift_logits.view(-1, vocab_size)` the token losses of the two sequences are computed together and then averaged. If `per_device_train_batch_size` is 1 and there are 2 GPUs, the loss is averaged per device (per sequence) first. When the number of loss-contributing tokens per sequence differs greatly, this leads to a large difference.
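A toy illustration of the discrepancy being asked about (my own sketch, numbers are made up):
```python
import torch

loss_seq1 = torch.tensor([1.0, 1.0, 1.0, 1.0])  # per-token losses, 4 valid tokens
loss_seq2 = torch.tensor([4.0])                  # per-token losses, 1 valid token

# 1 GPU, per_device_train_batch_size=2: mean over all tokens of both sequences together
token_mean = torch.cat([loss_seq1, loss_seq2]).mean()  # (1+1+1+1+4)/5 = 1.6

# 2 GPUs, per_device_train_batch_size=1: each device averages its own tokens first
per_device_mean = torch.stack([loss_seq1.mean(), loss_seq2.mean()]).mean()  # (1+4)/2 = 2.5
```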
### Expected behavior
For a given global batch size, is the loss averaged over all tokens, or averaged per sequence and then over all batches?
Looking forward to your reply. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36157 |
TITLE
Add functionality to save model when training unexpectedly terminates
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
I'm thinking of implementing it like this:
```python
try:
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
finally:
trainer._save_checkpoint(trainer.model, None)
```
I want to utilize the characteristics of 'finally' to ensure that the model is saved at least once at the end,
even if the training terminates unexpectedly.
### Motivation
Sometimes we need to terminate training prematurely due to scheduling or various other issues.
If the model checkpoint hasn't been saved even after training has progressed to some extent,
all the training resources used until now are wasted.
### Your contribution
Therefore, I want to add functionality to save the model checkpoint unconditionally,
even if the process is terminated unexpectedly by an error or a kill signal.
And I want to control this through train_args. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34097 |
TITLE
SlidingWindowCache issue
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
On main
### Who can help?
@zucchini-nlp @gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
We currently cannot use `cache_implementation='sliding_window'` with FA2. The following snippet
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, DynamicCache, StaticCache
import torch
device = 3
model_name = 'mistralai/Mistral-7B-v0.1'
dtype = torch.bfloat16
attn = 'flash_attention_2'
model = AutoModelForCausalLM.from_pretrained(model_name, attn_implementation=attn,
torch_dtype=dtype, low_cpu_mem_usage=True).cuda(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
generation_kwargs = {
"max_new_tokens": 50,
"eos_token_id": None,
"do_sample": False,
"return_dict_in_generate": True,
}
inputs = tokenizer('Hello who are', padding=True, return_tensors='pt')
input_ids = inputs['input_ids'].to(device)
attention_mask = inputs['attention_mask'].to(device)
attention_mask
outputs = model.generate(input_ids, attention_mask=attention_mask, **generation_kwargs, cache_implementation='sliding_window')
```
fails with
```python
ValueError: You are attempting to perform batched generation with padding_side='right' this may lead to unexpected behaviour for Flash Attention version of Mistral. Make sure to call `tokenizer.padding_side = 'left'` before tokenizing the input.
```
### Expected behavior
This comes from the fact that `prepare_inputs_for_generation` creates a 4d mask when `isinstance(past_key_values, StaticCache)`, but FA2 does not support 4d masks. I believe we need a check and a correct expansion of the 2d mask for `SlidingWindowCache` as well. Found out about it when opening https://github.com/huggingface/transformers/pull/34093 | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35568 |
TITLE
Any plans to integrate GTE model natively into transformers
COMMENTS
8
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
Are there any plans to integrate the `gte` model natively into transformers? Right now we are using this model with the `trust_remote_code=True` argument.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Model Implementation: https://huggingface.co/Alibaba-NLP/new-impl/blob/main/modeling.py
Model Weights: https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5 | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/34741 |
TITLE
possible llama rope implementation issue
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
There might be a bug in the Llama RoPE code.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
just look at the code
### Expected behavior
https://github.com/huggingface/transformers/blob/a3d69a8994d673899608a7c17fbf4f953f50474e/src/transformers/models/llama/modeling_llama.py#L199
heyyy 🤗,
I was learning RoPE recently and was looking at the reference implementation above, and I'm a little confused here.
In the original RoPE paper, we should pair adjacent numbers and apply the rotation. Below is a screenshot from the paper.

Notice that the index for the second X should go like 2, 1, 4, 3, 6, 5, ...
But the code above uses `x1 = x[..., : x.shape[-1] // 2]`, which rotates the entire second half of X against the entire first half, so the index goes like 4, 5, 6, 1, 2, 3, ...
So this does not seem to align with what RoPE needs: it is effectively pairing x_i with x_(i+d/2), but what we need is to pair x_i with x_(i+1).
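To make the two pairing conventions concrete, here is a small sketch (my own illustration, not code from transformers). As far as I can tell, the two differ only by a fixed permutation of the hidden dimensions, so either can be valid as long as the cos/sin frequencies and the q/k projection weights follow the same layout, but I'd like confirmation:
```python
import torch

def rotate_half_split(x):
    # convention used in modeling_llama.py: pair x_i with x_(i + d/2)
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

def rotate_half_interleaved(x):
    # convention written in the paper: pair x_(2i) with x_(2i+1)
    x1 = x[..., 0::2]
    x2 = x[..., 1::2]
    return torch.stack((-x2, x1), dim=-1).flatten(-2)
```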
would love to hear anything from you guys 🤗
tom
| [
75,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
"Discussion",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34364 |
TITLE
MsT: chunking the LM-head and MLP to extend sequence length and save memory
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
I would like to contribute an LM-head and MLP implementation that optimizes intermediate memory (logits and MLP intermediates) by chunking the LM-head and MLP over the sequence dimension and accumulating the gradient during training and fine-tuning. This does not reduce throughput and is mathematically equivalent to standard training. Combined with gradient checkpointing, we can extend the sequence length 12x-24x over vanilla huggingface/transformers, and 4-7x over gradient checkpointing alone. This is all implemented in [this repo](https://github.com/wdlctc/transformers/).
### Usage:
```python
from transformers.integrations import replace_with_minis
replace_with_minis()
```
### Motivation
State-of-the-art transformer models introduced larger tokenizers with a vocabulary of 128K tokens (Llama 3) or 256K tokens (Gemma 2). Training or fine-tuning these models can easily run out of GPU memory, and we found that a vast majority of the memory is consumed by intermediate tensors (logits and MLP intermediates). These intermediates do not need to be stored in GPU memory all at once, so chunking them can significantly reduce peak memory usage, much like gradient accumulation does. The memory saving is amplified when combined with gradient checkpointing, where intermediate tensors make up most of the activation memory.
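For readers, a minimal sketch of the chunked LM-head idea (my own illustration under stated assumptions, not the code from the linked repo; names are made up, labels are assumed to be already shifted with -100 as the ignore index):
```python
import torch
import torch.nn.functional as F

def chunked_lm_head_loss(hidden, lm_head, labels, chunk_size=1024):
    """Compute the LM loss in sequence chunks so only a slice of the logits is alive at a time."""
    # hidden: (batch, seq, d_model); labels: (batch, seq) of ints
    total_loss = hidden.new_zeros(())
    total_tokens = hidden.new_zeros((), dtype=torch.long)
    for start in range(0, hidden.shape[1], chunk_size):
        h = hidden[:, start : start + chunk_size]
        y = labels[:, start : start + chunk_size]
        logits = lm_head(h)  # (batch, chunk, vocab) instead of the full (batch, seq, vocab)
        total_loss = total_loss + F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), y.reshape(-1),
            ignore_index=-100, reduction="sum",
        )
        total_tokens = total_tokens + (y != -100).sum()
    return total_loss / total_tokens.clamp(min=1)
```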
### Your contribution
I provided an initial implementation at https://github.com/wdlctc/transformers/. I'm not sure what the right way to integrate it is: it could be a modification of modeling_llama.py (and the other model implementations), or a new class. I'm also not very familiar with the PR process, since this is my first issue, so it would be great if someone from the HF team could shepherd this through.
### Additional context
Blog: https://wdlctc.github.io/mst.html
Model Finetune Guidence with our tech: [LLAMA3](https://github.com/wdlctc/mini-s/blob/main/doc/llama3.md), [Qwen2](https://github.com/wdlctc/mini-s/blob/main/doc/qwen.md), [Memba](https://github.com/wdlctc/mini-s/blob/main/doc/falcon-mamba.md), [Mistral](https://github.com/wdlctc/mini-s/blob/main/doc/mistral.md), [Gemma2](https://github.com/wdlctc/mini-s/blob/main/doc/gemma.md)
Arxiv: https://arxiv.org/abs/2407.15892 | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34584 |
TITLE
Saving checkpoints *only* on improvement
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
When using the Hugging Face Trainer, I would like to save a checkpoint only if my objective metric has improved.
### Motivation
Currently, I am using eval_steps=100, save_steps=100, save_limit=1 and load_best_model_at_end=True, which means that every 100 steps the latest checkpoint is written and then the previous checkpoint is deleted unless it is the best checkpoint.
This has done approximately 2TB of wear to my SSD in only a few days due to an excessive amount of checkpointing. I really don’t need to resume from the latest checkpoint, I just need the best checkpoint to be saved, and I’m not concerned about the run crashing, so in this case, there is really no need to be saving every 100 steps.
Additionally, it is not feasible to wait until the end of the run and load the best state because I am manually early stopping my runs. I do not wish to automate the early stopping either.
I’m happy to monkey patch my build of transformers if anyone is aware of the culprit lines I can comment out or modify.
### Your contribution
N/A | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34249 |
TITLE
Add support for Janus model from DeepSeek AI
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 3
BODY
### Model description
Janus is an autoregressive framework that unifies multimodal understanding and generation. Unlike previous approaches that use a single visual encoder for both tasks, Janus decouples visual encoding into separate pathways while utilizing a unified transformer architecture for processing. This decoupling addresses the conflict between visual encoder roles in understanding and generation, enhancing flexibility and performance.
Key features:
- Unified framework for multimodal understanding and generation
- Decoupled visual encoding pathways
- Single, unified transformer architecture for processing
- Improved performance in multimodal understanding tasks
- Flexibility to select optimal encoding methods for each component
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The Janus model is developed by DeepSeek AI. Here are the relevant links for implementation:
Paper: [Janus: Bridging the Gap Between Multimodal Understanding and Generation](https://arxiv.org/pdf/2410.13848)
GitHub repository: [deepseek-ai/Janus](https://github.com/deepseek-ai/Janus) | [
77
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/33624 |
TITLE
Moshi integration
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 5
heart: 5
rocket: 0
eyes: 2
BODY
# What does this PR do?
Moshi is the latest Kyutai model. It is a streaming speech-to-speech model, that can also do an inner dialogue (i.e it outputs text as well).
In particular, it means that Moshi deals with 3 streams of information:
1. The user's audio
2. Moshi's audio
3. Moshi's textual output
Similarly to `Musicgen`, audio is represented with audio codebooks, which can be interpreted like tokens. The main difference between text tokens and audio codebooks is that audio codebooks introduce an additional dimension of information.
Text tokens are typically of dim `(batch_size, sequence_length)` but audio tokens are of dim `(batch_size, num_codebooks, sequence_length)`.

--------
It's made of 3 components:
**1. The main decoder (Helium in the paper)**
Here, it corresponds to `MoshiForCausalLM`. It is strictly a classic text LLM that uses an architecture similar to `Gemma`. In other words, it takes text tokens, embeds them, and passes them through the decoder and a language head to get text logits.
**2. The depth decoder**
On its own, it's also a classic LLM, but this time, instead of generating over the time dimension, it generates over the codebook dimension.
It also means that its context length is `num_codebooks` -> it can't generate more than `num_codebooks`.
Another interesting difference from a classic LLM is that each timestamp (here, each codebook) gets its own set of Linear Layers and Embeddings.
**3. Mimi**
It's the audio encoder from Kyutai, recently integrated into transformers, which is used to "tokenize" audio. It plays the same role that `Encodec` plays in `Musicgen`.
--------
## Architecture choice:
1. `MoshiForCausalLM` corresponds to the main decoder, it can be used as a textual LLM.
2. `MoshiDepthDecoder` is the depth decoder mentioned above
3. `MoshiForConditionalGeneration` encapsulates the main decoder, the depth decoder and the audio encoder.
Conceptually, `MoshiForConditionalGeneration` takes as input one stream of text and two streams of audio inputs - what the user has said so far, and what the model have generated so far - and generates two streams - a text stream and an audio stream.
**How does it work:**
-> The input streams are embedded and combined into `inputs_embeds`.
-> `inputs_embeds` is passed through the main decoder. There's nothing special done here, it's the same operation as Gemma or so on.
-> The main decoder outputs `text logits` but also its `last hidden state` which is called `temporal context` in the picture above.
-> the depth decoder switches the dimension on which we generate (codebooks instead of time). It uses the token generated from `text logits` and the `temporal context` to auto-regressively generate audio codebooks.
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/34761 |
TITLE
Keras 3 compatibility
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Since version 2.16, TensorFlow's default Keras is version 3.
It would be great to have Transformers support Keras 3, so there is no conflict over which Keras version is used.
For now we need to set
```
ENV TF_USE_LEGACY_KERAS=1
```
whenever we use Tensorflow and Transformers
### Motivation
It will simplify deployment, as well as prevent bugs in the long run.
### Your contribution
I'm not sure about the complexity of the task, but I am volunteering to help modify the Python codebase to do this upgrade.
Feel free to let me know where to start.
| [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35346 |
TITLE
[`Mamba2`] Varlen implementation
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Use varlen implementations (cu_seq_lens) of mamba2 and conv1d when requirements are met, i.e. mostly version dependencies.
### Motivation
It's similar to how FA2 works with varlen, and it should boost performance while guaranteeing correct behavior on batched inputs.
### Your contribution
I can make a PR, not sure when I'll get to it - probably after the Christmas days. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34479 |
TITLE
GPTQ quantization throws an error with custom dataset but works perfectly fine on already existing ones.
COMMENTS
15
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers: 4.45.2
huggingface-hub: 0.26.1
accelerate: 1.0.1
optimum: 1.23.1
auto-gptq: 0.7.1
bitsandbytes: 0.44.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I can successfully quantize llama model using GPTQConfig given wikitext2 dataset, however the moment I switch to a custom dataset (literally any custom dataset), I will get all sort of errors: Here is my setup:
```python
custom_data = ['example1', 'example2', ...]  # any list of strings
config = GPTQConfig(bits=4, dataset=custom_data, tokenizer=tokenizer, group_size=128, desc_act=False, model_seqlen=4096)
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=config, device_map='auto')
```
This gives a CUDA OOM error no matter the size of the data, even if I choose just two short data points.
Now if I get rid of device_map='auto' and set it as follows:
```python
custom_data = ['example1', 'example2', ...]  # any list of strings
config = GPTQConfig(bits=4, dataset=custom_data, tokenizer=tokenizer, group_size=128, desc_act=False, model_seqlen=4096)
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=config, device_map=0)
```
I get the following error:
`Expected all tensors to be on the same device but got cpu and cuda`
Removing the `device_map` argument also gives me the same error.
Now, keeping everything the same but changing the dataset to wikitext2 works just fine.
I'm wondering: is there any problem with the custom data? Or is there a way I can format my data so that it is understood by the package?
### Expected behavior
According to the documentation, I expect GPTQConfig to work fine with a custom dataset as long as the data is a list of strings. BTW, I have enough resources in terms of memory and GPUs, so I think the CUDA OOM is not related to a resource limitation. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35770 |
TITLE
Mamba2 doesn't support Multi-GPU training (fast path)
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
**System Info**
- `transformers` version: 4.46.3
- Platform: Linux-4.18.0-553.27.1.el8_10.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.20
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: parallel
- Using GPU in script?: yes
- GPU type: NVIDIA A100 80GB PCIe
### Who can help?
@ylacombe, @eustlb @muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling
model_id = 'AntonV/mamba2-130m-hf'
dataset_name = 'yelp_review_full'
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
model = AutoModelForCausalLM.from_pretrained(model_id)
dataset = load_dataset(dataset_name, split='train', streaming=True)
train_dataset = dataset
training_args = SFTConfig(
output_dir='./outputs',
num_train_epochs=5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
logging_dir='./logs',
learning_rate=2e-3,
save_steps=500,
save_safetensors=False,
max_steps=10000,
report_to='none'
)
trainer = SFTTrainer(
model=model,
processing_class=tokenizer,
data_collator=data_collator,
args=training_args,
train_dataset=train_dataset,
)
trainer.train()
```
### Expected behavior
Hi! When using `cuda_kernels_forward` in Mamba2 on multiple GPUs, the following error appears (full traceback at the end):
```
config.pre_hook({**self.nargs, **kwargs, **config.all_kwargs()})
TypeError: 'NoneType' object is not a mapping
```
However, it works just fine when I'm using the slower path, torch_forward.
Do you know how to address this issue?
I'm using SFTTrainer (inherited from Transformers Trainer).
Thanks a lot.
**Traceback**
```
File "/mnt/lbosm1/home/nadavsc/projects/LLMamba/train.py", line 82, in <module>
main()
File "/mnt/lbosm1/home/nadavsc/projects/LLMamba/train.py", line 79, in main
trainer.train()
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/transformers/trainer.py", line 2123, in train
return inner_training_loop(
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/transformers/trainer.py", line 3612, in training_step
self.accelerator.backward(loss, **kwargs)
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/accelerate/accelerator.py", line 2248, in backward
loss.backward(**kwargs)
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/torch/_tensor.py", line 521, in backward
torch.autograd.backward(
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/torch/autograd/__init__.py", line 289, in backward
_engine_run_backward(
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/torch/autograd/function.py", line 306, in apply
return user_fn(self, *args)
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 501, in decorate_bwd
return bwd(*args, **kwargs)
File "/mnt/lbosm1/home/nadavsc/projects/LLMamba/mamba_ssm/ops/triton/ssd_combined.py", line 893, in backward
dx, ddt, dA, dB, dC, dD, _, ddt_bias, dinitial_states = _mamba_chunk_scan_combined_bwd(
File "/mnt/lbosm1/home/nadavsc/projects/LLMamba/mamba_ssm/ops/triton/ssd_combined.py", line 414, in _mamba_chunk_scan_combined_bwd
dx, ddt, dD_from_x = _chunk_scan_chunk_state_bwd_dx(x, dt, dA_cumsum, B, CB, dout, dstates, D=D, seq_idx=seq_idx, dx=dx)
File "/mnt/lbosm1/home/nadavsc/projects/LLMamba/mamba_ssm/ops/triton/ssd_combined.py", line 250, in _chunk_scan_chunk_state_bwd_dx
_chunk_scan_chunk_state_bwd_dx_kernel[grid_dx](
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/triton/runtime/jit.py", line 345, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File "/home/nadavsc/LIGHTBITS/envs/ssm/lib/python3.9/site-packages/triton/runtime/autotuner.py", line 170, in run
config.pre_hook({**self.nargs, **kwargs, **config.all_kwargs()})
TypeError: 'NoneType' object is not a mapping
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35827 |
TITLE
ImportError: cannot import name 'NoneType' from 'types' on main in Python 3.9
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
docker run python:3.9 bash -c 'pip install git+https://github.com/huggingface/transformers && python -c "import transformers"'
```
Output:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
from . import dependency_versions_check
File "/usr/local/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
from .utils.versions import require_version, require_version_core
File "/usr/local/lib/python3.9/site-packages/transformers/utils/__init__.py", line 27, in <module>
from .chat_template_utils import DocstringParsingException, TypeHintParsingException, get_json_schema
File "/usr/local/lib/python3.9/site-packages/transformers/utils/chat_template_utils.py", line 22, in <module>
from types import NoneType
ImportError: cannot import name 'NoneType' from 'types' (/usr/local/lib/python3.9/types.py)
```
### Expected behavior
No import error | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33841 |
TITLE
Roberta is ExecuTorch compatible
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Enable Roberta to ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow
### Motivation
See details in #32253
### Your contribution
Enable Roberta model | [
76,
31
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"ExecuTorch"
] |
https://api.github.com/repos/huggingface/transformers/issues/34103 |
TITLE
fix(DPT,Depth-Anything) `torch.export`
COMMENTS
15
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Small modification of the DPT modeling code to remove a new object creation in a `forward()` method of a Module. This object creation makes the model incompatible with `torch.export`, which is a key part of preparing a model to run on a variety of hardware backends through projects such as [ExecuTorch](https://pytorch.org/executorch/main/intro-overview.html) (related issue: https://github.com/huggingface/transformers/issues/32253)
## Motivation
[torch.export](https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html#) allows you to export PyTorch models into standardized model representations, intended to be optimized and run efficiently using frameworks such as TensorRT or ExecuTorch.
## The Bug
They key issue was the slice on `self.layers`:
https://github.com/huggingface/transformers/blob/617b21273a349bd3a94e2b3bfb83f8089f45749b/src/transformers/models/dpt/modeling_dpt.py#L696
`self.layers[1:]` creates a new `ModuleList()` each time this line is executed.
https://github.com/pytorch/pytorch/blob/69bcf1035e7f06f2eefd8986d000cc980e9ebd37/torch/nn/modules/container.py#L330
The model tracer in `torch.export` monkey-patches nn.Module constructors during evaluation of the `forward()` pass, so the original DPT modeling code raises the following error:
```
File "/home/philkuz/.pyenv/versions/gml311/lib/python3.11/site-packages/torch/nn/modules/container.py", line 293, in __getitem__
return self.__class__(list(self._modules.values())[idx])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: _ModuleStackTracer.__init__.<locals>.AttrProxy.__init__() missing 1 required positional argument: 'path'
```
## The Solution
Pytorch recommends users update the modeling code. My team and I figured this could be helpful to the broader community, especially a future where Export to Executorch becomes more widely available: https://github.com/huggingface/transformers/issues/32253
This also removes an unnecessary creation of a new module list as a bonus.
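For illustration, one export-friendly way to avoid the slice (a sketch only; the actual diff in this PR may differ):
```python
# instead of:  for layer in self.layers[1:]:
for idx, layer in enumerate(self.layers):
    if idx == 0:
        continue  # skip the first layer without constructing a new ModuleList
    ...  # same per-layer body as before
```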
### Tests
I ensured that `tests/models/dpt/test_modeling_dpt.py` passes, which appears to test a portion of the outputs. I also verified that the entire output of the model
before and after my changes matched with the following script:
```python
import os
import sys
import numpy as np
import requests
import torch
from PIL import Image
from transformers import pipeline
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
model = pipeline("depth-estimation", "facebook/dpt-dinov2-base-kitti")
result = model(image)
output_file = "depth_estimation_output.npy"
if not os.path.exists(output_file):
# Save the current output
np.save(output_file, result["predicted_depth"])
print(f"Depth estimation output saved to {output_file}")
print("Rerun the script to compare the output")
sys.exit(0)
# Load existing output and compare
expected_output = np.load(output_file)
np.testing.assert_allclose(
result["predicted_depth"],
expected_output,
rtol=1e-5,
atol=1e-5,
err_msg="Depth estimation output has changed",
)
print("Depth estimation output matches the saved version.")
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts, @qubvel
| [
62,
73,
31
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Vision",
"run-slow",
"ExecuTorch"
] |
https://api.github.com/repos/huggingface/transformers/issues/33645 |
TITLE
A mismatch in the docs
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Hi,
Everything's going great with DeBERTa, thanks a lot!
There's a tiny bit of a mismatch in the docs.

Hey, no biggie, but if it's important, I can fix it. Just let me know.
Best,
Fedor
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/78b2929c0554b79e0489b451ce4ece14d265ead2/src/transformers/models/deberta/configuration_deberta.py#L43
### Expected behavior
https://github.com/huggingface/transformers/blob/78b2929c0554b79e0489b451ce4ece14d265ead2/src/transformers/models/deberta/configuration_deberta.py#L105 | [
74
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"Documentation"
] |
https://api.github.com/repos/huggingface/transformers/issues/33243 |
TITLE
Support for qwen2moe gguf models
COMMENTS
6
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.4
- Accelerate version: 0.33.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: True
- num_processes: 24
- machine_rank: 2
- num_machines: 3
- main_process_ip: gpu007
- main_process_port: 9901
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'deepspeed_config_file': '/data/vayu/train/config/deepspeed/zero2.json', 'deepspeed_hostfile': '/data/vayu/train/config/hostfile', 'deepspeed_multinode_launcher': 'pdsh', 'zero3_init_flag': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM

model_id = "./qwen2moe_4x1.5b/"
file_name = "Qwen2-4x1.5B-reasoning-pro-Q4_K_M.gguf"  # local file, based on Qwen/Qwen2-1.5B-Instruct
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=file_name)
```
### Expected behavior
```
miniconda3/envs/vllm_cu12/lib/python3.10/site-packages/transformers/modeling_gguf_pytorch_utils.py", line 100, in load_gguf_checkpoint
    raise ValueError(f"Architecture {architecture} not supported")
ValueError: Architecture qwen2moe not supported
``` | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33343 |
TITLE
How to install transformers==4.45? Two or three days ago I could install it successfully, but today I cannot.
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
torch2.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip install git+https://github.com/huggingface/transformers.git
### Expected behavior
How to install the latest transformers | [
37,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Installation",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35254 |
TITLE
Mismatch Between Text img_token Count and Image Count in Multimodal Processor Is Hard to Debug
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
As with Qwen2-VL, if the number of img_tokens input in the multimodal processor does not match the number of images,
a warning or error should be displayed.
### Motivation
## reproduction code
```python
import requests
from PIL import Image
from transformers import Qwen2VLForConditionalGeneration, Qwen2VLProcessor
model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
processor = Qwen2VLProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
prompts = [
f"USER: {processor.image_token}{processor.image_token}\nWhat are the things I should be cautious about when I visit this place? What should I bring with me? ASSISTANT:",
]
image1 = Image.open(requests.get("https://llava-vl.github.io/static/images/view.jpg", stream=True).raw)
inputs = processor(images=[image1], text=prompts, return_tensors="pt", padding=True)
```
## env
```
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
```
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/transformers/models/qwen2_vl/processing_qwen2_vl.py", line 139, in __call__
self.image_token, "<|placeholder|>" * (image_grid_thw[index].prod() // merge_length), 1
IndexError: index 1 is out of bounds for dimension 0 with size 1
```
When running the code, an error like the one above occurs.
The cause is that the number of img_tokens does not match the number of images.
However, the error is not very intuitive, so it took some time to find the cause.
Therefore, I think it would be good to explicitly display a warning or error
when the number of img_tokens and images do not match.
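For illustration, a hedged sketch of the kind of check I have in mind (generic names, not the actual Qwen2-VL processor internals):
```python
def check_image_token_count(texts, images, image_token):
    # count image placeholder tokens across all prompts
    n_tokens = sum(text.count(image_token) for text in texts)
    n_images = len(images) if images is not None else 0
    if n_tokens != n_images:
        raise ValueError(
            f"Got {n_tokens} image token(s) in the text but {n_images} image(s); "
            "each image token must correspond to exactly one image."
        )
```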
### Your contribution
It seems possible to add a statement that explicitly displays an error or warning
when the number of img_tokens and images do not match in the multimodal processor. | [
76,
12
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Multimodal"
] |
https://api.github.com/repos/huggingface/transformers/issues/35407 |
TITLE
Training issues latest version
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.11.11
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3060 Laptop GPU
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Clone ModernBert repo
2. Install latest transformers version (4.48.0-dev0)
3. Run examples/train_st.py to finetune modernbert.
### Expected behavior
I would expect no errors.
However, when building transformers from an earlier commit, it works.
> `pip install git+https://github.com/huggingface/transformers.git@f42084e6411c39b74309af4a7d6ed640c01a4c9e`
So I think something broke during the latest commits. | [
66,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"trainer",
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/36204 |
TITLE
Request to add DEIM object detector
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
DEIM is a SOTA real-time object detector based on the DETR architecture, and it even beats the recent D-FINE on checkpoints that were not pretrained on Objects365. DEIM accelerates convergence by improving the quantity and quality of matching, via Dense O2O (one-to-one) matching and MAL (Matchability-Aware Loss) respectively.
It can be used to improve existing object detectors.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Paper : https://arxiv.org/abs/2412.04234
Code : https://github.com/ShihuaHuang95/DEIM | [
77,
62,
54
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"New model",
"Vision",
"contributions-welcome"
] |
https://api.github.com/repos/huggingface/transformers/issues/34563 |
TITLE
uniformize kwargs for VisionTextDualEncoder
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Adds uniformized processors for VisionTextDualEncoder following https://github.com/huggingface/transformers/issues/31911.
@qubvel @molbap
| [
65
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Processing"
] |
https://api.github.com/repos/huggingface/transformers/issues/34510 |
TITLE
CrossEntropyLoss has per-token weight that is different from original semantics?
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Hi thanks for the library! Suppose we have two samples `[a,b,c,d,e,f,g]` and `[x,y]`. Then, the naive CrossEntropy will give:
```
loss = 1/2( 1/7(loss(a) + loss(b) + ... + loss(g)) + 1/2(loss(x) + loss(y)) )
```
However, it seems that Transformers flattens everything and then computes cross entropy. Therefore, the loss is computed as if we had a single sequence like `[a,b,c,d,e,f,g,x,y]`, and it will be
```
loss = 1/9(loss(a) + ... + loss(y))
```
Surely the two are different. So I wonder: is this a bug, i.e. will it cause trouble? For example, if sample 1 is very long while sample 2 is very short, the two approaches give very different weights to sample 2, possibly resulting in problems. But since Hugging Face Transformers is so popular and widely used, I wonder whether it is in fact fine to have that.
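To make the difference concrete, here is a small sketch (random logits and made-up token counts) comparing the per-sample average with the flattened average described above:
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab = 10
# sample 1 has 7 target tokens, sample 2 has 2
logits_a, labels_a = torch.randn(7, vocab), torch.randint(0, vocab, (7,))
logits_b, labels_b = torch.randn(2, vocab), torch.randint(0, vocab, (2,))

# per-sample mean, then mean over samples: each *sample* is weighted equally
per_sample = 0.5 * (F.cross_entropy(logits_a, labels_a) + F.cross_entropy(logits_b, labels_b))

# flatten first, then a single mean: each *token* is weighted equally
flat = F.cross_entropy(torch.cat([logits_a, logits_b]), torch.cat([labels_a, labels_b]))

print(per_sample.item(), flat.item())  # generally not equal
```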
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(see above)
### Expected behavior
(see above) | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34841 |
TITLE
Urgent: 429 Errors and Potential Rate Limiting on Model Access
COMMENTS
7
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Hi team,
Since yesterday, we have been seeing 429 errors when trying to access different models.
Here's an example of the error we're seeing:
```
File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for URL: https://huggingface.co/meta-llama/Meta-Llama-3-8B/resolve/main/config.json
```
We suspect we are being rate-limited. We can share the IPs with you.
Could we look into resolving this issue?
Thank you so much!
cc @amyeroberts @philschmid @sgugger
### Who can help?
@amyeroberts @philschmid @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
We ran into the issues in our internal runs.
### Expected behavior
Run without hitting the 429 errors. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/33898 |
TITLE
Flex attention support with arbitrary 4d mask for LlamaModel
COMMENTS
6
REACTIONS
+1: 4
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
It would be nice to combine the benefits of flex attention and 4d masking.
Perhaps the llama model could be a first case, allowing arbitrary 4d masks to be handled via an efficient flex attention path.
### Motivation
Custom attention masking/biasing patterns lead to considerable improvements in flexibility, and are central to state-of-the-art models like AlphaFold and recent multimodal models.
4d attention masking in Transformers already provides the user with the flexibility to define custom biases; however, performance is limited by the fact that 4d masking is incompatible with flash attention.
A flex attention path that supports 4d masks would retain full flexibility while maintaining performance. As far as I understand, nothing comparable exists in Transformers currently.
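For illustration, a hedged sketch of what such a path might look like (shapes and the `bias` tensor are made up; this is plain PyTorch >= 2.5 on a CUDA device, not existing Transformers code):
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

B, H, S, D = 2, 8, 128, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16) for _ in range(3))

# arbitrary additive 4d attention bias (use float("-inf") to forbid pairs)
bias = torch.zeros(B, H, S, S, device="cuda", dtype=torch.float16)

def score_mod(score, b, h, q_idx, kv_idx):
    # fold the user-provided 4d bias into the raw attention score
    return score + bias[b, h, q_idx, kv_idx]

# torch.compile(flex_attention) is recommended for speed; eager also works
out = flex_attention(q, k, v, score_mod=score_mod)
```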
### Your contribution
Not very familiar with what this would take. | [
76,
68
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request",
"Flash Attention"
] |
https://api.github.com/repos/huggingface/transformers/issues/33901 |
TITLE
Unnecessary flash attention warnings
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
latest version
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Init a model with `attn_implementation="flash_attention_2"` by passing a config to `PreTrainedModel.__init__`.
`torch.get_default_dtype()` will be passed to `_autoset_attn_implementation`, triggering the warning.
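A hedged sketch of that reproduction path as I understand it (placeholder checkpoint; assumes flash-attn is installed):
```python
import torch
from transformers import AutoConfig, LlamaForCausalLM

config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder
config._attn_implementation = "flash_attention_2"

# PreTrainedModel.__init__ runs _autoset_attn_implementation with
# torch.get_default_dtype() (float32 here), so the FA2 dtype warning fires
# even though the model is cast to bf16 immediately afterwards.
model = LlamaForCausalLM(config).to(torch.bfloat16)
```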
I know that the warning can be disabled by changing torch's default dtype, but is this the recommended way to cast models to bf16/fp16? If not, the warning shouldn't be raised.
This was previously discussed in #28052, but the warning still occurs.
### Expected behavior
this shouldn't raise a warning when the dtype of the model is unknown | [
64,
68
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Flash Attention"
] |
https://api.github.com/repos/huggingface/transformers/issues/34840 |
TITLE
CVE-2024-11392/11393/11394 vulnerabilities
COMMENTS
27
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
https://vuldb.com/?id.285443
https://vuldb.com/?id.285442
https://vuldb.com/?id.285441
Have these vulnerabilities been fixed in the current version, or are there any plans to fix them in the near future?
And what is the scope of impact of these vulnerabilities?
### Motivation
confirm the scope of impact of these vulnerabilities and the fix version
### Your contribution
no | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/34906 |
TITLE
torch.compile: generate should use call instead of forward
COMMENTS
7
REACTIONS
+1: 2
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0.dev20241008+cu124 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NO
### Who can help?
@ArthurZucker @Cyrilvallez
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "facebook/opt-125m"
length = 100
prompt_text = 'In a small, bustling cafe nestled in the heart of a vibrant city, a serendipitous event unfolded, leaving a lasting impression on all who witnessed it. As the patrons sat sipping their coffees and engaging in animated conversations, a talented street musician entered the cafe, carrying a weathered guitar and radiating an aura of creativity.'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.compile()
input_ids = tokenizer(prompt_text, add_special_tokens=False, return_tensors='pt').input_ids
output = model.generate(input_ids, max_new_tokens=length)
```
### Expected behavior
Expected behaviour is that we use the compiled forward function.
When compiling using the [`model.compile()`](https://github.com/pytorch/pytorch/blob/a6344c8bcd22798987087244e961cdc0cbf9e9df/torch/nn/modules/module.py#L2985) API, the call method uses an [internal variable](https://github.com/pytorch/pytorch/blob/a6344c8bcd22798987087244e961cdc0cbf9e9df/torch/nn/modules/module.py#L1736) with the compiled forward instead of the uncompiled forward.
(I raised a [related issue in pytorch](https://github.com/pytorch/pytorch/issues/141473), this is the Option 2 there)
So generate, should use the call method instead of the forward to use the compiled version of forward (for this particular case of model.compile).
However, recent changes have changed this call to model.forward() instead of model() for the non-first token:
```
def _sample():
...
def model_forward(model, *args, **kwargs):
return model.forward(*args, **kwargs)
...
if i == 0:
outputs = self(**model_inputs, return_dict=True)
i += 1
else:
outputs = model_forward(self, return_dict=True, **model_inputs)
```
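For illustration, a hedged sketch of the change being suggested (only the helper is shown; the rest of `_sample` stays as in the snippet above):
```python
def model_forward(model, *args, **kwargs):
    # __call__ dispatches to the compiled forward registered by model.compile(),
    # whereas model.forward(...) bypasses it
    return model(*args, **kwargs)
```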
model_forward should be changed to call model() instead of model.forward() | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35463 |
TITLE
Qwen2-VL used to work with `inputs_embeds` instead of `input_ids`, but no more
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### System Info
- `transformers` version: 4.47.1
- Platform: Linux-4.18.0-513.18.1.el8_9.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Preparation
```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct",
torch_dtype=torch.bfloat16,
attn_implementation="eager", # flash_attention_2 also produces the same error
device_map="auto",
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
```
### Working example
```python
generated_ids = model.generate(**inputs, max_new_tokens=128)
```
### Used to work
Worked in 9470d6532436e9db2951a196effd6f8841befb76 but not in v4.47.1 [[comparison]](https://github.com/huggingface/transformers/compare/9470d6532436e9db2951a196effd6f8841befb76...241c04d36867259cdf11dbb4e9d9a60f9cb65ebc)
```python
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
pixel_values = inputs["pixel_values"]
image_grid_thw = inputs["image_grid_thw"]
inputs_embeds = model.model.embed_tokens(input_ids)
if pixel_values is not None:
pixel_values = pixel_values.type(model.visual.get_dtype())
image_embeds = model.visual(pixel_values, grid_thw=image_grid_thw)
n_image_tokens = (input_ids == model.config.image_token_id).sum().item()
n_image_features = image_embeds.shape[0]
if n_image_tokens != n_image_features:
raise ValueError(
f"Image features and image tokens do not match: tokens: {n_image_tokens}, features {n_image_features}"
)
image_mask = (
(input_ids == model.config.image_token_id)
.unsqueeze(-1)
.expand_as(inputs_embeds)
.to(inputs_embeds.device)
)
image_embeds = image_embeds.to(inputs_embeds.device, inputs_embeds.dtype)
inputs_embeds = inputs_embeds.masked_scatter(image_mask, image_embeds)
if attention_mask is not None:
attention_mask = attention_mask.to(inputs_embeds.device)
generated_ids = model.generate(inputs_embeds=inputs_embeds, attention_mask=attention_mask, max_new_tokens=128)
```
### Expected behavior
The latter should work the same as the former.
The latter's error message example
```
File "/usr/local/lib/python3.10/dist-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 578, in forward
attn_weights = attn_weights + causal_mask
RuntimeError: The size of tensor a (2362) must match the size of tensor b (1182) at non-singleton dimension 3
``` | [
64,
19
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"VLM"
] |
https://api.github.com/repos/huggingface/transformers/issues/34486 |
TITLE
fix pixtral processor
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
Should fix https://github.com/huggingface/transformers/issues/34204.
| [
73
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"run-slow"
] |
https://api.github.com/repos/huggingface/transformers/issues/33976 |
TITLE
LlavaForConditionalGeneration._merge_input_ids_with_image_features is incorrect
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 1
BODY
### System Info
commit: 38f9f10dd9240619ea17fb6c7acb51b3bc592232
### Who can help?
@amyeroberts @qubvel
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The function `_merge_input_ids_with_image_features` of class `LlavaForConditionalGeneration` gives incorrect results if it has to pad the result (which arises when the batch has different amounts of image tokens in each row).
https://github.com/huggingface/transformers/blob/38f9f10dd9240619ea17fb6c7acb51b3bc592232/src/transformers/models/llava/modeling_llava.py#L297
I created the following reproduction example to show the failure mode simply. Note that it uses a copy of the function to add print statements for debugging, but the code is otherwise an identical copy at HEAD.
```
def _merge_input_ids_with_image_features(self, image_features, inputs_embeds, input_ids, attention_mask, labels):
num_images, num_image_patches, embed_dim = image_features.shape
batch_size, sequence_length = input_ids.shape
left_padding = not torch.sum(input_ids[:, -1] == torch.tensor(self.pad_token_id))
# 1. Create a mask to know where special image tokens are
special_image_token_mask = input_ids == self.config.image_token_index
num_special_image_tokens = torch.sum(special_image_token_mask, dim=-1)
# Compute the maximum embed dimension
max_embed_dim = (num_special_image_tokens.max() * (num_image_patches - 1)) + sequence_length
batch_indices, non_image_indices = torch.where(input_ids != self.config.image_token_index)
# 2. Compute the positions where text should be written
# Calculate new positions for text tokens in merged image-text sequence.
# `special_image_token_mask` identifies image tokens. Each image token will be replaced by `nb_text_tokens_per_images - 1` text tokens.
# `torch.cumsum` computes how each image token shifts subsequent text token positions.
# - 1 to adjust for zero-based indexing, as `cumsum` inherently increases indices by one.
new_token_positions = torch.cumsum((special_image_token_mask * (num_image_patches - 1) + 1), -1) - 1
nb_image_pad = max_embed_dim - 1 - new_token_positions[:, -1]
print(f"nb_image_pad: {nb_image_pad}")
if left_padding:
new_token_positions += nb_image_pad[:, None] # offset for left padding
text_to_overwrite = new_token_positions[batch_indices, non_image_indices]
# 3. Create the full embedding, already padded to the maximum position
final_embedding = torch.zeros(
batch_size, max_embed_dim, embed_dim, dtype=inputs_embeds.dtype, device=inputs_embeds.device
)
final_attention_mask = torch.zeros(
batch_size, max_embed_dim, dtype=attention_mask.dtype, device=inputs_embeds.device
)
if labels is not None:
final_labels = torch.full(
(batch_size, max_embed_dim), self.config.ignore_index, dtype=input_ids.dtype, device=input_ids.device
)
# In case the Vision model or the Language model has been offloaded to CPU, we need to manually
# set the corresponding tensors into their correct target device.
target_device = inputs_embeds.device
batch_indices, non_image_indices, text_to_overwrite = (
batch_indices.to(target_device),
non_image_indices.to(target_device),
text_to_overwrite.to(target_device),
)
attention_mask = attention_mask.to(target_device)
print(f"max_embed_dim: {max_embed_dim}")
print(f"batch_indices: {batch_indices}")
print(f"non_image_indices: {non_image_indices}")
print(f"text_to_overwrite: {text_to_overwrite}")
# 4. Fill the embeddings based on the mask. If we have ["hey" "<image>", "how", "are"]
# we need to index copy on [0, 577, 578, 579] for the text and [1:576] for the image features
final_embedding[batch_indices, text_to_overwrite] = inputs_embeds[batch_indices, non_image_indices]
final_attention_mask[batch_indices, text_to_overwrite] = attention_mask[batch_indices, non_image_indices]
if labels is not None:
final_labels[batch_indices, text_to_overwrite] = labels[batch_indices, non_image_indices]
# 5. Fill the embeddings corresponding to the images. Anything that is not `text_positions` needs filling (#29835)
image_to_overwrite = torch.full(
(batch_size, max_embed_dim), True, dtype=torch.bool, device=inputs_embeds.device
)
image_to_overwrite[batch_indices, text_to_overwrite] = False
print(f"image_to_overwrite cumsum: {image_to_overwrite.cumsum(-1) - 1}")
image_to_overwrite &= image_to_overwrite.cumsum(-1) - 1 >= nb_image_pad[:, None].to(target_device)
print(f"image_to_overwrite: {image_to_overwrite}")
if image_to_overwrite.sum() != image_features.shape[:-1].numel():
raise ValueError(
f"The input provided to the model are wrong. The number of image tokens is {torch.sum(special_image_token_mask)} while"
f" the number of image given to the model is {num_images}. This prevents correct indexing and breaks batch generation."
)
final_embedding[image_to_overwrite] = image_features.contiguous().reshape(-1, embed_dim).to(target_device)
final_attention_mask |= image_to_overwrite
position_ids = (final_attention_mask.cumsum(-1) - 1).masked_fill_((final_attention_mask == 0), 1)
# 6. Mask out the embedding at padding positions, as we later use the past_key_value value to determine the non-attended tokens.
batch_indices, pad_indices = torch.where(input_ids == self.pad_token_id)
indices_to_mask = new_token_positions[batch_indices, pad_indices]
final_embedding[batch_indices, indices_to_mask] = 0
if labels is None:
final_labels = None
return final_embedding, final_attention_mask, final_labels, position_ids
tokenizer = AutoTokenizer.from_pretrained("llava-hf/llava-1.5-7b-hf", use_fast=True)
model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf", device_map=0, torch_dtype=torch.float16)
model.eval()
assert isinstance(model, LlavaForConditionalGeneration)
model._merge_input_ids_with_image_features = types.MethodType(_merge_input_ids_with_image_features, model)
prompt1 = "System prompt<image>Caption 1<pad><pad><pad><pad><pad>"
prompt2 = "System prompt<image>Caption 2<image>Caption 3" # 12 tokens
print("`System Prompt` tokens: ", tokenizer.encode("System prompt", add_special_tokens=False, truncation=False))
print("`Caption 1` tokens: ", tokenizer.encode("Caption 1", add_special_tokens=False, truncation=False))
# Tokenize the prompt
input_ids = [
tokenizer.encode(prompt1, return_tensors="pt", add_special_tokens=False, truncation=False).squeeze(0),
tokenizer.encode(prompt2, return_tensors="pt", add_special_tokens=False, truncation=False).squeeze(0),
]
input_ids = torch.stack(input_ids)
assert isinstance(input_ids, torch.Tensor)
print(f"Input IDs: {input_ids.shape}")
print(f"Input IDs: {input_ids}")
gc.collect()
torch.cuda.empty_cache()
with torch.no_grad():
input_embeddings = model.get_input_embeddings()(input_ids.to('cuda'))
embedded_images = torch.randn(3, 5, 4096, device='cuda', dtype=torch.float16)
attention_mask = torch.ones(2, 12, device='cuda', dtype=torch.bool)
result = model._merge_input_ids_with_image_features(embedded_images, input_embeddings, input_ids, attention_mask, None)[0]
print(result.shape)
print("`System Prompt` diff: ", (result[0, :2] - input_embeddings[0, :2]).abs().max())
print("`image` diff: ", (result[0, 2:7] - embedded_images[0]).abs().max())
print("`Caption 1` diff: ", (result[0, 7:11] - input_embeddings[0, 3:7]).abs().max())
print("`System Prompt` diff: ", (result[1, :2] - input_embeddings[1, :2]).abs().max())
print("`image` diff: ", (result[1, 2:7] - embedded_images[1]).abs().max())
print("`Caption 2` diff: ", (result[1, 7:11] - input_embeddings[1, 3:7]).abs().max())
print("`image` diff: ", (result[1, 11:16] - embedded_images[2]).abs().max())
print("`Caption 3` diff: ", (result[1, 16:20] - input_embeddings[1, 8:12]).abs().max())
```
Running this gives:
```
`System Prompt` tokens: [2184, 9508]
`Caption 1` tokens: [9243, 683, 29871, 29896]
Input IDs: torch.Size([2, 12])
Input IDs: tensor([[ 2184, 9508, 32000, 9243, 683, 29871, 29896, 32001, 32001, 32001,
32001, 32001],
[ 2184, 9508, 32000, 9243, 683, 29871, 29906, 32000, 9243, 683,
29871, 29941]])
nb_image_pad: tensor([4, 0])
max_embed_dim: 20
batch_indices: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
device='cuda:0')
non_image_indices: tensor([ 0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 3, 4, 5, 6, 8,
9, 10, 11], device='cuda:0')
text_to_overwrite: tensor([ 0, 1, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 7, 8, 9, 10, 16,
17, 18, 19], device='cuda:0')
image_to_overwrite cumsum: tensor([[-1, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 6,
7, 8],
[-1, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 9, 9,
9, 9]], device='cuda:0')
image_to_overwrite: tensor([[False, False, False, False, False, False, True, False, False, False,
False, False, False, False, False, False, True, True, True, True],
[False, False, True, True, True, True, True, False, False, False,
False, True, True, True, True, True, False, False, False, False]],
device='cuda:0')
torch.Size([2, 20, 4096])
`System Prompt` diff: tensor(0., device='cuda:0', dtype=torch.float16)
`image` diff: tensor(5.0898, device='cuda:0', dtype=torch.float16)
`Caption 1` diff: tensor(0., device='cuda:0', dtype=torch.float16)
`System Prompt` diff: tensor(0., device='cuda:0', dtype=torch.float16)
`image` diff: tensor(0., device='cuda:0', dtype=torch.float16)
`Caption 2` diff: tensor(0., device='cuda:0', dtype=torch.float16)
`image` diff: tensor(0., device='cuda:0', dtype=torch.float16)
`Caption 3` diff: tensor(0., device='cuda:0', dtype=torch.float16)
```
The last set of prints shows the difference between what the function outputs and what is expected, by comparing the different pieces of the result to the original inputs. As can be seen here, the result is _almost_ correct but has misplaced the first image.
The reason is this line: https://github.com/huggingface/transformers/blob/38f9f10dd9240619ea17fb6c7acb51b3bc592232/src/transformers/models/llava/modeling_llava.py#L352
It looks like this line is trying to account for padding in the outputs, but assumes the outputs are going to be left padded, whereas the rest of the function dynamically switches between left and right padding depending on the `left_padding` variable. If the outputs are being right padded, the comparison on this line is incorrect. That can be seen in the debug output above where `image_to_overwrite` shows the image being scattered around the tensor.
Importantly, this incorrect behavior can occur for _any_ padding, because `left_padding` is "automatically" detected from the input. Even if the model is set up for left padding, if a particular batch doesn't have any padding going into this function, the function will assume right padding and thus mangle the output.
One solution is to add an if statement on that line based on `left_padding` and use a different condition to handle right padding. I _think_ this code (for the right padding case) would work, but I haven't tested it extensively:
```
idxs = torch.arange(max_embed_dim, device=image_to_overwrite.device).expand(batch_size, -1)
image_to_overwrite &= idxs < (max_embed_dim - nb_image_pad[:, None]).to(target_device)
```
Side note:
I think the current implementation for handling the left padding case could also be replaced with arange-based logic, as sketched below. To me it would be clearer, as it states the intention of that line explicitly (set all padded indexes to False), whereas I found the use of `image_to_overwrite.cumsum(-1)` opaque and difficult to decipher.
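A hedged sketch of what I mean for the left-padding case (untested; it relies on text positions already being shifted right by `nb_image_pad` under left padding):
```python
idxs = torch.arange(max_embed_dim, device=image_to_overwrite.device).expand(batch_size, -1)
# under left padding the first nb_image_pad absolute positions are padding,
# so never write image features there
image_to_overwrite &= idxs >= nb_image_pad[:, None].to(target_device)
```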
### Expected behavior
See above | [
64,
62
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug",
"Vision"
] |
https://api.github.com/repos/huggingface/transformers/issues/34887 |
TITLE
Bump tornado from 6.4.1 to 6.4.2 in /examples/research_projects/visual_bert
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.4.1 to 6.4.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/v6.4.2/docs/releases.rst">tornado's changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.4.2
releases/v6.4.1
releases/v6.4.0
releases/v6.3.3
releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1
releases/v2.4.0</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tornadoweb/tornado/commit/a5ecfab15e52202a46d34638aad93cddca86d87b"><code>a5ecfab</code></a> Bump version to 6.4.2</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/bc7df6bafdec61155e7bf385081feb205463857d"><code>bc7df6b</code></a> Fix tests with Twisted 24.7.0</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/d5ba4a1695fbf7c6a3e54313262639b198291533"><code>d5ba4a1</code></a> httputil: Fix quadratic performance of cookie parsing</li>
<li>See full diff in <a href="https://github.com/tornadoweb/tornado/compare/v6.4.1...v6.4.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
27,
60
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"dependencies",
"python"
] |
https://api.github.com/repos/huggingface/transformers/issues/35146 |
TITLE
RuntimeError: shape '[1, 3098, 6, 5, 128]' is invalid for input of size 12689408
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
```bash
conda create -yn duo python=3.10
conda activate duo
conda install -y git
conda install -y nvidia/label/cuda-12.4.0::cuda-toolkit
conda install -y nvidia::cuda-cudart-dev
conda install -y pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia
pip install transformers==4.45.2 accelerate sentencepiece datasets wandb zstandard matplotlib huggingface_hub==0.25.2
pip install tensor_parallel==2.0.0
pip install ninja packaging
pip install flash-attn==2.6.3 --no-build-isolation
# LongBench evaluation
pip install seaborn rouge_score einops pandas
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
We encountered a shape mismatch error while trying to reproduce DuoAttention. We tested transformers versions 4.37 to 4.47, and the issue shifted from `RuntimeError: Boolean value of Tensor with more than one value is ambiguous` to `RuntimeError: shape '[1, 3098, 6, 5, 128]' is invalid for input of size 12689408`. We couldn't resolve the issue by changing versions.
We also tried different models with the following commands:
```bash
huggingface-cli download togethercomputer/Llama-2-7B-32K-Instruct --local-dir Llama-2-7B-32K-Instruct
huggingface-cli download gradientai/Llama-3-8B-Instruct-Gradient-1048k --local-dir Llama-3-8B-Instruct-Gradient-1048k
huggingface-cli download gradientai/Llama-3-8B-Instruct-Gradient-4194k --local-dir Llama-3-8B-Instruct-Gradient-4194k
huggingface-cli download mistralai/Mistral-7B-Instruct-v0.2 --local-dir Mistral-7B-Instruct-v0.2
huggingface-cli download mistralai/Mistral-7B-Instruct-v0.3 --local-dir Mistral-7B-Instruct-v0.3
```
However, none of these models worked. There was a previous issue suggesting that updating the transformers version could solve the problem, but we are still getting shape mismatch errors.
Could there be other packages that need to be updated as well?
### Expected behavior
A solution of RuntimeError: shape '[1, 3098, 6, 5, 128]' is invalid for input of size 12689408 | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/35282 |
TITLE
Qwen2vl support for GGUF
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
llama.cpp recently added [support for Qwen2VL](https://github.com/ggerganov/llama.cpp/commit/ba1cb19cdd0d92e012e0f6e009e0620f854b6afd), which means that we can now quantize Qwen2VL models (and I've done so, successfully!). I'd like to be able to load quantized Qwen2VL models with AutoModelForVision2Seq, as sketched below; currently, transformers doesn't recognize qwen2vl as a valid architecture.
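For concreteness, a hedged sketch of the usage I'd like to work (repo id and file name are placeholders):
```python
from transformers import AutoModelForVision2Seq, AutoProcessor

repo_id = "your-username/Qwen2-VL-7B-Instruct-GGUF"    # placeholder
gguf_file = "qwen2-vl-7b-instruct-q4_k_m.gguf"         # placeholder

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
model = AutoModelForVision2Seq.from_pretrained(repo_id, gguf_file=gguf_file)
```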
### Motivation
It would be wonderful to be able to use quantized GGUF Qwen2VL models!
### Your contribution
I'm happy to work up the PR for this, if I can get some direction on where to start. I'm hacking through the code right now, but I don't know it well enough to be able to meaningfully dent the problem just yet. | [
76
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/35570 |
TITLE
Transformers can create unconventional python module names when loading certain repositories
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.41.1
- Platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.23.5
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1 (maybe?)
### Information
Python module names cannot typically:
* Start with anything but a letter or underscore
* Contain hyphens
Transformers can create and load Python modules that break both of these conventions. This can cause unexpected behavior in code that uses the modules transformers creates, such as creating, saving, and loading PyTorch traces from disk.
### Tasks
Load a model from huggingface and trace it.
### Reproduction
I try to load, trace, save to disk, and reload the model from this repo: https://huggingface.co/nomic-ai/nomic-bert-2048
```
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
# Define mean pooling function
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Create a wrapper class for tracing
class TransformerWrapper(torch.nn.Module):
def __init__(self, model):
super().__init__()
self.model = model
def forward(self, input_ids, attention_mask):
outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
pooled = mean_pooling(outputs, attention_mask)
return F.normalize(pooled, p=2, dim=1)
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('nomic-ai/nomic-embed-text-v1')
tokenizer.model_max_length = 128
base_model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True)
base_model.eval()
# Create wrapped model
wrapped_model = TransformerWrapper(base_model)
# Prepare example input for tracing
example_sentences = ['example sentence']
encoded_input = tokenizer(
example_sentences,
padding="max_length",
truncation=True,
return_tensors='pt'
)
with torch.no_grad():
output = wrapped_model(encoded_input["input_ids"], encoded_input["attention_mask"])
# Trace the model
with torch.no_grad():
traced_model = torch.jit.trace(
wrapped_model,
(
encoded_input['input_ids'],
encoded_input['attention_mask']
)
)
print(type(base_model))
torch.jit.save(traced_model, "my_model.pt")
torch.jit.load("my_model.pt") # this will fail
```
The model is loaded in an unconventionally-named python module:
```
$ print(type(base_model))
<class 'transformers_modules.nomic-ai.nomic-bert-2048.40b98394640e630d5276807046089b233113aa87.modeling_hf_nomic_bert.NomicBertModel'>`
```
The module name is serialized inside the torch trace. When the trace is loaded again, it fails to parse because the module name of the class does not follow python conventions:
```
return torch.jit.load(model_path)
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files, _restore_shapes) # type: ignore[call-arg]
RuntimeError: expected newline but found 'ident' here:
Serialized File "code/__torch__.py", line 6
training : bool
_is_full_backward_hook : Optional[bool]
model : __torch__.transformers_modules.nomic-ai.nomic-bert-2048.40b98394640e630d5276807046089b233113aa87.modeling_hf_nomic_bert.NomicBertModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
def forward(self: __torch__.TransformerWrapper,
input_ids: Tensor,
```
### Expected behavior
The module names created by transformers should be sanitized to follow python convention. I was able to solve this problem with a simple modification:
https://github.com/kory/transformers/commit/b3fde4fff92f83fc3322c05cada94dae90842df8
I am unsure whether this is the best fix, or whether it would be considered safe for the package as a whole, but it does fix the tracing issue I'm hitting:
```
print(type(base_model))
<class 'transformers_modules.nomic_ai.nomic_bert_2048._40b98394640e630d5276807046089b233113aa87.modeling_hf_nomic_bert.NomicBertModel'>
```
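For reference, a hedged sketch of the kind of sanitization I mean (illustrative; see the linked commit for the actual change):
```python
import re

def sanitize_module_name(name: str) -> str:
    # replace characters that are not valid in Python identifiers
    name = re.sub(r"[^0-9a-zA-Z_]", "_", name)
    # identifiers cannot start with a digit
    if name and name[0].isdigit():
        name = "_" + name
    return name

print(sanitize_module_name("nomic-ai"))  # -> nomic_ai
print(sanitize_module_name("40b98394640e630d5276807046089b233113aa87"))  # -> _40b9...
```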
| [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34573 |
TITLE
RuntimeError: linalg.vector_norm: Expected a floating point or complex tensor as input. Got Long
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers == 4.45
torch == 2.4.1 + cu118
accelerate == 1.0.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
dataset = load_dataset("pg19")
dataloader = {
split: DataLoader(dataset[split], batch_size=args.batch_size, shuffle=(split == 'train'),
pin_memory=True) for split in ['train', 'validation', 'test']}
accelerator = Accelerator()
device = accelerator.device
model_name = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
tokenizer.add_special_tokens({'pad_token': '[PAD]'}) # tokenizer.pad_token = tokenizer.eos_token e.g.
model = LlamaForCausalLM.from_pretrained(model_name, config=config, torch_dtype = torch.bfloat16).to(device)
model.resize_token_embeddings(len(tokenizer))
train_dataloader, eval_dataloader, model, optimizer, lr_scheduler = accelerator.prepare(
dataloader["train"], dataloader["validation"], model, optimizer, lr_scheduler
)
for epoch in range(1, args.num_epochs + 1):
start_time = perf_counter()
model.train()
train_loss = 0
for idx, batch in enumerate(tqdm(train_dataloader, disable=args.disable_tqdm)):
inputs = tokenizer(batch['text'], padding="longest", truncation=True, max_length=2200, return_tensors='pt', return_token_type_ids=False).to(device)
inputs['labels'] = inputs['input_ids'].clone()
label_mask = inputs['attention_mask'].bool()
inputs['labels'][~label_mask] = -100
loss = model(**inputs).loss
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
```
### Expected behavior
I'm using PyTorch 2.4.1+cu118 and transformers 4.45, training with a batch size of 2 on 2 NVIDIA A100-80GB GPUs. When padding appears in a batch, the attention_mask in LlamaSdpaAttention is activated (i.e. it is not None at this step).
```
causal_mask = attention_mask
if attention_mask is not None:
causal_mask = causal_mask[:, :, :, : key_states.shape[-2]]
```
After performing the torch.nn.functional.scaled_dot_product_attention operation, I encountered the following error at this line
```accelerator.backward(loss)```
```RuntimeError: linalg.vector_norm: Expected a floating point or complex tensor as input. Got Long```
For now, I’ve resolved this by skipping batches that include padding, but I would like to understand the root cause and potential solutions for this issue. | [
64,
80
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
"bug",
"Accelerate"
] |
https://api.github.com/repos/huggingface/transformers/issues/35002 |
TITLE
OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: NVIDIA GeForce RTX 3060
### Who can help?
@ylacombe @eustlb
I am finetuning whisper-small on common corpus locally. When I run trainer.train(), I get the following error infinitely -
```
OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option.
```
The exact same notebook works perfectly on Google Colab.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Please refer to [this page](https://huggingface.co/learn/audio-course/en/chapter5/fine-tuning) for the code.
### Expected behavior
The model should start fine-tuning. | [
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/34400 |
TITLE
[Trainer] RandomSampler for train dataloader fails with Llama-3.2-1B using max model sequence length
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.24.0
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- PyTorch version (GPU?): 2.5.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA RTX A6000
### Who can help?
@muellerzr @SunMarc
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the `run_clm.py` script to finetune Llama-3.2-1B on wikitext-2-raw-v1* as such (single GPU):
```bash
python run_clm.py \
--log_level info \
--model_name_or_path=meta-llama/Llama-3.2-1B \
--dataset_name=Salesforce/wikitext \
--dataset_config_name=wikitext-2-raw-v1 \
--block_size=1024 \
--per_device_train_batch_size=8 \
--do_train \
--output_dir=Llama-3.2-1B-wikitext-2-raw-v1 \
--overwrite_output_dir \
--seed=42 \
--logging_steps=10 \
--lr_scheduler_type=cosine \
--num_train_epochs=3 \
--learning_rate=5e-05 \
--warmup_ratio=0.03 \
--dataloader_drop_last
```
This works fine.
Now simply remove the `--block_size=1024` arg to let it default to model max length (131072), and re-run.
This produces the following error:
```text
Traceback (most recent call last):
File "run_clm.py", line 657, in <module>
main()
File "run_clm.py", line 605, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/anaconda3/envs/dev/lib/python3.9/site-packages/transformers/trainer.py", line 2052, in train
return inner_training_loop(
File "/anaconda3/envs/dev/lib/python3.9/site-packages/transformers/trainer.py", line 2081, in _inner_training_loop
train_dataloader = self.get_train_dataloader()
File "/anaconda3/envs/dev/lib/python3.9/site-packages/transformers/trainer.py", line 925, in get_train_dataloader
dataloader_params["sampler"] = self._get_train_sampler()
File "/anaconda3/envs/dev/lib/python3.9/site-packages/transformers/trainer.py", line 895, in _get_train_sampler
return RandomSampler(self.train_dataset)
File "/anaconda3/envs/dev/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 164, in __init__
raise ValueError(
**ValueError: num_samples should be a positive integer value, but got num_samples=0**
```
*Note: Dataset size should not be the issue - you can also reproduce this using `--dataset_config_name=wikitext-103-raw-v1`
### Expected behavior
Changing the training sequence length should not affect the operation of the training dataloader. | [
24,
66,
64
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Ex: LM (Finetuning)",
"trainer",
"bug"
] |