url | text | num_labels | arr_labels | labels |
---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/23240 |
TITLE
[New model] ImageBind: One Embedding Space To Bind Them All
COMMENTS
2
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
As stated in their [blog post](https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/),
> "[ImageBind is] the first AI model capable of binding information from six modalities. The [model](https://github.com/facebookresearch/ImageBind) learns a single embedding, or shared representation space, not just for text, image/video, and audio, but also for sensors that record depth (3D), thermal (infrared radiation), and inertial measurement units (IMU), which calculate motion and position."
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
GitHub repo: https://github.com/facebookresearch/ImageBind
Paper: https://facebookresearch.github.io/ImageBind/paper
Blog: https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/
Demo: https://imagebind.metademolab.com/
Video: https://dl.fbaipublicfiles.com/imagebind/imagebind_video.mp4
Weights: https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth (currently only 1 that I can see) | [
20
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/24874 |
TITLE
NotImplementedError: offload_to_cpu=True and NO_SHARD is not supported yet
COMMENTS
4
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
I was using FSDP with the settings "full_shard auto_wrap" on an A100 GPU. The training went well but was interrupted when saving the checkpoints. The error stated `NotImplementedError: offload_to_cpu=True and NO_SHARD is not supported yet`. I understand that I am using a single GPU, so FSDP defaults to NO_SHARD. However, I don't understand why offload_to_cpu was set to True. Is there anywhere I can reset it to False?
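For reference, a minimal sketch of where that flag lives in raw PyTorch FSDP; this is only an illustration of the `offload_to_cpu` knob the traceback refers to, not a confirmed fix, and it assumes `model` is an already FSDP-wrapped module:
```python
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import FullStateDictConfig, StateDictType

# Gather a full state dict without CPU offload. With NO_SHARD (the fallback on
# a single GPU), offload_to_cpu=True is what triggers the NotImplementedError.
cfg = FullStateDictConfig(offload_to_cpu=False, rank0_only=True)
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
    full_state_dict = model.state_dict()  # `model` assumed to be FSDP-wrapped
```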
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
following https://github.com/lm-sys/FastChat to fine-tune an LLM
### Expected behavior
the error as stated. | [
25
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
"solved"
] |
https://api.github.com/repos/huggingface/transformers/issues/23480 |
TITLE
SpeechT5 cannot read numbers
COMMENTS
10
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
transformers == 4.29.0
environment = Colab
Python == 3.10.11
tensorflow == 2.12.0
torch == 2.0.1+cu118
torchaudio == 2.0.2+cu118
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Init a Transformer agent
2. Init a text which contains numbers. For example text = "More than 10 people have been killed by Covid."
3. Call the agent for a text-to-speech (SpeechT5). For example, audio_translated = agent.run("Read out loud the text", text=text)
4. Play the generated audio
The audio blanks out all the numbers/digits.
I suspect SpeechT5 is behaving incorrectly, since the code generated by the agent looks correct.
Good luck :)
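A minimal workaround sketch, assuming the third-party `num2words` package is available: spelling numbers out before handing the text to SpeechT5 sidesteps the missing digits.
```python
import re
from num2words import num2words  # third-party package, assumed installed

def spell_out_numbers(text: str) -> str:
    # Replace each run of digits with its spelled-out form.
    return re.sub(r"\d+", lambda m: num2words(int(m.group())), text)

print(spell_out_numbers("More than 10 people have been killed by Covid."))
# -> "More than ten people have been killed by Covid."
```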
### Expected behavior
The audio file should contain numbers/digits indicated in the text. | [
2,
19
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Good Second Issue",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/24518 |
TITLE
[i18n-<English>] Translating docs to <Chinese>
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
I translated all the English documents into Chinese.
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
<!--
Keep on adding more as you go 🔥
-->
| [
8
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP"
] |
https://api.github.com/repos/huggingface/transformers/issues/24829 |
TITLE
[WIP] Add state in segments id calculation
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
At the moment, segments are calculated on an image-by-image basis. This means that when predicting with certain models, e.g. DETR, the segment id that each class corresponds to can differ across images in a batch and across batches.
This PR adds a private attribute to the image processor class to store the class-to-segment_id mapping as state.
/!\ There is a possible breaking change, as `compute_segments` now returns three rather than two objects.
Fixes #23461
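An illustrative sketch (hypothetical names, not the actual PR code) of the kind of stateful class-to-segment_id mapping described above:
```python
class SegmentIdState:
    """Illustrative stand-in for the private attribute this PR describes."""

    def __init__(self):
        self._class_to_segment_id = {}

    def segment_id_for(self, class_id: int) -> int:
        # Assign ids in first-seen order and reuse them afterwards, so the same
        # class maps to the same segment id across images and across batches.
        if class_id not in self._class_to_segment_id:
            self._class_to_segment_id[class_id] = len(self._class_to_segment_id) + 1
        return self._class_to_segment_id[class_id]

state = SegmentIdState()
assert state.segment_id_for(7) == state.segment_id_for(7)
```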
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| [
8
] | [
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"WIP"
] |
https://api.github.com/repos/huggingface/transformers/issues/22771 |
TITLE
TF Swiftformer
COMMENTS
11
REACTIONS
+1: 3
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
Add the TensorFlow port of the SwiftFormer model. See related issue: #22685
To be done once the SwiftFormer model has been added: #22686
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Original repo: https://github.com/amshaker/swiftformer | [
20
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/22352 |
TITLE
XVector Finetuning process - Whisper XVector
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
The idea is to apply XVector to Whisper and, in the process, generate documentation on how to fine-tune or adapt XVector (maybe something similar to SetFit for audio). @vaibva
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | [
20
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/24343 |
TITLE
Enable non-causal mask (to enable MLM) for VisionEncoderDecoder models
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Hello! The current (amazing!) VisionEncoderDecoder library supports text generation via a standard causal LM. Some recent work (linked [here](https://arxiv.org/abs/2306.07915)) has shown promise in having the text decoder be an MLM instead of a causal LM. I believe this is doable with the current VisionEncoderDecoder library by passing in [MASK] tokens for the decoder_input_ids and passing in the labels as usual, but this would still result in a causal mask. The code comment below is what makes me think this:
```
decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
be used by default.
```
Is there a way to turn off causal masking to predict multiple text tokens at once using a VisionEncoderDecoder model?
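For reference, a tiny torch illustration of the difference between the two masking schemes (illustrative only; not an existing VisionEncoderDecoder option):
```python
import torch

seq_len = 5
causal_mask = torch.tril(torch.ones(seq_len, seq_len))  # default decoder: token i sees tokens <= i
full_mask = torch.ones(seq_len, seq_len)                 # MLM-style decoding: every position visible
print(causal_mask)
print(full_mask)
```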
### Motivation
Masked language modeling on top of a Vision encoder appears to be a promising new approach for image captioning and pre-training of vision models according to [this recent work](https://arxiv.org/abs/2306.07915).
### Your contribution
Thank you! | [
19
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/22356 |
TITLE
The output of TFAutoModel-save_pretrained and keras-ModelCheckpoint do not equal.
COMMENTS
9
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Describe the bug
```
history = model.fit(
tf_train_dataset, validation_split=0.01,
epochs=int(training_args.num_train_epochs),
callbacks=callbacks,
)
model.save_pretrained(checkpoint_local)
```
output: `h5` file
```
callbacks = [tf.keras.callbacks.ModelCheckpoint(checkpoint_local)]
history = model.fit(
tf_train_dataset, validation_split=0.01,
epochs=int(training_args.num_train_epochs),
callbacks=callbacks,
)
```
output: `pb` file and `assets` and `variables`
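One likely explanation, sketched below with an illustrative filename: `save_pretrained` writes the Transformers format (config.json plus `tf_model.h5`), while `ModelCheckpoint` delegates to `model.save(filepath)`, which produces a SavedModel directory unless the path ends in `.h5`, so the two artifacts are not expected to match.
```python
import tensorflow as tf

# Pointing the callback at an .h5 path makes Keras write HDF5 instead of a
# SavedModel directory (filename illustrative).
callbacks = [tf.keras.callbacks.ModelCheckpoint("checkpoint.h5")]
```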
### System info
```shell
transformers = 4.26
python = 3.8
```
| [
27
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1
] | [
"bug"
] |
https://api.github.com/repos/huggingface/transformers/issues/22853 |
TITLE
Add an efficient vision transformer backbone in ICLR 2022: CrossFormer
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
CrossFormer has three new components that do not exist in other ViTs (such as Swin):
1. A cross-scale embedding layer (CEL) that generates cross-scale embeddings as the ViT's input.
2. A long-short distance attention (LSDA) mechanism, an efficient replacement for vanilla self-attention that shows better performance than Swin.
3. A dynamic relative position bias, a kind of relative position bias that supports a dynamic group size.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The open source website: https://github.com/cheerss/CrossFormer
The paper was accepted in ICLR 2022: https://openreview.net/forum?id=_PHymLIxuI | [
20
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/22841 |
TITLE
Raise err if minimum Accelerate version isn't available
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
This PR raises an explicit `ImportError` during `TrainingArguments` initialization if `Accelerate` isn't installed (or doesn't meet the minimum required version) and Accelerate is going to be used.
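A minimal sketch of the kind of check described, with placeholder names and a placeholder minimum version rather than the actual transformers implementation:
```python
import importlib.metadata
from packaging import version

MINIMUM_ACCELERATE_VERSION = "0.20.0"  # placeholder, not the real minimum

def require_minimum_accelerate(minimum: str = MINIMUM_ACCELERATE_VERSION) -> None:
    # Raise an explicit ImportError if accelerate is missing or too old.
    try:
        installed = importlib.metadata.version("accelerate")
    except importlib.metadata.PackageNotFoundError:
        raise ImportError(f"TrainingArguments requires accelerate>={minimum}, but it is not installed.")
    if version.parse(installed) < version.parse(minimum):
        raise ImportError(f"accelerate>={minimum} is required, but {installed} is installed.")
```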
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | [
16
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Distributed Training / Models"
] |
https://api.github.com/repos/huggingface/transformers/issues/24781 |
TITLE
Add text-mesh models inside Hugginfaces
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
Text-to-3D models are really gaining traction in some industries, but the current state-of-the-art techniques are very hard to integrate into production code. Some examples are:
https://www.nasir.lol/clipmesh
https://github.com/openai/shap-e
It would be awesome for the community if HF had this integrated.
### Motivation
Text-to-3D models can have a big impact across multiple industries.
### Your contribution
With some guidance I can help work on this, but I will need help from HF developers. | [
20,
19
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"New model",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/23767 |
TITLE
Bump tornado from 6.0.4 to 6.3.2 in /examples/research_projects/visual_bert
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.0.4 to 6.3.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst">tornado's changelog</a>.</em></p>
<blockquote>
<h1>Release notes</h1>
<p>.. toctree::
:maxdepth: 2</p>
<p>releases/v6.3.2
releases/v6.3.1
releases/v6.3.0
releases/v6.2.0
releases/v6.1.0
releases/v6.0.4
releases/v6.0.3
releases/v6.0.2
releases/v6.0.1
releases/v6.0.0
releases/v5.1.1
releases/v5.1.0
releases/v5.0.2
releases/v5.0.1
releases/v5.0.0
releases/v4.5.3
releases/v4.5.2
releases/v4.5.1
releases/v4.5.0
releases/v4.4.3
releases/v4.4.2
releases/v4.4.1
releases/v4.4.0
releases/v4.3.0
releases/v4.2.1
releases/v4.2.0
releases/v4.1.0
releases/v4.0.2
releases/v4.0.1
releases/v4.0.0
releases/v3.2.2
releases/v3.2.1
releases/v3.2.0
releases/v3.1.1
releases/v3.1.0
releases/v3.0.2
releases/v3.0.1
releases/v3.0.0
releases/v2.4.1
releases/v2.4.0
releases/v2.3.0
releases/v2.2.1
releases/v2.2.0
releases/v2.1.1</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tornadoweb/tornado/commit/34f5c1cf2696afec5532ca9e870ba32cbc7fee27"><code>34f5c1c</code></a> Version 6.3.2</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/32ad07c54e607839273b4e1819c347f5c8976b2f"><code>32ad07c</code></a> web: Fix an open redirect in StaticFileHandler</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/e0fa53ee96db720dc7800d0248c39a4ffb8911e9"><code>e0fa53e</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3257">#3257</a> from bdarnell/build-workflow-wstest-warning</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/f5a1d5c7e235ad8860a4c2c5f259a43692bcbaab"><code>f5a1d5c</code></a> ci: Only run pypi actions from the main repo</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/1849ef6c48415ef8f5fecbd47d9f68225588507c"><code>1849ef6</code></a> test: Close a websocket client that causes occasional test failures</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/fcb09eba4bd45c2ebfb6356a38acdb3b4450c0d8"><code>fcb09eb</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3256">#3256</a> from bdarnell/build-workflow-qemu</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/c3d50f41a29cda5f76031c60cf7902b175b79479"><code>c3d50f4</code></a> ci: Update setup-qemu-action version</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/419838b9bcc51445241630def0478f1fbaa61b4b"><code>419838b</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3255">#3255</a> from bdarnell/bump-version-6.3.1</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/cd5b9fcf4ac16c3f5480b3d8ae81b4103c0e7549"><code>cd5b9fc</code></a> Bump version to 6.3.1</li>
<li><a href="https://github.com/tornadoweb/tornado/commit/245334401570a40ba01813d9adb14976c50d77dd"><code>2453344</code></a> Merge pull request <a href="https://redirect.github.com/tornadoweb/tornado/issues/3254">#3254</a> from bdarnell/fix-set-cookie-case</li>
<li>Additional commits viewable in <a href="https://github.com/tornadoweb/tornado/compare/v6.0.4...v6.3.2">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tornado&package-manager=pip&previous-version=6.0.4&new-version=6.3.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
21
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
"dependencies"
] |
https://api.github.com/repos/huggingface/transformers/issues/23928 |
TITLE
[Feature Request] Add timestamp prediction for TF Whisper
COMMENTS
10
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
Latest version.
On Google Colab
### Who can help?
@sanchit-gandhi @connor-henderson
### Information
I am trying to convert TensorFlow Whisper to TFLite, but it turns out that TFWhisper does not output timestamp tokens.
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import tensorflow as tf
# Importing necessary classes from transformers
from transformers import WhisperProcessor, WhisperFeatureExtractor, TFWhisperForConditionalGeneration, WhisperTokenizer
# Importing necessary functions from datasets
from datasets import load_dataset
# Creating an instance of AutoProcessor from the pretrained model
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny.en")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny.en", predict_timestamps=True)
processor = WhisperProcessor(feature_extractor, tokenizer)
# Creating an instance of TFWhisperForConditionalGeneration from the pretrained model
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Loading dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# Inputs
inputs = processor(ds[0]["audio"]["array"], return_tensors="tf")
input_features = inputs.input_features
# Generating Transcription
generated_ids = model.generate(input_features=input_features, return_timestamps=True)
transcription = processor.tokenizer.decode(generated_ids[0], decode_with_timestamps=True)
print(transcription)
```
<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>
### Expected behavior
The same tokenizer with ```predict_timestamps=True``` works as expected in PyTorch:
```
import torch
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], return_tensors="pt")
input_features = inputs.input_features
generated_ids = model.generate(inputs=input_features, return_timestamps=True)
transcription = processor.tokenizer.decode(generated_ids[0], decode_with_timestamps=True)
transcription
```
<|startoftranscript|><|0.00|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|5.44|><|endoftext|>
| [
23,
2,
19
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
1,
0,
0,
0,
0
] | [
"TensorFlow",
"Good Second Issue",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/24295 |
TITLE
Add training support for EnCodec
COMMENTS
12
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Feature request
It would be cool to add training support for the EnCodec model.
I'm not entirely sure we can easily make it compatible with Trainer, so I think this could be a good second issue.
### Motivation
…
### Your contribution
… | [
2,
19
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Good Second Issue",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/22487 |
TITLE
Support `text-to-speech` in `pipeline` function and in Optimum
COMMENTS
11
REACTIONS
+1: 1
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 3
BODY
### Feature request
SpeechT5 was recently added to Transformers:
* **Blog post**: https://huggingface.co/blog/speecht5
* **Spaces demo**: https://huggingface.co/spaces/Matthijs/speecht5-tts-demo
* **Models**: https://huggingface.co/mechanicalsea/speecht5-tts
It would be great if `text-to-speech` could be supported across the Transformers stack.
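A sketch of the usage this request is asking for; the task string and model id are illustrative, since `text-to-speech` was not a supported pipeline task at the time, and SpeechT5 additionally expects speaker embeddings to pick a voice.
```python
from transformers import pipeline

# Hypothetical usage: a text-to-speech pipeline returning audio plus sampling rate.
tts = pipeline("text-to-speech", model="microsoft/speecht5_tts")
speech = tts("It would be great if text-to-speech were supported in pipeline().")
```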
### Motivation
@xenova [bumped into this as an issue](https://github.com/xenova/transformers.js/issues/59) when trying to get SpeechT5 working in the browser (Transformers.js).
### Your contribution
Probably unable to help with this at the moment. | [
9,
19
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Core: Pipeline",
"Feature request"
] |
https://api.github.com/repos/huggingface/transformers/issues/22380 |
TITLE
Bump tensorflow from 2.8.1 to 2.11.1 in /examples/research_projects/decision_transformer
COMMENTS
2
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.8.1 to 2.11.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/releases">tensorflow's releases</a>.</em></p>
<blockquote>
<h2>TensorFlow 2.11.1</h2>
<h1>Release 2.11.1</h1>
<p><strong>Note</strong>: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.</p>
<ul>
<li>Security vulnerability fixes will no longer be patched to this Tensorflow version. The latest Tensorflow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch security vulnerabilities yourself <a href="https://github.com/tensorflow/tensorflow#patching-guidelines">steps</a>. You can refer to the <a href="https://github.com/tensorflow/tensorflow/releases">release notes</a> of the latest Tensorflow version for a list of newly fixed vulnerabilities. If you have any questions, please create a GitHub issue to let us know.</li>
</ul>
<p>This release also introduces several vulnerability fixes:</p>
<ul>
<li>Fixes an FPE in TFLite in conv kernel <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27579">CVE-2023-27579</a></li>
<li>Fixes a double free in Fractional(Max/Avg)Pool <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25801">CVE-2023-25801</a></li>
<li>Fixes a null dereference on ParallelConcat with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25676">CVE-2023-25676</a></li>
<li>Fixes a segfault in Bincount with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25675">CVE-2023-25675</a></li>
<li>Fixes an NPE in RandomShuffle with XLA enable <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25674">CVE-2023-25674</a></li>
<li>Fixes an FPE in TensorListSplit with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25673">CVE-2023-25673</a></li>
<li>Fixes segmentation fault in tfg-translate <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25671">CVE-2023-25671</a></li>
<li>Fixes an NPE in QuantizedMatMulWithBiasAndDequantize <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25670">CVE-2023-25670</a></li>
<li>Fixes an FPE in AvgPoolGrad with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25669">CVE-2023-25669</a></li>
<li>Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25668">CVE-2023-25668</a></li>
<li>Fixes a segfault when opening multiframe gif <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25667">CVE-2023-25667</a></li>
<li>Fixes an NPE in SparseSparseMaximum <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25665">CVE-2023-25665</a></li>
<li>Fixes an FPE in AudioSpectrogram <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25666">CVE-2023-25666</a></li>
<li>Fixes a heap-buffer-overflow in AvgPoolGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25664">CVE-2023-25664</a></li>
<li>Fixes a NPE in TensorArrayConcatV2 <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25663">CVE-2023-25663</a></li>
<li>Fixes a Integer overflow in EditDistance <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25662">CVE-2023-25662</a></li>
<li>Fixes a Seg fault in <code>tf.raw_ops.Print</code> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25660">CVE-2023-25660</a></li>
<li>Fixes a OOB read in DynamicStitch <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25659">CVE-2023-25659</a></li>
<li>Fixes a OOB Read in GRUBlockCellGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25658">CVE-2023-25658</a></li>
</ul>
<h2>TensorFlow 2.11.0</h2>
<h1>Release 2.11.0</h1>
<h2>Breaking Changes</h2>
<ul>
<li>
<p>The <code>tf.keras.optimizers.Optimizer</code> base class now points to the new Keras optimizer, while the old optimizers have been moved to the <code>tf.keras.optimizers.legacy</code> namespace.</p>
<p>If you find your workflow failing due to this change, you may be facing one of the following issues:</p>
<ul>
<li><strong>Checkpoint loading failure.</strong> The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to <code>tf.keras.optimizer.legacy.XXX</code> (e.g. <code>tf.keras.optimizer.legacy.Adam</code>).</li>
<li><strong>TF1 compatibility.</strong> The new optimizer, <code>tf.keras.optimizers.Optimizer</code>, does not support TF1 any more, so please use the legacy optimizer <code>tf.keras.optimizer.legacy.XXX</code>. We highly recommend <a href="https://www.tensorflow.org/guide/migrate">migrating your workflow to TF2</a> for stable support and new features.</li>
<li><strong>Old optimizer API not found.</strong> The new optimizer, <code>tf.keras.optimizers.Optimizer</code>, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.</li>
<li><strong>Learning rate schedule access.</strong> When using a <code>tf.keras.optimizers.schedules.LearningRateSchedule</code>, the new optimizer's <code>learning_rate</code> property returns the current learning rate value instead of a <code>LearningRateSchedule</code> object as before. If you need to access the <code>LearningRateSchedule</code> object, please use <code>optimizer._learning_rate</code>.</li>
<li><strong>If you implemented a custom optimizer based on the old optimizer.</strong> Please set your optimizer to subclass <code>tf.keras.optimizer.legacy.XXX</code>. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the <a href="https://github.com/keras-team/keras/issues">Keras GitHub repo</a>.</li>
<li><strong>Errors, such as <code>Cannot recognize variable...</code>.</strong> The new optimizer requires all optimizer variables to be created at the first <code>apply_gradients()</code> or <code>minimize()</code> call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call <code>optimizer.build(model.trainable_variables)</code> before the training loop.</li>
<li><strong>Timeout or performance loss.</strong> We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.</li>
</ul>
<p>The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, <code>tf.keras.optimizers.Adafactor</code>) will only be implemented based on the new <code>tf.keras.optimizers.Optimizer</code> base class.</p>
</li>
<li>
<p><code>tensorflow/python/keras</code> code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of <code>tensorflow.python.keras</code> and use the public API with <code>from tensorflow import keras</code> or <code>import tensorflow as tf; tf.keras</code>.</p>
</li>
</ul>
<h2>Major Features and Improvements</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md">tensorflow's changelog</a>.</em></p>
<blockquote>
<h1>Release 2.11.1</h1>
<p><strong>Note</strong>: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.</p>
<ul>
<li>Security vulnerability fixes will no longer be patched to this Tensorflow version. The latest Tensorflow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch security vulnerabilities yourself <a href="https://github.com/tensorflow/tensorflow#patching-guidelines">steps</a>. You can refer to the <a href="https://github.com/tensorflow/tensorflow/releases">release notes</a> of the latest Tensorflow version for a list of newly fixed vulnerabilities. If you have any questions, please create a GitHub issue to let us know.</li>
</ul>
<p>This release also introduces several vulnerability fixes:</p>
<ul>
<li>Fixes an FPE in TFLite in conv kernel <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27579">CVE-2023-27579</a></li>
<li>Fixes a double free in Fractional(Max/Avg)Pool <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25801">CVE-2023-25801</a></li>
<li>Fixes a null dereference on ParallelConcat with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25676">CVE-2023-25676</a></li>
<li>Fixes a segfault in Bincount with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25675">CVE-2023-25675</a></li>
<li>Fixes an NPE in RandomShuffle with XLA enable <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25674">CVE-2023-25674</a></li>
<li>Fixes an FPE in TensorListSplit with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25673">CVE-2023-25673</a></li>
<li>Fixes segmentation fault in tfg-translate <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25671">CVE-2023-25671</a></li>
<li>Fixes an NPE in QuantizedMatMulWithBiasAndDequantize <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25670">CVE-2023-25670</a></li>
<li>Fixes an FPE in AvgPoolGrad with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25669">CVE-2023-25669</a></li>
<li>Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25668">CVE-2023-25668</a></li>
<li>Fixes a segfault when opening multiframe gif <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25667">CVE-2023-25667</a></li>
<li>Fixes an NPE in SparseSparseMaximum <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25665">CVE-2023-25665</a></li>
<li>Fixes an FPE in AudioSpectrogram <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25666">CVE-2023-25666</a></li>
<li>Fixes a heap-buffer-overflow in AvgPoolGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25664">CVE-2023-25664</a></li>
<li>Fixes a NPE in TensorArrayConcatV2 <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25663">CVE-2023-25663</a></li>
<li>Fixes a Integer overflow in EditDistance <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25662">CVE-2023-25662</a></li>
<li>Fixes a Seg fault in <code>tf.raw_ops.Print</code> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25660">CVE-2023-25660</a></li>
<li>Fixes a OOB read in DynamicStitch <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25659">CVE-2023-25659</a></li>
<li>Fixes a OOB Read in GRUBlockCellGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25658">CVE-2023-25658</a></li>
</ul>
<h1>Release 2.11.0</h1>
<h2>Breaking Changes</h2>
<ul>
<li>
<p><code>tf.keras.optimizers.Optimizer</code> now points to the new Keras optimizer, and
old optimizers have moved to the <code>tf.keras.optimizers.legacy</code> namespace.
If you find your workflow failing due to this change,
you may be facing one of the following issues:</p>
<ul>
<li><strong>Checkpoint loading failure.</strong> The new optimizer handles optimizer
state differently from the old optimizer, which simplies the logic of
checkpoint saving/loading, but at the cost of breaking checkpoint
backward compatibility in some cases. If you want to keep using an old
checkpoint, please change your optimizer to
<code>tf.keras.optimizers.legacy.XXX</code> (e.g.
<code>tf.keras.optimizers.legacy.Adam</code>).</li>
<li><strong>TF1 compatibility.</strong> The new optimizer does not support TF1 any more,
so please use the legacy optimizer <code>tf.keras.optimizer.legacy.XXX</code>.
We highly recommend to migrate your workflow to TF2 for stable
support and new features.</li>
<li><strong>API not found.</strong> The new optimizer has a different set of public APIs
from the old optimizer. These API changes are mostly related to
getting rid of slot variables and TF1 support. Please check the API</li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tensorflow/tensorflow/commit/a3e2c692c18649329c4210cf8df2487d2028e267"><code>a3e2c69</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60016">#60016</a> from tensorflow/fix-relnotes</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/13b85dcf966d0c94b2e5c21291be039db2dec7b9"><code>13b85dc</code></a> Fix release notes</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/48b18dbf1301f24be9f2f41189d318ce5398540a"><code>48b18db</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60014">#60014</a> from tensorflow/disable-test-that-ooms</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/eea48f50d6982879909bf8e0d0151bbce3f9bf4a"><code>eea48f5</code></a> Disable a test that results in OOM+segfault</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/a63258434247784605986cfc2b43cb3be846cf8a"><code>a632584</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60000">#60000</a> from tensorflow/venkat-patch-3</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/93dea7a67df44bde557e580dfdcde5ba0a7a344d"><code>93dea7a</code></a> Update RELEASE.md</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/a2ba9f16f0154bf93f21132878b154238d89fad6"><code>a2ba9f1</code></a> Updating Release.md with Legal Language for Release Notes</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/fae41c76bdc760454b3e5c1d3af9b8d5a5c6c548"><code>fae41c7</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/59998">#59998</a> from tensorflow/fix-bad-cherrypick-again</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/2757416dcd4a2d00ea36512c2ffd347030c1196b"><code>2757416</code></a> Fix bad cherrypick</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/c78616f4b00125c8a563e10ce6b76bea8070bdd0"><code>c78616f</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/59992">#59992</a> from tensorflow/fix-2.11-build</li>
<li>Additional commits viewable in <a href="https://github.com/tensorflow/tensorflow/compare/v2.8.1...v2.11.1">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tensorflow&package-manager=pip&previous-version=2.8.1&new-version=2.11.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
21
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
"dependencies"
] |
https://api.github.com/repos/huggingface/transformers/issues/24057 |
TITLE
CUDA OOM error when loading sharded checkpoint
COMMENTS
5
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### System Info
* `transformers` version: 4.27.1
* Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
* Python version: 3.9.12
* Huggingface_hub version: 0.13.2
* PyTorch version (GPU?): 2.0.0+cu117 (True)
* Tensorflow version (GPU?): not installed (NA)
* Flax version (CPU?/GPU?/TPU?): not installed (NA)
* Jax version: not installed
* JaxLib version: not installed
* Using GPU in script?: Yes
* Using distributed or parallel set-up in script?: Yes, parallel (accelerate auto-mapping)
### Who can help?
@sgugger @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is a port-over from an issue I wrote on the PyTorch forums [here](https://discuss.pytorch.org/t/cuda-oom-error-when-loading-sharded-checkpoint/180710). I received some help from the folks on the PyTorch side, but unfortunately, they seem to be suggesting that there may be an error in the way `Trainer` saves FSDP models. I will rehash the issue here with the additional context:
> We fine-tuned Stability’s StableLM-7b using Huggingface’s Trainer API (with FSDP) and then saved the resulting checkpoints in the sharded format that is typical for large language models. Quite surprisingly, however, attempting to load the model for inference leads to a strange error when loading one of the checkpoints (`Unable to load weights from pytorch checkpoint file`)
>
> We took some further investigative steps by making a simple `torch.load` call on the problem shard, and got a CUDA OOM error. The exceedingly strange thing about this OOM error is that we are working with a node with 8xA100s (80GB), and the given state dict is only 171kB (comprising only 7 layers of the model). So, you can imagine seeing the following error was quite a shock:
>
> ```
> torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 29.31 GiB (GPU 0; 79.19 GiB total capacity; 55.76 GiB already allocated; 22.48 GiB free; 55.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
> ```
>
> After looking into this further, I discovered a few threads discussing this issue, like [this one 3](https://discuss.pytorch.org/t/cuda-error-out-of-memory-when-load-models/38011), and attempted some of the fixes, namely loading the state dict on CPU first. After doing so, I received the following error:
> `RuntimeError: Trying to resize storage that is not resizable`
>
> So it seems that approach is out of the question. As I previously said, the strange thing here is that the first two shards load without issue, while the third and fourth cannot be loaded. Additionally, nothing seems particularly out of place in the shard-layer mapping JSON. I am stumped here.
The folks at PyTorch let us know that, with FSDP, models should _not_ be saved using `torch.save`, and provided an example script of how they should be saved [here](https://github.com/pytorch/pytorch/blob/e71ab214226af1f9dbded944e939c6447e0e8f09/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py#L59). Does `Trainer` properly handle these larger models, or is there an extra step we should be taking here?
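For reference, the CPU-mapped load referred to above looks like this (shard filename illustrative); mapping to CPU is the usual way to inspect a shard without touching GPU memory, although here it then failed with the resize error quoted.
```python
import torch

state_dict = torch.load("pytorch_model-00003-of-00004.bin", map_location="cpu")
print(sum(t.numel() for t in state_dict.values()))  # total parameters in this shard
```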
### Expected behavior
Typically, I would expect `save_model` to process the model shards in a way that allows them to be reloaded without issue using `from_pretrained` along with `accelerate`'s auto device mapping. | [
25
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
"solved"
] |
https://api.github.com/repos/huggingface/transformers/issues/23964 |
TITLE
Bump cryptography from 39.0.1 to 41.0.0 in /examples/research_projects/decision_transformer
COMMENTS
1
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
Bumps [cryptography](https://github.com/pyca/cryptography) from 39.0.1 to 41.0.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>41.0.0 - 2023-05-30</p>
<pre><code>
* **BACKWARDS INCOMPATIBLE:** Support for OpenSSL less than 1.1.1d has been
removed. Users on older version of OpenSSL will need to upgrade.
* **BACKWARDS INCOMPATIBLE:** Support for Python 3.6 has been removed.
* **BACKWARDS INCOMPATIBLE:** Dropped support for LibreSSL < 3.6.
* Updated the minimum supported Rust version (MSRV) to 1.56.0, from 1.48.0.
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.1.1.
* Added support for the :class:`~cryptography.x509.OCSPAcceptableResponses`
OCSP extension.
* Added support for the :class:`~cryptography.x509.MSCertificateTemplate`
proprietary Microsoft certificate extension.
* Implemented support for equality checks on all asymmetric public key types.
* Added support for ``aes256-gcm@openssh.com`` encrypted keys in
:func:`~cryptography.hazmat.primitives.serialization.load_ssh_private_key`.
* Added support for obtaining X.509 certificate signature algorithm parameters
(including PSS) via
:meth:`~cryptography.x509.Certificate.signature_algorithm_parameters`.
* Support signing :class:`~cryptography.hazmat.primitives.asymmetric.padding.PSS`
X.509 certificates via the new keyword-only argument ``rsa_padding`` on
:meth:`~cryptography.x509.CertificateBuilder.sign`.
* Added support for
:class:`~cryptography.hazmat.primitives.ciphers.aead.ChaCha20Poly1305`
on BoringSSL.
<p>.. _v40-0-2:</p>
<p>40.0.2 - 2023-04-14
</code></pre></p>
<ul>
<li>Fixed compilation when using LibreSSL 3.7.2.</li>
<li>Added some functions to support an upcoming <code>pyOpenSSL</code> release.</li>
</ul>
<p>.. _v40-0-1:</p>
<p>40.0.1 - 2023-03-24</p>
<pre><code>
* Fixed a bug where certain operations would fail if an object happened to be
in the top-half of the memory-space. This only impacted 32-bit systems.
<p>.. _v40-0-0:</p>
<p>40.0.0 - 2023-03-24
</code></pre></p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> As announced in the 39.0.0 changelog, the way
<code>cryptography</code> links OpenSSL has changed. This only impacts users who</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/c4d494fd3ee907316bd846e90cbf4a8df75a25ac"><code>c4d494f</code></a> 41.0.0 version bump (<a href="https://redirect.github.com/pyca/cryptography/issues/8991">#8991</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/8708245ccdeaff21d65eea68a4f8d2a7c5949a22"><code>8708245</code></a> new openssl day (<a href="https://redirect.github.com/pyca/cryptography/issues/8990">#8990</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/31436a486661cd863d4c77e40facf93fbb2d9f54"><code>31436a4</code></a> admit to the existence of nuance in HKDF (<a href="https://redirect.github.com/pyca/cryptography/issues/8987">#8987</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/91e41898e6d1d2a9a6e980c39e2f8baa2fa8a1f8"><code>91e4189</code></a> Port DSA to Rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8978">#8978</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f302d28b81607aab28d22b653da78d564824f267"><code>f302d28</code></a> Update CI for new LibreSSL releases (<a href="https://redirect.github.com/pyca/cryptography/issues/8975">#8975</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/851d8ccb340bfc93c827b9e80af939a216b34925"><code>851d8cc</code></a> Bump openssl from 0.10.52 to 0.10.53 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8986">#8986</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/0918c7236c94c29272e0790ba0227cfa9401943b"><code>0918c72</code></a> Bump coverage from 7.2.6 to 7.2.7 (<a href="https://redirect.github.com/pyca/cryptography/issues/8985">#8985</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/730a5ce11a91f40c1bb0f881ab22bc52d6cecef6"><code>730a5ce</code></a> Bump openssl-sys from 0.9.87 to 0.9.88 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8984">#8984</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/88e8c288975709228005e70301644034463d9823"><code>88e8c28</code></a> Bump BoringSSL and/or OpenSSL in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/8983">#8983</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/3e24e44527a69884ca0c3247e1b5e9c8bbf590c9"><code>3e24e44</code></a> Bump once_cell from 1.17.1 to 1.17.2 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/8982">#8982</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/39.0.1...41.0.0">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=39.0.1&new-version=41.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | [
21
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
"dependencies"
] |
https://api.github.com/repos/huggingface/transformers/issues/22685 |
TITLE
Add SwiftFormer
COMMENTS
0
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
The 'SwiftFormer' paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called 'SwiftFormer' is built on this, achieving state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on an iPhone 14, which is more accurate and 2× faster than MobileViT-v2.
I would like to add this model to Hugging Face.
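A toy sketch of the core idea (illustrative shapes and names; the real layer also applies a linear projection and a residual connection afterwards):
```python
import torch

def efficient_additive_attention(q: torch.Tensor, k: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # q, k: (batch, tokens, dim); w: (dim,) learned scoring vector.
    scores = torch.softmax(q @ w / q.shape[-1] ** 0.5, dim=1)           # (batch, tokens)
    global_query = (scores.unsqueeze(-1) * q).sum(dim=1, keepdim=True)  # (batch, 1, dim)
    return global_query * k  # element-wise interaction: linear, not quadratic, in token count

out = efficient_additive_attention(torch.randn(2, 16, 32), torch.randn(2, 16, 32), torch.randn(32))
print(out.shape)  # torch.Size([2, 16, 32])
```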
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/abs/2303.15446
Original code and weights: https://github.com/Amshaker/SwiftFormer
Author: @Amshaker
| [
20
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"New model"
] |
https://api.github.com/repos/huggingface/transformers/issues/24507 |
TITLE
Add Compact Convolutional Transformer model (CCT)
COMMENTS
8
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20133 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| [
7
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"Model on the Hub"
] |
https://api.github.com/repos/huggingface/transformers/issues/22178 |
TITLE
Add BEiTv3
COMMENTS
3
REACTIONS
+1: 0
-1: 0
laugh: 0
hooray: 0
heart: 0
rocket: 0
eyes: 0
BODY
### Model description
Microsoft just open-sourced BEiTv3: https://github.com/microsoft/unilm/tree/master/beit3
This is a very powerful vision-language model that can be used as a backbone for a variety of downstream tasks, from image classification to VQA to object detection.
Time to add it to HF Transformers! :)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/microsoft/unilm/tree/master/beit3 | [
20
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"New model"
] |