If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Dinov2Config
[[autodoc]] Dinov2Config
Dinov2Model
[[autodoc]] Dinov2Model
- forward
Dinov2ForImageClassification
[[autodoc]] Dinov2ForImageClassification
- forward
UnivNet
Overview
The UnivNet model was proposed in UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, and Juntae Kim.
The UnivNet model is a generative adversarial network (GAN) trained to synthesize high-fidelity speech waveforms. The UnivNet model shared in transformers is the generator, which maps a conditioning log-mel spectrogram and an optional noise sequence to a speech waveform (i.e. a vocoder). Only the generator is required for inference. The discriminator used to train the generator is not implemented.
The abstract from the paper is the following:
Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.
Tips:
The noise_sequence argument for [UnivNetModel.forward] should be standard Gaussian noise (such as from torch.randn) of shape ([batch_size], noise_length, model.config.model_in_channels), where noise_length should match the length dimension (dimension 1) of the input_features argument. If not supplied, it will be randomly generated; a torch.Generator can be supplied to the generator argument so that the forward pass can be reproduced. (Note that [UnivNetFeatureExtractor] will return generated noise by default, so it shouldn't be necessary to generate noise_sequence manually.)
Padding added by [UnivNetFeatureExtractor] can be removed from the [UnivNetModel] output through the [UnivNetFeatureExtractor.batch_decode] method, as shown in the usage example below.
Padding the end of each waveform with silence can reduce artifacts at the end of the generated audio sample. This can be done by supplying pad_end = True to [UnivNetFeatureExtractor.__call__]. See this issue for more details.
Usage Example:
```python
import torch
from scipy.io.wavfile import write
from datasets import Audio, load_dataset
from transformers import UnivNetFeatureExtractor, UnivNetModel

model_id_or_path = "dg845/univnet-dev"
model = UnivNetModel.from_pretrained(model_id_or_path)
feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# Resample the audio to the model and feature extractor's sampling rate.
ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
# Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples.
inputs = feature_extractor(
    ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt"
)

with torch.no_grad():
    audio = model(**inputs)

# Remove the extra padding at the end of the output.
audio = feature_extractor.batch_decode(**audio)[0]
# Convert to wav file.
write("sample_audio.wav", feature_extractor.sampling_rate, audio)
```
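For reproducible generation, the noise_sequence can also be constructed manually with a seeded torch.Generator, following the shape described in the tips above. The snippet below is a minimal sketch; the dummy waveform and the seed are illustrative only:

```python
import torch
from transformers import UnivNetFeatureExtractor, UnivNetModel

model = UnivNetModel.from_pretrained("dg845/univnet-dev")
feature_extractor = UnivNetFeatureExtractor.from_pretrained("dg845/univnet-dev")

# A one-second dummy waveform, used only to obtain input_features of a realistic shape.
dummy_waveform = torch.randn(feature_extractor.sampling_rate).numpy()
inputs = feature_extractor(dummy_waveform, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")

# Standard Gaussian noise of shape (batch_size, noise_length, model_in_channels),
# where noise_length matches dimension 1 of input_features.
batch_size, noise_length = inputs["input_features"].shape[:2]
generator = torch.Generator().manual_seed(0)
noise_sequence = torch.randn((batch_size, noise_length, model.config.model_in_channels), generator=generator)

with torch.no_grad():
    audio = model(input_features=inputs["input_features"], noise_sequence=noise_sequence)
```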
This model was contributed by dg845.
To the best of my knowledge, there is no official code release, but an unofficial implementation can be found at maum-ai/univnet with pretrained checkpoints here.
UnivNetConfig
[[autodoc]] UnivNetConfig
UnivNetFeatureExtractor
[[autodoc]] UnivNetFeatureExtractor
- call
UnivNetModel
[[autodoc]] UnivNetModel
- forward |
Jukebox
Overview
The Jukebox model was proposed in Jukebox: A generative model for music
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
Ilya Sutskever. It introduces a generative music model which can produce minute-long samples that can be conditioned on
an artist, genre and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
As shown in the following figure, Jukebox is made of 3 priors which are decoder-only models. They follow the architecture described in Generating Long Sequences with Sparse Transformers, modified to support longer context length.
First, an autoencoder is used to encode the text lyrics. Next, the first prior (also called top_prior) attends to the last hidden states extracted from the lyrics encoder. Each prior is linked to the previous one via an AudioConditioner module, which upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution.
The metadata such as artist, genre and timing are passed to each prior, in the form of a start token and positional embedding for the timing data. The hidden states are mapped to the closest codebook vector from the VQVAE in order to convert them to raw audio.
This model was contributed by Arthur Zucker.
The original code can be found here.
Usage tips
This model only supports inference. This is for a few reasons, mostly because it requires a very large amount of memory to train. Feel free to open a PR and add what's missing to have a full integration with the Hugging Face trainer!
This model is very slow, and takes 8h to generate a minute-long audio sample using the 5b top prior on a V100 GPU. In order to automatically handle the device on which the model should execute, use accelerate.
Contrary to the paper, the order of the priors goes from 0 to 1 as it felt more intuitive: we sample starting from 0.
Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with fp16 set to True.
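A minimal sketch of ancestral sampling is shown below. The checkpoint name, tokenizer keyword arguments and sample_length value are assumptions based on the released example code rather than a definitive recipe:

```python
import torch
from transformers import JukeboxModel, JukeboxTokenizer

# Assumed checkpoint name for the smaller lyric-conditioned variant.
model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", min_duration=0).eval()
tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")

# Metadata conditioning: artist, genre and (unaligned) lyrics.
metas = dict(artist="Zac Brown Band", genres="Country", lyrics="I met a traveller from an antique land")
labels = tokenizer(**metas)["input_ids"]

# Sampling starts from prior 0, as noted above; a short sample_length keeps this tractable.
with torch.no_grad():
    music_tokens = model.ancestral_sample(labels, sample_length=400)
# The VQ-VAE can then decode these discrete tokens back to raw audio.
```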
JukeboxConfig
[[autodoc]] JukeboxConfig
JukeboxPriorConfig
[[autodoc]] JukeboxPriorConfig
JukeboxVQVAEConfig
[[autodoc]] JukeboxVQVAEConfig
JukeboxTokenizer
[[autodoc]] JukeboxTokenizer
- save_vocabulary
JukeboxModel
[[autodoc]] JukeboxModel
- ancestral_sample
- primed_sample
- continue_sample
- upsample
- _sample
JukeboxPrior
[[autodoc]] JukeboxPrior
- sample
- forward
JukeboxVQVAE
[[autodoc]] JukeboxVQVAE
- forward
- encode
- decode |
MusicGen
Overview
The MusicGen model was proposed in the paper Simple and Controllable Music Generation
by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre DΓ©fossez.
MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned
on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a
sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or audio codes,
conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec,
to recover the audio waveform.
Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of
the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g.
hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass.
The abstract from the paper is the following:
We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates
over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised
of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for
cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen
can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better
controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human
studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark.
Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.
This model was contributed by sanchit-gandhi. The original code can be found
here. The pre-trained checkpoints can be found on the
Hugging Face Hub.
Usage tips
After downloading the original checkpoints from here, you can convert them using the conversion script available at
src/transformers/models/musicgen/convert_musicgen_transformers.py with the following command:
```bash
python src/transformers/models/musicgen/convert_musicgen_transformers.py \
    --checkpoint small --pytorch_dump_folder /output/path --safe_serialization
```
Generation
MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly
better results than greedy, thus we encourage sampling mode to be used where possible. Sampling is enabled by default,
and can be explicitly specified by setting do_sample=True in the call to [MusicgenForConditionalGeneration.generate],
or by overriding the model's generation config (see below).
Generation is limited by the sinusoidal positional embeddings to 30-second inputs. This means MusicGen cannot generate more
than 30 seconds of audio (1503 tokens), and input audio passed for audio-prompted generation counts towards this limit, so
given an input of 20 seconds of audio, MusicGen cannot generate more than 10 seconds of additional audio (the snippet below shows how to convert between token counts and seconds).
Transformers supports both mono (1-channel) and stereo (2-channel) variants of MusicGen. The mono channel versions
generate a single set of codebooks. The stereo versions generate 2 sets of codebooks, 1 for each channel (left/right),
and each set of codebooks is decoded independently through the audio compression model. The audio streams for each
channel are combined to give the final stereo output.
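To relate token counts to audio duration, the audio token rate can be read from the audio encoder config. The snippet below is a small sketch; the frame_rate attribute (tokens generated per second of audio) is assumed to be exposed by the EnCodec config:

```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Tokens generated per second of audio (assumed attribute of the audio encoder config).
frame_rate = model.config.audio_encoder.frame_rate

# 1503 tokens at ~50 tokens/second corresponds to the ~30 second limit described above.
print(f"Maximum audio duration: {1503 / frame_rate:.1f} s")

# Conversely, to target roughly 15 seconds of generated audio:
max_new_tokens = int(15 * frame_rate)
```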
Unconditional Generation
The inputs for unconditional (or 'null') generation can be obtained through the method
[MusicgenForConditionalGeneration.get_unconditional_inputs]:
```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

unconditional_inputs = model.get_unconditional_inputs(num_samples=1)
audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
```
The audio outputs are a three-dimensional Torch tensor of shape (batch_size, num_channels, sequence_length). To listen
to the generated audio samples, you can either play them in an ipynb notebook:
```python
from IPython.display import Audio

sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```

Or save them as a .wav file using a third-party library, e.g. scipy:

```python
import scipy

sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
Text-Conditional Generation
The model can generate an audio sample conditioned on a text prompt through use of the [MusicgenProcessor] to pre-process
the inputs:
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
    padding=True,
    return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```
The guidance_scale is used in classifier free guidance (CFG), setting the weighting between the conditional logits
(which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or
'null' prompt). Higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer audio quality. CFG is enabled by setting guidance_scale > 1. For best results,
use guidance_scale=3 (default).
Audio-Prompted Generation
The same [MusicgenProcessor] can be used to pre-process an audio prompt that is used for audio continuation. In the
following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command
below:
```bash
pip install --upgrade pip
pip install datasets[audio]
```

```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]

# take the first half of the audio sample
sample["array"] = sample["array"][: len(sample["array"]) // 2]

inputs = processor(
    audio=sample["array"],
    sampling_rate=sample["sampling_rate"],
    text=["80s blues track with groovy saxophone"],
    padding=True,
    return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```
For batched audio-prompted generation, the generated audio_values can be post-processed to remove padding by using the
[MusicgenProcessor] class:
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]

# take the first quarter of the audio sample
sample_1 = sample["array"][: len(sample["array"]) // 4]

# take the first half of the audio sample
sample_2 = sample["array"][: len(sample["array"]) // 2]

inputs = processor(
    audio=[sample_1, sample_2],
    sampling_rate=sample["sampling_rate"],
    text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"],
    padding=True,
    return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)

# post-process to remove padding from the batched audio
audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask)
```
Generation Configuration
The default parameters that control the generation process, such as sampling, guidance scale and number of generated
tokens, can be found in the model's generation config, and updated as desired:
```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# inspect the default generation config
model.generation_config

# increase the guidance scale to 4.0
model.generation_config.guidance_scale = 4.0

# decrease the max length to 256 tokens
model.generation_config.max_length = 256
```
Note that any arguments passed to the generate method will supersede those in the generation config, so setting
do_sample=False in the call to generate will supersede the setting of model.generation_config.do_sample in the
generation config.
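For example, reusing the model and unconditional_inputs from the unconditional generation example above, the following call switches to greedy decoding for a single generation without modifying the stored generation config (a minimal sketch):

```python
# Greedy decoding for this call only; model.generation_config.do_sample is left unchanged.
audio_values = model.generate(**unconditional_inputs, do_sample=False, max_new_tokens=256)
```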
Model Structure
The MusicGen model can be de-composed into three distinct stages:
1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5
2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations
3. Audio encoder/decoder: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder
Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [MusicgenForCausalLM],
or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class
[MusicgenForConditionalGeneration]. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first
specifying the correct config, or be accessed through the .decoder attribute of the composite model:
```python
from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration

# Option 1: get decoder config and pass to .from_pretrained
decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder
decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config)

# Option 2: load the entire composite model, but only return the decoder
decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder
```
Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder [MusicgenForCausalLM]
can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can
be combined with the frozen text encoder and audio encoder/decoders to recover the composite [MusicgenForConditionalGeneration]
model.
Tips:
* MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model.
* Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable do_sample in the call to [MusicgenForConditionalGeneration.generate]
MusicgenDecoderConfig
[[autodoc]] MusicgenDecoderConfig
MusicgenConfig
[[autodoc]] MusicgenConfig
MusicgenProcessor
[[autodoc]] MusicgenProcessor
MusicgenModel
[[autodoc]] MusicgenModel
- forward
MusicgenForCausalLM
[[autodoc]] MusicgenForCausalLM
- forward
MusicgenForConditionalGeneration
[[autodoc]] MusicgenForConditionalGeneration
- forward |
Swin Transformer
Overview
The Swin Transformer was proposed in Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
The abstract from the paper is the following:
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone
for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains,
such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text.
To address these differences, we propose a hierarchical Transformer whose representation is computed with \bold{S}hifted
\bold{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping
local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at
various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it
compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense
prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation
(53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and
+2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones.
The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. |
Swin Transformer architecture. Taken from the original paper.
This model was contributed by novice03. The Tensorflow version of this model was contributed by amyeroberts. The original code can be found here.
Usage tips
Swin pads the inputs supporting any input height and width (if divisible by 32).
Swin can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, sequence_length, num_channels).
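The snippet below is a small sketch of the second tip, assuming the microsoft/swin-tiny-patch4-window7-224 checkpoint and a random dummy image:

```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, SwinModel

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

# Random dummy image; a real image would normally come from a dataset or PIL.
image = Image.fromarray(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# (batch_size, sequence_length, num_channels) per stage
print([tuple(h.shape) for h in outputs.hidden_states])
# (batch_size, num_channels, height, width) per stage
print([tuple(h.shape) for h in outputs.reshaped_hidden_states])
```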
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer.
[SwinForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
[SwinForMaskedImageModeling] is supported by this example script.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SwinConfig
[[autodoc]] SwinConfig |
SwinModel
[[autodoc]] SwinModel
- forward
SwinForMaskedImageModeling
[[autodoc]] SwinForMaskedImageModeling
- forward
SwinForImageClassification
[[autodoc]] transformers.SwinForImageClassification
- forward
TFSwinModel
[[autodoc]] TFSwinModel
- call
TFSwinForMaskedImageModeling
[[autodoc]] TFSwinForMaskedImageModeling
- call
TFSwinForImageClassification
[[autodoc]] transformers.TFSwinForImageClassification
- call |
Perceiver
Overview
The Perceiver IO model was proposed in Perceiver IO: A General Architecture for Structured Inputs &
Outputs by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch,
Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier HΓ©naff, Matthew M.
Botvinick, Andrew Zisserman, Oriol Vinyals, JoΓ£o Carreira.
Perceiver IO is a generalization of Perceiver to handle arbitrary outputs in
addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to
classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio.
This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is
linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process
inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example,
Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs.
The abstract from the paper is the following:
The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point
clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of
inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without
sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce
outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales
linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves
strong results on tasks with highly structured output spaces, such as natural language and visual understanding,
StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT
baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art
performance on Sintel optical flow estimation.
Here's a TLDR explaining how Perceiver works:
The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale
quadratically with the sequence length. Hence, models like BERT and RoBERTa are limited to a max sequence length of 512
tokens. Perceiver aims to solve this issue by, instead of performing self-attention on the inputs, perform it on a set
of latent variables, and only use the inputs for cross-attention. In this way, the time and memory requirements don't
depend on the length of the inputs anymore, as one uses a fixed amount of latent variables, like 256 or 512. These are
randomly initialized, after which they are trained end-to-end using backpropagation.
Internally, [PerceiverModel] will create the latents, which is a tensor of shape (batch_size, num_latents,
d_latents). One must provide inputs (which could be text, images, audio, you name it!) to the model, which it will
use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape. One
can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along
the sequence dimension, and placing a linear layer on top of that to project the d_latents to num_labels.
This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up
work, PerceiverIO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The
idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the
last hidden states of the latents, using the outputs as queries, and the latents as keys and values.
So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input
length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes,
providing inputs of length 2048 to the model. If one now masks out certain of these 2048 tokens, one can define the
outputs as being of shape: (batch_size, 2048, 768). Next, one performs cross-attention with the final hidden states
of the latents to update the outputs tensor. After cross-attention, one still has a tensor of shape (batch_size,
2048, 768). One can then place a regular language modeling head on top, to project the last dimension to the
vocabulary size of the model, i.e. creating logits of shape (batch_size, 2048, 262) (as Perceiver uses a vocabulary
size of 262 byte IDs). |
Perceiver IO architecture. Taken from the original paper
This model was contributed by nielsr. The original code can be found
here.
Perceiver does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035
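As a small illustration of the byte-level masked language modeling described above, the sketch below runs [PerceiverForMaskedLM] on a partially masked sentence. The deepmind/language-perceiver checkpoint name and the masked byte positions are assumptions for illustration:

```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

# The tokenizer operates directly on UTF-8 bytes, so no wordpiece vocabulary is needed.
text = "This is an incomplete sentence where some words are missing."
inputs = tokenizer(text, padding="max_length", return_tensors="pt")

# Mask the bytes corresponding to " missing." (byte positions chosen for this example).
inputs["input_ids"][0, 52:61] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, 2048, 262)

predicted_ids = logits[0, 52:61].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```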
Resources
The quickest way to get started with the Perceiver is by checking the tutorial
notebooks.
Refer to the blog post if you want to fully understand how the model works and
is implemented in the library. Note that the models available in the library only showcase some examples of what you can do
with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection,
audio classification, video classification, etc.
Text classification task guide
Masked language modeling task guide
Image classification task guide |
Perceiver specific outputs
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverModelOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverDecoderOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassifierOutput
PerceiverConfig
[[autodoc]] PerceiverConfig
PerceiverTokenizer
[[autodoc]] PerceiverTokenizer
- call
PerceiverFeatureExtractor
[[autodoc]] PerceiverFeatureExtractor
- call
PerceiverImageProcessor
[[autodoc]] PerceiverImageProcessor
- preprocess
PerceiverTextPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverTextPreprocessor
PerceiverImagePreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverImagePreprocessor
PerceiverOneHotPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor
PerceiverAudioPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor
PerceiverMultimodalPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor
PerceiverProjectionDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionDecoder
PerceiverBasicDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicDecoder
PerceiverClassificationDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationDecoder
PerceiverOpticalFlowDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder
PerceiverBasicVideoAutoencodingDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder
PerceiverMultimodalDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder
PerceiverProjectionPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor
PerceiverAudioPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor
PerceiverClassificationPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor
PerceiverMultimodalPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor
PerceiverModel
[[autodoc]] PerceiverModel
- forward
PerceiverForMaskedLM
[[autodoc]] PerceiverForMaskedLM
- forward
PerceiverForSequenceClassification
[[autodoc]] PerceiverForSequenceClassification
- forward
PerceiverForImageClassificationLearned
[[autodoc]] PerceiverForImageClassificationLearned
- forward
PerceiverForImageClassificationFourier
[[autodoc]] PerceiverForImageClassificationFourier
- forward
PerceiverForImageClassificationConvProcessing
[[autodoc]] PerceiverForImageClassificationConvProcessing
- forward
PerceiverForOpticalFlow
[[autodoc]] PerceiverForOpticalFlow
- forward
PerceiverForMultimodalAutoencoding
[[autodoc]] PerceiverForMultimodalAutoencoding
- forward |
X-MOD
Overview
The X-MOD model was proposed in Lifting the Curse of Multilinguality by Pre-training Modular Transformers by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe.
X-MOD extends multilingual masked language models like XLM-R to include language-specific modular components (language adapters) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen.
The abstract from the paper is the following:
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
This model was contributed by jvamvas.
The original code can be found here and the original documentation is found here.
Usage tips
- X-MOD is similar to XLM-R, but a difference is that the input language needs to be specified so that the correct language adapter can be activated.
- The main models – base and large – have adapters for 81 languages.
Adapter Usage
Input language
There are two ways to specify the input language:
1. By setting a default language before using the model:
```python
from transformers import XmodModel

model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
```
2. By explicitly passing the index of the language adapter for each sample:
```python
import torch

input_ids = torch.tensor(
    [
        [0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
        [0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
    ]
)
lang_ids = torch.LongTensor(
    [
        0,  # en_XX
        8,  # de_DE
    ]
)
output = model(input_ids, lang_ids=lang_ids)
```
Fine-tuning
The paper recommends that the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided:
```python
model.freeze_embeddings_and_language_adapters()
# Fine-tune the model ...
```
Cross-lingual transfer
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:
```python
model.set_default_language("de_DE")
# Evaluate the model on German examples ...
```
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
XmodConfig
[[autodoc]] XmodConfig
XmodModel
[[autodoc]] XmodModel
- forward
XmodForCausalLM
[[autodoc]] XmodForCausalLM
- forward
XmodForMaskedLM
[[autodoc]] XmodForMaskedLM
- forward
XmodForSequenceClassification
[[autodoc]] XmodForSequenceClassification
- forward
XmodForMultipleChoice
[[autodoc]] XmodForMultipleChoice
- forward
XmodForTokenClassification
[[autodoc]] XmodForTokenClassification
- forward
XmodForQuestionAnswering
[[autodoc]] XmodForQuestionAnswering
- forward |
DistilBERT
Overview
The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a
distilled version of BERT, and the paper DistilBERT, a
distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a
small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than
google-bert/bert-base-uncased, runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language
understanding benchmark.
The abstract from the paper is the following:
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP),
operating these large models in on-the-edge and/or under constrained computational training or inference budgets
remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation
model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger
counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage
knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by
40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive
biases learned by larger models during pretraining, we introduce a triple loss combining language modeling,
distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we
demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device
study.
This model was contributed by victorsanh. This model jax version was
contributed by kamalkraj. The original code can be found here.
Usage tips
DistilBERT doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or [SEP]).
DistilBERT doesn't have options to select the input positions (position_ids input). This could be added if
necessary though, just let us know if you need this option.
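For example, encoding a sentence pair with the distilbert/distilbert-base-uncased tokenizer yields input_ids and attention_mask but no token_type_ids (a minimal sketch):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# The two segments are joined with the [SEP] token automatically.
encoding = tokenizer("DistilBERT is small.", "It is also fast.")
print(encoding.keys())      # dict_keys(['input_ids', 'attention_mask'])
print(tokenizer.sep_token)  # [SEP]
```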
Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it's been trained to predict the same probabilities as the larger model. The actual objective is a combination of:
finding the same probabilities as the teacher model
predicting the masked tokens correctly (but no next-sentence objective)
a cosine similarity between the hidden states of the student and the teacher model |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on Getting Started with Sentiment Analysis using Python with DistilBERT.
A blog post on how to train DistilBERT with Blurr for sequence classification.
A blog post on how to use Ray to tune DistilBERT hyperparameters.
A blog post on how to train DistilBERT with Hugging Face and Amazon SageMaker.
A notebook on how to finetune DistilBERT for multi-label classification. 🌎
A notebook on how to finetune DistilBERT for multiclass classification with PyTorch. 🌎
A notebook on how to finetune DistilBERT for text classification in TensorFlow. 🌎
[DistilBertForSequenceClassification] is supported by this example script and notebook.
[TFDistilBertForSequenceClassification] is supported by this example script and notebook.
[FlaxDistilBertForSequenceClassification] is supported by this example script and notebook.
Text classification task guide |
[DistilBertForTokenClassification] is supported by this example script and notebook.
[TFDistilBertForTokenClassification] is supported by this example script and notebook.
[FlaxDistilBertForTokenClassification] is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide |
[DistilBertForMaskedLM] is supported by this example script and notebook.
[TFDistilBertForMaskedLM] is supported by this example script and notebook.
[FlaxDistilBertForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide |
[DistilBertForQuestionAnswering] is supported by this example script and notebook.
[TFDistilBertForQuestionAnswering] is supported by this example script and notebook.
[FlaxDistilBertForQuestionAnswering] is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide |
Multiple choice
- [DistilBertForMultipleChoice] is supported by this example script and notebook.
- [TFDistilBertForMultipleChoice] is supported by this example script and notebook.
- Multiple choice task guide
⚗️ Optimization
A blog post on how to quantize DistilBERT with 🤗 Optimum and Intel.
A blog post on how to optimize Transformers for GPUs with 🤗 Optimum.
A blog post on Optimizing Transformers with Hugging Face Optimum.
⚡️ Inference
A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia with DistilBERT.
A blog post on Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker.
🚀 Deploy
A blog post on how to deploy DistilBERT on Google Cloud.
A blog post on how to deploy DistilBERT with Amazon SageMaker.
A blog post on how to Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module. |
Combining DistilBERT and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2.
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
import torch
from transformers import AutoTokenizer, AutoModel

device = "cuda"  # the device to load the model onto

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = AutoModel.from_pretrained(
    "distilbert/distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="flash_attention_2"
)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt").to(device)
model.to(device)
output = model(**encoded_input)
```
DistilBertConfig
[[autodoc]] DistilBertConfig
DistilBertTokenizer
[[autodoc]] DistilBertTokenizer
DistilBertTokenizerFast
[[autodoc]] DistilBertTokenizerFast |
DistilBertModel
[[autodoc]] DistilBertModel
- forward
DistilBertForMaskedLM
[[autodoc]] DistilBertForMaskedLM
- forward
DistilBertForSequenceClassification
[[autodoc]] DistilBertForSequenceClassification
- forward
DistilBertForMultipleChoice
[[autodoc]] DistilBertForMultipleChoice
- forward
DistilBertForTokenClassification
[[autodoc]] DistilBertForTokenClassification
- forward
DistilBertForQuestionAnswering
[[autodoc]] DistilBertForQuestionAnswering
- forward |
TFDistilBertModel
[[autodoc]] TFDistilBertModel
- call
TFDistilBertForMaskedLM
[[autodoc]] TFDistilBertForMaskedLM
- call
TFDistilBertForSequenceClassification
[[autodoc]] TFDistilBertForSequenceClassification
- call
TFDistilBertForMultipleChoice
[[autodoc]] TFDistilBertForMultipleChoice
- call
TFDistilBertForTokenClassification
[[autodoc]] TFDistilBertForTokenClassification
- call
TFDistilBertForQuestionAnswering
[[autodoc]] TFDistilBertForQuestionAnswering
- call |
FlaxDistilBertModel
[[autodoc]] FlaxDistilBertModel
- call
FlaxDistilBertForMaskedLM
[[autodoc]] FlaxDistilBertForMaskedLM
- call
FlaxDistilBertForSequenceClassification
[[autodoc]] FlaxDistilBertForSequenceClassification
- call
FlaxDistilBertForMultipleChoice
[[autodoc]] FlaxDistilBertForMultipleChoice
- call
FlaxDistilBertForTokenClassification
[[autodoc]] FlaxDistilBertForTokenClassification
- call
FlaxDistilBertForQuestionAnswering
[[autodoc]] FlaxDistilBertForQuestionAnswering
- call |
OpenAI GPT
Overview
OpenAI GPT model was proposed in Improving Language Understanding by Generative Pre-Training
by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It's a causal (unidirectional) transformer
pre-trained using language modeling on a large corpus with long range dependencies, the Toronto Book Corpus.
The abstract from the paper is the following:
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering,
semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant,
labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to
perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a
language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In
contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve
effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our
approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms
discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon
the state of the art in 9 out of the 12 tasks studied.
Write With Transformer is a webapp created and hosted by Hugging Face
showcasing the generative capabilities of several models. GPT is one of them.
This model was contributed by thomwolf. The original code can be found here.
Usage tips
GPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
token in a sequence. Leveraging this feature allows GPT to generate syntactically coherent text as it can be
observed in the run_generation.py example script.
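As a quick way to try this out, the sketch below uses the text-generation pipeline; the openai-community/openai-gpt checkpoint name is assumed to be the Hub identifier of the pretrained model:

```python
from transformers import pipeline

# Assumed Hub identifier for the original OpenAI GPT checkpoint.
generator = pipeline("text-generation", model="openai-community/openai-gpt")
output = generator("Hello, I'm a language model,", do_sample=True, max_new_tokens=20)
print(output[0]["generated_text"])
```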
Note:
If you want to reproduce the original tokenization process of the OpenAI GPT paper, you will need to install ftfy
and SpaCy:

```bash
pip install spacy ftfy==4.4.3
python -m spacy download en
```
If you don't install ftfy and SpaCy, the [OpenAIGPTTokenizer] will default to tokenize
using BERT's BasicTokenizer followed by Byte-Pair Encoding (which should be fine for most usage, don't worry).
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OpenAI GPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on outperforming OpenAI GPT-3 with SetFit for text-classification.
See also: Text classification task guide |
A blog on how to Finetune a non-English GPT-2 Model with Hugging Face.
A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2.
A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model.
A blog on Faster Text Generation with TensorFlow and XLA with GPT-2.
A blog on How to train a Language Model with Megatron-LM with a GPT-2 model.
A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎
A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎
Causal language modeling chapter of the 🤗 Hugging Face Course.
[OpenAIGPTLMHeadModel] is supported by this causal language modeling example script, text generation example script and notebook.
[TFOpenAIGPTLMHeadModel] is supported by this causal language modeling example script and notebook.
See also: Causal language modeling task guide |
A course material on Byte-Pair Encoding tokenization.
OpenAIGPTConfig
[[autodoc]] OpenAIGPTConfig
OpenAIGPTTokenizer
[[autodoc]] OpenAIGPTTokenizer
- save_vocabulary
OpenAIGPTTokenizerFast
[[autodoc]] OpenAIGPTTokenizerFast
OpenAI specific outputs
[[autodoc]] models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput
[[autodoc]] models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput |
OpenAIGPTModel
[[autodoc]] OpenAIGPTModel
- forward
OpenAIGPTLMHeadModel
[[autodoc]] OpenAIGPTLMHeadModel
- forward
OpenAIGPTDoubleHeadsModel
[[autodoc]] OpenAIGPTDoubleHeadsModel
- forward
OpenAIGPTForSequenceClassification
[[autodoc]] OpenAIGPTForSequenceClassification
- forward |
TFOpenAIGPTModel
[[autodoc]] TFOpenAIGPTModel
- call
TFOpenAIGPTLMHeadModel
[[autodoc]] TFOpenAIGPTLMHeadModel
- call
TFOpenAIGPTDoubleHeadsModel
[[autodoc]] TFOpenAIGPTDoubleHeadsModel
- call
TFOpenAIGPTForSequenceClassification
[[autodoc]] TFOpenAIGPTForSequenceClassification
- call |
LeViT
Overview
The LeViT model was proposed in LeViT: Introducing Convolutions to Vision Transformers by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, HervΓ© JΓ©gou, Matthijs Douze. LeViT improves the Vision Transformer (ViT) in performance and efficiency by a few architectural differences such as activation maps with decreasing resolutions in Transformers and the introduction of an attention bias to integrate positional information.
The abstract from the paper is the following:
*We design a family of image classification architectures that optimize the trade-off between accuracy
and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures,
which are competitive on highly parallel processing hardware. We revisit principles from the extensive
literature on convolutional neural networks to apply them to transformers, in particular activation maps
with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information
in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification.
We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of
application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable
to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect
to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU.*
LeViT Architecture. Taken from the original paper.
This model was contributed by anugunj. The original code can be found here.
Usage tips
Compared to ViT, LeViT models use an additional distillation head to effectively learn from a teacher (which, in the LeViT paper, is a ResNet-like model). The distillation head is learned through backpropagation under supervision of a ResNet-like model. They also draw inspiration from convolutional neural networks to use activation maps with decreasing resolutions to increase the efficiency.
There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
of the final hidden state and not using the distillation head, or (2) by placing both a prediction head and distillation
head on top of the final hidden state. In that case, the prediction head is trained using regular cross-entropy between
the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation
(cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time,
one takes the average prediction between both heads as final prediction. (2) is also called "fine-tuning with distillation",
because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds
to [LevitForImageClassification] and (2) corresponds to [LevitForImageClassificationWithTeacher].
All released checkpoints were pre-trained and fine-tuned on ImageNet-1k
(also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes) only. No external data was used. This is in
contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training.
The authors of LeViT released 5 trained LeViT models, which you can directly plug into [LevitModel] or [LevitForImageClassification].
Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
(while only using ImageNet-1k for pre-training). The 5 variants available are (all trained on images of size 224x224):
facebook/levit-128S, facebook/levit-128, facebook/levit-192, facebook/levit-256 and
facebook/levit-384. Note that one should use [LevitImageProcessor] in order to
prepare images for the model.
[LevitForImageClassificationWithTeacher] currently supports only inference and not training or fine-tuning.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here
(you can just replace [ViTFeatureExtractor] by [LevitImageProcessor] and [ViTForImageClassification] by [LevitForImageClassification] or [LevitForImageClassificationWithTeacher]).
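A minimal inference sketch is shown below, assuming the facebook/levit-128S checkpoint and the COCO demo image commonly used in the documentation:

```python
import requests
import torch
from PIL import Image
from transformers import LevitImageProcessor, LevitForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitForImageClassification.from_pretrained("facebook/levit-128S")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Predicted ImageNet-1k class
predicted_class_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_idx])
```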
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LeViT.
[LevitForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LevitConfig
[[autodoc]] LevitConfig
LevitFeatureExtractor
[[autodoc]] LevitFeatureExtractor
- call
LevitImageProcessor
[[autodoc]] LevitImageProcessor
- preprocess
LevitModel
[[autodoc]] LevitModel
- forward
LevitForImageClassification
[[autodoc]] LevitForImageClassification
- forward
LevitForImageClassificationWithTeacher
[[autodoc]] LevitForImageClassificationWithTeacher
- forward |
MobileNet V2
Overview
The MobileNet model was proposed in MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
The abstract from the paper is the following:
In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.
The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.
This model was contributed by matthijs. The original code and weights can be found here for the main model and here for DeepLabV3+.
Usage tips
The checkpoints are named mobilenet_v2_depth_size, for example mobilenet_v2_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and 224 is the resolution of the input images the model was trained on.
Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32.
One can use [MobileNetV2ImageProcessor] to prepare images for the model.
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra "background" class (index 0).
The segmentation model uses a DeepLabV3+ head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [MobileNetV2Config] with tf_padding = False.
Unsupported features: |
The [MobileNetV2Model] outputs a globally pooled version of the last hidden state. In the original model it is possible to use an average pooling layer with a fixed 7x7 window and stride 1 instead of global pooling. For inputs that are larger than the recommended image size, this gives a pooled output that is larger than 1x1. The Hugging Face implementation does not support this. |
The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights.
It's common to extract the output from the expansion layers at indices 10 and 13, as well as the output from the final 1x1 convolution layer, for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers; see the sketch after these notes. |
The DeepLabV3+ segmentation head does not use the final convolution layer from the backbone, but this layer gets computed anyway. There is currently no way to tell [MobileNetV2Model] up to which layer it should run.
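As a rough illustration of the output_hidden_states point above, the sketch below collects all intermediate feature maps and picks out two of them. The checkpoint name is an assumption, and the indices 10 and 13 simply follow the note above; verify them against the model you actually use.
thon
from transformers import MobileNetV2ImageProcessor, MobileNetV2Model
from PIL import Image
import requests
import torch

checkpoint = "google/mobilenet_v2_1.0_224"  # assumed checkpoint, see the tip on naming above
image_processor = MobileNetV2ImageProcessor.from_pretrained(checkpoint)
model = MobileNetV2Model.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple with one feature map per intermediate layer;
# indices 10 and 13 correspond to the expansion layers mentioned above (verify for your model).
features = [outputs.hidden_states[i] for i in (10, 13)]
print([f.shape for f in features])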
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV2.
[MobileNetV2ForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
Semantic segmentation
- Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileNetV2Config
[[autodoc]] MobileNetV2Config
MobileNetV2FeatureExtractor
[[autodoc]] MobileNetV2FeatureExtractor
- preprocess
- post_process_semantic_segmentation
MobileNetV2ImageProcessor
[[autodoc]] MobileNetV2ImageProcessor
- preprocess
- post_process_semantic_segmentation
MobileNetV2Model
[[autodoc]] MobileNetV2Model
- forward
MobileNetV2ForImageClassification
[[autodoc]] MobileNetV2ForImageClassification
- forward
MobileNetV2ForSemanticSegmentation
[[autodoc]] MobileNetV2ForSemanticSegmentation
- forward |
GPT-J
Overview
The GPT-J model was released in the kingoflolz/mesh-transformer-jax repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like
causal language model trained on the Pile dataset.
This model was contributed by Stella Biderman.
Usage tips |
To load GPT-J in float32 one would need at least 2x the model size in CPU RAM: 1x for the initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB of RAM just to load the model. To reduce the RAM usage there are a few options. The torch_dtype argument can be used to initialize the model in half-precision on a CUDA device only. There is also a float16 branch of the checkpoint which stores the fp16 weights and can be used to further minimize the RAM usage:
thon |
from transformers import GPTJForCausalLM
import torch

device = "cuda"
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",           # branch of the repository that stores the fp16 weights
    torch_dtype=torch.float16,    # initialize the weights directly in half precision
).to(device) |
The model should fit on a 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. The Adam
optimizer, for example, makes four copies of the model: model, gradients, and the average and squared average of the gradients.
So it would need at least 4x the model size in GPU memory, even with mixed precision, as gradient updates are in fp32. This
does not include the activations and data batches, which would again require some more GPU RAM. So one should explore
solutions such as DeepSpeed to train/fine-tune the model. Another option is to use the original codebase to
train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for
that can be found here. |
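As a rough back-of-the-envelope check of these numbers, here is a small sketch. It assumes roughly 6 billion parameters and ignores activations, batch size, and framework overhead, so treat the results as lower bounds only.
thon
# Very rough estimate; real memory usage also depends on activations, batch size,
# optimizer implementation, and framework overhead.
num_params = 6e9   # GPT-J has roughly 6 billion parameters
fp32_bytes = 4     # bytes per parameter in float32

weights_gb = num_params * fp32_bytes / 1e9
print(f"fp32 weights:                              ~{weights_gb:.0f} GB")
print(f"2x to load a checkpoint into CPU RAM:      ~{2 * weights_gb:.0f} GB")
print(f"4x for Adam (weights, grads, two moments): ~{4 * weights_gb:.0f} GB")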
Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra
tokens are added for the sake of efficiency on TPUs. To avoid a mismatch between the embedding matrix size and the vocab
size, the tokenizer for GPT-J contains 143 extra tokens
<|extratoken_1|> ... <|extratoken_143|>, so the vocab_size of the tokenizer also becomes 50400.
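If you want to verify this yourself, here is a minimal sketch; it assumes the EleutherAI/gpt-j-6B checkpoint used in the examples below.
thon
from transformers import AutoTokenizer, GPTJConfig

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
config = GPTJConfig.from_pretrained("EleutherAI/gpt-j-6B")

# Both should report 50400 according to the tip above:
# 50257 GPT-2 tokens plus 143 <|extratoken_*|> entries.
print(len(tokenizer))
print(config.vocab_size)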
Usage examples
The [~generation.GenerationMixin.generate] method can be used to generate text using the GPT-J
model.
thon |
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
    "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
    "researchers was the fact that the unicorns spoke perfect English."
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids

gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0] |
or in float16 precision:
thon |
from transformers import GPTJForCausalLM, AutoTokenizer
import torch

device = "cuda"
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to(device)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
    "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
    "researchers was the fact that the unicorns spoke perfect English."
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0] |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
Description of GPT-J.
A blog on how to Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker.
A blog on how to Accelerate GPT-J inference with DeepSpeed-Inference on GPUs.
A blog post introducing GPT-J-6B: 6B JAX-Based Transformer. 🌎
A notebook for GPT-J-6B Inference Demo. 🌎
Another notebook demonstrating Inference with GPT-J-6B.
Causal language modeling chapter of the 🤗 Hugging Face Course.
[GPTJForCausalLM] is supported by this causal language modeling example script, text generation example script, and notebook.
[TFGPTJForCausalLM] is supported by this causal language modeling example script and notebook.
[FlaxGPTJForCausalLM] is supported by this causal language modeling example script and notebook. |
Documentation resources
- Text classification task guide
- Question answering task guide
- Causal language modeling task guide
GPTJConfig
[[autodoc]] GPTJConfig
- all
GPTJModel
[[autodoc]] GPTJModel
- forward
GPTJForCausalLM
[[autodoc]] GPTJForCausalLM
- forward
GPTJForSequenceClassification
[[autodoc]] GPTJForSequenceClassification
- forward
GPTJForQuestionAnswering
[[autodoc]] GPTJForQuestionAnswering
- forward |
TFGPTJModel
[[autodoc]] TFGPTJModel
- call
TFGPTJForCausalLM
[[autodoc]] TFGPTJForCausalLM
- call
TFGPTJForSequenceClassification
[[autodoc]] TFGPTJForSequenceClassification
- call
TFGPTJForQuestionAnswering
[[autodoc]] TFGPTJForQuestionAnswering
- call
FlaxGPTJModel
[[autodoc]] FlaxGPTJModel
- call
FlaxGPTJForCausalLM
[[autodoc]] FlaxGPTJForCausalLM
- call |
MobileViT
Overview
The MobileViT model was proposed in MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers.
The abstract from the paper is the following:
Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.
This model was contributed by matthijs. The TensorFlow version of the model was contributed by sayakpaul. The original code and weights can be found here.
Usage tips |
MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow this tutorial for a lightweight introduction.
One can use [MobileViTImageProcessor] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB); see the sketch after these tips.
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
As the name suggests MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with TensorFlow Lite. |
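Here is a minimal image classification sketch that follows the tips above. The apple/mobilevit-xx-small checkpoint (the same one used in the TensorFlow Lite example below) and the sample image URL are illustrative choices; [MobileViTImageProcessor] takes care of the BGR conversion mentioned earlier.
thon
from transformers import MobileViTImageProcessor, MobileViTForImageClassification
from PIL import Image
import requests
import torch

checkpoint = "apple/mobilevit-xx-small"  # assumed checkpoint, also used in the TFLite example below
image_processor = MobileViTImageProcessor.from_pretrained(checkpoint)  # handles resizing and BGR ordering
model = MobileViTForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])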
You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic segmentation) to generate a
TensorFlow Lite model: |
thon
from transformers import TFMobileViTForImageClassification
import tensorflow as tf

model_ckpt = "apple/mobilevit-xx-small"
model = TFMobileViTForImageClassification.from_pretrained(model_ckpt)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Enable default optimizations and allow TensorFlow ops that have no built-in TFLite equivalent.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

tflite_filename = model_ckpt.split("/")[-1] + ".tflite"
with open(tflite_filename, "wb") as f:
    f.write(tflite_model) |
The resulting model will be just about one MB in size, making it a good fit for mobile applications where resources and network
bandwidth can be constrained.
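To sanity-check the exported file, one possible way to run it with the TensorFlow Lite interpreter is sketched below. The filename matches the tflite_filename produced by the conversion snippet above, and the random dummy input only checks that shapes line up; it is not a meaningful prediction.
thon
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilevit-xx-small.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape the converted model expects.
dummy = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)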
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileViT.
[MobileViTForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
Semantic segmentation
- Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileViTConfig
[[autodoc]] MobileViTConfig
MobileViTFeatureExtractor
[[autodoc]] MobileViTFeatureExtractor
- call
- post_process_semantic_segmentation
MobileViTImageProcessor
[[autodoc]] MobileViTImageProcessor
- preprocess
- post_process_semantic_segmentation |