pip install -U flash-attn --no-build-isolation
Make sure that your hardware is compatible with Flash Attention 2; read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Attention 2, refer to the snippet below:
thon
import torch
from transformers import OPTForCausalLM, GPT2Tokenizer
device = "cuda" # the device to load the model onto
model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
prompt = ("A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the "
"Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived "
"there?")
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
'A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived there?\nStatue: I have lived here for about a year.\nHuman: What is your favorite place to eat?\nStatue: I love' |
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using facebook/opt-2.7b checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using facebook/opt-350m checkpoint and the Flash Attention 2 version of the model using two different sequence lengths. |
OPTConfig
[[autodoc]] OPTConfig
OPTModel
[[autodoc]] OPTModel
- forward
OPTForCausalLM
[[autodoc]] OPTForCausalLM
- forward
OPTForSequenceClassification
[[autodoc]] OPTForSequenceClassification
- forward
OPTForQuestionAnswering
[[autodoc]] OPTForQuestionAnswering
- forward
TFOPTModel
[[autodoc]] TFOPTModel
- call
TFOPTForCausalLM
[[autodoc]] TFOPTForCausalLM
- call
FlaxOPTModel
[[autodoc]] FlaxOPTModel
- call
FlaxOPTForCausalLM
[[autodoc]] FlaxOPTForCausalLM
- call |
T5v1.1
Overview
T5v1.1 was released in the google-research/text-to-text-transfer-transformer
repository by Colin Raffel et al. It's an improved version of the original T5 model.
This model was contributed by patrickvonplaten. The original code can be
found here.
Usage tips
One can directly plug in the weights of T5v1.1 into a T5 model, like so:
thon
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
T5 Version 1.1 includes the following improvements compared to the original T5 model:
GEGLU activation in the feed-forward hidden layer, rather than ReLU. See this paper.
Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
Pre-trained on C4 only without mixing in the downstream tasks.
No parameter sharing between the embedding and classifier layer.
"xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger d_model and smaller
num_heads and d_ff. |
Note: T5 Version 1.1 was only pre-trained on C4 excluding any supervised
training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5
model. Since T5v1.1 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Google has released the following variants:
google/t5-v1_1-small
google/t5-v1_1-base
google/t5-v1_1-large
google/t5-v1_1-xl
google/t5-v1_1-xxl.
Refer to T5's documentation page for all API reference, tips, code examples and notebooks. |
ViTMAE
Overview
The ViTMAE model was proposed in Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li,
Piotr Dollár, Ross Girshick. The paper shows that, by pre-training a Vision Transformer (ViT) to reconstruct pixel values for masked patches, one can get results after
fine-tuning that outperform supervised pre-training.
The abstract from the paper is the following:
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the
input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates
only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask
tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs
enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity
models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream
tasks outperforms supervised pre-training and shows promising scaling behavior.
MAE architecture. Taken from the original paper.
This model was contributed by nielsr. TensorFlow version of the model was contributed by sayakpaul and
ariG23498 (equal contribution). The original code can be found here.
Usage tips |
MAE (masked autoencoding) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is relatively simple:
by masking a large portion (75%) of the image patches, the model must reconstruct raw pixel values. One can use [ViTMAEForPreTraining] for this purpose.
After pre-training, one "throws away" the decoder used to reconstruct pixels, and one uses the encoder for fine-tuning/linear probing. This means that after
fine-tuning, one can directly plug in the weights into a [ViTForImageClassification].
One can use [ViTImageProcessor] to prepare images for the model. See the code examples for more info.
Note that the encoder of MAE is only used to encode the visual patches. The encoded patches are then concatenated with mask tokens, which the decoder (which also
consists of Transformer blocks) takes as input. Each mask token is a shared, learned vector that indicates the presence of a missing patch to be predicted. Fixed
sin/cos position embeddings are added both to the input of the encoder and the decoder.
For a visual understanding of how MAEs work you can check out this post. |
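As a minimal illustration of the tips above, the sketch below runs a forward pass of [ViTMAEForPreTraining] on a single image and reads out the reconstruction loss and the random patch mask. It assumes the facebook/vit-mae-base checkpoint; any ViTMAE checkpoint should work the same way.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

# assumes the facebook/vit-mae-base checkpoint
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

loss = outputs.loss  # pixel reconstruction loss on the masked patches
mask = outputs.mask  # binary mask indicating which patches were masked (1) or kept (0)
```

After pre-training, the encoder weights can then be loaded into [ViTForImageClassification] for fine-tuning or linear probing, as described above.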
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMAE.
[ViTMAEForPreTraining] is supported by this example script, allowing you to pre-train the model from scratch/further pre-train the model on custom data.
A notebook that illustrates how to visualize reconstructed pixel values with [ViTMAEForPreTraining] can be found here. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTMAEConfig
[[autodoc]] ViTMAEConfig
ViTMAEModel
[[autodoc]] ViTMAEModel
- forward
ViTMAEForPreTraining
[[autodoc]] transformers.ViTMAEForPreTraining
- forward
TFViTMAEModel
[[autodoc]] TFViTMAEModel
- call
TFViTMAEForPreTraining
[[autodoc]] transformers.TFViTMAEForPreTraining
- call |
I-BERT
Overview
The I-BERT model was proposed in I-BERT: Integer-only BERT Quantization by
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It's a quantized version of RoBERTa running
inference up to four times faster.
The abstract from the paper is the following:
Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language
Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for
efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this,
previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot
efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM
processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes
the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for
nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT
inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using
RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to
the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for
INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has
been open-sourced.
This model was contributed by kssteven. The original code can be found here.
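The snippet below is a minimal sketch of extracting features with I-BERT. It assumes the kssteven/ibert-roberta-base checkpoint and the default full-precision mode; quantized inference is controlled via the quant_mode option of [IBertConfig].

```python
from transformers import AutoTokenizer, IBertModel

# assumes the kssteven/ibert-roberta-base checkpoint
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertModel.from_pretrained("kssteven/ibert-roberta-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```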
Resources |
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide |
IBertConfig
[[autodoc]] IBertConfig
IBertModel
[[autodoc]] IBertModel
- forward
IBertForMaskedLM
[[autodoc]] IBertForMaskedLM
- forward
IBertForSequenceClassification
[[autodoc]] IBertForSequenceClassification
- forward
IBertForMultipleChoice
[[autodoc]] IBertForMultipleChoice
- forward
IBertForTokenClassification
[[autodoc]] IBertForTokenClassification
- forward
IBertForQuestionAnswering
[[autodoc]] IBertForQuestionAnswering
- forward |
Decision Transformer
Overview
The Decision Transformer model was proposed in Decision Transformer: Reinforcement Learning via Sequence Modeling
by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
The abstract from the paper is the following:
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem.
This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances
in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that
casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked
Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our
Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity,
Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on
Atari, OpenAI Gym, and Key-to-Door tasks.
This version of the model is for tasks where the state is a vector.
This model was contributed by edbeeching. The original code can be found here.
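A minimal sketch of a forward pass is shown below. It assumes the edbeeching/decision-transformer-gym-hopper-medium checkpoint and feeds dummy tensors purely to illustrate the expected input shapes (vector states, actions, returns-to-go, and timesteps).

```python
import torch
from transformers import DecisionTransformerModel

# assumes the edbeeching/decision-transformer-gym-hopper-medium checkpoint
model = DecisionTransformerModel.from_pretrained("edbeeching/decision-transformer-gym-hopper-medium")
model.eval()

# dummy trajectory of 20 timesteps with vector states
batch_size, seq_len = 1, 20
state_dim, act_dim = model.config.state_dim, model.config.act_dim

states = torch.randn(batch_size, seq_len, state_dim)
actions = torch.zeros(batch_size, seq_len, act_dim)
rewards = torch.zeros(batch_size, seq_len, 1)
returns_to_go = torch.ones(batch_size, seq_len, 1)
timesteps = torch.arange(seq_len).unsqueeze(0)
attention_mask = torch.ones(batch_size, seq_len, dtype=torch.long)

with torch.no_grad():
    outputs = model(
        states=states,
        actions=actions,
        rewards=rewards,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
    )

action_preds = outputs.action_preds  # predicted action for each timestep
```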
DecisionTransformerConfig
[[autodoc]] DecisionTransformerConfig
DecisionTransformerGPT2Model
[[autodoc]] DecisionTransformerGPT2Model
- forward
DecisionTransformerModel
[[autodoc]] DecisionTransformerModel
- forward |
Pegasus
Overview
The Pegasus model was proposed in PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.
According to the abstract, |
Pegasus' pretraining task is intentionally similar to summarization: important sentences are removed/masked from an
input document and are generated together as one output sequence from the remaining sentences, similar to an
extractive summary.
Pegasus achieves SOTA summarization performance on all 12 downstream tasks, as measured by ROUGE and human eval.
This model was contributed by sshleifer. The Authors' code can be found here.
Usage tips
Sequence-to-sequence model with the same encoder-decoder model architecture as BART. Pegasus is pre-trained jointly on two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization specific pretraining objective, called Gap Sentence Generation (GSG).
MLM: encoder input tokens are randomly replaced by a mask token and have to be predicted by the encoder (like in BERT)
GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, which has a causal mask to hide future words like a regular auto-regressive transformer decoder.
FP16 is not supported (help/ideas on this appreciated!).
The adafactor optimizer is recommended for pegasus fine-tuning.
Checkpoints
All the checkpoints are fine-tuned for summarization, besides
pegasus-large, from which the other checkpoints are fine-tuned:
Each checkpoint is 2.2 GB on disk and 568M parameters.
FP16 is not supported (help/ideas on this appreciated!).
Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU.
Full replication results and correctly pre-processed data can be found in this Issue.
Distilled checkpoints are described in this paper.
Implementation Notes |
All models are transformer encoder-decoders with 16 layers in each component.
The implementation is completely inherited from [BartForConditionalGeneration]
Some key configuration differences:
static, sinusoidal position embeddings
the model starts generating with pad_token_id (which has 0 token_embedding) as the prefix.
more beams are used (num_beams=8)
All pretrained pegasus checkpoints are the same besides three attributes: tokenizer.model_max_length (maximum
input size), max_length (the maximum number of tokens to generate) and length_penalty.
The code to convert checkpoints trained in the author's repo can be
found in convert_pegasus_tf_to_pytorch.py. |
Usage Example
thon
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
] |
model_name = "google/pegasus-xsum"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert (
tgt_text[0]
== "California's largest electricity provider has turned off power to hundreds of thousands of customers."
) |
Resources
Script to fine-tune pegasus
on the XSUM dataset. Data download instructions at examples/pytorch/summarization/.
Causal language modeling task guide
Translation task guide
Summarization task guide
PegasusConfig
[[autodoc]] PegasusConfig
PegasusTokenizer
warning: add_tokens does not work at the moment.
[[autodoc]] PegasusTokenizer
PegasusTokenizerFast
[[autodoc]] PegasusTokenizerFast
PegasusModel
[[autodoc]] PegasusModel
- forward
PegasusForConditionalGeneration
[[autodoc]] PegasusForConditionalGeneration
- forward
PegasusForCausalLM
[[autodoc]] PegasusForCausalLM
- forward |
TFPegasusModel
[[autodoc]] TFPegasusModel
- call
TFPegasusForConditionalGeneration
[[autodoc]] TFPegasusForConditionalGeneration
- call
FlaxPegasusModel
[[autodoc]] FlaxPegasusModel
- call
- encode
- decode
FlaxPegasusForConditionalGeneration
[[autodoc]] FlaxPegasusForConditionalGeneration
- call
- encode
- decode |
PoolFormer
Overview
The PoolFormer model was proposed in MetaFormer is Actually What You Need for Vision by Sea AI Labs. Instead of designing a complicated token mixer to achieve SOTA performance, the goal of this work is to demonstrate that the competence of transformer models largely stems from the general architecture, MetaFormer.
The abstract from the paper is the following:
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
The figure below illustrates the architecture of PoolFormer. Taken from the original paper. |
This model was contributed by heytanay. The original code can be found here.
Usage tips
PoolFormer has a hierarchical architecture, where instead of Attention, a simple Average Pooling layer is present. All checkpoints of the model can be found on the hub.
One can use [PoolFormerImageProcessor] to prepare images for the model (see the sketch after the table below).
Like most models, PoolFormer comes in different sizes, the details of which can be found in the table below.
| Model variant | Depths | Hidden sizes | Params (M) | ImageNet-1k Top 1 |
| :---------------: | ------------- | ------------------- | :------------: | :-------------------: |
| s12 | [2, 2, 6, 2] | [64, 128, 320, 512] | 12 | 77.2 |
| s24 | [4, 4, 12, 4] | [64, 128, 320, 512] | 21 | 80.3 |
| s36 | [6, 6, 18, 6] | [64, 128, 320, 512] | 31 | 81.4 |
| m36 | [6, 6, 18, 6] | [96, 192, 384, 768] | 56 | 82.1 |
| m48 | [8, 8, 24, 8] | [96, 192, 384, 768] | 73 | 82.5 |
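As a minimal sketch of the preprocessing mentioned above, the snippet below classifies an image with the s12 variant; it assumes the sail/poolformer_s12 checkpoint.

```python
import torch
import requests
from PIL import Image
from transformers import PoolFormerImageProcessor, PoolFormerForImageClassification

# assumes the sail/poolformer_s12 checkpoint; other sizes follow the same pattern
processor = PoolFormerImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```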
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PoolFormer.
[PoolFormerForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
PoolFormerConfig
[[autodoc]] PoolFormerConfig
PoolFormerFeatureExtractor
[[autodoc]] PoolFormerFeatureExtractor
- call
PoolFormerImageProcessor
[[autodoc]] PoolFormerImageProcessor
- preprocess
PoolFormerModel
[[autodoc]] PoolFormerModel
- forward
PoolFormerForImageClassification
[[autodoc]] PoolFormerForImageClassification
- forward |
YOSO
Overview
The YOSO model was proposed in You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling
by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention
via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with
a single hash.
The abstract from the paper is the following:
Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is
the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically
on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling
attention mechanism based on Locality Sensitive Hashing (LSH), decreases the quadratic complexity of such models to linear.
We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random
variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant).
This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of
LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence
length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark,
for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable
speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL
This model was contributed by novice03. The original code can be found here.
Usage tips |
The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times
in parallel on a GPU.
The kernels provide a fast_hash function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these
hash codes, the lsh_cumulation function approximates self-attention via LSH-based Bernoulli sampling.
To use the custom kernels, the user should set config.use_expectation = False. To ensure that the kernels are compiled successfully,
the user must install the correct version of PyTorch and cudatoolkit. By default, config.use_expectation = True, which uses YOSO-E and
does not require compiling CUDA kernels. |
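The snippet below is a minimal sketch of toggling between the two paths; it assumes the uw-madison/yoso-4096 checkpoint.

```python
from transformers import AutoTokenizer, YosoConfig, YosoModel

# assumes the uw-madison/yoso-4096 checkpoint
config = YosoConfig.from_pretrained("uw-madison/yoso-4096")

# default: use_expectation=True (YOSO-E), no custom CUDA kernels needed.
# Uncomment the next line to use the LSH-based CUDA kernels instead
# (requires a compatible PyTorch + cudatoolkit install and a CUDA-capable GPU).
# config.use_expectation = False

tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoModel.from_pretrained("uw-madison/yoso-4096", config=config)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```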
YOSO Attention Algorithm. Taken from the original paper.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide |
YosoConfig
[[autodoc]] YosoConfig
YosoModel
[[autodoc]] YosoModel
- forward
YosoForMaskedLM
[[autodoc]] YosoForMaskedLM
- forward
YosoForSequenceClassification
[[autodoc]] YosoForSequenceClassification
- forward
YosoForMultipleChoice
[[autodoc]] YosoForMultipleChoice
- forward
YosoForTokenClassification
[[autodoc]] YosoForTokenClassification
- forward
YosoForQuestionAnswering
[[autodoc]] YosoForQuestionAnswering
- forward |
Trajectory Transformer
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0. |
Overview
The Trajectory Transformer model was proposed in Offline Reinforcement Learning as One Big Sequence Modeling Problem by Michael Janner, Qiyang Li, Sergey Levine.
The abstract from the paper is the following:
Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models,
leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence
modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards.
Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well
in other domains, such as natural-language processing, can also provide effective solutions to the RL problem.
To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture
to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as sequence
modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common
in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction,
imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with
existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.
This model was contributed by CarlCochet. The original code can be found here.
Usage tips
This Transformer is used for deep reinforcement learning. To use it, you need to create sequences from
actions, states and rewards from all previous timesteps. This model will treat all these elements together
as one big sequence (a trajectory).
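A minimal sketch of a forward pass with dummy, already-discretized trajectory tokens is shown below; it assumes the CarlCochet/trajectory-transformer-halfcheetah-medium-v2 checkpoint.

```python
import torch
from transformers import TrajectoryTransformerModel

# assumes the CarlCochet/trajectory-transformer-halfcheetah-medium-v2 checkpoint
model = TrajectoryTransformerModel.from_pretrained(
    "CarlCochet/trajectory-transformer-halfcheetah-medium-v2"
)
model.eval()

# dummy batch of discretized (state, action, reward, return) tokens forming one trajectory
trajectories = torch.randint(0, model.config.vocab_size, (1, 25), dtype=torch.long)

with torch.no_grad():
    outputs = model(trajectories)

logits = outputs.logits  # next-token scores over the discretized trajectory vocabulary
```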
TrajectoryTransformerConfig
[[autodoc]] TrajectoryTransformerConfig
TrajectoryTransformerModel
[[autodoc]] TrajectoryTransformerModel
- forward |
StableLM
Overview
StableLM 3B 4E1T was proposed in StableLM 3B 4E1T: Technical Report by Stability AI and is the first model in a series of multi-epoch pre-trained language models.
Model Details
StableLM 3B 4E1T is a decoder-only base language model pre-trained on 1 trillion tokens of diverse English and code datasets for four epochs.
The model architecture is transformer-based with partial Rotary Position Embeddings, SwiGLU activation, LayerNorm, etc.
We also provide StableLM Zephyr 3B, an instruction fine-tuned version of the model that can be used for chat-based applications.
Usage Tips |
The architecture is similar to LLaMA but with RoPE applied to 25% of head embedding dimensions, LayerNorm instead of RMSNorm, and optional QKV bias terms.
StableLM 3B 4E1T-based models use the same tokenizer as [GPTNeoXTokenizerFast].
StableLM 3B 4E1T and StableLM Zephyr 3B can be found on the Hugging Face Hub.
The following code snippet demonstrates how to use StableLM 3B 4E1T for inference:
thon |
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t")
model.to(device)
model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_length=32, do_sample=True)
responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
responses
['The weather is always wonderful in Santa Barbara and, for visitors hoping to make the move to our beautiful seaside city, this town offers plenty of great places to'] |
Combining StableLM and Flash Attention 2
First, make sure to install the latest version of Flash Attention v2.
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash-Attention 2. Read more about it in the official documentation of the flash-attn repository. Note: you must load your model in half-precision (e.g. torch.bfloat16).
Now, to run the model with Flash Attention 2, refer to the snippet below:
thon |
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
model.to(device)
model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_length=32, do_sample=True)
responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
responses
['The weather is always wonderful in Santa Barbara and, for visitors hoping to make the move to our beautiful seaside city, this town offers plenty of great places to'] |
StableLmConfig
[[autodoc]] StableLmConfig
StableLmModel
[[autodoc]] StableLmModel
- forward
StableLmForCausalLM
[[autodoc]] StableLmForCausalLM
- forward
StableLmForSequenceClassification
[[autodoc]] StableLmForSequenceClassification
- forward |
BERTweet
Overview
The BERTweet model was proposed in BERTweet: A pre-trained language model for English Tweets by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having
the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et
al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al.,
2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks:
Part-of-speech tagging, Named-entity recognition and text classification.
This model was contributed by dqnguyen. The original code can be found here.
Usage example
thon |
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
# For transformers v4.x+:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
# For transformers v3.x:
# tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = bertweet(input_ids) # Models outputs are now tuples
# With TensorFlow 2.0+:
from transformers import TFAutoModel
bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base") |
This implementation is the same as BERT, except for tokenization method. Refer to BERT documentation for
API reference information.
BertweetTokenizer
[[autodoc]] BertweetTokenizer |
BridgeTower
Overview
The BridgeTower model was proposed in BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a
bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder, thus achieving remarkable performance on various downstream tasks with almost negligible additional parameters and computational costs.
This paper has been accepted to the AAAI'23 conference.
The abstract from the paper is the following:
Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years.
Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder.
Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder.
This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks.
In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs.
Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. |
BridgeTower architecture. Taken from the original paper.
This model was contributed by Anahita Bhiwandiwalla, Tiep Le and Shaoyen Tseng. The original code can be found here.
Usage tips and examples
BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge layers.
The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder.
In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture.
The [BridgeTowerProcessor] wraps [RobertaTokenizer] and [BridgeTowerImageProcessor] into a single instance to both
encode the text and prepare the images respectively.
The following example shows how to run contrastive learning using [BridgeTowerProcessor] and [BridgeTowerForContrastiveLearning].
thon |
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
# forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs |
The following example shows how to run image-text retrieval using [BridgeTowerProcessor] and [BridgeTowerForImageAndTextRetrieval].
thon |
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs.logits[0, 1].item() |
The following example shows how to run masked language modeling using [BridgeTowerProcessor] and [BridgeTowerForMaskedLM].
thon |
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
.a cat looking out of the window. |
Tips:
This implementation of BridgeTower uses [RobertaTokenizer] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings.
Checkpoints for pre-trained BridgeTower-base and BridgeTower masked language modeling and image-text matching are released.
Please refer to Table 5 for BridgeTower's performance on Image Retrieval and other downstream tasks.
The PyTorch version of this model is only available in torch 1.10 and higher. |
BridgeTowerConfig
[[autodoc]] BridgeTowerConfig
BridgeTowerTextConfig
[[autodoc]] BridgeTowerTextConfig
BridgeTowerVisionConfig
[[autodoc]] BridgeTowerVisionConfig
BridgeTowerImageProcessor
[[autodoc]] BridgeTowerImageProcessor
- preprocess
BridgeTowerProcessor
[[autodoc]] BridgeTowerProcessor
- call
BridgeTowerModel
[[autodoc]] BridgeTowerModel
- forward
BridgeTowerForContrastiveLearning
[[autodoc]] BridgeTowerForContrastiveLearning
- forward
BridgeTowerForMaskedLM
[[autodoc]] BridgeTowerForMaskedLM
- forward
BridgeTowerForImageAndTextRetrieval
[[autodoc]] BridgeTowerForImageAndTextRetrieval
- forward |
BART
Overview
The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation,
Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019.
According to the abstract, |
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a
left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme,
where spans of text are replaced with a single mask token.
BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It
matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new
state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains
of up to 6 ROUGE. |
This model was contributed by sshleifer. The authors' code can be found here.
Usage tips:
BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
Sequence-to-sequence model with an encoder and a decoder. Encoder is fed a corrupted version of the tokens, decoder is fed the original tokens (but has a mask to hide the future words like a regular transformers decoder). A composition of the following transformations is applied on the pretraining tasks for the encoder:
mask random tokens (like in BERT)
delete random tokens
mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
permute sentences
rotate the document to make it start at a specific token
Implementation Notes |
Bart doesn't use token_type_ids for sequence classification. Use [BartTokenizer] or
[~BartTokenizer.encode] to get the proper splitting.
The forward pass of [BartModel] will create the decoder_input_ids if they are not passed.
This is different than some other modeling APIs. A typical use case of this feature is mask filling.
Model predictions are intended to be identical to the original implementation when
forced_bos_token_id=0. This only works, however, if the string you pass to
[fairseq.encode] starts with a space.
[~generation.GenerationMixin.generate] should be used for conditional generation tasks like
summarization; see the example in that docstring.
Models that load the facebook/bart-large-cnn weights will not have a mask_token_id, or be able to perform
mask-filling tasks. |
Mask Filling
The facebook/bart-base and facebook/bart-large checkpoints can be used to fill multi-token masks.
thon
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
"UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
] |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker.
A notebook on how to finetune BART for summarization with fastai using blurr. 🌎
A notebook on how to finetune BART for summarization in two languages with Trainer class. 🌎
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
[FlaxBartForConditionalGeneration] is supported by this example script.
An example of how to train [BartForConditionalGeneration] with a Hugging Face datasets object can be found in this forum discussion
Summarization chapter of the 🤗 Hugging Face course.
Summarization task guide |
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
[FlaxBartForConditionalGeneration] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide |
A notebook on how to finetune mBART using Seq2SeqTrainer for Hindi to English translation. 🌎
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
Translation task guide |
See also:
- Text classification task guide
- Question answering task guide
- Causal language modeling task guide
- Distilled checkpoints are described in this paper.
BartConfig
[[autodoc]] BartConfig
- all
BartTokenizer
[[autodoc]] BartTokenizer
- all
BartTokenizerFast
[[autodoc]] BartTokenizerFast
- all |
BartModel
[[autodoc]] BartModel
- forward
BartForConditionalGeneration
[[autodoc]] BartForConditionalGeneration
- forward
BartForSequenceClassification
[[autodoc]] BartForSequenceClassification
- forward
BartForQuestionAnswering
[[autodoc]] BartForQuestionAnswering
- forward
BartForCausalLM
[[autodoc]] BartForCausalLM
- forward |
TFBartModel
[[autodoc]] TFBartModel
- call
TFBartForConditionalGeneration
[[autodoc]] TFBartForConditionalGeneration
- call
TFBartForSequenceClassification
[[autodoc]] TFBartForSequenceClassification
- call |
FlaxBartModel
[[autodoc]] FlaxBartModel
- call
- encode
- decode
FlaxBartForConditionalGeneration
[[autodoc]] FlaxBartForConditionalGeneration
- call
- encode
- decode
FlaxBartForSequenceClassification
[[autodoc]] FlaxBartForSequenceClassification
- call
- encode
- decode
FlaxBartForQuestionAnswering
[[autodoc]] FlaxBartForQuestionAnswering
- call
- encode
- decode
FlaxBartForCausalLM
[[autodoc]] FlaxBartForCausalLM
- call |
TAPEX
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0. |
Overview
The TAPEX model was proposed in TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu,
Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after
which it can be fine-tuned to answer natural language questions related to tabular data, as well as performing table fact checking.
TAPEX has been fine-tuned on several datasets:
- SQA (Sequential Question Answering by Microsoft)
- WTQ (Wiki Table Questions by Stanford University)
- WikiSQL (by Salesforce)
- TabFact (by UCSB NLP Lab).
The abstract from the paper is the following:
Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is
still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we
propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically
synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL
executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that
TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements
on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy
to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs
and to achieve new state-of-the-art results on various downstream tasks.
Usage tips |
TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model.
TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact.
Sentences + tables are presented to the model as sentence + " " + linearized table. The linearized table has the following format:
col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : .
TAPEX has its own tokenizer, which allows one to prepare all data for the model easily. One can pass Pandas DataFrames and strings to the tokenizer,
and it will automatically create the input_ids and attention_mask (as shown in the usage examples below). |
Usage: inference
Below, we illustrate how to use TAPEX for table question answering. As one can see, one can directly plug in the weights of TAPEX into a BART model.
We use the Auto API, which will automatically instantiate the appropriate tokenizer ([TapexTokenizer]) and model ([BartForConditionalGeneration]) for us,
based on the configuration file of the checkpoint on the hub.
thon |
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq")
# prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
question = "how many movies does Leonardo Di Caprio have?"
encoding = tokenizer(table, question, return_tensors="pt")
# let the model generate an answer autoregressively
outputs = model.generate(**encoding)
# decode back to text
predicted_answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(predicted_answer)
53 |
Note that [TapexTokenizer] also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table
and multiple questions, or a batch of a single query and multiple tables. Let's illustrate this:
thon |
# prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
questions = [
"how many movies does Leonardo Di Caprio have?",
"which actor has 69 movies?",
"what's the first name of the actor who has 87 movies?",
]
encoding = tokenizer(table, questions, padding=True, return_tensors="pt")
# let the model generate an answer autoregressively
outputs = model.generate(**encoding)
# decode back to text
tokenizer.batch_decode(outputs, skip_special_tokens=True)
[' 53', ' george clooney', ' brad pitt'] |
In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents
of a table), one can instantiate a [BartForSequenceClassification] model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important
benchmark for table fact checking (it achieves 84% accuracy). The code example below again leverages the Auto API.
thon |
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
# prepare table + sentence
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
sentence = "George Clooney has 30 movies"
encoding = tokenizer(table, sentence, return_tensors="pt")
# forward pass
outputs = model(**encoding)
# print prediction
predicted_class_idx = outputs.logits[0].argmax(dim=0).item()
print(model.config.id2label[predicted_class_idx])
Refused |
TAPEX architecture is the same as BART, except for tokenization. Refer to BART documentation for information on
configuration classes and their parameters. TAPEX-specific tokenizer is documented below.
TapexTokenizer
[[autodoc]] TapexTokenizer
- call
- save_vocabulary |
EfficientFormer
Overview
The EfficientFormer model was proposed in EfficientFormer: Vision Transformers at MobileNet Speed
by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a
dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object
detection and semantic segmentation.
The abstract from the paper is the following:
Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks.
However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally
times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly
challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation
complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still
unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance?
To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs.
Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm.
Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer.
Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices.
Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on
iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model,
EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can
reach extremely low latency on mobile devices while maintaining high performance.
This model was contributed by novice03 and Bearnardd.
The original code can be found here. The TensorFlow version of this model was added by D-Roberts.
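As a minimal usage sketch, the snippet below classifies an image with the L1 variant; it assumes the snap-research/efficientformer-l1-300 checkpoint.

```python
import torch
import requests
from PIL import Image
from transformers import EfficientFormerImageProcessor, EfficientFormerForImageClassification

# assumes the snap-research/efficientformer-l1-300 checkpoint
processor = EfficientFormerImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = EfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```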
Documentation resources |
Image classification task guide
EfficientFormerConfig
[[autodoc]] EfficientFormerConfig
EfficientFormerImageProcessor
[[autodoc]] EfficientFormerImageProcessor
- preprocess
EfficientFormerModel
[[autodoc]] EfficientFormerModel
- forward
EfficientFormerForImageClassification
[[autodoc]] EfficientFormerForImageClassification
- forward
EfficientFormerForImageClassificationWithTeacher
[[autodoc]] EfficientFormerForImageClassificationWithTeacher
- forward |
TFEfficientFormerModel
[[autodoc]] TFEfficientFormerModel
- call
TFEfficientFormerForImageClassification
[[autodoc]] TFEfficientFormerForImageClassification
- call
TFEfficientFormerForImageClassificationWithTeacher
[[autodoc]] TFEfficientFormerForImageClassificationWithTeacher
- call |
MADLAD-400
Overview
MADLAD-400 models were released in the paper MADLAD-400: A Multilingual And Document-Level Large Audited Dataset.
The abstract from the paper is the following:
We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss
the limitations revealed by self-auditing MADLAD-400, and the role data auditing
had in the dataset creation process. We then train and release a 10.7B-parameter
multilingual machine translation model on 250 billion tokens covering over 450
languages using publicly available data, and find that it is competitive with models
that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot
translation. We make the baseline models available to the research community.
This model was added by Juarez Bochi. The original checkpoints can be found here.
This is a machine translation model that supports many low-resource languages, and that is competitive with models that are significantly larger.
One can directly use MADLAD-400 weights without finetuning the model:
thon |
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/madlad400-3b-mt")
tokenizer = AutoTokenizer.from_pretrained("google/madlad400-3b-mt")
inputs = tokenizer("<2pt> I love pizza!", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Eu amo pizza!']
Google has released the following variants:
google/madlad400-3b-mt
google/madlad400-7b-mt
google/madlad400-7b-mt-bt
google/madlad400-10b-mt
The original checkpoints can be found here.
Refer to T5's documentation page for all API references, code examples, and notebooks. For more details regarding training and evaluation of the MADLAD-400, refer to the model card. |
Mamba
Overview
The Mamba model was proposed in Mamba: Linear-Time Sequence Modeling with Selective State Spaces by Albert Gu and Tri Dao.
This model is a new paradigm architecture based on state-space-models. You can read more about the intuition behind these here.
The abstract from the paper is the following:
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5Γ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
Tips: |
Mamba is a new state space model architecture that rivals the classic Transformers. It is based on the line of progress on structured state space models, with an efficient hardware-aware design and implementation in the spirit of FlashAttention.
Mamba stacks mixer layers, which are the equivalent of Attention layers. The core logic of mamba is held in the MambaMixer class.
Two implementations coexist: one is optimized and uses fast CUDA kernels, while the other is naive but can run on any device!
The current implementation leverages the original CUDA kernels: the equivalent of flash attention for Mamba is hosted in the mamba-ssm and causal_conv1d repositories. Make sure to install them if your hardware supports them (see the install command below)!
Contributions to make the naive path faster are welcome 🤗
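Assuming a supported NVIDIA GPU and a working CUDA toolchain, both kernels can typically be installed from PyPI. The package names below are taken from the upstream repositories and are an assumption that may go stale; check each repository's README for the authoritative instructions.
pip install mamba-ssm causal-conv1d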
This model was contributed by ArthurZ.
The original code can be found here.
Usage
A simple generation example:
thon
from transformers import MambaForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/mamba-130m")
tokenizer.pad_token = tokenizer.eos_token  # the tokenizer has no dedicated pad token, so reuse EOS
model = MambaForCausalLM.from_pretrained("ArthurZ/mamba-130m", vocab_size=50280, num_hidden_layers=24, torch_dtype=torch.float32)
model.config.use_cache = True  # cache the convolution and SSM states between generation steps
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out)) |
Peft finetuning
The slow (naive) version is not very stable for training, and the fast one needs the model weights to be kept in float32!
python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
model_id = "ArthurZ/mamba-2.8b"
tokenizer = AutoTokenizer.from_pretrained(model_id, pad_token="<s>")
model = AutoModelForCausalLM.from_pretrained(model_id)
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules="all-linear",
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
MambaConfig
[[autodoc]] MambaConfig
MambaModel
[[autodoc]] MambaModel
- forward
MambaForCausalLM
[[autodoc]] MambaForCausalLM
- forward |
Convolutional Vision Transformer (CvT)
Overview
The CvT model was proposed in CvT: Introducing Convolutions to Vision Transformers by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs.
The abstract from the paper is the following:
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT)
in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through
two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer
block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs)
to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention,
global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves
state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition,
performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on
ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding,
a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.
This model was contributed by anugunj. The original code can be found here.
Usage tips |
CvT models are regular Vision Transformers, but trained with convolutions. They outperform the original model (ViT) when fine-tuned on ImageNet-1K and CIFAR-100.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace [ViTFeatureExtractor] by [AutoImageProcessor] and [ViTForImageClassification] by [CvtForImageClassification]); a minimal inference sketch is also shown right after this list.
The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes). |
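As a quick complement to the tips above, the sketch below runs single-image classification with a CvT checkpoint. The checkpoint name microsoft/cvt-13 (assumed to be fine-tuned on ImageNet-1k) and the sample image URL are illustrative assumptions; any CvT image-classification checkpoint should work the same way.
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, CvtForImageClassification

# Load a sample image (assumption: any RGB image can be used here)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "microsoft/cvt-13" is assumed to be an ImageNet-1k fine-tuned checkpoint; swap in the variant you need
processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])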
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT.
[CvtForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
CvtConfig
[[autodoc]] CvtConfig |
CvtModel
[[autodoc]] CvtModel
- forward
CvtForImageClassification
[[autodoc]] CvtForImageClassification
- forward
TFCvtModel
[[autodoc]] TFCvtModel
- call
TFCvtForImageClassification
[[autodoc]] TFCvtForImageClassification
- call |
DINOv2
Overview
The DINOv2 model was proposed in DINOv2: Learning Robust Visual Features without Supervision by
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
DINOv2 is an upgrade of DINO, a self-supervised method applied on Vision Transformers. This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.
The abstract from the paper is the following:
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
The model can be traced using torch.jit.trace, which leverages JIT compilation to optimize the model and make it faster to run. Note that this still produces some mismatched elements; the difference between the original model and the traced model is on the order of 1e-4.
thon
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs[0]
# We have to force return_dict=False for tracing
model.config.return_dict = False
with torch.no_grad():
traced_model = torch.jit.trace(model, [inputs.pixel_values])
traced_outputs = traced_model(inputs.pixel_values)
print((last_hidden_states - traced_outputs[0]).abs().max()) |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DINOv2.
Demo notebooks for DINOv2 can be found here. 🌎
[Dinov2ForImageClassification] is supported by this example script and notebook (a minimal usage sketch is also included after this list).
See also: Image classification task guide |
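For a quick start alongside these resources, the sketch below classifies an image with a DINOv2 backbone topped by a linear head. The checkpoint name facebook/dinov2-base-imagenet1k-1-layer is an assumption (verify the exact identifier on the Hub), and the sample image URL is only illustrative.
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Dinov2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Assumed checkpoint: DINOv2 base backbone with a linear classifier trained on ImageNet-1k
checkpoint = "facebook/dinov2-base-imagenet1k-1-layer"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Dinov2ForImageClassification.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])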