MobileViTModel
[[autodoc]] MobileViTModel
- forward
MobileViTForImageClassification
[[autodoc]] MobileViTForImageClassification
- forward
MobileViTForSemanticSegmentation
[[autodoc]] MobileViTForSemanticSegmentation
- forward
TFMobileViTModel
[[autodoc]] TFMobileViTModel
- call
TFMobileViTForImageClassification
[[autodoc]] TFMobileViTForImageClassification
- call
TFMobileViTForSemanticSegmentation
[[autodoc]] TFMobileViTForSemanticSegmentation
- call |
XLM
Overview
The XLM model was proposed in Cross-lingual Language Model Pretraining by
Guillaume Lample, Alexis Conneau. It's a transformer pretrained using one of the following objectives:
a causal language modeling (CLM) objective (next token prediction),
a masked language modeling (MLM) objective (BERT-like), or
a Translation Language Modeling (TLM) objective (an extension of BERT's MLM to multiple language inputs)
The abstract from the paper is the following:
Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding.
In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We
propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual
data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain
state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our
approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we
obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised
machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the
previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
This model was contributed by thomwolf. The original code can be found here.
Usage tips
XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to
select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).
XLM has multilingual checkpoints which leverage a specific lang parameter. Check out the multi-lingual page for more information (a short sketch is also given after the list below).
A transformer model trained on several languages. There are three different types of training for this model and the library provides checkpoints for all of them:
Causal language modeling (CLM), which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens that may span several documents in one of those languages.
Masked language modeling (MLM), which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens that may span several documents in one of those languages, with dynamic masking of the tokens.
A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both the surrounding context in language 1 and the context given by language 2.
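As a minimal sketch of the lang parameter mentioned above (assuming the FacebookAI/xlm-clm-enfr-1024 checkpoint; see the multi-lingual page for the full walkthrough), language ids are passed to the model through the langs tensor:

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024")

# the mapping from language name to id used by this checkpoint, e.g. {'en': 0, 'fr': 1}
print(tokenizer.lang2id)

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1
language_id = tokenizer.lang2id["en"]
langs = torch.tensor([language_id] * input_ids.shape[1]).view(1, -1)  # shape (batch_size, sequence_length)

outputs = model(input_ids, langs=langs)
```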
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XLMConfig
[[autodoc]] XLMConfig
XLMTokenizer
[[autodoc]] XLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XLM specific outputs
[[autodoc]] models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput |
XLMModel
[[autodoc]] XLMModel
- forward
XLMWithLMHeadModel
[[autodoc]] XLMWithLMHeadModel
- forward
XLMForSequenceClassification
[[autodoc]] XLMForSequenceClassification
- forward
XLMForMultipleChoice
[[autodoc]] XLMForMultipleChoice
- forward
XLMForTokenClassification
[[autodoc]] XLMForTokenClassification
- forward
XLMForQuestionAnsweringSimple
[[autodoc]] XLMForQuestionAnsweringSimple
- forward
XLMForQuestionAnswering
[[autodoc]] XLMForQuestionAnswering
- forward |
TFXLMModel
[[autodoc]] TFXLMModel
- call
TFXLMWithLMHeadModel
[[autodoc]] TFXLMWithLMHeadModel
- call
TFXLMForSequenceClassification
[[autodoc]] TFXLMForSequenceClassification
- call
TFXLMForMultipleChoice
[[autodoc]] TFXLMForMultipleChoice
- call
TFXLMForTokenClassification
[[autodoc]] TFXLMForTokenClassification
- call
TFXLMForQuestionAnsweringSimple
[[autodoc]] TFXLMForQuestionAnsweringSimple
- call |
LongT5
Overview
The LongT5 model was proposed in LongT5: Efficient Text-To-Text Transformer for Long Sequences
by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an
encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. LongT5 is an extension of
the T5 model, and it enables using one of two efficient attention mechanisms: (1) local attention, or (2)
transient-global attention.
The abstract from the paper is the following:
Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the
performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we
explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated
attention ideas from long-input transformers (ETC), and adopted pre-training strategies from summarization pre-training
(PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call {\em Transient Global}
(TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side-inputs. We are
able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on
question answering tasks.
This model was contributed by stancld.
The original code can be found here.
Usage tips
[LongT5ForConditionalGeneration] is an extension of [T5ForConditionalGeneration] exchanging the traditional
encoder self-attention layer with either efficient local attention or transient-global (tglobal) attention.
Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective
inspired by the pre-training of [PegasusForConditionalGeneration].
The LongT5 model is designed to work efficiently and very well on long-range sequence-to-sequence tasks where the
input sequence exceeds the commonly used 512 tokens. It is capable of handling input sequences up to 16,384 tokens long.
For Local Attention, the sparse sliding-window local attention operation allows a given token to attend only to r
tokens to the left and right of it (with r=127 by default). Local Attention does not introduce any new parameters
to the model. The complexity of the mechanism is linear in the input sequence length l: O(l*r).
Transient Global Attention is an extension of Local Attention. In addition, it allows each input token to
interact with all other tokens in the layer. This is achieved by splitting the input sequence into blocks of a fixed
length k (with k=16 by default). A global token for each block is then obtained by summing and normalizing the embeddings of every token
in the block. Thanks to this, the attention allows each token to attend both to nearby tokens, as in Local Attention, and
to every global token, as in standard global attention (transient refers to the fact that the global tokens
are constructed dynamically within each attention operation). As a consequence, TGlobal attention introduces
a few new parameters: global relative position biases and a layer normalization for the global tokens' embeddings.
The complexity of this mechanism is O(l(r + l/k)).
An example showing how to evaluate a fine-tuned LongT5 model on the pubmed dataset is shown below.
```python
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

dataset = load_dataset("scientific_papers", "pubmed", split="validation")
model = (
    LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
    .to("cuda")
    .half()
)
tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")

def generate_answers(batch):
    inputs_dict = tokenizer(
        batch["article"], max_length=16384, padding="max_length", truncation=True, return_tensors="pt"
    )
    input_ids = inputs_dict.input_ids.to("cuda")
    attention_mask = inputs_dict.attention_mask.to("cuda")
    output_ids = model.generate(input_ids, attention_mask=attention_mask, max_length=512, num_beams=2)
    batch["predicted_abstract"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    return batch

result = dataset.map(generate_answers, batched=True, batch_size=2)
rouge = evaluate.load("rouge")
rouge.compute(predictions=result["predicted_abstract"], references=result["abstract"])
```
Resources
Translation task guide
Summarization task guide
LongT5Config
[[autodoc]] LongT5Config
LongT5Model
[[autodoc]] LongT5Model
- forward
LongT5ForConditionalGeneration
[[autodoc]] LongT5ForConditionalGeneration
- forward
LongT5EncoderModel
[[autodoc]] LongT5EncoderModel
- forward
FlaxLongT5Model
[[autodoc]] FlaxLongT5Model
- call
- encode
- decode
FlaxLongT5ForConditionalGeneration
[[autodoc]] FlaxLongT5ForConditionalGeneration
- call
- encode
- decode |
CPM
Overview
The CPM model was proposed in CPM: A Large-scale Generative Chinese Pre-trained Language Model by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin,
Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen,
Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
The abstract from the paper is the following:
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3,
with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even
zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus
of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the
Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best
of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained
language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation,
cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many
NLP tasks in the settings of few-shot (even zero-shot) learning.
This model was contributed by canwenxu. The original implementation can be found
here: https://github.com/TsinghuaAI/CPM-Generate
CPM's architecture is the same as GPT-2's, except for the tokenization method. Refer to the GPT-2 documentation for
API reference information.
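As a hedged sketch of how the pieces fit together (assuming the TsinghuaAI/CPM-Generate checkpoint on the Hub; the CPM tokenizer additionally requires the jieba package):

```python
from transformers import CpmTokenizer, GPT2LMHeadModel

# CPM reuses the GPT-2 model class, only the tokenizer differs
tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = GPT2LMHeadModel.from_pretrained("TsinghuaAI/CPM-Generate")

inputs = tokenizer("清华大学", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```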
CpmTokenizer
[[autodoc]] CpmTokenizer
CpmTokenizerFast
[[autodoc]] CpmTokenizerFast |
PatchTST
Overview
The PatchTST model was proposed in A Time Series is Worth 64 Words: Long-term Forecasting with Transformers by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.
At a high level, the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer, which then outputs the prediction-length forecast via an appropriate head. The model is illustrated in the following figure:
The abstract from the paper is the following:
We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy.
This model was contributed by namctin, gsinthong, diepi, vijaye12, wmgifford, and kashif. The original code can be found here.
Usage tips
The model can also be used for time series classification and time series regression. See the respective [PatchTSTForClassification] and [PatchTSTForRegression] classes.
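As a rough, hedged sketch of the inputs PatchTST expects (a randomly initialized model built from a small hypothetical configuration; real use cases would load a pretrained checkpoint instead):

```python
import torch
from transformers import PatchTSTConfig, PatchTSTModel

# small illustrative configuration (values chosen arbitrarily for the sketch)
config = PatchTSTConfig(
    num_input_channels=3,
    context_length=64,
    patch_length=8,
    patch_stride=8,
)
model = PatchTSTModel(config)

# past_values: (batch_size, context_length, num_input_channels)
past_values = torch.randn(2, 64, 3)
outputs = model(past_values=past_values)
print(outputs.last_hidden_state.shape)  # encoder representations of the patched series
```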
Resources
A blog post explaining PatchTST in depth can be found here. The blog can also be opened in Google Colab.
PatchTSTConfig
[[autodoc]] PatchTSTConfig
PatchTSTModel
[[autodoc]] PatchTSTModel
- forward
PatchTSTForPrediction
[[autodoc]] PatchTSTForPrediction
- forward
PatchTSTForClassification
[[autodoc]] PatchTSTForClassification
- forward
PatchTSTForPretraining
[[autodoc]] PatchTSTForPretraining
- forward
PatchTSTForRegression
[[autodoc]] PatchTSTForRegression
- forward |
Longformer
Overview
The Longformer model was presented in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA.
This model was contributed by beltagy. The authors' code can be found here.
Usage tips
Since the Longformer is based on RoBERTa, it doesn't have token_type_ids. You don't need to indicate which
token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or
</s>).
A transformer model replacing the attention matrices by sparse matrices to go faster. Often, the local context (e.g., what are the two tokens to the left and right?) is enough to take action for a given token. Some preselected input tokens are still given global attention, but the attention matrix has far fewer parameters, resulting in a speed-up. See the local attention section for more information.
Longformer Self Attention
Longformer self attention employs self attention on both a "local" context and a "global" context. Most tokens only
attend "locally" to each other meaning that each token attends to its \(\frac{1}{2} w\) previous tokens and
\(\frac{1}{2} w\) succeeding tokens with \(w\) being the window length as defined in
config.attention_window. Note that config.attention_window can be of type List to define a
different \(w\) for each layer. A selected few tokens attend "globally" to all other tokens, as it is
conventionally done for all tokens in BertSelfAttention.
Note that "locally" and "globally" attending tokens are projected by different query, key and value matrices. Also note
that every "locally" attending token not only attends to tokens within its window \(w\), but also to all "globally"
attending tokens so that global attention is symmetric.
The user can define which tokens attend "locally" and which tokens attend "globally" by setting the tensor
global_attention_mask at run-time appropriately. All Longformer models employ the following logic for
global_attention_mask:
0: the token attends "locally",
1: the token attends "globally".
For more information, please also refer to the [~LongformerModel.forward] method.
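For illustration, here is a minimal sketch (assuming the allenai/longformer-base-4096 checkpoint) of giving global attention to the first token only:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# 0 -> "local" attention, 1 -> "global" attention
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # let the first (<s>) token attend globally
outputs = model(**inputs, global_attention_mask=global_attention_mask)
```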
Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually
represents the memory and time bottleneck, can be reduced from \(\mathcal{O}(n_s \times n_s)\) to
\(\mathcal{O}(n_s \times w)\), with \(n_s\) being the sequence length and \(w\) being the average window
size. It is assumed that the number of "globally" attending tokens is insignificant as compared to the number of
"locally" attending tokens.
For more information, please refer to the official paper.
Training
[LongformerForMaskedLM] is trained the exact same way [RobertaForMaskedLM] is
trained and should be used as follows:
```python
# <mask> is the mask token used by the Longformer (RoBERTa-style) tokenizer
input_ids = tokenizer.encode("This is a sentence from <mask> training data", return_tensors="pt")
mlm_labels = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
loss = model(input_ids, labels=mlm_labels).loss
```
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide |
LongformerConfig
[[autodoc]] LongformerConfig
LongformerTokenizer
[[autodoc]] LongformerTokenizer
LongformerTokenizerFast
[[autodoc]] LongformerTokenizerFast
Longformer specific outputs
[[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling
[[autodoc]] models.longformer.modeling_longformer.LongformerMaskedLMOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerSequenceClassifierOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerTokenClassifierOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutputWithPooling
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput |
LongformerModel
[[autodoc]] LongformerModel
- forward
LongformerForMaskedLM
[[autodoc]] LongformerForMaskedLM
- forward
LongformerForSequenceClassification
[[autodoc]] LongformerForSequenceClassification
- forward
LongformerForMultipleChoice
[[autodoc]] LongformerForMultipleChoice
- forward
LongformerForTokenClassification
[[autodoc]] LongformerForTokenClassification
- forward
LongformerForQuestionAnswering
[[autodoc]] LongformerForQuestionAnswering
- forward |
TFLongformerModel
[[autodoc]] TFLongformerModel
- call
TFLongformerForMaskedLM
[[autodoc]] TFLongformerForMaskedLM
- call
TFLongformerForQuestionAnswering
[[autodoc]] TFLongformerForQuestionAnswering
- call
TFLongformerForSequenceClassification
[[autodoc]] TFLongformerForSequenceClassification
- call
TFLongformerForTokenClassification
[[autodoc]] TFLongformerForTokenClassification
- call
TFLongformerForMultipleChoice
[[autodoc]] TFLongformerForMultipleChoice
- call |
GroupViT
Overview
The GroupViT model was proposed in GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
Inspired by CLIP, GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories.
The abstract from the paper is the following:
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.
This model was contributed by xvjiarui. The TensorFlow version was contributed by ariG23498 with the help of Yih-Dar SHIEH, Amy Roberts, and Joao Gante.
The original code can be found here.
Usage tips
You may specify output_segmentation=True in the forward of GroupViTModel to get the segmentation logits of input texts.
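For illustration, a hedged sketch of zero-shot segmentation with this flag (assuming the nvidia/groupvit-gcc-yfcc checkpoint and its associated processor):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, GroupViTModel

processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a remote control"]

inputs = processor(text=texts, images=image, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_segmentation=True)

# one segmentation map per input text prompt
print(outputs.segmentation_logits.shape)
```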
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT.
The quickest way to get started with GroupViT is by checking the example notebooks (which showcase zero-shot segmentation inference).
One can also check out the HuggingFace Spaces demo to play with GroupViT.
GroupViTConfig
[[autodoc]] GroupViTConfig
- from_text_vision_configs
GroupViTTextConfig
[[autodoc]] GroupViTTextConfig
GroupViTVisionConfig
[[autodoc]] GroupViTVisionConfig
GroupViTModel
[[autodoc]] GroupViTModel
- forward
- get_text_features
- get_image_features
GroupViTTextModel
[[autodoc]] GroupViTTextModel
- forward
GroupViTVisionModel
[[autodoc]] GroupViTVisionModel
- forward |
TFGroupViTModel
[[autodoc]] TFGroupViTModel
- call
- get_text_features
- get_image_features
TFGroupViTTextModel
[[autodoc]] TFGroupViTTextModel
- call
TFGroupViTVisionModel
[[autodoc]] TFGroupViTVisionModel
- call |
Pix2Struct
Overview
The Pix2Struct model was proposed in Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
The abstract from the paper is the following:
Visually-situated language is ubiquitous -- sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images. |
Tips:
Pix2Struct has been fine-tuned on a variety of tasks and datasets, ranging from image captioning and visual question answering (VQA) over different inputs (books, charts, science diagrams) to captioning UI components, etc. The full list can be found in Table 1 of the paper.
We therefore advise you to use these models for the tasks they have been fine-tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine-tuned on the UI dataset. If you want to use Pix2Struct for image captioning, you should use the model fine-tuned on the natural images captioning dataset, and so on.
If you want to use the model to perform conditional text captioning, make sure to use the processor with add_special_tokens=False (see the sketch below).
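A hedged sketch of both unconditional and conditional captioning (assuming the google/pix2struct-textcaps-base checkpoint):

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# unconditional captioning
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))

# conditional captioning: the text prompt is rendered on top of the image by the processor
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```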
This model was contributed by ybelkada.
The original code can be found here.
Resources
Fine-tuning Notebook
All models |
Pix2StructConfig
[[autodoc]] Pix2StructConfig
- from_text_vision_configs
Pix2StructTextConfig
[[autodoc]] Pix2StructTextConfig
Pix2StructVisionConfig
[[autodoc]] Pix2StructVisionConfig
Pix2StructProcessor
[[autodoc]] Pix2StructProcessor
Pix2StructImageProcessor
[[autodoc]] Pix2StructImageProcessor
- preprocess
Pix2StructTextModel
[[autodoc]] Pix2StructTextModel
- forward
Pix2StructVisionModel
[[autodoc]] Pix2StructVisionModel
- forward
Pix2StructForConditionalGeneration
[[autodoc]] Pix2StructForConditionalGeneration
- forward |
Custom Layers and Utilities
This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.
Most of those are only useful if you are studying the code of the models in the library.
Pytorch custom modules
[[autodoc]] pytorch_utils.Conv1D
[[autodoc]] modeling_utils.PoolerStartLogits
- forward
[[autodoc]] modeling_utils.PoolerEndLogits
- forward
[[autodoc]] modeling_utils.PoolerAnswerClass
- forward
[[autodoc]] modeling_utils.SquadHeadOutput
[[autodoc]] modeling_utils.SQuADHead
- forward
[[autodoc]] modeling_utils.SequenceSummary
- forward
PyTorch Helper Functions
[[autodoc]] pytorch_utils.apply_chunking_to_forward
[[autodoc]] pytorch_utils.find_pruneable_heads_and_indices
[[autodoc]] pytorch_utils.prune_layer
[[autodoc]] pytorch_utils.prune_conv1d_layer
[[autodoc]] pytorch_utils.prune_linear_layer
TensorFlow custom layers
[[autodoc]] modeling_tf_utils.TFConv1D
[[autodoc]] modeling_tf_utils.TFSequenceSummary
TensorFlow loss functions
[[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss
[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss
[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss
[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss
TensorFlow Helper Functions
[[autodoc]] modeling_tf_utils.get_initializer
[[autodoc]] modeling_tf_utils.keras_serializable
[[autodoc]] modeling_tf_utils.shape_list |
General Utilities
This page lists all of Transformers general utility functions that are found in the file utils.py.
Most of those are only useful if you are studying the general code in the library.
Enums and namedtuples
[[autodoc]] utils.ExplicitEnum
[[autodoc]] utils.PaddingStrategy
[[autodoc]] utils.TensorType
Special Decorators
[[autodoc]] utils.add_start_docstrings
[[autodoc]] utils.add_start_docstrings_to_model_forward
[[autodoc]] utils.add_end_docstrings
[[autodoc]] utils.add_code_sample_docstrings
[[autodoc]] utils.replace_return_docstrings
Special Properties
[[autodoc]] utils.cached_property
Other Utilities
[[autodoc]] utils._LazyModule |
Utilities for FeatureExtractors
This page lists all the utility functions that can be used by the audio [FeatureExtractor] to compute special features from raw audio using common algorithms such as the Short Time Fourier Transform or the log mel spectrogram.
Most of those are only useful if you are studying the code of the audio processors in the library.
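As a hedged sketch of how a few of these helpers combine into a log-mel spectrogram (parameter names as in recent versions of audio_utils; the values are illustrative only):

```python
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

sampling_rate = 16000
waveform = np.random.randn(sampling_rate).astype(np.float32)  # one second of dummy audio

# 80-bin mel filter bank for a 400-sample (25 ms) analysis window
mel_filters = mel_filter_bank(
    num_frequency_bins=201,  # fft_length // 2 + 1 with fft_length = 400
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=sampling_rate,
)

log_mel = spectrogram(
    waveform,
    window_function(400, "hann"),
    frame_length=400,
    hop_length=160,
    power=2.0,
    mel_filters=mel_filters,
    log_mel="log10",
)
print(log_mel.shape)  # (num_mel_filters, num_frames)
```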
Audio Transformations
[[autodoc]] audio_utils.hertz_to_mel
[[autodoc]] audio_utils.mel_to_hertz
[[autodoc]] audio_utils.mel_filter_bank
[[autodoc]] audio_utils.optimal_fft_length
[[autodoc]] audio_utils.window_function
[[autodoc]] audio_utils.spectrogram
[[autodoc]] audio_utils.power_to_db
[[autodoc]] audio_utils.amplitude_to_db |
Utilities for Generation
This page lists all the utility functions used by [~generation.GenerationMixin.generate].
Generate Outputs
The output of [~generation.GenerationMixin.generate] is an instance of a subclass of
[~utils.ModelOutput]. This output is a data structure containing all the information returned
by [~generation.GenerationMixin.generate], but that can also be used as a tuple or a dictionary.
Here's an example:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```
The generation_output object is a [~generation.GenerateDecoderOnlyOutput]. As we can
see in the documentation of that class below, it has the following attributes:
sequences: the generated sequences of tokens
scores (optional): the prediction scores of the language modelling head, for each generation step
hidden_states (optional): the hidden states of the model, for each generation step
attentions (optional): the attention weights of the model, for each generation step |
Here we have the scores since we passed along output_scores=True, but we don't have hidden_states and
attentions because we didn't pass output_hidden_states=True or output_attentions=True.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
will get None. Here for instance generation_output.scores are all the generated prediction scores of the
language modeling head, and generation_output.attentions is None.
When using our generation_output object as a tuple, it only keeps the attributes that don't have None values.
Here, for instance, it has two elements, sequences then scores, so

```python
generation_output[:2]
```

will return the tuple (generation_output.sequences, generation_output.scores).
When using our generation_output object as a dictionary, it only keeps the attributes that don't have None
values. Here, for instance, it has two keys that are sequences and scores.
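To make the two access modes concrete, a small sketch under the same setup as above:

```python
# attribute, key and tuple-index access all point to the same objects
assert generation_output["sequences"] is generation_output.sequences
assert generation_output[0] is generation_output.sequences
assert generation_output.attentions is None  # output_attentions was not requested
```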
We document here all output types.
PyTorch
[[autodoc]] generation.GenerateDecoderOnlyOutput
[[autodoc]] generation.GenerateEncoderDecoderOutput
[[autodoc]] generation.GenerateBeamDecoderOnlyOutput
[[autodoc]] generation.GenerateBeamEncoderDecoderOutput
TensorFlow
[[autodoc]] generation.TFGreedySearchEncoderDecoderOutput
[[autodoc]] generation.TFGreedySearchDecoderOnlyOutput
[[autodoc]] generation.TFSampleEncoderDecoderOutput
[[autodoc]] generation.TFSampleDecoderOnlyOutput
[[autodoc]] generation.TFBeamSearchEncoderDecoderOutput
[[autodoc]] generation.TFBeamSearchDecoderOnlyOutput
[[autodoc]] generation.TFBeamSampleEncoderDecoderOutput
[[autodoc]] generation.TFBeamSampleDecoderOnlyOutput
[[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput
[[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput
FLAX
[[autodoc]] generation.FlaxSampleOutput
[[autodoc]] generation.FlaxGreedySearchOutput
[[autodoc]] generation.FlaxBeamSearchOutput
LogitsProcessor
A [LogitsProcessor] can be used to modify the prediction scores of a language model head for
generation.
PyTorch
[[autodoc]] AlternatingCodebooksLogitsProcessor
- call
[[autodoc]] ClassifierFreeGuidanceLogitsProcessor
- call
[[autodoc]] EncoderNoRepeatNGramLogitsProcessor
- call
[[autodoc]] EncoderRepetitionPenaltyLogitsProcessor
- call
[[autodoc]] EpsilonLogitsWarper
- call
[[autodoc]] EtaLogitsWarper
- call
[[autodoc]] ExponentialDecayLengthPenalty
- call
[[autodoc]] ForcedBOSTokenLogitsProcessor
- call
[[autodoc]] ForcedEOSTokenLogitsProcessor
- call
[[autodoc]] ForceTokensLogitsProcessor
- call
[[autodoc]] HammingDiversityLogitsProcessor
- call
[[autodoc]] InfNanRemoveLogitsProcessor
- call
[[autodoc]] LogitNormalization
- call
[[autodoc]] LogitsProcessor
- call
[[autodoc]] LogitsProcessorList
- call
[[autodoc]] LogitsWarper
- call
[[autodoc]] MinLengthLogitsProcessor
- call
[[autodoc]] MinNewTokensLengthLogitsProcessor
- call
[[autodoc]] NoBadWordsLogitsProcessor
- call
[[autodoc]] NoRepeatNGramLogitsProcessor
- call
[[autodoc]] PrefixConstrainedLogitsProcessor
- call
[[autodoc]] RepetitionPenaltyLogitsProcessor
- call
[[autodoc]] SequenceBiasLogitsProcessor
- call
[[autodoc]] SuppressTokensAtBeginLogitsProcessor
- call
[[autodoc]] SuppressTokensLogitsProcessor
- call
[[autodoc]] TemperatureLogitsWarper
- call
[[autodoc]] TopKLogitsWarper
- call
[[autodoc]] TopPLogitsWarper
- call
[[autodoc]] TypicalLogitsWarper
- call
[[autodoc]] UnbatchedClassifierFreeGuidanceLogitsProcessor
- call
[[autodoc]] WhisperTimeStampLogitsProcessor
- call
TensorFlow
[[autodoc]] TFForcedBOSTokenLogitsProcessor
- call
[[autodoc]] TFForcedEOSTokenLogitsProcessor
- call
[[autodoc]] TFForceTokensLogitsProcessor
- call
[[autodoc]] TFLogitsProcessor
- call
[[autodoc]] TFLogitsProcessorList
- call
[[autodoc]] TFLogitsWarper
- call
[[autodoc]] TFMinLengthLogitsProcessor
- call
[[autodoc]] TFNoBadWordsLogitsProcessor
- call
[[autodoc]] TFNoRepeatNGramLogitsProcessor
- call
[[autodoc]] TFRepetitionPenaltyLogitsProcessor
- call
[[autodoc]] TFSuppressTokensAtBeginLogitsProcessor
- call
[[autodoc]] TFSuppressTokensLogitsProcessor
- call
[[autodoc]] TFTemperatureLogitsWarper
- call
[[autodoc]] TFTopKLogitsWarper
- call
[[autodoc]] TFTopPLogitsWarper
- call
FLAX
[[autodoc]] FlaxForcedBOSTokenLogitsProcessor
- call
[[autodoc]] FlaxForcedEOSTokenLogitsProcessor
- call
[[autodoc]] FlaxForceTokensLogitsProcessor
- call
[[autodoc]] FlaxLogitsProcessor
- call
[[autodoc]] FlaxLogitsProcessorList
- call
[[autodoc]] FlaxLogitsWarper
- call
[[autodoc]] FlaxMinLengthLogitsProcessor
- call
[[autodoc]] FlaxSuppressTokensAtBeginLogitsProcessor
- call
[[autodoc]] FlaxSuppressTokensLogitsProcessor
- call
[[autodoc]] FlaxTemperatureLogitsWarper
- call
[[autodoc]] FlaxTopKLogitsWarper
- call
[[autodoc]] FlaxTopPLogitsWarper
- call
[[autodoc]] FlaxWhisperTimeStampLogitsProcessor
- call
StoppingCriteria
A [StoppingCriteria] can be used to change when to stop generation (other than EOS token). Please note that this is exclusively available to our PyTorch implementations.
[[autodoc]] StoppingCriteria
- call
[[autodoc]] StoppingCriteriaList
- call
[[autodoc]] MaxLengthCriteria
- call
[[autodoc]] MaxTimeCriteria
- call
Constraints
A [Constraint] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations.
[[autodoc]] Constraint
[[autodoc]] PhrasalConstraint
[[autodoc]] DisjunctiveConstraint
[[autodoc]] ConstraintListState
BeamSearch
[[autodoc]] BeamScorer
- process
- finalize
[[autodoc]] BeamSearchScorer
- process
- finalize
[[autodoc]] ConstrainedBeamSearchScorer
- process
- finalize
Utilities
[[autodoc]] top_k_top_p_filtering
[[autodoc]] tf_top_k_top_p_filtering
Streamers
[[autodoc]] TextStreamer
[[autodoc]] TextIteratorStreamer
Caches
[[autodoc]] Cache
- update
[[autodoc]] DynamicCache
- update
- get_seq_length
- reorder_cache
- to_legacy_cache
- from_legacy_cache
[[autodoc]] SinkCache
- update
- get_seq_length
- reorder_cache
[[autodoc]] StaticCache
- update
- get_seq_length |
Time Series Utilities
This page lists all the utility functions and classes that can be used for Time Series based models.
Most of those are only useful if you are studying the code of the time series models or you wish to add to the collection of distributional output classes.
Distributional Output
[[autodoc]] time_series_utils.NormalOutput
[[autodoc]] time_series_utils.StudentTOutput
[[autodoc]] time_series_utils.NegativeBinomialOutput |
Utilities for Tokenizers
This page lists all the utility functions used by the tokenizers, mainly the class
[~tokenization_utils_base.PreTrainedTokenizerBase] that implements the common methods between
[PreTrainedTokenizer] and [PreTrainedTokenizerFast] and the mixin
[~tokenization_utils_base.SpecialTokensMixin].
Most of those are only useful if you are studying the code of the tokenizers in the library.
PreTrainedTokenizerBase
[[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase
- call
- all
SpecialTokensMixin
[[autodoc]] tokenization_utils_base.SpecialTokensMixin
Enums and namedtuples
[[autodoc]] tokenization_utils_base.TruncationStrategy
[[autodoc]] tokenization_utils_base.CharSpan
[[autodoc]] tokenization_utils_base.TokenSpan |
Utilities for Image Processors
This page lists all the utility functions used by the image processors, mainly the functional
transformations used to process the images.
Most of those are only useful if you are studying the code of the image processors in the library.
Image Transformations
[[autodoc]] image_transforms.center_crop
[[autodoc]] image_transforms.center_to_corners_format
[[autodoc]] image_transforms.corners_to_center_format
[[autodoc]] image_transforms.id_to_rgb
[[autodoc]] image_transforms.normalize
[[autodoc]] image_transforms.pad
[[autodoc]] image_transforms.rgb_to_id
[[autodoc]] image_transforms.rescale
[[autodoc]] image_transforms.resize
[[autodoc]] image_transforms.to_pil_image
ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin |
Utilities for Trainer
This page lists all the utility functions used by [Trainer].
Most of those are only useful if you are studying the code of the Trainer in the library.
Utilities
[[autodoc]] EvalPrediction
[[autodoc]] IntervalStrategy
[[autodoc]] enable_full_determinism
[[autodoc]] set_seed
[[autodoc]] torch_distributed_zero_first
Callbacks internals
[[autodoc]] trainer_callback.CallbackHandler
Distributed Evaluation
[[autodoc]] trainer_pt_utils.DistributedTensorGatherer
Trainer Argument Parser
[[autodoc]] HfArgumentParser
Debug Utilities
[[autodoc]] debug_utils.DebugUnderflowOverflow |
Utilities for pipelines
This page lists all the utility functions the library provides for pipelines.
Most of those are only useful if you are studying the code of the models in the library.
Argument handling
[[autodoc]] pipelines.ArgumentHandler
[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler
[[autodoc]] pipelines.QuestionAnsweringArgumentHandler
Data format
[[autodoc]] pipelines.PipelineDataFormat
[[autodoc]] pipelines.CsvPipelineDataFormat
[[autodoc]] pipelines.JsonPipelineDataFormat
[[autodoc]] pipelines.PipedPipelineDataFormat
Utilities
[[autodoc]] pipelines.PipelineException |
Agents & Tools
Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.
To learn more about agents and tools make sure to read the introductory guide. This page
contains the API docs for the underlying classes.
Agents
We provide three types of agents: [HfAgent] uses inference endpoints for open-source models, [LocalAgent] uses a model of your choice locally, and [OpenAiAgent] uses OpenAI closed models.
HfAgent
[[autodoc]] HfAgent
LocalAgent
[[autodoc]] LocalAgent
OpenAiAgent
[[autodoc]] OpenAiAgent
AzureOpenAiAgent
[[autodoc]] AzureOpenAiAgent
Agent
[[autodoc]] Agent
- chat
- run
- prepare_for_new_chat
Tools
load_tool
[[autodoc]] load_tool
Tool
[[autodoc]] Tool
PipelineTool
[[autodoc]] PipelineTool
RemoteTool
[[autodoc]] RemoteTool
launch_gradio_demo
[[autodoc]] launch_gradio_demo
Agent Types
Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return
text, image, audio, and video, among other types. In order to increase compatibility between tools, as well as to
correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes
around these types.
The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image
object should still behave as a PIL.Image.
These types have three specific purposes:
Calling to_raw on the type should return the underlying object
Calling to_string on the type should return the object as a string: that can be the string in case of an AgentText
but will be the path of the serialized version of the object in other instances
Displaying it in an ipython kernel should display the object correctly |
AgentText
[[autodoc]] transformers.tools.agent_types.AgentText
AgentImage
[[autodoc]] transformers.tools.agent_types.AgentImage
AgentAudio
[[autodoc]] transformers.tools.agent_types.AgentAudio |
Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences (e.g., pre-processing audio files to generate Log-Mel Spectrogram features), feature extraction from images (e.g., cropping image files), but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors.
FeatureExtractionMixin
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
- from_pretrained
- save_pretrained
SequenceFeatureExtractor
[[autodoc]] SequenceFeatureExtractor
- pad
BatchFeature
[[autodoc]] BatchFeature
ImageFeatureExtractionMixin
[[autodoc]] image_utils.ImageFeatureExtractionMixin |
Generation
Each framework has a generate method for text generation implemented in its respective GenerationMixin class:
PyTorch [~generation.GenerationMixin.generate] is implemented in [~generation.GenerationMixin].
TensorFlow [~generation.TFGenerationMixin.generate] is implemented in [~generation.TFGenerationMixin].
Flax/JAX [~generation.FlaxGenerationMixin.generate] is implemented in [~generation.FlaxGenerationMixin].
Regardless of your framework of choice, you can parameterize the generate method with a [~generation.GenerationConfig]
class instance. Please refer to this class for the complete list of generation parameters, which control the behavior
of the generation method.
To learn how to inspect a model's generation configuration, what the defaults are, how to change the parameters ad hoc,
and how to create and save a customized generation configuration, refer to the
text generation strategies guide. The guide also explains how to use related features,
like token streaming.
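For illustration, a minimal sketch of passing a custom [~generation.GenerationConfig] to generate (assuming the openai-community/gpt2 checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

generation_config = GenerationConfig(max_new_tokens=20, do_sample=True, top_k=50)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```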
GenerationConfig
[[autodoc]] generation.GenerationConfig
- from_pretrained
- from_model_config
- save_pretrained
GenerationMixin
[[autodoc]] generation.GenerationMixin
- generate
- compute_transition_scores
TFGenerationMixin
[[autodoc]] generation.TFGenerationMixin
- generate
- compute_transition_scores
FlaxGenerationMixin
[[autodoc]] generation.FlaxGenerationMixin
- generate |
Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
Rust library 🤗 Tokenizers. The "Fast" implementations allow:
a significant speed-up, in particular when doing batched tokenization, and
additional methods to map between the original string (characters and words) and the token space (e.g. getting the
index of the token comprising a given character or the span of characters corresponding to a given token).
The base classes [PreTrainedTokenizer] and [PreTrainedTokenizerFast]
implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and
"Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library
(downloaded from HuggingFace's AWS S3 repository). They both rely on
[~tokenization_utils_base.PreTrainedTokenizerBase] that contains the common methods, and
[~tokenization_utils_base.SpecialTokensMixin].
[PreTrainedTokenizer] and [PreTrainedTokenizerFast] thus implement the main
methods for using all the tokenizers:
Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and
encoding/decoding (i.e., tokenizing and converting to integers).
Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece).
Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
tokenizer for easy access and making sure they are not split during tokenization.
[BatchEncoding] holds the output of the
[~tokenization_utils_base.PreTrainedTokenizerBase]'s encoding methods (__call__,
encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure python
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by
these methods (input_ids, attention_mask). When the tokenizer is a "Fast" tokenizer (i.e., backed by
HuggingFace tokenizers library), this class provides in addition
several advanced alignment methods which can be used to map between the original string (characters and words) and the
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
to a given token).
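For illustration, a short sketch of those alignment methods with a fast tokenizer (assuming the google-bert/bert-base-uncased checkpoint):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
encoding = tokenizer("Hello world!")

print(encoding.tokens())           # wordpiece tokens, including special tokens
print(encoding.word_ids())         # word index of each token (None for special tokens)
print(encoding.char_to_token(6))   # index of the token covering the character at position 6 ("w")
print(encoding.token_to_chars(2))  # CharSpan of the original characters covered by token 2
```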
PreTrainedTokenizer
[[autodoc]] PreTrainedTokenizer
- call
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
PreTrainedTokenizerFast
The [PreTrainedTokenizerFast] depends on the tokenizers library. The tokenizers obtained from the 🤗 tokenizers library can be
loaded very simply into 🤗 transformers. Take a look at the Using tokenizers from 🤗 tokenizers page to understand how this is done.
[[autodoc]] PreTrainedTokenizerFast
- call
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
BatchEncoding
[[autodoc]] BatchEncoding |
Optimization
The .optimization module provides:
an optimizer with weight decay fixed that can be used to fine-tune models (a short sketch follows below),
several schedules in the form of schedule objects that inherit from _LRSchedule, and
a gradient accumulation class to accumulate the gradients of multiple batches.
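As a minimal sketch of wiring an optimizer to one of these schedules (using torch.optim.AdamW here, since the AdamW implementation in transformers is deprecated; the step counts are illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)

# inside the training loop, after the backward pass:
# optimizer.step(); scheduler.step(); optimizer.zero_grad()
```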
AdamW (PyTorch)
[[autodoc]] AdamW
AdaFactor (PyTorch)
[[autodoc]] Adafactor
AdamWeightDecay (TensorFlow)
[[autodoc]] AdamWeightDecay
[[autodoc]] create_optimizer
Schedules
Learning Rate Schedules (Pytorch)
[[autodoc]] SchedulerType
[[autodoc]] get_scheduler
[[autodoc]] get_constant_schedule
[[autodoc]] get_constant_schedule_with_warmup
[[autodoc]] get_cosine_schedule_with_warmup
[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup
[[autodoc]] get_linear_schedule_with_warmup |
[[autodoc]] get_polynomial_decay_schedule_with_warmup
[[autodoc]] get_inverse_sqrt_schedule
Warmup (TensorFlow)
[[autodoc]] WarmUp
Gradient Strategies
GradientAccumulator (TensorFlow)
[[autodoc]] GradientAccumulator |
Models
The base classes [PreTrainedModel], [TFPreTrainedModel], and
[FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).
[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods which
are common among all the models to:
resize the input token embeddings when new tokens are added to the vocabulary (a short sketch is given below)
prune the attention heads of the model.
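For illustration, a minimal sketch of resizing the embeddings after adding a token (assuming the openai-community/gpt2 checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

# add a new special token and resize the input embeddings accordingly
tokenizer.add_special_tokens({"additional_special_tokens": ["<new_token>"]})
model.resize_token_embeddings(len(tokenizer))
```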
The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin]
(for the PyTorch models) and [~modeling_tf_utils.TFModuleUtilsMixin] (for the TensorFlow models) or
for text generation, [~generation.GenerationMixin] (for the PyTorch models),
[~generation.TFGenerationMixin] (for the TensorFlow models) and
[~generation.FlaxGenerationMixin] (for the Flax/JAX models).
PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all |
Large model loading
In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method has been reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded.
This option can be activated with low_cpu_mem_usage=True. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.

```python
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
```
Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don't need to specify it:

```python
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
You can inspect how the model was split across devices by looking at its hf_device_map attribute:
```py
t0pp.hf_device_map
```

```python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
```
You can also write your own device map following the same format (a dictionary mapping layer name to device). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):

```python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
```
Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below.
Model Instantiation dtype
Under Pytorch a model normally gets instantiated with torch.float32 format. This can be an issue if one tries to
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired dtype using the torch_dtype argument:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```
or, if you want the model to always load in the most optimal memory pattern, you can use the special value "auto",
and then dtype will be automatically derived from the model's weights:
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```
Models instantiated from scratch can also be told which dtype to use with:
```python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
```
Due to Pytorch design, this functionality is only available for floating dtypes.
ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint |
Pipelines
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of
the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity
Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the
task summary for examples of use.
There are two categories of pipeline abstractions to be aware of:
The [pipeline] which is the most powerful object encapsulating all other pipelines.
Task-specific pipelines are available for audio, computer vision, natural language processing, and multimodal tasks.
The pipeline abstraction
The pipeline abstraction is a wrapper around all the other available pipelines. It is instantiated as any other
pipeline but can provide additional quality of life.
Simple call on one item:
```python
>>> from transformers import pipeline

>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
If you want to use a specific model from the hub you can ignore the task if the model on
the hub already defines it:
```python
>>> pipe = pipeline(model="FacebookAI/roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
```
To call a pipeline on many items, you can call it with a list.
```python
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
 {'label': 'NEGATIVE', 'score': 0.9996669292449951}]
```
To iterate over full datasets it is recommended to use a dataset directly. This means you don't need to allocate
the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on
GPU. If it doesn't, don't hesitate to create an issue.
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset (only pt) will simply return the item in the dict returned by the dataset item
# as we're not interested in the target part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ...}
    # ...
```
For ease of use, a generator is also possible:
```python
from transformers import pipeline

pipe = pipeline("text-classification")

def data():
    while True:
        # This could come from a dataset, a database, a queue or HTTP request
        # in a server
        # Caveat: because this is iterative, you cannot use num_workers > 1 variable
        # to use multiple threads to preprocess data. You can still have 1 thread that
        # does the preprocessing while the main runs the big inference
        yield "This is a test"

for out in pipe(data()):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ...}
    # ...
```
[[autodoc]] pipeline
Pipeline batching
All pipelines can use batching. This will work
whenever the pipeline uses its streaming ability (so when passing lists or Dataset or generator).
python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets

dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
    # [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
    # Exactly the same output as before, but the contents are passed
    # as batches to the model
However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending
on hardware, data and the actual model being used.
Example where it's mostly a speedup:
python
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm

pipe = pipeline("text-classification", device=0)

class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"

dataset = MyDataset()
for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
On GTX 970:

Streaming no batching
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]

Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]

Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]

Streaming batch_size=256
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
Example where it's mostly a slowdown:
python
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n

This is an occasional very long sentence compared to the others. In that case, the whole batch will need to be 400
tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the high slowdown. Even worse, on
bigger batches, the program simply crashes.
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]

Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]

Streaming batch_size=64
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
Streaming batch_size=256
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in
for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
.
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch) |
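To make the padding blow-up above concrete, here is a shape-only sketch (the tokenizer checkpoint is just an example): the occasional long sentence forces every row of the batch to be padded to its length.
python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
short_sentences = ["This is a test"] * 63
long_sentence = ["This is a test " * 100]  # the occasional outlier
batch = tokenizer(short_sentences + long_sentence, padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # roughly torch.Size([64, 400]) instead of [64, 6]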
There are no good (general) solutions for this problem, and your mileage may vary depending on your use cases. For
users, a rule of thumb is:

Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the
only way to go.
If you are latency constrained (live product doing inference), don't batch.
If you are using CPU, don't batch.
If you are optimizing for throughput (you want to run your model on a bunch of static data) on GPU, then:
If you have no clue about the size of the sequence_length ("natural" data), by default don't batch; measure and
tentatively try to add it, and add OOM checks to recover when it fails (and it will fail at some point if you don't
control the sequence_length).
If your sequence_length is super regular, then batching is more likely to be VERY interesting; measure and push
it until you get OOMs.
The larger the GPU, the more likely batching is to be interesting.
As soon as you enable batching, make sure you can handle OOMs nicely (see the sketch below).
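A minimal sketch of handling OOMs nicely (assuming pipe and dataset are defined as above): catch the CUDA out-of-memory error and retry with a smaller batch size. On older PyTorch versions you may need to catch RuntimeError and inspect its message instead of torch.cuda.OutOfMemoryError.
python
import torch

def run_with_fallback(pipe, dataset, batch_size=256):
    # Halve the batch size on CUDA OOM until the job fits in memory.
    while batch_size >= 1:
        try:
            return list(pipe(dataset, batch_size=batch_size))
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()
            batch_size //= 2
    raise RuntimeError("Even batch_size=1 does not fit on this GPU")

results = run_with_fallback(pipe, dataset)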
Pipeline chunk batching
zero-shot-classification and question-answering are slightly specific, in the sense that a single input might yield
multiple forward passes of a model. Under normal circumstances, this would cause issues with the batch_size argument.
In order to circumvent this issue, both of these pipelines are a bit special: they are ChunkPipeline instead of
regular Pipeline. In short:
python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
Now becomes:
python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
    model_outputs = pipe.forward(preprocessed)
    all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
This should be very transparent to your code because the pipelines are used in
the same way.
This is a simplified view, since the pipeline handles the batching automatically! Meaning you don't have to care
about how many forward passes your inputs are actually going to trigger; you can optimize the batch_size
independently of the inputs. The caveats from the previous section still apply.
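For example (a sketch; the default model and the labels are arbitrary), a single zero-shot-classification input with several candidate labels triggers one forward pass per label, yet batch_size can be tuned independently:
python
from transformers import pipeline

pipe = pipeline("zero-shot-classification", device=0)
out = pipe(
    "Transformers provides thousands of pretrained models",
    candidate_labels=["machine learning", "cooking", "sports", "politics", "music"],
    batch_size=4,  # groups the per-label forward passes, regardless of how inputs are passed
)
print(out["labels"][0], out["scores"][0])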
Pipeline custom code
If you want to override a specific pipeline, don't hesitate to create an issue for your task at hand; the goal of the
pipelines is to be easy to use and support most cases, so transformers could maybe support your use case.
If you simply want to try, you can:
Subclass your pipeline of choice
python
class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your code goes here
        scores = model_outputs["logits"].softmax(-1)
        scores = scores * 100
        # And here
        return scores

my_pipeline = MyPipeline(model=model, tokenizer=tokenizer)
or if you use the pipeline function, then:
python
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
That should enable you to do all the custom code you want.
Implementing a pipeline
Implementing a new pipeline
Audio
Pipelines available for audio tasks include the following.
AudioClassificationPipeline
[[autodoc]] AudioClassificationPipeline
- call
- all
AutomaticSpeechRecognitionPipeline
[[autodoc]] AutomaticSpeechRecognitionPipeline
- call
- all
TextToAudioPipeline
[[autodoc]] TextToAudioPipeline
- call
- all
ZeroShotAudioClassificationPipeline
[[autodoc]] ZeroShotAudioClassificationPipeline
- call
- all
Computer vision
Pipelines available for computer vision tasks include the following.
DepthEstimationPipeline
[[autodoc]] DepthEstimationPipeline
- call
- all
ImageClassificationPipeline
[[autodoc]] ImageClassificationPipeline
- call
- all
ImageSegmentationPipeline
[[autodoc]] ImageSegmentationPipeline
- call
- all
ImageToImagePipeline
[[autodoc]] ImageToImagePipeline
- call
- all
ObjectDetectionPipeline
[[autodoc]] ObjectDetectionPipeline
- call
- all
VideoClassificationPipeline
[[autodoc]] VideoClassificationPipeline
- call
- all
ZeroShotImageClassificationPipeline
[[autodoc]] ZeroShotImageClassificationPipeline
- call
- all
ZeroShotObjectDetectionPipeline
[[autodoc]] ZeroShotObjectDetectionPipeline
- call
- all
Natural Language Processing
Pipelines available for natural language processing tasks include the following.
ConversationalPipeline
[[autodoc]] Conversation
[[autodoc]] ConversationalPipeline
- call
- all
FillMaskPipeline
[[autodoc]] FillMaskPipeline
- call
- all
QuestionAnsweringPipeline
[[autodoc]] QuestionAnsweringPipeline
- call
- all
SummarizationPipeline
[[autodoc]] SummarizationPipeline
- call
- all
TableQuestionAnsweringPipeline
[[autodoc]] TableQuestionAnsweringPipeline
- call
TextClassificationPipeline
[[autodoc]] TextClassificationPipeline
- call
- all
TextGenerationPipeline
[[autodoc]] TextGenerationPipeline
- call
- all
Text2TextGenerationPipeline
[[autodoc]] Text2TextGenerationPipeline
- call
- all
TokenClassificationPipeline
[[autodoc]] TokenClassificationPipeline
- call
- all
TranslationPipeline
[[autodoc]] TranslationPipeline
- call
- all
ZeroShotClassificationPipeline
[[autodoc]] ZeroShotClassificationPipeline
- call
- all
Multimodal
Pipelines available for multimodal tasks include the following.
DocumentQuestionAnsweringPipeline
[[autodoc]] DocumentQuestionAnsweringPipeline
- call
- all
FeatureExtractionPipeline
[[autodoc]] FeatureExtractionPipeline
- call
- all
ImageFeatureExtractionPipeline
[[autodoc]] ImageFeatureExtractionPipeline
- call
- all
ImageToTextPipeline
[[autodoc]] ImageToTextPipeline
- call
- all
MaskGenerationPipeline
[[autodoc]] MaskGenerationPipeline
- call
- all
VisualQuestionAnsweringPipeline
[[autodoc]] VisualQuestionAnsweringPipeline
- call
- all
Parent class: Pipeline
[[autodoc]] Pipeline |
Keras callbacks
When training a Transformers model with Keras, there are some library-specific callbacks available to automate common
tasks:
KerasMetricCallback
[[autodoc]] KerasMetricCallback
PushToHubCallback
[[autodoc]] PushToHubCallback |
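For instance, here is a minimal sketch (assuming model, tokenizer and tf_train_dataset already exist, and using a placeholder repo name) of pushing checkpoints to the Hub during Keras training; the callbacks are passed to model.fit() like any other Keras callback:
python
from transformers import PushToHubCallback

push_to_hub_callback = PushToHubCallback(
    output_dir="./model_checkpoints",                 # local directory for saved checkpoints
    tokenizer=tokenizer,                              # also uploads the tokenizer files
    hub_model_id="your-username/my-finetuned-model",  # placeholder repo name
)
model.fit(tf_train_dataset, epochs=3, callbacks=[push_to_hub_callback])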
Model outputs
All models have outputs that are instances of subclasses of [~utils.ModelOutput]. Those are
data structures containing all the information returned by the model, but that can also be used as tuples or
dictionaries.
Let's see how this looks in an example:
thon
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels) |
The outputs object is a [~modeling_outputs.SequenceClassifierOutput]. As we can see in the
documentation of that class below, it has an optional loss, a logits attribute, an optional hidden_states and
an optional attentions attribute. Here we have the loss since we passed along labels, but we don't have
hidden_states and attentions because we didn't pass output_hidden_states=True or
output_attentions=True.
When passing output_hidden_states=True you may expect outputs.hidden_states[-1] to match outputs.last_hidden_state exactly.
However, this is not always the case. Some models apply normalization or subsequent processing to the last hidden state when it's returned.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
will get None. Here for instance outputs.loss is the loss computed by the model, and outputs.attentions is
None.
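Continuing the example above, here is a short sketch (shapes assume google-bert/bert-base-uncased and the input sentence used earlier) of how these optional attributes behave:
python
print(outputs.loss)        # a scalar tensor, present because we passed `labels`
print(outputs.attentions)  # None, because we didn't pass `output_attentions=True`

# Request the hidden states explicitly to populate that attribute.
outputs_with_hidden = model(**inputs, labels=labels, output_hidden_states=True)
print(len(outputs_with_hidden.hidden_states))       # 13: the embedding output plus one per layer
print(outputs_with_hidden.hidden_states[-1].shape)  # torch.Size([1, 8, 768]) for this input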
When considering our outputs object as a tuple, it only contains the attributes that don't have None values.
Here, for instance, it has two elements, loss then logits, so
python
outputs[:2]
will return the tuple (outputs.loss, outputs.logits).
When considering our outputs object as a dictionary, it only contains the attributes that don't have None
values. Here, for instance, it has two keys, loss and logits.
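As a quick sketch of the dictionary view (continuing the same example):
python
print(list(outputs.keys()))     # ['loss', 'logits']
print(outputs["logits"].shape)  # torch.Size([1, 2]) with the default two labels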
We document here the generic model outputs that are used by more than one model type. Specific output types are
documented on their corresponding model page.
ModelOutput
[[autodoc]] utils.ModelOutput
- to_tuple
BaseModelOutput
[[autodoc]] modeling_outputs.BaseModelOutput
BaseModelOutputWithPooling
[[autodoc]] modeling_outputs.BaseModelOutputWithPooling
BaseModelOutputWithCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions
BaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
BaseModelOutputWithPast
[[autodoc]] modeling_outputs.BaseModelOutputWithPast
BaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
Seq2SeqModelOutput
[[autodoc]] modeling_outputs.Seq2SeqModelOutput
CausalLMOutput
[[autodoc]] modeling_outputs.CausalLMOutput
CausalLMOutputWithCrossAttentions
[[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions
CausalLMOutputWithPast
[[autodoc]] modeling_outputs.CausalLMOutputWithPast
MaskedLMOutput
[[autodoc]] modeling_outputs.MaskedLMOutput
Seq2SeqLMOutput
[[autodoc]] modeling_outputs.Seq2SeqLMOutput
NextSentencePredictorOutput
[[autodoc]] modeling_outputs.NextSentencePredictorOutput
SequenceClassifierOutput
[[autodoc]] modeling_outputs.SequenceClassifierOutput
Seq2SeqSequenceClassifierOutput
[[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput
MultipleChoiceModelOutput
[[autodoc]] modeling_outputs.MultipleChoiceModelOutput
TokenClassifierOutput
[[autodoc]] modeling_outputs.TokenClassifierOutput
QuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.QuestionAnsweringModelOutput
Seq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
Seq2SeqSpectrogramOutput
[[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput
SemanticSegmenterOutput
[[autodoc]] modeling_outputs.SemanticSegmenterOutput
ImageClassifierOutput
[[autodoc]] modeling_outputs.ImageClassifierOutput
ImageClassifierOutputWithNoAttention
[[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention
DepthEstimatorOutput
[[autodoc]] modeling_outputs.DepthEstimatorOutput
Wav2Vec2BaseModelOutput
[[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput
XVectorOutput
[[autodoc]] modeling_outputs.XVectorOutput
Seq2SeqTSModelOutput
[[autodoc]] modeling_outputs.Seq2SeqTSModelOutput
Seq2SeqTSPredictionOutput
[[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput
SampleTSPredictionOutput
[[autodoc]] modeling_outputs.SampleTSPredictionOutput
TFBaseModelOutput
[[autodoc]] modeling_tf_outputs.TFBaseModelOutput
TFBaseModelOutputWithPooling
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling
TFBaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions
TFBaseModelOutputWithPast
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast
TFBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions
TFSeq2SeqModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput
TFCausalLMOutput
[[autodoc]] modeling_tf_outputs.TFCausalLMOutput
TFCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions
TFCausalLMOutputWithPast
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast
TFMaskedLMOutput
[[autodoc]] modeling_tf_outputs.TFMaskedLMOutput
TFSeq2SeqLMOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput
TFNextSentencePredictorOutput
[[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput
TFSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput
TFSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput
TFMultipleChoiceModelOutput
[[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput
TFTokenClassifierOutput
[[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput
TFQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput
TFSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput
FlaxBaseModelOutput
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput
FlaxBaseModelOutputWithPast
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast
FlaxBaseModelOutputWithPooling
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling
FlaxBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions
FlaxSeq2SeqModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput
FlaxCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions
FlaxMaskedLMOutput
[[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput
FlaxSeq2SeqLMOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput
FlaxNextSentencePredictorOutput
[[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput
FlaxSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput
FlaxSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput
FlaxMultipleChoiceModelOutput
[[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput
FlaxTokenClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput
FlaxQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput
FlaxSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput |
Processors
Processors can mean two different things in the Transformers library:
- the objects that pre-process inputs for multi-modal models such as Wav2Vec2 (speech and text)
or CLIP (text and vision)
- deprecated objects that were used in older versions of the library to preprocess data for GLUE or SQUAD.
Multi-modal processors
Any multi-modal model will require an object to encode or decode the data that groups several modalities (among text,
vision and audio). This is handled by objects called processors, which group together two or more processing objects
such as tokenizers (for the text modality), image processors (for vision) and feature extractors (for audio).
Those processors inherit from the following base class that implements the saving and loading functionality:
[[autodoc]] ProcessorMixin
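For example, here is a minimal sketch using CLIP's processor (the image path is a placeholder); a single processor prepares both the text and the image inputs:
python
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open("path/to/cat.png")  # placeholder path
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt")
print(inputs.keys())  # input_ids, attention_mask, pixel_values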
Deprecated processors
All processors follow the same architecture which is that of the
[~data.processors.utils.DataProcessor]. The processor returns a list of
[~data.processors.utils.InputExample]. These
[~data.processors.utils.InputExample] can be converted to
[~data.processors.utils.InputFeatures] in order to be fed to the model.
[[autodoc]] data.processors.utils.DataProcessor
[[autodoc]] data.processors.utils.InputExample
[[autodoc]] data.processors.utils.InputFeatures
GLUE
General Language Understanding Evaluation (GLUE) is a benchmark that evaluates the
performance of models across a diverse set of existing NLU tasks. It was released together with the paper GLUE: A
multi-task benchmark and analysis platform for natural language understanding.
This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB,
QQP, QNLI, RTE and WNLI.
Those processors are: |
[~data.processors.utils.MrpcProcessor]
[~data.processors.utils.MnliProcessor]
[~data.processors.utils.MnliMismatchedProcessor]
[~data.processors.utils.Sst2Processor]
[~data.processors.utils.StsbProcessor]
[~data.processors.utils.QqpProcessor]
[~data.processors.utils.QnliProcessor]
[~data.processors.utils.RteProcessor]
[~data.processors.utils.WnliProcessor] |
Additionally, the following method can be used to load values from a data file and convert them to a list of
[~data.processors.utils.InputExample].
[[autodoc]] data.processors.glue.glue_convert_examples_to_features
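A minimal sketch of how these (deprecated) processors were typically used, assuming a local MRPC data directory laid out the way the processor expects:
python
from transformers import BertTokenizer
from transformers.data.processors.glue import MrpcProcessor, glue_convert_examples_to_features

processor = MrpcProcessor()
examples = processor.get_train_examples("path/to/MRPC")  # returns a list of InputExample

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
features = glue_convert_examples_to_features(examples, tokenizer, max_length=128, task="mrpc")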
XNLI
The Cross-Lingual NLI Corpus (XNLI) is a benchmark that evaluates the
quality of cross-lingual text representations. XNLI is a crowd-sourced dataset based on MultiNLI: pairs of text are labeled with textual entailment annotations for 15
different languages (including both high-resource languages such as English and low-resource languages such as Swahili).
It was released together with the paper XNLI: Evaluating Cross-lingual Sentence Representations.
This library hosts the processor to load the XNLI data:
[~data.processors.utils.XnliProcessor] |
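A short sketch of using it, assuming a local XNLI data directory (the path and language are placeholders):
python
from transformers.data.processors.xnli import XnliProcessor

processor = XnliProcessor(language="en")
examples = processor.get_train_examples("path/to/XNLI")  # returns a list of InputExample
print(examples[0].text_a, examples[0].label)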