XLNetModel
[[autodoc]] XLNetModel
- forward
XLNetLMHeadModel
[[autodoc]] XLNetLMHeadModel
- forward
XLNetForSequenceClassification
[[autodoc]] XLNetForSequenceClassification
- forward
XLNetForMultipleChoice
[[autodoc]] XLNetForMultipleChoice
- forward
XLNetForTokenClassification
[[autodoc]] XLNetForTokenClassification
- forward
XLNetForQuestionAnsweringSimple
[[autodoc]] XLNetForQuestionAnsweringSimple
- forward
XLNetForQuestionAnswering
[[autodoc]] XLNetForQuestionAnswering
- forward |
TFXLNetModel
[[autodoc]] TFXLNetModel
- call
TFXLNetLMHeadModel
[[autodoc]] TFXLNetLMHeadModel
- call
TFXLNetForSequenceClassification
[[autodoc]] TFXLNetForSequenceClassification
- call
TFXLNetForMultipleChoice
[[autodoc]] TFXLNetForMultipleChoice
- call
TFXLNetForTokenClassification
[[autodoc]] TFXLNetForTokenClassification
- call
TFXLNetForQuestionAnsweringSimple
[[autodoc]] TFXLNetForQuestionAnsweringSimple
- call |
ESM
Overview
This page provides code and pre-trained weights for the Transformer protein language models from Meta AI's Fundamental
AI Research Team, including the state-of-the-art ESMFold and ESM-2, as well as the previously released ESM-1b and ESM-1v.
Transformer protein language models were introduced in the paper Biological structure and function emerge from scaling
unsupervised learning to 250 million protein sequences by
Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott,
C. Lawrence Zitnick, Jerry Ma, and Rob Fergus.
The first version of this paper was preprinted in 2019.
ESM-2 outperforms all tested single-sequence protein language models across a range of structure prediction tasks,
and enables atomic resolution structure prediction.
It was released with the paper Language models of protein sequences at the scale of evolution enable accurate
structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie,
Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido and Alexander Rives.
Also introduced in this paper was ESMFold. It uses an ESM-2 stem with a head that can predict folded protein
structures with state-of-the-art accuracy. Unlike AlphaFold2,
it relies on the token embeddings from the large pre-trained protein language model stem and does not perform a multiple
sequence alignment (MSA) step at inference time, which means that ESMFold checkpoints are fully "standalone" -
they do not require a database of known protein sequences and structures with associated external query tools
to make predictions, and are much faster as a result.
The abstract from
"Biological structure and function emerge from scaling unsupervised learning to 250
million protein sequences" is
In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised
learning has led to major advances in representation learning and statistical generation. In the life sciences, the
anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling
at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To
this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250
million protein sequences spanning evolutionary diversity. The resulting model contains information about biological
properties in its representations. The representations are learned from sequence data alone. The learned representation
space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to
remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and
can be identified by linear projections. Representation learning produces features that generalize across a range of
applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and
improving state-of-the-art features for long-range contact prediction.
The abstract from
"Language models of protein sequences at the scale of evolution enable accurate structure prediction" is
Large language models have recently been shown to develop emergent capabilities with scale, going beyond
simple pattern matching to perform higher level reasoning and generate lifelike images and text. While
language models trained on protein sequences have been studied at a smaller scale, little is known about
what they learn about biology as they are scaled up. In this work we train models up to 15 billion parameters,
the largest language models of proteins to be evaluated to date. We find that as models are scaled they learn
information enabling the prediction of the three-dimensional structure of a protein at the resolution of
individual atoms. We present ESMFold for high accuracy end-to-end atomic level structure prediction directly
from the individual sequence of a protein. ESMFold has similar accuracy to AlphaFold2 and RoseTTAFold for
sequences with low perplexity that are well understood by the language model. ESMFold inference is an
order of magnitude faster than AlphaFold2, enabling exploration of the structural space of metagenomic
proteins in practical timescales.
The original code can be found here and was developed by the Fundamental AI Research team at Meta AI.
ESM-1b, ESM-1v and ESM-2 were contributed to huggingface by jasonliu
and Matt.
ESMFold was contributed to huggingface by Matt and
Sylvain, with a big thank you to Nikita Smetanin, Roshan Rao and Tom Sercu for their
help throughout the process!
Usage tips |
ESM models are trained with a masked language modeling (MLM) objective.
The HuggingFace port of ESMFold uses portions of the openfold library. The openfold library is licensed under the Apache License 2.0.
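Because the checkpoints are trained with MLM, they can be used directly for masked amino-acid prediction. Below is a minimal sketch using the fill-mask pipeline; the small facebook/esm2_t6_8M_UR50D checkpoint and the example protein sequence are illustrative assumptions, not taken from this page.

```python
from transformers import pipeline

# ESM tokenizers use <mask> as the mask token; the input is a plain amino-acid string.
unmasker = pipeline("fill-mask", model="facebook/esm2_t6_8M_UR50D")  # assumed small ESM-2 checkpoint
predictions = unmasker("MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG")
print(predictions[0]["token_str"], predictions[0]["score"])
```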
Resources
Text classification task guide
Token classification task guide
Masked language modeling task guide |
EsmConfig
[[autodoc]] EsmConfig
- all
EsmTokenizer
[[autodoc]] EsmTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary |
EsmModel
[[autodoc]] EsmModel
- forward
EsmForMaskedLM
[[autodoc]] EsmForMaskedLM
- forward
EsmForSequenceClassification
[[autodoc]] EsmForSequenceClassification
- forward
EsmForTokenClassification
[[autodoc]] EsmForTokenClassification
- forward
EsmForProteinFolding
[[autodoc]] EsmForProteinFolding
- forward |
TFEsmModel
[[autodoc]] TFEsmModel
- call
TFEsmForMaskedLM
[[autodoc]] TFEsmForMaskedLM
- call
TFEsmForSequenceClassification
[[autodoc]] TFEsmForSequenceClassification
- call
TFEsmForTokenClassification
[[autodoc]] TFEsmForTokenClassification
- call |
Pyramid Vision Transformer (PVT)
Overview
The PVT model was proposed in
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. The PVT is a type of
vision transformer that utilizes a pyramid structure to make it an effective backbone for dense prediction tasks. Specifically
it allows for more fine-grained inputs (4 x 4 pixels per patch) to be used, while simultaneously shrinking the sequence length
of the Transformer as it deepens - reducing the computational cost. Additionally, a spatial-reduction attention (SRA) layer
is used to further reduce the resource consumption when learning high-resolution features.
The abstract from the paper is the following:
Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a
simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Vision
Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer
(PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several
merits compared to current state of the arts. Different from ViT that typically yields low resolution outputs and
incurs high computational and memory costs, PVT not only can be trained on dense partitions of an image to achieve high
output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the
computations of large feature maps. PVT inherits the advantages of both CNN and Transformer, making it a unified
backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones.
We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including
object detection, instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet
achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinaNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope
that PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
This model was contributed by Xrenya. The original code can be found here. |
PVTv1 on ImageNet-1K |
| Model variant |Size |Acc@1|Params (M)|
|--------------------|:-------:|:-------:|:------------:|
| PVT-Tiny | 224 | 75.1 | 13.2 |
| PVT-Small | 224 | 79.8 | 24.5 |
| PVT-Medium | 224 | 81.2 | 44.2 |
| PVT-Large | 224 | 81.7 | 61.4 |
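The sketch below shows one way to run image classification with PVT; the Zetatech/pvt-tiny-224 checkpoint name and the COCO image URL are assumptions used only for illustration.

```python
import requests
from PIL import Image
from transformers import PvtImageProcessor, PvtForImageClassification

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = PvtImageProcessor.from_pretrained("Zetatech/pvt-tiny-224")  # assumed checkpoint name
model = PvtForImageClassification.from_pretrained("Zetatech/pvt-tiny-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```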
PvtConfig
[[autodoc]] PvtConfig
PvtImageProcessor
[[autodoc]] PvtImageProcessor
- preprocess
PvtForImageClassification
[[autodoc]] PvtForImageClassification
- forward
PvtModel
[[autodoc]] PvtModel
- forward |
Reformer |
Overview
The Reformer model was proposed in the paper Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
The abstract from the paper is the following:
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can
be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of
Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its
complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual
layers instead of the standard residuals, which allows storing activations only once in the training process instead of
N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models
while being much more memory-efficient and much faster on long sequences.
This model was contributed by patrickvonplaten. The Authors' code can be
found here.
Usage tips |
Reformer does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035.
Use Axial position encoding (see below for more details). It’s a mechanism to avoid having a huge positional encoding matrix (when the sequence length is very big) by factorizing it into smaller matrices.
Replace traditional attention by LSH (local-sensitive hashing) attention (see below for more details). It’s a technique to avoid computing the full product query-key in the attention layers.
Avoid storing the intermediate results of each layer by using reversible transformer layers to obtain them during the backward pass (subtracting the residuals from the input of the next layer gives them back) or recomputing them for results inside a given layer (less efficient than storing them but saves memory).
Compute the feedforward operations by chunks and not on the whole batch. |
Axial Positional Encodings
Axial Positional Encodings were first implemented in Google's trax library
and developed by the authors of this model's paper. In models that process very long input sequences, the
conventional position id encodings store an embedding vector of size \(d\) (the config.hidden_size) for
every position \(i \in 1, \ldots, n_s\), with \(n_s\) being config.max_embedding_size. This means that having
a sequence length of \(n_s = 2^{19} \approx 0.5M\) and a config.hidden_size of \(d = 2^{10} \approx 1000\)
would result in a position encoding matrix:
$$X_{i,j}, \text{ with } i \in \left[1,\ldots, d\right] \text{ and } j \in \left[1,\ldots, n_s\right]$$
which alone has over 500M parameters to store. Axial positional encodings factorize \(X_{i,j}\) into two matrices:
$$X^{1}_{i,j}, \text{ with } i \in \left[1,\ldots, d^1\right] \text{ and } j \in \left[1,\ldots, n_s^1\right]$$
and
$$X^{2}_{i,j}, \text{ with } i \in \left[1,\ldots, d^2\right] \text{ and } j \in \left[1,\ldots, n_s^2\right]$$
with:
$$d = d^1 + d^2 \text{ and } n_s = n_s^1 \times n_s^2 .$$
Therefore the following holds:
$$X_{i,j} = \begin{cases}
X^{1}_{i, k}, & \text{if }\ i < d^1 \text{ with } k = j \mod n_s^1 \\
X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor\frac{j}{n_s^1}\rfloor
\end{cases}$$
Intuitively, this means that a position embedding vector \(x_j \in \mathbb{R}^{d}\) is now the composition of two
factorized embedding vectors: \(x^1_{k, l} + x^2_{l, k}\), where the config.max_embedding_size dimension
\(j\) is factorized into \(k \text{ and } l\). This design ensures that each position embedding vector
\(x_j\) is unique.
Using the above example again, axial position encoding with \(d^1 = 2^9, d^2 = 2^9, n_s^1 = 2^9, n_s^2 = 2^{10}\)
can drastically reduce the number of parameters from 500 000 000 to \(2^{18} + 2^{19} \approx 780 000\), a reduction of more than 99% in the memory used for the positional encodings.
In practice, the parameter config.axial_pos_embds_dim is set to a tuple \((d^1, d^2)\) whose sum has to be
equal to config.hidden_size, and config.axial_pos_shape is set to a tuple \((n_s^1, n_s^2)\) whose
product has to be equal to config.max_embedding_size, which during training has to be equal to the sequence
length of the input_ids.
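As an illustration of these constraints, here is a minimal configuration sketch; the concrete numbers are made up for the example, and only the constraints matter (the dimension tuple sums to hidden_size, the shape tuple's product equals the training sequence length).

```python
from transformers import ReformerConfig, ReformerModel

# 64 + 192 == hidden_size and 128 * 512 == 65536, the sequence length used during training
config = ReformerConfig(
    hidden_size=256,
    axial_pos_embds=True,
    axial_pos_embds_dim=(64, 192),  # (d^1, d^2), must sum to hidden_size
    axial_pos_shape=(128, 512),     # (n_s^1, n_s^2), product must equal the sequence length
    max_position_embeddings=65536,
)
model = ReformerModel(config)
```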
LSH Self Attention
In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key
query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in
Practical and Optimal LSH for Angular Distance to assign each of the tied key
query embedding vectors to one of config.num_buckets possible buckets. The premise is that the more "similar"
key query embedding vectors (in terms of cosine similarity) are to each other, the more likely they are assigned to
the same bucket.
The accuracy of the LSH mechanism can be improved by increasing config.num_hashes or directly the argument
num_hashes of the forward function so that the output of the LSH self attention better approximates the output
of the "normal" full self attention. The buckets are then sorted and chunked into query key embedding vector chunks
each of length config.lsh_chunk_length. For each chunk, the query embedding vectors attend to its key vectors
(which are tied to themselves) and to the key embedding vectors of config.lsh_num_chunks_before previous
neighboring chunks and config.lsh_num_chunks_after following neighboring chunks.
For more information, see the original Paper or this great blog post.
Note that config.num_buckets can also be factorized into a list \((n_{\text{buckets}}^1,
n_{\text{buckets}}^2)\). This way, instead of assigning the query key embedding vectors to one of \((1,\ldots,
n_{\text{buckets}})\), they are assigned to one of \((1, 1), \ldots, (n_{\text{buckets}}^1, 1), \ldots, (1,
n_{\text{buckets}}^2), \ldots, (n_{\text{buckets}}^1, n_{\text{buckets}}^2)\). This is crucial for very long sequences to
save memory.
When training a model from scratch, it is recommended to leave config.num_buckets=None, so that depending on the
sequence length a good value for num_buckets is calculated on the fly. This value will then automatically be
saved in the config and should be reused for inference.
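A short configuration sketch of these knobs; the values are illustrative only, and num_buckets is left unset so it is computed on the fly as described above.

```python
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    attn_layers=["lsh", "local", "lsh", "local"],  # alternate LSH and local attention layers
    num_hashes=4,       # more hashing rounds -> closer approximation of full attention
    num_buckets=None,   # computed on the fly and stored in the config for reuse at inference
)
model = ReformerModel(config)
```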
Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from
\(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory
and time bottleneck in a transformer model, with \(n_s\) being the sequence length.
Local Self Attention
Local self attention is essentially a "normal" self attention layer with key, query and value projections, but is
chunked so that in each chunk of length config.local_chunk_length the query embedding vectors only attends to
the key embedding vectors in its chunk and to the key embedding vectors of config.local_num_chunks_before
previous neighboring chunks and config.local_num_chunks_after following neighboring chunks.
Using Local self attention, the memory and time complexity of the query-key matmul operation can be reduced from
\(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory
and time bottleneck in a transformer model, with \(n_s\) being the sequence length.
Training
During training, we must ensure that the sequence length is set to a value that can be divided by the least common
multiple of config.lsh_chunk_length and config.local_chunk_length and that the parameters of the Axial
Positional Encodings are correctly set as described above. Reformer is very memory efficient so that the model can
easily be trained on sequences as long as 64000 tokens.
For training, the [ReformerModelWithLMHead] should be used as follows:
python
input_ids = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
loss = model(input_ids, labels=input_ids)[0]
Resources |
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide |
ReformerConfig
[[autodoc]] ReformerConfig
ReformerTokenizer
[[autodoc]] ReformerTokenizer
- save_vocabulary
ReformerTokenizerFast
[[autodoc]] ReformerTokenizerFast
ReformerModel
[[autodoc]] ReformerModel
- forward
ReformerModelWithLMHead
[[autodoc]] ReformerModelWithLMHead
- forward
ReformerForMaskedLM
[[autodoc]] ReformerForMaskedLM
- forward
ReformerForSequenceClassification
[[autodoc]] ReformerForSequenceClassification
- forward
ReformerForQuestionAnswering
[[autodoc]] ReformerForQuestionAnswering
- forward |
CamemBERT
Overview
The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la
Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook's RoBERTa model released in 2019. It is a model
trained on 138GB of French text.
The abstract from the paper is the following:
Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available
models have either been trained on English data or on the concatenation of data in multiple languages. This makes
practical use of such models --in all languages except English-- very limited. Aiming to address this issue for French,
we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the
performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging,
dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art
for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and
downstream applications for French NLP.
This model was contributed by the ALMAnaCH team (Inria). The original code can be found here. |
This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well
as the information relative to the inputs and outputs.
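Since the implementation mirrors RoBERTa, masked-word prediction works out of the box; a minimal sketch, assuming the almanach/camembert-base checkpoint (CamemBERT uses <mask> as its mask token):

```python
from transformers import pipeline

camembert_fill_mask = pipeline("fill-mask", model="almanach/camembert-base")  # assumed checkpoint name
results = camembert_fill_mask("Le camembert est <mask> :)")
print([r["token_str"] for r in results])
```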
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
CamembertConfig
[[autodoc]] CamembertConfig
CamembertTokenizer
[[autodoc]] CamembertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
CamembertTokenizerFast
[[autodoc]] CamembertTokenizerFast |
CamembertModel
[[autodoc]] CamembertModel
CamembertForCausalLM
[[autodoc]] CamembertForCausalLM
CamembertForMaskedLM
[[autodoc]] CamembertForMaskedLM
CamembertForSequenceClassification
[[autodoc]] CamembertForSequenceClassification
CamembertForMultipleChoice
[[autodoc]] CamembertForMultipleChoice
CamembertForTokenClassification
[[autodoc]] CamembertForTokenClassification
CamembertForQuestionAnswering
[[autodoc]] CamembertForQuestionAnswering |
TFCamembertModel
[[autodoc]] TFCamembertModel
TFCamembertForCausalLM
[[autodoc]] TFCamembertForCausalLM
TFCamembertForMaskedLM
[[autodoc]] TFCamembertForMaskedLM
TFCamembertForSequenceClassification
[[autodoc]] TFCamembertForSequenceClassification
TFCamembertForMultipleChoice
[[autodoc]] TFCamembertForMultipleChoice
TFCamembertForTokenClassification
[[autodoc]] TFCamembertForTokenClassification
TFCamembertForQuestionAnswering
[[autodoc]] TFCamembertForQuestionAnswering |
GPT-NeoX-Japanese
Overview
We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of https://github.com/EleutherAI/gpt-neox.
Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts.
To address this distinct structure of the Japanese language, we use a special sub-word tokenizer. We are very grateful to tanreinama for open-sourcing this incredibly helpful tokenizer.
Following the recommendations from Google's research on PaLM, we have removed bias parameters from transformer blocks, achieving better model performance. Please refer to this article for details.
Development of the model was led by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori from ABEJA, Inc.. For more information on this model-building activity, please refer here (ja).
Usage example
The generate() method can be used to generate text with the GPT-NeoX Japanese model.
thon |
from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer
model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b")
tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
prompt = "人とAIが協調するためには、"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0]
print(gen_text)
人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。 |
Resources
Causal language modeling task guide
GPTNeoXJapaneseConfig
[[autodoc]] GPTNeoXJapaneseConfig
GPTNeoXJapaneseTokenizer
[[autodoc]] GPTNeoXJapaneseTokenizer
GPTNeoXJapaneseModel
[[autodoc]] GPTNeoXJapaneseModel
- forward
GPTNeoXJapaneseForCausalLM
[[autodoc]] GPTNeoXJapaneseForCausalLM
- forward |
MRA
Overview
The MRA model was proposed in Multi Resolution Analysis (MRA) for Approximate Self-Attention by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.
This model was contributed by novice03.
The original code can be found here.
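A minimal usage sketch; the uw-madison/mra-base-512-4 checkpoint name is an assumption based on the contributor's released models, not something stated on this page.

```python
import torch
from transformers import AutoTokenizer, MraModel

tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")  # assumed checkpoint name
model = MraModel.from_pretrained("uw-madison/mra-base-512-4")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```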
MraConfig
[[autodoc]] MraConfig
MraModel
[[autodoc]] MraModel
- forward
MraForMaskedLM
[[autodoc]] MraForMaskedLM
- forward
MraForSequenceClassification
[[autodoc]] MraForSequenceClassification
- forward
MraForMultipleChoice
[[autodoc]] MraForMultipleChoice
- forward
MraForTokenClassification
[[autodoc]] MraForTokenClassification
- forward
MraForQuestionAnswering
[[autodoc]] MraForQuestionAnswering
- forward |
Pop2Piano |
Overview
The Pop2Piano model was proposed in Pop2Piano : Pop Audio-based Piano Cover Generation by Jongho Choi and Kyogu Lee.
Piano covers of pop music are widely enjoyed, but generating them from music is not a trivial task. It requires great
expertise with playing piano as well as knowing different characteristics and melodies of a song. With Pop2Piano you
can directly generate a cover from a song's audio waveform. It is the first model to directly generate a piano cover
from pop audio without melody and chord extraction modules.
Pop2Piano is an encoder-decoder Transformer model based on T5. The input audio
is transformed to its waveform and passed to the encoder, which transforms it to a latent representation. The decoder
uses these latent representations to generate token ids in an autoregressive way. Each token id corresponds to one of four
different token types: time, velocity, note and 'special'. The token ids are then decoded to their equivalent MIDI file.
The abstract from the paper is the following:
Piano covers of pop music are enjoyed by many people. However, the
task of automatically generating piano covers of pop music is still
understudied. This is partly due to the lack of synchronized
{Pop, Piano Cover} data pairs, which made it challenging to apply
the latest data-intensive deep learning-based methods. To leverage
the power of the data-driven approach, we make a large amount of
paired and synchronized {Pop, Piano Cover} data using an automated
pipeline. In this paper, we present Pop2Piano, a Transformer network
that generates piano covers given waveforms of pop music. To the best
of our knowledge, this is the first model to generate a piano cover
directly from pop audio without using melody and chord extraction
modules. We show that Pop2Piano, trained with our dataset, is capable
of producing plausible piano covers.
This model was contributed by Susnato Dhar.
The original code can be found here.
Usage tips |
To use Pop2Piano, you will need to install the 🤗 Transformers library, as well as the following third party modules: |
pip install pretty-midi==0.2.9 essentia==2.1b6.dev1034 librosa scipy
Please note that you may need to restart your runtime after installation.
Pop2Piano is an Encoder-Decoder based model like T5.
Pop2Piano can be used to generate midi-audio files for a given audio sequence.
Choosing different composers in Pop2PianoForConditionalGeneration.generate() can lead to a variety of different results.
Setting the sampling rate to 44.1 kHz when loading the audio file can give good performance.
Though Pop2Piano was mainly trained on Korean Pop music, it also does pretty well on other Western Pop or Hip Hop songs. |
Examples
Example using HuggingFace Dataset:
thon |
from datasets import load_dataset
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")
ds = load_dataset("sweetcocoa/pop2piano_ci", split="test")
inputs = processor(
audio=ds["audio"][0]["array"], sampling_rate=ds["audio"][0]["sampling_rate"], return_tensors="pt"
)
model_output = model.generate(input_features=inputs["input_features"], composer="composer1")
tokenizer_output = processor.batch_decode(
token_ids=model_output, feature_extractor_output=inputs
)["pretty_midi_objects"][0]
tokenizer_output.write("./Outputs/midi_output.mid") |
Example using your own audio file:
thon |
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
audio, sr = librosa.load("<your_audio_file>.wav", sr=44100)  # feel free to change the sr to a suitable value.
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")
inputs = processor(audio=audio, sampling_rate=sr, return_tensors="pt")
model_output = model.generate(input_features=inputs["input_features"], composer="composer1")
tokenizer_output = processor.batch_decode(
token_ids=model_output, feature_extractor_output=inputs
)["pretty_midi_objects"][0]
tokenizer_output.write("./Outputs/midi_output.mid") |
Example of processing multiple audio files in batch:
thon |
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
# feel free to change the sr to a suitable value
audio1, sr1 = librosa.load("<your_audio_file1>.wav", sr=44100)
audio2, sr2 = librosa.load("<your_audio_file2>.wav", sr=44100)
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")
inputs = processor(audio=[audio1, audio2], sampling_rate=[sr1, sr2], return_attention_mask=True, return_tensors="pt")
# since we are now generating in a batch of 2 audios, we must pass the attention_mask
model_output = model.generate(
input_features=inputs["input_features"],
attention_mask=inputs["attention_mask"],
composer="composer1",
)
tokenizer_output = processor.batch_decode(
token_ids=model_output, feature_extractor_output=inputs
)["pretty_midi_objects"]
# since we now have 2 generated MIDI files
tokenizer_output[0].write("./Outputs/midi_output1.mid")
tokenizer_output[1].write("./Outputs/midi_output2.mid") |
Example of processing multiple audio files in batch (Using Pop2PianoFeatureExtractor and Pop2PianoTokenizer):
thon |
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoFeatureExtractor, Pop2PianoTokenizer
# feel free to change the sr to a suitable value
audio1, sr1 = librosa.load("<your_audio_file1>.wav", sr=44100)
audio2, sr2 = librosa.load("<your_audio_file2>.wav", sr=44100)
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
feature_extractor = Pop2PianoFeatureExtractor.from_pretrained("sweetcocoa/pop2piano")
tokenizer = Pop2PianoTokenizer.from_pretrained("sweetcocoa/pop2piano")
inputs = feature_extractor(
audio=[audio1, audio2],
sampling_rate=[sr1, sr2],
return_attention_mask=True,
return_tensors="pt",
)
# since we are now generating in a batch of 2 audios, we must pass the attention_mask
model_output = model.generate(
input_features=inputs["input_features"],
attention_mask=inputs["attention_mask"],
composer="composer1",
)
tokenizer_output = tokenizer.batch_decode(
token_ids=model_output, feature_extractor_output=inputs
)["pretty_midi_objects"]
# since we now have 2 generated MIDI files
tokenizer_output[0].write("./Outputs/midi_output1.mid")
tokenizer_output[1].write("./Outputs/midi_output2.mid") |
Pop2PianoConfig
[[autodoc]] Pop2PianoConfig
Pop2PianoFeatureExtractor
[[autodoc]] Pop2PianoFeatureExtractor
- call
Pop2PianoForConditionalGeneration
[[autodoc]] Pop2PianoForConditionalGeneration
- forward
- generate
Pop2PianoTokenizer
[[autodoc]] Pop2PianoTokenizer
- call
Pop2PianoProcessor
[[autodoc]] Pop2PianoProcessor
- call |
ConvNeXt V2
Overview
The ConvNeXt V2 model was proposed in ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of ConvNeXT.
The abstract from the paper is the following:
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data. |
ConvNeXt V2 architecture. Taken from the original paper.
This model was contributed by adirik. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2.
[ConvNextV2ForImageClassification] is supported by this example script and notebook. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
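A short classification sketch; the facebook/convnextv2-tiny-1k-224 checkpoint name and the COCO image URL are assumptions used for illustration.

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")  # assumed checkpoint name
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```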
ConvNextV2Config
[[autodoc]] ConvNextV2Config
ConvNextV2Model
[[autodoc]] ConvNextV2Model
- forward
ConvNextV2ForImageClassification
[[autodoc]] ConvNextV2ForImageClassification
- forward
TFConvNextV2Model
[[autodoc]] TFConvNextV2Model
- call
TFConvNextV2ForImageClassification
[[autodoc]] TFConvNextV2ForImageClassification
- call |
Donut
Overview
The Donut model was proposed in OCR-free Document Understanding Transformer by
Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding
tasks such as document image classification, form understanding and visual question answering.
The abstract from the paper is the following:
Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains. |
Donut high-level overview. Taken from the original paper.
This model was contributed by nielsr. The original code can be found
here.
Usage tips
The quickest way to get started with Donut is by checking the tutorial
notebooks, which show how to use the model
at inference time as well as fine-tuning on custom data.
Donut is always used within the VisionEncoderDecoder framework. |
Inference examples
Donut's [VisionEncoderDecoder] model accepts images as input and makes use of
[~generation.GenerationMixin.generate] to autoregressively generate text given the input image.
The [DonutImageProcessor] class is responsible for preprocessing the input image and
[XLMRobertaTokenizer/XLMRobertaTokenizerFast] decodes the generated target tokens to the target string. The
[DonutProcessor] wraps [DonutImageProcessor] and [XLMRobertaTokenizer/XLMRobertaTokenizerFast]
into a single instance to both extract the input features and decode the predicted token ids. |
Step-by-step Document Image Classification |
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device) # doctest: +IGNORE_RESULT
# load document image
dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[1]["image"]
# prepare decoder inputs
task_prompt = "<s_rvlcdip>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(
pixel_values.to(device),
decoder_input_ids=decoder_input_ids.to(device),
max_length=model.decoder.config.max_position_embeddings,
pad_token_id=processor.tokenizer.pad_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
use_cache=True,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
return_dict_in_generate=True,
)
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
print(processor.token2json(sequence))
{'class': 'advertisement'} |
Step-by-step Document Parsing |
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device) # doctest: +IGNORE_RESULT
# load document image
dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[2]["image"]
# prepare decoder inputs
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(
pixel_values.to(device),
decoder_input_ids=decoder_input_ids.to(device),
max_length=model.decoder.config.max_position_embeddings,
pad_token_id=processor.tokenizer.pad_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
use_cache=True,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
return_dict_in_generate=True,
)
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
print(processor.token2json(sequence))
{'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}} |
Step-by-step Document Visual Question Answering (DocVQA) |
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device) # doctest: +IGNORE_RESULT
# load document image from the DocVQA dataset
dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]
# prepare decoder inputs
task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>"
question = "When is the coffee break?"
prompt = task_prompt.replace("{user_input}", question)
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(
pixel_values.to(device),
decoder_input_ids=decoder_input_ids.to(device),
max_length=model.decoder.config.max_position_embeddings,
pad_token_id=processor.tokenizer.pad_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
use_cache=True,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
return_dict_in_generate=True,
)
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
print(processor.token2json(sequence))
{'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'} |
See the model hub to look for Donut checkpoints.
Training
We refer to the tutorial notebooks.
DonutSwinConfig
[[autodoc]] DonutSwinConfig
DonutImageProcessor
[[autodoc]] DonutImageProcessor
- preprocess
DonutFeatureExtractor
[[autodoc]] DonutFeatureExtractor
- call
DonutProcessor
[[autodoc]] DonutProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
DonutSwinModel
[[autodoc]] DonutSwinModel
- forward |
mLUKE
Overview
The mLUKE model was proposed in mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It is a multilingual extension
of the LUKE model, trained on the basis of XLM-RoBERTa.
It adds entity embeddings, which help improve performance on various downstream tasks
involving reasoning about entities, such as named entity recognition, extractive question answering, relation
classification, and cloze-style knowledge completion.
The abstract from the paper is the following:
Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual
alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining
and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging
entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages
with entity representations and show the model consistently outperforms word-based pretrained models in various
cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity
representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a
multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual
knowledge more likely than using only word representations.
This model was contributed by ryo0634. The original code can be found here.
Usage tips
One can directly plug in the weights of mLUKE into a LUKE model, like so:
thon
from transformers import LukeModel
model = LukeModel.from_pretrained("studio-ousia/mluke-base") |
Note that mLUKE has its own tokenizer, [MLukeTokenizer]. You can initialize it as follows:
thon
from transformers import MLukeTokenizer
tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
As mLUKE's architecture is equivalent to that of LUKE, one can refer to LUKE's documentation page for all
tips, code examples and notebooks.
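As a brief illustration of the entity-aware inputs, the sketch below marks two character spans as entities; the example sentence and spans are made up, and the model simply returns entity hidden states for them.

```python
from transformers import LukeModel, MLukeTokenizer

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeModel.from_pretrained("studio-ousia/mluke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character spans for "Beyoncé" and "Los Angeles"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape, outputs.entity_last_hidden_state.shape)
```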
MLukeTokenizer
[[autodoc]] MLukeTokenizer
- call
- save_vocabulary |
QDQBERT
Overview
The QDQBERT model can be referenced in Integer Quantization for Deep Learning Inference: Principles and Empirical
Evaluation by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius
Micikevicius.
The abstract from the paper is the following:
Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by
taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of
quantization parameters and evaluate their choices on a wide range of neural network models for different application
domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration
by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is
able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are
more difficult to quantize, such as MobileNets and BERT-large.
This model was contributed by shangz.
Usage tips |
The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to (i) linear layer
inputs and weights, (ii) matmul inputs, and (iii) residual add inputs of the BERT model.
QDQBERT requires the PyTorch Quantization Toolkit. To install it, run pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com
QDQBERT model can be loaded from any checkpoint of HuggingFace BERT model (for example google-bert/bert-base-uncased), and
perform Quantization Aware Training/Post Training Quantization.
A complete example of using the QDQBERT model to perform Quantization Aware Training and Post Training Quantization for
the SQuAD task can be found at transformers/examples/research_projects/quantization-qdqbert/. |
Set default quantizers
QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to BERT by
TensorQuantizer in Pytorch Quantization Toolkit. TensorQuantizer is the module
for quantizing tensors, with QuantDescriptor defining how the tensor should be quantized. Refer to Pytorch
Quantization Toolkit userguide for more details.
Before creating QDQBERT model, one has to set the default QuantDescriptor defining default tensor quantizers.
Example:
thon |
import pytorch_quantization.nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor
# The default tensor quantizer is set to use Max calibration method
input_desc = QuantDescriptor(num_bits=8, calib_method="max")
# The default tensor quantizer is set to be per-channel quantization for weights
weight_desc = QuantDescriptor(num_bits=8, axis=((0,)))
quant_nn.QuantLinear.set_default_quant_desc_input(input_desc)
quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc) |
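Once the default quantizers are set, a QDQBERT model can be created from a regular BERT checkpoint as mentioned in the tips above; a minimal sketch (the task-specific head is chosen arbitrarily here):

```python
from transformers import QDQBertForSequenceClassification

# BERT weights are loaded into the quantization-aware QDQBERT architecture;
# the default quantizers configured above are picked up automatically.
model = QDQBertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
```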
Calibration
Calibration is the process of passing data samples to the quantizer and deciding the best scaling factors for
tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model:
thon
# Find the TensorQuantizer and enable calibration
for name, module in model.named_modules():
    if name.endswith("_input_quantizer"):
        module.enable_calib()
        module.disable_quant()  # Use full precision data to calibrate

# Feeding data samples
model(x)

# Finalize calibration
for name, module in model.named_modules():
    if name.endswith("_input_quantizer"):
        module.load_calib_amax()
        module.enable_quant()

# If running on GPU, it needs to call .cuda() again because new tensors will be created by the calibration process
model.cuda()

# Keep running the quantized model
Export to ONNX
The goal of exporting to ONNX is to deploy inference by TensorRT. Fake
quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting the static member of
TensorQuantizer to use PyTorch's own fake quantization functions, the fake quantized model can be exported to ONNX; follow
the instructions in torch.onnx. Example:
thon
from pytorch_quantization.nn import TensorQuantizer
TensorQuantizer.use_fb_fake_quant = True
# Load the calibrated model
...

# ONNX export
torch.onnx.export(...)
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
QDQBertConfig
[[autodoc]] QDQBertConfig
QDQBertModel
[[autodoc]] QDQBertModel
- forward
QDQBertLMHeadModel
[[autodoc]] QDQBertLMHeadModel
- forward
QDQBertForMaskedLM
[[autodoc]] QDQBertForMaskedLM
- forward
QDQBertForSequenceClassification
[[autodoc]] QDQBertForSequenceClassification
- forward
QDQBertForNextSentencePrediction
[[autodoc]] QDQBertForNextSentencePrediction
- forward
QDQBertForMultipleChoice
[[autodoc]] QDQBertForMultipleChoice
- forward
QDQBertForTokenClassification
[[autodoc]] QDQBertForTokenClassification
- forward
QDQBertForQuestionAnswering
[[autodoc]] QDQBertForQuestionAnswering
- forward |
BertGeneration
Overview
The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
[EncoderDecoderModel] as proposed in Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
The abstract from the paper is the following:
Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By
warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple
benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language
Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT,
GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both
encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation,
Text Summarization, Sentence Splitting, and Sentence Fusion.
This model was contributed by patrickvonplaten. The original code can be
found here.
Usage examples and tips
The model can be used in combination with the [EncoderDecoderModel] to leverage two pretrained BERT checkpoints for
subsequent fine-tuning:
thon |
from transformers import BertGenerationEncoder, BertGenerationDecoder, EncoderDecoderModel, BertTokenizer

# leverage checkpoints for Bert2Bert model
# use BERT's cls token as BOS token and sep token as EOS token
encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102)
# add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
decoder = BertGenerationDecoder.from_pretrained(
    "google-bert/bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
)
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)
# create tokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")
input_ids = tokenizer(
"This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
).input_ids
labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
# train
loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward() |
Pretrained [EncoderDecoderModel] are also directly available in the model hub, e.g.:
thon |
from transformers import AutoTokenizer, EncoderDecoderModel

# instantiate sentence fusion model
sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
input_ids = tokenizer(
"This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = sentence_fuser.generate(input_ids)
print(tokenizer.decode(outputs[0]))
Tips:
[BertGenerationEncoder] and [BertGenerationDecoder] should be used in
combination with [EncoderDecoder].
For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input.
Therefore, no EOS token should be added to the end of the input. |
BertGenerationConfig
[[autodoc]] BertGenerationConfig
BertGenerationTokenizer
[[autodoc]] BertGenerationTokenizer
- save_vocabulary
BertGenerationEncoder
[[autodoc]] BertGenerationEncoder
- forward
BertGenerationDecoder
[[autodoc]] BertGenerationDecoder
- forward |
Transformer XL |
This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to pickle.load.
We recommend switching to more recent models for improved security.
In case you would still like to use TransfoXL in your experiments, we recommend using the Hub checkpoint with a specific revision to ensure you are downloading safe files from the Hub.
You will need to set the environment variable TRUST_REMOTE_CODE to True in order to allow the
usage of pickle.load():
thon
import os
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
os.environ["TRUST_REMOTE_CODE"] = "True"
checkpoint = 'transfo-xl/transfo-xl-wt103'
revision = '40a186da79458c9f9de846edfaea79c412137f97'
tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision) |
If you run into any issues running this model, please reinstall the last version that supported this model: v4.35.0.
You can do so by running the following command: pip install -U transformers==4.35.0. |
Overview
The Transformer-XL model was proposed in Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan
Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoidal) embeddings which can
reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax
inputs and outputs (tied).
The abstract from the paper is the following:
Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the
setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a
novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the
context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450%
longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+
times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of
bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn
Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably
coherent, novel text articles with thousands of tokens.
This model was contributed by thomwolf. The original code can be found here.
Usage tips |
Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The
original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
Transformer-XL is one of the few models that has no sequence length limit.
Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNN with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model.
Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments.
This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results in the current input and the current hidden state at a given position) and needs to make some adjustments in the way attention scores are computed. |
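The following is a minimal sketch of how the recurrence can be exercised through the mems argument of the deprecated classes. The segment length and example sentence are purely illustrative, and the snippet assumes an environment where the deprecated Transformer-XL code can still be loaded (e.g. transformers v4.35.0 with the Hub revision and TRUST_REMOTE_CODE setup from the deprecation notice above).
```python
import os

import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

# Setup from the deprecation notice above
os.environ["TRUST_REMOTE_CODE"] = "True"
checkpoint = "transfo-xl/transfo-xl-wt103"
revision = "40a186da79458c9f9de846edfaea79c412137f97"

tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)

text = "Transformer-XL reuses hidden states from previous segments as an extended context."
input_ids = tokenizer(text, return_tensors="pt")["input_ids"]

# Split the input into two consecutive segments (segment length is illustrative)
seg1, seg2 = input_ids[:, :8], input_ids[:, 8:]

with torch.no_grad():
    out1 = model(seg1)                  # first segment: no memory yet
    out2 = model(seg2, mems=out1.mems)  # reuse the cached hidden states ("mems")

print(len(out2.mems))  # one memory tensor per layer
```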
TransformerXL does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035
Resources
Text classification task guide
Causal language modeling task guide |
TransfoXLConfig
[[autodoc]] TransfoXLConfig
TransfoXLTokenizer
[[autodoc]] TransfoXLTokenizer
- save_vocabulary
TransfoXL specific outputs
[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput
[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput
[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput
[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput |
TransfoXLModel
[[autodoc]] TransfoXLModel
- forward
TransfoXLLMHeadModel
[[autodoc]] TransfoXLLMHeadModel
- forward
TransfoXLForSequenceClassification
[[autodoc]] TransfoXLForSequenceClassification
- forward
TFTransfoXLModel
[[autodoc]] TFTransfoXLModel
- call
TFTransfoXLLMHeadModel
[[autodoc]] TFTransfoXLLMHeadModel
- call
TFTransfoXLForSequenceClassification
[[autodoc]] TFTransfoXLForSequenceClassification
- call |
Internal Layers
[[autodoc]] AdaptiveEmbedding
[[autodoc]] TFAdaptiveEmbedding |
DETA
Overview
The DETA model was proposed in NMS Strikes Back by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
DETA (short for Detection Transformers with Assignment) improves Deformable DETR by replacing the one-to-one bipartite Hungarian matching loss
with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP.
The abstract from the paper is the following:
Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum supervision (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture. |
DETA overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA.
Demo notebooks for DETA can be found here.
See also: Object detection task guide |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
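As a quick orientation, here is a minimal inference sketch that combines the image processor and detection model documented below, including the post_process_object_detection step. The checkpoint name (jozhang97/deta-swin-large) and the confidence threshold are illustrative assumptions rather than prescribed values.
```python
import requests
import torch
from PIL import Image
from transformers import DetaForObjectDetection, DetaImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint name is an assumption; any DETA checkpoint on the Hub should work the same way
processor = DetaImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs into (score, label, box) triples above a confidence threshold
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```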
DetaConfig
[[autodoc]] DetaConfig
DetaImageProcessor
[[autodoc]] DetaImageProcessor
- preprocess
- post_process_object_detection
DetaModel
[[autodoc]] DetaModel
- forward
DetaForObjectDetection
[[autodoc]] DetaForObjectDetection
- forward |
Starcoder2
Overview
StarCoder2 is a family of open LLMs for code and comes in 3 different sizes with 3B, 7B and 15B parameters. The flagship StarCoder2-15B model is trained on over 4 trillion tokens and 600+ programming languages from The Stack v2. All models use Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and were trained using the Fill-in-the-Middle objective. The models have been released with the paper StarCoder 2 and The Stack v2: The Next Generation by Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries.
The abstract of the paper is the following: |
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2- 15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder- 33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data. |
License
The models are licensed under the BigCode OpenRAIL-M v1 license agreement.
Usage tips
The StarCoder2 models can be found on the Hugging Face Hub. You can find some examples for inference and fine-tuning in StarCoder2's GitHub repo.
These ready-to-use checkpoints can be downloaded and used via the Hub:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" places the model weights on the available device(s)
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")

prompt = "def print_hello_world():"
# Move the inputs to the same device as the model
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=10, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
"def print_hello_world():\n\treturn 'Hello World!'"
```
Starcoder2Config
[[autodoc]] Starcoder2Config
Starcoder2Model
[[autodoc]] Starcoder2Model
- forward
Starcoder2ForCausalLM
[[autodoc]] Starcoder2ForCausalLM
- forward
Starcoder2ForSequenceClassification
[[autodoc]] Starcoder2ForSequenceClassification
- forward |
ELECTRA |
Overview
The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than
Generators. ELECTRA is a new pretraining approach which trains two
transformer models: the generator and the discriminator. The generator's role is to replace tokens in a sequence, and
is therefore trained as a masked language model. The discriminator, which is the model we're interested in, tries to
identify which tokens were replaced by the generator in the sequence.
The abstract from the paper is the following:
Masked language modeling (MLM) pretraining methods such as BERT corrupt the input by replacing some tokens with [MASK]
and then train a model to reconstruct the original tokens. While they produce good results when transferred to
downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a
more sample-efficient pretraining task called replaced token detection. Instead of masking the input, our approach
corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead
of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that
predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments
demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens
rather than just the small subset that was masked out. As a result, the contextual representations learned by our
approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are
particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained
using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale,
where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when
using the same amount of compute.
This model was contributed by lysandre. The original code can be found here.
Usage tips |
ELECTRA is the pretraining approach, therefore there are nearly no changes made to the underlying model (BERT). The
only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller,
while the hidden size is larger. An additional (linear) projection layer is used to project the embeddings from the
embedding size to the hidden size. When the embedding size is the same as the hidden size, no projection layer is
used.
ELECTRA is a transformer model pretrained with the use of another (small) masked language model. The inputs are corrupted by that language model, which takes an input text that is randomly masked and outputs a text in which ELECTRA has to predict which token is an original and which one has been replaced. Like for GAN training, the small language model is trained for a few steps (but with the original texts as objective, not to fool the ELECTRA model like in a traditional GAN setting) then the ELECTRA model is trained for a few steps.
The ELECTRA checkpoints saved using Google Research's implementation
contain both the generator and the discriminator. The conversion script requires the user to name which model to export
into the correct architecture. Once converted to the HuggingFace format, these checkpoints may be loaded into all
available ELECTRA models, however. This means that, for example, the discriminator may be loaded into the
[ElectraForMaskedLM] model, and the generator may be loaded into the
[ElectraForPreTraining] model (in which case the classification head is randomly initialized, since it
doesn't exist in the generator). A short replaced-token-detection sketch follows these tips. |
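To make the discriminator's role concrete, here is a minimal sketch of replaced-token detection with [ElectraForPreTraining]. The checkpoint name (google/electra-small-discriminator) and the example sentence, where one word is manually swapped to simulate a generator sample, are illustrative.
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")

# "jumped" has been replaced by the implausible "capital" to simulate a generator sample
sentence = "The quick brown fox capital over the lazy dog"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits

# Positive logits indicate tokens the discriminator considers replaced
predictions = (logits > 0).long().squeeze().tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze())
print(list(zip(tokens, predictions)))
```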
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
ElectraConfig
[[autodoc]] ElectraConfig
ElectraTokenizer
[[autodoc]] ElectraTokenizer
ElectraTokenizerFast
[[autodoc]] ElectraTokenizerFast
Electra specific outputs
[[autodoc]] models.electra.modeling_electra.ElectraForPreTrainingOutput
[[autodoc]] models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput |
ElectraModel
[[autodoc]] ElectraModel
- forward
ElectraForPreTraining
[[autodoc]] ElectraForPreTraining
- forward
ElectraForCausalLM
[[autodoc]] ElectraForCausalLM
- forward
ElectraForMaskedLM
[[autodoc]] ElectraForMaskedLM
- forward
ElectraForSequenceClassification
[[autodoc]] ElectraForSequenceClassification
- forward
ElectraForMultipleChoice
[[autodoc]] ElectraForMultipleChoice
- forward
ElectraForTokenClassification
[[autodoc]] ElectraForTokenClassification
- forward
ElectraForQuestionAnswering
[[autodoc]] ElectraForQuestionAnswering
- forward |
TFElectraModel
[[autodoc]] TFElectraModel
- call
TFElectraForPreTraining
[[autodoc]] TFElectraForPreTraining
- call
TFElectraForMaskedLM
[[autodoc]] TFElectraForMaskedLM
- call
TFElectraForSequenceClassification
[[autodoc]] TFElectraForSequenceClassification
- call
TFElectraForMultipleChoice
[[autodoc]] TFElectraForMultipleChoice
- call
TFElectraForTokenClassification
[[autodoc]] TFElectraForTokenClassification
- call
TFElectraForQuestionAnswering
[[autodoc]] TFElectraForQuestionAnswering
- call |
FlaxElectraModel
[[autodoc]] FlaxElectraModel
- call
FlaxElectraForPreTraining
[[autodoc]] FlaxElectraForPreTraining
- call
FlaxElectraForCausalLM
[[autodoc]] FlaxElectraForCausalLM
- call
FlaxElectraForMaskedLM
[[autodoc]] FlaxElectraForMaskedLM
- call
FlaxElectraForSequenceClassification
[[autodoc]] FlaxElectraForSequenceClassification
- call
FlaxElectraForMultipleChoice
[[autodoc]] FlaxElectraForMultipleChoice
- call
FlaxElectraForTokenClassification
[[autodoc]] FlaxElectraForTokenClassification
- call
FlaxElectraForQuestionAnswering
[[autodoc]] FlaxElectraForQuestionAnswering
- call |
RoBERTa-PreLayerNorm
Overview
The RoBERTa-PreLayerNorm model was proposed in fairseq: A Fast, Extensible Toolkit for Sequence Modeling by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
It is identical to using the --encoder-normalize-before flag in fairseq.
The abstract from the paper is the following:
fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.
This model was contributed by andreasmaden.
The original code can be found here.
Usage tips |
The implementation is the same as Roberta except that instead of using Add and Norm it does Norm and Add. Add and Norm refers to the addition followed by layer normalization described in Attention Is All You Need (the difference is illustrated in the sketch below).
This is identical to using the --encoder-normalize-before flag in fairseq.
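The ordering difference can be illustrated with a small framework-level sketch (plain PyTorch, not the actual modeling code); `sublayer` stands for either the self-attention or the feed-forward sublayer of a transformer block.
```python
import torch.nn as nn

def post_layernorm_block(x, sublayer, norm: nn.LayerNorm):
    # RoBERTa / original Transformer ("Add and Norm"): residual first, then normalize
    return norm(x + sublayer(x))

def pre_layernorm_block(x, sublayer, norm: nn.LayerNorm):
    # RoBERTa-PreLayerNorm ("Norm and Add"): normalize first, then add the residual
    return x + sublayer(norm(x))
```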
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
RobertaPreLayerNormConfig
[[autodoc]] RobertaPreLayerNormConfig |
RobertaPreLayerNormModel
[[autodoc]] RobertaPreLayerNormModel
- forward
RobertaPreLayerNormForCausalLM
[[autodoc]] RobertaPreLayerNormForCausalLM
- forward
RobertaPreLayerNormForMaskedLM
[[autodoc]] RobertaPreLayerNormForMaskedLM
- forward
RobertaPreLayerNormForSequenceClassification
[[autodoc]] RobertaPreLayerNormForSequenceClassification
- forward
RobertaPreLayerNormForMultipleChoice
[[autodoc]] RobertaPreLayerNormForMultipleChoice
- forward
RobertaPreLayerNormForTokenClassification
[[autodoc]] RobertaPreLayerNormForTokenClassification
- forward
RobertaPreLayerNormForQuestionAnswering
[[autodoc]] RobertaPreLayerNormForQuestionAnswering
- forward |
TFRobertaPreLayerNormModel
[[autodoc]] TFRobertaPreLayerNormModel
- call
TFRobertaPreLayerNormForCausalLM
[[autodoc]] TFRobertaPreLayerNormForCausalLM
- call
TFRobertaPreLayerNormForMaskedLM
[[autodoc]] TFRobertaPreLayerNormForMaskedLM
- call
TFRobertaPreLayerNormForSequenceClassification
[[autodoc]] TFRobertaPreLayerNormForSequenceClassification
- call
TFRobertaPreLayerNormForMultipleChoice
[[autodoc]] TFRobertaPreLayerNormForMultipleChoice
- call
TFRobertaPreLayerNormForTokenClassification
[[autodoc]] TFRobertaPreLayerNormForTokenClassification
- call
TFRobertaPreLayerNormForQuestionAnswering
[[autodoc]] TFRobertaPreLayerNormForQuestionAnswering
- call |
FlaxRobertaPreLayerNormModel
[[autodoc]] FlaxRobertaPreLayerNormModel
- call
FlaxRobertaPreLayerNormForCausalLM
[[autodoc]] FlaxRobertaPreLayerNormForCausalLM
- call
FlaxRobertaPreLayerNormForMaskedLM
[[autodoc]] FlaxRobertaPreLayerNormForMaskedLM
- call
FlaxRobertaPreLayerNormForSequenceClassification
[[autodoc]] FlaxRobertaPreLayerNormForSequenceClassification
- call
FlaxRobertaPreLayerNormForMultipleChoice
[[autodoc]] FlaxRobertaPreLayerNormForMultipleChoice
- call
FlaxRobertaPreLayerNormForTokenClassification
[[autodoc]] FlaxRobertaPreLayerNormForTokenClassification
- call
FlaxRobertaPreLayerNormForQuestionAnswering
[[autodoc]] FlaxRobertaPreLayerNormForQuestionAnswering
- call |
MobileViTV2
Overview
The MobileViTV2 model was proposed in Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari.
MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.
The abstract from the paper is the following:
Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k^2) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device.
This model was contributed by shehan97.
The original code can be found here.
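As a quick starting point, here is a minimal image-classification sketch. The checkpoint name (apple/mobilevitv2-1.0-imagenet1k-256) is an assumption; any MobileViTV2 classification checkpoint on the Hub should work the same way.
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint name is an assumption; see the Hub for available MobileViTV2 checkpoints
processor = AutoImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTV2ForImageClassification.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```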
Usage tips |