# How to contribute to the Datasets Server? [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg)](CODE_OF_CONDUCT.md) The Datasets Server is an open source project, so all contributions and suggestions are welcome. You can contribute in many different ways: giving ideas, answering questions, reporting bugs, proposing enhancements, improving the documentation, fixing bugs... Many thanks in advance to every contributor. In order to facilitate healthy, constructive behavior in an open and inclusive community, we all respect and abide by our [code of conduct](CODE_OF_CONDUCT.md). ## How to work on an open Issue? The list of open Issues is at: https://github.com/huggingface/datasets/issues Some of them may have the label `help wanted`: that means any contributor is welcome to work on them! If you would like to work on any of the open Issues: 1. Make sure it is not already assigned to someone else. The assignee (if any) is shown at the top of the right column of the Issue page. 2. You can self-assign it by commenting on the Issue page with one of the keywords: `#take` or `#self-assign`. 3. Work on your self-assigned issue and open a Pull Request when you are ready. ## How to create a Pull Request? 1. Fork the [repository](https://github.com/huggingface/datasets-server) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account. 2. Clone your fork to your local disk, and add the base repository as a remote: ```bash git clone git@github.com:<your Github handle>/datasets-server.git cd datasets-server git remote add upstream https://github.com/huggingface/datasets-server.git ``` 3. Create a new branch to hold your development changes: ```bash git checkout -b a-descriptive-name-for-my-changes ``` **Do not** work on the `main` branch. 4. Set up a development environment by following the [developer guide](./DEVELOPER_GUIDE.md). 5. Develop the features on your branch. 6. Format your code. Run the linter and formatter so that your newly added files look nice, with the following command: ```bash make style ``` 7. Once you're happy with your code, add your changes and make a commit to record your changes locally: ```bash git add -p git commit ``` It is a good idea to sync your copy of the code with the original repository regularly. This way you can quickly account for changes: ```bash git fetch upstream git rebase upstream/main ``` Push the changes to your account using: ```bash git push -u origin a-descriptive-name-for-my-changes ``` 8. Once you are satisfied, go to the webpage of your fork on GitHub. Click on "Pull request" to send your changes to the project maintainers for review. Thank you for your contribution! ## Code of conduct This project adheres to the HuggingFace [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
huggingface/datasets-server/blob/main/CONTRIBUTING.md
Preprocess In addition to loading datasets, 🤗 Datasets other main goal is to offer a diverse set of preprocessing functions to get a dataset into an appropriate format for training with your machine learning framework. There are many possible ways to preprocess a dataset, and it all depends on your specific dataset. Sometimes you may need to rename a column, and other times you might need to unflatten nested fields. 🤗 Datasets provides a way to do most of these things. But in nearly all preprocessing cases, depending on your dataset modality, you'll need to: - Tokenize a text dataset. - Resample an audio dataset. - Apply transforms to an image dataset. The last preprocessing step is usually setting your dataset format to be compatible with your machine learning framework's expected input format. In this tutorial, you'll also need to install the 🤗 Transformers library: ```bash pip install transformers ``` Grab a dataset of your choice and follow along! ## Tokenize text Models cannot process raw text, so you'll need to convert the text into numbers. Tokenization provides a way to do this by dividing text into individual words called *tokens*. Tokens are finally converted to numbers. <Tip> Check out the [Tokenizers](https://huggingface.co/course/chapter2/4?fw=pt) section in Chapter 2 of the Hugging Face course to learn more about tokenization and different tokenization algorithms. </Tip> **1**. Start by loading the [rotten_tomatoes](https://huggingface.co/datasets/rotten_tomatoes) dataset and the tokenizer corresponding to a pretrained [BERT](https://huggingface.co/bert-base-uncased) model. Using the same tokenizer as the pretrained model is important because you want to make sure the text is split in the same way. ```py >>> from transformers import AutoTokenizer >>> from datasets import load_dataset >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> dataset = load_dataset("rotten_tomatoes", split="train") ``` **2**. Call your tokenizer on the first row of `text` in the dataset: ```py >>> tokenizer(dataset[0]["text"]) {'input_ids': [101, 1103, 2067, 1110, 17348, 1106, 1129, 1103, 6880, 1432, 112, 188, 1207, 107, 14255, 1389, 107, 1105, 1115, 1119, 112, 188, 1280, 1106, 1294, 170, 24194, 1256, 3407, 1190, 170, 11791, 5253, 188, 1732, 7200, 10947, 12606, 2895, 117, 179, 7766, 118, 172, 15554, 1181, 3498, 6961, 3263, 1137, 188, 1566, 7912, 14516, 6997, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` The tokenizer returns a dictionary with three items: - `input_ids`: the numbers representing the tokens in the text. - `token_type_ids`: indicates which sequence a token belongs to if there is more than one sequence. - `attention_mask`: indicates whether a token should be masked or not. These values are actually the model inputs. **3**. The fastest way to tokenize your entire dataset is to use the [`~Dataset.map`] function. This function speeds up tokenization by applying the tokenizer to batches of examples instead of individual examples. Set the `batched` parameter to `True`: ```py >>> def tokenization(example): ... return tokenizer(example["text"]) >>> dataset = dataset.map(tokenization, batched=True) ``` **4**. 
Set the format of your dataset to be compatible with your machine learning framework: <frameworkcontent> <pt> Use the [`~Dataset.set_format`] function to set the dataset format to be compatible with PyTorch: ```py >>> dataset.set_format(type="torch", columns=["input_ids", "token_type_ids", "attention_mask", "label"]) >>> dataset.format['type'] 'torch' ``` </pt> <tf> Use the [`~Dataset.to_tf_dataset`] function to set the dataset format to be compatible with TensorFlow. You'll also need to import a [data collator](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorWithPadding) from 🤗 Transformers to combine the varying sequence lengths into a single batch of equal lengths: ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") >>> tf_dataset = dataset.to_tf_dataset( ... columns=["input_ids", "token_type_ids", "attention_mask"], ... label_cols=["label"], ... batch_size=2, ... collate_fn=data_collator, ... shuffle=True ... ) ``` </tf> </frameworkcontent> **5**. The dataset is now ready for training with your machine learning framework! ## Resample audio signals Audio inputs like text datasets need to be divided into discrete data points. This is known as *sampling*; the sampling rate tells you how much of the speech signal is captured per second. It is important to make sure the sampling rate of your dataset matches the sampling rate of the data used to pretrain the model you're using. If the sampling rates are different, the pretrained model may perform poorly on your dataset because it doesn't recognize the differences in the sampling rate. **1**. Start by loading the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset, the [`Audio`] feature, and the feature extractor corresponding to a pretrained [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) model: ```py >>> from transformers import AutoFeatureExtractor >>> from datasets import load_dataset, Audio >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h") >>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train") ``` **2**. Index into the first row of the dataset. When you call the `audio` column of the dataset, it is automatically decoded and resampled: ```py >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` **3**. Reading a dataset card is incredibly useful and can give you a lot of information about the dataset. A quick look at the MInDS-14 dataset card tells you the sampling rate is 8kHz. Likewise, you can get many details about a model from its model card. The Wav2Vec2 model card says it was sampled on 16kHz speech audio. This means you'll need to upsample the MInDS-14 dataset to match the sampling rate of the model. Use the [`~Dataset.cast_column`] function and set the `sampling_rate` parameter in the [`Audio`] feature to upsample the audio signal. 
When you call the `audio` column now, it is decoded and resampled to 16kHz: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` **4**. Use the [`~Dataset.map`] function to resample the entire dataset to 16kHz. This function speeds up resampling by applying the feature extractor to batches of examples instead of individual examples. Set the `batched` parameter to `True`: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True ... ) ... return inputs >>> dataset = dataset.map(preprocess_function, batched=True) ``` **5**. The dataset is now ready for training with your machine learning framework! ## Apply data augmentations The most common preprocessing you'll do with image datasets is *data augmentation*, a process that introduces random variations to an image without changing the meaning of the data. This can mean changing the color properties of an image or randomly cropping an image. You are free to use any data augmentation library you like, and 🤗 Datasets will help you apply your data augmentations to your dataset. **1**. Start by loading the [Beans](https://huggingface.co/datasets/beans) dataset, the `Image` feature, and the feature extractor corresponding to a pretrained [ViT](https://huggingface.co/google/vit-base-patch16-224-in21k) model: ```py >>> from transformers import AutoFeatureExtractor >>> from datasets import load_dataset, Image >>> feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k") >>> dataset = load_dataset("beans", split="train") ``` **2**. Index into the first row of the dataset. When you call the `image` column of the dataset, the underlying PIL object is automatically decoded into an image. ```py >>> dataset[0]["image"] ``` **3**. Now, you can apply some transforms to the image. Feel free to take a look at the [various transforms available](https://pytorch.org/vision/stable/auto_examples/plot_transforms.html#sphx-glr-auto-examples-plot-transforms-py) in torchvision and choose one you'd like to experiment with. This example applies a transform that randomly rotates the image: ```py >>> from torchvision.transforms import RandomRotation >>> rotate = RandomRotation(degrees=(0, 90)) >>> def transforms(examples): ... examples["pixel_values"] = [rotate(image.convert("RGB")) for image in examples["image"]] ... return examples ``` **4**. Use the [`~Dataset.set_transform`] function to apply the transform on-the-fly. When you index into the image `pixel_values`, the transform is applied, and your image gets rotated. ```py >>> dataset.set_transform(transforms) >>> dataset[0]["pixel_values"] ``` **5**. The dataset is now ready for training with your machine learning framework!
huggingface/datasets/blob/main/docs/source/use_dataset.mdx
--- title: Introducing our new pricing thumbnail: /blog/assets/114_pricing-update/thumbnail.png authors: - user: sbrandeis - user: pierric --- # Introducing our new pricing As you might have noticed, our [pricing page](https://huggingface.co/pricing) has changed a lot recently. First of all, we are sunsetting the Paid tier of the Inference API service. The Inference API will still be available for everyone to use for free. But if you're looking for fast, enterprise-grade inference as a service, we recommend checking out our brand new solution for this: [Inference Endpoints](https://huggingface.co/inference-endpoints). Along with Inference Endpoints, we've recently introduced hardware upgrades for [Spaces](https://huggingface.co/spaces/launch), which allow you to run ML demos with the hardware of your choice. No subscription is required to use these services; you only need to add a credit card to your account from your [billing settings](https://huggingface.co/settings/billing). You can also attach a payment method to any of [your organizations](https://huggingface.co/settings/organizations). Your billing settings centralize everything about our paid services. From there, you can manage your personal PRO subscription, update your payment method, and visualize your usage for the past three months. Usage for all our paid services and subscriptions will be charged at the start of each month, and a consolidated invoice will be available for your records. **TL;DR**: **At HF we monetize by providing simple access to compute for AI**, with services like AutoTrain, Spaces and Inference Endpoints, directly accessible from the Hub. [Read more](https://huggingface.co/docs/hub/billing) about our pricing and billing system. If you have any questions, feel free to reach out. We welcome your feedback 🔥
huggingface/blog/blob/main/pricing-update.md
# Image Classification in TensorFlow and Keras Related spaces: https://huggingface.co/spaces/abidlabs/keras-image-classifier Tags: VISION, MOBILENET, TENSORFLOW ## Introduction Image classification is a central task in computer vision. Building better classifiers to recognize the objects in an image is an active area of research, with applications ranging from traffic control systems to satellite imaging. Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo for image classification using Gradio. We can build the whole web application in Python, and the interface will look like this (try one of the examples!): <iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> Let's get started! ### Prerequisites Make sure you have [installed](/getting_started) the `gradio` Python package. We will be using a pretrained Keras image classification model, so you should also have `tensorflow` installed. ## Step 1: Setting up the image classification model First, we need an image classification model. In this tutorial we will use a pretrained MobileNet model, since it is easy to download from [Keras](https://keras.io/api/applications/mobilenet/). You can also use a different pretrained model or train your own. ```python import tensorflow as tf inception_net = tf.keras.applications.MobileNetV2() ``` This line automatically downloads the MobileNet model and its weights using the Keras library. ## Step 2: Defining the `predict` function Next, we need to define a function that takes the *user input* as a parameter (in this case, an image) and returns the prediction. The prediction should be returned as a dictionary whose keys are the class names and whose values are the confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN). For our pretrained model, the function will look like this: ```python import requests # Download human-readable labels for ImageNet. response = requests.get("https://git.io/JJkYN") labels = response.text.split("\n") def classify_image(inp): inp = inp.reshape((-1, 224, 224, 3)) inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) prediction = inception_net.predict(inp).flatten() confidences = {labels[i]: float(prediction[i]) for i in range(1000)} return confidences ``` Let's go through this in detail. The function takes one parameter: - `inp`: the input image as a `numpy` array It then adds a batch dimension, passes the image through the model, and returns: - `confidences`: the predictions, as a dictionary whose keys are the class labels and whose values are the confidence probabilities ## Step 3: Creating a Gradio interface Now that we have our predict function set up, we can create a Gradio interface around it. In this case, the input component is a drag-and-drop image component. To create this input component, we can use the `"gradio.inputs.Image"` class, which creates the component and handles the preprocessing to convert it to a numpy array. We will instantiate the class with one parameter that automatically preprocesses the input image to 224 pixels by 224 pixels, which is the size MobileNet expects. The output component will be a `"label"`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3. Finally, we'll add an `examples` parameter, which lets us prefill the interface with a few predefined examples. The Gradio code looks like this: ```python import gradio as gr gr.Interface(fn=classify_image, inputs=gr.Image(shape=(224, 224)), outputs=gr.Label(num_top_classes=3), examples=["banana.jpg", "car.jpg"]).launch() ``` This produces the following interface, which you can try right away in your browser (try uploading your own examples!): <iframe src="https://abidlabs-keras-image-classifier.hf.space" frameBorder="0" height="660" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> --- And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share it with others, try setting `share=True` when you launch the interface!
gradio-app/gradio/blob/main/guides/cn/04_integrating-other-frameworks/image-classification-in-tensorflow.md
-- title: Faster Stable Diffusion with Core ML on iPhone, iPad, and Mac thumbnail: /blog/assets/149_fast_diffusers_coreml/thumbnail.png authors: - user: pcuenq --- # Faster Stable Diffusion with Core ML on iPhone, iPad, and Mac WWDC’23 (Apple Worldwide Developers Conference) was held last week. A lot of the news focused on the Vision Pro announcement during the keynote, but there’s much more to it. Like every year, WWDC week is packed with more than 200 technical sessions that dive deep inside the upcoming features across Apple operating systems and frameworks. This year we are particularly excited about changes in Core ML devoted to compression and optimization techniques. These changes make running [models](https://huggingface.co/apple) such as Stable Diffusion faster and with less memory use! As a taste, consider the following test I ran on my [iPhone 13 back in December](https://huggingface.co/blog/diffusers-coreml), compared with the current speed using 6-bit palettization: <img style="border:none;" alt="Stable Diffusion on iPhone, back in December and now using 6-bit palettization" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/fast-diffusers-coreml/before-after-1600.jpg" /> <small>Stable Diffusion on iPhone, back in December and now with 6-bit palettization</small> ## Contents * [New Core ML Optimizations](#new-core-ml-optimizations) * [Using Quantized and Optimized Stable Diffusion Models](#using-quantized-and-optimized-stable-diffusion-models) * [Converting and Optimizing Custom Models](#converting-and-optimizing-custom-models) * [Using Less than 6 bits](#using-less-than-6-bits) * [Conclusion](#conclusion) ## New Core ML Optimizations Core ML is a mature framework that allows machine learning models to run efficiently on-device, taking advantage of all the compute hardware in Apple devices: the CPU, the GPU, and the Neural Engine specialized in ML tasks. On-device execution is going through a period of extraordinary interest triggered by the popularity of models such as Stable Diffusion and Large Language Models with chat interfaces. Many people want to run these models on their hardware for a variety of reasons, including convenience, privacy, and API cost savings. Naturally, many developers are exploring ways to run these models efficiently on-device and creating new apps and use cases. Core ML improvements that contribute to achieving that goal are big news for the community! The Core ML optimization changes encompass two different (but complementary) software packages: * The Core ML framework itself. This is the engine that runs ML models on Apple hardware and is part of the operating system. Models have to be exported in a special format supported by the framework, and this format is also referred to as “Core ML”. * The `coremltools` conversion package. This is an [open-source Python module](https://github.com/apple/coremltools) whose mission is to convert PyTorch or Tensorflow models to the Core ML format. `coremltools` now includes a new submodule called `coremltools.optimize` with all the compression and optimization tools. For full details on this package, please take a look at [this WWDC session](https://developer.apple.com/wwdc23/10047). In the case of Stable Diffusion, we’ll be using _6-bit palettization_, a type of quantization that compresses model weights from a 16-bit floating-point representation to just 6 bits per parameter. 
The name “palettization” refers to a technique similar to the one used in computer graphics to work with a limited set of colors: the color table (or “palette”) contains a fixed number of colors, and the colors in the image are replaced with the indexes of the closest colors available in the palette. This immediately provides the benefit of drastically reducing storage size, and thus reducing download time and on-device disk use. <img style="border:none;" alt="Illustration of 2-bit palettization. Image credit: Apple WWDC’23 Session 'Use Core ML Tools for machine learning model compression'" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/fast-diffusers-coreml/palettization_illustration.png" /> <small>Illustration of 2-bit palettization. Image credit: Apple WWDC’23 Session <i><a href="https://developer.apple.com/wwdc23/10047">Use Core ML Tools for machine learning model compression</a></i>.</small> The compressed 6-bit _weights_ cannot be used for computation, because they are just indices into a table and no longer represent the magnitude of the original weights. Therefore, Core ML needs to uncompress the palletized weights before use. In previous versions of Core ML, uncompression took place when the model was first loaded from disk, so the amount of memory used was equal to the uncompressed model size. With the new improvements, weights are kept as 6-bit numbers and converted on the fly as inference progresses from layer to layer. This might seem slow – an inference run requires a lot of uncompressing operations –, but it’s typically more efficient than preparing all the weights in 16-bit mode! The reason is that memory transfers are in the critical path of execution, and transferring less memory is faster than transferring uncompressed data. ## Using Quantized and Optimized Stable Diffusion Models [Last December](https://huggingface.co/blog/diffusers-coreml), Apple introduced [`ml-stable-diffusion`](https://github.com/apple/ml-stable-diffusion), an open-source repo based on [diffusers](https://github.com/huggingface/diffusers) to easily convert Stable Diffusion models to Core ML. It also applies [optimizations to the transformers attention layers](https://machinelearning.apple.com/research/neural-engine-transformers) that make inference faster on the Neural Engine (on devices where it’s available). `ml-stable-diffusion` has just been updated after WWDC with the following: * Quantization is supported using `--quantize-nbits` during conversion. You can quantize to 8, 6, 4, or even 2 bits! For best results, we recommend using 6-bit quantization, as the precision loss is small while achieving fast inference and significant memory savings. If you want to go lower than that, please check [this section](#using-less-than-6-bits) for advanced techniques. * Additional optimizations of the attention layers that achieve even better performance on the Neural Engine! The trick is to split the query sequences into chunks of 512 to avoid the creation of large intermediate tensors. This method is called `SPLIT_EINSUM_V2` in the code and can improve performance between 10% to 30%. In order to make it easy for everyone to take advantage of these improvements, we have converted the four official Stable Diffusion models and pushed them to the [Hub](https://huggingface.co/apple). 
These are all the variants: | Model | Uncompressed | Palettized | |---------------------------|-------------------|---------------------------| | Stable Diffusion 1.4 | [Core ML, `float16`](https://huggingface.co/apple/coreml-stable-diffusion-v1-4) | [Core ML, 6-bit palettized](https://huggingface.co/apple/coreml-stable-diffusion-1-4-palettized) | | Stable Diffusion 1.5 | [Core ML, `float16`](https://huggingface.co/apple/coreml-stable-diffusion-v1-5) | [Core ML, 6-bit palettized](https://huggingface.co/apple/coreml-stable-diffusion-v1-5-palettized) | | Stable Diffusion 2 base | [Core ML, `float16`](https://huggingface.co/apple/coreml-stable-diffusion-2-base) | [Core ML, 6-bit palettized](https://huggingface.co/apple/coreml-stable-diffusion-2-base-palettized) | | Stable Diffusion 2.1 base | [Core ML, `float16`](https://huggingface.co/apple/coreml-stable-diffusion-2-1-base) | [Core ML, 6-bit palettized](https://huggingface.co/apple/coreml-stable-diffusion-2-1-base-palettized) | <br> <div style="background-color: #f0fcf0; padding: 8px 32px 1px; outline: 1px solid; border-radius: 10px;"> In order to use 6-bit models, you need the development versions of iOS/iPadOS 17 or macOS 14 (Sonoma) because those are the ones that contain the latest Core ML framework. You can download them from the [Apple developer site](https://developer.apple.com) if you are a registered developer, or you can sign up for the public beta that will be released in a few weeks. </div> <br> Note that each variant is available in Core ML format and also as a `zip` archive. Zip files are ideal for native apps, such as our [open-source demo app](https://github.com/huggingface/swift-coreml-diffusers) and other [third party tools](https://github.com/godly-devotion/MochiDiffusion). If you just want to run the models on your own hardware, the easiest way is to use our demo app and select the quantized model you want to test. You need to compile the app using Xcode, but an update will be available for download in the App Store soon. For more details, check [our previous post](https://huggingface.co/blog/fast-mac-diffusers). <img style="border:none;" alt="Running 6-bit stable-diffusion-2-1-base model in demo app" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/fast-diffusers-coreml/diffusers_mac_screenshot.png" /> <small>Running 6-bit stable-diffusion-2-1-base model in demo app</small> If you want to download a particular Core ML package to integrate it in your own Xcode project, you can clone the repos or just download the version of interest using code like the following. ```Python from huggingface_hub import snapshot_download from pathlib import Path repo_id = "apple/coreml-stable-diffusion-2-1-base-palettized" variant = "original/packages" model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) print(f"Model downloaded at {model_path}") ``` ## Converting and Optimizing Custom Models If you want to use a personalized Stable Diffusion model (for example, if you have fine-tuned or dreamboothed your own models), you can use Apple’s ml-stable-diffusion repo to do the conversion yourself. This is a brief summary of how you’d go about it, but we recommend you read [the documentation details](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). 
<br> <div style="background-color: #f0fcf0; padding: 8px 32px 1px; outline: 1px solid; border-radius: 10px;"> If you want to apply quantization, you need the latest versions of `coremltools`, `apple/ml-stable-diffusion` and Xcode in order to do the conversion. * Download `coremltools` 7.0 beta from [the releases page in GitHub](https://github.com/apple/coremltools/releases). * Download Xcode 15.0 beta from [Apple developer site](https://developer.apple.com/). * Download `apple/ml-stable-diffusion` [from the repo](https://github.com/apple/ml-stable-diffusion) and follow the installation instructions. </div> <br> 1. Select the model you want to convert. You can train your own or choose one from the [Hugging Face Diffusers Models Gallery](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery). For example, let’s convert [`prompthero/openjourney-v4`](https://huggingface.co/prompthero/openjourney-v4). 2. Install `apple/ml-stable-diffusion` and run a first conversion using the `ORIGINAL` attention implementation like this: ```bash python -m python_coreml_stable_diffusion.torch2coreml \ --model-version prompthero/openjourney-v4 \ --convert-unet \ --convert-text-encoder \ --convert-vae-decoder \ --convert-vae-encoder \ --convert-safety-checker \ --quantize-nbits 6 \ --attention-implementation ORIGINAL \ --compute-unit CPU_AND_GPU \ --bundle-resources-for-swift-cli \ --check-output-correctness \ -o models/original/openjourney-6-bit ``` <br> <div style="background-color: #f0fcf0; padding: 8px 32px 1px; outline: 1px solid; border-radius: 10px;"> * Use `--convert-vae-encoder` if you want to use image-to-image tasks. * Do _not_ use `--chunk-unet` with `--quantized-nbits 6` (or less), as the quantized model is small enough to work fine on both iOS and macOS. </div> <br> 3. Repeat the conversion for the `SPLIT_EINSUM_V2` attention implementation: ```bash python -m python_coreml_stable_diffusion.torch2coreml \ --model-version prompthero/openjourney-v4 \ --convert-unet \ --convert-text-encoder \ --convert-vae-decoder \ --convert-safety-checker \ --quantize-nbits 6 \ --attention-implementation SPLIT_EINSUM_V2 \ --compute-unit ALL \ --bundle-resources-for-swift-cli \ --check-output-correctness \ -o models/split_einsum_v2/openjourney-6-bit ``` 4. Test the converted models on the desired hardware. As a rule of thumb, the `ORIGINAL` version usually works better on macOS, whereas `SPLIT_EINSUM_V2` is usually faster on iOS. For more details and additional data points, see [these tests contributed by the community](https://github.com/huggingface/swift-coreml-diffusers/issues/31) on the previous version of Stable Diffusion for Core ML. 5. To integrate the desired model in your own app: * If you are going to distribute the model inside the app, use the `.mlpackage` files. Note that this will increase the size of your app binary. * Otherwise, you can use the compiled `Resources` to download them dynamically when your app starts. <br> <div style="background-color: #f0fcf0; padding: 8px 32px 1px; outline: 1px solid; border-radius: 10px;"> If you don’t use the `--quantize-nbits` option, weights will be represented as 16-bit floats. This is compatible with the current version of Core ML so you won’t need to install the betas of iOS, macOS or Xcode. </div> <br> ## Using Less than 6 bits 6-bit quantization is a sweet spot between model quality, model size and convenience – you just need to provide a conversion option in order to be able to quantize any pre-trained model. 
This is an example of _post-training compression_. The beta version of `coremltools` released last week also includes _training-time_ compression methods. The idea here is that you can fine-tune a pre-trained Stable Diffusion model and perform the weights compression while fine-tuning is taking place. This allows you to use 4- or even 2-bit compression while minimizing the loss in quality. The reason this works is because weight clustering is performed using a differentiable algorithm, and therefore we can apply the usual training optimizers to find the quantization table while minimizing model loss. We have plans to evaluate this method soon, and can’t wait to see how 4-bit optimized models work and how fast they run. If you beat us to it, please drop us a note and we’ll be happy to check 🙂 ## Conclusion Quantization methods can be used to reduce the size of Stable Diffusion models, make them run faster on-device and consume less resources. The latest versions of Core ML and `coremltools` support techniques like 6-bit palettization that are easy to apply and that have a minimal impact on quality. We have added 6-bit palettized [models to the Hub](https://huggingface.co/apple), which are small enough to run on both iOS and macOS. We've also shown how you can convert fine-tuned models yourself, and can't wait to see what you do with these tools and techniques!
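To make the palette-and-index mechanism described in this post concrete, here is a minimal, framework-free NumPy sketch. The layer shape and the uniformly spaced palette are illustrative assumptions; real tools such as `coremltools` typically build the palette with k-means clustering.

```python
import numpy as np

# Toy "layer" of 16-bit weights standing in for a real model layer.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float16)

nbits = 6
palette_size = 2 ** nbits  # 64 entries for 6-bit palettization

# Build a palette (uniformly spaced here purely for illustration).
palette = np.linspace(weights.min(), weights.max(), palette_size).astype(np.float16)

# Compress: keep only the index of the closest palette entry for each weight.
indices = np.abs(weights[..., None] - palette[None, None, :]).argmin(axis=-1).astype(np.uint8)

# Decompress on the fly at inference time: a simple table lookup.
restored = palette[indices]

print("max absolute error:", np.abs(weights - restored).max())
print("stored bits per weight:", nbits, "vs original 16")
```

The storage savings come from keeping `indices` (6 bits each, once packed) plus a small per-layer palette instead of 16 bits per weight, which is exactly why download size and memory transfers shrink so much.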
huggingface/blog/blob/main/fast-diffusers-coreml.md
Big Transfer (BiT) **Big Transfer (BiT)** is a type of pretraining recipe that pre-trains on a large supervised source dataset, and fine-tunes the weights on the target task. Models are trained on the JFT-300M dataset. The finetuned models contained in this collection are finetuned on ImageNet. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('resnetv2_101x1_bitm', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `resnetv2_101x1_bitm`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('resnetv2_101x1_bitm', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. 
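As a rough sketch of the fine-tuning workflow described above, the loop below swaps in a fresh classifier head and trains it; the dummy tensors, class count, and hyperparameters are placeholders you would replace with your own dataset and settings.

```python
import timm
import torch
from torch.utils.data import DataLoader, TensorDataset

NUM_FINETUNE_CLASSES = 10  # placeholder: set to the number of classes in your dataset

# Pretrained BiT backbone with a new classifier head.
model = timm.create_model('resnetv2_101x1_bitm', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)

# Dummy stand-in for your own dataset of 480x480 images and integer labels.
images = torch.randn(8, 3, 480, 480)
labels = torch.randint(0, NUM_FINETUNE_CLASSES, (8,))
loader = DataLoader(TensorDataset(images, labels), batch_size=4)

optimizer = torch.optim.SGD(model.parameters(), lr=0.003, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(1):  # increase for real training
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        print(f"loss: {loss.item():.4f}")
```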
## Citation ```BibTeX @misc{kolesnikov2020big, title={Big Transfer (BiT): General Visual Representation Learning}, author={Alexander Kolesnikov and Lucas Beyer and Xiaohua Zhai and Joan Puigcerver and Jessica Yung and Sylvain Gelly and Neil Houlsby}, year={2020}, eprint={1912.11370}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: Big Transfer Paper: Title: 'Big Transfer (BiT): General Visual Representation Learning' URL: https://paperswithcode.com/paper/large-scale-learning-of-general-visual Models: - Name: resnetv2_101x1_bitm In Collection: Big Transfer Metadata: FLOPs: 5330896 Parameters: 44540000 File Size: 178256468 Architecture: - 1x1 Convolution - Bottleneck Residual Block - Convolution - Global Average Pooling - Group Normalization - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Weight Standardization Tasks: - Image Classification Training Techniques: - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet - JFT-300M Training Resources: Cloud TPUv3-512 ID: resnetv2_101x1_bitm LR: 0.03 Epochs: 90 Layers: 101 Crop Pct: '1.0' Momentum: 0.9 Batch Size: 4096 Image Size: '480' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnetv2.py#L444 Weights: https://storage.googleapis.com/bit_models/BiT-M-R101x1-ILSVRC2012.npz Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 82.21% Top 5 Accuracy: 96.47% - Name: resnetv2_101x3_bitm In Collection: Big Transfer Metadata: FLOPs: 15988688 Parameters: 387930000 File Size: 1551830100 Architecture: - 1x1 Convolution - Bottleneck Residual Block - Convolution - Global Average Pooling - Group Normalization - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Weight Standardization Tasks: - Image Classification Training Techniques: - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet - JFT-300M Training Resources: Cloud TPUv3-512 ID: resnetv2_101x3_bitm LR: 0.03 Epochs: 90 Layers: 101 Crop Pct: '1.0' Momentum: 0.9 Batch Size: 4096 Image Size: '480' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnetv2.py#L451 Weights: https://storage.googleapis.com/bit_models/BiT-M-R101x3-ILSVRC2012.npz Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 84.38% Top 5 Accuracy: 97.37% - Name: resnetv2_152x2_bitm In Collection: Big Transfer Metadata: FLOPs: 10659792 Parameters: 236340000 File Size: 945476668 Architecture: - 1x1 Convolution - Bottleneck Residual Block - Convolution - Global Average Pooling - Group Normalization - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Weight Standardization Tasks: - Image Classification Training Techniques: - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet - JFT-300M ID: resnetv2_152x2_bitm Crop Pct: '1.0' Image Size: '480' Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnetv2.py#L458 Weights: https://storage.googleapis.com/bit_models/BiT-M-R152x2-ILSVRC2012.npz Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 84.4% Top 5 Accuracy: 97.43% - Name: resnetv2_152x4_bitm In Collection: Big Transfer Metadata: FLOPs: 21317584 Parameters: 936530000 File Size: 3746270104 Architecture: 
- 1x1 Convolution - Bottleneck Residual Block - Convolution - Global Average Pooling - Group Normalization - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Weight Standardization Tasks: - Image Classification Training Techniques: - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet - JFT-300M Training Resources: Cloud TPUv3-512 ID: resnetv2_152x4_bitm Crop Pct: '1.0' Image Size: '480' Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnetv2.py#L465 Weights: https://storage.googleapis.com/bit_models/BiT-M-R152x4-ILSVRC2012.npz Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 84.95% Top 5 Accuracy: 97.45% - Name: resnetv2_50x1_bitm In Collection: Big Transfer Metadata: FLOPs: 5330896 Parameters: 25550000 File Size: 102242668 Architecture: - 1x1 Convolution - Bottleneck Residual Block - Convolution - Global Average Pooling - Group Normalization - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Weight Standardization Tasks: - Image Classification Training Techniques: - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet - JFT-300M Training Resources: Cloud TPUv3-512 ID: resnetv2_50x1_bitm LR: 0.03 Epochs: 90 Layers: 50 Crop Pct: '1.0' Momentum: 0.9 Batch Size: 4096 Image Size: '480' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnetv2.py#L430 Weights: https://storage.googleapis.com/bit_models/BiT-M-R50x1-ILSVRC2012.npz Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.19% Top 5 Accuracy: 95.63% - Name: resnetv2_50x3_bitm In Collection: Big Transfer Metadata: FLOPs: 15988688 Parameters: 217320000 File Size: 869321580 Architecture: - 1x1 Convolution - Bottleneck Residual Block - Convolution - Global Average Pooling - Group Normalization - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Weight Standardization Tasks: - Image Classification Training Techniques: - Mixup - SGD with Momentum - Weight Decay Training Data: - ImageNet - JFT-300M Training Resources: Cloud TPUv3-512 ID: resnetv2_50x3_bitm LR: 0.03 Epochs: 90 Layers: 50 Crop Pct: '1.0' Momentum: 0.9 Batch Size: 4096 Image Size: '480' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnetv2.py#L437 Weights: https://storage.googleapis.com/bit_models/BiT-M-R50x3-ILSVRC2012.npz Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 83.75% Top 5 Accuracy: 97.12% -->
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/big-transfer.mdx
This demo identifies whether two speakers are the same person, using Gradio's Audio and HTML components.
gradio-app/gradio/blob/main/demo/same-person-or-different/DESCRIPTION.md
<!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # QDQBERT ## Overview The QDQBERT model can be referenced in [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. The abstract from the paper is the following: *Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are more difficult to quantize, such as MobileNets and BERT-large.* This model was contributed by [shangz](https://huggingface.co/shangz). ## Usage tips - The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to (i) linear layer inputs and weights, (ii) matmul inputs, and (iii) residual add inputs, in the BERT model. - QDQBERT depends on the [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). To install it, run `pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com`. - The QDQBERT model can be loaded from any checkpoint of a HuggingFace BERT model (for example *bert-base-uncased*) and used to perform Quantization Aware Training or Post Training Quantization. - A complete example of using the QDQBERT model to perform Quantization Aware Training and Post Training Quantization for the SQuAD task can be found at [transformers/examples/research_projects/quantization-qdqbert/](examples/research_projects/quantization-qdqbert/). ### Set default quantizers The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to BERT via `TensorQuantizer` in the [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). `TensorQuantizer` is the module for quantizing tensors, with `QuantDescriptor` defining how the tensor should be quantized. Refer to the [Pytorch Quantization Toolkit userguide](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/userguide.html) for more details. Before creating the QDQBERT model, one has to set the default `QuantDescriptor` defining default tensor quantizers.
Example: ```python >>> import pytorch_quantization.nn as quant_nn >>> from pytorch_quantization.tensor_quant import QuantDescriptor >>> # The default tensor quantizer is set to use Max calibration method >>> input_desc = QuantDescriptor(num_bits=8, calib_method="max") >>> # The default tensor quantizer is set to be per-channel quantization for weights >>> weight_desc = QuantDescriptor(num_bits=8, axis=((0,))) >>> quant_nn.QuantLinear.set_default_quant_desc_input(input_desc) >>> quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc) ``` ### Calibration Calibration is the terminology of passing data samples to the quantizer and deciding the best scaling factors for tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model: ```python >>> # Find the TensorQuantizer and enable calibration >>> for name, module in model.named_modules(): ... if name.endswith("_input_quantizer"): ... module.enable_calib() ... module.disable_quant() # Use full precision data to calibrate >>> # Feeding data samples >>> model(x) >>> # ... >>> # Finalize calibration >>> for name, module in model.named_modules(): ... if name.endswith("_input_quantizer"): ... module.load_calib_amax() ... module.enable_quant() >>> # If running on GPU, it needs to call .cuda() again because new tensors will be created by calibration process >>> model.cuda() >>> # Keep running the quantized model >>> # ... ``` ### Export to ONNX The goal of exporting to ONNX is to deploy inference by [TensorRT](https://developer.nvidia.com/tensorrt). Fake quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting static member of TensorQuantizer to use Pytorch’s own fake quantization functions, fake quantized model can be exported to ONNX, follow the instructions in [torch.onnx](https://pytorch.org/docs/stable/onnx.html). Example: ```python >>> from pytorch_quantization.nn import TensorQuantizer >>> TensorQuantizer.use_fb_fake_quant = True >>> # Load the calibrated model >>> ... >>> # ONNX export >>> torch.onnx.export(...) ``` ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## QDQBertConfig [[autodoc]] QDQBertConfig ## QDQBertModel [[autodoc]] QDQBertModel - forward ## QDQBertLMHeadModel [[autodoc]] QDQBertLMHeadModel - forward ## QDQBertForMaskedLM [[autodoc]] QDQBertForMaskedLM - forward ## QDQBertForSequenceClassification [[autodoc]] QDQBertForSequenceClassification - forward ## QDQBertForNextSentencePrediction [[autodoc]] QDQBertForNextSentencePrediction - forward ## QDQBertForMultipleChoice [[autodoc]] QDQBertForMultipleChoice - forward ## QDQBertForTokenClassification [[autodoc]] QDQBertForTokenClassification - forward ## QDQBertForQuestionAnswering [[autodoc]] QDQBertForQuestionAnswering - forward
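Putting the snippets above together, here is a hedged sketch of instantiating a QDQBERT model from a regular BERT checkpoint once the default quantizers are set; the checkpoint and the sequence-classification head are illustrative choices, not part of the reference above.

```python
import pytorch_quantization.nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor
from transformers import AutoTokenizer, QDQBertForSequenceClassification

# Set the default quantizers before instantiating the model (as described above).
quant_nn.QuantLinear.set_default_quant_desc_input(QuantDescriptor(num_bits=8, calib_method="max"))
quant_nn.QuantLinear.set_default_quant_desc_weight(QuantDescriptor(num_bits=8, axis=(0,)))

# Any HuggingFace BERT checkpoint can be loaded into the QDQBERT architecture.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # e.g. torch.Size([1, 2])
```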
huggingface/transformers/blob/main/docs/source/en/model_doc/qdqbert.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # UniSpeech ## Overview The UniSpeech model was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. The abstract from the paper is the following: *In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech). ## Usage tips - UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [`Wav2Vec2Processor`] for the feature extraction. - The UniSpeech model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. ## Resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## UniSpeechConfig [[autodoc]] UniSpeechConfig ## UniSpeech specific outputs [[autodoc]] models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput ## UniSpeechModel [[autodoc]] UniSpeechModel - forward ## UniSpeechForCTC [[autodoc]] UniSpeechForCTC - forward ## UniSpeechForSequenceClassification [[autodoc]] UniSpeechForSequenceClassification - forward ## UniSpeechForPreTraining [[autodoc]] UniSpeechForPreTraining - forward
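As a hedged illustration of the usage tips above, this sketch runs greedy CTC inference with a UniSpeech checkpoint; the checkpoint name and the silent dummy waveform are assumptions made to keep the example self-contained, so substitute a real fine-tuned model and real 16kHz audio.

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, UniSpeechForCTC

# The checkpoint name is an assumption; use any CTC fine-tuned UniSpeech checkpoint.
checkpoint = "microsoft/unispeech-1350-en-90-it-ft-1h"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = UniSpeechForCTC.from_pretrained(checkpoint)

# One second of silence at 16kHz stands in for real speech input.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: most likely token per frame, collapsed by the tokenizer.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```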
huggingface/transformers/blob/main/docs/source/en/model_doc/unispeech.md
<!--Copyright 2023 The GLIGEN Authors and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN). The [`StableDiffusionGLIGENPipeline`] and [`StableDiffusionGLIGENTextImagePipeline`] can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with [`StableDiffusionGLIGENPipeline`], if input images are given, [`StableDiffusionGLIGENTextImagePipeline`] can insert objects described by text at the region defined by bounding boxes. Otherwise, it'll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It's trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the [paper](https://huggingface.co/papers/2301.07093) is: *Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.* <Tip> Make sure to check out the Stable Diffusion [Tips](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organizations! </Tip> [`StableDiffusionGLIGENPipeline`] was contributed by [Nikhil Gajendrakumar](https://github.com/nikhil-masterful) and [`StableDiffusionGLIGENTextImagePipeline`] was contributed by [Nguyễn Công Tú Anh](https://github.com/tuanh123789).
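As a hedged sketch of grounded text-plus-box generation with [`StableDiffusionGLIGENPipeline`], the example below pairs each phrase with a normalized bounding box; the checkpoint name, prompt, phrases, and box coordinates are illustrative, so check the pipeline reference below for the full call signature.

```python
import torch
from diffusers import StableDiffusionGLIGENPipeline

# Checkpoint name assumed for illustration; pick a GLIGEN text-box checkpoint from the Hub.
pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

# Boxes are normalized [xmin, ymin, xmax, ymax] coordinates; each box is paired with a phrase.
image = pipe(
    prompt="a waterfall and a modern high speed train in a beautiful forest",
    gligen_phrases=["a waterfall", "a modern high speed train"],
    gligen_boxes=[[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]],
    gligen_scheduled_sampling_beta=1,
    num_inference_steps=50,
).images[0]

image.save("gligen_text_box.png")
```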
## StableDiffusionGLIGENPipeline [[autodoc]] StableDiffusionGLIGENPipeline - all - __call__ - enable_vae_slicing - disable_vae_slicing - enable_vae_tiling - disable_vae_tiling - enable_model_cpu_offload - prepare_latents - enable_fuser ## StableDiffusionGLIGENTextImagePipeline [[autodoc]] StableDiffusionGLIGENTextImagePipeline - all - __call__ - enable_vae_slicing - disable_vae_slicing - enable_vae_tiling - disable_vae_tiling - enable_model_cpu_offload - prepare_latents - enable_fuser ## StableDiffusionPipelineOutput [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/gligen.md
his notebook shows how to deploy a vision model from 🤗 Transformers (written in TensorFlow) to [Vertex AI](https://cloud.google.com/vertex-ai). This is beneficial in many ways: * Vertex AI provides support for autoscaling, authorization, and authentication out of the box. * One can maintain multiple versions of a model and can control the traffic split very easily. * Purely serverless. This notebook uses code from [this official GCP example](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/vertex_endpoints/optimized_tensorflow_runtime/bert_optimized_online_prediction.ipynb). This tutorial uses the following billable components of Google Cloud: * Vertex AI * Cloud Storage Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. ## Initial setup First authenticate yourself to provide Colab access to your GCP resources. ```python from google.colab import auth auth.authenticate_user() ``` ```python # Storage bucket GCS_BUCKET = "gs://[GCS-BUCKET-NAME]" REGION = "us-central1" ``` ```python !gsutil mb -l $REGION $GCS_BUCKET ``` ```python # Install Vertex AI SDK and transformers !pip install --upgrade google-cloud-aiplatform transformers -q ``` ## Initial imports ```python from transformers import ViTImageProcessor, TFViTForImageClassification import tensorflow as tf import tempfile import requests import base64 import json import os ``` ```python import transformers print(tf.__version__) print(transformers.__version__) ``` ## Save the model locally We will work with a [Vision Transformer B-16 model provided by 🤗 Transformers](https://huggingface.co/docs/transformers/main/en/model_doc/vit). We will first initialize it, load the model weights, and then save it locally as a [SavedModel](https://www.tensorflow.org/guide/saved_model) resource. ```python # the saved_model parameter is a flag to create a saved model version of the model LOCAL_MODEL_DIR = "vit" model = TFViTForImageClassification.from_pretrained("google/vit-base-patch16-224") model.save_pretrained(LOCAL_MODEL_DIR, saved_model=True) ``` ```python # Inspect the input and output signatures of the model !saved_model_cli show --dir {LOCAL_MODEL_DIR}/saved_model/1 --all ``` ## Embedding pre and post processing ops inside the model ML models usually require some pre and post processing of the input data and predicted results. So, it's a good idea to ship an ML model that already has these supports. It also helps in reducing training/serving skew. For our model we need: * Data normalization, resizing, and transposition as the preprocessing ops. * Mapping the predicted logits to ImageNet-1k classes as the post-processing ops. ```python processor = ViTImageProcessor() processor ``` ```python CONCRETE_INPUT = "pixel_values" SIZE = processor.size["height"] INPUT_SHAPE = (SIZE, SIZE, 3) ``` ```python def normalize_img(img, mean=processor.image_mean, std=processor.image_std): # Scale to the value range of [0, 1] first and then normalize. img = img / 255 mean = tf.constant(mean) std = tf.constant(std) return (img - mean) / std def preprocess(string_input): decoded = tf.io.decode_jpeg(string_input, channels=3) resized = tf.image.resize(decoded, size=(SIZE, SIZE)) normalized = normalize_img(resized) normalized = tf.transpose(normalized, (2, 0, 1)) # Since HF models are channel-first. 
return normalized @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def preprocess_fn(string_input): decoded_images = tf.map_fn( preprocess, string_input, fn_output_signature=tf.float32, ) return {CONCRETE_INPUT: decoded_images} def model_exporter(model: tf.keras.Model): m_call = tf.function(model.call).get_concrete_function( tf.TensorSpec( shape=[None, 3, SIZE, SIZE], dtype=tf.float32, name=CONCRETE_INPUT ) ) @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def serving_fn(string_input): labels = tf.constant( list(model.config.id2label.values()), dtype=tf.string ) images = preprocess_fn(string_input) predictions = m_call(**images) indices = tf.argmax(predictions.logits, axis=1) pred_source = tf.gather(params=labels, indices=indices) probs = tf.nn.softmax(predictions.logits, axis=1) pred_confidence = tf.reduce_max(probs, axis=1) return {"label": pred_source, "confidence": pred_confidence} return serving_fn ``` ```python # To deploy the model on Vertex AI we must have the model in a storage bucket. tf.saved_model.save( model, os.path.join(GCS_BUCKET, LOCAL_MODEL_DIR), signatures={"serving_default": model_exporter(model)}, ) ``` **Notes on making the model accept string inputs**: When dealing with images via REST or gRPC requests the size of the request payload can easily spiral up depending on the resolution of the images being passed. This is why, it is good practice to compress them reliably and then prepare the request payload. ## Deployment on Vertex AI [This resource](https://cloud.google.com/vertex-ai/docs/general/general-concepts) shows some relevant concepts on Vertex AI. ```python from google.cloud.aiplatform import gapic as aip ``` ```python # Deployment hardware DEPLOY_COMPUTE = "n1-standard-8" DEPLOY_GPU = aip.AcceleratorType.NVIDIA_TESLA_T4 PROJECT_ID = "GCP-PROJECT-ID" ``` ```python # Initialize clients. API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com" PARENT = f"projects/{PROJECT_ID}/locations/{REGION}" client_options = {"api_endpoint": API_ENDPOINT} model_service_client = aip.ModelServiceClient(client_options=client_options) endpoint_service_client = aip.EndpointServiceClient(client_options=client_options) prediction_service_client = aip.PredictionServiceClient(client_options=client_options) ``` ```python # Upload the model to Vertex AI. tf28_gpu_model_dict = { "display_name": "ViT Base TF2.8 GPU model", "artifact_uri": f"{GCS_BUCKET}/{LOCAL_MODEL_DIR}", "container_spec": { "image_uri": "us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-8:latest", }, } tf28_gpu_model = ( model_service_client.upload_model(parent=PARENT, model=tf28_gpu_model_dict) .result(timeout=180) .model ) tf28_gpu_model ``` ```python # Create an Endpoint for the model. tf28_gpu_endpoint_dict = { "display_name": "ViT Base TF2.8 GPU endpoint", } tf28_gpu_endpoint = ( endpoint_service_client.create_endpoint( parent=PARENT, endpoint=tf28_gpu_endpoint_dict ) .result(timeout=300) .name ) tf28_gpu_endpoint ``` ```python # Deploy the Endpoint. 
tf28_gpu_deployed_model_dict = { "model": tf28_gpu_model, "display_name": "ViT Base TF2.8 GPU deployed model", "dedicated_resources": { "min_replica_count": 1, "max_replica_count": 1, "machine_spec": { "machine_type": DEPLOY_COMPUTE, "accelerator_type": DEPLOY_GPU, "accelerator_count": 1, }, }, } tf28_gpu_deployed_model = endpoint_service_client.deploy_model( endpoint=tf28_gpu_endpoint, deployed_model=tf28_gpu_deployed_model_dict, traffic_split={"0": 100}, ).result() tf28_gpu_deployed_model ``` ## Make a prediction request ```python # Generate sample data. import base64 image_path = tf.keras.utils.get_file( "image.jpg", "http://images.cocodataset.org/val2017/000000039769.jpg" ) bytes = tf.io.read_file(image_path) b64str = base64.b64encode(bytes.numpy()).decode("utf-8") ``` ```python # Model input signature key name. pushed_model_location = os.path.join(GCS_BUCKET, LOCAL_MODEL_DIR) loaded = tf.saved_model.load(pushed_model_location) serving_input = list( loaded.signatures["serving_default"].structured_input_signature[1].keys() )[0] print("Serving function input:", serving_input) ``` ```python from google.protobuf import json_format from google.protobuf.struct_pb2 import Value def predict_image(image, endpoint, serving_input): # The format of each instance should conform to the deployed model's prediction input schema. instances_list = [{serving_input: {"b64": image}}] instances = [json_format.ParseDict(s, Value()) for s in instances_list] print( prediction_service_client.predict( endpoint=endpoint, instances=instances, ) ) predict_image(b64str, tf28_gpu_endpoint, serving_input) ``` ## Cleaning up of resources ```python def cleanup(endpoint, model_name, deployed_model_id): response = endpoint_service_client.undeploy_model( endpoint=endpoint, deployed_model_id=deployed_model_id ) print("running undeploy_model operation:", response.operation.name) print(response.result()) response = endpoint_service_client.delete_endpoint(name=endpoint) print("running delete_endpoint operation:", response.operation.name) print(response.result()) response = model_service_client.delete_model(name=model_name) print("running delete_model operation:", response.operation.name) print(response.result()) cleanup(tf28_gpu_endpoint, tf28_gpu_model, tf28_gpu_deployed_model.deployed_model.id) ``` ```python !gsutil rm -r $GCS_BUCKET ```
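As an optional aside (to be run before the clean-up above), it can be reassuring to smoke-test the exported serving signature locally, without going through the Vertex AI endpoint at all. The snippet below is an illustrative sketch rather than part of the original notebook; it reuses the `loaded` SavedModel, the `serving_input` key, and the raw image bytes from the prediction section. Note that the local call takes raw JPEG bytes, whereas the REST payload wraps a base64 string in `{"b64": ...}`.

```python
# Call the exported serving signature directly (illustrative sketch, not part of the original notebook).
local_serving_fn = loaded.signatures["serving_default"]

# `bytes` above holds the raw JPEG bytes as a scalar string tensor; add a batch dimension.
local_outputs = local_serving_fn(**{serving_input: tf.expand_dims(bytes, axis=0)})

print("label:", local_outputs["label"].numpy()[0].decode("utf-8"))
print("confidence:", float(local_outputs["confidence"].numpy()[0]))
```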
huggingface/blog/blob/main/notebooks/112_vertex_ai_vision.ipynb
Using PEFT with timm `peft` allows us to train any model with LoRA as long as the layer type is supported. Since `Conv2D` is one of the supported layer types, it makes sense to test it on image models. In this short notebook, we will demonstrate this with an image classification task using [`timm`](https://huggingface.co/docs/timm/index). ## Imports Make sure that you have the latest version of `peft` installed. To ensure that, run this in your Python environment: python -m pip install --upgrade peft Also, ensure that `timm` is installed: python -m pip install --upgrade timm ```python import timm import torch from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform ``` ```python import peft from datasets import load_dataset ``` ```python torch.manual_seed(0) ``` ## Loading the pre-trained base model We use a small pretrained `timm` model, `PoolFormer`. Find more info on its [model card](https://huggingface.co/timm/poolformer_m36.sail_in1k). ```python model_id_timm = "timm/poolformer_m36.sail_in1k" ``` We tell `timm` that we deal with 3 classes, to ensure that the classification layer has the correct size. ```python model = timm.create_model(model_id_timm, pretrained=True, num_classes=3) ``` These are the transformations steps necessary to process the image. ```python transform = create_transform(**resolve_data_config(model.pretrained_cfg, model=model)) ``` ## Data For this exercise, we use the "beans" dataset. More details on the dataset can be found on [its datasets page](https://huggingface.co/datasets/beans). For our purposes, what's important is that we have image inputs and the target we're trying to predict is one of three classes for each image. ```python ds = load_dataset("beans") ``` ```python ds_train = ds["train"] ds_valid = ds["validation"] ``` ```python ds_train[0]["image"] ``` We define a small processing function which is responsible for loading and transforming the images, as well as extracting the labels. ```python def process(batch): x = torch.cat([transform(img).unsqueeze(0) for img in batch["image"]]) y = torch.tensor(batch["labels"]) return {"x": x, "y": y} ``` ```python ds_train.set_transform(process) ds_valid.set_transform(process) ``` ```python train_loader = torch.utils.data.DataLoader(ds_train, batch_size=32) valid_loader = torch.utils.data.DataLoader(ds_valid, batch_size=32) ``` ## Training This is just a function that performs the train loop, nothing fancy happening. 
```python def train(model, optimizer, criterion, train_dataloader, valid_dataloader, epochs): for epoch in range(epochs): model.train() train_loss = 0 for batch in train_dataloader: xb, yb = batch["x"], batch["y"] xb, yb = xb.to(device), yb.to(device) outputs = model(xb) lsm = torch.nn.functional.log_softmax(outputs, dim=-1) loss = criterion(lsm, yb) train_loss += loss.detach().float() loss.backward() optimizer.step() optimizer.zero_grad() model.eval() valid_loss = 0 correct = 0 n_total = 0 for batch in valid_dataloader: xb, yb = batch["x"], batch["y"] xb, yb = xb.to(device), yb.to(device) with torch.no_grad(): outputs = model(xb) lsm = torch.nn.functional.log_softmax(outputs, dim=-1) loss = criterion(lsm, yb) valid_loss += loss.detach().float() correct += (outputs.argmax(-1) == yb).sum().item() n_total += len(yb) train_loss_total = (train_loss / len(train_dataloader)).item() valid_loss_total = (valid_loss / len(valid_dataloader)).item() valid_acc_total = correct / n_total print(f"{epoch=:<2} {train_loss_total=:.4f} {valid_loss_total=:.4f} {valid_acc_total=:.4f}") ``` ### Selecting which layers to fine-tune with LoRA Let's take a look at the layers of our model. We only print the first 30, since there are quite a few: ```python [(n, type(m)) for n, m in model.named_modules()][:30] ``` Most of these layers are not good targets for LoRA, but we see a couple that should interest us. Their names are `'stages.0.blocks.0.mlp.fc1'`, etc. With a bit of regex, we can match them easily. Also, we should inspect the name of the classification layer, since we want to train that one too! ```python [(n, type(m)) for n, m in model.named_modules()][-5:] ``` config = peft.LoraConfig( r=8, target_modules=r".*\.mlp\.fc\d|head\.fc", ) Okay, this gives us all the information we need to fine-tune this model. With a bit of regex, we match the convolutional layers that should be targeted for LoRA. We also want to train the classification layer `'head.fc'` (without LoRA), so we add it to the `modules_to_save`. ```python config = peft.LoraConfig(r=8, target_modules=r".*\.mlp\.fc\d", modules_to_save=["head.fc"]) ``` Finally, let's create the `peft` model, the optimizer and criterion, and we can get started. As shown below, less than 2% of the model's total parameters are updated thanks to `peft`. ```python device = "cuda" if torch.cuda.is_available() else "cpu" peft_model = peft.get_peft_model(model, config).to(device) optimizer = torch.optim.Adam(peft_model.parameters(), lr=2e-4) criterion = torch.nn.CrossEntropyLoss() peft_model.print_trainable_parameters() ``` ```python %time train(peft_model, optimizer, criterion, train_loader, valid_dataloader=valid_loader, epochs=10) ``` We get an accuracy of ~0.97, despite only training a tiny amount of parameters. That's a really nice result. ## Sharing the model through Hugging Face Hub ### Pushing the model to Hugging Face Hub If we want to share the fine-tuned weights with the world, we can upload them to Hugging Face Hub like this: ```python user = "BenjaminB" # put your user name here model_name = "peft-lora-with-timm-model" model_id = f"{user}/{model_name}" ``` ```python peft_model.push_to_hub(model_id); ``` As we can see, the adapter size is only 4.3 MB. The original model was 225 MB. That's a very big saving. ### Loading the model from HF Hub Now, it only takes one step to load the model from HF Hub. 
To do this, we can use `PeftModel.from_pretrained`, passing our freshly initialized base model and the model ID:

```python
base_model = timm.create_model(model_id_timm, pretrained=True, num_classes=3)
loaded = peft.PeftModel.from_pretrained(base_model, model_id)
```

Let's check that the model loaded from the Hub produces the same outputs as the model we just trained:

```python
# Compare the outputs of the trained PEFT model and the one loaded from the Hub
x = ds_train[:1]["x"]
y_peft = peft_model(x.to(device))
y_loaded = loaded(x)
torch.allclose(y_peft.cpu(), y_loaded)
```

### Clean up

Finally, as a clean-up step, you may want to delete the repo.

```python
from huggingface_hub import delete_repo
```

```python
delete_repo(model_id)
```
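As an optional last sanity check, we can run the locally loaded model on a validation image and map the predicted index back to a human-readable class name. This is a small illustrative sketch, not part of the original notebook; it only relies on objects that already exist locally, so it works even after the Hub repo has been deleted.

```python
# Classify one validation image with the model loaded from the Hub (illustrative sketch).
label_names = ds["train"].features["labels"].names  # class names of the "beans" dataset

sample = ds_valid[:1]["x"]  # already preprocessed by `process`
with torch.no_grad():
    logits = loaded(sample)

predicted_class = logits.argmax(-1).item()
print("Predicted class:", label_names[predicted_class])
```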
huggingface/peft/blob/main/examples/image_classification/image_classification_timm_peft_lora.ipynb
Gradio Demo: image_classifier_interface_load ``` !pip install -q gradio ``` ``` # Downloading files from the demo repo import os !wget -q https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interface_load/cheetah1.jpeg !wget -q https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interface_load/cheetah1.jpg !wget -q https://github.com/gradio-app/gradio/raw/main/demo/image_classifier_interface_load/lion.jpg ``` ``` import gradio as gr import pathlib current_dir = pathlib.Path(__file__).parent images = [str(current_dir / "cheetah1.jpeg"), str(current_dir / "cheetah1.jpg"), str(current_dir / "lion.jpg")] img_classifier = gr.load( "models/google/vit-base-patch16-224", examples=images, cache_examples=False ) def func(img, text): return img_classifier(img), text using_img_classifier_as_function = gr.Interface( func, [gr.Image(type="filepath"), "text"], ["label", "text"], examples=[ [str(current_dir / "cheetah1.jpeg"), None], [str(current_dir / "cheetah1.jpg"), "cheetah"], [str(current_dir / "lion.jpg"), "lion"], ], cache_examples=False, ) demo = gr.TabbedInterface([using_img_classifier_as_function, img_classifier]) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/image_classifier_interface_load/run.ipynb
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Monocular depth estimation Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a single image. In other words, it is the process of estimating the distance of objects in a scene from a single camera viewpoint. Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving, and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions, occlusion, and texture. <Tip> The task illustrated in this tutorial is supported by the following model architectures: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [DPT](../model_doc/dpt), [GLPN](../model_doc/glpn) <!--End of the generated tip--> </Tip> In this guide you'll learn how to: * create a depth estimation pipeline * run depth estimation inference by hand Before you begin, make sure you have all the necessary libraries installed: ```bash pip install -q transformers ``` ## Depth estimation pipeline The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [`pipeline`]. Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads): ```py >>> from transformers import pipeline >>> checkpoint = "vinvino02/glpn-nyu" >>> depth_estimator = pipeline("depth-estimation", model=checkpoint) ``` Next, choose an image to analyze: ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg" alt="Photo of a busy street"/> </div> Pass the image to the pipeline. ```py >>> predictions = depth_estimator(image) ``` The pipeline returns a dictionary with two entries. The first one, called `predicted_depth`, is a tensor with the values being the depth expressed in meters for each pixel. The second one, `depth`, is a PIL image that visualizes the depth estimation result. 
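If you are curious about the raw values before looking at the visualization, you can inspect the tensor directly (an optional check, not required for the rest of the guide):

```py
>>> predicted_depth = predictions["predicted_depth"]
>>> predicted_depth.shape, predicted_depth.min(), predicted_depth.max()
```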
Let's take a look at the visualized result: ```py >>> predictions["depth"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div> ## Depth estimation inference by hand Now that you've seen how to use the depth estimation pipeline, let's see how we can replicate the same result by hand. Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads). Here we'll use the same checkpoint as before: ```py >>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation >>> checkpoint = "vinvino02/glpn-nyu" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) >>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint) ``` Prepare the image input for the model using the `image_processor` that will take care of the necessary image transformations such as resizing and normalization: ```py >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values ``` Pass the prepared inputs through the model: ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(pixel_values) ... predicted_depth = outputs.predicted_depth ``` Visualize the results: ```py >>> import numpy as np >>> # interpolate to original size >>> prediction = torch.nn.functional.interpolate( ... predicted_depth.unsqueeze(1), ... size=image.size[::-1], ... mode="bicubic", ... align_corners=False, ... ).squeeze() >>> output = prediction.numpy() >>> formatted = (output * 255 / np.max(output)).astype("uint8") >>> depth = Image.fromarray(formatted) >>> depth ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div>
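The grayscale map above can be hard to read at a glance. If you prefer a colored depth map, one option is to display the raw values with a matplotlib colormap; this is a small optional sketch rather than part of the recommended workflow:

```py
>>> import matplotlib.pyplot as plt

>>> plt.imshow(output, cmap="inferno")
>>> plt.axis("off")
>>> plt.show()
```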
huggingface/transformers/blob/main/docs/source/en/tasks/monocular_depth_estimation.md
-- title: "Retrieval Augmented Generation with Huggingface Transformers and Ray" thumbnail: /blog/assets/12_ray_rag/ray_arch_updated.png authors: - user: ray-project guest: true --- # Retrieval Augmented Generation with Huggingface Transformers and Ray ##### A guest blog post by <a href="/amogkam">Amog Kamsetty</a> from the Anyscale team [Huggingface Transformers](https://huggingface.co/) recently added the [Retrieval Augmented Generation (RAG)](https://twitter.com/huggingface/status/1310597560906780680) model, a new NLP architecture that leverages external documents (like Wikipedia) to augment its knowledge and achieve state of the art results on knowledge-intensive tasks. In this blog post, we introduce the integration of [Ray](https://docs.ray.io/en/master/), a library for building scalable applications, into the RAG contextual document retrieval mechanism. This speeds up retrieval calls by 2x and improves the scalability of RAG distributed [fine-tuning](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag). ### What is Retrieval Augmented Generation (RAG)? ![alt_text](assets/12_ray_rag/rag_gif.gif "image_tooltip") _An overview of RAG. The model retrieves contextual documents from an external dataset as part of its execution. These contextual documents are used in conjunction with the original input to produce an output. The GIF is taken from [Facebook's original blog post](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models)._ Recently, [Huggingface](https://huggingface.co/) partnered with [Facebook AI](https://ai.facebook.com/) to introduce the [RAG](https://twitter.com/huggingface/status/1310597560906780680) model as part of its Transformers library. [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) acts just like any other [seq2seq model](https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html). However, [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) has an intermediate component that retrieves contextual documents from an external knowledge base (like a Wikipedia text corpus). These documents are then used in conjunction with the input sequence and passed into the underlying seq2seq [generator](https://huggingface.co/blog/how-to-generate). This information retrieval step allows [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) to make use of multiple sources of knowledge -- those that are baked into the model parameters and the information that is contained in the contextual passages, allowing it to outperform other state-of-the-art models in tasks like question answering. You can try it for yourself using this [demo provided by Huggingface](https://huggingface.co/rag/)! ### Scaling up fine-tuning This retrieval of contextual documents is crucial for RAG's state-of-the-art results but introduces an extra layer of complexity. When scaling up the training process via a data-parallel training routine, a naive implementation of the document lookup can become a bottleneck for training. 
Further, the **document index** used in the retrieval component is often quite large, making it infeasible for each training worker to load its own replicated copy of the index. The previous implementation of RAG fine-tuning leveraged the [torch.distributed](https://pytorch.org/docs/stable/distributed.html) communication package for the document retrieval portion. However, this implementation sometimes proved to be inflexible and limited in scalability. Instead, a framework-agnostic and a more flexible implementation for ad-hoc concurrent programming is required. [Ray](https://ray.io/) fits the bill perfectly. Ray is a simple, yet powerful Python library for general-purpose distributed and parallel programming. Using Ray for distributed document retrieval, we achieved a **2x speedup per retrieval call compared to `torch.distributed`**, and overall better fine-tuning scalability. ### Ray for Document Retrieval ![alt_text](assets/12_ray_rag/torch_distributed_document_retrieval.png "image_tooltip") _Document retrieval with the torch.distributed implementation_ The main drawback of the [torch.distributed](https://pytorch.org/docs/stable/distributed.html) implementation for document retrieval was that it latched onto the same process group used for training and only the rank 0 training worker loaded the index into memory. As a result, this implementation had some limitations: 1. **Synchronization bottleneck**: The rank 0 worker had to receive the inputs from all workers, perform the index query, and then send the results back to the other workers. This limited performance with multiple training workers. 2. **PyTorch specific**: The document retrieval process group had to latch onto the existing process group used for training, meaning that PyTorch had to be used for training as well. ![alt_text](assets/12_ray_rag/ray_arch_updated.png "image_tooltip") _Document retrieval with the Ray implementation_ To overcome these limitations, we introduced a novel implementation of distributed retrieval based on Ray. With [Ray’s stateful actor abstractions](https://docs.ray.io/en/master/actors.html), multiple processes that are separate from the training processes are used to load the index and handle the retrieval queries. With multiple Ray actors, retrieval is no longer a bottleneck and PyTorch is no longer a requirement for RAG. And as you can see below, using the [Ray](https://docs.ray.io/en/master/) based implementation leads to better retrieval performance for multi-GPU fine-tuning. The following results show the seconds per retrieval call and we can see that as we increase the number of GPUs that we train on, using Ray has comparatively better performance than `torch.distributed`. Also, if we increase the number of Ray processes that perform retrieval, we also get better performance with more training workers since a single retrieval process is no longer a bottleneck. <table> <tr> <td> </td> <td>2 GPU </td> <td>3 GPU </td> <td>4 GPU </td> </tr> <tr> <td>torch.distributed </td> <td>2.12 sec/retrieval </td> <td>2.62 sec/retrieve </td> <td>3.438 sec/retrieve </td> </tr> <tr> <td>Ray 2 retrieval processes </td> <td>1.49 sec/retrieve </td> <td>1.539 sec/retrieve </td> <td>2.029 sec/retrieve </td> </tr> <tr> <td>Ray 4 retrieval processes </td> <td>1.145 sec/retrieve </td> <td>1.484 sec/retrieve </td> <td>1.66 sec/retrieve </td> </tr> </table> _A performance comparison of different retrieval implementations. 
For each document retrieval implementation, we run 500 training steps with a per-GPU batch size of 8, and measure the time it takes to retrieve the contextual documents for each batch on the rank 0 training worker. As the results show, using multiple retrieval processes improves performance, especially as we scale training to multiple GPUs._ ### How do I use it? [Huggingface](https://huggingface.co/) provides a [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) based [fine tuning script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag), and we extended it to add the Ray retrieval implementation as an option. To try it out, first install the necessary requirements ```bash pip install ray pip install transformers pip install -r transformers/examples/research_projects/rag/requirements.txt ``` Then, you can specify your data paths and other configurations and run [finetune-rag-ray.sh](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag_ray.sh)! ```bash # Sample script to finetune RAG using Ray for distributed retrieval. # Add parent directory to python path to access lightning_base.py export PYTHONPATH="../":"${PYTHONPATH}" # Start a single-node Ray cluster. ray start --head # A sample finetuning run, you need to specify data_dir, output_dir and model_name_or_path # run ./examples/rag/finetune_rag_ray.sh --help to see all the possible options python examples/rag/finetune_rag.py \ --data_dir $DATA_DIR \ --output_dir $OUTPUT_DIR \ --model_name_or_path $MODEL_NAME_OR_PATH \ --model_type rag_sequence \ --fp16 \ --gpus 8 \ --profile \ --do_train \ --do_predict \ --n_val -1 \ --train_batch_size 8 \ --eval_batch_size 1 \ --max_source_length 128 \ --max_target_length 25 \ --val_max_target_length 25 \ --test_max_target_length 25 \ --label_smoothing 0.1 \ --dropout 0.1 \ --attention_dropout 0.1 \ --weight_decay 0.001 \ --adam_epsilon 1e-08 \ --max_grad_norm 0.1 \ --lr_scheduler polynomial \ --learning_rate 3e-05 \ --num_train_epochs 100 \ --warmup_steps 500 \ --gradient_accumulation_steps 1 \ --distributed_retriever ray \ --num_retrieval_workers 4 # Stop the Ray cluster. ray stop ``` ## What’s next? Using RAG with [Huggingface transformers](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag) and the [Ray retrieval implementation](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag_ray.sh) for faster distributed fine-tuning, you can leverage RAG for retrieval-based generation on your own knowledge-intensive tasks. Also, hyperparameter tuning is another aspect of transformer fine tuning and can have [huge impacts on accuracy](https://medium.com/distributed-computing-with-ray/hyperparameter-optimization-for-transformers-a-guide-c4e32c6c989b). For scalable and easy hyperparameter tuning, check out the [Ray Tune](https://docs.ray.io/en/latest/tune/) library. By using [Ray Tune’s integration with PyTorch Lightning](https://medium.com/distributed-computing-with-ray/scaling-up-pytorch-lightning-hyperparameter-tuning-with-ray-tune-4bd9e1ff9929), or the [built-in integration with Huggingface transformers](https://huggingface.co/blog/ray-tune), you can run experiments to find the perfect hyperparameters for your RAG model. 
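If you have not used Ray before, the stateful actor pattern behind the retrieval implementation described earlier is easy to try in isolation. The snippet below is a purely illustrative sketch (it is not the code used in the RAG example, and the toy "index" is made up), but it shows the core idea: actors load state once and then answer queries concurrently, independently of the training processes.

```python
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
class RetrievalWorker:
    def __init__(self):
        # In the real setup, each actor would load the (large) document index once here.
        self.index = {"who wrote hamlet?": "William Shakespeare wrote Hamlet."}

    def retrieve(self, query: str) -> str:
        return self.index.get(query.lower(), "no document found")

# Several actors can serve retrieval queries concurrently.
workers = [RetrievalWorker.remote() for _ in range(2)]
print(ray.get(workers[0].retrieve.remote("Who wrote Hamlet?")))
```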
And lastly, stay tuned for a potential TensorFlow implementation of [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models) on [Huggingface](https://huggingface.co/)!

If you plan to try the RAG + Ray integration out, please feel free to share your experiences on the [Ray Discourse](https://discuss.ray.io/) or join the [Ray community Slack](https://docs.google.com/forms/d/e/1FAIpQLSfAcoiLCHOguOm8e7Jnn-JJdZaCxPGjgVCvFijHB5PLaQLeig/viewform) for further discussion -- we’d love to hear from you!

> Also published at https://medium.com/distributed-computing-with-ray/retrieval-augmented-generation-with-huggingface-transformers-and-ray-b09b56161b1e
huggingface/blog/blob/main/ray-rag.md
@gradio/image ## 0.5.3 ### Fixes - [#6766](https://github.com/gradio-app/gradio/pull/6766) [`73268ee`](https://github.com/gradio-app/gradio/commit/73268ee2e39f23ebdd1e927cb49b8d79c4b9a144) - Improve source selection UX. Thanks [@hannahblair](https://github.com/hannahblair)! ## 0.5.2 ### Patch Changes - Updated dependencies [[`245d58e`](https://github.com/gradio-app/gradio/commit/245d58eff788e8d44a59d37a2d9b26d0f08a62b4)]: - @gradio/client@0.9.2 - @gradio/upload@0.5.5 ## 0.5.1 ### Patch Changes - Updated dependencies [[`5d51fbc`](https://github.com/gradio-app/gradio/commit/5d51fbce7826da840a2fd4940feb5d9ad6f1bc5a), [`34f9431`](https://github.com/gradio-app/gradio/commit/34f943101bf7dd6b8a8974a6131c1ed7c4a0dac0)]: - @gradio/upload@0.5.4 - @gradio/client@0.9.1 ## 0.5.0 ### Features - [#6726](https://github.com/gradio-app/gradio/pull/6726) [`21cfb0a`](https://github.com/gradio-app/gradio/commit/21cfb0acc309bb1a392f4d8a8e42f6be864c5978) - Remove the styles from the Image/Video primitive components and Fix the container styles. Thanks [@whitphx](https://github.com/whitphx)! - [#6398](https://github.com/gradio-app/gradio/pull/6398) [`67ddd40`](https://github.com/gradio-app/gradio/commit/67ddd40b4b70d3a37cb1637c33620f8d197dbee0) - Lite v4. Thanks [@whitphx](https://github.com/whitphx)! - [#6399](https://github.com/gradio-app/gradio/pull/6399) [`053bec9`](https://github.com/gradio-app/gradio/commit/053bec98be1127e083414024e02cf0bebb0b5142) - Improve CSS token documentation in Storybook. Thanks [@hannahblair](https://github.com/hannahblair)! ### Fixes - [#6709](https://github.com/gradio-app/gradio/pull/6709) [`6a9151d`](https://github.com/gradio-app/gradio/commit/6a9151d5c9432c724098da7d88a539aaaf5ffe88) - Remove progress animation on streaming. Thanks [@aliabid94](https://github.com/aliabid94)! ## 0.4.2 ### Fixes - [#6635](https://github.com/gradio-app/gradio/pull/6635) [`b639e04`](https://github.com/gradio-app/gradio/commit/b639e040741e6c0d9104271c81415d7befbd8cf3) - Quick Image + Text Component Fixes. Thanks [@dawoodkhan82](https://github.com/dawoodkhan82)! ## 0.4.1 ### Patch Changes - Updated dependencies [[`71f1a1f99`](https://github.com/gradio-app/gradio/commit/71f1a1f9931489d465c2c1302a5c8d768a3cd23a)]: - @gradio/client@0.8.2 - @gradio/upload@0.5.1 ## 0.4.0 ### Highlights #### New `ImageEditor` component ([#6169](https://github.com/gradio-app/gradio/pull/6169) [`9caddc17b`](https://github.com/gradio-app/gradio/commit/9caddc17b1dea8da1af8ba724c6a5eab04ce0ed8)) A brand new component, completely separate from `Image` that provides simple editing capabilities. - Set background images from file uploads, webcam, or just paste! - Crop images with an improved cropping UI. App authors can event set specific crop size, or crop ratios (`1:1`, etc) - Paint on top of any image (or no image) and erase any mistakes! - The ImageEditor supports layers, confining draw and erase actions to that layer. - More flexible access to data. The image component returns a composite image representing the final state of the canvas as well as providing the background and all layers as individual images. - Fully customisable. All features can be enabled and disabled. Even the brush color swatches can be customised. 
<video src="https://user-images.githubusercontent.com/12937446/284027169-31188926-fd16-4a1c-8718-998e7aae4695.mp4" autoplay muted></video> ```py def fn(im): im["composite"] # the full canvas im["background"] # the background image im["layers"] # a list of individual layers im = gr.ImageEditor( # decide which sources you'd like to accept sources=["upload", "webcam", "clipboard"], # set a cropsize constraint, can either be a ratio or a concrete [width, height] crop_size="1:1", # enable crop (or disable it) transforms=["crop"], # customise the brush brush=Brush( default_size="25", # or leave it as 'auto' color_mode="fixed", # 'fixed' hides the user swatches and colorpicker, 'defaults' shows it default_color="hotpink", # html names are supported colors=[ "rgba(0, 150, 150, 1)", # rgb(a) "#fff", # hex rgb "hsl(360, 120, 120)" # in fact any valid colorstring ] ), brush=Eraser(default_size="25") ) ``` Thanks [@pngwn](https://github.com/pngwn)! ## 0.3.6 ### Fixes - [#6441](https://github.com/gradio-app/gradio/pull/6441) [`2f805a7dd`](https://github.com/gradio-app/gradio/commit/2f805a7dd3d2b64b098f659dadd5d01258290521) - Small but important bugfixes for gr.Image: The upload event was not triggering at all. The paste-from-clipboard was not triggering an upload event. The clear button was not triggering a change event. The change event was triggering infinitely. Uploaded images were not preserving their original names. Uploading a new image should clear out the previous image. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ## 0.3.5 ### Patch Changes - Updated dependencies [[`324867f63`](https://github.com/gradio-app/gradio/commit/324867f63c920113d89a565892aa596cf8b1e486), [`d84209703`](https://github.com/gradio-app/gradio/commit/d84209703b7a0728cdb49221e543500ddb6a8d33)]: - @gradio/client@0.8.1 - @gradio/wasm@0.3.0 - @gradio/upload@0.4.1 ## 0.3.4 ### Features - [#6363](https://github.com/gradio-app/gradio/pull/6363) [`4d3aad33a`](https://github.com/gradio-app/gradio/commit/4d3aad33a0b66639dbbb2928f305a79fb7789b2d) - Fix image upload. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ### Fixes - [#6322](https://github.com/gradio-app/gradio/pull/6322) [`6204ccac5`](https://github.com/gradio-app/gradio/commit/6204ccac5967763e0ebde550d04d12584243a120) - Fixes `gr.load()` so it works properly with Images and Examples. Thanks [@abidlabs](https://github.com/abidlabs)! ## 0.3.3 ### Patch Changes - Updated dependencies [[`bca6c2c80`](https://github.com/gradio-app/gradio/commit/bca6c2c80f7e5062427019de45c282238388af95), [`3cdeabc68`](https://github.com/gradio-app/gradio/commit/3cdeabc6843000310e1a9e1d17190ecbf3bbc780), [`fad92c29d`](https://github.com/gradio-app/gradio/commit/fad92c29dc1f5cd84341aae417c495b33e01245f)]: - @gradio/client@0.7.2 - @gradio/atoms@0.2.1 - @gradio/upload@0.3.3 - @gradio/statustracker@0.3.1 ## 0.3.2 ### Fixes - [#6213](https://github.com/gradio-app/gradio/pull/6213) [`27194a987`](https://github.com/gradio-app/gradio/commit/27194a987fa7ba1234b5fc0ce8bf7fabef7033a9) - Ensure the statustracker for `gr.Image` displays in static mode. Thanks [@pngwn](https://github.com/pngwn)! 
## 0.3.1 ### Patch Changes - Updated dependencies [[`2ba14b284`](https://github.com/gradio-app/gradio/commit/2ba14b284f908aa13859f4337167a157075a68eb)]: - @gradio/client@0.7.1 - @gradio/upload@0.3.1 ## 0.3.0 ### Features - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - fix circular dependency with client + upload. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Fix selectable prop in the backend. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Image v4. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Publish all components to npm. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Custom components. Thanks [@pngwn](https://github.com/pngwn)! - [#6171](https://github.com/gradio-app/gradio/pull/6171) [`28322422c`](https://github.com/gradio-app/gradio/commit/28322422cb9d8d3e471e439ad602959662e79312) - strip dangling svelte imports. Thanks [@pngwn](https://github.com/pngwn)! ## 0.3.0-beta.9 ### Features - [#6143](https://github.com/gradio-app/gradio/pull/6143) [`e4f7b4b40`](https://github.com/gradio-app/gradio/commit/e4f7b4b409323b01aa01b39e15ce6139e29aa073) - fix circular dependency with client + upload. Thanks [@pngwn](https://github.com/pngwn)! - [#6136](https://github.com/gradio-app/gradio/pull/6136) [`667802a6c`](https://github.com/gradio-app/gradio/commit/667802a6cdbfb2ce454a3be5a78e0990b194548a) - JS Component Documentation. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! - [#6094](https://github.com/gradio-app/gradio/pull/6094) [`c476bd5a5`](https://github.com/gradio-app/gradio/commit/c476bd5a5b70836163b9c69bf4bfe068b17fbe13) - Image v4. Thanks [@pngwn](https://github.com/pngwn)! - [#6149](https://github.com/gradio-app/gradio/pull/6149) [`90318b1dd`](https://github.com/gradio-app/gradio/commit/90318b1dd118ae08a695a50e7c556226234ab6dc) - swap `mode` on the frontned to `interactive` to match the backend. Thanks [@pngwn](https://github.com/pngwn)! - [#6135](https://github.com/gradio-app/gradio/pull/6135) [`bce37ac74`](https://github.com/gradio-app/gradio/commit/bce37ac744496537e71546d2bb889bf248dcf5d3) - Fix selectable prop in the backend. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ### Fixes - [#6146](https://github.com/gradio-app/gradio/pull/6146) [`40a171ea6`](https://github.com/gradio-app/gradio/commit/40a171ea60c74afa9519d6cb159def16ce68e1ca) - Fix image double change bug. Thanks [@pngwn](https://github.com/pngwn)! ## 0.3.0-beta.8 ### Features - [#6016](https://github.com/gradio-app/gradio/pull/6016) [`83e947676`](https://github.com/gradio-app/gradio/commit/83e947676d327ca2ab6ae2a2d710c78961c771a0) - Format js in v4 branch. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! 
- [#6044](https://github.com/gradio-app/gradio/pull/6044) [`9053c95a1`](https://github.com/gradio-app/gradio/commit/9053c95a10de12aef572018ee37c71106d2da675) - Simplify File Component. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ### Fixes - [#6046](https://github.com/gradio-app/gradio/pull/6046) [`dbb7de5e0`](https://github.com/gradio-app/gradio/commit/dbb7de5e02c53fee05889d696d764d212cb96c74) - fix tests. Thanks [@pngwn](https://github.com/pngwn)! ## 0.3.0-beta.7 ### Patch Changes - Updated dependencies [[`174b73619`](https://github.com/gradio-app/gradio/commit/174b736194756e23f51bbaf6f850bac5f1ca95b5), [`5fbda0bd2`](https://github.com/gradio-app/gradio/commit/5fbda0bd2b2bbb2282249b8875d54acf87cd7e84)]: - @gradio/wasm@0.2.0-beta.1 ## 0.3.0-beta.6 ### Features - [#5960](https://github.com/gradio-app/gradio/pull/5960) [`319c30f3f`](https://github.com/gradio-app/gradio/commit/319c30f3fccf23bfe1da6c9b132a6a99d59652f7) - rererefactor frontend files. Thanks [@pngwn](https://github.com/pngwn)! - [#5938](https://github.com/gradio-app/gradio/pull/5938) [`13ed8a485`](https://github.com/gradio-app/gradio/commit/13ed8a485d5e31d7d75af87fe8654b661edcca93) - V4: Use beta release versions for '@gradio' packages. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ## 0.4.0 ### Features - [#5627](https://github.com/gradio-app/gradio/pull/5627) [`b67115e8e`](https://github.com/gradio-app/gradio/commit/b67115e8e6e489fffd5271ea830211863241ddc5) - Lite: Make the Examples component display media files using pseudo HTTP requests to the Wasm server. Thanks [@whitphx](https://github.com/whitphx)! - [#5934](https://github.com/gradio-app/gradio/pull/5934) [`8d909624f`](https://github.com/gradio-app/gradio/commit/8d909624f61a49536e3c0f71cb2d9efe91216219) - Fix styling issues with Audio, Image and Video components. Thanks [@aliabd](https://github.com/aliabd)! ## 0.3.2 ### Patch Changes - Updated dependencies []: - @gradio/utils@0.1.2 - @gradio/atoms@0.1.4 - @gradio/statustracker@0.2.2 - @gradio/upload@0.3.2 ## 0.3.1 ### Patch Changes - Updated dependencies [[`8f0fed857`](https://github.com/gradio-app/gradio/commit/8f0fed857d156830626eb48b469d54d211a582d2)]: - @gradio/icons@0.2.0 - @gradio/atoms@0.1.3 - @gradio/statustracker@0.2.1 - @gradio/upload@0.3.1 ## 0.3.0 ### Features - [#5554](https://github.com/gradio-app/gradio/pull/5554) [`75ddeb390`](https://github.com/gradio-app/gradio/commit/75ddeb390d665d4484667390a97442081b49a423) - Accessibility Improvements. Thanks [@hannahblair](https://github.com/hannahblair)! ## 0.2.4 ### Fixes - [#5587](https://github.com/gradio-app/gradio/pull/5587) [`e0d61b8ba`](https://github.com/gradio-app/gradio/commit/e0d61b8baa0f6293f53b9bdb1647d42f9ae2583a) - Fix `.clear()` events for audio and image. Thanks [@dawoodkhan82](https://github.com/dawoodkhan82)! ## 0.2.3 ### Fixes - [#5528](https://github.com/gradio-app/gradio/pull/5528) [`dc86e4a7`](https://github.com/gradio-app/gradio/commit/dc86e4a7e1c40b910c74558e6f88fddf9b3292bc) - Lazy load all images. Thanks [@aliabid94](https://github.com/aliabid94)! 
## 0.2.2 ### Patch Changes - Updated dependencies [[`afac0006`](https://github.com/gradio-app/gradio/commit/afac0006337ce2840cf497cd65691f2f60ee5912)]: - @gradio/statustracker@0.2.0 - @gradio/utils@0.1.1 - @gradio/atoms@0.1.2 - @gradio/upload@0.2.1 ## 0.2.1 ### Patch Changes - Updated dependencies [[`abf1c57d`](https://github.com/gradio-app/gradio/commit/abf1c57d7d85de0df233ee3b38aeb38b638477db), [`79d8f9d8`](https://github.com/gradio-app/gradio/commit/79d8f9d891901683c5a1b7486efb44eab2478c96)]: - @gradio/icons@0.1.0 - @gradio/utils@0.1.0 - @gradio/upload@0.2.0 - @gradio/atoms@0.1.1 - @gradio/statustracker@0.1.1 ## 0.2.0 ### Highlights #### Improve startup performance and markdown support ([#5279](https://github.com/gradio-app/gradio/pull/5279) [`fe057300`](https://github.com/gradio-app/gradio/commit/fe057300f0672c62dab9d9b4501054ac5d45a4ec)) ##### Improved markdown support We now have better support for markdown in `gr.Markdown` and `gr.Dataframe`. Including syntax highlighting and Github Flavoured Markdown. We also have more consistent markdown behaviour and styling. ##### Various performance improvements These improvements will be particularly beneficial to large applications. - Rather than attaching events manually, they are now delegated, leading to a significant performance improvement and addressing a performance regression introduced in a recent version of Gradio. App startup for large applications is now around twice as fast. - Optimised the mounting of individual components, leading to a modest performance improvement during startup (~30%). - Corrected an issue that was causing markdown to re-render infinitely. - Ensured that the `gr.3DModel` does re-render prematurely. Thanks [@pngwn](https://github.com/pngwn)! ### Features - [#5215](https://github.com/gradio-app/gradio/pull/5215) [`fbdad78a`](https://github.com/gradio-app/gradio/commit/fbdad78af4c47454cbb570f88cc14bf4479bbceb) - Lazy load interactive or static variants of a component individually, rather than loading both variants regardless. This change will improve performance for many applications. Thanks [@pngwn](https://github.com/pngwn)! - [#5216](https://github.com/gradio-app/gradio/pull/5216) [`4b58ea6d`](https://github.com/gradio-app/gradio/commit/4b58ea6d98e7a43b3f30d8a4cb6f379bc2eca6a8) - Update i18n tokens and locale files. Thanks [@hannahblair](https://github.com/hannahblair)! ## 0.1.1 ### Patch Changes - Updated dependencies [[`667875b2`](https://github.com/gradio-app/gradio/commit/667875b2441753e74d25bd9d3c8adedd8ede11cd)]: - @gradio/upload@0.0.3 ## 0.1.0 ### Features - [#4979](https://github.com/gradio-app/gradio/pull/4979) [`44ac8ad0`](https://github.com/gradio-app/gradio/commit/44ac8ad08d82ea12c503dde5c78f999eb0452de2) - Allow setting sketch color default. Thanks [@aliabid94](https://github.com/aliabid94)!
gradio-app/gradio/blob/main/js/image/CHANGELOG.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # PEFT as a utility library Let's cover in this section how you can leverage PEFT's low level API to inject trainable adapters into any `torch` module. The development of this API has been motivated by the need for super users to not rely on modeling classes that are exposed in PEFT library and still be able to use adapter methods such as LoRA, IA3 and AdaLoRA. ## Supported tuner types Currently the supported adapter types are the 'injectable' adapters, meaning adapters where an inplace modification of the model is sufficient to correctly perform the fine tuning. As such, only [LoRA](../conceptual_guides/lora), AdaLoRA and [IA3](../conceptual_guides/ia3) are currently supported in this API. ## `inject_adapter_in_model` method To perform the adapter injection, simply use `inject_adapter_in_model` method that takes 3 arguments, the PEFT config and the model itself and an optional adapter name. You can also attach multiple adapters in the model if you call multiple times `inject_adapter_in_model` with different adapter names. Below is a basic example usage of how to inject LoRA adapters into the submodule `linear` of the module `DummyModel`. ```python import torch from peft import inject_adapter_in_model, LoraConfig class DummyModel(torch.nn.Module): def __init__(self): super().__init__() self.embedding = torch.nn.Embedding(10, 10) self.linear = torch.nn.Linear(10, 10) self.lm_head = torch.nn.Linear(10, 10) def forward(self, input_ids): x = self.embedding(input_ids) x = self.linear(x) x = self.lm_head(x) return x lora_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias="none", target_modules=["linear"], ) model = DummyModel() model = inject_adapter_in_model(lora_config, model) dummy_inputs = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7]]) dummy_outputs = model(dummy_inputs) ``` If you print the model, you will notice that the adapters have been correctly injected into the model ```bash DummyModel( (embedding): Embedding(10, 10) (linear): Linear( in_features=10, out_features=10, bias=True (lora_dropout): ModuleDict( (default): Dropout(p=0.1, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=10, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=10, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() ) (lm_head): Linear(in_features=10, out_features=10, bias=True) ) ``` Note that it should be up to users to properly take care of saving the adapters (in case they want to save adapters only), as `model.state_dict()` will return the full state dict of the model. 
If you want to extract only the adapters' state dict, you can use the `get_peft_model_state_dict` method:

```python
from peft import get_peft_model_state_dict

peft_state_dict = get_peft_model_state_dict(model)
print(peft_state_dict)
```

## Pros and cons

When should you use this API, and when should you not? Let's discuss the pros and cons in this section.

Pros:
- The model gets modified in-place, meaning the model will preserve all its original attributes and methods.
- Works for any torch module, and any modality (vision, text, multi-modal).

Cons:
- You need to manually write the Hugging Face `from_pretrained` and `save_pretrained` utility methods if you want to easily save / load adapters from the Hugging Face Hub (see the sketch below for a bare-bones alternative).
- You cannot use any of the utility methods provided by `PeftModel`, such as disabling adapters, merging adapters, etc.
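Building on the last point of the previous section and on the first con above, here is a bare-bones sketch of how the adapter weights could be persisted and restored with plain `torch.save`/`torch.load` plus PEFT's state-dict helpers. The file name and overall flow are illustrative, not a PEFT convention.

```python
import torch
from peft import get_peft_model_state_dict, set_peft_model_state_dict

# Save only the adapter weights instead of the full model state dict.
adapter_state_dict = get_peft_model_state_dict(model)
torch.save(adapter_state_dict, "dummy_model_lora_adapter.pt")  # illustrative file name

# Later: re-create the base model, inject the same adapter config, then restore the weights.
fresh_model = inject_adapter_in_model(lora_config, DummyModel())
set_peft_model_state_dict(fresh_model, torch.load("dummy_model_lora_adapter.pt"))
```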
huggingface/peft/blob/main/docs/source/developer_guides/low_level_api.md
---
title: Introducing Pull Requests and Discussions 🥳
thumbnail: /blog/assets/76_community_update/thumbnail.png
---

# Introducing Pull Requests and Discussions 🥳

![Pull requests and discussions on the hub](assets/76_community_update/community-update.png)

We are thrilled to announce the release of our latest collaborative features: pull requests and discussions on the Hugging Face Hub!

Pull requests and discussions are available today under the [community tab](https://huggingface.co/gpt2/discussions) for all repository types: models, datasets, and Spaces. Any member of the community can create and participate in discussions and pull requests, facilitating collaborations not only within teams, but also with everyone else in the community!

It's the biggest update ever done to the Hub, and we can't wait to see the community members start collaborating with it 🤩.

The new "Community" tab also aligns with proposals in ethical ML throughout the years. Feedback and iterations have a central place in the development of ethical machine learning software. We really believe having it in the community's toolset will unlock new kinds of positive patterns in ML, collaborations, and progress.

Some example use cases for discussions and pull requests:

- Propose suggestions in model cards to improve disclosures of ethical biases.
- Let users flag concerning generations of a given Space demo.
- Provide a venue through which model and dataset authors can have a direct discussion with community members.
- Allow others to improve your repositories! For example, users might want to provide TensorFlow weights!

## Discussions

![Discussions on the Hugging Face Hub](assets/76_community_update/new-discussion.png)

[Discussions](https://huggingface.co/gpt2/discussions?type=discussion) allow community members to ask and answer questions as well as share their ideas and suggestions directly with the repository owners and the community. Anyone can create and participate in discussions in the community tab of a repository.

## Pull requests

![Pull requests on the Hugging Face Hub](assets/76_community_update/new-pr.png)

[Pull requests](https://huggingface.co/gpt2/discussions?type=pull_request) allow community members to open, comment on, merge, or close pull requests directly from the website.

The easiest way to open a pull request is to use the "Collaborate" button in the "Files and versions" tab. It lets you make single-file contributions very easily.

Under the hood, our pull requests do not use forks and branches, but instead custom "branches" called `refs` that are stored directly on the source repo. This approach avoids the need to create a fork for each new version of the model/dataset.

## How is this different from other git hosts

At a high level, we aim to build a simpler version of other git hosts' (like GitHub's) PRs and Issues:

- no forks are involved: contributors push to a special `ref` branch directly on the source repo
- no hard distinction between issues and PRs: they are essentially the same, so we display them in the same lists
- streamlined for ML (i.e. models/datasets/Spaces repos), not arbitrary repos

## What's next

Of course, it's only the beginning. We will listen to community feedback to add new features and improve the community tab in the future. If you have any feedback, you can [join the discussion here](https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/1). Today is the best time to join your first discussion and open a PR! 🤗
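If you prefer to work from code rather than the website, recent versions of the `huggingface_hub` library also expose helpers for these features. The snippet below is a brief illustrative sketch (the repository name is a placeholder):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are already authenticated, e.g. via `huggingface-cli login`

# Open a discussion on a repository ("username/my-model" is a placeholder).
api.create_discussion(repo_id="username/my-model", title="Question about the training data")

# Or open a pull request that you can then push commits to.
api.create_pull_request(repo_id="username/my-model", title="Add TensorFlow weights")
```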
huggingface/blog/blob/main/community-update.md
!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Collections A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this [guide](https://huggingface.co/docs/hub/collections) to understand in more detail what collections are and how they look on the Hub. You can directly manage collections in the browser, but in this guide, we will focus on how to manage it programmatically. ## Fetch a collection Use [`get_collection`] to fetch your collections or any public ones. You must have the collection's *slug* to retrieve a collection. A slug is an identifier for a collection based on the title and a unique ID. You can find the slug in the URL of the collection page. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hfh_collection_slug.png"/> </div> Let's fetch the collection with, `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`: ```py >>> from huggingface_hub import get_collection >>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026") >>> collection Collection( slug='TheBloke/recent-models-64f9a55bb3115b4f513ec026', title='Recent models', owner='TheBloke', items=[...], last_updated=datetime.datetime(2023, 10, 2, 22, 56, 48, 632000, tzinfo=datetime.timezone.utc), position=1, private=False, theme='green', upvotes=90, description="Models I've recently quantized. Please note that currently this list has to be updated manually, and therefore is not guaranteed to be up-to-date." ) >>> collection.items[0] CollectionItem( item_object_id='651446103cd773a050bf64c2', item_id='TheBloke/U-Amethyst-20B-AWQ', item_type='model', position=88, note=None ) ``` The [`Collection`] object returned by [`get_collection`] contains: - high-level metadata: `slug`, `owner`, `title`, `description`, etc. - a list of [`CollectionItem`] objects; each item represents a model, a dataset, a Space, or a paper. All collection items are guaranteed to have: - a unique `item_object_id`: this is the id of the collection item in the database - an `item_id`: this is the id on the Hub of the underlying item (model, dataset, Space, paper); it is not necessarily unique, and only the `item_id`/`item_type` pair is unique - an `item_type`: model, dataset, Space, paper - the `position` of the item in the collection, which can be updated to reorganize your collection (see [`update_collection_item`] below) A `note` can also be attached to the item. This is useful to add additional information about the item (a comment, a link to a blog post, etc.). The attribute still has a `None` value if an item doesn't have a note. In addition to these base attributes, returned items can have additional attributes depending on their type: `author`, `private`, `lastModified`, `gated`, `title`, `likes`, `upvotes`, etc. None of these attributes are guaranteed to be returned. ## List collections We can also retrieve collections using [`list_collections`]. Collections can be filtered using some parameters. Let's list all the collections from the user [`teknium`](https://huggingface.co/teknium). 
```py >>> from huggingface_hub import list_collections >>> collections = list_collections(owner="teknium") ``` This returns an iterable of `Collection` objects. We can iterate over them to print, for example, the number of upvotes for each collection. ```py >>> for collection in collections: ... print("Number of upvotes:", collection.upvotes) Number of upvotes: 1 Number of upvotes: 5 ``` <Tip warning={true}> When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items from a collection, you must use [`get_collection`]. </Tip> It is possible to do more advanced filtering. Let's get all collections containing the model [TheBloke/OpenHermes-2.5-Mistral-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF), sorted by trending, and limit the count to 5. ```py >>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5): >>> for collection in collections: ... print(collection.slug) teknium/quantized-models-6544690bb978e0b0f7328748 AmeerH/function-calling-65560a2565d7a6ef568527af PostArchitekt/7bz-65479bb8c194936469697d8c gnomealone/need-to-test-652007226c6ce4cdacf9c233 Crataco/favorite-7b-models-651944072b4fffcb41f8b568 ``` Parameter `sort` must be one of `"last_modified"`, `"trending"` or `"upvotes"`. Parameter `item` accepts any particular item. For example: * `"models/teknium/OpenHermes-2.5-Mistral-7B"` * `"spaces/julien-c/open-gpt-rhyming-robot"` * `"datasets/squad"` * `"papers/2311.12983"` For more details, please check out [`list_collections`] reference. ## Create a new collection Now that we know how to get a [`Collection`], let's create our own! Use [`create_collection`] with a title and description. To create a collection on an organization page, pass `namespace="my-cool-org"` when creating the collection. Finally, you can also create private collections by passing `private=True`. ```py >>> from huggingface_hub import create_collection >>> collection = create_collection( ... title="ICCV 2023", ... description="Portfolio of models, papers and demos I presented at ICCV 2023", ... ) ``` It will return a [`Collection`] object with the high-level metadata (title, description, owner, etc.) and an empty list of items. You will now be able to refer to this collection using it's `slug`. ```py >>> collection.slug 'owner/iccv-2023-15e23b46cb98efca45' >>> collection.title "ICCV 2023" >>> collection.owner "username" >>> collection.url 'https://huggingface.co/collections/owner/iccv-2023-15e23b46cb98efca45' ``` ## Manage items in a collection Now that we have a [`Collection`], we want to add items to it and organize them. ### Add items Items have to be added one by one using [`add_collection_item`]. You only need to know the `collection_slug`, `item_id` and `item_type`. Optionally, you can also add a `note` to the item (500 characters maximum). ```py >>> from huggingface_hub import create_collection, add_collection_item >>> collection = create_collection(title="OS Week Highlights - Sept 18 - 24", namespace="osanseviero") >>> collection.slug "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80" >>> add_collection_item(collection.slug, item_id="coqui/xtts", item_type="space") >>> add_collection_item( ... collection.slug, ... item_id="warp-ai/wuerstchen", ... item_type="model", ... note="Würstchen is a new fast and efficient high resolution text-to-image architecture and model" ... 
... )
>>> add_collection_item(collection.slug, item_id="lmsys/lmsys-chat-1m", item_type="dataset")
>>> add_collection_item(collection.slug, item_id="warp-ai/wuerstchen", item_type="space") # same item_id, different item_type
```

If an item already exists in a collection (same `item_id`/`item_type` pair), an HTTP 409 error will be raised. You can choose to ignore this error by setting `exists_ok=True`.

### Add a note to an existing item

You can modify an existing item to add or modify the note attached to it using [`update_collection_item`]. Let's reuse the example above:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# Fetch collection with newly added items
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# Add a note to the `lmsys-chat-1m` dataset
>>> update_collection_item(
...     collection_slug=collection_slug,
...     item_object_id=collection.items[2].item_object_id,
...     note="This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.",
... )
```

### Reorder items

Items in a collection are ordered. The order is determined by the `position` attribute of each item. By default, new items are appended to the end of the collection. You can update the order using [`update_collection_item`] the same way you would add a note.

Let's reuse our example above:

```py
>>> from huggingface_hub import get_collection, update_collection_item

# Fetch collection
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# Reorder to place the two `Wuerstchen` items together
>>> update_collection_item(
...     collection_slug=collection_slug,
...     item_object_id=collection.items[3].item_object_id,
...     position=2,
... )
```

### Remove items

Finally, you can also remove an item using [`delete_collection_item`].

```py
>>> from huggingface_hub import get_collection, delete_collection_item

# Fetch collection
>>> collection_slug = "osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> collection = get_collection(collection_slug)

# Remove `coqui/xtts` Space from the list
>>> delete_collection_item(collection_slug=collection_slug, item_object_id=collection.items[0].item_object_id)
```

## Delete collection

A collection can be deleted using [`delete_collection`].

<Tip warning={true}>

This is a non-revertible action. A deleted collection cannot be restored.

</Tip>

```py
>>> from huggingface_hub import delete_collection
>>> collection = delete_collection("username/useless-collection-64f9a55bb3115b4f513ec026", missing_ok=True)
```
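Putting these building blocks together, here is a minimal end-to-end sketch that copies the items of an existing collection into a new private one. It only uses the helpers documented above; the source slug and the generated title are placeholders to adapt to your own collections.

```py
from huggingface_hub import add_collection_item, create_collection, get_collection

# Placeholder slug: replace with the collection you want to copy
source = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")

# Create a private copy under your own namespace
copy = create_collection(
    title=f"Copy of {source.title}",
    description=source.description,
    private=True,
)

# Re-add every item, keeping its note when there is one
for item in source.items:
    add_collection_item(
        copy.slug,
        item_id=item.item_id,
        item_type=item.item_type,
        note=item.note,
        exists_ok=True,  # don't fail with HTTP 409 if the item was already added
    )
```

Because [`get_collection`] returns the full item list (unlike [`list_collections`], which truncates it), the copy will contain every item of the source collection.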
huggingface/huggingface_hub/blob/main/docs/source/en/guides/collections.md
Image classification Image classification datasets are used to train a model to classify an entire image. There are a wide variety of applications enabled by these datasets such as identifying endangered wildlife species or screening for disease in medical images. This guide will show you how to apply transformations to an image classification dataset. Before you start, make sure you have up-to-date versions of `albumentations` and `cv2` installed: ```bash pip install -U albumentations opencv-python ``` This guide uses the [Beans](https://huggingface.co/datasets/beans) dataset for identifying the type of bean plant disease based on an image of its leaf. Load the dataset and take a look at an example: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("beans") >>> dataset["train"][10] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x7F8D2F4D7A10>, 'image_file_path': '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/angular_leaf_spot/angular_leaf_spot_train.204.jpg', 'labels': 0} ``` The dataset has three fields: * `image`: a PIL image object. * `image_file_path`: the path to the image file. * `labels`: the label or category of the image. Next, check out an image: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/img_clf.png"> </div> Now apply some augmentations with `albumentations`. You'll randomly crop the image, flip it horizontally, and adjust its brightness. ```py >>> import cv2 >>> import albumentations >>> import numpy as np >>> transform = albumentations.Compose([ ... albumentations.RandomCrop(width=256, height=256), ... albumentations.HorizontalFlip(p=0.5), ... albumentations.RandomBrightnessContrast(p=0.2), ... ]) ``` Create a function to apply the transformation to the images: ```py >>> def transforms(examples): ... examples["pixel_values"] = [ ... transform(image=np.array(image))["image"] for image in examples["image"] ... ] ... ... return examples ``` Use the [`~Dataset.set_transform`] function to apply the transformation on-the-fly to batches of the dataset to consume less disk space: ```py >>> dataset.set_transform(transforms) ``` You can verify the transformation worked by indexing into the `pixel_values` of the first example: ```py >>> import numpy as np >>> import matplotlib.pyplot as plt >>> img = dataset["train"][0]["pixel_values"] >>> plt.imshow(img) ``` <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/img_clf_aug.png"> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/img_clf_aug.png"/> </div> <Tip> Now that you know how to process a dataset for image classification, learn [how to train an image classification model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) and use it for inference. </Tip>
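If you plan to feed the augmented images directly into a PyTorch model, you will typically also want to normalize them and convert them to channel-first tensors. Below is a minimal sketch of an extended `transforms` function that reuses the albumentations pipeline defined above; the mean/std values are the common ImageNet statistics and are only an assumption, so swap in whatever your model expects.

```py
import numpy as np
import torch

# Assumed normalization statistics (ImageNet); replace with your model's values
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def transforms(examples):
    pixel_values = []
    for image in examples["image"]:
        augmented = transform(image=np.array(image))["image"]             # HWC uint8
        normalized = (augmented.astype(np.float32) / 255.0 - MEAN) / STD  # scale and normalize
        pixel_values.append(torch.from_numpy(normalized).permute(2, 0, 1))  # CHW float tensor
    examples["pixel_values"] = pixel_values
    return examples

dataset.set_transform(transforms)
```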
huggingface/datasets/blob/main/docs/source/image_classification.mdx
---
title: "Introducing Agents.js: Give tools to your LLMs using JavaScript"
thumbnail: /blog/assets/agents-js/thumbnail.png
authors:
- user: nsarrazin
---

# Introducing Agents.js: Give tools to your LLMs using JavaScript

We have recently been working on Agents.js at [huggingface.js](https://github.com/huggingface/huggingface.js/blob/main/packages/agents/README.md). It's a new library for giving tool access to LLMs from JavaScript in either the browser or the server. It ships with a few multi-modal tools out of the box and can easily be extended with your own tools and language models.

## Installation

Getting started is very easy; you can grab the library from npm with the following:

```
npm install @huggingface/agents
```

## Usage

The library exposes the `HfAgent` object, which is the entry point to the library. You can instantiate it like this:

```ts
import { HfAgent } from "@huggingface/agents";

const HF_ACCESS_TOKEN = "hf_..."; // get your token at https://huggingface.co/settings/tokens

const agent = new HfAgent(HF_ACCESS_TOKEN);
```

Afterward, using the agent is easy. You give it a plain-text command and it will return some messages.

```ts
const code = await agent.generateCode(
  "Draw a picture of a rubber duck with a top hat, then caption this picture."
);
```

which in this case generated the following code:

```js
// code generated by the LLM
async function generate() {
  const output = await textToImage("rubber duck with a top hat");
  message("We generate the duck picture", output);
  const caption = await imageToText(output);
  message("Now we caption the image", caption);
  return output;
}
```

Then the code can be evaluated as such:

```ts
const messages = await agent.evaluateCode(code);
```

The messages returned by the agent are objects with the following shape:

```ts
export interface Update {
  message: string;
  data: undefined | string | Blob;
}
```

where `message` is an info text and `data` can contain either a string or a blob. The blob can be used to display images or audio.

If you trust your environment (see [warning](#usage-warning)), you can also run the code directly from the prompt with `run`:

```ts
const messages = await agent.run(
  "Draw a picture of a rubber duck with a top hat, then caption this picture."
);
```

### Usage warning

Currently, using this library means evaluating arbitrary code in the browser (or in Node). This is a security risk and should not be done in an untrusted environment. We recommend that you use `generateCode` and `evaluateCode` instead of `run` in order to check what code you are running.

## Custom LLMs 💬

By default, `HfAgent` will use [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) hosted on the Inference API as the LLM. This can be customized, however.

When instantiating your `HfAgent`, you can pass a custom LLM. An LLM in this context is any async function that takes a string input and returns a promise for a string. For example, if you have an OpenAI API key you could make use of it like this:

```ts
import { Configuration, OpenAIApi } from "openai";

const HF_ACCESS_TOKEN = "hf_...";
const api = new OpenAIApi(new Configuration({ apiKey: "sk-..." }));

const llmOpenAI = async (prompt: string): Promise<string> => {
  return (
    (
      await api.createCompletion({
        model: "text-davinci-003",
        prompt: prompt,
        max_tokens: 1000,
      })
    ).data.choices[0].text ?? ""
  );
};

const agent = new HfAgent(HF_ACCESS_TOKEN, llmOpenAI);
```

## Custom Tools 🛠️

Agents.js was designed to be easily expanded with custom tools & examples. For example, if you wanted to add a tool that would translate text from English to German you could do it like this:

```ts
import type { Tool } from "@huggingface/agents/src/types";

const englishToGermanTool: Tool = {
  name: "englishToGerman",
  description:
    "Takes an input string in English and returns a German translation.",
  examples: [
    {
      prompt: "translate the string 'hello world' to german",
      code: `const output = englishToGerman("hello world")`,
      tools: ["englishToGerman"],
    },
    {
      prompt:
        "translate the string 'The quick brown fox jumps over the lazy dog' into german",
      code: `const output = englishToGerman("The quick brown fox jumps over the lazy dog")`,
      tools: ["englishToGerman"],
    },
  ],
  call: async (input, inference) => {
    const data = await input;
    if (typeof data !== "string") {
      throw new Error("Input must be a string");
    }
    const result = await inference.translation({
      model: "t5-base",
      inputs: data,
    });
    return result.translation_text;
  },
};
```

Now this tool can be added to the list of tools when initializing your agent.

```ts
import { HfAgent, LLMFromHub, defaultTools } from "@huggingface/agents";

const HF_ACCESS_TOKEN = "hf_...";

const agent = new HfAgent(HF_ACCESS_TOKEN, LLMFromHub("hf_..."), [
  englishToGermanTool,
  ...defaultTools,
]);
```

## Passing input files to the agent 🖼️

The agent can also take input files to pass along to the tools. You can pass an optional [`FileList`](https://developer.mozilla.org/en-US/docs/Web/API/FileList) to `generateCode` and `evaluateCode` as such:

If you have the following html:

```html
<input id="fileItem" type="file" />
```

Then you can do:

```ts
const agent = new HfAgent(HF_ACCESS_TOKEN);
const files = document.getElementById("fileItem").files; // FileList type
const code = await agent.generateCode(
  "Caption the image and then read the text out loud.",
  files
);
```

Which generated the following code when passing an image:

```ts
// code generated by the LLM
async function generate(image) {
  const caption = await imageToText(image);
  message("First we caption the image", caption);
  const output = await textToSpeech(caption);
  message("Then we read the caption out loud", output);
  return output;
}
```

## Demo 🎉

We've been working on a demo for Agents.js that you can try out [here](https://nsarrazin-agents-js-oasst.hf.space/). It's powered by the same Open Assistant 30B model that we use on HuggingChat and uses tools called from the Hub. 🚀
huggingface/blog/blob/main/agents-js.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # UnivNet ## Overview The UnivNet model was proposed in [UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kin, and Juntae Kim. The UnivNet model is a generative adversarial network (GAN) trained to synthesize high fidelity speech waveforms. The UnivNet model shared in `transformers` is the *generator*, which maps a conditioning log-mel spectrogram and optional noise sequence to a speech waveform (e.g. a vocoder). Only the generator is required for inference. The *discriminator* used to train the `generator` is not implemented. The abstract from the paper is the following: *Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.* Tips: - The `noise_sequence` argument for [`UnivNetModel.forward`] should be standard Gaussian noise (such as from `torch.randn`) of shape `([batch_size], noise_length, model.config.model_in_channels)`, where `noise_length` should match the length dimension (dimension 1) of the `input_features` argument. If not supplied, it will be randomly generated; a `torch.Generator` can be supplied to the `generator` argument so that the forward pass can be reproduced. (Note that [`UnivNetFeatureExtractor`] will return generated noise by default, so it shouldn't be necessary to generate `noise_sequence` manually.) - Padding added by [`UnivNetFeatureExtractor`] can be removed from the [`UnivNetModel`] output through the [`UnivNetFeatureExtractor.batch_decode`] method, as shown in the usage example below. 
- Padding the end of each waveform with silence can reduce artifacts at the end of the generated audio sample. This can be done by supplying `pad_end = True` to [`UnivNetFeatureExtractor.__call__`]. See [this issue](https://github.com/seungwonpark/melgan/issues/8) for more details. Usage Example: ```python import torch from scipy.io.wavfile import write from datasets import Audio, load_dataset from transformers import UnivNetFeatureExtractor, UnivNetModel model_id_or_path = "dg845/univnet-dev" model = UnivNetModel.from_pretrained(model_id_or_path) feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path) ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # Resample the audio to the model and feature extractor's sampling rate. ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate)) # Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples. inputs = feature_extractor( ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt" ) with torch.no_grad(): audio = model(**inputs) # Remove the extra padding at the end of the output. audio = feature_extractor.batch_decode(**audio)[0] # Convert to wav file write("sample_audio.wav", feature_extractor.sampling_rate, audio) ``` This model was contributed by [dg845](https://huggingface.co/dg845). To the best of my knowledge, there is no official code release, but an unofficial implementation can be found at [maum-ai/univnet](https://github.com/maum-ai/univnet) with pretrained checkpoints [here](https://github.com/maum-ai/univnet#pre-trained-model). ## UnivNetConfig [[autodoc]] UnivNetConfig ## UnivNetFeatureExtractor [[autodoc]] UnivNetFeatureExtractor - __call__ ## UnivNetModel [[autodoc]] UnivNetModel - forward
huggingface/transformers/blob/main/docs/source/en/model_doc/univnet.md
Advanced Topics

## Contents

- [Using OpenCV in Spaces](./spaces-using-opencv)
- [More ways to create Spaces](./spaces-more-ways-to-create)
- [Managing Spaces with GitHub Actions](./spaces-github-actions)
- [Managing Spaces with CircleCI Workflows](./spaces-circleci)
- [How to Add a Space to ArXiv](./spaces-add-to-arxiv)
huggingface/hub-docs/blob/main/docs/hub/spaces-advanced.md
-- title: "Using Machine Learning to Aid Survivors and Race through Time" thumbnail: /blog/assets/using-ml-for-disasters/thumbnail.png authors: - user: merve - user: adirik --- # Using Machine Learning to Aid Survivors and Race through Time On February 6, 2023, earthquakes measuring 7.7 and 7.6 hit South Eastern Turkey, affecting 10 cities and resulting in more than 42,000 deaths and 120,000 injured as of February 21. A few hours after the earthquake, a group of programmers started a Discord server to roll out an application called *afetharita*, literally meaning, *disaster map*. This application would serve search & rescue teams and volunteers to find survivors and bring them help. The need for such an app arose when survivors posted screenshots of texts with their addresses and what they needed (including rescue) on social media. Some survivors also tweeted what they needed so their relatives knew they were alive and that they need rescue. Needing to extract information from these tweets, we developed various applications to turn them into structured data and raced against time in developing and deploying these apps. When I got invited to the discord server, there was quite a lot of chaos regarding how we (volunteers) would operate and what we would do. We decided to collaboratively train models so we needed a model and dataset registry. We opened a Hugging Face organization account and collaborated through pull requests as to build ML-based applications to receive and process information. ![organization](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/org.png) We had been told by volunteers in other teams that there's a need for an application to post screenshots, extract information from the screenshots, structure it and write the structured information to the database. We started developing an application that would take a given image, extract the text first, and from text, extract a name, telephone number, and address and write these informations to a database that would be handed to authorities. After experimenting with various open-source OCR tools, we started using `easyocr` for OCR part and `Gradio` for building an interface for this application. We were asked to build a standalone application for OCR as well so we opened endpoints from the interface. The text output from OCR is parsed using transformers-based fine-tuned NER model. ![OCR](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/ocr-app.png) To collaborate and improve the application, we hosted it on Hugging Face Spaces and we've received a GPU grant to keep the application up and running. Hugging Face Hub team has set us up a CI bot for us to have an ephemeral environment, so we could see how a pull request would affect the Space, and it helped us during pull request reviews. Later on, we were given labeled content from various channels (e.g. twitter, discord) with raw tweets of survivors' calls for help, along with the addresses and personal information extracted from them. We started experimenting both with few-shot prompting of closed-source models and fine-tuning our own token classification model from transformers. We’ve used [bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) as a base model for token classification and came up with the first address extraction model. 
![NER](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/deprem-ner.png) The model was later used in `afetharita` to extract addresses. The parsed addresses would be sent to a geocoding API to obtain longitude and latitude, and the geolocation would then be displayed on the front-end map. For inference, we have used Inference API, which is an API that hosts model for inference and is automatically enabled when the model is pushed to Hugging Face Hub. Using Inference API for serving has saved us from pulling the model, writing an app, building a docker image, setting up CI/CD, and deploying the model to a cloud instance, where it would be extra overhead work for the DevOps and cloud teams as well. Hugging Face teams have provided us with more replicas so that there would be no downtime and the application would be robust against a lot of traffic. ![backend_pipeline](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/production_pipeline.png) Later on, we were asked if we could extract what earthquake survivors need from a given tweet. We were given data with multiple labels for multiple needs in a given tweet, and these needs could be shelter, food, or logistics, as it was freezing cold over there. We’ve started experimenting first with zero-shot experimentations with open-source NLI models on Hugging Face Hub and few-shot experimentations with closed-source generative model endpoints. We have tried [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) and [convbert-base-turkish-mc4-cased-allnli_tr](https://huggingface.co/emrecan/convbert-base-turkish-mc4-cased-allnli_tr). NLI models were particularly useful as we could directly infer with candidate labels and change the labels as data drift occurs, whereas generative models could have made up labels and cause mismatches when giving responses to the backend. We initially didn’t have labeled data so anything would work. In the end, we decided to fine-tune our own model as it would take roughly three minutes to fine-tune BERT’s text classification head on a single GPU. We had a labelling effort to develop the dataset to train this model. We logged our experiments in the model card’s metadata so we could later come up with a leaderboard to keep track of which model should be deployed to production. For base model, we have tried [bert-base-turkish-uncased](https://huggingface.co/loodos/bert-base-turkish-uncased) and [bert-base-turkish-128k-cased](https://huggingface.co/dbmdz/bert-base-turkish-128k-cased) and realized they perform better than [bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased). You can find our leaderboard [here](https://huggingface.co/spaces/deprem-ml/intent-leaderboard). ![intent_model](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/model-repo.png) Considering the task at hand and the imbalance of our data classes, we focused on eliminating false negatives and created a Space to benchmark the recall and F1-scores of all models. To do this, we added the metadata tag `deprem-clf-v1` to all relevant model repos and used this tag to automatically retrieve the logged F1 and recall scores and rank models. We had a separate benchmark set to avoid leakage to the train set and consistently benchmark our models. We also benchmarked each model to identify the best threshold per label for deployment. 
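For readers who want a concrete picture of the zero-shot setup described above, here is a minimal sketch using the 🤗 Transformers `zero-shot-classification` pipeline with one of the NLI models we experimented with. The candidate labels and the example tweet are purely illustrative; the production labels and per-label thresholds came from the labeled dataset and leaderboard mentioned above.

```python
from transformers import pipeline

# One of the NLI models we experimented with for zero-shot intent detection
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# Illustrative label set; the real candidate labels evolved as the data drifted
candidate_labels = ["shelter", "food", "logistics", "rescue"]

tweet = "We are freezing, there are no tents left, we need blankets and food."
result = classifier(tweet, candidate_labels, multi_label=True)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```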
We wanted our NER model to be evaluated and crowd-sourced the effort because the data labelers were working to give us better and updated intent datasets. To evaluate the NER model, we’ve set up a labeling interface using `Argilla` and `Gradio`, where people could input a tweet and flag the output as correct/incorrect/ambiguous. ![active_learning](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/active-learning.png) Later, the dataset was deduplicated and used to benchmark our further experiments. Another team under machine learning has worked with generative models (behind a gated API) to get the specific needs (as labels were too broad) as free text and pass the text as an additional context to each posting. For this, they’ve done prompt engineering and wrapped the API endpoints as a separate API, and deployed them on the cloud. We found that using few-shot prompting with LLMs helps adjust to fine-grained needs in the presence of rapidly developing data drift, as the only thing we need to adjust is the prompt and we do not need any labeled data for this. These models are currently being used in production to create the points in the heat map below so that volunteers and search and rescue teams can bring the needs to survivors. ![afetharita](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/afetharita.png) We’ve realized that if it wasn’t for Hugging Face Hub and the ecosystem, we wouldn’t be able to collaborate, prototype, and deploy this fast. Below is our MLOps pipeline for address recognition and intent classification models. ![mlops](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/pipeline.png) There are tens of volunteers behind this application and its individual components, who worked with no sleep to get these out in such a short time. ## Remote Sensing Applications Other teams worked on remote sensing applications to assess the damage to buildings and infrastructure in an effort to direct search and rescue operations. The lack of electricity and stable mobile networks during the first 48 hours of the earthquake, combined with collapsed roads, made it extremely difficult to assess the extent of the damage and where help was needed. The search and rescue operations were also heavily affected by false reports of collapsed and damaged buildings due to the difficulties in communication and transportation. To address these issues and create open source tools that can be leveraged in the future, we started by collecting pre and post-earthquake satellite images of the affected zones from Planet Labs, Maxar and Copernicus Open Access Hub. ![input_satellite](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/output_satellite.png) Our initial approach was to rapidly label satellite images for object detection and instance segmentation, with a single category for "buildings". The aim was to evaluate the extent of damage by comparing the number of surviving buildings in pre- and post-earthquake images collected from the same area. In order to make it easier to train models, we started by cropping 1080x1080 satellite images into smaller 640x640 chunks. 
Next, we fine-tuned [YOLOv5](https://huggingface.co/spaces/deprem-ml/deprem_satellite_test), YOLOv8 and EfficientNet models for building detection and a [SegFormer](https://huggingface.co/spaces/deprem-ml/deprem_satellite_semantic_whu) model for semantic segmentation of buildings, and deployed these apps as Hugging Face Spaces. ![app](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/app.png) Once again, dozens of volunteers worked on labeling, preparing data, and training models. In addition to individual volunteers, companies like [Co-One](https://co-one.co/) volunteered to label satellite data with more detailed annotations for buildings and infrastructure, including *no damage*, *destroyed*, *damaged*, *damaged facility,* and *undamaged facility* labels. Our current objective is to release an extensive open-source dataset that can expedite search and rescue operations worldwide in the future. ![output_satellite](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/disaster-assets/processed_satellite.jpeg) ## Wrapping Up For this extreme use case, we had to move fast and optimize over classification metrics where even one percent improvement mattered. There were many ethical discussions in the progress, as even picking the metric to optimize over was an ethical question. We have seen how open-source machine learning and democratization enables individuals to build life-saving applications. We are thankful for the community behind Hugging Face for releasing these models and datasets, and team at Hugging Face for their infrastructure and MLOps support.
huggingface/blog/blob/main/using-ml-for-disasters.md
Using spaCy at Hugging Face

`spaCy` is a popular library for advanced Natural Language Processing widely used across industry. `spaCy` makes it easy to use and train pipelines for tasks like named entity recognition, text classification, part-of-speech tagging and more, and lets you build powerful applications to process and analyze large volumes of text.

## Exploring spaCy models in the Hub

The official models from `spaCy` 3.3 are on the `spaCy` [Organization Page](https://huggingface.co/spacy). Anyone in the community can also share their `spaCy` models, which you can find by filtering at the left of the [models page](https://huggingface.co/models?library=spacy).

All models on the Hub come with useful features:
1. An automatically generated model card with label scheme, metrics, components, and more.
2. An evaluation section at the top right where you can look at the metrics.
3. Metadata tags that help with discoverability and contain information such as license and language.
4. An interactive widget you can use to play with the model directly in the browser.
5. An Inference API that allows you to make inference requests.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-spacy_widget.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-spacy_widget-dark.png"/>
</div>

## Using existing models

All `spaCy` models from the Hub can be installed directly using `pip install`.

```bash
pip install https://huggingface.co/spacy/en_core_web_sm/resolve/main/en_core_web_sm-any-py3-none-any.whl
```

To find the link of interest, you can go to a repository with a `spaCy` model. When you open the repository, you can click `Use in spaCy` and you will be given a working snippet that you can use to install and load the model!

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-spacy_snippet.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-spacy_snippet-dark.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-spacy_snippet2.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-spacy_snippet2-dark.png"/>
</div>

Once installed, you can load the model like any other spaCy pipeline.

```python
# Using spacy.load().
import spacy
nlp = spacy.load("en_core_web_sm")

# Importing as module.
import en_core_web_sm
nlp = en_core_web_sm.load()
```

## Sharing your models

### Using the spaCy CLI (recommended)

The `spacy-huggingface-hub` library extends `spaCy`'s native CLI so people can easily push their packaged models to the Hub.

You can install `spacy-huggingface-hub` from pip:

```bash
pip install spacy-huggingface-hub
```

You can then check if the command has been registered successfully:

```bash
python -m spacy huggingface-hub --help
```

To push with the CLI, you can use the `huggingface-hub push` command as seen below.
```bash python -m spacy huggingface-hub push [whl_path] [--org] [--msg] [--local-repo] [--verbose] ``` | Argument | Type | Description | | -------------------- | ------------ | ----------------------------------------------------------------------------------------------------------------------------- | | `whl_path` | str / `Path` | The path to the `.whl` file packaged with [`spacy package`](https://spacy.io/api/cli#package). | | `--org`, `-o` | str | Optional name of organization to which the pipeline should be uploaded. | | `--msg`, `-m` | str | Commit message to use for update. Defaults to `"Update spaCy pipeline"`. | | `--local-repo`, `-l` | str / `Path` | Local path to the model repository (will be created if it doesn't exist). Defaults to `hub` in the current working directory. | | `--verbose`, `-V` | bool | Output additional info for debugging, e.g. the full generated hub metadata. | You can then upload any pipeline packaged with [`spacy package`](https://spacy.io/api/cli#package). Make sure to set `--build wheel` to output a binary .whl file. The uploader will read all metadata from the pipeline package, including the auto-generated pretty `README.md` and the model details available in the `meta.json`. ```bash huggingface-cli login python -m spacy package ./en_ner_fashion ./output --build wheel cd ./output/en_ner_fashion-0.0.0/dist python -m spacy huggingface-hub push en_ner_fashion-0.0.0-py3-none-any.whl ``` In just a minute, you can get your packaged model in the Hub, try it out directly in the browser, and share it with the rest of the community. All the required metadata will be uploaded for you and you even get a cool model card. The command will output two things: * Where to find your repo in the Hub! For example, https://huggingface.co/spacy/en_core_web_sm * And how to install the pipeline directly from the Hub! ### From a Python script You can use the `push` function from Python. It returns a dictionary containing the `"url"` and "`whl_url`" of the published model and the wheel file, which you can later install with `pip install`. ```py from spacy_huggingface_hub import push result = push("./en_ner_fashion-0.0.0-py3-none-any.whl") print(result["url"]) ``` ## Additional resources * spacy-huggingface-hub [library](https://github.com/explosion/spacy-huggingface-hub). * Launch [blog post](https://huggingface.co/blog/spacy) * spaCy v 3.1 [Announcement](https://explosion.ai/blog/spacy-v3-1#huggingface-hub) * spaCy [documentation](https://spacy.io/universe/project/spacy-huggingface-hub/)
huggingface/hub-docs/blob/main/docs/hub/spacy.md
Würstchen text-to-image fine-tuning ## Running locally with PyTorch Before running the scripts, make sure to install the library's training dependencies: **Important** To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date. To do this, execute the following steps in a new virtual environment: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then cd into the example folder and run ```bash cd examples/wuerstchen/text_to_image pip install -r requirements.txt ``` And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: ```bash accelerate config ``` For this example we want to directly store the trained LoRA embeddings on the Hub, so we need to be logged in and add the `--push_to_hub` flag to the training script. To log in, run: ```bash huggingface-cli login ``` ## Prior training You can fine-tune the Würstchen prior model with the `train_text_to_image_prior.py` script. Note that we currently support `--gradient_checkpointing` for prior model fine-tuning so you can use it for more GPU memory constrained setups. <br> <!-- accelerate_snippet_start --> ```bash export DATASET_NAME="lambdalabs/pokemon-blip-captions" accelerate launch train_text_to_image_prior.py \ --mixed_precision="fp16" \ --dataset_name=$DATASET_NAME \ --resolution=768 \ --train_batch_size=4 \ --gradient_accumulation_steps=4 \ --gradient_checkpointing \ --dataloader_num_workers=4 \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --checkpoints_total_limit=3 \ --lr_scheduler="constant" --lr_warmup_steps=0 \ --validation_prompts="A robot pokemon, 4k photo" \ --report_to="wandb" \ --push_to_hub \ --output_dir="wuerstchen-prior-pokemon-model" ``` <!-- accelerate_snippet_end --> ## Training with LoRA Low-Rank Adaption of Large Language Models (or LoRA) was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*. In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights. This has a couple of advantages: - Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114). - Rank-decomposition matrices have significantly fewer parameters than original model, which means that trained LoRA weights are easily portable. - LoRA attention layers allow to control to which extent the model is adapted toward new training images via a `scale` parameter. ### Prior Training First, you need to set up your development environment as explained in the [installation](#Running-locally-with-PyTorch) section. Make sure to set the `DATASET_NAME` environment variable. Here, we will use the [Pokemon captions dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions). 
```bash export DATASET_NAME="lambdalabs/pokemon-blip-captions" accelerate launch train_text_to_image_lora_prior.py \ --mixed_precision="fp16" \ --dataset_name=$DATASET_NAME --caption_column="text" \ --resolution=768 \ --train_batch_size=8 \ --num_train_epochs=100 --checkpointing_steps=5000 \ --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \ --seed=42 \ --rank=4 \ --validation_prompt="cute dragon creature" \ --report_to="wandb" \ --push_to_hub \ --output_dir="wuerstchen-prior-pokemon-lora" ```
huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/README.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - `from_pretrained()` - that loads any of these components from either the Hugging Face [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You'll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you'll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers.
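As a quick preview of what the following guides cover, here is a minimal sketch of the unified `from_pretrained()` pattern: loading a pipeline from the Hub and swapping one of its components (the scheduler). The checkpoint name is only an example.

```python
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

# Load a pipeline (example checkpoint) from the Hub or reuse it from the local cache
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Individual components can be swapped with the same loading pattern, e.g. the scheduler
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
```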
huggingface/diffusers/blob/main/docs/source/en/using-diffusers/loading_overview.md
-- language: - en license: mit library_name: pytorch-lightning tags: - pytorch - image-classification datasets: - beans metrics: - acc --- # my-cool-model ## Model description You can embed local or remote images using `![](...)`
huggingface/huggingface_hub/blob/main/tests/fixtures/cards/sample_simple.md
Gradio Demo: latex ``` !pip install -q gradio ``` ``` import gradio as gr with gr.Blocks() as demo: gr.Markdown( r""" # Hello World! $\frac{\sqrt{x + y}}{4}$ is today's lesson ## the $\sqrt{x + y}$ is first Start with $\frac{\frac{x+1}{x+2}}{x+3}$ then we get $ 2+x $ and $3$. There are three formulas to know: the first is $\gamma^2 + \theta^2 = \omega^2$ $\sqrt{x^2+1}$ is next Integral $\int_{a}^{b} x^2 \,dx$ is last Start typing below to see the output. I spent $5 at the grocery store. Then I bought a $2.50 ice cream cone. """) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/latex/run.ipynb
Beam Datasets Some datasets are too large to be processed on a single machine. Instead, you can process them with [Apache Beam](https://beam.apache.org/), a library for parallel data processing. The processing pipeline is executed on a distributed processing backend such as [Apache Flink](https://flink.apache.org/), [Apache Spark](https://spark.apache.org/), or [Google Cloud Dataflow](https://cloud.google.com/dataflow). We have already created Beam pipelines for some of the larger datasets like [wikipedia](https://huggingface.co/datasets/wikipedia), and [wiki40b](https://huggingface.co/datasets/wiki40b). You can load these normally with [`load_dataset`]. But if you want to run your own Beam pipeline with Dataflow, here is how: 1. Specify the dataset and configuration you want to process: ``` DATASET_NAME=your_dataset_name # ex: wikipedia CONFIG_NAME=your_config_name # ex: 20220301.en ``` 2. Input your Google Cloud Platform information: ``` PROJECT=your_project BUCKET=your_bucket REGION=your_region ``` 3. Specify your Python requirements: ``` echo "datasets" > /tmp/beam_requirements.txt echo "apache_beam" >> /tmp/beam_requirements.txt ``` 4. Run the pipeline: ``` datasets-cli run_beam datasets/$DATASET_NAME \ --name $CONFIG_NAME \ --save_info \ --cache_dir gs://$BUCKET/cache/datasets \ --beam_pipeline_options=\ "runner=DataflowRunner,project=$PROJECT,job_name=$DATASET_NAME-gen,"\ "staging_location=gs://$BUCKET/binaries,temp_location=gs://$BUCKET/temp,"\ "region=$REGION,requirements_file=/tmp/beam_requirements.txt" ``` <Tip> When you run your pipeline, you can adjust the parameters to change the runner (Flink or Spark), output location (S3 bucket or HDFS), and the number of workers. </Tip>
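If you only want to experiment locally without a distributed backend, small configurations can also be processed with Apache Beam's `DirectRunner` by passing `beam_runner` to [`load_dataset`]. A minimal sketch follows; the configuration name is illustrative, and because the DirectRunner keeps everything in memory, it is only practical for small configurations.

```py
from datasets import load_dataset

# Example: process a small Wikipedia language configuration locally with the DirectRunner
ds = load_dataset("wikipedia", "20220301.frr", beam_runner="DirectRunner")
```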
huggingface/datasets/blob/main/docs/source/beam.mdx
Gradio Demo: video_identity ``` !pip install -q gradio ``` ``` # Downloading files from the demo repo import os os.mkdir('video') !wget -q -O video/video_sample.mp4 https://github.com/gradio-app/gradio/raw/main/demo/video_identity/video/video_sample.mp4 ``` ``` import gradio as gr import os def video_identity(video): return video demo = gr.Interface(video_identity, gr.Video(), "playable_video", examples=[ os.path.join(os.path.abspath(''), "video/video_sample.mp4")], cache_examples=True) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/video_identity/run.ipynb
-- title: "Accelerating PyTorch Transformers with Intel Sapphire Rapids - part 2" thumbnail: /blog/assets/129_intel_sapphire_rapids_inference/01.png authors: - user: juliensimon --- # Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 2 In a [recent post](https://huggingface.co/blog/intel-sapphire-rapids), we introduced you to the fourth generation of Intel Xeon CPUs, code-named [Sapphire Rapids](https://en.wikipedia.org/wiki/Sapphire_Rapids), and its new Advanced Matrix Extensions ([AMX](https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions)) instruction set. Combining a cluster of Sapphire Rapids servers running on Amazon EC2 and Intel libraries like the [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch), we showed you how to efficiently run distributed training at scale, achieving an 8-fold speedup compared to the previous Xeon generation (Ice Lake) with near-linear scaling. In this post, we're going to focus on inference. Working with popular HuggingFace transformers implemented with PyTorch, we'll first measure their performance on an Ice Lake server for short and long NLP token sequences. Then, we'll do the same with a Sapphire Rapids server and the latest version of Hugging Face [Optimum Intel](https://github.com/huggingface/optimum-intel), an open-source library dedicated to hardware acceleration for Intel platforms. Let's get started! ## Why You Should Consider CPU-based Inference There are several factors to consider when deciding whether to run deep learning inference on a CPU or GPU. The most important one is certainly the size of the model. In general, larger models may benefit more from the additional computational power provided by a GPU, while smaller models can run efficiently on a CPU. Another factor to consider is the level of parallelism in the model and the inference task. GPUs are designed to excel at massively parallel processing, so they may be more efficient for tasks that can be parallelized effectively. On the other hand, if the model or inference task does not have a very high level of parallelism, a CPU may be a more effective choice. Cost is also an important factor to consider. GPUs can be expensive, and using a CPU may be a more cost-effective option, particularly if your business use case doesn't require extremely low latency. In addition, if you need the ability to easily scale up or down the number of inference workers, or if you need to be able to run inference on a wide variety of hardware, using CPUs may be a more flexible option. Now, let's set up our test servers. ## Setting up our Test Servers Just like in the previous post, we're going to use Amazon EC2 instances: * a `c6i.16xlarge` instance, based on the Ice Lake architecture, * a `r7iz.16xlarge-metal` instance, based on the Sapphire Rapids architecture. You can read more about the new r7iz family on the [AWS website](https://aws.amazon.com/ec2/instance-types/r7iz/). Both instances have 32 physical cores (thus, 64 vCPUs). We will set them up in the same way: * Ubuntu 22.04 with Linux 5.15.0 (`ami-0574da719dca65348`), * PyTorch 1.13 with Intel Extension for PyTorch 1.13, * Transformers 4.25.1. The only difference will be the addition of the Optimum Intel Library on the r7iz instance. Here are the setup steps. As usual, we recommend using a virtual environment to keep things nice and tidy. 
``` sudo apt-get update # Add libtcmalloc for extra performance sudo apt install libgoogle-perftools-dev -y export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libtcmalloc.so" sudo apt-get install python3-pip -y pip install pip --upgrade export PATH=/home/ubuntu/.local/bin:$PATH pip install virtualenv virtualenv inference_env source inference_env/bin/activate pip3 install torch==1.13.0 -f https://download.pytorch.org/whl/cpu pip3 install intel_extension_for_pytorch==1.13.0 -f https://developer.intel.com/ipex-whl-stable-cpu pip3 install transformers # Only needed on the r7iz instance pip3 install optimum[intel] ``` Once we've completed these steps on the two instances, we can start running our tests. ## Benchmarking Popular NLP models In this example, we're going to benchmark several NLP models on a text classification task: [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased), [bert-base-uncased](https://huggingface.co/bert-base-uncased) and [roberta-base](https://huggingface.co/roberta-base). You can find the [full script](https://gist.github.com/juliensimon/7ae1c8d12e8a27516e1392a3c73ac1cc) on Github. Feel free to try it with your models! ``` models = ["distilbert-base-uncased", "bert-base-uncased", "roberta-base"] ``` Using both 16-token and 128-token sentences, we will measure mean and p99 prediction latency for single inference and batch inference. This should give us a decent view of the speedup we can expect in real-life scenarios. ``` sentence_short = "This is a really nice pair of shoes, I am completely satisfied with my purchase" sentence_short_array = [sentence_short] * 8 sentence_long = "These Adidas Lite Racer shoes hit a nice sweet spot for comfort shoes. Despite being a little snug in the toe box, these are very comfortable to wear and provide nice support while wearing. I would stop short of saying they are good running shoes or cross-trainers because they simply lack the ankle and arch support most would desire in those type of shoes and the treads wear fairly quickly, but they are definitely comfortable. I actually walked around Disney World all day in these without issue if that is any reference. Bottom line, I use these as the shoes they are best; versatile, inexpensive, and comfortable, without expecting the performance of a high-end athletic sneaker or expecting the comfort of my favorite pair of slippers." sentence_long_array = [sentence_long] * 8 ``` The benchmarking function is very simple. After a few warmup iterations, we run 1,000 predictions with the pipeline API, store the prediction times, and compute both their mean and p99 values. ``` import time import numpy as np def benchmark(pipeline, data, iterations=1000): # Warmup for i in range(100): result = pipeline(data) times = [] for i in range(iterations): tick = time.time() result = pipeline(data) tock = time.time() times.append(tock - tick) return "{:.2f}".format(np.mean(times) * 1000), "{:.2f}".format( np.percentile(times, 99) * 1000 ) ``` On the c6i (Ice Lake) instance, we only use a vanilla Transformers pipeline. 
``` from transformers import pipeline for model in models: print(f"Benchmarking {model}") pipe = pipeline("sentiment-analysis", model=model) result = benchmark(pipe, sentence_short) print(f"Transformers pipeline, short sentence: {result}") result = benchmark(pipe, sentence_long) print(f"Transformers pipeline, long sentence: {result}") result = benchmark(pipe, sentence_short_array) print(f"Transformers pipeline, short sentence array: {result}") result = benchmark(pipe, sentence_long_array) print(f"Transformers pipeline, long sentence array: {result}") ``` On the r7iz (Sapphire Rapids) instance, we use both a vanilla pipeline and an Optimum pipeline. In the Optimum pipeline, we enable `bfloat16` mode to leverage the AMX instructions. We also set `jit` to `True` to further optimize the model with just-in-time compilation. ``` import torch from optimum.intel import inference_mode with inference_mode(pipe, dtype=torch.bfloat16, jit=True) as opt_pipe: result = benchmark(opt_pipe, sentence_short) print(f"Optimum pipeline, short sentence: {result}") result = benchmark(opt_pipe, sentence_long) print(f"Optimum pipeline, long sentence: {result}") result = benchmark(opt_pipe, sentence_short_array) print(f"Optimum pipeline, short sentence array: {result}") result = benchmark(opt_pipe, sentence_long_array) print(f"Optimum pipeline, long sentence array: {result}") ``` For the sake of brevity, we'll just look at the p99 results for [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased). All times are in milliseconds. You'll find full results at the end of the post. <kbd> <img src="assets/129_intel_sapphire_rapids_inference/01.png"> </kbd> As you can see in the graph above, single predictions run **60-65%** faster compared to the previous generation of Xeon CPUs. In other words, thanks to the combination of Intel Sapphire Rapids and Hugging Face Optimum, you can accelerate your predictions 3x with only tiny changes to your code. This lets you achieve reach **single-digit prediction latency** even with long text sequences, which was only possible with GPUs so far. ## Conclusion The fourth generation of Intel Xeon CPUs delivers excellent inference performance, especially when combined with Hugging Face Optimum. This is yet another step on the way to making Deep Learning more accessible and more cost-effective, and we're looking forward to continuing this work with our friends at Intel. Here are some additional resources to help you get started: * [Intel IPEX](https://github.com/intel/intel-extension-for-pytorch) on GitHub * [Hugging Face Optimum](https://github.com/huggingface/optimum) on GitHub If you have questions or feedback, we'd love to read them on the [Hugging Face forum](https://discuss.huggingface.co/). Thanks for reading! ## Appendix: full results <kbd> <img src="assets/129_intel_sapphire_rapids_inference/02.png"> </kbd> *Ubuntu 22.04 with libtcmalloc, Linux 5.15.0 patched for Intel AMX support, PyTorch 1.13 with Intel Extension for PyTorch, Transformers 4.25.1, Optimum 1.6.1, Optimum Intel 1.7.0.dev0*
huggingface/blog/blob/main/intel-sapphire-rapids-inference.md
@gradio/fallback ## 0.2.6 ### Patch Changes - Updated dependencies [[`828fb9e`](https://github.com/gradio-app/gradio/commit/828fb9e6ce15b6ea08318675a2361117596a1b5d), [`73268ee`](https://github.com/gradio-app/gradio/commit/73268ee2e39f23ebdd1e927cb49b8d79c4b9a144)]: - @gradio/statustracker@0.4.3 - @gradio/atoms@0.4.1 ## 0.2.5 ### Patch Changes - Updated dependencies [[`4d1cbbc`](https://github.com/gradio-app/gradio/commit/4d1cbbcf30833ef1de2d2d2710c7492a379a9a00)]: - @gradio/atoms@0.4.0 - @gradio/statustracker@0.4.2 ## 0.2.4 ### Patch Changes - Updated dependencies []: - @gradio/atoms@0.3.1 - @gradio/statustracker@0.4.1 ## 0.2.3 ### Patch Changes - Updated dependencies [[`9caddc17b`](https://github.com/gradio-app/gradio/commit/9caddc17b1dea8da1af8ba724c6a5eab04ce0ed8)]: - @gradio/atoms@0.3.0 - @gradio/statustracker@0.4.0 ## 0.2.2 ### Patch Changes - Updated dependencies [[`f816136a0`](https://github.com/gradio-app/gradio/commit/f816136a039fa6011be9c4fb14f573e4050a681a)]: - @gradio/atoms@0.2.2 - @gradio/statustracker@0.3.2 ## 0.2.1 ### Patch Changes - Updated dependencies [[`3cdeabc68`](https://github.com/gradio-app/gradio/commit/3cdeabc6843000310e1a9e1d17190ecbf3bbc780), [`fad92c29d`](https://github.com/gradio-app/gradio/commit/fad92c29dc1f5cd84341aae417c495b33e01245f)]: - @gradio/atoms@0.2.1 - @gradio/statustracker@0.3.1 ## 0.2.0 ### Features - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Publish all components to npm. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Custom components. Thanks [@pngwn](https://github.com/pngwn)! ## 0.2.0-beta.8 ### Patch Changes - Updated dependencies [[`667802a6c`](https://github.com/gradio-app/gradio/commit/667802a6cdbfb2ce454a3be5a78e0990b194548a), [`c476bd5a5`](https://github.com/gradio-app/gradio/commit/c476bd5a5b70836163b9c69bf4bfe068b17fbe13)]: - @gradio/atoms@0.2.0-beta.6 - @gradio/statustracker@0.3.0-beta.8 - @gradio/utils@0.2.0-beta.6 ## 0.2.0-beta.7 ### Features - [#6016](https://github.com/gradio-app/gradio/pull/6016) [`83e947676`](https://github.com/gradio-app/gradio/commit/83e947676d327ca2ab6ae2a2d710c78961c771a0) - Format js in v4 branch. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! - [#6107](https://github.com/gradio-app/gradio/pull/6107) [`9a40de7bf`](https://github.com/gradio-app/gradio/commit/9a40de7bff5844c8a135e73c7d175eb02b63a966) - Fix: Move to cache in init postprocess + Fallback Fixes. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! - [#6092](https://github.com/gradio-app/gradio/pull/6092) [`11d67ae75`](https://github.com/gradio-app/gradio/commit/11d67ae7529e0838565e4131b185c413489c5aa6) - Add a stand-alone install command and tidy-up the fallback template. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ## 0.2.0-beta.6 ### Features - [#5960](https://github.com/gradio-app/gradio/pull/5960) [`319c30f3f`](https://github.com/gradio-app/gradio/commit/319c30f3fccf23bfe1da6c9b132a6a99d59652f7) - rererefactor frontend files. Thanks [@pngwn](https://github.com/pngwn)! ## 0.2.0-beta.5 ### Features - [#5648](https://github.com/gradio-app/gradio/pull/5648) [`c573e2339`](https://github.com/gradio-app/gradio/commit/c573e2339b86c85b378dc349de5e9223a3c3b04a) - Publish all components to npm. 
Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ## 0.2.0-beta.4 ### Patch Changes - Updated dependencies [[`0b4fd5b6d`](https://github.com/gradio-app/gradio/commit/0b4fd5b6db96fc95a155e5e935e17e1ab11d1161)]: - @gradio/utils@0.2.0-beta.3 - @gradio/atoms@0.2.0-beta.3 - @gradio/statustracker@0.3.0-beta.4 ## 0.2.0-beta.3 ### Patch Changes - Updated dependencies [[`14fc612d8`](https://github.com/gradio-app/gradio/commit/14fc612d84bf6b1408eccd3a40fab41f25477571)]: - @gradio/utils@0.2.0-beta.2 - @gradio/atoms@0.2.0-beta.2 - @gradio/statustracker@0.3.0-beta.3 ## 0.2.0-beta.2 ### Features - [#5620](https://github.com/gradio-app/gradio/pull/5620) [`c4c25ecdf`](https://github.com/gradio-app/gradio/commit/c4c25ecdf8c2fab5e3c41b519564e3b6a9ebfce3) - fix build and broken imports. Thanks [@pngwn](https://github.com/pngwn)! ## 0.2.0-beta.1 ### Patch Changes - Updated dependencies []: - @gradio/utils@0.2.0-beta.1 - @gradio/atoms@0.2.0-beta.1 - @gradio/statustracker@0.3.0-beta.1 ## 0.2.0-beta.0 ### Features - [#5507](https://github.com/gradio-app/gradio/pull/5507) [`1385dc688`](https://github.com/gradio-app/gradio/commit/1385dc6881f2d8ae7a41106ec21d33e2ef04d6a9) - Custom components. Thanks [@pngwn](https://github.com/pngwn)! # @gradio/checkbox ## 0.1.1 ### Fixes - [#5340](https://github.com/gradio-app/gradio/pull/5340) [`df090e89`](https://github.com/gradio-app/gradio/commit/df090e89f74a16e4cb2b700a1e3263cabd2bdd91) - Fix Checkbox select dispatch. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! ## 0.1.0 ### Highlights #### Improve startup performance and markdown support ([#5279](https://github.com/gradio-app/gradio/pull/5279) [`fe057300`](https://github.com/gradio-app/gradio/commit/fe057300f0672c62dab9d9b4501054ac5d45a4ec)) ##### Improved markdown support We now have better support for markdown in `gr.Markdown` and `gr.Dataframe`. Including syntax highlighting and Github Flavoured Markdown. We also have more consistent markdown behaviour and styling. ##### Various performance improvements These improvements will be particularly beneficial to large applications. - Rather than attaching events manually, they are now delegated, leading to a significant performance improvement and addressing a performance regression introduced in a recent version of Gradio. App startup for large applications is now around twice as fast. - Optimised the mounting of individual components, leading to a modest performance improvement during startup (~30%). - Corrected an issue that was causing markdown to re-render infinitely. - Ensured that the `gr.3DModel` does re-render prematurely. Thanks [@pngwn](https://github.com/pngwn)! ### Features - [#5215](https://github.com/gradio-app/gradio/pull/5215) [`fbdad78a`](https://github.com/gradio-app/gradio/commit/fbdad78af4c47454cbb570f88cc14bf4479bbceb) - Lazy load interactive or static variants of a component individually, rather than loading both variants regardless. This change will improve performance for many applications. Thanks [@pngwn](https://github.com/pngwn)! - [#5216](https://github.com/gradio-app/gradio/pull/5216) [`4b58ea6d`](https://github.com/gradio-app/gradio/commit/4b58ea6d98e7a43b3f30d8a4cb6f379bc2eca6a8) - Update i18n tokens and locale files. Thanks [@hannahblair](https://github.com/hannahblair)!
gradio-app/gradio/blob/main/js/fallback/CHANGELOG.md
# Access and view Metrics

Hugging Face Inference Endpoints provides access to the metrics and analytics of your Endpoints through the UI: open an Endpoint's detailed overview and switch to the “Analytics” tab.

<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/10_metric.png" alt="metric dashboard" />

## Access Metrics via API

The Hugging Face Inference Endpoints API also exposes a [route to access the metrics](/docs/inference-endpoints/api_reference#getendpointmetrics) of your Endpoints. You can use this route to send customized Prometheus queries to your Endpoints.
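As a rough illustration of what such a query could look like from Python — the base URL, route shape, metric name, and time-range parameters below are assumptions for illustration only, so check the API reference linked above for the exact schema:

```python
# Hypothetical sketch: fetch a metric for one Endpoint via the management API.
# The route and parameter names are assumed, not taken from the official spec.
import os
import requests

API_BASE = "https://api.endpoints.huggingface.cloud/v2"  # assumed base URL
NAMESPACE = "my-org"            # hypothetical account/organization namespace
ENDPOINT_NAME = "my-endpoint"   # hypothetical Endpoint name

response = requests.get(
    f"{API_BASE}/endpoint/{NAMESPACE}/{ENDPOINT_NAME}/metrics/request-count",  # assumed route
    headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
    params={"from": "2023-01-01T00:00:00Z", "to": "2023-01-02T00:00:00Z"},     # assumed params
)
response.raise_for_status()
print(response.json())
```

The same pattern applies to any Prometheus-style query the route accepts: authenticate with a token that can read the Endpoint, pick the metric, and scope it with a time range.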
huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/metrics.mdx
🚀 Creating Discord Bots from Gradio Apps 🚀 Tags: NLP, TEXT, CHAT We're excited to announce that Gradio can now automatically create a discord bot from a deployed app! 🤖 Discord is a popular communication platform that allows users to chat and interact with each other in real-time. By turning your Gradio app into a Discord bot, you can bring cutting edge AI to your discord server and give your community a whole new way to interact. ## 💻 How does it work? 💻 With `gradio_client` version `0.3.0`, any gradio `ChatInterface` app on the internet can automatically be deployed as a discord bot via the `deploy_discord` method of the `Client` class. Technically, any gradio app that exposes an api route that takes in a single string and outputs a single string can be deployed to discord. In this guide, we will focus on `gr.ChatInterface` as those apps naturally lend themselves to discord's chat functionality. ## 🛠️ Requirements 🛠️ Make sure you have the latest `gradio_client` and `gradio` versions installed. ```bash pip install gradio_client>=0.3.0 gradio>=3.38.0 ``` Also, make sure you have a [Hugging Face account](https://huggingface.co/) and a [write access token](https://huggingface.co/docs/hub/security-tokens). ⚠️ Tip ⚠️: Make sure you login to the Hugging Face Hub by running `huggingface-cli login`. This will let you skip passing your token in all subsequent commands in this guide. ## 🏃‍♀️ Quickstart 🏃‍♀️ ### Step 1: Implementing our chatbot Let's build a very simple Chatbot using `ChatInterface` that simply repeats the user message. Write the following code into an `app.py` ```python import gradio as gr def slow_echo(message, history): return message demo = gr.ChatInterface(slow_echo).queue().launch() ``` ### Step 2: Deploying our App In order to create a discord bot for our app, it must be accessible over the internet. In this guide, we will use the `gradio deploy` command to deploy our chatbot to Hugging Face spaces from the command line. Run the following command. ```bash gradio deploy --title echo-chatbot --app-file app.py ``` This command will ask you some questions, e.g. requested hardware, requirements, but the default values will suffice for this guide. Note the URL of the space that was created. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot ### Step 3: Creating a Discord Bot Turning our space into a discord bot is also a one-liner thanks to the `gradio deploy-discord`. Run the following command: ```bash gradio deploy-discord --src freddyaboulton/echo-chatbot ``` ❗️ Advanced ❗️: If you already have a discord bot token you can pass it to the `deploy-discord` command. Don't worry, if you don't have one yet! ```bash gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <token> ``` Note the URL that gets printed out to the console. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot-gradio-discord-bot ### Step 4: Getting a Discord Bot Token If you didn't have a discord bot token for step 3, go to the URL that got printed in the console and follow the instructions there. Once you obtain a token, run the command again but this time pass in the token: ```bash gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <token> ``` ### Step 5: Add the bot to your server Visit the space of your discord bot. You should see "Add this bot to your server by clicking this link:" followed by a URL. Go to that URL and add the bot to your server! ### Step 6: Use your bot! 
By default the bot can be called by starting a message with `/chat`, e.g. `/chat <your prompt here>`. ⚠️ Tip ⚠️: If either of the deployed spaces goes to sleep, the bot will stop working. By default, spaces go to sleep after 48 hours of inactivity. You can upgrade the hardware of your space to prevent it from going to sleep. See this [guide](https://huggingface.co/docs/hub/spaces-gpus#using-gpu-spaces) for more information. <img src="https://gradio-builds.s3.amazonaws.com/demo-files/discordbots/guide/echo_slash.gif"> ### Using the `gradio_client.Client` Class You can also create a discord bot from a deployed gradio app with python. ```python import gradio_client as grc grc.Client("freddyaboulton/echo-chatbot").deploy_discord() ``` ## 🦾 Using State of The Art LLMs 🦾 We have created an organization on Hugging Face called [gradio-discord-bots](https://huggingface.co/gradio-discord-bots) containing several template spaces that explain how to deploy state of the art LLMs powered by gradio as discord bots. The easiest way to get started is by deploying Meta's Llama 2 LLM with 70 billion parameter. Simply go to this [space](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-70b-chat-hf) and follow the instructions. The deployment can be done in one line! 🤯 ```python import gradio_client as grc grc.Client("ysharma/Explore_llamav2_with_TGI").deploy_discord(to_id="llama2-70b-discord-bot") ``` ## 🦜 Additional LLMs 🦜 In addition to Meta's 70 billion Llama 2 model, we have prepared template spaces for the following LLMs and deployment options: - [gpt-3.5-turbo](https://huggingface.co/spaces/gradio-discord-bots/gpt-35-turbo), powered by openai. Required OpenAI key. - [falcon-7b-instruct](https://huggingface.co/spaces/gradio-discord-bots/falcon-7b-instruct) powered by Hugging Face Inference Endpoints. - [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-13b-chat-hf) powered by Hugging Face Inference Endpoints. - [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/llama-2-13b-chat-transformers) powered by Hugging Face transformers. To deploy any of these models to discord, simply follow the instructions in the linked space for that model. ## Deploying non-chat gradio apps to discord As mentioned above, you don't need a `gr.ChatInterface` if you want to deploy your gradio app to discord. All that's needed is an api route that takes in a single string and outputs a single string. The following code will deploy a space that translates english to german as a discord bot. ```python import gradio_client as grc client = grc.Client("freddyaboulton/english-to-german") client.deploy_discord(api_names=['german']) ``` ## Conclusion That's it for this guide! We're really excited about this feature. Tag [@Gradio](https://twitter.com/Gradio) on twitter and show us how your discord community interacts with your discord bots.
gradio-app/gradio/blob/main/guides/04_chatbots/03_creating-a-discord-bot-from-a-gradio-app.md
MobileNet v2 **MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an [inverted residual structure](https://paperswithcode.com/method/inverted-residual-block) where the residual connections are between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. As a whole, the architecture of MobileNetV2 contains the initial fully convolution layer with 32 filters, followed by 19 residual bottleneck layers. ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('mobilenetv2_100', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `mobilenetv2_100`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('mobilenetv2_100', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/abs-1801-04381, author = {Mark Sandler and Andrew G. 
Howard and Menglong Zhu and Andrey Zhmoginov and Liang{-}Chieh Chen}, title = {Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation}, journal = {CoRR}, volume = {abs/1801.04381}, year = {2018}, url = {http://arxiv.org/abs/1801.04381}, archivePrefix = {arXiv}, eprint = {1801.04381}, timestamp = {Tue, 12 Jan 2021 15:30:06 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1801-04381.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: MobileNet V2 Paper: Title: 'MobileNetV2: Inverted Residuals and Linear Bottlenecks' URL: https://paperswithcode.com/paper/mobilenetv2-inverted-residuals-and-linear Models: - Name: mobilenetv2_100 In Collection: MobileNet V2 Metadata: FLOPs: 401920448 Parameters: 3500000 File Size: 14202571 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Depthwise Separable Convolution - Dropout - Inverted Residual Block - Max Pooling - ReLU6 - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 16x GPUs ID: mobilenetv2_100 LR: 0.045 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1536 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L955 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_100_ra-b33bc2c4.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 72.95% Top 5 Accuracy: 91.0% - Name: mobilenetv2_110d In Collection: MobileNet V2 Metadata: FLOPs: 573958832 Parameters: 4520000 File Size: 18316431 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Depthwise Separable Convolution - Dropout - Inverted Residual Block - Max Pooling - ReLU6 - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 16x GPUs ID: mobilenetv2_110d LR: 0.045 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1536 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L969 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_110d_ra-77090ade.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.05% Top 5 Accuracy: 92.19% - Name: mobilenetv2_120d In Collection: MobileNet V2 Metadata: FLOPs: 888510048 Parameters: 5830000 File Size: 23651121 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Depthwise Separable Convolution - Dropout - Inverted Residual Block - Max Pooling - ReLU6 - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 16x GPUs ID: mobilenetv2_120d LR: 0.045 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1536 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L977 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_120d_ra-5987e2ed.pth Results: 
- Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.28% Top 5 Accuracy: 93.51% - Name: mobilenetv2_140 In Collection: MobileNet V2 Metadata: FLOPs: 770196784 Parameters: 6110000 File Size: 24673555 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Depthwise Separable Convolution - Dropout - Inverted Residual Block - Max Pooling - ReLU6 - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 16x GPUs ID: mobilenetv2_140 LR: 0.045 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 1536 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L962 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_140_ra-21a4e913.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.51% Top 5 Accuracy: 93.0% -->
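As a complement to the fine-tuning section above, here is a minimal training-loop sketch; the dataset path, class count, and hyper-parameters are placeholders rather than values from this page, and for anything serious you would adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) instead:

```python
# Minimal fine-tuning sketch for a timm MobileNetV2 model on an ImageFolder dataset.
import timm
import torch
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

NUM_FINETUNE_CLASSES = 10  # placeholder: number of classes in your dataset

model = timm.create_model('mobilenetv2_100', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
config = resolve_data_config({}, model=model)
train_ds = ImageFolder('path/to/train', transform=create_transform(**config, is_training=True))
loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device).train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=4e-5)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(5):  # placeholder epoch count
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    print(f'epoch {epoch}: last batch loss {loss.item():.4f}')
```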
huggingface/pytorch-image-models/blob/main/docs/models/mobilenet-v2.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Donut

## Overview

The Donut model was proposed in [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding tasks such as document image classification, form understanding and visual question answering.

The abstract from the paper is the following:

*Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg" alt="drawing" width="600"/>

<small> Donut high-level overview. Taken from the <a href="https://arxiv.org/abs/2111.15664">original paper</a>. </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/clovaai/donut).

## Usage tips

- The quickest way to get started with Donut is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Donut), which show how to use the model at inference time as well as fine-tuning on custom data.
- Donut is always used within the [VisionEncoderDecoder](vision-encoder-decoder) framework.
## Inference examples Donut's [`VisionEncoderDecoder`] model accepts images as input and makes use of [`~generation.GenerationMixin.generate`] to autoregressively generate text given the input image. The [`DonutImageProcessor`] class is responsible for preprocessing the input image and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`] decodes the generated target tokens to the target string. The [`DonutProcessor`] wraps [`DonutImageProcessor`] and [`XLMRobertaTokenizer`/`XLMRobertaTokenizerFast`] into a single instance to both extract the input features and decode the predicted token ids. - Step-by-step Document Image Classification ```py >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[1]["image"] >>> # prepare decoder inputs >>> task_prompt = "<s_rvlcdip>" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values >>> outputs = model.generate( ... pixel_values.to(device), ... decoder_input_ids=decoder_input_ids.to(device), ... max_length=model.decoder.config.max_position_embeddings, ... pad_token_id=processor.tokenizer.pad_token_id, ... eos_token_id=processor.tokenizer.eos_token_id, ... use_cache=True, ... bad_words_ids=[[processor.tokenizer.unk_token_id]], ... return_dict_in_generate=True, ... ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token >>> print(processor.token2json(sequence)) {'class': 'advertisement'} ``` - Step-by-step Document Parsing ```py >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[2]["image"] >>> # prepare decoder inputs >>> task_prompt = "<s_cord-v2>" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values >>> outputs = model.generate( ... pixel_values.to(device), ... decoder_input_ids=decoder_input_ids.to(device), ... max_length=model.decoder.config.max_position_embeddings, ... pad_token_id=processor.tokenizer.pad_token_id, ... eos_token_id=processor.tokenizer.eos_token_id, ... use_cache=True, ... bad_words_ids=[[processor.tokenizer.unk_token_id]], ... return_dict_in_generate=True, ... 
) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token >>> print(processor.token2json(sequence)) {'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}} ``` - Step-by-step Document Visual Question Answering (DocVQA) ```py >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) # doctest: +IGNORE_RESULT >>> # load document image from the DocVQA dataset >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[0]["image"] >>> # prepare decoder inputs >>> task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>" >>> question = "When is the coffee break?" >>> prompt = task_prompt.replace("{user_input}", question) >>> decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values >>> outputs = model.generate( ... pixel_values.to(device), ... decoder_input_ids=decoder_input_ids.to(device), ... max_length=model.decoder.config.max_position_embeddings, ... pad_token_id=processor.tokenizer.pad_token_id, ... eos_token_id=processor.tokenizer.eos_token_id, ... use_cache=True, ... bad_words_ids=[[processor.tokenizer.unk_token_id]], ... return_dict_in_generate=True, ... ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token >>> print(processor.token2json(sequence)) {'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'} ``` See the [model hub](https://huggingface.co/models?filter=donut) to look for Donut checkpoints. ## Training We refer to the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Donut). ## DonutSwinConfig [[autodoc]] DonutSwinConfig ## DonutImageProcessor [[autodoc]] DonutImageProcessor - preprocess ## DonutFeatureExtractor [[autodoc]] DonutFeatureExtractor - __call__ ## DonutProcessor [[autodoc]] DonutProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## DonutSwinModel [[autodoc]] DonutSwinModel - forward
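To complement the Training section above, here is a rough sketch of what a single supervised training step looks like when fine-tuning the base checkpoint within the VisionEncoderDecoder framework; the target-sequence format is illustrative only, and the linked tutorial notebooks describe the real task-specific prompts and datasets:

```python
# Illustrative single training step for Donut: image -> pixel values,
# target string -> labels, VisionEncoderDecoder forward pass -> loss.
import torch
from datasets import load_dataset
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]
target_sequence = "<s_class><advertisement/></s_class>"  # hypothetical target string
# In practice you would first add task-specific tokens to the tokenizer and
# resize the decoder embeddings, as shown in the tutorial notebooks.

pixel_values = processor(image, return_tensors="pt").pixel_values
labels = processor.tokenizer(
    target_sequence, add_special_tokens=False, return_tensors="pt"
).input_ids
labels[labels == processor.tokenizer.pad_token_id] = -100  # ignore padding in the loss

outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()  # plug this into an optimizer inside a real training loop
```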
huggingface/transformers/blob/main/docs/source/en/model_doc/donut.md
Gradio Demo: blocks_essay ``` !pip install -q gradio ``` ``` import gradio as gr countries_cities_dict = { "USA": ["New York", "Los Angeles", "Chicago"], "Canada": ["Toronto", "Montreal", "Vancouver"], "Pakistan": ["Karachi", "Lahore", "Islamabad"], } def change_textbox(choice): if choice == "short": return gr.Textbox(lines=2, visible=True), gr.Button(interactive=True) elif choice == "long": return gr.Textbox(lines=8, visible=True, value="Lorem ipsum dolor sit amet"), gr.Button(interactive=True) else: return gr.Textbox(visible=False), gr.Button(interactive=False) with gr.Blocks() as demo: radio = gr.Radio( ["short", "long", "none"], label="What kind of essay would you like to write?" ) text = gr.Textbox(lines=2, interactive=True, show_copy_button=True) with gr.Row(): num = gr.Number(minimum=0, maximum=100, label="input") out = gr.Number(label="output") minimum_slider = gr.Slider(0, 100, 0, label="min") maximum_slider = gr.Slider(0, 100, 100, label="max") submit_btn = gr.Button("Submit", variant="primary") with gr.Row(): country = gr.Dropdown(list(countries_cities_dict.keys()), label="Country") cities = gr.Dropdown([], label="Cities") @country.change(inputs=country, outputs=cities) def update_cities(country): cities = list(countries_cities_dict[country]) return gr.Dropdown(choices=cities, value=cities[0], interactive=True) def reset_bounds(minimum, maximum): return gr.Number(minimum=minimum, maximum=maximum) radio.change(fn=change_textbox, inputs=radio, outputs=[text, submit_btn]) gr.on( [minimum_slider.change, maximum_slider.change], reset_bounds, [minimum_slider, maximum_slider], outputs=num, ) num.submit(lambda x: x, num, out) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/blocks_essay/run.ipynb
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DDIMInverseScheduler

`DDIMInverseScheduler` is the inverted scheduler from [Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The implementation is mostly based on the DDIM inversion definition from [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://huggingface.co/papers/2211.09794).

## DDIMInverseScheduler

[[autodoc]] DDIMInverseScheduler
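As a minimal sketch (the checkpoint name is just an example), the inverse scheduler is typically instantiated from the configuration of an existing pipeline's scheduler; inversion-capable pipelines then use it to run the diffusion process backwards, from an image towards its initial noise latents:

```python
# Sketch: build a DDIMInverseScheduler that mirrors a pipeline's DDIM configuration.
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)            # forward scheduler
inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)  # matching inverse scheduler
inverse_scheduler.set_timesteps(50)
print(inverse_scheduler.timesteps)  # timesteps run in the inverted (noise-adding) direction
```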
huggingface/diffusers/blob/main/docs/source/en/api/schedulers/ddim_inverse.md
Gradio Demo: generate_tone ``` !pip install -q gradio numpy ``` ``` import numpy as np import gradio as gr notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"] def generate_tone(note, octave, duration): sr = 48000 a4_freq, tones_from_a4 = 440, 12 * (octave - 4) + (note - 9) frequency = a4_freq * 2 ** (tones_from_a4 / 12) duration = int(duration) audio = np.linspace(0, duration, duration * sr) audio = (20000 * np.sin(audio * (2 * np.pi * frequency))).astype(np.int16) return sr, audio demo = gr.Interface( generate_tone, [ gr.Dropdown(notes, type="index"), gr.Slider(4, 6, step=1), gr.Textbox(value=1, label="Duration in seconds"), ], "audio", ) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/generate_tone/run.ipynb
-- title: Getting Started with Hugging Face Inference Endpoints thumbnail: /blog/assets/109_inference_endpoints/endpoints05.png authors: - user: juliensimon --- # Getting Started with Hugging Face Inference Endpoints Training machine learning models has become quite simple, especially with the rise of pre-trained models and transfer learning. OK, sometimes it's not *that* simple, but at least, training models will never break critical applications, and make customers unhappy about your quality of service. Deploying models, however... Yes, we've all been there. Deploying models in production usually requires jumping through a series of hoops. Packaging your model in a container, provisioning the infrastructure, creating your prediction API, securing it, scaling it, monitoring it, and more. Let's face it: building all this plumbing takes valuable time away from doing actual machine learning work. Unfortunately, it can also go awfully wrong. We strive to fix this problem with the newly launched Hugging Face [Inference Endpoints](https://huggingface.co/inference-endpoints). In the spirit of making machine learning ever simpler without compromising on state-of-the-art quality, we've built a service that lets you deploy machine learning models directly from the [Hugging Face hub](https://huggingface.co) to managed infrastructure on your favorite cloud in just a few clicks. Simple, secure, and scalable: you can have it all. Let me show you how this works! ### Deploying a model on Inference Endpoints Looking at the list of [tasks](https://huggingface.co/docs/inference-endpoints/supported_tasks) that Inference Endpoints support, I decided to deploy a Swin image classification model that I recently fine-tuned with [AutoTrain](https://huggingface.co/autotrain) on the [food101](https://huggingface.co/datasets/food101) dataset. If you're interested in how I built this model, this [video](https://youtu.be/uFxtl7QuUvo) will show you the whole process. Starting from my [model page](https://huggingface.co/juliensimon/autotrain-food101-1471154053), I click on `Deploy` and select `Inference Endpoints`. <kbd> <img src="assets/109_inference_endpoints/endpoints00.png"> </kbd> This takes me directly to the [endpoint creation](https://ui.endpoints.huggingface.co/new) page. <kbd> <img src="assets/109_inference_endpoints/endpoints01.png"> </kbd> I decide to deploy the latest revision of my model on a single GPU instance, hosted on AWS in the `eu-west-1` region. Optionally, I could set up autoscaling, and I could even deploy the model in a [custom container](https://huggingface.co/docs/inference-endpoints/guides/custom_container). <kbd> <img src="assets/109_inference_endpoints/endpoints02.png"> </kbd> Next, I need to decide who can access my endpoint. From least secure to most secure, the three options are: * **Public**: the endpoint runs in a public Hugging Face subnet, and anyone on the Internet can access it without any authentication. Think twice before selecting this! * **Protected**: the endpoint runs in a public Hugging Face subnet, and anyone on the Internet with the appropriate organization token can access it. * **Private**: the endpoint runs in a private Hugging Face subnet. It's not accessible on the Internet. It's only available in your AWS account through a VPC Endpoint created with [AWS PrivateLink](https://aws.amazon.com/privatelink/). You can control which VPC and subnet(s) in your AWS account have access to the endpoint. Let's first deploy a protected endpoint, and then we'll deploy a private one. 
### Deploying a Protected Inference Endpoint I simply select `Protected` and click on `Create Endpoint`. <kbd> <img src="assets/109_inference_endpoints/endpoints03.png"> </kbd> After a few minutes, the endpoint is up and running, and its URL is visible. <kbd> <img src="assets/109_inference_endpoints/endpoints04.png"> </kbd> I can immediately test it by uploading an [image](assets/109_inference_endpoints/food.jpg) in the inference widget. <kbd> <img src="assets/109_inference_endpoints/endpoints05.png"> </kbd> Of course, I can also invoke the endpoint directly with a few lines of Python code, and I authenticate with my Hugging Face API token (you'll find yours in your account settings on the hub). ``` import requests, json API_URL = "https://oncm9ojdmjwesag2.eu-west-1.aws.endpoints.huggingface.cloud" headers = { "Authorization": "Bearer MY_API_TOKEN", "Content-Type": "image/jpg" } def query(filename): with open(filename, "rb") as f: data = f.read() response = requests.request("POST", API_URL, headers=headers, data=data) return json.loads(response.content.decode("utf-8")) output = query("food.jpg") ``` As you would expect, the predicted result is identical. ``` [{'score': 0.9998438358306885, 'label': 'hummus'}, {'score': 6.674625183222815e-05, 'label': 'falafel'}, {'score': 6.490697160188574e-06, 'label': 'escargots'}, {'score': 5.776922080258373e-06, 'label': 'deviled_eggs'}, {'score': 5.492902801051969e-06, 'label': 'shrimp_and_grits'}] ``` Moving to the `Analytics` tab, I can see endpoint metrics. Some of my requests failed because I deliberately omitted the `Content-Type` header. <kbd> <img src="assets/109_inference_endpoints/endpoints06.png"> </kbd> For additional details, I can check the full logs in the `Logs` tab. ``` 5c7fbb4485cd8w7 2022-10-10T08:19:04.915Z 2022-10-10 08:19:04,915 | INFO | POST / | Duration: 142.76 ms 5c7fbb4485cd8w7 2022-10-10T08:19:05.860Z 2022-10-10 08:19:05,860 | INFO | POST / | Duration: 148.06 ms 5c7fbb4485cd8w7 2022-10-10T09:21:39.251Z 2022-10-10 09:21:39,250 | ERROR | Content type "None" not supported. Supported content types are: application/json, text/csv, text/plain, image/png, image/jpeg, image/jpg, image/tiff, image/bmp, image/gif, image/webp, image/x-image, audio/x-flac, audio/flac, audio/mpeg, audio/wave, audio/wav, audio/x-wav, audio/ogg, audio/x-audio, audio/webm, audio/webm;codecs=opus 5c7fbb4485cd8w7 2022-10-10T09:21:44.114Z 2022-10-10 09:21:44,114 | ERROR | Content type "None" not supported. Supported content types are: application/json, text/csv, text/plain, image/png, image/jpeg, image/jpg, image/tiff, image/bmp, image/gif, image/webp, image/x-image, audio/x-flac, audio/flac, audio/mpeg, audio/wave, audio/wav, audio/x-wav, audio/ogg, audio/x-audio, audio/webm, audio/webm;codecs=opus ``` Now, let's increase our security level and deploy a private endpoint. ### Deploying a Private Inference Endpoint Repeating the steps above, I select `Private` this time. This opens a new box asking me for the identifier of the AWS account in which the endpoint will be visible. I enter the appropriate ID and click on `Create Endpoint`. Not sure about your AWS account id? Here's an AWS CLI one-liner for you: `aws sts get-caller-identity --query Account --output text` <kbd> <img src="assets/109_inference_endpoints/endpoints07.png"> </kbd> After a few minutes, the Inference Endpoints user interface displays the name of the VPC service name. Mine is `com.amazonaws.vpce.eu-west-1.vpce-svc-07a49a19a427abad7`. 
Next, I open the AWS console and go to the [VPC Endpoints](https://console.aws.amazon.com/vpc/home?#Endpoints:) page. Then, I click on `Create endpoint` to create a VPC endpoint, which will enable my AWS account to access my Inference Endpoint through AWS PrivateLink. In a nutshell, I need to fill in the name of the VPC service name displayed above, select the VPC and subnets(s) allowed to access the endpoint, and attach an appropriate Security Group. Nothing scary: I just follow the steps listed in the [Inference Endpoints documentation](https://huggingface.co/docs/inference-endpoints/guides/private_link). Once I've created the VPC endpoint, my setup looks like this. <kbd> <img src="assets/109_inference_endpoints/endpoints08.png"> </kbd> Returning to the Inference Endpoints user interface, the private endpoint runs a minute or two later. Let's test it! Launching an Amazon EC2 instance in one of the subnets allowed to access the VPC endpoint, I use the inference endpoint URL to predict my test image. ``` curl https://oncm9ojdmjwesag2.eu-west-1.aws.endpoints.huggingface.cloud \ -X POST --data-binary '@food.jpg' \ -H "Authorization: Bearer MY_API_TOKEN" \ -H "Content-Type: image/jpeg" [{"score":0.9998466968536377, "label":"hummus"}, {"score":0.00006414744711946696, "label":"falafel"}, {"score":6.4065129663504194e-6, "label":"escargots"}, {"score":5.819705165777123e-6, "label":"deviled_eggs"}, {"score":5.532585873879725e-6, "label":"shrimp_and_grits"}] ``` This is all there is to it. Once I'm done testing, I delete the endpoints that I've created to avoid unwanted charges. I also delete the VPC Endpoint in the AWS console. Hugging Face customers are already using Inference Endpoints. For example, [Phamily](https://phamily.com/), the #1 in-house chronic care management & proactive care platform, [told us](https://www.youtube.com/watch?v=20C9X5OYO2Q) that Inference Endpoints is helping them simplify and accelerate HIPAA-compliant Transformer deployments. ### Now it's your turn! Thanks to Inference Endpoints, you can deploy production-grade, scalable, secure endpoints in minutes, in just a few clicks. Why don't you [give it a try](https://ui.endpoints.huggingface.co/new)? We have plenty of ideas to make the service even better, and we'd love to hear your feedback in the [Hugging Face forum](https://discuss.huggingface.co/). Thank you for reading and have fun with Inference Endpoints!
huggingface/blog/blob/main/inference-endpoints.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Self-Attention Guidance [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://huggingface.co/papers/2210.00939) is by Susung Hong et al. The abstract from the paper is: *Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.* You can find additional information about Self-Attention Guidance on the [project page](https://ku-cvlab.github.io/Self-Attention-Guidance), [original codebase](https://github.com/KU-CVLAB/Self-Attention-Guidance), and try it out in a [demo](https://huggingface.co/spaces/susunghong/Self-Attention-Guidance) or [notebook](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb). <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. </Tip> ## StableDiffusionSAGPipeline [[autodoc]] StableDiffusionSAGPipeline - __call__ - all ## StableDiffusionOutput [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
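A minimal usage sketch (the checkpoint and prompt are examples; setting `sag_scale=0.0` falls back to regular sampling):

```python
# Sketch: generate an image with Self-Attention Guidance enabled via `sag_scale`.
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    sag_scale=0.75,       # strength of self-attention guidance
    guidance_scale=7.5,   # regular classifier-free guidance still applies
).images[0]
image.save("sag_astronaut.png")
```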
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/self_attention_guidance.md
# `@gradio/form`

```html
<script>
	import { Form } from "@gradio/form";
</script>
```

Form

```javascript
export let visible = true;
export let scale: number | null = null;
export let min_width = 0;
```
gradio-app/gradio/blob/main/js/form/README.md
-- title: "Non-engineers guide: Train a LLaMA 2 chatbot" thumbnail: /blog/assets/78_ml_director_insights/tuto.png authors: - user: 2legit2overfit - user: abhishek --- # Non-engineers guide: Train a LLaMA 2 chatbot ## Introduction In this tutorial we will show you how anyone can build their own open-source ChatGPT without ever writing a single line of code! We’ll use the LLaMA 2 base model, fine tune it for chat with an open-source instruction dataset and then deploy the model to a chat app you can share with your friends. All by just clicking our way to greatness. 😀 Why is this important? Well, machine learning, especially LLMs (Large Language Models), has witnessed an unprecedented surge in popularity, becoming a critical tool in our personal and business lives. Yet, for most outside the specialized niche of ML engineering, the intricacies of training and deploying these models appears beyond reach. If the anticipated future of machine learning is to be one filled with ubiquitous personalized models, then there's an impending challenge ahead: How do we empower those with non-technical backgrounds to harness this technology independently? At Hugging Face, we’ve been quietly working to pave the way for this inclusive future. Our suite of tools, including services like Spaces, AutoTrain, and Inference Endpoints, are designed to make the world of machine learning accessible to everyone. To showcase just how accessible this democratized future is, this tutorial will show you how to use [Spaces](https://huggingface.co/Spaces), [AutoTrain](https://huggingface.co/autotrain) and [ChatUI](https://huggingface.co/inference-endpoints) to build the chat app. All in just three simple steps, sans a single line of code. For context I’m also not an ML engineer, but a member of the Hugging Face GTM team. If I can do this then you can too! Let's dive in! ## Introduction to Spaces Spaces from Hugging Face is a service that provides easy to use GUI for building and deploying web hosted ML demos and apps. The service allows you to quickly build ML demos using Gradio or Streamlit front ends, upload your own apps in a docker container, or even select a number of pre-configured ML applications to deploy instantly. We’ll be deploying two of the pre-configured docker application templates from Spaces, AutoTrain and ChatUI. You can read more about Spaces [here](https://huggingface.co/docs/hub/spaces). ## Introduction to AutoTrain AutoTrain is a no-code tool that lets non-ML Engineers, (or even non-developers 😮) train state-of-the-art ML models without the need to code. It can be used for NLP, computer vision, speech, tabular data and even now for fine-tuning LLMs like we’ll be doing today. You can read more about AutoTrain [here](https://huggingface.co/docs/autotrain/index). ## Introduction to ChatUI ChatUI is exactly what it sounds like, it’s the open-source UI built by Hugging Face that provides an interface to interact with open-source LLMs. Notably, it's the same UI behind HuggingChat, our 100% open-source alternative to ChatGPT. You can read more about ChatUI [here](https://github.com/huggingface/chat-ui). ### Step 1: Create a new AutoTrain Space 1.1 Go to [huggingface.co/spaces](https://huggingface.co/spaces) and select “Create new Space”. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto1.png"><br> </p> 1.2 Give your Space a name and select a preferred usage license if you plan to make your model or Space public. 
1.3 In order to deploy the AutoTrain app from the Docker Template in your deployed space select Docker > AutoTrain. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto2.png"><br> </p> 1.4 Select your “Space hardware” for running the app. (Note: For the AutoTrain app the free CPU basic option will suffice, the model training later on will be done using separate compute which we can choose later) 1.5 Add your “HF_TOKEN” under “Space secrets” in order to give this Space access to your Hub account. Without this the Space won’t be able to train or save a new model to your account. (Note: Your HF_TOKEN can be found in your Hugging Face Profile under Settings > Access Tokens, make sure the token is selected as “Write”) 1.6 Select whether you want to make the “Private” or “Public”, for the AutoTrain Space itself it’s recommended to keep this Private, but you can always publicly share your model or Chat App later on. 1.7 Hit “Create Space” et voilà! The new Space will take a couple of minutes to build after which you can open the Space and start using AutoTrain. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto3.png"><br> </p> ### Step 2: Launch a Model Training in AutoTrain 2.1 Once you’re AutoTrain space has launched you’ll see the GUI below. AutoTrain can be used for several different kinds of training including LLM fine-tuning, text classification, tabular data and diffusion models. As we’re focusing on LLM training today select the “LLM” tab. 2.2 Choose the LLM you want to train from the “Model Choice” field, you can select a model from the list or type the name of the model from the Hugging Face model card, in this example we’ve used Meta’s Llama 2 7b foundation model, learn more from the model card [here](https://huggingface.co/meta-llama/Llama-2-7b-hf). (Note: LLama 2 is gated model which requires you to request access from Meta before using, but there are plenty of others non-gated models you could choose like Falcon) 2.3 In “Backend” select the CPU or GPU you want to use for your training. For a 7b model an “A10G Large” will be big enough. If you choose to train a larger model you’ll need to make sure the model can fully fit in the memory of your selected GPU. (Note: If you want to train a larger model and need access to an A100 GPU please email api-enterprise@huggingface.co) 2.4 Of course to fine-tune a model you’ll need to upload “Training Data”. When you do, make sure the dataset is correctly formatted and in CSV file format. An example of the required format can be found [here](https://huggingface.co/docs/autotrain/main/en/llm_finetuning). If your dataset contains multiple columns, be sure to select the “Text Column” from your file that contains the training data. In this example we’ll be using the Alpaca instruction tuning dataset, more information about this dataset is available [here](https://huggingface.co/datasets/tatsu-lab/alpaca). You can also download it directly as CSV from [here](https://huggingface.co/datasets/tofighi/LLM/resolve/main/alpaca.csv). <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto4.png"><br> </p> 2.5 Optional: You can upload “Validation Data” to test your newly trained model against, but this isn’t required. 
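Before moving on to the advanced settings, here is a rough sketch of one way to assemble the training CSV from step 2.4 yourself; the single `text` column and the prompt template are illustrative assumptions, so double-check the AutoTrain data-format documentation linked above for the exact requirements:

```python
# Hypothetical sketch: build a tiny instruction-tuning CSV with one "text" column.
# The column name and prompt template are assumptions, not the official AutoTrain spec.
import pandas as pd

examples = [
    {"instruction": "Summarize the paragraph.", "response": "A short summary..."},
    {"instruction": "Translate 'hello' to French.", "response": "Bonjour."},
]

rows = [
    {"text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['response']}"}
    for ex in examples
]

pd.DataFrame(rows).to_csv("train.csv", index=False)
```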
2.6 A number of advanced settings can be configured in AutoTrain to reduce the memory footprint of your model like changing precision (“FP16”), quantization (“Int4/8”) or whether to employ PEFT (Parameter Efficient Fine Tuning). It’s recommended to use these as is set by default as it will reduce the time and cost to train your model, and only has a small impact on model performance. 2.7 Similarly you can configure the training parameters in “Parameter Choice” but for now let’s use the default settings. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto5.png"><br> </p> 2.8 Now everything is set up, select “Add Job” to add the model to your training queue then select “Start Training” (Note: If you want to train multiple models versions with different hyper-parameters you can add multiple jobs to run simultaneously) 2.9 After training has started you’ll see that a new “Space” has been created in your Hub account. This Space is running the model training, once it’s complete the new model will also be shown in your Hub account under “Models”. (Note: To view training progress you can view live logs in the Space) 2.10 Go grab a coffee, depending on the size of your model and training data this could take a few hours or even days. Once completed a new model will appear in your Hugging Face Hub account under “Models”. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto6.png"><br> </p> ### Step 3: Create a new ChatUI Space using your model 3.1 Follow the same process of setting up a new Space as in steps 1.1 > 1.3, but select the ChatUI docker template instead of AutoTrain. 3.2 Select your “Space Hardware” for our 7b model an A10G Small will be sufficient to run the model, but this will vary depending on the size of your model. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto7.png"><br> </p> 3.3 If you have your own Mongo DB you can provide those details in order to store chat logs under “MONGODB_URL”. Otherwise leave the field blank and a local DB will be created automatically. 3.4 In order to run the chat app using the model you’ve trained you’ll need to provide the “MODEL_NAME” under the “Space variables” section. You can find the name of your model by looking in the “Models” section of your Hugging Face profile, it will be the same as the “Project name” you used in AutoTrain. In our example it’s “2legit2overfit/wrdt-pco6-31a7-0”. 3.4 Under “Space variables” you can also change model inference parameters including temperature, top-p, max tokens generated and others to change the nature of your generations. For now let’s stick with the default settings. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto8.png"><br> </p> 3.5 Now you are ready to hit “Create” and launch your very own open-source ChatGPT. Congratulations! If you’ve done it right it should look like this. <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/llama2-non-engineers/tuto9.png"><br> </p> _If you’re feeling inspired, but still need technical support to get started, feel free to reach out and apply for support [here](https://huggingface.co/support#form). 
Hugging Face offers a paid Expert Advice service that might be able to help._
huggingface/blog/blob/main/Llama2-for-non-engineers.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Creating a scene

This section describes how to create and save a simple scene. The code in this section is reflected in [examples/basic/create_and_save.py](https://github.com/huggingface/simulate/blob/main/examples/basic/create_and_save.py).

Start by creating an empty scene:

```
import simulate as sm

scene = sm.Scene()
```

Then, add a light source and a box to the scene:

```
scene += sm.LightSun()
scene += sm.Box()
```

A full list of available built-in objects can be found [here](https://github.com/huggingface/simulate/blob/main/examples/basic/objects.py).

You can save the scene using the [glTF 2.0](https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html) specification, allowing it to be shared across multiple platforms:

```
scene.save("output/test.gltf")
```

Finally, call `show()` to display the scene:

```
scene.show()
```

## Loading scenes

Scenes can also be loaded from the hub:

```
scene = sm.Scene.create_from("simulate-tests/Box/glTF-Embedded/Box.gltf")
```

or, they can be treated as individual assets, and added to the scene:

```
scene += sm.Asset.create_from("simulate-tests/Box/glTF-Embedded/Box.gltf")
```
huggingface/simulate/blob/main/docs/source/tutorials/creating_a_scene.mdx
# [Deprecated] Multi Token Textual Inversion

**IMPORTANT: This research project is deprecated. Multi Token Textual Inversion is now supported natively in [the official textual inversion example](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion#running-locally-with-pytorch).**

The author of this project is [Isamu Isozaki](https://github.com/isamu-isozaki) - please make sure to tag the author for issues and PRs as well as @patrickvonplaten.

We add multi token support to textual inversion. I added

1. num_vec_per_token for the number of vectors used to reference that token
2. progressive_tokens for progressively training the token from 1 token to 2 tokens etc.
3. progressive_tokens_max_steps for the max number of steps until we start full training
4. vector_shuffle to shuffle vectors

Feel free to add these options to your training! In practice, `num_vec_per_token` around 10 plus `vector_shuffle` works great!

## Textual Inversion fine-tuning example

[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like stable diffusion on your own images using just 3-5 examples.
The `textual_inversion.py` script shows how to implement the training procedure and adapt it for stable diffusion.

## Running on Colab

Colab for training
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb)

Colab for inference
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb)

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then cd in the example folder and run

```bash
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

### Cat toy example

You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-5`, so you'll need to visit [its card](https://huggingface.co/runwayml/stable-diffusion-v1-5), read the license and tick the checkbox if you agree.

You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).

Run the following command to authenticate your token

```bash
huggingface-cli login
```

If you have already cloned the repo, then you won't need to go through these steps.

<br>

Now let's get our dataset. Download 3-4 images from [here](https://drive.google.com/drive/folders/1fmJMs25nxS_rSNqS5hTcRdLem_YQXbq5) and save them in a directory. This will be our training data.
And launch the training using

**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**

```bash
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATA_DIR="path-to-dir-containing-images"

accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --learnable_property="object" \
  --placeholder_token="<cat-toy>" --initializer_token="toy" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 --scale_lr \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --output_dir="textual_inversion_cat"
```

A full training run takes ~1 hour on one V100 GPU.

### Inference

Once you have trained a model using the above command, inference can be done simply using the `StableDiffusionPipeline`. Make sure to include the `placeholder_token` in your prompt.

```python
import torch
from diffusers import StableDiffusionPipeline

model_id = "path-to-your-trained-model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("cat-backpack.png")
```

## Training with Flax/JAX

For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script.

Before running the scripts, make sure to install the library's training dependencies:

```bash
pip install -U -r requirements_flax.txt
```

```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export DATA_DIR="path-to-dir-containing-images"

python textual_inversion_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --learnable_property="object" \
  --placeholder_token="<cat-toy>" --initializer_token="toy" \
  --resolution=512 \
  --train_batch_size=1 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 --scale_lr \
  --output_dir="textual_inversion_cat"
```

It should be at least 70% faster than the PyTorch script with the same configuration.

### Training with xformers:

You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
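For reference, to exercise the multi-token options described at the top of this README, you would add the corresponding flags to the PyTorch launch command above. The flag names below simply mirror the option names listed earlier and are an assumption; check `python textual_inversion.py --help` for the exact arguments this research script exposes.

```bash
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --learnable_property="object" \
  --placeholder_token="<cat-toy>" --initializer_token="toy" \
  --num_vec_per_token=10 \
  --vector_shuffle \
  --progressive_tokens \
  --progressive_tokens_max_steps=2000 \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 --scale_lr \
  --output_dir="textual_inversion_cat"
```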
huggingface/diffusers/blob/main/examples/research_projects/multi_token_textual_inversion/README.md
Gradio Demo: hello_world_3 ``` !pip install -q gradio ``` ``` import gradio as gr def greet(name, is_morning, temperature): salutation = "Good morning" if is_morning else "Good evening" greeting = f"{salutation} {name}. It is {temperature} degrees today" celsius = (temperature - 32) * 5 / 9 return greeting, round(celsius, 2) demo = gr.Interface( fn=greet, inputs=["text", "checkbox", gr.Slider(0, 100)], outputs=["text", "number"], ) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/hello_world_3/run.ipynb
You are at the right place if you want to understand what the Byte Pair Encoding subword tokenization algorithm is, how to train it and how the tokenization of a text is done with this algorithm.

The BPE algorithm was initially proposed as a text compression algorithm, but it is also very well suited as a tokenizer for your language models. The idea of BPE is to divide words into a sequence of "subword units", which are units that appear frequently in a reference corpus - that is, the corpus we used to train it.

How is a BPE tokenizer trained? First of all, we have to get a corpus of texts. We will not train our tokenizer on this raw text but we will first normalize it and then pre-tokenize it. As the pre-tokenization divides the text into a list of words, we can represent our corpus in another way by gathering together the same words and by maintaining a counter, here represented in blue.

To understand how the training works, we consider this toy corpus composed of the following words: huggingface, hugging, hug, hugger, etc. BPE is an algorithm that starts with an initial vocabulary and then increases it to the desired size. To build the initial vocabulary, we start by separating each word of the corpus into a list of the elementary units that compose them - here, the characters. We could also have chosen bytes as elementary units but it would have been less visual. We list in our vocabulary all the characters that appear and that will constitute our initial vocabulary!

Let's now see how to increase it. We return to our split corpus: we will go through the words one by one and count all the occurrences of token pairs. The first pair is composed of the tokens "h" and "u", the second "u" and "g", and we continue like that until we have the complete list. Once we know all the pairs and their frequency of appearance, we will choose the one that appears the most frequently: here it is the pair composed of the letters "l" and "e". We note our first merging rule and we add the new token to our vocabulary. We can then apply this merging rule to our splits: you can see that we have merged all the pairs of tokens composed of the tokens "l" and "e".

And now we just have to reproduce the same steps with our new splits: we calculate the frequency of occurrence of each pair of tokens, we select the pair with the highest frequency, we note it in our merge rules, we add the new one to the vocabulary and then we merge all the pairs of tokens composed of the tokens "le" and "a" into our splits. And we can repeat this operation until we reach the desired vocabulary size.

Here we stopped when our vocabulary reached 21 tokens. We can see that the words of our corpus are now divided into far fewer tokens than at the beginning of the training. We can see that our algorithm has learned the radicals "hug" and "learn" and also the verbal ending "ing".

Now that we have learned our vocabulary and our merging rules, we can tokenize new texts. For example, if we want to tokenize the word hugs: first we'll divide it into elementary units so it becomes a sequence of characters. Then we'll go through our merge rules until we have one that we can apply. Here we can merge the letters h and u. And here we can merge 2 tokens to get the new token hug. When we get to the end of our merge rules, the tokenization is finished.

And that's it, I hope that now the BPE algorithm has no more secrets for you!
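To make the training loop described above concrete, here is a minimal sketch in plain Python. The toy corpus and the target vocabulary size are illustrative, not the exact ones used in the video, so the learned merges will differ slightly.

```python
from collections import Counter, defaultdict

# Toy corpus: word -> frequency, as produced by normalization + pre-tokenization.
word_freqs = Counter(
    ["hug", "hug", "hug", "hugs", "hugging", "hugger", "learn", "learning", "learner"]
)

# Start from the elementary units: the characters of each word.
splits = {word: list(word) for word in word_freqs}
vocab = sorted({c for word in word_freqs for c in word})
merges = []

def count_pairs(splits, word_freqs):
    """Count how often each pair of adjacent tokens appears, weighted by word frequency."""
    pair_counts = defaultdict(int)
    for word, freq in word_freqs.items():
        symbols = splits[word]
        for a, b in zip(symbols, symbols[1:]):
            pair_counts[(a, b)] += freq
    return pair_counts

target_vocab_size = 21
while len(vocab) < target_vocab_size:
    pair_counts = count_pairs(splits, word_freqs)
    if not pair_counts:
        break
    best = max(pair_counts, key=pair_counts.get)  # most frequent pair
    merges.append(best)
    vocab.append(best[0] + best[1])
    # Apply the new merge rule to every split.
    for symbols in splits.values():
        i = 0
        while i < len(symbols) - 1:
            if (symbols[i], symbols[i + 1]) == best:
                symbols[i : i + 2] = [symbols[i] + symbols[i + 1]]
            else:
                i += 1

print(merges)  # ordered merge rules, e.g. [('h', 'u'), ('hu', 'g'), ...]
print(vocab)   # final vocabulary

def tokenize(word, merges):
    """Tokenize a new word by replaying the merge rules in order."""
    symbols = list(word)
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i : i + 2] = [a + b]
            else:
                i += 1
    return symbols

print(tokenize("hugs", merges))  # the word is split using the learned subword units
```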
huggingface/course/blob/main/subtitles/en/raw/chapter6/06_bpe.md
Conclusion Congrats on finishing this unit! You’ve just trained your first ML-Agents and shared it to the Hub 🥳. The best way to learn is to **practice and try stuff**. Why not try another environment? [ML-Agents has 18 different environments](https://github.com/Unity-Technologies/ml-agents/blob/develop/docs/Learning-Environment-Examples.md). For instance: - [Worm](https://singularite.itch.io/worm), where you teach a worm to crawl. - [Walker](https://singularite.itch.io/walker), where you teach an agent to walk towards a goal. Check the documentation to find out how to train them and to see the list of already integrated MLAgents environments on the Hub: https://github.com/huggingface/ml-agents#getting-started <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/envs-unity.jpeg" alt="Example envs"/> In the next unit, we're going to learn about multi-agents. You're going to train your first multi-agents to compete in Soccer and Snowball fight against other classmate's agents. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit7/snowballfight.gif" alt="Snownball fight"/> Finally, we would love **to hear what you think of the course and how we can improve it**. If you have some feedback then please 👉 [fill this form](https://forms.gle/BzKXWzLAGZESGNaE9) ### Keep Learning, stay awesome 🤗
huggingface/deep-rl-class/blob/main/units/en/unit5/conclusion.mdx
Load image data Image datasets have [`Image`] type columns, which contain PIL objects. <Tip> To work with image datasets, you need to have the `vision` dependency installed. Check out the [installation](./installation#vision) guide to learn how to install it. </Tip> When you load an image dataset and call the image column, the images are decoded as PIL Images: ```py >>> from datasets import load_dataset, Image >>> dataset = load_dataset("beans", split="train") >>> dataset[0]["image"] ``` <Tip warning={true}> Index into an image dataset using the row index first and then the `image` column - `dataset[0]["image"]` - to avoid decoding and resampling all the image objects in the dataset. Otherwise, this can be a slow and time-consuming process if you have a large dataset. </Tip> For a guide on how to load any type of dataset, take a look at the <a class="underline decoration-sky-400 decoration-2 font-semibold" href="./loading">general loading guide</a>. ## Local files You can load a dataset from the image path. Use the [`~Dataset.cast_column`] function to accept a column of image file paths, and decode it into a PIL image with the [`Image`] feature: ```py >>> from datasets import Dataset, Image >>> dataset = Dataset.from_dict({"image": ["path/to/image_1", "path/to/image_2", ..., "path/to/image_n"]}).cast_column("image", Image()) >>> dataset[0]["image"] <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x215 at 0x15E6D7160>] ``` If you only want to load the underlying path to the image dataset without decoding the image object, set `decode=False` in the [`Image`] feature: ```py >>> dataset = load_dataset("beans", split="train").cast_column("image", Image(decode=False)) >>> dataset[0]["image"] {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/bean_rust/bean_rust_train.29.jpg'} ``` ## ImageFolder You can also load a dataset with an `ImageFolder` dataset builder which does not require writing a custom dataloader. This makes `ImageFolder` ideal for quickly creating and loading image datasets with several thousand images for different vision tasks. Your image dataset structure should look like this: ``` folder/train/dog/golden_retriever.png folder/train/dog/german_shepherd.png folder/train/dog/chihuahua.png folder/train/cat/maine_coon.png folder/train/cat/bengal.png folder/train/cat/birman.png ``` Load your dataset by specifying `imagefolder` and the directory of your dataset in `data_dir`: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="/path/to/folder") >>> dataset["train"][0] {"image": <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x215 at 0x15E6D7160>, "label": 0} >>> dataset["train"][-1] {"image": <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x215 at 0x15E8DAD30>, "label": 1} ``` Load remote datasets from their URLs with the `data_files` parameter: ```py >>> dataset = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", split="train") ``` Some datasets have a metadata file (`metadata.csv`/`metadata.jsonl`) associated with it, containing other information about the data like bounding boxes, text captions, and labels. The metadata is automatically loaded when you call [`load_dataset`] and specify `imagefolder`. 
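For example, if a `metadata.jsonl` with a `file_name` column and a `text` column sits next to the images (the column name and the caption shown below are purely illustrative), the extra columns are returned alongside the decoded image:

```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="/path/to/folder", split="train")
>>> dataset[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x15E6D7160>,
 'text': 'a photo of a bean leaf with rust spots'}
```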
To ignore the information in the metadata file, set `drop_labels=False` in [`load_dataset`], and allow `ImageFolder` to automatically infer the label name from the directory name: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="/path/to/folder", drop_labels=False) ``` <Tip> For more information about creating your own `ImageFolder` dataset, take a look at the [Create an image dataset](./image_dataset) guide. </Tip> ## WebDataset The [WebDataset](https://github.com/webdataset/webdataset) format is based on a folder of TAR archives and is suitable for big image datasets. Because of their size, WebDatasets are generally loaded in streaming mode (using `streaming=True`). You can load a WebDataset like this: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("webdataset", data_dir="/path/to/folder", streaming=True) ```
huggingface/datasets/blob/main/docs/source/image_load.mdx
EfficientNet (Knapsack Pruned) **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use \\( 2^N \\) times more computational resources, then we can simply increase the network depth by \\( \alpha ^ N \\), width by \\( \beta ^ N \\), and image size by \\( \gamma ^ N \\), where \\( \alpha, \beta, \gamma \\) are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient \\( \phi \\) to uniformly scales network width, depth, and resolution in a principled way. The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image. The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block). This collection consists of pruned EfficientNet models. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('efficientnet_b1_pruned', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `efficientnet_b1_pruned`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? 
You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('efficientnet_b1_pruned', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. ## Citation ```BibTeX @misc{tan2020efficientnet, title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks}, author={Mingxing Tan and Quoc V. Le}, year={2020}, eprint={1905.11946}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` ``` @misc{aflalo2020knapsack, title={Knapsack Pruning with Inner Distillation}, author={Yonathan Aflalo and Asaf Noy and Ming Lin and Itamar Friedman and Lihi Zelnik}, year={2020}, eprint={2002.08258}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- Type: model-index Collections: - Name: EfficientNet Pruned Paper: Title: Knapsack Pruning with Inner Distillation URL: https://paperswithcode.com/paper/knapsack-pruning-with-inner-distillation Models: - Name: efficientnet_b1_pruned In Collection: EfficientNet Pruned Metadata: FLOPs: 489653114 Parameters: 6330000 File Size: 25595162 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Data: - ImageNet ID: efficientnet_b1_pruned Crop Pct: '0.882' Image Size: '240' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1208 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45403/outputs/effnetb1_pruned_9ebb3fe6.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.25% Top 5 Accuracy: 93.84% - Name: efficientnet_b2_pruned In Collection: EfficientNet Pruned Metadata: FLOPs: 878133915 Parameters: 8310000 File Size: 33555005 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Data: - ImageNet ID: efficientnet_b2_pruned Crop Pct: '0.89' Image Size: '260' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1219 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45403/outputs/effnetb2_pruned_203f55bc.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.91% Top 5 Accuracy: 94.86% - Name: efficientnet_b3_pruned In Collection: EfficientNet Pruned Metadata: FLOPs: 1239590641 Parameters: 9860000 File Size: 39770812 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Data: - ImageNet ID: efficientnet_b3_pruned Crop Pct: '0.904' Image Size: '300' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1230 Weights: 
https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45403/outputs/effnetb3_pruned_5abcc29f.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.86% Top 5 Accuracy: 95.24% -->
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/efficientnet-pruned.mdx
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Latent Diffusion Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: *By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.* The original codebase can be found at [CompVis/latent-diffusion](https://github.com/CompVis/latent-diffusion). <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. </Tip> ## LDMTextToImagePipeline [[autodoc]] LDMTextToImagePipeline - all - __call__ ## LDMSuperResolutionPipeline [[autodoc]] LDMSuperResolutionPipeline - all - __call__ ## ImagePipelineOutput [[autodoc]] pipelines.ImagePipelineOutput
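As a quick usage sketch of the text-to-image pipeline documented on this page (the checkpoint id is an assumption; any compatible latent diffusion checkpoint can be substituted):

```python
import torch
from diffusers import DiffusionPipeline

# Load a latent diffusion text-to-image checkpoint (id assumed for illustration)
pipe = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

image = pipe("a painting of a squirrel eating a burger", num_inference_steps=50).images[0]
image.save("ldm_squirrel.png")
```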
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/latent_diffusion.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # I-BERT ## Overview The I-BERT model was proposed in [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It's a quantized version of RoBERTa running inference up to four times faster. The abstract from the paper is the following: *Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced.* This model was contributed by [kssteven](https://huggingface.co/kssteven). The original code can be found [here](https://github.com/kssteven418/I-BERT). ## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/masked_language_modeling) ## IBertConfig [[autodoc]] IBertConfig ## IBertModel [[autodoc]] IBertModel - forward ## IBertForMaskedLM [[autodoc]] IBertForMaskedLM - forward ## IBertForSequenceClassification [[autodoc]] IBertForSequenceClassification - forward ## IBertForMultipleChoice [[autodoc]] IBertForMultipleChoice - forward ## IBertForTokenClassification [[autodoc]] IBertForTokenClassification - forward ## IBertForQuestionAnswering [[autodoc]] IBertForQuestionAnswering - forward
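As a quick usage sketch (the checkpoint id is an assumption; any I-BERT checkpoint on the Hub follows the same pattern):

```python
from transformers import AutoTokenizer, IBertModel

model_name = "kssteven/ibert-roberta-base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = IBertModel.from_pretrained(model_name)

inputs = tokenizer("I-BERT runs inference with integer-only arithmetic.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```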
huggingface/transformers/blob/main/docs/source/en/model_doc/ibert.md
-- title: "Ethical Guidelines for developing the Diffusers library" thumbnail: /blog/assets/ethics-diffusers/thumbnail.png authors: - user: giadap --- # Ethical guidelines for developing the Diffusers library We are on a journey to make our libraries more responsible, one commit at a time! As part of the [Diffusers library documentation](https://huggingface.co/docs/diffusers/main/en/index), we are proud to announce the publication of an [ethical framework](https://huggingface.co/docs/diffusers/main/en/conceptual/ethical_guidelines). Given diffusion models' real case applications in the world and potential negative impacts on society, this initiative aims to guide the technical decisions of the Diffusers library maintainers about community contributions. We wish to be transparent in how we make decisions, and above all, we aim to clarify what values guide those decisions. We see ethics as a process that leverages guiding values, concrete actions, and continuous adaptation. For this reason, we are committed to adjusting our guidelines over time, following the evolution of the Diffusers project and the valuable feedback from the community that keeps it alive. # Ethical guidelines * **Transparency**: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. * **Consistency**: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. * **Simplicity**: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. * **Accessibility**: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. * **Reproducibility**: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. * **Responsibility**: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. # Safety features and mechanisms In addition, we provide a non-exhaustive - and hopefully continuously expanding! - list of safety features and mechanisms implemented by the Hugging Face team and the broader community. * **[Community tab](https://huggingface.co/docs/hub/repositories-pull-requests-discussions)**: it enables the community to discuss and better collaborate on a project. * **Tag feature**: authors of a repository can tag their content as being “Not For All Eyes” * **Bias exploration and evaluation**: the Hugging Face team provides a [Space](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer) to demonstrate the biases in Stable Diffusion and DALL-E interactively. In this sense, we support and encourage bias explorers and evaluations. * **Encouraging safety in deployment** * **[Safe Stable Diffusion](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion_safe)**: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://arxiv.org/abs/2211.05105). 
* **Staged released on the Hub**: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. * **Licensing**: [OpenRAILs](https://huggingface.co/blog/open_rail), a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use.
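As a hedged illustration of the Safe Stable Diffusion mechanism listed above, a minimal usage sketch might look like the following; the class name comes from the linked pipeline documentation, and the checkpoint id is an assumption:

```python
from diffusers import StableDiffusionPipelineSafe

# Checkpoint id and default safety settings are assumptions; see the pipeline docs linked above.
pipe = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
image = pipe(prompt="a portrait of an astronaut riding a horse").images[0]
image.save("safe_sd_example.png")
```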
huggingface/blog/blob/main/ethics-diffusers.md
Audit Logs <Tip warning={true}> This feature is part of the <a href="https://huggingface.co/enterprise" target="_blank">Enterprise Hub</a>. </Tip> Audit Logs enable organization admins to easily review actions taken by members, including organization membership, repository settings and billing changes. Audit Logs are accessible through your organization admin settings. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/storage-regions/audit-logs.png)
huggingface/hub-docs/blob/main/docs/hub/audit-logs.md
Gradio Components: The Key Concepts In this section, we discuss a few important concepts when it comes to components in Gradio. It's important to understand these concepts when developing your own component. Otherwise, your component may behave very different to other Gradio components! Tip: You can skip this section if you are familiar with the internals of the Gradio library, such as each component's preprocess and postprocess methods. ## Interactive vs Static Every component in Gradio comes in a `static` variant, and most come in an `interactive` version as well. The `static` version is used when a component is displaying a value, and the user can **NOT** change that value by interacting with it. The `interactive` version is used when the user is able to change the value by interacting with the Gradio UI. Let's see some examples: ```python import gradio as gr with gr.Blocks() as demo: gr.Textbox(value="Hello", interactive=True) gr.Textbox(value="Hello", interactive=False) demo.launch() ``` This will display two textboxes. The only difference: you'll be able to edit the value of the Gradio component on top, and you won't be able to edit the variant on the bottom (i.e. the textbox will be disabled). Perhaps a more interesting example is with the `Image` component: ```python import gradio as gr with gr.Blocks() as demo: gr.Image(interactive=True) gr.Image(interactive=False) demo.launch() ``` The interactive version of the component is much more complex -- you can upload images or snap a picture from your webcam -- while the static version can only be used to display images. Not every component has a distinct interactive version. For example, the `gr.AnnotatedImage` only appears as a static version since there's no way to interactively change the value of the annotations or the image. ### What you need to remember * Gradio will use the interactive version (if available) of a component if that component is used as the **input** to any event; otherwise, the static version will be used. * When you design custom components, you **must** accept the boolean interactive keyword in the constructor of your Python class. In the frontend, you **may** accept the `interactive` property, a `bool` which represents whether the component should be static or interactive. If you do not use this property in the frontend, the component will appear the same in interactive or static mode. ## The value and how it is preprocessed/postprocessed The most important attribute of a component is its `value`. Every component has a `value`. The value that is typically set by the user in the frontend (if the component is interactive) or displayed to the user (if it is static). It is also this value that is sent to the backend function when a user triggers an event, or returned by the user's function e.g. at the end of a prediction. So this value is passed around quite a bit, but sometimes the format of the value needs to change between the frontend and backend. Take a look at this example: ```python import numpy as np import gradio as gr def sepia(input_img): sepia_filter = np.array([ [0.393, 0.769, 0.189], [0.349, 0.686, 0.168], [0.272, 0.534, 0.131] ]) sepia_img = input_img.dot(sepia_filter.T) sepia_img /= sepia_img.max() return sepia_img demo = gr.Interface(sepia, gr.Image(shape=(200, 200)), "image") demo.launch() ``` This will create a Gradio app which has an `Image` component as the input and the output. 
In the frontend, the Image component will actually **upload** the file to the server and send the **filepath** but this is converted to a `numpy` array before it is sent to a user's function. Conversely, when the user returns a `numpy` array from their function, the numpy array is converted to a file so that it can be sent to the frontend and displayed by the `Image` component. Tip: By default, the `Image` component sends numpy arrays to the python function because it is a common choice for machine learning engineers, though the Image component also supports other formats using the `type` parameter. Read the `Image` docs [here](https://www.gradio.app/docs/image) to learn more. Each component does two conversions: 1. `preprocess`: Converts the `value` from the format sent by the frontend to the format expected by the python function. This usually involves going from a web-friendly **JSON** structure to a **python-native** data structure, like a `numpy` array or `PIL` image. The `Audio`, `Image` components are good examples of `preprocess` methods. 2. `postprocess`: Converts the value returned by the python function to the format expected by the frontend. This usually involves going from a **python-native** data-structure, like a `PIL` image to a **JSON** structure. ### What you need to remember * Every component must implement `preprocess` and `postprocess` methods. In the rare event that no conversion needs to happen, simply return the value as-is. `Textbox` and `Number` are examples of this. * As a component author, **YOU** control the format of the data displayed in the frontend as well as the format of the data someone using your component will receive. Think of an ergonomic data-structure a **python** developer will find intuitive, and control the conversion from a **Web-friendly JSON** data structure (and vice-versa) with `preprocess` and `postprocess.` ## The "Example Version" of a Component Gradio apps support providing example inputs -- and these are very useful in helping users get started using your Gradio app. In `gr.Interface`, you can provide examples using the `examples` keyword, and in `Blocks`, you can provide examples using the special `gr.Examples` component. At the bottom of this screenshot, we show a miniature example image of a cheetah that, when clicked, will populate the same image in the input Image component: ![img](https://user-images.githubusercontent.com/1778297/277548211-a3cb2133-2ffc-4cdf-9a83-3e8363b57ea6.png) To enable the example view, you must have the following two files in the top of the `frontend` directory: * `Example.svelte`: this corresponds to the "example version" of your component * `Index.svelte`: this corresponds to the "regular version" In the backend, you typically don't need to do anything unless you would like to modify the user-provided `value` of the examples to something else before it is sent to the frontend. You can do this in the `as_example` method of the component. The `Example.svelte` and `as_example` methods will be covered in greater depth in the dedicated [frontend](./frontend) and [backend](./backend) guides. ### What you need to remember * If you expect your component to be used as input, it is important to define an "Example" view. * If you don't, Gradio will use a default one but it won't be as informative as it can be! ## Conclusion Now that you know the most important pieces to remember about Gradio components, you can start to design and build your own!
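To recap the value-conversion contract in code, here is a toy sketch that overrides both hooks by subclassing the built-in `Image` component. It is purely illustrative and assumes the parent component's `preprocess` returns a numpy array (the default `type`); a real custom component would be scaffolded as described in the backend guide.

```python
import numpy as np
from PIL import Image
import gradio as gr

class GrayscaleImage(gr.Image):
    """Toy component: hands the Python function a 2-D grayscale array instead of RGB."""

    def preprocess(self, payload):
        # Frontend -> Python: let the parent turn the uploaded file into a numpy array,
        # then convert it to grayscale before it reaches the user's function.
        img = super().preprocess(payload)
        if img is None:
            return None
        return np.asarray(Image.fromarray(img).convert("L"))

    def postprocess(self, value):
        # Python -> frontend: reuse the parent's conversion back to a web-friendly payload.
        return super().postprocess(value)
```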
gradio-app/gradio/blob/main/guides/05_custom-components/02_key-component-concepts.md
---
title: "Introducing BERTopic Integration with the Hugging Face Hub"
thumbnail: /blog/assets/145_bertopic/logo.png
authors:
- user: MaartenGr
  guest: true
- user: davanstrien
---

# Introducing BERTopic Integration with the Hugging Face Hub

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg 'open in colab')](https://colab.research.google.com/#fileId=https://huggingface.co/spaces/davanstrien/blog_notebooks/blob/main/BERTopic_hub_starter.ipynb)

We are thrilled to announce a significant update to the [BERTopic](https://maartengr.github.io/BERTopic) Python library, expanding its capabilities and further streamlining the workflow for topic modelling enthusiasts and practitioners. BERTopic now supports pushing and pulling trained topic models directly to and from the Hugging Face Hub. This new integration opens up exciting possibilities for leveraging the power of BERTopic in production use cases with ease.

## What is Topic Modelling?

Topic modelling is a method that can help uncover hidden themes or "topics" within a group of documents. By analyzing the words in the documents, we can find patterns and connections that reveal these underlying topics. For example, a document about machine learning is more likely to use words like "gradient" and "embedding" compared to a document about baking bread.

Each document usually covers multiple topics in different proportions. By examining the word statistics, we can identify clusters of related words that represent these topics. This allows us to analyze a set of documents and determine the topics they discuss, as well as the balance of topics within each document. More recently, new approaches to topic modelling have moved beyond using words to using richer representations such as those offered through Transformer based models.

## What is BERTopic?

BERTopic is a state-of-the-art Python library that simplifies the topic modelling process using various embedding techniques and [c-TF-IDF](https://maartengr.github.io/BERTopic/api/ctfidf.html) to create dense clusters, allowing for easily interpretable topics whilst keeping important words in the topic descriptions.

<figure class="image table text-center m-0 w-full">
  <video
    alt="BERTopic overview"
    style="max-width: 70%; margin: auto;"
    autoplay loop autobuffer muted playsinline
  >
    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/2d1113254a370972470d42e122df150f3551cc07/blog/BERTopic/bertopic_overview.mp4" type="video/mp4">
  </video>
</figure>

*An overview of the BERTopic library*

Whilst BERTopic is easy to get started with, it supports a range of advanced approaches to topic modelling including [guided](https://maartengr.github.io/BERTopic/getting_started/guided/guided.html), [supervised](https://maartengr.github.io/BERTopic/getting_started/supervised/supervised.html), [semi-supervised](https://maartengr.github.io/BERTopic/getting_started/semisupervised/semisupervised.html) and [manual](https://maartengr.github.io/BERTopic/getting_started/manual/manual.html) topic modelling. More recently, BERTopic has added support for multi-modal topic models. BERTopic also has a rich set of tools for producing visualizations.

BERTopic provides a powerful tool for users to uncover significant topics within text collections, thereby gaining valuable insights.
With BERTopic, users can analyze customer reviews, explore research papers, or categorize news articles with ease, making it an essential tool for anyone looking to extract meaningful information from their text data. ## BERTopic Model Management with the Hugging Face Hub With the latest integration, BERTopic users can seamlessly push and pull their trained topic models to and from the Hugging Face Hub. This integration marks a significant milestone in simplifying the deployment and management of BERTopic models across different environments. The process of training and pushing a BERTopic model to the Hub can be done in a few lines ```python from bertopic import BERTopic topic_model = BERTopic("english") topics, probs = topic_model.fit_transform(docs) topic_model.push_to_hf_hub('davanstrien/transformers_issues_topics') ``` You can then load this model in two lines and use it to predict against new data. ```python from bertopic import BERTopic topic_model = BERTopic.load("davanstrien/transformers_issues_topics") ``` By leveraging the power of the Hugging Face Hub, BERTopic users can effortlessly share, version, and collaborate on their topic models. The Hub acts as a central repository, allowing users to store and organize their models, making it easier to deploy models in production, share them with colleagues, or even showcase them to the broader NLP community. You can use the `libraries` filter on the hub to find BERTopic models. ![BERTopic hub filter](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/BERTopic/bertopic-lib-filter.png) Once you have found a BERTopic model you are interested in you can use the Hub inference widget to try out the model and see if it might be a good fit for your use case. Once you have a trained topic model, you can push it to the Hugging Face Hub in one line. Pushing your model to the Hub will automatically create an initial model card for your model, including an overview of the topics created. Below you can see an example of the topics resulting from a [model trained on ArXiv data](https://huggingface.co/MaartenGr/BERTopic_ArXiv). 
<details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | language - models - model - data - based | 20 | -1_language_models_model_data | | 0 | dialogue - dialog - response - responses - intent | 14247 | 0_dialogue_dialog_response_responses | | 1 | speech - asr - speech recognition - recognition - end | 1833 | 1_speech_asr_speech recognition_recognition | | 2 | tuning - tasks - prompt - models - language | 1369 | 2_tuning_tasks_prompt_models | | 3 | summarization - summaries - summary - abstractive - document | 1109 | 3_summarization_summaries_summary_abstractive | | 4 | question - answer - qa - answering - question answering | 893 | 4_question_answer_qa_answering | | 5 | sentiment - sentiment analysis - aspect - analysis - opinion | 837 | 5_sentiment_sentiment analysis_aspect_analysis | | 6 | clinical - medical - biomedical - notes - patient | 691 | 6_clinical_medical_biomedical_notes | | 7 | translation - nmt - machine translation - neural machine - neural machine translation | 586 | 7_translation_nmt_machine translation_neural machine | | 8 | generation - text generation - text - language generation - nlg | 558 | 8_generation_text generation_text_language generation | | 9 | hate - hate speech - offensive - speech - detection | 484 | 9_hate_hate speech_offensive_speech | | 10 | news - fake - fake news - stance - fact | 455 | 10_news_fake_fake news_stance | | 11 | relation - relation extraction - extraction - relations - entity | 450 | 11_relation_relation extraction_extraction_relations | | 12 | ner - named - named entity - entity - named entity recognition | 376 | 12_ner_named_named entity_entity | | 13 | parsing - parser - dependency - treebank - parsers | 370 | 13_parsing_parser_dependency_treebank | | 14 | event - temporal - events - event extraction - extraction | 314 | 14_event_temporal_events_event extraction | | 15 | emotion - emotions - multimodal - emotion recognition - emotional | 300 | 15_emotion_emotions_multimodal_emotion recognition | | 16 | word - embeddings - word embeddings - embedding - words | 292 | 16_word_embeddings_word embeddings_embedding | | 17 | explanations - explanation - rationales - rationale - interpretability | 212 | 17_explanations_explanation_rationales_rationale | | 18 | morphological - arabic - morphology - languages - inflection | 204 | 18_morphological_arabic_morphology_languages | | 19 | topic - topics - topic models - lda - topic modeling | 200 | 19_topic_topics_topic models_lda | | 20 | bias - gender - biases - gender bias - debiasing | 195 | 20_bias_gender_biases_gender bias | | 21 | law - frequency - zipf - words - length | 185 | 21_law_frequency_zipf_words | | 22 | legal - court - law - legal domain - case | 182 | 22_legal_court_law_legal domain | | 23 | adversarial - attacks - attack - adversarial examples - robustness | 181 | 23_adversarial_attacks_attack_adversarial examples | | 24 | commonsense - commonsense knowledge - reasoning - knowledge - commonsense reasoning | 180 | 24_commonsense_commonsense knowledge_reasoning_knowledge | | 25 | quantum - semantics - calculus - compositional - meaning | 171 | 25_quantum_semantics_calculus_compositional | | 26 | correction - error - error correction - grammatical - grammatical error | 161 | 26_correction_error_error correction_grammatical | | 27 | argument - arguments - argumentation - argumentative - mining | 160 | 27_argument_arguments_argumentation_argumentative | | 
28 | sarcasm - humor - sarcastic - detection - humorous | 157 | 28_sarcasm_humor_sarcastic_detection | | 29 | coreference - resolution - coreference resolution - mentions - mention | 156 | 29_coreference_resolution_coreference resolution_mentions | | 30 | sense - word sense - wsd - word - disambiguation | 153 | 30_sense_word sense_wsd_word | | 31 | knowledge - knowledge graph - graph - link prediction - entities | 149 | 31_knowledge_knowledge graph_graph_link prediction | | 32 | parsing - semantic parsing - amr - semantic - parser | 146 | 32_parsing_semantic parsing_amr_semantic | | 33 | cross lingual - lingual - cross - transfer - languages | 146 | 33_cross lingual_lingual_cross_transfer | | 34 | mt - translation - qe - quality - machine translation | 139 | 34_mt_translation_qe_quality | | 35 | sql - text sql - queries - spider - schema | 138 | 35_sql_text sql_queries_spider | | 36 | classification - text classification - label - text - labels | 136 | 36_classification_text classification_label_text | | 37 | style - style transfer - transfer - text style - text style transfer | 136 | 37_style_style transfer_transfer_text style | | 38 | question - question generation - questions - answer - generation | 129 | 38_question_question generation_questions_answer | | 39 | authorship - authorship attribution - attribution - author - authors | 127 | 39_authorship_authorship attribution_attribution_author | | 40 | sentence - sentence embeddings - similarity - sts - sentence embedding | 123 | 40_sentence_sentence embeddings_similarity_sts | | 41 | code - identification - switching - cs - code switching | 121 | 41_code_identification_switching_cs | | 42 | story - stories - story generation - generation - storytelling | 118 | 42_story_stories_story generation_generation | | 43 | discourse - discourse relation - discourse relations - rst - discourse parsing | 117 | 43_discourse_discourse relation_discourse relations_rst | | 44 | code - programming - source code - code generation - programming languages | 117 | 44_code_programming_source code_code generation | | 45 | paraphrase - paraphrases - paraphrase generation - paraphrasing - generation | 114 | 45_paraphrase_paraphrases_paraphrase generation_paraphrasing | | 46 | agent - games - environment - instructions - agents | 111 | 46_agent_games_environment_instructions | | 47 | covid - covid 19 - 19 - tweets - pandemic | 108 | 47_covid_covid 19_19_tweets | | 48 | linking - entity linking - entity - el - entities | 107 | 48_linking_entity linking_entity_el | | 49 | poetry - poems - lyrics - poem - music | 103 | 49_poetry_poems_lyrics_poem | | 50 | image - captioning - captions - visual - caption | 100 | 50_image_captioning_captions_visual | | 51 | nli - entailment - inference - natural language inference - language inference | 96 | 51_nli_entailment_inference_natural language inference | | 52 | keyphrase - keyphrases - extraction - document - phrases | 95 | 52_keyphrase_keyphrases_extraction_document | | 53 | simplification - text simplification - ts - sentence - simplified | 95 | 53_simplification_text simplification_ts_sentence | | 54 | empathetic - emotion - emotional - empathy - emotions | 95 | 54_empathetic_emotion_emotional_empathy | | 55 | depression - mental - health - mental health - social media | 93 | 55_depression_mental_health_mental health | | 56 | segmentation - word segmentation - chinese - chinese word segmentation - chinese word | 93 | 56_segmentation_word segmentation_chinese_chinese word segmentation | | 57 | citation - scientific - 
papers - citations - scholarly | 85 | 57_citation_scientific_papers_citations | | 58 | agreement - syntactic - verb - grammatical - subject verb | 85 | 58_agreement_syntactic_verb_grammatical | | 59 | metaphor - literal - figurative - metaphors - idiomatic | 83 | 59_metaphor_literal_figurative_metaphors | | 60 | srl - semantic role - role labeling - semantic role labeling - role | 82 | 60_srl_semantic role_role labeling_semantic role labeling | | 61 | privacy - private - federated - privacy preserving - federated learning | 82 | 61_privacy_private_federated_privacy preserving | | 62 | change - semantic change - time - semantic - lexical semantic | 82 | 62_change_semantic change_time_semantic | | 63 | bilingual - lingual - cross lingual - cross - embeddings | 80 | 63_bilingual_lingual_cross lingual_cross | | 64 | political - media - news - bias - articles | 77 | 64_political_media_news_bias | | 65 | medical - qa - question - questions - clinical | 75 | 65_medical_qa_question_questions | | 66 | math - mathematical - math word - word problems - problems | 73 | 66_math_mathematical_math word_word problems | | 67 | financial - stock - market - price - news | 69 | 67_financial_stock_market_price | | 68 | table - tables - tabular - reasoning - qa | 69 | 68_table_tables_tabular_reasoning | | 69 | readability - complexity - assessment - features - reading | 65 | 69_readability_complexity_assessment_features | | 70 | layout - document - documents - document understanding - extraction | 64 | 70_layout_document_documents_document understanding | | 71 | brain - cognitive - reading - syntactic - language | 62 | 71_brain_cognitive_reading_syntactic | | 72 | sign - gloss - language - signed - language translation | 61 | 72_sign_gloss_language_signed | | 73 | vqa - visual - visual question - visual question answering - question | 59 | 73_vqa_visual_visual question_visual question answering | | 74 | biased - biases - spurious - nlp - debiasing | 57 | 74_biased_biases_spurious_nlp | | 75 | visual - dialogue - multimodal - image - dialog | 55 | 75_visual_dialogue_multimodal_image | | 76 | translation - machine translation - machine - smt - statistical | 54 | 76_translation_machine translation_machine_smt | | 77 | multimodal - visual - image - translation - machine translation | 52 | 77_multimodal_visual_image_translation | | 78 | geographic - location - geolocation - geo - locations | 51 | 78_geographic_location_geolocation_geo | | 79 | reasoning - prompting - llms - chain thought - chain | 48 | 79_reasoning_prompting_llms_chain thought | | 80 | essay - scoring - aes - essay scoring - essays | 45 | 80_essay_scoring_aes_essay scoring | | 81 | crisis - disaster - traffic - tweets - disasters | 45 | 81_crisis_disaster_traffic_tweets | | 82 | graph - text classification - text - gcn - classification | 44 | 82_graph_text classification_text_gcn | | 83 | annotation - tools - linguistic - resources - xml | 43 | 83_annotation_tools_linguistic_resources | | 84 | entity alignment - alignment - kgs - entity - ea | 43 | 84_entity alignment_alignment_kgs_entity | | 85 | personality - traits - personality traits - evaluative - text | 42 | 85_personality_traits_personality traits_evaluative | | 86 | ad - alzheimer - alzheimer disease - disease - speech | 40 | 86_ad_alzheimer_alzheimer disease_disease | | 87 | taxonomy - hypernymy - taxonomies - hypernym - hypernyms | 39 | 87_taxonomy_hypernymy_taxonomies_hypernym | | 88 | active learning - active - al - learning - uncertainty | 37 | 88_active learning_active_al_learning | | 
89 | reviews - summaries - summarization - review - opinion | 36 | 89_reviews_summaries_summarization_review | | 90 | emoji - emojis - sentiment - message - anonymous | 35 | 90_emoji_emojis_sentiment_message | | 91 | table - table text - tables - table text generation - text generation | 35 | 91_table_table text_tables_table text generation | | 92 | domain - domain adaptation - adaptation - domains - source | 35 | 92_domain_domain adaptation_adaptation_domains | | 93 | alignment - word alignment - parallel - pairs - alignments | 34 | 93_alignment_word alignment_parallel_pairs | | 94 | indo - languages - indo european - names - family | 34 | 94_indo_languages_indo european_names | | 95 | patent - claim - claim generation - chemical - technical | 32 | 95_patent_claim_claim generation_chemical | | 96 | agents - emergent - communication - referential - games | 32 | 96_agents_emergent_communication_referential | | 97 | graph - amr - graph text - graphs - text generation | 31 | 97_graph_amr_graph text_graphs | | 98 | moral - ethical - norms - values - social | 29 | 98_moral_ethical_norms_values | | 99 | acronym - acronyms - abbreviations - abbreviation - disambiguation | 27 | 99_acronym_acronyms_abbreviations_abbreviation | | 100 | typing - entity typing - entity - type - types | 27 | 100_typing_entity typing_entity_type | | 101 | coherence - discourse - discourse coherence - coherence modeling - text | 26 | 101_coherence_discourse_discourse coherence_coherence modeling | | 102 | pos - taggers - tagging - tagger - pos tagging | 25 | 102_pos_taggers_tagging_tagger | | 103 | drug - social - social media - media - health | 25 | 103_drug_social_social media_media | | 104 | gender - translation - bias - gender bias - mt | 24 | 104_gender_translation_bias_gender bias | | 105 | job - resume - skills - skill - soft | 21 | 105_job_resume_skills_skill | </details> Due to the improved saving procedure, training on large datasets generates small model sizes. In the example below, a BERTopic model was trained on 100,000 documents, resulting in a ~50MB model keeping all of the original’s model functionality. For inference, the model can be further reduced to only ~3MB! ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/2d1113254a370972470d42e122df150f3551cc07/blog/BERTopic/serialization.png) The benefits of this integration are particularly notable for production use cases. Users can now effortlessly deploy BERTopic models into their existing applications or systems, ensuring seamless integration within their data pipelines. This streamlined workflow enables faster iteration and efficient model updates and ensures consistency across different environments. ### safetensors: Ensuring Secure Model Management In addition to the Hugging Face Hub integration, BERTopic now supports serialization using the [safetensors library](https://huggingface.co/docs/safetensors/). Safetensors is a new simple format for storing tensors safely (instead of pickle), which is still fast (zero-copy). We’re excited to see more and more libraries leveraging safetensors for safe serialization. You can read more about a recent audit of the library in this [blog post](https://huggingface.co/blog/safetensors-security-audit). ### An example of using BERTopic to explore RLHF datasets To illustrate some of the power of BERTopic let's look at an example of how it can be used to monitor changes in topics in datasets used to train chat models. 
The last year has seen several datasets for Reinforcement Learning with Human Feedback released. One of these datasets is the [OpenAssistant Conversations dataset](https://huggingface.co/datasets/OpenAssistant/oasst1). This dataset was produced via a worldwide crowd-sourcing effort involving over 13,500 volunteers. Whilst this dataset already has some scores for toxicity, quality, humour etc., we may want to get a better understanding of what types of conversations are represented in this dataset. BERTopic offers one way of getting a better understanding of the topics in this dataset. In this case, we train a model on the English assistant responses part of the datasets. Resulting in a [topic model](https://huggingface.co/davanstrien/chat_topics) with 75 topics. BERTopic gives us various ways of visualizing a dataset. We can see the top 8 topics and their associated words below. We can see that the second most frequent topic consists mainly of ‘response words’, which we often see frequently from chat models, i.e. responses which aim to be ‘polite’ and ‘helpful’. We can also see a large number of topics related to programming or computing topics as well as physics, recipes and pets. ![Words associated with top 8 topics](https://huggingface.co/datasets/huggingface/documentation-images/resolve/2d1113254a370972470d42e122df150f3551cc07/blog/BERTopic/topic_word_scores.png) [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) is another dataset that can be used to train an RLHF model. The approach taken to creating this dataset was quite different from the OpenAssistant Conversations dataset since it was created by employees of Databricks instead of being crowd sourced via volunteers. Perhaps we can use our trained BERTopic model to compare the topics across these two datasets? The new BERTopic Hub integrations mean we can load this trained model and apply it to new examples. ```python topic_model = BERTopic.load("davanstrien/chat_topics") ``` We can predict on a single example text: ```python example = "Stalemate is a drawn position. It doesn't matter who has captured more pieces or is in a winning position" topic, prob = topic_model.transform(example) ``` We can get more information about the predicted topic ```python topic_model.get_topic_info(topic) ``` | | Count | Name | Representation | |---:|--------:|:--------------------------------------|:----------------------------------------------------------------------------------------------------| | 0 | 240 | 22_chess_chessboard_practice_strategy | ['chess', 'chessboard', 'practice', 'strategy', 'learn', 'pawn', 'board', 'pawns', 'play', 'decks'] | We can see here the topics predicted seem to make sense. We may want to extend this to compare the topics predicted for the whole dataset. ```python from datasets import load_dataset dataset = load_dataset("databricks/databricks-dolly-15k") dolly_docs = dataset['train']['response'] dolly_topics, dolly_probs = topic_model.transform(dolly_docs) ``` We can then compare the distribution of topics across both datasets. We can see here that there seems to be a broader distribution across topics in the dolly dataset according to our BERTopic model. This might be a result of the different approaches to creating both datasets (we likely want to retrain a BERTopic across both datasets to ensure we are not missing topics to confirm this). 
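As a rough sketch of how that comparison could be computed (an approximation: it assumes the `Topic`, `Count` and `Name` columns of `get_topic_info` match the table shown above, and uses the model's stored topic sizes as a stand-in for the original chat-dataset distribution):

```python
import pandas as pd

# Share of each topic among the Dolly responses we just predicted
dolly_dist = pd.Series(dolly_topics).value_counts(normalize=True)

# Approximate the chat-dataset distribution from the topic sizes stored in the model
chat_info = topic_model.get_topic_info().set_index("Topic")
chat_dist = chat_info["Count"] / chat_info["Count"].sum()

# Side-by-side comparison, labelled with the human-readable topic names
comparison = pd.DataFrame({"chat": chat_dist, "dolly": dolly_dist}).fillna(0)
comparison["name"] = chat_info["Name"]
print(comparison.sort_values("dolly", ascending=False).head(10))
```

The figure below shows the same kind of comparison visually.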
![Topic distribution comparison](https://huggingface.co/datasets/huggingface/documentation-images/resolve/2d1113254a370972470d42e122df150f3551cc07/blog/BERTopic/distribution.png)

*Comparison of the distribution of topics between the two datasets*

We can potentially use topic models in a production setting to monitor whether topics drift too far from an expected distribution. This can serve as a signal that there has been drift between your original training data and the types of conversations you are seeing in production. You may also decide to use topic modelling as you are collecting training data, to ensure you are getting examples for topics you particularly care about.

## Get Started with BERTopic and Hugging Face Hub

You can visit the official documentation for a [quick start guide](https://maartengr.github.io/BERTopic/getting_started/quickstart/quickstart.html) to get help using BERTopic.

You can find a starter Colab notebook [here](https://colab.research.google.com/#fileId=https%3A//huggingface.co/spaces/davanstrien/blog_notebooks/blob/main/BERTopic_hub_starter.ipynb) that shows how you can train a BERTopic model and push it to the Hub.

Some examples of BERTopic models already on the Hub:

- [MaartenGr/BERTopic_ArXiv](https://huggingface.co/MaartenGr/BERTopic_ArXiv): a model trained on ~30,000 ArXiv Computation and Language articles (cs.CL) after 1991.
- [MaartenGr/BERTopic_Wikipedia](https://huggingface.co/MaartenGr/BERTopic_Wikipedia): a model trained on 1,000,000 English Wikipedia pages.
- [davanstrien/imdb_bertopic](https://huggingface.co/davanstrien/imdb_bertopic): a model trained on the unsupervised split of the IMDB dataset.

You can find a full overview of BERTopic models on the Hub using the [libraries filter](https://huggingface.co/models?library=bertopic&sort=downloads).

We invite you to explore the possibilities of this new integration and share your trained models on the Hub!
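To get a feel for that workflow end to end, here is a minimal sketch. The IMDB subset, the hyperparameters and the repository name are placeholders to adapt to your own use case, and you need to be logged in to the Hub with a token that has write access:

```python
from bertopic import BERTopic
from datasets import load_dataset

# Placeholder corpus: a small slice of the unsupervised IMDB split keeps the run short
docs = load_dataset("imdb", split="unsupervised")["text"][:5000]

# Fit a topic model with the default embedding model; tune min_topic_size for your data
topic_model = BERTopic(language="english", min_topic_size=20)
topics, probs = topic_model.fit_transform(docs)

# Push the trained model to the Hub under a repository you own (placeholder name)
topic_model.push_to_hf_hub("your-username/my-first-bertopic-model")
```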
huggingface/blog/blob/main/bertopic.md
!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Vision Transformer (ViT) ## Overview The Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. The abstract from the paper is the following: *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg" alt="drawing" width="600"/> <small> ViT architecture. Taken from the <a href="https://arxiv.org/abs/2010.11929">original paper.</a> </small> Following the original Vision Transformer, some follow-up works have been made: - [DeiT](deit) (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers. The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [`ViTModel`] or [`ViTForImageClassification`]. There are 4 variants available (in 3 different sizes): *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and *facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to prepare images for the model. - [BEiT](beit) (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE. - DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. 
Vision Transformers trained using the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting objects, without having ever been trained to do so. DINO checkpoints can be found on the [hub](https://huggingface.co/models?other=dino). - [MAE](vit_mae) (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion (75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms supervised pre-training after fine-tuning. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code (written in JAX) can be found [here](https://github.com/google-research/vision_transformer). Note that we converted the weights from Ross Wightman's [timm library](https://github.com/rwightman/pytorch-image-models), who already converted the weights from JAX to PyTorch. Credits go to him! ## Usage tips - To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image, which can be used for classification. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. - As the Vision Transformer expects each image to be of the same size (resolution), one can use [`ViTImageProcessor`] to resize (or rescale) and normalize images for the model. - Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of each checkpoint. For example, `google/vit-base-patch16-224` refers to a base-sized architecture with patch resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the [hub](https://huggingface.co/models?search=vit). - The available checkpoints are either (1) pre-trained on [ImageNet-21k](http://www.image-net.org/) (a collection of 14 million images and 21k classes) only, or (2) also fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to use a higher resolution than pre-training [(Touvron et al., 2019)](https://arxiv.org/abs/1906.06423), [(Kolesnikov et al., 2020)](https://arxiv.org/abs/1912.11370). In order to fine-tune at higher resolution, the authors perform 2D interpolation of the pre-trained position embeddings, according to their location in the original image. - The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed an experiment with a self-supervised pre-training objective, namely masked patched prediction (inspired by masked language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant improvement of 2% to training from scratch, but still 4% behind supervised pre-training. ## Resources Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer). A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT. 
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. `ViTForImageClassification` is supported by: <PipelineTag pipeline="image-classification"/> - A blog post on how to [Fine-Tune ViT for Image Classification with Hugging Face Transformers](https://huggingface.co/blog/fine-tune-vit) - A blog post on [Image Classification with Hugging Face Transformers and `Keras`](https://www.philschmid.de/image-classification-huggingface-transformers-keras) - A notebook on [Fine-tuning for Image Classification with Hugging Face Transformers](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) - A notebook on how to [Fine-tune the Vision Transformer on CIFAR-10 with the Hugging Face Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) - A notebook on how to [Fine-tune the Vision Transformer on CIFAR-10 with PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) ⚗️ Optimization - A blog post on how to [Accelerate Vision Transformer (ViT) with Quantization using Optimum](https://www.philschmid.de/optimizing-vision-transformer) ⚡️ Inference - A notebook on [Quick demo: Vision Transformer (ViT) by Google Brain](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Quick_demo_of_HuggingFace_version_of_Vision_Transformer_inference.ipynb) 🚀 Deploy - A blog post on [Deploying Tensorflow Vision Models in Hugging Face with TF Serving](https://huggingface.co/blog/tf-serving-vision) - A blog post on [Deploying Hugging Face ViT on Vertex AI](https://huggingface.co/blog/deploy-vertex-ai) - A blog post on [Deploying Hugging Face ViT on Kubernetes with TF Serving](https://huggingface.co/blog/deploy-tfserving-kubernetes) ## ViTConfig [[autodoc]] ViTConfig ## ViTFeatureExtractor [[autodoc]] ViTFeatureExtractor - __call__ ## ViTImageProcessor [[autodoc]] ViTImageProcessor - preprocess <frameworkcontent> <pt> ## ViTModel [[autodoc]] ViTModel - forward ## ViTForMaskedImageModeling [[autodoc]] ViTForMaskedImageModeling - forward ## ViTForImageClassification [[autodoc]] ViTForImageClassification - forward </pt> <tf> ## TFViTModel [[autodoc]] TFViTModel - call ## TFViTForImageClassification [[autodoc]] TFViTForImageClassification - call </tf> <jax> ## FlaxVitModel [[autodoc]] FlaxViTModel - __call__ ## FlaxViTForImageClassification [[autodoc]] FlaxViTForImageClassification - __call__ </jax> </frameworkcontent>
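As a quick, self-contained reference, a short inference sketch using the checkpoint naming convention from the usage tips above might look like this (the COCO image URL is only an example; any RGB image works):

```python
import requests
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

# Example image; any RGB image works
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Base-sized ViT with 16x16 patches, fine-tuned on ImageNet at 224x224 resolution
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# The checkpoint was fine-tuned on ImageNet-1k, so id2label maps to its 1,000 classes
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```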
huggingface/transformers/blob/main/docs/source/en/model_doc/vit.md
ome bugs in your code are very straightforward. You try running it, you get a syntax error somewhere, Python tells you exactly where, and you fix it. This is great - it's simple and satisfying. Sometimes, though, things crash and the error is impossible to understand. This happens a lot in machine learning for a few reasons - you're working with big data structures, using big, complex libraries with a lot of moving parts, and also you're doing a lot of GPU computing. In Keras there's the added bonus problem that your models are often compiled before execution, which is great for performance but makes debugging them very difficult. This is going to be a video about what to do when you run into one of those nightmare bugs. To give you some intuitions for what can go wrong, and where to look for the source of bugs that you encounter, let's use this example script, and I'll show it to you here in two parts. First, we do all our imports, we load a dataset, we create our tokenizer and we tokenize the dataset. Next, we convert our datasets to TensorFlow datasets, so that we can run fit() on them, and then we load our model from a pretrained checkpoint, compile it and fit it. It seems straightforward enough, but beware! This spooky code hides many dark and mysterious secrets. What happens when we run it? Well, this isn't great. What does that mean? We tried to train on our data, but we got no gradient? This is pretty perplexing - how do we even begin to debug something like that? When the error you get doesn't immediately suggest where the problem is, the best solution is often to walk through things in sequence, making sure at each stage that things look right. And of course, the place to start is always to check your data. The best way to do that to grab a batch from the tf.data.Dataset that your model is training on, right at the end of the training pipeline. And we can do that like so, by looping over the dataset for one iteration and then breaking. So what do we get when we inspect that batch? We see that we're not getting any gradient because we're not passing labels to Keras! Our labels are in the batch, but they're a key in the input dictionary, not a separate label. This is one of the most common issues you'll encounter when training Transformers models with TensorFlow. Our models can all compute loss internally, but to use that loss for training the labels need to be passed in the input dictionary, where the model can see them. This internal loss is the loss that we use when we don't specify a loss value to compile(). Keras, on the other hand, usually expects labels to be passed separately from the input dictionary and not to be visible to the model, and loss computations will usually fail if you don't do that. We need to choose one or the other: Either we use the model's internal loss and keep the labels where they are, or we keep using Keras losses, but we move the labels to the place Keras expects them. For simplicity, let's use the model internal losses, by removing the loss argument from the call to compile(). So what happens if we try training with model.fit() after fixing the loss function! Well, it runs this time... but now we get a loss of nan. This isn't good. NaN is not a good loss. In fact, if we inspect our model now, we'll see that not only are all the outputs nan , all the weights are nan too. Once a single nan creeps into your computations, it tends to spread, because it propagates from the loss back through your gradient and then into the weight updates. 
So nan destroyed our model. But where did it creep in first? To find out, we need to re-initialize the model and look at the outputs for just the first batch. And when we do that, we see that nan first appears in the loss, but only in some samples! You can see this in more detail in the accompanying section of the course notes, but we find that if we look at the labels, the samples with a loss of nan all have a label of 2! This gives us a very strong clue - if we check the model, with model.config.num_labels, we see the model thinks there's only 2 labels, but if we see a value of 2, that means there's at least 3 labels, because 0 is a label too! So we got a loss of nan because we got an "impossible" label. To fix that, we need to go back and set the model to have the right number of labels. We can set num_labels=3 when we initialize the model with from_pretrained. So now we think our data is good and our model is good, so training should work. And if we try running model.fit(), we get... hmm. The loss goes down, but it's not very quick. And if we keep running it out, we'll find that it stalls at a fairly high value. What's going on? Well, when things are mostly working, but training is just slow, that can often be a good time to look at your optimizer and training hyperparameters. And this is where I want to mention one of the most common sources of issues when you're working with Keras - you can name things like optimizers with strings, but if you do that, all of the options get silently set to their default values. So we specified our optimizer as Adam, but in the process we invisibly got the default learning rate, which is 1e-3, or ten to the power of minus 3. This is way too high for training transformer models! We should go back and specify the learning rate directly - good values are between 1e-5 and 1e-4. Let's split the difference and pick 5e-5. And if you recompile with that, you'll find that training actually works, at last. Again, I went through this quite quickly, and I recommend checking out the course notes for this to see this in more detail and to experiment with the code yourself. Good luck, and remember to take breaks if your code is giving you a hard time!
huggingface/course/blob/main/subtitles/en/raw/chapter8/04_debug-tf.md
!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Speech2Text2 ## Overview The Speech2Text2 model is used together with [Wav2Vec2](wav2vec2) for Speech Translation models proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. Speech2Text2 is a *decoder-only* transformer model that can be used with any speech *encoder-only*, such as [Wav2Vec2](wav2vec2) or [HuBERT](hubert) for Speech-to-Text tasks. Please refer to the [SpeechEncoderDecoder](speech-encoder-decoder) class on how to combine Speech2Text2 with any speech *encoder-only* model. This model was contributed by [Patrick von Platen](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/pytorch/fairseq/blob/1f7ef9ed1e1061f8c7f88f8b94c7186834398690/fairseq/models/wav2vec/wav2vec2_asr.py#L266). ## Usage tips - Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see the [official models](https://huggingface.co/models?other=speech2text2) . - Speech2Text2 is always used within the [SpeechEncoderDecoder](speech-encoder-decoder) framework. - Speech2Text2's tokenizer is based on [fastBPE](https://github.com/glample/fastBPE). ## Inference Speech2Text2's [`SpeechEncoderDecoderModel`] model accepts raw waveform input values from speech and makes use of [`~generation.GenerationMixin.generate`] to translate the input speech autoregressively to the target language. The [`Wav2Vec2FeatureExtractor`] class is responsible for preprocessing the input speech and [`Speech2Text2Tokenizer`] decodes the generated target tokens to the target string. The [`Speech2Text2Processor`] wraps [`Wav2Vec2FeatureExtractor`] and [`Speech2Text2Tokenizer`] into a single instance to both extract the input features and decode the predicted token ids. - Step-by-step Speech Translation ```python >>> import torch >>> from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel >>> from datasets import load_dataset >>> import soundfile as sf >>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de") >>> processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de") >>> def map_to_array(batch): ... speech, _ = sf.read(batch["file"]) ... batch["speech"] = speech ... 
return batch >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> ds = ds.map(map_to_array) >>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt") >>> generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"]) >>> transcription = processor.batch_decode(generated_ids) ``` - Speech Translation via Pipelines The automatic speech recognition pipeline can also be used to translate speech in just a couple lines of code ```python >>> from datasets import load_dataset >>> from transformers import pipeline >>> librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> asr = pipeline( ... "automatic-speech-recognition", ... model="facebook/s2t-wav2vec2-large-en-de", ... feature_extractor="facebook/s2t-wav2vec2-large-en-de", ... ) >>> translation_de = asr(librispeech_en[0]["file"]) ``` See [model hub](https://huggingface.co/models?filter=speech2text2) to look for Speech2Text2 checkpoints. ## Resources - [Causal language modeling task guide](../tasks/language_modeling) ## Speech2Text2Config [[autodoc]] Speech2Text2Config ## Speech2TextTokenizer [[autodoc]] Speech2Text2Tokenizer - batch_decode - decode - save_vocabulary ## Speech2Text2Processor [[autodoc]] Speech2Text2Processor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## Speech2Text2ForCausalLM [[autodoc]] Speech2Text2ForCausalLM - forward
huggingface/transformers/blob/main/docs/source/en/model_doc/speech_to_text_2.md
-- title: "OpenRAIL: Towards open and responsible AI licensing frameworks" thumbnail: /blog/assets/100_open_rail/100_open-rail.png authors: - user: CarlosMF --- # OpenRAIL: Towards open and responsible AI licensing frameworks Open & Responsible AI licenses ("OpenRAIL") are AI-specific licenses enabling open access, use and distribution of AI artifacts while requiring a responsible use of the latter. OpenRAIL licenses could be for open and responsible ML what current open software licenses are to code and Creative Commons to general content: **a widespread community licensing tool.** Advances in machine learning and other AI-related areas have flourished these past years partly thanks to the ubiquity of the open source culture in the Information and Communication Technologies (ICT) sector, which has permeated into ML research and development dynamics. Notwithstanding the benefits of openness as a core value for innovation in the field, (not so already) recent events related to the ethical and socio-economic concerns of development and use of machine learning models have spread a clear message: Openness is not enough. Closed systems are not the answer though, as the problem persists under the opacity of firms' private AI development processes. ## **Open source licenses do not fit all** Access, development and use of ML models is highly influenced by open source licensing schemes. For instance, ML developers might colloquially refer to "open sourcing a model" when they make its weights available by attaching an official open source license, or any other open software or content license such as Creative Commons. This begs the question: why do they do it? Are ML artifacts and source code really that similar? Do they share enough from a technical perspective that private governance mechanisms (e.g. open source licenses) designed for source code should also govern the development and use of ML models? Most current model developers seem to think so, as the majority of openly released models have an open source license (e.g., Apache 2.0). See for instance the Hugging Face [Model Hub](https://huggingface.co/models?license=license:apache-2.0&sort=downloads) and [Muñoz Ferrandis & Duque Lizarralde (2022)](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4018413). However, empirical evidence is also telling us that a rigid approach to open sourcing [and/or](https://www.gnu.org/philosophy/open-source-misses-the-point.en.html) Free Software dynamics and an axiomatic belief in Freedom 0 for the release of ML artifacts is creating socio-ethical distortions in the use of ML models (see [Widder et al. (2022)](https://davidwidder.me/files/widder-ossdeepfakes-facct22.pdf)). In simpler terms, open source licenses do not take the technical nature and capabilities of the model as a different artifact to software/source code into account, and are therefore ill-adapted to enabling a more responsible use of ML models (e.g. criteria 6 of the [Open Source Definition](https://opensource.org/osd)), see also [Widder et al. (2022)](https://davidwidder.me/files/widder-ossdeepfakes-facct22.pdf); [Moran (2021)](https://www.google.com/url?q=https://thegradient.pub/machine-learning-ethics-and-open-source-licensing-2/&sa=D&source=docs&ust=1655402923069398&usg=AOvVaw3yTXEfpRQOJ99w04v5GAEd); [Contractor et al. (2020)](https://facctconference.org/static/pdfs_2022/facct22-63.pdf). 
If specific ad hoc practices devoted to documentation, transparency and ethical usage of ML models are already present and improving each day (e.g., model cards, evaluation benchmarks), why shouldn't open licensing practices also be adapted to the specific capabilities and challenges stemming from ML models? Same concerns are rising in commercial and government ML licensing practices. In the words of [Bowe & Martin (2022)](https://www.gmu.edu/news/2022-04/no-10-implementing-responsible-ai-proposed-framework-data-licensing): "_Babak Siavoshy, general counsel at Anduril Industries, asked what type of license terms should apply to an AI algorithm privately developed for computer-vision object detection and adapt it for military targeting or threat-evaluation? Neither commercial software licenses nor standard DFARS data rights clauses adequately answer this question as neither appropriately protects the developer's interest or enable the government to gain the insight into the system to deploy it responsibly_". If indeed ML models and software/source code are different artifacts, why is the former released under open source licenses? The answer is easy, open source licenses have become the de facto standard in software-related markets for the open sharing of code among software communities. This "open source" approach to collaborative software development has permeated and influenced AI development and licensing practices and has brought huge benefits. Both open source and Open & Responsible AI licenses ("OpenRAIL") might well be complementary initiatives. **Why don't we design a set of licensing mechanisms inspired by movements such as open source and led by an evidence-based approach from the ML field?** In fact, there is a new set of licensing frameworks which are going to be the vehicle towards open and responsible ML development, use and access: Open & Responsible AI Licenses ([OpenRAIL](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses)). ## **A change of licensing paradigm: OpenRAIL** The OpenRAIL [approach](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses) taken by the [RAIL Initiative](https://www.licenses.ai/) and supported by Hugging Face is informed and inspired by initiatives such as BigScience, Open Source, and Creative Commons. The 2 main features of an OpenRAIL license are: - **Open:** these licenses allow royalty free access and flexible downstream use and re-distribution of the licensed material, and distribution of any derivatives of it. - **Responsible:** OpenRAIL licenses embed a specific set of restrictions for the use of the licensed AI artifact in identified critical scenarios. Use-based restrictions are informed by an evidence-based approach to ML development and use limitations which forces to draw a line between promoting wide access and use of ML against potential social costs stemming from harmful uses of the openly licensed AI artifact. Therefore, while benefiting from an open access to the ML model, the user will not be able to use the model for the specified restricted scenarios. The integration of use-based restrictions clauses into open AI licenses brings up the ability to better control the use of AI artifacts and the capacity of enforcement to the licensor of the ML model, standing up for a responsible use of the released AI artifact, in case a misuse of the model is identified. 
If behavioral-use restrictions were not present in open AI licenses, how would licensors even begin to think about responsible use-related legal tools when openly releasing their AI artifacts? OpenRAILs and RAILs are the first step towards enabling ethics-informed behavioral restrictions. And even before thinking about enforcement, use-based restriction clauses might act as a deterrent for potential users to misuse the model (i.e., dissuasive effect). However, the mere presence of use-based restrictions might not be enough to ensure that potential misuses of the released AI artifact won't happen. This is why OpenRAILs require downstream adoption of the use-based restrictions by subsequent re-distribution and derivatives of the AI artifact, as a means to dissuade users of derivatives of the AI artifact from misusing the latter. The effect of copyleft-style behavioral-use clauses spreads the requirement from the original licensor on his/her wish and trust on the responsible use of the licensed artifact. Moreover, widespread adoption of behavioral-use clauses gives subsequent distributors of derivative versions of the licensed artifact the ability for a better control of the use of it. From a social perspective, OpenRAILs are a vehicle towards the consolidation of an informed and respectful culture of sharing AI artifacts acknowledging their limitations and the values held by the licensors of the model. ## **OpenRAIL could be for good machine learning what open software licensing is to code** Three examples of OpenRAIL licenses are the recently released [BigScience OpenRAIL-M](https://www.licenses.ai/blog/2022/8/26/bigscience-open-rail-m-license), StableDiffusion's [CreativeML OpenRAIL-M](https://huggingface.co/spaces/CompVis/stable-diffusion-license), and the genesis of the former two: [BigSicence BLOOM RAIL v1.0](https://huggingface.co/spaces/bigscience/license) (see post and FAQ [here](https://bigscience.huggingface.co/blog/the-bigscience-rail-license)). The latter was specifically designed to promote open and responsible access and use of BigScience's 176B parameter model named BLOOM (and related checkpoints). The license plays at the intersection between openness and responsible AI by proposing a permissive set of licensing terms coped with a use-based restrictions clause wherein a limited number of restricted uses is set based on the evidence on the potential that Large Language Models (LLMs) have, as well as their inherent risks and scrutinized limitations. The OpenRAIL approach taken by the RAIL Initiative is a consequence of the BigScience BLOOM RAIL v1.0 being the first of its kind in parallel with the release of other more restricted models with behavioral-use clauses, such as [OPT-175](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md) or [SEER](https://github.com/facebookresearch/vissl/blob/main/projects/SEER/MODEL_LICENSE.md), being also made available. The licenses are BigScience's reaction to 2 partially addressed challenges in the licensing space: (i) the "Model" being a different thing to "code"; (ii) the responsible use of the Model. BigScience made that extra step by really focusing the license on the specific case scenario and BigScience's community goals. In fact, the solution proposed is kind of a new one in the AI space: BigScience designed the license in a way that makes the responsible use of the Model widespread (i.e. 
promotion of responsible use), because any re-distribution or derivatives of the Model will have to comply with the specific use-based restrictions while being able to propose other licensing terms when it comes to the rest of the license. OpenRAIL also aligns with the ongoing regulatory trend proposing sectoral specific regulations for the deployment, use and commercialization of AI systems. With the advent of AI regulations (e.g., [EU AI Act](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206); Canada's [proposal](https://iapp.org/news/a/canada-introduces-new-federal-privacy-and-ai-legislation/) of an AI & Data Act), new open licensing paradigms informed by AI regulatory trends and ethical concerns have the potential of being massively adopted in the coming years. Open sourcing a model without taking due account of its impact, use, and documentation could be a source of concern in light of new AI regulatory trends. Henceforth, OpenRAILs should be conceived as instruments articulating with ongoing AI regulatory trends and part of a broader system of AI governance tools, and not as the only solution enabling open and responsible use of AI. Open licensing is one of the cornerstones of AI innovation. Licenses as social and legal institutions should be well taken care of. They should not be conceived as burdensome legal technical mechanisms, but rather as a communication instrument among AI communities bringing stakeholders together by sharing common messages on how the licensed artifact can be used. Let's invest in a healthy open and responsible AI licensing culture, the future of AI innovation and impact depends on it, on all of us, on you. Author: Carlos Muñoz Ferrandis Blog acknowledgments: Yacine Jernite, Giada Pistilli, Irene Solaiman, Clementine Fourrier, Clément Délange
huggingface/blog/blob/main/open_rail.md
-- title: Using LoRA for Efficient Stable Diffusion Fine-Tuning thumbnail: /blog/assets/lora/thumbnail.png authors: - user: pcuenq - user: sayakpaul --- # Using LoRA for Efficient Stable Diffusion Fine-Tuning [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) is a novel technique introduced by Microsoft researchers to deal with the problem of fine-tuning large-language models. Powerful models with billions of parameters, such as GPT-3, are prohibitively expensive to fine-tune in order to adapt them to particular tasks or domains. LoRA proposes to freeze pre-trained model weights and inject trainable layers (_rank-decomposition matrices_) in each transformer block. This greatly reduces the number of trainable parameters and GPU memory requirements since gradients don't need to be computed for most model weights. The researchers found that by focusing on the Transformer attention blocks of large-language models, fine-tuning quality with LoRA was on par with full model fine-tuning while being much faster and requiring less compute. ## LoRA for Diffusers 🧨 Even though LoRA was initially proposed for large-language models and demonstrated on transformer blocks, the technique can also be applied elsewhere. In the case of Stable Diffusion fine-tuning, LoRA can be applied to the cross-attention layers that relate the image representations with the prompts that describe them. The details of the following figure (taken from the [Stable Diffusion paper](https://arxiv.org/abs/2112.10752)) are not important, just note that the yellow blocks are the ones in charge of building the relationship between image and text representations. ![Latent Diffusion Architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lora-assets/latent-diffusion.png) To the best of our knowledge, Simo Ryu ([`@cloneofsimo`](https://github.com/cloneofsimo)) was the first one to come up with a LoRA implementation adapted to Stable Diffusion. Please, do take a look at [their GitHub project](https://github.com/cloneofsimo/lora) to see examples and lots of interesting discussions and insights. In order to inject LoRA trainable matrices as deep in the model as in the cross-attention layers, people used to need to hack the source code of [diffusers](https://github.com/huggingface/diffusers) in imaginative (but fragile) ways. If Stable Diffusion has shown us one thing, it is that the community always comes up with ways to bend and adapt the models for creative purposes, and we love that! Providing the flexibility to manipulate the cross-attention layers could be beneficial for many other reasons, such as making it easier to adopt optimization techniques such as [xFormers](https://github.com/facebookresearch/xformers). Other creative projects such as [Prompt-to-Prompt](https://arxiv.org/abs/2208.01626) could do with some easy way to access those layers, so we decided to [provide a general way for users to do it](https://github.com/huggingface/diffusers/pull/1639). We've been testing that _pull request_ since late December, and it officially launched with our [diffusers release yesterday](https://github.com/huggingface/diffusers/releases/tag/v0.12.0). We've been working with [`@cloneofsimo`](https://github.com/cloneofsimo) to provide LoRA training support in diffusers, for both Dreambooth and full fine-tuning methods! These techniques provide the following benefits: - Training is much faster, as already discussed. - Compute requirements are lower. 
We could create a full fine-tuned model in a 2080 Ti with 11 GB of VRAM! - **Trained weights are much, much smaller**. Because the original model is frozen and we inject new layers to be trained, we can save the weights for the new layers as a single file that weighs in at ~3 MB in size. This is about _one thousand times smaller_ than the original size of the UNet model! We are particularly excited about the last point. In order for users to share their awesome fine-tuned or _dreamboothed_ models, they had to share a full copy of the final model. Other users that want to try them out have to download the fine-tuned weights in their favorite UI, adding up to combined massive storage and download costs. As of today, there are about [1,000 Dreambooth models registered in the Dreambooth Concepts Library](https://huggingface.co/sd-dreambooth-library), and probably many more not registered in the library. With LoRA, it is now possible to publish [a single 3.29 MB file](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/pytorch_lora_weights.bin) to allow others to use your fine-tuned model. _(h/t to [`@mishig25`](https://github.com/mishig25), the first person I heard use **dreamboothing** as a verb in a normal conversation)._ ## LoRA fine-tuning Full model fine-tuning of Stable Diffusion used to be slow and difficult, and that's part of the reason why lighter-weight methods such as Dreambooth or Textual Inversion have become so popular. With LoRA, it is much easier to fine-tune a model on a custom dataset. Diffusers now provides a [LoRA fine-tuning script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) that can run in as low as 11 GB of GPU RAM without resorting to tricks such as 8-bit optimizers. This is how you'd use it to fine-tune a model using [Lambda Labs Pokémon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions): ```bash export MODEL_NAME="runwayml/stable-diffusion-v1-5" export OUTPUT_DIR="/sddata/finetune/lora/pokemon" export HUB_MODEL_ID="pokemon-lora" export DATASET_NAME="lambdalabs/pokemon-blip-captions" accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --dataset_name=$DATASET_NAME \ --dataloader_num_workers=8 \ --resolution=512 --center_crop --random_flip \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --max_train_steps=15000 \ --learning_rate=1e-04 \ --max_grad_norm=1 \ --lr_scheduler="cosine" --lr_warmup_steps=0 \ --output_dir=${OUTPUT_DIR} \ --push_to_hub \ --hub_model_id=${HUB_MODEL_ID} \ --report_to=wandb \ --checkpointing_steps=500 \ --validation_prompt="Totoro" \ --seed=1337 ``` One thing of notice is that the learning rate is `1e-4`, much larger than the usual learning rates for regular fine-tuning (in the order of `~1e-6`, typically). This is a [W&B dashboard](https://wandb.ai/pcuenq/text2image-fine-tune/runs/b4k1w0tn?workspace=user-pcuenq) of the previous run, which took about 5 hours in a 2080 Ti GPU (11 GB of RAM). I did not attempt to optimize the hyperparameters, so feel free to try it out yourself! [Sayak](https://huggingface.co/sayakpaul) did another run on a T4 (16 GB of RAM), here's [his final model](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4), and here's [a demo Space that uses it](https://huggingface.co/spaces/pcuenq/lora-pokemon). 
![Sample outputs from Sayak's LoRA model](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/lora-assets/sayak-pokemon-collage.png) For additional details on LoRA support in diffusers, please refer to [our documentation](https://huggingface.co/docs/diffusers/main/en/training/lora) – it will be always kept up to date with the implementation. ## Inference As we've discussed, one of the major advantages of LoRA is that you get excellent results by training orders of magnitude less weights than the original model size. We designed an inference process that allows loading the additional weights on top of the unmodified Stable Diffusion model weights. Let's see how it works. First, we'll use the Hub API to automatically determine what was the base model that was used to fine-tune a LoRA model. Starting from [Sayak's model](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4), we can use this code: ```Python from huggingface_hub import model_info # LoRA weights ~3 MB model_path = "sayakpaul/sd-model-finetuned-lora-t4" info = model_info(model_path) model_base = info.cardData["base_model"] print(model_base) # CompVis/stable-diffusion-v1-4 ``` This snippet will print the model he used for fine-tuning, which is `CompVis/stable-diffusion-v1-4`. In my case, I trained my model starting from version 1.5 of Stable Diffusion, so if you run the same code with [my LoRA model](https://huggingface.co/pcuenq/pokemon-lora) you'll see that the output is `runwayml/stable-diffusion-v1-5`. The information about the base model is automatically populated by the fine-tuning script we saw in the previous section, if you use the `--push_to_hub` option. This is recorded as a metadata tag in the `README` file of the model's repo, as you can see [here](https://huggingface.co/pcuenq/pokemon-lora/blob/main/README.md). After we determine the base model we used to fine-tune with LoRA, we load a normal Stable Diffusion pipeline. We'll customize it with the `DPMSolverMultistepScheduler` for very fast inference: ```Python import torch from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) ``` **And here's where the magic comes**. We load the LoRA weights from the Hub _on top of the regular model weights_, move the pipeline to the cuda device and run inference: ```Python pipe.unet.load_attn_procs(model_path) pipe.to("cuda") image = pipe("Green pokemon with menacing face", num_inference_steps=25).images[0] image.save("green_pokemon.png") ``` ## Dreamboothing with LoRA Dreambooth allows you to "teach" new concepts to a Stable Diffusion model. LoRA is compatible with Dreambooth and the process is similar to fine-tuning, with a couple of advantages: - Training is faster. - We only need a few images of the subject we want to train (5 or 10 are usually enough). - We can tweak the text encoder, if we want, for additional fidelity to the subject. To train Dreambooth with LoRA you need to use [this diffusers script](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py). 
Please, take a look at [the README](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#training-with-low-rank-adaptation-of-large-language-models-lora), [the documentation](https://huggingface.co/docs/diffusers/main/en/training/lora) and [our hyperparameter exploration blog post](https://huggingface.co/blog/dreambooth) for details. For a quick, cheap and easy way to train your Dreambooth models with LoRA, please [check this Space](https://huggingface.co/spaces/lora-library/LoRA-DreamBooth-Training-UI) by [`hysts`](https://twitter.com/hysts12321). You need to duplicate it and assign a GPU so it runs fast. This process will save you from having to set up your own training environment and you'll be able to train your models in minutes! ## Other Methods The quest for easy fine-tuning is not new. In addition to Dreambooth, [_textual inversion_](https://huggingface.co/docs/diffusers/main/en/training/text_inversion) is another popular method that attempts to teach new concepts to a trained Stable Diffusion Model. One of the main reasons for using Textual Inversion is that trained weights are also small and easy to share. However, they only work for a single subject (or a small handful of them), whereas LoRA can be used for general-purpose fine-tuning, meaning that it can be adapted to new domains or datasets. [Pivotal Tuning](https://arxiv.org/abs/2106.05744) is a method that tries to combine Textual Inversion with LoRA. First, you teach the model a new concept using Textual Inversion techniques, obtaining a new token embedding to represent it. Then, you train that token embedding using LoRA to get the best of both worlds. We haven't explored Pivotal Tuning with LoRA yet. Who's up for the challenge? 🤗
huggingface/blog/blob/main/lora.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> [[open-in-colab]] # Load LoRAs for inference There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 [PEFT](https://huggingface.co/docs/peft/index) integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you'll learn how to use different adapters with [Stable Diffusion XL (SDXL)](../api/pipelines/stable_diffusion/stable_diffusion_xl) for inference. Throughout this guide, you'll use LoRA as the main adapter technique, so we'll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don't, we welcome you to check out the [LoRA guide](https://huggingface.co/docs/peft/conceptual_guides/lora). Let's first install all the required libraries. ```bash !pip install -q transformers accelerate !pip install peft !pip install diffusers ``` Now, let's load a pipeline with a SDXL checkpoint: ```python from diffusers import DiffusionPipeline import torch pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") ``` Next, load a LoRA checkpoint with the [`~diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] method. With the 🤗 PEFT integration, you can assign a specific `adapter_name` to the checkpoint, which let's you easily switch between different LoRA checkpoints. Let's call this adapter `"toy"`. ```python pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") ``` And then perform inference: ```python prompt = "toy_face of a hacker with a hoodie" lora_scale= 0.9 image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` ![toy-face](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_8_1.png) With the `adapter_name` parameter, it is really easy to use another adapter for inference! Load the [nerijs/pixel-art-xl](https://huggingface.co/nerijs/pixel-art-xl) adapter that has been fine-tuned to generate pixel art images, and let's call it `"pixel"`. The pipeline automatically sets the first loaded adapter (`"toy"`) as the active adapter. 
But you can activate the `"pixel"` adapter with the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method as shown below: ```python pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipe.set_adapters("pixel") ``` Let's now generate an image with the second adapter and check the result: ```python prompt = "a hacker with a hoodie, pixel art" image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` ![pixel-art](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_12_1.png) ## Combine multiple adapters You can also perform multi-adapter inference where you combine different adapter checkpoints for inference. Once again, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined. ```python pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) ``` Now that we have set these two adapters, let's generate an image from the combined adapters! <Tip> LoRA checkpoints in the diffusion community are almost always obtained with [DreamBooth](https://huggingface.co/docs/diffusers/main/en/training/dreambooth). DreamBooth training often relies on "trigger" words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it's important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. </Tip> The trigger words for [CiroN2022/toy-face](https://hf.co/CiroN2022/toy-face) and [nerijs/pixel-art-xl](https://hf.co/nerijs/pixel-art-xl) are found in their repositories. ```python # Notice how the prompt is constructed. prompt = "toy_face of a hacker with a hoodie, pixel art" image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) ).images[0] image ``` ![toy-face-pixel-art](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_16_1.png) Impressive! As you can see, the model was able to generate an image that mixes the characteristics of both adapters. If you want to go back to using only one adapter, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate the `"toy"` adapter: ```python # First, set the adapter. pipe.set_adapters("toy") # Then, run inference. prompt = "toy_face of a hacker with a hoodie" lora_scale= 0.9 image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` ![toy-face-again](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_18_1.png) If you want to switch to only the base model, disable all LoRAs with the [`~diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora`] method. 
```python pipe.disable_lora() prompt = "toy_face of a hacker with a hoodie" lora_scale= 0.9 image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![no-lora](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_20_1.png) ## Monitoring active adapters You have attached multiple adapters in this tutorial, and if you're feeling a bit lost on what adapters have been attached to the pipeline's components, you can easily check the list of active adapters using the [`~diffusers.loaders.LoraLoaderMixin.get_active_adapters`] method: ```py active_adapters = pipe.get_active_adapters() active_adapters ["toy", "pixel"] ``` You can also get the active adapters of each pipeline component with [`~diffusers.loaders.LoraLoaderMixin.get_list_adapters`]: ```py list_adapters_component_wise = pipe.get_list_adapters() list_adapters_component_wise {"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} ``` ## Fusing adapters into the model You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the [`~diffusers.loaders.LoraLoaderMixin.fuse_lora`] method, which can lead to a speed-up in inference and lower VRAM usage. ```py pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) # Fuses the LoRAs into the Unet pipe.fuse_lora() prompt = "toy_face of a hacker with a hoodie, pixel art" image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] # Gets the Unet back to the original state pipe.unfuse_lora() ```
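If you want to verify the claimed speed-up from fusing on your own hardware, a simple timing comparison could look like the sketch below. The `time_inference` helper is hypothetical and just wraps the pipeline calls already shown in this guide; only `pipe`, `fuse_lora`, and `unfuse_lora` come from the tutorial itself.

```python
import time
import torch

def time_inference(pipe, prompt, n=3):
    # Average wall-clock time over a few runs with a fixed seed
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n):
        pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0))
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n

prompt = "toy_face of a hacker with a hoodie, pixel art"
unfused_time = time_inference(pipe, prompt)

pipe.fuse_lora()
fused_time = time_inference(pipe, prompt)
pipe.unfuse_lora()

print(f"unfused: {unfused_time:.2f}s per image, fused: {fused_time:.2f}s per image")
```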
huggingface/diffusers/blob/main/docs/source/en/tutorials/using_peft_for_inference.md
# Metric Card for Recall

## Metric Description

Recall is the fraction of the positive examples that were correctly labeled by the model as positive. It can be computed with the equation:
Recall = TP / (TP + FN)
Where TP is the number of true positives and FN is the number of false negatives.

## How to Use

At minimum, this metric takes as input two `list`s, each containing `int`s: predictions and references.

```python
>>> recall_metric = datasets.load_metric('recall')
>>> results = recall_metric.compute(references=[0, 1], predictions=[0, 1])
>>> print(results)
{'recall': 1.0}
```

### Inputs
- **predictions** (`list` of `int`): The predicted labels.
- **references** (`list` of `int`): The ground truth labels.
- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `binary`, and their order when `average` is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `y_true` and `y_pred` are used in sorted order. Defaults to None.
- **pos_label** (`int`): The class label to use as the 'positive class' when calculating the recall. Defaults to `1`.
- **average** (`string`): This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
    - `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the target labels and predictions are binary.
    - `'micro'`: Calculate metrics globally by counting the total true positives, false negatives, and false positives.
    - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
    - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between precision and recall.
    - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
- **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
- **zero_division** (`'warn'`, `0` or `1`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
    - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
    - `0`: If there is a zero division, the return value is `0`.
    - `1`: If there is a zero division, the return value is `1`.

### Output Values
- **recall** (`float`, or `array` of `float`, for multiclass targets): Either the general recall score, or the recall scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher recall means that more of the positive examples have been labeled correctly. Therefore, a higher recall is generally considered better.

Output Example(s):
```python
{'recall': 1.0}
```
```python
{'recall': array([1., 0., 0.])}
```

This metric outputs a dictionary with one entry, `'recall'`.
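To see how the formula above maps onto the metric's output, here is a small hand computation that reproduces Example 1 from the Examples section below; the lists are taken directly from that example.

```python
# Hand-check of Recall = TP / (TP + FN) on the data from Example 1 below
references  = [0, 0, 1, 1, 1]
predictions = [0, 1, 0, 1, 1]

tp = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 1)  # 2
fn = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 0)  # 1
print(tp / (tp + fn))  # 0.666..., matching {'recall': 0.6666666666666666}
```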
#### Values from Popular Papers ### Examples Example 1-A simple example with some errors ```python >>> recall_metric = datasets.load_metric('recall') >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1]) >>> print(results) {'recall': 0.6666666666666666} ``` Example 2-The same example as Example 1, but with `pos_label=0` instead of the default `pos_label=1`. ```python >>> recall_metric = datasets.load_metric('recall') >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], pos_label=0) >>> print(results) {'recall': 0.5} ``` Example 3-The same example as Example 1, but with `sample_weight` included. ```python >>> recall_metric = datasets.load_metric('recall') >>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8] >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight) >>> print(results) {'recall': 0.55} ``` Example 4-A multiclass example, using different averages. ```python >>> recall_metric = datasets.load_metric('recall') >>> predictions = [0, 2, 1, 0, 0, 1] >>> references = [0, 1, 2, 0, 1, 2] >>> results = recall_metric.compute(predictions=predictions, references=references, average='macro') >>> print(results) {'recall': 0.3333333333333333} >>> results = recall_metric.compute(predictions=predictions, references=references, average='micro') >>> print(results) {'recall': 0.3333333333333333} >>> results = recall_metric.compute(predictions=predictions, references=references, average='weighted') >>> print(results) {'recall': 0.3333333333333333} >>> results = recall_metric.compute(predictions=predictions, references=references, average=None) >>> print(results) {'recall': array([1., 0., 0.])} ``` ## Limitations and Bias ## Citation(s) ```bibtex @article{scikit-learn, title={Scikit-learn: Machine Learning in {P}ython}, author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, journal={Journal of Machine Learning Research}, volume={12}, pages={2825--2830}, year={2011} ``` ## Further References
huggingface/datasets/blob/main/metrics/recall/README.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Padding and truncation

Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special **padding token** to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences.

In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: `padding`, `truncation` and `max_length`.

The `padding` argument controls padding. It can be a boolean or a string:

- `True` or `'longest'`: pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence).
- `'max_length'`: pad to a length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). Padding will still be applied if you only provide a single sequence.
- `False` or `'do_not_pad'`: no padding is applied. This is the default behavior.

The `truncation` argument controls truncation. It can be a boolean or a string:

- `True` or `'longest_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached.
- `'only_second'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- `'only_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- `False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior.

The `max_length` argument controls the length of the padding and truncation. It can be an integer or `None`, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to `max_length` is deactivated.

The following table summarizes the recommended way to set up padding and truncation.
If you use pairs of input sequences in any of the following examples, you can replace `truncation=True` by a `STRATEGY` selected in `['only_first', 'only_second', 'longest_first']`, i.e. `truncation='only_second'` or `truncation='longest_first'` to control how both sequences in the pair are truncated as detailed before. | Truncation | Padding | Instruction | |--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------| | no truncation | no padding | `tokenizer(batch_sentences)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True)` or | | | | `tokenizer(batch_sentences, padding='longest')` | | | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')` | | | padding to specific length | `tokenizer(batch_sentences, padding='max_length', max_length=42)` | | | padding to a multiple of a value | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)` | | truncation to max model input length | no padding | `tokenizer(batch_sentences, truncation=True)` or | | | | `tokenizer(batch_sentences, truncation=STRATEGY)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True)` or | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)` | | | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)` | | | padding to specific length | Not possible | | truncation to specific length | no padding | `tokenizer(batch_sentences, truncation=True, max_length=42)` or | | | | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)` | | | padding to max sequence in batch | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or | | | | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)` | | | padding to max model input length | Not possible | | | padding to specific length | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or | | | | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)` |
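As a quick, hands-on illustration of a few of the rows in the table above, here is a minimal sketch; the checkpoint name is chosen arbitrarily and any tokenizer would behave the same way.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch_sentences = [
    "Hello, how are you?",
    "A much longer second sentence that would eventually need to be truncated.",
]

# Pad to the longest sequence in the batch, truncate to the model's maximum input length
encoded = tokenizer(batch_sentences, padding=True, truncation=True)
print([len(ids) for ids in encoded["input_ids"]])

# Pad and truncate everything to a fixed length of 16 tokens
encoded = tokenizer(batch_sentences, padding="max_length", truncation=True, max_length=16)
print([len(ids) for ids in encoded["input_ids"]])  # [16, 16]
```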
huggingface/transformers/blob/main/docs/source/en/pad_truncation.md
# Real Time Speech Recognition

Tags: ASR, SPEECH, STREAMING

## Introduction

Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).

Using `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.

This tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak.

### Prerequisites

Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build our demo with the Transformers library (for this, `pip install transformers` and `pip install torch`).

Make sure you have it installed so that you can follow along with the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.

Here's how to build a real time speech recognition (ASR) app:

1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)
2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)
3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)

## 1. Set up the Transformers ASR Model

First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will start by using the pretrained `whisper` model. Here is the code to load `whisper` from Hugging Face `transformers`.

```python
from transformers import pipeline

p = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
```

That's it!

## 2. Create a Full-Context ASR Demo with Transformers

We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.

We will use `gradio`'s built in `Audio` component, configured to take input from the user's microphone and return a filepath for the recorded audio. The output component will be a plain `Textbox`.

$code_asr
$demo_asr

The `transcribe` function takes a single parameter, `audio`, which is a numpy array of the audio the user recorded. The `pipeline` object expects this in float32 format, so we convert it first to float32, and then extract the transcribed text.

## 3. Create a Streaming ASR Demo with Transformers

To make this a *streaming* demo, we need to make these changes:

1. Set `streaming=True` in the `Audio` component
2. Set `live=True` in the `Interface`
3. Add a `state` to the interface to store the recorded audio of a user
Take a look below.

$code_stream_asr

Notice that we now have a state variable, because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio that has been spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in `stream`, as well as the new chunk of audio as `new_chunk`. We return the new full audio so that it can be stored back in the state, and we also return the transcription. Here we naively append the audio together and simply call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received.

$demo_stream_asr

Now the ASR model will run inference as you speak!
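Since the `$code_stream_asr` snippet above is rendered from a separate file, here is a rough sketch of what the streaming pieces could look like. Treat the component arguments as illustrative: exact parameter names (for example `sources` vs. `source` on `gr.Audio`) vary across Gradio versions, and the normalization step is just one simple way to turn microphone integers into the float audio the pipeline expects.

```python
import numpy as np
import gradio as gr
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")

def transcribe(stream, new_chunk):
    sr, y = new_chunk
    y = y.astype(np.float32)
    y /= np.max(np.abs(y))  # normalize to [-1, 1]
    # Append the new chunk to everything heard so far, then transcribe the full audio
    stream = y if stream is None else np.concatenate([stream, y])
    text = transcriber({"sampling_rate": sr, "raw": stream})["text"]
    return stream, text

demo = gr.Interface(
    transcribe,
    inputs=[gr.State(), gr.Audio(sources=["microphone"], streaming=True)],
    outputs=[gr.State(), gr.Textbox()],
    live=True,
)
demo.launch()
```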
gradio-app/gradio/blob/main/guides/09_other-tutorials/real-time-speech-recognition.md
# Consistency Decoder

Consistency decoder can be used to decode the latents from the denoising UNet in the [`StableDiffusionPipeline`]. This decoder was introduced in the [DALL-E 3 technical report](https://openai.com/dall-e-3).

The original codebase can be found at [openai/consistencydecoder](https://github.com/openai/consistencydecoder).

<Tip warning={true}>

Inference is only supported for 2 iterations as of now.

</Tip>

The pipeline could not have been contributed without the help of [madebyollin](https://github.com/madebyollin) and [mrsteyk](https://github.com/mrsteyk) from [this issue](https://github.com/openai/consistencydecoder/issues/1).

## ConsistencyDecoderVAE
[[autodoc]] ConsistencyDecoderVAE
    - all
    - decode
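A minimal usage sketch is shown below. The checkpoint name and the Stable Diffusion base model are assumptions based on the publicly available `openai/consistency-decoder` weights; swap in whatever pipeline and checkpoints you actually use.

```python
import torch
from diffusers import DiffusionPipeline, ConsistencyDecoderVAE

# Replace the default VAE decoder of a Stable Diffusion pipeline with the consistency decoder
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("horse", generator=torch.manual_seed(0)).images[0]
image
```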
huggingface/diffusers/blob/main/docs/source/en/api/models/consistency_decoder_vae.md
!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Text classification examples ## GLUE tasks Based on the script [`run_glue.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py). Fine-tuning the library models for sequence classification on the GLUE benchmark: [General Language Understanding Evaluation](https://gluebenchmark.com/). This script can fine-tune any of the models on the [hub](https://huggingface.co/models) and can also be used for a dataset hosted on our [hub](https://huggingface.co/datasets) or your own data in a csv or a JSON file (the script might need some tweaks in that case, refer to the comments inside for help). GLUE is made up of a total of 9 different tasks. Here is how to run the script on one of them: ```bash export TASK_NAME=mrpc python run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ ``` where task name can be one of cola, sst2, mrpc, stsb, qqp, mnli, qnli, rte, wnli. We get the following results on the dev set of the benchmark with the previous commands (with an exception for MRPC and WNLI which are tiny and where we used 5 epochs instead of 3). Trainings are seeded so you should obtain the same results with PyTorch 1.6.0 (and close results with different versions), training times are given for information (a single Titan RTX was used): | Task | Metric | Result | Training time | |-------|------------------------------|-------------|---------------| | CoLA | Matthews corr | 56.53 | 3:17 | | SST-2 | Accuracy | 92.32 | 26:06 | | MRPC | F1/Accuracy | 88.85/84.07 | 2:21 | | STS-B | Pearson/Spearman corr. | 88.64/88.48 | 2:13 | | QQP | Accuracy/F1 | 90.71/87.49 | 2:22:26 | | MNLI | Matched acc./Mismatched acc. | 83.91/84.10 | 2:35:23 | | QNLI | Accuracy | 90.66 | 40:57 | | RTE | Accuracy | 65.70 | 57 | | WNLI | Accuracy | 56.34 | 24 | Some of these results are significantly different from the ones reported on the test set of GLUE benchmark on the website. For QQP and WNLI, please refer to [FAQ #12](https://gluebenchmark.com/faq) on the website. The following example fine-tunes BERT on the `imdb` dataset hosted on our [hub](https://huggingface.co/datasets): ```bash python run_glue.py \ --model_name_or_path bert-base-cased \ --dataset_name imdb \ --do_train \ --do_predict \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/imdb/ ``` > If your model classification head dimensions do not fit the number of labels in the dataset, you can specify `--ignore_mismatched_sizes` to adapt it. ## Text classification As an alternative, we can use the script [`run_classification.py`](./run_classification.py) to fine-tune models on a single/multi-label classification task. 
The following example fine-tunes BERT on the `en` subset of [`amazon_reviews_multi`](https://huggingface.co/datasets/amazon_reviews_multi) dataset. We can specify the metric, the label column and also choose which text columns to use jointly for classification.

```bash
dataset="amazon_reviews_multi"
subset="en"
python run_classification.py \
    --model_name_or_path bert-base-uncased \
    --dataset_name ${dataset} \
    --dataset_config_name ${subset} \
    --shuffle_train_dataset \
    --metric_name accuracy \
    --text_column_name "review_title,review_body,product_category" \
    --text_column_delimiter "\n" \
    --label_column_name stars \
    --do_train \
    --do_eval \
    --max_seq_length 512 \
    --per_device_train_batch_size 32 \
    --learning_rate 2e-5 \
    --num_train_epochs 1 \
    --output_dir /tmp/${dataset}_${subset}/
```

Training for 1 epoch results in an accuracy of around 0.5958 for review_body only and 0.659 for title+body+category.

The following is a multi-label classification example. It fine-tunes BERT on the `reuters21578` dataset hosted on our [hub](https://huggingface.co/datasets/reuters21578):

```bash
dataset="reuters21578"
subset="ModApte"
python run_classification.py \
    --model_name_or_path bert-base-uncased \
    --dataset_name ${dataset} \
    --dataset_config_name ${subset} \
    --shuffle_train_dataset \
    --remove_splits "unused" \
    --metric_name f1 \
    --text_column_name text \
    --label_column_name topics \
    --do_train \
    --do_eval \
    --max_seq_length 512 \
    --per_device_train_batch_size 32 \
    --learning_rate 2e-5 \
    --num_train_epochs 15 \
    --output_dir /tmp/${dataset}_${subset}/
```

It results in a Micro F1 score of around 0.82 without any text and label filtering.

Note that you have to explicitly remove the "unused" split from the dataset, since it is not used for classification.

### Mixed precision training

If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision training with PyTorch 1.6.0 or latest, or by installing the [Apex](https://github.com/NVIDIA/apex) library for previous versions. Just add the flag `--fp16` to your command launching one of the scripts mentioned above!

Using mixed precision training usually results in 2x-speedup for training with the same final results:

| Task  | Metric                       | Result      | Training time | Result (FP16) | Training time (FP16) |
|-------|------------------------------|-------------|---------------|---------------|----------------------|
| CoLA  | Matthews corr                | 56.53       | 3:17          | 56.78         | 1:41                 |
| SST-2 | Accuracy                     | 92.32       | 26:06         | 91.74         | 13:11                |
| MRPC  | F1/Accuracy                  | 88.85/84.07 | 2:21          | 88.12/83.58   | 1:10                 |
| STS-B | Pearson/Spearman corr.       | 88.64/88.48 | 2:13          | 88.71/88.55   | 1:08                 |
| QQP   | Accuracy/F1                  | 90.71/87.49 | 2:22:26       | 90.67/87.43   | 1:11:54              |
| MNLI  | Matched acc./Mismatched acc. | 83.91/84.10 | 2:35:23       | 84.04/84.06   | 1:17:06              |
| QNLI  | Accuracy                     | 90.66       | 40:57         | 90.96         | 20:16                |
| RTE   | Accuracy                     | 65.70       | 57            | 65.34         | 29                   |
| WNLI  | Accuracy                     | 56.34       | 24            | 56.34         | 12                   |

## PyTorch version, no Trainer

Based on the script [`run_glue_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py).

Like `run_glue.py`, this script allows you to fine-tune any of the models on the [hub](https://huggingface.co/models) on a text classification task, either a GLUE task or your own data in a csv or a JSON file. The main difference is that this script exposes the bare training loop, to allow you to quickly experiment and add any customization you would like.
It offers less options than the script with `Trainer` (for instance you can easily change the options for the optimizer or the dataloaders directly in the script) but still run in a distributed setup, on TPU and supports mixed precision by the mean of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally after installing it: ```bash pip install git+https://github.com/huggingface/accelerate ``` then ```bash export TASK_NAME=mrpc python run_glue_no_trainer.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --max_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ ``` You can then use your usual launchers to run in it in a distributed environment, but the easiest way is to run ```bash accelerate config ``` and reply to the questions asked. Then ```bash accelerate test ``` that will check everything is ready for training. Finally, you can launch training with ```bash export TASK_NAME=mrpc accelerate launch run_glue_no_trainer.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --max_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ ``` This command is the same and will work for: - a CPU-only setup - a setup with one GPU - a distributed training with several GPUs (single or multi node) - a training on TPUs Note that this library is in alpha release so your feedback is more than welcome if you encounter any problem using it. ## XNLI Based on the script [`run_xnli.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_xnli.py). [XNLI](https://cims.nyu.edu/~sbowman/xnli/) is a crowd-sourced dataset based on [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/). It is an evaluation benchmark for cross-lingual text representations. Pairs of text are labeled with textual entailment annotations for 15 different languages (including both high-resource language such as English and low-resource languages such as Swahili). #### Fine-tuning on XNLI This example code fine-tunes mBERT (multi-lingual BERT) on the XNLI dataset. It runs in 106 mins on a single tesla V100 16GB. ```bash python run_xnli.py \ --model_name_or_path bert-base-multilingual-cased \ --language de \ --train_language en \ --do_train \ --do_eval \ --per_device_train_batch_size 32 \ --learning_rate 5e-5 \ --num_train_epochs 2.0 \ --max_seq_length 128 \ --output_dir /tmp/debug_xnli/ \ --save_steps -1 ``` Training with the previously defined hyper-parameters yields the following results on the **test** set: ```bash acc = 0.7093812375249501 ```
huggingface/transformers/blob/main/examples/pytorch/text-classification/README.md
# Amused training

Amused can be finetuned on simple datasets relatively cheaply and quickly. Using 8bit optimizers, lora, and gradient accumulation, amused can be finetuned with as little as 5.5 GB. Here is a set of examples for finetuning amused on some relatively simple datasets. These training recipes are aggressively oriented towards minimal resources and fast verification -- i.e. the batch sizes are quite low and the learning rates are quite high. For optimal quality, you will probably want to increase the batch sizes and decrease the learning rates.

All training examples use fp16 mixed precision and gradient checkpointing. We don't show 8 bit adam + lora as it's about the same memory use as just using lora (bitsandbytes uses full precision optimizer states for weights below a minimum size).

### Finetuning the 256 checkpoint

These examples finetune on this [nouns](https://huggingface.co/datasets/m1guelpf/nouns) dataset.

Example results:

![noun1](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/noun1.png) ![noun2](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/noun2.png) ![noun3](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/noun3.png)

#### Full finetuning

Batch size: 8, Learning rate: 1e-4, Gives decent results in 750-1000 steps

| Batch Size | Gradient Accumulation Steps | Effective Total Batch Size | Memory Used |
|------------|-----------------------------|----------------------------|-------------|
| 8          | 1                           | 8                          | 19.7 GB     |
| 4          | 2                           | 8                          | 18.3 GB     |
| 1          | 8                           | 8                          | 17.9 GB     |

```sh
accelerate launch train_amused.py \
    --output_dir <output path> \
    --train_batch_size <batch size> \
    --gradient_accumulation_steps <gradient accumulation steps> \
    --learning_rate 1e-4 \
    --pretrained_model_name_or_path huggingface/amused-256 \
    --instance_data_dataset 'm1guelpf/nouns' \
    --image_key image \
    --prompt_key text \
    --resolution 256 \
    --mixed_precision fp16 \
    --lr_scheduler constant \
    --validation_prompts \
        'a pixel art character with square red glasses, a baseball-shaped head and a orange-colored body on a dark background' \
        'a pixel art character with square orange glasses, a lips-shaped head and a red-colored body on a light background' \
        'a pixel art character with square blue glasses, a microwave-shaped head and a purple-colored body on a sunny background' \
        'a pixel art character with square red glasses, a baseball-shaped head and a blue-colored body on an orange background' \
        'a pixel art character with square red glasses' \
        'a pixel art character' \
        'square red glasses on a pixel art character' \
        'square red glasses on a pixel art character with a baseball-shaped head' \
    --max_train_steps 10000 \
    --checkpointing_steps 500 \
    --validation_steps 250 \
    --gradient_checkpointing
```

#### Full finetuning + 8 bit adam

Note that this training config keeps the batch size low and the learning rate high to get results fast with low resources. However, due to 8 bit adam, it will diverge eventually. If you want to train for longer, you will have to up the batch size and lower the learning rate.
Batch size: 16, Learning rate: 2e-5, Gives decent results in ~750 steps | Batch Size | Gradient Accumulation Steps | Effective Total Batch Size | Memory Used | |------------|-----------------------------|------------------|-------------| | 16 | 1 | 16 | 20.1 GB | | 8 | 2 | 16 | 15.6 GB | | 1 | 16 | 16 | 10.7 GB | ```sh accelerate launch train_amused.py \ --output_dir <output path> \ --train_batch_size <batch size> \ --gradient_accumulation_steps <gradient accumulation steps> \ --learning_rate 2e-5 \ --use_8bit_adam \ --pretrained_model_name_or_path huggingface/amused-256 \ --instance_data_dataset 'm1guelpf/nouns' \ --image_key image \ --prompt_key text \ --resolution 256 \ --mixed_precision fp16 \ --lr_scheduler constant \ --validation_prompts \ 'a pixel art character with square red glasses, a baseball-shaped head and a orange-colored body on a dark background' \ 'a pixel art character with square orange glasses, a lips-shaped head and a red-colored body on a light background' \ 'a pixel art character with square blue glasses, a microwave-shaped head and a purple-colored body on a sunny background' \ 'a pixel art character with square red glasses, a baseball-shaped head and a blue-colored body on an orange background' \ 'a pixel art character with square red glasses' \ 'a pixel art character' \ 'square red glasses on a pixel art character' \ 'square red glasses on a pixel art character with a baseball-shaped head' \ --max_train_steps 10000 \ --checkpointing_steps 500 \ --validation_steps 250 \ --gradient_checkpointing ``` #### Full finetuning + lora Batch size: 16, Learning rate: 8e-4, Gives decent results in 1000-1250 steps | Batch Size | Gradient Accumulation Steps | Effective Total Batch Size | Memory Used | |------------|-----------------------------|------------------|-------------| | 16 | 1 | 16 | 14.1 GB | | 8 | 2 | 16 | 10.1 GB | | 1 | 16 | 16 | 6.5 GB | ```sh accelerate launch train_amused.py \ --output_dir <output path> \ --train_batch_size <batch size> \ --gradient_accumulation_steps <gradient accumulation steps> \ --learning_rate 8e-4 \ --use_lora \ --pretrained_model_name_or_path huggingface/amused-256 \ --instance_data_dataset 'm1guelpf/nouns' \ --image_key image \ --prompt_key text \ --resolution 256 \ --mixed_precision fp16 \ --lr_scheduler constant \ --validation_prompts \ 'a pixel art character with square red glasses, a baseball-shaped head and a orange-colored body on a dark background' \ 'a pixel art character with square orange glasses, a lips-shaped head and a red-colored body on a light background' \ 'a pixel art character with square blue glasses, a microwave-shaped head and a purple-colored body on a sunny background' \ 'a pixel art character with square red glasses, a baseball-shaped head and a blue-colored body on an orange background' \ 'a pixel art character with square red glasses' \ 'a pixel art character' \ 'square red glasses on a pixel art character' \ 'square red glasses on a pixel art character with a baseball-shaped head' \ --max_train_steps 10000 \ --checkpointing_steps 500 \ --validation_steps 250 \ --gradient_checkpointing ``` ### Finetuning the 512 checkpoint These examples finetune on this [minecraft](https://huggingface.co/monadical-labs/minecraft-preview) dataset. 
Example results: ![minecraft1](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/minecraft1.png) ![minecraft2](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/minecraft2.png) ![minecraft3](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/minecraft3.png) #### Full finetuning Batch size: 8, Learning rate: 8e-5, Gives decent results in 500-1000 steps | Batch Size | Gradient Accumulation Steps | Effective Total Batch Size | Memory Used | |------------|-----------------------------|------------------|-------------| | 8 | 1 | 8 | 24.2 GB | | 4 | 2 | 8 | 19.7 GB | | 1 | 8 | 8 | 16.99 GB | ```sh accelerate launch train_amused.py \ --output_dir <output path> \ --train_batch_size <batch size> \ --gradient_accumulation_steps <gradient accumulation steps> \ --learning_rate 8e-5 \ --pretrained_model_name_or_path huggingface/amused-512 \ --instance_data_dataset 'monadical-labs/minecraft-preview' \ --prompt_prefix 'minecraft ' \ --image_key image \ --prompt_key text \ --resolution 512 \ --mixed_precision fp16 \ --lr_scheduler constant \ --validation_prompts \ 'minecraft Avatar' \ 'minecraft character' \ 'minecraft' \ 'minecraft president' \ 'minecraft pig' \ --max_train_steps 10000 \ --checkpointing_steps 500 \ --validation_steps 250 \ --gradient_checkpointing ``` #### Full finetuning + 8 bit adam Batch size: 8, Learning rate: 5e-6, Gives decent results in 500-1000 steps | Batch Size | Gradient Accumulation Steps | Effective Total Batch Size | Memory Used | |------------|-----------------------------|------------------|-------------| | 8 | 1 | 8 | 21.2 GB | | 4 | 2 | 8 | 13.3 GB | | 1 | 8 | 8 | 9.9 GB | ```sh accelerate launch train_amused.py \ --output_dir <output path> \ --train_batch_size <batch size> \ --gradient_accumulation_steps <gradient accumulation steps> \ --learning_rate 5e-6 \ --pretrained_model_name_or_path huggingface/amused-512 \ --instance_data_dataset 'monadical-labs/minecraft-preview' \ --prompt_prefix 'minecraft ' \ --image_key image \ --prompt_key text \ --resolution 512 \ --mixed_precision fp16 \ --lr_scheduler constant \ --validation_prompts \ 'minecraft Avatar' \ 'minecraft character' \ 'minecraft' \ 'minecraft president' \ 'minecraft pig' \ --max_train_steps 10000 \ --checkpointing_steps 500 \ --validation_steps 250 \ --gradient_checkpointing ``` #### Full finetuning + lora Batch size: 8, Learning rate: 1e-4, Gives decent results in 500-1000 steps | Batch Size | Gradient Accumulation Steps | Effective Total Batch Size | Memory Used | |------------|-----------------------------|------------------|-------------| | 8 | 1 | 8 | 12.7 GB | | 4 | 2 | 8 | 9.0 GB | | 1 | 8 | 8 | 5.6 GB | ```sh accelerate launch train_amused.py \ --output_dir <output path> \ --train_batch_size <batch size> \ --gradient_accumulation_steps <gradient accumulation steps> \ --learning_rate 1e-4 \ --use_lora \ --pretrained_model_name_or_path huggingface/amused-512 \ --instance_data_dataset 'monadical-labs/minecraft-preview' \ --prompt_prefix 'minecraft ' \ --image_key image \ --prompt_key text \ --resolution 512 \ --mixed_precision fp16 \ --lr_scheduler constant \ --validation_prompts \ 'minecraft Avatar' \ 'minecraft character' \ 'minecraft' \ 'minecraft president' \ 'minecraft pig' \ --max_train_steps 10000 \ --checkpointing_steps 500 \ --validation_steps 250 \ --gradient_checkpointing ``` ### Styledrop [Styledrop](https://arxiv.org/abs/2306.00983) is an efficient finetuning method for learning a new style from just one or very few 
images. It has an optional first stage to generate human picked additional training samples. The additional training samples can be used to augment the initial images. Our examples exclude the optional additional image selection stage and instead we just finetune on a single image. This is our example style image: ![example](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/A%20mushroom%20in%20%5BV%5D%20style.png) Download it to your local directory with ```sh wget https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/A%20mushroom%20in%20%5BV%5D%20style.png ``` #### 256 Example results: ![glowing_256_1](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/glowing_256_1.png) ![glowing_256_2](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/glowing_256_2.png) ![glowing_256_3](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/glowing_256_3.png) Learning rate: 4e-4, Gives decent results in 1500-2000 steps Memory used: 6.5 GB ```sh accelerate launch train_amused.py \ --output_dir <output path> \ --mixed_precision fp16 \ --report_to wandb \ --use_lora \ --pretrained_model_name_or_path huggingface/amused-256 \ --train_batch_size 1 \ --lr_scheduler constant \ --learning_rate 4e-4 \ --validation_prompts \ 'A chihuahua walking on the street in [V] style' \ 'A banana on the table in [V] style' \ 'A church on the street in [V] style' \ 'A tabby cat walking in the forest in [V] style' \ --instance_data_image 'A mushroom in [V] style.png' \ --max_train_steps 10000 \ --checkpointing_steps 500 \ --validation_steps 100 \ --resolution 256 ``` #### 512 Example results: ![glowing_512_1](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/glowing_512_1.png) ![glowing_512_2](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/glowing_512_2.png) ![glowing_512_3](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/amused/glowing_512_3.png) Learning rate: 1e-3, Lora alpha 1, Gives decent results in 1500-2000 steps Memory used: 5.6 GB ``` accelerate launch train_amused.py \ --output_dir <output path> \ --mixed_precision fp16 \ --report_to wandb \ --use_lora \ --pretrained_model_name_or_path huggingface/amused-512 \ --train_batch_size 1 \ --lr_scheduler constant \ --learning_rate 1e-3 \ --validation_prompts \ 'A chihuahua walking on the street in [V] style' \ 'A banana on the table in [V] style' \ 'A church on the street in [V] style' \ 'A tabby cat walking in the forest in [V] style' \ --instance_data_image 'A mushroom in [V] style.png' \ --max_train_steps 100000 \ --checkpointing_steps 500 \ --validation_steps 100 \ --resolution 512 \ --lora_alpha 1 ```
huggingface/diffusers/blob/main/examples/amused/README.md
-- title: Mean IoU emoji: 🤗 colorFrom: blue colorTo: red sdk: gradio sdk_version: 3.19.1 app_file: app.py pinned: false tags: - evaluate - metric description: >- IoU is the area of overlap between the predicted segmentation and the ground truth divided by the area of union between the predicted segmentation and the ground truth. For binary (two classes) or multi-class segmentation, the mean IoU of the image is calculated by taking the IoU of each class and averaging them. --- # Metric Card for Mean IoU ## Metric Description IoU (Intersection over Union) is the area of overlap between the predicted segmentation and the ground truth divided by the area of union between the predicted segmentation and the ground truth. For binary (two classes) or multi-class segmentation, the *mean IoU* of the image is calculated by taking the IoU of each class and averaging them. ## How to Use The Mean IoU metric takes two numeric arrays as input corresponding to the predicted and ground truth segmentations: ```python >>> import numpy as np >>> mean_iou = evaluate.load("mean_iou") >>> predicted = np.array([[2, 2, 3], [8, 2, 4], [3, 255, 2]]) >>> ground_truth = np.array([[1, 2, 2], [8, 2, 1], [3, 255, 1]]) >>> results = mean_iou.compute(predictions=predicted, references=ground_truth, num_labels=10, ignore_index=255) ``` ### Inputs **Mandatory inputs** - `predictions` (`List[ndarray]`): List of predicted segmentation maps, each of shape (height, width). Each segmentation map can be of a different size. - `references` (`List[ndarray]`): List of ground truth segmentation maps, each of shape (height, width). Each segmentation map can be of a different size. - `num_labels` (`int`): Number of classes (categories). - `ignore_index` (`int`): Index that will be ignored during evaluation. **Optional inputs** - `nan_to_num` (`int`): If specified, NaN values will be replaced by the number defined by the user. - `label_map` (`dict`): If specified, dictionary mapping old label indices to new label indices. - `reduce_labels` (`bool`): Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255. The default value is `False`. ### Output Values The metric returns a dictionary with the following elements: - `mean_iou` (`float`): Mean Intersection-over-Union (IoU averaged over all categories). - `mean_accuracy` (`float`): Mean accuracy (averaged over all categories). - `overall_accuracy` (`float`): Overall accuracy on all images. - `per_category_accuracy` (`ndarray` of shape `(num_labels,)`): Per category accuracy. - `per_category_iou` (`ndarray` of shape `(num_labels,)`): Per category IoU. The values of all of the scores reported range from from `0.0` (minimum) and `1.0` (maximum). Output Example: ```python {'mean_iou': 0.47750000000000004, 'mean_accuracy': 0.5916666666666666, 'overall_accuracy': 0.5263157894736842, 'per_category_iou': array([0. , 0. , 0.375, 0.4 , 0.5 , 0. , 0.5 , 1. , 1. , 1. ]), 'per_category_accuracy': array([0. , 0. , 0.75 , 0.66666667, 1. , 0. , 0.5 , 1. , 1. , 1. 
])}
```

#### Values from Popular Papers

The [leaderboard for the CityScapes dataset](https://paperswithcode.com/sota/semantic-segmentation-on-cityscapes) reports a Mean IOU ranging from 64 to 84; that of [ADE20k](https://paperswithcode.com/sota/semantic-segmentation-on-ade20k) ranges from 30 to a peak of 59.9, indicating that the dataset is more difficult for current approaches (as of 2022).

### Examples

```python
>>> import numpy as np
>>> mean_iou = evaluate.load("mean_iou")
>>> # suppose one has 3 different segmentation maps predicted
>>> predicted_1 = np.array([[1, 2], [3, 4], [5, 255]])
>>> actual_1 = np.array([[0, 3], [5, 4], [6, 255]])
>>> predicted_2 = np.array([[2, 7], [9, 2], [3, 6]])
>>> actual_2 = np.array([[1, 7], [9, 2], [3, 6]])
>>> predicted_3 = np.array([[2, 2, 3], [8, 2, 4], [3, 255, 2]])
>>> actual_3 = np.array([[1, 2, 2], [8, 2, 1], [3, 255, 1]])
>>> predictions = [predicted_1, predicted_2, predicted_3]
>>> references = [actual_1, actual_2, actual_3]
>>> results = mean_iou.compute(predictions=predictions, references=references, num_labels=10, ignore_index=255, reduce_labels=False)
>>> print(results) # doctest: +NORMALIZE_WHITESPACE
{'mean_iou': 0.47750000000000004, 'mean_accuracy': 0.5916666666666666, 'overall_accuracy': 0.5263157894736842, 'per_category_iou': array([0.   , 0.   , 0.375, 0.4  , 0.5  , 0.   , 0.5  , 1.   , 1.   , 1.   ]), 'per_category_accuracy': array([0.        , 0.        , 0.75      , 0.66666667, 1.        , 0.        , 0.5       , 1.        , 1.        , 1.        ])}
```

## Limitations and Bias

Mean IOU is an average metric, so it will not show you where model predictions differ from the ground truth (i.e. if there are particular regions or classes that the model does poorly on). Further error analysis is needed to gather actionable insights that can be used to inform model improvements.

## Citation(s)

```bibtex
@software{MMSegmentation_Contributors_OpenMMLab_Semantic_Segmentation_2020,
    author = {{MMSegmentation Contributors}},
    license = {Apache-2.0},
    month = {7},
    title = {{OpenMMLab Semantic Segmentation Toolbox and Benchmark}},
    url = {https://github.com/open-mmlab/mmsegmentation},
    year = {2020}
}
```

## Further References

- [Wikipedia article - Jaccard Index](https://en.wikipedia.org/wiki/Jaccard_index)
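To connect the numbers above back to the definition at the top of this card, here is a small hand-computed IoU for a single binary mask; the two arrays are made up purely for illustration.

```python
import numpy as np

# Hand-computed IoU for one binary mask: overlap divided by union
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])

intersection = np.logical_and(pred, truth).sum()  # 2 pixels predicted and labeled as the class
union        = np.logical_or(pred, truth).sum()   # 4 pixels in either mask
print(intersection / union)                       # 0.5
```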
huggingface/evaluate/blob/main/metrics/mean_iou/README.md
-- title: SuperGLUE emoji: 🤗 colorFrom: blue colorTo: red sdk: gradio sdk_version: 3.19.1 app_file: app.py pinned: false tags: - evaluate - metric description: >- SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, improved resources, and a new public leaderboard. --- # Metric Card for SuperGLUE ## Metric description This metric is used to compute the SuperGLUE evaluation metric associated to each of the subsets of the [SuperGLUE dataset](https://huggingface.co/datasets/super_glue). SuperGLUE is a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, improved resources, and a new public leaderboard. ## How to use There are two steps: (1) loading the SuperGLUE metric relevant to the subset of the dataset being used for evaluation; and (2) calculating the metric. 1. **Loading the relevant SuperGLUE metric** : the subsets of SuperGLUE are the following: `boolq`, `cb`, `copa`, `multirc`, `record`, `rte`, `wic`, `wsc`, `wsc.fixed`, `axb`, `axg`. More information about the different subsets of the SuperGLUE dataset can be found on the [SuperGLUE dataset page](https://huggingface.co/datasets/super_glue) and on the [official dataset website](https://super.gluebenchmark.com/). 2. **Calculating the metric**: the metric takes two inputs : one list with the predictions of the model to score and one list of reference labels. The structure of both inputs depends on the SuperGlUE subset being used: Format of `predictions`: - for `record`: list of question-answer dictionaries with the following keys: - `idx`: index of the question as specified by the dataset - `prediction_text`: the predicted answer text - for `multirc`: list of question-answer dictionaries with the following keys: - `idx`: index of the question-answer pair as specified by the dataset - `prediction`: the predicted answer label - otherwise: list of predicted labels Format of `references`: - for `record`: list of question-answers dictionaries with the following keys: - `idx`: index of the question as specified by the dataset - `answers`: list of possible answers - otherwise: list of reference labels ```python from evaluate import load super_glue_metric = load('super_glue', 'copa') predictions = [0, 1] references = [0, 1] results = super_glue_metric.compute(predictions=predictions, references=references) ``` ## Output values The output of the metric depends on the SuperGLUE subset chosen, consisting of a dictionary that contains one or several of the following metrics: `exact_match`: A given predicted string's exact match score is 1 if it is the exact same as its reference string, and is 0 otherwise. (See [Exact Match](https://huggingface.co/metrics/exact_match) for more information). `f1`: the harmonic mean of the precision and recall (see [F1 score](https://huggingface.co/metrics/f1) for more information). Its range is 0-1 -- its lowest possible value is 0, if either the precision or the recall is 0, and its highest possible value is 1.0, which means perfect precision and recall. `matthews_correlation`: a measure of the quality of binary and multiclass classifications (see [Matthews Correlation](https://huggingface.co/metrics/matthews_correlation) for more information). Its range of values is between -1 and +1, where a coefficient of +1 represents a perfect prediction, 0 an average random prediction and -1 an inverse prediction. 
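The `record` subset takes dictionary inputs rather than plain label lists, as described above. A rough sketch of such a call is shown below; the exact structure of the `idx` field comes from the SuperGLUE dataset itself, so treat the literal keys and values here as illustrative.

```python
from evaluate import load

super_glue_metric = load('super_glue', 'record')
predictions = [{'idx': {'passage': 0, 'query': 0}, 'prediction_text': 'answer'}]
references = [{'idx': {'passage': 0, 'query': 0}, 'answers': ['answer', 'another answer']}]
results = super_glue_metric.compute(predictions=predictions, references=references)
print(results)  # expected to contain 'exact_match' and 'f1'
```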
### Values from popular papers

The [original SuperGLUE paper](https://arxiv.org/pdf/1905.00537.pdf) reported average scores ranging from 47 to 71.5%, depending on the model used (with all evaluation values scaled by 100 to make computing the average possible).

For more recent model performance, see the [dataset leaderboard](https://super.gluebenchmark.com/leaderboard).

## Examples

Maximal values for the COPA subset (which outputs `accuracy`):

```python
from evaluate import load
super_glue_metric = load('super_glue', 'copa')  # any of ["copa", "rte", "wic", "wsc", "wsc.fixed", "boolq", "axg"]
predictions = [0, 1]
references = [0, 1]
results = super_glue_metric.compute(predictions=predictions, references=references)
print(results)
{'accuracy': 1.0}
```

Minimal values for the MultiRC subset (which outputs `exact_match`, `f1_m` and `f1_a`):

```python
from evaluate import load
super_glue_metric = load('super_glue', 'multirc')
predictions = [{'idx': {'answer': 0, 'paragraph': 0, 'question': 0}, 'prediction': 0}, {'idx': {'answer': 1, 'paragraph': 2, 'question': 3}, 'prediction': 1}]
references = [1, 0]
results = super_glue_metric.compute(predictions=predictions, references=references)
print(results)
{'exact_match': 0.0, 'f1_m': 0.0, 'f1_a': 0.0}
```

Partial match for the AXb subset (which outputs `matthews_correlation`):

```python
from evaluate import load
super_glue_metric = load('super_glue', 'axb')
references = [0, 1]
predictions = [1, 1]
results = super_glue_metric.compute(predictions=predictions, references=references)
print(results)
{'matthews_correlation': 0.0}
```

## Limitations and bias

This metric works only with datasets that have the same format as the [SuperGLUE dataset](https://huggingface.co/datasets/super_glue).

The dataset also includes Winogender, a subset of the dataset that is designed to measure gender bias in coreference resolution systems. However, as noted in the SuperGLUE paper, this subset has its limitations: *"It offers only positive predictive value: A poor bias score is clear evidence that a model exhibits gender bias, but a good score does not mean that the model is unbiased.[...] Also, Winogender does not cover all forms of social bias, or even all forms of gender. For instance, the version of the data used here offers no coverage of gender-neutral they or non-binary pronouns."*

## Citation

```bibtex
@article{wang2019superglue,
  title={Super{GLUE}: A Stickier Benchmark for General-Purpose Language Understanding Systems},
  author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
  journal={arXiv preprint arXiv:1905.00537},
  year={2019}
}
```

## Further References

- [SuperGLUE benchmark homepage](https://super.gluebenchmark.com/)
huggingface/evaluate/blob/main/metrics/super_glue/README.md
# (Tensorflow) Inception v3

**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the sidehead). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module).

The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).

## How do I use this model on an image?

To load a pretrained model:

```python
import timm
model = timm.create_model('tf_inception_v3', pretrained=True)
model.eval()
```

To load and preprocess the image:

```python
import urllib
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

config = resolve_data_config({}, model=model)
transform = create_transform(**config)

url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
img = Image.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0)  # transform and add batch dimension
```

To get the model predictions:

```python
import torch
with torch.no_grad():
    out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print(probabilities.shape)
# prints: torch.Size([1000])
```

To get the top-5 predictions class names:

```python
# Get imagenet class mappings
url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
urllib.request.urlretrieve(url, filename)
with open("imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]

# Print top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
# prints class names and probabilities like:
# [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```

Replace the model name with the variant you want to use, e.g. `tf_inception_v3`. You can find the IDs in the model summaries at the top of this page.

To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use.

## How do I finetune this model?

You can finetune any of the pre-trained models just by changing the classifier (the last layer).

```python
model = timm.create_model('tf_inception_v3', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```

To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.

## How do I train this model?

You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
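As a companion to the feature-extraction pointer above, here is a small sketch of what that could look like using timm's `features_only=True` option; the random 299x299 input simply matches the crop size listed in the metadata below, and the exact number and shapes of the returned feature maps depend on the model.

```python
import timm
import torch

# features_only=True returns intermediate feature maps instead of classification logits
model = timm.create_model('tf_inception_v3', pretrained=True, features_only=True)
model.eval()

with torch.no_grad():
    features = model(torch.randn(1, 3, 299, 299))

# One tensor per feature stage, from high resolution to low
print([f.shape for f in features])
```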
## Citation ```BibTeX @article{DBLP:journals/corr/SzegedyVISW15, author = {Christian Szegedy and Vincent Vanhoucke and Sergey Ioffe and Jonathon Shlens and Zbigniew Wojna}, title = {Rethinking the Inception Architecture for Computer Vision}, journal = {CoRR}, volume = {abs/1512.00567}, year = {2015}, url = {http://arxiv.org/abs/1512.00567}, archivePrefix = {arXiv}, eprint = {1512.00567}, timestamp = {Mon, 13 Aug 2018 16:49:07 +0200}, biburl = {https://dblp.org/rec/journals/corr/SzegedyVISW15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: TF Inception v3 Paper: Title: Rethinking the Inception Architecture for Computer Vision URL: https://paperswithcode.com/paper/rethinking-the-inception-architecture-for Models: - Name: tf_inception_v3 In Collection: TF Inception v3 Metadata: FLOPs: 7352418880 Parameters: 23830000 File Size: 95549439 Architecture: - 1x1 Convolution - Auxiliary Classifier - Average Pooling - Average Pooling - Batch Normalization - Convolution - Dense Connections - Dropout - Inception-v3 Module - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Gradient Clipping - Label Smoothing - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 50x NVIDIA Kepler GPUs ID: tf_inception_v3 LR: 0.045 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Image Size: '299' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L449 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_inception_v3-e0069de4.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.87% Top 5 Accuracy: 93.65% -->
huggingface/pytorch-image-models/blob/main/docs/models/tf-inception-v3.md
```python
import os

import torch
from transformers import (
    AutoTokenizer,
    default_data_collator,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
    GenerationConfig,
)
from peft import get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType
from datasets import load_dataset

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["TOKENIZERS_PARALLELISM"] = "false"

device = "cuda"
model_name_or_path = "t5-large"
tokenizer_name_or_path = "t5-large"

checkpoint_name = "financial_sentiment_analysis_prefix_tuning_v1.pt"
text_column = "sentence"
label_column = "text_label"
max_length = 8
lr = 1e0
num_epochs = 5
batch_size = 8
```

```python
# creating model
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=20,
    prompt_tuning_init_text="What is the sentiment of this article?\n",
    inference_mode=False,
    tokenizer_name_or_path=model_name_or_path,
)

model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
model
```

```python
# loading dataset
dataset = load_dataset("financial_phrasebank", "sentences_allagree")
dataset = dataset["train"].train_test_split(test_size=0.1)
dataset["validation"] = dataset["test"]
del dataset["test"]

classes = dataset["train"].features["label"].names
dataset = dataset.map(
    lambda x: {"text_label": [classes[label] for label in x["label"]]},
    batched=True,
    num_proc=1,
)

dataset["train"][0]
```

```python
# data preprocessing
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)


def preprocess_function(examples):
    inputs = examples[text_column]
    targets = examples[label_column]
    model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
    labels = tokenizer(targets, max_length=2, padding="max_length", truncation=True, return_tensors="pt")
    labels = labels["input_ids"]
    labels[labels == tokenizer.pad_token_id] = -100
    model_inputs["labels"] = labels
    return model_inputs


processed_datasets = dataset.map(
    preprocess_function,
    batched=True,
    num_proc=1,
    remove_columns=dataset["train"].column_names,
    load_from_cache_file=False,
    desc="Running tokenizer on dataset",
)

train_dataset = processed_datasets["train"].shuffle()
eval_dataset = processed_datasets["validation"]
```

```python
# training and evaluation
def compute_metrics(eval_preds):
    preds, labels = eval_preds
    preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    correct = 0
    total = 0
    for pred, true in zip(preds, labels):
        if pred.strip() == true.strip():
            correct += 1
        total += 1
    accuracy = correct / total
    return {"accuracy": accuracy}


training_args = Seq2SeqTrainingArguments(
    "out",
    per_device_train_batch_size=batch_size,
    learning_rate=lr,
    num_train_epochs=num_epochs,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
    save_strategy="no",
    report_to=[],
    predict_with_generate=True,
    generation_config=GenerationConfig(max_length=max_length),
)
trainer = Seq2SeqTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=default_data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()
```

```python
# saving model
peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}"
model.save_pretrained(peft_model_id)
```

```python
ckpt = f"{peft_model_id}/adapter_model.bin"
!du -h $ckpt
```

```python
from peft import PeftModel, PeftConfig

peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}"

config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
```

```python
model.eval()
i = 107
inputs = tokenizer(dataset["validation"][text_column][i], return_tensors="pt")
print(dataset["validation"][text_column][i])
print(inputs)

with torch.no_grad():
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
    print(outputs)
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
```
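As an optional follow-up (not part of the original notebook), you could sanity-check the loaded adapter on a handful of validation sentences. This sketch reuses the `tokenizer`, `dataset`, and prompt-tuned `model` objects created above.

```python
# Optional sanity check over a few validation examples (assumes the objects
# created above are still in scope; not part of the original notebook).
correct = 0
n = 20
for i in range(n):
    text = dataset["validation"][text_column][i]
    gold = dataset["validation"][label_column][i]
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
    pred = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    correct += int(pred.strip() == gold.strip())
print(f"exact-match accuracy on {n} validation examples: {correct / n:.2f}")
```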
huggingface/peft/blob/main/examples/conditional_generation/peft_prompt_tuning_seq2seq_with_generate.ipynb
DenseNet **DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. The **DenseNet Blur** variant in this collection by Ross Wightman employs [Blur Pooling](http://www.paperswithcode.com/method/blur-pooling) ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('densenet121', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `densenet121`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('densenet121', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/HuangLW16a, author = {Gao Huang and Zhuang Liu and Kilian Q. 
Weinberger}, title = {Densely Connected Convolutional Networks}, journal = {CoRR}, volume = {abs/1608.06993}, year = {2016}, url = {http://arxiv.org/abs/1608.06993}, archivePrefix = {arXiv}, eprint = {1608.06993}, timestamp = {Mon, 10 Sep 2018 15:49:32 +0200}, biburl = {https://dblp.org/rec/journals/corr/HuangLW16a.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ``` @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ``` <!-- Type: model-index Collections: - Name: DenseNet Paper: Title: Densely Connected Convolutional Networks URL: https://paperswithcode.com/paper/densely-connected-convolutional-networks Models: - Name: densenet121 In Collection: DenseNet Metadata: FLOPs: 3641843200 Parameters: 7980000 File Size: 32376726 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet121 LR: 0.1 Epochs: 90 Layers: 121 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L295 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenet121_ra-50efcf5c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.56% Top 5 Accuracy: 92.65% - Name: densenet161 In Collection: DenseNet Metadata: FLOPs: 9931959264 Parameters: 28680000 File Size: 115730790 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet161 LR: 0.1 Epochs: 90 Layers: 161 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L347 Weights: https://download.pytorch.org/models/densenet161-8d451a50.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.36% Top 5 Accuracy: 93.63% - Name: densenet169 In Collection: DenseNet Metadata: FLOPs: 4316945792 Parameters: 14150000 File Size: 57365526 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet169 LR: 0.1 Epochs: 90 Layers: 169 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L327 Weights: https://download.pytorch.org/models/densenet169-b2777c0a.pth Results: - Task: Image Classification Dataset: 
ImageNet Metrics: Top 1 Accuracy: 75.9% Top 5 Accuracy: 93.02% - Name: densenet201 In Collection: DenseNet Metadata: FLOPs: 5514321024 Parameters: 20010000 File Size: 81131730 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet201 LR: 0.1 Epochs: 90 Layers: 201 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L337 Weights: https://download.pytorch.org/models/densenet201-c1103571.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.29% Top 5 Accuracy: 93.48% - Name: densenetblur121d In Collection: DenseNet Metadata: FLOPs: 3947812864 Parameters: 8000000 File Size: 32456500 Architecture: - 1x1 Convolution - Batch Normalization - Blur Pooling - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: densenetblur121d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L305 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenetblur121d_ra-100dcfbc.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.59% Top 5 Accuracy: 93.2% - Name: tv_densenet121 In Collection: DenseNet Metadata: FLOPs: 3641843200 Parameters: 7980000 File Size: 32342954 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: tv_densenet121 LR: 0.1 Epochs: 90 Crop Pct: '0.875' LR Gamma: 0.1 Momentum: 0.9 Batch Size: 32 Image Size: '224' LR Step Size: 30 Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L379 Weights: https://download.pytorch.org/models/densenet121-a639ec97.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 74.74% Top 5 Accuracy: 92.15% -->
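To see which of the DenseNet variants catalogued above ship with pretrained weights in `timm`, you can query the model registry. This is a small, optional sketch; the exact output depends on your installed `timm` version.

```py
>>> import timm
>>> # DenseNet variants with downloadable pretrained weights (e.g. densenet121, densenetblur121d)
>>> timm.list_models('*densenet*', pretrained=True)
```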
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/densenet.mdx
# Diffusers examples with Intel optimizations

**This research project is not actively maintained by the diffusers team. For any questions or comments, please make sure to tag @hshen14 .**

This aims to provide diffusers examples with Intel optimizations such as Bfloat16 for training/fine-tuning acceleration and 8-bit integer (INT8) for inference acceleration on Intel platforms.

## Accelerating the fine-tuning for textual inversion

We accelerate the fine-tuning for textual inversion with Intel Extension for PyTorch. The [examples](textual_inversion) enable both single-node and multi-node distributed training with Bfloat16 support on Intel Xeon Scalable Processors.

## Accelerating the inference for Stable Diffusion using Bfloat16

We start the inference acceleration with Bfloat16 using Intel Extension for PyTorch. The [script](inference_bf16.py) is generally designed to support standard Stable Diffusion models with Bfloat16 support.

```bash
pip install diffusers transformers accelerate scipy safetensors

export KMP_BLOCKTIME=1
export KMP_SETTINGS=1
export KMP_AFFINITY=granularity=fine,compact,1,0

# Intel OpenMP
export OMP_NUM_THREADS=< Cores to use >
export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libiomp5.so
# Jemalloc is a recommended malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support.
export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libjemalloc.so
export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:-1,muzzy_decay_ms:9000000000"

# Launch with default DDIM
numactl --membind <node N> -C <cpu list> python inference_bf16.py
# Launch with DPMSolverMultistepScheduler
numactl --membind <node N> -C <cpu list> python inference_bf16.py --dpm
```

## Accelerating the inference for Stable Diffusion using INT8

Coming soon ...
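As a rough illustration of the Bfloat16 approach described above (not the actual contents of `inference_bf16.py`, which may be structured differently), a minimal sketch could look like the following. The model ID and the use of `ipex.optimize` are assumptions.

```python
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

# Minimal sketch, assuming a standard Stable Diffusion checkpoint.
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Optimize the UNet (the heaviest component) with Intel Extension for PyTorch.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)

# Run the denoising loop under CPU autocast in Bfloat16.
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```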
huggingface/diffusers/blob/main/examples/research_projects/intel_opts/README.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
-->

<p align="center">
    <br>
    <img class="!m-0 !border-0 !dark:border-0 !shadow-none !max-w-lg w-[800]" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/simulate/simulate_library.png"/>
    <br>
</p>

# 🤗 Simulate

🤗 Simulate is a library for easily creating and sharing simulation environments for intelligent agents (e.g. reinforcement learning) or synthetic data generation.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/creating_a_scene"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
      <p class="text-gray-700">Start here if you're a beginner. This section will help you gain the basic skills you need to start using the library.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./howto/rl"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve specific goals, such as training a reinforcement learning agent, or building custom plugins.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual/philosophy"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">More discussion and explanation of the underlying concepts and ideas behind 🤗 Simulate.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./api/scenes"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">API</div>
      <p class="text-gray-700">Technical descriptions of how 🤗 Simulate classes and methods work.</p>
    </a>
  </div>
</div>
huggingface/simulate/blob/main/docs/source/index.mdx
Dataset features [`Features`] defines the internal structure of a dataset. It is used to specify the underlying serialization format. What's more interesting to you though is that [`Features`] contains high-level information about everything from the column names and types, to the [`ClassLabel`]. You can think of [`Features`] as the backbone of a dataset. The [`Features`] format is simple: `dict[column_name, column_type]`. It is a dictionary of column name and column type pairs. The column type provides a wide range of options for describing the type of data you have. Let's have a look at the features of the MRPC dataset from the GLUE benchmark: ```py >>> from datasets import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') >>> dataset.features {'idx': Value(dtype='int32', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), } ``` The [`Value`] feature tells 🤗 Datasets: - The `idx` data type is `int32`. - The `sentence1` and `sentence2` data types are `string`. 🤗 Datasets supports many other data types such as `bool`, `float32` and `binary` to name just a few. <Tip> Refer to [`Value`] for a full list of supported data types. </Tip> The [`ClassLabel`] feature informs 🤗 Datasets the `label` column contains two classes. The classes are labeled `not_equivalent` and `equivalent`. Labels are stored as integers in the dataset. When you retrieve the labels, [`ClassLabel.int2str`] and [`ClassLabel.str2int`] carries out the conversion from integer value to label name, and vice versa. If your data type contains a list of objects, then you want to use the [`Sequence`] feature. Remember the SQuAD dataset? ```py >>> from datasets import load_dataset >>> dataset = load_dataset('squad', split='train') >>> dataset.features {'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None), 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)} ``` The `answers` field is constructed using the [`Sequence`] feature because it contains two subfields, `text` and `answer_start`, which are lists of `string` and `int32`, respectively. <Tip> See the [flatten](./process#flatten) section to learn how you can extract the nested subfields as their own independent columns. </Tip> The array feature type is useful for creating arrays of various sizes. You can create arrays with two dimensions using [`Array2D`], and even arrays with five dimensions using [`Array5D`]. ```py >>> features = Features({'a': Array2D(shape=(1, 3), dtype='int32')}) ``` The array type also allows the first dimension of the array to be dynamic. This is useful for handling sequences with variable lengths such as sentences, without having to pad or truncate the input to a uniform shape. ```py >>> features = Features({'a': Array3D(shape=(None, 5, 2), dtype='int32')}) ``` ## Audio feature Audio datasets have a column with type [`Audio`], which contains three important fields: * `array`: the decoded audio data represented as a 1-dimensional array. * `path`: the path to the downloaded audio file. * `sampling_rate`: the sampling rate of the audio data. 
When you load an audio dataset and call the audio column, the [`Audio`] feature automatically decodes and resamples the audio file: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train") >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` <Tip warning={true}> Index into an audio dataset using the row index first and then the `audio` column - `dataset[0]["audio"]` - to avoid decoding and resampling all the audio files in the dataset. Otherwise, this can be a slow and time-consuming process if you have a large dataset. </Tip> With `decode=False`, the [`Audio`] type simply gives you the path or the bytes of the audio file, without decoding it into an `array`, ```py >>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train").cast_column("audio", Audio(decode=False)) >>> dataset[0] {'audio': {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'}, 'english_transcription': 'I would like to set up a joint account with my partner', 'intent_class': 11, 'lang_id': 4, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'transcription': 'I would like to set up a joint account with my partner'} ``` ## Image feature Image datasets have a column with type [`Image`], which loads `PIL.Image` objects from images stored as bytes: When you load an image dataset and call the image column, the [`Image`] feature automatically decodes the image file: ```py >>> from datasets import load_dataset, Image >>> dataset = load_dataset("beans", split="train") >>> dataset[0]["image"] <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x125506CF8> ``` <Tip warning={true}> Index into an image dataset using the row index first and then the `image` column - `dataset[0]["image"]` - to avoid decoding all the image files in the dataset. Otherwise, this can be a slow and time-consuming process if you have a large dataset. </Tip> With `decode=False`, the [`Image`] type simply gives you the path or the bytes of the image file, without decoding it into an `PIL.Image`, ```py >>> dataset = load_dataset("beans", split="train").cast_column("image", Image(decode=False)) >>> dataset[0]["image"] {'bytes': None, 'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/772e7c1fba622cff102b85dd74bcce46e8168634df4eaade7bedd3b8d91d3cd7/train/healthy/healthy_train.265.jpg'} ``` Depending on the dataset, you may get the path to the local downloaded image, or the content of the image as bytes if the dataset is not made of individual files. You can also define a dataset of images from numpy arrays: ```python >>> ds = Dataset.from_dict({"i": [np.zeros(shape=(16, 16, 3), dtype=np.uint8)]}, features=Features({"i": Image()})) ``` And in this case the numpy arrays are encoded into PNG (or TIFF if the pixels values precision is important). For multi-channels arrays like RGB or RGBA, only uint8 is supported. If you use a larger precision, you get a warning and the array is downcasted to uint8. 
For grayscale images, you can use any integer or float precision you want, as long as it is compatible with `Pillow`. A warning is shown if the precision of your image data is too high, in which case the array is downcast: an int64 array is downcast to int32, and a float64 array is downcast to float32.
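To make the encoding behaviour above concrete, here is a small, hedged example of building an image dataset from a grayscale NumPy array (a sketch; the exact warnings you see depend on your 🤗 Datasets version):

```py
>>> import numpy as np
>>> from datasets import Dataset, Features, Image

>>> # A 16x16 grayscale image stored as uint8; higher precisions may be downcast with a warning, as described above.
>>> gray = np.zeros(shape=(16, 16), dtype=np.uint8)
>>> ds = Dataset.from_dict({"i": [gray]}, features=Features({"i": Image()}))
>>> ds[0]["i"]  # decoded back into a PIL.Image (grayscale mode "L")
```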
huggingface/datasets/blob/main/docs/source/about_dataset_features.mdx
🧨 Diffusers Experimental We are adding experimental code to support novel applications and usages of the Diffusers library. Currently, the following experiments are supported: * Reinforcement learning via an implementation of the [Diffuser](https://arxiv.org/abs/2205.09991) model.
huggingface/diffusers/blob/main/src/diffusers/experimental/README.md
How to write a good issue[[how-to-write-a-good-issue]]

<CourseFloatingBanner chapter={8}
  classNames="absolute z-10 right-0 top-0"
  notebooks={[
    {label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter8/section5.ipynb"},
    {label: "Aws Studio", value: "https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter8/section5.ipynb"},
]} />

When you encounter something that doesn't seem right with one of the Hugging Face libraries, you should definitely let us know so we can fix it (the same goes for any open source library, for that matter). If you are not completely certain whether the bug lies in your own code or one of our libraries, the first place to check is the [forums](https://discuss.huggingface.co/). The community will help you figure this out, and the Hugging Face team also closely watches the discussions there.

<Youtube id="_PAli-V4wj0"/>

When you are sure you have a bug on your hands, the first step is to build a minimal reproducible example.

## Creating a minimal reproducible example[[creating-a-minimal-reproducible-example]]

It's very important to isolate the piece of code that produces the bug, as no one on the Hugging Face team is a magician (yet), and they can't fix what they can't see. A minimal reproducible example should, as the name indicates, be reproducible. This means that it should not rely on any external files or data you may have. Try to replace the data you are using with some dummy values that look like your real ones and still produce the same error.

<Tip>

🚨 Many issues in the 🤗 Transformers repository are unsolved because the data used to reproduce them is not accessible.

</Tip>

Once you have something that is self-contained, you can try to reduce it to even fewer lines of code, building what we call a _minimal reproducible example_. While this requires a bit more work on your side, you will almost be guaranteed to get help and a fix if you provide a nice, short bug reproducer.

If you feel comfortable enough, go inspect the source code where your bug happens. You might find a solution to your problem (in which case you can even suggest a pull request to fix it), but more generally, this can help the maintainers better understand the source when they read your report.

## Filling out the issue template[[filling-out-the-issue-template]]

When you file your issue, you will notice there is a template to fill out. We will follow the one for [🤗 Transformers issues](https://github.com/huggingface/transformers/issues/new/choose) here, but the same kind of information will be required if you report an issue in another repository. Don't leave the template blank: taking the time to fill it in will maximize your chances of getting an answer and solving your problem.

In general, when filing an issue, always stay courteous. This is an open source project, so you are using free software, and no one has any obligation to help you. You may include what you feel is justified criticism in your issue, but then the maintainers may very well take it badly and not be in a rush to help you. Make sure you read the [code of conduct](https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md) of the project.

### Including your environment information[[including-your-environment-information]]

🤗 Transformers provides a utility to get all the information we need about your environment.
Just type the following in your terminal: ``` transformers-cli env ``` and you should get something like this: ```out Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.12.0.dev0 - Platform: Linux-5.10.61-1-MANJARO-x86_64-with-arch-Manjaro-Linux - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.13 - JaxLib version: 0.1.65 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` You can also add a `!` at the beginning of the `transformers-cli env` command to execute it from a notebook cell, and then copy and paste the result at the beginning of your issue. ### Tagging people[[tagging-people]] Tagging people by typing an `@` followed by their GitHub handle will send them a notification so they will see your issue and might reply quicker. Use this with moderation, because the people you tag might not appreciate being notified if it's something they have no direct link to. If you have looked at the source files related to your bug, you should tag the last person that made changes at the line you think is responsible for your problem (you can find this information by looking at said line on GitHub, selecting it, then clicking "View git blame"). Otherwise, the template offers suggestions of people to tag. In general, never tag more than three people! ### Including a reproducible example[[including-a-reproducible-example]] If you have managed to create a self-contained example that produces the bug, now is the time to include it! Type a line with three backticks followed by `python`, like this: ``` ```python ``` then paste in your minimal reproducible example and type a new line with three backticks. This will ensure your code is properly formatted. If you didn't manage to create a reproducible example, explain in clear steps how you got to your issue. Include a link to a Google Colab notebook where you got the error if you can. The more information you share, the better able the maintainers will be to reply to you. In all cases, you should copy and paste the whole error message you are getting. If you're working in Colab, remember that some of the frames may be automatically collapsed in the stack trace, so make sure you expand them before copying. Like with the code sample, put that error message between two lines with three backticks, so it's properly formatted. ### Describing the expected behavior[[describing-the-expected-behavior]] Explain in a few lines what you expected to get, so that the maintainers get a full grasp of the problem. This part is generally pretty obvious, so it should fit in one sentence, but in some cases you may have a lot to say. ## And then what?[[and-then-what]] Once your issue is filed, make sure to quickly check everything looks okay. You can edit the issue if you made a mistake, or even change its title if you realize the problem is different from what you initially thought. There is no point pinging people if you don't get an answer. If no one helps you in a few days, it's likely that no one could make sense of your problem. Don't hesitate to go back to the reproducible example. Can you make it shorter and more to the point? If you don't get an answer in a week, you can leave a message gently asking for help, especially if you've edited your issue to include more information on the problem.
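As a closing illustration of the "minimal reproducible example" advice above, a reproducer for a (purely hypothetical) tokenizer issue might look like the snippet below: short, self-contained, and using dummy data only. The model name and the reported symptom are illustrations, not a real bug.

```python
# Hypothetical minimal reproducible example: self-contained, dummy data only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Dummy sentences standing in for the real data that triggered the problem.
texts = ["a tiny dummy sentence", "another dummy sentence"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
print(batch["input_ids"].shape)  # describe here what you expected vs. what you got
```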
huggingface/course/blob/main/chapters/en/chapter8/5.mdx
Gradio Demo: unified_demo_text_generation ``` !pip install -q gradio torch transformers ``` ``` import gradio as gr from transformers import pipeline generator = pipeline('text-generation', model = 'gpt2') def generate_text(text_prompt): response = generator(text_prompt, max_length = 30, num_return_sequences=5) return response[0]['generated_text'] textbox = gr.Textbox() demo = gr.Interface(generate_text, textbox, textbox) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/unified_demo_text_generation/run.ipynb
Gradio Demo: calculator_blocks_cached ``` !pip install -q gradio ``` ``` import gradio as gr def calculator(num1, operation, num2): if operation == "add": return num1 + num2 elif operation == "subtract": return num1 - num2 elif operation == "multiply": return num1 * num2 elif operation == "divide": return num1 / num2 with gr.Blocks() as demo: with gr.Row(): with gr.Column(): num_1 = gr.Number() operation = gr.Radio(["add", "subtract", "multiply", "divide"]) num_2 = gr.Number() submit_btn = gr.Button(value="Calculate") with gr.Column(): result = gr.Number() submit_btn.click(calculator, inputs=[num_1, operation, num_2], outputs=[result]) examples = gr.Examples(examples=[[5, "add", 3], [4, "divide", 2], [-4, "multiply", 2.5], [0, "subtract", 1.2]], inputs=[num_1, operation, num_2], outputs=[result], fn=calculator, cache_examples=True) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/calculator_blocks_cached/run.ipynb
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # EfficientFormer ## Overview The EfficientFormer model was proposed in [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object detection and semantic segmentation. The abstract from the paper is the following: *Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which { runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1),} and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.* This model was contributed by [novice03](https://huggingface.co/novice03) and [Bearnardd](https://huggingface.co/Bearnardd). The original code can be found [here](https://github.com/snap-research/EfficientFormer). The TensorFlow version of this model was added by [D-Roberts](https://huggingface.co/D-Roberts). 
## Documentation resources - [Image classification task guide](../tasks/image_classification) ## EfficientFormerConfig [[autodoc]] EfficientFormerConfig ## EfficientFormerImageProcessor [[autodoc]] EfficientFormerImageProcessor - preprocess <frameworkcontent> <pt> ## EfficientFormerModel [[autodoc]] EfficientFormerModel - forward ## EfficientFormerForImageClassification [[autodoc]] EfficientFormerForImageClassification - forward ## EfficientFormerForImageClassificationWithTeacher [[autodoc]] EfficientFormerForImageClassificationWithTeacher - forward </pt> <tf> ## TFEfficientFormerModel [[autodoc]] TFEfficientFormerModel - call ## TFEfficientFormerForImageClassification [[autodoc]] TFEfficientFormerForImageClassification - call ## TFEfficientFormerForImageClassificationWithTeacher [[autodoc]] TFEfficientFormerForImageClassificationWithTeacher - call </tf> </frameworkcontent>
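As a quick, hedged usage sketch of the classes documented above (the checkpoint name below is an assumption; check the Hub for the EfficientFormer checkpoints that are actually published):

```python
import torch
import requests
from PIL import Image
from transformers import EfficientFormerImageProcessor, EfficientFormerForImageClassification

# Hedged sketch; "snap-research/efficientformer-l1-300" is assumed to be a valid checkpoint.
checkpoint = "snap-research/efficientformer-l1-300"
processor = EfficientFormerImageProcessor.from_pretrained(checkpoint)
model = EfficientFormerForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```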
huggingface/transformers/blob/main/docs/source/en/model_doc/efficientformer.md
Spaces Overview Hugging Face Spaces make it easy for you to create and deploy ML-powered demos in minutes. Watch the following video for a quick introduction to Spaces: <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/3bSVKNKb_PY" title="Spaces intro" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> In the following sections, you'll learn the basics of creating a Space, configuring it, and deploying your code to it. ## Creating a new Space **To make a new Space**, visit the [Spaces main page](https://huggingface.co/spaces) and click on **Create new Space**. Along with choosing a name for your Space, selecting an optional license, and setting your Space's visibility, you'll be prompted to choose the **SDK** for your Space. The Hub offers four SDK options: Gradio, Streamlit, Docker and static HTML. If you select "Gradio" as your SDK, you'll be navigated to a new repo showing the following page: <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-blank-space.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-blank-space-dark.png"/> </div> Under the hood, Spaces stores your code inside a git repository, just like the model and dataset repositories. Thanks to this, the same tools we use for all the [other repositories on the Hub](./repositories) (`git` and `git-lfs`) also work for Spaces. Follow the same flow as in [Getting Started with Repositories](./repositories-getting-started) to add files to your Space. Each time a new commit is pushed, the Space will automatically rebuild and restart. For step-by-step tutorials to creating your first Space, see the guides below: * [Creating a Gradio Space](./spaces-sdks-gradio) * [Creating a Streamlit Space](./spaces-sdks-streamlit) * [Creating a Docker Space](./spaces-sdks-docker-first-demo) ## Hardware resources Each Spaces environment is limited to 16GB RAM, 2 CPU cores and 50GB of (not persistent) disk space by default, which you can use free of charge. You can upgrade to better hardware, including a variety of GPU accelerators and persistent storage, for a [competitive price](https://huggingface.co/pricing#spaces). To request an upgrade, please click the _Settings_ button in your Space and select your preferred hardware environment. | **Hardware** | **GPU Memory** | **CPU** | **Memory** | **Disk** | **Hourly Price** | |--------------------- |---------------- |---------- |------------ |---------- | ---------------- | | CPU Basic | - | 2 vCPU | 16 GB | 50 GB | Free! | | CPU Upgrade | - | 8 vCPU | 32 GB | 50 GB | $0.03 | | Nvidia T4 - small | 16GB | 4 vCPU | 15 GB | 50 GB | $0.60 | | Nvidia T4 - medium | 16GB | 8 vCPU | 30 GB | 100 GB | $0.90 | | Nvidia A10G - small | 24GB | 4 vCPU | 15 GB | 110 GB | $1.05 | | Nvidia A10G - large | 24GB | 12 vCPU | 46 GB | 200 GB | $3.15 | | 2x Nvidia A10G - large| 48GB | 24 vCPU | 92 GB | 1000 GB | $5.70 | | 4x Nvidia A10G - large| 96GB | 48 vCPU | 184 GB | 2000 GB | $10.80 | | Nvidia A100 - large | 40GB | 12 vCPU | 142 GB | 1000 GB | $4.13 | | **Storage tier** | **Size** | **Persistent** | **Monthly price** | |--------------------- |---------------------- |------------------ | --------------------- | | Ephemeral (default) | 50GB | No | Free! 
| | Small | Ephemeral + 20GB | Yes | $5 | | Medium | Ephemeral + 150GB | Yes | $25 | | Large | Ephemeral + 1TB | yes | $100 | Note: Find more detailed and comprehensive pricing information on [our pricing page](https://huggingface.co/pricing). Do you have an awesome Space but need help covering the hardware upgrade costs? We love helping out those with an innovative Space so please feel free to apply for a community GPU grant using the link in the _Settings_ tab of your Space and see if yours makes the cut! Read more in our dedicated sections on [Spaces GPU Upgrades](./spaces-gpus) and [Spaces Storage Upgrades](./spaces-storage). <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-gpu-settings.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-gpu-settings-dark.png"/> </div> ## Managing secrets and environment variables[[managing-secrets]] <a id="managing-secrets"></a> If your app requires environment variables (for instance, secret keys or tokens), do not hard-code them inside your app! Instead, go to the **Settings** page of your Space repository and add a new variable or secret. Use variables if you need to store non-sensitive configuration values and secrets for storing access tokens, API keys, or any sensitive value or credentials. <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/secrets-and-variables.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/secrets-and-variables-dark.png"/> </div> Variables are publicly accessible and viewable and will be automatically added to Spaces duplicated from your repository. They are exposed to your app as environment variables. For Docker Spaces, check out [environment management with Docker](./spaces-sdks-docker#secrets-and-variables-management). Secrets are private and their value cannot be retrieved once set. They won't be added to Spaces duplicated from your repository. The secrets will be exposed to your app with [Streamlit Secrets Management](https://blog.streamlit.io/secrets-in-sharing-apps/) if you use Streamlit, and as environment variables in other cases. For Docker Spaces, please check out [environment management with Docker](./spaces-sdks-docker#secrets-and-variables-management). Users are warned when our `Spaces Secrets Scanner` [finds hard-coded secrets](./security-secrets). ## Duplicating a Space Duplicating a Space can be useful if you want to build a new demo using another demo as an initial template. Duplicated Spaces can also be useful if you want to have an individual Upgraded Space for your use with fast inference. If you want to duplicate a Space, you can click the three dots at the top right of the space and click **Duplicate this Space**. Once you do this, you will be able to change the following attributes: * Owner: The duplicated Space can be under your account or any organization in which you have write access * Space name * Visibility: The Space is private by default. Read more about private repositories [here](./repositories-settings#private-repositories). * Hardware: You can choose the hardware on which the Space will be running. Read more about hardware upgrades [here](./spaces-gpus). 
* Storage: If the original repo uses persistent storage, you will be prompted to choose a storage tier. Read more about persistent storage [here](./spaces-storage). * Secrets and variables: If the original repo has set some secrets and variables, you'll be able to set them while duplicating the repo. Some Spaces might have environment variables that you may need to set up. In these cases, the duplicate workflow will auto-populate the public Variables from the source Space, and give you a warning about setting up the Secrets. The duplicated Space will use a free CPU hardware by default, but you can later upgrade if needed. ## Networking If your Space needs to make any network requests, you can make requests through the standard HTTP and HTTPS ports (80 and 443) along with port 8080. Any requests going to other ports will be blocked. ## Lifecycle management On free hardware, your Space will "go to sleep" and stop executing after a period of time if unused. If you wish for your Space to run indefinitely, consider [upgrading to a paid hardware](./spaces-gpus). You can also manually pause your Space from the **Settings** tab. A paused Space stops executing until manually restarted by its owner. Paused time is not billed. ## Helper environment variables In some cases, you might be interested in having programmatic access to the Space author or repository name. This feature is particularly useful when you expect users to duplicate your Space. To help with this, Spaces exposes different environment variables at runtime. Given a Space [`osanseviero/i-like-flan`](https://huggingface.co/spaces/osanseviero/i-like-flan): * `CPU_CORES`: 4 * `MEMORY`: 15Gi * `SPACE_AUTHOR_NAME`: osanseviero * `SPACE_REPO_NAME`: i-like-flan * `SPACE_TITLE`: I Like Flan (specified in the README file) * `SPACE_ID`: `osanseviero/i-like-flan` * `SPACE_HOST`: `osanseviero-i-like-flan.hf.space` In case [OAuth](./spaces-oauth) is enabled for your Space, the following variables will also be available: * `OAUTH_CLIENT_ID`: the client ID of your OAuth app (public) * `OAUTH_CLIENT_SECRET`: the client secret of your OAuth app * `OAUTH_SCOPES`: scopes accessible by your OAuth app. Currently, this is always `"openid profile"`. * `OPENID_PROVIDER_URL`: The URL of the OpenID provider. The OpenID metadata will be available at [`{OPENID_PROVIDER_URL}/.well-known/openid-configuration`](https://huggingface.co/.well-known/openid-configuration). ## Clone the Repository You can easily clone your Space repo locally. Start by clicking on the dropdown menu in the top right of your Space page: <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/SpacesCloneRepo2.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/SpacesCloneRepo1.png"/> </div> Select "Clone repository", and then you'll be able to follow the instructions to clone the Space repo to your local machine using HTTPS or SSH. 
<div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/HttpsClone2.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/HttpsClone1.png"/> </div> <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/SSHClone2.png"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/SSHClone1.png"/> </div> ## Linking Models and Datasets on the Hub You can showcase all the models and datasets that your Space links to by adding their identifier in your Space's README metadata. To do so, you can define them under the `models` and `datasets` keys. In addition to listing the artefacts in the README file, you can also record them in any `.py`, `.ini` or `.html` file as well. We'll parse it auto-magically! Here's an example linking two models from a space: ``` title: My lovely space emoji: 🤗 colorFrom: blue colorTo: green sdk: docker pinned: false models: - reach-vb/musicgen-large-fp16-endpoint - reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft ```
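To illustrate the helper environment variables described earlier on this page, your Space's app code can read them at runtime. Here is a small sketch using only the Python standard library:

```python
import os

# Read the Space metadata injected at runtime; values are None when running locally.
space_id = os.environ.get("SPACE_ID")        # e.g. "osanseviero/i-like-flan"
space_host = os.environ.get("SPACE_HOST")    # e.g. "osanseviero-i-like-flan.hf.space"
author = os.environ.get("SPACE_AUTHOR_NAME")

if space_id is None:
    print("Not running inside a Space (or the variables are not set).")
else:
    print(f"Running {space_id} by {author} at https://{space_host}")
```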
huggingface/hub-docs/blob/main/docs/hub/spaces-overview.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Visual Question Answering [[open-in-colab]] Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. Some noteworthy use case examples for VQA include: * Accessibility applications for visually impaired individuals. * Education: posing questions about visual materials presented in lectures or textbooks. VQA can also be utilized in interactive museum exhibits or historical sites. * Customer service and e-commerce: VQA can enhance user experience by letting users ask questions about products. * Image retrieval: VQA models can be used to retrieve images with specific characteristics. For example, the user can ask "Is there a dog?" to find all images with dogs from a set of images. In this guide you'll learn how to: - Fine-tune a classification VQA model, specifically [ViLT](../model_doc/vilt), on the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa). - Use your fine-tuned ViLT for inference. - Run zero-shot VQA inference with a generative model, like BLIP-2. ## Fine-tuning ViLT ViLT model incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). This model can be used for several downstream tasks. For the VQA task, a classifier head is placed on top (a linear layer on top of the final hidden state of the `[CLS]` token) and randomly initialized. Visual Question Answering is thus treated as a **classification problem**. More recent models, such as BLIP, BLIP-2, and InstructBLIP, treat VQA as a generative task. Later in this guide we illustrate how to use them for zero-shot VQA inference. Before you begin, make sure you have all the necessary libraries installed. ```bash pip install -q transformers datasets ``` We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` Let's define the model checkpoint as a global variable. ```py >>> model_checkpoint = "dandelin/vilt-b32-mlm" ``` ## Load the data For illustration purposes, in this guide we use a very small sample of the annotated visual question answering `Graphcore/vqa` dataset. You can find the full dataset on [🤗 Hub](https://huggingface.co/datasets/Graphcore/vqa). As an alternative to the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa), you can download the same data manually from the official [VQA dataset page](https://visualqa.org/download.html). 
If you prefer to follow the tutorial with your custom data, check out how to [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset#loading-script) guide in the 🤗 Datasets documentation. Let's load the first 200 examples from the validation split and explore the dataset's features: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]") >>> dataset Dataset({ features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'], num_rows: 200 }) ``` Let's take a look at an example to understand the dataset's features: ```py >>> dataset[0] {'question': 'Where is he looking?', 'question_type': 'none of the above', 'question_id': 262148000, 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg', 'answer_type': 'other', 'label': {'ids': ['at table', 'down', 'skateboard', 'table'], 'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}} ``` The features relevant to the task include: * `question`: the question to be answered from the image * `image_id`: the path to the image the question refers to * `label`: the annotations We can remove the rest of the features as they won't be necessary: ```py >>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type']) ``` As you can see, the `label` feature contains several answers to the same question (called `ids` here) collected by different human annotators. This is because the answer to a question can be subjective. In this case, the question is "where is he looking?". Some people annotated this with "down", others with "at table", another one with "skateboard", etc. Take a look at the image and consider which answer would you give: ```python >>> from PIL import Image >>> image = Image.open(dataset[0]['image_id']) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/> </div> Due to the questions' and answers' ambiguity, datasets like this are treated as a multi-label classification problem (as multiple answers are possibly valid). Moreover, rather than just creating a one-hot encoded vector, one creates a soft encoding, based on the number of times a certain answer appeared in the annotations. For instance, in the example above, because the answer "down" is selected way more often than other answers, it has a score (called `weight` in the dataset) of 1.0, and the rest of the answers have scores < 1.0. To later instantiate the model with an appropriate classification head, let's create two dictionaries: one that maps the label name to an integer and vice versa: ```py >>> import itertools >>> labels = [item['ids'] for item in dataset['label']] >>> flattened_labels = list(itertools.chain(*labels)) >>> unique_labels = list(set(flattened_labels)) >>> label2id = {label: idx for idx, label in enumerate(unique_labels)} >>> id2label = {idx: label for label, idx in label2id.items()} ``` Now that we have the mappings, we can replace the string answers with their ids, and flatten the dataset for a more convenient further preprocessing. ```python >>> def replace_ids(inputs): ... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]] ... 
return inputs >>> dataset = dataset.map(replace_ids) >>> flat_dataset = dataset.flatten() >>> flat_dataset.features {'question': Value(dtype='string', id=None), 'image_id': Value(dtype='string', id=None), 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)} ``` ## Preprocessing data The next step is to load a ViLT processor to prepare the image and text data for the model. [`ViltProcessor`] wraps a BERT tokenizer and ViLT image processor into a convenient single processor: ```py >>> from transformers import ViltProcessor >>> processor = ViltProcessor.from_pretrained(model_checkpoint) ``` To preprocess the data we need to encode the images and questions using the [`ViltProcessor`]. The processor will use the [`BertTokenizerFast`] to tokenize the text and create `input_ids`, `attention_mask` and `token_type_ids` for the text data. As for images, the processor will leverage [`ViltImageProcessor`] to resize and normalize the image, and create `pixel_values` and `pixel_mask`. All these preprocessing steps are done under the hood, we only need to call the `processor`. However, we still need to prepare the target labels. In this representation, each element corresponds to a possible answer (label). For correct answers, the element holds their respective score (weight), while the remaining elements are set to zero. The following function applies the `processor` to the images and questions and formats the labels as described above: ```py >>> import torch >>> def preprocess_data(examples): ... image_paths = examples['image_id'] ... images = [Image.open(image_path) for image_path in image_paths] ... texts = examples['question'] ... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt") ... for k, v in encoding.items(): ... encoding[k] = v.squeeze() ... targets = [] ... for labels, scores in zip(examples['label.ids'], examples['label.weights']): ... target = torch.zeros(len(id2label)) ... for label, score in zip(labels, scores): ... target[label] = score ... targets.append(target) ... encoding["labels"] = targets ... return encoding ``` To apply the preprocessing function over the entire dataset, use 🤗 Datasets [`~datasets.map`] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don't need. ```py >>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights']) >>> processed_dataset Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'], num_rows: 200 }) ``` As a final step, create a batch of examples using [`DefaultDataCollator`]: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` ## Train the model You’re ready to start training your model now! Load ViLT with [`ViltForQuestionAnswering`]. Specify the number of labels along with the label mappings: ```py >>> from transformers import ViltForQuestionAnswering >>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id) ``` At this point, only three steps remain: 1. 
Define your training hyperparameters in [`TrainingArguments`]: ```py >>> from transformers import TrainingArguments >>> repo_id = "MariaK/vilt_finetuned_200" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` 2. Pass the training arguments to [`Trainer`] along with the model, dataset, processor, and data collator. ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=processed_dataset, ... tokenizer=processor, ... ) ``` 3. Call [`~Trainer.train`] to finetune your model. ```py >>> trainer.train() ``` Once training is completed, share your model to the Hub with the [`~Trainer.push_to_hub`] method to share your final model on the 🤗 Hub: ```py >>> trainer.push_to_hub() ``` ## Inference Now that you have fine-tuned a ViLT model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your fine-tuned model for inference is to use it in a [`Pipeline`]. ```py >>> from transformers import pipeline >>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200") ``` The model in this guide has only been trained on 200 examples, so don't expect a lot from it. Let's see if it at least learned something from the data and take the first example from the dataset to illustrate inference: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> print(question) >>> pipe(image, question, top_k=1) "Where is he looking?" [{'score': 0.5498199462890625, 'answer': 'down'}] ``` Even though not very confident, the model indeed has learned something. With more examples and longer training, you'll get far better results! You can also manually replicate the results of the pipeline if you'd like: 1. Take an image and a question, prepare them for the model using the processor from your model. 2. Forward the result or preprocessing through the model. 3. From the logits, get the most likely answer's id, and find the actual answer in the `id2label`. ```py >>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200") >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> # prepare inputs >>> inputs = processor(image, question, return_tensors="pt") >>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200") >>> # forward pass >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits >>> idx = logits.argmax(-1).item() >>> print("Predicted answer:", model.config.id2label[idx]) Predicted answer: down ``` ## Zero-shot VQA The previous model treated VQA as a classification task. Some recent models, such as BLIP, BLIP-2, and InstructBLIP approach VQA as a generative task. Let's take [BLIP-2](../model_doc/blip-2) as an example. It introduced a new visual-language pre-training paradigm in which any combination of pre-trained vision encoder and LLM can be used (learn more in the [BLIP-2 blog post](https://huggingface.co/blog/blip-2)). This enables achieving state-of-the-art results on multiple visual-language tasks including visual question answering. Let's illustrate how you can use this model for VQA. First, let's load the model. 
Here we'll explicitly send the model to a GPU, if available, which we didn't need to do earlier when training, as [`Trainer`] handles this automatically: ```py >>> from transformers import AutoProcessor, Blip2ForConditionalGeneration >>> import torch >>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) ``` The model takes image and text as input, so let's use the exact same image/question pair from the first example in the VQA dataset: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] ``` To use BLIP-2 for visual question answering task, the textual prompt has to follow a specific format: `Question: {} Answer:`. ```py >>> prompt = f"Question: {question} Answer:" ``` Now we need to preprocess the image/prompt with the model's processor, pass the processed input through the model, and decode the output: ```py >>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16) >>> generated_ids = model.generate(**inputs, max_new_tokens=10) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) "He is looking at the crowd" ``` As you can see, the model recognized the crowd, and the direction of the face (looking down), however, it seems to miss the fact the crowd is behind the skater. Still, in cases where acquiring human-annotated datasets is not feasible, this approach can quickly produce useful results.
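The same generative recipe can be applied to the other models mentioned above. For instance, here is a minimal sketch for InstructBLIP, reusing the `image`, `question`, and `device` variables from the BLIP-2 example above; the checkpoint name and generation settings are assumptions, so check the model card for the recommended usage. Since InstructBLIP is instruction-tuned, the question can usually be passed directly as the prompt without the `Question: {} Answer:` template:

```py
>>> from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

>>> # Assumed checkpoint; other InstructBLIP checkpoints follow the same pattern
>>> processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
>>> model = InstructBlipForConditionalGeneration.from_pretrained(
...     "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16
... )
>>> model.to(device)

>>> # Preprocess the image/question pair, generate, and decode the answer
>>> inputs = processor(images=image, text=question, return_tensors="pt").to(device, torch.float16)
>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```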
huggingface/transformers/blob/main/docs/source/en/tasks/visual_question_answering.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # DreamBooth [DreamBooth](https://huggingface.co/papers/2208.12242) is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing` and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers). JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the [train_dreambooth.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Navigate to the example folder with the training script and install the required dependencies for the script you're using: <hfoptions id="installation"> <hfoption id="PyTorch"> ```bash cd examples/dreambooth pip install -r requirements.txt ``` </hfoption> <hfoption id="Flax"> ```bash cd examples/dreambooth pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an 🤗 Accelerate environment: ```bash accelerate config ``` To setup a default 🤗 Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```bash from accelerate.utils import write_basic_config write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) and let us know if you have any questions or concerns. </Tip> ## Script parameters <Tip warning={true}> DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. 
Read the [Training Stable Diffusion with Dreambooth using 🧨 Diffusers](https://huggingface.co/blog/dreambooth) blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. </Tip> The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L228) function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you'd like. For example, to train in the bf16 format: ```bash accelerate launch train_dreambooth.py \ --mixed_precision="bf16" ``` Some basic and important parameters to know and specify are: - `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model - `--instance_data_dir`: path to a folder containing the training dataset (example images) - `--instance_prompt`: the text prompt that contains the special word for the example images - `--train_text_encoder`: whether to also train the text encoder - `--output_dir`: where to save the trained model - `--push_to_hub`: whether to push the trained model to the Hub - `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding `--resume_from_checkpoint` to your training command ### Min-SNR weighting The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the `--snr_gamma` parameter and set it to the recommended value of 5.0: ```bash accelerate launch train_dreambooth.py \ --snr_gamma=5.0 ``` ### Prior preservation loss Prior preservation loss is a method that uses a model's own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. - `--with_prior_preservation`: whether to use prior preservation loss - `--prior_loss_weight`: controls the influence of the prior preservation loss on the model - `--class_data_dir`: path to a folder containing the generated class sample images - `--class_prompt`: the text prompt describing the class of the generated sample images ```bash accelerate launch train_dreambooth.py \ --with_prior_preservation \ --prior_loss_weight=1.0 \ --class_data_dir="path/to/class/images" \ --class_prompt="text prompt describing class" ``` ### Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you'll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. 
Enable this option by: ```bash accelerate launch train_dreambooth.py \ --train_text_encoder ``` ## Training script DreamBooth comes with its own dataset classes: - [`DreamBoothDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L604): preprocesses the images and class images, and tokenizes the prompts for training - [`PromptDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L738): generates the prompt embeddings to generate the class images If you enabled [prior preservation loss](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L842), the class images are generated here: ```py sample_dataset = PromptDataset(args.class_prompt, num_new_images) sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) sample_dataloader = accelerator.prepare(sample_dataloader) pipeline.to(accelerator.device) for example in tqdm( sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process ): images = pipeline(example["prompt"]).images ``` Next is the [`main()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L799) function which handles setting up the dataset for training and the training loop itself. The script loads the [tokenizer](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L898), [scheduler and models](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L912C1-L912C1): ```py # Load the tokenizer if args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) elif args.pretrained_model_name_or_path: tokenizer = AutoTokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False, ) # Load scheduler and models noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") text_encoder = text_encoder_cls.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision ) if model_has_vae(args): vae = AutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision ) else: vae = None unet = UNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision ) ``` Then, it's time to [create the training dataset](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1073) and DataLoader from `DreamBoothDataset`: ```py train_dataset = DreamBoothDataset( instance_data_root=args.instance_data_dir, instance_prompt=args.instance_prompt, class_data_root=args.class_data_dir if args.with_prior_preservation else None, class_prompt=args.class_prompt, class_num=args.num_class_images, tokenizer=tokenizer, size=args.resolution, center_crop=args.center_crop, encoder_hidden_states=pre_computed_encoder_hidden_states, class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, tokenizer_max_length=args.tokenizer_max_length, ) train_dataloader = torch.utils.data.DataLoader( train_dataset, 
batch_size=args.train_batch_size, shuffle=True, collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), num_workers=args.dataloader_num_workers, ) ``` Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1151) takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. ## Launch the script You're now ready to launch the training script! 🚀 For this guide, you'll download some images of a [dog](https://huggingface.co/datasets/diffusers/dog-example) and store them in a directory. But remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide). ```py from huggingface_hub import snapshot_download local_dir = "./dog" snapshot_download( "diffusers/dog-example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes", ) ``` Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, `INSTANCE_DIR` to the path where you just downloaded the dog images to, and `OUTPUT_DIR` to where you want to save the model. You'll use `sks` as the special word to tie the training to. If you're interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: ```bash --validation_prompt="a photo of a sks dog" --num_validation_images=4 --validation_steps=100 ``` One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. <hfoptions id="gpu-select"> <hfoption id="16GB"> On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: ```py pip install bitsandbytes ``` Then, add the following parameter to your training command: ```bash accelerate launch train_dreambooth.py \ --gradient_checkpointing \ --use_8bit_adam \ ``` </hfoption> <hfoption id="12GB"> On a 12GB GPU, you'll need bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and set the gradients to `None` instead of zero to reduce your memory-usage. ```bash accelerate launch train_dreambooth.py \ --use_8bit_adam \ --gradient_checkpointing \ --enable_xformers_memory_efficient_attention \ --set_grads_to_none \ ``` </hfoption> <hfoption id="8GB"> On a 8GB GPU, you'll need [DeepSpeed](https://www.deepspeed.ai/) to offload some of the tensors from the vRAM to either the CPU or NVME to allow training with less GPU memory. Run the following command to configure your 🤗 Accelerate environment: ```bash accelerate config ``` During configuration, confirm that you want to use DeepSpeed. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the [DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options. 
You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam [`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu) for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch. bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. That's it! You don't need to add any additional parameters to your training command. </hfoption> </hfoptions> <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```bash export MODEL_NAME="runwayml/stable-diffusion-v1-5" export INSTANCE_DIR="./dog" export OUTPUT_DIR="path_to_saved_model" accelerate launch train_dreambooth.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --output_dir=$OUTPUT_DIR \ --instance_prompt="a photo of sks dog" \ --resolution=512 \ --train_batch_size=1 \ --gradient_accumulation_steps=1 \ --learning_rate=5e-6 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --max_train_steps=400 \ --push_to_hub ``` </hfoption> <hfoption id="Flax"> ```bash export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" export INSTANCE_DIR="./dog" export OUTPUT_DIR="path-to-save-model" python train_dreambooth_flax.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --output_dir=$OUTPUT_DIR \ --instance_prompt="a photo of sks dog" \ --resolution=512 \ --train_batch_size=1 \ --learning_rate=5e-6 \ --max_train_steps=400 \ --push_to_hub ``` </hfoption> </hfoptions> Once training is complete, you can use your newly trained model for inference! <Tip> Can't wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. 
```py from diffusers import DiffusionPipeline, UNet2DConditionModel from transformers import CLIPTextModel import torch unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") # if you have trained with `--args.train_text_encoder` make sure to also load the text encoder text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/checkpoint-100/text_encoder") pipeline = DiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, dtype=torch.float16, ).to("cuda") image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] image.save("dog-bucket.png") ``` </Tip> <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```py from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] image.save("dog-bucket.png") ``` </hfoption> <hfoption id="Flax"> ```py import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path-to-your-trained-model", dtype=jax.numpy.bfloat16) prompt = "A photo of sks dog in a bucket" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, jax.device_count()) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) image.save("dog-bucket.png") ``` </hfoption> </hfoptions> ## LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the [train_dreambooth_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py) script to train with LoRA. The LoRA training script is discussed in more detail in the [LoRA training](lora) guide. ## Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [train_dreambooth_lora_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py) script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide. ## Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: - Learn how to [load a DreamBooth](../using-diffusers/loading_adapters) model for inference if you trained your model with LoRA.
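As a quick preview of that guide, LoRA weights produced by `train_dreambooth_lora.py` can typically be loaded on top of the base pipeline before generating. A minimal sketch, assuming a Stable Diffusion v1-5 base model and using a placeholder path for your LoRA output directory:

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# "path/to/lora/output_dir" is a placeholder for the --output_dir of your LoRA training run
pipeline.load_lora_weights("path/to/lora/output_dir")

image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```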
huggingface/diffusers/blob/main/docs/source/en/training/dreambooth.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Adding support for an unsupported architecture

If you wish to export a model whose architecture is not already supported by the library, the PR [#813 Adds support for ResNet](https://github.com/huggingface/optimum/pull/813) can be used as a reference.

You can make sure tests pass for the new `my_new_modeltype` model type by running:

```bash
pytest tests/exporters/tflite/test_*.py -k "my_new_modeltype" -s --exitfirst
```
huggingface/optimum/blob/main/docs/source/exporters/tflite/usage_guides/contribute.mdx
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Normalization layers

Customized normalization layers for supporting various models in 🤗 Diffusers.

## AdaLayerNorm

[[autodoc]] models.normalization.AdaLayerNorm

## AdaLayerNormZero

[[autodoc]] models.normalization.AdaLayerNormZero

## AdaLayerNormSingle

[[autodoc]] models.normalization.AdaLayerNormSingle

## AdaGroupNorm

[[autodoc]] models.normalization.AdaGroupNorm
huggingface/diffusers/blob/main/docs/source/en/api/normalization.md
<!---
Copyright 2021 The Google Flax Team Authors and HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text classification examples

## GLUE tasks

Based on the script [`run_flax_glue.py`](https://github.com/huggingface/transformers/blob/main/examples/flax/text-classification/run_flax_glue.py).

Fine-tuning the library models for sequence classification on the GLUE benchmark: [General Language Understanding Evaluation](https://gluebenchmark.com/). This script can fine-tune any of the models on the [hub](https://huggingface.co/models) and can also be used for a dataset hosted on our [hub](https://huggingface.co/datasets) or your own data in a csv or a JSON file (the script might need some tweaks in that case, refer to the comments inside for help).

GLUE is made up of a total of 9 different tasks. Here is how to run the script on one of them:

```bash
export TASK_NAME=mrpc

python run_flax_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name ${TASK_NAME} \
  --max_seq_length 128 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --per_device_train_batch_size 4 \
  --eval_steps 100 \
  --output_dir ./$TASK_NAME/ \
  --push_to_hub
```

where the task name can be one of cola, mnli, mnli_mismatched, mnli_matched, mrpc, qnli, qqp, rte, sst2, stsb, wnli.

Using the command above, the script will train for 3 epochs and run eval after each epoch. Metrics and hyperparameters are stored in TensorFlow event files in `--output_dir`. You can see the results by running `tensorboard` in that directory:

```bash
$ tensorboard --logdir .
```

or directly on the hub under *Training metrics*.

### Accuracy Evaluation

We train five replicas and report mean accuracy and stdev on the dev set below. We use the settings as in the command above (with an exception for MRPC and WNLI, which are tiny and where we used 5 epochs instead of 3), and we use a total train batch size of 32 (we train on 8 Cloud v3 TPUs, so a per-device batch size of 4). On the tasks other than MRPC and WNLI we train for 3 epochs because this is the standard, but looking at the training curves of some of them (e.g., SST-2, STS-B), it appears the models are undertrained and we could get better results by training longer.

In the Tensorboard results linked below, the random seed of each model is equal to the ID of the run. So in order to reproduce run 1, run the command above with `--seed=1`. The best run used random seed 3, which is the default in the script. The results of all runs are in [this Google Sheet](https://docs.google.com/spreadsheets/d/1p3XzReMO75m_XdEJvPue-PIq_PN-96J2IJpJW1yS-10/edit?usp=sharing).
| Task | Metric | Acc (best run) | Acc (avg/5runs) | Stdev | Metrics | |-------|------------------------------|----------------|-----------------|-----------|--------------------------------------------------------------------------| | CoLA | Matthews corr | 60.57 | 59.04 | 1.06 | [tfhub.dev](https://tensorboard.dev/experiment/lfr2adVpRtmLDALKrElkzg/) | | SST-2 | Accuracy | 92.66 | 92.23 | 0.57 | [tfhub.dev](https://tensorboard.dev/experiment/jYvfv2trRHKMjoWnXVwrZA/) | | MRPC | F1/Accuracy | 89.90/85.78 | 88.97/84.36 | 0.72/1.09 | [tfhub.dev](https://tensorboard.dev/experiment/bo3W3DEoRw2Q7YXjWrJkfg/) | | STS-B | Pearson/Spearman corr. | 89.04/88.70 | 88.94/88.63 | 0.07/0.07 | [tfhub.dev](https://tensorboard.dev/experiment/fxVwbLD7QpKhbot0r9rn2w/) | | QQP | Accuracy/F1 | 90.81/87.58 | 90.76/87.51 | 0.05/0.06 | [tfhub.dev](https://tensorboard.dev/experiment/di089Rc9TZmsnKRMrYNLsA/) | | MNLI | Matched acc. | 84.10 | 83.80 | 0.16 | [tfhub.dev](https://tensorboard.dev/experiment/JgNCGHDJSRaW6HBx6YQFYQ/) | | QNLI | Accuracy | 91.01 | 90.82 | 0.17 | [tfhub.dev](https://tensorboard.dev/experiment/Bq7cMGJnQMSggYgL8qNGeQ/) | | RTE | Accuracy | 66.06 | 64.76 | 1.04 | [tfhub.dev](https://tensorboard.dev/experiment/66Eq24bhRjqN6CEhgDSGqQ/) | | WNLI | Accuracy | 46.48 | 37.01 | 6.83 | [tfhub.dev](https://tensorboard.dev/experiment/TAqcnddqTkWvVEeGaWwIdQ/) | Some of these results are significantly different from the ones reported on the test set of GLUE benchmark on the website. For QQP and WNLI, please refer to [FAQ #12](https://gluebenchmark.com/faq) on the website. ### Runtime evaluation We also ran each task once on a single V100 GPU, 8 V100 GPUs, and 8 Cloud v3 TPUs and report the overall training time below. For comparison we ran Pytorch's [run_glue.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) on a single GPU (last column). | Task | TPU v3-8 | 8 GPU | [1 GPU](https://tensorboard.dev/experiment/mkPS4Zh8TnGe1HB6Yzwj4Q) | 1 GPU (Pytorch) | |-------|-----------|------------|------------|-----------------| | CoLA | 1m 42s | 1m 26s | 3m 9s | 4m 6s | | SST-2 | 5m 12s | 6m 28s | 22m 33s | 34m 37s | | MRPC | 1m 29s | 1m 14s | 2m 20s | 2m 56s | | STS-B | 1m 30s | 1m 12s | 2m 16s | 2m 48s | | QQP | 22m 50s | 31m 48s | 1h 59m 41s | 2h 54m | | MNLI | 25m 03s | 33m 55s | 2h 9m 37s | 3h 7m 6s | | QNLI | 7m30s | 9m 40s | 34m 40s | 49m 8s | | RTE | 1m 20s | 55s | 1m 10s | 1m 16s | | WNLI | 1m 11s | 48s | 39s | 36s | |-------| | **TOTAL** | 1h 03m | 1h 28m | 5h 16m | 6h 37m | *All experiments are ran on Google Cloud Platform. GPU experiments are ran without further optimizations besides JAX transformations. GPU experiments are ran with full precision (fp32). "TPU v3-8" are 8 TPU cores on 4 chips (each chips has 2 cores), while "8 GPU" are 8 GPU chips.
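After training, the checkpoint saved in `--output_dir` (or pushed to the Hub with `--push_to_hub`) can be loaded back for inference. A rough sketch is shown below, assuming the MRPC run above saved both the model and tokenizer to `./mrpc/`; the path and example sentences are placeholders:

```python
from transformers import AutoTokenizer, FlaxAutoModelForSequenceClassification

# "./mrpc/" is a placeholder for the --output_dir used during training
tokenizer = AutoTokenizer.from_pretrained("./mrpc/")
model = FlaxAutoModelForSequenceClassification.from_pretrained("./mrpc/")

# MRPC scores whether two sentences are paraphrases of each other
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Record profits were reported by the company this quarter.",
    return_tensors="np",
)
logits = model(**inputs).logits
print(logits.argmax(-1))  # 1 = paraphrase, 0 = not a paraphrase in the GLUE labeling
```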
huggingface/transformers/blob/main/examples/flax/text-classification/README.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # PatchTST ## Overview The PatchTST model was proposed in [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong and Jayant Kalagnanam. At a high level the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer that then outputs the prediction length forecast via an appropriate head. The model is illustrated in the following figure: ![model](https://github.com/namctin/transformers/assets/8100/150af169-29de-419a-8d98-eb78251c21fa) The abstract from the paper is the following: *We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy.* This model was contributed by [namctin](https://huggingface.co/namctin), [gsinthong](https://huggingface.co/gsinthong), [diepi](https://huggingface.co/diepi), [vijaye12](https://huggingface.co/vijaye12), [wmgifford](https://huggingface.co/wmgifford), and [kashif](https://huggingface.co/kashif). The original code can be found [here](https://github.com/yuqinie98/PatchTST). ## Usage tips The model can also be used for time series classification and time series regression. See the respective [`PatchTSTForClassification`] and [`PatchTSTForRegression`] classes. ## PatchTSTConfig [[autodoc]] PatchTSTConfig ## PatchTSTModel [[autodoc]] PatchTSTModel - forward ## PatchTSTForPrediction [[autodoc]] PatchTSTForPrediction - forward ## PatchTSTForClassification [[autodoc]] PatchTSTForClassification - forward ## PatchTSTForPretraining [[autodoc]] PatchTSTForPretraining - forward ## PatchTSTForRegression [[autodoc]] PatchTSTForRegression - forward
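As a quick orientation to the API, here is a minimal forecasting sketch. The configuration values are illustrative assumptions rather than recommended settings, and the argument and output attribute names should be double-checked against the reference above for your installed version:

```py
import torch
from transformers import PatchTSTConfig, PatchTSTForPrediction

# Illustrative setup: 7 input channels, a 512-step context window, a 96-step forecast horizon
config = PatchTSTConfig(
    num_input_channels=7,
    context_length=512,
    prediction_length=96,
    patch_length=16,
    patch_stride=8,
)
model = PatchTSTForPrediction(config)

# Dummy batch of shape (batch_size, context_length, num_input_channels)
past_values = torch.randn(2, 512, 7)
outputs = model(past_values=past_values)

# Point forecasts, expected shape (batch_size, prediction_length, num_input_channels)
print(outputs.prediction_outputs.shape)
```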
huggingface/transformers/blob/main/docs/source/en/model_doc/patchtst.md
Gradio Demo: image-simple

```
!pip install -q gradio
```

```
# Downloading files from the demo repo
import os
!wget -q https://github.com/gradio-app/gradio/raw/main/demo/image-simple/cheetah.jpg
```

```
import gradio as gr

# Identity function: returns the input image unchanged
def image(im):
    return im

with gr.Blocks() as demo:
    im = gr.Image()   # input image
    im2 = gr.Image()  # output image
    btn = gr.Button()
    # Copy the input image to the output image when the button is clicked
    btn.click(image, inputs=im, outputs=im2)

if __name__ == "__main__":
    demo.launch()
```
gradio-app/gradio/blob/main/demo/image-simple/run.ipynb
Loading a custom dataset. Although the Hugging Face Hub hosts over a thousand public datasets, you'll often need to work with data that is stored on your laptop or some remote server. In this video we'll explore how the Datasets library can be used to load datasets that aren't available on the Hugging Face Hub.

As you can see in this table, the Datasets library provides several built-in scripts to load datasets in several formats. To load a dataset in one of these formats, you just need to provide the name of the format to the load_dataset function, along with a data_files argument that points to one or more filepaths or URLs.

To see this in action, let's start by loading a local CSV file. In this example, we first download a dataset about wine quality from the UCI machine learning repository. Since this is a CSV file, we then specify the csv loading script. This script needs to know where our data is located, so we provide the filename as part of the data_files argument. The CSV loading script also allows you to pass several keyword arguments, so here we've also specified the separator as a semi-colon. And with that we can see the dataset is loaded automatically as a DatasetDict object, with each column in the CSV file represented as a feature.

If your dataset is located on some remote server like GitHub or some other repository, the process is very similar. The only difference is that now the data_files argument points to a URL instead of a local filepath.

Let's now take a look at loading raw text files. This format is quite common in NLP, and you'll typically find books and plays are just a single file with raw text inside. In this example, we have a text file of Shakespeare plays that's stored on a GitHub repository. As we did for CSV files, we simply choose the text loading script and point the data_files argument to the URL. As you can see, these files are processed line by line, so empty lines in the raw text are also represented as a row in the dataset.

For JSON files, there are two main formats to know about. The first one is called JSON Lines, where every row in the file is a separate JSON object. For these files, you can load the dataset by selecting the json loading script and pointing the data_files argument to the file or URL. In this example, we've loaded a JSON Lines file based on Stack Exchange questions and answers.
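The CSV example described here boils down to just a couple of lines of code. A rough sketch is shown below; the file names and URLs are placeholders rather than the exact paths used in the video:

```python
from datasets import load_dataset

# Local CSV file (the wine quality data is semicolon-separated, so we pass sep=";")
dataset = load_dataset("csv", data_files="winequality.csv", sep=";")

# For data hosted on a remote server, point data_files at a URL instead of a local filepath
dataset = load_dataset("csv", data_files="https://example.com/winequality.csv", sep=";")

# Raw text and JSON Lines files work the same way with the "text" and "json" loading scripts
shakespeare = load_dataset("text", data_files="https://example.com/shakespeare.txt")
stack_exchange = load_dataset("json", data_files="https://example.com/questions.jsonl")
```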
huggingface/course/blob/main/subtitles/en/raw/chapter5/02_custom-dataset.md
`@gradio/imageeditor`
gradio-app/gradio/blob/main/js/imageeditor/README.md
!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> <br> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | <b>한국어</b> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> <a href="https://github.com/huggingface/transformers//blob/main/README_te.md">తెలుగు</a> | </p> </h4> <h3 align="center"> <p> Jax, Pytorch, TensorFlow를 위한 최첨단 자연어처리</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> 🤗 Transformers는 분류, 정보 추출, 질문 답변, 요약, 번역, 문장 생성 등을 100개 이상의 언어로 수행할 수 있는 수천개의 사전학습된 모델을 제공합니다. 우리의 목표는 모두가 최첨단의 NLP 기술을 쉽게 사용하는 것입니다. 🤗 Transformers는 이러한 사전학습 모델을 빠르게 다운로드해 특정 텍스트에 사용하고, 원하는 데이터로 fine-tuning해 커뮤니티나 우리의 [모델 허브](https://huggingface.co/models)에 공유할 수 있도록 API를 제공합니다. 또한, 모델 구조를 정의하는 각 파이썬 모듈은 완전히 독립적이여서 연구 실험을 위해 손쉽게 수정할 수 있습니다. 🤗 Transformers는 가장 유명한 3개의 딥러닝 라이브러리를 지원합니다. 이들은 서로 완벽히 연동됩니다 — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). 간단하게 이 라이브러리 중 하나로 모델을 학습하고, 또 다른 라이브러리로 추론을 위해 모델을 불러올 수 있습니다. ## 온라인 데모 대부분의 모델을 [모델 허브](https://huggingface.co/models) 페이지에서 바로 테스트해볼 수 있습니다. 공개 및 비공개 모델을 위한 [비공개 모델 호스팅, 버전 관리, 추론 API](https://huggingface.co/pricing)도 제공합니다. 
예시: - [BERT로 마스킹된 단어 완성하기](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [Electra를 이용한 개체명 인식](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [GPT-2로 텍스트 생성하기](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) - [RoBERTa로 자연어 추론하기](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [BART를 이용한 요약](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [DistilBERT를 이용한 질문 답변](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [T5로 번역하기](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) **[Transformer와 글쓰기](https://transformer.huggingface.co)** 는 이 저장소의 텍스트 생성 능력에 관한 Hugging Face 팀의 공식 데모입니다. ## Hugging Face 팀의 커스텀 지원을 원한다면 <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## 퀵 투어 원하는 텍스트에 바로 모델을 사용할 수 있도록, 우리는 `pipeline` API를 제공합니다. Pipeline은 사전학습 모델과 그 모델을 학습할 때 적용한 전처리 방식을 하나로 합칩니다. 
다음은 긍정적인 텍스트와 부정적인 텍스트를 분류하기 위해 pipeline을 사용한 간단한 예시입니다: ```python >>> from transformers import pipeline # Allocate a pipeline for sentiment-analysis >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` 코드의 두번째 줄은 pipeline이 사용하는 사전학습 모델을 다운로드하고 캐시로 저장합니다. 세번째 줄에선 그 모델이 주어진 텍스트를 평가합니다. 여기서 모델은 99.97%의 확률로 텍스트가 긍정적이라고 평가했습니다. 많은 NLP 과제들을 `pipeline`으로 바로 수행할 수 있습니다. 예를 들어, 질문과 문맥이 주어지면 손쉽게 답변을 추출할 수 있습니다: ``` python >>> from transformers import pipeline # Allocate a pipeline for question-answering >>> question_answerer = pipeline('question-answering') >>> question_answerer({ ... 'question': 'What is the name of the repository ?', ... 'context': 'Pipeline has been included in the huggingface/transformers repository' ... }) {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} ``` 답변뿐만 아니라, 여기에 사용된 사전학습 모델은 확신도와 토크나이즈된 문장 속 답변의 시작점, 끝점까지 반환합니다. [이 튜토리얼](https://huggingface.co/docs/transformers/task_summary)에서 `pipeline` API가 지원하는 다양한 과제를 확인할 수 있습니다. 코드 3줄로 원하는 과제에 맞게 사전학습 모델을 다운로드 받고 사용할 수 있습니다. 다음은 PyTorch 버전입니다: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> model = AutoModel.from_pretrained("bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` 다음은 TensorFlow 버전입니다: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> model = TFAutoModel.from_pretrained("bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` 토크나이저는 사전학습 모델의 모든 전처리를 책임집니다. 그리고 (위의 예시처럼) 1개의 스트링이나 리스트도 처리할 수 있습니다. 토크나이저는 딕셔너리를 반환하는데, 이는 다운스트림 코드에 사용하거나 언패킹 연산자 ** 를 이용해 모델에 바로 전달할 수도 있습니다. 모델 자체는 일반적으로 사용되는 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)나 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)입니다. [이 튜토리얼](https://huggingface.co/transformers/training.html)은 이러한 모델을 표준적인 PyTorch나 TensorFlow 학습 과정에서 사용하는 방법, 또는 새로운 데이터로 fine-tune하기 위해 `Trainer` API를 사용하는 방법을 설명해줍니다. ## 왜 transformers를 사용해야 할까요? 1. 손쉽게 사용할 수 있는 최첨단 모델: - NLU와 NLG 과제에서 뛰어난 성능을 보입니다. - 교육자 실무자에게 진입 장벽이 낮습니다. - 3개의 클래스만 배우면 바로 사용할 수 있습니다. - 하나의 API로 모든 사전학습 모델을 사용할 수 있습니다. 1. 더 적은 계산 비용, 더 적은 탄소 발자국: - 연구자들은 모델을 계속 다시 학습시키는 대신 학습된 모델을 공유할 수 있습니다. - 실무자들은 학습에 필요한 시간과 비용을 절약할 수 있습니다. - 수십개의 모델 구조, 2,000개 이상의 사전학습 모델, 100개 이상의 언어로 학습된 모델 등. 1. 모델의 각 생애주기에 적합한 프레임워크: - 코드 3줄로 최첨단 모델을 학습하세요. - 자유롭게 모델을 TF2.0나 PyTorch 프레임워크로 변환하세요. - 학습, 평가, 공개 등 각 단계에 맞는 프레임워크를 원하는대로 선택하세요. 1. 필요한 대로 모델이나 예시를 커스터마이즈하세요: - 우리는 저자가 공개한 결과를 재현하기 위해 각 모델 구조의 예시를 제공합니다. - 모델 내부 구조는 가능한 일관적으로 공개되어 있습니다. - 빠른 실험을 위해 모델 파일은 라이브러리와 독립적으로 사용될 수 있습니다. ## 왜 transformers를 사용하지 말아야 할까요? - 이 라이브러리는 신경망 블록을 만들기 위한 모듈이 아닙니다. 연구자들이 여러 파일을 살펴보지 않고 바로 각 모델을 사용할 수 있도록, 모델 파일 코드의 추상화 수준을 적정하게 유지했습니다. - 학습 API는 모든 모델에 적용할 수 있도록 만들어지진 않았지만, 라이브러리가 제공하는 모델들에 적용할 수 있도록 최적화되었습니다. 일반적인 머신 러닝을 위해선, 다른 라이브러리를 사용하세요. - 가능한 많은 사용 예시를 보여드리고 싶어서, [예시 폴더](https://github.com/huggingface/transformers/tree/main/examples)의 스크립트를 준비했습니다. 이 스크립트들을 수정 없이 특정한 문제에 바로 적용하지 못할 수 있습니다. 필요에 맞게 일부 코드를 수정해야 할 수 있습니다. ## 설치 ### pip로 설치하기 이 저장소는 Python 3.8+, Flax 0.4.1+, PyTorch 1.10+, TensorFlow 2.6+에서 테스트 되었습니다. [가상 환경](https://docs.python.org/3/library/venv.html)에 🤗 Transformers를 설치하세요. 
Python 가상 환경에 익숙하지 않다면, [사용자 가이드](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)를 확인하세요. 우선, 사용할 Python 버전으로 가상 환경을 만들고 실행하세요. 그 다음, Flax, PyTorch, TensorFlow 중 적어도 하나는 설치해야 합니다. 플랫폼에 맞는 설치 명령어를 확인하기 위해 [TensorFlow 설치 페이지](https://www.tensorflow.org/install/), [PyTorch 설치 페이지](https://pytorch.org/get-started/locally/#start-locally), [Flax 설치 페이지](https://github.com/google/flax#quick-install)를 확인하세요. 이들 중 적어도 하나가 설치되었다면, 🤗 Transformers는 다음과 같이 pip을 이용해 설치할 수 있습니다: ```bash pip install transformers ``` 예시들을 체험해보고 싶거나, 최최최첨단 코드를 원하거나, 새로운 버전이 나올 때까지 기다릴 수 없다면 [라이브러리를 소스에서 바로 설치](https://huggingface.co/docs/transformers/installation#installing-from-source)하셔야 합니다. ### conda로 설치하기 Transformers 버전 v4.0.0부터, conda 채널이 생겼습니다: `huggingface`. 🤗 Transformers는 다음과 같이 conda로 설치할 수 있습니다: ```shell script conda install -c huggingface transformers ``` Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는 방법을 확인하세요. ## 모델 구조 **🤗 Transformers가 제공하는 [모든 모델 체크포인트](https://huggingface.co/models)** 는 huggingface.co [모델 허브](https://huggingface.co)에 완벽히 연동되어 있습니다. [개인](https://huggingface.co/users)과 [기관](https://huggingface.co/organizations)이 모델 허브에 직접 업로드할 수 있습니다. 현재 사용 가능한 모델 체크포인트의 개수: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 🤗 Transformers는 다음 모델들을 제공합니다 (각 모델의 요약은 [여기](https://huggingface.co/docs/transformers/model_summary)서 확인하세요): 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (Google Research 에서 제공)은 Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.의 [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918)논문과 함께 발표했습니다. 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell. 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long. 1. **[Bark](https://huggingface.co/docs/transformers/model_doc/bark)** (from Suno) released in the repository [suno-ai/bark](https://github.com/suno-ai/bark) by Suno AI team. 1. 
**[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 1. 
**[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (Salesforce 에서 제공)은 Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi.의 [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597)논문과 함께 발표했습니다. 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (Alexa 에서) Adrian de Wynter and Daniel J. Perry 의 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 논문과 함께 발표했습니다. 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. 1. **[BROS](https://huggingface.co/docs/transformers/model_doc/bros)** (NAVER CLOVA 에서 제공)은 Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park.의 [BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents](https://arxiv.org/abs/2108.04539)논문과 함께 발표했습니다. 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google Research 에서) Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 의 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 논문과 함께 발표했습니다. 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (Inria/Facebook/Sorbonne 에서) Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 의 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 논문과 함께 발표했습니다. 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google Research 에서) Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 의 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 논문과 함께 발표했습니다. 1. 
**[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (OFA-Sys 에서) An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou 의 [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) 논문과 함께 발표했습니다. 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI 에서 제공)은 Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.의 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687)논문과 함께 발표했습니다. 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI 에서) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 의 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 논문과 함께 발표했습니다. 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (University of Göttingen 에서) Timo Lüddecke and Alexander Ecker 의 [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) 논문과 함께 발표했습니다. 1. **[CLVP](https://huggingface.co/docs/transformers/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker. 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (Salesforce 에서) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 의 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 논문과 함께 발표했습니다. 1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (MetaAI 에서 제공)은 Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.의 [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)논문과 함께 발표했습니다. 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (Microsoft Research Asia 에서) Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang 의 [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 논문과 함께 발표했습니다. 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (YituTech 에서) Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 의 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 논문과 함께 발표했습니다. 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (Facebook AI 에서) Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 의 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 논문과 함께 발표했습니다. 1. 
**[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (Tsinghua University 에서) Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 의 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 논문과 함께 발표했습니다. 1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/). 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (Salesforce 에서) Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 의 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 논문과 함께 발표했습니다. 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (Microsoft 에서) Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 의 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 논문과 함께 발표했습니다. 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (Facebook 에서) Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 의 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 논문과 함께 발표했습니다. 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (Microsoft 에서) Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 의 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 논문과 함께 발표했습니다. 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (Microsoft 에서) Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 의 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 논문과 함께 발표했습니다. 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (Berkeley/Facebook/Google 에서) Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 의 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 논문과 함께 발표했습니다. 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (SenseTime Research 에서) Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai 의 [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) 논문과 함께 발표했습니다. 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (Facebook 에서) Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 의 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 논문과 함께 발표했습니다. 1. 
**[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (Google AI 에서 제공)은 Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.의 [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505)논문과 함께 발표했습니다. 1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (The University of Texas at Austin 에서 제공)은 Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.의 [NMS Strikes Back](https://arxiv.org/abs/2212.06137)논문과 함께 발표했습니다. 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (Facebook 에서) Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 의 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 논문과 함께 발표했습니다. 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (Microsoft Research 에서) Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 의 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 논문과 함께 발표했습니다. 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (SHI Labs 에서) Ali Hassani and Humphrey Shi 의 [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) 논문과 함께 발표했습니다. 1. **[DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2)** (Meta AI 에서 제공)은 Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.의 [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193)논문과 함께 발표했습니다. 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (HuggingFace 에서) Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT 의 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 논문과 함께 발표했습니다. 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (Microsoft Research 에서) Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 의 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 논문과 함께 발표했습니다. 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (NAVER 에서) Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 의 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 논문과 함께 발표했습니다. 1. 
**[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (Facebook 에서) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 의 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 논문과 함께 발표했습니다. 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (Intel Labs 에서) René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 의 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 논문과 함께 발표했습니다. 1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. 1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le. 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (Google Research/Stanford University 에서) Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 의 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 논문과 함께 발표했습니다. 1. **[EnCodec](https://huggingface.co/docs/transformers/model_doc/encodec)** (Meta AI 에서 제공)은 Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.의 [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438)논문과 함께 발표했습니다. 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (Google Research 에서) Sascha Rothe, Shashi Narayan, Aliaksei Severyn 의 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 논문과 함께 발표했습니다. 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (Baidu 에서) Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu 의 [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) 논문과 함께 발표했습니다. 1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (Baidu 에서 제공)은 Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.의 [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674)논문과 함께 발표했습니다. 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. 
**ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 1. **[Falcon](https://huggingface.co/docs/transformers/model_doc/falcon)** (from Technology Innovation Institute) by Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme. 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. 1. 
**[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[Fuyu](https://huggingface.co/docs/transformers/model_doc/fuyu)** (from ADEPT) released with the [blog post](https://www.adept.ai/blog/fuyu-8b) by Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar.
1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
1. 
**[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (BigCode 에서 제공)은 Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.의 [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988)논문과 함께 발표했습니다. 1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama). 1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu 의 [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) 논문과 함께 발표했습니다. 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA 에서) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 의 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 논문과 함께 발표했습니다. 1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (Allegro.pl, AGH University of Science and Technology 에서 제공)은 Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.의 [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf)논문과 함께 발표했습니다. 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (Facebook 에서) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 의 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 논문과 함께 발표했습니다. 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (Berkeley 에서) Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 의 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 논문과 함께 발표했습니다. 1. **[IDEFICS](https://huggingface.co/docs/transformers/model_doc/idefics)** (from HuggingFace) released with the paper [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents](https://huggingface.co/papers/2306.16527) by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh. 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (OpenAI 에서) Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 의 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 논문과 함께 발표했습니다. 1. 
**[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. 1. **[InstructBLIP](https://huggingface.co/docs/transformers/model_doc/instructblip)** (Salesforce 에서 제공)은 Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.의 [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500)논문과 함께 발표했습니다. 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (OpenAI 에서) Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever 의 [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) 논문과 함께 발표했습니다. 1. **[KOSMOS-2](https://huggingface.co/docs/transformers/model_doc/kosmos-2)** (from Microsoft Research Asia) released with the paper [Kosmos-2: Grounding Multimodal Large Language Models to the World](https://arxiv.org/abs/2306.14824) by Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei. 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (Microsoft Research Asia 에서) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 의 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 논문과 함께 발표했습니다. 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (Microsoft Research Asia 에서) Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 의 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 논문과 함께 발표했습니다. 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (Microsoft Research Asia 에서) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 의 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 논문과 함께 발표했습니다. 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (Microsoft Research Asia 에서) Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 의 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 논문과 함께 발표했습니다. 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (AllenAI 에서) Iz Beltagy, Matthew E. Peters, Arman Cohan 의 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 논문과 함께 발표했습니다. 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (Meta AI 에서) Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 의 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 논문과 함께 발표했습니다. 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (South China University of Technology 에서) Jiapeng Wang, Lianwen Jin, Kai Ding 의 [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) 논문과 함께 발표했습니다. 1. 
**[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
1. **[Llama2](https://huggingface.co/docs/transformers/model_doc/llama2)** (from The FAIR team of Meta AI) released with the paper [Llama2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom.
1. **[LLaVa](https://huggingface.co/docs/transformers/model_doc/llava)** (from Microsoft Research & University of Wisconsin-Madison) released with the paper [Visual Instruction Tuning](https://arxiv.org/abs/2304.08485) by Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee.
1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. 
**[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (Facebook 에서) Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 의 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 논문과 함께 발표했습니다. 1. **[MADLAD-400](https://huggingface.co/docs/transformers/model_doc/madlad-400)** (from Google) released with the paper [MADLAD-400: A Multilingual And Document-Level Large Audited Dataset](https://arxiv.org/abs/2309.04662) by Sneha Kudugunta, Isaac Caswell, Biao Zhang, Xavier Garcia, Christopher A. Choquette-Choo, Katherine Lee, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat. 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (Microsoft Research Asia 에서) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei 의 [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) 논문과 함께 발표했습니다. 1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (FAIR and UIUC 에서 제공)은 Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.의 [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527)논문과 함께 발표했습니다. 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (Meta and UIUC 에서) Bowen Cheng, Alexander G. Schwing, Alexander Kirillov 의 [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) 논문과 함께 발표했습니다. 1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (Google AI 에서 제공)은 Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.의 [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662)논문과 함께 발표했습니다. 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (Facebook 에서) Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 의 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 논문과 함께 발표했습니다. 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (Facebook 에서) Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 의 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 논문과 함께 발표했습니다. 1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (Facebook 에서 제공)은 Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.의 [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655)논문과 함께 발표했습니다. 1. 
**[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (NVIDIA 에서) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 의 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 논문과 함께 발표했습니다. 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (NVIDIA 에서) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 의 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 논문과 함께 발표했습니다. 1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (Alibaba Research 에서 제공)은 Peng Wang, Cheng Da, and Cong Yao.의 [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592)논문과 함께 발표했습니다. 1. **[Mistral](https://huggingface.co/docs/transformers/model_doc/mistral)** (from Mistral AI) by The Mistral AI team: Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.. 1. **[Mixtral](https://huggingface.co/docs/transformers/model_doc/mixtral)** (from Mistral AI) by The [Mistral AI](https://mistral.ai) team: Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (Studio Ousia 에서) Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 의 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 논문과 함께 발표했습니다. 1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (Facebook 에서 제공)은 Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.의 [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516)논문과 함께 발표했습니다. 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (CMU/Google Brain 에서) Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 의 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 논문과 함께 발표했습니다. 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (Google Inc. 에서) Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam 의 [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) 논문과 함께 발표했습니다. 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (Google Inc. 에서) Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen 의 [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) 논문과 함께 발표했습니다. 1. 
**[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (Apple 에서) Sachin Mehta and Mohammad Rastegari 의 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 논문과 함께 발표했습니다. 1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (Apple 에서 제공)은 Sachin Mehta and Mohammad Rastegari.의 [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680)논문과 함께 발표했습니다. 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (Microsoft Research 에서) Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 의 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 논문과 함께 발표했습니다. 1. **[MPT](https://huggingface.co/docs/transformers/model_doc/mpt)** (MosaiML 에서 제공)은 the MosaicML NLP Team.의 [llm-foundry](https://github.com/mosaicml/llm-foundry/)논문과 함께 발표했습니다. 1. **[MRA](https://huggingface.co/docs/transformers/model_doc/mra)** (the University of Wisconsin - Madison 에서 제공)은 Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, Vikas Singh.의 [Multi Resolution Analysis (MRA)](https://arxiv.org/abs/2207.10284) 논문과 함께 발표했습니다. 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (Google AI 에서) Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 의 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 논문과 함께 발표했습니다. 1. **[MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen)** (from Meta) released with the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez. 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (RUC AI Box 에서) Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 의 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 논문과 함께 발표했습니다. 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (SHI Labs 에서) Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi 의 [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) 논문과 함께 발표했습니다. 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (Huawei Noah’s Ark Lab 에서) Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 의 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 논문과 함께 발표했습니다. 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (Meta 에서) the NLLB team 의 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 논문과 함께 발표했습니다. 1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (Meta 에서 제공)은 the NLLB team.의 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672)논문과 함께 발표했습니다. 1. **[Nougat](https://huggingface.co/docs/transformers/model_doc/nougat)** (Meta AI 에서 제공)은 Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic.의 [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418)논문과 함께 발표했습니다. 1. 
**[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (the University of Wisconsin - Madison 에서) Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 의 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 논문과 함께 발표했습니다. 1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (SHI Labs 에서) Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi 의 [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) 논문과 함께 발표했습니다. 1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released on GitHub (now removed). 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (Meta AI 에서) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 의 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 논문과 함께 발표했습니다. 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI 에서) Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 의 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 논문과 함께 발표했습니다. 1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (Google AI 에서 제공)은 Matthias Minderer, Alexey Gritsenko, Neil Houlsby.의 [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683)논문과 함께 발표했습니다. 1. **[PatchTSMixer](https://huggingface.co/docs/transformers/model_doc/patchtsmixer)** ( IBM Research 에서 제공)은 Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.의 [TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting](https://arxiv.org/pdf/2306.09364.pdf)논문과 함께 발표했습니다. 1. **[PatchTST](https://huggingface.co/docs/transformers/model_doc/patchtst)** (IBM 에서 제공)은 Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.의 [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf)논문과 함께 발표했습니다. 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (Google 에서) Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 의 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 논문과 함께 발표했습니다. 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google 에서) Jason Phang, Yao Zhao, Peter J. Liu 의 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 논문과 함께 발표했습니다. 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (Deepmind 에서) Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 의 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 논문과 함께 발표했습니다. 1. 
**[Persimmon](https://huggingface.co/docs/transformers/model_doc/persimmon)** (ADEPT 에서 제공)은 Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.의 [blog post](https://www.adept.ai/blog/persimmon-8b)논문과 함께 발표했습니다. 1. **[Phi](https://huggingface.co/docs/transformers/model_doc/phi)** (from Microsoft) released with the papers - [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee. 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (VinAI Research 에서) Dat Quoc Nguyen and Anh Tuan Nguyen 의 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 논문과 함께 발표했습니다. 1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (Google 에서 제공)은 Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.의 [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347)논문과 함께 발표했습니다. 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (UCLA NLP 에서) Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 의 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 논문과 함께 발표했습니다. 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (Sea AI Labs 에서) Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 의 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 논문과 함께 발표했습니다. 1. **[Pop2Piano](https://huggingface.co/docs/transformers/model_doc/pop2piano)** released with the paper [Pop2Piano : Pop Audio-based Piano Cover Generation](https://arxiv.org/abs/2211.00895) by Jongho Choi, Kyogu Lee. 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (Microsoft Research 에서) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 의 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 논문과 함께 발표했습니다. 1. **[PVT](https://huggingface.co/docs/transformers/model_doc/pvt)** (Nanjing University, The University of Hong Kong etc. 에서 제공)은 Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.의 [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/pdf/2102.12122.pdf)논문과 함께 발표했습니다. 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (NVIDIA 에서) Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 의 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 논문과 함께 발표했습니다. 1. 
**[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (Facebook 에서) Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 의 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 논문과 함께 발표했습니다. 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (Google Research 에서) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 의 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 논문과 함께 발표했습니다. 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (Google Research 에서) Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 의 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 논문과 함께 발표했습니다. 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (META Research 에서) Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár 의 [Designing Network Design Space](https://arxiv.org/abs/2003.13678) 논문과 함께 발표했습니다. 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (Google Research 에서) Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 의 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 논문과 함께 발표했습니다. 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (Microsoft Research 에서) Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun 의 [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) 논문과 함께 발표했습니다. 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (Facebook 에서) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 의 a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 논문과 함께 발표했습니다. 1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (Facebook 에서) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli 의 [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) 논문과 함께 발표했습니다. 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (WeChatAI 에서) HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou 의 [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) 논문과 함께 발표했습니다. 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (ZhuiyiTechnology 에서) Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 의 a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 논문과 함께 발표했습니다. 1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (Bo Peng 에서 제공)은 Bo Peng.의 [this repo](https://github.com/BlinkDL/RWKV-LM)논문과 함께 발표했습니다. 1. **[SeamlessM4T](https://huggingface.co/docs/transformers/model_doc/seamless_m4t)** (from Meta AI) released with the paper [SeamlessM4T — Massively Multilingual & Multimodal Machine Translation](https://dl.fbaipublicfiles.com/seamless/seamless_m4t_paper.pdf) by the Seamless Communication team. 1. 
**[SeamlessM4Tv2](https://huggingface.co/docs/transformers/model_doc/seamless_m4t_v2)** (from Meta AI) released with the paper [Seamless: Multilingual Expressive and Streaming Speech Translation](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/) by the Seamless Communication team. 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (NVIDIA 에서) Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 의 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 논문과 함께 발표했습니다. 1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (Meta AI 에서 제공)은 Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.의 [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf)논문과 함께 발표했습니다. 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP 에서) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 의 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 논문과 함께 발표했습니다. 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP 에서) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 의 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 논문과 함께 발표했습니다. 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (Microsoft Research 에서 제공)은 Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.의 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205)논문과 함께 발표했습니다. 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (Facebook 에서) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 의 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 논문과 함께 발표했습니다. 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (Facebook 에서) Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 의 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 논문과 함께 발표했습니다. 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (Tel Aviv University 에서) Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 의 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 논문과 함께 발표했습니다. 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (Berkeley 에서) Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 의 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 논문과 함께 발표했습니다. 1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (MBZUAI 에서 제공)은 Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.의 [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446)논문과 함께 발표했습니다. 1. 
**[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (Microsoft 에서) Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 의 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 논문과 함께 발표했습니다. 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (Microsoft 에서) Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 의 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 논문과 함께 발표했습니다. 1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (University of Würzburg 에서) Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte 의 [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) 논문과 함께 발표했습니다. 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (Google 에서) William Fedus, Barret Zoph, Noam Shazeer. 의 [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) 논문과 함께 발표했습니다. 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (Google AI 에서) Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 의 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 논문과 함께 발표했습니다. 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (Microsoft Research 에서) Brandon Smock, Rohith Pesala, Robin Abraham 의 [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) 논문과 함께 발표했습니다. 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (Google AI 에서) Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 의 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 논문과 함께 발표했습니다. 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (Microsoft Research 에서) Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 의 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 논문과 함께 발표했습니다. 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (Facebook 에서) Gedas Bertasius, Heng Wang, Lorenzo Torresani 의 [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) 논문과 함께 발표했습니다. 1. 
**[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (the University of California at Berkeley 에서) Michael Janner, Qiyang Li, Sergey Levin 의 [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) 논문과 함께 발표했습니다. 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (Google/CMU 에서) Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 의 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 논문과 함께 발표했습니다. 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (Microsoft 에서) Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 의 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 논문과 함께 발표했습니다. 1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill 에서) Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal 의 [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) 논문과 함께 발표했습니다. 1. **[TVP](https://huggingface.co/docs/transformers/model_doc/tvp)** (Intel 에서) Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding 의 [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) 논문과 함께 발표했습니다. 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (Google Research 에서) Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzle 의 [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) 논문과 함께 발표했습니다. 1. **[UMT5](https://huggingface.co/docs/transformers/model_doc/umt5)** (Google Research 에서 제공)은 Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.의 [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)논문과 함께 발표했습니다. 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (Microsoft Research 에서) Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 의 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 논문과 함께 발표했습니다. 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (Microsoft Research 에서) Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 의 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 논문과 함께 발표했습니다. 1. **[UnivNet](https://huggingface.co/docs/transformers/model_doc/univnet)** (from Kakao Corporation) released with the paper [UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, and Juntae Kim. 1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (Peking University 에서 제공)은 Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.의 [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221)논문과 함께 발표했습니다. 1. 
**[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (Tsinghua University and Nankai University 에서) Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 의 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 논문과 함께 발표했습니다. 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (Multimedia Computing Group, Nanjing University 에서) Zhan Tong, Yibing Song, Jue Wang, Limin Wang 의 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 논문과 함께 발표했습니다. 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain 에서) Wonjae Kim, Bokyung Son, Ildoo Kim 의 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 논문과 함께 발표했습니다. 1. **[VipLlava](https://huggingface.co/docs/transformers/model_doc/vipllava)** (University of Wisconsin–Madison 에서 제공)은 Mu Cai, Haotian Liu, Siva Karthik Mustikovela, Gregory P. Meyer, Yuning Chai, Dennis Park, Yong Jae Lee.의 [Making Large Multimodal Models Understand Arbitrary Visual Prompts](https://arxiv.org/abs/2312.00784)논문과 함께 발표했습니다. 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (Google AI 에서) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 의 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 논문과 함께 발표했습니다. 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (UCLA NLP 에서) Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 의 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 논문과 함께 발표했습니다. 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (Google AI 에서) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 의 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 논문과 함께 발표했습니다. 1. **[VitDet](https://huggingface.co/docs/transformers/model_doc/vitdet)** (Meta AI 에서 제공)은 Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He.의 [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527)논문과 함께 발표했습니다. 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (Meta AI 에서) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 의 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 논문과 함께 발표했습니다. 1. **[ViTMatte](https://huggingface.co/docs/transformers/model_doc/vitmatte)** (HUST-VL 에서 제공)은 Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.의 [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272)논문과 함께 발표했습니다. 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (Meta AI 에서) Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas 의 [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) 논문과 함께 발표했습니다. 1. 
**[VITS](https://huggingface.co/docs/transformers/model_doc/vits)** (Kakao Enterprise 에서 제공)은 Jaehyeon Kim, Jungil Kong, Juhee Son.의 [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](https://arxiv.org/abs/2106.06103)논문과 함께 발표했습니다. 1. **[ViViT](https://huggingface.co/docs/transformers/model_doc/vivit)** (from Google Research) released with the paper [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid. 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (Facebook AI 에서) Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 의 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 논문과 함께 발표했습니다. 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI 에서) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 의 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 논문과 함께 발표했습니다. 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (Facebook AI 에서) Qiantong Xu, Alexei Baevski, Michael Auli 의 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 논문과 함께 발표했습니다. 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (Microsoft Research 에서) Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei 의 [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) 논문과 함께 발표했습니다. 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (OpenAI 에서) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever 의 [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) 논문과 함께 발표했습니다. 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (Microsoft Research 에서) Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling 의 [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) 논문과 함께 발표했습니다. 1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (Meta AI 에서 제공)은 Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.의 [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255)논문과 함께 발표했습니다. 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (Facebook AI 에서 제공) Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li 의 [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) 논문과 함께 발표했습니다. 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (Facebook 에서) Guillaume Lample and Alexis Conneau 의 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 논문과 함께 발표했습니다. 1. 
**[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (Microsoft Research 에서) Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 의 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 논문과 함께 발표했습니다. 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (Facebook AI 에서) Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 의 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 논문과 함께 발표했습니다. 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (Facebook AI 에서) Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 의 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 논문과 함께 발표했습니다. 1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (Meta AI 에서) Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa 의 [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) 논문과 함께 발표했습니다. 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (Google/CMU 에서) Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 의 [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 논문과 함께 발표했습니다. 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (Facebook AI 에서) Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 의 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 논문과 함께 발표했습니다. 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (Facebook AI 에서) Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 의 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 논문과 함께 발표했습니다. 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (Huazhong University of Science & Technology 에서) Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 의 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 논문과 함께 발표했습니다. 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (the University of Wisconsin - Madison 에서) Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 의 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 논문과 함께 발표했습니다. 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다. 각 모델이 Flax, PyTorch, TensorFlow으로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요. 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. 
[도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다. ## 더 알아보기 | 섹션 | 설명 | |-|-| | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 | | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 | | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 | | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델 PyTorch/TensorFlow 학습 과정과 `Trainer` API에서 사용하기 | | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/main/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 | | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 | | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기| ## 인용 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
huggingface/transformers/blob/main/README_ko.md
# Search

You can now easily search anything on the Hub with **Full-text search**. We index model cards, dataset cards, and Spaces app.py files.

Go directly to https://huggingface.co/search or, using the search bar at the top of https://huggingface.co, you can select "Try Full-text search" to help find what you seek on the Hub across models, datasets, and Spaces:

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/fulltextsearch1.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/fulltextsearch2.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/AlbertFTS1.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/AlbertFTS2.png"/>
</div>

## Filter with ease

By default, models, datasets, and Spaces are all searched when a user enters a query. If you prefer, you can filter to search only models, datasets, or Spaces.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/Filter%20search%201.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/Filter%20search%202.png"/>
</div>

Moreover, you can copy and share the URL from your browser's address bar, which contains the filter information as URL query parameters. For example, searching for the query `llama` with a filter to show `Spaces` only yields the URL https://huggingface.co/search/full-text?q=llama&type=space
huggingface/hub-docs/blob/main/docs/hub/search.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Efficient Training on Multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP, which enables efficient distributed CPU training on [bare metal](#usage-in-trainer) and [Kubernetes](#usage-with-kubernetes).

## Intel® oneCCL Bindings for PyTorch

[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training, implementing collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).

The module `oneccl_bindings_for_pytorch` (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup; it currently only works on Linux. See [oneccl_bind_pt](https://github.com/intel/torch-ccl) for more detailed information.

### Intel® oneCCL Bindings for PyTorch installation

Wheel files are available for the following Python versions:

| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 1.13.0            |            | √          | √          | √          | √           |
| 1.12.100          |            | √          | √          | √          | √           |
| 1.12.0            |            | √          | √          | √          | √           |
| 1.11.0            |            | √          | √          | √          | √           |
| 1.10.0            | √          | √          | √          | √          |             |

```
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```
where `{pytorch_version}` should be your PyTorch version, for instance 1.13.0.
Check more approaches for [oneccl_bind_pt installation](https://github.com/intel/torch-ccl).
Versions of oneCCL and PyTorch must match.

<Tip warning={true}>

The oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0); PyTorch 1.12.1 should be used with oneccl_bindings_for_pytorch 1.12.100.

</Tip>

## Intel® MPI library

Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.

oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.
for Intel® oneCCL >= 1.12.0 ``` oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh ``` for Intel® oneCCL whose version < 1.12.0 ``` torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))") source $torch_ccl_path/env/setvars.sh ``` #### Intel® Extension for PyTorch installation Intel Extension for PyTorch (IPEX) provides performance optimizations for CPU training with both Float32 and BFloat16 (refer to the [single CPU section](./perf_train_cpu) to learn more). The following "Usage in Trainer" takes mpirun in Intel® MPI library as an example. ## Usage in Trainer To enable multi CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** in the command arguments. Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) The following command enables training with 2 processes on one Xeon node, with one process running per one socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance. ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=127.0.0.1 mpirun -n 2 -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex ``` The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process), ppn (processes per node) is set to 2, with one process running per one socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance. In node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument. ```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` Now, run the following command in node0 and **4DDP** will be enabled in node0 and node1 with BF16 auto mixed precision: ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ``` ## Usage with Kubernetes The same distributed training job from the previous section can be deployed to a Kubernetes cluster using the [Kubeflow PyTorchJob training operator](https://www.kubeflow.org/docs/components/training/pytorch/). ### Setup This example assumes that you have: * Access to a Kubernetes cluster with [Kubeflow installed](https://www.kubeflow.org/docs/started/installing-kubeflow/) * [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed and configured to access the Kubernetes cluster * A [Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) that can be used to store datasets and model files. 
There are multiple options for setting up the PVC including using an NFS [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) or a cloud storage bucket.
* A Docker container that includes your model training script and all the dependencies needed to run the script. For distributed CPU training jobs, this typically includes PyTorch, Transformers, Intel Extension for PyTorch, Intel oneCCL Bindings for PyTorch, and OpenSSH to communicate between the containers.

The snippet below is an example of a Dockerfile that uses a base image that supports distributed CPU training and then extracts a Transformers release to the `/workspace` directory, so that the example scripts are included in the image:
```
FROM intel/ai-workflows:torch-2.0.1-huggingface-multinode-py3.9

WORKDIR /workspace

# Download and extract the transformers code
ARG HF_TRANSFORMERS_VER="4.35.2"
RUN mkdir transformers && \
    curl -sSL --retry 5 https://github.com/huggingface/transformers/archive/refs/tags/v${HF_TRANSFORMERS_VER}.tar.gz | tar -C transformers --strip-components=1 -xzf -
```
The image needs to be built and copied to the cluster's nodes or pushed to a container registry prior to deploying the PyTorchJob to the cluster.

### PyTorchJob Specification File

The [Kubeflow PyTorchJob](https://www.kubeflow.org/docs/components/training/pytorch/) is used to run the distributed training job on the cluster. The yaml file for the PyTorchJob defines parameters such as:
* The name of the PyTorchJob
* The number of replicas (workers)
* The Python script and its parameters that will be used to run the training job
* The types of resources (node selector, memory, and CPU) needed for each worker
* The image/tag for the Docker container to use
* Environment variables
* A volume mount for the PVC

The volume mount defines a path where the PVC will be mounted in the container for each worker pod. This location can be used for the dataset, checkpoint files, and the saved model after training completes.

The snippet below is an example of a yaml file for a PyTorchJob with 4 workers running the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering).
```yaml apiVersion: "kubeflow.org/v1" kind: PyTorchJob metadata: name: transformers-pytorchjob namespace: kubeflow spec: elasticPolicy: rdzvBackend: c10d minReplicas: 1 maxReplicas: 4 maxRestarts: 10 pytorchReplicaSpecs: Worker: replicas: 4 # The number of worker pods restartPolicy: OnFailure template: spec: containers: - name: pytorch image: <image name>:<tag> # Specify the docker image to use for the worker pods imagePullPolicy: IfNotPresent command: - torchrun - /workspace/transformers/examples/pytorch/question-answering/run_qa.py - --model_name_or_path - "bert-large-uncased" - --dataset_name - "squad" - --do_train - --do_eval - --per_device_train_batch_size - "12" - --learning_rate - "3e-5" - --num_train_epochs - "2" - --max_seq_length - "384" - --doc_stride - "128" - --output_dir - "/tmp/pvc-mount/output" - --no_cuda - --ddp_backend - "ccl" - --use_ipex - --bf16 # Specify --bf16 if your hardware supports bfloat16 env: - name: LD_PRELOAD value: "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4.5.9:/usr/local/lib/libiomp5.so" - name: TRANSFORMERS_CACHE value: "/tmp/pvc-mount/transformers_cache" - name: HF_DATASETS_CACHE value: "/tmp/pvc-mount/hf_datasets_cache" - name: LOGLEVEL value: "INFO" - name: CCL_WORKER_COUNT value: "1" - name: OMP_NUM_THREADS # Can be tuned for optimal performance - value: "56" resources: limits: cpu: 200 # Update the CPU and memory limit values based on your nodes memory: 128Gi requests: cpu: 200 # Update the CPU and memory request values based on your nodes memory: 128Gi volumeMounts: - name: pvc-volume mountPath: /tmp/pvc-mount - mountPath: /dev/shm name: dshm restartPolicy: Never nodeSelector: # Optionally use the node selector to specify what types of nodes to use for the workers node-type: spr volumes: - name: pvc-volume persistentVolumeClaim: claimName: transformers-pvc - name: dshm emptyDir: medium: Memory ``` To run this example, update the yaml based on your training script and the nodes in your cluster. <Tip> The CPU resource limits/requests in the yaml are defined in [cpu units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu) where 1 CPU unit is equivalent to 1 physical CPU core or 1 virtual core (depending on whether the node is a physical host or a VM). The amount of CPU and memory limits/requests defined in the yaml should be less than the amount of available CPU/memory capacity on a single machine. It is usually a good idea to not use the entire machine's capacity in order to leave some resources for the kubelet and OS. In order to get ["guaranteed"](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#guaranteed) [quality of service](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) for the worker pods, set the same CPU and memory amounts for both the resource limits and requests. </Tip> ### Deploy After the PyTorchJob spec has been updated with values appropriate for your cluster and training job, it can be deployed to the cluster using: ``` kubectl create -f pytorchjob.yaml ``` The `kubectl get pods -n kubeflow` command can then be used to list the pods in the `kubeflow` namespace. You should see the worker pods for the PyTorchJob that was just deployed. At first, they will probably have a status of "Pending" as the containers get pulled and created, then the status should change to "Running". ``` NAME READY STATUS RESTARTS AGE ... 
transformers-pytorchjob-worker-0              1/1     Running                 0          7m37s
transformers-pytorchjob-worker-1              1/1     Running                 0          7m37s
transformers-pytorchjob-worker-2              1/1     Running                 0          7m37s
transformers-pytorchjob-worker-3              1/1     Running                 0          7m37s
...
```

The logs for a worker can be viewed using `kubectl logs -n kubeflow <pod name>`. Add `-f` to stream the logs, for example:

```
kubectl logs -n kubeflow transformers-pytorchjob-worker-0 -f
```

After the training job completes, the trained model can be copied from the PVC or storage location. When you are done with the job, the PyTorchJob resource can be deleted from the cluster using `kubectl delete -f pytorchjob.yaml`.

## Summary

This guide covered running distributed PyTorch training jobs using multiple CPUs on bare metal and on a Kubernetes cluster. Both cases utilize Intel Extension for PyTorch and Intel oneCCL Bindings for PyTorch for optimal training performance, and can be used as a template to run your own workload on multiple nodes.
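As a quick sanity check that the oneCCL bindings are installed correctly, you can initialize the `ccl` backend directly in a small PyTorch script, outside of the `Trainer`. The sketch below assumes the oneCCL environment has been sourced as described above and that the script is launched with something like `mpirun -n 2 python check_ccl.py`; the `PMI_*` variable names are an assumption based on the Intel MPI launcher and may differ in your environment.

```python
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  # registers the "ccl" backend with torch.distributed

# mpirun exposes rank/world size through PMI_* variables (assumption: Intel MPI launcher)
os.environ.setdefault("RANK", os.environ.get("PMI_RANK", "0"))
os.environ.setdefault("WORLD_SIZE", os.environ.get("PMI_SIZE", "1"))
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# Initialize the process group with the oneCCL collective backend
dist.init_process_group(backend="ccl")
rank, world_size = dist.get_rank(), dist.get_world_size()

# Tiny collective to confirm communication works: every process contributes its rank
t = torch.tensor([float(rank)])
dist.all_reduce(t)  # default op is a sum over all ranks
print(f"rank {rank}/{world_size}: all_reduce result = {t.item()}")

dist.destroy_process_group()
```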
huggingface/transformers/blob/main/docs/source/en/perf_train_cpu_many.md
Gradio Demo: chatbot_multimodal ``` !pip install -q gradio ``` ``` # Downloading files from the demo repo import os !wget -q https://github.com/gradio-app/gradio/raw/main/demo/chatbot_multimodal/avatar.png ``` ``` import gradio as gr import os import time # Chatbot demo with multimodal input (text, markdown, LaTeX, code blocks, image, audio, & video). Plus shows support for streaming text. def print_like_dislike(x: gr.LikeData): print(x.index, x.value, x.liked) def add_text(history, text): history = history + [(text, None)] return history, gr.Textbox(value="", interactive=False) def add_file(history, file): history = history + [((file.name,), None)] return history def bot(history): response = "**That's cool!**" history[-1][1] = "" for character in response: history[-1][1] += character time.sleep(0.05) yield history with gr.Blocks() as demo: chatbot = gr.Chatbot( [], elem_id="chatbot", bubble_full_width=False, avatar_images=(None, (os.path.join(os.path.abspath(''), "avatar.png"))), ) with gr.Row(): txt = gr.Textbox( scale=4, show_label=False, placeholder="Enter text and press enter, or upload an image", container=False, ) btn = gr.UploadButton("📁", file_types=["image", "video", "audio"]) txt_msg = txt.submit(add_text, [chatbot, txt], [chatbot, txt], queue=False).then( bot, chatbot, chatbot, api_name="bot_response" ) txt_msg.then(lambda: gr.Textbox(interactive=True), None, [txt], queue=False) file_msg = btn.upload(add_file, [chatbot, btn], [chatbot], queue=False).then( bot, chatbot, chatbot ) chatbot.like(print_like_dislike, None, None) demo.queue() if __name__ == "__main__": demo.launch() ```
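The `bot` function above streams a canned response character by character. If you would rather stream tokens from a real language model, one possible approach — a sketch, not part of the original demo, using a placeholder `gpt2` checkpoint — is to pair 🤗 Transformers' `TextIteratorStreamer` with a background generation thread:

```
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

# Placeholder model choice; swap in any causal LM you have available
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def bot(history):
    # Assumes the last user turn is plain text (file uploads would need extra handling)
    prompt = history[-1][0]
    inputs = tokenizer(prompt, return_tensors="pt")
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    # Run generation in a thread so we can yield partial text as it is produced
    Thread(target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=64)).start()
    history[-1][1] = ""
    for new_text in streamer:
        history[-1][1] += new_text
        yield history
```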
gradio-app/gradio/blob/main/demo/chatbot_multimodal/run.ipynb
DuckDB [DuckDB](https://duckdb.org/docs/) is a database that supports reading and querying Parquet files really fast. Begin by creating a connection to DuckDB, and then install and load the [`httpfs`](https://duckdb.org/docs/extensions/httpfs.html) extension to read and write remote files: <inferencesnippet> <python> ```py import duckdb url = "https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/train/0000.parquet" con = duckdb.connect() con.execute("INSTALL httpfs;") con.execute("LOAD httpfs;") ``` </python> <js> ```js var duckdb = require('duckdb'); var db = new duckdb.Database(':memory:'); var con = db.connect(); con.exec('INSTALL httpfs'); con.exec('LOAD httpfs'); const url = "https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/train/0000.parquet" ``` </js> </inferencesnippet> Now you can write and execute your SQL query on the Parquet file: <inferencesnippet> <python> ```py con.sql(f"SELECT horoscope, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM '{url}' GROUP BY horoscope ORDER BY avg_blog_length DESC LIMIT(5)") ┌───────────┬──────────────┬────────────────────┐ │ horoscope │ count_star() │ avg_blog_length │ │ varchar │ int64 │ double │ ├───────────┼──────────────┼────────────────────┤ │ Aquarius │ 34062 │ 1129.218836239798 │ │ Cancer │ 41509 │ 1098.366812016671 │ │ Capricorn │ 33961 │ 1073.2002002296751 │ │ Libra │ 40302 │ 1072.0718326633914 │ │ Leo │ 40587 │ 1064.0536871412028 │ └───────────┴──────────────┴────────────────────┘ ``` </python> <js> ```js con.all(`SELECT horoscope, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM '${url}' GROUP BY horoscope ORDER BY avg_blog_length DESC LIMIT(5)`, function(err, res) { if (err) { throw err; } console.log(res) }); ``` </js> </inferencesnippet> To query multiple files - for example, if the dataset is sharded: <inferencesnippet> <python> ```py con.sql(f"SELECT horoscope, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM read_parquet({urls[:2]}) GROUP BY horoscope ORDER BY avg_blog_length DESC LIMIT(5)") ┌─────────────┬──────────────┬────────────────────┐ │ horoscope │ count_star() │ avg_blog_length │ │ varchar │ int64 │ double │ ├─────────────┼──────────────┼────────────────────┤ │ Aquarius │ 49568 │ 1125.8306770497095 │ │ Cancer │ 63512 │ 1097.95608703867 │ │ Libra │ 60304 │ 1060.6110539931017 │ │ Capricorn │ 49402 │ 1059.5552609206104 │ │ Sagittarius │ 50431 │ 1057.4589835616982 │ └─────────────┴──────────────┴────────────────────┘ ``` </python> <js> ```js con.all(`SELECT horoscope, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM read_parquet(${JSON.stringify(urls)}) GROUP BY horoscope ORDER BY avg_blog_length DESC LIMIT(5)`, function(err, res) { if (err) { throw err; } console.log(res) }); ``` </js> </inferencesnippet> [DuckDB-Wasm](https://duckdb.org/docs/api/wasm), a package powered by [WebAssembly](https://webassembly.org/), is also available for running DuckDB in any browser. This could be useful, for instance, if you want to create a web app to query Parquet files from the browser!
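Note that the `urls` list used in the sharded-dataset query above is not defined on this page. One way you could build it — a sketch that assumes the `/parquet` endpoint of the dataset viewer API returns a `parquet_files` field with a `url` per file — is:

```py
import requests

# List the Parquet file URLs for the train split of the dataset (assumed response shape)
r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=blog_authorship_corpus")
urls = [f["url"] for f in r.json()["parquet_files"] if f["split"] == "train"]
```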
huggingface/datasets-server/blob/main/docs/source/duckdb.mdx
!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MatCha ## Overview MatCha has been proposed in the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662), from Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. The abstract of the paper states the following: *Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks.* ## Model description MatCha is a model that is trained using `Pix2Struct` architecture. You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct). MatCha is a Visual Question Answering subset of `Pix2Struct` architecture. It renders the input question on the image and predicts the answer. ## Usage Currently 6 checkpoints are available for MatCha: - `google/matcha`: the base MatCha model, used to fine-tune MatCha on downstream tasks - `google/matcha-chartqa`: MatCha model fine-tuned on ChartQA dataset. It can be used to answer questions about charts. - `google/matcha-plotqa-v1`: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. - `google/matcha-plotqa-v2`: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. - `google/matcha-chart2text-statista`: MatCha model fine-tuned on Statista dataset. - `google/matcha-chart2text-pew`: MatCha model fine-tuned on Pew dataset. The models finetuned on `chart2text-pew` and `chart2text-statista` are more suited for summarization, whereas the models finetuned on `plotqa` and `chartqa` are more suited for question answering. 
You can use these models as follows (example on a ChartQA dataset):

```python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image

model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa").to(0)
processor = AutoProcessor.from_pretrained("google/matcha-chartqa")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt").to(0)
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```

## Fine-tuning

To fine-tune MatCha, refer to the pix2struct [fine-tuning notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb). For `Pix2Struct` models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:

```python
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup

optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
```

<Tip>

MatCha is a model that is trained using `Pix2Struct` architecture. You can find more information about `Pix2Struct` in the [Pix2Struct documentation](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct).

</Tip>
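The chart-to-text checkpoints can be used in a similar way for chart summarization. The snippet below is a sketch rather than an official example — it reuses the ChartQA image URL from above as a placeholder and omits the text prompt so the model generates a free-form summary:

```python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image

processor = AutoProcessor.from_pretrained("google/matcha-chart2text-statista")
model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chart2text-statista")

# Any chart image URL can be used here; this one is a placeholder
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(url, stream=True).raw)

# No question is rendered on the image for summarization-style checkpoints
inputs = processor(images=image, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(summary_ids[0], skip_special_tokens=True))
```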
huggingface/transformers/blob/main/docs/source/en/model_doc/matcha.md
-- title: "Graph Classification with Transformers" thumbnail: /blog/assets/125_intro-to-graphml/thumbnail_classification.png --- # Graph classification with Transformers <div class="blog-metadata"> <small>Published April 14, 2023.</small> <a target="_blank" class="btn no-underline text-sm mb-5 font-sans" href="https://github.com/huggingface/blog/blob/main/graphml-classification.md"> Update on GitHub </a> </div> <div class="author-card"> <a href="/clefourrier"> <img class="avatar avatar-user" src="https://aeiljuispo.cloudimg.io/v7/https://s3.amazonaws.com/moonup/production/uploads/1644340617257-noauth.png?w=200&h=200&f=face" title="Gravatar"> <div class="bfc"> <code>clefourrier</code> <span class="fullname">Clémentine Fourrier</span> </div> </a> </div> In the previous [blog](https://huggingface.co/blog/intro-graphml), we explored some of the theoretical aspects of machine learning on graphs. This one will explore how you can do graph classification using the Transformers library. (You can also follow along by downloading the demo notebook [here](https://github.com/huggingface/blog/blob/main/notebooks/graphml-classification.ipynb)!) At the moment, the only graph transformer model available in Transformers is Microsoft's [Graphormer](https://arxiv.org/abs/2106.05234), so this is the one we will use here. We are looking forward to seeing what other models people will use and integrate 🤗 ## Requirements To follow this tutorial, you need to have installed `datasets` and `transformers` (version >= 4.27.2), which you can do with `pip install -U datasets transformers`. ## Data To use graph data, you can either start from your own datasets, or use [those available on the Hub](https://huggingface.co/datasets?task_categories=task_categories:graph-ml&sort=downloads). We'll focus on using already available ones, but feel free to [add your datasets](https://huggingface.co/docs/datasets/upload_dataset)! ### Loading Loading a graph dataset from the Hub is very easy. Let's load the `ogbg-mohiv` dataset (a baseline from the [Open Graph Benchmark](https://ogb.stanford.edu/) by Stanford), stored in the `OGB` repository: ```python from datasets import load_dataset # There is only one split on the hub dataset = load_dataset("OGB/ogbg-molhiv") dataset = dataset.shuffle(seed=0) ``` This dataset already has three splits, `train`, `validation`, and `test`, and all these splits contain our 5 columns of interest (`edge_index`, `edge_attr`, `y`, `num_nodes`, `node_feat`), which you can see by doing `print(dataset)`. If you have other graph libraries, you can use them to plot your graphs and further inspect the dataset. For example, using PyGeometric and matplotlib: ```python import networkx as nx import matplotlib.pyplot as plt # We want to plot the first train graph graph = dataset["train"][0] edges = graph["edge_index"] num_edges = len(edges[0]) num_nodes = graph["num_nodes"] # Conversion to networkx format G = nx.Graph() G.add_nodes_from(range(num_nodes)) G.add_edges_from([(edges[0][i], edges[1][i]) for i in range(num_edges)]) # Plot nx.draw(G) ``` ### Format On the Hub, graph datasets are mostly stored as lists of graphs (using the `jsonl` format). A single graph is a dictionary, and here is the expected format for our graph classification datasets: - `edge_index` contains the indices of nodes in edges, stored as a list containing two parallel lists of edge indices. - **Type**: list of 2 lists of integers. 
- **Example**: a graph containing four nodes (0, 1, 2 and 3) and where connections are 1->2, 1->3 and 3->1 will have `edge_index = [[1, 1, 3], [2, 3, 1]]`. You might notice that node 0 is not present here, as it is not part of an edge per se. This is why the next attribute is important.
- `num_nodes` indicates the total number of nodes available in the graph (by default, it is assumed that nodes are numbered sequentially).
  - **Type**: integer
  - **Example**: In our above example, `num_nodes = 4`.
- `y` maps each graph to what we want to predict from it (be it a class, a property value, or several binary labels for different tasks).
  - **Type**: list of either integers (for multi-class classification), floats (for regression), or lists of ones and zeroes (for binary multi-task classification)
  - **Example**: We could predict the graph size (small = 0, medium = 1, big = 2). Here, `y = [0]`.
- `node_feat` contains the available features (if present) for each node of the graph, ordered by node index.
  - **Type**: list of lists of integers (Optional)
  - **Example**: Our above nodes could have, for example, types (like different atoms in a molecule). This could give `node_feat = [[1], [0], [1], [1]]`.
- `edge_attr` contains the available attributes (if present) for each edge of the graph, following the `edge_index` ordering.
  - **Type**: list of lists of integers (Optional)
  - **Example**: Our above edges could have, for example, types (like molecular bonds). This could give `edge_attr = [[0], [1], [1]]`.

### Preprocessing

Graph transformer frameworks usually apply specific preprocessing to their datasets to generate added features and properties which help the underlying learning task (classification in our case).
Here, we use Graphormer's default preprocessing, which generates in/out degree information, shortest path matrices between nodes, and other properties of interest for the model.

```python
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator

dataset_processed = dataset.map(preprocess_item, batched=False)
```

It is also possible to apply this preprocessing on the fly, in the DataCollator's parameters (by setting `on_the_fly_processing` to True): not all datasets are as small as `ogbg-molhiv`, and for large graphs, it might be too costly to store all the preprocessed data beforehand.

## Model

### Loading

Here, we load an existing pretrained model/checkpoint and fine-tune it on our downstream task, which is a binary classification task (hence `num_classes = 2`). We could also fine-tune our model on regression tasks (`num_classes = 1`) or on multi-task classification.

```python
from transformers import GraphormerForGraphClassification

model = GraphormerForGraphClassification.from_pretrained(
    "clefourrier/pcqm4mv2_graphormer_base",
    num_classes=2, # num_classes for the downstream task
    ignore_mismatched_sizes=True,
)
```

Let's look at this in more detail. Calling the `from_pretrained` method on our model downloads and caches the weights for us. As the number of classes (for prediction) is dataset dependent, we pass the new `num_classes` as well as `ignore_mismatched_sizes` alongside the `model_checkpoint`. This makes sure a custom classification head is created, specific to our task, hence likely different from the original decoder head.

It is also possible to create a new randomly initialized model to train from scratch, either following the known parameters of a given checkpoint or by manually choosing them, as sketched below.
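For instance, a minimal sketch of a randomly initialized model (only `num_classes` is set explicitly here; every other hyperparameter falls back to the defaults of `GraphormerConfig`, which you can override to match a checkpoint or your own design):

```python
from transformers import GraphormerConfig, GraphormerForGraphClassification

# Build a configuration from scratch for our binary classification task
config = GraphormerConfig(num_classes=2)
# Instantiate the model with randomly initialized weights from that configuration
model = GraphormerForGraphClassification(config)
```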
### Training or fine-tuning

To train our model, we will simply use a `Trainer`. To instantiate it, we will need to define the training configuration and the evaluation metric. The most important one is `TrainingArguments`, a class that contains all the attributes needed to customize the training. It requires a folder name, which will be used to save the checkpoints of the model.

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    "graph-classification",
    logging_dir="graph-classification",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    auto_find_batch_size=True, # batch size can be changed automatically to prevent OOMs
    gradient_accumulation_steps=10,
    dataloader_num_workers=4,
    num_train_epochs=20,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
    push_to_hub=False,
)
```

For graph datasets, it is particularly important to play around with batch sizes and gradient accumulation steps to train on enough samples while avoiding out-of-memory errors.

The last argument `push_to_hub` allows the Trainer to push the model to the Hub regularly during training, at each saving step.

```python
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_processed["train"],
    eval_dataset=dataset_processed["validation"],
    data_collator=GraphormerDataCollator(),
)
```

In the `Trainer` for graph classification, it is important to pass the specific data collator for the given graph dataset, which will convert individual graphs to batches for training.

```python
train_results = trainer.train()
trainer.push_to_hub()
```

When the model is trained, it can be saved to the hub with all the associated training artefacts using `push_to_hub`.

As this model is quite big, it takes about a day to train/fine-tune for 20 epochs on CPU (Intel Core i7). To go faster, you could use powerful GPUs and parallelization instead, by launching the code either in a Colab notebook or directly on the cluster of your choice.

## Ending note

Now that you know how to use `transformers` to train a graph classification model, we hope you will try to share your favorite graph transformer checkpoints, models, and datasets on the Hub for the rest of the community to use!
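Before sharing, you may also want to sanity-check the fine-tuned model on the held-out test split. A minimal sketch, reusing the `trainer` and `dataset_processed` objects defined above:

```python
import numpy as np

# Get logits for the test split and turn them into class predictions
test_output = trainer.predict(dataset_processed["test"])
# Depending on the model outputs, predictions may be a tuple; the logits are then its first element
logits = test_output.predictions[0] if isinstance(test_output.predictions, tuple) else test_output.predictions
predicted_classes = np.argmax(logits, axis=-1)
print(predicted_classes[:10])
```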
huggingface/blog/blob/main/graphml-classification.md
-- title: Fine-Tune a Semantic Segmentation Model with a Custom Dataset thumbnail: /blog/assets/56_fine_tune_segformer/thumb.png authors: - user: segments-tobias guest: true - user: nielsr --- # Fine-Tune a Semantic Segmentation Model with a Custom Dataset <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> **This guide shows how you can fine-tune Segformer, a state-of-the-art semantic segmentation model. Our goal is to build a model for a pizza delivery robot, so it can see where to drive and recognize obstacles 🍕🤖. We'll first label a set of sidewalk images on [Segments.ai](https://segments.ai?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg). Then we'll fine-tune a pre-trained SegFormer model by using [`🤗 transformers`](https://huggingface.co/transformers), an open-source library that offers easy-to-use implementations of state-of-the-art models. Along the way, you'll learn how to work with the Hugging Face Hub, the largest open-source catalog of models and datasets.** Semantic segmentation is the task of classifying each pixel in an image. You can see it as a more precise way of classifying an image. It has a wide range of use cases in fields such as medical imaging and autonomous driving. For example, for our pizza delivery robot, it is important to know exactly where the sidewalk is in an image, not just whether there is a sidewalk or not. Because semantic segmentation is a type of classification, the network architectures used for image classification and semantic segmentation are very similar. In 2014, [a seminal paper](https://arxiv.org/abs/1411.4038) by Long et al. used convolutional neural networks for semantic segmentation. More recently, Transformers have been used for image classification (e.g. [ViT](https://huggingface.co/blog/fine-tune-vit)), and now they're also being used for semantic segmentation, pushing the state-of-the-art further. [SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer) is a model for semantic segmentation introduced by Xie et al. in 2021. It has a hierarchical Transformer encoder that doesn't use positional encodings (in contrast to ViT) and a simple multi-layer perceptron decoder. SegFormer achieves state-of-the-art performance on multiple common datasets. Let's see how our pizza delivery robot performs for sidewalk images. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Pizza delivery robot segmenting a scene" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/pizza-scene.png"></medium-zoom> </figure> Let's get started by installing the necessary dependencies. Because we're going to push our dataset and model to the Hugging Face Hub, we need to install [Git LFS](https://git-lfs.github.com/) and log in to Hugging Face. The installation of `git-lfs` might be different on your system. Note that Google Colab has Git LFS pre-installed. ```bash pip install -q transformers datasets evaluate segments-ai apt-get install git-lfs git lfs install huggingface-cli login ``` # 1. Create/choose a dataset The first step in any ML project is assembling a good dataset. 
In order to train a semantic segmentation model, we need a dataset with semantic segmentation labels. We can either use an existing dataset from the Hugging Face Hub, such as [ADE20k](https://huggingface.co/datasets/scene_parse_150), or create our own dataset. For our pizza delivery robot, we could use an existing autonomous driving dataset such as [CityScapes](https://www.cityscapes-dataset.com/) or [BDD100K](https://bdd100k.com/). However, these datasets were captured by cars driving on the road. Since our delivery robot will be driving on the sidewalk, there will be a mismatch between the images in these datasets and the data our robot will see in the real world. We don't want our delivery robot to get confused, so we'll create our own semantic segmentation dataset using images captured on sidewalks. We'll show how you can label the images we captured in the next steps. If you just want to use our finished, labeled dataset, you can skip the ["Create your own dataset"](#create-your-own-dataset) section and continue from ["Use a dataset from the Hub"](#use-a-dataset-from-the-hub). ## Create your own dataset To create your semantic segmentation dataset, you'll need two things: 1. images covering the situations your model will encounter in the real world 2. segmentation labels, i.e. images where each pixel represents a class/category. We went ahead and captured a thousand images of sidewalks in Belgium. Collecting and labeling such a dataset can take a long time, so you can start with a smaller dataset and expand it if the model does not perform well enough. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Example images from the sidewalk dataset" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-examples.png"></medium-zoom> <figcaption>Some examples of the raw images in the sidewalk dataset.</figcaption> </figure> To obtain segmentation labels, we need to indicate the classes of all the regions/objects in these images. This can be a time-consuming endeavour, but using the right tools can speed up the task significantly. For labeling, we'll use [Segments.ai](https://segments.ai?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg), since it has smart labeling tools for image segmentation and an easy-to-use Python SDK. ### Set up the labeling task on Segments.ai First, create an account at [https://segments.ai/join](https://segments.ai/join?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg). Next, create a new dataset and upload your images. You can either do this from the web interface or via the Python SDK (see the [notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb)). ### Label the images Now that the raw data is loaded, go to [segments.ai/home](https://segments.ai/home) and open the newly created dataset. Click "Start labeling" and create segmentation masks. You can use the ML-powered superpixel and autosegment tools to label faster. 
<figure class="image table text-center m-0"> <video alt="Labeling a sidewalk image on Segments.ai" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-labeling-crop.mp4" poster="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-labeling-crop-poster.png" type="video/mp4"> </video> <figcaption>Tip: when using the superpixel tool, scroll to change the superpixel size, and click and drag to select segments.</figcaption> </figure> ### Push the result to the Hugging Face Hub When you're done labeling, create a new dataset release containing the labeled data. You can either do this on the releases tab on Segments.ai, or programmatically through the SDK as shown in the notebook. Note that creating the release can take a few seconds. You can check the releases tab on Segments.ai to check if your release is still being created. Now, we'll convert the release to a [Hugging Face dataset](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset) via the Segments.ai Python SDK. If you haven't set up the Segments Python client yet, follow the instructions in the "Set up the labeling task on Segments.ai" section of the [notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb#scrollTo=9T2Jr9t9y4HD). *Note that the conversion can take a while, depending on the size of your dataset.* ```python from segments.huggingface import release2dataset release = segments_client.get_release(dataset_identifier, release_name) hf_dataset = release2dataset(release) ``` If we inspect the features of the new dataset, we can see the image column and the corresponding label. The label consists of two parts: a list of annotations and a segmentation bitmap. The annotation corresponds to the different objects in the image. For each object, the annotation contains an `id` and a `category_id`. The segmentation bitmap is an image where each pixel contains the `id` of the object at that pixel. More information can be found in the [relevant docs](https://docs.segments.ai/reference/sample-and-label-types/label-types#segmentation-labels). For semantic segmentation, we need a semantic bitmap that contains a `category_id` for each pixel. We'll use the `get_semantic_bitmap` function from the Segments.ai SDK to convert the bitmaps to semantic bitmaps. To apply this function to all the rows in our dataset, we'll use [`dataset.map`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.map). ```python from segments.utils import get_semantic_bitmap def convert_segmentation_bitmap(example): return { "label.segmentation_bitmap": get_semantic_bitmap( example["label.segmentation_bitmap"], example["label.annotations"], id_increment=0, ) } semantic_dataset = hf_dataset.map( convert_segmentation_bitmap, ) ``` You can also rewrite the `convert_segmentation_bitmap` function to use batches and pass `batched=True` to `dataset.map`. This will significantly speed up the mapping, but you might need to tweak the `batch_size` to ensure the process doesn't run out of memory. The SegFormer model we're going to fine-tune later expects specific names for the features. For convenience, we'll match this format now. 
The SegFormer model we're going to fine-tune later expects specific names for the features. For convenience, we'll match this format now. Thus, we'll rename the `image` feature to `pixel_values` and the `label.segmentation_bitmap` to `label` and discard the other features.

```python
semantic_dataset = semantic_dataset.rename_column('image', 'pixel_values')
semantic_dataset = semantic_dataset.rename_column('label.segmentation_bitmap', 'label')
semantic_dataset = semantic_dataset.remove_columns(['name', 'uuid', 'status', 'label.annotations'])
```

We can now push the transformed dataset to the Hugging Face Hub. That way, your team and the Hugging Face community can make use of it. In the next section, we'll see how you can load the dataset from the Hub.

```python
hf_dataset_identifier = f"{hf_username}/{dataset_name}"

semantic_dataset.push_to_hub(hf_dataset_identifier)
```

## Use a dataset from the Hub

If you don't want to create your own dataset, but found a suitable dataset for your use case on the Hugging Face Hub, you can define the identifier here.

For example, you can use the full labeled sidewalk dataset. Note that you can check out the examples [directly in your browser](https://huggingface.co/datasets/segments/sidewalk-semantic).

```python
hf_dataset_identifier = "segments/sidewalk-semantic"
```

# 2. Load and prepare the Hugging Face dataset for training

Now that we've created a new dataset and pushed it to the Hugging Face Hub, we can load the dataset in a single line.

```python
from datasets import load_dataset

ds = load_dataset(hf_dataset_identifier)
```

Let's shuffle the dataset and split it into a train and test set.

```python
ds = ds.shuffle(seed=1)
ds = ds["train"].train_test_split(test_size=0.2)
train_ds = ds["train"]
test_ds = ds["test"]
```

We'll extract the number of labels and the mapping from label ids to human-readable label names, so we can configure the segmentation model correctly later on.

```python
import json
from huggingface_hub import hf_hub_download

filename = "id2label.json"
id2label = json.load(
    open(hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset"), "r")
)
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}

num_labels = len(id2label)
```

## Image processor & data augmentation

A SegFormer model expects the input to be of a certain shape. To transform our training data to match the expected shape, we can use `SegformerImageProcessor`. We could use the `ds.map` function to apply the image processor to the whole training dataset in advance, but this can take up a lot of disk space. Instead, we'll use a *transform*, which will only prepare a batch of data when that data is actually used (on-the-fly). This way, we can start training without waiting for further data preprocessing.

In our transform, we'll also define some data augmentations to make our model more resilient to different lighting conditions. We'll use the [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) function from `torchvision` to randomly change the brightness, contrast, saturation, and hue of the images in the batch.
```python
from torchvision.transforms import ColorJitter
from transformers import SegformerImageProcessor

processor = SegformerImageProcessor()
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)

def train_transforms(example_batch):
    images = [jitter(x) for x in example_batch['pixel_values']]
    labels = [x for x in example_batch['label']]
    inputs = processor(images, labels)
    return inputs


def val_transforms(example_batch):
    images = [x for x in example_batch['pixel_values']]
    labels = [x for x in example_batch['label']]
    inputs = processor(images, labels)
    return inputs


# Set transforms
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
```

# 3. Fine-tune a SegFormer model

## Load the model to fine-tune

The SegFormer authors define 5 models with increasing sizes: B0 to B5. The following chart (taken from the original paper) shows the performance of these different models on the ADE20K dataset, compared to other models.

<figure class="image table text-center m-0 w-full">
  <medium-zoom background="rgba(0,0,0,.7)" alt="SegFormer model variants compared with other segmentation models" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/segformer.png"></medium-zoom>
  <figcaption><a href="https://arxiv.org/abs/2105.15203">Source</a></figcaption>
</figure>

Here, we'll load the smallest SegFormer model (B0), pre-trained on ImageNet-1k. It's only about 14MB in size! Using a small model will make sure that our model can run smoothly on our pizza delivery robot.

```python
from transformers import SegformerForSemanticSegmentation

pretrained_model_name = "nvidia/mit-b0"
model = SegformerForSemanticSegmentation.from_pretrained(
    pretrained_model_name,
    id2label=id2label,
    label2id=label2id
)
```

## Set up the Trainer

To fine-tune the model on our data, we'll use Hugging Face's [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer). To use a Trainer, we need to set up the training configuration and an evaluation metric.

First, we'll set up the [`TrainingArguments`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). This defines all training hyperparameters, such as the learning rate, the number of epochs, and the frequency to save the model. We also specify that the model should be pushed to the Hub after training (`push_to_hub=True`) and set a model name (`hub_model_id`).

```python
from transformers import TrainingArguments

epochs = 50
lr = 0.00006
batch_size = 2

hub_model_id = "segformer-b0-finetuned-segments-sidewalk-2"

training_args = TrainingArguments(
    "segformer-b0-finetuned-segments-sidewalk-outputs",
    learning_rate=lr,
    num_train_epochs=epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_total_limit=3,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_steps=20,
    eval_steps=20,
    logging_steps=1,
    eval_accumulation_steps=5,
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_model_id=hub_model_id,
    hub_strategy="end",
)
```

Next, we'll define a function that computes the evaluation metric we want to work with. Because we're doing semantic segmentation, we'll use the [mean Intersection over Union (mIoU)](https://huggingface.co/spaces/evaluate-metric/mean_iou), directly accessible in the [`evaluate` library](https://huggingface.co/docs/evaluate/index). IoU represents the overlap of segmentation masks. Mean IoU is the average of the IoU of all semantic classes.
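As a quick illustration of what this metric measures, here is a small hand-worked toy example (ours, not taken from the `evaluate` library) that computes the per-class IoU and the mean IoU for a flattened 4-pixel prediction with two classes:

```python
import numpy as np

# Toy example: flattened 4-pixel masks with two classes (0 and 1).
pred = np.array([0, 0, 1, 1])
gt = np.array([0, 1, 1, 1])

ious = []
for c in [0, 1]:
    intersection = np.logical_and(pred == c, gt == c).sum()
    union = np.logical_or(pred == c, gt == c).sum()
    ious.append(intersection / union)

print(ious)           # [0.5, 0.666...]: IoU for class 0 and class 1
print(np.mean(ious))  # ≈ 0.583: the mean IoU
```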
Take a look at [this blog post](https://www.jeremyjordan.me/evaluating-image-segmentation-models/) for an overview of evaluation metrics for image segmentation.

Because our model outputs logits with dimensions height/4 and width/4, we have to upscale them before we can compute the mIoU.

```python
import torch
from torch import nn
import evaluate

metric = evaluate.load("mean_iou")

def compute_metrics(eval_pred):
    with torch.no_grad():
        logits, labels = eval_pred
        logits_tensor = torch.from_numpy(logits)
        # scale the logits to the size of the label
        logits_tensor = nn.functional.interpolate(
            logits_tensor,
            size=labels.shape[-2:],
            mode="bilinear",
            align_corners=False,
        ).argmax(dim=1)

        pred_labels = logits_tensor.detach().cpu().numpy()
        # currently using _compute instead of compute
        # see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
        metrics = metric._compute(
            predictions=pred_labels,
            references=labels,
            num_labels=len(id2label),
            ignore_index=0,
            reduce_labels=processor.do_reduce_labels,
        )

        # add per category metrics as individual key-value pairs
        per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
        per_category_iou = metrics.pop("per_category_iou").tolist()

        metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
        metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})

        return metrics
```

Finally, we can instantiate a `Trainer` object.

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=compute_metrics,
)
```

Now that our trainer is set up, training is as simple as calling the `train` function. We don't need to worry about managing our GPU(s); the trainer will take care of that.

```python
trainer.train()
```

When we're done with training, we can push our fine-tuned model and the image processor to the Hub. This will also automatically create a model card with our results. We'll supply some extra information in `kwargs` to make the model card more complete.

```python
kwargs = {
    "tags": ["vision", "image-segmentation"],
    "finetuned_from": pretrained_model_name,
    "dataset": hf_dataset_identifier,
}

processor.push_to_hub(hub_model_id)
trainer.push_to_hub(**kwargs)
```

# 4. Inference

Now comes the exciting part: using our fine-tuned model! In this section, we'll show how you can load your model from the hub and use it for inference.

However, you can also try out your model directly on the Hugging Face Hub, thanks to the cool widgets powered by the [hosted inference API](https://api-inference.huggingface.co/docs/python/html/index.html). If you pushed your model to the Hub in the previous step, you should see an inference widget on your model page. You can add default examples to the widget by defining example image URLs in your model card. See [this model card](https://huggingface.co/segments-tobias/segformer-b0-finetuned-segments-sidewalk/blob/main/README.md) as an example.
<figure class="image table text-center m-0 w-full"> <video alt="The interactive widget of the model" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/widget.mp4" poster="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/widget-poster.png" type="video/mp4"> </video> </figure> ## Use the model from the Hub We'll first load the model from the Hub using `SegformerForSemanticSegmentation.from_pretrained()`. ```python from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512") model = SegformerForSemanticSegmentation.from_pretrained(f"{hf_username}/{hub_model_id}") ``` Next, we'll load an image from our test dataset. ```python image = test_ds[0]['pixel_values'] gt_seg = test_ds[0]['label'] image ``` To segment this test image, we first need to prepare the image using the image processor. Then we forward it through the model. We also need to remember to upscale the output logits to the original image size. In order to get the actual category predictions, we just have to apply an `argmax` on the logits. ```python from torch import nn inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) # First, rescale logits to original image size upsampled_logits = nn.functional.interpolate( logits, size=image.size[::-1], # (height, width) mode='bilinear', align_corners=False ) # Second, apply argmax on the class dimension pred_seg = upsampled_logits.argmax(dim=1)[0] ``` Now it's time to display the result. We'll display the result next to the ground-truth mask. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(1,1,1,1)" alt="SegFormer prediction vs the ground truth" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/output.png"></medium-zoom> </figure> What do you think? Would you send our pizza delivery robot on the road with this segmentation information? The result might not be perfect yet, but we can always expand our dataset to make the model more robust. We can now also go train a larger SegFormer model, and see how it stacks up. # 5. Conclusion That's it! You now know how to create your own image segmentation dataset and how to use it to fine-tune a semantic segmentation model. We introduced you to some useful tools along the way, such as: * [Segments.ai](https://segments.ai) for labeling your data * [🤗 datasets](https://huggingface.co/docs/datasets/) for creating and sharing a dataset * [🤗 transformers](https://huggingface.co/transformers) for easily fine-tuning a state-of-the-art segmentation model * [Hugging Face Hub](https://huggingface.co/docs/hub/main) for sharing our dataset and model, and for creating an inference widget for our model We hope you enjoyed this post and learned something. Feel free to share your own model with us on Twitter ([@TobiasCornille](https://twitter.com/tobiascornille), [@NielsRogge](https://twitter.com/nielsrogge), and [@huggingface](https://twitter.com/huggingface)).
huggingface/blog/blob/main/fine-tune-segformer.md