Here is a full test example:

```python
from transformers.testing_utils import CaptureStdout

msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
    print(msg + final)
assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}"
```

If you'd like to capture stderr use the CaptureStderr class instead:

```python
from transformers.testing_utils import CaptureStderr

with CaptureStderr() as cs:
    function_that_writes_to_stderr()
print(cs.err)
```
If you need to capture both streams at once, use the parent CaptureStd class:

```python
from transformers.testing_utils import CaptureStd

with CaptureStd() as cs:
    function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
```
Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on exit from the context.

Capturing logger stream

If you need to validate the output of a logger, you can use CaptureLogger:

```python
from transformers import logging
from transformers.testing_utils import CaptureLogger

msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
    logger.info(msg)
assert cl.out == msg + "\n"
```
Testing with environment variables

If you want to test the impact of environment variables for a specific test, you can use the helper decorator transformers.testing_utils.mockenv:

```python
import os
import unittest

from transformers.testing_utils import mockenv


class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_override(self):
        env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
```
At times an external program needs to be called, which requires setting PYTHONPATH in os.environ to include multiple local paths. The helper class transformers.testing_utils.TestCasePlus comes to the rescue:

```python
from transformers.testing_utils import TestCasePlus


class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        env = self.get_env()
        # now call the external program, passing `env` to it
```
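For instance, the external call could be made via subprocess from the standard library. This is only a sketch: the command here is a placeholder, and you would substitute the actual program you need to run:

```python
import subprocess

from transformers.testing_utils import TestCasePlus


class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        env = self.get_env()
        # hypothetical invocation; substitute the actual program you need to run
        cmd = ["python", "-c", "import transformers; print(transformers.__file__)"]
        result = subprocess.run(cmd, env=env, capture_output=True, text=True, check=True)
        self.assertIn("transformers", result.stdout)
```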
Depending on whether the test file was under the tests test suite or examples, it'll correctly set up env[PYTHONPATH] to include one of these two directories, and also the src directory to ensure the testing is done against the current repo, and finally with whatever env[PYTHONPATH] was already set to before the test was called, if anything. This helper method creates a copy of the os.environ object, so the original remains intact.

Getting reproducible results

In some situations you may want to remove randomness from your tests. To get identical, reproducible results, you will need to fix the seed:

```python
seed = 42

# python RNG
import random

random.seed(seed)

# pytorch RNGs
import torch

torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np

np.random.seed(seed)

# tf RNG
import tensorflow as tf

tf.random.set_seed(seed)
```
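As a convenience, 🤗 Transformers also ships a set_seed helper that seeds the Python, NumPy, PyTorch, and (if installed) TensorFlow RNGs in a single call:

```python
from transformers import set_seed

set_seed(42)
```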
Debugging tests

To start a debugger at the point of the warning, do this:

```bash
pytest tests/utils/test_logging.py -W error::UserWarning --pdb
```

Working with GitHub Actions workflows

To trigger a self-push workflow CI job, you must:
1. Create a new branch on transformers origin (not a fork!). The branch name has to start with either ci_ or ci- (main triggers it too, but we can't do PRs on main). It also gets triggered only for specific paths - you can find the up-to-date definition, in case it changed since this document was written, here under push:
2. Create a PR from this branch.
3. Then you can see the job appear here. It may not run right away if there is a backlog.
Testing Experimental CI Features

Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore, if a new CI feature is to be added, it should be done as follows:
1. Create a new dedicated job that tests what needs to be tested.
2. The new job must always succeed so that it gives us a green ✓ (details below).
3. Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from the github.com UI direct file edit, various forced pushes, etc. - there are so many) while monitoring the experimental job's logs (not the overall job green as it's purposefully always green).
4. When it's clear that everything is solid, then merge the new changes into existing jobs.
That way experiments on CI functionality itself won't interfere with the normal workflow.

Now how can we make the job always succeed while the new CI feature is being developed? Some CIs, like TravisCI, support ignore-step-failure and will report the overall job as successful, but CircleCI and GitHub Actions as of this writing don't support that. So the following workaround can be used:
1. set +euo pipefail at the beginning of the run command to suppress most potential failures in the bash script.
2. The last command must be a success: echo "done" or just true will do.
Here is an example:

```yaml
- run:
    name: run CI experiment
    command: |
        set +euo pipefail
        echo "setting run-all-despite-any-errors-mode"
        this_command_will_fail
        echo "but bash continues to run"
        # emulate another failure
        false
        # but the last command must be a success
        echo "during experiment do not remove: reporting success to CI, even if there were failures"
```

For simple commands, you could also do:
```bash
cmd_that_may_fail || true
```

Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, while removing set +euo pipefail or any other things you may have added to ensure that the experimental job doesn't interfere with the normal CI functioning.

This whole process would have been much easier if we could just set something like allow-failure for the experimental step and let it fail without impacting the overall status of PRs. But as mentioned earlier, CircleCI and GitHub Actions don't support it at the moment.

You can vote for this feature and see where it stands in these CI-specific threads:
- GitHub Actions:
- CircleCI:

DeepSpeed integration

For a PR that involves the DeepSpeed integration, keep in mind our CircleCI PR CI setup doesn't have GPUs. Tests requiring GPUs are run on a different CI nightly. This means if you get a passing CI report in your PR, it doesn't mean the DeepSpeed tests pass.

To run DeepSpeed tests:

```bash
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```

Any changes to the modeling or PyTorch examples code require running the model zoo tests as well:

```bash
RUN_SLOW=1 pytest tests/deepspeed
```
Performance and Scalability

Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment.

This documentation aims to assist you in overcoming these challenges and finding the optimal settings for your use-case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training or CPU vs. GPU for inference.

Use this document as your starting point to navigate further to the methods that match your scenario.

Training

Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. The methods that you can apply to improve training efficiency on a single GPU extend to other setups such as multiple GPUs. However, there are also techniques that are specific to multi-GPU or CPU training. We cover them in separate sections.
- Methods and tools for efficient training on a single GPU: start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both.
- Multi-GPU training section: explore this section to learn about further optimization methods that apply to multi-GPU settings, such as data, tensor, and pipeline parallelism.
- CPU training section: learn about mixed precision training on CPU.
- Efficient Training on Multiple CPUs: learn about distributed CPU training.
- Training on TPU with TensorFlow: if you are new to TPUs, refer to this section for an opinionated introduction to training on TPUs and using XLA.
- Custom hardware for training: find tips and tricks when building your own deep learning rig.
- Hyperparameter Search using Trainer API
Inference

Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups.

- Inference on a single CPU
- Inference on a single GPU
- Multi-GPU inference
- XLA Integration for TensorFlow Models

Training and inference

Here you'll find techniques, tips and tricks that apply whether you are training a model or running inference with it.

- Instantiating a big model
- Troubleshooting performance issues

Contribute

This document is far from complete and a lot more needs to be added, so if you have additions or corrections to make, please don't hesitate to open a PR, or if you aren't sure, start an Issue and we can discuss the details there.

When making contributions stating that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you).
What 🤗 Transformers can do

🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, but it also has non-Transformer models like modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind them. Want to remove a background object from a picture taken by your smartphone? This is an example of a panoptic segmentation task (don't worry if you don't know what this means yet, we'll describe it in the following sections!).

This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code!

Audio

Audio and speech processing tasks are a little different from the other modalities mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform can't be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher, and the audio more closely resembles the original audio source.

Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start audio and speech processing tasks by directly feeding the raw audio waveform to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features.

Audio classification

Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include:
- acoustic scene classification: label audio with a scene label ("office", "beach", "stadium")
- acoustic event detection: label audio with a sound event label ("car horn", "whale calling", "glass breaking")
- tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting)
- music classification: label music with a genre label ("metal", "hip-hop", "country")
```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er")
>>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4532, 'label': 'hap'},
 {'score': 0.3622, 'label': 'sad'},
 {'score': 0.0943, 'label': 'neu'},
 {'score': 0.0903, 'label': 'ang'}]
```
Automatic speech recognition Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks due partly to speech being such a natural form of human communication. Today, ASR systems are embedded in "smart" technology products like speakers, phones, and cars. We can ask our virtual assistants to play music, set reminders, and tell us the weather. But one of the key challenges Transformer architectures have helped with is in low-resource languages. By pretraining on large amounts of speech data, finetuning the model on only one hour of labeled speech data in a low-resource language can still produce high-quality results compared to previous ASR systems trained on 100x more labeled data.
```py
>>> from transformers import pipeline

>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
Computer vision

One of the earliest successful computer vision tasks was recognizing images of zip code numbers using a convolutional neural network (CNN). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image.

Two general ways computer vision tasks can be solved are:
1. Use convolutions to learn the hierarchical features of an image, from low-level features to high-level abstract ones.
2. Split an image into patches and use a Transformer to gradually learn how each image patch is related to the others to form an image. Unlike the bottom-up approach favored by a CNN, this is kind of like starting out with a blurry image and then gradually bringing it into focus.
Image classification

Image classification labels an entire image from a predefined set of classes. Like most classification tasks, there are many practical use cases for image classification, some of which include:
- healthcare: label medical images to detect disease or monitor patient health
- environment: label satellite images to monitor deforestation, inform wildland management or detect wildfires
- agriculture: label images of crops to monitor plant health or satellite images for land use monitoring
- ecology: label images of animal or plant species to monitor wildlife populations or track endangered species
```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="image-classification")
>>> preds = classifier(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.4335, 'label': 'lynx, catamount'}
{'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}
{'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}
{'score': 0.0239, 'label': 'Egyptian cat'}
{'score': 0.0229, 'label': 'tiger cat'}
```
Object detection

Unlike image classification, object detection identifies multiple objects within an image and the objects' positions in an image (defined by the bounding box). Some example applications of object detection include:
- self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights
- remote sensing: disaster monitoring, urban planning, and weather forecasting
- defect detection: detect cracks or structural damage in buildings, and manufacturing defects
```py
>>> from transformers import pipeline

>>> detector = pipeline(task="object-detection")
>>> preds = detector(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds]
>>> preds
[{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}]
```
Image segmentation

Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which uses bounding boxes to label and predict objects in an image, because segmentation is more granular. Segmentation can detect objects at a pixel level. There are several types of image segmentation:
- semantic segmentation: labels each pixel of an image with a class, without distinguishing between separate instances of the same class
- instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object ("dog-1", "dog-2")
- panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class and each distinct instance of an object
Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. Segmentation is also useful for medical imaging, where the task's finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes or create augmented reality experiences by overlaying objects in the real world through your camera.
```py
>>> from transformers import pipeline

>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.9879, 'label': 'LABEL_184'}
{'score': 0.9973, 'label': 'snow'}
{'score': 0.9972, 'label': 'cat'}
```
Depth estimation

Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far objects like pedestrians, traffic signs, and other vehicles are to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings. There are two approaches to depth estimation:
- stereo: depths are estimated by comparing two images of the same scene taken from slightly different angles
- monocular: depths are estimated from a single image

```py
>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation")
>>> preds = depth_estimator(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
```
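The pipeline returns a dict from which you can pull out the rendered depth map; the exact keys may vary with your transformers version, but typically a PIL image is available under "depth":

```py
>>> output = preds["depth"]  # a PIL image visualizing the per-pixel depth
```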
Natural language processing

NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, you can represent a sequence of text as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks!

Text classification

Like classification tasks in any modality, text classification labels a sequence of text (it can be sentence-level, a paragraph, or a document) from a predefined set of classes. There are many practical applications for text classification, some of which include:
- sentiment analysis: label text according to some polarity like positive or negative, which can inform and support decision-making in fields like politics, finance, and marketing
- content classification: label text according to some topic to help organize and filter information in news and social media feeds (weather, sports, finance, etc.)
```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="sentiment-analysis")
>>> preds = classifier("Hugging Face is the best thing since sliced bread!")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.9991, 'label': 'POSITIVE'}]
```
Token classification

In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as tokens. Token classification assigns each token a label from a predefined set of classes. Two common types of token classification are:
- named entity recognition (NER): label a token according to an entity category like organization, person, location or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names.
- part-of-speech tagging (POS): label a token according to its part-of-speech like noun, verb, or adjective. POS is useful for helping translation systems understand how two identical words are grammatically different (bank as a noun versus bank as a verb).
```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="ner")
>>> preds = classifier("Hugging Face is a French company based in New York City.")
>>> preds = [
...     {
...         "entity": pred["entity"],
...         "score": round(pred["score"], 4),
...         "index": pred["index"],
...         "word": pred["word"],
...         "start": pred["start"],
...         "end": pred["end"],
...     }
...     for pred in preds
... ]
>>> print(*preds, sep="\n")
{'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2}
{'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7}
{'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12}
{'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24}
{'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45}
{'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50}
{'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55}
```
Question answering

Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support and help search engines retrieve the relevant information you're asking for. There are two common types of question answering:
- extractive: given a question and some context, the answer is a span of text from the context the model must extract
- abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [Text2TextGenerationPipeline] instead of the [QuestionAnsweringPipeline] shown below
```py
>>> from transformers import pipeline

>>> question_answerer = pipeline(task="question-answering")
>>> preds = question_answerer(
...     question="What is the name of the repository?",
...     context="The name of the repository is huggingface/transformers",
... )
>>> print(
...     f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}"
... )
score: 0.9327, start: 30, end: 54, answer: huggingface/transformers
```
Summarization

Summarization creates a shorter version of a text from a longer one while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task; it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers time and serve as a reading aid. Like question answering, there are two types of summarization:
- extractive: identify and extract the most important sentences from the original text
- abstractive: generate the target summary (which may include new words not in the input document) from the original text; the [SummarizationPipeline] uses the abstractive approach
```py
>>> from transformers import pipeline

>>> summarizer = pipeline(task="summarization")
>>> summarizer(
...     "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles."
... )
[{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}]
```
Translation

Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, translating content to reach wider audiences, and even serving as a learning tool for people learning a new language.

Along with summarization, translation is a sequence-to-sequence task, meaning the model receives an input sequence and returns a target output sequence. In the early days, translation models were mostly monolingual, but recently, there has been increasing interest in multilingual models that can translate between many pairs of languages.
```py
>>> from transformers import pipeline

>>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning."
>>> translator = pipeline(task="translation", model="google-t5/t5-small")
>>> translator(text)
[{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}]
```
Language modeling

Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs) which demonstrate zero- or few-shot learning. This means the model can solve tasks it wasn't explicitly trained to do! Language models can be used to generate fluent and convincing text, though you need to be careful since the text may not always be accurate. There are two types of language modeling:
- causal: the model's objective is to predict the next token in a sequence, and future tokens are masked

```py
>>> from transformers import pipeline

>>> prompt = "Hugging Face is a community-based open-source platform for machine learning."
>>> generator = pipeline(task="text-generation")
>>> generator(prompt)  # doctest: +SKIP
```

- masked: the model's objective is to predict a masked token in a sequence with full access to the tokens in the sequence
text = "Hugging Face is a community-based open-source for machine learning." fill_mask = pipeline(task="fill-mask") preds = fill_mask(text, top_k=1) preds = [ { "score": round(pred["score"], 4), "token": pred["token"], "token_str": pred["token_str"], "sequence": pred["sequence"], } for pred in preds ] preds [{'score': 0.2236, 'token': 1761, 'token_str': ' platform', 'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}]
Multimodal

Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image.

Although multimodal models work with different data types or modalities, internally, the preprocessing steps help the model convert all the data types into embeddings (vectors or lists of numbers that hold meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings.

Document question answering

Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task which takes text as input, document question answering takes an image of a document as input along with a question about the document and returns an answer. Document question answering can be used to parse structured documents and extract key information from them. In the example below, the total amount and change due can be extracted from a receipt.
```py
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests

>>> url = "https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/2/image/image.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> doc_question_answerer = pipeline("document-question-answering", model="magorshunov/layoutlm-invoices")
>>> preds = doc_question_answerer(
...     question="What is the total amount?",
...     image=image,
... )
>>> preds
[{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}]
```
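The same pipeline can be queried again for other fields, such as the change due mentioned above (a hypothetical follow-up question on the same receipt; the answer will depend on the document):

```py
>>> preds = doc_question_answerer(question="What is the change due?", image=image)
```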
Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next section, you'll learn how 🤗 Transformers works to solve these tasks.
Hyperparameter Search using Trainer API

🤗 Transformers provides a [Trainer] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [Trainer] provides an API for hyperparameter search. This doc shows how to enable it with an example.

Hyperparameter Search backend

[Trainer] currently supports four hyperparameter search backends: optuna, sigopt, raytune and wandb. You should install them before using them as the hyperparameter search backend:
```bash
pip install optuna/sigopt/wandb/ray[tune]
```

How to enable Hyperparameter search in example

Define the hyperparameter search space; different backends need different formats. For sigopt, see sigopt object_parameter, it's like the following:
```py
>>> def sigopt_hp_space(trial):
...     return [
...         {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
...         {
...             "categorical_values": ["16", "32", "64", "128"],
...             "name": "per_device_train_batch_size",
...             "type": "categorical",
...         },
...     ]
```
For optuna, see optuna object_parameter, it's like the following:

```py
>>> def optuna_hp_space(trial):
...     return {
...         "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
...         "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
...     }
```
Optuna provides multi-objective HPO. You can pass direction in hyperparameter_search and define your own compute_objective to return multiple objective values. The Pareto Front (List[BestRun]) will be returned by hyperparameter_search; you should refer to the test case TrainerHyperParameterMultiObjectOptunaIntegrationTest in test_trainer. It's like the following:
```py
>>> best_trials = trainer.hyperparameter_search(
...     direction=["minimize", "maximize"],
...     backend="optuna",
...     hp_space=optuna_hp_space,
...     n_trials=20,
...     compute_objective=compute_objective,
... )
```

For raytune, see raytune object_parameter, it's like the following:

```py
>>> def ray_hp_space(trial):
...     return {
...         "learning_rate": tune.loguniform(1e-6, 1e-4),
...         "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
...     }
```
For wandb, see wandb object_parameter, it's like the following:

```py
>>> def wandb_hp_space(trial):
...     return {
...         "method": "random",
...         "metric": {"name": "objective", "goal": "minimize"},
...         "parameters": {
...             "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
...             "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
...         },
...     }
```
Define a model_init function and pass it to the [Trainer], as an example:

```py
>>> def model_init(trial):
...     return AutoModelForSequenceClassification.from_pretrained(
...         model_args.model_name_or_path,
...         from_tf=bool(".ckpt" in model_args.model_name_or_path),
...         config=config,
...         cache_dir=model_args.cache_dir,
...         revision=model_args.model_revision,
...         token=True if model_args.use_auth_token else None,
...     )
```
Create a [Trainer] with your model_init function, training arguments, training and test datasets, and evaluation function:

```py
>>> trainer = Trainer(
...     model=None,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
...     tokenizer=tokenizer,
...     model_init=model_init,
...     data_collator=data_collator,
... )
```
Call hyperparameter search to get the best trial parameters. The backend could be "optuna"/"sigopt"/"wandb"/"ray"; direction can be "minimize" or "maximize", which indicates whether to optimize for a greater or lower objective. You could define your own compute_objective function; if not defined, the default compute_objective will be called, and the sum of eval metrics like f1 is returned as the objective value.
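A minimal sketch of a custom compute_objective, assuming your compute_metrics function produces an "eval_f1" entry in the metrics dict (the key name here is just an example):

```py
>>> def compute_objective(metrics):
...     # return the single value the search should maximize
...     return metrics["eval_f1"]
```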
```py
>>> best_trial = trainer.hyperparameter_search(
...     direction="maximize",
...     backend="optuna",
...     hp_space=optuna_hp_space,
...     n_trials=20,
...     compute_objective=compute_objective,
... )
```

Hyperparameter search for DDP finetune

Currently, hyperparameter search for DDP is enabled for optuna and sigopt. Only the rank-zero process will generate the search trial and pass the arguments to the other ranks.
Efficient Training on Multiple GPUs

If training a model on a single GPU is too slow or if the model's weights do not fit in a single GPU's memory, transitioning to a multi-GPU setup may be a viable option. Prior to making this transition, thoroughly explore all the strategies covered in Methods and tools for efficient training on a single GPU, as they are universally applicable to model training on any number of GPUs. Once you have employed those strategies and found them insufficient for your case on a single GPU, consider moving to multiple GPUs.

Transitioning from a single GPU to multiple GPUs requires the introduction of some form of parallelism, as the workload must be distributed across the resources. Multiple techniques can be employed to achieve parallelism, such as data parallelism, tensor parallelism, and pipeline parallelism. It's important to note that there isn't a one-size-fits-all solution, and the optimal settings depend on the specific hardware configuration you are using.

This guide offers an in-depth overview of individual types of parallelism, as well as guidance on ways to combine techniques and choose an appropriate approach. For step-by-step tutorials on distributed training, please refer to the 🤗 Accelerate documentation.
While the main concepts discussed in this guide are likely applicable across frameworks, here we focus on PyTorch-based implementations.
Before diving deeper into the specifics of each technique, let's go over the rough decision process when training large models on a large infrastructure.

Scalability strategy

Begin by estimating how much vRAM is required to train your model. For models hosted on the 🤗 Hub, use our Model Memory Calculator, which gives you accurate calculations within a few percent margin.

Parallelization strategy for a single Node / multi-GPU setup

When training a model on a single node with multiple GPUs, your choice of parallelization strategy can significantly impact performance. Here's a breakdown of your options:

Case 1: Your model fits onto a single GPU

If your model can comfortably fit onto a single GPU, you have two primary options:
1. DDP - Distributed DataParallel
2. ZeRO - depending on the situation and configuration used, this method may or may not be faster; however, it's worth experimenting with it.

Case 2: Your model doesn't fit onto a single GPU

If your model is too large for a single GPU, you have several alternatives to consider:

1. PipelineParallel (PP)
2. ZeRO
3. TensorParallel (TP)
With very fast inter-node connectivity (e.g., NVLINK or NVSwitch) all three strategies (PP, ZeRO, TP) should result in similar performance. However, without these, PP will be faster than TP or ZeRO. The degree of TP may also make a difference. It's best to experiment with your specific setup to determine the most suitable strategy.

TP is almost always used within a single node. That is, TP size <= GPUs per node.

Case 3: Largest layer of your model does not fit onto a single GPU
1. If you are not using ZeRO, you have to use TensorParallel (TP), because PipelineParallel (PP) alone won't be sufficient to accommodate the large layer.
2. If you are using ZeRO, additionally adopt techniques from the Methods and tools for efficient training on a single GPU.

Parallelization strategy for a multi-Node / multi-GPU setup

When you have fast inter-node connectivity (e.g., NVLINK or NVSwitch) consider using one of these options:

1. ZeRO - as it requires close to no modifications to the model
2. A combination of PipelineParallel (PP) with TensorParallel (TP) and DataParallel (DP) - this approach will result in fewer communications, but requires significant changes to the model

When you have slow inter-node connectivity and are still low on GPU memory:

1. Employ a combination of DataParallel (DP) with PipelineParallel (PP), TensorParallel (TP), and ZeRO.
In the following sections of this guide we dig deeper into how these different parallelism methods work.

Data Parallelism

Even with only 2 GPUs, you can readily leverage the accelerated training capabilities offered by PyTorch's built-in features, such as DataParallel (DP) and DistributedDataParallel (DDP). Note that the PyTorch documentation recommends preferring DistributedDataParallel (DDP) over DataParallel (DP) for multi-GPU training, as it works for all models. Let's take a look at how these two methods work and what makes them different.

DataParallel vs DistributedDataParallel

To understand the key differences in inter-GPU communication overhead between the two methods, let's review the processes per batch:

DDP:
- At the start time the main process replicates the model once from GPU 0 to the rest of the GPUs
- Then for each batch:
  1. Each GPU directly consumes its mini-batch of data.
  2. During backward, once the local gradients are ready, they are averaged across all processes.
DP:

For each batch:
1. GPU 0 reads the batch of data and then sends a mini-batch to each GPU.
2. The up-to-date model is replicated from GPU 0 to each GPU.
3. forward is executed, and output from each GPU is sent to GPU 0 to compute the loss.
4. The loss is distributed from GPU 0 to all GPUs, and backward is run.
5. Gradients from each GPU are sent to GPU 0 and averaged.

Key differences include:
1. DDP performs only a single communication per batch - sending gradients, while DP performs five different data exchanges per batch. DDP copies data using torch.distributed, while DP copies data within the process via Python threads (which introduces limitations associated with the GIL). As a result, DistributedDataParallel (DDP) is generally faster than DataParallel (DP) unless you have slow GPU card inter-connectivity.
2. Under DP, GPU 0 performs significantly more work than the other GPUs, resulting in GPU under-utilization.
3. DDP supports distributed training across multiple machines, whereas DP does not.

This is not an exhaustive list of differences between DP and DDP; other nuances are out of scope of this guide. You can get a deeper understanding of these methods by reading this article.

Let's illustrate the differences between DP and DDP with an experiment. We'll benchmark the differences between DP and DDP with an added context of NVLink presence:
- Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (NV2 in nvidia-smi topo -m).
- Software: pytorch-1.8-to-be + cuda-11.0 / transformers==4.3.0.dev0.
To disable the NVLink feature on one of the benchmarks, we use NCCL_P2P_DISABLE=1.

Here is the benchmarking code and outputs:

DP

```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}
```
DDP w/ NVlink

```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
```
DDP w/o NVlink

```bash
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Here are the same benchmarking results gathered in a table for convenience:

| Type  | NVlink | Time |
| :---- | :----- | ---: |
| 2:DP  | Y      | 110s |
| 2:DDP | Y      | 101s |
| 2:DDP | N      | 131s |

As you can see, in this case DP is ~10% slower than DDP with NVlink, but ~15% faster than DDP without NVlink. The real difference will depend on how much data each GPU needs to sync with the others - the more there is to sync, the more a slow link will impede the overall runtime.

ZeRO Data Parallelism

ZeRO-powered data parallelism (ZeRO-DP) is illustrated in the following diagram from this blog post.
While it may appear complex, it is a very similar concept to DataParallel (DP). The difference is that instead of replicating the full model parameters, gradients and optimizer states, each GPU stores only a slice of it. Then, at run-time when the full layer parameters are needed just for the given layer, all GPUs synchronize to give each other parts that they miss.

To illustrate this idea, consider a simple model with 3 layers (La, Lb, and Lc), where each layer has 3 parameters. Layer La, for example, has weights a0, a1 and a2:

```
La | Lb | Lc
---|----|---
a0 | b0 | c0
a1 | b1 | c1
a2 | b2 | c2
```

If we have 3 GPUs, ZeRO-DP splits the model onto 3 GPUs like so:
```
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0

GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1

GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
```
In a way, this is the same horizontal slicing as tensor parallelism, as opposed to vertical slicing, where one puts whole layer-groups on different GPUs. Now let's see how this works:

Each of these GPUs will get the usual mini-batch as it works in DP:

```
x0 => GPU0
x1 => GPU1
x2 => GPU2
```

The inputs are passed without modifications, as if they would be processed by the original model.

First, the inputs get to the layer La. What happens at this point? On GPU0: the x0 mini-batch requires the a0, a1, a2 parameters to do its forward path through the layer, but GPU0 has only a0. It will get a1 from GPU1 and a2 from GPU2, bringing all the pieces of the model together. In parallel, GPU1 gets another mini-batch - x1. GPU1 has the a1 parameter, but needs a0 and a2, so it gets those from GPU0 and GPU2. The same happens to GPU2, which gets the mini-batch x2. It gets a0 and a1 from GPU0 and GPU1.

This way each of the 3 GPUs gets the full tensors reconstructed and makes a forward pass with its own mini-batch. As soon as the calculation is done, the data that is no longer needed gets dropped - it's only used during the calculation. The reconstruction is done efficiently via a pre-fetch. Then the whole process is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.
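To make the sharding concrete, here is a toy Python sketch of the ownership and just-in-time gathering described above. This is pure illustration of the idea, not how any real framework implements it:

```python
# Each "GPU" rank permanently owns one shard of layer La's parameters.
shards = {0: ["a0"], 1: ["a1"], 2: ["a2"]}

def all_gather(rank):
    """Reconstruct the full parameter list on `rank` just before its forward pass."""
    full = [p for r in sorted(shards) for p in shards[r]]
    # after the layer's computation, everything except shards[rank] is dropped again
    return full

assert all_gather(0) == ["a0", "a1", "a2"]
```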
This mechanism is similar to an efficient group backpacking strategy: person A carries the tent, person B carries the stove, and person C carries the axe. Each night they all share what they have with others and get from others what they don't have, and in the morning they pack up their allocated type of gear and continue on their way. This is what ZeRO DP/Sharded DDP is. Compare this strategy to the simple one where each person has to carry their own tent, stove and axe (similar to DataParallel (DP and DDP) in PyTorch), which would be far more inefficient.
While reading the literature on this topic you may encounter the following synonyms: Sharded, Partitioned. If you pay close attention to the way ZeRO partitions the model's weights, it looks very similar to tensor parallelism, which will be discussed later. This is because it partitions/shards each layer's weights, unlike vertical model parallelism, which is discussed next.

Implementations:
- DeepSpeed ZeRO-DP stages 1+2+3
- Accelerate integration
- transformers integration
From Naive Model Parallelism to Pipeline Parallelism

To explain Pipeline parallelism, we'll first look into Naive Model Parallelism (MP), also known as Vertical MP. This approach involves distributing groups of model layers across multiple GPUs by assigning specific layers to specific GPUs with .to(). As data flows through these layers, it is moved to the same GPU as the layer, while the other layers remain untouched.

We refer to this Model parallelism as "Vertical" because of how models are typically visualized. For example, the following diagram shows an 8-layer model split vertically into two slices, placing layers 0-3 onto GPU0 and layers 4-7 onto GPU1:
```
================
| Layer |      |
|   0   |      |
|   1   | GPU0 |
|   2   |      |
|   3   |      |
================
| Layer |      |
|   4   |      |
|   5   | GPU1 |
|   6   |      |
|   7   |      |
================
```
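Here is a minimal PyTorch sketch of this naive placement, assuming a machine with at least two CUDA devices (the layer sizes are arbitrary placeholders):

```python
import torch
import torch.nn as nn

class NaiveMPModel(nn.Module):
    def __init__(self):
        super().__init__()
        # layers 0-3 live on GPU0, layers 4-7 on GPU1
        self.part0 = nn.Sequential(*[nn.Linear(16, 16) for _ in range(4)]).to("cuda:0")
        self.part1 = nn.Sequential(*[nn.Linear(16, 16) for _ in range(4)]).to("cuda:1")

    def forward(self, x):
        x = self.part0(x.to("cuda:0"))
        # the only inter-GPU copy in the forward pass happens here
        return self.part1(x.to("cuda:1"))

model = NaiveMPModel()
out = model(torch.randn(8, 16))  # the output lives on cuda:1
```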
In this example, when data moves from layer 0 to 3, it's no different from a regular forward pass. However, passing data from layer 3 to layer 4 requires moving it from GPU0 to GPU1, introducing a communication overhead. If the participating GPUs are on the same compute node (e.g. the same physical machine) this copying is fast, but if the GPUs are distributed across different compute nodes (e.g. multiple machines), the communication overhead could be substantially greater.

Following that, layers 4 to 7 work as they would in the original model. Upon completion of the 7th layer, there is often a need to send the data back to layer 0 where the labels are (or alternatively send the labels to the last layer). Now the loss can be computed and the optimizer can do its work.

Naive Model Parallelism comes with several shortcomings:
- All but one GPU are idle at any given moment: if 4 GPUs are used, it's nearly identical to quadrupling the amount of memory of a single GPU, and ignoring the rest of the hardware.
- Overhead in data transfer between devices: e.g. 4x 6GB cards will be able to accommodate the same size as 1x 24GB card using naive MP, but a single 24GB card will complete the training faster, because it doesn't have the data copying overhead. But, say, if you have 40GB cards and need to fit a 45GB model, you can with 4x 40GB cards (but barely, because of the gradient and optimizer states).
- Copying shared embeddings: shared embeddings may need to get copied back and forth between GPUs.

Now that you are familiar with how the naive approach to model parallelism works and its shortcomings, let's look at Pipeline Parallelism (PP). PP is almost identical to naive MP, but it solves the GPU idling problem by chunking the incoming batch into micro-batches and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process. The following illustration from the GPipe paper shows the naive MP on the top, and PP on the bottom:
At the bottom of the diagram, you can observe that the Pipeline Parallelism (PP) approach minimizes the number of idle GPU zones, referred to as "bubbles". Both parts of the diagram show a parallelism level of degree 4, meaning that 4 GPUs are involved in the pipeline. You can see that there's a forward path of 4 pipe stages (F0, F1, F2 and F3) followed by a backward path in reverse order (B3, B2, B1, and B0).

PP introduces a new hyperparameter to tune - chunks, which determines how many data chunks are sent in a sequence through the same pipe stage. For example, in the bottom diagram you can see chunks=4. GPU0 performs the same forward path on chunks 0, 1, 2 and 3 (F0,0, F0,1, F0,2, F0,3) and then it waits for the other GPUs to complete their work. Only when the other GPUs begin to complete their work does GPU0 start to work again, doing the backward path for chunks 3, 2, 1 and 0 (B0,3, B0,2, B0,1, B0,0).

Note that this is the same concept as gradient accumulation steps. PyTorch uses chunks, while DeepSpeed refers to the same hyperparameter as gradient accumulation steps (GAS).

Because of the chunks, PP introduces the notion of micro-batches (MBS). DP splits the global data batch size into mini-batches, so if you have a DP degree of 4, a global batch size of 1024 gets split up into 4 mini-batches of 256 each (1024/4). And if the number of chunks (or GAS) is 32, we end up with a micro-batch size of 8 (256/32). Each Pipeline stage works with a single micro-batch at a time. To calculate the global batch size of the DP + PP setup, use the formula: mbs * chunks * dp_degree (8 * 32 * 4 = 1024). A worked example of this arithmetic follows below.

With chunks=1 you end up with naive MP, which is inefficient. With a very large chunks value you end up with tiny micro-batch sizes, which is also inefficient. For this reason, we encourage you to experiment with the chunks value to find the one that leads to the most efficient GPU utilization. You may notice a bubble of "dead" time on the diagram that can't be parallelized because the last forward stage has to wait for backward to complete the pipeline. The purpose of finding the best value for chunks is to enable high concurrent GPU utilization across all participating GPUs, which translates to minimizing the size of the bubble.

Pipeline API solutions have been implemented in:
- PyTorch
- DeepSpeed
- Megatron-LM

These come with some shortcomings:
- They have to modify the model quite heavily, because Pipeline requires one to rewrite the normal flow of modules into an nn.Sequential sequence of the same, which may require changes to the design of the model.
- Currently the Pipeline API is very restricted. If you had a bunch of Python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have a batch size as the very first dimension, since pipeline is going to chunk the mini-batch into micro-batches. Possible improvements are being discussed here: https://github.com/pytorch/pytorch/pull/50693
- Conditional control flow at the level of pipe stages is not possible - e.g., Encoder-Decoder models like T5 require special workarounds to handle a conditional encoder stage.
- They have to arrange each layer so that the output of one layer becomes an input to the next layer.
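Here is the batch-size arithmetic from above worked out in a few lines of Python:

```python
# DP + PP batch-size arithmetic: global_batch = mbs * chunks * dp_degree
global_batch_size = 1024
dp_degree = 4            # number of data-parallel replicas
chunks = 32              # a.k.a. gradient accumulation steps (GAS)

mini_batch_size = global_batch_size // dp_degree   # 1024 / 4 = 256
micro_batch_size = mini_batch_size // chunks       # 256 / 32 = 8

assert micro_batch_size * chunks * dp_degree == global_batch_size
```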
More recent solutions include:
- Varuna
- SageMaker

We have not experimented with Varuna and SageMaker, but their papers report that they have overcome the list of problems mentioned above and that they require smaller changes to the user's model.

Implementations:
- PyTorch (initial support in pytorch-1.8, progressively getting improved in 1.9 and more so in 1.10). Some examples
- DeepSpeed
- Megatron-LM has an internal implementation - no API.
- Varuna
- SageMaker - this is a proprietary solution that can only be used on AWS.
- OSLO - this is implemented based on the Hugging Face Transformers.

🤗 Transformers status: as of this writing none of the models supports full-PP. GPT2 and T5 models have naive MP support. The main obstacle is being unable to convert the models to nn.Sequential and have all the inputs be Tensors. This is because currently the models include many features that make the conversion very complicated, and these would need to be removed to accomplish that.

DeepSpeed and Megatron-LM integrations are available in 🤗 Accelerate.

Other approaches: DeepSpeed, Varuna and SageMaker use the concept of an Interleaved Pipeline.
Here the bubble (idle time) is further minimized by prioritizing backward passes. Varuna further attempts to improve the schedule by using simulations to discover the most efficient scheduling. OSLO has a pipeline parallelism implementation based on Transformers without nn.Sequential conversion.

Tensor Parallelism

In Tensor Parallelism, each GPU processes a slice of a tensor and only aggregates the full tensor for operations requiring it. To describe this method, this section of the guide relies on the concepts and diagrams from the Megatron-LM paper: Efficient Large-Scale Language Model Training on GPU Clusters.

The main building block of any transformer is a fully connected nn.Linear followed by a nonlinear activation GeLU. The dot-product part of it, following the Megatron paper's notation, can be written as Y = GeLU(XA), where X is an input vector, Y is the output vector, and A is the weight matrix. If we look at the computation in matrix form, you can see how the matrix multiplication can be split between multiple GPUs:
If we split the weight matrix A column-wise across N GPUs and perform the matrix multiplications XA_1 through XA_n in parallel, then we will end up with N output vectors Y_1, Y_2, ..., Y_n which can be fed into GeLU independently:
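Because GeLU is applied element-wise, applying it to each column shard and concatenating gives the same result as applying it to the full product. Here is a tiny PyTorch sketch of that equivalence, with two shards standing in for two GPUs (the matrix sizes are arbitrary placeholders):

```python
import torch
import torch.nn.functional as F

X = torch.randn(4, 8)        # input
A = torch.randn(8, 6)        # full weight matrix
A1, A2 = A.chunk(2, dim=1)   # column-wise split across two "GPUs"

Y_full = F.gelu(X @ A)
Y_sharded = torch.cat([F.gelu(X @ A1), F.gelu(X @ A2)], dim=1)

assert torch.allclose(Y_full, Y_sharded, atol=1e-6)
```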
Using this principle, we can update a multi-layer perceptron of arbitrary depth, without the need for any synchronization between GPUs until the very end, where we need to reconstruct the output vector from shards. The Megatron-LM paper authors provide a helpful illustration for that: Parallelizing the multi-headed attention layers is even simpler, since they are already inherently parallel, due to having multiple independent heads!
Special considerations: TP requires a very fast network, and therefore it's not advisable to do TP across more than one node. Practically, if a node has 4 GPUs, the highest TP degree is therefore 4. If you need a TP degree of 8, you need to use nodes that have at least 8 GPUs.

This section is based on the original, much more detailed TP overview by @anton-l.

Alternative names:
- DeepSpeed calls it tensor slicing

Implementations:
- Megatron-LM has an internal implementation, as it's very model-specific
- parallelformers (only inference at the moment)
- SageMaker - this is a proprietary solution that can only be used on AWS.
- OSLO has the tensor parallelism implementation based on the Transformers.

SageMaker combines TP with DP for more efficient processing.

🤗 Transformers status:
- core: not yet implemented in the core
- but if you want inference, parallelformers provides this support for most of our models. So until this is implemented in the core, you can use theirs. And hopefully training mode will be supported too.
- Deepspeed-Inference also supports our BERT, GPT-2, and GPT-Neo models in their super-fast CUDA-kernel-based inference mode; see more here

🤗 Accelerate integrates with TP from Megatron-LM.

Data Parallelism + Pipeline Parallelism

The following diagram from the DeepSpeed pipeline tutorial demonstrates how one can combine DP with PP.