Failed to load tokenizer and model during training
Describe the issue
I was trying to train dolly on my box. Loading the tokenizer and model runs fine in a Jupyter Notebook with the snippet below, but it fails without any detailed errors when the script is run under DeepSpeed. What could be the reason?
I have captured the following details.
from transformers import AutoTokenizer, AutoModelForCausalLM
model_checkpoint = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForCausalLM.from_pretrained(model_checkpoint)
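For comparison, a lighter-weight variant of the same load (just a sketch, not necessarily what the dolly trainer does; torch_dtype is a standard from_pretrained argument, and loading in fp16 roughly halves the memory needed to materialize the checkpoint):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_checkpoint = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
# Load the weights directly in fp16 instead of the default fp32; with 8 ranks
# each loading its own copy of GPT-J-6B, the peak memory adds up quickly.
model = AutoModelForCausalLM.from_pretrained(model_checkpoint, torch_dtype=torch.float16)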
ds_report output
DeepSpeed C++/CUDA extension op report
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
JIT compiled ops requires ninja
ninja .................. [OKAY]
op name ................ installed .. compatible
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
DeepSpeed general environment info:
torch install path ............... ['/home/tmatup/anaconda3/envs/py37/lib/python3.7/site-packages/torch']
torch version .................... 1.13.1+cu117
deepspeed install path ........... ['/home/tmatup/anaconda3/envs/py37/lib/python3.7/site-packages/deepspeed']
deepspeed info ................... 0.8.0, unknown, unknown
torch cuda version ............... 11.7
torch hip version ................ None
nvcc version ..................... 11.1
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
System info (please complete the following information):
- OS: Ubuntu 20.04 LTS
- GPU count and types: one machine with 8x A6000s
- Python version: 3.7.16
- Transformers version: 4.25.1
(py37) ms:~/root/auto$ pip show transformers
Name: transformers
Version: 4.25.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: transformers@huggingface.co
License: Apache
Location: /home/tmatup/anaconda3/envs/py37/lib/python3.7/site-packages
Requires: filelock, huggingface-hub, importlib-metadata, numpy, packaging, pyyaml, regex, requests, tokenizers, tqdm
Required-by:
Launcher context
Ran deepspeed either from the command prompt or from inside a Python script (via the sh package)
Logging output
2023-04-15 13:53:08 INFO [training.trainer] Loading tatsu-lab/alpaca dataset
2023-04-15 13:53:09 WARNING [datasets.builder] Using custom data configuration tatsu-lab--alpaca-9b55fb286e3c7ab6
Downloading and preparing dataset parquet/tatsu-lab--alpaca to /home/tmatup/.cache/huggingface/datasets/tatsu-lab___parquet/tatsu-lab--alpaca-9b55fb286e3c7ab6/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 1639.04it/s]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 989.46it/s]
Dataset parquet downloaded and prepared to /home/tmatup/.cache/huggingface/datasets/tatsu-lab___parquet/tatsu-lab--alpaca-9b55fb286e3c7ab6/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec. Subsequent calls will reuse this data.
100%|██████████| 1/1 [00:00<00:00, 270.22it/s]
2023-04-15 13:53:09 INFO [training.trainer] Found 52002 rows
100%|██████████| 53/53 [00:00<00:00, 113.92ba/s]
100%|██████████| 51974/51974 [00:06<00:00, 8382.97ex/s]
2023-04-15 13:53:16 INFO [training.trainer] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:17 INFO [__main__] Start training...
2023-04-15 13:53:19 INFO [__main__] [2023-04-15 13:53:19,461] [WARNING] [runner.py:186:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
2023-04-15 13:53:19 INFO [__main__] [2023-04-15 13:53:19,528] [INFO] [runner.py:548:main] cmd = /home/tmatup/anaconda3/envs/py37/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgMywgNCwgNSwgNiwgN119 --master_addr=127.0.0.1 --master_port=29500 --module --enable_each_rank_log=None training.trainer --deepspeed /home/tmatup/root/dolly/config/ds_z3_bf16_config.json --epochs 1 --local-output-dir /home/tmatup/models/dolly/training/dolly__1681591988 --per-device-train-batch-size 1 --per-device-eval-batch-size 1 --lr 1e-5
2023-04-15 13:53:21 INFO [__main__] [2023-04-15 13:53:21,907] [INFO] [launch.py:142:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]}
2023-04-15 13:53:21 INFO [__main__] [2023-04-15 13:53:21,907] [INFO] [launch.py:149:main] nnodes=1, num_local_procs=8, node_rank=0
2023-04-15 13:53:21 INFO [__main__] [2023-04-15 13:53:21,907] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3, 4, 5, 6, 7]})
2023-04-15 13:53:21 INFO [__main__] [2023-04-15 13:53:21,907] [INFO] [launch.py:162:main] dist_world_size=8
2023-04-15 13:53:21 INFO [__main__] [2023-04-15 13:53:21,907] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading tokenizer for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:53:26 ERROR [__main__] 2023-04-15 13:53:26 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:53:27 ERROR [__main__] 2023-04-15 13:53:27 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:53:27 ERROR [__main__] 2023-04-15 13:53:27 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:53:27 ERROR [__main__] 2023-04-15 13:53:27 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:53:27 ERROR [__main__] 2023-04-15 13:53:27 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:53:27 ERROR [__main__] 2023-04-15 13:53:27 INFO [__main__] Loading model for EleutherAI/gpt-j-6B
2023-04-15 13:54:52 INFO [__main__] [2023-04-15 13:54:52,905] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669300
2023-04-15 13:54:54 INFO [__main__] [2023-04-15 13:54:54,476] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669301
2023-04-15 13:54:55 INFO [__main__] [2023-04-15 13:54:55,807] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669302
2023-04-15 13:54:57 INFO [__main__] [2023-04-15 13:54:57,577] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669303
2023-04-15 13:54:59 INFO [__main__] [2023-04-15 13:54:59,349] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669304
2023-04-15 13:55:01 INFO [__main__] [2023-04-15 13:55:01,202] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669305
2023-04-15 13:55:02 INFO [__main__] [2023-04-15 13:55:02,690] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669306
2023-04-15 13:55:04 INFO [__main__] [2023-04-15 13:55:04,583] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 1669352
2023-04-15 13:55:04 INFO [__main__] [2023-04-15 13:55:04,583] [ERROR] [launch.py:324:sigkill_handler] ['/home/tmatup/anaconda3/envs/py37/bin/python', '-u', '-m', 'training.trainer', '--local_rank=7', '--deepspeed', '/home/tmatup/root/dolly/config/ds_z3_bf16_config.json', '--epochs', '1', '--local-output-dir', '/home/tmatup/models/dolly/training/dolly__1681591988', '--per-device-train-batch-size', '1', '--per-device-eval-batch-size', '1', '--lr', '1e-5'] exits with return code = -9
Traceback (most recent call last):
File "train_dolly.py", line 68, in <module>
_err=process_err
File "/home/tmatup/anaconda3/envs/py37/lib/python3.7/site-packages/sh.py", line 1524, in __call__
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/home/tmatup/anaconda3/envs/py37/lib/python3.7/site-packages/sh.py", line 788, in __init__
self.wait()
File "/home/tmatup/anaconda3/envs/py37/lib/python3.7/site-packages/sh.py", line 845, in wait
self.handle_command_exit_code(exit_code)
File "/home/tmatup/anaconda3/envs/py37/lib/python3.7/site-packages/sh.py", line 869, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_247:
RAN: /home/tmatup/anaconda3/envs/py37/bin/deepspeed --num_gpus 8 --module training.trainer --deepspeed /home/tmatup/root/dolly/config/ds_z3_bf16_config.json --epochs 1 --local-output-dir /home/tmatup/models/dolly/training/dolly__1681591988 --per-device-train-batch-size 1 --per-device-eval-batch-size 1 --lr 1e-5
STDOUT:
STDERR:
Do you know how much memory the GPUs have? My first thought would be OOM. You could try lowering the batch size and see if that helps.
8 GPUs in total, each with 47 GB. I tried to limit DeepSpeed to only two of the GPUs with --include localhost:3,5, but it seems that DeepSpeed still tries to allocate memory on GPU 0 and GPU 1, both of which already have very little memory left because of other trainings going on.
You might need to use all 8 GPUs.
Looks like they're all being used: Setting CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
But if you have heterogeneous GPUs, it's possible that's affecting DeepSpeed's allocations, yeah. I haven't looked into this, but here is some documentation about how it reasons about memory: https://huggingface.co/docs/transformers/main_classes/deepspeed#memory-requirements It may not be the issue, but you may have to configure HF or DeepSpeed differently to have it load onto different GPUs. Or maybe exclude the small GPUs?
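For what it's worth, that page also describes a standalone estimator you can run before any training to see whether the model states even fit; here is a sketch adapted to this issue's checkpoint and GPU count (the import path is the one shown in that doc for recent DeepSpeed releases):

from transformers import AutoModelForCausalLM
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live

# Prints the per-GPU and CPU memory that ZeRO-3 needs for the model states
# alone, for the given GPU count, without launching any training.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=8, num_nodes=1)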
Yeah, @srowen, that was from the earlier run. Later I tried limiting it to the two GPUs that have enough memory and ran into a different memory allocation issue, as I stated in my last comment. I also tried --exclude; it didn't work. What baffles me is why it still always tries to allocate on GPU 0 and GPU 1, ignoring the explicitly assigned GPUs 3 and 5.
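One thing I still plan to try is masking the devices before the launcher even starts, so GPUs 3 and 5 are the only ones any rank can see. Just a sketch: whether the deepspeed launcher respects a pre-set CUDA_VISIBLE_DEVICES instead of overwriting it is an assumption on my part, and the paths and arguments are simply the ones from my run:

import os
import subprocess

# Expose only physical GPUs 3 and 5 to the whole launch; inside the job they
# appear as cuda:0 and cuda:1, so no rank should be able to allocate on the
# busy GPUs 0 and 1.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="3,5")
subprocess.run(
    [
        "deepspeed", "--num_gpus", "2",
        "--module", "training.trainer",
        "--deepspeed", "/home/tmatup/root/dolly/config/ds_z3_bf16_config.json",
        "--epochs", "1",
        "--per-device-train-batch-size", "1",
        "--per-device-eval-batch-size", "1",
        "--lr", "1e-5",
    ],
    env=env,
    check=True,
)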