Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
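
The root cause is that Parquet cannot represent a struct type with zero child fields. Several columns in this dataset (for example model_kwargs, processor_kwargs, torch_compile_config) hold an empty dict {} in every row, so the inferred Arrow type is an empty struct and the Parquet writer rejects the schema. Below is a minimal sketch of the failure and one possible workaround, using only pyarrow; the column name is taken from the error message, and the JSON-string workaround is an assumption for illustration, not a change made to this dataset.

import io
import json

import pyarrow as pa
import pyarrow.parquet as pq

# Reproduce the error: a struct column with no child fields, as inferred for
# always-empty dict columns such as "model_kwargs": {}.
schema = pa.schema([("model_kwargs", pa.struct([]))])
try:
    pq.ParquetWriter(io.BytesIO(), schema)
except pa.lib.ArrowNotImplementedError as err:
    print(err)  # Cannot write struct type 'model_kwargs' with no child field to Parquet. ...

# Hypothetical workaround when preparing such data yourself: serialize the empty
# dicts to JSON strings (or follow the error's hint and add a dummy child field),
# so the column becomes a type Parquet can store.
table = pa.table({"model_kwargs": pa.array([json.dumps({})] * 2, type=pa.string())})
pq.write_table(table, io.BytesIO())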


Columns in the preview (name: type):
config: dict
report: dict
name: string
backend: dict
scenario: dict
launcher: dict
environment: dict
print_report: bool
log_report: bool
overall: dict
warmup: dict
train: dict
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1289.580544, "max_global_vram": 68702.69952, "max_process_vram": 290024.251392, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.594968017578125, 0.041894744873046875, 0.04119970321655273, 0.04116450500488281, 0.041515865325927735 ], "count": 5, "total": 0.7607428359985352, "mean": 0.15214856719970704, "p50": 0.041515865325927735, "p90": 0.3737387084960938, "p95": 0.4843533630371093, "p99": 0.5728450866699218, "stdev": 0.22140988152085378, "stdev_": 145.5221600807031 }, "throughput": { "unit": "samples/s", "value": 65.72523280402771 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1289.580544, "max_global_vram": 68702.69952, "max_process_vram": 290024.251392, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.594968017578125, 0.041894744873046875 ], "count": 2, "total": 0.6368627624511719, "mean": 0.31843138122558595, "p50": 0.31843138122558595, "p90": 0.5396606903076172, "p95": 0.5673143539428711, "p99": 0.5894372848510743, "stdev": 0.27653663635253906, "stdev_": 86.8433994439237 }, "throughput": { "unit": "samples/s", "value": 12.56157601240402 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1289.580544, "max_global_vram": 68702.69952, "max_process_vram": 290024.251392, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.04119970321655273, 0.04116450500488281, 0.041515865325927735 ], "count": 3, "total": 0.12388007354736327, "mean": 0.04129335784912109, "p50": 0.04119970321655273, "p90": 0.04145263290405273, "p95": 0.04148424911499023, "p99": 0.04150954208374023, "stdev": 0.00015799137413655172, "stdev_": 0.38260723362296023 }, "throughput": { "unit": "samples/s", "value": 145.30181880395827 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1289.580544, "max_global_vram": 68702.69952, "max_process_vram": 290024.251392, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.594968017578125, 0.041894744873046875, 0.04119970321655273, 0.04116450500488281, 0.041515865325927735 ], "count": 5, "total": 0.7607428359985352, "mean": 0.15214856719970704, "p50": 0.041515865325927735, "p90": 0.3737387084960938, "p95": 0.4843533630371093, "p99": 0.5728450866699218, "stdev": 0.22140988152085378, "stdev_": 145.5221600807031 }, "throughput": { "unit": "samples/s", "value": 65.72523280402771 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1289.580544, "max_global_vram": 68702.69952, "max_process_vram": 290024.251392, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.594968017578125, 0.041894744873046875 ], "count": 2, "total": 0.6368627624511719, "mean": 0.31843138122558595, "p50": 0.31843138122558595, "p90": 0.5396606903076172, "p95": 0.5673143539428711, "p99": 0.5894372848510743, "stdev": 0.27653663635253906, "stdev_": 86.8433994439237 }, "throughput": { "unit": "samples/s", "value": 12.56157601240402 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1289.580544, "max_global_vram": 68702.69952, "max_process_vram": 290024.251392, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.04119970321655273, 0.04116450500488281, 0.041515865325927735 ], "count": 3, "total": 0.12388007354736327, "mean": 0.04129335784912109, "p50": 0.04119970321655273, "p90": 0.04145263290405273, "p95": 0.04148424911499023, "p99": 0.04150954208374023, "stdev": 0.00015799137413655172, "stdev_": 0.38260723362296023 }, "throughput": { "unit": "samples/s", "value": 145.30181880395827 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_fill-mask_hf-internal-testing/tiny-random-BertModel", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "hf-internal-testing/tiny-random-BertModel", "processor": "hf-internal-testing/tiny-random-BertModel", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1274.167296, "max_global_vram": 68702.69952, "max_process_vram": 54144.069632, "max_reserved": 2.097152, "max_allocated": 1.940992 }, "latency": { "unit": "s", "values": [ 0.6560673828125, 0.01654225730895996, 0.015230574607849122, 0.015319055557250976, 0.015638574600219727 ], "count": 5, "total": 0.7187978448867799, "mean": 0.14375956897735598, "p50": 0.015638574600219727, "p90": 0.40025733261108404, "p95": 0.5281623577117919, "p99": 0.6304863777923584, "stdev": 0.25615432753952116, "stdev_": 178.1824537745163 }, "throughput": { "unit": "samples/s", "value": 69.56058696569362 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1274.167296, "max_global_vram": 68702.69952, "max_process_vram": 54144.069632, "max_reserved": 2.097152, "max_allocated": 1.940992 }, "latency": { "unit": "s", "values": [ 0.6560673828125, 0.01654225730895996 ], "count": 2, "total": 0.67260964012146, "mean": 0.33630482006073, "p50": 0.33630482006073, "p90": 0.592114870262146, "p95": 0.624091126537323, "p99": 0.6496721315574646, "stdev": 0.31976256275177, "stdev_": 95.08117150804655 }, "throughput": { "unit": "samples/s", "value": 11.893971663200306 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1274.167296, "max_global_vram": 68702.69952, "max_process_vram": 54144.069632, "max_reserved": 2.097152, "max_allocated": 1.940992 }, "latency": { "unit": "s", "values": [ 0.015230574607849122, 0.015319055557250976, 0.015638574600219727 ], "count": 3, "total": 0.046188204765319825, "mean": 0.015396068255106608, "p50": 0.015319055557250976, "p90": 0.015574670791625977, "p95": 0.015606622695922851, "p99": 0.015632184219360353, "stdev": 0.00017524119600682866, "stdev_": 1.1382204411097243 }, "throughput": { "unit": "samples/s", "value": 389.7098857047418 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_fill-mask_hf-internal-testing/tiny-random-BertModel
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "hf-internal-testing/tiny-random-BertModel", "processor": "hf-internal-testing/tiny-random-BertModel", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1274.167296, "max_global_vram": 68702.69952, "max_process_vram": 54144.069632, "max_reserved": 2.097152, "max_allocated": 1.940992 }, "latency": { "unit": "s", "values": [ 0.6560673828125, 0.01654225730895996, 0.015230574607849122, 0.015319055557250976, 0.015638574600219727 ], "count": 5, "total": 0.7187978448867799, "mean": 0.14375956897735598, "p50": 0.015638574600219727, "p90": 0.40025733261108404, "p95": 0.5281623577117919, "p99": 0.6304863777923584, "stdev": 0.25615432753952116, "stdev_": 178.1824537745163 }, "throughput": { "unit": "samples/s", "value": 69.56058696569362 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1274.167296, "max_global_vram": 68702.69952, "max_process_vram": 54144.069632, "max_reserved": 2.097152, "max_allocated": 1.940992 }, "latency": { "unit": "s", "values": [ 0.6560673828125, 0.01654225730895996 ], "count": 2, "total": 0.67260964012146, "mean": 0.33630482006073, "p50": 0.33630482006073, "p90": 0.592114870262146, "p95": 0.624091126537323, "p99": 0.6496721315574646, "stdev": 0.31976256275177, "stdev_": 95.08117150804655 }, "throughput": { "unit": "samples/s", "value": 11.893971663200306 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1274.167296, "max_global_vram": 68702.69952, "max_process_vram": 54144.069632, "max_reserved": 2.097152, "max_allocated": 1.940992 }, "latency": { "unit": "s", "values": [ 0.015230574607849122, 0.015319055557250976, 0.015638574600219727 ], "count": 3, "total": 0.046188204765319825, "mean": 0.015396068255106608, "p50": 0.015319055557250976, "p90": 0.015574670791625977, "p95": 0.015606622695922851, "p99": 0.015632184219360353, "stdev": 0.00017524119600682866, "stdev_": 1.1382204411097243 }, "throughput": { "unit": "samples/s", "value": 389.7098857047418 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1730.260992, "max_global_vram": 68702.69952, "max_process_vram": 312809.005056, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.6361373291015625, 0.03912515640258789, 0.03817972183227539, 0.04045155334472656, 0.03848947525024414 ], "count": 5, "total": 0.7923832359313965, "mean": 0.1584766471862793, "p50": 0.03912515640258789, "p90": 0.39786301879882824, "p95": 0.5170001739501953, "p99": 0.6123098980712891, "stdev": 0.2388316142621009, "stdev_": 150.70461074392205 }, "throughput": { "unit": "samples/s", "value": 63.10077968929789 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1730.260992, "max_global_vram": 68702.69952, "max_process_vram": 312809.005056, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.6361373291015625, 0.03912515640258789 ], "count": 2, "total": 0.6752624855041504, "mean": 0.3376312427520752, "p50": 0.3376312427520752, "p90": 0.5764361118316651, "p95": 0.6062867204666138, "p99": 0.6301672073745728, "stdev": 0.2985060863494873, "stdev_": 88.41186731308579 }, "throughput": { "unit": "samples/s", "value": 11.847244844391447 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1730.260992, "max_global_vram": 68702.69952, "max_process_vram": 312809.005056, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.03817972183227539, 0.04045155334472656, 0.03848947525024414 ], "count": 3, "total": 0.11712075042724608, "mean": 0.03904025014241536, "p50": 0.03848947525024414, "p90": 0.04005913772583008, "p95": 0.04025534553527832, "p99": 0.04041231178283691, "stdev": 0.001005922244054749, "stdev_": 2.576628583027092 }, "throughput": { "unit": "samples/s", "value": 153.68753986238647 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1730.260992, "max_global_vram": 68702.69952, "max_process_vram": 312809.005056, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.6361373291015625, 0.03912515640258789, 0.03817972183227539, 0.04045155334472656, 0.03848947525024414 ], "count": 5, "total": 0.7923832359313965, "mean": 0.1584766471862793, "p50": 0.03912515640258789, "p90": 0.39786301879882824, "p95": 0.5170001739501953, "p99": 0.6123098980712891, "stdev": 0.2388316142621009, "stdev_": 150.70461074392205 }, "throughput": { "unit": "samples/s", "value": 63.10077968929789 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1730.260992, "max_global_vram": 68702.69952, "max_process_vram": 312809.005056, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.6361373291015625, 0.03912515640258789 ], "count": 2, "total": 0.6752624855041504, "mean": 0.3376312427520752, "p50": 0.3376312427520752, "p90": 0.5764361118316651, "p95": 0.6062867204666138, "p99": 0.6301672073745728, "stdev": 0.2985060863494873, "stdev_": 88.41186731308579 }, "throughput": { "unit": "samples/s", "value": 11.847244844391447 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1730.260992, "max_global_vram": 68702.69952, "max_process_vram": 312809.005056, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.03817972183227539, 0.04045155334472656, 0.03848947525024414 ], "count": 3, "total": 0.11712075042724608, "mean": 0.03904025014241536, "p50": 0.03848947525024414, "p90": 0.04005913772583008, "p95": 0.04025534553527832, "p99": 0.04041231178283691, "stdev": 0.001005922244054749, "stdev_": 2.576628583027092 }, "throughput": { "unit": "samples/s", "value": 153.68753986238647 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1298.292736, "max_global_vram": 68702.69952, "max_process_vram": 284601.274368, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.598230712890625, 0.0447728157043457, 0.04401314163208008, 0.04357649993896484, 0.04364081954956055 ], "count": 5, "total": 0.7742339897155761, "mean": 0.15484679794311523, "p50": 0.04401314163208008, "p90": 0.37684755401611336, "p95": 0.4875391334533691, "p99": 0.5760923970031738, "stdev": 0.22169236604797848, "stdev_": 143.16884106923524 }, "throughput": { "unit": "samples/s", "value": 64.57995988831242 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1298.292736, "max_global_vram": 68702.69952, "max_process_vram": 284601.274368, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.598230712890625, 0.0447728157043457 ], "count": 2, "total": 0.6430035285949707, "mean": 0.32150176429748534, "p50": 0.32150176429748534, "p90": 0.5428849231719971, "p95": 0.570557818031311, "p99": 0.5926961339187622, "stdev": 0.2767289485931396, "stdev_": 86.07385069808903 }, "throughput": { "unit": "samples/s", "value": 12.441611350844106 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1298.292736, "max_global_vram": 68702.69952, "max_process_vram": 284601.274368, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.04401314163208008, 0.04357649993896484, 0.04364081954956055 ], "count": 3, "total": 0.1312304611206055, "mean": 0.04374348704020183, "p50": 0.04364081954956055, "p90": 0.04393867721557618, "p95": 0.043975909423828126, "p99": 0.04400569519042969, "stdev": 0.00019247415803611717, "stdev_": 0.4400064353791149 }, "throughput": { "unit": "samples/s", "value": 137.16327631781584 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1298.292736, "max_global_vram": 68702.69952, "max_process_vram": 284601.274368, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.598230712890625, 0.0447728157043457, 0.04401314163208008, 0.04357649993896484, 0.04364081954956055 ], "count": 5, "total": 0.7742339897155761, "mean": 0.15484679794311523, "p50": 0.04401314163208008, "p90": 0.37684755401611336, "p95": 0.4875391334533691, "p99": 0.5760923970031738, "stdev": 0.22169236604797848, "stdev_": 143.16884106923524 }, "throughput": { "unit": "samples/s", "value": 64.57995988831242 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1298.292736, "max_global_vram": 68702.69952, "max_process_vram": 284601.274368, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.598230712890625, 0.0447728157043457 ], "count": 2, "total": 0.6430035285949707, "mean": 0.32150176429748534, "p50": 0.32150176429748534, "p90": 0.5428849231719971, "p95": 0.570557818031311, "p99": 0.5926961339187622, "stdev": 0.2767289485931396, "stdev_": 86.07385069808903 }, "throughput": { "unit": "samples/s", "value": 12.441611350844106 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1298.292736, "max_global_vram": 68702.69952, "max_process_vram": 284601.274368, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.04401314163208008, 0.04357649993896484, 0.04364081954956055 ], "count": 3, "total": 0.1312304611206055, "mean": 0.04374348704020183, "p50": 0.04364081954956055, "p90": 0.04393867721557618, "p95": 0.043975909423828126, "p99": 0.04400569519042969, "stdev": 0.00019247415803611717, "stdev_": 0.4400064353791149 }, "throughput": { "unit": "samples/s", "value": 137.16327631781584 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1296.44544, "max_global_vram": 68702.69952, "max_process_vram": 278530.973696, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.5760783081054688, 0.043606098175048826, 0.04326290512084961, 0.043311702728271485, 0.0434536247253418 ], "count": 5, "total": 0.7497126388549804, "mean": 0.1499425277709961, "p50": 0.0434536247253418, "p90": 0.3630894241333008, "p95": 0.4695838661193847, "p99": 0.554779419708252, "stdev": 0.21306792379446532, "stdev_": 142.0997277836207 }, "throughput": { "unit": "samples/s", "value": 66.69221967014441 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1296.44544, "max_global_vram": 68702.69952, "max_process_vram": 278530.973696, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.5760783081054688, 0.043606098175048826 ], "count": 2, "total": 0.6196844062805176, "mean": 0.3098422031402588, "p50": 0.3098422031402588, "p90": 0.5228310871124268, "p95": 0.5494546976089477, "p99": 0.5707535860061645, "stdev": 0.26623610496520994, "stdev_": 85.92635291993798 }, "throughput": { "unit": "samples/s", "value": 12.909797178886206 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1296.44544, "max_global_vram": 68702.69952, "max_process_vram": 278530.973696, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.04326290512084961, 0.043311702728271485, 0.0434536247253418 ], "count": 3, "total": 0.1300282325744629, "mean": 0.04334274419148763, "p50": 0.043311702728271485, "p90": 0.04342524032592773, "p95": 0.043439432525634765, "p99": 0.04345078628540039, "stdev": 0.00008089569915787994, "stdev_": 0.18664184898049807 }, "throughput": { "unit": "samples/s", "value": 138.43147479292233 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1296.44544, "max_global_vram": 68702.69952, "max_process_vram": 278530.973696, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.5760783081054688, 0.043606098175048826, 0.04326290512084961, 0.043311702728271485, 0.0434536247253418 ], "count": 5, "total": 0.7497126388549804, "mean": 0.1499425277709961, "p50": 0.0434536247253418, "p90": 0.3630894241333008, "p95": 0.4695838661193847, "p99": 0.554779419708252, "stdev": 0.21306792379446532, "stdev_": 142.0997277836207 }, "throughput": { "unit": "samples/s", "value": 66.69221967014441 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1296.44544, "max_global_vram": 68702.69952, "max_process_vram": 278530.973696, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.5760783081054688, 0.043606098175048826 ], "count": 2, "total": 0.6196844062805176, "mean": 0.3098422031402588, "p50": 0.3098422031402588, "p90": 0.5228310871124268, "p95": 0.5494546976089477, "p99": 0.5707535860061645, "stdev": 0.26623610496520994, "stdev_": 85.92635291993798 }, "throughput": { "unit": "samples/s", "value": 12.909797178886206 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1296.44544, "max_global_vram": 68702.69952, "max_process_vram": 278530.973696, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.04326290512084961, 0.043311702728271485, 0.0434536247253418 ], "count": 3, "total": 0.1300282325744629, "mean": 0.04334274419148763, "p50": 0.043311702728271485, "p90": 0.04342524032592773, "p95": 0.043439432525634765, "p99": 0.04345078628540039, "stdev": 0.00008089569915787994, "stdev_": 0.18664184898049807 }, "throughput": { "unit": "samples/s", "value": 138.43147479292233 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-generation_hf-internal-testing/tiny-random-LlamaForCausalLM", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "llama", "model": "hf-internal-testing/tiny-random-LlamaForCausalLM", "processor": "hf-internal-testing/tiny-random-LlamaForCausalLM", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1312.206848, "max_global_vram": 68702.69952, "max_process_vram": 65339.850752, "max_reserved": 44.040192, "max_allocated": 25.06496 }, "latency": { "unit": "s", "values": [ 0.5945537719726562, 0.010716172218322753, 0.009252650260925293, 0.009294569969177246, 0.00920864963531494 ], "count": 5, "total": 0.6330258140563965, "mean": 0.1266051628112793, "p50": 0.009294569969177246, "p90": 0.36101873207092294, "p95": 0.4777862520217894, "p99": 0.5712002679824829, "stdev": 0.2339749933892262, "stdev_": 184.8068342505076 }, "throughput": { "unit": "samples/s", "value": 78.9857204710225 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1312.206848, "max_global_vram": 68702.69952, "max_process_vram": 65339.850752, "max_reserved": 44.040192, "max_allocated": 25.06496 }, "latency": { "unit": "s", "values": [ 0.5945537719726562, 0.010716172218322753 ], "count": 2, "total": 0.605269944190979, "mean": 0.3026349720954895, "p50": 0.3026349720954895, "p90": 0.5361700119972229, "p95": 0.5653618919849396, "p99": 0.5887153959751129, "stdev": 0.29191879987716673, "stdev_": 96.45904366434507 }, "throughput": { "unit": "samples/s", "value": 13.217243110746276 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1312.206848, "max_global_vram": 68702.69952, "max_process_vram": 65339.850752, "max_reserved": 44.040192, "max_allocated": 25.06496 }, "latency": { "unit": "s", "values": [ 0.009252650260925293, 0.009294569969177246, 0.00920864963531494 ], "count": 3, "total": 0.027755869865417476, "mean": 0.009251956621805826, "p50": 0.009252650260925293, "p90": 0.009286186027526856, "p95": 0.009290377998352051, "p99": 0.009293731575012208, "stdev": 0.00003508025840054541, "stdev_": 0.37916583307216517 }, "throughput": { "unit": "samples/s", "value": 648.5114711691008 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-generation_hf-internal-testing/tiny-random-LlamaForCausalLM
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "llama", "model": "hf-internal-testing/tiny-random-LlamaForCausalLM", "processor": "hf-internal-testing/tiny-random-LlamaForCausalLM", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1312.206848, "max_global_vram": 68702.69952, "max_process_vram": 65339.850752, "max_reserved": 44.040192, "max_allocated": 25.06496 }, "latency": { "unit": "s", "values": [ 0.5945537719726562, 0.010716172218322753, 0.009252650260925293, 0.009294569969177246, 0.00920864963531494 ], "count": 5, "total": 0.6330258140563965, "mean": 0.1266051628112793, "p50": 0.009294569969177246, "p90": 0.36101873207092294, "p95": 0.4777862520217894, "p99": 0.5712002679824829, "stdev": 0.2339749933892262, "stdev_": 184.8068342505076 }, "throughput": { "unit": "samples/s", "value": 78.9857204710225 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1312.206848, "max_global_vram": 68702.69952, "max_process_vram": 65339.850752, "max_reserved": 44.040192, "max_allocated": 25.06496 }, "latency": { "unit": "s", "values": [ 0.5945537719726562, 0.010716172218322753 ], "count": 2, "total": 0.605269944190979, "mean": 0.3026349720954895, "p50": 0.3026349720954895, "p90": 0.5361700119972229, "p95": 0.5653618919849396, "p99": 0.5887153959751129, "stdev": 0.29191879987716673, "stdev_": 96.45904366434507 }, "throughput": { "unit": "samples/s", "value": 13.217243110746276 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1312.206848, "max_global_vram": 68702.69952, "max_process_vram": 65339.850752, "max_reserved": 44.040192, "max_allocated": 25.06496 }, "latency": { "unit": "s", "values": [ 0.009252650260925293, 0.009294569969177246, 0.00920864963531494 ], "count": 3, "total": 0.027755869865417476, "mean": 0.009251956621805826, "p50": 0.009252650260925293, "p90": 0.009286186027526856, "p95": 0.009290377998352051, "p99": 0.009293731575012208, "stdev": 0.00003508025840054541, "stdev_": 0.37916583307216517 }, "throughput": { "unit": "samples/s", "value": 648.5114711691008 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1311.440896, "max_global_vram": 68702.69952, "max_process_vram": 329450.471424, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6158572387695312, 0.04217601776123047, 0.041766895294189456, 0.04174817657470703, 0.04130561447143555 ], "count": 5, "total": 0.7828539428710938, "mean": 0.15657078857421874, "p50": 0.041766895294189456, "p90": 0.386384750366211, "p95": 0.501120994567871, "p99": 0.5929099899291992, "stdev": 0.2296433902475554, "stdev_": 146.67064804281694 }, "throughput": { "unit": "samples/s", "value": 63.86887420739873 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1311.440896, "max_global_vram": 68702.69952, "max_process_vram": 329450.471424, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6158572387695312, 0.04217601776123047 ], "count": 2, "total": 0.6580332565307617, "mean": 0.32901662826538086, "p50": 0.32901662826538086, "p90": 0.5584891166687012, "p95": 0.5871731777191161, "p99": 0.6101204265594482, "stdev": 0.2868406105041504, "stdev_": 87.18118960017675 }, "throughput": { "unit": "samples/s", "value": 12.157440251845411 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1311.440896, "max_global_vram": 68702.69952, "max_process_vram": 329450.471424, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.041766895294189456, 0.04174817657470703, 0.04130561447143555 ], "count": 3, "total": 0.12482068634033203, "mean": 0.04160689544677734, "p50": 0.04174817657470703, "p90": 0.04176315155029297, "p95": 0.04176502342224121, "p99": 0.04176652091979981, "stdev": 0.0002131748377771068, "stdev_": 0.5123545880749395 }, "throughput": { "unit": "samples/s", "value": 144.20686608725885 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1311.440896, "max_global_vram": 68702.69952, "max_process_vram": 329450.471424, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6158572387695312, 0.04217601776123047, 0.041766895294189456, 0.04174817657470703, 0.04130561447143555 ], "count": 5, "total": 0.7828539428710938, "mean": 0.15657078857421874, "p50": 0.041766895294189456, "p90": 0.386384750366211, "p95": 0.501120994567871, "p99": 0.5929099899291992, "stdev": 0.2296433902475554, "stdev_": 146.67064804281694 }, "throughput": { "unit": "samples/s", "value": 63.86887420739873 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1311.440896, "max_global_vram": 68702.69952, "max_process_vram": 329450.471424, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6158572387695312, 0.04217601776123047 ], "count": 2, "total": 0.6580332565307617, "mean": 0.32901662826538086, "p50": 0.32901662826538086, "p90": 0.5584891166687012, "p95": 0.5871731777191161, "p99": 0.6101204265594482, "stdev": 0.2868406105041504, "stdev_": 87.18118960017675 }, "throughput": { "unit": "samples/s", "value": 12.157440251845411 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1311.440896, "max_global_vram": 68702.69952, "max_process_vram": 329450.471424, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.041766895294189456, 0.04174817657470703, 0.04130561447143555 ], "count": 3, "total": 0.12482068634033203, "mean": 0.04160689544677734, "p50": 0.04174817657470703, "p90": 0.04176315155029297, "p95": 0.04176502342224121, "p99": 0.04176652091979981, "stdev": 0.0002131748377771068, "stdev_": 0.5123545880749395 }, "throughput": { "unit": "samples/s", "value": 144.20686608725885 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1321.55392, "max_global_vram": 68702.69952, "max_process_vram": 524965.675008, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.6471325073242188, 0.556587646484375, 0.06677162933349609, 0.06684650421142578, 0.06679898834228516 ], "count": 5, "total": 1.4041372756958008, "mean": 0.2808274551391602, "p50": 0.06684650421142578, "p90": 0.6109145629882813, "p95": 0.62902353515625, "p99": 0.643510712890625, "stdev": 0.2636812480658233, "stdev_": 93.894397873299 }, "throughput": { "unit": "samples/s", "value": 35.60905394753742 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1321.55392, "max_global_vram": 68702.69952, "max_process_vram": 524965.675008, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.6471325073242188, 0.556587646484375 ], "count": 2, "total": 1.2037201538085938, "mean": 0.6018600769042969, "p50": 0.6018600769042969, "p90": 0.6380780212402344, "p95": 0.6426052642822266, "p99": 0.6462270587158203, "stdev": 0.045272430419921905, "stdev_": 7.522085640367333 }, "throughput": { "unit": "samples/s", "value": 6.646063019455017 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1321.55392, "max_global_vram": 68702.69952, "max_process_vram": 524965.675008, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.06677162933349609, 0.06684650421142578, 0.06679898834228516 ], "count": 3, "total": 0.200417121887207, "mean": 0.06680570729573566, "p50": 0.06679898834228516, "p90": 0.06683700103759765, "p95": 0.06684175262451172, "p99": 0.06684555389404297, "stdev": 0.00003093455552020681, "stdev_": 0.04630525859604427 }, "throughput": { "unit": "samples/s", "value": 89.81268581498861 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_token-classification_microsoft/deberta-v3-base
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1321.55392, "max_global_vram": 68702.69952, "max_process_vram": 524965.675008, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.6471325073242188, 0.556587646484375, 0.06677162933349609, 0.06684650421142578, 0.06679898834228516 ], "count": 5, "total": 1.4041372756958008, "mean": 0.2808274551391602, "p50": 0.06684650421142578, "p90": 0.6109145629882813, "p95": 0.62902353515625, "p99": 0.643510712890625, "stdev": 0.2636812480658233, "stdev_": 93.894397873299 }, "throughput": { "unit": "samples/s", "value": 35.60905394753742 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1321.55392, "max_global_vram": 68702.69952, "max_process_vram": 524965.675008, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.6471325073242188, 0.556587646484375 ], "count": 2, "total": 1.2037201538085938, "mean": 0.6018600769042969, "p50": 0.6018600769042969, "p90": 0.6380780212402344, "p95": 0.6426052642822266, "p99": 0.6462270587158203, "stdev": 0.045272430419921905, "stdev_": 7.522085640367333 }, "throughput": { "unit": "samples/s", "value": 6.646063019455017 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1321.55392, "max_global_vram": 68702.69952, "max_process_vram": 524965.675008, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.06677162933349609, 0.06684650421142578, 0.06679898834228516 ], "count": 3, "total": 0.200417121887207, "mean": 0.06680570729573566, "p50": 0.06679898834228516, "p90": 0.06683700103759765, "p95": 0.06684175262451172, "p99": 0.06684555389404297, "stdev": 0.00003093455552020681, "stdev_": 0.04630525859604427 }, "throughput": { "unit": "samples/s", "value": 89.81268581498861 }, "energy": null, "efficiency": null }

Downloads last month: 18,006