Tags: Feature Extraction · Transformers · bloom

This repo (microsoft/bloom-deepspeed-inference-fp16) is a copy of the original BLOOM weights that is more efficient to use with DeepSpeed-MII and DeepSpeed-Inference. The original tensors are split into 8 shards targeting 8 GPUs, which allows the user to run the model with DeepSpeed-Inference tensor parallelism.
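To illustrate what the 8-way split means, here is a minimal, framework-free sketch of column-wise tensor-parallel sharding: each rank receives a contiguous slice of a weight matrix's columns. The toy shapes and the `shard_columns` helper are illustrative only, not the actual sharding code used to produce this repo.

```python
# Illustration only: partitioning one weight matrix across tensor-parallel
# ranks. Shapes are toy-sized stand-ins; the real BLOOM checkpoints split
# the model's large weight tensors across ranks in an analogous way.

NUM_SHARDS = 8  # matches the 8-way split in this repo


def shard_columns(matrix, num_shards):
    """Split a 2D matrix (list of rows) column-wise into equal shards,
    one slice per tensor-parallel rank."""
    cols = len(matrix[0])
    assert cols % num_shards == 0, "column count must divide evenly"
    width = cols // num_shards
    return [
        [row[r * width:(r + 1) * width] for row in matrix]
        for r in range(num_shards)
    ]


# Toy 2x16 weight: each of the 8 ranks gets a 2x2 slice.
weight = [list(range(16)), list(range(16, 32))]
shards = shard_columns(weight, NUM_SHARDS)
print(len(shards), len(shards[0]), len(shards[0][0]))  # → 8 2 2
```

At inference time each rank computes with only its slice, and partial results are combined across GPUs, which is why a checkpoint pre-split to match the parallel degree loads faster than resharding a monolithic one.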

For specific details about the BLOOM model itself, please see the original BLOOM model card.

For examples of how to use this repo, please see the following:
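As one illustration, the sketch below shows a typical way to load a model with DeepSpeed-Inference tensor parallelism. Assumptions: a host with 8 GPUs and `torch`, `transformers`, and `deepspeed` installed; the script name and prompt are hypothetical, and `init_inference` arguments vary across DeepSpeed versions, so treat this as a starting point rather than the repo's official recipe.

```python
# Hypothetical launch command: deepspeed --num_gpus 8 run_bloom.py
import os

import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/bloom-deepspeed-inference-fp16"
local_rank = int(os.getenv("LOCAL_RANK", "0"))

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Wrap the model with DeepSpeed-Inference; the tensor-parallel degree of 8
# matches the 8 checkpoint shards in this repo.
model = deepspeed.init_inference(
    model,
    mp_size=8,                        # tensor-parallel degree
    dtype=torch.float16,
    replace_with_kernel_inject=True,  # use DeepSpeed's fused inference kernels
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(f"cuda:{local_rank}")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each of the 8 launched processes loads only its shard's slice of the weights, which is the point of distributing this pre-split checkpoint.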
