Compiled lmsys/vicuna-7b-v1.5 using optimum-neuron (optimum-neuron==0.0.21 with neuron 2.18.2):
```shell
optimum-cli export neuron --model lmsys/vicuna-7b-v1.5 --batch_size 1 --sequence_length 1024 --num_cores 2 --auto_cast_type fp16 ./models/lmsys/vicuna-7b-v1.5
```
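Once exported, the compiled artifacts can be loaded with `optimum-neuron`'s `NeuronModelForCausalLM` for generation on an Inferentia/Trainium instance. A minimal sketch (the local path mirrors the export command above; running it requires Neuron hardware and the `optimum-neuron` package):

```python
# Sketch: load the pre-compiled Neuron model and generate text.
# Requires an AWS Neuron device (Inf2/Trn1) with optimum-neuron installed.
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

model_path = "./models/lmsys/vicuna-7b-v1.5"  # output dir of the export command

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = NeuronModelForCausalLM.from_pretrained(model_path)

# Inputs must respect the static shapes used at compile time
# (batch_size=1, sequence_length=1024).
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model was compiled with static shapes, batches larger than 1 or sequences longer than 1024 tokens will not work without re-exporting.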