How to run Llama-3.1-70B-Instruct inference with multi-GPU?

#38
by ToukesuD - opened

How can I run it on 4090 GPUs?
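A minimal sketch of one common approach, using Hugging Face transformers + accelerate to shard the model across all visible GPUs with `device_map="auto"`. The 70B weights alone are ~140 GB in bf16, far more than a single 24 GB 4090, so this example assumes 4-bit quantization via bitsandbytes (~35-40 GB) spread over several cards; the repo is gated, so a Hub access token is also assumed.

```python
# Sketch: multi-GPU inference for Llama-3.1-70B-Instruct (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-70B-Instruct"

# 4-bit quantization so the 70B model can fit across a few 24 GB GPUs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # accelerate splits the layers across all visible GPUs
)

messages = [{"role": "user", "content": "Explain tensor parallelism in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If you need higher throughput, a serving engine such as vLLM with its `tensor_parallel_size` option is another common way to split the model across GPUs, though it has the same total-VRAM requirement.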

Did you resolve it?
