Adding Evaluation Results (#15, opened about 1 year ago by leaderboard-pr-bot)
Is it possible to use a continuous batching inference server with this model? (#14, opened over 1 year ago by natserrano)
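No answer is recorded in this index for #14, but continuous-batching servers such as vLLM generally work with standard decoder-only checkpoints. A minimal sketch, assuming an fp16 checkpoint; "your-org/this-model-30b" is a placeholder model ID, not this repository's actual name:

```python
# Hedged sketch: vLLM performs continuous batching internally, scheduling
# new requests as earlier ones finish instead of waiting on a fixed batch.
from vllm import LLM, SamplingParams

llm = LLM(model="your-org/this-model-30b")  # placeholder model ID
params = SamplingParams(temperature=0.7, max_tokens=128)

# These prompts are batched dynamically rather than lock-stepped together.
outputs = llm.generate(
    ["What is continuous batching?", "Explain paged attention briefly."],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```

The same engine can also be run as an OpenAI-compatible HTTP server, which is the usual way to get continuous batching behind an API rather than in-process.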
Can the model be sharded over several GPUs? (1 reply; #13, opened over 1 year ago by silverfisk)
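For the sharding question in #13, a common approach (not confirmed in the thread itself) is Accelerate's device_map="auto" in transformers, which splits the layers across all visible GPUs. A minimal sketch; the model ID is again a placeholder:

```python
# Hedged sketch: multi-GPU sharding via device_map="auto" (requires the
# accelerate package). Layers are placed across all visible GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/this-model-30b"  # placeholder model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # shard weights across available GPUs
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```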
Works well on a 3090! Insanely good! (1 reply; #11, opened over 1 year ago by goldrushgames)
Has anyone successfully deployed this to SageMaker or a similar service? (#9, opened over 1 year ago by rafa9)
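No deployment recipe is recorded for #9. One hedged route is the Hugging Face LLM (TGI) container via the sagemaker SDK; the model ID, GPU count, and instance type below are illustrative assumptions, not values taken from this repository:

```python
# Hedged sketch: deploying a 30B model to a SageMaker endpoint using the
# Hugging Face LLM (TGI) container. All specifics here are placeholders.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # assumes a SageMaker execution role

model = HuggingFaceModel(
    role=role,
    image_uri=get_huggingface_llm_image_uri("huggingface"),
    env={
        "HF_MODEL_ID": "your-org/this-model-30b",  # placeholder model ID
        "SM_NUM_GPUS": "4",                        # shard over 4 GPUs
    },
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",  # 4x A10G; sized for a 30B fp16 model
)
print(predictor.predict({"inputs": "Hello"}))
```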
Model exits in oobabooga on Windows (8 replies; #8, opened over 1 year ago by 0spr4y)
Model Performance Curiosity (3 replies; #7, opened over 1 year ago by sumuks)
Having a problem with an RTX 4090 (17 replies; #6, opened over 1 year ago by crainto)
I got garbled text output (5 replies; #3, opened over 1 year ago by Tiankong2023)
Will this work with the Local LLMs One-Click UI RunPod template? (7 replies; #2, opened over 1 year ago by nichedreams)
How much VRAM + RAM does the 30B model need? I have a 3060 12 GB + 32 GB of RAM. (21 replies; #1, opened over 1 year ago by DaveScream)
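For the sizing question in #1, a rough rule of thumb is bytes-per-parameter times parameter count for the weights, plus KV cache and framework overhead on top. A back-of-envelope sketch:

```python
# Hedged back-of-envelope estimate of weight memory for a 30B-parameter
# model at common precisions. Real usage adds KV cache and overhead.
params = 30e9
for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{label}: ~{gib:.0f} GiB of weights")
# fp16: ~56 GiB, 8-bit: ~28 GiB, 4-bit: ~14 GiB
```

Even at 4-bit, the weights alone exceed a 12 GB 3060, so a setup like that typically relies on partial CPU offload (e.g. a llama.cpp-style quantized build splitting layers between GPU and system RAM), at a significant speed cost.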