Optimize inference speed (5)
#9 opened 5 months ago by CoolWP
OOM occurs when converting the model to TorchScript. I have a question about this issue. (1)
#8 opened 5 months ago by LeeJungHoon
Add benchmark to MTEB (5)
#7 opened 5 months ago by sam-gab
base model (16)
#6 opened 5 months ago by ambivalent02
It is now working in Colab (3)
#5 opened 5 months ago by LeeJungHoon
How does Chinese dense retrieval performance compare with BGE V1.5? (3)
#3 opened 5 months ago by TianyuLLM
OOMs on 8 GB GPU, is it normal? (3)
#2 opened 5 months ago by tanimazsin130