Suggestion on how to prompt the model for specific RAG use cases · #21 opened 15 days ago by eloukas
Answer questions beyond the content provided · #20 opened 18 days ago by longhtgg
Megatron format to HF format · #19 opened 19 days ago by jjw0126
[AUTOMATED] Model Memory Requirements · #18 opened 29 days ago by model-sizer-bot
generation_config.json: add a mapping for the special token '&lt;|im_end|&gt;' so generation stops when '&lt;|im_end|&gt;' is encountered · 3 replies · #17 opened about 1 month ago by zjyhf
Tokenizer: add the special token '&lt;|im_end|&gt;' so generation stops when '&lt;|im_end|&gt;' is encountered · 1 reply · #16 opened about 1 month ago by zjyhf
How to use in llama.cpp server · 2 replies · #15 opened about 1 month ago by subbur
How to set context in multi-turn QA? · 6 replies · #14 opened about 1 month ago by J22
Update README.md · #13 opened about 1 month ago by freyacoltman
Trying to run on a dedicated endpoint with 4x A100 (320 GB) but still getting "not enough hardware capacity" · 5 replies · #11 opened about 1 month ago by trungnx26
Colab Notebook · 1 reply · #10 opened about 1 month ago by ChristophSchuhmann
Megatron-LM training (fine-tuning) code? · 3 replies · #9 opened about 1 month ago by StephennFernandes
If I make the context empty, it outputs Chinese · 6 replies · #8 opened about 2 months ago by Cometyang
Adding `safetensors` variant of this model · #7 opened about 2 months ago by SFconvertbot
Adding `safetensors` variant of this model · #6 opened about 2 months ago by SFconvertbot
Chat template · 15 replies · #5 opened about 2 months ago by bartowski
Adding `safetensors` variant of this model · #4 opened about 2 months ago by SFconvertbot
I got an answer with the token "ologne" at the end · 1 reply · #3 opened about 2 months ago by Stilgar