# MoMonir/Phi-3-mini-128k-instruct-GGUF

This model was converted to GGUF format from [`nvidia/Llama3-ChatQA-1.5-8B`](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B).
Refer to the [original model card](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) for more details on the model.
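As a quick sketch of how the converted file might be used: the snippet below builds a ChatQA-style prompt and (commented out) feeds it to llama-cpp-python. The GGUF filename and the exact turn layout are assumptions, not taken from this repo — check the repo's file list and the original model card before relying on them.

```python
def build_chatqa_prompt(system: str, question: str) -> str:
    """Assemble a single-turn prompt in a System/User/Assistant layout
    (layout assumed here; verify against the original model card)."""
    return f"System: {system}\n\nUser: {question}\n\nAssistant:"

prompt = build_chatqa_prompt(
    "This is a chat between a user and an artificial intelligence assistant.",
    "What does GGUF stand for?",
)

# Running inference needs llama-cpp-python and the actual GGUF file
# (the filename below is hypothetical):
# from llama_cpp import Llama
# llm = Llama(model_path="Llama3-ChatQA-1.5-8B.Q4_K_M.gguf", n_ctx=4096)
# print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```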
<!-- README_GGUF.md-about-gguf start -->
### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description)

Here is an incomplete list of clients and libraries that are known to support GGUF.

<!-- README_GGUF.md-about-gguf end -->
## #--# Original Model Card #--#

## Model Details

We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from [ChatQA (1.0)](https://arxiv.org/abs/2401.10225) and is built on top of the [Llama-3 base model](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capabilities. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained with [Megatron-LM](https://github.com/NVIDIA/Megatron-LM); we then converted the checkpoints to Hugging Face format.
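Since ChatQA targets retrieval-augmented generation, a minimal sketch of prompting with retrieved context is shown below. The layout (system message, retrieved passages, then the user turn) is an assumption based on common ChatQA usage, not a verbatim spec from the model card.

```python
def build_rag_prompt(system: str, passages: list[str], question: str) -> str:
    # Assumed layout: system message, retrieved passages as context,
    # then the user question and an open assistant cue.
    context = "\n\n".join(passages)
    return f"System: {system}\n\n{context}\n\nUser: {question}\n\nAssistant:"

prompt = build_rag_prompt(
    "This is a chat between a user and an assistant. Answer from the given context.",
    ["GGUF is a model file format used by llama.cpp and compatible runtimes."],
    "Which runtimes read GGUF files?",
)
```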