pankajmathur committed
Commit 7dadf05 · Parent(s): b8758ca · Update README.md

README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 - psmathur/lima_unchained_v1
 ---
 
-# 
+# Lima_Unchained_70b
 
 A Llama2-70b model fine-tuned using QLora on all the linear layers with carefully selected ~900 conversations from the [Lima](https://arxiv.org/pdf/2305.11206.pdf)
 
@@ -17,7 +17,7 @@ A Llama2-70b model fine-tuned using QLora on all the linear layers with carefull
 
 ## Evaluation
 
-We evaluated 
+We evaluated Lima_Unchained_70b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
 
 Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 
@@ -52,9 +52,11 @@ Below shows a code example on how to use this model
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 
-
+model_path="pankajmathur/Lima_Unchained_70b"
+
+tokenizer = AutoTokenizer.from_pretrained(model_path)
 model = AutoModelForCausalLM.from_pretrained(
-
+    model_path,
     torch_dtype=torch.float16,
     load_in_8bit=True,
     low_cpu_mem_usage=True,
@@ -89,9 +91,9 @@ Exercise caution and cross-check information when necessary.
 Please kindly cite using the following BibTeX:
 
 ```
-@misc{
+@misc{Lima_Unchained_70b,
 author = {Pankaj Mathur},
-title = {
+title = {Lima_Unchained_70b: A LIMA style Llama2-70b model},
 year = {2023},
 publisher = {HuggingFace},
 journal = {HuggingFace repository},
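The usage example added by this commit is truncated after `low_cpu_mem_usage=True,` in the view above. For reference, here is a minimal sketch of how the complete snippet could continue, using the standard `transformers` text-generation pipeline the commit already imports; the `device_map`, prompt, and generation settings below are illustrative assumptions, not taken from the model card:

```python
# Sketch of the full usage example; everything after low_cpu_mem_usage
# is an assumption based on common transformers usage, not the card itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "pankajmathur/Lima_Unchained_70b"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    load_in_8bit=True,        # 8-bit quantization via bitsandbytes
    low_cpu_mem_usage=True,
    device_map="auto",        # assumption: shard the 70B weights across available GPUs
)

# Wrap the quantized model in a text-generation pipeline.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Illustrative prompt; the card does not specify a prompt template.
output = generator(
    "Explain the LIMA fine-tuning approach in one paragraph.",
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
)
print(output[0]["generated_text"])
```

Even with `load_in_8bit=True`, the weights alone occupy roughly 70 GB at one byte per parameter, so a multi-GPU node (or a single 80 GB-class accelerator plus offloading) is still required.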
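The evaluation sentence added in the second hunk credits EleutherAI's Language Model Evaluation Harness but does not show the invocation. Below is a minimal sketch of such a run using the harness's Python entry point; `simple_evaluate`, the task names, and the default few-shot settings reflect recent harness releases (v0.4+) and are assumptions, since the card states neither the version nor the exact flags used:

```python
# Sketch of scoring the model with EleutherAI's lm-evaluation-harness.
# Assumes lm-eval >= 0.4 (pip install lm-eval); the harness version and
# settings actually used for the card's numbers are not documented.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face causal-LM backend
    model_args="pretrained=pankajmathur/Lima_Unchained_70b,load_in_8bit=True",
    # Tasks underlying the Open LLM Leaderboard metrics the README cites.
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
    batch_size=1,
)

# Print the per-task metric dictionaries.
for task, metrics in results["results"].items():
    print(task, metrics)
```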