filipealmeida committed
Commit cd3f6da
1 Parent(s): 9411452

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ ggml-model-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ ggml-model-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ ggml-model-f16.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,74 @@
  ---
- license: cc-by-3.0
+ license: cc-by-sa-3.0
+ datasets:
+ - VMware/open-instruct-v1-oasst-dolly-hhrlhf
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
  ---
+
+ # Open LLama 13B Open Instruct
+ - Model creator: [VMware](https://huggingface.co/VMware)
+ - Original model: [Open LLama 13B Open Instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct)
+
+ ## Description
+
+ This repo contains the GGUF model files for [Open LLama 13B Open Instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct).
+
+ These files are compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp).
+
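+ As a quick illustration (not part of the original model card), one way to load these GGUF files is through the llama-cpp-python bindings; the file path and generation settings below are assumptions to adapt to your setup:
+
+ ```python
+ from llama_cpp import Llama
+
+ # Path to one of the quantized files in this repo (adjust as needed).
+ llm = Llama(model_path="ggml-model-Q4_0.gguf", n_ctx=2048)
+
+ # The model expects the Alpaca prompt template (see the notes below).
+ prompt = (
+     "Below is an instruction that describes a task. "
+     "Write a response that appropriately completes the request.\n\n"
+     "### Instruction:\nWhat is a GGUF file?\n\n### Response:"
+ )
+
+ result = llm(prompt, max_tokens=256)
+ print(result["choices"][0]["text"])
+ ```
+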
+ # VMware/open-llama-13B-open-instruct
+ Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for <b>COMMERCIAL USE</b>. <br>
+
+ <b>NOTE</b>: The model was trained using the Alpaca prompt template. \
+ <b>NOTE</b>: The fast tokenizer produces incorrect encodings; set ```use_fast = False``` when instantiating the tokenizer. \
+ <b>NOTE</b>: The model might struggle with code because the tokenizer merges multiple spaces (a quick check is sketched below).
+
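+ An optional way to inspect the two tokenizer caveats above; this check is illustrative and not part of the original card:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # use_fast=False as recommended above; the fast tokenizer mis-encodes text for this model.
+ tok = AutoTokenizer.from_pretrained("VMware/open-llama-13b-open-instruct", use_fast=False)
+
+ # Round-trip a snippet with runs of spaces to see how indentation survives tokenization.
+ code = "def add(a, b):\n    return a  +  b"
+ print(tok.decode(tok(code).input_ids, skip_special_tokens=True))
+ ```
+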
+ ## License
+ - <b>Commercially viable</b>
+ - Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf), is under cc-by-sa-3.0
+ - Language model, [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b), is under apache-2.0
+
+ ## Nomenclature
+
+ - Model: Open-llama
+ - Model size: 13B parameters
+ - Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)
+
+ ## Use in Transformers
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = 'VMware/open-llama-13b-open-instruct'
+
+ # The fast tokenizer mis-encodes text for this model, so load the slow one.
+ tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')
+
+ # Alpaca-style prompt template used during instruction tuning.
+ prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
+
+ prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'
+
+ input_text = prompt_template.format(instruction=prompt)
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ # Generate, then drop the prompt tokens so only the response is decoded.
+ output = model.generate(input_ids, max_length=512)
+ input_length = input_ids.shape[1]
+ output = output[:, input_length:]
+ print(tokenizer.decode(output[0]))
+ ```
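+
+ Continuing from the snippet above, a sampled (rather than greedy) generation can look like this; the parameter values are illustrative and not from the original card:
+
+ ```python
+ # Reuses model, tokenizer, input_ids and input_length from the example above.
+ output_sampled = model.generate(
+     input_ids,
+     max_new_tokens=256,
+     do_sample=True,
+     temperature=0.7,
+     top_p=0.9,
+ )
+ print(tokenizer.decode(output_sampled[0, input_length:], skip_special_tokens=True))
+ ```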
+
+ ## Finetuning details
+ The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning).
+
+ ## Evaluation
+
+ <b>TODO</b>
ggml-model-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c505951413c201868d893f1c91f576aa260e2e9baa5f7a497c0aa6688b22c7be
+ size 7365869152
ggml-model-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7635749b68b4d708869ad7603d0eb3415d385a4f17d7fb6c22009c25f9408a3a
+ size 13831353952
ggml-model-f16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2f684e3832feb7d66e9564c3ffeb6849b0d8c970ef847297733eced1e35cdab
+ size 26033337888
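
The three quantization variants above are stored as Git LFS pointers. A minimal sketch of fetching a single variant with huggingface_hub; the repo_id below is a placeholder, not taken from this page:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id -- substitute the actual namespace/name of this repository.
repo_id = "your-namespace/open-llama-13b-open-instruct-GGUF"

# Download the 4-bit quantized variant; returns the local file path.
local_path = hf_hub_download(repo_id=repo_id, filename="ggml-model-Q4_0.gguf")
print(local_path)
```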