Teja-Gollapudi committed on
Commit
e249fd0
1 Parent(s): 5ef8730

Create README.md

Files changed (1)
  1. README.md +75 -0
README.md ADDED
---
license: cc
datasets:
- VMware/open-instruct-v1.1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# VMware/open-llama-0.7T-7B-open-instruct-v1.1

Instruction-tuned version of the fully trained OpenLLaMA 7B model.

## License

- <b>Commercially viable</b>
- The instruction dataset, [VMware/open-instruct-v1.1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1.1-oasst-dolly-hhrlhf), is licensed under cc-by-sa-3.0
- The language model ([openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)) is licensed under apache-2.0

## Nomenclature

- Model: Open-llama
- Model size: 7B parameters
- Dataset: Open-instruct-v1.1 (oasst, dolly, hhrlhf); see the dataset-loading sketch below

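The instruction dataset can be pulled straight from the Hub for inspection. Below is a minimal sketch using the `datasets` library; the split and column names are whatever the dataset repository defines, and the `train` split used here is an assumption:

```python
# Minimal sketch: inspect the instruction-tuning dataset with the `datasets` library.
from datasets import load_dataset

# Load every available split as a DatasetDict.
ds = load_dataset("VMware/open-instruct-v1.1-oasst-dolly-hhrlhf")

print(ds)              # split names, column names, and row counts
print(ds["train"][0])  # first example (assumes a 'train' split exists)
```
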
## Use in Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-7B-open-instruct-v1.1'

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load in fp16 and place the weights on available devices in order.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca-style instruction/response template the model expects.
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'

# Wrap the instruction in the template and tokenize.
input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

# Generate, then drop the prompt tokens so only the model's response is decoded.
output_ids = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output_ids = output_ids[:, input_length:]
response = tokenizer.decode(output_ids[0])

print(response)

'''
Attention is a mechanism used in deep learning models, such as transformer models, to capture global dependencies between different parts of the input. In a transformer model, the attention mechanism works by computing a weighted sum of the input vectors and then applying a non-linear activation function to the result.

The attention mechanism in a transformer model works in two steps:

1. Query-Key Mapping: First, the input sequence is divided into two parts: the query vector and the key vector. The query vector represents the input at the current position, and the key vector represents the input at a previous position.

2. Attention Weight Calculation: Second, the attention weights are calculated using the dot product between the query vector and each key vector. The attention weights represent the importance of the input at the previous position to the current position.

The attention weights are then used to compute the attention score for each input element. The attention score represents the relevance of the input element to the current position.

The attention mechanism in a transformer model is designed to capture global dependencies between different parts of the input. By attending to input elements from different positions, the model can learn to understand the relationships between different parts of the input. This allows the model to perform more complex tasks, such as understanding the relationships between words in a sentence or pixels in an image.</s>
'''
```
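
The sample response above describes attention as a query/key dot product followed by a softmax weighting. For readers who want to see that computation concretely, here is a tiny, self-contained scaled dot-product attention sketch; it is illustrative only, not part of this model's code, and the dimensions are arbitrary:

```python
import torch
import torch.nn.functional as F

# Toy scaled dot-product attention over a single 4-token sequence.
seq_len, d_model = 4, 8
q = torch.randn(seq_len, d_model)  # queries: what each position is looking for
k = torch.randn(seq_len, d_model)  # keys:    what each position offers
v = torch.randn(seq_len, d_model)  # values:  the content that gets mixed

scores = q @ k.T / d_model ** 0.5     # similarity of every query to every key
weights = F.softmax(scores, dim=-1)   # normalize each row into attention weights
attended = weights @ v                # weighted sum of values for every position

print(weights.shape, attended.shape)  # torch.Size([4, 4]) torch.Size([4, 8])
```

Each row of `weights` sums to 1 and says how strongly that position attends to every other position; the corresponding row of `attended` is the resulting mixture of value vectors.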

## Evaluation

<b>TODO</b>