abhinand committed on
Commit
61b99f7
1 Parent(s): c5d6f36

Delete .ipynb_checkpoints

.ipynb_checkpoints/README-checkpoint.md DELETED
@@ -1,58 +0,0 @@
- ---
- tags:
- - merge
- - mergekit
- - lazymergekit
- - aaditya/Llama3-OpenBioLLM-8B
- base_model:
- - aaditya/Llama3-OpenBioLLM-8B
- ---
-
- # Llama-3-Galen-8B-32k-v1
-
- Llama-3-Galen-8B-32k-v1 is a DARE-TIES merge of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) onto the johnsnowlabs/JSL-MedLlama-3-8B-v2.0 base, produced with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
-
- > **This model handles a 32K context out of the box, thanks to its dynamic RoPE scaling.**
-
- ## 🧩 Configuration
-
- ```yaml
- models:
-   - model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0
-     # No parameters necessary for base model
-   - model: aaditya/Llama3-OpenBioLLM-8B
-     parameters:
-       density: 0.53
-       weight: 0.5
- merge_method: dare_ties
- base_model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0
- parameters:
-   int8_mask: true
- dtype: bfloat16
- ```
-
- ## 💻 Usage
-
- ```python
- !pip install -qU transformers accelerate
-
- from transformers import AutoTokenizer
- import transformers
- import torch
-
- model = "abhinand/Llama-3-Galen-8B-32k-v1"
- messages = [{"role": "user", "content": "How long does it take to recover from COVID-19?"}]
-
- # Build the prompt with the model's chat template
- tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-
- # Load the model into a text-generation pipeline
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
-
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
- print(outputs[0]["generated_text"])
- ```
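For reference, a mergekit configuration like the one in the deleted README is normally applied with the mergekit CLI. A minimal sketch, assuming the YAML above is saved as `config.yaml`; the output directory name and the `--cuda` flag are illustrative and may vary across mergekit versions:

```python
# Notebook-style install, matching the README's !pip convention
!pip install -qU mergekit

# Run the DARE-TIES merge defined in config.yaml and write the merged
# weights to ./Llama-3-Galen-8B-32k-v1 (output path is illustrative)
!mergekit-yaml config.yaml ./Llama-3-Galen-8B-32k-v1 --cuda
```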
.ipynb_checkpoints/config-checkpoint.json DELETED
@@ -1,31 +0,0 @@
- {
-   "_name_or_path": "johnsnowlabs/JSL-MedLlama-3-8B-v2.0",
-   "architectures": [
-     "LlamaForCausalLM"
-   ],
-   "attention_bias": false,
-   "attention_dropout": 0.0,
-   "bos_token_id": 128000,
-   "eos_token_id": 128001,
-   "hidden_act": "silu",
-   "hidden_size": 4096,
-   "initializer_range": 0.02,
-   "intermediate_size": 14336,
-   "max_position_embeddings": 8192,
-   "model_type": "llama",
-   "num_attention_heads": 32,
-   "num_hidden_layers": 32,
-   "num_key_value_heads": 8,
-   "pretraining_tp": 1,
-   "rms_norm_eps": 1e-05,
-   "rope_scaling": {
-     "type": "dynamic",
-     "factor": 4.0
-   },
-   "rope_theta": 500000.0,
-   "tie_word_embeddings": false,
-   "torch_dtype": "bfloat16",
-   "transformers_version": "4.39.3",
-   "use_cache": false,
-   "vocab_size": 128256
- }
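The `rope_scaling` entry is what backs the README's 32K-context claim: dynamic RoPE scaling stretches the model's trained 8192-token window by the given factor. A minimal sketch of the arithmetic, in plain Python with the values copied from the config above (this simplifies the dynamic NTK mechanics to the headline factor):

```python
# Values from the deleted config-checkpoint.json
max_position_embeddings = 8192
rope_scaling_factor = 4.0  # "rope_scaling": {"type": "dynamic", "factor": 4.0}

# Dynamic RoPE scaling extends the usable context window by the scaling
# factor: 8192 * 4.0 = 32768 tokens, i.e. the advertised 32K.
effective_context = int(max_position_embeddings * rope_scaling_factor)
print(effective_context)  # 32768
```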