Kukedlc committed on
Commit
a8d1acb
1 Parent(s): fe13341

Update README.md

Files changed (1): README.md +8 -2
README.md CHANGED
@@ -14,6 +14,9 @@ base_model:
 
 # Trascendental-Bot-7B
 
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/vHD7qJpFPXEc6CcE36vwQ.png)
+
 Trascendental-Bot-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [Hypersniper/The_Philosopher_Zephyr_7B](https://huggingface.co/Hypersniper/The_Philosopher_Zephyr_7B)
 * [sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA](https://huggingface.co/sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA)
@@ -56,7 +59,10 @@ import transformers
 import torch
 
 model = "Kukedlc/Trascendental-Bot-7B"
-messages = [{"role": "user", "content": "What is a large language model?"}]
+messages = [
+    {"role": "system", "content": "You are an expert assistant in mysticism and philosophy."},
+    {"role": "user", "content": "Create an innovative and disruptive theory that explains human consciousness. Give me an extensive and detailed answer."}
+]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
 prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
@@ -67,6 +73,6 @@ pipeline = transformers.pipeline(
     device_map="auto",
 )
 
-outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
 ```
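
The second hunk swaps the single user message for a system-plus-user pair. As a rough illustration of the prompt string that `apply_chat_template` builds from the new `messages` list, here is a minimal sketch using plain string formatting, so it runs without downloading the tokenizer. The `render_zephyr` helper is hypothetical: it approximates the Zephyr-style chat format used by one of the merged base models, and the real prompt comes from the model's own tokenizer template, which may differ in detail.

```python
# The updated `messages` list from this commit.
messages = [
    {"role": "system", "content": "You are an expert assistant in mysticism and philosophy."},
    {"role": "user", "content": "Create an innovative and disruptive theory that explains human consciousness. Give me an extensive and detailed answer."},
]

def render_zephyr(messages, add_generation_prompt=True):
    """Approximate Zephyr chat format: one '<|role|>\\ncontent</s>' block
    per turn, with a trailing '<|assistant|>' cue when generation follows.
    Illustration only -- not the tokenizer's actual template."""
    parts = [f"<|{m['role']}|>\n{m['content']}</s>" for m in messages]
    if add_generation_prompt:
        parts.append("<|assistant|>")
    return "\n".join(parts)

prompt = render_zephyr(messages)
print(prompt)
```

With the real model, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` replaces this helper and guarantees the exact special tokens the merge was trained with.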