Pankaj Mathur committed · Commit 53bd1d1 · Parent(s): 6a9156d
Update README.md

README.md CHANGED
@@ -67,7 +67,7 @@ def generate_text(system, instruction, input=None):
     tokens = torch.LongTensor(tokens).unsqueeze(0)
     tokens = tokens.to('cuda')
 
-    instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024}
 
     length = len(tokens[0])
     with torch.no_grad():
@@ -77,17 +77,38 @@ def generate_text(system, instruction, input=None):
         use_cache=True,
         do_sample=True,
         top_p=instance['top_p'],
-        temperature=instance['temperature']
     )
     output = rest[0][length:]
     string = tokenizer.decode(output, skip_special_tokens=True)
-
 
-#
-system = 'You are an AI assistant
-instruction = '
-
-
 
 ```
 
@@ -95,9 +116,8 @@ generate_text(system, instruction, input)
 
 Next Goals:
 1) Try more data like actually using FLAN-v2, just like Orka Research Paper (I am open for suggestions)
-2)
-3) Provide
-4) Provide 4bit GGML/GPTQ quantized model (may be [TheBloke](https://huggingface.co/TheBloke) can help here)
 
 
 Reference:
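The `instance` dict in the removed line above centralizes the sampling settings (`top_p`, `temperature`) that this commit extends with a `top_k` entry. As a reminder of what top-k does at each decoding step, here is a toy, framework-free sketch (a hypothetical helper for illustration, not code from this repo or from transformers):

```python
def top_k_filter(probs, k):
    """Zero out all but the k highest-probability entries, then renormalize."""
    # Indices of the k largest probabilities.
    keep = set(sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k])
    filtered = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

# With k=2 only the two most likely tokens remain sampleable.
print(top_k_filter([0.5, 0.3, 0.1, 0.1], 2))
```

With the commit's `top_k: 50`, each sampling step is restricted to the 50 most likely tokens before a token is drawn.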
@@ -67,7 +67,7 @@ def generate_text(system, instruction, input=None):
     tokens = torch.LongTensor(tokens).unsqueeze(0)
     tokens = tokens.to('cuda')
 
+    instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}
 
     length = len(tokens[0])
     with torch.no_grad():
@@ -77,17 +77,38 @@ def generate_text(system, instruction, input=None):
         use_cache=True,
         do_sample=True,
         top_p=instance['top_p'],
+        temperature=instance['temperature'],
+        top_k=instance['top_k']
     )
     output = rest[0][length:]
     string = tokenizer.decode(output, skip_special_tokens=True)
+    return f'[!] Response: {string}'
 
+# Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai
+system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
+instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project'
+print(generate_text(system, instruction))
+
+```
+
+```
+
+[!] Response:
+Dear Sam Altman,
+
+I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way.
+
+While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools.
+
+Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly.
+
+I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future.
+
+Thank you for your consideration.
+
+Sincerely,
+
+[Your Name]
 
 ```
 
@@ -95,9 +116,8 @@ generate_text(system, instruction, input)
 
 Next Goals:
 1) Try more data like actually using FLAN-v2, just like Orka Research Paper (I am open for suggestions)
+2) Provide more options for Text generation UI. (may be https://github.com/oobabooga/text-generation-webui)
+3) Provide 4bit GGML/GPTQ quantized model (may be [TheBloke](https://huggingface.co/TheBloke) can help here)
 
 
 Reference:
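The `top_p=instance['top_p']` kwarg forwarded to `generate` in the hunks above corresponds to nucleus (top-p) sampling: keep the smallest set of most-likely tokens whose cumulative probability reaches p. A toy, framework-free sketch of that idea (a hypothetical helper for illustration, not the transformers implementation):

```python
def top_p_filter(probs, p):
    """Keep the smallest high-probability set whose cumulative mass reaches p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in order:
        keep.add(i)
        cum += probs[i]
        if cum >= p:  # nucleus is complete
            break
    filtered = [probs[i] if i in keep else 0.0 for i in range(len(probs))]
    total = sum(filtered)
    return [x / total for x in filtered]

# With p=0.7 the nucleus is just the two most likely tokens.
print(top_p_filter([0.5, 0.3, 0.1, 0.1], 0.7))
```

Since the commit keeps `'top_p': 1.0`, top-p leaves the distribution untouched here, so the effective truncation comes from the newly added `'top_k': 50`.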