Thishyaketh committed on
Commit
092f703
1 Parent(s): 2f705b3

Rename README.md to About N-Gen-2

Files changed (2):
  1. About N-Gen-2 +19 -0
  2. README.md +0 -3
About N-Gen-2 ADDED
@@ -0,0 +1,19 @@
+ Our N-Gen-2 model can process input sequences and generate output sequences, making it suitable for tasks like language translation, text summarization, and dialogue generation.
+
+ Language Understanding: The model can understand natural language input and generate coherent responses based on the context provided.
+
+ Imaginative Writing: It can generate imaginative and creative text, allowing for the generation of stories, poems, and other fictional content.
6
+
7
+ No Pre-trained Model Usage: The model does not rely on pre-trained language models like GPT or BERT, making it more customizable and potentially better suited for specific tasks or domains.
+
+ Encoder-Decoder Architecture: The model follows an encoder-decoder paradigm, where the encoder processes input sequences and the decoder generates the corresponding output sequences.
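The encoder-decoder flow can be sketched as a toy example. Everything here (dimensions, weights, the mean-pooling encoder, the greedy decoder) is made up for illustration and is not the actual N-Gen-2 implementation:

```python
import numpy as np

# Toy illustration of the encoder-decoder paradigm: the encoder maps an
# input sequence to a fixed context vector, and the decoder emits output
# tokens conditioned on that context. All values are hypothetical.
rng = np.random.default_rng(0)
VOCAB, HIDDEN = 16, 8
embed = rng.normal(size=(VOCAB, HIDDEN))   # token embedding table
w_out = rng.normal(size=(HIDDEN, VOCAB))   # hidden state -> vocab scores

def encode(input_ids):
    # Encoder stand-in: average the input token embeddings into one context vector.
    return embed[input_ids].mean(axis=0)

def decode(context, steps):
    # Decoder stand-in: greedily pick the highest-scoring token each step,
    # mixing the chosen token's embedding back into the state.
    state, out = context.copy(), []
    for _ in range(steps):
        tok = int(np.argmax(state @ w_out))
        out.append(tok)
        state = 0.5 * state + 0.5 * embed[tok]
    return out

tokens = decode(encode([1, 4, 2]), steps=5)
```

A real encoder and decoder would be deep networks (often attention-based), but the input → context → output shape of the computation is the same.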
+
+ Flexible Text Generation: The model can generate text of varying length, from short sentences to longer passages, and the length of the generated output can be capped.
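One way to read this length control is a decoding loop that stops at an end-of-sequence token or at a maximum length, whichever comes first. The sketch below uses a made-up next-token function and EOS id, not the model's actual decoder:

```python
# Hypothetical sketch of length-controlled generation: decoding ends
# either when the (assumed) end-of-sequence token appears or when the
# max_length cap is reached.
EOS = 0  # assumed end-of-sequence token id

def generate(next_token_fn, max_length):
    out = []
    while len(out) < max_length:
        tok = next_token_fn(out)
        if tok == EOS:
            break  # model chose to stop early
        out.append(tok)
    return out

# Toy next-token function: emits token 7 three times, then EOS.
short = generate(lambda prev: 7 if len(prev) < 3 else EOS, max_length=10)  # stops at EOS: [7, 7, 7]
capped = generate(lambda prev: 7 if len(prev) < 3 else EOS, max_length=2)  # stops at cap: [7, 7]
```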
+
+ Training Capabilities: The model can be trained on input-output pairs, allowing supervised learning on datasets tailored to the task at hand.
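The supervised setup, where each input-output pair contributes to a loss that gradient descent reduces, can be illustrated with a deliberately tiny stand-in model. The data, learning rate, and one-weight "model" are invented for the sketch and have nothing to do with the real training code:

```python
# Hypothetical sketch of supervised learning on input-output pairs:
# each pair yields a squared-error loss, and the parameter is nudged
# down the gradient. A one-weight linear model y = w * x stands in
# for the real network.
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up pairs with target y = 2 * x

w = 0.0                 # single trainable parameter
lr = 0.01               # made-up learning rate
for _ in range(100):    # epochs over the dataset
    for x, y in pairs:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
# w converges toward 2.0, the slope that fits every pair
```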
+
+ Overall, the N-Gen-2 model is a versatile architecture capable of generating natural language text for a wide range of applications, from storytelling to language translation, without relying on pre-trained models.
+
+
+ Our model has 250 billion parameters, leaving N-Gen-1, which had just 30 million parameters, far behind.
+ The dataset for N-Gen-2 is specially made to train it for the many tasks described above.
README.md DELETED
@@ -1,3 +0,0 @@
- ---
- license: unknown
- ---