cognitivess committed on
Commit
ed980e5
1 Parent(s): f53ee40

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +6 -176
README.md CHANGED
@@ -1,188 +1,18 @@
- ---
- tags:
- - text-generation-inference
- - text-generation
- - Sentiment Analysis
- - qlora
- - peft
- license: apache-2.0
- library_name: transformers
- widget:
- - messages:
-   - role: user
-     content: What is your name?
- language:
- - en
- - ro
- pipeline_tag: text-generation
- model-index:
- - name: CognitivessAI/cognitivess
-   results:
-   - task:
-       type: text-generation
-       name: Text Generation
-     metrics:
-     - name: Evaluation Status
-       type: accuracy
-       value: Pending
-       description: Comprehensive evaluations are planned and will be conducted in the future.
- model_type: CognitivessForCausalLM
- quantization_config:
-   load_in_8bit: true
-   llm_int8_threshold: 6.0
- fine_tuning:
-   method: qlora
-   peft_type: LORA
- inference:
-   parameters:
-     max_new_tokens: 8192
-     temperature: 0.7
-     top_p: 0.95
-     do_sample: true
- ---
 
- <div align="center">
-   <img src="https://cdn-uploads.huggingface.co/production/uploads/65ec00afa735404e87e1359e/u5qyAgn_2-Bh46nzOFlcI.png">
-   <h2>Accessible and portable generative AI solutions for developers and businesses.</h2>
- </div>
-
- <p align="center" style="margin-top: 0px;">
-   <a href="https://cognitivess.com">
-     <span class="link-text" style=" margin-right: 5px;">Website</span>
-   </a> |
-   <a href="https://bella.cognitivess.com">
-     <span class="link-text" style=" margin-right: 5px;">Demo</span>
-   </a> |
-   <a href="https://github.com/Cognitivess/cognitivess">
-     <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
-     <span class="link-text" style=" margin-right: 5px;">GitHub</span>
-   </a>
- </p>
-
- # Cognitivess
-
- Cognitivess is an advanced language model developed by Cognitivess AI, based in Bucharest, Romania. This model is trained from scratch on a diverse and curated dataset, encompassing a wide range of knowledge domains and linguistic styles. Utilizing state-of-the-art Quantized Low-Rank Adaptation (QLoRA) techniques, Cognitivess delivers high-quality text generation while maintaining exceptional efficiency.
-
- Key features:
- - Built on a custom-designed architecture inspired by LLaMA, optimized for versatility and performance
- - Trained on a rich tapestry of data sources, including scientific literature, creative writing, multilingual corpora, and real-world conversational data
- - Employs advanced few-shot learning capabilities, allowing it to quickly adapt to new tasks with minimal examples
- - Capable of generating text in multiple languages, with particular strength in English and Romanian
- - Specialized in tasks such as text generation, sentiment analysis, and complex problem-solving across various domains
- - Incorporates ethical AI principles, with built-in safeguards against generating harmful or biased content
-
- Cognitivess aims to serve as more than just an AI assistant; it's designed to be a knowledgeable companion capable of engaging in substantive discussions on topics ranging from cutting-edge technology to classical literature. Whether you need help with data analysis, creative storytelling, or exploring abstract concepts, Cognitivess is equipped to provide nuanced and contextually appropriate responses.
-
- This model represents Cognitivess AI's commitment to pushing the boundaries of natural language processing. By combining vast knowledge with advanced reasoning capabilities, Cognitivess strives to bridge the gap between artificial and human intelligence, opening new possibilities for AI applications across various industries and academic fields.
-
-
- ***Under the Cognitivess Open Model License, Cognitivess AI confirms:***
- - Models are commercially usable.
- - You are free to create and distribute Derivative Models.
- - Cognitivess does not claim ownership to any outputs generated using the Models or Derivative Models.
-
- ### Intended use
-
- Cognitivess is a multilingual chat model designed to support a variety of languages including English, Romanian, Spanish, French, German, and many more, intended for diverse language applications.
-
-
- **Model Developer:** Cognitivess AI
-
- **Model Dates:** Cognitivess was trained between July 2024.
-
- **Data Freshness:** The pretraining data has a cutoff of June 2024. Training will continue beyond the current data cutoff date to incorporate new data as it becomes available.
-
-
- ### Model Architecture:
-
- Cognitivess model architecture is Transformer-based and trained with a sequence length of 8192 tokens.
-
- **Architecture Type:** Transformer (auto-regressive language model)
-
-
- Try this model on [bella.cognitivess.com](https://bella.cognitivess.com/) now.
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ec00afa735404e87e1359e/CQeAV4lwbQp1G8H5n4uWx.png)
-
- # Usage
 
  To use this model, first install the custom package:
 
  ```bash
  pip install git+https://huggingface.co/CognitivessAI/cognitivess
-
  ```
 
- ```python
-
- import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM
- from peft import PeftModel, PeftConfig
-
- # Set the device
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- print(f"Using device: {device}")
-
- # Load the tokenizer
- tokenizer = AutoTokenizer.from_pretrained("CognitivessAI/cognitivess")
-
- # Load the PEFT configuration
- peft_config = PeftConfig.from_pretrained("CognitivessAI/cognitivess")
-
- # Load the base model
- base_model = AutoModelForCausalLM.from_pretrained(
-     peft_config.base_model_name_or_path,
-     device_map="auto",
-     torch_dtype=torch.float16
- )
-
- # Load the PEFT model
- model = PeftModel.from_pretrained(base_model, "CognitivessAI/cognitivess")
-
- # Move the model to the appropriate device
- model = model.to(device)
-
- # Set the model to evaluation mode
- model.eval()
-
- # Function for text generation using the chat template
- def generate_text(model, tokenizer, input_text, max_length=8192, temperature=0.7, top_p=0.95):
-     messages = [
-         {"role": "user", "content": input_text}
-     ]
-     chat_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-     inputs = tokenizer(chat_input, return_tensors='pt', padding=True, truncation=True, max_length=8192)
-     input_ids = inputs['input_ids'].to(device)
-     attention_mask = inputs['attention_mask'].to(device)
-     try:
-         generated_text_ids = model.generate(
-             input_ids,
-             attention_mask=attention_mask,
-             max_length=max_length,
-             temperature=temperature,
-             top_p=top_p,
-             do_sample=True,
-             eos_token_id=tokenizer.eos_token_id
-         )
-         generated_text = tokenizer.decode(generated_text_ids[0], skip_special_tokens=True)
-         # Extract the assistant's response
-         response = generated_text.split("GPT4 Correct Assistant")[-1].strip()
-         return response
-     except Exception as e:
-         print(f"Error in text generation: {e}")
-         return "I'm sorry, I encountered an error while generating a response."
-
- # Test the model
- test_prompt = "Who are you?"
- generated_response = generate_text(model, tokenizer, test_prompt, max_length=100)
- print(f"Generated response:\n{generated_response}")
-
- print("Testing completed.")
 
  ```
-
- **Contact:**
- <a href="mailto:hello@cognitivess.com">hello@cognitivess.com</a>
 
+ # Cognitivess Model
 
+ ## Usage
 
  To use this model, first install the custom package:
 
  ```bash
  pip install git+https://huggingface.co/CognitivessAI/cognitivess
  ```
 
+ Then, you can use the model like this:
 
+ ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
 
+ tokenizer = AutoTokenizer.from_pretrained('CognitivessAI/cognitivess')
+ model = AutoModelForCausalLM.from_pretrained('CognitivessAI/cognitivess')
  ```