---
tags:
- text-generation-inference
- text-generation
- sentiment-analysis
- qlora
- peft
license: apache-2.0
library_name: transformers
widget:
- messages:
  - role: user
    content: What is your name?
language:
- en
- ro
pipeline_tag: text-generation
model-index:
- name: CognitivessAI/cognitivess
  results:
  - task:
      type: text-generation
      name: Text Generation
    metrics:
    - name: Evaluation Status
      type: accuracy
      value: Pending
      description: Comprehensive evaluations are planned and will be conducted in the future.
model_type: CognitivessForCausalLM
quantization_config:
  load_in_8bit: true
  llm_int8_threshold: 6.0
fine_tuning:
  method: qlora
  peft_type: LORA
inference:
  parameters:
    max_new_tokens: 8192
    temperature: 0.7
    top_p: 0.95
    do_sample: true
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65ec00afa735404e87e1359e/u5qyAgn_2-Bh46nzOFlcI.png">
<h2>Accessible and portable generative AI solutions for developers and businesses.</h2>
</div>

<p align="center" style="margin-top: 0px;">
  <a href="https://cognitivess.com">
    <span class="link-text" style="margin-right: 5px;">Website</span>
  </a> |
  <a href="https://bella.cognitivess.com">
    <span class="link-text" style="margin-right: 5px;">Demo</span>
  </a> |
  <a href="https://github.com/Cognitivess/cognitivess">
    <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
    <span class="link-text" style="margin-right: 5px;">GitHub</span>
  </a>
</p>

# Cognitivess

Cognitivess is an advanced language model developed by Cognitivess AI, based in Bucharest, Romania. The model is trained from scratch on a diverse, curated dataset spanning a wide range of knowledge domains and linguistic styles. Using Quantized Low-Rank Adaptation (QLoRA), Cognitivess delivers high-quality text generation while remaining efficient to run.

Key features:
- Built on a custom-designed architecture inspired by LLaMA, optimized for versatility and performance
- Trained on a rich mix of data sources, including scientific literature, creative writing, multilingual corpora, and real-world conversational data
- Supports few-shot learning, adapting to new tasks from a handful of examples (see the sketch after this list)
- Generates text in multiple languages, with particular strength in English and Romanian
- Specialized in tasks such as text generation, sentiment analysis, and complex problem-solving across various domains
- Incorporates ethical AI principles, with built-in safeguards against generating harmful or biased content
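
As an illustration of the few-shot point above, here is a minimal sketch of prompting the model for sentiment analysis. The prompt text and labels are hypothetical examples, not an official recipe; loading follows the Usage section below:

```python
import cognitivess_model  # custom package from the Usage section below
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "CognitivessAI/cognitivess"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()

# Hypothetical few-shot prompt: two labeled examples, then a new review
few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The battery lasts all day and the screen is gorgeous.\n"
    "Sentiment: Positive\n\n"
    "Review: It broke after two days and support never replied.\n"
    "Sentiment: Negative\n\n"
    "Review: Exactly what I hoped for, fast and reliable.\n"
    "Sentiment:"
)

inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=3)

# Print only the model's continuation (the predicted label)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```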

Cognitivess aims to be more than an AI assistant: it is designed as a knowledgeable companion capable of substantive discussion on topics ranging from cutting-edge technology to classical literature. Whether you need help with data analysis, creative storytelling, or exploring abstract concepts, Cognitivess is equipped to provide nuanced, contextually appropriate responses.

This model represents Cognitivess AI's commitment to pushing the boundaries of natural language processing. By combining broad knowledge with advanced reasoning capabilities, Cognitivess strives to bridge the gap between artificial and human intelligence, opening new possibilities for AI applications across industries and academic fields.

***Under the Cognitivess Open Model License, Cognitivess AI confirms:***
- Models are commercially usable.
- You are free to create and distribute Derivative Models.
- Cognitivess does not claim ownership of any outputs generated using the Models or Derivative Models.

### Intended use

Cognitivess is a multilingual chat model that supports a variety of languages, including English, Romanian, Spanish, French, and German, and is intended for a wide range of language applications.

**Model Developer:** Cognitivess AI

**Model Dates:** Cognitivess was trained in July 2024.

**Data Freshness:** The pretraining data has a cutoff of June 2024. Training will continue beyond the current cutoff to incorporate new data as it becomes available.

### Model Architecture

The Cognitivess architecture is Transformer-based and was trained with a sequence length of 8192 tokens.

**Architecture Type:** Transformer (auto-regressive language model)
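
As a quick sanity check of the numbers above, the configuration can be inspected without downloading the weights. A minimal sketch, assuming the config exposes the `max_position_embeddings` field common to LLaMA-style models (with the custom package from the Usage section below installed):

```python
import cognitivess_model  # registers the custom architecture
from transformers import AutoConfig

# Load only the configuration, not the weights
config = AutoConfig.from_pretrained("CognitivessAI/cognitivess")

print(config.model_type)               # architecture family
print(config.max_position_embeddings)  # expected: 8192 (training sequence length)
```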

Try this model on [bella.cognitivess.com](https://bella.cognitivess.com/) now.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ec00afa735404e87e1359e/CQeAV4lwbQp1G8H5n4uWx.png)

## Usage

To use this model, first install the custom package:

```bash
# Install required packages
pip install git+https://huggingface.co/CognitivessAI/cognitivess
```

Then, you can use the model like this:

```python
import cognitivess_model  # registers the custom Cognitivess architecture with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Define the model path
model_path = "CognitivessAI/cognitivess"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Load the model; device_map="auto" places it on the available device(s),
# so no manual .to(device) call is needed afterwards
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="auto"
).eval()

# Prepare input
messages = [
    {"role": "user", "content": "Who are you?"}
]

# Tokenize the conversation with the chat template
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate output
output_ids = model.generate(input_ids, max_new_tokens=50)

# Decode only the newly generated tokens
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```

## Usage with LoRA + quantized versions through bitsandbytes

To use this model with 8-bit quantization and a LoRA adapter, first install the required packages:

```bash
# Install required packages
pip install git+https://huggingface.co/CognitivessAI/cognitivess
pip install bitsandbytes
pip install peft
```

Then, you can use the model like this:

```python
import cognitivess_model  # ensure the custom Cognitivess architecture is registered

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model
import torch

model_id = "CognitivessAI/cognitivess"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Define the 8-bit quantization configuration
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0
)

# Load the model with 8-bit quantization; fp16 is the usual compute dtype
# for int8 inference
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    quantization_config=quantization_config
)

# Define the LoRA configuration
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"]
)

# Attach a LoRA adapter to the quantized model. A freshly initialized
# adapter starts as an identity transform; load trained adapter weights
# with PeftModel.from_pretrained to change the model's behavior.
model = get_peft_model(model, lora_config)

# Prepare the messages
messages = [
    {"role": "user", "content": "Explain how large language models work in detail."},
]

# Tokenize the input with the chat template
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Define the inference parameters
inference_params = {
    "max_new_tokens": 8192,
    "temperature": 0.7,
    "top_p": 0.95,
    "do_sample": True
}

# Generate the response
outputs = model.generate(input_ids, **inference_params)

# Decode and print only the newly generated tokens
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
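
Since the metadata above lists QLoRA as the fine-tuning method, here is a minimal sketch of what further QLoRA fine-tuning on top of the 8-bit model could look like. The toy training text, learning rate, and `task_type` are illustrative assumptions, not the official training recipe:

```python
# A minimal QLoRA fine-tuning sketch (illustrative; not the official recipe)
import cognitivess_model  # ensure the custom architecture is registered
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
import torch

model_id = "CognitivessAI/cognitivess"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0),
    device_map="auto",
)

# Freeze the quantized base weights and cast norms/embeddings for stable training
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# One illustrative optimization step on a toy example; only the LoRA
# adapter parameters receive gradients
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-4
)
batch = tokenizer("Cognitivess is a generative AI model.", return_tensors="pt").to(model.device)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```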

**Contact:**
<a href="mailto:hello@cognitivess.com">hello@cognitivess.com</a>