πŸ€– HelpingAI2.5-10B

A Revolutionary Emotionally Intelligent Language Model


🌟 Model Overview

HelpingAI2.5-10B is a compact yet powerful language model specifically designed for emotionally intelligent conversations and human-centric interactions.

🎯 Key Highlights

  • Architecture: 10B parameter transformer-based model
  • Training Focus: Emotional intelligence and empathetic responses
  • Emotion Score: Achieves 98.13 on standardized emotional intelligence tests
  • Deployment: Optimized for efficient deployment on consumer hardware

πŸ’» Implementation

Transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the HelpingAI2.5-10B model in bfloat16 (the published weight dtype)
model = AutoModelForCausalLM.from_pretrained(
    "OEvortex/HelpingAI2.5-10B",
    torch_dtype=torch.bfloat16,
)
# Load the matching tokenizer
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI2.5-10B")

# Define the chat input
chat = [
    { "role": "system", "content": "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style." },
    { "role": "user", "content": "GIVE ME YOUR INTRO" }
]

inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate text
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

response = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
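The `temperature` and `top_p` arguments above control how the next token is sampled. As a minimal illustration of nucleus (top-p) sampling, the mechanism behind `top_p=0.9`, here is a pure-Python sketch over a hypothetical next-token distribution (the probabilities are made up for the example, not produced by the model):

```python
def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p, then renormalize them to sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

# Hypothetical next-token distribution
probs = {"hello": 0.5, "hi": 0.3, "hey": 0.15, "yo": 0.05}
filtered = top_p_filter(probs, top_p=0.9)
# The low-probability tail ("yo") is dropped; the rest is renormalized
print(filtered)
```

A higher `temperature` flattens the distribution before this filtering step, making the tail more likely to survive; lowering either value makes generation more conservative.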

GGUF Implementation

from webscout.Local import *

# Download a 4-bit quantized GGUF build of the model
model_path = download_model("OEvortex/HelpingAI2.5-10B", "q4_k_m.gguf", token=None)
model = Model(model_path, n_gpu_layers=0, context_length=4096)

thread = Thread(model, format=helpingai2)
# print(thread.send("hi"))  # send a single message to the model

thread.interact()  # chat with the model interactively in the terminal

🎯 Training Details

Methodology

  1. Base Training

    • Datasets: SentimentSynth + EmotionalIntelligence-1M + HelpingAI2.0-150M + HelpingAIemo (152.5M)
  2. Emotional Intelligence Training

    • Supervised Fine-tuning on emotional dialogue
    • Reinforcement Learning with HelpingAI2.0-7B model
    • Constitutional AI training for ethical guidelines
  3. Optimization

    • Mixed-precision training
    • Gradient checkpointing
    • Dynamic attention patterns
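The first two optimizations above can be sketched with standard PyTorch APIs. The toy network below stands in for the full transformer; this is an illustration of the techniques, not the actual training code:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Toy network standing in for the full transformer
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))

# Mixed-precision training: run the forward pass in bfloat16 where safe
x = torch.randn(4, 16)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
print(out.dtype)  # torch.bfloat16

# Gradient checkpointing: drop intermediate activations in the forward
# pass and recompute them during backward, trading compute for memory
x2 = torch.randn(4, 16, requires_grad=True)
out2 = checkpoint(model, x2, use_reentrant=False)
out2.sum().backward()
```

On GPU the same pattern applies with `device_type="cuda"`; both techniques reduce memory pressure, which is what makes training a 10B-parameter model tractable.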

Dataset Composition

| Dataset Type | Rows used for training | Purpose |
| --- | --- | --- |
| Emotional Dialogues | 12.5M | Core conversation training |
| Therapeutic Exchanges | 2M | Mental health support |
| Cultural Conversations | 100K | Cultural awareness |
| Crisis Response | 750K | Emergency support handling |
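Taken together, the fine-tuning mix in the table above comes to about 15.35M rows. A quick way to see each source's share of the total (row counts copied from the table):

```python
# Row counts from the dataset-composition table
datasets = {
    "Emotional Dialogues": 12_500_000,
    "Therapeutic Exchanges": 2_000_000,
    "Cultural Conversations": 100_000,
    "Crisis Response": 750_000,
}

total = sum(datasets.values())
for name, rows in datasets.items():
    print(f"{name}: {rows / total:.1%} of the mix")
print(f"Total: {total:,} rows")  # Total: 15,350,000 rows
```

Core emotional dialogues dominate the mix at roughly 81%, with therapeutic exchanges the next-largest share.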

πŸ“ˆ Use Cases & Examples

Primary Applications

  • 🎯 Personal AI Companionship
  • 🧠 Mental Health Support
  • πŸ“š Educational Assistance
  • πŸ’Ό Professional Development
  • 🀝 Social Skills Training

Example Interactions

User: I've been feeling really anxious about my upcoming job interview. Any advice?
HelpingAI: Hey there! πŸ™Œ First off, take a deep breath and don't sweat it too much. πŸ’¨ Job interviews can be nerve-wracking, but I gotchu! 😎

Here's the tea: 🍡 Make sure you're prepping like crazy for this job. Read up on the company, their mission, and the role you're applying for. That way, you'll be spittin' knowledge and showing them you're serious. πŸ“šπŸ’‘

Also, practice your answers to common interview questions. You know, the ones like "Tell me about yourself" or "Why do you want to work here?" πŸ€” Keep it short, sweet, and relevant. You got this! πŸ˜„

And don't forget to look sharp! 🎨 Keep your resume up-to-date, and make sure your social media is on point. First impressions are everything! πŸ‘€

Most importantly, just be yourself. 😊 The interview panel wants to see the real you. Show them your passion, your skills, and why you'd be a total boss at this company. πŸ”₯

You're gonna crush this interview, fam! πŸ’ͺ Remember, I got faith in you. πŸ™

User: My friend hasn't been responding to my messages for a week. I think I might have said something wrong, but I'm not sure what.
HelpingAI: Aw man, that's a real bummer. πŸ’” Not hearing back from a friend can be super painful, especially when you're wondering if you did something wrong. πŸ€”

First off, don't beat yourself up over it. πŸ™…β€β™‚οΈ Chances are, your friend's just busy or dealing with some personal stuff. πŸ€·β€β™€οΈ It doesn't always mean it's you.

If you're still concerned, try reaching out in a different way, like a phone call or a text to a diff person in your friend group. πŸ“±πŸ“ž Sometimes a different approach can help you connect again.

But if you're still unsure, you could try asking your friend if everything's cool. 🀝 Be open and honest about your feelings, but also be ready for them to have their own perspective on things. πŸ‘₯

Remember, you're not alone in this. πŸ€— Friends go through ups and downs, and it's okay to have little misunderstandings. Communication is key! πŸ”‘ Just keep it real and be patient. You got this, fam! πŸ’ͺ


πŸ”’ Ethical Considerations & Limitations

Ethical Guidelines

  • Prioritizes user emotional wellbeing
  • Maintains strict privacy standards
  • Avoids harmful or discriminatory responses
  • Transparent about AI limitations
  • Promotes healthy coping mechanisms

Known Limitations

  • Cannot Replace Human Professionals
  • Cannot Roleplay
  • Limited Knowledge Base
  • Context Window Constraints

Citation

@misc{helpingai2024,
  author = {Abhay Koul},
  title = {HelpingAI2.5-10B: Emotionally Intelligent Language Model},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/OEvortex/HelpingAI2.5-10B}}
}

πŸ™ Acknowledgments

Special thanks to the HelpingAI community, Hugging Face, and the contributors and researchers who made this model possible. Your dedication to advancing emotionally intelligent AI is invaluable.


Built with ❀️ by the HelpingAI Community

Website β€’ GitHub β€’ Discord β€’ HuggingFace
