---
base_model: unsloth/qwen2.5-14b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
![Header](https://raw.githubusercontent.com/Aayan-Mishra/Images/refs/heads/main/Athena.png)
# Athena 1
Athena 1 is a state-of-the-art language model fine-tuned from [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct). Designed to excel at instruction-following tasks, Athena 1 delivers advanced capabilities in text generation, coding, mathematics, and long-context understanding. It is optimized for a wide variety of use cases, including conversational AI, structured data interpretation, and multilingual applications. It outperforms Ava 1.5 in many respects, making Athena 1 the stronger of the two models.
---
## Key Features
### πŸš€ Enhanced Capabilities
- **Instruction Following**: Athena 1 has been fine-tuned for superior adherence to user prompts, making it ideal for chatbots, virtual assistants, and guided workflows.
- **Coding and Mathematics**: Specialized fine-tuning enhances coding problem-solving and mathematical reasoning.
- **Long-Context Understanding**: Handles input contexts up to 128K tokens and generates up to 8K tokens.
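
For example, a quick way to check that a long input actually fits in the context window before generation. This is a minimal sketch; `report.txt` is a hypothetical input file:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-14B")

# Count the tokens in a long document before sending it to the model.
with open("report.txt") as f:
    n_tokens = len(tokenizer(f.read())["input_ids"])

print(f"{n_tokens} tokens; fits the 131,072-token window: {n_tokens <= 131072}")
```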
### 🌐 Multilingual Support
Supports 29+ languages, including:
- English, Chinese, French, Spanish, Portuguese, German, Italian, Russian
- Japanese, Korean, Vietnamese, Thai, Arabic, and more.
### πŸ“Š Structured Data & Outputs
- **Structured Data Interpretation**: Understands and processes structured formats like tables and JSON.
- **Structured Output Generation**: Generates well-formatted outputs, including JSON, XML, and other structured formats.
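
As an illustrative sketch of prompting for JSON output (the prompt wording is an assumption, and the reply indexing assumes a recent `transformers` chat pipeline that returns the full message list):
```python
import json
from transformers import pipeline

pipe = pipeline("text-generation", model="Spestly/Athena-1-14B")

messages = [{
    "role": "user",
    "content": "Return a JSON object with keys 'name' and 'capital' for France. "
               "Respond with JSON only, no extra text.",
}]
reply = pipe(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"]

# Validate that the reply parses as JSON before using it downstream.
print(json.loads(reply))
```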
---
## Model Details
- **Base Model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Architecture**: Transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias.
- **Parameters**: 14.7B total (13.1B non-embedding).
- **Layers**: 48
- **Attention Heads**: 40 for Q and 8 for KV (grouped-query attention).
- **Context Length**: Up to **131,072 tokens**.
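
These figures can be verified against the checkpoint's configuration; a minimal sketch (attribute names follow the standard Qwen2 config in `transformers`):
```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Spestly/Athena-1-14B")
print(cfg.num_hidden_layers)        # layers: 48
print(cfg.num_attention_heads)      # query heads: 40
print(cfg.num_key_value_heads)      # key/value heads: 8
print(cfg.max_position_embeddings)  # context length: 131072
```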
---
## Applications
Athena 1 is designed for a wide range of use cases:
- Conversational AI and chatbots.
- Code generation, debugging, and explanation.
- Mathematical problem-solving.
- Large-document summarization and analysis.
- Multilingual text generation and translation.
- Structured data processing (e.g., tables, JSON).
---
## Quickstart
Below is an example of how to use Athena 1 for text generation:
```python
# If the repository requires authentication, log in first from your shell:
#   huggingface-cli login

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Spestly/Athena-1-14B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
print(pipe(messages))

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-14B")
model = AutoModelForCausalLM.from_pretrained("Spestly/Athena-1-14B")
```
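
For finer control over generation, here is a minimal sketch that applies the chat template directly. The prompt and `max_new_tokens` value are illustrative, and `device_map="auto"` assumes `accelerate` is installed:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-14B")
model = AutoModelForCausalLM.from_pretrained(
    "Spestly/Athena-1-14B", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about mountains."}]
# Format the conversation with the model's chat template and tokenize it.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```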
## Performance
Athena 1 has been optimized for efficiency and performance on modern GPUs. For detailed evaluation metrics (e.g., throughput, accuracy, and memory requirements), refer to the [Qwen2.5 performance benchmarks](https://qwenlm.github.io/blog/qwen2.5/).
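
To get a rough, hardware-dependent feel for throughput and memory use, a micro-benchmark sketch (not an official benchmark; numbers vary widely by GPU):
```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-14B")
model = AutoModelForCausalLM.from_pretrained(
    "Spestly/Athena-1-14B", torch_dtype=torch.bfloat16, device_map="auto"
)
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.1f} GB")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=128)
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
print(f"Throughput: {new_tokens / elapsed:.1f} tokens/s")
```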
---
## Requirements
To use Athena 1, ensure the following:
- Python >= 3.8
- Transformers >= 4.37.0 (to support Qwen models)
- PyTorch >= 2.0
- GPU with BF16 support for optimal performance
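
A quick environment sanity check, as a minimal sketch (`packaging` is already a dependency of `transformers`):
```python
from packaging import version
import torch
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.37.0")
assert version.parse(torch.__version__) >= version.parse("2.0")

# The BF16 check only applies when a CUDA GPU is available.
if torch.cuda.is_available():
    print("BF16 supported:", torch.cuda.is_bf16_supported())
```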
## Citation
If you use Athena 1 in your research or projects, please cite its base model, Qwen2.5, as follows:
```bibtex
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}
```