Spestly committed on
Commit 32e62d6
1 Parent(s): ccaf065

Update README.md

Files changed (1)
  1. README.md +77 -7
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- base_model: unsloth/qwen2.5-1.5b-instruct-bnb-4bit
  tags:
  - text-generation-inference
  - transformers
@@ -11,12 +11,82 @@ language:
  - en
  ---

- # Uploaded model

- - **Developed by:** Spestly
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/qwen2.5-1.5b-instruct-bnb-4bit

- This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

  ---
+ base_model: Qwen/Qwen2.5-1.5B-Instruct
  tags:
  - text-generation-inference
  - transformers

  - en
  ---

+ ![Header](https://raw.githubusercontent.com/Aayan-Mishra/Images/refs/heads/main/Athena.png)

+ # Athena-1 1.5B

+ Athena-1 1.5B is a fine-tuned, instruction-following large language model derived from [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). Designed for efficient, high-quality text generation, it keeps a compact footprint, making it well suited to real-world use cases where performance and resource constraints matter, such as lightweight deployments, conversational AI, and structured data tasks.

+ ---
+
+ ## Key Features
+
+ ### ⚡ Lightweight and Efficient
+
+ * **Compact Size:** At just **1.5 billion parameters**, Athena-1 1.5B offers excellent performance with reduced computational requirements; a memory-conscious loading sketch follows this section.
+ * **Instruction Following:** Fine-tuned for precise and reliable adherence to user prompts.
+ * **Coding and Mathematics:** Proficient in solving coding challenges and handling mathematical tasks.
+
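+ A quick illustration of the "lightweight" point, offered as a hedged sketch rather than an official recipe: loading the weights in half precision keeps the ~1.5B-parameter model within a few gigabytes of memory. The repo id is taken from the Quickstart below, and `device_map="auto"` additionally assumes the `accelerate` package is installed.
+
+ ```python
+ # Sketch: memory-conscious load of the 1.5B model in half precision.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Spestly/Athena-1-1.5B"  # repo id from the Quickstart section
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # use float16 on GPUs without bfloat16 support
+     device_map="auto",           # requires `accelerate`; drop this argument to load on CPU
+ )
+ ```
+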
+ ### 📖 Long-Context Understanding
+
+ * **Context Length:** Supports up to **32,768 tokens**, enabling the processing of moderately lengthy documents or conversations.
+ * **Token Generation:** Can generate up to **8K tokens** of output (see the generation sketch after this list).
+
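+ The snippet below is illustrative rather than an official recommendation: it reuses the pipeline and repo id from the Quickstart and simply shows where the output budget is controlled. `max_new_tokens` bounds the generated continuation, and the prompt plus the output together must fit inside the 32,768-token window.
+
+ ```python
+ # Sketch: cap a single generation well inside the 8K-token output budget.
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="Spestly/Athena-1-1.5B")
+
+ messages = [
+     {"role": "user", "content": "Explain the trade-offs between small and large language models in detail."},
+ ]
+ result = pipe(messages, max_new_tokens=2048)  # output length chosen for illustration
+ print(result[0]["generated_text"][-1]["content"])
+ ```
+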
+ ### 🌍 Multilingual Support
+
+ * Supports **29+ languages**, including:
+   * English, Chinese, French, Spanish, Portuguese, German, Italian, Russian
+   * Japanese, Korean, Vietnamese, Thai, Arabic, and more.
+
+ ### 📊 Structured Data & Outputs
+
+ * **Structured Data Interpretation:** Processes structured formats such as tables and JSON.
+ * **Structured Output Generation:** Generates well-formatted outputs, including JSON and other structured formats (a small JSON-prompting sketch follows this list).
+
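+ One way to exercise the structured-output claim in practice is to ask for JSON explicitly and parse the reply. The snippet below is a hedged sketch: the prompt, schema, and repo id are illustrative assumptions, and the model's output is not guaranteed to be valid JSON.
+
+ ```python
+ # Sketch: request JSON output and parse it defensively.
+ import json
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="Spestly/Athena-1-1.5B")
+
+ messages = [
+     {"role": "user", "content": "Return only JSON with keys 'name' and 'capital' for three European countries."},
+ ]
+ reply = pipe(messages, max_new_tokens=256)[0]["generated_text"][-1]["content"]
+
+ try:
+     print(json.loads(reply))           # parsed structured output
+ except json.JSONDecodeError:
+     print("Not valid JSON:\n", reply)  # fall back to the raw text
+ ```
+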
+ ---
+
+ ## Model Details
+
+ * **Base Model:** [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
+ * **Architecture:** Transformer with RoPE, SwiGLU, RMSNorm, attention QKV bias, and tied word embeddings.
+ * **Parameters:** ~1.5 billion total.
+ * **Layers and Attention Heads:** Unchanged from the Qwen2.5-1.5B base architecture; the config check below shows how to read the exact values from the published checkpoint.
+ * **Context Length:** Up to **32,768 tokens**.
+
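+ Since those architectural details are inherited from the base model, they can be read directly from the checkpoint's config. A short check, assuming the repo id used in the Quickstart:
+
+ ```python
+ # Sketch: inspect the architecture reported by the checkpoint itself.
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("Spestly/Athena-1-1.5B")
+ print("hidden layers:  ", config.num_hidden_layers)
+ print("attention heads:", config.num_attention_heads)
+ print("KV heads (GQA): ", config.num_key_value_heads)
+ print("max positions:  ", config.max_position_embeddings)
+ ```
+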
+ ---
+
+ ## Applications
+
+ Athena-1 1.5B is designed for a variety of real-world applications:
+
+ * **Conversational AI:** Build fast, responsive, and lightweight chatbots.
+ * **Code Generation:** Generate, debug, or explain code snippets.
+ * **Mathematical Problem Solving:** Assist with calculations and reasoning.
+ * **Document Processing:** Summarize and analyze moderately large documents.
+ * **Multilingual Applications:** Support global use cases with diverse language requirements.
+ * **Structured Data:** Process and generate structured data, such as tables and JSON.
+
+ ---
+
+ ## Quickstart
+
+ Here’s how you can use Athena-1 1.5B for quick text generation:
+
+ ```python
+ # Use a pipeline as a high-level helper
+ from transformers import pipeline
+
+ messages = [
+     {"role": "user", "content": "Who are you?"},
+ ]
+ pipe = pipeline("text-generation", model="Spestly/Athena-1-1.5B")
+ print(pipe(messages))
+
+ # Load model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-1.5B")
+ model = AutoModelForCausalLM.from_pretrained("Spestly/Athena-1-1.5B")
+ ```
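+
+ For finer control than the pipeline provides, the directly loaded `model` and `tokenizer` from the snippet above can be driven through the chat template. The continuation below is a sketch; the prompt and `max_new_tokens` value are illustrative, not settings recommended in the original card.
+
+ ```python
+ # Sketch: continue from the snippet above and generate via the chat template.
+ prompt = tokenizer.apply_chat_template(
+     [{"role": "user", "content": "Who are you?"}],
+     tokenize=False,
+     add_generation_prompt=True,
+ )
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ # Decode only the newly generated tokens, skipping the prompt.
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```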