Update README.md
- vi
- id
---

# FINGU-AI/Qwen2.5-32B-Lora-HQ-e-1

## Overview

`FINGU-AI/Qwen2.5-32B-Lora-HQ-e-1` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. It is particularly useful for translating between Korean and Uzbek, and it supports other custom NLP tasks through flexible prompting.

## Model Details

- **Model ID**: `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-1`
- **Architecture**: Causal Language Model (LM)
- **Parameters**: 32 billion
- **Precision**: Torch BF16 for efficient GPU memory usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Model and Tokenizer
model_id = 'FINGU-AI/Qwen2.5-32B-Lora-HQ-e-1'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
```
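The loading snippet stops before inference. Below is a minimal sketch of how a translation request could be phrased as chat messages and run through the model; the `build_messages` helper and its system-prompt wording are illustrative assumptions, not part of the official model card.

```python
# Hedged sketch: builds a chat-style translation prompt for the model loaded above.
# The roles/system-prompt structure below is an assumption (standard Qwen-style chat),
# not documented behavior of this specific checkpoint.

def build_messages(source_text, target_lang="Uzbek"):
    """Return a chat message list for a translation request (structure assumed)."""
    return [
        {"role": "system",
         "content": f"You are a translation assistant. Translate the user's text into {target_lang}."},
        {"role": "user", "content": source_text},
    ]

messages = build_messages("안녕하세요, 만나서 반갑습니다.")

# With `model` and `tokenizer` from the snippet above, generation would look like
# this (requires a GPU with enough memory for a 32B model in BF16, roughly 64 GB):
# inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
#                                        return_tensors="pt").to('cuda')
# outputs = model.generate(inputs, max_new_tokens=128)
# print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The generation call itself is left commented out so the prompt-building step can be inspected without downloading the 32B weights.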