---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- RL
- trl
- Math
- Code
---

# Magellanic-Qwen-25B-R999
Magellanic-Qwen-25B-R999 is a general-purpose large language model (LLM) based on the Qwen 2.5 architecture. It is designed for coding, mathematical problem-solving, and general conversational tasks. This model excels in logical reasoning, structured problem-solving, and multi-step calculations, making it suitable for both technical and everyday use. Fine-tuned with diverse datasets, it delivers high-accuracy outputs with a focus on clarity, coherence, and precision.
## Key Features
- Advanced Coding Assistance: Supports multiple programming languages, offering debugging, optimization, and algorithmic guidance.
- Mathematical Reasoning: Excels in algebra, calculus, number theory, and logical deduction, providing structured and step-by-step solutions.
- General Chat and Knowledge Retrieval: Handles open-ended questions, discussions, and knowledge-based queries across various domains.
- Long-Context Processing: Supports up to 128K tokens of input context and generates up to 8K tokens of output, ideal for in-depth problem-solving (see the sketch after this list).
- Multilingual Support: Works with over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Arabic, and more.
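The context window claimed above can be sanity-checked from the published configuration before downloading the full weights. A minimal sketch, assuming a standard Qwen2-style config that exposes `max_position_embeddings`:

```python
from transformers import AutoConfig, AutoTokenizer

model_name = "prithivMLmods/Magellanic-Qwen-25B-R999"

config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Maximum sequence length the position embeddings are configured for
# (assumption: Qwen2-style field name).
print(config.max_position_embeddings)
# Tokenizer-side cap, which may be set looser or tighter than the model config.
print(tokenizer.model_max_length)
```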
## Quickstart with Transformers
Below is a code snippet demonstrating how to load the tokenizer and model and run a chat-style generation using `apply_chat_template`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Magellanic-Qwen-25B-R999"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check if a number is prime."
messages = [
    {"role": "system", "content": "You are a coding assistant specialized in algorithm development and problem-solving."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
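The decoded `response` holds only the newly generated tokens. A short follow-up sketch for long-form answers (the 8K figure mirrors the output limit stated above; the practical ceiling depends on available GPU memory):

```python
print(response)

# Raise the generation budget toward the advertised 8K-token output limit for long answers.
# Larger budgets increase latency and KV-cache memory use.
long_ids = model.generate(**model_inputs, max_new_tokens=8192)
long_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, long_ids)]
long_response = tokenizer.batch_decode(long_ids, skip_special_tokens=True)[0]
print(long_response)
```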
## Intended Use
Programming and Development:
- Writing, debugging, and optimizing code.
- Generating explanations for algorithms and data structures.
- Assisting with code documentation and best practices.
Mathematical and Logical Reasoning:
- Solving mathematical equations and complex computations.
- Providing theorem proofs and step-by-step derivations (a prompt sketch follows this list).
- Supporting scientific and engineering applications.
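A minimal prompt sketch for this use case, reusing the `model` and `tokenizer` loaded in the Quickstart above. The system message and problem statement are illustrative assumptions, not part of the model's training setup:

```python
messages = [
    {"role": "system", "content": "You are a math tutor. Solve problems step by step and state the final answer clearly."},
    {"role": "user", "content": "Solve for x: 3x^2 - 12x + 9 = 0. Show each step."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens so only the model's answer is decoded.
answer_ids = out[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```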
General-Purpose Chat:
- Engaging in meaningful discussions across a wide range of topics.
- Providing accurate and informative responses in multilingual settings.
- Assisting in creative writing, content generation, and structured discussions.
Research and Knowledge Discovery:
- Summarizing papers, research articles, and technical documents.
- Assisting with STEM education and research applications.
- Providing logical explanations for scientific concepts.
## Limitations

Hardware Requirements:
- Requires high-memory GPUs or TPUs for efficient inference due to its 25B parameters and long-context capabilities (a quantized-loading sketch follows this list).

Possible Biases:
- While trained on diverse datasets, some responses may inherit biases from the underlying training data.

Abstract Problem-Solving Limitations:
- May struggle with highly abstract problems that require intuition beyond formal computational reasoning.

Error Accumulation in Long Outputs:
- In multi-step reasoning or long-form content, minor errors in earlier steps can propagate through the response.

Prompt Sensitivity:
- Response quality depends on how well the input is structured and framed.
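For the hardware constraint above, one common workaround is quantized loading. This is a hedged sketch rather than an official recipe for this checkpoint: it assumes the optional `bitsandbytes` package is installed and that the weights load cleanly in 4-bit NF4.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Magellanic-Qwen-25B-R999"

# 4-bit NF4 quantization roughly quarters weight memory versus fp16,
# at some cost in output quality; requires bitsandbytes (assumption).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```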