---
license: apache-2.0
library_name: transformers
datasets:
- Locutusque/hercules-v5.0
---
# Model Card: Hercules-5.0-Qwen2-7B

## Model Description
Locutusque/Hercules-5.0-Qwen2-7B is a fine-tuned language model derived from Qwen2-7B. It is specifically designed to excel at instruction following, function calling, and conversational interactions across various scientific and technical domains. Fine-tuning on hercules-v5.0 has given the model enhanced abilities in:
- Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
- Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
- Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
This model was fine-tuned using my [TPU-Alignment](https://github.com/Locutusque/TPU-Alignment) repository.

Join my Discord server: https://discord.com/invite/vrGheTUFrm
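
As a quick start, here is a minimal inference sketch using standard transformers APIs (the prompt and generation settings are illustrative, not recommended values from the author):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Hercules-5.0-Qwen2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # the card notes training used bfloat16
    device_map="auto",
)

# ChatML-style conversation, the format this model was trained on
messages = [
    {"role": "system", "content": "You are a helpful scientific assistant."},
    {"role": "user", "content": "Explain the difference between mitosis and meiosis."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```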
## Intended Uses & Potential Bias
Locutusque/Hercules-5.0-Qwen2-7B is well suited to the following applications:
- Specialized Chatbots: Creating knowledgeable chatbots and conversational agents in scientific and technical fields.
- Instructional Assistants: Supporting users with educational and step-by-step guidance in various disciplines.
- Code Generation and Execution: Facilitating code execution through function calls, aiding in software development and prototyping.
## Limitations and Risks
- Toxicity: The training dataset contains toxic or harmful examples, so the model may reproduce toxic or harmful content.
- Hallucinations and Factual Errors: Like other language models, Locutusque/Hercules-5.0-Qwen2-7B may generate incorrect or misleading information, especially in specialized domains where it lacks sufficient expertise.
- Potential for Misuse: The ability to engage in technical conversations and execute function calls could be misused for malicious purposes.
## Training Procedure
- This model was trained on 8 Kaggle TPUs, using PyTorch/XLA SPMD for high MXU efficiency. There was no expense on my end (meaning you can reproduce this too!).
- A learning rate of 5e-6 was used with the Adam optimizer, together with a linear scheduler with an end factor of 0.1 (see the sketch after this list).
- No mixed precision was used; the default dtype was bfloat16.
- A total batch size of 64 was used.
- Trained on all examples of Hercules-v5.0 for 1 epoch.
- No model parameters were frozen and no quantization was used.
- This model was trained on OpenAI's ChatML prompt format. Because this model has function-calling capabilities, the prompt format is slightly different; here's what it looks like:
<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}</s>
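
For illustration, here is one way the turns above might be assembled into a single prompt string before tokenization; the function name and payloads below are hypothetical, not part of the dataset:

```python
def build_prompt(system, user, function_call, function_response):
    """Assemble one function-calling turn in the ChatML variant described above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>call\n{function_call}<|im_end|>\n"
        f"<|im_start|>function\n{function_response}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical example values; the call/response payloads are illustrative only.
prompt = build_prompt(
    system="You can call functions to fetch data.",
    user="What is the weather in Berlin?",
    function_call='{"name": "get_weather", "arguments": {"city": "Berlin"}}',
    function_response='{"temperature_c": 18, "conditions": "partly cloudy"}',
)
```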
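
The optimizer and scheduler settings listed earlier map directly onto standard PyTorch APIs. The sketch below uses placeholder values for the model and step count; the actual TPU-Alignment training loop may differ:

```python
import torch

model = torch.nn.Linear(8, 8)   # placeholder for the actual 7B model
num_training_steps = 1_000      # placeholder: optimizer steps in the 1-epoch run

optimizer = torch.optim.Adam(model.parameters(), lr=5e-6)
# Linearly decay the learning rate to 0.1x of its starting value,
# matching the "end factor of 0.1" described above.
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.1, total_iters=num_training_steps
)
```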