Model Description
Apollo-v1-7b is a 7-billion-parameter language model specialized in question answering (QA) and code-related queries. It is built on the Mistral architecture and was produced by merging Mistral-based models.
How to use
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model_id = "nextai-team/apollo-v1-7b"
messages = [{"role": "user", "content": "Hello tell me a joke?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision, spreading it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
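If you want finer control over generation than the pipeline helper offers, the same checkpoint can be driven directly through AutoModelForCausalLM. A minimal sketch, reusing the model ID and sampling settings from above (the prompt here is only an illustration):

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "nextai-team/apollo-v1-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
# Tokenize the templated conversation and move it to the model's device
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))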
Intended Use
This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to:
- Automated coding assistance
- Technical support bots (see the sketch after this list)
- Educational tools for learning programming
- Enhancing code review processes
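As an illustration of the support-bot use case, a multi-turn loop can accumulate messages and re-apply the chat template on each turn. This is a sketch reusing the `pipeline` and `tokenizer` built above; the example questions are hypothetical:

# Minimal multi-turn loop: each turn appends to the history and re-templates it
history = []
for question in ["What does `git rebase` do?", "How is that different from `git merge`?"]:
    history.append({"role": "user", "content": question})
    prompt = tokenizer.apply_chat_template(history, tokenize=False, add_generation_prompt=True)
    # return_full_text=False returns only the newly generated reply, not the prompt
    reply = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7,
                     top_k=50, top_p=0.95, return_full_text=False)
    answer = reply[0]["generated_text"]
    history.append({"role": "assistant", "content": answer})
    print(answer)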
Benchmarks and performance metrics can be provided upon request.
Limitations and Bias
This model, like any other, has its limitations. It may exhibit biases inherent in the training data or struggle with questions outside its training scope. Users should critically assess the model's outputs, especially for sensitive or critical applications.
Model Architecture
Apollo-v1-7b is a merge of Mistral 7B-based models, tuned for high performance on QA and coding tasks. This architecture lets the model process complex queries efficiently and generate accurate responses.
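Since the exact merge recipe is not published, a quick way to confirm the underlying Mistral architecture is to inspect the checkpoint's configuration; the expected values in the comments are assumptions based on standard Mistral 7B checkpoints:

from transformers import AutoConfig

# Fetch the model's configuration without downloading the weights
config = AutoConfig.from_pretrained("nextai-team/apollo-v1-7b")
print(config.model_type)         # expected: "mistral"
print(config.num_hidden_layers)  # transformer depth
print(config.hidden_size)        # model width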
Contact