---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Manticore 13B - Preview Release (previously Wizard Mega)
Manticore 13B is a LLaMA 13B model fine-tuned on the following datasets:
- ShareGPT - based on a cleaned and de-duped subset
- WizardLM
- Wizard-Vicuna
- subset of QingyiSi/Alpaca-CoT for roleplay and CoT
- GPT4-LLM-Cleaned
- GPTeacher-General-Instruct
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
- mmlu - instruct augmented for detailed responses; subset including:
  - abstract_algebra
  - conceptual_physics
  - formal_logic
  - high_school_physics
  - logical_fallacies
- hellaswag - 5K-row subset, instruct augmented for concise responses
- metaeval/ScienceQA_text_only - instruct for concise responses
- openai/summarize_from_feedback - instruct augmented tl;dr summarization
## Demo
Try out the model in HF Spaces. The demo uses a quantized GGML version of the model so it can return predictions quickly on smaller GPUs (and even CPUs). The quantized GGML version may come with a small loss of model quality.
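For running a quantized GGML build locally, here is a minimal sketch using llama-cpp-python. The model filename below is hypothetical, and only older llama-cpp-python releases still read GGML files (newer ones expect GGUF), so treat this as illustrative rather than a supported recipe:

```python
# Minimal local-inference sketch. Assumptions: a GGML quantization of Manticore 13B
# saved as ./manticore-13b.ggmlv3.q4_0.bin (hypothetical filename) and an older
# llama-cpp-python release that still loads GGML files.
from llama_cpp import Llama

llm = Llama(
    model_path="./manticore-13b.ggmlv3.q4_0.bin",  # hypothetical local path
    n_ctx=2048,  # context window
)

# Prompt format follows the USER:/ASSISTANT: examples shown later in this card.
prompt = "USER: Explain memoization in one paragraph.\nASSISTANT:"
result = llm(prompt, max_tokens=256, stop=["USER:"])
print(result["choices"][0]["text"].strip())
```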
## Release Notes
## Build
Manticore was built with Axolotl on 8x A100 80GB GPUs.
- Preview Release: 1 epoch, taking 8 hours.
- The configuration to duplicate this build is provided in this repo's /config folder.
## Bias, Risks, and Limitations
Manticore has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Manticore was fine-tuned from the base LLaMA 13B model; please refer to the LLaMA model card's Limitations section for relevant information.
## Examples
USER: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.
ASSISTANT:
Sure! Here’s an example implementation of the Fibonacci sequence using memoization in Python:
```python
def fibonacci(n):
    """Return the first n numbers of the Fibonacci sequence."""
    cache = {}  # Memoize the previous results for faster lookups

    def memoized_fibonacci(*args, **kwargs):
        if args[0] in cache:
            return cache[args[0]]
        else:
            result = fibonacci(args[0])
            cache[args[0]] = result
            return result

    def fibonacci_recursive(n):
        """Recursive implementation of the Fibonacci sequence."""
        if n <= 1:
            return n
        else:
            return fibonacci_recursive(n-1) + fibonacci_recursive(n-2)

    return memoized_fibonacci(*[n] * len(cache))
```
In this implementation, we define a function memoized_fibonacci that takes the same arguments as fibonacci, but also receives a cache dictionary. If an argument is found in the cache dictionary, it returns the corresponding value from the cache. Otherwise, it computes the value of fibonacci(n) and stores it
### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...
### Assistant: The mechanic turns to the salesman and says, "I don't know what you drive but I bet it never breaks down." The salesman responds, "That's because my customers buy cars like yours!"
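For reference, here is a rough sketch of reproducing the first example with transformers. The repo id and generation settings below are assumptions for illustration, not values specified in this card:

```python
# Rough full-precision generation sketch. The repo id and sampling settings are
# assumptions, not values taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "openaccess-ai-collective/manticore-13b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # roughly 26 GB of GPU memory for a 13B model in fp16
    device_map="auto",
)

# Prompt format mirrors the first example above.
prompt = (
    "USER: write Python code that returns the first n numbers of the "
    "Fibonacci sequence using memoization.\nASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```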