Pico Collection
Pico is a family of reasoning models designed to reason step by step and self-reflect before answering.
Pico v1 is a work-in-progress model. Based on the Qwen 2.5 0.5B model, it has been fine-tuned for automatic chain-of-thought (CoT) reasoning and self-reflection.
When generating an output, Pico produces three sections: a reasoning section, a self-reflection section, and an output section.
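The exact section markers are not documented here, but as a minimal sketch, assuming the model labels each section with plain-text headers such as "Reasoning:", "Self-Reflection:", and "Output:" (hypothetical names), you could separate the parts of a response like this:

import re

# Hypothetical section markers -- the actual labels Pico emits may differ.
SECTION_RE = re.compile(r"(Reasoning|Self-Reflection|Output):")

def split_sections(response: str) -> dict:
    """Split a Pico response into its labeled sections."""
    parts = SECTION_RE.split(response)
    # re.split with a capture group alternates [preamble, label, body, ...]
    return {label.lower(): body.strip()
            for label, body in zip(parts[1::2], parts[2::2])}

example = "Reasoning: The sky scatters blue light. Self-Reflection: That matches basic physics. Output: The sky is blue."
print(split_sections(example)["output"])  # -> "The sky is blue."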
Pico Mini v1 struggles with tasks that are not question answering (small talk, roleplay, etc.).
Pico Mini v1 can struggle to stay on topic at times.
Here is an example of how you can use it:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer from the Hugging Face Model Hub (test/test repository)
output_dir = "test/test"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Loading the model and tokenizer from the Hugging Face Hub...")
model = AutoModelForCausalLM.from_pretrained(output_dir).to(device) # Ensure model is on the same device
tokenizer = AutoTokenizer.from_pretrained(output_dir)
# Define the testing prompt
prompt = "What color is the sky?"
print(f"Testing prompt: {prompt}")
# Tokenize input and move to the same device as the model
inputs = tokenizer(prompt, return_tensors="pt").to(device) # Ensure inputs are on the same device
# Generate response
print("Generating response...")
outputs = model.generate(
    **inputs,
    max_new_tokens=1550,  # Adjust the max tokens if needed
    do_sample=True,       # Enable sampling so temperature/top_k/top_p take effect
    temperature=0.5,      # Adjust for response randomness
    top_k=50,             # Adjust for top-k sampling
    top_p=0.9             # Adjust for nucleus sampling
)
# Decode and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated response:")
print(response)
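With max_new_tokens set this high, generation can take a while before anything is printed. As an optional sketch, you can stream tokens to the console as they are produced using transformers' TextStreamer, reusing the model, tokenizer, and inputs loaded above:

from transformers import TextStreamer

# Print tokens as they are generated instead of waiting for the full
# sequence. skip_prompt=True omits the echoed prompt from the stream.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **inputs,
    max_new_tokens=1550,
    do_sample=True,
    temperature=0.5,
    top_k=50,
    top_p=0.9,
    streamer=streamer,
)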