
Cypher-Mini-1.8B

  • This is an experimental model, fine-tuned from h2oai/h2o-danube-1.8b-chat on Hercules v3 and a private dataset.
  • The original idea was to take this 1.8B model, split the dataset by task-specific capability, train one model per split, and combine the resulting experts into a mixture of experts.
  • Hyperparameters: AdamW with eps of 1e-8, cosine decay with 20% warmup, lr=2e-5. A hedged sketch of this optimizer/scheduler setup follows this list.
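The exact training script is not published; below is a minimal sketch of the stated optimizer and schedule, using transformers' get_cosine_schedule_with_warmup. The total step count is an illustrative assumption; only the optimizer settings, warmup ratio, and peak learning rate come from the card.

import torch
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

# Base model being fine-tuned (per the card).
model = AutoModelForCausalLM.from_pretrained("h2oai/h2o-danube-1.8b-chat")

total_steps = 1000                      # hypothetical; not stated in the card
warmup_steps = int(0.20 * total_steps)  # 20% warmup, per the card

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)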

Format:

<|system|></s><|prompt|></s><|answer|>
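A small helper (hypothetical; not part of the repo) can assemble prompts in this format:

def build_prompt(user_message, system_message=""):
    # Follows the card's format: <|system|></s><|prompt|></s><|answer|>
    return f"<|system|>{system_message}</s><|prompt|>{user_message}</s><|answer|>"

prompt = build_prompt("What is a logical fallacy?")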

Benchmarks:

WIP

Example:

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    TextStreamer,
    StoppingCriteria,
    StoppingCriteriaList,
)
import torch

class MyStoppingCriteria(StoppingCriteria):
    """Stops generation once a target string appears in the newly generated text."""

    def __init__(self, target_sequence, prompt, tokenizer):
        self.target_sequence = target_sequence
        self.prompt = prompt
        self.tokenizer = tokenizer

    def __call__(self, input_ids, scores, **kwargs):
        # Decode everything generated so far, strip the prompt, and check
        # whether the target sequence has appeared in the completion.
        generated_text = self.tokenizer.decode(input_ids[0])
        generated_text = generated_text.replace(self.prompt, '')
        return self.target_sequence in generated_text

modelpath = "aloobun/Cypher-Mini-1.8B"

# Load the model in bfloat16 on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    modelpath,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(
    modelpath,
    trust_remote_code=True,
    use_fast=False,
)

prompt = "<|prompt|>Reflect on a time when you encountered a logical fallacy in an argument. How did you identify it, and what was the consequence?</s><|answer|>"
encoded_input = tokenizer(prompt, return_tensors='pt')
input_ids = encoded_input['input_ids'].cuda()
streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True)
op = model.generate(
    input_ids,
    streamer=streamer,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    max_new_tokens=512,
    # Stop as soon as the model emits the </s> end-of-answer marker.
    stopping_criteria=StoppingCriteriaList([MyStoppingCriteria("</s>", prompt, tokenizer)]),
)
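The streamer prints tokens as they are generated; to recover the completion as a string afterwards, a standard transformers decode (not shown in the original card) works:

# Slice off the prompt tokens and decode only the generated answer.
answer = tokenizer.decode(op[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(answer)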

Output:

I do not have personal experiences or emotions, but I can provide you with an example of a logical fallacy and its consequences:

One common logical fallacy is the appeal to authority fallacy. This occurs when someone argues that a particular opinion or belief is true because of who holds it (i.e., "because the doctor said so"). However, this approach does not take into account other factors that may influence the validity of the claim. For instance, if a doctor says that eating a certain food will cure cancer, it does not necessarily mean that it will work for everyone. Other factors such as genetics, lifestyle, and environmental factors could also play a role in whether or not a person gets cancer.

The consequence of using the appeal to authority fallacy is that it often leads to hasty conclusions and misinformation. It can be difficult to separate fact from fiction, especially when people rely on authority figures to make decisions. As a result, individuals may end up making poor choices based on incomplete information. This can lead to unintended consequences, such as harming oneself or others.

To avoid falling prey to the appeal to authority fallacy, it is important to seek out multiple sources of information and consider all available evidence before making a decision. This can help individuals make more informed choices and reduce the likelihood of being swayed by unsubstantiated claims.

