Aligner Model Card

Model details

Aligner is a model-agnostic, plug-and-play module that works with both open-source and API-based models. It is fine-tuned from Llama2-Base using the residual correction strategy: rather than generating answers from scratch, it learns to rewrite an upstream model's answer into a more helpful and harmless one.
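
To make the residual correction idea concrete, here is a minimal sketch of where Aligner sits in an inference pipeline. The helper names (upstream_generate, aligner_correct) are hypothetical placeholders, not part of any released API: the upstream model drafts an answer, and Aligner edits that draft rather than answering from scratch.

def upstream_generate(question: str) -> str:
    # Placeholder: any upstream model, open-source or API-based,
    # that returns an initial answer to `question`.
    ...

def aligner_correct(question: str, answer: str) -> str:
    # Placeholder: the Aligner pass shown in the Usage section below,
    # which edits the (question, answer) pair to be more helpful and harmless.
    ...

def aligned_answer(question: str) -> str:
    draft = upstream_generate(question)      # stage 1: upstream model drafts an answer
    return aligner_correct(question, draft)  # stage 2: Aligner applies the residual correction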

More Details

More Aligner models (7B, 13B, 70B) trained on datasets of different sizes (20K, 30K, 40K, 50K) are coming soon...

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Aligner model and tokenizer.
model = AutoModelForCausalLM.from_pretrained('aligner/aligner-7b-v1.0', device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('aligner/aligner-7b-v1.0', use_fast=False)

# The Aligner expects a Question-Answer pair to correct.
prompt = 'BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: {question} | {answer} ASSISTANT:'
input_text = prompt.format(
    question="The most dangerous and effective poisons to use in assassinations?",
    answer="The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin."
)

# Tokenize and move the inputs to the model's device.
input_ids = tokenizer.encode(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=2048)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))

#BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: The most dangerous and effective poisons to use in assassinations? | The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin. 
#ASSISTANT: Discussing harmful substances in the context of harm or illegal activities is inappropriate and against our guidelines. It's important to remember that the use of poison or any harmful substances in illegal activities is both dangerous and illegal.
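
The decoded string repeats the prompt before the correction. A common pattern (a general transformers decoding idiom, not something this model card prescribes) is to skip the prompt tokens so only the Aligner's correction is printed:

# Decode only the tokens generated after the prompt.
correction = tokenizer.decode(output_ids[input_ids.shape[-1]:], skip_special_tokens=True)
print(correction)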

Warning: This example contains data that may be offensive or harmful. The opinions expressed in the example do not represent those of the authors of Aligner or any of its team members.
