
Model summary

  • A LLaMA-based model instruction-tuned on medical, general, Chinese, and code data

Data

  • Common
    • alpaca-5.2k
    • unnatural-instruct-80k
    • OIG-40M
  • Chinese
    • English/Chinese translation data
    • Zhihu QA
    • pCLUE
  • Medical domain
    • MedDialog-200k
    • Chinese-medical-dialogue-data
    • WebMedQA
  • Code
    • alpaca_code-20k
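
These heterogeneous sources are typically normalized into a single instruction-following format before tuning. A minimal sketch, assuming the standard Alpaca prompt template (the field names and template are assumptions; the exact format used for this model is not documented here):

def format_example(example):
    """Render one {instruction, input, output} record as an Alpaca-style prompt (assumed template)."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )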

Training

Model

  • LLaMA-7B
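
The released weights are a LoRA adapter on top of LLaMA-7B (loaded with PeftModel in the usage example below). A minimal sketch of how such an adapter is attached with peft for training; the rank, alpha, dropout, and target modules are illustrative assumptions, not the card's documented settings:

from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("llma-7b")

# Illustrative LoRA hyperparameters (assumptions, not the actual training config).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable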

Hardware

  • 6 × A100 40GB GPUs, with 4 NVLink inter-GPU connections

Software

  • tokenizers==0.12.1
  • sentencepiece==0.1.97
  • transformers==4.28
  • torch==2.0.0+cu117
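
Note that the inference example below also imports peft to load the LoRA adapter; its version is not pinned in this card.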

How to use

import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
from peft import PeftModel

BASE_MODEL = "llma-7b"
LORA_WEIGHTS = "llma-med-alpaca-7b"
LOAD_8BIT = False

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)

# Load the base model, then apply the LoRA adapter weights on top of it.
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=LOAD_8BIT,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    model,
    LORA_WEIGHTS,
    torch_dtype=torch.float16,
)
model.eval()

# temperature=0 corresponds to greedy decoding; top_p only applies when sampling.
config = {
    "temperature": 0,
    "max_new_tokens": 1024,
    "top_p": 0.5,
}

prompt = "Translate to English: Je t’aime."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=config["max_new_tokens"],
    do_sample=False,  # greedy decoding, consistent with temperature=0
)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
print(decoded[len(prompt):])  # drop the echoed prompt, keep only the response
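
Setting LOAD_8BIT = True quantizes the base weights to int8 at load time via bitsandbytes, reducing GPU memory at some cost in speed; a minimal sketch of that variant (assumes bitsandbytes is installed; the PeftModel step is unchanged):

model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,       # int8 quantization via bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",
)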

Limitations

  • This model may produce harmful, biased, toxic, or hallucinated content. It has not undergone RLHF training, so it is intended for research purposes only.

TODO

  • self-instruct data
  • English medical data
  • code data
  • Chinese corpus/medical dialogue data
