## Base Model Description
Pythia-70M is a transformer-based causal language model developed by EleutherAI. It is the smallest member of the Pythia suite, a family of models released primarily to support research on the training dynamics and interpretability of large language models. With 70 million parameters, it offers a practical balance between computational efficiency and capability, making it a convenient base for lightweight fine-tuning across a range of NLP applications.
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: Pravin Maurya
- Model type: LoRA fine-tuned transformer model
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: EleutherAI/pythia-70m
## Model Sources
- Colab Link: Click me🔗
## Uses
This model can be fine-tuned further for specific downstream applications such as medical AI assistants, legal document generation, and other domain-specific NLP tasks.
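For further fine-tuning, a LoRA adapter can be attached to the base model with the PEFT library. The sketch below is illustrative only: the rank, alpha, dropout, and other values are assumptions for demonstration, not the settings used to train this model.

```python
# Hypothetical LoRA setup sketch; the rank, alpha, and dropout values below
# are illustrative assumptions, not the settings used for this model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")

lora_config = LoraConfig(
    r=8,                                 # low-rank dimension (assumed)
    lora_alpha=16,                       # scaling factor (assumed)
    target_modules=["query_key_value"],  # fused attention projection in GPT-NeoX models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the low-rank adapter matrices are updated, this keeps the trainable parameter count far below the full 70M, which is what makes fine-tuning feasible on a single Colab GPU.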
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Pravincoder/pythia-legal-llm-v4")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

def inference(text, model, tokenizer, max_input_tokens=1000, max_output_tokens=200):
    # Tokenize the prompt, truncating it to the input budget.
    input_ids = tokenizer.encode(
        text, return_tensors="pt", truncation=True, max_length=max_input_tokens
    )
    device = model.device
    # max_new_tokens bounds the generated continuation regardless of prompt length.
    generated_tokens_with_prompt = model.generate(
        input_ids=input_ids.to(device), max_new_tokens=max_output_tokens
    )
    generated_text_with_prompt = tokenizer.batch_decode(
        generated_tokens_with_prompt, skip_special_tokens=True
    )
    # Strip the prompt so only the generated answer remains.
    generated_text_answer = generated_text_with_prompt[0][len(text):]
    return generated_text_answer

system_message = "Welcome to the medical AI assistant."
user_message = "What are the symptoms of influenza?"
prompt = f"{system_message}\n{user_message}"

generated_response = inference(prompt, model, tokenizer)
print("Generated Response:", generated_response)
```
## Training Data
The model was fine-tuned on medical chat data. For more info: click me🔗
## Training Procedure
Data preprocessing involved tokenization and formatting suitable for the transformer model.
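As an illustration of that preprocessing step, question/answer pairs might be joined into a single training string before tokenization. The helper names and the Question/Answer template below are hypothetical, since the exact format is not documented in this card.

```python
# Hypothetical preprocessing sketch: the prompt template and helper names
# are assumptions, not the exact format used to train this model.
def format_example(question: str, answer: str) -> str:
    """Combine a question/answer pair into a single training string."""
    return f"### Question:\n{question}\n\n### Answer:\n{answer}"

def truncate_tokens(token_ids, max_length=512):
    """Clip a token-id list to the model's context budget."""
    return token_ids[:max_length]

example = format_example(
    "What are the symptoms of influenza?",
    "Fever, cough, sore throat, and fatigue.",
)
print(example)
```

In practice the formatted strings would then be passed through the model's tokenizer with truncation enabled, mirroring the `max_input_tokens` budget used in the inference code above.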
### Training Hyperparameters
- Training regime: Mixed precision (fp16)
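In the 🤗 Trainer API, the fp16 mixed-precision regime corresponds to a single flag in `TrainingArguments`. The configuration fragment below is a sketch: every other hyperparameter value (batch size, learning rate, epochs, output path) is an assumption for demonstration, not the actual setup used for this model.

```python
# Illustrative Trainer configuration; all values other than fp16=True are
# assumptions for demonstration, not the actual training setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pythia-legal-llm-v4",  # hypothetical output path
    fp16=True,                         # mixed-precision training, as noted above
    per_device_train_batch_size=8,
    learning_rate=2e-4,
    num_train_epochs=3,
)
```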
### Hardware
- Hardware Type: T4 Google Colab GPU
- Hours used: 1.30 to 2 hours
## Model Card Contact
- Email: PravinCoder@gmail.com
Model Trained Using AutoTrain