
Collaiborator-MEDLLM-Llava-Llama-3-8b-v5

This model is a fine-tuned multimodal version of meta-llama/Meta-Llama-3-8B-Instruct, trained on our custom "BioMedData" text and image datasets.

Model details

Model Name: Collaiborator-MEDLLM-Llava-Llama-3-8b-v5

Base Model: Llama-3-8B-Instruct

Parameter Count: 8 billion

Training Data: Custom high-quality biomedical text and image dataset

Number of Entries in Dataset: 500,000+

Dataset Composition: The dataset comprises text and image samples, both synthetic and manually curated, ensuring diverse and comprehensive coverage of biomedical knowledge.

Model description

Collaiborator-MEDLLM-Llava-Llama-3-8b-v5 is a specialized large language model designed for biomedical applications. It is fine-tuned from the Llama-3-8B-Instruct model using a custom dataset containing over 500,000 diverse entries. These entries include a mix of synthetic and manually curated data, ensuring high quality and broad coverage of biomedical topics.

The model is trained to interpret biomedical images and to understand and generate text across various biomedical fields, making it a valuable tool for researchers, clinicians, and other professionals in the biomedical domain.
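A minimal usage sketch follows. It assumes the repository exposes standard LLaVA-style weights loadable through transformers' LlavaForConditionalGeneration and AutoProcessor; the repository id and the image filename are illustrative placeholders, and the exact prompt format may need to follow the model's chat template.

    # Minimal sketch, not an official inference script.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model_id = "collaiborateorg/Collaiborator-MEDLLM-Llava-Llama-3-8b-v5"  # assumed repo id
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    image = Image.open("example_scan.png")  # placeholder biomedical image
    prompt = "<image>\nDescribe the key findings in this image."

    inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    print(processor.decode(output_ids[0], skip_special_tokens=True))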

Intended uses & limitations

Collaiborator-MEDLLM-Llava-Llama-3-8b-v5 is intended for a wide range of applications within the biomedical field, including:

  1. Research Support: Assisting researchers in literature review and data extraction from biomedical texts.
  2. Clinical Decision Support: Providing information to support clinical decision-making processes.
  3. Educational Tool: Serving as a resource for medical students and professionals seeking to expand their knowledge base.

Limitations and Ethical Considerations

While Collaiborator-MEDLLM-Llava-Llama-3-8b-v5 performs well on various biomedical NLP tasks, users should be aware of the following limitations:

Biases: The model may inherit biases present in the training data. Efforts have been made to curate a balanced dataset, but some biases may persist.

Accuracy: The model's responses are based on patterns in the data it has seen and may not always be accurate or up-to-date. Users should verify critical information from reliable sources.

Ethical Use: The model should be used responsibly, particularly in clinical settings where the stakes are high. It should complement, not replace, professional judgment and expertise.

Training and evaluation

Collaiborator-MEDLLM-Llava-Llama-3-8b-v5 was trained on NVIDIA A40 GPUs, which provide the computational power necessary for handling large-scale data and model parameters efficiently. Rigorous evaluation protocols were used to benchmark its performance against similar models, ensuring robustness and reliability in real-world applications.

Contact Information

For further information, inquiries, or issues related to this model, please contact:

Email: info@collaiborate.com

Website: https://www.collaiborate.com

Training hyperparameters

The following hyperparameters were used during training (the sketch after this list shows how they map onto a transformers configuration):

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • num_epochs: 3
  • seed: 42
  • gradient_accumulation_steps: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.03
  • mixed_precision_training: Native AMP
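
For reference, here is a hedged sketch of how these settings could be expressed with transformers' TrainingArguments. The actual training script is not published, so the output directory is a placeholder and the optimizer is assumed to be the library's default AdamW implementation:

    # Sketch only: maps the listed hyperparameters onto TrainingArguments.
    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="./medllm-llava-output",  # placeholder path
        learning_rate=2e-4,
        per_device_train_batch_size=4,
        per_device_eval_batch_size=4,
        num_train_epochs=3,
        seed=42,
        gradient_accumulation_steps=4,
        optim="adamw_torch",                 # betas=(0.9, 0.999), epsilon=1e-8 by default
        lr_scheduler_type="cosine",
        warmup_ratio=0.03,
        fp16=True,                           # native AMP mixed precision
    )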

Framework versions

  • PEFT 0.11.0
  • Transformers 4.40.2
  • PyTorch 2.1.2
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Citation

If you use Collaiborator-MEDLLM-Llava-Llama-3-8b-v5 in your research or applications, please cite it as follows:

@misc{Collaiborator_MEDLLM,
  author       = {Collaiborator},
  title        = {Collaiborator-MEDLLM-Llava-Llama-3-8b-v5: A High-Performance Biomedical Language Model},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/collaiborateorg/Collaiborator-MEDLLM-Llava-Llama-3-8b-v4}},
}
