# Gemma Fine-Tuned Model

This repository contains a fine-tuned version of the Gemma model, which is part of the GemMoE (Gemma Mixture of Experts) family of models. For more information about GemMoE, please refer to the [official documentation](https://huggingface.co/Crystalcareai/GemMoE-Beta-1).

## Model Details

- **Dataset:** Fine-tuned for 3 epochs on the Crystalcareai/Self-Discover-MM-Instruct-Alpaca dataset (see the loading sketch after this list).
- **Size:** ~8.54B parameters, stored as FP16 safetensors.
- **Architecture:** The fine-tuned model inherits the lean and efficient architecture of the base Gemma model, making it suitable for a wide range of applications, including settings with limited computational resources.
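
If you want to inspect the training data, the dataset can be pulled straight from the Hub with the `datasets` library. A minimal sketch; the `train` split name is an assumption, so check the dataset card if it differs:

```python
from datasets import load_dataset

# Load the instruction-tuning dataset used for this fine-tune.
# The "train" split name is assumed here.
dataset = load_dataset("Crystalcareai/Self-Discover-MM-Instruct-Alpaca", split="train")
print(dataset[0])  # Inspect one instruction/response example
```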

## Usage

You can use this fine-tuned model like any other Hugging Face model: simply load it with the `from_pretrained` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("Crystalcareai/Gemma-Selfdiscover")
tokenizer = AutoTokenizer.from_pretrained("Crystalcareai/Gemma-Selfdiscover")
```
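
Once loaded, generation works through the standard `transformers` API. The sketch below is illustrative: the prompt text and generation parameters (`max_new_tokens`, `do_sample`) are example choices, not recommendations from the model authors:

```python
import torch

prompt = "Explain the SELF-DISCOVER reasoning approach in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic; tune parameters as needed.
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```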