Gemma 1.1 7B Instruct Minecraft Adapter Model

Updated Version

This model is fine-tuned from Unsloth's quantized Gemma 1.1 7B Instruct model on naklecha's Minecraft Question-Answer dataset. It was trained on the first 100k rows of the dataset for 1 epoch, which took around 2 hours 20 minutes on an NVIDIA RTX 4090.
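
For readers who want to reproduce a similar setup, below is a hedged sketch of such a fine-tune using Unsloth and TRL's SFTTrainer. The base checkpoint name, the dataset repo id and column names, the LoRA hyperparameters, and the prompt format are all assumptions for illustration, not the exact recipe used for this model; depending on your trl version, some arguments may have moved into SFTConfig.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit quantized Gemma 1.1 7B Instruct base model (checkpoint name assumed).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-1.1-7b-it-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha/target modules here are illustrative values only.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# First 100k rows of the Minecraft Question-Answer dataset (repo id assumed).
dataset = load_dataset("naklecha/minecraft-question-answer-700k", split="train[:100000]")

def to_text(example):
    # Simple Gemma-style chat formatting; the exact template and the
    # "question"/"answer" column names are assumptions.
    return {
        "text": f"<start_of_turn>user\n{example['question']}<end_of_turn>\n"
                f"<start_of_turn>model\n{example['answer']}<end_of_turn>\n"
    }

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=1,               # 1 epoch, as described above
        per_device_train_batch_size=2,    # illustrative batch settings
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=50,
    ),
)
trainer.train()
```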

The model can now generate some good answers, but it sometimes produces inappropriate ones. I think this problem comes from a lack of data.

Important Notes

  • The model sometimes generates nonsensical answers. I am currently investigating this; it may take a while since I am a beginner in this field. If you have any suggestions, feel free to share them on the model's Community page.
  • The model uses bitsandbytes, so run it on a CUDA-capable GPU; a minimal loading sketch is shown after this list.
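
A minimal loading sketch, assuming the adapter is applied on top of the Unsloth 4-bit base checkpoint with the PEFT library; the base repo id, prompt template, and generation settings below are assumptions, so adjust them to your setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base checkpoint: the Unsloth 4-bit quantized Gemma 1.1 7B Instruct model.
base_id = "unsloth/gemma-1.1-7b-it-bnb-4bit"
adapter_id = "emre570/gemma-7b-us-minecraft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",           # requires a CUDA-capable GPU (bitsandbytes 4-bit weights)
    torch_dtype=torch.float16,
)

# Attach the fine-tuned Minecraft adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Gemma instruct-style prompt; the exact template used during fine-tuning is an assumption.
prompt = "<start_of_turn>user\nHow do I craft a torch in Minecraft?<end_of_turn>\n<start_of_turn>model\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```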