Sweaterdog committed on
Commit
b634819
1 Parent(s): c6fb203

Update README.md

Files changed (1)
  1. README.md +20 -0
README.md CHANGED
@@ -17,6 +17,26 @@ language:
  - **License:** apache-2.0
  - **Finetuned from model :** unsloth/Qwen2.5-7B-bnb-4bit
 
+ The MindCraft LLM tuning CSV file can be found here and can be tweaked as needed: [MindCraft-LLM](https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning/raw/main/Gemini-Minecraft%20-%20training_data_minecraft_updated.csv)
+
+ # What is the Purpose?
+
+ This model is built and designed to play Minecraft via the extension named "[MindCraft](https://github.com/kolbytn/mindcraft)", which allows language models, like the ones provided in the files section, to play Minecraft.
+ - Why a new model?
+ #
+ While models that aren't fine-tuned to play Minecraft *can* still play it, most are slow, inaccurate, and not as smart. The fine-tuning expands reasoning, conversation examples, and command (tool) usage.
+ - What kind of dataset was used?
+ #
+ I'm deeming this model *"Hermes"*. It was trained for reasoning using examples of in-game "vision" as well as examples of spatial reasoning. To expand its thinking, I also added puzzle examples where the model breaks the process down step by step to reach the goal.
+ - Why choose Qwen2.5 for the base model?
+ #
+ During testing to find the best local LLM for playing Minecraft, I came across two: Gemma 2 and Qwen2.5. These two were by far the best at playing Minecraft before fine-tuning, and I knew that, once tuned, they would become even better.
+
+
+ Here is the link to the Google Colab notebook for fine-tuning your own model, in case you want to use a different one, such as Llama-3-8b, or if you want to change the hyperparameters:
+ [Google Colab](https://colab.research.google.com/drive/1ZoP7vO50kQrtHoQ54EI6URnoWzIJUg-c?usp=sharing)
+
+ #
  This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
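
The README above links the MindCraft-LLM tuning CSV and notes that it can be tweaked before training. Below is a minimal sketch of inspecting and re-saving that CSV, assuming pandas is available; it makes no assumption about the column names (they are printed rather than hard-coded), and the local output filename is hypothetical.

```python
# Minimal sketch: inspect the MindCraft-LLM tuning CSV before tweaking it.
# Assumes pandas is installed; prints the columns instead of assuming names.
import pandas as pd

CSV_URL = (
    "https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning/"
    "raw/main/Gemini-Minecraft%20-%20training_data_minecraft_updated.csv"
)

df = pd.read_csv(CSV_URL)
print(df.shape)              # number of examples and columns
print(df.columns.tolist())   # check the column layout before editing
print(df.head())             # preview a few rows

# Tweak as needed (filter, edit, or add rows), then save a local copy
# to point the fine-tuning notebook at (filename is just an example).
df.to_csv("training_data_minecraft_tweaked.csv", index=False)
```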
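The card also says the model was trained with Unsloth and Hugging Face's TRL, starting from unsloth/Qwen2.5-7B-bnb-4bit. The sketch below follows the pattern of Unsloth's public example notebooks rather than the author's exact Colab; the LoRA hyperparameters, the local CSV path, and the assumption that the data has been preprocessed into a single `text` column are all placeholders to adjust, and some SFTTrainer argument names differ across TRL versions.

```python
# Minimal Unsloth + TRL LoRA fine-tuning sketch (not the author's exact notebook).
# Assumes the tuning data has been mapped to a single "text" column per example.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 2048

# Load the 4-bit base model named on the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and alpha here are illustrative placeholders.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# Point this at the (optionally tweaked) local copy of the tuning CSV.
dataset = load_dataset(
    "csv",
    data_files="training_data_minecraft_tweaked.csv",  # hypothetical path
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",       # assumed preprocessed column
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```

Batch size, sequence length, and epoch count should be adjusted to fit the available GPU memory and dataset size; the linked Colab notebook is the place to change these if you are reproducing or extending the training run.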