Sweaterdog committed
Commit 9ea9a03
Parent(s): ddb2733
Update README.md
README.md CHANGED
@@ -34,7 +34,7 @@ This model is built and designed to play Minecraft via the extension named "[Min
 While, yes, models that aren't fine-tuned to play Minecraft *can* play Minecraft, most are slow, inaccurate, and not as smart; fine-tuning expands reasoning, conversation examples, and command (tool) usage.
 - What kind of Dataset was used?
 #
-I'm deeming this model *"
+I'm naming the first generation of this model Hermesv1; future generations will be named *"Andy"*, after the actual MindCraft plugin's default character. It was trained for reasoning using examples of in-game "vision" as well as examples of spatial reasoning; to expand its thinking, I also added puzzle examples where the model broke the process down step by step to reach the goal.
 - Why choose Qwen2.5 for the base model?
 #
 During testing, to find the best local LLM for playing Minecraft, I came across two: Gemma 2 and Qwen2.5. These two were by far the best at playing Minecraft before fine-tuning, and I knew that, once tuned, they would become even better.