Sweaterdog committed on
Commit 47cf4e6
1 Parent(s): d6b22d8

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -39,9 +39,12 @@ This model is built and designed to play Minecraft via the extension named "[Min
 #
 During testing, to find the best local LLM for playing Minecraft, I came across two, Gemma 2, and Qwen2.5, these two were by far the best at playing Minecraft before fine-tuning, and I knew, once tuned, it would become better.
 - If Gemma 2 and Qwen 2.5 are the best before fine tuning, why include Llama 3.2, especially the lower intelligence, 3B parameter version?
- #
+ #
 That is a great question, I know since Llama 3.2 3b has low amounts of parameters, it is dumb, and doesn't play minecraft well without fine tuning, but, it is a lot smaller than other models which are for people with less powerful computers, and the hope is, once the model is tuned, it will become much better at minecraft.

+ - Why is it taking so long to release more tuned models?
+ #
+ Well, you see, I do not have the most powerful computer, and Unsloth, the thing I'm using for fine tuning, has a google colab set up, so I am waiting for GPU time to tune the models, but they will be released ASAP, I promise.
 # How to Use
 In order to use this model, A, download the GGUF file of the version you want, either a Qwen, or Gemma model, and then the Modelfile, after you download both, in the Modelfile, change the directory of the model, to your model. Here is a simple guide if needed for the rest:
 #
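For anyone following the "How to Use" step above, here is a minimal sketch of what the edited Modelfile might look like, assuming the Modelfile referred to is an Ollama Modelfile; the GGUF path and the `minecraft-llm` tag below are placeholders, not part of this commit:

```
# Modelfile (sketch) — point FROM at wherever you saved the downloaded GGUF.
# The path is a placeholder; leave the rest of the downloaded Modelfile unchanged.
FROM ./path/to/your-downloaded-model.gguf
```

With that edit in place, running `ollama create minecraft-llm -f Modelfile` and then `ollama run minecraft-llm` should register and load the model, again assuming an Ollama setup.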