k-mktr committed
Commit 0bc8ef3 · verified · 1 Parent(s): 8e0ae84

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -17,9 +17,9 @@ Welcome to the GPU-Poor LLM Gladiator Arena, where frugal meets fabulous in the
 
 ## 🤔 Starting from "Why?"
 
-In the recent months, We've seen a lot of these "Tiny" models released, and some of them are really impressive.
+In recent months, we've seen a lot of these "Tiny" models released, and some of them are really impressive.
 
-- **Gradio Exploration**: This project serves me as a playground for experimenting with Gradio app development, I am learning how to create interactive AI interfaces with it.
+- **Gradio Exploration**: This project serves as a playground for experimenting with Gradio app development; I am learning how to create interactive AI interfaces with it.
 
 - **Tiny Model Evaluation**: I wanted to develop a personal (and now public) stats system for evaluating tiny language models. It's not too serious, but it provides valuable insights into the capabilities of these compact powerhouses.
 
@@ -49,8 +49,8 @@ In the recent months, We've seen a lot of these "Tiny" models released, and some
 
 1. Clone the repository:
 ```
-git clone https://github.com/yourusername/gpu-poor-llm-gladiator-arena.git
-cd gpu-poor-llm-gladiator-arena
+git clone https://huggingface.co/spaces/k-mktr/gpu-poor-llm-arena.git
+cd gpu-poor-llm-arena
 ```
 
 2. Install the required packages:
@@ -102,7 +102,7 @@ The arena currently supports various compact models, including:
 
 ## 🤝 Contributing
 
-Contributions are welcome! Feel free to suggest a model, which is supported by Ollama. Some results are already quite surprising.
+Contributions are welcome! Please feel free to suggest a model that Ollama supports. Some results are already quite surprising.
 
 ## 📜 License
 
@@ -111,6 +111,6 @@ This project is open-source and available under the MIT License
 ## 🙏 Acknowledgements
 
 - Thanks to the Ollama team for providing that amazing tool.
-- Shoutout to all the AI researchers and compact language models teams, making this frugal AI arena possible!
+- Shoutout to all the AI researchers and compact language model teams for making this frugal AI arena possible!
 
 Enjoy the battles in the GPU-Poor LLM Gladiator Arena! May the best compact model win! 🏆
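To reproduce the updated setup instructions at exactly this revision, a minimal sketch, using the clone URL added in this commit and the commit hash shown in the header above:

```
# Clone the Space repository (URL taken from the updated README)
git clone https://huggingface.co/spaces/k-mktr/gpu-poor-llm-arena.git
cd gpu-poor-llm-arena

# Check out this exact commit (hash from the commit header)
git checkout 0bc8ef3
```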