lamhieu committed on
Commit 3506c77 · verified · 1 Parent(s): 0fd978c

chore: update README.md content

Files changed (1): README.md +30 -5

README.md CHANGED
@@ -46,7 +46,7 @@ The Ghost 8B Beta model outperforms prominent models such as Llama 3.1 8B Instru
 
 ### Updates
 
-* **16 Aug 2024**: The model has been released as version 160824, expanding support from 9 languages to 16. The model has improved at math, reasoning, and instruction following compared to the previous version.
+- **16 Aug 2024**: The model has been released as version 160824, expanding support from 9 languages to 16. The model has improved at math, reasoning, and instruction following compared to the previous version.
 
 ### Thoughts
 
@@ -75,18 +75,19 @@ We believe that it is possible to optimize language models that are not too larg
 
 We create many distributions to give you the best access options that best suit your needs.
 
-| Version | Model card                                                               |
-| ------- | ------------------------------------------------------------------------ |
+| Version | Model card                                                               |
+| ------- | ------------------------------------------------------------------------ |
 | BF16    | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608)      |
 | GGUF    | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608-gguf) |
 | AWQ     | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608-awq)  |
+| MLX     | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608-mlx)  |
 
 ### License
 
 The Ghost 8B Beta model is released under the [Ghost Open LLMs LICENSE](https://ghost-x.org/ghost-open-llms-license) and the [Llama 3 LICENSE](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE).
 
-* For individuals, the model is free to use for personal and research purposes.
-* For commercial use of Ghost 8B Beta, it's also free, but please contact us for confirmation. You can email us at "lamhieu.vk [at] gmail.com" with a brief introduction of your project. If possible, include your logo so we can feature it as a case study. We will confirm your permission to use the model. For commercial use as a service, no email confirmation is needed, but we'd appreciate a notification so we can keep track and potentially recommend your services to partners using the model.
+- For individuals, the model is free to use for personal and research purposes.
+- For commercial use of Ghost 8B Beta, it's also free, but please contact us for confirmation. You can email us at "lamhieu.vk [at] gmail.com" with a brief introduction of your project. If possible, include your logo so we can feature it as a case study. We will confirm your permission to use the model. For commercial use as a service, no email confirmation is needed, but we'd appreciate a notification so we can keep track and potentially recommend your services to partners using the model.
 
 Additionally, it would be great if you could mention or credit the model when it benefits your work.
 
@@ -302,6 +303,30 @@ For direct use with `unsloth`, you can easily get started with the following ste
 print(results)
 ```
 
+#### Use with MLX
+
+For direct use with `mlx`, you can easily get started with the following steps.
+
+- First, you need to install `mlx-lm` via the command below with `pip`.
+
+```bash
+pip install mlx-lm
+```
+
+- Now you can start using the model directly.
+
+```python
+from mlx_lm import load, generate
+
+model, tokenizer = load("ghost-x/ghost-8b-beta-1608-mlx")
+messages = [
+    {"role": "system", "content": ""},
+    {"role": "user", "content": "Why is the sky blue?"},
+]
+prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+response = generate(model, tokenizer, prompt=prompt, verbose=True)
+```
 
 ### Instructions
 
 Here are specific instructions and explanations for each use case.
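The `apply_chat_template` call in the new MLX snippet delegates prompt construction to the template stored with the tokenizer. As a rough sketch of what such a template produces — assuming the Llama 3 chat format, since Ghost 8B Beta is released under the Llama 3 license; the template actually shipped with the tokenizer is authoritative — the formatting can be written out by hand:

```python
# Hand-rolled sketch of a Llama 3-style chat template. This is an
# assumption for illustration; Ghost 8B Beta's tokenizer may define
# a different template, which apply_chat_template would use instead.
def format_chat(messages, add_generation_prompt=True):
    prompt = "<|begin_of_text|>"
    for m in messages:
        # Each turn: role header, blank line, content, end-of-turn token.
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Leave the prompt open for the assistant's reply.
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Why is the sky blue?"},
]
prompt = format_chat(messages)
```

This makes it visible why `add_generation_prompt=True` matters: without the trailing open assistant header, the model would be completing an already-closed conversation rather than answering.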
 
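The distributions table above maps each format to a Hugging Face repository. As a small convenience sketch — the repo ids are taken verbatim from the table, while the helper itself is hypothetical and not part of the release — the mapping can be captured programmatically:

```python
# Repo ids copied verbatim from the distributions table; the MLX entry
# is the one added in this commit.
DISTRIBUTIONS = {
    "BF16": "ghost-x/ghost-8b-beta-1608",
    "GGUF": "ghost-x/ghost-8b-beta-1608-gguf",
    "AWQ": "ghost-x/ghost-8b-beta-1608-awq",
    "MLX": "ghost-x/ghost-8b-beta-1608-mlx",
}

def repo_url(version: str) -> str:
    """Return the Hugging Face model-card URL for a distribution."""
    return f"https://huggingface.co/{DISTRIBUTIONS[version]}"
```

For example, `repo_url("MLX")` resolves to the model card linked in the table's new MLX row.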