hellork committed
Commit da51a96
1 Parent(s): 7536f52

Update README.md

Files changed (1)
  1. README.md +23 -2
README.md CHANGED
@@ -141,10 +141,31 @@ llama-cli --hf-repo hellork/medicine-chat-IQ4_NL-GGUF --hf-file medicine-chat-iq
 
  ### Server:
  ```bash
- llama-server --hf-repo hellork/medicine-chat-IQ4_NL-GGUF --hf-file medicine-chat-iq4_nl-imat.gguf -c 2048
+ llama-server --hf-repo hellork/medicine-chat-IQ4_NL-GGUF --hf-file medicine-chat-iq4_nl-imat.gguf -c 2048 --port 8888
  ```
 
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
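
With the server listening on port 8888 (the updated command above), a quick way to confirm it answers is a plain HTTP request. A minimal sketch, assuming the OpenAI-compatible `/v1/chat/completions` route that llama.cpp's server exposes; the prompt text is just an example:

```bash
# Smoke test: ask the model a question over HTTP.
# Assumes llama-server is running locally on port 8888 as launched above.
curl http://localhost:8888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "List three common symptoms of dehydration."}
        ],
        "max_tokens": 128
      }'
```
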
+ ### The Ship's Computer:
+
+ Interact with this model by speaking to it. Lean, fast, and private: networked speech-to-text, AI image generation, multi-modal voice chat, and voice control of apps, webcam, and sound, all in under 4 GiB of VRAM.
+ [whisper_dictation](https://github.com/themanyone/whisper_dictation)
+
+ ```bash
+ git clone -b main --single-branch https://github.com/themanyone/whisper_dictation.git
+ pip install -r whisper_dictation/requirements.txt
+
+ git clone https://github.com/ggerganov/whisper.cpp
+ cd whisper.cpp
+ GGML_CUDA=1 make -j # assuming CUDA is available; see the docs otherwise
+ ./models/download-ggml-model.sh tiny.en # fetch the model used below
+ ln -s "$PWD/server" ~/.local/bin/whisper_cpp_server # absolute link target; put it anywhere in $PATH
+
+ whisper_cpp_server -l en -m models/ggml-tiny.en.bin --port 7777
+ ../whisper_dictation/whisper_cpp_client.py # the client ships with whisper_dictation
+ ```
+ See [the docs](https://github.com/themanyone/whisper_dictation) for tips on enabling the computer to talk back, draw AI images, carry out voice commands, and other features.
+
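
Once whisper_cpp_server is up, you can sanity-check transcription before involving the client. A minimal sketch, assuming the `/inference` route of whisper.cpp's example server; the sample file path is hypothetical, so substitute any 16 kHz WAV you have:

```bash
# Smoke test: POST a WAV file to the whisper.cpp server started above
# on port 7777 and print the transcription as JSON.
curl http://127.0.0.1:7777/inference \
  -H "Content-Type: multipart/form-data" \
  -F file="@samples/jfk.wav" \
  -F response_format="json"
```
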
+ ### Install Llama.cpp via git:
+
+ Note: You can also try this checkpoint with the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
 
  Step 1: Clone llama.cpp from GitHub.
  ```