thomas-yanxin committed
Commit 06b4a38 · verified · Parent: a5f11eb

Update README.md

Files changed (1): README.md (+3 −4)
README.md CHANGED
@@ -7,7 +7,7 @@ base_model:
 
 ## How to use it
 
-1.
+1. To get the Code:
 ```
 git clone https://github.com/HimariO/llama.cpp.git
 cd llama.cpp
@@ -36,16 +36,15 @@ index 8a903d7e..51403be2 100644
 examples/llava/llava.cpp \
 examples/llava/llava.h \
 
-
 ```
 
-3.
+3. Metal Build
 ```
 cmake . -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=$(which nvcc) -DTCNN_CUDA_ARCHITECTURES=61
 make -j35
 ```
 
-4.
+4. RUN
 ```
 ./bin/llama-qwen2vl-cli -m ./thomas-yanxin/Qwen2-VL-7B-GGUF/Qwen2-VL-7B-GGUF-Q4_K_M.gguf --mmproj ./thomas-yanxin/Qwen2-VL-7B-GGUF/qwen2vl-vision.gguf -p "Describe the image" --image "./thomas-yanxin/Qwen2-VL-7B-GGUF/1.png"
 ```
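
After this commit the README walks through getting the code, building, and running. The sketch below strings those steps together as they might be reproduced end to end; it is not part of the commit. The repository id `thomas-yanxin/Qwen2-VL-7B-GGUF`, the use of `huggingface-cli download` to fetch the GGUF files, and the `-j8` job count are assumptions; step 2 of the README (the Makefile patch whose context appears in the second hunk) is not shown in this diff, so it is omitted here as well.

```
# Sketch only: reproduces the README steps after this commit; paths, repo id and
# the huggingface-cli usage are assumptions, not part of the committed README.

# 1. Get the code (HimariO's llama.cpp fork with Qwen2-VL support)
git clone https://github.com/HimariO/llama.cpp.git
cd llama.cpp

# 3. Build with CUDA (the README shows these flags under the "Metal Build" heading)
cmake . -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=$(which nvcc) -DTCNN_CUDA_ARCHITECTURES=61
make -j8   # pick a job count that fits your CPU; the README uses -j35

# Fetch the model and vision-projector GGUF files (assumed file names, taken from the README)
huggingface-cli download thomas-yanxin/Qwen2-VL-7B-GGUF \
  Qwen2-VL-7B-GGUF-Q4_K_M.gguf qwen2vl-vision.gguf \
  --local-dir ./thomas-yanxin/Qwen2-VL-7B-GGUF

# 4. Run: describe an image with the Qwen2-VL CLI example
./bin/llama-qwen2vl-cli \
  -m ./thomas-yanxin/Qwen2-VL-7B-GGUF/Qwen2-VL-7B-GGUF-Q4_K_M.gguf \
  --mmproj ./thomas-yanxin/Qwen2-VL-7B-GGUF/qwen2vl-vision.gguf \
  -p "Describe the image" \
  --image "./thomas-yanxin/Qwen2-VL-7B-GGUF/1.png"
```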
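
One inconsistency worth noting: the new step 3 heading says "Metal Build", but the flags shown (`-DGGML_CUDA=ON`, `-DCMAKE_CUDA_COMPILER`) configure a CUDA build. For reference, a Metal build on an Apple-silicon Mac would look roughly like the sketch below, assuming this fork keeps upstream llama.cpp's `GGML_METAL` CMake option (an assumption; it is not shown in this commit).

```
# Sketch only: Metal build, assuming the fork exposes upstream's GGML_METAL option
cmake . -DGGML_METAL=ON
make -j8
```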