Update README.md
--- a/README.md
+++ b/README.md
@@ -44,9 +44,9 @@ git checkout cuda-integration
 rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release
 ```
 
-Note
+Note from developer cmp-nct for Windows users: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.'
 
-
+Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example:
 ```
 bin/falcon_main -t 8 -ngl 100 -m /workspace/wizard-falcon40b.ggmlv3.q3_K_S.bin -p "What is a falcon?\n### Response:"
 ```
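The `falcon_main` invocation in the diff packs several flags into one line. As a rough sketch of what each flag does (meanings assumed to follow llama.cpp conventions, and the model path is purely illustrative), the command can be decomposed like this:

```shell
#!/bin/sh
# Hypothetical breakdown of the falcon_main invocation from the README diff.
# Flag meanings are assumed from llama.cpp conventions:
#   -t    number of CPU threads to use
#   -ngl  number of model layers to offload to the GPU (100 = effectively all)
#   -m    path to the quantized GGML model file
#   -p    prompt text passed to the model
MODEL=/workspace/wizard-falcon40b.ggmlv3.q3_K_S.bin   # illustrative path from the diff
CMD="bin/falcon_main -t 8 -ngl 100 -m $MODEL -p \"What is a falcon?\n### Response:\""
# Print the assembled command rather than running it, since executing
# requires a compiled binary and the model file to be present.
printf '%s\n' "$CMD"
```

Lowering `-ngl` trades GPU memory for speed: layers that are not offloaded run on the CPU instead.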