---
license: other
language:
- en
pipeline_tag: text2text-generation
tags:
- code
---

## Introduction

This is a model repo mainly for [LLamaSharp](https://github.com/SciSharp/LLamaSharp), providing sample models for each version. The models can also be used with llama.cpp or other engines.

Since `llama.cpp` frequently introduces breaking changes, it can take users (of [LLamaSharp](https://github.com/SciSharp/LLamaSharp) and other projects) a long time to find a model that runs with their version. This repo aims to make that easier.

## Uploaded models

- [x] LLaMa 7B / 13B
- [ ] Alpaca
- [ ] GPT4All
- [ ] Chinese LLaMA / Alpaca
- [ ] Vigogne (French)
- [ ] Vicuna
- [ ] Koala
- [ ] OpenBuddy 🐶 (Multilingual)
- [ ] Pygmalion 7B / Metharme 7B
- [ ] WizardLM

We would appreciate any information about the models that have not yet been uploaded (such as download links or model sources).

## Usage

First, choose the branch whose name matches your LLamaSharp backend version. For example, if you're using `LLamaSharp.Backend.Cuda11 v0.3.0`, use the `v0.3.0` branch of this repo.

Then download a model you like and follow the instructions of [LLamaSharp](https://github.com/SciSharp/LLamaSharp) to run it.
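
As a rough sketch of the branch-selection step above: the snippet below derives the branch name from an installed backend package string and prints the matching clone command. The backend string and the repo URL placeholder are illustrative, not fixed values — substitute your own.

```shell
# Sketch only: the backend string below is an example; replace it with
# the LLamaSharp backend package and version you actually installed.
backend="LLamaSharp.Backend.Cuda11 v0.3.0"
branch="${backend##* }"   # keep the version suffix, e.g. "v0.3.0"

# Clone only the matching branch so you don't fetch every model version.
# "<this-repo-url>" is a placeholder for this repo's clone URL.
echo git clone --single-branch --branch "${branch}" "<this-repo-url>"
```

The command is echoed rather than executed so you can inspect it before cloning.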

## Contributing

Any kind of contribution is welcome! You don't have to upload a model; providing information also helps a lot. For example, if you know where to download the pth file of `Vicuna`, please tell us via the `community` tab and we'll add it to the list!