---
license: other
language:
- en
pipeline_tag: text2text-generation
tags:
- code
---


## Introduction

This model repo mainly provides sample models for each version of [LLamaSharp](https://github.com/SciSharp/LLamaSharp). The models can also be used with llama.cpp or other engines.

Since `llama.cpp` frequently introduces breaking changes, users (of [LLamaSharp](https://github.com/SciSharp/LLamaSharp) and other engines) often spend a lot of time finding a model that runs with their version. This repo aims to make that search easier.

## Models

- [x] LLaMa 7B / 13B
- [ ] Alpaca
- [ ] GPT4All
- [ ] Chinese LLaMA / Alpaca
- [ ] Vigogne (French)
- [ ] Vicuna
- [ ] Koala
- [ ] OpenBuddy 🐶 (Multilingual)
- [ ] Pygmalion 7B / Metharme 7B
- [x] WizardLM (refer to https://huggingface.co/TheBloke/wizardLM-7B-GGML)


We would appreciate any information about the models not yet included (such as links, model sources, etc.).

## Usage

First, choose the branch whose name matches your LLamaSharp backend version. For example, if you are using `LLamaSharp.Backend.Cuda11 v0.3.0`, use the `v0.3.0` branch of this repo.

Then download the model you want and follow the instructions of [LLamaSharp](https://github.com/SciSharp/LLamaSharp) to run it, as sketched below.
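As a rough illustration, here is a minimal C# sketch of loading a downloaded model with LLamaSharp and streaming a completion. The class names (`ModelParams`, `LLamaWeights`, `InteractiveExecutor`) come from recent LLamaSharp releases and the model filename is a placeholder, so treat this as an assumption-laden sketch and consult the documentation for the version that matches your branch.

```csharp
using LLama;
using LLama.Common;

// Path to a model downloaded from this repo (placeholder; adjust to your file).
var modelPath = "llama-7b.ggmlv3.q4_0.bin";

// Basic load parameters; recent LLamaSharp versions use ModelParams.
var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024
};

// Load the weights and create an inference context.
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);

// An interactive executor keeps conversation state between calls.
var executor = new InteractiveExecutor(context);

// Stream tokens from the model for a simple prompt.
var prompt = "Question: What is the capital of France? Answer:";
await foreach (var token in executor.InferAsync(
    prompt,
    new InferenceParams { MaxTokens = 64 }))
{
    Console.Write(token);
}
```

Note that older releases (such as the v0.3.0 era this repo also covers) expose a different API surface, so the branch you pick determines which class names apply.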

## Contributing

Any kind of contribution is welcome! You don't need to upload a model; sharing information also helps a lot. For example, if you know where to download the pth file of `Vicuna`, please tell us via the `community` tab and we'll add it to the list!