mrq committed · Commit 8d8d957 · 1 Parent(s): c2be705

README.md CHANGED
@@ -6,6 +6,12 @@ This repo catalogs my weights for use with my [VALL-E](https://github.com/e-c-k-

The model currently is in a *usable* state under `ar+nar-llama-8` (the default model that's downloaded).

+## Branches
+
+The [GGUF](https://huggingface.co/ecker/vall-e/tree/gguf) branch contains [`vall_e.cpp`](https://github.com/e-c-k-e-r/vall-e/commits/master/vall_e.cpp/) weights.
+
+The [HF](https://huggingface.co/ecker/vall-e/tree/gguf) branch contains an HF-ified version of the model weights.
+
## Models

This repo contains the following configurations under `./models/`:

@@ -138,4 +144,4 @@ Using a LoRA is the same as a base model, except you're required to have the bas

The only caveat is that my original dataset *does* contain (most of) these samples already, but given the sheer size of it, they're probably underutilized.
* However, the base model already has *almost adequate* output from these speakers, but not enough to be satisfactory.

-LoRAs under `ckpt[ar+nar-old-llama-8]` are LoRAs married to an older checkpoint, while `ckpt` *should* work under the reference model.
+LoRAs under `ckpt[ar+nar-old-llama-8]` are LoRAs married to an older checkpoint, while `ckpt` *should* work under the reference model.
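As a minimal sketch of using the branch layout described in the added `## Branches` section: a specific branch of this repo can be fetched with the `huggingface_hub` Python package (assumed to be installed) by passing the branch name as `revision`. The `gguf` branch name below is taken from the links above; the exact file layout on each branch isn't shown here.

```python
# Minimal sketch: download this repo's weights from a specific branch.
# Assumes the `huggingface_hub` package is installed; the branch name
# ("gguf" here, per the Branches section above) is passed as `revision`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ecker/vall-e",  # this weights repo
    revision="gguf",         # branch holding the vall_e.cpp (GGUF) weights
)
print(f"Downloaded to: {local_dir}")
```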