Upload README.md with huggingface_hub
README.md
CHANGED
@@ -1,32 +1,76 @@
Removed:

- ---
- language:
- - sq
- license: bigscience-openrail-m
- library_name: adapter-transformers
- metrics:
- - accuracy
- ---
-
- # Bleta-8B
-
- klei@trydialogo.com
- Tirana, Albania
Added:

---
license: creativeml-openrail-m
language:
- sq
---
# Bleta-8B Model

**License:** [CreativeML OpenRAIL-M](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

## Overview

Bleta-8B is an 8-billion-parameter language model for Albanian (`sq`), designed to generate coherent and contextually relevant text for applications such as dialogue generation and content creation.
## Model Details

- **Model Name:** Bleta-8B
- **Model Size:** 8 billion parameters
- **Format:** GGUF
- **License:** CreativeML OpenRAIL-M

## Files and Versions

- `adapter_config.json`: Configuration file for the model adapter.
- `adapter_model.safetensors`: The adapter weights in safetensors format.
- `config.json`: Basic model configuration file.
- `special_tokens_map.json`: Mapping of special tokens.
- `tokenizer.json`: Tokenizer configuration.
- `tokenizer_config.json`: Detailed tokenizer settings.
- `Q8_0.gguf`: The quantized model file in GGUF format.

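If you want a local copy of all of the files above (the adapter, tokenizer, and GGUF files), one option is the `huggingface_hub` CLI. This is only a sketch: it assumes `huggingface_hub` is available via `pip` and uses the `klei1/bleta-8b` repository id that appears in the download URL further below.

```bash
# Install the Hugging Face Hub CLI and download the whole klei1/bleta-8b
# repository into a local "bleta-8b" folder.
pip install -U "huggingface_hub[cli]"
huggingface-cli download klei1/bleta-8b --local-dir bleta-8b
```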
## Getting Started

### Prerequisites

- Ensure you have the necessary dependencies installed, such as `cmake`, `make`, and a C++ compiler like `g++`.
- Clone and build the `llama.cpp` repository:

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
make
```

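Note that the location and name of the compiled executable depend on the `llama.cpp` version: a CMake build typically places binaries under `build/bin/`, and recent releases rename `main` to `llama-cli`. As a quick sanity check (a sketch; adjust paths to your checkout):

```bash
# Run from the llama.cpp/build directory after `make` completes.
# Older releases build `main`; newer ones build `llama-cli` instead.
ls bin/main bin/llama-cli main llama-cli 2>/dev/null
```

If the binary sits under `bin/`, substitute `./bin/main` (or `./bin/llama-cli`) for `./main` in the commands below.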
### Download the Model

Download the model file from the repository:

```bash
wget https://huggingface.co/klei1/bleta-8b/resolve/main/unsloth.Q8_0.gguf
```

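Alternatively, if the `huggingface_hub` CLI mentioned under Files and Versions is installed, the same file can be fetched with it instead of `wget`:

```bash
# Download only the GGUF file into the current directory.
huggingface-cli download klei1/bleta-8b unsloth.Q8_0.gguf --local-dir .
```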
### Running the Model

Use the `llama.cpp` executable to run the model with your desired prompt:

```bash
./main -m unsloth.Q8_0.gguf -p "Your prompt here"
```

Replace `"Your prompt here"` with the text you want to process with the model.

#### Example Command

```bash
./main -m unsloth.Q8_0.gguf -p "Hello, world!"
```

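For longer generations, it usually helps to set a few common `llama.cpp` options explicitly: `-c` for the context size, `-n` for the maximum number of tokens to generate, and `--temp` for the sampling temperature. The Albanian prompt below is purely illustrative, and exact flag names can differ slightly between `llama.cpp` versions:

```bash
# Generate up to 256 tokens with a 2048-token context and a moderate temperature.
./main -m unsloth.Q8_0.gguf \
  -p "Përshëndetje! Më trego diçka për Tiranën." \
  -c 2048 -n 256 --temp 0.7
```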
## License

This model is licensed under the CreativeML OpenRAIL-M license.