---

base_model: KBLab/sentence-bert-swedish-cased
model_creator: KBLab
model_name: sentence-bert-swedish-cased
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- gguf
license: apache-2.0
language:
- sv
---


# PierreMesure/sentence-bert-swedish-cased-gguf

This is an F32 GGUF conversion of [KBLab/sentence-bert-swedish-cased](https://huggingface.co/KBLab/sentence-bert-swedish-cased).

I used llama.cpp's conversion script, *convert_hf_to_gguf.py*:

```bash
python convert_hf_to_gguf.py --outtype f32 ./sentence-bert-swedish-cased --outfile ./sentence-bert-swedish-cased.F32.gguf
```
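
The script ships with the llama.cpp repository and expects a local copy of the original model, so the prerequisites look roughly like this (paths are illustrative, not part of the original workflow):

```bash
# Illustrative setup for the conversion above: the script lives in the
# llama.cpp repository and reads a local copy of the Hugging Face model.
git clone https://github.com/ggerganov/llama.cpp
git clone https://huggingface.co/KBLab/sentence-bert-swedish-cased
pip install -r llama.cpp/requirements.txt
```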

## Usage

You can use this file with any tool built on llama.cpp. For example, llama.cpp's bundled embedding program can compute sentence vectors straight from the GGUF; a minimal sketch, assuming a recent build where the binary is named *llama-embedding*:
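
```bash
# Compute an embedding for one Swedish sentence directly with llama.cpp's
# embedding example (older builds name the binary "embedding" instead).
./llama-embedding \
    -m ./sentence-bert-swedish-cased.F32.gguf \
    -p "Det här är en exempelmening."
```

I made this GGUF mainly to import it into Ollama: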

1. Create a *Modelfile*:

    ```Dockerfile
    FROM ./sentence-bert-swedish-cased.F32.gguf
    ```


    Or, fetch this repository and create the *Modelfile* in one go:

    ```bash
    git clone https://huggingface.co/PierreMesure/sentence-bert-swedish-cased-gguf
    cd sentence-bert-swedish-cased-gguf/
    echo 'FROM ./sentence-bert-swedish-cased.F32.gguf' > Modelfile
    ```


2. Import the model with Ollama:

    ```bash
    ollama create sentence-bert-swedish-cased
    ```
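
Once the model is created, Ollama can serve embeddings over its local REST API. A minimal sketch, assuming Ollama is running on its default port (11434):

```bash
# Request an embedding for one Swedish sentence from the local Ollama server.
curl http://localhost:11434/api/embeddings -d '{
  "model": "sentence-bert-swedish-cased",
  "prompt": "Det här är en exempelmening."
}'
```

The response is a JSON object whose `embedding` field holds the sentence vector.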