Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


gemma-2b-mt-German-to-English - GGUF
- Model creator: https://huggingface.co/Samvardhan777/
- Original model: https://huggingface.co/Samvardhan777/gemma-2b-mt-German-to-English/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2b-mt-German-to-English.Q2_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q2_K.gguf) | Q2_K | 1.08GB |
| [gemma-2b-mt-German-to-English.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [gemma-2b-mt-German-to-English.Q3_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K.gguf) | Q3_K | 1.29GB |
| [gemma-2b-mt-German-to-English.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [gemma-2b-mt-German-to-English.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [gemma-2b-mt-German-to-English.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [gemma-2b-mt-German-to-English.Q4_0.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_0.gguf) | Q4_0 | 1.44GB |
| [gemma-2b-mt-German-to-English.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [gemma-2b-mt-German-to-English.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [gemma-2b-mt-German-to-English.Q4_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_K.gguf) | Q4_K | 1.52GB |
| [gemma-2b-mt-German-to-English.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [gemma-2b-mt-German-to-English.Q4_1.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q4_1.gguf) | Q4_1 | 1.56GB |
| [gemma-2b-mt-German-to-English.Q5_0.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_0.gguf) | Q5_0 | 1.68GB |
| [gemma-2b-mt-German-to-English.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [gemma-2b-mt-German-to-English.Q5_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_K.gguf) | Q5_K | 1.71GB |
| [gemma-2b-mt-German-to-English.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [gemma-2b-mt-German-to-English.Q5_1.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q5_1.gguf) | Q5_1 | 1.79GB |
| [gemma-2b-mt-German-to-English.Q6_K.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q6_K.gguf) | Q6_K | 1.92GB |
| [gemma-2b-mt-German-to-English.Q8_0.gguf](https://huggingface.co/RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf/blob/main/gemma-2b-mt-German-to-English.Q8_0.gguf) | Q8_0 | 2.49GB |
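
Not part of the original card: the sketch below shows one way to fetch a single quant from this repo and run it locally. It assumes the `huggingface_hub` and `llama-cpp-python` packages, picks the Q4_K_M file purely as an example, and uses Gemma's `<start_of_turn>` turn format, which this fine-tune may or may not expect.

```python
# Hedged sketch: download one GGUF quant from this repo and run it with llama-cpp-python.
# Assumptions (not stated in this card): llama-cpp-python is installed with a
# Gemma-capable llama.cpp build, and the model accepts Gemma's <start_of_turn> format.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Samvardhan777_-_gemma-2b-mt-German-to-English-gguf",
    filename="gemma-2b-mt-German-to-English.Q4_K_M.gguf",
)

# Load the quantized model with a modest context window.
llm = Llama(model_path=gguf_path, n_ctx=2048)

prompt = (
    "<start_of_turn>user\n"
    "Translate from German to English: Wie geht es dir heute?<end_of_turn>\n"
    "<start_of_turn>model\n"
)
out = llm(prompt, max_tokens=64, stop=["<end_of_turn>"])
print(out["choices"][0]["text"].strip())
```

Smaller quants (Q2_K, Q3_K_S) trade translation quality for memory; the Q4_K_M and Q5_K_M files are common middle-ground choices.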


Original model description:
---
license: mit
language:
- de
- en
pipeline_tag: translation
tags:
- text-generation-inference
---

# Description

## Gemma 2B German to English v0.1 Alpha [Experimental Release]
This is a German instruction-finetuned version of Google's Gemma 2B model. It is an experiment to see whether Gemma can translate German to English by expanding its vocabulary. While the responses can be rough at times, the model shows a lot of promise for a 2B parameter model.


---
# Model description 🗄️:
Model type: A 2B parameter GPT-like model finetuned on 100,000 samples consisting of an equal proportion of English and German samples.

Language(s): Bilingual. English and German.

License: Google Gemma Terms of Use

Finetuned from model: Samvardhan777/gemma-2b-mt-German-to-English

Training Precision: bfloat16

Training Hardware: Free Google Colab

Dataset: kaitchup/opus-German-to-English

---
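
For comparison with the quantized files above, here is a minimal, hypothetical sketch of loading the original (non-GGUF) checkpoint with `transformers`. The plain-text translation prompt is an assumption, since the original card does not document a prompt template.

```python
# Hedged sketch: run the original Samvardhan777/gemma-2b-mt-German-to-English
# checkpoint with transformers. The prompt wording below is an assumption;
# the original card does not specify an instruction template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Samvardhan777/gemma-2b-mt-German-to-English"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Translate from German to English: Das Wetter ist heute sehr schön."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```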