Suparious committed
Commit c13738a · verified · 1 Parent(s): 585d2c4

Update README.md

Files changed (1):
  1. README.md +17 -0
README.md CHANGED
@@ -1,5 +1,10 @@
 ---
 library_name: transformers
+license: apache-2.0
+base_model:
+- flammenai/flammen22-mistral-7B
+datasets:
+- flammenai/casual-conversation-DPO
 tags:
 - 4-bit
 - AWQ
@@ -15,6 +20,18 @@ quantized_by: Suparious
 - Model creator: [flammenai](https://huggingface.co/flammenai)
 - Original model: [flammen22C-mistral-7B](https://huggingface.co/flammenai/flammen22C-mistral-7B)
 
+![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png)
+
+## Model Summary
+
+A Mistral 7B LLM built from merging pretrained models and finetuning on [flammenai/casual-conversation-DPO](https://huggingface.co/datasets/flammenai/casual-conversation-DPO).
+Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
+
+### Method
+
+Finetuned using an A100 on Google Colab.
+
+[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
 
 
 ## How to use
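
The Method section added in this commit names the base model, the DPO dataset, and Maxime Labonne's DPO walkthrough, but the diff does not include a training script. Below is a minimal, hypothetical sketch of that kind of DPO fine-tune using Hugging Face TRL, for orientation only: the hyperparameters are illustrative assumptions, not the values used for flammen22C, and DPOTrainer's argument names vary across TRL versions.

```python
# Hypothetical sketch of DPO fine-tuning with Hugging Face TRL, in the spirit of
# the Labonne article linked in the Method section. Hyperparameters are
# illustrative assumptions, not the settings used to train flammen22C.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "flammenai/flammen22-mistral-7B"  # base_model from the YAML front matter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# DPO preference dataset named in the front matter; DPOTrainer expects
# "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("flammenai/casual-conversation-DPO", split="train")

args = DPOConfig(
    output_dir="flammen22C-mistral-7B",
    per_device_train_batch_size=1,   # assumption: sized for a single A100
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    beta=0.1,                        # strength of the DPO preference penalty
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                  # TRL copies the policy as the frozen reference model
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,      # named `tokenizer=` in older TRL releases
)
trainer.train()
```

The AWQ 4-bit quantization advertised in this repository's tags is a separate post-training step applied to the finished flammen22C model, not part of the fine-tune sketched above.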