DeepMount00 committed
Commit 0150041 · verified · 1 Parent(s): dc38c29

Update README.md

Files changed (1)
  1. README.md +28 -1
README.md CHANGED
@@ -6,8 +6,35 @@ datasets:
 language:
 - it
 ---
+
+# Mistral-RAG
+
+## Model Details
+- **Model Name:** Mistral-RAG
+- **Base Model:** Mistral-Ita-7b
+- **Specialization:** Question and Answer Tasks
+
+## Overview
+Mistral-RAG is a fine-tuned version of the Mistral-Ita-7b model, built specifically for question answering tasks. It offers a dual-response capability, with both a generative and an extractive mode, to cover a wide range of informational needs.
+
+## Capabilities
+
+### Generative Mode
+- **Description:** The generative mode is designed for scenarios that require complex, synthesized responses. It integrates information from multiple sources and provides expanded explanations.
+- **Ideal Use Cases:**
+  - Educational purposes
+  - Advisory services
+  - Creative scenarios where depth and detailed understanding are crucial
+
+### Extractive Mode
+- **Description:** The extractive mode focuses on speed and precision, delivering direct, concise answers by extracting the relevant passages from the source text.
+- **Ideal Use Cases:**
+  - Factual queries in research
+  - Legal contexts
+  - Professional environments where accuracy and direct evidence are necessary
+
+
 ## How to Use
-How to utilize my Mistral for Italian text generation
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
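The hunk ends just as the README's own usage example begins. For orientation, below is a minimal sketch (not part of the commit) of how the model might be loaded and queried with transformers. The repository id `DeepMount00/Mistral-RAG`, the dtype and device settings, and the context-plus-question prompt layout are assumptions; the diff also does not show how the generative versus extractive response style is selected, so the README's full example remains the authoritative reference.

```python
# Minimal usage sketch -- NOT taken from the commit. The repo id, generation
# settings, and prompt layout below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "DeepMount00/Mistral-RAG"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.bfloat16,  # assumed; adjust to your hardware
    device_map="auto",
)

# Hypothetical context-plus-question prompt; the README's own example
# (continued beyond this hunk) defines the format the model actually expects.
context = "La Torre Eiffel è stata completata nel 1889 per l'Esposizione Universale di Parigi."
question = "Quando è stata completata la Torre Eiffel?"
prompt = f"Contesto: {context}\nDomanda: {question}\nRisposta:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens (the answer).
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```

Greedy decoding (`do_sample=False`) is used here as a conservative default that suits short, extractive-style answers; sampling parameters would likely be more appropriate for the generative mode described above.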