---
license: apache-2.0
datasets:
- avemio/GRAG-CPT-HESSIAN-AI
language:
- en
- de
base_model:
- ThomasComics/Phi-3-mini-128k-instruct-LLaMAfied
pipeline_tag: question-answering
tags:
- German
- RAG
- Retrieval
- Question-Answering
- Summarization
- Reasoning
---

<img src="https://www.grag.ai/wp-content/uploads/2024/12/GRAG-ICON-TO-WORDLOGO-Animation_Loop-small-ezgif.com-video-to-gif-converter.gif" alt="GRAG Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# GRAG-PHI-3-mini-4B-CPT-HESSIAN-AI

<!-- Provide a quick summary of what the model is/does. -->

**GRAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025.

Our GRAG-PHI-CPT model is trained on the **[GRAG-CPT](https://huggingface.co/datasets/avemio/GRAG-CPT-HESSIAN-AI) dataset**.

## Model Details

The core models released in this batch are the following:

| Model | Training Tokens |
|------|--------|
| [GRAG-PHI-CPT](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI) | 507.47 million |
| [GRAG-PHI-SFT](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI) | 2.03 billion |
| [GRAG-PHI-ORPO](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-ORPO-HESSIAN-AI) | 2.0577 billion |

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Avemio AI Team
- **Supported by:** Hessian AI
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** German, English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** [grag@avemio.digital](mailto:grag@avemio.digital)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Project Page:**
- **Repositories:**
  - Training:
  - Evaluation code:
- **Technical blog post:**
<!-- - **Press release:** TODO -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Inference

To get inference running quickly, install the `transformers` and `torch` packages, then proceed as usual with Hugging Face:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI"

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a short continuation
inputs = tokenizer("Hello mein Name ist", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
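
On a GPU, the model can also be loaded in half precision to reduce memory use. The snippet below is an illustrative sketch using standard `transformers` arguments (`torch_dtype`, `device_map`); it assumes the `accelerate` package is installed and is not an officially benchmarked configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the weights in bfloat16 and place them on the available GPU(s);
# device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Die Hauptstadt von Hessen ist", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```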

### Fine-tuning

We provide a comprehensive Google Colab notebook that guides you through fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings:
[Colab-Notebook](https://colab.research.google.com/drive/1U6aP3vIkABaCm7doGV1waHgTLvXNGbBp?usp=sharing).
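
If you prefer a plain script over the notebook, the sketch below shows a minimal continued-training loop with the standard Hugging Face `Trainer`. The dataset file, batch size, and other hyperparameters are placeholders for illustration and do not reproduce the notebook's exact configuration.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Some tokenizers ship without a pad token; fall back to EOS for padding
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder dataset: any corpus exposed as a "text" column works the same way
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./grag-phi-finetuned",   # placeholder
        per_device_train_batch_size=1,       # placeholder
        gradient_accumulation_steps=8,       # placeholder
        learning_rate=5e-7,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: labels are a copy of the input ids
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```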

## Model Details

### Data

For training data details, please see the [GRAG-CPT-Dataset](https://huggingface.co/datasets/avemio/GRAG-CPT-HESSIAN-AI) documentation.

#### Description

The SFT tasks represent a focused approach to enhancing model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on HuggingFace (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph where Question-Answer nodes were connected to both relevant and irrelevant Context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training. The only exceptions are the function-calling dataset, which was derived and extended from Salesforce's XLAM function-calling dataset by including function call results and final answer generation, and the reasoning task, whose synthetic generation was inspired by the Tencent paper ["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094) to generate a diverse set of reasoning tasks across various domains.

This comprehensive set of SFT tasks ensures the model develops robust capabilities across a wide range of practical applications while maintaining consistent output formats and clear communication patterns. Each task type has been carefully designed to address specific business needs while maintaining high standards of accuracy and reliability, making them valuable tools for organizations looking to enhance their information processing and knowledge management capabilities.
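
As a rough illustration of the knowledge-graph structure described above, one training example can be thought of as a question-answer node linked to relevant and irrelevant context passages from the same page. The field names and the sample below are hypothetical and do not mirror the dataset's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContextNode:
    """A passage from the same Wikipedia page, marked as relevant or irrelevant."""
    text: str
    relevant: bool

@dataclass
class QANode:
    """A question-answer pair connected to context passages in the training graph."""
    question: str
    answer: str
    contexts: List[ContextNode] = field(default_factory=list)

# Hypothetical example: one QA node with one relevant and one irrelevant context
example = QANode(
    question="In welchem Bundesland liegt Wiesbaden?",
    answer="Wiesbaden liegt in Hessen.",
    contexts=[
        ContextNode("Wiesbaden ist die Landeshauptstadt des Landes Hessen.", relevant=True),
        ContextNode("Die Stadt ist für ihre heißen Quellen bekannt.", relevant=False),
    ],
)
```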

### Architecture

| Parameter | GRAG-PHI-CPT |
|-----------------------|----------------------------------------------|
| **d_model** | 4096 |
| **num heads** | 32 |
| **num layers** | 32 |
| **MLP ratio** | 3.5 |
| **LayerNorm type** | RMSNorm |
| **pos embeddings** | RoPE |
| **attention variant** | Multi-head attention with 32 key-value heads |
| **biases** | none |
| **block type** | Sequential |
| **activation** | SiLU |
| **sequence length** | 131072 |
| **weight dtype** | bfloat16 |
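
These values can be cross-checked against the configuration shipped with the checkpoint. The sketch below simply reads the Hugging Face config and prints the corresponding fields; the attribute names follow the standard Llama-style config (the base model is a LLaMAfied Phi-3) and are an assumption about this checkpoint:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI")

# Llama-style attribute names; assumed to apply to this checkpoint's config class
print("d_model:        ", config.hidden_size)
print("num heads:      ", config.num_attention_heads)
print("num layers:     ", config.num_hidden_layers)
print("KV heads:       ", getattr(config, "num_key_value_heads", None))
print("sequence length:", config.max_position_embeddings)
print("activation:     ", getattr(config, "hidden_act", None))
print("weight dtype:   ", config.torch_dtype)
```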

### Hyperparameters

| Parameter | GRAG-PHI-CPT |
|---------------------------|--------------------|
| **warmup steps** | 50 |
| **peak LR** | 5.0E-07 |
| **weight decay** | 0.1 |
| **LR schedule** | linear |
| **gradient reduce dtype** | FP32 |
| **optimizer state dtype** | FP32 |
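
For orientation, the table above maps onto a Hugging Face `TrainingArguments` setup roughly as shown below. This is an illustrative sketch only; batch size, epoch count, and output directory are placeholders, and the FP32 gradient-reduce/optimizer-state settings of the original run depend on the distributed training stack rather than on these arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./grag-phi-cpt",        # placeholder
    learning_rate=5.0e-7,               # peak LR
    warmup_steps=50,                    # warmup steps
    weight_decay=0.1,                   # weight decay
    lr_scheduler_type="linear",         # LR schedule
    bf16=True,                          # bfloat16 weights
    per_device_train_batch_size=1,      # placeholder
    num_train_epochs=1,                 # placeholder
)
```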

## Environmental Impact

GRAG-PHI-CPT, running on x NVIDIA A100 GPUs for x days, has an approximate power consumption as follows:

It's important to note that the actual power consumption may vary depending on the specific workload and operational conditions. For accurate power consumption measurements, using dedicated power monitoring tools is recommended.

| Model | GPU Type | Power Consumption From GPUs |
|----------------|---------------------|-----------------------------|
| GRAG-PHI-CPT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.xxx MWh |
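
Once the GPU count and training duration are filled in, the GPU energy figure can be approximated with a simple back-of-the-envelope calculation; the 400 W average draw per A100 below is an assumption, not a measurement:

```python
num_gpus = 8             # placeholder: actual GPU count not yet published
training_days = 2        # placeholder: actual duration not yet published
avg_watts_per_gpu = 400  # assumed average draw per A100 (SXM TDP is 400 W)

energy_mwh = num_gpus * training_days * 24 * avg_watts_per_gpu / 1_000_000
print(f"approx. GPU energy: {energy_mwh:.3f} MWh")
```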

## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

In addition, many facts generated by GRAG-PHI-CPT, as by any LLM, will often not be true, so they should be checked.

## Model Card Contact

For errors in this model card, please contact [grag@avemio.digital](mailto:grag@avemio.digital).