avemio-digital committed on
Commit
7c27808
verified
1 Parent(s): e14dc7f

Update README.md

Files changed (1)
  1. README.md +24 -24
README.md CHANGED
@@ -1,13 +1,13 @@
  ---
  license: llama3.1
  datasets:
- - avemio/GRAG-CPT-HESSIAN-AI
- - avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
  language:
  - en
  - de
  base_model:
- - avemio/GRAG-LLAMA-3.1-8B-CPT-HESSIAN-AI
  pipeline_tag: question-answering
  tags:
  - German
@@ -19,25 +19,25 @@ tags:
  ---


- <img src="https://www.grag.ai/wp-content/uploads/2024/12/GRAG-ICON-TO-WORDLOGO-Animation_Loop-small-ezgif.com-video-to-gif-converter.gif" alt="GRAG Logo" width="400" style="margin-left: auto; margin-right: auto; display: block;"/>


- # GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI

  <!-- Provide a quick summary of what the model is/does. -->

- **GRAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025.

- Our GRAG-LLAMA-SFT model is trained on the **[GRAG-SFT](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI)** dataset.

  ## Model Details

  The core models released in this batch are the following:
  | Model | Training Tokens |
  |------|--------|
- | [GRAG-LLAMA-CPT](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-CPT-HESSIAN-AI) | 507.47 million |
- | [GRAG-LLAMA-SFT](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI) | 2.03 billion |
- | [GRAG-LLAMA-ORPO](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) | 2.0577 billion |
  ### Model Description

  <!-- Provide a longer summary of what this model is. -->
@@ -47,19 +47,19 @@ The core models released in this batch are the following:
  - **Model type:** a Transformer-style autoregressive language model.
  - **Language(s) (NLP):** German, English
  - **License:** The code and model are released under Apache 2.0.
- - **Contact:** [grag@avemio.digital](mailto:grag@avemio.digital)


  ### Model Sources

  <!-- Provide the basic links for the model. -->

- - **Training Study:** [Training Study](https://avemio.digital/wp-content/uploads/2025/01/GRAG-TRAINING-STUDY-Advancing-German-Language-AI-with-hessian-AI.pdf)
  - **Repositories:**
  - Training: [Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing)
  - Evaluation code:
- - [GRAG-LLM-HARD-BENCHMARK](https://github.com/avemio-digital/GRAG-LLM-HARD-BENCHMARK.git)
- - [GRAG-LLM-EASY-BENCHMARK](https://github.com/avemio-digital/GRAG-LLM-EASY-BENCHMARK.git)
  - **Technical blog post:**
  <!-- - **Press release:** TODO -->

@@ -73,7 +73,7 @@ Now, proceed as usual with HuggingFace:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model_name = "avemio/GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI"

  model = AutoModelForCausalLM.from_pretrained(
  model_name,
@@ -135,7 +135,7 @@ Four evaluation metrics were employed across all subsets: language quality, over
  - **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.


- | Metric | [Vanilla-llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | **[GRAG-LLAMA-SFT](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI)** | [GRAG-LLAMA-ORPO](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) | [GRAG-LLAMA-MERGED] | GPT-3.5-TURBO |
  |------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|-----------------------------|----------------|
  | Average Language Quality | 87.78 | **88.93** | 88.93 | 86.93 | 87.58 |
  | **OVERALL SCORES (weighted):** | | | | | |
@@ -149,7 +149,7 @@ Four evaluation metrics were employed across all subsets: language quality, over
  ## Model Details

  ### Data
- For training data details, please see the [GRAG-SFT-Dataset](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) documentation.

  #### Description
  The SFT tasks represent a focused approach to enhancing model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on HuggingFace (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph where Question-Answer nodes were connected to both relevant and irrelevant Context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training. The only exceptions are the function-calling dataset, which was derived and extended from Salesforce's XLAM function-calling dataset by including function call results and final answer generation, and the reasoning task, whose synthetic generation was inspired by the Tencent paper ["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094) to generate a diverse set of reasoning tasks across various domains.
@@ -164,7 +164,7 @@ The implementation of these tasks within RAG systems can significantly improve o
  ### Architecture


- | Parameter | GRAG-LLAMA-SFT |
  |-----------------------|-----------------------------------------------------------------------------------------------|
  | **d_model** | 3072 |
  | **num heads** | 32 |
@@ -182,7 +182,7 @@ The implementation of these tasks within RAG systems can significantly improve o
  ### Hyperparameters


- | Parameter | GRAG-LLAMA-SFT |
  |---------------------------|--------------------|
  | **warmup steps** | 50 |
  | **peak LR** | 5.0E-07 |
@@ -193,19 +193,19 @@ The implementation of these tasks within RAG systems can significantly improve o

  ## Environmental Impact

- Training GRAG-LLAMA-SFT on 40 NVIDIA A100 GPUs for 7 days resulted in approximately the following power consumption:

  It's important to note that the actual power consumption may vary depending on the specific workload and operational conditions. For accurate power consumption measurements, using dedicated power monitoring tools is recommended.

  | Model | GPU Type | Power Consumption From GPUs |
  |----------------|---------------------|-----------------------------|
- | GRAG-LLAMA-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.02016 MWh |
  ## Bias, Risks, and Limitations

  Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
  Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

- In addition, many statements from GRAG-LLAMA-SFT, as from any LLM, may be inaccurate, so they should be verified.



@@ -213,9 +213,9 @@ In addition, many statements from GRAG-LLAMA-SFT, as from any LLM, may be inaccurate, so
  ## Model Card Contact


- For errors in this model card, please contact [grag@avemio.digital](mailto:grag@avemio.digital).

- ## The GRAG AI Team
  [Marcel Rosiak](https://de.linkedin.com/in/marcel-rosiak)
  [Soumya Paul](https://de.linkedin.com/in/soumya-paul-1636a68a)
  [Siavash Mollaebrahim](https://de.linkedin.com/in/siavash-mollaebrahim-4084b5153?trk=people-guest_people_search-card)
 
  ---
  license: llama3.1
  datasets:
+ - avemio/German_RAG-CPT-HESSIAN-AI
+ - avemio/German_RAG-SFT-ShareGPT-HESSIAN-AI
  language:
  - en
  - de
  base_model:
+ - avemio/German_RAG-LLAMA-3.1-8B-CPT-HESSIAN-AI
  pipeline_tag: question-answering
  tags:
  - German
 
  ---


+ <img src="https://www.German_RAG.ai/wp-content/uploads/2024/12/German_RAG-ICON-TO-WORDLOGO-Animation_Loop-small-ezgif.com-video-to-gif-converter.gif" alt="German_RAG Logo" width="400" style="margin-left: auto; margin-right: auto; display: block;"/>


+ # German_RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI

  <!-- Provide a quick summary of what the model is/does. -->

+ **German_RAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025.

+ Our German_RAG-LLAMA-SFT model is trained on the **[German_RAG-SFT](https://huggingface.co/datasets/avemio/German_RAG-SFT-ShareGPT-HESSIAN-AI)** dataset.

  ## Model Details

  The core models released in this batch are the following:
  | Model | Training Tokens |
  |------|--------|
+ | [German_RAG-LLAMA-CPT](https://huggingface.co/avemio/German_RAG-LLAMA-3.1-8B-CPT-HESSIAN-AI) | 507.47 million |
+ | [German_RAG-LLAMA-SFT](https://huggingface.co/avemio/German_RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI) | 2.03 billion |
+ | [German_RAG-LLAMA-ORPO](https://huggingface.co/avemio/German_RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) | 2.0577 billion |
  ### Model Description

  <!-- Provide a longer summary of what this model is. -->
 
  - **Model type:** a Transformer-style autoregressive language model.
  - **Language(s) (NLP):** German, English
  - **License:** The code and model are released under Apache 2.0.
+ - **Contact:** [German_RAG@avemio.digital](mailto:German_RAG@avemio.digital)


  ### Model Sources

  <!-- Provide the basic links for the model. -->

+ - **Training Study:** [Training Study](https://avemio.digital/wp-content/uploads/2025/01/German_RAG-TRAINING-STUDY-Advancing-German-Language-AI-with-hessian-AI.pdf)
  - **Repositories:**
  - Training: [Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing)
  - Evaluation code:
+ - [German_RAG-LLM-HARD-BENCHMARK](https://github.com/avemio-digital/German_RAG-LLM-HARD-BENCHMARK.git)
+ - [German_RAG-LLM-EASY-BENCHMARK](https://github.com/avemio-digital/German_RAG-LLM-EASY-BENCHMARK.git)
  - **Technical blog post:**
  <!-- - **Press release:** TODO -->

 
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

+ model_name = "avemio/German_RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI"

  model = AutoModelForCausalLM.from_pretrained(
  model_name,
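  torch_dtype="auto",      # assumed, typical loading argument shown for illustration
  device_map="auto",       # assumed; requires accelerate to be installed
  )
  tokenizer = AutoTokenizer.from_pretrained(model_name)

  # Hedged usage sketch: assumes the model ships a chat template, as is standard for Llama-3.1-based chat/SFT models.
  messages = [{"role": "user", "content": "Was ist Retrieval-Augmented Generation?"}]
  inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
  outputs = model.generate(inputs, max_new_tokens=128)
  print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
  ```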
 
  - **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.


+ | Metric | [Vanilla-llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | **[German_RAG-LLAMA-SFT](https://huggingface.co/avemio/German_RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI)** | [German_RAG-LLAMA-ORPO](https://huggingface.co/avemio/German_RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) | [German_RAG-LLAMA-MERGED] | GPT-3.5-TURBO |
  |------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|-----------------------------|----------------|
  | Average Language Quality | 87.78 | **88.93** | 88.93 | 86.93 | 87.58 |
  | **OVERALL SCORES (weighted):** | | | | | |
 
  ## Model Details

  ### Data
+ For training data details, please see the [German_RAG-SFT-Dataset](https://huggingface.co/datasets/avemio/German_RAG-SFT-ShareGPT-HESSIAN-AI) documentation.

  #### Description
  The SFT tasks represent a focused approach to enhancing model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on HuggingFace (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph where Question-Answer nodes were connected to both relevant and irrelevant Context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training. The only exceptions are the function-calling dataset, which was derived and extended from Salesforce's XLAM function-calling dataset by including function call results and final answer generation, and the reasoning task, whose synthetic generation was inspired by the Tencent paper ["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094) to generate a diverse set of reasoning tasks across various domains.
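A small sketch can make the structure described above concrete. The dataset id below is taken from this card, but the split name and the ShareGPT-style field layout shown in the comment are assumptions for illustration, not the dataset's documented schema.

```python
# Illustrative sketch only; split name and field names are assumptions.
from datasets import load_dataset

ds = load_dataset("avemio/German_RAG-SFT-ShareGPT-HESSIAN-AI", split="train")
print(ds[0].keys())

# A ShareGPT-style record pairing a question with relevant and irrelevant contexts
# might look roughly like this (hypothetical layout):
# {
#   "conversations": [
#     {"from": "system", "value": "Kontext 1 (relevant): ... Kontext 2 (irrelevant): ..."},
#     {"from": "human",  "value": "Frage zum Wikipedia-Abschnitt ..."},
#     {"from": "gpt",    "value": "Antwort, die nur die relevanten Kontexte verwendet ..."}
#   ]
# }
```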
 
  ### Architecture


+ | Parameter | German_RAG-LLAMA-SFT |
  |-----------------------|-----------------------------------------------------------------------------------------------|
  | **d_model** | 3072 |
  | **num heads** | 32 |
 
  ### Hyperparameters


+ | Parameter | German_RAG-LLAMA-SFT |
  |---------------------------|--------------------|
  | **warmup steps** | 50 |
  | **peak LR** | 5.0E-07 |
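As a rough illustration, the two values listed above can be mapped onto a standard `transformers` fine-tuning configuration. Only `warmup_steps` and `learning_rate` below come from the table; every other argument is an assumed placeholder rather than the team's actual training setup.

```python
# Sketch only: warmup_steps and learning_rate come from the table above; all other values are assumed placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="german-rag-llama-sft",   # placeholder
    warmup_steps=50,                     # "warmup steps" from the table
    learning_rate=5.0e-7,                # "peak LR" from the table
    lr_scheduler_type="cosine",          # assumed
    per_device_train_batch_size=1,       # assumed
    gradient_accumulation_steps=8,       # assumed
    num_train_epochs=1,                  # assumed
    bf16=True,                           # assumed
)
```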
 

  ## Environmental Impact

+ Training German_RAG-LLAMA-SFT on 40 NVIDIA A100 GPUs for 7 days resulted in approximately the following power consumption:

  It's important to note that the actual power consumption may vary depending on the specific workload and operational conditions. For accurate power consumption measurements, using dedicated power monitoring tools is recommended.

  | Model | GPU Type | Power Consumption From GPUs |
  |----------------|---------------------|-----------------------------|
+ | German_RAG-LLAMA-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.02016 MWh |
  ## Bias, Risks, and Limitations

  Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
  Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

+ In addition, many statements from German_RAG-LLAMA-SFT, as from any LLM, may be inaccurate, so they should be verified.



 
  ## Model Card Contact


+ For errors in this model card, please contact [German_RAG@avemio.digital](mailto:German_RAG@avemio.digital).

+ ## The German_RAG AI Team
  [Marcel Rosiak](https://de.linkedin.com/in/marcel-rosiak)
  [Soumya Paul](https://de.linkedin.com/in/soumya-paul-1636a68a)
  [Siavash Mollaebrahim](https://de.linkedin.com/in/siavash-mollaebrahim-4084b5153?trk=people-guest_people_search-card)