dthulke committed on
Commit 0e4b0fa
1 Parent(s): e5dd0fa

update model card

Files changed (1)
  1. README.md +79 -28
README.md CHANGED
@@ -1,39 +1,90 @@
- # Model Card for climategpt/climategpt-7b-fsg
- - This model is the 7B parameter from-scratch general ("fsg") variant of the ClimateGPT model release.

- ## Overview
- - **Developed by:** AppTek, Eqtylab, Erasmus AI
  - **Model type:** decoder-only Transformer
- - **Language(s) (NLP):** natively supported: English; supported via cascaded MT on web interface: Arabic, Bangla, Chinese (simplified), Dutch, Finno-Ugric, French, Germanic, Greek, Hebrew, Indonesian, Japanese, Korean, Lithuanian, Pashto, Persian, Portuguese, Russian, Spanish, Thai, Turkish, Vietnamese
  - **License:** TO BE ADDED
- - **Repository:** https://huggingface.co/climategpt/climategpt-7b-fsg
- - **Paper:** TO BE ADDED
- - **Demo:** TO BE ADDED

  ## Uses
- - This model is intended to be directly used as a question-answering model specialized in the climate domain.
- - The model is aimed at providing useful feedback for decision makers, scientists and journalists involved in climate discussions.
- - The model can also be used as a starting point for interested developers for further fine-tuning.
- - The model is NOT intended to be a general-purpose chatbot (although it has chat capabilities).
- - For the full system including cascaded MT, RAG, etc., we recommend visiting our demo website: TO BE ADDED.
- - For hands-on fine-tuning, deployment and inference, we recommend using the Hugging Face helpers directly.
- - For in-depth model conversion and fine-tuning, we recommend https://github.com/epfLLM/Megatron-LLM/.
- - **Despite the efforts of the development team to eliminate them, like every other chat-capable LLM, this model may generate biased, offensive or inaccurate responses.**
-
- ## How to Get Started with the Model
- After downloading the HF-formatted model, the HF helpers should work out-of-the-box.
- It is also possible to evaluate the model with https://github.com/EleutherAI/lm-evaluation-harness by plugging in the model identifier: `--model_args pretrained=climategpt/climategpt-7b-fsg`.

  ## Training
- - For pre-training, a 300B-token dataset with an emphasis on the climate domain is prepared and used.
- - For instruction fine-tuning, about 1.1B instruction-fine-tuning tokens (from both the climate and general domains) are used.

  ## Environmental Impact
- - **Hardware Type:** H100
- - **Hours used:** 30,720 hrs
- - **Cloud Provider:** TO BE ADDED
- - **Compute Region:** TO BE ADDED
- - **Carbon Emitted:** TO BE ADDED

  ## Citation
- **BibTeX:** TO BE ADDED
 
+ ---
+ language:
+ - en
+ datasets:
+ - OpenAssistant/oasst1
+ - databricks/databricks-dolly-15k
+ base_model: meta-llama/Llama-2-7b-hf
+ tags:
+ - climate
+ co2_eq_emissions:
+   emissions: 265800
+   training_type: "pre-training"
+   geographical_location: "Washington, USA"
+   hardware_used: "8x NVIDIA H100 HBM"
+ ---
+ # ClimateGPT 7B FSG
+
+ <blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
+ ⚠️ This is a research experiment to explore training from scratch on climate-related data. If you're just interested in using the model, we recommend using the Llama-2-based [ClimateGPT 7B](https://huggingface.co/eci-io/climategpt-7b).
+ </blockquote>
+
+ ClimateGPT is an ensemble of AI models designed to augment human decisions in the fast-moving field of climate change.
+ ClimateGPT 7B FSG (from scratch general) is a 7 billion parameter transformer decoder model that was pre-trained for 319.5B tokens and then continuously pre-trained on a collection of 4.2B tokens from curated climate documents.
+ The model is further instruction fine-tuned on a dataset of instruction-completion pairs manually collected by AppTek in cooperation with climate scientists.
+ [ClimateGPT 7B](https://huggingface.co/eci-io/climategpt-7b) outperforms Llama 2 70B Chat on our climate-specific benchmarks.
+ The model is designed to be used together with retrieval augmentation to extend its knowledge and increase its factuality, and with cascaded machine translation to increase its language coverage.
+
+ <blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
+ A paper describing our approach will be released soon.
+ </blockquote>
+
+ ## Model Details
+ - **Trained by:** [AppTek](https://apptek.com)
+ - **Powered by:** [Erasmus AI](https://erasmus.ai)
+ - **Verified by:** [EQTYLab](https://eqtylab.io)
  - **Model type:** decoder-only Transformer
+ - **Language(s) (NLP):** English
  - **License:** TO BE ADDED
+ - **Continued pre-trained from:** Llama 2 7B
+ - **Context length:** 4K tokens
+ - **Input:** Text-only data
+ - **Output:** Model generates text only
+ - **Paper:** The paper will be released soon.
+ - **Website:** [eci.io](https://eci.io)
 
  ## Uses
+ - This is an experimental model and it is only intended to be used to reproduce our results and for LLM research. For any other use case, we recommend using [ClimateGPT 7B](https://huggingface.co/eci-io/climategpt-7b), [13B](https://huggingface.co/eci-io/climategpt-13b) or [70B](https://huggingface.co/eci-io/climategpt-70b).
+ - **Despite the efforts of the development team to eliminate them, like every other chat-capable LLM, this model may generate biased, offensive or inaccurate responses.**
+
+ ## Downstream Use
+
+ ClimateGPT 7B FSG is an instruction-tuned model that can be directly used for climate-specific question-answering applications.
+ It was trained to perform well with retrieval augmentation and supports up to 5 references in context.
+
+ The model was trained using ChatML, so the following format should be followed when prompting, including the `<|im_start|>`, `<|im_end|>` tags, `system`, `user`, `context` and `assistant` identifiers and `[[0]]`, `[[1]]`, etc. tokens to indicate references.
+
+ ```
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>context
+ [[0]] "{reference1_title}", {reference1_year}
+ {reference1_text}
+ [[1]] "{reference2_title}", {reference2_year}
+ {reference2_text}
+ [...]<|im_end|>
+ <|im_start|>assistant
+ ```
 
  ## Training
+ - Details on the pre-training data are given in our paper.
+ - For continued pre-training, 4.2B climate-domain tokens (tokenized by the Llama tokenizer) are used.
+ - For instruction fine-tuning, about 272K instruction-completion pairs (from both the climate and general domains) are used.
+
+ ## Evaluation
+
+ Detailed evaluation results are presented on our model card website: [eci.io/model-card](https://eci.io/model-card)
 
  ## Environmental Impact
+ - **Hardware Type:** 8x NVIDIA H100 HBM
+ - **Power Consumption per GPU:** 775 W
+ - **Hours used:** 14,288 hrs
+ - **Cloud Provider:** MLFoundry
+ - **Compute Region:** Washington, USA
+ - **Energy Mix:** 100% Hydro Power (24 g CO2eq/kWh according to IPCC 2014)
+ - **Carbon Emitted:** 265.8 kg CO2eq
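+
+ As a rough consistency check, assuming the reported hours are total GPU-hours across the 8 GPUs: 14,288 h × 0.775 kW ≈ 11,073 kWh, and 11,073 kWh × 24 g CO2eq/kWh ≈ 265.8 kg CO2eq, which matches the figure above.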

  ## Citation
+ **BibTeX:** The paper will be released soon.