derek-thomas (HF staff) committed
Commit 7d28d38
1 parent: 272ccb0

Updating to use target blank
assets/prompt-order-experiment.svg CHANGED
mermaid.md ADDED
@@ -0,0 +1,69 @@
+ ```mermaid
+ graph TD
+ style NB0 fill:#333,stroke:#FF9D00,color:#FFD21E
+ style NB1 fill:#333,stroke:#FF9D00,color:#FFD21E
+ style NB2 fill:#333,stroke:#FF9D00,color:#FFD21E
+ style NB3 fill:#333,stroke:#FF9D00,color:#FFD21E
+ style NB4 fill:#333,stroke:#FF9D00,color:#FFD21E
+ style D fill:#333,stroke:#FF9D00,color:#FFD21E
+ style G fill:#333,stroke:#FF9D00,color:#FFD21E
+ style A fill:#333,stroke:#FF9D00,color:#FFD21E
+ style B fill:#333,stroke:#FF9D00,color:#FFD21E
+ style C fill:#333,stroke:#FF9D00,color:#FFD21E
+ style E fill:#333,stroke:#FF9D00,color:#FFD21E
+ style F fill:#333,stroke:#FF9D00,color:#FFD21E
+
+ subgraph Notebooks
+ NB0[00-poe-generate-mistral-reasoning.ipynb]
+ NB1[01-poe-dataset-creation.ipynb]
+ NB2[02-autotrain.ipynb]
+ NB3[03-poe-token-count-exploration.ipynb]
+ NB4[04-poe-eval.ipynb]
+ end
+
+ subgraph Models
+ D[Fine-Tuned MODELS]
+ G[BASE_MODEL: mistralai/Mistral-7B-Instruct-v0.3]
+ end
+
+ subgraph Datasets
+ A[(layoric/labeled-multiple-choice-explained)]
+ B[(derek-thomas/labeled-multiple-choice-explained-mistral-reasoning)]
+ C[(derek-thomas/labeled-multiple-choice-explained-mistral-tokenized)]
+ E[Deployment Config]
+ F[(derek-thomas/labeled-multiple-choice-explained-mistral-results)]
+ end
+
+ A --> NB0
+ G --> NB0
+ NB0 --> B
+ NB0 ==> NB1
+
+ B --> NB1
+ NB1 --> C
+ NB1 ==> NB2
+
+ C --> NB2
+ NB2 --> D
+ NB2 ==> NB3
+
+ C --> NB3
+ NB3 --> E
+ NB3 ==> NB4
+
+ C --> NB4
+ D --> NB4
+ G --> NB4
+ NB4 --> F
+
+ click NB0 href "https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/00-poe-generate-mistral-reasoning.ipynb"
+ click NB1 href "https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/01-poe-dataset-creation.ipynb"
+ click NB2 href "https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/02-autotrain.ipynb"
+ click NB3 href "https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/03-poe-token-count-exploration.ipynb"
+ click NB4 href "https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/04-poe-eval.ipynb"
+ click G href "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3"
+ click A href "https://huggingface.co/datasets/layoric/labeled-multiple-choice-explained"
+ click B href "https://huggingface.co/datasets/derek-thomas/labeled-multiple-choice-explained-mistral-reasoning"
+ click C href "https://huggingface.co/datasets/derek-thomas/labeled-multiple-choice-explained-mistral-tokenized"
+ click F href "https://huggingface.co/datasets/derek-thomas/labeled-multiple-choice-explained-mistral-results"
+ ```
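Since this commit's goal is links that open in a new tab, the Mermaid `click` directives above could be given the same behavior: Mermaid's flowchart syntax accepts a link target token after the URL. This is a sketch based on Mermaid's documented `click … href` syntax, not a change the commit actually makes:

```mermaid
graph TD
    G[BASE_MODEL: mistralai/Mistral-7B-Instruct-v0.3]
    %% Trailing _blank sets the link target, so the node opens in a new tab
    click G href "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3" _blank
```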
prompt_order_exeriment/pages/overview.py CHANGED
@@ -3,13 +3,13 @@ import reflex as rx
  p2 = '''
  # Steps
  ### Dataset Selection
- We begin with the [layoric/labeled-multiple-choice-explained](https://huggingface.co/datasets/layoric/labeled-multiple-choice-explained) dataset, which includes reasoning provided by GPT-3.5-turbo. reasoning explanations serve as a starting point but may differ from Mistral's reasoning style.
+ We begin with the <a href="https://huggingface.co/datasets/layoric/labeled-multiple-choice-explained" target="_blank">layoric/labeled-multiple-choice-explained</a> dataset, which includes reasoning provided by GPT-3.5-turbo. reasoning explanations serve as a starting point but may differ from Mistral's reasoning style.
 
- 0. *[00-poe-generate-mistral-reasoning.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/00-poe-generate-mistral-reasoning.ipynb)*: To align with Mistral, we need to create a refined dataset: [derek-thomas/labeled-multiple-choice-explained-mistral-reasoning](https://huggingface.co/datasets/derek-thomas/labeled-multiple-choice-explained-mistral-reasoning).
- 1. *[01-poe-dataset-creation.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/01-poe-dataset-creation.ipynb)*: Then we need to create our prompt experiments.
- 2. *[02-autotrain.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/02-autotrain.ipynb)*: We generate autotrain jobs on spaces to train our models.
- 3. *[03-poe-token-count-exploration.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/03-poe-token-count-exploration.ipynb)*: We do some quick analysis so we can optimize our TGI settings.
- 4. *[04-poe-eval.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/04-poe-eval.ipynb)*: We finally evaluate our trained models.
+ 0. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/00-poe-generate-mistral-reasoning.ipynb" target="_blank">00-poe-generate-mistral-reasoning.ipynb</a></i>: To align with Mistral, we need to create a refined dataset: <a href="https://huggingface.co/datasets/derek-thomas/labeled-multiple-choice-explained-mistral-reasoning" target="_blank">derek-thomas/labeled-multiple-choice-explained-mistral-reasoning</a>.
+ 1. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/01-poe-dataset-creation.ipynb" target="_blank">01-poe-dataset-creation.ipynb</a></i>: Then we need to create our prompt experiments.
+ 2. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/02-autotrain.ipynb" target="_blank">02-autotrain.ipynb</a></i>: We generate autotrain jobs on spaces to train our models.
+ 3. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/03-poe-token-count-exploration.ipynb" target="_blank">03-poe-token-count-exploration.ipynb</a></i>: We do some quick analysis so we can optimize our TGI settings.
+ 4. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/04-poe-eval.ipynb" target="_blank">04-poe-eval.ipynb</a></i>: We finally evaluate our trained models.
 
  **The flowchart is _Clickable_**
  '''
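The diff above rewrites each markdown link as a raw HTML anchor so it opens in a new tab, since plain markdown links like `[label](url)` render without `target="_blank"`. A minimal sketch of that transformation in Python; `blank_link` and `blank_italic_link` are hypothetical helpers, not functions in this repo, and this assumes the page's markdown renderer passes raw HTML through:

```python
def blank_link(label: str, url: str) -> str:
    """Render a link as a raw HTML anchor that opens in a new tab.

    Replaces the markdown form [label](url), which renders without
    target="_blank", by emitting the anchor tag directly.
    """
    return f'<a href="{url}" target="_blank">{label}</a>'


def blank_italic_link(label: str, url: str) -> str:
    """Same, wrapped in <i> to replace the italic *[label](url)* form."""
    return f"<i>{blank_link(label, url)}</i>"


# Example: the notebook link from step 0 of the overview page.
nb0 = blank_italic_link(
    "00-poe-generate-mistral-reasoning.ipynb",
    "https://huggingface.co/derek-thomas/prompt-order-experiment/"
    "blob/main/00-poe-generate-mistral-reasoning.ipynb",
)
```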