Commit f5757aa by wjbmattingly (parent: 5472d34)

    updated project and readme to specify files

Files changed (2):
  1. README.md (+60 −1)
  2. project.yml (+28 −5)
README.md CHANGED
@@ -1,6 +1,65 @@
 ---
 license: mit
 ---
+# 📚 Placing the Holocaust Weasel (spaCy) Project
+
+This is the official spaCy project for the Placing the Holocaust project. It houses our data and our Python scripts for converting the data, serializing it, training four different spaCy models with it, and evaluating those models. It also contains all the metrics from v0.0.1.
+
+For this project, we are using spaCy v3.7.4.
+
+## 📋 project.yml
+
+The [`project.yml`](project.yml) defines the data assets required by the project, as well as the available commands and workflows. For details, see the [Weasel documentation](https://github.com/explosion/weasel).
+
+### ⏯ Commands
+
+The following commands are defined by the project. They can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run). Commands are only re-run if their inputs have changed.
+
+| Command | Description |
+| --- | --- |
+| `download-lg` | Download a large spaCy model with pretrained vectors |
+| `download-md` | Download a medium spaCy model with pretrained vectors |
+| `convert` | Convert the data to spaCy's binary format |
+| `convert-sents` | Convert the data to sentences before converting to spaCy's binary format |
+| `split` | Split data into train/dev/test sets |
+| `create-config-sm` | Create a new config with a spancat pipeline component for small models |
+| `train-sm` | Train the spancat model with a small configuration |
+| `train-md` | Train the spancat model with a medium configuration |
+| `train-lg` | Train the spancat model with a large configuration |
+| `train-trf` | Train the spancat model with a transformer configuration |
+| `evaluate-sm` | Evaluate the small model and export metrics |
+| `evaluate-md` | Evaluate the medium model and export metrics |
+| `evaluate-lg` | Evaluate the large model and export metrics |
+| `build-table` | Build a table from the metrics for README.md |
+| `readme` | Build a table from the metrics for README.md |
+| `package` | Package the trained model as a pip package |
+| `clean` | Remove intermediary directories |
+
+### ⏭ Workflows
+
+The following workflows are defined by the project. They can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run) and will run the specified commands in order. Commands are only re-run if their inputs have changed.
+
+| Workflow | Steps |
+| --- | --- |
+| `all-sm-sents` | `convert-sents` → `split` → `create-config-sm` → `train-sm` → `evaluate-sm` |
+
+### 🗂 Assets
+
+The following assets are defined by the project. They can be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets) in the project directory.
+
+| File | Source | Description |
+| --- | --- | --- |
+| [`assets/train.jsonl`](assets/train.jsonl) | Local | Training data, chunked into sentences. |
+| [`assets/dev.jsonl`](assets/dev.jsonl) | Local | Validation data, chunked into sentences. |
+| [`assets/test.jsonl`](assets/test.jsonl) | Local | Testing data, chunked into sentences. |
+| [`assets/annotated_data.json/`](assets/annotated_data.json/) | Local | All data, including negative examples. |
+| [`assets/annotated_data_spans.jsonl`](assets/annotated_data_spans.jsonl) | Local | Data with examples of span annotations. |
+| [`corpus/train.spacy`](corpus/train.spacy) | Local | Training data in serialized format. |
+| [`corpus/dev.spacy`](corpus/dev.spacy) | Local | Validation data in serialized format. |
+| [`corpus/test.spacy`](corpus/test.spacy) | Local | Testing data in serialized format. |
+| [`gold-training-data/*`](gold-training-data/*) | Local | Original outputs from Prodigy. |
+| [`notebooks/*`](notebooks/*) | Local | Notebooks for testing project features. |
+| [`configs/*`](configs/*) | Local | Config files for training spaCy models. |
 
 # Overall Model Performance
 | Model | Precision | Recall | F-Score |
@@ -48,4 +107,4 @@ license: mit
 | Small | SPATIAL_OBJ | 96 | 90 | 92.9 |
 | Medium | SPATIAL_OBJ | 95.2 | 93.8 | 94.5 |
 | Large | SPATIAL_OBJ | 95.3 | 95.5 | 95.4 |
-| Transformer | SPATIAL_OBJ | 96.3 | 92.8 | 94.5 |
+| Transformer | SPATIAL_OBJ | 96.3 | 92.8 | 94.5 |
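
The `convert` commands above turn JSONL span annotations into spaCy's binary format. As a rough illustration of the input side only, here is a minimal stdlib sketch that pulls labeled spans out of one annotated record, assuming Prodigy's usual character-offset format (`text` plus `spans` entries with `start`, `end`, `label`); the project's actual conversion script lives in `scripts/` and may differ.

```python
import json

def extract_spans(jsonl_line: str) -> list[tuple[str, str]]:
    """Return (label, surface text) pairs from one annotated JSONL record."""
    record = json.loads(jsonl_line)
    return [
        (span["label"], record["text"][span["start"]:span["end"]])
        for span in record.get("spans", [])
    ]

# A made-up record in the assumed Prodigy-style spans format.
sample = json.dumps({
    "text": "They were deported to the camp near the river.",
    "spans": [{"start": 26, "end": 30, "label": "SPATIAL_OBJ"}],
})
print(extract_spans(sample))  # → [('SPATIAL_OBJ', 'camp')]
```

The real pipeline would additionally build `Doc` objects, store the spans in a span group for the spancat component, and serialize them to `corpus/*.spacy` with `DocBin`.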
project.yml CHANGED
@@ -24,11 +24,34 @@ directories: ["assets", "corpus", "configs", "training", "scripts", "packages"]
 # Assets that should be downloaded or available in the directory. We're shipping
 # them with the project, so they won't have to be downloaded.
 assets:
-  - dest: "assets/train.json"
-    description: "Demo training data adapted from the `ner_demo` project"
-  - dest: "assets/dev.json"
-    description: "Demo development data"
-
+  - dest: "assets/train.jsonl"
+    description: "Training data. For this project, they were chunked into sentences."
+  - dest: "assets/dev.jsonl"
+    description: "Validation data. For this project, they were chunked into sentences."
+  - dest: "assets/test.jsonl"
+    description: "Testing data. For this project, they were chunked into sentences."
+
+  - dest: "assets/annotated_data.json/"
+    description: "All data, including those without annotations because they are negative examples."
+
+  - dest: "assets/annotated_data_spans.jsonl"
+    description: "Just the data that contained examples of span annotations."
+
+  - dest: "corpus/train.spacy"
+    description: "Training data in serialized format."
+  - dest: "corpus/dev.spacy"
+    description: "Validation data in serialized format."
+  - dest: "corpus/test.spacy"
+    description: "Testing data in serialized format."
+
+  - dest: "gold-training-data/*"
+    description: "The original outputs from Prodigy, the annotation software used."
+
+  - dest: "notebooks/*"
+    description: "A collection of notebooks for testing different features of the project."
+
+  - dest: "configs/*"
+    description: "A collection of config files used for training the spaCy models."
 # Workflows are sequences of commands (see below) executed in order. You can
 # run them via "spacy project run [workflow]". If a command's inputs/outputs
 # haven't changed, it won't be re-run.
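
One of the commands these assets feed is `split`, which divides the data into train/dev/test sets. A hypothetical stdlib sketch of such a partition over JSONL records, assuming a deterministic shuffled 80/10/10 split (the project's real split script and ratios are not shown in this diff):

```python
import random

def split_records(records, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle records deterministically, then partition into train/dev/test."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(records)
    rng.shuffle(shuffled)
    n_dev = int(len(shuffled) * dev_frac)
    n_test = int(len(shuffled) * test_frac)
    dev = shuffled[:n_dev]
    test = shuffled[n_dev:n_dev + n_test]
    train = shuffled[n_dev + n_test:]
    return train, dev, test

train, dev, test = split_records(range(100))
print(len(train), len(dev), len(test))  # → 80 10 10
```

Every record lands in exactly one partition, so dev and test examples never leak into training.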