khalidrajan committed on
Commit
aca3c9b
1 Parent(s): d7f2b22

End of training

README.md ADDED
@@ -0,0 +1,87 @@
+ ---
+ license: apache-2.0
+ base_model: google-bert/bert-base-cased
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: bert-base-cased_legal_ner_finetuned
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-base-cased_legal_ner_finetuned
+
+ This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3018
+ - Law Precision: 0.7364
+ - Law Recall: 0.8261
+ - Law F1: 0.7787
+ - Law Number: 115
+ - Violated by Precision: 0.8525
+ - Violated by Recall: 0.6933
+ - Violated by F1: 0.7647
+ - Violated by Number: 75
+ - Violated on Precision: 0.4688
+ - Violated on Recall: 0.4286
+ - Violated on F1: 0.4478
+ - Violated on Number: 70
+ - Violation Precision: 0.6323
+ - Violation Recall: 0.7251
+ - Violation F1: 0.6755
+ - Violation Number: 491
+ - Overall Precision: 0.6524
+ - Overall Recall: 0.7097
+ - Overall F1: 0.6798
+ - Overall Accuracy: 0.9439
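As a sanity check on the metrics above, each reported F1 should be the harmonic mean of the corresponding precision and recall. A minimal sketch verifying this for the overall numbers (the Trainer computes F1 from raw counts, so agreement here is to the reported four decimal places):

```python
# Reported overall metrics from the evaluation set above.
precision = 0.6524
recall = 0.7097

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.6798, matching the reported Overall F1
```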
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 10
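The `linear` scheduler with 500 warmup steps ramps the learning rate from 0 up to 2e-05 over the first 500 optimizer steps, then decays it linearly to 0 at the final step (850 here, per the results table). A minimal sketch of that schedule, assuming the standard linear-with-warmup formula:

```python
def lr_at_step(step, base_lr=2e-05, warmup_steps=500, total_steps=850):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # After warmup: decay linearly from base_lr down to 0.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(lr_at_step(250))  # halfway through warmup: 1e-05
print(lr_at_step(500))  # peak learning rate: 2e-05
print(lr_at_step(850))  # end of training: 0.0
```

Note that with 850 total steps, warmup occupies the first ~6 of the 10 epochs' worth of peak-rate budget; shortening warmup is a common tweak when training is this brief.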
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Law Precision | Law Recall | Law F1 | Law Number | Violated by Precision | Violated by Recall | Violated by F1 | Violated by Number | Violated on Precision | Violated on Recall | Violated on F1 | Violated on Number | Violation Precision | Violation Recall | Violation F1 | Violation Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:-------------:|:----------:|:------:|:----------:|:---------------------:|:------------------:|:--------------:|:------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
+ | No log | 1.0 | 85 | 0.8046 | 0.0 | 0.0 | 0.0 | 115 | 0.0 | 0.0 | 0.0 | 75 | 0.0 | 0.0 | 0.0 | 70 | 0.0 | 0.0 | 0.0 | 491 | 0.0 | 0.0 | 0.0 | 0.7619 |
+ | No log | 2.0 | 170 | 0.4050 | 0.0 | 0.0 | 0.0 | 115 | 0.0 | 0.0 | 0.0 | 75 | 0.0 | 0.0 | 0.0 | 70 | 0.1835 | 0.2037 | 0.1931 | 491 | 0.1835 | 0.1332 | 0.1543 | 0.8819 |
+ | No log | 3.0 | 255 | 0.2861 | 0.6111 | 0.4783 | 0.5366 | 115 | 0.1818 | 0.0533 | 0.0825 | 75 | 0.4 | 0.0571 | 0.1000 | 70 | 0.4345 | 0.5540 | 0.4870 | 491 | 0.4479 | 0.4461 | 0.4470 | 0.9130 |
+ | No log | 4.0 | 340 | 0.2552 | 0.75 | 0.7043 | 0.7265 | 115 | 0.5625 | 0.36 | 0.4390 | 75 | 0.3429 | 0.1714 | 0.2286 | 70 | 0.4924 | 0.5927 | 0.5379 | 491 | 0.5256 | 0.5473 | 0.5362 | 0.9257 |
+ | No log | 5.0 | 425 | 0.2676 | 0.7154 | 0.7652 | 0.7395 | 115 | 0.7308 | 0.5067 | 0.5984 | 75 | 0.2778 | 0.1429 | 0.1887 | 70 | 0.5368 | 0.6090 | 0.5706 | 491 | 0.5664 | 0.5792 | 0.5727 | 0.9300 |
+ | 0.4786 | 6.0 | 510 | 0.2663 | 0.6767 | 0.7826 | 0.7258 | 115 | 0.7903 | 0.6533 | 0.7153 | 75 | 0.3684 | 0.4 | 0.3836 | 70 | 0.6155 | 0.7271 | 0.6667 | 491 | 0.6157 | 0.6977 | 0.6542 | 0.9366 |
+ | 0.4786 | 7.0 | 595 | 0.2352 | 0.6957 | 0.8348 | 0.7589 | 115 | 0.7941 | 0.72 | 0.7552 | 75 | 0.4242 | 0.4 | 0.4118 | 70 | 0.5799 | 0.7169 | 0.6412 | 491 | 0.6030 | 0.7057 | 0.6503 | 0.9412 |
+ | 0.4786 | 8.0 | 680 | 0.2728 | 0.6835 | 0.8261 | 0.7480 | 115 | 0.7857 | 0.7333 | 0.7586 | 75 | 0.3596 | 0.4571 | 0.4025 | 70 | 0.5916 | 0.7434 | 0.6588 | 491 | 0.5978 | 0.7284 | 0.6567 | 0.9415 |
+ | 0.4786 | 9.0 | 765 | 0.2952 | 0.7385 | 0.8348 | 0.7837 | 115 | 0.8088 | 0.7333 | 0.7692 | 75 | 0.5 | 0.5 | 0.5 | 70 | 0.6246 | 0.7352 | 0.6754 | 491 | 0.6466 | 0.7284 | 0.6850 | 0.9433 |
+ | 0.4786 | 10.0 | 850 | 0.3018 | 0.7364 | 0.8261 | 0.7787 | 115 | 0.8525 | 0.6933 | 0.7647 | 75 | 0.4688 | 0.4286 | 0.4478 | 70 | 0.6323 | 0.7251 | 0.6755 | 491 | 0.6524 | 0.7097 | 0.6798 | 0.9439 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.0
+ - Pytorch 2.4.0
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
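The step counts in the results table also pin down the (unreported) training-set size: with `train_batch_size: 8` and 85 optimizer steps per epoch, the split must hold between 673 and 680 examples. A quick arithmetic check, assuming no gradient accumulation:

```python
import math

train_batch_size = 8
steps_per_epoch = 85  # from the table: step 85 at epoch 1.0

# ceil(n / 8) == 85 holds exactly for n in [8*84 + 1, 8*85] = [673, 680].
lo = train_batch_size * (steps_per_epoch - 1) + 1
hi = train_batch_size * steps_per_epoch
print(lo, hi)  # 673 680
assert all(math.ceil(n / train_batch_size) == steps_per_epoch
           for n in range(lo, hi + 1))
```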
config.json ADDED
@@ -0,0 +1,48 @@
+ {
+   "_name_or_path": "google-bert/bert-base-cased",
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "O",
+     "1": "I-VIOLATED ON",
+     "2": "B-VIOLATED BY",
+     "3": "B-LAW",
+     "4": "I-LAW",
+     "5": "I-VIOLATED BY",
+     "6": "B-VIOLATED ON",
+     "7": "B-VIOLATION",
+     "8": "I-VIOLATION"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "B-LAW": 3,
+     "B-VIOLATED BY": 2,
+     "B-VIOLATED ON": 6,
+     "B-VIOLATION": 7,
+     "I-LAW": 4,
+     "I-VIOLATED BY": 5,
+     "I-VIOLATED ON": 1,
+     "I-VIOLATION": 8,
+     "O": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 28996
+ }
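The `id2label` map above is a BIO scheme over four entity types (LAW, VIOLATED BY, VIOLATED ON, VIOLATION): `B-` marks the first token of an entity, `I-` a continuation, and `O` a non-entity token. A minimal sketch (a hypothetical helper, not part of this repo) of turning a per-token tag sequence in this scheme into entity spans:

```python
def bio_to_spans(tags):
    """Group BIO tags into (label, start, end) spans; end is exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label):
            # A new entity starts here; close any span still open.
            if label is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag == "O" and label is not None:
            spans.append((label, start, i))
            start, label = None, None
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans

tags = ["O", "B-LAW", "I-LAW", "O", "B-VIOLATED BY", "I-VIOLATED BY"]
print(bio_to_spans(tags))  # [('LAW', 1, 3), ('VIOLATED BY', 4, 6)]
```

A stray `I-` tag with no preceding `B-` is treated here as opening a new span, which is a common lenient repair; stricter IOB2 decoding would discard it.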
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15e915b0f428897e6376fb8b4fbed693d9db18c18e641e843471b87616c60bf5
+ size 430929740
runs/Sep06_16-03-36_Khalids-MBP/events.out.tfevents.1725653017.Khalids-MBP.23384.12 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ece16b3af83647d145789356beca2890a7a14d02f8bc9615d71e2e190efcd729
+ size 20487
runs/Sep06_16-03-36_Khalids-MBP/events.out.tfevents.1725653849.Khalids-MBP.23384.13 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bf769c4e01cb9791d0168f188188f983aa0a386645889097aa4cfa54ab62466
+ size 1540
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:460ef6e05098d68f53e185eff64aaff878ec9a7d159b8369bb8eac17ce0658dd
+ size 5240
vocab.txt ADDED
The diff for this file is too large to render. See raw diff