ep9io committed on
Commit
b35ab64
1 Parent(s): 5f57c1c

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +233 -21
README.md CHANGED
@@ -1,9 +1,185 @@
  ---
- library_name: transformers
- tags: []
  ---

- # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->
@@ -15,21 +191,21 @@ tags: []
  <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
  - **Funded by [optional]:** [More Information Needed]
  - **Shared by [optional]:** [More Information Needed]
  - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
  - **Paper [optional]:** [More Information Needed]
  - **Demo [optional]:** [More Information Needed]
@@ -41,7 +217,25 @@ This is the model card of a 🤗 transformers model that has been pushed on the
  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

  ### Downstream Use [optional]
@@ -92,7 +286,7 @@ Use the code below to get started with the model.
  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

  #### Speeds, Sizes, Times [optional]
@@ -126,7 +320,22 @@ Use the code below to get started with the model.
  ### Results

- [More Information Needed]

  #### Summary
@@ -144,11 +353,11 @@ Use the code below to get started with the model.
  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]
@@ -158,7 +367,10 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  ### Compute Infrastructure

- [More Information Needed]

  #### Hardware
@@ -166,7 +378,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  #### Software

- [More Information Needed]

  ## Citation [optional]
@@ -192,8 +404,8 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  ## Model Card Authors [optional]

- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]
  ---
+ base_model: FacebookAI/roberta-base
+ datasets:
+ - 2024-mcm-everitt-ryan/job-bias-synthetic-human-benchmark-v2
+ language: en
+ license: apache-2.0
+ model_id: roberta-base-job-bias-seq-cls
+ model_description: The model is a multi-label classifier designed to detect various
+ types of bias within job descriptions.
+ developers: Tristan Everitt and Paul Ryan
+ model_card_authors: See developers
+ model_card_contact: See developers
+ repo: https://gitlab.computing.dcu.ie/everitt2/2024-mcm-everitt-ryan
+ training_regime: 'accelerator_config="{''split_batches'': False, ''dispatch_batches'':
+ None, ''even_batches'': True, ''use_seedable_sampler'': True, ''non_blocking'':
+ False, ''gradient_accumulation_kwargs'': None}", adafactor=false, adam_beta1=0.9,
+ adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=false, batch_eval_metrics=false,
+ bf16=false, bf16_full_eval=false, data_seed="None", dataloader_drop_last=false,
+ dataloader_num_workers=0, dataloader_persistent_workers=false, dataloader_pin_memory=true,
+ dataloader_prefetch_factor="None", ddp_backend="None", ddp_broadcast_buffers="None",
+ ddp_bucket_cap_mb="None", ddp_find_unused_parameters="None", ddp_timeout=1800, deepspeed="None",
+ disable_tqdm=false, dispatch_batches="None", do_eval=true, do_predict=false, do_train=false,
+ eval_accumulation_steps="None", eval_batch_size=8, eval_delay=0, eval_do_concat_batches=true,
+ eval_on_start=false, eval_steps="None", eval_strategy="epoch", evaluation_strategy="None",
+ fp16=false, fp16_backend="auto", fp16_full_eval=false, fp16_opt_level="O1", fsdp="[]",
+ fsdp_config="{''min_num_params'': 0, ''xla'': False, ''xla_fsdp_v2'': False, ''xla_fsdp_grad_ckpt'':
+ False}", fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap="None", full_determinism=false,
+ gradient_accumulation_steps=1, gradient_checkpointing="(False,)", gradient_checkpointing_kwargs="None",
+ greater_is_better=false, group_by_length=true, half_precision_backend="auto", ignore_data_skip=false,
+ include_inputs_for_metrics=false, jit_mode_eval=false, label_names="None", label_smoothing_factor=0.0,
+ learning_rate=3e-05, length_column_name="length", load_best_model_at_end=true, local_rank=0,
+ lr_scheduler_kwargs="{}", lr_scheduler_type="linear", max_grad_norm=1.0, max_steps=-1,
+ metric_for_best_model="loss", mp_parameters="", neftune_noise_alpha="None", no_cuda=false,
+ num_train_epochs=3, optim="adamw_torch", optim_args="None", optim_target_modules="None",
+ past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=8, per_gpu_eval_batch_size="None",
+ per_gpu_train_batch_size="None", prediction_loss_only=false, ray_scope="last", remove_unused_columns=true,
+ report_to="[]", restore_callback_states_from_checkpoint=false, resume_from_checkpoint="None",
+ seed=42, skip_memory_metrics=true, split_batches="None", tf32="None", torch_compile=false,
+ torch_compile_backend="None", torch_compile_mode="None", torchdynamo="None", tpu_num_cores="None",
+ train_batch_size=8, use_cpu=false, use_ipex=false, use_legacy_prediction_loop=false,
+ use_mps_device=false, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.001'
+ results: " precision recall f1-score support\n \n \
+ \ age 0.80 0.51 0.63 80\n disability 0.87\
+ \ 0.50 0.63 80\n feminine 0.93 0.94 0.93\
+ \ 80\n general 0.75 0.53 0.62 80\n masculine\
+ \ 0.78 0.59 0.67 80\n neutral 0.38 0.72\
+ \ 0.50 80\n racial 0.83 0.81 0.82 80\n\
+ \ sexuality 0.96 0.89 0.92 80\n \n micro avg\
+ \ 0.73 0.69 0.71 640\n macro avg 0.79 0.69\
+ \ 0.72 640\n weighted avg 0.79 0.69 0.72 640\n\
+ \ samples avg 0.71 0.73 0.71 640\n "
+ compute_infrastructure: '- Linux 6.5.0-35-generic x86_64
+
+ - MemTotal: 1056613768 kB
+
+ - 256 X AMD EPYC 7702 64-Core Processor
+
+ - GPU_0: NVIDIA L40S'
+ software: python 3.10.12, accelerate 0.32.1, aiohttp 3.9.5, aiosignal 1.3.1, anyio
+ 4.2.0, argon2-cffi 23.1.0, argon2-cffi-bindings 21.2.0, arrow 1.3.0, asttokens 2.4.1,
+ async-lru 2.0.4, async-timeout 4.0.3, attrs 23.2.0, awscli 1.33.26, Babel 2.14.0,
+ beautifulsoup4 4.12.3, bitsandbytes 0.43.1, bleach 6.1.0, blinker 1.4, botocore
+ 1.34.144, certifi 2024.2.2, cffi 1.16.0, charset-normalizer 3.3.2, click 8.1.7,
+ cloudpickle 3.0.0, colorama 0.4.6, comm 0.2.1, cryptography 3.4.8, dask 2024.7.0,
+ datasets 2.20.0, dbus-python 1.2.18, debugpy 1.8.0, decorator 5.1.1, defusedxml
+ 0.7.1, dill 0.3.8, distro 1.7.0, docutils 0.16, einops 0.8.0, entrypoints 0.4, evaluate
+ 0.4.2, exceptiongroup 1.2.0, executing 2.0.1, fastjsonschema 2.19.1, filelock 3.13.1,
+ flash-attn 2.6.1, fqdn 1.5.1, frozenlist 1.4.1, fsspec 2024.2.0, h11 0.14.0, hf_transfer
+ 0.1.6, httpcore 1.0.2, httplib2 0.20.2, httpx 0.26.0, huggingface-hub 0.23.4, idna
+ 3.6, importlib_metadata 8.0.0, iniconfig 2.0.0, ipykernel 6.29.0, ipython 8.21.0,
+ ipython-genutils 0.2.0, ipywidgets 8.1.1, isoduration 20.11.0, jedi 0.19.1, jeepney
+ 0.7.1, Jinja2 3.1.3, jmespath 1.0.1, joblib 1.4.2, json5 0.9.14, jsonpointer 2.4,
+ jsonschema 4.21.1, jsonschema-specifications 2023.12.1, jupyter-archive 3.4.0, jupyter_client
+ 7.4.9, jupyter_contrib_core 0.4.2, jupyter_contrib_nbextensions 0.7.0, jupyter_core
+ 5.7.1, jupyter-events 0.9.0, jupyter-highlight-selected-word 0.2.0, jupyter-lsp
+ 2.2.2, jupyter-nbextensions-configurator 0.6.3, jupyter_server 2.12.5, jupyter_server_terminals
+ 0.5.2, jupyterlab 4.1.0, jupyterlab_pygments 0.3.0, jupyterlab_server 2.25.2, jupyterlab-widgets
+ 3.0.9, keyring 23.5.0, launchpadlib 1.10.16, lazr.restfulclient 0.14.4, lazr.uri
+ 1.0.6, locket 1.0.0, lxml 5.1.0, MarkupSafe 2.1.5, matplotlib-inline 0.1.6, mistune
+ 3.0.2, more-itertools 8.10.0, mpmath 1.3.0, multidict 6.0.5, multiprocess 0.70.16,
+ nbclassic 1.0.0, nbclient 0.9.0, nbconvert 7.14.2, nbformat 5.9.2, nest-asyncio
+ 1.6.0, networkx 3.2.1, nltk 3.8.1, notebook 6.5.5, notebook_shim 0.2.3, numpy 1.26.3,
+ nvidia-cublas-cu12 12.1.3.1, nvidia-cuda-cupti-cu12 12.1.105, nvidia-cuda-nvrtc-cu12
+ 12.1.105, nvidia-cuda-runtime-cu12 12.1.105, nvidia-cudnn-cu12 8.9.2.26, nvidia-cufft-cu12
+ 11.0.2.54, nvidia-curand-cu12 10.3.2.106, nvidia-cusolver-cu12 11.4.5.107, nvidia-cusparse-cu12
+ 12.1.0.106, nvidia-nccl-cu12 2.19.3, nvidia-nvjitlink-cu12 12.3.101, nvidia-nvtx-cu12
+ 12.1.105, oauthlib 3.2.0, overrides 7.7.0, packaging 23.2, pandas 2.2.2, pandocfilters
+ 1.5.1, parso 0.8.3, partd 1.4.2, peft 0.11.1, pexpect 4.9.0, pillow 10.2.0, pip
+ 24.1.2, platformdirs 4.2.0, pluggy 1.5.0, polars 1.1.0, prometheus-client 0.19.0,
+ prompt-toolkit 3.0.43, protobuf 5.27.2, psutil 5.9.8, ptyprocess 0.7.0, pure-eval
+ 0.2.2, pyarrow 16.1.0, pyarrow-hotfix 0.6, pyasn1 0.6.0, pycparser 2.21, Pygments
+ 2.17.2, PyGObject 3.42.1, PyJWT 2.3.0, pyparsing 2.4.7, pytest 8.2.2, python-apt
+ 2.4.0+ubuntu3, python-dateutil 2.8.2, python-json-logger 2.0.7, pytz 2024.1, PyYAML
+ 6.0.1, pyzmq 24.0.1, referencing 0.33.0, regex 2024.5.15, requests 2.32.3, rfc3339-validator
+ 0.1.4, rfc3986-validator 0.1.1, rpds-py 0.17.1, rsa 4.7.2, s3transfer 0.10.2, safetensors
+ 0.4.3, scikit-learn 1.5.1, scipy 1.14.0, SecretStorage 3.3.1, Send2Trash 1.8.2,
+ sentence-transformers 3.0.1, sentencepiece 0.2.0, setuptools 69.0.3, six 1.16.0,
+ sniffio 1.3.0, soupsieve 2.5, stack-data 0.6.3, sympy 1.12, tabulate 0.9.0, terminado
+ 0.18.0, threadpoolctl 3.5.0, tiktoken 0.7.0, tinycss2 1.2.1, tokenizers 0.19.1,
+ tomli 2.0.1, toolz 0.12.1, torch 2.2.0, torchaudio 2.2.0, torchdata 0.7.1, torchtext
+ 0.17.0, torchvision 0.17.0, tornado 6.4, tqdm 4.66.4, traitlets 5.14.1, transformers
+ 4.42.4, triton 2.2.0, types-python-dateutil 2.8.19.20240106, typing_extensions 4.9.0,
+ tzdata 2024.1, uri-template 1.3.0, urllib3 2.2.2, wadllib 1.3.6, wcwidth 0.2.13,
+ webcolors 1.13, webencodings 0.5.1, websocket-client 1.7.0, wheel 0.42.0, widgetsnbextension
+ 4.0.9, xxhash 3.4.1, yarl 1.9.4, zipp 1.0.0
+ hardware_type: 1 X NVIDIA L40S
+ hours_used: '0.13'
+ cloud_provider: N/A
+ cloud_region: N/A
+ co2_emitted: N/A
+ direct_use: "\n ```python\n from transformers import pipeline\n\n pipe =\
+ \ pipeline(\"text-classification\", model=\"2024-mcm-everitt-ryan/roberta-base-job-bias-seq-cls\"\
+ , return_all_scores=True)\n\n results = pipe(\"Join our dynamic and fast-paced\
+ \ team as a Junior Marketing Specialist. We seek a tech-savvy and energetic individual\
+ \ who thrives in a vibrant environment. Ideal candidates are digital natives with\
+ \ a fresh perspective, ready to adapt quickly to new trends. You should have recent\
+ \ experience in social media strategies and a strong understanding of current digital\
+ \ marketing tools. We're looking for someone with a youthful mindset, eager to bring\
+ \ innovative ideas to our young and ambitious team. If you're a recent graduate\
+ \ or early in your career, this opportunity is perfect for you!\")\n print(results)\n\
+ \ ```\n >> [[\n {'label': 'age', 'score': 0.9883460402488708}, \n {'label':\
+ \ 'disability', 'score': 0.00787709467113018}, \n {'label': 'feminine', 'score':\
+ \ 0.007224376779049635}, \n {'label': 'general', 'score': 0.09967829287052155},\
+ \ \n {'label': 'masculine', 'score': 0.0035264550242573023}, \n {'label':\
+ \ 'racial', 'score': 0.014618005603551865}, \n {'label': 'sexuality', 'score':\
+ \ 0.005568435415625572}\n ]]\n "
+ model-index:
+ - name: roberta-base-job-bias-seq-cls
+ results:
+ - task:
+ type: multi_label_classification
+ dataset:
+ name: 2024-mcm-everitt-ryan/job-bias-synthetic-human-benchmark-v2
+ type: mix_human-eval_synthetic
+ metrics:
+ - type: loss
+ value: 0.2519490122795105
+ - type: accuracy
+ value: 0.6626712328767124
+ - type: f1_micro
+ value: 0.7080645161290322
+ - type: precision_micro
+ value: 0.7316666666666667
+ - type: recall_micro
+ value: 0.6859375
+ - type: roc_auc_micro
+ value: 0.8230034722222223
+ - type: f1_macro
+ value: 0.7152770887763198
+ - type: precision_macro
+ value: 0.787770276836773
+ - type: recall_macro
+ value: 0.6859375
+ - type: roc_auc_macro
+ value: 0.8230034722222221
+ - type: f1_samples
+ value: 0.7133969341161123
+ - type: precision_samples
+ value: 0.7111872146118721
+ - type: recall_samples
+ value: 0.7284531963470319
+ - type: roc_auc_samples
+ value: 0.8439191943900849
+ - type: f1_weighted
+ value: 0.7152770887763198
+ - type: precision_weighted
+ value: 0.787770276836773
+ - type: recall_weighted
+ value: 0.6859375
+ - type: roc_auc_weighted
+ value: 0.8230034722222221
+ - type: runtime
+ value: 8.8568
+ - type: samples_per_second
+ value: 65.938
+ - type: steps_per_second
+ value: 8.242
+ - type: epoch
+ value: 3.0
  ---

+ # Model Card for roberta-base-job-bias-seq-cls

  <!-- Provide a quick summary of what the model is/does. -->
 
  <!-- Provide a longer summary of what this model is. -->

+ The model is a multi-label classifier designed to detect various types of bias within job descriptions.

+ - **Developed by:** Tristan Everitt and Paul Ryan
  - **Funded by [optional]:** [More Information Needed]
  - **Shared by [optional]:** [More Information Needed]
  - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** en
+ - **License:** apache-2.0
+ - **Finetuned from model [optional]:** FacebookAI/roberta-base

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

+ - **Repository:** https://gitlab.computing.dcu.ie/everitt2/2024-mcm-everitt-ryan
  - **Paper [optional]:** [More Information Needed]
  - **Demo [optional]:** [More Information Needed]
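For quick experimentation, the checkpoint and the benchmark dataset referenced above can be pulled directly from the Hugging Face Hub. A minimal sketch, assuming public Hub access; the Hub IDs are taken from this card, while the dataset's split layout is not documented here:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "2024-mcm-everitt-ryan/roberta-base-job-bias-seq-cls"

# Load the fine-tuned multi-label classifier and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
print(model.config.problem_type, model.config.id2label)

# Benchmark dataset named in the card metadata (splits not verified here).
dataset = load_dataset("2024-mcm-everitt-ryan/job-bias-synthetic-human-benchmark-v2")
print(dataset)
```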
 
  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("text-classification", model="2024-mcm-everitt-ryan/roberta-base-job-bias-seq-cls", return_all_scores=True)
+
+ results = pipe("Join our dynamic and fast-paced team as a Junior Marketing Specialist. We seek a tech-savvy and energetic individual who thrives in a vibrant environment. Ideal candidates are digital natives with a fresh perspective, ready to adapt quickly to new trends. You should have recent experience in social media strategies and a strong understanding of current digital marketing tools. We're looking for someone with a youthful mindset, eager to bring innovative ideas to our young and ambitious team. If you're a recent graduate or early in your career, this opportunity is perfect for you!")
+ print(results)
+ ```
+ >> [[
+ {'label': 'age', 'score': 0.9883460402488708},
+ {'label': 'disability', 'score': 0.00787709467113018},
+ {'label': 'feminine', 'score': 0.007224376779049635},
+ {'label': 'general', 'score': 0.09967829287052155},
+ {'label': 'masculine', 'score': 0.0035264550242573023},
+ {'label': 'racial', 'score': 0.014618005603551865},
+ {'label': 'sexuality', 'score': 0.005568435415625572}
+ ]]
+
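The pipeline returns one score per label, so a decision threshold is needed to turn scores into flagged bias categories. A small sketch building on the output above; the 0.5 cut-off is an illustrative assumption, not a calibrated value from the card:

```python
# Hypothetical post-processing of the pipeline output above:
# keep every label whose score clears an assumed 0.5 threshold.
THRESHOLD = 0.5  # illustrative choice, not tuned or taken from the card

flagged = [
    entry["label"]
    for entry in results[0]  # `results` is the nested list returned by pipe(...) above
    if entry["score"] >= THRESHOLD
]
print(flagged)  # e.g. ['age'] for the sample job advert
```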

  ### Downstream Use [optional]
 
  #### Training Hyperparameters

+ - **Training regime:** accelerator_config="{'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}", adafactor=false, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=false, batch_eval_metrics=false, bf16=false, bf16_full_eval=false, data_seed="None", dataloader_drop_last=false, dataloader_num_workers=0, dataloader_persistent_workers=false, dataloader_pin_memory=true, dataloader_prefetch_factor="None", ddp_backend="None", ddp_broadcast_buffers="None", ddp_bucket_cap_mb="None", ddp_find_unused_parameters="None", ddp_timeout=1800, deepspeed="None", disable_tqdm=false, dispatch_batches="None", do_eval=true, do_predict=false, do_train=false, eval_accumulation_steps="None", eval_batch_size=8, eval_delay=0, eval_do_concat_batches=true, eval_on_start=false, eval_steps="None", eval_strategy="epoch", evaluation_strategy="None", fp16=false, fp16_backend="auto", fp16_full_eval=false, fp16_opt_level="O1", fsdp="[]", fsdp_config="{'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}", fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap="None", full_determinism=false, gradient_accumulation_steps=1, gradient_checkpointing="(False,)", gradient_checkpointing_kwargs="None", greater_is_better=false, group_by_length=true, half_precision_backend="auto", ignore_data_skip=false, include_inputs_for_metrics=false, jit_mode_eval=false, label_names="None", label_smoothing_factor=0.0, learning_rate=3e-05, length_column_name="length", load_best_model_at_end=true, local_rank=0, lr_scheduler_kwargs="{}", lr_scheduler_type="linear", max_grad_norm=1.0, max_steps=-1, metric_for_best_model="loss", mp_parameters="", neftune_noise_alpha="None", no_cuda=false, num_train_epochs=3, optim="adamw_torch", optim_args="None", optim_target_modules="None", past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=8, per_gpu_eval_batch_size="None", per_gpu_train_batch_size="None", prediction_loss_only=false, ray_scope="last", remove_unused_columns=true, report_to="[]", restore_callback_states_from_checkpoint=false, resume_from_checkpoint="None", seed=42, skip_memory_metrics=true, split_batches="None", tf32="None", torch_compile=false, torch_compile_backend="None", torch_compile_mode="None", torchdynamo="None", tpu_num_cores="None", train_batch_size=8, use_cpu=false, use_ipex=false, use_legacy_prediction_loop=false, use_mps_device=false, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.001 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
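The regime dump above is hard to scan; a hedged sketch of how its key non-default values map onto `transformers.TrainingArguments` (transformers 4.42 argument names; `output_dir` and `save_strategy` are assumptions, everything omitted stays at its default):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the main settings from the regime dump above.
training_args = TrainingArguments(
    output_dir="roberta-base-job-bias-seq-cls",  # placeholder, not stated in the card
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.001,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, required to match eval_strategy below
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    greater_is_better=False,
    group_by_length=True,
    seed=42,
)
```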

  #### Speeds, Sizes, Times [optional]
 
  ### Results

+               precision    recall  f1-score   support
+
+          age       0.80      0.51      0.63        80
+   disability       0.87      0.50      0.63        80
+     feminine       0.93      0.94      0.93        80
+      general       0.75      0.53      0.62        80
+    masculine       0.78      0.59      0.67        80
+      neutral       0.38      0.72      0.50        80
+       racial       0.83      0.81      0.82        80
+    sexuality       0.96      0.89      0.92        80
+
+    micro avg       0.73      0.69      0.71       640
+    macro avg       0.79      0.69      0.72       640
+ weighted avg       0.79      0.69      0.72       640
+  samples avg       0.71      0.73      0.71       640
+
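The table follows scikit-learn's classification-report layout. For reference, a hedged sketch of how such a report is produced for multi-label predictions; the arrays below are illustrative placeholders, not the project's evaluation data:

```python
import numpy as np
from sklearn.metrics import classification_report

LABELS = ["age", "disability", "feminine", "general",
          "masculine", "neutral", "racial", "sexuality"]

# Illustrative placeholders: one row per example, one column per label,
# with 1 marking that the label applies. Real arrays would come from the
# benchmark annotations and thresholded model scores.
y_true = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
                   [0, 0, 1, 0, 0, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 1, 0, 0, 0, 0],
                   [0, 0, 1, 0, 0, 0, 0, 0]])

print(classification_report(y_true, y_pred, target_names=LABELS, zero_division=0))
```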

  #### Summary
 
  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** 1 X NVIDIA L40S
+ - **Hours used:** 0.13
+ - **Cloud Provider:** N/A
+ - **Compute Region:** N/A
+ - **Carbon Emitted:** N/A
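With the hours and hardware above, a rough order-of-magnitude figure can be sketched in the spirit of the calculator; the GPU power draw and grid carbon intensity here are assumptions, since provider and region are reported as N/A:

```python
# Back-of-the-envelope estimate in the style of the ML Impact calculator.
# Hours come from this card; power draw and carbon intensity are assumptions.
HOURS = 0.13                 # from "Hours used" above
GPU_POWER_KW = 0.35          # assumed ~350 W board power for one NVIDIA L40S
CARBON_INTENSITY = 0.4       # assumed grid average, kg CO2eq per kWh

energy_kwh = HOURS * GPU_POWER_KW
co2_kg = energy_kwh * CARBON_INTENSITY
print(f"~{energy_kwh:.3f} kWh, ~{co2_kg:.3f} kg CO2eq")  # roughly 0.046 kWh, 0.018 kg
```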

  ## Technical Specifications [optional]
 
  ### Compute Infrastructure

+ - Linux 6.5.0-35-generic x86_64
+ - MemTotal: 1056613768 kB
+ - 256 X AMD EPYC 7702 64-Core Processor
+ - GPU_0: NVIDIA L40S

  #### Hardware
 
  #### Software

+ python 3.10.12, accelerate 0.32.1, aiohttp 3.9.5, aiosignal 1.3.1, anyio 4.2.0, argon2-cffi 23.1.0, argon2-cffi-bindings 21.2.0, arrow 1.3.0, asttokens 2.4.1, async-lru 2.0.4, async-timeout 4.0.3, attrs 23.2.0, awscli 1.33.26, Babel 2.14.0, beautifulsoup4 4.12.3, bitsandbytes 0.43.1, bleach 6.1.0, blinker 1.4, botocore 1.34.144, certifi 2024.2.2, cffi 1.16.0, charset-normalizer 3.3.2, click 8.1.7, cloudpickle 3.0.0, colorama 0.4.6, comm 0.2.1, cryptography 3.4.8, dask 2024.7.0, datasets 2.20.0, dbus-python 1.2.18, debugpy 1.8.0, decorator 5.1.1, defusedxml 0.7.1, dill 0.3.8, distro 1.7.0, docutils 0.16, einops 0.8.0, entrypoints 0.4, evaluate 0.4.2, exceptiongroup 1.2.0, executing 2.0.1, fastjsonschema 2.19.1, filelock 3.13.1, flash-attn 2.6.1, fqdn 1.5.1, frozenlist 1.4.1, fsspec 2024.2.0, h11 0.14.0, hf_transfer 0.1.6, httpcore 1.0.2, httplib2 0.20.2, httpx 0.26.0, huggingface-hub 0.23.4, idna 3.6, importlib_metadata 8.0.0, iniconfig 2.0.0, ipykernel 6.29.0, ipython 8.21.0, ipython-genutils 0.2.0, ipywidgets 8.1.1, isoduration 20.11.0, jedi 0.19.1, jeepney 0.7.1, Jinja2 3.1.3, jmespath 1.0.1, joblib 1.4.2, json5 0.9.14, jsonpointer 2.4, jsonschema 4.21.1, jsonschema-specifications 2023.12.1, jupyter-archive 3.4.0, jupyter_client 7.4.9, jupyter_contrib_core 0.4.2, jupyter_contrib_nbextensions 0.7.0, jupyter_core 5.7.1, jupyter-events 0.9.0, jupyter-highlight-selected-word 0.2.0, jupyter-lsp 2.2.2, jupyter-nbextensions-configurator 0.6.3, jupyter_server 2.12.5, jupyter_server_terminals 0.5.2, jupyterlab 4.1.0, jupyterlab_pygments 0.3.0, jupyterlab_server 2.25.2, jupyterlab-widgets 3.0.9, keyring 23.5.0, launchpadlib 1.10.16, lazr.restfulclient 0.14.4, lazr.uri 1.0.6, locket 1.0.0, lxml 5.1.0, MarkupSafe 2.1.5, matplotlib-inline 0.1.6, mistune 3.0.2, more-itertools 8.10.0, mpmath 1.3.0, multidict 6.0.5, multiprocess 0.70.16, nbclassic 1.0.0, nbclient 0.9.0, nbconvert 7.14.2, nbformat 5.9.2, nest-asyncio 1.6.0, networkx 3.2.1, nltk 3.8.1, notebook 6.5.5, notebook_shim 0.2.3, numpy 1.26.3, nvidia-cublas-cu12 12.1.3.1, nvidia-cuda-cupti-cu12 12.1.105, nvidia-cuda-nvrtc-cu12 12.1.105, nvidia-cuda-runtime-cu12 12.1.105, nvidia-cudnn-cu12 8.9.2.26, nvidia-cufft-cu12 11.0.2.54, nvidia-curand-cu12 10.3.2.106, nvidia-cusolver-cu12 11.4.5.107, nvidia-cusparse-cu12 12.1.0.106, nvidia-nccl-cu12 2.19.3, nvidia-nvjitlink-cu12 12.3.101, nvidia-nvtx-cu12 12.1.105, oauthlib 3.2.0, overrides 7.7.0, packaging 23.2, pandas 2.2.2, pandocfilters 1.5.1, parso 0.8.3, partd 1.4.2, peft 0.11.1, pexpect 4.9.0, pillow 10.2.0, pip 24.1.2, platformdirs 4.2.0, pluggy 1.5.0, polars 1.1.0, prometheus-client 0.19.0, prompt-toolkit 3.0.43, protobuf 5.27.2, psutil 5.9.8, ptyprocess 0.7.0, pure-eval 0.2.2, pyarrow 16.1.0, pyarrow-hotfix 0.6, pyasn1 0.6.0, pycparser 2.21, Pygments 2.17.2, PyGObject 3.42.1, PyJWT 2.3.0, pyparsing 2.4.7, pytest 8.2.2, python-apt 2.4.0+ubuntu3, python-dateutil 2.8.2, python-json-logger 2.0.7, pytz 2024.1, PyYAML 6.0.1, pyzmq 24.0.1, referencing 0.33.0, regex 2024.5.15, requests 2.32.3, rfc3339-validator 0.1.4, rfc3986-validator 0.1.1, rpds-py 0.17.1, rsa 4.7.2, s3transfer 0.10.2, safetensors 0.4.3, scikit-learn 1.5.1, scipy 1.14.0, SecretStorage 3.3.1, Send2Trash 1.8.2, sentence-transformers 3.0.1, sentencepiece 0.2.0, setuptools 69.0.3, six 1.16.0, sniffio 1.3.0, soupsieve 2.5, stack-data 0.6.3, sympy 1.12, tabulate 0.9.0, terminado 0.18.0, threadpoolctl 3.5.0, tiktoken 0.7.0, tinycss2 1.2.1, tokenizers 0.19.1, tomli 2.0.1, toolz 0.12.1, torch 2.2.0, torchaudio 2.2.0, torchdata 0.7.1, torchtext 
0.17.0, torchvision 0.17.0, tornado 6.4, tqdm 4.66.4, traitlets 5.14.1, transformers 4.42.4, triton 2.2.0, types-python-dateutil 2.8.19.20240106, typing_extensions 4.9.0, tzdata 2024.1, uri-template 1.3.0, urllib3 2.2.2, wadllib 1.3.6, wcwidth 0.2.13, webcolors 1.13, webencodings 0.5.1, websocket-client 1.7.0, wheel 0.42.0, widgetsnbextension 4.0.9, xxhash 3.4.1, yarl 1.9.4, zipp 1.0.0

  ## Citation [optional]
 
  ## Model Card Authors [optional]

+ See developers

  ## Model Card Contact

+ See developers