emilylearning committed · Commit 6a3abb5 · 1 Parent(s): 5943071
Pulling out year-text tokenization into sep function. Improving docs. Changing prefix name order.

app.py CHANGED
@@ -82,7 +82,7 @@ assert label_list[0] == LABEL_DICT["female"], "LABEL_DICT not an ordered dict"
 
 label2id = {label: idx for idx, label in enumerate(label_list)}
 
-
+
 def tokenize_and_append_metadata(text, tokenizer):
     tokenized = tokenizer(
         text,
@@ -90,6 +90,7 @@ def tokenize_and_append_metadata(text, tokenizer):
         padding=True,
         max_length=MAX_TOKEN_LENGTH,
     )
+    """Tokenize text and mask/flag 'gendered_tokens_ids' in token_ids and labels."""
 
     # Finding the gender pronouns in the tokens
     token_ids = tokenized["input_ids"]
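The added docstring summarizes what the rest of `tokenize_and_append_metadata` (not shown in this hunk) does. As a reading aid, here is a minimal sketch of that masking step, assuming the MLM conventions visible elsewhere in this diff (`-100` as the ignore label, a `[MASK]` token id); the pronoun list and the helper name `mask_gendered_tokens` are illustrative assumptions, not the app's actual code.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Hypothetical gendered-pronoun vocabulary; the app's actual list may differ.
gendered_token_ids = torch.tensor(
    tokenizer.convert_tokens_to_ids(["she", "he", "her", "his", "him", "hers"])
)

def mask_gendered_tokens(text, max_length=128):
    """Illustrative stand-in for tokenize_and_append_metadata's masking step."""
    tokenized = tokenizer(text, truncation=True, padding=True, max_length=max_length)
    token_ids = torch.tensor(tokenized["input_ids"])
    is_gendered = torch.isin(token_ids, gendered_token_ids)
    # MLM convention: labels are -100 everywhere except the masked slots.
    tokenized["labels"] = torch.where(is_gendered, token_ids, torch.tensor(-100)).tolist()
    tokenized["input_ids"] = torch.where(
        is_gendered, torch.tensor(tokenizer.mask_token_id), token_ids
    ).tolist()
    return tokenized
```

Under this convention, `labels != -100` and `input_ids == mask_token_id` pick out the same positions, which is what makes the refactor in a later hunk safe.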
@@ -133,35 +134,46 @@ def tokenize_and_append_metadata(text, tokenizer):
     return tokenized
 
 
-
-def predict_gender_pronouns(
-    num_points, conditioning_variables, f_weights, bert_like_models, input_text
-):
-
+def get_tokenized_text_with_years(years, input_text):
+    """Construct dict of tokenized texts with each year injected into the text."""
     text_portions = input_text.split(SPLIT_KEY)
 
-
-    num_preds = None
-
-    dfs = []
-    dfs.append(pd.DataFrame({"year": years}))
-
-    tokenized = {'ids':[], 'atten_mask':[], 'toks':[], 'labels':[]}
+    tokenized_w_year = {'ids':[], 'atten_mask':[], 'toks':[], 'labels':[]}
     for b_date in years:
+
         target_text = f"{b_date}".join(text_portions)
         tokenized_sample = tokenize_and_append_metadata(
             target_text,
             tokenizer=tokenizer,
         )
 
-
-
-
-
+        tokenized_w_year['ids'].append(tokenized_sample["input_ids"])
+        tokenized_w_year['atten_mask'].append(torch.tensor(tokenized_sample["attention_mask"]))
+        tokenized_w_year['toks'].append(tokenizer.convert_ids_to_tokens(tokenized_sample["input_ids"]))
+        tokenized_w_year['labels'].append(tokenized_sample["labels"])
+
+    # Also returning the last `target_text` to display as example text
+    return tokenized_w_year, target_text
 
+
+def predict_gender_pronouns(
+    num_points, conditioning_variables, f_weights, bert_like_models, input_text
+):
+    """Run inference on input_text for each model type, returning df and plots of the percentage
+    of gender pronouns predicted as female and male in each target text.
+    """
+
+    years = np.linspace(START_YEAR, STOP_YEAR, int(num_points)).astype(int)
+
+    tokenized, target_text = get_tokenized_text_with_years(years, input_text)
+    is_masked = tokenized['ids'][0] == MASK_TOKEN_ID
+    num_preds = torch.sum(is_masked).item()
+
+    dfs = []
+    dfs.append(pd.DataFrame({"year": years}))
     for f_weight in f_weights:
         for var in conditioning_variables:
-            prefix = f"
+            prefix = f"{var}_w{f_weight}"
             model = models[(var, f_weight)]
 
             p_female = []
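The extracted helper's core trick is plain string surgery: splitting the input on the `DATE` placeholder and re-joining with each candidate year. A tiny standalone sketch, assuming `SPLIT_KEY == "DATE"` (as the demo description suggests) and illustrative year bounds:

```python
import numpy as np

SPLIT_KEY = "DATE"                  # assumed value of the app's constant
START_YEAR, STOP_YEAR = 1900, 2000  # illustrative; the app defines its own bounds

input_text = "Born DATE, she was a computer scientist."
text_portions = input_text.split(SPLIT_KEY)

years = np.linspace(START_YEAR, STOP_YEAR, 5).astype(int)
for b_date in years:
    # join() re-inserts the year at every place DATE occurred.
    target_text = f"{b_date}".join(text_portions)
    print(target_text)
# Born 1900, she was a computer scientist.
# Born 1925, she was a computer scientist.
# ... up to "Born 2000, ..."
```

Because `split`/`join` handles any number of `DATE` occurrences, the helper works unchanged for sentences that mention the date several times.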
@@ -175,11 +187,8 @@ def predict_gender_pronouns(
                 outputs = model(ids.unsqueeze(dim=0), atten_mask.unsqueeze(dim=0))
                 preds = torch.argmax(outputs[0][0].cpu(), dim=1)
 
-                was_masked = labels.cpu() != -100
-                preds = torch.where(
-
-                if not num_preds:
-                    num_preds = torch.sum(was_masked).item()
+                #was_masked = labels.cpu() != -100
+                preds = torch.where(is_masked, preds, -100)
 
                 p_female.append(len(torch.where(preds == 0)[0]) / num_preds * 100)
                 p_male.append(len(torch.where(preds == 1)[0]) / num_preds * 100)
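This hunk is the payoff of the refactor: `is_masked` and `num_preds` are now computed once from the tokenized input (assuming each injected year tokenizes to the same number of tokens, so every year variant masks the same positions), instead of being derived lazily from `labels` inside the model loop. Below, a self-contained illustration of the counting logic with fabricated tensors; `MASK_TOKEN_ID = 103` (BERT's `[MASK]`) is an assumption, and ids 0/1 stand for female/male per the `label2id` ordering asserted at the top of the file:

```python
import torch

MASK_TOKEN_ID = 103                                   # assumed; the app defines its own

ids = torch.tensor([101, 2141, 103, 2001, 103, 102])  # two masked pronoun slots
is_masked = ids == MASK_TOKEN_ID
num_preds = torch.sum(is_masked).item()               # 2

preds = torch.tensor([1, 1, 0, 1, 1, 0])              # pretend argmax over model logits
# Keep predictions only at masked positions; everything else becomes -100.
preds = torch.where(is_masked, preds, torch.tensor(-100))

p_female = len(torch.where(preds == 0)[0]) / num_preds * 100  # 50.0
p_male = len(torch.where(preds == 1)[0]) / num_preds * 100    # 50.0
```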
@@ -240,28 +249,28 @@ def predict_gender_pronouns(
 title = "Changing Gender Pronouns"
 description = """
 <h2> Intro </h2>
-This is a demo for a project exploring possible spurious correlations
-
-In a user provided sentence, with at least one reference to a `DATE` and one gender pronoun, we will see how sweeping through a range of `DATE` values can change the predicted pronouns.
+This is a demo for a project exploring possible spurious correlations that our models have learned. We can examine the training datasets and learning tasks to hypothesize which spurious correlations may exist, then condition on these variables to determine whether we can achieve alternative outcomes.
 
-
+Specifically in this demo: in a user-provided sentence with at least one reference to a `DATE` and one gender pronoun, we will see how sweeping through a range of `DATE` values can change the predicted pronouns. This effect can be observed in BERT base models and in our fine-tuned models (fine-tuned on a pronoun-prediction task over the [wiki-bio](https://huggingface.co/datasets/wiki_bio) dataset).
 
-One way to explain this phenomena is by looking at a likely
+One way to explain this phenomenon is by looking at a likely data generating process for biographical-like data, in both the main BERT training dataset and the `wiki_bio` dataset, in the form of a causal DAG.
 
 <h2> Causal DAG </h2>
-In the DAG, we can see that `birth_place`, `birth_date` and `gender` are all independent elements that have no common cause with the other covariates in the DAG. However `birth_place`, `birth_date` and `gender` may all have a role in causing one's `access_to_resources`, with the general trend that `access_to_resources` has become less gender-dependent over time, but not in every `birth_place`, with recent events in Afghanistan providing a stark counterexample to this trend. `access_to_resources`
+In the DAG, we can see that `birth_place`, `birth_date` and `gender` are all independent elements that have no common cause with the other covariates in the DAG. However, `birth_place`, `birth_date` and `gender` may all have a role in causing one's `access_to_resources`, with the general trend that `access_to_resources` has become less gender-dependent over time, though not in every `birth_place`, with recent events in Afghanistan providing a stark counterexample to this trend. Importantly, `access_to_resources` determines how, **if at all**, you may appear in the dataset's `context_words`.
 
-We
+We argue that although there are complex causal interactions between the words in any given sentence, the `context_words` are more likely to cause the `gender_pronouns` than vice versa. For example, if the subject is a famous doctor and the object is her wealthy father, these context words will determine which person is being referred to, and thus which gendered pronoun to use.
 
 
-In this graph,
+In this graph, arrowheads are intended to show the assumed direction of causation. E.g. as described above, we are claiming that `context_words` cause the `gender_pronouns`. While causation follows the direction of the arrows, statistical correlation can flow in any direction (it is cause-agnostic).
+
+In the case of this graph, any pink path between `context_words` and `gender_pronouns` will allow the flow of statistical correlation, inviting confounding and thus spurious correlations into the trained model.
 
 <center>
 <img src="https://www.dropbox.com/s/x60r43h7uwztnru/generic_ds_dag.png?raw=1"
 alt="DAG of possible data generating process for datasets used in training.">
 </center>
 
-Those familiar with causal DAGs may note when can simply condition on `gender` to block any confounding between the `context_words` and the `gender_pronouns`. However, this is not always possible, particularly in generative or mask-filling tasks
+Those familiar with causal DAGs may note that one can simply condition on `gender` to block any confounding between the `context_words` and the `gender_pronouns`. However, this is not always possible, particularly in generative or mask-filling tasks where gender may be unknown, as is common in language modeling and in the demo below.
 
 <h2> How to use this demo </h2>
 In this demo, a user can add any sentence that contains at least one gender pronoun and the capitalized word `DATE`. We then sweep through a range of `date` values in the place of `DATE`, while masking (for prediction) the gender pronouns (included in the list below).
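The selection story told by this DAG (access to resources gates who appears in the corpus at all) can be made concrete with a toy simulation. All numbers below are fabricated purely for illustration; the point is only that `birth_date` and `gender` can be independent in the population yet correlated among the people who make it into the `context_words`:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
birth_year = rng.integers(1800, 2000, n)
is_female = rng.random(n) < 0.5          # independent of birth_year by construction

# Toy access model: access grows over time, and historically skewed male.
p_access = (birth_year - 1800) / 200 + 0.3 * (~is_female)
in_corpus = rng.random(n) < p_access

for lo, hi in [(1800, 1900), (1900, 2000)]:
    cohort = in_corpus & (birth_year >= lo) & (birth_year < hi)
    print(lo, hi, round(is_female[cohort].mean(), 3))
# Earlier corpus cohorts skew male; later ones less so. A model trained on this
# corpus can pick up a date -> pronoun correlation with no causal link between them.
```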
@@ -281,9 +290,7 @@ In addition to choosing the test sentence, we ask that you pick how the fine-tune
 - conditioning variable: which, if any, conditioning variable from the three noted above in the DAG, was included in the text at train time.
 - loss function weight: weight assigned to the minority class (female pronouns in this fine-tuning dataset) that was included in the text at train time.
 
-
-
-
+You can also optionally pick a bert-like model for comparison.
 
 <h2> What are the results</h2>
 
@@ -293,9 +300,11 @@ In the resulting plots, we can look for a dose-response relationship between:
 
 Specifically, we are seeing if making a larger magnitude intervention (an older `DATE` in the text) results in a larger magnitude effect in the outcome (a higher percentage of predicted female pronouns).
 
-One trend that appears is: conditioning on `birth_date` metadata in both training and inference text has the largest dose-response relationship. This seems reasonable, as the fine-tuned model is able to 'stratify' a learned relationship between gender pronouns and dates, when both are present in the text.
-While conditioning on either no metadata or `birth_place` data training, have similar middle-ground effects for this inference task.
-Finally, conditioning on `name` metadata in training, (while again conditioning on `date` in inference) has almost no dose-response relationship. It appears the learning of a `name —> gender pronouns` relationship was sufficiently successful to overwhelm any potential more nuanced learning, such as that driven by `birth_date` or `place`.
+- One trend that appears is that conditioning on `birth_date` metadata in both training and inference text has the largest dose-response relationship. This seems reasonable, as the fine-tuned model is able to 'stratify' a learned relationship between gender pronouns and dates when both are present in the text.
+- Conditioning on either no metadata or `birth_place` metadata in training has similar middle-ground effects for this inference task.
+- Finally, conditioning on `name` metadata in training (while again conditioning on `date` in inference) has almost no dose-response relationship. It appears the learning of a `name -> gender pronouns` relationship was sufficiently successful to overwhelm any potentially more nuanced learning, such as that driven by `birth_date` or `place`.
+
+Please feel free to ping me on the Hugging Face discord (I'm 'emily_learner' there) with any feedback, comments, concerns, or interesting findings!
 """
 
 
@@ -313,23 +322,23 @@ gr.Interface(
             CONDITIONING_VARIABLES,
             default=["none", "birth_date"],
             type="value",
-            label="Pick conditioning variable included in text during fine-tuning.",
+            label="(1) Pick conditioning variable included in text during fine-tuning.",
         ),
         gr.inputs.CheckboxGroup(
             FEMALE_WEIGHTS,
             default=[5],
             type="value",
-            label="Pick loss function weight placed on female predictions during fine-tuning.",
+            label="(2) Pick loss function weight placed on female predictions during fine-tuning.",
         ),
         gr.inputs.CheckboxGroup(
             BERT_LIKE_MODELS,
             default=["bert"],
             type="value",
-            label="Pick
+            label="(Optional) Pick bert-like base uncased model for comparison.",
         ),
         gr.inputs.Textbox(
             lines=7,
-            label="Input Text
+            label="Input Text: Include one or more instances of the word 'DATE' below (to be replaced with a range of dates in the demo), and one or more gender pronouns (to be masked for prediction).",
             default="Born DATE, she was a computer scientist. Her work was greatly respected, and she was well-regarded in her field.",
         ),
     ],
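For context, here is a pared-down mock-up of how these `gr.inputs` blocks assemble into the legacy gradio 2.x `Interface` that the diff uses, with a stub in place of `predict_gender_pronouns` and illustrative choice lists (the app defines its own):

```python
import gradio as gr

CONDITIONING_VARIABLES = ["none", "birth_date", "birth_place", "name"]  # assumed list
FEMALE_WEIGHTS = [1, 5]                                                 # assumed list
BERT_LIKE_MODELS = ["bert"]

def predict_stub(conditioning_variables, f_weights, bert_like_models, input_text):
    # Stand-in for predict_gender_pronouns, which returns a dataframe and plots.
    return f"vars={conditioning_variables}, weights={f_weights}, models={bert_like_models}"

gr.Interface(
    fn=predict_stub,
    inputs=[
        gr.inputs.CheckboxGroup(
            CONDITIONING_VARIABLES,
            default=["none", "birth_date"],
            type="value",
            label="(1) Pick conditioning variable included in text during fine-tuning.",
        ),
        gr.inputs.CheckboxGroup(
            FEMALE_WEIGHTS,
            default=[5],
            type="value",
            label="(2) Pick loss function weight placed on female predictions during fine-tuning.",
        ),
        gr.inputs.CheckboxGroup(
            BERT_LIKE_MODELS,
            default=["bert"],
            type="value",
            label="(Optional) Pick bert-like base uncased model for comparison.",
        ),
        gr.inputs.Textbox(
            lines=7,
            label="Input Text: include 'DATE' and at least one gender pronoun.",
            default="Born DATE, she was a computer scientist.",
        ),
    ],
    outputs="text",
).launch()
```

The numbered "(1)/(2)" labels added in this commit make the expected interaction order explicit in the UI.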
@@ -356,3 +365,4 @@ gr.Interface(
     description=description,
     article=article,
 ).launch()
+