---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
This is the baseline model for the news source classification project.

Please run the following evaluation pipeline code:

# START #
## Imports
<pre>from huggingface_hub import hf_hub_download
import joblib

# Authenticate with the Hugging Face Hub (run in a notebook; use `huggingface-cli login` in a shell)
!huggingface-cli login

import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import transforms, utils
from torchvision.io import read_image
from torch.utils.data import Dataset, DataLoader
from PIL import Image
from skimage import io, transform

from sklearn.metrics import accuracy_score

import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('wordnet')

from transformers import AutoTokenizer, AutoModel
from transformers import DistilBertTokenizer, DistilBertModel</pre>


# Load model from Hugging Face (please load the test data into test_df below)
<pre>repo_id = 'awngsz/nn_model'
filename = 'nn_model_v3.joblib'

model_file_path = hf_hub_download(repo_id=repo_id, filename=filename)
model = joblib.load(model_file_path)
print(model)

# Load the test dataset (assuming the name is the same as the one in the Ed post);
# point test_file_path at the held-out test CSV (placeholder path below)
test_file_path = 'test_data.csv'
test_df = pd.read_csv(test_file_path)

# Copying the naming convention from the sample dataset in the Ed post.
# Keep the labels next to the titles so rows dropped during cleaning stay aligned.
X_test = test_df[['title', 'labels']].copy()
y_test = test_df['labels']</pre>
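
Before cleaning, it can help to confirm that the test CSV actually follows the column naming convention assumed above (`title` and `labels`). This is a small optional sanity check, not part of the original pipeline:

<pre># Optional sanity check (assumes the Ed-post column names)
expected_cols = {'title', 'labels'}
missing = expected_cols - set(test_df.columns)
assert not missing, f'Test CSV is missing expected columns: {missing}'</pre>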

# Clean the data

<pre>
def clean_headlines(df, column_name):
    """
    Cleans a specified column in a DataFrame by:
    - Removing HTML tags and <script> elements
    - Removing special characters and collapsing repeated special characters
    - Removing tabs and newline characters
    - Normalizing references to US/UN as u.s./u.n.
    - Removing outlet-specific boilerplate (e.g. 'fox news poll:', 'opinion:')
    - Removing extra spaces and leading/trailing whitespace

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """
    # Remove HTML tags
    df[column_name] = df[column_name].str.replace(r'<[^<]+?>', '', regex=True)

    # Remove scripts
    df[column_name] = df[column_name].str.replace(r'<script.*?</script>', '', regex=True)

    # Remove special characters
    df[column_name] = df[column_name].str.strip().str.replace(r'[&*|~`^=_+{}[\]<>\\]', ' ', regex=True)

    # Collapse repeating special characters (e.g. '??', '!!')
    df[column_name] = df[column_name].str.strip().str.replace(r'([?!])\1+', r'\1', regex=True)

    # Remove tabs
    df[column_name] = df[column_name].str.replace(r'\t', ' ', regex=True)

    # Remove newline characters
    df[column_name] = df[column_name].str.replace(r'\n', ' ', regex=True)

    # Normalize all references to US as u.s. (and UN as u.n.)
    df[column_name] = df[column_name].str.replace(r'US', 'u.s.', regex=True)
    df[column_name] = df[column_name].str.replace(r'UN', 'u.n.', regex=True)

    # Remove extra spaces, including leading/trailing whitespace
    df[column_name] = df[column_name].str.strip().str.replace(r'\s+', ' ', regex=True)

    # Get rid of the Fox News boilerplate patterns seen in the data
    df[column_name] = df[column_name].str.replace(r'fox news poll:', '', regex=True)
    df[column_name] = df[column_name].str.replace(r'\| fox news', '', regex=True)
    df[column_name] = df[column_name].str.replace(r'Fox News', '', regex=True)
    df[column_name] = df[column_name].str.replace(r'fox news', '', regex=True)
    df[column_name] = df[column_name].str.replace(r'news poll:', '', regex=True)
    df[column_name] = df[column_name].str.replace(r'opinion:', '', regex=True)
    df[column_name] = df[column_name].str.replace(r"reporter's notebook", '', regex=True)

    # Normalize double quotes to single quotes (optional, currently disabled)
    # df[column_name] = df[column_name].str.replace(r'"', "'", regex=True)

    # Remove periods, commas, and parentheses (optional, currently disabled)
    # df[column_name] = df[column_name].str.replace(r'[.,()]', '', regex=True)

    return df
</pre>

<pre>
def normalize_headlines(df, column_name):
    """
    Normalizes a given headline by:
    - converting it to lowercase
    - removing stopwords
    - applying lemmatization to reduce words to their base forms

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """

    # Convert headlines to lowercase
    df[column_name] = df[column_name].str.lower()

    # Remove stopwords from each headline
    stop_words = set(stopwords.words('english'))
    df[column_name] = df[column_name].apply(lambda x: ' '.join([word for word in x.split() if word not in stop_words]))

    # Lemmatize words to their base form
    lemmatizer = nltk.stem.WordNetLemmatizer()
    df[column_name] = df[column_name].apply(lambda x: ' '.join([lemmatizer.lemmatize(word) for word in x.split()]))

    return df
</pre>

<pre>
def handle_missing_data(df, column_name):
    """
    Handles missing or incomplete data in a given column of a DataFrame by:
    - Dropping rows with NULL headlines
    - Dropping headlines shorter than a minimum word count

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """

    # Remove NULL headlines
    df = df.dropna(subset=[column_name])

    # Set a minimum word count threshold
    min_word_count = 3

    # Filter out titles with fewer words
    df = df[df[column_name].str.split().apply(len) >= min_word_count].reset_index(drop=True)

    return df
</pre>

<pre>
def consistency_checks(df, column_name):
    """
    Ensures all headlines follow a consistent format by:
    - Removing duplicate headlines

    Args:
        df (pd.DataFrame): The DataFrame containing the column to clean
        column_name (str): The name of the column to clean

    Returns:
        pd.DataFrame: A DataFrame with the cleaned column
    """

    # Remove duplicate headlines
    df = df.drop_duplicates(subset=[column_name])

    # Filter headlines with too few or too many words (optional, currently disabled)
    # df = df[df['title'].str.split().apply(len).between(3, 20)]

    return df
</pre>

<pre>
X_test = clean_headlines(X_test, 'title')
X_test = normalize_headlines(X_test, 'title')
X_test = X_test.dropna(subset=['title'])
X_test = handle_missing_data(X_test, 'title')
X_test = consistency_checks(X_test, 'title')
</pre>
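
As a quick illustration of what the cleaning and normalization steps produce, here is a minimal sketch on a made-up headline (illustrative only, not part of the grading pipeline):

<pre># Illustrative example on a single invented headline
demo_df = pd.DataFrame({'title': ['US lawmakers debate new budget deal | Fox News']})
demo_df = clean_headlines(demo_df, 'title')
demo_df = normalize_headlines(demo_df, 'title')
print(demo_df['title'].iloc[0])
# expected output (approximately): u.s. lawmaker debate new budget deal</pre>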


# Load the embedding model from Hugging Face. Transformer: DistilBERT

<pre>
def get_embeddings(text_all, tokenizer, model, device, max_len=128):
    '''
    Generate embeddings using a transformer model on GPU if available.
    Args:
        - text_all: List of input texts
        - tokenizer: Tokenizer for the model
        - model: Transformer model
        - device: torch.device to run the computations on
        - max_len: Maximum token length for the input
    Returns:
        - embeddings: List of embeddings for each input text
    '''
    embeddings = []

    count = 0
    print('Start embeddings:')

    for text in text_all:
        count += 1
        # Print progress roughly every 10% (guard against very small inputs)
        if len(text_all) >= 10 and count % (len(text_all) // 10) == 0:
            print(f'{count / len(text_all) * 100:.1f}% done ...')

        # Tokenize the input text
        model_input_token = tokenizer(
            text,
            add_special_tokens=True,
            max_length=max_len,
            padding='max_length',
            truncation=True,
            return_tensors='pt'
        ).to(device)  # Move input tensors to the GPU if available

        # Generate embeddings without gradient computation
        with torch.no_grad():
            model_output = model(**model_input_token)
            cls_embedding = model_output.last_hidden_state[:, 0, :]  # Use the CLS token embedding
            cls_embedding = cls_embedding.squeeze().cpu().numpy()    # Move back to CPU for numpy
            embeddings.append(cls_embedding)

    return embeddings
</pre>


# Check for GPU availability
<pre>
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device: {device}')

# Load the DistilBERT tokenizer and model
print("Loading model and tokenizer...")
tokenizer_news = AutoTokenizer.from_pretrained('distilbert-base-uncased')
model_news = AutoModel.from_pretrained('distilbert-base-uncased').to(device)

# Set the model to evaluation mode
model_news.eval()

############################################# DistilBERT (uncased) Embedding #############################################
print("Computing DistilBERT embeddings for the test data...")

# Split the cleaned DataFrame back into labels and titles
y_test = X_test['labels']
X_test = X_test['title']

X_test_embeddings_DBERT = get_embeddings(X_test, tokenizer_news, model_news, device, max_len=128)
print("DistilBERT embeddings for the test data computed!")

# Stack the per-headline CLS vectors into a 2-D array and predict
X_test_embeddings_DBERT = np.array(X_test_embeddings_DBERT)
prediction = model.predict(X_test_embeddings_DBERT)
</pre>

# Accuracy
<pre>label_map = {'NBC': 0, 'FoxNews': 1}

def compute_category_accuracy(y_true, y_pred, label):
    y_true = np.array(y_true)
    n_correct = np.sum((y_true == label) & (y_pred == label))
    n_total = np.sum(y_true == label)
    cat_accuracy = n_correct / n_total
    return cat_accuracy

# Print accuracy
print(f'Test accuracy: {accuracy_score(y_test, prediction) * 100:.2f}%')
print(f'Test accuracy for NBC: {compute_category_accuracy(y_test, prediction, label_map["NBC"]) * 100:.2f}%')
print(f'Test accuracy for FoxNews: {compute_category_accuracy(y_test, prediction, label_map["FoxNews"]) * 100:.2f}%')
</pre>
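
Optionally, a confusion matrix gives a per-class view of the same predictions. This is a small add-on sketch using scikit-learn and seaborn (both imported above), not part of the required pipeline:

<pre>from sklearn.metrics import confusion_matrix

# Rows are true labels, columns are predicted labels (0 = NBC, 1 = FoxNews)
cm = confusion_matrix(y_test, prediction, labels=[label_map['NBC'], label_map['FoxNews']])
sns.heatmap(cm, annot=True, fmt='d',
            xticklabels=['NBC', 'FoxNews'], yticklabels=['NBC', 'FoxNews'])
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()</pre>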



<!-- Old baseline pipeline, kept for reference:

from huggingface_hub import hf_hub_download
import joblib

# Load model from Huggingface
repo_id = 'awngsz/baseline_model'
filename = 'CIS5190_Proj2_AWNGSZ.joblib'

file_path = hf_hub_download(repo_id=repo_id, filename=filename)
model = joblib.load(file_path)

print(model)

# Load test dataset (assuming the name is the same as the one in the Ed post)
test_df = pd.read_csv(file_path)

# Copying the naming convention from the sample dataset in the Ed post
X_test = test_df['title']
y_test = test_df['labels']

# Load the embedding model from Huggingface
############################################# Transformer: DistilBERT #############################################
from transformers import DistilBertTokenizer, DistilBertModel
# pytorch related packages
import torch
import torchvision
from torchvision import transforms, utils
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
from PIL import Image
from skimage import io, transform
from torchvision.io import read_image
from torch.utils.data import Dataset, DataLoader

def get_embeddings(text_all, tokenizer, model, max_len=128):
    '''
    return: embeddings list
    '''
    embeddings = []
    count = 0
    print('Start embeddings:')
    for text in text_all:
        count += 1
        if count % (len(text_all) // 10) == 0:
            print(f'{count / len(text_all) * 100:.1f}% done ...')

        model_input_token = tokenizer(
            text,
            add_special_tokens=True,
            max_length=max_len,
            padding='max_length',
            truncation=True,
            return_tensors='pt'
        )

        with torch.no_grad():
            model_output = model(**model_input_token)
            cls_embedding = model_output.last_hidden_state[:, 0, :]
            cls_embedding = cls_embedding.squeeze().numpy()
            embeddings.append(cls_embedding)

    return embeddings

# Load the tokenizer and model from Hugging Face
tokenizer_DBERT = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
transformer_model_DBERT = DistilBertModel.from_pretrained('distilbert-base-uncased')

# Set the model to evaluation mode
transformer_model_DBERT.eval()

# Get the embeddings for the test data
max_len = max(len(text) for text in X_test)

# this may take a while to run
X_test_embeddings_DBERT = get_embeddings(X_test, tokenizer_DBERT, transformer_model_DBERT, max_len=max_len)

prediction = model.predict(X_test_embeddings_DBERT)

# Accuracy
from sklearn.metrics import accuracy_score

label_map = {'NBC': 1, 'FoxNews': 0}

def compute_category_accuracy(y_true, y_pred, label):
    n_correct = np.sum((y_true == label) & (y_pred == label))
    n_total = np.sum(y_true == label)
    cat_accuracy = n_correct / n_total
    return cat_accuracy

# Print accuracy
print(f'Test accuracy: {accuracy_score(y_test, prediction) * 100:.2f}%')
print(f'Test accuracy for NBC: {compute_category_accuracy(y_test, prediction, label_map["NBC"]) * 100:.2f}%')
print(f'Test accuracy for FoxNews: {compute_category_accuracy(y_test, prediction, label_map["FoxNews"]) * 100:.2f}%')
-->

##### END #####

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
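
Pending a dedicated snippet here, the condensed sketch below mirrors the evaluation pipeline documented at the top of this card (it assumes the same `awngsz/nn_model` repo, the `nn_model_v3.joblib` file, and a headline already cleaned/normalized as described above; the headline string is illustrative):

<pre>from huggingface_hub import hf_hub_download
import joblib
import torch
from transformers import AutoTokenizer, AutoModel

# Download and load the trained classifier
clf = joblib.load(hf_hub_download(repo_id='awngsz/nn_model', filename='nn_model_v3.joblib'))

# DistilBERT encoder used to embed headlines
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
encoder = AutoModel.from_pretrained('distilbert-base-uncased')
encoder.eval()

# Embed one (already cleaned/normalized) headline and classify it
headline = 'u.s. lawmaker debate new budget deal'  # illustrative example
tokens = tokenizer(headline, add_special_tokens=True, max_length=128,
                   padding='max_length', truncation=True, return_tensors='pt')
with torch.no_grad():
    cls_vec = encoder(**tokens).last_hidden_state[:, 0, :].numpy()

print(clf.predict(cls_vec))  # 0 = NBC, 1 = FoxNews (see label_map above)</pre>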

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]