hamzamalik11 committed
Commit b8289fa
1 Parent(s): e7c01ca

Update README.md

Files changed (1)
  1. README.md +18 -18
README.md CHANGED
@@ -54,7 +54,7 @@ The model should not be used for any purpose other than generating impressions f
 ### Recommendations


- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+ Users should be aware of the limitations and potential biases of the model when using the generated impressions for clinical decision-making. Further information is needed to provide specific recommendations.

 ## How to Get Started with the Model

@@ -69,6 +69,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
 from transformers import SummarizationPipeline

 summarizer = SummarizationPipeline(model=model, tokenizer=tokenizer)
+
 output= summarizer("heart size normal mediastinal hilar contours remain stable small right pneumothorax remains unchanged surgical lung staples overlying
 left upper lobe seen linear pattern consistent prior upper lobe resection soft tissue osseous structures appear unremarkable nasogastric
 endotracheal tubes remain satisfactory position atelectatic changes right lower lung field remain unchanged prior study")
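The report string in the snippet above is wrapped across several lines in the card, so it would not run if pasted as-is. A minimal runnable sketch of the same call follows; the checkpoint name is a placeholder (the actual `model_checkpoint` is defined earlier in the card), not a value taken from this commit.

```python
# Minimal runnable sketch of the usage example above.
# The checkpoint name below is a placeholder -- substitute the checkpoint
# named earlier in the model card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, SummarizationPipeline

model_checkpoint = "<model-checkpoint-name>"  # placeholder
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

summarizer = SummarizationPipeline(model=model, tokenizer=tokenizer)

findings = (
    "heart size normal mediastinal hilar contours remain stable small right "
    "pneumothorax remains unchanged surgical lung staples overlying left upper "
    "lobe seen linear pattern consistent prior upper lobe resection soft tissue "
    "osseous structures appear unremarkable nasogastric endotracheal tubes remain "
    "satisfactory position atelectatic changes right lower lung field remain "
    "unchanged prior study"
)
output = summarizer(findings)
print(output[0]["summary_text"])  # generated impression
```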
@@ -77,9 +78,8 @@ output= summarizer("heart size normal mediastinal hilar contours remain stable s
 ## Training Details

 ### Training Data
- -Data Source: The training data was a custom dataset of 70,000 radiology reports.
- -Data Cleaning: The data was cleaned to remove any personal or confidential information. The data was also tokenized and normalized.
- -Data Split: The training data was split into a training set and a validation set. The training set consisted of 63,000 radiology reports, and the validation set consisted of 7,000 radiology reports.
+ The training data was a custom dataset of 70,000 radiology reports. The data was cleaned to remove any personal or confidential information. The data was also tokenized and normalized.
+ The training data was split into a training set and a validation set. The training set consisted of 63,000 radiology reports, and the validation set consisted of 7,000 radiology reports.



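The 63,000 / 7,000 split described above corresponds to a 90/10 train-validation split. A minimal sketch with the `datasets` library follows; the column names and example rows are illustrative placeholders, not taken from the card.

```python
# Sketch of a 90/10 train-validation split with Hugging Face `datasets`.
# Column names and example rows are illustrative placeholders.
from datasets import Dataset

reports = Dataset.from_dict({
    "findings": [f"example findings text {i}" for i in range(10)],
    "impression": [f"example impression {i}" for i in range(10)],
})

split = reports.train_test_split(test_size=0.1, seed=42)  # 63,000 / 7,000 at full scale
train_ds, eval_ds = split["train"], split["test"]
print(len(train_ds), len(eval_ds))
```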
@@ -91,16 +91,16 @@ The model was trained using the Hugging Face Transformers library: https://huggi
 #### Training Hyperparameters

 - **Training regime:**
- -evaluation_strategy="epoch",
- -learning_rate=5.6e-5,
- -per_device_train_batch_size=batch_size //4,
- -per_device_eval_batch_size=batch_size //4,
- -weight_decay=0.01,
- -save_total_limit=3,
- -num_train_epochs=num_train_epochs,
- -predict_with_generate=True,
- -logging_steps=logging_steps,
- -push_to_hub=False,
+   - evaluation_strategy="epoch"
+   - learning_rate=5.6e-5
+   - per_device_train_batch_size=batch_size // 4
+   - per_device_eval_batch_size=batch_size // 4
+   - weight_decay=0.01
+   - save_total_limit=3
+   - num_train_epochs=num_train_epochs
+   - predict_with_generate=True
+   - logging_steps=logging_steps
+   - push_to_hub=False



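The hyperparameters listed above map onto `transformers.Seq2SeqTrainingArguments`. A sketch follows, under the assumption that `batch_size`, `num_train_epochs`, and `logging_steps` are defined elsewhere in the card; the values chosen here and the output directory are illustrative.

```python
# Sketch: passing the listed hyperparameters to Seq2SeqTrainingArguments.
# batch_size, num_train_epochs, logging_steps and output_dir are placeholders;
# learning_rate, weight_decay, save_total_limit, etc. come from the list above.
from transformers import Seq2SeqTrainingArguments

batch_size = 8          # illustrative
num_train_epochs = 8    # illustrative
logging_steps = 500     # illustrative

training_args = Seq2SeqTrainingArguments(
    output_dir="radiology-impression-model",   # illustrative
    evaluation_strategy="epoch",
    learning_rate=5.6e-5,
    per_device_train_batch_size=batch_size // 4,
    per_device_eval_batch_size=batch_size // 4,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=num_train_epochs,
    predict_with_generate=True,
    logging_steps=logging_steps,
    push_to_hub=False,
)
```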
@@ -113,10 +113,10 @@ The testing data consisted of 10,000 radiology reports.

 #### Factors
 The following factors were evaluated:
- -ROUGE-1
- -ROUGE-2
- -ROUGE-L
- -ROUGELSUM
+ - ROUGE-1
+ - ROUGE-2
+ - ROUGE-L
+ - ROUGE-Lsum

 #### Metrics
 The following metrics were used to evaluate the model:
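The ROUGE variants listed under Factors (ROUGE-1, ROUGE-2, ROUGE-L, ROUGE-Lsum) can be computed with the `evaluate` library. A minimal sketch with placeholder predictions and references, not data from the model card:

```python
# Sketch: computing ROUGE-1/2/L/Lsum with the `evaluate` library.
# The predictions and references below are placeholders.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["small right pneumothorax remains unchanged"]
references = ["stable small right pneumothorax"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```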