ClinicalMetaScience committed
Commit
8537e41
1 Parent(s): 555e9e6

Update README.md

Files changed (1): README.md (+3 -5)
README.md CHANGED
@@ -15,7 +15,7 @@ SciBERT text classification model for positive and negative results prediction i
 We annotated over 1,900 clinical psychology abstracts into two categories: 'positive results only' and 'mixed and negative results', and trained models using SciBERT.
 The SciBERT model was validated against one in-domain (clinical psychology) and two out-of-domain data sets comprising psychotherapy abstracts. We compared model performance with Random Forest and three further benchmarks: natural language indicators of result types, *p*-values, and abstract length.
 SciBERT outperformed all benchmarks and Random Forest on in-domain (accuracy: 0.86) and out-of-domain data (accuracy: 0.85-0.88).
-Further information on documentation, code, and data for the project "Publication Bias Research in Clinical Psychology Using Natural Language Processing" can be found in the GitHub repository [PubBiasDetect](https://github.com/PsyCapsLock/PubBiasDetect).
+Further information on documentation, code, and data for the project "Publication Bias Research in Clinical Psychology Using Natural Language Processing" can be found in the [GitHub repository](https://github.com/PsyCapsLock/PubBiasDetect).
 
 ## Using the Model on Hugging Face
 The model can be used on Hugging Face via the "Hosted inference API" in the window on the right.
@@ -23,8 +23,6 @@ Click 'Compute' to predict the class labels for an example abstract or an abstract of your choice.
 The class label 'positive' corresponds to 'positive results only', while 'negative' represents 'mixed and negative results'.
 
 ## Disclaimer
-This tool is developed to analyze and predict the prevalence of positive and negative results in scientific abstracts based on the SciBERT model. The text classification procedure can determine with an accuracy of 0.85-0.88 for clinical psychology and psychotherapy abstracts whether an abstract reports only positive results ('positive results only') or at least one negative result ('mixed and negative results'). While publication bias is a plausible explanation for certain patterns observed in the scientific literature, the analyses conducted by this tool do not conclusively establish the presence of publication bias or any other underlying factors. It is essential to understand that this tool evaluates data but does not delve into the underlying reasons for the observed trends.
+This tool is developed to analyze and predict the prevalence of positive and negative results in scientific abstracts based on the SciBERT model. While publication bias is a plausible explanation for certain patterns of results observed in the scientific literature, the analyses conducted by this tool do not conclusively establish the presence of publication bias or any other underlying factors. It is essential to understand that this tool evaluates data but does not delve into the underlying reasons for the observed trends.
 
-The validation of this tool has been conducted on primary studies from the field of clinical psychology and psychotherapy. While it might yield insights when applied to abstracts from other fields or other types of studies, such as meta-analyses, its applicability and accuracy in such contexts have not been thoroughly tested or validated. Hence, caution should be exercised when extending the use of this tool beyond its validated scope of primary studies. The developers of this tool are not responsible for any misinterpretation or misuse of the tool's results, and encourage users to have a comprehensive understanding of the limitations inherent in statistical analysis and prediction models like this one.
-
-
+The validation of this tool has been conducted on primary studies from the field of clinical psychology and psychotherapy. While it might yield insights when applied to abstracts from other fields or other types of studies (such as meta-analyses), its applicability and accuracy in such contexts have not been thoroughly tested yet. The developers of this tool are not responsible for any misinterpretation or misuse of the tool's results, and encourage users to have a comprehensive understanding of the limitations inherent in statistical analysis and prediction models.
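Beyond the hosted inference widget, a model like this can also be queried locally. The sketch below is illustrative only: it assumes the `transformers` library, uses a placeholder model id (the actual repository name must be taken from the model page), and the label-to-meaning mapping follows the class labels described in the README.

```python
# Illustrative sketch of local inference with the transformers library.
# NOTE: the model id is a placeholder, not the verified repository name.
MODEL_ID = "ClinicalMetaScience/<model-name>"  # placeholder; substitute the real id

# Mapping from the model's class labels to their meanings, per the README.
LABEL_MEANINGS = {
    "positive": "positive results only",
    "negative": "mixed and negative results",
}

def classify_abstract(text: str, model_id: str = MODEL_ID):
    """Classify one abstract; returns (human-readable label, score)."""
    # Imported lazily so the mapping above is usable without transformers installed.
    from transformers import pipeline
    clf = pipeline("text-classification", model=model_id)
    prediction = clf(text)[0]  # a dict with "label" and "score" keys
    label = LABEL_MEANINGS.get(prediction["label"], prediction["label"])
    return label, prediction["score"]
```

With the placeholder replaced by the real model id, `classify_abstract(...)` would return the human-readable result category together with the model's confidence score.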