Dzeniks committed 25c8711 (parent: b39efcf)

Create README.md

Files changed (1): README.md (+46, -0)
---
license: mit
pipeline_tag: text-classification
---
# Alberta Fact Checking Model

The Alberta Fact Checking Model is a natural language processing model that classifies a claim as supported or refuted by a given piece of evidence. It uses the ALBERT architecture with its matching tokenizer for sequence classification and was trained primarily on the FEVER, HoVer, and FEVEROUS datasets, supplemented with a small sample of hand-created data.

## Labels
The model returns one of two integer labels (see the mapping sketch below):
- 0 = Supports
- 1 = Refutes

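The repository's config may not expose a human-readable `id2label` mapping, so a minimal, purely illustrative sketch of an explicit mapping is shown here (the `ID2LABEL` name and `label_name` helper are assumptions, not part of the released model):

```python
# Illustrative mapping from the integer labels above to readable names.
# ID2LABEL and label_name are hypothetical helpers, not shipped with the model.
ID2LABEL = {0: "Supports", 1: "Refutes"}

def label_name(label_id: int) -> str:
    """Return the human-readable name for a predicted label id."""
    return ID2LABEL[label_id]
```
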
## Input
The input to the model is a claim paired with the evidence it is checked against, as shown in the Usage example below.

## Usage
The Alberta Fact Checking Model can be used to classify claims based on the evidence provided.

```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

# Load the tokenizer and model
tokenizer = AlbertTokenizer.from_pretrained('Dzeniks/alberta_fact_checking')
model = AlbertForSequenceClassification.from_pretrained('Dzeniks/alberta_fact_checking')

# Define the claim and the evidence to classify it against
claim = "Albert Einstein worked in the field of computer science"
evidence = "Albert Einstein was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time."

# Tokenize the claim together with the evidence as a sentence pair
x = tokenizer.encode_plus(claim, evidence, return_tensors="pt")

# Run inference without tracking gradients
model.eval()
with torch.no_grad():
    prediction = model(**x)

# 0 = Supports, 1 = Refutes
label = torch.argmax(prediction.logits, dim=1).item()

print(f"Label: {label}")
```
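
To get a confidence score alongside the label, the logits can be passed through a softmax. This is a minimal sketch, assuming `prediction` from the example above is still in scope:

```python
import torch.nn.functional as F

# Softmax over the two logits yields per-class probabilities
probs = F.softmax(prediction.logits, dim=1)
print(f"Supports: {probs[0][0].item():.3f}, Refutes: {probs[0][1].item():.3f}")
```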

## Disclaimer

While the Alberta Fact Checking Model has been trained on a large dataset and can provide accurate results in many cases, it may not always produce correct predictions. Users should always exercise caution when making decisions based on the output of any machine learning model.