Commit 9647415 by HannaAbiAkl
1 Parent(s): b950365

Update README.md

Files changed (1)
README.md +14 -8
README.md CHANGED
@@ -4,30 +4,36 @@ base_model: distilbert-base-uncased
 tags:
 - generated_from_trainer
 model-index:
-- name: qa_model_2
+- name: psychic
   results: []
+datasets:
+- awalesushil/DBLP-QuAD
+language:
+- en
+library_name: transformers
+pipeline_tag: question-answering
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-# qa_model_2
+# PSYCHIC

-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
-It achieves the following results on the evaluation set:
+PSYCHIC (Pre-trained SYmbolic CHecker In Context) is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the DBLP-QuAD dataset. It achieves the following results on the evaluation set:
 - Loss: 0.0000

 ## Model description

-More information needed
+The model is trained to learn specific tokens from a question and its context so that it can better extract the answer from the context. It is fine-tuned on the extractive QA task, for which it returns the answer to a knowledge graph question in the form of a SPARQL query.
+The advantage of PSYCHIC is that it leverages neuro-symbolic capabilities to validate query structures as well as LLM capabilities to learn from context tokens.

 ## Intended uses & limitations

-More information needed
+This model is intended to be used with a question-context pair to determine the answer in the form of a SPARQL query.

 ## Training and evaluation data

-More information needed
+The DBLP-QuAD dataset is used for training and evaluation.

 ## Training procedure

@@ -56,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.33.1
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
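
The intended use described in the new card (a question-context pair in, a SPARQL answer span out) maps onto the standard transformers question-answering pipeline. A minimal sketch follows; the repo id `HannaAbiAkl/psychic` is inferred from the committer and model name in this diff and is an assumption, as is the illustrative question-context pair, which is not taken from DBLP-QuAD.

```python
# Hedged usage sketch, not part of the commit: assumes the checkpoint is
# published as "HannaAbiAkl/psychic" (committer + model name from this diff)
# and keeps the default extractive-QA head, so the stock transformers
# question-answering pipeline applies.
from transformers import pipeline

qa = pipeline(
    "question-answering",         # matches the card's pipeline_tag
    model="HannaAbiAkl/psychic",  # hypothetical repo id
)

# Illustrative pair in the spirit of DBLP-QuAD: the context holds a candidate
# SPARQL query, and the model extracts the answer as a span of that context.
question = "What are the papers written by Jane Doe?"
context = (
    "SELECT DISTINCT ?answer WHERE { "
    "?answer <https://dblp.org/rdf/schema#authoredBy> ?person . "
    "?person <http://www.w3.org/2000/01/rdf-schema#label> 'Jane Doe' }"
)

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```

Because the task is extractive, the returned answer is a span copied verbatim from the context, which fits the card's framing of validating query structures rather than generating them.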