Files changed (3)
  1. .gitattributes +0 -1
  2. README.md +78 -109
  3. model.safetensors +0 -3
.gitattributes CHANGED
@@ -25,4 +25,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
- model.safetensors filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,110 +1,79 @@
- ---
- language:
- - en
- tags:
- - aspect-based-sentiment-analysis
- - PyABSA
- license: mit
- datasets:
- - laptop14
- - restaurant14
- - restaurant16
- - ACL-Twitter
- - MAMS
- - Television
- - TShirt
- - Yelp
- metrics:
- - accuracy
- - macro-f1
- widget:
- - text: "[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP] "
- ---
-
- # Powered by [PyABSA](https://github.com/yangheng95/PyABSA): An open source tool for aspect-based sentiment analysis
- This model is training with 30k+ ABSA samples, see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). Yet the test sets are not included in pre-training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., Laptop14, Rest14 datasets. (Except for the Rest15 dataset!)
-
-
- ## Usage
- ```python3
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
-
- # Load the ABSA model and tokenizer
- model_name = "yangheng/deberta-v3-base-absa-v1.1"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForSequenceClassification.from_pretrained(model_name)
-
- classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
-
- for aspect in ['camera', 'phone']:
-     print(aspect, classifier('The camera quality of this phone is amazing.', text_pair=aspect))
- ```
-
- # DeBERTa for aspect-based sentiment analysis
- The `deberta-v3-base-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).
-
- ## Training Model
- This model is trained based on the FAST-LCF-BERT model with `microsoft/deberta-v3-base`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA).
- To track state-of-the-art models, please see [PyASBA](https://github.com/yangheng95/PyABSA).
-
- ## Example in PyASBA
- An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) for using FAST-LCF-BERT in PyASBA datasets.
-
- ## Datasets
- This model is fine-tuned with 180k examples for the ABSA dataset (including augmented data). Training dataset files:
- ```
- loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
- loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
- loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
- loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
- loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
- loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
- loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
- loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
-
- ```
- If you use this model in your research, please cite our papers:
- ```
- @inproceedings{DBLP:conf/cikm/0008ZL23,
-   author    = {Heng Yang and
-                Chen Zhang and
-                Ke Li},
-   editor    = {Ingo Frommholz and
-                Frank Hopfgartner and
-                Mark Lee and
-                Michael Oakes and
-                Mounia Lalmas and
-                Min Zhang and
-                Rodrygo L. T. Santos},
-   title     = {PyABSA: {A} Modularized Framework for Reproducible Aspect-based Sentiment
-                Analysis},
-   booktitle = {Proceedings of the 32nd {ACM} International Conference on Information
-                and Knowledge Management, {CIKM} 2023, Birmingham, United Kingdom,
-                October 21-25, 2023},
-   pages     = {5117--5122},
-   publisher = {{ACM}},
-   year      = {2023},
-   url       = {https://doi.org/10.1145/3583780.3614752},
-   doi       = {10.1145/3583780.3614752},
-   timestamp = {Thu, 23 Nov 2023 13:25:05 +0100},
-   biburl    = {https://dblp.org/rec/conf/cikm/0008ZL23.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
- @article{YangZMT21,
-   author    = {Heng Yang and
-                Biqing Zeng and
-                Mayi Xu and
-                Tianxing Wang},
-   title     = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable
-                Sentiment Dependency Learning},
-   journal   = {CoRR},
-   volume    = {abs/2110.08604},
-   year      = {2021},
-   url       = {https://arxiv.org/abs/2110.08604},
-   eprinttype = {arXiv},
-   eprint    = {2110.08604},
-   timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
-   biburl    = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
+
+ ---
+ language:
+ - en
+ tags:
+ - aspect-based-sentiment-analysis
+ - PyABSA
+ license: mit
+ datasets:
+ - laptop14
+ - restaurant14
+ - restaurant16
+ - ACL-Twitter
+ - MAMS
+ - Television
+ - TShirt
+ - Yelp
+ metrics:
+ - accuracy
+ - macro-f1
+ widget:
+ - text: "[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP] "
+ ---
+
+ # Note
+ This model is trained on 30k+ ABSA samples; see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). The test sets are not included in training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets (but not the Rest15 dataset).
+
+ # DeBERTa for aspect-based sentiment analysis
+ The `deberta-v3-base-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).
+
+ ## Training Model
+ This model is trained with the FAST-LCF-BERT architecture on top of `microsoft/deberta-v3-base`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA).
+ To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA).
+
+ ## Usage
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
+
+ model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
+ ```
+
+ ## Example in PyABSA
+ An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) of using FAST-LCF-BERT with PyABSA datasets.
+
+ ## Datasets
+ This model is fine-tuned on 180k examples from the ABSA datasets (including augmented data). Training dataset files:
+ ```
+ loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
+ loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
+ loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
+ loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
+ loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
+ loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
+ loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
+ loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
+
+ ```
+ If you use this model in your research, please cite our paper:
+ ```
+ @article{YangZMT21,
+   author    = {Heng Yang and
+                Biqing Zeng and
+                Mayi Xu and
+                Tianxing Wang},
+   title     = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable
+                Sentiment Dependency Learning},
+   journal   = {CoRR},
+   volume    = {abs/2110.08604},
+   year      = {2021},
+   url       = {https://arxiv.org/abs/2110.08604},
+   eprinttype = {arXiv},
+   eprint    = {2110.08604},
+   timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
+   biburl    = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
  ```
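The new README's Usage section stops after loading the tokenizer and model. A minimal inference sketch that wraps them in a Transformers `text-classification` pipeline, as the previous README version did (the example sentence and aspect list are illustrative; extra keyword arguments such as `text_pair` are forwarded to the tokenizer):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "yangheng/deberta-v3-base-absa-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# The pipeline scores one (sentence, aspect) pair per call; text_pair is
# passed through to the tokenizer as the second segment.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

for aspect in ["camera", "phone"]:
    print(aspect, classifier("The camera quality of this phone is amazing.", text_pair=aspect))
```

Each call returns a list containing a dict with the predicted `label` and its `score`.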
model.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:270559fb59fec507f6a105d2c7765af0b61c685670dbbcf52e551c6fa160601e
- size 737726548