bibtex_url | acl_proceedings | bibtext | abstract | authors | title | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | paper_page_exists_pre_conf | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.lchange-1.12.bib | https://aclanthology.org/2023.lchange-1.12/ | @inproceedings{siewert-etal-2023-changing-usage,
title = "Changing usage of {L}ow {S}axon auxiliary and modal verbs",
author = "Siewert, Janine and
Wieling, Martijn and
Scherrer, Yves",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Dubossarsky, Haim and
Kutuzov, Andrey and
Hengchen, Simon and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.lchange-1.12",
doi = "10.18653/v1/2023.lchange-1.12",
pages = "112--118",
abstract = "We investigate the usage of auxiliary and modal verbs in Low Saxon dialects from both Germany and the Netherlands based on word vectors, and compare developments in the modern language to Middle Low Saxon. Although most of these function words have not been affected by lexical replacement, changes in usage that likely at least partly result from contact with the state languages can still be observed.",
}
| We investigate the usage of auxiliary and modal verbs in Low Saxon dialects from both Germany and the Netherlands based on word vectors, and compare developments in the modern language to Middle Low Saxon. Although most of these function words have not been affected by lexical replacement, changes in usage that likely at least partly result from contact with the state languages can still be observed. | [
"Siewert, Janine",
"Wieling, Martijn",
"Scherrer, Yves"
] | Changing usage of Low Saxon auxiliary and modal verbs | lchange-1.12 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.lchange-1.13.bib | https://aclanthology.org/2023.lchange-1.13/ | @inproceedings{baes-etal-2023-semantic-shifts,
title = "Semantic Shifts in Mental Health-Related Concepts",
author = "Baes, Naomi and
Haslam, Nick and
Vylomova, Ekaterina",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Dubossarsky, Haim and
Kutuzov, Andrey and
Hengchen, Simon and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.lchange-1.13",
doi = "10.18653/v1/2023.lchange-1.13",
pages = "119--128",
abstract = "The present study evaluates semantic shifts in mental health-related concepts in two diachronic corpora spanning 1970-2016, one academic and one general. It evaluates whether their meanings have broadened to encompass less severe phenomena and whether they have become more pathology related. It applies a recently proposed methodology (Baes et al., 2023) to examine whether words collocating with a sample of mental health concepts have become less emotionally intense and develops a new way to examine whether the concepts increasingly co-occur with pathology-related terms. In support of the first hypothesis, mental health-related concepts became associated with less emotionally intense language in the psychology corpus (addiction, anger, stress, worry) and in the general corpus (addiction, grief, stress, worry). In support of the second hypothesis, mental health-related concepts came to be more associated with pathology-related language in psychology (addiction, grief, stress, worry) and in the general corpus (grief, stress). Findings demonstrate that some mental health concepts have become normalized and/or pathologized, a conclusion with important social and cultural implications.",
}
| The present study evaluates semantic shifts in mental health-related concepts in two diachronic corpora spanning 1970-2016, one academic and one general. It evaluates whether their meanings have broadened to encompass less severe phenomena and whether they have become more pathology related. It applies a recently proposed methodology (Baes et al., 2023) to examine whether words collocating with a sample of mental health concepts have become less emotionally intense and develops a new way to examine whether the concepts increasingly co-occur with pathology-related terms. In support of the first hypothesis, mental health-related concepts became associated with less emotionally intense language in the psychology corpus (addiction, anger, stress, worry) and in the general corpus (addiction, grief, stress, worry). In support of the second hypothesis, mental health-related concepts came to be more associated with pathology-related language in psychology (addiction, grief, stress, worry) and in the general corpus (grief, stress). Findings demonstrate that some mental health concepts have become normalized and/or pathologized, a conclusion with important social and cultural implications. | [
"Baes, Naomi",
"Haslam, Nick",
"Vylomova, Ekaterina"
] | Semantic Shifts in Mental Health-Related Concepts | lchange-1.13 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.lchange-1.14.bib | https://aclanthology.org/2023.lchange-1.14/ | @inproceedings{chang-etal-2023-automating-sound,
title = "Automating Sound Change Prediction for Phylogenetic Inference: A Tukanoan Case Study",
author = "Chang, Kalvin and
Robinson, Nathaniel and
Cai, Anna and
Chen, Ting and
Zhang, Annie and
Mortensen, David",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Dubossarsky, Haim and
Kutuzov, Andrey and
Hengchen, Simon and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.lchange-1.14",
doi = "10.18653/v1/2023.lchange-1.14",
pages = "129--142",
    abstract = "We describe a set of new methods to partially automate linguistic phylogenetic inference given (1) cognate sets with their respective protoforms and sound laws, (2) a mapping from phones to their articulatory features and (3) a typological database of sound changes. We train a neural network on these sound change data to weight articulatory distances between phones and predict intermediate sound change steps between historical protoforms and their modern descendants, replacing a linguistic expert in part of a parsimony-based phylogenetic inference algorithm. In our best experiments on Tukanoan languages, this method produces trees with a Generalized Quartet Distance of 0.12 from a tree that used expert annotations, a significant improvement over other semi-automated baselines. We discuss potential benefits and drawbacks to our neural approach and parsimony-based tree prediction. We also experiment with a minimal generalization learner for automatic sound law induction, finding it less effective than sound laws from expert annotation. Our code is publicly available.",
}
| We describe a set of new methods to partially automate linguistic phylogenetic inference given (1) cognate sets with their respective protoforms and sound laws, (2) a mapping from phones to their articulatory features and (3) a typological database of sound changes. We train a neural network on these sound change data to weight articulatory distances between phones and predict intermediate sound change steps between historical protoforms and their modern descendants, replacing a linguistic expert in part of a parsimony-based phylogenetic inference algorithm. In our best experiments on Tukanoan languages, this method produces trees with a Generalized Quartet Distance of 0.12 from a tree that used expert annotations, a significant improvement over other semi-automated baselines. We discuss potential benefits and drawbacks to our neural approach and parsimony-based tree prediction. We also experiment with a minimal generalization learner for automatic sound law induction, finding it less effective than sound laws from expert annotation. Our code is publicly available. | [
"Chang, Kalvin",
"Robinson, Nathaniel",
"Cai, Anna",
"Chen, Ting",
"Zhang, Annie",
"Mortensen, David"
] | Automating Sound Change Prediction for Phylogenetic Inference: A Tukanoan Case Study | lchange-1.14 | 2402.01582 | [
"https://github.com/cmu-llab/aiscp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.lchange-1.15.bib | https://aclanthology.org/2023.lchange-1.15/ | @inproceedings{paccosi-etal-2023-scent-sensibility,
title = "Scent and Sensibility: Perception Shifts in the Olfactory Domain",
author = "Paccosi, Teresa and
Menini, Stefano and
Leonardelli, Elisa and
Barzon, Ilaria and
Tonelli, Sara",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Dubossarsky, Haim and
Kutuzov, Andrey and
Hengchen, Simon and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.lchange-1.15",
doi = "10.18653/v1/2023.lchange-1.15",
pages = "143--152",
abstract = "In this work, we investigate olfactory perception shifts, analysing how the description of the smells emitted by specific sources has changed over time. We first create a benchmark of selected smell sources, relying upon existing historical studies related to olfaction. We also collect an English text corpus by retrieving large collections of documents from freely available resources, spanning from 1500 to 2000 and covering different domains. We label such corpus using a system for olfactory information extraction inspired by frame semantics, where the semantic roles around the smell sources in the benchmark are marked. We then analyse how the roles describing Qualities of smell sources change over time and how they can contribute to characterise perception shifts, also in comparison with more standard statistical approaches.",
}
| In this work, we investigate olfactory perception shifts, analysing how the description of the smells emitted by specific sources has changed over time. We first create a benchmark of selected smell sources, relying upon existing historical studies related to olfaction. We also collect an English text corpus by retrieving large collections of documents from freely available resources, spanning from 1500 to 2000 and covering different domains. We label such corpus using a system for olfactory information extraction inspired by frame semantics, where the semantic roles around the smell sources in the benchmark are marked. We then analyse how the roles describing Qualities of smell sources change over time and how they can contribute to characterise perception shifts, also in comparison with more standard statistical approaches. | [
"Paccosi, Teresa",
"Menini, Stefano",
"Leonardelli, Elisa",
"Barzon, Ilaria",
"Tonelli, Sara"
] | Scent and Sensibility: Perception Shifts in the Olfactory Domain | lchange-1.15 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.lchange-1.16.bib | https://aclanthology.org/2023.lchange-1.16/ | @inproceedings{gribomont-2023-diachronic-contextual,
title = "From Diachronic to Contextual Lexical Semantic Change: Introducing Semantic Difference Keywords ({SDK}s) for Discourse Studies",
author = "Gribomont, Isabelle",
editor = "Tahmasebi, Nina and
Montariol, Syrielle and
Dubossarsky, Haim and
Kutuzov, Andrey and
Hengchen, Simon and
Alfter, David and
Periti, Francesco and
Cassotti, Pierluigi",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.lchange-1.16",
doi = "10.18653/v1/2023.lchange-1.16",
pages = "153--160",
abstract = "This paper introduces the concept of Semantic Difference Keywords (SDKs). We define SDKs as keywords selected because of a comparatively high semantic difference between their use in two or more corpora. They are extracted by applying methods developed to identify diachronic Lexical Semantic Change. Like statistical keywords, most commonly used in quantitative discourse studies, SDKs capture the distinctiveness of a target corpus. However, they do not do so because they are used significantly more often or more consistently, but because they are used significantly differently. The case study presented in this paper shows that SDKs are successful in identifying concepts which are contested, i.e., sites of {``}semantic struggles{''} (CITATION). SDKs are therefore a useful contribution to (computational) discourse studies and text-based Digital Humanities more broadly.",
}
| This paper introduces the concept of Semantic Difference Keywords (SDKs). We define SDKs as keywords selected because of a comparatively high semantic difference between their use in two or more corpora. They are extracted by applying methods developed to identify diachronic Lexical Semantic Change. Like statistical keywords, most commonly used in quantitative discourse studies, SDKs capture the distinctiveness of a target corpus. However, they do not do so because they are used significantly more often or more consistently, but because they are used significantly differently. The case study presented in this paper shows that SDKs are successful in identifying concepts which are contested, i.e., sites of {``}semantic struggles{''} (CITATION). SDKs are therefore a useful contribution to (computational) discourse studies and text-based Digital Humanities more broadly. | [
"Gribomont, Isabelle"
] | From Diachronic to Contextual Lexical Semantic Change: Introducing Semantic Difference Keywords (SDKs) for Discourse Studies | lchange-1.16 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.1.bib | https://aclanthology.org/2023.mrl-1.1/ | @inproceedings{fang-etal-2023-unibrivl,
title = "{U}ni{B}ri{VL}: Robust Audio Representation and Generation of Audio Driven Diffusion Models",
author = "Fang, Sen and
Gao, Bowen and
Wu, Yangjian and
Teoh, TeikToe",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.1",
doi = "10.18653/v1/2023.mrl-1.1",
pages = "1--11",
}
| No abstract found | [
"Fang, Sen",
"Gao, Bowen",
"Wu, Yangjian",
"Teoh, TeikToe"
] | UniBriVL: Robust Audio Representation and Generation of Audio Driven Diffusion Models | mrl-1.1 | 2307.15898 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.2.bib | https://aclanthology.org/2023.mrl-1.2/ | @inproceedings{hu-keller-2023-meta,
title = "Meta-learning For Vision-and-language Cross-lingual Transfer",
author = "Hu, Hanxu and
Keller, Frank",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.2",
doi = "10.18653/v1/2023.mrl-1.2",
pages = "12--23",
}
| No abstract found | [
"Hu, Hanxu",
"Keller, Frank"
] | Meta-learning For Vision-and-language Cross-lingual Transfer | mrl-1.2 | 2305.14843 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.3.bib | https://aclanthology.org/2023.mrl-1.3/ | @inproceedings{srinivasan-etal-2023-counterfactually,
title = "Counterfactually Probing Language Identity in Multilingual Models",
author = "Srinivasan, Anirudh and
Govindarajan, Venkata Subrahmanyan and
Mahowald, Kyle",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.3",
doi = "10.18653/v1/2023.mrl-1.3",
pages = "24--36",
}
| No abstract found | [
"Srinivasan, Anirudh",
"Govindarajan, Venkata Subrahmanyan",
"Mahowald, Kyle"
] | Counterfactually Probing Language Identity in Multilingual Models | mrl-1.3 | 2310.18862 | [
"https://github.com/venkatasg/multilingual-counterfactual-probing"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.4.bib | https://aclanthology.org/2023.mrl-1.4/ | @inproceedings{robert-litschko-etal-2023-general,
title = "A General-Purpose Multilingual Document Encoder",
    author = "Galo{\u{g}}lu, Onur and
      Litschko, Robert and
      Glava{\v{s}}, Goran",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.4",
doi = "10.18653/v1/2023.mrl-1.4",
pages = "37--49",
}
| No abstract found | [
  "Galo{\\u{g}}lu, Onur",
"Litschko, Robert",
"Glava{\\v{s}}, Goran"
] | A General-Purpose Multilingual Document Encoder | mrl-1.4 | 2305.07016 | [
"https://github.com/ogaloglu/pre-training-multilingual-document-encoders"
] | https://huggingface.co/papers/2305.07016 | 0 | 1 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.mrl-1.5.bib | https://aclanthology.org/2023.mrl-1.5/ | @inproceedings{de-raedt-etal-2023-zero,
title = "Zero-Shot Cross-Lingual Sentiment Classification under Distribution Shift: an Exploratory Study",
author = "De Raedt, Maarten and
Bitew, Semere Kiros and
Godin, Fr{\'e}deric and
Demeester, Thomas and
Develder, Chris",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.5",
doi = "10.18653/v1/2023.mrl-1.5",
pages = "50--66",
}
| No abstract found | [
"De Raedt, Maarten",
"Bitew, Semere Kiros",
"Godin, Fr{\\'e}deric",
"Demeester, Thomas",
"Develder, Chris"
] | Zero-Shot Cross-Lingual Sentiment Classification under Distribution Shift: an Exploratory Study | mrl-1.5 | 2311.06549 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.6.bib | https://aclanthology.org/2023.mrl-1.6/ | @inproceedings{rahman-etal-2023-token,
title = "To token or not to token: A Comparative Study of Text Representations for Cross-Lingual Transfer",
author = "Rahman, Md Mushfiqur and
Sakib, Fardin Ahsan and
Faisal, Fahim and
Anastasopoulos, Antonios",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.6",
doi = "10.18653/v1/2023.mrl-1.6",
pages = "67--84",
}
| No abstract found | [
"Rahman, Md Mushfiqur",
"Sakib, Fardin Ahsan",
"Faisal, Fahim",
"Anastasopoulos, Antonios"
] | To token or not to token: A Comparative Study of Text Representations for Cross-Lingual Transfer | mrl-1.6 | 2310.08078 | [
"https://github.com/mushfiqur11/tokenfreetransfer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.7.bib | https://aclanthology.org/2023.mrl-1.7/ | @inproceedings{kim-etal-2023-adapt,
title = "Adapt and Prune Strategy for Multilingual Speech Foundational Model on Low-resourced Languages",
author = "Kim, Hyeon Soo and
Cho, Chung Hyeon and
Won, Hyejin and
Park, Kyung Ho",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.7",
doi = "10.18653/v1/2023.mrl-1.7",
pages = "85--94",
}
| No abstract found | [
"Kim, Hyeon Soo",
"Cho, Chung Hyeon",
"Won, Hyejin",
"Park, Kyung Ho"
] | Adapt and Prune Strategy for Multilingual Speech Foundational Model on Low-resourced Languages | mrl-1.7 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.8.bib | https://aclanthology.org/2023.mrl-1.8/ | @inproceedings{hangya-etal-2023-multilingual,
title = "Multilingual Word Embeddings for Low-Resource Languages using Anchors and a Chain of Related Languages",
author = {Hangya, Viktor and
Severini, Silvia and
Ralev, Radoslav and
Fraser, Alexander and
Sch{\"u}tze, Hinrich},
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.8",
doi = "10.18653/v1/2023.mrl-1.8",
pages = "95--105",
}
| No abstract found | [
"Hangya, Viktor",
"Severini, Silvia",
"Ralev, Radoslav",
  "Fraser, Alexander",
"Sch{\\\"u}tze, Hinrich"
] | Multilingual Word Embeddings for Low-Resource Languages using Anchors and a Chain of Related Languages | mrl-1.8 | 2311.12489 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.9.bib | https://aclanthology.org/2023.mrl-1.9/ | @inproceedings{jones-etal-2023-talamt,
title = "{T}ala{MT}: Multilingual Machine Translation for {C}ab{\'e}car-{B}ribri-{S}panish",
author = "Jones, Alex and
Coto-Solano, Rolando and
Gonz{\'a}lez Campos, Guillermo",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.9",
doi = "10.18653/v1/2023.mrl-1.9",
pages = "106--117",
}
| No abstract found | [
"Jones, Alex",
  "Coto-Solano, Rolando",
"Gonz{\\'a}lez Campos, Guillermo"
] | TalaMT: Multilingual Machine Translation for Cabécar-Bribri-Spanish | mrl-1.9 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.10.bib | https://aclanthology.org/2023.mrl-1.10/ | @inproceedings{seo-etal-2023-mergen,
title = "Mergen: The First {M}anchu-{K}orean Machine Translation Model Trained on Augmented Data",
author = "Seo, Jean and
Byun, Sungjoo and
Kang, Minha and
Lee, Sangah",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.10",
doi = "10.18653/v1/2023.mrl-1.10",
pages = "118--124",
}
| No abstract found | [
"Seo, Jean",
"Byun, Sungjoo",
"Kang, Minha",
"Lee, Sangah"
] | Mergen: The First Manchu-Korean Machine Translation Model Trained on Augmented Data | mrl-1.10 | 2311.17492 | [
""
] | https://huggingface.co/papers/2311.17492 | 0 | 1 | 0 | 4 | [
"seemdog/manchuBERT"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.mrl-1.11.bib | https://aclanthology.org/2023.mrl-1.11/ | @inproceedings{ma-etal-2023-improving-cross,
title = "Improving Cross-Lingual Transfer for Open Information Extraction with Linguistic Feature Projection",
author = "Ma, Youmi and
Kotnis, Bhushan and
Lawrence, Carolin and
Glava{\v{s}}, Goran and
Okazaki, Naoaki",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.11",
doi = "10.18653/v1/2023.mrl-1.11",
pages = "125--138",
}
| No abstract found | [
"Ma, Youmi",
"Kotnis, Bhushan",
"Lawrence, Carolin",
"Glava{\\v{s}}, Goran",
"Okazaki, Naoaki"
] | Improving Cross-Lingual Transfer for Open Information Extraction with Linguistic Feature Projection | mrl-1.11 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.12.bib | https://aclanthology.org/2023.mrl-1.12/ | @inproceedings{faisal-anastasopoulos-2023-geographic,
title = "Geographic and Geopolitical Biases of Language Models",
author = "Faisal, Fahim and
Anastasopoulos, Antonios",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.12",
doi = "10.18653/v1/2023.mrl-1.12",
pages = "139--163",
}
| No abstract found | [
"Faisal, Fahim",
"Anastasopoulos, Antonios"
] | Geographic and Geopolitical Biases of Language Models | mrl-1.12 | 2212.10408 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.13.bib | https://aclanthology.org/2023.mrl-1.13/ | @inproceedings{pham-etal-2023-task,
title = "Task-Based {M}o{E} for Multitask Multilingual Machine Translation",
author = "Pham, Hai and
Kim, Young Jin and
Mukherjee, Subhabrata and
Woodruff, David P. and
Poczos, Barnabas and
Hassan, Hany",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.13",
doi = "10.18653/v1/2023.mrl-1.13",
pages = "164--172",
}
| No abstract found | [
"Pham, Hai",
"Kim, Young Jin",
"Mukherjee, Subhabrata",
"Woodruff, David P.",
"Poczos, Barnabas",
"Hassan, Hany"
] | Task-Based MoE for Multitask Multilingual Machine Translation | mrl-1.13 | 2308.15772 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.14.bib | https://aclanthology.org/2023.mrl-1.14/ | @inproceedings{ranaldi-pucci-2023-english,
title = "Does the {E}nglish Matter? Elicit Cross-lingual Abilities of Large Language Models",
author = "Ranaldi, Leonardo and
Pucci, Giulia",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.14",
doi = "10.18653/v1/2023.mrl-1.14",
pages = "173--183",
}
| No abstract found | [
"Ranaldi, Leonardo",
"Pucci, Giulia"
] | Does the English Matter? Elicit Cross-lingual Abilities of Large Language Models | mrl-1.14 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.15.bib | https://aclanthology.org/2023.mrl-1.15/ | @inproceedings{dos-santos-etal-2023-capivara,
title = "{CAPIVARA}: Cost-Efficient Approach for Improving Multilingual {CLIP} Performance on Low-Resource Languages",
author = "dos Santos, Gabriel Oliveira and
Braga Moreira, Diego Alysson and
Ferreira, Alef Iury and
Silva, Jhessica and
Pereira, Luiz and
Bueno, Pedro and
Sousa, Thiago and
Maia, Helena and
Da Silva, N{\'a}dia and
Colombini, Esther and
Pedrini, Helio and
Avila, Sandra",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.15",
doi = "10.18653/v1/2023.mrl-1.15",
pages = "184--207",
}
| No abstract found | [
"dos Santos, Gabriel Oliveira",
"Braga Moreira, Diego Alysson",
"Ferreira, Alef Iury",
"Silva, Jhessica",
"Pereira, Luiz",
"Bueno, Pedro",
"Sousa, Thiago",
"Maia, Helena",
"Da Silva, N{\\'a}dia",
"Colombini, Esther",
"Pedrini, Helio",
  "Avila, Sandra"
] | CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP Performance on Low-Resource Languages | mrl-1.15 | 2310.13683 | [
"https://github.com/hiaac-nlp/capivara"
] | https://huggingface.co/papers/2310.13683 | 2 | 0 | 0 | 12 | [
"hiaac-nlp/CAPIVARA"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.mrl-1.16.bib | https://aclanthology.org/2023.mrl-1.16/ | @inproceedings{gaschi-etal-2023-code,
title = "Code-switching as a cross-lingual Training Signal: an Example with Unsupervised Bilingual Embedding",
author = "Gaschi, Felix and
El-Baamrani, Ilias and
Gendron, Barbara and
Rastin, Parisa and
Toussaint, Yannick",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.16",
doi = "10.18653/v1/2023.mrl-1.16",
pages = "208--217",
}
| No abstract found | [
"Gaschi, Felix",
"El-Baamrani, Ilias",
"Gendron, Barbara",
"Rastin, Parisa",
"Toussaint, Yannick"
] | Code-switching as a cross-lingual Training Signal: an Example with Unsupervised Bilingual Embedding | mrl-1.16 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.17.bib | https://aclanthology.org/2023.mrl-1.17/ | @inproceedings{downey-etal-2023-learning,
title = "Learning to translate by learning to communicate",
author = "Downey, C.m. and
Zhou, Xuhui and
Liu, Zeyu and
Steinert-Threlkeld, Shane",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.17",
doi = "10.18653/v1/2023.mrl-1.17",
pages = "218--238",
}
| No abstract found | [
"Downey, C.m.",
"Zhou, Xuhui",
"Liu, Zeyu",
"Steinert-Threlkeld, Shane"
] | Learning to translate by learning to communicate | mrl-1.17 | 2402.16247 | [
"https://github.com/clmbrs/communication-translation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.18.bib | https://aclanthology.org/2023.mrl-1.18/ | @inproceedings{kowsher-etal-2023-contrastive,
title = "Contrastive Learning for Universal Zero-Shot {NLI} with Cross-Lingual Sentence Embeddings",
author = "Kowsher, Md and
Sobuj, Md. Shohanur Islam and
Prottasha, Nusrat Jahan and
Arefin, Mohammad Shamsul and
Morimoto, Yasuhiko",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.18",
doi = "10.18653/v1/2023.mrl-1.18",
pages = "239--252",
}
| No abstract found | [
"Kowsher, Md",
"Sobuj, Md. Shohanur Islam",
"Prottasha, Nusrat Jahan",
"Arefin, Mohammad Shamsul",
"Morimoto, Yasuhiko"
] | Contrastive Learning for Universal Zero-Shot NLI with Cross-Lingual Sentence Embeddings | mrl-1.18 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.19.bib | https://aclanthology.org/2023.mrl-1.19/ | @inproceedings{danilova-stymne-2023-ud,
title = "{UD}-{MULTIGENRE} {--} a {UD}-Based Dataset Enriched with Instance-Level Genre Annotations",
author = "Danilova, Vera and
Stymne, Sara",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.19",
doi = "10.18653/v1/2023.mrl-1.19",
pages = "253--267",
}
| No abstract found | [
"Danilova, Vera",
"Stymne, Sara"
] | UD-MULTIGENRE – a UD-Based Dataset Enriched with Instance-Level Genre Annotations | mrl-1.19 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.20.bib | https://aclanthology.org/2023.mrl-1.20/ | @inproceedings{downey-etal-2023-embedding,
title = "Embedding Structure Matters: Comparing Methods to Adapt Multilingual Vocabularies to New Languages",
author = "Downey, C.m. and
Blevins, Terra and
Goldfine, Nora and
Steinert-Threlkeld, Shane",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.20",
doi = "10.18653/v1/2023.mrl-1.20",
pages = "268--281",
}
| No abstract found | [
"Downey, C.m.",
"Blevins, Terra",
"Goldfine, Nora",
"Steinert-Threlkeld, Shane"
] | Embedding Structure Matters: Comparing Methods to Adapt Multilingual Vocabularies to New Languages | mrl-1.20 | 2309.04679 | [
"https://github.com/cmdowney88/embeddingstructure"
] | https://huggingface.co/papers/2309.04679 | 0 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.mrl-1.21.bib | https://aclanthology.org/2023.mrl-1.21/ | @inproceedings{yang-etal-2023-multi-eup,
title = "Multi-{E}u{P}: The Multilingual {E}uropean Parliament Dataset for Analysis of Bias in Information Retrieval",
author = "Yang, Jinrui and
Baldwin, Timothy and
Cohn, Trevor",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.21",
doi = "10.18653/v1/2023.mrl-1.21",
pages = "282--291",
}
| No abstract found | [
"Yang, Jinrui",
"Baldwin, Timothy",
"Cohn, Trevor"
] | Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval | mrl-1.21 | 2311.01870 | [
"https://github.com/jrnlp/multi-eup"
] | https://huggingface.co/papers/2311.01870 | 0 | 0 | 0 | 3 | [] | [
"unimelb-nlp/Multi-EuP"
] | [] | 1 | Poster |
https://aclanthology.org/2023.mrl-1.22.bib | https://aclanthology.org/2023.mrl-1.22/ | @inproceedings{pokharel-agrawal-2023-generating,
title = "Generating Continuations in Multilingual Idiomatic Contexts",
author = "Pokharel, Rhitabrat and
Agrawal, Ameeta",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.22",
doi = "10.18653/v1/2023.mrl-1.22",
pages = "292--301",
}
| No abstract found | [
"Pokharel, Rhitabrat",
"Agrawal, Ameeta"
] | Generating Continuations in Multilingual Idiomatic Contexts | mrl-1.22 | 2310.20195 | [
"https://github.com/portnlp/llm-in-idiomatic-context"
] | https://huggingface.co/papers/2310.20195 | 0 | 0 | 0 | 2 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.mrl-1.23.bib | https://aclanthology.org/2023.mrl-1.23/ | @inproceedings{helcl-libovicky-2023-cuni,
title = "{CUNI} Submission to {MRL} 2023 Shared Task on Multi-lingual Multi-task Information Retrieval",
author = "Helcl, Jind{\v{r}}ich and
Libovick{\'y}, Jind{\v{r}}ich",
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.23",
doi = "10.18653/v1/2023.mrl-1.23",
pages = "302--309",
}
| No abstract found | [
"Helcl, Jind{\\v{r}}ich",
"Libovick{\\'y}, Jind{\\v{r}}ich"
] | CUNI Submission to MRL 2023 Shared Task on Multi-lingual Multi-task Information Retrieval | mrl-1.23 | 2310.16528 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.mrl-1.24.bib | https://aclanthology.org/2023.mrl-1.24/ | @inproceedings{tinner-etal-2023-findings,
title = "Findings of the 1st Shared Task on Multi-lingual Multi-task Information Retrieval at {MRL} 2023",
author = {Tinner, Francesco and
Adelani, David Ifeoluwa and
Emezue, Chris and
Hajili, Mammad and
Goldman, Omer and
Adilazuarda, Muhammad Farid and
Al Kautsar, Muhammad Dehan and
Mirsaidova, Aziza and
Kural, M{\"u}ge and
Massey, Dylan and
Chukwuneke, Chiamaka and
Mbonu, Chinedu and
Oloyede, Damilola Oluwaseun and
Olaleye, Kayode and
Atala, Jonathan and
Ajibade, Benjamin A. and
Bassi, Saksham and
Aralikatte, Rahul and
Kim, Najoung and
Ataman, Duygu},
editor = "Ataman, Duygu",
booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.mrl-1.24",
doi = "10.18653/v1/2023.mrl-1.24",
pages = "310--323",
}
| No abstract found | [
"Tinner, Francesco",
"Adelani, David Ifeoluwa",
"Emezue, Chris",
"Hajili, Mammad",
"Goldman, Omer",
"Adilazuarda, Muhammad Farid",
"Al Kautsar, Muhammad Dehan",
"Mirsaidova, Aziza",
"Kural, M{\\\"u}ge",
"Massey, Dylan",
"Chukwuneke, Chiamaka",
"Mbonu, Chinedu",
"Oloyede, Damilola Oluwaseun",
"Olaleye, Kayode",
"Atala, Jonathan",
"Ajibade, Benjamin A.",
"Bassi, Saksham",
"Aralikatte, Rahul",
"Kim, Najoung",
"Ataman, Duygu"
] | Findings of the 1st Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2023 | mrl-1.24 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.1.bib | https://aclanthology.org/2023.newsum-1.1/ | @inproceedings{wang-etal-2023-chatgpt,
title = "Is {C}hat{GPT} a Good {NLG} Evaluator? A Preliminary Study",
author = "Wang, Jiaan and
Liang, Yunlong and
Meng, Fandong and
Sun, Zengkui and
Shi, Haoxiang and
Li, Zhixu and
Xu, Jinan and
Qu, Jianfeng and
Zhou, Jie",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.1",
doi = "10.18653/v1/2023.newsum-1.1",
pages = "1--11",
abstract = "Recently, the emergence of ChatGPT has attracted wide attention from the computational linguistics community. Many prior studies have shown that ChatGPT achieves remarkable performance on various NLP tasks in terms of automatic evaluation metrics. However, the ability of ChatGPT to serve as an evaluation metric is still underexplored. Considering assessing the quality of natural language generation (NLG) models is an arduous task and NLG metrics notoriously show their poor correlation with human judgments, we wonder whether ChatGPT is a good NLG evaluation metric. In this report, we provide a preliminary meta-evaluation on ChatGPT to show its reliability as an NLG metric. In detail, we regard ChatGPT as a human evaluator and give task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instruction to prompt ChatGPT to evaluate the generated results of NLG models. We conduct experiments on five NLG meta-evaluation datasets (including summarization, story generation and data-to-text tasks). Experimental results show that compared with previous automatic metrics, ChatGPT achieves state-of-the-art or competitive correlation with human judgments in most cases. In addition, we find that the effectiveness of the ChatGPT evaluator might be influenced by the creation method of the meta-evaluation datasets. For the meta-evaluation datasets which are created greatly depending on the reference and thus are biased, the ChatGPT evaluator might lose its effectiveness. We hope our preliminary study could prompt the emergence of a general-purposed reliable NLG metric.",
}
| Recently, the emergence of ChatGPT has attracted wide attention from the computational linguistics community. Many prior studies have shown that ChatGPT achieves remarkable performance on various NLP tasks in terms of automatic evaluation metrics. However, the ability of ChatGPT to serve as an evaluation metric is still underexplored. Considering assessing the quality of natural language generation (NLG) models is an arduous task and NLG metrics notoriously show their poor correlation with human judgments, we wonder whether ChatGPT is a good NLG evaluation metric. In this report, we provide a preliminary meta-evaluation on ChatGPT to show its reliability as an NLG metric. In detail, we regard ChatGPT as a human evaluator and give task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instruction to prompt ChatGPT to evaluate the generated results of NLG models. We conduct experiments on five NLG meta-evaluation datasets (including summarization, story generation and data-to-text tasks). Experimental results show that compared with previous automatic metrics, ChatGPT achieves state-of-the-art or competitive correlation with human judgments in most cases. In addition, we find that the effectiveness of the ChatGPT evaluator might be influenced by the creation method of the meta-evaluation datasets. For the meta-evaluation datasets which are created greatly depending on the reference and thus are biased, the ChatGPT evaluator might lose its effectiveness. We hope our preliminary study could prompt the emergence of a general-purposed reliable NLG metric. | [
"Wang, Jiaan",
"Liang, Yunlong",
"Meng, Fandong",
"Sun, Zengkui",
"Shi, Haoxiang",
"Li, Zhixu",
"Xu, Jinan",
"Qu, Jianfeng",
"Zhou, Jie"
] | Is ChatGPT a Good NLG Evaluator? A Preliminary Study | newsum-1.1 | 2303.04048 | [
"https://github.com/krystalan/chatgpt_as_nlg_evaluator"
] | https://huggingface.co/papers/2303.04048 | 0 | 0 | 0 | 9 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.newsum-1.2.bib | https://aclanthology.org/2023.newsum-1.2/ | @inproceedings{wang-etal-2023-zero,
title = "Zero-Shot Cross-Lingual Summarization via Large Language Models",
author = "Wang, Jiaan and
Liang, Yunlong and
Meng, Fandong and
Zou, Beiqi and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.2",
doi = "10.18653/v1/2023.newsum-1.2",
pages = "12--23",
abstract = "Given a document in a source language, cross-lingual summarization (CLS) aims to generate a summary in a different target language. Recently, the emergence of Large Language Models (LLMs), such as GPT-3.5, ChatGPT and GPT-4, has attracted wide attention from the computational linguistics community. However, it is not yet known the performance of LLMs on CLS. In this report, we empirically use various prompts to guide LLMs to perform zero-shot CLS from different paradigms (i.e., end-to-end and pipeline), and provide a preliminary evaluation on the generated summaries. We find that ChatGPT and GPT-4 originally prefer to produce lengthy summaries with detailed information. These two LLMs can further balance informativeness and conciseness with the help of an interactive prompt, significantly improving their CLS performance. Experimental results on three widely-used CLS datasets show that GPT-4 achieves state-of-the-art zero-shot CLS performance, and performs competitively compared with the fine-tuned mBART-50. Moreover, we also find some multi-lingual and bilingual LLMs (i.e., BLOOMZ, ChatGLM-6B, Vicuna-13B and ChatYuan) have limited zero-shot CLS ability. Due to the composite nature of CLS, which requires models to perform summarization and translation simultaneously, accomplishing this task in a zero-shot manner is even a challenge for LLMs. Therefore, we sincerely hope and recommend future LLM research could use CLS as a testbed.",
}
| Given a document in a source language, cross-lingual summarization (CLS) aims to generate a summary in a different target language. Recently, the emergence of Large Language Models (LLMs), such as GPT-3.5, ChatGPT and GPT-4, has attracted wide attention from the computational linguistics community. However, it is not yet known the performance of LLMs on CLS. In this report, we empirically use various prompts to guide LLMs to perform zero-shot CLS from different paradigms (i.e., end-to-end and pipeline), and provide a preliminary evaluation on the generated summaries. We find that ChatGPT and GPT-4 originally prefer to produce lengthy summaries with detailed information. These two LLMs can further balance informativeness and conciseness with the help of an interactive prompt, significantly improving their CLS performance. Experimental results on three widely-used CLS datasets show that GPT-4 achieves state-of-the-art zero-shot CLS performance, and performs competitively compared with the fine-tuned mBART-50. Moreover, we also find some multi-lingual and bilingual LLMs (i.e., BLOOMZ, ChatGLM-6B, Vicuna-13B and ChatYuan) have limited zero-shot CLS ability. Due to the composite nature of CLS, which requires models to perform summarization and translation simultaneously, accomplishing this task in a zero-shot manner is even a challenge for LLMs. Therefore, we sincerely hope and recommend future LLM research could use CLS as a testbed. | [
"Wang, Jiaan",
"Liang, Yunlong",
"Meng, Fandong",
"Zou, Beiqi",
"Li, Zhixu",
"Qu, Jianfeng",
"Zhou, Jie"
] | Zero-Shot Cross-Lingual Summarization via Large Language Models | newsum-1.2 | 2302.14229 | [
""
] | https://huggingface.co/papers/2302.14229 | 0 | 1 | 0 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.newsum-1.3.bib | https://aclanthology.org/2023.newsum-1.3/ | @inproceedings{fatima-etal-2023-simcsum,
title = "{S}im{CS}um: Joint Learning of Simplification and Cross-lingual Summarization for Cross-lingual Science Journalism",
author = "Fatima, Mehwish and
Kolber, Tim and
Markert, Katja and
Strube, Michael",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.3",
doi = "10.18653/v1/2023.newsum-1.3",
pages = "24--40",
abstract = "Cross-lingual science journalism is a recently introduced task that generates popular science summaries of scientific articles different from the source language for non-expert readers. A popular science summary must contain salient content of the input document while focusing on coherence and comprehensibility. Meanwhile, generating a cross-lingual summary from the scientific texts in a local language for the targeted audience is challenging. Existing research on cross-lingual science journalism investigates the task with a pipeline model to combine text simplification and cross-lingual summarization. We extend the research in cross-lingual science journalism by introducing a novel, multi-task learning architecture that combines the aforementioned NLP tasks. Our approach is to jointly train the two high-level NLP tasks in SimCSum for generating cross-lingual popular science summaries. We investigate the performance of SimCSum against the pipeline model and several other strong baselines with several evaluation metrics and human evaluation. Overall, SimCSum demonstrates statistically significant improvements over the state-of-the-art on two non-synthetic cross-lingual scientific datasets. Furthermore, we conduct an in-depth investigation into the linguistic properties of generated summaries and an error analysis.",
}
| Cross-lingual science journalism is a recently introduced task that generates popular science summaries of scientific articles different from the source language for non-expert readers. A popular science summary must contain salient content of the input document while focusing on coherence and comprehensibility. Meanwhile, generating a cross-lingual summary from the scientific texts in a local language for the targeted audience is challenging. Existing research on cross-lingual science journalism investigates the task with a pipeline model to combine text simplification and cross-lingual summarization. We extend the research in cross-lingual science journalism by introducing a novel, multi-task learning architecture that combines the aforementioned NLP tasks. Our approach is to jointly train the two high-level NLP tasks in SimCSum for generating cross-lingual popular science summaries. We investigate the performance of SimCSum against the pipeline model and several other strong baselines with several evaluation metrics and human evaluation. Overall, SimCSum demonstrates statistically significant improvements over the state-of-the-art on two non-synthetic cross-lingual scientific datasets. Furthermore, we conduct an in-depth investigation into the linguistic properties of generated summaries and an error analysis. | [
"Fatima, Mehwish",
"Kolber, Tim",
"Markert, Katja",
"Strube, Michael"
] | SimCSum: Joint Learning of Simplification and Cross-lingual Summarization for Cross-lingual Science Journalism | newsum-1.3 | 2304.01621 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.4.bib | https://aclanthology.org/2023.newsum-1.4/ | @inproceedings{guan-padmakumar-2023-extract,
title = "Extract, Select and Rewrite: A Modular Sentence Summarization Method",
author = "Guan, Shuo and
Padmakumar, Vishakh",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.4",
doi = "10.18653/v1/2023.newsum-1.4",
pages = "41--48",
    abstract = "A modular approach has the advantage of being compositional and controllable, comparing to most end-to-end models. In this paper we propose Extract-Select-Rewrite (ESR), a three-phase abstractive sentence summarization method. We decompose summarization into three stages: (i) knowledge extraction, where we extract relation triples from the text using off-the-shelf tools; (ii) content selection, where a subset of triples are selected; and (iii) rewriting, where the selected triple are realized into natural language. Our results demonstrates that ESR is competitive with the best end-to-end models while being more faithful. Being modular, ESR{'}s modules can be trained on separate data which is beneficial in low-resource settings and enhancing the style controllability on text generation.",
}
| A modular approach has the advantage of being compositional and controllable, comparing to most end-to-end models. In this paper we propose Extract-Select-Rewrite (ESR), a three-phase abstractive sentence summarization method. We decompose summarization into three stages: (i) knowledge extraction, where we extract relation triples from the text using off-the-shelf tools; (ii) content selection, where a subset of triples are selected; and (iii) rewriting, where the selected triple are realized into natural language. Our results demonstrates that ESR is competitive with the best end-to-end models while being more faithful. Being modular, ESR{'}s modules can be trained on separate data which is beneficial in low-resource settings and enhancing the style controllability on text generation. | [
"Guan, Shuo",
"Padmakumar, Vishakh"
] | Extract, Select and Rewrite: A Modular Sentence Summarization Method | newsum-1.4 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.5.bib | https://aclanthology.org/2023.newsum-1.5/ | @inproceedings{wang-yoshinaga-2023-summarization,
title = "Summarization-based Data Augmentation for Document Classification",
author = "Wang, Yueguan and
Yoshinaga, Naoki",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.5",
doi = "10.18653/v1/2023.newsum-1.5",
pages = "49--55",
abstract = "Despite the prevalence of pretrained language models in natural language understanding tasks, understanding lengthy text such as document is still challenging due to the data sparseness problem. Inspired by that humans develop their ability of understanding lengthy text form reading shorter text, we propose a simple yet effective summarization-based data augmentation, SUMMaug, for document classification. We first obtain easy-to-learn examples for the target document classification task by summarizing the input of the original training examples, while optionally merging the original labels to conform to the summarized input. We then use the generated pseudo examples to perform curriculum learning. Experimental results on two datasets confirmed the advantage of our method compared to existing baseline methods in terms of robustness and accuracy. We release our code and data at https://github.com/etsurin/summaug.",
}
| Despite the prevalence of pretrained language models in natural language understanding tasks, understanding lengthy text such as document is still challenging due to the data sparseness problem. Inspired by that humans develop their ability of understanding lengthy text form reading shorter text, we propose a simple yet effective summarization-based data augmentation, SUMMaug, for document classification. We first obtain easy-to-learn examples for the target document classification task by summarizing the input of the original training examples, while optionally merging the original labels to conform to the summarized input. We then use the generated pseudo examples to perform curriculum learning. Experimental results on two datasets confirmed the advantage of our method compared to existing baseline methods in terms of robustness and accuracy. We release our code and data at https://github.com/etsurin/summaug. | [
"Wang, Yueguan",
"Yoshinaga, Naoki"
] | Summarization-based Data Augmentation for Document Classification | newsum-1.5 | 2312.00513 | [
"https://github.com/etsurin/summaug"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.6.bib | https://aclanthology.org/2023.newsum-1.6/ | @inproceedings{tang-etal-2023-context,
title = "In-context Learning of Large Language Models for Controlled Dialogue Summarization: A Holistic Benchmark and Empirical Analysis",
author = "Tang, Yuting and
Puduppully, Ratish and
Liu, Zhengyuan and
Chen, Nancy",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.6",
doi = "10.18653/v1/2023.newsum-1.6",
pages = "56--67",
abstract = "Large Language Models (LLMs) have shown significant performance in numerous NLP tasks, including summarization and controlled text generation. A notable capability of LLMs is in-context learning (ICL), where the model learns new tasks using input-output pairs in the prompt without any parameter update. However, the performance of LLMs in the context of few-shot abstractive dialogue summarization remains underexplored. This study evaluates various state-of-the-art LLMs on the SAMSum dataset within a few-shot framework. We assess these models in both controlled (entity control, length control, and person-focused planning) and uncontrolled settings, establishing a comprehensive benchmark in few-shot dialogue summarization. Our findings provide insights into summary quality and model controllability, offering a crucial reference for future research in dialogue summarization.",
}
| Large Language Models (LLMs) have shown significant performance in numerous NLP tasks, including summarization and controlled text generation. A notable capability of LLMs is in-context learning (ICL), where the model learns new tasks using input-output pairs in the prompt without any parameter update. However, the performance of LLMs in the context of few-shot abstractive dialogue summarization remains underexplored. This study evaluates various state-of-the-art LLMs on the SAMSum dataset within a few-shot framework. We assess these models in both controlled (entity control, length control, and person-focused planning) and uncontrolled settings, establishing a comprehensive benchmark in few-shot dialogue summarization. Our findings provide insights into summary quality and model controllability, offering a crucial reference for future research in dialogue summarization. | [
"Tang, Yuting",
"Puduppully, Ratish",
"Liu, Zhengyuan",
"Chen, Nancy"
] | In-context Learning of Large Language Models for Controlled Dialogue Summarization: A Holistic Benchmark and Empirical Analysis | newsum-1.6 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.7.bib | https://aclanthology.org/2023.newsum-1.7/ | @inproceedings{adams-etal-2023-sparse,
title = "From Sparse to Dense: {GPT}-4 Summarization with Chain of Density Prompting",
author = "Adams, Griffin and
Fabbri, Alex and
Ladhak, Faisal and
Lehman, Eric and
Elhadad, No{\'e}mie",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.7",
doi = "10.18653/v1/2023.newsum-1.7",
pages = "68--74",
abstract = "Selecting the {``}right{''} amount of information to include in a summary is a difficult task. A good summary should be detailed and entity-centric without being overly dense and hard to follow. To better understand this tradeoff, we solicit increasingly dense GPT-4 summaries with what we refer to as a {``}Chain of Density{''} (CoD) prompt. Specifically, GPT-4 generates an initial entity-sparse summary before iteratively incorporating missing salient entities without increasing the length. Summaries generated by CoD are more abstractive, exhibit more fusion, and have less of a lead bias than GPT-4 summaries generated by a vanilla prompt. We conduct a human preference study on 100 CNN DailyMail articles and find that humans prefer GPT-4 summaries that are more dense than those generated by a vanilla prompt and almost as dense as human written summaries. Qualitative analysis supports the notion that there exists a tradeoff between informativeness and readability. 500 annotated CoD summaries, as well as an extra 5,000 unannotated summaries, are freely available on HuggingFace (https://huggingface.co/datasets/griffin/chain{\_}of{\_}density).",
}
| Selecting the {``}right{''} amount of information to include in a summary is a difficult task. A good summary should be detailed and entity-centric without being overly dense and hard to follow. To better understand this tradeoff, we solicit increasingly dense GPT-4 summaries with what we refer to as a {``}Chain of Density{''} (CoD) prompt. Specifically, GPT-4 generates an initial entity-sparse summary before iteratively incorporating missing salient entities without increasing the length. Summaries generated by CoD are more abstractive, exhibit more fusion, and have less of a lead bias than GPT-4 summaries generated by a vanilla prompt. We conduct a human preference study on 100 CNN DailyMail articles and find that humans prefer GPT-4 summaries that are more dense than those generated by a vanilla prompt and almost as dense as human written summaries. Qualitative analysis supports the notion that there exists a tradeoff between informativeness and readability. 500 annotated CoD summaries, as well as an extra 5,000 unannotated summaries, are freely available on HuggingFace (https://huggingface.co/datasets/griffin/chain{\_}of{\_}density). | [
"Adams, Griffin",
"Fabbri, Alex",
"Ladhak, Faisal",
"Lehman, Eric",
"Elhadad, No{\\'e}mie"
] | From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting | newsum-1.7 | 2309.04269 | [
""
] | https://huggingface.co/papers/2309.04269 | 4 | 32 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.newsum-1.8.bib | https://aclanthology.org/2023.newsum-1.8/ | @inproceedings{singha-roy-mercer-2023-generating,
title = "Generating Extractive and Abstractive Summaries in Parallel from Scientific Articles Incorporating Citing Statements",
author = "Singha Roy, Sudipta and
Mercer, Robert E.",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.8",
doi = "10.18653/v1/2023.newsum-1.8",
pages = "75--86",
abstract = "Summarization of scientific articles often overlooks insights from citing papers, focusing solely on the document{'}s content. To incorporate citation contexts, we develop a model to summarize a scientific document using the information in the source and citing documents. It concurrently generates abstractive and extractive summaries, each enhancing the other. The extractive summarizer utilizes a blend of heterogeneous graph-based neural networks and graph attention networks, while the abstractive summarizer employs an autoregressive decoder. These modules exchange control signals through the loss function, ensuring the creation of high-quality summaries in both styles.",
}
| Summarization of scientific articles often overlooks insights from citing papers, focusing solely on the document{'}s content. To incorporate citation contexts, we develop a model to summarize a scientific document using the information in the source and citing documents. It concurrently generates abstractive and extractive summaries, each enhancing the other. The extractive summarizer utilizes a blend of heterogeneous graph-based neural networks and graph attention networks, while the abstractive summarizer employs an autoregressive decoder. These modules exchange control signals through the loss function, ensuring the creation of high-quality summaries in both styles. | [
"Singha Roy, Sudipta",
"Mercer, Robert E."
] | Generating Extractive and Abstractive Summaries in Parallel from Scientific Articles Incorporating Citing Statements | newsum-1.8 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.9.bib | https://aclanthology.org/2023.newsum-1.9/ | @inproceedings{goncalves-etal-2023-supervising,
title = "Supervising the Centroid Baseline for Extractive Multi-Document Summarization",
author = "Gon{\c{c}}alves, Sim{\~a}o and
Correia, Gon{\c{c}}alo and
Pernes, Diogo and
Mendes, Afonso",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.9",
doi = "10.18653/v1/2023.newsum-1.9",
pages = "87--96",
abstract = "The centroid method is a simple approach for extractive multi-document summarization and many improvements to its pipeline have been proposed. We further refine it by adding a beam search process to the sentence selection and also a centroid estimation attention model that leads to improved results. We demonstrate this in several multi-document summarization datasets, including in a multilingual scenario.",
}
| The centroid method is a simple approach for extractive multi-document summarization and many improvements to its pipeline have been proposed. We further refine it by adding a beam search process to the sentence selection and also a centroid estimation attention model that leads to improved results. We demonstrate this in several multi-document summarization datasets, including in a multilingual scenario. | [
"Gon{\\c{c}}alves, Sim{\\~a}o",
"Correia, Gon{\\c{c}}alo",
"Pernes, Diogo",
"Mendes, Afonso"
] | Supervising the Centroid Baseline for Extractive Multi-Document Summarization | newsum-1.9 | 2311.17771 | [
"https://github.com/priberam/cera-summ"
] | https://huggingface.co/papers/2311.17771 | 1 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.newsum-1.10.bib | https://aclanthology.org/2023.newsum-1.10/ | @inproceedings{roush-mezzetti-2023-debatekg,
title = "{D}ebate{KG} {--} Automatic Policy Debate Case Creation with Semantic Knowledge Graphs",
author = "Roush, Allen and
Mezzetti, David",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.10",
doi = "10.18653/v1/2023.newsum-1.10",
pages = "97--104",
abstract = "Recent work within the Argument Mining community has shown the applicability of Natural Language Processing systems for solving problems found within competitive debate. One of the most important tasks within competitive debate is for debaters to create high quality debate cases. We show that effective debate cases can be constructed using constrained shortest path traversals on Argumentative Semantic Knowledge Graphs. We study this potential in the context of a type of American Competitive Debate, called {``}Policy Debate{''}, which already has a large scale dataset targeting it called {``}DebateSum{''}. We significantly improve upon DebateSum by introducing 53180 new examples, as well as further useful metadata for every example, to the dataset. We leverage the txtai semantic search and knowledge graph toolchain to produce and contribute 9 semantic knowledge graphs built on this dataset. We create a unique method for evaluating which knowledge graphs are better in the context of producing policy debate cases. A demo which automatically generates debate cases, along with all other code and the Knowledge Graphs, are open-sourced and made available to the public here: https://huggingface.co/spaces/Hellisotherpeople/DebateKG",
}
| Recent work within the Argument Mining community has shown the applicability of Natural Language Processing systems for solving problems found within competitive debate. One of the most important tasks within competitive debate is for debaters to create high quality debate cases. We show that effective debate cases can be constructed using constrained shortest path traversals on Argumentative Semantic Knowledge Graphs. We study this potential in the context of a type of American Competitive Debate, called {``}Policy Debate{''}, which already has a large scale dataset targeting it called {``}DebateSum{''}. We significantly improve upon DebateSum by introducing 53180 new examples, as well as further useful metadata for every example, to the dataset. We leverage the txtai semantic search and knowledge graph toolchain to produce and contribute 9 semantic knowledge graphs built on this dataset. We create a unique method for evaluating which knowledge graphs are better in the context of producing policy debate cases. A demo which automatically generates debate cases, along with all other code and the Knowledge Graphs, are open-sourced and made available to the public here: https://huggingface.co/spaces/Hellisotherpeople/DebateKG | [
"Roush, Allen",
"Mezzetti, David"
] | DebateKG – Automatic Policy Debate Case Creation with Semantic Knowledge Graphs | newsum-1.10 | 2307.04090 | [
"https://github.com/hellisotherpeople/debatekg"
] | https://huggingface.co/papers/2307.04090 | 1 | 1 | 0 | 1 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.newsum-1.11.bib | https://aclanthology.org/2023.newsum-1.11/ | @inproceedings{basu-roy-chowdhury-etal-2023-unsupervised,
title = "Unsupervised Opinion Summarization Using Approximate Geodesics",
author = "Basu Roy Chowdhury, Somnath and
Monath, Nicholas and
Dubey, Kumar and
Ahmed, Amr and
Chaturvedi, Snigdha",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.11",
doi = "10.18653/v1/2023.newsum-1.11",
pages = "105--120",
abstract = "Opinion summarization is the task of creating summaries capturing popular opinions from user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system to perform unsupervised extractive opinion summarization. GeoSumm consists of an encoder-decoder based representation learning model that generates topical representations of texts. These representations capture the underlying semantics of the text as a distribution over learnable latent units. GeoSumm generates these topical representations by performing dictionary learning over pre-trained text representations at multiple layers of the decoder. We then use these topical representations to quantify the importance of review sentences using a novel approximate geodesic distance-based scoring mechanism. We use the importance scores to identify popular opinions in order to compose general and aspect-specific summaries. Our proposed model, GeoSumm, achieves strong performance on three opinion summarization datasets. We perform additional experiments to analyze the functioning of our model and showcase the generalization ability of GeoSumm across different domains.",
}
| Opinion summarization is the task of creating summaries capturing popular opinions from user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system to perform unsupervised extractive opinion summarization. GeoSumm consists of an encoder-decoder based representation learning model that generates topical representations of texts. These representations capture the underlying semantics of the text as a distribution over learnable latent units. GeoSumm generates these topical representations by performing dictionary learning over pre-trained text representations at multiple layers of the decoder. We then use these topical representations to quantify the importance of review sentences using a novel approximate geodesic distance-based scoring mechanism. We use the importance scores to identify popular opinions in order to compose general and aspect-specific summaries. Our proposed model, GeoSumm, achieves strong performance on three opinion summarization datasets. We perform additional experiments to analyze the functioning of our model and showcase the generalization ability of GeoSumm across different domains. | [
"Basu Roy Chowdhury, Somnath",
"Monath, Nicholas",
"Dubey, Kumar",
"Ahmed, Amr",
"Chaturvedi, Snigdha"
] | Unsupervised Opinion Summarization Using Approximate Geodesics | newsum-1.11 | 2209.07496 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.12.bib | https://aclanthology.org/2023.newsum-1.12/ | @inproceedings{he-etal-2023-analyzing,
title = "Analyzing Multi-Sentence Aggregation in Abstractive Summarization via the Shapley Value",
author = "He, Jingyi and
Cao, Meng and
Cheung, Jackie Chi Kit",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.12",
doi = "10.18653/v1/2023.newsum-1.12",
pages = "121--134",
abstract = "Abstractive summarization systems aim to write concise summaries capturing the most essential information of the input document in their own words. One of the ways to achieve this is to gather and combine multiple pieces of information from the source document, a process we call aggregation. Despite its importance, the extent to which both reference summaries in benchmark datasets and system-generated summaries require aggregation is yet unknown. In this work, we propose AggSHAP, a measure of the degree of aggregation in a summary sentence. We show that AggSHAP distinguishes multi-sentence aggregation from single-sentence extraction or paraphrasing through automatic and human evaluations. We find that few reference or model-generated summary sentences have a high degree of aggregation measured by the proposed metric. We also demonstrate negative correlations between AggSHAP and other quality scores of system summaries. These findings suggest the need to develop new tasks and datasets to encourage multi-sentence aggregation in summarization.",
}
| Abstractive summarization systems aim to write concise summaries capturing the most essential information of the input document in their own words. One of the ways to achieve this is to gather and combine multiple pieces of information from the source document, a process we call aggregation. Despite its importance, the extent to which both reference summaries in benchmark datasets and system-generated summaries require aggregation is yet unknown. In this work, we propose AggSHAP, a measure of the degree of aggregation in a summary sentence. We show that AggSHAP distinguishes multi-sentence aggregation from single-sentence extraction or paraphrasing through automatic and human evaluations. We find that few reference or model-generated summary sentences have a high degree of aggregation measured by the proposed metric. We also demonstrate negative correlations between AggSHAP and other quality scores of system summaries. These findings suggest the need to develop new tasks and datasets to encourage multi-sentence aggregation in summarization. | [
"He, Jingyi",
"Cao, Meng",
"Cheung, Jackie Chi Kit"
] | Analyzing Multi-Sentence Aggregation in Abstractive Summarization via the Shapley Value | newsum-1.12 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.newsum-1.13.bib | https://aclanthology.org/2023.newsum-1.13/ | @inproceedings{lim-song-2023-improving,
title = "Improving Multi-Stage Long Document Summarization with Enhanced Coarse Summarizer",
author = "Lim, Jinhyeong and
Song, Hyun-Je",
editor = "Dong, Yue and
Xiao, Wen and
Wang, Lu and
Liu, Fei and
Carenini, Giuseppe",
booktitle = "Proceedings of the 4th New Frontiers in Summarization Workshop",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.newsum-1.13",
doi = "10.18653/v1/2023.newsum-1.13",
pages = "135--144",
abstract = "Multi-stage long document summarization, which splits a long document into multiple segments, each of which is used to generate a coarse summary at each stage, with the final summary produced from the last coarse summary, is a flexible approach to capturing salient information from the long document. Although the coarse summary affects the final summary, the coarse summarizer in the existing multi-stage summarization is coarsely trained using data segments that are not useful for generating the final summary. In this paper, we propose a novel method for multi-stage long document summarization. The proposed method first generates new segment pairs, ensuring that all of them are relevant to generating the final summary. We then incorporate contrastive learning into the training of the coarse summarizer, which tries to maximize the similarities between source segments and the target summary during training. Through extensive experiments on six long document summarization datasets, we demonstrate that our proposed method not only enhances the existing multi-stage long document summarization approach, but also achieves performance comparable to state-of-the-art methods, including those utilizing large language models for long document summarization.",
}
| Multi-stage long document summarization, which splits a long document into multiple segments, each of which is used to generate a coarse summary at each stage, with the final summary produced from the last coarse summary, is a flexible approach to capturing salient information from the long document. Although the coarse summary affects the final summary, the coarse summarizer in the existing multi-stage summarization is coarsely trained using data segments that are not useful for generating the final summary. In this paper, we propose a novel method for multi-stage long document summarization. The proposed method first generates new segment pairs, ensuring that all of them are relevant to generating the final summary. We then incorporate contrastive learning into the training of the coarse summarizer, which tries to maximize the similarities between source segments and the target summary during training. Through extensive experiments on six long document summarization datasets, we demonstrate that our proposed method not only enhances the existing multi-stage long document summarization approach, but also achieves performance comparable to state-of-the-art methods, including those utilizing large language models for long document summarization. | [
"Lim, Jinhyeong",
"Song, Hyun-Je"
] | Improving Multi-Stage Long Document Summarization with Enhanced Coarse Summarizer | newsum-1.13 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.1.bib | https://aclanthology.org/2023.nllp-1.1/ | @inproceedings{deshpande-etal-2023-anthropomorphization,
title = "Anthropomorphization of {AI}: Opportunities and Risks",
author = "Deshpande, Ameet and
Rajpurohit, Tanmay and
Narasimhan, Karthik and
Kalyan, Ashwin",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.1",
doi = "10.18653/v1/2023.nllp-1.1",
pages = "1--7",
abstract = "Anthropomorphization is the tendency to attribute human-like traits to non-human entities. It is prevalent in many social contexts {--} children anthropomorphize toys, adults do so with brands, and it is a literary device. It is also a versatile tool in science, with behavioral psychology and evolutionary biology meticulously documenting its consequences. With widespread adoption of AI systems, and the push from stakeholders to make it human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize it increases significantly. We take a dyadic approach to understanding this phenomenon with large language models (LLMs) by studying (1) the objective legal implications, as analyzed through the lens of the recent blueprint of AI bill of rights, and (2) the subtle psychological aspects of customization and anthropomorphization. We find that anthropomorphized LLMs customized for different user bases violate multiple provisions in the legislative blueprint. In addition, we point out that anthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence. With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution. We propose a conservative strategy for the cautious use of anthropomorphization to improve trustworthiness of AI systems.",
}
| Anthropomorphization is the tendency to attribute human-like traits to non-human entities. It is prevalent in many social contexts {--} children anthropomorphize toys, adults do so with brands, and it is a literary device. It is also a versatile tool in science, with behavioral psychology and evolutionary biology meticulously documenting its consequences. With widespread adoption of AI systems, and the push from stakeholders to make it human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize it increases significantly. We take a dyadic approach to understanding this phenomenon with large language models (LLMs) by studying (1) the objective legal implications, as analyzed through the lens of the recent blueprint of AI bill of rights, and (2) the subtle psychological aspects of customization and anthropomorphization. We find that anthropomorphized LLMs customized for different user bases violate multiple provisions in the legislative blueprint. In addition, we point out that anthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence. With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution. We propose a conservative strategy for the cautious use of anthropomorphization to improve trustworthiness of AI systems. | [
"Deshpande, Ameet",
"Rajpurohit, Tanmay",
"Narasimhan, Karthik",
"Kalyan, Ashwin"
] | Anthropomorphization of AI: Opportunities and Risks | nllp-1.1 | 2305.14784 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.2.bib | https://aclanthology.org/2023.nllp-1.2/ | @inproceedings{pennisi-etal-2023-nomos,
title = "{NOMOS}: Navigating Obligation Mining in Official Statutes",
author = "Pennisi, Andrea and
Gonz{\'a}lez Hern{\'a}ndez, Elvira and
Koivula, Nina",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.2",
doi = "10.18653/v1/2023.nllp-1.2",
pages = "8--16",
abstract = "The process of identifying obligations in a legal text is not a straightforward task, because not only are the documents long, but the sentences therein are long as well. As a result of long elements in the text, law is more difficult to interpret (Coupette et al., 2021). Moreover, the identification of obligations relies not only on the clarity and precision of the language used but also on the unique perspectives, experiences, and knowledge of the reader. In particular, this paper addresses the problem of identifying obligations using machine and deep learning approaches, showing a full comparison between both methodologies and proposing a new approach called NOMOS based on the combination of Positional Embeddings (PE) and Temporal Convolutional Networks (TCNs). Quantitative and qualitative experiments, conducted on legal regulations, demonstrate the effectiveness of the proposed approach.",
}
| The process of identifying obligations in a legal text is not a straightforward task, because not only are the documents long, but the sentences therein are long as well. As a result of long elements in the text, law is more difficult to interpret (Coupette et al., 2021). Moreover, the identification of obligations relies not only on the clarity and precision of the language used but also on the unique perspectives, experiences, and knowledge of the reader. In particular, this paper addresses the problem of identifying obligations using machine and deep learning approaches, showing a full comparison between both methodologies and proposing a new approach called NOMOS based on the combination of Positional Embeddings (PE) and Temporal Convolutional Networks (TCNs). Quantitative and qualitative experiments, conducted on legal regulations, demonstrate the effectiveness of the proposed approach. | [
"Pennisi, Andrea",
"Gonz{\\'a}lez Hern{\\'a}ndez, Elvira",
"Koivula, Nina"
] | NOMOS: Navigating Obligation Mining in Official Statutes | nllp-1.2 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.3.bib | https://aclanthology.org/2023.nllp-1.3/ | @inproceedings{tuteja-gonzalez-jucla-2023-long,
title = "Long Text Classification using Transformers with Paragraph Selection Strategies",
author = "Tuteja, Mohit and
Gonz{\'a}lez Jucl{\`a}, Daniel",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.3",
doi = "10.18653/v1/2023.nllp-1.3",
pages = "17--24",
abstract = "In the legal domain, we often perform classification tasks on very long documents, for example court judgements. These documents often contain thousands of words, so the length of these documents poses a challenge for this modelling task. In this research paper, we present a comprehensive evaluation of various strategies to perform long text classification using Transformers in conjunction with strategies to select document chunks using traditional NLP models. We conduct our experiments on 6 benchmark datasets comprising lengthy documents, 4 of which are publicly available. Each dataset has a median word count exceeding 1,000. Our evaluation encompasses state-of-the-art Transformer models, such as RoBERTa, Longformer, HAT, MEGA and LegalBERT and compares them with a traditional baseline TF-IDF + Neural Network (NN) model. We investigate the effectiveness of pre-training on large corpora, fine tuning strategies, and transfer learning techniques in the context of long text classification.",
}
| In the legal domain, we often perform classification tasks on very long documents, for example court judgements. These documents often contain thousands of words, so the length of these documents poses a challenge for this modelling task. In this research paper, we present a comprehensive evaluation of various strategies to perform long text classification using Transformers in conjunction with strategies to select document chunks using traditional NLP models. We conduct our experiments on 6 benchmark datasets comprising lengthy documents, 4 of which are publicly available. Each dataset has a median word count exceeding 1,000. Our evaluation encompasses state-of-the-art Transformer models, such as RoBERTa, Longformer, HAT, MEGA and LegalBERT and compares them with a traditional baseline TF-IDF + Neural Network (NN) model. We investigate the effectiveness of pre-training on large corpora, fine tuning strategies, and transfer learning techniques in the context of long text classification. | [
"Tuteja, Mohit",
"Gonz{\\'a}lez Jucl{\\`a}, Daniel"
] | Long Text Classification using Transformers with Paragraph Selection Strategies | nllp-1.3 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.4.bib | https://aclanthology.org/2023.nllp-1.4/ | @inproceedings{barale-etal-2023-language,
title = "Do Language Models Learn about Legal Entity Types during Pretraining?",
author = "Barale, Claire and
Rovatsos, Michael and
Bhuta, Nehal",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.4",
doi = "10.18653/v1/2023.nllp-1.4",
pages = "25--37",
abstract = "Language Models (LMs) have proven their ability to acquire diverse linguistic knowledge during the pretraining phase, potentially serving as a valuable source of incidental supervision for downstream tasks. However, there has been limited research conducted on the retrieval of domain-specific knowledge, and specifically legal knowledge. We propose to explore the task of Entity Typing, serving as a proxy for evaluating legal knowledge as an essential aspect of text comprehension, and a foundational task to numerous downstream legal NLP applications. Through systematic evaluation and analysis and two types of prompting (cloze sentences and QA-based templates) and to clarify the nature of these acquired cues, we compare diverse types and lengths of entities both general and domain-specific entities, semantics or syntax signals, and different LM pretraining corpus (generic and legal-oriented) and architectures (encoder BERT-based and decoder-only with Llama2). We show that (1) Llama2 performs well on certain entities and exhibits potential for substantial improvement with optimized prompt templates, (2) law-oriented LMs show inconsistent performance, possibly due to variations in their training corpus, (3) LMs demonstrate the ability to type entities even in the case of multi-token entities, (4) all models struggle with entities belonging to sub-domains of the law (5) Llama2 appears to frequently overlook syntactic cues, a shortcoming less present in BERT-based architectures.",
}
| Language Models (LMs) have proven their ability to acquire diverse linguistic knowledge during the pretraining phase, potentially serving as a valuable source of incidental supervision for downstream tasks. However, there has been limited research conducted on the retrieval of domain-specific knowledge, and specifically legal knowledge. We propose to explore the task of Entity Typing, serving as a proxy for evaluating legal knowledge as an essential aspect of text comprehension, and a foundational task to numerous downstream legal NLP applications. Through systematic evaluation and analysis and two types of prompting (cloze sentences and QA-based templates) and to clarify the nature of these acquired cues, we compare diverse types and lengths of entities both general and domain-specific entities, semantics or syntax signals, and different LM pretraining corpus (generic and legal-oriented) and architectures (encoder BERT-based and decoder-only with Llama2). We show that (1) Llama2 performs well on certain entities and exhibits potential for substantial improvement with optimized prompt templates, (2) law-oriented LMs show inconsistent performance, possibly due to variations in their training corpus, (3) LMs demonstrate the ability to type entities even in the case of multi-token entities, (4) all models struggle with entities belonging to sub-domains of the law (5) Llama2 appears to frequently overlook syntactic cues, a shortcoming less present in BERT-based architectures. | [
"Barale, Claire",
"Rovatsos, Michael",
"Bhuta, Nehal"
] | Do Language Models Learn about Legal Entity Types during Pretraining? | nllp-1.4 | 2310.13092 | [
"https://github.com/clairebarale/probing_legal_entity_types"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.5.bib | https://aclanthology.org/2023.nllp-1.5/ | @inproceedings{vaudaux-etal-2023-pretrained,
title = "Pretrained Language Models v. Court Ruling Predictions: A Case Study on a Small Dataset of {F}rench Court of Appeal Rulings",
author = "Vaudaux, Olivia and
Bazzoli, Caroline and
Coavoux, Maximin and
Vial, G{\'e}raldine and
Verg{\`e}s, {\'E}tienne",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.5",
doi = "10.18653/v1/2023.nllp-1.5",
pages = "38--43",
abstract = "NLP systems are increasingly used in the law domain, either by legal institutions or by the industry. As a result there is a pressing need to characterize their strengths and weaknesses and understand their inner workings. This article presents a case study on the task of judicial decision prediction, on a small dataset from French Courts of Appeal. Specifically, our dataset of around 1000 decisions is about the habitual place of residency of children from divorced parents. The task consists in predicting, from the facts and reasons of the documents, whether the court rules that children should live with their mother or their father. Instead of feeding the whole document to a classifier, we carefully construct the dataset to make sure that the input to the classifier does not contain any {`}spoilers{'} (it is often the case in court rulings that information all along the document mentions the final decision). Our results are mostly negative: even classifiers based on French pretrained language models (Flaubert, JuriBERT) do not classify the decisions with reasonable accuracy. However, they can extract the decision when it is part of the input. With regard to these results, we argue that there is a strong caveat when constructing legal NLP datasets automatically.",
}
| NLP systems are increasingly used in the law domain, either by legal institutions or by the industry. As a result there is a pressing need to characterize their strengths and weaknesses and understand their inner workings. This article presents a case study on the task of judicial decision prediction, on a small dataset from French Courts of Appeal. Specifically, our dataset of around 1000 decisions is about the habitual place of residency of children from divorced parents. The task consists in predicting, from the facts and reasons of the documents, whether the court rules that children should live with their mother or their father. Instead of feeding the whole document to a classifier, we carefully construct the dataset to make sure that the input to the classifier does not contain any {`}spoilers{'} (it is often the case in court rulings that information all along the document mentions the final decision). Our results are mostly negative: even classifiers based on French pretrained language models (Flaubert, JuriBERT) do not classify the decisions with reasonable accuracy. However, they can extract the decision when it is part of the input. With regard to these results, we argue that there is a strong caveat when constructing legal NLP datasets automatically. | [
"Vaudaux, Olivia",
"Bazzoli, Caroline",
"Coavoux, Maximin",
"Vial, G{\\'e}raldine",
"Verg{\\`e}s, {\\'E}tienne"
] | Pretrained Language Models v. Court Ruling Predictions: A Case Study on a Small Dataset of French Court of Appeal Rulings | nllp-1.5 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.6.bib | https://aclanthology.org/2023.nllp-1.6/ | @inproceedings{rovera-etal-2023-italian,
title = "{I}talian Legislative Text Classification for Gazzetta Ufficiale",
author = "Rovera, Marco and
Palmero Aprosio, Alessio and
Greco, Francesco and
Lucchese, Mariano and
Tonelli, Sara and
Antetomaso, Antonio",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.6",
doi = "10.18653/v1/2023.nllp-1.6",
pages = "44--50",
abstract = "This work introduces a novel, extensive annotated corpus for multi-label legislative text classification in Italian, based on legal acts from the Gazzetta Ufficiale, the official source of legislative information of the Italian state. The annotated dataset, which we released to the community, comprises over 363,000 titles of legislative acts, spanning over 30 years from 1988 until 2022. Moreover, we evaluate four models for text classification on the dataset, demonstrating how using only the acts{'} titles can achieve top-level classification performance, with a micro F1-score of 0.87. Also, our analysis shows how Italian domain-adapted legal models do not outperform general-purpose models on the task. Models{'} performance can be checked by users via a demonstrator system provided in support of this work.",
}
| This work introduces a novel, extensive annotated corpus for multi-label legislative text classification in Italian, based on legal acts from the Gazzetta Ufficiale, the official source of legislative information of the Italian state. The annotated dataset, which we released to the community, comprises over 363,000 titles of legislative acts, spanning over 30 years from 1988 until 2022. Moreover, we evaluate four models for text classification on the dataset, demonstrating how using only the acts{'} titles can achieve top-level classification performance, with a micro F1-score of 0.87. Also, our analysis shows how Italian domain-adapted legal models do not outperform general-purpose models on the task. Models{'} performance can be checked by users via a demonstrator system provided in support of this work. | [
"Rovera, Marco",
"Palmero Aprosio, Alessio",
"Greco, Francesco",
"Lucchese, Mariano",
"Tonelli, Sara",
"Antetomaso, Antonio"
] | Italian Legislative Text Classification for Gazzetta Ufficiale | nllp-1.6 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.7.bib | https://aclanthology.org/2023.nllp-1.7/ | @inproceedings{hua-etal-2023-mixed,
title = "Mixed-domain Language Modeling for Processing Long Legal Documents",
author = "Hua, Wenyue and
Zhang, Yuchen and
Chen, Zhe and
Li, Josie and
Weber, Melanie",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.7",
doi = "10.18653/v1/2023.nllp-1.7",
pages = "51--61",
abstract = "The application of Natural Language Processing (NLP) to specialized domains, such as the law, has recently received a surge of interest. As many legal services rely on processing and analyzing large collections of documents, automating such tasks with NLP tools such as language models emerges as a key challenge since legal documents may contain specialized vocabulary from other domains, such as medical terminology in personal injury text. However, most language models are general-purpose models, which either have limited reasoning capabilities on highly specialized legal terminology and syntax, such as BERT or ROBERTA, or are expensive to run and tune, such as GPT-3.5 and Claude. Thus, in this paper, we propose a specialized language model for personal injury text, LEGALRELECTRA, which is trained on mixed-domain legal and medical corpora. We show that as a small language model, our model improves over general-domain and single-domain medical and legal language models when processing mixed-domain (personal injury) text. Our training architecture implements the ELECTRA framework but utilizes REFORMER instead of BERT for its generator and discriminator. We show that this improves the model{'}s performance on processing long passages and results in better long-range text comprehension.",
}
| The application of Natural Language Processing (NLP) to specialized domains, such as the law, has recently received a surge of interest. As many legal services rely on processing and analyzing large collections of documents, automating such tasks with NLP tools such as language models emerges as a key challenge since legal documents may contain specialized vocabulary from other domains, such as medical terminology in personal injury text. However, most language models are general-purpose models, which either have limited reasoning capabilities on highly specialized legal terminology and syntax, such as BERT or ROBERTA, or are expensive to run and tune, such as GPT-3.5 and Claude. Thus, in this paper, we propose a specialized language model for personal injury text, LEGALRELECTRA, which is trained on mixed-domain legal and medical corpora. We show that as a small language model, our model improves over general-domain and single-domain medical and legal language models when processing mixed-domain (personal injury) text. Our training architecture implements the ELECTRA framework but utilizes REFORMER instead of BERT for its generator and discriminator. We show that this improves the model{'}s performance on processing long passages and results in better long-range text comprehension. | [
"Hua, Wenyue",
"Zhang, Yuchen",
"Chen, Zhe",
"Li, Josie",
"Weber, Melanie"
] | Mixed-domain Language Modeling for Processing Long Legal Documents | nllp-1.7 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.8.bib | https://aclanthology.org/2023.nllp-1.8/ | @inproceedings{roegiest-etal-2023-questions,
title = "Questions about Contracts: Prompt Templates for Structured Answer Generation",
author = "Roegiest, Adam and
Chitta, Radha and
Donnelly, Jonathan and
Lash, Maya and
Vtyurina, Alexandra and
Longtin, Francois",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.8",
doi = "10.18653/v1/2023.nllp-1.8",
pages = "62--72",
abstract = "Finding the answers to legal questions about specific clauses in contracts is an important analysis in many legal workflows (e.g., understanding market trends, due diligence, risk mitigation) but more important is being able to do this at scale. In this paper, we present an examination of using large language models to produce (partially) structured answers to legal questions; primarily in the form of multiple choice and multiple select. We first show that traditional semantic matching is unable to perform this task at acceptable accuracy and then show how question specific prompts can achieve reasonable accuracy across a range of generative models. Finally, we show that much of this effectiveness can be maintained when generalized prompt templates are used rather than question specific ones.",
}
| Finding the answers to legal questions about specific clauses in contracts is an important analysis in many legal workflows (e.g., understanding market trends, due diligence, risk mitigation) but more important is being able to do this at scale. In this paper, we present an examination of using large language models to produce (partially) structured answers to legal questions; primarily in the form of multiple choice and multiple select. We first show that traditional semantic matching is unable to perform this task at acceptable accuracy and then show how question specific prompts can achieve reasonable accuracy across a range of generative models. Finally, we show that much of this effectiveness can be maintained when generalized prompt templates are used rather than question specific ones. | [
"Roegiest, Adam",
"Chitta, Radha",
"Donnelly, Jonathan",
"Lash, Maya",
"Vtyurina, Alex",
"ra",
"Longtin, Francois"
] | Questions about Contracts: Prompt Templates for Structured Answer Generation | nllp-1.8 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.9.bib | https://aclanthology.org/2023.nllp-1.9/ | @inproceedings{medvedeva-mcbride-2023-legal,
title = "Legal Judgment Prediction: If You Are Going to Do It, Do It Right",
author = "Medvedeva, Masha and
Mcbride, Pauline",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.9",
doi = "10.18653/v1/2023.nllp-1.9",
pages = "73--84",
abstract = "The field of Legal Judgment Prediction (LJP) has witnessed significant growth in the past decade, with over 100 papers published in the past three years alone. Our comprehensive survey of over 150 papers reveals a stark reality: only {\textasciitilde}7{\%} of published papers are doing what they set out to do - predict court decisions. We delve into the reasons behind the flawed and unreliable nature of the remaining experiments, emphasising their limited utility in the legal domain. We examine the distinctions between predicting court decisions and the practices of legal professionals in their daily work. We explore how a lack of attention to the identity and needs of end-users has fostered the misconception that LJP is a near-solved challenge suitable for practical application, and contributed to the surge in academic research in the field. To address these issues, we examine three different dimensions of {`}doing LJP right{'}: using data appropriate for the task; tackling explainability; and adopting an application-centric approach to model reporting and evaluation. We formulate a practical checklist of recommendations, delineating the characteristics that are required if a judgment prediction system is to be a valuable addition to the legal field.",
}
| The field of Legal Judgment Prediction (LJP) has witnessed significant growth in the past decade, with over 100 papers published in the past three years alone. Our comprehensive survey of over 150 papers reveals a stark reality: only {\textasciitilde}7{\%} of published papers are doing what they set out to do - predict court decisions. We delve into the reasons behind the flawed and unreliable nature of the remaining experiments, emphasising their limited utility in the legal domain. We examine the distinctions between predicting court decisions and the practices of legal professionals in their daily work. We explore how a lack of attention to the identity and needs of end-users has fostered the misconception that LJP is a near-solved challenge suitable for practical application, and contributed to the surge in academic research in the field. To address these issues, we examine three different dimensions of {`}doing LJP right{'}: using data appropriate for the task; tackling explainability; and adopting an application-centric approach to model reporting and evaluation. We formulate a practical checklist of recommendations, delineating the characteristics that are required if a judgment prediction system is to be a valuable addition to the legal field. | [
"Medvedeva, Masha",
"Mcbride, Pauline"
] | Legal Judgment Prediction: If You Are Going to Do It, Do It Right | nllp-1.9 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.10.bib | https://aclanthology.org/2023.nllp-1.10/ | @inproceedings{shvartzshanider-etal-2023-beyond,
title = "Beyond The Text: Analysis of Privacy Statements through Syntactic and Semantic Role Labeling",
author = "Shvartzshanider, Yan and
Balashankar, Ananth and
Wies, Thomas and
Subramanian, Lakshminarayanan",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.10",
pages = "85--98",
abstract = "This paper formulates a new task of extracting privacy parameters from a privacy policy, through the lens of Contextual Integrity (CI), an established social theory framework for reasoning about privacy norms. Through extensive experiments, we further show that incorporating CI-based domain-specific knowledge into a BERT-based SRL model results in the highest precision and recall, achieving an F1 score of 84{\%}. With our work, we would like to motivate new research in building NLP applications for the privacy domain.",
}
| This paper formulates a new task of extracting privacy parameters from a privacy policy, through the lens of Contextual Integrity (CI), an established social theory framework for reasoning about privacy norms. Through extensive experiments, we further show that incorporating CI-based domain-specific knowledge into a BERT-based SRL model results in the highest precision and recall, achieving an F1 score of 84{\%}. With our work, we would like to motivate new research in building NLP applications for the privacy domain. | [
"Shvartzshanider, Yan",
"Balashankar, Ananth",
"Wies, Thomas",
"Subramanian, Lakshminarayanan"
] | Beyond The Text: Analysis of Privacy Statements through Syntactic and Semantic Role Labeling | nllp-1.10 | 2010.00678 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.11.bib | https://aclanthology.org/2023.nllp-1.11/ | @inproceedings{singhal-etal-2023-towards,
title = "Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder{'}s Perspective",
author = "Singhal, Anmol and
Anish, Preethu Rose and
Karande, Shirish and
Ghaisas, Smita",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.11",
doi = "10.18653/v1/2023.nllp-1.11",
pages = "99--112",
abstract = "Commercial contracts are known to be a valuable source for deriving project-specific requirements. However, contract negotiations mainly occur among the legal counsel of the parties involved. The participation of non-legal stakeholders, including requirement analysts, engineers, and solution architects, whose primary responsibility lies in ensuring the seamless implementation of contractual terms, is often indirect and inadequate. Consequently, a significant number of sentences in contractual clauses, though legally accurate, can appear unfair from an implementation perspective to non-legal stakeholders. This perception poses a problem since requirements indicated in the clauses are obligatory and can involve punitive measures and penalties if not implemented as committed in the contract. Therefore, the identification of potentially unfair clauses in contracts becomes crucial. In this work, we conduct an empirical study to analyze the perspectives of different stakeholders regarding contractual fairness. We then investigate the ability of Pre-trained Language Models (PLMs) to identify unfairness in contractual sentences by comparing chain of thought prompting and semi-supervised fine-tuning approaches. Using BERT-based fine-tuning, we achieved an accuracy of 84{\%} on a dataset consisting of proprietary contracts. It outperformed chain of thought prompting using Vicuna-13B by a margin of 9{\%}.",
}
| Commercial contracts are known to be a valuable source for deriving project-specific requirements. However, contract negotiations mainly occur among the legal counsel of the parties involved. The participation of non-legal stakeholders, including requirement analysts, engineers, and solution architects, whose primary responsibility lies in ensuring the seamless implementation of contractual terms, is often indirect and inadequate. Consequently, a significant number of sentences in contractual clauses, though legally accurate, can appear unfair from an implementation perspective to non-legal stakeholders. This perception poses a problem since requirements indicated in the clauses are obligatory and can involve punitive measures and penalties if not implemented as committed in the contract. Therefore, the identification of potentially unfair clauses in contracts becomes crucial. In this work, we conduct an empirical study to analyze the perspectives of different stakeholders regarding contractual fairness. We then investigate the ability of Pre-trained Language Models (PLMs) to identify unfairness in contractual sentences by comparing chain of thought prompting and semi-supervised fine-tuning approaches. Using BERT-based fine-tuning, we achieved an accuracy of 84{\%} on a dataset consisting of proprietary contracts. It outperformed chain of thought prompting using Vicuna-13B by a margin of 9{\%}. | [
"Singhal, Anmol",
"Anish, Preethu Rose",
"Kar",
"e, Shirish",
"Ghaisas, Smita"
] | Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder's Perspective | nllp-1.11 | 2312.01398 | [
""
] | https://huggingface.co/papers/2312.01398 | 0 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.nllp-1.12.bib | https://aclanthology.org/2023.nllp-1.12/ | @inproceedings{holzenberger-van-durme-2023-connecting,
title = "Connecting Symbolic Statutory Reasoning with Legal Information Extraction",
author = "Holzenberger, Nils and
Van Durme, Benjamin",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.12",
doi = "10.18653/v1/2023.nllp-1.12",
pages = "113--131",
abstract = "Statutory reasoning is the task of determining whether a given law {--} a part of a statute {--} applies to a given legal case. Previous work has shown that structured, logical representations of laws and cases can be leveraged to solve statutory reasoning, including on the StAtutory Reasoning Assessment dataset (SARA), but rely on costly human translation into structured representations. Here, we investigate a form of legal information extraction atop the SARA cases, illustrating how the task can be done with high performance. Further, we show how the performance of downstream symbolic reasoning directly correlates with the quality of the information extraction.",
}
| Statutory reasoning is the task of determining whether a given law {--} a part of a statute {--} applies to a given legal case. Previous work has shown that structured, logical representations of laws and cases can be leveraged to solve statutory reasoning, including on the StAtutory Reasoning Assessment dataset (SARA), but rely on costly human translation into structured representations. Here, we investigate a form of legal information extraction atop the SARA cases, illustrating how the task can be done with high performance. Further, we show how the performance of downstream symbolic reasoning directly correlates with the quality of the information extraction. | [
"Holzenberger, Nils",
"Van Durme, Benjamin"
] | Connecting Symbolic Statutory Reasoning with Legal Information Extraction | nllp-1.12 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.13.bib | https://aclanthology.org/2023.nllp-1.13/ | @inproceedings{ryu-etal-2023-retrieval,
title = "Retrieval-based Evaluation for {LLM}s: A Case Study in {K}orean Legal {QA}",
author = "Ryu, Cheol and
Lee, Seolhwa and
Pang, Subeen and
Choi, Chanyeol and
Choi, Hojun and
Min, Myeonggee and
Sohn, Jy-Yong",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.13",
doi = "10.18653/v1/2023.nllp-1.13",
pages = "132--137",
abstract = "While large language models (LLMs) have demonstrated significant capabilities in text generation, their utilization in areas requiring domain-specific expertise, such as law, must be approached cautiously. This caution is warranted due to the inherent challenges associated with LLM-generated texts, including the potential presence of factual errors. Motivated by this issue, we propose Eval-RAG, a new evaluation method for LLM-generated texts. Unlike existing methods, Eval-RAG evaluates the validity of generated texts based on the related document that are collected by the retriever. In other words, Eval-RAG adopts the idea of retrieval augmented generation (RAG) for the purpose of evaluation. Our experimental results on Korean Legal Question-Answering (QA) tasks show that conventional LLM-based evaluation methods can be better aligned with Lawyers{'} evaluations, by combining with Eval-RAG. In addition, our qualitative analysis show that Eval-RAG successfully finds the factual errors in LLM-generated texts, while existing evaluation methods cannot.",
}
| While large language models (LLMs) have demonstrated significant capabilities in text generation, their utilization in areas requiring domain-specific expertise, such as law, must be approached cautiously. This caution is warranted due to the inherent challenges associated with LLM-generated texts, including the potential presence of factual errors. Motivated by this issue, we propose Eval-RAG, a new evaluation method for LLM-generated texts. Unlike existing methods, Eval-RAG evaluates the validity of generated texts based on the related document that are collected by the retriever. In other words, Eval-RAG adopts the idea of retrieval augmented generation (RAG) for the purpose of evaluation. Our experimental results on Korean Legal Question-Answering (QA) tasks show that conventional LLM-based evaluation methods can be better aligned with Lawyers{'} evaluations, by combining with Eval-RAG. In addition, our qualitative analysis show that Eval-RAG successfully finds the factual errors in LLM-generated texts, while existing evaluation methods cannot. | [
"Ryu, Cheol",
"Lee, Seolhwa",
"Pang, Subeen",
"Choi, Chanyeol",
"Choi, Hojun",
"Min, Myeonggee",
"Sohn, Jy-Yong"
] | Retrieval-based Evaluation for LLMs: A Case Study in Korean Legal QA | nllp-1.13 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.14.bib | https://aclanthology.org/2023.nllp-1.14/ | @inproceedings{camassa-2023-legal,
title = "Legal {NLP} Meets {M}i{CAR}: Advancing the Analysis of Crypto White Papers",
author = "Camassa, Carolina",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.14",
doi = "10.18653/v1/2023.nllp-1.14",
pages = "138--148",
abstract = "In the rapidly evolving field of crypto assets, white papers are essential documents for investor guidance, and are now subject to unprecedented content requirements under the European Union{'}s Markets in Crypto-Assets Regulation (MiCAR). Natural Language Processing (NLP) can serve as a powerful tool for both analyzing these documents and assisting in regulatory compliance. This paper delivers two contributions to the topic. First, we survey existing applications of textual analysis to unregulated crypto asset white papers, uncovering a research gap that could be bridged with interdisciplinary collaboration. We then conduct an analysis of the changes introduced by MiCAR, highlighting the opportunities and challenges of integrating NLP within the new regulatory framework. The findings set the stage for further research, with the potential to benefit regulators, crypto asset issuers, and investors.",
}
| In the rapidly evolving field of crypto assets, white papers are essential documents for investor guidance, and are now subject to unprecedented content requirements under the European Union{'}s Markets in Crypto-Assets Regulation (MiCAR). Natural Language Processing (NLP) can serve as a powerful tool for both analyzing these documents and assisting in regulatory compliance. This paper delivers two contributions to the topic. First, we survey existing applications of textual analysis to unregulated crypto asset white papers, uncovering a research gap that could be bridged with interdisciplinary collaboration. We then conduct an analysis of the changes introduced by MiCAR, highlighting the opportunities and challenges of integrating NLP within the new regulatory framework. The findings set the stage for further research, with the potential to benefit regulators, crypto asset issuers, and investors. | [
"Camassa, Carolina"
] | Legal NLP Meets MiCAR: Advancing the Analysis of Crypto White Papers | nllp-1.14 | 2310.10333 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.15.bib | https://aclanthology.org/2023.nllp-1.15/ | @inproceedings{minkova-etal-2023-low,
title = "Low-Resource Deontic Modality Classification in {EU} Legislation",
author = "Minkova, Kristina and
Chakravarthy, Shashank and
Dijck, Gijs",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.15",
doi = "10.18653/v1/2023.nllp-1.15",
pages = "149--158",
abstract = "In law, it is important to distinguish between obligations, permissions, prohibitions, rights, and powers. These categories are called deontic modalities. This paper evaluates the performance of two deontic modality classification models, LEGAL-BERT and a Fusion model, in a low-resource setting. To create a generalized dataset for multi-class classification, we extracted random provisions from European Union (EU) legislation. By fine-tuning previously researched and published models, we evaluate their performance on our dataset against fusion models designed for low-resource text classification. We incorporate focal loss as an alternative for cross-entropy to tackle issues of class imbalance. The experiments indicate that the fusion model performs better for both balanced and imbalanced data with a macro F1-score of 0.61 for imbalanced data, 0.62 for balanced data, and 0.55 with focal loss for imbalanced data. When focusing on accuracy, our experiments indicate that the fusion model performs better with scores of 0.91 for imbalanced data, 0.78 for balanced data, and 0.90 for imbalanced data with focal loss.",
}
| In law, it is important to distinguish between obligations, permissions, prohibitions, rights, and powers. These categories are called deontic modalities. This paper evaluates the performance of two deontic modality classification models, LEGAL-BERT and a Fusion model, in a low-resource setting. To create a generalized dataset for multi-class classification, we extracted random provisions from European Union (EU) legislation. By fine-tuning previously researched and published models, we evaluate their performance on our dataset against fusion models designed for low-resource text classification. We incorporate focal loss as an alternative for cross-entropy to tackle issues of class imbalance. The experiments indicate that the fusion model performs better for both balanced and imbalanced data with a macro F1-score of 0.61 for imbalanced data, 0.62 for balanced data, and 0.55 with focal loss for imbalanced data. When focusing on accuracy, our experiments indicate that the fusion model performs better with scores of 0.91 for imbalanced data, 0.78 for balanced data, and 0.90 for imbalanced data with focal loss. | [
"Minkova, Kristina",
"Chakravarthy, Shashank",
"Dijck, Gijs"
] | Low-Resource Deontic Modality Classification in EU Legislation | nllp-1.15 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.16.bib | https://aclanthology.org/2023.nllp-1.16/ | @inproceedings{niklaus-etal-2023-automatic,
title = "Automatic Anonymization of {S}wiss Federal {S}upreme {C}ourt Rulings",
author = {Niklaus, Joel and
Mami{\'e}, Robin and
St{\"u}rmer, Matthias and
Brunner, Daniel and
Gygli, Marcel},
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.16",
doi = "10.18653/v1/2023.nllp-1.16",
pages = "159--165",
abstract = "Releasing court decisions to the public relies on proper anonymization to protect all involved parties, where necessary. The Swiss Federal Supreme Court relies on an existing system that combines different traditional computational methods with human experts. In this work, we enhance the existing anonymization software using a large dataset annotated with entities to be anonymized. We compared BERT-based models with models pre-trained on in-domain data. Our results show that using in-domain data to pre-train the models further improves the F1-score by more than 5{\%} compared to existing models. Our work demonstrates that combining existing anonymization methods, such as regular expressions, with machine learning can further reduce manual labor and enhance automatic suggestions.",
}
| Releasing court decisions to the public relies on proper anonymization to protect all involved parties, where necessary. The Swiss Federal Supreme Court relies on an existing system that combines different traditional computational methods with human experts. In this work, we enhance the existing anonymization software using a large dataset annotated with entities to be anonymized. We compared BERT-based models with models pre-trained on in-domain data. Our results show that using in-domain data to pre-train the models further improves the F1-score by more than 5{\%} compared to existing models. Our work demonstrates that combining existing anonymization methods, such as regular expressions, with machine learning can further reduce manual labor and enhance automatic suggestions. | [
"Niklaus, Joel",
"Mami{\\'e}, Robin",
"St{\\\"u}rmer, Matthias",
"Brunner, Daniel",
"Gygli, Marcel"
] | Automatic Anonymization of Swiss Federal Supreme Court Rulings | nllp-1.16 | 2310.04632 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.17.bib | https://aclanthology.org/2023.nllp-1.17/ | @inproceedings{pai-etal-2023-exploration,
title = "Exploration of Open Large Language Models for e{D}iscovery",
author = "Pai, Sumit and
Lahiri, Sounak and
Kumar, Ujjwal and
Baksi, Krishanu and
Soba, Elijah and
Suesserman, Michael and
Pudota, Nirmala and
Foster, Jon and
Bowen, Edward and
Bhattacharya, Sanmitra",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.17",
doi = "10.18653/v1/2023.nllp-1.17",
pages = "166--177",
    abstract = "The rapid advancement of Generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), has led to their widespread adoption for various natural language processing (NLP) tasks. One crucial domain ripe for innovation is the Technology-Assisted Review (TAR) process in Electronic discovery (eDiscovery). Traditionally, TAR involves manual review and classification of documents for relevance over large document collections for litigations and investigations. This process is aided by machine learning and NLP tools which require extensive training and fine-tuning. In this paper, we explore the application of LLMs to TAR, specifically for predictive coding. We experiment with out-of-the-box prompting and fine-tuning of LLMs using parameter-efficient techniques. We conduct experiments using open LLMs and compare them to commercially-licensed ones. Our experiments demonstrate that open LLMs lag behind commercially-licensed models in relevance classification using out-of-the-box prompting. However, topic-specific instruction tuning of open LLMs not only improves their effectiveness but often enables them to outperform their commercially-licensed counterparts in performance evaluations. Additionally, we conduct a user study to gauge the preferences of our eDiscovery Subject Matter Specialists (SMS) regarding human-authored versus model-generated reasoning. We demonstrate that instruction-tuned open LLMs can generate high-quality reasoning comparable to that of commercial LLMs.",
}
| The rapid advancement of Generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), has led to their widespread adoption for various natural language processing (NLP) tasks. One crucial domain ripe for innovation is the Technology-Assisted Review (TAR) process in Electronic discovery (eDiscovery). Traditionally, TAR involves manual review and classification of documents for relevance over large document collections for litigations and investigations. This process is aided by machine learning and NLP tools which require extensive training and fine-tuning. In this paper, we explore the application of LLMs to TAR, specifically for predictive coding. We experiment with out-of-the-box prompting and fine-tuning of LLMs using parameter-efficient techniques. We conduct experiments using open LLMs and compare them to commercially-licensed ones. Our experiments demonstrate that open LLMs lag behind commercially-licensed models in relevance classification using out-of-the-box prompting. However, topic-specific instruction tuning of open LLMs not only improves their effectiveness but often enables them to outperform their commercially-licensed counterparts in performance evaluations. Additionally, we conduct a user study to gauge the preferences of our eDiscovery Subject Matter Specialists (SMS) regarding human-authored versus model-generated reasoning. We demonstrate that instruction-tuned open LLMs can generate high-quality reasoning comparable to that of commercial LLMs. | [
"Pai, Sumit",
"Lahiri, Sounak",
"Kumar, Ujjwal",
"Baksi, Krishanu",
"Soba, Elijah",
"Suesserman, Michael",
"Pudota, Nirmala",
"Foster, Jon",
"Bowen, Edward",
"Bhattacharya, Sanmitra"
] | Exploration of Open Large Language Models for eDiscovery | nllp-1.17 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.18.bib | https://aclanthology.org/2023.nllp-1.18/ | @inproceedings{mavi-etal-2023-retrieval,
title = "Retrieval-Augmented Chain-of-Thought in Semi-structured Domains",
author = "Mavi, Vaibhav and
Saparov, Abulhair and
Zhao, Chen",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.18",
doi = "10.18653/v1/2023.nllp-1.18",
pages = "178--191",
abstract = "Applying existing question answering (QA) systems to specialized domains like law and finance presents challenges that necessitate domain expertise. Although large language models (LLMs) have shown impressive language comprehension and in-context learning capabilities, their inability to handle very long inputs/contexts is well known. Tasks specific to these domains need significant background knowledge, leading to contexts that can often exceed the maximum length that existing LLMs can process. This study explores leveraging the semi-structured nature of legal and financial data to efficiently retrieve relevant context, enabling the use of LLMs for domain-specialized QA. The resulting system outperforms contemporary models and also provides useful explanations for the answers, encouraging the integration of LLMs into legal and financial NLP systems for future research.",
}
| Applying existing question answering (QA) systems to specialized domains like law and finance presents challenges that necessitate domain expertise. Although large language models (LLMs) have shown impressive language comprehension and in-context learning capabilities, their inability to handle very long inputs/contexts is well known. Tasks specific to these domains need significant background knowledge, leading to contexts that can often exceed the maximum length that existing LLMs can process. This study explores leveraging the semi-structured nature of legal and financial data to efficiently retrieve relevant context, enabling the use of LLMs for domain-specialized QA. The resulting system outperforms contemporary models and also provides useful explanations for the answers, encouraging the integration of LLMs into legal and financial NLP systems for future research. | [
"Mavi, Vaibhav",
"Saparov, Abulhair",
"Zhao, Chen"
] | Retrieval-Augmented Chain-of-Thought in Semi-structured Domains | nllp-1.18 | 2310.14435 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.19.bib | https://aclanthology.org/2023.nllp-1.19/ | @inproceedings{hai-long-etal-2023-joint,
title = "Joint Learning for Legal Text Retrieval and Textual Entailment: Leveraging the Relationship between Relevancy and Affirmation",
author = "Hai Long, Nguyen and
Vuong, Thi Hai Yen and
Nguyen, Ha Thanh and
Phan, Xuan-Hieu",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.19",
doi = "10.18653/v1/2023.nllp-1.19",
pages = "192--201",
    abstract = "In legal text processing and reasoning, one normally performs information retrieval to find relevant documents of an input question, and then performs textual entailment to answer the question. The former is about relevancy whereas the latter is about affirmation (or conclusion). While relevancy and affirmation are two different concepts, there is obviously a connection between them. That is why performing retrieval and textual entailment sequentially and independently may not make the most of this mutually supportive relationship. This paper, therefore, proposes a multi{--}task learning model for these two tasks to improve their performance. Technically, in the COLIEE dataset, we use the information of Task 4 (conclusions) to improve the performance of Task 3 (searching for legal provisions related to the question). Our empirical findings indicate that this supportive relationship truly exists. This important insight sheds light on how leveraging the relationship between tasks can significantly enhance the effectiveness of our multi-task learning approach for legal text processing.",
}
| In legal text processing and reasoning, one normally performs information retrieval to find relevant documents of an input question, and then performs textual entailment to answer the question. The former is about relevancy whereas the latter is about affirmation (or conclusion). While relevancy and affirmation are two different concepts, there is obviously a connection between them. That is why performing retrieval and textual entailment sequentially and independently may not make the most of this mutually supportive relationship. This paper, therefore, proposes a multi{--}task learning model for these two tasks to improve their performance. Technically, in the COLIEE dataset, we use the information of Task 4 (conclusions) to improve the performance of Task 3 (searching for legal provisions related to the question). Our empirical findings indicate that this supportive relationship truly exists. This important insight sheds light on how leveraging the relationship between tasks can significantly enhance the effectiveness of our multi-task learning approach for legal text processing. | [
"Hai Long, Nguyen",
"Vuong, Thi Hai Yen",
"Nguyen, Ha Thanh",
"Phan, Xuan-Hieu"
] | Joint Learning for Legal Text Retrieval and Textual Entailment: Leveraging the Relationship between Relevancy and Affirmation | nllp-1.19 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.20.bib | https://aclanthology.org/2023.nllp-1.20/ | @inproceedings{fang-etal-2023-super,
title = "Super-{SCOTUS}: A multi-sourced dataset for the {S}upreme {C}ourt of the {US}",
author = "Fang, Biaoyan and
Cohn, Trevor and
Baldwin, Timothy and
Frermann, Lea",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.20",
doi = "10.18653/v1/2023.nllp-1.20",
pages = "202--214",
abstract = "Given the complexity of the judiciary in the US Supreme Court, various procedures, along with various resources, contribute to the court system. However, most research focuses on a limited set of resources, e.g., court opinions or oral arguments, for analyzing a specific perspective in court, e.g., partisanship or voting. To gain a fuller understanding of these perspectives in the legal system of the US Supreme Court, a more comprehensive dataset, connecting different sources in different phases of the court procedure, is needed. To address this gap, we present a multi-sourced dataset for the Supreme Court, comprising court resources from different procedural phases, connecting language documents with extensive metadata. We showcase its utility through a case study on how different court documents reveal the decision direction (conservative vs. liberal) of the cases. We analyze performance differences across three protected attributes, indicating that different court resources encode different biases, and reinforcing that considering various resources provides a fuller picture of the court procedures. We further discuss how our dataset can contribute to future research directions.",
}
| Given the complexity of the judiciary in the US Supreme Court, various procedures, along with various resources, contribute to the court system. However, most research focuses on a limited set of resources, e.g., court opinions or oral arguments, for analyzing a specific perspective in court, e.g., partisanship or voting. To gain a fuller understanding of these perspectives in the legal system of the US Supreme Court, a more comprehensive dataset, connecting different sources in different phases of the court procedure, is needed. To address this gap, we present a multi-sourced dataset for the Supreme Court, comprising court resources from different procedural phases, connecting language documents with extensive metadata. We showcase its utility through a case study on how different court documents reveal the decision direction (conservative vs. liberal) of the cases. We analyze performance differences across three protected attributes, indicating that different court resources encode different biases, and reinforcing that considering various resources provides a fuller picture of the court procedures. We further discuss how our dataset can contribute to future research directions. | [
"Fang, Biaoyan",
"Cohn, Trevor",
"Baldwin, Timothy",
"Frermann, Lea"
] | Super-SCOTUS: A multi-sourced dataset for the Supreme Court of the US | nllp-1.20 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.21.bib | https://aclanthology.org/2023.nllp-1.21/ | @inproceedings{kwak-etal-2023-transferring,
title = "Transferring Legal Natural Language Inference Model from a {US} State to Another: What Makes It So Hard?",
author = "Kwak, Alice and
Forte, Gaetano and
Bambauer, Derek and
Surdeanu, Mihai",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.21",
doi = "10.18653/v1/2023.nllp-1.21",
pages = "215--222",
    abstract = "This study investigates whether a legal natural language inference (NLI) model trained on the data from one US state can be transferred to another state. We fine-tuned a pre-trained model on the task of evaluating the validity of legal will statements, once with the dataset containing the Tennessee wills and once with the dataset containing the Idaho wills. Each model{'}s performance on the in-domain setting and the out-of-domain setting is compared to see if the models can transfer across the states. We found that the model trained on one US state can be mostly transferred to another state. However, it is clear that the model{'}s performance drops in the out-of-domain setting. The F1 scores of the Tennessee model and the Idaho model are 96.41 and 92.03 when predicting the data from the same state, but they drop to 66.32 and 81.60 when predicting the data from another state. Subsequent error analysis revealed that there are two major sources of errors. First, the model fails to recognize equivalent laws across states when there are stylistic differences between laws. Second, the difference in statutory section numbering systems between the states makes it difficult for the model to locate laws relevant to the cases being predicted on. This analysis provides insights on how the future NLI system can be improved. Also, our findings offer empirical support to legal experts advocating the standardization of legal documents.",
}
| This study investigates whether a legal natural language inference (NLI) model trained on the data from one US state can be transferred to another state. We fine-tuned a pre-trained model on the task of evaluating the validity of legal will statements, once with the dataset containing the Tennessee wills and once with the dataset containing the Idaho wills. Each model{'}s performance on the in-domain setting and the out-of-domain setting is compared to see if the models can transfer across the states. We found that the model trained on one US state can be mostly transferred to another state. However, it is clear that the model{'}s performance drops in the out-of-domain setting. The F1 scores of the Tennessee model and the Idaho model are 96.41 and 92.03 when predicting the data from the same state, but they drop to 66.32 and 81.60 when predicting the data from another state. Subsequent error analysis revealed that there are two major sources of errors. First, the model fails to recognize equivalent laws across states when there are stylistic differences between laws. Second, the difference in statutory section numbering systems between the states makes it difficult for the model to locate laws relevant to the cases being predicted on. This analysis provides insights on how the future NLI system can be improved. Also, our findings offer empirical support to legal experts advocating the standardization of legal documents. | [
"Kwak, Alice",
"Forte, Gaetano",
"Bambauer, Derek",
"Surdeanu, Mihai"
] | Transferring Legal Natural Language Inference Model from a US State to Another: What Makes It So Hard? | nllp-1.21 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.22.bib | https://aclanthology.org/2023.nllp-1.22/ | @inproceedings{jayakumar-etal-2023-large,
title = "Large Language Models are legal but they are not: Making the case for a powerful {L}egal{LLM}",
author = "Jayakumar, Thanmay and
Farooqui, Fauzan and
Farooqui, Luqman",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.22",
doi = "10.18653/v1/2023.nllp-1.22",
pages = "223--229",
    abstract = "Realizing the recent advances from Natural Language Processing (NLP) to the legal sector poses challenging problems such as extremely long sequence lengths, specialized vocabulary that is usually only understood by legal professionals, and high amounts of data imbalance. The recent surge of Large Language Models (LLMs) has begun to provide new opportunities to apply NLP in the legal domain due to their ability to handle lengthy, complex sequences. Moreover, the emergence of domain-specific LLMs has displayed extremely promising results on various tasks. In this study, we aim to quantify how general LLMs perform in comparison to legal-domain models (be it an LLM or otherwise). Specifically, we compare the zero-shot performance of three general-purpose LLMs (ChatGPT-3.5, LLaMA-70b and Falcon-180b) on the LEDGAR subset of the LexGLUE benchmark for contract provision classification. Although the LLMs were not explicitly trained on legal data, we observe that they are still able to classify the theme correctly in most cases. However, we find that their mic-F1/mac-F1 performance is up to 19.2/26.8{\%} lower than that of smaller models fine-tuned on the legal domain, thus underscoring the need for more powerful legal-domain LLMs.",
}
| Realizing the recent advances from Natural Language Processing (NLP) to the legal sector poses challenging problems such as extremely long sequence lengths, specialized vocabulary that is usually only understood by legal professionals, and high amounts of data imbalance. The recent surge of Large Language Models (LLMs) has begun to provide new opportunities to apply NLP in the legal domain due to their ability to handle lengthy, complex sequences. Moreover, the emergence of domain-specific LLMs has displayed extremely promising results on various tasks. In this study, we aim to quantify how general LLMs perform in comparison to legal-domain models (be it an LLM or otherwise). Specifically, we compare the zero-shot performance of three general-purpose LLMs (ChatGPT-3.5, LLaMA-70b and Falcon-180b) on the LEDGAR subset of the LexGLUE benchmark for contract provision classification. Although the LLMs were not explicitly trained on legal data, we observe that they are still able to classify the theme correctly in most cases. However, we find that their mic-F1/mac-F1 performance is up to 19.2/26.8{\%} lower than that of smaller models fine-tuned on the legal domain, thus underscoring the need for more powerful legal-domain LLMs. | [
"Jayakumar, Thanmay",
"Farooqui, Fauzan",
"Farooqui, Luqman"
] | Large Language Models are legal but they are not: Making the case for a powerful LegalLLM | nllp-1.22 | 2311.08890 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.23.bib | https://aclanthology.org/2023.nllp-1.23/ | @inproceedings{srinivas-etal-2023-potential,
title = "On the Potential and Limitations of Few-Shot In-Context Learning to Generate Metamorphic Specifications for Tax Preparation Software",
author = "Srinivas, Dananjay and
Das, Rohan and
Tizpaz-Niari, Saeid and
Trivedi, Ashutosh and
Pacheco, Maria Leonor",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.23",
doi = "10.18653/v1/2023.nllp-1.23",
pages = "230--243",
    abstract = "Due to the ever-increasing complexity of income tax laws in the United States, the number of US taxpayers filing their taxes using tax preparation software (henceforth, tax software) continues to increase. According to the U.S. Internal Revenue Service (IRS), in FY22, nearly 50{\%} of taxpayers filed their individual income taxes using tax software. Given the legal consequences of incorrectly filing taxes for the taxpayer, ensuring the correctness of tax software is of paramount importance. Metamorphic testing has emerged as a leading solution to test and debug legal-critical tax software due to the absence of correctness requirements and trustworthy datasets. The key idea behind metamorphic testing is to express the properties of a system in terms of the relationship between one input and its slightly metamorphosed twinned input. Extracting metamorphic properties from IRS tax publications is a tedious and time-consuming process. As a response, this paper formulates the task of generating metamorphic specifications as a translation task between properties extracted from tax documents - expressed in natural language - to a contrastive first-order logic form. We perform a systematic analysis on the potential and limitations of in-context learning with Large Language Models (LLMs) for this task, and outline a research agenda towards automating the generation of metamorphic specifications for tax preparation software.",
}
| Due to the ever-increasing complexity of income tax laws in the United States, the number of US taxpayers filing their taxes using tax preparation software (henceforth, tax software) continues to increase. According to the U.S. Internal Revenue Service (IRS), in FY22, nearly 50{\%} of taxpayers filed their individual income taxes using tax software. Given the legal consequences of incorrectly filing taxes for the taxpayer, ensuring the correctness of tax software is of paramount importance. Metamorphic testing has emerged as a leading solution to test and debug legal-critical tax software due to the absence of correctness requirements and trustworthy datasets. The key idea behind metamorphic testing is to express the properties of a system in terms of the relationship between one input and its slightly metamorphosed twinned input. Extracting metamorphic properties from IRS tax publications is a tedious and time-consuming process. As a response, this paper formulates the task of generating metamorphic specifications as a translation task between properties extracted from tax documents - expressed in natural language - to a contrastive first-order logic form. We perform a systematic analysis on the potential and limitations of in-context learning with Large Language Models (LLMs) for this task, and outline a research agenda towards automating the generation of metamorphic specifications for tax preparation software. | [
"Srinivas, Dananjay",
"Das, Rohan",
"Tizpaz-Niari, Saeid",
"Trivedi, Ashutosh",
"Pacheco, Maria Leonor"
] | On the Potential and Limitations of Few-Shot In-Context Learning to Generate Metamorphic Specifications for Tax Preparation Software | nllp-1.23 | 2311.11979 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.24.bib | https://aclanthology.org/2023.nllp-1.24/ | @inproceedings{barale-etal-2023-asylex,
title = "{A}sy{L}ex: A Dataset for Legal Language Processing of Refugee Claims",
author = "Barale, Claire and
Klaisoongnoen, Mark and
Minervini, Pasquale and
Rovatsos, Michael and
Bhuta, Nehal",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.24",
doi = "10.18653/v1/2023.nllp-1.24",
pages = "244--257",
abstract = "Advancements in natural language processing (NLP) and language models have demonstrated immense potential in the legal domain, enabling automated analysis and comprehension of legal texts. However, developing robust models in Legal NLP is significantly challenged by the scarcity of resources. This paper presents AsyLex, the first dataset specifically designed for Refugee Law applications to address this gap. The dataset introduces 59,112 documents on refugee status determination in Canada from 1996 to 2022, providing researchers and practitioners with essential material for training and evaluating NLP models for legal research and case review. Case review is defined as entity extraction and outcome prediction tasks. The dataset includes 19,115 gold-standard human-labeled annotations for 20 legally relevant entity types curated with the help of legal experts and 1,682 gold-standard labeled documents for the case outcome. Furthermore, we supply the corresponding trained entity extraction models and the resulting labeled entities generated through the inference process on AsyLex. Four supplementary features are obtained through rule-based extraction. We demonstrate the usefulness of our dataset on the legal judgment prediction task to predict the binary outcome and test a set of baselines using the text of the documents and our annotations. We observe that models pretrained on similar legal documents reach better scores, suggesting that acquiring more datasets for specialized domains such as law is crucial.",
}
| Advancements in natural language processing (NLP) and language models have demonstrated immense potential in the legal domain, enabling automated analysis and comprehension of legal texts. However, developing robust models in Legal NLP is significantly challenged by the scarcity of resources. This paper presents AsyLex, the first dataset specifically designed for Refugee Law applications to address this gap. The dataset introduces 59,112 documents on refugee status determination in Canada from 1996 to 2022, providing researchers and practitioners with essential material for training and evaluating NLP models for legal research and case review. Case review is defined as entity extraction and outcome prediction tasks. The dataset includes 19,115 gold-standard human-labeled annotations for 20 legally relevant entity types curated with the help of legal experts and 1,682 gold-standard labeled documents for the case outcome. Furthermore, we supply the corresponding trained entity extraction models and the resulting labeled entities generated through the inference process on AsyLex. Four supplementary features are obtained through rule-based extraction. We demonstrate the usefulness of our dataset on the legal judgment prediction task to predict the binary outcome and test a set of baselines using the text of the documents and our annotations. We observe that models pretrained on similar legal documents reach better scores, suggesting that acquiring more datasets for specialized domains such as law is crucial. | [
"Barale, Claire",
"Klaisoongnoen, Mark",
"Minervini, Pasquale",
"Rovatsos, Michael",
"Bhuta, Nehal"
] | AsyLex: A Dataset for Legal Language Processing of Refugee Claims | nllp-1.24 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.25.bib | https://aclanthology.org/2023.nllp-1.25/ | @inproceedings{hakimi-parizi-etal-2023-comparative,
title = "A Comparative Study of Prompting Strategies for Legal Text Classification",
author = "Hakimi Parizi, Ali and
Liu, Yuyang and
Nokku, Prudhvi and
Gholamian, Sina and
Emerson, David",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.25",
doi = "10.18653/v1/2023.nllp-1.25",
pages = "258--265",
    abstract = "In this study, we explore the performance of large language models (LLMs) using different prompt engineering approaches in the context of legal text classification. Prior research has demonstrated that various prompting techniques can improve the performance of a diverse array of tasks done by LLMs. However, in this research, we observe that professional documents, and in particular legal documents, pose unique challenges for LLMs. We experiment with several LLMs and various prompting techniques, including zero/few-shot prompting, prompt ensembling, chain-of-thought, and activation fine-tuning and compare the performance on legal datasets. Although the new generation of LLMs and prompt optimization techniques have been shown to improve generation and understanding of generic tasks, our findings suggest that such improvements may not readily transfer to other domains. Specifically, experiments indicate that not all prompting approaches and models are well-suited for the legal domain which involves complexities such as long documents and domain-specific language.",
}
| In this study, we explore the performance of large language models (LLMs) using different prompt engineering approaches in the context of legal text classification. Prior research has demonstrated that various prompting techniques can improve the performance of a diverse array of tasks done by LLMs. However, in this research, we observe that professional documents, and in particular legal documents, pose unique challenges for LLMs. We experiment with several LLMs and various prompting techniques, including zero/few-shot prompting, prompt ensembling, chain-of-thought, and activation fine-tuning and compare the performance on legal datasets. Although the new generation of LLMs and prompt optimization techniques have been shown to improve generation and understanding of generic tasks, our findings suggest that such improvements may not readily transfer to other domains. Specifically, experiments indicate that not all prompting approaches and models are well-suited for the legal domain which involves complexities such as long documents and domain-specific language. | [
"Hakimi Parizi, Ali",
"Liu, Yuyang",
"Nokku, Prudhvi",
"Gholamian, Sina",
"Emerson, David"
] | A Comparative Study of Prompting Strategies for Legal Text Classification | nllp-1.25 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nllp-1.26.bib | https://aclanthology.org/2023.nllp-1.26/ | @inproceedings{xing-etal-2023-tracing,
title = "Tracing Influence at Scale: A Contrastive Learning Approach to Linking Public Comments and Regulator Responses",
author = "Xing, Linzi and
Hackinen, Brad and
Carenini, Giuseppe",
editor = "Preo{\textcommabelow{t}}iuc-Pietro, Daniel and
Goanta, Catalina and
Chalkidis, Ilias and
Barrett, Leslie and
Spanakis, Gerasimos and
Aletras, Nikolaos",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nllp-1.26",
doi = "10.18653/v1/2023.nllp-1.26",
pages = "266--274",
    abstract = "U.S. Federal Regulators receive over one million comment letters each year from businesses, interest groups, and members of the public, all advocating for changes to proposed regulations. These comments are believed to have wide-ranging impacts on public policy. However, measuring the impact of specific comments is challenging because regulators are required to respond to comments but they do not have to specify which comments they are addressing. In this paper, we propose a simple yet effective solution to this problem by using an iterative contrastive method to train a neural model aiming for matching text from public comments to responses written by regulators. We demonstrate that our proposal substantially outperforms a set of selected text-matching baselines on a human-annotated test set. Furthermore, it delivers performance comparable to the most advanced gigantic language model (i.e., GPT-4), and is more cost-effective when handling comment and regulator response matching at a larger scale.",
}
| U.S. Federal Regulators receive over one million comment letters each year from businesses, interest groups, and members of the public, all advocating for changes to proposed regulations. These comments are believed to have wide-ranging impacts on public policy. However, measuring the impact of specific comments is challenging because regulators are required to respond to comments but they do not have to specify which comments they are addressing. In this paper, we propose a simple yet effective solution to this problem by using an iterative contrastive method to train a neural model aiming for matching text from public comments to responses written by regulators. We demonstrate that our proposal substantially outperforms a set of selected text-matching baselines on a human-annotated test set. Furthermore, it delivers performance comparable to the most advanced gigantic language model (i.e., GPT-4), and is more cost-effective when handling comment and regulator response matching at a larger scale. | [
"Xing, Linzi",
"Hackinen, Brad",
"Carenini, Giuseppe"
] | Tracing Influence at Scale: A Contrastive Learning Approach to Linking Public Comments and Regulator Responses | nllp-1.26 | 2311.14871 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.1.bib | https://aclanthology.org/2023.nlposs-1.1/ | @inproceedings{miranda-2023-calamancy,
title = "calaman{C}y: A {T}agalog Natural Language Processing Toolkit",
author = "Miranda, Lester James",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.1",
doi = "10.18653/v1/2023.nlposs-1.1",
pages = "1--7",
    abstract = "We introduce calamanCy, an open-source toolkit for constructing natural language processing (NLP) pipelines for Tagalog. It is built on top of spaCy, enabling easy experimentation and integration with other frameworks. calamanCy addresses the development gap by providing a consistent API for building NLP applications and offering general-purpose multitask models with out-of-the-box support for dependency parsing, parts-of-speech (POS) tagging, and named entity recognition (NER). calamanCy aims to accelerate the progress of Tagalog NLP by consolidating disjointed resources in a unified framework. The calamanCy toolkit is available on GitHub: https://github.com/ljvmiranda921/calamanCy.",
}
| We introduce calamanCy, an open-source toolkit for constructing natural language processing (NLP) pipelines for Tagalog. It is built on top of spaCy, enabling easy experimentation and integration with other frameworks. calamanCy addresses the development gap by providing a consistent API for building NLP applications and offering general-purpose multitask models with out-of-the-box support for dependency parsing, parts-of-speech (POS) tagging, and named entity recognition (NER). calamanCy aims to accelerate the progress of Tagalog NLP by consolidating disjointed resources in a unified framework. The calamanCy toolkit is available on GitHub: https://github.com/ljvmiranda921/calamanCy. | [
"Miranda, Lester James"
] | calamanCy: A Tagalog Natural Language Processing Toolkit | nlposs-1.1 | 2311.07171 | [
"https://github.com/ljvmiranda921/calamancy"
] | https://huggingface.co/papers/2311.07171 | 1 | 0 | 0 | 1 | [
"ljvmiranda921/tl_calamancy_trf",
"ljvmiranda921/tl_calamancy_lg",
"ljvmiranda921/tl_calamancy_md"
] | [] | [
"rande/Taga-Care"
] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.2.bib | https://aclanthology.org/2023.nlposs-1.2/ | @inproceedings{gunther-etal-2023-jina,
title = "{J}ina Embeddings: A Novel Set of High-Performance Sentence Embedding Models",
author = {G{\"u}nther, Michael and
Mastrapas, Georgios and
Wang, Bo and
Xiao, Han and
Geuter, Jonathan},
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.2",
doi = "10.18653/v1/2023.nlposs-1.2",
pages = "8--18",
    abstract = "Jina Embeddings constitutes a set of high-performance sentence embedding models adept at translating textual inputs into numerical representations, capturing the semantics of the text. These models excel in applications like dense retrieval and semantic textual similarity. This paper details the development of Jina Embeddings, starting with the creation of high-quality pairwise and triplet datasets. It underlines the crucial role of data cleaning in dataset preparation, offers in-depth insights into the model training process, and concludes with a comprehensive performance evaluation using the Massive Text Embedding Benchmark (MTEB). Furthermore, to increase the model{'}s awareness of grammatical negation, we construct a novel training and evaluation dataset of negated and non-negated statements, which we make publicly available to the community.",
}
| Jina Embeddings constitutes a set of high-performance sentence embedding models adept at translating textual inputs into numerical representations, capturing the semantics of the text. These models excel in applications like dense retrieval and semantic textual similarity. This paper details the development of Jina Embeddings, starting with the creation of high-quality pairwise and triplet datasets. It underlines the crucial role of data cleaning in dataset preparation, offers in-depth insights into the model training process, and concludes with a comprehensive performance evaluation using the Massive Text Embedding Benchmark (MTEB). Furthermore, to increase the model{'}s awareness of grammatical negation, we construct a novel training and evaluation dataset of negated and non-negated statements, which we make publicly available to the community. | [
"G{\\\"u}nther, Michael",
"Mastrapas, Georgios",
"Wang, Bo",
"Xiao, Han",
"Geuter, Jonathan"
] | Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models | nlposs-1.2 | 2307.11224 | [
""
] | https://huggingface.co/papers/2307.11224 | 1 | 5 | 0 | 6 | [
"jinaai/jina-embedding-t-en-v1",
"jinaai/jina-embedding-s-en-v1",
"jinaai/jina-embedding-l-en-v1",
"jinaai/jina-embedding-b-en-v1",
"DecisionOptimizationSystem/DeepFeatEmbeddingLargeContext",
"michaelfeil/ct2fast-jina-embedding-t-en-v1",
"michaelfeil/ct2fast-jina-embedding-l-en-v1",
"michaelfeil/ct2fast-jina-embedding-s-en-v1"
] | [
"jinaai/negation-dataset",
"jinaai/negation-dataset-v2"
] | [
"mteb/leaderboard",
"owaiskha9654/MANUU_Demo_Test",
"herMaster/chat-with-a-pdf",
"rodrigomasini/data-only-mteb-leaderboard",
"Mattral/chat-with-docs",
"Mattral/Organized-Data-Chat",
"Gooly/example-pipeline"
] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.3.bib | https://aclanthology.org/2023.nlposs-1.3/ | @inproceedings{beauchemin-2023-deepparse,
title = "Deepparse : An Extendable, and Fine-Tunable State-Of-The-Art Library for Parsing Multinational Street Addresses",
author = "Beauchemin, David",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.3",
doi = "10.18653/v1/2023.nlposs-1.3",
pages = "19--24",
    abstract = "Segmenting an address into meaningful components, also known as address parsing, is an essential step in many applications from record linkage to geocoding and package delivery. Consequently, a lot of work has been dedicated to developing accurate address parsing techniques, with machine learning and neural network methods leading the state-of-the-art scoreboard. However, most of the work on address parsing has been confined to academic endeavours with little availability of free and easy-to-use open-source solutions. This paper presents Deepparse, a Python open-source, extendable, fine-tunable address parsing solution under LGPL-3.0 licence to parse multinational addresses using state-of-the-art deep learning algorithms and evaluated on over 60 countries. It can parse addresses written in any language and use any address standard. The pre-trained model achieves average 99{\%} parsing accuracies on the countries used for training with no pre-processing nor post-processing needed. Moreover, the library supports fine-tuning with new data to generate a custom address parser.",
}
| Segmenting an address into meaningful components, also known as address parsing, is an essential step in many applications from record linkage to geocoding and package delivery. Consequently, a lot of work has been dedicated to developing accurate address parsing techniques, with machine learning and neural network methods leading the state-of-the-art scoreboard. However, most of the work on address parsing has been confined to academic endeavours with little availability of free and easy-to-use open-source solutions. This paper presents Deepparse, a Python open-source, extendable, fine-tunable address parsing solution under LGPL-3.0 licence to parse multinational addresses using state-of-the-art deep learning algorithms and evaluated on over 60 countries. It can parse addresses written in any language and use any address standard. The pre-trained model achieves average 99{\%} parsing accuracies on the countries used for training with no pre-processing nor post-processing needed. Moreover, the library supports fine-tuning with new data to generate a custom address parser. | [
"Beauchemin, David"
] | Deepparse : An Extendable, and Fine-Tunable State-Of-The-Art Library for Parsing Multinational Street Addresses | nlposs-1.3 | 2311.11846 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.4.bib | https://aclanthology.org/2023.nlposs-1.4/ | @inproceedings{phatthiyaphaibun-etal-2023-pythainlp,
title = "{P}y{T}hai{NLP}: {T}hai Natural Language Processing in Python",
author = "Phatthiyaphaibun, Wannaphong and
Chaovavanich, Korakot and
Polpanumas, Charin and
Suriyawongkul, Arthit and
Lowphansirikul, Lalita and
Chormai, Pattarawat and
Limkonchotiwat, Peerat and
Suntorntip, Thanathip and
Udomcharoenchaikit, Can",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.4",
doi = "10.18653/v1/2023.nlposs-1.4",
pages = "25--36",
abstract = "We present PyThaiNLP, a free and open-source natural language processing (NLP) library for Thai language implemented in Python. It provides a wide range of software, models, and datasets for Thai language. We first provide a brief historical context of tools for Thai language prior to the development of PyThaiNLP. We then outline the functionalities it provided as well as datasets and pre-trained language models. We later summarize its development milestones and discuss our experience during its development. We conclude by demonstrating how industrial and research communities utilize PyThaiNLP in their work. The library is freely available at https://github.com/pythainlp/pythainlp.",
}
| We present PyThaiNLP, a free and open-source natural language processing (NLP) library for Thai language implemented in Python. It provides a wide range of software, models, and datasets for Thai language. We first provide a brief historical context of tools for Thai language prior to the development of PyThaiNLP. We then outline the functionalities it provided as well as datasets and pre-trained language models. We later summarize its development milestones and discuss our experience during its development. We conclude by demonstrating how industrial and research communities utilize PyThaiNLP in their work. The library is freely available at https://github.com/pythainlp/pythainlp. | [
"Phatthiyaphaibun, Wannaphong",
"Chaovavanich, Korakot",
"Polpanumas, Charin",
"Suriyawongkul, Arthit",
"Lowphansirikul, Lalita",
"Chormai, Pattarawat",
"Limkonchotiwat, Peerat",
"Suntorntip, Thanathip",
"Udomcharoenchaikit, Can"
] | PyThaiNLP: Thai Natural Language Processing in Python | nlposs-1.4 | 2312.04649 | [
"https://github.com/PyThaiNLP/pythainlp"
] | https://huggingface.co/papers/2312.04649 | 1 | 0 | 0 | 9 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.5.bib | https://aclanthology.org/2023.nlposs-1.5/ | @inproceedings{stavropoulos-etal-2023-empowering,
title = "Empowering Knowledge Discovery from Scientific Literature: A novel approach to Research Artifact Analysis",
author = "Stavropoulos, Petros and
Lyris, Ioannis and
Manola, Natalia and
Grypari, Ioanna and
Papageorgiou, Haris",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.5",
doi = "10.18653/v1/2023.nlposs-1.5",
pages = "37--53",
abstract = "Knowledge extraction from scientific literature is a major issue, crucial to promoting transparency, reproducibility, and innovation in the research community. In this work, we present a novel approach towards the identification, extraction and analysis of dataset and code/software mentions within scientific literature. We introduce a comprehensive dataset, synthetically generated by ChatGPT and meticulously curated, augmented, and expanded with real snippets of scientific text from full-text publications in Computer Science using a human-in-the-loop process. The dataset contains snippets highlighting mentions of the two research artifact (RA) types: dataset and code/software, along with insightful metadata including their Name, Version, License, URL as well as the intended Usage and Provenance. We also fine-tune a simple Large Language Model (LLM) using Low-Rank Adaptation (LoRA) to transform the Research Artifact Analysis (RAA) into an instruction-based Question Answering (QA) task. Ultimately, we report the improvements in performance on the test set of our dataset when compared to other base LLM models. Our method provides a significant step towards facilitating accurate, effective, and efficient extraction of datasets and software from scientific papers, contributing to the challenges of reproducibility and reusability in scientific research.",
}
| Knowledge extraction from scientific literature is a major issue, crucial to promoting transparency, reproducibility, and innovation in the research community. In this work, we present a novel approach towards the identification, extraction and analysis of dataset and code/software mentions within scientific literature. We introduce a comprehensive dataset, synthetically generated by ChatGPT and meticulously curated, augmented, and expanded with real snippets of scientific text from full-text publications in Computer Science using a human-in-the-loop process. The dataset contains snippets highlighting mentions of the two research artifact (RA) types: dataset and code/software, along with insightful metadata including their Name, Version, License, URL as well as the intended Usage and Provenance. We also fine-tune a simple Large Language Model (LLM) using Low-Rank Adaptation (LoRA) to transform the Research Artifact Analysis (RAA) into an instruction-based Question Answering (QA) task. Ultimately, we report the improvements in performance on the test set of our dataset when compared to other base LLM models. Our method provides a significant step towards facilitating accurate, effective, and efficient extraction of datasets and software from scientific papers, contributing to the challenges of reproducibility and reusability in scientific research. | [
"Stavropoulos, Petros",
"Lyris, Ioannis",
"Manola, Natalia",
"Grypari, Ioanna",
"Papageorgiou, Haris"
] | Empowering Knowledge Discovery from Scientific Literature: A novel approach to Research Artifact Analysis | nlposs-1.5 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.6.bib | https://aclanthology.org/2023.nlposs-1.6/ | @inproceedings{grobol-2023-zelda,
title = "Zelda Rose: a tool for hassle-free training of transformer models",
author = {Grobol, Lo{\"\i}c},
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.6",
doi = "10.18653/v1/2023.nlposs-1.6",
pages = "54--58",
    abstract = "Zelda Rose is a command line interface for pretraining transformer-based models. Its purpose is to enable an easy start for users interested in training these ubiquitous models, but unable or unwilling to engage with more comprehensive {---} but more complex {---} frameworks and the complex interactions between libraries for managing models, datasets and computations. Training a model requires no code on the user{'}s part and produces models directly compatible with the HuggingFace ecosystem, allowing quick and easy distribution and reuse. Particular care is given to lowering the cost of maintainability and future-proofing, by making the code as modular as possible and taking advantage of third-party libraries to limit ad-hoc code to the strict minimum.",
}
| Zelda Rose is a command line interface for pretraining transformer-based models. Its purpose is to enable an easy start for users interested in training these ubiquitous models, but unable or unwilling to engage with more comprehensive {---} but more complex {---} frameworks and the complex interactions between libraries for managing models, datasets and computations. Training a model requires no code on the user{'}s part and produces models directly compatible with the HuggingFace ecosystem, allowing quick and easy distribution and reuse. Particular care is given to lowering the cost of maintainability and future-proofing, by making the code as modular as possible and taking advantage of third-party libraries to limit ad-hoc code to the strict minimum. | [
"Grobol, Lo{\\\"\\i}c"
] | Zelda Rose: a tool for hassle-free training of transformer models | nlposs-1.6 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.7.bib | https://aclanthology.org/2023.nlposs-1.7/ | @inproceedings{anand-etal-2023-gpt4all,
title = "{GPT}4{A}ll: An Ecosystem of Open Source Compressed Language Models",
author = "Anand, Yuvanesh and
Nussbaum, Zach and
Treat, Adam and
Miller, Aaron and
Guo, Richard and
Schmidt, Benjamin and
Duderstadt, Brandon and
Mulyar, Andriy",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.7",
doi = "10.18653/v1/2023.nlposs-1.7",
pages = "59--64",
    abstract = "Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. The accessibility of these models has lagged behind their performance. State-of-the-art LLMs require costly infrastructure; are only accessible via rate-limited, geo-locked, and censored web interfaces; and lack publicly available code and technical reports. In this paper, we tell the story of GPT4All, a popular open source repository that aims to democratize access to LLMs. We outline the technical details of the original GPT4All model family, as well as the evolution of the GPT4All project from a single model into a fully fledged open source ecosystem. It is our hope that this paper acts as both a technical overview of the original GPT4All models as well as a case study on the subsequent growth of the GPT4All open source ecosystem.",
}
| Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. The accessibility of these models has lagged behind their performance. State-of-the-art LLMs require costly infrastructure; are only accessible via rate-limited, geo-locked, and censored web interfaces; and lack publicly available code and technical reports. In this paper, we tell the story of GPT4All, a popular open source repository that aims to democratize access to LLMs. We outline the technical details of the original GPT4All model family, as well as the evolution of the GPT4All project from a single model into a fully fledged open source ecosystem. It is our hope that this paper acts as both a technical overview of the original GPT4All models as well as a case study on the subsequent growth of the GPT4All open source ecosystem. | [
"Anand, Yuvanesh",
"Nussbaum, Zach",
"Treat, Adam",
"Miller, Aaron",
"Guo, Richard",
"Schmidt, Benjamin",
"Duderstadt, Brandon",
"Mulyar, Andriy"
] | GPT4All: An Ecosystem of Open Source Compressed Language Models | nlposs-1.7 | 2311.04931 | [
"https://github.com/nomic-ai/gpt4all"
] | https://huggingface.co/papers/2311.04931 | 3 | 20 | 1 | 9 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.8.bib | https://aclanthology.org/2023.nlposs-1.8/ | @inproceedings{zhu-etal-2023-kani,
title = "Kani: A Lightweight and Highly Hackable Framework for Building Language Model Applications",
author = "Zhu, Andrew and
Dugan, Liam and
Hwang, Alyssa and
Callison-Burch, Chris",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.8",
doi = "10.18653/v1/2023.nlposs-1.8",
pages = "65--77",
abstract = "Language model applications are becoming increasingly popular and complex, often including features like tool usage and retrieval augmentation. However, existing frameworks for such applications are often opinionated, deciding for developers how their prompts ought to be formatted and imposing limitations on customizability and reproducibility. To solve this we present Kani: a lightweight, flexible, and model-agnostic open-source framework for building language model applications. Kani helps developers implement a variety of complex features by supporting the core building blocks of chat interaction: model interfacing, chat management, and robust function calling. All Kani core functions are easily overridable and well documented to empower developers to customize functionality for their own needs. Kani thus serves as a useful tool for researchers, hobbyists, and industry professionals alike to accelerate their development while retaining interoperability and fine-grained control.",
}
| Language model applications are becoming increasingly popular and complex, often including features like tool usage and retrieval augmentation. However, existing frameworks for such applications are often opinionated, deciding for developers how their prompts ought to be formatted and imposing limitations on customizability and reproducibility. To solve this we present Kani: a lightweight, flexible, and model-agnostic open-source framework for building language model applications. Kani helps developers implement a variety of complex features by supporting the core building blocks of chat interaction: model interfacing, chat management, and robust function calling. All Kani core functions are easily overridable and well documented to empower developers to customize functionality for their own needs. Kani thus serves as a useful tool for researchers, hobbyists, and industry professionals alike to accelerate their development while retaining interoperability and fine-grained control. | [
"Zhu, Andrew",
"Dugan, Liam",
"Hwang, Alyssa",
"Callison-Burch, Chris"
] | Kani: A Lightweight and Highly Hackable Framework for Building Language Model Applications | nlposs-1.8 | 2309.05542 | [
"https://github.com/zhudotexe/kani"
] | https://huggingface.co/papers/2309.05542 | 1 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.9.bib | https://aclanthology.org/2023.nlposs-1.9/ | @inproceedings{kashyap-etal-2023-beyond,
title = "Beyond the Repo: A Case Study on Open Source Integration with {GECT}o{R}",
author = "Kashyap, Sanjna and
Xie, Zhaoyang and
Steimel, Kenneth and
Madnani, Nitin",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.9",
doi = "10.18653/v1/2023.nlposs-1.9",
pages = "78--82",
    abstract = "We present a case study describing our efforts to integrate the open source GECToR code and models into our production NLP pipeline that powers many of Educational Testing Service{'}s products and prototypes. The paper{'}s contributions include a discussion of the issues we encountered during integration and our solutions, the overarching lessons we learned about integrating open source projects, and, last but not least, the open source contributions we made as part of the journey.",
}
| We present a case study describing our efforts to integrate the open source GECToR code and models into our production NLP pipeline that powers many of Educational Testing Service{'}s products and prototypes. The paper{'}s contributions include a discussion of the issues we encountered during integration and our solutions, the overarching lessons we learned about integrating open source projects, and, last but not least, the open source contributions we made as part of the journey. | [
"Kashyap, Sanjna",
"Xie, Zhaoyang",
"Steimel, Kenneth",
"Madnani, Nitin"
] | Beyond the Repo: A Case Study on Open Source Integration with GECToR | nlposs-1.9 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.10.bib | https://aclanthology.org/2023.nlposs-1.10/ | @inproceedings{bollmann-etal-2023-two,
title = "Two Decades of the {ACL} {A}nthology: Development, Impact, and Open Challenges",
author = {Bollmann, Marcel and
Schneider, Nathan and
K{\"o}hn, Arne and
Post, Matt},
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.10",
doi = "10.18653/v1/2023.nlposs-1.10",
pages = "83--94",
abstract = "The ACL Anthology is a prime resource for research papers within computational linguistics and natural language processing, while continuing to be an open-source and community-driven project. Since Gildea et al. (2018) reported on its state and planned directions, the Anthology has seen major technical changes. We discuss what led to these changes and how they impact long-term maintainability and community engagement, describe which open-source data and software tools the Anthology currently provides, and provide a survey of literature that has used the Anthology as a main data source.",
}
| The ACL Anthology is a prime resource for research papers within computational linguistics and natural language processing, while continuing to be an open-source and community-driven project. Since Gildea et al. (2018) reported on its state and planned directions, the Anthology has seen major technical changes. We discuss what led to these changes and how they impact long-term maintainability and community engagement, describe which open-source data and software tools the Anthology currently provides, and provide a survey of literature that has used the Anthology as a main data source. | [
"Bollmann, Marcel",
"Schneider, Nathan",
"K{\\\"o}hn, Arne",
"Post, Matt"
] | Two Decades of the ACL Anthology: Development, Impact, and Open Challenges | nlposs-1.10 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.11.bib | https://aclanthology.org/2023.nlposs-1.11/ | @inproceedings{nawrot-2023-nanot5,
title = "nano{T}5: Fast {\&} Simple Pre-training and Fine-tuning of T5 Models with Limited Resources",
author = "Nawrot, Piotr",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.11",
doi = "10.18653/v1/2023.nlposs-1.11",
pages = "95--101",
abstract = "State-of-the-art language models like T5 have revolutionized the NLP landscape, but their computational demands hinder a large portion of the research community. To address this challenge, we present nanoT5, a specially-optimized PyTorch framework for efficient pre-training and fine-tuning of T5 models. Drawing on insights from optimizer differences and prioritizing efficiency, nanoT5 allows a T5-Base model to be pre-trained on a single GPU in just 16 hours, without any loss in performance. With the introduction of this open-source framework, we hope to widen the accessibility to language modelling research and cater to the community{'}s demand for more user-friendly T5 (Encoder-Decoder) implementations. We make our contributions, including configurations, codebase, pre-training insights, and pre-trained models, available to the public.",
}
| State-of-the-art language models like T5 have revolutionized the NLP landscape, but their computational demands hinder a large portion of the research community. To address this challenge, we present nanoT5, a specially-optimized PyTorch framework for efficient pre-training and fine-tuning of T5 models. Drawing on insights from optimizer differences and prioritizing efficiency, nanoT5 allows a T5-Base model to be pre-trained on a single GPU in just 16 hours, without any loss in performance. With the introduction of this open-source framework, we hope to widen the accessibility to language modelling research and cater to the community{'}s demand for more user-friendly T5 (Encoder-Decoder) implementations. We make our contributions, including configurations, codebase, pre-training insights, and pre-trained models, available to the public. | [
"Nawrot, Piotr"
] | nanoT5: Fast & Simple Pre-training and Fine-tuning of T5 Models with Limited Resources | nlposs-1.11 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.12.bib | https://aclanthology.org/2023.nlposs-1.12/ | @inproceedings{giorgi-etal-2023-aware,
title = "{AWARE}-{TEXT}: An Android Package for Mobile Phone Based Text Collection and On-Device Processing",
author = "Giorgi, Salvatore and
Sherman, Garrick and
Bellew, Douglas and
Guntuku, Sharath Chandra and
Ungar, Lyle and
Curtis, Brenda",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.12",
doi = "10.18653/v1/2023.nlposs-1.12",
pages = "102--109",
abstract = "We present the AWARE-text package, an open-source software package for collecting textual data on Android mobile devices. This package allows for collecting short message service (SMS or text messages) and character-level keystrokes. In addition to collecting this raw data, AWARE-text is designed for on device lexicon processing, which allows one to collect standard textual-based measures (e.g., sentiment, emotions, and topics) without collecting the underlying raw textual data. This is especially important in the case of mobile phones, which can contain sensitive and identifying information. Thus, the AWARE-text package allows for privacy protection while simultaneously collecting textual information at multiple levels of granularity: person (lifetime history of SMS), conversation (both sides of SMS conversations and group chats), message (single SMS), and character (individual keystrokes entered across applications). Finally, the unique processing environment of mobile devices opens up several methodological and privacy issues, which we discuss.",
}
| We present the AWARE-text package, an open-source software package for collecting textual data on Android mobile devices. This package allows for collecting short message service (SMS or text messages) and character-level keystrokes. In addition to collecting this raw data, AWARE-text is designed for on device lexicon processing, which allows one to collect standard textual-based measures (e.g., sentiment, emotions, and topics) without collecting the underlying raw textual data. This is especially important in the case of mobile phones, which can contain sensitive and identifying information. Thus, the AWARE-text package allows for privacy protection while simultaneously collecting textual information at multiple levels of granularity: person (lifetime history of SMS), conversation (both sides of SMS conversations and group chats), message (single SMS), and character (individual keystrokes entered across applications). Finally, the unique processing environment of mobile devices opens up several methodological and privacy issues, which we discuss. | [
"Giorgi, Salvatore",
"Sherman, Garrick",
"Bellew, Douglas",
"Guntuku, Sharath Chandra",
"Ungar, Lyle",
"Curtis, Brenda"
] | AWARE-TEXT: An Android Package for Mobile Phone Based Text Collection and On-Device Processing | nlposs-1.12 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.13.bib | https://aclanthology.org/2023.nlposs-1.13/ | @inproceedings{post-etal-2023-sotastream,
title = "{SOTASTREAM}: A Streaming Approach to Machine Translation Training",
author = "Post, Matt and
Gowda, Thamme and
Grundkiewicz, Roman and
Khayrallah, Huda and
Jain, Rohit and
Junczys-Dowmunt, Marcin",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.13",
doi = "10.18653/v1/2023.nlposs-1.13",
pages = "110--119",
abstract = "Many machine translation toolkits make use of a data preparation step wherein raw data is transformed into a tensor format that can be used directly by the trainer. This preparation step is increasingly at odds with modern research and development practices because this process produces a static, unchangeable version of the training data, making common training-time needs difficult (e.g., subword sampling), time-consuming (preprocessing with large data can take days), expensive (e.g., disk space), and cumbersome (managing experiment combinatorics). We propose an alternative approach that separates the generation of data from the consumption of that data. In this approach, there is no separate pre-processing step; data generation produces an infinite stream of permutations of the raw training data, which the trainer tensorizes and batches as it is consumed. Additionally, this data stream can be manipulated by a set of user-definable operators that provide on-the-fly modifications, such as data normalization, augmentation or filtering. We release an open-source toolkit, SOTASTREAM, that implements this approach: https://github.com/marian-nmt/sotastream. We show that it cuts training time, adds flexibility, reduces experiment management complexity, and reduces disk space, all without affecting the accuracy of the trained models.",
}
| Many machine translation toolkits make use of a data preparation step wherein raw data is transformed into a tensor format that can be used directly by the trainer. This preparation step is increasingly at odds with modern research and development practices because this process produces a static, unchangeable version of the training data, making common training-time needs difficult (e.g., subword sampling), time-consuming (preprocessing with large data can take days), expensive (e.g., disk space), and cumbersome (managing experiment combinatorics). We propose an alternative approach that separates the generation of data from the consumption of that data. In this approach, there is no separate pre-processing step; data generation produces an infinite stream of permutations of the raw training data, which the trainer tensorizes and batches as it is consumed. Additionally, this data stream can be manipulated by a set of user-definable operators that provide on-the-fly modifications, such as data normalization, augmentation or filtering. We release an open-source toolkit, SOTASTREAM, that implements this approach: https://github.com/marian-nmt/sotastream. We show that it cuts training time, adds flexibility, reduces experiment management complexity, and reduces disk space, all without affecting the accuracy of the trained models. | [
"Post, Matt",
"Gowda, Thamme",
"Grundkiewicz, Roman",
"Khayrallah, Huda",
"Jain, Rohit",
"Junczys-Dowmunt, Marcin"
] | SOTASTREAM: A Streaming Approach to Machine Translation Training | nlposs-1.13 | 2308.07489 | [
"https://github.com/marian-nmt/sotastream"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.14.bib | https://aclanthology.org/2023.nlposs-1.14/ | @inproceedings{singh-etal-2023-open,
title = "An Open-source Web-based Application for Development of Resources and Technologies in Underresourced Languages",
author = "Singh, Siddharth and
Ratan, Shyam and
Mathur, Neerav and
Kumar, Ritesh",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.14",
doi = "10.18653/v1/2023.nlposs-1.14",
pages = "120--129",
abstract = "The paper discusses the Linguistic Field Data Management and Analysis System (LiFE), a new open-source, web-based software that systematises storage, management, annotation, analysis and sharing of linguistic data gathered from the field as well as that crawled from various sources on the web such as YouTube, Twitter, Facebook, Instagram, Blog, Newspaper, Wikipedia, etc. The app supports two broad workflows - (a) the field linguists{'} workflow in which data is collected directly from the speakers in the field and analysed further to produce grammatical descriptions, lexicons, educational materials and possibly language technologies; (b) the computational linguists{'} workflow in which data collected from the web using automated crawlers or digitised using manual or semi-automatic means, annotated for various tasks and then used for developing different kinds of language technologies. In addition to supporting these workflows, the app provides some additional features as well - (a) it allows multiple users to collaboratively work on the same project via its granular access control and sharing option; (b) it allows the data to be exported to various formats including CSV, TSV, JSON, XLSX, , PDF, Textgrid, RDF (different serialisation formats) etc as appropriate; (c) it allows data import from various formats viz. LIFT XML, XLSX, JSON, CSV, TSV, Textgrid, etc; (d) it allows users to start working in the app at any stage of their work by giving the option to either create a new project from scratch or derive a new project from an existing project in the app.The app is currently available for use and testing on our server (http://life.unreal-tece.co.in/) and its source code has been released under AGPL license on our GitHub repository (https://github.com/unrealtecellp/life). It is licensed under separate, specific conditions for commercial usage.",
}
| The paper discusses the Linguistic Field Data Management and Analysis System (LiFE), a new open-source, web-based software that systematises storage, management, annotation, analysis and sharing of linguistic data gathered from the field as well as that crawled from various sources on the web such as YouTube, Twitter, Facebook, Instagram, Blog, Newspaper, Wikipedia, etc. The app supports two broad workflows - (a) the field linguists{'} workflow in which data is collected directly from the speakers in the field and analysed further to produce grammatical descriptions, lexicons, educational materials and possibly language technologies; (b) the computational linguists{'} workflow in which data collected from the web using automated crawlers or digitised using manual or semi-automatic means, annotated for various tasks and then used for developing different kinds of language technologies. In addition to supporting these workflows, the app provides some additional features as well - (a) it allows multiple users to collaboratively work on the same project via its granular access control and sharing option; (b) it allows the data to be exported to various formats including CSV, TSV, JSON, XLSX, , PDF, Textgrid, RDF (different serialisation formats) etc as appropriate; (c) it allows data import from various formats viz. LIFT XML, XLSX, JSON, CSV, TSV, Textgrid, etc; (d) it allows users to start working in the app at any stage of their work by giving the option to either create a new project from scratch or derive a new project from an existing project in the app.The app is currently available for use and testing on our server (http://life.unreal-tece.co.in/) and its source code has been released under AGPL license on our GitHub repository (https://github.com/unrealtecellp/life). It is licensed under separate, specific conditions for commercial usage. | [
"Singh, Siddharth",
"Ratan, Shyam",
"Mathur, Neerav",
"Kumar, Ritesh"
] | An Open-source Web-based Application for Development of Resources and Technologies in Underresourced Languages | nlposs-1.14 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.15.bib | https://aclanthology.org/2023.nlposs-1.15/ | @inproceedings{jovanovic-ross-2023-rumour,
title = "Rumour Detection in the Wild: A Browser Extension for {T}witter",
author = {Jovanovic, Andrej and
Ross, Bj{\"o}rn},
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.15",
doi = "10.18653/v1/2023.nlposs-1.15",
pages = "130--140",
abstract = "Rumour detection, particularly on social media, has gained popularity in recent years. The machine learning community has made significant contributions in investigating automatic methods to detect rumours on such platforms. However, these state-of-the-art (SoTA) models are often deployed by social media companies; ordinary end-users cannot leverage the solutions in the literature for their own rumour detection. To address this issue, we put forward a novel browser extension that allows these users to perform rumour detection on Twitter. Particularly, we leverage the performance from SoTA architectures, which has not been done previously. Initial results from a user study confirm that this browser extension provides benefit. Additionally, we examine the performance of our browser extension{'}s rumour detection model in a simulated deployment environment. Our results show that additional infrastructure for the browser extension is required to ensure its usability when deployed as a live service for Twitter users at scale.",
}
| Rumour detection, particularly on social media, has gained popularity in recent years. The machine learning community has made significant contributions in investigating automatic methods to detect rumours on such platforms. However, these state-of-the-art (SoTA) models are often deployed by social media companies; ordinary end-users cannot leverage the solutions in the literature for their own rumour detection. To address this issue, we put forward a novel browser extension that allows these users to perform rumour detection on Twitter. Particularly, we leverage the performance from SoTA architectures, which has not been done previously. Initial results from a user study confirm that this browser extension provides benefit. Additionally, we examine the performance of our browser extension{'}s rumour detection model in a simulated deployment environment. Our results show that additional infrastructure for the browser extension is required to ensure its usability when deployed as a live service for Twitter users at scale. | [
"Jovanovic, Andrej",
"Ross, Bj{\\\"o}rn"
] | Rumour Detection in the Wild: A Browser Extension for Twitter | nlposs-1.15 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.16.bib | https://aclanthology.org/2023.nlposs-1.16/ | @inproceedings{landes-etal-2023-deepzensols,
title = "{D}eep{Z}ensols: A Deep Learning Natural Language Processing Framework for Experimentation and Reproducibility",
author = "Landes, Paul and
Di Eugenio, Barbara and
Caragea, Cornelia",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.16",
doi = "10.18653/v1/2023.nlposs-1.16",
pages = "141--146",
abstract = "Given the criticality and difficulty of reproducing machine learning experiments, there have been significant efforts in reducing the variance of these results. The ability to consistently reproduce results effectively strengthens the underlying hypothesis of the work and should be regarded as important as the novel aspect of the research itself. The contribution of this work is an open source framework that has the following characteristics: a) facilitates reproducing consistent results, b) allows hot-swapping features and embeddings without further processing and re-vectorizing the dataset, c) provides a means of easily creating, training and evaluating natural language processing deep learning models with little to no code changes, and d) is freely available to the community.",
}
| Given the criticality and difficulty of reproducing machine learning experiments, there have been significant efforts in reducing the variance of these results. The ability to consistently reproduce results effectively strengthens the underlying hypothesis of the work and should be regarded as important as the novel aspect of the research itself. The contribution of this work is an open source framework that has the following characteristics: a) facilitates reproducing consistent results, b) allows hot-swapping features and embeddings without further processing and re-vectorizing the dataset, c) provides a means of easily creating, training and evaluating natural language processing deep learning models with little to no code changes, and d) is freely available to the community. | [
"Landes, Paul",
"Di Eugenio, Barbara",
"Caragea, Cornelia"
] | DeepZensols: A Deep Learning Natural Language Processing Framework for Experimentation and Reproducibility | nlposs-1.16 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.17.bib | https://aclanthology.org/2023.nlposs-1.17/ | @inproceedings{lignos-etal-2023-improving,
title = "Improving {NER} Research Workflows with {S}eq{S}core",
author = "Lignos, Constantine and
Kruse, Maya and
Rueda, Andrew",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.17",
doi = "10.18653/v1/2023.nlposs-1.17",
pages = "147--152",
    abstract = "We describe the features of SeqScore, an MIT-licensed Python toolkit for working with named entity recognition (NER) data. While SeqScore began as a tool for NER scoring, it has been expanded to help with the full lifecycle of working with NER data: validating annotation, providing at-a-glance and detailed summaries of the data, modifying annotation to support experiments, scoring system output, and aiding with error analysis. SeqScore is released via PyPI (https://pypi.org/project/seqscore/) and development occurs on GitHub (https://github.com/bltlab/seqscore).",
}
| We describe the features of SeqScore, an MIT-licensed Python toolkit for working with named entity recognition (NER) data. While SeqScore began as a tool for NER scoring, it has been expanded to help with the full lifecycle of working with NER data: validating annotation, providing at-a-glance and detailed summaries of the data, modifying annotation to support experiments, scoring system output, and aiding with error analysis. SeqScore is released via PyPI (https://pypi.org/project/seqscore/) and development occurs on GitHub (https://github.com/bltlab/seqscore). | [
"Lignos, Constantine",
"Kruse, Maya",
"Rueda, Andrew"
] | Improving NER Research Workflows with SeqScore | nlposs-1.17 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.18.bib | https://aclanthology.org/2023.nlposs-1.18/ | @inproceedings{matsubara-2023-torchdistill,
title = "torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on {NLP}",
author = "Matsubara, Yoshitomo",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.18",
doi = "10.18653/v1/2023.nlposs-1.18",
pages = "153--164",
abstract = "Reproducibility in scientific work has been becoming increasingly important in research communities such as machine learning, natural language processing, and computer vision communities due to the rapid development of the research domains supported by recent advances in deep learning. In this work, we present a significantly upgraded version of torchdistill, a modular-driven coding-free deep learning framework significantly upgraded from the initial release, which supports only image classification and object detection tasks for reproducible knowledge distillation experiments. To demonstrate that the upgraded framework can support more tasks with third-party libraries, we reproduce the GLUE benchmark results of BERT models using a script based on the upgraded torchdistill, harmonizing with various Hugging Face libraries. All the 27 fine-tuned BERT models and configurations to reproduce the results are published at Hugging Face, and the model weights have already been widely used in research communities. We also reimplement popular small-sized models and new knowledge distillation methods and perform additional experiments for computer vision tasks.",
}
| Reproducibility in scientific work has been becoming increasingly important in research communities such as machine learning, natural language processing, and computer vision communities due to the rapid development of the research domains supported by recent advances in deep learning. In this work, we present a significantly upgraded version of torchdistill, a modular-driven coding-free deep learning framework significantly upgraded from the initial release, which supports only image classification and object detection tasks for reproducible knowledge distillation experiments. To demonstrate that the upgraded framework can support more tasks with third-party libraries, we reproduce the GLUE benchmark results of BERT models using a script based on the upgraded torchdistill, harmonizing with various Hugging Face libraries. All the 27 fine-tuned BERT models and configurations to reproduce the results are published at Hugging Face, and the model weights have already been widely used in research communities. We also reimplement popular small-sized models and new knowledge distillation methods and perform additional experiments for computer vision tasks. | [
"Matsubara, Yoshitomo"
] | torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP | nlposs-1.18 | 2310.17644 | [
"https://github.com/yoshitomo-matsubara/torchdistill"
] | https://huggingface.co/papers/2310.17644 | 1 | 0 | 0 | 1 | [
"yoshitomo-matsubara/bert-base-uncased-rte",
"yoshitomo-matsubara/bert-large-uncased-cola",
"yoshitomo-matsubara/bert-large-uncased-mnli",
"yoshitomo-matsubara/bert-large-uncased-sst2",
"yoshitomo-matsubara/bert-large-uncased-stsb",
"yoshitomo-matsubara/bert-large-uncased-wnli",
"yoshitomo-matsubara/bert-base-uncased-mrpc_from_bert-large-uncased-mrpc",
"yoshitomo-matsubara/bert-base-uncased-sst2_from_bert-large-uncased-sst2",
"yoshitomo-matsubara/bert-base-uncased-mrpc",
"yoshitomo-matsubara/bert-base-uncased-qqp_from_bert-large-uncased-qqp",
"yoshitomo-matsubara/bert-base-uncased-sst2",
"yoshitomo-matsubara/bert-base-uncased-mnli_from_bert-large-uncased-mnli",
"yoshitomo-matsubara/bert-large-uncased-rte",
"yoshitomo-matsubara/bert-base-uncased-qqp",
"yoshitomo-matsubara/bert-large-uncased-qqp",
"yoshitomo-matsubara/bert-base-uncased-rte_from_bert-large-uncased-rte",
"yoshitomo-matsubara/bert-base-uncased-cola_from_bert-large-uncased-cola",
"yoshitomo-matsubara/bert-base-uncased-wnli_from_bert-large-uncased-wnli",
"yoshitomo-matsubara/bert-large-uncased-mrpc",
"yoshitomo-matsubara/bert-base-uncased-qnli",
"yoshitomo-matsubara/bert-base-uncased-cola",
"yoshitomo-matsubara/bert-base-uncased-qnli_from_bert-large-uncased-qnli",
"yoshitomo-matsubara/bert-large-uncased-qnli",
"yoshitomo-matsubara/bert-base-uncased-mnli",
"yoshitomo-matsubara/bert-base-uncased-stsb_from_bert-large-uncased-stsb",
"yoshitomo-matsubara/bert-base-uncased-wnli",
"yoshitomo-matsubara/bert-base-uncased-stsb"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.19.bib | https://aclanthology.org/2023.nlposs-1.19/ | @inproceedings{miglani-etal-2023-using,
title = "Using Captum to Explain Generative Language Models",
author = "Miglani, Vivek and
Yang, Aobo and
Markosyan, Aram and
Garcia-Olano, Diego and
Kokhlikyan, Narine",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.19",
doi = "10.18653/v1/2023.nlposs-1.19",
pages = "165--173",
abstract = "Captum is a comprehensive library for model explainability in PyTorch, offering a range of methods from the interpretability literature to enhance users{'} understanding of PyTorch models. In this paper, we introduce new features in Captum that are specifically designed to analyze the behavior of generative language models. We provide an overview of the available functionalities and example applications of their potential for understanding learned associations within generative language models.",
}
| Captum is a comprehensive library for model explainability in PyTorch, offering a range of methods from the interpretability literature to enhance users{'} understanding of PyTorch models. In this paper, we introduce new features in Captum that are specifically designed to analyze the behavior of generative language models. We provide an overview of the available functionalities and example applications of their potential for understanding learned associations within generative language models. | [
"Miglani, Vivek",
"Yang, Aobo",
"Markosyan, Aram",
"Garcia-Olano, Diego",
"Kokhlikyan, Narine"
] | Using Captum to Explain Generative Language Models | nlposs-1.19 | 2312.05491 | [
""
] | https://huggingface.co/papers/2312.05491 | 4 | 3 | 1 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.20.bib | https://aclanthology.org/2023.nlposs-1.20/ | @inproceedings{stollenwerk-2023-nerblackbox,
title = "nerblackbox: A High-level Library for Named Entity Recognition in Python",
author = "Stollenwerk, Felix",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.20",
doi = "10.18653/v1/2023.nlposs-1.20",
pages = "174--178",
abstract = "We present **nerblackbox**, a python library to facilitate the use of state-of-the-art transformer-based models for named entity recognition. It provides simple-to-use yet powerful methods to access data and models from a wide range of sources, for fully automated model training and evaluation as well as versatile model inference. While many technical challenges are solved and hidden from the user by default, **nerblackbox** also offers fine-grained control and a rich set of customizable features. It is thus targeted both at application-oriented developers as well as machine learning experts and researchers.",
}
| We present **nerblackbox**, a python library to facilitate the use of state-of-the-art transformer-based models for named entity recognition. It provides simple-to-use yet powerful methods to access data and models from a wide range of sources, for fully automated model training and evaluation as well as versatile model inference. While many technical challenges are solved and hidden from the user by default, **nerblackbox** also offers fine-grained control and a rich set of customizable features. It is thus targeted both at application-oriented developers as well as machine learning experts and researchers. | [
"Stollenwerk, Felix"
] | nerblackbox: A High-level Library for Named Entity Recognition in Python | nlposs-1.20 | 2312.04306 | [
"https://github.com/flxst/nerblackbox"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.21.bib | https://aclanthology.org/2023.nlposs-1.21/ | @inproceedings{hokamp-etal-2023-news,
title = "News Signals: An {NLP} Library for Text and Time Series",
author = "Hokamp, Chris and
Ghalandari, Demian and
Ghaffari, Parsa",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.21",
doi = "10.18653/v1/2023.nlposs-1.21",
pages = "179--189",
abstract = "We present an open-source Python library for building and using datasets where inputs are clusters of textual data, and outputs are sequences of real values representing one or more timeseries signals. The news-signals library supports diverse data science and NLP problem settings related to the prediction of time series behaviour using textual data feeds. For example, in the news domain, inputs are document clusters corresponding to daily news articles about a particular entity, and targets are explicitly associated real-valued timeseries: the volume of news about a particular person or company, or the number of pageviews of specific Wikimedia pages. Despite many industry and research usecases for this class of problem settings, to the best of our knowledge, News Signals is the only open-source library designed specifically to facilitate data science and research settings with natural language inputs and timeseries targets. In addition to the core codebase for building and interacting with datasets, we also conduct a suite of experiments using several popular Machine Learning libraries, which are used to establish baselines for timeseries anomaly prediction using textual inputs.",
}
| We present an open-source Python library for building and using datasets where inputs are clusters of textual data, and outputs are sequences of real values representing one or more timeseries signals. The news-signals library supports diverse data science and NLP problem settings related to the prediction of time series behaviour using textual data feeds. For example, in the news domain, inputs are document clusters corresponding to daily news articles about a particular entity, and targets are explicitly associated real-valued timeseries: the volume of news about a particular person or company, or the number of pageviews of specific Wikimedia pages. Despite many industry and research usecases for this class of problem settings, to the best of our knowledge, News Signals is the only open-source library designed specifically to facilitate data science and research settings with natural language inputs and timeseries targets. In addition to the core codebase for building and interacting with datasets, we also conduct a suite of experiments using several popular Machine Learning libraries, which are used to establish baselines for timeseries anomaly prediction using textual inputs. | [
"Hokamp, Chris",
"Ghalandari, Demian",
"Ghaffari, Parsa"
] | News Signals: An NLP Library for Text and Time Series | nlposs-1.21 | 2312.11399 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.22.bib | https://aclanthology.org/2023.nlposs-1.22/ | @inproceedings{mishra-diesner-2023-pytail,
title = "{P}y{TAIL}: An Open Source Tool for Interactive and Incremental Learning of {NLP} Models with Human in the Loop for Online Data",
author = "Mishra, Shubhanshu and
Diesner, Jana",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.22",
doi = "10.18653/v1/2023.nlposs-1.22",
pages = "190--198",
abstract = "Online data streams make training machine learning models hard because of distribution shift and new patterns emerging over time. For natural language processing (NLP) tasks that utilize a collection of features based on lexicons and rules, it is important to adapt these features to the changing data. To address this challenge we introduce PyTAIL, a python library, which allows a human in the loop approach to actively train NLP models. PyTAIL enhances generic active learning, which only suggests new instances to label by also suggesting new features like rules and lexicons to label. Furthermore, PyTAIL is flexible enough for users to accept, reject, or update rules and lexicons as the model is being trained. Finally, we simulate the performance of PyTAIL on existing social media benchmark datasets for text classification. We compare various active learning strategies on these benchmarks. The model closes the gap with as few as 10{\%} of the training data. Finally, we also highlight the importance of tracking evaluation metric on remaining data (which is not yet merged with active learning) alongside the test dataset. This highlights the effectiveness of the model in accurately annotating the remaining dataset, which is especially suitable for batch processing of large unlabelled corpora. PyTAIL will be open sourced and available at https://github.com/socialmediaie/pytail.",
}
| Online data streams make training machine learning models hard because of distribution shift and new patterns emerging over time. For natural language processing (NLP) tasks that utilize a collection of features based on lexicons and rules, it is important to adapt these features to the changing data. To address this challenge we introduce PyTAIL, a python library, which allows a human in the loop approach to actively train NLP models. PyTAIL enhances generic active learning, which only suggests new instances to label by also suggesting new features like rules and lexicons to label. Furthermore, PyTAIL is flexible enough for users to accept, reject, or update rules and lexicons as the model is being trained. Finally, we simulate the performance of PyTAIL on existing social media benchmark datasets for text classification. We compare various active learning strategies on these benchmarks. The model closes the gap with as few as 10{\%} of the training data. Finally, we also highlight the importance of tracking evaluation metric on remaining data (which is not yet merged with active learning) alongside the test dataset. This highlights the effectiveness of the model in accurately annotating the remaining dataset, which is especially suitable for batch processing of large unlabelled corpora. PyTAIL will be open sourced and available at https://github.com/socialmediaie/pytail. | [
"Mishra, Shubhanshu",
"Diesner, Jana"
] | PyTAIL: An Open Source Tool for Interactive and Incremental Learning of NLP Models with Human in the Loop for Online Data | nlposs-1.22 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.23.bib | https://aclanthology.org/2023.nlposs-1.23/ | @inproceedings{terdalkar-bhattacharya-2023-antarlekhaka,
title = "Antarlekhaka: A Comprehensive Tool for Multi-task Natural Language Annotation",
author = "Terdalkar, Hrishikesh and
Bhattacharya, Arnab",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.23",
doi = "10.18653/v1/2023.nlposs-1.23",
pages = "199--211",
abstract = "One of the primary obstacles in the advancement of Natural Language Processing (NLP) technologies for low-resource languages is the lack of annotated datasets for training and testing machine learning models. In this paper, we present \textit{Antarlekhaka}, a tool for manual annotation of a comprehensive set of tasks relevant to NLP. The tool is Unicode-compatible, language-agnostic, Web-deployable and supports distributed annotation by multiple simultaneous annotators. The system sports user-friendly interfaces for 8 categories of annotation tasks. These, in turn, enable the annotation of a considerably larger set of NLP tasks. The task categories include two linguistic tasks not handled by any other tool, namely, sentence boundary detection and deciding canonical word order, which are important tasks for text that is in the form of poetry. We propose the idea of \textit{sequential annotation} based on small text units, where an annotator performs several tasks related to a single text unit before proceeding to the next unit. The research applications of the proposed mode of multi-task annotation are also discussed. Antarlekhaka outperforms other annotation tools in objective evaluation. It has been also used for two real-life annotation tasks on two different languages, namely, Sanskrit and Bengali. The tool is available at \url{https://github.com/Antarlekhaka/code}",
}
| One of the primary obstacles in the advancement of Natural Language Processing (NLP) technologies for low-resource languages is the lack of annotated datasets for training and testing machine learning models. In this paper, we present \textit{Antarlekhaka}, a tool for manual annotation of a comprehensive set of tasks relevant to NLP. The tool is Unicode-compatible, language-agnostic, Web-deployable and supports distributed annotation by multiple simultaneous annotators. The system sports user-friendly interfaces for 8 categories of annotation tasks. These, in turn, enable the annotation of a considerably larger set of NLP tasks. The task categories include two linguistic tasks not handled by any other tool, namely, sentence boundary detection and deciding canonical word order, which are important tasks for text that is in the form of poetry. We propose the idea of \textit{sequential annotation} based on small text units, where an annotator performs several tasks related to a single text unit before proceeding to the next unit. The research applications of the proposed mode of multi-task annotation are also discussed. Antarlekhaka outperforms other annotation tools in objective evaluation. It has been also used for two real-life annotation tasks on two different languages, namely, Sanskrit and Bengali. The tool is available at \url{https://github.com/Antarlekhaka/code} | [
"Terdalkar, Hrishikesh",
"Bhattacharya, Arnab"
] | Antarlekhaka: A Comprehensive Tool for Multi-task Natural Language Annotation | nlposs-1.23 | 2310.07826 | [
"https://github.com/Antarlekhaka/code"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.24.bib | https://aclanthology.org/2023.nlposs-1.24/ | @inproceedings{bang-2023-gptcache,
title = "{GPTC}ache: An Open-Source Semantic Cache for {LLM} Applications Enabling Faster Answers and Cost Savings",
author = "Bang, Fu",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.24",
doi = "10.18653/v1/2023.nlposs-1.24",
pages = "212--218",
abstract = "The rise of ChatGPT1 has led to the development of artificial intelligence (AI) applications, particularly those that rely on large language models (LLMs). However, recalling LLM APIs can be expensive, and the response speed may slow down during LLMs{'} peak times, causing frustration among developers. Potential solutions to this problem include using better LLM models or investing in more computing resources. However, these options may increase product development costs and decrease development speed. GPTCache2 is an open-source semantic cache that stores LLM responses to address this issue. When integrating an AI application with GPTCache, user queries are first sent to GPTCache for a response before being sent to LLMs like ChatGPT. If GPTCache has the answer to a query, it quickly returns the answer to the user without having to query the LLM. This approach saves costs on API recalls and makes response times much faster. For instance, integrating GPTCache with the GPT service offered by OpenAI can increase response speed 2-10 times when the cache is hit. Moreover, network fluctuations will not affect GPTCache{'}s response time, making it highly stable. This paper presents GPTCache and its architecture, how it functions and performs, and the use cases for which it is most advantageous.",
}
| The rise of ChatGPT1 has led to the development of artificial intelligence (AI) applications, particularly those that rely on large language models (LLMs). However, recalling LLM APIs can be expensive, and the response speed may slow down during LLMs{'} peak times, causing frustration among developers. Potential solutions to this problem include using better LLM models or investing in more computing resources. However, these options may increase product development costs and decrease development speed. GPTCache2 is an open-source semantic cache that stores LLM responses to address this issue. When integrating an AI application with GPTCache, user queries are first sent to GPTCache for a response before being sent to LLMs like ChatGPT. If GPTCache has the answer to a query, it quickly returns the answer to the user without having to query the LLM. This approach saves costs on API recalls and makes response times much faster. For instance, integrating GPTCache with the GPT service offered by OpenAI can increase response speed 2-10 times when the cache is hit. Moreover, network fluctuations will not affect GPTCache{'}s response time, making it highly stable. This paper presents GPTCache and its architecture, how it functions and performs, and the use cases for which it is most advantageous. | [
"Bang, Fu"
] | GPTCache: An Open-Source Semantic Cache for LLM Applications Enabling Faster Answers and Cost Savings | nlposs-1.24 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.25.bib | https://aclanthology.org/2023.nlposs-1.25/ | @inproceedings{manh-etal-2023-vault,
title = "The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation",
author = "Manh, Dung Nguyen and
Hai, Nam Le and
Dau, Anh T. V. and
Nguyen, Anh Minh and
Nghiem, Khanh and
Guo, Jin and
Bui, Nghi D. Q.",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.25",
doi = "10.18653/v1/2023.nlposs-1.25",
pages = "219--244",
}
| No abstract found | [
"Manh, Dung Nguyen",
"Hai, Nam Le",
"Dau, Anh T. V.",
"Nguyen, Anh Minh",
"Nghiem, Khanh",
"Guo, Jin",
"Bui, Nghi D. Q."
] | The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation | nlposs-1.25 | 2305.06156 | [
"https://github.com/fsoft-ai4code/thevault"
] | https://huggingface.co/papers/2305.06156 | 1 | 1 | 0 | 7 | [
"Fsoft-AIC/Codebert-docstring-inconsistency"
] | [
"Fsoft-AIC/the-vault-function",
"Fsoft-AIC/the-vault-inline",
"Fsoft-AIC/the-vault-class"
] | [
"namnh113/Code_Summarization",
"nam194/Code_Summarization"
] | 1 | Poster |
https://aclanthology.org/2023.nlposs-1.26.bib | https://aclanthology.org/2023.nlposs-1.26/ | @inproceedings{tjhi-etal-2023-sea,
title = "{SEA}-{LION} ({S}outheast {A}sian Languages In One Network): A Family of {S}outheast {A}sian Language Models",
author = "Ong, David and
Limkonchotiwat, Peerat",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.26",
doi = "10.18653/v1/2023.nlposs-1.26",
pages = "245--245",
}
| No abstract found | [
"Ong, David",
"Limkonchotiwat, Peerat"
] | SEA-LION (Southeast Asian Languages In One Network): A Family of Southeast Asian Language Models | nlposs-1.26 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.27.bib | https://aclanthology.org/2023.nlposs-1.27/ | @inproceedings{castricato-2023-trlx,
title = "trl{X}: A Framework for Large Scale Open Source {RLHF}",
author = "Castricato, Louis",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.27",
doi = "10.18653/v1/2023.nlposs-1.27",
pages = "246--246",
abstract = "Reinforcement learning from human feedback (RLHF) utilizes human feedback to better align large language models with human preferences via online optimization against a learned reward model. Current RLHF paradigms rely on Proximal Policy Optimization (PPO), which quickly becomes a challenge to implement and scale up to large architectures. To address this difficulty we created the trlX library as a feature-complete open-source framework for RLHF fine-tuning of models up to and exceeding 70 billion parameters. We implemented support for multiple types of distributed training including distributed data parallel, model sharded, as well as tensor, sequential, and pipeline parallelism.",
}
| Reinforcement learning from human feedback (RLHF) utilizes human feedback to better align large language models with human preferences via online optimization against a learned reward model. Current RLHF paradigms rely on Proximal Policy Optimization (PPO), which quickly becomes a challenge to implement and scale up to large architectures. To address this difficulty we created the trlX library as a feature-complete open-source framework for RLHF fine-tuning of models up to and exceeding 70 billion parameters. We implemented support for multiple types of distributed training including distributed data parallel, model sharded, as well as tensor, sequential, and pipeline parallelism. | [
"Castricato, Louis"
] | trlX: A Framework for Large Scale Open Source RLHF | nlposs-1.27 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.nlposs-1.28.bib | https://aclanthology.org/2023.nlposs-1.28/ | @inproceedings{duderstadt-anand-2023-towards,
title = "Towards Explainable and Accessible {AI}",
author = "Duderstadt, Brandon and
Anand, Yuvanesh",
editor = "Tan, Liling and
Milajevs, Dmitrijs and
Chauhan, Geeticka and
Gwinnup, Jeremy and
Rippeth, Elijah",
booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlposs-1.28",
doi = "10.18653/v1/2023.nlposs-1.28",
pages = "247--247",
abstract = "Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. Unfortunately, the explainability and accessibility of these models has lagged behind their performance. State-of-the-art LLMs require costly infrastructure, are only accessible via rate-limited, geo-locked, and censored web interfaces, and lack publicly available code and technical reports. Moreover, the lack of tooling for understanding the massive datasets used to train and produced by LLMs presents a critical challenge for explainability research. This talk will be an overview of Nomic AI{'}s efforts to address these challenges through its two core initiatives: GPT4All and Atlas",
}
| Large language models (LLMs) have recently achieved human-level performance on a range of professional and academic benchmarks. Unfortunately, the explainability and accessibility of these models has lagged behind their performance. State-of-the-art LLMs require costly infrastructure, are only accessible via rate-limited, geo-locked, and censored web interfaces, and lack publicly available code and technical reports. Moreover, the lack of tooling for understanding the massive datasets used to train and produced by LLMs presents a critical challenge for explainability research. This talk will be an overview of Nomic AI{'}s efforts to address these challenges through its two core initiatives: GPT4All and Atlas | [
"Duderstadt, Brandon",
"Anand, Yuvanesh"
] | Towards Explainable and Accessible AI | nlposs-1.28 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.pandl-1.1.bib | https://aclanthology.org/2023.pandl-1.1/ | @inproceedings{rajpoot-parikh-2023-nearest,
title = "Nearest Neighbor Search over Vectorized Lexico-Syntactic Patterns for Relation Extraction from Financial Documents",
author = "Rajpoot, Pawan and
Parikh, Ankur",
editor = "Surdeanu, Mihai and
Riloff, Ellen and
Chiticariu, Laura and
Frietag, Dayne and
Hahn-Powell, Gus and
Morrison, Clayton T. and
Noriega-Atala, Enrique and
Sharp, Rebecca and
Valenzuela-Escarcega, Marco",
booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.pandl-1.1",
doi = "10.18653/v1/2023.pandl-1.1",
pages = "1--5",
abstract = "Relation extraction (RE) has achieved remarkable progress with the help of pre-trained language models. However, existing RE models are usually incapable of handling two situations: implicit expressions and long-tail relation classes, caused by language complexity and data sparsity. Further, these approaches and models are largely inaccessible to users who don{'}t have direct access to large language models (LLMs) and/or infrastructure for supervised training or fine-tuning. Rule-based systems also struggle with implicit expressions. Apart from this, Real world financial documents such as various 10-X reports (including 10-K, 10-Q, etc.) of publicly traded companies pose another challenge to rule-based systems in terms of longer and complex sentences. In this paper, we introduce a simple approach that consults training relations at test time through a nearest-neighbor search over dense vectors of lexico-syntactic patterns and provides a simple yet effective means to tackle the above issues. We evaluate our approach on REFinD and show that our method achieves state-of-the-art performance. We further show that it can provide a good start for human in the loop setup when a small number of annotations are available and it is also beneficial when domain experts can provide high quality patterns. Our code is available at 1.",
}
| Relation extraction (RE) has achieved remarkable progress with the help of pre-trained language models. However, existing RE models are usually incapable of handling two situations: implicit expressions and long-tail relation classes, caused by language complexity and data sparsity. Further, these approaches and models are largely inaccessible to users who don{'}t have direct access to large language models (LLMs) and/or infrastructure for supervised training or fine-tuning. Rule-based systems also struggle with implicit expressions. Apart from this, Real world financial documents such as various 10-X reports (including 10-K, 10-Q, etc.) of publicly traded companies pose another challenge to rule-based systems in terms of longer and complex sentences. In this paper, we introduce a simple approach that consults training relations at test time through a nearest-neighbor search over dense vectors of lexico-syntactic patterns and provides a simple yet effective means to tackle the above issues. We evaluate our approach on REFinD and show that our method achieves state-of-the-art performance. We further show that it can provide a good start for human in the loop setup when a small number of annotations are available and it is also beneficial when domain experts can provide high quality patterns. Our code is available at 1. | [
"Rajpoot, Pawan",
"Parikh, Ankur"
] | Nearest Neighbor Search over Vectorized Lexico-Syntactic Patterns for Relation Extraction from Financial Documents | pandl-1.1 | 2310.17714 | [
"https://github.com/pawan2411/pan-dl_refind"
] | https://huggingface.co/papers/2310.17714 | 2 | 1 | 0 | 2 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.pandl-1.2.bib | https://aclanthology.org/2023.pandl-1.2/ | @inproceedings{lim-etal-2023-leaf,
title = "{LEAF}: Linguistically Enhanced Event Temporal Relation Framework",
author = "Lim, Stanley and
Yin, Da and
Peng, Nanyun",
editor = "Surdeanu, Mihai and
Riloff, Ellen and
Chiticariu, Laura and
Frietag, Dayne and
Hahn-Powell, Gus and
Morrison, Clayton T. and
Noriega-Atala, Enrique and
Sharp, Rebecca and
Valenzuela-Escarcega, Marco",
booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.pandl-1.2",
doi = "10.18653/v1/2023.pandl-1.2",
pages = "6--19",
abstract = "Linguistic structures can implicitly imply diverse types of event relations that have been previously underexplored. For example, the sentence {``}John was cooking freshly made noodles for the family gathering{''} contains no explicit temporal indicators between the events, such as before. Despite this, it is easy for humans to conclude, based on syntax, that the noodles were made before John started cooking, and that the family gathering starts after John starts cooking. We introduce Linguistically enhanced Event TemporAl relation Framework (LEAF), a simple and effective approach to acquiring rich temporal knowledge of events from large-scale corpora. This method improves pre-trained language models by automatically extracting temporal relation knowledge from unannotated corpora using diverse temporal knowledge patterns. We begin by manually curating a comprehensive list of atomic patterns that imply temporal relations between events. These patterns involve event pairs in which one event is contained within the argument of the other. Using transitivity, we discover compositional patterns and assign labels to event pairs involving these patterns. Finally, we make language models learn the rich knowledge by pre-training with the acquired temporal relation supervision. Experiments show that our method outperforms or rivals previous models on two event relation datasets: MATRES and TB-Dense. Our approach is also simpler from past works and excels at identifying complex compositional event relations.",
}
| Linguistic structures can implicitly imply diverse types of event relations that have been previously underexplored. For example, the sentence {``}John was cooking freshly made noodles for the family gathering{''} contains no explicit temporal indicators between the events, such as before. Despite this, it is easy for humans to conclude, based on syntax, that the noodles were made before John started cooking, and that the family gathering starts after John starts cooking. We introduce Linguistically enhanced Event TemporAl relation Framework (LEAF), a simple and effective approach to acquiring rich temporal knowledge of events from large-scale corpora. This method improves pre-trained language models by automatically extracting temporal relation knowledge from unannotated corpora using diverse temporal knowledge patterns. We begin by manually curating a comprehensive list of atomic patterns that imply temporal relations between events. These patterns involve event pairs in which one event is contained within the argument of the other. Using transitivity, we discover compositional patterns and assign labels to event pairs involving these patterns. Finally, we make language models learn the rich knowledge by pre-training with the acquired temporal relation supervision. Experiments show that our method outperforms or rivals previous models on two event relation datasets: MATRES and TB-Dense. Our approach is also simpler from past works and excels at identifying complex compositional event relations. | [
"Lim, Stanley",
"Yin, Da",
"Peng, Nanyun"
] | LEAF: Linguistically Enhanced Event Temporal Relation Framework | pandl-1.2 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.pandl-1.3.bib | https://aclanthology.org/2023.pandl-1.3/ | @inproceedings{han-etal-2023-graph,
title = "A Graph-Guided Reasoning Approach for Open-ended Commonsense Question Answering",
author = "Han, Zhen and
Feng, Yue and
Sun, Mingming",
editor = "Surdeanu, Mihai and
Riloff, Ellen and
Chiticariu, Laura and
Frietag, Dayne and
Hahn-Powell, Gus and
Morrison, Clayton T. and
Noriega-Atala, Enrique and
Sharp, Rebecca and
Valenzuela-Escarcega, Marco",
booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.pandl-1.3",
doi = "10.18653/v1/2023.pandl-1.3",
pages = "20--24",
abstract = "Recently, end-to-end trained models for multiple-choice commonsense question answering (QA) have delivered promising results. However, such question-answering systems cannot be directly applied in real-world scenarios where answer candidates are not provided. Hence, a new benchmark challenge set for open-ended commonsense reasoning (OpenCSR) has been recently released, which contains natural science questions without any predefined choices. On the OpenCSR challenge set, many questions require implicit multi-hop reasoning and have a large decision space, reflecting the difficult nature of this task. Existing work on OpenCSR sorely focuses on improving the retrieval process, which extracts relevant factual sentences from a textual knowledge base, leaving the important and non-trivial reasoning task outside the scope. In this work, we extend the scope to include a reasoner that constructs a question-dependent open knowledge graph based on retrieved supporting facts and employs a sequential subgraph reasoning process to predict the answer. The subgraph can be seen as a concise and compact graphical explanation of the prediction. Experiments on two OpenCSR datasets show that the proposed model achieves great performance on benchmark OpenCSR datasets.",
}
| Recently, end-to-end trained models for multiple-choice commonsense question answering (QA) have delivered promising results. However, such question-answering systems cannot be directly applied in real-world scenarios where answer candidates are not provided. Hence, a new benchmark challenge set for open-ended commonsense reasoning (OpenCSR) has been recently released, which contains natural science questions without any predefined choices. On the OpenCSR challenge set, many questions require implicit multi-hop reasoning and have a large decision space, reflecting the difficult nature of this task. Existing work on OpenCSR sorely focuses on improving the retrieval process, which extracts relevant factual sentences from a textual knowledge base, leaving the important and non-trivial reasoning task outside the scope. In this work, we extend the scope to include a reasoner that constructs a question-dependent open knowledge graph based on retrieved supporting facts and employs a sequential subgraph reasoning process to predict the answer. The subgraph can be seen as a concise and compact graphical explanation of the prediction. Experiments on two OpenCSR datasets show that the proposed model achieves great performance on benchmark OpenCSR datasets. | [
"Han, Zhen",
"Feng, Yue",
"Sun, Mingming"
] | A Graph-Guided Reasoning Approach for Open-ended Commonsense Question Answering | pandl-1.3 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.pandl-1.4.bib | https://aclanthology.org/2023.pandl-1.4/ | @inproceedings{mille-etal-2023-generating,
title = "Generating {I}rish Text with a Flexible Plug-and-Play Architecture",
author = "Mille, Simon and
U{\'\i} Dhonnchadha, Elaine and
Cassidy, Lauren and
Davis, Brian and
Dasiopoulou, Stamatia and
Belz, Anya",
editor = "Surdeanu, Mihai and
Riloff, Ellen and
Chiticariu, Laura and
Frietag, Dayne and
Hahn-Powell, Gus and
Morrison, Clayton T. and
Noriega-Atala, Enrique and
Sharp, Rebecca and
Valenzuela-Escarcega, Marco",
booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.pandl-1.4",
doi = "10.18653/v1/2023.pandl-1.4",
pages = "25--42",
abstract = "In this paper, we describe M-FleNS, a multilingual flexible plug-and-play architecture designed to accommodate neural and symbolic modules, and initially instantiated with rule-based modules. We focus on using M-FleNS for the specific purpose of building new resources for Irish, a language currently under-represented in the NLP landscape. We present the general M-FleNS framework and how we use it to build an Irish Natural Language Generation system for verbalising part of the DBpedia ontology and building a multilayered dataset with rich linguistic annotations. Via automatic and human assessments of the output texts we show that with very limited resources we are able to create a system that reaches high levels of fluency and semantic accuracy, while having very low energy and memory requirements.",
}
| In this paper, we describe M-FleNS, a multilingual flexible plug-and-play architecture designed to accommodate neural and symbolic modules, and initially instantiated with rule-based modules. We focus on using M-FleNS for the specific purpose of building new resources for Irish, a language currently under-represented in the NLP landscape. We present the general M-FleNS framework and how we use it to build an Irish Natural Language Generation system for verbalising part of the DBpedia ontology and building a multilayered dataset with rich linguistic annotations. Via automatic and human assessments of the output texts we show that with very limited resources we are able to create a system that reaches high levels of fluency and semantic accuracy, while having very low energy and memory requirements. | [
"Mille, Simon",
"U{\\'\\i} Dhonnchadha, Elaine",
"Cassidy, Lauren",
"Davis, Brian",
"Dasiopoulou, Stamatia",
"Belz, Anya"
] | Generating Irish Text with a Flexible Plug-and-Play Architecture | pandl-1.4 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |