bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.rapid-1.6.bib | https://aclanthology.org/2024.rapid-1.6/ | @inproceedings{themistocleous-2024-open,
title = "Open Brain {AI}. Automatic Language Assessment",
author = "Themistocleous, Charalambos",
editor = "Kokkinakis, Dimitrios and
Fraser, Kathleen C. and
Themistocleous, Charalambos K. and
Fors, Kristina Lundholm and
Tsanas, Athanasios and
Ohman, Fredrik",
booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rapid-1.6",
pages = "45--53",
abstract = "Language assessment plays a crucial role in diagnosing and treating individuals with speech, language, and communication disorders caused by neurogenic conditions, whether developmental or acquired. To support clinical assessment and research, we developed Open Brain AI (https://openbrainai.com). This computational platform employs AI techniques, namely machine learning, natural language processing, large language models, and automatic speech-to-text transcription, to automatically analyze multilingual spoken and written productions. This paper discusses the development of Open Brain AI, the AI language processing modules, and the linguistic measurements of discourse macro-structure and micro-structure. The fast and automatic analysis of language alleviates the burden on clinicians, enabling them to streamline their workflow and allocate more time and resources to direct patient care. Open Brain AI is freely accessible, empowering clinicians to conduct critical data analyses and give more attention and resources to other critical aspects of therapy and treatment.",
}
| Language assessment plays a crucial role in diagnosing and treating individuals with speech, language, and communication disorders caused by neurogenic conditions, whether developmental or acquired. To support clinical assessment and research, we developed Open Brain AI (https://openbrainai.com). This computational platform employs AI techniques, namely machine learning, natural language processing, large language models, and automatic speech-to-text transcription, to automatically analyze multilingual spoken and written productions. This paper discusses the development of Open Brain AI, the AI language processing modules, and the linguistic measurements of discourse macro-structure and micro-structure. The fast and automatic analysis of language alleviates the burden on clinicians, enabling them to streamline their workflow and allocate more time and resources to direct patient care. Open Brain AI is freely accessible, empowering clinicians to conduct critical data analyses and give more attention and resources to other critical aspects of therapy and treatment. | [
"Themistocleous, Charalambos"
] | Open Brain AI. Automatic Language Assessment | rapid-1.6 | Poster | 2306.06693 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.rapid-1.7.bib | https://aclanthology.org/2024.rapid-1.7/ | @inproceedings{mina-etal-2024-exploring,
title = "Exploring the Relationship Between Intrinsic Stigma in Masked Language Models and Training Data Using the Stereotype Content Model",
author = "Mina, Mario and
Falc{\~a}o, J{\'u}lia and
Gonzalez-Agirre, Aitor",
editor = "Kokkinakis, Dimitrios and
Fraser, Kathleen C. and
Themistocleous, Charalambos K. and
Fors, Kristina Lundholm and
Tsanas, Athanasios and
Ohman, Fredrik",
booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rapid-1.7",
pages = "54--67",
abstract = "Much work has gone into developing language models of increasing size, but only recently have we begun to examine them for pernicious behaviour that could lead to harming marginalised groups. Following Lin et al. (2022) in rooting our work in psychological research, we prompt two masked language models (MLMs) of different specialisations in English and Spanish with statements from a questionnaire developed to measure stigma to determine if they treat physical and mental illnesses equally. In both models we find a statistically significant difference in the treatment of physical and mental illnesses across most if not all latent constructs as measured by the questionnaire, and thus they are more likely to associate mental illnesses with stigma. We then examine their training data or data retrieved from the same domain using a computational implementation of the Stereotype Content Model (SCM) (Fiske et al., 2002; Fraser et al., 2021) to interpret the questionnaire results based on the SCM values as reflected in the data. We observe that model behaviour can largely be explained by the distribution of the mentions of illnesses according to their SCM values.",
}
| Much work has gone into developing language models of increasing size, but only recently have we begun to examine them for pernicious behaviour that could lead to harming marginalised groups. Following Lin et al. (2022) in rooting our work in psychological research, we prompt two masked language models (MLMs) of different specialisations in English and Spanish with statements from a questionnaire developed to measure stigma to determine if they treat physical and mental illnesses equally. In both models we find a statistically significant difference in the treatment of physical and mental illnesses across most if not all latent constructs as measured by the questionnaire, and thus they are more likely to associate mental illnesses with stigma. We then examine their training data or data retrieved from the same domain using a computational implementation of the Stereotype Content Model (SCM) (Fiske et al., 2002; Fraser et al., 2021) to interpret the questionnaire results based on the SCM values as reflected in the data. We observe that model behaviour can largely be explained by the distribution of the mentions of illnesses according to their SCM values. | [
"Mina, Mario",
"Falc{\\~a}o, J{\\'u}lia",
"Gonzalez-Agirre, Aitor"
] | Exploring the Relationship Between Intrinsic Stigma in Masked Language Models and Training Data Using the Stereotype Content Model | rapid-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rapid-1.8.bib | https://aclanthology.org/2024.rapid-1.8/ | @inproceedings{stamou-etal-2024-establishing,
title = "Establishing Control Corpora for Depression Detection in {M}odern {G}reek: Methodological Insights",
author = "Stamou, Vivian and
Mikros, George and
Markopoulos, George and
Varlokosta, Spyridoula",
editor = "Kokkinakis, Dimitrios and
Fraser, Kathleen C. and
Themistocleous, Charalambos K. and
Fors, Kristina Lundholm and
Tsanas, Athanasios and
Ohman, Fredrik",
booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rapid-1.8",
pages = "68--76",
abstract = "This paper presents a methodological approach for establishing control corpora in the context of depression detection in the Modern Greek language. We discuss various methods used to create control corpora, focusing on the challenge of selecting representative samples from the general population when the target reference is the depressed population. Our approach includes traditional random selection among Twitter users, as well as an innovative method for creating topic-oriented control corpora. Through this study, we provide insights into the development of control corpora, offering valuable considerations for researchers working on similar projects in linguistic analysis and mental health studies. In addition, we identify several dominant topics in the depressed population such as religion, sentiments, health and digestion, which seem to align with findings consistently reported in the literature",
}
| This paper presents a methodological approach for establishing control corpora in the context of depression detection in the Modern Greek language. We discuss various methods used to create control corpora, focusing on the challenge of selecting representative samples from the general population when the target reference is the depressed population. Our approach includes traditional random selection among Twitter users, as well as an innovative method for creating topic-oriented control corpora. Through this study, we provide insights into the development of control corpora, offering valuable considerations for researchers working on similar projects in linguistic analysis and mental health studies. In addition, we identify several dominant topics in the depressed population such as religion, sentiments, health and digestion, which seem to align with findings consistently reported in the literature | [
"Stamou, Vivian",
"Mikros, George",
"Markopoulos, George",
"Varlokosta, Spyridoula"
] | Establishing Control Corpora for Depression Detection in Modern Greek: Methodological Insights | rapid-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rapid-1.9.bib | https://aclanthology.org/2024.rapid-1.9/ | @inproceedings{khanna-stark-2024-preliminary,
title = "A Preliminary Evaluation of Semantic Coherence and Cohesion in Aphasic and Non-Aphasic Discourse Across Test and Retest",
author = "Khanna, Snigdha and
Stark, Brielle C.",
editor = "Kokkinakis, Dimitrios and
Fraser, Kathleen C. and
Themistocleous, Charalambos K. and
Fors, Kristina Lundholm and
Tsanas, Athanasios and
Ohman, Fredrik",
booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rapid-1.9",
pages = "77--86",
abstract = "This paper evaluates global and local semantic coherence in aphasic and non-aphasic discourse tasks using the Tool for the Automatic Analysis of Cohesion (TAACO). The motivation for this paper stems from a lack of automatic methods to evaluate discourse-level phenomena, such as semantic cohesion, in transcripts derived from persons with aphasia. It leverages existing test-retest data to evaluate two main objectives: (1) Test-Retest Reliability, to identify if variables significantly differ across test and retest time points for either group (aphasia, control), and (2) Inter-Group Discourse Cohesion, where aphasic discourse is expected to be less cohesive than control discourse, resulting in lower cohesion scores for the aphasia group. Exploratory analysis examines correlations between variables for both groups, identifying any relationships between word-level and sentence-level semantic variables. Results verify that semantic cohesion and coherence are generally preserved in both groups, except for word-level and a few sentence-level semantic measures,w which are higher for the control group. Overall, variables tend to be reliable across time points for both groups. Notably, the aphasia group demonstrates more variability in cohesion than the control group, which is to be expected after brain injury. A close relationship between word-level indices and other indices is observed, suggesting a disconnection between word-level factors and sentence-level metrics.",
}
| This paper evaluates global and local semantic coherence in aphasic and non-aphasic discourse tasks using the Tool for the Automatic Analysis of Cohesion (TAACO). The motivation for this paper stems from a lack of automatic methods to evaluate discourse-level phenomena, such as semantic cohesion, in transcripts derived from persons with aphasia. It leverages existing test-retest data to evaluate two main objectives: (1) Test-Retest Reliability, to identify if variables significantly differ across test and retest time points for either group (aphasia, control), and (2) Inter-Group Discourse Cohesion, where aphasic discourse is expected to be less cohesive than control discourse, resulting in lower cohesion scores for the aphasia group. Exploratory analysis examines correlations between variables for both groups, identifying any relationships between word-level and sentence-level semantic variables. Results verify that semantic cohesion and coherence are generally preserved in both groups, except for word-level and a few sentence-level semantic measures, which are higher for the control group. Overall, variables tend to be reliable across time points for both groups. Notably, the aphasia group demonstrates more variability in cohesion than the control group, which is to be expected after brain injury. A close relationship between word-level indices and other indices is observed, suggesting a disconnection between word-level factors and sentence-level metrics. | [
"Khanna, Snigdha",
"Stark, Brielle C."
] | A Preliminary Evaluation of Semantic Coherence and Cohesion in Aphasic and Non-Aphasic Discourse Across Test and Retest | rapid-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rapid-1.10.bib | https://aclanthology.org/2024.rapid-1.10/ | @inproceedings{cafiero-etal-2024-harnessing,
title = "Harnessing Linguistic Analysis for {ADHD} Diagnosis Support: A Stylometric Approach to Self-Defining Memories",
author = {Cafiero, Florian Rapha{\"e}l and
Barrios Rudloff, Juan and
Gabay, Simon},
editor = "Kokkinakis, Dimitrios and
Fraser, Kathleen C. and
Themistocleous, Charalambos K. and
Fors, Kristina Lundholm and
Tsanas, Athanasios and
Ohman, Fredrik",
booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rapid-1.10",
pages = "87--94",
abstract = "This study explores the potential of stylometric analysis in identifying Self-Defining Memories (SDMs) authored by individuals with Attention-Deficit/Hyperactivity Disorder (ADHD) versus a control group. A sample of 198 SDMs were written by 66 adolescents and were then analysed using Support Vector Classifiers (SVC). The analysis included a variety of linguistic features such as character 3-grams, function words, sentence length, or lexical richness among others. It also included metadata about the participants (gender, age) and their SDMs (self-reported sentiment after recalling their memories). The results reveal a promising ability of linguistic analysis to accurately classify SDMs, with perfect prediction (F1=1.0) in the contextually simpler setup of text-by-text prediction, and satisfactory levels of precision (F1 = 0.77) when predicting individual by individual. Such results highlight the significant role that linguistic characteristics play in reflecting the distinctive cognitive patterns associated with ADHD. While not a substitute for professional diagnosis, textual analysis offers a supportive avenue for early detection and a deeper understanding of ADHD.",
}
| This study explores the potential of stylometric analysis in identifying Self-Defining Memories (SDMs) authored by individuals with Attention-Deficit/Hyperactivity Disorder (ADHD) versus a control group. A sample of 198 SDMs were written by 66 adolescents and were then analysed using Support Vector Classifiers (SVC). The analysis included a variety of linguistic features such as character 3-grams, function words, sentence length, or lexical richness among others. It also included metadata about the participants (gender, age) and their SDMs (self-reported sentiment after recalling their memories). The results reveal a promising ability of linguistic analysis to accurately classify SDMs, with perfect prediction (F1=1.0) in the contextually simpler setup of text-by-text prediction, and satisfactory levels of precision (F1 = 0.77) when predicting individual by individual. Such results highlight the significant role that linguistic characteristics play in reflecting the distinctive cognitive patterns associated with ADHD. While not a substitute for professional diagnosis, textual analysis offers a supportive avenue for early detection and a deeper understanding of ADHD. | [
"Cafiero, Florian Rapha{\\\"e}l",
"Barrios Rudloff, Juan",
"Gabay, Simon"
] | Harnessing Linguistic Analysis for ADHD Diagnosis Support: A Stylometric Approach to Self-Defining Memories | rapid-1.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rapid-1.11.bib | https://aclanthology.org/2024.rapid-1.11/ | @inproceedings{choi-etal-2024-crosslinguistic,
title = "Crosslinguistic Acoustic Feature-based Dementia Classification Using Advanced Learning Architectures",
author = "Choi, Anna Seo Gyeong and
Kim, Jin-seo and
Kim, Seo-hee and
Back, Min Seok and
Cho, Sunghye",
editor = "Kokkinakis, Dimitrios and
Fraser, Kathleen C. and
Themistocleous, Charalambos K. and
Fors, Kristina Lundholm and
Tsanas, Athanasios and
Ohman, Fredrik",
booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rapid-1.11",
pages = "95--100",
abstract = "In this study, we rigorously evaluated eight machine learning and deep learning classifiers for identifying Alzheimer{'}s Disease (AD) patients using crosslinguistic acoustic features automatically extracted from one-minute oral picture descriptions produced by speakers of American English, Korean, and Mandarin Chinese. We employed eGeMAPSv2 and ComParE feature sets on segmented and non-segmented audio data. The Multilayer Perceptron model showed the highest performance, achieving an accuracy of 83.54{\%} and an AUC of 0.8 on the ComParE features extracted from non-segmented picture description data. Our findings suggest that classifiers trained with acoustic features extracted from one-minute picture description data in multiple languages are highly promising as a quick, language-universal, large-scale, remote screening tool for AD. However, the dataset included predominantly English-speaking participants, indicating the need for more balanced multilingual datasets in future research.",
}
| In this study, we rigorously evaluated eight machine learning and deep learning classifiers for identifying Alzheimer{'}s Disease (AD) patients using crosslinguistic acoustic features automatically extracted from one-minute oral picture descriptions produced by speakers of American English, Korean, and Mandarin Chinese. We employed eGeMAPSv2 and ComParE feature sets on segmented and non-segmented audio data. The Multilayer Perceptron model showed the highest performance, achieving an accuracy of 83.54{\%} and an AUC of 0.8 on the ComParE features extracted from non-segmented picture description data. Our findings suggest that classifiers trained with acoustic features extracted from one-minute picture description data in multiple languages are highly promising as a quick, language-universal, large-scale, remote screening tool for AD. However, the dataset included predominantly English-speaking participants, indicating the need for more balanced multilingual datasets in future research. | [
"Choi, Anna Seo Gyeong",
"Kim, Jin-seo",
"Kim, Seo-hee",
"Back, Min Seok",
"Cho, Sunghye"
] | Crosslinguistic Acoustic Feature-based Dementia Classification Using Advanced Learning Architectures | rapid-1.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.readi-1.1.bib | https://aclanthology.org/2024.readi-1.1/ | @inproceedings{cripwell-etal-2024-evaluating,
title = "Evaluating Document Simplification: On the Importance of Separately Assessing Simplicity and Meaning Preservation",
author = {Cripwell, Liam and
Legrand, Jo{\"e}l and
Gardent, Claire},
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.1",
pages = "1--14",
abstract = "Text simplification intends to make a text easier to read while preserving its core meaning. Intuitively and as shown in previous works, these two dimensions (simplification and meaning preservation) are often-times inversely correlated. An overly conservative text will fail to simplify sufficiently, whereas extreme simplification will degrade meaning preservation. Yet, popular evaluation metrics either aggregate meaning preservation and simplification into a single score (SARI, LENS), or target meaning preservation alone (BERTScore, QuestEval). Moreover, these metrics usually require a set of references and most previous work has only focused on sentence-level simplification. In this paper, we focus on the evaluation of document-level text simplification and compare existing models using distinct metrics for meaning preservation and simplification. We leverage existing metrics from similar tasks and introduce a reference-less metric variant for simplicity, showing that models are mostly biased towards either simplification or meaning preservation, seldom performing well on both dimensions. Making use of the fact that the metrics we use are all reference-less, we also investigate the performance of existing models when applied to unseen data (where reference simplifications are unavailable).",
}
| Text simplification intends to make a text easier to read while preserving its core meaning. Intuitively and as shown in previous works, these two dimensions (simplification and meaning preservation) are often-times inversely correlated. An overly conservative text will fail to simplify sufficiently, whereas extreme simplification will degrade meaning preservation. Yet, popular evaluation metrics either aggregate meaning preservation and simplification into a single score (SARI, LENS), or target meaning preservation alone (BERTScore, QuestEval). Moreover, these metrics usually require a set of references and most previous work has only focused on sentence-level simplification. In this paper, we focus on the evaluation of document-level text simplification and compare existing models using distinct metrics for meaning preservation and simplification. We leverage existing metrics from similar tasks and introduce a reference-less metric variant for simplicity, showing that models are mostly biased towards either simplification or meaning preservation, seldom performing well on both dimensions. Making use of the fact that the metrics we use are all reference-less, we also investigate the performance of existing models when applied to unseen data (where reference simplifications are unavailable). | [
"Cripwell, Liam",
"Legr",
", Jo{\\\"e}l",
"Gardent, Claire"
] | Evaluating Document Simplification: On the Importance of Separately Assessing Simplicity and Meaning Preservation | readi-1.1 | Poster | 2404.03278 | [
""
] | https://huggingface.co/papers/2404.03278 | 1 | 1 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2024.readi-1.2.bib | https://aclanthology.org/2024.readi-1.2/ | @inproceedings{hjartarson-fridriksdottir-2024-malmon,
title = "Malmon: A Crowd-Sourcing Platform for Simple Language",
author = {Hjartarson, Helgi Bj{\"o}rn and
Fri{\dh}riksd{\'o}ttir, Steinunn Rut},
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.2",
pages = "15--21",
abstract = "This paper presents a crowd-sourcing platform designed to address the need for parallel corpora in the field of Automatic Text Simplification (ATS). ATS aims to automatically reduce the linguistic complexity of text to aid individuals with reading difficulties, such as those with cognitive disorders, dyslexia, children, and non-native speakers. ATS does not only facilitate improved reading comprehension among these groups but can also enhance the preprocessing stage for various NLP tasks through summarization, contextual simplification, and paraphrasing. Our work introduces a language independent, openly accessible platform that crowdsources training data for ATS models, potentially benefiting low-resource languages where parallel data is scarce. The platform can efficiently aid in the collection of parallel corpora by providing a user-friendly data-collection environment. Furthermore, using human crowd-workers for the data collection process offers a potential resource for linguistic research on text simplification practices. The paper discusses the platform{'}s architecture, built with modern web technologies, and its user-friendly interface designed to encourage widespread participation. Through gamification and a robust admin panel, the platform incentivizes high-quality data collection and engagement from crowdworkers.",
}
| This paper presents a crowd-sourcing platform designed to address the need for parallel corpora in the field of Automatic Text Simplification (ATS). ATS aims to automatically reduce the linguistic complexity of text to aid individuals with reading difficulties, such as those with cognitive disorders, dyslexia, children, and non-native speakers. ATS does not only facilitate improved reading comprehension among these groups but can also enhance the preprocessing stage for various NLP tasks through summarization, contextual simplification, and paraphrasing. Our work introduces a language independent, openly accessible platform that crowdsources training data for ATS models, potentially benefiting low-resource languages where parallel data is scarce. The platform can efficiently aid in the collection of parallel corpora by providing a user-friendly data-collection environment. Furthermore, using human crowd-workers for the data collection process offers a potential resource for linguistic research on text simplification practices. The paper discusses the platform{'}s architecture, built with modern web technologies, and its user-friendly interface designed to encourage widespread participation. Through gamification and a robust admin panel, the platform incentivizes high-quality data collection and engagement from crowdworkers. | [
"Hjartarson, Helgi Bj{\\\"o}rn",
"Fri{\\dh}riksd{\\'o}ttir, Steinunn Rut"
] | Malmon: A Crowd-Sourcing Platform for Simple Language | readi-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.readi-1.3.bib | https://aclanthology.org/2024.readi-1.3/ | @inproceedings{sauberli-clematide-2024-automatic,
title = "Automatic Generation and Evaluation of Reading Comprehension Test Items with Large Language Models",
author = {S{\"a}uberli, Andreas and
Clematide, Simon},
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.3",
pages = "22--37",
abstract = "Reading comprehension tests are used in a variety of applications, reaching from education to assessing the comprehensibility of simplified texts. However, creating such tests manually and ensuring their quality is difficult and time-consuming. In this paper, we explore how large language models (LLMs) can be used to generate and evaluate multiple-choice reading comprehension items. To this end, we compiled a dataset of German reading comprehension items and developed a new protocol for human and automatic evaluation, including a metric we call text informativity, which is based on guessability and answerability. We then used this protocol and the dataset to evaluate the quality of items generated by Llama 2 and GPT-4. Our results suggest that both models are capable of generating items of acceptable quality in a zero-shot setting, but GPT-4 clearly outperforms Llama 2. We also show that LLMs can be used for automatic evaluation by eliciting item reponses from them. In this scenario, evaluation results with GPT-4 were the most similar to human annotators. Overall, zero-shot generation with LLMs is a promising approach for generating and evaluating reading comprehension test items, in particular for languages without large amounts of available data.",
}
| Reading comprehension tests are used in a variety of applications, reaching from education to assessing the comprehensibility of simplified texts. However, creating such tests manually and ensuring their quality is difficult and time-consuming. In this paper, we explore how large language models (LLMs) can be used to generate and evaluate multiple-choice reading comprehension items. To this end, we compiled a dataset of German reading comprehension items and developed a new protocol for human and automatic evaluation, including a metric we call text informativity, which is based on guessability and answerability. We then used this protocol and the dataset to evaluate the quality of items generated by Llama 2 and GPT-4. Our results suggest that both models are capable of generating items of acceptable quality in a zero-shot setting, but GPT-4 clearly outperforms Llama 2. We also show that LLMs can be used for automatic evaluation by eliciting item responses from them. In this scenario, evaluation results with GPT-4 were the most similar to human annotators. Overall, zero-shot generation with LLMs is a promising approach for generating and evaluating reading comprehension test items, in particular for languages without large amounts of available data. | [
"S{\\\"a}uberli, Andreas",
"Clematide, Simon"
] | Automatic Generation and Evaluation of Reading Comprehension Test Items with Large Language Models | readi-1.3 | Poster | 2404.07720 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.readi-1.4.bib | https://aclanthology.org/2024.readi-1.4/ | @inproceedings{shardlow-etal-2024-extensible,
title = "An Extensible Massively Multilingual Lexical Simplification Pipeline Dataset using the {M}ulti{LS} Framework",
author = {Shardlow, Matthew and
Alva-Manchego, Fernando and
Batista-Navarro, Riza and
Bott, Stefan and
Calderon Ramirez, Saul and
Cardon, R{\'e}mi and
Fran{\c{c}}ois, Thomas and
Hayakawa, Akio and
Horbach, Andrea and
H{\"u}lsing, Anna and
Ide, Yusuke and
Imperial, Joseph Marvin and
Nohejl, Adam and
North, Kai and
Occhipinti, Laura and
Per{\'e}z Rojas, Nelson and
Raihan, Nishat and
Ranasinghe, Tharindu and
Solis Salazar, Martin and
Zampieri, Marcos and
Saggion, Horacio},
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.4",
pages = "38--46",
abstract = "We present preliminary findings on the MultiLS dataset, developed in support of the 2024 Multilingual Lexical Simplification Pipeline (MLSP) Shared Task. This dataset currently comprises of 300 instances of lexical complexity prediction and lexical simplification across 10 languages. In this paper, we (1) describe the annotation protocol in support of the contribution of future datasets and (2) present summary statistics on the existing data that we have gathered. Multilingual lexical simplification can be used to support low-ability readers to engage with otherwise difficult texts in their native, often low-resourced, languages.",
}
| We present preliminary findings on the MultiLS dataset, developed in support of the 2024 Multilingual Lexical Simplification Pipeline (MLSP) Shared Task. This dataset currently comprises 300 instances of lexical complexity prediction and lexical simplification across 10 languages. In this paper, we (1) describe the annotation protocol in support of the contribution of future datasets and (2) present summary statistics on the existing data that we have gathered. Multilingual lexical simplification can be used to support low-ability readers to engage with otherwise difficult texts in their native, often low-resourced, languages. | [
"Shardlow, Matthew",
"Alva-Manchego, Fern",
"o",
"Batista-Navarro, Riza",
"Bott, Stefan",
"Calderon Ramirez, Saul",
"Cardon, R{\\'e}mi",
"Fran{\\c{c}}ois, Thomas",
"Hayakawa, Akio",
"Horbach, Andrea",
"H{\\\"u}lsing, Anna",
"Ide, Yusuke",
"Imperial, Joseph Marvin",
"Nohejl, Adam",
"North, Kai",
"Occhipinti, Laura",
"Per{\\'e}z Rojas, Nelson",
"Raihan, Nishat",
"Ranasinghe, Tharindu",
"Solis Salazar, Martin",
"Zampieri, Marcos",
"Saggion, Horacio"
] | An Extensible Massively Multilingual Lexical Simplification Pipeline Dataset using the MultiLS Framework | readi-1.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.readi-1.5.bib | https://aclanthology.org/2024.readi-1.5/ | @inproceedings{yamanaka-tokunaga-2024-siera,
title = "{SIERA}: An Evaluation Metric for Text Simplification using the Ranking Model and Data Augmentation by Edit Operations",
author = "Yamanaka, Hikaru and
Tokunaga, Takenobu",
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.5",
pages = "47--58",
abstract = "Automatic evaluation metrics are indispensable for text simplification (TS) research. The past TS research adopts three evaluation aspects: fluency, meaning preservation and simplicity. However, there is little consensus on a metric to measure simplicity, a unique aspect of TS compared with other text generation tasks. In addition, many of the existing metrics require reference simplified texts for evaluation. Thus, the cost of collecting reference texts is also an issue. This study proposes a new automatic evaluation metric, SIERA, for sentence simplification. SIERA employs a ranking model for the order relation of simplicity, which is trained by pairs of the original and simplified sentences. It does not require reference sentences for either training or evaluation. The sentence pairs for training are further augmented by the proposed method that utlizes edit operations to generate intermediate sentences with the simplicity between the original and simplified sentences. Using three evaluation datasets for text simplification, we compare SIERA with other metrics by calculating the correlations between metric values and human ratings. The results showed SIERA{'}s superiority over other metrics with a reservation that the quality of evaluation sentences is consistent with that of the training data.",
}
| Automatic evaluation metrics are indispensable for text simplification (TS) research. The past TS research adopts three evaluation aspects: fluency, meaning preservation and simplicity. However, there is little consensus on a metric to measure simplicity, a unique aspect of TS compared with other text generation tasks. In addition, many of the existing metrics require reference simplified texts for evaluation. Thus, the cost of collecting reference texts is also an issue. This study proposes a new automatic evaluation metric, SIERA, for sentence simplification. SIERA employs a ranking model for the order relation of simplicity, which is trained by pairs of the original and simplified sentences. It does not require reference sentences for either training or evaluation. The sentence pairs for training are further augmented by the proposed method that utilizes edit operations to generate intermediate sentences with the simplicity between the original and simplified sentences. Using three evaluation datasets for text simplification, we compare SIERA with other metrics by calculating the correlations between metric values and human ratings. The results showed SIERA{'}s superiority over other metrics with a reservation that the quality of evaluation sentences is consistent with that of the training data. | [
"Yamanaka, Hikaru",
"Tokunaga, Takenobu"
] | SIERA: An Evaluation Metric for Text Simplification using the Ranking Model and Data Augmentation by Edit Operations | readi-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.readi-1.6.bib | https://aclanthology.org/2024.readi-1.6/ | @inproceedings{athugodage-etal-2024-transfer,
title = "Transfer Learning for {R}ussian Legal Text Simplification",
author = "Athugodage, Mark and
Mitrofanove, Olga and
Gudkov, Vadim",
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.6",
pages = "59--69",
abstract = "We present novel results in legal text simplification for Russian. We introduce the first dataset for such a task in Russian - a parallel corpus based on the data extracted from {``}Rossiyskaya Gazeta Legal Papers{''}. In this study we discuss three approaches for text simplification which involve T5 and GPT model architectures. We evaluate the proposed models on a set of metrics: ROUGE, SARI and BERTScore. We also analysed the models{'} results on such readability indices as Flesch-Kinkaid Grade Level and Gunning Fog Index. And, finally, we performed human evaluation of simplified texts generated by T5 and GPT models; expertise was carried out by native speakers of Russian and Russian lawyers. In this research we compared T5 and GPT architectures for text simplification task and found out that GPT handles better when it is fine-tuned on dataset of coped texts. Our research makes a big step in improving Russian legal text readability and accessibility for common people.",
}
| We present novel results in legal text simplification for Russian. We introduce the first dataset for such a task in Russian - a parallel corpus based on the data extracted from {``}Rossiyskaya Gazeta Legal Papers{''}. In this study we discuss three approaches for text simplification which involve T5 and GPT model architectures. We evaluate the proposed models on a set of metrics: ROUGE, SARI and BERTScore. We also analysed the models{'} results on such readability indices as Flesch-Kincaid Grade Level and Gunning Fog Index. And, finally, we performed human evaluation of simplified texts generated by T5 and GPT models; expertise was carried out by native speakers of Russian and Russian lawyers. In this research we compared T5 and GPT architectures for text simplification task and found out that GPT handles better when it is fine-tuned on dataset of coped texts. Our research makes a big step in improving Russian legal text readability and accessibility for common people. | [
"Athugodage, Mark",
"Mitrofanove, Olga",
"Gudkov, Vadim"
] | Transfer Learning for Russian Legal Text Simplification | readi-1.6 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.readi-1.7.bib | https://aclanthology.org/2024.readi-1.7/ | @inproceedings{deleanu-etal-2024-accessible,
title = "Accessible Communication: a systematic review and comparative analysis of official {E}nglish Easy-to-Understand ({E}2{U}) language guidelines",
author = "Deleanu, Andreea Maria and
Orasan, Constantin and
Braun, Sabine",
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.7",
pages = "70--92",
abstract = "Easy-to-Understand (E2U) language varieties have been recognized by the United Nation{'}s Convention on the Rights of Persons with Disabilities (2006) as a means to guarantee the fundamental right to Accessible Communication. Increased awareness has driven changes in European (European Commission, 2015, 2021; European Parliament, 2016) and International legislation (ODI, 2010), prompting public-sector and other institutions to offer domain-specific content into E2U language to prevent communicative exclusion of those facing cognitive barriers (COGA, 2017; Maa{\ss}, 2020; Perego, 2020). However, guidance on what it is that makes language actually {`}easier to understand{'} is still fragmented and vague. For this reason, we carried out a systematic review of official guidelines for English Plain Language and Easy Language to identify the most effective lexical, syntactic and adaptation strategies that can reduce complexity in verbal discourse according to official bodies. This article will present the methods and preliminary results of the guidelines analysis.",
}
| Easy-to-Understand (E2U) language varieties have been recognized by the United Nation{'}s Convention on the Rights of Persons with Disabilities (2006) as a means to guarantee the fundamental right to Accessible Communication. Increased awareness has driven changes in European (European Commission, 2015, 2021; European Parliament, 2016) and International legislation (ODI, 2010), prompting public-sector and other institutions to offer domain-specific content into E2U language to prevent communicative exclusion of those facing cognitive barriers (COGA, 2017; Maa{\ss}, 2020; Perego, 2020). However, guidance on what it is that makes language actually {`}easier to understand{'} is still fragmented and vague. For this reason, we carried out a systematic review of official guidelines for English Plain Language and Easy Language to identify the most effective lexical, syntactic and adaptation strategies that can reduce complexity in verbal discourse according to official bodies. This article will present the methods and preliminary results of the guidelines analysis. | [
"Deleanu, Andreea Maria",
"Orasan, Constantin",
"Braun, Sabine"
] | Accessible Communication: a systematic review and comparative analysis of official English Easy-to-Understand (E2U) language guidelines | readi-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.readi-1.8.bib | https://aclanthology.org/2024.readi-1.8/ | @inproceedings{madina-etal-2024-languagetool,
title = "{L}anguage{T}ool as a {CAT} tool for Easy-to-Read in {S}panish",
author = "Madina, Margot and
Gonzalez-Dios, Itziar and
Siegel, Melanie",
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.8",
pages = "93--101",
abstract = "Easy-to-Read (E2R) is an approach to content creation that emphasizes simplicity and clarity in language to make texts more accessible to readers with cognitive challenges or learning disabilities. The Spanish version of E2R is called Lectura F{\'a}cil (LF). E2R and its variants, such as LF, focus on straightforward language and structure to enhance readability. The manual production of such texts is both time and resource expensive. In this work, we have developed LFWriteAssist, an authoring support tool that aligns with the guidelines of LF. It is underpinned by the functionalities of LanguageTool, a free and open source grammar, style and spelling checker. Our tool assists in ensuring compliance with LF standard, provides definitions for complex, polysemic, or infrequently used terms, and acronym extensions. The tool is primarily targeted at LF creators, as it serves as an authoring aid, identifying any rule infringements and assisting with language simplifications. However, it can be used by anyone who seek to enhance text readability and inclusivity. The tool{'}s code is made available as open source, thereby contributing to the wider effort of creating inclusive and comprehensible content.",
}
| Easy-to-Read (E2R) is an approach to content creation that emphasizes simplicity and clarity in language to make texts more accessible to readers with cognitive challenges or learning disabilities. The Spanish version of E2R is called Lectura F{\'a}cil (LF). E2R and its variants, such as LF, focus on straightforward language and structure to enhance readability. The manual production of such texts is both time and resource expensive. In this work, we have developed LFWriteAssist, an authoring support tool that aligns with the guidelines of LF. It is underpinned by the functionalities of LanguageTool, a free and open source grammar, style and spelling checker. Our tool assists in ensuring compliance with LF standard, provides definitions for complex, polysemic, or infrequently used terms, and acronym extensions. The tool is primarily targeted at LF creators, as it serves as an authoring aid, identifying any rule infringements and assisting with language simplifications. However, it can be used by anyone who seeks to enhance text readability and inclusivity. The tool{'}s code is made available as open source, thereby contributing to the wider effort of creating inclusive and comprehensible content. | [
"Madina, Margot",
"Gonzalez-Dios, Itziar",
"Siegel, Melanie"
] | LanguageTool as a CAT tool for Easy-to-Read in Spanish | readi-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.readi-1.9.bib | https://aclanthology.org/2024.readi-1.9/ | @inproceedings{wilkens-etal-2024-paying,
title = "Paying attention to the words: explaining readability prediction for {F}rench as a foreign language",
author = "Wilkens, Rodrigo and
Watrin, Patrick and
Fran{\c{c}}ois, Thomas",
editor = "Wilkens, Rodrigo and
Cardon, R{\'e}mi and
Todirascu, Amalia and
Gala, N{\'u}ria",
booktitle = "Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.readi-1.9",
pages = "102--115",
abstract = "Automatic text Readability Assessment (ARA) has been seen as a way of helping people with reading difficulties. Recent advancements in Natural Language Processing have shifted ARA from linguistic-based models to more precise black-box models. However, this shift has weakened the alignment between ARA models and the reading literature, potentially leading to inaccurate predictions based on unintended factors. In this paper, we investigate the explainability of ARA models, inspecting the relationship between attention mechanism scores, ARA features, and CEFR level predictions made by the model. We propose a method for identifying features associated with the predictions made by a model through the use of the attention mechanism. Exploring three feature families (i.e., psycho-linguistic, work frequency and graded lexicon), we associated features with the model{'}s attention heads. Finally, while not fully explanatory of the model{'}s performance, the correlations of these associations surpass those between features and text readability levels.",
}
| Automatic text Readability Assessment (ARA) has been seen as a way of helping people with reading difficulties. Recent advancements in Natural Language Processing have shifted ARA from linguistic-based models to more precise black-box models. However, this shift has weakened the alignment between ARA models and the reading literature, potentially leading to inaccurate predictions based on unintended factors. In this paper, we investigate the explainability of ARA models, inspecting the relationship between attention mechanism scores, ARA features, and CEFR level predictions made by the model. We propose a method for identifying features associated with the predictions made by a model through the use of the attention mechanism. Exploring three feature families (i.e., psycho-linguistic, word frequency and graded lexicon), we associated features with the model{'}s attention heads. Finally, while not fully explanatory of the model{'}s performance, the correlations of these associations surpass those between features and text readability levels. | [
"Wilkens, Rodrigo",
"Watrin, Patrick",
"Fran{\\c{c}}ois, Thomas"
] | Paying attention to the words: explaining readability prediction for French as a foreign language | readi-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rfp-1.1.bib | https://aclanthology.org/2024.rfp-1.1/ | @inproceedings{remijnse-etal-2024-tracking,
title = "Tracking Perspectives on Event Participants: a Structural Analysis of the Framing of Real-World Events in Co-Referential Corpora",
author = "Remijnse, Levi and
Sommerauer, Pia and
Fokkens, Antske and
Vossen, Piek T.J.M.",
editor = "Sommerauer, Pia and
Caselli, Tommaso and
Nissim, Malvina and
Remijnse, Levi and
Vossen, Piek",
booktitle = "Proceedings of the First Workshop on Reference, Framing, and Perspective @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rfp-1.1",
pages = "1--12",
abstract = "In this paper, we present the outcome of a structural linguistic analysis performed on a referentially grounded FrameNet dataset. In this dataset, multiple Dutch events are referenced by multiple co-referential Dutch news texts. Mentions in those documents are annotated with respect to their referential grounding (i.e., links to structured Wikidata), and their conceptual representation (i.e., frames). Provided with each document{'}s temporal reporting distance, we selected documents for two events - the Utrecht shooting and MH17 - and performed an analysis in which we tracked the events{'} participants over time in both their focalization (number of mentions) and their framing (distribution of frame element labels). This way, we use the carefully collected and annotated data to schematize shifts in focalization and perspectivization of the participants as a result of the constantly developing narrative surrounding the events. This novel type of linguistic research involves reference to the real-world referents and takes into account storytelling in news streams.",
}
| In this paper, we present the outcome of a structural linguistic analysis performed on a referentially grounded FrameNet dataset. In this dataset, multiple Dutch events are referenced by multiple co-referential Dutch news texts. Mentions in those documents are annotated with respect to their referential grounding (i.e., links to structured Wikidata), and their conceptual representation (i.e., frames). Provided with each document{'}s temporal reporting distance, we selected documents for two events - the Utrecht shooting and MH17 - and performed an analysis in which we tracked the events{'} participants over time in both their focalization (number of mentions) and their framing (distribution of frame element labels). This way, we use the carefully collected and annotated data to schematize shifts in focalization and perspectivization of the participants as a result of the constantly developing narrative surrounding the events. This novel type of linguistic research involves reference to the real-world referents and takes into account storytelling in news streams. | [
"Remijnse, Levi",
"Sommerauer, Pia",
"Fokkens, Antske",
"Vossen, Piek T.J.M."
] | Tracking Perspectives on Event Participants: a Structural Analysis of the Framing of Real-World Events in Co-Referential Corpora | rfp-1.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rfp-1.2.bib | https://aclanthology.org/2024.rfp-1.2/ | @inproceedings{lamorte-etal-2024-timeframe,
title = "{T}ime{F}rame: Querying and Visualizing Event Semantic Frames in Time",
author = "Lamorte, Davide and
Rovera, Marco and
Ferrara, Alfio and
Tonelli, Sara",
editor = "Sommerauer, Pia and
Caselli, Tommaso and
Nissim, Malvina and
Remijnse, Levi and
Vossen, Piek",
booktitle = "Proceedings of the First Workshop on Reference, Framing, and Perspective @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rfp-1.2",
pages = "13--17",
abstract = "In this work we introduce TimeFrame, an online platform to easily query and visualize events and participants extracted from document collections in Italian following a frame-based approach. The system allows users to select one or more events (frames) or event categories and to display their occurrences on a timeline. Different query types, from coarse to fine-grained, are available through the interface, enabling a time-bound analysis of large historical corpora. We present three use cases based on the full archive of news published in 1948 by the newspaper {``}Corriere della Sera{''}. We show that different crucial events can be explored, providing interesting insights into the narratives around such events, the main participants and their points of view.",
}
| In this work we introduce TimeFrame, an online platform to easily query and visualize events and participants extracted from document collections in Italian following a frame-based approach. The system allows users to select one or more events (frames) or event categories and to display their occurrences on a timeline. Different query types, from coarse to fine-grained, are available through the interface, enabling a time-bound analysis of large historical corpora. We present three use cases based on the full archive of news published in 1948 by the newspaper {``}Corriere della Sera{''}. We show that different crucial events can be explored, providing interesting insights into the narratives around such events, the main participants and their points of view. | [
"Lamorte, Davide",
"Rovera, Marco",
"Ferrara, Alfio",
"Tonelli, Sara"
] | TimeFrame: Querying and Visualizing Event Semantic Frames in Time | rfp-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rfp-1.3.bib | https://aclanthology.org/2024.rfp-1.3/ | @inproceedings{ivacic-etal-2024-comparing,
title = "Comparing News Framing of Migration Crises using Zero-Shot Classification",
author = "Iva{\v{c}}i{\v{c}}, Nikola and
Purver, Matthew and
Lind, Fabienne and
Pollak, Senja and
Boomgaarden, Hajo and
Bajt, Veronika",
editor = "Sommerauer, Pia and
Caselli, Tommaso and
Nissim, Malvina and
Remijnse, Levi and
Vossen, Piek",
booktitle = "Proceedings of the First Workshop on Reference, Framing, and Perspective @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rfp-1.3",
pages = "18--27",
abstract = "We present an experiment on classifying news frames in a language unseen by the learner, using zero-shot cross-lingual transfer learning. We used two pre-trained multilingual Transformer Encoder neural network models and tested with four specific news frames, investigating two approaches to the resulting multi-label task: Binary Relevance (treating each frame independently) and Label Power-set (predicting each possible combination of frames). We train our classifiers on an available annotated multilingual migration news dataset and test on an unseen Slovene language migration news corpus, first evaluating performance and then using the classifiers to analyse how media framed the news during the periods of Syria and Ukraine conflict-related migrations.",
}
| We present an experiment on classifying news frames in a language unseen by the learner, using zero-shot cross-lingual transfer learning. We used two pre-trained multilingual Transformer Encoder neural network models and tested with four specific news frames, investigating two approaches to the resulting multi-label task: Binary Relevance (treating each frame independently) and Label Power-set (predicting each possible combination of frames). We train our classifiers on an available annotated multilingual migration news dataset and test on an unseen Slovene language migration news corpus, first evaluating performance and then using the classifiers to analyse how media framed the news during the periods of Syria and Ukraine conflict-related migrations. | [
"Iva{\\v{c}}i{\\v{c}}, Nikola",
"Purver, Matthew",
"Lind, Fabienne",
"Pollak, Senja",
"Boomgaarden, Hajo",
"Bajt, Veronika"
] | Comparing News Framing of Migration Crises using Zero-Shot Classification | rfp-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rfp-1.4.bib | https://aclanthology.org/2024.rfp-1.4/ | @inproceedings{gemelli-minnema-2024-manosphrames,
title = "Manosphrames: exploring an {I}talian incel community through the lens of {NLP} and Frame Semantics",
author = "Gemelli, Sara and
Minnema, Gosse",
editor = "Sommerauer, Pia and
Caselli, Tommaso and
Nissim, Malvina and
Remijnse, Levi and
Vossen, Piek",
booktitle = "Proceedings of the First Workshop on Reference, Framing, and Perspective @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rfp-1.4",
pages = "28--39",
abstract = "We introduce a large corpus of comments extracted from an Italian online incel ({`}involuntary incelibate{'}) forum, a community of men who build a collective identity and anti-feminist ideology centered around their inability to find a sexual or romantic partner and who frequently use explicitly misogynistic language. Our corpus consists of 2.4K comments that have been manually collected, analyzed and annotated with topic labels, and a further 32K threads (300K comments) that have been automatically scraped and automatically annotated with FrameNet annotations. We show how large-scale frame semantic analysis can shed a light on what is discussed in the community, and introduce incel topic classification as a new NLP task and benchmark.",
}
| We introduce a large corpus of comments extracted from an Italian online incel ({`}involuntary incelibate{'}) forum, a community of men who build a collective identity and anti-feminist ideology centered around their inability to find a sexual or romantic partner and who frequently use explicitly misogynistic language. Our corpus consists of 2.4K comments that have been manually collected, analyzed and annotated with topic labels, and a further 32K threads (300K comments) that have been automatically scraped and automatically annotated with FrameNet annotations. We show how large-scale frame semantic analysis can shed a light on what is discussed in the community, and introduce incel topic classification as a new NLP task and benchmark. | [
"Gemelli, Sara",
"Minnema, Gosse"
] | Manosphrames: exploring an Italian incel community through the lens of NLP and Frame Semantics | rfp-1.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.rfp-1.5.bib | https://aclanthology.org/2024.rfp-1.5/ | @inproceedings{tan-bloem-2024-broadening,
title = "Broadening the coverage of computational representations of metaphor through Dynamic Metaphor Theory",
author = "Tan, Xiaojuan and
Bloem, Jelke",
editor = "Sommerauer, Pia and
Caselli, Tommaso and
Nissim, Malvina and
Remijnse, Levi and
Vossen, Piek",
booktitle = "Proceedings of the First Workshop on Reference, Framing, and Perspective @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.rfp-1.5",
pages = "40--50",
abstract = "Current approaches to computational metaphor processing typically incorporate static representations of metaphor. We aim to show that this limits the coverage of such systems. We take insights from dynamic metaphor theory and discuss how existing computational models of metaphor might benefit from representing the dynamics of metaphor when applied to the analysis of conflicting discourse. We propose that a frame-based approach to metaphor representation based on the model of YinYang Dynamics of Metaphoricity (YYDM) would pave the way to more comprehensive modeling of metaphor. In particular, the metaphoricity cues of the YYDM model could be used to address the task of dynamic metaphor identification. Frame-based modeling of dynamic metaphor would facilitate the computational analysis of perspectives in conflicting discourse, with potential applications in analyzing political discourse.",
}
| Current approaches to computational metaphor processing typically incorporate static representations of metaphor. We aim to show that this limits the coverage of such systems. We take insights from dynamic metaphor theory and discuss how existing computational models of metaphor might benefit from representing the dynamics of metaphor when applied to the analysis of conflicting discourse. We propose that a frame-based approach to metaphor representation based on the model of YinYang Dynamics of Metaphoricity (YYDM) would pave the way to more comprehensive modeling of metaphor. In particular, the metaphoricity cues of the YYDM model could be used to address the task of dynamic metaphor identification. Frame-based modeling of dynamic metaphor would facilitate the computational analysis of perspectives in conflicting discourse, with potential applications in analyzing political discourse. | [
"Tan, Xiaojuan",
"Bloem, Jelke"
] | Broadening the coverage of computational representations of metaphor through Dynamic Metaphor Theory | rfp-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.safety4convai-1.1.bib | https://aclanthology.org/2024.safety4convai-1.1/ | @inproceedings{addlesee-2024-grounding,
title = "Grounding {LLM}s to In-prompt Instructions: Reducing Hallucinations Caused by Static Pre-training Knowledge",
author = "Addlesee, Angus",
editor = "Dinkar, Tanvi and
Attanasio, Giuseppe and
Curry, Amanda Cercas and
Konstas, Ioannis and
Hovy, Dirk and
Rieser, Verena",
booktitle = "Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.safety4convai-1.1",
pages = "1--7",
abstract = "When deploying LLMs in certain commercial or research settings, domain specific knowledge must be explicitly provided within the prompt. This in-prompt knowledge can conflict with an LLM{'}s static world knowledge learned at pre-training, causing model hallucination (see examples in Table 1). In safety-critical settings, like healthcare and finance, these hallucinations can harm vulnerable users. We have curated a QA corpus containing information that LLMs could not have seen at pre-training. Using our corpus, we have probed various LLMs, manipulating both the prompt and the knowledge representation. We have found that our {`}Jodie{'} prompt consistently improves the model{'}s textual grounding to the given knowledge, and in-turn the overall answer accuracy. This is true in both the healthcare and finance domains - improving accuracy by up to 28{\%} (mean: 12{\%}). We have also identified that hierarchical and direct node-property graph structures could lead to more interpretable and controllable systems that provide a natural language interface with real-time in-domain knowledge. Our corpus will enable further work on this critical challenge.",
}
| When deploying LLMs in certain commercial or research settings, domain specific knowledge must be explicitly provided within the prompt. This in-prompt knowledge can conflict with an LLM{'}s static world knowledge learned at pre-training, causing model hallucination (see examples in Table 1). In safety-critical settings, like healthcare and finance, these hallucinations can harm vulnerable users. We have curated a QA corpus containing information that LLMs could not have seen at pre-training. Using our corpus, we have probed various LLMs, manipulating both the prompt and the knowledge representation. We have found that our {`}Jodie{'} prompt consistently improves the model{'}s textual grounding to the given knowledge, and in-turn the overall answer accuracy. This is true in both the healthcare and finance domains - improving accuracy by up to 28{\%} (mean: 12{\%}). We have also identified that hierarchical and direct node-property graph structures could lead to more interpretable and controllable systems that provide a natural language interface with real-time in-domain knowledge. Our corpus will enable further work on this critical challenge. | [
"Addlesee, Angus"
] | Grounding LLMs to In-prompt Instructions: Reducing Hallucinations Caused by Static Pre-training Knowledge | safety4convai-1.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.safety4convai-1.2.bib | https://aclanthology.org/2024.safety4convai-1.2/ | @inproceedings{parrish-etal-2024-diversity,
title = "Diversity-Aware Annotation for Conversational {AI} Safety",
author = "Parrish, Alicia and
Prabhakaran, Vinodkumar and
Aroyo, Lora and
D{\'\i}az, Mark and
Homan, Christopher M. and
Serapio-Garc{\'\i}a, Greg and
Taylor, Alex S. and
Wang, Ding",
editor = "Dinkar, Tanvi and
Attanasio, Giuseppe and
Curry, Amanda Cercas and
Konstas, Ioannis and
Hovy, Dirk and
Rieser, Verena",
booktitle = "Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.safety4convai-1.2",
pages = "8--15",
abstract = "How people interpret content is deeply influenced by their socio-cultural backgrounds and lived experiences. This is especially crucial when evaluating AI systems for safety, where accounting for such diversity in interpretations and potential impacts on human users will make them both more successful and inclusive. While recent work has demonstrated the importance of diversity in human ratings that underlie AI pipelines, effective and efficient ways to incorporate diverse perspectives in human data annotation pipelines is still largely elusive. In this paper, we discuss the primary challenges faced in incorporating diversity into model evaluations, and propose a practical diversity-aware annotation approach. Using an existing dataset with highly parallel safety annotations, we take as a test case a policy that prioritizes recall of safety issues, and demonstrate that our diversity-aware approach can efficiently obtain a higher recall of safety issues flagged by minoritized rater groups without hurting overall precision.",
}
| How people interpret content is deeply influenced by their socio-cultural backgrounds and lived experiences. This is especially crucial when evaluating AI systems for safety, where accounting for such diversity in interpretations and potential impacts on human users will make them both more successful and inclusive. While recent work has demonstrated the importance of diversity in human ratings that underlie AI pipelines, effective and efficient ways to incorporate diverse perspectives in human data annotation pipelines is still largely elusive. In this paper, we discuss the primary challenges faced in incorporating diversity into model evaluations, and propose a practical diversity-aware annotation approach. Using an existing dataset with highly parallel safety annotations, we take as a test case a policy that prioritizes recall of safety issues, and demonstrate that our diversity-aware approach can efficiently obtain a higher recall of safety issues flagged by minoritized rater groups without hurting overall precision. | [
"Parrish, Alicia",
"Prabhakaran, Vinodkumar",
"Aroyo, Lora",
"D{\\'\\i}az, Mark",
"Homan, Christopher M.",
"Serapio-Garc{\\'\\i}a, Greg",
"Taylor, Alex S.",
"Wang, Ding"
] | Diversity-Aware Annotation for Conversational AI Safety | safety4convai-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.safety4convai-1.3.bib | https://aclanthology.org/2024.safety4convai-1.3/ | @inproceedings{ajayi-etal-2024-using,
title = "Using Information Retrieval Techniques to Automatically Repurpose Existing Dialogue Datasets for Safe Chatbot Development",
author = "Ajayi, Tunde Oluwaseyi and
Negi, Gaurav and
Arcan, Mihael and
Buitelaar, Paul",
editor = "Dinkar, Tanvi and
Attanasio, Giuseppe and
Curry, Amanda Cercas and
Konstas, Ioannis and
Hovy, Dirk and
Rieser, Verena",
booktitle = "Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.safety4convai-1.3",
pages = "16--27",
abstract = "There has been notable progress in the development of open-domain dialogue systems (chatbots) especially with the rapid advancement of the capabilities of Large Language Models. Chatbots excel at holding conversations in a manner that keeps a user interested and engaged. However, their responses can be unsafe, as they can respond in an offensive manner or offer harmful professional advice. As a way to mitigate this issue, recent work crowdsource datasets with exemplary responses or annotate dialogue safety datasets, which are relatively scarce compared to casual dialogues. Despite the quality of data obtained from crowdsourcing, it can be expensive and time consuming. This work proposes an effective pipeline, using information retrieval, to automatically repurpose existing dialogue datasets for safe chatbot development, as a way to address the aforementioned challenges. We select an existing dialogue dataset, revise its unsafe responses, as a way to obtain a dataset with safer responses to unsafe user inputs. We then fine-tune dialogue models on the original and revised datasets and generate responses to evaluate the safeness of the models.",
}
| There has been notable progress in the development of open-domain dialogue systems (chatbots) especially with the rapid advancement of the capabilities of Large Language Models. Chatbots excel at holding conversations in a manner that keeps a user interested and engaged. However, their responses can be unsafe, as they can respond in an offensive manner or offer harmful professional advice. As a way to mitigate this issue, recent work crowdsource datasets with exemplary responses or annotate dialogue safety datasets, which are relatively scarce compared to casual dialogues. Despite the quality of data obtained from crowdsourcing, it can be expensive and time consuming. This work proposes an effective pipeline, using information retrieval, to automatically repurpose existing dialogue datasets for safe chatbot development, as a way to address the aforementioned challenges. We select an existing dialogue dataset, revise its unsafe responses, as a way to obtain a dataset with safer responses to unsafe user inputs. We then fine-tune dialogue models on the original and revised datasets and generate responses to evaluate the safeness of the models. | [
"Ajayi, Tunde Oluwaseyi",
"Negi, Gaurav",
"Arcan, Mihael",
"Buitelaar, Paul"
] | Using Information Retrieval Techniques to Automatically Repurpose Existing Dialogue Datasets for Safe Chatbot Development | safety4convai-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.safety4convai-1.4.bib | https://aclanthology.org/2024.safety4convai-1.4/ | @inproceedings{dwivedi-yu-2024-fairpair,
title = "{F}air{P}air: A Robust Evaluation of Biases in Language Models through Paired Perturbations",
author = "Dwivedi-Yu, Jane",
editor = "Dinkar, Tanvi and
Attanasio, Giuseppe and
Curry, Amanda Cercas and
Konstas, Ioannis and
Hovy, Dirk and
Rieser, Verena",
booktitle = "Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.safety4convai-1.4",
pages = "28--39",
abstract = "The accurate evaluation of differential treatment in language models to specific groups is critical to ensuring a positive and safe user experience. An ideal evaluation should have the properties of being robust, extendable to new groups or attributes, and being able to capture biases that appear in typical usage (rather than just extreme, rare cases). Relatedly, bias evaluation should surface not only egregious biases but also ones that are subtle and commonplace, such as a likelihood for talking about appearances with regard to women. We present FairPair, an evaluation framework for assessing differential treatment that occurs during ordinary usage. FairPair operates through counterfactual pairs, but crucially, the paired continuations are grounded in the same demographic group, which ensures equivalent comparison. Additionally, unlike prior work, our method factors in the inherent variability that comes from the generation process itself by measuring the sampling variability. We present an evaluation of several commonly used generative models and a qualitative analysis that indicates a preference for discussing family and hobbies with regard to women.",
}
| The accurate evaluation of differential treatment in language models to specific groups is critical to ensuring a positive and safe user experience. An ideal evaluation should have the properties of being robust, extendable to new groups or attributes, and being able to capture biases that appear in typical usage (rather than just extreme, rare cases). Relatedly, bias evaluation should surface not only egregious biases but also ones that are subtle and commonplace, such as a likelihood for talking about appearances with regard to women. We present FairPair, an evaluation framework for assessing differential treatment that occurs during ordinary usage. FairPair operates through counterfactual pairs, but crucially, the paired continuations are grounded in the same demographic group, which ensures equivalent comparison. Additionally, unlike prior work, our method factors in the inherent variability that comes from the generation process itself by measuring the sampling variability. We present an evaluation of several commonly used generative models and a qualitative analysis that indicates a preference for discussing family and hobbies with regard to women. | [
"Dwivedi-Yu, Jane"
] | FairPair: A Robust Evaluation of Biases in Language Models through Paired Perturbations | safety4convai-1.4 | Poster | 2404.06619 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.safety4convai-1.5.bib | https://aclanthology.org/2024.safety4convai-1.5/ | @inproceedings{pantazopoulos-etal-2024-learning,
title = "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes {LLM}s More Prone To Jailbreak Attacks",
author = "Pantazopoulos, Georgios and
Parekh, Amit and
Nikandrou, Malvina and
Suglia, Alessandro",
editor = "Dinkar, Tanvi and
Attanasio, Giuseppe and
Curry, Amanda Cercas and
Konstas, Ioannis and
Hovy, Dirk and
Rieser, Verena",
booktitle = "Proceedings of Safety4ConvAI: The Third Workshop on Safety for Conversational AI @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.safety4convai-1.5",
pages = "40--51",
abstract = "Augmenting Large Language Models (LLMs) with image-understanding capabilities has resulted in a boom of high-performing Vision-Language models (VLMs). While studying the alignment of LLMs to human values has received widespread attention, the safety of VLMs has not received the same attention. In this paper, we explore the impact of jailbreaking on three state-of-the-art VLMs, each using a distinct modeling approach. By comparing each VLM to their respective LLM backbone, we find that each VLM is more susceptible to jailbreaking. We consider this as an undesirable outcome from visual instruction-tuning, which imposes a forgetting effect on an LLM{'}s safety guardrails. Therefore, we provide recommendations for future work based on evaluation strategies that aim to highlight the weaknesses of a VLM, as well as take safety measures into account during visual instruction tuning.",
}
| Augmenting Large Language Models (LLMs) with image-understanding capabilities has resulted in a boom of high-performing Vision-Language models (VLMs). While studying the alignment of LLMs to human values has received widespread attention, the safety of VLMs has not received the same attention. In this paper, we explore the impact of jailbreaking on three state-of-the-art VLMs, each using a distinct modeling approach. By comparing each VLM to their respective LLM backbone, we find that each VLM is more susceptible to jailbreaking. We consider this as an undesirable outcome from visual instruction-tuning, which imposes a forgetting effect on an LLM{'}s safety guardrails. Therefore, we provide recommendations for future work based on evaluation strategies that aim to highlight the weaknesses of a VLM, as well as take safety measures into account during visual instruction tuning. | [
"Pantazopoulos, Georgios",
"Parekh, Amit",
"Nik",
"rou, Malvina",
"Suglia, Aless",
"ro"
] | Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks | safety4convai-1.5 | Poster | 2405.04403 | [
"https://github.com/gpantaz/vl_jailbreak"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.signlang-1.1.bib | https://aclanthology.org/2024.signlang-1.1/ | @inproceedings{battisti-etal-2024-advancing,
title = "Advancing Annotation for Continuous Data in {S}wiss {G}erman Sign Language",
author = "Battisti, Alessia and
Tissi, Katja and
Sidler-Miserez, Sandra and
Ebling, Sarah",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.1",
pages = "1--12",
}
| No abstract found | [
"Battisti, Alessia",
"Tissi, Katja",
"Sidler-Miserez, S",
"ra",
"Ebling, Sarah"
] | Advancing Annotation for Continuous Data in Swiss German Sign Language | signlang-1.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.2.bib | https://aclanthology.org/2024.signlang-1.2/ | @inproceedings{battisti-etal-2024-person,
title = "Person Identification from Pose Estimates in Sign Language",
author = {Battisti, Alessia and
van den Bold, Emma and
G{\"o}hring, Anne and
Holzknecht, Franz and
Ebling, Sarah},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.2",
pages = "13--25",
}
| No abstract found | [
"Battisti, Alessia",
"van den Bold, Emma",
"G{\\\"o}hring, Anne",
"Holzknecht, Franz",
"Ebling, Sarah"
] | Person Identification from Pose Estimates in Sign Language | signlang-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.3.bib | https://aclanthology.org/2024.signlang-1.3/ | @inproceedings{bono-etal-2024-data,
title = "Data Integration, Annotation, and Transcription Methods for Sign Language Dialogue with Latency in Videoconferencing",
author = "Bono, Mayumi and
Okada, Tomohiro and
Skobov, Victor and
Adam, Robert",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.3",
pages = "26--35",
}
| No abstract found | [
"Bono, Mayumi",
"Okada, Tomohiro",
"Skobov, Victor",
"Adam, Robert"
] | Data Integration, Annotation, and Transcription Methods for Sign Language Dialogue with Latency in Videoconferencing | signlang-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.4.bib | https://aclanthology.org/2024.signlang-1.4/ | @inproceedings{borstell-2024-evaluating,
title = "Evaluating the Alignment of Utterances in the {S}wedish {S}ign {L}anguage Corpus",
author = {B{\"o}rstell, Carl},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.4",
pages = "36--45",
}
| No abstract found | [
"B{\\\"o}rstell, Carl"
] | Evaluating the Alignment of Utterances in the Swedish Sign Language Corpus | signlang-1.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.5.bib | https://aclanthology.org/2024.signlang-1.5/ | @inproceedings{borstell-2024-approach,
title = "How to Approach Lexical Variation in Sign Language Corpora",
author = {B{\"o}rstell, Carl},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.5",
pages = "46--53",
}
| No abstract found | [
"B{\\\"o}rstell, Carl"
] | How to Approach Lexical Variation in Sign Language Corpora | signlang-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.6.bib | https://aclanthology.org/2024.signlang-1.6/ | @inproceedings{desai-etal-2024-systemic,
title = "Systemic Biases in Sign Language {AI} Research: A Deaf-Led Call to Reevaluate Research Agendas",
author = "Desai, Aashaka and
De Meulder, Maartje and
Hochgesang, Julie A. and
Kocab, Annemarie and
Lu, Alex X.",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.6",
pages = "54--65",
}
| No abstract found | [
"Desai, Aashaka",
"De Meulder, Maartje",
"Hochgesang, Julie A.",
"Kocab, Annemarie",
"Lu, Alex X."
] | Systemic Biases in Sign Language AI Research: A Deaf-Led Call to Reevaluate Research Agendas | signlang-1.6 | Poster | 2403.02563 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.signlang-1.7.bib | https://aclanthology.org/2024.signlang-1.7/ | @inproceedings{esselink-etal-2024-evaluating,
title = "Evaluating Inter-Annotator Agreement for Non-Manual Markers in Sign Languages",
author = "Esselink, Lyke D. and
Oomen, Marloes and
Roelofsen, Floris",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.7",
pages = "66--76",
}
| No abstract found | [
"Esselink, Lyke D.",
"Oomen, Marloes",
"Roelofsen, Floris"
] | Evaluating Inter-Annotator Agreement for Non-Manual Markers in Sign Languages | signlang-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.8.bib | https://aclanthology.org/2024.signlang-1.8/ | @inproceedings{filhol-von-ascheberg-2024-software,
title = "A software editor for the {AZVD} graphical Sign Language representation system",
author = "Filhol, Michael and
von Ascheberg, Thomas",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.8",
pages = "77--85",
}
| No abstract found | [
"Filhol, Michael",
"von Ascheberg, Thomas"
] | A software editor for the AZVD graphical Sign Language representation system | signlang-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.9.bib | https://aclanthology.org/2024.signlang-1.9/ | @inproceedings{gavrilescu-etal-2024-content,
title = "Content Questions in Sign Language {--} From theory to language description via corpus, experiments, and fieldwork",
author = "Gavrilescu, Robert and
Geraci, Carlo and
Mesch, Johanna",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.9",
pages = "86--94",
}
| No abstract found | [
"Gavrilescu, Robert",
"Geraci, Carlo",
"Mesch, Johanna"
] | Content Questions in Sign Language – From theory to language description via corpus, experiments, and fieldwork | signlang-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.10.bib | https://aclanthology.org/2024.signlang-1.10/ | @inproceedings{halbout-etal-2024-matignon,
title = "Matignon-{LSF}: a Large Corpus of Interpreted {F}rench {S}ign {L}anguage",
author = "Halbout, Julie and
Fabre, Diandra and
Ouakrim, Yanis and
Lascar, Julie and
Braffort, Annelies and
Gouiff{\`e}s, Mich{\`e}le and
Beautemps, Denis",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.10",
pages = "95--101",
}
| No abstract found | [
"Halbout, Julie",
"Fabre, Di",
"ra",
"Ouakrim, Yanis",
"Lascar, Julie",
"Braffort, Annelies",
"Gouiff{\\`e}s, Mich{\\`e}le",
"Beautemps, Denis"
] | Matignon-LSF: a Large Corpus of Interpreted French Sign Language | signlang-1.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.11.bib | https://aclanthology.org/2024.signlang-1.11/ | @inproceedings{hall-etal-2024-phonological,
title = "Phonological Transcription of the {C}anadian Dictionary of {ASL} as a Language Resource",
author = "Hall, Kathleen Currie and
Asthana, Anushka and
Reid, Maggie and
Gao, Yiran and
Hobby, Grace and
Tkachman, Oksana and
Vesik, Kaili",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.11",
pages = "102--110",
}
| No abstract found | [
"Hall, Kathleen Currie",
"Asthana, Anushka",
"Reid, Maggie",
"Gao, Yiran",
"Hobby, Grace",
"Tkachman, Oksana",
"Vesik, Kaili"
] | Phonological Transcription of the Canadian Dictionary of ASL as a Language Resource | signlang-1.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.12.bib | https://aclanthology.org/2024.signlang-1.12/ | @inproceedings{imashev-etal-2024-retrospective,
title = "Retrospective of {K}azakh-{R}ussian {S}ign {L}anguage Corpus Formation",
author = "Imashev, Alfarabi and
Kydyrbekova, Aigerim and
Mukushev, Medet and
Sandygulova, Anara and
Islam, Shynggys and
Israilov, Khassan and
Makazhanov, Aibek and
Yessenbayev, Zhandos",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.12",
pages = "111--122",
}
| No abstract found | [
"Imashev, Alfarabi",
"Kydyrbekova, Aigerim",
"Mukushev, Medet",
"S",
"ygulova, Anara",
"Islam, Shynggys",
"Israilov, Khassan",
"Makazhanov, Aibek",
"Yessenbayev, Zh",
"os"
] | Retrospective of Kazakh-Russian Sign Language Corpus Formation | signlang-1.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.13.bib | https://aclanthology.org/2024.signlang-1.13/ | @inproceedings{inoue-etal-2024-enhancing,
title = "Enhancing Syllabic Component Classification in {J}apanese {S}ign {L}anguage by Pre-training on Non-{J}apanese {S}ign {L}anguage Data",
author = "Inoue, Jundai and
Miwa, Makoto and
Sasaki, Yutaka and
Hara, Daisuke",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.13",
pages = "123--130",
}
| No abstract found | [
"Inoue, Jundai",
"Miwa, Makoto",
"Sasaki, Yutaka",
"Hara, Daisuke"
] | Enhancing Syllabic Component Classification in Japanese Sign Language by Pre-training on Non-Japanese Sign Language Data | signlang-1.13 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.14.bib | https://aclanthology.org/2024.signlang-1.14/ | @inproceedings{isard-2024-building,
title = "Building Your Query Step by Step: A Query Wizard for the {MY} {DGS} {--} {ANNIS} Portal of the {DGS} {C}orpus",
author = "Isard, Amy",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.14",
pages = "131--139",
}
| No abstract found | [
"Isard, Amy"
] | Building Your Query Step by Step: A Query Wizard for the MY DGS – ANNIS Portal of the DGS Corpus | signlang-1.14 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.15.bib | https://aclanthology.org/2024.signlang-1.15/ | @inproceedings{khan-etal-2024-investigating,
title = "Investigating Motion History Images and Convolutional Neural Networks for Isolated {I}rish {S}ign {L}anguage Fingerspelling Recognition",
author = "Khan, Sarmad and
Murtagh, Irene and
McLoughlin, Simon D.",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.15",
pages = "140--146",
}
| No abstract found | [
"Khan, Sarmad",
"Murtagh, Irene",
"McLoughlin, Simon D."
] | Investigating Motion History Images and Convolutional Neural Networks for Isolated Irish Sign Language Fingerspelling Recognition | signlang-1.15 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.16.bib | https://aclanthology.org/2024.signlang-1.16/ | @inproceedings{kim-etal-2024-shedding,
title = "Shedding Light on the Underexplored: Tackling the Minor Sign Language Research Topics",
author = "Kim, Jung-Ho and
Ko, Changyong and
Huerta-Enochian, Mathew and
Ko, Seung Yong",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.16",
pages = "147--158",
}
| No abstract found | [
"Kim, Jung-Ho",
"Ko, Changyong",
"Huerta-Enochian, Mathew",
"Ko, Seung Yong"
] | Shedding Light on the Underexplored: Tackling the Minor Sign Language Research Topics | signlang-1.16 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.17.bib | https://aclanthology.org/2024.signlang-1.17/ | @inproceedings{kimmelman-etal-2024-headshakes,
title = "Headshakes in {NGT}: Relation between Phonetic Properties {\&} Linguistic Functions",
author = "Kimmelman, Vadim and
Oomen, Marloes and
Pfau, Roland",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.17",
pages = "159--167",
}
| No abstract found | [
"Kimmelman, Vadim",
"Oomen, Marloes",
"Pfau, Rol",
""
] | Headshakes in NGT: Relation between Phonetic Properties & Linguistic Functions | signlang-1.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.18.bib | https://aclanthology.org/2024.signlang-1.18/ | @inproceedings{kimmelman-etal-2024-nonmanual,
title = "Nonmanual Marking of Questions in {B}alinese Homesign Interactions: a Computer-Vision Assisted Analysis",
author = "Kimmelman, Vadim and
Price, Ari and
Safar, Josefina and
de Vos, Connie and
Bulla, Jan",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.18",
pages = "168--177",
}
| No abstract found | [
"Kimmelman, Vadim",
"Price, Ari",
"Safar, Josefina",
"de Vos, Connie",
"Bulla, Jan"
] | Nonmanual Marking of Questions in Balinese Homesign Interactions: a Computer-Vision Assisted Analysis | signlang-1.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.19.bib | https://aclanthology.org/2024.signlang-1.19/ | @inproceedings{klomp-etal-2024-extension,
title = "An Extension of the {NGT} Dataset in {G}lobal {S}ignbank",
author = "Klomp, Ulrika and
Gierman, Lisa and
Manders, Pieter and
Nauta, Ellen and
Otterspeer, Gom{\`e}r and
Pelupessy, Ray and
Stern, Galya and
Venter, Dalene and
Wubbolts, Casper and
Oomen, Marloes and
Roelofsen, Floris",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.19",
pages = "178--183",
}
| No abstract found | [
"Klomp, Ulrika",
"Gierman, Lisa",
"M",
"ers, Pieter",
"Nauta, Ellen",
"Otterspeer, Gom{\\`e}r",
"Pelupessy, Ray",
"Stern, Galya",
"Venter, Dalene",
"Wubbolts, Casper",
"Oomen, Marloes",
"Roelofsen, Floris"
] | An Extension of the NGT Dataset in Global Signbank | signlang-1.19 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.20.bib | https://aclanthology.org/2024.signlang-1.20/ | @inproceedings{konrad-etal-2024-corpus,
title = "Corpus {\`a} la carte {--} Improving Access to the {P}ublic {DGS} {C}orpus",
author = {Konrad, Reiner and
Hanke, Thomas and
Isard, Amy and
Schulder, Marc and
K{\"o}nig, Lutz and
Bleicken, Julian and
B{\"o}se, Oliver},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.20",
pages = "184--193",
}
| No abstract found | [
"Konrad, Reiner",
"Hanke, Thomas",
"Isard, Amy",
"Schulder, Marc",
"K{\\\"o}nig, Lutz",
"Bleicken, Julian",
"B{\\\"o}se, Oliver"
] | Corpus à la carte – Improving Access to the Public DGS Corpus | signlang-1.20 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.21.bib | https://aclanthology.org/2024.signlang-1.21/ | @inproceedings{langer-etal-2024-introducing,
title = "Introducing the {DW}-{DGS} {--} The Digital Dictionary of {DGS}",
author = {Langer, Gabriele and
M{\"u}ller, Anke and
W{\"a}hl, Sabrina and
Otte, Felicitas and
Sepke, Lea and
Hanke, Thomas},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.21",
pages = "194--203",
}
| No abstract found | [
"Langer, Gabriele",
"M{\\\"u}ller, Anke",
"W{\\\"a}hl, Sabrina",
"Otte, Felicitas",
"Sepke, Lea",
"Hanke, Thomas"
] | Introducing the DW-DGS – The Digital Dictionary of DGS | signlang-1.21 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.22.bib | https://aclanthology.org/2024.signlang-1.22/ | @inproceedings{lascar-etal-2024-annotation,
title = "Annotation of {LSF} subtitled videos without a pre-existing dictionary",
author = "Lascar, Julie and
Gouiff{\`e}s, Mich{\`e}le and
Braffort, Annelies and
Danet, Claire",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.22",
pages = "204--212",
}
| No abstract found | [
"Lascar, Julie",
"Gouiff{\\`e}s, Mich{\\`e}le",
"Braffort, Annelies",
"Danet, Claire"
] | Annotation of LSF subtitled videos without a pre-existing dictionary | signlang-1.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.23.bib | https://aclanthology.org/2024.signlang-1.23/ | @inproceedings{malaia-etal-2024-capturing,
title = "Capturing Motion: Using Radar to Build Better Sign Language Corpora",
author = "Malaia, Evie and
Borneman, Joshua and
Gurbuz, Sevgi",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.23",
pages = "213--218",
}
| No abstract found | [
"Malaia, Evie",
"Borneman, Joshua",
"Gurbuz, Sevgi"
] | Capturing Motion: Using Radar to Build Better Sign Language Corpora | signlang-1.23 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.24.bib | https://aclanthology.org/2024.signlang-1.24/ | @inproceedings{malmberg-etal-2024-exploring,
title = "Exploring Latent Sign Language Representations with Isolated Signs, Sentences and In-the-Wild Data",
author = "Malmberg, Fredrik and
Klezovich, Anna and
Mesch, Johanna and
Beskow, Jonas",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.24",
pages = "219--224",
}
| No abstract found | [
"Malmberg, Fredrik",
"Klezovich, Anna",
"Mesch, Johanna",
"Beskow, Jonas"
] | Exploring Latent Sign Language Representations with Isolated Signs, Sentences and In-the-Wild Data | signlang-1.24 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.25.bib | https://aclanthology.org/2024.signlang-1.25/ | @inproceedings{martinez-guevara-curiel-2024-quantitative,
title = "Quantitative Analysis of Hand Locations in both Sign Language and Non-linguistic Gesture Videos",
author = "Mart{\'\i}nez-Guevara, Niels and
Curiel, Arturo",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.25",
pages = "225--234",
}
| No abstract found | [
"Mart{\\'\\i}nez-Guevara, Niels",
"Curiel, Arturo"
] | Quantitative Analysis of Hand Locations in both Sign Language and Non-linguistic Gesture Videos | signlang-1.25 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.26.bib | https://aclanthology.org/2024.signlang-1.26/ | @inproceedings{martinod-filhol-2024-formal,
title = "Formal Representation of Interrogation in {F}rench {S}ign {L}anguage",
author = "Martinod, Emmanuella and
Filhol, Michael",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.26",
pages = "235--243",
}
| No abstract found | [
"Martinod, Emmanuella",
"Filhol, Michael"
] | Formal Representation of Interrogation in French Sign Language | signlang-1.26 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.27.bib | https://aclanthology.org/2024.signlang-1.27/ | @inproceedings{mcdonald-etal-2024-multilingual,
title = "Multilingual Synthesis of Depictions through Structured Descriptions of Sign: An Initial Case Study",
author = "McDonald, John and
Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Wolfe, Rosalee",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.27",
pages = "244--253",
}
| No abstract found | [
"McDonald, John",
"Efthimiou, Eleni",
"Fotinea, Stavroula-Evita",
"Wolfe, Rosalee"
] | Multilingual Synthesis of Depictions through Structured Descriptions of Sign: An Initial Case Study | signlang-1.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.28.bib | https://aclanthology.org/2024.signlang-1.28/ | @inproceedings{mesch-etal-2024-swedish,
title = "{S}wedish {S}ign {L}anguage Resources from a User{'}s Perspective",
author = {Mesch, Johanna and
Bj{\"o}rkstrand, Thomas and
Balkstam, Eira and
Hansson, Patrick and
Riemer Kankkonen, Nikolaus},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.28",
pages = "254--261",
}
| No abstract found | [
"Mesch, Johanna",
"Bj{\\\"o}rkstr",
", Thomas",
"Balkstam, Eira",
"Hansson, Patrick",
"Riemer Kankkonen, Nikolaus"
] | Swedish Sign Language Resources from a User's Perspective | signlang-1.28 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.29.bib | https://aclanthology.org/2024.signlang-1.29/ | @inproceedings{miyazaki-etal-2024-sign,
title = "Sign Language Translation with Gloss Pair Encoding",
author = "Miyazaki, Taro and
Tan, Sihan and
Uchida, Tsubasa and
Kaneko, Hiroyuki",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.29",
pages = "262--268",
}
| No abstract found | [
"Miyazaki, Taro",
"Tan, Sihan",
"Uchida, Tsubasa",
"Kaneko, Hiroyuki"
] | Sign Language Translation with Gloss Pair Encoding | signlang-1.29 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.30.bib | https://aclanthology.org/2024.signlang-1.30/ | @inproceedings{otterspeer-etal-2024-signcollect,
title = "{S}ign{C}ollect: A {`}Touchless{'} Pipeline for Constructing Large-scale Sign Language Repositories",
author = "Otterspeer, Gom{\`e}r and
Klomp, Ulrika and
Roelofsen, Floris",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.30",
pages = "269--275",
}
| No abstract found | [
"Otterspeer, Gom{\\`e}r",
"Klomp, Ulrika",
"Roelofsen, Floris"
] | SignCollect: A `Touchless' Pipeline for Constructing Large-scale Sign Language Repositories | signlang-1.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.31.bib | https://aclanthology.org/2024.signlang-1.31/ | @inproceedings{picron-etal-2024-easier,
title = "The {EASIER} Mobile Application and Avatar End-User Evaluation Methodology",
author = "Picron, Frankie and
Van Landuyt, Davy and
Omardeen, Rehana and
Efthimiou, Eleni and
Wolfe, Rosalee and
Fotinea, Stavroula-Evita and
Goulas, Theodore and
Tismer, Christian and
Kopf, Maria and
Hanke, Thomas",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.31",
pages = "276--281",
}
| No abstract found | [
"Picron, Frankie",
"Van L",
"uyt, Davy",
"Omardeen, Rehana",
"Efthimiou, Eleni",
"Wolfe, Rosalee",
"Fotinea, Stavroula-Evita",
"Goulas, Theodore",
"Tismer, Christian",
"Kopf, Maria",
"Hanke, Thomas"
] | The EASIER Mobile Application and Avatar End-User Evaluation Methodology | signlang-1.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.32.bib | https://aclanthology.org/2024.signlang-1.32/ | @inproceedings{rathmann-etal-2024-visuolab,
title = "{V}isuo{L}ab: Building a sign language multilingual, multimodal and multifunctional platform",
author = "Rathmann, Christian and
de Quadros, Ronice Muller and
Gei{\ss}ler, Thomas and
Peters, Christian and
Fernandes, Francisco and
Loio, Milene Peixer and
Fran{\c{c}}a, Diego",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.32",
pages = "282--289",
}
| No abstract found | [
"Rathmann, Christian",
"de Quadros, Ronice Muller",
"Gei{\\ss}ler, Thomas",
"Peters, Christian",
"Fern",
"es, Francisco",
"Loio, Milene Peixer",
"Fran{\\c{c}}a, Diego"
] | VisuoLab: Building a sign language multilingual, multimodal and multifunctional platform | signlang-1.32 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.33.bib | https://aclanthology.org/2024.signlang-1.33/ | @inproceedings{ranum-etal-2024-3d,
title = "3{D}-{LEX} v1.0 {--} 3{D} Lexicons for {A}merican {S}ign {L}anguage and {S}ign {L}anguage of the {N}etherlands",
author = "Ranum, Oline and
Otterspeer, Gom{\`e}r and
Andersen, Jari I. and
Belleman, Robert G. and
Roelofsen, Floris",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.33",
pages = "290--301",
}
| No abstract found | [
"Ranum, Oline",
"Otterspeer, Gom{\\`e}r",
"Andersen, Jari I.",
"Belleman, Robert G.",
"Roelofsen, Floris"
] | 3D-LEX v1.0 – 3D Lexicons for American Sign Language and Sign Language of the Netherlands | signlang-1.33 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.34.bib | https://aclanthology.org/2024.signlang-1.34/ | @inproceedings{de-quadros-etal-2024-signbank,
title = "{S}ignbank 2.0 of Sign Languages: Easy to Administer, Easy to Use, Easy to Share",
author = "de Quadros, Ronice Muller and
Rathmann, Christian and
Romanek, Peter Zal{\'a}n and
Fernandes, Francisco and
Cond{\'e}, Sther",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.34",
pages = "302--314",
}
| No abstract found | [
"de Quadros, Ronice Muller",
"Rathmann, Christian",
"Romanek, Peter Zal{\\'a}n",
"Fern",
"es, Francisco",
"Cond{\\'e}, Sther"
] | Signbank 2.0 of Sign Languages: Easy to Administer, Easy to Use, Easy to Share | signlang-1.34 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.35.bib | https://aclanthology.org/2024.signlang-1.35/ | @inproceedings{reverdy-etal-2024-stk,
title = "{STK} {LSF}: A Motion Capture Dataset in {LSF} for {S}ign{T}o{K}ids",
author = "Reverdy, Cl{\'e}ment and
Gibet, Sylvie and
Le Naour, Thibaut",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.35",
pages = "315--322",
}
| No abstract found | [
"Reverdy, Cl{\\'e}ment",
"Gibet, Sylvie",
"Le Naour, Thibaut"
] | STK LSF: A Motion Capture Dataset in LSF for SignToKids | signlang-1.35 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.36.bib | https://aclanthology.org/2024.signlang-1.36/ | @inproceedings{roh-etal-2024-preprocessing,
title = "Preprocessing Mediapipe Keypoints with Keypoint Reconstruction and Anchors for Isolated Sign Language Recognition",
author = "Roh, Kyunggeun and
Lee, Huije and
Hwang, Eui Jun and
Cho, Sukmin and
Park, Jong C.",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.36",
pages = "323--334",
}
| No abstract found | [
"Roh, Kyunggeun",
"Lee, Huije",
"Hwang, Eui Jun",
"Cho, Sukmin",
"Park, Jong C."
] | Preprocessing Mediapipe Keypoints with Keypoint Reconstruction and Anchors for Isolated Sign Language Recognition | signlang-1.36 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.37.bib | https://aclanthology.org/2024.signlang-1.37/ | @inproceedings{sahin-gokgoz-2024-decoding,
title = "Decoding Sign Languages: The {SL}-{FE} Framework for Phonological Analysis and Automated Annotation",
author = {{\c{S}}ahin, Karahan and
G{\"o}kg{\"o}z, Kadir},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.37",
pages = "335--342",
}
| No abstract found | [
"{\\c{S}}ahin, Karahan",
"G{\\\"o}kg{\\\"o}z, Kadir"
] | Decoding Sign Languages: The SL-FE Framework for Phonological Analysis and Automated Annotation | signlang-1.37 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.38.bib | https://aclanthology.org/2024.signlang-1.38/ | @inproceedings{schulder-etal-2024-signs,
title = "Signs and Synonymity: Continuing Development of the Multilingual Sign Language {W}ordnet",
author = {Schulder, Marc and
Bigeard, Sam and
Kopf, Maria and
Hanke, Thomas and
Kuder, Anna and
W{\'o}jcicka, Joanna and
Mesch, Johanna and
Bj{\"o}rkstrand, Thomas and
Vacalopoulou, Anna and
Vasilaki, Kyriaki and
Goulas, Theodore and
Fotinea, Stavroula-Evita and
Efthimiou, Eleni},
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.38",
pages = "343--353",
}
| No abstract found | [
"Schulder, Marc",
"Bigeard, Sam",
"Kopf, Maria",
"Hanke, Thomas",
"Kuder, Anna",
"W{\\'o}jcicka, Joanna",
"Mesch, Johanna",
"Bj{\\\"o}rkstr",
", Thomas",
"Vacalopoulou, Anna",
"Vasilaki, Kyriaki",
"Goulas, Theodore",
"Fotinea, Stavroula-Evita",
"Efthimiou, Eleni"
] | Signs and Synonymity: Continuing Development of the Multilingual Sign Language Wordnet | signlang-1.38 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.39.bib | https://aclanthology.org/2024.signlang-1.39/ | @inproceedings{sharma-etal-2024-facial,
title = "Facial Expressions for Sign Language Synthesis using {FACSH}uman and {AZ}ee",
author = "Sharma, Paritosh and
Challant, Camille and
Filhol, Michael",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.39",
pages = "354--360",
}
| No abstract found | [
"Sharma, Paritosh",
"Challant, Camille",
"Filhol, Michael"
] | Facial Expressions for Sign Language Synthesis using FACSHuman and AZee | signlang-1.39 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.40.bib | https://aclanthology.org/2024.signlang-1.40/ | @inproceedings{susman-kimmelman-2024-eye,
title = "Eye Blink Detection in Sign Language Data Using {CNN}s and Rule-Based Methods",
author = "Susman, Margaux and
Kimmelman, Vadim",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.40",
pages = "361--369",
}
| No abstract found | [
"Susman, Margaux",
"Kimmelman, Vadim"
] | Eye Blink Detection in Sign Language Data Using CNNs and Rule-Based Methods | signlang-1.40 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.41.bib | https://aclanthology.org/2024.signlang-1.41/ | @inproceedings{tan-etal-2024-seda,
title = "{SEDA}: Simple and Effective Data Augmentation for Sign Language Understanding",
author = "Tan, Sihan and
Miyazaki, Taro and
Itoyama, Katsutoshi and
Nakadai, Kazuhiro",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.41",
pages = "370--375",
}
| No abstract found | [
"Tan, Sihan",
"Miyazaki, Taro",
"Itoyama, Katsutoshi",
"Nakadai, Kazuhiro"
] | SEDA: Simple and Effective Data Augmentation for Sign Language Understanding | signlang-1.41 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.42.bib | https://aclanthology.org/2024.signlang-1.42/ | @inproceedings{uchida-etal-2024-hamnosys,
title = "{H}am{N}o{S}ys-based Motion Editing Method for Sign Language",
author = "Uchida, Tsubasa and
Miyazaki, Taro and
Kaneko, Hiroyuki",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.42",
pages = "376--385",
}
| No abstract found | [
"Uchida, Tsubasa",
"Miyazaki, Taro",
"Kaneko, Hiroyuki"
] | HamNoSys-based Motion Editing Method for Sign Language | signlang-1.42 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.43.bib | https://aclanthology.org/2024.signlang-1.43/ | @inproceedings{vazquez-enriquez-etal-2024-signamed,
title = "{S}igna{M}ed: a Cooperative Bilingual {LSE}-{S}panish Dictionary in the Healthcare Domain",
author = "V{\'a}zquez-Enr{\'\i}quez, Manuel and
Alba-Castro, Jos{\'e} Luis and
P{\'e}rez-P{\'e}rez, Ania and
Cabeza-Pereiro, Carmen and
Doc{\'\i}o-Fern{\'a}ndez, Laura",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.43",
pages = "386--394",
}
| No abstract found | [
"V{\\'a}zquez-Enr{\\'\\i}quez, Manuel",
"Alba-Castro, Jos{\\'e} Luis",
"P{\\'e}rez-P{\\'e}rez, Ania",
"Cabeza-Pereiro, Carmen",
"Doc{\\'\\i}o-Fern{\\'a}ndez, Laura"
] | SignaMed: a Cooperative Bilingual LSE-Spanish Dictionary in the Healthcare Domain | signlang-1.43 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.44.bib | https://aclanthology.org/2024.signlang-1.44/ | @inproceedings{xia-etal-2024-diffusion,
title = "Diffusion Models for Sign Language Video Anonymization",
author = "Xia, Zhaoyang and
Zhou, Yang and
Han, Ligong and
Neidle, Carol and
Metaxas, Dimitris N.",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.44",
pages = "395--407",
}
| No abstract found | [
"Xia, Zhaoyang",
"Zhou, Yang",
"Han, Ligong",
"Neidle, Carol",
"Metaxas, Dimitris N."
] | Diffusion Models for Sign Language Video Anonymization | signlang-1.44 | Poster | [
"https://github.com/jeffery9707/diffslva"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.signlang-1.45.bib | https://aclanthology.org/2024.signlang-1.45/ | @inproceedings{zhou-etal-2024-multimodal,
title = "A Multimodal Spatio-Temporal {GCN} Model with Enhancements for Isolated Sign Recognition",
author = "Zhou, Yang and
Xia, Zhaoyang and
Chen, Yuxiao and
Neidle, Carol and
Metaxas, Dimitris N.",
editor = "Efthimiou, Eleni and
Fotinea, Stavroula-Evita and
Hanke, Thomas and
Hochgesang, Julie A. and
Mesch, Johanna and
Schulder, Marc",
booktitle = "Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.signlang-1.45",
pages = "408--419",
}
| No abstract found | [
"Zhou, Yang",
"Xia, Zhaoyang",
"Chen, Yuxiao",
"Neidle, Carol",
"Metaxas, Dimitris N."
] | A Multimodal Spatio-Temporal GCN Model with Enhancements for Isolated Sign Recognition | signlang-1.45 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.1.bib | https://aclanthology.org/2024.sigul-1.1/ | @inproceedings{arnett-etal-2024-bit,
title = "A Bit of a Problem: Measurement Disparities in Dataset Sizes across Languages",
author = "Arnett, Catherine and
Chang, Tyler A. and
Bergen, Benjamin",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.1",
pages = "1--9",
abstract = "How should text dataset sizes be compared across languages? Even for content-matched (parallel) corpora, UTF-8 encoded text can require a dramatically different number of bytes for different languages. In our work, we define the byte premium between two languages as the ratio of bytes used to encode content-matched text in those languages. We compute byte premiums for 1155 languages, and we use linear regressions to estimate byte premiums for other languages. We release a tool to obtain byte premiums for any two languages, enabling comparisons of dataset sizes across languages for more equitable multilingual model development and data practices.",
}
| How should text dataset sizes be compared across languages? Even for content-matched (parallel) corpora, UTF-8 encoded text can require a dramatically different number of bytes for different languages. In our work, we define the byte premium between two languages as the ratio of bytes used to encode content-matched text in those languages. We compute byte premiums for 1155 languages, and we use linear regressions to estimate byte premiums for other languages. We release a tool to obtain byte premiums for any two languages, enabling comparisons of dataset sizes across languages for more equitable multilingual model development and data practices. | [
"Arnett, Catherine",
"Chang, Tyler A.",
"Bergen, Benjamin"
] | A Bit of a Problem: Measurement Disparities in Dataset Sizes across Languages | sigul-1.1 | Poster | 2403.00686 | [
"https://github.com/catherinearnett/byte-premium-tool"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.2.bib | https://aclanthology.org/2024.sigul-1.2/ | @inproceedings{mut-altin-saggion-2024-novel,
title = "A Novel Corpus for Automated Sexism Identification on Social Media",
author = "Mut Altin, Lutfiye Seda and
Saggion, Horacio",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.2",
pages = "10--15",
abstract = "In this paper, we present a novel dataset for the study of automated sexism identification and categorization on social media in Turkish. For this purpose, we have collected, following a well established methodology, a set of Tweets and YouTube comments. Relying on expert organizations in the area of gender equality, each text has been annotated based on a two-level labelling schema derived from previous research. Our resulting dataset consists of around 7,000 annotated instances useful for the study of expressions of sexism and misogyny on the Web. To the best of our knowledge, this is the first two-level manually annotated comprehensive Turkish dataset for sexism identification. In order to fuel research in this relevant area, we also present the result of our benchmarking experiments in the area of sexism identification in Turkish.",
}
| In this paper, we present a novel dataset for the study of automated sexism identification and categorization on social media in Turkish. For this purpose, we have collected, following a well established methodology, a set of Tweets and YouTube comments. Relying on expert organizations in the area of gender equality, each text has been annotated based on a two-level labelling schema derived from previous research. Our resulting dataset consists of around 7,000 annotated instances useful for the study of expressions of sexism and misogyny on the Web. To the best of our knowledge, this is the first two-level manually annotated comprehensive Turkish dataset for sexism identification. In order to fuel research in this relevant area, we also present the result of our benchmarking experiments in the area of sexism identification in Turkish. | [
"Mut Altin, Lutfiye Seda",
"Saggion, Horacio"
] | A Novel Corpus for Automated Sexism Identification on Social Media | sigul-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.3.bib | https://aclanthology.org/2024.sigul-1.3/ | @inproceedings{santos-etal-2024-advancing,
title = "Advancing Generative {AI} for {P}ortuguese with Open Decoder Gerv{\'a}sio {PT}*",
author = "Santos, Rodrigo and
Silva, Jo{\~a}o Ricardo and
Gomes, Lu{\'\i}s and
Rodrigues, Jo{\~a}o and
Branco, Ant{\'o}nio",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.3",
pages = "16--26",
abstract = "To advance the neural decoding of Portuguese, in this paper we present a fully open Transformer-based, instruction-tuned decoder model that sets a new state of the art in this respect. To develop this decoder, which we named Gerv{\'a}sio PT*, a strong LLaMA 2 7B model was used as a starting point, and its further improvement through additional training was done over language resources that include new instruction data sets of Portuguese prepared for this purpose, which are also contributed in this paper. All versions of Gerv{\'a}sio are open source and distributed for free under an open license, including for either research or commercial usage, and can be run on consumer-grade hardware, thus seeking to contribute to the advancement of research and innovation in language technology for Portuguese.",
}
| To advance the neural decoding of Portuguese, in this paper we present a fully open Transformer-based, instruction-tuned decoder model that sets a new state of the art in this respect. To develop this decoder, which we named Gerv{\'a}sio PT*, a strong LLaMA 2 7B model was used as a starting point, and its further improvement through additional training was done over language resources that include new instruction data sets of Portuguese prepared for this purpose, which are also contributed in this paper. All versions of Gerv{\'a}sio are open source and distributed for free under an open license, including for either research or commercial usage, and can be run on consumer-grade hardware, thus seeking to contribute to the advancement of research and innovation in language technology for Portuguese. | [
"Santos, Rodrigo",
"Silva, Jo{\\~a}o Ricardo",
"Gomes, Lu{\\'\\i}s",
"Rodrigues, Jo{\\~a}o",
"Branco, Ant{\\'o}nio"
] | Advancing Generative AI for Portuguese with Open Decoder Gervásio PT* | sigul-1.3 | Poster | 2402.18766 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.4.bib | https://aclanthology.org/2024.sigul-1.4/ | @inproceedings{levow-2024-assessing,
title = "Assessing Pre-Built Speaker Recognition Models for Endangered Language Data",
author = "Levow, Gina-Anne",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.4",
pages = "27--32",
abstract = "Significant research has focused on speaker recognition, determining which speaker is speaking in a segment of audio. However, few experiments have investigated speaker recognition for very low-resource or endangered languages. Furthermore, speaker recognition has the potential to support language documentation and revitalization efforts, making recordings more accessible to researchers and communities. Since endangered language datasets are too small to build competitive speaker representations from scratch, we investigate the application of large-scale pre-built speaker recognition models to bridge this gap. This paper compares four speaker recognition models on six diverse endangered language data sets. Comparisons contrast three recent neural network-based x-vector models and an earlier baseline i-vector model. Experiments demonstrate significantly stronger performance for some of the studied models. Further analysis highlights differences in effectiveness tied to the lengths of test audio segments and amount of data used for speaker modeling.",
}
| Significant research has focused on speaker recognition, determining which speaker is speaking in a segment of audio. However, few experiments have investigated speaker recognition for very low-resource or endangered languages. Furthermore, speaker recognition has the potential to support language documentation and revitalization efforts, making recordings more accessible to researchers and communities. Since endangered language datasets are too small to build competitive speaker representations from scratch, we investigate the application of large-scale pre-built speaker recognition models to bridge this gap. This paper compares four speaker recognition models on six diverse endangered language data sets. Comparisons contrast three recent neural network-based x-vector models and an earlier baseline i-vector model. Experiments demonstrate significantly stronger performance for some of the studied models. Further analysis highlights differences in effectiveness tied to the lengths of test audio segments and amount of data used for speaker modeling. | [
"Levow, Gina-Anne"
] | Assessing Pre-Built Speaker Recognition Models for Endangered Language Data | sigul-1.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.5.bib | https://aclanthology.org/2024.sigul-1.5/ | @inproceedings{kuriyozov-etal-2024-bertbek,
title = "{BERT}bek: A Pretrained Language Model for {U}zbek",
author = "Kuriyozov, Elmurod and
Vilares, David and
G{\'o}mez-Rodr{\'\i}guez, Carlos",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.5",
pages = "33--44",
abstract = "Recent advances in neural networks based language representation made it possible for pretrained language models to outperform previous models in many downstream natural language processing (NLP) tasks. These pretrained language models have also shown that if large enough, they exhibit good few-shot abilities, which is especially beneficial for low-resource scenarios. In this respect, although there are some large-scale multilingual pretrained language models available, language-specific pretrained models have demonstrated to be more accurate for monolingual evaluation setups. In this work, we present BERTbek - pretrained language models based on the BERT (Bidirectional Encoder Representations from Transformers) architecture for the low-resource Uzbek language. We also provide a comprehensive evaluation of the models on a number of NLP tasks: sentiment analysis, multi-label topic classification, and named entity recognition, comparing the models with various machine learning methods as well as multilingual BERT (mBERT). Experimental results indicate that our models outperform mBERT and other task-specific baseline models in all three tasks. Additionally, we also show the impact of training data size and quality on the downstream performance of BERT models, by training three different models with different text sources and corpus sizes.",
}
| Recent advances in neural networks based language representation made it possible for pretrained language models to outperform previous models in many downstream natural language processing (NLP) tasks. These pretrained language models have also shown that if large enough, they exhibit good few-shot abilities, which is especially beneficial for low-resource scenarios. In this respect, although there are some large-scale multilingual pretrained language models available, language-specific pretrained models have demonstrated to be more accurate for monolingual evaluation setups. In this work, we present BERTbek - pretrained language models based on the BERT (Bidirectional Encoder Representations from Transformers) architecture for the low-resource Uzbek language. We also provide a comprehensive evaluation of the models on a number of NLP tasks: sentiment analysis, multi-label topic classification, and named entity recognition, comparing the models with various machine learning methods as well as multilingual BERT (mBERT). Experimental results indicate that our models outperform mBERT and other task-specific baseline models in all three tasks. Additionally, we also show the impact of training data size and quality on the downstream performance of BERT models, by training three different models with different text sources and corpus sizes. | [
"Kuriyozov, Elmurod",
"Vilares, David",
"G{\\'o}mez-Rodr{\\'\\i}guez, Carlos"
] | BERTbek: A Pretrained Language Model for Uzbek | sigul-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.6.bib | https://aclanthology.org/2024.sigul-1.6/ | @inproceedings{arnardottir-etal-2024-beyond,
title = "Beyond Error Categories: A Contextual Approach of Evaluating Emerging Spell and Grammar Checkers",
author = "Arnard{\'o}ttir, {\TH}{\'o}runn and
Ing{\'o}lfsd{\'o}ttir, Svanhv{\'\i}t Lilja and
S{\'\i}monarson, Haukur Barri and
Einarsson, Hafsteinn and
Ingason, Anton Karl and
{\TH}orsteinsson, Vilhj{\'a}lmur",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.6",
pages = "45--52",
abstract = "Automatic spell and grammar checking can be done using various system architectures, and large language models have recently been used to solve the task with promising results. Here we describe a new method of creating test data to measure the performance of spell and grammar checkers, including large language models. Three types of test data represent different approaches to evaluation, from basic error detection to error correction with natural language explanations of the corrections made and error severity scores, which is the main novelty of this approach. These additions are especially useful when evaluating large language models. We present a spell and grammar checking test set for Icelandic in which the described approach is applied. The data consists of whole texts instead of discrete sentences, which facilitates evaluating context awareness of models. The resulting test set can be used to compare different spell and grammar checkers and is published under permissive licenses.",
}
| Automatic spell and grammar checking can be done using various system architectures, and large language models have recently been used to solve the task with promising results. Here we describe a new method of creating test data to measure the performance of spell and grammar checkers, including large language models. Three types of test data represent different approaches to evaluation, from basic error detection to error correction with natural language explanations of the corrections made and error severity scores, which is the main novelty of this approach. These additions are especially useful when evaluating large language models. We present a spell and grammar checking test set for Icelandic in which the described approach is applied. The data consists of whole texts instead of discrete sentences, which facilitates evaluating context awareness of models. The resulting test set can be used to compare different spell and grammar checkers and is published under permissive licenses. | [
"Arnard{\\'o}ttir, {\\TH}{\\'o}runn",
"Ing{\\'o}lfsd{\\'o}ttir, Svanhv{\\'\\i}t Lilja",
"S{\\'\\i}monarson, Haukur Barri",
"Einarsson, Hafsteinn",
"Ingason, Anton Karl",
"{\\TH}orsteinsson, Vilhj{\\'a}lmur"
] | Beyond Error Categories: A Contextual Approach of Evaluating Emerging Spell and Grammar Checkers | sigul-1.6 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.7.bib | https://aclanthology.org/2024.sigul-1.7/ | @inproceedings{poudel-etal-2024-bidirectional,
title = "Bidirectional {E}nglish-{N}epali Machine Translation({MT}) System for Legal Domain",
author = "Poudel, Shabdapurush and
Bal, Bal Krishna and
Acharya, Praveen",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.7",
pages = "53--58",
abstract = "Nepali, a low-resource language belonging to the Indo-Aryan language family and spoken in Nepal, India, Sikkim, and Burma has comparatively very little digital content and resources, more particularly in the legal domain. However, the need to translate legal documents is ever-increasing in the context of growing volumes of legal cases and a large population seeking to go abroad for higher education or employment. This underscores the need for developing an English-Nepali Machine Translation for the legal domain. We attempt to address this problem by utilizing a Neural Machine Translation (NMT) System with an encoder-decoder architecture, specifically designed for legal Nepali-English translation. Leveraging a custom-built legal corpus of 125,000 parallel sentences, our system achieves encouraging BLEU scores of 7.98 in (Nepali â English) and 6.63 (English â Nepali) direction",
}
| Nepali, a low-resource language belonging to the Indo-Aryan language family and spoken in Nepal, India, Sikkim, and Burma has comparatively very little digital content and resources, more particularly in the legal domain. However, the need to translate legal documents is ever-increasing in the context of growing volumes of legal cases and a large population seeking to go abroad for higher education or employment. This underscores the need for developing an English-Nepali Machine Translation for the legal domain. We attempt to address this problem by utilizing a Neural Machine Translation (NMT) System with an encoder-decoder architecture, specifically designed for legal Nepali-English translation. Leveraging a custom-built legal corpus of 125,000 parallel sentences, our system achieves encouraging BLEU scores of 7.98 in (Nepali → English) and 6.63 (English → Nepali) direction | [
"Poudel, Shabdapurush",
"Bal, Bal Krishna",
"Acharya, Praveen"
] | Bidirectional English-Nepali Machine Translation(MT) System for Legal Domain | sigul-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.8.bib | https://aclanthology.org/2024.sigul-1.8/ | @inproceedings{gonzales-etal-2024-bk3at,
title = "{BK}3{AT}: Bangsamoro K-3 Children{'}s Speech Corpus for Developing Assessment Tools in the Bangsamoro Languages",
author = "Gonzales, Kiel D. and
Maranan, Jazzmin R. and
Santelices, Francis Paolo D. and
Renovalles, Edsel Jedd M. and
Macale, Nissan D. and
Palafox, Nicole Anne A. and
Mendoza, Jose Marie A.",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.8",
pages = "59--65",
abstract = "Bangsamoro languages are among the under-resourced languages in the Mindanao region in the Philippines. Moreover, there is no currently publicly available data for children{'}s speech on most of these languages. BK3AT children{'}s speech corpus is a corpus designed for creating speech technologies that could help facilitators and teachers in K-3 education. The corpus consists of 122 hours of children speech data across 10 languages: Bahasa Sug, Chavacano, English, Filipino, Iranun, Maguindanaon, Meranaw, Sinama, Teduray, and Yakan. Preliminary experiments using Wav2Vec-XLSR architecture have been done in fine-tuning the Tagalog and L2 English corpus subsets to develop automatic speech recognition backend for literacy assessment. Results from the experiments show low word error rates (WERs) for small-vocabulary and targeted domains.",
}
| Bangsamoro languages are among the under-resourced languages in the Mindanao region in the Philippines. Moreover, there is no currently publicly available data for children{'}s speech on most of these languages. BK3AT children{'}s speech corpus is a corpus designed for creating speech technologies that could help facilitators and teachers in K-3 education. The corpus consists of 122 hours of children speech data across 10 languages: Bahasa Sug, Chavacano, English, Filipino, Iranun, Maguindanaon, Meranaw, Sinama, Teduray, and Yakan. Preliminary experiments using Wav2Vec-XLSR architecture have been done in fine-tuning the Tagalog and L2 English corpus subsets to develop automatic speech recognition backend for literacy assessment. Results from the experiments show low word error rates (WERs) for small-vocabulary and targeted domains. | [
"Gonzales, Kiel D.",
"Maranan, Jazzmin R.",
"Santelices, Francis Paolo D.",
"Renovalles, Edsel Jedd M.",
"Macale, Nissan D.",
"Palafox, Nicole Anne A.",
"Mendoza, Jose Marie A."
] | BK3AT: Bangsamoro K-3 Children's Speech Corpus for Developing Assessment Tools in the Bangsamoro Languages | sigul-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.9.bib | https://aclanthology.org/2024.sigul-1.9/ | @inproceedings{poujade-etal-2024-corpusarieja,
title = "{C}orpus{A}ri{\`e}ja: Building an Annotated Corpus with Variation in {O}ccitan",
author = "Poujade, Clamenca and
Bras, Myriam and
Urieli, Assaf",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.9",
pages = "66--71",
abstract = "The Occitan language is a less resourced language and is classified as {`}in danger{'} by the UNESCO. Thereby, it is important to build resources and tools that can help to safeguard and develop the digitisation of the language. CorpusAri{\`e}ja is a collection of 72 texts (just over 41,000 tokens) in the Occitan language of the French department of Ari{\`e}ge. The majority of the texts needed to be digitised and pass within an Optical Character Recognition. This corpus contains dialectal and spelling variation, but is limited to prose, without diachronic variation or genre variation. It is an annotated corpus with two levels of lemmatisation, POS tags and verbal inflection. One of the main aims of the corpus is to enable the conception of tools that can automatically annotate all Occitan texts, regardless of the dialect or spelling used. The Ari{\`e}ge territory is interesting because it includes the two variations that we focus on, dialectal and spelling. It has plenty of authors that write in their native language, their variety of Occitan.",
}
| The Occitan language is a less resourced language and is classified as {`}in danger{'} by the UNESCO. Thereby, it is important to build resources and tools that can help to safeguard and develop the digitisation of the language. CorpusAri{\`e}ja is a collection of 72 texts (just over 41,000 tokens) in the Occitan language of the French department of Ari{\`e}ge. The majority of the texts needed to be digitised and pass within an Optical Character Recognition. This corpus contains dialectal and spelling variation, but is limited to prose, without diachronic variation or genre variation. It is an annotated corpus with two levels of lemmatisation, POS tags and verbal inflection. One of the main aims of the corpus is to enable the conception of tools that can automatically annotate all Occitan texts, regardless of the dialect or spelling used. The Ari{\`e}ge territory is interesting because it includes the two variations that we focus on, dialectal and spelling. It has plenty of authors that write in their native language, their variety of Occitan. | [
"Poujade, Clamenca",
"Bras, Myriam",
"Urieli, Assaf"
] | CorpusArièja: Building an Annotated Corpus with Variation in Occitan | sigul-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.10.bib | https://aclanthology.org/2024.sigul-1.10/ | @inproceedings{sekeres-etal-2024-developing,
title = "Developing Infrastructure for Low-Resource Language Corpus Building",
author = "Sekeres, Hedwig G. and
Heeringa, Wilbert and
de Vries, Wietse and
Zwagers, Oscar Yde and
Wieling, Martijn and
Jensma, Goffe Th.",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.10",
pages = "72--78",
abstract = "For many of the world{'}s small languages, few resources are available. In this project, a written online accessible corpus was created for the minority language variant Gronings, which serves both researchers interested in language change and variation and a general audience of (new) speakers interested in finding real-life examples of language use. The corpus was created using a combination of volunteer work and automation, which together formed an efficient pipeline for converting printed text to Key Words in Context (KWICs), annotated with lemmas and part-of-speech tags. In the creation of the corpus, we have taken into account several of the challenges that can occur when creating resources for minority languages, such as a lack of standardisation and limited (financial) resources. As the solutions we offer are applicable to other small languages as well, each step of the corpus creation process is discussed and resources will be made available benefiting future projects on other low-resource languages.",
}
| For many of the world{'}s small languages, few resources are available. In this project, a written online accessible corpus was created for the minority language variant Gronings, which serves both researchers interested in language change and variation and a general audience of (new) speakers interested in finding real-life examples of language use. The corpus was created using a combination of volunteer work and automation, which together formed an efficient pipeline for converting printed text to Key Words in Context (KWICs), annotated with lemmas and part-of-speech tags. In the creation of the corpus, we have taken into account several of the challenges that can occur when creating resources for minority languages, such as a lack of standardisation and limited (financial) resources. As the solutions we offer are applicable to other small languages as well, each step of the corpus creation process is discussed and resources will be made available benefiting future projects on other low-resource languages. | [
"Sekeres, Hedwig G.",
"Heeringa, Wilbert",
"de Vries, Wietse",
"Zwagers, Oscar Yde",
"Wieling, Martijn",
"Jensma, Goffe Th."
] | Developing Infrastructure for Low-Resource Language Corpus Building | sigul-1.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.11.bib | https://aclanthology.org/2024.sigul-1.11/ | @inproceedings{johannsson-etal-2024-evaluating,
title = "Evaluating {I}celandic Sentiment Analysis Models Trained on Translated Data",
author = {J{\'o}hannsson, {\'O}lafur A. and
Arndal, Birkir H. and
J{\'o}nsson, Eysteinn {\"O}. and
Olafsson, Stefan and
Loftsson, Hrafn},
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.11",
pages = "79--89",
abstract = "We experiment with sentiment classification models for Icelandic that leverage machine-translated data for training. Since no large sentiment dataset exists for Icelandic, we translate 50,000 English IMDb reviews, classified either as positive or negative, into Icelandic using two services: Google Translate and GreynirTranslate. After machine translation, we assess whether the sentiment of the source language text is retained in the target language. Moreover, we evaluate the accuracy of the sentiment classifiers on non-translated Icelandic text.The performance of three types of baseline classifiers is compared, i.e., Support Vector Machines, Logistic Regression and Naive Bayes, when trained on translated data generated by either translation service. Furthermore, we fine-tune and evaluate three pre-trained transformer-based models, RoBERTa, IceBERT and ELECTRA, on both the original English texts and the translated texts. Our results indicate that the transformer models perform better than the baseline classifiers on all datasets. Moreover, our evaluation shows that the transformer models trained on data translated from English reviews can be used to effectively classify sentiment on non-translated Icelandic movie reviews.",
}
| We experiment with sentiment classification models for Icelandic that leverage machine-translated data for training. Since no large sentiment dataset exists for Icelandic, we translate 50,000 English IMDb reviews, classified either as positive or negative, into Icelandic using two services: Google Translate and GreynirTranslate. After machine translation, we assess whether the sentiment of the source language text is retained in the target language. Moreover, we evaluate the accuracy of the sentiment classifiers on non-translated Icelandic text.The performance of three types of baseline classifiers is compared, i.e., Support Vector Machines, Logistic Regression and Naive Bayes, when trained on translated data generated by either translation service. Furthermore, we fine-tune and evaluate three pre-trained transformer-based models, RoBERTa, IceBERT and ELECTRA, on both the original English texts and the translated texts. Our results indicate that the transformer models perform better than the baseline classifiers on all datasets. Moreover, our evaluation shows that the transformer models trained on data translated from English reviews can be used to effectively classify sentiment on non-translated Icelandic movie reviews. | [
"J{\\'o}hannsson, {\\'O}lafur A.",
"Arndal, Birkir H.",
"J{\\'o}nsson, Eysteinn {\\\"O}.",
"Olafsson, Stefan",
"Loftsson, Hrafn"
] | Evaluating Icelandic Sentiment Analysis Models Trained on Translated Data | sigul-1.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.12.bib | https://aclanthology.org/2024.sigul-1.12/ | @inproceedings{mc-cahill-etal-2024-exploring,
title = "Exploring Text Classification for Enhancing Digital Game-Based Language Learning for {I}rish",
author = "Mc Cahill, Leona and
Baltazar, Thomas and
Bruen, Sally and
Xu, Liang and
Ward, Monica and
U{\'\i} Dhonnchadha, Elaine and
Foster, Jennifer",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.12",
pages = "90--96",
abstract = "Digital game-based language learning (DGBLL) can help with the language learning process. DGBLL applications can make learning more enjoyable and engaging, but they are difficult to develop. A DBGLL app that relies on target language texts obviously needs to be able to use texts of the appropriate level for the individual learners. This implies that text classification tools should be available to DGBLL developers, who may not be familiar with the target language, in order to incorporate suitable texts into their games. While text difficulty classifiers exist for many of the most commonly spoken languages, this is not the case for under-resourced languages, such as Irish. In this paper, we explore approaches to the development of text classifiers for Irish. In the first approach to text analysis and grading, we apply linguistic analysis to assess text complexity. Features from this approach are then used in machine learning-based text classification, which explores the application of a number of machine learning algorithms to the problem. Although the development of these text classifiers is at an early stage, they show promise, particularly in a low-resourced scenario.",
}
| Digital game-based language learning (DGBLL) can help with the language learning process. DGBLL applications can make learning more enjoyable and engaging, but they are difficult to develop. A DGBLL app that relies on target language texts obviously needs to be able to use texts of the appropriate level for the individual learners. This implies that text classification tools should be available to DGBLL developers, who may not be familiar with the target language, in order to incorporate suitable texts into their games. While text difficulty classifiers exist for many of the most commonly spoken languages, this is not the case for under-resourced languages, such as Irish. In this paper, we explore approaches to the development of text classifiers for Irish. In the first approach to text analysis and grading, we apply linguistic analysis to assess text complexity. Features from this approach are then used in machine learning-based text classification, which explores the application of a number of machine learning algorithms to the problem. Although the development of these text classifiers is at an early stage, they show promise, particularly in a low-resourced scenario. | [
"Mc Cahill, Leona",
"Baltazar, Thomas",
"Bruen, Sally",
"Xu, Liang",
"Ward, Monica",
"U{\\'\\i} Dhonnchadha, Elaine",
"Foster, Jennifer"
] | Exploring Text Classification for Enhancing Digital Game-Based Language Learning for Irish | sigul-1.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.13.bib | https://aclanthology.org/2024.sigul-1.13/ | @inproceedings{philippy-etal-2024-forget,
title = "Forget {NLI}, Use a Dictionary: Zero-Shot Topic Classification for Low-Resource Languages with Application to {L}uxembourgish",
author = "Philippy, Fred and
Haddadan, Shohreh and
Guo, Siwen",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.13",
pages = "97--104",
abstract = "In NLP, zero-shot classification (ZSC) is the task of assigning labels to textual data without any labeled examples for the target classes. A common method for ZSC is to fine-tune a language model on a Natural Language Inference (NLI) dataset and then use it to infer the entailment between the input document and the target labels. However, this approach faces certain challenges, particularly for languages with limited resources. In this paper, we propose an alternative solution that leverages dictionaries as a source of data for ZSC. We focus on Luxembourgish, a low-resource language spoken in Luxembourg, and construct two new topic relevance classification datasets based on a dictionary that provides various synonyms, word translations and example sentences. We evaluate the usability of our dataset and compare it with the NLI-based approach on two topic classification tasks in a zero-shot manner. Our results show that by using the dictionary-based dataset, the trained models outperform the ones following the NLI-based approach for ZSC. While we focus on a single low-resource language in this study, we believe that the efficacy of our approach can also transfer to other languages where such a dictionary is available.",
}
| In NLP, zero-shot classification (ZSC) is the task of assigning labels to textual data without any labeled examples for the target classes. A common method for ZSC is to fine-tune a language model on a Natural Language Inference (NLI) dataset and then use it to infer the entailment between the input document and the target labels. However, this approach faces certain challenges, particularly for languages with limited resources. In this paper, we propose an alternative solution that leverages dictionaries as a source of data for ZSC. We focus on Luxembourgish, a low-resource language spoken in Luxembourg, and construct two new topic relevance classification datasets based on a dictionary that provides various synonyms, word translations and example sentences. We evaluate the usability of our dataset and compare it with the NLI-based approach on two topic classification tasks in a zero-shot manner. Our results show that by using the dictionary-based dataset, the trained models outperform the ones following the NLI-based approach for ZSC. While we focus on a single low-resource language in this study, we believe that the efficacy of our approach can also transfer to other languages where such a dictionary is available. | [
"Philippy, Fred",
"Haddadan, Shohreh",
"Guo, Siwen"
] | Forget NLI, Use a Dictionary: Zero-Shot Topic Classification for Low-Resource Languages with Application to Luxembourgish | sigul-1.13 | Poster | 2404.03912 | [
"https://github.com/fredxlpy/letz"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.14.bib | https://aclanthology.org/2024.sigul-1.14/ | @inproceedings{santos-etal-2024-fostering,
title = "Fostering the Ecosystem of Open Neural Encoders for {P}ortuguese with Albertina {PT}* Family",
author = "Santos, Rodrigo and
Rodrigues, Jo{\~a}o and
Gomes, Lu{\'\i}s and
Silva, Jo{\~a}o Ricardo and
Branco, Ant{\'o}nio and
Lopes Cardoso, Henrique and
Os{\'o}rio, Tom{\'a}s Freitas and
Leite, Bernardo",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.14",
pages = "105--114",
abstract = "To foster the neural encoding of Portuguese, this paper contributes foundation encoder models that represent an expansion of the still very scarce ecosystem of large language models specifically developed for this language that are fully open, in the sense that they are open source and openly distributed for free under an open license for any purpose, thus including research and commercial usages. Like most languages other than English, Portuguese is low-resourced in terms of these foundational language resources, there being the inaugural 900 million parameter Albertina and 335 million Bertimbau. Taking this couple of models as an inaugural set, we present the extension of the ecosystem of state-of-the-art open encoders for Portuguese with a larger, top performance-driven model with 1.5 billion parameters, and a smaller, efficiency-driven model with 100 million parameters. While achieving this primary goal, further results that are relevant for this ecosystem were obtained as well, namely new datasets for Portuguese based on the SuperGLUE benchmark, which we also distribute openly.",
}
| To foster the neural encoding of Portuguese, this paper contributes foundation encoder models that represent an expansion of the still very scarce ecosystem of large language models specifically developed for this language that are fully open, in the sense that they are open source and openly distributed for free under an open license for any purpose, thus including research and commercial usages. Like most languages other than English, Portuguese is low-resourced in terms of these foundational language resources, there being the inaugural 900 million parameter Albertina and 335 million Bertimbau. Taking this couple of models as an inaugural set, we present the extension of the ecosystem of state-of-the-art open encoders for Portuguese with a larger, top performance-driven model with 1.5 billion parameters, and a smaller, efficiency-driven model with 100 million parameters. While achieving this primary goal, further results that are relevant for this ecosystem were obtained as well, namely new datasets for Portuguese based on the SuperGLUE benchmark, which we also distribute openly. | [
"Santos, Rodrigo",
"Rodrigues, Jo{\\~a}o",
"Gomes, Lu{\\'\\i}s",
"Silva, Jo{\\~a}o Ricardo",
"Branco, Ant{\\'o}nio",
"Lopes Cardoso, Henrique",
"Os{\\'o}rio, Tom{\\'a}s Freitas",
"Leite, Bernardo"
] | Fostering the Ecosystem of Open Neural Encoders for Portuguese with Albertina PT* Family | sigul-1.14 | Poster | 2403.01897 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.15.bib | https://aclanthology.org/2024.sigul-1.15/ | @inproceedings{jauhiainen-linden-2024-improving,
title = "Improving Language Coverage on {H}e{LI}-{OTS}",
author = "Jauhiainen, Tommi and
Lind{\'e}n, Krister",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.15",
pages = "115--125",
abstract = "In this paper, we add under-resourced languages into the language repertoire of an existing off-the-shelf language identifier, HeLI-OTS. Adding more languages to a language identifier often comes with the drawback of lessened accuracy for the languages already part of the repertoire. We aim to minimize this effect. As sources for training and development data in the new languages, we use the OpenLID and FLORES-200 datasets. They are openly available high-quality datasets that are especially well-suited for language identifier development. By carefully inspecting the effect of each added language and the quality of their training and development data, we managed to add support for 20 new under-resourced languages to HeLI-OTS without affecting the performance of any existing languages to a noticeable extent.",
}
| In this paper, we add under-resourced languages into the language repertoire of an existing off-the-shelf language identifier, HeLI-OTS. Adding more languages to a language identifier often comes with the drawback of lessened accuracy for the languages already part of the repertoire. We aim to minimize this effect. As sources for training and development data in the new languages, we use the OpenLID and FLORES-200 datasets. They are openly available high-quality datasets that are especially well-suited for language identifier development. By carefully inspecting the effect of each added language and the quality of their training and development data, we managed to add support for 20 new under-resourced languages to HeLI-OTS without affecting the performance of any existing languages to a noticeable extent. | [
"Jauhiainen, Tommi",
"Lind{\\'e}n, Krister"
] | Improving Language Coverage on HeLI-OTS | sigul-1.15 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.16.bib | https://aclanthology.org/2024.sigul-1.16/ | @inproceedings{masala-etal-2024-improving,
title = "Improving Legal Judgement Prediction in {R}omanian with Long Text Encoders",
author = "Masala, Mihai and
Rebedea, Traian and
Velicu, Horia",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.16",
pages = "126--132",
abstract = "In recent years,the entire field of Natural Language Processing (NLP) has enjoyed amazing novel results achieving almost human-like performance on a variety of tasks. Legal NLP domain has also been part of this process, as it has seen an impressive growth. However, general-purpose models are not readily applicable for legal domain. Due to the nature of the domain (e.g. specialized vocabulary, long documents) specific models and methods are often needed for Legal NLP. In this work we investigate both specialized and general models for predicting the final ruling of a legal case, task known as Legal Judgment Prediction (LJP). We particularly focus on methods to extend to sequence length of Transformer-based models to better understand the long documents present in legal corpora. Extensive experiments on 4 LJP datasets in Romanian, originating from 2 sources with significantly different sizes and document lengths, show that specialized models and handling long texts are critical for a good performance.",
}
| In recent years, the entire field of Natural Language Processing (NLP) has enjoyed amazing novel results, achieving almost human-like performance on a variety of tasks. The Legal NLP domain has also been part of this process, as it has seen impressive growth. However, general-purpose models are not readily applicable to the legal domain. Due to the nature of the domain (e.g. specialized vocabulary, long documents), specific models and methods are often needed for Legal NLP. In this work we investigate both specialized and general models for predicting the final ruling of a legal case, a task known as Legal Judgment Prediction (LJP). We particularly focus on methods to extend the sequence length of Transformer-based models to better understand the long documents present in legal corpora. Extensive experiments on 4 LJP datasets in Romanian, originating from 2 sources with significantly different sizes and document lengths, show that specialized models and handling long texts are critical for good performance. | [
"Masala, Mihai",
"Rebedea, Traian",
"Velicu, Horia"
] | Improving Legal Judgement Prediction in Romanian with Long Text Encoders | sigul-1.16 | Poster | 2402.19170 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.17.bib | https://aclanthology.org/2024.sigul-1.17/ | @inproceedings{li-vu-2024-improving,
title = "Improving Noisy Student Training for Low-resource Languages in End-to-End {ASR} Using {C}ycle{GAN} and Inter-domain Losses",
author = "Li, Chia-Yu and
Vu, Ngoc Thang",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.17",
pages = "133--142",
abstract = "Training a semi-supervised end-to-end speech recognition system using noisy student training has significantly improved performance. However, this approach requires a substantial amount of paired speech-text and unlabeled speech, which is costly for low-resource languages. Therefore, this paper considers a more extreme case of semi-supervised end-to-end automatic speech recognition where there are limited paired speech-text, unlabeled speech (less than five hours), and abundant external text. Firstly, we observe improved performance by training the model using our previous work on semi-supervised learning {``}CycleGAN and inter-domain losses{''} solely with external text. Secondly, we enhance {``}CycleGAN and inter-domain losses{''} by incorporating automatic hyperparameter tuning, calling {``}enhanced CycleGAN inter-domain losses.{''} Thirdly, we integrate it into the noisy student training approach pipeline for low-resource scenarios. Our experimental results, conducted on six non-English languages from Voxforge and Common Voice, show a 20{\%} word error rate reduction compared to the baseline teacher model and a 10{\%} word error rate reduction compared to the baseline best student model, highlighting the significant improvements achieved through our proposed method.",
}
| Training a semi-supervised end-to-end speech recognition system using noisy student training has significantly improved performance. However, this approach requires a substantial amount of paired speech-text and unlabeled speech, which is costly for low-resource languages. Therefore, this paper considers a more extreme case of semi-supervised end-to-end automatic speech recognition where there are limited paired speech-text, unlabeled speech (less than five hours), and abundant external text. Firstly, we observe improved performance by training the model using our previous work on semi-supervised learning {``}CycleGAN and inter-domain losses{''} solely with external text. Secondly, we enhance {``}CycleGAN and inter-domain losses{''} by incorporating automatic hyperparameter tuning, calling {``}enhanced CycleGAN inter-domain losses.{''} Thirdly, we integrate it into the noisy student training approach pipeline for low-resource scenarios. Our experimental results, conducted on six non-English languages from Voxforge and Common Voice, show a 20{\%} word error rate reduction compared to the baseline teacher model and a 10{\%} word error rate reduction compared to the baseline best student model, highlighting the significant improvements achieved through our proposed method. | [
"Li, Chia-Yu",
"Vu, Ngoc Thang"
] | Improving Noisy Student Training for Low-resource Languages in End-to-End ASR Using CycleGAN and Inter-domain Losses | sigul-1.17 | Poster | 2407.21061 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.18.bib | https://aclanthology.org/2024.sigul-1.18/ | @inproceedings{tazakka-etal-2024-indonesian,
title = "{I}ndonesian-{E}nglish Code-Switching Speech Recognition Using the Machine Speech Chain Based Semi-Supervised Learning",
author = "Tazakka, Rais Vaza Man and
Lestari, Dessi and
Purwarianti, Ayu and
Tanaya, Dipta and
Azizah, Kurniawati and
Sakti, Sakriani",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.18",
pages = "143--148",
abstract = "Indonesia is home to a diverse linguistic landscape, where individuals seamlessly transition between Indonesian, English, and local dialects in their everyday conversations{---}a phenomenon known as code-switching. Understanding and accommodating this linguistic fluidity is essential, particularly in the development of accurate speech recognition systems. However, tackling code-switching in Indonesian poses a challenge due to the scarcity of paired code-switching data. Thus, this study endeavors to address Indonesian-English code-switching in speech recognition, leveraging unlabeled data and employing a semi-supervised technique known as the machine speech chain. Our findings demonstrate that the machine speech chain method effectively enhances Automatic Speech Recognition (ASR) performance in recognizing code-switching between Indonesian and English, utilizing previously untapped resources of unlabeled data.",
}
| Indonesia is home to a diverse linguistic landscape, where individuals seamlessly transition between Indonesian, English, and local dialects in their everyday conversations{---}a phenomenon known as code-switching. Understanding and accommodating this linguistic fluidity is essential, particularly in the development of accurate speech recognition systems. However, tackling code-switching in Indonesian poses a challenge due to the scarcity of paired code-switching data. Thus, this study endeavors to address Indonesian-English code-switching in speech recognition, leveraging unlabeled data and employing a semi-supervised technique known as the machine speech chain. Our findings demonstrate that the machine speech chain method effectively enhances Automatic Speech Recognition (ASR) performance in recognizing code-switching between Indonesian and English, utilizing previously untapped resources of unlabeled data. | [
"Tazakka, Rais Vaza Man",
"Lestari, Dessi",
"Purwarianti, Ayu",
"Tanaya, Dipta",
"Azizah, Kurniawati",
"Sakti, Sakriani"
] | Indonesian-English Code-Switching Speech Recognition Using the Machine Speech Chain Based Semi-Supervised Learning | sigul-1.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.19.bib | https://aclanthology.org/2024.sigul-1.19/ | @inproceedings{kondo-tamura-2024-inter,
title = "Inter-language Transfer Learning for Visual Speech Recognition toward Under-resourced Environments",
author = "Kondo, Fumiya and
Tamura, Satoshi",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.19",
pages = "149--154",
abstract = "In this study, we introduce a method of inter-language transfer learning for under-resourced visual speech recognition. Deploying speech-related technology to all languages is a quite important activity. However, applying state-of-the-art deep-learning techniques requires huge-size labeled corpora, which makes it hard for under-resourced languages. Our approach leverages a small amount of labeled video data of the target language, and employs inter-language transfer learning using a pre-trained English lip-reading model. By applying the proposed scheme, we build a Japanese lip-reading model, using the ROHAN corpus, the size of which is about one 450th of the size of English datasets. The front-end encoder part of the pre-trained model is fine-tuned to improve the acquisition of pronunciation and lip movement patterns unique to Japanese. On the other hand, the back-end encoder and the decoder are built using the Japanese dataset. Although English and Japanese have different language structures, evaluation experiments show that it is possible to build the Japanese lip-reading model efficiently. Comparison with competitive schemes demonstrates the effectiveness of our method.",
}
| In this study, we introduce a method of inter-language transfer learning for under-resourced visual speech recognition. Deploying speech-related technology to all languages is a quite important activity. However, applying state-of-the-art deep-learning techniques requires huge-size labeled corpora, which makes it hard for under-resourced languages. Our approach leverages a small amount of labeled video data of the target language, and employs inter-language transfer learning using a pre-trained English lip-reading model. By applying the proposed scheme, we build a Japanese lip-reading model, using the ROHAN corpus, the size of which is about one 450th of the size of English datasets. The front-end encoder part of the pre-trained model is fine-tuned to improve the acquisition of pronunciation and lip movement patterns unique to Japanese. On the other hand, the back-end encoder and the decoder are built using the Japanese dataset. Although English and Japanese have different language structures, evaluation experiments show that it is possible to build the Japanese lip-reading model efficiently. Comparison with competitive schemes demonstrates the effectiveness of our method. | [
"Kondo, Fumiya",
"Tamura, Satoshi"
] | Inter-language Transfer Learning for Visual Speech Recognition toward Under-resourced Environments | sigul-1.19 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.20.bib | https://aclanthology.org/2024.sigul-1.20/ | @inproceedings{her-kruschwitz-2024-investigating,
title = "Investigating Neural Machine Translation for Low-Resource Languages: Using {B}avarian as a Case Study",
author = "Her, Wan-hua and
Kruschwitz, Udo",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.20",
pages = "155--167",
abstract = "Machine Translation has made impressive progress in recent years offering close to human-level performance on many languages, but studies have primarily focused on high-resource languages with broad online presence and resources. With the help of growing Large Language Models, more and more low-resource languages achieve better results through the presence of other languages. However, studies have shown that not all low-resource languages can benefit from multilingual systems, especially those with insufficient training and evaluation data. In this paper, we revisit state-of-the-art Neural Machine Translation techniques to develop automatic translation systems between German and Bavarian. We investigate conditions of low-resource languages such as data scarcity and parameter sensitivity and focus on refined solutions that combat low-resource difficulties and creative solutions such as harnessing language similarity. Our experiment entails applying Back-translation and Transfer Learning to automatically generate more training data and achieve higher translation performance. We demonstrate noisiness in the data and present our approach to carry out text preprocessing extensively. Evaluation was conducted using combined metrics: BLEU, chrF and TER. Statistical significance results with Bonferroni correction show surprisingly high baseline systems, and that Back-translation leads to significant improvement. Furthermore, we present a qualitative analysis of translation errors and system limitations.",
}
| Machine Translation has made impressive progress in recent years offering close to human-level performance on many languages, but studies have primarily focused on high-resource languages with broad online presence and resources. With the help of growing Large Language Models, more and more low-resource languages achieve better results through the presence of other languages. However, studies have shown that not all low-resource languages can benefit from multilingual systems, especially those with insufficient training and evaluation data. In this paper, we revisit state-of-the-art Neural Machine Translation techniques to develop automatic translation systems between German and Bavarian. We investigate conditions of low-resource languages such as data scarcity and parameter sensitivity and focus on refined solutions that combat low-resource difficulties and creative solutions such as harnessing language similarity. Our experiment entails applying Back-translation and Transfer Learning to automatically generate more training data and achieve higher translation performance. We demonstrate noisiness in the data and present our approach to carry out text preprocessing extensively. Evaluation was conducted using combined metrics: BLEU, chrF and TER. Statistical significance results with Bonferroni correction show surprisingly high baseline systems, and that Back-translation leads to significant improvement. Furthermore, we present a qualitative analysis of translation errors and system limitations. | [
"Her, Wan-hua",
"Kruschwitz, Udo"
] | Investigating Neural Machine Translation for Low-Resource Languages: Using Bavarian as a Case Study | sigul-1.20 | Poster | 2404.08259 | [
"https://github.com/whher/nmt-de-bar"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.21.bib | https://aclanthology.org/2024.sigul-1.21/ | @inproceedings{haberland-etal-2024-italian,
title = "{I}talian-{L}igurian Machine Translation in Its Cultural Context",
author = "Haberland, Christopher R. and
Maillard, Jean and
Lusito, Stefano",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.21",
pages = "168--176",
abstract = "Large multilingual machine translation efforts are driving improved access and performance for under-resourced languages, but often fail to translate culturally specific and local concepts. Additionally, translation from practically relevant input languages may flag behind those that are comparatively over-represented in the training dataset. In this work, we release a new corpus, ZenaMT, containing 7,561 parallel Ligurian-Italian sentences, nearly a fifth of which are also translated in English. This corpus spans five domains: local and international news, Ligurian literature, Genoese Ligurian linguistics concepts, traditional card game rules, and Ligurian geographic expressions. We find that a translation model augmented with ZenaMT improves a baseline by 20{\%}, and by over 25{\%} (BLEU) compared to NLLB-3.3B, which is over 50 times the size. Our results demonstrate the utility of creating data sets for MT that are specifically tailored for the cultural context of Ligurian speakers. We freely release ZenaMT and expect to periodically update the corpus to improve MT performance and domain coverage.",
}
| Large multilingual machine translation efforts are driving improved access and performance for under-resourced languages, but often fail to translate culturally specific and local concepts. Additionally, translation from practically relevant input languages may lag behind those that are comparatively over-represented in the training dataset. In this work, we release a new corpus, ZenaMT, containing 7,561 parallel Ligurian-Italian sentences, nearly a fifth of which are also translated in English. This corpus spans five domains: local and international news, Ligurian literature, Genoese Ligurian linguistics concepts, traditional card game rules, and Ligurian geographic expressions. We find that a translation model augmented with ZenaMT improves a baseline by 20{\%}, and by over 25{\%} (BLEU) compared to NLLB-3.3B, which is over 50 times the size. Our results demonstrate the utility of creating data sets for MT that are specifically tailored for the cultural context of Ligurian speakers. We freely release ZenaMT and expect to periodically update the corpus to improve MT performance and domain coverage. | [
"Haberl",
", Christopher R.",
"Maillard, Jean",
"Lusito, Stefano"
] | Italian-Ligurian Machine Translation in Its Cultural Context | sigul-1.21 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.22.bib | https://aclanthology.org/2024.sigul-1.22/ | @inproceedings{de-jesus-nunes-2024-labadain,
title = "Labadain-30k+: A Monolingual Tetun Document-Level Audited Dataset",
author = "de Jesus, Gabriel and
Nunes, S{\'e}rgio",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.22",
pages = "177--188",
abstract = "This paper introduces Labadain-30k+, a monolingual dataset comprising 33.6k documents in Tetun, a low-resource language spoken in Timor-Leste. The dataset was acquired through web crawling and augmented with Wikipedia documents released by Wikimedia. Both sets of documents underwent thorough manual audits at the document level by native Tetun speakers, resulting in the construction of a Tetun text dataset well-suited for a variety of natural language processing and information retrieval tasks. This dataset was employed to conduct a comprehensive content analysis aimed at providing a nuanced understanding of document composition and the evolution of Tetun documents on the web. The analysis revealed that news articles constitute the predominant documents within the dataset, accounting for 89.87{\%} of the total, followed by Wikipedia documents at 4.34{\%}, and legal and governmental documents at 3.65{\%}, among others. Notably, there was a substantial increase in the number of documents in 2020, indicating 11.75 percentage points rise in document quantity, compared to an average of 4.76 percentage points per year from 2001 to 2023. Moreover, the year 2017, marked by the increased popularity of online news in Tetun, served as a threshold for analyzing the evolution of document writing on the web pre- and post-2017, specifically regarding vocabulary usage. Surprisingly, this analysis showed a significant increase of 6.12 percentage points in the Tetun written adhering to the Tetun official standard. Additionally, the persistence of Portuguese loanwords in that trajectory remained evident, reflecting an increase of 5.09 percentage points.",
}
| This paper introduces Labadain-30k+, a monolingual dataset comprising 33.6k documents in Tetun, a low-resource language spoken in Timor-Leste. The dataset was acquired through web crawling and augmented with Wikipedia documents released by Wikimedia. Both sets of documents underwent thorough manual audits at the document level by native Tetun speakers, resulting in the construction of a Tetun text dataset well-suited for a variety of natural language processing and information retrieval tasks. This dataset was employed to conduct a comprehensive content analysis aimed at providing a nuanced understanding of document composition and the evolution of Tetun documents on the web. The analysis revealed that news articles constitute the predominant documents within the dataset, accounting for 89.87{\%} of the total, followed by Wikipedia documents at 4.34{\%}, and legal and governmental documents at 3.65{\%}, among others. Notably, there was a substantial increase in the number of documents in 2020, indicating 11.75 percentage points rise in document quantity, compared to an average of 4.76 percentage points per year from 2001 to 2023. Moreover, the year 2017, marked by the increased popularity of online news in Tetun, served as a threshold for analyzing the evolution of document writing on the web pre- and post-2017, specifically regarding vocabulary usage. Surprisingly, this analysis showed a significant increase of 6.12 percentage points in the Tetun written adhering to the Tetun official standard. Additionally, the persistence of Portuguese loanwords in that trajectory remained evident, reflecting an increase of 5.09 percentage points. | [
"de Jesus, Gabriel",
"Nunes, S{\\'e}rgio"
] | Labadain-30k+: A Monolingual Tetun Document-Level Audited Dataset | sigul-1.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.23.bib | https://aclanthology.org/2024.sigul-1.23/ | @inproceedings{ljubesic-etal-2024-language,
title = "Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining",
author = "Ljube{\v{s}}i{\'c}, Nikola and
Suchomel, V{\'\i}t and
Rupnik, Peter and
Kuzman, Taja and
van Noord, Rik",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.23",
pages = "189--203",
abstract = "The world of language models is going through turbulent times, better and ever larger models are coming out at an unprecedented speed. However, we argue that, especially for the scientific community, encoder models of up to 1 billion parameters are still very much needed, their primary usage being in enriching large collections of data with metadata necessary for downstream research. We investigate the best way to ensure the existence of such encoder models on the set of very closely related languages - Croatian, Serbian, Bosnian and Montenegrin, by setting up a diverse benchmark for these languages, and comparing the trained-from-scratch models with the new models constructed via additional pretraining of existing multilingual models. We show that comparable performance to dedicated from-scratch models can be obtained by additionally pretraining available multilingual models even with a limited amount of computation. We also show that neighboring languages, in our case Slovenian, can be included in the additional pretraining with little to no loss in the performance of the final model.",
}
| The world of language models is going through turbulent times, better and ever larger models are coming out at an unprecedented speed. However, we argue that, especially for the scientific community, encoder models of up to 1 billion parameters are still very much needed, their primary usage being in enriching large collections of data with metadata necessary for downstream research. We investigate the best way to ensure the existence of such encoder models on the set of very closely related languages - Croatian, Serbian, Bosnian and Montenegrin, by setting up a diverse benchmark for these languages, and comparing the trained-from-scratch models with the new models constructed via additional pretraining of existing multilingual models. We show that comparable performance to dedicated from-scratch models can be obtained by additionally pretraining available multilingual models even with a limited amount of computation. We also show that neighboring languages, in our case Slovenian, can be included in the additional pretraining with little to no loss in the performance of the final model. | [
"Ljube{\\v{s}}i{\\'c}, Nikola",
"Suchomel, V{\\'\\i}t",
"Rupnik, Peter",
"Kuzman, Taja",
"van Noord, Rik"
] | Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining | sigul-1.23 | Poster | 2404.05428 | [
"https://github.com/clarinsi/benchich"
] | https://huggingface.co/papers/2404.05428 | 2 | 0 | 0 | 5 | 1 | [
"classla/xlm-r-slobertic"
] | [] | [] |
https://aclanthology.org/2024.sigul-1.24.bib | https://aclanthology.org/2024.sigul-1.24/ | @inproceedings{bick-etal-2024-man,
title = "Man or Machine: Evaluating Spelling Error Detection in {D}anish Newspaper Corpora",
author = "Bick, Eckhard and
Blom, Jonas Nygaard and
Rathje, Marianne and
Schack, J{\o}rgen",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.24",
pages = "204--211",
abstract = "This paper evaluates frequency and detection performance for both spelling and grammatical errors in a corpus of published Danish newspaper texts, comparing the results of three human proofreaders with those of an automatic system, DanProof. Adopting the error categorization scheme of the latter, we look at the accuracy of individual error types and their relative distribution over time, as well as the adequacy of suggested corrections. Finally, we discuss so-called artefact errors introduced by corpus processing, and the potential of DanProof as a corpus cleaning tool for identifying and correcting format conversion, OCR or other compilation errors. In the evaluation, with balanced F1-scores of 77.6 and 67.6 for 1999 texts and 2019 texts, respectively, DanProof achieved a higher recall and accuracy than the individual human annotators, and contributed the largest share of errors not detected by others (16.4{\%} for 1999 and 23.6{\%} for 2019). However, the human annotators had a significantly higher precision. Not counting artifacts, the overall error frequency in the corpus was low ( 0.5{\%}), and less than half in the newer texts compared to the older ones, a change that mostly concerned orthographical errors, with a correspondingly higher relative share of grammatical errors.",
}
| This paper evaluates frequency and detection performance for both spelling and grammatical errors in a corpus of published Danish newspaper texts, comparing the results of three human proofreaders with those of an automatic system, DanProof. Adopting the error categorization scheme of the latter, we look at the accuracy of individual error types and their relative distribution over time, as well as the adequacy of suggested corrections. Finally, we discuss so-called artefact errors introduced by corpus processing, and the potential of DanProof as a corpus cleaning tool for identifying and correcting format conversion, OCR or other compilation errors. In the evaluation, with balanced F1-scores of 77.6 and 67.6 for 1999 texts and 2019 texts, respectively, DanProof achieved a higher recall and accuracy than the individual human annotators, and contributed the largest share of errors not detected by others (16.4{\%} for 1999 and 23.6{\%} for 2019). However, the human annotators had a significantly higher precision. Not counting artifacts, the overall error frequency in the corpus was low ( 0.5{\%}), and less than half in the newer texts compared to the older ones, a change that mostly concerned orthographical errors, with a correspondingly higher relative share of grammatical errors. | [
"Bick, Eckhard",
"Blom, Jonas Nygaard",
"Rathje, Marianne",
"Schack, J{\\o}rgen"
] | Man or Machine: Evaluating Spelling Error Detection in Danish Newspaper Corpora | sigul-1.24 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.25.bib | https://aclanthology.org/2024.sigul-1.25/ | @inproceedings{vergez-couret-etal-2024-managing,
title = "Managing Fine-grained Metadata for Text Bases in Extremely Low Resource Languages: The Cases of Two Regional Languages of {F}rance",
author = "Vergez-Couret, Marianne and
Bernhard, Delphine and
Nauge, Michael and
Bras, Myriam and
Ruiz Fabo, Pablo and
Werner, Carole",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.25",
pages = "212--221",
abstract = "Metadata are key components of language resources and facilitate their exploitation and re-use. Their creation is a labour intensive process and requires a modeling step, which identifies resource-specific information as well as standards and controlled vocabularies that can be reused. In this article, we focus on metadata for documenting text bases for regional languages of France characterised by several levels of variation (space, time, usage, social status), based on a survey of existing metadata schema. Moreover, we implement our metadata model as a database structure for the Heurist data management system, which combines both the ease of use of spreadsheets and the ability to model complex relationships between entities of relational databases. The Heurist template is made freely available and was used to describe metadata for text bases in Alsatian and Poitevin-Santongeais. We also propose tools to automatically generate XML metadata headers files from the database.",
}
| Metadata are key components of language resources and facilitate their exploitation and re-use. Their creation is a labour intensive process and requires a modeling step, which identifies resource-specific information as well as standards and controlled vocabularies that can be reused. In this article, we focus on metadata for documenting text bases for regional languages of France characterised by several levels of variation (space, time, usage, social status), based on a survey of existing metadata schema. Moreover, we implement our metadata model as a database structure for the Heurist data management system, which combines both the ease of use of spreadsheets and the ability to model complex relationships between entities of relational databases. The Heurist template is made freely available and was used to describe metadata for text bases in Alsatian and Poitevin-Santongeais. We also propose tools to automatically generate XML metadata headers files from the database. | [
"Vergez-Couret, Marianne",
"Bernhard, Delphine",
"Nauge, Michael",
"Bras, Myriam",
"Ruiz Fabo, Pablo",
"Werner, Carole"
] | Managing Fine-grained Metadata for Text Bases in Extremely Low Resource Languages: The Cases of Two Regional Languages of France | sigul-1.25 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.26.bib | https://aclanthology.org/2024.sigul-1.26/ | @inproceedings{al-ali-aldarmaki-2024-mixat,
title = "Mixat: A Data Set of Bilingual Emirati-{E}nglish Speech",
author = "Al Ali, Maryam Khalifa and
Aldarmaki, Hanan",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.26",
pages = "222--226",
abstract = "This paper introduces Mixat: a dataset of Emirati speech code-mixed with English. Mixat was developed to address the shortcomings of current speech recognition resources when applied to Emirati speech, and in particular, to bilignual Emirati speakers who often mix and switch between their local dialect and English. The data set consists of 15 hours of speech derived from two public podcasts featuring native Emirati speakers, one of which is in the form of conversations between the host and a guest. Therefore, the collection contains examples of Emirati-English code-switching in both formal and natural conversational contexts. In this paper, we describe the process of data collection and annotation, and describe some of the features and statistics of the resulting data set. In addition, we evaluate the performance of pre-trained Arabic and multi-lingual ASR systems on our dataset, demonstrating the shortcomings of existing models on this low-resource dialectal Arabic, and the additional challenge of recognizing code-switching in ASR. The dataset will be made publicly available for research use.",
}
| This paper introduces Mixat: a dataset of Emirati speech code-mixed with English. Mixat was developed to address the shortcomings of current speech recognition resources when applied to Emirati speech, and in particular, to bilingual Emirati speakers who often mix and switch between their local dialect and English. The data set consists of 15 hours of speech derived from two public podcasts featuring native Emirati speakers, one of which is in the form of conversations between the host and a guest. Therefore, the collection contains examples of Emirati-English code-switching in both formal and natural conversational contexts. In this paper, we describe the process of data collection and annotation, and describe some of the features and statistics of the resulting data set. In addition, we evaluate the performance of pre-trained Arabic and multi-lingual ASR systems on our dataset, demonstrating the shortcomings of existing models on this low-resource dialectal Arabic, and the additional challenge of recognizing code-switching in ASR. The dataset will be made publicly available for research use. | [
"Al Ali, Maryam Khalifa",
"Aldarmaki, Hanan"
] | Mixat: A Data Set of Bilingual Emirati-English Speech | sigul-1.26 | Poster | 2405.02578 | [
"https://github.com/mbzuai-nlp/mixat"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.sigul-1.27.bib | https://aclanthology.org/2024.sigul-1.27/ | @inproceedings{arthur-etal-2024-multi,
title = "Bi-dialectal {ASR} of {A}rmenian from Naturalistic and Read Speech",
author = "Arthur, Malajyan and
Khurshudyan, Victoria and
Avetisyan, Karen and
Dolatian, Hossep and
Nouvel, Damien",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.27",
pages = "227--236",
abstract = "The paper explores the development of Automatic Speech Recognition (ASR) models for Armenian, by using data from two standard dialects (Eastern Armenian and Western Armenian). The goal is to develop a joint bi-variational model. We achieve state-of-the-art results. Results from our ASR experiments demonstrate the impact of dataset selection and data volume on model performance. The study reveals limited transferability between dialects, although integrating datasets from both dialects enhances overall performance. The paper underscores the importance of dataset diversity and volume in ASR model training for under-resourced languages like Armenian.",
}
| The paper explores the development of Automatic Speech Recognition (ASR) models for Armenian, by using data from two standard dialects (Eastern Armenian and Western Armenian). The goal is to develop a joint bi-variational model. We achieve state-of-the-art results. Results from our ASR experiments demonstrate the impact of dataset selection and data volume on model performance. The study reveals limited transferability between dialects, although integrating datasets from both dialects enhances overall performance. The paper underscores the importance of dataset diversity and volume in ASR model training for under-resourced languages like Armenian. | [
"Arthur, Malajyan",
"Khurshudyan, Victoria",
"Avetisyan, Karen",
"Dolatian, Hossep",
"Nouvel, Damien"
] | Bi-dialectal ASR of Armenian from Naturalistic and Read Speech | sigul-1.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.28.bib | https://aclanthology.org/2024.sigul-1.28/ | @inproceedings{nguyen-sakti-2024-multilingual,
title = "Multilingual Self-supervised Visually Grounded Speech Models",
author = "Nguyen, Huynh Phuong Thanh and
Sakti, Sakriani",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.28",
pages = "237--243",
abstract = "Developing a multilingual speech-to-speech translation system poses challenges due to the scarcity of paired speech data in various languages, particularly when dealing with unknown and untranscribed languages. However, the shared semantic representation across multiple languages presents an opportunity to build a translation system based on images. Recently, researchers have explored methods for aligning bilingual speech as a novel approach to discovering speech pairs using semantic images from unknown and untranscribed speech. These aligned speech pairs can then be utilized to train speech-to-speech translation systems. Our research builds upon these approaches by expanding into multiple languages and focusing on achieving multimodal multilingual pairs alignment, with a key component being multilingual visually grounded speech models. The objectives of our research are twofold: (1) to create visually grounded speech datasets for English, Japanese, Indonesian, and Vietnamese, and (2) to develop self-supervised visually grounded speech models for these languages. Our experiments have demonstrated the feasibility of this approach, showcasing the ability to retrieve associations between speeches and images. The results indicate that our multilingual visually grounded speech models yield promising outcomes in representing speeches using semantic images across multiple languages.",
}
| Developing a multilingual speech-to-speech translation system poses challenges due to the scarcity of paired speech data in various languages, particularly when dealing with unknown and untranscribed languages. However, the shared semantic representation across multiple languages presents an opportunity to build a translation system based on images. Recently, researchers have explored methods for aligning bilingual speech as a novel approach to discovering speech pairs using semantic images from unknown and untranscribed speech. These aligned speech pairs can then be utilized to train speech-to-speech translation systems. Our research builds upon these approaches by expanding into multiple languages and focusing on achieving multimodal multilingual pairs alignment, with a key component being multilingual visually grounded speech models. The objectives of our research are twofold: (1) to create visually grounded speech datasets for English, Japanese, Indonesian, and Vietnamese, and (2) to develop self-supervised visually grounded speech models for these languages. Our experiments have demonstrated the feasibility of this approach, showcasing the ability to retrieve associations between speeches and images. The results indicate that our multilingual visually grounded speech models yield promising outcomes in representing speeches using semantic images across multiple languages. | [
"Nguyen, Huynh Phuong Thanh",
"Sakti, Sakriani"
] | Multilingual Self-supervised Visually Grounded Speech Models | sigul-1.28 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.29.bib | https://aclanthology.org/2024.sigul-1.29/ | @inproceedings{nakarmi-etal-2024-nepal,
title = "{N}epal Script Text Recognition Using {CRNN} {CTC} Architecture",
author = "Nakarmi, Swornim and
Sthapit, Sarin and
Shakya, Arya and
Chulyadyo, Rajani and
Bal, Bal Krishna",
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.29",
pages = "244--251",
abstract = "Nepal Script (also known as Prachalit Script) is the widely used script of Nepal Bhasa, the native language of the Kathmandu Valley in Nepal. Derived from the Brahmi Script, the Nepal Script was developed in the 9th century and was extensively used till the 20th century, before being replaced by the Devanagari script. Numerous ancient manuscripts, inscriptions, and documents written in the Nepal Script are still available containing immense knowledge on architecture, arts, astrology, ayurveda, literature, music, tantrism, etc. To preserve and revive Nepal Bhasa, digitizing such documents plays a crucial role. This paper presents our work on text recognition for the Nepal Script. The implementation includes the Nepal Script text recognizer based on CRNN CTC architecture aided by line and word segmentations. Leveraging a carefully curated dataset that encompasses handwritten and printed texts in the Nepal Script, our work has achieved CER of 6.65{\%} and WER of 13.11{\%}. The dataset used for this work is available as Nepal Script Text Dataset on Kaggle. The paper further explores the associated challenges due to the complex nature of the script such as conjuncts, modifiers and variations; and the current state of the script.",
}
| Nepal Script (also known as Prachalit Script) is the widely used script of Nepal Bhasa, the native language of the Kathmandu Valley in Nepal. Derived from the Brahmi Script, the Nepal Script was developed in the 9th century and was extensively used till the 20th century, before being replaced by the Devanagari script. Numerous ancient manuscripts, inscriptions, and documents written in the Nepal Script are still available containing immense knowledge on architecture, arts, astrology, ayurveda, literature, music, tantrism, etc. To preserve and revive Nepal Bhasa, digitizing such documents plays a crucial role. This paper presents our work on text recognition for the Nepal Script. The implementation includes the Nepal Script text recognizer based on CRNN CTC architecture aided by line and word segmentations. Leveraging a carefully curated dataset that encompasses handwritten and printed texts in the Nepal Script, our work has achieved CER of 6.65{\%} and WER of 13.11{\%}. The dataset used for this work is available as Nepal Script Text Dataset on Kaggle. The paper further explores the associated challenges due to the complex nature of the script such as conjuncts, modifiers and variations; and the current state of the script. | [
"Nakarmi, Swornim",
"Sthapit, Sarin",
"Shakya, Arya",
"Chulyadyo, Rajani",
"Bal, Bal Krishna"
] | Nepal Script Text Recognition Using CRNN CTC Architecture | sigul-1.29 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.sigul-1.30.bib | https://aclanthology.org/2024.sigul-1.30/ | @inproceedings{cusenza-coltekin-2024-nlp,
title = {{NLP} for Arb{\"e}resh: How an Endangered Language Learns to Write in the 21st Century},
author = {Cusenza, Giulio and
{\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}},
editor = "Melero, Maite and
Sakti, Sakriani and
Soria, Claudia",
booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.sigul-1.30",
pages = "252--256",
abstract = {Societies are becoming more and more connected, and minority languages often find themselves helpless against the advent of the digital age, with their speakers having to regularly turn to other languages for written communication. This work introduces the case of Arb{\"e}resh, a southern Italian language related to Albanian. It presents the very first machine-readable Arb{\"e}resh data, collected through a web campaign, and describes a set of tools developed to enable the Arb{\"e}resh people to learn how to write their language, including a spellchecker, a conjugator, a numeral generator, and an interactive platform to learn Arb{\"e}resh spelling. A comprehensive web application was set up to make these tools available to the public, as well as to collect further data through them. This method can be replicated to help revive other minority languages in a situation similar to Arb{\"e}resh{'}s. The main challenges of the process were the extremely low-resource setting and the variability of Arb{\"e}resh dialects.},
}
| Societies are becoming more and more connected, and minority languages often find themselves helpless against the advent of the digital age, with their speakers having to regularly turn to other languages for written communication. This work introduces the case of Arb{\"e}resh, a southern Italian language related to Albanian. It presents the very first machine-readable Arb{\"e}resh data, collected through a web campaign, and describes a set of tools developed to enable the Arb{\"e}resh people to learn how to write their language, including a spellchecker, a conjugator, a numeral generator, and an interactive platform to learn Arb{\"e}resh spelling. A comprehensive web application was set up to make these tools available to the public, as well as to collect further data through them. This method can be replicated to help revive other minority languages in a situation similar to Arb{\"e}resh{'}s. The main challenges of the process were the extremely low-resource setting and the variability of Arb{\"e}resh dialects. | [
"Cusenza, Giulio",
"{\\c{C}}{\\\"o}ltekin, {\\c{C}}a{\\u{g}}r{\\i}"
] | NLP for Arbëresh: How an Endangered Language Learns to Write in the 21st Century | sigul-1.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |