taesiri committed on
Commit e96b92d
1 Parent(s): 317fee1

Upload papers/2402/2402.08382.tex with huggingface_hub

\documentclass[11pt,a4paper]{article}
\usepackage[hyperref]{acl2020}
\usepackage{times}
\usepackage{latexsym}
\usepackage{linguex}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage{url}
\usepackage{array}
\usepackage{kotex}
\newcolumntype{x}[1]{>{\centering\arraybackslash\hspace{0pt}}p{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}

\usepackage{tabularx}
\usepackage{graphicx}
\usepackage{dblfloatfix}
\usepackage{enumitem}
\usepackage{textcomp}
\usepackage{todonotes}
\usepackage{subcaption}
\usepackage{chngcntr}
\newcommand{\hyun}[1]{\textcolor{olive}{\textbf{Hyun: #1}}}

\renewcommand{\UrlFont}{\ttfamily\small}

\usepackage{microtype}

\aclfinalcopy \def\aclpaperid{718}

\newcommand\BibTeX{B\textsc{ib}\TeX}

\newlength{\vs}
\setlength{\vs}{0.5\baselineskip}

\title{Punctuation Restoration Improves Structure Understanding without Supervision}

\author{Junghyun Min, Minho Lee, Woochul Lee, Yeonsoo Lee\quad\\
NCSOFT NLP Center, Seongnam, Gyeonggi, Korea\\
\texttt{\{hyun1, minolee, darkgeo, yeonsoo\}@ncsoft.com}\\
}

\date{}

\begin{document}
\maketitle
\begin{abstract}
Unsupervised learning objectives like language modeling and de-noising constitute a significant part of producing pre-trained models that perform various downstream applications, from natural language understanding to conversational tasks. However, despite the impressive conversational capabilities of recent large language models, their ability to capture syntactic or semantic structure within text lags behind. We hypothesize that the mismatch between linguistic performance and competence in machines is attributable to insufficient transfer of linguistic structure knowledge to computational systems under currently popular pre-training objectives. We show that punctuation restoration transfers to improvements in in- and out-of-distribution performance on structure-related tasks like named entity recognition, open information extraction, chunking, and part-of-speech tagging. Punctuation restoration is an effective learning objective that can improve structure understanding and yield more robust, structure-aware representations of natural language.
\end{abstract}

\newlength{\myindent}
\setlength{\myindent}{0.5cm}

\section{Introduction}
\label{sec:introduction}

The current framework of natural language processing systems, described by \citet{linzen-2020-accelerate} as the PAID paradigm, consists of two production stages: unsupervised representation learning and task-specific engineering. Modern transformer-based systems that follow this framework \citep{devlin2019bert, raffel2019t5, radford2018gpt, peters-etal-2018-elmo} report high performance on various natural language understanding tasks, often matching or exceeding human performance baselines \citep{wang-etal-2018-glue, wang2019superglue}. However, there is ample evidence that current unsupervised representation learning yields weak structure understanding and brittle generalization abilities. In classification systems, we observe unstable outcomes despite consistent input, and reliance on shallow heuristics when processing unfamiliar input \citep{mccoy-etal-2020-berts, zhou-etal-2020-curse}. In generative and conversational systems, we observe stagnant natural language understanding performance despite drastic increases in conversational performance \citep{zhong2023chatgpt}, and failure to generalize sentences like ``A equals B'' to ``B equals A'' \citep{berglund2023reversal}.

These are examples of weak structure understanding in language model based NLP systems. While it is difficult to pinpoint the exact source of these weaknesses, or even to disentangle the effects of unsupervised pre-training from those of task-specific engineering, the pre-training stage is at least partially responsible for these behaviors, and there exists room for improvement \citep{zhou-etal-2020-curse, min-etal-2020-syntactic}. We believe word prediction tasks like auto-regressive \citep{radford2018gpt}, masked \citep{devlin2019bert}, and perturbed \citep{raffel2019t5} language modeling may be insufficient for acquiring robust representations that contain a strong understanding of syntactic and semantic structure. We hypothesize that an additional unsupervised learning objective that focuses on capturing structure within natural language will improve structure understanding, measured by in-distribution (test set from the same source as the training set) and out-of-distribution (test set from a different source than the training set) performance on structure-related NLP tasks like chunking, information extraction, semantic role labeling, named entity recognition, sentence boundary detection, and part-of-speech tagging.

This paper aims to test this hypothesis using an unsupervised learning objective that reinforces structure understanding in language models. One nontrivial signal for syntactic and semantic structure in natural language is punctuation \citep{briscoe1996syntax, nunberg-1990-linguistics, dale1991exploring}, which can also serve as an effective parsing constraint that aids grammar induction in web mark-up text \citep{spitkovsky-etal-2010-profiting}. During human speech processing, syntactic disambiguation and grammar induction are facilitated by prosody \citep{kahn2005effective, price1991use}, which is analogous to punctuation in written text. Previously, punctuation has been used for grammar induction to improve unsupervised dependency parsing \citep{spitkovsky-etal-2011-punctuation}. Punctuation restoration is itself a popular downstream task, especially for polishing output text from automatic speech recognition systems \citep[\textit{inter alia}]{gravano-2009-restoring, alam2020punctuation, gupta2023punctuation}, but it has not been studied as a transferable language modeling objective.

Here, we propose punctuation restoration as the structure-oriented learning objective, which we describe in detail in Section \ref{sec:objective_design}. Our results show that additional pre-training with the punctuation restoration objective leads to performance improvements on various structure-related NLP tasks under both discriminative and generative approaches, supporting our hypothesis. Furthermore, this finding suggests that there is room for improvement in the unsupervised pre-training stage of the current paradigm for producing natural language processing systems.

Our contribution is twofold:
\begin{enumerate}
\item We suggest a novel research direction in unsupervised transfer learning beyond word prediction
\item We propose an unsupervised learning objective that yields robust structure understanding
\end{enumerate}

\section{Structure understanding}
\label{sec:structure-understanding}

Understanding of language structure is vital in both human and machine language processing. While human language acquisition and modern machine representation learning take a similar approach (acquisition of structure via implicit structural signals in an unsupervised setting), their outcomes differ, as highlighted by the poor generalization abilities and high computational costs of machine language processing systems.

\subsection{Human structure understanding}

Despite the unsupervised and sparse nature of their linguistic stimuli, human learners are able to obtain robust representations that generalize to unfamiliar inputs reliably and with remarkable efficiency. \citet{braine1971two} provides a plethora of examples in their corpus where children do not respond to negative reinforcement. However, even without explicit supervision, humans are able to generalize their linguistic knowledge to novel structures and utterances \citep{sprouse2013acceptability}. Moreover, this is accomplished with remarkable efficiency: \citet{roy2015predicting} estimates that children hear or produce approximately 8 million words over a 15-month period, comparable to around 13-14 million tokens. \citet{linzen-2020-accelerate} acknowledges that NLP tasks or languages with a similar range of available data are often dubbed ``low-resource.''

\subsection{Pre-training to acquire structure understanding}

Unlike humans, it is difficult for computational systems to obtain reliable representations of structure, given their lack of human-like inductive bias \citep{linzen-etal-2016-assessing, mccoy-etal-2020-syntax}. Modern language models rely on word prediction objectives for representation learning: they acquire natural language representations by predicting the words most likely to appear in the masked, perturbed, or next-in-sequence slot \citep{devlin2019bert, raffel2019t5, radford2018gpt, yang2020xlnet}. BERT \citep{devlin2019bert} employs masked language modeling, where random words in a sentence are masked and the model is tasked with predicting those masked tokens. The Text-to-Text Transfer Transformer \citep[T5;][]{raffel2019t5} utilizes a denoising objective, where a portion of the input sentence is corrupted and the model is trained to reconstruct the original text. ELECTRA \citep{clark2020electra} introduces a novel approach through corruption classification, where a subset of tokens is replaced with incorrect ones and the model distinguishes between genuine and corrupted tokens. The Generative Pre-trained Transformer \citep[GPT;][]{radford2018gpt} employs an autoregressive language modeling objective, predicting the next word in a sequence given the preceding context. XLNet \citep{yang2020xlnet} iterates word prediction over all factorizable permutations, incorporating bidirectional context while maintaining the autoregressive property.

\subsection{Other methods for structure understanding}
\label{sec:other-methods-for-structure-understanding}

In addition to structure learning during the pre-training stage, various works suggest methods applicable after it. Approaches related to dataset adjustment account for a significant portion. \citet{gunasekar2023textbooks} observe that training on textbook-quality data reduces the need for scaling while maintaining performance. \citet{yaghoobzadeh-etal-2021-increasing} propose recursion on forgettable examples to curb system reliance on spurious correlations and focus on syntactic and semantic signal. \citet{min-etal-2020-syntactic} introduce a simple yet effective human-in-the-loop adversarial augmentation framework that improves general syntactic structure understanding. \citet{clark-etal-2019-boolq} suggest that performing ``additional pre-training'' on the supervised Multi-Genre Natural Language Inference dataset \citep[MultiNLI;][]{williams-etal-2018-mnli} transfers cross-sentence structure understanding and thus improves downstream performance on their Boolean QA dataset. Other efforts include augmenting input explicitly with syntax by providing constituency or dependency parsing information \citep{pradhan-etal-2005-semantic-role, zhang-etal-2019-syntax, lepori-etal-2020-representations} or via joint inference \citep{punyakanok-etal-2008-importance}, detecting a domain-specialized sub-span of input text to process it separately \citep{park-etal-2023-varco}, and increasing retrieval relevancy by applying additional constraints that encourage learning and prediction of task-specific input-output structure \citep{lee2024slgm}.

\section{Objective design and experimental setup}
\label{sec:objective_design}

\subsection{Objective design}

The punctuation restoration objective predicts removed punctuation marks and capitalization. In our implementation, we predict the following set of punctuation marks: the comma \textbf{,}, the period \textbf{.}, the single quotation mark \textbf{'}, and the double quotation mark \textbf{"}, along with capitalization, as shown below. Boldface indicates an addition or a modification of the source text.

\begin{itemize}
\label{fig:faker}
\item Source: lee faker sang-hyeok (hangul: 이상혁) is a league of legends esports player currently mid laner and part owner at t1
\item Target: \textbf{L}ee \textbf{``F}aker\textbf{''} \textbf{S}ang-hyeok (\textbf{H}angul: 이상혁) is a \textbf{L}eague of \textbf{L}egends esports player, currently mid laner and part owner at \textbf{T}1\textbf{.}
\end{itemize}

We do not introduce mask tokens that trigger predictions, because we want the learners to be able to infer punctuation marks and capitalization (and hence language structure) from raw text only. We also acknowledge that our selection of punctuation marks to restore is arbitrary, and it is possible that a different selection would yield better results.

From an internal database of English news articles, accessed between January 2022 and August 2023, we collected a total of 435,031 article excerpts, which are non-overlapping article segments capped at 150 words each. Sources include major news outlets like CNN and Reuters.

The raw excerpts serve as target text. To create source text, we first normalize punctuation marks, then remove our four selected punctuation marks, and then apply the \texttt{.lowercase()} transformation. While we intend to produce a training dataset entirely in English, we did not check for this, and it is possible that the training data include non-English words, phrases, or articles.
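
For illustration, the sketch below shows one possible Python implementation of this preprocessing step; the normalization map and the toy excerpt are placeholders rather than a description of our exact pipeline.

\begin{verbatim}
import re

# Punctuation marks removed from the source side.
PUNCT_TO_STRIP = [",", ".", "'", '"']
# Illustrative normalization map; the actual
# pipeline may normalize more variants.
NORMALIZE = {"\u201c": '"', "\u201d": '"',
             "\u2018": "'", "\u2019": "'"}

def make_pair(excerpt):
    """Return a (source, target) pair for one
    raw news excerpt (target = raw text)."""
    target = excerpt
    source = excerpt
    for k, v in NORMALIZE.items():
        source = source.replace(k, v)
    for p in PUNCT_TO_STRIP:
        source = source.replace(p, "")
    source = re.sub(r"\s+", " ", source).strip()
    return source.lower(), target

# Toy excerpt; real data comes from news text.
example = 'NCSoft announced "Throne and Liberty."'
src, tgt = make_pair(example)
# src: 'ncsoft announced throne and liberty'
# tgt: 'NCSoft announced "Throne and Liberty."'
\end{verbatim}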

\subsection{Experimental setup}

We treat punctuation restoration as additional training before fine-tuning on the target datasets. We experiment with three approaches: a single-task generative approach with the conditional language modeling head, a joint multi-task generative approach, and a discriminative approach with a classification head. For all approaches, we use the publicly available \texttt{t5-base} model architecture and checkpoint from Huggingface Transformers' \texttt{T5ForConditionalGeneration} module. We train the model on the punctuation restoration objective for 40 epochs before fine-tuning with supervised datasets for downstream tasks. The experiments were run on V100 GPUs, with half precision and gradient accumulation enabled. We follow \citet{raffel2019t5}'s framework of transfer learning in text-to-text tasks for the generative approach and \citet{radford2018gpt}'s framework of generative pre-training followed by discriminative fine-tuning for the discriminative approach. Unlike the generative approach, the discriminative approach requires some modification of the original T5 implementation.
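
For reference, the sketch below illustrates a single training step of this additional pre-training stage with Huggingface Transformers; batching, learning rate scheduling, half precision, and gradient accumulation are omitted, and the hyperparameters shown are illustrative.

\begin{verbatim}
import torch
from transformers import (AutoTokenizer,
    T5ForConditionalGeneration)

tok = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained(
    "t5-base")
model.train()
# Learning rate shown is illustrative.
opt = torch.optim.AdamW(model.parameters(),
                        lr=1e-4)

# One punctuation restoration training pair.
src = "ncsoft announced throne and liberty"
tgt = 'NCSoft announced "Throne and Liberty."'

batch = tok(src, return_tensors="pt",
            truncation=True, max_length=256)
labels = tok(tgt, return_tensors="pt",
             truncation=True,
             max_length=256).input_ids

out = model(input_ids=batch.input_ids,
            attention_mask=batch.attention_mask,
            labels=labels)
out.loss.backward()  # standard seq2seq CE loss
opt.step()
opt.zero_grad()
\end{verbatim}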

\subsubsection{Discriminative approach}

While there exist sophisticated attempts to incorporate the decoder layers in producing a discriminative model from a pre-trained encoder-decoder architecture \citep{liu2022enct5}, we use a simple architecture where we forgo the decoder block and place a \texttt{T5ClassificationHead} on top of the encoder block of the T5 model. That is, we take the hidden state output from the model's encoder and use it as input to the classification head. An illustration of the model architecture is shown in Figure \ref{fig:s4e-architecture}.
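
For concreteness, the sketch below shows one way to realize this encoder-plus-classification-head setup; the dropout rate, the number of labels, and the use of a plain linear head are illustrative, and the actual head implementation may differ.

\begin{verbatim}
import torch.nn as nn
from transformers import T5EncoderModel

class T5EncoderTagger(nn.Module):
    """T5 encoder with a token classification
    head; the decoder block is discarded."""
    def __init__(self, name="t5-base",
                 num_labels=9, dropout=0.1):
        super().__init__()
        self.encoder = (
            T5EncoderModel.from_pretrained(name))
        hidden = self.encoder.config.d_model
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        enc = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask)
        h = self.dropout(enc.last_hidden_state)
        # (batch, seq_len, num_labels) logits
        return self.head(h)
\end{verbatim}

Only the encoder weights are carried over from the punctuation restoration stage, while the classification head is newly initialized.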

\begin{figure*}
\centering
\includegraphics[width=5.5in]{architecture.pdf}
\caption{(a) The \texttt{t5} architecture for a generative, text-to-text approach to NLP tasks. Here, we illustrate open information extraction. (b) A modification to the \texttt{t5} architecture to allow a discriminative approach to NLP tasks. Here, we illustrate named entity recognition.}
\label{fig:s4e-architecture}
\end{figure*}

\subsection{Evaluation datasets}

We use a suite of structure-related NLP tasks to measure model structure understanding. Relevant tasks include named entity recognition (NER), sentence boundary detection (SBD), open information extraction (OpenIE), chunking, semantic role labeling (SRL), part-of-speech tagging, and relation classification. We use both public and internal datasets, and check for in- and out-of-distribution generalization. A full list of datasets for each task is shown in Table \ref{table:datasets}.

\begin{table*}
\centering
\begin{tabular}{cll} \toprule
\multicolumn{1}{c}{Task} & \multicolumn{1}{c}{Dataset} & \multicolumn{1}{c}{Source} \\ \midrule
\multicolumn{2}{l}{\textbf{Internal datasets}} \\ \midrule
PR & nc-finPR & Rule-based tagging on finance news \\
NER & nc-mNER & Manual tagging on finance news and corporate filings \\
 & nc-sNER & Semi-supervised tagging on finance news \\
OpenIE & EconIE-PRO & Rule-based tagging on finance news, predicate range optimized \\
\midrule
\multicolumn{2}{l}{\textbf{Public datasets}} \\ \midrule
NER & GENIA & \citet{kim2003genia} \\
 & CoNLL 2003 & \citet{tjong-kim-sang-de-meulder-2003-conll2003}\\
 & ontonotes & \citet{weischedel-2013-ontonotes}\\
SBD & PTB & \citet{marcus-1993-ptb} \\
OpenIE & OIE2016 & \citet{stanovsky-dagan-2016-oie2016} \\
 & CaRB & \citet{bhardwaj-etal-2019-carb} \\
Chunk, POS & CoNLL 2000 & \citet{tjong-kim-sang-buchholz-2000-conll2000} \\
 & CoNLL 2003 & \citet{tjong-kim-sang-de-meulder-2003-conll2003} \\
SRL & CoNLL 2012 & \citet{pradhan-etal-2012-conll2012} \\
ORE & TACRED & \citet{zhang2017tacred} \\
\bottomrule
\end{tabular}
\caption{We use a total of 16 datasets across 8 tasks, including punctuation restoration. Five are internal datasets, while the rest are publicly available.}
\label{table:datasets}
\end{table*}

\section{Results}
\label{sec:results}

We measure the effects of punctuation restoration as an additional pre-training objective on structure understanding abilities, across in- and out-of-distribution performance on the datasets described in Table \ref{table:datasets}. We report the results in various settings, including the generative approach following \citet{raffel2019t5} in Section \ref{sec:generative-results}, the joint multitask approach in Section \ref{sec:multitask-results}, and the discriminative approach following \citet{devlin2019bert} in Section \ref{sec:discriminative-results}.

\subsection{Objective results}

Punctuation restoration is no trivial task \citep{gravano-2009-restoring, gupta2023punctuation, alam2020punctuation}. Should our hypothesis hold, it is likely that syntactic signals from punctuation restoration transfer more effectively in models with stronger punctuation restoration performance. To help determine our experimental setup, we experiment with three sizes of the T5 architecture: small, base, and large. Table \ref{table:punctuation-restoration-performance} includes their punctuation restoration performance, in addition to ChatGPT's \citep{brown-2020-gpt3} zero-shot performance as a reference point, which shows that the objective is nontrivial.

\begin{table}
\centering
\begin{tabular}{lccc} \toprule
Model architecture & P & R & F1 \\
\midrule
ChatGPT 0-shot* & .75 & .71 & .73 \\
T5-small & .91 & .86 & .88 \\
T5-base & .93 & .92 & .93 \\
T5-large & .94 & .93 & .93 \\
\bottomrule
\end{tabular}
\caption{Punctuation restoration performance after 50 epochs (small), 40 epochs (base), and 20 epochs (large) of training, respectively. *ChatGPT 0-shot performance is measured on a small subset of the punctuation restoration evaluation dataset.}
\label{table:punctuation-restoration-performance}
\end{table}

Within the T5 models, there is some correlation between the number of parameters and punctuation restoration performance. Because the performance gap between the \texttt{t5-base} and \texttt{t5-large} models is small, we use the \texttt{t5-base} model for our experiments.

\subsection{Generative approach}
\label{sec:generative-results}

\begin{table*}
\centering
\begin{tabular}{lllcccccc}
\toprule
Task & Training set & Evaluation set & \multicolumn{3}{c}{\texttt{t5-base}} & \multicolumn{3}{c}{+ PR} \\
\cmidrule(lr){4-6} \cmidrule(lr){7-9}
&&&P&R&F1&P&R&F1 \\
\midrule
NER & nc-mNER & ID & .69 & .65 & .67 & .90 & .89 & .89 \\
&& nc-sNER & .67 & .76 & .71 & .74 & .81 & .77 \\
& GENIA & ID & .57 & .73 & .64 & .64 & .76 & .69 \\
& CoNLL03 & ID & .89 & .90 & .89 & .92 & .92 & .92 \\
& ontonotes & ID & .87 & .88 & .88 & .91 & .91 & .91 \\
\midrule
OpenIE & EconIE-PRO & ID & .47 & .43 & .45 & .60 & .63 & .62 \\
&& CaRB & .22 & .16 & .19 & .62 & .42 & .50 \\
& OIE2016 & ID & .16 & .19 & .18 & .19 & .19 & .19 \\
&& CaRB & .10 & .15 & .12 & .26 & .27 & .27 \\
\midrule
Chunking & CoNLL00 & ID & .94 & .94 & .94 & .96 & .96 & .96 \\
&& CoNLL03 & .41 & .41 & .41 & .41 & .42 & .42 \\ \midrule
SRL & CoNLL12 & ID & .75 & .79 & .77 & .84 & .86 & .85 \\
\midrule
SBD & PTB & ID & .97 & .72 & .81 & .98 & .98 & .98 \\
\midrule
POS & CoNLL00 & ID & .96 & .96 & .96 & .98 & .98 & .98 \\
&& CoNLL03 & .74 & .87 & .79 & .64 & .88 & .86 \\ \midrule
RE & TACRED & ID & & & .67 & & & .83 \\
\bottomrule
\end{tabular}
\caption{Results from single-task generative models on structure-related tasks, where we compare the vanilla \texttt{t5-base} model to \texttt{t5-base} with additional pre-training on punctuation restoration (+PR). ID is short for in-distribution evaluation, denoting evaluation on a dataset from the same source as the training set. We observe that additional training on punctuation restoration improves downstream task performance across all tasks and datasets, suggesting that it produces more reliable and robust structural representations.}
\label{table:generative}
\end{table*}

Table \ref{table:generative} contains an overview of model performance on various structure-related tasks with and without additional training on punctuation restoration. Each task performance represents an average over 5 runs. We observe increases in in-distribution and out-of-distribution generalization performance across the board. In particular, we note that sentence boundary detection, arguably the task closest to punctuation restoration, achieves a near-perfect score. Other notable takeaways include the jump in out-of-distribution performance on open information extraction, even when in-distribution generalization improves little.

These results support punctuation restoration as an effective and efficient addition to the current framework of natural language understanding. We interpret this as evidence for our hypothesis that an additional unsupervised learning objective that focuses on capturing structure within natural language improves structure understanding. In addition to the generative approach taken in this section, we discuss whether this supportive behavior persists in other settings like joint multitask learning (Section \ref{sec:multitask-results}) and discriminative learning (Section \ref{sec:discriminative-results}).

\subsection{Joint multitask generative approach}
\label{sec:multitask-results}

\begin{table*}
\centering
\begin{tabular}{ll} \toprule
Source & South Korean studio NCSoft announced Throne and Liberty back in March \\
\midrule
OpenIE & (NCSoft, announced, Throne and Liberty) (NCSoft, is, South Korean studio) \\
NER & (South Korea: LOC) (NCSoft: ORG) \\
Multitask & (NCSoft, announced, Throne and Liberty) (NCSoft: ORG) \\
 & (NCSoft, is, South Korean studio) (South Korea: LOC) \\
\bottomrule
\end{tabular}
\caption{Example output from generative NER, OpenIE, and multitask models, shown for illustration purposes.}
\label{table:multitask-format}
\end{table*}

The joint multitask approach, where we focus on open information extraction using the EconIE-PRO dataset and NER using the nc-mNER dataset, is similar to the generative approach. The input sequence is identical to the experiments from Section \ref{sec:generative-results}, but the output sequence is a concatenation of the output sequences from the two datasets, as illustrated in Table \ref{table:multitask-format} and sketched below. As in the single-task generative approach, we observe that additional unsupervised structure learning via punctuation restoration results in downstream task performance improvement.
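
A minimal sketch of assembling such a joint target from the two task-specific target strings is shown below; the separator and the ordering of facts are illustrative, and the actual format follows Table \ref{table:multitask-format}.

\begin{verbatim}
def joint_target(oie_target, ner_target):
    """Concatenate task-specific target strings
    into one multitask target; the separator
    here is illustrative."""
    return oie_target + " " + ner_target

oie = ("(NCSoft, announced, Throne and Liberty)"
       " (NCSoft, is, South Korean studio)")
ner = "(NCSoft: ORG) (South Korea: LOC)"
target = joint_target(oie, ner)
\end{verbatim}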

\begin{table}
\centering
\begin{tabular}{lcccccc} \toprule
 & \multicolumn{3}{c}{\texttt{t5-base}} & \multicolumn{3}{c}{+ PR} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
 & P & R & F1 & P & R & F1 \\
\midrule
nc-mNER & .86 & .84 & .85 & .87 & .86 & .87 \\
EconIE-PRO & .57 & .60 & .58 & .60 & .62 & .61 \\
\bottomrule
\end{tabular}
\caption{NER (nc-mNER) and OpenIE (EconIE-PRO) performance after joint training, where we compare the vanilla \texttt{t5-base} model to \texttt{t5-base} with additional pre-training on punctuation restoration (+PR). Punctuation restoration improves performance on both NER and OpenIE.}
\label{table:multitask-results}
\end{table}


\subsection{Discriminative approach}
\label{sec:discriminative-results}

Given the results from the single-task generative approach, the transfer from punctuation restoration to the multi-task generative approach may come as no surprise, as there is no drastic difference between the generative nature of the two approaches. However, we report that the improved representations from punctuation restoration transfer non-trivially to the discriminative approach as well, where the decoder block is removed from the model, as illustrated in Figure \ref{fig:s4e-architecture}. After additional pre-training on the punctuation restoration objective, the decoder block of the \texttt{t5-base} model is removed and a newly initialized classification head is placed on top of the encoder block. The architecture is comparable to those of BERT-like encoder-only models. Even when retaining only the encoder weights, we observe that additional unsupervised structure learning via punctuation restoration results in downstream task performance improvement.


\begin{table}
\centering
\begin{tabular}{lcccccc} \toprule
 & \multicolumn{3}{c}{\texttt{t5-base}} & \multicolumn{3}{c}{+ PR} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
 & P & R & F1 & P & R & F1 \\
\midrule
nc-mNER & .78 & .93 & .85 & .83 & .92 & .88 \\
\bottomrule
\end{tabular}
\caption{Average nc-mNER performance over 15 runs, with (+PR) and without (\texttt{t5-base}) punctuation restoration as additional pre-training. Additional training on punctuation restoration improves precision and overall F1.}
\label{table:discriminative-approach-ner}
\end{table}

\section{Discussion}
Results from Section \ref{sec:results} support our hypothesis that complementing the denoising pre-training objective with a structure-reinforcing task improves structure understanding. In particular, we use a punctuation restoration objective, described in Section \ref{sec:objective_design}, and evaluate on the various structure-related tasks listed in Table \ref{table:datasets}. While it is difficult to investigate the exact mechanism by which additional training on punctuation restoration improves learned representations, we attempt to provide an explanation.

Providing additional syntactic or semantic information in the form of parses has proven effective in improving natural language understanding \citep{pradhan-etal-2005-semantic-role, zhang-etal-2019-syntax, lepori-etal-2020-representations}. That is, current methods for representation learning during the pre-training stage lack sufficient syntactic signal, and effective distillation of implicit syntactic sensitivity via additional training should improve structure understanding. Much like how prosody helps disambiguate syntactic structure in human speech processing \citep{kahn2005effective, price1991use}, punctuation can be a useful guide for syntactic structure disambiguation \citep{spitkovsky-etal-2010-profiting}, and eventually for structure understanding and forming a robust representation of text. Because punctuation often indicates syntactic or semantic boundaries, training a computational system to predict punctuation from stripped text can also train the system to predict syntactic and semantic structure within said text, even when there are no punctuation marks to be restored in the original, fully punctuated text. Sufficient training on punctuation restoration, or on other markers of syntactic and semantic structure, can have effects similar to explicitly providing a syntactic or semantic parse, facilitating natural language understanding via a stronger understanding of sentence structure.

Such improvements are not limited to specific domains or datasets and represent an overall increase in representation robustness, as we observe out-of-distribution performance jumps in NER, OpenIE, and chunking. Improvements also persist across decoding methods: entity generation in NER, OpenIE, SRL, and relation classification; tag sequence generation in chunking and POS tagging; sequence generation in sentence boundary detection; and token classification in discriminative NER. Because of the wide range of settings in which improvement is observed, we interpret this as a general improvement in structure understanding rather than fortunate task-specific artifacts of the additional training.

We claim that our methods are democratic in that we employ a non-intrusive unsupervised learning objective that is orthogonal to other architectural or methodological modifications. Punctuation restoration can be applied to reinforce structure understanding and improve the robustness of learned representations regardless of model choice or task-specific engineering policy. The objective requires no supervision, and one can construct a training corpus with little computational or manual resources.

\section{Limitations}

The idea of structure understanding reinforcement via punctuation restoration is still young; decisions relevant to the learning objective in this paper, including the selection of punctuation marks and the source of the learning corpus, were arbitrary and warrant additional investigation in future work. Our set of training hyperparameters will also benefit from additional attention. While we show that structure understanding reinforcement via punctuation restoration is effective in base-sized models for natural language understanding, its effects in larger models, its implications for generative or conversational systems, and its generalization to other languages (and thus language-agnosticity) also need to be studied. Despite many unanswered questions, however, we conclude that punctuation restoration is an effective learning objective that improves structure understanding without supervision.

\section*{Acknowledgments}

We are grateful to members of the Natural Language Understanding Division for their feedback on this project, to Chunghee Lee for his guidance through this work, to Yerang Kim for her thoughtful comments on the paper's structure, to Andrew Matteson for his crucial role in GPU server maintenance, and to Minji Kang for her helpful comments on prosody, syntax, and human language processing. We dedicate this work to our fond memories of Junghwa Lee (1994--2023).

\bibliography{usl}
\bibliographystyle{acl_natbib}

\clearpage
\appendix
\counterwithin{figure}{section}
\counterwithin{table}{section}

\end{document}