\documentclass[11pt,letterpaper]{article} \usepackage{emnlp2017} \usepackage{times} \usepackage{mdwlist} \usepackage{latexsym} \usepackage{mathtools} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{booktabs} \usepackage{xcolor} \usepackage{enumitem} \usepackage{lipsum} \usepackage{graphicx} \usepackage{subcaption} \usepackage{xspace} \usepackage{algorithmic,algorithm} \usepackage{bbm} \renewcommand{\arraystretch}{1.2} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \renewcommand{\vec}[1]{\boldsymbol{#1}} \renewcommand{\equationautorefname}{Eq} \renewcommand{\sectionautorefname}{\S\kern-0.2em} \renewcommand{\subsectionautorefname}{\S\kern-0.2em} \renewcommand{\subsubsectionautorefname}{\S\kern-0.2em} \newcommand{\bleu}{\textsc{Bleu}\xspace} \newcommand{\persentence}{Per-Sentence \bleu} \newcommand{\heldout}{Heldout \bleu} \newcommand{\halcomment}[1]{ \colorbox{magenta!20}{ \parbox{.8\linewidth}{ Hal: #1} }} \newcommand{\jbgcomment}[1]{ \colorbox{green!20}{ \parbox{.8\linewidth}{ Jordan: #1} }} \newcommand{\kcomment}[1]{ \colorbox{cyan!20}{ \parbox{.8\linewidth}{ K: #1} }} \newcommand{\ignore}[1]{} \emnlpfinalcopy \def\emnlppaperid{1355} \newcommand\BibTeX{B{\sc ib}\TeX} \title{Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback} \newcommand{\idcs}{${}^\odot$} \newcommand{\idlsc}{${}^\spadesuit$} \newcommand{\idumiacs}{${}^\diamondsuit$} \newcommand{\idischool}{${}^\clubsuit$} \newcommand{\idmsr}{${}^\heartsuit$} \author{Khanh Nguyen\idcs\idumiacs \and Hal Daum{\'e} III\idcs\idlsc\idumiacs\idmsr \and Jordan Boyd-Graber\idcs\idlsc\idischool\idumiacs \\ University of Maryland: Computer Science\idcs, Language Science\idlsc, iSchool\idischool, UMIACS\idumiacs\\ Microsoft Research, New York\idmsr\\ \tt{ \{kxnguyen,hal,jbg\}@umiacs.umd.edu } } \date{} \begin{document} \abovedisplayskip=12pt plus 3pt minus 9pt \abovedisplayshortskip=0pt plus 3pt \belowdisplayskip=12pt plus 3pt minus 9pt \belowdisplayshortskip=7pt plus 3pt minus 
4pt \maketitle \begin{abstract} Machine translation is a natural candidate problem for reinforcement learning from human feedback: users provide quick, dirty ratings on candidate translations to guide a system to improve. Yet, current neural machine translation training focuses on expensive human-generated reference translations. We describe a reinforcement learning algorithm that improves neural machine translation systems from simulated human feedback. Our algorithm combines the advantage actor-critic algorithm~\cite{mnih2016asynchronous} with the attention-based neural encoder-decoder architecture~\cite{luong2015effective}. This algorithm (a) is well-designed for problems with a large action space and delayed rewards, (b) effectively optimizes traditional corpus-level machine translation metrics, and (c) is robust to skewed, high-variance, granular feedback modeled after actual human behaviors. \end{abstract} \section{Introduction} Bandit structured prediction is the task of learning to solve complex joint prediction problems (like parsing or machine translation) under a very limited feedback model: a system must produce a \emph{single} structured output (e.g., translation) and then the world reveals a \emph{score} that measures how good or bad that output is, but provides neither a ``correct'' output nor feedback on any other possible output \cite{daume15lols,sokolov2015coactive}. Because of the extreme sparsity of this feedback, a common experimental setup is that one pre-trains a good-but-not-great ``reference'' system based on whatever labeled data is available, and then seeks to improve it over time using this bandit feedback. A common motivation for this problem setting is cost. In the case of translation, bilingual ``experts'' can read a source sentence and a possible translation, and can much more quickly provide a rating of that translation than they can produce a full translation on their own. 
Furthermore, one can often collect even less expensive ratings from ``non-experts'' who may or may not be bilingual \cite{hu2014crowdsourced}. Breaking this reliance on expensive data could unlock previously ignored languages and speed development of broad-coverage machine translation systems. All work on bandit structured prediction that we know of makes an important simplifying assumption: the \emph{score} provided by the world is \emph{exactly} the score the system must optimize (\autoref{sec:problem}). In the case of parsing, the score is attachment score; in the case of machine translation, the score is (sentence-level) \bleu. While this simplifying assumption has been incredibly useful in building algorithms, it is highly unrealistic. Any time we want to optimize a system by collecting user feedback, we must take into account: \begin{enumerate}[noitemsep,nolistsep] \item The metric we care about (e.g., expert ratings) may not correlate perfectly with the measure that the reference system was trained on (e.g., \bleu or log likelihood); \item Human judgments might be coarser-grained than traditional continuous metrics (e.g., thumbs up vs. thumbs down); \item Human feedback has high \emph{variance} (e.g., different raters might give different responses to the same system output); \item Human feedback might be substantially \emph{skewed} (e.g., a rater may think all system outputs are poor). \end{enumerate} Our first contribution is a strategy for simulating expert and non-expert ratings to evaluate the robustness of bandit structured prediction algorithms in general, in a more realistic environment (\autoref{sec:noise_model}). We construct a family of perturbations to capture three attributes: \emph{granularity}, \emph{variance}, and \emph{skew}. We apply these perturbations to automatically generated scores to simulate noisy human ratings.
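One plausible instantiation of these three perturbations is a single function that skews, noises, and discretizes a true score. This is only an illustrative sketch, not the fitted noise models described later in the paper; `granularity`, `noise_std`, and `skew` are hypothetical parameter names.

```python
import random

def simulate_rating(score, granularity=None, noise_std=0.0, skew=1.0):
    """Turn a true score in [0, 1] into a simulated human rating.

    Illustrative sketch only: `granularity`, `noise_std`, and `skew`
    are hypothetical parameters, not the paper's fitted noise model.
    """
    s = score ** skew                    # skew: systematically harsh (>1) or lenient (<1) raters
    s += random.gauss(0.0, noise_std)    # variance: disagreement between raters
    s = min(max(s, 0.0), 1.0)            # keep the rating in the valid range
    if granularity is not None:          # granularity: e.g. 5-star or thumbs up/down scales
        s = round(s * (granularity - 1)) / (granularity - 1)
    return s
```

Setting `granularity=2` simulates thumbs up vs. thumbs down; `granularity=5` simulates a five-star interface.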
To make our simulated ratings as realistic as possible, we study recent human evaluation data \cite{graham2017can} and fit models to match the noise profiles in actual human ratings (\autoref{sec:variance}). Our second contribution is a reinforcement learning solution to bandit structured prediction and a study of its robustness to these simulated human ratings (\autoref{sec:method}).\footnote{Our code is at \url{https://github.com/khanhptnk/bandit-nmt} (in PyTorch).} We combine an encoder-decoder architecture of machine translation~\cite{luong2015effective} with the advantage actor-critic algorithm~\cite{mnih2016asynchronous}, yielding an approach that is simple to implement but works on low-resource bandit machine translation. Even with substantially restricted granularity, with high variance feedback, or with skewed rewards, this combination improves pre-trained models (\autoref{sec:results}). In particular, under realistic settings of our noise parameters, the algorithm's online reward and final held-out accuracies do not significantly degrade from a noise-free setting. \section{Bandit Machine Translation} \label{sec:problem} The bandit structured prediction problem \cite{daume15lols,sokolov2015coactive} is an extension of the contextual bandits problem \cite{kakade2008efficient,langford2008epoch} to structured prediction. Bandit structured prediction operates over time $i=1 \dots K$ as: \begin{enumerate}[nolistsep,noitemsep] \item World reveals context $\vec x^{(i)}$ \item Algorithm predicts structured output $\hat{\vec y}^{(i)}$ \item World reveals reward $R \left(\hat{\vec y}^{(i)}, \vec x^{(i)} \right)$ \end{enumerate} We consider the problem of \emph{learning to translate from human ratings} in a bandit structured prediction framework. 
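The three-step protocol above can be sketched as a training loop. This is a minimal sketch, not the paper's implementation: `model.translate`, `model.update`, and `human_rating` are hypothetical stand-ins for the NMT system and the (simulated) human rater.

```python
def bandit_loop(model, sources, human_rating, K):
    """Run K rounds of bandit structured prediction (sketch only;
    `model.translate` and `model.update` are hypothetical interfaces)."""
    total = 0.0
    for i in range(K):
        x = sources[i % len(sources)]    # world reveals context x^(i)
        y_hat = model.translate(x)       # algorithm predicts output y-hat^(i)
        r = human_rating(y_hat, x)       # world reveals reward R(y-hat^(i), x^(i))
        model.update(x, y_hat, r)        # learn from this single scalar alone
        total += r
    return total / K                     # average per-round reward
```

Note that the learner sees only the scalar `r` for the single output it produced, never a reference translation or feedback on alternative outputs.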
In each round, a translation model receives a source sentence $\vec x^{(i)}$, produces a translation $\hat{\vec y}^{(i)}$, and receives a rating $R\left( \hat{\vec y}^{(i)}, \vec x^{(i)} \right)$ from a human that reflects the quality of the translation. We seek an algorithm that achieves high reward over $K$ rounds (high cumulative reward). The challenge is that even though the model knows how good the translation is, it knows neither \emph{where} its mistakes are nor \emph{what} the ``correct'' translation looks like. It must balance exploration (finding new good predictions) with exploitation (producing predictions it already knows are good). This is especially difficult in a task like machine translation, where, for a twenty-token sentence with a vocabulary size of $50k$, there are approximately $50{,}000^{20} \approx 10^{94}$ possible outputs, of which the algorithm gets to test exactly one. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/facebook3} \caption{A translation rating interface provided by Facebook. Users see a sentence followed by its machine-generated translation and can give ratings from one to five stars. } \label{fig:facebook} \end{figure} Despite these challenges, learning from non-expert ratings is desirable. In real-world scenarios, non-expert ratings are easy to collect, while stronger forms of feedback are prohibitively expensive. Platforms that offer translations can get quick feedback ``for free'' from their users to improve their systems (Figure \ref{fig:facebook}). Even in a setting in which annotators are paid, it is much less expensive to ask a bilingual speaker to provide a rating of a proposed translation than it is to pay a professional translator to produce one from scratch. \ignore{ Another scenario is when we want to train the translation model to adapt to user preferences. 
Since preferences are usually easy to perceive but hard to formalize, it is easier for user to provide ratings based on their preferences than to construct explicit examples.} \section{Effective Algorithm for Bandit MT} \label{sec:method} This section describes the neural machine translation architecture of our system (\autoref{sec:neural_mt}). We formulate bandit neural machine translation as a reinforcement learning problem (\autoref{sec:formulation}) and discuss why standard actor-critic algorithms struggle with this problem (\autoref{sec:why_ac_fail}). Finally, we describe a more effective training approach based on the advantage actor-critic algorithm (\autoref{sec:a2c}). \subsection{Neural machine translation} \label{sec:neural_mt} Our neural machine translation (NMT) model is a neural encoder-decoder that directly computes the probability of translating a target sentence $\vec y = (y_1, \cdots, y_m)$ from source sentence $\vec x$: \begin{align} P_{\vec \theta}(\vec y \mid \vec x) = \prod_{t = 1}^m P_{\vec \theta}(y_t \mid \vec y_{