|
{ |
|
"paper_id": "I17-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:38:13.800663Z" |
|
}, |
|
"title": "Procedural Text Generation from an Execution Video", |
|
"authors": [ |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Ushiku", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University", |
|
"location": { |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hayato", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University", |
|
"location": { |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "ahasimoto@mm.media.kyoto-u.ac.jp" |
|
}, |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University", |
|
"location": { |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Shinsuke", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University", |
|
"location": { |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In recent years, there has been a surge of interest in automatically describing images or videos in a natural language. These descriptions are useful for image/video search, etc. In this paper, we focus on procedure execution videos, in which a human makes or repairs something and propose a method for generating procedural texts from them. Since available video/text pairs are limited in size, the direct application of end-to-end deep learning is not feasible. Thus we propose to train Faster R-CNN network for object recognition and LSTM for text generation and combine them at run time. We took pairs of recipe and cooking video as an example, generated a recipe from a video, and compared it with the original recipe. The experimental results showed that our method can produce a recipe as accurate as the state-of-the-art scene descriptions.", |
|
"pdf_parse": { |
|
"paper_id": "I17-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In recent years, there has been a surge of interest in automatically describing images or videos in a natural language. These descriptions are useful for image/video search, etc. In this paper, we focus on procedure execution videos, in which a human makes or repairs something and propose a method for generating procedural texts from them. Since available video/text pairs are limited in size, the direct application of end-to-end deep learning is not feasible. Thus we propose to train Faster R-CNN network for object recognition and LSTM for text generation and combine them at run time. We took pairs of recipe and cooking video as an example, generated a recipe from a video, and compared it with the original recipe. The experimental results showed that our method can produce a recipe as accurate as the state-of-the-art scene descriptions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Massive effort has been done to develop a method for generating text from vision in the field of natural language processing and computer vision. More specifically, there are number of studies on generating captions for given images or videos (Yang et al., 2011; Rohrbach et al., 2013; Karpathy and Fei-Fei, 2015; Shetty and Laaksonen, 2016; Johnson et al., 2016) . Most of the existing researches for video captioning, however, deal with simple and short videos Shetty and Laaksonen, 2016) such as a ten second video in which a man playing guitar in a park.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 262, |
|
"text": "(Yang et al., 2011;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 285, |
|
"text": "Rohrbach et al., 2013;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 313, |
|
"text": "Karpathy and Fei-Fei, 2015;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 341, |
|
"text": "Shetty and Laaksonen, 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 363, |
|
"text": "Johnson et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 490, |
|
"text": "Shetty and Laaksonen, 2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a new problem in this field: generating a procedural text from an execu-tion video such as cooking or machine assembly. The goal is to develop a method that takes video of a chef cooking a dish from ingredients or a mechanic assembling a machine from parts as the input, and outputs a procedural text that helps another person reproduce the same product.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We also give an initial solution to the problem, taking cooking recipe generation as an example. Because no large scale corpus consisting of related execution video and procedural text is available for now, we divide the problem into two subproblems, object recognition and text generation, and train two modules independently using different resources as their training set. Then we combine them and search for the best text. The object recognition module is designed to spot the changes in state of progress of the procedure from video and texts are generated at each time. Finally, some of the generated sentences are selected to cover the entire procedure with discarding redundant sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the experiments, we use KUSK Dataset (Hashimoto et al., 2014) , which consists of pairs of recipes submitted by users to a recipe hosting service Cookpad and video of cooking according to that recipe in a laboratory. The experimental results showed that our method is capable of producing a recipe of reasonable quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 64, |
|
"text": "(Hashimoto et al., 2014)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recent studies on automatic caption generation have reported great results both in images (Xu et al., 2015; Karpathy and Fei-Fei, 2015; Johnson et al., 2016 ) and short video clips Shetty and Laaksonen, 2016) by using convolutional neural network (CNN), recurrent neural network, and LSTM. improved the accuracy with a sequence to sequence model (Sutskever et al., 2014) . In addi-tion, (Laokulrat et al., 2016; Guo et al., 2016) also improved the accuracy of automatic caption generation by introducing an LSTM equipped with an attention mechanism. One of the features of these end-to-end models is that they directly generate sentences from videos without determining content words such as subjects and predicates. Common datasets (Lin et al., 2014; Chen and Dolan, 2011; Torabi et al., 2015; made research on automatic caption generation popular.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 107, |
|
"text": "(Xu et al., 2015;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 135, |
|
"text": "Karpathy and Fei-Fei, 2015;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 156, |
|
"text": "Johnson et al., 2016", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 208, |
|
"text": "Shetty and Laaksonen, 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 370, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 411, |
|
"text": "(Laokulrat et al., 2016;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 429, |
|
"text": "Guo et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 733, |
|
"end": 751, |
|
"text": "(Lin et al., 2014;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 773, |
|
"text": "Chen and Dolan, 2011;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 794, |
|
"text": "Torabi et al., 2015;", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Before the above end-to-end models succeeded, many researchers concentrated models generating sentences via content words or intermediate states (Guadarrama et al., 2013; Rohrbach et al., 2013) . As an advantage of the technique of using intermediate states, object recognition or motion recognition model can be diverted as it is. Thus data of pairs of a medium and a caption have not been particularly required. These methods with intermediate states are inferior in accuracy to the end-to-end models using CNN and LSTM in case that enough size of training data are available. On the other hand, since creation of medium-caption pairs is expensive, methods using intermediate states are also considered to be sufficiently practical for a problem where we have insufficient size of data available for model training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 170, |
|
"text": "(Guadarrama et al., 2013;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 193, |
|
"text": "Rohrbach et al., 2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Unlike conventional methods using intermediate states such as subjects, objects, and predicates, for procedure execution videos, there is a problem that the use of recognition results of general actions is not appropriate because of the abstraction level. It is considered preparing tailored data for motion recognition for each kind of procedure execution videos have high cost because it is often vague even for human annotators to assign every concrete motions into text-level motion categories. In contrast, objects directly appear in texts and there is much less ambiguity than motions. Therefore, it is reasonable for the procedural text generation to focus more on object recognition than motion recognition. In addition, the procedure execution videos generally show works performed by one person, thus subject recognition is not necessary. It is preferable to set the object recognition results as an intermediate state and generate sentences from it. Since predicates are not easy to be recognized, they are estimated or supplemented from recognized objects using language knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many studies generate a caption consisting of (Kaufman et al., 2016) , which gives captions for a movie that is divided into scenes beforehand.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 68, |
|
"text": "(Kaufman et al., 2016)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we describe our novel task in detail. Then we present prerequisites of our solution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We propose a task of generating a procedural text from an execution video. Figure 1 shows the overview of the task. In general, an execution video records a sequence of activities to make or repair something from the beginning to the end. As the first trial, we deal with cooking videos in which only one person appears (mainly the hands only). In the beginning, there are some ingredients and tools on the cooking table and some appear in the video later. Then it finishes with a completed dish. This is the input of the task. The output of our task is a procedural text, consisting of some sentences in a natural language, which explains procedures to be conducted by workers to make or repair something. The counterpart of cooking videos of the first trial is recipes. A recipe describes how to cook a certain dish. In general, a recipe includes the dish name and an ingredient list in addition to the instruction text part. In our task, however, we focus on generating the text part only. Thus, this is the output of the task. In the subsequent sections, we refer to that text part by the term recipe.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 83, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Procedural Text Generation from Video", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As an evaluation metrics, it is preferable to measure how much the output text helps another chef produce the same dish. Thus, the ideal may be objective evaluation over the dishes produced by chefs reading the generated recipes. We propose, however, to adopt BLEU score as a metric of procedural text for the convenience of automated evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Procedural Text Generation from Video", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One of the advantages to choose the cooking domain as a benchmark of procedural text generation from video is that there are a huge number of recipes available on the Web. Therefore it is easy to develop a generative model of recipes for the task. In addition, there are recipe/video pairs available for various researches. For example, the KUSK Dataset (Hashimoto et al., 2014 ), which we use in the experiments, contains recipes and their cooking videos. Note that the lengths of these cooking videos are about 20 minutes or more, which are much longer than video clips used in automatic video captioning researches. And also note that the texts are kinds of summaries mentioning only the necessary objects and actions to complete a certain mission. Such texts are intrinsically different from scene descriptions in automatic video (or image) captioning researches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 377, |
|
"text": "(Hashimoto et al., 2014", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Procedural Text Generation from Video", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To solve the problem above, we enumerate the preconditions necessary for our method in the recipe generation case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prerequisites", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "First we assume a set of terms (word sequences) called named entities (x-NEs) representing important object names in the target domain x. They are the objects to be recognized by computer vision (CV).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Specific Named Entity", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "In the recipe case, noun phrases for ingredients and tools are important object names. In this paper, we adopt the recipe named entities (r-NEs) defined in , whose types are listed in Table 1. There are eight r-NE tag types, but our CV part recognizes only foods (F) and tools (T). We use the notation \"\u30c1\u30f3\u30b2\u30f3 \u83dc/F\" (\"qing-geng-cai/F\") to indicate that \"\u30c1\u30f3\u30b2\u30f3 \u83dc\" is an r-NE and its type is food (F) 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Specific Named Entity", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "In order to develop a useful generative model we must locate x-NEs in given sentences. So-called named entity recognizer (NER) is suitable for this task. In this paper, we adopt NERs based on sequence labeling techniques that can be trained by an annotated corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognizer", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Our method requires the module that can detect the appearance and the disappearance of materials and tools involved in the procedure. In the cooking video case, we use Faster R-CNN model (Ren et al., 2015) fine-tuned with relatively small set of images of foods and cooking tools.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Object Recognition", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "As we mentioned in Section 1, there is no large amount of video/sentence pairs available for our problem. But instead, in some cases, large textonly corpus is available in the domain. The corpus will allow us to train a generative model of the instruction sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Procedural Text Examples", |
|
"sec_num": "3.2.4" |
|
}, |
|
{ |
|
"text": "In this section, we explain the proposed method for recipe generation from cooking videos. The out- line of this method is shown in Figure 2 . First, we recognize objects in the video as a sequence of frames with a CNN and give an r-NE tag to each object (Figure 2 A, B ). Next, we create an r-NE sequence from each partial frame sequence (Figure 2 C) and generate a candidate recipe sentence for each corresponding r-NE sequence (Figure 2 D) . Each candidate recipe sentence is the one which maximizes the score indicating the likelihood of a sentence as a procedural text within the partial frame sequence. Finally, we select the sequence of recipe candidate sequences that maximizes the total score through the entire video based on Viterbi search. We output that sentence sequence as the procedural text for the input procedure execution video (Figure 2 E).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 140, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 269, |
|
"text": "(Figure 2 A, B", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 442, |
|
"text": "(Figure 2 D)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proposed Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Object recognition is performed only on the frames at which the chef picks up an object or places it, that is provided in KUSK Object Dataset (Hashimoto et al., 2016) with the object regions. Note that the provided frames and regions can contain plural objects because the method used in (Hashimoto et al., 2016) is based on background subtraction. To divide the detected region into object-wise regions, we adopted Faster R-CNN (Ren et al., 2015). This neural network outputs identified object region as a rectangular area while recognizing its category (Figure 2 A) . It also provides confidence as a probability. An example of visualization of object recognition is shown in Figure 3 , where a cutting board and a knife are in the region detected by (Hashimoto et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 166, |
|
"text": "(Hashimoto et al., 2016)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 312, |
|
"text": "(Hashimoto et al., 2016)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 753, |
|
"end": 777, |
|
"text": "(Hashimoto et al., 2016)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 555, |
|
"end": 567, |
|
"text": "(Figure 2 A)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 678, |
|
"end": 686, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Object Recognition", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We utilized Faster R-CNN's ability of object region identification to suppress another type of false detection. The regions provided in (Hashimoto et al., 2016) contains objects that are moved only slightly by coming in contact with the hands. Such objects should not be related to the procedure. To suppress such detection but spot only objects obviously related to the procedure, we compare the location of object-wise regions before and after the contact, and ignore object regions if they have the same object name and have a certain score in Jaccard index, which is general method to measure the size of intersection of two regions. After the test of region intersection, only the objects with an obvious location change are regarded as procedure-related. This module passes only procedure-related objects to the second module. Note also that we discarded objects whose name is not listed in x-NEs before passing them to the second module.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 160, |
|
"text": "(Hashimoto et al., 2016)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Object Recognition", |
|
"sec_num": "4.1" |
|
}, |
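To make the intersection test above concrete, here is a minimal Python sketch of the Jaccard-based filtering; it is an illustrative reconstruction, not the authors' code, and the (x1, y1, x2, y2) box format, the detection-list layout, and the threshold value are assumptions.

```python
def jaccard(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def procedure_related(before, after, x_ne_names, threshold=0.5):
    """Keep only detections whose location clearly changed between the frames
    before and after the contact. 'before' and 'after' are lists of
    (name, box, confidence) tuples from the object recognizer."""
    kept = []
    for name, box, conf in after:
        if name not in x_ne_names:   # discard objects not listed as x-NEs
            continue
        unchanged = any(name == n and jaccard(box, b) >= threshold
                        for n, b, _ in before)
        if not unchanged:            # obvious location change -> procedure-related
            kept.append((name, box, conf))
    return kept
```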
|
{ |
|
"text": "Hereafter, we only focus the frames with the procedure-related objects listed in x-NEs, and describe the sequence of such frames as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Object Recognition", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f = f 1 , f 2 , . . . , f |f | ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Object Recognition", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where f i is the i-th frame and |f | is the length of the sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Object Recognition", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We use the named entity recognizer (Sasada et al., 2015) to the object in the i-th frame f i (Figure 2 B) . Let E i be the object set whose tags are F or T in f i . Then, we denote the number of elements in this set as |E i |. The j-th r-NE of E i is denoted as e j i . Then P (e j i |f i ) denotes the conditional probability in which the element e j i (a food or a tool) is estimated to exist in the frame f i .", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 56, |
|
"text": "(Sasada et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 106, |
|
"text": "(Figure 2 B)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Recognition", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "Let f_i^{i+(l\u22121)} = f_i, f_{i+1}, ..., f_{i+(l\u22121)} be a substring, of length l, of f that corresponds to a single recipe sentence. A frame f_i may contain some r-NEs E_i. Then a sequence of r-NEs contained in f_i^{i+(l\u22121)} can be expressed by e \u2208 E_i \u00d7 E_{i+1} \u00d7 ... \u00d7 E_{i+(l\u22121)}. Note that the number of all possible sequences is \u220f_{k=i}^{i+(l\u22121)} |E_k|.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recipe Named Entity Sequence",

"sec_num": "4.3"

},
|
{ |
|
"text": "For example in Figure 2 , e is (cutting board/T, meat/F) or (knife/T, meat/F).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 23, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Sequence", |
|
"sec_num": "4.3" |
|
}, |
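The enumeration of r-NE sequences over E_i × ... × E_{i+(l-1)} can be illustrated with a short sketch using itertools.product; the toy detections and variable names below are hypothetical.

```python
from itertools import product

# E[k]: the set of (r-NE, recognition probability) pairs found in frame f_k.
E = [
    [("cutting board/T", 0.8), ("knife/T", 0.6)],   # frame f_i
    [("meat/F", 0.9)],                              # frame f_{i+1}
]

# Every element of 'candidates' is one possible r-NE sequence e;
# there are prod(|E_k|) of them, here 2 * 1 = 2.
candidates = list(product(*E))
for e in candidates:
    print([name for name, _ in e])   # ['cutting board/T', 'meat/F'], ['knife/T', 'meat/F']
```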
|
{ |
|
"text": "In addition, in order to treat a sequence as a set, we introduce the following notation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Sequence", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "{e} = {e j k k |i \u2264 k \u2264 i + (l \u2212 1)}.", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Recipe Named Entity Sequence", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Note that j k depends on k.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Sequence", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "[Figure 4 content] Example for LSTM language model training: the recipe sentence \u30d5\u30e9\u30a4\u30d1\u30f3\u306b\u8339\u3067\u305f\u30d1\u30b9\u30bf\u3092\u3044\u308c\u308b (\"Put the boiled pasta in a pan.\") is word-segmented and r-NE tagged as \u30d5\u30e9\u30a4\u30d1\u30f3/T \u306b/O \u8339\u3067/Ac \u305f/O \u30d1\u30b9\u30bf/F \u3092/O \u3044\u308c/Ac \u308b/O, its r-NE set {\u30d5\u30e9\u30a4\u30d1\u30f3/T (pan), \u30d1\u30b9\u30bf/F (pasta)} is extracted, and the LSTM is trained to generate the sentence from that set.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recipe Sentence",

"sec_num": null

},
|
{ |
|
"text": "Figure 4: LSTM language model training. This model generates a sentence given an r-NE set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Considering the likelihood of object recognition and the likelihood of a combination of r-NEs included in the sequence, we set the likelihood P (e) that e appears as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (e) = P (e) \u00d7 P ({e}) \u2212l ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where P (e) is the average of the probability of the result of object recognition:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (e) = 1 l i+(l\u22121) \u2211 k=i P (e j k k |f k ).", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This value indicates the likelihood of object recognition. Also P ({e}) \u2212l is the likelihood of a combination of r-NEs determined from the frequency of a sentence in which all the elements of {e} appear in the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P ({e}) = ( count({e}) C ) ,", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where C is the number of sentences in the recipe corpus and count({e}) is the frequency of sentences in which all the elements of {e} appear at the same time. Thus, this value indicates the likelihood of the r-NE combination. In addition as the number of elements in the r-NE set increases, the frequency decreases. This is the reason why we introduce P ({e}) \u2212l considering the sequence length l.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence", |
|
"sec_num": null |
|
}, |
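A minimal sketch of Equations (3)-(5), assuming the per-frame recognition probabilities and a sentence-level view of the recipe corpus are already available; the names are illustrative, and the returned value is a score following Equation (3) as given, not a normalized probability.

```python
def likelihood(e, corpus_sentences):
    """e: sequence of (r-NE name, P(e_k^{j_k}|f_k)) pairs for one partial frame sequence.
    corpus_sentences: each corpus sentence represented as the set of r-NEs it contains."""
    l = len(e)
    # Equation (4): average object recognition probability over the partial frame sequence.
    p_bar = sum(prob for _, prob in e) / l
    # Equation (5): relative frequency of corpus sentences containing all r-NEs of {e}.
    names = {name for name, _ in e}
    count = sum(1 for sent in corpus_sentences if names <= sent)
    p_set = count / len(corpus_sentences)
    # Equation (3): combine the two factors with the length-dependent exponent -l.
    return p_bar * (p_set ** -l) if p_set > 0 else 0.0

corpus = [{"pan/T", "pasta/F"}, {"knife/T", "meat/F"}, {"meat/F"}]
print(likelihood([("knife/T", 0.6), ("meat/F", 0.9)], corpus))   # 0.75 * 3^2 = 6.75
```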
|
{ |
|
"text": "For each partial frame sequence, we generate the most likely sentence and its score without referring to the neighboring sentences. Some of these sentences may, however, be discarded in the next step. Thus we call it a recipe sentence candidate. The input to this process is the r-NE sequence and the scores of the r-NEs. And the output is the recipe sentence candidate that maximizes the score for the given partial frame sequence (Figure 2 D) . For the sentence candidate generation we use an LSTM language model. It outputs a sentence and its likelihood. Different from the ordinary LSTM, it takes a set of r-NEs as the input, but not a sequence. In addition, it is trained on the corpus in which r-NEs are recognized and replaced with r-NE tags as summarized in Figure 4 . The first step of its training is preprocessing, in which we conduct word segmentation (Neubig et al., 2011) (not necessary for English or some other languages) and r-NE recognition (Sasada et al., 2015) for each recipe sentence in the recipe corpus (Figure 4 A) . Then we filter out sentences containing r-NEs other than Ac, F and T and delete Ac tags for the reasons below:", |
|
"cite_spans": [ |
|
{ |
|
"start": 959, |
|
"end": 980, |
|
"text": "(Sasada et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 444, |
|
"text": "(Figure 2 D)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 766, |
|
"end": 774, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1027, |
|
"end": 1040, |
|
"text": "(Figure 4 A)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 We cannot get information about r-NEs other than F and T by the object recognition module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 A predicate denoting an action (Ac) is necessary for a complete sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Putting it in another way, our method guesses a suitable predicate (verb) from the objects (foods and/or tools) and the corpus. From each of the resultant sentences, we generate a sentence in which F and T are replaced with tags and the set of the r-NEs contained in it (Figure 4 B) . Finally we train the LSTM language model on the corpus. The LSTM can map a set of r-NEs to a recipe sentence with its likelihood (Figure 4 C) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 282, |
|
"text": "(Figure 4 B)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 426, |
|
"text": "(Figure 4 C)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "As the likelihood of this module, our method returns the following score:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Score(e) = P LSTM (r max (e)|e) \u00d7 P (e),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "where r(e) is the sentence generated by the LSTM language model given e as the input. P LSTM (r(e)|e) is the generation probability of r(e). r max (e) is the sentence that maximizes P LSTM (r(e)|e) with the beam search decoder given e. r max (e) = argmax r(e)\u2208R(e) P LSTM (r(e)|e),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "where R(e) is a set of sentences that can be generated by beam search when e is the input and r(e) is the sentence corresponding to it. The generation probability of a sentence is calculated by the following formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "P LSTM (r(e)|e) = N d \u220f k=1 P (d k |d 1 , d 2 , ..., d k\u22121 ; e),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "where r(e) = d 1 , d 2 , ..., d N d is a word string and N d is the length of the word string. And P (d k |d 1 , d 2 , . .., d k\u22121 ; e) denotes the generation probability of the k-th word d k , when the input is e. The sentence is generated by the LSTM language model by beam search. The sentence is, however, aborted when the word length exceeds 20 or the terminal symbol appears. P (e) is introduced to reflect the likelihood that the r-NE sequences e appear (see Equation 3). Calculating the above scores for all the possible e of a partial frame sequence, we define e max as the r-NE sequences which maximize the score. At this stage, the generated sentence is no more than a recipe sentence candidate r max (e max ), whose score is Score(e max ). When the scores earned by partial frame sequences are all 0, no recipe sentence candidate is generated.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 35, |
|
"text": "d 1 , d 2 , ..., d N d", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 121, |
|
"text": "And P (d k |d 1 , d 2 , .", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recipe Sentence Candidate Generation", |
|
"sec_num": "4.4" |
|
}, |
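The candidate scoring of this section could be assembled as in the following sketch; generate() stands for a hypothetical beam-search decoder around the trained LSTM language model, and likelihood() is any implementation of Equation (3), e.g. the sketch given after Equation (5) above.

```python
from itertools import product

def best_candidate(partial_frames, generate, likelihood, corpus_sentences):
    """Return (sentence, score) for one partial frame sequence.

    partial_frames: list of E_k, each a list of (r-NE name, probability) pairs.
    generate(r_ne_set) -> (sentence, p_lstm): hypothetical beam-search decoder.
    likelihood(e, corpus_sentences) -> P(e) as in Equation (3)."""
    best_sentence, best_score = None, 0.0
    for e in product(*partial_frames):            # all r-NE sequences of this span
        p_e = likelihood(e, corpus_sentences)     # Equation (3)
        if p_e == 0.0:
            continue
        sentence, p_lstm = generate({name for name, _ in e})
        score = p_lstm * p_e                      # Score(e) = P_LSTM(r_max(e)|e) * P(e)
        if score > best_score:
            best_sentence, best_score = sentence, score
    return best_sentence, best_score
```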
|
{ |
|
"text": "As we see above, a set of recipe sentence candidates is generated from the partial frame sequences. The frame sequence is divided into partial sequences so that the overall score of the division, which is the sum of the Score(e) in each partial sequence, is maximized (Figure 2 E) . The partial sequences cover the entire video, thus the corresponding sentences, sequences of r(e), form a complete recipe.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 280, |
|
"text": "(Figure 2 E)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generating Recipe", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Since it is almost impossible for one chef to perform two operations in parallel, the corresponding partial frame sequences of the recipe sentence candidates must not overlap. In addition, in order to prevent the same recipe sentence from appearing more than once, the score of the recipe sentence candidate which has appeared once in the recipe is set to be 0. Under this condition, the score of a recipe sentence candidate can change. Although it should be totally searched for score maximization, we use the Viterbi algorithm for the calculation, because the change of the score is limited at the time of generation of the same sentence and it is considered that it does not occur so much.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating Recipe", |
|
"sec_num": "4.5" |
|
}, |
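A simplified dynamic-programming sketch of this segmentation step, standing in for the Viterbi search described above; the duplicate-sentence penalty is omitted, spans are allowed to be skipped when no candidate exists, and candidate(start, length) is assumed to return the best (sentence, score) for the partial frame sequence of that length starting at frame index start.

```python
def assemble_recipe(num_frames, candidate, max_len=3):
    """Split frames 0 .. num_frames-1 into non-overlapping spans of length
    1 .. max_len so that the sum of span scores is maximal, and return the
    corresponding sentence sequence."""
    best = [0.0] * (num_frames + 1)    # best[i]: best total score for frames < i
    back = [None] * (num_frames + 1)   # back[i]: (span start, sentence) of the last step
    for i in range(1, num_frames + 1):
        best[i], back[i] = best[i - 1], (i - 1, None)   # option: emit nothing for frame i-1
        for l in range(1, min(max_len, i) + 1):
            sentence, score = candidate(i - l, l)
            if sentence is not None and best[i - l] + score > best[i]:
                best[i] = best[i - l] + score
                back[i] = (i - l, sentence)
    recipe, i = [], num_frames
    while i > 0:                       # backtrack to recover the chosen sentences
        start, sentence = back[i]
        if sentence is not None:
            recipe.append(sentence)
        i = start
    return list(reversed(recipe))
```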
|
{ |
|
"text": "By calculating the path of the recipe sentence candidate sequence for increasing the score, the generated recipe sentence sequence is output as a recipe. The higher the score, the more recipe-like the sentences are.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating Recipe", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "In this section we evaluate our method experimentally. We first describe the settings of the experiments, then report the experimental results, and finally evaluate our method. 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We used the following dataset to train and evaluate our model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "KUSK Dataset This dataset contains 20 recipes and corresponding cooking videos.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test Dataset", |
|
"sec_num": "5.1.1" |
|
}, |
|
{ |
|
"text": "KUSK Object Dataset This dataset contains 180 categories of objects in total, which comprise ingredients, cooking tools, and others (bottle cap, dish cloth, and so on), observed in cooking videos in KUSK Dataset. Since all videos are recorded at the same kitchen, exactly the same cooking tools appear through all videos, including ones in the test set. More detailed information and examples are available in (Hashimoto et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 410, |
|
"end": 434, |
|
"text": "(Hashimoto et al., 2016)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Train Dataset", |
|
"sec_num": "5.1.2" |
|
}, |
|
{ |
|
"text": "Cookpad NII corpus This corpus contains 1720000 recipes collected from cookpad website. 187700 sentences are extracted for training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Train Dataset", |
|
"sec_num": "5.1.2" |
|
}, |
|
{ |
|
"text": "Flow Graph Corpus This corpus contains randomly chosen 208 recipes (867 sentences) from Cookpad NII corpus. The text is annotated with the r-NE tags. .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Train Dataset", |
|
"sec_num": "5.1.2" |
|
}, |
|
{ |
|
"text": "As the first module, an object recognizer for frames, we use Faster R-CNN(Ren et al., 2015).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Faster-RNN and Named Entity Recognizer", |
|
"sec_num": "5.1.3" |
|
}, |
|
{ |
|
"text": "2 The code used in our experiment is available on our website. http://www.ar. media.kyoto-u.ac.jp/member/hayato/ procedural-text-generation/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Faster-RNN and Named Entity Recognizer", |
|
"sec_num": "5.1.3" |
|
}, |
|
{ |
|
"text": "We fine-tuned Faster R-CNN with KUSK Object Dataset. The dataset contains 180 categories in total, but some categories, for example dish clothes or bottle caps, will not appear in recipe texts. Thus we ignored such categories and used 95 categories to fine-tune the Faster R-CNN model, which is done in the manner of leave-one-video-out.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Faster-RNN and Named Entity Recognizer", |
|
"sec_num": "5.1.3" |
|
}, |
|
{ |
|
"text": "Because this module is a pre-process of the second module, to achieve higher recall rather than a higher precision, we used any detection proposals from Faster R-CNN with more than 0.01% in confidence score, and set the intersection threshold of Jaccard Index 0.5. This setting earned 78.8% of recall and 22.3% of the precision on average through the 95 categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Faster-RNN and Named Entity Recognizer", |
|
"sec_num": "5.1.3" |
|
}, |
|
{ |
|
"text": "For the second module we trained an NE recognizer PWNER (Sasada et al., 2015) , which is based on support vector machines and Viterbi best path search, with Flow Graph Corpus. Its accuracy is about 90% in F-measure .", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 77, |
|
"text": "(Sasada et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Faster-RNN and Named Entity Recognizer", |
|
"sec_num": "5.1.3" |
|
}, |
|
{ |
|
"text": "When generating the r-NE sequences, we should specify the sequence length l. Most of the sentences in our recipe corpus contain no more than three r-NEs of F or T 3 . So we set the length of frame sequences as l = 1 \u223c 3. The training data of the LSTM language model consists of 11,705 sentences and the number of r-NE tokens is 4,025. These training data are a set of recipe sentences extracted so as to satisfy the following conditions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Sequence and Recipe Sentence Candidate Generation", |
|
"sec_num": "5.1.4" |
|
}, |
|
{ |
|
"text": "\u2022 The total number of F and T is between 1 and 3,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Sequence and Recipe Sentence Candidate Generation", |
|
"sec_num": "5.1.4" |
|
}, |
|
{ |
|
"text": "\u2022 Each sentence does not contain any r-NE other than Ac, F, and T (see Section 4.4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Sequence and Recipe Sentence Candidate Generation", |
|
"sec_num": "5.1.4" |
|
}, |
|
{ |
|
"text": "As a result the LSTM language model has a tendency not to generate sentences containing 4 or more r-NEs. The setting of the LSTM language model training is as follows. The epoch number is 100, the batch size is 100, and the number of units of LSTM is 1,000. The objective function is the softmax cross entropy and the optimization algorithm is Adam (Kingma and Ba, 2014) . The beam width for recipe sentence candidate generation is set to be 1. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 370, |
|
"text": "(Kingma and Ba, 2014)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recipe Named Entity Sequence and Recipe Sentence Candidate Generation", |
|
"sec_num": "5.1.4" |
|
}, |
|
{ |
|
"text": "The result of the proposed method Figure 5 : The original recipe for a cooking video and the generated recipe by the proposed method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 42, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The original recipe", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We generated a recipe for each of 16 cooking videos corresponding to seven recipes in KUSK Dataset. As we mentioned in Section 3 they are excluded from the training data. In order to investigate the effectiveness of P ({e}) \u2212l , we compared the results of the models with and without it. The evaluation metrics is BLEU (N = 1 \u223c 4) (Papineni et al., 2002) taking the original humanwritten recipes as the reference. The cooking actions in the KUSK Dataset video part were performed with following these recipes. Unlike BLEU calculation in MT, we treat the entire recipe, a sequence of sentences, as the unit instead of a single sentence. This is because one can describe the same actions in various ways with different number of sentences. An example pair is \"cut onions and potatoes.\" and \"cut onions. then cut potatoes.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 354, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1.5" |
|
}, |
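A sketch of the recipe-level BLEU computation described above, using NLTK's sentence-level BLEU on the concatenated recipes; the whitespace tokenization and the smoothing choice are assumptions, not necessarily the authors' exact setup.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def recipe_bleu(reference_recipe, generated_recipe, n=4):
    """Treat each whole recipe (a list of sentences) as one unit and
    compute BLEU-n over the concatenated token sequences."""
    ref_tokens = [tok for sent in reference_recipe for tok in sent.split()]
    hyp_tokens = [tok for sent in generated_recipe for tok in sent.split()]
    weights = tuple([1.0 / n] * n)
    return sentence_bleu([ref_tokens], hyp_tokens, weights=weights,
                         smoothing_function=SmoothingFunction().method1)

reference = ["cut onions and potatoes ."]
generated = ["cut onions .", "then cut potatoes ."]
print(recipe_bleu(reference, generated))
```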
|
{ |
|
"text": "Since our task is quite novel and existing end-toend video captioning methods do not obviously work because of lack of large training data, there is no direct baseline. Thus we discuss absolute BLEU scores of some settings and examples of generated sentences. Table 2 shows the BLEU scores. The absolute BLEU values (ex. 5.50 for N = 4) are much higher than the results of cinema caption generation (Kaufman et al., 2016 ) (0.8 for N = 4), which is regarded as one of the state-of-the-arts of text generation for videos longer than video clips. This result is worth noting considering that cooking videos are raw recording of execution and not edited nor divided into scenes, while input of cinema caption generation is an edited video and scene segmentation is available. Our higher accuracy may be due to a large amount of text data in the target domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 399, |
|
"end": 420, |
|
"text": "(Kaufman et al., 2016", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 267, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We then examined generated recipes and the original recipes. Figure 5 presents a recipe example actually generated by the proposed method and its original recipe used in the cooking video recording. We see that there are suitable sentences such as \"\u633d\u8089\u3092\u7092\u3081\u308b\u3002\" (\"Stir-fry minced meat.\"), \"\u5375\u306f\u307b\u3050\u3057\u3066\u304a\u304f\u3002\" (\"Beat an egg.\") in the result. These sentences correspond to \"\u30df\u30f3\u30c1\u3092\u3044\u305f\u3081\u3066\u3001\u8272\u304c\u304b\u308f\u3063\u305f\u3089\u3001\" (\"Saute the meat mince until the color changes\") and \"\u5375\u3092\u3068\u3044\u3066\u3001\" (\"Beat an egg,\") in the original recipe. On the other hand, the result contains some unnecessary sentences. For example, in the third line of the generation result, \"\u5305\u4e01\u3067\u3059\u3092\u4f7f\" (\"Use the knife is.\").", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 69, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The sentence itself is semantically correct, but is not suitable for a recipe (and grammatically wrong). This is actually the difference from the existing video clip description research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Even if the object recognition functions perfectly, the sentence generation part has to ignore some objects focusing only on the actions to be taken. Such errors can be alleviated by considering the recipe structure such as relations of r-NEs. There are also ungrammatical sentences such as \"\u304a\u597d\u307f\u3067\u3067\u3092\" (\"pour over it that if if you like and serve\") in the result. This sort of errors are caused by the LSTM language model. We may need a language model incorporating grammatical structures (Chelba and Jelinek, 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 487, |
|
"end": 513, |
|
"text": "(Chelba and Jelinek, 2000)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Despite the errors mentioned above, our method solves the novel problem, procedural text generation from execution video in a certain accuracy. As it is clear from the explanation of our method, it has the correspondence between the sentence and the video frame region. Thus one can use our method for various practical multimedia applications, such as multimedia document generation from an execution video.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this paper, we have proposed a novel task of procedural text generation from an execution video and the first attempt at solving it. Contrary to the ordinary video captioning task, it requires some kind of abstraction, that is, selecting objects to be mentioned. In addition, no existing end-to-end method is applicable due to the limited amount of video/text pairs for training. Instead, our method decomposes the problem into object recognition and sentence generation. Then we train the models for them independently with maximum available resources for each one. Finally we search for the best procedural text referring to them at once. For evaluation, we conduct recipe generation from cooking videos as an example case. The quality was as good as or better than the state-of-the-art scenario description for cinemas. Thus we can say that our method is promising to solve this novel task. We also gave some error analyses to allow further improvements in solutions of this difficult but interesting task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The language resources used in our experiments are in Japanese. Thus our system outputs recipes in Japanese. However, our method can generate recipes in another language by preparing the prerequisites in that language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The percentage is slightly less than 75%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "In this paper, we used recipe data provided by Cookpad and the National Institute of Informatics. The work is supported by JSPS Grants-in-Aid for Scientific Research Grant Number 26280084.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Structured language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Ciprian", |
|
"middle": [], |
|
"last": "Chelba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederick", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computer Speech and Language", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "283--332", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ciprian Chelba and Frederick Jelinek. 2000. Struc- tured language modeling. Computer Speech and Language 14:283-332.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Collecting highly parallel data for paraphrase evaluation", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William B", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "190--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David L Chen and William B Dolan. 2011. Collect- ing highly parallel data for paraphrase evaluation. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies-Volume 1. Association for Com- putational Linguistics, pages 190-200.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Long-term recurrent convolutional networks for visual recognition and description", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Donahue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [ |
|
"Anne" |
|
], |
|
"last": "Hendricks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergio", |
|
"middle": [], |
|
"last": "Guadarrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhashini", |
|
"middle": [], |
|
"last": "Venugopalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [ |
|
"Darrell" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2625--2634", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadar- rama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recogni- tion and description. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion. pages 2625-2634.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition", |
|
"authors": [ |
|
{ |
|
"first": "Sergio", |
|
"middle": [], |
|
"last": "Guadarrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niveda", |
|
"middle": [], |
|
"last": "Krishnamoorthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Malkarnenkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhashini", |
|
"middle": [], |
|
"last": "Venugopalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 14th International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2712--2719", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergio Guadarrama, Niveda Krishnamoorthy, Girish Malkarnenkar, Subhashini Venugopalan, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2013. Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In Proceedings of the 14th International Conference on Computer Vision. Sydney, Australia, pages 2712-2719.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Attention-based lstm with semantic consistency for videos captioning", |
|
"authors": [ |
|
{ |
|
"first": "Zhao", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lianli", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingkuan", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Shao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng Tao", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 ACM on Multimedia Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "357--361", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhao Guo, Lianli Gao, Jingkuan Song, Xing Xu, Jie Shao, and Heng Tao Shen. 2016. Attention-based lstm with semantic consistency for videos caption- ing. In Proceedings of the 2016 ACM on Multimedia Conference. ACM, pages 357-361.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Kusk object dataset: Recording access to objects in food preparation", |
|
"authors": [ |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shinsuke", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Iiyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michihiko", |
|
"middle": [], |
|
"last": "Minoh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proc. of IEEE International Conference on Multimedia and Expo Workshops", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atsushi Hashimoto, Shinsuke Mori, Masaaki Iiyama, and Michihiko Minoh. 2016. Kusk object dataset: Recording access to objects in food preparation. In Proc. of IEEE International Conference on Multime- dia and Expo Workshops. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "KUSK Dataset: Toward a direct understanding of recipe text and human cooking activity", |
|
"authors": [ |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sasada", |
|
"middle": [], |
|
"last": "Tetsuro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoko", |
|
"middle": [], |
|
"last": "Yamakata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shinsuke", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michihiko", |
|
"middle": [], |
|
"last": "Minoh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Workshop on Smart Technology for Cooking and Eating Activities", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "583--588", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atsushi Hashimoto, Sasada Tetsuro, Yoko Yamakata, Shinsuke Mori, and Michihiko Minoh. 2014. KUSK Dataset: Toward a direct understanding of recipe text and human cooking activity. In Workshop on Smart Technology for Cooking and Eating Activities. pages 583-588.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Densecap: Fully convolutional localization networks for dense captioning", |
|
"authors": [ |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Karpathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4565--4574", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justin Johnson, Andrej Karpathy, and Li Fei-Fei. 2016. Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition. pages 4565-4574.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Deep visualsemantic alignments for generating image descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Karpathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3128--3137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- tions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3128-3137.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Temporal tessellation for video annotation and summarization", |
|
"authors": [ |
|
{ |
|
"first": "Dotan", |
|
"middle": [], |
|
"last": "Kaufman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gil", |
|
"middle": [], |
|
"last": "Levi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Hassner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lior", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1612.06950" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dotan Kaufman, Gil Levi, Tal Hassner, and Lior Wolf. 2016. Temporal tessellation for video annotation and summarization. arXiv preprint arXiv:1612.06950.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Generating video description using sequence-to-sequence model with temporal attention", |
|
"authors": [ |
|
{ |
|
"first": "Natsuda", |
|
"middle": [], |
|
"last": "Laokulrat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sang", |
|
"middle": [], |
|
"last": "Phan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noriki", |
|
"middle": [], |
|
"last": "Nishida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Shu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yo", |
|
"middle": [], |
|
"last": "Ehara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoaki", |
|
"middle": [], |
|
"last": "Okazaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideki", |
|
"middle": [], |
|
"last": "Nakayama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Natsuda Laokulrat, Sang Phan, Noriki Nishida, Raphael Shu, Yo Ehara, Naoaki Okazaki, Yusuke Miyao, and Hideki Nakayama. 2016. Generating video description using sequence-to-sequence model with temporal attention.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Summarization-based video caption via deep neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Guang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shubo", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yahong", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 23rd ACM international conference on Multimedia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1191--1194", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guang Li, Shubo Ma, and Yahong Han. 2015. Summarization-based video caption via deep neural networks. In Proceedings of the 23rd ACM interna- tional conference on Multimedia. ACM, pages 1191- 1194.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Microsoft coco: Common objects in context", |
|
"authors": [ |
|
{ |
|
"first": "Tsung-Yi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Maire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Belongie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Hays", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pietro", |
|
"middle": [], |
|
"last": "Perona", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deva", |
|
"middle": [], |
|
"last": "Ramanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C Lawrence", |
|
"middle": [], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "European Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--755", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European Confer- ence on Computer Vision. Springer, pages 740-755.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Flow graph corpus from recipe texts", |
|
"authors": [ |
|
{ |
|
"first": "Shinsuke", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hirokuni", |
|
"middle": [], |
|
"last": "Maeta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoko", |
|
"middle": [], |
|
"last": "Yamakata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tetsuro", |
|
"middle": [], |
|
"last": "Sasada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shinsuke Mori, Hirokuni Maeta, Yoko Yamakata, and Tetsuro Sasada. 2014. Flow graph corpus from recipe texts. In Proceedings of the Ninth International Conference on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Pointwise prediction for robust, adaptable japanese morphological analysis", |
|
"authors": [ |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yosuke", |
|
"middle": [], |
|
"last": "Nakata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shinsuke", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "529--533", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. pages 529-533.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics. Association for Computational Linguistics, pages 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Faster r-cnn: Towards real-time object detection with region proposal networks", |
|
"authors": [ |
|
{ |

"first": "Shaoqing", |

"middle": [], |

"last": "Ren", |

"suffix": "" |

}, |

{ |

"first": "Kaiming", |

"middle": [], |

"last": "He", |

"suffix": "" |

}, |

{ |

"first": "Ross", |

"middle": [], |

"last": "Girshick", |

"suffix": "" |

}, |

{ |

"first": "Jian", |

"middle": [], |

"last": "Sun", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--99", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time ob- ject detection with region proposal networks. In Advances in neural information processing systems. pages 91-99.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Niket Tandon, and Bernt Schiele", |
|
"authors": [ |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anna Rohrbach, Marcus Rohrbach, Niket Tandon, and Bernt Schiele. 2015. A dataset for movie description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Translating video content to natural language descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Thater", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the IEEE International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "433--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcus Rohrbach, Wei Qiu, Ivan Titov, Stefan Thater, Manfred Pinkal, and Bernt Schiele. 2013. Translat- ing video content to natural language descriptions. In Proceedings of the IEEE International Conference on Computer Vision. pages 433-440.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Named entity recognizer trainable from partially annotated data", |
|
"authors": [ |
|
{ |
|
"first": "Tetsuro", |
|
"middle": [], |
|
"last": "Sasada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shinsuke", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tatsuya", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoko", |
|
"middle": [], |
|
"last": "Yamakata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Eleventh International Conference Pacific Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tetsuro Sasada, Shinsuke Mori, Tatsuya Kawahara, and Yoko Yamakata. 2015. Named entity recognizer trainable from partially annotated data. In Pro- ceedings of the Eleventh International Conference Pacific Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Frameand segment-level features and candidate pool evaluation for video caption generation", |
|
"authors": [ |
|
{ |
|
"first": "Rakshith", |
|
"middle": [], |
|
"last": "Shetty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorma", |
|
"middle": [], |
|
"last": "Laaksonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 ACM on Multimedia Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1073--1076", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rakshith Shetty and Jorma Laaksonen. 2016. Frame- and segment-level features and candidate pool eval- uation for video caption generation. In Proceedings of the 2016 ACM on Multimedia Conference. ACM, pages 1073-1076.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Se- quence to sequence learning with neural networks. In Advances in neural information processing sys- tems. pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Using descriptive video services to create a large data source for video annotation research", |
|
"authors": [ |
|
{ |
|
"first": "Atousa", |
|
"middle": [], |
|
"last": "Torabi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Pal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1503.01070" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atousa Torabi, Christopher Pal, Hugo Larochelle, and Aaron Courville. 2015. Using descriptive video ser- vices to create a large data source for video annota- tion research. arXiv preprint arXiv:1503.01070.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Sequence to sequence-video to text", |
|
"authors": [ |
|
{ |
|
"first": "Subhashini", |
|
"middle": [], |
|
"last": "Venugopalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Donahue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4534--4542", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence-video to text. In Proceedings of the IEEE International Con- ference on Computer Vision. pages 4534-4542.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Show, attend and tell: Neural image caption generation with visual attention", |
|
"authors": [ |
|
{ |
|
"first": "Kelvin", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Aaron", |

"middle": [ |

"C" |

], |

"last": "Courville", |

"suffix": "" |

}, |

{ |

"first": "Ruslan", |

"middle": [], |

"last": "Salakhutdinov", |

"suffix": "" |

}, |

{ |

"first": "Richard", |

"middle": [ |

"S" |

], |

"last": "Zemel", |

"suffix": "" |

}, |

{ |

"first": "Yoshua", |

"middle": [], |

"last": "Bengio", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "ICML", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "77--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. In ICML. volume 14, pages 77-81.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Corpus-guided sentence generation of natural images", |
|
"authors": [ |
|
{ |
|
"first": "Yezhou", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ching", |
|
"middle": [ |
|
"Lik" |
|
], |
|
"last": "Teo", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Hal", |

"middle": [], |

"last": "Daum\u00e9", |

"suffix": "III" |

}, |
|
{ |
|
"first": "Yiannis", |
|
"middle": [], |
|
"last": "Aloimonos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "444--454", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yezhou Yang, Ching Lik Teo, Hal Daum\u00e9 III, and Yian- nis Aloimonos. 2011. Corpus-guided sentence gen- eration of natural images. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, pages 444-454.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Task overview.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "\u5305\u4e01/T \u3067 \u8089/F \u3092 \u5207\u308b/AcCut/Ac meat/F with a knife/T. Score: 0.5 Searching the recipe sentence sequence. (E)Object recognition by Faster R-CNN (A) + r-NE recognition(B)Generating r-NE sequence (C(0.6), \u8089/F(0.4) (cutting board) (meat)Figure 2: Overview of the proposed method.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "An example of object recognition by Faster R-CNN.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Ac</td><td>Action by the chef</td></tr><tr><td>Af</td><td>Action by foods</td></tr><tr><td>Sf</td><td>State of foods</td></tr><tr><td>St</td><td>State of tools</td></tr><tr><td colspan=\"2\">one sentence for a video clip. Studies on the au-</td></tr><tr><td colspan=\"2\">tomatic caption generation of documents consist-</td></tr><tr><td colspan=\"2\">ing of multiple sentences like procedural text do</td></tr><tr><td colspan=\"2\">not attract much attention as far as we know. One</td></tr><tr><td colspan=\"2\">similar study is done by</td></tr></table>", |
|
"text": "Definition of r-NE tags. r-NE tag meaning", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td colspan=\"2\">BLEU</td><td/></tr><tr><td colspan=\"3\">Configuration N = 1 N = 2 N = 3 N = 4</td></tr><tr><td>w/o P ({e}) \u2212l 22.73 13.13</td><td>7.48</td><td>4.11</td></tr><tr><td>with P ({e}) \u2212l 26.73 15.42</td><td>9.09</td><td>5.50</td></tr></table>", |
|
"text": "The BLEU scores.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td/><td>\u30d5\u30e9\u30a4\u30d1\u30f3\u306b\u71b1\u3092\u5165\u308c\u3001\u7092\u3081\u3092\u7092</td><td/></tr><tr><td/><td>\u3081\u308b\u3002</td><td/></tr><tr><td/><td>Heat the pan, Stir-fry</td><td/></tr><tr><td/><td>something fried.</td><td/></tr><tr><td/><td>\u30dc\u30a6\u30eb\u30922\u3064\u7528\u610f\u3002</td><td/></tr><tr><td/><td>Prepare two bowls.</td><td/></tr><tr><td>\u7c98\u308a\u6c17\u304c\u3059\u3053\u3057\u3067\u308b\u307e\u3067\u3044\u305f\u3081\u3066\u3001\u5473\u3092\u3064\u3051\u308b Saute\u0301 it until it gets a little sticky, season it.</td><td>\u5305\u4e01\u3067\u3059\u3092\u4f7f\u3001 Use the knife.</td><td/></tr><tr><td>\u5375\u3092\u3068\u3044\u3066\u30011\u3092\u3044\u308c\u3066\u3001\u30d5\u30e9\u30a4\u30d1\u30f3\u3092\u30af\u30eb\u3063\u3066\u3057\u3066\u3001 \u307e\u304f\u3002 Beat an egg, add 1 to the pan and start rolling it</td><td>\u6cb9\u3092\u3057\u3044\u3066\u7092\u3081\u308b Saute\u0301 them after pour the oil in the pan.</td><td>\u5375\u306f\u307b\u3050\u3057\u3066\u304a\u304f\u3002</td></tr><tr><td>by the pan.</td><td>\u633d\u8089\u3092\u7092\u3081\u308b\u3002</td><td>Beat an egg.</td></tr><tr><td>\u304a\u76bf\u306b\u3082\u308a\u3064\u3051\u3066\u3067\u304d\u3042\u304c\u308a\u3043\u3002 Serve the dish. It's ready to eat.</td><td>Stir-fry minced meat. \u304a\u597d\u307f\u3067\u3067\u308b\u3002 As you like, get out.</td><td>\u30d5\u30e9\u30a4\u30d1\u30f3\u306b\u8c46\u8150\u3092\u5165\u308c\u7092\u3081\u308b\u3002 Put tofu in frying pan and stir fry</td></tr></table>", |
|
"text": "\u30df\u30f3\u30c1\u3092\u3044\u305f\u3081\u3066\u3001\u8272\u304c\u304b\u308f\u3063\u305f\u3089\u3001 Saute\u0301 the meat mince until the color changes, \u4ed6\u306e\u91ce\u83dc\u3082\u5165\u308c\u3066\u3044\u305f\u3081\u3066\u3001 put another vegetable and saute\u0301 it. \u706b\u304c\u901a\u3063\u305f\u3089\u5c0f\u9ea6\u7c89\u3092\u3044\u308c\u3066\u3001 After heating them well, put the flour in the pan. (\u7802\u7cd6\u3092\u4f7f\u3046\u65b9\u306f\u3001\u3053\u3053\u3067\u4e00\u7dd2\u306b\u3002 If you like sugar, please add it. \u597d\u307f\u3067\u30b3\u30b7\u30e7\u30a6\u3092\u52a0\u3048\u308b\u3002 If you like pepper, please add it. \u304a\u597d\u307f\u3067\u3067\u3092\u304b\u3051\u3066\u3082\u308b\u3002 (impossible to translate into English.) \u30ad\u30e3\u30d9\u30c4\u306f\u3056\u304f\u5207\u308a\u3002 Cut the cabbage into pieces.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |