{ "paper_id": "2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:22:05.542339Z" }, "title": "PolyphraZ : a tool for the quantitative and subjective evaluation of parallel corpora", "authors": [ { "first": "Najeh", "middle": [], "last": "Hajlaoui", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Joseph Fourier", "location": { "postBox": "BP 53", "postCode": "38041", "settlement": "Grenoble", "region": "IMAG", "country": "France" } }, "email": "najeh.hajlaoui@imag.fr" }, { "first": "Christian", "middle": [], "last": "Boitet", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Joseph Fourier", "location": { "postBox": "BP 53", "postCode": "38041", "settlement": "Grenoble", "region": "IMAG", "country": "France" } }, "email": "christian.boitet@imag.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The PolyphraZ tool is under construction in the framework of the TraCorpEx project (Translation of Corpora of Examples), for the management of parallel multilingual corpora (coding, format, correspondence). It is a software platform allowing the preparation and handling of parallel corpora (languages, codings...), parallel presentation, and the addition of new languages to existing corpora by calling several MT systems and letting human translators produce the final reference translations using a web-based editor. It integrates the computation of some objective evaluation metrics (NIST, BLEU), and enables subjective evaluations thanks to parallel presentations and formatting based on distance computations between sentences (at several levels). 
In the future, PolyphraZ should also support versioning and provide feedback to developers of the MT systems used: unknown words, badly translated words, and comparative presentations of the outputs of the various systems.", "pdf_parse": { "paper_id": "2004", "_pdf_hash": "", "abstract": [ { "text": "The PolyphraZ tool is under construction in the framework of the TraCorpEx project (Translation of Corpora of Examples), for the management of parallel multilingual corpora (coding, format, correspondence). It is a software platform allowing the preparation and handling of parallel corpora (languages, codings...), parallel presentation, and the addition of new languages to existing corpora by calling several MT systems and letting human translators produce the final reference translations using a web-based editor. It integrates the computation of some objective evaluation metrics (NIST, BLEU), and enables subjective evaluations thanks to parallel presentations and formatting based on distance computations between sentences (at several levels). In the future, PolyphraZ should also support versioning and provide feedback to developers of the MT systems used: unknown words, badly translated words, and comparative presentations of the outputs of the various systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We work on several parallel corpora such as the BTEC corpus and the Tanaka corpus, but we lack effective tools for the management of these corpora, such as a web platform allowing import, export, preparation (coding, formats...) and processing (translation, revision, editing\u2026) of multilingual corpora. The BTEC comprises about 162,320 sentences (about 4,000 standard \"translator's pages\" 1 ) in Japanese, Chinese, English and Korean, and fewer in other CSTAR languages. Distribution of this corpus is restricted to ATR partners in CSTAR (Consortium for Speech Translation Advanced Research). 
Our practical goal is to produce a French version of the BTEC with sentences of the same quality. Tools such as Excel, TextEdit or BBEdit allow neither sharing such corpora on the Web nor editing and visualizing parallel sentences. During a stay at ATR, the second author translated the complete BTEC, submitting 163 files of 1000 sentences to Systran Premium v.4, adequately parametrized, and revised the first 1000 sentences, equivalent to 24 standard translator pages (1 page = 250 words or 1400 characters, corresponding to A4, Times 12, double-spaced), in 6:08 hours, or about 15 min per page, under TextEdit, a standard text editor, manually aligning the source and target files. In a later experiment, he did the same on 510 sentences while three other French native speakers translated them by hand, at a rate of 1 hour per page each (the usual figure in professional translation). That shows that using MT output really speeds up the process of producing (good) reference translations in a new language, but that sharing the workload is still a necessity (about 5000 hours for the whole BTEC with no machine help, and still about 1000 hours using MT outputs as suggestions 2 ). He also tried to use Excel on a larger batch of sentences (20000 sentences, or 480 pages), but, as shown in figure 1, the gain was quite small, although alignment is automatic. The reason is that saving takes too much time on such a large file. There is a dilemma if translation is performed on parts of a corpus: on the one hand, files should be large so that global changes, which are very frequent and productive, can concern as many sentences as possible; on the other hand, files should be small, so that each can be translated in a reasonably short time. Our goal, then, is to develop an efficient tool to expand a multilingual corpus into other languages as a whole, but in a distributed way, through the cooperation of several translators working through the Web. 
We will call such a corpus a multilingual \"polyphrase\" memory (MPM), introducing the term \"polyphrase\" instead of \"sentence\" because there may be several \"proposals\" (or \"paraphrases\") in each language for one \"original\" sentence in one language. In late 2003, we started work on PolyphraZ, a web server for displaying, translating and editing MPMs on the Web. By last July, the first two functionalities had been implemented, and they have been used experimentally in the context of the CSTAR MT evaluation campaign. We now present in more detail the context and the objectives of the TraCorpEx project and the PolyphraZ tool. We then describe the architecture of PolyphraZ, as well as intended scenarios of use and types of users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We started the TraCorpEx project because we realized that very similar tasks had to be undertaken in 3 other projects: the Papillon project (Papillon) of cooperative construction of a large multilingual lexical data base on the Web, the C-STAR III project of translation of spoken dialogues, a French and Tunisian project (Hajlaoui, Boitet, 2003b), the UNL project (UNL) of communication and multilingual information system, and some PhD projects.", "cite_spans": [ { "start": 322, "end": 347, "text": "(Hajlaoui, Boitet, 2003b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Context", "sec_num": "1.1." }, { "text": "To get our hands on concrete data, we initially concentrate on 2 parallel corpora, structured differently. Later, we will consider corpora from the UNL project, where each document is a multilingual file containing, for each sentence, its text in the source language, a UNL graph, the results of deconversions into a certain number of languages, and possibly their revisions, or direct manual translations. All these parallel corpora are aligned at the level of sentences. 
It would be interesting to make it possible to go down to a finer level, such as segments and words. In other corpora, we will be obliged to go up to the level of paragraphs, because the sentences are not perfectly aligned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current situation", "sec_num": "1.2." }, { "text": "The first problem raised by the available parallel corpora is that there is no tool making it possible to visualize their contents at a glance, sentence by sentence, nor to show the fine correspondences between subsentential segments. In addition, in the case of UNL documents, we cannot visualize at the same time an utterance in several languages and the corresponding UNL graph. Lastly, it is never possible to see the successive versions at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Current situation", "sec_num": "1.2." }, { "text": "The objectives of the TraCorpEx project are as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detailed objectives", "sec_num": "1.3." }, { "text": "Starting from parallel corpora, we want to add one or more languages (those of the Papillon project for the Tanaka corpus, French and Arabic for the BTEC corpus). The final results must be of high quality, to be usable as \"reference translations\". Hence, humans must participate in the translation work, and we have to develop some kind of translation aids (TA), sharable on the Web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Addition of new languages (horizontal expansion)", "sec_num": "1.3.1." }, { "text": "A subgoal, then, is to develop a web-enabled platform to import corpora and put them in some normalized form, to translate them using various translation aids (multilingual editor, translation memories, dictionaries, and remote MT systems), to visualize and evaluate them, and to export the results in various formats and codings. 
To encourage MT developers to give free access to some versions of their products, it is also necessary to offer them various kinds of feedback. That is the PolyphraZ project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building a software platform for translation", "sec_num": "1.3.2." }, { "text": "A third goal of TraCorpEx is to research and implement techniques to enlarge an MPM by creating new polyphrases. Interesting results have already been obtained by Y. Lepage, using a combination of analogical computing and n-gram filtering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Enlarging parallel corpora (vertical expansion)", "sec_num": "1.3.3." }, { "text": "PolyphraZ should also make it possible to evaluate automatic translators with automatic methods such as NIST, BLEU and PER, and to use this possibility in CSTAR to evaluate the Chinese-English and Japanese-English translations. Evaluating the results of various MT systems will also enable us to determine the \"best\" (or least bad!) translation, which can be proposed to a contributor as a starting point for revision. The quality of the translations should also be evaluated using calculations of distances between sentences and reverse translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional goal: evaluation", "sec_num": "2.1." }, { "text": "We also want to give feedback to the developers of the systems used (unknown words, badly translated sentences...), and to provide a comparative presentation of the various translation systems. 
Taken together, the objectives of this project led us to propose interactive Web interfaces allowing us to choose, use, compare and publish machine translations for several language pairs, and to contribute to the improvement of the results by sending feedback to the developers of these systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feedbacks to developers of MT systems", "sec_num": "2.2." }, { "text": "We follow the software architecture of the Papillon platform, and reuse certain techniques of parallel visualization of translation memories (PhD thesis of Ch. Chenon). We classify the objects to handle into three types:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General architecture", "sec_num": "2.3." }, { "text": "\u2022 Raw corpus sources", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General architecture", "sec_num": "2.3." }, { "text": "\u2022 Sources transformed into our CXM (Common eXample Markup) XML format (coded in UTF-8), for visualization \"just as they are\", and then into the CPXM format for parallel visualization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General architecture", "sec_num": "2.3." }, { "text": "\u2022 MPM: multilingual polyphrase memory ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General architecture", "sec_num": "2.3." }, { "text": "We distinguish four principal users: the preparer, the reader (\"normal\" user), the translator-posteditor and the manager.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Users of PolyphraZ", "sec_num": "2.4." }, { "text": "The preparer's role consists in calling translation systems and parameterizing them as well as possible, which requires a certain linguistic ability and can involve delicate work: comparing the results obtained with various parameter settings, segmenting the corpus into \"blocks\" corresponding to various \"optimal\" parameter settings, etc. 
The preparer can launch automatic evaluations (NIST, BLEU...) on translation results, and the computation of distances between sentences (translation results and/or reverse translations). The mixed character and word distance computation produces, in addition to a value, an XML string from which a \"track changes\" presentation can be generated. He can also set the parameters determining \"the best\" suggestion among various translation candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The preparer", "sec_num": "2.4.1." }, { "text": "A reader can visualize the data (the original, various translations, and distances between the character strings), but is not allowed to edit the translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The reader (normal user)", "sec_num": "2.4.2." }, { "text": "The translator-posteditor is a contributor who translates from scratch or revises proposed translations (in general, MT results or sentences retrieved from translation memories). There is an editable area to modify the active sentence. One can also ask for global modifications (e.g. \"SVP\" changed into \"s'il vous pla\u00eet\" in transcribed spoken utterances) and correct or supplement the local dictionary attached to the MPM. The system uses the reference sentences already produced as a translation memory. PolyphraZ will thus also be a translation aid system, limited to the translation of sets of sentences (or titles).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The translator-posteditor", "sec_num": "2.4.3." }, { "text": "The last type of user is the manager, who will request from PolyphraZ \"feedback\" for the developers of the MT systems used. A manager can himself be a developer of an MT system. 
Thanks to distance calculations and an adapted presentation, he can draw up a list of unknown words and of words badly translated by each system, validate it, propose translation suggestions for these words taken from the \"reference\" translations obtained after human revision, and provide a presentation of the evaluations and comparisons between the results of the various systems used or of their various parameter settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The manager", "sec_num": "2.4.4." }, { "text": "PolyphraZ is multi-platform (Mac OS X, Unix, Linux, Windows), being programmed in standard Java under the Enhydra development environment used for the dynamic, multilingual Papillon web site.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "2.5." }, { "text": "The following diagram synthesizes possible uses. When we import a corpus, we transform it into a single coding (UTF-8) and a single XML format, CXM, similar to the CDM (Common Dictionary Markup) format of the Papillon project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scenarios of use", "sec_num": "3." }, { "text": "A second step consists in transforming all the CXM files corresponding to a given multilingual parallel corpus into a file in the CPXM format (see appendix 2). In this format, we introduce a new XML element, which contains a set of monolingual components, each component possibly containing one or more proposals. The MPM format is still undergoing changes. The current version is given in appendix 3. The current version of PolyphraZ is not complete, but several functionalities are already usable, and accessible on the [TraCorpEx] website.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CPXM.dtd (Common Parallel eXample Markup)", "sec_num": "3.2." 
}, { "text": "PolyphraZ proposes an option common to the three preceding stages, which consists in visualizing in a parallel way the \"columns\" of the polyphrases, to allow for manual (subjective) comparison of the translations. It is actually useful for readers, translators, revisers, and managers. Figure 6 shows an example taken from the BTEC corpus. At the moment, the width of the columns is fixed, but it should be controllable by the user in the future, as well as the display of evaluations and distance computations. ", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 294, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Parallel visualization", "sec_num": "3.4." }, { "text": "We have programmed and integrated in PolyphraZ three evaluation methods (NIST, BLEU, and distance computation). NIST and BLEU are well known. As far as distances are concerned, we use a combination of two edit distances, one based on characters and the other on words. The edit distance between two strings of atoms (characters or words) is the minimal number of deletions, insertions or replacements of atoms necessary to transform one string into the other. In each case, the set of atoms is the union of the atoms in the 2 strings. The cost of inserting, deleting and exchanging characters is defined beforehand by a table or by 3 functions. The cost of inserting or deleting a word is its character distance to the empty word, and the cost of exchanging 2 words is their character-based edit distance. At any level, the edit distance between two strings x = a1\u2026am and y = b1\u2026bn is D(m, n), defined by the recurrence reproduced below. The mixed distance is then defined by D = a Dchar + (1-a) Dword, with 0 \u2264 a \u2264 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of translation results", "sec_num": "3.5." 
}, { "text": "D(0, 0) = 0 ; D(i+1, j+1) = min(D(i+1, j) + C(INS(bj+1)), D(i, j+1) + C(DEL(ai+1)), D(i, j) + C(SUB(ai+1, bj+1)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of translation results", "sec_num": "3.5." }, { "text": "For the moment, we use the well-known dynamic programming algorithm of Wagner and Fischer (Wagner & Fischer, 1974), but it will be easy to replace it by more efficient ones in the future.", "cite_spans": [ { "start": 90, "end": 114, "text": "(Wagner & Fischer, 1974)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of translation results", "sec_num": "3.5." }, { "text": "Prototypes of two interfaces have been produced. The first interface is for preparation; it also computes distances between English original sentences, so that the document can be used as a translation memory in the following step. The second interface is for human revision of the best suggestion using an English zone: we can correct words or expressions and use the translation memory, which is in this case the multilingual document itself. A third interface will be built for the preparation of feedback to the developers of the MT systems used. It will make it possible to compute and validate the lists of words unknown to, or badly translated by, each system, and to provide translation suggestions from the \"reference\" translations obtained after human revision. It will also provide comparisons between the various systems used, again thanks to the computation of distances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interfaces", "sec_num": "3.6." }, { "text": "The external and middle levels of PolyphraZ are already used. They allow us to put imported multilingual corpora of parallel sentences into a common format and encoding (CXM), then to transform a whole corpus into one or more files in the CPXM format, and to visualize their content on the Web. The central level, the MPM (Multilingual Polyphrase Memory), is almost completed. 
It will also support versioning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "In the future, we plan to use the MPM form not only to extend corpora to new languages, but also as a \"pivot\", to establish the correspondence between monolingual structured documents that correspond to each other even if they are not perfect and complete mutual translations, or, if they are complete mutual translations, without imposing a strict alignment of sentences, paragraphs, sections, etc. We are also studying how to integrate into an MPM structure \"generators\" specifying a class of sentences (automata for messages with variables and variants, regular expressions for the IF of CSTAR, etc.), and to use them to extend an MPM \"in width\" (addition of new languages), but also \"in height\", by the automatic creation of new utterances, natural and/or formal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "Note that they cannot be used alone, by a monolingual posteditor: they must be shown with the original sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": " proposal-id CDATA #REQUIRED> Appendix 2 : CPXM.dtd Appendix 3 : MPM.dtd", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendices", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Gestion de l'\u00e9volution non centralis\u00e9e de documents parall\u00e8les multilingues", "authors": [ { "first": "A", "middle": [], "last": "Assimi", "suffix": "" }, { "first": "", "middle": [], "last": "Assimi", "suffix": "" } ], "year": 2000, "venue": "", "volume": "200", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Assimi (Assimi, 2000). 
Gestion de l'\u00e9volution non centralis\u00e9e de documents parall\u00e8les multilingues, Nouvelle th\u00e8se, UJF, Grenoble, 31/10/00, 200 p.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Management of Non-Centralized Evolution of Parallel Multilingual Documents", "authors": [], "year": 2001, "venue": "Proc. Internationalization Track, 10th International World Wide Web Conference", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A-B. Assimi & C. Boitet (Assimi & Boitet, 2001) Management of Non-Centralized Evolution of Parallel Multilingual Documents. Proc. Internationalization Track, 10th International World Wide Web Conference, Hong Kong, May 1-5, 2001, 7 p.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Approaches to enlarge bilingual corpora of example sentences to more languages, Papillon-03 seminar", "authors": [ { "first": "", "middle": [], "last": "Ch", "suffix": "" }, { "first": "", "middle": [], "last": "Boitet", "suffix": "" }, { "first": "", "middle": [], "last": "Boitet", "suffix": "" } ], "year": 2003, "venue": "", "volume": "12", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ch. Boitet (Boitet, 2003) Approaches to enlarge bilingual corpora of example sentences to more languages, Papillon-03 seminar, Sapporo, 3-5 July 2003, 12 p.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Coedition to share text revision across languages", "authors": [], "year": 2002, "venue": "Proc. COLING-02 WS on MT", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ch. Boitet & Tsai W.-J (Boitet & Tsai 2002). Coedition to share text revision across languages. Proc. 
COLING-02 WS on MT, Taipei, 1/9/2002, 8 p.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "R\u00e9utilisation de traducteurs gratuits pour d\u00e9velopper des syst\u00e8mes multilingues, RECITAL", "authors": [ { "first": "H", "middle": [], "last": "Vo-Trung", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Vo-trung (Vo-trung, 2004) R\u00e9utilisation de traducteurs gratuits pour d\u00e9velopper des syst\u00e8mes multilingues, RECITAL 2004, April 2004, F\u00e8s, Morocco.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A \"pivot\" XML-based architecture for multilingual, multiversion documents : parallel monolingual documents aligned through a central correspondence descriptor and possible use of UNL", "authors": [ { "first": "N", "middle": [], "last": "Hajlaoui", "suffix": "" } ], "year": 2003, "venue": "Ch. Boitet (Hajlaoui & Boitet", "volume": "", "issue": "", "pages": "2--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Hajlaoui, Ch. Boitet (Hajlaoui & Boitet, 2003a), A \"pivot\" XML-based architecture for multilingual, multiversion documents : parallel monolingual documents aligned through a central correspondence descriptor and possible use of UNL, Convergences'03, Alexandria, 2-6 December 2003.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mod\u00e9lisation de la production de phrases", "authors": [ { "first": "N", "middle": [], "last": "Hajlaoui", "suffix": "" }, { "first": ";", "middle": [], "last": "Clips", "suffix": "" }, { "first": "Grenoble", "middle": [], "last": "Ujf", "suffix": "" }, { "first": "", "middle": [], "last": "Et", "suffix": "" }, { "first": "", "middle": [], "last": "De Sousse", "suffix": "" } ], "year": 2003, "venue": "Ch. Boitet (Hajlaoui & Boitet", "volume": "25", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Hajlaoui, Ch. 
Boitet (Hajlaoui & Boitet, 2003b), Mod\u00e9lisation de la production de phrases, projet franco-tunisien entre l'\u00e9quipe GETA, CLIPS, UJF, Grenoble et universit\u00e9 de Sousse, Tunisie, 25 p.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Gestion des versions des composants \u00e9lectroniques virtuels", "authors": [ { "first": "N", "middle": [], "last": "Hajlaoui", "suffix": "" } ], "year": 2002, "venue": "", "volume": "80", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Hajlaoui (2002) Gestion des versions des composants \u00e9lectroniques virtuels. Rapport de DEA, CSI, INPG, June 2002, 80 p.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The String-to-String Correction Problem", "authors": [ { "first": "R", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "& M", "middle": [], "last": "Fischer", "suffix": "" } ], "year": 1974, "venue": "Journal of the Association for Computing Machinery", "volume": "21", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Wagner & M. Fischer (Wagner & Fischer, 1974) The String-to-String Correction Problem. Journal of the Association for Computing Machinery, Vol. 21, No. 1, January 1974.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "SWIIVRE a web site for the Initiation, Information, Validation, Research and Experimentation on UNL", "authors": [], "year": 2001, "venue": "Proc. First UNL Open Conference -Building Global Knowledge with UNL", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.-J. Tsai (Tsai, 2001) SWIIVRE a web site for the Initiation, Information, Validation, Research and Experimentation on UNL. Proc. First UNL Open Conference -Building Global Knowledge with UNL, Suzhou, China, 18-20 Nov. 
2001, 8 p.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Projet PAPILLON de construction coop\u00e9rative d'une base lexicale multilingue et de construction de dictionnaires", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Projet PAPILLON de construction coop\u00e9rative d'une base lexicale multilingue et de construction de dictionnaires, http://www.papillon- dictionary.org/", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "revision times with TextEdit and Excel", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "objects of the PolyphraZ platform", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Import into CXM (Common eXample Markup) example XML file conforming to the CXM.dtd", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "logical view of a MPM", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "parallel visualisation of the BTEC", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": ", j)+C(INS(b j+1 )), D(i, j+1)+C(DES(a i+1 )), D(i, j)+C(SUB(a i+1 , b j+1 ))", "uris": null, "num": null }, "FIGREF6": { "type_str": "figure", "text": "Interface 1 \"preparation\"", "uris": null, "num": null }, "FIGREF7": { "type_str": "figure", "text": "interface 2 \"revision\"", "uris": null, "num": null }, "TABREF2": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
[flattened architecture diagram: external resources in various initial versions, formats and codings (BTEC-JPN, BTEC-EN, BTEC-CH, CSTAR and Tanaka corpora; e.g. BTEC-JPN, format = text, coding = EUC) are imported over the Internet into the single XML format CXM (Common eXample Markup, DTD = CXM, coding = UTF-8); corpora in CXM are transformed into CPXM (Common Parallel eXample Markup) for parallel visualisation, then used to initialise the MPM (Multilingual Polyphrase Memory: a corpus as a set of polyphrases, with correspondence and versioning) and its LD (Local Dictionary), locally on the server and on the Web (Papillon); the MPM supports translation, distance calculation, automatic evaluation and export, alongside basic tools such as TextEdit and BBEdit] " } } } }
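Section 3.5 describes the Wagner & Fischer dynamic programming over atoms (characters or words) and the mixed distance D = a Dchar + (1-a) Dword, but gives no code. The sketch below is an illustration under stated assumptions, not the actual PolyphraZ implementation: function names are ours, and unit character costs are assumed (the paper allows a cost table or 3 cost functions).

```python
# Illustrative sketch of the mixed character/word edit distance of section 3.5.
# Assumptions (not from the paper): unit character costs, these function names.

def edit_distance(x, y, ins=lambda a: 1, dele=lambda a: 1,
                  sub=lambda a, b: 0 if a == b else 1):
    """Wagner-Fischer dynamic programming over two sequences of atoms."""
    m, n = len(x), len(y)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):                      # delete all of x
        D[i][0] = D[i - 1][0] + dele(x[i - 1])
    for j in range(1, n + 1):                      # insert all of y
        D[0][j] = D[0][j - 1] + ins(y[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(
                D[i][j - 1] + ins(y[j - 1]),               # insert b_j
                D[i - 1][j] + dele(x[i - 1]),              # delete a_i
                D[i - 1][j - 1] + sub(x[i - 1], y[j - 1])  # substitute a_i -> b_j
            )
    return D[m][n]

def char_distance(s, t):
    """Character-level edit distance (Dchar), unit costs."""
    return edit_distance(list(s), list(t))

def word_distance(s, t):
    """Word-level edit distance (Dword): inserting or deleting a word costs its
    character distance to the empty word; exchanging two words costs their
    character-based edit distance, as specified in the paper."""
    return edit_distance(s.split(), t.split(),
                         ins=lambda w: char_distance(w, ""),
                         dele=lambda w: char_distance(w, ""),
                         sub=char_distance)

def mixed_distance(s, t, a=0.5):
    """Mixed distance D = a*Dchar + (1-a)*Dword, 0 <= a <= 1."""
    assert 0 <= a <= 1
    return a * char_distance(s, t) + (1 - a) * word_distance(s, t)
```

With a = 1 this reduces to the pure character distance and with a = 0 to the pure word distance; intermediate values weight the two levels as in the paper's formula.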